A U.K. woman was photographed standing in front of a mirror in which her reflections didn’t match, but not because of a glitch in the Matrix. Instead, it’s a simple iPhone computational photography mistake.
This was important in the Kyle Rittenhouse case. The zoomed video’s resolution was interpolated by software. It wasn’t AI per se, but because a jury couldn’t be relied upon to understand a black-box algorithm and its possible artifacts, the zoomed video was disallowed.
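To make the interpolation point concrete: when software zooms past the sensor’s native resolution, the new pixels are computed, not captured. A minimal sketch of bilinear interpolation (one common upscaling method; the actual algorithm used in the case is not specified here) shows that the output contains values that never existed in the source image:

```python
def bilinear_upscale(img, new_h, new_w):
    """Upscale a 2-D grid of brightness values using bilinear interpolation.

    Every output pixel between original sample points is a weighted
    average of its four nearest neighbors - i.e., invented by software.
    """
    h, w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        # Map the output coordinate back into the source grid.
        y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        dy = y - y0
        row = []
        for j in range(new_w):
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            dx = x - x0
            # Blend the four surrounding source pixels.
            val = (img[y0][x0] * (1 - dy) * (1 - dx)
                   + img[y0][x1] * (1 - dy) * dx
                   + img[y1][x0] * dy * (1 - dx)
                   + img[y1][x1] * dy * dx)
            row.append(val)
        out.append(row)
    return out

# A 2x2 "sensor" zoomed to 3x3: the center pixel (15.0) is a value
# the camera never recorded.
zoomed = bilinear_upscale([[0, 10], [20, 30]], 3, 3)
print(zoomed[1][1])  # 15.0
```

The same principle scales up: whether it’s a simple average like this or a learned model, the extra detail in a digital zoom is the software’s guess.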
(This in no way implies that I agree with the court.)
Except it was. All the “AI” junk being hyped and peddled all over the place as a completely new and modern innovation is really just the same old interpolation by software, albeit software fueled by bigger databases and with more computing power thrown at it.
It’s all just flashier autocorrect.
As far as I know, nothing about AI entered into the arguments. No precedents regarding AI could have been set here. Therefore, this case wasn’t about AI per se.
I did bring it up as relevant because, as you say, AI is just an over-hyped black box. But that’s my opinion, with no case law to cite (IANAL). So to say that a court would or should feel that AI and fancy photo editing are the same thing is misleading. I know that wasn’t your point, but it was part of mine.
I watched that whole court exchange live, and it helped the defendant’s case that the judge was computer illiterate.
As it usually does. But the court’s ineptitude should favor the defense. It shouldn’t be an arrow in a prosecutor’s quiver, at least.