Reminds me of an article I read a few years ago about an Israeli startup aiming to use AI to analyse images of people and detect what they are likely to work as. Doctor, teacher, terrorist, etc.
Basically phrenology. They were making a digital racist uncle.
If the model is accurate though, I'd be okay with it. I understand it'll never be perfect, and laws should be put in place banning people from being arrested or interrogated based on that kind of suspicion alone, but still.
Except that is literally judging you by how you look. And you don't think actual bad actors will try to trick the system?
Yes, yes. We do this already and we are probably safer for it. If we had a model, we could point to evidence that, in general, this helps. It should never be admissible evidence in a court, but it would help police figure out who to keep eyes on.
Well, sorry if it's something I'm super skeptical of as a minority.