Thanks to rapid advances in generative AI and a glut of training data created by human actors and fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant ones. They also do a better job of matching facial expressions—the tiny movements that can speak for us without words.
But this technological progress also signals a much larger social and cultural shift. Increasingly, what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming harder and harder to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences.
“I think we might just have to say goodbye to finding out about the truth in a quick way,” says Sandra Wachter, a professor at the Oxford Internet Institute, who researches the legal and ethical implications of AI. “The idea that you can just quickly Google something and know what’s fact and what’s fiction—I don’t think it works like that anymore.”
Lol, I see how you could read it that way. I meant rather that I thought I did understand the part where you claimed free will existed—just not the argument based on it.
While superdeterminism is a valid solution to both Bell's paradox and this result, it isn't a factor in the Frauchiger-Renner paradox, so there must be something else going on, at the very least in addition to it (which then complies less well with Occam's razor).
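For context on the Bell part of that claim, here is a small illustrative sketch (my own, not from the thread) of the standard CHSH form of Bell's inequality: any local hidden-variable account (absent loopholes like superdeterminism) must satisfy |S| ≤ 2, while quantum mechanics for a spin singlet predicts E(a, b) = −cos(a − b) and reaches 2√2 at suitably chosen angles.

```python
import math

def singlet_correlation(a, b):
    """Quantum prediction E(a, b) = -cos(a - b) for the spin-singlet state."""
    return -math.cos(a - b)

# Standard angle choices that maximize the quantum violation of CHSH.
a, a_alt = 0.0, math.pi / 2              # Alice's two measurement settings
b, b_alt = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement settings

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
S = (singlet_correlation(a, b) - singlet_correlation(a, b_alt)
     + singlet_correlation(a_alt, b) + singlet_correlation(a_alt, b_alt))

print(abs(S))  # ≈ 2.828, i.e. 2*sqrt(2), exceeding the local bound of 2
```

Superdeterminism dodges this by denying that the measurement settings `a` and `b` can be chosen independently of the hidden variables, which is where the free-will question enters.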
And it would be pretty superfluous for our universe to behave the way it does around interactions and measurements if free will didn’t exist.