Members of SAG-AFTRA and the National Association of Voice Actors united at San Diego Comic-Con to address artificial intelligence and how it can harm creators.
People don’t understand that it’s not AI, it’s machine learning. Imagine someone alone in a room. He takes the experience and knowledge of other people and puts it into his library. When someone asks him for something, he uses what he’s got in his library to answer that person’s question. But he stays in his room; he doesn’t experience life.
These AIs are like that. They feed on human creativity. The human experience of life is what produces creativity. These AIs do not experience life and do not think, so they cannot replace humans.
Not sure if you read the article, but in this specific instance I believe they are denouncing studios’ ability to copy their likeness and voice without their consent. Think deepfakes and simulated voices such as ElevenLabs’ AI voice tool. That is something that is actively being tested, and actors and voice actors want control over how their likeness and voice are used.
I think this is a reasonable and valid concern, and one that should be protected.
I read the article but felt like writing a “generic” comment about AI, as various studios also want to replace writers with AIs. I’ve been thinking about this for a long time.
Ranting about this at length was one of my last posts on Reddit. The whole AI situation feels exactly like COVID, where you had many “expert doctors” doomsaying about how vaccines would be the end of us all.
I’ve seen a few podcasts with industry experts on AI where the guest managed to mention stuff like “to me, it felt sentient” or “in a few years, we will have AGIs that are a hundred times more intelligent than us. Imagine a standard commoner of that time, with an IQ around 70, talking to Einstein; the smartest people alive now will be the commoners compared to the AI.” It’s such bullshit. As far as I can tell from my limited ML knowledge from college, I don’t see any way that anything using machine learning can become AGI, or get smarter than humans.
Because ML needs feedback, and we can’t give feedback on something that’s more intelligent than we are. It’s as if the lowest commoner in the metaphor were staring over Einstein’s shoulder, and Einstein would only continue working on a theory if the commoner agreed that this was the correct approach. If the commoner said no, Einstein would try an entirely different approach. I don’t see how that can get smarter than people, or how it can learn to “escape into the internet and destroy humanity”. Unless I’m missing a major advancement in ML algorithms, I can’t imagine any approach I know of being capable of something like that. It just doesn’t work like that; it’s by definition not possible. (But if anyone knows more about the topic and disagrees with me, please let me know - I would really love to discuss this, since it’s pretty important to me.)
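To make that “feedback ceiling” argument concrete, here’s a toy sketch of it in Python. Everything in it - the judge, the ceiling, the numbers - is invented for illustration; it’s the shape of the argument, not how any real training pipeline works.

```python
# Toy sketch of the "commoner supervising Einstein" argument above:
# a learner only keeps changes its human rater approves of, and the
# rater can't verify anything beyond their own ceiling.
import random

random.seed(42)
JUDGE_CEILING = 70.0  # the best quality the human rater can actually judge

def judge_approves(new: float, old: float) -> bool:
    # The rater approves a change only if they can tell it's an improvement;
    # anything beyond their own understanding is unverifiable, so they say no.
    return new > old and new <= JUDGE_CEILING

quality = 0.0
for _ in range(100_000):
    candidate = quality + random.uniform(-1.0, 1.0)  # propose a small change
    if judge_approves(candidate, quality):           # learn only from approval
        quality = candidate

print(f"final quality: {quality:.1f} (rater ceiling: {JUDGE_CEILING})")
# The learner converges to its rater's ceiling and stops there: with
# approval as the only training signal, it can't climb past its judge.
```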
But, on an entirely different point - ML will be better than people at the single task it’s given. Give it someone’s profile of data collected about them from the internet and smart devices, and let the AI select a marketing campaign, email text, or video to show them to convince them to vote for XY. Given enough time to experiment and train, the model will get results, and there’s nothing you can do about it. Even if you know you’re going to be manipulated, the ML model knows that too - based on your data - and will figure out a way to manipulate you anyway. That’s what I’m worried about, especially since Facebook has had literally years of billions of users’ data to train and perfect its model. The Facebook feed is an ML training wet dream.
We’re fucked; the only way to defend yourself is to avoid any kind of personalized content. Google search, the YouTube feed, news sites, streaming services… anything that’s personalized will potentially be able to manipulate you. That’s the biggest issue with ML.
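For what it’s worth, the targeting loop described above doesn’t even need anything exotic - a bare-bones bandit algorithm already does it. A minimal sketch (the segments, ad variants, and response rates below are all invented):

```python
# Epsilon-greedy bandit learning which ad variant moves each user segment.
# Every impression is a training example; no understanding of the user is
# needed, only feedback. All names and numbers here are made up.
import random

random.seed(1)
SEGMENTS = ["young_urban", "rural_retiree"]
VARIANTS = ["fear_ad", "hope_ad", "meme_video"]

# Hidden ground truth the platform only ever observes through clicks:
TRUE_RATE = {
    ("young_urban", "fear_ad"): 0.05,   ("young_urban", "hope_ad"): 0.10,
    ("young_urban", "meme_video"): 0.30,
    ("rural_retiree", "fear_ad"): 0.25, ("rural_retiree", "hope_ad"): 0.10,
    ("rural_retiree", "meme_video"): 0.05,
}

shown = {k: 0 for k in TRUE_RATE}    # impressions per (segment, variant)
clicked = {k: 0 for k in TRUE_RATE}  # responses per (segment, variant)

def pick_variant(segment: str, epsilon: float = 0.1) -> str:
    if random.random() < epsilon:    # explore occasionally...
        return random.choice(VARIANTS)
    return max(VARIANTS,             # ...otherwise exploit the best so far
               key=lambda v: clicked[(segment, v)] / max(shown[(segment, v)], 1))

for _ in range(100_000):
    seg = random.choice(SEGMENTS)
    var = pick_variant(seg)
    shown[(seg, var)] += 1
    if random.random() < TRUE_RATE[(seg, var)]:
        clicked[(seg, var)] += 1

for seg in SEGMENTS:
    best = max(VARIANTS, key=lambda v: shown[(seg, v)])
    print(f"{seg}: the model settled on showing {best!r}")
```

The system never needs to understand you; it only needs to measure you.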
I actually wonder why, to play devil’s advocate a bit.
If I’m watching a film starring, say, Arnold Schwarzenegger, part of the deal is that I’m getting him specifically. The whole package. An AI that looks like him isn’t the same thing at all, and can’t be said to be starring him.
But his face or his voice isn’t valuable on its own - it’s his reputation as a good actor. That’s why he’s paid the big bucks.
Say the studio has an AI that can replicate an actor’s acting ability perfectly… they don’t… but let’s say they do one day… Why would they need the face? Once you can generate a near-infinite number of good actors, individual personalities don’t mean a lot.
You said it perfectly: it’s his reputation as a good actor, not the good acting itself. Stars get paid a ludicrous amount of money; you can easily find a decent actor for less and have plenty to spare to train them up.
The face is everything. They plaster it all over the advertising and it works. People will talk about the new movie: “oh, it has actor C in it” “in that case I’ll take a look”.
Let’s say you have a Terminator video game. If made today, that might look like 20 hours of preplanned gameplay from the main story writers, featuring expensive voice actors, and then another 40 hours of side content with less expensive talent.
But what if, soon, you could have a game that still has around 60 hours of gameplay, but no playthrough was exactly the same? While everyone gets the same core 20-hour main story, the side content adapts to the things you focus on and enjoy.
Hate lore but love combat? Arnold will take you to the future to help fight in the first robot war.
Love lore and hate combat? He acts as your bodyguard in a more thriller-paced sequence, infiltrating Dyson’s labs while hiding from another robot hunting you.
With adaptable generative tech powering the pipeline, you might end up with a thousand hours of different content pulled into a 60-hour experience that changes based on the player.
I think many people currently can’t really comprehend just how crazy the future is going to be.
Now - the best way to do this is to have Arnold paid for both the primary work and the extended content, and to involve him in making sure the extended content stays true to his performances.
As for getting rid of humans entirely - while that will likely happen for small roles, big stars have a marketing value that AI won’t be able to match until it walks red carpets, appears on talk shows, and gets mentioned in gossip tabloids.
Arguably, many big stars already lower the quality of voice-acting roles compared to dedicated voice actors, but they still get the jobs because more people buy the movie or game for the big name.
I mostly agree with you but think it’s important to clarify that even with machine learning many humans can be replaced.
To extend your metaphor: that library has always had a bunch of clerks sitting inside it. They’ve been handling requests, finding books, and organizing them into a system that best serves up that information.
Now, with machine learning, instead of all of those clerks keeping the library running smoothly, we’ve effectively replaced 99% of the humans with an organizational system that serves content and finds books even faster than a human could.
Slightly deeper: this machine-learning replacement can also mix and match bits of content. The old human system might get a request like “I want information on Abrahamic religion in Western culture”, gather up a ton of books, and pass them to the person who requested the info.
In the new system, the request can pull bits and pieces from all of those books and present a mostly comprehensive overview of Abrahamic religion in the West without anyone having to run and fetch the books.
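A toy version of that mix-and-match step might look like the sketch below. The corpus is invented, and real systems rank passages with learned embeddings rather than raw word overlap, but the shape is the same: fetch the best fragments and stitch them into one answer.

```python
# Toy "new library": score passages by word overlap with the request
# and stitch the best snippets together instead of handing over books.
# The corpus and query are invented for illustration.
BOOKS = {
    "History of Western Religion": [
        "Abrahamic religion shaped law and ethics across medieval Europe",
        "Trade routes carried goods, ideas, and scripture between continents",
    ],
    "Comparative Theology": [
        "Judaism, Christianity, and Islam share a common Abrahamic root",
        "Ritual practice varies widely between traditions and regions",
    ],
}

def retrieve(query: str, top_k: int = 2) -> list[str]:
    terms = set(query.lower().split())
    scored = []
    for book, passages in BOOKS.items():
        for p in passages:
            overlap = len(terms & set(p.lower().split()))
            scored.append((overlap, f"{p}. [{book}]"))
    scored.sort(reverse=True)  # best-matching fragments first
    return [snippet for score, snippet in scored[:top_k] if score > 0]

print(" ".join(retrieve("abrahamic religion in western culture")))
# Prints a stitched answer assembled from fragments of several books -
# no clerk, and no whole book, ever fetched.
```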
Deeper yet, and the scary iceberg: today, someone still needs to write all of those books, and we as a society tend to trust information from those books (cited sources and all that), so humans are safe as the content authors, right? We’ve basically just built a super-efficient organizational and content-delivery system. But as we start to trust the new system and use it more, we’re potentially going to see the system reference its own outputs instead of the source material… which creates a recursive, self-reinforcing feedback loop.
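That loop is what ML researchers have started calling “model collapse”, and you can watch the shape of it in a one-screen simulation. The Gaussian stand-in below is invented purely for illustration - each “generation” is fit only to samples produced by the one before it:

```python
# Each generation of a "model" is fit to the previous generation's output
# instead of real data; the diversity of what it can produce collapses.
import random, statistics

random.seed(7)
mean, std = 0.0, 1.0   # generation 0: fit to real, human-made data
N = 10                 # later generations see only a small sample of model output

for gen in range(1, 101):
    samples = [random.gauss(mean, std) for _ in range(N)]  # model output, not reality
    mean = statistics.fmean(samples)
    std = statistics.stdev(samples)
    if gen % 25 == 0:
        print(f"generation {gen:3d}: std = {std:.4f}")
# The spread trends toward zero: trained on its own outputs, the system
# narrows to an echo of whatever it used to know.
```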
We still need human content creation today, but the scary part (IMO) is when we treat these LLMs as general-purpose generative AI. LLMs are fallible: they can be incorrect and they often hallucinate. So when most people start blindly trusting these systems (they already do - look no further than the general confusion between the terms AI, machine learning, and LLM), we’re going to drift further and further away from generating new knowledge.
I think we are on the same page. There is a sociological distinction between the generic worker and the self-programmable worker, from the sociologist Manuel Castells. The self-programmable workforce is endowed with the ability to retrain and adapt to new tasks, new processes, and new sources of information as technology, demand, and management accelerate their pace of change. Generic labor, on the other hand, is exchangeable and disposable, and coexists in the same circuits with machines and unskilled labor from all over the world.
Generic workers are already being replaced by automation (robots), but now LLMs are threatening self-programmable workers. The only way to adapt to the new reality is to become indispensable to training LLMs. It will completely upend the job market as we know it. And as you said, the danger is if we treat LLMs as general AIs.
To be fair, that’s how I learn too! I read what others have written, watch what others do, and generally take in knowledge that other people have found, experienced, or learned from other folks themselves. Also, I used to never leave my room and didn’t experience life either, lol.
Anyway, it’s rare that I can sit down, poke and prod at some subject, and actually learn or discover something new on my own; usually hundreds of other people have already figured it out and I’ve just reinvented the wheel.
I think what you were trying to get at is usually referred to as the Chinese room argument.

https://en.m.wikipedia.org/wiki/Chinese_room
Regardless, the concern for me hasn’t been that LLMs can’t replace humans; it’s that they can replace writers, voice actors, extras, and programmers. (Software engineers just see AI as a new tool for the toolbox, but that’s a tangent I’m not getting into here.) Replacing the creatives who make entertainment is concerning because, like you said, LLMs just regurgitate the content that was fed into the model, and new creative works can’t be made with it.
The corporations that make money off the creatives love these new AIs as a way to cut costs. I think there is going to be an influx of cheap scripts for sitcoms, children’s shows, and popcorn summer movies, and there isn’t much anyone can do other than boycott the content.
It’s kind of scary knowing this could shut new writers and actors out of the industry as their market effectively shrinks. Though I’m hoping that people will recognize the cheap, junk-food-level content created by LLMs and stop consuming it in favor of content written and acted by humans.
Luckily, the big-name celebrities can’t be replaced by an LLM anytime soon, but I’m sure the scripts for superhero movies will soon have acts largely written by LLMs. I also expect something like The Simpsons to be entirely AI-generated and voice-acted. They have over 30 years of content to feed into the model, and the show is already known for regurgitating its own jokes, stories, and content.

Thanks for listening to my TED talk.
I don’t think there will be much of a difference as far as mass-entertainment production is concerned - it’s already soulless and driven by divisions of marketers analyzing every single line to maximize the number of people it will entertain. It’s not creative; it’s engineered to exploit years of psychological research on engagement and to create something mass-appealing and entertaining, without any care for artistic value.
By replacing with AIs the poor writers who are forced by marketing executives to dumb down their scripts to engage as large an audience as possible, those writers will get a chance to build a new indie scene instead of having their passion for the art sucked out of them by marketing research. That is a good thing, because they will have more freedom to create, not driven by the sole goal of extracting as much money from as many people as possible. Let mass entertainment churn out the same soulless bullshit with AIs, which it will definitely do to cut costs, and let’s hope that becomes the starting point for a larger independent scene. It will keep the people who enjoy content made for the masses entertained, so they will keep generating money, but without ruining the passion of so many artists by forcing them to create that content.
The fact that this is upvoted so much is just sad.
While at face value it appears to be critical of AI, and thus bandwagons on a very popular slant online these days, the inherent anthropomorphizing of the model in question is wrong in so many ways.
LLMs are trained to complete human thought. And as a result, that very narrow class of machine learning ends up being oddly good at seeming human in responses.
But a diffusion model for generating images? Or text to voice generation?
To anthropomorphize these models is like saying that your cell tower triangulating your position won’t care about you as much as your mother would.
It’s just incredibly bizarre.
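To make the “completion” point above concrete, the entire trick behind an LLM fits in a few lines. This assumes the Hugging Face transformers package is installed; gpt2 is a small, public checkpoint:

```python
# A language model is literally a next-token completer: given text,
# it extends it with statistically plausible continuations.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The actor walked onto the stage and", max_new_tokens=20)
print(result[0]["generated_text"])
# The continuation reads human because the model has digested millions of
# human sentences - there is no experience or intent behind it.
```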
It is going to get better and better at replicating human speech patterns, and it will be further customizable in how it expresses sounds mimicking human emotions. It can already get uncannily good off just a few seconds of sample audio.
As for the actors - as soon as residuals get figured out such that they get paid per hour of secondary usage of their recordings, they are going to go from “I’ll never deign to let AI replace me” to “yes, of course I’ll let you pay me more for me to do less work.”
It will be years before an AI successfully replaces the creativity of Matt Mercer deciding on a frightened goblin voice for an innkeeper.
But Matt Mercer providing samples of many different voices to an AI that pairs with GPT-5+ to DM your DnD campaigns, with that voice pack sold for a monthly fee he gets a large cut of?
That’s not only going to be entirely possible sooner than you might think; you’ll be seeing serious voice actors falling over themselves to market their voices directly to Main Street for personalized content.
It’s all about economic fairness, and those rigidly protesting change that endangers the status quo are very much like the RIAA fighting Napster instead of funding its successor - leaving a clear victory open to Apple, and then Spotify and others, by resisting change rather than embracing it.