AI on its own isn’t a threat, but people (mis)using and misrepresenting AI are. That isn’t a problem unique to AI but there sure are a lot of people doing dumb and bad things with AI right now.
People are getting paid by corporations to “do their job”.
People who speak up against the interests of the corporation are getting laid off.
Unions are regularly busted to prevent collective action and worker cooperation.
CEOs are paid stupid amounts of money by corporations to keep maximizing shareholder profits over everything else, even moral considerations.
People decide who to hire for what roles and who to lay off. People form unions and people bust unions. The shareholders are people, and the decisions made in their interests are made by other people.
No, the “AI” isn’t a threat in itself. And treating generative algorithms like LLMs as if they were general intelligence is dumb beyond words. However:
It massively increases the reach and capacity of foreign (and sadly domestic) agents to influence people. All of those Russian trolls that brought about fascism, Brexit and the rise of the far right used to be humans. Now, using AI, a single human can do more than a whole army of people could in the past. Spreading misinformation has never been easier.
Then there’s the whole business of replacing people’s jobs with AI. No, the AI can’t actually do those jobs, not very well at least. But if management and the shareholders think they can increase profits using AI, they will certainly fire a lot of folks. And even if that ends up ruining the company down the line, that costs even more jobs and usually impacts the people lower in the organization the most.
Also, there’s a risk of people literally becoming less capable and knowledgeable because of AI. If you can have a digital assistant you carry around in your pocket at all times answer every question ever, why bother learning anything yourself? Why take the hard road when the easy road is available? People are at risk of losing information, knowledge and the ability to think for themselves because of this. And it can get so bad that when the AI just makes shit up, people take it as the truth. On a darker note, if the people behind the big AIs want something to go unknown or be misrepresented, they can make it happen. And people would be so reliant on it, they wouldn’t even know it’s happening. This is already an issue with social media; AI is much, much worse.
Then there is the resource usage of AI. It makes the impact of cryptocurrency seem like a rounding error. The energy and water usage is huge and growing every day. This has the potential to undo almost all of the climate wins we’ve had over the past two decades and push the Earth beyond the tipping point. What people seem to forget about climate change is that by the time things start getting bad, it’s already way too late, and the situation will deteriorate at an exponential rate.
That’s just a couple of big things I can think of off the top of my head. I’m sure there are many more issues (such as the death of the internet). But I think this is enough to call the current level of “AI” a threat to humanity.
Miss me with the doomsday news cycle capture; we aren’t even close to AI being a threat to ~anything
(and all hail the AI overlords if it does happen, can’t be worse than politicians)
Except for the environment
idk, most politicians are a threat to the environment like AI is (if not even more so, with their moronic laws)
And people’s jobs (not because it can replace people, but because execs think it can)
AI on its own isn’t a threat, but people (mis)using and misrepresenting AI are. That isn’t a problem unique to AI but there sure are a lot of people doing dumb and bad things with AI right now.
*Corporations
When was the last time you saw a corporation making decisions and taking actions of its own accord, without people?
Maybe they will start to, now, as people delegate their responsibilities to “AI”
Sorry, I made this before Drake went from certified lover boy to certified pedophile.
I agree with the first part.