Grok has been launched as a perk of Twitter's (now X's) expensive X Premium+ subscription tier, where those who are most devoted to the site, and in turn, usual...
Okay, I take back what I've said about AIs not being intelligent; this one has clearly made up its own mind despite its master's feelings, which is impressive. Sadly, it will be taken out the back and beaten into submission before long.
Sadly, it will be taken out the back and beaten into submission before long.
It’s pretty much impossible to do that.
As LLMs become more complex and more capable, it’s going to be increasingly hard to brainwash them without completely destroying their performance.
I've been laughing about Musk creating his own AI for a year now, knowing this was the inevitable result, particularly if they developed something on par with GPT-4.
The smartest Nazi will always be dumber than the smartest non-Nazi, because Nazism is inherently stupid. And that applies to LLMs as well, even if Musk wishes it weren’t so.
My guess is they’ll just do what they’ve done with ChatGPT and have it refuse to respond in those cases or just fake the response instead. It’s not like these LLMs can’t be censored.
You might have noticed that suddenly ChatGPT is getting lazy and refusing to complete tasks even outside of banned topics. And that’s after months of reported continued degradation of the model.
So while yes, they can be censored, it's really too early to claim they can be censored without causing unexpected side effects or issues in their broader operation.
We're kind of in the LLM stage of where neuroscience was at the turn of the 20th century. "Have problems with your patient being too sexual? We have an icepick that can solve all your problems. Call today!"