You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).
Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which are what drive AI Overviews, and this feature “is still an unsolved problem.”
There’s really nothing they can do; that’s just the current state of LLMs. People are insane. They can literally talk with something that isn’t human. We are literally the first humans in history to have a human-level conversation with something that isn’t human… and they don’t like it because it isn’t perfect 4 years after release.
Using fancy predictive text is not like talking to a human-level intelligence.
You’ve bought into the fad.
In terms of language, ChatGPT is more advanced than most humans. Have you spoken to the average person lately? By average I mean the worldwide average.
It’s obviously not full human intelligence, but in terms of language it is pretty mind-blowing.
Text prediction != Intelligence
https://en.wikipedia.org/wiki/Chinese_room
Does it need to be real intelligence in order to have a conversation with it?
Behold, AI!
Nobody has said that an LLM is real artificial intelligence.
I love how much you’re walking back your own statements. Lmao
You just inflated my initial statement by assuming I meant that ChatGPT is smarter than humans. I said that ChatGPT is more advanced than the average human at linguistics, and I stand by it. Show me where I said “ChatGPT is smarter than a human” or “this is real simulated human intelligence”. You just wanted to be angry at someone, so you made up a narrative in your mind.
I even said that it doesn’t need to be real intelligence in order to be capable of having a conversation.
You’ll probably keep creating your imaginary narrative, so there’s no point in arguing with you.
Good bye.
Never understood this argument. It has nice rhetoric but proves nothing. The problem with “text prediction != intelligence” is that we can’t even define intelligence. I hate AI as much as the next guy here, but that doesn’t mean we should make lazy assertions and quote dumb arguments.
You’re not wrong that we don’t have great definitions of intelligence, but that doesn’t detract at all from criticism of these LLMs being pushed on us, or of the massive amount of praise and hype being directed at what are essentially advertising platforms.
All you need to understand is that AI models are making the internet a lot worse, it’s going to keep getting worse, and people are not happy about it. The exact details and semantics of the wording are almost irrelevant in the face of our need to collectively demand better from our tech companies and not act like these LLMs are real entities that deserve respect.
ChatGPT has given me verifiably false information plenty of times. It’s amazing how it gets things right, but also amazing how it can get basic stuff completely wrong.
So… Humans get basic things right all the time and never provide false information? What humans are you talking to?
If you think everyone around you is dumber than a text app, you might be the problem.
“A text app”… Sure, it is a text app. Definitely not revolutionizing the tech industry as we speak, totally irrelevant. Sure.
If you believe jumping on a corporate fad-race bandwagon toward enshittification is “revolutionizing”, I guess you’re not wrong.
Then again, you also believe that people are dumber than predictive text software… So yeah. Lmao
I have never said that ChatGPT is smarter than humans. I said that when it comes to linguistics, it is more advanced than the average human.
But keep twisting my words, I don’t care. Have a wonderful rest of the week. Good bye.
You’re not SAYING anything here. Are… are YOU a chat bot?
Have a nice day sir.
The internet is getting flooded with content that reads like a 10th grade book report written the hour before the test. When you can give me an internet that isn’t overflowing with mindless slop that says nothing new and pictures of uncanny people smiling with too many teeth, then I might start to believe it’s better than people in some way.
In the meantime, I would encourage everyone else to maintain and encourage human connections, hand-written material and actual human interaction. Write and draw. Don’t be lazy and let your lazy brain convince yourself that any of this is making you better at anything.
Yeah, and walk everywhere. Use a boat with paddles. Don’t be lazy.
I don’t like it because it’s going to be used to make the life of the working class worse.
I can talk to something that’s not human all day long; the question is, does that thing have qualia? Does it experience? Do my words have even the most abstract meaning to it? Most animals experience even if they don’t have language. LLMs are simply mirrors. Very complicated mirrors.
It’s fine, it’s great, it’s a step towards making actual intelligences that experience the world in some way. But don’t get swept up in this extremely premature hype over something that only looks magical because you wildly overestimate and over-essentialize the human being. Let’s have some fucking humility out there in tech-bro land, even just a little. You’re not that special, and the predictive text programs we’re making are not worth the reverence people are giving them. Yet.
Does it matter if it is actually experiencing things? What matters is what you experience while talking to it, not what it experiences while talking to you. When you play videogames, do you actually think the NPCs are experiencing you?
It’s pretty insane how negative people are. We did something so extraordinary. Imagine if someone told the engineers who built the space shuttle “but it isn’t teleportation”. Maybe stop being so judgemental of what others have achieved.
“Uhh actually, this isn’t a fully simulated conscious being with a fully formed organic body that resembles my biological structure on a molecular level… Get this shit out of here”
It’s not negativity; it’s reining in unwarranted faith and adoration. This is a technology, a product, that is largely being made by and for corporate mega-giants who are not going to steer it toward the betterment of anyone or anything. Just like every other technology, it will take decades or more for any of us to see it take the form of the life-changing wonder that too many people are already seeing in it. If you can stop being impressed by how easy it is to mirror human traits back at us, you will let these companies know that you do NOT want advertising AIs in your fucking toaster.
You want the real thing? Then put some pressure where it belongs and don’t be a hype-person for advertising platforms and plagiarism simulators.
“How easy it is to blah blah blah”… Dude, what the hell are you talking about? There’s nothing easy about this system.
If they release a real AI you’d still dislike it because it was created by a corporation.
And you’re going to be waiting a suspiciously long time before your life is tangibly, positively impacted by any of this. Nothing is changing for the better any time soon, and many things are going to get worse. The carrot will always be a few years away.
The worst part is I want it to succeed and live up to its promises, but I am old enough to be smart enough to know that we as a species are, and I fucking cannot stress this enough, not that fucking special. We do the same sad shit over and over: this is tech that promises to change the world, but not soon enough to actually help, because they want to make money from it. That’s the hard truth, the real pill you and all the kids watching this shit are going to have to slowly… ever so slowly… swallow.
I’m already using Copilot every single day. I love it. It helps me save so much time writing boilerplate code that can be easily guessed by the model.
It even helps me understand tools faster than the documentation. I just type a comment and it autocompletes a piece of code that is probably wrong, but probably has the APIs that I need to learn about. So I just go, learn the specific APIs, fix the details of the code and move on.
I use ChatGPT to help me improve my private blog posts because I’m not a native English speaker, so it makes the text feel more fluent.
We trained a model with the documentation of our company so it automatically references docs when someone asks it questions.
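Whether you fine-tune something or just stuff retrieved docs into the prompt, the simplest version of “it references the docs” boils down to a sketch like this. To be clear, the model names, example docs, and prompt below are placeholder assumptions, not our actual setup:

```python
# Rough retrieval sketch: embed doc chunks once, pull the closest ones into the prompt.
# Model names, docs, and prompt are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "To deploy a service, run `make deploy ENV=staging` from the repo root.",
    "All internal APIs require the X-Team-Token header.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)

def answer(question, k=2):
    q = embed([question])[0]
    # cosine similarity between the question and every doc chunk
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(docs[i] for i in np.argsort(scores)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided docs and cite them."},
            {"role": "user", "content": f"Docs:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How do I deploy to staging?"))
```

The real thing has more plumbing, but that’s the general shape of it.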
I’m using the AI from Jira to automatically generate queries and find what I want as fast as possible. I used to hate searching for stuff in Jira because I never remembered the DSL.
I have GPT as a command line tool because I constantly forget commands and this tool helps me remember without having to read the help or open Google.
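Mine is nothing fancy, roughly a wrapper like this sketch (the model name and prompts are placeholders, not my exact script):

```python
#!/usr/bin/env python3
# Minimal "what's the command for..." helper. Placeholder model and prompt.
import sys
from openai import OpenAI

def main():
    question = " ".join(sys.argv[1:]) or "How do I list files by size?"
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Reply with a single shell command and a one-line explanation."},
            {"role": "user", "content": question},
        ],
    )
    print(resp.choices[0].message.content)

if __name__ == "__main__":
    main()
```

Drop it somewhere on your PATH, alias it to something short, and the answer is one terminal command away.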
We have pipelines that read exceptions that would usually be confusing for developers, but GPT automatically generates an explanation for the error in the logs.
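That step is the same idea bolted onto the logger. A rough sketch of the shape of it, with hypothetical names rather than our actual pipeline:

```python
# Sketch of the "explain this exception in the logs" step. Hypothetical names.
import logging
import traceback
from openai import OpenAI

client = OpenAI()
log = logging.getLogger("pipeline")

def log_with_explanation(exc: BaseException) -> None:
    trace = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Explain this stack trace to a developer in two sentences."},
            {"role": "user", "content": trace},
        ],
    )
    log.error("%s\nLLM explanation: %s", trace, resp.choices[0].message.content)

try:
    {}["missing_key"]  # stand-in for whatever the pipeline actually blows up on
except Exception as exc:
    log_with_explanation(exc)
```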
I literally ask ChatGPT questions about other areas of technology that I don’t understand. My questions aren’t advanced, so I usually get the right answers and I can keep reading about the topics. ChatGPT is literally teaching me how to do front ends, something that I hated my whole career but now feels like a breeze.
Maybe you should start actually figuring out how to use the tool instead of complaining about it in this echo chamber.
As someone who has actually used most of these applications and has to keep up with them for my own professional reasons, I am speaking from a place of education when I say this isn’t going to help as many of us as it is supposedly helping you. Nothing you write can make me feel positive about the immediate future.
I know the tech will transform our society, I know it will change everything. I’m not disputing that.
I’m disputing the absolute fucking adoration and hype that you and others are gushing all over this tech like it’s daddy’s big fat knob that has to be lubricated constantly. It’s disgusting because right now it’s all hype and bullshit and exaggeration. It’s making everything worse. We’re a LONG fucking way from transforming humanity for the better with this, that goal is far further away and on the other side of a lot worse hardships that people are cowardly dodging, and none of this is going into helping the people who need the help the most in our world.
It’s so very yay and happy that it makes you feel important at work using shortcuts. I’m sure that you feel special being able to be lazier at work. Yippee. Now please fix the internet while we wait for things to get as good as you’re promising. Please, fix it. It’s getting worse every second with slop we didn’t ask for. I’m sure with all your shortcuts you can do the work of a team to solve this mess.
Yes, I’ll keep using it because it helps me get things done faster, just like vehicles help me move faster.
Read everything you wrote again. You’re so toxic and irrational. It’s just a tool, man. Just understand its limitations and work with it. It’s very useful even if it isn’t perfect.
This isn’t like talking to a human. It lacks depth, empathy, context, and real knowledge of many questions.
Just try to get more out of it on a topic by asking deeper questions. You will find that it begins writing something that might sound right or helpful but actually isn’t.
All around, it just feels artificial. No emotion, no voice patterns, no body language, no changes in behavior, no reaction to jokes. Sorry, this doesn’t feel real.
Yeah, it feels like a much-improved version of ELIZA. Much improved, but still software. It doesn’t understand what it’s saying. TBF, though, I know a few humans like that.
If nobody had told you that you were talking to an AI in 2020, you’d have thought it was a person in quick interactions.
The only reason it doesn’t feel more real is that they literally programmed it to feel the way it does. They didn’t create ChatGPT to express emotions; that would be insane.