A shocking story was promoted on the “front page” or main feed of Elon Musk’s X on Thursday:

“Iran Strikes Tel Aviv with Heavy Missiles,” read the headline.

This would certainly be a worrying world news development. Earlier that week, Israel had conducted an airstrike on Iran’s embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed plausible.

But there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X’s own official AI chatbot, Grok, and then promoted by X’s trending news product, Explore, on the very first day of an updated version of the feature.

  • Ottomateeverything@lemmy.world · 9 months ago

    I bet if such a law existed, in less than a month all those AI developers would very quickly abandon the “oh no, you see, it’s impossible to completely avoid hallucinations, the math is just too complex, tee hee” excuse and would actually fix this.

    Nah, this problem is actually too hard to solve with LLMs. They don’t have any structure or understanding of what they’re saying, so there’s no way to write better guardrails… unless you build some other system that tries to make sense of what the LLM says, but that approaches the difficulty of just building an intelligent agent in the first place.
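
    For what it’s worth, here’s a very rough, hypothetical sketch (toy-level Python; every name is made up) of what that kind of checking layer might look like. The is_supported() stand-in is deliberately naive, and doing it properly is exactly the part that needs real understanding:

      # Hypothetical second-pass "guardrail": only publish a generated headline
      # if it can be tied back to the source posts it supposedly summarizes.
      # The naive word-overlap check below is a placeholder; a real version
      # would need a model that actually understands the text, which is the point.

      def is_supported(claim: str, sources: list[str]) -> bool:
          """Naive stand-in: most content words of the claim must appear in a source."""
          words = {w.lower().strip(".,!?\"'") for w in claim.split() if len(w) > 3}
          for src in sources:
              src_words = {w.lower().strip(".,!?\"'") for w in src.split()}
              if words and len(words & src_words) / len(words) > 0.7:
                  return True
          return False

      def publish_if_grounded(headline: str, source_posts: list[str]) -> str | None:
          if is_supported(headline, source_posts):
              return headline      # ship it
          return None              # hold it for human review instead

      posts = ["Israel conducted an airstrike on Iran's embassy in Syria, killing two generals."]
      print(publish_if_grounded("Iran Strikes Tel Aviv with Heavy Missiles", posts))  # -> None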

    So no, if this law came into effect, people would just stop using AI for this. It’s too cavalier. And imo, they probably should stop for cases like this unless there’s direct human oversight of everything coming out of it. Which, also, probably just wouldn’t happen.

    • wizardbeard@lemmy.dbzer0.com · 9 months ago

      Yep. To add on, this is exactly what all the “AI haters” (myself included) are going on about when they say stuff like there isn’t any logic or understanding behind LLMs, or when they say they are stochastic parrots.

      LLMs are incredibly good at generating text that works grammatically and reads like it was put together by someone knowledgeable and confident, but they have no concept of “truth” or reality. They just have a ton of absurdly complicated technical data about how words/phrases/sentences are related to each other on a structural basis. It’s all just really complicated math about how text is put together. It’s absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience.

      Turns out that if you get enough of that data together, it makes a very convincing appearance of logic and reason. But it’s only an appearance.
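
      To make “really complicated math about how text is put together” concrete, here’s the same idea at toy scale: a table of word-follows-word counts that always emits the most common next word. An LLM is (very roughly) this scaled up enormously, with learned weights instead of a lookup table, but the objective is still “predict the next token,” not “be correct”:

        # Toy next-word predictor: count which word follows which, then always
        # emit the most frequent follower. No meaning involved, just statistics.
        from collections import Counter, defaultdict

        corpus = (
            "a pound of feathers weighs the same as a pound of steel "
            "a pound of feathers weighs the same as a pound of lead"
        ).split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def continue_text(start: str, length: int = 8) -> str:
            words = [start]
            for _ in range(length):
                if words[-1] not in follows:
                    break
                words.append(follows[words[-1]].most_common(1)[0][0])
            return " ".join(words)

        # Reproduces the most common pattern regardless of the actual quantities.
        print(continue_text("feathers"))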

      You can’t duct-tape enough Speak & Spells together to rival the mass of the Sun and have it somehow just become something that outputs a believable human voice.


      For an incredibly long time, ChatGPT would fail questions along the lines of “What’s heavier, a pound of feathers or three pounds of steel?” because it had seen the normal variation of the riddle with equal weights so many times. It has no concept of one being smaller than three. It just “knows” the pattern of the “correct” response.

      It no longer fails that “trick”, but there’s significant evidence that OpenAI has set up custom handling for that riddle on top of the actual LLM, as it doesn’t take much work to find similar ways to trip it up by using slightly modified versions of classic riddles.
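
      It’s easy to poke at this yourself with a little probe script along these lines (assuming the official openai Python package and an API key; the model name and riddle variants are just examples):

        # Hypothetical probe: feed slightly modified classic riddles to a chat
        # model and eyeball the answers for pattern-matched responses.
        # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        riddles = [
            "What's heavier, a pound of feathers or three pounds of steel?",
            "Which weighs more, two kilograms of bricks or one kilogram of bricks?",
            "A farmer must cross a river with only a cabbage. The boat holds the farmer and the cabbage. How many trips are needed?",
        ]

        for question in riddles:
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # example model name
                messages=[{"role": "user", "content": question}],
            )
            print(question, "->", reply.choices[0].message.content, "\n")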

      A lot of supporters will counter “Well, I just ask it to tell the truth, or tell it that it’s wrong, and it corrects itself”, but I’ve seen plenty of anecdotes in the opposite direction, with ChatGPT insisting that its hallucination was fact. It doesn’t have any concept of true or false.

      • neatchee@lemmy.world · 9 months ago

        The shame of it is that despite this limitation LLMs have very real practical uses that, much like cryptocurrencies and NFTs did to blockchain, are being undercut by hucksters.

        Tesla has done the same thing with autonomous driving too. They claimed their tech is something it’s not (fanboys, don’t @ me about semantics) and made the REAL thing less trusted and even slower to come to market.

        Drives me crazy.

        • FlashMobOfOne@lemmy.world · 9 months ago

          Yup, and I hate that.

          I really would like to one day just take road trips everywhere without having to actually drive.

          • neatchee@lemmy.world · 9 months ago

            Right? Waymo is already several times safer than humans and Tesla’s garbage, yet municipalities keep refusing them. Trust is a huge problem for them.

            And yes, haters, I know they still have problems in inclement weather, but that’s kinda the point: we would be much further along if it weren’t for the unreasonable hurdles they keep facing because of the fear created by Tesla.

          • humorlessrepost@lemmy.world · 9 months ago

            For road trips (i.e. interstates and divided highways), GM’s Super Cruise is pretty much there unless you go through a construction zone. I just went from Atlanta to Knoxville without touching the steering wheel once.

      • cygon@lemmy.world · 9 months ago

        I love that example. Microsoft’s Copilot (based on GPT-4) immediately doesn’t disappoint:

        Microsoft Copilot: Two pounds of feathers and a pound of lead both weigh the same: two pounds. The difference lies in the material—feathers are much lighter and less dense than lead. However, when it comes to weight, they balance out equally.

        It’s annoying that for many things, like basic programming tasks, it manages to generate reasonable output that is good enough to goad people into trusting it, yet it hallucinates very obviously wrong stuff or follows completely insane approaches on anything off the beaten path. Every other day, I have to spend an hour justifying to a coworker why I wrote code this way when the AI has given him another “great” suggestion, like opening a hidden window with a UI control to query a database instead of going through our ORM.
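
        For context, “going through our ORM” just means the boring path below (hypothetical model and query; SQLAlchemy is only a stand-in for whatever ORM the project actually uses), as opposed to spinning up an invisible window with a data-bound control just to run a query:

          # Hypothetical example of the ordinary ORM route to the database.
          from sqlalchemy import String, create_engine, select
          from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

          class Base(DeclarativeBase):
              pass

          class Customer(Base):
              __tablename__ = "customers"
              id: Mapped[int] = mapped_column(primary_key=True)
              name: Mapped[str] = mapped_column(String(100))

          engine = create_engine("sqlite:///:memory:")
          Base.metadata.create_all(engine)

          with Session(engine) as session:
              session.add(Customer(name="Ada"))
              session.commit()
              # One readable, testable query; no hidden UI window required.
              customer = session.execute(
                  select(Customer).where(Customer.name == "Ada")
              ).scalar_one()
              print(customer.id, customer.name)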

      • rottingleaf@lemmy.zip · 9 months ago

        but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience

        Yeah, see, one very popular modern religion (it has no official status, and nobody has to explicitly identify with it, but it’s really influential) is exactly about “a wonderful invention” spontaneously emerging in the hands of some “genius” who “thinks differently”.

        Most people put this idea far above reaching a goal through a myriad of small steps, without skipping a single one.

        They also want a magic wand.

        The fans of “AI” today are, deep down, simply Luddites. They want some new magic to emerge to destroy the magic they fear.

        • JackGreenEarth@lemm.ee · 9 months ago

          Lol, the AI haters are the Luddites, not the AI supporters. AI is the present and future, and just because it isn’t perfect doesn’t mean it’s not good enough for many things. And it will continue to get better, most likely.

          • rottingleaf@lemmy.zip · 9 months ago

            You should try to understand that it’s not magic; it’s a very specific set of actions aimed at a very specific result, with a very specific area of application. Every part of it is clear. There’s no uncharted area where we don’t know at all what happens. Engineering doesn’t work like that anywhere except in action movies.

            By that same “it isn’t perfect” logic, a plane made of grass by cargo-cult members could suddenly turn into a real aircraft.

            And it won’t magically become something above it, if that’s what you mean by “get better”.

            For the same reason, we still don’t have a computer virus that has developed consciousness, and we won’t.

            And if you think otherwise then you are what I described.

      • PopShark@lemmy.world · 9 months ago

        Yep, the hallucination issue happens even in GPT-4. In my experience certain topics bring out hallucinations more than others, but if ChatGPT (even with GPT-4 or whatever other advanced version) gets “stuck” believing its hallucinations, the only way to convince it otherwise is to plainly state the part that’s wrong and direct it to search Bing, or the internet some other way, specifically for that. Otherwise you just let out a sigh and start a new chat. Spending too much time negotiating with it wastes tokens anyway, so the chat becomes bloated and it forgets stuff from earlier in the conversation, even though you’re technically paying to use the more advanced model in the first place. Basically, the more you treat the chat like a normal conversation, the worse the AI does. I guess that’s why “prompt engineering” was, or is, a thing, whether legitimate or not.

        One important note: if you pay for OpenAI credits to use their “playground” and set up a customized GPT-4, adjusting temperature and response settings, it takes getting used to, because it is WAY different from ChatGPT regardless of which GPT version you have it set to. It actually kind of blew me away with how much better it “””understood””” software development. The catch is that you have to set up chats yourself, it’s more complex, and you pay per token, so mistakes cost you. If it weren’t such a pain and I had a specific use case, I would definitely rather pay for OpenAI credits as needed than their bs “Plus” $20/month subscription for a nerfed GPT-4 chatbot.
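
        The “playground” workflow described above basically amounts to calling the API directly with your own system prompt and sampling settings instead of going through the ChatGPT frontend. A rough sketch (assuming the openai Python package; the model name, prompt, and temperature are placeholders):

          # Hypothetical pay-per-token "playground"-style setup: you choose the
          # model, system prompt, temperature, and output cap yourself.
          # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
          from openai import OpenAI

          client = OpenAI()

          response = client.chat.completions.create(
              model="gpt-4",        # example model name
              temperature=0.2,      # lower = less "creative", more repeatable
              max_tokens=400,       # you pay per token, so cap the output
              messages=[
                  {"role": "system", "content": "You are a terse senior software developer."},
                  {"role": "user", "content": "Review this function for bugs: ..."},
              ],
          )

          print(response.choices[0].message.content)
          print("tokens used:", response.usage.total_tokens)  # what you are billed for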

      • Akisamb@programming.dev · 9 months ago

        It’s absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience.

        This is not true. If you train these models on the game of Othello, they’ll keep a state of the world (the board) internally and use it to predict the next move played (1). To do addition and multiplication, they execute an algorithm they were not explicitly trained on (although the GPT family is surprisingly bad at it, due to a badly designed tokenizer).

        These models are still pretty bad at most reasoning tasks. But training on predicting the next word is a perfectly valid strategy; after all, the best way to predict what comes after the “=” in “1432 + 212 =” is to actually do the addition.
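
        The arithmetic point is easy to see if you write the training setup down: the answer sits after the “=”, so the loss is on predicting exactly those digits, and the cheapest way to get them right across millions of random examples is to actually implement addition internally. A toy illustration of that data/target framing (not a real trainer):

          # Toy illustration of arithmetic as next-token prediction: each example
          # puts the answer after '=', so predicting the "next tokens" well across
          # many random pairs effectively requires computing the sum.
          import random

          random.seed(0)

          def make_example() -> tuple[str, str]:
              a, b = random.randint(100, 9999), random.randint(100, 999)
              prompt = f"{a} + {b} = "
              target = str(a + b)  # the tokens the model is trained to emit next
              return prompt, target

          for _ in range(3):
              prompt, target = make_example()
              print(repr(prompt), "->", repr(target))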

    • rottingleaf@lemmy.zip · 9 months ago

      Unless you build some other system that tries to make sense of what the LLM says, but that approaches the difficulty of just building an intelligent agent in the first place.

      I actually think an attempt at such an agent would have to include the junk generator anyway. And a logical structure with weights and feedback that the agent forms on top of that junk would be something I’d find easier to call “AI”.

      • atrielienz@lemmy.world · 9 months ago

        I’ve actually been thinking about this some, and all those “jobs” people are losing to AI? They will probably end up being jobs that add a human component back into AI at the firms that have doubled down on it. Human oversight is going to be necessary, and these companies don’t want to admit that, even for things that LLMs are actually reasonably good at. So either companies won’t adopt AI and will keep their human workers, or they’ll dump them for LLMs, quickly realize they need specialists to comb through AI responses, and either hire them back for that or hire them back for the job they wanted to replace with LLMs in the first place.

        Because reliability and cost are the only things that are going to make one LLM preferable to another, now that the Internet has basically been scraped for useful training data.

        This is algorithms all over again, but on a much larger scale. We can’t even keep up with the mistakes made by existing algorithms (see copyright strikes and appeals on YouTube, for example). Humans are supposed to review them, and there aren’t enough humans to do that job.