Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • joelthelion@lemmy.world · 1 year ago

    I don’t understand why they don’t use a second model to detect falsehoods instead of trying to fix it in the original LLM?

    • Flying Squid@lemmy.world · 1 year ago

      And then they can use a third model to detect falsehoods in the second model and a fourth model to detect falsehoods in the third model and… well, it’s LLMs all the way down.

    • doggle@lemmy.world · 1 year ago

      AI models are already computationally intensive. This would instantly double the overhead. Also, being able to detect problems does not mean you’re able to fix them.

      • kromem@lemmy.world · 1 year ago

        More than double, as query size is very much connected to the effective cost of the generation, and you’d need to include both the query and initial response in that second pass.

        Then you might also need to make an API call to a search engine or knowledge DB to fact-check it.

        And include that data as context along with the query and initial response to whatever decides if it’s BS.
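
        A minimal sketch of what that second pass could look like, assuming hypothetical call_llm() and search_knowledge_base() helpers that stand in for whatever model API and retrieval source actually gets used:

        ```python
        # Sketch only: call_llm() and search_knowledge_base() are placeholders,
        # not a real library API - plug in your own model and retrieval source.

        def call_llm(prompt: str) -> str:
            raise NotImplementedError("your model/API call goes here")

        def search_knowledge_base(query: str) -> str:
            raise NotImplementedError("your search engine / knowledge DB call goes here")

        def answer_with_fact_check(query: str) -> str:
            # First pass: generate a draft answer as usual.
            draft = call_llm(query)

            # Retrieval step: pull reference material to check the draft against.
            evidence = search_knowledge_base(query)

            # Second pass: a separate check sees the query, the draft, and the
            # evidence together, and decides whether the draft is supported.
            verdict = call_llm(
                "Query:\n" + query +
                "\n\nDraft answer:\n" + draft +
                "\n\nEvidence:\n" + evidence +
                "\n\nReply SUPPORTED or UNSUPPORTED."
            )

            if "UNSUPPORTED" in verdict:
                # Regenerate constrained to the evidence instead of returning
                # a likely hallucination.
                draft = call_llm(query + "\n\nAnswer using only this evidence:\n" + evidence)
            return draft
        ```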

        So for a dumb realtime chat application, no one is going to care enough to slow it down and exponentially increase costs to avoid hallucinations.

        But for AI replacing a $120,000 salaried role in writing up a white paper on some raw data analysis, a 10-30x increase over a $0.15 query is more than acceptable.
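
        Rough numbers, using only the figures above ($0.15 per query, the 10-30x multiplier, a $120,000 salary) plus an assumed ~250 working days a year:

        ```python
        # Back-of-the-envelope comparison using the figures from this thread.
        base_query = 0.15                 # dollars for a single-pass query
        verified_query = 30 * base_query  # worst case of the 10-30x range: $4.50

        salary = 120_000                  # annual cost of the role being replaced
        daily_rate = salary / 250         # ~$480/day, assuming 250 working days a year

        print(f"${verified_query:.2f} per verified document vs ~${daily_rate:.0f}/day in salary")
        ```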

        So you will see this approach taking place in enterprise scenarios and professional settings, even if we may never see it in chatbots.

    • kromem@lemmy.world · 1 year ago

      2+ times the cost on every query, to fix something that makes less than 5% of outputs unusable, isn’t a trade-off people are willing to make for chat applications.

      This is the same approach used to fix jailbreaking.

      You absolutely will see this as more business-critical integrations occur; it just still probably won’t be in broad consumer-facing realtime products.

    • Sethayy@sh.itjust.works · 1 year ago

      Cause what are you gonna train the second model on? The same data as the first just recreates it, and any other data is gonna be nice and mucky with all the AI content out there.