Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.

Not only that: someone, having spotted this recurring hallucination, turned the made-up dependency into a real package, which was subsequently downloaded and installed thousands of times by developers following the AI’s bad advice, we’ve learned. Had the package been laced with actual malware, rather than being a benign test, the results could have been disastrous.
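One defensive pattern against this kind of attack is to refuse to install any dependency a model suggests until a human has vetted it. A minimal sketch, assuming a project-maintained allowlist (the package names and the `safe_to_install` helper below are hypothetical, invented for illustration):

```python
# Hypothetical allowlist of dependencies a human has already reviewed.
VETTED = {"requests", "numpy", "flask"}

def safe_to_install(package: str) -> bool:
    """Return True only for packages that have been manually vetted,
    never just because an AI assistant suggested them."""
    return package.lower() in VETTED

# "totally-real-helper" stands in for a hallucinated package name.
suggestions = ["requests", "totally-real-helper"]
approved = [p for p in suggestions if safe_to_install(p)]
# Only the vetted name survives; the hallucinated one is dropped.
```

In practice a lockfile with pinned, hash-verified dependencies serves the same purpose: a hallucinated name that was never vetted simply cannot enter the build.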

  • penquin@lemm.ee · 7 months ago (+55/−2)

    I asked several AIs to write unit and integration tests for my code, and they literally failed every single time. Some produced straight-up garbage; others came up with shit I don’t even have in my code. AI is really good if you know what you’re doing and can spot what’s right and what’s wrong. Blindly taking its code is useless, and dangerous too.

    • residentmarchant@lemmy.world · 7 months ago (+18)
      I find that if I write one or two tests on my own and then tell Copilot to complete the rest, it’s like 90% correct.

      Still not great, but at least it saves me typing a bunch of otherwise boilerplate unit tests.
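      A minimal sketch of this seed-then-complete workflow, assuming a hypothetical `slugify` function under test (the function and test names are invented for illustration):

      ```python
      import unittest

      def slugify(text: str) -> str:
          """Hypothetical function under test: lowercase, spaces to hyphens."""
          return text.strip().lower().replace(" ", "-")

      class TestSlugify(unittest.TestCase):
          # Hand-written seed test: gives the assistant the naming and
          # assertion style to imitate for the remaining cases.
          def test_basic(self):
              self.assertEqual(slugify("Hello World"), "hello-world")

          # The kind of boilerplate case an assistant would then fill in;
          # each generated test still needs a human check before it is trusted.
          def test_strips_whitespace(self):
              self.assertEqual(slugify("  padded  "), "padded")
      ```

      The seed tests constrain the completion, which is likely why the hit rate improves over asking for a test suite from scratch.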

      • penquin@lemm.ee · 7 months ago (+8)

        I actually haven’t tried it that way. I just asked it to write the tests for whatever class I was on, and it started spitting stuff at me. I’ll try your way and see.

    • anlumo@lemmy.world · 7 months ago (+4/−1)

      It’s a matter of learning how to prompt it properly. It’s not a human, and thus needs a different kind of instruction.