AI girlfriend bots are already flooding OpenAI’s GPT store :: OpenAI’s store rules are already being broken, illustrating that GPTs could be hard to regulate

  • afraid_of_zombies@lemmy.world · 10 months ago

    If we get wiped out by AI girlfriends, we deserve it. If the reason a person never reproduced is solely that they had a chatbot, they really should not reproduce.

    • sramder@lemmy.world · 10 months ago

      I was trying to dream up a justification for this rule that wasn’t about mitigating the ick factor and fell short… I guess if the machines learn to beguile us by forming relationships, they could be used to manipulate people, honeypot-style?

      Honestly, the only point I set out to make was that people had probably been working on virtual girlfriends for weeks (months?) before they were banned. They had probably already been submitted to the store, and the article was trying to drum up panic.

      • afraid_of_zombies@lemmy.world · 10 months ago

        Sure, which, you know, we can already do. Honeypots are a thing, and one so old the Bible mentions them. Delilah, anyone? It isn’t that cough…hard…cough to pretend to be interested enough in a guy to make him fall for you. Sure, if the tech keeps growing, which it will, you can imagine more and more complex cons: stuff that could even have webcam chats with the marks.

        I suggest we treat this the same way we currently treat humans doing it: warn users, block the accounts responsible, and criminally prosecute.

      • HelloHotel@lemm.ee · edited · 10 months ago

        It’s a hard question to answer. There is a good reason, but it’s several paragraphs long, and I likely have gaps in my knowledge and am misguided in places. The reduced idea: the problem is being emotionally open (no emotional guarding or sandboxing/RPing) with a creature that lacks many of the traits required to take on that responsibility. The model is pretrained to perform gestures that make us happy, with no internal state it can consult to ask whether it would enjoy garlic bread given its experience with garlic. It’s an advanced tape recorder, pre-populated with an answer. Or it lies and picks something, because saying “I don’t know” is the wrong response. As opposed to a creature that has some kind of consistent external world and a memory system. Firehosing it with data means less room for artistic intent.

        If you’re sandboxing/roleplaying, there’s no problem.

    • mhague@lemmy.world · edited · 10 months ago

      Interesting idea. We could effectively practice eugenics in a way that won’t make people so mad. They’ll have to contend with ideas like free will and personal responsibility before they can come after our program.

      Let’s make a list of all the “asocials” we want removed from the gene pool and we can get started.