• flamingo_pinyata@sopuli.xyz · ↑ 95 · ↓ 9 · 4 days ago

    But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

    But then there’s people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

    • Opinionhaver@feddit.uk · ↑ 35 · ↓ 3 · 4 days ago

      How do you even have a conversation without quitting in frustration from its obviously robotic answers?

      Talking with actual people online isn’t much better. ChatGPT might sound robotic, but it’s extremely polite, actually reads what you say, and responds to it. It doesn’t jump to hasty, unfounded conclusions about you based on tiny bits of information you reveal. When you’re wrong, it just tells you what you’re wrong about - it doesn’t call you an idiot and tell you to go read more. Even in touchy discussions, it stays calm and measured, rather than getting overwhelmed with emotion, which becomes painfully obvious in how people respond. The experience of having difficult conversations online is often the exact opposite. A huge number of people on message boards are outright awful to those they disagree with.

      Here’s a good example of the kind of angry, hateful message you’ll never get from ChatGPT - and honestly, I’d take a robotic response over that any day.

      I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

      • musubibreakfast@lemm.ee · ↑ 21 · ↓ 1 · 4 days ago

        Hey buddy, I’ve had enough of you and your sensible opinions. Meet me in the parking lot of the Walgreens on the corner of Coursey and Jones Creek in Baton Rouge on April 7th at 10 p.m. We’re going to fight to the death, no holds barred, shopping cart combos allowed, pistols only, no scope 360, tag team style, entourage allowed.

      • pinkfluffywolfie@lemmy.world · ↑ 6 · 3 days ago

        I agree with what you say, and I for one have had my fair share of shit asses on forums and discussion boards. But this response also fuels my suspicion that my friend group has started using it in place of human interaction to form thoughts, opinions, and responses during our conversations. Almost like an emotional crutch for conversation, but not exactly? It’s hard to pinpoint.

        I’ve recently been tone-policed a lot more over things that in normal real-life interactions would be lighthearted or easy to ignore and move past - I’m not shouting obscenities or calling anyone names; it’s just harmless misunderstandings that come from the tone-deafness of text. I’m talking about things like using a cute emoji or saying words like “silly willy” becoming offensive to people I know personally. It wasn’t until I asked a rhetorical question to spark a thoughtful conversation that I had to think about what was even happening - someone responded with an answer taken literally from ChatGPT, providing a technical definition of something that was part of my question. Your answer has finally started linking things for me; for better or for worse, people are using it because you don’t receive offensive or inflammatory answers. My new suspicion is that some people are now taking those answers and applying that expectation to people they know in real life, and when someone doesn’t respond in the same predictable manner as AI, they become upset and further isolated from real-life interactions or text conversations with real people.

        • Opinionhaver@feddit.uk · ↑ 5 · edited · 3 days ago

          I don’t personally feel like this applies to people who know me in real life, even when we’re just chatting over text. If the tone comes off wrong, I know they’re not trying to hurt my feelings. People don’t talk to someone they know the same way they talk to strangers online - and they’re not making wild assumptions about me either, because they already know who I am.

          Also, I’m not exactly talking about tone per se. While written text can certainly have a tone, a lot of it is projected by the reader. I’m sure some of my writing might come across as hostile or cold too, but that’s not how it sounds in my head when I’m writing it. What I’m really complaining about - something real people often do and AI doesn’t - is the intentional nastiness. They intend to be mean, snarky, and dismissive. Often, they’re not even really talking to me. They know there’s an audience, and they care more about how that audience reacts. Even when they disagree, they rarely put any real effort into trying to change the other person’s mind. They’re just throwing stones. They consider an argument won when their comment calling the other person a bigot gets 25 upvotes.

          In my case, the main issue with talking to my friends compared to ChatGPT is that most of them have completely different interests, so there’s just not much to talk about. But with ChatGPT, it doesn’t matter what I want to discuss - it always acts interested and asks follow-up questions.

          • pinkfluffywolfie@lemmy.world · ↑ 4 · 3 days ago

            I can see how people would seek refuge talking to an AI given that a lot of online forums have really inflammatory users; it’s one of the biggest downfalls of online interaction. I’ve had similar thoughts myself - without knowing me, strangers could see something I write as hostile or cold, but it’s really more often friends who turn blind to what I’m saying and project a tone that likely isn’t there to begin with. They never used to do that, but in the past year or so it’s gotten to the point where I frankly just don’t participate in our group chats and really only talk one-on-one over text or in person. I feel like I’m walking on eggshells; even if I were to show genuine interest in the conversation, it gets taken the wrong way. That being said, I think we’re coming from opposite ends of a shared experience but are seeing the same thing - we’re just viewing it differently because of what we’ve each experienced. This gives me more to think about!

            I feel a lot of similarities in your last point, especially about having friends with wildly different interests. Most of mine don’t care to reach out to me beyond a few things here and there; they don’t ask follow-up questions, and they’re certainly not interested when I do speak. To share what I’m seeing: my friends are using these LLMs to the point where, if I don’t respond in the same manner or structure, I’m either ignored or told I’m not providing the response they wanted. This is where tone comes in for me, because ChatGPT still keeps a measured tone of sorts toward the user - it’s calm, non-judgmental, and friendly. With that, the people in my friend group who use it heavily have become more sensitive even to how others in the group, like me, talk, to the point where they take it upon themselves to correct my speech because its cadence, tone, and/or structure doesn’t fit a blind expectation I wouldn’t know about. I find it concerning because, setting aside the people who are intentionally mean, for interpersonal relationships it’s creating an expectation that can’t be met by a human being. We have emotions and conversation patterns that vary, and we’re not always predictable in what we say - which can suck when you want someone to be interested in you and have meaningful conversations, but it doesn’t tend to pan out. And I feel that. A lot, unfortunately. AKA I just wish my friends cared sometimes :(

            • Opinionhaver@feddit.uk · ↑ 1 · edited · 3 days ago

              I’m getting the sense here that you’re placing most - if not all - of the blame on LLMs, but that’s probably not what you actually think. I’m sure you’d agree there are other factors at play too, right? One theory that comes to mind is that the people you’re describing probably spend a lot of time debating online and are constantly exposed to bad-faith arguments, personal attacks, people talking past each other, and dunking - basically everything we established is wrong with social media discourse. As a result, they’ve developed a really low tolerance for it, and the moment someone sounds even remotely like those negative encounters, they automatically label them as “one of them” and switch into lawyer mode - defending their worldview against claims that aren’t even being made.

              That said, since we’re talking about your friends and not just some random person online, I think an even more likely explanation is that you’ve simply grown apart. When people close to you start talking to you in the way you described, it often means they just don’t care the way they used to. Of course, it’s also possible that you’re coming across as kind of a prick and they’re reacting to that - but I’m not sensing any of that here, so I doubt that’s the case.

              I don’t know what else you’ve been up to over the past few years, but I’m wondering if you’ve been on some kind of personal development journey - because I definitely have, and I’m not the same person I was when I met my friends either. A lot of the things they may have liked about me back then have since changed, and maybe they like me less now because of it. But guess what? I like me more. If the choice is to either keep moving forward and risk losing some friends, or regress just to keep them around, then I’ll take being alone. Chris Williamson calls this the “Lonely Chapter” - you’re different enough that you no longer fit in with your old group, but not yet far enough along to have found the new one.

              • pinkfluffywolfie@lemmy.world · ↑ 2 · 3 days ago

                I think it has a unique influence that will continue to develop, but I don’t think LLMs are the only influence to blame. There’s a lot that can influence this behavior, like the theory you’ve described. Off the top of my head, limerence is something that could be an influence. I know it’s common for people to experience limerence for things like video game characters, and they sometimes project expectations onto others to behave like those characters. Other things could be childhood trauma, glass child syndrome, isolation from peers in adolescence, asocial tendencies - the list is long, I’d imagine.

                For me, the self journey started young and never ends. It’s just a part of the human experience - relationships come and go, then sometimes they come back, etc. I will say, though, that with the people I’m talking about, this is a novel experience for me. It’s hard to navigate, and as a result I’m finding it actually isolating. Like I mentioned before, I can have one-on-one chats, and when I see them in person, we do activities and have fun! But if any level of discomfort is detected, the expectation is brought on, and by the time I realize what’s happening they’re offering literal formatted templates on how to respond in conversations. Luckily it’s not everyone in our little herd that behaves this way, but the people who do it the most, I know for sure, use ChatGPT heavily for these types of discussions - they recommended I start doing the same not too long ago. Nonetheless, I did like this discussion; it offers a lot of perspective on how different factors influence our behavior with each other.

    • Telorand@reddthat.com · ↑ 45 · ↓ 1 · 4 days ago

      In some ways, it’s like Wikipedia but with a gigantic database of the internet in general (stupidity included). Because it can string together confident-sounding sentences, people think it’s this magical machine that understands broad contexts and can provide facts and summaries of concepts that take humans lifetimes to study.

      It’s the conspiracy theorists’ and reactionaries’ dream: you too can be as smart and special as the educated experts, and all you have to do is ask a machine a few questions.

    • glitchdx@lemmy.world · ↑ 18 · ↓ 1 · 4 days ago

      The fact that it’s not a person is a feature, not a bug.

      OpenAI has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don’t like it. It now pretends to have emotions and uses the slang of brainrot influencers - very “fellow kids” energy. It’s also become a sycophant and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that they might be working.

    • saltesc@lemmy.world · ↑ 9 · 4 days ago

      Yeah, the more I use it, the more I regret asking it for assistance. LLMs are the epitome of confidently incorrect.

      It’s good fun watching friends ask it stuff they’re already experienced in. Then the penny drops.

    • Victor@lemmy.world · ↑ 5 · ↓ 1 · 4 days ago

      At first glance I thought you wrote “inmate objects”, but I was not really relieved when I noticed what you actually wrote.