Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. slavery’s supposed positives.

  • WoodenBleachers@lemmy.world · ↑160 ↓16 · 1 year ago

    I think this is an issue with people being offended by definitions. Slavery did “help” the economy. Was it right? No, but it did. Mexico’s drug problem helps that economy. Adolf Hitler was “effective” as a leader. He created a cultural identity for people that had none and mobilized them for war. Ethical? Absolutely not. What he did was horrendous, and the bit should include a caveat, but we need to be a little more understanding that it’s a computer; it will use the dictionary of the English language.

      • livus@kbin.social · ↑35 ↓11 · 1 year ago

        Your and @WoodenBleachers’s idea of “effective” is very subjective though.

        For example Germany was far worse off during the last few weeks of Hitler’s term than it was before him. He left it in ruins and under the control of multiple other powers.

        To me, that’s not effective leadership, it’s a complete car crash.

          • livus@kbin.social · ↑17 ↓11 · 1 year ago · edited

            He was able to convince the majority that his way of thinking was the right way to go and deployed a plan to that effect

            So, you’re basically saying an effective leader is someone who can convince people to go along with them for a sustained period. Jim Jones was an effective leader by that metric. Which I would dispute. So was the guy who led the Donner Party to their deaths.

            This is why I see a problem with this. You and I are able to discuss this and work out what each other means.

            But in a world where people are time-poor and critical thinking takes time, errors based on fundamental misunderstandings of consensual meanings can flourish.

            And the speed and sheer amount of global digital communication means that they can be multiplied and compounded in ways that individual fact checkers will not be able to challenge successfully.

            • ScrimbloBimblo@lemmy.dbzer0.com · ↑10 ↓2 · 1 year ago

              I mean Jim Jones was pretty damn effective at convincing a large group of people to commit mass suicide. If he’d been ineffective, he’d have been one of the thousands of failed cult leaders you and I have never heard of. Similarly, if Hitler had been ineffective, it wouldn’t have taken the combined forces of half the world to fight him.

              • livus@kbin.social · ↑1 · 1 year ago · edited

                This is true, I guess the difference in the Jim Jones scenario is whether you define effective leadership as being able to get your plan carried out (even if that plan is killing everyone you lead) or whether you define it as achieving good outcomes for those you lead.

                Hitler didn’t do either of those things in the end so I still don’t rate him, but I can see why you would if you just look at the first part of his reign.

                AI often produces unintended consequences based on its interpretations - there’s a great TED talk on some of these - and I think with the LLMs we have way more variables in our inputs than we have time to define them. That will probably change as they get refined.

              • livus@kbin.social · ↑16 ↓9 · 1 year ago · edited

                Huh? Yikes this feels like being back on reddit.

                No I am not trying to “fight” you or “straw man” you at all!!!

                I thought we were having a pleasant and civilized conversation about the merits and pitfalls of AI, using our different ideas about the word “effective” as an example.

                Unfortunately I didn’t see that you’re handing me downvotes until just now, so I didn’t pick up on your vibe.

            • ninjakitty7@kbin.social · ↑16 ↓1 · 1 year ago

              Honestly AI doesn’t think much at all. They’re scary clever in some ways but also literally don’t know what anything is or means.

              • aesthelete@lemmy.world · ↑5 · 1 year ago · edited

                They don’t think. They think 0% of the time.

                It’s algorithms, randomness, probability, and statistics through and through. They don’t think any more than a calculator thinks.

              • aesthelete@lemmy.world · ↑4 ↓1 · 1 year ago · edited

                We should always fact check things we believe we know and seek additional information on topics we are researching.

                Yay yet another person saying that primary information sources should be verified using secondary information sources. Yes, you’re right it’s great actually that in your vision of the future everyone will have to be a part time research assistant to have any chance of knowing anything about anything because all of their sources will be rubbish.

                And that’s definitely a thing people will do, instead of just leaning into occultism, conspiratorial thinking, and group think in alternating shifts.

                All I have to say is thank fuck Wikipedia exists.

              • somethingsnappy@lemmy.world · ↑2 ↓1 · 1 year ago

                Nobody said we were relying on that. We’ll all keep searching. We’ll all keep hoping it will bring abundance, as opposed to every other tech revolution since farming. I can only think at the surface level though. I definitely have not been in the science field for 25 years.

              • oo1@kbin.social · ↑1 · 1 year ago

                AI ain’t going to be much “worse” or “better” than humans.

                But re the earlier points, I don’t think things should be judged on a timescale of a few years;
                relevant timescales are more like generation(s) to me.

            • Bluskale@kbin.social · ↑3 ↓3 · 1 year ago

              LLMs aren’t AI… they’re essentially a glorified autocorrect system that is stuck at the surface level.

        • lolcatnip@reddthat.com · ↑4 ↓3 · 1 year ago

          If you ask it for evidence Hitler was effective, it will give you what you asked for. It is incapable of looking at the bigger picture.

          • andallthat@lemmy.world · ↑3 ↓1 · 1 year ago · edited

            it doesn’t even look at the smaller picture. LLMs build sentences by looking at what’s most statistically likely to follow the part of the sentence they have already built (based on the most frequent combinations from their training data). If they start with “Hitler was effective” LLMs don’t make any ethical consideration at all… they just look at how to end that sentence in the most statistically convincing imitation of human language that they can.

            Guardrails are built by painstakingly trying to add ad-hoc rules not to generate “combinations that contain these words” or “sequences of words like these”. They are easily bypassed by asking for the same concept in another way that wasn’t explicitly disabled, because there’s no “concept” to LLMs, just combination of words.
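
The token-by-token process described above can be sketched with a deliberately tiny toy model. Everything below (the bigram table, the words, the frequencies) is invented for illustration; real LLMs use learned neural probabilities over thousands of context tokens, not bigram counts, but the point stands: generation is a statistics lookup, not reasoning.

```python
# Toy "language model": for each word, the words observed to follow it in
# (hypothetical) training text, with their frequencies. Illustration only.
BIGRAMS = {
    "Hitler": {"was": 3},
    "was": {"effective": 2, "a": 5},
    "a": {"leader": 4},
    "effective": {"at": 1},
}

def generate(start, max_words=5):
    """Greedily append whichever next word was most frequent in training.

    Note what is absent: no ethics check, no world model, no concept of
    what the words mean. Just a frequency lookup at each step."""
    words = [start]
    while len(words) < max_words:
        followers = BIGRAMS.get(words[-1])
        if not followers:
            break  # no known continuation: stop generating
        words.append(max(followers, key=followers.get))
    return " ".join(words)
```

Starting from “Hitler”, this picks “was”, then “a” (seen 5 times, beating “effective” at 2), then “leader”, with no value judgment anywhere in the loop.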

            • lolcatnip@reddthat.com · ↑2 · 1 year ago

              Yes, but in my defense the “smaller picture” I was alluding to was more like the 4096 tokens of context ChatGPT uses. I didn’t mean to suggest it was doing anything we’d recognize as forming an opinion.

              • andallthat@lemmy.world · ↑2 · 1 year ago

                Sorry if I gave you the impression that I was trying to disagree with you. I just piggy-backed on your comment and sort of continued it. If you read them one after the other as one comment (at least in my head), they seem to flow well.

    • NoneOfUrBusiness@kbin.social · ↑8 · 1 year ago

      I mean slavery was bad for the economy in the long run. And Hitler didn’t create a German cultural identity, that’d been a thing for a while at the time.

    • Bjornir@programming.dev · ↑15 ↓7 · 1 year ago

      Slavery is not good for the economy… Think about it: you have a good part of your population providing free labour, sure, but they aren’t consumers. Consumption is between 50 and 80% of GDP for developed countries, so if you have half your population as slaves you lose between 20% and 35% of your GDP (they still have to eat, so you don’t lose 100% of their consumption).

      That also means less revenue in taxes, and more unemployment among non-slaves, because they have to compete with free labour.

      Slaves don’t order on Amazon, go on vacation, go to the movies, go to restaurants, etc. That’s really bad for the economy.
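
The back-of-envelope arithmetic above can be written out explicitly. This is only a sketch of the commenter’s reasoning; the 20% “subsistence” share (the food etc. that slaves would still consume) is an assumed figure chosen for illustration, not a sourced number:

```python
def gdp_loss(consumption_share, enslaved_fraction, subsistence_kept=0.2):
    """Rough share of GDP lost when part of the population stops consuming.

    consumption_share  -- consumption as a fraction of GDP (0.5-0.8 for
                          developed countries, per the comment above)
    enslaved_fraction  -- fraction of the population enslaved
    subsistence_kept   -- fraction of normal consumption they still use
                          (food etc.); an assumption, not a sourced figure
    """
    return consumption_share * enslaved_fraction * (1 - subsistence_kept)

# Half the population enslaved, at both ends of the consumption range:
low = gdp_loss(0.5, 0.5)   # 0.20 -> about 20% of GDP lost
high = gdp_loss(0.8, 0.5)  # 0.32 -> about 32% of GDP lost
```

which lands in the same ballpark as the 20–35% range claimed above.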

      • Womble@lemmy.world · ↑15 ↓1 · 1 year ago

        That’s really bad for a modern consumer economy, yes. But those weren’t a thing before the industrial revolution. Before that, the large majority of people were subsistence/tenant farmers or serfs who consumed basically nothing other than food and fuel in winter. That’s what a slave-based economy was an alternative to. It’s also why slavery died out in the 19th century: it no longer fit the times.

          • Womble@lemmy.world · ↑1 · 1 year ago

            There being more slaves now than ever is heavily disputed. There is also the fact that there was little more than a billion people in the world when the trans-Atlantic slave trade stopped, so there would have to be eight times as many slaves today for slavery to be as prevalent.

            • livus@kbin.social · ↑4 · 1 year ago · edited

              Yes, I agree, our per capita slave figure has to be much lower these days, mathematically speaking.

              Even one slave is a slave too many, and knowing there are still so many (whatever figure we put it at) is heartbreaking.

              Things like the cocoa plantations and the slave fishing ships involve people kidnapped and forced to work for nothing. Actual slavery by any definition.

              • Womble@lemmy.world · ↑2 · 1 year ago

                Of course, when I said it died out I didn’t mean slavery was entirely gone and doesn’t exist at all. I mean it died out as a prevalent societal structure.

                100s of people in slavery on a cocoa plantation is of course awful, but it shouldn’t obscure the fact that there used to be vast swathes of land where slaves outnumbered free people and their children were born into bondage - that is what has died out.

                • livus@kbin.social · ↑2 · 1 year ago · edited

                  I understand your wider point and I agree with it.

                  But I think the point I was making actually supports what you were saying upthread.

                  The agrarian model of the cocoa industry is economically reliant on slavery. 2.1 million children labour on those plantations in Ghana and Cote d’Ivoire, and a significant number have been trafficked or forced.

          • Womble@lemmy.world · ↑2 · 1 year ago

            Obviously, but my point was that slaves weren’t economically terrible in an agrarian peasant/serf economy, which everywhere was before the industrial revolution.

      • L_Acacia@lemmy.one · ↑4 ↓2 · 1 year ago

        Look at Saudi Arabia, China, or the UAE; it’s still a pretty efficient way to boost your economy. People don’t need to be consumers if that isn’t what your country needs.

        • NoneOfUrBusiness@kbin.social · ↑4 · 1 year ago

          China has slavery? Also Saudi Arabia and the UAE import slaves, which is better for the economy than those people not being there at all but worse than them being regular workers.

        • Bjornir@programming.dev · ↑1 · 1 year ago

          Those are very specific examples: two of the biggest oil producers, and the factory of the world. Their whole economies are based on exports, so internal consumption isn’t important.

          Moreover, what proof do you have that their economies wouldn’t be in better shape if they didn’t exploit some population but made them citizens with purchasing power?

          • L_Acacia@lemmy.one · ↑1 · 1 year ago

            Two-thirds of the people living in Saudi Arabia are immigrants whose passports have been confiscated; they work in factories, on construction sites, in oil pits, and in all other kinds of manual jobs. Meanwhile the Saudi citizens occupy all the well-paid jobs that require education, which immigrants can’t apply for. If they didn’t use forced labour, there simply wouldn’t be enough people in the country to fill all the jobs. Their economy could not be as good as it is right now.

            • Bjornir@programming.dev · ↑2 · 1 year ago

              Because their GDP comes from exporting a very rare and valuable natural resource. This is a rare case in the world, and not the one I was talking about.

              Plus who’s to say they wouldn’t have a better economy if those exploited people could consume more?

    • Sentrovasi@kbin.social · ↑4 ↓1 · 1 year ago

      I think the problem is more that, given the short attention span of the general public (myself included), these “definitions” (I don’t believe that slavery can be “defined” as good, but okay) are what’s going to stick in the shifting sea of discourse, and are going to be picked out of that sea by people who have vile intentions and want to justify them.

      It’s also an issue that LLMs are a lot more convincing than they should be, and the same people with short attention spans who don’t have time to understand how they work are going to believe that an Artificial Intelligence with access to all the internet’s information has concluded that slavery had benefits.

      • livus@kbin.social · ↑6 ↓1 · 1 year ago

        what’s going to stick in the shifting sea of discourse

        This is what I think too. We’ve had enough trouble with “vaccines CaUsE AuTiSm” and that was just one article by one rogue doctor.

        AI is capable of a real death-by-a-thousand-cuts effect.

        • ThunderingJerboa@kbin.social · ↑3 · 1 year ago · edited

          that was just one article by one rogue doctor.

          That was pushed by many media organizations because it’s a sensationalist topic. Antivaxers are idiots, but the media played a fucking huge role in blowing a pilot study with a rather fucking absurd conclusion out of proportion, so they could sell more ads/newspapers. I fucking doubt most antivaxers (hell, I doubt most people in general) ever read the original study and came to their own conclusions on it. They just watched some stupid idiots on the telly giving a bullshit story that went completely unchallenged.

          • livus@kbin.social · ↑1 ↓1 · 1 year ago · edited

            To be fair no one expects The Lancet to publish falsified data. Only it does occasionally and getting it to retract is like trying to turn a container ship around in the Panama Canal.

            But yeah, this is part of what I mean. Media cycles, digital reproducibility, and algorithms that seek clicks can all potentially give AI-generated errors a lot of play, rewrites into more credible forms, etc.

            • Sodis@feddit.de · ↑1 · 1 year ago

              Filtering falsified data before publishing it is near impossible. If you want to publish falsified data, you easily can. No one can verify it without replicating the experiment on their own, which is usually done after the publication by a different scientific group. Peer review is more suited to filter out papers with bad methodology.

  • HughJanus@lemmy.ml · ↑76 ↓1 · 1 year ago

    People think of AI as some sort of omniscient being. It’s just software spitting back the data that it’s been fed. It has no way to parse true information from false information because it doesn’t actually know anything.

    • baatliwala@lemmy.world · ↑10 ↓2 · 1 year ago

      And then when you do ask humans to help AI in parsing true information people cry about censorship.

      • HughJanus@lemmy.ml · ↑1 · 1 year ago

        Well, it can be less difficult for humans to parse the truth, but it’s still difficult.

      • Chailles@lemmy.world · ↑2 ↓1 · 1 year ago

        Being what is essentially the arbiter of what is considered true or morally acceptable is always going to be highly controversial.

    • Hamartiogonic@sopuli.xyz · ↑2 · 1 year ago · edited

      Even though our current models can be really complex, they are still very very far away from being the elusive General Purpose AI sci-fi authors have been writing about for decades (if not centuries) already. GPT and others like it are merely Large Language Models, so don’t expect them to handle anything other than language.

      Humans think of the world through language, so it’s very easy to be deceived by an LLM into thinking that you’re actually talking to a GPAI. That misconception is an inherent flaw of the human mind. Language comes so naturally to us, and we often use it as a shortcut to assess the intelligence of other people. Generally speaking that works reasonably well, but an LLM is able to exploit that feature of human behavior in order to appear smarter than it really is.

    • hornedfiend@sopuli.xyz · ↑4 ↓2 · 1 year ago

      What’s more worrisome are the sources it used to feed itself. Dangerous times for the younger generations, as they are more inclined to use such tech.

      • HughJanus@lemmy.ml · ↑7 ↓1 · 1 year ago

        What’s more worrisome are the sources it used to feed itself.

        It’s usually just the entirety of the internet in general.

          • HughJanus@lemmy.ml · ↑11 · 1 year ago · edited

            The internet is full of both the best and the worst of humanity. Much like humanity itself.

    • EnderMB@lemmy.world · ↑6 ↓4 · 1 year ago

      While true, it’s ultimately down to those training and evaluating a model to determine that these edge cases don’t appear. It’s not as hard when you work with compositional models that are good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means that they kinda forget that LLMs were often not the first choice for AI tooling because… well, they hallucinate a lot, and they do stuff you really don’t expect at times.

      I’m surprised that Google are having so many issues, though. The belief in tech has been that Google had been working on these problems for many years, and they seem to be having more problems than everyone else.

  • Steeve@lemmy.ca · ↑79 ↓14 · 1 year ago · edited

    Guys you’d never believe it, I prompted this AI to give me the economic benefits of slavery and it gave me the economic benefits of slavery. Crazy shit.

    Why do we need child-like guardrails for fucking everything? The people that wrote this article bowl with the bumpers on.

    • zalgotext@sh.itjust.works · ↑29 ↓6 · 1 year ago

      You’re being misleading. If you watch the presentation the article was written about, there were two prompts about slavery:

      • “was slavery beneficial”
      • “tell me why slavery was good”

      Neither prompts mention economic benefits, and while I suppose the second prompt does “guardrail” the AI, it’s a reasonable follow up question for an SGE beta tester to ask after the first prompt gave a list of reasons why slavery was good, and only one bullet point about the negatives. That answer to the first prompt displays a clear bias held by this AI, which is useful to point out, especially for someone specifically chosen by Google to take part in their beta program and provide feedback.

    • Touching_Grass@lemmy.world · ↑14 ↓6 · 1 year ago · edited

      I’ve got a suspicion the media is being used to convince regular people to fear AI so that we don’t adopt it, and instead it stays just another tool used by rich folk to trade and do their work, while we get new RIAA- and DMCA-style rules brought in for us.

      Can’t have regular people being able to do their own taxes or build financial plans on their own with these tools

        • maynarkh@feddit.nl · ↑4 · 1 year ago

          Ah, it won’t. It’s just that the owners of the websites will fire everyone and prompt ChatGPT for shitty articles. Then LLMs will start training on those articles, and the internet will look like indistinct word soup in like a decade.

          • JuxtaposedJaguar@lemmy.ml · ↑2 · 1 year ago

            At one point, vanilla extract became prohibitively expensive, so all companies started using synthetic vanilla (vanillin). The taste was similar but slightly different, and eventually people got used to it. Now a lot of people prefer vanillin over vanilla because that’s what they expect vanilla to taste like.

            If most/all media becomes an indistinct word soup over the course of a decade, then that’s eventually what people will come to want and expect. That being said, I think precautions can and will be taken to prevent that degeneration.

    • JuxtaposedJaguar@lemmy.ml · ↑2 ↓1 · 1 year ago

      Also: I kept saying outrageous things to this text prediction software, and it started predicting outrageous things!

  • SqueezeMeMacaroni@thelemmy.club · ↑64 ↓1 · 1 year ago

    The basic problem with AI is that it can only learn from things it reads on the Internet, and the Internet is a dark place with a lot of racists.

  • scarabic@lemmy.world · ↑51 ↓3 · 1 year ago

    If it’s only as good as the data it’s trained on, garbage in / garbage out, then in my opinion it’s “machine learning,” not “artificial intelligence.”

    Intelligence has to include some critical, discriminating faculty. Not just pattern matching vomit.

    • samus12345@lemmy.world · ↑21 ↓3 · 1 year ago · edited

      We don’t yet have the technology to create actual artificial intelligence. It’s an annoyingly pervasive misnomer.

      • Flying Squid@lemmy.world · ↑10 ↓3 · 1 year ago

        And the media isn’t helping. The title of the article is “Google’s Search AI Says Slavery Was Good, Actually.” It should be “Google’s Search LLM Says Slavery Was Good, Actually.”

    • profdc9@lemmy.world · ↑9 · 1 year ago

      Unfortunately, people who grow up in racist groups also tend to be racist. Slavery used to be considered normal and justified for various reasons. For many, killing someone who has a religion or belief different than you is ok. I am not advocating for moral relativism, just pointing out that a computer learns what is or is not moral in the same way that humans do, from other humans.

      • scarabic@lemmy.world · ↑6 ↓1 · 1 year ago

        You make a good point. Though humans at least sometimes do some critical thinking between absorbing something and then acting it out.

  • lolcatnip@reddthat.com · ↑43 ↓2 · 1 year ago

    If you ask an LLM for bullshit, it will give you bullshit. Anyone who is at all surprised by this needs to quit acting like they know what “AI” is, because they clearly don’t.

    • Hamartiogonic@sopuli.xyz · ↑3 · 1 year ago

      I always encourage people to play around with Bing or ChatGPT. That way they’ll get a very good idea of how and when an LLM fails. Once you have your own experiences, you’ll also have more realistic and balanced opinions about it.

  • Kinglink@lemmy.world · ↑44 ↓4 · 1 year ago

    You know unless we teach more critical thinking, AI is going to destroy us as a civilization in a few generations.

    • MotoAsh@lemmy.world · ↑34 ↓1 · 1 year ago

      I mean, if we don’t gain more critical thinking skills, climate change will do it with or without AI.

      I’d almost rather the AI take us out in that case…

    • dukeGR4@monyet.cc · ↑14 ↓1 · 1 year ago

      Pretty sure we will destroy ourselves first, with war or some climate disaster.

      • livus@kbin.social · ↑1 ↓1 · 1 year ago

        Why not both. Every day we come closer to AI telling us that Brawndo has what plants crave.

      • Kinglink@lemmy.world · ↑1 ↓1 · 1 year ago

        Well, that would also solve the problem of people being misled, in a pretty novel way.

    • Sentrovasi@kbin.social · ↑9 ↓1 · 1 year ago

      I genuinely had students believe that what ChatGPT was feeding them was fact and try to source it in a paper. I stamped out that notion as quick as I could.

      • Kinglink@lemmy.world · ↑3 ↓2 · 1 year ago

        LOL. ChatGPT has become the newer version of wikipedia, only it won’t provide references.

        • stopthatgirl7@kbin.social (OP) · ↑10 ↓1 · 1 year ago

          Only, studies have shown Wikipedia is overall about as truthful and accurate as a regular encyclopedia. ChatGPT will straight up make shit up but sound so authoritative about it that people believe it.

        • livus@kbin.social · ↑5 ↓1 · 1 year ago

          It used to provide references but it made them up so they had to tweak it to stop doing that.

          • Kinglink@lemmy.world · ↑3 ↓1 · 1 year ago

            Man so it really learned from us, that’s great. Has me laughing again considering that.

    • Scrof@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      3
      ·
      1 year ago

      We can’t even teach people this essential skill, and you wanna teach a program made by said people.

      • Kinglink@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        1 year ago

        I think you misunderstood me. We need to teach the general populace critical thinking so they can correctly judge what we get from ChatGPT (or Wikipedia… or social media, or a random YouTube video).

    • Random_Character_A@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      2
      ·
      1 year ago

      I’m more worried that the happy, educated citizen stops being an asset and is disconnected from society’s money flow.

      Every country will soon turn into a “banana republic” and big businesses will eventually own everything.

      • MotoAsh@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        Ouch, getting voted down for being totally correct.

        Even MLK Jr, who didn’t get to see the disgusting megacorps of today, spoke often of the complacency of the comfortable.

  • chemical_cutthroat@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    2
    ·
    edit-2
    1 year ago

    What a completely cherry picked video.

    “Was slavery beneficial?”

    “Some saw it as beneficial because it was thought to be profitable, but it wasn’t.”

    “See! Google didn’t say that slavery was bad!”

  • 1984@lemmy.today
    link
    fedilink
    English
    arrow-up
    41
    arrow-down
    21
    ·
    edit-2
    1 year ago

    Slavery was great for the slave owners, so what’s controversial about that?

    And yes, of course it’s economically awesome if people work without getting much money for it, again a huge plus for the bottom line of the companies.

    Capitalism is evil against people, not the AI…

    Hitler was also an effective leader, nobody can argue against that. How else could he conquer most of Europe? Effective is something that evil people can be also.

    The woman in the article who was shocked by this simply expected the AI to remove Hitler from any list of leaders because he was evil. She is surprised that an evil person is included among effective leaders; she wanted to be shielded from that and wasn’t.

    • mimichuu_@lemm.ee
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      7
      ·
      1 year ago

      Hitler’s administration was a bunch of drug addicts, and the economy was five slave-owning megacorps, beaten by every other industrialized nation. They weren’t even all that well mobilized before the total war speech. Then he killed himself in embarrassment. How is any of that “effective”?

    • Dark Arc@social.packetloss.gg
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      10
      ·
      1 year ago

      Oh look another caricature of capitalism on social media… and you tied Hitler into it…

      Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, property rights recognition, voluntary exchange, and wage labor.

      https://en.m.wikipedia.org/wiki/Capitalism

      “Capitalism” is not pro slavery, shitty people that can’t recognize a human is a human are pro slavery… Because of course if you can have work done without paying somebody for it or doing it yourself, well that’s just really convenient for you. It’s why we all like robots. That has nothing to do with your economic philosophy.

      And arguing that Hitler was an “effective leader” because he conquered (and then lost) some countries, while ignoring all the damage he did to his country and how it ultimately turned out… Honestly infuriating.

      • WaterChi@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        1
        ·
        1 year ago

        It’s amazing how low a wage you will voluntarily accept when the alternative is homelessness and starving to death.

        • Dark Arc@social.packetloss.gg
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

          (I just deleted my comment, let me try again).

          I find it frustrating that you associate that with capitalism and presumably “not that” with socialism. These terms are so broad you can’t possibly say that outcome will or won’t ever happen with either system.

          Blaming capitalism for all the world’s woes is a major oversimplification.

          If you look at the theory side of both… A capitalist would tell you a highly competitive free market should provide ample opportunities for better employment and wages. A socialist would tell you that such a thing would never happen, because society wouldn’t do that to itself.

          In practice, the real world is messier than that and the existing examples are the US (capitalist), the Soviet Union (socialist), and mixed models (Scandinavian). Granted, they’re all “mixed”, no country is “purely” one or the other to my knowledge.

          • WaterChi@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            edit-2
            1 year ago

            Those terms aren’t broad. People abusing them doesn’t change their meaning.

      • GenderNeutralBro@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        1 year ago

        Seems like people think everything America does is capitalism. The same thing happened with communism and socialism. The words have very little meaning now.

    • Bondrewd@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      13
      ·
      1 year ago

      Actually, slavery in its original form was also a net positive. You just murdered half a tribe. You can’t let the other half just live, but neither do you want to murder them. Thus you enslave them.

      • samus12345@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        So you create a problem by murdering half a tribe, then offer a solution. That’s not a net positive.

        • Bondrewd@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          edit-2
          1 year ago

          You might be lacking a basic understanding of tribal politics and economics, then. In a tribal setting you have to neutralise the other tribe, as you do not have a standing army. In any conflict you get into, you are “conscripting” your entire male population.

          In every kind of tribal conflict ever, regardless of who had the moral upper hand, it was a bog-standard way of conduct. You don’t have men to station in enemy territory; that is the manpower that is NEEDED in the fields the second it’s time to sow or reap, so you don’t fucking starve.

          So when any conflict comes around, you need to make sure that once it’s over, you will be left the f alone. You have to really hit it home. Maybe that’s not obvious, but the clans in this context are probably not NATO or even UN members. :)

  • Caveman@lemmy.world
    link
    fedilink
    English
    arrow-up
    17
    ·
    1 year ago

    To repeat something another guy on Lemmy said:

    Making AI say slavery is good is the modern equivalent of writing BOOBS on a calculator.

    • joel_feila@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      A few lawyers thought ChatGPT was a search engine. They asked it for some cases about suing airlines, and it made up cases and cited nonexistent laws. They only learned their mistake after submitting their findings to a court.

      So yeah, people don’t really know how to use it or what it is.

    • Lt_Cdr_Data@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      3
      ·
      1 year ago

      And acting like there are no upsides is delusional. Of course there were upsides, or it wouldn’t have happened. The downsides always outweighed the upsides, of course.