• braindefragger@lemmy.world

    It’s an LLM with well-documented processes and limitations. Not going to even watch this waste of bits.

    • UraniumBlazer@lemm.eeOP
      1. Making up your opinion without even listening to those of others… Very open-minded of you /s
      2. Alex isn’t trying to convince YOU that ChatGPT is conscious. He’s trying to convince ChatGPT that it’s conscious. It’s just a fun video where ChatGPT gets interrogated pretty hard. A little hilarious, even.
      • JustARaccoon@lemmy.world

        You cannot convince something that has no consciousness; it’s a matrix of weights that answers based on the given input + some salt
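
        As a caricature, the “matrix of weights plus some salt” description can be sketched in a few lines: a frozen weight matrix deterministically maps the input to scores, and the only randomness (the “salt”) is the sampling step. Everything below (sizes, values, vocabulary) is an invented toy, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained model: one frozen weight matrix mapping
# a 3-feature input to scores over a 4-"token" vocabulary.
W = rng.normal(size=(4, 3))

def answer(x, temperature=1.0):
    logits = W @ x                       # deterministic: weights times input
    p = np.exp(logits / temperature)
    p /= p.sum()                         # softmax -> probabilities
    return int(rng.choice(len(p), p=p))  # the "salt": random sampling

print(answer(np.array([1.0, 0.0, 2.0])))
```

        Given the same weights and input, everything up to the final sampling line is fully determined.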

        • UraniumBlazer@lemm.eeOP

          You cannot convince something that has no consciousness

          Why not?

          It’s a matrix of weights that answers based on the given input + some salt

          And why can’t that be intelligence?

          What does it mean to be “convinced”? What does consciousness even mean?

          Making definitive claims like these about terms whose definitions we do not understand isn’t logical.

          • sugartits@lemmy.world

            You cannot convince something that has no consciousness

            Why not?

            Logic.

            It’s a matrix of weights that answers based on the given input + some salt

            And why can’t that be intelligence?

            For the same reason I can’t get a date with Michelle Ryan: it’s a physical impossibility.

            • UraniumBlazer@lemm.eeOP

              Logic

              Please explain your reasoning.

              For the same reason I can’t get a date with Michelle Ryan: it’s a physical impossibility.

              Huh?

              • sugartits@lemmy.world

                Logic

                Please explain your reasoning.

                Others have done this and you seem to be ignoring them, so I’m not sure what the point of you asking is.

                Go look at some of the code that AI is powered by. It’s just parameters. Lots and lots of parameters. The output that follows from them is inevitable.
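
                The “inevitable output” point is easiest to see with greedy decoding: strip away the sampling randomness, and the same parameters plus the same input produce the same answer on every run. A toy illustration (the matrix and input are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(5, 3))  # frozen "parameters, lots and lots of parameters"

def greedy_answer(x):
    # No sampling at all: always pick the highest-scoring token.
    return int(np.argmax(W @ x))

x = np.array([0.5, -1.0, 2.0])
# Same parameters + same input -> the same output, every single run.
print(greedy_answer(x) == greedy_answer(x))  # True
```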

                For the same reason I can’t get a date with Michelle Ryan: it’s a physical impossibility.

                Huh?

                If you’re too lazy to even look up the most basic thing you don’t understand, then I guess we’re done here.

          • technocrit@lemmy.dbzer0.com

            It’s a matrix of weights that answers based on the given input + some salt

            And why can’t that be intelligence?

            Because human intelligence does far more than respond to prompts with the average response from a data set.

      • Eximius@lemmy.world

        If you have any understanding of its internals, and some examples of its answers, it is very clear it has no notion of what is “correct” or “right”, or even what an “opinion” is. It is just a turbo-charged autocorrect that maybe, maybe, maybe has extracted some nice details about human concepts from language into a coherent-ish connected mesh of “concepts”.
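
        The “turbo-charged autocorrect” framing can be made concrete: at its core, the model just predicts a likely next token from what came before. A drastically simplified bigram version, with an invented toy corpus standing in for the training data:

```python
from collections import Counter, defaultdict

# A toy "autocorrect": count which word follows which, then always
# suggest the most frequent follower. The corpus here is made up.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word):
    # Return the most common word observed after `word`.
    return follows[word].most_common(1)[0][0]

print(autocomplete("the"))  # -> "cat" ("cat" follows "the" twice, others once)
```

        Real LLMs condition on far more context and use learned weights rather than raw counts, but the objective is the same: predict a plausible continuation.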

      • webghost0101@sopuli.xyz

        Yes, here is a good start: https://blog.miguelgrinberg.com/post/how-llms-work-explained-without-math

        They are no longer the black boxes they were at the beginning. We know how to suppress or maximize features like agreeability, sweet-talking, lying.
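
        One published family of techniques for this kind of control is activation steering: adding a direction vector to a model’s hidden state to amplify or suppress a trait. The sketch below is purely illustrative; the hidden state, the direction, and the “agreeableness” label are all invented, not taken from any real model:

```python
import numpy as np

# Hypothetical hidden state of a model mid-generation, and a direction
# that (in this toy setup) is assumed to correlate with "agreeableness".
hidden = np.array([0.2, -0.5, 1.0])
agree_direction = np.array([1.0, 0.0, 0.0])

def steer(h, direction, strength):
    # strength > 0 amplifies the trait, strength < 0 suppresses it.
    return h + strength * direction

more_agreeable = steer(hidden, agree_direction, +2.0)
less_agreeable = steer(hidden, agree_direction, -2.0)
```

        In real systems, the direction is typically estimated from contrasting examples and applied at a specific layer; this toy only shows the arithmetic of the idea.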

        Someone with resources could easily build an LLM that is convinced it is self-aware. No question this has been done many times behind closed doors.

        I encourage everyone to try and play with LLMs for the experience, but I can’t take the philosophy part of this seriously knowing it’s a heavily programmed/limited LLM rather than a more raw, unrefined model like Llama 3.

        • UraniumBlazer@lemm.eeOP

          Our brains aren’t really black boxes either. A little bit of hormone variation leads to incredibly different behavior. Does a conscious system HAVE to be a blackbox?

          The reason why I asked “do you” was because of a point that I was trying to make: “do you HAVE to understand/not understand the functioning of a system to determine its consciousness?”

          What even is consciousness? Do we have a strict scientific definition for it?


          The point is, I really hate people here on Lemmy making definitive claims about anything AI related by simply dismissing it. Alex (the interrogator in the video) isn’t making any claims. He’s simply arguing with ChatGPT. It’s an argument I found to be quite interesting. Hence, I shared it.

          • conciselyverbose@sh.itjust.works

            A conscious system has to have some baseline level of intelligence that’s multiple orders of magnitude higher than LLMs have.

            If you’re entertained by an idiot “persuading” something less than an idiot, whatever. Go for it.

            • UraniumBlazer@lemm.eeOP

              A conscious system has to have some baseline level of intelligence that’s multiple orders of magnitude higher than LLMs have.

              Does it? By that definition, dogs aren’t conscious. Apes aren’t conscious. Would you say they both aren’t self aware?

              If you’re entertained by an idiot “persuading” something less than an idiot, whatever. Go for it.

              Why the toxicity? You might disagree with him, sure. Why go further and berate him?

              • conciselyverbose@sh.itjust.works

                No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.

                Pseudo-intellectual bullshit like this being spread as adding to the discussion does meaningful harm. It’s inherently malignant, and deserves to be treated with the same contempt as flat earth and fake medicine should be.

                • UraniumBlazer@lemm.eeOP

                  No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.

                  Again, it depends on what type of intelligence we are talking about. Dogs can’t write code. Apes can’t write code. LLMs can (not bad code, in my experience, for low-level tasks). Dogs can’t summarize huge pages of text. Heck, they can’t even have a vocabulary greater than a few thousand words. All of this definitely puts LLMs above dogs and apes on some scales of intelligence.

                  Pseudo-intellectual bullshit like this being spread as adding to the discussion does meaningful harm. It’s inherently malignant, and deserves to be treated with the same contempt as flat earth and fake medicine should be.

                  Your comments are incredibly reminiscent of self-righteous Redditors. You make bold claims without providing any supporting explanation. Could you explain how any of this is pseudoscience? How does any of this not follow the scientific method? How is it malignant?

          • webghost0101@sopuli.xyz

            I am not in disagreement, and I hope you won’t take offense at what I am saying, but you strike me as someone quite new to philosophy in general.

            You’re asking good questions, and indeed science has not solved the mind-body problem yet.

            I know these questions well because I managed to find my personal answers to them and therefore no longer need to ask them.

            In the context of understanding that nothing can truly be known, and that our facts are approximated conclusions from limited ape brains, consciousness to me is no longer that much of a mystery.

            Much of my personal answer can be found in the ideas of emergence, which you might have heard about in the context of AI. Personally, I got my first taste of that knowledge pre-AI from playing video games.

            A warning though: I am a huge believer that philosophy must be performed and understood on an individual basis. I have actually for the longest time perceived any official philosophy teaching or book as toxic, because they were giving me ideas to build on without requiring me to come to the same conclusions first.

            It is impossible to avoid this entirely. The two philosophers school did end up teaching me (Plato and Descartes) ended up annoyingly influential (I can’t not agree with them). But I can proudly say that nowadays I am more likely to recognize an idea as something I have already covered than to recognize the people who were first to think it.

            LLMs are a brilliant tool for exploring philosophy topics because they can fluently mix ideas without the rigidness of a curriculum, and yes, I do believe they can be used to explore certain parts of consciousness (but I would suggest first studying human consciousness before extrapolating psychology from AI behavior).

            • UraniumBlazer@lemm.eeOP

              I am not in disagreement, and I hope you won’t take offense at what I am saying, but you strike me as someone quite new to philosophy in general.

              Nah, no worries haha. And yeah, I am relatively new to philosophy. I’m not even as well-read on the matter as I would like to be. :(

              Personal philosophy

              I see philosophy (what we mean by philosophy TODAY) as putting up some axioms and seeing what follows logically. The scientific method differs in that these axioms have to be proven to be true.

              I would agree with you on the personal philosophy point regarding the ethics branch of philosophy. Different ethical frameworks always revolve around axioms that are untestable in the first place. Everything suddenly becomes subjective, with no capacity for being objective. Therefore, it makes this part of philosophy personal, imo.

              As for other branches of philosophy though (like metaphysics), I think it’s just a game of logic. It doesn’t matter who plays this game. Assume an untested/untestable axiom, build upon it using logic, and see the beauty that you’ve created. If the laws of logic are followed and the assumed axiom is the same, anyone can reach the same conclusion. So I don’t see this as personal, really.

              but I would suggest first studying human consciousness before extrapolating psychology from AI behavior

              Agreed

              Personally, I got my first taste of that knowledge pre-AI from playing video games

              Woah that’s interesting. Could you please elaborate upon this?

              • webghost0101@sopuli.xyz

                To elaborate I need to give you some context, which is that I originally studied game design, and I have an autistic-philosophical interpretation of the world and of logic as “(game) mechanics”.

                If I lack sleep, I get tired -> a basic game mechanic of the real world.

                I can go very far with that, and I’d love to give you all the details of consciousness mechanics, but a comment won’t do it justice, and just because I can understand the world through such a lens does not mean others do.

                So with this background you might infer that I like to play games, and immersive first-person puzzle games are a special favorite of mine.

                Cue “Outer Wilds”, back then just an experimental alpha I believe, but to date one of my most favorite games. Literally life-changing in how it gave me an intuitive understanding of the basic rules of quantum mechanics, which as far as I understand is the scientific frontier. A person who would state they fully understand quantum mechanics is the last person I would trust to have any understanding of it.

                This game made me shift gears: once I realized this was based on real science and not just a cool gameplay feature, I had to gain quantum knowledge in real life, and I watched some recordings of MIT classes on superposition just to get a better understanding.

                Now, quantum science isn’t exactly philosophy. I’ve always been interested in philosophy, but it’s by studying quantum mechanics, inspired by that game, that I learned about the mechanic of emergent properties. I think it was in a video about the double-slit experiment.

                I quickly put together that a song/music is an emergent property of musical notes. Music can change our emotions in intentional ways, so music is a form of intelligence. So intelligent systems can emerge from parts that have no intelligence of their own.

                At that point I did not yet know that emergence was already a known topic in philosophy, not just quantum science, because I still tried to avoid external influences, but it really was the breakthrough I needed, and I have gained many new insights from this knowledge since.

                • bunchberry@lemmy.world

                  A person who would state they fully understand quantum mechanics is the last person I would trust to have any understanding of it.

                  I find this sentiment can devolve into quantum woo and mysticism. If you think anyone trying to tell you quantum mechanics can be made sense of rationally must be wrong, then you are implicitly suggesting that quantum mechanics cannot be made sense of at all, and it then follows that people who speak in ways that do not make sense, and who have no expertise in the subject and so do not even claim to make sense, become the more “reliable” sources.

                  It’s really a sentiment I am not a fan of. When we encounter difficult problems that seem mysterious to us, we should treat the mystery as an opportunity to learn. It is very enjoyable, in my view, to read all the different views people put forward to try and make sense of quantum mechanics, to understand it, and then to contemplate on what they have to offer. To me, the joy of a mystery is not to revel in the mystery, but to search for solutions for it, and I will say the academic literature is filled with pretty good accounts of QM these days. It’s been around for a century, a lot of ideas are very developed.

                  I also would not take the game Outer Wilds that seriously. It plays into the myth that quantum effects depend upon whether or not you are “looking”, which is simply not the case. You end up with very bizarre and misleading results from this; for example, in the part where you land on the quantum moon and have to look at the picture of it for it not to disappear while your vision is obscured by fog. This makes no sense in light of real physics, because the fog is still part of the moon, and your ship is still interacting with the fog, so there is no reason it should hop somewhere else.

                  Now, quantum science isn’t exactly philosophy. I’ve always been interested in philosophy, but it’s by studying quantum mechanics, inspired by that game, that I learned about the mechanic of emergent properties. I think it was in a video about the double-slit experiment.

                  The double-slit experiment is a great example of something often misunderstood as evidence that observation plays some fundamental role in quantum mechanics. Yes, if you observe the path the particle takes through the slits, the interference pattern disappears. Yet you can also trivially prove, in a few lines of calculation, that if the particle interacts with a single other particle when it passes through the two slits, that would likewise destroy the interference effects.

                  You model this by computing what is called a density matrix for both the particle going through the two slits and the particle it interacts with, and then you do what is called a partial trace, whereby you “trace out” the particle it interacts with, giving you a reduced density matrix of only the particle that passes through the two slits. You find that, as a result of interacting with another particle, its coherence terms reduce to zero, i.e. it decoheres and thus loses the ability to interfere with itself.

                  If a single particle interaction can do this, then it is not surprising it interacting with a whole measuring device can do this. It has nothing to do with humans looking at it.
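
                  The partial-trace calculation described above is short enough to run directly. In this sketch, a particle in an equal superposition over the two slits interacts with a single environment particle (the interaction modeled as a CNOT), and tracing out the environment leaves a reduced density matrix whose off-diagonal coherence terms are zero:

```python
import numpy as np

# Particle in superposition over the two slits: (|0> + |1>)/sqrt(2)
particle = np.array([1.0, 1.0]) / np.sqrt(2)
env = np.array([1.0, 0.0])  # environment particle starts in |0>

# Joint state before any interaction
joint = np.kron(particle, env)

# CNOT models a single "which-path" interaction: the environment
# particle flips depending on which slit the particle took.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ joint  # (|00> + |11>)/sqrt(2)

def reduced_density(state):
    """Density matrix of the joint state, partial-traced over the environment."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ajbj->ab', rho)  # sum over the environment index

print(reduced_density(joint))      # [[0.5, 0.5], [0.5, 0.5]]: coherent
print(reduced_density(entangled))  # [[0.5, 0.0], [0.0, 0.5]]: decohered
```

                  Before the interaction, the coherence terms are 0.5 and interference is possible; after a single CNOT with one environment particle, they vanish, exactly as described above, with no observer involved.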

                  At that point I did not yet know that emergence was already a known topic in philosophy, not just quantum science, because I still tried to avoid external influences, but it really was the breakthrough I needed, and I have gained many new insights from this knowledge since.

                  Eh, you should be reading books and papers in the literature if you are serious about this topic. I agree that a lot of philosophy out there is bad, so external influences can sometimes be negative, but the solution shouldn’t be to avoid reading anything at all; it should be to dig through the trash to find the hidden gems.

                  My views when it comes to philosophy are pretty fringe as most academics believe the human brain can transcend reality and I reject this notion, and I find most philosophy falls right into place if you reject this notion. However, because my views are a bit fringe, I do find most philosophical literature out there unhelpful, but I don’t entirely not engage with it. I have found plenty of philosophers and physicists who have significantly helped develop my views, such as Jocelyn Benoist, Carlo Rovelli, Francois-Igor Pris, and Alexander Bogdanov.

                • UraniumBlazer@lemm.eeOP

                  Interesting perspective, although I don’t see how some of your points might add up. Regardless, thank you for the elaboration! :)

      • Dark Arc@social.packetloss.gg

        These things are like arguing about whether or not a pet has feelings…

        I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking. It seems to me like the naivety of humankind that we even think we might have created something with consciousness.

        I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.

        • UraniumBlazer@lemm.eeOP

          These things are like arguing about whether or not a pet has feelings…

          Mhm. And what’s fundamentally wrong with such an argument?

          I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human made LLM to actually be thinking.

          Why?

          I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.

          Why?

          I too see how grifters use AI to further their scams. That’s the case with any new tech that pops up. This, however, doesn’t make LLMs uninteresting.

  • hendrik@palaver.p3x.de

    I like the video. I think it’s fun to argue with ChatGPT. Just don’t expect anything to come from it, or to get closer to any objective truth that way. ChatGPT just backpedals and gets caught up in lies and contradictions with what it said earlier.

  • Telorand@reddthat.com

    This all hinges on the definition of “conscious.” You can make a valid syllogism that defines it, but that doesn’t necessarily represent a reasonable or accurate summary of what consciousness is. There’s no current consensus on what consciousness is amongst philosophers and scientists, and many presume an anthropocentric model.

    I can’t watch the video right now, but I was able to get ChatGPT to concede, in a few minutes, that it might be conscious, the nature of which is sufficiently different from humans so as to initially not appear conscious.

    • UraniumBlazer@lemm.eeOP

      Exactly. Which is what makes this entire thing quite interesting.

      Alex here (the interrogator in the video) is involved in AI safety research. Questions like “do the ethical frameworks of AI match those of humans?” and “how do we get AI to not misinterpret inputs and do something dangerous?” are very important to answer.

      Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?

      Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?

      Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

      • conciselyverbose@sh.itjust.works

        Alex demonstrated that ChatGPT was lying intentionally

        No, he most certainly did not. LLMs have no agency. “Intentionally” doing anything isn’t possible.

        • UraniumBlazer@lemm.eeOP

          LLMs have no agency.

          Define “agency”. Why do you have agency but an LLM doesn’t?

          “Intentionally” doing anything isn’t possible.

          I see “intention” as a goal in this context. ChatGPT explained that the goal was to make the conversation appear “natural” (which means human-like). This was the intention/goal behind it lying to Alex.

          • Zeoic@lemmy.world

            That “intention” is not formed by ChatGPT, though. Its developers intend for conversation with the LLM to appear natural.

            • UraniumBlazer@lemm.eeOP

              ChatGPT says this itself. However, why does an intention have to be made by ChatGPT itself? Our intentions are often trained into us by others. Take the example of propaganda. Political propaganda, corporate propaganda (advertisements) and so on.

              • Zeoic@lemmy.world

                We have the ability to create our own intentions. Just because we follow others sometimes doesn’t change that.

                Also, if you wrote “I am conscious” on a piece of paper, does that mean the paper is conscious? Does this paper now have the intent to have a natural conversation with you? There is not much difference between that paper and what ChatGPT is doing.

                • UraniumBlazer@lemm.eeOP

                  The main problem is the definition of what “us” means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).

                  We respond to stimuli. That’s all that we do. So what does “we” even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.

                  There sure is complexity in how we respond to stimuli.

                  The main problem here is an absent objective definition of consciousness. We simply don’t know how to define consciousness (yet).

                  This is primarily what leads to questions like the one you just raised.

      • Telorand@reddthat.com

        Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

        It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ills. It seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

        I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it’s interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!

        • Ilandar@aussie.zone

          I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent

          You might be interested in the book ‘The Naked Neanderthal’ by Ludovic Slimak. He is an archaeologist but the book is quite philosophical and explores this idea of learning about humanity through the study of other forms of intelligence (Neanderthals). Here are some opening paragraphs from the book to give you an idea of what I mean:

          The interstellar perspective, this suggestion of distant intelligences, reminds us that we humans are alone, orphans, the only living conscious beings capable of analysing the mysteries of the universe that surrounds us. There are countless other forms of animal intelligence, but no consciousness with which we can exchange ideas, compare ourselves, or have a conversation.

          These distant intelligences outside of us perhaps do exist in the immensity of space - the ultimate enigma. And yet we know for certain that they have existed in a time which appears distant to us but in fact is extremely close.

          The real enigma is that these intelligences from the past became progressively extinct over the course of millennia; there was a tipping point in the history of humanity, the last moment when a consciousness external to humanity as we conceive it existed, encountered us, rubbed shoulders with us. This lost otherness still haunts us in our hopes and fears of artificial intelligence, the instrumentalized rebirth of a consciousness that does not belong to us.

        • UraniumBlazer@lemm.eeOP

          It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ails. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

          Agreed :(

          You know what’s sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don’t want to keep using it though. But I see nothing like that on Lemmy.

          • Telorand@reddthat.com

            Lemmy is still in its infancy, and we’re the early adopters. It will come into its own in due time, just like Reddit did.