• bionicjoey@lemmy.ca
    9 months ago

    I couldn’t have said it better myself. All of these companies firing people are doing it because they want to fire people. AI is just a convenient excuse. It’s RTO all over again.

    • mriormro@lemmy.world
      9 months ago

      It’s not going to be a convenient excuse. There are swaths of C-suite executives who genuinely believe they can replace their workforce with AI.

      They’re not correct, but that won’t stop them from trying.

      • hamsterkill@lemmy.sdf.org
        9 months ago

        The irony is that AI will probably be able to do the jobs of the c-suite before a lot of the jobs down the ladder.

        • darthelmet@lemmy.world
          9 months ago

          It’s a pretty low bar they have to get over. And hey, they might be even better since the AI would feel the pain of their failures instead of getting a golden parachute.

          • hamsterkill@lemmy.sdf.org
            9 months ago

            I mean, C-suite jobs (particularly CEO) are usually primarily about information coordination and decision-making (company steering). That’s exactly what AI has been designed to do for decades: make decisions based on inputs and rulesets. The recent advancements mean models can be trained on real CEO decisions. The meetings-and-negotiation part of being a C-suite executive (the human-facing stuff) might be the hardest part of the job for AI to replicate.
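            To illustrate the decades-old version of this idea, a rule-based “steering” decision really is just inputs run through a ruleset. This is a hypothetical sketch; the metric names, thresholds, and directives are all made up for illustration:

```python
# Minimal rule-based decision system: inputs + ruleset -> directive.
# All metrics and thresholds here are hypothetical.

def steer_company(metrics: dict) -> str:
    """Pick a directive by checking simple rules against reported metrics."""
    rules = [
        # (condition over metrics, directive to issue if it fires)
        (lambda m: m["revenue_growth"] < 0.0, "cut discretionary spending"),
        (lambda m: m["churn"] > 0.10, "invest in customer retention"),
        (lambda m: m["uptime"] < 0.99, "prioritize reliability work"),
    ]
    for condition, directive in rules:
        if condition(metrics):
            return directive
    return "stay the course"

print(steer_company({"revenue_growth": -0.02, "churn": 0.05, "uptime": 0.999}))
# → cut discretionary spending
```

            Classic expert systems were essentially this with far larger rulesets; the newer twist is learning the rules from data instead of writing them by hand.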

        • agent_flounder@lemmy.world
          9 months ago

          How do you figure that?

          I don’t have a real clear idea what every one of the C suite people do exactly.

          But CIOs seem to set IT strategy and goals at the companies I’ve worked for. Broad technology-related decisions, such as moving to the cloud. So, basically, reading magazines and putting the latest trend into action (/s?). Generative AI could easily replace some of the worst CIOs I’ve encountered lol.

          CEOs seem to make speeches about the company, enact the board’s directives, testify before Congress in some cases, make deals with VC investors, and set overall business strategy. I don’t really see how generative AI takes this job.

          CFO? COO? No fucking clue what they do.

          Curious what others think.

          • ChicoSuave@lemmy.world
            9 months ago

            All C-suite positions are about managing people and project planning. They set initiatives, and metrics to measure success for those initiatives.

            A CEO gives an overall direction for the company and gives the other ELT members their objectives, such as giving the CFO a goal of limiting spending, or the CIO a goal of building user capacity within a specific budget and with X uptime.

            In this age of titles over responsibility, a C-suite position can cover very specific things, like Chief Creative Officer or Chief Customer Officer, so a comprehensive list is difficult. But the key thing is that almost all white-collar organizations look like a pyramid: decisions start at the top and turn into work as they make their way down.

            The senior VPs and directors under those C-levels then come up with a plan for reaching those objectives and relay that plan to the C-level for coordination and for setting expense expectations. After a series of adjustments, or an approval, the project starts. Project scope determines how long it will take and how much it will cost with a set number of people working on it.

            Hopefully this helps explain how C levels interface with the rest of the company.

        • frezik@midwest.social
          9 months ago

          It probably could. The trouble is getting training data for it. If you get that and one company becomes wildly successful off it, stockholders will demand everyone do it.

        • oce 🐆@jlai.lu
          9 months ago

          Not sure; those jobs involve less talking to machines and more talking to humans. I think the jobs that mostly involve talking to machines should be easier to automate first, because machines obey logic. LLMs don’t fit that idea, but they’re just the latest model in the media spotlight; there are many other algorithms better at rational tasks.

      • namingthingsiseasy@programming.dev
        9 months ago

        Well, there’s one good thing that will come out of this: these kinds of idiotic moves will help us figure out which companies have the right kinds of management at the top, and which ones don’t have any clue whatsoever.

        Of course, it will come with the working class bearing the brunt of their bad decisions, but that has always been the case unfortunately. Business as usual…

    • micka190@lemmy.world
      9 months ago

      My dad accidentally bought 2 chargers a few weeks ago. He tried refunding one, and what do you know, the company had fired their support staff and replaced them with AI chat bots. Anyway, the AI looked at his order and helpfully told him he had already returned the product and it had already been refunded, so there was nothing left to do.

      It kept doing this to him every time he tried to return the second charger, and there wasn’t any other way to contact them on their site, so he ended up leaving a 1-star review on their site complaining about the issue. Then an actual person contacted him to get it sorted out.

      This whole AI trend is so fucking stupid.

      • circuscritic@lemmy.ca
        9 months ago

        Break the AI session, and post the screenshots to Twitter.

        For example, get it to detail the ways the company screws over customers, or why it will become a great ally in the genocide yet to come.

        At minimum, you’ll get your refund.

        • errer@lemmy.world
          9 months ago

          But that requires me to have a Twitter account, which I’m not gonna do. Fuck Musk.

          • circuscritic@lemmy.ca
            9 months ago

            Make a throwaway Twitter account for a single customer-service issue. I’ve done it; it’s not hard, especially when dealing with any company large enough to have a social media team. They’ll be monitoring relevant hashtags to internally escalate customer service issues, in order to bring them back in-house and off a public forum.

      • Hamartiogonic@sopuli.xyz
        9 months ago

        An AI like that might have some spicy exploits.

        If you convince a human to give you the password, that’s called social engineering. If you convince an AI to send you free stuff, what kind of engineering is that?

    • Lianodel@ttrpg.network
      9 months ago

      I feel like a large majority of AI problems are really just systemic economic problems below the surface. Not all, but most.