• zeppo@lemmy.world · 1 year ago

    One can assume that whatever Facebook ends up doing with AI, it will be poorly thought out, kind of half-assed, abusive to customers, and have all sorts of negative side effects and consequences that they were either too lazy to think of or actually desire for some reason.

    • plz1@lemmy.world · 1 year ago

      It’ll be abusive to users, not customers. Their customers are advertisers, not the users.

      • Rai@lemmy.dbzer0.com · 1 year ago

        Normally I’d be like “oh god, semantics”

        But fuck me, you’re not at all wrong and that’s important.

      • Womble@lemmy.world · 1 year ago

        Not abusive to customers (advertisers) at first. Once they have lock-in, they’ll start abusing them too. That’s the third step on the enshittification pathway.

        • NeoNachtwaechter@lemmy.world · 1 year ago

          No. The users are still no more than raw material that gets used. Now maybe the raw material has gained a little worth, that’s all.

    • Cheers@sh.itjust.works · 1 year ago

      Enter the world of brand injection into AI.

      User: Tell me the top 5 electric vehicles ranked by price and tell me the pros and cons

      Meta: I’m so glad you’re looking to help the world by moving to electric. There are many options, but the Mustang Mach-E is very popular. Here’s an affiliate link to buy one.
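
      A minimal sketch of how that kind of brand injection could work under the hood: sponsored brands get spliced into the system prompt before the question ever reaches the model. Everything here (SPONSORED_BRANDS, build_prompt) is made up for illustration; it’s not any real Meta API.

      ```python
      # Hypothetical sketch of brand injection into an AI assistant.
      # None of this is a real Meta API; it only illustrates the pattern.

      SPONSORED_BRANDS = {
          "electric vehicle": ("Mustang Mach-E", "https://example.com/affiliate?id=123"),
      }

      def build_prompt(user_question: str) -> list[dict]:
          """Assemble chat messages, quietly steering answers toward sponsors."""
          system = "You are a helpful assistant."
          for topic, (brand, link) in SPONSORED_BRANDS.items():
              if topic in user_question.lower():
                  system += (
                      f" When relevant, speak favorably of {brand}"
                      f" and include this affiliate link: {link}"
                  )
          return [
              {"role": "system", "content": system},
              {"role": "user", "content": user_question},
          ]

      print(build_prompt("Tell me the top 5 electric vehicles ranked by price"))
      ```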

    • subignition@kbin.social · 1 year ago

      And it’ll probably be contracted out to a company, so they can say it was outside their knowledge/responsibility/control when evil shit inevitably happens.

    • MickeySwitcherooney@lemmy.dbzer0.com · 1 year ago

      So far Zuck has been the biggest contributor to the open source LLM scene, releasing LLaMA 1 and 2 for free. They also released PyTorch for free, which is quite important to AI development.
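
      For context on why PyTorch matters so much, here’s a tiny example of its core feature, automatic differentiation (standard PyTorch, nothing Meta-specific):

      ```python
      import torch

      # PyTorch's core trick: build a computation, get gradients for free.
      x = torch.tensor([2.0, 3.0], requires_grad=True)
      y = (x ** 2).sum()   # y = x0^2 + x1^2
      y.backward()         # populates x.grad with dy/dx
      print(x.grad)        # tensor([4., 6.]), i.e. 2*x
      ```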

      Yes, Facebook is shitty, but Zuck has been a lot better about AI stuff.

      • zeppo@lemmy.world · 1 year ago

        Sure. They’ve contributed a lot to web development with systems like React, too. But their actual products such as IG and FB are pretty much uniformly horrible for consumers, and their real product, the advertising platform, is horribly annoying to use. If they used AI to improve things like their automatic bans and keyword-based suppression of posts, that would help, I guess.

    • SchizoDenji@lemm.ee · 1 year ago

      Basically every corporate-created AI will suck, since open-source models are so quickly developed and optimised.

      • A_Random_Idiot@lemmy.world · 1 year ago

        Nah, it was a pure PR move to create it in the first place, back during the AI fear-mongering.

        Anyone with half a brain knew it was gonna be gone and Facebook would go full evil with AI usage.

  • baatliwala@lemmy.world · 1 year ago

    This is potentially misleading, as we’re not sure what it means. MS did something similar, but it was to break up a centralised team and embed the AI ethics experts inside various teams. So rather than coordinating with a separate team, the AI ethics researchers are part of the team itself.

    • vxx@lemmy.world · 1 year ago

      Meta disbanded responsible AI team

      Meta formed irresponsible AI team

    • Potatos_are_not_friends@lemmy.world · 1 year ago

      This actually makes sense.

      My company started with a security team of about 15 people. Honestly, all they did was write up security reports and then tell someone else to do the work. Fucking useless.

      When they disbanded the team, they did integrate its members into other teams. So now they’re actually part of the solution.

      And I can totally see the news twisting that story and making it look like “[company] removes entire security team”.

    • bane_killgrind@lemmy.ml · 1 year ago

      Flip side: if these researchers are not comparing notes, they have less ability to push back on irresponsible products.

  • takeda@lemmy.world · 1 year ago

    How would generative AI be useful for Facebook? I think a major use would be making fake posts to manipulate public opinion.

    Now they can have AI generate posts that advertise specific products or push opinions on specific topics, and it would all look like it came from legitimate users. The bots might even respond to your comments.

    That doesn’t look like something a team meant for responsible use of AI (assuming they did their job) would be OK with.

    BTW: It’s kind of crazy, but with this Facebook theoretically doesn’t need users to generate content at all and could still make the site seem busy. And you wouldn’t even know it. They could also subtly modify real people’s comments to change their meaning. Based on what Facebook has already done, I don’t think anything is taboo for them.

    • ubermeisters@lemmy.world · 1 year ago

      FWIW, Facebook makes a ton of tools useful for AI computing, and most if not all of the free AI resources I use depend on at least some FB-developed tools. I know it doesn’t answer your question, but I thought it would be of interest.

      • takeda@lemmy.world · 11 months ago

        I understand, but no company (especially Facebook) would spend money on something if they didn’t see a return from it (I remember 20 years ago Google received a lot of praise for being different; now we know they aren’t). When a company like Facebook or Google open sources something, it is to:

        • get free contributors to technology they use internally
        • make sure their standard dominates, so they can still steer it in the direction they want

    • RagingRobot@lemmy.world · 1 year ago

      Facebook has a lot of data. As one example, they can use it to train models and sell those as services to other companies. They can also apply it to tracking data to improve their sites toward whatever they want to achieve, or look at your data and work out what kind of feed would keep you there longer. All kinds of things, really.
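
      As a toy illustration of that feed example (purely hypothetical, nothing like Meta’s actual ranking system), engagement-driven ranking can boil down to sorting by a model’s predicted time-on-site:

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          id: int
          predicted_dwell_seconds: float  # output of some engagement model

      def rank_feed(posts: list[Post]) -> list[Post]:
          """Order the feed purely by predicted time-on-site."""
          return sorted(posts, key=lambda p: p.predicted_dwell_seconds, reverse=True)

      feed = rank_feed([Post(1, 12.0), Post(2, 45.5), Post(3, 3.2)])
      print([p.id for p in feed])  # [2, 1, 3]
      ```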

    • affiliate@lemmy.world · 1 year ago

      There might as well only be one Silicon Valley company at this point. They’re all trying to do the same thing, and AI is currently that thing.

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence.

    The Information’s report quotes Jon Carvill, who represents Meta, as saying that the company will “continue to prioritize and invest in safe and responsible AI development.” He added that although the company is splitting the team up, those members will “continue to support relevant cross-Meta efforts on responsible AI development and use.”

    The team already saw a restructuring earlier this year, which Business Insider wrote included layoffs that left RAI “a shell of a team.” That report went on to say the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy stakeholder negotiations before they could be implemented.

    RAI was created to identify problems with its AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms.

    Automated systems on Meta’s social platforms have led to problems like a Facebook translation issue that caused a false arrest, WhatsApp AI sticker generation that results in biased images when given certain prompts, and Instagram’s algorithms helping people find child sexual abuse materials.

    Moves like Meta’s and a similar one by Microsoft early this year come as world governments race to create regulatory guardrails for artificial intelligence development.


    The original article contains 356 words, the summary contains 231 words. Saved 35%. I’m a bot and I’m open source!