• Nurse_Robot@lemmy.world · +8/−17 · edited · 1 year ago

    I think the biggest worry for me at this point is what the AI trained on in order to depict these images. It’s not victimless if it needs victims of child abuse to train on

    Edit: really fucking weird I’m getting downvoted for being against AI training on child porn. I’m willing to go down with that ship.

    • Chozo@kbin.social · +24/−1 · 1 year ago

      It knows what naked people look like, and it knows what children look like. It doesn’t need naked children to fill in those gaps.

      Also, these models are trained on images scraped from the clear net. Somebody would have had to manually add CSAM to the training data, which would be easily traced back to them if they did. The likelihood of actual CSAM being included in any mainstream AI’s training material is slim to none.

      • BetaDoggo_@lemmy.world · +6/−1 · 1 year ago

        There is likely some CSAM in most of the models, as filtering it out of a several-billion-image set is nearly impossible, even with automated methods. However, this material likely has little to no effect on outputs, since it’s scarce and was probably tagged incorrectly.

        The bigger concern is downstream users fine-tuning models on their own datasets containing this material. This has been happening for a while, though I won’t point fingers (Japan).

        There’s not a whole lot that can be done about it, but I also don’t think there’s anything that needs to be done. It’s already illegal, and it’s already removed from most platforms semi-automatically. Having more of it won’t change that.

      • Nurse_Robot@lemmy.world · +2/−14 · 1 year ago

        Defending AI-generated child porn is a weird take, and the support you’re receiving is even more concerning

        • Chozo@kbin.social · +9/−1 · 1 year ago

          I’m not defending it, dipshit. I’m explaining how generative AI training works.

          The fact that you can’t see that is what’s really concerning.

    • Thorny_Insight@lemm.ee · +15/−2 · edited · 1 year ago

      AI can generate a picture of an astronaut riding a horse on the moon. It wasn’t trained on pictures of astronauts riding horses on the moon, though.

      really fucking weird I’m getting downvoted for being against AI training on child porn

      Because you made that up. It’s not happening.