We Asked A.I. to Create the Joker. It Generated a Copyrighted Image.
Artists and researchers are exposing copyrighted material hidden within A.I. tools, raising fresh legal questions.

  • archomrade [he/him]@midwest.social
    9 months ago

    “metadata” is such a pretty word. How about “recipe” instead?

Well, isn’t ‘recipe’ just another one of those pretty words? ‘Metadata’ ties into precedents dealing with computer programs that gather data about copyrighted works (see Authors Guild, Inc. v. HathiTrust and Authors Guild v. Google), but you’re welcome to challenge the verbiage if you don’t like it. Regardless, what we’re discussing is objectively something that describes copyrighted works, not a copy of the works themselves. A computer program that is very good at analyzing textual or pixel data is still only analyzing data; the result is a novel, non-expressive, factual representation of other expressive works, and for that reason it cannot be considered infringement on its own.

    It stores all information necessary to reproduce work verbatim or grab any aspect of it.

This isn’t really true, at least not for the majority of works the model analyzed, but granted. If a person uses a tool to copy another person’s work, it is the person doing the copying, not the tool. I think it is far more reasonable to hold responsible the individual who uses an AI model to infringe a copyright. If someone chooses to author a work with a tool that does the work for them (in part or in whole), it is more than reasonable to expect that individual to check what the tool produces.

All in all, I don’t argue about the legality of AI, but as a professional creative I highlight the ethical (plagiarism) risks that are beginning to arise in the majority of models.

As a professional creative myself, I think this is a load of horseshit. We always hold individual authors responsible for the work they publish, and it should be no different here. That some choose to be lazy and careless is more a reflection on them.

    How sure can we be that the work “we” produced using AI is truly original and not a perfect copy of someone else’s work?

If you have the words to describe a desired image or text response that produces a ‘perfect copy of someone else’s work’, then we have the words to search for that work, too.

    Or should the companies releasing AI models stop adding features and fix that broken foundation first?

    How about we stop expanding the scope of an already broken copyright law and fix that broken foundation first?