Not sure if you’re disagreeing or agreeing with me. What I mean is, if an LLM’s output is in practice indistinguishable from human output, then fingerprinting a few popular services just creates a false sense of security, since malicious actors certainly won’t fingerprint theirs.
Isn’t it better to let humanity accept that an LLM’s output can be identical to a person’s, and to always be skeptical?
To be honest with you, I’m torn on the subject.
I don’t think it’s fair to abandon the idea that a reliable fingerprint could differentiate between some hypothetical LLM/NLP AI and humans. I haven’t been convinced it’s impossible to purposefully tweak a model so that it inherently produces a fingerprint every single time, precisely to help with that differentiation.
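For what it’s worth, the kind of purposeful tweak I have in mind is roughly the “green list” watermarking idea: bias sampling toward a pseudo-random slice of the vocabulary keyed off the previous token, then look for that bias later. A very rough Python sketch, with all names and numbers made up for illustration (not any real library’s API):

```python
import hashlib
import random

def greenlist(prev_token_id: int, vocab_size: int, fraction: float = 0.5) -> set:
    """Derive a pseudo-random 'green' subset of the vocabulary from the previous token.

    The seed depends only on the previous token, so a verifier who knows the
    scheme can recompute the same subset later.
    """
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(vocab_size * fraction)
    return set(rng.sample(range(vocab_size), k))

def biased_sample(logits: list, prev_token_id: int, bias: float = 2.0) -> int:
    """Pick the highest-scoring token after nudging 'green' tokens upward.

    Greedy stand-in for real sampling; the point is only that the bias
    leaves a statistical trace a detector can count.
    """
    green = greenlist(prev_token_id, len(logits))
    adjusted = [score + (bias if i in green else 0.0) for i, score in enumerate(logits)]
    return max(range(len(adjusted)), key=lambda i: adjusted[i])

def detect(token_ids: list, vocab_size: int) -> float:
    """Return the fraction of tokens that fall in their predecessor's green list.

    Unwatermarked text should hover near 0.5; watermarked text should sit
    noticeably higher.
    """
    hits = sum(
        1 for prev, cur in zip(token_ids, token_ids[1:])
        if cur in greenlist(prev, vocab_size)
    )
    return hits / max(1, len(token_ids) - 1)
```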
I just think we need more time, so I guess I’m abstaining?