Couldn’t you just ask ChatGPT whether it wrote something specific?
Then there was that time a professor tried to fail his whole class because he asked ChatGPT whether it wrote their essays.
https://wgntv.com/news/professor-attempts-to-fail-students-after-falsely-accusing-them-of-using-chatgpt-to-cheat/
Could you please provide a brief overview? This article is not available in my country/region.
It cites this article, which might work for you.
Thank you very much
It doesn’t have “memory” of what it has generated previously, other than the current conversation. The answer you get from it won’t be much better than random guessing.
Maybe it should keep a log of what was generated? Would that even work though?
Ignoring the huge privacy/liability issue… there are other LLMs than ChatGPT.
deleted by creator
The model is only trained to handle 4k tokens, roughly 2,000 words depending on complexity. Even if it had a log of everything it was asked, it wouldn’t be able to use any of it.
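To illustrate the point about context windows: anything that doesn’t fit in the window simply falls out of scope, so the model never sees it at all. A minimal sketch in pure Python (the ~0.75 words-per-token ratio is an assumption for illustration; real tokenizers like OpenAI’s tiktoken count differently):

```python
def approx_tokens(text, words_per_token=0.75):
    """Rough token estimate from word count (illustrative assumption only)."""
    return int(len(text.split()) / words_per_token)

def fit_context(messages, max_tokens=4096):
    """Keep only the most recent messages that fit inside the window.
    Older messages are dropped entirely -- the model never sees them."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if used + cost > max_tokens:
            break  # everything older than this is invisible to the model
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["old message " * 2000, "recent question"]
print(fit_context(history))  # only "recent question" survives
```

This is why a log wouldn’t help on its own: even with a perfect record of past sessions, anything beyond the window would be silently discarded before the model could read it.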
That doesn’t really work because it just says whatever half the time. It’s very good at making stuff up. It doesn’t really get that it needs to tell the truth because all it’s doing is optimising for a good narrative.
That’s why it says slavery is good, because the only people asking that question clearly have an answer in mind, and it’s optimising for that answer.
Also it doesn’t have access to other people’s sessions (because that would be hella dodgy) so it can’t tell you definitively if it did or did not say something in another session, even if it were inclined to tell the truth.
No. The model doesn’t have a record of everything it wrote.
Obviously not. It’s a language generator with a bit of chat modeling and reinforcement learning, not an Artificial General Intelligence.
It doesn’t know anything, it doesn’t retain memory long-term, and it doesn’t have any self-identity. There is no way it could ever truthfully respond “I know that I wrote that”.