We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
If you want to get into a full blown discussion of whether ChatGPT has “agency” then I’d open the topic of whether humans have “agency” as well. But I don’t see the need here.
These words were perfectly fine labels for describing the behaviour of ChatGPT in this scenario. I’m merely annoyed about how people are jumping on them and going off on philosophical digressions that add nothing.
I think the reason I’m not comfortable with using the term “lying” is because it implies some sort of negative connotation. When you say that someone lies, it comes with an understanding that they made a choice to lie, usually with ill intent. I agree, we don’t need to get into a philosophical discussion on choice and free will. But I think saying something like “GPT lies” is a bit irresponsible for the purposes of a discussion