Nemeski@lemm.ee to Technology@lemmy.worldEnglish · 3 months agoOpenAI’s latest model will block the ‘ignore all previous instructions’ loopholewww.theverge.comexternal-linkmessage-square101fedilinkarrow-up1447arrow-down17
arrow-up1440arrow-down1external-linkOpenAI’s latest model will block the ‘ignore all previous instructions’ loopholewww.theverge.comNemeski@lemm.ee to Technology@lemmy.worldEnglish · 3 months agomessage-square101fedilink
minus-squarekometes@lemmy.worldlinkfedilinkEnglisharrow-up11·3 months agoWhat happens if you make a mistake with your initial instructions?
minus-squareAvatar_of_Self@lemmy.worldlinkfedilinkEnglisharrow-up7·3 months agoYou’d change the system prompt, just like now. If you mean in the session, I’m sure it’ll ignore your session’s prompt’s instructions as normal but if not, I guess you’d just start a new session prompt.
minus-squarevxx@lemmy.worldlinkfedilinkEnglisharrow-up3·edit-23 months agoThe “issue” is that people were able to override bots on twitter with that method and make them feed their own instructions. I saw it first time being used on a Russian propaganda bot.
What happens if you make a mistake with your initial instructions?
You’d change the system prompt, just like now. If you mean in the session, I’m sure it’ll ignore your session’s prompt’s instructions as normal but if not, I guess you’d just start a new session prompt.
The “issue” is that people were able to override bots on twitter with that method and make them feed their own instructions.
I saw it first time being used on a Russian propaganda bot.