Basically, any time a user prompt homes in on a concept that isn’t well represented in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for.
I’m so happy that the correct terminology is finally starting to take off in replacing ‘hallucinate.’