Confidently wrong, round two.
Asked ChatGPT to compare two orange circles. Without hesitation, it told me:
👉 “They’re the same size.”
But… they’re clearly not.
What happened? My guess: it recognized the setup from its training data as the Ebbinghaus illusion (where surrounding circles trick your brain into seeing two identical circles as different sizes). So it confidently inferred that's what I must be asking about, even though this time the circles are obviously different.
This is a great reminder:
• LLMs don’t truly “see” or “understand” images.
• They rely on patterns in training data.
• And when they guess wrong, they do so with confidence.
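Want to try it yourself? Here's a minimal sketch (assuming matplotlib; all sizes and positions are illustrative choices of mine, not the exact image I used) that draws an Ebbinghaus-style layout where the two orange circles really are different sizes:

```python
# Reproduce the test image: an Ebbinghaus-style layout where the two
# orange center circles are clearly DIFFERENT sizes.
import math
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

def ebbinghaus(ax, center, target_r, surround_r, ring_r, n=8):
    """Draw one orange target circle ringed by n gray surround circles."""
    ax.add_patch(Circle(center, target_r, color="orange"))
    for i in range(n):
        angle = 2 * math.pi * i / n
        x = center[0] + ring_r * math.cos(angle)
        y = center[1] + ring_r * math.sin(angle)
        ax.add_patch(Circle((x, y), surround_r, color="gray"))

fig, ax = plt.subplots(figsize=(8, 4))
# Left: small orange target with large surrounds (the classic illusion setup).
ebbinghaus(ax, (2.0, 2.0), target_r=0.30, surround_r=0.55, ring_r=1.2)
# Right: a clearly LARGER orange target with small surrounds.
ebbinghaus(ax, (6.0, 2.0), target_r=0.75, surround_r=0.25, ring_r=1.3)
ax.set_xlim(0, 8)
ax.set_ylim(0, 4)
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("circles.png", dpi=150)
```

Feed the saved image to your favorite vision model and ask which orange circle is bigger. The layout screams "Ebbinghaus," but the sizes don't.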