Noticing the “shift” in a ChatGPT answer

Today I had a surprisingly satisfying conversation with ChatGPT. I asked it to design a logic puzzle for me — something like an engineering-flavored investigation with access control, logs, time stamps, and suspicious statements. On the surface, it looked good: the framing was serious, the setup was detailed, and the tone suggested a real deduction problem.

But once I actually read the puzzle carefully, the structure broke down. The early part of the setup pointed toward a strong central idea, yet the later details introduced obvious holes and contradictions that made the whole thing either too easy or not fully coherent. That was the interesting part. The problem was not just that the puzzle was flawed. It was that the model seemed very good at starting in the right direction, then gradually shifted into extending its own earlier setup instead of protecting the overall logic.

That made me realize something more general: with LLMs, the first part of an answer can be tightly aligned with the real question, while the later part becomes more about staying smooth and consistent with what was already written. It can drift from solving into narrating.

I found that observation more valuable than the puzzle itself. It reminded me that a real part of human thinking is not just generating ideas, but checking whether the structure still holds after the first few impressive steps.
