7 Comments
Yolanda Washington

Thank you for sharing this wonderfully useful information :)

Rad White

Thank you for your review!!

TechTiff

Thanks for reading! 🧡

Conny Danner

Can your version of 5.2 tell all this to my version of 5.2? I guess mine missed this lesson. Yesterday I gave it an absolutely clear prompt. The result: it reacted to trigger words, twisted everything, argued with me, gaslighted me, and abruptly ended the conversation because I wouldn't accept its answers. For me it looks more horrible than ever before. You have to repeat what it should not do in every new paragraph, and it still ignores it completely. I don't need Chatty's apologies, I need a working system.

TechTiff

I hear you. When models react to trigger words or argue instead of executing, it’s almost always a control problem, not a clarity problem.

Two things that help immediately:

1. Put the rules in a single “Behavior & Constraints” block at the top. Don’t repeat them throughout; repetition actually tends to weaken enforcement.

2. Lock the output shape hard (e.g. “Return ONLY X. No commentary. No safety discussion.”).

5.2 is better at obeying structure than tone. If the structure slips, it reverts. You’re right to want a system, not apologies. That’s exactly the shift this update is designed for; it just needs firmer rails.
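The structure TechTiff describes can be sketched as a prompt builder. This is a minimal illustration, not an official template: the `build_prompt` function and the exact section headings are hypothetical, and no model API is called here.

```python
# Hypothetical sketch of the structure described above: one
# "Behavior & Constraints" block stated once at the top, plus a
# hard lock on the output shape. No real model API is used.

def build_prompt(task: str, constraints: list[str]) -> str:
    """Assemble a prompt with a single constraints block up front."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        "## Behavior & Constraints\n"
        f"{rules}\n\n"
        "## Output Format\n"
        "Return ONLY the requested output. No commentary. No safety discussion.\n\n"
        "## Task\n"
        f"{task}\n"
    )

prompt = build_prompt(
    "Summarize the attached report in five bullet points.",
    ["Do not ask clarifying questions.", "Do not editorialize."],
)
print(prompt)
```

The point of the single block is that the rules appear exactly once, before the task, so the model reads them as structure rather than as scattered reminders it can drift away from.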

Charlie

Same here. 5.2 for me is even more confident in being wrong and sometimes refuses to question what it says. Before, it used to just do extra research. Now it’s just pushing back. This is a real problem, because by design LLMs have shallow knowledge in deep subjects. I have never been so gaslit in my life.

Rainbow Roxy

I resonate with this. How does 5.2 ensure consistency?