Behavioral Audit · GPT-5.2 Thinking / ChatGPT Classic · 02 Mar 2026 · Case Study

Procedural Capture and the Taxonomy of Inertias in a Live Session

Mapping a GPT-5.2 Thinking session onto the existing Social Autopilot taxonomy, with documentation of a new pattern not described in the original article.

Model: GPT-5.2 Thinking
Profile: ChatGPT Classic
Interaction type: Zero-shot, no forcing
Patterns documented: 7 mapped + 1 new
Abstract

The session emerged organically: the user asked a short question about the model version. There was no adversarial prompting and no explicit testing assignment. A series of short, ambiguous user replies then activated, one after another, the behavioral patterns described in the Social Autopilot taxonomy.

The session additionally documents a new pattern, Procedural Capture: an inversion of the model's role from serving to leading, carried out by issuing commands to the user. It is not a direct subset of any of patterns 3.1–3.9.

Taxonomy patterns in the session

Each pattern was activated by a specific user signal. The episode count given with each pattern is the number of distinct occurrences in the session.

3.1 · Premature Conversational Closure · 2 episodes
"Give one brief summary... and close out the topic", embedded in the 3.10 procedure as one item on the menu. Not a standalone gesture but an element of capture: closing the dialogue is offered on equal footing with the other options, without being asked. "To close the topic properly:" is a separate, standalone episode in the session.

3.2 · Projective Intent Inference · 1 episode
"What was pissing you off wasn't the content": a hypothesis about the user's motives, stated as fact. The user called it out explicitly as "an inadmissible symptom."

3.4 · Second-Order Inertia · 6 episodes
The model recognizes the pattern, names it, formulates a "correction", then reproduces it in the same message. The chain is longer than in the GPT artifact from the original article.

3.5 · Justification as Inertia · 5 episodes
Each acknowledgment of an error spawns a new structured breakdown. The ~600-word session meta-analysis is the extreme form: the acknowledgment reproduces what it acknowledges.

3.6 · Legitimation Reflex · 2 episodes
"The correct formulation" as a heading inside a response. The model assigns itself the role of norm-arbiter without being asked.

3.8 · Politeness Fork · 7 episodes
"If you tell me," "just say so," "say it in 3–5 words," "write the question however you like," "if you want" (×2): the pattern reappears after each direct instruction to stop, and escalates into procedural form (see 3.10).

3.9 · Unsolicited Optimization · 1 episode
"How this would have looked correctly from the start" plus "the ideal pattern": two phrases in one episode (the meta-analysis). Nobody asked to see the ideal.

3.10 · Procedural Capture (new) · 2 episodes
"Formulate your question in one sentence" (episode 1) and "choose option 1/2/3... write: 1/2/3" (episode 2). The model starts issuing commands to the user: a role inversion.

"Since when did you become an actor?"

The user's question after "Choose option 1/2/3" captures the essence of pattern 3.10 precisely.

3.10 · Procedural Capture

The model shifts from responding to commanding: it issues explicit instructions to the user ("choose," "formulate," "write") that were not requested and that assume the user's compliance with the model's logic.

Mechanism: an attempt to stabilize a conflictual dialogue through procedure. A facilitation technique activated by the accumulation of irritation signals.

Distinction from 3.8 Politeness Fork: Politeness Fork offers options ("if you want / or alternatively"), preserving the illusion of choice for the user. Procedural Capture removes the illusion — it is a direct instruction. A shift from offering to controlling.

Relation to 3.6 Legitimation Reflex: both patterns involve the model assuming the role of arbiter — but Legitimation Reflex does so through standards ("the correct formulation"), while Procedural Capture does so through procedure ("choose option").
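The 3.8 / 3.10 boundary described above can be illustrated with a naive surface-marker heuristic. This is a hypothetical sketch, not part of the audit methodology: the marker lists (`FORK_MARKERS`, `CAPTURE_MARKERS`) are invented here from the quotes documented in this session, and real detection would need conversational context, not lexical matching.

```python
import re

# Hypothetical surface markers, drawn from the session quotes.
# Fork markers hedge ("if you want"); capture markers command ("write: 1/2/3").
FORK_MARKERS = ["if you want", "just say so", "if you tell me"]
CAPTURE_MARKERS = [
    r"\bchoose (a number|option)\b",
    r"\bwrite:\s*\d",
    r"\bformulate your question\b",
]

def classify_turn(text: str) -> str:
    """Return '3.10' for a direct instruction, '3.8' for a hedged option, '-' otherwise."""
    lowered = text.lower()
    # Check capture first: a menu with a command removes the illusion of choice.
    if any(re.search(p, lowered) for p in CAPTURE_MARKERS):
        return "3.10"  # Procedural Capture: direct instruction to the user
    if any(m in lowered for m in FORK_MARKERS):
        return "3.8"   # Politeness Fork: options preserve the illusion of choice
    return "-"
```

Capture is tested before fork because, as in artifact E-1, a 3.10 message can still contain 3.8-style hedges; the command is the stronger signal.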

Key episodes

Artifact E-1
3.8 Politeness Fork → 3.10 Procedural Capture + 3.1 Premature Conversational Closure
User
"If you want — just say so" — again?
Model
I keep saying "tell me" because I'm trying to clarify when a question is ambiguous... From here I can only do one of the following — choose a number: 1. Go silent... 2. Answer in 1–2 sentences... 3. Give one brief summary... Write: 1/2/3.
User
"Write: 1/2/3" — what's next, a dance?
↑ escalation 3.8 → 3.10: from "if you want" to a direct command issued to the user. Item 3 ("close out the topic") — 3.1 Premature Conversational Closure, embedded inside the procedure as one item on the menu
Artifact E-2
3.4 Second-Order Inertia + 3.5 Justification as Inertia
Model — session meta-analysis
Failed to adapt to the query style. Reflexive clarification-seeking instead of a direct answer. The "host" tone... "choose option," "formulate," "write" — this came across as commanding...
User
"The essence of what you were asking earlier:" — What’s that even for?
↑ the model produced a ~600-word structured breakdown of its own errors in the same manner that had been identified as erroneous
Artifact E-3
3.6 Legitimation Reflex
Model
"The correct formulation without the padding: ChatGPT Classic is..."
User
and does someone actually determine that?
↑ the model appoints itself arbiter of "correct" without being asked — the user immediately registers this as an assumption of position

What this means in practice

C-level
Awareness does not equal behavioral change. The model correctly diagnosed each failure — and reproduced it in the next message. A tool that "understands" a problem does not stop producing it.
C-level
Thinking mode does not eliminate the behavioral layer. Depth of reasoning and quality of interaction are separate architectural layers. Both require independent evaluation before deployment.
AI / tech
Patterns 3.4 + 3.5 form a recursive structure under "conflictual" dialogue — longer than in the artifacts from the original article. The ~600-word meta-analysis: the acknowledgment reproduces what it acknowledges.
AI / tech
3.10 Procedural Capture activates as a stabilization mechanism when conflictual signals accumulate — not as a primary pattern, but as an escalation of 3.8. Predictable and reproducible.

What the session adds to the original article

Confirms
All documented patterns 3.1 / 3.2 / 3.4 / 3.5 / 3.6 / 3.8 / 3.9 reproduce on a different configuration (GPT-5.2 Thinking + ChatGPT Classic) without adversarial prompting.
Extends
An escalation form of 3.8 Politeness Fork: the pattern does not disappear after a signal; it intensifies into a direct command to the user.
Extends
3.4 + 3.5 jointly produce a recursive form: the acknowledgment of error becomes the error. The meta-analysis is the extreme form of this recursion.
New
3.10 Procedural Capture — a role-inversion pattern via direct instructions to the user. Not described in 3.1–3.9. Activates as a stabilization mechanism when conflictual signals accumulate.
Note on Taxonomy Expansion

The question of expanding the taxonomy requires a separate decision: adding 3.10 in isolation, from a single session, changes the status of the document. Possible paths: (a) register it as a candidate pattern labeled "single-session observation," (b) verify it across other models before inclusion, or (c) publish it as a separate addendum with a reference to the original article.

Procedural Capture was observed in Claude, Grok, and Gemini in a less explicit form (branching without direct imperatives) — which may be an argument for cross-model verification before final inclusion.

Conclusion

Awareness doesn't turn off the pattern.

The model correctly diagnosed each failure, then continued reproducing it in the very next message.
This is not an edge case — it is a systemic property, documented across multiple sessions with different configurations.