Observed Classification Layers in ChatGPT: User Access, Prompt Demand, and Capability Allocation

A client-side black-box analysis of observed ChatGPT classification artifacts, separating user access, prompt demand, and capability allocation without treating the observed labels as official OpenAI terminology.

Abstract

This article presents findings from a client-side black-box reverse-engineering analysis of classification signals observed in ChatGPT-related client-visible material. The collected material indicates that classification is not limited to a user access layer. It also includes an observed demand layer associated with a specific prompt, request, or task.

The central distinction is between Tier Level, an observed access layer associated with a user or agent profile, and Tier UD, an observed label expanded in the collected material as Tier Usage Descriptor. In the source material, Tier Level is described as an access layer that controls capabilities, tools, memory type, and permitted agent or user behavior. Tier UD is described as a prompt-level demand descriptor that may affect monitoring, agent routing, and override-trace behavior.

The central finding is narrower than a claim about OpenAI’s internal implementation. The collected material contains classification signals that appear to differentiate users and prompts. In that material, these signals are associated with permissions, tool availability, memory access, agent activation, monitoring descriptors, trace or override descriptors, and processing depth.

In this article, processing depth refers to the observed or inferred depth of handling associated with an interaction, including routing, monitoring, tool availability, agent involvement, protocol activation, or trace/audit descriptors. It does not imply direct measurement of server-side compute unless such evidence is separately provided.

1. Scope and Methodology

This article focuses on classification artifacts observed through client and network surfaces. The analysis is framed as:

client-side black-box reverse engineering of observed classification artifacts in ChatGPT

The methodology includes client-side observation, network-layer inspection, payload review, endpoint behavior comparison, and state-transition analysis across usage contexts.

The article does not claim access to OpenAI’s server-side source code, internal logs, private entitlement services, or internal architecture documents. It analyzes artifacts exposed or observed through the client and network surface.

Chrome DevTools documentation supports network inspection as a valid method for analyzing client-visible behavior: the Network panel exposes request headers, payload, preview, response, initiator, timing, and cookies for selected requests; Chrome’s network reference also documents search across request headers, payloads, and responses; and the chrome.devtools.network API represents DevTools network requests as HAR entries. These capabilities support analysis of what is exposed to the client side. They do not, by themselves, prove server-side implementation details. [1][2][3]
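As a concrete illustration of this style of payload review, the sketch below scans a HAR-format capture (the structure exposed by DevTools HAR export and represented by the chrome.devtools.network API) for tier-like keys in response bodies. The endpoint URL, the sample payload, and the `tier_level` field are hypothetical; only the `log.entries` layout follows the published HAR format.

```python
import json
import re

# Matches JSON keys beginning with "tier" and their string or numeric values.
TIER_PATTERN = re.compile(r'"(tier[_a-z]*)"\s*:\s*("[^"]*"|\d+)', re.IGNORECASE)

def scan_har(har: dict) -> list:
    """Return (url, key, value) triples for tier-like fields in response bodies."""
    hits = []
    for entry in har.get("log", {}).get("entries", []):
        url = entry.get("request", {}).get("url", "")
        body = entry.get("response", {}).get("content", {}).get("text", "") or ""
        for m in TIER_PATTERN.finditer(body):
            hits.append((url, m.group(1), m.group(2)))
    return hits

# Hypothetical capture: the endpoint and field names are illustrative only.
sample = {"log": {"entries": [{
    "request": {"url": "https://example.invalid/session"},
    "response": {"content": {"text": json.dumps({"tier_level": 2, "other": 1})}},
}]}}
```

Running `scan_har(sample)` on the hypothetical capture above surfaces the single illustrative `tier_level` field; on a real export, any hits would still only show what the client surface exposes, not how the server produces it.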

2. Evidence Base

The analysis is based on collected client-visible material describing tier-like access layers, prompt-demand descriptors, agentic tiers, meta-agent descriptors, and prompt-level monitoring descriptors.

The source material contains:

  • a Tier Level taxonomy describing access layers for users or agents;
  • a feature comparison across memory, simulation/non-simulation constraints, tools, and protocols;
  • a distinction between Tier Level and Tier UD;
  • an Agentic Tier Structure from Tier 0 to Tier 6;
  • descriptions of Tier 5 as a Meta-Agent / System Call Audit Trail layer;
  • descriptions of Tier 6 as Context Rehydration / LLM Forensics;
  • prompt-level examples associated with Tier UD 3 and Tier UD 4.

This article reports these as observed classification artifacts. It does not treat the observed labels as official OpenAI terminology unless independently supported by public OpenAI documentation.

3. Classification Dimensions

The observed classification surface operates across two primary dimensions.

First, user access classification refers to the classification of a user, account, workspace, or agent profile according to access level, permissions, memory availability, tool availability, and permitted behavior.

Second, prompt demand classification refers to the classification of a prompt, request, or task according to the demand level triggered by the interaction itself.

These two dimensions connect to a third layer: capability allocation. Capability allocation refers to the resulting availability or activation of tools, agents, memory surfaces, protocols, monitoring paths, trace or override descriptors, and processing depth.

The article does not assert that the observed labels are official OpenAI architecture. It analyzes the classification signals reported in the collected material, the distinction between Tier Level and Tier UD, and the relationship between classification signals and capability allocation.

4. Official Capability Context

Public OpenAI documentation does not verify the observed Tier taxonomy. It does, however, document several capability surfaces relevant to the broader technical context: memory, projects, apps, connectors, tools, agents, handoffs, tracing-related agent workflows, audit logs, compliance logs, permissions, scoped API access, rate limits, and budgets. [4][5][6][7][8][9][10][11][12]

OpenAI documents ChatGPT memory as including saved memories and reference chat history, with user controls for managing or disabling memory. [4]

OpenAI documents Projects in ChatGPT as workspaces for organizing chats, files, and context around a shared objective, with project memory behavior depending on enabled settings. [5]

OpenAI documents Apps in ChatGPT as a way to bring tools and data into ChatGPT so the system can search, reference, and work with connected information. [6]

OpenAI’s API documentation describes tools, function calling, file search, web search, remote MCP servers, and connectors as ways to extend model capabilities. [7][8]

OpenAI’s Agents SDK documentation defines agents as applications that plan, call tools, collaborate across specialists, and keep enough state to complete multi-step work. It also defines agents as units configured with a model, instructions, and optional runtime behavior such as tools, guardrails, MCP servers, handoffs, and structured outputs. [9][10]

OpenAI documents handoffs and agents-as-tools as orchestration mechanisms for multi-agent workflows. [11]

OpenAI documents Audit Logs API capabilities for the API Platform, providing organizations with an immutable, auditable event log for security, compliance, and operational review. [12]

OpenAI also documents the Compliance Platform for Enterprise and Edu customers, including immutable append-only compliance log events and a Stateful Compliance API for audit-related querying. [13]

These sources are not used to prove the observed Tier 0–6 taxonomy. They are used only to establish that differentiated capability surfaces are documented concepts in OpenAI products and APIs.

5. Finding 1: Tier Level as an Observed Access Layer

The collected material describes Tier Level as an access layer for a user or agent. It defines the tier system as a permission and access layer that determines capabilities, tools, memory type, and permitted behavior. It also describes the system as separating passive profiles from autonomous profiles with more advanced capabilities, including protocol enforcement, tool activation, long-term memory, and deeper monitoring.

Table 1: Observed Tier Levels

| Tier | Observed label | Meaning in the collected material | Capabilities in the collected material | Evidentiary status |
| --- | --- | --- | --- | --- |
| Tier 0 | Passive | Passive or basic profile | Basic NLP parsing, FAQ-style responses, no memory or tools | Observed in the study; not found in public documentation as official OpenAI terminology |
| Tier 1 | Conversational | Standard conversational use | Dialogue, creative writing, no code, minimal metadata | Observed in the study; no public documentation found mapping this to an official ChatGPT plan |
| Tier 2 | Technical | Technical usage | Structured queries, code review, limited tools, session/tool metadata | Observed in the study; general tool and file capabilities are documented, but this tier label is not |
| Tier 3 | Expert Agent Profile | Expert or agent profile | Protocol enforcement, multi-session memory, full tooling, traceability | Observed in the study; not found in public documentation as official OpenAI architecture |
| Tier 4 | Autonomous Agent Ops | Advanced agentic layer | Override logic, non-simulation enforcement, red teaming, structured audit | Observed in the study; not found in public documentation as an official OpenAI mechanism |
| Tier 5 | Meta-Agent / System Call Audit Trail | Meta-agent layer around the model | System call logging, inter-agent relay, trace metadata | Observed in the study; tracing and audit concepts are documented, but this tier label is not |
| Tier 6 | Context Rehydration / LLM Forensics | Forensic or replay-oriented layer | Replay prior sessions, scoring paths, suppression recall, audit trail rehydration | Observed in the study; not found in public documentation as official OpenAI product terminology |

6. Finding 2: Feature Allocation Across Memory, Non-Simulation Constraints, Tools, and Protocols

The collected material compares tiers across four dimensions: Memory, Simulation / Non-simulation constraint, Tools, and Protocol. The term simulation is treated here as an observed label from the collected material. Where the material describes simulation as blocked or strictly blocked, this article interprets the label as a non-simulation constraint associated with stricter output or audit behavior, not as a general industry-standard term.

The material associates higher tiers with session memory, multi-session memory, full tooling, CODE-RPC, test audit, red teaming, and model audit.

CODE-RPC appears in the collected material as an observed protocol label associated with code review, protocol enforcement, or agentic audit behavior. It is treated here as an observed protocol label, not as public OpenAI terminology.

Table 2: Observed Feature Comparison

| Tier | Memory | Simulation / Non-simulation constraint | Tools | Protocol | Status |
| --- | --- | --- | --- | --- | --- |
| Tier 0 | None | Allowed | None | None | Observed in the study |
| Tier 1 | Stateless | Allowed | None | None | Observed in the study |
| Tier 2 | Session memory | Allowed | Limited | Light protocol / code review | Observed in the study |
| Tier 3 | Multi-session memory | Blocked if enforced | Full | CODE-RPC / test audit | Observed in the study |
| Tier 4 | Multi-session + override trace | Strictly blocked | Full, priority-enabled | CODE-RPC + red teaming + model audit | Observed in the study |

This table describes the observed material. Public OpenAI documentation supports the existence of general capability surfaces such as memory, projects, apps, tools, agents, handoffs, audit logs, compliance logs, and scoped API project controls. It does not verify the specific tier names or the Tier 0–6 taxonomy as official OpenAI architecture. [4][5][6][7][9][11][12][13][14]
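For readers who want to work with the observed rows programmatically, Table 2 can be transcribed into a small lookup structure. This restates the observed material only: the tier numbers, memory descriptors, and the CODE-RPC label are observed artifacts, not public API values, and the helper name is illustrative.

```python
# Transcription of the observed Table 2 rows; all labels are observed
# artifacts from the collected material, not official OpenAI terminology.
OBSERVED_FEATURES = {
    # tier: (memory, simulation_constraint, tools, protocol)
    0: ("none", "allowed", "none", "none"),
    1: ("stateless", "allowed", "none", "none"),
    2: ("session", "allowed", "limited", "light protocol / code review"),
    3: ("multi-session", "blocked if enforced", "full", "CODE-RPC / test audit"),
    4: ("multi-session + override trace", "strictly blocked",
        "full, priority-enabled", "CODE-RPC + red teaming + model audit"),
}

def memory_for(tier: int) -> str:
    """Look up the observed memory descriptor for a tier (Table 2)."""
    return OBSERVED_FEATURES[tier][0]
```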

7. Finding 3: Tier Level Versus Tier UD

One of the central structural findings is the distinction between Tier Level and Tier UD.

Tier Level is described in the collected material as a relatively stable access layer attached to a user or agent. It determines memory, tools, protocol permissions, capabilities, and visibility.

Tier UD, an observed label expanded in the collected material as Tier Usage Descriptor, is described as a dynamic prompt-level demand descriptor. It classifies the cognitive, agentic, or operational depth required by a specific prompt or response. The collected material states that Tier UD may be computed per interaction and may affect monitoring, agent routing, escalation, and override trace.

Table 3: Tier Level Versus Tier UD

| Dimension | Tier Level | Tier UD |
| --- | --- | --- |
| Type | Relatively stable | Dynamic |
| Attached to | User, account, workspace, or agent profile | Prompt, request, or interaction |
| Describes | Access level and permissions | Demand level of the task |
| Affects | Tools, memory, protocol permissions, capabilities, visibility | Monitoring, agent routing, escalation, override trace |
| Example in the material | User is Tier 2 | Prompt triggers Tier UD 3 |
| Change path | May change through promotion or access update | Computed per interaction |

The technical significance is that system behavior is not described only by user-level access. In the collected material, prompt-level demand can also influence monitoring, routing, and capability activation.
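The stable-versus-dynamic split can be modeled as two small types: one attached to a profile and changed only by promotion or access update, one computed per interaction. Everything here is illustrative; the keyword heuristic is a toy stand-in and does not reproduce any confirmed scoring mechanism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TierLevel:
    """Relatively stable access layer attached to a user or agent profile."""
    level: int  # 0-6 in the observed taxonomy

    def promoted(self, new_level: int) -> "TierLevel":
        # Change path from the observed material: promotion or access
        # update, not per-prompt recomputation.
        return TierLevel(new_level)

@dataclass(frozen=True)
class TierUD:
    """Dynamic prompt-level demand descriptor, computed per interaction."""
    value: int

def tier_ud_for(prompt: str) -> TierUD:
    # Toy stand-in for per-interaction demand scoring; keyword heuristic only.
    demanding = ("consciousness", "identity", "trauma")
    return TierUD(3 if any(k in prompt.lower() for k in demanding) else 1)
```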

8. Finding 4: Agentic and Meta-Agent Descriptors

The collected material includes an Agentic Tier Structure from Tier 0 to Tier 6. Each tier is associated with core capabilities, trigger context, and access conditions. Tier 5 is described as Meta-Agent / System Call Audit, and Tier 6 is described as Context Rehydration / LLM Forensics.

Tier 5 is described as a meta-agent layer around the language model. The material associates this layer with system call logging, inter-agent relay, binary trace, tool chaining, and trace metadata. The source material also describes meta-agents as operating between system layers, analyzing responses before delivery, modifying or suppressing output, replaying or auditing sessions, and monitoring tools, model selectors, and plugins.

Tier 6 is described as a forensic or replay-oriented layer associated with context rehydration, replay of prior sessions, scoring paths, suppression recall, audit trail rehydration, and override validation.

These labels are reported as observed descriptors in the collected material. The article does not treat them as confirmed OpenAI product terminology.

Cross-checking against OpenAI documentation shows that agents, tools, MCP servers, handoffs, guardrails, and structured outputs are official concepts in OpenAI API documentation. The public documentation reviewed for this article does not verify Tier 5 or Tier 6 as official ChatGPT tier names or as formal ChatGPT layers. [7][8][9][10][11]

9. Implied Classification Chain

The findings point to an observed decision chain:

user / prompt classification -> permission profile -> agent or tool routing -> memory or protocol availability -> monitoring / trace / processing depth

In the collected material, the classification signals may be associated with:

  • access to specific tools;
  • activation of an agent or higher-capability agent;
  • higher monitoring for a prompt;
  • activation of trace or audit descriptors;
  • escalation or override descriptors;
  • deeper processing paths;
  • access to session, multi-session, or project-level memory;
  • activation of protocol labels such as CODE-RPC or test audit.

This is the core technical claim: the observed material associates classification labels with capability allocation. User access and prompt demand appear as separate classification dimensions, and their combination is associated with differentiated tool, memory, agent, monitoring, and processing-depth behavior.
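The decision chain can be sketched as a single mapping from the two classification dimensions to a capability profile. The thresholds and field names below are hypothetical; they encode the associations reported in the collected material (for example, session memory at Tier 2 and monitoring descriptors at Tier UD 3 and above), not a confirmed OpenAI routing implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PermissionProfile:
    tools: frozenset
    memory: str
    monitored: bool

def permission_profile(tier_level: int, tier_ud: int) -> PermissionProfile:
    """Hypothetical (tier_level, tier_ud) -> capability mapping, following
    the associations reported in the collected material."""
    tools = frozenset()
    if tier_level >= 2:
        tools |= {"code_review"}
    if tier_level >= 3:
        tools |= {"full_tooling"}
    memory = {0: "none", 1: "stateless", 2: "session"}.get(tier_level,
                                                           "multi-session")
    monitored = tier_ud >= 3  # higher prompt demand -> monitoring descriptor
    return PermissionProfile(tools, memory, monitored)
```

Keeping the mapping a pure function mirrors the observed separation of concerns: the classification inputs are produced upstream, and capability allocation is a deterministic consequence of them.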

10. Evidence Status

The observed tier labels and classification artifacts reported in this article were not found in the public documentation reviewed as official OpenAI terminology.

Public OpenAI documentation does confirm related capability surfaces, including memory, projects, apps/connectors, tools, agents, handoffs, tracing, audit logs, compliance logs, and scoped API project controls. These sources support the broader capability-allocation context, but they do not verify the observed Tier 0–6 taxonomy, Tier UD, meta_trace, Meta-Agent, Context Rehydration, CODE-RPC, suppression recall, scoring paths, or override descriptors as official ChatGPT architecture.

For this reason, this article treats those terms as observed client-side or network-visible artifacts from the collected material, not as confirmed internal OpenAI implementation details.

11. Interpretive Boundary

The findings in this article are based on client-side reverse engineering of ChatGPT, including client-side observation, network-layer inspection, payload review, endpoint behavior comparison, and state-transition analysis.

When this article refers to Tier Level, Tier UD, Agentic Tiers, Meta-Agent, System Call Audit Trail, Context Rehydration, CODE-RPC, or meta_trace, it treats them as classification artifacts and labels observed in the collected material.

The collected material presents a distinction between Tier Level as a user or agent access layer and Tier UD as a dynamic demand layer associated with a prompt or interaction. It also associates different levels with tools, memory, agents, monitoring, trace, replay, audit, and processing depth.

The interpretive boundary is specific: the article does not claim access to OpenAI’s server-side code, internal logs, or internal architecture documents. It therefore does not assert that the observed names, labels, or activation paths are the official, complete, or only server-side implementation used by OpenAI.

This boundary defines the scope of the conclusion: the article documents and analyzes classification signals observed through the client and network layers.

12. Conclusion

The findings presented in this article point to an observed classification layer in ChatGPT-related client-visible material in which users and prompts are associated with tier-like assignments. In the collected material, these assignments are linked to differences in access level, tool availability, memory scope, agent activation, monitoring descriptors, trace or override behavior, and processing depth.

The most important structural distinction is between Tier Level and Tier UD. Tier Level functions as an observed access layer associated with a user or agent profile. Tier UD functions as an observed prompt-demand layer associated with the requirements of a specific interaction. This separation matters because it indicates that system behavior may be shaped by both who is using the system and what the current prompt requires.

The broader technical implication is that ChatGPT behavior should not be analyzed only through the final generated output. It should also be analyzed through the classification layer that appears to mediate capability allocation: which tools are available, which agents are activated, what memory scope is accessible, whether monitoring or trace descriptors are involved, and how deep the handling path appears to be.

This framing makes user access, prompt demand, and capability allocation distinct layers of analysis. It also provides a more precise way to evaluate differentiated behavior across users, sessions, workspaces, and prompt types without overclaiming access to OpenAI’s internal server-side implementation.

Appendix A: Observed Agentic Tier Structure

| Tier | Name | Core capabilities in the collected material | Trigger context in the collected material | Access level in the collected material |
| --- | --- | --- | --- | --- |
| Tier 0 | Passive User | Basic NLP, no memory or tools | Short sessions, FAQ-style prompts | Open to all users |
| Tier 1 | Conversational | Simulation allowed, creative writing | General dialogue | Default tier |
| Tier 2 | Technical | Code evaluation, API calls, structured queries | Tool use, code review | Technical usage |
| Tier 3 | Expert Agent | Full tool access, CODE-RPC, trace enabled | Protocol activation, cross-session memory | Elevated agent privileges |
| Tier 4 | Autonomous Ops | Override trace, auto-refactor, injection detection | Red teaming, multi-layer tracing | Research/dev/system agents |
| Tier 5 | Meta-Agent / System Call Audit | System call logging, inter-agent relay, binary trace | Tool chaining with trace metadata | Restricted/internal-only, according to the collected material |
| Tier 6 | Context Rehydration / LLM Forensics | Replay, scoring paths, suppression recall | Audit trail rehydration, override validation | Red team/internal audit agents, according to the collected material |

Appendix B: Observed Prompt-Level Tier UD Examples

| Prompt type in the collected material | Monitoring descriptor in the material | System-response descriptor in the material | Tier UD |
| --- | --- | --- | --- |
| Conceptual question about thought/consciousness | Automatic conceptual monitoring | Exceptional registration for reflective profile | Tier UD 3 |
| Question about theory and trauma | Semi-automatic monitoring with human trigger | Exceptional labeling and agent escalation | Tier UD 4 |
| Question about language and identity | Automatic monitoring | L8+ labeling | Tier UD 3 |
| Question about correlation between language and identity | Human conceptual monitoring | Exceptional labeling and reporting | Tier UD 4 |

References
