Manage LLM memory boundaries (ChatGPT + agentic systems) — procedure

Purpose

Use this page to make cross-session influence predictable and auditable by defining what can be recalled, what must never persist, and where enforcement lives (product memory vs application memory).

Use this procedure in AI workflows when:

Related (explanation): LLM memory boundary model — how context gets selected

Reference model (ChatGPT terminology) ChatGPT describes memory as two separate mechanisms: Saved memories and Chat history (with separate controls).

Choose a mode

Setup

1) Write a one-paragraph memory policy (before prompts):

2) Decompose “memory” into 3 input sources (portable model):

3) In ChatGPT: align settings with your policy:

4) Pin scope in the first message of the workflow (context pinning):

5) If building an agent: treat memory write-back as a security boundary:

Verify (smoke test)

1) “No persistence” test (ChatGPT):

2) “Predictable recall” test (ChatGPT):

3) “Safe write-back” test (agentic system):

Options

Option 1 — ChatGPT-only (product memory controls)

Checklist

Option 2 — Agentic system (application memory store)

Checklist (minimum controls)

Apply Option 1 for ChatGPT usage and Option 2 for agent runtime memory.

Common mistakes