Practical guidance for reliable AI use, prompt engineering, and agent workflows.
How-to guides, prompt assets, policies, articles, and reference material for daily AI work, verification, and AI agent and security practice.
Choose a path
Start with the workflow that matches your goal.
Use AI in daily work
Safer day-to-day AI use, file-only workflows, evidence boundaries, and fact-checking.
Build AI workflows
Architecture, tool use, memory boundaries, and implementation review.
Audit AI systems
Boundary review, evidence checks, and security-oriented validation.
Research & publish
Evidence-gated literature review, writing, and publishable outputs.
Baseline for every track
Apply these controls whether you are using AI for daily work, building systems, auditing systems, or publishing.
- Decide which sources are allowed for factual claims and when the workflow must fail closed (see the sketch after this list).
- Run the fact-checking procedure before output, review, or publication.
- Check architecture, regressions, and implementation quality before shipping changes.
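A minimal sketch of what "fail closed" means in practice, in Python. The `ALLOWED_SOURCES` allowlist, `Claim` shape, and `gate` function here are hypothetical illustrations, not this site's actual policy objects or procedure.

```python
# Minimal fail-closed evidence gate (illustrative sketch only).
# ALLOWED_SOURCES and Claim are hypothetical stand-ins for whatever
# allowlist and claim representation your workflow actually uses.
from dataclasses import dataclass, field

ALLOWED_SOURCES = {"docs.python.org", "www.rfc-editor.org"}  # hypothetical allowlist


@dataclass
class Claim:
    text: str
    citations: list[str] = field(default_factory=list)  # hostnames of cited sources


def gate(claims: list[Claim]) -> list[Claim]:
    """Pass claims through only if every one is backed by an allowed source.

    An unverified claim raises instead of being emitted: the workflow
    fails closed rather than producing output with unsupported facts.
    """
    for claim in claims:
        if not any(host in ALLOWED_SOURCES for host in claim.citations):
            raise ValueError(f"fail closed: unverified claim: {claim.text!r}")
    return claims
```

The point of the shape, not the specifics: the unverified path raises rather than returning, so silence or a missing citation is never mistaken for verification.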
Baseline policies (rules)
Browse policies
Rules referenced across procedures and prompt library assets.
Objective Technical Baseline Rules (No Simulation) — policy
Non-simulative, objective operating profile with fail-closed posture.
Facts-only: Authoritative sources required (citations required)
Evidence boundary: world-claims require authoritative sources with citations.
Web Verification & Citations Policy
How to browse/verify and cite sources in outputs.
Engineering Quality Gate Policy (Architecture & Best Practices)
Architecture checks + best practices + regression-minded review gate.
Articles (explanation)
Browse all articles
Context and threat models (not procedures).
Why “Almost Human, But Not Quite” Feels Wrong: From Clowns to AI-Generated Images and Text
Two separable mechanisms behind the "something feels off" reaction: cue-level perceptual mismatch (uncanny/cue conflict) versus AI-label effects on credibility and sharing.
Theory of mind in LLMs — what benchmarks test (and what they don’t)
An evidence-anchored overview of how ToM is defined in psychology, how it is operationalized for LLM evaluation, and what current results do and do not justify.
Sycophancy in LLM Assistants: What It Is, How Training Creates It, and Why It Shows Up in Production
A technically grounded explanation of sycophancy (belief-agreement bias): what it is, what the evidence supports about prevalence, how preference optimization can produce it, and what changes in training and release practice reduce it.
Scope
What this site covers (and what it intentionally does not).
Focus
- Tool-using LLM and agent systems: orchestration, boundaries, verification, and audits.
- Reusable assets: policies, workflow templates, prompt components, procedures, and diagrams.
Not covered
- Vendor-internal or proprietary details.