Prompt library

Task-based workflow templates, reusable controls, and curated prompt sets for reliable AI work.

Choose a task

Jump to the workflow task you need. The chooser below is generated from the published workflow sections on this page.

Summarize provided material
Use when you need a clear summary of files, notes, logs, reports, threads, or pasted excerpts without adding unsupported claims.
Open task
Classify provided material into explicit labels
Use when you need to assign one or more predefined labels to provided material without guessing or inventing categories.
Open task
Use web search and cite sources
Use when the answer depends on current or public information and must include citations.
Open task
Review architecture and boundaries
Use when the main question is whether the system structure, boundaries, ownership, and dependency direction are correct.
Open task
Check implementation against official guidance
Use when code or configuration changes must be checked against official framework, library, runtime, API, or platform guidance.
Open task
Review technical writing before publication
Use when publishable technical writing contains non-trivial claims that must be checked before output.
Open task
Literature review
Use when you need a focused scholarly review grounded in inspected sources.
Open task
Evidence synthesis
Use when you need themes, patterns, or positions synthesized across inspected sources.
Open task
Source comparison
Use when you need agreements, disagreements, and meaningful differences compared across inspected sources.
Open task
Gap mapping
Use when you need to identify where the current evidence is thin, missing, inconsistent, or underexplored.
Open task
Annotated bibliography
Use when you need one annotated entry per inspected source.
Open task
Evidence table
Use when you need a structured evidence row for each inspected source or unit of analysis.
Open task

Workflow templates by task

Each section below maps one user task to the workflow templates that fit it.

Analyze provided material

Use these workflows when the model must stay inside files, notes, logs, screenshots, or pasted excerpts you provide.

Research with public sources

Use this workflow when the task depends on current or public information and the answer must include citations.

Review code

Use these workflows for architecture review or implementation review against official documentation.

Review technical writing

Use this workflow when publishable technical writing must be checked before output.

Review scholarly sources

Use these workflows for literature review, synthesis, comparison, gap mapping, annotated bibliography, or evidence tables.

Reusable controls

Use these to add one specific behavior without switching to a full workflow. The controls below are grouped by function so artifact-reading, source verification, and analysis-depth behaviors are not mixed together.

Artifact-reading controls

Use these when the model must fully read and cover user-provided artifacts before answering.

Verification and source controls

Use these when the task needs stronger evidence discipline or must reach outside the provided artifacts.

Analysis-depth controls

Use these when the answer needs an explicit analysis or compliance pass before the final output.

Policy-backed controls

Use these when you need a reusable ruleset or quality gate that can travel across workflows.

Starter bundles

Use these when you want a small, pre-combined set of prompt files. Each bundle below tells you when to use it and what each included file adds.

Starter bundle
Review all provided material first
Use this before answering when the task depends on ZIPs, repos, logs, screenshots, or pasted material and you want complete reading coverage first.
Includes member files