Paitho
Pipeline · Stage 09 · Human gate

Every draft
passes a human.

Keyboard-first triage. Approve, edit, or reject. Edits create clean versions. Rejections feed a structured signal back to the prompt registry. Manifesto tenet 03, made operational.

What goes in, what comes out.

Inputs
Draft + Context
  • email_draft.subject, .body, .cta
  • lead.pitchBrief — visible side-by-side for fact-check
  • lead.enrichment.contacts[0] — addressed person
  • workspace.reviewerId — who is reviewing
  • pack.feedbackTaxonomy[] — controlled rejection reasons
Outputs
Decision
  • review.decision — approve | edit | reject
  • review.editedDraft — full revision body if edited
  • review.diff — char-level diff vs original draft
  • review.rejectionReasons[] — taxonomy-bound
  • review.reviewedAt, .reviewerId, .durationMs
  • review.feedsBackTo — email_prompt_id for self-teaching
Throughput target: ~15 sec/lead  PLACEHOLDER
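The output fields above can be sketched as a record type. A minimal sketch in Python: field names mirror the list above, but the types, defaults, and class name are assumptions, not the product's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Sketch of the Stage 09 output record. Field names come from the
# outputs list; types and defaults are assumptions.
@dataclass
class ReviewDecision:
    decision: str                       # "approve" | "edit" | "reject"
    reviewer_id: str                    # workspace.reviewerId
    reviewed_at: datetime
    duration_ms: int
    feeds_back_to: str                  # email_prompt_id, for self-teaching
    edited_draft: Optional[str] = None  # full revision body if edited
    diff: Optional[str] = None          # char-level diff vs original draft
    rejection_reasons: list[str] = field(default_factory=list)  # taxonomy-bound
```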

Keyboard-first.
No mouse necessary.

J — next draft
K — previous draft
Enter — approve as-is
E — open inline editor
X — reject (taxonomy picker)
F — jump to facts panel
? — show all shortcuts
— shortcuts are global; the cursor never has to leave the queue

The reviewer sees one draft at a time, with the pitch brief and the enriched contact open in a side panel for fact-checking. Enter approves and advances. E drops into an inline editor that produces a clean revision; the diff against the original draft is stored as the learning signal. X rejects with a taxonomy pick — never freeform text — so rejection reasons aggregate cleanly across leads and packs. The whole loop is built around the fact that a reviewer should be able to clear two hundred drafts in roughly an hour.
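The loop above can be sketched as a key dispatcher. A hedged sketch in Python: the bindings come from the shortcut map below, but the queue handling and return shape are assumptions for illustration.

```python
# Minimal sketch of the keyboard-first triage loop. Key bindings
# mirror the shortcut map; everything else is an assumption.
def triage(queue, keys):
    """Walk a draft queue with single-key commands; return decisions."""
    decisions = {}
    i = 0
    for key in keys:
        k = key.lower()
        if k == "j" and i < len(queue) - 1:
            i += 1                               # next draft
        elif k == "k" and i > 0:
            i -= 1                               # previous draft
        elif key == "Enter":
            decisions[queue[i]] = "approve"      # approve as-is
            i = min(i + 1, len(queue) - 1)       # ...and advance
        elif k == "e":
            decisions[queue[i]] = "edit"         # open inline editor
        elif k == "x":
            decisions[queue[i]] = "reject"       # taxonomy picker
    return decisions
```

Two Enter presses clear a two-draft queue: `triage(["d1", "d2"], ["Enter", "Enter"])` approves both, which is the rhythm the throughput target depends on.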

Edits version cleanly. Each edited draft is a new row keyed by the original email_prompt_id, with a parent pointer to the unedited version. The edit diff and the reviewer id are stored alongside. After a quarter of edits, the pack's prompt team can ask the registry "what does my reviewer change about openers in legal_v17" and get a corpus of paired before/after text — the kind of training data that actually moves draft quality.
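The versioning scheme reads like an append-only table. A sketch with a hypothetical in-memory store (function and field names are illustrative, not the product's API):

```python
# Hypothetical store illustrating the versioning scheme: each edit is
# a new row keyed by the original email_prompt_id, with a parent
# pointer back to the unedited version.
rows = []

def save_row(prompt_id, parent_row_id, body, diff, reviewer_id):
    row = {
        "row_id": len(rows),
        "email_prompt_id": prompt_id,   # original prompt id, never mutated
        "parent": parent_row_id,        # None for the unedited draft
        "body": body,
        "diff": diff,                   # char-level diff, stored alongside
        "reviewer_id": reviewer_id,
    }
    rows.append(row)
    return row["row_id"]

def before_after_corpus(prompt_id):
    """Paired (original, edited) bodies for one prompt version."""
    by_id = {r["row_id"]: r for r in rows}
    return [
        (by_id[r["parent"]]["body"], r["body"])
        for r in rows
        if r["email_prompt_id"] == prompt_id and r["parent"] is not None
    ]
```

`before_after_corpus("legal_v17")` is the "what does my reviewer change about openers" query: a list of before/after pairs ready for prompt-team review.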

Rejections are the most valuable signal in the system. A taxonomy-bound reject (tone_off, facts_wrong, cta_weak, credit_link_off) carries a structured, comparable failure mode. The prompt registry can roll those up by version, by pack, by reviewer, and surface saturating failures for the next prompt bump. The reviewer is, in a real sense, the editor of the next prompt.
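Because rejection reasons are taxonomy keys rather than prose, the rollup is a counting problem. A sketch, assuming the write-back record shape shown below:

```python
from collections import Counter

# Sketch of the registry rollup: taxonomy-bound rejection codes
# aggregated per prompt version. Record shape assumed from the
# write-back schema.
def rollup_by_prompt(reviews):
    """Count rejection reasons per email_prompt_id."""
    counts = {}
    for r in reviews:
        c = counts.setdefault(r["feedsBackTo"], Counter())
        c.update(r.get("rejectionReasons", []))
    return counts
```

A saturating failure is then just the top of one counter: if `facts_wrong` dominates the rollup for a prompt version, that is the headline for the next prompt bump.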

Shortcut map and feedback taxonomy.

config · review_ui · legal +
--- shortcuts ---
{
  "approve":      ["Enter"],
  "edit":         ["e", "E"],
  "reject":       ["x", "X"],
  "next":         ["j", "ArrowDown"],
  "prev":         ["k", "ArrowUp"],
  "factsPanel":   ["f", "F"],
  "showHelp":     ["?"]
}

--- feedback taxonomy (per pack) ---
[
  "tone_off",            # voice doesn't match brand profile
  "facts_wrong",         # claim contradicts pitchBrief
  "specific_obs_weak",   # observation isn't specific enough
  "cta_weak",            # question is generic
  "credit_link_off",     # wrong link for audience
  "wrong_addressee",     # named leader not the buyer
  "language_drift"       # language wrong
]

--- write-back ---
{
  "decision":         "approve"|"edit"|"reject",
  "diff":             string|null,            # on edit
  "rejectionReasons": string[],               # taxonomy keys only
  "reviewerId":       uuid,
  "feedsBackTo":      email_prompt_id          # self-teaching attribution
}

# PLACEHOLDER — taxonomy editable per pack in app
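The write-back contract is enforceable with a small guard. A sketch, assuming the schema and the legal-pack taxonomy above (the validation rules — at least one code on reject, taxonomy keys only, no codes otherwise — are inferred from this section, not a documented API):

```python
# Sketch of a write-back guard: structured codes are required on
# reject and must come from the pack's taxonomy. Rules inferred
# from the schema above; not the product's actual validator.
TAXONOMY = {"tone_off", "facts_wrong", "specific_obs_weak", "cta_weak",
            "credit_link_off", "wrong_addressee", "language_drift"}

def validate_writeback(payload, taxonomy=TAXONOMY):
    if payload.get("decision") not in {"approve", "edit", "reject"}:
        return False
    reasons = payload.get("rejectionReasons", [])
    if payload["decision"] == "reject":
        # taxonomy keys only, and at least one is required
        return bool(reasons) and set(reasons) <= taxonomy
    return not reasons  # approve/edit carry no rejection codes
```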

What can break.
And what catches it.

Risk
Rubber-stamping

Reviewer hits Enter on every draft without reading. Reputation cost on send.

Mitigation

Per-reviewer throughput dashboard. Approve-rate > 95% with zero edits triggers a calibration prompt; admin can mandate sample re-review.
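The calibration trigger reduces to one predicate over a reviewer's recent decisions. A sketch using the thresholds stated above (the 95% figure is from this section; the function name and input shape are assumptions):

```python
# Sketch of the rubber-stamp check: approve rate above 95% with
# zero edits flags the reviewer for calibration. Threshold from
# the mitigation text; input shape assumed.
def needs_calibration(decisions, threshold=0.95):
    """decisions: list of 'approve' | 'edit' | 'reject' strings."""
    if not decisions:
        return False
    approvals = decisions.count("approve")
    edits = decisions.count("edit")
    return edits == 0 and approvals / len(decisions) > threshold
```

A single edit in the window is enough to clear the flag, which matches the intent: the signal is "approving without reading", not "approving a lot".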

Risk
Lost edit context

Reviewer rewrites the draft heavily; the original signal is gone.

Mitigation

Edits are stored as a new versioned row keyed back to original prompt id. Char-level diff persisted alongside both bodies.
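One way to produce the persisted char-level diff, sketched with Python's standard `difflib`; the opcode triples here are a stand-in for whatever encoding the product actually stores:

```python
import difflib

# Sketch of char-level diffing for the edit signal. difflib opcodes
# are an assumption standing in for the product's diff format.
def char_diff(original, edited):
    """Non-equal opcodes: (op, original_span, edited_span)."""
    sm = difflib.SequenceMatcher(a=original, b=edited)
    return [
        (op, original[i1:i2], edited[j1:j2])
        for op, i1, i2, j1, j2 in sm.get_opcodes()
        if op != "equal"
    ]
```

Storing the opcodes alongside both full bodies is deliberately redundant: the diff is the cheap aggregate signal, the bodies are the ground truth if the diff format ever changes.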

Risk
Freeform reject text

Rejections come back as prose, not aggregable.

Mitigation

Reject UI is a multi-select on the pack's feedbackTaxonomy. Optional note is freeform; the structured codes are required.

Review a queue of drafts.
In a sandbox, in 2 minutes.

Try the keyboard loop. J / Enter / E / X. Feel the rhythm.