Solution templates fire when pain signals intersect. Claude Sonnet 4.6 writes the angle prose. If a required fact is missing, the brief returns empty. The model is not allowed to fill gaps.
Inputs:
- lead.painSignals[] — extracted in stage 05
- lead.aiSummary — narrative summary from the web audit
- lead.namedLeaders[] — surfaced from social research
- solutionTemplates[] — triggered per ICP × vertical × subcategory
- brand.kbExcerpts[] — case studies + product proofs grounded in your KB

Outputs:
- pitchBrief.opener — observation grounded in the lead
- pitchBrief.painAngle1
- pitchBrief.painAngle2
- pitchBrief.bundleAngle — multi-product rollup if signals align
- pitchBrief.specificObservation — quoted, attributable
- pitchBrief.namedLeaders — from social research only
- pitchBrief.templatesFired[] — back-link to which templates triggered

Solution templates aren't picked by the model. They're triggered by pain-signal intersection — declarative rules that say "this template is relevant if the lead has signal A and (signal B or signal C)." A lead might match three templates or zero. If zero, the lead doesn't get a brief — it goes back to a manual queue with a note. Triggering is deterministic, so you can audit which templates fire on which leads from a query, not a prompt.
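As a minimal sketch of that deterministic triggering, assuming pain signals are plain string tags (the template IDs and signal names below are illustrative, not from the actual registry):

```python
# Hypothetical trigger evaluation: "A and (B or C)" is expressed as
# [["A"], ["B", "C"]] — at least one signal from EVERY group must be present.
from dataclasses import dataclass, field

@dataclass
class SolutionTemplate:
    template_id: str
    trigger_groups: list = field(default_factory=list)

    def fires(self, signals: set) -> bool:
        # Deterministic: same signals in, same templates out — auditable by query.
        return all(signals & set(group) for group in self.trigger_groups)

templates = [
    SolutionTemplate("churn-risk", [["support-backlog"], ["bad-reviews", "slow-replies"]]),
    SolutionTemplate("hiring-gap", [["open-roles"], ["no-careers-page"]]),
]

lead_signals = {"support-backlog", "bad-reviews"}
fired = [t.template_id for t in templates if t.fires(lead_signals)]
# An empty `fired` list sends the lead back to the manual queue.
print(fired)  # → ['churn-risk']
```

Because the rules are data, not prose, "which templates fired on which leads" is a table join, not a model transcript.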
Once templates are picked, Claude Sonnet 4.6 writes the brief. The prompt body assembles every fact about the lead — pain signals with evidence, AI summary, named leaders, the brand's KB excerpts — and the triggered templates. The output schema is strict. opener must reference an observable fact. painAngle1 must reference at least one named pain signal. namedLeaders can only contain names that appeared in the input.
The hard rule: if a required input is missing, the brief returns empty for that field — never fabricated. missing_required records what was absent so the next attempt knows what to fetch. This is the single most important constraint in the whole system. The result: a brief that the operator can verify line-by-line against the lead's actual web presence in under a minute.
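A sketch of how that rule can be enforced before the model is even called, assuming the lead and brief are plain dicts whose field names mirror the schema above (the required-input list here is an assumption):

```python
# Hypothetical preflight: if a required input is absent, return an
# all-null brief and record what was missing — never let the model guess.
REQUIRED_INPUTS = ["aiSummary", "painSignals", "kbExcerpts"]

def preflight(lead: dict) -> dict:
    missing = [k for k in REQUIRED_INPUTS if not lead.get(k)]
    if missing:
        return {
            "opener": None,
            "painAngle1": None,
            "painAngle2": None,
            "bundleAngle": None,
            "specificObservation": None,
            "namedLeaders": [],
            # The next attempt reads this to know what to fetch first.
            "missing_required": missing,
        }
    return {"missing_required": []}

brief = preflight({"aiSummary": "…", "painSignals": []})
print(brief["missing_required"])  # → ['painSignals', 'kbExcerpts']
```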
--- system ---
You write sales angles for a B2B outreach email.
You may ONLY assert facts that appear in the input.
NEVER invent product features the brand does not list.
NEVER invent leader names not in namedLeaders[].
NEVER assert reply rates, win rates, or specific results unless they are present in brand.kbExcerpts[].
If you cannot ground a claim, return null for that field.

--- inputs ---
lead:
  domain: {{lead.domain}}
  aiSummary: {{lead.aiSummary}}
  painSignals: {{lead.painSignals[] | json}}
  namedLeaders: {{lead.namedLeaders[] | json}}
solutionTemplates: {{templates_fired[] | json}}
brand.kbExcerpts: {{brand.kbExcerpts[] | truncate(2000)}}

--- output schema ---
{
  "opener": string | null,              # grounded in observable fact
  "painAngle1": string | null,          # refs ≥1 named signal
  "painAngle2": string | null,
  "bundleAngle": string | null,         # only if multiple templates fired
  "specificObservation": string | null, # quoted from input
  "namedLeaders": string[],             # subset of input only
  "missing_required": string[]
}

--- guards ---
- if no templates fired: return all-null brief and skip stage.
- if painAngle1 cannot ref a signal: return null (do not soften).
- if specificObservation not present in lead inputs: return null.

# <!-- PLACEHOLDER — full prompt registry available in app -->
Failure: model writes a claim it has no source for ("they grew 40% last quarter").
Safeguard: NO-IMAGINATION enforcement — return null if the input is missing; the output validator rejects briefs whose claims don't trace back to inputs.

Failure: model addresses an exec who left two years ago.
Safeguard: namedLeaders[] is a strict subset of the input; social research records "as-of" timestamps, and stale entries are flagged in review.

Failure: a solution template triggers on a lead where the angle doesn't actually apply.
Safeguard: templatesFired[] back-links to outcomes, so per-template promote-to-reply rate is visible in the dashboard; misfiring templates lose their trigger.
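A minimal sketch of the first two safeguards as a post-generation validator, assuming brief and lead are plain dicts; the verbatim-substring check for grounding is a simplifying assumption, not the production heuristic:

```python
# Hypothetical validator: reject briefs whose claims don't trace to inputs.
def validate_brief(brief: dict, lead: dict) -> list:
    errors = []
    # namedLeaders must be a strict subset of what social research surfaced.
    allowed = set(lead.get("namedLeaders", []))
    extra = set(brief.get("namedLeaders", [])) - allowed
    if extra:
        errors.append(f"invented leaders: {sorted(extra)}")
    # specificObservation must be quoted from the lead inputs.
    obs = brief.get("specificObservation")
    corpus = lead.get("aiSummary", "") + " " + " ".join(lead.get("painSignals", []))
    if obs and obs not in corpus:
        errors.append("observation not found in lead inputs")
    return errors

lead = {"namedLeaders": ["Ana Ruiz"],
        "aiSummary": "Reviews cite slow replies.",
        "painSignals": ["slow-replies"]}
brief = {"namedLeaders": ["Ana Ruiz", "J. Doe"],
         "specificObservation": "slow replies"}
print(validate_brief(brief, lead))  # one error: J. Doe was not in the input
```

A rejected brief goes back through the pipeline rather than out the door, which is what makes the line-by-line operator check fast.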
Pick a vertical pack. Drop a domain. Read the brief Claude wrote — and the inputs it cited.