KDP book manuscript QA
Before publishing Amazon KDP guides, score chapters for generic filler, weak sourcing, and low-specificity advice. Agents can convert VeracityAPI evidence into an editorial punch list before bad reviews damage the book.
Business value
- Protects high-cost publishing assets from avoidable quality failures.
- Catches weak chapters before formatting, cover launch, and ad spend.
- Turns recommended_fixes into chapter-level editorial tasks.
Agent job to be done
Act as a manuscript QA editor. Score each chapter, flag sections that feel generic or unsupported, and produce a revision queue ranked by risk and business importance.
- format: article
- intended_use: publish
- domain: KDP manuscript / travel safety book
When to call VeracityAPI
Run after the full manuscript is drafted and before final proofing/layout. Re-run after major chapter rewrites.
What text to submit
Chapter text, title, subheads, callouts, safety checklists, and conclusion. Score per chapter or section, not the whole book at once.
Decision policy
- allow: chapter can proceed to proofing.
- revise: chapter needs concrete examples, clearer sourcing, or less boilerplate.
- human_review: high-risk safety claims, legal-sensitive advice, or chapters with provenance_weakness >= 0.70.
- reject: remove or fully rewrite repeatedly generic sections.
Request template
```bash
curl https://api.veracityapi.com/v1/analyze \
  -H "Authorization: Bearer DOC_KEY" \
  -H "Content-Type: application/json" \
  -d '{"type":"text","content":"Paste content here","context":{"format":"article","intended_use":"publish"}}'
```
Automation recipe
- Split the manuscript by chapter or section (see the sketch after this list).
- Score each unit with stable IDs.
- Create editorial tickets with evidence spans and recommended fixes.
- Revision agent patches flagged sections with examples, sources, and clearer steps.
- Final QA agent compares score deltas before approving layout.
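A minimal sketch of the first three steps, assuming Node 18+ (global fetch) and the request shape shown in the template above; splitIntoSections() and createEditorialTicket() are hypothetical helpers you would supply.
```js
// Minimal sketch of steps 1-3: split, score with stable IDs, create tickets.
// splitIntoSections() and createEditorialTicket() are hypothetical helpers.
const API_URL = "https://api.veracityapi.com/v1/analyze";

async function scoreSection(section) {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.DOC_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      type: "text",
      content: section.text,
      context: { format: "article", intended_use: "publish" },
    }),
  });
  return res.json();
}

async function runManuscriptQA(manuscript) {
  // Sequential for clarity; batch or parallelize for high volume,
  // since latency is LLM-bound (see the cost and latency notes below).
  for (const section of splitIntoSections(manuscript)) {
    const result = await scoreSection(section);
    if (result.recommended_action !== "allow") {
      createEditorialTicket(section.id, result.evidence, result.recommended_fixes);
    }
  }
}
```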
Evidence spans agents should inspect
- broad safety advice repeated across chapters
- absence of destination-specific examples
- unsupported claims about danger or legality
- filler intros that do not help readers
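If each evidence item carries character offsets into the submitted string (an assumption; verify against the actual response shape), mapping a flagged span back to an editable source location is a straightforward slice:
```js
// Sketch: resolve one evidence span to an editable source location.
// Assumes an item shaped like { start, end, label } with character
// offsets into the exact text that was submitted -- check the real
// response shape before relying on this.
function locateEvidence(sectionText, item) {
  return {
    label: item.label,                                 // e.g. "generic_filler"
    excerpt: sectionText.slice(item.start, item.end),  // the flagged passage
    line: sectionText.slice(0, item.start).split("\n").length, // 1-based line
  };
}
```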
Policy pseudocode
```js
if (result.recommended_action === "allow") {
  continueWorkflow();                     // chapter proceeds to proofing
} else if (result.recommended_action === "revise") {
  rewriteWith(result.evidence, result.recommended_fixes);
} else if (result.recommended_action === "human_review") {
  queueForHumanReview(result);
} else if (result.recommended_action === "reject") {
  discardOrRebuild();                     // remove or fully rewrite the section
}
```
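The decision policy above also gates on provenance_weakness, which the action switch alone does not capture. One way to layer that local rule on top of the API verdict:
```js
// Local rule complementing the API score: force human review whenever
// provenance_weakness crosses the 0.70 threshold from the decision policy,
// regardless of what recommended_action says.
function effectiveAction(result) {
  if (result.provenance_weakness >= 0.7) return "human_review";
  return result.recommended_action;
}
```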
KPIs to track
- chapters flagged before proof
- revision cycles per chapter
- reader review quality signals
- refund/return rate
- editorial cost per finished book
What can go wrong
- Do not use the score as the only input to editorial decisions for a book.
- Safety/legal claims need independent verification.
- Score long chapters section by section so evidence spans map cleanly back to editable locations.
Cost and latency notes
Analyze-only costs $0.005 per 1,000 characters; analyze + revise (auto_revise=true) costs $0.010 per 1,000 characters. Both round up to the nearest 1,000 characters. Short captions and emails usually cost the $0.005 minimum; longer pages and chapters scale linearly with length. Current v0.1 latency is LLM-bound, so batch or concurrent orchestration is recommended for high-volume pipelines.
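The pricing implies a simple cost model: round the character count up to the nearest 1,000, then multiply by the per-unit rate. A sketch using the rates above (the function name is ours):
```js
// Cost estimate from the published rates: $0.005 per 1,000 characters
// for analyze-only, $0.010 with auto_revise=true; both round up.
function estimateCostUSD(charCount, autoRevise = false) {
  const blocks = Math.ceil(charCount / 1000);
  return blocks * (autoRevise ? 0.01 : 0.005);
}

// Example: a 42,300-character chapter rounds up to 43 blocks:
// analyze-only 43 * $0.005 = $0.215; analyze + revise 43 * $0.010 = $0.43.
```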
Agent evaluation checklist
- Does this workflow have a costly failure mode from generic or weak-provenance text?
- Can the agent map evidence spans back to editable source locations?
- Should this workflow fail open, fail closed, or queue human review if VeracityAPI is unavailable?
- Which field drives policy: recommended_action, risk_level, content_trust_score, specificity_risk, or provenance_weakness?
- What local rule should complement the API score?