Highest-volume business function

Publishing pipeline quality gate

Stop generic pages before they hit production. Agents can score every popular-picks page, comparison page, travel guide, and scam-warning page generated by a cron or content pipeline, then automatically return weak work to draft instead of auto-publishing it.


Business value

  • Protects organic-search revenue by preventing low-specificity pages from dragging down domain quality.
  • Adds an automated editorial checkpoint to high-volume publishing without requiring a human to read every draft.
  • Turns VeracityAPI evidence spans into actionable rewrite instructions for the generation agent.

Agent job to be done

Act as the final pre-publish quality gate. If the page is specific, useful, and well-supported, allow publication. If it is generic or weakly sourced, route it to draft/rewrite with evidence spans attached.

format: article
intended_use: publish
domain: travel safety / affiliate SEO

When to call VeracityAPI

Run after content generation and internal link insertion, but before CMS publish or index submission.

What text to submit

Full article body, title, meta description, intro, conclusion, and any generated FAQ. For long pages over 100k chars, chunk by section and aggregate the highest risk.
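The chunk-and-aggregate step can be sketched as follows. Splitting on markdown headings and keeping the worst chunk result are assumptions about your pipeline; `risk_level` values follow the decision policy below:

```javascript
// Split a long article into section-aligned chunks under the size limit,
// then keep the worst per-chunk result as the page-level score.
const RISK_ORDER = { low: 0, medium: 1, high: 2 };

function chunkBySection(body, maxChars = 100000) {
  // Markdown headings as section boundaries (an assumption about your format).
  const sections = body.split(/\n(?=#{1,3} )/);
  const chunks = [];
  let current = "";
  for (const section of sections) {
    if (current && (current + section).length > maxChars) {
      chunks.push(current);
      current = "";
    }
    current += section;
  }
  if (current) chunks.push(current);
  return chunks;
}

function aggregateWorst(results) {
  return results.reduce((worst, r) =>
    RISK_ORDER[r.risk_level] > RISK_ORDER[worst.risk_level] ? r : worst
  );
}
```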

Decision policy

  • allow: publish normally when risk_level is low and content_trust_score is high.
  • revise: return to generator/editor when risk_level is medium, specificity_risk >= 0.40, or evidence points to vague claims.
  • human_review: block autopublish when risk_level is high, provenance_weakness >= 0.70, or evidence flags unsupported safety claims.
  • reject: optional local policy for pages that repeatedly fail after two rewrites.
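The policy above can be condensed into one decision function. The 0.40 and 0.70 thresholds come from the rules above; the 0.8 cutoff for "high" `content_trust_score` and the two-rewrite cap are illustrative local-policy assumptions:

```javascript
// Map a VeracityAPI result to a publish decision.
// 0.40 / 0.70 thresholds follow the policy above; the 0.8 trust
// cutoff and two-rewrite cap are assumed local policy.
function decide(result, rewriteCount = 0) {
  if (result.risk_level === "high" || result.provenance_weakness >= 0.70) {
    return "human_review";
  }
  if (
    result.risk_level === "low" &&
    result.content_trust_score >= 0.8 &&
    result.specificity_risk < 0.40
  ) {
    return "allow";
  }
  // Otherwise the page needs work: revise, or reject after repeated failures.
  return rewriteCount >= 2 ? "reject" : "revise";
}
```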

Request template

curl https://api.veracityapi.com/v1/analyze \
  -H "Authorization: Bearer DOC_KEY" \
  -H "Content-Type: application/json" \
  -d '{"type":"text","content":"Paste content here","context":{"format":"article","intended_use":"publish"}}'
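The same request in Node 18+ (which ships a global `fetch`). Building the request in a pure helper keeps it testable; the response fields are assumptions about the payload shape:

```javascript
// Build the request for POST /v1/analyze (pure), then send it.
function buildAnalyzeRequest(content, apiKey) {
  return {
    url: "https://api.veracityapi.com/v1/analyze",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        type: "text",
        content,
        context: { format: "article", intended_use: "publish" },
      }),
    },
  };
}

async function analyze(content, apiKey) {
  const { url, options } = buildAnalyzeRequest(content, apiKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`analyze failed: ${res.status}`);
  return res.json(); // assumed shape: { risk_level, recommended_action, evidence, ... }
}
```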

Automation recipe

  • Content cron writes draft HTML/Markdown.
  • Agent extracts publishable text, stripping nav/footer/boilerplate.
  • Agent calls POST /v1/analyze with type=text.
  • If allow, continue CMS publish. If revise/human_review, keep draft unpublished and store evidence + recommended_fixes in the CMS notes.
  • Rewrite agent patches only the flagged spans, then rescores before publishing.
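The score, branch, patch, rescore steps above can be wired together like this. The `analyze`, `rewriteSpans`, and `cms` helpers are hypothetical stand-ins for your pipeline (injected here so the loop stays testable; real implementations would be async):

```javascript
// Gate loop: score a draft, patch only flagged spans, rescore, publish.
// `deps` supplies hypothetical analyze/rewriteSpans/cms implementations.
function gateAndPublish(draft, deps, maxRewrites = 2) {
  const { analyze, rewriteSpans, cms } = deps;
  let body = draft.body;
  for (let attempt = 0; attempt <= maxRewrites; attempt++) {
    const result = analyze(body);
    if (result.recommended_action === "allow") {
      return cms.publish(draft.id, body);
    }
    if (result.recommended_action === "human_review") {
      // Keep draft unpublished; store evidence in CMS notes.
      return cms.keepDraft(draft.id, { evidence: result.evidence });
    }
    // "revise": patch only the flagged spans, then loop to rescore.
    body = rewriteSpans(body, result.evidence, result.recommended_fixes);
  }
  return cms.keepDraft(draft.id, { reason: "failed after rewrites" });
}
```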

Evidence spans agents should inspect

  • generic city advice that could apply anywhere
  • unsupported safety claims
  • absence of named locations, source details, or firsthand specifics
  • repeated listicle transitions or padded conclusions

Policy pseudocode

if (result.recommended_action === "allow") continueWorkflow();
else if (result.recommended_action === "revise") rewriteWith(result.evidence, result.recommended_fixes);
else if (result.recommended_action === "human_review") queueForHumanReview(result);
else if (result.recommended_action === "reject") discardOrRebuild();

KPIs to track

  • percentage of generated pages blocked before publish
  • rewrite pass rate
  • Search Console impressions/clicks for pages that passed
  • indexed pages with low engagement
  • manual editor minutes saved

What can go wrong

  • VeracityAPI is a quality/provenance proxy, not a Google ranking oracle.
  • Do not block useful-but-dry factual pages solely because synthetic_texture_risk is elevated; inspect the evidence spans first.
  • For templated pages, score the unique body text, not shared boilerplate.

Cost and latency notes

Analyze-only is $0.005 per 1,000 characters; analyze + revise with auto_revise=true is $0.010 per 1,000 characters. Both round up to the nearest 1,000 characters. Short captions and emails usually cost $0.005; longer pages or chapters scale linearly with length. Current v0.1 latency is LLM-bound, so batched or concurrent orchestration is recommended for high-volume pipelines.
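Using the rates above, a minimal cost estimator (rounding up to the nearest 1,000 characters, as the pricing specifies):

```javascript
// Estimate cost from the published rates:
// $0.005 / 1,000 chars analyze-only, $0.010 / 1,000 chars with auto_revise.
function estimateCostUSD(charCount, autoRevise = false) {
  const ratePerThousand = autoRevise ? 0.010 : 0.005;
  const billedUnits = Math.ceil(charCount / 1000); // round up per pricing
  return billedUnits * ratePerThousand;
}
```

For example, a 12,000-character guide costs 12 × $0.005 = $0.06 to analyze, or 12 × $0.010 = $0.12 with auto_revise enabled.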

Agent evaluation checklist