Call VeracityAPI before your workflow trusts content.
Agents should call VeracityAPI at irreversible or high-leverage content checkpoints: before publishing, citing, indexing, training on, moderating, or accepting text, image URLs, audio URLs, and private-beta video URLs from uncertain sources.
## When to call
- Before publishing generated drafts, captions, product copy, reviews, or SEO pages.
- Before adding scraped/user-submitted text to RAG, training data, or citation workflows.
- Before accepting uploaded images, voice notes, podcasts, short video clips, or media assets into a public workflow.
- When an agent needs evidence-backed routing instead of a raw detector percentage.
## When not to call
- Do not use as forensic proof, legal proof, academic misconduct proof, speaker identity verification, or guaranteed truth detection.
- Do not submit secrets, regulated personal data, or private media unless your policy permits it.
- Do not block high-stakes human claims without review; route uncertainty to humans.
## Agent decision policy templates

### Policy: pre-publish QA

```typescript
if (result.recommended_action === "allow") publish();
if (result.recommended_action === "revise") rewriteWith(result.evidence);
if (result.recommended_action === "human_review") queueEditor();
if (result.recommended_action === "reject") blockPublish();
```

### Policy: RAG/source triage

```typescript
if (result.recommended_action === "allow") indexSource();
if (result.recommended_action === "human_review") requireCorroboration();
if (result.recommended_action === "reject") quarantineSource();
```

### Policy: UGC/media moderation

```typescript
if (["image", "audio", "video"].includes(result.modality)) attachMediaReview(result);
if (["human_review", "reject"].includes(result.recommended_action)) escalate();
```
## Framework recipes

- TypeScript SDK: `npm install @veracityapi/sdk` and call `new VeracityAPI().analyze()`.
- Python SDK: `pip install veracityapi` and call `VeracityAPI().analyze()`.
- OpenAI tool schema: import `/openapi.json` or define a tool around `POST /v1/analyze`.
- Vercel AI SDK tool: expose `analyzeContent` and branch on `recommended_action`.
- LangGraph conditional edge: route graph edges from `allow`, `revise`, `human_review`, and `reject`.
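For the OpenAI tool-schema recipe, a minimal hand-written tool definition might look like the sketch below. The schema shape follows OpenAI's function-calling conventions; the tool name and descriptions are illustrative assumptions, and the parameter names mirror the request body shown in the Endpoint section:

```typescript
// Illustrative OpenAI-style tool definition wrapping POST /v1/analyze.
// "analyze_content" is a hypothetical name; rename to fit your agent.
const analyzeContentTool = {
  type: "function",
  function: {
    name: "analyze_content",
    description:
      "Triage text or media URLs with VeracityAPI before trusting them.",
    parameters: {
      type: "object",
      properties: {
        type: {
          type: "string",
          enum: ["text", "image", "audio", "video"],
          description: "Modality of the content being checked.",
        },
        content: {
          type: "string",
          description: "Raw text, or a URL for image/audio/video.",
        },
      },
      required: ["type", "content"],
    },
  },
};
```

When the model calls the tool, forward the arguments to `POST /v1/analyze` and hand `recommended_action` back to the agent loop.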
## Primary reason enum-like values

Use `modality`, `recommended_action`, and `primary_reason` for stable branching. Current enum-like values include `unsupported_generic_claims`, `weak_provenance`, `synthetic_texture`, `visible_synthetic_media_cues`, `synthetic_speech_cues`, `workflow_context_risk`, `low_risk_content`, and `low_risk_media`.
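One way to branch on `primary_reason` is a lookup table with a fallback for values added later. The reason keys come from the list above; the hint strings are illustrative assumptions, not API output:

```typescript
// Illustrative remediation hints keyed by the documented primary_reason values.
const remediationHints: Record<string, string> = {
  unsupported_generic_claims: "Ask for sources or concrete specifics.",
  weak_provenance: "Request stronger source metadata before indexing.",
  synthetic_texture: "Queue for human review of writing quality.",
  visible_synthetic_media_cues: "Attach media review before publishing.",
  synthetic_speech_cues: "Attach audio review before publishing.",
  workflow_context_risk: "Escalate per local risk policy.",
  low_risk_content: "No action needed.",
  low_risk_media: "No action needed.",
};

// Unknown reasons route to human review so new enum values fail safe.
function hintFor(primaryReason: string): string {
  return remediationHints[primaryReason] ?? "Unknown reason: route to human review.";
}
```

The fallback matters because the values are described as enum-like, not a closed enum, so agents should tolerate new reasons appearing.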
## Endpoint

```http
POST https://api.veracityapi.com/v1/analyze
Authorization: Bearer DOC_KEY
Content-Type: application/json

{"type":"text|image|audio|video","content":"..."}
```

Use the canonical unified endpoint for all single-item modalities.
## Routing policy

| `recommended_action` | Agent behavior |
|---|---|
| `allow` | Continue the workflow. |
| `revise` | Use `evidence` and `recommended_fixes` to rewrite, replace, or request better provenance. |
| `human_review` | Queue with `evidence`, `risk_level`, `confidence`, `limitations`, and source metadata. |
| `reject` | Discard, quarantine, or block according to local policy. |
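For the `human_review` row, the queued item can bundle the listed fields in one payload. This is a sketch assuming those field names appear on the response; defaults fill in when a field is absent:

```typescript
// Shape of what lands in the human-review queue, per the table above.
type ReviewItem = {
  evidence: string[];
  risk_level: string;
  confidence: number;
  limitations: string[];
  source: Record<string, string>;
};

// Assumed (partial) response fields; adapt to the real response shape.
type PartialResult = {
  evidence?: string[];
  risk_level?: string;
  confidence?: number;
  limitations?: string[];
};

// Bundle the response with source metadata so reviewers see full context.
function buildReviewItem(
  result: PartialResult,
  sourceMeta: Record<string, string>,
): ReviewItem {
  return {
    evidence: result.evidence ?? [],
    risk_level: result.risk_level ?? "unknown",
    confidence: result.confidence ?? 0,
    limitations: result.limitations ?? [],
    source: sourceMeta,
  };
}
```

Defaulting missing fields to explicit sentinels (`"unknown"`, `0`) keeps the queue schema stable even when the API omits optional fields.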
## Pricing

- Text Analyze only: $0.005 / 1k characters.
- Text Analyze + revise: $0.010 / 1k characters with `auto_revise: true`.
- Image URL analysis: $0.02 / image.
- Audio URL analysis: $0.01 / request.
- Private-beta video URL analysis: $0.05 / successful request (`video_v0`).
- Use `GET /v1/balance` before autonomous runs.
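Before an autonomous batch run, an agent can pre-estimate spend from the prices listed above. This sketch hard-codes the documented rates; treat it as illustrative and reconcile against `GET /v1/balance` and current pricing:

```typescript
// Rates copied from this doc's pricing list (USD); verify before relying on them.
const PRICES = {
  textPer1kChars: 0.005,
  textRevisePer1kChars: 0.01,
  imagePerItem: 0.02,
  audioPerRequest: 0.01,
  videoPerSuccess: 0.05,
};

// Estimate the cost of one text Analyze call, with or without auto_revise.
function estimateTextCost(chars: number, autoRevise = false): number {
  const rate = autoRevise ? PRICES.textRevisePer1kChars : PRICES.textPer1kChars;
  return (chars / 1000) * rate;
}
```

For example, a 2,000-character draft costs roughly $0.01 to analyze, or $0.02 with revision enabled.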
## Proof

Seed benchmark: 500 samples, 88.0% routing-action agreement, macro F1 0.871. See `/evals`. The framing is routing-action quality, not authorship proof.
## Limitations
VeracityAPI is a workflow-risk triage API. It does not prove content is AI-generated, true, false, cloned, or legally attributable. Image/audio/video v0.1 do not inspect EXIF/C2PA, speaker identity, or frame-by-frame temporal consistency.
## How it compares
| Need | Use VeracityAPI when... | Use detector/forensics vendors when... |
|---|---|---|
| Agent routing | You need allow/revise/human_review/reject plus evidence. | You only need an authorship probability or investigation workflow. |
| Pre-publish QA | You want generic/slop/provenance checks before publication. | You need plagiarism databases or institution workflows. |
| Synthetic media | You need async uploaded-media triage with clear limitations. | You need identity, courtroom evidence, or real-time fraud prevention. |