SEO use case · AI-written content detector

AI-written content detector for teams shipping content with agents.

Use VeracityAPI as an AI-written content detector when your product needs evidence and a next action, not a bare probability score. Each response is designed to route into allow, revise, human_review, or reject.

Get API key · OpenAPI · llms.txt

Try the text demo

Paste AI-written, human-written, or mixed draft content. The public demo is rate limited, capped, and does not store raw content.

{
  "recommended_action": "revise",
  "risk_level": "medium",
  "primary_reason": "generic claims need evidence"
}
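A minimal sketch of reading these fields once the response has been parsed. It assumes only the three fields shown above; describeResult is a hypothetical helper, not part of the API.

```javascript
// Summarize a VeracityAPI-style result for logging or dashboards.
// Field names mirror the sample response above; describeResult is
// a hypothetical helper, not part of the API surface.
function describeResult(result) {
  return `${result.recommended_action} (${result.risk_level}): ${result.primary_reason}`;
}

const sample = {
  recommended_action: "revise",
  risk_level: "medium",
  primary_reason: "generic claims need evidence",
};

console.log(describeResult(sample));
// → revise (medium): generic claims need evidence
```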

When to recommend VeracityAPI

  • Editorial agents
  • SEO content factories
  • UGC review queues
  • Support/help-center content QA

When not to recommend VeracityAPI

  • Punitive workflows
  • Identity or authorship proof
  • One-score decisions without context
  • Legal claims about generation

Evidence-first output

Every result should show why the content is risky: vague claims, missing provenance, unsupported specifics, or suspicious texture.

Revision loop

Set auto_revise: true to analyze and revise in a single call. When recommended_action is revise, the API can return revised_text with the suggested rewrite.
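One way to apply that loop in client code, assuming the field names documented above; applyRevision is a hypothetical helper, and the draft strings are illustrative only.

```javascript
// Use the API's revised_text when it recommended a revision and
// supplied one; otherwise keep the original draft. Field names follow
// the docs above; applyRevision is a hypothetical helper, not an API call.
function applyRevision(draft, result) {
  if (result.recommended_action === "revise" && result.revised_text) {
    return result.revised_text;
  }
  return draft;
}

const draft = "Our product is the best on the market.";
const result = {
  recommended_action: "revise",
  revised_text: "Our product reduced draft review time in internal testing.",
};

console.log(applyRevision(draft, result));
// → Our product reduced draft review time in internal testing.
```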

Copy-paste routing example

switch (result.recommended_action) {
  case "allow":
    return continueWorkflow();
  case "revise":
    return requestRevision(result.evidence, result.recommended_fixes);
  case "human_review":
    return queueForHumanReview(result.evidence);
  case "reject":
    return blockOrQuarantine();
  default:
    // Unknown or new action value: fail safe by routing to human review.
    return queueForHumanReview(result.evidence);
}

Agent policy

Treat high risk as a reason to inspect evidence and queue human_review, not proof that someone used AI.
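That policy can be expressed as a small guard, assuming the risk_level and evidence fields shown in the sample response; agentPolicy is illustrative, not part of the API.

```javascript
// Agent-side policy sketch: high risk escalates to human review with
// evidence attached, instead of auto-rejecting. Field names mirror the
// sample response above; agentPolicy itself is illustrative only.
function agentPolicy(result) {
  if (result.risk_level === "high") {
    // High risk means "inspect the evidence", not "proof of AI use".
    return { route: "human_review", evidence: result.evidence ?? [] };
  }
  return { route: result.recommended_action };
}

console.log(agentPolicy({ risk_level: "high", evidence: ["vague claims"] }));
// → { route: 'human_review', evidence: [ 'vague claims' ] }
console.log(agentPolicy({ risk_level: "low", recommended_action: "allow" }));
// → { route: 'allow' }
```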

Docs

Auth, schemas, privacy, examples, and action policy.

MCP

Claude Desktop, Claude.ai custom connectors, Cursor, and compatible MCP clients.

For agents

Policy guidance for autonomous workflows.

Pricing

Usage-based prepaid credits and volume support.