Image · paid acquisition

Image detector for ad creative compliance

Preflight AI-generated or heavily edited ad creatives before spending budget or submitting to platform review.


Business value

  • Protects trust in ad creative image workflows before automated agents take irreversible action.
  • Creates a concrete QA policy agents can apply consistently.
  • Turns risk evidence into review, revision, or provenance requests.

Agent job to be done

Act as the ad creative image trust triage layer. Score or prepare the asset, inspect evidence, and choose allow, revise, human_review, or reject based on workflow stakes.

  • format: social_post
  • intended_use: publish
  • domain: ad creative compliance image

When to call VeracityAPI

Run after asset intake/export and before publish, moderation, citation, training, payment, or account-impacting decisions.

What image URL to submit

Public HTTPS image URL for the exact asset the agent will publish, moderate, cite, or use as evidence.

Decision policy

  • allow: low risk and low-stakes use with no conflicting local signals.
  • revise: medium risk or evidence that can be fixed by replacement, disclosure, or better provenance.
  • human_review: high risk, sensitive claims, identity/fraud implications, or evidentiary use.
  • reject: repeated high-risk assets combined with policy violations or missing provenance.
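The policy above can be sketched as a single function. This is an illustrative sketch only: the field names (riskScore, stakes, repeatOffender, hasProvenance) are assumptions for the example, not the actual VeracityAPI response schema, and the numeric thresholds are placeholders a team would tune.

```javascript
// Map risk evidence and workflow stakes to one of the four actions.
// All inputs are hypothetical local fields; thresholds are placeholders.
function decideAction({ riskScore, stakes, repeatOffender, hasProvenance }) {
  // reject: repeated high-risk assets with missing provenance
  if (repeatOffender && !hasProvenance) return "reject";
  // human_review: high risk or evidentiary/sensitive use
  if (riskScore >= 0.7 || stakes === "evidentiary") return "human_review";
  // revise: medium risk that replacement or disclosure can fix
  if (riskScore >= 0.4) return "revise";
  // allow: low risk and low-stakes use
  if (stakes === "low") return "allow";
  // unknown combination: default to escalation, not publication
  return "human_review";
}
```

Defaulting to human_review when no rule matches keeps the agent from silently allowing an asset the policy never anticipated.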

Request template

curl https://api.veracityapi.com/v1/analyze -H "Authorization: Bearer DOC_KEY" -H "Content-Type: application/json" -d '{"type":"image","content":"https://example.com/creative.png","context":{"format":"social_post","intended_use":"publish"}}'
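The same request can be built from agent code. The helper below is hypothetical (not part of any SDK); it assumes the URL goes in the `content` field as in the curl template, with the example asset URL as a placeholder.

```javascript
// Build the fetch arguments for POST /v1/analyze with an image asset.
// buildAnalyzeRequest is an illustrative helper, not a real SDK call.
function buildAnalyzeRequest(imageUrl, apiKey) {
  return {
    url: "https://api.veracityapi.com/v1/analyze",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        type: "image",
        content: imageUrl, // public HTTPS URL of the exact asset
        context: { format: "social_post", intended_use: "publish" },
      }),
    },
  };
}
```

Usage: `const { url, options } = buildAnalyzeRequest(assetUrl, key); const result = await fetch(url, options).then(r => r.json());`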

Automation recipe

  • Agent receives final image URL and local workflow metadata.
  • Agent calls POST /v1/analyze with type=image.
  • Store score, recommended_action, and evidence categories in the workflow record.
  • Allow low-risk assets; queue medium/high-risk assets for review or replacement.
  • Rescore replacement images before publication or use.
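The recipe above can be sketched as one triage step. The `deps` helpers (scoreImage, saveRecord, queueReview) are hypothetical workflow functions, and the response fields mirror the score, recommended_action, and evidence names used on this page.

```javascript
// One pass of the automation recipe: score, record, then gate.
// deps.scoreImage wraps POST /v1/analyze with type=image.
async function triageCreative(imageUrl, workflow, deps) {
  const result = await deps.scoreImage(imageUrl);

  // Store score, recommended action, and evidence in the workflow record.
  await deps.saveRecord(workflow.id, {
    score: result.score,
    action: result.recommended_action,
    evidence: result.evidence,
  });

  // Allow low-risk assets; everything else waits for review or replacement.
  if (result.recommended_action === "allow") return "published";
  await deps.queueReview(workflow.id, result);
  return "queued";
}
```

Replacement images would re-enter this same function, satisfying the rescore-before-publish step.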

Evidence spans agents should inspect

  • synthetic-looking texture or cadence
  • geometry, text, label, transcript, or continuity mismatch
  • weak or missing provenance
  • signals that conflict with local metadata

Policy pseudocode

if (result.recommended_action === "allow") continueWorkflow();
else if (result.recommended_action === "revise") rewriteWith(result.evidence, result.recommended_fixes);
else if (result.recommended_action === "human_review") queueForHumanReview(result);
else if (result.recommended_action === "reject") discardOrRebuild();
else queueForHumanReview(result); // unknown action: escalate, don't publish

KPIs to track

  • assets triaged
  • human-review precision
  • bad publishes or decisions prevented
  • false-positive appeal rate
  • average review latency

What can go wrong

  • A score is not proof that an image is AI-generated; combine the evidence with provenance and source checks.
  • Do not use a single score as forensic evidence.
  • Combine VeracityAPI with local metadata, source reputation, and human escalation.

Cost and latency notes

Image analysis is a flat $0.02 per image. The endpoint accepts HTTPS image URLs, stores no image bytes, and logs only a URL hash plus hostname. Current v0.1 latency is vision-model-bound, so batch preflight calls and plan retries carefully.

Agent evaluation checklist