Social media caption pre-flight
Score reel, carousel, TikTok, Facebook, Pinterest, and YouTube Shorts captions before publishing. Agents can rewrite captions that sound generic, padded, or like engagement bait before algorithms and humans tune them out.
Business value
- Improves caption specificity before posts go live.
- Reduces generic social copy that gets scrolled past or deprioritized.
- Creates a cheap QA step across high-volume short-form workflows.
Agent job to be done
Act as a caption quality reviewer. Preserve the creative idea, but block or rewrite captions that lack concrete details, platform-native specificity, or credible hook context.
- format: caption
- intended_use: publish
- domain: social media / travel safety
When to call VeracityAPI
Run after final caption draft and hashtag generation, before scheduling or publishing.
What text to submit
Caption text, hook, on-screen text, CTA, and optionally first comment. Do not include unrelated transcript unless the caption depends on it.
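A minimal sketch of assembling that submission, assuming a simple post-package object; the package field names (hook, onScreenText, cta, firstComment) are illustrative, while type, content, and context mirror the request template below.

```javascript
// Illustrative helper: assemble the caption package into one analyze request.
// The package field names are assumptions; type, content, and context follow
// the request template on this page.
function buildAnalyzeRequest(pkg) {
  const parts = [pkg.hook, pkg.caption, pkg.onScreenText, pkg.cta, pkg.firstComment]
    .filter(Boolean); // omit pieces the post does not use
  return {
    type: "text",
    content: parts.join("\n"),
    context: {
      format: "caption",
      intended_use: "publish",
      domain: "social media / travel safety",
    },
  };
}
```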
Decision policy
- allow: publish when low risk and evidence is empty or minor.
- revise: rewrite when slop_risk >= 0.40 or recommended_fixes mention adding concrete details.
- human_review: hold when high risk, especially for safety/scam claims or sensitive destination advice.
- local override: require at least one named place, mechanism, customer detail, or concrete claim before publishing.
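A sketch of this policy, including the local override; hasConcreteDetail() stands in for whatever named-place or mechanism heuristic you run locally, and the risk_level values are assumed enum names.

```javascript
// Sketch of the decision policy plus the local override.
// hasConcreteDetail() is an assumed local check (named place, mechanism,
// customer detail, or concrete claim); slop_risk, risk_level, and
// recommended_fixes come from the API response.
function decide(result, captionText, hasConcreteDetail) {
  const needsRevision =
    result.slop_risk >= 0.4 ||
    (result.recommended_fixes || []).some((fix) => /concrete detail/i.test(fix));

  if (result.risk_level === "high") return "human_review"; // safety/scam or sensitive advice
  if (needsRevision) return "revise";
  if (!hasConcreteDetail(captionText)) return "revise";    // local override
  return "allow";
}
```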
Request template
curl https://api.veracityapi.com/v1/analyze -H "Authorization: Bearer DOC_KEY" -H "Content-Type: application/json" -d '{"type":"text","content":"Paste content here","context":{"format":"caption","intended_use":"publish"}}'
Automation recipe
- Scheduler creates a post package: video asset, caption, hashtags, platform target.
- Agent scores caption with format=caption and intended_use=publish.
- If revise, caption agent rewrites using evidence spans and brand voice constraints.
- Rescore once. If still high, send to human review or publish with a safer minimal caption.
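A minimal orchestration sketch of this recipe; analyze(), rewriteCaption(), and queueForHumanReview() are assumed wrappers around the API, the caption agent, and your review queue, and buildAnalyzeRequest() is the helper sketched under "What text to submit".

```javascript
// Sketch of the recipe: score, rewrite once on "revise", rescore, then
// publish, fall back to a safer minimal caption, or queue for human review.
async function preflightCaption(pkg, { analyze, rewriteCaption, queueForHumanReview }) {
  let result = await analyze(buildAnalyzeRequest(pkg));

  if (result.recommended_action === "revise") {
    pkg.caption = await rewriteCaption(pkg.caption, result.evidence, result.recommended_fixes);
    result = await analyze(buildAnalyzeRequest(pkg)); // rescore once
  }

  switch (result.recommended_action) {
    case "allow":
      return { publish: true, pkg };
    case "revise":
      // Still flagged after one rewrite: publish a safer minimal caption
      // (here, hook only) or route to a human, per your own policy.
      pkg.caption = pkg.hook;
      return { publish: true, pkg };
    default:
      queueForHumanReview(result, pkg);
      return { publish: false, pkg };
  }
}
```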
Evidence spans agents should inspect
- generic hooks like ‘don’t make this mistake’ without specifics
- broad claims about places or people
- caption bloat that does not add context
- salesy or inauthentic phrasing
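These patterns are also cheap to pre-screen locally before calling the API. A hedged sketch of such a local rule follows; the phrase list and place matching are rough assumptions to be tuned per brand and destination catalog.

```javascript
// Illustrative local heuristic: flag generic hooks and a missing named place.
// GENERIC_HOOKS and knownPlaces are assumptions, not an API feature.
const GENERIC_HOOKS = [
  "don't make this mistake",
  "you won't believe",
  "this changed everything",
  "wait for it",
];

function localCaptionChecks(captionText, knownPlaces = []) {
  const lower = captionText.toLowerCase();
  const genericHook = GENERIC_HOOKS.find((phrase) => lower.includes(phrase)) || null;
  const namedPlace = knownPlaces.find((place) => lower.includes(place.toLowerCase())) || null;
  return {
    hasGenericHook: Boolean(genericHook),
    genericHook,
    hasNamedPlace: Boolean(namedPlace),
  };
}
```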
Policy pseudocode
if (result.recommended_action === "allow") {
  continueWorkflow();
} else if (result.recommended_action === "revise") {
  rewriteWith(result.evidence, result.recommended_fixes);
} else if (result.recommended_action === "human_review") {
  queueForHumanReview(result);
} else if (result.recommended_action === "reject") {
  discardOrRebuild();
}
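For reference, an illustrative shape of the result object this pseudocode reads; the field names match those used elsewhere on this page, but the exact structure and values are assumptions, so consult the API reference for the authoritative schema.

```javascript
// Illustrative only; values are made up and the evidence shape is assumed.
const exampleResult = {
  recommended_action: "revise",  // allow | revise | human_review | reject
  risk_level: "medium",          // assumed enum values
  slop_risk: 0.52,
  content_trust_score: 0.61,
  evidence: [
    { span: "don't make this mistake", reason: "generic hook without specifics" },
  ],
  recommended_fixes: ["Add a named place or concrete detail to the hook."],
};
```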
KPIs to track
- caption revise rate
- average reach per post before/after
- saves/shares/comments per impression
- percentage of posts requiring human review
- caption rewrite latency
What can go wrong
- Social algorithms are not directly measured by VeracityAPI.
- Short captions may produce lower confidence; pair score with local heuristics like named-place checks.
- Do not optimize away brand personality just to reduce risk.
Cost and latency notes
Analyze-only costs $0.005 per 1,000 characters; analyze + revise with auto_revise=true costs $0.010 per 1,000 characters. Both round up to the nearest 1,000 characters, so a short caption almost always bills as a single block at $0.005, while longer content scales linearly with length. Current v0.1 latency is LLM-bound, so batch or concurrent orchestration is recommended for high-volume pipelines.
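A quick cost estimator based on the rates above; the per-1,000-character prices come from this section, with rounding up applied per call.

```javascript
// Estimate per-call cost: $0.005 per 1,000 characters analyze-only,
// $0.010 with auto_revise=true, rounded up to the nearest 1,000 characters.
function estimateCostUsd(charCount, autoRevise = false) {
  const ratePerBlock = autoRevise ? 0.01 : 0.005;
  const blocks = Math.max(1, Math.ceil(charCount / 1000));
  return blocks * ratePerBlock;
}

// A 220-character caption bills as one block:
estimateCostUsd(220);        // 0.005
estimateCostUsd(220, true);  // 0.01
```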
Agent evaluation checklist
- Does this workflow have a costly failure mode from generic or weak-provenance text?
- Can the agent map evidence spans back to editable source locations?
- Should this workflow fail open, fail closed, or queue human review if VeracityAPI is unavailable?
- Which field drives policy: recommended_action, risk_level, content_trust_score, specificity_risk, or provenance_weakness?
- What local rule should complement the API score?