In 2023, a New York lawyer made national headlines after submitting a court brief filled with fictitious case citations — all generated by ChatGPT. The cases sounded real. The citations looked legitimate. But none of them existed. The result? Sanctions, public humiliation, and a cautionary tale that still echoes across every industry using AI-generated content.
The uncomfortable truth is that AI models can produce convincing, confident-sounding information that is entirely fabricated. And the more polished the output looks, the less likely you are to question it.
Today, we are flipping the script: using AI to catch AI. Think of it as deploying a second model as your personal fact-checker — a meticulous research analyst whose only job is to verify every claim before you hit publish.
Why this matters
If you are using AI to draft reports, blog posts, client deliverables, or internal briefings, you are almost certainly publishing content with unverified claims. Not because you are careless — because the output looks right. AI hallucinations are not obvious errors; they are plausible-sounding fabrications buried inside otherwise solid content.
The risk is not hypothetical. Misinformation erodes trust, damages credibility, and, in regulated industries, can carry legal consequences. A pre-publication scan takes minutes and can save you from a very expensive mistake.
Use case spotlight: The pre-publication fact-check
Smart teams are adding an "AI vs. AI" step to their content workflows. After generating a draft with one AI tool, they run it through a second pass — not to edit for tone or grammar, but specifically to extract and verify every factual claim. This creates a built-in safety net that catches hallucinations before they reach your audience.
Your AI experiment: Try this prompt
Time to tinker: Take any AI-generated draft — a blog post, a report, a client brief — and paste it into your favorite AI tool alongside the prompt below.
Note: This works best with content that contains specific claims, statistics, dates, or named sources.
The prompt:
"Act as a meticulous fact-checker and research analyst. I am about to publish the following content. Your job is to:
- Extract every factual claim, statistic, date, and named source from the text.
- For each claim, attempt to verify it using your training data. Clearly state whether the claim is: Verified, Potentially Inaccurate, or Unverifiable.
- For any claim marked Potentially Inaccurate or Unverifiable, explain why and suggest what the correct information might be or where I should look to verify it.
- Present your findings in a markdown table with columns: Claim | Status | Notes.
Here is the content to fact-check: [Paste your draft here]"
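If you would rather wire this check into a script than paste drafts by hand, the sketch below shows one way to do it with the OpenAI Python SDK. Treat it as a minimal sketch, not a finished tool: the model name, the draft.md filename, and the fact_check helper are assumptions of this example, and any chat-completion API with a comparable interface would work just as well.

```python
# fact_check.py -- minimal sketch of an automated pre-publication pass.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

FACT_CHECK_PROMPT = """\
Act as a meticulous fact-checker and research analyst. I am about to publish
the following content. Your job is to:
- Extract every factual claim, statistic, date, and named source from the text.
- For each claim, attempt to verify it using your training data. Clearly state
  whether the claim is: Verified, Potentially Inaccurate, or Unverifiable.
- For any claim marked Potentially Inaccurate or Unverifiable, explain why and
  suggest what the correct information might be or where I should verify it.
- Present your findings in a markdown table with columns: Claim | Status | Notes.

Here is the content to fact-check:
"""


def fact_check(draft: str, model: str = "gpt-4o") -> str:
    """Send the fact-check prompt plus the draft; return the markdown report."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": FACT_CHECK_PROMPT + draft}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("draft.md", encoding="utf-8") as f:  # assumed filename
        print(fact_check(f.read()))
```

Running it prints the markdown report to your terminal, ready to skim before you publish.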
Pro tips
- Dig deeper on flagged claims: For anything marked Potentially Inaccurate, ask a follow-up: "Can you find the original source for this claim and tell me what the accurate figure or date is?"
- Triangulate critical data: For high-stakes content, run the same fact-check across two different AI tools and compare results. If both flag the same claim, it almost certainly needs manual verification (see the sketch after this list for one way to automate the comparison).
- Check timeliness: AI training data has a cutoff date. For recent statistics or events, always add: "Flag any claim that may be outdated based on your knowledge cutoff."
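To make the triangulation tip concrete, here is a rough sketch that runs the same prompt through two providers (OpenAI and Anthropic here, purely as examples) and prints only the claims both models flag. Everything beyond the two SDK calls is an assumption of this example: the model names are placeholders, the import reuses the fact_check.py sketch above, and the flagged_claims table parser is deliberately crude, so expect to eyeball the raw reports too.

```python
# triangulate.py -- rough sketch: run the same fact-check through two
# models and surface the claims that BOTH flag. Assumes the `openai` and
# `anthropic` SDKs are installed, API keys are set in the environment,
# and FACT_CHECK_PROMPT comes from the fact_check.py sketch above.
import anthropic
from openai import OpenAI

from fact_check import FACT_CHECK_PROMPT


def flagged_claims(markdown_report: str) -> set[str]:
    """Crudely parse the markdown table; keep claims not marked Verified."""
    flagged = set()
    for line in markdown_report.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) >= 2 and cells[1] and cells[1].lower() != "verified":
            # Skip the header row and the |---|---| separator row.
            if cells[0].lower() != "claim" and not set(cells[0]) <= {"-", ":", " "}:
                flagged.add(cells[0])
    return flagged


def check_openai(draft: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": FACT_CHECK_PROMPT + draft}],
    )
    return resp.choices[0].message.content


def check_anthropic(draft: str) -> str:
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=2048,
        messages=[{"role": "user", "content": FACT_CHECK_PROMPT + draft}],
    )
    return resp.content[0].text


if __name__ == "__main__":
    with open("draft.md", encoding="utf-8") as f:  # assumed filename
        draft = f.read()
    both = flagged_claims(check_openai(draft)) & flagged_claims(check_anthropic(draft))
    print("Claims flagged by BOTH models (verify manually):")
    for claim in sorted(both):
        print(" -", claim)
```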
What did you discover?
Did the AI catch claims you assumed were accurate? Did it flag statistics that sounded right but turned out to be unverifiable? The goal is not perfection — it is building a habit of verification that protects your credibility every time you publish.