Why It's Still Important to Fact-Check AI Tools Like ChatGPT
Elwyn discusses the importance of double-checking AI-generated content, even when it is based on your own transcript or starting copy. In fact, ChatGPT once misinterpreted his transcript as ‘how many hours are in Strawberry’, which would have made for a far more confusing article without proofreading!
How Many “R”s Are in “Strawberry”?
Earlier this year, someone asked ChatGPT how many "R"s are in "strawberry." You’d think such a basic task would be a walk in the park for a language model. But, as you can probably guess from my leading with this question, it wasn’t.
Instead, ChatGPT 3.5 confidently claimed there were two "R"s in strawberry, even when challenged.
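For contrast, the same question is trivial for ordinary deterministic code, which counts characters rather than predicting likely words. A minimal Python sketch (purely illustrative, not anything ChatGPT itself runs):

```python
# Count occurrences of the letter "r" in "strawberry" deterministically.
word = "strawberry"
count = word.lower().count("r")
print(f'There are {count} "R"s in "{word}".')
```

Running this prints that there are 3 "R"s, every time. The gap between this one-liner and a confident wrong answer from a chatbot is exactly why the story caught on: language models predict text, they don't count letters.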
Naturally, this sparked curiosity, and people started testing the system with similar questions. Even when ChatGPT eventually corrected itself to say there are three "R"s, it could still be tricked into backtracking if a user pushed hard enough. This back-and-forth highlighted an important truth: AI is mightily clever and impressive but remains imperfect.
ChatGPT 4.0 has ironed out many of these issues, but the fact remains that mistakes happen. If it can trip up over something as simple as spelling, what will happen when faced with more complex, politically or culturally sensitive topics?
It’s worth remembering the old saying: history is written by the victors. AI tools draw on existing data, meaning they can inherit biases or inaccuracies from their sources.
That’s why fact-checking is essential.
Why I Always Fact-Check My AI-Assisted Content
When I write, I use tools like Otter.ai (speech-to-text transcript) to draft my content, then turn to ChatGPT as a low-cost assistant for tidying up. But before I hit publish, I always take the time to fact-check everything. I even ask ChatGPT itself to cross-reference details with credible third-party sources and flag anything ambiguous.
This approach means my content is often more thorough than it used to be. I can’t afford a team of fact-checkers, but with over 20 years of experience as a web designer and running digital marketing agencies, most of my content is opinion-based anyway. When I do need to present facts, though, I must make sure they hold up.
AI Hallucination: When AI Gets It Wrong
A related issue to consider is "AI hallucination." This happens when tools like ChatGPT confidently generate false or misleading information. Sometimes, the AI aggregates data incorrectly or misinterprets the question entirely.
In layman's terms, this means AI can sometimes make stuff up. It’s the result of its programming, but that doesn’t make it any less frustrating when it happens. That’s why we need to stay on top of the tools we’re using, especially if they fill gaps in our knowledge.
My Top Tips for Using AI Tools Like ChatGPT
1. Do your homework first.
Use trusted websites to read up on key topics manually. AI tools can help with speed and structure but shouldn’t replace proper research.

2. Make the content yours.
Start with your own ideas and framework. Let ChatGPT help with structure and tidying, but make sure the core of the content reflects your voice and experience. Tools like Grammarly can also help polish your writing.

3. Ask AI to fact-check itself.
AI is continually improving, but for now, the safest option is to double-check its output. Ask it to verify claims with third-party sources and flag anything inaccurate.
Wrapping Up
AI tools like ChatGPT are here to stay, and their occasional errors aren’t a reason to ignore them.
Industry leaders like Gary Vee constantly remind us that content creation will only grow in importance as we approach 2030. The key is using these tools wisely. Fact-check thoroughly, stay in control of your content, and you’ll be able to use AI to your advantage without compromising on quality.
Cheers,
Elwyn