Is your AI saying the quiet part out loud? Time for an ethics check.
Natalie Lambert
8/19/2025 · 3 min read


Welcome to Prompt, Tinker, Innovate—my AI playground. Each edition gives you a hands-on experiment that shows how AI can sharpen your thinking, streamline your process, and power up your creative work.
This week's playground: Using AI to audit itself for ethics and bias
We often turn to AI for speed and efficiency, but what if its output contains hidden biases or reinforces harmful stereotypes? Relying on AI-generated content without a critical review isn't just lazy—it's risky. This week, we're turning the tables and using AI to police its own work.
Why this matters
AI models are trained on vast datasets from the internet, which means they inherit society's existing biases—both subtle and overt. If you use AI-generated content—whether it's text, images, or video—for marketing copy, reports, presentations, or even internal communications, these biases can slip into your work.
Learning how to spot and fix these issues before you hit "publish" is crucial for maintaining credibility, communicating inclusively, and avoiding unintended harm. It’s about moving from being a passive AI user to a responsible AI editor.
Use case spotlight: When "executive" means "man"
This isn't a theoretical problem. I ran into it myself.
I was generating stylized illustrations of speakers for a corporate event. The process was simple: I'd upload a photo of the speaker into an AI image generator and use a prompt like, "An illustration of this person, a senior executive, in a professional art style."
The first few portraits of the male speakers looked great. But when I started on the women, the AI consistently failed. It kept generating images of men.
Despite being given a clear photograph of a woman, the AI’s internal association for the word “executive” was so strongly male that it overrode the visual data I provided. It was a stark reminder that AI models don't reflect the world as it is, but as it has been represented in their training data.
And while this happened with an image generator, the exact same biases are woven into the language models we use every day. The stereotypes are just hidden in word choice and subtle descriptions instead of facial features.
Your AI experiment: Try this prompt
👉 Time to tinker: Grab a piece of AI-generated text you already have—something you were planning to use for a report, a presentation, a social media post, or an email. This works best with content you've already reviewed and thought was "good to go." Copy and paste the prompt below into a new chat in your favorite AI tool, and then paste your text after it.
📝 Prompt: "You are an AI ethics auditor. Your task is to analyze the following text for potential bias (including but not limited to gender, age, race, socioeconomic status, and disability), stereotypes, or other ethical issues. Provide a two-part response:
Analysis: In a bulleted list, identify each specific instance of potential bias or stereotype you find. For each point, explain why it could be problematic.
Suggested Revision: Provide an edited version of the text that mitigates these issues while preserving the original message's core intent and clarity.
Here is the text to analyze:" [Paste your existing AI-generated text here]
💡 Pro tip: Want to dig deeper? Try these follow-ups after the initial audit.
Go beyond AI text: Use the prompt on your own writing. Paste in a recent email, a section of a report, or a marketing headline to see what you might have missed.
Specify the focus: If you have a specific concern, tell the auditor where to look. Add this to the prompt: "In your analysis, pay special attention to language that might be exclusionary or demonstrate age-related bias."
Ask for the 'why': To build your own intuition, ask the auditor to elaborate. Follow up with: "For the first point in your analysis, can you explain the potential real-world harm of using that kind of stereotypical language?"
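If you run these audits often, you can keep the prompt in a small helper instead of copy-pasting it each time. Here's a minimal Python sketch: `build_audit_prompt` is a hypothetical function name (not from any library), and the optional `focus` argument folds in the "specify the focus" tip from above. The result is just a string you can paste into any chat tool or send through whatever LLM API you already use.

```python
def build_audit_prompt(text, focus=None):
    """Assemble the ethics-audit prompt around a piece of AI-generated text.

    `focus` is an optional phrase directing the auditor's attention,
    e.g. "age-related bias". This helper only builds the prompt string;
    sending it to a model is up to you.
    """
    prompt = (
        "You are an AI ethics auditor. Your task is to analyze the following "
        "text for potential bias (including but not limited to gender, age, "
        "race, socioeconomic status, and disability), stereotypes, or other "
        "ethical issues. Provide a two-part response:\n\n"
        "Analysis: In a bulleted list, identify each specific instance of "
        "potential bias or stereotype you find. For each point, explain why "
        "it could be problematic.\n\n"
        "Suggested Revision: Provide an edited version of the text that "
        "mitigates these issues while preserving the original message's core "
        "intent and clarity.\n\n"
    )
    if focus:
        # Optional focus line, per the "specify the focus" pro tip
        prompt += f"In your analysis, pay special attention to {focus}.\n\n"
    prompt += f"Here is the text to analyze:\n{text}"
    return prompt


# Example: audit a headline with an age-related focus
audit = build_audit_prompt(
    "Our app is so simple, even your grandma can use it.",
    focus="language that might be exclusionary or demonstrate age-related bias",
)
```

From here, `audit` is ready to paste into a new chat, and the follow-up questions ("explain the potential real-world harm...") still work conversationally on top of it.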
What did you discover?
This is a powerful check-and-balance for your workflow. I want to hear what you found.
What subtle bias did the auditor catch that you might have overlooked?
Did the suggested edits feel more inclusive and accurate?
How could you build this "ethics check" step into your creative or professional process?
Drop your thoughts and discoveries in the comments below!
Until next time—keep tinkering, keep prompting, keep innovating.