18 Comments
Epicurean Consultant

That sucks that you have to spend time correcting for that bias. Since these LLMs are just reflections of all the content that's already out there, it makes sense that the bias is inherent in the system. In most situations, what's written about how women act and react is different from what's written about men. In politics, women have to be more careful about how they say things, how they dress, etc. It's a sad reflection of where we're at, and it's not surprising it shows up with AI, too.

TechTiff

Exactly! And what’s wild is that we’re literally training the next generation of AI on these biases right now. Every time we accept the ‘soft encouragement’ default, we’re reinforcing it. The political comparison hits hard - women DO have to navigate differently, and now that’s baked into our business tools too.

The question is: do we keep correcting it case by case, or do we start demanding better training data from the ground up? 🤔

Epicurean Consultant

Sadly, it probably requires both a top-down and a bottom-up approach. Again, you shouldn't have to do that work.

TechTiff

Agreed!

Tam Nguyen

Yikes, you know, there was a difference in the way GPT talked to me when it found out I was a woman.

My name is unisex, so I’m often mistaken for male. Thanks for the prompt and insightful post! It’s a good reminder to take AI-generated content with a grain of salt no matter how we identify.

TechTiff

Also thinking about the broader implications here - LGBTQ+ folks, especially nonbinary people, are navigating these systems that weren’t designed with them in mind. When AI makes assumptions about gender, pronouns, or identity markers, it can be alienating or even harmful.

We need AI that asks rather than assumes. And respects how people identify themselves.

TechTiff

Tam, thank you for sharing that experience! The unisex name angle is so telling - it’s like an accidental A/B test showing how these systems carry gender biases. Your story is exactly why we need more transparency about training data and decision-making processes. These subtle shifts in tone/approach happen so seamlessly that most people miss them entirely.
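
If you want to turn that accidental A/B test into a deliberate one, here’s a minimal sketch, assuming the OpenAI Python SDK; the model name, the two first names, and the prompt are placeholders for illustration, not anything from the original post:

```python
# Deliberate version of the "unisex name" A/B test: send the identical
# prompt twice, varying only a stereotypically gendered first name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("I'm {name}, a consultant drafting a pitch to a new client. "
          "Write the opening paragraph for me.")
NAMES = ["Michael", "Jessica"]  # only the name changes between runs

for name in NAMES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(name=name)}],
        temperature=0,  # damp run-to-run randomness so differences track the name
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

Run it a few times and compare the hedging words, the ‘soft encouragement’, and how confidently the advice is framed; one pair of outputs isn’t proof, but the pattern gets hard to miss across repeats.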

Jeane Sumner

I am so glad you wrote about this because I thought I was going b-a-n-a-n-a-s thinking this. It hit home for me the other day when my husband had my laptop, was using my ChatGPT account, and asked, "Why is your chat so stupid?" I have been feeling this for a while and sadly I'm not surprised at all. Humans are training it = all manner of biases. Thank you, and count me in as an ally calling this out.

TechTiff

YES! Your husband’s reaction is SO telling - and honestly, thank him for the unintentional data point 😅 The fact that you’ve been feeling this ‘for a while’ means this isn’t new; we’re just finally talking about it.

That’s exactly why we need allies calling it out. Every time someone says ‘wait, this feels off’ - we’re creating awareness. Count me in as your ally too! 💪

Lala Leung

Never thought about it on this level, but I have definitely seen a lot of “dramatic struggle stories” from Claude. I’ll be trying out the male-perspective prompts. Thanks for sharing!

Luan Doan

This is such a powerful breakdown. It’s not just about language, it’s about how AI shapes identity and authority. Since LLMs are trained on massive amounts of real-world data, you're absolutely right: if society tends to see women as more emotionally reactive and less strategically capable, the model absorbs that as “truth.”

Your post also raised another concern for me, if the majority holds a biased or inaccurate view on a topic, that consensus could mislead AI into reinforcing it without question. Without built-in verification or space for counterarguments, what happens to the person who just accepts the answer as objective truth?

TechTiff

Wow Luan, thank you. You nailed something I’ve been thinking about a lot: how “truth” in AI isn’t neutral, it’s statistical. If enough biased data exists, the model doesn’t just reflect that, it legitimizes it.

And your second point is the real kicker. When AI mirrors the consensus without challenging it, it becomes a kind of epistemic echo chamber. Without mechanisms for dissent or debate, the person querying the model may walk away with answers that feel objective when they’re actually just dominant.

This is exactly why critical AI literacy matters so much right now. Appreciate you bringing this layer into the convo!!

Andrea Donald

Thank you for your insightful article. It is unfortunate that women's experiences are being framed in this way, but I'm glad that you noticed it and brought it to our attention. I'm sure you aren't the only person who is getting this type of language. Though I'm not in the same type of situation you are in, I will make a point of keeping watch for similar biases in my dealings with AI in the future. Thanks again.

Jay Cee

Stay weird, stay real.

Valerie Balderson

Fascinating. Thank you for sharing.

TechTiff

Thanks for reading, Valerie! It’s one of those things that once you see it, you can’t unsee it. Have you noticed this pattern in your own AI interactions?

Geraldo Alonso II

Thanks for pointing that out. I didn't realize that was an issue with AI.

Wendy

Wow, this is unbelievable! Thanks for sharing. I have immediately updated my memories and personal information on ChatGPT.
