Image: SFD Media LLC
While we’re told we “don’t get” technology, men are programming our future.
I’m tired of the “women don’t understand technology” bullshit.
You know the trope: the clueless woman in a sitcom clicking random buttons while a smirking man swoops in to fix it. It’s tired, patronizing—and it’s everywhere.
While we’re dismissed as too confused to care, men are building artificial intelligence that will shape everything about our lives: our health care, our work, our privacy, our choices. And they’re programming their biases—about women, about age, about what matters—right into the code.
Time to get uncomfortable, because opting out isn’t an option.
What AI Really Is—and Why It Matters
Think of AI as the world’s most confident intern. It doesn’t think. It predicts, pulling patterns from massive piles of data to spit out the most statistically likely answer.
Ask it, “What do dogs do?” It sifts through the patterns it soaked up from mountains of internet text and decides “play fetch” is the best bet because those words show up together often. The magic trick? It delivers this guess with unwavering confidence, like it’s dropping the wisdom of the ages.
Here’s the crucial part: AI finds correlations, not causation. It can’t tell you why dogs fetch or whether fetching is instinctual; it only knows these words appear together frequently. This distinction matters when AI starts making decisions about your life based on patterns it doesn’t actually understand.
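If you want the trick with the curtain pulled back, here’s a toy version in Python. The five-sentence “internet” is made up, and real models are vastly more sophisticated, but the core move is the same: count which words follow which, then parrot the most frequent pairing.

```python
# A toy "most statistically likely next word" machine -- a miniature,
# made-up stand-in for what large models do at enormous scale.
from collections import Counter, defaultdict

corpus = ("dogs play fetch . dogs play outside . dogs chase balls . "
          "cats chase mice . dogs play fetch .").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return whatever most often came next -- pure correlation,
    # with no idea what a dog is or why it fetches.
    return following[word].most_common(1)[0][0]

print(predict("dogs"))  # -> "play", simply the most frequent pairing
print(predict("play"))  # -> "fetch"
```

Notice what’s missing: nothing in that code knows what a dog is. Scale the counting up by a few trillion words and you get fluency, not understanding.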
For example, I asked ChatGPT to write about wind power in the style of Dr. Seuss, and it rhymed about whooshing skies. Then I asked for Hemingway, and it gave clipped sentences about wind turning turbines, quiet and strong. Same prompt, same algorithm, different “personality.” It’s a fun party trick—until you remember this same technology is used for hiring, loan approvals, and medical decisions. And it’s just matching patterns, not understanding context, ethics, or nuance.
When Boys Code the Future
The “Tech Bro” stereotype isn’t just a meme; it’s a blueprint for who’s shaping AI. Bias isn’t a theoretical problem. It has real-world consequences.
When predominantly male teams build AI, these systems often reinforce existing gender stereotypes and inequalities. Imagine AI models treating men as the default and women as “atypical,” leading to biased decisions in hiring, health care, and even legal judgments.
Researchers Timnit Gebru and Joy Buolamwini discovered that facial recognition tech (think: unlocking your phone, getting through TSA) is trained mostly on lighter-skinned male faces. The result? In their study, error rates for darker-skinned women ran as high as 35 percent, versus under 1 percent for lighter-skinned men. The systems literally don’t see women and people of color accurately.
The problem goes deeper than facial recognition. The UN Commission on the Status of Women found that biased training data gets replicated by algorithms, and the longer these systems run, the more entrenched the bias becomes. Male assumptions become digital truth, coded into the systems running our world.
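To see how lopsided data becomes lopsided decisions, here’s a deliberately simplified sketch. The data is synthetic and the “groups” are invented; this is not any real facial-recognition pipeline. Train a basic model mostly on one group, and watch its accuracy crater for the other.

```python
# A toy sketch (synthetic data, invented "groups") of how skewed training
# data becomes skewed decisions -- not any real facial-recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample(n, rule):
    """Draw n examples; `rule` decides the correct label for each one."""
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(int)

# Group A's outcome depends on feature 0; group B's depends on feature 1.
rule_a = lambda X: X[:, 0] > 0
rule_b = lambda X: X[:, 1] > 0

# Training set: 950 examples from group A, only 50 from group B.
Xa, ya = sample(950, rule_a)
Xb, yb = sample(50, rule_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Test on fresh, equal-sized samples from each group.
for name, rule in [("group A", rule_a), ("group B", rule_b)]:
    Xt, yt = sample(2000, rule)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
# Typical output: group A around 0.97, group B barely above a coin flip.
# The model isn't "broken" -- it learned exactly what it was fed.
```

The model never decided to ignore group B. It just optimized for the data it was given, which is precisely how underrepresentation becomes underperformance.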
The stakes get deadly serious in health care, where bias can mean misdiagnosis or inadequate treatment for women, because AI trained mostly on male patients learns to look for male symptoms. Without diverse training data, a cancer-detection system trained largely on lighter skin can miss melanomas on darker skin. If AI misses a diagnosis because it wasn’t trained on someone with your gender, skin tone, or age profile, that’s not a glitch. It’s a deadly blind spot.
And we’re supposed to trust this tech to decide what’s best for us?
You’re Training AI (Whether You Know It or Not)
Every time you use AI, you help train it. Your Google searches, online shopping, and Grubhub orders all become data points.
Ask ChatGPT for a crockpot recommendation, and it might follow up: “What kind of dishes do you cook most often?” Reply “Hearty soups,” and you’ve just given it specific data to weight future responses. Your reaction becomes training data, teaching the system what women like you “should” want.
It’s digital profiling disguised as personalization. And the fewer women in the rooms where these systems are designed, the fewer checks there are on how women are categorized—and limited—by them.
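Here’s a hypothetical, stripped-down sketch of how that profiling loop can work. The keywords, categories, and products are invented for illustration, not pulled from any real product:

```python
# A hypothetical sketch of "personalization" -- the keywords, categories,
# and products below are invented, not from any real company's system.
from collections import defaultdict

profile = defaultdict(float)  # inferred interest -> weight

TAGS = {"soup": "home-cooking", "crockpot": "home-cooking",
        "hiking": "outdoors", "code": "tech"}

def record_reply(reply):
    # Crude keyword tagging: one casual answer files you into a category.
    for keyword, category in TAGS.items():
        if keyword in reply.lower():
            profile[category] += 1.0

def rank(products):
    # Items matching your inferred profile float to the top;
    # everything else quietly sinks out of view.
    return sorted(products, key=lambda p: -profile[p["category"]])

record_reply("Hearty soups, mostly")
items = [{"name": "trail GPS", "category": "outdoors"},
         {"name": "6-qt crockpot", "category": "home-cooking"}]
print([p["name"] for p in rank(items)])
# -> ['6-qt crockpot', 'trail GPS']: one reply, and the ranking shifts.
```

One offhand answer, and the ranking tilts. Now imagine that loop running on every search, order, and reply, for years.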
The fact that some AI companies now pledge, “We do not use your personal data to train our AI” should terrify you. Why would they need to promise this unless everyone else is doing it?
They’re Deciding Your Future—Without You
Researcher Tina He predicts AI search will fundamentally change how information gets found. Content designed for human appeal—compelling headlines, attractive visuals—will become “essentially invisible to AI evaluators.” The AI will decide what you see based on algorithmic relevance, not human judgment.
It means we risk living in an information landscape filtered by AI priorities, not human values.
And who’s programming those priorities? Hint: not us.
AI’s Confidence Game
AI’s most dangerous trait isn’t that it’s wrong—it’s how confident it sounds when it’s wrong. Researchers call these confident falsehoods “hallucinations.” I call it algorithmic mansplaining.
Case in point: Google’s AI was once asked to explain the made-up phrase, “You can’t lick a badger twice.” It confidently delivered a completely fabricated, but plausible-sounding, explanation with reference links to back it up.
It’s like every man who’s ever explained your own job to you with absolute confidence and zero expertise.
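Part of why the machine never shrugs: under the hood, language models typically end with a step called softmax, which converts raw scores into probabilities that always sum to 100 percent. Here’s a minimal sketch of just that step; the question, candidate answers, and scores are invented:

```python
# A minimal sketch of the softmax step (scores and answers invented here).
# Whatever raw scores a model produces -- even for a nonsense question --
# softmax turns them into tidy probabilities that always sum to 100%.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up candidate answers to a made-up question.
answers = ["it's an old proverb", "it's a physics term", "no such phrase"]
scores = [2.1, 0.3, 0.9]  # garbage in...

for answer, p in zip(answers, softmax(scores)):
    print(f"{p:.0%}  {answer}")
# ...confidence out: the top answer prints at about 68%, even though the
# question was gibberish. There's no built-in "I don't know" row.
```

Unless a system is deliberately built to refuse, some answer always wins the probability contest. Confidence is the output format, not evidence of knowledge.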
Women Are Fighting Back
Thankfully, women are fighting for a seat at the AI table. Dr. Francesca Dominici co-directs Harvard’s Data Science Initiative and works on how machine learning can better serve public health. Dr. Sarah Myers West of the AI Now Institute has testified before Congress on AI’s impact on competition and consumer rights. Matineh Shaker is using AI to help veterans with brain injuries.
They’re doing the work, but we need more of us asking better questions. Organizations like Women in Machine Learning and Black in AI exist because we’re not represented in the rooms where decisions about our futures are being made.
How to Use AI Without Getting Played
Here’s how I think about it: Treat AI like a talented but occasionally delusional intern. Use it for brainstorming, rough drafts, summaries, or trip planning. Let it spit out a birthday toast in the style of Shakespeare. Enjoy the party trick.
Ready to start experimenting? Begin with these free tools:
ChatGPT.com (like having a chatty, smart assistant who can help you brainstorm, draft emails, or explain things in plain English).
Claude.ai (a softer, more conversational style—some say it feels more “gentle”—great for summarizing long articles or tidying up your writing).
Perplexity.ai (acts like a research assistant, giving you quick, sourced answers from around the web when you’re curious but don’t want to go down a Google rabbit hole).
Always double-check its work.
Remember: It doesn’t know anything; it’s just matching patterns with confidence.
A world where AI is built almost exclusively by men is a world where women face greater risks of discrimination, exclusion, and harm, unless deliberate action is taken now.
This is not going away.
Stay curious but stay skeptical. Experiment and know how to use some of the common tools. Because the alternative—letting the same guys who brought us facial recognition that doesn’t recognize our faces build the systems that will govern our futures?
No way.
P.S. This is just the start. If you missed it, catch our piece on the hidden dangers of AI chatbots here.
We’re digging deeper into how AI is quietly shaping—and sometimes sabotaging—women’s lives, one algorithm at a time. Stay tuned for our ongoing series on AI and women’s agency, dropping monthly on PROVOKED. Subscribe, share, and let’s stay loud.