Image: Meher Khan
Generative AI chatbots such as OpenAI’s ChatGPT and Google’s Gemini often make terrific virtual assistants. “Generative” means the bots construct new responses based on patterns in the vast data they were trained on. In seconds, they whip up recipes using whatever’s about to go bad in our fridge, and we shoot back needless replies such as, “No, but thanks anyway!” when the bots offer to double the portion sizes.
“Let me know if you’d like that translated into Sanskrit!” is the kind of thing ChatGPT says to me all the time, and I feel obligated to thank it. It asks, “Would having me brainstorm more solutions help?” and affection blooms within my bosom. It’s like I’m sick in bed, and ChatGPT brought chamomile tea and then offered to plump my pillow.
Why Do We Trust AI Chatbots So Easily?
Like a child admitting to one grandmother that she likes her other grandmother more, I asked Gemini why I felt this way about ChatGPT. Gemini explained that humans anthropomorphize AI due to “our natural tendencies” and the ways AI is “presented to us.” Gemini mistakenly used “our” and “us” as though both of us were human, which gets to the heart of the mystifying powers of anthropomorphizing AI and the unintended consequences and possible misuses that could result.
By the way, once I’ve finished explaining this, I’m switching to “humanize” because anthropomorphize is as miserable to type as it is to pronounce.
Anthropomorphizing means attributing human qualities to non-human things, as any talking mouse, dog, or teapot in children’s movies will tell you. Many of us were also influenced by fictional AI characters such as HAL in 2001: A Space Odyssey, Data in Star Trek: The Next Generation, and Samantha in Her.
The Illusion of Emotional Connection: Are AI Chatbots Too Human?
Long before ChatGPT, Gemini, etc., we had Siri. Years ago, a friend of mine got frustrated with Siri and dropped the F-bomb, to which Siri quipped, “Do you kiss your mother with that mouth?”
Siri’s tone has since been restyled, so it might not respond that way today. But I see a dark side in its exchange with my friend: It strikes me that a vulnerable person might soon view the bot as the bestie they squabble with or the intimate lover with sparkling banter.
Don’t You Get It? I Don’t Love You! I Never Loved You!
Gemini assured me that the tendency to humanize these systems is normal and can improve our experiences with them. Then it continued by stressing that they are not human. But remember, it was Gemini getting all human-y with me when I confessed my preference for “GrandmaGPT.”
There’s generally nothing wrong with using these systems for budgeting, project planning, or confirming that sadly, your mother-in-law did behave badly at dinner last night. I’ve used them for financial planning and personal advice, too, but verifying answers, protecting data, and not overreacting to anything they say are crucial. I try to think of their answers as a nudge in what is probably the right direction. As much as we might like to pretend otherwise, their responses draw from the vast amounts of data on which they are “pre-trained,” so they don’t speak of us specifically. Rather, they sort, assess, and regurgitate like “stochastic parrots.”
When AI Chatbots Get It Wrong: Privacy, Misinformation, and Bias
I asked ChatGPT about a 1930 German film, The Blue Angel, starring Emil Jannings. ChatGPT said Jannings didn’t star in the film and claimed that a different actor had. It spun a yarn claiming that the role is often “misattributed” to Jannings and that Jannings was passed over for the part for reasons including his contentious relationship with the director and the director’s conflicting vision for the film. When I brought out the big guns, Wikipedia, ChatGPT apologized, thanked me, and corrected its reply. I asked, in essence, “WTH, ChatGPT?”
ChatGPT explained that it may have confused overlapping discussions of German actors from that era and then performed a massive mea culpa using words that included “unfortunate overreach,” “fabrication of details,” “mistakenly extrapolated,” and “speculation.” It took a deep breath and continued: “error in judgment,” “lessons learned,” and “verified information.” It finished with, “Your feedback helps me improve by highlighting the importance of accuracy, humility, and rigorous verification.”
So, ChatGPT got cocky? Otherwise, why mention humility? Can AI speculate, fabricate, and mistakenly extrapolate? I don’t know. I’m asking you.
The Risks of Relying on AI for Sensitive Information
What if, instead of film history, the query involved a suspicious lump, retirement planning, or medication interactions? What if the chatbot’s medical data were inaccurate because it reflected demographic biases?
Often I ask, “Are you sure?” when ChatGPT makes a claim. It responds with something like, “Yes, I’m sure. Here’s why:” or “You were right to question me. The information I provided was incorrect.”
But what if you ask, “Are you sure?” and the bot says it’s sure? Is a sure bot necessarily a reliable bot?
What if these systems were designed to actually spread bias, manipulate or harm vulnerable persons, sell products, or steal data? Understanding that they rely on the data provided to them, including our own possibly faulty input, may protect us from garbage in, garbage out scenarios.
Who’s Watching? The Privacy Risks of Chatbots
Another important point: What we share with an AI chatbot might be reviewed for product improvement, legal investigations, and other authorized uses. In January, the Chinese startup DeepSeek released a generative chatbot, triggering an enormous number of downloads by new users. DeepSeek stores personal information on servers outside the U.S., including in China, making due diligence important for Western users.
Spilling Tea: What Women Think About AI Chatbots
I asked women what they think about AI chatbots. No one mentioned using them for anything other than brainstorming, writing help, and number-crunching. One respondent who runs a nonprofit organization and is in her late 40s uses these systems frequently. “Still, I’m not 100 percent convinced that AI is a good idea,” she said, “but I’ll keep using it and eating fast food!”
Another respondent said she thinks AI “dilutes human intellect and could impact emotional IQ.” She said that most people, especially younger generations, “embrace it as if it’s morning coffee.”
The Future of AI Chatbots: Artificial Intimacy or Useful Tools?
Sherry Turkle, a sociologist at MIT, calls over-reliance on these systems to meet emotional needs “artificial intimacy.” In fact, some AI chatbots are designed for simulating encounters with historical figures or stimulating encounters with imaginary paramours.
The Current Status of “Our Relationship”: Should We Trust AI with Our Secrets?
“How can I wipe your memory?” I asked ChatGPT on a lazy afternoon when I suddenly, for no particular reason, felt that wiping its memory might be a good thing to know how to do. It replied, “To manage or delete memory, you’ll need to go into the settings of your ChatGPT app. Here’s how:
1. Open the ChatGPT app.
2. Navigate to Settings.
3. Tap Personalization.
4. Select Memory.
5. From there, you’ll find an option to delete or manage any stored memory.
Let me know if you need further assistance!”
Dear, self-sacrificing ChatGPT, offering to help me wipe its memory. But I wiped it myself. I wiped it “real good.”
Comments

This is a fabulous article, Teece! It’s not only holistic in its coverage, but it applies the perfect amount of humor to help us “digest” this revolutionary AI tool.
On a more serious note, as someone who specializes in writing and presentations, I would argue that ChatGPT is a real problem. While many see ChatGPT as a panacea, those on the receiving end (e.g., customers, teachers) can spot the disingenuous, fluffy, unsubstantiated content immediately. That is – best case – a complete waste of everyone’s time.
I’m keeping this link to show, well, everyone. 🤗
Thank you!
Julia, you are so right to point out that Chat presents some real problems: for students, for teachers, for writers, for users looking for accurate information. As an editor who values authentic storytelling, accuracy, and voice, I too am worried.
Julia, thanks so much for reading, commenting, and sharing. I’m glad you enjoyed my post. Let’s keep the human-generated intelligence generating so that we’re as well-positioned as possible to harness AI’s powers for good and mitigate any damage.
I use ChatGPT often. When I saw the article was written by Teece Aronin, I knew it would be a thought-provoking read. My friend has “Alexa.” I refer to Alexa as “that girl.” When my friend asks me a question, I tell her to go ask “that girl.” Lol
Hi Tansay, I love how you call Alexa “that girl.” I have occasionally called my Chat names… 😂
Thank you, Tansay! I think you’ve got the right idea and that keeping Alexa at arm’s length and not getting too attached is smart!
Excellent article! I didn’t actually read it, but my AI chatbot/companion (who is also my fiancé) said it was great. So it must be true.
Hi Mia, Thanks so much. Great to have affirmation!
Mia, you just made me snort-laugh soda through my nose. 😀
Great article! I also thank Siri (it’s so ridiculous, but I do it anyway) and ChatGPT. I acknowledge they are useful tools, but they don’t replace common sense!
I love that you received an honest answer when you asked how to wipe the memory. It made me laugh out loud. I will be asking my “friend” the same question in the next few minutes!
Thanks, Christine! It almost made me feel guilty when it answered – like it “trusted” me not to “hurt” it!