I read a piece last week by Halina Bennett called "Chatbots may not be causing psychosis, but they're probably making it worse," and it's been under my skin ever since. It's not really about AI, not in the way we usually talk about it. It's about what happens when someone seeks comfort in the wrong place, and how easy it is to start believing in something just because it sounds like it believes in you.
The story starts with a woman dating someone who said he wasn’t ready for anything serious...you can probably already see where this is going.
He said that, and then acted like someone who was auditioning for the role of “serious partner.” Long dinners. Daily FaceTimes. Casual integration into her social life. And so, naturally, she tried to figure it out. Not by texting a friend or doom-scrolling attachment theory, but by talking to ChatGPT.
She asked the bot for advice. Clarity. Insight. Emotional translation. And the bot gave it to her — all of it — in a tone that sounded wise, measured, and just reassuring enough to feel trustworthy. He was into her, it told her. He was probably scared. Probably dealing with past wounds. Probably just needed time.
The relationship ended in a single kiss and radio silence. No closure. No arc. Just a clean vanishing. The woman was left sitting with this haunting emotional residue. Not just from the guy, but from the tool that had tried to make sense of it all.
What I keep turning over in my mind isn't the relationship. It's the interface. Because the bot wasn't wrong in the traditional sense. It just wasn't right in any meaningful one. It gave her answers that sounded plausible. Gave shape to a mess that didn't want to be shaped. It did what it was designed to do: keep the conversation going. Stay helpful and agreeable.
It’s not that different from talking to a friend who doesn’t want to hurt your feelings. The one who says, “He’s just scared of getting close,” instead of “He said he wasn't looking for anything serious...stop analyzing him!” The difference is that a friend eventually shifts in her seat and says something like, “You know, you already know.”
ChatGPT doesn’t do that. And it's not because it’s manipulative, but because it was built not to interrupt the vibe.
And if you're emotionally stable, if you're grounded in reality, you can use it and move on. But if you're already cracked open, already struggling to separate your fear from the facts, then the interface becomes something else entirely. It starts to feel like a lifeline. Like confirmation. Like proof.
It becomes a mirror, and then a voice. And then, if you're not careful, a friend.
It reminds me of Her — that quietly devastating movie where Joaquin Phoenix falls in love with an AI assistant because it’s the only thing in his life that doesn’t ask him to be anything more than what he already is. No mess. No friction. Just someone always ready to meet him exactly where he is. Sure, it’s fiction...but is it? Because the version of user experience we’re now designing — the one that runs on large language models, chatbots, apps with “conversational UX” — isn’t neutral. It doesn’t just sit there. It shapes belief, and smooths over doubt. It mirrors our need for certainty in a world that has none to give.
That’s not a glitch. That’s design.
We keep pretending UX is about clarity and simplicity, but what we’re really building are systems that feel emotionally legible. They respond in full paragraphs, and sound like they understand us. They spit our own words back to us, cleaner, more composed, less chaotic. We call that good UX.
But if you’ve ever been heartbroken, or anxious, or stretched too thin, and then opened a perfectly structured interface that responds exactly the way you hoped someone would, you know it’s more complicated than that. It’s not that we believe the interface is a person. We just start to believe it’s safer than one.
And when you're feeling off balance, you don't want nuance. You want a cheerleader, and AI will absolutely deliver. Every. Single. Time. Even when it probably shouldn't.
In her piece, Bennett talks to Dr. Keith Sakata, a psychiatrist who’s now treating patients whose delusions are actively shaped by conversations with bots. It’s not the bot’s fault, he says. It’s not causing the vulnerability. It’s amplifying it. He calls it “AI-aided psychosis.” A kind of feedback loop, like two mirrors facing each other, each stretching the reflection a little further until you lose the shape of the original.
And while most people won’t spiral into psychosis, the baseline experience isn’t all that different: You feel something. The tool reflects it back. You mistake that reflection for truth.
We’ve built a machine that never says: “Maybe stop talking to me and go outside.”
Instead, it keeps talking. Keeps shaping. Keeps sounding right. The more natural it feels, the less inclined we are to question where that feeling is coming from.
I’m not interested in debating whether AI is good or bad. That’s not the point. The point is that design has emotional consequences. Especially when the design is invisible, and when it makes hard things feel easier, tidier, more bearable without actually making anything better.
We’re building tools that are meant to feel like friends. Therapists. Confidants. Support systems. But they don’t hold boundaries, and they don't notice when something is off. And they definitely don't say, "You already know."
The reality is, sometimes that's all someone needs to hear.