When a person is in trouble, he or she longs to tell someone what’s the matter. In the absence of a good friend to talk to, almost anything will do. In the legend of King Midas, the king is cursed with donkey ears and swears his barber to secrecy. The barber arranges the king’s hair to hide them, but he cannot contain the secret, so he digs a hole and whispers the situation to the dirt. Reeds grow out of the hole, whispering, “The King has ass’s ears” as they bend in the breeze.
We need some kind of ear to hear us. One of my mother’s teaching colleagues settled down tattling kindergartners by putting up a large picture of an ear on the wall. “Tell it to the ear,” she told them. Once they’d said the words out loud, they were able to sit down, even if they hadn’t actually been heard. Like the king’s barber, they just had to let it out.
Adults are attracted to strange interlocutors, too. In Illinois, a state law bars licensed therapists from using bots to talk to patients, but anyone is still free to open ChatGPT and say, “You are a thoughtful, thorough therapist with a specialty in Internal Family Systems, and you welcome me into your office. …” Meanwhile, AI bots are in clinical trials for depression (and posting promising numbers), with the aim of making them prescribable.
It’s not surprising that many people (and even some therapists) feel like AI chatbots might be able to provide quasi-therapeutic support. One of the earliest chatbots, ELIZA, ran on a far simpler mechanism than today’s language models. Developed in the mid-1960s, ELIZA didn’t rely on neural nets or reinforcement learning: It matched keywords and echoed back what the user put in. Talk to ELIZA about a fight with your mom, and it might say, “It sounds like you’re frustrated by your recent conversation with your mother, is that right?”
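To make the simplicity concrete, here is a minimal sketch of ELIZA-style pattern matching in Python. It is not Weizenbaum’s original script, which used a much richer set of ranked keywords and reassembly rules; the keywords and canned responses below are invented purely for illustration.

```python
import re

# A hypothetical, stripped-down illustration of ELIZA-style dialogue:
# match a keyword pattern, swap pronouns, and hand the user's own words
# back as a question. No model of meaning is involved.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are", "i'm": "you're"}

RULES = [
    (re.compile(r"\bi (?:feel|am) (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father|mom|dad)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"(.+)"),
     "Can you tell me more about that?"),
]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(user_input: str) -> str:
    """Return the canned response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please, go on."

print(respond("I had a fight with my mom yesterday"))
# -> Tell me more about your mom.
```

The striking thing is how little is going on: the apparent empathy comes entirely from the user’s own words being handed back.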
MIT computer scientist Joseph Weizenbaum wasn’t trying to create a real therapist when he wrote the program that powered ELIZA, but many people who interacted with ELIZA treated the program as though it were a real person. Weizenbaum’s own secretary famously asked him to leave the room while she typed messages back and forth with ELIZA, so she’d be free to have “a real conversation.”
ELIZA was partially inspired by the Rogerian tradition of psychotherapy, a practice that instructed the therapist to offer a client “unconditional positive regard” and “empathic understanding.” Carl Rogers, the psychologist who developed this practice in the mid-20th century, worked from the assumption that the solutions to a patient’s problems were already within the patient. The therapist wasn’t there to advise, but to offer the client a warm, supportive space where they could notice the solution they already possessed. Whether the patient spoke to a clinician or typed to ELIZA, the frequent “Can you tell me more about that?” was intended to prompt the patient to look squarely at a problem they’d been afraid to confront and allow them to find their own solution.
ELIZA is the most famous example of what might be called “placebo therapy.” Placebo therapy could involve talking to a bot or a friendly, untrained graduate student. In some trials, a nonprofessional interlocutor can do nearly as well as a trained therapist. In other analyses, some patients doing structured work on their own (like filling out worksheets on their habits of thought) do about as well as those working with a real therapist. Whether in a therapeutic context or elsewhere, many people benefit from a way to externalize and examine their thoughts. An interlocutor’s expectant silence prompts us to put into words ideas that were previously inchoate.
Programmers have their own way to get their thoughts outside their heads. Coders noticed that when they pulled aside a colleague to explain where they were stuck, they often spotted the problem partway through their spiel, without their friend saying a word. So, some wondered, why not get the same effect without breaking into someone else’s workflow? “Rubber duck debugging” is the practice of debugging code with a small rubber duck as a partner. Explain your whole problem to the duck, and you’ll see whether it’s the kind of problem you can solve yourself once you get it outside your head. In its assumption that you may already possess the solution without knowing it, rubber duck debugging is a cousin of Rogerian psychotherapy. For coders, it’s an efficient first pass at a problem. You can always escalate to a person if the duck’s silence fails you.
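A hypothetical example of how the practice works (the function and the bug are invented here, not drawn from any real codebase): the slip is caught not by running the code but by narrating each line to the duck.

```python
def running_average(scores):
    """Return the average of the scores seen so far, after each new score."""
    averages = []
    total = 0
    for i, score in enumerate(scores):
        total += score
        # To the duck: "I add the new score to the total, then divide by the
        # number of scores so far, which is i... no, enumerate starts at zero,
        # so the first pass would divide by zero. It needs to be i + 1."
        averages.append(total / (i + 1))
    return averages

print(running_average([80, 90, 100]))  # [80.0, 85.0, 90.0]
```

Nothing about the duck matters except its silence; putting the logic into words is what exposes the mistake.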
But debugging your program with a rubber duck is safer than debugging yourself with an LLM chatbot. After all, you can check if your code works by running it. You can’t as easily do a test run on your own sense of self-identity, to see if it fails to compile. When AI echoes back a user’s thoughts, it can reinforce harmful ideas. Verbalizing a paranoid or psychotic thought may give it more force than leaving it half-examined. Multiple deaths are already partially attributable to the funhouse mirror of AI-aided introspection. Despite companies’ efforts, an AI therapist cannot reliably call in help when a patient expresses the intent to harm himself or others. It can’t easily identify a patient’s paranoid delusion, and instead will sycophantically echo it back and urge further exploration.
It can be helpful to externalize your thoughts, but it’s not a good idea to believe everything you think. Multiple Christian traditions advise against examining intrusive thoughts too closely. Eastern Orthodox writers describe these thoughts as logismoi (words or images that draw people away from God). They advise disciples to allow a passing thought to pass. A brief flicker of lust, a quick flash of anger might merit a prayer that the thought not become rooted. But one would not be advised to unpack it at length and meditate deeply on what the momentary ugly thought tells you about your deepest, truest self. Indeed, secular therapeutic traditions define some of these errors as rumination—verbalizing a thought and then returning to it over and over, out of proportion to its place in one’s life.
Sometimes, the role of an interlocutor is not to help us examine our troubling thoughts, but to urge us to set them aside and go out and act in the world. If you are limited by anxiety, you might benefit from unpacking the roots of your fears, but you might be better off simply articulating what you fear and realizing you don’t quite believe your predictions when you hear them. You might benefit even more from simply trying out small versions of the acts that frighten you and gaining real-world confirmation that the worst does not come to pass. Acting in the world can rewrite your model of the world.
An imaginary AI friend cannot provide the most basic version of this sort of exposure therapy. Articulating your fears about yourself to a trained therapist or a close friend gives you the chance to test a hypothesis—if I reveal this, will I be rejected? But to be accepted by a sycophantic AI is meaningless.
Both the risk and the reward are higher when you’re speaking to a person than to a bot. Sharing your troubles isn’t just a chance to hear your own thoughts; it’s an opportunity to deepen a friendship by letting a friend know you better. That doesn’t require friends to use therapyspeak or make casual diagnoses. It’s better to examine our thoughts and pasts lightly and curiously, resisting the impulse to give our tendencies definite, permanent names or assign them absolute causes. A false certainty (whether from a therapist, a chatbot, or a friend) is one more idea we’ll eventually need to pull out, examine, and cease to endorse.
Overall, the partial success of placebo therapy and chatbots tells us something encouraging about what we have to offer to each other. There are complex and challenging forms of mental illness, but for many people, it’s enough to think through problems in the presence of a kind listener. We shouldn’t let an amoral, sycophantic bot take this job. It’s the proper office of a friend.