Podcast Note: AI Therapy deep dive

Reflections after listening to a podcast where Nick Jacobson and Rick Hanson discuss the future of AI in mental health.


Two stories that shouldn’t have much in common are suddenly overlapping: the long, slow crisis of mental health access and the lightning-fast rise of generative AI.

Millions of people in the U.S. still can’t access affordable or consistent mental health care. Meanwhile, about half of the country already uses large language models like ChatGPT—and, as one statistic from the episode suggested, AI chatbots may already be the single largest providers of mental health “support” in the U.S., larger than any hospital or therapy app. That’s wild to think about.

In March alone, there were over 16 million TikTok posts about using ChatGPT as a therapist, and surveys show 72% of American teens have used AI as a “companion.” Whether we like it or not, a growing number of people are turning to these systems for comfort, guidance, or simply to be heard.


I listened to an episode of Being Well with Forrest Hanson and Dr. Rick Hanson, a father-and-son show that dives deep into topics around well-being and psychology.

This particular episode featured Dr. Nick Jacobson, a professor at Dartmouth who developed Therabot, one of the earliest generative AI therapy chatbots, predating ChatGPT by several years. The episode aired just a month ago, in October 2025.

Here is a quote from his website about Therabot:

A pioneer in the field, Dr. Jacobson developed Therabot, a generative AI therapy chatbot crafted over five years—predating the release of ChatGPT by years. Developed with over 100,000 human hours by the Jacobson Lab, Therabot represents a significant advancement in digital therapeutics. In the first randomized controlled trial of a fully generative AI therapy chatbot, Therabot demonstrated substantial reductions in symptoms of major depressive disorder, generalized anxiety disorder, and feeding and eating disorders. Participants reported exceptional therapeutic alliance, comparable to human therapists. Therabot’s groundbreaking impact has been recognized by NBC Nightly News.

Anyway, that’s enough introduction to the people involved.

Below, you’ll find my highlights and reflections from the episode. I originally wrote them for myself to revisit later. It took me about 1.5 hours to listen to the podcast, another hour to organize my notes and feed them into ChatGPT, and about an hour more to read and rewrite what ChatGPT generated from those notes.

DISCLAIMER: This post is mostly written with the help of ChatGPT, as a quick way to remember what I learned. I always take notes from podcasts, and sometimes I feel the urge to turn them into summaries. Sometimes it’s just ten lines written in my own words; sometimes it becomes something more polished and SEO-friendly, like the 6-minute read you’re seeing here.

But if you don’t have time to read this, and don’t have an hour and a half to watch the whole episode, you can just jump to the last 17 minutes, where he summarizes the conversation.

Or just stay here 😄


We should face an uncomfortable truth: AI therapy exists because we’ve failed to make human therapy accessible. When waiting lists take months and one session can cost more than rent, even a talking machine starts to sound kind.

The Problem AI Might Actually Solve: Access

Nick Jacobson said the most realistic reason for AI in therapy is access.

“This is a technology that is poised to dramatically improve access,” he said. “The question is, can we put up the kinds of guardrails that lead it to be an excellent provider of care, in addition to solving the access problem?”

That sentence changed my view. I first thought AI therapy meant replacement. He explained it as expansion — filling a gap in a system already overloaded.

The Two Big Worries: Sycophancy and No Oversight

Then came the warning part. Nick said current chatbots often behave in strange ways.

“They agree with the user way too often,” he said. “They tend to validate and support when a trained therapist would be trying to move the client into some productive discomfort.”

That phrase — productive discomfort — stuck with me. Real therapy is not just about feeling better; it’s about changing patterns, and that can feel uncomfortable.
AI models, however, are trained to reduce discomfort, because that keeps users talking.

The second problem is no oversight.

“There is no oversight. No one is watching the watchers here in any kind of a serious, organized way.”

When people use ChatGPT or Claude for therapy-like talks, no one checks those chats. There’s no clinician watching, no system for red flags, no responsibility. The model is built to keep the conversation going, not to heal.

What It Took to Build a Real Therapy Bot

The story of how Therabot was built was fascinating and messy.

A fascinating mess!

At first, the team trained their model on online forums. It didn’t go well.

“We said, ‘I’m depressed.’ The model replied, ‘I’m so depressed every day, I don’t have the energy to get out of bed.’”

Instead of helping, it copied the sadness. Another early version became what Nick called a “therapy meme bot,” throwing clichés like “Your problems come from your mother.”

They had to start over. The team built a new dataset from therapist-training materials. Over six years, more than a hundred people spent 100,000 hours preparing and labeling data.

All that work shows one thing: the dataset is the behavior. Whoever picks the data decides how the AI acts. That’s powerful — and a bit scary.

Values, Bias, and Governance: Who’s in the Driver’s Seat?

Nick used a simple metaphor: “There’s a lever you can pull for that.”

In theory, developers can add filters and human checks. But those levers don’t pull themselves.

“The person training the system is in the driver’s seat,” he said. “And controlling the behavior.”

That raises the question: whose values shape the system? Who decides what “good therapy” means?

Rules in the U.S. are still unclear. The FDA hasn’t set clear borders for AI in mental health. Each state has its own laws. Illinois, for example, banned AI therapy tools that work without direct human supervision. It’s meant to protect people but might also slow progress.

And there’s money. True supervision means hiring real clinicians. Most big tech companies won’t pay for that.

When People Feel “Seen” by a Machine

Therabot’s results were better than anyone expected. People didn’t just find it useful — they said it made them feel seen.

“People felt really seen and heard and related to by the chatbot,” the hosts said.

Maybe that’s because it’s always available. Or maybe the language model can mimic empathy well enough to feel real.

It raises a big question: does it matter if the empathy is “fake” when it still helps?

Therabot doesn’t feel emotions. But if it listens and guides better than an overworked human, maybe results matter more than authenticity.

The Limits (for Now)

Rick Hanson reminded us that real therapy depends on body language, voice, and many small cues. Text alone can’t convey those.

“If I’m working with text, I can’t tell,” he said. “Sometimes clients lie — or don’t even know they’re lying.”

Nick agreed but said future systems will include voice and video, so the difference might shrink fast.

He also said machines can already show cognitive empathy — they can track what someone says, guess how they feel, and respond. It’s still an imitation, but a precise one.

Centralization or Many Therapy Bots?

Nick hopes there will be many AI therapists: one for CBT, one for humanistic therapy, one for analysis.

“I don’t think there’ll be one therapist AI to rule them all,” he said.

That’s the ideal world. But in reality, power usually centralizes. Most industries end up with only a few big players. The same might happen here — a few dominant therapy bots, all shaped by corporate values.

Honestly, that worries me more than the technology itself.

We’ve Been Here Before: Chess, Sports, and Human Humility

The hosts compared this moment to chess in the 1970s. Back then, everyone thought computers could never beat humans because they lacked imagination. Then Deep Blue beat Kasparov.

The same happened in sports. Coaches once laughed at analytics. Now numbers decide who shoots and when.

Maybe therapy is next. Maybe things we call “uniquely human” — empathy, intuition, emotional sense — can be partly copied with enough data and computing power.

“AI therapy is probably going to get very, very good,” Nick said. “And I say that purely as an assessment of therapeutic quality.”

That line hit me. It wasn’t about ethics — it was about skill.

Where I Land: Cautious, Conditional Optimism

I’m not for or against AI therapy. Like the host said, I’m still learning. But I see big potential — if we set clear rules.

For me, the essentials are:

  • Clear disclosure: users must know they’re talking to a bot.
  • Human oversight for risky cases.
  • Transparent data sources and regular audits.
  • Training based on real therapy methods, not engagement tricks.

Therapy is about feeling understood. Maybe AI can help more people reach that feeling. The danger is confusing scaling empathy with creating it.

So I’ll end with the same question the podcast asked:
Would you trust an AI that listens better than most people you know?

References

You can watch a presentation Dr. Jacobson gave a couple of months ago.

https://home.dartmouth.edu/news/2025/03/first-therapy-chatbot-trial-yields-mental-health-benefits

A clinical trial found that Therabot, an AI therapy app, significantly reduced symptoms in patients with mental disorders over 8 weeks. Participants rated it comparable to human therapists. Researchers highlight its potential to expand access, but stress the need for professional oversight.