AI, Therapy, and Everything in Between: What I Learned from Ellie Pavlik & Soraya Darabi

AI is becoming a surprising source of emotional support. This summary explores what experts say about AI therapy, risks, opportunities, and human connection.


This episode dives deep into one of the most sensitive crossroads of our time: the meeting point between generative AI and mental health. Host Bob Safian talks with two people who sit right at that intersection:

  • Ellie Pavlik – Director of ARIA (the AI Research Institute on Interaction for AI Assistants) at Brown University, leading a new academic effort on AI and mental health.
  • Soraya Darabi – VC partner at TMV, an early investor in mental health and AI startups like Slingshot AI and Daylight Health.

The conversation sits between excitement and anxiety: AI is already being used for emotional support, but we’re only just starting to understand what that means.


Why AI and Mental Health Can’t Be Ignored Anymore

The episode opens with a stark reality:

  • A huge portion of people who use ChatGPT and similar tools are using them for mental health, emotional support, or “therapy-like” conversations.
  • At the same time, OpenAI has faced lawsuits claiming that chatbots contributed to suicides or psychological crises.

Ellie explains that ARIA didn’t originally set out to focus on mental health. In fact, they initially removed it from their list because the stakes felt too high:
If AI goes wrong here, the harm can be massive.

But they came back to it for one simple reason: People are already doing it.

Users are turning chatbots into therapists, confidants, even partners. Startups are building products on top of this behavior. Some may be responsible, others not. The worst outcome, Ellie says, is that this all evolves without scientific leadership, guardrails, and shared language to talk about what’s actually happening.

The Scale of the Mental Health Gap

Soraya zooms out to the big picture:

  • Roughly 1 in 8 people worldwide struggle with some form of mental health issue.
  • Fewer than 50% seek treatment, and for many who do, cost and access are huge barriers.

From an investor point of view, that’s a massive “market.” But she emphasizes that for TMV, it’s not just market size; it’s ethics + scale: can AI expand access without causing more harm?

She mentions:

  • Slingshot AI – a foundational model focused on psychology, powering the app Ash for mental health support.
  • Daylight Health – technology that helps nurses and primary-care teams deliver basic mental health support when doctors don’t have time.

These tools aren’t substitutes for full therapy in complex cases, but they can take pressure off the system by handling lighter, repetitive or structured support needs.

Why People Are Drawn to AI for Emotional Support

A big theme: we need to understand not just what AI does, but what humans are getting from it.

People are using AI for:

  • late-night emotional support when no therapist is available,
  • “practice” conversations,
  • working through anxiety in private,
  • journaling and self-reflection with feedback.

Ellie suggests that right now, AI might be closer to “smart journaling” than full human replacement. A good therapist often helps you:

  • externalize thoughts,
  • reflect on experiences,
  • reframe patterns (like in CBT),
  • practice gratitude or goal-setting.

AI can already support some of that structured reflection, especially in CBT-like exercises, which have clearer protocols and don’t require deep psychoanalytic insight into childhood or relationships.

Empathy, “Lived Experience,” and What We Don’t Actually Know

One common critique is: “AI can’t be empathetic. It hasn’t lived a human life.”

Ellie pushes back not with a counterclaim, but with a reality check:
We don’t even have clear scientific definitions for big concepts like “empathy,” “understanding,” or “lived experience.”

So when we say, “AI can’t do X,” we first need to ask:

  • What exactly is X?
  • Which part of empathy is essential for positive mental health outcomes?
  • Which parts can be approximated by patterns in language and large-scale interaction data?

Humans also recognize fake empathy in other humans. We filter out people who “say the right words but don’t mean them.” That suggests there are multiple layers to what we call empathy – and we still don’t know which layers matter most in therapy-like settings.

Not One Big Chatbot for Everything

Ellie is especially skeptical about the “one giant LLM for all use cases” idea.

Right now, the same type of model is being used for:

  • writing code,
  • helping with homework,
  • answering science questions,
  • and now, mental health support.

Just because the model works brilliantly for code (which is easy to evaluate: it either runs or it doesn’t) doesn’t mean it’s well suited for therapy, where success is fuzzy and long-term.

Mental health, education, and leadership are fields where:

  • We don’t have good quantitative metrics even for humans.
  • We rely on proxies (test scores, symptom checklists, user satisfaction) that everyone agrees are imperfect.

If AI progress depends on clear metrics, mental health is one of the hardest possible testing grounds.

Evaluation and Participatory Design

ARIA’s approach is built on two key principles:

  1. New evaluation methods
    • No simple leaderboard.
    • No “correct vs. incorrect” labeling of responses, as in coding tasks.
    • Success will likely be defined through long-term outcomes, user experience, safety, and social impact.
  2. Participatory design
    • Not just AI researchers and startups deciding what “success” looks like.
    • Involving:
      • clinicians,
      • patients,
      • skeptics of AI,
      • regulators,
      • and everyday users.

The goal: define what we actually want AI to do in mental health before we lock ourselves into the wrong patterns.

Cautious Optimism and a Narrow Window

Both guests end in a similar emotional place: cautious optimism.

  • Soraya: We can’t pretend AI isn’t here. People are using it for everything from “I’m anxious about work” to “I need help handling patients.” The job now is to use it responsibly, not freeze in fear.
  • Ellie: Right now, we still have a window of choice. We can set guardrails, build better models, and shape norms before the technology and business incentives fully solidify.

If we move too fast, or only chase hype, that window could close.
If we only react with fear, we miss real opportunities to help people who have no access to human support today.

The episode leaves us with a challenging but hopeful stance:
AI in mental health is neither savior nor villain. It’s a powerful, messy tool in the hands of humans — and what we do next will determine whether it mostly heals or mostly harms.

And honestly, I liked this crossover between artificial intelligence and therapy. I’m also one of those men who struggle to admit they need therapy.

I listened to the episode and summarized it with the help of a couple of friends. Those friends of mine are, admittedly, not human.

Thanks for reading. Please share your thoughts with me if you can find me. I think you can find me.