Can AI improve access to mental health care? Possibly, Stanford psychologist says

“Hey Siri, am I depressed?” When I posed this question to my iPhone, Siri’s reply was “I can’t really say, Jennifer.” But someday, software programs like Siri or Alexa may be able to talk to patients about their mental health symptoms to assist human therapists.

To learn more, I spoke with Adam Miner, PsyD, an instructor and co-director of Stanford’s Virtual Reality-Immersive Technology Clinic, who is working to improve conversational AI to recognize and respond to health issues.

What do you do as an AI psychologist?

“AI psychology isn’t a new specialty yet, but I do see it as a growing interdisciplinary need. I work to improve mental health access and quality through safe and effective artificial intelligence. I use methods from social science and computer science to answer questions about AI and vulnerable groups who may benefit or be harmed.”

How did you become interested in this field?

“During my training as a clinical psychologist, I had patients who waited years to tell anyone about their problems for many different reasons. I believe the role of a clinician isn’t to blame people who don’t come into the hospital. Instead, we should look for opportunities to provide care when people are ready and willing to ask for it, even if that is through machines.

I was reading research from different fields like communication and computer science and I was struck by the idea that people may confide intimate feelings to computers and be impacted by how computers respond. I started testing different digital assistants, like Siri, to see how they responded to sensitive health questions. The potential for good outcomes — as well as bad — quickly came into focus.”

Why is technology needed to assess the mental health of patients?

“We have a mental health crisis and existing barriers to care — like social stigma, cost and treatment access. Technology, specifically AI, has been called on to help. The big hope is that AI-based systems, unlike human clinicians, would never get tired, would be available wherever and whenever the patient needs them, and would know more than any human could ever know.

However, we need to avoid inflated expectations. There are real risks around privacy, ineffective care and worsening disparities for vulnerable populations. There’s a lot of excitement, but also a gap in knowledge. We don’t yet fully understand all the complexities of human–AI interactions.

People may not feel judged when they talk to a machine the same way they do when they talk to a human — the conversation may feel more private. But it may in fact be more public because information could be shared in unexpected ways or with unintended parties, such as advertisers or insurance companies.”

What are you hoping to accomplish with AI?

“If successful, AI could help improve access in three key ways. First, it could reach people who aren’t accessing traditional, clinic-based care for financial, geographic or other reasons like social anxiety. Second, it could help create a ‘learning healthcare system’ in which patient data is used to improve evidence-based care and clinician training.

Lastly, I have an ethical duty to practice culturally sensitive care as a licensed clinical psychologist. But a patient might use a word to describe anxiety that I don’t know and I might miss the symptom. AI, if designed well, could recognize cultural idioms of distress or speak multiple languages better than I ever will. But AI isn’t magic. We’ll need to thoughtfully design and train AI to do well with different genders, ethnicities, races and ages to prevent further marginalizing vulnerable groups.

If AI could help with diagnostic assessments, it might allow people to access care who otherwise wouldn’t. This may help avoid downstream health emergencies like suicide.”

How long until AI is used in the clinic?

“I hesitate to give any timeline, as AI can mean so many different things. But a few key challenges need to be addressed before wide deployment, including the privacy issues, the impact of AI-mediated communications on clinician-patient relationships and the inclusion of cultural respect.

The clinician–patient relationship is often overlooked when imagining a future with AI. We know from research that people can feel an emotional connection to health-focused conversational AI. What we don’t know is whether this will strengthen or weaken the clinician–patient relationship, which is central to both patient care and a clinician’s sense of self. If patients lose trust in mental health providers, it will cause real and lasting harm.”

This is a reposting of a Scope blog story, courtesy of Stanford School of Medicine.


10 Comments

1. AI has moved swiftly from science fiction to reality in the last few years, and the technology is already enhancing business processes: we can see it in neural machine translation, chatbots, data analysis, pattern recognition in imaging and voice, and more. How we use it, though, depends on our intentions, and that can be scary sometimes, but I have huge faith in humanity.


2. There is something keeping me from fully accepting AI. Still, this is an interesting concept, and although I believe nothing can truly compare to human-to-human interaction, it is something worth researching further. After all, we truly are in a digital age.

