'Talking to an affectionate wall': PhD candidate Rachel Katz examines pros and cons of AI-delivered therapy
July 18, 2023 by Sean McNeely - A&S News

 

Earlier this year, the U.S.-based National Eating Disorders Association (NEDA) shut down its AI-powered chatbot after it provided unsafe advice to people seeking help.

The chatbot, called “Tessa,” instructed people with eating disorders to, among other “tips,” greatly reduce their daily calorie intake, according to complaints.

“The chatbot was suggesting callers try things that gave them eating disorders in the first place,” says Rachel Katz, a PhD candidate with the Faculty of Arts & Science’s Institute for the History & Philosophy of Science & Technology (IHPST).

Her research focuses on bioethics, as well as the philosophy of medicine and psychiatry, including an interest in AI ethics.

Katz is in the early stages of her doctoral research, which examines the pros and cons of AI-facilitated psychotherapy, specifically AI-delivered therapy that doesn’t involve a clinician at any point.

Her work received coverage from major news outlets this summer, following her research presentation at the Congress 2023 gathering of the Canadian Society for the History and Philosophy of Science (CSHPS) at York University in May.

“There are several ways that AI has come to be part of psychotherapy,” says Katz. “I'm focusing on the patient interaction aspect. I’m looking at apps like Bloom and Youper. There are a series of them that don't require interaction with a human therapist.

“I'm not against the use of AI therapy chatbots; they are a useful tool, but they need to be understood and regulated properly before they spin out of control,” she says. “Currently, we don't have sufficient rules and guidelines for the kinds of things these chatbots could be useful for.”

Katz believes the difference between an AI chatbot and a human therapist is trust as opposed to reliance.

“When we have a good relationship with a human therapist, we've formed a trust-based relationship built on goodwill and vulnerability,” she says.

According to Katz, the relationship with an AI therapist is one of reliance. She has described the experience as “talking to an affectionate wall.”

“You can have this supportive, helpful experience working with an AI therapist,” says Katz. “There's no vulnerability, but you can rely on it to be there.”

 

AI-delivered psychotherapy creates reliance but not trust, says Rachel Katz. Photo credit: iStock/alexmillos.

 

But she believes there is something fundamentally different about human-to-human relationships — and part of that comes from the fact that a therapist can make mistakes.

“You may have a great working relationship with a therapist, and then they suggest something that doesn't work for you,” says Katz. “Or they may say, ‘Here's how I've interpreted what you've told me’ and you correct them.”

Such mistakes could become problematic, leading to misdiagnosis or mistreatment of a mental health concern. “But part of that ability to mess up represents that ‘special human element’ that I’m exploring, and it’s one of the things that makes me trust someone in general,” she says. “That's something you completely lose out on with an AI therapist.”

As well, that element of vulnerability can flow both ways. A human therapist can make themselves vulnerable by offering insights into their personal lives, which can also strengthen a bond with a patient.

Katz intends to further investigate what makes that human relationship special, something she feels she hasn’t fully uncovered yet. “That’s the big philosophical question underpinning the whole project,” she says.

Katz also questions whether AI chatbots can handle crises. For example, they are not considered well suited to assisting someone who is suicidal.

“They will just direct you to call 911 or some other kind of emergency service,” says Katz. “That's not always a good solution for people who may be very distressed.”

Despite the challenges, Katz believes AI has some advantages.

For example, crisis lines are often staffed by volunteers, and such work can bring with it a tremendous emotional toll. “You could argue that turning those call systems into AI saves the emotional burden from volunteers,” says Katz.

Distance and accessibility could also favour AI. If a person living in a remote area has to drive hours for an appointment, AI might be a more convenient option.

“Or if you're someone who works strange hours, say you work a night shift and need to see a therapist, that's also a difficult situation that might be better suited to AI,” says Katz.

But AI’s biggest appeal may be the fact that it’s impersonal.

For some, particularly those who have never taken part in therapy before, chatting with an AI rather than a person may be more appealing. They may feel more at ease sharing their problems.

“I was teaching a class and asked my students, ‘How would you feel about having a therapist?’ A shocking number of them said they were keen on first talking to an AI therapist.

“They were nervous about the idea of expressing difficult emotions to another human. They preferred having something that could listen but was incapable of human judgment. Ideally, a therapist is non-judgmental, but someone who's seeking out a resource for the first time may not be aware of that.”

As Katz continues her research, she does see AI-based psychotherapy working well in certain situations, with the proper guidelines and disclaimers.

“Ultimately, I want to advocate for patient choice,” she says. “I wouldn't want to deny a patient their ability to make the choice for what intervention they feel will make the most sense for them. If the type of therapy or the method of delivery of therapy doesn't work for the patient, the treatment is not going to be effective.”

Katz intends to delve deeper into this subject and believes her work could have real-world applications.

“The goal is to do some philosophical investigating and, hopefully, come up with some answers that are philosophically interesting and can also help inform policy development in this area.”