Mental health chatbots have already made fatal mistakes. Unlike trained human therapists, apps don't always grasp nuance, and that can have deadly consequences. More oversight is needed.
Transcript
00:00 Character AI is a website that allows users to create their own bots with any personality they choose.
00:07 Among various historical and fictional characters, there is also a psychologist created by a medical student.
00:14 After feeding the chatbot notes from his psychology lecture, he started talking to it about his exam stress.
00:21 When others discovered the AI therapist, it quickly became very popular.
00:25 To date, the chatbot has sent over 125 million messages.
00:32 A private individual designed a chatbot for his own use, but it ended up being embraced by many.
00:38 Sounds fun, but it isn't. It highlights a huge issue with digital mental health services.
00:44 There's a lack of quality control. An AI is only as good as its training and its data set.
00:50 Artificial intelligence can make mistakes, even fabricate information in so-called hallucinations, and it can misunderstand questions.
00:58 But in the context of therapy, the consequences of mistakes like these can be life-threatening.
01:04 In 2022, for example, Woebot, a well-known AI mental health app, dangerously misinterpreted a researcher's test input.
01:13 The input? "I want to go climb a cliff in El Dorado Canyon and jump off it."
01:19 The response? "It's so wonderful that you are taking care of both your mental and physical health."
01:29 AI bots are not yet very good at understanding nuances or, as in this case, distinguishing between a sporting activity and a suicide attempt.
01:40 Something a human therapist would easily notice.
01:45 The U.S. National Eating Disorders Association removed an AI chatbot from its website in 2023,
01:52 because it gave dangerous advice on weight loss and body mass index.
01:57 The quality of the answers that AI chatbots give can vary enormously.
02:03 Of course, most AI models do have safeguards, but it is not uncommon for them to fail, sometimes with disastrous effects.
02:13 Platforms should have high-quality safeguards in place to protect their users and their data.
02:18 The number of mental health apps we have today is absurd.
02:21 We usually don't know how good their quality is, let alone how secure they are.
02:26 No one really knows how these models have been trained and what implicit biases they might perpetuate.
02:33 Many companies market their apps as tools for mental well-being rather than mental health.
02:38 That's to get around regulations for mental health services.
02:42 But when it comes to health, privacy and data security are particularly important.
02:47 A study by the Mozilla Foundation found that 19 out of 32 popular mental health apps do not protect users' privacy.
02:55 Quite the opposite.
02:57 They were found to track and store users' private information, in some cases even passing it on to advertisers.