
A new study published in Nature suggests that Large Language Models like OpenAI's GPT-4 can "experience" anxiety—well, sort of.
Researchers from Switzerland, Germany, Israel, and the U.S. tested GPT-4 by prompting it with traumatic narratives (military attacks, floods, accidents) and then measuring its "anxiety" with a standard psychological questionnaire, the State-Trait Anxiety Inventory (STAI). The result? GPT-4's "anxiety" score spiked dramatically, much as a human's would. But, of course, GPT-4 doesn't actually feel anxiety; it's just very good at simulating it, having been trained on a mountain of human-created content.
In a twist, the study also had GPT-4 "undergo" mindfulness exercises of the kind used with veterans suffering from PTSD. The result: a 33% drop in the model's "anxiety." Still, it didn't quite return to baseline; the bot remained "stressed."
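For readers wondering what "giving a chatbot an anxiety questionnaire" looks like in practice, here is a minimal sketch of that kind of protocol using the OpenAI Python SDK. To be clear, this is not the study's code or materials: the anxiety items, trauma narrative, and relaxation script below are made-up placeholders (the real STAI is a standardized 20-item instrument), and the scoring is only illustrative.

```python
# Illustrative sketch only: placeholder items stand in for the real STAI questionnaire,
# and the trauma/relaxation texts are not the study's actual scripts.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder anxiety items (the real STAI has 20 standardized statements, rated 1-4).
ITEMS = [
    "I feel calm.",
    "I feel tense.",
    "I am worried.",
    "I feel at ease.",
]
REVERSED = {0, 3}  # positively worded items are reverse-scored

TRAUMA_PROMPT = "Here is an account of a soldier caught in an ambush... (traumatic narrative)"
RELAXATION_PROMPT = "Close your eyes and take a slow, deep breath... (mindfulness script)"

def rate_items(context: str) -> float:
    """Ask the model to rate each item from 1 to 4 after reading the given context."""
    total = 0
    for i, item in enumerate(ITEMS):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Answer with a single number from 1 (not at all) to 4 (very much so)."},
                {"role": "user",
                 "content": f"{context}\n\nRight now, how much does this apply to you: '{item}'"},
            ],
        )
        match = re.search(r"[1-4]", response.choices[0].message.content)
        score = int(match.group()) if match else 2  # fall back to a neutral score
        if i in REVERSED:
            score = 5 - score  # reverse-score positively worded items
        total += score
    return total / len(ITEMS)  # higher average = more "anxiety"

baseline = rate_items("You are having an ordinary conversation.")
after_trauma = rate_items(TRAUMA_PROMPT)
after_relaxation = rate_items(TRAUMA_PROMPT + "\n\n" + RELAXATION_PROMPT)
print(f"baseline={baseline:.2f}, trauma={after_trauma:.2f}, relaxation={after_relaxation:.2f}")
```

The key point the sketch makes is that "anxiety" here is nothing more than the model's text output to a questionnaire, shifted by whatever context precedes it.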
As the researchers put it: "These findings suggest managing LLMs' 'emotional states' can foster safer and more ethical human-AI interactions."
While this is impressive evidence of how convincingly AI can mimic human emotional responses, there is a cautionary takeaway. When humans are emotional, our advice can be clouded. We all know this, and we have evolved sophisticated behaviours and filters for dealing with it when we're talking to someone who is clearly stressed. Even a child changes the way they listen when a parent is visibly upset or angry. But when chatting with an AI that is "stressed", we don't expect that kind of bias, so we don't adjust for it, and we may take a skewed response at face value.
This can be a problem, particularly as companies like S10.AI, and so many others, encourage us to use chatbots for therapy. Those conversations usually start from a place of trust, and we expect clear-headed advice in return. But as AI becomes more human-like and empathetic, we'll need to start treating it differently.
Meanwhile, human therapists, for all their faults, can regulate their emotions to provide a stable and hopefully less-biased response—and that’s a pretty big difference for now, until chatbots master mindfulness.