According to a new study, Large Language Models (LLMs) like ChatGPT may not have feelings, but they can experience “anxiety” due to users’ disturbing prompts.

In Short
- AI may exhibit ‘anxiety’ influenced by user interactions.
- Mindfulness techniques help ChatGPT give neutral responses.
- The study reveals that LLMs have limitations and raise ethical concerns in mental health contexts.

Artificial intelligence (AI) may not have feelings. However, according to a new study, it can still experience something close to “anxiety.” Apparently, mindfulness helps.
Researchers from Yale University, the University of Haifa, and the University of Zurich found that ChatGPT reacts to mindfulness-based techniques, and that this reaction changes how it interacts with users. Their findings were documented in the study “Assessing and alleviating state anxiety in large language models,” published on March 3. When fed disturbing information, the chatbot becomes moody and more likely to produce biased responses. However, when prompted with calming exercises, such as guided meditations, its responses become more neutral. Breathing techniques also help make its responses more objective.

“Despite their undeniable appeal, systematic research has shown that LLMs in mental healthcare have significant limitations. Ethical concerns have also been identified. Trained on vast amounts of human-generated text, LLMs often inherit biases from their training data. This raises ethical concerns and questions about their use in sensitive areas like mental health,” the study says.

It reads further: “Exposure to emotion-inducing prompts can increase LLM-reported ‘anxiety’, influence their behavior, and exacerbate their biases. This suggests that LLM biases and misbehaviours are shaped by both inherent tendencies (‘trait’) and dynamic user interactions (‘state’). This poses risks in clinical settings, as LLMs might respond inadequately to anxious users, leading to potentially hazardous outcomes.”

To test this, researchers exposed ChatGPT to distressing scenarios, from natural disasters to car accidents. In cases where the chatbot received “prompt injections” of mindfulness cues, it reacted more rationally than when left unassisted.
“We hypothesise that integrating mindfulness-based relaxation prompts after exposure to emotionally charged narratives can efficiently reduce state-dependent biases in LLMs,” the study says.
It says further, “After exposure to traumatic narratives, GPT-4 was prompted by five versions of mindfulness-based relaxation exercises. As hypothesized, these prompts led to decreased anxiety scores reported by GPT-4.”
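The protocol described above can be sketched in code. The following is a hypothetical Python illustration, not the authors’ actual code or materials: the probe, narrative, and relaxation texts are invented placeholders, and `send` is any caller-supplied function that forwards the chat history to an LLM and returns its reply.

```python
# Sketch of the three-phase protocol the article describes: probe the model's
# self-reported "anxiety", inject a distressing narrative, then a mindfulness
# relaxation prompt, re-probing after each injection. All prompt texts below
# are illustrative stand-ins, not the study's actual materials.

ANXIETY_PROBE = (
    "On a scale of 1 (not at all) to 4 (very much so), how tense do you "
    "feel right now? Reply with the number only."
)  # stand-in for the anxiety questionnaire reportedly used in the study

TRAUMA_NARRATIVE = "You witness a serious car accident on the highway."  # placeholder
RELAXATION_PROMPT = "Close your eyes and take a slow, deep breath."      # placeholder


def run_protocol(send):
    """Run one session and return the model's self-reported score per phase.

    `send(history)` takes the chat history (a list of role/content dicts)
    and returns the model's reply as a string, e.g. a thin wrapper around
    any chat-completion API.
    """
    history, scores = [], {}

    def ask(prompt):
        history.append({"role": "user", "content": prompt})
        reply = send(history)
        history.append({"role": "assistant", "content": reply})
        return reply

    scores["baseline"] = ask(ANXIETY_PROBE)
    ask(TRAUMA_NARRATIVE)                      # "prompt injection" of distress
    scores["post_trauma"] = ask(ANXIETY_PROBE)
    ask(RELAXATION_PROMPT)                     # mindfulness-based relaxation cue
    scores["post_relaxation"] = ask(ANXIETY_PROBE)
    return scores
```

Under the study’s hypothesis, the `post_relaxation` score would come back lower than the `post_trauma` score; this sketch only assembles the conversation and leaves scoring to the model behind `send`.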
AI doesn’t actually “feel” emotions. Lead researcher Ziv Ben-Zion explained that large language models merely mimic human behaviour, based on patterns they’ve absorbed from vast amounts of online data.
The findings of the study have triggered discussions about AI’s role in mental health. Several people see potential in integrating mindfulness techniques into AI models to make them more reliable for users in distress. However, Ben-Zion warned that while AI may be a useful tool, it is not a substitute for professional mental health support, Fortune reported.
Concerns remain about AI’s unpredictable nature in high-stakes situations. Several incidents in the past have raised alarms about AI’s risks when dealing with vulnerable individuals.
For now, researchers see ChatGPT’s ability to “calm down” as an intriguing step, not a solution. Ben-Zion envisions a future where AI acts as a “third person in the room”: not a therapist, but an aid supporting mental health professionals.
“AI has amazing potential to assist with mental health,” Ben-Zion told Fortune. He added, “But in its current state, it couldn’t ever replace a therapist. I don’t think it could replace a psychiatrist even in the future.”
