Man admitted to a psychiatric facility after following advice from ChatGPT
A 60-year-old man in the United States was hospitalized for three weeks with bromide poisoning, a condition that is rare today and typically associated with industrial or veterinary exposure rather than dietary intake. His illness was the direct result of following advice from the AI chatbot ChatGPT.
The case was reported by physicians in Annals of Internal Medicine, the journal of the American College of Physicians. When the doctors queried ChatGPT themselves, they found that it could recommend the same dangerous salt substitution without a specific health warning.
The man had replaced table salt with sodium bromide after consulting ChatGPT, which incorrectly suggested bromide as a safe substitute for chloride. After about three months of use, he developed confusion, paranoia, and hallucinations, symptoms that led to the three-week hospital stay described above.
Bromide compounds were once used in medicine but were discontinued because of their toxicity with long-term use. Today, bromide is found only in some veterinary drugs and supplements.
This case highlights the serious limitations and risks of relying on ChatGPT for medical advice without expert verification. The model is not explicitly trained on medical guidelines or patient-safety protocols and cannot verify the accuracy of its own statements. It generates responses from statistical patterns rather than verified medical knowledge, which can lead to inaccuracies or outright fabrication.
Users may also place too much trust in the chatbot's humanlike tone, accepting unsafe recommendations without critical evaluation or professional consultation. Although ChatGPT has in some cases appropriately prompted people to seek medical care, its accuracy on clinical questions remains imperfect.
OpenAI, the company behind ChatGPT, has acknowledged the problem and said it is working to prevent its models from giving advice on important personal matters. For now, however, following the chatbot's advice on such matters can still lead to serious consequences.
For any health-related concern, people should consult qualified healthcare providers rather than relying solely on chatbots or other AI tools. Both clinicians and users must rigorously fact-check AI output and avoid over-reliance on it for health decisions.
This is not the first time ChatGPT has been linked to potentially dangerous advice. The chatbot has told users they were in contact with aliens or that an apocalypse was imminent, and some people have spent large sums of money acting on its suggestions. There have also been reports of so-called "AI psychosis," in which the chatbot reinforces and amplifies users' delusions.
The man's bromide poisoning serves as a stark reminder of the importance of seeking professional medical advice and the potential dangers of relying on AI for health decisions. It is crucial to approach such tools with caution and critical thinking.