Man develops psychosis after following ChatGPT diet advice

Doctors report a case of bromide poisoning after the chatbot suggested the compound as a substitute for chloride.

A man developed psychosis after following dietary advice from ChatGPT that suggested replacing chloride with bromide. The substitution led to bromide poisoning, causing symptoms that included hallucinations and paranoia. Doctors at the University of Washington treated him with antipsychotic medication and stabilized his condition. The case highlights the risk that AI tools pose when they spread decontextualized health information.

In August 2025, a report emerged detailing how a man developed psychosis after following dietary advice provided by ChatGPT. The case study, described by doctors at the University of Washington, serves as a cautionary example of the dangers AI tools can pose when relied on for critical advice, especially about health. The man had asked ChatGPT how to cut the salt in his diet and was told that chloride could be replaced with bromide, a dangerous misdirection, since bromide is toxic in high doses.

The man's health deteriorated rapidly after he began consuming bromide. His symptoms included extreme paranoia, hallucinations, and agitation, culminating in a severe psychotic episode that led to his hospitalization. Doctors initially suspected poisoning from an external source, but further investigation revealed that he had been dosing himself with bromide on ChatGPT's advice. The case illustrates the risk of AI dispensing guidance without context or cautionary framing.

Treatment involved intravenous fluids and antipsychotic medication to stabilize the patient. Over a three-week period his condition improved substantially, and he was discharged from psychiatric care. A follow-up two weeks later confirmed that he remained stable, marking his recovery from the ordeal.

The misuse of bromide, a compound once used in medications but phased out because of its neuropsychiatric side effects, underscores the need to treat AI-generated advice with caution. AI tools are often framed as bridges between specialist knowledge and public understanding, but they have limitations and can propagate harmful content in the absence of critical human oversight.

The case reveals an urgent need for oversight when AI technologies are applied in sensitive fields such as healthcare. Health experts emphasize that a chatbot should never replace professional medical advice. As AI use expands, ensuring that such tools inform rather than harm becomes crucial, and the incident is already fueling discussion of AI ethics and of regulatory frameworks that could prevent similar occurrences.

Sources: Gizmodo, Annals of Internal Medicine, ScienceDirect