OpenAI transcription tool widely used by doctors and hospitals raises concerns over hallucinations
Concerns mount over the use of OpenAI's hallucination-prone Whisper tool in healthcare.
OpenAI’s transcription tool, Whisper, is under scrutiny for its tendency to invent text that was never spoken, a risk that is especially serious in medical settings. Despite warnings against its use in high-risk domains, the tool is used by over 30,000 clinicians and 40 health systems, including Mankato Clinic and Children's Hospital Los Angeles, to transcribe medical consultations.
The central concern is that Whisper fabricates content outright, including nonexistent medical treatments, and inserts it into transcripts. One study found hallucinations in 80% of the public meeting transcripts it examined, and similar problems have been reported in medical transcription, where Nabla's Whisper-based tool has already been used to record more than 7 million medical visits.
Calls have been made for OpenAI to address the hallucinations, especially since Whisper is also built into cloud platforms offered by Oracle and Microsoft. Critics argue that OpenAI needs to fix the flaw promptly, as fabricated text in medical transcripts could have dire consequences, including misdiagnosis.