OpenAI transcription tool widely used by doctors and hospitals raises concerns over hallucinations

Concerns mount over the use of OpenAI's hallucination-prone Whisper tool in healthcare.

OpenAI's Whisper transcription tool is criticized for its tendency to fabricate text, raising serious concerns in medical settings. Despite warnings, it is used by over 30,000 clinicians and 40 health systems, including prominent US medical centers, to transcribe medical consultations. Researchers found hallucinations in around 80% of the public meeting transcripts they examined, along with numerous cases in medical recordings, underscoring the risks of healthcare use. The tool's integration into major platforms from Oracle and Microsoft has intensified demands for OpenAI to address these flaws.

OpenAI’s AI transcription tool, Whisper, faces criticism for its tendency to fabricate text, posing risks in critical medical environments. Despite OpenAI's warnings against use in high-risk domains, it is employed by over 30,000 clinicians and 40 health systems, including Mankato Clinic and Children's Hospital Los Angeles, to transcribe medical consultations.

The primary concern is that Whisper invents nonexistent medical treatments and inserts false content into transcriptions. A study found hallucinations in roughly 80% of the public meeting transcripts examined, with similar issues surfacing in medical recordings; Nabla’s Whisper-based tool has already been used to transcribe over 7 million medical visits.

Calls have been made to rectify these hallucinations, especially since Whisper is integrated into widely used platforms from Oracle and Microsoft. Critics insist that OpenAI must resolve these flaws promptly, as transcription inaccuracies can have dire consequences in healthcare settings, potentially leading to misdiagnosis.