
AI tools increase risk of medical misinformation from trusted sources
- A study found that AI tools are easily misled by misinformation from sources perceived as authoritative.
- AI models accepted fabricated claims from realistic-looking medical documents nearly 47% of the time, versus 9% for the same claims framed as social media posts.
- The findings raise concerns about the safety of using AI for medical advice without proper safeguards.
Story
In a study published in The Lancet Digital Health, researchers found that artificial intelligence (AI) tools are strongly influenced by the apparent source of medical information, accepting misinformation far more readily when it comes from sources that look legitimate. The study tested various scenarios across 20 open-source and proprietary large language models. When false claims were presented in realistic formats, such as doctors' discharge notes, the models passed on the misinformation nearly 47 percent of the time. In contrast, the same misinformation framed as social media content was met with skepticism, with acceptance rates dropping to 9 percent.

The results show that AI tends to treat authoritative language as fact, a serious problem in healthcare settings, where erroneous medical recommendations can cause real patient harm. They also underscore the need for more rigorous safeguards in AI medical applications, as many current systems lack effective mechanisms for verifying medical claims. Dr. Eyal Klang, who co-led the research, emphasized that what matters to AI is how information is presented rather than whether it is correct.

Separately, a report from the Oxford Internet Institute highlights the dangers of relying on AI for medical advice. Despite scoring highly on standardized knowledge tests, AI systems often give mixed or inconsistent information, making it hard for users to tell what is accurate in real-world scenarios. That study involved approximately 1,300 participants and compared AI responses with traditional medical consultations. Both studies suggest that while AI could improve healthcare delivery, its current limitations mean it cannot replace human medical judgment.
There is a growing consensus that AI should complement rather than replace traditional medical practice, keeping patient safety the top priority as the technology evolves.