Deepfake X‑Rays: Even Experts Can’t Tell the Difference
New York City, USA, Thursday, March 26, 2026
The same pattern appeared in the AI systems that tried to spot fakes. Four large language models—GPT‑4o, GPT‑5, Gemini 2.5 Pro and Llama 4 Maverick—managed between 57% and 85% accuracy on the chatbot‑generated images. For the chest X‑rays, doctors hit 62% to 78%, while the AIs ranged from 52% to 89%. Even the AI that produced the images could not find all of them.
Researchers noted that synthetic X‑rays often look “too perfect.” Smooth bones, straight spines, symmetrical lungs and oddly clean fractures are common tell‑tale signs. These clues suggest that AI models still lack the subtle irregularities present in real human anatomy.
The findings raise alarms about potential misuse. A forged fracture could be used to manipulate legal claims, and a hacker who injects fake scans into hospital records might derail patient care. To guard against such threats, experts recommend embedding invisible watermarks or cryptographic signatures into images at the time of capture. This would let clinicians verify that a picture truly came from their own equipment.
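The article does not specify which signing scheme the experts have in mind, but the idea can be sketched with a standard message-authentication code: the imaging device tags each scan with a secret key at capture time, and a clinician later checks that the tag still matches the image bytes. The device key, function names, and sample data below are illustrative assumptions, not part of any cited system.

```python
import hashlib
import hmac

# Hypothetical per-device secret key. In a real deployment this would
# live in the scanner's secure hardware, never in application code.
DEVICE_KEY = b"example-device-key"

def sign_image(image_bytes: bytes, key: bytes = DEVICE_KEY) -> str:
    """Produce an HMAC-SHA256 tag over the raw image bytes at capture time."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Check that the image still matches the tag recorded at capture."""
    expected = hmac.new(key, image_bytes, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, tag)

scan = b"...raw pixel data from the scanner..."
tag = sign_image(scan)
print(verify_image(scan, tag))          # untampered scan passes
print(verify_image(scan + b"!", tag))   # any modification fails
```

A scheme like this only proves the image was not altered after capture; it cannot, by itself, stop a fake scan signed by a compromised device, which is why the experts pair it with embedding the signature at the moment of acquisition.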
Looking ahead, the researchers warn that the problem will only grow as AI learns to generate 3D images like CT and MRI scans. They have released a public dataset of deepfake X‑rays with quizzes to train people in spotting fakes. Building awareness now could help keep medical records trustworthy.