An artificial-intelligence (AI) model built at Mass Eye and Ear was shown to be significantly more accurate than doctors at diagnosing pediatric ear infections in the first head-to-head evaluation of its kind, a research team working to develop the model for clinical use reported.
According to a new study published August 16 in Otolaryngology-Head and Neck Surgery, the model, called OtoDX, was more than 95 percent accurate in diagnosing ear infections in a set of 22 test images, compared with 65 percent accuracy among a group of clinicians consisting of ENTs, pediatricians, and primary care doctors who reviewed the same images.
When tested on a dataset of more than 600 eardrum images, the AI model had a diagnostic accuracy of more than 80 percent, a significant leap over the average accuracy of clinicians reported in the medical literature.
The model uses a type of AI called deep learning and was built from hundreds of photographs collected from children before they underwent surgery at Mass Eye and Ear for recurrent ear infections or fluid in the ears. According to the authors, the results mark a major step toward a diagnostic tool that could one day be deployed in clinics to assist doctors during patient evaluations. An AI-based diagnostic tool could give providers, such as pediatricians and urgent care clinicians, an additional test to better inform their clinical decision-making.
“Ear infections are incredibly common in children yet frequently misdiagnosed, leading to delays in care or unnecessary antibiotic prescriptions,” said lead study author Matthew Crowson, MD, an otolaryngologist and artificial intelligence researcher at Mass Eye and Ear, and assistant professor of Otolaryngology-Head and Neck Surgery at Harvard Medical School. “This model won’t replace the judgment of clinicians but can serve to supplement their expertise and help them be more confident in their treatment decisions.”
A common condition that is difficult to diagnose
Ear infections result from a buildup of bacteria inside the middle ear. According to the National Institute on Deafness and Other Communication Disorders, at least five out of six children in the United States have had at least one ear infection before the age of three. Left untreated, ear infections can lead to hearing loss, developmental delays, complications such as meningitis and, in some developing nations, death. Conversely, overtreating children who don’t have an ear infection can promote antibiotic resistance and render the medications ineffective against future infections, a problem of significant public health importance.
To ensure the best outcomes for children, clinicians must diagnose ear infections as accurately and as early as possible. However, previous studies suggest that the diagnostic accuracy for ear infections in children from a physical exam alone is routinely below 70 percent, even with innovations in technology and clinical practice guidelines. The difficulty of evaluating a child who is struggling or crying during an examination, coupled with the general inexperience many doctors and urgent care providers have with ear evaluations, may explain the lower-than-expected diagnostic rate, according to Dr. Crowson.
“Since clinicians would rather stay on the side of caution, it’s pretty easy to see why parents typically walk out of urgent care with a prescription for antibiotics,” he said.
In 2021, Dr. Crowson collaborated with Mass Eye and Ear colleagues Michael S. Cohen, MD, director of the Multidisciplinary Pediatric Hearing Loss Clinic, and Christopher J. Hartnick, MD, MS, director of the Division of Pediatric Otolaryngology, to develop a more accurate method of diagnosing ear infections using a machine learning algorithm. An artificial neural network was trained with high-resolution photographs of tympanic membranes collected directly from patients during ear procedures in which infection can be seen. These photos represent a gold-standard, “ground truth” dataset, unlike AI-based tools that rely on images collected from search engines. In a proof-of-concept study published last year, the model was found to be 84 percent accurate in distinguishing “normal” from “abnormal” middle ears.
Human versus machine
In the new study, the researchers compared the accuracy of a refined model head-to-head against clinicians. In total, 639 images of tympanic membranes from children aged 18 years or younger, who were undergoing surgery for ear tube placement or to drain fluid from the ears, were used to train the model. The images were tagged as “normal,” “infected,” or having “liquid behind the eardrum,” rather than the “normal” or “abnormal” classification from the team’s earlier model. With the added category, the model achieved a mean diagnostic accuracy of 80.8 percent.
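The study as described here does not disclose the model’s architecture or training framework, but the workflow it outlines, fine-tuning an image classifier on labeled tympanic membrane photographs with three diagnostic categories, is a standard transfer-learning setup. The sketch below is a minimal, hypothetical illustration of that setup in PyTorch; the backbone network, directory layout, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' published code): fine-tune a
# pretrained CNN to sort eardrum photos into the three categories the
# study describes: "normal", "infected", or "fluid behind the eardrum".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # normal / infected / fluid

# Standard ImageNet-style preprocessing for the intraoperative photographs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes images are arranged as tympanic_membranes/train/<label>/<image>.jpg;
# ImageFolder infers each label from its subfolder name.
train_set = datasets.ImageFolder("tympanic_membranes/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse a pretrained backbone and replace its final
# layer with a three-way classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Mean diagnostic accuracy of the kind reported in the study would then be measured on a held-out set of images the model never saw during training.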
A survey was then created asking clinicians and trainees from various medical specialties to view 22 new images of tympanic membranes and assign each ear to one of the three categories. While the machine-learning model correctly categorized more than 95 percent of the sample images, the average diagnostic score among the 39 clinicians who responded to the survey was 65 percent. Pediatricians and family medicine/general internal medicine physicians correctly categorized 60.1 percent and 59.1 percent of the images, respectively.
Bringing artificial intelligence to the clinic
Studies are underway to further validate and refine the AI model. To date, more than 1,000 intraoperative images of tympanic membranes have been amassed at Mass Eye and Ear.
In collaboration with Mass General Brigham Innovation, OtoDX is being incorporated into a prototype device paired with a smartphone app. The device acts as a “mini otoscope” that fits over the phone’s camera, allowing clinicians to photograph the inside of a child’s ear, upload the images directly to the app, and receive a diagnostic reading in seconds. With further validation, OtoDX could give clinicians another source of real-time information during an exam.
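The article describes the prototype’s workflow (photograph the ear through the clip-on attachment, upload the image through the app, and receive a reading in seconds) but not a public interface. Purely as an illustration, a client-side call to a hypothetical inference endpoint might look like the following; the URL, request format, and response fields are invented for this example and are not part of any documented OtoDX API.

```python
# Illustrative sketch only: the endpoint, payload, and response shape are
# hypothetical placeholders, not a real OtoDX interface.
import requests

INFERENCE_URL = "https://example.org/otodx/predict"  # placeholder URL


def classify_eardrum_photo(image_path: str) -> dict:
    """Upload a single eardrum photo and return the predicted label."""
    with open(image_path, "rb") as f:
        response = requests.post(INFERENCE_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    # Assumed response shape: {"label": "infected", "confidence": 0.93}
    return response.json()


if __name__ == "__main__":
    result = classify_eardrum_photo("right_ear.jpg")
    print(f"Predicted: {result['label']} ({result['confidence']:.0%} confidence)")
```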
As feedback from the pilot is processed, Mass General Brigham Innovation will support the OtoDX team in exploring opportunities to commercialize the tool so it can assist even more clinicians and their patients.
In addition to Drs. Crowson, Cohen, and Hartnick, co-authors on the study included Krish Suresh, MD, of Mass Eye and Ear/Harvard Medical School, and David W. Bates, MD, MSc, of Brigham and Women’s Hospital/Harvard T. H. Chan School of Public Health.
This study was supported in part by a grant from the National Institutes of Health’s Biomedical Informatics and Data Science Training Program (T15LM007092-30). The technology is the subject of a pending patent application.