Machine learning model builds on imaging methods to better detect ovarian lesions

Although ovarian cancer is the deadliest of the cancers affecting the female reproductive system, only about 20% of cases are found at an early stage, as there is no reliable screening test and early disease causes few symptoms. Ovarian lesions are also difficult to diagnose accurately. So difficult, in fact, that more than 80% of women who undergo surgery to have a lesion removed and tested turn out to have no sign of cancer.

Quing Zhu, the Edwin H. Murty Professor of Biomedical Engineering at Washington University in St. Louis’ McKelvey School of Engineering, and members of her lab have applied a variety of imaging methods to diagnose ovarian cancer more accurately. Now, they have developed a new machine learning fusion model that uses established ultrasound features of ovarian lesions to train the model to recognize whether a lesion is benign or cancerous from images reconstructed with photoacoustic tomography. Machine learning has traditionally focused on single-modality data, but recent work has shown that multi-modality models perform more robustly than single-modality methods. In a pilot study of 35 patients with more than 600 regions of interest, the model’s accuracy was 90%.

The work is the first study to use ultrasound to enhance the performance of machine learning-based photoacoustic tomography reconstruction for cancer diagnosis. Results of the research were published in the December issue of the journal Photoacoustics.

“Existing modalities are mainly based on the size and shape of the ovarian lesions, which do not provide an accurate diagnosis for earlier ovarian cancer and for risk assessment of large adnexal/ovarian lesions,” said Zhu, also a professor of radiology at the School of Medicine. “Photoacoustic imaging adds more functional information about vascular contrast from hemoglobin concentration and blood oxygen saturation.”

Yun Zou, a doctoral student in Zhu’s lab, developed the new machine learning fusion model by combining an ultrasound neural network with a photoacoustic tomography neural network to perform ovarian lesion diagnosis. Cancerous lesions of the ovaries can present with several different morphologies on ultrasound: some are solid, and others have papillary projections inside cystic lesions, making them more difficult to diagnose. To improve on diagnosis from ultrasound alone, the team added the total hemoglobin concentration and blood oxygen saturation from photoacoustic imaging, both of which are biomarkers for cancerous ovarian tissue.
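The article does not describe the network itself, but the general idea of a two-branch fusion classifier can be sketched as below. This is a minimal illustration in PyTorch, not the published architecture: the layer sizes, input shapes, and the names FusionClassifier and conv_branch are assumptions made for the example. One branch encodes an ultrasound image patch, the other encodes the photoacoustic functional maps (total hemoglobin and oxygen saturation), and their features are concatenated before a benign-versus-malignant prediction.

```python
# Illustrative sketch only; assumed shapes and layer sizes, not the published model.
import torch
import torch.nn as nn

def conv_branch(in_channels: int) -> nn.Sequential:
    """Small convolutional encoder producing a fixed-length feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),   # global average pool -> (N, 32, 1, 1)
        nn.Flatten(),              # -> (N, 32)
    )

class FusionClassifier(nn.Module):
    """Hypothetical ultrasound + photoacoustic fusion model."""
    def __init__(self):
        super().__init__()
        self.us_branch = conv_branch(in_channels=1)   # grayscale ultrasound patch
        self.pa_branch = conv_branch(in_channels=2)   # [total hemoglobin, sO2] maps
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, 2),      # logits: benign vs. malignant
        )

    def forward(self, us_img: torch.Tensor, pa_maps: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.us_branch(us_img), self.pa_branch(pa_maps)], dim=1)
        return self.head(fused)

# Example forward pass on random data shaped like 64x64 regions of interest.
model = FusionClassifier()
us = torch.randn(4, 1, 64, 64)    # batch of ultrasound patches
pa = torch.randn(4, 2, 64, 64)    # matching hemoglobin / oxygen-saturation maps
logits = model(us, pa)            # shape (4, 2)
print(logits.shape)
```

Fusing the two encoders at the feature level, rather than training on either modality alone, is one common way to let the structural information from ultrasound and the functional information from photoacoustic imaging inform a single prediction.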

“Our results showed that the ultrasound-enhanced photoacoustic imaging fusion model reconstructed the target’s total hemoglobin and blood oxygen saturation maps more accurately than other methods and provided an improved diagnosis of ovarian cancers from benign lesions,” Zou said.

Story Source:

Materials provided by Washington University in St. Louis. Original written by Beth Miller. Note: Content may be edited for style and length.