The continuous improvement of imaging technology holds great promise in areas where visual detection is necessary, such as cancer screening. Three-dimensional imaging in particular has become popular because it provides a more complete picture of the target object and its context.
“More doctors and radiologists are looking at these 3D volumes, which are new technologies that allow you to look not just at one image, but a set of images,” said UC Santa Barbara psychology professor Miguel Eckstein, whose expertise lies in the field of visual search. “In some imaging modalities this gives doctors information about volume and it allows them to segment what they’re interested in.”
Common wisdom holds that with all this additional information, the rate of detection success should increase considerably. However, that’s not always the case, Eckstein said. In a study published in the journal Current Biology, he, lead author Miguel Lago and their collaborators point out an odd foible of human vision: We’re actually worse at finding small targets in 3D image stacks than in a single 2D image.
“For those types of small targets, what happens is that they become harder to find in these 3D volumes,” Eckstein explained. Unlike humans, machine observers (e.g., deep neural networks) did not show this deficit when searching 3D volumes for small targets, suggesting that the effect stems from a human visual-cognitive bottleneck.
The phenomenon could have important implications in the medical field, particularly in breast cancer screening, where breast tomosynthesis (3D mammography) is increasingly used to detect not just large, unusual masses but also the microcalcifications that can signal the beginnings of cancer. According to the study, searching through 3D renderings led to high miss rates for small targets and significantly lower decision confidence on the part of the observers.
“Another thing we found out was that when you ask people searching these 3D volumes how much they explored, they tended to overestimate quite a bit how much they thought they explored,” he added. Eye-tracking data showed that subjects conducting the 3D search looked through only about half of the search area while reporting that they had explored 80% or more of the image.
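As a rough illustration (not the authors’ actual analysis), an exploration estimate of this kind can be computed from eye-tracking data by measuring how much of an image falls within an assumed “useful field of view” radius around each recorded fixation. The function name, radius and example fixations below are hypothetical; the sketch only shows the general idea.

```python
import numpy as np

def explored_fraction(fixations, image_shape, ufov_radius_px):
    """Estimate the fraction of an image covered by a set of fixations.

    fixations: iterable of (x, y) fixation coordinates in pixels.
    image_shape: (height, width) of the searched image or slice.
    ufov_radius_px: assumed useful-field-of-view radius around each
        fixation within which a small target is treated as detectable.
    """
    h, w = image_shape
    yy, xx = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    covered = np.zeros((h, w), dtype=bool)
    for fx, fy in fixations:
        # Mark every pixel within the assumed radius of this fixation.
        covered |= (xx - fx) ** 2 + (yy - fy) ** 2 <= ufov_radius_px ** 2
    return covered.mean()                 # fraction of pixels ever covered

# Hypothetical example: 20 random fixations on a 512x512 slice, 60-pixel UFOV.
rng = np.random.default_rng(0)
fix = rng.uniform(0, 512, size=(20, 2))
print(f"Explored fraction: {explored_fraction(fix, (512, 512), 60):.2f}")
```

Comparing such an objective coverage estimate with an observer’s self-reported exploration is one way a gap like the one described above (roughly 50% covered versus 80% reported) could be quantified.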
Much of the reason for this diminished performance, according to the paper, lies in how we use our vision when we search. We use both focused and peripheral vision to analyze the scene before us and decide where to fix our attention next. People searching a 2D image tended to rely more on their fovea (the part of the retina that brings objects into sharp, direct focus) and to move their gaze more exhaustively around the image. Those searching 3D renderings, which are composites of many images, moved their gaze less and relied more on peripheral visual processing.
“What happens is when doctors are looking through these 3D images, they basically underexplore the whole data set,” said Eckstein, whose collaborators in the Department of Radiology at University of Pennsylvania reproduced the effect with some radiologists. “They’re not looking at every single spot on every single image, because it takes a long time.” The lack of eye movement in 3D searches could also be a matter of strategy, he added, in which clinicians fix on the same spot in every image as they flip back and forth through the stack.
Small targets, Eckstein explained, were highly detectable at or near the point of fixation but became much less noticeable toward the periphery. This fundamental visual limitation, combined with under-exploration through eye movements and reliance on peripheral vision, resulted in a high number of errors in the 3D searches.
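To make that intuition concrete, a toy model (not taken from the paper) can capture how detection sensitivity falls off with distance from the point of fixation. The exponential form and every number below are assumptions chosen purely to illustrate why a small target that is easy to see at fixation can effectively vanish in the periphery, while a large target degrades far more slowly.

```python
import numpy as np

def detectability(eccentricity_deg, d0, falloff_per_deg):
    """Toy model: detection sensitivity (d') decays with retinal eccentricity.

    eccentricity_deg: distance of the target from fixation, in degrees.
    d0: sensitivity at the point of fixation (assumed value).
    falloff_per_deg: how quickly sensitivity drops per degree (assumed value).
    """
    return d0 * np.exp(-falloff_per_deg * np.asarray(eccentricity_deg))

# Hypothetical numbers: a small target whose visibility fades quickly away
# from fixation, versus a large target that stays visible in the periphery.
eccs = np.array([0, 2, 5, 10])            # degrees from fixation
print("small target d':", detectability(eccs, d0=3.0, falloff_per_deg=0.4))
print("large target d':", detectability(eccs, d0=3.0, falloff_per_deg=0.05))
```

Under assumptions like these, an observer who fixates only a few spots per slice would keep most of a 3D volume in the low-sensitivity periphery, which is consistent with the pattern of misses the study describes.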
The same couldn’t be said for large targets, which followed the common wisdom about the benefits of 3D images; their detection was improved in the 3D searches.
The findings of this paper illustrate the gaps that sometimes arise between the technology we invent and our ability to make the best use of it, according to Eckstein.
“We’re good at making technology, but sometimes we don’t really connect with it that well,” he said. “And we don’t know that we don’t connect with it that well.”
In the case of radiologists combing through 3D images for small targets, this bottleneck of human vision and cognition, once recognized, could be mitigated with practice and extended search times. In some cases, clinicians already lean on synthesized 2D images for small targets while using the 3D renderings for larger objects. Performance may also improve with the use of computer vision and artificial intelligence, or by having multiple observers scrutinize the images.
Lead author and former UCSB postdoctoral scholar Miguel Lago is now a researcher in the Food and Drug Administration’s Division of Imaging, Diagnostics, and Software Reliability, which contributes to evaluating new medical imaging technologies.