The primary detector of breast cancer is the human eye. Radiologists read mammograms by mapping exogenous and endogenous factors, based on the image and the observer, respectively, into observer-based decisions. These decisions rely on an internal schema that contains a representation of possible malignant and benign findings. Thus, to understand the hits and misses made by radiologists, it is important to model the interactions between the measurable image-based elements contained in the mammogram and the decisions made. The image-based elements are of two types: areas that attracted the visual attention of the radiologist but did not yield a report, and areas where the radiologist indicated the presence of an abnormal finding. In this way, both overt and covert decisions are made when reading a mammogram. To model this decision-making process, we use a system based on the processing done by the human visual system, which decomposes the areas under scrutiny into elements of different sizes and orientations. In our system, this decomposition is done using wavelet packets (WPs). Nonlinear features are then extracted from the WP coefficients, and an artificial neural network is trained to recognize the patterns of decisions made by each radiologist. Afterwards, the system is used to predict how the radiologist will respond to visually selected areas in new mammogram cases.
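The pipeline described above (wavelet-packet decomposition of an image region, nonlinear features from the WP coefficients, a neural network trained on the radiologist's decisions) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the Haar filter, the 16×16 patch size, the energy/entropy features, the synthetic "finding" data, and the single logistic unit standing in for the ANN are all assumptions; the paper does not specify the wavelet basis, the nonlinear features, or the network architecture.

```python
import numpy as np

def haar_step(block):
    """One 2-D Haar analysis step: split a block into LL, LH, HL, HH subbands."""
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return [ll, lh, hl, hh]

def wavelet_packet(block, levels):
    """Full wavelet-packet tree: unlike a plain wavelet transform,
    every subband (not just LL) is split again at each level."""
    bands = [block]
    for _ in range(levels):
        bands = [sub for b in bands for sub in haar_step(b)]
    return bands  # 4**levels subbands

def nonlinear_features(bands, eps=1e-12):
    """Energy and entropy of each subband -- illustrative stand-ins
    for the paper's (unspecified) nonlinear WP-coefficient features."""
    feats = []
    for b in bands:
        energy = np.sum(b ** 2)
        p = b.ravel() ** 2 / (energy + eps)        # normalized coefficient power
        entropy = -np.sum(p * np.log(p + eps))
        feats.extend([energy, entropy])
    return np.array(feats)

rng = np.random.default_rng(0)

def make_patch(abnormal):
    """Synthetic 16x16 region of scrutiny: noise, plus a bright blob
    as a crude surrogate for a reported finding (assumed data)."""
    patch = rng.normal(0.0, 1.0, (16, 16))
    if abnormal:
        patch[6:10, 6:10] += 4.0
    return patch

# Build a toy training set of (features, decision) pairs.
X = np.array([nonlinear_features(wavelet_packet(make_patch(i % 2 == 1), 2))
              for i in range(200)])
y = np.array([i % 2 for i in range(200)])          # 1 = "report", 0 = "no report"
X = (X - X.mean(0)) / (X.std(0) + 1e-9)            # standardize features

# Single logistic unit trained by gradient descent: a minimal stand-in
# for the artificial neural network that learns each radiologist's decisions.
w = np.zeros(X.shape[1]); b0 = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b0)))        # predicted decision probability
    g = p - y                                      # logistic-loss gradient
    w -= 0.1 * X.T @ g / len(y)
    b0 -= 0.1 * g.mean()

acc = float(((p > 0.5) == y).mean())               # training accuracy of the toy model
```

Once trained, the same feature extraction would be applied to visually selected areas in a new mammogram, and the model's output read as the predicted probability that this radiologist reports a finding there.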