One way to define operators for detecting edges in digital images is to fit a surface (plane, quadric, ...) to a neighborhood of each image point and take the magnitude of the gradient of the fitted surface as an estimate of the rate of change of gray level in the image at that point. This approach is extended to define edge detectors applicable to multidimensional arrays of data (e.g., three-dimensional arrays obtained by reconstruction from projections) by locally fitting hypersurfaces to the data. The resulting operators, for hypersurfaces of degree 1 or 2, are closely analogous to those in the two-dimensional case. Examples comparing some of these three-dimensional operators with their two-dimensional counterparts are given.
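The two-dimensional, degree-1 case described above can be sketched as follows: fit a plane z = a*x + b*y + c to each 3x3 neighborhood by least squares, and report sqrt(a^2 + b^2) as the gradient-magnitude estimate. This is an illustrative sketch, not the paper's implementation; the function name and the 3x3 neighborhood size are assumptions.

```python
import numpy as np

def plane_fit_gradient(image):
    """Estimate edge strength by least-squares plane fitting (hypothetical sketch).

    For each interior pixel, fit z = a*x + b*y + c over the 3x3 neighborhood
    and return |grad| = sqrt(a^2 + b^2) of the fitted plane.
    """
    h, w = image.shape
    # Neighborhood offsets (dx, dy) in {-1, 0, 1}^2 and the design matrix [x, y, 1].
    offsets = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    A = np.array([[dx, dy, 1.0] for dx, dy in offsets])
    pinv = np.linalg.pinv(A)  # 3x9 least-squares solver, reused at every pixel
    grad = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            z = np.array([image[y + dy, x + dx] for dx, dy in offsets], dtype=float)
            a, b, _ = pinv @ z          # coefficients of the best-fit plane
            grad[y, x] = np.hypot(a, b)  # gradient magnitude of that plane
    return grad
```

On a 3x3 neighborhood this least-squares fit reduces to a pair of fixed convolution masks (one per gradient component), which is why the resulting operators resemble the familiar difference operators; the three-dimensional analogue simply fits a hyperplane over a 3x3x3 cube instead.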