When an underwater object is insonified over a range of aspect angles, the signals that are backscattered from the object can be sampled in space and time by an array of sensors, which can form either a real or a synthetic aperture. The spatial information contained in the backscattered signals can be processed using inverse transform methods to form an image of the 2-D spatial distribution of the object's acoustic reflectivity function. The signal measurement space (or data space) defines a sector of the polar wave number spectrum that quantifies the variation with aspect angle of the spatial frequency components that contribute to the backscattered signal. The temporal bandwidth of the incident sonar pulse determines the radial extent of the sector, while the range of aspect angles over which the object is insonified defines the angular extent of the sector. Equivalently, the extent of the aperture that is formed by the sensor positions relative to the object determines the angular extent of the sector. This concept provides a generalized framework that unifies sonar imaging techniques such as reconstructive tomography (image reconstruction from projections), synthetic aperture sonar, and real aperture sidescan sonar. The difference between these techniques is due simply to the azimuthal angle subtended by the aperture of the sensing array, which is typically a fraction of a degree for a real aperture sonar, several degrees for a strip-map synthetic aperture sonar, tens of degrees for a spotlight synthetic aperture sonar, and 360° for a tomographic imaging sonar. Experimental results are presented for a real aperture sidescan sonar, a synthetic aperture sonar, and a tomographic sonar that demonstrate the development of an acoustic image from a single blurred point to a clearly identifiable object as the azimuthal extent of the angle subtended by the sonar aperture is increased.
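The wave number sector described above can be sketched numerically. The following is a minimal illustration, not taken from the paper: it assumes monostatic backscatter, for which the spatial frequency magnitude is 2k = 4πf/c, so the pulse band [f_min, f_max] fixes the radial extent of the sector and the angle subtended by the aperture fixes its angular extent. The sound speed, frequency band, and per-mode aperture angles are representative values chosen for illustration only.

```python
import math

def kspace_sector(f_min_hz, f_max_hz, aperture_deg, c=1500.0):
    """Extent of the polar wave number sector sampled by a backscatter
    measurement.

    For monostatic backscatter the spatial frequency magnitude is
    2k = 4*pi*f/c, so the temporal bandwidth [f_min, f_max] sets the
    radial extent of the sector, while the azimuthal angle subtended by
    the (real or synthetic) aperture sets its angular extent.
    Returns ((k_inner, k_outer) in rad/m, angular extent in degrees).
    """
    k_inner = 4.0 * math.pi * f_min_hz / c  # inner radius of sector (rad/m)
    k_outer = 4.0 * math.pi * f_max_hz / c  # outer radius of sector (rad/m)
    return (k_inner, k_outer), aperture_deg

if __name__ == "__main__":
    # Illustrative aperture angles for the imaging modes named in the text.
    modes = [("real aperture sidescan", 0.5),
             ("strip-map synthetic aperture", 5.0),
             ("spotlight synthetic aperture", 30.0),
             ("tomographic sonar", 360.0)]
    for name, aperture in modes:
        (k_in, k_out), ang = kspace_sector(80e3, 120e3, aperture)
        print(f"{name}: radial {k_in:.0f}-{k_out:.0f} rad/m, "
              f"angular {ang:g} deg")
```

All four modes share the same radial coverage (same pulse bandwidth); only the angular coverage grows from a sliver to the full 360°, which is the unifying point the abstract makes.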