Sensorless End-to-End Freehand 3-D Ultrasound Reconstruction With Physics-Guided Deep Learning


Abstract:

Three-dimensional ultrasound (3-D US) imaging with freehand scanning is used in cardiac, obstetric, abdominal, and vascular examinations. While 3-D US using either a “wobbler” or “matrix” transducer suffers from a small field of view and low acquisition rates, freehand scanning offers significant advantages due to its ease of use. However, current 3-D US volumetric reconstruction methods with freehand sweeps are limited by imaging plane shifts along the scanning path, i.e., out-of-plane (OOP) motion. Prior studies have incorporated motion sensors attached to the transducer, an approach that is cumbersome and inconvenient in a clinical setting. Recent work has introduced deep neural networks (DNNs) with 3-D convolutions to estimate the position of imaging planes from a series of input frames. These approaches, however, fall short in estimating OOP motion. The goal of this article is to bridge the gap by designing a novel, physics-inspired DNN for freehand 3-D US reconstruction without motion sensors, aiming to improve reconstruction quality while reducing the computational resources needed for training and inference. To this end, we present our physics-guided learning-based prediction of pose information (PLPPI) model for 3-D freehand US reconstruction without 3-D convolution. PLPPI yields significantly more accurate reconstructions and offers a major reduction in computation time. It attains a double-digit improvement in mean percentage error, with up to 106% speedup and 131% reduction in graphics processing unit (GPU) memory usage, compared to the latest deep learning methods.
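
The abstract contrasts 3-D-convolution pose estimators with a lighter 2-D-convolution design. The sketch below is only a rough illustration of that sensorless setup, not a reproduction of the paper's PLPPI architecture: consecutive B-mode frames are stacked along the channel axis of a small 2-D CNN that regresses a 6-DoF relative pose (the inter-frame, including OOP, motion a freehand reconstruction pipeline consumes). The class name, layer sizes, and pose parameterization are all hypothetical.

import torch
import torch.nn as nn

class FramePoseRegressor(nn.Module):
    """Hypothetical sketch (not the paper's PLPPI model): regress the
    6-DoF relative pose between consecutive ultrasound frames using only
    2-D convolutions, avoiding the activation memory of 3-D kernels."""

    def __init__(self, n_frames: int = 2, n_dof: int = 6):
        super().__init__()
        # Consecutive frames are stacked along the channel axis, so 2-D
        # kernels still see inter-frame (out-of-plane) cues.
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # One pose vector per transition between the n_frames inputs.
        self.head = nn.Linear(128, (n_frames - 1) * n_dof)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, H, W) grayscale B-mode images
        x = self.features(frames).flatten(1)
        return self.head(x)

if __name__ == "__main__":
    model = FramePoseRegressor(n_frames=2)
    clip = torch.randn(4, 2, 224, 224)   # 4 clips of 2 consecutive frames
    poses = model(clip)                   # (4, 6): tx, ty, tz, rx, ry, rz
    print(poses.shape)

Treating the temporal axis as channels rather than a third spatial dimension is one plausible way to cut GPU memory relative to 3-D-convolution baselines, consistent with the savings the abstract reports, though the paper's actual physics-guided design may differ substantially.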
Page(s): 1514 - 1525
Date of Publication: 20 September 2024

PubMed ID: 39302786
