Fiducial markers detection trained exclusively on synthetic data for image-to-patient alignment in HMD-based surgical navigation


Abstract:

Surgical navigation guides surgeons during interventions. It provides them with spatial insight into where the anatomy and surgical instruments are in the patient space and with respect to preoperative images. Image-to-patient alignment is an important step in this process, as it enables the visualization of preoperative images directly overlaid on the patient. Conventionally, image-to-patient alignment can be performed with surface- or point-based registration using anatomical or artificial landmarks. In point-based registration, surgeons use a trackable pointer to pinpoint landmarks on the patient (fiducial markers placed preoperatively) and match them with their counterparts in the preoperative image. This method, although accurate, can be cumbersome and time-consuming. Direct detection of these landmarks in video may speed up the registration process, making it a first step towards AR navigation using head-mounted displays. Detection of objects, including such landmarks, is a task that can be performed with deep learning networks; however, training such networks requires large sets of annotated data, which are normally not available in clinical practice. In this study, we investigate the feasibility of using a deep learning model trained on synthetic images to detect medical fiducial markers in real images, thereby bypassing the need for large sets of annotated patient data. To this end, we generate photorealistic synthetic images of subjects with landmarks using Unreal Engine and MetaHuman, train the detection model on these generated images, and assess the model's capability to detect the registration markers in real 2D images. Our experimental results demonstrate that the object detection model, although trained exclusively on synthetic data, is capable of detecting the markers on the HoloLens 2 video feed with an F1 score of 81%, which can be used for image-to-patient alignment.
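
The point-based alignment described above can be illustrated with a paired-point rigid registration. The sketch below is an assumption rather than the paper's implementation: the variable names, the use of NumPy, and the choice of the Kabsch/Horn SVD solution are illustrative. It estimates the rotation and translation that map fiducial coordinates in the preoperative image onto the corresponding points identified on the patient, and reports the fiducial registration error.

```python
# Minimal sketch of point-based rigid registration from matched fiducials
# (Kabsch/Horn method). Names and data layout are illustrative assumptions.
import numpy as np

def register_points(patient_pts, image_pts):
    """Return R (3x3) and t (3,) such that R @ image_pt + t ~= patient_pt."""
    P = np.asarray(patient_pts, dtype=float)   # N x 3, points picked on the patient
    Q = np.asarray(image_pts, dtype=float)     # N x 3, counterparts in the image
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (Q - q_mean).T @ (P - p_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = p_mean - R @ q_mean
    return R, t

def fiducial_registration_error(R, t, patient_pts, image_pts):
    """RMS distance between transformed image fiducials and patient fiducials."""
    P, Q = np.asarray(patient_pts, dtype=float), np.asarray(image_pts, dtype=float)
    residuals = P - (Q @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```

Whether the correspondences come from a tracked pointer or from markers detected in the HoloLens 2 video feed, the same least-squares step would close the image-to-patient alignment.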
Date of Conference: 16-20 October 2023
Date Added to IEEE Xplore: 04 December 2023
Conference Location: Sydney, Australia


1 Introduction

Surgical navigation is a technology that allows surgeons to locate their surgical instruments and the patient’s anatomy in 3D space, with respect to the patient’s preoperative image (e.g., CT, MRI). It allows target structures to be visualized together with the (tracked) instruments on a 2D display of the navigation system [11], [25] during the surgery.
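
As an illustration of this coordinate bookkeeping, the sketch below maps a tracked instrument tip into the preoperative image frame by composing the instrument's tracked pose with the image-to-patient registration. The 4x4 homogeneous transforms, the frame names, and the simplification that the tracker frame coincides with the patient frame are assumptions made for illustration, not details from the paper.

```python
# Illustrative mapping of a tracked instrument tip into preoperative image
# coordinates. Frame names and the single-registration setup are assumptions.
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def instrument_tip_in_image(T_patient_from_image, T_patient_from_instrument,
                            tip_in_instrument):
    """Map an instrument-tip position into preoperative image coordinates.

    T_patient_from_image: image-to-patient registration (e.g., from fiducials).
    T_patient_from_instrument: tracked pose of the instrument in patient space.
    """
    T_image_from_patient = np.linalg.inv(T_patient_from_image)
    tip_h = np.append(np.asarray(tip_in_instrument, dtype=float), 1.0)
    tip_in_patient = T_patient_from_instrument @ tip_h   # instrument -> patient
    return (T_image_from_patient @ tip_in_patient)[:3]   # patient -> image
```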
