Research on Accuracy of Augmented Reality Surgical Navigation System Based on Multi-View Virtual and Real Registration Technology

The traditional navigation system used in spinal puncture cannot monitor the surgical process in real time, and its navigation accuracy is unsatisfactory. In this study, an augmented reality surgical navigation system based on multi-view virtual and real registration is proposed to solve these problems. The theory of virtual and real registration in augmented reality technology is analyzed, and the methods of single-view and multi-view virtual and real registration are compared. The principle of coordinate transformation in the surgical navigation module is analyzed, and the platform of the augmented reality surgical navigation system based on multi-view virtual and real registration technology is designed. Experiments on a spinal model are used to verify the accuracy of virtual and real registration, and the accuracy of the proposed navigation system is verified by an experiment simulating puncture surgery with robotic control of the surgical tool. The experimental results show that the accuracy of single-view and multi-view virtual and real registration is 9.85 ± 0.80 mm and 1.62 ± 0.22 mm respectively. The accuracy of the augmented reality surgical navigation system with multi-view virtual and real registration technology is 1.70 ± 0.25 mm, about 35% better than that of our previous surgical navigation system without this technology. The proposed augmented reality surgical navigation system for spinal puncture based on multi-view virtual and real registration can meet physicians' requirements for surgical accuracy. It can also help physicians monitor the surgical process in real time and improve the success rate of surgery.


I. INTRODUCTION
Augmented reality (AR) is the interaction between images generated by a device and the real environment [1]. In recent years, AR has been rapidly applied in commercial, military, medical and other fields. For the medical industry, the application of AR is showing a rapidly rising trend [2]-[4]. Minimally invasive surgery is one of the early applications of AR technology. The spinal structure required for surgery is projected from computed tomography (CT) or magnetic resonance imaging (MRI) into the real surgical scene. Visual techniques are used to view images in two or three dimensions, such as holding a screen in the hand and projecting images onto the surgical area of the patient's spine. AR allows physicians to see the focal point of pathogenesis (the point of lesion occurring in a part of the spine) inside the spine, which they cannot see directly, and helps them guide surgery through interactive images [5]-[9].
At present, research on augmented reality in the medical field has made many achievements and breakthroughs. Suraj Pokhrel et al. proposed a novel AR scheme for knee replacement surgery that took the accuracy of the cutting error into account. This solution addresses the problem of using AR to navigate the predetermined direction and depth of the bone cut, and minimizes the cutting error to about 1 mm [10]. Jacob T. et al. proposed a method for physicians to use a head-mounted display augmented reality (HMD-AR) device with superimposed CT data to guide the percutaneous placement of pedicle screws in an opaque model of the lumbar spine [11]. This method greatly reduces the pain of the patient and improves the success rate of the surgery. Jungo Yasuda et al. developed a new navigation system using augmented reality for surgical procedures on hepatobiliary soft tissue [12]. It can guide surgery on the liver and pancreas using a tablet computer, capturing the scene in real time with the tablet's camera to display a three-dimensional (3D) model superimposed on the surgical area. The technique is particularly useful in operations on multiple metastatic liver cancers, because it makes it easy to locate cancers and blood vessels. In terms of spinal pedicle screw implantation, Adrian et al. conducted the first human prospective cohort study of pedicle screw placement using augmented reality surgical navigation and intraoperative 3D images [13], evaluating the accuracy of pedicle screw placement using augmented reality surgical navigation (ARSN) in clinical trials. The traditional navigation system used in spinal puncture can only locate the patient's focal point of pathogenesis based on the patient's preoperative 3D images.
However, most surgeries are complicated, and the physician may mislocalize the focal point of pathogenesis due to a change in the patient's posture or accidental contact with key equipment during the operation. X-ray images can be taken during surgery to confirm the position of the focal point of pathogenesis, but this inevitably exposes both the physician and the patient to considerable radiation [14]-[18]. If the focal point of pathogenesis must be relocated during the operation, it will certainly prolong the operation time and increase the patient's pain. In addition, the physician needs to observe the navigation results in real time during the operation to confirm the position of the surgical tools in the patient's body; coordinating hands and eyes is difficult, which increases the difficulty of the operation [19]-[21]. Based on the above, it is necessary to develop a navigation system for spinal puncture that can monitor the surgical process in real time and has high precision [22]-[25].
In this paper, an ARSN system based on multi-view virtual and real registration is proposed for spinal puncture surgery, which can help physicians monitor the surgical process in real time and improve the accuracy and success rate of spinal puncture surgery. The contributions of our work are as follows: 1. The virtual model of the spine is registered with the real surgical scene by using the virtual and real registration technology in AR, so that the physician can observe the surgical tools and the focal point of pathogenesis in real time through the video signal of the camera and monitor whether the whole surgical process deviates. 2. A multi-view virtual and real registration method is proposed to solve the problem that the accuracy of single-view virtual and real registration is low due to the lack of depth information. 3. The multi-view virtual and real registration technology is integrated into the augmented reality module, which is in turn integrated with the existing surgical navigation module. 4. Experiments are carried out to verify the superiority of multi-view virtual and real registration in registration accuracy, and the accuracy and efficiency of the proposed ARSN system are verified by a simulation of spinal puncture surgery with robotic control of surgical tools.
The paper is structured as follows: Section II describes the design of the ARSN system platform based on multi-view virtual and real registration, and analyzes the theory of virtual and real registration and the principle of coordinate transformation in the surgical navigation module. In Section III, the accuracy of multi-view virtual and real registration and the accuracy of the ARSN system are verified by experiments. Section IV discusses the advantages and disadvantages of the existing ARSN system, as well as the improvement of virtual and real registration. Section V summarizes the specific improvements of the proposed methods on the accuracy of the ARSN system based on multi-view virtual and real registration.

A. THE DESIGN OF AUGMENTED REALITY SURGICAL NAVIGATION SYSTEM PLATFORM
The design of the augmented reality surgical navigation system platform mainly includes hardware design and software module design. The design of the ARSN system platform enables the simulated spinal puncture experiments based on the system to achieve the following goals: 1. During the experiment, the virtual and real models of the spine are registered through the augmented reality module, enabling the physician to observe in real time the shape of the bone covered with silica gel. (To make the experiment realistic, the spinal model is wrapped in silica gel to replace the skin and muscles of a real patient's spine, so that the physician cannot see the shape of the bones.)
2. The actions of both the human operator and the robot can be monitored during the whole operation process to avoid mistakes, and the accuracy of the spinal puncture operation should be improved.

1) THE COMPOSITION AND DESIGN OF HARDWARE OF AUGMENTED REALITY SURGERY NAVIGATION SYSTEM PLATFORM
In this study, based on the existing three-dimensional spinal surgical navigation system in our laboratory, virtual and real registration technology in augmented reality is introduced to establish an augmented reality spinal surgical navigation system. The design of the entire system is shown in Figure 1, and it is mainly composed of the following hardware: camera (Logi-C310), C-arm (PLX7000A), NDI optical tracker, robot (KR 6 R900 sixx), spinal model, monitor, surgical probe, surgical tool (rigid body), surgical tool tracker, identification (fiducial marker), calibration target and so on. In this study, virtual and real registration in augmented reality is adopted to realize the registration between the preoperative 3D spinal image data of the patient and the real surgical scene. The identification method in the ARToolKit augmented reality development kit is used for virtual and real registration. The real surgical scene is mainly collected by the camera. During registration, the relative position between the camera and the identification is calculated in real time. Placing the virtual spinal model in the correct position relative to the identification realizes the fusion between the virtual spinal model and the surgical scene using computer vision techniques. The process of 3D registration based on the identification method is shown in Figure 3. Firstly, the video signal obtained by the camera is processed by the computer to detect the identification. Then the relative position between the camera and the identification in each frame is calculated to obtain the 3D registration matrix. Finally, the virtual spinal model is imported into the video stream and displayed in the correct position relative to the identification.

2) CALCULATION OF THREE-DIMENSIONAL REGISTRATION MATRIX
The key to the virtual and real registration of augmented reality is 3D registration, which establishes a rigid connection between the virtual spinal model and the identification. In this way, we can move the virtual spinal model to the desired position by moving the identification. During this process, it is assumed that the positional relationship between the identification and the virtual spinal model remains constant. In the process of calculating the registration matrix of each frame of video, the camera coordinate system, screen coordinate system, identification coordinate system and observation coordinate system are mainly involved. As shown in Figure 4, the key to identification detection and to the 3D registration of the virtual spinal model in the real surgical scene is to calculate the transformation relationships between these coordinate systems in ARToolKit in real time [26].
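The rigid attachment between the identification and the virtual model can be sketched in a few lines of numpy; the rotation and translation values below are invented stand-ins for a per-frame marker pose, not values from the system itself:

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical per-frame pose of the identification (marker) coordinate
# system relative to the camera coordinate system.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T_cm = pose_matrix(R, [10.0, 5.0, 300.0])

# The virtual spinal model is rigidly attached to the identification, so a
# model vertex given in marker coordinates maps into camera coordinates
# through T_cm.
vertex_marker = np.array([2.0, 0.0, 1.0, 1.0])   # homogeneous coordinates
vertex_camera = T_cm @ vertex_marker

# Moving the identification changes T_cm in the next frame; re-applying the
# new matrix moves the rendered model with it, while the marker-to-model
# relationship itself stays fixed.
```

This is only the composition step; in the running system the per-frame pose comes from ARToolKit's marker detection.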
It can be assumed that there is a point (X_m, Y_m, Z_m) in the identification coordinate system, and that the corresponding point in the screen coordinate system is (x_c, y_c). According to the imaging principle of the camera, the transformation relationship between the identification coordinate system and the screen coordinate system is shown in equation (1).
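Equation (1) is not reproduced in this excerpt; in the standard ARToolKit formulation, this marker-to-screen relation is commonly written as follows, where h is a homogeneous scale factor, C is the camera intrinsic matrix (using the symbols f_u, f_v, u_0, v_0 defined later in the text), and T_CM is the 4×4 registration matrix from the identification coordinate system to the camera coordinate system:

```latex
h \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix}
  = C \, T_{CM} \begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix},
\qquad
C = \begin{bmatrix} f_u & 0 & u_0 & 0 \\ 0 & f_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
```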
The point (x_c, y_c) in the screen coordinate system is distorted into the point (x_d, y_d) in the observation coordinate system. The optical imaging of the camera is not completely equivalent to ideal pinhole imaging; there is inevitably distortion, which is mainly divided into radial distortion and tangential distortion [27], [28]. Of these two kinds of distortion, radial distortion is the dominant factor, and its mathematical model can be expressed as in equation (2), where D_x and D_y are the distortions in the x and y directions respectively, k is the radial distortion coefficient, and r is the distance between the imaging point and the imaging center. The imaging model of the camera considering radial distortion is shown in Figure 5. Assume that a point P(x_w, y_w, z_w) in space corresponds to the point P_c(x_c, y_c, z_c) in the camera coordinate system, and that the point transferred to the imaging plane is ideally the point P_u(x_u, y_u). Due to radial distortion, however, its actual point on the imaging plane is P_d(x_d, y_d). The transformation relationship between them is given in equation (3). Therefore, according to equations (2) and (3), we can obtain equation (4), where (u, v) are the coordinates of the point P in the image coordinate system, f is the focal length, f_u and f_v are the ratios of the focal length to the pixel size, (u_0, v_0) is the center of the image coordinate system, and d_u and d_v are the widths of a single pixel on the imaging plane in the x and y directions respectively.
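A short Python sketch of the first-order radial distortion model described above (D_x = k·x·r², D_y = k·y·r²); the coefficient value is an assumed placeholder, and the inverse is computed by fixed-point iteration, since the model has no closed-form inverse:

```python
import numpy as np

K1 = 2.0e-7  # assumed first-order radial distortion coefficient (illustrative)

def distort(xu, yu, k=K1):
    """Apply first-order radial distortion: each axis gains D = k * r^2."""
    r2 = xu**2 + yu**2
    return xu * (1.0 + k * r2), yu * (1.0 + k * r2)

def undistort(xd, yd, k=K1, iters=20):
    """Invert the distortion model by fixed-point iteration."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu**2 + yu**2
        xu = xd / (1.0 + k * r2)
        yu = yd / (1.0 + k * r2)
    return xu, yu

# Round-trip an ideal image point through the distortion model and back.
xd, yd = distort(120.0, -80.0)
xu, yu = undistort(xd, yd)
```

For small k·r² the iteration converges in a handful of steps, which is why this simple scheme is adequate near the image center.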
The point (x_d, y_d) in the observation coordinate system is obtained after distortion correction of the point (x_c, y_c) in the screen coordinate system. The internal parameter matrix and distortion parameters of the camera can be obtained by the calibration procedure described in this paper.
In equation (5), K is the internal parameter matrix obtained after the camera is calibrated in two steps. According to equation (5), the transformation matrix between the identification coordinate system and the observation coordinate system is obtained.
The registration matrix of the camera is T_CM. The coordinates of the four corners of the identification in the identification coordinate system, (x_mi, y_mi, z_mi) (i = 0, 1, 2, 3), and the corresponding points in the observation coordinate system, (x_di, y_di) (i = 0, 1, 2, 3), are combined. As shown in equation (7), combined with equation (6), T_CM can eventually be obtained.

C. ANALYSIS OF THE PRINCIPLE OF COORDINATE TRANSFORMATION IN THE NAVIGATION MODULE OF ROBOT-ASSISTED SPINAL PUNCTURE
The main function of the surgical navigation module is to project the coordinates of a point on the 3D spinal model reconstructed based on data of preoperative CT into the world coordinate system. The transformation matrix can be calculated so that the robot can control the surgical tool to reach the precise position to complete the spinal puncture.
Based on the previous research of our team [29], the navigation module of spinal puncture mainly includes four parts: 2D-3D registration, calibration and back projection of the calibration target, tracking of the spinal reference frame, and tracking of the surgical tools. The transformation matrix T_3D-Model→X-Ray of the 3D model from the virtual coordinate system to the coordinate system of the X-ray image can be obtained through the 2D-3D registration parameters. The inverse projection matrix T_X-Ray→Target can be obtained after the calibration target is calibrated (the inverse projection matrix is the transformation matrix from the X-ray image coordinate system to the calibration target coordinate system). The transformation matrix T_Target→Spine from the calibration target coordinate system to the spinal reference frame coordinate system is obtained through the relative position between the two coordinate systems, measured by the NDI optical tracker when the X-ray images are taken. The transformation matrix T_Spine→Tool from the spinal reference frame coordinate system to the coordinate system of the surgical tools can be obtained by the NDI optical tracker, and thus the spatial positions of the spinal model and the surgical tools can be obtained. As shown in Figure 6, the positional relationships between the hardware components of the 3D surgical navigation system are obtained by the NDI optical tracker, from which the relationship between the virtual spinal model and the surgical tools can be calculated. The transformation relationship between the surgical tool coordinate system and the surgical tool tracker coordinate system is shown in Figure 7.
The transformation matrix T_Spine→Tool between the spinal reference frame coordinate system and the surgical tool coordinate system is composed of two parts: the matrix T_Spine→ToolTracker from the spinal reference frame coordinate system to the surgical tool tracker coordinate system, and the matrix T_ToolTracker→Tool from the surgical tool tracker coordinate system to the surgical tool coordinate system. Both can be obtained by the surgical tool tracking module.
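The chain of transformations described above composes by matrix multiplication; the numpy sketch below uses random rigid transforms as stand-ins for the real calibration results and shows only how the links combine, not the actual calibration values:

```python
import numpy as np

def random_rigid(rng):
    """Generate a random 4x4 rigid transform (rotation via QR, plus translation)."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0  # keep a proper rotation (det = +1)
    T = np.eye(4)
    T[:3, :3] = Q
    T[:3, 3] = rng.normal(size=3)
    return T

rng = np.random.default_rng(0)

# Assumed stand-ins for the four links of the navigation chain:
T_model_to_xray   = random_rigid(rng)  # 2D-3D registration
T_xray_to_target  = random_rigid(rng)  # calibration-target back projection
T_target_to_spine = random_rigid(rng)  # NDI: target -> spinal reference frame
T_spine_to_tool   = random_rigid(rng)  # NDI: spinal frame -> surgical tool

# Composed so that a point on the virtual model is carried all the way into
# the surgical-tool coordinate system.
T_model_to_tool = (T_spine_to_tool @ T_target_to_spine
                   @ T_xray_to_target @ T_model_to_xray)

p_model = np.array([12.0, -3.0, 45.0, 1.0])       # homogeneous model point
p_tool_chain = T_model_to_tool @ p_model
p_tool_steps = T_spine_to_tool @ (T_target_to_spine @ (
               T_xray_to_target @ (T_model_to_xray @ p_model)))
```

Composing once and reusing the product is preferable in a real-time loop, since the four links only change when a new X-ray or tracker reading arrives.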

A. VIRTUAL AND REAL REGISTRATION EXPERIMENTS OF SPINAL MODEL
In the experiments of virtual and real registration based on the identification method, registration is not easy because of the complexity of the surface of the spinal model. The experiments are therefore carried out by registering landmarks on the virtual spinal model with landmarks on the real spinal model. This method has the advantages of simple operation and obvious effect. In this study, small balls matched with the NDI optical tracker are fixed to the spinal model as landmarks, so that the error of virtual and real registration can be calculated easily and accurately. The position of each landmark (NDI ball) can be accurately obtained through the NDI optical tracker.

1) EXPERIMENTS OF SINGLE-VIEW VIRTUAL AND REAL REGISTRATION
A single camera is used to complete the single-view virtual and real registration of the virtual and real spinal models based on the identification method. In the experiment, the identification-based method is used to register the landmarks on the real and virtual spinal models. Four landmarks (NDI balls) are fixed on the spinal model and named P1, P2, P3 and P4. The four landmarks must be kept non-coplanar (assuming that each segment of the spinal model is relatively fixed). As shown in Figure 8, when the four landmarks on the virtual model overlap the four landmarks on the real model, the registration of the virtual model and the real model can be considered complete. Four groups of experiments are conducted, with four landmarks at different positions selected for each group. In each experiment, the four landmarks on the virtual model and the real model are kept basically coincident, and then the operator holds the surgical probe to pick up the landmarks on the spine. The NDI optical tracker is used to read the 3D spatial coordinates of the landmarks on the real spinal model, as well as the 3D spatial coordinates of the landmarks on the virtual model taken from the tip of the surgical probe. The actual virtual and real registration error of each landmark is calculated as the difference between these two sets of coordinates. The experimental data obtained are shown in Table 1, together with the average error of each group.
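The error calculation described above reduces to the Euclidean distance between paired landmark readings; a minimal sketch follows, with invented coordinates that are not the data of Table 1:

```python
import numpy as np

# Hypothetical landmark coordinates (mm) as they would be read from the NDI
# optical tracker: one set picked on the real model, one on the virtual model.
real_landmarks = np.array([[10.2, 34.5, 120.1],
                           [45.8, 12.3, 118.7],
                           [78.1, 40.9, 125.4],
                           [60.3, 70.2, 119.8]])
virtual_landmarks = real_landmarks + np.array([[8.9, 2.1, 3.0],
                                               [9.4, 1.5, 2.2],
                                               [8.1, 2.8, 3.5],
                                               [9.9, 1.1, 2.7]])

# Per-landmark registration error: Euclidean distance between paired points.
errors = np.linalg.norm(real_landmarks - virtual_landmarks, axis=1)

# Group statistics reported per experiment in the paper's tables.
mean_err, std_err = errors.mean(), errors.std()
```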

2) EXPERIMENTS OF MULTI-VIEW VIRTUAL AND REAL REGISTRATION
It can be seen from the experimental data in Table 1 that the average error of each group of experiments based on single-view virtual and real registration is about 10 mm, with a range of 8.23 mm to 10.98 mm. These results show that the accuracy of single-view virtual and real registration on the spinal model is unsatisfactory. From the perspective of Figure 8, the registration of the real and virtual spinal models appears basically successful. However, from the perspective of Figure 9, in which the blue circles mark the positions of the real landmarks and the yellow circles the positions of the virtual landmarks, the real and virtual landmarks on the spinal model do not completely coincide, and there are also errors in the registration of the contour edge of the spinal model.
The main reason for the large deviation between Figure 8 and Figure 9 is that a single camera can only capture video images along a single line-of-sight direction. The human eye tends to be insensitive to depth along the direction of vision. In single-view virtual and real registration, it is difficult to obtain depth information along the camera's viewing direction, so the virtual model may appear unregistered with the real model when observed from another perspective; such deviation can only be perceived from other viewpoints. In view of this lack of depth information in single-view virtual and real registration, a second camera is added to the hardware in this study to collect video images of the real surgical scene from a different perspective, forming a multi-view virtual and real registration. The process of multi-view virtual and real registration with two cameras is shown in Figure 10: two cameras simultaneously obtain video streams of the real surgical scene from different perspectives. The same virtual spinal model is imported into the two video streams, and virtual and real registration is performed simultaneously in both video streams using the method above. Only when the registration between the real and virtual spinal models is completed in both video streams at the same time is the overall virtual and real registration considered complete. The effect of multi-view virtual and real registration is shown in Figure 11.
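The depth-ambiguity argument can be illustrated with an ideal pinhole model: a point displaced along one camera's viewing ray projects identically in that camera but visibly differently in a second camera. The focal length and the 90° placement of the second camera below are invented for illustration:

```python
import numpy as np

def project(point_cam, f=800.0):
    """Ideal pinhole projection of a camera-frame point onto the image plane."""
    x, y, z = point_cam
    return np.array([f * x / z, f * y / z])

# Camera 1 looks down its own +Z axis; displacing a point purely along its
# viewing ray leaves the projection in camera 1 unchanged.
p  = np.array([20.0, 10.0, 400.0])
p2 = p * 1.2   # same ray from camera 1, 20% farther away

u1_a, u1_b = project(p), project(p2)   # identical: depth error is invisible

# Camera 2 rotated 90 degrees about the Y axis and offset so the point stays
# in front of it (assumed geometry matching the two-camera setup described).
R2 = np.array([[ 0.0, 0.0, 1.0],
               [ 0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0]])
t2 = np.array([0.0, 0.0, 420.0])
u2_a = project(R2 @ p  + t2)
u2_b = project(R2 @ p2 + t2)   # clearly different: camera 2 sees the error
```

Requiring the registration to hold in both views simultaneously therefore constrains exactly the depth component that a single view cannot observe.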
From Figure 11, we can clearly see that the four landmarks on the real and virtual spinal models basically coincide in the two different perspectives, and the errors are almost invisible to the human eye. The experimental data of the multi-view virtual and real registration are shown in Table 2.

B. SPINAL PUNCTURE EXPERIMENTS OF AUGMENTED REALITY SURGICAL NAVIGATION BASED ON MULTI-VIEW VIRTUAL AND REAL REGISTRATION TECHNOLOGY
In order to simulate real spinal puncture surgery, the spinal model is wrapped in silicone to replace the skin and muscles of a real patient's spine, so that the skeletal form cannot be seen by the naked eye. Four metal balls are inserted into the spinal model to represent the focal points of pathogenesis. During surgery, the existing navigation module of spinal puncture is first used to transform the focal point of pathogenesis and the planned surgical paths on the virtual model into the world coordinate system [29]. The robot, in place of a human, controls the surgical tool to perform the puncture of the simulated surgery. During the puncture process, the virtual spinal model is registered with the real surgical scene and then fused and superimposed by the augmented reality module. The video signal of the camera is used to observe the positions of the surgical tool and the focal point of pathogenesis in real time, to monitor whether deviation occurs in the whole surgical process. Four puncture experiments of ARSN are performed at four different positions of the spinal model. The real perspective of the spinal puncture surgery based on the ARSN system is shown in Figure 12. In each group of experiments, four different surgical paths are planned to allow the robot to control the surgical tool to locate the focal point of pathogenesis and conduct the simulated puncture. The final puncture effect based on the ARSN system is observed from the AR perspective, as shown in Figure 13. The 3D coordinates of the tip of the surgical tool are compared with the 3D coordinates of the focal point of pathogenesis, and the puncture error is calculated. The specific experimental data are shown in Table 3.

C. ANALYSIS OF EXPERIMENTAL DATA
1) THE ACCURACY ANALYSIS OF SPINAL AUGMENTED REALITY SURGICAL NAVIGATION SYSTEM
It can be seen from the experimental data in Table 1 that the average error of each group of experiments using single-view virtual and real registration is about 10 mm, with an error range of 8.23 mm to 10.98 mm. Due to the lack of depth information from a single camera, these results are far from meeting the requirements of actual spinal puncture surgery. As can be seen from the experimental data in Table 2, the average accuracy of multi-view virtual and real registration is about 1.6 mm, with an error range of 1.25 mm to 1.95 mm. Compared with single-view virtual and real registration, the average accuracy of multi-view virtual and real registration is improved by 84%. As shown in Figure 14, the average error of multi-view virtual and real registration is obviously much lower than that of single-view virtual and real registration, which directly demonstrates the feasibility and advantages of the multi-view approach. The accuracy of multi-view virtual and real registration fully meets the clinical requirement of 2.5 mm. Introducing the multi-view virtual and real registration technology into the ARSN system therefore meets the accuracy needed for real-time reconstruction of the patient's spinal morphology under the skin during surgery, laying a foundation for guiding physicians in conducting surgery.
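The improvement percentage quoted above follows directly from the reported mean errors:

```python
# Relative improvement computed from the reported mean registration errors.
single_view = 9.85   # mm, mean single-view registration error (Table 1)
multi_view  = 1.62   # mm, mean multi-view registration error (Table 2)
improvement = (single_view - multi_view) / single_view * 100.0
# improvement is roughly 84%, matching the figure stated in the analysis
```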
In the navigation process of spinal puncture surgery based on multi-view virtual and real registration, the errors of the 3D coordinates of each focal point of pathogenesis are analyzed in Table 3. Figure 15 shows the actual distribution of puncture errors at the X, Y and Z coordinates of each focal point of pathogenesis. As can be seen from Figure 15, the puncture errors of the X, Y and Z coordinates in the robot-assisted spinal puncture navigation experiments are mainly between 0.1 mm and 0.9 mm, and, as the blue area in the figure shows, about 80% of the puncture errors lie between 0.1 mm and 0.6 mm. This indirectly indicates that robot-assisted spinal puncture based on multi-view virtual and real registration has high accuracy and stability. More intuitively, it can be seen that the puncture error along any one of the X, Y and Z coordinates can dominate the overall puncture error of a focal point of pathogenesis. In addition, the puncture errors of the X, Y and Z coordinates show no regular pattern.
In this paper, the accuracy of the spinal ARSN system based on multi-view virtual and real registration is 1.70 ± 0.25 mm. Without the multi-view virtual and real registration technology in augmented reality, the puncture accuracy of the spinal surgical navigation system developed by our team is 2.54 ± 0.15 mm [29]. In comparison, the puncture accuracy of the entire spinal ARSN system is improved by about 35%. This demonstrates that the multi-view virtual and real registration method makes it possible to observe the positions of the surgical tools and the focal points of pathogenesis in real time and to monitor deviation throughout the surgical process, and that it greatly helps to reduce puncture deviation and improve the accuracy of the navigation system in spinal puncture. In the robot-assisted puncture navigation experiments, two intraoperative X-ray images are randomly selected to verify the accuracy. As shown in Figure 16, the tip of the surgical tool can accurately reach the position of the focal point of pathogenesis. Compared with the previous research of our team, the accuracy of the spinal ARSN system based on multi-view virtual and real registration meets the requirements of spinal puncture surgery.

2) ANALYSIS ON THE EFFICIENCY AND FEASIBILITY OF SPINAL AUGMENTED REALITY SURGICAL NAVIGATION SYSTEM
On the basis of ensuring the accuracy of the ARSN system, comparative experiments are carried out on the ARSN system and on the surgical navigation system without AR [29] to verify the efficiency and feasibility of the spinal ARSN system; the results are shown in Table 4. It can be seen from Table 4 that the average time of a puncture experiment using the ARSN system is about 9 seconds, while the average time with our previous surgical navigation system is about 5.5 seconds. In terms of efficiency, the ARSN system therefore takes about 4 seconds longer. Meanwhile, in terms of the success rate of the puncture experiments, the success rate of the ARSN system is 40% higher than that of the previous surgical navigation system. Considering both time and success rate, the efficiency of the ARSN system meets the basic needs of physicians, and the system is highly feasible. We divide operators into three levels according to their familiarity with the system: Level 1: the operator has performed fewer than 20 spinal puncture experiments with the system. Level 2: the operator has performed between 20 and 30 spinal puncture experiments with the navigation system.
Level 3: the operator has performed more than 40 spinal puncture experiments with the navigation system.
All of the above experiments are performed by operators at Level 3. In order to study the influence of the operator's proficiency on the efficiency of the navigation system, three operators at Level 1, Level 2 and Level 3 are selected to complete the spinal puncture experiments. Each of the three operators conducted several experiments, and the final results are shown in Figure 17. It can be seen from Figure 17 that operators with different proficiency levels take different amounts of time to complete a puncture experiment. An operator at Level 1 takes about 22 seconds to complete a puncture experiment at the beginning; as the number of experiments increases, the time spent drops significantly after the 10th experiment, and after about 30 training runs the average time required for the puncture experiments can basically be reached. An operator at Level 2 can initially complete a puncture experiment within 16 seconds, and can reach the 10-second requirement as the number of experiments increases.
For an operator at Level 3, the time spent on the whole puncture experiment can be controlled to about 8 seconds. As proficiency increases, the operator completes the spinal puncture more efficiently, and the system becomes more feasible.
The installation angle between the two cameras is also studied in the experiments. Installation angles of 90°, 120° and 180° are tested respectively. The spinal puncture experiments are performed by an operator at Level 3 at the three different angles. The average puncture time and the average success rate of puncture are shown in Table 5. It can be seen from the data in Table 5 that at an angle of 90°, the puncture efficiency of this system is the highest, and the average success rate of puncture is also the highest.

D. ANALYSIS ON THE SOURCE OF ERROR IN THE SPINAL AUGMENTED REALITY SURGICAL NAVIGATION SYSTEM
The spinal ARSN system studied in this paper exhibits some errors in the actual puncture process of the spine, and these errors mainly arise from the following sources:

1) ERRORS CAUSED BY THE VIRTUAL AND REAL REGISTRATION OF THE SPINAL MODEL
The virtual 3D model used for virtual and real registration is generated by 3D reconstruction of CT slices, and the reconstructed model cannot reproduce every detail of the real spinal model. During virtual and real registration, the virtual model is displayed at a fixed position in the coordinate system of the identification after 3D registration, and errors arise in the conversions between coordinate systems during this process. Lighting, occlusion and other factors in the experimental environment affect the imaging quality of the camera. In addition, the position of the camera relative to the spinal model can introduce deviations into the camera image and ultimately affect the accuracy of virtual and real registration.
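The coordinate conversions involved in 3D registration can be illustrated with a minimal rigid-registration sketch. Assuming corresponding landmark points are available on the virtual model and the real spinal model, the Kabsch algorithm estimates the rigid transform between them; the residual after alignment is one component of the registration error described above. The landmark coordinates below are hypothetical, not taken from this study's data.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t mapping src -> dst
    via least-squares rigid alignment (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical landmark coordinates (mm): virtual model vs. real model
virtual_pts = np.array([[0, 0, 0], [30, 0, 0], [0, 40, 0], [0, 0, 25.0]])
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1.0]])
real_pts = virtual_pts @ R_true.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_register(virtual_pts, real_pts)
residual = np.linalg.norm(virtual_pts @ R.T + t - real_pts, axis=1)
print(residual.max())   # near zero for these noise-free points
```

With real, imperfect landmarks the residual is non-zero, and it is this residual (together with camera and reconstruction error) that shows up as registration error in the experiments.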

2) ERRORS CAUSED BY NDI OPTICAL TRACKER
According to the data provided by the manufacturer, the tracking error of the NDI optical tracker used in this research is about 0.15 mm. In this study, the NDI optical tracker is used to track the surgical tools, landmarks and focal points of pathogenesis during the experiments and obtain their 3D coordinates in real time, so NDI tracking errors may accumulate across these measurements.
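If the individual error sources listed in this section are treated as independent, their combined effect can be approximated with the root-sum-square rule. The sketch below is illustrative only: the 0.15 mm tracker figure comes from the manufacturer's data quoted above and the 1.62 mm figure from the multi-view registration result, while the independence assumption itself is a simplification.

```python
import math

# Assumed-independent error sources (mm); values from this study's text.
errors_mm = {
    "ndi_tracker": 0.15,
    "registration": 1.62,        # multi-view virtual/real registration
    "robot_repeatability": 0.03,
}

# Root-sum-square combination of independent errors
combined = math.sqrt(sum(e ** 2 for e in errors_mm.values()))
print(f"combined RSS error = {combined:.2f} mm")   # about 1.63 mm
```

The result is close to the measured system accuracy of 1.70 ± 0.25 mm, which is consistent with registration being the dominant error source.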

3) ERRORS CAUSED BY ROBOT
The repeatability of the robot used in this study is < ±0.03 mm.
The robot controls the surgical tool to carry out the puncture experiments, so the surgical path must be transformed from the world coordinate system into the robot coordinate system, which may introduce errors. An additional error arises when the surgical tool is mounted on the end effector of the robot.

4) ERRORS IN EXISTING SURGICAL NAVIGATION MODULE
During spinal puncture, the focal points of pathogenesis defined in the virtual model coordinate system and the preoperative surgical route must be converted into the world coordinate system in which the real spinal model is located. This conversion passes through multiple links of the surgical navigation module, and the connection and transformation at each link may introduce errors.
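The chain of links in the navigation module amounts to composing rigid transforms between coordinate systems. A minimal sketch with 4×4 homogeneous matrices (all numeric values hypothetical) shows how a focal point given in the virtual-model frame is mapped, link by link, into the robot frame; each matrix in the chain is a place where error can enter.

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1.0]])

# Hypothetical link transforms in the navigation chain (mm)
T_world_model = transform(rot_z(15), [100.0, 50.0, 0.0])    # model -> world
T_robot_world = transform(rot_z(-90), [-200.0, 0.0, 30.0])  # world -> robot

# Focal point of pathogenesis in the virtual-model frame (homogeneous)
p_model = np.array([10.0, 20.0, 30.0, 1.0])

# Compose the chain: model -> world -> robot
p_robot = T_robot_world @ T_world_model @ p_model
print(p_robot[:3])
```

Because the transforms multiply, a small rotational error in an early link is amplified by the lever arm of later links, which is why errors at each connection accumulate into the final system accuracy.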

IV. DISCUSSION
The virtual 3D spinal model is accurately superimposed and displayed at the patient's surgical site by using the technique of multi-view virtual and real registration. The introduction of multiple cameras and perspectives improves the accuracy of virtual and real registration in surgery from about 10 mm to about 1.6 mm, an improvement of 84%. The multi-view method also makes it possible to observe the position of the surgical tools and focal points of pathogenesis in real time and to monitor deviations throughout the surgical process. From the experimental data, the accuracy of the ARSN system based on multi-view virtual and real registration studied in this paper is 1.70 ± 0.25 mm, which is 35% higher than that of the previous surgical navigation system that does not use augmented reality [29].
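The improvement figures quoted above follow from a simple relative-error calculation using the measured accuracies:

```python
# Measured registration accuracies from the experiments (mm)
single_view = 9.85
multi_view = 1.62

# Relative improvement of multi-view over single-view registration
improvement = (single_view - multi_view) / single_view
print(f"{improvement:.0%}")   # prints 84%
```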
Virtual and real registration mainly registers the patient's preoperative imaging data with the real surgical scene [30], [14]. In this paper, the multi-view virtual and real registration technique builds on the common identification-based (marker-based) registration method. The real surgical scene is collected through cameras, and during registration the relative position between each camera and the identification is calculated in real time. Computer-vision techniques place the virtual spinal model at the correct position relative to the identification, thereby fusing the virtual model with the real surgical scene. To make registration more accurate, easier to operate and more interactive, software that controls the movement and rotation of the virtual model needs to be implemented. This is similar to the approach adopted with Microsoft's HoloLens, in which the virtual model is moved to the position of the real model in space by means of a handheld identification, achieving the first registration [31], [32]. The second registration is completed by voice control of movement in the front, back, left, right, up and down directions as well as rotation, so as to accurately fine-tune the position of the 3D model with six degrees of freedom [33]. In the future, if multiple perspectives are combined with an improved identification method, the accuracy of virtual and real registration may improve further, the difficulty of operation will be reduced, and the accuracy of the whole ARSN system will improve accordingly.
The interactive modes of AR are studied from the standpoint of operator comfort and practical effect. In 2007, Navab et al. [34] argued that in medical AR applications the physician can neither move the patient undergoing surgery nor move virtual objects aligned with the patient's anatomy, and that in existing AR systems physicians are limited to 3D perception only through stereoscopic displays. They proposed an interactive virtual mirror that allows users to make full use of 3D data, and speculated that interactive virtual devices will be integrated into many AR systems in the future. In current research, interactive devices in AR include the Head-Mounted Display (HMD), Augmented Optics and the Enhanced Display [35]. In practice, the HMD is the most widely used interactive device [36]. For example, Dennler et al. [37] used a head-mounted AR device to guide the placement of instruments and implants in spinal surgery and achieved good results. Itoh and Klinker [38] proposed a calibration method for Optical See-Through Head-Mounted Display (OST-HMD) systems that simultaneously corrects two kinds of distortion; once the spatial alignment between the user's vision and the display is completed, seamless AR experiences become a reality. They also studied how to naturally coordinate reality with the virtual content the user sees in OST-HMD-based AR, and proposed an OST-HMD color calibration method that makes the output color of the display appear identical to the input color [39]. However, because optically transparent head-mounted displays are heavy, as studied by Chen et al. [19], physicians can feel uncomfortable wearing them for hours during surgery. Moreover, when a physician wears a head-mounted display during an operation, he can only see the scenes captured by the camera rather than the real scene, which greatly increases the risks of the operation [40].
The use of Augmented Optics as a means of interaction is relatively rare. Augmented Optics refers to a translucent silver-coated mirror inserted between the physician and the operating area. With good transparency, the physician can clearly see the tool and the patient's surgical site through the Augmented Optics, and good specular reflectivity ensures that the physician can also clearly see the virtual object reflected in the mirror [41]. However, the registration accuracy of Augmented Optics depends heavily on the viewing angle of the lens and the reflection quality of the mirror [42]. Moreover, the Augmented Optics must be placed between the physician and the operating area, which limits the physician's movements and increases the difficulty of the operation. The advantage of the Enhanced Display interactive mode is that the complex computing module is separated from the display module, giving the whole system low coupling and making it easy to maintain and extend [43], [44]. There is no need to wear a helmet or glasses during surgery, and no additional augmented reality device needs to be placed between the physician and the patient, so the difficulty of the operation does not increase and the physician does not feel uncomfortable. Therefore, our study uses the Enhanced Display interactive mode. In addition, Bichlmeier et al. [45] introduced the concept of a tactile/controllable medical AR virtual mirror in 2009. This approach enhances the physician's direct field of vision, providing all the views needed of the volumetric medical image data registered at the surgical site without moving the table or displacing the patient. This technique has great potential in future surgical applications.
From the perspective of accuracy, introducing augmented reality technology to monitor the surgical process and allow physicians to observe the position of surgical tools and focal points of pathogenesis in real time is one of the future development trends of spinal surgical navigation systems. Introducing robots to replace physicians in tasks such as spinal puncture is another [46], [47]. The development of navigation systems for spinal puncture is not limited to these directions. In 2018, Ma et al. proposed a surgical navigation method for distal interlocking of intramedullary nails (IMN) that combines optical and electromagnetic tracking; in their experiment, all drills successfully entered the distal hole of the tibia model [48]. Satisfactory tracking accuracy is obtained by using several hybrid tracking systems, but the use of electromagnetic navigation means many traditional surgical tools cannot be used, so non-magnetic surgical tools have to be customized, which increases the complexity of tool design. In 2019, Gibby et al. took the ARSN system a step further, using a head-mounted display to show focal points of pathogenesis and surgical scenes in real time [11]. Compared with our study, this allows the physician to observe the patient's focal points of pathogenesis more conveniently during the operation. However, the device has some problems: wearing it for a long time increases the discomfort of the physician's head, and it requires a wireless network connection, which can cause delays during the operation when the network is in poor condition. To study the puncture accuracy of a robot-assisted ARSN system, in 2020 Gustav et al. proposed integrating a robot into a hybrid operating room equipped with an ARSN system for semi-automatic, minimally invasive pedicle screw placement [49].
The feasibility, accuracy and effectiveness of that guidance system are verified: its final accuracy is 0.48 ± 0.44 mm, considerably higher than that of the robot-assisted surgical navigation system in our study. From the perspective of the cost of the whole ARSN system, however, the cost of our proposed system is lower. In 2020, Li et al. proposed a robot system based on intraoperative 2D navigation, which acquires fluoroscopic images through the C-arm widely used in clinical practice [50]. Robot-assisted spinal puncture is eventually achieved with an error of 0.17 mm. Although this is more accurate than the robot-assisted navigation proposed in our study, acquiring fluoroscopic images with a C-arm involves a large amount of radiation, which is harmful to the human body. In the same year, Fabio et al. used head-mounted augmented reality 3D fluoroscopy to assess the accuracy of holographic pedicle screw navigation [51]. The head-mounted navigation device is compared with a state-of-the-art position tracking system, and the final navigation accuracy is 3.4 ± 1.6 mm. The 3D fluoroscopy in that system provides physicians with a better 3D surgical image, but its accuracy is lower than that of our study and can barely meet the needs of surgeons.
The efficiency and feasibility of ARSN systems are also topics of concern in current surgical navigation research. Based on the experimental results of our study, the operator's proficiency with the navigation system plays a crucial role in its efficiency. The comparison of the time required at the three proficiency levels shows that the higher the operator's proficiency, the less time is required and the higher the efficiency of the navigation system. Proficiency can be improved through training, which effectively saves time. The installation angle between the two cameras is another key factor affecting the efficiency and success rate of the navigation system. The final experimental results show that at an installation angle of 90°, the efficiency and puncture accuracy are relatively high, whereas at 180° the results are relatively poor. Because the two cameras are then parallel, the axial interface and the cylinder of the spinal model cannot both be fully displayed in the cameras at the same time during virtual and real registration, which degrades the registration result and thus affects the efficiency and accuracy of the actual puncture.
Navab et al. [34] also conducted extensive research on intraoperative organ deformation. Their study points out that deformation caused by the patient's position or respiratory state can largely be handled under a rigid-anatomy assumption; a rigid-body hypothesis, such as for the spine, can be considered reasonable for several surgical procedures. However, for deformable organs (liver, lung, etc.), errors caused by organ deformation still exist. Our study also has some limitations: the subject is a spinal model rather than a real patient, so some deviation may occur in an actual patient's body, and the number of specimens in the experiment is relatively small.

V. CONCLUSION
Building on the study of navigation systems for spinal puncture, this study carried out an in-depth investigation of virtual and real registration technology in augmented reality. The accuracies of single-view and multi-view virtual and real registration are compared, and to improve the accuracy of the navigation system, the multi-view virtual and real registration technique is introduced into the previous navigation system. The results of the virtual and real registration experiment and the navigation puncture experiment show that the accuracy of single-view virtual and real registration is 9.85 ± 0.80 mm, while that of multi-view virtual and real registration is 1.62 ± 0.22 mm, an improvement of 84%. After the multi-view virtual and real registration technology is introduced, the accuracy of the spinal ARSN system is 1.70 ± 0.25 mm, which is 35% higher than that of the previous surgical navigation system. Therefore, multi-view virtual and real registration can effectively improve the accuracy of both registration and the spinal augmented reality surgical navigation system. The position of the surgical tools and the focal points of pathogenesis can be observed in real time. The system meets physicians' requirements for the precision of spinal puncture surgery, guiding the physician during the operation and improving the success rate of surgery.