Design and Accuracy Assessment of an Automated Image-Guided Robotic Osteotomy System

Image-guided robotic spine surgery systems have been in clinical use since 2004 but are currently used only for pedicle screw placement. Robotic spine osteotomy (bone removal and shaping), however, is still in the research phase. This article presents the development and evaluation of a KUKA-based image-guided robotic system that automates the osteotomy process, from automatic milling path determination to milling execution, using laminectomy as the experimental paradigm. An approach to quantifying milling accuracy (over the entire path) and margin accuracy (distance from thecal sac penetration) is also described. System accuracy was evaluated in two experiments. In the first, a common preoperative image and a common set of image fiducial points were used to perform bilateral laminectomies on 10 identical 3D-printed vertebra phantoms. In the second, individual preoperative images with individually identified fiducial points were used to perform bilateral laminectomies on 4 identical 3D-printed vertebra phantoms. The accuracy results for the first experiment were 0.19 ± 0.16 mm (milling) and 0.69 ± 0.37 mm (margin); for the second, 0.24 ± 0.15 mm and 0.42 ± 0.26 mm, respectively. These results compare favorably to currently accepted clinical standards for laminectomy. The system developed here implements a valuable new role for robotics in spinal surgery.

With aging, degenerative changes in the lumbar spine can occur, characterized by a loss of disc height, which can result in disc herniation or bulging, and by thickening of the facet joints and ligamentum flavum. These changes reduce the space in the central canal and lateral recesses, leading to a condition called lumbar spinal stenosis (LSS) [1], [2]. LSS is a common cause of neurogenic claudication from compression of the descending nerve roots, manifesting as pain, weakness, sensory difficulty, bowel and bladder dysfunction, and gait difficulty, making ambulation and the performance of normal activities of daily living difficult. It affects 103 million adults worldwide and is most prevalent among the older population (>65 years) [3], [4].
Treatment of LSS commonly involves decompression of the spinal elements by removal of the lamina and ligamentum flavum. During the procedure, the surgeon mills all or part of the vertebral lamina and spinous process using a handheld, high-speed surgical mill. The surgeon preserves a small thickness of the lamina (i.e., the margin) that is then removed with a handheld Kerrison rongeur (Fig. 2). It is a well-established, free-hand technique utilized in over 350,000 spinal procedures each year in the United States [3]. This number is expected to increase owing to the aging population [5]. The milling operation demands precise control of the handheld milling tool and a high degree of hand-eye coordination. Challenges with handheld drilling include inadequate visualization of critical structures due to the formation of bone debris, inadequate lighting or magnification, operator fatigue, and variability in bone density and hardness. Complications during milling can result in injury to vascular structures leading to hemorrhage, injury to the thecal sac resulting in cerebrospinal fluid leak, or injury to neurologic structures resulting in neurologic dysfunction, including weakness, sensory loss, and neuropathic pain. Considering this, a robotic surgery system offers a potential advantage, as it does not fatigue over long procedures and performs repetitive tasks with high accuracy.
In the past decade, image-guided surgical robots aimed at addressing the accuracy and safety problems of pedicle screw placement during spinal procedures have been developed. So far, the United States Food and Drug Administration has approved a total of 9 robots for spine surgery, and China's National Medical Products Administration has approved 1 [6], [7]. Compared to free-hand screw placement techniques, these robots offer the potential for a high degree of accuracy and reliability, lower intraoperative complications, ease of interaction, and less chance of surgeon fatigue [8], [9], [10], [11], [12], [13]. However, these robots provide only static guidance, such as positioning the robotic arm at a location and angle that allows screw placement. They lack the capability for dynamic operations, such as milling.
At present, there are no commercially available robots for robot-assisted osteotomy, and studies remain at the experimental stage. The research is primarily focused on automated path planning from pre-operative images and on automated milling control and state recognition systems based either on grayscale values in medical images or on feedback from force, sound, or vibration. The control systems aim to minimize procedure time by optimizing milling parameters such as linear velocity and cutting depth, while state recognition involves separating the milling procedure into outer cortical, middle cancellous, and inner cortical milling states in order to preserve the margin. Milling stops when the inner cortical state is reached. Li et al. [14] summarize the current progress in the field of robot-assisted laminectomy.
Wang et al. [15] adopted a force-based state recognition system using cross-correlation. The point at which the milling tool made contact with the inner cortical bone was established as the signal to halt the milling process. Deng et al. [16] used a state recognition strategy based on the Hilbert-Huang Transform and a Support Vector Machine to analyze and extract features of the feedback force during bone milling and recognize the cortical and cancellous layers. Deng et al. [17] maintained the milling force by adjusting the linear velocity and cutting depth of the milling tool using a fuzzy force controller.
A state recognition algorithm based on energy consumption during bone milling was also developed for safety purposes. Fan et al. [18] developed a fuzzy logic controller to maintain the milling force by adjusting the cutting depth of the milling tool. Milling states were identified by analyzing the milling force via a Normalized Mean Feature algorithm. Qi [19] proposed a multilevel fuzzy controller to adjust the linear velocity of the milling tool in real time based on force feedback. An Energy Consumed Density feature extraction method was also proposed to identify the bone texture while milling under adaptive linear velocity. Jiang [20] built a force model to derive milling coefficients for a ball-end milling tool. To maintain the milling force, the cutting depth of the milling tool was adjusted by optimizing these milling coefficients by means of Particle Swarm Optimization. Li et al. [21] used a custom-designed rectangular blade as a milling tool to perform vertical step-by-step milling. The milling force information was analyzed by a fuzzy control mechanics model to prevent inner lamina penetration. Dai [22], [23], [24], [25] developed state recognition techniques based on vibration and sound signals during bone milling. Li [26] proposed a surgical status perception method based on features extracted from milling force and vibration signals. Sun [27], [28] used pre-operative CT for path planning and margin determination. In addition, a mapping relationship between the gray value of the medical image and a virtual force was established to determine the linear velocity of the milling tool, on the assumption that lower grayscale values would offer less resistance and could therefore be milled at a faster rate. Z. Li et al. [29] used pre-operative CT scans to plan the milling path and determine the margin. The surgical procedure involved a step-by-step vertical milling approach, and force information was used as a secondary safety measure to avoid an inner lamina breach.
It is notable that, despite considerable research in the field of robot-assisted laminectomy, there is so far no unified quantitative standard for measuring overall milling accuracy; one method has only recently been proposed [29]. The most widely accepted accuracy standard is non-penetration of the inner cortical bone. This is in contrast to the field of robot-assisted pedicle screw fixation, where the Gertzbein-Robbins scale [30] is used to evaluate screw placement accuracy and studies quantitatively comparing preoperative and postoperative screw placement trajectories have been reported [31], [32].
The current work involves the design and characterization of an image-guided robotic osteotomy milling system in which the entire process, from mill path planning to milling the bone, is automated. A laminectomy was used as the surgical paradigm. A novel medical image-based laminectomy planning module was developed that is independent of the composition of the lamina. Upon selection of the lamina by the surgeon, the module automatically generates the milling path, which upon registration is transferred to the robot to execute the milling operation. The milling path incorporates a margin that is curved at both ends. The purpose of the curved milling path is to ensure that the milling tool never encounters the contents of the central canal. Osteotomies are commonly performed in degenerative conditions that narrow the central canal and may result in herniation of the contents of the central canal posteriorly through the interlaminar space. It is that tissue that would be at risk if the milling path were not curved at the edges of the lamina. The milling operation is terminated based on the margin set during path generation. While a system of this type has many important specifications and features, the most important evaluation criteria are the milling accuracy and the residual thickness (margin) of the lamina. Without clinically acceptable accuracy, the system is not viable. Here we describe the design and operation of this new system and the first ex-vivo accuracy study.

II. IMAGE-GUIDED ROBOTIC OSTEOTOMY SYSTEM
The spine surgery system is similar to our automated pedicle screw placement system described in [31]. It comprises a KUKA LBR iiwa 7 R800 robot (KUKA, Augsburg, Germany), a robot controller unit with an embedded PC running Microsoft Windows 7, a user interface pendant, a surgical planning workstation, an emergency stop button, a hand-guiding switch, and a milling tool assembly. A system block diagram is shown in Fig. 3. The KUKA LBR iiwa 7 R800 is a 7-axis lightweight robot with a positional repeatability of 0.1 mm and a maximum payload of 7 kg. Its built-in position and axis torque sensors contribute to its safety and ability to collaborate with humans. The robot is programmed in Java on a Microsoft Windows 10 PC running KUKA Sunrise.OS (version 1.14), a development environment provided by KUKA. The robot's internal localization system is used for both registration and milling the lamina, as there is no integrated external navigation system. The hand-guiding switch enables a manual guidance mode in which the robot reacts compliantly to external forces and can be manually guided to any point within its working envelope. The surgical planning workstation is a Microsoft Windows 10 PC running 3D Slicer (version 4.11), an open-source medical image visualization and application development platform (https://www.slicer.org/). A scripted module was developed in 3D Slicer using Python 3 (https://www.python.org/); SimpleITK (https://simpleitk.org/), a simplified interface to the Insight Toolkit (ITK) (https://itk.org/); and Qt Designer (https://www.qt.io/), a cross-platform framework used as a graphical interface toolkit. Pre-operative images are imported into the module and used by the surgeon to plan the laminectomy procedure. The milling path plan is then transferred to the robot over Ethernet using the OpenIGTLink protocol [33]. The workstation is also used to measure milling accuracy and the post-milling lamina margin (thickness) error.
The milling tool end-effector assembly is shown in Fig. 4. It is a two-part assembly consisting of a base unit and a motor housing unit. It was designed in SolidWorks (Dassault Systèmes SolidWorks Corp., Waltham, MA) and 3D printed in Nylon 12 on a Stratasys Fortus 450mc system (Stratasys, Rehovot, Israel). Nylon 12 was chosen for its excellent mechanical properties, such as hardness, tensile strength, and resistance to abrasion. The motor housing unit is attached to the base unit with screws, as shown in Fig. 4 (e) & (f). It holds a 2057 S024BA series brushless DC servomotor (Faulhaber, Schönaich, Germany), which is powered by an SC 5008S 4476 speed controller (Faulhaber, Schönaich, Germany). The motor (Fig. 4 (b)) is surrounded by a thin, 3D-printed layer of TPU (PolyFlex TPU90) that acts as a vibration dampener. A rigid coupler (Fig. 4 (d)) connects the 1/8-inch motor shaft to the milling tool, a 3 mm spherical burr, via an embedded set screw for each shaft (Fig. 4 (c)). The base unit is attached to the robot's flange. It supports the speed controller (Fig. 4 (a)), which is programmed to operate the motor at 24,000 rpm using the Motion Manager 6 control software provided by Faulhaber.

III. SYSTEM OPERATION

Automated robotic lamina milling involves three sequential tasks: A. Pre-operative planning, B. Coordinate system registration, and C. Lamina milling. The user interacts with both the surgical planning workstation and the robot while performing these tasks. The workflow is shown in Fig. 5.
Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply.

A. Pre-Operative Planning
The lamina milling procedure is planned on a pre-operative image of the vertebrae using a scripted module developed in 3D Slicer (Fig. 6). The module has an ROI selection section, which uses 3D Slicer's built-in modules to allow the surgeon to define the region of the lamina to be milled, and an image processing and path generation section, which uses a Python-based algorithm to compute the milling path.
1) ROI Selection: A pre-operative DICOM image of the vertebra is imported into the scripted module. After computing and displaying a volume rendering, an oblique laminectomy plane, created by rotating the sagittal slice view plane around the superior-inferior (SI) axis, is overlaid on the volume-rendered image. The plane is then translated along the left-right (LR) axis to select the region of the lamina to be milled (Fig. 6 (c-f)). This cross section is then segmented along the laminectomy plane to create a 2D representation of the lamina in 3D image space using the Segment Editor module. However, due to the oblique orientation of the laminectomy plane relative to the image, whose pixels are aligned in the cardinal directions, there is a misalignment between the plane and the segmentation axes of the image, causing a striped pattern artifact on the segmented image (Fig. 7 (a-c)). To avoid this artifact, a rectangular region of interest (ROI) is defined around the selected lamina prior to the segmentation process. This ROI is then rotated and translated using the Transforms module so that it has the same orientation as the laminectomy plane. The region within the ROI is then cropped from the original image using the Crop Volume module. The resulting cropped image obtained using the rotated ROI is a linearly interpolated, isotropically spaced, resampled image whose cardinal directions match those of the laminectomy plane. The cropped image also has a transformed coordinate system with respect to the coordinate system of the original image. Since the laminectomy plane and the segmentation axes now align, a smooth 2D grayscale image of the lamina cross section is obtained after segmentation (Fig. 7 (d-f)).
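The effect of resampling along the oblique plane can be illustrated in 2D. The sketch below is an illustrative analogue, not the Slicer implementation: it uses SciPy rather than the Transforms/Crop Volume modules, and the function name is hypothetical. Rotating the pixel grid with linear interpolation first means a subsequent axis-aligned crop samples the image along the oblique plane rather than across it, which is what removes the striped artifact.

```python
import numpy as np
from scipy import ndimage

def resample_oblique_crop(image, angle_deg, rows, cols):
    """Rotate the image grid by angle_deg with linear interpolation
    (order=1), then take an axis-aligned crop. The crop then corresponds
    to a region aligned with the oblique plane, analogous to the
    rotated-ROI / Crop Volume step described above."""
    rotated = ndimage.rotate(image, angle_deg, reshape=False,
                             order=1, mode="nearest")
    return rotated[rows, cols]

# Example: a 27-degree oblique crop from a synthetic gradient image.
img = np.tile(np.arange(64.0), (64, 1))
crop = resample_oblique_crop(img, 27.0, slice(20, 40), slice(20, 40))
```

With `angle_deg=0` the function reduces to a plain crop, which is a useful sanity check for the interpolation step.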
2) Image Processing: The 2D grayscale cropped image of the lamina (Fig. 8 (a)) is processed using SimpleITK filters. The grayscale image is first filtered using the MedianImageFilter. This filter removes speckle noise in the image while preserving the edges (Fig. 8 (b)). In the next step, the filtered image is converted to a binary image using the OtsuThresholdImageFilter, which creates a thresholded image by selecting the threshold value that maximizes the inter-class variance between the foreground and background intensities (Fig. 8 (c)). The thresholded image is then eroded using the BinaryErodeImageFilter. Erosion plays a critical role in ensuring that the mill does not breach the lower surface of the lamina. Eroding with a 'ball' structuring element of neighborhood size 3 shrinks the lamina geometry by approximately 2.0 mm. Fig. 8 (d) shows the eroded lamina overlaid on the original lamina. The mill path generated from this eroded lamina (discussed below under path generation) ensures that milling stops approximately 2.0 mm above the lower surface of the lamina. The eroded lamina is filtered again using the MedianImageFilter to remove any isolated pixels remaining after the erosion process (Fig. 8 (e)). The ConnectedComponentImageFilter then scans through the filtered image pixel by pixel to identify connected pixel regions. It verifies that the filtered image contains only one region, the lamina, and assigns it a unique label. The BinaryContourImageFilter is then used to identify the structural outline of the lamina (Fig. 8 (f)). This filter takes the binary labeled image of the lamina and keeps only the pixels on the border; all other pixels are set to the background value.
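For illustration, the filter chain can be sketched with NumPy/SciPy stand-ins for the SimpleITK filters named above. The `otsu_threshold` helper and all parameter choices here are illustrative assumptions, not the system's actual settings:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Minimal Otsu: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 probability
    m = np.cumsum(p * centers)              # class-0 cumulative mean
    mt = m[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(var_between)]

def extract_lamina_contour(img, erode_radius=3):
    """Sketch of the pipeline: median -> Otsu -> erode -> median ->
    connected components -> border extraction."""
    # 1. Median filter: suppress speckle noise while preserving edges
    smoothed = ndimage.median_filter(img, size=3)
    # 2. Otsu threshold: separate lamina (foreground) from background
    binary = smoothed > otsu_threshold(smoothed)
    # 3. Erode to pull the geometry inward (this is what later preserves
    #    the ~2 mm margin above the lower lamina surface)
    cross = ndimage.generate_binary_structure(2, 1)
    eroded = ndimage.binary_erosion(binary, structure=cross,
                                    iterations=erode_radius)
    # 4. Median filter again to drop isolated pixels left by erosion
    cleaned = ndimage.median_filter(eroded.astype(np.uint8), size=3).astype(bool)
    # 5. Label connected regions (the text verifies exactly one: the lamina)
    labels, n = ndimage.label(cleaned)
    assert n >= 1, "expected at least one connected region"
    lamina = labels == 1
    # 6. Contour = lamina minus its erosion (border pixels only)
    return lamina & ~ndimage.binary_erosion(lamina)
```

A synthetic rectangular "lamina" run through this pipeline yields a closed border strictly inside the original shape, mirroring Fig. 8 (f).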
3) Path Generation: The processed image from the previous section is a 2D binary image containing the contour of the eroded lamina. The path generation algorithm works on this image to determine the milling path. It consists of the following stages: 1. pixel localization, 2. pixel characterization, and 3. path computation.

a) Pixel localization: In the first stage, key pixels in the image are localized. These pixels assist in determining the path and the orientation the robot will take while milling the lamina. First, the number of pixels along the contour is determined using SimpleITK's LabelShapeStatisticsImageFilter. Each pixel along the contour is then localized and marked as a border point. Since the cropped image has a transformed coordinate system (x, y, z) with respect to the original image (r, a, s), the key pixels are localized in these transformed coordinates. The border points are then sorted in ascending order of their 'y' coordinate as a primary sort key and their 'z' coordinate as a secondary sort key. The first pixel in this sorted list is the leftmost pixel along the contour, while the last pixel is the rightmost. The leftmost pixel is marked as the bottom point and the rightmost pixel as the top point (Fig. 9 (b)). In the next step, the algorithm analyzes the 'y' coordinate of the border points to compute its minimum (min_y) and maximum (max_y) values. Then, using the size of the image (image size (x, y, z)), min_y, and max_y, four corner points are chosen on the image.

The equation of the classifying line passing through the top point (r₁, a₁, s₁) and bottom point (r₂, a₂, s₂) in 3D Cartesian space is given by

(r − r₁)/l = (a − a₁)/m = (s − s₁)/n = λ,   (1)

where λ is a constant and l, m, and n are the direction ratios of the line.

To interpolate points along this classifying line, the s coordinate of the point on the classifying line is set equal to the s coordinate of the first border point (r₃, a₃, s₃). Substituting s = s₃ in (1) yields

λ = (s₃ − s₁)/n.   (2)

Substituting the value of λ obtained in (2) back into (1), the r and a coordinates of the point can be calculated as

r = r₁ + λl,  a = a₁ + λm.   (3)

The layer defining points are interpolated along the line joining two corner points:

p₃ = p₁ + (d/D)(p₂ − p₁),   (4)

where p₁ and p₂ are the two corner points, p₃ is the point on the line joining p₁ and p₂, d is the depth of each milling layer (0.5 mm), and D is the Euclidean distance between p₁ and p₂.

To obtain the first start point, the algorithm identifies the pair of right points between whose 'a' coordinates the first layer defining point falls. Once identified, the a coordinate of this start point is set equal to the a coordinate of the layer defining point. Let the two identified right points have coordinates (r₁, a₁, s₁) and (r₂, a₂, s₂), and let the coordinates of the first layer defining point be (r₃, a₃, s₃). The equation of the line passing through these two right points is given by (1); substituting a = a₃ yields

λ = (a₃ − a₁)/m.   (5)

Substituting λ obtained from (5) into (1) yields the r and s coordinates of the first start point:

r = r₁ + λl,  s = s₁ + λn.   (6)

Thus, the coordinates of the first start point are (r, a₃, s). The process is repeated for the remaining layer defining points to compute the respective start points. The procedure is then repeated on the left points to obtain the milling end points.
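The interpolation steps above amount to solving the parametric line equation for one fixed coordinate, and stepping along a segment in milling-layer increments. A minimal NumPy sketch (the helper names are hypothetical, not the module's API):

```python
import numpy as np

def point_on_line_at(p1, p2, axis, value):
    """Solve the parametric line (r, a, s) = p1 + lam * (p2 - p1) for the
    point whose coordinate along `axis` equals `value` (e.g. axis=2
    fixes s = s3, as in the classifying-line interpolation)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    direction = p2 - p1                       # direction ratios (l, m, n)
    lam = (value - p1[axis]) / direction[axis]
    return p1 + lam * direction

def layer_defining_points(p1, p2, d=0.5):
    """Step along the segment joining corner points p1 and p2 in
    increments of the milling-layer depth d (mm), where D is the
    Euclidean distance between the corner points."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    D = np.linalg.norm(p2 - p1)
    n_layers = int(D // d)
    return [p1 + (k * d / D) * (p2 - p1) for k in range(1, n_layers + 1)]
```

Fixing one coordinate per layer and solving for the other two is what turns each layer defining point into a start (or end) point on the contour's right (or left) side.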
To compensate for the effect of erosion during image processing on the actual geometry of the lamina, two additional pairs of start and end points are added at the beginning, to maintain a constant cutting depth of 0.5 mm throughout the milling process. These additional pairs of start and end points lie along the lines joining the first two right and left points, respectively, and are computed using the same approach as above, using Corner point 2 as a layer defining point together with another point defined to the left of Corner point 2. Fig. 10 illustrates the process of computing start points from the right points.
In the next step, the curved margin path is created. The slope of each line passing through consecutive start points and consecutive end points is analyzed to categorize these points into upper and lower lamina surface points. The point where the slope changes direction is considered the transition point. All points prior to the transition point are categorized as upper lamina surface points and the rest as lower lamina surface points. The upper surface start points and end points are moved 7 mm away from the lamina surface so that the milling burr cuts through the entire length of the lamina. The position of the lower surface end points is unchanged, while the lower surface start points are moved 1.7 mm into the lamina. This complements the erosion operation in preventing the milling burr from breaching the lower surface of the lamina and in achieving a consistent post-milling lamina width of approximately 2 mm. Fig. 9 (d) & (e) show the final set of start and end points on the selected lamina. This last step in the path generation process, creating a curved margin, is critical for ensuring maximum protection of the spinal canal.
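The slope-change rule for separating upper- and lower-surface points can be sketched as follows. This is an illustrative reconstruction; the exact tie-breaking at the transition point is an assumption:

```python
import numpy as np

def split_at_transition(points):
    """Split an ordered list of 2D path points at the point where the
    slope between consecutive points changes sign. Points up to and
    including the transition point are treated as upper lamina surface
    points; the remainder as lower surface points (assumed tie-break)."""
    pts = np.asarray(points, float)
    dy = np.diff(pts[:, 1])                    # per-segment slope direction
    signs = np.sign(dy)
    flips = np.nonzero(signs[1:] != signs[:-1])[0]
    cut = flips[0] + 1 if flips.size else len(pts) - 1
    return pts[:cut + 1], pts[cut + 1:]

# Example: points rising then falling; the apex is the transition point.
upper, lower = split_at_transition([(0, 0), (1, 1), (2, 2), (3, 1), (4, 0)])
```

The upper group would then be offset 7 mm away from the lamina surface and the lower-surface start points pushed 1.7 mm inward, per the margin rules above.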

B. Coordinate System Registration
To translate the milling path defined in the CT image in 3D Slicer to locations in the robot coordinate system, the two coordinate systems must be registered. For this process, the system relies on four concave hemispherical fiducial markers, embedded in the phantom, that match the diameter of the milling cutter (3 mm). The milling tool itself is used to record the location of each of these markers: in hand-guiding mode, the user navigates the milling tool to each marker, as shown in Fig. 11. The locations are recorded by the robot controller, which uses this data, along with the fiducial marker locations identified by the user in the pre-operative image, to register the coordinate systems. This method is a substitute for intra-operative, image-based registration, in which the robot would hold a registration fixture and be positioned such that it would be visible in the intra-operative image. We used a closed-form registration algorithm based on matrix singular value decomposition, described by Arun et al. [34].
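A minimal sketch of the SVD-based closed-form registration cited as [34] (the function name and the four-point example are illustrative; the controller's actual implementation is not shown in the text):

```python
import numpy as np

def rigid_register(A, B):
    """Closed-form least-squares rigid registration (Arun et al.):
    find R, t minimizing sum ||R a_i + t - b_i||^2 over matched points.
    A: (N, 3) fiducial locations in the image frame.
    B: (N, 3) corresponding locations recorded by the robot."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```

With four non-planar fiducials (as in the phantom design), the rotation and translation are uniquely determined; non-planarity is what prevents a degenerate solution.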

C. Lamina Milling
The mid points, start points, and end points computed for the lamina during pre-operative planning are sent from 3D Slicer to the robot controller via the OpenIGTLink protocol, where they are transformed into robot coordinates. The number of milling layers, the robot milling path, and the approach angle are computed. The robot then moves the milling tool to a location 100 mm away from the first start point, along the milling cutter axis. This is done to avoid any contact with the lamina or the spinous or transverse processes during navigation. It then orients the milling tool at the computed approach angle and advances to the first start point. The motor is enabled manually, rotating the cutter at a pre-programmed speed of 24,000 rpm. The robot then advances linearly toward the first end point at a rate of 0.5 mm/s. It then moves linearly to the second start point and on to the second end point. This Z-shaped milling pattern (Fig. 12) is followed for the subsequent start and end points. Once all the layers are milled, the robot moves back to the first start point and then to a pre-programmed home position. The process is then repeated for subsequent laminae.
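The approach stand-off and the Z-shaped layer sequence can be sketched as follows (hypothetical helper names; the 100 mm stand-off and the start/end alternation follow the description above):

```python
import numpy as np

def approach_point(first_start, cutter_axis, standoff=100.0):
    """Stand-off point 100 mm back from the first start point along the
    milling cutter axis, used to avoid contact during navigation."""
    axis = np.asarray(cutter_axis, float)
    return np.asarray(first_start, float) - standoff * axis / np.linalg.norm(axis)

def z_pattern(starts, ends):
    """Interleave per-layer start and end points into the Z-shaped
    milling sequence: start1 -> end1 -> start2 -> end2 -> ..."""
    path = []
    for s, e in zip(starts, ends):
        path.extend([list(s), list(e)])
    return path
```

Because the return stroke from each end point runs to the *next* start point rather than back to its own start, the tool sweeps the layer once per pass, which is what gives the pattern its Z shape in Fig. 12.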

IV. ACCURACY EVALUATION
An L1 lumbar vertebra phantom was designed using a 3D-scanned model of an L1 synthetic vertebra (Sawbones, Pacific Research Laboratories, Vashon Island, WA). A Go!SCAN 3D (Creaform, Lévis, Canada) laser triangulation scanner was used to scan the vertebra. The scanned model was imported into SolidWorks, where a support platform, four non-planar concave hemispherical holes (each 3 mm in diameter), and four calibration blocks, as shown in Fig. 13 (a), were built around the vertebra. The hemispherical holes function as fiducial markers for registration.
A total of 14 such phantoms were printed using Siraya Tech Fast ABS-Like resin (Siraya Tech, San Gabriel, CA) on a Photocentric LC Opus (Photocentric, Peterborough, U.K.) stereolithography (SLA) 3D printer. The phantoms were completely solid. This resin was chosen because it exhibits milling behavior similar to that of bovine bone and the Sawbones phantom. The two primary material considerations were how the material was removed by the mill and the force required by the robot during the milling operation. During milling, the removed material took the form of small discrete particles. (This stands in contrast to phantoms we tested that were printed on fused deposition modeling printers, which melted and unraveled during milling.) The SLA resin was less dense than the Sawbones material (ratio = 0.78). For the early bone and Sawbones model experiments, the robot forces were approximately 2 N in the mill path direction and 2 N in the depth direction. For the 3D-printed phantom experiments, the linear milling velocity and cutting depth were increased slightly from these earlier experiments so that the forces required by the robot were the same.
Pre-operative images were captured using a 6-second C-arm CT scan and reconstructed at 0.49 mm³ isotropic voxel resolution (Artis Q, Siemens Healthineers, Erlangen, Germany). The phantoms were secured with a vice that was bolted to the table (Fig. 11).
Two sets of experiments were performed. In the first experiment, a bilateral laminectomy was performed on 10 phantoms using one representative preoperative image and one representative set of image-identified fiducial points (for registration) for each of the 20 milling operations.
To determine the optimal location within the pre-operative image for each of the four fiducial markers, a one-time iterative calibration process was carried out as follows. An initial set of image fiducial markers was chosen (using three planar views and the Markups module). A robot-to-image registration was then performed, and the robot was navigated to the four calibration blocks in succession (the centers of which were identified in the pre-operative image). Based on the location of the burr within the calibration blocks, the image-based fiducial markers were adjusted, and the process was repeated until the tip of the milling tool was centered (qualitatively, by eye) in each of the calibration blocks, as shown in Fig. 13 (b).
Once the calibration process was complete, this image and the optimized image-based fiducial markers were used to plan and execute each of the 20 milling operations. The user-defined milling planes (and the resulting images to be processed into milling paths) were unique to each of the test objects. The oblique sagittal angle varied from 24 to 30 degrees, and the translation range was 2 mm.
In the second experiment, a pre-procedure C-arm CT was performed on each of 4 phantoms. Fiducial markers were located in each of the four images (their locations within the hemispherical holes were not optimized). As in the first experiment, a unique laminectomy plane was selected by the user for each of the 8 milling operations, using the same angle and translation range as in the first experiment.
After milling (Fig. 13 (c)), each phantom from both experiments was rescanned in the same scanner, as shown in Fig. 14, for use in the accuracy measurements.
The milling error and the post-milling lamina margin (thickness) error were measured in 3D Slicer. The pre-operative and post-operative scans for a particular vertebra were imported, and their volumes were cropped so that a higher percentage of the image volume included the vertebra, the fiducial markers, and the milled lamina. The pre-operative and post-operative images were then registered using the General Registration module. This module uses the image content rather than individual fiducial markers for registration. The percentage of image volume samples used was set at 2%, and a rigid 6 degree-of-freedom registration was performed. The SimpleITK OtsuThresholdImageFilter was then used to threshold the registered image, followed by the BinaryContourImageFilter to identify the structural outline of the milled lamina. The original surgical milling plan was then uploaded for accuracy evaluation. Fig. 15 (a) illustrates the milling error evaluation procedure for the lamina on the left side of the spinous process. In the axial plane, two points were chosen along the planned path. One point was located at the upper section of the milled region (P1) and the other at the lower section (P2). The distance of each of these points from the respective milled wall was determined: L1 and R1 for P1, and L2 and R2 for P2. A total of 10 points (5 upper and 5 lower) were analyzed along the planned milling path, spaced at semi-regular intervals along the coronal plane. The error at each of these points was calculated by subtracting the distance L from the distance R and dividing the difference by 2. The milling error was subsequently determined by calculating the average magnitude of these errors.
For the post-mill lamina margin error evaluation, the planned lamina margin thickness (P1, P2, P3, P4, P5) and the residual lamina thickness (T1, T2, T3, T4, T5) were measured at five semi-regular distances along the sagittal plane (Fig. 15 (b)). The average residual lamina thickness was then subtracted from the average planned lamina thickness to obtain the post-mill lamina margin error. The evaluation procedures were then repeated for the lamina on the right side.
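Both measurements reduce to simple averages. A sketch of the arithmetic described above (the function names are hypothetical):

```python
import numpy as np

def milling_error(L, R):
    """Milling error per the scheme above: at each sampled point the
    offset from the planned path is (L - R) / 2; the reported error is
    the mean magnitude over the sampled points (10 per lamina side)."""
    L, R = np.asarray(L, float), np.asarray(R, float)
    return float(np.mean(np.abs((L - R) / 2.0)))

def margin_error(planned, residual):
    """Post-mill margin error: mean planned lamina thickness minus
    mean residual lamina thickness (5 samples each per lamina side)."""
    return float(np.mean(planned) - np.mean(residual))
```

Note that `(L - R) / 2` is a signed wall-centering offset, so taking the magnitude before averaging prevents left/right deviations from cancelling out.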

V. RESULTS
Tables I & II list the error measurements for experiment 1 (common preoperative image and image fiducial markers) and experiment 2 (individual preoperative images with individually identified fiducial markers), respectively. For experiment 1, the mean absolute milling error was 0.10 ± 0.05 mm on the left side and 0.29 ± 0.17 mm on the right side. The mean absolute margin error was 0.35 ± 0.16 mm on the left side and 1.03 ± 0.10 mm on the right side. The average residual lamina margin was 1.71 mm on the left side and 1.05 mm on the right side. For experiment 2, the mean absolute milling error was 0.16 ± 0.13 mm on the left side and 0.32 ± 0.14 mm on the right side. The mean absolute margin error was 0.19 ± 0.06 mm on the left side and 0.65 ± 0.13 mm on the right side. The average residual lamina margin was 1.96 mm on the left side and 1.35 mm on the right side. No lamina breach was observed in either experiment. The average time required for registration of the 4 fiducial markers was 1 minute, 20 seconds, and the time required to mill the lamina was in the range of 35-40 minutes.

VI. DISCUSSION
The assessment of the effectiveness of the image-guided robotic laminectomy procedure depends on the accuracy and precision of the robot in adhering to the planned trajectory, in both milling and preserving the lamina margin. Surgeons generally consider a tool navigational accuracy of ±1.0 mm [35] and a margin thickness of 1-2 mm [18] sufficient to meet the safety requirements of the laminectomy, but there is currently no universally agreed-upon quantitative standard to evaluate the accuracy of robot-assisted laminectomy. The most commonly used standard in the research literature is non-penetration of the inner lamina.
This study introduces a mathematical approach to quantify both the complete milling accuracy and the margin accuracy. The results demonstrate that the overall milling and margin errors for experiment 1 were 0.19 ± 0.16 mm and 0.69 ± 0.37 mm, respectively. Similarly, for experiment 2, the overall milling error was 0.24 ± 0.15 mm, and the overall margin error was 0.42 ± 0.26 mm. All four values fall within the acceptable standard of ±1.0 mm. The increased milling error in experiment 2 (as compared to experiment 1) is not statistically significant (two-sided t-test, p = 0.09), but one would expect this experiment to have a greater error, as the identification of the image-based registration markers was not optimized. Although the overall milling and margin errors in each experiment were small, a statistically significant difference (two-sided Student's t-test) was observed between the left and right milling and margin accuracies in each experiment (except for the left and right milling accuracy in experiment 2). This difference is likely due to a combination of a small registration bias, minor misalignment of the tool with respect to the robot axis, and/or a slight error in the measured tool length. We performed graphical overlay experiments (actual to registered), and it is possible to increase the accuracy of one side over the other, at the expense of the margin accuracy. Furthermore, we performed additional robot experiments, flipping the test object 180 degrees, to verify that the bias remained on the same side of the robot system, not the test object.
In Experiment 1, a single representative image and a single set of image fiducial points were utilized for milling all 10 phantoms. The position of these points within each hemispherical hole was adjusted to ensure that, during the post-registration navigation of the robot to the calibration blocks, the tip of the milling tool was centered within each block (evaluated qualitatively, by eye). Conversely, in Experiment 2, a pre-procedure C-arm CT scan was conducted on each of the 4 phantoms, and the fiducial points were positioned approximately at the center of the bottom of the hemispherical holes for each phantom image (post-registration navigation of the robot to the calibration blocks was not performed in this experiment). Comparing the results shows that the former approach yielded better milling accuracy, while the latter approach yielded better margin accuracy. From these results we conclude that the hemispherical fiducial markers used in this study are sub-optimal for registration. In future studies, the intention is to use spherical fiducial markers with easily identifiable centers. It is interesting that the overall precision of the robot was such that these small biases were detectable.
It is beneficial to contrast the findings of this study with those of prior research. The systems proposed by Wang et al. [15], Fan et al. [18], and Dai et al. [22] each used some form of non-image-based feedback to detect the beginning of the margin and stop milling. While all of these studies were able to achieve a margin of approximately 1 mm (1.07 mm to 1.34 mm), the milling paths were straight lines, there was no pre-set margin thickness, and there was no measure of overall milling accuracy. Sun et al. [27] presented an image-based milling methodology. In addition to defining the margin thickness pre-operatively, the gray-scale values were used to modulate the linear milling velocity. Using this approach, an average residual lamina margin of 1.19 mm and an overall margin accuracy of 0.19 ± 0.06 mm were achieved. Again, there was no measure of overall milling accuracy. Recently, Li et al. [29] reported both milling and margin accuracy for an experiment using a 6-DOF-based system. This system is based on pre-operative planning, with a force sensor added as a safety measure. In 80 cutting operations in cadavers, they reported a mean milling error of 0.70 mm. The measurement compared the planned to the actual path (via a post-operative image) at two points in one axial image. In contrast, our method uses 10 points spread over the entire laminectomy plane. Margin accuracy for that experiment was a binary classification of whether the mill breached the lamina, which occurred in 19% of the laminae milled.
In terms of milling accuracy, the KUKA-based system described here performed slightly better than that of Li et al. [29] (0.24 mm [Exp 2] vs 0.70 mm), although they used cadavers and we used phantoms. Their margins were less accurate, breaching in 19% of the cases, vs 0% for our system. In comparing our margin accuracy to that of Sun et al. [27], their results were superior to ours (0.19 mm vs 0.42 mm [Exp 2]).
Many prior studies have used feedback from a sensor, without imaging, to determine when to stop milling (and preserve the margin). Most rely on detecting the section of cortical bone at the bottom of the lamina. However, some laminae may not have a distinct cancellous and inner cortical bone structure, rendering these techniques potentially open to error. Some lamina cortical layers may be thicker, making manual removal by the surgeon much more difficult. Finally, these techniques appear to preserve only a straight section of margin, potentially compromising safety.
The objective of this study was to develop a fully automated and standardized process for performing laminectomy. The image-based laminectomy planning approach discussed in this study is based on basic image processing techniques and mathematical algorithms and takes into account the overall geometry of the lamina, without relying on its three-layer structure or any form of intra-procedure feedback. The ends of the margins are curved to maximize the safety of the milling process. Once the plan is transferred to the robot and registration is completed, the robot automatically moves to the target location and starts milling. This not only eliminates the need for manual laminectomy but also reduces the need for surgeons to assume challenging and potentially harmful body positions that can lead to fatigue and loss of accuracy during the procedure. Although a laminectomy was used as the surgical paradigm, this process can be easily adapted to other osteotomies that are common in spinal surgery, including facetectomy. This process is also amenable to adaptation for minimally invasive procedures. Robotic osteotomy and surgical planning using the patient's anatomy would ensure that the goals of the osteotomy were achieved without removing excess structure, both reducing surgical time and limiting the complications of over-dissection. The automation of osteotomy demonstrated here would benefit surgeons by increasing accuracy, improving safety, and decreasing operative time. Providing surgeons with the benefits of automation is a welcome development in the field of surgery [36], [37], [38], [39].
This study has several noteworthy limitations. First, it employed 3D-printed phantoms of the same vertebra type (L1), which may not adequately represent the variations and complexities observed in real patients with pathological changes in their vertebrae. While early experiments included bovine femur, future experiments will need to be performed on human bone prior to clinical use. The robot's motion in the study was conservatively programmed with a low feed rate of 0.5 mm/sec. As a result, milling the entire lamina took approximately 35-40 minutes. However, further program optimization, such as increasing the robot force, rotating the burr at a higher rate, or employing the robot's internal force/torque sensors to measure the force during the procedure and dynamically adjust the feed rate of the milling tool in real time, has the potential to reduce the milling time to less than 5 minutes.
While we have validated the performance of the robot system itself, there are two important components that must be added prior to clinical use. The first is a tracking system that is able to account for patient movement in real time. This could be a separate optical tracking system, as employed by several commercial robotic spine surgery companies; a passive robot (mechanical position sensor), attached to both the patient and the surgical robot, that can measure patient movement; or real-time intra-operative imaging, such as a newly emerging surgical tomography system [nView Medical, Inc., Salt Lake City, Utah]. Additional intelligence would also need to be built into the milling algorithm, for example to stop milling temporarily and assume a flexible pose (impedance mode for the KUKA) in the event of a large, abrupt patient movement.
The second addition is extending the registration paradigm to work with intra-operative imaging, such as an O-arm or C-arm CT system. Prior to recording the 3D image, an imageable registration marker attached to the robot would be placed in the image's field-of-view such that its markers could be identified in the image (in a similar manner to the identification of the model's fiducial markers in this study). While the user would need to navigate the robot to this position, it would obviate the need to hand-guide the robot to multiple fiducial markers. Once these components are integrated and validated, the next step will be a cadaver study. A system of this type would be evaluated first in an open surgical field, with vertebrae completely exposed. Additional monitoring and planning would be required to move this to a minimally invasive surgical procedure, where instrument access is via tubes.

VII. CONCLUSION
A KUKA-based, image-guided, automated robotic osteotomy system was presented in this study. The results show that its accuracy compared very favorably to currently accepted clinical standards for laminectomy. Additional errors will likely be introduced as the system evolves into clinical use. As robotic milling geometries become more complex, characterization of overall milling accuracy will become increasingly important. Further development and testing are underway.

Fig. 3. KUKA LBR-based, image-guided robotic spine surgery system with a milling tool powered by an external power supply (in red) as the end effector.

Fig. 5. Workflow block diagrams: (a) workflow diagram for the surgical planning workstation; (b) workflow diagram for the robot.

Fig. 6. Scripted module user interface in 3D Slicer: (a) ROI selection unit, (b) image processing & path generation unit. Lamina selection process using the laminectomy plane (yellow): (c) axial view of the selected lamina region, (d) coronal view, (e) volume-rendered image of the vertebra with overlaid laminectomy plane, and (f) sagittal view showing the cross-section of the selected lamina.

Fig. 7. Lamina slice extraction process: (a-c) show the effect of segmenting the lamina on the original image. The segmentation axes and the lamina plane do not align (a & b), causing a striped pattern (green) on the extracted lamina (c). (d-f) show segmentation using the rotated cropped image. The cropped image is overlaid on the original image to display the difference between the two approaches. A smooth segmented lamina can be seen in (f).
Thus, the coordinates of the first interpolated point on the classifying line are (r, a, s). This process is repeated for each border point. Once all the interpolated points are defined, the r coordinate of each border point is compared with that of its corresponding interpolated point. For the lamina on the left side of the spinous process, if the r coordinate of the border point is greater than the r coordinate of the interpolated point, the border point is marked as a left point; otherwise, it is marked as a right point. The criterion is reversed if the selected lamina is on the right side of the spinous process. In both cases (left and right side), the top point is considered a left point and the bottom point a right point. The right and left points are then sorted in descending order of their a coordinate as the primary sort key, with s as the secondary sort key (Fig. 9(c)), for path computation. c) Path computation: In this final stage, the milling path is defined by linearly interpolating between each pair of right points to obtain the milling start points and between each pair of left points to obtain the milling end points. This defines a layer-by-layer milling approach. The distance between milling layers (cutting depth) is first set to 0.5 mm. A set of equally spaced layer-defining points along the line joining Corner point 1 and Corner point 2 is then computed using (4),
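The classification and sorting steps above can be sketched as follows. The `line_r_at` helper is a hypothetical stand-in for the classifying-line interpolation described in the text, the sample coordinates are invented, and the top/bottom point convention and the s sort direction are assumptions rather than details taken from the study.

```python
def classify_and_sort(border_points, line_r_at, left_lamina=True):
    """Split border points into left (milling end) and right (milling start)
    points, then sort each group for path computation.

    border_points: list of (r, a, s) tuples along the lamina border.
    line_r_at: function (a, s) -> r, the r coordinate of the classifying
    line at that location (hypothetical stand-in for the interpolation).
    For the lamina on the left of the spinous process, a border point with
    a larger r than the classifying line is a left point; the test is
    reversed for the right-side lamina. (The top/bottom point convention
    from the text is omitted from this sketch.)
    """
    left, right = [], []
    for r, a, s in border_points:
        line_r = line_r_at(a, s)
        is_left = (r > line_r) if left_lamina else (r < line_r)
        (left if is_left else right).append((r, a, s))
    # Descending a as the primary sort key; s as the secondary key
    # (descending s is an assumption here).
    key = lambda p: (-p[1], -p[2])
    return sorted(left, key=key), sorted(right, key=key)

# Hypothetical border points against a straight classifying line at r = 5.
pts = [(6.0, 2.0, 1.0), (4.0, 2.0, 1.0), (6.5, 1.0, 1.0), (3.5, 1.0, 1.0)]
left, right = classify_and_sort(pts, lambda a, s: 5.0)
print(left)
print(right)
```

Pairing the sorted right points (start) with the sorted left points (end) then yields one milling pass per layer, which is the layer-by-layer approach described in the path computation stage.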

Fig. 9. Pixel localization: (a) sorted border points, (b) top, bottom, mid, and corner points in the (r, a, s) coordinate system. Pixel characterization and path computation: (c) border pixels characterized as either right (yellow) or left (green), (d) computed start (purple) and end (cyan) points, and (e) final set of start and end points superimposed on the selected lamina. The start and end points in (d) & (e) include the additional start and end points (as shown in Fig. 10).

Fig. 10. Detailed illustration of computing the start points using the right points.

Fig. 11. The process of hand-guiding the robot to a fiducial marker. At the request of the user, the robot program records the position of the robot end effector (located at a fiducial marker (red)) for use in the image-to-robot coordinate system registration process.

Fig. 13. (a) Pre-milling vertebra phantom, (b) iterative calibration block used in experiment 1 to optimally identify fiducial markers (the milling tool tip is centered in the calibration block), and (c) post-milling vertebra phantom.

Fig. 14. C-arm CT image of the vertebra phantom post laminectomy: (a) volume-rendered image showing the preoperatively planned (green arrow) and postoperatively measured (orange arrow) milling regions, (b & c) axial and coronal views of the postoperative vertebra phantom, and (d) sagittal view showing the residual lamina.

Fig. 15. Post-milling accuracy evaluations: (a) milling accuracy evaluation procedure for the lamina in the axial view (2 of the 10 total points in the laminectomy plane), and (b) post-mill lamina thickness measurement in the sagittal view.

TABLE I. EXPERIMENT 1: MILLING AND MARGIN ACCURACY RESULTS

TABLE II. EXPERIMENT 2: MILLING AND MARGIN ACCURACY RESULTS