Anatomical Joint Measurement With Application to Medical Robotics

Robotic-assisted orthopaedic procedures demand accurate spatial joint measurements. Tracking of human joint motion is challenging in many applications, such as sports motion analysis. In orthopaedic surgery these challenges are even more pronounced, as small errors may cause iatrogenic damage to patients, highlighting the need for robust and precise joint and instrument tracking methods. In this study, we present a novel kinematic modelling approach to track any anatomical point on the femur and/or tibia by exploiting optical tracking measurements combined with a priori computed tomography information. The framework supports simultaneous tracking of anatomical positions, from which we calculate the pose of the leg (joint angles and translations of both the hip and knee joints) and of each of the surgical instruments. Experimental validation on cadaveric data shows that our method is capable of measuring these anatomical regions with sub-millimetre accuracy, with a maximum joint angle uncertainty of ±0.47°. This study is a fundamental step in robotic orthopaedic research and can serve as a ground truth for future research, such as automating leg manipulation in orthopaedic procedures.


I. INTRODUCTION
Over the last two decades, robot-assisted procedures have become an adopted standard in many surgical theatres. Robotic platforms such as the Mako system [1], with a typical setup for knee replacement surgery, are deployed for a range of orthopaedic procedures. However, the same level of robotic integration is missing for minimally invasive surgeries (MIS), which account for a large portion of iatrogenic damage [2], especially when considering inexperienced surgeons.
For an arthroscopy, surgeons physically position and restrict parts of the patient's leg in the coronal, sagittal or transverse planes to allow surgical equipment to reach specific positions inside the body, as shown in Figure 1. In this procedure, millimetre accuracy is required for measurement of the internal joint geometry, such as the size of the instrument gap [2]. Jaiprakash et al. have shown that surgeons regularly overestimate this gap, resulting in unintended damage. As an example, 49.5% cartilage damage is estimated for one in ten arthroscopies, and 7.5% damage across all surgeries. This poses a significant opportunity for robotic-assisted arthroscopy systems to reduce these unintended surgical injuries, considering that approximately four million surgeries are performed in this category each year. Automating leg manipulation is a major step towards developing a robotic system that reduces the workload on surgeons. It provides benefits in accurately positioning the leg to ensure the internal joint geometry (e.g. the instrument gap) is suitable for surgical instruments to pass through without damaging the cartilage.
To control the nine Degrees of Freedom (DoF) [3, p. 245] of the combined hip [4] and knee motion [5] during surgery, it is necessary to know the pose of each joint in real-time.
FIGURE 1. A demonstration of an experienced surgeon performing a knee arthroscopy, requiring exceptional multi-tasking and dexterity. The surgeon must manipulate the patient's leg with his body and one hand in order to control the internal knee joint geometry, arthroscope and instruments [2], all the while performing the surgery with the other hand.
Robotic arm motion is usually defined by kinematic models, such as the screw or Denavit-Hartenberg (DH) parameters. The advantage of a kinematic model of the human leg is the ability to set leg angles accurately for both MIS and traditional leg surgeries, without drilling optical markers (optical reflectors) into the patient's leg as is done in state-of-the-art procedures. To design and verify such a model, it is necessary to develop a surgically aligned ground-truthing method for measuring anatomical positions inside the leg, and the pose of the leg.
In this study, we investigate the important aspect of creating a high-fidelity bench-marking technique for motion analysis of the human knee and hip joints for diagnostic, research and surgical purposes. We present a novel method using inverse kinematic principles to semi-directly measure the pose of the leg through a 3D motion capture system, combined with Computed Tomography (CT) data. These data are used to accurately reconstruct the pose of the femur and tibia, including the nine DoF rotations and translations in the knee and hip joints. Additionally, the uncertainty in the kinematic modelling is characterised by propagating the measurement errors. The proposed method is demonstrated on three cadaver experiments.

A. PROBLEM DESCRIPTION
Although there are many examples of 3D motion tracking, ranging from sports science research [6], [7] to cinematography [8], a method to estimate high-fidelity leg motion for medical orthopaedic research and development is lacking in the academic literature, as well as in the open-source community. For the small spaces in the knee and hip joints, measurement accuracy is essential to guide instruments through the inside of these joints. Kinematic models such as that of Joukov et al. reduce the degrees of freedom (DoF) of the knee joint from six to only one [9], but for orthopaedic surgeries the measurements for all DoF need to be taken into account. Therefore, this research endeavours to provide a bench-marking tool for high-fidelity human leg motion covering all nine DoF of both the hip and knee joints. The methods developed require consideration of the anatomical nuances of each individual's leg. In particular, our method provides a way to compute the inverse kinematic parameters of an individual's leg through measurement of anatomical points, which is crucial to the research of next-generation robotic leg manipulators for surgeries such as knee arthroscopy. To overcome the complex surgical theatre environment, we propose a custom rigid body layout for the OptiTrack motion capture system as shown in Figure 2, where rigid bodies (RB's) are mounted on the femur, tibia and the surgical instrument. A robust camera configuration is used to capture the surgical volume around the patient effectively. Optical markers are placed on the rigid bodies as shown, and anatomical points are defined on the endpoints of the mechanical axes. Point B is on the hip ball joint centre, C between the femoral condyles, D on the tibial plateau, E on the tibia end at the ankle joint, with point F at the tip of the arthroscope.

B. ASSUMPTIONS
For this research, it is assumed that: 1) The ankle joint is locked as we use the foot to manipulate the leg for knee and hip surgery. 2) During extreme manipulation of the leg, the femur, tibia or surgical instruments might bend slightly, which is not considered as part of this study.

C. RELATED WORK
Each of the joint variables in the leg can be manipulated or locked to control parts of the leg to enable specific surgical procedures. In knee arthroscopy, the surgeon manoeuvres the patient's foot in order to create space for surgical instruments such as the arthroscope to reach desired locations inside the joint, highlighting the importance of leg manipulation and joint motion analysis. Of the twenty-one degrees of freedom (DoF) in the human leg, this study will consider nine: the three DoF of the hip ball joint and the six DoF of the knee joint. The twelve DoF across the ankle joint complex (AJC) [10] will be locked during experiments. The hip is a ball-and-socket joint [4], and Apkarian et al. noted that the hip joint leads through the femoral head onto the femur with an offset (the femoral neck), defining the anatomical femoral axis [11]. This offset changes the rotational properties to extend the motion capabilities of the hip's kinematic range. However, this study will use the mechanical axes of both the femur and tibia to analyse the joint parameters.
The knee joint is the largest joint in the human body, with six DoF: three rotations and three translations [12]. For surgical applications, most of these variables are manipulated to gain access to the inner knee [12], [13]. Grood and Suntay developed a Cartesian joint coordinate system (JCS) and applied it to the knee joint [14]. This model is still recommended by the International Society of Biomechanics and is used in the mathematical formulation of the hip and knee kinematic model. A typical surgery is performed with the patient lying on their back, and for this study we choose the global axis as y-up, with z aligned along the body from toe to head, as shown in Figure 2. Dabirrahmani and Hogg addressed the limitations of the Grood and Suntay model to provide the correct hyper-flexion and hyper-extension angles during extreme joint movement [15].
FIGURE 2. The human leg with mechanical axes, RB's, arthroscope and points on the femur and tibia as used in this study. Reference positions are denoted by blue for the femur, orange for the tibia, green for the RB's and yellow for surgical instruments. The mechanical axis vectors V_f and V_t are represented by the solid black lines for the femur and tibia, respectively. RB1 is mounted on the femur, RB2 on the tibia and RB3 on the arthroscope.
Charlton et al. investigated the repeatability of an optimised lower body model as a measure of rigidity [16]. In their study, they constrained the coordinate systems to ensure minimal singularities [16]. Mena et al. analysed and synthesised the human leg swing motion and developed a mathematical model of the leg [17]. Arnold et al. developed a model of the lower limb for analysis of human movement, which quantifies the muscle architecture and geometric representations of the bone structures [18]. However, all these studies, as with most others, used a reduced-DoF model for the knee and ankle. For robotic knee and hip surgery, all six degrees of freedom in the knee need to be taken into consideration, where the translations are a few millimetres, especially in the posterior/anterior direction. The experiments in this study align with future robotic leg manipulation, where the ankle is locked and the foot is used to manipulate the leg.
Kinematic transformations are used in many applications, especially to describe the pose of robot arms, as documented in Peter Corke's book on Robotics, Vision and Control [19]. Using the well-known joint coordinate system and the leg joint motion of the hip and knee [14], the kinematic transformation principles [19], and the motion kinematics as defined by Jazar in his book on Theory of Applied Robotics [20], we derive the theory suited to leg surgery.
Colyer reviewed the evolution of vision-based motion capture systems and found that there is a need in various industries to accurately capture human motion unobtrusively [21]. They found that marker-less computer vision systems are currently not accurate enough for motion analysis [21]. This is very relevant in the surgical environment, where errors have a large impact on a patient's future motion and quality of life.
Lu et al. proposed an optimisation method for optical tracking of markers mounted on the skin to determine the positions and orientations of multi-link musculoskeletal models from marker coordinates [22]. Their optimised model compensates for the movement of the markers on the skin. Charlton et al. used skin markers in their study, resulting in low marker accuracy during motion [16]. Carse et al. [23] determined the marker tracking accuracy of an optical 3D motion analysis system, such as the OptiTrack system used during this study. They fastened the rigid bodies (RB's) with elastic straps and found the standard deviation of all vector magnitudes to be in the order of one to four millimetres. However, this study requires a level of accuracy not suited to skin or strap-on optical markers. Markers in this study are mounted using two surgical pins per rigid body, as shown in Figure 3 and as used on the current Mako Rio system [1], to ensure rigidity.
Maletsky et al. show the accuracy of motion between two rigid bodies for biological experiments (translation and rotation) to be smaller than 0.1 (mm or deg) for an operational volume of less than 4 m, using a similar OptiTrack optical motion capture system to the one used in this study [24]. However, they do not address a specific rigid body layout, or the marker and volume configurations required for real-time tracking during surgery. A key consideration is marker visibility, and in this study the optical volume is set up in a configuration such that at least three markers are visible at any point in time [25]. Nagymate et al. introduced a novel micro-triangulation-based validation and calibration method for motion capture systems, supporting the volume setup and calibration used in this study [26]. There are several other optical tracking systems, such as the one used by Gaspar et al., who introduced an infrared-optical tracking system designed for large-scale immersive Virtual Environments or Augmented Reality [27]. The system can track markers arranged in a 3D structure in real-time to recover all 6 DoF. Yang et al. proposed the design of an optical system to track surgical tools, accurate to 0.009mm; however, as with the Mako Rio system, their solution uses only two cameras, which is not suitable for real-time tracking with continuous leg motion [28]. A dense optical volume is required to ensure continuous coverage during surgery in a real surgical volume.
Eichelberger et al. performed an analysis of the accuracy in optical motion capture using trueness and uncertainty of marker distances for human kinematics [29]. The optical volume setup is essential, and they showed that the accuracy of the system is highly influenced by the number of cameras and the movement conditions [29]. Both these observations are key considerations during robotic surgery and a motivation in this study to customise the RB's and optical volumes.
Computed Tomography (CT) is used pre-operatively in many orthopaedic surgeries (such as Mako) to manually register points inside the joints. Visentini-Scarzanella et al. [30] used CT depth information and endoscopic imaging in a deep learning framework for localisation in bronchoscopy and achieved an accuracy of 1.5mm. However, for real-time tracking of the leg, CT imaging needs to be registered with optical measurement data to achieve sub-millimetre accuracy of anatomical positions inside the leg. Kim et al. showed that with CT scans, distances inside the body can be measured with an accuracy of 0.3mm [31]. In this study, CT is used to pre-operatively measure vectors from a marker on an RB, from which points in the leg are calculated during the surgical experiments.
Finding the centre of the hip ball joint (HJC) is essential to accurately determine the mechanical axis of the femur. Kainz et al. provided a review of 34 articles on hip joint centre (HJC) estimation in human motion [32]. However, due to the surgical accuracy required to track both the leg parameters and the surgical instrument positions, we use a combination of optical rigid bodies, CT scan data and optical reference markers, together with a kinematic transformation model, to measure key positions in the leg and on the instruments to sub-millimetre accuracy.
In measuring the propagation of uncertainty in a computer vision sensor system, Brandner et al. suggested that ''The uncertainty related to a measurement is at least as important as the measurement itself'' [33]. With the small spaces in joints, measurement errors can be significant and can have an impact on the accuracy of measuring the leg joint parameters. El-Gohary and McNames determined 6 DoF of the shoulder and elbow using IMUs, with an average error of 3° [34]; however, the small spaces in the knee joint require accuracies of less than 1° to steer instruments accurately without damaging the joints. Bredemann et al. estimated the uncertainty of using multiple CT measurements during surgical planning tasks to range from 0.228mm to 0.315mm [35]. In this study, we measure multiple points on the femur and tibia and set up vectors between these to calculate the leg parameters. Apart from CT measurement errors, we further take into account the optical measurement errors. Strydom et al. showed the uncertainty in monocular measurement errors due to optical tracking and image segmentation [36]. A greater insight into the uncertainty of measurements is required when combining CT scan and optical measurement data to calculate anatomical positions inside the leg.
From the literature, it is clear that significant research has been done with optical tracking, CT measurements, kinematics, and medical processes to manually perform leg surgeries. However, a higher level of accuracy and a detailed understanding of measurement integration and uncertainty estimation are required to support better outcomes for patients and surgical staff. In this study we develop and integrate optical tracking volumes and measurements with a priori CT scan data and kinematic transformation theory, to determine points in the leg from which joint parameters such as knee angles can be calculated. To determine the accuracy of the results, the uncertainty of measuring the leg parameters is calculated to determine the fitness of the data for joint manipulation.

II. OPTICAL TRACKING
In contrast to many robot automation applications, surgery requires sub-millimetre accuracy in the small joint cavities. In modern orthopaedic surgeries, optical tracking is extensively used; however, with the latest systems such as the Mako Rio [1], the optical system may impede the surgeon's movement, as shown in Figure 3. From observations during live surgeries, limitations on staff movement and on the optical visibility of the system increase the surgical time by an average of 17% to ensure tracking.
Surgical robots typically use a single vision system mounted on a tripod next to the patient (see Figure 3), resulting in interference with the surgical staff and equipment; during real surgeries, the surgeon using the Mako system may need to move away from the patient to re-acquire sufficient optical tracking performance. Existing rigid body marker solutions are large and designed for minimal leg movement, such as for knee replacement surgery. Currently, minimally invasive surgeries lack any tracking capabilities, and as a result no leg pose information is available, leaving the surgeon to rely solely on manual manipulation of the leg in order to access internal joint structures.

A. RIGID BODY CONFIGURATION
A rigid body (RB) includes a mounting plate shaped for a body part or surgical instrument, with markers (optical tracking balls) mounted on it in a specific pattern, as shown in Figure 4d. This enables the precise measurement of anatomical points in the leg, from which the inverse kinematics can be computed to determine the joint angles and translations.
For automation and to minimise interference in the surgical area, the cadaver's leg is moved (by the surgeon or robotically) from the heel position, as shown in Figure 4a and as currently performed by surgeons in live surgeries.
The femoral anatomical axis follows the femoral head and femur structures, while the femoral mechanical axis (FMA) is the axis that links the hip ball joint centre to the centre of the condyles at the knee. The FMA determines the hip-to-knee joint motion, even though tracking devices are mounted on the femur. Marker data from rigid bodies mounted on the femur and tibia, together with CT scans of the leg, are used to determine positions relative to the anatomy of the leg. To mount the optical markers on the leg, surgical pins are drilled into the bones and rigid bodies are mounted on these pins, as shown in Figure 4c. To measure the small spaces inside the knee joint with sub-millimetre accuracy, the RB's need to be rigid relative to the femur or tibia.
We considered the following criteria for optical marker layout (rigid body configuration): 1) Maximise marker visibility during an arthroscopy; 2) Standard markers from OptiTrack need to fit on the RB's; 3) Minimise the potential for damage to RB's due to surgery; 4) Fit to existing surgical pins and instruments; and 5) Create an orthonormal reference frame.

B. NOVEL FEMUR AND TIBIA RIGID BODIES
During a cadaver arthroscopy, a rigid body from OptiTrack was mounted on an arthroscope and failed physically within a few minutes, as shown in Figure 4b.
From experimental trials on artificial and cadaver legs and joints, various rigid bodies were developed for both the femur and tibia (Figure 5). The RB's are mounted in specific positions, as shown in Figures 5b and 5d, and tracked using an optical tracking system (such as OptiTrack). The position of the markers relative to a point inside the leg is measured using Computed Tomography (CT) scans of the leg. The components of the RB's, shown in Figure 6, include: 1) surgical pins that are drilled into the femur or tibia to mount the RB on; 2) a mounting plate that mounts on the surgical pins; 3) a base plate that is fastened onto the mounting plate; and 4) markers that screw into the base plate at specific positions. Different RB configurations on the tibia or femur allowed placement of markers so as to minimise the RB's size and not obstruct surgical manipulation. The RB marker plate (Figure 5a) for the femur or tibia has limited (5mm) adjustment when mounted, to allow alignment on the leg and facilitate measuring of the vectors in CT scans. Mounting a rigid body with markers on the femur or tibia for this study required two surgical pins drilled into each of the bones (Figure 5b). A mounting plate is attached to the pins, as shown on the tibia in Figure 6, with the marker plate fitted onto the base plate, as shown in Figure 6 on the femur. The base plate adjusts relative to the mounting plate in the up-down and left-right directions. Once the markers are installed on the plate, it forms a rigid body, as shown in Figure 5, that can be tracked in real-time to support measurements of leg motion. The centre of each optical marker on the RB's is 7.5 mm above the base plate, in threaded holes at specific positions on the plate.

C. RIGID BODY FOR SURGICAL INSTRUMENTS
Tracking of the surgical camera/instruments is significant for autonomous navigation, monocular measurements or 3D reconstruction of the joint cavity. For leg motion, feedback can be received from surgical instruments such as the arthroscope to determine the appropriate movement at that point in time. As an example, tracking of the arthroscope can provide feedback on the size of the knee gap to navigate surgical instruments. The prototype design of the arthroscope marker layout (Figures 5c and 5d) is based on extensive experimental design with standard OptiTrack RB's during cadaver surgeries. Improvements to the rigidity, size and placement of the marker layout assisted with frame setup and continuous tracking, and were validated through the 'Motive' data during cadaver experiments.
FIGURE 7. Optical volume setup with ten cameras (marked in yellow) mounted on a frame around the theatre bed during a cadaver experiment. Two cameras were mounted on each side and above the cadaver leg to ensure continuous tracking.
The rigid body on the arthroscope is based on a square that tightly fits onto the arthroscopic frame (Figure 5c). The complete assembly was tested during a cadaver experiment, as shown in Figure 5d. The markers are positioned such that they allow motion of the instrument without obstructing the surgeon.

D. OPERATIONAL OPTICAL TRACKING VOLUME
The optical volume setup, in conjunction with the rigid bodies, determines the tracking accuracy and allows more markers to be tracked during surgical manoeuvres. To effectively reconstruct the RB layout (if some markers are occluded), at least three markers (reflective balls) are required to be continuously visible from three cameras, irrespective of staff and equipment placement. Marker and RB layout can increase visibility; however, the combination of effective camera placement and an increased number of cameras achieves a higher level of visibility and accuracy. One setup that was successfully tested mounted cameras above the theatre bed, at all four corners and at the centre of the bed, as shown in Figure 7. Together with the unique RB designs, tracking was never lost during any leg motion.

III. MATHEMATICAL ANALYSIS
In order to estimate the pose of any chosen point on or inside the leg, it is necessary to set up an orthonormal basis for each rigid body mounted on the leg. The notation for the transformations, vectors, positions, poses, axes and joint angles used in this document is adapted from Peter Corke's 'Robotics, Vision & Control' [19] and summarised in Table 1. All angles for the hip and knee are in radians; the hip angles are relative to the global body frame, while the knee angles are relative to the femur frame. Using the OptiTrack global frame (W) and the CT images (Figure 8), it is possible to calculate the local transformation between the RB's and points on the leg.
FIGURE 8. The CT-measured vector is taken from a marker (Figure 9) on RB1 to the centre of the right femoral condyle.
The z-direction in Figure 8 points into the femur from the CT scan viewpoint (towards the body). The marker centre (not visible in Figure 8) is 7.5 mm above the rigid body, as shown in Figure 9, and the gap seen in the RB is the centre of the marker mounting thread. This supports the retrieval of the pose of any position on the leg with respect to the global frame (W). In this section, the theory is formulated to infer the leg joint angles through inverse kinematic modelling.

A. MARKER COORDINATE FRAMES
Instrument and leg pose analysis requires the setup of frames -- that is, the standard Euclidean basis vectors (î, ĵ, k̂) in the Cartesian coordinate system -- from marker information measured during optical tracking, using the rigid body layouts detailed in Section II-A. The analysis uses a right-hand coordinate system to align with the optical tracking system configuration, as shown on marker H (Figure 9).
The generalised homogeneous transformation matrix of the marker H coordinate frame relative to the origin (or the pose of frame H relative to frame W -- see Figure 10) is:

^W T_H = [ x  y  z  t ; 0  0  0  1 ]   (1)

where x, y and z (the first three columns) are the local frame axes on the rigid body at point H, expressed in the unit vector axes î, ĵ and k̂ of the global frame (W), and t is the translation. For a frame on marker H (RB1 in Figure 10), the axes of the transformation matrix (T) can be inferred directly from the rigid body using the appropriate marker configuration by creating the local frame, as shown in Figure 9, where the z-axis, z_1 = (z_î, z_ĵ, z_k̂), is a unit vector (see Figure 10) along the quasi-mechanical axis (e.g., from H to G in Figure 9). Using the homogeneous matrix (1), we are able to set up frames on any of the markers of any rigid body.
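As a minimal sketch of this frame construction, the snippet below builds a homogeneous transform of a marker frame from three tracked marker positions, with the z-axis taken along the quasi-mechanical direction (e.g., H to G) as described above. The marker names and the choice of the third marker are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def marker_frame(h, g, p):
    """Build a homogeneous transform (pose of the marker frame in W)
    from three marker positions on a rigid body (hypothetical names):
    h: origin marker; g: marker defining the quasi-mechanical z-axis
    (H to G); p: any third, non-collinear marker fixing the plane."""
    h, g, p = (np.asarray(v, dtype=float) for v in (h, g, p))
    z = (g - h) / np.linalg.norm(g - h)   # unit z-axis along H -> G
    y = np.cross(z, p - h)                # perpendicular to the marker plane
    y /= np.linalg.norm(y)
    x = np.cross(y, z)                    # completes the right-handed basis
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, h
    return T
```

The first three columns are the orthonormal local axes expressed in the global frame, and the fourth column places the frame at the origin marker, matching the column layout of (1).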

B. LOCAL TRANSFORMATIONS
A CT scan of the leg is essential to determine the state (i.e., position and rotation) of any point on the leg with respect to one of the rigid body frames, as shown in Figure 8. Figure 11 shows a CT scan of the femur and the relationship between the mechanical and anatomical axes of rotation of the femur relative to the hip (see Table 1 for details). Using dynamic (updated in real time) frames on the leg, we can determine any position on the leg or arthroscope at any point in time, relative to a specific frame. For instance, the point c (or the vector from W to C) on the leg relative to W is denoted in homogeneous form as:

^W c = ^W T_H · ^H c

which is:

^W c = ^W R_H · ^H c + ^W t_H

C. TRANSFORMATIONS BETWEEN LEGS AND INSTRUMENTS COORDINATE FRAMES
The transformation between rigid bodies can be determined from the relationship between frames on the RB's or the leg, as shown in Figure 10. For the transformation from frame M to frame H:

^H T_M = (^W T_H)^-1 · ^W T_M

Any point on the tibia in frame M can thus be expressed relative to frame H on the femur.
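This frame-to-frame composition can be sketched as follows, assuming both RB poses are already available as 4x4 homogeneous matrices in the world frame W. The function names are illustrative, not from the paper.

```python
import numpy as np

def relative_transform(T_WH, T_WM):
    """Pose of frame M (e.g. the tibia RB) relative to frame H
    (e.g. the femur RB), given both poses in the world frame W."""
    return np.linalg.inv(T_WH) @ T_WM

def express_point(T_WH, T_WM, p_M):
    """Express a point known in frame M (e.g. a CT-measured tibia
    point) in frame H on the femur."""
    p = np.append(np.asarray(p_M, dtype=float), 1.0)  # homogeneous form
    return (relative_transform(T_WH, T_WM) @ p)[:3]
```

Since the RB poses update in real time from the optical tracker, the same composition yields a dynamic, per-frame transform between the bones.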

D. ARTHROSCOPE TIP POSITION
Frame C is used to express all points in the same local frame. To determine in real-time the arthroscope tip position in the femur C frame (C_f), we observe from Figure 10 that:

^(C_f) f = (^W T_(C_f))^-1 · ^W f

where ^W f is the tracked position of the arthroscope tip (point F) in the world frame.

E. MOTION ANALYSIS
Using the transformations described above, we define vectors between points on the bones, from which knee and hip joint rotations and translations are analysed. Knee angles are relative to frame C on the femur and hip angles relative to the body-aligned global frame W.

1) KNEE ANGLES
The mechanical axis vector along the tibia is defined from the centre of the condyles to the ankle centre. However, the state of the tibia is relative to the femur and must therefore be measured relative to a frame on the femoral condyle centre.
FIGURE 12. The knee angles (α, β and γ) are relative to frame C on the femoral condyle, while the hip angles are relative to the world frame at point B. λ is the total angle between the femur and tibia vectors.
The rotational matrix on the condyle is constructed using measurements from the CT scan data, as shown in Figure 11. As illustrated in Figure 11, we determine the centre of the ball joint (B) and the connection centre (K) of the femoral anatomical and femoral head axes. This vector forms the x' axis and we obtain:

x' = (k − b) / ‖k − b‖

We define the zx' plane by points B, K and C in Figure 11, with y perpendicular to this plane. The rotational frame (^W R_B) on the FMA is the combination of the x, y and z vectors at point B. For rotations or translations of the tibia relative to the femur, the transformation frame at point C on the femoral mechanical axis is:

^W T_C = [ ^W R_B  ^w c_B ; 0  1 ]

where ^w c_B is point C in W via frame B (see Figure 10). The vector from the centre of frame T_C to point E describes the motion of the tibial mechanical axis, which is:

v_t = e − c

The femur mechanical axis is defined as the link from the hip joint centre to the centre of the condyles on the knee, as shown in Figure 11. The femur vector that describes the hip rotations relative to the world frame is:

v_f = c − b

In kinematics, the angles of the hip and knee joints are extensively used and are essential for future robotic applications. We use the rigid bodies and transformations outlined above to calculate the joint angles. In particular, the knee varus (β), flexion (α) and total (λ) angles are measured as shown in Figure 12. We use the XYZ rotational matrix as detailed in appendix A (1) of Jazar [20] to obtain the knee angles (α, β and γ) between vectors v_f and v_t. With the limited angle ranges of both the hip and knee joints, the roll, pitch and yaw (XYZ) order has no gimbal lock throughout the leg motion range. Using the Rodrigues rotation formula to rotate v_t to coincide with v_f, the rotational matrix between the femur and tibia is [37]:

R = I + sin(λ)[u]× + (1 − cos(λ))[u]×²,  u = (v_t × v_f) / ‖v_t × v_f‖   (9)

and using this rotational matrix in the XYZ order, the knee angles are:

α = atan2(−R₂₃, R₃₃),  β = asin(R₁₃),  γ = atan2(−R₁₂, R₁₁)
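The Rodrigues construction and the XYZ angle extraction described above can be sketched as below, assuming the XYZ factorisation R = Rx(α)·Ry(β)·Rz(γ). This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def rotation_between(v_t, v_f):
    """Rodrigues rotation matrix taking the (tibia) vector v_t onto
    the (femur) vector v_f; lambda is the total angle between them."""
    a = np.asarray(v_t, dtype=float); a /= np.linalg.norm(a)
    b = np.asarray(v_f, dtype=float); b /= np.linalg.norm(b)
    u = np.cross(a, b)
    s, c = np.linalg.norm(u), np.dot(a, b)   # sin(lambda), cos(lambda)
    if s < 1e-12:                            # vectors already aligned
        return np.eye(3)
    u /= s
    K = np.array([[0.0, -u[2], u[1]],        # skew-symmetric [u]x
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def xyz_angles(R):
    """Extract (alpha, beta, gamma) assuming R = Rx(a) @ Ry(b) @ Rz(g)."""
    beta = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    alpha = np.arctan2(-R[1, 2], R[2, 2])
    gamma = np.arctan2(-R[0, 1], R[0, 0])
    return alpha, beta, gamma
```

With the limited range of leg joint angles, the asin/atan2 extraction stays away from the β = ±90° singularity of the XYZ order.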

2) KNEE TRANSLATIONS
During minimally invasive surgery, the knee gap size between the femur and tibia is required for accessing inner knee areas with surgical instruments. Translations in the joints can be measured by setting up vectors at the condyle joint points C and D, that is, using point D in frame C (see Section III-D). ^C d provides the x (medial/lateral), y (posterior/anterior) and z (knee gap) translations of the knee joint resulting from rotation and translation during motion.

3) HIP ANGLES
Hip angles and translations are measured relative to the sagittal (flexion), coronal (varus) and transverse planes. Using the XYZ hip rotational matrix ^W R_C as detailed in appendix A (1) of Jazar [20], we obtain the hip angles in the same manner as the knee angles [20]:

α_h = atan2(−R₂₃, R₃₃),  β_h = asin(R₁₃),  γ_h = atan2(−R₁₂, R₁₁),  with R = ^W R_C

IV. UNCERTAINTY
This study endeavours to measure the leg motion to a high degree of accuracy due to the small spaces in the knee and hip joints. As with any measurement, it is equally important to understand the measurement accuracy and its associated uncertainty. The selection of the anatomical points affects the accuracy of the measurements. On the femur, the centre of the hip ball joint is used, and at the knee, the intersection of the femoral mechanical axis and the point between the condyles (at the very tip of the bone between the condyles) is used. On the tibia at the knee, the point where the mechanical axis enters the tibia (when the leg is straight) is selected, and at the ankle, the point on the tibial mechanical axis at the end of the tibia is used, just as it starts to cross over into the AJC. There are known errors in the positional measurements, which directly affect the pose accuracy of the hip and knee joints. Strydom et al. [36] showed that the CT and optical tracking errors are 0.3mm and 0.18mm, respectively. These errors are approximated by a spherical volume around each joint, as shown in the vector diagram in Figure 13 and on a leg in Figure 16. We consider the rotational error in vector v_t due to the maximum translation error (τ) and the rotational errors due to measurement (φ), as detailed in Figure 16. The same principle can be applied to the vector v_f. The true angle between the two vectors (see Figure 13) can then fall within a symmetrical range ω around the measured angle.
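As an illustration of how such a rotational uncertainty bound might be computed, the sketch below uses a conservative small-angle bound with endpoint error spheres of radius τ. The specific formula is an assumption for illustration, not the paper's exact derivation.

```python
import numpy as np

def angle_uncertainty(v, tau, phi=0.0):
    """Worst-case rotational error (radians) of an axis vector v whose
    two endpoints each lie within an error sphere of radius tau, plus
    any additional rotational measurement error phi.
    NOTE: a conservative illustrative bound, not the paper's formula."""
    # Both endpoints can err by tau in opposing directions across v.
    return 2.0 * np.arctan2(tau, np.linalg.norm(v)) + phi

# e.g. a 400 mm tibia vector with combined 0.3 mm CT + 0.18 mm optical error
tau = 0.3 + 0.18
omega = angle_uncertainty(np.array([0.0, 0.0, 400.0]), tau)
```

Because the error spheres are small relative to the bone lengths, the resulting angular range ω is a fraction of a degree, consistent with the sub-degree joint angle uncertainty reported in this study.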

V. EXPERIMENTS

A. METHOD
Ethical approvals were gained for three cadaver experiments, using four legs, as detailed in Table 2, conducted at the Queensland University of Technology's Medical Engineering Research Facility (MERF) located on the Prince Charles Hospital campus in Brisbane, Australia. The twelve DoF across the AJC were locked during the cadaver experiments using a moon-boot with a robot attachment mounted on it. The cadaver leg was manually moved for three minutes by four theatre staff, through as many positions of the knee and hip joints as possible. For each of the three experiments, pins were drilled into the cadaver leg to mount the rigid bodies on the cadaver femur, tibia, arthroscope and robot boot. Marker rigid bodies were mounted in specific patterns and the leg was CT-scanned with the rigid bodies installed. Ten Flex 13 optical tracking cameras, with one in video recording mode, were mounted on a 3m x 3m x 2.5m overhead structure (Figure 7) and calibrated with a standardised OptiTrack wand. Each camera has a 1.3 Megapixel resolution, a framerate of 30-120 fps and a 5.5mm F#1.8 lens with a horizontal field of view (FOV) of 56° and a vertical FOV of 46°. The Flex 13 has twenty-eight 850nm IR LEDs for tracking the reflective 14mm markers. Data were recorded in real-time using the OptiTrack Motive software application. The camera in video mode was time-synced with the recorded tracking data.
Post experiment, the data were converted to a CSV file for detailed post-analysis using Matlab R2019a. Each experiment tested different rigid body layouts and mounting options, optical volume arrangements, CT scan positions and options, theatre staff influence on measurements, and mathematical alignment with the experiment setup.
For the first experiment, the robustness and accuracy of existing rigid bodies from OptiTrack were tested.
For the second experiment, customised rigid bodies were designed and tested for the femur, tibia and arthroscope. The designed RBs and the CT scan measurements are shown in Figure 15.
For the final experiment, a refined process, CT scans and RB designs were used.

B. UNCERTAINTY VALIDATION
Using the homogeneous transformation matrices and the cadaver experimental data, the analytical uncertainty calculated in Section IV can be verified. Figure 16 shows the uncertainty error clouds for both the OptiTrack system and the CT scan measurement uncertainty around the vectors used to measure the angles. For an arbitrary leg vector v_n (femur or tibia) calculated from the optical tracking and CT scan positions, we can determine the vector range that includes the measurement uncertainty:

v'_n = R_e v_n + v_e    (femur or tibia vector)    (19)

with v_n the femur or tibia vector, v'_n the vector including errors, v_e the translational error (CT and optical tracking) and R_e the rotational error matrix, composed of rotations by σ, the flexion error angle of the knee, κ, the varus error angle, and ρ, the internal rotation error. Note that a shortened notation is used: sin(σ) is shortened to s_σ and cos(σ) to c_σ. From (19) we can calculate the tibia or femur vector that includes the uncertainty.
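A minimal sketch of (19) in Python, assuming R_e is composed as Rx(σ) @ Ry(κ) @ Rz(ρ) and splitting the 0.337mm translational error evenly across the axes (both are assumptions for illustration, not the authors' code):

```python
import numpy as np

def rot_xyz(s, k, r):
    """Rotational error matrix R_e = Rx(sigma) @ Ry(kappa) @ Rz(rho),
    angles in radians (composition order is an assumption)."""
    cs, ss = np.cos(s), np.sin(s)
    ck, sk = np.cos(k), np.sin(k)
    cr, sr = np.cos(r), np.sin(r)
    Rx = np.array([[1, 0, 0], [0, cs, -ss], [0, ss, cs]])
    Ry = np.array([[ck, 0, sk], [0, 1, 0], [-sk, 0, ck]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

err = np.radians(0.18)                  # optical tracking error per axis
R_e = rot_xyz(err, err, err)
v_e = np.full(3, 0.337 / np.sqrt(3))    # 0.337 mm total translational error
v_f = np.array([0.0, 0.0, 379.0])       # femur vector, mm (cadaver 3 length)
v_f_err = R_e @ v_f + v_e               # (19): vector including errors

cosang = v_f @ v_f_err / (np.linalg.norm(v_f) * np.linalg.norm(v_f_err))
angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
print(angle)                            # angular deviation from the errors, deg
```

The deviation stays well under half a degree for a femur-length vector, consistent with the uncertainty bound derived in Section IV.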

C. EXPERIMENTAL RESULTS
OptiTrack results show that there was continuous visibility of the markers during a full leg motion experiment of four minutes. Sufficient markers were tracked on each RB for the OptiTrack system to fully recover the position of each marker.
To determine the overall precision of the measurements, we determine the position of a particular point using the homogeneous transform (1) from several marker positions. Table 3 first shows point E relative to the world frame (e_M), computed via the local translation from frame M to point E, using only the local translation measured from the CT scan. Secondly, point E is calculated via a translation through frame C and frame M to point E (e_CM). Leg angles as shown in Figure 17 were calculated from the marker positions measured during the cadaver experiments, as detailed in Section III-E. The leg was moved through a range of angles, both manually and with the leg manipulator robot.
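The two routes to point E can be sketched as a consistency check: the direct route through marker frame M and the route via frame C then M must agree (the transforms and the CT offset below are invented example values, not the cadaver data):

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

T_WM = make_T(np.eye(3), [200.0, 100.0, 50.0])   # world -> marker frame M
T_WC = make_T(np.eye(3), [150.0, 100.0, 60.0])   # world -> condyle frame C
T_CM = np.linalg.inv(T_WC) @ T_WM                # frame C -> frame M
e_local = np.array([10.0, -5.0, 2.0, 1.0])       # E in frame M (from CT scan)

e_M = T_WM @ e_local                             # direct route through M
e_CM = T_WC @ (T_CM @ e_local)                   # route via frame C, then M
print(np.allclose(e_M, e_CM))                    # both routes agree
```

In the experiments the two routes differ by the measurement error rather than agreeing exactly, which is what Table 3 quantifies.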
The femur and tibia lengths for cadaver 3 are ||v_f|| = 379mm and ||v_t|| = 372mm, and the OptiTrack and CT system accuracies were shown by Strydom et al. [36] to be φ = 0.18°, with a maximum translation error of τ = 0.037 + 0.3 = 0.337mm. From (16), ε_f ≈ 0.23° and ε_t ≈ 0.23°. We can determine the uncertainty in the angle measurements using the analytical solution presented in Section IV: substituting the above into (18), the uncertainty range is ω = ±0.47° (range: 0.93°), the maximum uncertainty for all angles.
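One plausible reading of (16) and (18) can be checked numerically with the quoted cadaver 3 values; the function below is a reconstruction from those values, not the authors' code:

```python
import numpy as np

phi = 0.18    # optical rotational error, degrees
tau = 0.337   # maximum translational error, mm

def eps(length_mm):
    """(16): rotational error of a bone vector of given length, in degrees."""
    return np.degrees(np.arctan(tau / length_mm)) + phi

# (18): symmetric uncertainty range for the angle between femur and tibia
omega = eps(379.0) + eps(372.0)
print(round(omega, 3))   # 0.463, consistent with the quoted +/-0.47 deg bound
```

Because the bone vectors are long (roughly 375mm) relative to the sub-millimetre positional error, the arctan term contributes only about 0.05° per vector; the optical rotational error dominates.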
To practically verify this uncertainty with the cadaver data, we redefine the errors above to calculate (18):
• The OptiTrack error: σ = κ = ρ = ±0.18°, which is used to calculate the largest error region for each angle;
• The translational error: v_e = 0.337mm.
The uncertainty was validated using a Matlab software simulation of the cadaver data. The mechanical axes were set up in Matlab using the anatomical points in the hip, knee and ankle. An optical tracking error of ±0.18° was used with the total translation error of ±0.337mm. To obtain the uncertainty range, the angular error was stepped from −0.18° to +0.18°, with a step size of 0.18/2 to limit the iterations. The translational error was stepped in the same way from −0.337mm to +0.337mm. For visibility, Figure 18 shows only a few seconds of the knee flexion angle experiment with the uncertainty range plotted around it. Figure 19 plots the flexion angle over the complete experiment relative to the uncertainty. It shows, over a flexion range of 0° to 120°, a maximum uncertainty of 0.625°.
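The stepped error grid described above can be sketched as follows (in Python rather than Matlab, and with an illustrative 45° flexion pose in place of the cadaver data):

```python
import numpy as np
from itertools import product

def rot_x(a):
    """Rotation about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

v_t = np.array([0.0, 0.0, 372.0])                    # tibia vector, mm
flexion = 45.0                                       # nominal knee flexion, deg
v_f = rot_x(np.radians(flexion)) @ np.array([0.0, 0.0, 379.0])  # femur vector

# Step the angular error in 0.18/2 degree increments and the translational
# error in 0.337/2 mm increments, as in the validation procedure.
steps_ang = np.radians(np.arange(-0.18, 0.181, 0.09))
steps_tr = np.arange(-0.337, 0.338, 0.337 / 2.0)

worst = 0.0
for s, t in product(steps_ang, steps_tr):
    v_err = rot_x(s) @ v_t + np.array([t, 0.0, 0.0])  # perturbed tibia vector
    cosang = v_f @ v_err / (np.linalg.norm(v_f) * np.linalg.norm(v_err))
    dev = abs(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) - flexion)
    worst = max(worst, dev)

print(worst)   # worst-case flexion deviation across the grid, in degrees
```

With a single perturbed vector the worst case is dominated by the ±0.18° angular step; perturbing both bone vectors, as in the full validation, roughly doubles the bound towards the quoted range.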

VI. DISCUSSION
Providing autonomy for leg positioning and surgical instrument navigation in future robotic-assisted orthopaedic surgery requires accurate spatial information - specifically, the pose of the leg and instruments. One of the key outcomes of this study is the integration of optical tracking, a priori computed tomography information and the mathematical formulation with the associated measurement uncertainty. It relates and localises anatomical points inside the leg across joints, relative to each other or to surgical instruments.
Standard OptiTrack rigid bodies were initially tested and failed physically, while markers were obstructed by theatre staff and instrument movement. The level of accuracy obtained from the pose data provided by the OptiTrack system is not suitable for orthopaedic surgery, as it relies on manual initialisation of the rigid bodies with the world frame during calibration. Setting up coordinate frames on these standard RBs to manually acquire accurate pose data was difficult, as the markers are configured in a single plane and not closely aligned with the leg coordinate frames. Durable and configurable RBs that fit existing surgical pins were developed for the femur and tibia to withstand harsh joint procedures. The marker positions support setting up local frames for both the femur and tibia, updating the transformation matrices on each RB in real-time. CT scanning with the rigid bodies installed on the cadaver leg supported accurate localisation of the RBs relative to anatomical positions. For other areas of the body or different surgeries, it will be necessary to customise these RBs; however, the measurement and mathematical approach remain the same.
For optical tracking, both the volume setup and the marker configuration on the patient's leg form part of the system to ensure visibility and tracking accuracy. At least three markers (reflective balls) on an RB need to be visible to know all marker positions. It was found that for the cadaver arthroscopy, ten cameras placed above and at the sides of the volume ensured continuous optical marker tracking, irrespective of surgeon or instrument motion. For live surgeries, this may differ due to the theatre setup, staff movements and equipment used. As reported by Maletsky et al. [24], the OptiTrack accuracy was measured over 33000 samples in surgical conditions and found to be 0.03mm. This supports an accurate setup of frames to track points on the leg or instruments. The optical tracking accuracy of markers on the leg using the mathematical model is shown in Table 3, where the ankle centre point (E) is tracked across RBs, showing consistent positional information for the ankle.
The anatomical point measurement inside the leg depends on the accuracy of both the optical system and the CT scan information. With a CT scan measurement accuracy of 0.3mm, as shown by Kim et al. [31], the measurement of a point in the leg is thus largely dependent on how accurately the CT scans are analysed. As shown in Table 4, the error when crossing two local measurements is on average 0.78mm, in line with the CT scan accuracy and small relative to the sizes in the knee joint. The combination of CT and optical tracking shows that during cadaver surgery it is possible to accurately express points in a joint relative to any frame.
For knee surgery, the dynamic space in the knee joint and an arthroscope diameter of 4mm make the sub-millimetre accuracy in this study suitable for robotic leg manipulation and instrument guidance. The calculated uncertainty in Section V-C shows a maximum of ±0.47°, which is significantly better than the 5.03° ± 1.27° flexion error obtained by Joukov et al. for a one-DoF model of the knee joint [9]. In this study it was practically verified using a Matlab simulation with the motion data from the cadaver experiments and, as shown in Figures 18 and 19, is a maximum of 0.57° to 0.62°. Over the 160° flexion range of the knee, this error will have an insignificant impact on the size of the joint [2] or the instrument measurements. It is smaller than the maximum we calculated, which might change slightly if a larger number of cloud points are used. As surgical instruments reduce in size in the future, the measurement accuracy will need to improve to suit the measurement range, and more accurate optical tracking and CT equipment will be required. However, the mathematical modelling developed in this study will still be suitable for determining the joint parameters and uncertainty.
Key parameters for robotic leg manipulation include the rotations and translations of each joint, which are calculated from the combination of CT, optical tracking, measurement uncertainty and the mathematical model. This forms an integrated system for real-time anatomical measurements during surgery. Angles for each joint were calculated from the cadaver data and are shown in Figures 17a to 17d. Figures 17e and 17f show snapshots from the video analysis at times 5 and 45 seconds, which are marked with black vertical lines on each of the angle graphs. For clarity, only the first 60 seconds are shown.
Limitations:
• Inserting surgical pins in a patient's leg is currently not used in all procedures, e.g. knee arthroscopy;
• The solution in this study was purpose-built for knee arthroscopy and will need different hardware configurations if used for joints such as the hip. However, the mathematical, tracking and measurement principles can easily be adapted for the hip or even for other joints such as the elbow or shoulder;
• Angles were not independently measured, for example using a fluoroscope. However, angle accuracy is a function of the measurement errors of the anatomical points, which we use to set up the mechanical vectors. The uncertainty calculation shows that, using the maximum errors, the maximum angular errors in all the angles are within ±0.47° - providing confidence in the computed joint angles. Manual verification was attempted; however, skin movement is much larger than the errors measured, making this an impractical approach. Future work can include verification using a fluoroscope;
• The uncertainty of anatomical points and angles is calculated using the maximum error of both the CT data and the optical tracking. Future work can refine this, for example by setting up a probability distribution of the uncertainty for each measurement.

VII. CONCLUSION
The optical tracking, rigid body designs, CT measurements, measurement uncertainty and kinematic analysis presented in this study form the first integrated system for accurate spatial joint measurements suitable for research in robotic orthopaedic surgery. During three cadaver experiments, the leg was moved through surgical positions over the full motion ranges of the hip and knee joints. The system was verified across joints by comparing positional data to known marker positions to determine the measurement uncertainty. The rotations of the hip and knee joints are calculated with an accuracy relative to that of the anatomical positional data of the mechanical vectors. With a maximum uncertainty of ±0.47°, the kinematic data from this study are suitable as a reference for future research in robotic orthopaedic surgery.

VIII. APPLICATIONS TO MEDICAL ROBOTICS
This study will support future research in robotic-assisted leg surgery by providing a benchmark method to effectively evaluate new research and development in robotic joint surgery. The accuracy of the vectors' positional data ensures that calculated parameters such as the joint angles can be used as ground-truth for future research in robotic joint manipulation and surgery. This research will include kinematic models of the human leg and non-invasive alternatives to infer the pose of the human leg for routine minimally invasive arthroscopic surgeries. It will reduce patient trauma by supporting accurate leg manipulation for surgeries such as knee arthroscopy, which currently relies on surgeons estimating the spaces inside the knee from a 2D image. The solutions in this study are also suited to modelling of the joint surfaces and structures for research in localisation and navigation inside joints.