Laser Beams-Based Localization Methods for Boom-Type Roadheader Using Underground Camera Non-Uniform Blur Model

The efficiency of automatic underground tunneling depends significantly on the localization accuracy and reliability of the Boom-type roadheader. Compared with other underground equipment positioning methods, vision-based measurement has gained attention for its advantages of non-contact operation and freedom from accumulated error. However, the harsh underground environment, especially the geometric errors that machine-body vibration introduces into the underground camera model, affects the accuracy and stability of vision-based underground localization. In this paper, a laser beams-based localization method for the machine body of the Boom-type roadheader is presented, which can cope with the dense-dust, low-illumination environment with stray-light interference. Taking mining vibration into consideration, an underground camera non-uniform blur model that incorporates the two-layer glass refraction effect is established to eliminate vibration errors. The blur model explicitly reveals the change of the imaging optical path under the influence of vibration and double-layer explosion-proof glass. On this basis, underground laser beam extraction and positioning methods with good environmental adaptability are presented, and an improved 2P3L (two-points-three-lines) localization model from line correspondences is developed. Experiments were designed to verify the performance of the proposed method, and the deblurring algorithm was investigated and evaluated. The results show that the proposed method effectively restores laser beam images blurred by vibration and can meet the precision requirements of roadheader body localization for roadway construction in coal mines.


I. INTRODUCTION
The intelligent localization of coal mine roadway tunneling faces severe challenges. Accurate self-localization along the cutting path is of key importance to improving excavation efficiency in the tunneling process [1], [2]. However, long-distance accurate positioning of the Boom-type roadheader remains an unresolved problem, which makes it impossible to track the cutting trajectory accurately. The tunneling roadway is located underground, where no global positioning system (GPS) signal is available, so the conventional GPS measurement method is invalid. (The associate editor coordinating the review of this manuscript and approving it for publication was Orazio Gambino.)
The inertial navigation system (INS) is an infrastructure-independent pose estimation method. However, it suffers from accumulated drift due to double integration, which can result in seriously inaccurate positioning [3]-[5]. The Ultra-Wideband (UWB) system achieves a certain positioning accuracy in personnel positioning [6], [7]. However, it places high demands on antenna sensitivity as the distance between the UWB stations and the object increases during measurement. Laser-based methods [8]-[10] can produce highly accurate pose estimates when accurate, dense 3D point clouds are available, but they impose very high requirements on the 3D laser scanners. (This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.) Given the deficiencies of the above approaches, an infrastructure-independent sensor with high reliability and measurement accuracy is necessary for automatic localization of the Boom-type roadheader body. In recent years, vision-based pose estimation methods have received great attention because they are non-contact and free of accumulated error [11]. However, it is difficult to track scene features over long distances in the dense-dust, low-illuminance environment. We have used a multi-point infrared LED target to measure the pose of the cutting head [12], but the body pose of the Boom-type roadheader is the basis of cutting trajectory tracking. The mine laser pointer, usually used for long-distance directional heading, produces a laser beam. Considering its higher extraction accuracy and stronger anti-occlusion ability, the laser beam is expected to serve as a target for achieving greater accuracy in pose measurement. Pose estimation from line correspondences involves iterative solutions and closed-form solutions.
Iterative approaches [13]-[16] are usually time-consuming. For closed-form solutions, [17], [18] proposed an 8th-degree polynomial for the 3-D object pose estimation problem from three lines in general position, based on the perspective projection of linear ridges, which yields closed-form solutions of the object pose. [19] presented a biquadratic polynomial for the case of three non-coplanar lines. However, these three-line pose estimation approaches suffer from multiple solutions or a complex iterative solution process.
Due to the motion and vibration of the Boom-type roadheader in coal mining, the pixel points in the laser beams image continuously move across the image plane during exposure, forming a blurred laser beams image. This blurring effect increases feature location errors and degrades pose estimation accuracy. Some existing deblurring methods describe the blurry image as the convolution of an unobserved latent image with a uniform blur kernel. However, camera shake can cause spatially varying blur, often resulting in a non-uniform blur kernel across the image. Levin et al. [20] showed that spatial invariance is often violated and that spatially varying blur kernels are quite common. The blurred image can be regarded as the integration of a sharp image under a sequence of projective transformation matrices that describe the camera's shake path during exposure [21]; a hybrid camera [22] or inertial measurement sensors [23] can be used to recover the camera's path. Tai et al. [21] derived a projective motion blur model that incorporates spatially varying motion to restore the original sharp image. [24] developed a restoration method from a single blurred image, describing the camera motion with non-uniform blur kernels derived from a motion density function (MDF). [25] proposed a variational Bayes framework for removing camera shake based on a uniform blur kernel. [26] described a similar framework that approximates three-dimensional rotational camera motion with spatially varying blur, incorporated within the frameworks of [25] and [27] to achieve blind deblurring. This paper proposes a novel laser beams-based machine body localization system for the Boom-type roadheader to improve the accuracy and reliability of body positioning at the tunneling face. The main contributions of this paper are as follows: 1) A parallel laser beams-based target is specially designed to cope with the dense-dust, low-illumination environment with stray-light interference.
On this basis, underground laser beam segmentation, extraction and positioning algorithms are presented and verified to have good environmental adaptability. 2) Taking vibration into consideration, an underground camera non-uniform blur model that incorporates the two-layer glass refraction effect is established to eliminate vibration errors. The established model explicitly describes the blurring of a clear laser beam under a sequence of transformation matrices along the underground camera's path. 3) Considering that existing pose estimation algorithms from three lines involve higher-degree polynomials and complex iterative solution processes, the improved 2P3L pose estimation model determines a unique closed-form solution from line correspondences. 4) Experiments are designed to verify and evaluate the performance of the proposed underground camera blur model and localization method.

II. LASER BEAMS-BASED VISION MEASUREMENT SYSTEM FOR BOOM-TYPE ROADHEADER
The Boom-type roadheader carries out tunneling work at the tunnel face. Realizing automatic localization of the cutting head involves two aspects: positioning the body in the tunnel coordinate system and positioning the cutting head in the body coordinate system. As shown in Fig. 1a, the localization system involves two artificial targets: one is a specially designed laser beams-based target formed by double parallel laser alignment instruments; the other is the multi-point infrared LED target. The front underground camera collects the infrared target image, while the rear underground camera captures the laser beams-based target image. Cutting-head positioning in the body coordinate system is realized by the infrared LED-based vision measurement method, and body positioning by the laser beam-based vision measurement method.
Infrared LED-based cutting-head pose estimation, shown in Fig. 1b, has been illustrated in our previous work [12]. In this paper we focus on Boom-type roadheader body positioning in the tunnel coordinate system. As shown in Fig. 1b, vision-based body localization in the tunnel coordinate system is performed through a chain of rigid transformations, where M^c_0 denotes the transformation matrix between the body and camera coordinate systems, M^b_c the transformation matrix between the camera and target coordinate systems, and M^h_b the transformation matrix between the body and the tunnel roadway coordinate systems.
The rigid transformation matrices M^c_0 and M^h_b can be obtained by pre-calibration. In this paper we focus on the laser beams-based vision measurement method to obtain M^b_c, which is illustrated in Section IV. As described in our previous work [12], the underground camera is specially designed with two-layer explosion-proof glass, and an underground camera model that considers the two-layer glass refractive effect has been developed. However, the geometric errors introduced by the roadheader's motion and vibration into the geometric model of the underground camera need to be considered further in the practical situation. Hence, an underground camera non-uniform blur model incorporating the two-layer glass refraction effect is proposed in this paper and illustrated in Section III.

III. UNDERGROUND CAMERA BLUR MODELING AND DEBLURRING

A. UNDERGROUND CAMERA BLUR MODELING INCORPORATING THE REFRACTION EFFECT
Considering explosion-proof and dust-removal requirements at the tunnel face, an underground camera was specially designed as shown in Fig. 2. Because the light path changes when passing through the two glass layers, the underground camera's projective model differs significantly from the pinhole camera model. With the Boom-type roadheader's motion and vibration in coal mining, the relative pose between the underground camera and the laser beam-based target changes. The pixel points in the laser beams target image continuously move across the image plane during exposure, forming a blurred target image. The blur effect leads to laser beam extraction and positioning errors, which degrade roadheader body localization accuracy.
For the perspective camera model, the blurry image can generally be modeled as a sum of projective transformations of the sharp image over the exposure time. The projective transformations can be defined by homographies as follows, where M is the projective transformation matrix and K is the camera's intrinsic calibration matrix. The observed image g can be formulated as an integral of f after rotation and translation during exposure; that is, g can be described as the integration of all homographies H_t of f over the exposure time T. The blurred image g can be defined as follows, where H_t is the homography induced by the transformation matrix, T denotes the exposure time, and ε is the observation noise.
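The display equations referenced in this paragraph did not survive extraction. Consistent with the surrounding definitions and with the projective motion blur formulation of Tai et al. [21], they would take the standard form (a reconstruction, not the authors' exact typesetting):

```latex
% Homography induced by the camera pose M_t at time t:
H_t = K \, M_t \, K^{-1}

% Blurred image as the time-average of the homography-warped sharp image:
g = \frac{1}{T}\int_{0}^{T} f\!\left(H_t\, x\right)\,\mathrm{d}t \;+\; \varepsilon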
By discretization, g can be estimated as a weighted sum over sampled homographies. Considering the refraction of the two-layer glasses, the underground camera blur model differs significantly from the above perspective camera blur model, as shown in Fig. 3. In this section, we propose an underground camera blur model that incorporates the two-layer glass refraction. The dynamic imaging process of a blurred laser beams image is equivalent to the integration of a sharp laser beams image under a sequence of translation and rotation matrices that describe the underground camera's path.
Assume X is a 3D point of the laser beams in the target coordinate system, M is the transformation matrix between the target and camera coordinate systems, the incoming ray is x_0 = [x, y, f], M_h^c is the rotation matrix between the camera coordinate system and the virtual imaging system, F is the perspective offset, and s is an arbitrary scale factor. The relationship between a 3D point X in the object coordinate system and its image projection x in the static condition can be expressed as follows. Considering the underground camera's motion and vibration during image capture, we map each pixel in the transformed image g to one pixel in f. Suppose H_k is the transformation matrix between the camera and target coordinate systems at time t_k during exposure, describing the underground camera's motion path; there is one H_k for each time slot. Assume that at time t_k the incoming ray is x_k = [x_k, y_k, f], the perspective offset is F_k, and the arbitrary scale factor is s_k. The relationship between the 3D point X in the object coordinate system and its image projection x_k at time t_k can then be expressed as follows, and a pixel in the blurry image can be mapped under a homography H_k to a pixel in the sharp image. The proposed underground camera blur model can be defined in terms of the blurred image g, the sharp image f and the blur kernel w, where each element w_k corresponds to an underground camera homography H_k. The blurred image pixel g_i can be modeled as a weighted sum of projective transformations of the sharp image pixels f_j, which constructs a system of linear equations.
where C_k is the coefficient matrix of the bilinear interpolation corresponding to the homography H_k, ε denotes the noise at each pixel, and i, j and k index the blurred image, the sharp image and the blur kernel, respectively.
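As a numerical illustration of this linear model, the following sketch (an illustrative Python mock-up, not the authors' implementation) synthesizes a non-uniformly blurred image as a weighted sum of homography-warped copies of a sharp image; `warp_homography` plays the role of the bilinear-interpolation coefficient matrix C_k:

```python
import numpy as np

def warp_homography(f, H):
    """Inverse-warp image f by homography H with bilinear interpolation.
    (Stands in for applying one coefficient matrix C_k to the sharp image.)"""
    h, w = f.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = H @ pts.astype(float)
    sx, sy = src[0] / src[2], src[1] / src[2]
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    ax, ay = sx - x0, sy - y0
    out = np.zeros(h * w)
    # Accumulate the four bilinear neighbours, skipping out-of-bounds samples.
    for dx, dy, wgt in [(0, 0, (1 - ax) * (1 - ay)), (1, 0, ax * (1 - ay)),
                        (0, 1, (1 - ax) * ay), (1, 1, ax * ay)]:
        xi, yi = x0 + dx, y0 + dy
        valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        out[valid] += wgt[valid] * f[yi[valid], xi[valid]]
    return out.reshape(h, w)

def blur(f, homographies, weights):
    """g = sum_k w_k * (C_k f): weighted sum of warped copies of f."""
    g = np.zeros_like(f, dtype=float)
    for H, wk in zip(homographies, weights):
        g += wk * warp_homography(f, H)
    return g
```

For example, blurring a single bright pixel with equal weights on the identity and a one-pixel horizontal shift splits its energy between two locations, exactly as the weighted-sum model predicts.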

B. NON-UNIFORM DEBLURRING ALGORITHM
To tackle the deblurring problem, we apply the proposed underground camera blur model within the existing deblurring frameworks [25], [26], which are based on the variational inference approach of Miskin and MacKay [27]. Variational inference is used to find the optimal forms of q(f_j), q(w_k) and q(β_σ) by minimizing the following cost function, which is equivalent to minimizing the Kullback-Leibler (KL) divergence between the true posterior and the variational approximation.
With the bilinear form of Eq. (8), described as follows, the optimal forms of q(f_j), q(w_k) and q(β_σ) can be derived using the calculus of variations, and the parameters can then be updated with the following equations [26].
The parameter optimization results are evaluated using the following equation. Once the optimal q(·) is obtained, the expectation of q(w) can be regarded as the blur kernel. The Richardson-Lucy algorithm [21], [28] can then be applied to estimate the sharp image f, which is computed iteratively using the following equation.
where ⊙ denotes the element-wise product, g is the observed blurry image, and the matrix A is calculated from the estimated underground camera non-uniform blur kernel. The proposed underground camera non-uniform blur modelling and deblurring algorithm can serve as a pre-processing step for Boom-type roadheader body pose estimation in the practical situation.
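A minimal sketch of the multiplicative Richardson-Lucy update is shown below. For self-containment it uses a small spatially uniform kernel standing in for the non-uniform operator A (the paper's actual operator is assembled from the estimated homography weights); all names are illustrative:

```python
import numpy as np

def conv2(img, k):
    """'same' 2-D correlation with zero padding (small kernels only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(g, k, iters=50, eps=1e-12):
    """f <- f * A^T( g / (A f) ), the classic multiplicative RL update.
    A f is convolution with k; A^T is correlation with k."""
    f = np.full_like(g, max(g.mean(), eps), dtype=float)
    kf = k[::-1, ::-1]                   # flipped kernel
    for _ in range(iters):
        blurred = conv2(f, kf)           # forward model A f
        ratio = g / (blurred + eps)      # element-wise ratio
        f = f * conv2(ratio, k)          # adjoint A^T applied to the ratio
    return f
```

Deconvolving a box-blurred point source with this loop re-concentrates the energy at the original pixel, which is the behaviour the restoration step above relies on.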

IV. POSE ESTIMATION
This section presents the pose estimation method, including laser beam segmentation, extraction and positioning, as well as the improved pose estimation algorithm from three-line correspondences.

A. LASER BEAMS SEGMENTATION AND POSITIONING
The vision-based Boom-type roadheader localization system adopts double parallel laser beams as the feature target, realized by two parallel-mounted underground laser pointers, and can cope with the coal mine environment. The underground laser pointer adopts a red laser with a wavelength of about 660 nm. Because of influencing factors such as stray light in the downhole working environment, laser line and laser spot location accuracy is seriously disturbed by stray light spots from mine lamps. Moreover, because the wavelength of the laser beam lies within the wavelength range of the mine lamps, stray light sources cannot be filtered out directly with a narrow-band filter. Therefore, to prevent stray-light interference with laser beam positioning, accurate and reliable downhole laser beam image segmentation and feature positioning methods are proposed.
The process of laser beam segmentation, extraction and feature positioning is shown in Fig. 4 and includes three parts: laser beam segmentation and extraction, laser spot segmentation and extraction, and laser beam positioning.

1) LASER BEAM SEGMENTATION AND EXTRACTION
According to the corresponding ranges of the HSV space components H, S and V for the red laser beams, the laser beams can be distinguished from stray lights against a complex background. The range of H is set to 0∼10, the range of S to 40∼250, and the range of V to 40∼250. As shown in Fig. 4a and Fig. 4b, the mine lamps and other miscellaneous lights are filtered out, while the brighter red laser beam pixels are allowed to pass through. Clustering the red laser beams in this color space effectively extracts the laser beam region against a background of miscellaneous lights. Assuming the i-th image point is expressed as I(x_i, y_i), the collection L of laser beam pixels can be defined as follows.
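The HSV thresholding step can be sketched as follows (assuming OpenCV-style HSV scaling, H in 0–180 and S, V in 0–255, which matches the 0∼10 hue range quoted above; names are illustrative):

```python
import numpy as np

# Thresholds from the text (OpenCV-style HSV: H in 0..180, S and V in 0..255).
H_RANGE, S_RANGE, V_RANGE = (0, 10), (40, 250), (40, 250)

def laser_mask(hsv):
    """Boolean mask of candidate red-laser pixels; hsv is an (h, w, 3) uint8 array."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h >= H_RANGE[0]) & (h <= H_RANGE[1]) &
            (s >= S_RANGE[0]) & (s <= S_RANGE[1]) &
            (v >= V_RANGE[0]) & (v <= V_RANGE[1]))
```

The mask corresponds to the pixel collection L: saturated, bright pixels with a red hue pass, while green-hued stray lights and the dark background are rejected.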

2) LASER SPOT SEGMENTATION AND EXTRACTION
With the laser beam segmentation results shown in Fig. 4b, dynamic gray thresholds can be obtained by calculating the maximum gray values of the extracted laser beams in the current frame; the dynamic gray threshold is the gray value of the pixel points labeled by red circles in Fig. 4b. With the obtained adaptive gray thresholds, the binary image of the current frame is calculated with the Otsu algorithm. Assume the obtained center coordinates of the connected regions are C_k and the corresponding connected regions produced by threshold segmentation are S_k, where C_k is the k-th element of the collection C.
As shown in Fig. 4c, the connected regions S_k contain not only the laser spots but also stray light spots. A color-space constraint is defined here to filter out the stray light spots. Let S′_k denote the collection of neighboring pixels in the 20 × 20 window around the center coordinates C_k, and let S″_k be the collection of elements belonging to both S′_k and L; N_k is the number of elements in S″_k.
The color-space constraint N_k > m is used for laser spot segmentation, where the constant m is usually set to 5. The connected regions satisfying this constraint are defined as R_k, and C_R denotes the center coordinates corresponding to the connected regions R_k.
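The color-space constraint can be sketched as follows (illustrative Python; `centers` and `mask` are hypothetical inputs corresponding to the collection C and the laser pixel set L):

```python
import numpy as np

def filter_spots(centers, mask, win=20, m=5):
    """Keep candidate spot centers whose win x win neighbourhood contains
    more than m pixels already classified as laser-beam pixels (set L)."""
    h, w = mask.shape
    r = win // 2
    kept = []
    for (cx, cy) in centers:
        x0, x1 = max(cx - r, 0), min(cx + r, w)
        y0, y1 = max(cy - r, 0), min(cy + r, h)
        n_k = int(mask[y0:y1, x0:x1].sum())   # N_k in the text
        if n_k > m:
            kept.append((cx, cy))
    return kept
```

A stray lamp spot far from any laser-beam pixels yields N_k = 0 and is discarded, while a true laser spot sitting on the beam easily exceeds m.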
Euclidean distance constraints are used to further confirm the laser spots, defined by calculating the pixel distances between the extracted connected regions R_k along the y-axis and x-axis, respectively. The extracted laser spots are shown in Fig. 4d. In the constraints, n_1, n_2 and n_3 are set to 10, 30 and 100, respectively, and i, j denote the i-th and j-th connected regions.

3) LASER BEAMS POSITIONING
For laser spots, the classical spot location algorithms are the weighted centroid method, circle fitting and curved surface fitting [29]-[31]; the Gaussian fitting algorithm [31] is adopted here for laser spot center positioning. For laser beams, the typical straight-line detection algorithm is the Hough transform [32], [33]. However, Hough-transform-based line detection does not consider line width and is inaccurate for detecting the center lines of laser beams that are discontinuous and of varying width. An underground laser beam center-line location algorithm is therefore proposed as follows.
Step 1: A boundary line of the double laser beams, passing through the center point (x_0, y_0) of the two laser spots, is first defined to distinguish the left and right laser beams, as shown in Fig. 4e. Establish the accumulator A = {ρ, θ} and use the Hough transform to obtain the parameter-space collection of ρ and θ for the laser beams, where θ is the angle from the abscissa axis to the vector perpendicular to the laser line, and ρ is the distance from the origin to the laser line. The angle collection for the extracted laser beams is defined as θ = {θ_1, θ_2, θ_3, · · · , θ_k}. Define a vector normal to the boundary line; the angle θ_0 from this vector to the abscissa axis is given below. The equation of the boundary line of the double laser beams can then be calculated from the center point (x_0, y_0) of the two laser spots and the obtained angle θ_0.
Step 2: Combined with the above boundary-line constraint, form the cluster C_left = {C_Lrow1, C_Lrow2, · · · , C_Lrowk, · · · , C_LrowN} for the laser beam pixels on the left of the boundary and the cluster C_right = {C_Rrow1, C_Rrow2, · · · , C_Rrowk, · · · , C_RrowN} for those on the right, where C_Lrowk and C_Rrowk are the laser beam pixels on the left and right of the boundary in each row, respectively. Then form the clusters C_left_max = {C_Lrow1^max, C_Lrow2^max, · · · , C_Lrowk^max, · · · , C_LrowN^max} and C_right_max = {C_Rrow1^max, C_Rrow2^max, · · · , C_Rrowk^max, · · · , C_RrowN^max}, where C_left_max and C_right_max collect, for each row, the laser beam pixel with the maximum gray value on the left and right of the boundary, respectively.
Step 3: With the obtained C_left_max and C_right_max, the linear equations of the laser beams on the left and right of the boundary line can be fitted by constrained least squares. Assume the distances from each pixel point in the clusters C_left_max and C_right_max to the fitted straight lines are given below, where a_l, b_l and c_l are the center-line parameters of the left laser beam, a_r, b_r and c_r are those of the right laser beam, and i, j denote the i-th and j-th pixel points in C_left_max and C_right_max, respectively. The linear equations of the center lines of the double laser beams can then be fitted by constrained least squares. The line-fitting error objective function is defined by Eq. (24); the optimal solution is obtained when the line-fitting error is smaller than the maximum allowable error.
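The per-row maximum-gray selection of Step 2 and the least-squares fit of Step 3 can be sketched as follows (a simplified single-beam version, without the boundary-line split or the constrained-fitting refinement; names are illustrative):

```python
import numpy as np

def center_line(gray, row_range):
    """Fit a laser center line from per-row maximum-gray pixels
    (the clusters C_left_max / C_right_max in the text) by least squares.
    Returns (slope, intercept) of the model x = slope * y + intercept."""
    ys, xs = [], []
    for y in row_range:
        row = gray[y]
        if row.max() > 0:                 # skip rows with no laser response
            ys.append(y)
            xs.append(int(row.argmax())) # brightest column in this row
    slope, intercept = np.polyfit(ys, xs, 1)
    return slope, intercept
```

Parameterizing the line as x(y) keeps the fit well-conditioned for near-vertical laser beams, where a y(x) model would have a near-infinite slope.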
The proposed methods are effective and reliable for laser beam segmentation, extraction and positioning against the complex background of the coal mine.

B. IMPROVED 2P3L POSE ESTIMATION MODEL BASED ON VANISHING POINT
Inspired by Dhome [17] and Horaud [19], an improved pose estimation algorithm is developed to determine the closed-form solution from three lines. Unlike the higher-degree polynomial and complex iterative solution processes, the improved algorithm obtains the unique closed-form solution by introducing a vanishing point. The perspective projection model of the laser beams-based target is shown in Fig. 5. The target consists of the two parallel center lines L_1, L_3 of the laser beams and the line L_2 formed by the laser spot centers P_1, P_2. The lines L_1, L_2 and L_3 are coplanar, with L_1 and L_3 each perpendicular to L_2; P_1 and P_2 are the intersection points of L_1 and L_3 with L_2. Suppose the camera coordinate system is O_c X_c Y_c Z_c and the laser beams-based target coordinate system is O_d X_d Y_d Z_d, with origin O_d at the midpoint of P_1 and P_2. Let V_i denote the direction vector of the space line L_i (i = 1, 2, 3). The X_d axis is along the direction of L_2, the Z_d axis is along the direction of L_3, and the Y_d axis completes the right-handed coordinate system. Assume l_i is the projection of the space line L_i on the image plane, and p_1, p_2 are the projections of the space points P_1, P_2. Let the projection line equation be a_i x + b_i y + c_i = 0 (i = 1, 2, 3). The direction vector of the image line l_i can be defined as v_i(−b_i, a_i, 0), and a point on the projection line l_i is defined as t_i(x_i, y_i, f) in the camera coordinate system. Define the constraint plane S_i formed by the laser beam space line L_i, the image projection line l_i and the camera optical center O_c.
Therefore, the normal vector of the constraint plane S_i can be expressed as follows. According to the constraint that the space line L_1 is parallel to L_3, the space line L_1 is perpendicular to the normal vector N_1 of projection plane S_1, and the space line L_3 is perpendicular to the normal vector N_3 of projection plane S_3. Thus, the direction vectors of the double laser lines can be expressed as follows, and the direction vector of the space line L_2 can be expressed accordingly. Let p_1 = (x_1, y_1, f) and p_2 = (x_2, y_2, f); the direction vector of the space line L_2 can also be expressed in terms of these image points, and Eq. (28) can be rewritten correspondingly. Suppose k_i (i = 1, 2) is the ratio between the distance from the camera optical center O_c to the space point P_i and the distance from O_c to the image point p_i.
The space point P_i in the camera coordinate system can then be expressed as P_i(k_i x_i, k_i y_i, k_i f) (i = 1, 2). As shown in Fig. 5, p_v(x_v, y_v, f) is the intersection point of the image lines l_1 and l_3, and the line l_4 connects the vanishing point p_v and the camera optical center O_c. By linear perspective, l_4 is parallel to the lines L_1 and L_3; therefore l_4 is perpendicular to L_2, and we obtain the geometric relation below, which Eq. (30) rewrites. Because the segment P_1P_2 lies along the space line L_2, Eq. (32) follows from the vector dot product. Suppose the prior constraint that the distance between the two parallel laser beams L_1 and L_3 is a. Then, combining Eqs. (31) and (32), k_1 and k_2 can be obtained, and the three-dimensional coordinates of P_1 and P_2 in the camera coordinate system can be calculated from them.
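The core geometric constructions of this model, the constraint-plane normals, the common beam direction and the vanishing point, can be sketched numerically as follows (illustrative Python under the pinhole approximation, ignoring the glass refraction handled in Section III; the line coefficients and focal length are assumed inputs):

```python
import numpy as np

def plane_normal(line, f):
    """Unit normal of the constraint plane S_i through the optical center
    and the image line a*x + b*y + c = 0 on the image plane z = f."""
    a, b, c = line
    n = np.array([a, b, c / f])
    return n / np.linalg.norm(n)

def beam_direction(l1, l3, f):
    """Common direction V of the parallel space lines L1 and L3:
    perpendicular to both constraint-plane normals N1 and N3."""
    v = np.cross(plane_normal(l1, f), plane_normal(l3, f))
    return v / np.linalg.norm(v)

def vanishing_point(l1, l3):
    """Intersection p_v of the image lines (homogeneous cross product)."""
    p = np.cross(np.asarray(l1, float), np.asarray(l3, float))
    return p[:2] / p[2]
```

Consistent with the linear-perspective argument above, the recovered direction is parallel (up to sign) to the ray from the optical center through the vanishing point.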
Let P_3 and P_4 be space points on the straight lines L_1 and L_3, respectively, where the distance between P_3 and P_1 is a and the distance between P_4 and P_2 is a. The three-dimensional coordinates of the points P_i (i = 3, 4) are defined as P_i = (X_ci, Y_ci, Z_ci). Similarly to Eq. (32), we can obtain the dot-product relation below. Combining it with the following relation, we obtain the three-dimensional coordinates of the space points P_3 and P_4 in the camera coordinate system. With the positioning results of the laser spot centers and laser lines of the target, the proposed improved pose estimation algorithm thus obtains the 3D coordinates of the feature points in the camera coordinate system in closed form. Assume the 3D coordinates of P_i in the laser beams-based target coordinate system are P_bi(x_bi, y_bi, z_bi) and those in the camera coordinate system are P_ci(x_ci, y_ci, z_ci). The transformation between the target and camera coordinate systems can then be defined as follows. The rotation matrix R_b^c and translation vector T_b^c can be defined with Q(r) and W(r) as given below, and obtained by minimizing the error objective function defined by Eq. (45) using dual quaternions [34].

V. EXPERIMENTAL VERIFICATION AND EVALUATION
Experimental platforms were built to verify and evaluate the above theoretical work, including the environmental adaptability of the vision-based laser beam segmentation, extraction and positioning, the effectiveness of the proposed underground camera blur model and deblurring algorithm, and the performance of the proposed machine body localization method.

A. ENVIRONMENTAL ADAPTABILITY EVALUATION
A low-illumination environment with mining lamps and other stray lights in the driving working face was simulated in the laboratory. The laser beams-based target was specially designed and realized by two parallel laser alignment instruments. The camera MV_EM510C was used to capture the target images; the wavelength of the laser beams is 660 nm. The collected images and the corresponding feature extraction and location results are shown in Fig. 6, from which it can be seen that the laser spots and laser lines are well segmented and extracted by the proposed feature extraction methods of Section IV.A. Compared with the laser beam location results without stray-light interference shown in Fig. 6a, the RMS errors of the laser spot locations under stray-light interference were 0.029, 0.016, 0.033, 0.026, 0.055, 0.043, 0.052, and 0.064 pixels, respectively, and the RMS errors of the laser line locations were 0.018, 0.038, 0.079, 0.045, 0.081, 0.064, 0.095, and 0.048 pixels, respectively. The results demonstrate that the laser beam feature extraction and location by the proposed algorithm are reliable and robust in the complex environment, and that the proposed method enables the laser beams-based target to be distinguished against a complicated background with stray-light interference under low illumination. The specially designed laser beams target and the proposed methods are effective and simplify vision measurement at the tunnel working face.

B. UNDERGROUND CAMERA NON-UNIFORM BLUR MODEL EVALUATION
To verify the effectiveness of the proposed blurred laser beams restoration algorithm, the combined unit of the two-layer glasses and camera was placed on a vibration platform, as shown in Fig. 7, to simulate the vibration of underground tunneling. Images were first collected in the static condition with and without the glasses and used as ground truth for comparison. The vibration platform provided sinusoidal excitation with frequencies of 10 Hz∼30 Hz and a maximum amplitude of 3 mm. The camera with two-layer glass was used to collect laser beam-based target images, and the collected blurred images were processed with the proposed restoration algorithm. Fig. 8 compares the laser beam extraction results of the blurred images, the images deblurred by Fergus's algorithm, and the images deblurred by the proposed algorithm. The deblurred results in Fig. 8 indicate that the energy distribution of the restored laser spots is more concentrated and the restored laser beam locations are closer to the ground truth; the laser line extraction deviations and the extraction residual distributions in Fig. 8 both demonstrate that the proposed algorithm is effective. As can also be seen from Fig. 8, the RMS errors of the blurred images were 8.2670, 14.7585, 3.4315, and 11.2567 pixels, respectively; those of the images deblurred by the proposed algorithm were 0.0650, 0.7178, 0.2231, and 0.0955 pixels, respectively; and those of the images deblurred by Fergus's algorithm were 4.0464, 6.2607, 1.8511, and 5.2312 pixels, respectively. The results demonstrate that the location errors of the deblurred laser beams were reduced by both Fergus's algorithm and the proposed algorithm, and that the laser beams and spots restored by the proposed algorithm are closer to the ground truth than those restored by Fergus's.

C. POSE ESTIMATION EVALUATION
The vision-based pose estimation platform for the Boom-type roadheader is set up as shown in Fig. 9. The system mainly consists of a mobile robot, the laser beams-based target, a mine explosion-proof camera MV_EM510C, a total station, a smoke generator, and a PC. The specially designed laser beams-based target is formed by two parallel mine laser pointers. The mobile robot simulates the movement of the Boom-type roadheader in the tunnel roadway, and the smoke generator simulates the dusty environment of a coal mine. For the visual system, the vision-based base coordinate system is placed at the laser beams-based target coordinate system O0X0Y0Z0, and the measurement coordinate system is placed at the camera coordinate system OcXcYcZc. The pose transformation M0c of the camera in the laser beams-based target coordinate system is obtained directly by the vision-based method proposed in Section IV. The total station is adopted to evaluate the pose accuracy of the proposed visual system.
The base coordinate system OtXtYtZt of the total station is established, and the 3-D spatial coordinates of multiple points on the laser beams are obtained in the total station coordinate system. Suppose the 3-D coordinates of these points in the visual base coordinate system are P0i(x0i, y0i, z0i), and their coordinates in the total station coordinate system are Pti(xti, yti, zti). The transformation matrix Mt0 between the vision-based base coordinate system O0X0Y0Z0 and the total station's base coordinate system OtXtYtZt can then be calculated using dual quaternions [34]. The pose transformation M0c of the camera in the vision-based base coordinate system is transformed into the total station system by Mtc = M0c Mt0. This allows the vision-based pose estimation system and the total station system to be compared in the unified coordinate system OtXtYtZt.
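The transformation Mt0 can also be estimated in closed form from the corresponding point sets P0i and Pti. The sketch below uses the SVD-based (Kabsch) least-squares solution as an alternative to the dual-quaternion method of [34]:

```python
import numpy as np

def rigid_transform(P0, Pt):
    """Least-squares rigid transform (R, t) with Pt ~= R @ P0 + t.

    SVD (Kabsch) solution, an alternative to the dual-quaternion
    registration cited in the paper.
    P0, Pt: (N, 3) arrays of corresponding 3-D points, N >= 3.
    """
    P0, Pt = np.asarray(P0, float), np.asarray(Pt, float)
    c0, ct = P0.mean(axis=0), Pt.mean(axis=0)    # centroids
    H = (P0 - c0).T @ (Pt - ct)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Enforce a proper rotation (det(R) = +1, no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ct - R @ c0
    return R, t
```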
As shown in Fig. 9, prisms were fixed on the mobile robot body to create the measurement coordinate frame of the total station system. The pose transformation matrix Mpt of the prisms in the base coordinate system OtXtYtZt is given directly and taken as the ground truth. Thus, the external parameters between the camera coordinate system and the prism coordinate system can be calibrated by Mcp = M0c Mt0 Mpt. On this basis, the pose estimation accuracy can be evaluated in the unified coordinate system OtXtYtZt. The mobile robot was controlled to move in the simulated tunnel roadway, and 100 repeated tests at several different distances were conducted to obtain the vision-based pose estimates. The 95% confidence error ellipsoids shown in Fig. 10 present the position uncertainty at different distances between the camera and the laser beams-based target along the simulated tunnel roadway. The position error directions were calculated from the obtained localization results in the ellipsoid wireframe. The target position has the largest uncertainty along the direction nearly parallel to the z-axis, shown as the red solid line; similar results appear in each error ellipsoid at the different distances. Moreover, the results in Table 1 show the position error evaluation at different distances between the camera and the target: the largest uncertainty of the target position in the camera system increases with the target's distance from the camera, while the two smaller uncertainties show little dependence on distance. These results verify the effectiveness of the proposed laser beams-based machine body localization method for the Boom-type roadheader.
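Confidence error ellipsoids like those of Fig. 10 can be derived from the sample covariance of the repeated position estimates. A minimal sketch, using the chi-square 95% quantile for three degrees of freedom (approximately 7.815):

```python
import numpy as np

def error_ellipsoid(positions, conf_chi2=7.815):
    """Axes of the 95% confidence error ellipsoid of repeated 3-D
    position estimates (7.815 is the chi-square 95% quantile, 3 DOF).

    positions: (N, 3) array of estimated target positions.
    Returns (directions, semi_axes): unit axis directions as columns,
    ordered so that the first column is the direction of largest
    position uncertainty, with the matching semi-axis lengths.
    """
    P = np.asarray(positions, float)
    cov = np.cov(P, rowvar=False)          # 3x3 sample covariance
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]        # largest uncertainty first
    semi_axes = np.sqrt(conf_chi2 * evals[order])
    return evecs[:, order], semi_axes
```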
The proposed vision-based measurement system and the total station were used to carry out pose estimation at the same time. The measurement comparison results are shown in Figs. 11, 12, and 13. Fig. 11 shows the trajectory of the mobile robot in the simulated tunnel roadway and its projection on the plane. It can be seen from Fig. 11 that the proposed method performs well: the trajectory measured by the proposed method is close to that measured by the total station. Fig. 12 and Fig. 13 show the position and orientation measurement errors; the mean position errors were 14.62 mm in the X-axis, 15.08 mm in the Y-axis, and 36.67 mm in the Z-axis, and the mean errors of the pitch angle θx, yaw angle θy, and roll angle θz were 0.24°, 0.19°, and 0.21°, respectively. These errors are within the pose estimation error allowed in underground roadway construction, so the proposed laser beams-based localization method meets the pose measurement requirements of the Boom-type roadheader.

VI. CONCLUSION
This paper proposed a monocular vision-based measurement method for the machine body of a Boom-type roadheader. An underground camera blur model and a deblurring algorithm were proposed to reduce vibration errors. By establishing a monocular vision measurement system, body localization is realized from the laser beam images. The laser beams feature extraction, the pose estimation method, the blur model, and the deblurring algorithm were discussed, and experiments were designed to verify the performance of the proposed method and to evaluate the deblurring algorithm.
Laser beams from mine laser pointers were used as image features to build the pose estimation system for the Boom-type roadheader. A parallel laser beams-based target is mounted on the top of the underground roadway, and images of the parallel laser beams are captured by an underground explosion-proof camera installed on the body of the Boom-type roadheader. The environmental adaptability evaluation shows that the laser beams target has high extraction accuracy and strong anti-occlusion ability in the coal mine environment with dense dust, low illumination, and stray-light interference. Taking mining vibration into consideration, an underground camera blur model incorporating the refraction effect of the two-layer glasses was proposed. The blur model explicitly reveals the change of the imaging optical path under the influence of vibration and the double-layer explosion-proof glass. Moreover, a deblurring algorithm was developed to reduce image blurring in the underground visual measurement system during tunneling. The experimental results indicate that the proposed method significantly reduces the influence of motion blur and that the accuracy of the restored laser beams images is higher than that of the existing Fergus algorithm.
The localization process includes laser beams extraction, feature positioning, and pose estimation.
The defined color-space constraints, Euclidean distance constraints, and the introduced boundary-line constraints are demonstrated to be effective for underground laser beams target image segmentation, feature extraction, and positioning. The proposed improved pose estimation algorithm then introduces the vanishing point and determines a unique closed-form solution from line correspondences, avoiding the higher-degree polynomials and complex iterative solution processes of existing pose estimation algorithms.
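The vanishing-point constraint rests on the fact that two parallel 3-D lines intersect in the image at a common vanishing point, which can be computed directly from the homogeneous image-line coefficients. A minimal sketch of this one step (the full 2P3L solver is not reproduced here):

```python
import numpy as np

def vanishing_point(line1, line2):
    """Image intersection (vanishing point) of two projected parallel
    3-D lines, given homogeneous line coefficients l = (a, b, c)
    satisfying a*u + b*v + c = 0.

    The intersection of two image lines is their cross product,
    dehomogenized; returns None if the image lines are parallel.
    """
    p = np.cross(np.asarray(line1, float), np.asarray(line2, float))
    if abs(p[2]) < 1e-12:
        return None            # lines are parallel in the image as well
    return p[:2] / p[2]        # (u, v) pixel coordinates
```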
Experimental platforms were built to verify the performance of the proposed localization method. The measurement comparison results show that the pose measurement error is less than the permissible error for roadway cutting in the coal mine safety regulations, which meets the machine body localization requirement of the Boom-type roadheader.
Future work will focus on the performance of the system at the tunnel face in a coal mine. Considering that frames lost to occlusion may affect precise positioning, another potential research direction is the fusion of the vision-based system with inertial navigation. In addition, the localization method and the underground camera non-uniform blur model will be further studied and applied to improve the measurement accuracy and reliability of the system.