An Individual Tree Segmentation Method From Mobile Mapping Point Clouds Based on Improved 3-D Morphological Analysis

Street tree extraction from 3-D mobile mapping point clouds plays an important role in building smart cities and creating highly accurate urban street maps. Existing methods often over- or under-segment when separating overlapping street tree canopies or extracting geometrically complex trees. To address this problem, we propose a method based on improved 3-D morphological analysis for extracting street trees from mobile laser scanner (MLS) point clouds. First, a deep-learning-based 3-D semantic segmentation framework preclassifies the original point cloud to obtain the vegetation points in the scene. To account for terrain unevenness, the vegetation point cloud is de-terrained, and a slice containing the tree trunks is obtained through height-based spatial filtering. On this basis, a voxel-based region growing method constrained by the rate of change of the convex-hull area is used to locate the street trees. We then propose a progressive tree crown segmentation method, which first completes a preliminary individual segmentation of the crown point cloud using voxel-based region growing constrained by the minimum increment rule, and then optimizes the crown edges by "valley" structure-based clustering. The proposed method is validated and its accuracy evaluated using three MLS datasets collected from different scenarios. The experimental results show that the method effectively identifies and localizes street trees with different geometries and segments street trees well even with large adhesion between canopies. The recall and precision of tree localization are higher than 96.08% and 95.83%, respectively, and the average precision and recall of instance segmentation in the three datasets are higher than 93.23% and 95.41%, respectively.


I. INTRODUCTION
As a vital component of the street environment, street trees play an important role in reducing noise [1] and improving air quality and other aspects of the urban environment [2]. Three-dimensional models of individual trees are used in road improvement design, 3-D modeling of street trees, urban climate studies, monitoring of street tree growth, and extraction and estimation of street tree biomass parameters. These applications rely on the accurate extraction of information such as the location, height, and canopy width of trees in the street environment [3]. For example, to create highly accurate navigation maps of cities, the obstruction of the field of view (FOV) by trees must be accurately calculated, which requires accurate detection of tree contours as well as locations [4]. In the power industry, rapidly growing vegetation easily interferes with power lines, so obtaining accurate information on the height, location, and other characteristics of trees is crucial [5]. These and many other applications illustrate the practical value of accurately extracting individual trees along urban streets. With the rapid development of 3-D sensing, computer vision, and related technologies, laser scanning is widely used for 3-D data acquisition in cities. Compared with airborne laser scanning, 3-D point clouds acquired by mobile laser scanning (MLS) on the ground capture complete structures with a higher level of detail and are thus widely used for high-quality urban data collection and road modeling. Accordingly, many researchers have worked on extracting building facades [6], segmenting vegetation elements [7], extracting and modeling street facilities [8], and semantically segmenting 3-D scenes [9] from MLS point clouds. Segmentation and extraction of tree elements from MLS data is an important step in current high-precision map generation and 3-D scene modeling for smart cities, but several challenges remain.
1) In addition to vegetation elements, the vehicle's 3-D point cloud also contains a large number of other urban targets, such as buildings, roads, and urban infrastructure. Removing such elements from the complex urban environment and accurately segmenting vegetation elements remains difficult.
2) Existing algorithms tend to under-segment or miss trees when 3-D point cloud data are incomplete due to occlusions.
3) When vegetation on urban streets is too dense, resulting in large adhesions between trees, existing algorithms are also prone to problems such as inaccurate canopy segmentation and large errors in tree parameter calculations.
Considering the above problems, this article proposes an improved 3-D morphological analysis method for accurate extraction and parameter calculation of street trees in complex street environments. The method compensates for the shortcomings of existing approaches and offers significant advantages, particularly when extracting trees with canopy adhesion. The rest of this article is organized as follows. Section II reviews previous methods for extracting individual street trees from MLS point clouds. Section III introduces the proposed street tree extraction method in detail. Section IV verifies the accuracy and robustness of the proposed method on three datasets. Finally, Section V concludes this article.

II. RELATED WORKS
In recent years, researchers have proposed many methods to extract individual street trees from MLS point clouds. These can be divided into three main categories: cluster-based methods, graph cut-based methods, and contextual information-based methods.

A. Cluster-Based Methods
The cluster-based approach first preprocesses the original point cloud. The objects are then segmented into instances using various clustering algorithms, and individual trees are identified according to the size, height, location, and shape of the clusters. The K-means clustering algorithm and the density-based spatial clustering of applications with noise (DBSCAN) algorithm, as classical density-based point cloud clustering methods, are widely used in single tree extraction. For example, [10] and [11] first preprocessed original data using methods such as principal component analysis to remove the ground and buildings from the original point clouds, and then used an improved K-means clustering algorithm and the DBSCAN clustering algorithm to extract single trees from the remaining point cloud. In the K-means clustering algorithm, the number of clusters must be determined manually, and it is difficult to detect nonspherical tree crowns when the crowns overlap. The DBSCAN algorithm, in turn, is sensitive to the density threshold and tends to reject some crown points as noise, which results in incomplete crowns. To address this issue, [12] and [13] segmented trees from LiDAR data using the mean shift method. However, such adaptive-density clustering is prone to over- and under-segmentation in tree crown extraction when trees are mixed among other street components. In contrast to direct tree segmentation methods, some researchers have used progressive segmentation methods to improve the accuracy of tree crown segmentation. These methods first localize the tree via the trunk and then perform further individual segmentation of trees.
The authors in [14] used the constraints of the morphological characteristics of trunks and crowns, including the horizontal positional relationship between trunk and crown, cylindrical morphology, and crown diameter, to distinguish the trunk from various road poles and finish tree localization. Then, a voxel-based region growth method could be used to segment tree crowns. This type of method can effectively extract tree crowns when trees are accurately located. However, it remains difficult to accurately segment trees when crowns are sagging. In contrast, [15] first utilized the super voxel-based clustering method to enhance the robustness of tree localization, in which super voxels are processed through principal component analysis (PCA) and trunks are identified by regional growth on the super voxels. Overall, the clustering method can typically segment individual trees, but these methods are mainly used for tree extraction in simple scenes due to difficulties in obtaining accurate extraction results for scenes with challenging situations or complex tree structures.

B. Graph Cut-Based Methods
Graph cut-based methods are effective and popular energy optimization algorithms widely used in the field of computer vision for image segmentation and stereo vision. In recent years, such approaches have also been used to segment point clouds by obtaining the topological relations between points using methods such as the k-d tree algorithm and then segmenting objects by the eigenvectors of the weight matrix. The authors in [16] and [17] individually segmented trees from laser point clouds using the graph cut method, in which the radius parameter was calculated from the positions of the tree crown and trunk. These methods can greatly improve the detection of crowns in the lower and middle heights of a tree, but the accuracy of trunk identification is sensitive to laser point cloud density. The authors in [18] located trees based on local maxima in a horizontal histogram of point cloud octree nodes and their shape features, and then used Voronoi diagrams and NCut segmentation to achieve instance segmentation. Moreover, [19] combined machine learning methods and graph-cut methods for individual tree segmentation: they first constructed a global graph model and then used graph-cut-based clustering to achieve tree instance segmentation. The graph-cut-based approach can generally achieve individual tree segmentation in nonadhesive scenes, but it relies on manually set parameters, such as the tree radius. It therefore struggles to obtain robust tree extraction results when tree features are highly variable or the scenes contain different tree species.

C. Contextual Information-Based Methods
Tree segmentation methods based on contextual information use the nearest neighbor features of each point in a point cloud as the contextual features of the corresponding point. As early as 2006, Lalonde et al. [20] attempted to use 3-D descriptors to characterize the local geometric features of point clouds and classify the geometric tensor features of the target point cloud into three categories: voluminous, mainly representing volumetric objects (e.g., grass, treetops); faceted, mainly representing planar objects (e.g., ground, building elevations); and linear, mainly representing linear objects (e.g., tree trunks, light poles). Such methods can roughly classify scenes, but the classification results are often too coarse to be used. Subsequently, [7] proposed a cylindrical fitting model by modifying an existing probabilistic relaxation model, which identifies cylindrical objects as tree trunks (with voluminous features) and merges adjacent trees to construct the street model. However, this method has poor extraction accuracy for tree crowns when there are large areas of adhesion. The authors in [21] also proposed a new 3-D segmentation algorithm for dominant tree detection by using the symmetric structure of the tree, but the search radius parameter in this method must be manually determined from a priori knowledge, and its generalizability must be improved. Although context-based methods can roughly segment point clouds into several types, they have difficulty accurately extracting individual trees and can over- or under-segment in complex scenes. The aforementioned state-of-the-art methods can successfully identify and extract individual street trees in cases with simple trunk shapes, large tree spacing, and small overlaps between trees.
However, when the distance between trees is too small, adhesion is severe, or the tree branching structure is complex, existing algorithms struggle to achieve accurate individual tree segmentation: their parameter settings generalize poorly, and they tend to over- or under-segment. To address these issues, we propose an improved 3-D morphological analysis-based method for individual tree segmentation in complex street environments. The main contributions of this work are as follows.
1) In the data preprocessing phase, 3-D deep learning is used to presegment the point clouds, and a terrain filtering process reduces the impact of terrain unevenness on individual tree segmentation.
2) A voxel-based region growing algorithm constrained by the rate of change of the convex-hull area is proposed to locate street trees, enabling accurate positioning of trees with complex geometry.
3) A coarse-to-fine canopy extraction method for adhesion scenes is proposed, which integrates the area increment rule and the height change rule to achieve multiscale region growing of crown point clouds.
III. METHODOLOGY

Fig. 1 shows the overall technical flow of this method. First, the raw data are preprocessed to remove noise, and a preliminary classification of the point clouds is performed to obtain tree point clouds. A digital elevation model is then created by filtering, and the elevation of the point cloud is unified to a datum before locating the trees. Based on these tree localization results, a coarse-to-fine tree canopy extraction method is used to accurately extract the individual tree canopies, which are merged with the trunks to obtain individual tree point clouds.

A. Data Preprocessing
Mobile mapping point cloud data include vegetation as well as the ground, buildings, road infrastructure, and other elements, and require a large amount of memory [22], [23]. These extraneous elements reduce the accuracy of vegetation extraction and cause problems such as long data processing times. In this study, we preprocess the original mobile mapping point cloud data mainly through point cloud denoising and preliminary semantic classification. First, the point cloud is denoised with the statistical outlier removal (SOR) filter to remove noise. Then, following [9], we use the RandLA-Net 3-D deep learning network for semantic classification, labeling the original point cloud as ground, vegetation, or man-made structures (including light poles), and retain the vegetation elements for individual tree segmentation. The results of mobile mapping point cloud data preprocessing are shown in Fig. 2.
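The SOR filter mentioned above can be sketched in a few lines; the following is a minimal numpy illustration (not the authors' implementation), using a brute-force neighbor search that is only suitable for small point sets:

```python
import numpy as np

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbors exceeds the global mean of that quantity
    by more than std_ratio standard deviations."""
    # Brute-force pairwise distances (fine for an illustration).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

In practice a spatial index (k-d tree) replaces the quadratic distance matrix; library implementations such as PCL's StatisticalOutlierRemoval expose the same two parameters (neighbor count and standard-deviation ratio).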

B. Terrain Filtering and Trunk Filtering
Most studies [24], [25], [26] have used spatial elevation slicing to obtain the main stem or part of the branch point cloud of trees as the basic information for tree localization. However, if a uniform height threshold is used in a scene with hilly terrain, point cloud filtering tends to miss numerous tree trunks, which leads to failed localization. Our work therefore accounts for terrain undulation: the vegetation point clouds are unified to the same height level before height-based point cloud filtering. The details are as follows.
1) Ground points are interpolated from the ground point cloud obtained by preprocessing to create a digital elevation model (DEM) using inverse-distance-weighted interpolation, (1) and (2). In (1) and (2), (X, Y, Z) are the coordinates of the interpolation point; (x_i, y_i, z_i) are the coordinates of the neighboring points; p is the weight; q is the power; i is the index of the neighboring points; and n is the number of neighboring points in the search area. The DEM elevation is then subtracted from the vegetation point cloud according to (3) to unify it to the same horizontal plane. In (3), Z_i-deterr is the normalized height of a vegetation point, computed as Z_i-deterr = Z_i − Z_grid, where Z_i is the point's original height and Z_grid is the height of the DEM grid cell containing the point.
2) After eliminating the influence of terrain undulation, all vegetation points lie on the same horizontal plane. Iterating over the vegetation point cloud, the lowest elevation Z_b is determined and an elevation threshold T_h is set. As shown in Fig. 3, the vegetation point cloud is sliced by height filtering within [Z_b, Z_b + T_h] to obtain the local trunk point cloud. Fig. 3 shows the point cloud after terrain filtering and trunk filtering.
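The two steps above (IDW interpolation per (1)-(2), then the height subtraction of (3)) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the grid size and neighbor count are assumed values:

```python
import numpy as np

def idw_height(x, y, ground_pts, q=2.0, n=8):
    """IDW height at (x, y) from the n nearest ground points,
    with power parameter q (cf. (1)-(2))."""
    d = np.hypot(ground_pts[:, 0] - x, ground_pts[:, 1] - y)
    idx = np.argsort(d)[:n]
    d_n = np.maximum(d[idx], 1e-9)        # guard against division by zero
    w = 1.0 / d_n ** q
    return np.sum(w * ground_pts[idx, 2]) / np.sum(w)

def deterrain(veg_pts, ground_pts, grid=0.5):
    """Subtract the interpolated DEM height from each vegetation point
    (cf. (3)), with one IDW query per occupied grid cell."""
    out = veg_pts.copy()
    cells = np.floor(veg_pts[:, :2] / grid).astype(int)
    for cell in np.unique(cells, axis=0):
        mask = np.all(cells == cell, axis=1)
        cx, cy = (cell + 0.5) * grid      # query the DEM at the cell center
        out[mask, 2] -= idw_height(cx, cy, ground_pts)
    return out
```

After `deterrain`, a single height window [Z_b, Z_b + T_h] slices trunk points consistently even on sloping ground.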

C. Tree Localization
The trunk is an important feature for distinguishing street trees from other objects. When investigating the individual segmentation of street trees, most studies [27], [28], [29] have tended to first identify and extract the trunk structure from the target point cloud as the basis for localization. However, existing methods are prone to inaccurate localization of trees with complex structures, such as those with nonvertical trunks, multiple branches, and low branches. We thus propose a method to localize street trees with complex trunk geometry. The method first performs Euclidean distance clustering on the trunk point clouds obtained by filtering and then voxelizes each cluster. Seed voxels are then selected for voxel region growing, constrained by the rate of change of the convex-hull area, to obtain the candidate stem point clouds. Finally, possible low shrubs among the candidate trunks are removed by a point cloud filtering algorithm based on surface variation. Each step in this process is described in greater detail below.
1) The trunk point cloud is clustered using the adaptive Euclidean clustering method proposed in [24]. To obtain the optimal clustering effect, the three quantities d(p_1, p_2), d_NN(p_1), and d_NN are calculated using (4)-(6), where d(p_1, p_2) is the 3-D spatial distance between points p_1 and p_2; d_NN(p_1) is the average spatial distance from point p_1 to its n nearest neighbors; and d_NN is the average distance from all points in point cloud P to their nearest neighbors, which is generally used to measure point cloud density. To obtain the fine trunk point cloud, each cluster obtained after clustering is filtered, and the points less than 0.5 m from the ground are retained (as shown in Fig. 4), yielding the point cloud cluster PCs_trunk.
2) The point cloud cluster PCs_trunk is voxelized, and all voxels are organized hierarchically from bottom to top as Layer_0, Layer_1, ..., Layer_n. The layer with the lowest number of voxels is selected as the location of the stem seeds (Layer_seed). Considering that a single point cloud cluster may contain more than one stem, we cluster Layer_seed to obtain multiple subclusters and compute the horizontal convex-hull area Area_CH of the point cloud corresponding to each subcluster. If Area_CH is smaller than the specified threshold T_Area (T_Area = 1.5 in this work), the subcluster is treated as a trunk seed Seed_trunk.
3) As shown in Fig. 5(a), for each subcluster Seed_trunk, upward and downward growth is used to obtain the overall trunk point cloud. The upward growth obtains the voxels in Layer_seed+1 that overlap with Seed_trunk in the X-Y plane and grows in the horizontal region of that layer; growth of a layer ends when no adjacent voxels are added. As seen in Fig. 5(b), the growth of the current layer proceeds analogously from the previous layer. If the ratio of the horizontal convex-hull area of the current layer, Area_cur, to that of the layer below, Area_pre, is greater than T_RArea, the current layer is considered the canopy layer and growth ends. The bottom of the trunk is reached by analogous downward growth.
4) Based on the above results, the local surface variation P_sv of each trunk growth result is used to distinguish trees from shrubs. The parameters are calculated by (7) and (8), where λ_1, λ_2, λ_3 (λ_1 ≥ λ_2 ≥ λ_3) are the eigenvalues of the neighborhood of point p_i; P_sv(p_i) is the surface variation of the neighboring point cloud; and P_sv(P) denotes the local surface variation of point cloud P. If the local surface variation of a growth result exceeds the specified threshold T_sv, the result is considered a shrub and removed.
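Surface variation from covariance eigenvalues is commonly defined as λ_3 / (λ_1 + λ_2 + λ_3); the following sketch uses that standard form and an illustrative threshold, which may differ from the paper's exact (7)-(8) and T_sv:

```python
import numpy as np

def surface_variation(points):
    """P_sv = lambda_3 / (lambda_1 + lambda_2 + lambda_3) from the
    eigenvalues of the 3x3 covariance of the point set. Values near 0
    indicate linear/planar structures (e.g., trunks); larger values
    indicate volumetric scatter (e.g., shrub foliage)."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    return lam[2] / lam.sum()

def is_shrub(points, t_sv=0.05):
    """Reject a trunk candidate whose surface variation exceeds T_sv
    (the threshold value here is illustrative, not the paper's)."""
    return surface_variation(points) > t_sv
```

A near-cylindrical trunk segment yields one dominant eigenvalue along its axis and a P_sv close to zero, whereas an isotropic shrub blob yields three comparable eigenvalues and a P_sv near 1/3.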

D. Coarse Extraction of Tree Crowns
Based on the trunk information obtained in Section III-C, the tree crown is extracted using a coarse-to-fine process. The crown point cloud is first obtained by Euclidean clustering in the local region and grown into an initial crown point cloud by region growing, after which it is optimized based on the "valley" structure, as follows.
1) All trunk points are removed from the vegetation point cloud, and the trunk centroid T_bc is used as the seed point. The average distance D_NeT between this trunk and neighboring trunks is used as the initial radius for a Euclidean clustering operation in this region, and the resulting cluster P_CroCd is used as the initial candidate crown point cloud. To obtain a more complete crown point cloud, a growth method similar to that of trunk extraction is used to further extract the tree canopies. The distance d(P_CroCd^i, P_CroCd^j) between each pair of candidate-region point clouds is calculated; if d(P_CroCd^i, P_CroCd^j) = 0, the candidate crown point clouds P_CroCd^i and P_CroCd^j are assumed to overlap. If there is an overlap, the overlapping-region point clouds are obtained and voxelized.
2) Fig. 7(a) shows the voxelization results for two tree crowns; the algorithm grows from the initial layer marked in blue. When two adjacent trees grow simultaneously in the same layer and a voxel is claimed by both at the same time, the assignment is decided by the minimum increment rule, in which A_before and A_after are the horizontal convex-hull areas of a tree's crown growth result before and after adding the voxel, respectively. As shown in (9) and Fig. 6, the increases in horizontal convex-hull area, Area_grow^1 and Area_grow^2, (10), are first calculated after adding the voxel's point cloud to Tree_1 and Tree_2, respectively.
If Area_grow^1 < Area_grow^2, the voxel is assigned to Tree_1; if Area_grow^1 > Area_grow^2, it is assigned to Tree_2 (i.e., the assignment is determined by the sign of Area_grow = Area_grow^1 − Area_grow^2). Fig. 7 shows the crown growth process in two adjacent layers, where green voxels represent Tree_1, blue voxels represent Tree_2, red voxels represent the point clouds to be grown, and yellow voxels represent adherent point clouds. Fig. 7(a) shows the side view of the initial voxels of the two trees, Fig. 7(b) shows the top view of the canopy point cloud to be segmented after the upward region growing of Layer_i+1, Fig. 7(c) shows the voxel adhesion between the two trees in the current layer, and Fig. 7(d) shows the canopy point clouds of the two trees obtained using the minimum increment rule.
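The minimum increment rule can be sketched directly with a 2-D convex hull; this is an illustrative reconstruction, not the authors' code (note that for a 2-D hull, scipy's `ConvexHull.volume` is the enclosed area, while `.area` is the perimeter):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(xy):
    """Horizontal convex-hull area (ConvexHull.volume is the 2-D area)."""
    return ConvexHull(xy).volume

def assign_voxel(tree1_xy, tree2_xy, voxel_xy):
    """Minimum increment rule (cf. (9)-(10)): give the contested voxel
    to the tree whose horizontal convex-hull area grows the least."""
    grow1 = hull_area(np.vstack([tree1_xy, voxel_xy])) - hull_area(tree1_xy)
    grow2 = hull_area(np.vstack([tree2_xy, voxel_xy])) - hull_area(tree2_xy)
    return 1 if grow1 < grow2 else 2
```

Intuitively, a voxel close to a crown's existing footprint barely enlarges its hull, so adhesion voxels are pulled toward the crown they geometrically belong to.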

E. Tree Crown Refinement
As shown in Fig. 10(a), the coarse segmentation results exhibit some degree of over- or under-segmentation due to size differences or excessive adhesion between adjacent trees. To further improve the crown segmentation results, this work optimizes adjacent tree crowns based on the "valley" structural feature: the point cloud in the overlapping region of adjacent street trees changes in height from high to low and then from low to high, as shown in Fig. 8.
1) Equation (10) calculates the deviation degree P_d of each point; if P_d < 0.2, the point lies in the middle region of the adherent canopies, where d_xy(p_i, p_trunk_j) denotes the 2-D distance between point p_i and the principal point p_trunk_j of trunk j. Euclidean clustering is performed on this middle-region point cloud, and the overlapping canopy points that must be reassigned are clustered into two classes, CR_1 and CR_2, as shown in Fig. 10(b) and (c).
2) Of the two canopy point clouds CR_1 and CR_2, the one with the lower mean height (CR_1) is used as the initial cluster, and its edge points are used as the seed point set P_seeds for clustering, which is iteratively optimized as follows. a) The highest seed point p_h-seed of P_seeds in CR_x is selected, and its R-nearest neighbors (Rnn) are obtained. b) If the highest point of CR_f within Rnn is higher than the highest point in CR_x, and the minimum distance d_min from CR_1 to CR_2 within Rnn is smaller than twice the average distance d_NN in Rnn, the seed point is added to the cluster CR_1 and the other points in Rnn are added as new seed points to P_seeds; otherwise, the seed point is removed from the seed set. c) The above steps are repeated until no seed points remain. As shown in Fig. 9(d), the remaining unlabeled points are merged into CR_2 to complete the tree crown optimization. When multiple street trees adhere to each other in multiple directions, we first arbitrarily select two adherent street trees for processing and then iteratively traverse each pair of adherent street trees, using the same method to complete the single-canopy extraction.
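The paper's exact formula (10) for the deviation degree is not reproduced here, but a plausible form based on the quantities it names (the 2-D distances to the two trunk principal points) is sketched below; the formula and the function names are assumptions for illustration only:

```python
import numpy as np

def deviation_degree(pts_xy, trunk1_xy, trunk2_xy):
    """Hypothetical form of the deviation degree P_d: points roughly
    equidistant (in 2-D) from the two trunk centers get P_d near 0,
    so they fall in the 'valley' between the adherent crowns."""
    d1 = np.linalg.norm(pts_xy - trunk1_xy, axis=1)
    d2 = np.linalg.norm(pts_xy - trunk2_xy, axis=1)
    return np.abs(d1 - d2) / (d1 + d2)

def middle_region(pts_xy, trunk1_xy, trunk2_xy, t=0.2):
    """Select the overlap ('valley') points between two adherent crowns."""
    return pts_xy[deviation_degree(pts_xy, trunk1_xy, trunk2_xy) < t]
```

Points selected this way would then be clustered into CR_1 and CR_2 and reassigned by the iterative seed growth described above.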

IV. EXPERIMENTS

A. Experimental Data and Evaluation Criterion
To verify the feasibility and effectiveness of the proposed individual extraction method for street trees, experiments were conducted using three MLS point cloud datasets, followed by qualitative and quantitative analyses. Datasets I [see Fig. 10(a)] and II are mobile laser scanning point clouds of campus streets. The street trees in Dataset I are mostly lychee trees with complex trunk geometry, and the terrain of the area is hilly. The trees in Dataset II [see Fig. 10(b)] have serious canopy adhesion, and clearly identifying boundaries between canopies is difficult. These two datasets were used to test the effectiveness of the proposed method for individual segmentation of trees with complex geometric structures or strong adhesion. Dataset III [see Fig. 10(c)] is a public point cloud dataset (part of the Oakland 3-D point cloud dataset), which has a lower point cloud density than the first two datasets. For accuracy validation, the ground truth of individual trees was obtained by manual segmentation and labeling. Table I shows a basic overview of the three experimental datasets. For quantitative analysis, we used Recall and Precision as detection and evaluation metrics for the individual tree segmentation results (11). Recall measures the completeness (quantity) of tree segmentation, while Precision measures its correctness (quality). The experiment was divided into two parts, tree localization and individual tree segmentation, and the results were analyzed in comparison with those of existing tree segmentation methods.
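The two metrics in (11) follow the standard definitions; as a small sketch (the function name is ours, not the paper's):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN) (cf. (11)).
    For localization, TP/FP/FN count trees; for instance segmentation,
    they count points of an individual tree."""
    return tp / (tp + fp), tp / (tp + fn)
```

For example, with the Dataset I localization counts reported in Section IV-B (69 trunks correctly identified, 3 incorrectly identified, none missed), this gives Precision = 69/72 ≈ 95.83% and Recall = 100%.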

B. Performance of Tree Localization
An initial classification of the entire point cloud based on the RandLA-Net point cloud classification network was performed in this work before individual tree segmentation. To achieve better classification results, we used a 3 km road point cloud obtained from Shenzhen University for RandLA-Net model training; the point cloud was manually segmented and labeled as vegetation, roads, buildings, and light poles. Fig. 11 shows the tree localization results for Dataset I. From Fig. 11(a), we see that deep learning semantic classification removed most of the elements unrelated to vegetation and retained the vegetation information. We used a resolution of 0.05 m for the DEM. Fig. 11(b) shows the vegetation point cloud obtained after removing the terrain and slicing the tree trunks. All vegetation lies on one plane after terrain removal, and the trunk information can then be obtained by height filtering. Based on the filtered trunk point cloud, the proposed method performed accurate tree localization, as shown in Fig. 11(c). As shown in Table II, 72 tree trunk structures were identified in Dataset I, with 69 correctly identified and 3 incorrectly identified, giving Recall = 100.00% and Precision = 95.83%. Fig. 12 shows the tree localization results for Dataset II. Similar to Dataset I, the method yielded excellent tree localization results: 193 trunk structures were identified, with 190 correctly identified, 3 incorrectly identified, and 7 not identified. Therefore, Recall = 96.44% and Precision = 98.44% in Dataset II. As shown in Fig. 13, in Dataset III, 48 trunk structures were identified, all of them correctly, but three trees were missed, so Recall = 96.08% and Precision = 100%. Fig. 14 shows two incorrect tree localization results.
Mislocalization mainly occurred because some point clouds of street lights near the street trees were semantically labeled as vegetation during deep learning classification, which led to confusion between street lights and tree trunks. Such street lights are highly similar to tree trunks, a case not handled by our current method; we will address it in future research to improve trunk detection accuracy. In addition, some trees were missed during localization mainly because the trunks were obscured by shrubs or elements such as cars, resulting in a substantial lack of trunk points. Nonetheless, the method performed well for tree localization in three datasets of different complexity, with Recall exceeding 96% and Precision exceeding 95%, thus providing a good basis for subsequent tree crown segmentation.

C. Performance of Individual Tree Segmentation
The complete tree crown was obtained using the tree localization results and the proposed segmentation algorithm, and individual trees were obtained by merging the crowns with the trunk point clouds. Notably, all parameters in the crown segmentation phase are computed adaptively and require no manual adjustment. We compared the accuracy of the proposed method with the existing individual tree extraction algorithms TreeSeparation [30] and TreeSeg [24]. For fairness, all three methods were implemented on Windows, and the experiments were performed on a computer with a 3.6 GHz CPU and 64 GB RAM. Figs. 15-17 compare the tree crown segmentation results for the three datasets across methods. Fig. 15 illustrates that the proposed method extracted the individual tree models well from the tree localization results. Fig. 15(b)-(e) shows the ground truth and the segmentation results of the proposed method, TreeSeparation, and TreeSeg, respectively. The proposed tree canopy segmentation method achieved highly precise individual tree segmentation: compared with the other two methods, its segmentation error (the red points in (c)-(e)) is the lowest, and it achieved the best crown segmentation results. The TreeSeparation method achieved good individual extraction for scenes with small adhesion areas but tended to under- and over-segment when trees strongly adhered, mainly because it tends to discard the point cloud of the overlap region as it grows each tree downward. In contrast, TreeSeg effectively extracted the main parts of street tree crowns in complex scenes with strong adhesion, but the extracted crowns were missing points along their edges. To further quantify the results of individual tree extraction, we selected individual trees from different regions and calculated the precision and recall of the segmented trees against the ground truth.
In this case, TP, FP, and FN no longer represent numbers of street trees but rather the numbers of correct, incorrect, and missing points of individual trees, respectively. As shown in Table III, this analysis produced an average precision of 97.80% and an average recall of 97.83%, while the average precision and recall of individual tree segmentation by TreeSeparation were 92.02% and 87.37%, respectively, and TreeSeg achieved 95.24% precision and 64.85% recall. The proposed method achieved the best average recall and precision, with all trees attaining more than 93% precision and more than 94% recall. It also demonstrated good robustness for individual tree segmentation in different scenes, whereas the TreeSeparation and TreeSeg algorithms were less robust. For example, TreeSeparation achieved a segmentation precision of only 67.18% for the eighth tree, and TreeSeg achieved a recall below 70% for most trees. The low recall of TreeSeg is mainly due to inconsistency between the search radius of its cylindrical filter and the real crown width of the street trees, which leads to substantial missing data at the crown edges. Figs. 16 and 17 show the individual tree segmentation results for Datasets II and III, respectively, while Tables IV and V provide quantitative evaluations of single-tree segmentation accuracy. The proposed tree segmentation algorithm achieved optimal results across all experimental datasets, especially Dataset II, in which optimal segmentation accuracy was achieved for all eight trees, with an average precision of 96.37% and an average recall of 97.46%. For Dataset III, an average precision of 93.23% and an average recall of 95.41% were achieved. Table VI also compares the time cost of the three methods on the different datasets; the proposed method achieves the highest processing efficiency on Datasets II and III.
The above results illustrate that the proposed method achieves better results in terms of segmentation accuracy and robustness than the other methods.

V. CONCLUSION
To achieve highly precise individual segmentation of street trees, an effective individual tree extraction method based on improved 3-D morphological analysis is proposed in this article. The method optimizes and improves on existing tree localization and tree crown segmentation algorithms to achieve individual segmentation of large numbers of street trees, effectively solving the problem of inaccurate segmentation in regions with strong tree adhesion. Three sets of MLS point clouds from different areas, containing trees with different geometric features, were used for experimental validation and accuracy analysis. The experimental results show that the method achieved superior segmentation accuracy for street trees in all three datasets compared with existing methods, with average precision and recall exceeding 93.23% and 95.41%, respectively. In terms of robustness, the proposed method performed well on most single-tree extractions, which indicates that the algorithm functions in scenes of varying complexity and remains robust in cases of tree adhesion. In future research, we will optimize and improve this work in two aspects. First, color features of the point cloud will be added to help segment different tree species. Second, shrub and clutter rejection (e.g., light poles) will be improved with deep learning methods, and the 3-D semantic point cloud segmentation network will be optimized to further improve scene classification.