Combination of Lidar Intensity and Texture Features Enable Accurate Prediction of Common Boreal Tree Species With Single Sensor UAS Data

We evaluated the performance of unmanned aerial system (UAS) airborne light detection and ranging (lidar) data in the species classification of pine, spruce, and broadleaf trees. Classifications were conducted with three machine learning (ML) approaches (multinomial logistic regression, random forest, and multilayer perceptron) using features computed from automatically segmented point clouds that represent individual trees. Trees were segmented from the point cloud using a marker-controlled watershed algorithm, and two types of features were computed for each segment: intensity and texture. Textural features were computed from gray-level co-occurrence matrices built from horizontal cross sections of the point cloud. Intensity features were computed as the average intensity values within voxels. The classification accuracies were validated on 39 rectangular $30\times30$ m field plots using leave-one-plot out cross-validation. The results showed only very small differences in the classification performance between different ML approaches. Intensity features provided greater classification accuracy (kappa 0.73–0.77) than textural features (kappa 0.60–0.64). However, the best classification results (kappa 0.81) were achieved when both intensity and textural features were used. Feature importance in different ML approaches was also similar. We conclude that the accurate classification of the three tree species considered in this study is possible using single-sensor UAS lidar data.


Mikko Kukkonen, Timo Lähivaara, and Petteri Packalen

I. INTRODUCTION
Tree species are a crucial component of forest inventories and essential for forest management planning. Optical remote sensing techniques, such as satellite images, aerial images, and images from unmanned aerial systems (UAS), have been shown to be effective in capturing tree species information at different spatial scales. The cost, spatial resolution, and extent of the data are crucial factors in determining their applicability. Satellite images provide wide coverage with lower spatial resolution, making them ideal for large-scale projects such as national-level forest inventories [1], [2]. UAS data, on the other hand, provide high spatial resolution but cover smaller areas, making them suitable for precision forestry [3], [4]. Aerial imagery offers high to moderate spatial resolution over relatively large areas, making it a preferred option for management-oriented forest inventories [5]. Classification of tree species using optical information relies on the assumption that the amount of radiation reflected at distinct wavelengths differs between tree species. This assumption holds in the sense that reflectance is affected by the chemical properties of plants, such as water content and photosynthetic pigments, leaf morphology, and canopy structure [6]. However, these discriminating features vary not only between tree species but also within species due to factors such as age and health [7], and between suppressed, codominant, and dominant trees due to competitive status and shadows [8]. In addition, these features can be affected by properties that are not directly related to tree species or habitat, such as sensor configuration, weather conditions, and solar zenith angle. Thus, the operational use of optical data for tree species classification can be problematic.
Airborne light detection and ranging (lidar) data can be used to describe the geometrical properties of trees. In the context of tree species classification, this information is primarily related to the geometry of the crown structure, for example, the shape of the crown [9] or the crown base height [10]. Moreover, leaf orientation, size, clumping, and foliage density are associated with lidar intensity and have been found useful in the classification of tree species [11]. Tree species classification based solely on geometric or intensity information can be prone to errors caused by several factors, including variations in crown structure and morphology within the same species, as well as the absence of distinct features that differentiate tree species. Perhaps due to these challenges, tree species classification using lidar data has remained a difficult task. Note that in mixed forests, tree species can be classified only at the level of individual trees, but lidar data also describe the geometrical properties of tree groups, and this information can be used in area-based lidar inventories [12]. However, in that case, tree species must be predicted for stands instead of classifying individual trees.
Although geometrical information might have limited benefits for tree species classification, in some experiments tree species have been classified exclusively using aircraft-borne lidar data. However, these single-sensor approaches often require specific conditions, such as seasonal changes in leaf phenology [13], multispectral (MS) lidar data [14], or exceptionally high point densities [15], [16]. Despite these challenges, single-sensor approaches offer several advantages over the fusion of data from different sensors. One significant advantage is that they simplify data collection, storage, and processing workflows, reducing the potential for errors and inconsistencies that can arise during data integration. It is also worth noting that the use of lidar data in forest inventories extends beyond the classification of tree species, e.g., to the prediction of biomass or the extraction of ground level. As a result, a single-sensor solution to tree species classification based on lidar would be highly advantageous for conducting tree species-specific forest inventories.
An increasing number of recent studies have experimented with machine learning (ML) methods to predict the species of individually segmented trees using very high point density lidar data acquired from terrestrial laser scanning (TLS) [17], [18], [19], [20], mobile platforms [21], or UAS platforms [20], [22], [23]. For instance, Liu et al. [20] and Chen et al. [22] investigated the classification of birch and larch trees at Saihanba National Forest Park, Hebei, China. Chen et al. [22] proposed a neural network architecture that operated directly on point clouds, which were acquired from TLS and UAS lidar data sources. The authors reported an increase in classification accuracy when the number of points at the individual tree level increased from 512 to 2048 and found that TLS data yielded higher classification accuracy than UAS lidar under identical experimental conditions. Similarly, Liu et al. [20] proposed a deep neural network architecture that also operated directly on point clouds. Their approach achieved higher classification accuracy with TLS data (kappa 0.85) than with UAS lidar data (kappa 0.76) in their experimental setup. In another related study, Briechle et al. [24] investigated the simultaneous classification of tree species and standing dead trees using UAS lidar and helicopter-borne lidar data at two test sites. The authors created multiple side-view silhouette images of lidar intensity from different viewing directions as geometrical features for each tree. Using these geometrical features as the sole inputs to a neural network classifier, the authors reported overall accuracies (OAs) ranging from 0.72 to 0.87, depending on the test site, lidar data characteristics, and classifier used. However, when the geometrical features were used as complementary information to MS data, the OA values improved and ranged from 0.82 to 0.96, whereas the use of MS data alone resulted in OA values ranging from 0.86 to 0.94. This study underscores the challenges of classifying tree species using geometrical data alone in comparison to the advantages of incorporating auxiliary MS data.

The objectives of this study were to explore the feasibility of using UAS lidar data to classify tree species in a boreal forest environment and to assess the effectiveness of features derived solely from point cloud data in achieving this classification. Specifically, this study aimed to: 1) evaluate the accuracy of different ML models in classifying tree species using UAS lidar data; 2) investigate the use of intensity and textural features derived from point cloud data in improving the accuracy of ML models for tree species classification; and 3) explore the importance of these descriptive features in achieving accurate tree species classification.
Overall, this study aimed to contribute to the development of more accurate and efficient methods for tree species classification in boreal forest environments, which can support forest management efforts.

II. MATERIALS

A. Field Data
Field data were collected from 39 square 30 × 30 m forest plots located in eastern Finland (see Fig. 1 and Table I). The forests in the area are predominantly privately owned and are, thus, primarily used for timber production. Tree species distribution in the area is typical of Finnish inland areas. Each tree with a diameter at breast height (DBH) >5 cm was recorded. For these trees, tree species was determined, DBH was calipered, and height was measured using an electronic Vertex instrument. In addition, the location of each measured tree was determined with submeter accuracy using an adaptation of the triangulation method described in [25].

B. Lidar Data
The UAS lidar data were scanned using a Riegl VUX-1 UAS with an AP20 inertial measurement unit. The data were acquired in July 2020 using an Avartek Boxer hybrid drone. The scanner had a scanning angle of 120° and a pulse repetition frequency of 380 kHz. Each plot was scanned from ten flight lines: five in an east-west direction and five in a south-north direction, at a cruising speed of about 4 m/s from an altitude of 50-60 m above ground level. This setup resulted in a nominal density of approximately 3700 pulses/m². After the ten flight lines were merged, the lidar echoes were classified as ground and nonground with the method proposed by Axelsson [26]. A triangulated irregular network (TIN) was created from the ground echoes. All lidar echoes were then normalized to ground level by subtracting the interpolated TIN value from the height of each echo.

III. METHODS

A. Tree Crown Segmentation
The delineation of individual tree crowns from the lidar data was performed using marker-controlled watershed segmentation. Initially, a 0.5-m resolution canopy height model (CHM) was generated from the lidar echoes by assigning the highest echo within each 0.5-m raster cell. The resulting CHM was Gaussian filtered with a sigma value of 0.8 (in the pixel coordinate system). Subsequently, the markers for the watershed segmentation were identified from the Gaussian-filtered CHM using a modified variable window filter method (as detailed in [27]).
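As an illustration, the CHM rasterization, Gaussian smoothing, and marker detection steps above can be sketched in Python with NumPy and SciPy. This is a simplified sketch, not the paper's implementation: it uses a fixed-size local-maximum window and a hypothetical 2-m minimum marker height in place of the modified variable window filter of [27], and it omits the watershed step itself.

```python
import numpy as np
from scipy import ndimage

def chm_and_markers(points, cell=0.5, sigma=0.8, win=3, min_height=2.0):
    """Rasterize a canopy height model (CHM) from normalized lidar echoes
    and detect treetop markers as local maxima of the smoothed CHM.

    points : (n, 3) array of normalized x, y, z coordinates.
    Returns the smoothed CHM and a boolean marker raster.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    chm = np.zeros((rows.max() + 1, cols.max() + 1))
    # Keep the highest echo within each 0.5-m cell.
    np.maximum.at(chm, (rows, cols), z)
    chm = ndimage.gaussian_filter(chm, sigma=sigma)
    # A cell is a marker if it equals the maximum of its neighborhood
    # (fixed window here; the paper uses a variable window filter [27])
    # and exceeds an assumed minimum tree height.
    local_max = ndimage.maximum_filter(chm, size=win) == chm
    markers = local_max & (chm > min_height)
    return chm, markers
```

The marker raster would then seed a watershed segmentation of the inverted CHM (e.g., `skimage.segmentation.watershed`).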
The crown segments were linked to field-measured trees using an automated linking procedure. The closest neighbors were computed from crown segments to field-measured trees and vice versa. A connection between a crown segment and a field-measured tree was created only if they were each other's closest neighbor and the distance between them was <2 m. The distances were computed in 3-D using the local maxima of the CHM (i.e., the detected treetops) and the field-measured treetops.
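A minimal sketch of this mutual-nearest-neighbor linking rule (the function and variable names are ours, not from the paper):

```python
import numpy as np

def link_segments_to_trees(seg_tops, field_tops, max_dist=2.0):
    """Link detected treetops to field-measured treetops.

    A link is made only if a segment and a tree are mutually closest
    neighbors in 3-D and less than max_dist metres apart.
    Returns a list of (segment_index, tree_index) pairs.
    """
    seg_tops = np.asarray(seg_tops, float)
    field_tops = np.asarray(field_tops, float)
    # Pairwise 3-D distances, shape (n_segments, n_trees).
    d = np.linalg.norm(seg_tops[:, None, :] - field_tops[None, :, :], axis=2)
    nearest_tree = d.argmin(axis=1)   # closest tree for each segment
    nearest_seg = d.argmin(axis=0)    # closest segment for each tree
    links = []
    for s, t in enumerate(nearest_tree):
        if nearest_seg[t] == s and d[s, t] < max_dist:
            links.append((s, t))
    return links
```

The mutual condition leaves unmatched any segment whose closest tree is already better explained by another segment, which suppresses spurious links around commission errors.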

B. Predictor Variables
Two categories of predictor variables were computed from the lidar echoes for each segmented tree: 1) intensity and 2) texture (Fig. 2). The intensity features were computed at the voxel level, whereas the texture variables were computed for multiple horizontal cross sections of the lidar point cloud. Both categories of predictor variables were computed using only the "first of many" and "only" lidar echoes with height values >50% of the height of the tree. The height threshold was used to mitigate the impact of understory trees in the classification process. For the remainder of this section, "lidar echoes" refers to the first of many and only echoes with height values >50% of the detected treetop.
1) Intensity Features: Each segmented tree was divided into 8 × 8 × 10 (x × y × z) voxels so that the detected treetop was located in the center of the highest layer of voxels. Fixing the number of voxels ensures that the size of the voxels can vary between different-sized trees. The mean intensity of all lidar echoes within each voxel was computed. It should be noted that these intensity features indirectly contain information about the shape of the segmented tree, because an intensity value of zero for any given voxel means that the voxel contains no lidar echoes.

Looking downward from the treetop, the crown of the tree can be considered rotationally symmetrical. Given this assumption, the original 8 × 8 × 10 voxel grid can be reduced to a 4 × 4 × 10 (x × y × z) grid. In 2-D, this computation can be explained by first folding the 8 × 8 grid in half (4 × 8) and computing the mean of the voxels that lie on top of each other. Thereafter, the 4 × 8 grid is folded in half in the other direction, and the mean of the overlapping voxels composes the final 4 × 4 grid, in which the treetop is located at the lower right cell. Repeating this for all ten layers yielded 160 intensity features.
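The voxelization and symmetry folding can be sketched as follows. The grid extents, the empty-voxel handling, and the orientation of the folded grid are our assumptions for illustration; the paper fixes only the grid dimensions, the zero value for empty voxels, and the folding logic (here the treetop maps to index (0, 0) of each folded layer rather than a "lower right" cell).

```python
import numpy as np

def voxel_intensity_features(points, intens, top):
    """Mean lidar intensity in an 8 x 8 x 10 voxel grid centred on the
    detected treetop, folded to 4 x 4 x 10 by assuming a rotationally
    symmetric crown. Expects only echoes above 50 % of tree height.
    """
    points = np.asarray(points, float)
    x0, y0, z_top = top
    # Horizontal half-width of the grid from the widest echo offset.
    r = max(np.abs(points[:, :2] - [x0, y0]).max(), 1e-6)
    ix = np.clip(((points[:, 0] - (x0 - r)) / (2 * r / 8)).astype(int), 0, 7)
    iy = np.clip(((points[:, 1] - (y0 - r)) / (2 * r / 8)).astype(int), 0, 7)
    z_lo = z_top / 2.0                 # features use echoes above 50 %
    iz = np.clip(((z_top - points[:, 2]) / ((z_top - z_lo) / 10)).astype(int), 0, 9)
    sums = np.zeros((10, 8, 8))
    cnts = np.zeros((10, 8, 8))
    np.add.at(sums, (iz, iy, ix), np.asarray(intens, float))
    np.add.at(cnts, (iz, iy, ix), 1.0)
    mean = np.where(cnts > 0, sums / np.maximum(cnts, 1), 0.0)  # empty voxel -> 0
    # Fold assuming rotational symmetry: average mirrored halves in x, then y.
    half = 0.5 * (mean[:, :, :4][:, :, ::-1] + mean[:, :, 4:])
    quarter = 0.5 * (half[:, :4, :][:, ::-1, :] + half[:, 4:, :])
    return quarter.reshape(-1)         # 4 * 4 * 10 = 160 features
```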
2) Texture: Haralick texture features [28] were computed from 40 horizontal cross sections of the lidar echoes for each segmented tree. Haralick features that cannot always be computed, i.e., the two information measures of correlation and the maximal correlation coefficient, were excluded. Each horizontal cross section was divided into an array of 40 × 40 pixels. Therefore, similar to the voxels of the intensity features, the pixel size and the thickness of each horizontal cross section varied between different-sized trees. A pixel had a value of 1 if it contained lidar echoes and a value of 0 if it contained no echoes (i.e., a binary image). The gray-level co-occurrence matrix was computed as the mean of four directions (0°, 45°, 90°, and 135°) with a distance of 1 between neighboring pixels. Then, the gray-level co-occurrence matrix was normalized to sum to 1.
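For a binary image, this construction reduces to a 2 × 2 co-occurrence matrix. The following sketch computes it with ASM and contrast as example Haralick features; the symmetric-counting convention is our assumption, and a production implementation would use an optimized routine such as `skimage.feature.graycomatrix`.

```python
import numpy as np

def glcm_features(binary):
    """Gray-level co-occurrence matrix of a binary (0/1) image, averaged
    over the four directions 0, 45, 90, and 135 degrees at distance 1 and
    normalized to sum to 1. Returns the GLCM plus ASM and contrast, two of
    the Haralick features retained in this study.
    """
    img = np.asarray(binary, int)
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]  # 0, 45, 90, 135 degrees
    glcm = np.zeros((2, 2))
    rows, cols = img.shape
    for dr, dc in offsets:
        g = np.zeros((2, 2))
        for r in range(rows):
            for c in range(cols):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    g[img[r, c], img[r2, c2]] += 1
                    g[img[r2, c2], img[r, c]] += 1  # symmetric GLCM
        glcm += g / g.sum()
    glcm /= 4.0                                     # mean of four directions
    i, j = np.indices(glcm.shape)
    asm = (glcm ** 2).sum()                         # angular second moment
    contrast = (glcm * (i - j) ** 2).sum()
    return glcm, asm, contrast
```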
Texture features were then screened in two steps. First, tree species classification was performed by linear discriminant analysis (LDA) on one texture feature at a time, and features with very little explanatory potential (i.e., very poor classification accuracy) or features that could take invalid values were excluded. Second, texture features with a strong mutual correlation (< -0.99 or > 0.99) were excluded, prioritizing the removal of the feature with the lower classification accuracy as indicated by the LDA classifier. The remaining texture features were angular second moment (ASM), contrast (CON), correlation (COR), and sum average (SAVE). Thus, the final number of textural features was 160 (four textural features in 40 cross sections).
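The correlation-based pruning might be sketched as below, with an arbitrary per-feature accuracy score standing in for the single-feature LDA accuracy:

```python
import numpy as np

def prune_correlated(features, accuracies, threshold=0.99):
    """Drop one of each strongly correlated feature pair (|r| > threshold),
    keeping the feature with the higher single-feature classification
    accuracy (LDA accuracy in the paper; any score works here).

    features : (n_samples, n_features); accuracies : (n_features,).
    Returns the indices of the retained features.
    """
    corr = np.corrcoef(features, rowvar=False)
    n = corr.shape[0]
    keep = np.ones(n, bool)
    # Visit features from most to least accurate so that the better
    # feature of each correlated pair is always the one that survives.
    for i in np.argsort(accuracies)[::-1]:
        if not keep[i]:
            continue
        for j in range(n):
            if j != i and keep[j] and abs(corr[i, j]) > threshold:
                keep[j] = False
    return np.flatnonzero(keep)
```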

C. Classification Algorithms
Classification accuracy and feature importance were evaluated using three ML algorithms: random forest (RF), multinomial logistic regression (MLR) with the least absolute shrinkage and selection operator (LASSO), and multilayer perceptron (MLP). These algorithms were chosen because they are well known and widely used, and because they each enable the computation of feature importance in a different fashion. For each classification algorithm, the two categories of predictor variables, i.e., intensities and textures, were evaluated separately and then in combination.
1) RF: RF is an ensemble learning method used in a range of tasks [29]. The method utilizes a multitude of decision trees that all vote on the outcome for a given set of inputs. In the context of classification, each individual decision tree produces a class prediction, and the class with the greatest number of votes among all the decision trees becomes the model prediction. In this study, we used the R implementation of RF described by Liaw and Wiener [30]. A permutation-based method was used to determine the importance of individual features to the output of the RF model. The features were permuted (i.e., the values of an input feature were shuffled) one feature at a time, and the resulting prediction error was compared with the prediction error obtained when that feature was not shuffled. The difference in prediction error denotes the importance of the feature.
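A generic, model-agnostic sketch of the permutation importance idea described above (not the internals of the randomForest package):

```python
import numpy as np

def permutation_importance(predict, X, y, rng=None):
    """Permutation feature importance: shuffle one feature column at a
    time and measure how much the classification error increases.

    predict : callable mapping an (n, p) feature array to (n,) labels.
    Returns one importance value (error increase) per feature.
    """
    rng = np.random.default_rng(rng)
    base_error = np.mean(predict(X) != y)
    importance = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
        importance[j] = np.mean(predict(Xp) != y) - base_error
    return importance
```

Features the model never relies on yield an importance near zero, because shuffling them leaves the predictions unchanged.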

2) MLR With LASSO:
The LASSO is a regression and classification method that performs both variable selection and regularization [31]. It uses the L1 penalty for fitting and penalizing the model coefficients. We fit an MLR model with LASSO, as implemented in the R package [32]. The input features were standardized before fitting the model; thus, the importance of each feature could be defined simply by the magnitude of its coefficient: the greater the coefficient, the more important the feature.
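The standardize-then-compare-coefficients idea can be illustrated with plain least squares standing in for the penalized multinomial fit of [32]; this sketch only shows why standardization makes coefficient magnitudes comparable across features.

```python
import numpy as np

def coefficient_importance(X, y):
    """Coefficient-magnitude feature importance: standardize the inputs
    so coefficient sizes are comparable, fit a linear model (ordinary
    least squares here in place of the penalized MLR), and rank features
    by the absolute value of their coefficients, scaled to [0, 1].
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    A = np.column_stack([np.ones(len(Xs)), Xs])  # intercept + features
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    importance = np.abs(coef[1:])                # drop the intercept
    return importance / importance.max()
```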
3) MLP: MLP refers to a feedforward neural network in which the layers are fully connected. The architecture consists of input and output layers with hidden layers between them. Each hidden layer consists of neurons whose operation is controlled by activation functions. For a more detailed discussion of MLP, see [33].
The hyperparameters of the MLP model were tuned using the BayesianOptimization tuner in KerasTuner [34]. In our setup, the number of layers ranged from 1 to 3, and the number of neurons per layer from 3 to 300. In addition, each layer had an L2 regularization (weight decay) of 0.01, 0.001, or 0.0001, and the learning rate of the chosen Adam optimizer [35] was 0.01, 0.001, 0.0001, or 0.00001. The rectified linear unit was used as the activation function in all but the last layer, where softmax activation was used. Note that the hyperparameters were optimized separately at each fold of the k-fold validation. Feature importance of the MLP model was calculated using Shapley additive explanations (SHAP) values [36].

D. Validation
The results were validated using a leave-one-plot-out procedure. This validation scheme was chosen because the sample size was relatively small and to avoid the effect of spatial autocorrelation among trees. Training, prediction, and model evaluation were repeated 39 times, each time leaving the trees of a single plot out of the training data for validation. Thus, the trees of a spatially excluded area (i.e., a 30 × 30 m field plot) were always used for validation, ensuring that training and validation trees never occurred in the same plot. For the MLP model, the trees from the 38 training plots at each fold were divided into training and test datasets (70% for training and 30% for testing), which were used to regulate the early stopping criterion of the MLP model. Classification accuracies were evaluated using confusion matrices and Cohen's kappa [37], a statistical measure that accounts for chance agreement between raters. Additionally, class-specific accuracies were reported as F1-scores [38], which combine precision and recall into a single metric. Feature importance measures were scaled between 0 and 1 for each classification method separately. This means that feature importance is comparable between feature categories for a given classification method, although it cannot be directly compared between classification methods.
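The leave-one-plot-out scheme and Cohen's kappa can be sketched as below; `fit_predict` is a stand-in for any of the three classifiers.

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    labels = np.unique(np.concatenate([y_true, y_pred]))
    n = len(y_true)
    cm = np.zeros((len(labels), len(labels)))
    for t, p in zip(y_true, y_pred):
        cm[np.searchsorted(labels, t), np.searchsorted(labels, p)] += 1
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def leave_one_plot_out(X, y, plots, fit_predict):
    """Leave-one-plot-out cross-validation: all trees of one field plot
    are held out per fold, so training and validation trees never share
    a plot. fit_predict(Xtr, ytr, Xte) returns predicted labels.
    """
    y_pred = np.empty_like(y)
    for plot in np.unique(plots):
        test = plots == plot
        y_pred[test] = fit_predict(X[~test], y[~test], X[test])
    return cohens_kappa(y, y_pred), y_pred
```

Pooling the held-out predictions of all folds before computing kappa, as done here, yields a single accuracy figure over all trees.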

IV. RESULTS

A. Feature Importance
The scaled feature importance of the different classification methods is presented in Fig. 3 for intensities and in Fig. 4 for textural features. Similar features were ranked as important across the different classification methods, except in the case of intensities, where RF appears to value voxels in a lower portion of the crown than the other classifiers. This is an interesting observation because feature importance was computed with fundamentally different techniques. We see that MLR often sets unimportant features to 0, a result of how the coefficients are estimated in LASSO.
All classification methods valued the CON textural feature at the top of the canopy and at one-third of the vertical distance from the treetop to the middle of the tree. All classification methods also agreed that the intensity information attached to voxels is most important closest to the tree trunk and that the voxels furthest from the trunk are less important. In RF, the most important intensity features were computed from different heights of the tree than in the other classification methods: MLR and MLP ranked the uppermost intensity features as most important, whereas in RF the most important intensity features were computed below the uppermost part of the tree (Fig. 3). Indirect information about tree shape was most apparent in Fig. 3, where the intensity feature importance resembled the cone-like shape of a tree crown that expands toward the base of the tree.

B. Classification Accuracy
Kappa values of all classification methods, using different input features, are presented in Table II. The corresponding confusion matrices can be found in Tables III-V. All classification methods performed similarly with the same input features. However, the input features noticeably influenced the classification outcome. Kappa values were lowest (0.60-0.64) when only textural features were used, whereas the use of intensity features resulted in clearly greater kappa values (0.73-0.77). All classification methods yielded the greatest kappa value (0.81) when both intensity and textural features were used.

V. DISCUSSION AND CONCLUSION
We introduced two new approaches to compute tree-level features from UAS lidar data. One feature set uses lidar intensities, and the second quantifies texture at different heights. These feature sets were used as inputs for three different ML approaches, both individually and in combination. All three ML approaches delivered very similar results given identical input features. Thus, the discussion focuses on the features and their importance rather than on the minute differences between the performance of the prediction methods.
The use of textural features alone resulted in the lowest classification accuracy (kappa 0.60-0.64), whereas intensity features alone yielded notably higher accuracy (kappa 0.73-0.77). Intensity features contain information pertaining to crown characteristics that are also captured by texture. Intensity values of 0 indicate the absence of lidar echoes, while the mean intensity of a voxel can reflect foliage density, trunk, leaf orientation, and clumping. In this study, only the first echo of each pulse was used, which ensures the validity of this interpretation of lidar intensity, because for all other echoes of a pulse the energy loss from preceding echoes would be unknown. The differences in geometry and foliage among the investigated tree species may account for why intensity features were more effective than textural features for classification.
This study found that combining intensity and textural features resulted in the highest classification accuracies, regardless of the prediction method used. The addition of texture information improved the identification of broadleaved trees the most, increasing the F1-score from 0.73 to 0.79 and the kappa value from 0.75 to 0.81. These accuracies are similar to those in other studies, which typically report OAs of around 90%, depending on factors such as the number of tree species, the biome, and the data acquisition parameters. For example, Chen et al. [22] found that point density at the individual tree level had a noticeable impact on tree species classification, with a kappa value of 0.77 for lower point density data and 0.85 for higher point density UAS lidar data when classifying pine and larch. Similarly, Lv et al. [23] achieved an OA of 86.6% when classifying four tree species with relatively low point density UAS lidar data (40 pts/m²) using convex hull-based feature descriptors with the PointNet++ network.
In most cases, the different ML approaches ranked similar features as important, although their methods of determining importance varied. Notably, the MLP approach assigned fewer features zero importance, consistent with findings reported by Tibshirani [31] and Hooker et al. [39]. Across all three approaches, the intensity features closest to the tree trunk were considered the most important. However, important features were more spread out in the lower voxels due to the cone-like shape of trees, with branches and leaves increasing further away from the top. The central voxels of the trees were also important, even in the lower cross sections, suggesting the importance of not only the crown's shape but also the way branches and leaves are organized within it. The importance of the crown's central voxels in all prediction methods indicates that differences in this area can distinguish the three tree species considered in this study. However, it should be noted that, unlike the other classifiers, RF gave less value to intensity features computed from the uppermost part of the tree. The reason for this behavior of the RF feature importance remains unclear. Spruce trees typically have conical tops, while pine and birch may have rounder tops and possibly multiple distinguishable treetops.
Texture features were also ranked similarly by all approaches, with the CON feature appearing to be the most important at the top of the canopy and at about 75% of the height of the tree. The other three texture features were considered important only at the very top of the canopy. CON is a measure of local variation in the image and can be linked to the number of distinct features in a horizontal cross section. The main contribution of the textural features, as a complement to the intensity features, was the increase in classification accuracy for broadleaved trees, which could indicate that the improvement is in some way related to distinct local variations in the point cloud due to leaves or multiple apparent treetops.
With regard to operational applications of the proposed classification scheme, an obvious disadvantage of the intensity features, and to a certain extent the textural features as well, is that they are not sensor-agnostic. Lidar intensity is strongly dependent on the hardware and the associated software that determines where the discrete echoes are located in the waveform data. It is also affected by the flying parameters and the scanning geometry that influence the size of the footprint of the laser pulse. This can be an issue because the collection of large amounts of tree-level georeferenced training data from a relatively small inventory area can be prohibitively expensive. The same problem of sensor-specific features and exorbitant field measurement expenses applies to most tree-level inventory methods. Ideally, the training data from one inventory area could be utilized in another inventory area and with different hardware, given that the tree species have similar characteristics in different geographical locations.
This study demonstrated that the combination of two fundamentally different types of features from lidar datasets can deliver very good classification accuracies for common boreal tree species. These features should also be tested with lower density airborne lidar data. However, it is reasonable to assume that the successful application of the features presented in this study requires point densities that a conventional airborne lidar is unable to achieve at a reasonably high altitude. Recently commercialized single-photon lidar sensors could be used as an alternative to conventional airborne lidar sensors to enhance point densities.

Fig. 1. Map of the study area. Black squares indicate the locations of the forest plots in the study area.

Fig. 2. Illustration of tree-level intensity and texture features. Voxels are distributed in a regular 8 × 8 × 10 grid, and each voxel contains the average intensity of the lidar echoes within it. Texture features are computed from 40 horizontal cross sections, each of which is a 40 × 40 binary image.

Fig. 3. Intensity feature importance with RF, MLR with LASSO, and MLP. The upper 4 × 4 grid is located at the top of the tree, and the lowest 4 × 4 grid is located at half the height of the tree. The center of the treetop (and, by extension, the tree trunk) is located at the lower right corner of each 4 × 4 grid. The color of the cell indicates the importance of that feature, scaling from low (light) to high (dark).

Fig. 4. Texture feature importance with RF, MLR with LASSO, and MLP. The upper row contains textural features from the top of the tree, and the lowest row contains textural features from half the height of the tree. The color of the cell indicates the importance of that feature, scaling from low (light) to high (dark). The columns correspond to the 1) ASM; 2) CON; 3) COR; and 4) SAVE Haralick textural features.
Manuscript received 5 August 2022; revised 19 March 2023, 14 July 2023, and 16 October 2023; accepted 27 November 2023. Date of publication 21 December 2023; date of current version 19 January 2024. This work was supported in part by the Academy of Finland through the Finnish Flagship Programme for the Forest-Human-Machine Interplay Building Resilience, Redefining Value Networks and Enabling Meaningful Experiences (UNITE) under Grant 337655; in part by the Centre of Excellence of Inverse Modelling and Imaging under Project 321761; in part by the Project "Unmanned Aerial Vehicles in Forest Remote Sensing" under Grant 323484; and in part by the

Timo Lähivaara is with the Department of Technical Physics, Faculty of Science, Forestry and Technology, University of Eastern Finland, 70211 Kuopio, Finland. Petteri Packalen is with the Bioeconomy and Environment Unit, The Natural Resources Institute Finland, 00790 Helsinki, Finland.

Digital Object Identifier 10.1109/TGRS.2023.3345745

TABLE I
AVERAGE GROWING STOCK VOLUME, STEM COUNT, HEIGHT OF THE BASAL AREA MEDIAN TREE (HGM), AND DIAMETER OF THE BASAL AREA MEDIAN TREE (DGM) IN THE FIELD PLOTS

The forests are dominated by Scots pine (Pinus sylvestris) and Norway spruce (Picea abies). Broadleaved tree species, mainly silver birch (Betula pendula) and downy birch (Betula pubescens), are found abundantly in more fertile habitats but are typically in the minority.

TABLE II
KAPPA VALUES OF DIFFERENT CLASSIFICATION METHODS AND INPUT FEATURES. RF DENOTES RANDOM FOREST, MLR DENOTES MULTINOMIAL LOGISTIC REGRESSION WITH LASSO, AND MLP DENOTES MULTILAYER PERCEPTRON

TABLE IV
TREE SPECIES CLASSIFICATION USING MLR WITH DIFFERENT INPUT FEATURES. CLASS NAMES ARE ABBREVIATED: P DENOTES PINE, S DENOTES SPRUCE, AND B DENOTES BROADLEAVED

TABLE V
TREE SPECIES CLASSIFICATION USING MLP WITH DIFFERENT INPUT FEATURES. CLASS NAMES ARE ABBREVIATED: P DENOTES PINE, S DENOTES SPRUCE, AND B DENOTES BROADLEAVED