Comparing Pixel-Based Random Forest and the Object-Based Support Vector Machine Approaches to Map the Quasi-Circular Vegetation Patches Using Individual Seasonal Fused GF-1 Imagery

The seasonal effect on land cover classification has been widely recognized. It is important to use imagery acquired at key points of vegetation phenological development to obtain higher classification accuracy for land cover. This study compared the effect of seasons on landscape classification and the detection of quasi-circular vegetation patches (QVPs) from four fused Gaofen 1 (GF-1) images acquired in different seasons, using the pixel-based random forest (RF) and object-based support vector machine (SVM) methods over the Yellow River Delta, China. The results demonstrated that the seasonal effect on classifying landscapes and detecting the QVPs is significant, especially for the pixel-based RF method. The object-based SVM method was more appropriate for classifying landscape from the non-growing season images, while the pixel-based RF approach was more suitable for classifying the growing-season images. The spring data (April imagery; overall accuracy = 99.8%) and the winter data (February imagery; F measure = 0.659) yielded the best results for landscape classification and QVP detection, respectively, using the object-based SVM approach. Therefore, in practice, we recommend the use of February to April imagery with the object-based SVM approach to map the QVPs in the future.


I. INTRODUCTION
Vegetation patchiness can contribute to an increase in plant biomass and biodiversity, provide habitat for various animals, impede and alter surface runoff, improve local soil physicochemical properties, and serve as a key indicator for assessing vegetation pattern formation, ecosystem function, evolution, resilience, and degradation in arid and semi-arid areas around the world [1]-[5]. Analogous to arid and semi-arid areas around the world, the vegetation pattern in the Yellow River Delta (YRD), China is marked by bare soils interspersed with quasi-circular vegetation patches (QVPs) [6], which were first discovered from high spatial resolution imagery in 2011. The QVPs have a quick succession rate and can develop to their maximum size within one to three years, rendering them optimal for examining the mechanisms of spontaneous plant colonization, evolution, resilience, and degradation [5]-[8]. It has been suggested that patch-size distributions can act as good indicators of discontinuous transitions and imminent desertification in arid and semi-arid regions, which are issues that increasingly draw attention from the scientific community [9]-[12]. Identifying the QVPs is a prerequisite for studying the mechanisms of vegetation pattern formation in the YRD, China. Often, remote sensing data are used to map vegetation patches, estimate patch size, and analyze vegetation patterns [9], [12], [13]. Due to the relatively small size of vegetation patches, high spatial resolution satellite data are recognized as a significant data source for mapping vegetation patches.
Previous studies have demonstrated that 2 m resolution remotely sensed data are enough to map vegetation patches at the decametric scale [14], [15].
In some cases, single-date high resolution imagery with traditional classifiers such as k-means and maximum likelihood estimation can yield a high classification accuracy because of the high contrast between the vegetation patches and other land cover types [6], [12], [16]. However, variation across seasonal imagery has demonstrated potential for mapping vegetation by capturing differences in phenological features [7], [17]-[23]. In general, the use of imagery acquired at critical dates in the phenological evolution of vegetation patches, or in the transition period from green canopy to senescence or vice versa, provides a better classification result than the image obtained at the peak productivity period alone [19], [20]. However, it is expensive to acquire and process high resolution satellite remote sensing data over large regions, making it challenging to obtain a complete set of key phenological images of vegetation [6]. It is also a challenge to comprehensively evaluate the impact of seasons on the classification accuracy of vegetation patches, which requires considering the trade-off between image availability and the effectiveness of assessing seasonal effects on classification accuracy. Therefore, suitable seasonal imagery needs to be selected to understand the seasonal effect on the classification of vegetation patches. Encouragingly, with the development and popularization of unmanned aerial vehicles (UAVs), the difficulty of obtaining imagery acquired at critical periods will be effectively alleviated [24]. Admittedly, a combined multi-season image dataset can improve the overall classification accuracy, as has been found in other studies that focused on classifying tree species, bush encroachment, grassland, land cover, and the QVPs [7], [15], [21], [25], [26], but this investigation lies beyond the scope of this study.
The detection of vegetation patches is often performed using two types of methods: the pixel-based approach and the object-based approach. The pixel-based K-Means classifier was comparatively successful in detecting the QVPs using high spatial resolution panchromatic images from the China-Brazil Earth Resource Satellite 4 (CBERS-04), Gaofen 1 (GF-1), and Gaofen 2 (GF-2) satellites [6]. Trees, shrubs, and dwarf shrubs were easily separated from aerial photographs using the pixel-based maximum likelihood classifier in northern Israel [16], and the same classifier was also used to successfully detect broad-leafed weed patches larger than about 25% of the Quickbird pixel area [27]. Odindi and Kakembo (2009) identified Pteronia incana patches from multi-temporal perpendicular vegetation index images derived from infra-red high-resolution data using the pixel-based maximum likelihood classifier, with an overall kappa index of 0.85 [28]. The pixel-based decision tree algorithm was used to map juniper forest encroachment using a combination of Phased Array type L-band Synthetic Aperture Radar and Landsat data, with an overall accuracy of 96% [29]. The two-dimensional Mexican hat and Haar wavelet basis functions were considered to have considerable potential for monitoring conifer tree height and crown diameter, and bush encroachment in a savanna, via remote sensing imagery [15], [30], [31]. Recently, the pixel-based random forest (RF) has been used to classify land cover with an overall accuracy of 98% and to detect the QVPs in the YRD with a precision rate, recall rate, and F measure of 66.3%, 43.9%, and 0.528, respectively, using CBERS-04 satellite imagery [7]. A pixel-based approach depends on the spectral reflectance information from each individual pixel in an image, which easily leads to the so-called ''salt-and-pepper'' effect [32], [33]. To overcome this problem, the object-based approach has been developed.
The object-based approach first segments a pixel-based image into image objects based on relatively homogeneous spectral and spatial/textural features, and the image objects are then classified based on certain rules or training samples using different classifiers [21], [32]. McGlynn and Okin (2006) applied the object-based nearest-neighbor approach embedded in the eCognition software to classify high spatial resolution remote sensing imagery into shrub cover, grasses and mixed vegetation, and bare soil [34]. Levin et al. (2009) applied an object recognition technique combining spectral and segmentation-based methods to map forest patches and isolated trees from SPOT images with an overall accuracy of 80-90% [35]. Boggs (2010) used the object-based nearest-neighbor and membership classification algorithms to classify SPOT imagery into bare ground, shadow, moderate vegetation, and vigorous vegetation in two savanna systems in southern Africa with an overall accuracy of 72-81% [36]. Browning et al. (2011) used object-based methods to identify shrub cover and patch density in Chihuahuan Desert rangeland with an overall accuracy of 89.9%-98.0% [37]. Fernandes et al. (2014) used the object-based bagging classification and regression tree approach to successfully detect giant reed in riparian habitats from WorldView-2 imagery with a kappa coefficient of 0.77 [38]. Recently, the object-based support vector machine (SVM) has been used to identify the QVPs in the YRD with an overall accuracy of 67-77% using panchromatic CBERS-04, GF-1, and GF-2 satellite imagery [6], and to map bamboo patches in the lower Gangetic plains using very high-resolution WorldView-2 imagery with an overall accuracy of 91% [39]. Some studies have reported that the object-based approach shows greater potential for classification than the pixel-based approach in agriculture, tree species, and vegetation structure [32], [39]-[42].
However, contradictory results have also been reported in mapping the QVPs, wild oat (Avena sterilis) weed patches, Prosopis spp., and Harrisia pomanensis patches [6], [43]-[45]. Classification accuracy is often related to the applied algorithm [42]. Nevertheless, it remains unclear which classifiers perform best with the object-based and pixel-based approaches, especially when applied to high spatial resolution satellite data for mapping vegetation patches [42]. Thus, it is necessary to compare more pixel-based and object-based classifiers to gain a broader understanding of the possible impacts of the pixel-based and object-based approaches on vegetation patch mapping from high spatial resolution imagery.
At this stage, the RF and SVM classifiers are two of the most widely used classifiers [46], [47] and have been successfully used for the classification of tree species, crop types, and land covers [6], [7], [21], [39], [48]-[50]. However, previous studies have not reached a consensus on which of the SVM and RF approaches has better classification accuracy [21], [39], [48]-[50]; this should be evaluated by classifying more types of land cover with more high spatial resolution images from different sensors in more regions.
The object-based SVM and pixel-based RF approaches have been previously used to map the QVPs and to compare the capability of images obtained at different spatial resolutions from different sensors [6], [7], [23]. The object-based SVM approach achieved higher detection accuracy than the pixel-based K-Means classifier for mapping the QVPs from the summer GF-1 image [6]. The potential of multi-seasonal China-Brazil Earth Resource Satellite 4 images for mapping the QVPs was also evaluated using the pixel-based RF approach [7]. Nevertheless, to our knowledge, the abilities of the object-based SVM and pixel-based RF classifiers to map the QVPs from different seasonal GF-1 images have not been compared and evaluated before. It is not clear which of the object-based SVM and pixel-based RF approaches is more susceptible to seasonal influences, which is very important for classifying high resolution images acquired in different seasons. To fill this gap, we assessed the potential of individual seasonal GF-1 images for mapping the QVPs in order to understand the seasonal effect on the mapping accuracy of the QVPs using the object-based SVM and pixel-based RF approaches, with three objectives: (1) evaluating the seasonal effects of individual season GF-1 images on the accuracy of mapping the landscape and the QVPs; (2) assessing which of the pixel-based RF and object-based SVM approaches is better for classifying the landscape and the QVPs using individual season GF-1 images; and (3) examining which of the pixel-based RF and object-based SVM approaches is more subject to seasonal influences when mapping the landscape and the QVPs using individual season GF-1 images. The results of this assessment should help to gain a broader understanding of the possible effects of individual seasons on the pixel-based and object-based classifiers, and can then guide the selection of image classification approaches when different seasonal imagery is used to map the QVPs.

II. MATERIALS AND METHODS

A. STUDY SITE
The study site (119°0′36″E-119°1′47″E and 37°56′8″N-37°56′44″N) covers an area of 1.83 km², and is located in the Yellow River Delta, China (Figure 1). The study area is comprised of low-lying, flat, unused land. In terms of seasons, spring, summer, autumn, and winter are March to May, June to August, September to November, and December to February, respectively. It has an aridity index (average annual rainfall of 580 mm/average annual evaporation of 1962 mm) of 0.30, qualifying it as a semi-arid area [7]. The study site is adjacent to the Bohai Sea to the east and north, and salt accumulation in the topsoil mainly occurs in spring and autumn every year. The vegetation pattern is characterized by a nearly regular distribution of the QVPs, which mainly consist of Phragmites australis (begins to grow in early April and withers in mid-November), scattered Tamarix chinensis (begins to grow in late March and withers in early December), Imperata cylindrica and Apocynum venetum, and a ring of Suaeda salsa (begins to grow in late April and withers in mid-November) [5], [7].

B. GROUND DATA
The landscape of the study site can be simply classified into three classes: the vegetated area (containing the QVPs and non-QVPs), water, and the others (mainly bare soil and road). The vegetated area, water, and the others account for about 28%, 10%, and 62% of the total area of the study site, respectively. The actual number of QVPs at the study site was 139, obtained through visual interpretation of the 1 m resolution IKONOS images, GF-2 satellite images, and high-resolution images from Google Earth. According to previous studies [5]-[8], the area of the QVPs generally ranged from 50 to 1206 m². Due to the adhesion of multiple QVPs, the area of some QVPs can reach 3000 m².
A total of 43 datasets (including plant species, diameter of the QVPs, soil physiochemical properties, and so on) of the QVPs from five field surveys in April 2019, May 2018 [5], June 2017, August 2017 [8], and October 2013 were included in the reference dataset for landscape classification, the detection of the QVPs from the GF-1 imagery, and accuracy assessments. Because the areas of the QVPs have changed little since 2005 [5], the field survey samples, although not obtained synchronously with the acquisition of the satellite images, can be used as reference data. In addition, due to the uneven distribution of field survey samples, the reference dataset also included samples visually interpreted from existing high spatial resolution IKONOS (1 m pixel size) images, GF-2 (0.8 m pixel size) satellite images, and high-resolution images from Google Earth.

C. IMAGE PREPROCESSING

The individual GF-1 images were first converted to spectral radiance with the absolute radiometric calibration coefficients of GF-1 satellite data published by the CRESDA (for the 2014 images, see http://218.247.138.119/CN/Downloads/dbcs/3776.shtml, and for the 2015 image, see http://218.247.138.119/CN/Downloads/dbcs/6709.shtml). Then, the quick atmospheric correction approach was used to convert the spectral radiance values to spectral reflectance values [51]. Next, 2 m resolution pan-sharpened GF-1 multispectral images were created by sharpening the multispectral imagery with the panchromatic imagery using the Gram-Schmidt spectral sharpening technique, which can improve the spatial resolution of the images while maintaining the spectral information [52]. Finally, geometric correction was performed using approximately twenty ground control points for the individual pan-sharpened GF-1 images, resulting in an average root mean square error of less than 0.6 pixels.
Due to the flat terrain, topographic correction was not required in this study. The four pan-sharpened GF-1 images were then subset to focus on the study site (see Figure 2), and were used to classify landscapes and map the QVPs with the pixel-based RF method and the object-based SVM method. All the abovementioned preprocessing steps were performed in the ENVI v5.1 software (Harris Inc., Boulder, CO, USA).
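The first calibration step above amounts to a per-band linear transform of digital numbers (DN) to spectral radiance. The following is a minimal sketch of that idea only, not the ENVI workflow used in this study; the gain and offset values are placeholders, not the actual CRESDA coefficients.

```python
# Hedged sketch: DN-to-radiance conversion, L = gain * DN + offset.
# The gain/offset values below are placeholders, not real GF-1 coefficients.
import numpy as np

def dn_to_radiance(dn: np.ndarray, gain: float, offset: float) -> np.ndarray:
    """Linear radiometric calibration of one band (radiance in W/(m^2 sr um))."""
    return gain * dn.astype(np.float64) + offset

band = np.array([[120, 340], [560, 780]], dtype=np.uint16)  # toy DN values
radiance = dn_to_radiance(band, gain=0.25, offset=0.0)      # placeholder coefficients
print(radiance)
```

In a real workflow each of the four multispectral bands (and the panchromatic band) has its own published gain and offset, applied before atmospheric correction.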

D. MAPPING APPROACHES AND ACCURACY ASSESSMENTS FOR THE QVPs
SVM is one of the most efficient supervised non-parametric machine learning techniques [6]. The object-based SVM classification was performed using the four spectral bands of each pan-sharpened image in the ENVI v5.1 software, and primarily included two steps: (1) determination of the appropriate scale level and merge level for image segmentation using the edge algorithm through repeated experiments, and (2) classification of the segmented image objects using SVM with the training samples. After many experiments and assessments, a scale level of 30 and a merge level of 90 were determined to segment the QVPs in the February and April images effectively, while a scale level of 30 and a merge level of 80 were effective for the June and September images. Then, the training samples were used in the classification of the four segmented images using SVM with the radial basis function kernel (degree of kernel polynomial = 2; bias in kernel function = 1.0; gamma in kernel function = 0.03; penalty parameter = 100.0).
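Step (2) above can be sketched with an off-the-shelf RBF-kernel SVM using the reported parameter values (gamma = 0.03, penalty parameter C = 100). This is a toy illustration with made-up object spectra, not the ENVI workflow applied to the actual segmented objects.

```python
# Hedged sketch: RBF-kernel SVM with the reported gamma and penalty values,
# trained on toy per-object mean reflectances in the four GF-1 bands.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy training objects (blue, green, red, NIR mean reflectance) per class.
vegetated = rng.normal([0.04, 0.08, 0.05, 0.40], 0.02, (50, 4))
water     = rng.normal([0.06, 0.05, 0.03, 0.02], 0.01, (50, 4))
others    = rng.normal([0.15, 0.18, 0.20, 0.25], 0.02, (50, 4))
X = np.vstack([vegetated, water, others])
y = np.array([0] * 50 + [1] * 50 + [2] * 50)  # 0=vegetated, 1=water, 2=others

svm = SVC(kernel="rbf", gamma=0.03, C=100.0)  # parameters reported in the text
svm.fit(X, y)

# Classify a new object whose spectrum resembles dense vegetation.
pred = svm.predict([[0.05, 0.09, 0.04, 0.42]])
print(pred)
```

The kernel degree and bias parameters listed in the text are ENVI-specific options with no effect on a pure RBF kernel, so they are omitted here.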
RF is an automatic learning algorithm consisting of a set of multiple decision trees built using bootstrap aggregation [7]. The pixel-based RF approach was implemented using ImageRF, which is included in the free EnMAP-Box v2.1.1 and can be embedded in the classic menu of the ENVI v5.1 software to enable the direct use of ENVI format files [53]. The EnMAP-Box was initially developed for processing and analyzing data acquired by the German spaceborne imaging spectrometer EnMAP (Environmental Mapping and Analysis Program), and offers a range of tools and applications for image processing, including powerful machine learning methods [53]. All four multispectral bands of the 2 m resolution pan-sharpened imagery were used as the predictor variables for the pixel-based RF classification. Previous studies have demonstrated that setting the number of trees (ntree) to 500 and leaving the number of split variables (mtry) at its default value, the square root of the total number of predictor variables, is generally a good choice for mapping tree species and the QVPs [7], [49], [54]-[55]; these settings were therefore adopted in this study.
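The RF configuration above (ntree = 500, mtry = square root of the number of predictors) can be reproduced in scikit-learn for illustration; ImageRF/EnMAP-Box was the actual tool, and the per-pixel reflectances below are toy values.

```python
# Hedged sketch: pixel-based RF with 500 trees and mtry = sqrt(#predictors)
# (scikit-learn's max_features="sqrt"), on toy 4-band pixel reflectances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([0.04, 0.08, 0.05, 0.40], 0.02, (100, 4)),  # vegetated pixels
    rng.normal([0.06, 0.05, 0.03, 0.02], 0.01, (100, 4)),  # water pixels
    rng.normal([0.15, 0.18, 0.20, 0.25], 0.02, (100, 4)),  # others (soil/road)
])
y = np.repeat([0, 1, 2], 100)  # 0=vegetated, 1=water, 2=others

rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
rf.fit(X, y)
print(rf.predict([[0.05, 0.09, 0.04, 0.42]]))  # a vegetation-like pixel
```

With four predictor bands, mtry equals sqrt(4) = 2 candidate variables per split.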
After landscape classification, the QVPs were detected from the vegetated area class using thresholds based on the area (< 3000 m²) and the perimeter-to-area ratio (< 0.54), as determined in previous studies [6], [7], [56].
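The screening rule above reduces to two simple geometric tests per vegetated patch; a minimal sketch, with illustrative patch geometries:

```python
# Minimal sketch of the QVP screening rule: keep vegetated patches with
# area < 3000 m^2 and perimeter-to-area ratio < 0.54.
def is_qvp(area_m2: float, perimeter_m: float) -> bool:
    """Apply the area and perimeter-to-area thresholds from the text."""
    return area_m2 < 3000.0 and (perimeter_m / area_m2) < 0.54

patches = [
    {"area": 450.0, "perimeter": 80.0},    # compact, mid-sized -> kept as QVP
    {"area": 3500.0, "perimeter": 250.0},  # too large -> rejected
    {"area": 120.0, "perimeter": 90.0},    # too irregular (P/A = 0.75) -> rejected
]
qvps = [p for p in patches if is_qvp(p["area"], p["perimeter"])]
print(len(qvps))  # 1
```

The perimeter-to-area criterion favors compact, quasi-circular shapes, since elongated or ragged patches of the same area have a larger perimeter.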
Accuracy assessment of the landscape classification results was performed using the user's accuracy (UA), producer's accuracy (PA), overall accuracy (OA), and Kappa coefficient (Kappa), calculated from the confusion matrix resulting from the above-mentioned validation samples. The precision rate (A_d), recall rate (A_r), and F measure (F) were then applied to assess the mapping accuracy of the QVPs:

A_d = N_c / N_d
A_r = N_c / N_r
F = 2 × A_d × A_r / (A_d + A_r)

where N_c, N_d, and N_r are the numbers of correctly detected, detected, and existing QVPs, respectively. Here, N_r was 139.
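These accuracy measures can be computed directly from a confusion matrix and the QVP counts. The confusion matrix and detection counts below are illustrative, except N_r = 139 (the reference number of QVPs at the site).

```python
# Sketch of the accuracy measures: OA and Cohen's Kappa from a confusion
# matrix, plus A_d = N_c/N_d, A_r = N_c/N_r, F = 2*A_d*A_r/(A_d + A_r).
import numpy as np

def oa_kappa(cm: np.ndarray):
    """Overall accuracy and Kappa from a square confusion matrix."""
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

def qvp_metrics(n_correct: int, n_detected: int, n_reference: int):
    a_d = n_correct / n_detected    # precision rate
    a_r = n_correct / n_reference   # recall rate
    return a_d, a_r, 2 * a_d * a_r / (a_d + a_r)  # F measure

cm = np.array([[45, 2, 3],      # rows: reference vegetated / water / others
               [1, 18, 1],      # cols: classified vegetated / water / others
               [4, 0, 126]])    # illustrative counts
oa, kappa = oa_kappa(cm)
a_d, a_r, f = qvp_metrics(n_correct=90, n_detected=120, n_reference=139)
print(round(oa, 3), round(kappa, 3), round(f, 3))  # 0.945 0.891 0.695
```

As a consistency check, the values reported earlier (A_d = 66.3%, A_r = 43.9%) reproduce the stated F of 0.528 under these formulas.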

III. RESULTS

A. OBJECT-BASED SUPPORT VECTOR MACHINE CLASSIFICATION
1) LANDSCAPE CLASSIFICATION
The individual 2 m resolution pan-sharpened GF-1 multispectral images were classified into three landscape types (vegetated area, water, and others) using the object-based SVM approach. Figure 3 shows the landscape classification maps from the four seasonal GF-1 images. The classification maps from the summer (June imagery) and autumn (September imagery) images showed more cohesive QVPs than those from the winter (February imagery) and spring (April imagery) images. This exacerbates the effect of image segmentation and could potentially be one of the reasons for the lower classification accuracy of the June (Figure 3c) and September (Figure 3d) images. Table 1 summarizes the classification accuracies from the four seasonal GF-1 images. The results show that classification based on the April imagery was more accurate than that achieved with the other three seasonal images. The lowest accuracy was obtained from the September imagery.

2) DETECTION OF THE QVPs

Figure 4 shows the detected QVPs from the landscape classification maps of the four seasonal GF-1 images. Table 2 summarizes the detection accuracies for the QVPs from the four classification results mapped using the object-based SVM approach. The February imagery (Figure 4a) produced the best detection accuracy for the QVPs, followed by the April (Figure 4b) and then the June imagery (Figure 4c). The September imagery (Figure 4d) produced the worst detection accuracy because of overlapping between the QVPs or between the QVPs and the adjacent vegetation.

B. PIXEL-BASED RANDOM FOREST CLASSIFICATION

Figure 5 shows the classification maps from the four seasonal GF-1 images obtained with the pixel-based RF approach. The winter image (February imagery) showed the worst classification result, hardly mapping the QVPs (Figure 5a). Table 3 summarizes the classification accuracies from the four seasonal GF-1 images using the pixel-based RF approach. The results show that classification based on the April imagery was more accurate than that based on the other seasonal images.
The lowest accuracy was obtained from the February image. Figure 6 shows the detected QVPs from the landscape classification maps of the four seasonal GF-1 images. Table 4 summarizes the detection accuracies for the QVPs from the four classification results mapped using the pixel-based RF approach. The April imagery (Figure 6b) produced the best detection accuracy for the QVPs, followed by the September (Figure 6d) and June imagery (Figure 6c). The February imagery (Figure 6a) produced the worst detection accuracy because of the low spectral contrast between the QVPs and the areas marked as others, and the overlapping between the QVPs or between the QVPs and the adjacent vegetation.

IV. DISCUSSION
The findings of this study demonstrate that the seasonal effect plays a significant role in accurately classifying landscapes and detecting the QVPs from the pan-sharpened GF-1 images. This is consistent with previous studies indicating seasonal effects on classification accuracy [7], [21]. Our study goes further, indicating that, compared with the pixel-based RF approach, the seasonal effect was relatively less significant for landscape classification of the study site using the object-based SVM approach (Table 3).

A. COMPARISON OF THE PIXEL-BASED RF AND OBJECT-BASED SVM CLASSIFICATION
Overall, the object-based approach produced a better classification of the study site for landscape (OA = 93.1%-99.8%) and the QVPs (F = 0.268-0.659) than the pixel-based approach (OA = 74.1%-98.7%; F = 0.025-0.566). This is consistent with previous research results in mapping agriculture, tree species, vegetation structure, and QVPs [6], [32], [39]-[42], [45]. Kamal and Phinn (2011) evaluated three image classifiers (the pixel-based spectral angle mapper and linear spectral unmixing, and an object-based approach combining a rule base and nearest-neighbor classification) for mapping mangrove species, and found that the object-based approach produced the best classification map of mangrove species, with an OA more than 7% higher than that of the pixel-based classifiers [32]. Mafanya et al. (2017) assessed five image classifiers (unsupervised pixel-based classifiers (K-mediums and Euclidean Length), an unsupervised object-based classifier (Isoseg), a supervised pixel-based classifier (Maxver), and a supervised object-based classifier (Bhattacharya)) for identifying Harrisia pomanensis, and found that the object-based Bhattacharya classifier (OA = 86.1%) performed better for mapping Harrisia pomanensis than the pixel-based Maxver classifier (OA = 65.2%) and any other classifier used in their study [45]. The maximum accuracy difference between the object-based SVM and the pixel-based RF approach for landscape (difference in OA = 25.1%) and QVP (difference in F = 0.634) classification occurred with the February imagery.
Unsurprisingly, the lowest detection accuracy for the QVPs (F = 0.025) was produced by the pixel-based RF approach with the February imagery, because this seasonal imagery also resulted in the lowest accuracy for landscape classification (OA = 74.1%). This can be explained by the difference in the structure of the two algorithms. The object-based SVM approach used the spectral and spatial/textural features of the segmented homogeneous objects to classify land covers, which significantly reduced the effect of the minor spectral differences between the QVPs and the areas classified as others (such as bare soil and roads) in the non-growing season imagery (Figure 2a). A previous study also demonstrated that the increased object-based classification accuracy could partly be attributed to image segmentation before image classification [45]. For the two approaches, the minimum accuracy difference for landscape classification (about 0.3%) occurred with the September imagery. This may be explained by the moderate spectral and spatial/texture differences between vegetation and bare soil in autumn, which did not benefit the object-based method, which depended on an edge detection algorithm for image segmentation. This was also evidenced by the pixel-based RF approach (F = 0.460) obtaining a better result for detecting the QVPs than the object-based SVM (F = 0.268) in this season. The commission and omission errors arose primarily from the models' inability to distinguish the vegetated areas from the areas marked as others, which mainly include bare soil and roads.

B. ASSESSMENT OF THE SEASONAL INFLUENCE OF INDIVIDUAL SEASON GF-1 IMAGES ON CLASSIFICATION ACCURACY
Season influenced the accuracy of individual season GF-1 images for classifying landscape and detecting the QVPs. This is consistent with previous research results in mapping shrub cover, tree species, and QVPs [7], [21], [37]. Browning et al. (2011) showed that classification accuracies obtained from aerial photographs of different months using object-based methods for mapping shrub cover differed [37]. Madonsela et al. (2017) showed that April (transition from wet to dry season, senescence) WorldView-2 imagery produced higher classification accuracy than March imagery (peak productivity season) using the pixel-based RF approach for mapping savannah tree species [20]. Pu et al. (2018) demonstrated that imagery acquired in the season transitioning from dry spring to wet summer (the April and May imagery) produced a better result for mapping urban tree species using the pixel-based RF and SVM approaches than the dry-winter and wet-summer imagery (the February and August imagery) [21]. Luca et al.
(2019) showed that a spring UAV image achieved higher land cover accuracy than summer images for both the object-based SVM and object-based RF classifiers [57]. A previous study concluded that, for individual seasonal imagery, the March and May imagery (spring season) were more suitable for classifying land cover and identifying the QVPs using the pixel-based RF approach than the July, October, and December imagery (summer, autumn, and winter seasons) [7]. Combining multi-season images has been shown to improve overall classification accuracy [7], [15], [20], [21], [25], [26], but this is outside the scope of this study and will be investigated in the future. Irrespective of the classification method used, the April GF-1 imagery produced the best results for classifying landscape, which, to some extent, confirmed the view that the classifier itself is generally of low importance if the remote sensing images meet the requirements of the classifier and the research target [6], [58]. Combining the classification results for the landscapes and the QVPs, it can be inferred that the winter imagery (February) combined with the object-based approach was the most suitable for mapping the QVPs. This is consistent with the results of previous studies [7], [23], which concluded that images acquired from February to May could be used to map the QVPs in the study area.
The abovementioned studies either focused on comparing the potential of the pixel-based and object-based approaches for land cover classification performed on similar seasonal imagery from different regions [6], [32], [39]-[45], or evaluated the seasonal effects on land cover classification from different seasonal imagery using either the pixel-based or the object-based approach [7], [15], [20], [21], [25], [26], [57]. Comparisons of the seasonal influence on the accuracy of the pixel-based and object-based approaches for land cover classification are rare. The novel contribution of our study is the demonstration that the seasonal effect was relatively less significant for landscape classification of the study site using the object-based SVM approach than using the pixel-based RF approach. This finding indicated that the pixel-based RF approach was not suitable for classifying landscapes from the non-growing season imagery in the study area, whereas the object-based SVM approach with the edge algorithm could be used to classify the non-growing season imagery. The imagery acquired during the peak productivity period, such as the June and September images, produced lower classification accuracy using the object-based SVM approach than the imagery acquired in the non-growing or early growing seasons, which can be attributed to an increased spectral mixing effect of pixels around the QVPs due to vegetation cover or shadows from vegetation growth (Figure 3c and Figure 3d). Compared to the classification results obtained by the object-based SVM method, the pixel-based RF approach resulted in slightly higher classification accuracies for the June and September images (Table 1 and Table 3), suggesting that the pixel-based RF approach was more suitable for classifying the growing season images.
Of course, this finding should be verified for different research objects in different regions using more pixel-based and object-based approaches in the future.
Classification results obtained from the June imagery showed that the QVPs from the object-based SVM approach had many jagged edges, and, compared to those of other seasons, the shapes were quite different, indicating the influence of the growth of the QVPs on image segmentation. This finding also supports the previously mentioned result that the classification accuracies obtained using the pixel-based RF approach for the June and September images were better than those obtained from the object-based SVM approach.
However, it is obvious that the object-based SVM and pixel-based RF approaches resulted in relatively low detection accuracy for the QVPs, especially for the QVPs adjacent to vegetation growing along the canals and ditches, and for smaller or irregularly shaped QVPs. This may be attributed to the adhesion between the QVPs, or between the QVPs and the surrounding vegetation [59].
In the future, it may be possible to improve the detection accuracy for the QVPs through four approaches: (1) combined two-season image data can be applied to the classification of the QVPs, which has proved cost-effective for mapping tree species and the QVPs [7], [21]; (2) additional spectral and textural features can be applied to the classification of the QVPs; spectral features such as the tasseled cap brightness and greenness components, and textural features, have been useful in mapping tree species, farmland, invasive species, and the QVPs [7], [28], [49], [50], [60]; (3) algorithms for segmenting overlapping QVPs can be applied to detecting the QVPs; the effect of the watershed transformation on the detection of the QVPs has been proven [59], and more advanced image segmentation algorithms such as mask-based watershed transformation, advanced level set-based methods, bottleneck detection and ellipse fitting, and active contour approaches have been successful in segmenting overlapping cells and tree crowns [61]-[65]; and (4) sub-meter resolution imagery containing spectral bands useful for mapping vegetation can be applied; for example, the WorldView-2 imagery with red edge and shortwave infrared bands has demonstrated high accuracy for mapping tree species [42], [55], and UAV high-resolution imagery has been successfully used for extracting vegetation information [24], [45]. These are promising directions for improving the mapping of the QVPs in the study area.
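The watershed idea in approach (3) can be illustrated on a synthetic case of two adhering circular patches: compute the distance transform of the binary vegetation mask, take its local maxima as patch cores, and flood outward from the cores. This is a conceptual sketch with a hand-rolled marker-based flood, not the algorithm of [59].

```python
# Hedged sketch: separating two adhering quasi-circular patches with a
# marker-based watershed on the distance transform of a synthetic mask.
import heapq
import numpy as np
from scipy import ndimage

# Synthetic binary mask: two overlapping circles (radius 15 px, centres
# 25 px apart), which merge into one connected "adhered" region.
yy, xx = np.mgrid[0:60, 0:100]
mask = ((yy - 30) ** 2 + (xx - 30) ** 2 <= 15 ** 2) | \
       ((yy - 30) ** 2 + (xx - 55) ** 2 <= 15 ** 2)

# Distance transform: pixels far from the background mark the patch cores.
dist = ndimage.distance_transform_edt(mask)

# Markers: strong local maxima of the distance map, one per patch core.
peaks = (dist == ndimage.maximum_filter(dist, size=9)) & (dist > 12)
markers, n_markers = ndimage.label(peaks)

# Flood outward from the markers in order of decreasing distance, so each
# pixel joins its nearest core -- a minimal marker-based watershed.
labels = np.where(markers > 0, markers, 0)
heap = [(-dist[p], p) for p in zip(*np.nonzero(markers))]
heapq.heapify(heap)
while heap:
    _, (y, x) = heapq.heappop(heap)
    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
        if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                and mask[ny, nx] and labels[ny, nx] == 0:
            labels[ny, nx] = labels[y, x]
            heapq.heappush(heap, (-dist[ny, nx], (ny, nx)))

print(n_markers, (labels > 0).sum() == mask.sum())
```

On the real vegetation mask, the same idea splits an adhered blob into compact components that can then be tested against the area and perimeter-to-area thresholds individually.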
In this study, the QVPs were detected based only on landscape classification, which was less affected by the phenological features of the different plant species that make up the QVPs. This is because a relatively good result could be achieved as long as the overall spectral and spatial/texture characteristics of the QVPs differed from those of the surrounding bare soil. In fact, the structure and composition of the QVPs are very important for studying the formation and encroachment of the QVPs. Season should have a stronger influence on mapping plant species, which requires not only field survey data synchronized with the remote sensing acquisition time as training and validation samples, but also sub-meter or even centimeter-level very high-resolution remote sensing data. The influences of different pixel-based and object-based approaches and of seasons on the accuracy of mapping them will require more detailed studies in the future.

V. CONCLUSION
The novelty of this study was to demonstrate the effects of seasons, by using different seasonal GF-1 imagery, on land cover classification and QVP detection over the study area, using the pixel-based RF and the object-based SVM approaches. We found that the object-based SVM approach was more appropriate for classifying land cover in the non-growing season images, while the pixel-based RF approach was more suitable for the growing season images. Compared with the object-based SVM approach, the pixel-based RF approach was more sensitive to season. Overall, the object-based SVM showed greater potential for classifying land cover than the pixel-based RF approach. The images acquired from February to May could be used to detect the QVPs in the YRD, especially the February imagery. Given the seasonal effect on mapping of the QVPs, we recommend the February imagery with the object-based SVM approach for mapping the QVPs in the future. The GF-1 trio was launched on March 31, 2018, and the combined 3+1 GF-1 constellation can offer one-day revisit coverage, which will better meet the seasonal or monthly imaging requirements for the accurate detection of the QVPs.