Detecting White Cotton Bolls using High-Resolution Aerial Imagery Acquired through Unmanned Aerial System

In recent decades, demand for agricultural goods has grown rapidly with the growth of the human population. Agricultural production therefore demands new and simple techniques, and safe, efficient, and cost-effective methods are required for monitoring agricultural crops. This research aims to provide a simple image processing technique that detects and distinguishes objects (agricultural goods) in drone-based agrarian imagery. We used cotton crop images as experimental data because of the crop's distinct spectral characteristics with respect to drone camera sensors, its minimal weather constraints, and its flexible flight scheduling. The proposed method uses fuzzy reasoning-based tactics combined with the RGB and HSV color spaces, manipulating image pixel color values and setting upper/lower color limits to detect and distinguish agricultural objects such as white cotton bolls from the rest of the image.


I. INTRODUCTION
Agricultural production plays a vital role in the world economy. Accordingly, the need for safe and efficient methods and techniques for producing agricultural goods is increasing. Among the available techniques, computer science techniques, or more precisely image processing techniques incorporating artificial intelligence (AI), seem most promising due to their feature- and characteristics-based approaches. Image processing techniques are widely used in computer vision applications, from the classic image segmentation approach to the modern deep neural network approach.
It is noteworthy that computer vision applications have expanded in agronomy as equipment costs have fallen and the computational capacity to solve complex challenges has increased. The rapidly rising interest in nondestructive assessment approaches for agricultural crops is critical. In this context, precision agriculture (PA) is a new technological approach that uses computer science techniques and methods. It ensures that agricultural fields receive what they need for optimal productivity and incorporates other emerging technologies such as unmanned aerial vehicles (UAVs), enhancing field and farm output by enriching their inputs. UAV-based aerial images play a crucial role in remote sensing and precision applications in agriculture. Remote sensing with UAVs marks a significant shift in precision farming and crop-specific management. This technology provides a variety of data, such as biomass and spatial and temporal data, along with different angular observations [1], [2], [3]. Conventionally, such agricultural data are collected from satellites (e.g., MODIS, SPOT) or conventional airborne platforms. However, these methods are expensive and time-consuming, and the data are challenging to obtain.
In the evolution toward sustainable agricultural systems, emerging technologies like UAVs equipped with the latest sensing devices (e.g., cameras or micro ADCs) are already making significant contributions [1], [2], [3], [4], [5] to the farming world. UAVs are a relatively new and continuously improving technology [5] in the agricultural field, yet they already allow frequent mapping of crop fields and collection of crop data (e.g., images, NDVI). Because these characteristics make UAVs especially timely and efficient, they can bring significant change to precision agriculture.
UAVs help detect and inspect pests and diseases and support yield estimation and growth assessment of crops [6]. A properly adjusted, well-equipped UAV with appropriate models and algorithms is promising for PA [1], [3]. Image processing algorithms significantly impact PA and smart farming [3], [4]. Traditional image processing tactics can be utilized in agriculture for ascertaining size and shape, detection, quantification, and, most critically, color recognition. Crop/plant color detection and identification are vital in agriculture: almost all harvesting processes begin with crop color, followed by identification of shape and texture. Color detection also helps assess crop health conditions and identify agricultural diseases at an early stage. Image processing techniques such as masking and identifying colored pixel values are already in use for these processes.
However, the efficiency of such techniques is continuously improving with technological evolution. Previously, agricultural aerial imagery was collected only from satellites or airplanes, a process complicated by time, weather, location, and resource availability constraints. UAVs overcome these constraints and provide high-resolution agricultural imagery in near real-time. To extract meaningful information, image processing techniques must be effective in every respect (e.g., financially, computationally, and with limited resources).
This may be the era of complex machine/deep learning (ML/DL) algorithms and big data, where ML/DL algorithms analyze images. However, conventional image processing still has potential, and color spaces are a crucial part of the image analysis process. This research study detects objects (white cotton bolls) in ultra-fine spatial resolution agricultural aerial images using color identification. The key purposes of this research study are as follows:
• Provide an agricultural object detection method for digital agrarian images.
• Apply the method to UAV-based agricultural images.
• Design a methodology that exploits simple image processing-based techniques with minimum resources.
The cotton crop image dataset was collected and used in this research because of the crop's distinct spectral characteristics with respect to UAV sensors, weather, and flight timing. A Phantom 4 UAV collected the cotton crop images in the sRGB color space, but other color spaces, such as HSV, are also considered, since color space plays a crucial role in image processing-based applications. In this research study, color spaces and image processing are discussed first, and the following section discusses color histograms and feature extraction. This study paves the way for other color vision tasks in agriculture and forestry and for precision agriculture (PA). Later in this section, PA is discussed in detail, together with artificial intelligence and its approaches (e.g., machine/deep learning), because one of the fundamental aims of this research is to provide an agricultural object detection method for PA, more precisely for agricultural remote sensing.

A. COLOR SPACES & IMAGE PROCESSING
The objective of a color space is to facilitate the specification of colors with respect to given standards, and different color spaces/models serve different purposes (Table I). In image processing, RGB space is generally employed for hardware-oriented jobs (e.g., printers and color monitors), CMY/CMYK spaces are used for color printing, and HSV/HSI deals with colors as humans understand them. The original RGB-formatted cotton crop image is converted into HSV space for accurate detection, HSV being a color space with a more balanced color-difference perception. RGB is an old [7], [8] and commonly used color space. It is non-uniform, and each color in it is defined by a combination of the spectral components R (red), G (green), and B (blue). Since the three RGB components are perceptually non-uniform, the Euclidean distance between colors in RGB space does not correspond to the color differences perceived by the human eye [28]; moreover, because of their dependence on intensity, the three components are highly correlated [29]. The RGB color space suits the workings of a color picture tube, but it does not adapt to human visual characteristics.
The color difference formulas commonly used in RGB space are the distance and angle formulas, but neither is excellent due to the inherent non-uniformity of RGB space [9]. The HSV space, on the other hand, is expressed by the three elements of the Munsell color space (Munsell, 1912) shown in Figure 1: the hue H, the saturation S, and the lightness V. It derives from a nonlinear color system, and its color interpretation is consistent with human perception of color. Each element of the color space is isolated, making it suitable for image processing. In Figure 1, the value V is represented by the main axis orthogonal to the plane, the angle represents the hue, and the radius represents the level of saturation (purity of color) [9]. In this research work, the drone imagery was captured and stored in RGB and later converted into HSV space for further work. Since the experimental imagery was captured in the middle of a fully sunny day in September, its colors have a high illumination level, and separating illumination or differing brightness from color is not possible in RGB but is possible in the HSV color space.

B. TRADITIONAL IMAGE PROCESSING & MODERN TECHNIQUES
Modern techniques such as complex machine/deep learning (ML/DL) and big data now dominate image analysis, but conventional image processing still has potential, and color spaces remain a crucial part of it. DL has indeed transformed traditional image processing, pushed its limits within the artificial intelligence (AI) field [10], [11], [12], [13], and unlocked opportunities and prospects across the industry. Several challenges that once appeared intractable now have solutions to the point where machines have outperformed humans [13], [14]. However, that does not make obsolete the conventional image processing techniques developed and advanced in the years before the rise of DL. Rapid growth in DL, along with evolutionary developments in electronic device capabilities (memory storage and data retrieval, computation power, the replacement of power-hungry devices with small low-power ones, optics, and image sensor resolution), has boosted the spread of machine vision-based applications with improved quality, efficiency, and cost-effectiveness. Compared to old-fashioned image processing techniques, DL achieves better precision and accuracy in object recognition, image categorization/classification, segmentation, semantic segmentation, and simultaneous localization and mapping (SLAM).
Since the neural network (NN) approaches used in DL/ML are trained rather than programmed, applications that follow this approach frequently require less fine-tuning and professional analysis. The availability of massive amounts of video data in current systems and in the research literature [11], [14], [15] supports this. While image processing algorithms and tactics are domain-specific, DL delivers superior flexibility because CNN models and applications can be retrained on a custom dataset for any use case. But traditional image processing requires fewer resources, such as less computing power and storage space, and works well even with slow old hard drives. Given these low requirements, a project can be quickly deployed in the real world and deliver efficient results at low cost.
Furthermore, image processing does not require large training datasets like machine/deep learning does, though it does require precise parameters to achieve the desired objective. The project source code can be deployed on onboard systems or single-board computers (SBCs) such as the Raspberry Pi; indeed, the image processing-based detection algorithm proposed in this research article was deployed on an SBC. Its Python script takes only kilobytes (KB) of SBC storage and can run quickly and show results. A comparable ML algorithm would require extra space for its dataset and additional computing resources for training and testing before deployment, and such a system would need a reliable internet connection to a third-party cloud using third-party ML APIs.

C. PRECISION AGRICULTURE (PA)
PA is the deployment of geospatial technologies (e.g., GIS, RS, GPS [16]) to find and identify variations in the field and deal with them using alternative strategies, or a management system that uses information technologies (e.g., image processing or machine vision) to bring in data from numerous resources [17] and generate judgments associated with crop production. Remote sensing (RS) imagery is a frequent practice, used to find early signs of crop decay [18], [19]. Moreover, to acquire, process, distribute, and interpret crucial data, communication and information processing and sharing mechanisms and similar technologies are needed for better decision support, to ensure optimal use of inputs, and to maximize farm output [17]. GIS is utilized significantly in PA to determine and validate field variability.
Consequently, using GPS together with RS and other kinds of imagery and time-series data, agricultural producers and farmers are able to choose precisely what to plant, when to plant, and in what quantities, helping them make more efficient and well-organized use of expensive inputs (e.g., fertilizers). Producers who practice PA methods maximize crop yields, reduce operating expenses, and increase farm profits [2]. Furthermore, remote sensing (RS) with UAVs is revolutionary in PA.

II. RELATED WORK
This may be the age of complex machine/deep learning (ML/DL) and big data. However, conventional image processing still has potential and remains effective in agriculture, where color, shape, and texture-based features are crucial for finalizing decisions. In this context, several similar research studies exist. For example, [3] propose a method to detect cotton bolls in aerial images using a hierarchical region-based growing process, enhancing computational effectiveness and applying it effectively to high-resolution UAV images. They exploit the spatial knowledge of region-growing segmentation to determine the cotton bolls, with threshold values automatically separating white cotton bolls from non-boll pixels.
On the other hand, another study [20] also proposed a methodology to detect cotton bolls/blooms in UAV-based RGB images, but in a different way. They used ML/DL-based approaches, developing a convolutional neural network (CNN) trained for cotton bloom/flower detection and generating the blooms' 3D locations. This methodology monitors cotton flowering for crop production estimation and management over the season. However, it miscalculates the cotton bloom count because it cannot count hidden flowers, and CNN misclassification further leads to wrong bloom counts.
Besides cotton boll/bloom detection, other crops are also used as research data to test algorithm-based methodologies. For instance, [21] digitally count maize plants by performing segmentation on UAV-based RGB images. Their work shows the capability of image processing in agriculture to detect plants post-emergence. The results show that ground cover detection did not correlate with plant numbers at the plot level, due to unwanted weed detections and blur effects, and counting was only possible during the early leaf development stage, when young light-green leaves differ from older dark-green ones.
Not just crops: some researchers go a bit deeper and use individual green leaves as sample data to test combined algorithmic methodologies. For example, [22] use a fuzzy color histogram and an edge histogram incorporating a multi-layer perceptron for recognition of fragmented plant leaves. They take different plant leaves and fragment them into top-right, top-left, bottom-right, bottom-left, and center images. Their results show that recognition of the center fragment is superior to the other fragments based on color, shape, and texture.
As we know, color spaces are important in agriculture for detection and identification purposes. However, color spaces also play a crucial role in medical science. The research in [23] supports this statement by proposing a technique for skin color identification to distinguish skin lesions in digital images of malignant melanomas, presenting a color analysis tactic that uses a fuzzy set of skin lesion colors for a defined class of skin lesions.
Similarly, another research study [9] works on color ranges, specifically the RGB range, applying an HSV color difference formula to detect colored capsules blended into the normal capsule product. They initially converted the RGB capsule pictures into HSV for correct detection, HSV being considered a well-adjusted color space for color-difference perception.
First, the RGB-colored capsule image is transformed into HSV, and the rare color defect is then detected by the HSV color difference formula. Unlike the traditional way, the proposed color transformation is not pixel-level: only a set of R (red), G (green), and B (blue) color values, computed as a weighted median of the normalized histogram, is used to represent the RGB image.
So the transformation requires calculating only the three values of R, G, and B rather than every pixel of the whole RGB image. The color difference is calculated from the transformed HSV component data, and recognition speed and accuracy are obviously enhanced. The introduced approach puts forward a new resolution to the challenge of color identification.
Furthermore, [24] recommend a dehazing algorithm using HSV color space. They propose processing hazy images in HSV rather than RGB to preserve hue and reduce computational difficulty, and they use an altered architecture of the morphological opening procedure to approximate the transmission map. This subdues halo effects significantly and consumes less time than specially designed filters. The experimental outcomes reveal that the suggested algorithm can efficiently eliminate haze and maintain naturalness by maintaining hue and controlling halo effects, while greatly reducing computational difficulty; the algorithm is therefore appropriate for real-time applications.
Recently, another research study [25] used UAVs with onboard RGB sensors or cameras and, with SLIC and a random forest (RF) classifier, proposed a method for distinguishing crops from weeds in upland rice fields. The SLIC-RF algorithm was assessed with various fusions of input features, such as three color spaces (RGB, HSV, CIE-L*a*b*), a canopy height model (CHM), spatial texture, and a vegetation index (VI). The SLIC-RF model with HSV color space showed good efficiency with the highest accuracy. The outcomes of this research show that rice and weeds can be differentiated in standard consumer UAV images using the proposed SLIC-RF algorithm with efficiency sufficient to meet the demands of site-specific weed management in the field, even during the early growth stages of small rice plants.
This research study also focuses on color space, especially the HSV color range, with the cotton crop as sampling data. According to [9], HSV is considered a well-adjusted color space for color-difference perception, and HSV has shown efficient results in previous research such as [22], [23], [9], and [24].

III. MATERIALS AND METHOD
The cotton crop is considered the data source in this research study because of the crop's distinct spectral characteristics with respect to UAV sensors (e.g., camera), weather, and flight timing. The cotton crop image dataset was collected using a UAV, in this case a quadcopter, from the agronomy research farms of the University of Agriculture Faisalabad (UAF), Punjab, Pakistan. The image dataset is not limited to this source; other images were also taken into account for the sake of testing the algorithm and checking accuracy. But the primary source of imagery data is the agronomy research farms of UAF, Pakistan.

A. SITE DESCRIPTION
As previously described, this study was conducted at the primary agronomy research farms located on the UAF main campus (182 m; latitude N 31°26'12.578899'', longitude E 73°04'28.827099''). Figure 2 shows the actual targeted cotton field sample site inside the red square boundary. Images of this agricultural field are used in the research study for testing. The image also includes other crops: to the right of the red square is a fodder crop, below it are other cotton plots, and above it are tree branches and leaves. As the ground view image (Figure 3) shows, the cotton plants are densely planted in the field, and at this stage the cotton plants look good and healthy. We also took a satellite image of the sample field, depicted in Figure 4, to show the exact location and give an idea of the size of the cotton plot.

B. IMAGE ACQUISITION & PROCESSING
• A Phantom 4 drone (Figure 5) was used for collecting the imagery sample data. Its range is approximately 5 km (3.1 mi) at a speed of 72 km/h while maintaining satellite and vision positioning.
• An FC6310 camera was mounted on board with a gimbal, and images were acquired on 24 September 2020 in the middle of the day. All imagery data were collected and saved in RGB color space.

• Weather conditions were suitable for drone flight; data were collected in a total of 7 flight lines, with the drone at a height of around 60 to 70 ft above the ground.
• All sample data during the drone flights were collected and saved in MP4 video format; single frames were later taken from the video for further processing.

C. DATA SAMPLING
After acquiring the cotton crop RGB aerial images, we manually determined and classified which image was good for the experiment. To achieve higher accuracy, we took the image with the least motion and the best light conditions, avoiding blurriness. Because our UAV imaged the cotton field from a height, the distance between the UAV and the crop affects the final image results: light, wind, dust particles in the wind, birds, and flying bugs can all interfere at that distance. We took a single image from the recorded sample data containing the whole site and the neighboring agricultural crops mentioned in the site description section. The main sample image (Figure 2) is 4864 pixels wide and 3648 pixels high, at 72 dpi vertically and horizontally, with a bit depth of 24. We cropped it down to the single target cotton field, as depicted in Figure 6. For better focus, we removed the neighboring farm fields, divided the target cotton field image into a grid as shown in Figure 6, and labeled every square box of the grid.
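The cropping and grid labeling described above can be sketched with NumPy slicing. The crop offsets, grid dimensions, and the `crop_and_grid` helper below are illustrative assumptions; the paper does not give exact pixel coordinates for the crop or the cells.

```python
import numpy as np

def crop_and_grid(image, top, left, height, width, rows, cols):
    """Crop a region from an aerial frame and split it into a labeled grid.

    `image` is an H x W x 3 array; cells are numbered row-major (1-based),
    mirroring the numbered squares of the grid view. Cell sizes here are
    illustrative, not taken from the paper."""
    field = image[top:top + height, left:left + width]
    cell_h, cell_w = height // rows, width // cols
    cells = {}
    for r in range(rows):
        for c in range(cols):
            label = r * cols + c + 1          # 1-based cell number
            cells[label] = field[r * cell_h:(r + 1) * cell_h,
                                 c * cell_w:(c + 1) * cell_w]
    return cells

# Synthetic image the size of the paper's sample frame (4864 x 3648 px)
img = np.zeros((3648, 4864, 3), dtype=np.uint8)
grid = crop_and_grid(img, top=400, left=600, height=3000, width=3600,
                     rows=5, cols=4)
print(len(grid), grid[13].shape)  # 20 cells; cell 13 -> (600, 900, 3)
```

A cell such as `grid[13]` can then be processed on its own, as done later with area number 13 of the grid view.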

FIGURE 6. Cotton crop sample image. Approximate plot size according to Google Maps measurement: Y axis = 29 m, X axis = 20 m.
This way, we can focus on small areas of the cotton field, because from height the white cotton bolls/blooms look like white dots; when we close in on a specific area of the target crop, the human eye can identify cotton bolls, whereas otherwise it mostly sees green cotton plants. We also labeled each sub-square within the red boundary of the grid, so specific areas can be discussed by their assigned number without confusion. As we can see, some areas of the target field have highly green and healthy cotton plants, and some areas have only a small number of them.

D. Image Analysis
We analyzed our sample image using RGB and HSV color space histograms. The color space histogram plays a vital role in image processing: it gives a more accurate understanding of the imagery's color saturation, gradation, and white balance inclination, and it lets us set the algorithmic parameters of our proposed detection method more precisely. We mainly used the Python language and its image processing library [26] to develop a histogram function for RGB and HSV, so that we know the actual color range of our sample image in numeric values and can use them for further processing decisions.
Furthermore, since we target all pixel values of the sample image in the histogram function, we set the bin range to 0-256 for both RGB and HSV, because the tonal value determines the color of each pixel in a digital RGB image, and tonal values from 0 to 255 are assigned to each of the three channels (R, G, and B). For example, if the R value is 255 while the G and B values are 0, the output color is pure red (R=255, G=0, B=0). If we instead set R=33, G=207, and B=42, these values change the output, and we get a new color (R=33, G=207, B=42), a green.
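The histogram computation and the tonal-value examples above can be reproduced with a small NumPy sketch. The paper's own histogram function [26] is not shown, so this is an assumed equivalent using `np.histogram` with 256 bins over the 0-255 tonal range; it works identically for RGB or HSV arrays.

```python
import numpy as np

def channel_histograms(image, bins=256):
    """Per-channel histograms over the 0-255 tonal range (the paper's
    bin setting); `image` is an H x W x C uint8 array, RGB or HSV."""
    return [np.histogram(image[..., ch], bins=bins, range=(0, 256))[0]
            for ch in range(image.shape[-1])]

# Tonal-value examples from the text: (255, 0, 0) is pure red,
# (33, 207, 42) is a green.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:5] = (255, 0, 0)      # top half: pure red (50 pixels)
img[5:] = (33, 207, 42)    # bottom half: green (50 pixels)
r_hist, g_hist, b_hist = channel_histograms(img)
print(r_hist[255], g_hist[207], b_hist[42])  # -> 50 50 50
```

Each histogram bin simply counts how many pixels carry that tonal value in the given channel, which is exactly the information read off Figures 8 and 9.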

FIGURE 7. Field grid view of the cotton crop sample image. (Area numbering is used for discussing specific areas in the article.)
HSV is a non-linear color space. It also has three channels, H (hue, i.e., color), S (saturation, or color depth), and V (value, or brightness), and it represents human perception. As in RGB, a tonal value from 0 to 255 is assigned to each channel to obtain different colors, as shown in Figure 6. The fundamental difference is brightness control, which is not easily achieved in RGB, whereas brightness (the value V) is the third channel in HSV. Moreover, in HSV the luminance component of digital imagery can be separated from the color information, achieving the desired H (hue) after setting specific values of S (saturation) and V (value/brightness). This method is helpful in various object detection applications; examples include [23], [9], [30], and [31].
This research study focuses on the HSV color space because of the aforementioned technique. The sample cotton field image was taken by UAV in the middle of a fully sunny day, and the color levels visible in the sample image further support this choice. We applied the RGB histogram algorithm to the cotton field sample RGB image (Figure 6); the result is depicted in Figure 8. The values of r (red), g (green), and b (blue) represent the actual RGB components of the sample image. As Figure 8 shows, the blue pixel counts are higher than those of the other two components (more than 40,000 pixels). Nevertheless, the numerical values of all three channels are aligned, so separating one channel from the other two is complex; and since we aim to detect white cotton bolls/blooms, which appear as small white dots in UAV-based imagery, detection in RGB color space is impractical. It might be possible using advanced techniques such as AI machine learning, but that would require unique resources, such as big UAV-based datasets of cotton and other agricultural crops, plus powerful computational machines and time for the training and computation process. We then applied the HSV histogram algorithm to the sample image (Figure 6) after converting RGB to HSV using the software library [27]; the resulting HSV color range is depicted in Figure 8. Eqs. (1)-(6) are used for transforming images from RGB color space to HSV color space [32]. The values of h (hue), s (saturation), and v (value) represent the actual HSV color range of the sample image. The V (value/brightness) values differ markedly from the other two components, while the H (hue) and S (saturation) values correspond to each other: when the hue value is high, so is the saturation value. However, when V (value/brightness) is high, the hue and saturation values drop.
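Eqs. (1)-(6) are not reproduced here, but a standard max/min RGB-to-HSV conversion of the kind cited can be sketched in vectorized NumPy. The scaling conventions below (H in degrees, S and V in [0, 1]) are our assumption; libraries such as OpenCV scale the channels differently.

```python
import colorsys
import numpy as np

def rgb_to_hsv(image):
    """Vectorized RGB -> HSV conversion following the usual max/min
    definition. Input: uint8 RGB array. Output: H in degrees [0, 360),
    S and V in [0, 1]."""
    rgb = image.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                      # value = max(R, G, B)
    c = v - rgb.min(axis=-1)                  # chroma = max - min
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0.0)
    safe_c = np.where(c > 0, c, 1)            # avoid division by zero
    # Hue: piecewise by which channel is the maximum
    h = np.zeros_like(v)
    h = np.where(v == r, ((g - b) / safe_c) % 6, h)
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c > 0, h * 60.0, 0.0)        # grayscale pixels: hue 0
    return h, s, v

px = np.array([[[33, 207, 42]]], dtype=np.uint8)   # the green pixel above
h, s, v = rgb_to_hsv(px)
print(round(float(h[0, 0]), 1))  # -> 123.1
```

The output can be cross-checked against the standard library's `colorsys.rgb_to_hsv`, which implements the same formulas on scalar values.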

A 3D scatter plot algorithm was also applied to the sample image for deeper analysis. The results show that color purity and illumination (brightness) are distinguishably visible in the HSV 3D scatter plot (Figure 11), in contrast to the RGB scatter plot (Figure 10), in which illumination is blended in with the colors. Whiteness in the sample image is much more localized and visually distinguishable from the rest. The saturation and value of the sample image do fluctuate; however, they are mainly located within a narrow range along the hue axis. That is a significant fact and gives an advantage in the segmentation process at later stages. Moreover, comparing our HSV plot in Figure 11 with Figure 1 provides a deeper understanding.
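A minimal sketch of such a 3D scatter plot, assuming Matplotlib (version 3.2 or later for the `projection="3d"` shortcut). The function name, subsampling step, and coloring-by-value choice are illustrative, not the paper's exact plotting code.

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")              # headless backend; we only save to file
import matplotlib.pyplot as plt

def hsv_scatter(hsv_image, out_path, step=50):
    """3-D scatter of (H, S, V) pixel values, subsampled by `step` so a
    large aerial frame stays plottable. Points are colored by V here;
    coloring by the pixels' own RGB would be another common choice."""
    h, s, v = (hsv_image[..., i].ravel()[::step] for i in range(3))
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(h, s, v, c=v, s=2)
    ax.set_xlabel("Hue")
    ax.set_ylabel("Saturation")
    ax.set_zlabel("Value")
    fig.savefig(out_path)
    plt.close(fig)

# Synthetic stand-in for the sample frame (real input: the UAV image in HSV)
hsv = np.random.randint(0, 256, size=(200, 200, 3), dtype=np.uint8)
hsv_scatter(hsv, "hsv_scatter.png")
print(os.path.exists("hsv_scatter.png"))  # -> True
```

On the real frame, the localized cluster of high-V, low-S points corresponds to the whiteness noted above.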

E. Proposed method & its Results
After understanding our sample data, we can set and adjust parameter values in the algorithm of our proposed method for the desired results. We obtained white cotton boll/bloom pixel-level values from the HSV image, both lower and upper values, by adjusting the numerical values of the HSV components h (hue), s (saturation), and v (value). We first apply the algorithm to a small portion of the sample: even after cropping from the main image (Figure 2), the sample image in Figure 6 is still large for processing and analysis, so we narrowed our sample down and cropped out area number 13 from the grid view image (Figure 6). In this zoomed image, we can easily see cotton bolls/blooms. All three components are tuned and then tested for both upper and lower numerical values. This loop runs continuously until cotton bolls/blooms are the only objects visible in the sample image; the results are depicted in Figure 13. We tried but did not use traditional grayscale or binary image conversion techniques, due to the illuminance and brightness sensitivity of the sample image discussed in the image analysis section: when we applied those techniques, the high level of brightness turned most of the image white, and the target object could not be found. So instead of grayscale or binary techniques, we focused on pixel-level manipulations. These pixel-level lower (0-255) and upper (0-255) limit manipulations give more segmentation freedom than other image processing techniques.
In this technique, we target each pixel value separately and then combine the results. We create six variables, two for each HSV component: since HSV has three components, three multiplied by two gives six. For example, the hue component h has an upper-limit variable and a lower-limit variable, each varying between 0 and 255; zero is the lowest value, and 255 is the maximum because the HSV color range runs from 0 to 255. In other words, we create three windows, each defined by its upper and lower limit. In the end, we have two main array variables, each containing three sub-variables, one per HSV component. The size of each window is decided by the upper and lower limit values, i.e., the set parameters. All these limit variables are arrays, and our sample image is also stored in array form, so only pixel values between the lower and upper limits, i.e., within the specified range, pass and are detected; the output is held temporarily as input for the following step and further processing. The complete illustration is depicted in Figure 14, and these limit checks correspond to algorithm lines (7) and (8). After multiple tests and iterations, we set up our first upper and lower pixel values for all three HSV components, shown in Table III. We apply this first set of parameters to the image shown in Figure 13; the result of the 'Set-1' parameters is depicted in Figure 15. We then used a contour line algorithm to draw lines around the white cotton bolls/blooms.
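The upper/lower window test described above is equivalent to OpenCV's `cv2.inRange`; a NumPy sketch follows, using the Set-1 windows from Table III (H 93-211, S 0-46, V 197-255) on a tiny synthetic HSV array. The pixel values in the example are invented for illustration.

```python
import numpy as np

def limit_mask(hsv_image, lower, upper):
    """Keep only pixels whose H, S, and V all fall inside the
    [lower, upper] window -- a NumPy equivalent of cv2.inRange."""
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    inside = (hsv_image >= lower) & (hsv_image <= upper)
    return inside.all(axis=-1)           # True where all three channels pass

# Set-1 windows from Table III: H 93-211, S 0-46, V 197-255
lower1, upper1 = (93, 0, 197), (211, 46, 255)

hsv = np.zeros((4, 4, 3), dtype=np.uint8)
hsv[1, 1] = (120, 10, 230)   # bright, desaturated pixel -> boll-like
hsv[2, 2] = (120, 200, 230)  # saturated pixel (green foliage) -> rejected
mask = limit_mask(hsv, lower1, upper1)
print(mask.sum())  # -> 1
```

Only the bright, low-saturation pixel survives the window, which is precisely how the white bolls are isolated from foliage.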
A contour can be described simply as a curve joining all the continuous points (along a boundary) that share the same color or intensity. Contours are useful for shape analysis and for object detection and recognition; here the objects to find are the cotton bolls, and the contour line algorithm detects the white cotton bolls well. When applying it, we use contour approximation (an implementation of the Douglas-Peucker algorithm) to encircle the detected cotton bolls visible in Figure 15 and check whether our pixel-level manipulation succeeded. We pass the image of Figure 15 as one of the contour algorithm's input parameters, because it contains the coordinates of the cotton bolls in the sample; we set the contour line thickness fairly high and the line color to (0, 0, 255), i.e., red. The contour algorithm's output is depicted in Figure 17: unwanted detections appear in area 1 (lower-right corner), and horizontal noise is visible in areas 2 and 3 (upper side of the image). The contour algorithm draws lines in these places because the pixel intensities there fall within the upper and lower pixel parameters. To remove or reduce this noise, we need to readjust (tune) the upper and lower parameters defined in Table III (Set-1).
As Figure 17 shows, the contour algorithm produces unwanted detections after applying the Set-1 parameters in the proposed method; this noise could cause problems if the algorithm were applied to a larger image containing more objects. We therefore recalculated the parameter values and created a second set, Set-2 (Table III). In this second set, we changed the Hue window from '211-93' to '255-04', the Saturation window from '46-00' to '67-00', and the Value (brightness) window from '255-197' to '255-144', and readjusted the parameters in the proposed algorithm accordingly. As depicted in Figure 18, this time the result is a total failure. We then created a third set, Set-3 (Table III), changing the Hue window from '255-04' to '255-00', the Saturation window from '67-00' to '05-00', and the Value window from '255-144' to '255-248'. With the Set-3 parameters, the results improve, as depicted in Figure 19: Set-3 is not a failure like Set-2, and the unwanted detections are removed, but white cotton boll/bloom detection is significantly reduced (the small red dots in Figure 19). We therefore readjusted the parameters once more to improve boll detection while keeping noise and unwanted detections low: the Hue window changed from '255-00' to '255-02', the Saturation window from '05-00' to '03-00', and the Value window from '255-248' to '255-220'. The result of this fourth set, Set-4, is shown in Figure 20. Comparing the results of Set-3 and Set-4 with Set-1, there are no unwanted detections or noise, but fewer cotton bolls are detected.
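The trial-and-error sweep over the parameter sets can be organized as a simple loop. The window values below are transcribed from the ranges quoted above ('255-04' meaning upper limit 255 and lower limit 4, and so on); the pixel-count score is a hypothetical stand-in for the visual inspection the paper actually uses:

```python
import numpy as np

# (upper, lower) windows for H, S, V, transcribed from the Set-2..Set-4
# adjustments described in the text; Set-1 values are given in Table III.
PARAMETER_SETS = {
    "Set-2": {"h": (255, 4), "s": (67, 0), "v": (255, 144)},
    "Set-3": {"h": (255, 0), "s": (5, 0),  "v": (255, 248)},
    "Set-4": {"h": (255, 2), "s": (3, 0),  "v": (255, 220)},
}

def count_detected_pixels(hsv_img, params):
    """Count pixels that pass all three H, S, V windows for one set."""
    ok = np.ones(hsv_img.shape[:2], dtype=bool)
    for idx, key in enumerate(("h", "s", "v")):
        upper, lower = params[key]
        channel = hsv_img[..., idx]
        ok &= (channel >= lower) & (channel <= upper)
    return int(ok.sum())
```

Scoring each set against the same sample image makes it easy to see, for instance, how tightening the Value window changes the number of surviving pixels.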
Comparing the final results of the Set-3 and Set-4 parameters, there is little difference between the two outputs. If we zoom into the result images of the third and fourth iterations and focus on specific areas (introduced earlier in Figure 7), such as areas 12 and 13 shown in Figure 21, cotton bolls are still detected and marked with red contour lines, and the two sets differ little from each other. Next, we tested the proposed method on different images of similar cotton fields taken by the same drone (UAV) on the same day, this time including images taken from different angles and heights, as shown in Figure 21. We applied the proposed method with the Set-1 values.
Light is the most critical factor in both analog and digital photography, and drone photography is no exception. The results show the significant influence of light intensity, angle of light, drone height, camera grade, and camera stability; other factors exist, but these are the cornerstones of drone photography. After applying the proposed method with the Set-1 parameter values to an image taken by the same drone at a different height and angle at the same time, the result (Figure 22) shows no noise or unwanted detections, and almost all cotton bolls are detected and marked with red contour lines around them. We tested the proposed algorithm thoroughly over several iterations, using a manual trial-and-error method to correct it, and evaluated its robustness by comparing its output with a manually labeled image. We took another closeup from area 10 (mentioned in Figure 7) of the sample image and manually marked the visible white cotton bolls with blue circles. Importantly, this closeup also contains a cotton plant with a sticky-note tag on it, placed there for other research purposes, which lets us test whether the algorithm detects the tag along with the white cotton bolls. In the result image, the proposed algorithm marked all the visible cotton bolls, including those growing on short plants, while ignoring the unwanted object (the yellow sticky-note tag on top of the cotton plant). We manually spotted 20 white cotton bolls and marked them with blue circles in image 'A' of Figure 22; the algorithm's output ('B' in Figure 22) clearly shows that it detected all of the manually spotted white cotton bolls.
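The manual-versus-algorithm check can be expressed as a small matching routine. The coordinates and the 10-pixel match radius below are invented for illustration; the actual annotation data is the 20 blue circles in image 'A' of Figure 22:

```python
import math

def detection_recall(manual_points, detected_points, radius=10.0):
    """Fraction of manually circled bolls that have at least one
    algorithm detection within `radius` pixels (nearest-match check)."""
    hits = 0
    for mx, my in manual_points:
        if any(math.hypot(mx - dx, my - dy) <= radius
               for dx, dy in detected_points):
            hits += 1
    return hits / len(manual_points)
```

In the experiment above, all 20 manually circled bolls were matched by the algorithm, i.e., a recall of 1.0 under this kind of criterion.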

IV. CONCLUSION
After multiple iterations (at most four) of the proposed method with different sets of HSV parameters (Set-1 to Set-4), we achieved our objective and detected white cotton bolls using simple image processing techniques combined with a manual trial-and-error method. As discussed in this article, we also hit some dead ends, but these gave our research and results a new direction. This research concludes that by controlling the illuminance (Value) component we can control pixel brightness intensity, by controlling Hue we can manipulate pixel color, and likewise for Saturation. In this way, we can separate illuminance from color, control each component independently, and ultimately detect white cotton bolls in drone imagery.
This research methodology can be applied to other object detection tasks. To do so, the parameters (the numerical values of the HSV components Hue→h, Saturation→s, and Value→v) must be readjusted according to the target object. However, as observed in this research, sample image quality, light intensity, angle of light, camera angle, camera stability, and similar factors play a significant role in any detection and discrimination task.
Furthermore, we could improve the proposed method's output and add autonomy by combining a neural network (NN) approach with fuzzy logic, such as an adaptive neuro-fuzzy inference system (ANFIS). This requires larger data sets and multiple training iterations, which takes time. However, once a data set of color-space values for a specific target object is complete, we can build fuzzy logic membership functions and add an NN for autonomy, so that the parameters no longer need manual readjustment. Other NN-based approaches might also provide autonomy for the proposed method, but a fuzzy logic approach is more suitable because of its closeness to human perception.
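As a starting point for the fuzzy-membership step mentioned above, a standard triangular membership function over a channel's 0-255 range might look like the following; the breakpoints are illustrative, not fitted to any data set:

```python
def triangular_membership(x, a, b, c):
    """Degree of membership in a triangular fuzzy set that rises
    from zero at a to a peak of 1.0 at b and falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy set for "bright" Value-channel pixels.
def bright(v):
    return triangular_membership(v, 180, 255, 256)
```

One such membership function per HSV component, learned from the collected color-space data, would replace the crisp upper/lower windows tuned by hand in this paper.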