Advanced Color Edge Detection Using Clifford Algebra in Satellite Images

Edge detection is widely used in image processing to improve the detection and classification of objects, segmentation, and the extraction of other features. Satellite images are rich in information about objects with different color intensities and contain a large amount of noise, so it is difficult to achieve recognition, classification, and feature extraction of small objects through traditional edge detection algorithms. The colors in satellite images overlap considerably because terrain and weather conditions generate a great deal of noise. Edge detection provides detailed information about objects in an image by reducing unnecessary feature information. Edge detection in color images is more challenging than in gray-level images. This paper proposes a method for the edge detection of color images using Clifford algebra and its sub-algebra, the quaternions. A quaternion-based Fourier transform is used to process red, green, and blue (RGB) images separately in the vector field. A 3×3 quaternion mask is developed to filter out frequencies of the image in multiple directions and retain only details about the edges. The algorithm works on the three channels individually; the output is then processed through the quaternion Fourier transform (QFT) and inverse QFT with a 3×3 mask to extract the high frequencies. The proposed algorithm is compared with traditional edge detection algorithms using a satellite image dataset that contains different types of objects and detailed information. Results are validated through entropy, structural similarity, and noise error to prove that our proposed algorithm provides satisfactory performance on different remote sensing images.


Introduction
The edge of a given object contains the basic structure and details of its shape, with extensive information about its position in an image, and carries most of the information about it. The primary step towards locating or recognizing an object in an image is finding complete, connected, visible information about the edges of that object. Edges exist in the irregular structure and unevenness of an image, that is, at the points of sudden change in its signals. During recent years, remote sensing technology has gradually been applied to various phenomena and fields, such as the weather, geology, disaster prediction, and urban planning. Edge detection in digital images is an important basis for such aspects of image processing as image segmentation, target region recognition, and region shape extraction. It is an important method for extracting features with clear visibility in image object recognition [1]-[3]. Edge detection is significant in relation to remote sensing images, as it supports object recognition, image segmentation, and image registration [4]-[5]. A remote sensing image has small-scale content and a non-uniform brightness distribution; its main edges have many breakpoints and considerable visual loss; the scene is complex; its secondary edges suffer heavy interference; and the noise is obvious [6]. Therefore, it is difficult to obtain a satisfactory result by performing edge detection on a complex remote sensing image containing a large amount of noise [7]-[8]. Images with ships contain many sea waves, which introduce a distinct type of noise and appear in similar blue shades; similarly, agricultural remote sensing images have a multitude of similar green areas. Thus, identifying a particular object is a challenging task in remote sensing satellite images [9]-[11]. Fig. 1 highlights some of the issues involved.
Due to noise and distance, the intensity (high color resolution) of the colors in the image is fairly uniform, and it is difficult to recognize an object against a background of similar structure. It is not very clear where the ship is located around the dock in Fig. 1(a). Similarly, Fig. 1(e) shows a ship as a small object that is not very visible, its geometric shape being hard to make out without proper identification. Fig. 1(c) shows various urban areas with multiple objects, each with a different color range.
Numerous researchers have made efforts to design algorithms for edge detection and performance evaluation. In grayscale edge detection, the most famous approaches are those of Sobel [13], Roberts [14], and Prewitt [15], which use first-order differentials and detect edges from different directions. The derivative of the image is used to enhance its protruding edges and contours to detect the position of the edge. The Laplacian of Gaussian (LoG) [16], [17] and the difference of Gaussians (DoG) [18] use second-order differentials, while Canny [19] combines Gaussian smoothing with gradient operators; these approaches improve the quality of an edge by reducing its blurriness. High-frequency details are removed, which results in less noise and enhanced feature visibility.
In 1977, Hueckel [20] proposed an edge detection operator that maintains a view of the major issues in edge detection, i.e., the direction of the edge, its location, noise within it, its accuracy, and its width. This operator worked on the principle of expanding a small, circular neighborhood of pixels. An improvement was made by Ramakant [21], through the extension of grayscale edge detection towards color edge detection. The technique performed edge detection three times in the colored space of RGB images and then used that combined effect to optimize edge detection; however, it took time to separate each color segment. Yang and Tsai [22] used the technique of partitioning a color image into n×n non-overlapping sections and applied the reduction in color dimensionality by using a moment-preserving threshold, which reduces the time taken to perform edge detection.
Trahanias and Venetsanopoulos [23] treated the color image as a vector field and proposed the color edge detector as a vector range (VR) to detect the edges of color images. This approach takes the angle between the color vectors of pixels and is helpful in implementing edge detection in different blocks of colors. The approach was extended further in the vector domain as the Basic Vector Directional Filter (BVDF) [24], the Directional-Distance Filter (DDF) [24], and the Generalized Vector Directional Filter (GVDF) [25]. Ju Xiaohan et al. [26] proposed an automatic segmentation algorithm for remote sensing images based on three-dimensional data and merged segmented edges. Using geometric algebra, Shagufta et al. [27] applied a linear quaternion mask for the detection of edges from multiple directions. The limitation of this approach is the separate implementation of four masks, one for each processing direction. Another approach using quaternions [28] was applied to color segmentation using a 3×3 filter mask. This approach limited the size of the quaternion mask and used a scaling factor for the image edges, with a detailed comparison after conversion of the image to grayscale. New advancements and improvements are being developed for every method; however, the literature review shows that these algorithms still require improvement, especially for remote sensing images in which objects are small and colors overlap with objects (e.g., the overlapping ship in Fig. 1(a) and the similar colors in Fig. 1(b)), resulting in blurring. A good edge detection algorithm must achieve three things: 1) good edge connectivity; 2) smooth edges on objects; and 3) clear object visibility. In this paper, we use a parallel processing approach on the RGB colors of an image with Clifford algebra (also known as geometric algebra [GA]) and a spectral image mask.
Quaternions are a part of Clifford algebra; they are essentially an extension of three-dimensional space (x, y, z) with complex numbers and imaginary parts. Labunets et al. [65] characterize the use of Clifford algebra in multicolor processing through hypercomplex numbers, which are useful for describing multicolored images and for operating directly on multi-channel images just as on single-channel multiple-valued images. RGB images can be represented in vector form using quaternions [66], which helps in algebraically handling the spatial color content of color images. The quaternion-based color image processing approach describes the three components of a color image pixel in the RGB color space as a pure quaternion and performs the whole process on the color image as on a vector. The advantage of using quaternion theory for color image processing is that the three color channels of the image can be represented as a whole. The color host image is represented as a quaternion matrix, and the QFT is used to obtain the image's frequency-domain information. The edges of an image usually comprise high frequencies; therefore, after the QFT of an image, we apply a high-pass filter to the QFT-transformed image. This filter blocks all low frequencies and only allows high frequencies to pass. Finally, we apply the inverse QFT to this filtered image to locate the edges of objects. Our approach is novel in its mathematical treatment of edge processing, extracting geometric vector features of edges in the high-frequency domain, which provides better results than other state-of-the-art techniques.
Experiments have been performed on different types of remote sensing images that have varying RGB intensity. Validation results show that our QFT methodology performs well on color images. Fig. 2 shows how the image is decomposed into its R, G, and B components, after which the QFT is applied to obtain the frequency content of the components. The developed quaternion mask is then applied to the QFT output, and an inverse quaternion Fourier transform (IQFT) is applied to recover the high frequencies of the image's edges while excluding the lower frequencies.
The main contributions of the proposed research are as follows: 1) This paper proposes a new quaternion-based Fourier transform approach for edge detection through the implementation of a mask. 2) A novel approach was applied to color edge detection using Clifford algebra on all types of image. Object detection can be made better through recognition of objects' geometrical features using advanced Clifford algebra methods.
3) The advanced use of quaternions, with detailed implementation in color image processing through segmentation of the RGB colors, was explored. 4) The paper provides an approach for further advancement in vector-based multidimensional image processing. The rest of this paper is organized as follows. Section 2 is divided into two subsections: quaternions and the QFT. We explore the basics of quaternions and their implementation in color image processing, along with recent advances by other studies in the implementation of GA. In Section 3, the proposed research design and methodology are explained. Section 4 presents the numerical experiments for the proposed algorithm in color edge detection. A conclusion is presented in Section 5.

Preliminaries
In this section, we provide an overview of Clifford algebra, quaternions, and the QFT, together with other authors' implementation methodologies using geometric algebra. First, the geometric properties of Clifford algebra are discussed, followed by a consideration of the geometric product and the projection operation in a three-dimensional Clifford algebra space. Thereafter, the Fourier transform and its calculation formula in Clifford algebraic space are introduced. The existence theorem of Clifford algebra is then studied.

Quaternions in Color Image Processing
Since they are the simplest members of the hypercomplex system, the study of quaternions plays an inestimably important role in the development of the whole hypercomplex system. Quaternions generalize ordinary complex numbers: they follow the addition rule of general complex numbers, but their multiplication rule differs substantially from that of general complex numbers; this is one of the most important differences between quaternions and ordinary complex numbers. In the calculation of geometric rotations in three dimensions, quaternions have more advantages than traditional matrices [29]. Therefore, in robotics, computer vision, and image programming, research based on quaternions has enjoyed great progress. A quaternion is a number in a four-dimensional space with one real part and three imaginary parts. The quaternion q can be expressed as:

q = a + bi + cj + dk, (1)

where a, b, c, d are real numbers and i, j, k are imaginary units satisfying the following relationships:

i² = j² = k² = ijk = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j.

When a = 0, q is called a pure quaternion. The modulus |q| and conjugate q̄ are defined as follows:

|q| = √(a² + b² + c² + d²), q̄ = a − bi − cj − dk.

Given two quaternions

q1 = a1 + b1 i + c1 j + d1 k, q2 = a2 + b2 i + c2 j + d2 k,

the addition, subtraction, and multiplication operations are defined as follows:

q1 ± q2 = (a1 ± a2) + (b1 ± b2) i + (c1 ± c2) j + (d1 ± d2) k,

q1 q2 = (a1 a2 − b1 b2 − c1 c2 − d1 d2) + (a1 b2 + b1 a2 + c1 d2 − d1 c2) i + (a1 c2 − b1 d2 + c1 a2 + d1 b2) j + (a1 d2 + b1 c2 − c1 b2 + d1 a2) k.

Using (1), a color RGB image can also be represented in quaternion form with the scalar part set to zero. For example, a pixel at image coordinates (s, t) in an RGB image can be represented as q(s, t) = R(s, t) i + G(s, t) j + B(s, t) k, where R(s, t) is the red component, G(s, t) the green component, and B(s, t) the blue component. Other types of color space can also be expressed in pure quaternion form, such as the luminance-chrominance color spaces, e.g., YCbCr. Because the three primary colors have 256 brightness levels each, their superposition forms 16.7 million colors. For color images, the color of each pixel is represented by the three primary colors of the RGB color model, so each matrix element needs to store the three values R, G, and B.
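As a concrete illustration of these rules, the Hamilton product, conjugate, and modulus can be sketched in a few lines of Python with NumPy. This is illustrative code with our own function names, not the paper's implementation:

```python
import numpy as np

# Quaternions as 4-vectors (a, b, c, d) = a + b*i + c*j + d*k; a sketch for
# illustration, not the paper's implementation.
def qmul(p, q):
    """Hamilton product p*q (non-commutative)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qconj(q):
    """Conjugate: negate the vector part."""
    return q * np.array([1, -1, -1, -1])

def qnorm(q):
    """Modulus |q| = sqrt(a^2 + b^2 + c^2 + d^2)."""
    return float(np.sqrt(np.sum(np.asarray(q, float) ** 2)))

i = np.array([0, 1, 0, 0])
j = np.array([0, 0, 1, 0])
print(qmul(i, j))  # i*j = k
print(qmul(j, i))  # j*i = -k, not k: multiplication does not commute
```

Note that q q̄ has scalar part |q|² and zero vector part, which connects the conjugate and modulus definitions above.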
Sangwine et al. [30] were the first researchers to introduce the implementation of the above idea (i.e., Fig. 3) within a vector space using a quaternion mask to filter the edge of an image. After a few years, they used the same quaternion operations discussed previously and produced the color subtraction model between two images. They developed a four-dimensional hypercomplex filter used to highlight the edge between different color spaces [31].
In the past, color image processing has generally processed two-dimensional matrices corresponding to the three color channels R, G, and B. In [32], each RGB color is stored and processed separately in matrix form to mark the edges. A similarity relation matrix is developed to calculate the smoothness among the pixels by normalizing the gray color space. In other words, the color image in three-dimensional color space is mapped to one dimension in which the R, G, and B channels are each processed independently, as shown in Fig. 3. A threshold value is used to find the edge up to a particular range of pixels; if a pixel value is below that threshold, the pixel is assumed to belong to an edge. This approach finds the edge pixels simultaneously, without any difficult calculations of image gradients, Laplace transforms, or statistical matrices. However, a color image is not, in a simple sense, a superposition of three gray images with different meanings: the components obey certain constraints [33]. Moreover, a loss of information will inevitably occur during the individual processing of the components and the final fusion of the results. The entire process splits the color image and ignores the inherent relationship between its three components, and so cannot reflect the relevance of the color image pixels as a whole. The quaternion color image processing method treats each color image pixel as a vector in a multi-dimensional space, which can reflect the color correlation of the image. The complex numbers are therefore generalized to the quaternion four-tuple (w, x, y, z). Under the RGB color model, each pixel of a color image can be represented as a pure quaternion with real part q_r = 0:

q(x, y) = R(x, y) i + G(x, y) j + B(x, y) k. (2)

Quaternion Fourier Transform
The Fourier transform connects the time domain and the frequency domain. A time-domain sequence can be transformed into a corresponding frequency-domain sequence through the Fourier transform, and the original time-domain sequence can be recovered by the inverse Fourier transform [34]. The two-dimensional Fourier transform is an important image processing tool that decomposes grayscale images into sine and cosine components; it has a wide range of applications, such as image analysis, image filtering, image compression, and image reconstruction. Fourier analysis has been performed on real and complex domains; it has been deeply researched and is widely used in various numerical processing fields. Studying the response of quaternion-valued signals in the frequency domain has become a topic of interest because many image processing methods can be performed more efficiently in the frequency domain. According to the definition of quaternion multiplication and its exponents, Ell first introduced the bilateral form of the quaternion Fourier transform (QFT), whose generalized form is:

F(u, v) = ∫∫ e^(−µ1 2π u x) f(x, y) e^(−µ2 2π v y) dx dy. (13)

Here, µ1 and µ2 are orthogonal unit pure quaternions, and f(x, y) is a two-dimensional signal in quaternion representation. Taking µ1 = i and µ2 = j gives:

F(u, v) = ∫∫ e^(−i 2π u x) f(x, y) e^(−j 2π v y) dx dy. (14)
(14) is a special form of (13) with µ1 = i and µ2 = j. Considering the non-commutative nature of quaternion multiplication, left and right (one-sided) QFTs were later defined [22]:

F_L(u, v) = ∫∫ e^(−µ 2π (u x + v y)) f(x, y) dx dy,

F_R(u, v) = ∫∫ f(x, y) e^(−µ 2π (u x + v y)) dx dy,

where µ is a unit pure quaternion. Whether a bilateral or a unilateral QFT should be applied depends on the actual situation; it is difficult to compare the advantages and disadvantages of the two transforms in general.
Similarly, the IQFT has three corresponding definitions [22]. In the discrete case, the DQFT and DIQFT likewise have three definitions each [22]; only the two-sided forms are given here, the left and right forms being similar to the continuous case:

F(u, v) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} e^(−µ1 2π m u / M) f(m, n) e^(−µ2 2π n v / N),

f(m, n) = (1 / MN) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} e^(µ1 2π m u / M) F(u, v) e^(µ2 2π n v / N).
Quaternion Fourier transforms play an important role in color image processing. For certain operations, such as color image smoothing and compression, they are more efficient and convenient in the frequency domain than the spatial domain.
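As a concrete illustration, a one-sided DQFT with transform axis µ = i can be computed with two standard complex FFTs through the well-known Cayley-Dickson split q = A + Bj, where A = w + xi and B = y + zi. The paper itself uses the two-sided form of (13)-(14); the sketch below, with our own function names, shows only the one-sided idea:

```python
import numpy as np

def qfft2(q):
    """One-sided (left, axis i) DQFT of a quaternion image q: (M, N, 4) as (w, x, y, z)."""
    A = q[..., 0] + 1j * q[..., 1]  # simplex part  w + x*i
    B = q[..., 2] + 1j * q[..., 3]  # perplex part  y + z*i
    FA, FB = np.fft.fft2(A), np.fft.fft2(B)
    return np.stack([FA.real, FA.imag, FB.real, FB.imag], axis=-1)

def iqfft2(Q):
    """Inverse of qfft2."""
    A = np.fft.ifft2(Q[..., 0] + 1j * Q[..., 1])
    B = np.fft.ifft2(Q[..., 2] + 1j * Q[..., 3])
    return np.stack([A.real, A.imag, B.real, B.imag], axis=-1)

# Round trip on a random pure-quaternion "image" (scalar part zero):
rng = np.random.default_rng(0)
img = np.zeros((8, 8, 4))
img[..., 1:] = rng.random((8, 8, 3))
assert np.allclose(iqfft2(qfft2(img)), img)
```

Because e^(−iθ)(A + Bj) = e^(−iθ)A + (e^(−iθ)B)j, the quaternion transform reduces exactly to two complex FFTs in this one-sided case.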

Dataset and Tools
For the implementation of edge detection in satellite images, we used maritime satellite imagery (MASATI) [12], which consists of 6000+ colored, high-resolution optical aerial images. The dataset is further divided into different categories for the classification of images: land area, coast, ships, multi-objects, coast with ships, and details about the objects in the images. To maintain consistency in the results and similarity in the structure comparison for validation purposes, we selected images of resolution 512×512 with objects and land area. Selecting different scenes from each class of the dataset helped us to check the efficiency of the algorithm against various scenes, rather than just marine environments. MATLAB R2018a with the Image Processing Toolbox was used to test the results.

Image Color Segmentation
A color image is composed of three independent components: R, G, and B in the RGB space (or, equivalently, H, I, and S in the HIS space).
For a color image I(x, y) of size X × Y, x and y represent the row and column positions of the matrix element in which the pixel is located, with x ∈ [0, X−1] and y ∈ [0, Y−1]. Letting the three imaginary components of the quaternion represent the red (R), green (G), and blue (B) primary color components, with the real part being 0, the color image I(x, y) is expressed as follows:

I(x, y) = R(x, y) i + G(x, y) j + B(x, y) k.

Real color images with different colors can be converted to the RGB model. The color opponent theory [35], which describes how colors differ from one another, motivates the L*a*b* cubical color space model: the L* axis runs vertically, with 0 as black and 100 as white, while the a* and b* axes run from negative to positive values along the opponent color pairs, as shown in Fig. 4.
Segmentation of colors is done based on the total color difference between L*a*b* values:

E = √((ΔL*)² + (Δa*)² + (Δb*)²),

where E is the total difference, which is always a positive value. The finalized results are shown in Fig. 5 as preprocessing of the images.
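Assuming the standard CIE76 form of this difference, a minimal sketch (the function name is ours):

```python
import numpy as np

# Total color difference between two L*a*b* pixels, assuming the standard
# CIE76 form E = sqrt((dL*)^2 + (da*)^2 + (db*)^2); always non-negative.
def delta_e(lab1, lab2):
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

print(delta_e((50, 10, -10), (60, 0, 0)))  # difference between two sample colors
```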

Fourier Transform and the Quaternion Matrix
The color host image is represented as a quaternion matrix; the quaternion matrix as a whole is then subjected to the QFT using (13). The RGB image is converted into quaternion form [28] from (1) and (2), where the scalar part is null while the three vector components lie in the R, G, and B color space. As shown in Fig. 3, the range of each color is 0-255 in each dimension, so the quaternion of each pixel can be obtained by dividing that pixel by the maximum value of 255 [36]. For a sample pixel with RGB components (133, 211, 142), the quaternion can be computed as:

q = (133 i + 211 j + 142 k) / 255 ≈ 0.522 i + 0.827 j + 0.557 k.

After the QFT from (14), low frequencies are found in the center and high frequencies are scattered around; we apply a quaternion mask to filter the edges over the low and high frequencies.
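This per-pixel normalization can be sketched as follows (the function name is ours; the pixel values are illustrative):

```python
import numpy as np

# Pure-quaternion representation of an RGB pixel, scaled by the maximum
# value 255 as described above; the pixel values are illustrative.
def pixel_to_quaternion(r, g, b):
    """Return (w, i, j, k) with zero scalar part and RGB scaled to [0, 1]."""
    return np.array([0.0, r, g, b]) / 255.0

print(np.round(pixel_to_quaternion(133, 211, 142), 3))
```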

Quaternion Filter Mask
A quaternion mask of 3×3 is applied [36] to the image after QFT, which can detect the edges in multi-directional rotation, e.g., horizontal, vertical, and 45° [37].
Rotation is expressed in quaternion algebra using the following formula [27]:

q' = R q R̄, (25)

where R = S e^(μθ/2) = S(cos(θ/2) + μ sin(θ/2)) and μ is a quaternion of unit modulus, μ = (i + j + k)/√3; μ represents a direction in 3-space along the axis of the rotation; S is the scaling factor, which identifies the distance between the pixels; and θ is the angle of rotation. Points on the axis of rotation are invariant. The over-bar represents the quaternion conjugate (negation of the vector part), which, in this case, is also obtained by negating the angle: R̄ = e^(−μθ/2). This formula may strike readers as odd at first, but the form it takes follows from the non-commutative nature of quaternion multiplication, since coefficients on the left and right sides will, in general, have different effects.
The square brackets represent the space in which the pixel values are operated upon. This is where the filter operates, rotating pixel values about the r = g = b axis (the line of grey pixels in RGB color space). This rotation was defined by Hamilton [38]; clockwise and anticlockwise rotations are indicated with + and − signs, as in +θ/2 and −θ/2. Therefore, the concept of (25) is used, and the results are applied as in (26).
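A minimal numerical sketch of the rotation in (25) about the grey axis, with S = 1; the Hamilton product is written out so the snippet is self-contained, and the function names are ours:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product p*q of quaternions given as (w, x, y, z) 4-vectors."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate_about_grey_axis(q, theta):
    """q -> R q R-bar with R = cos(theta/2) + mu*sin(theta/2), mu = (i+j+k)/sqrt(3)."""
    mu = np.array([0.0, 1.0, 1.0, 1.0]) / np.sqrt(3)
    R = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * mu[1:]))
    Rbar = R * np.array([1, -1, -1, -1])   # conjugate: negate the vector part
    return qmul(qmul(R, q), Rbar)

# A grey pixel (r = g = b) lies on the rotation axis and is left invariant:
grey = np.array([0, 0.5, 0.5, 0.5])
assert np.allclose(rotate_about_grey_axis(grey, np.pi / 3), grey)

# A non-grey pixel changes hue but keeps its modulus (rotations are isometries):
px = np.array([0, 0.9, 0.2, 0.1])
out = rotate_about_grey_axis(px, np.pi / 3)
assert np.isclose(np.linalg.norm(out), np.linalg.norm(px))
```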

Inverse Fourier transform
The size of the rotation can be changed with respect to the size of the filter matrix [39]; for example, a 5×5 mask can detect edges at (45/2) = 22.5° in both the left and right directions, and a 7×7 mask can detect edges at (22.5/2) = 11.25° in different directions. Increasing the size of the filter can decrease the quality of edges by creating blurriness and softening the shape.
After applying the filter mask, the next step is to take the inverse QFT to obtain the filtered edges. Our main consideration is to test the results on images of different qualities and types; therefore, our selected images contain all aspects of satellite images, such as water, mountains, multiple objects, similar lighting, and blurred effects. Fig. 2 gives a clear perspective of the study design: the image is first segmented using color image processing, then the QFT is used to sort the frequencies from lower to higher, and a 3×3 quaternion mask is applied. An inverse QFT is applied to obtain the edges from the high frequencies passed by that mask. The figure clearly shows that the frequencies of the colors were initially at different levels, while, after edge detection, only high-level frequencies remain in the image.
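Putting the pieces together, the Fig. 2 pipeline can be sketched end to end. This is an illustrative reimplementation, not the paper's MATLAB code: the QFT here is the one-sided Cayley-Dickson form (two complex FFTs), and a simple circular high-pass mask stands in for the paper's 3×3 quaternion mask; the cutoff and image sizes are arbitrary:

```python
import numpy as np

def edge_map(rgb):
    """rgb: float array (M, N, 3) in [0, 1]. Returns an (M, N) edge-magnitude map."""
    M, N, _ = rgb.shape
    # Cayley-Dickson split of the pure quaternion image q = r*i + g*j + b*k:
    # q = A + B*j with A = 0 + r*i and B = g + b*i.
    A = np.zeros((M, N)) + 1j * rgb[..., 0]
    B = rgb[..., 1] + 1j * rgb[..., 2]
    FA, FB = np.fft.fft2(A), np.fft.fft2(B)
    # High-pass mask: suppress a block of low frequencies around DC
    # (a stand-in for the paper's 3x3 quaternion mask; cutoff is illustrative).
    u = np.minimum(np.arange(M), M - np.arange(M))[:, None]
    v = np.minimum(np.arange(N), N - np.arange(N))[None, :]
    keep = (u + v) > 4
    a, b = np.fft.ifft2(FA * keep), np.fft.ifft2(FB * keep)
    # Edge strength = modulus of the filtered quaternion at each pixel.
    return np.sqrt(a.real**2 + a.imag**2 + b.real**2 + b.imag**2)

# A constant-color image has only DC energy, so its edge map is (near) zero:
flat = np.full((16, 16, 3), 0.3)
assert np.allclose(edge_map(flat), 0.0, atol=1e-10)

# A vertical color step responds most strongly near the boundary columns:
step = np.zeros((16, 16, 3))
step[:, 8:, :] = 1.0
e = edge_map(step)
assert e[:, 7:9].mean() > e[:, 3:5].mean()
```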

Results & Discussion
The stepwise implementation of the proposed algorithm is as follows: 1) read the RGB image and represent each pixel as a pure quaternion; 2) apply the QFT to the quaternion image; 3) apply the 3×3 quaternion filter mask at the chosen scale; 4) apply the inverse QFT; and 5) take the modulus of the result to obtain the edge map.

QFT With Filter Mask (3×3) at a Different Scale
Here, we applied our proposed method to different types of satellite images, as shown in Fig. 6. Each image has different properties, for which we considered further selection and results testing. Image 1 represents the total land area; Image 2 represents the land with a path; Images 3 and 4 show various ships for shape recognition; Images 5 and 6 show partial land and sea; Image 7 shows waves of water in the sea; Image 8 exhibits multiple ships of different sizes. Scaling levels of S = 1 and S = 2 were applied to find more detail in the images from different angles. The results are quite interesting when examined in depth. Image 1(e) shows the strong edges of the objects, clearly showing the outline of the building architecture, while 1(f) shows the frequencies peaking at the edges of the image when more detailed information is included. In Fig. 7, Image 2(e) shows the clear path of the roads in the land area, with less of this visible at the other scaling in 2(f). Images 3, 4, and 8 give clear visibility of objects at a low scale (3(c), 4(c), and 8(c)) as well as at a high scale (3(e), 4(e), and 8(e)). If we focus on the colored spectrum in Image 8, it clearly provides more detail about the issues of satellite image noise. At a low-scale frequency (8(d)), the red color is visible, but if we increase the scaling in more directions (8(f)), the blue color becomes more dominant; the average color of the image is greenish-blue. This is an important implication for RGB image processing: using quaternions can help in detecting objects based on colors and segmenting them at different scales. The best application of these features is in medical ultrasound [40], in which colors are important features for image segmentation; this is also the case in the diagnosis of bone fractures. However, our study focuses on remote sensing images.
Image 7 shows waves in water, which on a low scale (7(d)) do not provide a satisfactory result, but on a high scale (7(e)), the peaks of the wave's edges are visible.

Qualitative Analysis:
The work of Canny, Sobel, and Prewitt, as mentioned previously, was effective on grayscale images; therefore, to validate and test our new quaternion-based Fourier transform approach, we converted our images to grayscale. There are different techniques for the conversion of RGB to grayscale images, but the most suitable conversion handles the hue and saturation of the image while retaining the luminance [36]. Using this strategy, we take an RGB image and convert it to grayscale by computing luminance coefficients. This is done by forming a weighted sum of the color components (the R, G, and B segments) using Equation (27). The same coefficients are used for estimating grayscale values and the luminance value Ey.
The following formula from Rec. ITU-R BT.601-7 is used for grayscale conversion [41]:

Ey = 0.299 R + 0.587 G + 0.114 B. (27)

Fig. 7 shows a detailed comparison of the images for all techniques against the proposed algorithm following the grayscale conversion. The validity of the results can be seen from the quality of the extracted edges, which make objects more visible, with smooth outlines. In Image 2(d), the path of the road is clearly visible in the proposed algorithm; it can also be noted in Images 5(d) and 7(d) that the ship object is clearly visible compared to the results of the other algorithms. Images 8(a)-(c) highlight the noise as an object in a satellite image, while our algorithm shows minor waves in the sea. In Image 1(d), the building shapes are more prominently visible, even after grayscale conversion.
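In code, the conversion is a weighted sum per pixel (a sketch; `rgb_to_gray` is our own name):

```python
import numpy as np

# Rec. ITU-R BT.601 luma weighting for the grayscale conversion above
# (Eq. (27)): Ey = 0.299 R + 0.587 G + 0.114 B.
def rgb_to_gray(rgb):
    """rgb: float array (..., 3) in [0, 1]; returns the luminance Ey."""
    return rgb @ np.array([0.299, 0.587, 0.114])

assert np.isclose(rgb_to_gray(np.array([1.0, 1.0, 1.0])), 1.0)    # white
assert np.isclose(rgb_to_gray(np.array([0.0, 1.0, 0.0])), 0.587)  # pure green
```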

Quantitative Analysis:
A quantitative evaluation of the proposed algorithm was conducted using the following methods.
4.2.2.a. SSIM (Structural Similarity Index): Structural information in an image can be explored by separating the effects of three components: luminance, contrast, and structure. The luminance and contrast related to the structure of an object are taken together as the definition of the structural information in the image [42]:

SSIM(x, y) = [l(x, y)]^α [c(x, y)]^β [s(x, y)]^γ,

where l(x, y) = (2 μx μy + C1)/(μx² + μy² + C1), c(x, y) = (2 σx σy + C2)/(σx² + σy² + C2), s(x, y) = (σxy + C3)/(σx σy + C3), and μx, μy, σx, σy, and σxy are the local means, standard deviations, and cross-covariance for images x and y. With α = β = γ = 1 (the default exponents) and C3 = C2/2, the index reduces to:

SSIM(x, y) = ((2 μx μy + C1)(2 σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2)),

with C1, C2, and C3 stabilizing the luminance, contrast, and structural terms, respectively.
4.2.2.b. Entropy: [34] proposed the entropy, which can be defined as:

H = − Σ_i p_i log2 p_i, (29)

where p_i is the occurrence probability of a given symbol i; here, the symbols are the pixel values. Table 2 gives the entropy results of all images computed from (29). Images 3, 5, and 8 show that our proposed algorithm has a lower entropy value, which means that it avoids much of the noise of the sea images and is helpful in identifying the ship objects. The other results are still satisfactory, as the proposed method outperforms the Canny algorithm and resists a great deal of noise in detailed images, giving clear information about the objects.
4.2.2.c. Mean Square Error and PSNR: The mean-squared error and peak signal-to-noise ratio [43] are useful measures of restoration results and of comparative image quality. The mean-squared error (MSE) between two images g(i, j) and ĝ(i, j) is:

MSE = (1 / MN) Σ_{i=1}^{M} Σ_{j=1}^{N} [ĝ(i, j) − g(i, j)]².

The peak signal-to-noise ratio (PSNR) is used to measure the distortion between the original satellite image and the edge image. The formula is as follows:

PSNR = 10 log10(255² / MSE),

where g(i, j) and ĝ(i, j) represent the gray values of the pixels of the original gray image and the edge image, respectively, at coordinates (i, j), and M and N represent the numbers of image rows and columns. Table 3 shows the MSE values of the images for all types of edge detectors.
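The entropy, MSE, and PSNR measures above can be sketched directly for 8-bit grayscale images (a straightforward reimplementation with our own function names, not the paper's MATLAB code; SSIM is omitted, as library implementations are commonly used for it):

```python
import numpy as np

def entropy(img):
    """Shannon entropy H = -sum(p_i * log2(p_i)) over the 8-bit histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def mse(a, b):
    """Mean-squared error between two images of equal shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """PSNR = 10*log10(peak^2 / MSE); infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

flat = np.zeros((8, 8), np.uint8)     # a single gray level: no uncertainty
assert entropy(flat) == 0.0
noisy = flat.copy()
noisy[0, 0] = 255                     # one differing pixel
assert np.isclose(mse(flat, noisy), 255 ** 2 / 64)
```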
The MSE values from the Sobel and Prewitt edge detectors are high overall, while the proposed method provides the lowest error values. Image 4 gives similar values for Sobel, Prewitt, and the proposed methodology because it contains less object detail than Image 7. PSNR values are shown in Table 4, which illustrates that our proposed algorithm performs well against noise. In this study, SSIM, entropy, MSE, and PSNR are used for the validation of the results of the proposed algorithm. This study is focused on effectively implementing the QFT with the geometric features of Clifford algebra to give the best performance on color images. Table 5 gives a relative comparison in terms of the features used by the other algorithms adopting the quaternion Clifford approach. The scheme of [27] is limited to a directional approach over fixed angles, while that of [23] has not yet been implemented on color images to effectively test its results. The scheme of [44] implemented the Hardy filter with a linear direction; however, its impact in multiscale directions has not been examined. Another scheme [64] proposes a hardware implementation of an edge detection method for color images that exploits the definition of the geometric product of vectors given in the CA framework to extend the convolution operator and the Fourier transform to vector fields. These methods are yet to be tested on remote sensing images, where high color density and heavy noise may yield a different level of performance. Accurate urban object extraction from remotely sensed images is a very challenging task due to the varied geometric features of urban scenes. Various types of objects, such as buildings, vehicles, low vegetation, trees, and roads, can be found in a small local neighborhood, and this makes it difficult to extract them reliably [45], [46]. Our approach can address the detection of such objects through vector algebra, with accurate feature extraction for each shape.
Most existing algorithms regard a color image as a set of vectors, but vector-based processing requires a large number of calculations, and the implementation process is relatively complicated. Sangwine [47] proposed a vector-based filter based on hypercomplex masks, which was inspired by the Sobel, Prewitt, and Kirsch filters for edge detection. Later, Orouji et al. [48] extended this work to a hardware-based vector implementation of edge detection, in which the Rotor-Based and Prewitt-Inspired Sangwine (RBS and PIS) operators were used jointly as among the best geometric algebra operators for solving the problem of color edge detection. In recent years, color image processing methods based on quaternions have been widely used and have achieved better results than traditional algorithms [49][50][51]. At the same time, in order to efficiently extract and detect edge feature information, we have considered the Smallest Univalue Segment Assimilating Nucleus (SUSAN) algorithm for low-level image processing proposed by Smith [52]. The SUSAN algorithm is simple and does not require much processing time to detect the corners of objects; it has advantages in edge positioning, connectivity at intersections, and noise suppression. One article [53] proposed a SUSAN color image edge detection method based on CIELAB space and realized SUSAN edge detection based on color differences in CIELAB space. However, others [54] have pointed out that the conversion from the RGB model to other color models (such as CIELAB or HSV) consumes a great deal of calculation time and storage space; moreover, it has been pointed out [55] that such nonlinear conversion loses information contained in the original image.
The transformed image is very sensitive to noise, whereas edge detection performed directly in RGB space can achieve near-optimal results [56]. This study therefore expresses color image information with quaternions in RGB space and integrates it with geometric algebra, avoiding nonlinear color space transforms as far as possible and obtaining edge point sets efficiently and accurately through the vector processing of colors.
All the existing frequency domain techniques developed for complex spectra and FFTs should generalize to quaternion spectra and FFTs, although non-commutative quaternion multiplication will require careful analysis in some cases [57]. There is further scope for applying the QDFT to monochrome images because of the truly two-dimensional nature of the transform, as pointed out by Ell [58] and discussed briefly above. With many satellites now in orbit, a large amount of remote sensing image data has been produced: satellites for agriculture, resources, environment, and disaster resistance provide abundant remote sensing imagery, and the data volume is very large [59]. New challenges have therefore arisen in preprocessing, target recognition and extraction, edge detection, and early warning. Clifford algebra also provides solutions to various problems in physics [60][61][62][63]. At the same time, in the process of acquiring, transmitting, and converting high-resolution remote sensing images, various factors (e.g., instrument problems) generate noise that interferes with the images [45]. Edge detection algorithms may face many challenges in the future because of the noise of high-density colors; significant difficulties must still be overcome to achieve good edge detection on high-resolution remote sensing images.
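As a sketch of why complex FFT machinery carries over, a left-sided 2-D quaternion DFT about a unit pure axis mu can be assembled from ordinary per-component complex FFTs via the identity exp(-mu*theta) = cos(theta) - mu*sin(theta). The implementation below is illustrative only, under that one convention (left-sided, negative exponent); it is not claimed to be the exact transform variant used in this paper:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternion arrays stored as (..., 4) = (w, x, y, z)."""
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    return np.stack([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ], axis=-1)

def qft2(q, mu):
    """Left-sided 2-D quaternion DFT of an (M, N, 4) real quaternion field
    about the unit pure axis mu (shape (4,), zero scalar part).
    Since exp(-mu*theta) = cos(theta) - mu*sin(theta), each of the four real
    components needs only one standard complex FFT:
      Re(FFT) = sum cos(theta)*component, -Im(FFT) = sum sin(theta)*component.
    """
    spec = np.fft.fft2(q, axes=(0, 1))   # per-component complex FFT
    cos_part = spec.real
    sin_part = -spec.imag
    mu_field = np.broadcast_to(mu, q.shape)
    return cos_part - qmul(mu_field, sin_part)
```

Because the transform reduces to four real-input complex FFTs plus one quaternion product, its cost stays within a constant factor of the ordinary FFT; the non-commutativity shows up only in the fixed side on which mu multiplies the sine part.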

Conclusion
This study gives an advanced solution to the problems of edge detection in color images, exploring a new area of research based on Clifford algebra and quaternions. The proposed methodology provides a new approach to image processing. The characteristics of the quaternion Fourier transform are analyzed, and a practical implementation is presented: the color image is represented as a quaternion matrix, the quaternion Fourier transform is applied, a mask suppresses the lower frequencies of the image while retaining the higher frequencies, and the inverse quaternion Fourier transform recovers the edge map. The results of the proposed methodology are compared with traditional edge detection approaches; entropy, SSIM, MSE, and PSNR show that, on all validation criteria, our proposed methodology outperforms existing methods.
Finally, future work will apply this technique to the medical examination of bone fractures, including 3D image processing, to disaster impact assessment, and to video processing for real-time edge detection. Another useful direction is the classification of vegetation, roads, trees, and other urban objects in remotely sensed images. This study also opens new possibilities for image segmentation, object detection, and image classification. With the development of image processing technology, the extraction of remote sensing image information has shifted from manual visual interpretation to computer-driven automatic or semi-automatic extraction, which effectively addresses the timeliness limitations of manual work. Because high-resolution remote sensing images feature complex scenes and multiple forms of the same target, feature extraction, building edge detection, extraction rate, information accuracy, and timeliness remain challenges in practical applications, and progress on them is needed to obtain more accurate results.