Edge-Aware Filter Based on Adaptive Patch Variance Weighted Average

Edge-aware smoothing is an essential tool for computer vision, graphics and photography. In this paper, we develop a new and efficient local weighted average filter for edge-aware smoothing. The proposed filter can use guidance information, which permits an iterative filtering process. Since the weights of the proposed filter depend on the local variance, the implementation requires linear filters only, leading to $\mathcal{O}(N_{pix})$ computational complexity. We also present statistical analysis and simulations which provide new insights into its computational efficiency and its relationship with the bilateral filter. The performance of the proposed filter is comparable to that of state-of-the-art filters in many applications including: edge-preserving smoothing, compression artifact removal, structure separation, edge extraction, non-photorealistic image rendering, saliency detection, detail magnification and multi-focus image fusion.

The associate editor coordinating the review of this manuscript and approving it for publication was Senthil Kumar.
Local filters model an output pixel as a weighted average of the surrounding pixels as follows:

J(m) = (1 / Σ_{n∈Ω_m} w(n, m)) Σ_{n∈Ω_m} w(n, m) I(n)   (1)

where w(n, m) is a weight function measuring the similarity between two pixels at locations n and m, I(n) is the pixel intensity at location n of the input image I, and Ω_m is the set of location indices of the pixels surrounding m. Global filters, in contrast, compute the output by minimizing a cost that combines a data fidelity term with a regularizer that encodes the filter designer's knowledge about the expected output image J. Global filters can be formulated through either a variational model or a Bayesian estimation model. The optimization problem can be convex or non-convex, continuous or discrete. Examples of global filters in the literature include the weighted-least squares filter (WLS) [4], the L_0 filter [6], the relative total variation (RTV) filter [7], the region-covariance filter (RC) [25], the static-dynamic filter [26], and the iterative global optimization filter (IGO) [27]. Transform domain filters can be summarized in the following general model:

J = T^{-1}(f(T(I)))

where T and T^{-1} are the forward transform and inverse transform operations, and f : R → R is usually a pointwise nonlinear function. Filters in this category include the classic Wiener filter [3], edge-avoiding wavelets [28], guided wavelet shrinkage [29], the mixed-domain filter [30], and the domain transform filter [5]. A related type of filter is based on multi-resolution techniques; examples include the local-Laplacian filter [31] and the mixed-domain filter [30]. A more recent attempt has been towards taking a machine learning approach [32] in which the image filtering process is formulated as a parametric model in the form of a deep neural network. Parameters of the model are learned from a large dataset of image patches. The filtering techniques in this group can generally be sub-categorized into two groups: learned priors [33], [34] and end-to-end learned models [35], [36].
This work is partly motivated by recent developments in edge-aware filtering, especially those based on the idea of guided filter [8]. An intriguing question is: can we develop a new local weighted average filter in the form of equation (1)?
The new filter should retain the computational efficiency of the guided filter, and should avoid the computational complexity of the bilateral filter. The main contributions of this paper summarized below aim to answer this question.
• We develop a new statistically motivated local weighted average filter. The intuition behind the proposed filter is that a larger weight should be given to a pixel in a flat area, while a smaller weight should be given to a pixel on an edge or in a highly textured area. The variance of the patch can be used to measure the flatness, and the weight is thus defined as a decreasing function of the patch variance. We then adapt this filter to use the bilateral weight, a guidance image [37], and the idea of rolling guidance [10].
• We show that the proposed filter is related to the bilateral filter [2] in that its range weight is calculated from the patch variance, which is used to measure the similarity between two pixels.
• We further show that the proposed filter not only retains the same O(N_pix) computational complexity as that of the guided filter, but also produces comparable or better results in a wide range of applications where edge-aware filters are required.

The organization of this paper is as follows. We first review the basic idea of the guided filter in Section II, which provides a foundation for the development of the proposed filter. In Section III, we present the theoretical development of the proposed filter as well as a detailed discussion of its properties, such as its computational complexity and its relationship with the bilateral filter. In Section IV, we present examples of typical applications of the proposed filter including: texture-structure separation, detail magnification, multi-focus image fusion, edge detection, compression artifact removal, and salient object detection. We also compare the performance of the proposed filter with related state-of-the-art filters [4]-[11], [18], [26], [27]. In Section V, we present a summary of the main idea of the proposed filter and its applications.
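The weighting intuition above (larger weights for flat patches, smaller weights near edges) can be sketched in a few lines. The Gaussian form of the decreasing function and the sample patch values below are illustrative assumptions, not the paper's exact choices:

```python
import math
import statistics

def patch_weight(patch, sigma_r):
    """Weight of a patch: a decreasing function of its variance.
    The Gaussian form exp(-var / (2*sigma_r**2)) is one illustrative
    choice of decreasing function f."""
    return math.exp(-statistics.pvariance(patch) / (2.0 * sigma_r ** 2))

flat = [10.0, 10.1, 9.9, 10.0]    # nearly constant patch -> large weight
edge = [0.0, 0.0, 100.0, 100.0]   # patch straddling an edge -> small weight

w_flat = patch_weight(flat, sigma_r=10.0)
w_edge = patch_weight(edge, sigma_r=10.0)
print(w_flat > w_edge)  # True: flat areas dominate the average
```

Flat patches thus dominate the weighted average, while patches straddling an edge are effectively excluded, which is what preserves the edge.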

II. THE GUIDED FILTER AND ITS WEIGHTED VERSION
We briefly review the main idea of the guided filter and its weighted version in this section. We use the following notation. The image to be processed and the guidance image are represented as I and G, respectively. An image patch consists of pixels from a square neighborhood. The set of location indices for pixels in the patch centered at location k is denoted Ω_k. The number of pixels in the patch is N = |Ω_k|. A pixel in the patch is denoted I(n_k), where n_k ∈ Ω_k is the location index within the patch. We also define the following mean-notation:

µ_I(k) = (1/N) Σ_{n_k∈Ω_k} I(n_k)

Similarly, we define µ_G(k) as the corresponding patch mean for image G, µ_GI(k) as the patch mean for the pixel-wise product image G(n_k) × I(n_k), and µ_GG(k) as the patch mean of the pixel-wise square image G(n_k) × G(n_k).
Using these notations, we can define the patch variance as

σ²_G(k) = µ_GG(k) − µ²_G(k)

and define the patch covariance as

σ_GI(k) = µ_GI(k) − µ_G(k)µ_I(k)

There are two main ideas in the original guided filter: patch modelling and model averaging. In patch modelling, a linear model is assumed for pixels in the patch:

Ĵ(n_k) = a(k)G(n_k) + b(k)

The two model parameters a(k) and b(k) are determined through a regularized optimization:

(a(k), b(k)) = arg min Σ_{n_k∈Ω_k} (a(k)G(n_k) + b(k) − I(n_k))² + εa²(k)   (8)

It can be shown that the results are

a(k) = σ_GI(k) / (σ²_G(k) + ε),  b(k) = µ_I(k) − a(k)µ_G(k)

Once the model parameters are determined, we can use the model to generate the filter result. The key point is that a pixel I(m) belongs to N patches. For example, let m ∈ Ω_k. The kth patch model generates a result

Ĵ_k(m) = a(k)G(m) + b(k)

Therefore, there are N results due to N patch models. Let Ω_m denote the set of patch indices such that the pixel I(m) is in the patch k ∈ Ω_m. Model averaging is a principled technique for combining these results. The original guided filter takes the simplest form of model averaging by taking an average of the results:

J(m) = ā(m)G(m) + b̄(m)

where, for k ∈ Ω_m, ā(m) and b̄(m) are the averages of a(k) and b(k) over the N patch models. There have been several recent attempts to improve the performance of the original guided filter. The main idea behind these attempts is to replace the simple average operation by a weighted average over k ∈ Ω_m [18]:

J(m) = (Σ_{k∈Ω_m} w(k)(a(k)G(m) + b(k))) / (Σ_{k∈Ω_m} w(k))

One proposal is to define the weight as a function of the patch variance

w(k) = f(σ²_G(k) / σ²_r)

where σ_r is a user defined scale parameter and f(x) is a decreasing function of |x|.
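As an illustration of the patch modelling and model averaging steps, here is a minimal 1D sketch of the guided filter (borders are simply truncated; this is not the optimized O(N_pix) implementation, and the test signal is an illustrative assumption):

```python
import statistics as st

def guided_filter_1d(I, G, radius, eps):
    """Minimal 1D guided filter sketch (He et al. [8]): fit a linear
    model J = a(k)*G + b(k) in every patch, then average a and b over
    all patches that contain each pixel. Borders are truncated."""
    n = len(I)
    a, b = [0.0] * n, [0.0] * n
    for k in range(n):
        lo, hi = max(0, k - radius), min(n, k + radius + 1)
        mu_G = st.fmean(G[lo:hi])
        mu_I = st.fmean(I[lo:hi])
        cov_GI = st.fmean(g * i for g, i in zip(G[lo:hi], I[lo:hi])) - mu_G * mu_I
        var_G = st.fmean(g * g for g in G[lo:hi]) - mu_G ** 2
        a[k] = cov_GI / (var_G + eps)   # patch slope a(k)
        b[k] = mu_I - a[k] * mu_G       # patch offset b(k)
    J = []
    for m in range(n):                  # model averaging over patches containing m
        lo, hi = max(0, m - radius), min(n, m + radius + 1)
        J.append(st.fmean(a[lo:hi]) * G[m] + st.fmean(b[lo:hi]))
    return J

step = [0.0] * 4 + [1.0] * 4
out = guided_filter_1d(step, step, radius=2, eps=1e-6)  # self-guided
```

With a small eps the step edge survives; a large eps pushes a(k) toward zero and the output toward smoothed patch means, which is exactly the regime exploited in the next section.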

III. THE PROPOSED FILTER

A. THE BASIC IDEA
We consider the case in which ε (in equation (8)) is set to a very large number such that the patch parameter a(k) can be considered as taking almost the same value for all patches. Let us denote this value α. We can thus write the weighted guided filter output as

J(m) = αG(m) + τ_I(m) − ατ_G(m)

where τ_I(m) and τ_G(m) are weighted averages, over the N patches centered at locations around m, of the mean images µ_I and µ_G, as follows:

τ_I(m) = (Σ_{k∈Ω_m} w(k)µ_I(k)) / (Σ_{k∈Ω_m} w(k))

and

τ_G(m) = (Σ_{k∈Ω_m} w(k)µ_G(k)) / (Σ_{k∈Ω_m} w(k))

B. A PATCH VARIANCE WEIGHTED AVERAGE (VWA) FILTER AND EXTENSIONS
In this paper, we only focus on the special case where α = 0 (corresponding to ε → ∞). Let us first consider the simplest case I = G. The resulting filter will be called the patch variance weighted average (VWA) filter, which can be written as

J(m) = (Σ_{k∈Ω_m} w(k)µ_I(k)) / (Σ_{k∈Ω_m} w(k))   (19)
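A direct 1D sketch of the VWA filter follows: a weighted average of patch means, with the Gaussian weight used as one illustrative choice of the decreasing function f (parameter values and the step signal are illustrative assumptions):

```python
import math
import statistics as st

def vwa_filter_1d(I, radius, sigma_r):
    """Sketch of the patch variance weighted average (VWA) filter:
    the output at m is a weighted average of the patch means mu_I(k)
    over all patches containing m, with weights decreasing in the
    patch variance."""
    n = len(I)
    mu, w = [0.0] * n, [0.0] * n
    for k in range(n):
        patch = I[max(0, k - radius):min(n, k + radius + 1)]
        mu[k] = st.fmean(patch)
        w[k] = math.exp(-st.pvariance(patch) / (2 * sigma_r ** 2))
    J = []
    for m in range(n):
        lo, hi = max(0, m - radius), min(n, m + radius + 1)
        J.append(sum(w[k] * mu[k] for k in range(lo, hi)) /
                 sum(w[k] for k in range(lo, hi)))
    return J

step = [0.0] * 4 + [1.0] * 4
out = vwa_filter_1d(step, radius=1, sigma_r=0.1)
```

Patches straddling the step receive near-zero weights, so the output on each side is dominated by the flat patches of that side and the edge is preserved.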

1) THE BILATERAL VWA FILTER
We can change the filter by replacing µ_I(k) in (19) with the original image I(k) such that the new filter is in the form of a local average:

J(m) = (Σ_{k∈Ω_m} w(k)I(k)) / (Σ_{k∈Ω_m} w(k))

where k ∈ Ω_m is the location index of a pixel in the patch centered at location m. We should point out that the index k is now used to represent the pixel location rather than the patch index. The weight is calculated as

w(k) = exp(−σ²_I(k) / (2σ²_r))

where σ²_I(k) is the variance of the patch centered at location k and σ_r is a user defined scale parameter, which is further specified in terms of a parameter s that controls the scale of smoothing. Inspired by the bilateral filter, the VWA filter is further extended to have the following form:

J(m) = (Σ_{k∈Ω_m} w_1(k)w_2(k)I(k)) / (Σ_{k∈Ω_m} w_1(k)w_2(k))

where

w_1(k) = exp(−||m − k||²_2 / (2σ²_s))

σ_s is a user defined spatial scale parameter and ||m − k||²_2 is the squared Euclidean distance between the two pixel locations. We set w_2(k) = w(k). While the filtering effect of the VWA filter is completely controlled by the patch size, the smoothing effect of the bilateral VWA filter is controlled by adjusting σ_s and s.

2) GUIDED VWA (GVWA) FILTER AND ROLLING GUIDANCE FILTER
We now consider the case I ≠ G, which leads to the use of a guidance image. There are two main ideas in using the guidance image: (1) using the guidance image to calculate the weights, e.g., as in the joint-bilateral filter [1], and (2) iterative guidance filtering, which is called the rolling guidance filter [10]. Following the first idea, we can easily extend the VWA filter by using the guidance image to calculate the weights, i.e.,

w_2(k) = exp(−σ²_G(k) / (2σ²_r))

where σ²_r is also calculated using the guidance image. The resulting filter will be called the guided VWA (GVWA) filter, which is mathematically represented as

J = GVWA(I, G; θ)

where J, I, and G represent the output, input and guidance images, respectively. The symbol θ represents the collection of all user defined parameters θ = {σ_s, s, N_iter}, where N_iter is the number of iterations to be discussed next.
Following the second idea, the rolling guidance filter can be defined as one of the following iteration methods, each of which stops when the maximum number of iterations is reached (n = N_iter):
• Type-I: Rolling guidance G only, where J^n = GVWA(I, J^{n−1}; θ).
• Type-II: Rolling input I only, where J^n = GVWA(J^{n−1}, G; θ).
• Type-III: Rolling input I and guidance G, where J^n = GVWA(J^{n−1}, J^{n−1}; θ).
For all three types we can use the special self-guided case, i.e., G = I. In Fig. 1 we show the smoothing effect of the 3 types of rolling filters using the original image as guidance and varying the number of iterations. We can see that Type-I has the least smoothing power, while Type-II and Type-III produce similar results with a stronger smoothing effect than that of Type-I. The weights are recalculated each time the guidance image is updated, so the Type-I and Type-III filters are computationally more expensive than Type-II, which requires the weights to be calculated only once. In the following, we will focus on using the Type-II iteration only.
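A 1D sketch of the Type-II iteration follows: the guidance-derived weights are computed once, and only the patch means are refreshed on each pass (Gaussian decreasing function, parameter values and the test signal are illustrative assumptions):

```python
import math
import statistics as st

def gvwa_weights(G, radius, sigma_r):
    """Guidance-derived patch weights (illustrative Gaussian f)."""
    n = len(G)
    return [math.exp(-st.pvariance(G[max(0, k - radius):min(n, k + radius + 1)])
                     / (2 * sigma_r ** 2)) for k in range(n)]

def gvwa_pass(I, w, radius):
    """One GVWA pass: weighted average of patch means of the input."""
    n = len(I)
    mu = [st.fmean(I[max(0, k - radius):min(n, k + radius + 1)]) for k in range(n)]
    J = []
    for m in range(n):
        lo, hi = max(0, m - radius), min(n, m + radius + 1)
        J.append(sum(w[k] * mu[k] for k in range(lo, hi)) /
                 sum(w[k] for k in range(lo, hi)))
    return J

def rolling_type2(I, G, radius, sigma_r, n_iter):
    """Type-II rolling: the guidance G stays fixed, so the weights are
    computed only once; each pass filters the previous output."""
    w = gvwa_weights(G, radius, sigma_r)
    J = list(I)
    for _ in range(n_iter):
        J = gvwa_pass(J, w, radius)
    return J

step = [0.0] * 4 + [1.0] * 4
out = rolling_type2(step, step, radius=1, sigma_r=0.1, n_iter=3)
```

Because the guidance is fixed, repeated passes progressively smooth small details while the weights keep protecting the large-scale edge.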

C. DISCUSSIONS

1) IMPLEMENTATION AND COMPUTATIONAL COMPLEXITY
We can see from Section III-B1 that the computational complexity of the proposed filter is the same as that of the guided filter, which is O(N_pix). This is verified by a brute-force MATLAB implementation of the GVWA Type-II and Type-III algorithms applied to images of different sizes. Type-I was omitted because its running time is the same as that of the Type-III method. Fig. 2 shows that the computational time is a linear function of the size of the image and that Type-II has a lower running time than Type-III. This is because the number of iterations used in this experiment is N_iter = 10, so the weights need to be calculated 10 times when using GVWA Type-III and only once when using GVWA Type-II. Updating the weights requires the use of 3 linear filters (f_mean), two element-wise multiplications and one element-wise division, which significantly increases the running time when performed on each iteration. Such calculations require significantly more computation time when an application requires a large number of iterations. Since GVWA Type-II is more efficient, for the rest of the paper we only use this method for applications. Fig. 2 also shows the running time of the guided filter (GIF) [8] (code: http://kaiminghe.com/eccv10/) when applied for 10 iterations to the same image sizes. We can see that our filter is faster than the GIF because our method does not need to compute the patch covariance. The steps of the implementation of the GVWA filter are summarized in Algorithm 1, where ``.*'' is the symbol for the element-wise multiplication operation, ``./'' is the symbol for the element-wise division operation, and f_mean and f_gauss are a mean and a Gaussian filter, respectively. The size of the filter kernel (patch size) is determined by the parameter σ_s. In our implementation, the patch size is defined as: patchSize = floor(4σ_s) + 1.
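The linear-filter-only structure of Algorithm 1 can be sketched in 1D as follows. This is a single pass built entirely from mean filters and element-wise operations; the Gaussian weight and truncated borders are illustrative simplifications, and the spatial f_gauss weighting is omitted:

```python
import math

def box_mean(x, radius):
    """O(n) sliding mean via prefix sums: the linear filter f_mean."""
    pre = [0.0]
    for v in x:
        pre.append(pre[-1] + v)
    n = len(x)
    return [(pre[min(n, i + radius + 1)] - pre[max(0, i - radius)]) /
            (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def gvwa_pass_linear(I, G, radius, sigma_r):
    """One GVWA-style pass using only mean filters plus element-wise
    multiplications and one element-wise division, mirroring the
    O(N_pix) structure of Algorithm 1."""
    mu_G = box_mean(G, radius)
    mu_GG = box_mean([g * g for g in G], radius)
    var_G = [mgg - mg * mg for mgg, mg in zip(mu_GG, mu_G)]       # patch variance
    w = [math.exp(-max(v, 0.0) / (2 * sigma_r ** 2)) for v in var_G]
    mu_I = box_mean(I, radius)
    num = box_mean([wi * mi for wi, mi in zip(w, mu_I)], radius)  # f_mean(w .* mu_I)
    den = box_mean(w, radius)                                     # f_mean(w)
    return [a / b for a, b in zip(num, den)]                      # element-wise ./

step = [0.0] * 4 + [1.0] * 4
out = gvwa_pass_linear(step, step, radius=1, sigma_r=0.1)
```

The normalizing window counts cancel in the final division, so the ratio of the two box means equals the weighted average of patch means; every step costs O(N_pix) regardless of the patch size.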
For a color image, the weight for the {R,G,B} components is the same:

w(k) = exp(−σ²_I(k) / (2σ²_r))

where the patch variance σ²_I(k) is replaced by the maximum over the three components:

σ²_I(k) = max{σ²_R(k), σ²_G(k), σ²_B(k)}

2) RELATIONSHIP WITH THE BILATERAL FILTER
The bilateral filter is defined as:

J(m) = (Σ_{k∈Ω_m} w_s(k)w_r(k)I(k)) / (Σ_{k∈Ω_m} w_s(k)w_r(k))

where Ω_m is the set of pixel indices of a patch centered at I(m). The spatial weight w_s(k) and the range weight w_r(k), with user defined scale parameters σ_s and σ_r, are defined as:

w_s(k) = exp(−||m − k||²_2 / (2σ²_s)),  w_r(k) = exp(−(I(m) − I(k))² / (2σ²_r))

The bilateral filter is computationally expensive because the range weight involves the pixel to be processed. As a result, a double for-loop is required for a brute-force implementation. The outer loop runs through all pixels of the input image, while the inner loop runs through a patch of pixels centered at pixel I(m). Let the number of pixels of the image be N_pix and the number of pixels in the patch be M; the computational complexity is then O(N_pix M).
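A brute-force 1D sketch makes the double loop, and hence the O(N_pix M) cost, explicit (Gaussian weights as above; borders simply truncated; parameter values are illustrative):

```python
import math

def bilateral_1d(I, radius, sigma_s, sigma_r):
    """Brute-force 1D bilateral filter: the outer loop visits all
    N_pix pixels and the inner loop visits the M pixels of each
    patch, hence the O(N_pix * M) complexity."""
    n = len(I)
    J = []
    for m in range(n):                                                # outer: N_pix
        num = den = 0.0
        for k in range(max(0, m - radius), min(n, m + radius + 1)):   # inner: M
            w_s = math.exp(-((m - k) ** 2) / (2 * sigma_s ** 2))
            w_r = math.exp(-((I[m] - I[k]) ** 2) / (2 * sigma_r ** 2))
            num += w_s * w_r * I[k]
            den += w_s * w_r
        J.append(num / den)
    return J

step = [0.0] * 4 + [1.0] * 4
out = bilateral_1d(step, radius=2, sigma_s=2.0, sigma_r=0.1)
```

Note that w_r depends on the center pixel I(m), so it cannot be precomputed once per pixel k; this is exactly the dependence the patch-variance approximation in the next subsection removes.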
Comparing the proposed filter with the bilateral filter, we can see that the spatial weight is the same: w_1(k) = w_s(k). The difference is in the way the range weight is calculated. In the following, we show that the proposed bilateral VWA filter is related to the bilateral filter in the following sense:

exp(−(G(m) − G(k))² / (2σ²_r)) → exp(−σ²_G(k) / (2σ²_r))   (38)

where σ²_G(k) is the variance of a patch centered at G(k) and k ∈ Ω_m. We use the symbol ''→'' to indicate that the left term can be approximated/replaced by the right term. The proposed filter avoids the computationally expensive calculation of (G(m) − G(k))² by replacing it with σ²_G(k). What is the justification for the relationship shown in (38)? For simplicity, let us assume pixels in the patch follow an independent and identical distribution with mean µ_G(k) and variance σ²_G(k); the difference G(m) − G(k) then has zero mean and standard deviation √2 σ_G(k). For a normal distribution, the probability that the difference lies within two of its standard deviations, i.e., |G(m) − G(k)| ≤ 2√2 σ_G(k), is approximately 0.95. As such, we can say that 2√2 σ_G(k) is an estimate of the worst case of the absolute difference. In other words, such an estimate is an over estimate with probability 0.95 and an under estimate with probability 0.05. If we instead use one standard deviation of the difference as the estimate, then the standard deviation is an over estimate of the absolute difference with probability 0.65 and an under estimate with probability 0.35. Therefore, we can replace the range weight by the approximation in (38).
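The 95% figure for the two-standard-deviation bound can be checked with a quick Monte Carlo simulation (a pure Python sketch; the unit variance, seed and sample count are arbitrary assumptions):

```python
import math
import random

random.seed(0)
sigma = 1.0            # patch standard deviation (arbitrary)
trials = 200_000
within = 0
for _ in range(trials):
    # two i.i.d. pixel values drawn from the same patch distribution
    diff = random.gauss(0.0, sigma) - random.gauss(0.0, sigma)
    # the difference has standard deviation sqrt(2)*sigma
    if abs(diff) <= 2 * math.sqrt(2) * sigma:
        within += 1
p = within / trials
print(p)  # close to P(|Z| <= 2) = 0.9545
```

The empirical fraction agrees with the normal-distribution value quoted in the text, supporting the use of the patch variance as a surrogate for the squared pixel difference.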

IV. APPLICATIONS

A. PARAMETER SETTINGS
The proposed filter has three user defined parameters: (1) σ s defines the patch size which controls the smoothing effect, (2) s controls the sharpness of the result, and (3) N iter is the number of iterations when the filter is used in iterative mode. In this section, we study the effects of these three parameters.

1) THE TWO SCALE PARAMETERS
To demonstrate the edge-preservation ability of the proposed filter in the non-iterative mode (N_iter = 1), we start by filtering a 1D signal. In Fig. 4, we can see that the GVWA filter reduces the effect of small-scale oscillations/edges while keeping the large-scale edges. In Fig. 5, we observe that increasing σ_s leads to increased smoothing of the small details of the image. In addition, setting s to a small value has two consequences: increasing the smoothing effect on small oscillations and producing sharper edges for large scale objects.

B. APPLICATIONS AND COMPARISONS

1) EDGE-PRESERVING IMAGE SMOOTHING
We start by demonstrating the performance of the proposed filter in edge-aware filtering. Fig. 7 shows the edge-preserving properties of the filter compared with the domain transform filter (DT) [5], the L_0 filter [6], the rolling guidance filter (RGF) [10], the bilateral texture filter (BTF) [11], the weighted least-squares filter (WLS) [4], the tree filter (STF) [9], the static-dynamic filter (SDF) [26] and the relative total variation filter (RTV) [7]. To make comparisons with other weighted versions of the guided filter, we wrote our own code for weighted guided image filtering (WGF) [18] and anisotropic guided filtering (AnisGF) [23]. Parameter settings for all methods are provided in the figure caption. We can see that the results produced by the proposed filter are similar to those produced by the other methods.
The proposed filter can also be used to selectively remove objects of different scales from an image. Fig. 8 (d) shows the result of applying the proposed filter to smooth objects of different sizes while preserving the structural information of the image. In this experiment we compared with two state-of-the-art methods for scale-aware smoothing: local activity-driven structural-preserving filtering (LADR) [38] and mutually guided image filtering (muGIF) [39]. To selectively smooth objects at 3 different scales we use three different sets of parameters. When applying LADR we set the λ parameter to 0.03, 0.08 and 0.3. When muGIF is used we vary the regularization parameter α_t over 0.001, 0.1 and 1 while keeping α_r, N_iter and the mode fixed at 1, 10 and 0, respectively.
As can be seen in Fig. 8, our method can successfully remove small, medium and large scale details depending on the settings, but it does not surpass the outstanding edge-preservation performance of LADR and muGIF when filtering large scale objects. Boundaries of meaningful objects at each scale are better preserved with LADR and muGIF.

2) CLIP-ART AND JPEG COMPRESSION ARTIFACT REMOVAL
To demonstrate the performance of the proposed filter on the task of removing JPEG compression artifacts, we first take a high quality image and compress it using the JPEG format with a compression quality factor of 10%. We then filter the low quality image containing compression artifacts using our rolling GVWA Type-II filter with σ_s = 0.75, s = 0.5, N_iter = 20. We compare the result with the L_0 smoothing filter [6] with λ = 0.02, κ = 1.5 and two weighted versions of the guided filter, WGF [18] and AnisGF [23], with settings σ_s = 1.5, ε = 0.01, N_iter = 3 and σ_s = 2.5, ε = 0.01, respectively. Fig. 9 shows that our filter is able to remove all the color artifacts due to the low quality compression while keeping the edges and preserving the colors of the image. The proposed filter performs better than L_0 smoothing in preserving some features of the image, e.g., the shade on the ear (marked by a red square in the figure). Also, the proposed filter removes all the artifacts due to the compression, while WGF and AnisGF struggle to remove artifacts near high contrast edges.
Using the original high quality image as a reference, we calculated the mean squared error (MSE) and peak signal-to-noise ratio (PSNR) to perform a quantitative comparison. It can be seen in Table 1 that the proposed filter improves the MSE from 3.2 × 10^{-3} to 2.0 × 10^{-3}, while L_0 smoothing only improves it to 2.3 × 10^{-3}. Similar results can be seen in the PSNR. On the other hand, AnisGF and WGF do not improve the MSE and PSNR with respect to the compressed image, even though there is a noticeable visual improvement.
To demonstrate the removal of compression artifacts from clip-art images, a low quality clip-art image is processed by the proposed rolling GVWA Type-II filter with σ_s = 0.5, s = 0.75, N_iter = 30, by L_0 smoothing [6] with λ = 0.05, κ = 2, and by IRWF [40] with r = 8. We can see in Fig. 10 that the three methods successfully remove the artifacts; the proposed filter preserves the low contrast edges better than L_0 and produces more natural looking edges than IRWF (see areas marked by squares).

3) STRUCTURE SEPARATION
Structure separation is the process of separating the overall image structures (meaningful information) from a highly correlated background (texture/noise). In this application, we demonstrate the performance of the proposed filter in smoothing out small scale textures while maintaining the prominent structures. The proposed filter is able to extract the main structure from irregular textures because the small scale details progressively vanish as the number of iterations increases. Results are shown in Fig. 11, which provides a comparison of the performance of the proposed filter with a group of state-of-the-art structure separation algorithms. We can clearly see that our filter produces comparable results in terms of structure extraction. In fact, the proposed filter produces sharper edge boundaries, fewer blocking artifacts in texture areas, and better contrast in comparison with the other filters. In particular, as shown in Fig. 11 (a) and (b), we use a magnified box to highlight the performance of the filters on the fish eye, where the main improvement can be observed. When comparing Fig. 11 (b) with (c), we can see that our method under-performs RTV in that it did not remove some pixels with high gradients from the background. Although changing the filter parameters (σ_s = 1, s = 0.5, N_iter = 100) can smooth the background as RTV does, we opted to prioritize sharpness and contrast at the cost of leaving those few pixels. Table 2 presents a quantitative assessment using the entropy metric (T3SI) [41]. This metric aims to measure the similarity between the original image and the processed image; a larger value indicates higher similarity. We can see from Table 2 that the proposed algorithm produces comparable results and is close to the average of the other filters (the average is 2.0851).

4) EDGE EXTRACTION/ENHANCEMENT EXAMPLE
Edge detection aims at finding the boundaries of the objects in the scene. The gradient is commonly used to extract the edges of an image I. For example, the magnitude of the gradient defined in (42) is used for edge extraction. However, gradient-based methods are greatly affected by noise and small details.
In Fig. 12, we demonstrate the benefits of applying the proposed filter to smooth the image before calculating the gradient magnitude. We can see that pre-processing the image using the proposed filter leads to a cleaner and sharper edge map which not only preserves the main structure of the scene but also reduces the effect of noise and textures.
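As a minimal sketch of the gradient-magnitude edge measure (forward differences are used here as an illustrative discretization; the exact form of (42) may differ):

```python
import math

def gradient_magnitude(img):
    """Edge strength |grad I| per pixel via forward differences,
    with zero gradient assumed at the right/bottom borders."""
    h, w = len(img), len(img[0])
    E = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
            gy = img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
            E[y][x] = math.hypot(gx, gy)
    return E

img = [[0.0, 0.0, 1.0, 1.0] for _ in range(3)]  # vertical step edge
E = gradient_magnitude(img)
```

Smoothing the image with the proposed filter before this step suppresses responses from noise and texture, while a genuine step edge keeps its full response.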

5) NON-PHOTOREALISTIC RENDERING
In this application, we demonstrate the use of the proposed filter to produce non-photorealistic versions of an image using the framework proposed in [14]. This method stylizes the image by first simplifying its content, using a filter to blur small details while keeping sharp edges. The high contrast details or edges are then magnified to further increase the visual abstraction. The luminance is quantized to add a cartoon appearance to the image. In this paper, we take a slightly different approach since we do not employ the luminance quantization.
We use the proposed filter to blur the low contrast details while keeping the edges of the image. We then calculate the gradient magnitude map E = |∇B| of the filtered image B to detect the edges. We further process the values of E using a mapping controlled by a user defined parameter κ that magnifies the high contrast edges, producing an edge map D(x, y). We define ζ ∈ [0, 1] as another user defined parameter and set to zero all the pixels satisfying D(x, y) < ζ. The purpose is to remove all the small edges that were not removed by smoothing the image and were amplified in the previous step. As a by-product of our approach, the gradient magnitude map of the filtered image represents only the large and high contrast edges in the original image. As such, the processed edge map produces a sketch effect S(x, y) = 1 − D(x, y). We then use the edge map D(x, y) and the smoothed image B to produce the abstract image. Results in Fig. 13 show that our approach successfully produces an artistic abstraction and a sketch effect from the input images. Column (b) shows that the proposed filter preserves only strong edges and structure. Abstraction results are shown in column (c); as described in [14], these images look stylized since all low contrast details were blurred while strong edges were visually enhanced. The sketch images are shown in column (d), which show only the large and high contrast edges of the original images.

6) SALIENT OBJECT DETECTION
Saliency detection aims at locating the structural information (objects/regions) in a natural scene without emphasizing unimportant details, a process similar to human perception. In some images, the foreground and background are correlated, which makes saliency detection a challenging task. To address this problem, edge-aware filters can be used as a pre-processing step to aid saliency detection. We employ our filter to abstract the object of interest by removing the unwanted details while preserving meaningful structures. In [45], the authors smoothed the input image prior to the application of the saliency map generation algorithm in [46]. We follow the same approach to generate the map. Results are shown in Fig. 14, in which it is clearly noticeable that our saliency map is more consistent and uniform than those of the algorithms in [45] and [46].

7) DETAIL MAGNIFICATION
Unsharp masking is an effective algorithm for enhancing the details of an image. The algorithm is defined as:

Î = J + γ(I − J)

where I is the input image, J, called the base layer, is the result of a low-pass filter, and γ is the gain used to amplify the high frequency components (I − J), called the detail layer. In Fig. 15, we demonstrate that the proposed filter can be used to produce J in the unsharp masking algorithm. We also compare the result with 3 other well-known methods for image sharpening and detail enhancement: contrast adaptive sharpening (CAS), generalized unsharp masking (GUM) [47] and a guided edge-aware smoothing-sharpening filter (SSIF) [48]. The settings for each algorithm were selected to avoid over-sharpening so that the resulting image has a natural appearance. We can see that our method is able to amplify the details of the scene without producing halo artifacts and that its result is comparable to the 3 other algorithms.
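A 1D sketch of unsharp masking shows the halo mechanism: with a plain (non edge-aware) box blur as the base layer, the detail layer over- and under-shoots at a step, which is exactly the artifact an edge-aware base layer avoids (the box blur and gain value here are illustrative assumptions):

```python
def box_blur_1d(x, radius):
    """Simple low-pass filter producing the base layer J."""
    n = len(x)
    return [sum(x[max(0, i - radius):min(n, i + radius + 1)]) /
            (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def unsharp_mask(I, radius, gamma):
    """Unsharp masking: J + gamma*(I - J), amplifying the detail layer."""
    J = box_blur_1d(I, radius)
    return [j + gamma * (i - j) for i, j in zip(I, J)]

step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
out = unsharp_mask(step, radius=1, gamma=2.0)
# out under-shoots just before the step and over-shoots just after it,
# i.e., a halo. An edge-aware base layer keeps J close to I at the edge,
# so the detail layer (I - J) vanishes there and no halo is amplified.
```

Replacing box_blur_1d with an edge-preserving filter such as the proposed one removes the overshoot while still amplifying genuine fine detail.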

8) MULTI-FOCUS IMAGE FUSION
Multi-focus image fusion is a technique that blends two or more images of the same scene, each focused on different objects, to produce a new image in which all objects are in focus. Methods for multi-focus image fusion can be categorized into four groups: transform domain, spatial domain, combined transform, and deep learning methods [54], [55]. In this paper we modify two popular fusion methods by replacing the guided filter [8] with the proposed filter to investigate its performance in this application. Both qualitative and quantitative comparisons are performed to validate the proposed filter.
The first method, called GFF [16], decomposes each source image into a base layer and a detail layer. The base layers and detail layers of the source images are then fused individually using a weighted average technique. The weight map is calculated based on the salience map, which is refined by using the guided filter. The resulting base and detail layers are used to reconstruct the fused image.
The second method, called GFDF [15], performs a pixel-based weighted linear combination of the source images. First, a rough focus map for each source image is estimated by subtracting the image from a filtered version of it. The rough map is then refined using the guided filter. A decision map is generated by applying a pixel-based maximum rule; it is also refined by using another instance of the guided filter. The refined decision map is used as the weight map for the linear combination that fuses the input images.
To demonstrate the performance of the proposed filter in this application, we implemented the GFF and GFDF algorithms in MATLAB. In our implementation, we replace the guided filter with the proposed filter. We use the terms Proposed filter (1) and Proposed filter (2) to represent the GFF and GFDF algorithms that use the proposed filter, respectively. Figures 16 and 17 show the fusion results for two pairs of images from the Lytro dataset [56]. Input A and Input B focus on the foreground and background, respectively. The fusion results of the original GFF [16] and GFDF [15] are shown in columns (c) and (d), respectively. The results of using the proposed filter in the GFF and GFDF algorithms are displayed in columns (e) and (f). We can see that the proposed filter produces results similar to those of the guided filter.
We perform an objective comparison using five metrics that evaluate the quality of the fusion result without a reference, as suggested in [15]. These metrics are:
• Q^{AB/F} [49] evaluates the amount of edge information transferred from the source images to the fusion result.
• Q_P [50] measures the edge information transferred from the source images to the fusion result by using phase congruency.
• Q_Y [51] measures the degradation of structural information of an image with respect to another image by using the structural similarity [57] between the source images and the fusion result.
• Q_CB [52] performs a perceptual quality evaluation of the fusion result by using local contrast and a saliency map.
• Q_FMI [53] measures the mutual information between the feature map of the fused image and the feature maps of the source images using small windows, averaging all the results to get a single value.

Table 3 shows the results of the quantitative assessments of the images shown in Fig. 16 and Fig. 17. For all five metrics, higher values indicate higher fusion quality. We can see that, in general, using our filter produces values similar to those obtained using the guided filter.

V. CONCLUSION
In this paper, we have presented a new edge-preserving filter based on a local weighted averaging structure and the statistics of the image. The new feature of this filter is the use of a decreasing function of the local variance as the weight. As a result, the filter has a computational complexity of O(N_pix). We have motivated the development of this filter by taking an extreme parameter setting of the guided filter and have performed statistical analysis and simulations. The results not only show the connections between the proposed filter, the bilateral filter and the guided filter, but also provide new insights into the edge-preserving ability and the computational complexity of the proposed filter. In addition, we have presented extensions to the proposed filter using the ideas of the bilateral weight, guidance information and iteration.
The edge-preservation performance of the proposed filter has been demonstrated in many applications including: edge-preserving smoothing, non-photorealistic image rendering, compression artifact removal, detail magnification, edge extraction, multi-focus image fusion, structure separation, and saliency detection. We have shown, using many images and objective evaluation metrics (where available), that the performance of the proposed filter is comparable or superior to state-of-the-art filters. Therefore, the proposed filter is a new tool for tackling a wide range of image processing problems.