Removing rain streaks by a linear model

Removing rain streaks from a single image continues to draw attention in outdoor vision systems. In this paper, we present an efficient method to remove rain streaks. First, the location map of rain pixels needs to be known as precisely as possible; to this end, we implement a relatively accurate detection of rain streaks by utilizing two characteristics of rain streaks. The key component of our method is to represent the intensity of each detected rain pixel using a linear model: $p=\alpha s + \beta$, where $p$ is the observed intensity of a rain pixel and $s$ represents the intensity of the background (i.e., before being affected by rain). To solve for $\alpha$ and $\beta$ at each detected rain pixel, we concentrate on a window centered around it and form an $L_2$-norm cost function over all detected rain pixels within the window, where the corresponding rain-removed intensity of each detected rain pixel is estimated from some neighboring non-rain pixels. By minimizing this cost function, we determine $\alpha$ and $\beta$ so as to construct the final rain-removed pixel intensity. Compared with several state-of-the-art works, our proposed method removes rain streaks from a single color image much more efficiently: it offers not only better visual quality but also a speed-up ranging from several times to one order of magnitude.


I. INTRODUCTION
Because raindrops reflect light strongly, rain is usually imaged as bright streaks that degrade the visual quality of an image. Hence, removing rain streaks from images is important for most photographers. Garg and Nayar revealed that the visibility of rain is strongly related to some camera parameters [1]. Photographers can tune these parameters (e.g., exposure time and depth of field) to suppress the captured rain streaks. However, this approach can avoid rain streaks only to a small extent. In addition, the majority of rain images are obtained by outdoor vision systems, where it is difficult to tune the camera's parameters in a timely manner.
Rain streaks change the information conveyed by the original image. Therefore, the effectiveness of many computer vision algorithms that rely on fine features would be degraded severely. Though the majority of tracking and recognition algorithms are implemented for videos, rain removal in the key frames of a video thus turns out to play an important role.
Due to the inevitability of rainy weather and the wide deployment of vision systems in practice, rain removal has been an important problem in computer vision for a long time. (The associate editor coordinating the review of this manuscript and approving it for publication was John See.) The related research on rain dates back to 1948, when Marshall and Palmer analyzed the relationship between the distribution of raindrops and their size [2]. Then, Nayar and Narasimhan studied the visual manifestations of different weather conditions, including rain and snow [3]. Because of the randomness of rain's location, accurate detection of rain is a very difficult task. Early research works mainly concentrated on rain removal in videos [4], [5], thanks to the strong correlation among neighboring video frames. Later on, the research focus gradually shifted to rain removal in a single (color) image [6]-[8], [15]-[22].
The recent methods for rain removal from a single image can be classified into four categories. The first category is simply filtering-based, where a nonlocal mean filter or guided filter is often used [17], [18], [20]. Because it only applies a filter, its implementation is very fast. However, it can hardly produce a satisfactory performance consistently: either the output image is left with some rain streaks, or quite a few image details are lost so that the output image becomes blurred. The second category builds models for rain streaks [19], [21], [22]. These models attempt to discriminate rain streaks from the background. However, it often happens that some details of the image are mistreated as rain streaks. The third category, which seems more reasonable, forms a two-step processing [6]-[8]. Specifically, a low-pass filter is first used to decompose a rain image into a low-frequency part and a high-frequency part. While the low-frequency part can be made as rain-free as possible, some descriptors can be applied to the high-frequency part to further extract the image details to be added back into the low-frequency part. The last category combines deep learning methods with the rain removal task and obtains excellent results by designing appropriate deep networks [10]-[13].
Our formulation to remove rain streaks in a single color image also includes two steps. In the first step, we try to detect rain streaks by utilizing two characteristics of rain streaks. Then, a linear model is built to remove rain streaks in the second step.
Step-1: In order to remove rain streaks as thoroughly as possible, we hope that all rain streaks can be detected. However, it is difficult for the existing methods to obtain the locations of rain accurately, and two kinds of detection errors usually occur: 1) some rain streaks are missed, and 2) some non-rain image details are mis-detected as rain.
If rain streaks are missed in the detection, they will remain in the final result. Hence, we try our best to avoid this detection error in our work. In our extensive tests, we found that our detection method captures nearly all rain streaks for a large majority of the test images. This is largely because raindrops usually reflect light strongly, so that their pixel intensities are noticeably larger than those of the background pixels. Even when some rain streaks are missed, the final results could still be acceptable visually, because those missed rain streaks have color components that are very similar to the background.
On the other hand, it is inevitable that some non-rain image details will be mis-detected as rain streaks. To reduce their influence, we try to revise these detection errors by an eigen color method [25]. To this end, two characteristics of rain streaks are used:
• rain usually reflects light more strongly than its neighboring non-rain objects, thus leading to larger intensities, and
• rain is semi-transparent and colorless, and thus presents a gray color in the image.
These two characteristics are rather robust and have been utilized in rain-removal works before, such as [8] and [26].
Step-2: We follow the imaging principle of rain pixels to build a physical model to represent each rain pixel. In reality, there are many factors influencing the imaging of rain pixels, such as light, wind, and even the background. By a reasonable approximation, we simplify the imaging of rain into a linear function: p = αs + β, where s is the pixel intensity of the scene before being affected by rain (which is unknown), p is the observed intensity of the rain pixel, and α and β are the parameters of the linear model. Our goal is to determine α and β for each rain pixel so that s can be reconstructed optimally.
Advantages of Our Approach: In our work, we propose to perform a relatively accurate detection of rain streaks to make sure that nearly no rain streaks are missed. Then, based on the linear model for rain pixels, we determine the involved parameters by optimizing a convex loss function. Since our proposed processing acts only on the detected rain pixels, all non-rain image details remain in the final result, which plays an important role in preserving image details. It can be seen from later sections that our algorithm produces higher PSNR/SSIM values for most rendered rain images compared with some state-of-the-art traditional methods. The optimization formulated in our work is a convex one, so we can obtain the globally optimal solution and avoid complex iterative calculation. Hence, our algorithm offers a speed-up ranging from several times to more than one order of magnitude when compared with several recent state-of-the-art works, thanks to the linear model and the detection of rain streaks developed in our work.
Furthermore, our linear model shows good robustness in removing rain streaks, both on ordinary rain images and on challenging heavy-rain images. Another important point is that our algorithm is not memory-consuming. During our experiments, we found that some algorithms (e.g., [21]) have high requirements for computer memory: if the input rain images are relatively large, they easily lead to memory overflow. By testing, our algorithm can deal with large rain images even on an ordinarily configured computer. Through a relatively complete comparison, our results prove to outperform those of state-of-the-art traditional works and to be comparable to deep learning based works, both subjectively and objectively; refer to Fig. 1 for one set of results.
The remainder of this paper is organized as follows. We briefly review the existing rain-removal methods in Section II. In Section III, we present the details of our rain detection method. The linear model for the imaging of rain pixels and the associated optimization to determine the involved parameters are described in Section IV. In Section V, we show the experimental results of our algorithm and make objective and subjective comparisons with several state-of-the-art works. Finally, we conclude this paper in Section VI.

II. RELATED WORKS
A. RAIN REMOVAL FROM VIDEOS IN THE SPATIAL DOMAIN
Early work on detection and removal of rain was mainly focused on videos, making use of the correlation among video frames. In [4], Garg and Nayar analyzed the visual effect of rain on an imaging system. In order to detect and remove rain streaks from videos, they developed a correlation model to describe the dynamics of rain and a motion blur model to explain the photometry of rain. To make the study more complete, they further revealed that the appearance of rain is the interaction result of the lighting direction, the viewing direction, and the oscillating shape of a raindrop [28]. Then, they built a new appearance model for rain (which is based on the raindrop's oscillation model) and rendered rain streaks. In [29], they further analyzed the visual effect of rain and the factors that influence it. In order to detect and remove rain streaks in videos, they developed a photometric model that describes the intensities caused by individual rain streaks and a dynamic model that reflects the spatio-temporal properties of rain.
Based on the temporal and chromatic characteristics of rain streaks in video, Zhang et al. proposed another rain detection and removal algorithm in [30]. This work shows that rain streaks do not always influence a certain area in videos. Besides, the intensity changes of the three color components (namely, R, G, B) of a pixel are approximately equal to each other, i.e., ΔR ≈ ΔG ≈ ΔB. Using these two characteristics, rain streaks are detected and removed in videos. However, this method can only deal with videos that are captured by a stationary camera.
In [31], Brewer and Liu utilized three characteristics of rain to detect rain streaks in video, and the detected rain streaks are removed by calculating the mean value of two neighbouring frames. In [5], Bossu et al. proposed the histogram of orientation of streaks (HOS) to detect rain streaks in image sequences. Specifically, this method decomposes an image sequence into foreground and background by a Gaussian mixture model, with rain streaks separated into the foreground. Then, the HOS, which follows a Gaussian-uniform mixture model, is calculated to detect rain streaks more accurately.

B. RAIN REMOVAL FROM VIDEOS IN THE FREQUENCY DOMAIN
In [32], Barnum et al. detected rain streaks in the frequency domain. Specifically, they developed a physical model to simulate the general shape of rain and its brightness. Combined with the statistical properties of rain, this model is utilized to determine the influence of rain on the frequency content of image sequences. Once the frequency components of rain are detected, they are constrained to obtain the rain-removed image sequences. Later on, in order to analyze the global effect of rain in the frequency space, they built a shape and appearance model for a single rain streak in the image space [33]. Then, this model is combined with the statistical properties of rain to create another model that describes the global effect of rain in the frequency space.

C. SINGLE IMAGE RAIN REMOVAL
In [34], Roser and Geiger implemented monocular rain detection in a single image by utilizing a photometric raindrop model. Meanwhile, Halimeh and Roser detected raindrops on the car windshield in a single image by utilizing a standard interest point detector [35]. In this algorithm, they built a model for the geometric shape of raindrops and studied the relationship between raindrops and the environment.
To the best of our knowledge, Fu et al. [6] accomplished the rain-removal task in single images for the first time by representing the image signal sparsely [23]. Kang et al. [7] and Chen et al. [8] followed a similar framework and demonstrated improved results. In particular, Kang et al. identified the dictionary atoms [24] of rain streaks by utilizing the histogram of oriented gradients (HOG) [36], while Chen et al. exploited the depth of field (DoF) to extract more image details from some high-frequency components. Some other learning-based image decomposition methods were also proposed to remove rain in a single image [15], [16]; they follow a formulation similar to that used in [6]-[8]. In [15], a context-constrained image segmentation of the input image is implemented. In [16], an unsupervised clustering of the observed dictionary atoms is implemented via affinity propagation, which enables image-dependent components with similar context information to be identified.
Meanwhile, Xu et al. [17] developed a rain-free guidance image and then utilized the guided filter [37] to remove rain. Luo et al. separated a rain image into the rain layer and de-rained layer by a nonlinear generative model (screen blend model) [21]. Specifically, they approximated the rain and de-rained layers by high discriminative codes over a learned dictionary. Kim et al. [18] detected rain streaks in a single image by combining an elliptical shape model of rain and a kernel regression method [38]. Then, they removed rain streaks by a non-local mean filter [39]. Based on the fact that rain streaks usually reveal similar and repeated patterns in the image, Chen et al. captured the spatio-temporally correlated rain streaks by a generalized low-rank model from matrix to tensor structure [19].
In [20], Ding et al. designed a guided L0 smoothing filter and obtained a coarse rain-free image. The final rain-removed image is then acquired by a further minimization operation. Wang et al. analyzed the characteristics of rain and proposed a rain-removal framework [26]. Li et al. utilized some patch-based priors of both the background and rain to separate a rain image into the rain layer and de-rained layer [22]. In [27], we proposed a rain/snow removal algorithm based on a hierarchical approach. First, a rain/snow image is decomposed into a rain/snow-free low-frequency part and a high-frequency part by combining rain/snow detection and the guided filter [37]. Then, we extract three layers of non-rain/snow details from the high-frequency part by utilizing sparse coding. Finally, we add the low-frequency part and the three layers of image details together to obtain the rain/snow-removed image.
Recently, deep learning has been applied to the rain removal task. For instance, Fu et al. extended ResNet into a deep detail network that reduces the input-to-output mapping range and makes the learning process easier [10], and the de-rained result is further improved by using some image-domain prior knowledge. They also built DerainNet to remove rain streaks [13], in which the high-frequency part of an image, rather than the image itself, is used during the training process. In the meantime, Zhang et al. proposed a de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN) [12], which adds quantitative, visual, and discriminative performance terms into the objective function and obtains good results. Yang et al. designed a new rain image model and constructed a new deep learning architecture [11], which also achieves good rain-removed results. In a more recent work [14], Zhang and Patel proposed a density-aware multi-stream densely connected convolutional neural network, called DID-MDN, to remove rain from single images. This network can automatically determine the rain-density information and then remove rain efficiently. In [9], they also proposed to utilize a conditional generative adversarial network for the same task.

III. DETECTION OF RAIN
Because of the randomness of rain streaks in an image, accurate detection of rain streaks is a challenging problem in rain removal tasks, even for deep learning based methods. Our goal here is to detect nearly all rain pixels in the input rain image and, at the same time, to avoid the mis-detection of non-rain details as much as possible.
In [44], Chen and Huang proposed an unsupervised clustering-based method to identify rain-affected areas in X-band marine radar images. They also proposed a support vector machine based method to distinguish rain images from rain-free images [45]. In [27], we proposed a rough detection method for rain streaks based on the fact that the intensity of a rain pixel is usually larger than those of its neighboring non-rain pixels. In this work, we follow this approach but improve it by utilizing the color characteristics of rain to reduce the mis-detection of non-rain details.
Let us use I to denote an input rain image, which consists of three color channels, as the input image is assumed to be a color one. Because the intensity of a rain pixel is larger than those of its neighboring non-rain pixels in a small window, a pixel p can be considered a rain pixel if its intensity is larger than the mean intensity of a window that includes p. In this work, we modify this detection rule slightly: p is regarded as a rain-pixel candidate if, for all three color channels,

p > p̄(k) + µ, k = 1, 2, ..., 5, (2)

holds in all five windows ω(k) in which p is located at the center, top-left, top-right, bottom-left, and bottom-right position, respectively, as shown in Fig. 2, where ω(k) represents a 7 × 7 window and p̄(k) is its mean intensity. The parameter µ is an empirical value which will be given later in the experimental part. For every pixel p i,j in image I, we implement the same detection. After the detection is completed, a binary map B is generated in which b i,j is set to 0 if a rain-pixel candidate is detected at (i, j). As compared to [27], a small increment µ is added in (2). The reason is as follows: the intensity of some non-rain pixels could be smaller than the intensity of a neighboring rain pixel but larger than p̄(k), so that µ can avoid mis-detecting this kind of non-rain pixel as rain.
FIGURE 2. The five windows utilized to implement the rain detection in our work: p is a pixel located at a different position in each of the five windows. Different colors denote four different windows with the pixel p located at the bottom-right (green), bottom-left (yellow), top-right (orange), and top-left (blue); the red square denotes the window with p at the center.
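As a concrete sketch, the five-window rule of (2) can be implemented by brute force as follows. This is an illustration only: intensities are assumed to lie in [0, 1], and the value of µ here is a placeholder, since the empirical µ is only given in the experimental part.

```python
import numpy as np

def detect_rain_candidates(img, mu=0.02, w=7):
    """Rough rain detection sketch following (2): a pixel is a rain-pixel
    candidate only if, in ALL three color channels, its intensity exceeds
    the mean of EACH of five w x w windows (pixel at the center and at the
    four corners of the window) by more than mu."""
    H, W, _ = img.shape
    r = w // 2
    # Top-left offsets of the five windows so that p sits at the center,
    # bottom-right, bottom-left, top-right, and top-left, respectively.
    offsets = [(-r, -r), (-w + 1, -w + 1), (-w + 1, 0), (0, -w + 1), (0, 0)]
    B = np.ones((H, W), dtype=np.uint8)      # 1 = non-rain (paper's convention)
    for i in range(H):
        for j in range(W):
            is_cand = True
            for (di, dj) in offsets:
                i0, j0 = i + di, j + dj
                if i0 < 0 or j0 < 0 or i0 + w > H or j0 + w > W:
                    is_cand = False          # window falls outside the image
                    break
                win = img[i0:i0 + w, j0:j0 + w, :]
                if not np.all(img[i, j, :] > win.mean(axis=(0, 1)) + mu):
                    is_cand = False
                    break
            if is_cand:
                B[i, j] = 0                  # 0 marks a detected rain pixel
    return B
```

For example, a single bright pixel on a flat background is flagged (set to 0 in B), while its flat neighbors are not.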
According to [30] and our own observations, rain streaks usually appear in relatively smooth areas. Against a background with complex textures, rain streaks merge into the background and cannot be imaged by the camera. Therefore, nearly all rain pixels can be detected by our method, but some non-rain details of the image are unfortunately mis-detected as rain pixels. An efficient way of identifying these mis-detections is described in the following.
Rain streaks usually possess a neutral color. Based on this feature, Chen et al. [8] proposed to identify rain dictionary atoms by the eigen color feature [25]. In our work, we utilize the color characteristics of rain to revise the mis-detected non-rain pixels. For a given pixel p i,j, let (R, G, B) represent its color vector in the RGB space. We transform this 3-D RGB space into a 2-D u-v space by the eigen color transform of [25], given in (3) and (4). It is clear from (4) that, after the transformation, any pixel having a neutral color is clustered around (0, 0) in the u-v space. For each rain-pixel candidate detected above, we transform its RGB values into the u-v space to form a 2-D vector. If the magnitude of this 2-D vector (i.e., the Euclidean distance to the origin of the u-v space) is larger than a pre-set threshold, p i,j is recognized as a mis-detected pixel. Consequently, we set its corresponding value in the location map B back to 1. Now, the remaining content consists of real rain pixels and a few mis-detected non-rain details that are similar to rain in their size and eigen color. We cannot revise these remaining image details any further. From the statistics presented in Section V, however, we will see that these mis-detected pixels contribute only a small percentage. In hundreds of tests on various kinds of rain images, their influence on the final rain-removed results is tolerable visually; we will analyze the reason later.
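The revision step above can be sketched as follows. Note that the exact eigen-color transform of (3)-(4) is not reproduced here; as a hypothetical stand-in with the same key property (neutral colors, R ≈ G ≈ B, map to the origin) we use u = R − G and v = G − B, and the threshold `eps` is an assumed value.

```python
import numpy as np

def revise_candidates(img, B, eps=0.1):
    """Set back to 1 (non-rain) every rain-pixel candidate whose u-v
    magnitude exceeds eps.  u = R - G, v = G - B is a stand-in for the
    paper's eigen-color transform: any neutral color maps to (0, 0)."""
    R, G, Bc = img[..., 0], img[..., 1], img[..., 2]
    u, v = R - G, G - Bc
    mag = np.sqrt(u**2 + v**2)               # distance to the u-v origin
    revised = B.copy()
    revised[(B == 0) & (mag > eps)] = 1      # colorful candidate -> not rain
    return revised
```

A gray (rain-like) candidate survives the revision, while a strongly colored detail is restored to non-rain.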
Our detection method is based on the two heuristic characteristics of rain streaks. It cannot be applied to rain-free images: in a rain-free image, the non-rain pixels that possess higher intensities than other non-rain pixels would be mistreated as rain, so that the majority of pixels would be mis-detected as rain pixels. As mentioned above, even in rainy images a small number of non-rain pixels may still be mistreated; fortunately, as verified by the de-raining results, this is tolerable. Fig. 3 shows an example of applying our detection method to a rain-free image and to the corresponding rendered rainy image.

IV. A LINEAR MODEL AND ITS OPTIMIZATION
In this section, we build a linear model that is based on the imaging principle of a pixel to describe the influence of rain on the pixel intensity, followed by an optimization in which the involved parameters can be determined according to a convex loss function.

A. THE LINEAR MODEL
When a scene is photographed, the intensity of each image pixel is determined by an integral of the scene's irradiance over the entire exposure time T:

s = ∫_0^T R s dt. (5)

In a rainy scene, the intensity of a rain pixel p can be expressed as

p = ∫_0^τ R r dt + ∫_τ^T R s dt, (6)

where R r and R s are the irradiance of raindrops and the scene, respectively, and τ is the occupation time of the raindrop during the entire exposure time T. Raindrops have no regular shape; hence, the irradiance R r of a raindrop is non-constant, and we use R̄ r to denote the time-averaged irradiance of the raindrop during the whole exposure time T. For a given pixel, the corresponding scene may change during the exposure time T in a video; in photography, however, the scene is stationary, so the irradiance R s of the scene is constant. Then, (6) can be rewritten as

p = τ R̄ r + (T − τ) R s. (7)

Assuming that rain lasts for the entire exposure time, the pure-rain intensity would be

p r = T R̄ r. (8)

For the scene, we make the same assumption and obtain

s = T R s. (9)

By combining (7), (8), and (9), we obtain

p = αs + β, (10)

where α = (T − τ)/T and β = (1 − α)p r. This linear model establishes the relationship between the original scene intensity s and the intensity p after the scene is affected by rain. As mentioned in [29], Gunn and Kinzer made an empirical study of a raindrop's terminal velocity in terms of its size and found v = 200√a, where a is the radius of the raindrop. In [29], Garg et al. obtained a conservative upper bound of τ by simulating the process of rain passing through the pinhole of a camera: 0 < τ < 4a/v. Then, the range of τ can be rewritten as 0 < τ < √a/50. Garg et al. also found that the maximum value of a is 3.5 × 10⁻³ m, so that the maximum value of τ is about 1.2 ms [29].
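The bound quoted above can be checked numerically with the values taken from [29]:

```python
import math

# Terminal velocity v = 200*sqrt(a); occupation time tau < 4a/v = sqrt(a)/50.
a = 3.5e-3                      # maximum raindrop radius, in meters [29]
v = 200 * math.sqrt(a)          # terminal velocity, in m/s
tau_max = 4 * a / v             # upper bound on the occupation time, in seconds
assert abs(tau_max - math.sqrt(a) / 50) < 1e-12   # the two forms agree
print(f"tau_max = {tau_max * 1000:.2f} ms")        # prints: tau_max = 1.18 ms
```

This reproduces the stated maximum of about 1.2 ms for τ.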
As analyzed above, the time τ is small compared with the entire exposure time T. Therefore, the values of α for all rain pixels in an image are nearly the same. Clearly, α lies in the range [0, 1], and there are three situations according to its value:
1) α = 1: the scene is not influenced by rain.
2) α ∈ (0, 1): the most common situation, where the pixel intensity follows the linear model described above.
3) α = 0: only rain is captured during the whole exposure time. In reality, this situation is impossible: when a raindrop falls from the sky, its velocity increases as it gets closer to the ground, and, as mentioned above, the maximum value of τ is about 1.2 ms, which is usually small compared with the exposure time T of a camera. This implies that α is close to 1.
We can further infer some imaging principles for rain from this model. For example, a low-intensity pixel receives a larger intensity enhancement than a high-intensity pixel when affected by rain. To see this, define the intensity enhancement as Δs = p − s. After applying our model, it becomes

Δs = αs + β − s = β − (1 − α)s.

Because α is less than 1, a larger s leads to a smaller enhancement Δs. Furthermore, our proposed linear model can be used to infer some other results obtained in previous works. For instance, in [30], Zhang et al. demonstrated that the intensity changes of the three color channels of a pixel are approximately equal to each other after the pixel is affected by rain. To verify this result, let us assume that (R, G, B) is the color vector of a pixel and (R̂, Ĝ, B̂) is the color vector after the pixel is affected by rain. Then we have R̂ = αR + β, Ĝ = αG + β, and B̂ = αB + β, so the intensity change of each color channel is

ΔR = β − (1 − α)R, ΔG = β − (1 − α)G, ΔB = β − (1 − α)B.

As analyzed above, α is close to 1. Therefore, the intensity changes of the three color channels are approximately equal to each other, namely, ΔR ≈ ΔG ≈ ΔB.
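The two consequences derived above can be illustrated with a quick numeric sketch; the values α = 0.95 and β = 12 (on an 8-bit intensity scale) are assumptions for illustration, not measured parameters.

```python
# Assumed model parameters for illustration only.
alpha, beta = 0.95, 12.0
rain = lambda c: alpha * c + beta      # the linear model p = alpha*s + beta

# (1) A darker pixel gains more intensity than a brighter one.
assert (rain(40.0) - 40.0) > (rain(200.0) - 200.0)

# (2) Per-channel changes are nearly equal (exactly beta in the limit alpha -> 1).
R, G, B = 120.0, 80.0, 60.0
dR, dG, dB = rain(R) - R, rain(G) - G, rain(B) - B
# Each delta equals beta - (1 - alpha)*c, so the spread is (1 - alpha)*(R - B).
assert abs((dR - dB) + (1 - alpha) * (R - B)) < 1e-9
```

Here the deltas are 6, 8, and 9 intensity levels; as α approaches 1 they all converge to β.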

B. OPTIMIZATION
In order to train the linear model for a given image I, we need to know the values of the original pixels. Unfortunately, none of them is known in reality; the best we can do is to approximate each of them. In [31], Brewer and Liu pointed out that a majority of rain streaks appear in relatively constant areas of an image. This can also be observed in rain images: even in an image with a complex background, there is still a small constant area around each rain streak; otherwise, the rain streak could not be seen. In such a relatively constant area, a weighted average intensity of the neighboring non-rain pixels around each rain pixel can be utilized to approximate its original intensity. This approximation is supported by two reasons: (1) in the case of light rain, the intensities of the neighboring non-rain pixels are very close to the original intensity of the rain-affected pixel within a small window, and (2) in heavy rain, where a fog effect appears, the weighted average serves as a de-hazing pre-processing, after which the situation reduces to (1).

1) APPROXIMATION OF THE ORIGINAL INTENSITY OF RAIN PIXEL
For each rain pixel p, let H = {h k} denote the set of color vectors of all non-rain pixels in the N × N window centered at p, i.e., h k, k = 1, 2, ..., |H|, represents the color vector of the k-th non-rain pixel in H, and p is the observed color vector of rain pixel p. To compute a weighted average of these non-rain pixels, the weight of each h k is calculated by an exponential function,

w k = exp(−‖h k − p‖² / σ²), (16)

where σ is a tuning parameter whose value is given in the experimental section. For the rain pixel p, the approximation of its original color vector is the normalized weighted average

q = (Σ k w k h k) / (Σ k w k). (17)

We apply (17) to all detected rain pixels to obtain the approximations of their original intensities.
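The approximation step can be sketched as follows. The Gaussian-style weight w k = exp(−‖h k − p‖²/σ²) is assumed here as a natural reading of the exponential form described in the text, and the value of σ is a placeholder.

```python
import numpy as np

def approx_original(p, neighbors, sigma=0.1):
    """Approximate the original color vector of rain pixel p as in (17):
    a normalized weighted average of the non-rain color vectors h_k found
    in the N x N window around p.  Weights follow the assumed Gaussian
    form w_k = exp(-||h_k - p||^2 / sigma^2)."""
    H = np.asarray(neighbors, dtype=float)          # shape (K, 3)
    p = np.asarray(p, dtype=float)                  # shape (3,)
    d2 = np.sum((H - p)**2, axis=1)                 # squared color distances
    w = np.exp(-d2 / sigma**2)                      # exponential weights
    return (w[:, None] * H).sum(axis=0) / w.sum()   # normalized average
```

The result is always a convex combination of the neighboring non-rain colors, so it stays inside their color range.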

2) TRAIN THE PARAMETERS OF OUR LINEAR MODEL
As analyzed above, the time τ is small compared with the entire exposure time T, and the parameters α and β for all rain pixels in an image are approximately the same. Hence, we train the parameters of our model in a relatively large window, but only apply the resulting parameters to the rain pixel located at the window center. Suppose that p is a detected rain pixel. In one color channel X (X ∈ {R, G, B}), we form a relatively large M × M window centered at p, and all detected rain pixels D = {d k} in the window are utilized. Let Q = {q k}, as obtained by (17), be the approximations of the original intensities of these rain pixels, and let K = |D| = |Q|. According to our linear model (10), we have

d k = αq k + β, k = 1, 2, ..., K. (18)

In order to determine {α, β}, we minimize the loss function

E(α, β) = Σ k (αq k + β − d k)² + λα², (19)

where λ is the regularization parameter. This is a convex loss function, and its closed-form solution is

α = Σ k (q k − q̄)(d k − d̄) / (λ + Σ k (q k − q̄)²), (20)
β = d̄ − αq̄, (21)

where q̄ and d̄ denote the means of {q k} and {d k}, respectively. After obtaining α and β from all detected rain pixels in the window, we apply them only to the rain pixel p at the window center, so that its rain-removed intensity can be obtained as

s = (p − β)/α. (22)

We implement the same processing on all detected rain pixels in each color channel X ∈ {R, G, B}, so that the rain-removed image S is obtained.
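The closed-form fit above can be sketched in a few lines. This assumes the ridge-style loss with the λα² term stated in the text, whose minimizer reproduces the solution β = d̄ − αq̄; the value of λ is a placeholder.

```python
import numpy as np

def fit_alpha_beta(q, d, lam=0.1):
    """Closed-form minimizer sketch of
        E(alpha, beta) = sum_k (alpha*q_k + beta - d_k)^2 + lam*alpha^2.
    Setting the partial derivatives to zero gives the standard
    ridge-regression-like solution below (lam is an assumed value)."""
    q, d = np.asarray(q, dtype=float), np.asarray(d, dtype=float)
    qb, db = q.mean(), d.mean()
    alpha = np.sum((q - qb) * (d - db)) / (np.sum((q - qb)**2) + lam)
    beta = db - alpha * qb
    return alpha, beta
```

With λ = 0 and perfectly linear data d = 0.95q + 0.1, the fit recovers α and β exactly; the rain-removed center pixel then follows from s = (p − β)/α.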

V. EXPERIMENTAL RESULTS
In this section we apply our algorithm to rain images. We first show some detailed intermediate experimental results and analysis for one test image. Then, several state-of-the-art methods are selected to perform subjective and objective comparisons.

A. DETAILED EXPERIMENTAL RESULTS
We use the rain image in Fig. 4(a) as an example to illustrate the implementation details of our algorithm.
Step-1. We first detect the rain streaks by (2). The resulting binary location map is shown in Fig. 4(b), in which the black areas stand for the detected rain pixels. In order to revise the mis-detected image details, we transform the RGB color vector of each detected rain-pixel candidate in I into the u-v space by (3) and calculate its magnitude. The distribution of detected rain pixels in the u-v space is shown in Fig. 4(c). Then, we revise all rain-pixel candidates whose magnitudes in the u-v space are larger than the preset threshold (the green part in Fig. 4(c)). The revised result is shown in Fig. 4(d), from which one can see that many rain-pixel candidates have been revised.
Step-2. For all detected rain pixels (i.e., those after the revising in Step-1), we calculate the approximations of their original intensities by (17). We plot in Fig. 5(a)-(c) the corresponding relationship between them, where 300 rain pixels are selected randomly in each of the R, G, and B channels. It can be seen from Fig. 5 that the observed intensities of rain pixels and the approximations of their original intensities (before being affected by rain) indeed construct a good linear relationship in each of the three color channels.
In the meantime, we find that a few points deviate from the linear line in Fig. 5(a)-(c); they are the mis-detected non-rain pixels. To verify this, we manually select true rain pixels, implement the same statistics, and show the results in Fig. 5(a1)-(c1). We can see that most isolated points have been eliminated. Because those isolated points contribute only a small percentage, their influence on the training of the linear models is within a tolerable range. To further verify our detection method visually, we multiply the original rain image by the binary location map; the result is shown in Fig. 6(a). It can be seen that no rain streaks are left. We would like to point out that very similar results have also been found in many other rain images.
By the optimization in Section IV-B, we train the linear model and obtain the rain-removed result shown in Fig. 6(b). To verify the linear model further, we utilize the trained linear model to add rain streaks back onto the rain-removed image in Fig. 6(b) and obtain the image in Fig. 6(c). We can see that this calculated rain image is very similar to the original rain image in Fig. 4(a). In this particular example, we would like to report that the ranges of α and β are [0.91, 0.98] and [0.10, 0.18], respectively. For other rain images, we obtain similar ranges of α and β, which implies a good consistency with our previous analysis, namely, that the values of α are close to 1 and vary within a small range.
Finally, we analyze the intensities of mis-detected non-rain details in the rain-removed results. The simulated diagram is shown in Fig. 6(d), where $p_{i,j}$ denotes the observed intensity of a mis-detected non-rain pixel. According to the statistics of Fig. 5, mis-detected non-rain pixels often have high intensities and are therefore located above the line drawn by the trained linear model. When α of the model is close to 1, $s_{i,j}$ is lower than $p_{i,j}$. Hence, the intensities of mis-detected non-rain pixels are usually reduced only slightly in the rain-removed results. As analyzed previously, non-rain pixels that have high intensities and nearly equal color channels will be mis-detected. After applying our linear model, their color channels remain approximately equal to each other, apart from the slightly reduced intensities. This leads to a similar color appearance in the final result, so that image details are maintained well even when mis-detection exists.

B. PERFORMANCE EVALUATION
In this subsection, we evaluate the performance of our proposed rain-removal algorithm through both objective and subjective comparisons with several state-of-the-art works. Specifically, three recent works based on traditional methods are selected: the work of Li et al. in 2016, in which patch-based priors of both the background and rain are used to separate a rain image into a rain layer and a de-rained layer [22]; the work of Luo et al. in 2015, which utilizes a nonlinear generative model to remove rain streaks from single images [21]; and the work of Chen et al. in 2014, which removes rain streaks by sparse coding [8]. For deep learning based rain removal, we select the deep detail network of Fu et al. [10] and the DID-MDN of Zhang et al. [14] for comparison.

1) COMPLEXITY ANALYSIS
We implement the selected methods on an Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.5 GHz (2 processors) with 64 GB RAM, and use 256 × 256 color images to test the time consumption. The average time consumed by our algorithm is 7.68 s: the detection step takes about 5.89 s, the approximation about 0.29 s, and the optimization step about 1.25 s; the small remainder is consumed by intermediate steps. The run times of the works of [8], [21], [22] are listed in Table 1. Apparently, our algorithm provides a significant speed-up of several times to more than one order of magnitude.
The consumed time also varies with the size of the rain image. In this paper, we simplify rain imaging into a linear model and thereby largely reduce the computational complexity. Suppose that N is the number of pixels in a given rain image I, M is the number of candidate rain pixels after the first detection, and G is the number of detected rain pixels after the revision by the eigen color property. Then the complexities of the first detection, the revision of the candidate rain pixels, and the approximation of the rain pixels are O(N), O(M), and O(G), respectively. For the optimization of the linear model, let $K_i$, $i = 1, 2, \ldots, G$, be the number of detected rain pixels in the window centered at the i-th detected rain pixel. The complexity of the optimization is then $O\big(\sum_{i=1}^{G} K_i\big)$.
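The per-window optimization has a closed-form least-squares solution, which is where the O(K_i) cost per window comes from. A minimal sketch of the fit, assuming the window's observed intensities and their background approximations have already been gathered into two arrays:

```python
import numpy as np

def fit_window_model(p_vals, s_vals):
    """Fit alpha and beta of p = alpha*s + beta by least squares.

    Sketch of the per-window optimization: p_vals holds the observed
    intensities of the K_i detected rain pixels in a window, s_vals
    their approximated background intensities.  Minimizing
    sum_k (p_k - alpha*s_k - beta)^2 is an ordinary linear regression
    with a closed-form solution, so the cost per window is O(K_i).
    """
    s = np.asarray(s_vals, dtype=float)
    p = np.asarray(p_vals, dtype=float)
    A = np.stack([s, np.ones_like(s)], axis=1)   # design matrix [s, 1]
    (alpha, beta), *_ = np.linalg.lstsq(A, p, rcond=None)
    return alpha, beta
```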

2) OBJECTIVE ASSESSMENT
To assess different methods quantitatively, we use rain images synthesized by the screen blend model [21] and calculate PSNR/SSIM [40] as the objective indexes. These two indexes are widely-used assessment metrics in computer vision tasks. PSNR is short for peak signal-to-noise ratio, and SSIM for structural similarity. Here we give the definition of PSNR; please refer to [40] for the details of SSIM. To calculate PSNR, we first compute the mean-square error (MSE) of the noisy image $I_1$ and the denoised image $I_2$:
$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I_1(i,j) - I_2(i,j)\big)^2,$$
and then
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{R^2}{\mathrm{MSE}}\right),$$
where M, N are the width and height of the given images, R is the maximum fluctuation in the input image data type, and $\log_{10}(\cdot)$ is the base-10 logarithm. The PSNR/SSIM values of ten synthesized rain images handled by the traditional methods are shown in Table 2, and several ground truths, the corresponding rendered rain images, and their rain-removed results are shown in Fig. 7. To facilitate the comparison, we list the PSNR/SSIM values at the top left of each rain-removed image in Fig. 7.
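The PSNR definition above translates directly into code. A short sketch, with R passed as `data_range` (1.0 for float images in [0, 1], 255 for uint8):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """PSNR as defined in the text: MSE over the M*N pixels, then
    10*log10(R^2 / MSE), with R the maximum fluctuation of the
    image data type."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, two float images differing everywhere by 0.1 have MSE = 0.01 and hence PSNR = 20 dB.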
Overall, the method by Li et al. [22] provides lower PSNR values. In particular, this method loses a lot of useful information for images that possess many details (e.g., the second row in Fig. 9); hence, the resulting SSIM values are much lower. In contrast, the method by Luo et al. [21] can remove light rain streaks and blurs the relatively heavy ones; it therefore achieves high SSIM values, but its PSNR values are still much lower. The method by Chen et al. [8] also loses image details, leading to lower SSIM values; however, the loss is not as severe as in the method by Li et al., so its PSNR values remain comparatively high. It is clear that our algorithm produces better PSNR/SSIM values for a large majority of test images compared with the traditional methods. The comparisons of PSNR and SSIM with the deep learning methods [10] and [14] are shown in Fig. 8. Our method obtains PSNR values comparable to [10] and [14], while the SSIM values of the deep detail network [10] are consistently higher than those of DID-MDN and our method for this group of images. The networks of the deep learning methods are trained on thousands of rendered rain images and the corresponding ground truths; hence, it is not surprising that their PSNR/SSIM values are higher.
To evaluate our algorithm more completely, we test it on several large datasets synthesized in [43], which provides three testing datasets: a rain streak dataset, a raindrop dataset, and a rain-and-mist dataset. Our algorithm is tested on all three datasets and compared with the above two deep learning based methods [10], [14]; the results are given in Table 3. Our algorithm obtains the best results on the rain streak dataset. Since the raindrops and mist in the other two datasets are not the target of our algorithm, we do not obtain the best performance there compared with the deep learning based methods, but our algorithm still produces satisfactory results. The reason is that our algorithm operates only on the detected rain pixels while other pixels remain unchanged, which is conducive to high PSNR and SSIM values. Besides, our detection stage makes our algorithm focus on the rain streaks, which reduces the interference from non-rain areas. Deep learning based methods usually obtain good results for rainy images whose type is similar to the training samples; otherwise, their performance can degrade noticeably. Hence, compared with deep learning based methods, conventional methods tend to possess better generalization.

3) USER STUDY
To conduct a visual (subjective) evaluation of the performance of the different traditional methods, 20 viewers are invited to evaluate the visual quality of the results in terms of the following three aspects:
• less rain residual,
• the maintenance of image details,
• overall perception.
In the evaluation, 20 groups of results are selected, and every group involves the results by Chen et al. [8], Li et al. [22], Luo et al. [21], and our method. To ensure fairness, the results in each group are arranged randomly. For each group, the viewers are asked to select the single result they like most. The evaluation result is shown in Table 4. It is clear that our rain-removal results are favored by a majority of viewers (69.8%).

4) REAL RAIN IMAGES
Taking practical utility into consideration, we implement our algorithm on several real rain images and compare it with the selected works. The results are shown in Fig. 9. The work by Li et al. can remove rain streaks completely but loses many image details, especially for images like the one in the second row; this is because the patch-based priors cannot separate small image details from rain streaks well. Because the HOG descriptor used in the work by Chen et al. cannot identify small image details either, this work is not suitable for rain images with small details (e.g., the second and third rows): edges in the rain-removed images are blurred. Besides, when the intensities of rain become higher (e.g., the fifth row), rain streaks cannot be removed well by this work, because the guided filter used in it cannot filter out all relatively bright rain streaks. For light rain streaks (e.g., the second row), the work by Luo et al. removes rain streaks well and offers a good visual quality; once rain streaks become relatively heavy, however, its rain-removal performance decreases severely. Thanks to the detection of rain and the reasonable linear model of rain imaging, the visual quality of our rain-removed results is better than that of the selected traditional works. We can also see from Fig. 9 that, for the majority of light real rain images, our linear model obtains rain-removed results comparable to the deep learning works [10] and [14].

C. CHALLENGES FROM HEAVY RAIN IMAGES
To further verify our linear model, we consider the challenge from heavy rain images. In this scenario, fog often appears in the image, so we propose to apply a de-hazing preprocessing step before removing rain streaks.
There are several excellent algorithms for the de-hazing task. In [41], the authors learn the mapping between hazy images and their corresponding transmission maps by a multi-scale deep neural network, and the resulting performance outperforms many previous de-hazing works (e.g., the dark channel method [42] by He et al.). Hence, [41] is selected in our work.
It is necessary to point out that the rain-removal methods presented in [8] and [21] are very memory-consuming, which may lead to memory overflow even on a highly-configured computer. Hence, a down-sizing step is included in [8] and [21] before the rain removal, and the output image is up-sized back afterwards. One example is shown in Fig. 10, from which we can see that down-sizing reduces the intensity and size of the rain streaks, which makes their removal much easier.
To make the performance comparison among different algorithms fair and accurate, we need to run them on the same rain image. To this end, we crop a 256 × 256 patch from each original rain image; then, the rain-removal methods in [8] and [21] can be run without resizing.
Some comparison results are presented in Fig. 11, from which several observations can be made. (1) The method of [22] cannot handle heavy rain images well, and the performance of [21] is also sensitive to the scale of the rain streaks. (2) The method of [8] removes the majority of rain streaks and produces better results than the above two methods. (3) Overall, our linear model produces the best results among the selected traditional methods, in terms of both image-detail preservation and rain-streak removal; besides, it has a very low computational complexity and is not memory-consuming. (4) The work by Fu et al. cannot remove the very heavy rain in the image of the first row, while the work by Zhang et al. does a good job of removing heavy rain streaks. Our method also removes heavy rain streaks and makes the image details clearer. For the other two images, these three methods obtain comparable rain-removed results. Although deep learning methods have achieved great success in many fields, they still cannot obtain very good results for images whose patterns are not included in the training set. One advantage of the traditional methods is that they usually produce relatively stable results.

D. COMPARISONS WITH OUR PREVIOUS HIERARCHICAL APPROACH
In [27], we developed a rain/snow removal algorithm based on a hierarchical approach, which also needs to detect rain/snow. We compare these two works in this subsection.
In this work, we use the rain/snow detection method of [27] for the initial detection of rain streaks. As mentioned in [27], this is an over-detection method, and some non-rain details will be mis-detected as rain streaks. In [27], the lost information is recovered by a 3-layer extraction of non-rain/snow details, whereas in this work we improve the detection by the eigen color property, which revises many mis-detected non-rain details. More specifically, in [27], an over-complete dictionary is learnt first and the rain/snow dictionary atoms are identified by the characteristics of rain/snow. The rain/snow and non-rain/snow components are then reconstructed by sparse coding, and the non-rain/snow component forms the first layer of non-rain/snow details. Next, by combining the detection of rain/snow with a guided filter, we extract the second layer of non-rain/snow details from the rain/snow component obtained by the sparse reconstruction. Finally, to enhance image contrast, the third layer of non-rain/snow details is extracted by the sensitivity of variance across color channels (SVCC) descriptor. Adding the low-frequency part and the three layers of image details together yields the final rain/snow-removed results. In this work, by contrast, we derive a linear imaging model of rain and remove rain by training this linear model.
Because of the dictionary learning used in the hierarchical approach [27], its time consumption is much higher than that of the linear model. When tested on 256 × 256 images, the average time consumed by the hierarchical approach is 83.38 seconds, while our linear model needs only 7.68 seconds, as mentioned earlier. In Fig. 12, we show some rain-removal results of these two methods. For light to medium rain streaks (the first five images), the two methods produce comparable results. However, for rain images that contain many small image details (e.g., the first and fourth ones), the linear model maintains more image details and keeps better structural similarity with the original images. When encountering heavy rain images (the last three images), the hierarchical approach removes more rain streaks, but the linear model preserves more image details.
Some objective comparisons in terms of PSNR/SSIM for these two methods are listed in Fig. 13. It can be seen that the linear model obtains slightly better PSNR/SSIM values for most images. This is mainly because our linear model does not change the non-rain areas of an image and describes the imaging of rain more accurately.

E. LIMITATIONS AND FUTURE WORKS
Our linear model has shown good rain-removal performance in several important aspects and possesses good robustness for both common rain images and challenging heavy rain images. Like most algorithms, however, our method has its own limitations. Rain streaks are over-detected in our algorithm: a few non-rain details whose intensity and color are similar to rain streaks can be mis-regarded as rain. Though these mis-detections are only a small part, the performance of our algorithm would improve if the detection were made more accurate. In future work, we will try to improve the detection of rain streaks with deep learning methods. Besides, we will try to combine our linear model with deep learning to tackle heavy rain. We believe this will make a great difference if it can be realized.

VI. CONCLUSION
In this paper, we derived a simple linear model p = αs + β to describe the physical principle of imaging rain pixels. To remove rain streaks from a rain image, we first detect rain streaks by two characteristics of rain streaks. Once the binary location map of rain pixels is obtained, the original intensity of each rain pixel is approximated by a weighted average of its neighboring non-rain pixels. For every rain pixel, we train the parameters involved in the linear model; once the parameters are determined, the rain-removed intensity can be calculated by plugging the observed intensity of the rain pixel into the model. Subjective and objective evaluations demonstrate that our algorithm outperforms several state-of-the-art traditional rain-removal methods. Compared with deep learning based methods, our linear model obtains comparable rain-removed results for light rain images as well as for the majority of heavy rain images; for some very heavy rain images, our model even outperforms the deep learning based algorithms. Moreover, our algorithm offers a significant speed-up of several times to more than one order of magnitude compared with the selected traditional methods.