An Automated Unified Framework for Video Deraining and Simultaneous Moving Object Detection in Surveillance Environments

In many instances, outdoor surveillance systems suffer from adverse weather conditions such as rain, since images or videos captured by such vision systems on rainy days may undergo severe visual degradation. This can hinder algorithms that are subsequently used for object detection and tracking. Therefore, an ancillary video processing step, namely video deraining, is necessary prior to object detection and tracking. This implies a time-consuming and complicated two-step process for the detection of moving objects in a rainy environment. This paper proposes an automated single-stage formulation of the conventional three-stage procedure, namely rain streak removal, original data recovery and moving object detection, performed as simultaneous operations in the tensor framework. The novelty of this work lies in the efficient formulation of an operator termed Slice Rotational Total Variation (SRTV), which unifies the different rain patterns into a common pattern so that any form of rain can be effectively removed from the rainy data in a visually appealing manner while preserving the important details of the data. In this paper, SRTV regularization and tensor low-rank minimization are utilized for effective deraining as well as efficient retrieval of the clean background. Besides, $l_{1}$ norm and Tensor Total Variation (TTV) regularizers, together with the SRTV regularizer, are employed for the faithful detection of derained moving objects. The experimental results show that the proposed method outperforms the state-of-the-art methods in terms of deraining capability and accurate moving object detection.


I. INTRODUCTION
Usually, most of the modern outdoor vision systems show adequate performance in clear climatic conditions. However, certain dynamic weather conditions such as rain will degrade the visual quality of outdoor scenes, thereby deteriorating the performance of many image processing and computer vision algorithms such as object detection, event detection, tracking, classification, scene analysis and surveillance. Hence it is essential to remove such undesirable visual artifacts caused by the rain on outdoor images or videos so that the outdoor vision system can achieve better performance and greater accuracy.
Many methods have been proposed in the literature to remove rain streaks from an input rainy video. Existing deraining algorithms can be classified into four approaches: time domain, frequency domain, learning and reconstruction approaches. Time domain based techniques basically make use of the chromatic and temporal properties of rain for rain removal. Garg et al. proposed a method to detect and remove rain from video data using the physical properties of raindrops [1], [2]. However, this method is incapable of distinguishing rain streaks from moving objects when the video data contains heavy rain streaks. Starik et al. introduced a video deraining algorithm in which each rainy pixel value was replaced by the median of the corresponding pixel values in temporal frames [3]. However, median filtering brings blurring artifacts around the moving objects in the videos. Zhang et al. developed a method for removing rain streaks by exploiting temporal and chromatic properties [4]. This method degrades the visual quality of videos with dynamic backgrounds. Park et al. designed a video deraining algorithm based on Kalman filtering [5]. This algorithm is inefficient at detecting rainy pixels in videos containing moving objects. Brewer et al. analyzed the shape characteristics of rain streaks to identify and remove rain streaks from video data [6]. This technique misclassifies rain streaks when either multiple rain streaks intersect at some point or a rain streak intersects some other scene motion. These time domain based methods require all the available frames of a video, and even then all the potential rain streaks may not be detected and removed effectively. Barnum et al. suggested a blurred Gaussian model in the frequency domain for detecting rain streaks [7]. This idea leads to visually unpleasant effects in image space when changes are made in frequency space. Lin et al. composed a bilateral filter to decompose the rainy image into low frequency and high frequency parts, the high frequency part being further decomposed into rain and non-rain components [8]. Chen et al. employed a guided filter to transform the input image into the frequency domain and extracted the rain streaks from the high frequency component by means of histograms of oriented gradients [9]. These frequency domain methods may not always lead to pleasing effects in the spatial domain whenever changes are made in the frequency domain. Moreover, if the frequency components corresponding to rain streaks contain clutter, degraded results will be produced.
(The associate editor coordinating the review of this manuscript and approving it for publication was Hugo Proenca.)
(VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/)
Kang et al. presented a dictionary learning based method that uses Morphological Component Analysis (MCA) decomposition [10]. Chiang et al. utilized the concept of dictionary learning to compute the rain streak map [11]. However, this method misdetects non-rain components as rain. Li et al. invented a rain removal algorithm using Gaussian Mixture Models (GMM) [12]. This algorithm is inadequate on images having patches with a substantial amount of background detail corrupted by many rain streaks. Mi et al. employed a fused dictionary for removing the rain component from videos [13]. However, this method fails to remove all the rain streaks from videos. Ren et al. designed Markov Random Fields (MRFs) to distinguish between moving objects and rain streaks [14]. This method is unable to deal with the dynamic components of the video data. Zhu et al. suggested a method that uses three priors on two layers of the rainy input image [15]. However, this method causes over-smoothing of the background details. These learning based methods have high time complexity and cause visually unpleasant results in the output.
Kim et al. proposed a two stage process to remove rain streaks using a rain map obtained by an optical flow estimation process [16]. However, this method is limited when the input video contains heavy rain streaks. Jiang et al. developed an algorithm in which unidirectional tensor total variation regularizers are used to reconstruct the rain-free video data [17]. This algorithm is incompetent when the video contains oblique rain streaks. Wei et al. invented a method that encodes rain in a stochastic manner as a patch-based mixture of Gaussians [18]. This method is not suitable for removing heavy rain patterns. Wang et al. formulated a group sparsity based optimization model for rain removal from videos [19]. This method is limited when the rain direction is far from vertical. Huang et al. invented a video rain streak removal algorithm using directional gradient priors [20]. This algorithm is inadequate for handling residual rain artifacts, and the texture features of the dynamic objects in the video are not well maintained. Sun et al. proposed an algorithm based on a directional regularized tensor for rain removal [21]. However, this algorithm is incapable of removing heavy rain streaks from videos.
One of the prime merits of reconstruction based deraining methods is that the derained video can be obtained from the single available input rainy video. Moreover, since no prior database is required, these methods are more convenient, and the time-consuming training session essential for learning based approaches can be avoided. Hence, our research is mainly focused on the design of a video deraining algorithm using the reconstruction based approach, in order to negate the above mentioned drawbacks of the existing techniques. The aim of this work is therefore to design an efficient video deraining technique which can detect and remove the unwanted rain streaks from the rainy video data and reconstruct the clean data by incorporating the missing details from the available rainy data alone. Among the various reconstruction based methods, the optimization approach using the low rank concept with suitable regularizers is the leading and emerging technique for various video processing applications. This is the main motive of the proposed work.
Apart from rain removal methods, several Moving Object Detection (MOD) algorithms using the concept of sparse and low rank decomposition have been introduced in the past decade. These techniques are dominant compared with conventional MOD techniques in terms of accurate object detection and exact recovery of background information. Candes et al. proposed a low rank approximation based method for background subtraction using Robust Principal Component Analysis (RPCA) [22]. Zhou et al. developed a classical method for object detection using low rank representation [23]. Cao et al. invented a modified version of RPCA for foreground detection by utilizing the spatio-temporal continuity behaviour of moving objects [24]. Chen et al. developed a method for separating background and moving objects by extending the matrix based RPCA into the tensor domain [25]. This method is restricted to videos with a small number of frames. Sajid et al. suggested an online tensor decomposition based method for extracting moving objects from videos, which is effective for videos containing a large number of frames [26]. Anju et al. developed a tensor based Low Rank with Total Variation model for MOD applications [27]. Shijila et al. introduced a matrix based technique for partitioning foreground and background by incorporating $l_1$ norm and Total Variation (TV) regularizations into RPCA [28]. The same authors extended their work [28] to detect moving objects in noisy environments by including an extra term for the noise in the observation model [29]. In [30], an MOD method for extreme surveillance environments was proposed, which uses twist spatio-temporal total variation to enhance the detection performance of the foreground and tensor nuclear norm minimization for efficient background separation.

FIGURE 1. Limitation of Fastderain [20] in the rain removal process. Top row: Original frames of the 'truck', 'highway' and 'traffic' videos respectively. Middle row: Corresponding derained results obtained by Fastderain [20]. Bottom row: Corresponding MOD results obtained by [29]. The circled portions correspond to the moving objects with distorted texture features.
However, the object detection algorithms mentioned above were actually designed for clear weather conditions. Bad weather conditions such as rain may put these detection methods into trouble. Even if a recent reconstruction based deraining technique such as Fastderain [20] can remove the rain streaks well, it adversely affects the texture features of the moving objects, especially when small moving objects are considered. This scenario is depicted in the middle row of Fig. 1. This diminishes the detection accuracy of moving objects when the derained data is fed to an MOD system, as illustrated in the bottom row of Fig. 1, where the most recent MOD method [29] is used for verification. Moreover, blurring of objects may create misinterpretation of the data; for example, it may adversely affect a number plate recognition system or similar computer vision applications.
In summary, even the most recent and popular deraining techniques cannot preserve the texture features of moving objects, and the recent MOD methods cannot detect moving objects from derained data in an effective manner. Thus, it is beneficial to devise an algorithm that addresses all these drawbacks in a single step. Hence the aim of this work is reformulated as the design of an efficient single stage algorithm with three objectives, namely rain streak removal, recovery of rain-free data and moving object detection, in a unified framework without much computational overhead. In this work, we model the video data as a third order tensor, and a tensor decomposition technique is employed to address the above mentioned objective.
The rest of the paper is structured as follows. Section II provides the mathematical preliminaries required for the better description of the proposed method. Section III explains the proposed work in which the formulation of the optimization and the solution of the proposed model are elaborated. Section IV discusses experimental results and performance evaluation. Lastly, section V concludes the paper.

II. MATHEMATICAL PRELIMINARIES AND NOTATIONS
This section presents the preliminaries on the concepts of tensor algebra used in this paper. The mathematical notations used in this paper are shown in Table 1. A tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times \cdots \times m_N}$, is a multi-dimensional array [31]-[36]. In this paper, Euler script is used to denote tensors. A slice of a tensor is a two dimensional section defined by fixing all but two indices [32], [35], [36]. A third order tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$, has horizontal, lateral and frontal slices denoted by $\mathcal{X}(k, :, :)$, $\mathcal{X}(:, k, :)$ and $\mathcal{X}(:, :, k)$ respectively, where $k$ is the slice number. A fiber of a tensor is a one dimensional segment defined by fixing all indices except one [37]. A third order tensor, $\mathcal{X}$, has mode-1, mode-2 and mode-3 fibers denoted by $\mathcal{X}(:, j, k)$, $\mathcal{X}(i, :, k)$ and $\mathcal{X}(i, j, :)$ respectively.

A. TENSOR PRODUCT (T-PRODUCT)
The t-product of two third order tensors, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$ and $\mathcal{Y} \in \mathbb{R}^{m_2 \times m_4 \times m_3}$, is the tensor $\mathcal{Z} = \mathcal{X} * \mathcal{Y} \in \mathbb{R}^{m_1 \times m_4 \times m_3}$ defined as [38],
$$\mathcal{Z}(i, j, :) = \sum_{k=1}^{m_2} \mathcal{X}(i, k, :) \circledast \mathcal{Y}(k, j, :),$$
where $\circledast$ is the circular convolution operator between two tubal fibers.
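As an illustration, the t-product can be evaluated efficiently in the Fourier domain, where the circular convolutions along the tubal fibers reduce to ordinary matrix products of the frontal slices. A minimal numpy sketch (not the authors' implementation):

```python
import numpy as np

def t_product(X, Y):
    """t-product of two third-order tensors: FFT along mode 3 turns the
    circular convolutions into frontal-slice matrix products."""
    m1, m2, m3 = X.shape
    assert Y.shape[0] == m2 and Y.shape[2] == m3
    Xf = np.fft.fft(X, axis=2)
    Yf = np.fft.fft(Y, axis=2)
    Zf = np.empty((m1, Y.shape[1], m3), dtype=complex)
    for k in range(m3):
        Zf[:, :, k] = Xf[:, :, k] @ Yf[:, :, k]
    # inverse FFT returns a real tensor for real inputs
    return np.real(np.fft.ifft(Zf, axis=2))
```

The identity tensor for this product has an identity matrix as its first frontal slice and zeros elsewhere, which is a convenient sanity check.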

B. TENSOR SINGULAR VALUE DECOMPOSITION (T-SVD)
The t-SVD of a tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$, is given by [36], [38], [39],
$$\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T,$$
where $\mathcal{S}$ is a rectangular tensor of size $m_1 \times m_2 \times m_3$ with each frontal slice f-diagonal, $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $m_1 \times m_1 \times m_3$ and $m_2 \times m_2 \times m_3$ respectively, and '$*$' denotes the t-product.
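A corresponding sketch of the t-SVD, again working slice-wise in the Fourier domain. Enforcing conjugate symmetry across the Fourier slices (a common implementation choice, assumed here) makes the returned factors real for real input:

```python
import numpy as np

def t_svd(X):
    """t-SVD  X = U * S * V^T  via matrix SVDs of the Fourier-domain
    frontal slices, mirrored so the factors come back real."""
    m1, m2, m3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    Uf = np.zeros((m1, m1, m3), dtype=complex)
    Sf = np.zeros((m1, m2, m3), dtype=complex)
    Vf = np.zeros((m2, m2, m3), dtype=complex)
    r = min(m1, m2)
    half = m3 // 2 + 1
    for k in range(half):                       # SVD of first half of slices
        u, s, vh = np.linalg.svd(Xf[:, :, k])
        Uf[:, :, k] = u
        Sf[np.arange(r), np.arange(r), k] = s   # f-diagonal slice
        Vf[:, :, k] = vh.conj().T
    for k in range(half, m3):                   # conjugate symmetry (real X)
        Uf[:, :, k] = np.conj(Uf[:, :, m3 - k])
        Sf[:, :, k] = np.conj(Sf[:, :, m3 - k])
        Vf[:, :, k] = np.conj(Vf[:, :, m3 - k])
    ifft = lambda T: np.real(np.fft.ifft(T, axis=2))
    return ifft(Uf), ifft(Sf), ifft(Vf)
```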

C. TENSOR MULTI-RANK
The multi-rank of a tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$, is a vector, $r \in \mathbb{R}^{m_3}$, whose individual elements are the ranks of the frontal slices of the Fourier transform of $\mathcal{X}$ [36], [40],
$$r_i = \text{rank}(\hat{\mathcal{X}}^{(i)}), \quad i = 1, \ldots, m_3,$$
where $\hat{\mathcal{X}}$ denotes the Fourier transform of $\mathcal{X}$ along the third mode and $\hat{\mathcal{X}}^{(i)}$ is its $i$-th frontal slice.

D. TENSOR $l_1$ NORM
The $l_1$ norm of a third order tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$, is given by [37],
$$\|\mathcal{X}\|_1 = \sum_{i,j,k} |\mathcal{X}(i, j, k)|.$$

E. TENSOR NUCLEAR NORM (TNN)
The tensor nuclear norm of a tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$, is defined as [38], [39],
$$\|\mathcal{X}\|_{\circledast} = \sum_{i} \sum_{k} \hat{\mathcal{S}}_f(i, i, k),$$
where $\hat{\mathcal{S}}_f$ is the Fourier transform of the rectangular tensor $\mathcal{S}$ of size $m_1 \times m_2 \times m_3$, with each frontal slice f-diagonal, obtained from the t-SVD of $\mathcal{X}$.
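The multi-rank and TNN definitions above can be computed directly from the Fourier-domain frontal slices; a hedged numpy illustration (the $l_1$ norm is simply `np.abs(X).sum()`):

```python
import numpy as np

def multi_rank(X, tol=1e-8):
    """Multi-rank: vector of ranks of the frontal slices of fft(X, axis=2)."""
    Xf = np.fft.fft(X, axis=2)
    return np.array([np.linalg.matrix_rank(Xf[:, :, k], tol=tol)
                     for k in range(X.shape[2])])

def tnn(X):
    """Tensor nuclear norm: sum of singular values of all Fourier-domain
    frontal slices (some papers divide by m3; this sketch does not)."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(X.shape[2]))
```

For a tensor that is constant along the third mode, only the zero-frequency slice is nonzero, so the multi-rank collapses onto its first entry.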

F. WEIGHTED TENSOR NUCLEAR NORM (WTNN)
For a tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$, with $\mathcal{S}_f \in \mathbb{R}^{\min(m_1, m_2) \times m_3}$ as its singular value tensor in the Fourier domain, the Weighted Tensor Nuclear Norm operator $\|.\|_W$ is defined as [37],
$$\|\mathcal{X}\|_W = \sum_{i} \sum_{k} \mathcal{W}_{\mathcal{X}}(i, k) \, \mathcal{S}_f(i, k),$$
where $\mathcal{W}_{\mathcal{X}} \in \mathbb{R}^{\min(m_1, m_2) \times m_3}$ is a weight tensor whose individual elements are chosen inversely proportional to the magnitudes of the corresponding singular values, so that larger (more significant) singular values are penalized less [37].

G. TENSOR TOTAL VARIATION (TTV)
For a given tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$, the anisotropic total variation is defined as [24],
$$\|\mathcal{X}\|_{TTV} = \sum_{x,y,z} \left( D_h\mathcal{X} + D_v\mathcal{X} + D_t\mathcal{X} \right),$$
where $D_h\mathcal{X} = |\mathcal{X}(x,y,z) - \mathcal{X}(x+1,y,z)|$, $D_v\mathcal{X} = |\mathcal{X}(x,y,z) - \mathcal{X}(x,y+1,z)|$ and $D_t\mathcal{X} = |\mathcal{X}(x,y,z) - \mathcal{X}(x,y,z+1)|$. In the following section, the proposed method for video deraining and simultaneous moving object detection, which utilizes the above concepts of tensor algebra, is presented.
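For concreteness, the anisotropic TTV norm defined earlier can be evaluated with forward differences; a minimal sketch (boundary entries simply contribute no difference term, one possible convention):

```python
import numpy as np

def ttv(X):
    """Anisotropic tensor total variation: sum of absolute forward
    differences along the horizontal, vertical and temporal modes."""
    dh = np.abs(np.diff(X, axis=0)).sum()   # |X(x,y,z) - X(x+1,y,z)|
    dv = np.abs(np.diff(X, axis=1)).sum()   # |X(x,y,z) - X(x,y+1,z)|
    dt = np.abs(np.diff(X, axis=2)).sum()   # |X(x,y,z) - X(x,y,z+1)|
    return dh + dv + dt
```

A constant tensor has zero TTV, while an isolated spike contributes one unit per direction in which it has a neighbour.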

III. PROPOSED METHOD
Video deraining generally aims at removing the rain streaks and recovering the original data from outdoor rainy videos. However, object detection is the main operation of many surveillance systems, where an additional step is required for detecting the objects from the derained videos. This two step process takes more execution time and is not suitable for real time applications. Moreover, the texture features of dynamic objects are lost during the deraining process even with the most recent deraining algorithms, which may adversely affect the detection performance of the MOD system. Hence, we propose a single stage optimization method for deraining and moving object detection in an effective manner. The objectives of this problem consist of efficient removal of the rain components without affecting the important features of the original rain-free data, and accurate detection of moving objects in surveillance videos.
In the proposed method, the rainy video, $\mathcal{V}$, is modeled as a third order tensor in order to effectively utilize the spatial and temporal features of the data. This is because the application of matrix algebra ignores the temporal redundancy present in the data, as it is restricted to 2D space only. However, the different operators in tensor algebra help to negate this drawback to a certain extent by considering the data as a whole instead of individual slices. The proposed method decomposes the rainy data, $\mathcal{V}$, into three third order tensors as given below.
$$\mathcal{V} = \mathcal{B} + \mathcal{F} + \mathcal{R},$$
where $\mathcal{B}$, $\mathcal{F}$ and $\mathcal{R}$ are the background, moving object (foreground) and rain components respectively. The proposed decomposition model is illustrated in Fig. 2. Hence the objective is to accurately decompose the $\mathcal{B}$, $\mathcal{F}$ and $\mathcal{R}$ components, so that the background can be efficiently separated from the foreground for moving object detection, followed by the integration of these components to obtain the rain-free data. The significant contributions of the proposed method are summarized below.
• To the best of our knowledge, this is the first attempt to perform rain streak removal, rain-free data recovery and moving object detection for outdoor surveillance videos by solving a single objective function in a unified tensor framework from the available rainy data alone. This objective is accomplished by decomposing the third order rainy tensor as the sum of three tensors: background, foreground and rain components.
• A new operator termed as Slice Rotational Total Variation (SRTV) is formulated to amalgamate the different types of rain patterns such as vertical and oblique rain streaks to a single pattern so that the further processing can become more simplified and efficient.
• A tensor low rank regularization term is incorporated in the formulated optimization model to exploit the low rank nature of the background component. The low rank regularization term together with the SRTV regularization helps to increase the efficacy of exact background separation, without rain streaks, from the rainy input data.
• In order to extract the sparse moving objects, a greedy sparsity regularization term, the $l_1$ minimization term, is included. The inclusion of the Tensor Total Variation (TTV) regularization helps the fine tuning of exact foreground and background separation. Moreover, the inclusion of the SRTV regularization term removes the rain streaks from the foreground component. The integration of the decomposed and derained foreground and background components provides the rain-free video data.
• The formulated three-way objective function is solved using the Augmented Lagrangian Method (ALM) with the Alternating Direction Method of Multipliers (ADMM) technique. The most popular and recent techniques in the domains of video deraining and moving object detection are compared with the proposed scheme, and the proposed method is found to excel in the majority of cases. However, none of the compared techniques can perform video deraining and simultaneous MOD as a single stage operation like the proposed method.

A. THEORY BEHIND THE PROPOSED MODEL
In this section, the basic idea behind the decomposition of the rainy video into foreground, background and rain components is outlined.

1) REMOVAL OF RAINY COMPONENT
Rain streak removal is the crucial stage of the deraining process. This is because the nature of the rain streaks (light or heavy) and their direction (vertical or oblique) may affect the removal process differently. To address this problem, a unique operator called Slice Rotational Total Variation (SRTV) is introduced for the effective elimination of rain streaks from the background and foreground components, irrespective of the nature and direction of the rain streaks.

a: SLICE ROTATIONAL TOTAL VARIATION (SRTV) OPERATOR
In this technique, each horizontal slice of the rainy data is transformed into a frontal slice by a rotation of 90° and then two dimensional Total Variation (TV) is applied to each rotated frontal slice. Fig. 3 shows that, despite the fact that the frontal slices of heavy and oblique rainy videos have different rain patterns, their corresponding horizontal slices look alike and exhibit a dot-like pattern. Hence, if all the horizontal slices are converted to frontal slices by means of rotation, minimization of a simple two dimensional TV operator is adequate to remove such dot-like patterns, and the reverse slice rotation thereafter yields the clear background/foreground. Thus, the SRTV operator is an efficacious method for eradicating the rain component from videos having complex rain patterns such as heavy and oblique rain streaks. The SRTV operator can be mathematically expressed as follows.
Let $\mathcal{Y} \in \mathbb{R}^{m_3 \times m_2 \times m_1}$ be the resultant tensor obtained by applying slice rotation on a tensor, $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$. Then the entries in each horizontal slice of $\mathcal{X}$ are the same as those in the corresponding frontal slice of $\mathcal{Y}$, i.e., $\mathcal{Y}(k, j, i) = \mathcal{X}(i, j, k)$. Now, the SRTV of $\mathcal{X}$ is defined as,
$$\|\mathcal{X}\|_{SRTV} = \sum_{i=1}^{m_1} \|\mathcal{Y}(:, :, i)\|_{TV},$$
where $\|.\|_{TV}$ denotes the two dimensional anisotropic total variation of a frontal slice.
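A small numpy sketch of evaluating the SRTV norm, assuming the slice rotation $\mathcal{Y}(k, j, i) = \mathcal{X}(i, j, k)$; the exact rotation convention is an assumption of this sketch, not confirmed beyond the stated dimensions:

```python
import numpy as np

def srtv(X):
    """SRTV (sketch): rotate horizontal slices into frontal position,
    then sum the 2-D anisotropic TV of every rotated frontal slice."""
    Y = np.transpose(X, (2, 1, 0))          # Y(k, j, i) = X(i, j, k)
    total = 0.0
    for i in range(Y.shape[2]):             # 2-D TV of each frontal slice
        S = Y[:, :, i]
        total += np.abs(np.diff(S, axis=0)).sum() \
               + np.abs(np.diff(S, axis=1)).sum()
    return total
```

An isolated rain "dot" in the rotated slices contributes a small, fixed TV cost, which is exactly what the minimization drives toward zero.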

2) SEPARATION OF BACKGROUND COMPONENT
The background component has spatial correlation within each frame and temporal correlation among the frames. This spatio-temporal consistency reveals that the background is low rank in nature [41]. The popular tensor robust principal component analysis (TRPCA) suggests modeling the background component as a low rank tensor [42]. Hence tensor multi-rank minimization is a natural choice for recovering the background from videos. Since multi-rank minimization is not convex, a tractable solution can be obtained by replacing the problem with Tensor Nuclear Norm (TNN) minimization, where TNN is assumed to be the best convex surrogate of the tensor multi-rank [40]. However, in [38], Baburaj et al. showed that re-weighting the singular values in terms of their significance improves the performance of low rank recovery. Hence, the proposed model utilizes Weighted Tensor Nuclear Norm (WTNN) minimization for enhancing the recovery of the background component. The WTNN minimization of the background component together with SRTV minimization on the rainy component yields the retrieval of the rain-free background.

3) SEPARATION OF FOREGROUND COMPONENT
The moving objects in each frame usually occupy a small number of pixels. As a result, they exhibit a sparse nature. Therefore, $l_0$ norm minimization can provide the foreground component. Since the $l_0$ norm is non-convex, $l_1$ norm minimization is considered the best substitute [43]. Moreover, the moving objects have salient and continuous changes in intensity along the spatial as well as temporal directions, hence more accurate foreground detection can be achieved by exploiting this spatio-temporal continuity of the data. Tensor Total Variation (TTV) minimization is a superior method for detecting moving objects by imposing spatio-temporal continuity constraints [24]. Thus, the combined usage of $l_1$ norm and TTV norm minimization along with SRTV regularization provides the detection of rain-free foreground components.

B. FORMULATION OF OPTIMIZATION MODEL
The aforementioned mathematical ingredients of the proposed method are recapped in the following short comments:
• The joint application of WTNN minimization on the background component and SRTV norm minimization on the rainy component accomplishes mutual exclusiveness between background and rain streaks. The joint action of these two regularization terms results in a visually pleasing background video without rain streaks.
• Since the foreground component is assumed to be sparse, the sparsity-promoting $l_1$ norm minimization is incorporated. In order to ensure piecewise smoothness, a TTV norm minimization term is included so that the fine tuning of the exact foreground component can be ensured. Moreover, the formulated SRTV norm minimization term ensures the removal of rain streaks from the foreground component.
Considering all the above-mentioned points, an optimization model can be formulated as,
$$\min_{\mathcal{B}, \mathcal{F}, \mathcal{R}} \|\mathcal{B}\|_W + \lambda_1 \|\mathcal{F}\|_1 + \lambda_2 \|\mathcal{F}\|_{TTV} + \lambda_3 \|\mathcal{R}\|_{SRTV} + \lambda_4 \|\mathcal{F}\|_{SRTV} \quad \text{s.t.} \quad \mathcal{V} = \mathcal{B} + \mathcal{F} + \mathcal{R}, \tag{14}$$
where $\|.\|_W$, $\|.\|_1$, $\|.\|_{TTV}$ and $\|.\|_{SRTV}$ represent the WTNN, $l_1$, TTV and SRTV operators respectively, and the $\lambda_i$ are regularization parameters. The Augmented Lagrangian function for Eq. (14) is defined as,
$$\mathcal{L}(\mathcal{B}, \mathcal{F}, \mathcal{R}, \mathcal{M}) = \|\mathcal{B}\|_W + \lambda_1 \|\mathcal{F}\|_1 + \lambda_2 \|\mathcal{F}\|_{TTV} + \lambda_3 \|\mathcal{R}\|_{SRTV} + \lambda_4 \|\mathcal{F}\|_{SRTV} + \langle \mathcal{M}, \mathcal{V} - \mathcal{B} - \mathcal{F} - \mathcal{R} \rangle + \frac{\mu}{2} \|\mathcal{V} - \mathcal{B} - \mathcal{F} - \mathcal{R}\|_F^2, \tag{15}$$
where $\mathcal{M}$ is the Lagrangian multiplier tensor and $\mu$ is a penalty parameter. The problem defined in Eq. (15) can be solved using the Alternating Direction Method of Multipliers (ADMM), in which the whole problem is divided into four subproblems (the $\mathcal{B}$, $\mathcal{F}$ and $\mathcal{R}$ updates and the multiplier update) and each one is solved in an iterative manner as follows [44].
The closed form solution of the $\mathcal{B}$-subproblem in Eq. (16) is given by,
$$\mathcal{B}^{n+1} = \mathcal{D}_{\delta}\!\left(\mathcal{V} - \mathcal{F}^{n} - \mathcal{R}^{n} + \frac{\mathcal{M}^{n}}{\mu}\right),$$
where $\mathcal{D}_{\delta}(\mathcal{X})$ is the singular value convolution operator defined on a tensor $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$ having t-SVD $\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$, given by [38],
$$\mathcal{D}_{\delta}(\mathcal{X}) = \mathcal{U} * \mathcal{C}_{\delta}(\mathcal{S}) * \mathcal{V}^T,$$
where $\mathcal{C}_{\delta}(\mathcal{S}) = \mathcal{S} * \mathcal{J}$ and $\mathcal{J}$ is an f-diagonal tensor whose diagonal elements in the Fourier domain are,
$$\mathcal{J}_f(i, i, k) = \max\!\left(1 - \frac{\delta \, \mathcal{W}_{\mathcal{X}}(i, k)}{\mathcal{S}_f(i, i, k)}, \; 0\right),$$
here, $\mathcal{S}_f$ is the Fourier transform of $\mathcal{S}$. The closed form solution of the $\mathcal{F}$-subproblem in Eq. (20) is driven by the soft thresholding operation [42],
$$\mathcal{S}_{\tau}(\mathcal{X}) = \text{sgn}(\mathcal{X}) \times \max(|\mathcal{X}| - \tau, 0).$$
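The two shrinkage operators driving the $\mathcal{B}$- and $\mathcal{F}$-subproblems can be sketched as follows; `svt` is the unweighted matrix building block applied per Fourier-domain frontal slice (the weighted variant of [38] would scale the threshold by the weight tensor):

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft thresholding S_tau(X) = sgn(X) * max(|X| - tau, 0),
    the proximal operator of the l1 norm (F-subproblem)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(A, tau):
    """Singular value thresholding on a matrix: the building block of the
    tensor low-rank B-subproblem, applied to each Fourier-domain slice."""
    u, s, vh = np.linalg.svd(A, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vh
```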
The solution of the $\mathcal{R}$-subproblem in Eq. (25) is given in terms of the slice rotational TV minimization operator, $SRTV[.]$, defined by,
$$SRTV[\mathcal{X}] = \mathcal{P}^{\uparrow}, \qquad \mathcal{P} = TV[\mathcal{X}^{\downarrow}],$$
where $\mathcal{X}^{\downarrow}$ is the horizontal-to-frontal slice rotated version of the tensor $\mathcal{X}$, $TV[.]$ is the two dimensional TV minimization procedure applied to each frontal slice of $\mathcal{X}^{\downarrow}$ [46], and $\mathcal{P}^{\uparrow}$ is the frontal-to-horizontal slice rotated version of the resulting tensor. Finally, the Lagrangian multiplier is updated by,
$$\mathcal{M}^{n+1} = \mathcal{M}^{n} + \mu \left( \mathcal{V} - \mathcal{B}^{n+1} - \mathcal{F}^{n+1} - \mathcal{R}^{n+1} \right).$$
Here, the final updates of $\mathcal{B}$ and $\mathcal{F}$ provide the clear background and foreground components respectively. Since the resultant $\mathcal{F}$ component does not contain rain streaks, our algorithm is able to precisely detect the moving objects in rainy environments. Moreover, the original rain-free video can be effectively regained by simply adding the last updates of the $\mathcal{B}$ and $\mathcal{F}$ components together.
The algorithm description for solving the aforementioned optimization problem is summarized in Algorithm 1.

IV. EXPERIMENTAL RESULTS AND DISCUSSION
The performance evaluation of the proposed method is done on synthetic and real video sequences. In order to assess the efficacy of the proposed method for removing rain streaks from videos, we compare our method with the latest state-of-the-art deraining methods, namely Kim et al. [16], Wei et al. [18], Fastderain [20] and Sun et al. [21], and with the recent moving object detection methods [25], [26] and [29]. None of these methods can perform deraining and moving object detection as a simultaneous action.
The proposed method is implemented on Linux 14.04 and Matlab (2016a) with an Intel(R) Xeon(R) E5-1620 CPU at 3.70 GHz and 8 GB RAM. Since our method is capable of removing rain streaks as well as detecting moving objects, we analyse the deraining performance and the object detection performance separately.

Algorithm 1 Rain Removal and Foreground Extraction
Data: Rainy video, $\mathcal{V} \in \mathbb{R}^{m_1 \times m_2 \times m_3}$
Result: Foreground video, $\mathcal{F}$, and derained video, $\mathcal{B} + \mathcal{F}$
Initialization: $\mathcal{B}^0 = \mathcal{F}^0 = \mathcal{R}^0 = \mathcal{M}^0 = 0$, $n = 0$, converged = false
while not converged do
  update $\mathcal{B}^{n+1}$, $\mathcal{F}^{n+1}$ and $\mathcal{R}^{n+1}$ by solving the corresponding subproblems
  update the Lagrangian multiplier $\mathcal{M}^{n+1}$
  if relative change of the variables $\leq$ threshold then converged = true end
  n = n + 1
end

A. DERAINING PERFORMANCE ANALYSIS
In order to validate the capacity of our method for removing rain streaks, synthetic videos with different rain patterns such as light, heavy and oblique, and certain real rainy videos are utilized.

1) SYNTHETIC DATA
Synthetic rainy videos are obtained by adding three types of simulated rain streaks, namely light rain, heavy rain and rain at an arbitrary angle, to ground truth videos named 'truck', 'highway', 'pedestrian', 'park', 'backdoor' and 'bungalows', which are collected from the CD.net and SABS datasets. The size of the videos is set to 256 × 256 × 3 × 100. The parameters ρ and ε are experimentally set as ρ = 2.5 and ε = 0.01. For the comparison of our method with the existing video deraining techniques [16], [18], [20] and [21], the quality metrics peak signal-to-noise ratio (PSNR) in dB, structural similarity (SSIM) [47] and inverse Relative Error (iRSE) in dB [38] are calculated for each frame of the aforementioned synthetic videos, and the corresponding mean values along with the execution times are reported in Table 2. As shown in Table 2, the proposed method remarkably outperforms the state-of-the-art methods in terms of the chosen quality metrics. The execution time of Fastderain [20] is much lower than that of the other methods, including the proposed method; however, the proposed method provides dual solutions, deraining and MOD, in a reasonable computational time. Figs. 4 and 5 present the derained results and comparisons for sample frames taken from the synthetic videos 'truck' and 'highway' respectively, with three different rain patterns (light, heavy and oblique). It can be clearly seen that the rain removal task is effectively carried out by the proposed method.
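The per-frame PSNR used above can be computed as follows (the standard definition, with frames assumed scaled to [0, peak]):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth frame and
    a derained frame."""
    ref = np.asarray(ref, float)
    test = np.asarray(test, float)
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```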

2) REAL DATA
Natural rainy videos, namely 'traffic', 'wall', 'banana' and 'backyard', of size 256 × 256 × 3 × 100, are collected from various databases including 'Storyblocks' and are utilized to verify how the proposed method performs rain removal on real data. To substantiate the ability of the proposed method to remove rain streaks from real videos, the no-reference visual quality metrics Natural Image Quality Evaluator (NIQE) [48], Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [49] and Sharpness Index (SI) [50] are calculated for each frame, and the corresponding mean values over the whole video are reported in Table 3. Smaller values of NIQE and BRISQUE and higher values of SI indicate better performance. As seen from Table 3, the proposed method gives better results for all four real videos. Fig. 6 shows the visual comparison of our method with the existing methods [16], [18], [20] and [21]. It is clear that our method outperforms the other state-of-the-art methods.

B. OBJECT DETECTION PERFORMANCE ANALYSIS
In order to analyse the effectiveness of our method for detecting moving objects, we compare it with the recent object detection methods [25], [26] and [29]. The videos used for comparison, 'highway', 'pedestrian', 'backdoor' and 'bungalows', are collected from the CD.net dataset, and simulated rain streaks with different patterns are added. The performance evaluation for background as well as foreground detection is measured in terms of the $f_0$ measure, $f_1$ measure and $f_j$ measure [30], [51]. With precision $P = \frac{TP}{TP + FP}$ and recall $R = \frac{TP}{TP + FN}$, the foreground F-measure is $f_1 = \frac{2PR}{P + R}$, $f_0$ is the analogous F-measure computed on the background masks, and $f_j = \frac{TP}{TP + FP + FN}$ is the Jaccard measure. Here, FP, FN, TP and TN indicate false positives, false negatives, true positives and true negatives, respectively. Using the above expressions, the selected parameters are calculated for each frame of the videos and the corresponding mean values are reported in Table 4. It is clear that the proposed method shows better performance than the state-of-the-art methods with respect to the selected quality metrics. Fig. 7 shows the visual comparison of our method with the existing object detection methods. From Fig. 7, it is seen that the proposed method can efficiently detect the foreground irrespective of the nature of the rain streaks.
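A sketch of computing such detection scores from binary masks; the mapping of $f_0$/$f_1$/$f_j$ to background F-measure, foreground F-measure and Jaccard index is our reading of the excerpt, not confirmed by it:

```python
import numpy as np

def detection_scores(pred, gt):
    """Precision, recall, F-measure and Jaccard index computed from a
    predicted binary foreground mask and its ground truth."""
    pred = np.asarray(pred, bool)
    gt = np.asarray(gt, bool)
    tp = np.sum(pred & gt)      # true positives
    fp = np.sum(pred & ~gt)     # false positives
    fn = np.sum(~pred & gt)     # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f, jaccard
```

The background score would be obtained by calling the same function on the inverted masks.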

V. CONCLUSION
A new method for video deraining and simultaneous moving object detection is proposed and implemented in this paper. A 3-way tensor decomposition model is introduced for the retrieval of the rain-free background and the extraction of the foreground from the rainy video data. A new operator, termed the Slice Rotational Total Variation (SRTV) norm, is formulated for transforming all the different rain patterns into a unique dot-like pattern. SRTV regularization on the rain component, combined with Weighted Tensor Nuclear Norm (WTNN) regularization on the low rank component, is employed for effectively removing the rain streak outliers from the background. Moreover, $l_1$ norm minimization along with TTV regularization is incorporated for the proper detection of moving objects from the rainy video. The quantitative deraining performance of the proposed method was compared with existing methods, namely the LRMC method [16], Wei et al. [18], Fastderain [20] and Sun et al. [21]. Moreover, the object detection performance of the proposed method was compared with recent MOD techniques, namely Sajid et al. [26], Chen et al. [25] and Shijila et al. [29]. In both cases, our method brought good results. The main contributions of this work are the formulation of the new SRTV norm operator for addressing all types of rain patterns, and the simultaneous video deraining and moving object detection achieved by iteratively solving a single optimization model without any training phase. In this work, we have considered only videos with static backgrounds. In future, we can reformulate the optimization model to address videos with dynamic backgrounds.