Perceptual Image Hashing Based on Three-Dimensional Global Features and Image Energy

To improve the classification performance and operating efficiency of hash algorithms, this paper proposes a novel hash algorithm that combines three-dimensional global features and local energy features. In the three-dimensional feature extraction stage, the image is first compressed by singular value decomposition (SVD) to form a secondary image. Then the statistical features of the secondary image at three-dimensional visual angles are extracted as the global features. Finally, the global feature hash is generated from the relationship between the statistical features of the image layers viewed from different three-dimensional visual angles. In the energy feature extraction stage, the luminance image is divided into blocks and the energy value of each image sub-block is computed; the multi-directional energy change features are taken as the local features of the image. Experimental results demonstrate the effectiveness of the proposed algorithm. The algorithm is not only robust to conventional content-preserving operations, but also achieves a good balance between discrimination capability and robustness. In addition, compared with several state-of-the-art schemes, this algorithm has the best ROC curve, the shortest running time and the best local tamper detection ability.


I. INTRODUCTION
Due to the interconnected network environment and the rapid development of free image-editing software, editing and disseminating digital images has become very easy. This inevitably includes malicious editing and illegal transmission, such as copying and editing an original image for commercial profit, or spreading maliciously tampered images to damage the reputation of organizations or individuals; image authentication and image retrieval have therefore become increasingly important. Image hashing is a method of converting the human visual perception of an image into a short character sequence for representation. This short sequence does not change with the specific data representation of the image and requires little storage space, so image hashing has been widely used in image retrieval and content authentication.
The associate editor coordinating the review of this manuscript and approving it for publication was Hossein Rahmani.
The design principle of image hashing is mainly to be robust against unintentional distortion caused by content-preserving operations and geometric distortions, to be sensitive to malicious tampering with image content, and to provide a certain degree of security. The performance of an image hashing algorithm depends largely on the method used to extract image features, so hash algorithms can be divided into the following four categories according to how image features are extracted.

A. BASED ON INVARIANT FEATURE TRANSFORMATION
The hash algorithm based on invariant feature transformation mainly uses the frequency coefficients of the image in the transform domain to be robust to one or more attack operations. Ouyang et al. [1] utilized the amplitude correlation of the low-frequency quaternion discrete Fourier transform (QDFT) coefficients of the secondary image obtained by polar coordinate transformation to construct the hash sequence. This algorithm can better resist rotation attacks. In the scheme of Qin et al. [2], a Weber local binary pattern (W-LBP) operation was performed on the low-frequency sub-blocks obtained by the discrete wavelet transform (DWT) to extract local texture features, and the discrete cosine transform (DCT) was applied to the color angle matrix to extract color features. The texture and color features are combined to generate the hash sequence. Tang et al. [3] used the phase spectrum of the Fourier transform (PFT) visual model on the luminance Y component to generate visual saliency maps, and extracted low-frequency characteristics from the saliency maps transformed by the dual-tree complex wavelet transform. The algorithm has good rotation robustness.
Qin et al. [4] performed the DCT on image blocks containing rich edge information, and the coefficient features and position information were reduced by principal component analysis (PCA) to generate the hash. Vadlamudi et al. [5] performed two-dimensional DWT decomposition on overlapping blocks containing feature points, and used the row average of the DWT approximation coefficients to generate the hash sequence. In scheme [6], Tang et al. applied the discrete Fourier transform (DFT) to each row of the image processed by log-polar transformation (LPT), used the amplitudes of the DFT coefficients to construct a rotation-resistant feature matrix, and finally generated the hash sequence by multidimensional scaling (MDS). Lei et al. [7] [28] used the Zernike moments of the luminance and chroma components as the global features, and the location information and texture information of the salient regions as the local features. The algorithm can not only detect tampering of the image, but also locate the tampered area. Tang et al. [29] combined the color vector angle with the edge information obtained by the Canny operator, and used their statistical features to construct the image hash.
Although the above-mentioned hash algorithms have their own advantages, some of them cannot balance classification performance against operating efficiency, and some cannot achieve a trade-off between robustness and discrimination capability. To solve these problems, we propose a hash algorithm based on global features from three-dimensional visual angles in different directions together with energy features. Our contributions mainly include the following: (1) The algorithm innovatively uses the statistical features of the image at different three-dimensional visual angles as the global features of the image, and the relationship between the statistical features of the image layers from different visual angles as the final image hash.
(2) The image energy adopted in this article has rarely been used in hash algorithms, but it has good robustness to conventional content-preserving operations, as proved in Section II-C. In this paper, the energy values of the image blocks are used to construct an energy matrix, and the multi-directional change features of the energy matrix are used to construct the local energy-feature hash. On the basis of the excellent robustness of image energy, the discrimination capability of the algorithm is improved.
(3) The hash sequence in this paper is compact, only 162 bits. It has good robustness to conventional geometric distortions, and its classification performance, operating efficiency, and local tampering detection are superior to five state-of-the-art schemes. It also performs well in image copy detection.
The rest of this paper is organized as follows: Section II presents the specific steps of feature extraction and hash generation; Section III reports a series of experiments and their analysis; Section IV concludes and discusses future work.

II. PROPOSED IMAGE HASHING SCHEME
The main content of the proposed hash algorithm is shown in Fig. 1, which includes four parts: preprocessing, global feature extraction under different three-dimensional visual angles, local feature extraction of multi-directional energy changes of image blocks, and hash encryption.

A. PREPROCESSING
First, the resolution of the input image I0 is uniformly adjusted to M × M by bilinear interpolation, which improves the algorithm's robustness against image scaling attacks. In addition, input images of any size then yield the same hash length, which facilitates subsequent performance analysis. Then, Gaussian low-pass filtering is performed on the size-normalized image; this operation reduces the impact of noise, compression and other minor operations on the image [17]. Finally, the preprocessed image is converted to the YCbCr color space, and its luminance component is taken for subsequent feature extraction.
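As an illustrative sketch of the preprocessing chain (the authors' implementation is in MATLAB; this Python/NumPy reimplementation is ours), the BT.601 luminance weights and the 7-tap Gaussian kernel radius below are assumptions, since the paper fixes only M = 256 and σ = 1:

```python
import numpy as np

def bilinear_resize(img, M):
    """Bilinear interpolation of a 2-D (or H x W x C) array to M x M."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, M)
    xs = np.linspace(0, w - 1, M)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    if img.ndim == 3:                       # broadcast weights over channels
        wy = wy[..., None]
        wx = wx[..., None]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    return a*(1-wy)*(1-wx) + b*(1-wy)*wx + c*wy*(1-wx) + d*wy*wx

def gaussian_filter(Y, sigma=1.0, radius=3):
    """Separable Gaussian low-pass filter with reflected borders.
    The kernel radius is an assumption; the paper fixes only sigma."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2)); k /= k.sum()
    pad = np.pad(Y, radius, mode='reflect')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, tmp)

def preprocess(rgb, M=256, sigma=1.0):
    """Resize, filter, and extract the YCbCr luminance component.
    Y = 0.299 R + 0.587 G + 0.114 B (ITU-R BT.601 weights, assumed)."""
    img = bilinear_resize(rgb.astype(np.float64), M)
    Y = 0.299*img[..., 0] + 0.587*img[..., 1] + 0.114*img[..., 2]
    return gaussian_filter(Y, sigma)
```

A constant-valued input image passes through unchanged, which is a quick sanity check that the kernel is normalized and the border handling is consistent.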

B. GLOBAL FEATURES EXTRACTION OF 3D PERSPECTIVE
Before extracting features, the luminance component Y is divided into non-overlapping blocks of size n × n, forming an image block matrix B = (B_{i,j}), where B_{i,j} is the image block located in the i-th row and the j-th column.
To reduce storage requirements and improve algorithm efficiency on the one hand, and to further improve the robustness of the algorithm to noise on the other, each image block B_{i,j} is further divided into four non-overlapping sub-blocks of size (n/2) × (n/2), and SVD is then performed on each image sub-block b^(k)_{i,j} (k = 1, 2, 3, 4) according to (3).
The SVD results of the four sub-blocks b^(k)_{i,j} are arranged and combined according to (4) to form a secondary image block p_{i,j} of size (n/2) × 8. All the secondary image blocks are rearranged according to (5) to form the secondary image P of size (M/2) × (8M/n).
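A hedged sketch of the secondary-image construction. Equations (3)-(5) are not reproduced in this excerpt, so keeping the first left and right singular vectors of each sub-block is an assumption; it does, however, yield exactly the (n/2) × 8 secondary blocks and the (M/2) × (8M/n) secondary image stated above:

```python
import numpy as np

def secondary_image(Y, n=16):
    """Sketch of the SVD compression stage. Each n x n block is split
    into four (n/2) x (n/2) sub-blocks; for each sub-block we keep the
    first left and right singular vectors (an assumption -- the paper's
    equations (3)-(4) fix the exact arrangement), giving a (n/2) x 8
    secondary block; all blocks are then tiled into the secondary image P."""
    M = Y.shape[0]
    half = n // 2
    blocks = []
    for i in range(0, M, n):
        for j in range(0, M, n):
            block = Y[i:i+n, j:j+n]
            cols = []
            for (r, c) in [(0, 0), (0, half), (half, 0), (half, half)]:
                sub = block[r:r+half, c:c+half]
                U, s, Vt = np.linalg.svd(sub)
                cols.append(U[:, 0])        # first left singular vector
                cols.append(Vt[0, :])       # first right singular vector
            blocks.append(np.column_stack(cols))    # (n/2) x 8
    # rearrange the secondary blocks into the (M/2) x (8M/n) image P
    per_row = M // n
    return np.vstack([np.hstack(blocks[k*per_row:(k+1)*per_row])
                      for k in range(per_row)])
```

With M = 256 and n = 16 this produces the 128 × 128 secondary image described above.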
Taking the horizontal resolution of the secondary image P as the x axis, the vertical resolution as the y axis, and the pixel value at coordinate (x, y) as the z axis, we obtain the three-dimensional visual angle of the secondary image shown in Fig. 2. Observing Fig. 2 through the x-axis viewing angle and the y-axis viewing angle respectively yields completely different visual effects, as shown in Fig. 3. The statistical characteristics of the image are therefore extracted separately from the different three-dimensional visual angles to construct hash sequences. At the x-axis perspective, the secondary image P is layered according to the y-axis resolution into M/2 layers, of which the i-th image layer is shown in Fig. 4. The statistical characteristics of each layer are calculated separately: mean, variance and kurtosis, which form the mean matrix m_x, the variance matrix v_x, the kurtosis matrix s_x, and the statistical feature matrix T_x. Similarly, the mean matrix m_y, the variance matrix v_y, the kurtosis matrix s_y, and the statistical feature matrix T_y are constructed from the y-axis viewing angle.
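Under the layering just described, each layer from the x-axis view is one row of P, and the three statistics per layer stack into the 3 × (M/2) feature matrix T_x (T_y analogously from the columns). A minimal NumPy sketch:

```python
import numpy as np

def layer_statistics(P, axis=1):
    """Mean, variance and kurtosis of each image layer of the secondary
    image P. Viewed along the x axis, each layer is one row of P
    (axis=1); the y-axis feature matrix T_y is obtained with axis=0.
    Kurtosis is the standardized fourth moment E[(x-mu)^4] / var^2."""
    if axis == 0:
        P = P.T
    mean = P.mean(axis=1)
    var = P.var(axis=1)
    centered = P - mean[:, None]
    # guard against zero variance in flat layers
    kurt = (centered**4).mean(axis=1) / np.maximum(var**2, 1e-12)
    return np.vstack([mean, var, kurt])     # 3 x (number of layers)
```

For a flat (constant) layer the variance and fourth moment are both zero, so its kurtosis entry degrades gracefully to zero.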
The matrix T_x is subjected to row standardization according to (10) to obtain the matrix F,
where T_{i,j} is the element in the i-th row and j-th column of the matrix T_x, u_i is the mean of the i-th row vector, and σ_i is the standard deviation of the i-th row vector. Similarly, the matrix T_y is subjected to the same standardization to obtain the matrix Q.
The Euclidean distance between each column of matrix F and matrix Q is calculated according to (11), yielding an invariant feature matrix h of size 1 × M/2,
where F_{i,j} and Q_{i,j} are the elements in the i-th row and j-th column of the matrices F and Q, and h(j) is the j-th element of the matrix h.
By operating on the invariant feature matrix h according to (12), we obtain the binary sequence H_S of length M/2 − 1,
where h(j) is the j-th element of the matrix h and H_S(j) is the j-th element of the sequence H_S.
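Equations (10)-(12) can be sketched together as follows. The row standardization, per-column Euclidean distance and adjacent-element binarization match the description above; the exact comparison rule in (12) is not shown in this excerpt, so the convention h(j) ≥ h(j+1) → 1 is an assumption:

```python
import numpy as np

def global_hash(Tx, Ty):
    """Global-feature hash from the two statistical feature matrices.
    (10): row standardization of Tx and Ty into F and Q;
    (11): per-column Euclidean distance, giving the 1 x (M/2) vector h;
    (12): binarization by comparing adjacent elements of h (the
    comparison direction is our assumption)."""
    def standardize(T):
        mu = T.mean(axis=1, keepdims=True)
        sd = T.std(axis=1, keepdims=True)
        return (T - mu) / np.maximum(sd, 1e-12)   # avoid division by zero
    F, Q = standardize(Tx), standardize(Ty)
    h = np.sqrt(((F - Q) ** 2).sum(axis=0))       # column-wise distance
    return (h[:-1] >= h[1:]).astype(int)          # M/2 - 1 bits
```

When Tx and Ty coincide, h is identically zero and the sequence collapses to all ones, which is consistent with a non-strict comparison.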

C. FEATURE EXTRACTION OF LOCAL ENERGY
For the luminance image Y of size M × M, the energy E(Y) can be expressed as (13) [30], where trace(·) denotes the trace of the matrix and y_{ij} is the pixel value of the luminance image Y. When the image Y is perturbed by a small disturbance W during transmission, its energy does not change significantly; the proof is shown in (14). Because conventional content-preserving operations have only a slight impact on the pixel values of the image, the image energy can be considered robust to conventional content-preserving operations.
where ∆E is the amount of energy change caused by a small disturbance of the image. When extracting the energy features of the image, the luminance component Y is first divided into non-overlapping blocks of size a × a, and the energy value of each image sub-block is obtained in order to form the energy matrix N_1. The energy of each image block is extracted for three main reasons: first, the energy of an image block is robust to content-preserving operations; second, different images may have the same total image energy, but it is difficult for different images to have exactly the same block energies; finally, block processing improves the algorithm's robustness to subtle operations.
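Since E(Y) = trace(YY^T) is simply the sum of squared pixel values, both the energy of (13) and the block energy matrix N_1 take only a few lines of NumPy:

```python
import numpy as np

def energy(Y):
    """Image energy E(Y) = trace(Y Y^T), i.e. the sum of squared pixels."""
    return float(np.trace(Y @ Y.T))

def block_energy_matrix(Y, a=32):
    """Energy value of each non-overlapping a x a block, in order,
    forming the energy matrix N1 of size (M/a) x (M/a)."""
    M = Y.shape[0]
    return np.array([[energy(Y[i:i+a, j:j+a])
                      for j in range(0, M, a)]
                     for i in range(0, M, a)])
```

A small additive perturbation changes the energy only slightly, which is the robustness argument of (14) in miniature.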
where n_{i,j} is the energy value of the image sub-block located in the i-th row and j-th column. Matrix operations are performed on the energy matrix N_1 in the four directions shown in Fig. 5. The upper-left energy change matrix N_lu, the upper-right energy change matrix N_ru, the lower-left energy change matrix N_ld, and the lower-right energy change matrix N_rd are obtained by sequentially subtracting the matrices LU, RU, LD and RD from the center matrix CE. To obtain a concise feature matrix, the above four matrices are processed according to (16) to obtain the energy change matrix N_v.
To ensure the operating efficiency of the algorithm and reduce storage redundancy, the energy change matrix N_v is expanded row by row into a matrix N and quantized into a binary sequence H_N according to (17). To ensure the security of the algorithm, the columns of the intermediate hash H_m are rearranged by a pseudo-random sequence S generated with the randperm(·) function in MATLAB, yielding the final hash sequence H, as shown in (18).
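A hedged sketch of the energy-change stage. We assume the shift pattern of Fig. 5 takes the interior of N_1 as the center matrix CE and its four diagonally shifted neighbours as LU, RU, LD and RD, which reproduces the (M/a − 2)² interior size used in the hash-length formula of Section III-A; the averaging for (16), the adjacent-element binarization for (17) and the fixed permutation key are likewise assumptions:

```python
import numpy as np

def energy_hash(N1):
    """Multi-directional energy change (Fig. 5) and binarization.
    CE is the interior of N1; LU/RU/LD/RD are its diagonal-shifted
    neighbours. Averaging the four difference matrices (our stand-in
    for eq. (16)) and comparing adjacent elements (our stand-in for
    eq. (17)) are assumptions."""
    CE = N1[1:-1, 1:-1]
    N_lu = CE - N1[:-2, :-2]
    N_ru = CE - N1[:-2, 2:]
    N_ld = CE - N1[2:, :-2]
    N_rd = CE - N1[2:, 2:]
    Nv = (N_lu + N_ru + N_ld + N_rd) / 4.0
    N = Nv.flatten()                        # row-wise expansion
    return (N[:-1] >= N[1:]).astype(int)    # (M/a - 2)^2 - 1 bits

def scramble(bits, key=2021):
    """Key-driven permutation, the analogue of MATLAB's randperm;
    the key value here is purely illustrative."""
    rng = np.random.default_rng(key)
    return bits[rng.permutation(bits.size)]
```

The permutation changes only the order of the bits, never their values, so the scrambled hash carries exactly the same information under the secret key.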
where H_1(i) and H_2(i) are the i-th elements of the hash sequences H_1 and H_2, and L is the total length of the hash sequence of the proposed algorithm.

III. EXPERIMENTAL RESULTS AND ANALYSIS
In this section, we first conduct robustness and discrimination experiments to test whether the proposed algorithm meets the basic requirements of image hashing. We then analyze the effect of parameters on the performance of the proposed algorithm and compare it with several state-of-the-art schemes in various respects. Finally, copy detection and local tampering detection experiments are carried out. All experiments were run on the MATLAB R2014a platform, on a computer configured with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 8.00 GB RAM.

A. PARAMETER SETTINGS
The specific settings of the proposed hash algorithm are as follows: the size of the normalized image and the standard deviation of the Gaussian low-pass filter in the preprocessing stage are 256 and 1, respectively. For global feature extraction at different three-dimensional visual angles, the image block size is 16 × 16. For local energy-change feature extraction, the image block size is 32 × 32. That is, M = 256, σ = 1, n = 16 and a = 32. Under these settings, the hash length is L = M/2 + (M/a − 2)^2 − 2 = 162 bits.
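The stated length follows from the two stages: our reading is that the 3-D global stage contributes M/2 − 1 bits and the energy stage (M/a − 2)² − 1 bits, which together give the formula above:

```python
# Hash length under the paper's parameters. The split into
# (M/2 - 1) + ((M/a - 2)**2 - 1) bits is our reading of the formula.
M, a = 256, 32
L = M // 2 + (M // a - 2) ** 2 - 2
print(L)  # 162
```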

B. PERCEPTUAL ROBUSTNESS
The experiments in this section reflect the robustness of the hash algorithm between an input image and the similar images generated by content-preserving operations. Twenty color images were selected as robustness test samples; some standard images are shown in Fig. 6. First, the 20 sample images were subjected to the 12 kinds of attack operations shown in Table 1, generating a total of 1380 similar images. The proposed algorithm was then used to obtain the hash sequences of the sample images and similar images, and finally the hash distance between each sample image and its similar images was calculated according to (19). Table 2 shows the statistics of the hash distances (minimum, maximum, average and standard deviation) between the 20 sample images and their similar images under different attack types. In Table 2, except for mean filtering and rotation attacks, the minimum hash distance between a sample image and its similar versions is 0 for the remaining 10 attack operations, and the maximum distance does not exceed 0.1. Except for rotation attacks, the mean and standard deviation of the hash distance for the other attack types are both less than 0.1. Therefore, the algorithm can effectively resist conventional content-preserving operations other than rotation attacks. Fig. 7 plots the robustness results for 5 standard images (Airplane, Baboon, House, Lena and Peppers) and their similar images under various content-preserving operations, to display the robustness of the algorithm intuitively. The horizontal coordinate of each sub-figure is the corresponding image-processing parameter setting, and the vertical coordinate is the Hamming distance between the standard image and its processed versions. As shown in Fig. 7, apart from the rotation attack, the distance curve of each attack operation under different parameter settings fluctuates little and changes gently, which further illustrates that the proposed algorithm has good robustness to multiple image attacks. For the rotation attack, the distance increases with the rotation angle, so the algorithm cannot effectively resist large-angle rotation.
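Formula (19) is not reproduced in this excerpt; for binary hashes the distance used throughout the experiments is, by all indications, the normalized Hamming distance, sketched here under that assumption:

```python
def hash_distance(h1, h2):
    """Normalized Hamming distance (our assumed form of eq. (19)):
    the fraction of positions where two binary hash sequences differ."""
    assert len(h1) == len(h2)
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)
```

Two hashes whose distance falls below the threshold chosen in Section III-C (T = 0.24) would be judged to come from perceptually similar images.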

C. DISCRIMINATION CAPABILITY
The discrimination experiments effectively reflect the classification performance of the algorithm. The experimental dataset consists of 1000 different images, of which 700 are from the University of Washington Ground Truth database [31] and 300 are taken from the VOC2007 database [32]. Any two of the 1000 images form a different-image pair, so the total number of different-image pairs is C(1000, 2) = 499500. The 11 kinds of content-preserving operations performed on these 1000 images, with their specific attack types and parameter settings, are shown in Table 3. The total number of similar-image pairs is C(23, 2) × 1000 = 253000. The distance distributions of similar-image pairs and different-image pairs can be seen intuitively in Fig. 8, where the red curve is the distance distribution of similar-image pairs and the blue curve that of different-image pairs. The abscissa of the red curve lies between 0 and 0.2461, and that of the blue curve in the range 0.2222 to 0.6728; the two curves overlap only over 0.2222 to 0.2461. Because the overlap interval is short and the number of overlapping pairs is small, an appropriate threshold can be selected to effectively distinguish different images from similar images.
When the selected threshold is too small, similar-image pairs are easily misjudged as different-image pairs, resulting in a large error detection rate P_E [33]; when the threshold is too large, different-image pairs are easily mistaken for similar-image pairs, resulting in a large collision rate P_C [33]. That is, the collision rate and the error detection rate suppress each other. Therefore, the threshold should be selected where both the collision rate and the error detection rate are small, so that the robustness and discrimination capability of the algorithm reach a good trade-off. The formulas for the collision rate and error detection rate are given in (20). The collision rate P_C and error detection rate P_E under specific thresholds are shown in Table 4, and T = 0.24 is chosen in this paper.
where N_C is the total number of different-image pairs misjudged as similar-image pairs, N_E is the total number of similar-image pairs misjudged as different-image pairs, and N_D and N_S are the total numbers of different-image pairs and similar-image pairs, respectively.
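Equation (20) can be sketched as follows; whether a pair at exactly the threshold counts as similar (d ≤ T rather than d < T) is an assumption:

```python
def collision_and_error_rates(diff_dists, sim_dists, T):
    """Collision rate P_C = N_C / N_D and error detection rate
    P_E = N_E / N_S of eq. (20): N_C counts different-image pairs
    whose distance falls at or below the threshold T (our convention),
    N_E counts similar-image pairs whose distance exceeds T."""
    N_C = sum(d <= T for d in diff_dists)
    N_E = sum(d > T for d in sim_dists)
    return N_C / len(diff_dists), N_E / len(sim_dists)
```

Sweeping T over the observed distances and tabulating (P_C, P_E) reproduces the kind of trade-off table the paper reports as Table 4.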

D. IMPACT OF IMPORTANT PARAMETERS ON ALGORITHM PERFORMANCE
In the process of extracting energy change features, the luminance image is processed in non-overlapping blocks, so the size of the image block affects the performance of the proposed hash algorithm. Therefore, with the other experimental parameters unchanged, the performance is compared for different values of a, namely a = 8, a = 16, and a = 32.
We analyze the impact of image block size on classification performance by plotting receiver operating characteristic (ROC) curves [34] for different values of a. This experiment uses the 499500 different-image pairs and 253000 similar-image pairs of Section III-C, and the horizontal and vertical coordinates of the ROC curve are obtained by (21).
where N_F is the total number of different-image pairs misjudged as similar-image pairs, N_T is the total number of similar-image pairs correctly judged, and N_D and N_S are the total numbers of different-image pairs and similar-image pairs, respectively. Fig. 9 shows the ROC curves for different values of a. First, it can be seen intuitively that for a = 32, a = 16, and a = 8 the ROC curves are all close to the upper left corner, indicating that the classification performance of the algorithm is good. Second, when a = 32 the ROC curve is closest to the upper left corner, so the algorithm achieves its best classification performance at a = 32. In addition, the hash lengths and average hash generation times for a = 32, a = 16 and a = 8 are summarized in Table 5. When a = 32, the hash length is the shortest and the generation time the least, which best meets the low-storage and high-efficiency requirements of a hash algorithm. In summary, we set a = 32.
FIGURE 10. ROC curves of the proposed scheme and the schemes [14], [15], [18], [25], [29].

E. PERFORMANCE COMPARISON
In this section, the proposed algorithm is compared with five state-of-the-art schemes, i.e., schemes [14], [15], [18], [25] and Davarzani et al. [29]. The performance of these algorithms is measured in three respects: classification performance, storage requirements, and hash generation efficiency. To ensure the fairness of the experimental results, we abide by three rules during the experiments: the original parameter settings of the comparison algorithms are unchanged, all algorithms use the same dataset, and all experiments are completed on the same computer.

1) COMPARISON OF CLASSIFICATION PERFORMANCE
In the classification performance comparison experiment, the ROC curve is again used as the analysis tool. The dataset is consistent with Section III-C, with 499500 different-image pairs and 253000 similar-image pairs. It can be seen intuitively from Fig. 10 that the ROC curve of the proposed algorithm is closest to the upper left corner of the square area. Since the upper left corner represents a large P_TPR at a small P_FPR, the proposed algorithm has the best classification performance. In fact, when P_FPR = 0, the P_TPR values of the proposed algorithm and schemes [14], [15], [18], [25] and [29] are 0.99997, 0.9985, 0.865, 0.9997, 0.8823 and 0.0861, respectively. When P_TPR ≈ 1, the P_FPR values of the proposed algorithm and schemes [14], [15], [18], [25] and [29] are 8.008 × 10^−6, 0.1121, 0.0830, 6.126 × 10^−5, 0.0963 and 0.3501, in order. That is, under the same conditions, the proposed algorithm has the largest P_TPR and the smallest P_FPR among the compared algorithms.
To further show the classification performance of the hash algorithms, we have drawn the distance distribution maps of the various algorithms. The subgraphs (a) to (f) in Fig. 11 correspond to the algorithm of this paper and the schemes [14], [15], [18], [25] and [29], respectively. The red part of each distance distribution diagram represents similar-image pairs, and the blue part represents different-image pairs. These distance distribution graphs show intuitively that the overlapping area of the red and blue parts is smallest for the proposed algorithm, indicating that its classification performance is the best.

2) COMPARISON OF COMPUTATIONAL COMPLEXITY
When the number of test images is huge, the storage requirements of the hash sequences and the efficiency of hash generation are particularly important, so a short hash generation time and a short hash length are basic requirements for the proposed algorithm. In the hash-generation-time comparison, with external factors such as parameter settings, computer configuration and experimental database held the same, we recorded the total time to produce the hashes of 1000 different images and divided it by 1000 to obtain the average generation time. The average generation times of the proposed algorithm and schemes [14], [15], [18], [25] and [29] are 0.0287 s, 0.2213 s, 0.3021 s, 0.0617 s, 0.1376 s and 2.9783 s, respectively, so the proposed algorithm takes the shortest time to generate a hash. The hash length of the proposed algorithm is 162 bits, while the hash lengths of schemes [14], [15], [18], [25] and [29] are 64 decimal digits, 96 bits, 452 decimal digits, 720 bits and 400 bits. Since a decimal digit requires at least 4 binary digits, the hash length of the proposed algorithm is only slightly longer than that of scheme [15], but its classification performance and hash generation efficiency are significantly better than those of scheme [15].
According to the performance comparison of the above six hash algorithms, the proposed algorithm not only has the best classification performance but also requires the shortest hash generation time. The specific comparison results of the six algorithms are summarized in Table 6.

F. APPLICATION OF IMAGE COPY DETECTION
By choosing an appropriate threshold, the proposed algorithm can effectively detect copied images. The experimental dataset contains 3600 test images: 1000 different images downloaded from the network, 100 of which were randomly selected as query images, and 2600 copy images generated by performing 13 content-preserving operations on each query image; the specific attack types and corresponding parameter settings are shown in Table 7. The copy detection capability of the algorithm is described by the recall rate R and precision rate P [35] under different thresholds, defined in (22); the specific results are shown in Table 8. When the threshold is 0.27, the algorithm detects all the copied images, but the precision needs improvement; when the threshold is 0.31, the precision also reaches a good level,
where N_p is the number of copy images in the query result that correctly match the query images, N_q is the number of all images included in the query result, and N_a is the number of all copy images in the test image set.
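Equation (22) can be sketched as follows; treating N_q as the total number of returned results is our reading of the definition:

```python
def recall_precision(query_results, true_copies):
    """Recall R = N_p / N_a and precision P = N_p / N_q of eq. (22),
    where N_p is the number of true copies among the returned results,
    N_q the number of returned results (our reading), and N_a the
    number of true copies in the test set."""
    returned = set(query_results)
    copies = set(true_copies)
    N_p = len(returned & copies)
    return N_p / len(copies), N_p / len(returned)
```

Lowering the distance threshold shrinks the returned set, which typically raises precision at the cost of recall, matching the trade-off described above for thresholds 0.27 and 0.31.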

G. APPLICATION OF IMAGE TAMPERING DETECTION
When an image is partially tampered, the hash distance between the tampered image and the original image should be greater than the distance between similar-image pairs and smaller than the distance between different-image pairs. The total number of different-image pairs in this experiment is 499500 and the total number of similar-image pairs is 253000, the same datasets as in Section III-C. The tampered image set contains 15000 original images and 15000 tampered images; the original images are taken from the VOC2012 database [36], and a region amounting to 20% of the original image area is added to each original image to form the tampered version. Fig. 12 shows the distance distributions of similar-image pairs, of original versus tampered images, and of different-image pairs. The red curve is the distance distribution of similar-image pairs, ranging from 0 to 0.247; the blue curve is the distance distribution between the original images and the tampered images, with endpoint values 0.0123 and 0.401; the green curve is the distance distribution of different-image pairs, ranging from 0.222 to 0.673. It can be seen intuitively from Fig. 12 that the blue curve lies between the red and green curves; the horizontal coordinate of the intersection point T_1 of the blue and red curves is 0.0710, and that of the intersection point T_2 of the blue and green curves is 0.3364. When the distance between a test image and the original image is less than T_1, they are considered a similar-image pair; when it is greater than T_2, they are considered different images; when the distance is between T_1 and T_2, the test image is regarded as a partially tampered image.
With the threshold T_1, the probability that the proposed algorithm correctly identifies similar-image pairs is 93.29%; with the threshold T_2, the probability of correctly identifying different-image pairs is 99.89%; and the probability of correctly identifying locally tampered images is 94.17%. Regarding the effect of the value of a on the tampering detection results, we obtained the results shown in Table 9. As the side length of the image block decreases and the number of image blocks increases, the algorithm can detect more detailed changes in the image, which benefits tamper detection and further illustrates the effectiveness of the proposed algorithm in detecting tampered images.
Next, the tampering detection capabilities of the proposed algorithm (a = 32) and the comparison algorithms are further illustrated with some tampering examples. The original images and the corresponding partially tampered images are shown in Fig. 13 and Fig. 14, where the types of tampering include local color tampering and local content tampering (the deletion and the addition of objects). In Fig. 13, from left to right and from top to bottom are (a1) to (l1); in Fig. 14, from left to right and from top to bottom are (a2) to (l2). Since schemes [15], [25] and [29] do not mention tampering detection in their original articles, their detection results may be unsatisfactory, but all algorithms use the same datasets. In addition, the hash distance measures of this paper and schemes [14], [15], [18], [25] and [29] are the normalized Hamming distance, correlation coefficient, Hamming distance, L2 norm, correlation coefficient, and correlation coefficient, respectively. From Table 10 we find that the distances between the original and tampered images for the proposed algorithm all lie between T_1 and T_2, while the other schemes cannot completely detect the tampered images; therefore, the proposed algorithm has a certain detection ability for tampered images.

IV. CONCLUSION
The proposed algorithm uses the three-dimensional statistical features from different visual angles as the global image features, and the energy variation features in four directions as the local features. The three-dimensional global features and the multi-directional local variation features are combined and scrambled to obtain the final hash sequence. The collision rate and error detection rate under different thresholds show that the algorithm achieves a good trade-off between discrimination capability and robustness. Compared with five state-of-the-art schemes, the proposed algorithm has the best ROC curve, a compact hash sequence, the best operating efficiency and the best tamper detection performance. However, the algorithm also has some shortcomings, such as the inability to effectively resist large-angle rotation attacks and to effectively detect subtly tampered images. Future work will focus on improving these shortcomings.