Reversible Data Hiding Scheme Based on VQ Prediction and Adaptive Parametric Binary Tree Labeling for Encrypted Images

In this paper, we propose a reversible data hiding scheme for encrypted images (RDHEI) based on vector quantization (VQ) prediction and parametric binary tree labeling (PBTL). VQ compression is a lossy image compression method; the difference between the original image and the decompressed image is small when the codebook is sufficiently long. Thus, VQ can be applied as a tool for pixel value prediction. Based on VQ prediction, the PBTL method is applied to label the embeddable and non-embeddable pixels. Through adaptive setting of parameters, the modified PBTL can provide optimal pixel labeling strategies and thus maximize the overall embedding capacity. Furthermore, the VQ index and the secret data are stream ciphered to avoid leakage of the image content and secret information. Different metrics are used to show that the marked encrypted images are highly secure. In comparison with several state-of-the-art schemes, our scheme outperforms the related works in embedding rate for two commonly applied image databases. In addition, extraction of the secret data and recovery of the original image can be performed separately according to authorization.


I. INTRODUCTION
As people's demand for secure transmission increases, data hiding technology has been greatly developed. By hiding secret information in ordinary media such as a plain image, a JPEG image, or an app, a secret message can be communicated without drawing an eavesdropper's attention. According to the restorability of the cover media, data hiding techniques can be divided into two categories: reversible data hiding [1]-[4] and irreversible data hiding [5]-[7]. According to the format of the cover images, data hiding techniques for digital images can be applied in four different domains: the spatial domain [1]-[7], the frequency domain [8]-[10], the compressed domain [11], [12], and the encryption domain [13]-[23].
In view of the rapid development of cloud computing and storage services, many data hiding techniques for encrypted images have been proposed. An RDHEI scheme involves three participants, namely, the image owner, the data hider, and the receiver [13]-[16]. As shown in Figure 1, the image owner encrypts the original image using the encryption key and sends it to cloud storage. The cloud service provider plays the role of a data hider, who embeds the secret data into the encrypted image without knowing its content. After receiving the marked encrypted image, the receiver can restore the original cover image, extract the secret data, or do both according to his/her specific authority.
Depending on the approach of vacating the spare room for data embedding, the existing RDHEI schemes can be divided into three categories: (1) vacating room after encryption (VRAE) [13], [17]; (2) vacating room by encryption (VRBE) [18]; and (3) reserving room before encryption (RRBE) [19]-[21]. The spare space is practically vacated by exploiting the spatial redundancy of the plain image. To ensure security, a proper image encryption process has to disrupt the spatial correlation between neighboring pixels, which increases the difficulty of vacating spare room. To preserve the spatial redundancy, the VRBE schemes usually encrypt the image block-wise and thus suffer from information leakage. Different from the VRAE and VRBE methods, the RRBE schemes make full use of the correlation between adjacent pixels and vacate more space for data embedding, so these schemes can achieve a better embedding rate.
Traditional RDHEI schemes use a single secret key to encrypt both the image and the secret data, so that data extraction and image restoration are processed jointly. In modern cloud service-based schemes, image encryption and data hiding are served by different participants with different secret keys, and the two decoding processes can be executed separately. In 2012, a separable RDHEI scheme was proposed by Zhang [17]; it compresses the least significant bits (LSBs) to free spare space. Yi and Zhou [18] proposed a separable RDHEI scheme based on the parametric binary tree labeling (PBTL) method, which uses spatial correlation to predict pixel values and exploits the resulting spatial redundancy to embed secret data. Puteaux and Puech [19] proposed an RDHEI scheme based on most significant bit (MSB) substitution, which uses predictable bits to embed the secret data and restore the original image. Since the scheme in [19] substitutes only one MSB for data embedding, its embedding capacity is limited. In 2018, Puyang et al. [20] extended Puteaux et al.'s scheme to two MSBs and thus improved the embedding capacity. In 2019, Chen and Chang [21] proposed a scheme that transforms the block-based MSB planes into a bit stream and compresses the stream using run-length coding to embed data. In 2020, Wang et al. [22] proposed an adjusting pixel modulation strategy to reduce the occurrences of pixel value overflow and thus obtain more embeddable pixels. Also in 2020, an improved reversible data hiding scheme in encrypted images using parametric binary tree labeling (IPBTL-RDHEI) was proposed by Wu et al. [23], which takes advantage of the spatial correlation in the entire original image, rather than in small image blocks, to reserve room for hiding data.
An effective RDHEI scheme is usually based on an accurate prediction of pixel values followed by an efficient labeling of pixels according to their prediction errors to vacate the embedding room. The existing schemes mostly utilize the spatial correlation between neighboring pixels to vacate spare room for embedding. Our new proposed scheme is a completely different approach which exploits VQ image as a basis for prediction. Based on the accurate approximation of a VQ code-word, an image block can be predicted and labeled efficiently to free a desirable embedding space.
The PBTL proposed by Yi and Zhou [18] is adopted to record the prediction errors. In addition, the features of each image block are examined to determine the optimal parameters for labeling. Due to the accurate prediction and adaptive labeling, the proposed RDHEI scheme provides a higher embedding rate than recent related works.
The rest of this paper is organized as follows. Section 2 introduces the parametric binary tree labeling scheme and the VQ-based prediction of pixel values. The detailed procedure of the proposed scheme is presented in Section 3. Section 4 reports the experimental results of the proposed scheme. Section 5 concludes this paper.

II. RELATED WORK
In this section, the technical details of the PBTL proposed by Yi and Zhou [18] are introduced first, together with two example coding systems. Then, the VQ image compression technique is discussed, including the training of the VQ codebook, image compression, and image decoding.

A. PARAMETRIC BINARY TREE LABELING SCHEME
The parametric binary tree is a full binary tree with a structure of seven layers, as shown in Figure 2. As a full binary tree, there are 2^i nodes in the i-th layer, where i = 1, 2, ..., 7. The nodes in the i-th layer are numbered with i-bit binary codes from left to right in ascending order of binary value.
In the PBTL scheme, pixels are grouped into two categories, namely, embeddable pixels G_1 and non-embeddable pixels G_2, which are labeled by α and β bits, respectively, where 1 ≤ α, β ≤ 7. Before labeling, assume that the parameters α and β are given. The first code of the β-th layer is used to label the pixels in G_2; therefore, a zero stream of β bits is used to label the pixels that cannot be applied to embed secret data. The pixels in G_1 are labeled with n_α different binary codes of α bits, where n_α is given by

n_α = 2^α − 1, if α ≤ β;  n_α = 2^α − 2^(α−β), if α > β.  (1)

When α ≤ β, the sub-categories in G_1 are separately labeled with the 2^α − 1 codes of α bits other than the all-zero code; when α > β, the sub-categories in G_1 are separately labeled with the 2^α − 2^(α−β) codes corresponding to nodes that do not descend from the first node of the β-th layer.
Two example coding systems of the PBTL scheme, with β = 2 and 3, are listed in Tables 1 and 2, respectively. As shown in the tables, the zero streams ''00'' and ''000'' are used to label the pixels in G_2. As the number of bits α increases, more sub-categories of pixels in G_1 can be separately labeled.
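The labeling rule above can be sketched in a few lines (an illustrative Python sketch, not the authors' code): the non-embeddable label is the β-bit all-zero code, and the embeddable labels are exactly the α-bit codes that do not share the all-zero prefix with it.

```python
def pbtl_labels(alpha, beta):
    """Enumerate PBTL label codes: G2 (non-embeddable) gets the beta-bit
    all-zero code; G1 (embeddable) gets every alpha-bit code that does not
    share the all-zero prefix with the G2 label, matching Eq. (1)."""
    g2_label = "0" * beta
    prefix_len = min(alpha, beta)
    g1_labels = [format(v, f"0{alpha}b") for v in range(2 ** alpha)
                 if format(v, f"0{alpha}b")[:prefix_len] != "0" * prefix_len]
    return g1_labels, g2_label
```

For (α, β) = (4, 3) this yields 14 embeddable labels (2^4 − 2^1), and for (α, β) = (2, 2) it yields 3 labels (2^2 − 1), consistent with Eq. (1).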

B. VECTOR QUANTIZATION
VQ compression is a lossy image compression technique that encodes an image block-wise to reduce the amount of stored data. As shown in Fig. 3, the original image is first divided into blocks of size W × W. Then, for each image block, the codebook is searched for the closest codeword and its index is recorded in the index table. In this way, the original image can be compressed and recorded as an index table with a corresponding codebook.
In VQ decoding, the index table is processed in raster scan order. According to the recorded indices, codewords are retrieved and tiled to constitute a decompressed image. The approximation error of the decompressed image depends largely on the length of the codebook: the more codewords it contains, the better the approximation it can achieve. Of course, the bit depth of the index table must be lengthened accordingly.
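The block-wise encoding and decoding described above can be sketched as follows (a minimal NumPy sketch, assuming a codebook stored as a (K, W·W) array; the function names are ours, not the paper's):

```python
import numpy as np

def vq_encode(img, codebook, W):
    """Map each W x W block of `img` to the index of its nearest codeword
    (Euclidean distance); returns an (M//W, N//W) index table."""
    M, N = img.shape
    table = np.empty((M // W, N // W), dtype=int)
    for r in range(M // W):
        for c in range(N // W):
            block = img[r*W:(r+1)*W, c*W:(c+1)*W].reshape(-1).astype(float)
            table[r, c] = np.argmin(((codebook - block) ** 2).sum(axis=1))
    return table

def vq_decode(table, codebook, W):
    """Tile the indexed codewords back into a decompressed image."""
    M, N = table.shape[0] * W, table.shape[1] * W
    out = np.empty((M, N), dtype=codebook.dtype)
    for r in range(table.shape[0]):
        for c in range(table.shape[1]):
            out[r*W:(r+1)*W, c*W:(c+1)*W] = codebook[table[r, c]].reshape(W, W)
    return out
```

Encoding followed by decoding reproduces each block as its nearest codeword, which is exactly the approximation the scheme later uses for prediction.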
Besides, the selection of codewords is also a key point. A typical strategy is to determine the codewords through training. Generally, four to six images are selected as training samples. The process of codebook training is as follows. Firstly, divide the sample images into m blocks of size W × W, and randomly select 2^n (2^n ≤ m) blocks as the initial centroids. Secondly, treat each block as a vector, calculate the Euclidean distances between the remaining (m − 2^n) blocks and the 2^n centroids, and group the blocks according to the shortest distance. Thirdly, recalculate the new centroids of the 2^n groups. Finally, repeat the previous process until the recalculated 2^n centroids are stable. A proper codebook with a sufficient index length can produce good approximations of a wide variety of images.
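The training steps above amount to k-means (LBG-style) clustering; the following is a simplified sketch under our own conventions (fixed iteration count instead of a stability test, blocks already flattened to vectors):

```python
import numpy as np

def train_codebook(blocks, n_bits, iters=20, seed=0):
    """Train a VQ codebook of 2**n_bits codewords from sample blocks.
    `blocks` is an (m, W*W) array of flattened training blocks."""
    rng = np.random.default_rng(seed)
    k = 2 ** n_bits
    # randomly pick 2**n_bits blocks as initial centroids
    centroids = blocks[rng.choice(len(blocks), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # group every block with its nearest centroid (Euclidean distance)
        d = np.linalg.norm(blocks[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old centroid if a group is empty
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = blocks[labels == j].mean(axis=0)
    return centroids
```

A production codebook trainer would also handle empty-cell splitting and convergence checks; this sketch only mirrors the four steps listed in the text.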

III. PROPOSED SCHEME
The proposed VQPBTL-RDHEI adopts VQ encoding and the PBTL technique to predict pixel values and record prediction errors, so that spare space can be obtained to embed secret messages. As shown in Fig. 4, our scheme is composed of three parts: image encryption, data hiding, and data extraction with image recovery. In the first part, the image owner encrypts the image blocks with an encryption key and predicts each image block based on its VQ codeword. According to the prediction error, PBTL is applied to label the error type and mark the available embedding space. In the second part, the data hider embeds the secret data with the data hiding key. In the last part, the receiver can extract the secret data with the data hiding key or recover the original image with the encryption key.

A. VQ PREDICTION AND PIXEL LABELING
In this subsection, the VQ-based pixel value prediction method is proposed. Based on the prediction, PBTL can be applied to label the image blocks. An algorithm is proposed to find the optimal parameter set for a given cover image. With the optimal parameter set, a corresponding block-labeled image can be produced by applying the proposed adaptive PBTL technique.

Algorithm 1 (partial listing)
1: Decompress T_I into I_VQ and obtain the bit depth n of T_I.
7: Calculate the embedding capacity EC_j.
10: End

1) VQ-BASED PIXEL VALUE PREDICTION
As mentioned above, VQ compression is a lossy image compression technique. The performance of a VQ decompressed image is related to the codebook length. With more codewords, the VQ approximation can be closer to the original image and thus it can be applied as a tool for pixel value prediction. However, a larger codebook requires a longer code for indexing. A proper choice of index length can provide an efficient and effective prediction of the original image.
Suppose the applied bit depth of the VQ index is n (n > 8), and the original image I of size M × N is divided into blocks of size W × W. After VQ encoding, an index table T_I with bit depth n and size M/W × N/W is obtained. By referring to the codebook U, the index table T_I can be decompressed back into an approximate image. An example is given in Fig. 5, where (a) is the test image 'Lena' and (b) is the distribution of the prediction error with n = 14. The prediction error E_I is calculated by

E_I(i, j) = I(i, j) − I_VQ(i, j),  (2)

where i and j represent the pixel coordinates, and I and I_VQ represent the original and the decompressed images, respectively. The distribution of the prediction error is highly concentrated around the origin, which means that the VQ image I_VQ provides a good prediction of the original image I.
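The prediction error of Eq. (2) is a plain per-pixel signed difference; a minimal sketch (the function name is ours):

```python
import numpy as np

def prediction_error(img, img_vq):
    """Eq. (2): E_I(i, j) = I(i, j) - I_VQ(i, j), computed on signed
    integers so that negative errors are preserved for 8-bit inputs."""
    return img.astype(np.int16) - img_vq.astype(np.int16)
```

The histogram of this array is what Fig. 5(b) plots; the sharper its peak at zero, the more pixels the labeling stage can mark as embeddable.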

2) OPTIMAL PARAMETER SET FOR ADAPTIVE PBTL
To vacate room for data embedding, VQ prediction and PBTL cooperate to store the information of the image blocks. Depending on the features of an image block, different parameter settings of (α, β) vacate different volumes of space. To maximize the vacated room, an adaptive PBTL scheme is proposed to label the cover image.
For each image I, an optimal set of four parameter pairs G*_I = {(α*(t), β*(t)) | t = 0, 1, 2, 3} is applied to label its sub-blocks. That is, four optimal parameter pairs are used to best represent a given image block-wise, and each block is represented by the most suitable pair within the set. Based on our investigation, the parameter α dominates the performance. For efficiency, the parameter β is set equal to α when searching for the optimal combination of α values. According to the data structure of digital images, the optimal set of α values is searched in the range [2, 7]. The corresponding value of β for each α value in the optimal set is fine-tuned subsequently.
The algorithm exhaustively searches the fifteen combinations of four out of the six α values to find the optimal set. For each combination, the best-fitted parameter value within the combination is used to label each block of the given image and to accumulate the spare space. The combination of parameter values that achieves the greatest accumulated space is chosen as the optimal set for the current image and sent to the fine-tuning phase.
To count the spare space of an image block, the prediction error E_B is calculated first by

E_B(i, j) = B(i, j) − B_VQ(i, j),  (3)

where B and B_VQ represent the image block and its VQ approximation, respectively. When the parameter pair (α, β) is chosen to encode the block, the number of embeddable labels is n_α, as given in Eq. (1). These labels are applied to represent the n_α prediction errors within a range centered at zero. For pixels whose prediction errors fall within this range, each pixel value can be labeled with α bits, vacating (8 − α) bits. Conversely, each out-of-range pixel value must be marked non-embeddable with β additional bits. The algorithm for determining the maximal embedding capacity of an image with its corresponding optimal parameter set is summarized as follows.
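As a concrete sketch of the per-block capacity counting (not the authors' Algorithm 1; the exact representable error range did not survive in the source, so a zero-centered interval of n_α values is assumed here):

```python
def block_capacity(errors, alpha, beta):
    """Net bits vacated by labeling one block with the pair (alpha, beta).
    Embeddable pixels (error inside the representable range) free
    (8 - alpha) bits each; out-of-range pixels cost beta label bits each.
    The zero-centered range below is an assumption, not the paper's Eq."""
    n_alpha = 2 ** alpha - 1 if alpha <= beta else 2 ** alpha - 2 ** (alpha - beta)
    lo, hi = -(n_alpha // 2), n_alpha - n_alpha // 2 - 1  # n_alpha values total
    emb = sum(lo <= e <= hi for e in errors)
    return emb * (8 - alpha) - (len(errors) - emb) * beta
```

Running this for every candidate (α, β) over every block, and summing the best result per block, gives the accumulated space that the parameter search maximizes.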

3) ADAPTIVE PBTL
For a cover image I, we use its corresponding VQ index table T_I with bit depth n and the optimal parameter set G*_I = {(α*(t), β*(t)) | t = 0, 1, 2, 3} to produce a block-labeled image I_L. The production process is executed block-wise in raster scan order.
For each image block B_k, the MSBs of the first n pixels are reserved for recording its encrypted VQ codeword. The embedding capacities EC(t) corresponding to the four parameter pairs are counted, and the pair (α*(t_k), β*(t_k)) with the maximal capacity EC(t_k) is applied to label the block. The 2-bit binary code (t_k)_2 is recorded in the MSBs of the (n + 1)-th and (n + 2)-th pixels. For each pixel in the block, its prediction error is checked to determine its embeddability. An embeddable pixel is labeled with a prediction error in its α*(t_k) LSBs, while a non-embeddable pixel is labeled with β*(t_k) zeros in its LSBs.
To record the optimal parameter set G*_I, the first image block is processed with the fixed parameter pair (α, β) = (4, 3), which provides acceptable embedding capacity for image blocks with various features. Each parameter of the optimal set is encoded with 3 bits; therefore, L_p = (4 pairs × 2 parameters) × 3 bits = 24 bits in total are recorded in the first block B_1. Recording of the index t is not required for this first block. Starting from the second block B_2, the adaptive PBTL described above is applied to the remaining blocks. In very rare cases, the embeddable pixels in the first block are insufficient for recording the parameters; in such cases, the fixed-parameter labeling continues until the whole parameter set is recorded.
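The 24-bit header arithmetic above can be made concrete with a small packing sketch (our own helper names, assuming each parameter is a value in 1..7):

```python
def encode_parameter_set(pairs):
    """Pack the four (alpha, beta) pairs of G*_I into the 24-bit header
    stored in the first block: each parameter takes 3 bits."""
    return "".join(format(a, "03b") + format(b, "03b") for a, b in pairs)

def decode_parameter_set(bits):
    """Inverse of encode_parameter_set: split 24 bits into four pairs."""
    vals = [int(bits[i:i + 3], 2) for i in range(0, 24, 3)]
    return list(zip(vals[0::2], vals[1::2]))
```

The receiver side runs the decoder on the first 24 embeddable bits of the parametric block to recover G*_I before decoding any adaptive block.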
An example of adaptive PBTL is shown in Fig. 6. A parametric block, which is usually the first block, is labeled as illustrated. The fixed parameter pair (α, β) = (4, 3) is applied, where the β value '000' is used to label the non-embeddable pixels and four bits of the α value are used to label the embeddable pixels according to the mapping table given in the figure. The fourteen enclosed dash bits are reserved for recording the encrypted VQ index; no marker bit is required for them. The free dash bits are the embeddable bits. As shown in the figure, the first 24 free bits are used to record the parameter set. In the lower portion of the figure, an adaptive block (which can be a metadata block or a secret data block) with (3, 2) labeling is illustrated. Here, embeddable pixels are labeled with three bits of the α value according to the mapping table, while non-embeddable pixels are labeled with the β value '00.' Two additional marker bits '00' are recorded to indicate the usage of the parameter pair (3, 2). Again, the embeddable bits are denoted with '-' while the non-embeddable bits are denoted with 'x.' The algorithm is summarized as follows.

Algorithm 2 Production of Block-Labeled Image (partial listing)
Phase 1: parametric block labeling.
1: Decompress T_I into I_VQ and obtain the bit depth n.
2: Initialize I_L, sized as I.
5: Record the 24-bit binary representation of the optimal parameters to the leading embeddable space.
Phase 2: adaptive block labeling.
9: Calculate the embedding capacity for each parameter pair.

Algorithm 3 Image Encryption (partial listing)
Input: block-labeled image I_L, index table T_I, encryption key K_e.
Output: encrypted image with labels I_E.
2: Stream cipher T_I with the encryption key K_e to obtain the encrypted index table.
3: Copy I_L to I_E.
7: Encrypt the VQ index and the non-embeddable pixel values as illustrated in Fig. 7.
8: Record the encrypted VQ index and the MSBs in-block; record the LSBs to S_meta.
11: Record the auxiliary stream S_meta sequentially into the embeddable space immediately after the parameter bits.
12: Fill the remaining embeddable space with a randomly generated bit stream.

B. IMAGE ENCRYPTION AND DATA HIDING
After adaptive PBTL, the block-labeled image is ready for recording the encrypted image and embedding the secret data. In our scheme, image encryption and data hiding are designed to be separable processes with different secret keys.

1) IMAGE ENCRYPTION
The content owner uses the encryption key K_e to encrypt the VQ index and the non-embeddable pixel values of each image block. The encrypted VQ index and the MSBs of non-embeddable pixels are recorded in-block, while the LSBs of non-embeddable pixels are queued in the auxiliary binary stream S_meta and recorded to the embeddable space with the first priority. The encryption process of the first parametric block is illustrated in Fig. 7. The VQ index and non-embeddable pixel values are stream ciphered with a key stream generated using the encryption key K_e. Then, the encrypted VQ index is recorded to the predefined reserved space. The four non-embeddable bits of a β-labeled pixel are recorded with the MSBs of the encrypted pixel value, and the LSBs of the encrypted pixel value are recorded to the auxiliary stream S_meta consecutively. After all blocks are processed in the same way, the auxiliary stream is recorded to the embeddable space with the first priority; as shown in the figure, it is recorded immediately after the parameter set. Finally, the remaining embeddable space is filled with a random bit stream to complete the whole encrypted image. The image encryption algorithm is summarized in Algorithm 3.
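The stream ciphering used here is a bitwise XOR with a key-derived keystream. The paper does not specify the keystream generator, so the sketch below assumes a SHA-256 counter-mode generator purely for illustration:

```python
import hashlib

def stream_cipher_bits(bits, key):
    """XOR a bit string ('0'/'1' characters) with a keystream derived from
    `key` (bytes). Applying the function twice restores the input, which is
    the property the separable decryption/extraction relies on."""
    keystream, counter = b"", 0
    while len(keystream) * 8 < len(bits):
        keystream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    ks_bits = "".join(f"{byte:08b}" for byte in keystream)[:len(bits)]
    return "".join(str(int(a) ^ int(b)) for a, b in zip(bits, ks_bits))
```

Because XOR with a fixed keystream is an involution, the same routine serves both encryption (by the owner with K_e, or the hider with K_h) and decryption on the receiver side.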

2) DATA EMBEDDING
Upon receiving the encrypted image with labels, the data hider can embed secret data according to the labeling rules without knowing any information about the original image. The encrypted image I_E is divided into blocks again. Firstly, the labels of the parametric block are decoded according to the fixed parameter pair (α, β) = (4, 3). Then, the optimal parameter set is retrieved from the leading embeddable space. Based on the optimal parameter set, the labels of the following blocks can be decoded, the β-labeled pixels can be identified, and the bits required for recording LSBs can be accumulated. The total accumulated length L_m is the length of the auxiliary stream S_meta. Skipping the embeddable space (L_p + L_m) occupied by the optimal parameter set G*_I and the auxiliary stream S_meta, the rest of the embeddable space can be exploited to embed the secret data. The data hider encrypts the secret data S_secret by stream ciphering with a key stream generated using the data hiding key K_h. Then, the encrypted secret stream is sequentially embedded into the embeddable space. The algorithm for data embedding is summarized below.

Algorithm 4 Data Extraction (partial listing)
Input: marked encrypted image I_M, data hiding key K_h.
Output: secret data S_secret.
4: Decode the labels according to the parameter set.
5: Accumulate the auxiliary space required for each β-labeled pixel.
6: Retrieve the embedded data in each α-labeled pixel and queue it to the stream Ŝ_secret.
7: End
8: Skip the leading segment of length L_p + L_m in Ŝ_secret.
9: Stream cipher Ŝ_secret to obtain S_secret using the data hiding key K_h.

Algorithm 5 Cover Image Restoration (partial listing)
Input: marked encrypted image I_M, image encryption key K_e.
Output: cover image I.
4: Decode the labels according to the parameter set.
6: Retrieve the recorded pixel data in the non-embeddable bits of the β-labeled pixels.
7: Accumulate the required metadata space of the β-labeled pixels to L_m.
8: Retrieve the encrypted VQ index recorded in the specified space.
9: End
10: Skip the embeddable space of length L_p.
11: Retrieve the recorded metadata of length L_m.
12: Decrypt the VQ index, pixel data, and metadata in the same order as encryption using the encryption key K_e.
13: Reconstruct each image block from the indexed VQ codeword.
14: Replace the non-embeddable pixel values with their corresponding decrypted values.
15: Combine the reconstructed image blocks to output I.

C. DATA EXTRACTION AND IMAGE RECOVERY
The extraction of secret data and the recovery of cover image from the marked encrypted image are processed in the reverse order of embedding and encryption. Based on the framework of our scheme, the two processes can be executed separately. Details are described below.

1) DATA EXTRACTION
The receiver authorized to hold the data hiding key K_h can extract the secret data from the marked encrypted image without knowing anything about the cover image. Firstly, divide the marked encrypted image into mutually exclusive blocks. Then, apply the fixed parameter pair (α, β) = (4, 3) to decode the labels of the parametric blocks and retrieve the optimal parameter set. According to the optimal parameter set, determine the embeddable space and the required metadata space of the whole image, and retrieve all the embedded data. By skipping the parameter set and the auxiliary metadata, the encrypted secret stream can be obtained. Using the data hiding key, the embedded secret data can be decoded through stream ciphering. The algorithm is summarized in Algorithm 4.

Algorithm for Data Embedding (partial listing)
4: Decode the labels and identify the β-labeled pixels.
5: Accumulate the auxiliary space required for each β-labeled pixel.
6: End
7: Calculate the occupied embeddable space L_p + L_m.
Phase 2: Secret data embedding.
8: Stream cipher the secret data S_secret into Ŝ_secret using the data hiding key K_h.
11: Embed the secret data Ŝ_secret into the rest of the embeddable space.
12: End
13: Output I_M.

2) COVER IMAGE RESTORATION
The receiver authorized to hold the image encryption key K_e can perfectly reconstruct the cover image without knowing anything about the embedded secret data. Firstly, divide the marked encrypted image into blocks. Then, apply the fixed parameter pair (α, β) = (4, 3) to decode the labels of the parametric blocks and retrieve the optimal parameter set. According to the optimal parameter set, determine the embeddable space of each block, identify the β-labeled pixels, retrieve the recorded pixel data in their non-embeddable bits, and accumulate the required metadata space. Skip the embeddable space L_p occupied by the optimal parameter set, then consecutively retrieve the required amount of recorded metadata. For each block, retrieve the encrypted VQ index from the specified recording space. After obtaining all the required information, decipher the VQ index, pixel data, and metadata in the same order as encryption using the encryption key K_e. Finally, reconstruct each image block from the indexed VQ codeword and replace the non-embeddable pixel values with their corresponding decrypted values. The image restoration algorithm is summarized in Algorithm 5.

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS
In this section, experiments are conducted to evaluate the performance of the proposed VQPBTL-RDHEI scheme. Five standard 512 × 512 gray-scale images are used as test images: Airplane, Lena, Peppers, Baboon, and Man, as shown in Fig. 8. In addition, the proposed scheme is applied to the databases BOWS-2 [24] and BOSSbase [25] in order to examine its applicability to images of various kinds. Two samples of our experimental results are shown in Fig. 9, where 9(a) and (d) are the cover images, 9(b) and (e) are their encrypted versions, and 9(c) and (f) are the corresponding marked (secret data embedded) encrypted images. The bit depth of the index table for VQ compression is n = 14. As expected, the encrypted images and the marked encrypted images appear completely random; thus, it is impossible to deduce any information about the cover images or secret messages from them.
The optimal parameter sets for the five test images and their corresponding numbers of fitted blocks are listed in Table 3. Diverse parameter sets were obtained, fitting the various test images.
It can be observed that for smooth images, the optimal values of the parameter α are a collection of low values, while for complex images, the optimal choices are a collection of high values. These results indicate that the adaptive PBTL is effective. Fig. 10 shows the pixel-value distributions of two sample experiments, including the distributions of the cover images, encrypted images, and marked encrypted images, respectively. The distributions of the encrypted images and the marked encrypted images are completely random, so there is no clue for predicting their corresponding cover images or embedded secrets.

A. SECURITY ANALYSIS
Two metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are used to evaluate the similarity between two images. Their formulas are given by (17) and (18), respectively:

PSNR = 10 · log10( 255^2 / MSE ),  MSE = (1 / (H × W)) Σ_{m,n} ( P(m, n) − P′(m, n) )^2,  (17)

SSIM = [ (2 µ_P µ_P′ + c_1)(2 σ_PP′ + c_2) ] / [ (µ_P^2 + µ_P′^2 + c_1)(σ_P^2 + σ_P′^2 + c_2) ].  (18)

In (17), P and P′ represent the cover image and the stego image, respectively, H and W represent their height and width, and (m, n) represents the coordinates of a pixel. In (18), µ_P and µ_P′ are the averages of P and P′, σ_P^2 and σ_P′^2 are their variances, and σ_PP′ is the covariance of P and P′; the constants are c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2, where L is the dynamic range of the pixel values, k_1 = 0.01, and k_2 = 0.03. Table 4 lists the values of PSNR and SSIM for the encrypted images and marked encrypted images. As shown in the table, the PSNR values are less than 10 dB and the SSIM values are very close to 0 for both the encrypted images and the marked encrypted images, which means that the proposed scheme prevents leakage of the content of the cover images and the embedded secrets.
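Both metrics can be computed directly from their definitions; the sketch below assumes 8-bit images and uses the single-window (whole-image) SSIM of Eq. (18) rather than the more common sliding-window average:

```python
import numpy as np

def psnr(p, q):
    """Eq. (17): PSNR between two 8-bit grayscale images of equal size."""
    mse = np.mean((p.astype(float) - q.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(p, q, L=255.0, k1=0.01, k2=0.03):
    """Eq. (18): single-window SSIM with c1 = (k1*L)^2 and c2 = (k2*L)^2."""
    p = p.astype(float); q = q.astype(float)
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov = ((p - mu_p) * (q - mu_q)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_p * mu_q + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2))
```

Low PSNR and near-zero SSIM between a cover image and its (marked) encrypted version indicate that the encryption leaves no structural trace of the original content.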
FIGURE 11. Comparison of embedding rates of test images between our scheme and five state-of-the-art schemes.

FIGURE 12. Comparison of average embedding rates of two databases between our scheme and five state-of-the-art schemes.

B. EMBEDDING CAPACITY
The embedding capacity is measured by the embedding rate in bpp (bits per pixel). Table 5 lists the embedding rates of the test images Airplane, Lena, Peppers, Baboon, and Man when the bit depth n of VQ compression is set to 12, 13, and 14, respectively. As the bit depth n increases, the VQ prediction becomes more accurate and thus more embeddable space can be obtained. Although additional bits are required to record a longer index code, the embedding rate still increases with the code length of the VQ index. Besides, images with less fine texture achieve higher embedding rates; Airplane and Lena can embed 3.3983 bpp and 3.1419 bpp, respectively.

C. COMPARISONS
In this subsection, the proposed scheme is compared with several state-of-the-art schemes. Fig. 11 shows the embedding rates of our scheme and the six related works proposed in [18]-[23]. The embedding rate of the proposed scheme outperforms the related works in all cases except for the cover image 'Man.' However, our embedding rate exceeds 2 bpp in all five cases, which is not achieved by any other scheme. For all these RDHEI schemes, the embedding rate depends greatly on the compressibility of the given cover image.
The experimental results for the two commonly applied image databases BOWS-2 and BOSSBase are also provided to examine the applicability of our proposed scheme. The average embedding rates over the two databases for our scheme and the six state-of-the-art schemes are plotted in Fig. 12. It can be seen that our scheme again performs excellently: 3.4863 bpp is achieved for the BOWS-2 database and 3.1205 bpp for the BOSSBase database, while the embedding rates of all the other schemes are less than 3 bpp.

V. CONCLUSION
By leveraging the accuracy of VQ prediction and the efficiency of PBTL, we propose an RDHEI scheme which can achieve a high embedding capacity for a variety of cover images. In addition, an adaptive mechanism is proposed to further improve the efficiency of pixel labeling. Based on a set of four pre-searched optimal parameter pairs, the adaptive PBTL always applies the best-fitting parameter pair when labeling each image block.
In comparison with state-of-the-art schemes, our scheme has outstanding performance in embedding capacity. As demonstrated in the experiments, the embedding rate of our scheme reaches 3.4863 bpp for the BOWS-2 image database and 3.1205 bpp for the BOSSBase image database.
Security of the proposed scheme is examined with two different metrics. An additional benefit of our scheme is that the secret communication and the cover image transmission are completely separable through secret key management. In other words, the cover image can be perfectly restored even if the receiver only holds the image encryption key.
JI-HWEI HORNG received the B.S. degree from the Department of Electronic Engineering, Tamkang University, Taipei, Taiwan, in 1990, and the M.S. and Ph.D. degrees from the Department of Electrical Engineering, National Taiwan University, Taipei, in 1992 and 1996, respectively. He was a Professor and the Chairman of the Department of Electronic Engineering, from 2006 to 2009, and the Dean of the College of Science and Engineering, National Quemoy University (NQU), Kinmen, Taiwan, from 2011 to 2014. He is currently the Vice President of Academic Affairs with NQU. His research interests include image processing, pattern recognition, information security, and artificial intelligence.
CHIN-CHEN CHANG (Fellow, IEEE) received the B.S. degree in applied mathematics, the M.S. degree in computer and decision sciences from National Tsing Hua University, and the Ph.D. degree in computer engineering from National Chiao Tung University. From 1989 to 2005, he was with National Chung Cheng University. He was an Associate Professor with Chiao Tung University, a Professor with National Chung Hsing University, and a Chair Professor with National Chung Cheng University. He has also been a Visiting Researcher with Tokyo University, Japan, and a Visiting Scientist with Kyoto University, Japan. He is currently the Chair Professor with the Department of Information Engineering and Computer Science, Feng Chia University. During his service in Chung Cheng, he served as the Chairman of the Institute of Computer Science and Information Engineering, the Dean of College of Engineering, Provost, and the Acting President of Chung Cheng University and the Director of Advisory Office in Ministry of Education, Taiwan. On numerous occasions, he has been invited to serve as a Visiting Professor, Chair Professor, Honorary Professor, Honorary Director, Honorary Chairman, Distinguished Alumnus, Distinguished Researcher, and Research Fellow by various universities and research institutes. His current research interests include database design, computer cryptography, image compression, and data structures. He has received many research awards and has honorary positions in prestigious organizations both nationally and internationally. Since his early years of career development, he has consecutively received awards, including the Outstanding Talent