
2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing

Prediction-Based Reversible Data Hiding with Content Characteristics

Hsiang-Cheh Huang
Department of Electrical Engineering
National University of Kaohsiung
Kaohsiung 811, Taiwan, R.O.C.

Chuan-Chang Lin
Department of Electrical Engineering
National University of Kaohsiung
Kaohsiung 811, Taiwan, R.O.C.

Feng-Cheng Chang
Department of Innovative Information and Technology
Tamkang University
Ilan 262, Taiwan, R.O.C.

Abstract—Reversible data hiding is one of the popular topics in watermarking research, and it belongs to the branch of digital rights management (DRM) applications. As in conventional watermarking techniques, for reversible data hiding, secret information can be embedded into an original image at the encoder to obtain the marked image. Different from conventional watermarking, the term 'reversible' implies that at the decoder, both the original image and the embedded secret should be perfectly separated from the marked image. The proposed algorithm must keep this reversibility while looking for good image quality and a large amount of secret for embedding. With the prediction-based algorithm, an output image can be predicted, and the differences between the original and predicted images can be altered to make reversible data hiding possible. We also utilize inherent characteristics of original images to reach better performance. Simulation results reveal that with our algorithm, comparable or better performance can be observed due to the characteristics of different images.

Keywords-reversible data hiding; image quality; capacity; prediction; difference histogram

I. INTRODUCTION

Reversible data hiding is one of the recently developed topics in watermarking and digital rights management (DRM) research [1]. For conventional watermarking or data hiding techniques, at the encoder, secret data can be embedded into multimedia contents, images in most cases, and marked images are delivered to the decoder. At the decoder, only the embedded secret should be extracted for ownership or copyright protection, and the marked images are discarded. Performance assessments include the marked image quality, the size of embedded secret data, and the robustness of the watermarking algorithm [2][3]. Similar to conventional watermarking techniques, reversible data hiding follows identical concepts at the encoder. At the decoder, due to the term 'reversible', reversibility should be retained: the original image and the embedded secret are required to be perfectly separated from the marked image. Therefore, for assessing the performance of reversible data hiding algorithms, the marked image quality, the size of embedded secret data (or capacity), and the side information for perfect separation of original image and secret at the decoder should be considered.

In this paper, we take the inherent characteristics of original images into consideration [4], and propose an effective means of reversible data hiding with a prediction-based method, in order to look for better performance than conventional methods. By adaptively adjusting the parameters for prediction, and improving the ways of handling the difference values between original and predicted images, a larger amount of embedded secret can be reached with similar quality of output images.

This paper is organized as follows. In Sec. II, we briefly describe the concepts of conventional algorithms in reversible data hiding. In Sec. III, we present the proposed algorithm with the prediction-based method and inherent characteristics of original images. Simulation results are demonstrated in Sec. IV, which reveal comparable or better performance with our algorithm. Finally, we conclude this paper in Sec. V.

II. BRIEF DESCRIPTIONS FOR CONVENTIONAL ALGORITHMS

Reversible data hiding algorithms can roughly be classified into three major categories for practical implementations. The first type intentionally modifies the histogram of the original image, and is called the histogram-based method [5]. The second type adjusts the difference value between adjacent pixels, and is named the difference-expansion-based method [6]. Finally, the third type, with the aid of inherent characteristics of the original image, predicts the output image, and intentionally modifies the difference between input and predicted images to make data hiding possible. This type integrates the advantages of the first two types of methods, and is known as the prediction-based method [7]. We briefly describe the implementations, advantages, and drawbacks as follows.

A. Concepts of the Histogram-Based Schemes

The histogram-based scheme is famous for its ease of implementation. By slightly modifying the histogram, secret data can be reversibly embedded with the following steps.

978-0-7695-5120-3/13 $26.00 © 2013 IEEE


DOI 10.1109/IIH-MSP.2013.12
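As a concrete preview of the embedding steps listed next, the embed/extract cycle of the histogram-based scheme [5] can be sketched in Python. This is a minimal sketch, assuming an 8-bit grayscale image in which a zero point b > a+1 actually exists above the peak a; the function names are illustrative, not from [5].

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting embedding, following Steps 1-5 of Sec. II.A."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    a = int(np.argmax(hist))                  # peak point
    zeros = np.flatnonzero(hist == 0)
    b = int(zeros[zeros > a][0])              # zero point, b > a (assumed to exist)
    out = img.copy()
    out[(img >= a) & (img < b)] += 1          # shift [a, b-1] right; bin a empties
    flat = out.ravel()
    idx = np.flatnonzero(flat == a + 1)       # former peak pixels, raster order
    for k, bit in enumerate(bits):            # capacity = number of peak pixels
        if bit == 0:
            flat[idx[k]] = a                  # bit 0: move (a+1) back to a
        # bit 1: keep the pixel at (a+1)
    return flat.reshape(img.shape), a, b

def hs_extract(marked, a, b):
    """Recover the original image and the embedded bits (reverse of Steps 1-5)."""
    flat = marked.astype(np.int32).ravel()
    bits = [0 if v == a else 1 for v in flat if v in (a, a + 1)]
    rec = flat.copy()
    rec[(flat > a) & (flat <= b)] -= 1        # shift [a+1, b] back to [a, b-1]
    return rec.reshape(marked.shape), bits
```

Because no pixel changes by more than 1, the MSE is at most 1 and the PSNR is at least 20·log10(255) ≈ 48.13 dB, which matches the quality guarantee cited for [5] below.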
Step 1. Generate the histogram of the original image.
Step 2. Find the peak of the histogram, with the luminance value a. Find a luminance b with no occurrence, called the zero point. Without loss of generality, we set b > a.
Step 3. Move the portion of the histogram between a and b to the right by one. The original luminance at the peak a becomes empty intentionally.
Step 4. For original luminance of 0 or 255, the spatial coordinates should be recorded in a location map to prevent overflow.
Step 5. For embedding bit 0, move the luminance from (a+1) to a. For embedding bit 1, keep it at (a+1).

At the decoder, the reverse operations can be performed to obtain the original image and the embedded secret. Only the values of a and b, and the location map for preventing overflow when necessary, need to be provided as the side information for decoding.

The histogram-based scheme is famous for its guaranteed quality of at least 48.13 dB [5] and its ease of implementation. But limited capacity, which is associated with the peak of the histogram, is the major drawback of this kind of scheme.

B. Concepts of the Difference-Expansion-Based Schemes

Considering the local characteristics of original images, the difference-expansion-based (DE) schemes were devised. For the DE-based scheme, two neighboring pixels are grouped as a pair. First, the average of the two is kept the same. Then, for data embedding, the difference between the two is doubled, and one secret bit can be embedded. Embedding one bit into two pixels leads to a capacity of 0.5 bpp. This is the reason for the name "difference expansion" [6].

When modifying the difference values, overflow may be observed for some pairs, and such locations are unsuitable for data embedding. The coordinates of such locations are grouped together to form the location map for the decoder. In order to keep reversibility, the location map is required; however, it reduces the effective capacity of the DE-based schemes. Compared to the histogram-based schemes, DE-based schemes lead to much larger capacities in general, while the output qualities depend on the characteristics of original images.

C. Concepts of the Prediction-Based Schemes

By combining the advantages in Sec. II.A and Sec. II.B, researchers proposed methods based on prediction of images. Based on the original image, a predicted image is produced with weighted averages. The weighting factors for producing the predicted image serve as side information for the decoder. The difference between the predicted and original images is calculated, its histogram is generated, and procedures corresponding to the histogram-based scheme can be applied to make reversible data hiding possible.

With the prediction-based scheme, we would expect a larger capacity for embedding and a reasonable amount of side information for decoding. Regarding the output image quality, the weighting factors for producing the predicted image play an important role [7]. Therefore, the choice of weighting factors influences the output quality and embedding capacity of this kind of scheme.

III. PROPOSED ALGORITHM

Here we propose our algorithm by considering inherent characteristics of original images, and we expect to look for better performance than existing methods.

A. Classification of Smooth or Active Blocks

For making reversible data hiding possible, by following the concepts from conventional methods, we first split the original image, with size 512×512, into 1024 blocks with sizes of 16×16. The variance of each block can be calculated, and we can set a threshold based on the variance of the entire image for the classification of smooth or active blocks. If the variance of a block is smaller than the threshold, the block is smooth; otherwise, it is active. Here, we need a 1024-bit block map to indicate the smoothness of each block: if it is smooth, we use bit '0' in the block map; if it is active, we use bit '1'. We provide an illustration in Fig. 1 with an image of size 80×80, active blocks in shadow, smooth blocks in blank, and the corresponding 25-bit location map for ease of comprehension.

    [ 1 0 1 0 1 ]
    [ 1 1 0 0 1 ]
    [ 1 0 0 1 1 ]
    [ 1 0 1 1 0 ]
    [ 1 1 0 0 1 ]

Figure 1. Location map for classifying active or smooth blocks.

    nNW  nN  nNE
    nW   nC  nE
    nSW  nS  nSE

Figure 2. Weighting factors for image prediction. Here, n denotes the weighting factor, and the subscripts denote the directions corresponding to the center C.

B. Selection of Weighting Factors for Prediction

In Fig. 2, for the prediction of the image, we may calculate the weighted average corresponding to the pixel at the center. Therefore, the predicted pixel at the center would be the weighted average of the surrounding pixels, with carefully selected weighting factors. To express this more clearly, in Fig. 2, the weighting factor corresponding to the upper-left direction, or the northwest direction, is denoted by nNW. Following the suggestions in [7], due to raster scanning, only four pixels, or

the ones at the northwest, north, northeast, and west
directions, are included for pixel prediction. Thus, the four
weighting factors at corresponding directions are necessary
for image prediction. With Fig. 2, we represent the four
weighting factors by n = [nNW, nN, nNE, nW]. In [7], the authors
use n = [1, 2, 1, 2] for performing image prediction. We may
expect the adaptive selection of the four weighting factors,
other than those provided in [7], for obtaining better
performance for reversible data hiding.
After classification of blocks in Sec. III.A, we are going
to train the weighting factors for ns, or the factors for smooth
blocks in Fig. 1, and na, or the factors for active blocks,
denoted by shadowed blocks, in Fig. 1. The elements in ns
and na are integers between 1 and 16. By minimizing the
mean squared error (MSE) between original and predicted
images, we apply exhaustive search, with 2×16⁴ = 131072
selections for ns and na, to obtain optimized combination of
weighting factors. With the minimized MSE, there would be
higher chance to obtain high peaks near zero in the
difference histogram, and hence enhanced embedding
capacity might be expected.
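The block classification of Sec. III.A and the causal weighted prediction of Sec. III.B can be sketched together in Python. This is a minimal sketch under stated assumptions: the paper leaves the exact threshold rule, the rounding of the weighted average, and the treatment of border pixels open, so using the whole-image variance as threshold, integer (floor) averaging, and copied borders are our own choices here.

```python
import numpy as np

def classify_blocks(img, bs=16):
    """Sec. III.A: bit 0 marks a smooth block, bit 1 an active block.
    Threshold: the variance of the entire image (an assumed reading)."""
    h, w = img.shape
    thresh = img.var()
    bmap = np.zeros((h // bs, w // bs), dtype=np.uint8)
    for i in range(h // bs):
        for j in range(w // bs):
            block = img[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]
            bmap[i, j] = 0 if block.var() < thresh else 1
    return bmap

def predict(img, n=(1, 2, 1, 2)):
    """Sec. III.B: predict each pixel from its four causal neighbours with
    weights n = [nNW, nN, nNE, nW]; n = [1, 2, 1, 2] is the choice of [7].
    Border pixels are copied unchanged for simplicity."""
    nNW, nN, nNE, nW = n
    total = nNW + nN + nNE + nW
    img = img.astype(np.int64)
    pred = img.copy()
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1] - 1):
            s = (nNW * img[y - 1, x - 1] + nN * img[y - 1, x]
                 + nNE * img[y - 1, x + 1] + nW * img[y, x - 1])
            pred[y, x] = s // total          # integer weighted average
    return pred

def mse(img, n):
    """Objective for the exhaustive search over weighting factors."""
    return float(((img.astype(np.int64) - predict(img, n)) ** 2).mean())
```

The exhaustive search described above then evaluates mse over the smooth blocks for every candidate ns in {1, ..., 16}⁴, and likewise over the active blocks for na, keeping the two minimizers.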
C. Single-Bit or Multi-Bit Embedding

With the method in Sec. II.A, modification of the difference histogram is able to obtain larger capacity, because the difference values generally concentrate around zero. Following the classification in Sec. III.A, we embed two bits at a time into smooth blocks, and one bit into active blocks, based on the differences obtained with the methods trained in Sec. III.B.

For embedding one bit, we need to spare one difference value next to the peak point for data embedding. For embedding two bits, we need to spare 2² − 1, or three, difference values. More bits can be embedded following this derivation. By doing so, there may be overflow for difference values at the extremes, and a location map for such spatial coordinates should be recorded. Since degradation of output image quality may be expected, combined with the side information for the location map, we choose to embed two bits at a time in this paper.

Figure 3. The marked image Lena after embedding 305064 bits (or 1.1637 bpp), leading to the PSNR value of 31.68 dB.
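The counting argument of this subsection can be illustrated numerically. The mapping of a two-bit message onto the spared difference values is our own assumption for illustration, since the paper does not spell out the assignment:

```python
def spared_values(k):
    """Embedding k bits at a time requires emptying 2**k - 1 difference
    values next to the peak of the difference histogram (Sec. III.C)."""
    return 2 ** k - 1

def embed_two_bits(p, bits):
    """Map a peak-valued difference p to one of {p, p+1, p+2, p+3}
    according to the two message bits (one plausible assignment)."""
    return p + (2 * bits[0] + bits[1])

def extract_two_bits(d, p):
    """Recover the two bits and the original difference value p."""
    m = d - p
    return [m // 2, m % 2], p
```

With this mapping, one-bit embedding spares a single value (2¹ − 1 = 1) and two-bit embedding spares three (2² − 1 = 3); the remaining differences beyond the spared region must be shifted to make room, which is consistent with the overflow risk and quality degradation discussed above.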
IV. SIMULATION RESULTS

In our simulations, we choose two test images, Lena and F16, with picture sizes of 512×512. The secret data for reversible data hiding are randomly generated bitstreams with a user-determined seed value. We also make comparisons with related algorithms in [8] and [9].

Fig. 3 depicts the subjective evaluation for the test image Lena. For objective evaluations, 305064 bits, or 1.1637 bpp, can be embedded into the 512×512 test image, with 563 smooth blocks and 461 active blocks, respectively. For smooth blocks, the weighting factors after training are ns = [1, 8, 7, 8], while for active blocks, the trained factors are na = [1, 10, 9, 8].

Figure 4. Performance comparisons corresponding to Fig. 3, with na = [1, 10, 9, 8], and ns = [1, 8, 7, 8].

Fig. 4 demonstrates comparisons with the related methods in [8] and [9], respectively. In [8], the authors adjust thresholds for determining the smoothness of blocks adaptively, and use the weighting factors n = [1, 2, 1, 2], as suggested in [7], for both smooth and active blocks. In [9], the authors choose different sizes of square blocks, from 2×2 to 2^L × 2^L, with a user-defined integer value L, to compose the original image. The largest value of L is 5, as depicted in [9], leading to largest blocks of 32×32. Different block sizes lead to different amounts of capacity. With our algorithm, we keep the block size at 16×16, adaptively adjust the number of smooth and active blocks, and train the coefficients for data embedding. The three methods share similar concepts fundamentally, while the implementations are different. For the Lena test image, the capacities corresponding to the three methods can easily reach more than 0.5 bpp, meaning that they hold much more capacity than the DE-based

scheme. With our method, performance is better for capacities in the mid-range. When looking for larger capacities, the method in [8] performs better, while when seeking better image qualities, the method in [9] would be more suitable for reversible data hiding.

Results with F16 can be observed in Fig. 5 and Fig. 6, respectively. For objective evaluations in Fig. 5, 341201 bits, or 1.3016 bpp, can be embedded into the 512×512 test image. For smooth blocks, the weighting factors after training are ns = [1, 4, 5, 10], while for active blocks, the trained factors are na = [1, 5, 5, 11]. For objective comparisons, we observe that our method and that in [8] perform generally better than the method in [9]. It might be because there are more smooth blocks in the F16 test image, and the change of block sizes, associated with the side information for recording the block coordinates, may hardly help to improve the embedding capacity. Again, the method in [8] works well for high embedding capacities. For preserving high quality of the output image, the change of block size in [9] may have some benefits. Our method works well in the mid-range of capacities.

Figure 5. The marked image F16 after embedding 341201 bits (or 1.3016 bpp), leading to the PSNR value of 32.89 dB.

Figure 6. Performance comparisons corresponding to Fig. 5, with na = [1, 5, 5, 11], and ns = [1, 4, 5, 10].

V. CONCLUSIONS

In this paper, we presented an effective means for making reversible data hiding possible, with prediction-based schemes and the inherent characteristics of original images. With the careful selection of the smoothness of blocks, and the training of weighting factors for performing image prediction, comparable or better performance can be observed with the proposed method. The side information for decoding is comparable to that of related methods in the literature. Since other methods may work well for extremely low or high embedding capacities, techniques including the change of block sizes may be integrated into our method in the future to look for enhancements under a variety of applications.

ACKNOWLEDGMENT

The authors wish to thank the National Science Council (Taiwan, R.O.C.) for supporting this work under Grant No. NSC 102-2220-E-390-002.

REFERENCES

[1] H.C. Huang and W.C. Fang, "Metadata-based image watermarking for copyright protection," Simulation Modelling Practice and Theory, vol. 18, no. 4, pp. 436–445, Apr. 2010.
[2] C.C. Lai, H.C. Huang, and C.C. Tsai, "A digital watermarking scheme based on singular value decomposition and micro-genetic algorithm," Int'l J. of ICIC, vol. 5, no. 7, pp. 1867–1873, Jul. 2009.
[3] H.C. Huang, et al., "Tabu search based multi-watermarks embedding algorithm with multiple description coding," Information Sciences, vol. 181, no. 16, pp. 3379–3396, Aug. 2011.
[4] H.C. Huang and F.C. Chang, "Hierarchy-based reversible data hiding," Expert Systems with Applications, vol. 40, no. 1, pp. 34–43, Jan. 2013.
[5] Z. Ni, Y.Q. Shi, N. Ansari, and W. Su, "Reversible data hiding," IEEE Trans. Circuits and Systems for Video Technology, vol. 16, no. 3, pp. 354–362, Mar. 2006.
[6] J. Tian, "Reversible data embedding using a difference expansion," IEEE Trans. Circuits and Systems for Video Technology, vol. 13, no. 8, pp. 890–896, Aug. 2003.
[7] H. Luo, F.X. Yu, Z.L. Huang, H. Chen, and Z.M. Lu, "Reversible data hiding based on hybrid prediction and interleaving histogram modification with single seed pixel recovery," Signal, Image and Video Processing, Mar. 2012 (online).
[8] H.C. Huang, Y.H. Chen, F.C. Chang, and S.H. Li, "Reversible data hiding using prediction-based adaptive embedding," The 3rd Int'l Workshop on Ubiquitous Computing & Applications, paper no. 98, Hong Kong, China, 2012.
[9] C.C. Chen and Y.H. Tsai, "Adaptive reversible image watermarking scheme," Journal of Systems and Software, vol. 84, no. 3, pp. 428–434, Mar. 2011.

