
Reversible Color Image Watermarking in the YCoCg-R Color Space

Aniket Roy(1)(B), Rajat Subhra Chakraborty(1), and Ruchira Naskar(2)

(1) Secured Embedded Architecture Laboratory (SEAL), Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur 721302, West Bengal, India
[email protected], [email protected]
(2) Department of Computer Science and Engineering, National Institute of Technology, Rourkela 769008, Odisha, India
[email protected]

Abstract. Reversible image watermarking is a technique to losslessly embed and retrieve information (in the form of a watermark) in a cover image. We have proposed and implemented a reversible color image watermarking algorithm in the YCoCg-R color space, based on histogram bin shifting of prediction errors, using a weighted mean based prediction technique to predict the pixel values. The motivation for choosing the YCoCg-R color space lies in the fact that its transformation from the traditional RGB color space is reversible, with higher transform coding gain and near-optimal compression performance compared to RGB and other reversible color spaces, resulting in considerably higher embedding capacity. We demonstrate through information-theoretic analysis and experimental results that reversible watermarking in the YCoCg-R color space results in higher embedding capacity at lower distortion than RGB and several other color space representations.

1 Introduction

Digital watermarking [1] is an important technique adopted for copyright protection and authentication. Digital watermarking is the act of hiding secret information (termed a watermark) in a digital cover medium (image, audio or video), such that this information may be extracted later for authentication of the cover. However, the process of watermark embedding usually leads to distortion of the cover medium, even if it is perceptually negligible. Reversible watermarking [2,3,10] is a special class of digital watermarking whereby, after watermark extraction, both the watermark and the cover medium remain unmodified, bit by bit. In traditional reversible watermarking schemes, the watermark to be embedded is usually generated as a cryptographic hash of the cover image. Reversible watermarking is most widely used in industries dealing with highly sensitive data, such as the military, medical and legal industries, where data integrity is the major concern for users [10]. In this paper we focus on reversible watermarking algorithms for digital images.
© Springer International Publishing Switzerland 2015
S. Jajodia and C. Mazumdar (Eds.): ICISS 2015, LNCS 9478, pp. 480–498, 2015.
DOI: 10.1007/978-3-319-26961-0_28


A large number of reversible image watermarking algorithms have been proposed previously [2–4]. Most of them were developed for grayscale images. Although algorithms developed for grayscale images may trivially be modified to work with color images, the performance achieved by such trivial extension is often not satisfactory. Relatively few works have been proposed for reversible color image watermarking. Moreover, in the existing literature, almost all reversible color image watermarking algorithms [4–7] utilize the RGB color space. Tian [2] introduced difference expansion based reversible watermarking for grayscale images. Alattar used that concept for reversibly watermarking color images using difference expansion of triplets [5] and quads [6], and later formulated a generalised integer transform [4]. However, these schemes embed the watermark into the individual color components of the RGB color space. Li et al. [7] proposed a prediction error expansion based color image watermarking algorithm in which prediction accuracy is enhanced by exploiting the correlation between the color components of the RGB color space. Published literature on reversible color image watermarking in other (non-RGB) color spaces is very rare; such watermarking is what we aim to investigate in this paper.
In this paper, we propose a reversible watermarking technique specifically meant for color images, providing considerably high embedding capacity, by systematically investigating the following questions:
1. What theoretical considerations should determine the selection of a color space for reversible watermarking of color images?
2. Which color space is practically best suited in this context?
3. Are there any additional constraints on the color space needed to ensure reversibility of the watermarking scheme?
Our key observation in this paper is that if, instead of the traditional RGB color space, we choose a color space having higher transform coding gain (i.e., better compression performance), then the reversible watermarking capacity increases significantly. Moreover, better compression along color components increases the intra-correlation of individual color components. Hence, the prediction accuracy of a prediction based watermarking scheme improves, which further enhances the embedding capacity of the reversible watermarking scheme.
In this paper, we propose a reversible watermarking algorithm for color images which utilizes the YCoCg-R color space (a modification of the YCoCg color space), having higher transform coding gain and near-optimal compression performance. The transformation from RGB to YCoCg-R, and the reverse transformation from YCoCg-R to RGB, are integer-to-integer transforms which guarantee reversibility [8], and are also implementable very efficiently. The proposed algorithm is based on the principle of histogram-bin-shifting, computationally one of the simplest reversible watermarking techniques. Specifically, we use a newer and more efficient enhancement of histogram-bin-shifting, which performs histogram modification of pixel prediction errors [2,9]. In this


technique, image pixel values are predicted from their neighbourhood pixel values, and the prediction error histogram bins are shifted to embed the watermark bits. This technique provides much higher embedding capacity than traditional frequency-histogram shifting.
The rest of the paper is organised as follows. We present an information-theoretic analysis of watermark embedding capacity maximization in Sect. 2. The proposed reversible watermarking algorithm is presented in Sect. 3. Experimental results and related discussions are presented in Sect. 4. We conclude in Sect. 5 with some future research directions.

2 Principle of Embedding Capacity Maximization

Embedding capacity maximization is one of the major challenges in reversible watermarking, given the reversibility criterion. In this section, we explore two successive approaches to enhance embedding capacity:
1. Selection of a target color space offering higher watermarking performance, and
2. Selection of the watermark embedding algorithm.
2.1 Color Space Selection

We consider the selection of the color space from three perspectives: information theory, reversibility and compressibility in the transformed color space, and ease of implementation of the color space transformation. We start with a review of a relevant information-theoretic result.
Information Theoretic Justification. The following theorem is of fundamental importance:

Theorem 1 (Slepian-Wolf Coding Theorem). Given two correlated finite-alphabet random sequences X and Y, the theoretical bounds on the lossless coding rates for distributed coding of the two sources are:

R_X ≥ H(X|Y),
R_Y ≥ H(Y|X),
R_X + R_Y ≥ H(X, Y)     (1)

Thus, ideally the minimum total rate R_{X,Y} necessary for lossless encoding of the two correlated random sequences X and Y is equal to their joint entropy H(X, Y), i.e. R_{X,Y} = H(X, Y).
The significance of the above result is that for three correlated random sequences X, Y, Z, the total rate R_{X,Y,Z} = H(X, Y, Z) is sufficient for an ideal lossless encoding. The theorem extends to any finite number of correlated sources, and the same result holds for i.i.d. and ergodic processes [11].
We make the following proposition relating the choice of color space to the embedding capacity of reversible watermarking:


Fig. 1. Venn diagram to explain the impact of color space transformation on entropy
and mutual information.

Proposition 1. If the cover color image is (losslessly) converted into a different color space with higher coding gain (i.e. better compression performance) before watermark embedding, then the watermark embedding capacity in the transformed color space is greater than in the original color space.

Consider the color components of a color image to be finite discrete random variables. Let X, Y, Z be three random variables as depicted in the Venn diagram of Fig. 1, where the area of each circle (corresponding to each random variable) is proportional to its entropy, and the areas of the intersecting segments are proportional to the mutual information of the relevant random variables.
Now consider a bijective transformation T applied to the point (X, Y, Z) in the original sample space, transforming it to another point (X′, Y′, Z′) in the transformed sample space, corresponding to three random variables X′, Y′, Z′:

T : (X, Y, Z) → (X′, Y′, Z′)     (2)

such that the image in the transformed sample space has higher coding gain. Since higher coding gain implies better compression performance, each element of X′, Y′ and Z′ is a compressed version of the corresponding element of X, Y and Z respectively. Moreover, let T be invertible, lossless, and integer-to-integer.


As a consequence of the properties of the transformation T, both sample spaces are discrete and contain the same number of points. The pixel values in the transformed color space (i.e. X′, Y′ and Z′) get closer to each other, as they are compressed versions of the pixel color channel values in the original sample space (i.e. X, Y and Z). This implies that the random variables corresponding to the color channels in the transformed color space (X′, Y′ and Z′) become more correlated among themselves than those in the original sample space (X, Y and Z). Since for individual random variables higher correlation between values implies lower entropy [11], the entropies of X′, Y′ and Z′ in the transformed domain are lower than those of X, Y and Z, i.e.,

H(X′) ≤ H(X)
H(Y′) ≤ H(Y)     (3)
H(Z′) ≤ H(Z)

This is depicted in Fig. 1 by the circles corresponding to X′, Y′ and Z′ having smaller areas than those corresponding to X, Y and Z. The joint entropy of X, Y and Z, i.e. H(X, Y, Z), is depicted by the union of the three circles corresponding to X, Y and Z. Now, as the circles corresponding to X′, Y′ and Z′ have smaller areas than those corresponding to X, Y and Z, it is evident that the area of the union of the circles corresponding to X′, Y′ and Z′ (i.e., H(X′, Y′, Z′)) must be smaller than that corresponding to X, Y and Z, i.e.,

H(X′, Y′, Z′) ≤ H(X, Y, Z)     (4)

We can draw an analogy between lossless (reversible) watermarking and lossless encoding, since in reversible watermarking we have to losslessly encode the cover image into the watermarked image such that the cover image can be retrieved bit by bit. In that sense, we can apply the Slepian-Wolf theorem to estimate the embedding capacity of a reversible watermarking scheme. For lossless encoding of a color image I consisting of color channels X, Y and Z, we need a coding rate greater than or equal to H(X, Y, Z). The total size of the cover image is a constant, say N bits. Then, after an ideal lossless encoding of the image I, which can encode it in H(X, Y, Z) bits, there remain (N − H(X, Y, Z)) bits of space for auxiliary data embedding. Hence, the theoretical embedding capacities of reversible watermarking in the two color spaces are given by:

C = N − H(X, Y, Z)     (5)

and

C′ = N − H(X′, Y′, Z′)     (6)

Since H(X′, Y′, Z′) ≤ H(X, Y, Z), we can conclude that C′ ≥ C. Hence, a color space transformation T with these characteristics results in higher embedding capacity.
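As a concrete illustration of Eqs. (5)-(6), the capacity bound can be estimated empirically by treating each pixel as one (X, Y, Z) symbol and computing the joint entropy of the three channels. The sketch below is ours, not from the paper; the function names and the toy images are illustrative assumptions.

```python
import numpy as np

def joint_entropy_bits(channels):
    """Empirical joint entropy H(X, Y, Z) in bits per pixel."""
    # Each pixel becomes one (X, Y, Z) symbol across the three channels.
    symbols = np.stack([np.asarray(c).ravel() for c in channels], axis=1)
    _, counts = np.unique(symbols, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def capacity_estimate(channels, bit_depth=8):
    """C = N - H(X, Y, Z): raw size in bits minus ideal lossless code length."""
    n_pixels = np.asarray(channels[0]).size
    n_bits = n_pixels * bit_depth * len(channels)   # N, the raw cover size
    return n_bits - n_pixels * joint_entropy_bits(channels)

# Strongly correlated channels compress well, leaving more slack below N
# (i.e. more capacity) than three independent channels.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (64, 64))
c_corr = capacity_estimate([base, base, base])
c_indep = capacity_estimate([rng.integers(0, 256, (64, 64)) for _ in range(3)])
print(c_corr > c_indep)
```

This only mirrors the counting argument of the proposition; a real ideal encoder achieving H(X, Y, Z) bits is of course not constructed here.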


Compressibility in Transformed Color Space and Reversibility of Transformation. When we transform the representation of a color image from one color space to another, the transform coding gain is defined as the ratio of the arithmetic mean to the geometric mean of the variances of the variables in the transformed coordinates, scaled by the norms of the synthesis basis functions for non-unitary transformations [8]. It is usually measured in dB. Transform coding gain is a metric to estimate compression performance [8]: higher transform coding gain implies more compression among the color channels of a color image representation. In general, the Karhunen-Loeve Transform (KL Transform), Principal Component Analysis (PCA), etc. might also be used to decorrelate color channels. However, for reversible watermarking we need to choose an integer-to-integer linear transformation. If C1 = (X, Y, Z)^T denotes the color components in the original color space, and C2 = (X′, Y′, Z′)^T denotes the color components in the transformed color space after a linear transformation, then we can write C2 = T·C1, where T is the transformation matrix. Similarly, the reverse transformation is expressed as C1 = T⁻¹·C2. It is desirable that det T = 1, which is a necessary condition for optimal lossless compression performance [8].
Ease of Color Space Transformation. Color space transformation during the watermark embedding/extraction processes is a computational overhead. Another consideration that determines the selection of a candidate color space is the ease of implementation of the computations involved in the color space transformation, i.e. multiplication by the transformation matrix T. If the operations involved are only integer additions/subtractions and shifts, the color space transformation can be implemented extremely efficiently in both software and hardware.
From the discussion so far, our color space selection for performing the reversible watermarking operations is guided by the following criteria:

– Lower correlation among the color channels,
– Reversibility of the transformation from the RGB color space,
– Higher transform coding gain, and
– Ease of implementation of the transformation.

Some of the reversible color space transformations available in the literature [13,14] are described below in brief.

RCT Color Space. The Reversible Color Transform (RCT) is used for lossless color transformation in the JPEG 2000 standard [14]. It is also known as the reversible YUV color space. Its transformation equations are simple, integer-to-integer and invertible:





Y_r = ⌊(R + 2G + B)/4⌋        G = Y_r − ⌊(U_r + V_r)/4⌋
U_r = R − G                   R = U_r + G
V_r = B − G                   B = V_r + G     (7)
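The RCT pair in Eq. (7) translates directly into code; a minimal sketch (the function names are ours), where Python's floor division `//` implements the ⌊·⌋ rounding:

```python
def rgb_to_rct(r, g, b):
    """Forward JPEG 2000 RCT (Eq. 7); all operations map integers to integers."""
    yr = (r + 2 * g + b) // 4      # // is the mathematical floor, also for negatives
    ur = r - g
    vr = b - g
    return yr, ur, vr

def rct_to_rgb(yr, ur, vr):
    """Inverse RCT: recovers R, G, B exactly, bit by bit."""
    g = yr - (ur + vr) // 4
    r = ur + g
    b = vr + g
    return r, g, b

# Round-trip check on a few sample triples from the 8-bit range.
assert all(rct_to_rgb(*rgb_to_rct(r, g, b)) == (r, g, b)
           for r in (0, 17, 255) for g in (0, 128, 255) for b in (3, 255))
```

The inverse works because R + 2G + B = U_r + V_r + 4G, so the floor in Y_r differs from G by exactly ⌊(U_r + V_r)/4⌋.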


O1O2O3 Color Space. This is another color space with higher compression performance, while maintaining integer-to-integer reversibility [13]. Here, the R, G, and B color channels are transformed into the O1, O2, O3 color channels, and conversely:
 





O1 = G + ⌊O3/2 + 0.5⌋         G = O1 − ⌊O3/2 + 0.5⌋
O2 = ⌊(R − B)/2 + 0.5⌋        B = O1 − O2
O3 = B − 2G + R               R = O1 + O2 + O3 − 2⌊O3/2 + 0.5⌋     (8)
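The exact rounding conventions of [13] are hard to pin down, so the sketch below should be read as one self-consistent integer-to-integer realization of the rounded means and differences in Eq. (8), not as the definitive transform of [13]. For integer x, ⌊x/2 + 0.5⌋ equals `(x + 1) // 2` in Python:

```python
def rgb_to_o123(r, g, b):
    """One possible forward O1O2O3 transform consistent with Eq. (8)."""
    o3 = b - 2 * g + r           # sum of the two green-differences
    o2 = (r - b + 1) // 2        # floor((R - B)/2 + 0.5)
    o1 = g + (o3 + 1) // 2       # equals floor((R + B)/2 + 0.5), a luma-like mean
    return o1, o2, o3

def o123_to_rgb(o1, o2, o3):
    """Exact inverse of the sketch above."""
    g = o1 - (o3 + 1) // 2
    b = o1 - o2
    r = o3 + 2 * g - b
    return r, g, b
```

The round trip is exact because adding the integer G inside the rounding shifts it by exactly G, the same trick that makes the RCT and YCoCg-R transforms reversible.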

Our Selection: The YCoCg-R Color Space. In our case, X, Y and Z correspond to the R, G and B channels of the RGB color space, and X′, Y′ and Z′ correspond to the Y, Co and Cg channels of the YCoCg-R color space. The well-known YCoCg color space decomposes a color image into three components: Luminance (Y), Chrominance orange (Co) and Chrominance green (Cg). YCoCg-R is the integer-to-integer reversible version of YCoCg. The transformation T (RGB to YCoCg-R) and its inverse are given by [8]:
Co = R − B,
t = B + ⌊Co/2⌋,
Cg = G − t,
Y = t + ⌊Cg/2⌋     (9)

and similarly,

t = Y − ⌊Cg/2⌋,
G = Cg + t,
B = t − ⌊Co/2⌋,
R = B + Co     (10)

Notice that rolling out the above transformation equations allows us to write the direct transformation:

[Co]       [R]   [  1     0    −1 ] [R]
[Cg] = T · [G] = [ −1/2   1   −1/2] [G]     (11)
[Y ]       [B]   [  1/4  1/2   1/4] [B]

and hence det T = 1, which is desirable for achieving optimal compression ratio, as mentioned in Sect. 2.1. A close look reveals that the transformation is nothing but repeated difference expansion of the color channels.
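The transformation pair (9)-(10) translates directly into code; a minimal sketch (function names are ours), where an arithmetic right shift implements ⌊·/2⌋:

```python
def rgb_to_ycocg_r(r, g, b):
    """RGB -> YCoCg-R (Eq. 9); only integer additions/subtractions and shifts."""
    co = r - b
    t = b + (co >> 1)            # >> 1 is floor division by 2, also for negatives
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """YCoCg-R -> RGB (Eq. 10): the same steps undone in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Because each step is either a difference or an integer half of an already-computed value, the inverse undoes the steps exactly; this is the bit-by-bit reversibility the watermarking scheme relies on.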
To summarize, selection of the YCoCg-R color space has the following consequences:

– Repeated difference expansion of the color channels makes the resultant color channels less correlated in the YCoCg-R color space. It is known that the YCoCg-R representation has higher coding gain [8].


– The RGB to YCoCg-R transformation is an integer-to-integer reversible transform.
– YCoCg-R achieves close to optimal compression performance [8].
– The arithmetic operations of the transformation are simple integer additions/subtractions and shifts, and hence are extremely efficiently implementable in hardware and software.
We establish the superiority of our choice of the YCoCg-R color space over other color space representations through detailed experimental results in Sect. 4. We next discuss the impact of the embedding scheme on the embedding capacity, and justify our chosen scheme, which combines the well-known histogram-bin-shifting approach with pixel prediction techniques.
2.2 Embedding Scheme Selection for Capacity Enhancement

Ni et al. [3] introduced the histogram-bin-shifting based reversible watermarking scheme for grayscale images. In this scheme, first the statistical mode of the distribution, i.e. the most frequently occurring grayscale value, is determined from the frequency histogram of the pixel values; we call this value the peak point. Next, the pixels with grayscale value greater than the peak point are found, and their grayscale values are incremented by one. This is equivalent to right-shifting, by one unit, the frequency histogram of the pixels whose grayscale value is greater than the peak point. Generally, images from natural sources have one or more grayscale values that are absent from the image; we call these zero points. The existence of zero points ensures that the partial shift of the frequency histogram does not cause any irreversible change in the pixel values. The shift results in an empty frequency bin just next to the peak point in the image frequency histogram. Next, the whole image is scanned sequentially and the watermark is embedded into the pixels whose grayscale value equals the peak point. When the watermark bit to be embedded is 1, the watermarked pixel occupies the empty bin just next to the peak value in the histogram; when it is 0, the watermarked pixel value is left unmodified at the peak point. The embedding capacity of the scheme is limited by the number of pixels having the peak grayscale value. Figure 2 shows an example of the classical histogram-bin-shifting based watermarking scheme for an 8-bit grayscale image, where the peak point is 2 and the zero point is 7.
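The classical scheme just described can be sketched as follows, using the same example values as Fig. 2 (peak point 2, zero point 7). This is our illustration, not the paper's code; the peak and zero points are assumed to be supplied, with the zero point above the peak:

```python
import numpy as np

def hbs_embed(pixels, bits, peak, zero):
    """Histogram-bin-shifting embedding (Ni et al. [3]) on a flat 8-bit array.
    Requires zero > peak and an empty histogram bin at `zero`;
    embeds one bit per pixel equal to the peak value."""
    out = pixels.copy()
    out[(out > peak) & (out < zero)] += 1        # right-shift bins (peak, zero)
    bit_iter = iter(bits)
    for i, v in enumerate(out):
        if v == peak:
            out[i] = peak + next(bit_iter, 0)    # 1 -> empty bin, 0 -> stay
    return out

def hbs_extract(marked, peak, zero):
    """Recover the bits and restore the cover pixels losslessly."""
    bits, restored = [], marked.copy()
    for i, v in enumerate(marked):
        if v == peak:
            bits.append(0)
        elif v == peak + 1:                      # only watermarked pixels land here
            bits.append(1)
            restored[i] = peak
    restored[(restored > peak) & (restored <= zero)] -= 1   # undo the shift
    return bits, restored

cover = np.array([2, 2, 3, 4, 2, 6, 2])
marked = hbs_embed(cover, [1, 0, 1, 1], peak=2, zero=7)
bits, restored = hbs_extract(marked, peak=2, zero=7)
print(bits, np.array_equal(restored, cover))
```

Note how the capacity here is exactly the count of peak-valued pixels (four in the toy cover), which is the limitation the prediction-error variant below removes.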
To improve the embedding capacity, histogram-bin-shifting is blended with a pixel prediction method [9]. In the pixel prediction technique, some of the cover image pixel values are predicted from their neighbourhood pixel values. Such prediction gives prediction errors with respect to the original cover image. Generally, the frequency distribution of these prediction errors resembles a Laplacian distribution [9], with its peak at zero, as shown in Fig. 3. Watermark bits are embedded into the prediction errors by histogram shifting of the bins close to zero, where closeness is predefined with respect to some threshold. The bins close to zero in the prediction error histogram can be both right- and left-shifted to embed watermark bits. This two-way histogram shifting enhances


Fig. 2. Operations in the histogram-bin-shifting reversible scheme proposed by Ni et al. [3]: (a) histogram before shifting, with peak point = 2 and zero point = 7; (b) histogram after shifting the pixels; (c) histogram after watermark embedding.

the capacity of the scheme significantly, compared to the classical histogram-bin-shifting case. Embedding in the error histogram is shown in Fig. 3. During extraction, prediction errors are computed from the watermarked image, and the watermark bits are extracted from the errors. After watermark extraction, the error histogram bins are shifted back to their original positions. The retrieved errors are combined with the predicted pixel values to get back the original cover image losslessly.

3 Proposed Algorithm

Our proposed algorithm consists of the following main steps:

1. Transformation of the cover color image from the RGB color space to the YCoCg-R color space, using transformation (9).
2. Pixel prediction based watermark embedding in the YCoCg-R color space.
3. Watermark extraction and lossless retrieval of the original cover image.
4. Reconversion from the YCoCg-R color space to the RGB color space, using transformation (10).

The first and last steps have already been discussed. We now describe the remaining steps.


Fig. 3. Steps of watermark embedding using histogram shifting of prediction errors: (a) prediction error histogram; (b) histogram shifting; (c) watermark embedding.

3.1 Pixel Prediction Based Watermark Embedding

We use weighted mean based prediction [2] in the proposed scheme. In this scheme, two levels of predicted pixel values are calculated, exploiting the correlation between neighboring pixels. One out of every four pixels in the original cover image is chosen as a base pixel, as shown in Fig. 4, and the values of these pixels are used to predict the values of their neighboring pixels. The positions of the next levels of predicted pixels are also shown in Fig. 4. The neighborhood of each pixel is partitioned into two directional subsets that are orthogonal to each other. We calculate the first-level and second-level predicted pixels by interpolating the base pixels along two orthogonal directions: the 45° diagonal and the 135° diagonal, as shown in Fig. 5. The first-level predicted pixels, occupying coordinates (2i, 2j), are computed as follows:

1. First, the interpolated values along the 45° and 135° directions are calculated. Let these values be denoted p_45 and p_135, calculated as shown in Fig. 5:

p_45 = (p(i, j+1) + p(i+1, j))/2
p_135 = (p(i, j) + p(i+1, j+1))/2     (12)


Fig. 4. Locations of (a) base pixels (0s); (b) predicted first set of pixels (1s); (c) predicted second set of pixels (2s).

Fig. 5. (a) Prediction along the 45° and 135° diagonal directions; (b) prediction along the 0° (horizontal) and 90° (vertical) directions.

2. The interpolation errors corresponding to the pixel at position (2i, 2j) along the 45° and 135° directions are calculated as:

e_45(2i, 2j) = p_45 − p(2i, 2j)
e_135(2i, 2j) = p_135 − p(2i, 2j)     (13)

3. Sets S_45 and S_135 contain the neighbouring pixels of the first-level predicted pixel along the 45° and 135° directions respectively, i.e.,

S_45 = {p(i, j+1), p_45, p(i+1, j)}
S_135 = {p(i, j), p_135, p(i+1, j+1)}     (14)

4. The mean value of the base pixels around the pixel to be predicted is denoted u, and calculated as:

u = (p(i, j) + p(i+1, j) + p(i, j+1) + p(i+1, j+1))/4     (15)

5. In weighted mean based prediction, the weights of the means are calculated using the variance along each diagonal direction. The variances along the 45° and 135° directions are denoted σ(e_45) and σ(e_135), and calculated as:

σ(e_45) = (1/3) · Σ_{k=1}^{3} (S_45(k) − u)²     (16)

and

σ(e_135) = (1/3) · Σ_{k=1}^{3} (S_135(k) − u)²     (17)


6. The weights of the means along the 45° and 135° directions are denoted w_45 and w_135, and calculated as:

w_45 = σ(e_135) / (σ(e_45) + σ(e_135))
w_135 = 1 − w_45     (18)

7. We estimate the first-level predicted pixel value p′ as a weighted mean of the diagonal interpolation terms p_45 and p_135:

p′ = round(w_45 · p_45 + w_135 · p_135)     (19)

Once the first-level pixel values are predicted, the values of the second-level pixels can be computed from the base pixels and the first-level predicted pixels. A similar procedure is used, but now pixel values along the horizontal and vertical directions, i.e. the 0° and 90° directions, are used for prediction, as shown in Fig. 5. In this way, we can predict the entire image (other than the base pixels) using interpolation and weighted means of interpolated pixels along two mutually orthogonal directions.
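Steps 1-7 above, specialized to a single first-level pixel, can be sketched as follows; the function name and the `tl, tr, bl, br` argument names (the four surrounding base pixels) are ours:

```python
def predict_weighted_mean(tl, tr, bl, br):
    """Weighted-mean prediction of one first-level pixel from its four base
    pixels (Eqs. 12, 15-19). tl = p(i,j), tr = p(i,j+1),
    bl = p(i+1,j), br = p(i+1,j+1)."""
    p45 = (tr + bl) / 2.0                 # 45-degree diagonal interpolation
    p135 = (tl + br) / 2.0                # 135-degree diagonal interpolation
    u = (tl + tr + bl + br) / 4.0         # mean of the base pixels (Eq. 15)
    var45 = sum((v - u) ** 2 for v in (tr, p45, bl)) / 3.0    # Eq. 16
    var135 = sum((v - u) ** 2 for v in (tl, p135, br)) / 3.0  # Eq. 17
    if var45 + var135 == 0:
        w45 = 0.5                          # flat neighbourhood: equal weights
    else:
        w45 = var135 / (var45 + var135)    # Eq. 18: favor the steadier direction
    return int(round(w45 * p45 + (1.0 - w45) * p135))         # Eq. 19
```

The cross-weighting in Eq. (18) is the point of the method: an edge along the 45° diagonal makes var45 small and var135 large, so w45 approaches 1 and the prediction follows the edge rather than averaging across it.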
Embedding Algorithm. After the given color cover image is transformed into the YCoCg-R color space, the given watermark bits are embedded into the color channels Co, Cg and Y, in that order. We preferentially embed the watermark into the chroma components (Co and Cg), and then into the luma component (Y),
Algorithm 1. EMBED WATERMARK
/* Embed watermark bits into the prediction errors */
Input: Color cover image of size M × N pixels in YCoCg-R color space (I), Watermark bits (W), Embedding Threshold (T)
Output: Watermarked image Iwm in the YCoCg-R color space
1:  for color channels P ∈ {Co, Cg, Y} in order do
2:    if W is not empty then
3:      for i = 1 to M do
4:        for j = 1 to N do
5:          if P(i, j) is not a base pixel then
6:            P′(i, j) ← Predict_weightedmean(P(i, j))
7:            Compute prediction error eP(i, j) = P(i, j) − P′(i, j)
8:            if eP(i, j) ≥ 0 then
9:              sign(eP(i, j)) ← +1
10:           else
11:             sign(eP(i, j)) ← −1
12:           end if
13:           if |eP(i, j)| ≤ T then
14:             e′P(i, j) ← sign(eP(i, j)) · [2·|eP(i, j)| + next bit of W]
15:           else
16:             e′P(i, j) ← sign(eP(i, j)) · [|eP(i, j)| + T + 1]
17:           end if
18:           Pwm(i, j) ← P′(i, j) + e′P(i, j)
19:         else
20:           Pwm(i, j) ← P(i, j)
21:         end if
22:       end for
23:     end for
24:   end if
25: end for
26: Obtain the watermarked image Iwm by combining the watermarked color channels Ywm, Cowm and Cgwm.


to minimize the visual distortion. Moreover, as human vision is least sensitive to changes in the blue color [12], among the chroma components the Co component (mainly a combination of orange and blue) is embedded first, and then the Cg component (mainly a combination of green and violet).
In each of the color channels, we apply the weighted mean based pixel prediction technique separately. Let P(i, j) denote the value of the color channel at coordinate (i, j), with P ∈ {Co, Cg, Y}, and let P′(i, j) be the corresponding predicted value of P(i, j):

P′(i, j) ← Predict_weightedmean(P(i, j))     (20)

Then, the prediction error at the (i, j) pixel position for the P color channel is given by:

e_P(i, j) = P(i, j) − P′(i, j),  where P ∈ {Co, Cg, Y}     (21)

Next, the frequency histograms of the prediction errors are constructed. For watermark embedding, prediction errors close to zero are selected, considering a threshold T ≥ 0. Hence, the frequency histogram bins of the prediction errors in the range [−T, T] are histogram-bin-shifted to embed the watermark bits. The rest of the histogram bins are shifted away from zero by a constant amount (T + 1) to avoid any overlap of absolute error values.
For embedding watermark bits, the prediction errors e_P(i, j) are modified by histogram shifting into e′_P(i, j) according to the following equation:

e′_P(i, j) = sign(e_P(i, j)) · [2·|e_P(i, j)| + b]      if |e_P(i, j)| ≤ T
e′_P(i, j) = sign(e_P(i, j)) · [|e_P(i, j)| + T + 1]    otherwise     (22)

where b ∈ {0, 1} is the next watermark bit to be embedded, and sign(e_P(i, j)) is defined as:

sign(e_P(i, j)) = +1 if e_P(i, j) ≥ 0, and −1 otherwise     (23)

Finally, the modified prediction errors e′_P(i, j) are combined with the predicted pixels P′(i, j) in the corresponding color space to obtain the watermarked pixels P_wm(i, j):

P_wm(i, j) = P′(i, j) + e′_P(i, j)     (24)
The same procedure is applied to the three color channels (Co, Cg, Y) of the YCoCg-R color space; hence all YCoCg-R color channels are watermarked. Finally, we transform P_wm from YCoCg-R to RGB losslessly by Eq. (10) to obtain the watermarked image I_wm.
The proposed watermark embedding algorithm is presented as Algorithm 1.
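For a single prediction error, Eqs. (22)-(23) read as follows in code (a sketch; the function name is ours):

```python
def embed_in_error(e, bit, T):
    """Histogram-shift one prediction error (Eqs. 22-23).
    Errors with |e| <= T carry one watermark bit; larger errors are
    shifted away from zero by T + 1 so the two ranges stay disjoint."""
    s = 1 if e >= 0 else -1                 # Eq. 23
    if abs(e) <= T:
        return s * (2 * abs(e) + bit)       # expand the bin, append the bit
    return s * (abs(e) + T + 1)             # pure shift, no payload

# Embedded errors land in [-(2T+1), 2T+1]; shifted ones have magnitude
# at least 2T+2, so extraction can tell the two cases apart from |e'| alone.
```

This disjointness is exactly why the extraction threshold in Sect. 3.2 is 2T + 1 rather than T.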
3.2 Extraction Algorithm

The extraction algorithm just reverses the steps of the embedding algorithm. Watermark extraction is done from the Co, Cg and Y color channels in the same order as used for embedding. In the extraction phase, we again predict all pixels except the base pixels for each color channel P ∈ {Co, Cg, Y}. At each pixel position (i, j) of color channel P of the watermarked image, P′_wm(i, j) is calculated as the predicted value of P_wm(i, j):

P′_wm(i, j) ← Predict_weightedmean(P_wm(i, j))     (25)

Then the prediction error at the (i, j)-th position of the P color channel is denoted e_Pwm(i, j). Then,

e_Pwm(i, j) = P_wm(i, j) − P′_wm(i, j)     (26)

Then the prediction-error frequency histogram is generated, and the watermark bits are extracted from the frequency histogram bins close to zero, as defined by the embedding threshold T:

|e_Pwm(i, j)| ≤ 2T + 1     (27)

Hence, the watermark bit b is extracted as:

b = |e_Pwm(i, j)| − 2·⌊|e_Pwm(i, j)|/2⌋,  if |e_Pwm(i, j)| ≤ 2T + 1     (28)

After extraction, all bins are shifted back to their original positions, so the prediction errors are restored to their original form as given in the following equation:

e_Pwm(i, j) ← sign(e_Pwm(i, j)) · ⌊|e_Pwm(i, j)|/2⌋        if |e_Pwm(i, j)| ≤ 2T + 1
e_Pwm(i, j) ← sign(e_Pwm(i, j)) · (|e_Pwm(i, j)| − T − 1)  otherwise     (29)

where the restored error e_Pwm(i, j) is exactly the same as the prediction error e_P(i, j).

Next, the predicted pixels P′_wm(i, j) are combined with the restored errors e_Pwm(i, j) to losslessly obtain each retrieved color channel P_ret(i, j):

P_ret(i, j) = P′_wm(i, j) + e_Pwm(i, j) = P′_wm(i, j) + e_P(i, j) = P(i, j)     (30)

where P ∈ {Co, Cg, Y}. After we retrieve the color channels Y, Co and Cg losslessly, we transform the cover image to the RGB color space by the lossless YCoCg-R to RGB transformation. The extraction algorithm is presented as Algorithm 2.
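Equations (28)-(29), again for a single error, invert the embedding step of Eq. (22); a sketch (function names are ours), with the embedding function repeated so the round trip is self-contained:

```python
def extract_from_error(e_wm, T):
    """Recover (bit, restored_error) from one watermarked prediction error.
    Returns bit = None for errors that were only shifted, not expanded."""
    s = 1 if e_wm >= 0 else -1
    if abs(e_wm) <= 2 * T + 1:
        bit = abs(e_wm) % 2                  # Eq. 28: LSB of the expanded error
        return bit, s * (abs(e_wm) // 2)     # Eq. 29, embedded case
    return None, s * (abs(e_wm) - T - 1)     # Eq. 29, shifted case

def embed_in_error(e, bit, T):
    """Eq. 22, repeated here so this sketch stands alone."""
    s = 1 if e >= 0 else -1
    return s * (2 * abs(e) + bit) if abs(e) <= T else s * (abs(e) + T + 1)
```

Because embedded errors occupy [−(2T+1), 2T+1] and shifted errors have magnitude at least 2T+2, the single comparison in extract_from_error cleanly separates the two cases, and the restored error always equals the original one.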
3.3 Handling of Overflow and Underflow

An overflow or underflow is said to have occurred if the watermarked pixel P_wm(i, j) obtained in Eq. (24) is such that P_wm(i, j) ∉ [0, 255]. The underflow condition is P_wm(i, j) < 0 and the overflow condition is P_wm(i, j) > 255. In the embedding phase, we do not embed watermark bits into such pixels, to avoid overflow and underflow.
In the extraction phase, we first find out which pixels cause overflow or underflow. These pixels indicate two possibilities:


Algorithm 2. EXTRACT WATERMARK

/* Extract watermark bits from the prediction errors */
Input: Color watermarked image of size M × N pixels in YCoCg-R color space (Iwm), Embedding Threshold (T)
Output: Retrieved cover image (Iret), Watermark (W)
1:  for color channels P ∈ {Co, Cg, Y} in order do
2:    for i = 1 to M do
3:      for j = 1 to N do
4:        if Pwm(i, j) is not a base pixel then
5:          P'wm(i, j) ← Predict_weightedmean(Pwm(i, j))
6:          Compute prediction error ePwm(i, j) ← Pwm(i, j) − P'wm(i, j)
7:          if ePwm(i, j) ≥ 0 then
8:            sign(ePwm(i, j)) ← 1
9:          else
10:           sign(ePwm(i, j)) ← −1
11:         end if
12:         if |ePwm(i, j)| ≤ (2T + 1) then
13:           (Next bit of W) ← |ePwm(i, j)| − 2⌊|ePwm(i, j)|/2⌋
14:           e'Pwm(i, j) ← sign(ePwm(i, j)) · ⌊|ePwm(i, j)|/2⌋
15:         else
16:           e'Pwm(i, j) ← sign(ePwm(i, j)) · [|ePwm(i, j)| − T − 1]
17:         end if
18:         Pret(i, j) ← P'wm(i, j) + e'Pwm(i, j)
19:       else
20:         Pret(i, j) ← Pwm(i, j)
21:       end if
22:     end for
23:   end for
24: end for
25: Obtain original cover image Iret in YCoCg-R color space by combining the Yret, Coret and
    Cgret color components
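The per-pixel core of the extraction (Eqs. 28 and 29, Algorithm 2 lines 6 to 18) can be sketched as follows. This is an illustrative sketch only: the weighted mean prediction for the pixel is assumed to be already available and is passed in as `p_pred`:

```python
def extract_pixel(p_wm, p_pred, T):
    """Given a watermarked pixel value p_wm and its prediction p_pred,
    return (extracted_bit_or_None, restored_pixel) per Eqs. 28-29."""
    e_wm = p_wm - p_pred                        # prediction error (Alg. 2, line 6)
    s = 1 if e_wm >= 0 else -1                  # sign of the error (lines 7-11)
    if abs(e_wm) <= 2 * T + 1:                  # expanded bin: carries a bit
        bit = abs(e_wm) - 2 * (abs(e_wm) // 2)  # Eq. 28: LSB of |e_wm|
        e = s * (abs(e_wm) // 2)                # Eq. 29, expanded case
    else:                                       # bin was only shifted
        bit = None
        e = s * (abs(e_wm) - T - 1)             # Eq. 29, shifted case
    return bit, p_pred + e                      # Eq. 30: restored pixel
```

For example, with T = 2 a watermarked value of 103 against a prediction of 100 yields the bit 1 and restores the original pixel 101, while a shifted value of 107 yields no bit and restores 104.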

Fig. 6. Test images used in our experiments: (a) Bird; (b) Cap; (c) Cycle; (d) House;
(e) Sea; and (f) Nature.

1. During embedding, the pixel caused overflow or underflow, and hence was not used
for embedding.
2. Previously the pixel did not cause overflow or underflow, and hence a watermark
bit was embedded; however, after watermark embedding, the pixel causes
overflow or underflow.


To correctly distinguish which of the two cases has occurred, a
binary bit stream, called a location map, is generally used [9,10]. We assign 0
for the first case and 1 for the second case in the location map. If
neither case occurs, the location map remains empty. During extraction,
if a pixel with overflow or underflow is encountered, we check the next location map
bit. If the bit is 0, we do not use the corresponding pixel for extraction
and it remains unchanged. On the other hand, if the bit is 1, we
use the corresponding pixel for extraction using Algorithm 2. The location
map is generally small, and its size can be further reduced
using lossless compression. The compressed location map is then inserted into
the LSBs of the base pixels, starting from the last base pixel. The original base
pixel LSBs are concatenated at the beginning of the watermark and embedded
into the cover image before replacement with the location map bits.
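The location map bookkeeping might be sketched as follows. The paper does not fix a particular compression scheme, so `zlib` is used here purely as an illustrative lossless compressor, and the bit packing is deliberately naive:

```python
import zlib

def build_location_map(flags):
    """flags: list of 0/1 entries, one per overflow/underflow pixel
    encountered during embedding (0 = pixel skipped, 1 = bit embedded)."""
    bits = ''.join(str(f) for f in flags)
    packed = bits.encode('ascii')       # naive 1-byte-per-bit packing
    return zlib.compress(packed)        # any lossless compressor would do

def read_location_map(compressed, n_flags):
    """Recover the flag list losslessly at extraction time."""
    bits = zlib.decompress(compressed).decode('ascii')
    return [int(b) for b in bits[:n_flags]]

flags = [0, 1, 1, 0, 0, 0, 1]
assert read_location_map(build_location_map(flags), len(flags)) == flags
```

In a full implementation the compressed bytes would then be placed into the base-pixel LSBs as described above; that step is omitted here.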

4 Results and Discussion

The proposed algorithm was implemented in MATLAB and tested on several
images from the Kodak Image Database [15]: Bird, Cap, Cycle, House, Sea and
Nature, as shown in Fig. 6. The performance of our proposed
scheme is measured with respect to the following:
1. the maximum embedding capacity, and
2. the distortion of the watermarked image with respect to the original cover image.
The maximum embedding capacity is estimated as the number of pure watermark bits that can be embedded into the original cover image. To make the
comparison independent of the size of the cover image, we normalize the embedding capacity with respect to the size of the cover image, and report it as the
average number of bits that can be embedded per pixel, measured in units of

Fig. 7. Comparison of embedding capacity in different color spaces for several test
images.


Fig. 8. Distortion characteristics of test images: (a) Bird; (b) Cap; (c) Cycle; (d) House;
(e) Sea; and (f) Nature.

bits-per-pixel (bpp). The distortion of the watermarked image is estimated by the
Peak-Signal-to-Noise-Ratio (PSNR), which is defined as:

\[
PSNR = 10 \log_{10} \left( \frac{MAX^2}{MSE} \right) \ \mathrm{dB} \tag{31}
\]

where MAX represents the maximum possible pixel value. The Mean Square Error
(MSE) for color images is defined as:

\[
MSE = \frac{1}{3MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ (R(i,j) - R'(i,j))^2 + (G(i,j) - G'(i,j))^2 + (B(i,j) - B'(i,j))^2 \right] \tag{32}
\]

where R(i, j), G(i, j) and B(i, j) represent the red, green and blue color component values at location (i, j) of the original cover image; R'(i, j), G'(i, j) and
B'(i, j) represent the corresponding color component values of the watermarked
image; and the color image is of size M × N.
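Eqs. 31 and 32 translate directly into code. The following is a NumPy sketch for 8-bit images (so MAX = 255); the function name is illustrative:

```python
import numpy as np

def psnr_color(original, watermarked, max_val=255.0):
    """PSNR between two M x N x 3 uint8 color images, per Eqs. 31-32."""
    orig = original.astype(np.float64)
    wm = watermarked.astype(np.float64)
    # np.mean over the full array averages over all 3*M*N samples,
    # which is exactly the 1/(3MN) normalization of Eq. 32.
    mse = np.mean((orig - wm) ** 2)
    if mse == 0:
        return float('inf')                 # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

As a sanity check, two images differing by exactly one gray level everywhere give MSE = 1 and hence a PSNR of about 48.13 dB.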
The results of watermarking in the YCoCg-R color space using the proposed
algorithm, and those obtained by watermarking using the same prediction-based
histogram-bin-shifting scheme in the RGB, RCT [14] and O1O2O3 [13] color
space representations, are compared for the test images in Fig. 7. The
comparison clearly demonstrates that the embedding capacity is higher in the
YCoCg-R color space representation than in the RGB, RCT and O1O2O3 color
spaces.
Distortion characteristics (i.e., the variation of PSNR vs. embedded bpp) for several test images are shown in Fig. 8. Note that the maximum bpp value attempted
for each color space corresponds to its embedding capacity. The plots also
suggest that the distortion of the images with increasing amounts of embedded
watermark bits is the least for the YCoCg-R color space representation in most
cases. Since no other color space representation reaches the embedding capacity of
the YCoCg-R representation, overall we can conclude that the YCoCg-R color
space is the best choice for reversible watermarking of color images. This observation was found to hold for most of the images in the Kodak image database [15].

5 Conclusions

In this paper we have proposed a novel reversible watermarking scheme for color
images using histogram-bin-shifting of prediction errors in the YCoCg-R color
space. We used a weighted mean based prediction scheme to predict the pixel values, and watermark bits were embedded by histogram-bin-shifting of the prediction errors in each color channel of the YCoCg-R color space. The motivations for
the choice of the YCoCg-R color space over other color space representations
were justified through detailed theoretical arguments and experimental results
for several standard test images. Our future work would be directed towards
exploiting other color space representations, and comparison of watermarking
performance among them through theoretical and empirical techniques.

References
1. Cox, I.J., Miller, M.L., Bloom, J.A., Fridrich, J., Kalker, T.: Digital Watermarking
and Steganography. Morgan Kaufmann Publishers, San Francisco (2008)
2. Tian, J.: Reversible data embedding using a difference expansion. IEEE Trans.
Circuits Syst. Video Technol. 13(8), 890–896 (2003)
3. Ni, Z., Shi, Y.-Q., Ansari, N., Su, W.: Reversible data hiding. IEEE Trans. Circuits
Syst. Video Technol. 16(3), 354–362 (2006)
4. Alattar, A.M.: Reversible watermark using the difference expansion of a generalized
integer transform. IEEE Trans. Image Process. 13(8), 1147–1156 (2004)
5. Alattar, A.M.: Reversible watermark using difference expansion of triplets. In:
Proceedings of International Conference on Image Processing, vol. 1 (2003)
6. Alattar, A.M.: Reversible watermark using difference expansion of quads. In: Proceedings of International Conference on Acoustics, Speech, and Signal Processing,
vol. 3 (2004)
7. Li, J., Li, X., Yang, B.: A new PEE-based reversible watermarking algorithm
for color image. In: Proceedings of International Conference on Image Processing
(2012)
8. Malvar, H.S., Sullivan, G.J., Srinivasan, S.: Lifting-based reversible color transformations for image compression. In: Proceedings of Optical Engineering and Applications (2008)
9. Naskar, R., Chakraborty, R.S.: Histogram-bin-shifting-based reversible watermarking for colour images. IET Image Process. 7(2), 99–110 (2013)
10. Naskar, R., Chakraborty, R.S.: Fuzzy inference rule based reversible watermarking
for digital images. In: Venkatakrishnan, V., Goswami, D. (eds.) ICISS 2012. LNCS,
vol. 7671, pp. 149–163. Springer, Heidelberg (2012)
11. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, Hoboken
(2012)
12. Kandel, E.R., Schwartz, J.H., Jessell, T.M.: Principles of Neural Science. McGraw-Hill, New York (2000)
13. Nakachi, T., Fujii, T., Suzuki, J.: Lossless and near-lossless compression of still
color images. In: Proceedings of International Conference on Image Processing,
vol. 1. IEEE (1999)
14. Acharya, T., Tsai, P.-S.: JPEG2000 Standard for Image Compression: Concepts,
Algorithms and VLSI Architectures. Wiley-Interscience, New York (2004)
15. Kodak lossless true color image suite. http://r0k.us/graphics/kodak/
