OpenCV Android Programming by Example - Sample Chapter
Amgad Muhammad
Preface
Learn how to use OpenCV to develop vision-aware, intelligent Android applications
in a step-by-step tutorial and join the interesting and rapidly expanding field of
computer vision to enable your Android phone to make sense of the world.
Starting from the basics of computer vision and OpenCV, we'll take you through all
the ways to create exciting applications. You will discover that although computer
vision is a challenging subject, the ideas and algorithms used are simple and
intuitive, and you will appreciate the abstraction layer that OpenCV offers in order
to do the heavy lifting for you.
Packed with many examples, the book will help you understand the main data
structures used in OpenCV, and how you can use them to gain performance
boosts. Next, we will discuss and use several image processing algorithms, such as
histogram equalization, filters, and color space conversion. You then will learn about
image gradients and how they are used in many shape analysis techniques, such as
edge detection, Hough line transform, and Hough circle transform. In addition to
using shape analysis to find things in images, you will learn how to describe objects
in images in a more robust way using different feature detectors and descriptors.
Finally, you will be able to make intelligent decisions using machine learning,
specifically, the famous adaptive boosting learning algorithm and cascade classifiers.
Chapter 3, App 2 - Software Scanner, explains how to implement your next application,
a software scanner. It allows people to take a photo of, let's say, a receipt, and apply
some transformations to make it look as if it was scanned. In this chapter, we will
introduce two important topics that will help us to reach our final goal.
The first topic will be about spatial filtering and its definition and applications. The
second topic will be about a famous shape analysis technique called the Hough
transform. You will learn about the basic idea behind this technique, which has made
it very popular and widely used, and we will use the OpenCV implementation to start
fitting lines and circles to a set of edge pixels.
Chapter 4, App 2 - Applying Perspective Correction, continues to build on the application
that we started in Chapter 3. We will use the concepts that you've learned, namely,
edge detection and the Hough line transform, to apply perspective correction to a
quadrilateral object. Applying perspective transformation to an object will change
the way that we see it; this idea will come in handy when you take pictures of
documents, receipts, and so on, and you want to have a better view of the captured
image or a scan-like copy.
Chapter 5, App 3 - Panoramic Viewer, starts working on a new application. The goal of
the application is to stitch two images together in order to form a panoramic view,
and in this chapter, we will introduce the concept of image features and why they
are important, and we will see them in action.
Chapter 6, App 4 - Automatic Selfie, introduces a new application. The goal of the
application is to be able to take a selfie without touching your phone's screen. Your
application will be able to detect a certain hand gesture that will trigger the process
of saving the current camera frame.
Digital images
Images are all around us, wherever we look; so it is very important to understand
how images are represented and how their colors are mapped if we want to
understand, process, and analyze these images automatically.
Color spaces
We live in a continuous world, so to capture a scene with a discrete digital sensor, a
discrete spatial (layout) and intensity (color information) mapping has to happen in
order to store the real-world data in a digital image.
The two-dimensional digital image, D(i,j), represents a sensor response value at the
pixel indicated by the row number i and column number j, starting from the
upper-left corner as i=j=0.
To represent colors, a digital image usually contains one or more channels to store
the intensity value (color) of each pixel. The most widely used color representation is
a one-channel image, also known as a grayscale image, where every pixel is assigned
a shade of gray depending on its intensity value: zero is black and the maximum
intensity value is white.
There is one more color space to consider that is more related to human
understanding and perception of colors than the RGB representation. It is the Hue,
Saturation, and Value (HSV) color space.
Each of the color dimensions can be understood as follows:
Hue (H): It represents the dominant color type, as you would find it on a color
wheel; for example, red, green, or blue.
Saturation (S): It measures how pure the color is; for example, is it a dull red
or dark red? Think of it as how much white is blended with the color.
Value (V): It measures the brightness of the color; the darker the color, the
lower its value.
The last image type to consider is the binary image. It is a two-dimensional array
of pixels; however, each pixel can store only the value of zero or one. This type of
representation is important for solving vision problems such as edge detection.
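For instance, a binary image can be produced from a grayscale one by thresholding it. A minimal sketch using OpenCV's Imgproc.threshold() follows; note that, in practice, OpenCV stores binary images as 0 and 255 in an 8-bit Mat, and the grayImage name here is an assumed input:

// Pixels brighter than 127 become 255 (white); the rest become 0 (black).
Mat binaryImage = new Mat();
Imgproc.threshold(grayImage, binaryImage, 127, 255, Imgproc.THRESH_BINARY);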
For example, to create a 5 x 5 matrix of 8-bit unsigned values with one channel:

int numRow = 5;
int numCol = 5;
int type = org.opencv.core.CvType.CV_8UC1;
Mat myMatrix = new Mat(numRow, numCol, type);
In order to specify what type the Mat class is storing and how many
channels there are, OpenCV provides you with a CvType class with
static int fields with the following naming convention:
CV_(data type size: "8" | "16" | "32" | "64")
(data type: "S" | "U" | "F", for signed integers, unsigned integers,
or floating point numbers)
(number of channels: "C1" | "C2" | "C3" | "C4", for one, two, three,
or four channels, respectively)
For example, you specified the type parameter as org.opencv.
core.CvType.CV_8UC1; this means that the matrix will hold 8-bit
unsigned characters for color intensity with one channel. In other
words, this matrix will store a grayscale image with intensities from
0 (black) to 255 (white).
Let's go through the important lines; the rest is actually straightforward:
double[] pixelValue = cameraFrame.get(0, 0);
In this line, we are calling the get(0,0) method and passing it the row and
column index; in this case, it is the top-left pixel.
Note that the get() method returns a double array because the Mat object can hold
up to four channels.
In our case, it is a full color image, so each pixel will have three intensities for each of
the Red (r), Green (g), and Blue (b) color channels in addition to one channel for the
transparency, Alpha (a), hence the name of the method is rgba().
You can access each channel intensity independently using the array index operator
[] so, for the Red, Green, and Blue intensities, you use 0, 1, and 2, respectively:
double redChannelValue = pixelValue[0];
double greenChannelValue = pixelValue[1];
double blueChannelValue = pixelValue[2];
The following table lists the basic Mat class operations that you will need to be
familiar with:

To retrieve the number of channels:
Mat myImage; //declared and initialized
int numberOfChannels = myImage.channels();

To make a deep copy of a Mat object, including the matrix data:
Mat newMat = existingMat.clone();

To retrieve the number of matrix columns, use one of three methods:
Mat myImage; //declared and initialized
//First method:
int colsNum = myImage.cols();
//Second method:
int colsNum = myImage.width();
//Third method:
//And yes, width is a public instance variable (of type double).
int colsNum = (int) myImage.size().width;

To retrieve the number of matrix rows, use one of three methods:
Mat myImage; //declared and initialized
//First method:
int rowsNum = myImage.rows();
//Second method:
int rowsNum = myImage.height();
//Third method:
//And yes, height is a public instance variable (of type double).
int rowsNum = (int) myImage.size().height;

To retrieve the matrix depth (the data type of each channel):
Mat myImage; //declared and initialized
int depth = myImage.depth();
UI definitions
In this project, you will load an image stored on your phone, convert it to a bitmap
image, and display it in an image view.
Let's start by setting the layout of the application activity:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="horizontal">
    <ImageView
        android:id="@+id/IODarkRoomImageView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:src="@drawable/ic_launcher"
        android:layout_marginLeft="0dp"
        android:layout_marginTop="0dp"
        android:scaleType="fitXY"/>
</LinearLayout>
It is a simple linear layout with an image view. The next step is to set some needed
permissions. Just in case you will be loading images from your SD card, you will
need to set the corresponding permission so that Android allows your application to
read from and write to the external storage.
In your manifest file, add the following line:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<action android:name="android.intent.action.MAIN"/>
<category android:name="android.intent.category.LAUNCHER"/>
</intent-filter>
</activity>
</application>
We are done with the UI definitions for this part of the application, so let's move on
to the code behind it.
The next step is to handle the user clicks on the menu item that we defined earlier:
private static final int SELECT_PICTURE = 1;
private String selectedImagePath;

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    // Handle action bar item clicks here. The action bar will
    // automatically handle clicks on the Home/Up button, so long
    // as you specify a parent activity in AndroidManifest.xml.
    int id = item.getItemId();
    if (id == R.id.action_openGallary) {
        Intent intent = new Intent();
        intent.setType("image/*");
        intent.setAction(Intent.ACTION_GET_CONTENT);
        startActivityForResult(Intent.createChooser(intent,
            "Select Picture"), SELECT_PICTURE);
        return true;
    }
    return super.onOptionsItemSelected(item);
}
Once the user selects an image to load from the gallery, we execute the loading and
display it in the activity result callback method:
public void onActivityResult(int requestCode, int resultCode,
        Intent data) {
    if (resultCode == RESULT_OK) {
        if (requestCode == SELECT_PICTURE) {
            Uri selectedImageUri = data.getData();
            selectedImagePath = getPath(selectedImageUri);
            Log.i(TAG, "selectedImagePath: " + selectedImagePath);
            loadImage(selectedImagePath);
            displayImage(sampledImage);
        }
    }
}
After you make sure that the opened activity returned the needed result (in this
case, the image URI), we call the helper method, getPath(), to retrieve the
image path in the format that is needed to load the image using OpenCV:
private String getPath(Uri uri) {
    // just some safety built in
    if (uri == null) {
        return null;
    }
    // try to retrieve the image from the media store first
    // this will only work for images selected from gallery
    String[] projection = { MediaStore.Images.Media.DATA };
    Cursor cursor = getContentResolver().query(uri, projection,
        null, null, null);
    if (cursor != null) {
        int column_index = cursor.getColumnIndexOrThrow(
            MediaStore.Images.Media.DATA);
        cursor.moveToFirst();
        String path = cursor.getString(column_index);
        // close the cursor to avoid leaking it
        cursor.close();
        return path;
    }
    return uri.getPath();
}
This method reads an image from the given path and returns it. It is provided as a
static member in the Highgui class.
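The loading code itself is not reproduced in this excerpt; a minimal sketch of the call, assuming the Highgui.imread() method from the OpenCV 2.4 Java API and the selectedImagePath variable from the previous snippet, could start like this:

// Read the image file into a Mat; note that OpenCV decodes it
// in B, G, R channel order.
Mat originalImage = Highgui.imread(selectedImagePath);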
In order to load the image as an RGB bitmap, we first need to convert the decoded
image from the color space B, G, R to the color space R, G, B.
First, we instantiate an empty Mat object, rgbImage, then we execute color space
mapping using the Imgproc.cvtColor() method. The method takes three
parameters: the source image, destination image, and mapping code. Luckily,
OpenCV supports over 150 mappings and, in our case, we need the BGR to RGB
mapping.
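A sketch of that conversion, assuming the Mat returned by Highgui.imread() is named originalImage:

// Map the decoded B, G, R image into R, G, B order for display on Android.
Mat rgbImage = new Mat();
Imgproc.cvtColor(originalImage, rgbImage, Imgproc.COLOR_BGR2RGB);

Now, let us see the following snippet: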
Display display = getWindowManager().getDefaultDisplay();
Point size = new Point();
display.getSize(size);
int width = size.x;
int height = size.y;
double downSampleRatio = calculateSubSampleSize(
    rgbImage, width, height);
It would be very wasteful and sometimes impossible to display the images in their
original resolution due to memory constraints.
For example, if you captured an image with your phone's 8 megapixel camera,
then the memory cost of the colored image, assuming 1 byte of color depth per
channel, is 8 × 3 (RGB) = 24 megabytes.
To overcome this issue, it is advisable to resize (downsample) the image to your
phone's display resolution. To do so, we first retrieve the phone's display resolution
and then calculate the downsample ratio using the calculateSubSampleSize()
helper method:
private static double calculateSubSampleSize(
Mat srcImage, int reqWidth, int reqHeight) {
// Raw height and width of image
final int height = srcImage.height();
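The rest of the helper falls outside this excerpt; one possible completion, assuming we return the largest ratio that makes the image fit within the requested width and height, is sketched here:

private static double calculateSubSampleSize(
        Mat srcImage, int reqWidth, int reqHeight) {
    // Raw height and width of image
    final int height = srcImage.height();
    final int width = srcImage.width();
    double inSampleSize = 1;
    if (height > reqHeight || width > reqWidth) {
        // Compute the ratios of the requested dimensions to the
        // actual dimensions, and pick the smaller one so that the
        // resized image fits both constraints.
        final double heightRatio = (double) reqHeight / (double) height;
        final double widthRatio = (double) reqWidth / (double) width;
        inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio;
    }
    return inSampleSize;
}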
Now, we are ready to resize the loaded image to fit on the device screen. First, we
create an empty Mat object, sampledImage, to hold the resized image. Then, we call
Imgproc.resize(), passing it the source image, the destination image, and the
following (the full call is sketched after this list):
The size of the new image; in our case, a new empty Size object as we will
send the downsample ratio instead
A double for the downsample ratio in the X direction (for the width)
A double for the downsample ratio in the Y direction (for the height)
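A sketch of the complete call, assuming the rgbImage and downSampleRatio variables from the previous snippets (INTER_AREA is the interpolation choice recommended below for shrinking):

// The empty Size() means the destination size is computed
// from the fx and fy scale factors instead.
Mat sampledImage = new Mat();
Imgproc.resize(rgbImage, sampledImage, new Size(), downSampleRatio,
    downSampleRatio, Imgproc.INTER_AREA);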
Interpolation is needed here because we will change the size of an image (upsize or
downsize) and we want the mapping from the source image to the destination image
to be as smooth as possible.
Interpolation will decide what the value of the destination image pixel is when it
falls between two pixels in the source image, in case we are downsizing. It will also
compute the values of the new pixels in the destination image that have no
corresponding pixel in the source image, in case we are upsizing.
In either case, OpenCV has several options to compute the value of such pixels. The
default INTER_LINEAR method computes the destination pixel value by linearly
weighting the 2-by-2 surrounding source pixels' values according to how close they
are to the destination pixel. Alternatively, INTER_NEAREST takes the value of the
destination pixel from its closest pixel in the source image. The INTER_AREA option
virtually places the destination pixel over the source pixels and then averages the
covered pixel values. Finally, we have the option of fitting a cubic spline between
the 4-by-4 surrounding pixels in the source image and then reading off the
corresponding destination value from the fitted spline; this is the result of choosing
the INTER_CUBIC interpolation method.
To shrink an image, it will generally look best with the INTER_AREA
interpolation, whereas to enlarge an image, it will generally look
best with INTER_CUBIC (slow) or INTER_LINEAR (faster, but still
looks OK).
try {
    ExifInterface exif = new ExifInterface(selectedImagePath);
    int orientation = exif.getAttributeInt(
        ExifInterface.TAG_ORIENTATION, 1);
    switch (orientation)
    {
        case ExifInterface.ORIENTATION_ROTATE_90:
            //get the mirrored image
            sampledImage = sampledImage.t();
            //flip on the y-axis
            Core.flip(sampledImage, sampledImage, 1);
            break;
        case ExifInterface.ORIENTATION_ROTATE_270:
            //get upside down image
            sampledImage = sampledImage.t();
            //flip on the x-axis
            Core.flip(sampledImage, sampledImage, 0);
            break;
    }
} catch (IOException e) {
    e.printStackTrace();
}
Now, we need to handle the image orientation and because the activity only works
in the portrait mode, we will handle the images with a rotation of 90 or 270 degrees.
In the case of a 90 degree rotation, this means that you took the image with the
phone in the portrait position; we rotate the image 90 degrees counterclockwise by
calling the t() method in order to transpose the Mat object.
The result of the transpose is a mirrored version of the original image, so we need
one more step to flip the image around the vertical axis by calling Core.flip(),
passing it the source image, the destination image, and a flip code that specifies
how to flip the image; 0 means flipping around the x axis, a positive value (for
example, 1) means flipping around the y axis, and a negative value (for example, -1)
means flipping around both axes.
For the 270 degree rotation case, this means that you took the picture with your
phone upside down. We follow the same algorithm, transpose the image and then
flip it. Yet, after we transpose the image, it will be a mirrored version around the
horizontal direction, thus we call Core.flip() with the 0 flip code.
Now, we are ready to display the image using the image view component:
private void displayImage(Mat image)
{
    // create a bitmap
    Bitmap bitMap = Bitmap.createBitmap(image.cols(),
        image.rows(), Bitmap.Config.RGB_565);
    // convert to bitmap:
    Utils.matToBitmap(image, bitMap);
    // find the imageview and draw it!
    ImageView iv = (ImageView) findViewById(
        R.id.IODarkRoomImageView);
    iv.setImageBitmap(bitMap);
}
First, we create a bitmap object with the color channels' order matching the loaded
image color channels' order, RGB. Then, we use Utils.matToBitmap() to convert a
Mat object to a bitmap object. Finally, we set the image view bitmap with the newly
created bitmap object.
Now, we are ready to show how to calculate a histogram for an image using the
OpenCV library.
UI definitions
We will continue to build on the same app that we started in the previous section.
The change is to add one more menu item to the menu file in order to trigger the
histogram calculation.
Go to the res/menu/iodark_room.xml file and open it to include the following
menu item:
<item
android:id="@+id/action_Hist"
android:orderInCategory="101"
android:showAsAction="never"
android:title="@string/action_Hist">
</item>
Note that in case the display histogram menu item is pressed, we first check that
the user has already loaded an image; if not, we display a friendly message and
then return.
Now for the histogram part, which is as follows:
Mat histImage=new Mat();
sampledImage.copyTo(histImage);
calcHist(histImage);
displayImage(histImage);
return true;
We first make a copy of the downsized image that the user loaded; this is necessary
because we will draw the histogram onto the image, so we need to keep a pristine
copy. Once we have the copy, we call calcHist() and pass it the new image:
private void calcHist(Mat image)
{
    int mHistSizeNum = 25;
    MatOfInt mHistSize = new MatOfInt(mHistSizeNum);
    Mat hist = new Mat();
    float[] mBuff = new float[mHistSizeNum];
    MatOfFloat histogramRanges = new MatOfFloat(0f, 256f);
    Scalar mColorsRGB[] = new Scalar[] { new Scalar(200, 0, 0, 255),
        new Scalar(0, 200, 0, 255), new Scalar(0, 0, 200, 255) };
    org.opencv.core.Point mP1 = new org.opencv.core.Point();
    org.opencv.core.Point mP2 = new org.opencv.core.Point();
    int thikness = (int) (image.width() / (mHistSizeNum + 10) / 3);
    if (thikness > 3) thikness = 3;
    MatOfInt mChannels[] = new MatOfInt[] { new MatOfInt(0),
        new MatOfInt(1), new MatOfInt(2) };
    Size sizeRgba = image.size();
    int offset = (int) (sizeRgba.width - (3 * mHistSizeNum + 30) * thikness);
    // RGB
    for (int c = 0; c < 3; c++) {
        Imgproc.calcHist(Arrays.asList(image), mChannels[c], new Mat(),
            hist, mHistSize, histogramRanges);
        Core.normalize(hist, hist, sizeRgba.height / 2, 0,
            Core.NORM_INF);
        hist.get(0, 0, mBuff);
        for (int h = 0; h < mHistSizeNum; h++) {
            mP1.x = mP2.x = offset + (c * (mHistSizeNum + 10) + h) *
                thikness;
            mP1.y = sizeRgba.height - 1;
            mP2.y = mP1.y - (int) mBuff[h];
            Core.line(image, mP1, mP2, mColorsRGB[c], thikness);
        }
    }
}
First, we define the number of histogram bins. In this case, our histogram will have
25 bins. Then, we initialize a MatOfInt object, which is a subclass of the Mat class
that only stores integers, with the number of histogram bins. The result of such an
initialization is a MatOfInt object of dimension 1 × 1 × 1 (rows × columns × channels),
holding the number 25.
We need to initialize such an object because, according to the
specification, the OpenCV calculate histogram method takes a
Mat object holding the number of histogram bins.
Then, we initialize a new Mat object to hold the histogram values using the following
command:
Mat hist = new Mat();
This time, the Mat object will have the dimension 1 × 1 × (number of bins):
float[] mBuff = new float[mHistSizeNum];
Recall that in the beginning of this chapter, we accessed individual pixels in the
image. Here, we will use the same technique to access the histogram bins' values
and store them in an array of the float type. Next, we define another histogram
component, which is the histogram range:
MatOfFloat histogramRanges = new MatOfFloat(0f, 256f);
We use the MatOfFloat() class; it is a subclass of the Mat class and as the name
suggests, it only holds floating point numbers.
For every line that we draw for the histogram bin, we need to specify the line
thickness:
int thikness = (int) (image.width() / (mHistSizeNum + 10) / 3);
if (thikness > 3) thikness = 3;
Initialize three MatOfInt objects with the values 0, 1, and 2 to index every image
channel independently:
MatOfInt mChannels[] = new MatOfInt[] { new MatOfInt(0),
new MatOfInt(1), new MatOfInt(2) };
Calculate the offset from which we will start drawing the histogram:
Size sizeRgba = image.size();
int offset = (int) (sizeRgba.width - (3 * mHistSizeNum + 30) * thikness);
Let's move forward to part two where we calculate and plot the histogram:
// RGB
for (int c = 0; c < 3; c++) {
    Imgproc.calcHist(Arrays.asList(image), mChannels[c], new Mat(),
        hist, mHistSize, histogramRanges);
    Core.normalize(hist, hist, sizeRgba.height / 2, 0, Core.NORM_INF);
    hist.get(0, 0, mBuff);
    for (int h = 0; h < mHistSizeNum; h++) {
        mP1.x = mP2.x = offset + (c * (mHistSizeNum + 10) + h) *
            thikness;
        mP1.y = sizeRgba.height - 1;
        mP2.y = mP1.y - (int) mBuff[h];
        Core.line(image, mP1, mP2, mColorsRGB[c], thikness);
    }
}
The first thing to notice is that we can only compute the histogram for one channel
at a time. That's why we have a for loop running over the three channels. As for the
body of the loop, the first step is to call Imgproc.calcHist(), which does all the heavy
lifting, passing it the following arguments:
A list of images from which to calculate the histogram; in our case, a single
image wrapped with Arrays.asList()
A MatOfInt object holding the index of the channel to process
A Mat object for an optional mask; we pass an empty Mat, so every pixel
is counted
A Mat object to hold the computed histogram values
A MatOfInt object holding the number of histogram bins
A MatOfFloat object holding the histogram range
Now that we have computed the histogram, it is necessary to normalize its values
so that we can display them on the device screen. Core.normalize() can be used
in several different ways:
Core.normalize(hist, hist, sizeRgba.height/2, 0, Core.NORM_INF);
The one used here is to normalize using the norm of the input array, which is the
histogram values in our case, passing the following arguments:
A double alpha. In the case of a norm normalization, alpha is used as the
norm value. In the other case, which is a range normalization, alpha is the
minimum value of the range.
A double beta. This parameter is only used in the case of a range normalization,
as the maximum range value. In our case, we passed 0 as it is not used.
Finally, we plot a line for every bin in the histogram using Core.line():
for (int h = 0; h < mHistSizeNum; h++) {
    //calculate the starting x position related to channel C plus 10
    //pixels spacing multiplied by the thickness
    mP1.x = mP2.x = offset + (c * (mHistSizeNum + 10) + h) *
        thikness;
    mP1.y = sizeRgba.height - 1;
    mP2.y = mP1.y - (int) mBuff[h];
    Core.line(image, mP1, mP2, mColorsRGB[c], thikness);
}
The final output would be the loaded image with a histogram for every color channel.
UI definitions
We will build on the project that we developed earlier by adding more menu items
to trigger the image enhancing functionality.
Open the menu file, res/menu/iodark_room.xml, and add the new submenu:
<item android:id="@+id/enhance_gs"
android:title="@string/enhance_gs"
android:enabled="true"android:visible="true"
android:showAsAction="always"
android:titleCondensed="@string/enhance_gs_small">
<menu>
<item android:id="@+id/action_togs"
android:title="@string/action_ctgs"/>
<item android:id="@+id/action_egs"
android:title="@string/action_eqgsistring"/>
</menu>
</item>
In the new submenu, we added two new items: one to convert the image to grayscale
and the second to trigger the histogram equalization.
We do a check to see if the sampled image is already loaded, and then call
Imgproc.cvtColor(), passing it the following parameters:
A source Mat object, the image to convert
A destination Mat object to hold the converted image
An integer to indicate which color space to convert from and which color
space to convert to. In our case, we chose to convert from RGB to grayscale.
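The call itself is not shown in this excerpt; a minimal sketch, assuming the sampledImage and displayImage() names used earlier, would be:

// Convert the loaded RGB image to a single-channel grayscale image.
Mat greyImage = new Mat();
Imgproc.cvtColor(sampledImage, greyImage, Imgproc.COLOR_RGB2GRAY);
displayImage(greyImage);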
We will again check to see if the user has already converted the image to grayscale;
otherwise, the histogram equalization method will fail. Then, we call
Imgproc.equalizeHist(), passing in two parameters:
A source Mat object, the grayscale image to equalize
A destination Mat object to hold the equalized image
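A sketch of the call, assuming the greyImage Mat from the previous sketch:

// Stretch the image contrast by equalizing the grayscale histogram.
Mat eqGreyImage = new Mat();
Imgproc.equalizeHist(greyImage, eqGreyImage);
displayImage(eqGreyImage);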
UI definitions
The changes are related to adding the new menu item to trigger the HSV
enhancement:
<item android:id="@+id/action_HSV"
android:titleCondensed="@string/action_enhanceHSV"
android:title="@string/action_enhanceHSV"android:enabled="true"
android:showAsAction="ifRoom"android:visible="true"/>
Initialize two new Mat objects to hold the image value and saturation channels:
Mat HSV = new Mat();
Imgproc.cvtColor(sampledImage, HSV, Imgproc.COLOR_RGB2HSV);
Then, we access the image pixel by pixel to copy the saturation and value channels:
Imgproc.equalizeHist(V, V);
Imgproc.equalizeHist(S, S);
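The code that extracts the V and S Mat objects is not reproduced in this excerpt; an equivalent way to obtain them and write them back, using Core.split() and Core.merge() instead of a per-pixel loop, might look like this (java.util.List and java.util.ArrayList are assumed to be imported):

// Split the HSV image into its three channels: H, S, and V.
List<Mat> channels = new ArrayList<Mat>();
Core.split(HSV, channels);
Mat S = channels.get(1);
Mat V = channels.get(2);
// ... equalize S and V as shown above ...
// Merge the enhanced channels back into the HSV image.
Core.merge(channels, HSV);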
Now, we copy the enhanced saturation and value back to the original image:
Mat enhancedImage = new Mat();
Imgproc.cvtColor(HSV, enhancedImage, Imgproc.COLOR_HSV2RGB);
displayImage(enhancedImage);
return true;
Finally, we convert the HSV color space back to RGB and display the enhanced image.
UI definitions
We will add a new menu item to execute the RGB enhancement on individual
channels or a group of channels:
<item android:id="@+id/action_RGB"
android:title="@string/action_RGB"
android:titleCondensed="@string/action_enhanceRGB_small"
android:enabled="true"android:showAsAction="ifRoom"
android:visible="true">
<menu>
[ 58 ]
Chapter 2
<item android:id="@+id/action_ER"
android:titleCondensed="@string/action_enhance_red_small"
android:title="@string/action_enhance_red"
android:showAsAction="ifRoom"android:visible="true"
android:enabled="true"android:orderInCategory="1"/>
<item android:id="@+id/action_EG" android:showAsAction="ifRoom"
android:visible="true"android:enabled="true"
android:titleCondensed="@string/action_enhance_green_small"
android:title="@string/action_enhance_green"
android:orderInCategory="2"/>
<item android:id="@+id/action_ERG" android:showAsAction="ifRoom"
android:visible="true"android:enabled="true"
android:titleCondensed="@string/
action_enhance_red_green_small"
android:title="@string/action_enhance_red_green"
android:orderInCategory="3"/>
</menu>
</item>
The important line here is initializing redMask, which is a Mat object, with all
the channels set to 0 except the first channel, which is the red channel in an
RGB image.
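The initialization is not reproduced in this excerpt; a sketch of what it might look like, under the assumption that the mask is later passed to copyTo() (OpenCV's copyTo() accepts multi-channel masks, where non-zero entries mark what to copy):

// Work on a copy so that the original image stays untouched.
Mat redEnhanced = new Mat();
sampledImage.copyTo(redEnhanced);
// A mask with 1 in the first (red) channel and 0 elsewhere.
Mat redMask = new Mat(sampledImage.size(), sampledImage.type(),
    new Scalar(1, 0, 0, 0));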
Then, we call the enhanceChannel() method, passing in a copy of the loaded image
and the channel mask that we created:
enhanceChannel(redEnhanced,redMask);
However, this time we pass a mask to the copy method to extract only the
designated channel of the image.
Then, we convert the copied channel to a grayscale color space so that the depth is
8-bit and equalizeHist() doesn't fail.
Finally, we convert it to an RGB Mat object, replicating the enhanced channel to
the Red, Green, and Blue, and then we copy the enhanced channel to the passed
argument using the same mask.
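Putting those steps together, one possible shape of the enhanceChannel() helper, under the assumptions already stated (the names follow the text; this is a sketch, not the author's exact code):

private void enhanceChannel(Mat imageToEnhance, Mat mask)
{
    // Extract only the masked channel into an otherwise black image.
    Mat channel = new Mat(sampledImage.size(), CvType.CV_8UC3,
        new Scalar(0, 0, 0));
    sampledImage.copyTo(channel, mask);

    // Convert to a single 8-bit channel so that equalizeHist() doesn't fail.
    Imgproc.cvtColor(channel, channel, Imgproc.COLOR_RGB2GRAY, 1);
    Imgproc.equalizeHist(channel, channel);

    // Back to RGB, replicating the enhanced channel to Red, Green, and Blue.
    Imgproc.cvtColor(channel, channel, Imgproc.COLOR_GRAY2RGB, 3);

    // Copy the enhanced channel into the output using the same mask.
    channel.copyTo(imageToEnhance, mask);
}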
You can easily play around with masks that you construct in order to enhance
different channels or a combination of channels.
Summary
By now, you should have learned how images are represented and stored in
OpenCV. You also developed your own darkroom application, loaded images from
your gallery, calculated and displayed their histograms, and executed histogram
equalization on different color spaces in order to enhance how the image looks.
In the next chapter, we will develop a new application to utilize more of the OpenCV
image processing and computer vision algorithms. We will use algorithms to smooth
images and detect edges, lines, and circles.