OpenCV Tutorials
Release 2.4.2
CONTENTS
Introduction to OpenCV
1.1 Installation in Linux
1.2 Using OpenCV with gcc and CMake
1.3 Using OpenCV with Eclipse (plugin CDT)
1.4 Installation in Windows
1.5 How to build applications with OpenCV inside the Microsoft Visual Studio
1.6 Using Android binary package with Eclipse
1.7 Using C++ OpenCV code with Android binary package
1.8 Installation in iOS
1.9 Load and Display an Image
1.10 Load, Modify, and Save an Image
1.11 How to write a tutorial for OpenCV?
3.14 Affine Transformations
3.15 Histogram Equalization
3.16 Histogram Calculation
3.17 Histogram Comparison
3.18 Back Projection
3.19 Template Matching
3.20 Finding contours in your image
3.21 Convex Hull
3.22 Creating Bounding boxes and circles for contours
3.23 Creating Bounding rotated boxes and ellipses for contours
3.24 Image Moments
3.25 Point Polygon Test
The following links describe a set of basic OpenCV tutorials. All the source code mentioned here is provided as part of the regular OpenCV releases, so check there before you start copying and pasting the code. The list of tutorials below is automatically generated from reST files located in our SVN repository.
As always, we would be happy to hear your comments and receive your contributions on any tutorial.
Introduction to OpenCV
Here you will learn about the basic building blocks of the library. A must-read for understanding how to manipulate images at the pixel level.
In this section you will learn about the image processing (manipulation) functions inside OpenCV.
This section contains valuable tutorials on how to read/save your image/video files and how to use the built-in graphical user interface of the library.
Learn how to use the feature point detectors, descriptors and matching framework found inside OpenCV.
Look here to find algorithms you can use on your video streams, such as motion extraction, feature tracking and foreground extraction.
Ever wondered how your digital camera detects people and faces? Look here to find out!
Use the powerful machine learning classes for statistical classification, regression and clustering of data.
Squeeze every last bit of computing power out of your system by using your video card to run the OpenCV algorithms.
General tutorials
These tutorials are the bottom of the iceberg, as they link together several of the modules presented above in order to solve complex problems.
CHAPTER
ONE
INTRODUCTION TO OPENCV
Here you can read tutorials about how to set up your computer to work with the OpenCV library. Additionally, you can find a few very basic sample programs that will introduce you to the world of OpenCV.
Linux
Windows
Title: How to build applications with OpenCV inside the Microsoft Visual
Studio
Compatibility: > OpenCV 2.0
Author: Bernát Gábor
You will learn what steps you need to perform in order to use the OpenCV
library inside a new Microsoft Visual Studio project.
Android
iOS
Title: Installation in iOS
Compatibility: > OpenCV 2.3.1
Author: Artem Myagkov
We will learn how to set up OpenCV for use in iOS!
Want to contribute, and see your own work among the OpenCV tutorials?
Required packages
GCC 4.4.x or later. This can be installed with
sudo apt-get install build-essential
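Beyond the compiler, a from-source Linux build of this era typically also needs CMake, pkg-config and the GUI/video development headers. A sketch of the extra packages; the exact package names are distribution-specific assumptions, not part of this guide:

```shell
# Build tooling plus the GTK and FFmpeg development headers OpenCV's
# highgui module commonly links against (names assume Ubuntu/Debian)
sudo apt-get install cmake pkg-config libgtk2.0-dev \
    libavcodec-dev libavformat-dev libswscale-dev
```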
Building OpenCV from source using CMake, using the command line
1. Create a temporary directory, which we denote as <cmake_binary_dir>, where you want to put the generated Makefiles and project files, as well as the object files and output binaries.
2. Enter the <cmake_binary_dir> and type
cmake [<some optional parameters>] <path to the OpenCV source directory>
For example
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
Note:
If the size of the created library is a critical issue (as in the case of an Android build) you can use the install/strip target to get the smallest size possible. The stripped version can be about half the size. However, we do not recommend this unless those extra megabytes really matter.
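After configuring, the build and install steps follow the usual CMake pattern. A minimal sketch, run from the same <cmake_binary_dir> and assuming the install prefix configured above:

```shell
# Compile the library (add -j<N> to build with N parallel jobs)
make
# Install to CMAKE_INSTALL_PREFIX (/usr/local in the example above);
# use "sudo make install/strip" instead if binary size matters
sudo make install
```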
Steps
Create a program using OpenCV
Let's use a simple program such as DisplayImage.cpp, shown below.
#include <cv.h>
#include <highgui.h>
#include <stdio.h>

using namespace cv;

int main( int argc, char** argv )
{
    Mat image;
    image = imread( argv[1], 1 );

    if( argc != 2 || !image.data )
    {
        printf( "No image data \n" );
        return -1;
    }

    namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
    imshow( "Display Image", image );
    waitKey(0);

    return 0;
}
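The CMake variant of this tutorial builds the same program from a short CMakeLists.txt placed next to DisplayImage.cpp; a minimal sketch along those lines:

```cmake
cmake_minimum_required(VERSION 2.8)
project( DisplayImage )
# Locate the installed OpenCV package; OpenCV_LIBS is set by its config file
find_package( OpenCV REQUIRED )
add_executable( DisplayImage DisplayImage.cpp )
target_link_libraries( DisplayImage ${OpenCV_LIBS} )
```

Running cmake . followed by make in that directory then produces the DisplayImage executable.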
Result
By now you should have an executable (called DisplayImage in this case). You just have to run it, giving an image location as an argument, e.g.:
./DisplayImage lena.jpg
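For the plain gcc route of this section, the same program can also be compiled in one line, letting pkg-config supply the include and library flags. A sketch, assuming OpenCV's pkg-config file was installed:

```shell
# pkg-config expands to the -I, -L and -l flags for the installed OpenCV
g++ DisplayImage.cpp -o DisplayImage `pkg-config --cflags --libs opencv`
```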
Prerequisites
1. Having installed Eclipse on your workstation (only the CDT plugin for C/C++ is needed). You can follow these steps:
Go to the Eclipse site
Download the Eclipse IDE for C/C++ Developers. Choose the link according to your workstation.
2. Having installed OpenCV. If not yet, go here.
Making a project
1. Start Eclipse. Just run the executable that comes in the folder.
2. Go to File -> New -> C/C++ Project
3. Choose a name for your project (e.g. DisplayImage). An Empty Project should be okay for this example.
7. So, now you have a project with an empty .cpp file. Let's fill it with some sample code (in other words, copy and paste the snippet below):
#include <cv.h>
#include <highgui.h>
#include <stdio.h>

using namespace cv;

int main( int argc, char** argv )
{
    Mat image;
    image = imread( argv[1], 1 );

    if( argc != 2 || !image.data )
    {
        printf( "No image data \n" );
        return -1;
    }

    namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
    imshow( "Display Image", image );
    waitKey(0);

    return 0;
}
8. We are only missing one final step: telling Eclipse where the OpenCV headers and libraries are. For this, do the following:
Go to Project -> Properties
In C/C++ Build, click on Settings. At the right, choose the Tool Settings tab. Here we will enter the header and library info:
(a) In GCC C++ Compiler, go to Includes. In Include paths (-I) you should include the path of the folder where OpenCV was installed. In our example, this is /usr/local/include/opencv.
Note: If you do not know where your opencv files are, open the Terminal and type:
pkg-config --cflags opencv
(b) Now go to GCC C++ Linker, where you have to fill in two fields:
First, in Library search path (-L), write the path to where the OpenCV libraries reside. In my case the path is:
/usr/local/lib
Then in Libraries (-l) add the OpenCV libraries that you may need. Usually just the first 3 on the list below are enough (for simple applications). In my case, I am adding all of them since I plan to use the whole bunch:
opencv_core opencv_imgproc opencv_highgui opencv_ml opencv_video opencv_features2d
opencv_calib3d opencv_objdetect opencv_contrib opencv_legacy opencv_flann
If you don't know where your libraries are (or you are just psychotic and want to make sure the path is fine), type in Terminal:
pkg-config --libs opencv
cd <DisplayImage_directory>
cd src
./DisplayImage ../images/HappyLittleFish.png
1. Go to Run -> Run Configurations.
2. Under C/C++ Application you will see the name of your executable + Debug (if not, click on C/C++ Application a couple of times). Select the name (in this case DisplayImage Debug).
3. Now, on the right side of the window, choose the Arguments tab. Write the path of the image file we want to open (relative to the workspace/DisplayImage folder). Let's use HappyLittleFish.png:
4. Click on the Apply button and then on Run. An OpenCV window should pop up with the fish image (or whatever you used).
5. Congratulations! You are ready to have fun with OpenCV using Eclipse.
V2: Using CMake+OpenCV with Eclipse (plugin CDT)
(See the getting started <http://opencv.willowgarage.com/wiki/Getting_started> section of the OpenCV Wiki)
Say you have or create a new file, helloworld.cpp in a directory called foo:
#include <cv.h>
#include <highgui.h>

int main ( int argc, char **argv )
{
    cvNamedWindow( "My Window", 1 );
    IplImage *img = cvCreateImage( cvSize( 640, 480 ), IPL_DEPTH_8U, 1 );
    CvFont font;
    double hScale    = 1.0;
    double vScale    = 1.0;
    int    lineWidth = 1;
    /* The original listing is cut off here by a page break; the remainder
       is reconstructed minimally from the variables defined above. */
    cvInitFont( &font, CV_FONT_HERSHEY_SIMPLEX | CV_FONT_ITALIC,
                hScale, vScale, 0, lineWidth );
    cvPutText( img, "Hello World!", cvPoint( 150, 240 ), &font, cvScalar( 255, 255, 0 ) );
    cvShowImage( "My Window", img );
    cvWaitKey( 0 );
    return 0;
}
1. Create a build directory, say, under foo: mkdir build. Then cd build.
2. Put a CMakeLists.txt file in foo (the source directory, so that cmake run from build can find it):
PROJECT( helloworld_proj )
FIND_PACKAGE( OpenCV REQUIRED )
ADD_EXECUTABLE( helloworld helloworld.cpp )
TARGET_LINK_LIBRARIES( helloworld ${OpenCV_LIBS} )
1. Run: cmake-gui .. and make sure you fill in where opencv was built.
2. Then click Configure and then Generate. If it's OK, quit cmake-gui.
3. Run make -j4 (the -j4 is optional; it just tells make to run 4 build jobs in parallel). Make sure it builds.
4. Start Eclipse. Put the workspace in some directory, but not in foo or foo/build.
5. Right click in the Project Explorer section. Select Import, and then open the C/C++ filter. Choose Existing Code as a Makefile Project.
6. Name your project, say helloworld. Browse to the Existing Code location foo/build (where you ran cmake-gui from). Select Linux GCC in the Toolchain for Indexer Settings and press Finish.
7. Right click in the Project Explorer section. Select Properties. Under C/C++ Build, set the build directory from something like ${workspace_loc:/helloworld} to ${workspace_loc:/helloworld}/build, since that's where you are building.
8. You can also optionally modify the Build command from make to something like make VERBOSE=1 -j4, which tells make to print every command it runs and to compile with 4 parallel jobs.
9. Done!
3. Choose a build you want to use and download it. The naming conventions used will show what kind of support they offer. For example:
vs2010 means it was built for Visual Studio 2010
win32 means that it is for 32 bit applications in the OS
gpu means that it includes support for using your GPU in order to further increase the performance of the library
If you downloaded the source files presented here, see Installation by making your own libraries from the source files.
4. Make sure you have admin rights. Start the setup and follow the wizard. Agree to the License Agreement .
5. While adding the OpenCV library to the system path is a good decision, for better control we will do this manually. Therefore, make sure you do not set this option.
6. Most of the time it is a good idea to install the source files too, as this will allow you to debug into the OpenCV library if necessary. Therefore, just follow the default settings of the wizard and finish the installation.
7. You can check the installation at the chosen path as you can see below.
8. To finalize the installation, go to the Set the OpenCV environment variable and add it to the system's path section.
An Integrated Development Environment (IDE) preferably, or just a C/C++ compiler that will actually make the binary files. Here I will use Microsoft Visual Studio. Nevertheless, you can use any other IDE that has a valid C/C++ compiler.
CMake, which is a neat tool that will make the project files (for your chosen IDE) from the OpenCV source files. It will also allow easy configuration of the OpenCV build, so you can make binary files that fit your needs exactly.
A Subversion (SVN) client to acquire the OpenCV source files. A good tool for this is TortoiseSVN. Alternatively, you can just download an archived version of the source files from the SourceForge OpenCV page.
OpenCV may come in multiple flavors. There is a core section that will work on its own. Nevertheless, there are a couple of tools and libraries made by other organizations (so-called 3rd parties) that offer services of which OpenCV may take advantage. These will improve its capabilities in many ways. In order to use any of them, you need to download and install them on your system.
The Python libraries are required to build the Python interface of OpenCV. For now, use version 2.7.x. This is also a must-have if you want to build the OpenCV documentation.
Numpy is a scientific computing package for Python. Required for the Python interface.
Intel Threading Building Blocks (TBB) is used inside OpenCV for parallel code snippets. Using this will make sure that the OpenCV library takes advantage of all the cores in your system's CPU.
Intel Integrated Performance Primitives (IPP) may be used to improve the performance of the color conversion, Haar training and DFT functions of the OpenCV library. Watch out, as this isn't a free service.
OpenCV offers a somewhat fancier and more useful graphical user interface than the default one by using the Qt framework. For a quick overview of what this has to offer, look into the documentation's highgui module, under the Qt New Functions section. Version 4.6 or later of the framework is required.
Eigen is a C++ template library for linear algebra.
The latest CUDA Toolkit will allow you to use the power lying inside your GPU. This will drastically improve performance for some of the algorithms, like the HOG descriptor. Getting more and more of our algorithms to work on the GPUs is a constant effort of the OpenCV team.
OpenEXR source files are required for the library to work with this high dynamic range (HDR) image file format.
The OpenNI Framework contains a set of open source APIs that provide support for natural interaction with devices via methods such as voice command recognition, hand gestures and body motion tracking.
MiKTeX is the best TeX implementation on the Windows OS. It is required to build the OpenCV documentation.
Sphinx is a Python documentation generator and is the tool that will actually create the OpenCV documentation. This in turn requires a couple of tools to be installed; I will cover this in depth in the How to Install Sphinx section.
Now I will describe the steps to follow for a full build (using all the above frameworks, tools and libraries). If you do not need support for some of these, you can just skip those parts.
Building the library
1. Make sure you have a working IDE with a valid compiler. In the case of Microsoft Visual Studio, just install it and make sure it starts up.
2. Install CMake. Simply follow the wizard, no need to add it to the path. The default install options are great. No
need to change them.
3. Install TortoiseSVN. Choose the 32 or 64 bit version according to the type of OS you work on. Again, follow the wizard; the default options are good. A restart of your system is required.
4. Choose a directory in your file system where you will download the OpenCV libraries. I recommend creating a new one that has a short path and no special characters in it, for example D:/OpenCV. During this tutorial I'll suppose you've done so. If you use a different directory, just change this front part of the path in my future examples. Then, right-click in the directory and choose SVN Checkout....
A window will appear where you can select from which repository you want to download source files (1) and to which directory (2):
Enter either one of the versions described above. Then push the OK button and be patient, as the repository is currently over 330MB to download. It will take some time to finish, depending on your Internet connection.
When you are done you should have an opencv and an opencv_extra directory, as seen at (3).
5. In this section I will cover installing the 3rd party libraries.
(a) Download the Python libraries and install them with the default options. You will need a couple of other Python extensions. Luckily, installing all of these can be automated with a nice tool called Setuptools. Download and install it as well.
(b) Installing Sphinx is easy once you have installed Setuptools. This contains a little application that will automatically connect to the Python package index and download the latest version of many Python scripts. Start up a command window (enter cmd into the Windows start menu and press enter) and use the CD command to navigate to your Python folder's Scripts sub-folder. Here, just pass the name of the program you want to install to easy_install.exe as an argument: add the sphinx argument.
Note: The CD navigation command works only inside a drive. For example, if you are somewhere on the C: drive you cannot use it to go to another drive (such as D:). To do so, you first need to change drive letters by simply entering the command D:. Then you can use CD to navigate to a specific folder inside the drive. Bonus tip: you can clear the screen with the CLS command.
This will also install its prerequisites, Jinja2 and Pygments.
(c) The easiest way to install Numpy is to just download its binaries from the SourceForge page. Make sure you download and install exactly the binary for your Python version (so for version 2.7).
(d) Download MiKTeX and install it. Again, just follow the wizard. At the fourth step make sure you select the Yes option for Install missing packages on-the-fly, as you can see in the image below. Again, this will take quite some time, so be patient.
(e) For the Intel Threading Building Blocks (TBB), download the source files and extract them inside a directory on your system, for example D:/OpenCV/dep. For installing the Intel Integrated Performance Primitives (IPP) the story is the same. For extracting the archives, I recommend using the 7-Zip application.
(f) In case of the Eigen library, it is again a case of download and extract to the D:/OpenCV/dep directory.
(g) Same as above with OpenEXR.
(h) For the OpenNI Framework you need to install both the development build and the PrimeSensor Module.
(i) For CUDA you again need two modules: the latest CUDA Toolkit and the CUDA Tools SDK. Download and install both of them with the complete option, using the 32 or 64 bit setup according to your OS.
(j) In case of the Qt framework, you need to build the binary files yourself (unless you use Microsoft Visual Studio 2008 with the 32 bit compiler). To do this, go to the Qt Downloads page. Download the source files (not the installers!):
Extract it into a nice, short-named directory like D:/OpenCV/dep/qt/. Then you need to build it. Start up a Visual Studio Command Prompt (2010) by using the start menu search (or navigate through the start menu: All Programs -> Microsoft Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt (2010)).
Now navigate to the extracted folder and enter it inside this console window. You should see a folder containing files like Install, Make and so on. Use the dir command to list the files in your current directory. Once inside this directory, enter the following command:
configure.exe -release -no-webkit -no-phonon -no-phonon-backend -no-script -no-scripttools
-no-qt3support -no-multimedia -no-ltcg
Completing this will take around 10-20 minutes. Then enter the next command, which will take a lot longer (it can easily take more than a full hour):
nmake
After this, set the Qt environment variables using the following command on Windows 7:
setx -m QTDIR D:/OpenCV/dep/qt/qt-everywhere-opensource-src-4.7.3
Also, add the built binary files' path to the system path by using the Path Editor. In our case this is D:/OpenCV/dep/qt/qt-everywhere-opensource-src-4.7.3/bin.
Note: If you plan on doing Qt application development, you can also install the Qt Visual Studio Add-in at this point. After this you can make and build Qt applications without using Qt Creator. Everything is nicely integrated into Visual Studio.
6. Now start CMake (cmake-gui). You may again enter it into the start menu search or get it from All Programs -> CMake 2.8 -> CMake (cmake-gui). First, select the directory with the source files of the OpenCV library (1). Then, specify a directory where you will build the binary files of OpenCV (2).
Press the Configure button to specify the compiler (and IDE) you want to use. Note that here you can choose between different compilers for making either 64 bit or 32 bit libraries. Select the one you use in your application development.
CMake will start and, based on your system variables, will try to automatically locate as many packages as possible. You can modify the packages to use for the build in the WITH -> WITH_X menu points (where X is the package abbreviation). Here is a list of the current packages you can turn on or off:
Select all the packages you want to use and press the Configure button again. For an easier overview of the build options, make sure the Grouped option under the binary directory selection is turned on. For some of the packages CMake may not find all of the required files or directories. In these cases CMake will throw an error in its output window (located at the bottom of the GUI) and set their field values to not-found constants. For example:
For these you need to manually set the queried directory or file paths. After this, press the Configure button again to see if the values you entered were accepted. Do this until all entries are good and you cannot see errors in the field/value or output parts of the GUI. Now I want to emphasize an option that you will definitely love: ENABLE -> ENABLE_SOLUTION_FOLDERS. OpenCV will create many, many projects, and turning this option on will make sure that they are categorized inside directories in the Solution Explorer. It is a must-have feature, if you ask me.
Furthermore, you need to select what parts of OpenCV you want to build.
BUILD_DOCS -> Creates two projects for building the documentation of OpenCV (there will be a separate project for building the HTML and the PDF files). Note that these aren't built together with the solution; you need to issue an explicit build project command on them to do so.
BUILD_EXAMPLES -> OpenCV comes with many example applications from which you may learn most of the library's capabilities. This will also come in handy to easily try out whether OpenCV is fully functional on your computer.
BUILD_PACKAGE -> Prior to version 2.3, with this you could build a project that creates an OpenCV installer, with which you can easily install your OpenCV flavor on other systems. For the latest source files of OpenCV it generates a new project that simply creates a zip archive with the OpenCV sources.
BUILD_SHARED_LIBS -> With this you can control whether to build DLL files (when turned on) or static library files (*.lib) otherwise.
BUILD_TESTS -> Each module of OpenCV has a test project assigned to it. Building these test projects is also a good way to try out that the modules work just as expected on your system, too.
BUILD_PERF_TESTS -> There are also performance tests for many OpenCV functions. If you're concerned about performance, build and run them.
BUILD_opencv_python -> Self-explanatory. Creates the binaries to use OpenCV from the Python language.
Press the Configure button again and ensure no errors are reported. If so, you can tell CMake to create the project files by pushing the Generate button. Go to the build directory and open the created OpenCV solution. Depending on how many of the above options you selected, the solution may contain quite a lot of projects, so be tolerant with the IDE at startup. Now you need to build both the Release and the Debug binaries. Use the drop-down menu in your IDE to switch to the other of these after building one of them.
In the end you can observe the built binary files inside the bin directory:
For the documentation, you need to explicitly issue the build command on the doc project for the PDF files and on doc_html for the HTML ones. Each of these will call Sphinx to do all the hard work. You can find the generated documentation inside Build/Doc/_html for the HTML pages and within Build/Doc for the PDF manuals.
To collect the header and binary files that you will use in your own projects into a separate directory (similarly to how the pre-built binaries ship), you need to explicitly build the Install project.
This will create an install directory inside the Build one, collecting all the built binaries in a single place. Use this only after you have built both the Release and Debug versions.
Note: To create an installer you need to install NSIS. Then just build the Package project to build the installer into the Build/_CPack_Packages/win32/NSIS folder. You can then use this to distribute OpenCV with your build settings on other systems.
To test your build, just go into the Build/bin/Debug or Build/bin/Release directory and start a couple of applications, like contours.exe. If they run, you are done. Otherwise, something definitely went awfully wrong. In this case you should contact us via our user group. If everything is okay, the contours.exe output should resemble the following image (if built with Qt support):
Note: If you use the GPU module (CUDA libraries), make sure you also upgrade to the latest drivers for your GPU. Error messages containing invalid entries in (or cannot find) nvcuda.dll are mostly caused by old video card drivers. For testing the GPU (if built), run the performance_gpu.exe sample application.
Set the OpenCV environment variable and add it to the system's path
First we set an environment variable to make our work easier. This will hold the install directory of our OpenCV library, which we use in our projects. Start up a command window and enter:
setx -m OPENCV_DIR D:\OpenCV\Build\Install
Here the directory is where you have your OpenCV binaries (installed or built). Inside it you should have folders like bin and include. The -m should be added if you wish to make the setting system-wide instead of per-user.
If you built static libraries then you are done. Otherwise, you need to add the bin folder's path to the system's path. This is because you will use the OpenCV library in the form of Dynamic-link libraries (also known as DLLs). Inside these are stored all the algorithms and information the OpenCV library contains. The operating system will load them only on demand, during runtime. However, to do this it needs to know where they are. The system's PATH contains a list of folders where DLLs can be found. Add the OpenCV library path to this and the OS will know where to look if it ever needs the OpenCV binaries. Otherwise, you would need to copy the used DLLs right beside the application's executable file (exe) for the OS to find them, which is highly unpleasant if you work on many projects. To do this, start up the Path Editor again and add the following new entry (right click in the application to bring up the menu):
%OPENCV_DIR%\bin
Save it to the registry and you are done. If you ever change the location of your install directories or want to try out your application with a different build, all you need to do is update the OPENCV_DIR variable via the setx command inside a command window.
Now you can continue reading the tutorials with the How to build applications with OpenCV inside the Microsoft
Visual Studio section. There you will find out how to use the OpenCV library in your own projects with the help of
the Microsoft Visual Studio IDE.
1.5 How to build applications with OpenCV inside the Microsoft Visual Studio
Everything I describe here will apply to the C/C++ interface of OpenCV. I start out from the assumption that you have read and successfully completed the Installation in Windows tutorial. Therefore, before you go any further, make sure you have an OpenCV directory that contains the OpenCV header files plus binaries, and that you have set the environment variables as described here.
The OpenCV libraries we distribute for the Microsoft Windows operating system come as Dynamic Linked Libraries (DLLs). These have the advantage that all the content of the library is loaded only at runtime, on demand, and that countless programs may use the same library file. This means that if you have ten applications using the OpenCV library, there is no need to ship a copy with each one of them. Of course, you need to have the OpenCV DLLs on all systems where you want to run your application.
Another approach is to use static libraries, which have a lib extension. You may build these from our source files as described in the Installation in Windows tutorial. When you use these, the library is built into your exe file, so there is no chance that the user deletes them for some reason. As a drawback, your application will be larger and will take more time to load during startup.
To build an application with OpenCV you need to do two things:
Tell the compiler how the OpenCV library looks. You do this by showing it the header files.
Tell the linker where to get the functions or data structures of OpenCV when they are needed.
If you use the lib system you must set the path where the library files are and specify in which ones to
look. During the build the linker will look into these libraries and add the definitions and implementations of all
used functions and data structures to the executable file.
If you use the DLL system you must again specify all this, however now for a different reason. This is
Microsoft OS specific: the linker needs to know where in the DLL to search for the data
structure or function at runtime. This information is stored inside lib files. Nevertheless, they aren't static
libraries. They are so-called import libraries. This is why when you build DLLs on Windows you also
end up with some lib extension libraries. The good part is that at runtime only the DLL is required.
To pass all this information to the Visual Studio IDE you can do it either globally (so all your future projects will
get this information) or locally (so only for your current project). The advantage of the global approach is that you only
need to do it once; however, it may be undesirable to clutter all your projects with information they do not all need. For
the global approach, how you do it depends on the Microsoft Visual Studio version you use: there is a 2008-and-earlier
way and a 2010 way of doing it. Inside the global section of this tutorial I'll show what the main differences are.
The base item of a project in Visual Studio is a solution. A solution may contain multiple projects. Projects are the
building blocks of an application. Every project realizes something, and you will have a main project in which you
put this project puzzle together. In the case of many simple applications (like many of these tutorials) you
do not need to break down the application into modules. In these cases your main project will be the only existing
one. Now create a new solution inside Visual Studio by going through the File -> New -> Project menu selection.
Choose Win32 Console Application as the type. Enter its name and select the path where to create it. Then in the upcoming
dialog make sure you create an empty project.
The really useful part of this is that you may create a rule package once and later just add it to your new
projects. Create it once and reuse it later. We want to create a new Property Sheet that will contain all the rules that
the compiler and linker need to know. Of course we will need a separate one for the Debug and the Release builds.
Start with the Debug one as shown in the image below:
Use, for example, the OpenCV_Debug name. Then, with the sheet selected, Right Click -> Properties. In the following I
will show how to set the OpenCV rules locally, as I find it unnecessary to pollute projects with custom rules that I do not
use. Go to the C++ group's General entry and under the Additional Include Directories add the path to your OpenCV
include directory. If you don't have a C/C++ group, you should add any .c/.cpp file to the project.
$(OPENCV_DIR)\include
When adding third party library settings it is generally a good idea to use the power of environment variables.
The full location of the OpenCV library may change on each system. Moreover, you may even end up moving
the install directory yourself for some reason. If you gave explicit paths inside your property sheet, your project
would end up not working when you pass it on to someone else who has a different OpenCV install path. Moreover,
fixing this would require manually modifying every explicit path. A more elegant solution is to use environment
variables. Anything that you put inside parentheses preceded by a dollar sign will be replaced with the
current value of that environment variable. Here the environment variable setting we made in the
previous tutorial comes into play.
Next go to Linker -> General and under the Additional Library Directories add the libs directory:
$(OPENCV_DIR)\libs
Then you need to specify the libraries the linker should look into. To do this go to Linker -> Input and
under the Additional Dependencies entry add the names of all modules you want to use:
A full list, for the currently latest trunk version would contain:
opencv_core231d.lib
opencv_imgproc231d.lib
opencv_highgui231d.lib
opencv_ml231d.lib
opencv_video231d.lib
opencv_features2d231d.lib
opencv_calib3d231d.lib
opencv_objdetect231d.lib
opencv_contrib231d.lib
opencv_legacy231d.lib
opencv_flann231d.lib
The letter d at the end just indicates that these are the libraries required for Debug builds. Now click OK to save, and do the
same with a new property sheet inside the Release rule section. Make sure to omit the d letters from the library names and
to save the property sheets with the save icon above them.
You can find your property sheets inside your project's directory. At this point it is a wise decision to back them up
into some special directory, so you always have them at hand in the future whenever you create an OpenCV project. Note
that for Visual Studio 2010 the file extension is props, while for 2008 it is vsprops.
Next time you make a new OpenCV project, just use the Add Existing Property Sheet... menu entry inside the
Property Manager to easily add the OpenCV build rules.
In Visual Studio 2010 this has been moved to a global property sheet which is automatically added to every project
you create:
The process is the same as described for the local approach. Just add the include directories by using the
OPENCV_DIR environment variable.
Test it!
Now to try this out, download our little test source code or get it from the sample code folder of the OpenCV sources.
Add it to your project and build it. Here's its content:
// Video Image PSNR and SSIM
#include <iostream>  // for standard I/O
#include <string>    // for strings
#include <iomanip>   // for controlling float print precision
#include <sstream>   // string to number conversion

#include <opencv2/imgproc/imgproc.hpp>  // Gaussian Blur
#include <opencv2/core/core.hpp>        // Basic OpenCV structures (cv::Mat, Scalar)
#include <opencv2/highgui/highgui.hpp>  // OpenCV window I/O

using namespace std;
using namespace cv;

// ...

void help()
{
    cout
        << "\n--------------------------------------------------------------------------" << endl
        << "This program shows how to read a video file with OpenCV. In addition, it tests the"
        << " similarity of two input videos first with PSNR, and for the frames below a PSNR " << endl
        << "trigger value, also with MSSIM." << endl
        << "Usage:" << endl
        << "./video-source referenceVideo useCaseTestVideo PSNR_Trigger_Value Wait_Between_Frames " << endl
        << "--------------------------------------------------------------------------" << endl
        << endl;
}

int main(int argc, char *argv[], char *window_name)
{
    help();
    if (argc != 5)
    {
        cout << "Not enough parameters" << endl;
        return -1;
    }

    stringstream conv;
    // ...
    char c;
    int frameNum = -1;          // Frame counter

    VideoCapture captRefrnc(sourceReference),
                 captUndTst(sourceCompareWith);

    if (!captRefrnc.isOpened())
    {
        cout << "Could not open reference " << sourceReference << endl;
        return -1;
    }

    if (!captUndTst.isOpened())
    {
        cout << "Could not open case test " << sourceCompareWith << endl;
        return -1;
    }

    Size refS = Size((int) captRefrnc.get(CV_CAP_PROP_FRAME_WIDTH),
                     (int) captRefrnc.get(CV_CAP_PROP_FRAME_HEIGHT)),
         uTSi = Size((int) captUndTst.get(CV_CAP_PROP_FRAME_WIDTH),
                     (int) captUndTst.get(CV_CAP_PROP_FRAME_HEIGHT));

    if (refS != uTSi)
    {
        cout << "Inputs have different size!!! Closing." << endl;
        return -1;
    }

    // Windows
    namedWindow(WIN_RF, CV_WINDOW_AUTOSIZE);
    namedWindow(WIN_UT, CV_WINDOW_AUTOSIZE);
    cvMoveWindow(WIN_RF, 400,        0);
    cvMoveWindow(WIN_UT, refS.width, 0);

    cout << "Frame resolution: Width=" << refS.width << " Height=" << refS.height
         << " of nr#: " << captRefrnc.get(CV_CAP_PROP_FRAME_COUNT) << endl;

    // ...

    while (true) // Show the image captured in the window and repeat
    {
        captRefrnc >> frameReference;
        captUndTst >> frameUnderTest;

        // ...

        ++frameNum;
        cout << "Frame:" << frameNum;

        // ...

        c = cvWaitKey(delay);
        if (c == 27) break;
    }
    // ...
    return 0;
}

// Fragment of the PSNR helper:
    Scalar s = sum(s1);
    // ...

// Fragments of the MSSIM helper:
    int d = CV_32F;
    // ...
    Mat I2_2  = I2.mul(I2);  // I2^2
    Mat I1_2  = I1.mul(I1);  // I1^2
    Mat I1_I2 = I1.mul(I2);  // I1 * I2
    // ...
    Mat mu1_2   = mu1.mul(mu1);
    Mat mu2_2   = mu2.mul(mu2);
    Mat mu1_mu2 = mu1.mul(mu2);
    // ...
    t1 = 2 * mu1_mu2 + C1;
    t2 = 2 * sigma12 + C2;
    t3 = t1.mul(t2);
    // ...
    Mat ssim_map;
    divide(t3, t1, ssim_map);  // ssim_map = t3./t1;
You can start a Visual Studio build from two places: either from inside the IDE (keyboard combination: Ctrl-F5)
or by navigating to your build directory and starting the application with a double click. The catch is that these two aren't
the same. When you start it from the IDE its current working directory is the project's directory, while otherwise it is
the folder where the application file is (so usually your build directory). Moreover, when starting from
the IDE the console window will not close once finished; it will wait for a keystroke from you.
This is important to remember for the open and save commands you code. Your resources will be
saved (and looked for at opening!) relative to your working directory, unless you give a full, explicit path
as a parameter to the I/O functions. In the code above we open this OpenCV logo. Before starting up the application,
make sure you place the image file in your current working directory. Modify the image file name inside the code to
try it out on other images too. Run it and voilà:
D:
CD OpenCV\MySolutionName\Release
MySolutionName.exe exampleImage.jpg
Here I first changed my drive (if your project isn't on the OS local drive), navigated to my project and started it with an
example image argument. While under Linux it is common to fiddle around with the console window, on
Microsoft Windows many people almost never use it. Besides, adding the same argument again and again
while you are testing your application is a somewhat cumbersome task. Luckily, in Visual Studio there is a menu
to automate all this:
Specify here the names of the inputs, and when you start your application from the Visual Studio environment you get
automatic argument passing. In the next introductory tutorial you'll see an in-depth explanation of the above source
code: Load and Display an Image.
2. Android SDK
Get the latest Android SDK from http://developer.android.com/sdk/index.html
Here is Googles install guide for SDK.
Note: If you choose the SDK packed into a Windows installer, then you should have a 32-bit JRE installed. It is not
needed for Android development, but the installer is an x86 application and requires a 32-bit Java runtime.
Note: If you are running the x64 version of Ubuntu Linux, then you need the ia32 shared libraries (for use on amd64
and ia64 systems) to be installed. You can install them with the following command:
sudo apt-get install ia32-libs
For Red Hat based systems the following command might be helpful:
See Adding SDK Components for help with installing/updating SDK components.
4. Eclipse IDE
Check the Android SDK System Requirements document for a list of Eclipse versions that are compatible with
the Android SDK. For OpenCV 2.4.x we recommend Eclipse 3.7 (Indigo) or later versions. They work well for
OpenCV under both Windows and Linux.
If you have no Eclipse installed, you can get it from the download page.
5. ADT plugin for Eclipse
These instructions are copied from the Android Developers site. Please visit that page if you have any trouble with
the ADT (Android Development Tools) plugin installation.
Assuming that you have Eclipse IDE installed, as described above, follow these steps to download and install
the ADT plugin:
(a) Start Eclipse, then select Help -> Install New Software...
(b) Click Add (in the top-right corner).
(c) In the Add Repository dialog that appears, enter ADT Plugin for the Name and the following URL for
the Location:
https://dl-ssl.google.com/android/eclipse/
(d) Click OK
Note: If you have trouble acquiring the plugin, try using http in the Location URL, instead of https
(https is preferred for security reasons).
(e) In the Available Software dialog, select the checkbox next to Developer Tools and click Next.
(f) In the next window, you'll see a list of the tools to be downloaded. Click Next.
(g) Read and accept the license agreements, then click Finish.
Note: If you get a security warning saying that the authenticity or validity of the software cant be
established, click OK.
(h) When the installation completes, restart Eclipse.
Select Use existing SDKs option, browse for Android SDK folder and click Finish.
To make sure the SDK folder is set correctly do the following step taken from Configuring the ADT Plugin
document from Google:
Select Window -> Preferences... to open the Preferences panel (Mac OS X: Eclipse -> Preferences):
For the SDK Location in the main panel, click Browse... and locate your Android SDK directory.
Click the Apply button at the bottom right corner of the main panel.
If the SDK folder is already set correctly you'll see something like this:
In the main panel select General -> Existing Projects into Workspace and press the Next button:
For the Select root directory in the main panel, locate your OpenCV package folder. (If you have created the
workspace in the package directory, then just click the Browse... button and immediately close the directory-choosing
dialog with the OK button!) Eclipse should automatically locate the OpenCV library and samples:
In some cases these errors disappear after Project -> Clean... -> Clean all -> OK.
Sometimes more advanced manipulations are needed:
The provided projects are configured for the android-11 target, which may be a missing platform in your Android
SDK. After right-clicking on any project, select Properties and then Android on the left pane. Click some
target with API Level 11 or higher:
After this manipulation Eclipse will rebuild your workspace and error icons will disappear one after another:
Once Eclipse completes the build you will have a clean workspace without any build errors:
Select the Android Application option and click the OK button. Eclipse will install and run the sample.
Here is Tutorial 1 - Add OpenCV, a sample detecting edges using the Canny algorithm from OpenCV:
2. In the application project add a reference to the OpenCV Java SDK in Project > Properties > Android > Library >
Add, and select OpenCV Library - 2.4.2.
If you want to use the OpenCV Manager-based approach you need to install the packages with the Service and the OpenCV
package for your platform. You can do it using the Google Play service or manually with the adb tool.
Here is a very basic code snippet for async init. It shows only the basic principles of library initialization. See the 15-puzzle
OpenCV sample for details.
private BaseLoaderCallback mOpenCVCallBack = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        switch (status) {
            case LoaderCallbackInterface.SUCCESS:
            {
                Log.i(TAG, "OpenCV loaded successfully");
                // Create and set View
                mView = new puzzle15View(mAppContext);
                setContentView(mView);
            } break;
            default:
            {
                super.onManagerConnected(status);
            } break;
        }
    }
};

// ...
In this case the application works with OpenCV Manager in an asynchronous fashion. The OnManagerConnected callback will be
called in the UI thread when initialization finishes. Please note that it is not allowed to use OpenCV calls or load OpenCV-dependent native libs before this callback is invoked. Load your own native libraries after OpenCV initialization.
Application development with static initialization
With this approach all OpenCV binaries are linked and put into your application package. It is designed mostly
for development purposes. This way is deprecated for production code; a release package should communicate with
OpenCV Manager and use the async initialization described above.
1. Add the OpenCV library project to your workspace. Go to File > Import > Existing project in your workspace,
press the Browse button and select the OpenCV SDK path.
2. In the application project add a reference to the OpenCV Java SDK in Project > Properties > Android > Library >
Add, and select OpenCV Library - 2.4.2.
static {
    if (!OpenCVLoader.initDebug()) {
        // Report initialization error
    }
}
If your application includes other OpenCV-dependent native libraries you need to initialize OpenCV before loading them.
static {
    if (OpenCVLoader.initDebug()) {
        System.loadLibrary("my_super_lib1");
        System.loadLibrary("my_super_lib2");
    } else {
        // Report initialization error
    }
}
What's next?
Read the Using C++ OpenCV code with Android binary package tutorial to learn how to add native OpenCV code to
your Android project.
where
the src folder contains the Java code of the application,
the res folder contains resources of the application (images, xml files describing the UI layout, etc.),
the libs folder will contain native libraries after a successful build,
and the jni folder contains the C/C++ application source code and the NDK's build scripts Android.mk and
Application.mk.
These scripts control the C++ build process (they are written in the Makefile language).
Also the root folder should contain the following files:
AndroidManifest.xml presents essential information about the application to the Android system (name of
the application, name of the main application's package, components of the application, required permissions, etc.).
It can be created using the Eclipse wizard or the android tool from the Android SDK.
project.properties is a text file containing information about the target Android platform and other build details.
This file is generated by Eclipse or can be created with the android tool from the Android SDK.
Note: Both files (AndroidManifest.xml and project.properties) are required to compile the C++ part of the
application (the NDK build system uses information from these files). If either of these files does not exist, compile the Java
part of the project before the C++ part.
Theory: Building an application with a C++ native part from the command line
Here is the standard way to compile the C++ part of an Android application:
1. Open a console and go to the root folder of the Android application
cd <root folder of the project>/
Note: Alternatively you can go to the jni folder of the Android project. But samples from the OpenCV binary package
are configured for building from the project root level (because of the relative path to the OpenCV library).
2. Run the following command
<path_where_NDK_is_placed>/ndk-build
Note: On Windows we recommend using ndk-build.cmd in the standard Windows console (cmd.exe) rather
than the similar bash script in the Cygwin shell.
3. After executing this command the C++ part of the source code is compiled.
After that the Java part of the application can be (re)compiled (using either Eclipse or the ant build tool).
Note: Some parameters can be set for the ndk-build:
Example 1: Verbose compilation
<path_where_NDK_is_placed>/ndk-build V=1
To install the CDT plugin use menu Help -> Install New Software..., then paste the CDT 8.0 repository URL
http://download.eclipse.org/tools/cdt/releases/indigo as shown in the picture below, click Add..., name it CDT and
click OK.
Important: The OpenCV for Android 2.4.2 package contains sample projects pre-configured to use the CDT Builder. It
automatically builds the JNI part via ndk-build.
1. Define the NDKROOT environment variable containing the path to the Android NDK in your system (e.g.
X:\Apps\android-ndk-r8 or /opt/android-ndk-r8).
2. The CDT Builder is configured for Windows hosts; on Linux or MacOS open the Project Properties of the projects
having a JNI part (face-detection, Tutorial 3 and Tutorial 4), select C/C++ Build in the left pane, remove .cmd,
leave "${NDKROOT}/ndk-build" in the Build command edit box and click OK.
3. Use menu Project -> Clean... to make sure that NDK build is invoked on the project build:
This is the minimal Android.mk file, which builds the C++ source code of an Android application. Note that the first
two lines and the last line are mandatory for any Android.mk.
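For reference, a minimal Android.mk of that shape looks like the sketch below. The module name and source file name here are placeholders of my own, not taken from the tutorial; the first two lines and the last line are the mandatory ones:

```makefile
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := my_module
LOCAL_SRC_FILES := my_source.cpp

include $(BUILD_SHARED_LIBRARY)
```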
Usually the file Application.mk is optional, but for a project using OpenCV, when STL and exceptions are used
in C++, it should also be written. Example of the file Application.mk:
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_ABI := armeabi-v7a
4. The line
include C:\Work\android-opencv\OpenCV-2.4.0\share\OpenCV\OpenCV.mk
should be inserted into the jni/Android.mk file right after the line
include $(CLEAR_VARS)
Several variables can be used to customize the OpenCV build; they should be set before the "include
...\OpenCV.mk" line:
OPENCV_INSTALL_MODULES:=on
Copies necessary OpenCV dynamic libs to the project libs folder in order to include them into the APK.
OPENCV_CAMERA_MODULES:=off
Skips copying the native OpenCV camera-related libs to the project libs folder.
OPENCV_LIB_TYPE:=STATIC
Perform a static link with OpenCV. By default dynamic linking is used and the project JNI lib depends on
libopencv_java.so.
5. The file Application.mk should exist and should contain the lines
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
You can get the current OpenCV snapshot from here:
Building OpenCV from source using CMake, using the command line
1. Make a symbolic link for Xcode to let the OpenCV build scripts find the compiler, header files, etc.
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
If everything's fine, after a few minutes you will get ~/<my_working_directory>/ios/opencv2.framework. You can add
this framework to your Xcode projects.
Source Code
Download the source code from here.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    if( argc != 2)
    {
        cout <<" Usage: display_image ImageToLoadAndDisplay" << endl;
        return -1;
    }

    Mat image;
    image = imread(argv[1], CV_LOAD_IMAGE_COLOR);       // Read the file

    if(! image.data )                                   // Check for invalid input
    {
        cout << "Could not open or find the image" << std::endl ;
        return -1;
    }

    namedWindow( "Display window", CV_WINDOW_AUTOSIZE );// Create a window for display.
    imshow( "Display window", image );                  // Show our image inside it.

    waitKey(0);                                         // Wait for a keystroke in the window
    return 0;
}
Explanation
In OpenCV 2 we have multiple modules. Each one takes care of a different area or approach towards image processing.
You could already observe this in the structure of the user guide of these tutorials itself. Before you use any of them
you first need to include the header files where the content of each individual module is declared.
You'll almost always end up using the:
core section, as here the basic building blocks of the library are defined
highgui module, as this contains the functions for input and output operations
// Video Image PSNR and SSIM
#include <iostream>  // for standard I/O
#include <string>    // for strings
We also include iostream to facilitate console output and input. To avoid data structure and function name
conflicts with other libraries, OpenCV has its own namespace: cv. To avoid prepending the cv:: keyword to each of
these, you can import the namespace in the whole file by using the lines:
using namespace cv;
using namespace std;
This is true for the STL library too (used for console I/O). Now, let's analyze the main function. We start by making sure
that we acquire a valid image name argument from the command line.
if( argc != 2)
{
cout <<" Usage: display_image ImageToLoadAndDisplay" << endl;
return -1;
}
Then create a Mat object that will store the data of the loaded image.
Mat image;
Now we call the imread function, which loads the image with the name specified by the first argument (argv[1]). The second
argument specifies the format in which we want the image. This may be:
CV_LOAD_IMAGE_UNCHANGED (<0) loads the image as is (including the alpha channel if present)
CV_LOAD_IMAGE_GRAYSCALE ( 0) loads the image as an intensity image
CV_LOAD_IMAGE_COLOR (>0) loads the image in the RGB format
image = imread(argv[1], CV_LOAD_IMAGE_COLOR);
Note: OpenCV offers support for the image formats Windows bitmap (bmp), portable image formats (pbm, pgm,
ppm) and Sun raster (sr, ras). With the help of plugins (you need to specify them if you build the library yourself;
nevertheless, they are present by default in the packages we ship) you may also load image formats like JPEG (jpeg, jpg, jpe),
JPEG 2000 (jp2 - codenamed in the CMake as Jasper), TIFF files (tiff, tif) and portable network graphics (png).
Furthermore, OpenEXR is also a possibility.
After checking that the image data was loaded correctly, we want to display our image, so we create an OpenCV
window using the namedWindow function. Windows are automatically managed by OpenCV once you create them. For
this you need to specify a name and how the window should handle a change of the image it contains from a size point of
view. It may be:
CV_WINDOW_AUTOSIZE is the only supported one if you do not use the Qt backend. In this case the window
size will take up the size of the image it shows. No resize permitted!
CV_WINDOW_NORMAL on Qt you may use this to allow window resize. The image will resize itself according
to the current window size. By using the | operator you also need to specify if you would like the image to keep
its aspect ratio (CV_WINDOW_KEEPRATIO) or not (CV_WINDOW_FREERATIO).
namedWindow( "Display window", CV_WINDOW_AUTOSIZE );// Create a window for display.
Finally, to update the content of the OpenCV window with a new image use the imshow function. Specify the OpenCV
window name to update and the image to use during this operation:
Because we want our window to be displayed until the user presses a key (otherwise the program would end far too
quickly), we use the waitKey function, whose only parameter is how long it should wait for user input (measured
in milliseconds). Zero means to wait forever.
waitKey(0);
Result
Compile your code and then run the executable, giving an image path as argument. If you're on Windows the
executable will of course contain an exe extension too. Of course, make sure the image file is next to your program file.
./DisplayImage HappyFish.jpg
Goals
In this tutorial you will learn how to:
Load an image using imread
Transform an image from RGB to Grayscale format by using cvtColor
Save your transformed image in a file on disk (using imwrite)
Code
Here it is:
1.10. Load, Modify, and Save an Image
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main( int argc, char** argv )
{
    char* imageName = argv[1];

    Mat image;
    image = imread( imageName, 1 );

    if( argc != 2 || !image.data )
    {
        printf( " No image data \n " );
        return -1;
    }

    Mat gray_image;
    cvtColor( image, gray_image, CV_RGB2GRAY );

    imwrite( "../../images/Gray_Image.jpg", gray_image );

    namedWindow( imageName, CV_WINDOW_AUTOSIZE );
    namedWindow( "Gray image", CV_WINDOW_AUTOSIZE );

    imshow( imageName, image );
    imshow( "Gray image", gray_image );

    waitKey(0);

    return 0;
}
Explanation
1. We begin by:
Creating a Mat object to store the image information
Loading an image using imread, located in the path given by imageName. For this example, assume you are
loading an RGB image.
2. Now we are going to convert our image from RGB to grayscale format. OpenCV has a really nice function for
this kind of transformation:
cvtColor( image, gray_image, CV_RGB2GRAY );
3. The imwrite call will save our gray_image as Gray_Image.jpg in the folder images located two levels up from my current
location.
4. Finally, let's check out the images. We create two windows and use them to show the original image as well as
the new one:
namedWindow( imageName, CV_WINDOW_AUTOSIZE );
namedWindow( "Gray image", CV_WINDOW_AUTOSIZE );
imshow( imageName, image );
imshow( "Gray image", gray_image );
5. Add the waitKey(0) function call so the program waits forever for a user key press.
Result
When you run your program you should get something like this:
And if you check in your folder (in my case images), you should have a new .jpg file named Gray_Image.jpg:
Goal
The tutorials are just as important a part of the library as the implementation of those crafty data structures and
algorithms you can find in OpenCV. Therefore, the source codes for the tutorials are part of the library. And yes, I
meant source codes. The reason for this formulation is that the tutorials are written using the Sphinx documentation
generation system, which is based on the popular documentation markup language reStructuredText (reST).
ReStructuredText is a really neat language that, by using a few simple conventions (indentation, directives) and emulating
old-school e-mail writing techniques (text only), tries to offer a simple way to create and edit documents. Sphinx
extends this with some new features and creates the resulting document in both HTML (for web) and PDF (for offline
usage) format.
Usually, an OpenCV tutorial has the following parts:
1. A source code demonstration of an OpenCV feature:
(a) One or more CPP, Python, Java or other type of files, depending on what OpenCV offers support for and
for what language you make the tutorial.
(b) Occasionally, input resource files required for running your tutorial's application.
2. A table of content entry (so people may easily find the tutorial):
(a) Adding your entry to the tutorial table of content (reST file).
(b) Adding an image file near the TOC entry.
3. The content of the tutorial itself:
(a) The reST text of the tutorial
(b) Images following the idea that A picture is worth a thousand words.
(c) For more complex demonstrations you may create a video.
As you can see, you will need at least some basic knowledge of the reST system in order to complete the task at hand
with success. However, don't worry: reST (and Sphinx) was made with simplicity in mind. It is easy to grasp its basics.
I found that the OpenAlea documentation's introduction on this subject (or the Thomas Cokelaer one) should be enough
for this. If for some directive or feature you need a more in-depth description, look it up in the official reStructuredText
help files or in the Sphinx documentation.
In our world, achieving some tasks is possible in multiple ways. However, some of the roads to take may have obvious
or hidden advantages over others. Then again, in some other cases it may come down to simple user preference.
Here I'll present how I decided to write the tutorials, based on my personal experience. If for some of them you know
a better solution and you can back it up, feel free to use that. I've nothing against it, as long as it gets the job done in
an elegant fashion.
Now the best would be if you could make the integration yourself. For this you first need to have the source code. I
recommend following the guides for your operating system on acquiring OpenCV sources: for Linux users look here
and for Windows here. You must also install python and sphinx with its dependencies in order to be able to build the
documentation.
Once you have downloaded the repository to your hard drive, you can take a look in the OpenCV directory to
make sure you have both the samples and doc folders present. Anyone may download the trunk source files from
/svn/opencv/trunk/ . Nevertheless, not everyone has upload (commit/submit) rights. This is to protect the integrity
of the library. If you plan on doing more than one tutorial, and would like to have an account with commit rights,
you should first register an account at http://code.opencv.org/ and then contact Dr. Gary Bradski at [email protected]. Otherwise, you can just send the resulting files to us via the Yahoo user group or to me at
[email protected] and I'll add them. If you have questions, suggestions or constructive criticism, I will
gladly listen. If you send it to the OpenCV group please tag its subject with a [Tutorial] entry.
// ...
int main(int argc, char *argv[])
{
    help();
    // here comes the actual source code
}
Additionally, finalize the description with a short usage guide. This way the user will know how to call your program, which leads us to the next point.
Prefer command line argument control over hard-coded values. If your program has variables that may be changed, use command line arguments for this. The tutorials can be a simple try-out ground for the user. If you offer command line control for the input image (for example), then you give users the possibility to try it out with their own images, without the need to touch the source code. In the example above you can see that the input image, channel and codec selection may all be changed from the command line. Just compile the program and run it with your own input arguments.
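As a quick illustration of this advice, here is a minimal plain C++ sketch (no OpenCV calls; the helper name parseInt is made up for this example) of turning a textual command line argument into a number, in the spirit of the samples:

```cpp
#include <sstream>
#include <string>

// Convert a command line argument given as text into an int.
// Returns false if the text is not a valid number.
// Inside main you would call this as: parseInt(argv[2], divideWith)
bool parseInt(const std::string& text, int& value)
{
    std::stringstream ss(text);
    return static_cast<bool>(ss >> value);
}
```

With this, a bad second argument can be reported with a usage message instead of silently producing garbage.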
Be as verbose as possible. There is no shame in filling the source code with comments; this way the more advanced user may figure out what's happening right from the sample code. This advice goes for the output console too: tell the user what is happening. Never leave the user hanging there, wondering: "Is this program crashing or just doing some computationally intensive task?". So, if you run a training task that may take some time, make sure you print out a message before starting it and after finishing it.
Throw out unnecessary stuff from your source code. This is a warning not to take the previous point too seriously. Balance is the key. If something can be done in fewer lines or more simply, then that is the way you should do it. Nevertheless, if for some reason you have such sections, tell the user why you have chosen to do so. Keep the amount of information as low as possible, while still getting the job done in an elegant way.
Put your sample file into the opencv/samples/cpp/tutorial_code/sectionName folder. If you write a tutorial for a language other than cpp, then change that part of the path. Before completing this you need to decide which section (module) your tutorial belongs to. Think about which module your code relies on most heavily, and that is the one to use. If the answer is more than one module, then the general section is the one to use. To find the opencv directory, open up your file system and navigate to where you downloaded our repository.
If the input resources are hard to acquire for the end user, consider adding a few of them to the opencv/samples/cpp/tutorial_code/images folder. Make sure that whoever reads your code can try it out!
.. _Table-Of-Content-Section:
Section title
-----------------------------------------------------------

Description about the section.
.. include:: ../../definitions/noContent.rst
.. raw:: latex
\pagebreak
The first line is a reference to the section title in the reST system. The section title will be a link, and you may refer to it via the :ref: directive. The include directive imports the template text from the definitions directory's noContent.rst file. Sphinx does not create the PDF from scratch: it first creates a LaTeX file and then creates the PDF from that. With the raw directive you can add commands directly to this output. Its single argument specifies which kind of output the content of the directive is added to. For the PDFs it may happen that multiple sections will overlap on a single page. To avoid this, at the end of the TOC we add a pagebreak LaTeX command, which hints to the LaTeX system that the next line should start on a new page.
If you have one of these, try to transform it into the following form:
.. _Table-Of-Content-Section:
Section title
-----------------------------------------------------------

.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=============== ======================================================
|MatBasicIma| **Title:** :ref:matTheBasicImageContainer
*Compatibility:* > OpenCV 2.0
*Author:* |Author_BernatG|
You will learn how to store images in the memory and how to print out their content to the console.
=============== =====================================================
.. |MatBasicIma| image:: images/matTheBasicImageStructure.jpg
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../mat - the basic image container/mat - the basic image container
If this is already present just add a new section of the content between the include and the raw directives (excluding
those lines). Here you'll see a new include directive. This should be present only once in a TOC tree; the included reST file contains the definitions of all the authors contributing to the OpenCV tutorials. We are a multicultural community and some of our names may contain some funky characters. However, reST only supports ANSI characters. Luckily we can specify Unicode characters with the unicode directive. Doing this in all of your tutorials would be a troublesome procedure. Therefore, the tocDefinitions file contains the definition of your author name: add it here once and afterwards just use the replace construction. For example, here's the definition for my name:
.. |Author_BernatG| unicode:: Bern U+00E1 t U+0020 G U+00E1 bor
The |Author_BernatG| is the text definition's alias. I can later use this to insert the definition, as I've done in the TOC's Author part. After the :: and a space you start the definition. If you want to add a Unicode (non-ASCII) character, leave an empty space and specify it in the format U+(Unicode code). To find the Unicode code of a character I recommend using the FileFormat website's service. Spaces are trimmed from the definition, therefore we add a space by its Unicode character (U+0020).
Everything up to the raw directive is a TOC tree entry. Here's how a TOC entry will look:
As you can see, we have an image on the left and a description box on the right. To create the two boxes we use a table with two columns and a single row. The left column holds the image and the right one the description. However, the image directive is way too long to fit in a column. Therefore, we need to use the substitution definition system and add this definition after the TOC tree. All images for the TOC tree are to be put in the images folder near its reStructuredText file. We use the point measurement system because we are also creating PDFs. PDFs are printable documents, where there is no such thing as pixels (px), just points (pt). And while space is generally no problem for web pages (we have monitors with huge resolutions), the size of the paper (A4 or letter) is constant and will be for a long time to come. Therefore, size constraints come into play more for the PDF than for the generated HTML code.
Your images should be as small as possible, while still conveying the intended information to the user. Remember that the tutorial will become part of the OpenCV source code. If you add large images (that manifest in the form of large file sizes) it will just increase the size of the repository pointlessly. If someone wants to download it later, the download time will be that much longer. Not to mention the larger PDF size for the tutorials and the longer load time for the web pages. In terms of pixels, a TOC image should not be larger than 120 x 120 pixels. Resize your images if they are larger!
Note: If you add a larger image and specify a smaller image size, Sphinx will not resize it. At build time it will add the full size image, and the resize will be done by your browser after the image is loaded. A 120 x 120 image is somewhere below 10 KB. If you add a 110 KB image, you have just pointlessly added 100 KB of extra data to transfer over the internet for every user!
Generally speaking, you shouldn't need to specify your image's size (excluding the TOC entries). If no size is given, Sphinx will use the size of the image itself (so no resize occurs). Then again, if for some reason you decide to specify a size, it should be the width of the image rather than its height. The reason for this again goes back to the PDFs: on a PDF page the height is larger than the width. In the PDF the images will not be resized; if you specify a size that does not fit on the page, then whatever does not fit will be cut off. When creating the images for your tutorial you should try to keep the image widths below 500 pixels, and calculate with around 400 points of page width when specifying image widths.
The image format depends on the content of the image. If you have a complex scene (many random-like colors) then use jpg. Otherwise, prefer using png. There are even some tools out there that optimize the size of PNG images, such as PNGGauntlet. Use them to make your images as small as possible in size.
Now, in the right column of the table, we add the information about the tutorial:
The first line is the title of the tutorial. However, there is no need to specify it explicitly: we use the reference system. We start our tutorial with a reference specification, just like in the case of this TOC entry with its .. _Table-Of-Content-Section:. If after this you have a title (pointed out by the following line of - characters), then Sphinx will replace the :ref:`Table-Of-Content-Section` directive with the title of the section in reference form (it creates a link on the web page). Here's how the definition looks in my case:
.. _matTheBasicImageContainer:
Mat - The Basic Image Container
*******************************
Note that, according to the reStructuredText rules, the * underline should be as long as your title.
Compatibility. What version of OpenCV is required to run your sample code.
Author. Use the substitution markup of reStructuredText.
A short sentence describing the essence of your tutorial.
Now before each TOC entry you need to add the three lines of:
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
The plus sign (+) enumerates tutorials as bullet points, so for every TOC entry we have a corresponding bullet point represented by the +. Sphinx is highly indentation sensitive: indentation is used to express from which point to which point a construction lasts, and un-indentation means the end of that construction. So to keep all the bullet points in the same group, the following TOC entries (until the next +) should be indented by two spaces.
Here I should also mention that you should always prefer spaces over tabs. Working with only spaces makes it possible that, if we both use monospace fonts, we will see the same thing. Tab size is text editor dependent and as such should be avoided. Sphinx translates all tabs into 8 spaces before interpreting them.
It turns out that the automatic formatting of both the HTML and the PDF (LaTeX) output messes up our tables. Therefore, we need to help them out a little. For the PDF generation we add the .. tabularcolumns:: m{100pt} m{300pt} directive. This means that the first column should be 100 points wide and middle aligned. For the HTML look we simply give the following table the toctableopencv class. Then we can modify the look of the table by modifying the CSS of our web page. The CSS definitions go into the opencv/doc/_themes/blue/static/default.css_t file.
.toctableopencv
{
width: 100% ;
table-layout: fixed;
}
However, you should not need to modify this. Just add these three lines (plus keep the two-space indentation) for all TOC entries you add. At the end of the TOC file you'll find:
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../mat - the basic image container/mat - the basic image container
The page break entry is for separating sections, and there should be only one in a TOC tree reStructuredText file. Finally, at the end of the TOC tree we need to add our tutorial to the Sphinx TOC tree system. Sphinx will generate from this the previous/next/up navigation for the HTML files and add items to the PDF according to the order here. By default this TOC tree directive generates a simple table of contents. However, we have already created a fancy looking one, so we no longer need this basic one. Therefore, we add the hidden option so that it is not shown.
The path is relative: we step back in the file system and then go into the mat - the basic image container directory for the mat - the basic image container.rst file. Leaving off the rst extension for the file is optional.
You start the tutorial by specifying a reference point with .. _matTheBasicImageContainer: and then its title. The name of the reference point should be unique over the whole documentation; therefore, do not use general names like tutorial1. Use the * character to underline the title for its full width. The subtitles of the tutorial should be underlined with the = character.
Goals. You start your tutorial by specifying what you will present. You can also enumerate the sub-tasks to be done; for this you can use a bullet point construction. There is a single configuration file for both the reference manual and the tutorial documentation. In the reference manual's argument enumerations we do not want any kind of bullet point style, so by default all bullet points at this level are set to not show the dot before the entries in the HTML. You can override this by putting the bullet points in a container. I've defined a square type bullet point view under the name enumeratevisibleitemswithsquare. The CSS style definition for this is again in the opencv/doc/_themes/blue/static/default.css_t file. Here's a quick example of using it:
.. container:: enumeratevisibleitemswithsquare
Note that you need to keep the indentation of the container directive. Directive indentations are always three (3) spaces. Here you may even give usage tips for your sample code.
Source code. Present your sample's code to the user. It's a good idea to offer a quick download link on the HTML page by using the download directive, and to point out where the user may find your source code in the file system by using the file directive:
Text :file:`samples/cpp/tutorial_code/highgui/video-write/` folder of the OpenCV source library
or :download:`text to appear in the webpage
<../../../../samples/cpp/tutorial_code/HighGUI/video-write/video-write.cpp>`.
For the download link the path is a relative one, hence the multiple back-stepping operations (..). Then you can add the source code either by using the code-block directive or the literalinclude one. In the case of the code block you will need to actually add all the source code text into your reStructuredText file and also apply the required indentation:
.. code-block:: cpp

   int i = 0;
   int j = ++i;
The only argument of the directive is the language used (here cpp). Then you add the source code into its content (meaning one empty line after the directive), keeping the indentation of the directive (3 spaces). With the literalinclude directive you do not need to add the source code of the sample; you just specify the sample and Sphinx will load it for you at build time. Here's an example usage:
.. literalinclude:: ../../../../samples/cpp/tutorial_code/HighGUI/video-write/video-write.cpp
:language: cpp
:linenos:
:tab-width: 4
:lines: 1-8, 21-22, 24-
After the directive you specify a relative path to the file to import from. It has four options: the language to use; :linenos:, which shows line numbers; :tab-width:, which specifies the tab size; and :lines:, with which you do not need to load the whole file and can show just the important lines. Use the lines option to leave out redundant information (such as the help function). Here you basically specify ranges; if the second line number of a range is missing, it means until the end of the file. The ranges specified here do not need to be in ascending order; you may even reorganize the structure of how you want to show your sample inside the tutorial.
The tutorial. Here goes the explanation of why and what you have used. Try to be short, clear, concise and yet thorough. There's no magic formula; look into a few already written tutorials and start out from there. Try to mix sample OpenCV code with your explanations. If something is hard to describe with words, do not hesitate to add a reasonably sized image to overcome this issue.
When you present OpenCV functionality, it's a good idea to give a link to the OpenCV data structure or function used. Because the OpenCV tutorials and reference manual are in separate PDF files, it is not possible to make this link work for the PDF format. Therefore, we use only web page links here, to the opencv.itseez.com website. The OpenCV functions and data structures may be used for multiple tasks. Nevertheless, we want to avoid every user creating their own reference to a commonly used function, so for this we use the global link collection of Sphinx. This is defined in the opencv/doc/conf.py configuration file. Open it and go all the way down to the last entry:
In short, here we defined a new huivideo directive that refers to an external web page link. Its usage is:
A sample function of the highgui module's image write and read page is the :huivideo:`imread() function <imread>`.
Which turns into: A sample function of the highgui module's image write and read page is the imread() function. The argument you give between the <> will be put in place of the %s in the definition above, and the link will anchor to the correct function. To find out the anchor of a given function, just open up a web page, search for the function and click on it. In the address bar it should appear like:
http://opencv.itseez.com/modules/highgui/doc/reading_and_writing_images_and_video.html#imread
Look here for the names of the directives for each page of the OpenCV reference manual. If none is present for one of them, feel free to add one.
For formulas you can add LaTeX code that will be translated into images on the web pages. You do this by using the math directive. A usage tip:
.. math::

   MSE = \frac{1}{c*i*j} \sum{(I_1-I_2)^2}
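To make the formula above concrete, here is a minimal plain C++ sketch of it (the function name and the flattened-vector representation are choices made for this illustration, not part of the tutorial's sample code): the 1/(c*i*j) factor is simply one over the total number of channel values.

```cpp
#include <cstddef>
#include <vector>

// Mean squared error between two images whose channel values have been
// flattened into vectors of equal length (c * i * j values each).
double mse(const std::vector<double>& I1, const std::vector<double>& I2)
{
    double sum = 0.0;
    for (std::size_t k = 0; k < I1.size(); ++k)
    {
        double d = I1[k] - I2[k];   // per-value difference
        sum += d * d;               // squared error
    }
    return sum / static_cast<double>(I1.size());
}
```

Identical images give an MSE of zero; the larger the value, the more the images differ.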
You may observe a runtime instance of this on the YouTube `here <https://www.youtube.com/watch?v=jpBwHxsl1_0>`_.
.. raw:: html
<div align="center">
<iframe title="Creating a video with OpenCV" width="560" height="349" src="http://www.youtube.com/embed/j
</div>
This results in the text and video: You may observe a runtime instance of this on YouTube here.
When these aren't self-explanatory, make sure to throw in a few guiding lines about what we can see and why.
Build the documentation and check for errors and warnings. In CMake, make sure you check or pass the option for building the documentation. Then simply build the docs project for the PDF file and the docs_html project for the web page. Read the output of the build and check for errors/warnings concerning what you have added. This is also the time to observe and correct any parts that do not look so good. Remember to keep our build logs clean.
Read your tutorial again and check for both programming and spelling errors. If you find any, please correct them.
CHAPTER
TWO
Title: How to scan images, lookup tables and time measurement with OpenCV
Compatibility: > OpenCV 2.0
Author: Bernát Gábor
You'll find out how to scan images (go through each of the image pixels) with OpenCV. Bonus: time measurement with OpenCV.
Title: File Input and Output using XML and YAML files
Compatibility: > OpenCV 2.0
Author: Bernát Gábor
You will see how to use the FileStorage data structure of OpenCV to write and read data in XML or YAML file format.
For example, in the above image you can see that the mirror of the car is nothing more than a matrix containing all the intensity values of the pixel points. How we get and store the pixel values may vary according to what fits our needs best, but in the end all images inside a computer may be reduced to numerical matrices and some other information describing the matrix itself. OpenCV is a computer vision library whose main focus is to process and manipulate this information to derive further information from it. Therefore, the first thing you need to learn and get accustomed to is how OpenCV stores and handles images.
Mat
OpenCV has been around since 2001. In those days the library was built around a C interface, and to store the image in memory they used a C structure entitled IplImage. This is the one you'll see in most of the older tutorials and educational materials. The problem with this is that it brings to the table all the minuses of the C language. The biggest issue is manual memory management: it builds on the assumption that the user is responsible for taking care of memory allocation and deallocation. While this is no issue for smaller programs, once your code base starts to grow larger and larger it will become more and more of a struggle to handle all this rather than focusing on actually solving your development goal.
Luckily C++ came around and introduced the concept of classes, making another road possible for the user: automatic memory management (more or less). The good news is that C++ is fully compatible with C, so no compatibility issues can arise from making the change. Therefore, OpenCV 2.0 introduced a new C++ interface that, by taking advantage of these features, offers a new way of doing things. A way in which you do not need to fiddle with memory management, making your code concise (less to write, to achieve more). The main downside of the C++ interface is that many embedded development systems at the moment support only C. Therefore, unless you are targeting such a platform, there's no point in using the old methods (unless you're a masochist programmer and you're asking for trouble).
The first thing you need to know about Mat is that you no longer need to manually allocate its memory and release it as soon as you do not need it. While doing so is still possible, most of the OpenCV functions will allocate their output data automatically. As a nice bonus, if you pass in an already existing Mat object that has already allocated the required space for the matrix, it will be reused. In other words, we use at all times only as much memory as we must to perform the task.
2.1. Mat - The Basic Image Container
Mat is basically a class with two data parts: the matrix header (containing information such as the size of the matrix, the method used for storing, the address at which the matrix is stored, and so on) and a pointer to the matrix containing the pixel values (which may take any dimensionality, depending on the method chosen for storing). The matrix header size is constant. However, the size of the matrix itself may vary from image to image, and is usually larger by orders of magnitude. Therefore, when you're passing images around in your program and at some point need to create a copy of an image, the big price you pay is for the matrix itself rather than its header. OpenCV is an image processing library. It contains a large collection of image processing functions, and to solve a computational challenge you will most of the time end up using multiple functions of the library. Due to this, passing images to functions is common practice. We should not forget that we are talking about image processing algorithms, which tend to be quite computationally heavy. The last thing we want is to further decrease the speed of your program by making unnecessary copies of potentially large images.
To tackle this issue, OpenCV uses a reference counting system. The idea is that each Mat object has its own header, but the matrix may be shared between two instances of them by having their matrix pointers point to the same address. Moreover, the copy operators will only copy the headers and the pointer to the large matrix, not the matrix itself.
Mat A, C;                                 // creates just the header parts
A = imread(argv[1], CV_LOAD_IMAGE_COLOR); // here we'll know the method used (allocate matrix)

Mat B(A);                                 // Use the copy constructor

C = A;                                    // Assignment operator
All the above objects, in the end, point to the same single data matrix. Their headers are different, but making any modification through any one of them will affect all the others too. In practice the different objects just provide different access methods to the same underlying data. Nevertheless, their header parts are different. The really interesting part is that you can create headers that refer only to a subsection of the full data. For example, to create a region of interest (ROI) in an image you just create a new header with the new boundaries:
Mat D (A, Rect(10, 10, 100, 100) ); // using a rectangle
Mat E = A(Range::all(), Range(1,3)); // using row and column boundaries
Now you may ask: if the matrix itself may belong to multiple Mat objects, who takes responsibility for cleaning it up when it's no longer needed? The short answer is: the last object that used it. For this, a reference counting mechanism is used. Whenever somebody copies a header of a Mat object, a counter is increased for the matrix. Whenever a header is cleaned, this counter is decreased. When the counter reaches zero the matrix is freed too. Because sometimes you will still want to copy the matrix itself, there exist the clone() and copyTo() functions.
Mat F = A.clone();
Mat G;
A.copyTo(G);
Now modifying F or G will not affect the matrix pointed to by A's header. What you need to remember from all this is that:
Output image allocation for OpenCV functions is automatic (unless specified otherwise).
No need to think about memory freeing with OpenCV's C++ interface.
The assignment operator and the copy constructor (ctor) copy only the header.
Use the clone() or copyTo() function to copy the underlying matrix of an image.
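The reference counting idea above can be sketched in plain C++ (this toy type only mimics the concept for illustration; it is not how OpenCV actually implements Mat internally):

```cpp
#include <memory>
#include <vector>

// A toy "Mat": a small header plus a shared pointer to the (large) pixel data.
struct ToyMat
{
    int rows, cols;                           // header information
    std::shared_ptr<std::vector<int> > data;  // shared, reference counted matrix

    ToyMat(int r, int c)
        : rows(r), cols(c),
          data(std::make_shared<std::vector<int> >(r * c, 0)) {}

    // The compiler-generated copy constructor and assignment operator copy
    // the header and the shared pointer only, bumping the reference count:
    // this mirrors Mat B(A); and C = A; sharing one matrix.

    ToyMat clone() const          // deep copy, like Mat::clone()
    {
        ToyMat m(rows, cols);
        *m.data = *data;          // copy the actual matrix contents
        return m;
    }
};
```

Copies made through the default copy operations see each other's modifications; only clone() produces an independent matrix, and the data vector is freed when the last ToyMat referring to it goes away.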
Storing methods
This is about how you store the pixel values. You can select the color space and the data type used. The color space refers to how we combine color components in order to code a given color. The simplest one is gray scale, where the colors at our disposal are black and white. The combination of these allows us to create many shades of gray. For colorful ways we have a lot more methods to choose from. However, every one of them breaks the colors down to three or four basic components, whose combination gives all the others. The most popular of these is RGB, mainly because this is also how our eyes build up colors. Its base colors are red, green and blue. To code the transparency of a color, sometimes a fourth element, alpha (A), is added.
However, there are many color systems, each with their own advantages:
RGB is the most common, as our eyes use something similar, and our display systems also compose colors using these.
The HSV and HLS spaces decompose colors into their hue, saturation and value/luminance components, which is a more natural way for us to describe colors. Using these you may, for example, dismiss the last component, making your algorithm less sensitive to the light conditions of the input image.
YCrCb is used by the popular JPEG image format.
CIE L*a*b* is a perceptually uniform color space, which comes in handy if you need to measure the distance of a given color to another color.
Now each of the building components has its own valid domain. This leads to the data type used. How we store a component defines just how fine control we have over its domain. The smallest data type possible is char, which means one byte or 8 bits. This may be unsigned (able to store values from 0 to 255) or signed (values from -128 to +127). Although in the case of three components this already gives 16 million possible colors to represent (as in the case of RGB), we may acquire even finer control by using the float (4 byte = 32 bit) or double (8 byte = 64 bit) data types for each component. Nevertheless, remember that increasing the size of a component also increases the size of the whole picture in memory.
For two dimensional and multichannel images we first define their size: row and column count wise. Then we need to specify the data type used for storing the elements and the number of channels per matrix point. To do this we have multiple definitions made according to the following convention:
CV_[The number of bits per item][Signed or Unsigned][Type Prefix]C[The channel number]
For instance, CV_8UC3 means we use unsigned char types that are 8 bits long and each pixel has three of these to form the three channels. These are predefined for up to four channels. The Scalar is a four-element short vector; specify it and you can initialize all matrix points with a custom value. However, if you need more channels you can create the type with the macro above, putting the channel number in parentheses as you can see below.
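The naming convention above can be sketched as a small helper that mirrors the idea behind OpenCV's CV_MAKETYPE macro (the depth codes and the 3-bit layout below follow OpenCV's headers as I understand them; verify against your own version before relying on the exact constants):

```cpp
// Depth codes as used by OpenCV (CV_8U = 0 ... CV_64F = 6).
enum Depth { D_8U = 0, D_8S = 1, D_16U = 2, D_16S = 3,
             D_32S = 4, D_32F = 5, D_64F = 6 };

// Sketch of how a type constant packs depth and channel count into one int:
// the low 3 bits hold the depth code, the bits above hold (channels - 1).
int makeType(int depth, int channels)
{
    return depth + ((channels - 1) << 3);
}
```

So CV_8UC3 corresponds to makeType(D_8U, 3), and the "CV_8UC(n)" form simply feeds an arbitrary channel count into the same packing.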
Use C/C++ arrays and initialize via constructor:
int sz[3] = {2,2,2};
Mat L(3,sz, CV_8UC(1), Scalar::all(0));
The example above shows how to create a matrix with more than two dimensions: specify its dimension count, then pass a pointer containing the size for each dimension, and the rest remains the same.
Create a header for an already existing IplImage pointer:
IplImage* img = cvLoadImage("greatwave.png", 1);
Mat mtx(img); // convert IplImage* -> Mat
The create() function:
M.create(4,4, CV_8UC(2));
cout << "M = "<< endl << " " << M << endl << endl;
You cannot initialize the matrix values with this construction. It will only reallocate its matrix data memory if the new size does not fit into the old one.
MATLAB style initializers: zeros(), ones(), eye(). Specify the size and data type to use:
Mat E = Mat::eye(4, 4, CV_64F);
cout << "E = " << endl << " " << E << endl << endl;
Mat O = Mat::ones(2, 2, CV_32F);
cout << "O = " << endl << " " << O << endl << endl;
Mat Z = Mat::zeros(3,3, CV_8UC1);
cout << "Z = " << endl << " " << Z << endl << endl;
Create a new header for an existing Mat object and clone() or copyTo() it.
Mat RowClone = C.row(1).clone();
cout << "RowClone = " << endl << " " << RowClone << endl << endl;
In the above examples you could see the default formatting option. Nevertheless, OpenCV allows you to format your matrix output to fit the rules of:
Default
cout << "R (default) = " << endl << R << endl << endl;
Python
cout << "R (python)  = " << endl << format(R, "python") << endl << endl;
Numpy
cout << "R (numpy)   = " << endl << format(R, "numpy") << endl << endl;
C
cout << "R (c)       = " << endl << format(R, "c") << endl << endl;
3D Point
Point3f P3f(2, 6, 7);
cout << "Point (3D) = " << P3f << endl << endl;
Vector of floats via Mat:
vector<float> v;
v.push_back(2);
v.push_back(3.01f);
cout << "Vector of floats via Mat = " << Mat(v) << endl << endl;
std::vector of points
vector<Point2f> vPoints(20);
for (size_t E = 0; E < vPoints.size(); ++E)
vPoints[E] = Point2f((float)(E * 5), (float)(E % 7));
cout << "A vector of 2D Points = " << vPoints << endl << endl;
Most of the samples here have been included in a small console application. You can download it from here, or find it in the core section of the cpp samples. You can find a quick video demonstration of it on YouTube.
2.2 How to scan images, lookup tables and time measurement with
OpenCV
Goal
Well seek answers for the following questions:
How to go through each and every pixel of an image?
How are OpenCV matrix values stored?
How to measure the performance of our algorithm?
What are lookup tables and why use them?
I_new = (I_old / 10) * 10
A simple color space reduction algorithm would consist of just passing through every pixel of an image matrix and applying this formula. It's worth noting that we do a division and a multiplication operation. These operations are bloody expensive for a system. If possible, it's worth avoiding them by using cheaper operations such as a few subtractions, additions, or, in the best case, a simple assignment. Furthermore, note that we only have a limited number of input values for the operation above: in the case of the uchar system, 256 to be exact.
Therefore, for larger images it would be wise to calculate all possible values beforehand and during the assignment
just make the assignment, by using a lookup table. Lookup tables are simple arrays (having one or more dimensions)
that for a given input value variation holds the final output value. Its strength lies that we do not need to make the
calculation, we just need to read the result.
Our test case program (and the sample presented here) will do the following: read in an image passed as a console line argument (that may be either color or gray scale - a console line argument too) and apply the reduction with the given console line argument integer value. In OpenCV, at the moment there are three major ways of going through an image pixel by pixel. To make things a little more interesting, we will scan each image using all of these methods and print out how long it took.
You can download the full source code here or look it up in the samples directory of OpenCV at the cpp tutorial code
for the core section. Its basic usage is:
how_to_scan_images imageName.jpg intValueToReduce [G]
The final argument is optional. If given, the image will be loaded in gray scale format; otherwise the BGR color format is used. The first thing is to calculate the lookup table.
Here we first use the C++ stringstream class to convert the third command line argument from text to an integer format. Then we use a simple loop and the upper formula to calculate the lookup table. No OpenCV specific stuff here.
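A standalone sketch of that computation, with the two steps split into helpers (the helper names are illustrative, not part of the tutorial's code, and the literal "10" stands in for the command-line argument):

```cpp
#include <sstream>

// Parse an integer from text, as the tutorial does with argv[2].
int parseInt(const char* text)
{
    int value = 0;
    std::stringstream s;
    s << text;
    s >> value;
    return value;
}

// Precompute the reduced value for every possible uchar input:
// table[i] = divideWith * (i / divideWith)  (integer division)
void makeReduceTable(int divideWith, unsigned char table[256])
{
    for (int i = 0; i < 256; ++i)
        table[i] = (unsigned char)(divideWith * (i / divideWith));
}
```

Calling `makeReduceTable(parseInt("10"), table)` fills the table once; afterwards every pixel assignment is a single array read.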
Another issue is how we measure time. OpenCV offers two simple functions to achieve this: getTickCount() and getTickFrequency(). The first returns the number of ticks of your system's CPU since a certain event (like since you booted your system). The second returns how many ticks your CPU emits during a second. So measuring the time elapsed between two operations, in seconds, is as easy as:
double t = (double)getTickCount();
// do something ...
t = ((double)getTickCount() - t)/getTickFrequency();
cout << "Times passed in seconds: " << t << endl;
For a single channel gray scale image the matrix looks like this:

          Column 0   Column 1   Column ...   Column m
Row 0     0,0        0,1        ...          0, m
Row 1     1,0        1,1        ...          1, m
Row ...   ...,0      ...,1      ...          ..., m
Row n     n,0        n,1        n,...        n, m
For multichannel images the columns contain as many sub columns as the number of channels. For example, in case of an RGB color system:

          Column 0                 Column 1                 Column ...   Column m
Row 0     0,0    0,0    0,0        0,1    0,1    0,1        ...          0,m    0,m    0,m
Row 1     1,0    1,0    1,0        1,1    1,1    1,1        ...          1,m    1,m    1,m
Row ...   ...,0  ...,0  ...,0      ...,1  ...,1  ...,1      ...          ...,m  ...,m  ...,m
Row n     n,0    n,0    n,0        n,1    n,1    n,1        n,...        n,m    n,m    n,m
Note that the order of the channels is inverse: BGR instead of RGB. Because in many cases the memory is large enough to store the rows in a successive fashion, the rows may follow one after another, creating a single long row. Because everything is in a single place, one item after another, this may help to speed up the scanning process. We can use the isContinuous() function to ask the matrix if this is the case. Continue on to the next section to find an example.
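To make the interleaved layout concrete, here is a small OpenCV-free sketch (the function name is illustrative) of where channel c of the pixel at (row, col) lives in such a buffer:

```cpp
#include <cstddef>

// For the interleaved layout shown above, compute the offset of channel c of
// the pixel at (row, col). 'step' is the length of one row in bytes, which for
// a non-padded 8-bit image equals cols * channels.
std::size_t pixelOffset(int row, int col, int c, std::size_t step, int channels)
{
    return (std::size_t)row * step + (std::size_t)col * channels + c;
}
```

For a 3-column BGR image with no padding (step = 9), the green value of the pixel in row 1, column 2 sits at offset 1*9 + 2*3 + 1 = 16.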
Here we basically just acquire a pointer to the start of each row and go through it until it ends. In the special case that the matrix is stored in a continuous manner, we only need to request the pointer a single time and go all the way to the end. We need to look out for color images: we have three channels, so we need to pass through three times more items in each row.
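An OpenCV-free sketch of this pointer scan (names are illustrative; `step` plays the role of Mat::step, and the continuity shortcut collapses the image into one long row):

```cpp
#include <cstddef>
typedef unsigned char uchar;

// Apply 'table' to every element of an image stored row by row with a byte
// stride 'step' that may exceed the payload width (nCols = cols * channels).
void reduceWithPointers(uchar* data, int nRows, int nCols, std::size_t step,
                        const uchar table[256])
{
    if (step == (std::size_t)nCols)   // rows stored back to back: one long row
    {
        nCols *= nRows;
        nRows = 1;
    }
    for (int i = 0; i < nRows; ++i)
    {
        uchar* p = data + (std::size_t)i * step;
        for (int j = 0; j < nCols; ++j)
            p[j] = table[p[j]];
    }
}
```

When the rows are padded (step > nCols), the outer loop re-acquires the row pointer; when they are continuous, a single pass suffices.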
There's another way of doing this. The data member of a Mat object returns the pointer to the first row, first column. If this pointer is null, you have no valid input in that object. Checking this is the simplest method to check whether your image loading was a success. In case the storage is continuous, we can use this to go through the whole data pointer. In case of a gray scale image this would look like:
uchar* p = I.data;
for( unsigned int i = 0; i < ncol*nrows; ++i, ++p)
    *p = table[*p];
You would get the same result. However, this code is a lot harder to read later on. It gets even harder if you have some more advanced technique there. Moreover, in practice I've observed that you'll get the same performance result (as most modern compilers will probably make this small optimization trick automatically for you).
In case of the efficient way, making sure that you pass through the right amount of uchar fields and skipping the gaps that may occur between the rows was your responsibility. The iterator method is considered the safer way, as it takes over these tasks from the user. All you need to do is ask for the begin and the end of the image matrix and then just increase the begin iterator until you reach the end. To acquire the value pointed to by the iterator, use the * operator (put it before the iterator).
Mat& ScanImageAndReduceIterator(Mat& I, const uchar* const table)
{
// accept only char type matrices
CV_Assert(I.depth() == CV_8U);
const int channels = I.channels();
switch(channels)
{
case 1:
{
MatIterator_<uchar> it, end;
for( it = I.begin<uchar>(), end = I.end<uchar>(); it != end; ++it)
*it = table[*it];
break;
}
case 3:
{
MatIterator_<Vec3b> it, end;
for( it = I.begin<Vec3b>(), end = I.end<Vec3b>(); it != end; ++it)
{
(*it)[0] = table[(*it)[0]];
(*it)[1] = table[(*it)[1]];
(*it)[2] = table[(*it)[2]];
}
}
}
return I;
}
In case of color images we have three uchar items per column. This may be considered a short vector of uchar items, which has been baptized in OpenCV with the Vec3b name. To access the n-th sub column we use simple operator[] access. It's important to remember that OpenCV iterators go through the columns and automatically skip to the next row. Therefore, in case of color images, if you use a simple uchar iterator you'll be able to access only the blue channel values.
Mat& ScanImageAndReduceRandomAccess(Mat& I, const uchar* const table)
{
// accept only char type matrices
CV_Assert(I.depth() == CV_8U);
const int channels = I.channels();
switch(channels)
{
case 1:
{
for( int i = 0; i < I.rows; ++i)
for( int j = 0; j < I.cols; ++j )
I.at<uchar>(i,j) = table[I.at<uchar>(i,j)];
break;
}
case 3:
{
Mat_<Vec3b> _I = I;
for( int i = 0; i < I.rows; ++i)
for( int j = 0; j < I.cols; ++j )
{
_I(i,j)[0] = table[_I(i,j)[0]];
_I(i,j)[1] = table[_I(i,j)[1]];
_I(i,j)[2] = table[_I(i,j)[2]];
}
I = _I;
break;
}
}
return I;
}
The function takes your input type and coordinates and calculates the address of the queried item on the fly, then returns a reference to it. This may be constant when you get the value and non-constant when you set the value. As a safety step, in debug mode only, a check is performed that your input coordinates are valid and do exist. If this isn't the case, you'll get a nice message about it on the standard error output stream. Compared to the efficient way, in release mode the only difference in using this is that for every element of the image you'll get a new row pointer, on which we use the C operator[] to acquire the column element.
If you need to do multiple lookups using this method for an image, it may be troublesome and time consuming to enter the type and the at keyword for each of the accesses. To solve this problem, OpenCV has a Mat_ data type. It's the same as Mat, except that at definition you need to specify the data type through which to look at the data matrix; in return, you can use operator() for fast access of items. To make things even better, this is easily convertible from and to the usual Mat data type. A sample usage of this you can see in case of the color images of the upper function. Nevertheless, it's important to note that the same operation (with the same runtime speed) could have been done with the at() function. It just saves some typing for the lazy programmer.
Finally call the function (I is our input image and J the output one):
LUT(I, lookUpTable, J);
Performance Difference
For the best result, compile the program and run it on your own machine. To better show off the differences, I've used a quite large (2560 x 1600) image. The performance figures presented here are for color images. For a more accurate value I've averaged the values obtained from a hundred calls of each function.
Method           Time
Efficient Way    79.4717 milliseconds
Iterator         83.7201 milliseconds
On-The-Fly RA    93.7878 milliseconds
LUT function     32.5759 milliseconds
We can conclude a couple of things. If possible, use the already made functions of OpenCV (instead of reinventing them). The fastest method turns out to be the LUT function. This is because the OpenCV library is multi-thread enabled via Intel Threading Building Blocks. However, if you need to write a simple image scan, prefer the pointer method. The iterator is a safer bet, however quite slower. Using the on-the-fly reference access method for a full image scan is the most costly in debug mode. In release mode it may or may not beat the iterator approach, however it surely sacrifices the safety trait of iterators for this.
Finally, you may watch a sample run of the program on the video posted on our YouTube channel.
I(i, j) = 5 * I(i, j) - [ I(i-1, j) + I(i+1, j) + I(i, j-1) + I(i, j+1) ]

                           (  0  -1   0 )
I(i, j) = M (*) I,   M  =  ( -1   5  -1 )
                           (  0  -1   0 )
The first notation is a formula, while the second is a compacted version of the first using a mask. You use the mask by putting the center of the mask matrix (in the upper case denoted by the zero-zero index) on the pixel you want to calculate, and summing up the pixel values multiplied by the overlapping mask values. It's the same thing, however in case of large matrices the latter notation is a lot easier to read.
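As a sanity check of the formula, here is a minimal OpenCV-free sketch that applies the mask at one interior pixel of a single-channel image (the helper name is illustrative):

```cpp
typedef unsigned char uchar;

// Apply the sharpening mask [0 -1 0; -1 5 -1; 0 -1 0] at interior pixel (i, j)
// of a single-channel image with 'cols' columns, clamping the result to [0, 255].
uchar sharpenAt(const uchar* img, int cols, int i, int j)
{
    int v = 5 * img[i * cols + j]
          - img[(i - 1) * cols + j] - img[(i + 1) * cols + j]
          - img[i * cols + (j - 1)] - img[i * cols + (j + 1)];
    if (v < 0)   v = 0;
    if (v > 255) v = 255;
    return (uchar)v;
}
```

A pixel that equals its neighbors is left unchanged (5x - 4x = x), while a pixel brighter than its neighbors is pushed further up, which is exactly the sharpening effect.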
Now let us see how we can make this happen by using the basic pixel access method or by using the filter2D function.
Result.create(myImage.size(),myImage.type());
const int nChannels = myImage.channels();
for(int j = 1; j < myImage.rows - 1; ++j)
{
    const uchar* previous = myImage.ptr<uchar>(j - 1);
    const uchar* current  = myImage.ptr<uchar>(j    );
    const uchar* next     = myImage.ptr<uchar>(j + 1);

    uchar* output = Result.ptr<uchar>(j);

    for(int i = nChannels; i < nChannels * (myImage.cols - 1); ++i)
    {
        *output++ = saturate_cast<uchar>(5 * current[i]
                     - current[i - nChannels] - current[i + nChannels]
                     - previous[i] - next[i]);
    }
}
At first we make sure that the input image's data is in unsigned char format. For this we use the CV_Assert function, which throws an error when the expression inside it is false.
CV_Assert(myImage.depth() == CV_8U);
We create an output image with the same size and the same type as our input. As you can see in the How the image matrix is stored in the memory? section, depending on the number of channels we may have one or more subcolumns. We will iterate through them via pointers, so the total number of elements depends on this number.
Result.create(myImage.size(),myImage.type());
const int nChannels = myImage.channels();
We'll use the plain C [] operator to access pixels. Because we need to access multiple rows at the same time, we'll acquire the pointers for each of them (a previous, a current and a next line). We need another pointer to where we're going to save the calculation. Then simply access the right items with the [] operator. For moving the output pointer ahead we simply increment it (by one byte) after each operation:
for(int j = 1; j < myImage.rows - 1; ++j)
{
    const uchar* previous = myImage.ptr<uchar>(j - 1);
    const uchar* current  = myImage.ptr<uchar>(j    );
    const uchar* next     = myImage.ptr<uchar>(j + 1);

    uchar* output = Result.ptr<uchar>(j);

    for(int i = nChannels; i < nChannels * (myImage.cols - 1); ++i)
    {
        *output++ = saturate_cast<uchar>(5 * current[i]
                     - current[i - nChannels] - current[i + nChannels]
                     - previous[i] - next[i]);
    }
}
On the borders of the image the upper notation refers to nonexistent pixel locations (such as (-1, -1)). In these points our formula is undefined. A simple solution is to not apply the mask in these points and, for example, set the border pixels to zero:

Result.row(0).setTo(Scalar(0));               // The top row
Result.row(Result.rows - 1).setTo(Scalar(0)); // The bottom row
Result.col(0).setTo(Scalar(0));               // The left column
Result.col(Result.cols - 1).setTo(Scalar(0)); // The right column

Applying such filters is so common in image processing that OpenCV has a built-in function for it. First define a Mat kernel holding the mask:

Mat kern = (Mat_<char>(3, 3) <<  0, -1,  0,
                                -1,  5, -1,
                                 0, -1,  0);
Then call the filter2D function specifying the input, the output image and the kernel to use:
filter2D(I, K, I.depth(), kern );
The function even has a fifth optional argument to specify the center of the kernel, and a sixth one for determining what to do in the regions where the operation is undefined (the borders). Using this function has the advantage that it's shorter and less verbose, and because some optimization techniques are implemented it is usually faster than the hand-coded method. For example, in my test the second one took only 13 milliseconds while the first took around 31 milliseconds. Quite some difference.
You can download this source code from here or look in the OpenCV source code libraries sample directory at
samples/cpp/tutorial_code/core/mat_mask_operations/mat_mask_operations.cpp.
Check out an instance of running the program on our YouTube channel .
Theory
Note: The explanation below belongs to the book Computer Vision: Algorithms and Applications by Richard Szeliski
From our previous tutorial, we already know a bit about pixel operators. An interesting dyadic (two-input) operator is the linear blend operator:

g(x) = (1 - α) f0(x) + α f1(x)

By varying α from 0 to 1 this operator can be used to perform a temporal cross-dissolve between two images or videos, as seen in slide shows and film productions (cool, eh?).
Code
As usual, after the not-so-lengthy explanation, lets go to the code:
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace cv;
int main( int argc, char** argv )
{
double alpha = 0.5; double beta; double input;
Mat src1, src2, dst;
/// Ask the user enter alpha
std::cout<<" Simple Linear Blender "<<std::endl;
std::cout<<"-----------------------"<<std::endl;
std::cout<<"* Enter alpha [0-1]: ";
std::cin>>input;
/// We use the alpha provided by the user if it is between 0 and 1
if( input >= 0 && input <= 1 )
  { alpha = input; }
/// Read image ( same size, same type )
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
/// Create Windows
namedWindow("Linear Blend", 1);
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
waitKey(0);
return 0;
}
Explanation
1. Since we are going to perform:
g(x) = (1 - α) f0(x) + α f1(x)
We need two source images (f0 (x) and f1 (x)). So, we load them in the usual way:
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
Warning: Since we are adding src1 and src2, they both have to be of the same size (width and height) and
type.
2. Now we need to generate the g(x) image. For this, the function addWeighted comes quite handy:
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
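Under the hood, addWeighted computes dst = saturate(src1*alpha + src2*beta + gamma) per element. A minimal OpenCV-free sketch of that per-pixel rule (the helper name is illustrative):

```cpp
typedef unsigned char uchar;

// One pixel of the linear blend dst = alpha*a + (1-alpha)*b + gamma,
// saturated to the uchar range as addWeighted does internally.
uchar blendPixel(uchar a, uchar b, double alpha, double gamma)
{
    double beta = 1.0 - alpha;
    double v = a * alpha + b * beta + gamma;
    if (v < 0.0)   v = 0.0;   // clamp below
    if (v > 255.0) v = 255.0; // clamp above
    return (uchar)(v + 0.5);  // round to nearest
}
```

With alpha = 0.5 and gamma = 0, this is a plain average of the two source pixels; the clamp guarantees the result stays a valid uchar whatever the weights.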
Result
Theory
Note: The explanation below belongs to the book Computer Vision: Algorithms and Applications by Richard Szeliski
Image Processing
A general image processing operator is a function that takes one or more input images and produces an output
image.
Image transforms can be seen as:
Point operators (pixel transforms)
Neighborhood (area-based) operators
Pixel Transforms
In this kind of image processing transform, each output pixel's value depends only on the corresponding input pixel value (plus, potentially, some globally collected information or parameters).
Examples of such operators include brightness and contrast adjustments as well as color correction and transformations.
Brightness and contrast adjustments
Two commonly used point processes are multiplication and addition with a constant:
g(x) = α · f(x) + β

The parameters α > 0 and β are often called the gain and bias parameters; sometimes these parameters are said to control contrast and brightness respectively.
You can think of f(x) as the source image pixels and g(x) as the output image pixels. Then, more conveniently
we can write the expression as:
g(i, j) = α · f(i, j) + β

where i and j indicate that the pixel is located in the i-th row and j-th column.
Code
The following code performs the operation g(i, j) = α · f(i, j) + β:
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace cv;
Explanation
1. We begin by creating the parameters that will store the values entered by the user:
double alpha;
int beta;
3. Now, since we will make some transformations to this image, we need a new Mat object to store it. Also, we
want this to have the following features:
Initial pixel values equal to zero
Same size and type as the original image
We observe that Mat::zeros returns a Matlab-style zero initializer based on image.size() and image.type().
4. Now, to perform the operation g(i, j) = α · f(i, j) + β we will access each pixel in image. Since we are operating with BGR images, we will have three values per pixel (B, G and R), so we will also access them separately. Here is the piece of code:
for( int y = 0; y < image.rows; y++ )
{ for( int x = 0; x < image.cols; x++ )
{ for( int c = 0; c < 3; c++ )
{ new_image.at<Vec3b>(y,x)[c] =
saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta ); }
}
}
Note: Instead of using the for loops to access each pixel, we could have simply used this command:
image.convertTo(new_image, -1, alpha, beta);
where convertTo would effectively perform new_image = alpha*image + beta. However, we wanted to show you how to access each pixel. In any case, both methods give the same result.
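What saturate_cast<uchar> does to the α·f + β result can be sketched without OpenCV (the helper name is illustrative):

```cpp
typedef unsigned char uchar;

// The per-channel transform g = alpha*f + beta, clamped to [0, 255]
// as saturate_cast<uchar> would do.
uchar gainBias(uchar f, double alpha, int beta)
{
    double v = alpha * f + beta;
    if (v < 0.0)   v = 0.0;
    if (v > 255.0) v = 255.0;
    return (uchar)(v + 0.5);
}
```

For example, with α = 2.2 and β = 50, an input value of 100 maps to 270 before clamping and is saturated to 255, which is why bright regions wash out at high gain.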
Result
Running our code with α = 2.2 and β = 50:

$ ./BasicLinearTransforms lena.jpg
Basic Linear Transforms
-------------------------
* Enter the alpha value [1.0-3.0]: 2.2
* Enter the beta value [0-100]: 50
We get this:
OpenCV Theory
For this tutorial, we will heavily use two structures: Point and Scalar:
Point
It represents a 2D point, specified by its image coordinates x and y. We can define it as:
Point pt;
pt.x = 10;
pt.y = 8;
or
Point pt = Point(10, 8);
Scalar
Represents a 4-element vector. The type Scalar is widely used in OpenCV for passing pixel values.
In this tutorial, we will use it extensively to represent BGR color values (3 parameters). It is not necessary to define the last argument if it is not going to be used.
Let's see an example: if we are asked for a color argument and we give:
Scalar( a, b, c )
We would be defining a BGR color such that: Blue = a, Green = b and Red = c.
Code
This code is in your OpenCV sample folder. Otherwise you can grab it from here
Explanation
1. Since we plan to draw two examples (an atom and a rook), we have to create two images and two windows to display them.
/// Windows names
char atom_window[] = "Drawing 1: Atom";
char rook_window[] = "Drawing 2: Rook";
/// Create black empty images
Mat atom_image = Mat::zeros( w, w, CV_8UC3 );
Mat rook_image = Mat::zeros( w, w, CV_8UC3 );
2. We created functions to draw different geometric shapes. For instance, to draw the atom we used MyEllipse and
MyFilledCircle:
/// 1. Draw a simple atom:
/// 1.a. Creating ellipses
MyEllipse( atom_image, 90 );
MyEllipse( atom_image, 0 );
MyEllipse( atom_image, 45 );
MyEllipse( atom_image, -45 );
/// 1.b. Creating circles
MyFilledCircle( atom_image, Point( w/2.0, w/2.0) );
/// 2.b. Creating rectangles
rectangle( rook_image,
           Point( 0, 7*w/8.0 ),
           Point( w, w ),
           Scalar( 0, 255, 255 ),
           -1,
           8 );
/// 2.c. Create a few lines
MyLine( rook_image, Point( 0, 15*w/16 ), Point( w, 15*w/16 ) );
MyLine( rook_image, Point( w/4, 7*w/8 ), Point( w/4, w ) );
MyLine( rook_image, Point( w/2, 7*w/8 ), Point( w/2, w ) );
MyLine( rook_image, Point( 3*w/4, 7*w/8 ), Point( 3*w/4, w ) );
As we can see, MyLine just calls the function line, which does the following:
Draws a line from Point start to Point end
The line is displayed in the image img
The line color is defined by Scalar( 0, 0, 0 ), which is the BGR value corresponding to black
The line thickness is set to thickness (in this case 2)
The line is an 8-connected one (lineType = 8)
MyEllipse
void MyEllipse( Mat img, double angle )
{
int thickness = 2;
int lineType = 8;
ellipse( img,
Point( w/2.0, w/2.0 ),
Size( w/4.0, w/16.0 ),
angle,
0,
360,
Scalar( 255, 0, 0 ),
thickness,
lineType );
}
From the code above, we can observe that the function ellipse draws an ellipse such that:
The ellipse is displayed in the image img
The ellipse center is located in the point (w/2.0, w/2.0) and is enclosed in a box of size (w/4.0, w/16.0)
The ellipse is rotated angle degrees
The ellipse extends an arc between 0 and 360 degrees
The color of the figure will be Scalar( 255, 0, 0 ), which means blue in BGR.
The ellipse's thickness is 2.
MyFilledCircle
void MyFilledCircle( Mat img, Point center )
{
int thickness = -1;
int lineType = 8;
circle( img,
center,
w/32.0,
Scalar( 0, 0, 255 ),
thickness,
lineType );
}
Similar to the ellipse function, we can observe that circle receives as arguments:
The image where the circle will be displayed (img)
The center of the circle denoted as the Point center
The radius of the circle: w/32.0
The color of the circle: Scalar(0, 0, 255) which means Red in BGR
Since thickness = -1, the circle will be drawn filled.
MyPolygon
void MyPolygon( Mat img )
{
int lineType = 8;
/** Create some points */
Point rook_points[1][20];
rook_points[0][0] = Point( w/4.0, 7*w/8.0 );
rook_points[0][1] = Point( 3*w/4.0, 7*w/8.0 );
rook_points[0][2] = Point( 3*w/4.0, 13*w/16.0 );
rook_points[0][3] = Point( 11*w/16.0, 13*w/16.0 );
rook_points[0][4] = Point( 19*w/32.0, 3*w/8.0 );
rook_points[0][5] = Point( 3*w/4.0, 3*w/8.0 );
rook_points[0][6] = Point( 3*w/4.0, w/8.0 );
rook_points[0][7] = Point( 26*w/40.0, w/8.0 );
rook_points[0][8] = Point( 26*w/40.0, w/4.0 );
rook_points[0][9] = Point( 22*w/40.0, w/4.0 );
rook_points[0][10] = Point( 22*w/40.0, w/8.0 );
rook_points[0][11] = Point( 18*w/40.0, w/8.0 );
rook_points[0][12] = Point( 18*w/40.0, w/4.0 );
rook_points[0][13] = Point( 14*w/40.0, w/4.0 );
rook_points[0][14] = Point( 14*w/40.0, w/8.0 );
rook_points[0][15] = Point( w/4.0, w/8.0 );
rook_points[0][16] = Point( w/4.0, 3*w/8.0 );
rook_points[0][17] = Point( 13*w/32.0, 3*w/8.0 );
rook_points[0][18] = Point( 5*w/16.0, 13*w/16.0 );
rook_points[0][19] = Point( w/4.0, 13*w/16.0) ;
const Point* ppt[1] = { rook_points[0] };
int npt[] = { 20 };
fillPoly( img,
ppt,
npt,
1,
Scalar( 255, 255, 255 ),
lineType );
}
Finally, we have the rectangle function (we did not create a special function for this one). We note that:
The rectangle will be drawn on rook_image
Two opposite vertices of the rectangle are defined by Point( 0, 7*w/8.0 ) and Point( w, w )
The color of the rectangle is given by Scalar( 0, 255, 255 ), which is the BGR value for yellow
Since the thickness value is given by -1, the rectangle will be filled.
Result
Compiling and running your program should give you a result like this:
Code
In the previous tutorial (Basic Drawing) we drew diverse geometric figures, giving as input parameters such as
coordinates (in the form of Points), color, thickness, etc. You might have noticed that we gave specific values
for these arguments.
In this tutorial, we intend to use random values for the drawing parameters. Also, we intend to populate our image with a large number of geometric figures. Since we will be initializing them in a random fashion, this process will be automatic, done using loops.
This code is in your OpenCV sample folder. Otherwise you can grab it from here .
Explanation
1. Let's start by checking out the main function. We observe that the first thing we do is create a Random Number Generator object (RNG):
RNG rng( 0xFFFFFFFF );
RNG implements a random number generator. In this example, rng is a RNG element initialized with the value
0xFFFFFFFF
2. Then we create a matrix initialized to zeros (which means that it will appear as black), specifying its height,
width and its type:
3. Then we proceed to draw crazy stuff. After taking a look at the code, you can see that it is mainly divided into 8 sections, defined as functions:
/// Now, lets draw some lines
c = Drawing_Random_Lines(image, window_name, rng);
if( c != 0 ) return 0;
/// Go on drawing, this time nice rectangles
c = Drawing_Random_Rectangles(image, window_name, rng);
if( c != 0 ) return 0;
/// Draw some ellipses
c = Drawing_Random_Ellipses( image, window_name, rng );
if( c != 0 ) return 0;
/// Now some polylines
c = Drawing_Random_Polylines( image, window_name, rng );
if( c != 0 ) return 0;
/// Draw filled polygons
c = Drawing_Random_Filled_Polygons( image, window_name, rng );
if( c != 0 ) return 0;
/// Draw circles
c = Drawing_Random_Circles( image, window_name, rng );
if( c != 0 ) return 0;
/// Display text in random positions
c = Displaying_Random_Text( image, window_name, rng );
if( c != 0 ) return 0;
/// Displaying the big end!
c = Displaying_Big_End( image, window_name, rng );
All of these functions follow the same pattern, so we will analyze only a couple of them, since the same explanation applies for all.
4. Checking out the function Drawing_Random_Lines:
int Drawing_Random_Lines( Mat image, char* window_name, RNG rng )
{
  int lineType = 8;
  Point pt1, pt2;

  for( int i = 0; i < NUMBER; i++ )
  {
    pt1.x = rng.uniform( x_1, x_2 );
    pt1.y = rng.uniform( y_1, y_2 );
    pt2.x = rng.uniform( x_1, x_2 );
    pt2.y = rng.uniform( y_1, y_2 );

    line( image, pt1, pt2, randomColor(rng), rng.uniform(1, 10), lineType );
    imshow( window_name, image );
    if( waitKey( DELAY ) >= 0 )
      { return -1; }
  }
  return 0;
}
We know that rng is a random number generator object. In the code above we are calling rng.uniform(a, b). This generates a uniformly distributed random number between the values a and b (inclusive of a, exclusive of b).
From the explanation above, we deduce that the extremes pt1 and pt2 will be random values, so the lines' positions will be quite unpredictable, giving a nice visual effect (check out the Result section below).
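The [a, b) semantics of rng.uniform(a, b) can be sketched with the standard library (cv::RNG itself is not used here; the helper name is illustrative):

```cpp
#include <random>

// Draw an integer uniformly from [a, b): inclusive of a, exclusive of b,
// mirroring the semantics of cv::RNG::uniform for integer arguments.
int uniformInt(std::mt19937& gen, int a, int b)
{
    std::uniform_int_distribution<int> dist(a, b - 1);  // closed range [a, b-1]
    return dist(gen);
}
```

The standard distribution takes a closed range, so b - 1 is passed as the upper bound to reproduce the half-open interval.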
As another observation, we notice that in the line arguments, for the color input we enter:
randomColor(rng)
As we can see, the return value is a Scalar with 3 randomly initialized values, which are used as the blue, green and red parameters for the line color. Hence, the color of the lines will be random too!
5. The explanation above applies for the other functions generating circles, ellipses, polygones, etc. The parameters
such as center and vertices are also generated randomly.
6. Before finishing, we should also take a look at the functions Displaying_Random_Text and Displaying_Big_End, since they both have a few interesting features:
7. Displaying_Random_Text:
int Displaying_Random_Text( Mat image, char* window_name, RNG rng )
{
  int lineType = 8;

  for ( int i = 1; i < NUMBER; i++ )
  {
    Point org;
    org.x = rng.uniform(x_1, x_2);
    org.y = rng.uniform(y_1, y_2);

    putText( image, "Testing text rendering", org, rng.uniform(0,8),
             rng.uniform(0,100)*0.05+0.1, randomColor(rng), rng.uniform(1, 10), lineType);
  }
  return 0;
}
Besides the function getTextSize (which gets the size of the argument text), the new operation we can observe is inside the for loop:
image2 = image - Scalar::all(i)
So, image2 is the subtraction of image and Scalar::all(i). In fact, what happens here is that every pixel of image2 will be the result of subtracting the value of i from every pixel of image (remember that for each pixel we are considering three values, such as R, G and B, so each of them will be affected).
Also remember that the subtraction operation always performs internally a saturate operation, which means that the result obtained will always be inside the allowed range (no negative values, and between 0 and 255 for our example).
Result
As you just saw in the Code section, the program will sequentially execute diverse drawing functions, which will
produce:
1. First a random set of NUMBER lines will appear on screen, as can be seen in this screenshot:
4. Now, polylines with 3 segments will appear on screen, again in random configurations.
7. Near the end, the text Testing Text Rendering will appear in a variety of fonts, sizes, colors and positions.
8. And the big end (which by the way expresses a big truth too):
How to do it in OpenCV?
Usage of functions such as: copyMakeBorder(), merge(), dft(), getOptimalDFTSize(), log() and normalize() .
Source code
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;

int main(int argc, char ** argv)
{
    const char* filename = argc >= 2 ? argv[1] : "lena.jpg";

    Mat I = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
    if( I.empty())
        return -1;

    Mat padded;                            // expand input image to optimal size
    int m = getOptimalDFTSize( I.rows );
    int n = getOptimalDFTSize( I.cols );   // on the border add zero values
    copyMakeBorder(I, padded, 0, m - I.rows, 0, n - I.cols, BORDER_CONSTANT, Scalar::all(0));

    Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
    Mat complexI;
    merge(planes, 2, complexI);            // add to the expanded image another plane with zeros

    dft(complexI, complexI);               // this way the result may fit in the source matrix

    // compute the magnitude and switch to logarithmic scale
    // => log(1 + sqrt(Re(DFT(I))^2 + Im(DFT(I))^2))
    split(complexI, planes);               // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
    magnitude(planes[0], planes[1], planes[0]); // planes[0] = magnitude
    Mat magI = planes[0];

    magI += Scalar::all(1);                // switch to logarithmic scale
    log(magI, magI);

    // crop the spectrum, if it has an odd number of rows or columns
    magI = magI(Rect(0, 0, magI.cols & -2, magI.rows & -2));

    // rearrange the quadrants of the Fourier image so that the origin is at the image center
    int cx = magI.cols/2;
    int cy = magI.rows/2;

    Mat q0(magI, Rect(0, 0, cx, cy));   // Top-Left - Create a ROI per quadrant
    Mat q1(magI, Rect(cx, 0, cx, cy));  // Top-Right
    Mat q2(magI, Rect(0, cy, cx, cy));  // Bottom-Left
    Mat q3(magI, Rect(cx, cy, cx, cy)); // Bottom-Right

    Mat tmp;                            // swap quadrants (Top-Left with Bottom-Right)
    q0.copyTo(tmp);
    q3.copyTo(q0);
    tmp.copyTo(q3);

    q1.copyTo(tmp);                     // swap quadrants (Top-Right with Bottom-Left)
    q2.copyTo(q1);
    tmp.copyTo(q2);

    normalize(magI, magI, 0, 1, CV_MINMAX); // Transform the matrix with float values into a
                                            // viewable image form (float between values 0 and 1).

    imshow("Input Image"       , I   );
    imshow("spectrum magnitude", magI);
    waitKey();

    return 0;
}
Explanation
The Fourier Transform will decompose an image into its sine and cosine components. In other words, it will transform an image from its spatial domain to its frequency domain. The idea is that any function may be approximated exactly by a sum of infinitely many sine and cosine functions. The Fourier Transform is a way to do this. Mathematically, the Fourier transform of a two dimensional image is:
F(k, l) = Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} f(i, j) · e^{-i 2π (k·i/N + l·j/N)}
The Fourier domain's range is much larger than its spatial counterpart. Therefore, we usually store these values at least in a float format. Therefore, we'll convert our input image to this type and expand it with another channel to hold the complex values:
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexI;
merge(planes, 2, complexI);
// Add to the expanded another plane with zeros
3. Make the Discrete Fourier Transform. An in-place calculation is possible (same input as output):
dft(complexI, complexI);
4. Transform the real and complex values to magnitude. A complex number has a real (Re) and a complex
(imaginary - Im) part. The results of a DFT are complex numbers. The magnitude of a DFT is:
M = sqrt( Re(DFT(I))^2 + Im(DFT(I))^2 )
Translated to OpenCV code:
split(complexI, planes);
// planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
magnitude(planes[0], planes[1], planes[0]);// planes[0] = magnitude
Mat magI = planes[0];
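Element-wise, cv::magnitude computes sqrt(re^2 + im^2); a one-line OpenCV-free sketch of that rule (the helper name is illustrative):

```cpp
#include <cmath>

// Magnitude of one DFT output value, as cv::magnitude computes element-wise.
double dftMagnitude(double re, double im)
{
    return std::sqrt(re * re + im * im);
}
```

For instance, a coefficient with real part 3 and imaginary part 4 has magnitude 5.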
5. Switch to a logarithmic scale. It turns out that the dynamic range of the Fourier coefficients is too large to be displayed on the screen. We have some small and some high changing values that we can't observe like this. Therefore, the high values will all turn out as white points, while the small ones as black. To use the gray scale values for visualization, we can transform our linear scale to a logarithmic one:
M1 = log (1 + M)
Translated to OpenCV code:
magI += Scalar::all(1);
log(magI, magI);
6. Crop and rearrange. Remember, that at the first step, we expanded the image? Well, its time to throw away
the newly introduced values. For visualization purposes we may also rearrange the quadrants of the result, so
that the origin (zero, zero) corresponds with the image center.
magI = magI(Rect(0, 0, magI.cols & -2, magI.rows & -2));
int cx = magI.cols/2;
int cy = magI.rows/2;
Mat q0(magI, Rect(0, 0, cx, cy));   // Top-Left - Create a ROI per quadrant
Mat q1(magI, Rect(cx, 0, cx, cy));  // Top-Right
Mat q2(magI, Rect(0, cy, cx, cy));  // Bottom-Left
Mat q3(magI, Rect(cx, cy, cx, cy)); // Bottom-Right
Mat tmp;
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp);
q2.copyTo(q1);
tmp.copyTo(q2);
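The same quadrant swap can be sketched without OpenCV on a flat N x N array (N even; the function name is illustrative):

```cpp
#include <algorithm>

// Swap the quadrants of an N x N array (N even) so that the (0,0) origin
// moves to the center - the same rearrangement done above with Mat ROIs.
void fftShift(float* m, int N)
{
    int h = N / 2;
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < h; ++j)
        {
            std::swap(m[i * N + j],       m[(i + h) * N + (j + h)]); // q0 <-> q3
            std::swap(m[i * N + (j + h)], m[(i + h) * N + j]);       // q1 <-> q2
        }
}
```

Each element of the top-left quadrant trades places with its bottom-right counterpart, and top-right with bottom-left, which is exactly what the three copyTo swaps above achieve.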
7. Normalize. This is done again for visualization purposes. We now have the magnitudes, however these are still out of our image display range of zero to one. We normalize our values to this range using the normalize() function.
normalize(magI, magI, 0, 1, CV_MINMAX); // Transform the matrix with float values into a
// viewable image form (float between values 0 and 1).
Result
An application idea would be to determine the geometric orientation present in the image. For example, let us find out whether a text is horizontal or not. Looking at some text, you'll notice that the text lines sort of form horizontal lines and the letters form sort of vertical lines. These two main components of a text snippet may also be seen in its Fourier transform. Let us use this horizontal and this rotated image of a text.
In case of the horizontal text:
You can see that the most influential components of the frequency domain (the brightest dots on the magnitude image) follow the geometric rotation of objects in the image. From this we may calculate the offset and perform an image rotation to correct eventual misalignments.
2.9 File Input and Output using XML and YAML files
Goal
You'll find answers to the following questions:
How to print and read text entries to a file in OpenCV using YAML or XML files?
How to do the same for OpenCV data structures?
How to do this for your data structures?
Usage of OpenCV data structures such as FileStorage, FileNode or FileNodeIterator.
Source code
#include <opencv2/core/core.hpp>
#include <iostream>
#include <string>

using namespace cv;
using namespace std;

static void help(char** av); // prints usage information (definition not shown in this excerpt)

class MyData
{
public:
    MyData() : A(0), X(0), id()
    {}
    explicit MyData(int) : A(97), X(CV_PI), id("mydata1234") // explicit to avoid implicit conversion
    {}
    void write(FileStorage& fs) const                        // Write serialization for this class
    {
        fs << "{" << "A" << A << "X" << X << "id" << id << "}";
    }
    void read(const FileNode& node)                          // Read serialization for this class
    {
        A = (int)node["A"];
        X = (double)node["X"];
        id = (string)node["id"];
    }
public:   // Data Members
    int A;
    double X;
    string id;
};

// These write and read functions must be defined for the serialization in FileStorage to work
void write(FileStorage& fs, const std::string&, const MyData& x)
{
    x.write(fs);
}
void read(const FileNode& node, MyData& x, const MyData& default_value = MyData())
{
    if(node.empty())
        x = default_value;
    else
        x.read(node);
}

int main(int ac, char** av)
{
    if (ac != 2)
    {
        help(av);
        return 1;
    }

    string filename = av[1];
    { // write
        Mat R = Mat_<uchar>::eye(3, 3),
            T = Mat_<double>::zeros(3, 1);
        MyData m(1);

        FileStorage fs(filename, FileStorage::WRITE);

        fs << "iterationNr" << 100;

        fs << "strings" << "[";                              // text - string sequence
        fs << "image1.jpg" << "Awesomeness" << "baboon.jpg";
        fs << "]";                                           // close sequence

        fs << "Mapping";                                     // text - mapping
        fs << "{" << "One" << 1;
        fs <<        "Two" << 2 << "}";

        fs << "R" << R;                                      // cv::Mat
        fs << "T" << T;

        fs << "MyData" << m;                                 // your own data structures

        fs.release();                                        // explicit close
        cout << "Write Done." << endl;
    }

    { // read
        cout << endl << "Reading: " << endl;
        FileStorage fs;
        fs.open(filename, FileStorage::READ);

        if (!fs.isOpened())
        {
            cerr << "Failed to open " << filename << endl;
            help(av);
            return 1;
        }

        int itNr;
        //fs["iterationNr"] >> itNr;
        itNr = (int) fs["iterationNr"];
        cout << itNr;

        FileNode n = fs["strings"];                          // Read string sequence - Get node
        if (n.type() != FileNode::SEQ)
        {
            cerr << "strings is not a sequence! FAIL" << endl;
            return 1;
        }

        FileNodeIterator it = n.begin(), it_end = n.end();   // Go through the node
        for (; it != it_end; ++it)
            cout << (string)*it << endl;

        n = fs["Mapping"];                                   // Read mappings from a sequence
        cout << "Two " << (int)(n["Two"]) << "; ";
        cout << "One " << (int)(n["One"]) << endl << endl;

        MyData m;
        Mat R, T;

        fs["R"] >> R;                                        // Read cv::Mat
        fs["T"] >> T;
        fs["MyData"] >> m;                                   // Read your own structure

        // ... (printing of R, T and MyData omitted in this excerpt)
    }

    return 0;
}
Explanation
Here we talk only about XML and YAML file inputs. Your output (and its respective input) file may have only one of
these extensions and the structure coming from this. There are two kinds of data structures you may serialize: mappings
(like the STL map) and element sequences (like the STL vector). The difference between these is that in a map every
element has a unique name through which you may access it. For sequences you need to go through them to query a
specific item.
1. XML/YAML File Open and Close. Before you write any content to such a file you need to open it, and at the
end close it. The XML/YAML data structure in OpenCV is FileStorage. To specify the file to which this structure
binds on your hard drive you can use either its constructor or its open() function:
string filename = "I.xml";
FileStorage fs(filename, FileStorage::WRITE);
//...
fs.open(filename, FileStorage::READ);
Whichever one you use, the second argument is a constant specifying the type of operations you'll be able to
perform on the file: WRITE, READ or APPEND. The extension specified in the file name also determines the output
format that will be used. The output may even be compressed if you specify an extension such as .xml.gz.
The file automatically closes when the FileStorage object is destroyed. However, you may explicitly call for
this by using the release function:
fs.release();
// explicit close
2. Input and Output of text and numbers. The data structure uses the same << output operator as the STL
library. For outputting any type of data we first need to specify its name, which we do by simply printing
it out. For basic types you may follow this with the value:
fs << "iterationNr" << 100;
Reading in is a simple addressing (via the [] operator) and casting operation or a read via the >> operator :
int itNr;
fs["iterationNr"] >> itNr;
itNr = (int) fs["iterationNr"];
3. Input/Output of OpenCV Data structures. These behave exactly like the basic C++ types:
Mat R = Mat_<uchar>::eye(3, 3),
    T = Mat_<double>::zeros(3, 1);

fs << "R" << R;                                      // Write cv::Mat
fs << "T" << T;

fs["R"] >> R;                                        // Read cv::Mat
fs["T"] >> T;
4. Input/Output of vectors (arrays) and associative maps. As I mentioned beforehand, we can output maps and
sequences (array, vector) too. Again, we first print the name of the variable and then we have to specify if our
output is either a sequence or a map.
For sequences, print the [ character before the first element and the ] character after the last one:
fs << "strings" << "[";
// text - string sequence
fs << "image1.jpg" << "Awesomeness" << "baboon.jpg";
fs << "]";
// close sequence
For maps the drill is the same however now we use the { and } delimiter characters:
fs << "Mapping";
fs << "{" << "One" << 1;
fs <<
"Two" << 2 << "}";
// text - mapping
To read from these we use the FileNode and the FileNodeIterator data structures. The [] operator of the FileStorage class returns a FileNode data type. If the node is sequential we can use the FileNodeIterator to iterate through
the items:
FileNode n = fs["strings"];
// Read string sequence - Get node
if (n.type() != FileNode::SEQ)
{
cerr << "strings is not a sequence! FAIL" << endl;
return 1;
}
FileNodeIterator it = n.begin(), it_end = n.end(); // Go through the node
for (; it != it_end; ++it)
cout << (string)*it << endl;
For maps you can use the [] operator again to access the given item (or the >> operator too):
n = fs["Mapping"];
// Read mappings from a sequence
cout << "Two " << (int)(n["Two"]) << "; ";
cout << "One " << (int)(n["One"]) << endl << endl;
5. Read and write your own data structures. Suppose you have a data structure such as:
class MyData
{
public:
MyData() : A(0), X(0), id() {}
public:
// Data Members
int A;
double X;
string id;
};
It's possible to serialize this through the OpenCV I/O XML/YAML interface (just as in the case of the OpenCV data
structures) by adding a read and a write function inside and outside of your class. For the inside part:
void write(FileStorage& fs) const
//Write serialization for this class
{
fs << "{" << "A" << A << "X" << X << "id" << id << "}";
}
void read(const FileNode& node)
{
A = (int)node["A"];
X = (double)node["X"];
id = (string)node["id"];
}
Then you need to add the following function definitions outside the class:
void write(FileStorage& fs, const std::string&, const MyData& x)
{
x.write(fs);
}
void read(const FileNode& node, MyData& x, const MyData& default_value = MyData())
{
if(node.empty())
x = default_value;
else
x.read(node);
}
Here you can observe that in the read section we defined what happens if the user tries to read a non-existing
node. In this case we just return the default initialization value; however, a more verbose solution would be to
return, for instance, a minus one value for an object ID.
Once you have added these four functions, use the << operator for write and the >> operator for read:
MyData m(1);
fs << "MyData" << m;
fs["MyData"] >> m;
Result
Well, mostly we just print out the defined numbers. On your console screen you could see:
Write Done.
Reading:
100image1.jpg
Awesomeness
baboon.jpg
Two 2; One 1
R = [1, 0, 0;
  0, 1, 0;
  0, 0, 1]
T = [0; 0; 0]
MyData =
{ id = mydata1234, X = 3.14159, A = 97}
Attempt to read NonExisting (should initialize the data structure with its default).
NonExisting =
{ id = , X = 0, A = 0}
Tip: Open up output.xml with a text editor to see the serialized data.
Nevertheless, it's much more interesting what you may see in the output XML file:
<?xml version="1.0"?>
<opencv_storage>
<iterationNr>100</iterationNr>
<strings>
image1.jpg Awesomeness baboon.jpg</strings>
<Mapping>
<One>1</One>
<Two>2</Two></Mapping>
<R type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>u</dt>
<data>
1 0 0 0 1 0 0 0 1</data></R>
<T type_id="opencv-matrix">
<rows>3</rows>
<cols>1</cols>
<dt>d</dt>
<data>
0. 0. 0.</data></T>
<MyData>
<A>97</A>
<X>3.1415926535897931e+000</X>
<id>mydata1234</id></MyData>
</opencv_storage>
%YAML:1.0
iterationNr: 100
strings:
- "image1.jpg"
- Awesomeness
- "baboon.jpg"
Mapping:
One: 1
Two: 2
R: !!opencv-matrix
rows: 3
cols: 3
dt: u
data: [ 1, 0, 0, 0, 1, 0, 0, 0, 1 ]
T: !!opencv-matrix
rows: 3
cols: 1
dt: d
data: [ 0., 0., 0. ]
MyData:
A: 97
X: 3.1415926535897931e+000
id: mydata1234
General
When making the switch you first need to learn a bit about the new data structure for images: Mat - The Basic Image
Container, which replaces the old CvMat and IplImage. Switching to the new functions is easier; you just need to
remember a couple of new things.
OpenCV 2 was reorganized. No longer are all the functions crammed into a single library. We have many
modules, each of them containing data structures and functions relevant to certain tasks. This way you do not need to
ship a large library if you use just a subset of OpenCV. This means that you should also include only those headers
you will use. For example:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
All the OpenCV related stuff is put into the cv namespace to avoid name conflicts with other libraries' data structures
and functions. Therefore, you either need to prepend the cv:: specifier to everything that comes from OpenCV, or,
after the includes, add a directive to use the namespace:
using namespace cv;
// The new C++ interface API is inside this namespace. Import it.
Because the functions are already in a namespace there is no need for them to contain the cv prefix in their name.
As such, all the new C++ compatible functions don't have this, and they follow the camel case naming rule: the
first letter is small (unless it's a name, like Canny) and the subsequent words start with a capital letter (like
copyMakeBorder).
Now, remember that you need to link your application against all the modules you use, and, if you are on Windows
using the DLL system, you will also need to add the binaries to your path. For more in-depth information, if you're
on Windows read How to build applications with OpenCV inside the Microsoft Visual Studio, and for Linux an example
usage is explained in Using OpenCV with Eclipse (plugin CDT).
Now, to convert the Mat object you can use either the IplImage or the CvMat operators. While in the C interface
you used to work with pointers, here it's no longer the case. In the C++ interface we have mostly Mat objects. These
objects may be freely converted to both IplImage and CvMat with simple assignment. For example:
Mat I;
IplImage pI = I;
CvMat    mI = I;
Now if you want pointers, the conversion gets just a little more complicated. The compiler can no longer automatically
determine what you want, so you need to explicitly specify your goal. This means calling the IplImage and CvMat
operators and then taking their pointers with the & sign:
Mat I;
IplImage* pI = &I.operator IplImage();
CvMat*    mI = &I.operator CvMat();
One of the biggest complaints about the C interface is that it leaves all the memory management to you. You need to figure
out when it is safe to release your unused objects and make sure you do so before the program finishes, or you could
have troublesome memory leaks. To work around this issue, OpenCV introduces a sort of smart pointer that
will automatically release the object when it's no longer in use. To use this, declare the pointers as a specialization
of Ptr:
Ptr<IplImage> piI = &I.operator IplImage();
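Under the hood the idea is plain reference counting. A minimal sketch of that mechanism (an illustration only, not OpenCV's actual Ptr implementation, which also stores custom deallocators and more):

```cpp
// Minimal ref-counting handle: the payload is deleted exactly once,
// when the last copy of the handle goes out of scope.
template <typename T>
class RefPtr
{
public:
    explicit RefPtr(T* p) : obj_(p), count_(new int(1)) {}
    RefPtr(const RefPtr& o) : obj_(o.obj_), count_(o.count_) { ++*count_; }
    ~RefPtr() { if (--*count_ == 0) { delete obj_; delete count_; } }
    T& operator*() const { return *obj_; }
    T* operator->() const { return obj_; }
private:
    RefPtr& operator=(const RefPtr&); // assignment omitted in this sketch
    T*   obj_;
    int* count_;
};
```

Copies share one counter, so the object survives as long as any handle does — the same guarantee Ptr gives you for the wrapped IplImage.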
Converting from the C data structures to Mat is done by passing them to its constructor. For example:
Mat K(piI), L;
L = Mat(pI);
A case study
Now that you have the basics done, here's an example that mixes the usage of the C interface with
the C++ one. You will also find it in the sample directory of the OpenCV source code library at
samples/cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp.
To further help in seeing the difference, the program supports two modes: one mixed C and C++, and one pure C++.
If you define DEMO_MIXED_API_USE you'll end up using the first one. The program separates the color planes,
does some modifications on them and in the end merges them back together.
#include <stdio.h>
#include <iostream>

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv; // The new C++ interface API is inside this namespace. Import it.
using namespace std;

#define DEMO_MIXED_API_USE

#ifdef DEMO_MIXED_API_USE
    Ptr<IplImage> IplI = cvLoadImage(imagename);  // Ptr<T> is safe ref-counting pointer class
    if(IplI.empty())
    {
        cerr << "Can not load image " << imagename << endl;
        return -1;
    }
    Mat I(IplI); // Convert to the new style container. Only header created. Image not copied.
#else
    Mat I = imread(imagename);       // the newer cvLoadImage alternative, MATLAB-style function
    if( I.empty() )                  // same as if( !I.data )
    {
        cerr << "Can not load image " << imagename << endl;
        return -1;
    }
#endif
Here you can observe that with the new structure we have no pointer problems, although it is possible to use the old
functions and in the end just transform the result to a Mat object.
// convert image to YUV color space. The output image will be created automatically.
Mat I_YUV;
cvtColor(I, I_YUV, CV_BGR2YCrCb);

vector<Mat> planes;
split(I_YUV, planes);
6
Because we want to mess around with the image's luma component, we first convert from the default BGR to the YUV
color space and then split the result into separate planes. Here the program path splits: in the first example it processes
each plane using one of the three major image scanning algorithms in OpenCV (C [] operator, iterator, individual
element access). In a second variant we add some Gaussian noise to the image and then mix the channels together
according to some formula.
The scanning version looks like:
// Method 1. process Y plane using an iterator
MatIterator_<uchar> it = planes[0].begin<uchar>(), it_end = planes[0].end<uchar>();
for(; it != it_end; ++it)
{
    double v = *it * 1.7 + rand()%21 - 10;
    *it = saturate_cast<uchar>(v*v/255);
}

// Method 3. process the second chroma plane using individual element access
uchar& Vxy = planes[2].at<uchar>(y, x);
Vxy = saturate_cast<uchar>((Vxy-128)/2 + 128);
Here you can observe that we may go through all the pixels of an image in three fashions: an iterator, a C pointer
and an individual element access style. You can read a more in-depth description of these in the How to scan images,
lookup tables and time measurement with OpenCV tutorial. Converting from the old function names is easy: just
remove the cv prefix and use the new Mat data structure. Here's an example using the weighted addition
function:
// Fills the matrix with normally distributed random values (around number with deviation off).
// There is also randu() for uniformly distributed random number generation
randn(noisyI, Scalar::all(128), Scalar::all(20));

// blur the noisyI a bit, kernel size is 3x3 and both sigmas are set to 0.5
GaussianBlur(noisyI, noisyI, Size(3, 3), 0.5, 0.5);

#ifdef DEMO_MIXED_API_USE
    // To pass the new matrices to the functions that only work with IplImage or CvMat:
    // step 1) Convert the headers (tip: data will not be copied).
    // step 2) call the function (tip: to pass a pointer do not forget unary "&" to form pointers)
    // ...
#endif

// alternative form of cv::convertScale if we know the datatype at compile time ("uchar" here).
// This expression will not create any temporary arrays ( so should be almost as fast as above)
planes[2] = Mat_<uchar>(planes[2]*color_scale + 128*(1-color_scale));

// Mat::mul replaces cvMul(). Again, no temporary arrays are created in case of simple expressions.
planes[0] = planes[0].mul(planes[0], 1./255);
As you may observe the planes variable is of type Mat. However, converting from Mat to IplImage is easy and made
automatically with a simple assignment operator.
merge(planes, I_YUV);
cvtColor(I_YUV, I, CV_YCrCb2BGR);

#ifdef DEMO_MIXED_API_USE
// this is to demonstrate that I and IplI really share the data - the result of the above
// processing is stored in I and thus in IplI too.
cvShowImage("image with grain", IplI);
#else
imshow("image with grain", I); // the new MATLAB style function show
#endif
The new imshow highgui function accepts both the Mat and IplImage data structures. Compile and run the program
and if the first image below is your input you may get either the first or second as output:
You may observe a runtime instance of this on YouTube here, and you can download the source code from
here or find it in the samples/cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_
of the OpenCV source code library.
CHAPTER
THREE
Title: Remapping
Compatibility: > OpenCV 2.0
Author: Ana Huamán
Where we learn how to manipulate pixel locations
Theory
Note: The explanation below belongs to the book Computer Vision: Algorithms and Applications by Richard Szeliski
and to Learning OpenCV
Smoothing, also called blurring, is a simple and frequently used image processing operation.
There are many reasons for smoothing. In this tutorial we will focus on smoothing in order to reduce noise
(other uses will be seen in the following tutorials).
To perform a smoothing operation we will apply a filter to our image. The most common type of filters are
linear, in which an output pixel's value (i.e. g(i, j)) is determined as a weighted sum of input pixel values (i.e.
f(i + k, j + l)):
g(i, j) = Σ_{k,l} f(i + k, j + l) h(k, l)
h(k, l) is called the kernel, which is nothing more than the coefficients of the filter.
It helps to visualize a filter as a window of coefficients sliding across the image.
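To make the formula concrete, here is the weighted sum spelled out from scratch on a 1D signal (an illustration only; border samples are skipped here, and in real code OpenCV's filter2D does the 2D job):

```cpp
#include <vector>

// g(i) = sum_k f(i + k) * h(k): each output sample is a weighted sum of
// its neighborhood, with the kernel h holding the filter coefficients.
std::vector<double> applyKernel(const std::vector<double>& f,
                                const std::vector<double>& h) // odd length
{
    int r = (int)h.size() / 2;                 // kernel radius
    std::vector<double> g(f.size(), 0.0);
    for (int i = 0; i < (int)f.size(); ++i)
        for (int k = -r; k <= r; ++k)
            if (i + k >= 0 && i + k < (int)f.size())
                g[i] += f[i + k] * h[k + r];   // the weighted sum
    return g;
}
```

With h = {1/3, 1/3, 1/3} this is already the 1D version of the normalized box filter described next.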
There are many kinds of filters; here we will mention the most used:
Normalized Box Filter
This filter is the simplest of all! Each output pixel is the mean of its kernel neighbors (all of them contribute
with equal weights).
The kernel is below:
K = 1/(K_width · K_height) ×

    [ 1 1 1 ... 1 ]
    [ 1 1 1 ... 1 ]
    [ . . . ... . ]
    [ 1 1 1 ... 1 ]
Gaussian Filter
Probably the most useful filter (although not the fastest). Gaussian filtering is done by convolving each point in
the input array with a Gaussian kernel and then summing them all to produce the output array.
Just to make the picture clearer, remember how a 1D Gaussian kernel looks like?
Assuming that an image is 1D, you can notice that the pixel located in the middle would have the biggest weight.
The weight of its neighbors decreases as the spatial distance between them and the center pixel increases.
Note: Remember that a 2D Gaussian can be represented as:

    G0(x, y) = A · e^( -( (x - μx)² / (2σx²) + (y - μy)² / (2σy²) ) )

where μ is the mean (the peak) and σ² represents the variance (per each of the variables x and y)
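The weights themselves are easy to compute from that formula. A small sketch for the 1D case (the radius and sigma parameters are assumptions of this sketch; cv::getGaussianKernel is OpenCV's real helper):

```cpp
#include <vector>
#include <cmath>

// 1D Gaussian kernel weights around the center sample, normalized so
// they sum to 1; the center gets the biggest weight and weights decay
// with spatial distance, exactly as described above.
std::vector<double> gaussianKernel1D(int radius, double sigma)
{
    std::vector<double> k(2 * radius + 1);
    double sum = 0.0;
    for (int x = -radius; x <= radius; ++x)
    {
        k[x + radius] = std::exp(-(x * x) / (2.0 * sigma * sigma));
        sum += k[x + radius];
    }
    for (std::size_t i = 0; i < k.size(); ++i)
        k[i] /= sum;                           // normalize to unit sum
    return k;
}
```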
Median Filter
The median filter runs through each element of the signal (in this case the image) and replaces each pixel with the
median of its neighboring pixels (located in a square neighborhood around the evaluated pixel).
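The idea can be spelled out from scratch on a 1D signal with a 3-sample window (border samples left untouched; cv::medianBlur is the real 2D version):

```cpp
#include <vector>
#include <algorithm>

// Replace every interior sample with the median of its 3-sample
// neighborhood - isolated spikes (impulse noise) disappear entirely.
std::vector<int> medianFilter3(const std::vector<int>& f)
{
    std::vector<int> g = f;
    for (std::size_t i = 1; i + 1 < f.size(); ++i)
    {
        int w[3] = { f[i - 1], f[i], f[i + 1] };  // the neighborhood
        std::nth_element(w, w + 1, w + 3);        // w[1] becomes the median
        g[i] = w[1];
    }
    return g;
}
```

Unlike the linear filters above, the median is not a weighted sum, which is why it removes salt-and-pepper noise without smearing it.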
Bilateral Filter
So far, we have explained some filters whose main goal is to smooth an input image. However, sometimes the
filters do not only dissolve the noise, but also smooth away the edges. To avoid this (to a certain extent at least),
we can use a bilateral filter.
In an analogous way to the Gaussian filter, the bilateral filter also considers the neighboring pixels with weights
assigned to each of them. These weights have two components, the first of which is the same weighting used by
the Gaussian filter. The second component takes into account the difference in intensity between the neighboring
pixels and the evaluated one.
For a more detailed explanation you can check this link
Code
What does this program do?
Loads an image
Applies 4 different kinds of filters (explained in Theory) and shows the filtered images sequentially
Downloadable code: Click here
Code at glance:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace std;
using namespace cv;
/// Global Variables
int DELAY_CAPTION = 1500;
int DELAY_BLUR = 100;
int MAX_KERNEL_LENGTH = 31;
Explanation
1. Let's check the OpenCV functions that involve only the smoothing procedure, since the rest is already known
by now.
2. Normalized Block Filter:
OpenCV offers the function blur to perform smoothing with this filter.
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ blur( src, dst, Size( i, i ), Point(-1,-1) );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
We use 5 arguments (these are the parameters of bilateralFilter):
src: Source image
dst: Destination image
d: The diameter of each pixel neighborhood.
sigmaColor: Standard deviation in the color space.
sigmaSpace: Standard deviation in the coordinate space (in pixel terms)
Results
The code opens an image (in this case lena.jpg) and displays it under the effects of the 4 filters explained.
Here is a snapshot of the image smoothed using medianBlur:
Cool Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
Morphological Operations
In short: A set of operations that process images based on shapes. Morphological operations apply a structuring
element to an input image and generate an output image.
The most basic morphological operations are two: Erosion and Dilation. They have a wide array of uses, i.e. :
Removing noise
Isolation of individual elements and joining disparate elements in an image.
Finding of intensity bumps or holes in an image
We will explain dilation and erosion briefly, using the following image as an example:
Dilation
This operation consists of convolving an image A with some kernel B, which can have any shape or size,
usually a square or circle.
The kernel B has a defined anchor point, usually being the center of the kernel.
As the kernel B is scanned over the image, we compute the maximal pixel value overlapped by B and replace the
image pixel in the anchor point position with that maximal value. As you can deduce, this maximizing operation
causes bright regions within an image to grow (therefore the name dilation). Take as an example the image
above. Applying dilation we can get:
The background (bright) dilates around the black regions of the letter.
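The local-maximum rule is simple enough to spell out from scratch; here it is on a 1D signal with a 3-sample structuring element anchored at its center (an illustration only; cv::dilate is the real thing):

```cpp
#include <vector>
#include <algorithm>

// Grayscale dilation: every sample becomes the maximum over its
// neighborhood, so bright regions grow by one sample per application.
std::vector<int> dilate3(const std::vector<int>& f)
{
    std::vector<int> g = f;
    for (std::size_t i = 0; i < f.size(); ++i)
    {
        int m = f[i];
        if (i > 0)            m = std::max(m, f[i - 1]);
        if (i + 1 < f.size()) m = std::max(m, f[i + 1]);
        g[i] = m;   // maximal value under the kernel anchor
    }
    return g;
}
```

Erosion is the mirror image: replace std::max with std::min and bright regions shrink instead of grow.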
Erosion
This operation is the sister of dilation. What this does is to compute a local minimum over the area of the kernel.
As the kernel B is scanned over the image, we compute the minimal pixel value overlapped by B and replace
the image pixel under the anchor point with that minimal value.
Analogously to the example for dilation, we can apply the erosion operator to the original image (shown above).
You can see in the result below that the bright areas of the image (the background, apparently) get thinner,
whereas the dark zones (the writing) get bigger.
Code
This tutorial's code is shown in the lines below. You can also download it from here

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "highgui.h"
#include <stdlib.h>
#include <stdio.h>

int erosion_elem = 0;
int erosion_size = 0;
int dilation_elem = 0;
int dilation_size = 0;
int const max_elem = 2;
int const max_kernel_size = 21;
if( !src.data )
{ return -1; }
/// Create windows
namedWindow( "Erosion Demo", CV_WINDOW_AUTOSIZE );
namedWindow( "Dilation Demo", CV_WINDOW_AUTOSIZE );
cvMoveWindow( "Dilation Demo", src.cols, 0 );
/// Create Erosion Trackbar
createTrackbar( "Element:\n 0: Rect \n 1: Cross \n 2: Ellipse", "Erosion Demo",
&erosion_elem, max_elem,
Erosion );
createTrackbar( "Kernel size:\n 2n +1", "Erosion Demo",
&erosion_size, max_kernel_size,
Erosion );
/// Create Dilation Trackbar
createTrackbar( "Element:\n 0: Rect \n 1: Cross \n 2: Ellipse", "Dilation Demo",
&dilation_elem, max_elem,
Dilation );
createTrackbar( "Kernel size:\n 2n +1", "Dilation Demo",
&dilation_size, max_kernel_size,
Dilation );
/// Default start
Erosion( 0, 0 );
Dilation( 0, 0 );
waitKey(0);
return 0;
}
/** @function Erosion */
void Erosion( int, void* )
{
int erosion_type;
if( erosion_elem == 0 ){ erosion_type = MORPH_RECT; }
else if( erosion_elem == 1 ){ erosion_type = MORPH_CROSS; }
else if( erosion_elem == 2) { erosion_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( erosion_type,
Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) );
/// Apply the erosion operation
erode( src, erosion_dst, element );
imshow( "Erosion Demo", erosion_dst );
}
/** @function Dilation */
void Dilation( int, void* )
{
int dilation_type;
if( dilation_elem == 0 ){ dilation_type = MORPH_RECT; }
else if( dilation_elem == 1 ){ dilation_type = MORPH_CROSS; }
else if( dilation_elem == 2) { dilation_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( dilation_type,
                     Size( 2*dilation_size + 1, 2*dilation_size+1 ),
                     Point( dilation_size, dilation_size ) );
/// Apply the dilation operation
dilate( src, dilation_dst, element );
imshow( "Dilation Demo", dilation_dst );
}
Explanation
1. Most of the stuff shown is known by you (if you have any doubt, please refer to the tutorials in previous sections).
Let's check the general structure of the program:
Load an image (can be RGB or grayscale)
Create two windows (one for dilation output, the other for erosion)
Create two Trackbars for each operation:
The first trackbar Element returns either erosion_elem or dilation_elem
The second trackbar Kernel size returns erosion_size or dilation_size for the corresponding operation.
Every time we move any slider, the user's function Erosion or Dilation will be called and it will update
the output image based on the current trackbar values.
Let's analyze these two functions:
2. Erosion:
/** @function Erosion */
void Erosion( int, void* )
{
int erosion_type;
if( erosion_elem == 0 ){ erosion_type = MORPH_RECT; }
else if( erosion_elem == 1 ){ erosion_type = MORPH_CROSS; }
else if( erosion_elem == 2) { erosion_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( erosion_type,
Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) );
/// Apply the erosion operation
erode( src, erosion_dst, element );
imshow( "Erosion Demo", erosion_dst );
}
The function that performs the erosion operation is erode. As we can see, it receives three arguments:
src: The source image
erosion_dst: The output image
element: This is the kernel we will use to perform the operation. If we do not specify, the default
is a simple 3x3 matrix. Otherwise, we can specify its shape. For this, we need to use the function
getStructuringElement:
Mat element = getStructuringElement( erosion_type,
Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) );
Results
Compile the code above and execute it with an image as argument. For instance, using this image:
We get the results below. Varying the indices in the Trackbars gives different output images, naturally. Try them
out! You can even try to add a third Trackbar to control the number of iterations.
In this tutorial you will learn how to use the OpenCV function morphologyEx to apply morphological transformations such as:
Opening
Closing
Morphological Gradient
Top Hat
Black Hat
Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
In the previous tutorial we covered two basic Morphology operations:
Erosion
Dilation.
Based on these two we can effectuate more sophisticated transformations of our images. Here we discuss briefly five
operations offered by OpenCV:
Opening
It is obtained by the erosion of an image followed by a dilation.
dst = open(src, element) = dilate(erode(src, element))
Useful for removing small objects (it is assumed that the objects are bright on a dark foreground)
For instance, check out the example below. The image at the left is the original and the image at the right is
the result after applying the opening transformation. We can observe that the small spaces in the corners of the
letter tend to disappear.
Closing
It is obtained by the dilation of an image followed by an erosion.
dst = close(src, element) = erode(dilate(src, element))
Useful to remove small holes (dark regions).
Morphological Gradient
It is the difference between the dilation and the erosion of an image.
dst = morph_grad(src, element) = dilate(src, element) − erode(src, element)
It is useful for finding the outline of an object as can be seen below:
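The same outline effect can be reproduced from scratch on a 1D signal: local maximum (dilation) minus local minimum (erosion) over a 3-sample window is zero on flat regions and large at transitions (an illustration only; cv::morphologyEx with MORPH_GRADIENT is the real operation):

```cpp
#include <vector>
#include <algorithm>

// dilate(f) - erode(f) with a 3-sample window, borders clamped:
// flat stretches give 0, edges give the local contrast.
std::vector<int> morphGradient3(const std::vector<int>& f)
{
    int n = (int)f.size();
    std::vector<int> g(n);
    for (int i = 0; i < n; ++i)
    {
        int lo = std::max(i - 1, 0), hi = std::min(i + 1, n - 1);
        int mx = f[i], mn = f[i];
        for (int j = lo; j <= hi; ++j)
        {
            mx = std::max(mx, f[j]);   // dilation term
            mn = std::min(mn, f[j]);   // erosion term
        }
        g[i] = mx - mn;                // the morphological gradient
    }
    return g;
}
```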
Top Hat
It is the difference between an input image and its opening.
dst = tophat(src, element) = src − open(src, element)
Black Hat
It is the difference between the closing of an image and the image itself.
dst = blackhat(src, element) = close(src, element) − src
Code
This tutorial's code is shown in the lines below. You can also download it from here

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>

int morph_elem = 0;
int morph_size = 0;
int morph_operator = 0;
int const max_operator = 4;
int const max_elem = 2;
int const max_kernel_size = 21;
Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size+1 ), Point( morph_size, morph_size ) );
/// Apply the specified morphology operation
morphologyEx( src, dst, operation, element );
imshow( window_name, dst );
}
Explanation
1. Let's check the general structure of the program:
Load an image
Create a window to display results of the Morphological operations
Create three Trackbars for the user to enter parameters:
The first trackbar Operator returns the kind of morphology operation to use (morph_operator).
createTrackbar("Operator:\n 0: Opening - 1: Closing \n 2: Gradient - 3: Top Hat \n 4: Black Hat",
window_name, &morph_operator, max_operator,
Morphology_Operations );
The second trackbar Element returns morph_elem, which indicates what kind of structure our
kernel is:
createTrackbar( "Element:\n 0: Rect - 1: Cross - 2: Ellipse", window_name,
&morph_elem, max_elem,
Morphology_Operations );
The final trackbar Kernel Size returns the size of the kernel to be used (morph_size)
createTrackbar( "Kernel size:\n 2n +1", window_name,
&morph_size, max_kernel_size,
Morphology_Operations );
Every time we move any slider, the user's function Morphology_Operations will be called to effectuate
a new morphology operation and it will update the output image based on the current trackbar values.
/**
* @function Morphology_Operations
*/
void Morphology_Operations( int, void* )
{
// Since MORPH_X : 2,3,4,5 and 6
int operation = morph_operator + 2;
We can observe that the key function to perform the morphology transformations is morphologyEx. In this
example we use four arguments (leaving the rest as defaults):
src : Source (input) image
dst: Output image
operation: The kind of morphology transformation to be performed. Note that we have 5 alternatives:
* Opening: MORPH_OPEN : 2
* Closing: MORPH_CLOSE: 3
* Gradient: MORPH_GRADIENT: 4
* Top Hat: MORPH_TOPHAT: 5
* Black Hat: MORPH_BLACKHAT: 6
As you can see the values range from <2-6>, that is why we add (+2) to the values entered by the
Trackbar:
int operation = morph_operator + 2;
element: The kernel to be used. We use the function getStructuringElement to define our own structure.
Results
After compiling the code above we can execute it giving an image path as an argument. For this tutorial we use
as input the image: baboon.png:
And here are two snapshots of the display window. The first picture shows the output after using the operator
Opening with a cross kernel. The second picture (right side) shows the result of using a Black Hat operator with
an ellipse kernel.
Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
Usually we need to convert an image to a size different than its original. For this, there are two possible options:
1. Upsize the image (zoom in) or
2. Downsize it (zoom out).
Although there is a geometric transformation function in OpenCV that literally resizes an image (resize, which
we will show in a future tutorial), in this section we first analyze the use of Image Pyramids, which are widely
applied in a huge range of vision applications.
Image Pyramid
An image pyramid is a collection of images - all arising from a single original image - that are successively
downsampled until some desired stopping point is reached.
There are two common kinds of image pyramids:
Gaussian pyramid: Used to downsample images
Laplacian pyramid: Used to reconstruct an upsampled image from an image lower in the pyramid (with
less resolution)
In this tutorial we'll use the Gaussian pyramid.
Gaussian Pyramid
Imagine the pyramid as a set of layers in which the higher the layer, the smaller the size.
Every layer is numbered from bottom to top, so layer (i + 1) (denoted as $G_{i+1}$) is smaller than layer i ($G_i$).
To produce layer (i + 1) in the Gaussian pyramid, we do the following:
Convolve $G_i$ with a Gaussian kernel:

$$\frac{1}{256} \begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}$$

Remove every even-numbered row and column.
These two procedures (downsampling and upsampling as explained above) are implemented by the OpenCV
functions pyrUp and pyrDown, as we will see in an example with the code below:
Note: When we reduce the size of an image, we are actually losing information of the image.
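The downsampling step can be sketched in one dimension with plain C++ (not OpenCV's implementation): smooth with the binomial kernel [1 4 6 4 1] normalized to sum 1, then keep only every other sample, which is what pyrDown does along each row and column.

```cpp
#include <vector>

// One pyrDown-like step on a 1-D signal:
// 1) convolve with the normalized binomial kernel [1 4 6 4 1] / 16
//    (borders are handled by replicating the edge sample)
// 2) keep only the even-indexed samples
std::vector<double> pyr_down_1d(const std::vector<double>& s) {
    static const double k[5] = {1.0/16, 4.0/16, 6.0/16, 4.0/16, 1.0/16};
    int n = static_cast<int>(s.size());
    std::vector<double> smooth(s.size());
    for (int i = 0; i < n; ++i) {
        double acc = 0.0;
        for (int j = -2; j <= 2; ++j) {
            int idx = i + j;
            if (idx < 0) idx = 0;          // replicate left border
            if (idx >= n) idx = n - 1;     // replicate right border
            acc += k[j + 2] * s[idx];
        }
        smooth[i] = acc;
    }
    std::vector<double> out;
    for (int i = 0; i < n; i += 2) out.push_back(smooth[i]);
    return out;
}
```

Because the kernel is normalized, a constant signal stays constant while the number of samples halves, which is exactly the information loss the note above refers to: detail, not brightness, is discarded.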
Code
The tutorial code is shown below. You can also download it from here
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <math.h>
#include <stdlib.h>
#include <stdio.h>
/**
* @function main
*/
int main( int argc, char** argv )
{
/// General instructions
printf( "\n Zoom In-Out demo \n " );
printf( "------------------ \n" );
printf( " * [u] -> Zoom in \n" );
printf( " * [d] -> Zoom out \n" );
printf( " * [ESC] -> Close program \n \n" );
/// Test image - make sure it is loaded (the tutorial uses chicky_512.jpg)
src = imread( "../images/chicky_512.jpg" );
if( !src.data )
{ printf(" No data! -- Exiting the program \n");
return -1; }

tmp = src;
dst = tmp;
/// Create window
namedWindow( window_name, CV_WINDOW_AUTOSIZE );
imshow( window_name, dst );
/// Loop
while( true )
{
int c;
c = waitKey(10);
if( (char)c == 27 )
{ break; }
if( (char)c == 'u' )
Explanation
1. Let's check the general structure of the program:
Load an image (in this case it is defined in the program, the user does not have to enter it as an argument)
/// Test image - make sure it is loaded
src = imread( "../images/chicky_512.jpg" );
if( !src.data )
{ printf(" No data! -- Exiting the program \n");
return -1; }
Create a Mat object to store the result of the operations (dst) and one to save temporal results (tmp).
Mat src, dst, tmp;
/* ... */
tmp = src;
dst = tmp;
if( (char)c == 27 )
{ break; }
if( (char)c == 'u' )
{
pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) );
printf( "** Zoom In: Image x 2 \n" );
}
else if( (char)c == 'd' )
{ pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) );
printf( "** Zoom Out: Image / 2 \n" );
}
imshow( window_name, dst );
tmp = dst;
}
Our program exits if the user presses ESC. Besides, it has two options:
Perform upsampling (after pressing 'u')
pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) );
Perform downsampling (after pressing 'd')
pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) );
Results
After compiling the code above we can test it. The program loads the image chicky_512.jpg that comes in the
tutorial_code/image folder. Notice that this image is 512 × 512, hence a downsample won't generate any error
(512 = 2^9). The original image is shown below:
First we apply two successive pyrDown operations by pressing 'd'. Our output is:
Note that we should have lost some resolution due to the fact that we are diminishing the size of the image. This
is evident after we apply pyrUp twice (by pressing 'u'). Our output is now:
Cool Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
What is Thresholding?
The simplest segmentation method
Application example: Separate out regions of an image corresponding to objects which we want to analyze.
This separation is based on the variation of intensity between the object pixels and the background pixels.
To differentiate the pixels we are interested in from the rest (which will eventually be rejected), we perform a
comparison of each pixel intensity value with respect to a threshold (determined according to the problem to
solve).
Once we have properly separated the important pixels, we can set them to a determined value to identify them
(i.e. we can assign them a value of 0 (black), 255 (white), or any value that suits our needs).
Types of Thresholding
OpenCV offers the function threshold to perform thresholding operations.
We can perform 5 types of Thresholding operations with this function. We will explain them in the following
subsections.
To illustrate how these thresholding processes work, let's consider that we have a source image with pixels with
intensity values src(x, y). The plot below depicts this. The horizontal blue line represents the fixed threshold
thresh.
Threshold Binary
If the intensity of the pixel src(x, y) is higher than thresh, the new pixel intensity is set to maxVal; otherwise,
it is set to 0:

$$\texttt{dst}(x,y) = \begin{cases} \texttt{maxVal} & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ 0 & \text{otherwise} \end{cases}$$

Threshold Binary, Inverted
If the intensity of the pixel src(x, y) is higher than thresh, the new pixel intensity is set to 0; otherwise,
it is set to maxVal:

$$\texttt{dst}(x,y) = \begin{cases} 0 & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ \texttt{maxVal} & \text{otherwise} \end{cases}$$
Truncate
The maximum intensity value for the pixels is thresh; if src(x, y) is greater, its value is truncated. See
the figure below:
Threshold to Zero
If src(x, y) is lower than thresh, the new pixel value will be set to 0; otherwise it keeps its original value:

$$\texttt{dst}(x,y) = \begin{cases} \texttt{src}(x,y) & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ 0 & \text{otherwise} \end{cases}$$

Threshold to Zero, Inverted
If src(x, y) is greater than thresh, the new pixel value will be set to 0; otherwise it keeps its original value:

$$\texttt{dst}(x,y) = \begin{cases} 0 & \text{if } \texttt{src}(x,y) > \texttt{thresh} \\ \texttt{src}(x,y) & \text{otherwise} \end{cases}$$
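The five rules can be summarized per pixel. Here is a plain C++ sketch (our own illustration, not the OpenCV implementation) that applies each thresholding type to a single intensity value, using the same 0-4 type codes as the sample code below:

```cpp
// Apply one of the five thresholding rules to a single pixel intensity.
// type: 0 Binary, 1 Binary Inverted, 2 Truncate, 3 To Zero, 4 To Zero Inverted
int apply_threshold(int src, int thresh, int maxVal, int type) {
    switch (type) {
        case 0: return src > thresh ? maxVal : 0;     // Binary
        case 1: return src > thresh ? 0 : maxVal;     // Binary Inverted
        case 2: return src > thresh ? thresh : src;   // Truncate
        case 3: return src > thresh ? src : 0;        // To Zero
        case 4: return src > thresh ? 0 : src;        // To Zero Inverted
    }
    return src;  // unknown type: leave the pixel unchanged
}
```

For example, with thresh = 100 and maxVal = 255, an input of 200 yields 255, 0, 100, 200 and 0 for types 0 through 4 respectively.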
Code
The tutorial code is shown below. You can also download it from here
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
int threshold_value = 0;
int threshold_type = 3;
int const max_value = 255;
int const max_type = 4;
int const max_BINARY_value = 255;
/**
* @function Threshold_Demo
*/
void Threshold_Demo( int, void* )
{
/* 0: Binary
   1: Binary Inverted
   2: Threshold Truncated
   3: Threshold to Zero
   4: Threshold to Zero Inverted
 */
Explanation
1. Let's check the general structure of the program:
Load an image. If it is RGB we convert it to Grayscale. For this, remember that we can use the function
cvtColor:
src = imread( argv[1], 1 );
/// Convert the image to Gray
cvtColor( src, src_gray, CV_RGB2GRAY );
Wait until the user enters the threshold value, the type of thresholding (or until the program exits)
Whenever the user changes the value of any of the Trackbars, the function Threshold_Demo is called:
/**
* @function Threshold_Demo
*/
void Threshold_Demo( int, void* )
{
/* 0: Binary
1: Binary Inverted
2: Threshold Truncated
3: Threshold to Zero
4: Threshold to Zero Inverted
*/
threshold( src_gray, dst, threshold_value, max_BINARY_value, threshold_type );
Results
1. After compiling this program, run it giving a path to an image as argument. For instance, for an input image as:
2. First, we try to threshold our image with a binary threshold, inverted. We expect that the pixels brighter than the
thresh will turn dark, which is what actually happens, as we can see in the snapshot below (notice from the
original image that the doggie's tongue and eyes are particularly bright in comparison with the rest of the image;
this is reflected in the output image).
3. Now we try with the threshold to zero. With this, we expect that the darkest pixels (below the threshold) will
become completely black, whereas the pixels with values greater than the threshold will keep their original values.
This is verified by the following snapshot of the output image:
Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
Convolution
In a very general sense, convolution is an operation between every part of an image and an operator (kernel).
What is a kernel?
A kernel is essentially a fixed-size array of numerical coefficients along with an anchor point in that array, which is
typically located at the center.
$$H(x, y) = \sum_{i=0}^{M_i - 1} \sum_{j=0}^{M_j - 1} I(x + i - a_i,\; y + j - a_j)\, K(i, j)$$
Fortunately, OpenCV provides you with the function filter2D so you do not have to code all these operations.
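To see what the formula computes, here is a minimal plain C++ sketch of it for a small row-major image, with the anchor at the kernel center and out-of-range pixels replicated from the border (one of several possible border policies; this is an illustrative sketch, not OpenCV's optimized filter2D):

```cpp
#include <vector>

// Direct evaluation of H(x,y) = sum_i sum_j I(x+i-a_i, y+j-a_j) K(i,j)
// for a square kernel with its anchor at the center. Borders replicate.
std::vector<double> convolve2d(const std::vector<double>& img, int w, int h,
                               const std::vector<double>& k, int ksize) {
    int a = ksize / 2;                           // anchor at the kernel center
    std::vector<double> out(img.size(), 0.0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double acc = 0.0;
            for (int j = 0; j < ksize; ++j)
                for (int i = 0; i < ksize; ++i) {
                    int xx = x + i - a, yy = y + j - a;
                    if (xx < 0) xx = 0;          // replicate border pixels
                    if (xx >= w) xx = w - 1;
                    if (yy < 0) yy = 0;
                    if (yy >= h) yy = h - 1;
                    acc += img[yy * w + xx] * k[j * ksize + i];
                }
            out[y * w + x] = acc;
        }
    return out;
}
```

With the identity kernel (a single 1 at the anchor) the image passes through unchanged, which is a handy sanity check of the index arithmetic.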
Code
1. What does this program do?
Loads an image
Performs a normalized box filter. For instance, for a kernel of size size = 3, the kernel would be:

$$K = \frac{1}{3 \cdot 3} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$
The program will perform the filter operation with kernels of sizes 3, 5, 7, 9 and 11.
The filter output (with each kernel) will be shown for 500 milliseconds
2. The tutorial code is shown below. You can also download it from here
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
Explanation
1. Load an image
4. Perform an infinite loop updating the kernel size and applying our linear filter to the input image. Let's analyze
that in more detail:
5. First we define the kernel our filter is going to use. Here it is:
kernel_size = 3 + 2*( ind%5 );
kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size);
The first line updates kernel_size to odd values in the range [3, 11]. The second line actually builds the
kernel by setting its value to a matrix filled with 1's and normalizing it by dividing it by the number of
elements.
6. After setting the kernel, we can generate the filter by using the function filter2D:
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
Results
1. After compiling the code above, you can execute it giving as argument the path of an image. The result should
be a window that shows an image blurred by a normalized filter. Every 0.5 seconds the kernel size should change,
as can be seen in the series of snapshots below:
Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
1. In our previous tutorial we learned to use convolution to operate on images. One problem that naturally arises is
how to handle the boundaries. How can we convolve them if the evaluated points are at the edge of the image?
2. What most OpenCV functions do is copy a given image onto another, slightly larger image and then
automatically pad the boundary (by any of the methods explained in the sample code just below). This way,
the convolution can be performed over the needed pixels without problems (the extra padding is cut after the
operation is done).
3. In this tutorial, we will briefly explore two ways of defining the extra padding (border) for an image:
(a) BORDER_CONSTANT: Pad the image with a constant value (i.e. black or 0).
(b) BORDER_REPLICATE: The row or column at the very edge of the original is replicated to the extra
border.
This will be seen more clearly in the Code section.
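One way to picture the two modes is by how an out-of-range read is resolved. The sketch below (our own illustration of the idea, not OpenCV's internals) reads a pixel from a 1-D image row, padding out-of-range coordinates either with a constant or with the nearest edge pixel:

```cpp
#include <vector>

// Resolve a (possibly out-of-range) read from a 1-D image row.
// CONSTANT returns a fixed pad value; REPLICATE clamps to the nearest edge.
enum Border { CONSTANT, REPLICATE };

int read_padded(const std::vector<int>& row, int x, Border mode, int cval) {
    int n = static_cast<int>(row.size());
    if (x >= 0 && x < n) return row[x];     // inside the image: normal read
    if (mode == CONSTANT) return cval;      // e.g. black, or a random color
    return row[x < 0 ? 0 : n - 1];          // REPLICATE: copy the edge pixel
}
```

Reading two pixels past the right edge of `{10, 20, 30}` gives 30 with REPLICATE and the chosen constant with CONSTANT.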
Code
1. What does this program do?
Load an image
Let the user choose what kind of padding to use on the input image. There are two options:
(a) Constant value border: Applies a padding of a constant value for the whole border. This value will be
updated randomly every 0.5 seconds.
(b) Replicated border: The border will be replicated from the pixel values at the edges of the original
image.
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
Explanation
1. First we declare the variables we are going to use:
Mat src, dst;
int top, bottom, left, right;
int borderType;
Scalar value;
char* window_name = "copyMakeBorder Demo";
RNG rng(12345);
The variable rng deserves special attention: it is a random number generator. We use it to generate the
random border color, as we will see soon.
2. As usual we load our source image src:
src = imread( argv[1] );
if( !src.data )
{ printf(" No data entered, please enter the path to an image file \n");
return -1;
}
3. After giving a short intro of how to use the program, we create a window:
namedWindow( window_name, CV_WINDOW_AUTOSIZE );
4. Now we initialize the argument that defines the size of the borders (top, bottom, left and right). We give them a
value of 5% the size of src.
top = (int) (0.05*src.rows); bottom = (int) (0.05*src.rows);
left = (int) (0.05*src.cols); right = (int) (0.05*src.cols);
5. The program begins a while loop. If the user presses 'c' or 'r', the borderType variable takes the value of
BORDER_CONSTANT or BORDER_REPLICATE respectively:
while( true )
{
c = waitKey(500);
if( (char)c == 27 )
{ break; }
else if( (char)c == 'c' )
{ borderType = BORDER_CONSTANT; }
else if( (char)c == 'r' )
{ borderType = BORDER_REPLICATE; }
6. In each iteration (when borderType is BORDER_CONSTANT), we update the border color value
with a random value generated by the RNG variable rng. This value is a number picked randomly in the range
[0, 255].
7. Finally, we call the function copyMakeBorder to apply the respective padding:
copyMakeBorder( src, dst, top, bottom, left, right, borderType, value );
Results
1. After compiling the code above, you can execute it giving as argument the path of an image. The result should
be:
By default, it begins with the border set to BORDER_CONSTANT. Hence, a succession of random colored
borders will be shown.
If you press 'r', the border will become a replica of the edge pixels.
If you press 'c', the random colored borders will appear again.
If you press ESC the program will exit.
Below are some screenshots showing how the border changes color and how the BORDER_REPLICATE option
looks:
Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
1. In the last two tutorials we have seen practical examples of convolution. One of the most important convolutions is the computation of derivatives of an image (or an approximation to them).
2. Why might the calculation of derivatives of an image be important? Let's imagine we want to detect the edges
present in the image. For instance:
You can easily notice that at an edge, the pixel intensity changes noticeably. A good way to express
changes is by using derivatives. A high change in gradient indicates a major change in the image.
3. To be more graphical, let's assume we have a 1D image. An edge is shown by the jump in intensity in the plot
below:
4. The edge jump can be seen more easily if we take the first derivative (actually, the edge appears here as a maximum).
5. So, from the explanation above, we can deduce that a method to detect edges in an image can be performed by
locating pixel locations where the gradient is higher than its neighbors (or to generalize, higher than a threshold).
6. For a more detailed explanation, please refer to Learning OpenCV by Bradski and Kaehler
Sobel Operator
1. The Sobel Operator is a discrete differentiation operator. It computes an approximation of the gradient of an
image intensity function.
2. The Sobel Operator combines Gaussian smoothing and differentiation.
Formulation
(a) Horizontal changes: This is computed by convolving I with a kernel $G_x$ of odd size. For a
kernel size of 3, $G_x$ would be computed as:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I$$

(b) Vertical changes: This is computed by convolving I with a kernel $G_y$ of odd size. For a
kernel size of 3, $G_y$ would be computed as:

$$G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I$$
2. At each point of the image we calculate an approximation of the gradient at that point by combining both results
above:

$$G = \sqrt{G_x^2 + G_y^2}$$

Although sometimes the following simpler equation is used: $G = |G_x| + |G_y|$
Note:
When the size of the kernel is 3, the Sobel kernel shown above may produce noticeable inaccuracies (after
all, Sobel is only an approximation of the derivative). OpenCV addresses this inaccuracy for kernels of
size 3 by using the Scharr function. This is as fast as, but more accurate than, the standard Sobel function. It
implements the following kernels:

$$G_x = \begin{bmatrix} -3 & 0 & +3 \\ -10 & 0 & +10 \\ -3 & 0 & +3 \end{bmatrix} \qquad G_y = \begin{bmatrix} -3 & -10 & -3 \\ 0 & 0 & 0 \\ +3 & +10 & +3 \end{bmatrix}$$
You can check out more information on this function in the OpenCV reference (Scharr). Also, in the sample code
below, you will notice that above the code for the Sobel function there is also commented-out code for the Scharr function.
Uncommenting it (and commenting out the Sobel code) should give you an idea of how this function works.
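To ground the formulation, here is a plain C++ sketch (not OpenCV's Sobel) that evaluates the 3×3 $G_x$ and $G_y$ responses at one interior pixel of a small row-major image and combines them with the cheap $|G_x| + |G_y|$ approximation that the sample code also uses:

```cpp
#include <vector>
#include <cstdlib>

// Evaluate the 3x3 Sobel responses at interior pixel (x, y) of a row-major
// image of width w, and combine them as |Gx| + |Gy| (the approximation of
// sqrt(Gx^2 + Gy^2) used by the tutorial's addWeighted step).
int sobel_mag(const std::vector<int>& img, int w, int x, int y) {
    const int kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    const int ky[3][3] = { {-1, -2, -1}, { 0, 0, 0}, { 1, 2, 1} };
    int gx = 0, gy = 0;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i) {
            int p = img[(y + j) * w + (x + i)];
            gx += kx[j + 1][i + 1] * p;
            gy += ky[j + 1][i + 1] * p;
        }
    return std::abs(gx) + std::abs(gy);
}
```

On a vertical step edge the response is large, and on a flat region it is exactly zero, since each Sobel kernel's coefficients sum to zero.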
Code
1. What does this program do?
Applies the Sobel Operator and generates as output an image with the detected edges bright on a darker
background.
2. The tutorial code is shown below. You can also download it from here
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
int c;
/// Load an image
src = imread( argv[1] );
if( !src.data )
{ return -1; }
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
/// Convert it to gray
cvtColor( src, src_gray, CV_RGB2GRAY );
/// Create window
namedWindow( window_name, CV_WINDOW_AUTOSIZE );
/// Generate grad_x and grad_y
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
//Scharr( src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT );
Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
convertScaleAbs( grad_x, abs_grad_x );
/// Gradient Y
//Scharr( src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT );
Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
convertScaleAbs( grad_y, abs_grad_y );
/// Total Gradient (approximate)
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
imshow( window_name, grad );
waitKey(0);
return 0;
}
Explanation
1. First we declare the variables we are going to use:
Mat src, src_gray;
Mat grad;
char* window_name = "Sobel Demo - Simple Edge Detector";
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
3. First, we apply a GaussianBlur to our image to reduce the noise ( kernel size = 3 )
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
5. Second, we calculate the derivatives in x and y directions. For this, we use the function Sobel as shown below:
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
/// Gradient Y
Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
7. Finally, we try to approximate the gradient by adding both directional gradients (note that this is not an exact
calculation, but it is good for our purposes).
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
Results
1. Here is the output of applying our basic detector to lena.jpg:
Theory
1. In the previous tutorial we learned how to use the Sobel Operator. It was based on the fact that in the edge area,
the pixel intensity shows a jump or a high variation of intensity. Taking the first derivative of the intensity,
we observed that an edge is characterized by a maximum, as can be seen in the figure:
You can observe that the second derivative is zero! So, we can also use this criterion to attempt to detect edges in
an image. However, note that zeros will not only appear in edges (they can actually appear in other meaningless
locations); this can be solved by applying filtering where needed.
Laplacian Operator
1. From the explanation above, we deduce that the second derivative can be used to detect edges. Since images are
2D, we would need to take the derivative in both dimensions. Here, the Laplacian operator comes in handy.
2. The Laplacian operator is defined by:

$$\text{Laplace}(f) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$
1. The Laplacian operator is implemented in OpenCV by the function Laplacian. In fact, since the Laplacian uses
the gradient of images, it calls internally the Sobel operator to perform its computation.
Code
1. What does this program do?
Loads an image
Removes noise by applying a Gaussian blur and then converts the original image to grayscale
Applies a Laplacian operator to the grayscale image and stores the output image
Displays the result in a window
2. The tutorial code is shown below. You can also download it from here
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
Explanation
1. Create some needed variables:
Mat src, src_gray, dst;
int kernel_size = 3;
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
char* window_name = "Laplace Demo";
Results
1. After compiling the code above, we can run it giving as argument the path to an image. For example, using as
an input:
2. We obtain the following result. Notice how the trees and the silhouette of the cow are approximately well defined
(except in areas where the intensities are very similar, i.e. around the cow's head). Also, note that the roof of
the house behind the trees (right side) is clearly marked. This is due to the fact that the contrast is higher in
that region.
Theory
1. The Canny Edge detector was developed by John F. Canny in 1986. Also known to many as the optimal detector,
the Canny algorithm aims to satisfy three main criteria:
Low error rate: a good detection of only existent edges.
Good localization: the distance between detected edge pixels and real edge pixels has to be minimized.
Minimal response: only one detector response per edge.
Steps
1. Filter out any noise. The Gaussian filter is used for this purpose. An example of a Gaussian kernel of
size = 5 that might be used is shown below:
$$K = \frac{1}{159} \begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix}$$
2. Find the intensity gradient of the image. For this, we follow a procedure analogous to Sobel:
(a) Apply a pair of convolution masks (in the x and y directions):
$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix}$$
(b) Find the gradient strength and direction with:

$$G = \sqrt{G_x^2 + G_y^2} \qquad \theta = \arctan\left(\frac{G_y}{G_x}\right)$$
The direction is rounded to one of four possible angles (namely 0, 45, 90 or 135 degrees)
3. Non-maximum suppression is applied. This removes pixels that are not considered to be part of an edge. Hence,
only thin lines (candidate edges) will remain.
4. Hysteresis: The final step. Canny uses two thresholds (upper and lower):
(a) If a pixel gradient is higher than the upper threshold, the pixel is accepted as an edge
(b) If a pixel gradient value is below the lower threshold, then it is rejected.
(c) If the pixel gradient is between the two thresholds, then it will be accepted only if it is connected to a pixel
that is above the upper threshold.
Canny recommended an upper:lower ratio between 2:1 and 3:1.
5. For more details, you can always consult your favorite Computer Vision book.
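The hysteresis rule in step 4 can be sketched as a tiny decision function. This is our own illustration of the three cases above; in the real algorithm, "connected to a strong pixel" is discovered by tracking edges across the whole image rather than being passed in as a flag:

```cpp
// Canny hysteresis decision for one pixel's gradient magnitude.
// connected_to_strong: whether the pixel touches a pixel above `upper`
// (in the real detector this comes from edge tracking, not a parameter).
bool is_edge(double grad, double lower, double upper, bool connected_to_strong) {
    if (grad > upper) return true;       // (a) strong edge: always accepted
    if (grad < lower) return false;      // (b) below the lower threshold: rejected
    return connected_to_strong;          // (c) in between: kept only if linked
}
```

With thresholds 30 and 90, a gradient of 50 is kept only when it connects to a strong edge, while 120 is always kept and 10 always discarded.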
Code
1. What does this program do?
Asks the user to enter a numerical value to set the lower threshold for our Canny Edge Detector (by means
of a Trackbar)
Applies the Canny Detector and generates a mask (bright lines representing the edges on a black background).
Applies the mask obtained to the original image and displays it in a window.
2. The tutorial code is shown below. You can also download it from here
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
Explanation
1. Create some needed variables:
Mat src, src_gray;
Mat dst, detected_edges;
int edgeThresh = 1;
int lowThreshold;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
char* window_name = "Edge Map";
Note the following:
3. Create a matrix of the same type and size of src (to be dst)
dst.create( src.size(), src.type() );
6. Create a Trackbar for the user to enter the lower threshold for our Canny detector:
createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
9. Finally, we will use the function copyTo to map only the areas of the image that are identified as edges (on a
black background).
src.copyTo( dst, detected_edges);
copyTo copies the src image onto dst, but only at the pixels where the mask (detected_edges) is non-zero. Since the output of the Canny detector is edge contours on a black background, the resulting
dst will be black everywhere except at the detected edges.
10. We display our result:
imshow( window_name, dst );
Result
After compiling the code above, we can run it giving as argument the path to an image. For example, using as
an input the following image:
Moving the slider and trying different thresholds, we obtain the following result:
Notice how the original image appears over the black background only in the edge regions.
Theory
Note: The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
1. As you know, a line in the image space can be expressed with two variables. For example:
(a) In the Cartesian coordinate system: Parameters: (m, b).
(b) In the Polar coordinate system: Parameters: (r, θ).
For Hough Transforms, we will express lines in the Polar system. Hence, a line equation can be written as:

$$y = -\frac{\cos\theta}{\sin\theta}\, x + \frac{r}{\sin\theta}$$

Arranging the terms: $r = x \cos\theta + y \sin\theta$
1. In general, for each point $(x_0, y_0)$, we can define the family of lines that goes through that point as:
$r = x_0 \cos\theta + y_0 \sin\theta$
Meaning that each pair $(r, \theta)$ represents each line that passes through $(x_0, y_0)$.
2. If for a given $(x_0, y_0)$ we plot the family of lines that goes through it, we get a sinusoid. For instance, for $x_0 = 8$
and $y_0 = 6$ we get the following plot (in the θ-r plane):
The three plots intersect in one single point (0.925, 9.6); these coordinates are the parameters $(\theta, r)$ of the line
on which $(x_0, y_0)$, $(x_1, y_1)$ and $(x_2, y_2)$ lie.
4. What does all the stuff above mean? It means that, in general, a line can be detected by finding the number of
intersections between curves. The more curves that intersect, the more points the line represented by that intersection
has. In general, we can define a threshold on the minimum number of intersections needed to
detect a line.
5. This is what the Hough Line Transform does. It keeps track of the intersections between the curves of every point in
the image. If the number of intersections is above some threshold, then it declares it a line with the parameters
$(\theta, r)$ of the intersection point.
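The voting idea follows directly from $r = x \cos\theta + y \sin\theta$: each point traces a sinusoid in the (θ, r) plane, and collinear points trace sinusoids that all cross at the parameters of their common line. A minimal plain C++ illustration (not OpenCV's HoughLines):

```cpp
#include <cmath>

// The sinusoid r(theta) = x*cos(theta) + y*sin(theta) traced by a single
// point (x, y) in the Hough parameter plane. Collinear points produce
// sinusoids that all pass through the (theta, r) of their shared line.
double hough_r(double x, double y, double theta) {
    return x * std::cos(theta) + y * std::sin(theta);
}
```

For the horizontal line y = 1 (θ = π/2, r = 1), the points (0, 1), (1, 1) and (2, 1) all evaluate to r ≈ 1 at θ = π/2 — three votes landing in the same accumulator cell.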
Standard and Probabilistic Hough Line Transform
Code
1. What does this program do?
Loads an image
Vec4i l = lines[i];
line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, CV_AA);
}
#endif
imshow("source", src);
imshow("detected lines", cdst);
waitKey();
return 0;
}
Explanation
1. Load an image
Mat src = imread(filename, 0);
if(src.empty())
{
help();
cout << "can not open " << filename << endl;
return -1;
}
Now we will apply the Hough Line Transform. We will explain how to use both OpenCV functions available
for this purpose:
3. Standard Hough Line Transform
(a) First, you apply the Transform:
vector<Vec2f> lines;
HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
line( cdst, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
Result
Note: The results below are obtained using the slightly fancier version we mentioned in the Code section. It still
implements the same stuff as above, only adding the Trackbar for the Threshold.
Using an input image such as:
We get the following result by using the Probabilistic Hough Line Transform:
You may observe that the number of lines detected varies while you change the threshold. The explanation is fairly
evident: if you establish a higher threshold, fewer lines will be detected (since you will need more points to declare a
line detected).
Theory
Hough Circle Transform
The Hough Circle Transform works in a roughly analogous way to the Hough Line Transform explained in the
previous tutorial.
In the line detection case, a line was defined by two parameters $(r, \theta)$. In the circle case, we need three parameters to define a circle:
C : (xcenter , ycenter , r)
where $(x_{center}, y_{center})$ define the center position (green point) and r is the radius, which allows us to completely define a circle, as can be seen below:
For the sake of efficiency, OpenCV implements a detection method slightly trickier than the standard Hough Transform: the Hough gradient method. For more details, please check the book Learning OpenCV or your favorite
Computer Vision bibliography.
Code
1. What does this program do?
Loads an image and blurs it to reduce the noise
Applies the Hough Circle Transform to the blurred image
Displays the detected circle(s) in a window
2. The sample code that we will explain can be downloaded from here. A slightly fancier version (which shows
both Hough standard and probabilistic with trackbars for changing the threshold values) can be found here.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
Explanation
1. Load an image
src = imread( argv[1], 1 );
if( !src.data )
{ return -1; }
2. Convert it to grayscale:
cvtColor( src, src_gray, CV_BGR2GRAY );
3. Apply a Gaussian blur to reduce noise and avoid false circle detection:
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
vector<Vec3f> circles;
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0 );
You can see that we will draw the circle(s) in red and the center(s) with a small green dot
6. Display the detected circle(s):
namedWindow( "Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE );
imshow( "Hough Circle Transform Demo", src );
Result
The result of running the code above with a test image is shown below:
3.13 Remapping
Goal
In this tutorial you will learn how to:
1. Use the OpenCV function remap to implement simple remapping routines.
Theory
What is remapping?
It is the process of taking pixels from one place in the image and locating them in another position in a new
image.
To accomplish the mapping process, it might be necessary to do some interpolation for non-integer pixel locations, since there will not always be a one-to-one pixel correspondence between source and destination images.
We can express the remap for every pixel location (x, y) as:
g(x, y) = f(h(x, y))
where g() is the remapped image, f() the source image and h(x, y) is the mapping function that operates on
(x, y).
Let's think of a quick example. Imagine that we have an image I and, say, we want to do a remap such that:
h(x, y) = (I.cols − x, y)
What would happen? It is easily seen that the image would flip in the x direction. For instance, consider the
input image:
observe how the red circle changes positions with respect to x (considering x the horizontal direction):
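This horizontal flip can be sketched as a remap on a single row of pixels in plain C++ (not OpenCV's remap; since the map is integer-valued, nearest-neighbor lookup suffices and no interpolation is needed). We use cols − 1 − x so indices stay in range; the text's I.cols − x is the same idea in 1-based terms:

```cpp
#include <vector>

// g(x) = f(h(x)) with h(x) = cols - 1 - x: a horizontal flip of one row.
std::vector<int> remap_flip_row(const std::vector<int>& f) {
    int cols = static_cast<int>(f.size());
    std::vector<int> g(cols);
    for (int x = 0; x < cols; ++x)
        g[x] = f[cols - 1 - x];   // read the source pixel the map points to
    return g;
}
```

Applying it to the row {1, 2, 3, 4} yields {4, 3, 2, 1}: every pixel moves to its mirrored x position, just like the red circle in the figure.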
Code
1. What does this program do?
Loads an image
Each second, apply 1 of 4 different remapping processes to the image and display them indefinitely in a
window.
Wait for the user to exit the program
2. The tutorial code is shown below. You can also download it from here
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
case 0:
if( i > src.cols*0.25 && i < src.cols*0.75 && j > src.rows*0.25 && j < src.rows*0.75 )
{
map_x.at<float>(j,i) = 2*( i - src.cols*0.25 ) + 0.5 ;
map_y.at<float>(j,i) = 2*( j - src.rows*0.25 ) + 0.5 ;
}
else
{ map_x.at<float>(j,i) = 0 ;
map_y.at<float>(j,i) = 0 ;
}
break;
case 1:
map_x.at<float>(j,i) = i ;
map_y.at<float>(j,i) = src.rows - j ;
break;
case 2:
map_x.at<float>(j,i) = src.cols - i ;
map_y.at<float>(j,i) = j ;
break;
case 3:
map_x.at<float>(j,i) = src.cols - i ;
map_y.at<float>(j,i) = src.rows - j ;
break;
} // end of switch
}
}
ind++;
}
Explanation
1. Create some variables we will use:
Mat src, dst;
Mat map_x, map_y;
char* remap_window = "Remap demo";
int ind = 0;
2. Load an image:
src = imread( argv[1], 1 );
3. Create the destination image and the two mapping matrices (for x and y )
dst.create( src.size(), src.type() );
map_x.create( src.size(), CV_32FC1 );
map_y.create( src.size(), CV_32FC1 );
5. Establish a loop. Every 1000 ms we update our mapping matrices (map_x and map_y) and apply them to our
source image:
while( true )
{
/// Each 1 sec. Press ESC to exit the program
int c = waitKey( 1000 );
if( (char)c == 27 )
{ break; }
/// Update map_x & map_y. Then apply remap
update_map();
remap( src, dst, map_x, map_y, CV_INTER_LINEAR, BORDER_CONSTANT, Scalar(0,0, 0) );
/// Display results
imshow( remap_window, dst );
}
The function that applies the remapping is remap. We give the following arguments:
src: Source image
dst: Destination image of same size as src
map_x: The mapping function in the x direction. It is equivalent to the first component of h(i, j)
map_y: Same as above, but in y direction. Note that map_y and map_x are both of the same size as src
CV_INTER_LINEAR: The type of interpolation to use for non-integer pixels. This is the default.
BORDER_CONSTANT: Default
How do we update our mapping matrices map_x and map_y? Keep reading:
6. Updating the mapping matrices: We are going to perform 4 different mappings:
(a) Reduce the picture to half its size and display it in the middle:
$$h(i, j) = (2 i - \texttt{src.cols}/2 + 0.5,\; 2 j - \texttt{src.rows}/2 + 0.5)$$
for all pairs (i, j) such that: $\frac{\texttt{src.cols}}{4} < i < \frac{3 \cdot \texttt{src.cols}}{4}$ and $\frac{\texttt{src.rows}}{4} < j < \frac{3 \cdot \texttt{src.rows}}{4}$
(b) Turn the image upside down: $h(i, j) = (i,\; \texttt{src.rows} - j)$
        break;
      case 2:
        map_x.at<float>(j,i) = src.cols - i ;
        map_y.at<float>(j,i) = j ;
        break;
      case 3:
        map_x.at<float>(j,i) = src.cols - i ;
        map_y.at<float>(j,i) = src.rows - j ;
        break;
      } // end of switch
    }
  }
  ind++;
}
Result
1. After compiling the code above, you can execute it giving as argument an image path. For instance, by using
the following image:
2. This is the result of reducing it to half the size and centering it:
3.13. Remapping
Theory
What is an Affine Transformation?
1. It is any transformation that can be expressed in the form of a matrix multiplication (linear transformation)
followed by a vector addition (translation).
2. From the above, we can use an Affine Transformation to express:
(a) Rotations (linear transformation)
(b) Translations (vector addition)
(c) Scale operations (linear transformation)
You can see that, in essence, an Affine Transformation represents a relation between two images.
3. The usual way to represent an Affine Transform is by using a 2 x 3 matrix:

       A = [ a00  a01 ]          B = [ b00 ]
           [ a10  a11 ](2x2)         [ b10 ](2x1)

       M = [ A  B ] = [ a00  a01  b00 ]
                      [ a10  a11  b10 ](2x3)

   Considering that we want to transform a 2D vector X = [x, y]^T by using A and B, we can do it equivalently with:

       T = A * [x, y]^T + B    or    T = M * [x, y, 1]^T

   that is:

       T = [ a00*x + a01*y + b00 ]
           [ a10*x + a11*y + b10 ]
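As a quick numeric check of the formulas above, this plain-C++ sketch (our own helper, not an OpenCV call) applies a 2 x 3 matrix M, stored row-major as { a00, a01, b00, a10, a11, b10 }, to a point via T = M * [x, y, 1]^T:

```cpp
#include <array>

// T = M * [x, y, 1]^T for a 2x3 affine matrix M = [A | B],
// stored row-major as { a00, a01, b00, a10, a11, b10 }.
std::array<double, 2> apply_affine( const std::array<double, 6>& M,
                                    double x, double y )
{
    return { M[0]*x + M[1]*y + M[2],    // a00*x + a01*y + b00
             M[3]*x + M[4]*y + M[5] };  // a10*x + a11*y + b10
}
```

For instance, a pure translation (A = identity, B = (5, -2)) maps the point (3, 4) to (8, 2).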
How do we get an Affine Transformation?
1. Excellent question. We mentioned that an Affine Transformation is basically a relation between two images.
The information about this relation can come, roughly, in two ways:
(a) We know both X and T and we also know that they are related. Then our job is to find M
(b) We know M and X. To obtain T we only need to apply T = M * X. Our information for M may
be explicit (i.e. have the 2-by-3 matrix) or it can come as a geometric relation between points.
2. Let's explain (b) a little better. Since M relates two images, we can analyze the simplest case in which it
relates three points in both images. Look at the figure below:
the points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a triangle, but now
noticeably changed. If we find the Affine Transformation with these 3 points (you can choose them
as you like), then we can apply this relation to all the pixels in the image.
Code
1. What does this program do?
Loads an image
Applies an Affine Transform to the image. This Transform is obtained from the relation between three
points. We use the function warpAffine for that purpose.
Applies a Rotation to the image after being transformed. This rotation is with respect to the image center
Waits until the user exits the program
2. The tutorial code is shown below. You can also download it from here
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
/// Set the dst image the same type and size as src
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
/// Set your 3 points to calculate the Affine Transform
srcTri[0] = Point2f( 0,0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
/// Get the Affine Transform
warp_mat = getAffineTransform( srcTri, dstTri );
/// Apply the Affine Transform just found to the src image
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
/** Rotating the image after Warp */
/// Compute a rotation matrix with respect to the center of the image
Point center = Point( warp_dst.cols/2, warp_dst.rows/2 );
double angle = -50.0;
double scale = 0.6;
/// Get the rotation matrix with the specifications above
rot_mat = getRotationMatrix2D( center, angle, scale );
/// Rotate the warped image
warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
/// Show what you got
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, src );
namedWindow( warp_window, CV_WINDOW_AUTOSIZE );
imshow( warp_window, warp_dst );
namedWindow( warp_rotate_window, CV_WINDOW_AUTOSIZE );
imshow( warp_rotate_window, warp_rotate_dst );
/// Wait until user exits the program
waitKey(0);
return 0;
}
Explanation
1. Declare some variables we will use, such as the matrices to store our results and 2 arrays of points to store the
2D points that define our Affine Transform.
Point2f srcTri[3];
Point2f dstTri[3];
Mat rot_mat( 2, 3, CV_32FC1 );
2. Load an image:
src = imread( argv[1], 1 );
3. Initialize the destination image as having the same size and type as the source:
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
4. Affine Transform: As we explained above, we need two sets of 3 points to derive the affine transform
relation. Take a look:
srcTri[0] = Point2f( 0,0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
You may want to draw the points to get a better idea of how they change. Their locations are approximately
the same as the ones depicted in the example figure (in the Theory section). You may note that the size and
orientation of the triangle defined by the 3 points change.
5. Armed with both sets of points, we calculate the Affine Transform by using the OpenCV function getAffineTransform:
warp_mat = getAffineTransform( srcTri, dstTri );
8. We generate the rotation matrix with the OpenCV function getRotationMatrix2D, which returns a 2 x 3 matrix
(in this case rot_mat):
rot_mat = getRotationMatrix2D( center, angle, scale );
9. We now apply the found rotation to the output of our previous Transformation.
warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
10. Finally, we display our results in two windows plus the original image for good measure:
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, src );
namedWindow( warp_window, CV_WINDOW_AUTOSIZE );
imshow( warp_window, warp_dst );
namedWindow( warp_rotate_window, CV_WINDOW_AUTOSIZE );
imshow( warp_rotate_window, warp_rotate_dst );
11. We just have to wait until the user exits the program
waitKey(0);
Result
1. After compiling the code above, we can give it the path of an image as argument. For instance, for a picture
like:
and finally, after applying a negative rotation (remember negative means clockwise) and a scale factor, we get:
Theory
What is an Image Histogram?
It is a graphical representation of the intensity distribution of an image.
details, refer to Learning OpenCV). For the histogram H(i), its cumulative distribution H'(i) is:

    H'(i) = sum over 0 <= j < i of H(j)

To use this as a remapping function, we have to normalize H'(i) such that the maximum value is 255 (or the
maximum value for the intensity of the image). From the example above, the cumulative function is:

Finally, we use a simple remapping procedure to obtain the intensity values of the equalized image:

    equalized(x, y) = H'( src(x, y) )
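The two formulas above can be sketched in a few lines of plain C++ (no OpenCV; the helper name equalization_lut is ours). One detail: the running sum here includes bin i itself, so after scaling the last entry of the table is exactly 255:

```cpp
#include <cstddef>
#include <vector>

// Build the equalization lookup table: accumulate the histogram into its
// cumulative distribution, then scale it so the maximum value becomes 255.
// Equalizing then amounts to dst(x,y) = lut[ src(x,y) ].
std::vector<int> equalization_lut( const std::vector<long long>& hist )
{
    std::vector<long long> cdf( hist.size() );
    long long running = 0;
    for( std::size_t i = 0; i < hist.size(); i++ )
    {
        running += hist[i];     // cumulative count up to and including bin i
        cdf[i] = running;
    }
    long long total = running;  // total number of pixels
    std::vector<int> lut( hist.size() );
    for( std::size_t i = 0; i < hist.size(); i++ )
        lut[i] = (int)( 255 * cdf[i] / total );
    return lut;
}
```

This is the core of what equalizeHist does internally; the OpenCV function also handles the bookkeeping around empty bins.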
Code
What does this program do?
Loads an image
Converts the original image to grayscale
Equalizes the histogram by using the OpenCV function equalizeHist
Displays the source and equalized images in a window.
Downloadable code: Click here
Code at a glance:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
Explanation
1. Declare the source and destination images as well as the windows names:
Mat src, dst;
char* source_window = "Source image";
char* equalized_window = "Equalized Image";
3. Convert it to grayscale:
cvtColor( src, src, CV_BGR2GRAY );
As can be easily seen, the only arguments are the original image and the output (equalized) image.
5. Display both images (original and equalized):
Results
1. To better appreciate the results of equalization, let's use an image with little contrast, such as:
Notice that the pixels are clustered around the center of the histogram.
2. After applying the equalization with our program, we get this result:
This image clearly has more contrast. Check out its new histogram like this:
Notice how the number of pixels is more distributed through the intensity range.
Note: Are you wondering how we drew the Histogram figures shown above? Check out the following tutorial!
What happens if we want to count this data in an organized way? Since we know that the range of information
value for this case is 256 values, we can segment our range in subparts (called bins) like:
    [0, 255] = [0, 15] ∪ [16, 31] ∪ ... ∪ [240, 255]
    range = bin_1 ∪ bin_2 ∪ ... ∪ bin_{n = 15}
and we can keep count of the number of pixels that fall in the range of each bini . Applying this to the example
above we get the image below ( axis x represents the bins and axis y the number of pixels in each of them).
This was just a simple example of how a histogram works and why it is useful. A histogram can keep count
not only of color intensities, but of whatever image feature we want to measure (i.e. gradients, directions,
etc.).
Let's identify some parts of the histogram:
1. dims: The number of parameters for which you want to collect data. In our example, dims = 1 because we are
only counting the intensity values of each pixel (in a greyscale image).
2. bins: It is the number of subdivisions in each dim. In our example, bins = 16
3. range: The limits for the values to be measured. In this case: range = [0,255]
What if you want to count two features? In this case your resulting histogram would be a 3D plot (in which x
and y would be bin_x and bin_y for each feature and z would be the number of counts for each combination of
(bin_x, bin_y)). The same would apply for more features (of course it gets trickier).
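The binning idea above is easy to sketch in plain C++ (our own helper, not calcHist): with 16 equal bins over [0, 256), each bin is 256/16 = 16 values wide, so a value v lands in bin v / 16.

```cpp
#include <vector>

// Count values in [0, 256) into equally sized bins; with bins = 16 each bin
// covers 256 / 16 = 16 consecutive intensity values.
std::vector<int> bin_histogram( const std::vector<int>& values, int bins )
{
    int bin_width = 256 / bins;
    std::vector<int> hist( bins, 0 );
    for( int v : values )
        hist[ v / bin_width ]++;     // value v falls into bin v / bin_width
    return hist;
}
```

For example, with 16 bins the values 0 and 15 both land in bin 0 ([0, 15]), while 255 lands in bin 15 ([240, 255]).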
What OpenCV offers you
For simple purposes, OpenCV implements the function calcHist, which calculates the histogram of a set of arrays
(usually images or image planes). It can operate with up to 32 dimensions. We will see it in the code below!
Code
What does this program do?
Loads an image
Splits the image into its R, G and B planes using the function split
Calculate the Histogram of each 1-channel plane by calling the function calcHist
Plot the three histograms in a window
Downloadable code: Click here
Code at a glance:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
/**
* @function main
*/
int main( int argc, char** argv )
{
Mat src, dst;
/// Load image
src = imread( argv[1], 1 );
if( !src.data )
{ return -1; }
/// Separate the image into its 3 planes ( B, G and R )
vector<Mat> bgr_planes;
split( src, bgr_planes );
/// Establish the number of bins
int histSize = 256;
/// Set the ranges ( for B,G,R) )
float range[] = { 0, 256 } ;
const float* histRange = { range };
bool uniform = true; bool accumulate = false;
Mat b_hist, g_hist, r_hist;
/// Compute the histograms:
calcHist( &bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[2], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate );
// Draw the histograms for B, G and R
int hist_w = 512; int hist_h = 400;
int bin_w = cvRound( (double) hist_w/histSize );
Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
/// Display
namedWindow("calcHist Demo", CV_WINDOW_AUTOSIZE );
imshow("calcHist Demo", histImage );
waitKey(0);
return 0;
}
Explanation
1. Create the necessary matrices:
Mat src, dst;
3. Separate the source image into its three R, G and B planes. For this we use the OpenCV function split:
vector<Mat> bgr_planes;
split( src, bgr_planes );
our input is the image to be divided (in this case with three channels) and the output is a vector of Mat
4. Now we are ready to start configuring the histograms for each plane. Since we are working with the B, G and
R planes, we know that our values will range in the interval [0, 255]
(a) Establish number of bins (5, 10...):
int histSize = 256; //from 0 to 255
(b) Set the range of values (as we said, between 0 and 255 )
/// Set the ranges ( for B,G,R) )
float range[] = { 0, 256 } ; //the upper boundary is exclusive
const float* histRange = { range };
(c) We want our bins to have the same size (uniform) and to clear the histograms in the beginning, so:
bool uniform = true; bool accumulate = false;
(d) Finally, we create the Mat objects to save our histograms. Creating 3 (one for each plane):
Mat b_hist, g_hist, r_hist;
(e) We proceed to calculate the histograms by using the OpenCV function calcHist:
/// Compute the histograms:
calcHist( &bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[2], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate );
1: The number of source arrays (in this case we are using 1; we could also enter a list of arrays)
0: The channel (dim) to be measured. In this case it is just the intensity (each array is single-channel)
so we just write 0.
Mat(): A mask to be used on the source array ( zeros indicating pixels to be ignored ). If not defined
it is not used
b_hist: The Mat object where the histogram will be stored
1: The histogram dimensionality.
histSize: The number of bins per each used dimension
histRange: The range of values to be measured per each dimension
uniform and accumulate: The bin sizes are the same and the histogram is cleared at the beginning.
5. Create an image to display the histograms:
// Draw the histograms for R, G and B
int hist_w = 512; int hist_h = 400;
int bin_w = cvRound( (double) hist_w/histSize );
Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
6. Notice that before drawing, we first normalize the histogram so its values fall in the range indicated by the
parameters entered:
/// Normalize the result to [ 0, histImage.rows ]
normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
Scalar( 0, 0, 255), 2, 8, 0
);
where i indicates the dimension. If it were a 2D-histogram we would use something like:
b_hist.at<float>( i, j )
8. Finally we display our histograms and wait for the user to exit:
namedWindow("calcHist Demo", CV_WINDOW_AUTOSIZE );
imshow("calcHist Demo", histImage );
waitKey(0);
return 0;
Result
1. Using as input argument an image like the shown below:
Theory
To compare two histograms ( H1 and H2 ), first we have to choose a metric (d(H1 , H2 )) to express how well
both histograms match.
OpenCV implements the function compareHist to perform a comparison. It also offers 4 different metrics to
compute the matching:
1. Correlation ( method=CV_COMP_CORREL )

    d(H1, H2) = sum_I (H1(I) - H1_mean) * (H2(I) - H2_mean) / sqrt( sum_I (H1(I) - H1_mean)^2 * sum_I (H2(I) - H2_mean)^2 )

   where

    Hk_mean = (1/N) * sum_J Hk(J)

   and N is the total number of histogram bins.

2. Chi-Square ( method=CV_COMP_CHISQR )

    d(H1, H2) = sum_I ( (H1(I) - H2(I))^2 / H1(I) )

3. Intersection ( method=CV_COMP_INTERSECT )

    d(H1, H2) = sum_I min( H1(I), H2(I) )

4. Bhattacharyya ( method=CV_COMP_BHATTACHARYYA )

    d(H1, H2) = sqrt( 1 - (1 / sqrt( H1_mean * H2_mean * N^2 )) * sum_I sqrt( H1(I) * H2(I) ) )
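Two of these metrics are simple enough to sketch directly in plain C++ (our own helpers, shown only to make the formulas concrete; compareHist computes all of them for you):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Intersection: d(H1, H2) = sum_I min( H1(I), H2(I) ).
double hist_intersection( const std::vector<double>& h1,
                          const std::vector<double>& h2 )
{
    double d = 0.0;
    for( std::size_t i = 0; i < h1.size(); i++ )
        d += std::min( h1[i], h2[i] );
    return d;
}

// Correlation: covariance of the two histograms divided by the product of
// their standard deviations, exactly as in the formula above.
double hist_correlation( const std::vector<double>& h1,
                         const std::vector<double>& h2 )
{
    std::size_t n = h1.size();
    double m1 = 0.0, m2 = 0.0;
    for( std::size_t i = 0; i < n; i++ ) { m1 += h1[i]; m2 += h2[i]; }
    m1 /= n; m2 /= n;

    double num = 0.0, d1 = 0.0, d2 = 0.0;
    for( std::size_t i = 0; i < n; i++ )
    {
        num += ( h1[i] - m1 ) * ( h2[i] - m2 );
        d1  += ( h1[i] - m1 ) * ( h1[i] - m1 );
        d2  += ( h2[i] - m2 ) * ( h2[i] - m2 );
    }
    return num / std::sqrt( d1 * d2 );
}
```

Note that a histogram compared with itself yields a correlation of exactly 1.0, which is the "perfect match" value you will see in the results of this tutorial.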
Code
What does this program do?
Loads a base image and 2 test images to be compared with it.
Generates an image that is the lower half of the base image.
Converts the images to HSV format.
Calculates the H-S histogram for all the images and normalizes them in order to compare them.
Compares the histogram of the base image against the 2 test histograms, the histogram of the lower
half of the base image, and the base image histogram itself.
Displays the numerical matching parameters obtained.
Downloadable code: Click here
Code at a glance:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n", i, base_base, base_h
}
printf( "Done \n" );
return 0;
}
Explanation
1. Declare variables such as the matrices to store the base image and the two other images to compare ( RGB and
HSV )
Mat src_base, hsv_base;
Mat src_test1, hsv_test1;
Mat src_test2, hsv_test2;
Mat hsv_half_down;
2. Load the base image (src_base) and the other two test images:
if( argc < 4 )
{ printf("** Error. Usage: ./compareHist_Demo <image_settings0> <image_setting1> <image_settings2>\n");
return -1;
}
src_base = imread( argv[1], 1 );
src_test1 = imread( argv[2], 1 );
src_test2 = imread( argv[3], 1 );
4. Also, create an image of half the base image (in HSV format):
hsv_half_down = hsv_base( Range( hsv_base.rows/2, hsv_base.rows - 1 ), Range( 0, hsv_base.cols - 1 ) );
5. Initialize the arguments to calculate the histograms (bins, ranges and channels H and S ):

int h_bins = 50; int s_bins = 32;
int histSize[] = { h_bins, s_bins };

// hue varies from 0 to 179, saturation from 0 to 255
float h_ranges[] = { 0, 180 };
float s_ranges[] = { 0, 256 };

const float* ranges[] = { h_ranges, s_ranges };

// Use the 0-th and 1-st channels
int channels[] = { 0, 1 };

6. Create the MatND objects to store the histograms:

MatND hist_base;
MatND hist_half_down;
MatND hist_test1;
MatND hist_test2;
7. Calculate the Histograms for the base image, the 2 test images and the half-down base image:
8. Apply sequentially the 4 comparison methods between the histogram of the base image (hist_base) and the other
histograms:
for( int i = 0; i < 4; i++ )
{ int compare_method = i;
double base_base = compareHist( hist_base, hist_base, compare_method );
double base_half = compareHist( hist_base, hist_half_down, compare_method );
double base_test1 = compareHist( hist_base, hist_test1, compare_method );
double base_test2 = compareHist( hist_base, hist_test2, compare_method );
printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n", i, base_base, base
}
Results
1. We use as input the following images:
where the first one is the base (to be compared to the others), and the other 2 are the test images. We will also
compare the first image with respect to itself and with respect to half the base image.
2. We should expect a perfect match when we compare the base image histogram with itself. Also, compared with
the histogram of half the base image, it should present a high match since both are from the same source. For the
other two test images, we can observe that they have very different lighting conditions, so the matching should
not be very good:
3. Here the numeric results:
Method          | Base - Base | Base - Half | Base - Test 1 | Base - Test 2
Correlation     | 1.000000    | 0.930766    | 0.182073      | 0.120447
Chi-square      | 0.000000    | 4.940466    | 21.184536     | 49.273437
Intersection    | 24.391548   | 14.959809   | 3.889029      | 5.775088
Bhattacharyya   | 0.000000    | 0.222609    | 0.646576      | 0.801869
For the Correlation and Intersection methods, the higher the metric, the more accurate the match. As
we can see, the match base-base is the highest of all, as expected. We can also observe that the match
base-half is the second best match (as we predicted). For the other two metrics, the lower the result, the
better the match. We can observe that the matches between test 1 and test 2 with respect to the base
are worse, which, again, was expected.
Theory
What is Back Projection?
Back Projection is a way of recording how well the pixels of a given image fit the distribution of pixels in a
histogram model.
To make it simpler: For Back Projection, you calculate the histogram model of a feature and then use it to find
this feature in an image.
Application example: If you have a histogram of flesh color (say, a Hue-Saturation histogram ), then you can
use it to find flesh color areas in an image:
How does it work?
We will explain this by using the skin example:
Let's say you have obtained a skin histogram (Hue-Saturation) based on the image below. The histogram beside it
is going to be our model histogram (which we know represents a sample of skin tonality). You applied some
mask to capture only the histogram of the skin area:
Now, let's imagine that you get another hand image (Test Image) like the one below (with its respective histogram):
What we want to do is to use our model histogram (that we know represents a skin tonality) to detect skin areas
in our Test Image. Here are the steps:
1. For each pixel of our Test Image ( i.e. p(i, j) ), collect the data and find the correspondent bin location for
that pixel ( i.e. ( h(i,j), s(i,j) ) ).
2. Look up the model histogram at the correspondent bin ( h(i,j), s(i,j) ) and read the bin value.
3. Store this bin value in a new image (BackProjection). You may also want to normalize the model
histogram first, so that the output for the Test Image is visible.
4. Applying the steps above, we get the following BackProjection image for our Test Image:
5. In terms of statistics, the values stored in BackProjection represent the probability that a pixel in the Test
Image belongs to a skin area, based on the model histogram that we use. For instance, in our Test Image
the brighter areas are more likely to be skin (as they actually are), whereas the darker areas are
less likely (notice that these dark areas belong to surfaces that have some shadow on them, which in
turn affects the detection).
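The steps above can be sketched in plain C++ for a 1-D hue histogram (our own back_project helper, not the OpenCV call; calcBackProject generalizes this lookup to several channels):

```cpp
#include <cstddef>
#include <vector>

// Replace every pixel value by the model-histogram count of the bin that
// value falls into: a high output means "this value is common in the model".
std::vector<int> back_project( const std::vector<int>& image,
                               const std::vector<int>& model_hist,
                               int bin_width )
{
    std::vector<int> backproj( image.size() );
    for( std::size_t p = 0; p < image.size(); p++ )
        backproj[p] = model_hist[ image[p] / bin_width ];
    return backproj;
}
```

With a normalized model histogram (maximum 255, as in the demo), pixels whose hue falls in a well-populated model bin come out bright in the backprojection.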
Code
What does this program do?
Loads an image
Converts the original to HSV format and separates only the Hue channel to be used for the Histogram (using the
OpenCV function mixChannels)
Lets the user enter the number of bins to be used in the calculation of the histogram
Calculates the histogram (and updates it if the bins change) and the backprojection of the same image
Displays the backprojection and the histogram in windows
Downloadable code:
1. Click here for the basic version (explained in this tutorial).
2. For stuff slightly fancier (using H-S histograms and floodFill to define a mask for the skin area) you can
check the improved demo
3. ...or you can always check out the classical camshiftdemo in samples.
Code at a glance:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
/// Global Variables
Mat src; Mat hsv; Mat hue;
int bins = 25;
/**
* @function Hist_and_Backproj
* @brief Callback to Trackbar
*/
void Hist_and_Backproj(int, void* )
{
MatND hist;
int histSize = MAX( bins, 2 );
float hue_range[] = { 0, 180 };
const float* ranges = { hue_range };
/// Get the Histogram and normalize it
calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );
/// Get Backprojection
MatND backproj;
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );
/// Draw the backproj
imshow( "BackProj", backproj );
/// Draw the histogram
int w = 400; int h = 400;
int bin_w = cvRound( (double) w / histSize );
Mat histImg = Mat::zeros( w, h, CV_8UC3 );
Explanation
1. Declare the matrices to store our images and initialize the number of bins to be used by our histogram:
Mat src; Mat hsv; Mat hue;
int bins = 25;
3. For this tutorial, we will use only the Hue value for our 1-D histogram (check out the fancier code in the links
above if you want to use the more standard H-S histogram, which yields better results):
hue.create( hsv.size(), hsv.depth() );
int ch[] = { 0, 0 };
mixChannels( &hsv, 1, &hue, 1, ch, 1 );
5. Show the image and wait for the user to exit the program:
imshow( window_image, src );
waitKey(0);
return 0;
6. Hist_and_Backproj function: Initialize the arguments needed for calcHist. The number of bins comes from
the Trackbar:
8. Get the Backprojection of the same image by calling the function calcBackProject
MatND backproj;
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );
all the arguments are known (the same as those used to calculate the histogram), except that we add the backproj matrix,
which will store the backprojection of the source image (&hue)
9. Display backproj:
imshow( "BackProj", backproj );
Results
1. Here is the output using a sample image ( guess what? Another hand ). You can play with the bin values
and observe how they affect the results:
Theory
What is template matching?
Template matching is a technique for finding areas of an image that match (are similar) to a template image (patch).
How does it work?
We need two primary components:
1. Source image (I): The image in which we expect to find a match to the template image
2. Template image (T): The patch image which will be compared to the source image
our goal is to detect the highest matching area:
To identify the matching area, we have to compare the template image against the source image by sliding it:
By sliding, we mean moving the patch one pixel at a time (left to right, top to bottom). At each location, a metric
is calculated that represents how good or bad the match at that location is (or how similar the patch is to
that particular area of the source image).
For each location of T over I, you store the metric in the result matrix (R). Each location (x, y) in R contains
the match metric:
the image above is the result R of sliding the patch with a metric TM_CCORR_NORMED. The brightest
locations indicate the highest matches. As you can see, the location marked by the red circle is probably the
one with the highest value, so that location (the rectangle formed by that point as a corner and width and height
equal to the patch image) is considered the match.
In practice, we use the function minMaxLoc to locate the highest value (or lowest, depending on the type of
matching method) in the R matrix.
Which are the matching methods available in OpenCV?
Good question. OpenCV implements Template matching in the function matchTemplate. There are 6 available
methods:
1. method=CV_TM_SQDIFF

    R(x, y) = sum_{x',y'} ( T(x', y') - I(x + x', y + y') )^2

2. method=CV_TM_SQDIFF_NORMED

    R(x, y) = sum_{x',y'} ( T(x', y') - I(x + x', y + y') )^2 / sqrt( sum_{x',y'} T(x', y')^2 * sum_{x',y'} I(x + x', y + y')^2 )

3. method=CV_TM_CCORR

    R(x, y) = sum_{x',y'} ( T(x', y') * I(x + x', y + y') )

4. method=CV_TM_CCORR_NORMED

    R(x, y) = sum_{x',y'} ( T(x', y') * I(x + x', y + y') ) / sqrt( sum_{x',y'} T(x', y')^2 * sum_{x',y'} I(x + x', y + y')^2 )

5. method=CV_TM_CCOEFF

    R(x, y) = sum_{x',y'} ( T'(x', y') * I(x + x', y + y') )

   where

    T'(x', y') = T(x', y') - 1/(w*h) * sum_{x'',y''} T(x'', y'')
    I'(x + x', y + y') = I(x + x', y + y') - 1/(w*h) * sum_{x'',y''} I(x + x'', y + y'')

6. method=CV_TM_CCOEFF_NORMED

    R(x, y) = sum_{x',y'} ( T'(x', y') * I'(x + x', y + y') ) / sqrt( sum_{x',y'} T'(x', y')^2 * sum_{x',y'} I'(x + x', y + y')^2 )
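To make the first method concrete, here is a plain-C++ sketch of TM_SQDIFF (our own match_sqdiff helper over row-major vectors, not matchTemplate): the score at each location is the sum of squared differences, and the best match is the minimum of R.

```cpp
#include <cstddef>
#include <vector>

// R(x, y) = sum over the template of ( T(x', y') - I(x + x', y + y') )^2.
// I and T are row-major single-channel images; R has size
// (iw - tw + 1) x (ih - th + 1), one score per candidate location.
std::vector<long long> match_sqdiff( const std::vector<int>& I, int iw, int ih,
                                     const std::vector<int>& T, int tw, int th )
{
    int rw = iw - tw + 1, rh = ih - th + 1;
    std::vector<long long> R( (std::size_t)rw * rh, 0 );
    for( int y = 0; y < rh; y++ )
      for( int x = 0; x < rw; x++ )
        for( int yp = 0; yp < th; yp++ )
          for( int xp = 0; xp < tw; xp++ )
          {
              long long d = T[yp*tw + xp] - I[(y + yp)*iw + (x + xp)];
              R[y*rw + x] += d * d;
          }
    return R;
}
```

An exact occurrence of the template scores 0, which is why for SQDIFF we look for the minimum of R (as the demo does with minMaxLoc).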
Code
What does this program do?
Loads an input image and an image patch (template)
Performs a template matching procedure by using the OpenCV function matchTemplate with any of the
6 matching methods described before. The user can choose the method by entering its selection in the
Trackbar.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
/// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
if( match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED )
{ matchLoc = minLoc; }
else
{ matchLoc = maxLoc; }
/// Show me what you got
rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
rectangle( result, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
imshow( image_window, img_display );
imshow( result_window, result );
return;
}
Explanation
1. Declare some global variables, such as the image, template and result matrices, as well as the match method and
the window names:
Mat img; Mat templ; Mat result;
char* image_window = "Source Image";
char* result_window = "Result window";
int match_method;
int max_Trackbar = 5;
4. Create the Trackbar to enter the kind of matching method to be used. When a change is detected the callback
function MatchingMethod is called.
6. Let's check out the callback function. First, it makes a copy of the source image:
Mat img_display;
img.copyTo( img_display );
7. Next, it creates the result matrix that will store the matching results for each template location. Observe
the size of the result matrix, which covers all possible locations of the template:

int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;

result.create( result_rows, result_cols, CV_32FC1 );
the arguments are naturally the input image I, the template T, the result R and the match_method (given by the
Trackbar)
9. We normalize the results:
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
10. We localize the minimum and maximum values in the result matrix R by using minMaxLoc.
double minVal; double maxVal; Point minLoc; Point maxLoc;
Point matchLoc;
minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
12. Display the source image and the result matrix. Draw a rectangle around the highest possible matching area:
Results
1. Testing our program with an input image such as:
2. It generates the following result matrices (first row: the standard methods SQDIFF, CCORR and CCOEFF;
second row: the same methods in their normalized versions). In the first column, the darker a location, the better
the match; for the other two columns, the brighter a location, the higher the match.
3. The right match is shown below (black rectangle around the face of the guy at the right). Notice that CCORR
and CCOEFF gave erroneous best matches; however, their normalized versions got it right. This may be due to the
fact that we are only considering the highest match and not the other possible high matches.
Theory
Code
This tutorial's code is shown below. You can also download it from here

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
Explanation
Result
1. Here it is:
Theory
Code
This tutorial's code is shown below. You can also download it from here

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
Explanation
Result
1. Here it is:
Theory
Code
This tutorial's code is shown below. You can also download it from here

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
Explanation
Result
1. Here it is:
Theory
Code
This tutorial's code is shown below. You can also download it from here

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
Explanation
Result
1. Here it is:
Theory
Code
This tutorial's code is shown below. You can also download it from here

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
printf(" * Contour[%d] - Area (M_00) = %.2f - Area OpenCV: %.2f - Length: %.2f \n", i, mu[i].m00, contourArea(c
Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
circle( drawing, mc[i], 4, color, -1, 8, 0 );
}
}
Explanation
Result
1. Here it is:
Theory
Code
This tutorial's code is shown below. You can also download it from here

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
vert[0] = Point( 1.5*r, 1.34*r );
vert[1] = Point( 1*r, 2*r );
vert[2] = Point( 1.5*r, 2.866*r );
vert[3] = Point( 2.5*r, 2.866*r );
vert[4] = Point( 3*r, 2*r );
vert[5] = Point( 2.5*r, 1.34*r );
Explanation
Result
1. Here it is:
CHAPTER
FOUR
In this tutorial we will just modify our two previous programs so that they get the input information from the
trackbar.
Goals
In this tutorial you will learn how to:
Add a Trackbar in an OpenCV window by using createTrackbar
Code
Let's modify the program made in the tutorial Adding (blending) two images using OpenCV. We will let the user enter
the alpha value by using the Trackbar.
#include <cv.h>
#include <highgui.h>
using namespace cv;
/// Global Variables
const int alpha_slider_max = 100;
int alpha_slider;
double alpha;
double beta;
/// Matrices to store images
Mat src1;
Mat src2;
Mat dst;
/**
* @function on_trackbar
* @brief Callback for trackbar
*/
void on_trackbar( int, void* )
{
alpha = (double) alpha_slider/alpha_slider_max ;
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
}
int main( int argc, char** argv )
{
/// Read image ( same size, same type )
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
/// Initialize values
alpha_slider = 0;
/// Create Windows
namedWindow("Linear Blend", 1);
/// Create Trackbars
char TrackbarName[50];
sprintf( TrackbarName, "Alpha x %d", alpha_slider_max );
createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
/// Show some stuff
on_trackbar( alpha_slider, 0 );
/// Wait until user press some key
waitKey(0);
return 0;
}
Explanation
We only analyze the code that is related to Trackbar:
1. First, we load two images, which are going to be blended.
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
2. To create a trackbar, first we have to create the window in which it is going to be located. So:
namedWindow("Linear Blend", 1);
Note that:
We use the value of alpha_slider (integer) to get a double value for alpha.
alpha_slider is updated each time the trackbar is displaced by the user.
We define src1, src2, dst, alpha, alpha_slider and beta as global variables, so they can be used everywhere.
Result
Our program produces the following output:
As a manner of practice, you can also add two trackbars for the program made in Changing the contrast and
brightness of an image!: one trackbar to set α and another for β. The output might look like:
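The slider-to-alpha mapping that on_trackbar performs can be sketched without highgui at all. The helper below is a plain C++ mock-up, with no OpenCV dependency; the function name and flat-array representation of images are illustrative, not part of the tutorial's sample:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Mimic the on_trackbar callback: map an integer slider position to alpha,
// then blend two "images" (flat pixel arrays) as dst = alpha*src1 + beta*src2.
std::vector<double> linearBlend(const std::vector<double>& src1,
                                const std::vector<double>& src2,
                                int alpha_slider, int alpha_slider_max)
{
    double alpha = (double) alpha_slider / alpha_slider_max;
    double beta  = 1.0 - alpha;                     // weights sum to 1
    std::vector<double> dst(src1.size());
    for (size_t i = 0; i < src1.size(); ++i)
        dst[i] = alpha * src1[i] + beta * src2[i];  // same math as addWeighted
    return dst;
}
```

With the slider at its maximum the output is pure src1; at zero it is pure src2, which is exactly why alpha_slider is initialized to 0 in the sample.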
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>

#include <opencv2/imgproc/imgproc.hpp>   // Gaussian Blur
#include <opencv2/core/core.hpp>         // Basic OpenCV structures (cv::Mat, Scalar)
#include <opencv2/highgui/highgui.hpp>   // OpenCV window I/O

char c;
int frameNum = -1;          // Frame counter

VideoCapture captRefrnc(sourceReference),
             captUndTst(sourceCompareWith);

if ( !captRefrnc.isOpened())
{
    cout << "Could not open reference " << sourceReference << endl;
    return -1;
}

if( !captUndTst.isOpened())
{
    cout << "Could not open case test " << sourceCompareWith << endl;
    return -1;
}

if (refS != uTSi)
{
    cout << "Inputs have different size!!! Closing." << endl;
    return -1;
}

// Windows
namedWindow(WIN_RF, CV_WINDOW_AUTOSIZE );
namedWindow(WIN_UT, CV_WINDOW_AUTOSIZE );
cvMoveWindow(WIN_RF, 400       , 0);
cvMoveWindow(WIN_UT, refS.width, 0);

cout << "Reference frame resolution: Width=" << refS.width << " Height=" << refS.height
     << " of nr#: " << captRefrnc.get(CV_CAP_PROP_FRAME_COUNT) << endl;

while( true) //Show the image captured in the window and repeat
{
    captRefrnc >> frameReference;
    captUndTst >> frameUnderTest;

    ++frameNum;
    cout << "Frame:" << frameNum << "# ";

    c = cvWaitKey(delay);
    if (c == 27) break;
}

return 0;

Scalar s = sum(s1);

Mat I2_2  = I2.mul(I2);   // I2^2
Mat I1_2  = I1.mul(I1);   // I1^2
Mat I1_I2 = I1.mul(I2);   // I1 * I2

Mat mu1_2   = mu1.mul(mu1);
Mat mu2_2   = mu2.mul(mu2);
Mat mu1_mu2 = mu1.mul(mu2);

t1 = 2 * mu1_mu2 + C1;
t2 = 2 * sigma12 + C2;
t3 = t1.mul(t2);

Mat ssim_map;
divide(t3, t1, ssim_map);   // ssim_map = t3./t1;
We do a similarity check. This requires a reference and a test case video file. The first two arguments refer to these.
Here we use a relative address: the application will look in its current working directory, open the video folder, and
try to find the Megamind.avi and the Megamind_bug.avi inside it.
const string sourceReference = argv[1],sourceCompareWith = argv[2];
VideoCapture captRefrnc(sourceReference);
// or
VideoCapture captUndTst;
captUndTst.open(sourceCompareWith);
To check whether the binding of the class to a video source was successful, use the isOpened function:
if ( !captRefrnc.isOpened())
{
cout << "Could not open reference " << sourceReference << endl;
return -1;
}
Closing the video is automatic when the object's destructor is called. However, if you want to close it before this you
need to call its release function. The frames of the video are just simple images. Therefore, we just need to extract
them from the VideoCapture object and put them inside a Mat one. The video streams are sequential. You may get the
frames one after another by the read or the overloaded >> operator:
Mat frameReference, frameUnderTest;
captRefrnc >> frameReference;
captUndTst.read(frameUnderTest);
The upper read operations will leave the Mat objects empty if no frame could be acquired (either because the video
stream was closed or you got to the end of the video file). We can check this with a simple if:
if( frameReference.empty() || frameUnderTest.empty())
{
    // exit the program
}
A read method is made up of a frame grab and a decoding applied to it. You may call these two explicitly by using
the grab and then the retrieve functions.
Videos have a great deal of information attached to them besides the content of the frames. These are usually numbers,
though in some cases they may be short character sequences (4 bytes or less). Due to this, to acquire this information
there is a general function named get that returns double values containing these properties. Use bitwise operations to
decode the characters from a double type, and conversions where valid values are only integers. Its single argument is
the ID of the queried property. For example, here we get the size of the frames in the reference and test case video file,
plus the number of frames inside the reference.
Size refS = Size((int) captRefrnc.get(CV_CAP_PROP_FRAME_WIDTH),
                 (int) captRefrnc.get(CV_CAP_PROP_FRAME_HEIGHT)),
     uTSi = Size((int) captUndTst.get(CV_CAP_PROP_FRAME_WIDTH),
                 (int) captUndTst.get(CV_CAP_PROP_FRAME_HEIGHT));
cout << "Reference frame resolution: Width=" << refS.width << " Height=" << refS.height
<< " of nr#: " << captRefrnc.get(CV_CAP_PROP_FRAME_COUNT) << endl;
When you are working with videos you may often want to control these values yourself. To do this there is a set
function. Its first argument remains the name of the property you want to change, and there is a second of double type
containing the value to be set. It will return true if it succeeds and false otherwise. A good example of this is seeking
in a video file to a given time or frame:
captRefrnc.set(CV_CAP_PROP_POS_MSEC, 1200); // go to 1.2 seconds into the video
captRefrnc.set(CV_CAP_PROP_POS_FRAMES, 10); // go to the 10th frame of the video
// now a read operation would read the frame at the set position
For the properties you can read and change, look into the documentation of the get and set functions.
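The relation between the millisecond and frame-index seek properties is just arithmetic on the frame rate. The helpers below are an illustrative plain C++ sketch (no OpenCV; the function names are made up for this example):

```cpp
#include <cassert>

// Convert a position in milliseconds to a frame index at the given frame rate,
// mirroring what CV_CAP_PROP_POS_MSEC and CV_CAP_PROP_POS_FRAMES express.
int msecToFrame(double msec, double fps)  { return (int)(msec / 1000.0 * fps); }

// The inverse mapping: which timestamp a given frame index corresponds to.
double frameToMsec(int frame, double fps) { return frame * 1000.0 / fps; }
```

At 25 frames per second, 1.2 seconds into the video is frame 30, so the two set calls above can land on nearby but distinct positions.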
MSE = 1/(c*i*j) * Σ (I1 - I2)^2

PSNR = 10 * log10( MAX_I^2 / MSE )
Here MAX_I is the maximum valid value for a pixel. In case of the simple single byte per pixel per channel image
this is 255. When the two images are the same, the MSE will be zero, resulting in an invalid divide-by-zero operation
in the PSNR formula. In this case the PSNR is undefined and we need to handle this case separately. The transition
to a logarithmic scale is made because the pixel values have a very wide dynamic range. All this translated to OpenCV
and a C++ function looks like:
double getPSNR(const Mat& I1, const Mat& I2)
{
    Mat s1;
    absdiff(I1, I2, s1);        // |I1 - I2|
    s1.convertTo(s1, CV_32F);   // cannot make a square on 8 bits
    s1 = s1.mul(s1);            // |I1 - I2|^2
    Scalar s = sum(s1);         // sum elements per channel
    double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels
    if( sse <= 1e-10) // for small values return zero
        return 0;
    else
    {
        double mse  = sse / (double)(I1.channels() * I1.total());
        double psnr = 10.0 * log10((255 * 255) / mse);
        return psnr;
    }
}
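The same computation can be written without OpenCV at all, which makes the formula easy to check by hand. This is a dependency-free sketch for a single-channel 8-bit image stored as a flat array (so MAX_I = 255); the function name is illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// PSNR for two equally-sized single-channel 8-bit "images" (flat arrays).
double psnrFlat(const std::vector<unsigned char>& I1,
                const std::vector<unsigned char>& I2)
{
    double sse = 0;
    for (size_t i = 0; i < I1.size(); ++i)
    {
        double d = (double)I1[i] - (double)I2[i]; // I1 - I2
        sse += d * d;                             // accumulate squared error
    }
    if (sse <= 1e-10)                             // identical images: MSE = 0,
        return 0;                                 // PSNR undefined, handled separately
    double mse = sse / I1.size();
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```

Two images differing by 5 everywhere give an MSE of 25 and a PSNR of about 34.15 dB, squarely inside the 30 to 50 range mentioned below.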
Typically, result values are anywhere between 30 and 50 for video compression, where higher is better. If the images
significantly differ you'll get much lower values, such as 15. This similarity check is easy and fast to calculate;
however, in practice it may turn out somewhat inconsistent with human eye perception. The structural similarity
algorithm aims to correct this.
Describing the method goes well beyond the purpose of this tutorial. For that I invite you to read the article introducing
it. Nevertheless, you can get a good idea of it by looking at the OpenCV implementation below.
See Also:
SSIM is described in more depth in: Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality
assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp.
600-612, Apr. 2004.
Scalar getMSSIM( const Mat& i1, const Mat& i2)
{
    const double C1 = 6.5025, C2 = 58.5225;
    /***************************** INITS **********************************/
    int d = CV_32F;

    Mat I1, I2;
    i1.convertTo(I1, d);
    i2.convertTo(I2, d);

    Mat I2_2  = I2.mul(I2);   // I2^2
    Mat I1_2  = I1.mul(I1);   // I1^2
    Mat I1_I2 = I1.mul(I2);   // I1 * I2

    Mat mu1_2   = mu1.mul(mu1);
    Mat mu2_2   = mu2.mul(mu2);
    Mat mu1_mu2 = mu1.mul(mu2);

    // ssim_map = t3./t1;
This will return a similarity index for each channel of the image. This value is between zero and one, where one
corresponds to a perfect fit. Unfortunately, the many Gaussian blur operations are quite costly, so while the PSNR may
work in a real-time-like environment (24 frames per second), this will take significantly more than that to accomplish
similar performance.
Therefore, the source code presented at the start of the tutorial will perform the PSNR measurement for each frame,
and the SSIM only for the frames where the PSNR falls below an input value. For visualization purposes we show both
images in an OpenCV window and print the PSNR and MSSIM values to the console. Expect to see something like:
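To make the SSIM terms (means, variances, covariance, and the two stabilizing constants) concrete, here is a global, single-window SSIM sketch in plain C++. It is a simplification for illustration: the OpenCV version above computes the same quantities per pixel over Gaussian-weighted windows, not once over the whole image:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Global (single-window) SSIM for two flat arrays of 8-bit-range values.
double ssimGlobal(const std::vector<double>& x, const std::vector<double>& y)
{
    const double C1 = 6.5025, C2 = 58.5225;  // (0.01*255)^2 and (0.03*255)^2
    double mu1 = 0, mu2 = 0;
    for (size_t i = 0; i < x.size(); ++i) { mu1 += x[i]; mu2 += y[i]; }
    mu1 /= x.size(); mu2 /= y.size();

    double s1 = 0, s2 = 0, s12 = 0;          // variances and covariance
    for (size_t i = 0; i < x.size(); ++i)
    {
        s1  += (x[i] - mu1) * (x[i] - mu1);
        s2  += (y[i] - mu2) * (y[i] - mu2);
        s12 += (x[i] - mu1) * (y[i] - mu2);
    }
    s1 /= x.size(); s2 /= x.size(); s12 /= x.size();

    double t1 = 2 * mu1 * mu2 + C1, t2 = 2 * s12 + C2;          // numerator
    double d1 = mu1 * mu1 + mu2 * mu2 + C1, d2 = s1 + s2 + C2;  // denominator
    return (t1 * t2) / (d1 * d2);
}
```

Identical inputs score exactly one, and the score drops as the means and variances drift apart, which is the behavior the per-channel Scalar result of getMSSIM reports.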
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

VideoCapture inputVideo(source);              // Open input
if ( !inputVideo.isOpened())
{
    cout << "Could not open the input video." << source << endl;
    return -1;
}

VideoWriter outputVideo;                      // Open the output
if (askOutputType)
    outputVideo.open(NAME, ex=-1, inputVideo.get(CV_CAP_PROP_FPS), S, true);
else
    outputVideo.open(NAME, ex, inputVideo.get(CV_CAP_PROP_FPS), S, true);

if (!outputVideo.isOpened())
{
    cout << "Could not open the output video for write: " << source << endl;
    return -1;
}

cout << "Input frame resolution: Width=" << S.width << " Height=" << S.height
     << " of nr#: " << inputVideo.get(CV_CAP_PROP_FRAME_COUNT) << endl;
cout << "Input codec type: " << EXT << endl;

int channel = 2;    // Select the channel to save
switch(argv[2][0])
{
case 'R' : {channel = 2; break;}
case 'G' : {channel = 1; break;}
case 'B' : {channel = 0; break;}
}
Mat src, res;
vector<Mat> spl;

while( true) //Show the image captured in the window and repeat
{
    inputVideo >> src;          // read
    if( src.empty()) break;     // check if at end

    split(src, spl);            // process - extract only the correct channel
    for( int i =0; i < 3; ++i)
        if (i != channel)
            spl[i] = Mat::zeros(S, spl[0].type());
    merge(spl, res);

    //outputVideo.write(res); //save or
    outputVideo << res;
}
As you can see, things can get really complicated with videos. However, OpenCV is mainly a computer vision library,
not a video stream, codec, and container-writing one. Therefore, the developers tried to keep this part as simple as
possible. Due to this, for video containers OpenCV supports only the avi extension, its first version. A direct limitation
of this is that you cannot save a video file larger than 2 GB. Furthermore, you can only create and expand a single
video track inside the container. No audio or other track editing support here. Nevertheless, any video codec present
on your system might work. If you encounter some of these limitations you will need to look into more specialized
video writing libraries such as FFMpeg, or codecs such as HuffYUV, CorePNG and LCL. As an alternative, create the
video track with OpenCV and expand it with sound tracks or convert it to other formats by using video manipulation
programs such as VirtualDub or AviSynth.
2. The codec to use for the video track. All video codecs have a unique short name of at most four characters;
hence the XVID, DIVX or H264 names. This is called a four character code. You may also ask this
from an input video by using its get function. Because the get function is a general function, it always returns
double values. A double value is stored on 64 bits. Four characters are four bytes, meaning 32 bits. These four
characters are coded in the lower 32 bits of the double. A simple way to throw away the upper 32 bits is to
convert this value to int:
VideoCapture inputVideo(source);                                   // Open input
int ex = static_cast<int>(inputVideo.get(CV_CAP_PROP_FOURCC));     // Get Codec Type- Int form
OpenCV internally works with this integer type and expects this as its second parameter. To convert from the
integer form to a string we may use two methods: a bitwise operator and a union method. The first one, extracting
the characters from an int, looks like this (an and operation, some shifting, and adding a 0 at the end to close the
string):
char EXT[] = {ex & 0XFF , (ex & 0XFF00) >> 8,(ex & 0XFF0000) >> 16,(ex & 0XFF000000) >> 24, 0};
The second method uses a union:
union { int v; char c[5]; } uEx;
uEx.v = ex;       // From Int to char via union
uEx.c[4] = '\0';
The advantage of this is that the conversion is done automatically after the assignment, while for the bitwise operator
you need to redo the operations whenever you change the codec type. In case you know the codec's four-character
code beforehand, you can use the CV_FOURCC macro to build the integer:
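Both directions of the four-character-code conversion are plain bit manipulation and can be tested without OpenCV. The sketch below is illustrative (the function names are made up; the second function mirrors what the CV_FOURCC macro computes):

```cpp
#include <cassert>
#include <string>

// Decode a FOURCC integer (as obtained from get(CV_CAP_PROP_FOURCC) after the
// double-to-int cast) into its four-character string form.
std::string fourccToString(int ex)
{
    char ext[] = { (char)(ex & 0xFF),
                   (char)((ex & 0xFF00) >> 8),
                   (char)((ex & 0xFF0000) >> 16),
                   (char)((ex & 0xFF000000) >> 24),
                   0 };                      // terminating zero closes the string
    return std::string(ext);
}

// Build the FOURCC integer from four characters, as the CV_FOURCC macro does.
int stringToFourcc(char c1, char c2, char c3, char c4)
{
    return (c1 & 255) + ((c2 & 255) << 8) + ((c3 & 255) << 16) + ((c4 & 255) << 24);
}
```

Round-tripping a codec name through both helpers returns the original four characters, confirming that the code really does live in the lower 32 bits.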
If you pass minus one for this argument then a window will pop up at runtime containing all the codecs installed
on your system, asking you to select the one to use:
3. The frames per second for the output video. Again, here I keep the input video's frame rate by using the
get function.
4. The size of the frames for the output video. Here too I keep the input video's frame size by using the
get function.
5. The final argument is an optional one. By default it is true and says that the output will be a color video (so for
write you will send three-channel images). To create a grayscale video, pass false here.
Here is how I use it in the sample:
VideoWriter outputVideo;
Size S = Size((int) inputVideo.get(CV_CAP_PROP_FRAME_WIDTH),
(int) inputVideo.get(CV_CAP_PROP_FRAME_HEIGHT));
Afterwards, you use the isOpened() function to find out whether the open operation succeeded. The video file
automatically closes when the VideoWriter object is destroyed. After you open the object with success you can send
the frames of the video in sequential order by using the write function of the class. Alternatively, you can use its
overloaded operator <<:
outputVideo.write(res);  //or
outputVideo << res;
Extracting a color channel from an RGB image means setting to zero the values of the other channels. You can
either do this with image scanning operations or by using the split and merge operations. You first split the channels
up into different images, set the other channels to zero images of the same size and type, and finally merge them back:
split(src, spl);
// process - extract only the correct channel
for( int i =0; i < 3; ++i)
if (i != channel)
spl[i] = Mat::zeros(S, spl[0].type());
merge(spl, res);
Put all this together and you'll get the source code above, whose runtime result will show something like:
CHAPTER
FIVE
Pose estimation
Now, let us write code that detects a chessboard in a new image and finds its distance from the camera. You can
apply the same method to any object with known 3D geometry that you can detect in an image.
Test data: use chess_test*.jpg images from your data folder.
1. Create an empty console project. Load a test image:
Mat img = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
3. Now, write a function that generates a vector<Point3f> array of 3D coordinates of a chessboard in any coordinate
system. For simplicity, let us choose a system such that one of the chessboard corners is at the origin and
the board is in the plane z = 0.
4. Read camera parameters from XML/YAML file:
FileStorage fs(filename, FileStorage::READ);
Mat intrinsics, distortion;
fs["camera_matrix"] >> intrinsics;
fs["distortion_coefficients"] >> distortion;
6. Calculate reprojection error like it is done in the calibration sample (see
opencv/samples/cpp/calibration.cpp, function computeReprojectionErrors).
Question: how to calculate the distance from the camera origin to any of the corners?
this. Furthermore, with calibration you may also determine the relation between the camera's natural units (pixels)
and the real-world units (for example, millimeters).
Theory
For the distortion OpenCV takes into account the radial and tangential factors. For the radial factor one uses the
following formula:
x_corrected = x(1 + k1*r^2 + k2*r^4 + k3*r^6)
y_corrected = y(1 + k1*r^2 + k2*r^4 + k3*r^6)
So for an old pixel point at (x, y) coordinates in the input image, its position in the corrected output image will be
(x_corrected, y_corrected). The presence of the radial distortion manifests in the form of the barrel or fish-eye effect.
Tangential distortion occurs because the image-taking lenses are not perfectly parallel to the imaging plane. It is
corrected via the formulas:
x_corrected = x + [2*p1*x*y + p2*(r^2 + 2*x^2)]
y_corrected = y + [p1*(r^2 + 2*y^2) + 2*p2*x*y]
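The standard radial plus tangential model can be exercised numerically with a few lines of plain C++. This is a sketch, not OpenCV's implementation, and the coefficient values used in the checks are made up for illustration:

```cpp
#include <cassert>
#include <cmath>

// Apply radial (k1, k2, k3) and tangential (p1, p2) distortion to a
// normalized image point (x, y); results go to (xd, yd).
void distortPoint(double x, double y,
                  double k1, double k2, double k3, double p1, double p2,
                  double& xd, double& yd)
{
    double r2 = x * x + y * y;                                   // squared radius
    double radial = 1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x);
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;
}
```

With all coefficients zero the point is unchanged; a positive k1 pushes points outward, which is the barrel effect described above.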
So we have five distortion parameters, which in OpenCV are organized in a one-row matrix with five columns:

Distortion_coefficients = (k1  k2  p1  p2  k3)

Now for the unit conversion, we use the following formula:

[x]   [fx  0  cx] [X]
[y] = [ 0 fy  cy] [Y]
[w]   [ 0  0   1] [Z]
Here the presence of w is explained by the use of a homogeneous coordinate system (and w = Z). The unknown
parameters are fx and fy (camera focal lengths) and (cx, cy), the optical centers expressed in pixel coordinates. If for
both axes a common focal length is used with a given aspect ratio a (usually 1), then fy = fx*a and in the upper
formula we will have a single focal length f. The matrix containing these four parameters is referred to as the camera
matrix. While the distortion coefficients are the same regardless of the camera resolution used, the camera matrix
should be scaled along with the current resolution from the calibrated resolution.
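The unit conversion above reduces to two divisions once w = Z is divided out of the homogeneous result. This plain C++ sketch uses illustrative parameter values (not calibration output):

```cpp
#include <cassert>

// Pinhole projection with the camera matrix above:
//   u = fx*X/Z + cx,  v = fy*Y/Z + cy   (after dividing by w = Z).
void projectPinhole(double X, double Y, double Z,
                    double fx, double fy, double cx, double cy,
                    double& u, double& v)
{
    u = fx * X / Z + cx;
    v = fy * Y / Z + cy;
}
```

A point half a meter right, a quarter meter up, and two meters in front of a camera with f = 500 px and a 640x480 principal point lands at pixel (445, 177.5), which is the arithmetic the calibration equations invert.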
The process of determining these two matrices is the calibration. Calculating these parameters is done by some basic
geometrical equations. The equations used depend on the calibration objects used. Currently OpenCV supports three
types of objects for calibration:
Classical black-white chessboard
Symmetrical circle pattern
Asymmetrical circle pattern
Basically, you need to take snapshots of these patterns with your camera and let OpenCV find them. Each found
pattern results in a new equation. To solve the equations you need at least a predetermined number of pattern snapshots
to form a well-posed equation system. This number is higher for the chessboard pattern and lower for the circle ones.
For example, in theory the chessboard pattern requires at least two. However, in practice we have a good amount of
noise present in our input images, so for good results you will probably want at least 10 good snapshots of the input
pattern in different positions.
Goal
The sample application will:
5.2. Camera calibration With OpenCV
Source code
You may also find the source code in the samples/cpp/tutorial_code/calib3d/camera_calibration/ folder
of the OpenCV source library or download it from here. The program has a single argument: the name of its
configuration file. If none is given it will try to open the one named default.xml. Here's a sample configuration
file in XML format. In the configuration file you may choose to use as input a camera, a video file or an image list.
If you opt for the latter, you need to create a configuration file where you enumerate the images to use. Here's
an example of this. The important part to remember is that the images need to be specified using the absolute
path or the relative one from your application's working directory. You may find all this in the aforementioned
directory.
The application starts up by reading the settings from the configuration file. Although this is an important part of it,
it has nothing to do with the subject of this tutorial: camera calibration. Therefore, I've chosen not to post the code
for that part here. You can find the technical background on how to do this in the File Input and Output using XML
and YAML files tutorial.
Explanation
1. Read the settings.
Settings s;
const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
if (!fs.isOpened())
{
    cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
    return -1;
}
fs["Settings"] >> s;
fs.release();

if (!s.goodInput)
{
    cout << "Invalid input detected. Application stopping. " << endl;
    return -1;
}
For this I've used a simple OpenCV class input operation. After reading the file I have an additional post-processing
function that checks the validity of the input. Only if all inputs are good will the goodInput variable be
true.
2. Get the next input; if it fails or we have enough of them, calibrate. After this we have a big loop where we do the
following operations: get the next image from the image list, camera or video file. If this fails or we have enough
images, we run the calibration process. In case of an image we step out of the loop; otherwise the remaining
frames will be undistorted (if the option is set) by changing from DETECTION mode to the CALIBRATED one.
for(int i = 0;;++i)
{
    Mat view;
    bool blinkOutput = false;

    view = s.nextImage();

    //----- If no more image, or got enough, then stop calibration and show result ------------
    if( mode == CAPTURING && imagePoints.size() >= (unsigned)s.nrFrames )
    {
        if( runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints))
            mode = CALIBRATED;
        else
            mode = DETECTION;
    }
    if(view.empty())  // If no more images then run calibration, save and stop loop.
    {
        if( imagePoints.size() > 0 )
            runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints);
        break;
    }

    imageSize = view.size(); // Format input image.
    if( s.flipVertical )
        flip( view, view, 0 );
}
For some cameras we may need to flip the input image. Here we do this too.
3. Find the pattern in the current input. The formation of the equations I mentioned above consists of finding
the major patterns in the input: in case of the chessboard these are the corners of the squares and for the circles,
well, the circles themselves. The position of these will form the result, which is collected into the pointBuf vector.
vector<Point2f> pointBuf;
bool found;
switch( s.calibrationPattern ) // Find feature points on the input format
{
case Settings::CHESSBOARD:
found = findChessboardCorners( view, s.boardSize, pointBuf,
CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);
break;
case Settings::CIRCLES_GRID:
found = findCirclesGrid( view, s.boardSize, pointBuf );
break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
break;
}
Depending on the type of the input pattern you use either the findChessboardCorners or the findCirclesGrid
function. For both of them you pass on the current image and the size of the board, and you'll get back the positions
of the patterns. Furthermore, they return a boolean variable stating whether the pattern was found in the
input (we only need to take into account images where this is true!).
Then again, in case of cameras we only take camera images after an input delay time has passed. This is in order to allow the user to move the chessboard around and get different images. Same images mean
same equations, and same equations at the calibration will form an ill-posed problem, so the calibration will
fail. For chessboard images the positions of the corners are only approximate. We may improve this by calling the
cornerSubPix function. This way we will get a better calibration result. After this we add a valid input's result to
the imagePoints vector to collect all of the equations into a single container. Finally, for visualization feedback
purposes we will draw the found points on the input image with the drawChessboardCorners function.
if ( found)
// If done with success,
{
// improve the found corners coordinate accuracy for chessboard
if( s.calibrationPattern == Settings::CHESSBOARD)
{
Mat viewGray;
cvtColor(view, viewGray, CV_BGR2GRAY);
cornerSubPix( viewGray, pointBuf, Size(11,11),
Size(-1,-1), TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 ));
}
if( mode == CAPTURING && // For camera only take new samples after delay time
(!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
{
imagePoints.push_back(pointBuf);
prevTimestamp = clock();
blinkOutput = s.inputCapture.isOpened();
}
// Draw the corners.
drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
}
4. Show state and result for the user, plus command line control of the application. The showing part consists
of a text output on the live feed, and for video or camera input to show the capturing frame we simply bitwise
negate the input image.
//----------------------------- Output Text ------------------------------------------------
string msg = (mode == CAPTURING) ? "100/100" :
              mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
int baseLine = 0;
Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);
if( mode == CAPTURING )
{
if(s.showUndistorsed)
msg = format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
else
msg = format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
}
putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ?
GREEN : RED);
if( blinkOutput )
bitwise_not(view, view);
If we ran the calibration and got the camera matrix plus the distortion coefficients we may just as well correct
the image with the undistort function:
//------------------------- Video capture output undistorted -------------------------------
if( mode == CALIBRATED && s.showUndistorsed )
{
Mat temp = view.clone();
undistort(temp, view, cameraMatrix, distCoeffs);
}
//------------------------------ Show image and check for input commands -------------------
Then we wait for an input key; if it is 'u' we toggle the distortion removal, if it is 'g' we start the
detection process all over (or simply start it), and finally for the ESC key we quit the application:
char key = waitKey(s.inputCapture.isOpened() ? 50 : s.delay);
if( key == ESC_KEY )
break;
if( key == 'u' && mode == CALIBRATED )
s.showUndistorsed = !s.showUndistorsed;
if( s.inputCapture.isOpened() && key == 'g' )
{
mode = CAPTURING;
imagePoints.clear();
}
5. Show the distortion removal for the images too. When you work with an image list it is not possible to
remove the distortion inside the loop. Therefore, you must append this after the loop. Taking advantage of this,
I'll now expand the undistort function, which is in fact first a call of initUndistortRectifyMap to find the
transformation matrices, and then a transformation with the remap function. Because after a successful
calibration the map calculation needs to be done only once, using this expanded form may speed up your
application:
if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
{
Mat view, rview, map1, map2;
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
imageSize, CV_16SC2, map1, map2);
for(int i = 0; i < (int)s.imageList.size(); i++ )
{
view = imread(s.imageList[i], 1);
if(view.empty())
continue;
remap(view, rview, map1, map2, INTER_LINEAR);
imshow("Image View", rview);
char c = waitKey();
if( c == ESC_KEY || c == 'q' || c == 'Q' )
break;
}
}
We do the calibration with the help of the calibrateCamera function. It has the following parameters:
The object points. This is a vector of vectors of Point3f that describes, for each input image, how the pattern
should look. If we have a planar pattern (like a chessboard) then we can simply set all Z coordinates to zero. This is
a collection of the points where these important points are present. Because we use a single pattern for all the
input images we can calculate this just once and duplicate it for all the other input views. We calculate the corner
points with the calcBoardCornerPositions function as:
void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
corners.clear();
switch(patternType)
{
case Settings::CHESSBOARD:
case Settings::CIRCLES_GRID:
for( int i = 0; i < boardSize.height; ++i )
for( int j = 0; j < boardSize.width; ++j )
corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
break;
}
}
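The chessboard branch of this function is easy to verify in isolation. Here is a standalone sketch without OpenCV types; the P3 struct is a stand-in for Point3f introduced only for this example:

```cpp
#include <cassert>
#include <vector>

struct P3 { float x, y, z; };  // stand-in for cv::Point3f

// Standalone version of the chessboard case of calcBoardCornerPositions:
// one corner per inner crossing, laid out row by row on the z = 0 plane.
std::vector<P3> chessboardCorners(int width, int height, float squareSize)
{
    std::vector<P3> corners;
    for (int i = 0; i < height; ++i)
        for (int j = 0; j < width; ++j)
            corners.push_back({ j * squareSize, i * squareSize, 0.f });
    return corners;
}
```

For the 9 x 6 board used in the Results section below with, say, 50 mm squares, this produces 54 object points spanning 400 x 250 mm, computed once and reused for every input view.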
The image points. This is a vector of vectors of Point2f that contains, for each input image, where the important
points (corners for the chessboard, centers of circles for the circle patterns) were found. We already collected
this from what the findChessboardCorners or the findCirclesGrid function returned. We just need to pass it on.
The size of the image acquired from the camera, video file or the images.
The camera matrix. If we used the fixed aspect ratio option we need to set fx to zero:
The function will calculate for all the views the rotation and translation vectors that transform the object points
(given in the model coordinate space) to the image points (given in the world coordinate space). The 7th and
8th parameters are output vectors of matrices containing in the ith position the rotation and translation vector
for the ith object point to the ith image point.
The final argument is a flag. You need to specify here options like fixing the aspect ratio for the focal length,
assuming zero tangential distortion or fixing the principal point.
double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
distCoeffs, rvecs, tvecs, s.flag|CV_CALIB_FIX_K4|CV_CALIB_FIX_K5);
The function returns the average re-projection error. This number gives a good estimation of just how exact the
found parameters are. It should be as close to zero as possible. Given the intrinsic, distortion, rotation and
translation matrices, we may calculate the error for one view by using projectPoints to first transform the
object points to image points. Then we calculate the absolute norm between what we got with our transformation
and the corner/circle finding algorithm. To find the average error we calculate the arithmetical mean of the errors
calculated for all the calibration images.
double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
const vector<vector<Point2f> >& imagePoints,
const vector<Mat>& rvecs, const vector<Mat>& tvecs,
const Mat& cameraMatrix , const Mat& distCoeffs,
vector<float>& perViewErrors)
{
    vector<Point2f> imagePoints2;
    int i, totalPoints = 0;
    double totalErr = 0, err;
    perViewErrors.resize(objectPoints.size());

    for( i = 0; i < (int)objectPoints.size(); ++i )
    {
        projectPoints( Mat(objectPoints[i]), rvecs[i], tvecs[i], cameraMatrix,  // project
                       distCoeffs, imagePoints2);
        err = norm(Mat(imagePoints[i]), Mat(imagePoints2), CV_L2);              // difference

        int n = (int)objectPoints[i].size();
        perViewErrors[i] = (float) std::sqrt(err*err/n);
        totalErr    += err*err;
        totalPoints += n;
    }

    return std::sqrt(totalErr/totalPoints);
}
Results
Let there be this input chessboard pattern, which has a size of 9 x 6. I've used an AXIS IP camera to create a couple of snapshots of the board and saved them into a VID5 directory. I've put this inside the images/CameraCalibraation
folder of my working directory and created the following VID5.XML file that describes which images to use:
<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibraation/VID5/xx1.jpg
images/CameraCalibraation/VID5/xx2.jpg
images/CameraCalibraation/VID5/xx3.jpg
images/CameraCalibraation/VID5/xx4.jpg
images/CameraCalibraation/VID5/xx5.jpg
images/CameraCalibraation/VID5/xx6.jpg
images/CameraCalibraation/VID5/xx7.jpg
images/CameraCalibraation/VID5/xx8.jpg
</images>
</opencv_storage>
Then I specified the images/CameraCalibraation/VID5/VID5.XML as input in the configuration file. Here's a chessboard pattern found during the runtime of the application:
The same works for this asymmetrical circle pattern by setting the input width to 4 and height to 11. This
time I've used a live camera feed, specifying its ID (1) for the input. Here's how a detected pattern should look:
In both cases, in the specified output XML/YAML file you'll find the camera and distortion coefficient matrices:
<Camera_Matrix type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
6.5746697944293521e+002 0. 3.1950000000000000e+002 0.
6.5746697944293521e+002 2.3950000000000000e+002 0. 0. 1.</data></Camera_Matrix>
<Distortion_Coefficients type_id="opencv-matrix">
<rows>5</rows>
<cols>1</cols>
<dt>d</dt>
<data>
-4.1802327176423804e-001 5.0715244063187526e-001 0. 0.
-5.7843597214487474e-001</data></Distortion_Coefficients>
Add these values as constants to your program, call the initUndistortRectifyMap and remap functions to remove
the distortion, and enjoy distortion-free inputs even with cheap, low-quality cameras.
You may watch a runtime instance of this on YouTube here.
CHAPTER
SIX
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
Explanation
Result
1. Here is the result after applying the BruteForce matcher between the two original images:
Theory
What is a feature?
In computer vision we usually need to find matching points between different frames of an environment. Why?
Because if we know how two images relate to each other, we can use both of them to extract information.
When we say matching points we are referring, in a general sense, to characteristics in the scene that we can
recognize easily. We call these characteristics features.
So, what characteristics should a feature have?
It must be uniquely recognizable
Types of Image Features
To mention a few:
Edges
Corners (also known as interest points)
Blobs (also known as regions of interest)
In this tutorial we will study the corner features, specifically.
Why is a corner so special?
Because, since it is the intersection of two edges, it represents a point at which the directions of these two edges
change. Hence, the gradient of the image (in both directions) has a high variation, which can be used to detect
it.
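The image gradients referred to above can be sketched with plain central differences (a toy helper for intuition, not OpenCV code; OpenCV itself computes them with derivative filters such as Sobel):

```cpp
#include <utility>
#include <vector>

// Central-difference gradients I_x and I_y at an interior pixel (x, y)
// of a grayscale image stored as rows of doubles (hypothetical helper).
std::pair<double, double> gradientAt(const std::vector<std::vector<double> >& I,
                                     int x, int y)
{
    double Ix = (I[y][x + 1] - I[y][x - 1]) / 2.0;  // horizontal derivative
    double Iy = (I[y + 1][x] - I[y - 1][x]) / 2.0;  // vertical derivative
    return std::make_pair(Ix, Iy);
}
```

On a linear ramp I(x, y) = 2x + 3y the helper recovers exactly the slopes 2 and 3.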
How does it work?
Let's look for corners. Since corners represent a variation in the gradient of the image, we will look for this
variation.
Consider a grayscale image I. We are going to sweep a window w(x, y) (with displacements u in the x direction
and v in the y direction) over I and calculate the variation of intensity:
E(u, v) = \sum_{x,y} w(x, y) \, [ I(x + u, y + v) - I(x, y) ]^2
where:
w(x, y) is the window at position (x, y)
I(x, y) is the intensity of the image at (x, y)
I(x + u, y + v) is the intensity at the displaced window
6.2. Harris corner detector
Expanding I(x + u, y + v) with a first-order Taylor approximation and collecting terms, the expression can be written in matrix form:

E(u, v) \approx [u \ v] \left( \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \right) \begin{bmatrix} u \\ v \end{bmatrix}

Let's denote:

M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}

so that:

E(u, v) \approx [u \ v] \, M \begin{bmatrix} u \\ v \end{bmatrix}
A score is calculated for each window, to determine if it can possibly contain a corner:

R = det(M) - k \, (trace(M))^2

where:

det(M) = \lambda_1 \lambda_2
trace(M) = \lambda_1 + \lambda_2

(\lambda_1 and \lambda_2 being the eigenvalues of M). A window with a score R greater than a certain value is considered a corner.
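Numerically, the score is just this arithmetic on the two eigenvalues; a minimal sketch (our own helper; k is the empirical Harris constant, commonly 0.04-0.06):

```cpp
// Harris response from the eigenvalues of M: R = det(M) - k * trace(M)^2.
double harrisScore(double lambda1, double lambda2, double k)
{
    double det   = lambda1 * lambda2;   // det(M)   = lambda1 * lambda2
    double trace = lambda1 + lambda2;   // trace(M) = lambda1 + lambda2
    return det - k * trace * trace;
}
```

Two large eigenvalues give a large positive R (a corner), one large and one small give a negative R (an edge), and two small ones give a small |R| (a flat region).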
Code
The code for this tutorial is shown below. You can also download it from here
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
/// Global variables
Mat src, src_gray;
int thresh = 200;
int max_thresh = 255;
      { circle( dst_norm_scaled, Point( i, j ), 5, Scalar(0), 2, 8, 0 ); }
    }
  }
  /// Showing the result
  namedWindow( corners_window, CV_WINDOW_AUTOSIZE );
  imshow( corners_window, dst_norm_scaled );
}
Explanation
Result
The original image:
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }
waitKey(0);
return 0;
}
/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_FlannMatcher <img1> <img2>" << std::endl; }
Explanation
Result
1. Here is the result of the feature detection applied to the first image:
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
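What perspectiveTransform does for each corner is ordinary homogeneous-coordinate arithmetic: multiply by the 3x3 homography and divide by the third coordinate. As a plain sketch (a hypothetical helper, not the OpenCV API):

```cpp
#include <array>

// Apply a 3x3 homography H to the 2D point (x, y): lift to homogeneous
// coordinates, multiply, then divide by the resulting w.
std::array<double, 2> applyHomography(const double H[3][3], double x, double y)
{
    double w = H[2][0] * x + H[2][1] * y + H[2][2];
    std::array<double, 2> p = {
        (H[0][0] * x + H[0][1] * y + H[0][2]) / w,
        (H[1][0] * x + H[1][1] * y + H[1][2]) / w };
    return p;
}
```

For a pure-translation homography the result is just the shifted point, which makes the mapping easy to check by hand.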
Explanation
Result
1. And here is the result for the detected object (highlighted in green)
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
/**
 * @function main
 */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, CV_BGR2GRAY );
/// Create Window
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
/// Create Trackbar to set the number of corners
createTrackbar( "Max corners:", source_window, &maxCorners, maxTrackbar, goodFeaturesToTrack_Demo );
imshow( source_window, src );
goodFeaturesToTrack_Demo( 0, 0 );
waitKey(0);
return(0);
}
/**
 * @function goodFeaturesToTrack_Demo
* @brief Apply Shi-Tomasi corner detector
*/
void goodFeaturesToTrack_Demo( int, void* )
{
if( maxCorners < 1 ) { maxCorners = 1; }
/// Parameters for Shi-Tomasi algorithm
vector<Point2f> corners;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
/// Copy the source image
Mat copy;
copy = src.clone();
/// Apply corner detection
goodFeaturesToTrack( src_gray,
corners,
maxCorners,
qualityLevel,
minDistance,
Mat(),
blockSize,
useHarrisDetector,
k );
Explanation
Result
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
/// Global variables
Mat src, src_gray;
Mat myHarris_dst; Mat myHarris_copy; Mat Mc;
Mat myShiTomasi_dst; Mat myShiTomasi_copy;
for( int j = 0; j < src_gray.rows; j++ )
  { for( int i = 0; i < src_gray.cols; i++ )
      {
        float lambda_1 = myHarris_dst.at<float>( j, i, 0 );
        float lambda_2 = myHarris_dst.at<float>( j, i, 1 );
        if( ... *myHarris_qualityLevel/max_qualityLevel )
          { circle( myHarris_copy, Point(i,j), 4, Scalar( rng.uniform(0,255), rng.uniform(0,255),
                    rng.uniform(0,255) ), -1, 8, 0 ); }
      }
  }
  imshow( myHarris_window, myHarris_copy );
}
Explanation
Result
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
int r = 4;
for( int i = 0; i < corners.size(); i++ )
{ circle( copy, corners[i], r, Scalar(rng.uniform(0,255), rng.uniform(0,255),
rng.uniform(0,255)), -1, 8, 0 ); }
/// Show what you got
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, copy );
/// Set the needed parameters to find the refined corners
Size winSize = Size( 5, 5 );
Size zeroZone = Size( -1, -1 );
TermCriteria criteria = TermCriteria( CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 40, 0.001 );
/// Calculate the refined corner locations
cornerSubPix( src_gray, corners, winSize, zeroZone, criteria );
/// Write them down
for( int i = 0; i < corners.size(); i++ )
{ cout<<" -- Refined Corner ["<<i<<"] ("<<corners[i].x<<","<<corners[i].y<<")"<<endl; }
}
Explanation
Result
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }
Explanation
Result
1. Here is the result of the feature detection applied to the first image:
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance < 2*min_dist )
{ good_matches.push_back( matches[i]); }
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
imshow( "Good Matches", img_matches );
for( int i = 0; i < good_matches.size(); i++ )
  { printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i,
            good_matches[i].queryIdx, good_matches[i].trainIdx ); }
waitKey(0);
return 0;
}
/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_FlannMatcher <img1> <img2>" << std::endl; }
Explanation
Result
1. Here is the result of the feature detection applied to the first image:
Theory
Code
The code for this tutorial is shown below. You can also download it from here
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
Explanation
Result
1. And here is the result for the detected object (highlighted in green)
4. Now, find the closest matches between descriptors from the first image to the second:
// matching descriptors
BruteForceMatcher<L2<float> > matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
7. Create a set of inlier matches and draw them. Use the perspectiveTransform function to map points with the homography:
Mat points1Projected; perspectiveTransform(Mat(points1), points1Projected, H);
8. Use drawMatches for drawing inliers.
CHAPTER
SEVEN
CHAPTER
EIGHT
Theory
Code
The code for this tutorial is shown below. You can also download it from here. The second version (using LBP for
face detection) can be found here
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/** Function Headers */
void detectAndDisplay( Mat frame );
/** Global variables */
String face_cascade_name = "haarcascade_frontalface_alt.xml";
String eyes_cascade_name = "haarcascade_eye_tree_eyeglasses.xml";
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;
string window_name = "Capture - Face detection";
RNG rng(12345);
/** @function main */
int main( int argc, const char** argv )
{
CvCapture* capture;
Mat frame;
//-- 1. Load the cascades
if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; };
if( !eyes_cascade.load( eyes_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; };
//-- 2. Read the video stream
capture = cvCaptureFromCAM( -1 );
if( capture )
{
while( true )
{
frame = cvQueryFrame( capture );
Explanation
Result
1. Here is the result of running the code above, using as input the video stream of a built-in webcam:
CHAPTER
NINE
What is an SVM?
A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In
other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which
categorizes new examples.
In which sense is the hyperplane obtained optimal? Let's consider the following simple problem:
For a linearly separable set of 2D-points which belong to one of two classes, find a separating straight
line.
Note: In this example we deal with lines and points in the Cartesian plane instead of hyperplanes and vectors in a
high-dimensional space. This is a simplification of the problem. It is important to understand that this is done only
because our intuition is better built from examples that are easy to imagine. However, the same concepts apply to tasks
where the examples to classify lie in a space whose dimension is higher than two.
In the above picture you can see that there exist multiple lines that offer a solution to the problem. Is any of them
better than the others? We can intuitively define a criterion to estimate the worth of the lines:
A line is bad if it passes too close to the points because it will be noise sensitive and it will not generalize
correctly. Therefore, our goal should be to find the line passing as far as possible from all points.
Then, the operation of the SVM algorithm is based on finding the hyperplane that gives the largest minimum distance
to the training examples. Twice this distance receives the important name of margin within SVM theory. Therefore,
the optimal separating hyperplane maximizes the margin of the training data.
where x symbolizes the training examples closest to the hyperplane. In general, the training examples that are closest
to the hyperplane are called support vectors. This representation is known as the canonical hyperplane.
Now, we use the result of geometry that gives the distance between a point x and a hyperplane (\beta, \beta_0):
distance = \frac{|\beta_0 + \beta^T x|}{||\beta||}.

In particular, for the canonical hyperplane, the numerator is equal to one and the distance to the support vectors is

distance_{support\ vectors} = \frac{|\beta_0 + \beta^T x|}{||\beta||} = \frac{1}{||\beta||}.

Recall that the margin introduced in the previous section, here denoted as M, is twice the distance to the closest
examples:

M = \frac{2}{||\beta||}

Finally, the problem of maximizing M is equivalent to the problem of minimizing a function L(\beta) subject to some
constraints. The constraints model the requirement for the hyperplane to classify correctly all the training examples
x_i. Formally,

\min_{\beta, \beta_0} L(\beta) = \frac{1}{2} ||\beta||^2 \quad subject\ to \quad y_i (\beta^T x_i + \beta_0) \geq 1 \ \forall i,
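The distance formula above is easy to evaluate numerically; a minimal sketch (our own helper, not part of OpenCV's ML module):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Distance from point x to the hyperplane beta^T x + beta0 = 0:
// |beta0 + beta . x| / ||beta||.
double pointToHyperplane(const std::vector<double>& beta, double beta0,
                         const std::vector<double>& x)
{
    double dot = beta0, norm2 = 0.0;
    for (std::size_t i = 0; i < beta.size(); ++i)
    {
        dot   += beta[i] * x[i];        // beta0 + beta^T x
        norm2 += beta[i] * beta[i];     // ||beta||^2
    }
    return std::fabs(dot) / std::sqrt(norm2);
}
```

For the line y = 0 (beta = (0, 1), beta0 = 0), the distance of any point is simply |y|, which matches intuition.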
Source Code
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/ml/ml.hpp>

using namespace cv;

int main()
{
    // Data for visual representation
    int width = 512, height = 512;
    Mat image = Mat::zeros(height, width, CV_8UC3);

    // Set up training data
    float labels[4] = {1.0, -1.0, -1.0, -1.0};
    float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
    Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
    Mat labelsMat(4, 1, CV_32FC1, labels);

    ...

    // Paint the regions according to the SVM response
    if (response == 1)
        image.at<Vec3b>(j, i) = green;
    else if (response == -1)
        image.at<Vec3b>(j, i) = blue;

    ...

    imwrite("result.png", image);

    ...
}
Explanation
1. Set up the training data
The training data of this exercise is formed by a set of labeled 2D-points that belong to one of two different
classes; one of the classes consists of one point and the other of three points.
float labels[4] = {1.0, -1.0, -1.0, -1.0};
float trainingData[4][2] = {{501, 10}, {255, 10}, {501, 255}, {10, 501}};
The function CvSVM::train that will be used afterwards requires the training data to be stored as Mat
objects of floats. Therefore, we create these objects from the arrays defined above:

Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
Mat labelsMat(4, 1, CV_32FC1, labels);
Type of SVM. We choose here the type CvSVM::C_SVC that can be used for n-class classification (n >=
2). This parameter is defined in the attribute CvSVMParams.svm_type.
Note: The important feature of the type of SVM CvSVM::C_SVC deals with imperfect separation of
classes (i.e. when the training data is non-linearly separable). This feature is not important here since the
data is linearly separable and we chose this SVM type only for being the most commonly used.
Type of SVM kernel. We have not talked about kernel functions, since they are not interesting for the
training data we are dealing with. Nevertheless, let's briefly explain the main idea behind a kernel
function. It is a mapping done to the training data to improve its resemblance to a linearly separable set
of data. This mapping consists of increasing the dimensionality of the data and is done efficiently using a
kernel function. We choose here the type CvSVM::LINEAR, which means that no mapping is done. This
parameter is defined in the attribute CvSVMParams.kernel_type.
Termination criteria of the algorithm. The SVM training procedure is implemented by solving a constrained
quadratic optimization problem in an iterative fashion. Here we specify a maximum number of iterations
and a tolerance error, so we allow the algorithm to finish in fewer steps even if the optimal
hyperplane has not been computed yet. This parameter is defined in the structure cvTermCriteria.
3. Train the SVM
We call the method CvSVM::train to build the SVM model.
CvSVM SVM;
SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
5. Support vectors
We use here a couple of methods to obtain information about the support vectors. The method
CvSVM::get_support_vector_count outputs the total number of support vectors used in the problem, and with
the method CvSVM::get_support_vector we obtain each of the support vectors using an index. We have used
these methods here to find the training examples that are support vectors and highlight them.

int c = SVM.get_support_vector_count();
Results
The code opens an image and shows the training examples of both classes. The points of one class are represented with white circles and black ones are used for the other class.
The SVM is trained and used to classify all the pixels of the image. This results in a division of the image in a
blue region and a green region. The boundary between both regions is the optimal separating hyperplane.
Finally the support vectors are shown using gray rings around the training examples.
Motivation
Why is it interesting to extend the SVM optimization problem in order to handle non-linearly separable training data?
Most of the applications in which SVMs are used in computer vision require a more powerful tool than a simple
linear classifier. This stems from the fact that in these tasks the training data can rarely be separated using a
hyperplane.
Consider one of these tasks, for example, face detection. The training data in this case is composed of a set of images
that are faces and another set of images that are non-faces (every other thing in the world except faces). This
training data is too complex for us to find a representation of each sample (feature vector) that could make the whole
set of faces linearly separable from the whole set of non-faces.
\min_{\beta, \beta_0} L(\beta) = \frac{1}{2} ||\beta||^2 \quad subject\ to \quad y_i (\beta^T x_i + \beta_0) \geq 1 \ \forall i
There are multiple ways in which this model can be modified so it takes into account the misclassification errors. For
example, one could think of minimizing the same quantity plus a constant times the number of misclassification errors
in the training data, i.e.:
\min ||\beta||^2 + C \, (\#\ misclassification\ errors)
However, this one is not a very good solution since, among some other reasons, we do not distinguish between samples
that are misclassified with a small distance to their appropriate decision region or samples that are not. Therefore, a
better solution will take into account the distance of the misclassified samples to their correct decision regions, i.e.:
min ||||2 + C(distance of misclassified samples to their correct regions)
For each sample of the training data a new parameter \xi_i is defined. Each one of these parameters contains the distance
from its corresponding training sample to its correct decision region. The following picture shows non-linearly
separable training data from two classes, a separating hyperplane, and the distances to their correct regions of the
samples that are misclassified.
Note: Only the distances of the samples that are misclassified are shown in the picture. The distances of the rest of
the samples are zero, since they already lie in their correct decision region.
The red and blue lines that appear in the picture are the margins to each one of the decision regions. It is very
important to realize that each of the \xi_i goes from a misclassified training sample to the margin of its appropriate
region.
Finally, the new formulation for the optimization problem is:
\min_{\beta, \beta_0} L(\beta) = ||\beta||^2 + C \sum_i \xi_i \quad subject\ to \quad y_i (\beta^T x_i + \beta_0) \geq 1 - \xi_i \ and \ \xi_i \geq 0 \ \forall i
How should the parameter C be chosen? It is obvious that the answer to this question depends on how the training
data is distributed. Although there is no general answer, it is useful to take into account these rules:
Large values of C give solutions with fewer misclassification errors but a smaller margin. Consider that in this
case it is expensive to make misclassification errors. Since the aim of the optimization is to minimize the
argument, few misclassification errors are allowed.
Small values of C give solutions with a bigger margin and more classification errors. In this case the minimization
does not weight the sum term as much, so it focuses more on finding a hyperplane with a big margin.
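At the optimum each \xi_i equals max(0, 1 - y_i f(x_i)), so the objective above can be evaluated numerically; a minimal sketch (our own helper, assuming margins[i] holds y_i f(x_i)):

```cpp
#include <algorithm>
#include <vector>

// Soft-margin objective from the text: ||beta||^2 + C * sum(xi_i),
// with xi_i = max(0, 1 - y_i * f(x_i)) the slack of each sample.
double softMarginObjective(const std::vector<double>& beta, double C,
                           const std::vector<double>& margins)
{
    double norm2 = 0.0;
    for (double b : beta)
        norm2 += b * b;

    double slack = 0.0;
    for (double m : margins)
        slack += std::max(0.0, 1.0 - m);   // only violated margins contribute

    return norm2 + C * slack;
}
```

Increasing C makes the same slack contribute more to the objective, which is exactly why large C favors fewer misclassifications at the cost of a smaller margin.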
Source Code
You may also find the source code and the video file in the samples/cpp/tutorial_code/gpu/non_linear_svms
folder of the OpenCV source library or download it from here.
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/ml/ml.hpp>

#define NTRAINING_SAMPLES   100
#define FRAC_LINEAR_SEP     0.9f

int main()
{
    // Data for visual representation
    const int WIDTH = 512, HEIGHT = 512;
    Mat I = Mat::zeros(HEIGHT, WIDTH, CV_8UC3);

    ...

    //------------------ Set up the non-linearly separable part of the training data ---------------

    ...
    //------------------------ 3. Train the SVM --------------------------------------------------
    cout << "Starting training process" << endl;
    CvSVM svm;
    svm.train(trainData, labels, Mat(), Mat(), params);
    cout << "Finished training process" << endl;

    ...

    if      (response == 1) I.at<Vec3b>(j, i) = green;
    else if (response == 2) I.at<Vec3b>(j, i) = blue;

    ...
    imwrite("result.png", I);                      // save the Image
    imshow("SVM for Non-Linear Training Data", I); // show it to the user
    waitKey(0);
}
Explanation
1. Set up the training data
The training data of this exercise is formed by a set of labeled 2D-points that belong to one of two different
classes. To make the exercise more appealing, the training data is generated randomly using uniform
probability density functions (PDFs).
We have divided the generation of the training data into two main parts.
In the first part we generate data for both classes that is linearly separable.
// Generate random points for the class 1
Mat trainClass = trainData.rowRange(0, nLinearSamples);
// The x coordinate of the points is in [0, 0.4)
Mat c = trainClass.colRange(0, 1);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(0.4 * WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
// Generate random points for the class 2
trainClass = trainData.rowRange(2*NTRAINING_SAMPLES-nLinearSamples, 2*NTRAINING_SAMPLES);
// The x coordinate of the points is in [0.6, 1]
c = trainClass.colRange(0 , 1);
rng.fill(c, RNG::UNIFORM, Scalar(0.6*WIDTH), Scalar(WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
In the second part we create data for both classes that is non-linearly separable, data that overlaps.
// Generate random points for the classes 1 and 2
trainClass = trainData.rowRange( nLinearSamples, 2*NTRAINING_SAMPLES-nLinearSamples);
// The x coordinate of the points is in [0.4, 0.6)
c = trainClass.colRange(0,1);
rng.fill(c, RNG::UNIFORM, Scalar(0.4*WIDTH), Scalar(0.6*WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
params.kernel_type = SVM::LINEAR;
params.term_crit   = TermCriteria(CV_TERMCRIT_ITER, (int)1e7, 1e-6);
There are just two differences between the configuration we do here and the one that was done in the
previous tutorial that we use as reference.
C. We chose here a small value of this parameter in order not to punish the
misclassification errors too much in the optimization. The idea of doing this stems from the will of obtaining
a solution close to the one intuitively expected. However, we recommend getting a better insight into
the problem by making adjustments to this parameter.
Note: Here there are just very few points in the overlapping region between classes. By giving
a smaller value to FRAC_LINEAR_SEP the density of points can be incremented and the
impact of the parameter C explored more deeply.
Termination criteria of the algorithm. The maximum number of iterations has to be increased
considerably in order to solve correctly a problem with non-linearly separable training data. In
particular, we have increased this value by five orders of magnitude.
3. Train the SVM
We call the method CvSVM::train to build the SVM model. Be aware that the training process may take
quite a long time. Have patience when you run the program.
CvSVM svm;
svm.train(trainData, labels, Mat(), Mat(), params);
if      (response == 1) I.at<Vec3b>(j, i) = green;
else if (response == 2) I.at<Vec3b>(j, i) = blue;
py = trainData.at<float>(i,1);
circle(I, Point( (int) px, (int) py ), 3, Scalar(0, 255, 0), thick, lineType);
}
// Class 2
for (int i = NTRAINING_SAMPLES; i <2*NTRAINING_SAMPLES; ++i)
{
px = trainData.at<float>(i,0);
py = trainData.at<float>(i,1);
circle(I, Point( (int) px, (int) py ), 3, Scalar(255, 0, 0), thick, lineType);
}
6. Support vectors
We use here a couple of methods to obtain information about the support vectors. The method
CvSVM::get_support_vector_count outputs the total number of support vectors used in the problem, and
with the method CvSVM::get_support_vector we obtain each of the support vectors using an index. We
have used these methods here to find the training examples that are support vectors and highlight them.
thick = 2;
lineType = 8;
int x = svm.get_support_vector_count();
for (int i = 0; i < x; ++i)
{
const float* v = svm.get_support_vector(i);
    circle( I, Point( (int) v[0], (int) v[1]), 6, Scalar(128, 128, 128), thick, lineType);
}
Results
The code opens an image and shows the training examples of both classes. The points of one class are represented in light green, and light blue is used for the other class.
The SVM is trained and used to classify all the pixels of the image. This results in a division of the image in
a blue region and a green region. The boundary between both regions is the separating hyperplane. Since the
training data is non-linearly separable, it can be seen that some of the examples of both classes are misclassified;
some green points lay on the blue region and some blue points lay on the green one.
Finally the support vectors are shown using gray rings around the training examples.
CHAPTER
TEN
You may also find the source code and the video file in the samples/cpp/tutorial_code/gpu/gpu-basics-similarity
folder of the OpenCV source library or download it from here. The full source code is quite long (due to the
controlling of the application via command line arguments and the performance measurement). Therefore, to avoid
cluttering up these sections you'll find here only the functions themselves.
The PSNR returns a float number; if the two inputs are similar, it will be between 30 and 50 (higher is better).
Scalar s = sum(s1);
b.gI1.convertTo(b.t1, CV_32F);
b.gI2.convertTo(b.t2, CV_32F);
// Optimized GPU versions
struct BufferPSNR
{
// Data allocations are very expensive on GPU. Use a buffer to solve: allocate once reuse later.
gpu::GpuMat gI1, gI2, gs, t1, t2;
gpu::GpuMat buf;
};
gI1.upload(I1);
gI2.upload(I2);
gI1.convertTo(t1, CV_32F);
gI2.convertTo(t2, CV_32F);
Scalar s = gpu::sum(gs);
double sse = s.val[0] + s.val[1] + s.val[2];
The SSIM returns the MSSIM of the images. This too is a float number between zero and one (higher is better);
however, we have one for each channel. Therefore, we return a Scalar OpenCV data structure:
Mat I2_2  = I2.mul(I2);   // I2^2
Mat I1_2  = I1.mul(I1);   // I1^2
Mat I1_I2 = I1.mul(I2);   // I1 * I2
Mat mu1_2   = mu1.mul(mu1);
Mat mu2_2   = mu2.mul(mu2);
Mat mu1_mu2 = mu1.mul(mu2);
t1 = 2 * mu1_mu2 + C1;
t2 = 2 * sigma12 + C2;
t3 = t1.mul(t2);
Mat ssim_map;
divide(t3, t1, ssim_map);
// ssim_map = t3./t1;
gI1.upload(i1);
gI2.upload(i2);
gpu::GpuMat I2_2, I1_2, I1_I2;

gpu::multiply(t2, t2, I2_2);   // I2^2
gpu::multiply(t1, t1, I1_2);   // I1^2
gpu::multiply(t1, t2, I1_I2);  // I1 * I2
gpu::GpuMat mu1_2, mu2_2, mu1_mu2;
gpu::multiply(mu1, mu1, mu1_2);
gpu::multiply(mu2, mu2, mu2_2);
gpu::multiply(mu1, mu2, mu1_mu2);
t1 = 2 * mu1_mu2 + C1;
t2 = 2 * sigma12 + C2;
gpu::multiply(t1, t2, t3);
gpu::GpuMat ssim_map;
gpu::divide(t3, t1, ssim_map);
// ssim_map = t3./t1;
Scalar s = gpu::sum(ssim_map);
mssim.val[i] = s.val[0] / (ssim_map.rows * ssim_map.cols);
}
return mssim;
// Optimized GPU versions
struct BufferMSSIM
{
// Data allocations are very expensive on GPU. Use a buffer to solve: allocate once reuse later.
gpu::GpuMat gI1, gI2, gs, t1, t2;
gpu::GpuMat ssim_map;
gpu::GpuMat buf;
};
Scalar getMSSIM_GPU_optimized( const Mat& i1, const Mat& i2, BufferMSSIM& b)
{
int cn = i1.channels();
b.gI1.upload(i1);
b.gI2.upload(i2);
gpu::Stream stream;
for( int i = 0; i < cn; ++i )
{
gpu::multiply(b.vI2[i], b.vI2[i], b.I2_2, stream);   // I2^2
gpu::multiply(b.vI1[i], b.vI1[i], b.I1_2, stream);   // I1^2
gpu::multiply(b.vI1[i], b.vI2[i], b.I1_I2, stream);  // I1 * I2
//here too it would be an extra data transfer due to call of operator*(Scalar, Mat)
gpu::multiply(b.mu1_mu2, 2, b.t1, stream); //b.t1 = 2 * b.mu1_mu2 + C1;
gpu::add(b.t1, C1, b.t1, stream);
gpu::multiply(b.sigma12, 2, b.t2, stream); //b.t2 = 2 * b.sigma12 + C2;
gpu::add(b.t2, C2, b.t2, stream);
stream.waitForCompletion();
}
return mssim;
GPU stands for graphics processing unit. It was originally built to render graphical scenes. These scenes are
built on a lot of data. Nevertheless, the items aren't all sequentially dependent on one another, so it is
possible to process them in parallel. Due to this, a GPU contains multiple smaller processing units. These aren't
state-of-the-art processors, and in a one-on-one test against a CPU each will fall behind. However, the GPU's strength lies in its
numbers. In recent years there has been an increasing trend to harness this massive parallel power of the GPU for
non-graphical scene rendering too. This gave birth to general-purpose computation on graphics processing units
(GPGPU).
The GPU has its own memory. When you read data from the hard drive into a Mat object with OpenCV, that takes
place in your system's memory. The CPU can work more or less directly on this (via its cache); the GPU, however, cannot.
It has to transfer the information it will use for its calculations from the system memory to its own. This is done
via an upload process and takes time. In the end, the result will have to be downloaded back to your system memory
for your CPU to see and use it. Porting small functions to the GPU is not recommended, as the upload/download time
will be larger than what you gain from parallel execution.
Mat objects are stored only in the system memory (or the CPU cache). To get an OpenCV matrix to the GPU
you'll need to use its GPU counterpart, GpuMat. It works similarly to Mat, with a 2D-only limitation and no reference
returning for its functions (you cannot mix GPU references with CPU ones). To upload a Mat object to the GPU you need
to call the upload function after creating an instance of the class. To download you may use simple assignment to a
Mat object or use the download function.
Mat I1;          // Main memory item - read image into it with imread, for example
gpu::GpuMat gI1; // GPU matrix - for now empty
gI1.upload(I1);  // Upload data from the system memory to the GPU memory
I1 = gI1;        // Download - gI1.download(I1) will work too
Once you have your data up in the GPU memory you may call GPU enabled functions of OpenCV. Most of the
functions keep the same name just as on the CPU, with the difference that they only accept GpuMat inputs. A full list
of these you will find in the documentation: online here or the OpenCV reference manual that comes with the source
code.
Another thing to keep in mind is that not all channel numbers lend themselves to efficient algorithms on the GPU.
Generally, I found that the input images for the GPU need to have either one or four channels and be of the
char or float type for the item sizes. No double support on the GPU, sorry. Passing other types of objects to some
functions will result in an exception being thrown and an error message on the error output. The documentation details in
most places the types accepted for the inputs. If you have three-channel images as an input you can do two
things: either add a new channel (and use char elements) or split up the image and call the function for each image.
The first isn't really recommended, as you waste memory.
For some functions, where the position of the elements (neighbor items) doesn't matter, a quick solution is to just reshape
the image into a single-channel one. This is the case for the PSNR implementation, where for the absdiff method the value
of the neighbors is not important. However, for GaussianBlur this isn't an option, so we need to use the split
method for the SSIM. With this knowledge you can already make GPU-viable code (like my GPU version) and run it.
You'll be surprised to see that it might turn out slower than your CPU implementation.
Optimization
The reason for this is that you're paying the price of memory allocation and data transfer, and
on the GPU this price is very high. Another possibility for optimization is to introduce asynchronous OpenCV GPU calls
with the help of gpu::Stream.
1. Memory allocation on the GPU is considerable. Therefore, if possible, allocate new memory as few times as
possible. If you create a function that you intend to call multiple times, it is a good idea to allocate any local
parameters for the function only once, during the first call. To do this you create a data structure containing all
the local variables you will use. For instance, in the case of the PSNR these are:
// Optimized GPU versions
struct BufferPSNR
{
// Data allocations are very expensive on GPU. Use a buffer to solve: allocate once reuse later.
gpu::GpuMat gI1, gI2, gs, t1,t2;
gpu::GpuMat buf;
};
And finally pass this to the function each time you call it:
Now you access these local parameters as: b.gI1, b.buf and so on. The GpuMat will only reallocate itself on a
new call if the new matrix size is different from the previous one.
2. Avoid unnecessary function data transfers. Any small data transfer will be significant once you go to the
GPU. Therefore, if possible, make all calculations in-place (in other words, do not create new memory objects, for the reasons explained in the previous point). For example, although arithmetic operations may be
easier to express as one-line formulas, they will be slower. In the case of the SSIM, at one point I need to calculate:
b.t1 = 2 * b.mu1_mu2 + C1;
Although the call above will succeed, observe that there is a hidden data transfer present. Before it performs
the addition it needs to store the result of the multiplication somewhere. Therefore, it will create a local matrix in the
background, add the C1 value to that, and finally assign the result to t1. To avoid this we use the gpu functions instead
of the arithmetic operators:
gpu::multiply(b.mu1_mu2, 2, b.t1); //b.t1 = 2 * b.mu1_mu2 + C1;
gpu::add(b.t1, C1, b.t1);
3. Use asynchronous calls (the gpu::Stream). By default, whenever you call a gpu function it will wait for the call
to finish and return with the result afterwards. However, it is possible to make asynchronous calls, meaning it
will request the operation's execution, make the costly data allocations for the algorithm, and return right
away. Now you can call another function if you wish to do so. For the MSSIM this is a small optimization
point. In our default implementation we split up the image into channels and then call the gpu
functions for each channel. A small degree of parallelization is possible with the stream. By using a stream we can perform the
data allocation and upload operations while the GPU is already executing a given method. For example, we need
to upload two images. We queue these one after another and immediately call the function that processes them. The
functions will wait for the upload to finish; however, while that happens, the output buffer allocations for
the function to be executed next are made.
gpu::Stream stream;

stream.enqueueConvert(b.gI1, b.t1, CV_32F);  // Upload
Time of PSNR CPU  With result of: 19.2506
Time of PSNR GPU  With result of: 19.2506
Time of PSNR GPU call  With result of: 19.2506
Time of PSNR GPU  With result of: 19.2506
Time of MSSIM CPU  With result of: B0.890964 G0.903845 R0.936934
Time of MSSIM GPU  With result of: B0.89922 G0.909051 R0.968223
Time of MSSIM GPU  With result of: B0.890964 G0.903845 R0.936934
Time of MSSIM GPU  With result of: B0.890964 G0.903845 R0.936934
In both cases we managed a performance increase of almost 100% compared to the CPU implementation. It may be
just the improvement needed for your application to work. You may observe a runtime instance of this on YouTube
here.
CHAPTER
ELEVEN
GENERAL TUTORIALS
These tutorials are the bottom of the iceberg, as they link together multiple of the modules presented above in order to
solve complex problems.
Note: Unfortunately, we have no tutorials in this section yet. Nevertheless, our tutorial writing team is working on it. If
you have a tutorial suggestion, or you have written a tutorial yourself (or coded a sample) that you would like to
see here, please contact us via our user group.