Smart Image Processing Robot

SUBMITTED BY
Group Members:
(2014-2018)

PROJECT SUPERVISOR
(Assistant Professor)

Submitted to
Air University
In partial fulfillment of the requirements
At
DEPARTMENT OF ELECTRICAL ENGINEERING
FACULTY OF ENGINEERING
April 2018

Project Supervisor
(Assistant Professor)

Head of Department
Dr Shahid Baqar
Acknowledgements

Foremost, we thank the Almighty, for the completion of this project was only possible by His grace. We would like to thank our supervisor, Sir Mumajjed-Ul-Mudassir, for his continuous support, his patience, and his guidance throughout our project. We would also like to thank Sir Abid and Sir Attique-ur-Rehman, who helped us out whenever we asked for it. We also thank our parents, who provided an unending supply of prayers and constant support throughout the course of this project, and last but certainly not least, we thank our friends.
Abstract

In the last decade, the open source community has broadened to make it possible for people to build complex products at home. Self-balancing robots are progressively becoming popular for their unique ability to move around on two wheels. They are characterized by their high maneuverability and outstanding agility. This project undertakes the construction and implementation of a two-wheeled robot that is not only capable of balancing itself on two wheels but also navigates its way around with the help of a detecting device (an image processing system) attached to it. The robot can be considered a merger of two units – the balancing unit and the image processing unit. The balancing unit performs all functions that keep the robot upright, whereas the image processing unit performs the specific task assigned to the robot. The balancing unit runs a PID control loop on a microcontroller responsible for the robot's motor control, which improves the system's stability. This system can be used as a base model to accomplish complex tasks that would otherwise be performed by humans, such as footprint analysis in wildlife reserves and autonomous indoor navigation.
Table of Contents

Acknowledgements
Abstract
Chapter 1
1 Introduction
  1.1 Background
  1.2 Motivation
  1.3 Problem Statement
  1.4 Objectives
  1.5 Overall Block Diagram
Chapter 2
2 Literature Review
Chapter 3
3 Hardware Design
  3.1 Hardware Components
    3.1.1 Microcontroller
    3.1.2 Ultrasonic Sensor - HC-SR04
    3.1.3 DC Geared Motor
    3.1.4 Camera
    3.1.5 Raspberry Pi
    3.1.6 MPU-6050
    3.1.7 ESP8266
  3.2 Circuit Diagram
    3.2.0 Motor Driver Circuit
    3.2.1 Motor Selection Calculation
    3.2.2 Ultrasonic Sensor Interface Circuit
    3.2.3 Calculation for Ultrasonic Sensor
    3.2.4 Calculations
Chapter 4
  4.1 Angle Estimation and Balancing
    4.1.1 Angle Estimation
  4.2 How to Code the MPU-6050 (Gyro + Accelerometer)
    4.2.1 Degrees of Freedom (6 DOF)
    4.2.2 3-Axis Accelerometer
    4.2.3 3-Axis Gyroscope
    4.2.4 DMP
    4.2.5 FIFO Buffer
Chapter 5
5 Software Design
  5.1 Image Processing Algorithms
Chapter 6
6 Results and Discussion
  6.1 Operation and Working
  6.2 Problems Faced
Chapter 7
Conclusion
References
Chapter 1
Introduction
1.1 Background
This report describes the purpose and design of the project titled Smart Image Processing Robot. The project comprises a motorized vehicle with an onboard camera that is interfaced for image processing. The user displays symbols/arrows to navigate the robot to the destination point while the robot balances itself throughout this path. The robot is shown in figure 1.1.
1.2 Motivation
In the last decade, the open source community has broadened to make it possible for people to build complex products at home. Self-balancing robots are progressively becoming popular for their unique ability to move around on two wheels. They are characterized by their high maneuverability and outstanding agility. Navigation has been the most essential yet most difficult aspect of building such a mobile robot. For instance, if a robot is made to carry a slope-sensitive object from one place to another with the help of specialized sensors, yet it cannot reach its destination on its own, how can we consider the robot useful? This shows how imperative navigation is in building a mobile robot.
Chapter 2
Literature Review
Robotic technology has been a mainstay of advanced engineering for more than half a century. As robots and their peripheral equipment become more sophisticated, robust, and miniaturized, these systems are increasingly being used for entertainment, military, and surveillance purposes. A self-balancing robot is described as a two-wheeled robot that is not only capable of balancing itself on two wheels but also directs its way around, with the help of a detecting device (an image processing system) attached to it, for specific purposes.

There are various microcontrollers on the market, ranging in capability from basic input/output to high-end devices. These different types of microcontroller are purpose-made for general applications. In this work, we propose a Raspberry Pi and Arduino UNO based robot in which the two boards control the robot's navigation and stabilization.

Robotics has always played a vital part in the human psyche. The dream of creating a machine that replicates human thought and physical features extends throughout the existence of mankind. Growth in technology over the past fifty years has established the basics of making these dreams come true. Robotics is now achievable through the miniaturization of the microprocessors that perform the processing and computations. New forms of sensor devices are being developed all the time, further providing machines with the ability to perceive the world around them in many ways. Effective and efficient control system designs offer the robot the ability to control itself and operate autonomously.

Artificial intelligence (AI) is becoming a definite possibility with advancements in non-linear control systems such as neural networks and fuzzy controllers. Improved synthetics and materials allow robust and cosmetically aesthetic designs to be implemented for the construction and visual aspects of the robot. Two-wheeled robots are one variation of robot that has become a standard topic of research and exploration for young engineers and robotics enthusiasts. They offer the opportunity to develop control systems that can maintain the stability of an otherwise unstable system. This type of system is also known as an inverted pendulum. This project aims to bring this, and many of the previously mentioned aspects of a robot, together in the building of a two-wheeled balancing robot with a non-linear, fuzzy controller. This field of research is essential, as robots offer an opportunity to improve the quality of life for every member of the human race. This will be achieved through the reduction of human exposure to hazardous conditions, dangerous environments, and harmful chemicals, and the provision of continual 24-hour assistance and monitoring for people with medical conditions. Robots will be employed in many applications within society, including carers, assistants, and security.
Chapter 3
Hardware Design
3.1 Hardware Components
This section discusses all the components used in this project.
3.1.1 Microcontroller Arduino Uno

The microcontroller used in the project is the Arduino UNO. The Arduino Uno is a microcontroller board based on the 8-bit ATmega328P microcontroller. Along with the ATmega328P, it includes other components such as a crystal oscillator, serial communication circuitry, and a voltage regulator to support the microcontroller. The Arduino Uno has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog input pins, a USB connection, a power barrel jack, an ICSP header, and a reset button. The package used in this project is a 28-pin PDIP. The major specifications are as follows.

The 14 digital input/output pins can be used as input or output pins by using the pinMode(), digitalRead() and digitalWrite() functions in Arduino programming. Each pin operates at 5 V, can provide or receive a maximum of 40 mA of current, and has an internal pull-up resistor of 20-50 kOhm, which is disconnected by default. Out of these 14 pins, some pins have specific functions, as listed below:

• Serial Pins 0 (Rx) and 1 (Tx): The Rx and Tx pins are used to receive and transmit TTL serial data. They are connected to the corresponding pins of the board's USB-to-TTL serial chip.
• External Interrupt Pins 2 and 3: These pins can be configured to trigger an interrupt on a low value, a rising or falling edge, or a change in value.
• PWM Pins 3, 5, 6, 9, 10 and 11: These pins provide an 8-bit PWM output by using the analogWrite() function.
• SPI Pins 10 (SS), 11 (MOSI), 12 (MISO) and 13 (SCK): These pins are used for SPI communication.
• Built-in LED Pin 13: This pin is connected to a built-in LED; when pin 13 is HIGH the LED is on, and when pin 13 is LOW it is off.

Along with the 14 digital pins, there are 6 analog input pins, each of which provides 10 bits of resolution, i.e. 1024 different values. They measure from 0 to 5 volts, but this range can be changed by using the AREF pin with the analogReference() function.

• Analog pin 4 (SDA) and pin 5 (SCL) are also used for TWI (I2C) communication using the Wire library.
• AREF: Used to provide the reference voltage for analog inputs with the analogReference() function.
c) If the signal returns, the module drives its echo IO pin high; the duration of this high level is the time from sending the ultrasonic pulse to receiving its reflection. Test distance = (high-level time × velocity of sound (340 m/s)) / 2.
SENSOR PARAMETERS:
SENSOR LOS:

3.1.3 DC Geared Motor

The speed of a DC motor is measured in rotations of the shaft per minute and is termed RPM. The gear assembly helps in increasing the torque and decreasing the speed. Using the correct combination of gears in a geared motor, its speed can be reduced to any desired figure. This concept, where gears reduce the speed of the vehicle but increase its torque, is known as gear reduction. This section explores all the minor and major points of interest that make up the gear head and, consequently, the working of a geared DC motor.
External Structure:
At first sight, the external structure of a DC geared motor looks like a straight extension of a basic DC motor.
Description of motors
• Gear shaft: 4 mm
• Rated Voltage: 12 V
• Rated Current: 1.6 A
• No-load Current: 260 mA
• Rated Torque: 7 kg·cm
• Rated Speed: 270 rpm
• No-load Speed: 320 rpm
Motor Controller:
The motors are controlled using the L298N motor driver. Different input combinations are sent to the motor driver to control the motors. The motor control logic is explained in the table below.
3.1.4 Camera
An 8 MP Raspberry Pi compatible camera with the high-quality Sony IMX219 image sensor. The Sony IMX219 is a CMOS image sensor. The frame rate of this camera is 30 frames/s. The sensor is capable of 3280 x 2464 pixel static images and 640 x 480 pixel video. It attaches to the Pi via the dedicated standard CSI interface. It is an alternative to the official Raspberry Pi camera, produced to fulfill the demands for different lens mounts, field of view (FOV) and depth of field (DOF), as well as a motorized IR-cut filter for both daylight and night vision.
3.1.5 Raspberry Pi

• 40 GPIO pins
• Pin numbering systems:
➢ BCM
➢ BOARD
• All pins read and write at a 3.3 V logic level
3.1.7 ESP8266:
The ESP8266 is a chip with which manufacturers are making wirelessly networkable microcontroller modules. More specifically, the ESP8266 is a system-on-a-chip (SoC) with capabilities for 2.4 GHz Wi-Fi (802.11 b/g/n, supporting WPA/WPA2), general-purpose input/output (16 GPIO), Inter-Integrated Circuit (I²C), analog-to-digital conversion (10-bit ADC), Serial Peripheral Interface (SPI), I²S interfaces with DMA (sharing pins with GPIO), UART (on dedicated pins, plus a transmit-only UART can be enabled on GPIO2), and pulse-width modulation (PWM). It employs a 32-bit RISC CPU based on the Tensilica Xtensa L106 running at 80 MHz (or overclocked to 160 MHz). It has a 64 KB boot ROM, 64 KB instruction RAM and 96 KB data RAM. External flash memory can be accessed through SPI.
3.2.1 Motor Selection Calculation

Wheel radius = 4/100 m = 0.04 m
Load per motor = 3 kg / 2 = 1.5 kg
Required torque per motor = 1.5 kg × 4 cm = 6 kg·cm

The selected motor's rated torque of 7 kg·cm exceeds this requirement.
We interface an ultrasonic distance sensor with the Raspberry Pi: we give a trigger pulse and receive an echo pulse. The ultrasonic distance sensor uses sonar to detect obstacles and to measure the distance to them. To determine the distance, the sensor measures the elapsed time between sending and receiving the waves. The speed of sound in air is about 343 m/s.

We generate a trigger pulse of at least 10 µs from the Raspberry Pi and interface a voltage divider to limit the echo signal at the Pi's input to ≤ 3.3 V. We divide the voltage such that there is 3.3 V or less at the output, where Vin is the 5 V pulse received on the echo line.

Since the measured time covers the round trip to the obstacle and back:

34300 cm/s = (2 × Distance) / Time
Distance = 17150 × Time (distance in cm, time in seconds)

Voltage divider values: R1 = 1 kΩ, R2 = 2 kΩ, giving Vout = 5 V × R2 / (R1 + R2) ≈ 3.3 V.
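As an illustration, the trigger/echo timing above can be sketched in Python with the RPi.GPIO library. This is a minimal sketch, assuming the BCM pin assignments TRIG = 23 and ECHO = 24 that the Appendix A code uses:

import time
import RPi.GPIO as GPIO

TRIG = 23   # trigger pin (BCM numbering, as in Appendix A)
ECHO = 24   # echo pin, level-shifted to 3.3 V through the divider

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_cm():
    GPIO.output(TRIG, True)          # 10 us trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to start
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:     # time the high level (round trip)
        pulse_end = time.time()
    return (pulse_end - pulse_start) * 17150   # Distance = 17150 x Time (cm)

print("%.1f cm" % measure_distance_cm())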
Chapter 4
4.1 ANGLE ESTIMATION AND BALANCING

To find the direction and angle of the tilt, we use an inertial measurement unit. It comprises an accelerometer and a gyroscope, which read the angular position and the angular velocity respectively.

The accelerometer gives an accurate reading over a sufficient interval of time, but it is highly susceptible to noise resulting from sudden jerking movements of the robot. Since the accelerometer measures linear acceleration, a sudden jerking movement throws off the sensor's accuracy. The gyroscope measures angular velocity, which is then integrated to find the angle of tilt. For a small interval of time, the value from the gyroscope is very accurate, but since the gyroscope experiences drift and integration compounds the error, after some time the reading becomes unreliable. Thus, we require some way to combine these two values. This is done with a complementary filter.

The complementary filter is basically a high-pass filter and a low-pass filter combined, where the high-pass acts on the gyroscope and the low-pass on the accelerometer. It makes use of the gyroscope for short-term estimation and the accelerometer for the absolute reference. This simple filter is easy to implement, can be tuned experimentally, and demands very little processing power.
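In code, one common form of the filter looks like the following minimal sketch; the weighting constant ALPHA is an assumption chosen here for illustration (values around 0.96-0.98 are typical), not a value taken from the project code:

ALPHA = 0.98   # assumed gyro/accelerometer weighting, for illustration only

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt):
    # High-pass path: integrate the gyro rate onto the previous estimate.
    # Low-pass path: pull the estimate toward the accelerometer angle.
    return ALPHA * (prev_angle + gyro_rate * dt) + (1.0 - ALPHA) * accel_angle

Each control-loop iteration then calls the filter with the latest gyro rate (in deg/s), the accelerometer-derived angle (in degrees) and the elapsed time dt (in seconds).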
Gyroscopes work on the principle of Coriolis acceleration. Imagine a fork-like structure that is in constant back-and-forth motion, held in place using piezoelectric crystals. Whenever you try to tilt this arrangement, the crystals experience a force in the direction of the inclination. This is caused by the inertia of the moving fork. The crystals thus produce a current as a consequence of the piezoelectric effect, and this current is amplified. The values are then refined by the host microcontroller.
The SCL line is the clock signal, which synchronizes the data transfer between the devices on the I2C bus; it is generated by the master device. The SDA line carries the data.

I2C can support a multi-master system, allowing more than one master to communicate with all devices on the bus. Basic I2C communication uses transfers of 8 bits, or bytes. Each I2C slave device has a 7-bit address: bits 1 to 7 carry the address, while bit 0 signals reading or writing. If bit 0 is set to 1 the master device will read from the slave device, and if bit 0 is set to 0 the master device will write to the slave device.

In the idle state both lines, SCL and SDA, are high. The communication is initiated by the master device. It generates the Start condition, followed by the address of the slave device. If bit 0 of the address byte is set to 0 the master device will write to the slave device, and if bit 0 is set to 1 the master device will read from the slave device. At the end, the master device generates the Stop condition.
Start Condition: The SDA line switches from a high voltage level to a low voltage level before the SCL line switches from high to low.

Stop Condition: The SDA line switches from a low voltage level to a high voltage level after the SCL line switches from low to high.

Read/Write Bit: A single bit at the end of the address frame informs the slave whether the master wants to send data to it or receive data from it. The read/write bit is at a low voltage level if the master wants to send data; if the master is requesting data from the slave, the bit is at a high voltage level.

ACK/NACK Bit: To ensure that the data is received, each frame in a message is followed by an acknowledge/no-acknowledge bit. If an address frame or data frame was successfully received, an ACK bit is returned to the sender by the receiving device.

In the idle condition both the SCL and SDA lines are in the high state. The communication is initiated by the master by sending the start condition followed by the address of the slave device. If the 0th bit of the address byte is 0, the master device writes to the slave device; otherwise the following bytes are read from the slave. Once all the bytes have been read or written, the master device generates the stop condition. This is the indication to the other devices that the communication has ended and that the bus can be used by the other connected devices.
1. The master device sends the start condition to every connected slave device by switching the SDA line from a high to a low voltage level before switching the SCL line from high to low.
2. The master sends the address of the slave with which it wants to communicate, along with the read or write bit.
3. Each slave compares the address sent by the master to its own address. If the address matches, the slave returns an ACK bit by pulling the SDA line low for one bit; if the address does not match the slave's own address, the slave leaves the SDA line high.
4. The master then sends or receives the data frames, one byte at a time.
5. An ACK bit is returned to the sending device in order to acknowledge the successful receipt of each data frame.
6. To stop the data transmission, the master sends a stop condition to the slave by switching SCL high before switching SDA high.
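As a concrete illustration of this sequence, the following minimal Python sketch drives the same kind of transaction from the Raspberry Pi's I2C bus using the smbus library. The MPU-6050 address (0x68) and the register addresses match those used later in this chapter; the bus number 1 is the usual value for recent Raspberry Pi boards and is an assumption here:

import smbus

bus = smbus.SMBus(1)       # I2C bus 1 on the Raspberry Pi (assumed)
MPU_ADDR = 0x68            # MPU-6050 default 7-bit slave address

bus.write_byte_data(MPU_ADDR, 0x6B, 0)    # write: wake the device (PWR_MGMT_1)
hi = bus.read_byte_data(MPU_ADDR, 0x43)   # read: GYRO_XOUT high byte
lo = bus.read_byte_data(MPU_ADDR, 0x44)   # read: GYRO_XOUT low byte
gyro_x_raw = (hi << 8) | lo               # combine the two 8-bit registers
if gyro_x_raw > 32767:                    # two's-complement sign correction
    gyro_x_raw -= 65536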
• Wire.begin() initiates the Wire library; we also need to initiate serial communication, because we will use the Serial Monitor to show the data from the sensor.
• Wire.write() is used to ask for the data from the two registers of the axis.
• Wire.endTransmission() ends the transmission and transmits the queued request to the registers.
• Wire.requestFrom() requests the transmitted data, i.e. the two bytes from the two registers.
• Wire.available() returns the number of bytes available for retrieval, which should match the number of requested bytes, in our case 2 bytes.
• Wire.read() reads the bytes from the two registers of the X axis.

At the end we print the data to the serial monitor.
Accelerometer Register:
Gyroscope Register:
Accelerometer Data:

Described earlier are the configuration settings of the accelerometer and gyroscope. Here we explain how data is gathered from the registers of the accelerometer and how this data is converted into a form that is useful to us, i.e. from raw bytes and bits to a real angle. First of all, each value is read as an 8-bit value, and we know that the whole value for ACx, ACy or ACz is 16 bits, so we combine the two registers that are placed adjacent to each other (whose locations were described earlier). After combining the two 8-bit registers, we divide the whole value by the scale factor given in the datasheet for the selected sensitivity, and then we apply the Euler formula to get the exact value of the angle from the accelerometer.
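In equation form, this matches the calculation used in the balancing code in Appendix A (with the raw values divided by 4096 for the selected ±8g range):

Acc_angle_x = atan( Acc_rawY / sqrt(Acc_rawX² + Acc_rawZ²) ) × (180/π)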
Gyroscope Data:

Code:

Wire.beginTransmission(0x68);
Wire.write(0x43);
Wire.endTransmission(false);
Wire.requestFrom(0x68, 4, true);
Gyr_rawX = Wire.read() << 8 | Wire.read();
Gyr_rawY = Wire.read() << 8 | Wire.read();
Gyro_angle = Gyr_rawX / 32.8;   // convert the raw value to deg/s for the selected full-scale range
We know that the MPU-6050 is the slave device and the Arduino is the master device. We start by beginning the transmission with the address (0x68) of the slave device, which makes the slave device active. The next step is to point to the register address from which we will start getting our data; we start with the 0x43 register, which corresponds to GYRO_XOUT. Next we continue the I2C communication without ending it, because we read the four registers for the gyroscope X and Y axes as a burst, from GYRO_XOUT through GYRO_YOUT. In the MPU-6050 all the registers are 8 bits wide, but we need the GYRO_XOUT and GYRO_YOUT values as 16 bits, so we shift the 8-bit value from the high register left by eight bits, OR it with the next 8-bit value, and save the result in a new variable.
Chapter 5
Software Design
5.1 Image processing Algorithms
BALL FOLLOWING
For capturing video we need the cv2.VideoCapture() function; its argument is the device index, i.e. which camera we are using, so we pass 0, -1 or 1. After capturing the video we need to separate it into frames so that we can apply our techniques to each separated frame, and at the end we need to release the camera by using the release() method.
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
We need to resize our video so that our processing becomes faster. For that purpose we change the width and height of the video by using the cap.set(3,320) and cap.set(4,240) commands: the frame size is 640x480 by default and we are changing it to 320x240.
cap.set(3,320)
cap.set(4,240)
3. CONVERTING TO HSV:

Each frame is converted from the BGR color space to HSV, since separating the hue, saturation and value components makes color-based tracking much easier.
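A minimal sketch of this step; the hue channel name matches the thresholding snippet below, while sat and val are our assumed names for the other two channels:

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hue, sat, val = cv2.split(hsv)   # separate channels for per-component thresholding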
4. Creating a trackbar:

We create trackbars so that we can find suitable cutoff values for our hue, saturation and value components. A trackbar is created with the cv2.createTrackbar() function: the first argument is the trackbar name, the second argument is the window name, the third argument is the default value, the fourth is the maximum value, and the fifth is the callback function that runs every time the trackbar value changes. The current position is then read back with cv2.getTrackbarPos().

cv2.createTrackbar('hmin', 'HueComp',12,179,nothing)
cv2.getTrackbarPos(trackbarname, winname)
hmn = cv2.getTrackbarPos('hmin','HueComp')
hmx = cv2.getTrackbarPos('hmax','HueComp')
6. Apply thresholding:

Thresholding means that if a pixel value is greater than some cutoff value, it is assigned one value (say, white), and otherwise it is assigned another value (say, black). For this purpose OpenCV provides the function cv2.threshold(). The first argument passed to the threshold function is the source image, which must be a grayscale image. The second argument is the threshold value used to classify the pixel values. The third argument is the maximum value, which is assigned to pixels whose value exceeds the threshold. OpenCV gives us different options for thresholding:

• cv2.THRESH_BINARY
• cv2.THRESH_BINARY_INV
• cv2.THRESH_TRUNC
• cv2.THRESH_TOZERO
• cv2.THRESH_TOZERO_INV

Here, however, we have thresholded by a simpler technique: applying a bitwise AND to the in-range masks of the hue, saturation and value components.
# Apply thresholding
hthresh = cv2.inRange(np.array(hue),np.array(hmn),np.array(hmx))
tracking = cv2.bitwise_and(hthresh,cv2.bitwise_and(sthresh,vthresh))
Morphological transformations are operations related to image shape, most often applied to binary images. They need two inputs: one is our original image, and the second is called the structuring element, or kernel matrix, which decides the nature of the operation applied to the binary image. Erosion and dilation are the two basic morphological transformations; opening and closing are derived from them.
EROSION:

The erosion operation erodes away the boundaries of the foreground object, i.e. the thing that appears as the object; everything else is considered background. The foreground should be kept white so that it is treated as the object. The kernel matrix is convolved with the image. Because we pass a binary image to the erosion operation, each pixel in the original image is either 1 or 0; a pixel is kept as 1 only if all the pixels under the kernel are 1, otherwise it is made zero.
erosion = cv2.erode(img,kernel,iterations = 1)
DILATION:

Dilation increases the white region in the image, i.e. the size of the foreground object increases.
dilation = cv2.dilate(img,kernel,iterations = 1)
OPENING:

Opening is erosion followed by dilation and is used for noise removal. Erosion removes white noise, but it also shrinks our object, so we dilate the result afterwards: the noise removed by the erosion does not return, while the disturbed object size is corrected by the dilation.

CLOSING:

Closing is dilation followed by erosion. It is useful for removing noise that occurs inside the foreground object: in closing, small black points on the object are removed.
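Both composite operations are also available directly through cv2.morphologyEx. A minimal sketch, applied to a binary mask (mask here stands for any of the thresholded images above, and kernel for the structuring element):

opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # erosion then dilation
closing = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # dilation then erosion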
8. GAUSSIAN BLUR:

The Gaussian blur technique is applied using a Gaussian kernel. It is done with the function cv2.GaussianBlur(). We have to specify the width and height of the kernel, which should be positive and odd, and the standard deviations in the X and Y directions, sigmaX and sigmaY respectively. Gaussian filtering is highly effective in removing Gaussian noise from an image.
blur = cv2.GaussianBlur(img,(5,5),0)
9. DRAWING BOUNDARIES:

To draw a circle, you need its center coordinates and radius. The first argument of cv2.circle() is the image on which we are going to draw the circle, the second argument is the pixel coordinates of the center, the third argument is the radius of the circle, the fourth argument gives the color of the drawn circle, and the last argument gives the thickness of the circle.
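For example, a sketch of drawing the tracked ball's boundary; the contour variable c and the use of cv2.minEnclosingCircle are illustrative assumptions, not lines from the project code:

(x, y), radius = cv2.minEnclosingCircle(c)   # c: a contour of the tracked ball (assumed)
cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)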
We need to resize our video so that our processing becomes faster and matches the resolution our Raspberry Pi can comfortably handle. For that purpose we change the width and height of the video: property 3 accesses the width and property 4 accesses the height, and we set the resolution of our choice.

cap.set(3,320)
cap.set(4,240)
3. Finding Edges:

We use Canny edge detection to find the edges in our image. The cv2.Canny() function is used for edge detection; its input image is the smoothed image produced in the previous step, and the next two arguments are the minimum and maximum threshold values for the edges it has to find. The third optional argument is aperture_size, the size of the Sobel kernel used to find the image gradients; in effect it sets how much change in intensity is considered a gradient.
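A sketch of this step (the hysteresis thresholds 30 and 200 are illustrative values, not taken from the project code):

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
smooth = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(smooth, 30, 200)   # min/max hysteresis thresholds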
4. FINDING CONTOURS:

Now what we have to do is find the sign area, i.e. where our sign is located in the whole image. In order to find the sign portion in our edged image, we need to find the contours in the image. A contour is the outline of an object.

To find contours in an image, we use the cv2.findContours function. Three parameters are required. The first is the image; we pass in our edged image. The second parameter, cv2.RETR_TREE, tells OpenCV to compute the hierarchy (relationship) between contours; we could have used the cv2.RETR_LIST option as well. Finally, we tell OpenCV to compress the contours to save space using cv2.CHAIN_APPROX_SIMPLE. In return, the cv2.findContours function gives us a list of contours.
5. Sorting Of Contours:

Now we have contours, but we don't know whether a contour surrounds just the sign portion or whether contours were formed all over the image. The first thing to do is reduce the number of contours we need to process. We know the area of our sign is quite large with respect to the rest of the regions in the image, so we sort the contours from largest to smallest by calculating the area of each one with cv2.contourArea and keep only the 10 largest contours, throwing the others out. Here comes the role of the precision value: each remaining contour is approximated with a precision of 2% of its perimeter, and only the candidates that reduce to a four-point outline are retained; the rest are discarded, as shown in the sketch below.
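A sketch of this filtering step, in the style of the cited tutorials (reference 5); the 0.02 factor is the 2% precision value described above:

for c in cnts:
    peri = cv2.arcLength(c, True)                    # contour perimeter
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:                             # four corners: a sign candidate
        screenCnt = approx
        break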
So far what we have obtained is our four points, but they are not in any arranged order to be processed; that is, we don't know which are the top-most corners of the rectangle and which are the bottom-most. (Knowing the top-most and bottom-most corners is important because if the image taken from the camera is not parallel to the camera frame but at some angle to it, this method lets us extract the rectangle in a flat 2D format.)
8. Order_Points Function:

We define an order_points(pts) function for this purpose. This function takes pts, a list of four points specifying the (x, y) coordinates of each corner of the rectangle, which we calculated earlier using the contour.

The actual ordering itself can vary, as long as it is kept consistent throughout the algorithm. We separate the points as top-left, top-right, bottom-right, and bottom-left. The top-left point is the one with the smallest x + y sum, and the bottom-right point is the one with the largest x + y sum. Next we have to find the top-right and bottom-left points, which is done by taking the difference x – y between the coordinates of each point using the np.diff function: the point with the smallest difference will be the top-right point, whereas the point with the largest difference will be the bottom-left point.

Now we have the four points in order, i.e. we know which point is top-left, top-right, bottom-left and bottom-right. Next we calculate the maximum width and maximum height, which we can do using the distance formula. We calculate the top width and bottom width and apply the max() function to find the maximum width of the two; similarly we calculate the left height and right height and check which one is maximum using the max() function.

After we get the maximum height and maximum width, we define our new image dimensions. The first entry in the list is (0, 0), indicating the top-left corner. The second entry is (maxWidth - 1, 0), which corresponds to the top-right corner. Then we have (maxWidth - 1, maxHeight - 1), which is the bottom-right corner. Finally, we have (0, maxHeight - 1), which is the bottom-left corner. This is how we define the dimensions of the new image.

To extract the image that is inside the rectangular shape we use the cv2.getPerspectiveTransform function. This function requires two arguments: rect, which is the list of 4 points in the original image, and dst, which is our list of transformed points. The cv2.getPerspectiveTransform function returns M, the actual transformation matrix, which is applied with cv2.warpPerspective to obtain the warped image.
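Since the Appendix A listing calls order_points and four_point_transform without defining them, a minimal sketch along the lines of the cited tutorial (reference 6) is given here; it is a reconstruction, not the project's exact code:

def order_points(pts):
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]    # top-left: smallest x + y sum
    rect[2] = pts[np.argmax(s)]    # bottom-right: largest x + y sum
    d = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(d)]    # top-right: smallest difference
    rect[3] = pts[np.argmax(d)]    # bottom-left: largest difference
    return rect

def four_point_transform(image, pts):
    rect = order_points(pts)
    (tl, tr, br, bl) = rect
    maxWidth = max(int(np.linalg.norm(br - bl)), int(np.linalg.norm(tr - tl)))
    maxHeight = max(int(np.linalg.norm(tr - br)), int(np.linalg.norm(tl - bl)))
    dst = np.array([[0, 0], [maxWidth - 1, 0],
                    [maxWidth - 1, maxHeight - 1],
                    [0, maxHeight - 1]], dtype="float32")
    M = cv2.getPerspectiveTransform(rect, dst)
    return cv2.warpPerspective(image, M, (maxWidth, maxHeight))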
As the warped image is the region of interest taken from our original image, it may contain noise and other artifacts that we don't want, so we apply a threshold to convert the warped image to pure black and white, i.e. in terms of pixels, everything becomes either zero or one. We set the threshold at 80: on a scale of 0 to 255 this is closer to 0, because we are more concerned with the black portion, which is our region of interest. After thresholding we have a clean image, which we then resize so that its size becomes equal to the size of our video frames.
11. Image Comparison:

Now we have our final converted video frames, on which we can apply the comparison operation. For this purpose, first of all we read our reference images using the cv2.imread() function, and then we resize them to the size of our video frames. We then use the bitwise XOR function to compare the reference images with the frames taken from the video. When two images are XORed, the portions which are similar in both images give a black result, i.e. 0, and the portions which do not match give white, i.e. 1.

After the bitwise XOR we have a result whose pixels are either zero or one, where the unmatched portion of the image is one. So if we count the ones in the image using cv2.countNonZero, we can decide whether the image is matched or not.
Flow diagram:

Block diagram:

The basic working of our project is shown in the block diagram; its blocks include the Arduino and the motor driver.
Working Explanation:
We have to establish a connection between our image processing part, which is the Raspberry Pi, and the Arduino installed on our robot. We establish this connection with a Wi-Fi module on the Arduino side and a web server installed on the Raspberry Pi.

The process starts with the installation of some web server packages on the Raspberry Pi. The pre-installations that we performed for the web server created on the Raspberry Pi are as follows:
• sudo apt-get update
• sudo bash
• apt-get install apache2 apache2-doc apache2-utils
• apt-get install libapache2-mod-php5 php5 php-pear php5-xcache
• apt-get install php5-mysql
• apt-get install mysql-server mysql-client
After the pre-installations on the Raspberry Pi we have to test whether our server is working. For that we type the IP address of the Raspberry Pi into a web browser; if it shows the default page then the server is working, and if not we have to redo the pre-installations.

If it is working, then we are on our way to making our own web page, which is done in the PHP language. Our purpose is simply to read some pins on the Raspberry Pi and continuously refresh the page so that it stays current with time. The command we use to read a pin is shown below.
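The exact line was lost from this copy of the report; with the WiringPi gpio utility that the surrounding text implies, it would look something like the following (an assumed reconstruction, not the original line):

$pinstatus = shell_exec("gpio -g read 17");   // read GPIO 17 (BCM numbering); assumed command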
This command reads the pin status of pin number 17. The next step after reading the pin status is to display that status on the server page. This can be done with the command:

echo $pinstatus;

The job of making the server is almost done; the last thing we have to do is continuously refresh the web page. This can be done as:

header("Refresh: 0;");

The value 0 indicates that we are refreshing our page after every zero seconds, which means continuously.

This is the index file, which we have edited; it is located in the /var/www directory on the Raspberry Pi. The code is written in the PHP language and saved in a file named index.php. The file can be accessed using the Raspberry Pi's IP address in a browser. These commands are used to show the pin status of the Raspberry Pi GPIO.
On the robot end, an ESP8266-12E is used to read the status of the Raspberry Pi GPIO pins from the web server, acting as a client. After reading the pin status, the ESP processes the data and drives its own GPIO pins to control the direction of the robot.

We can find the client code for the NodeMCU in the Arduino IDE examples; it explains how we can connect to a web server. The things we have to edit in that code are explained below in detail:
WiFiMulti.addAP("Mujtaba", "12345678");
This is the information for the access point through which we are accessing our server: we have to provide the SSID, i.e. the name of our router's network, and the password of our router.

After this we wait for the Wi-Fi connection to be established, and then we begin our connection with:

http.begin("http://192.168.0.2/");

This is the IP address of our web server, which we earlier set up on our Raspberry Pi. After successful communication with the Raspberry Pi we have to get the data from it:

httpCode > 0

This condition is satisfied only when the HTTP header has been sent and the response header has been handled.

if(httpCode == HTTP_CODE_OK) {
String payload = http.getString();

After a successful connection we get the data from the server with the http.getString() command and save the string in a string variable for future use and conditioning. After getting the string data from the server, we convert it into a form that is useful for us; we then apply conditions to the received string and turn our GPIO pins on and off.

One of the problems in turning the GPIO pins on and off is that the physical pin numbering differs from the GPIO pin numbering, and we faced some problems in locating the pins by referring to the pin diagram.
5.2.2 PyCharm
PyCharm is an Integrated Development Environment (IDE) used in computer programming, specifically for the Python language. It provides code analysis, a graphical debugger, an integrated unit tester, integration with version control systems (VCSs), and support for web development with Django.
Virtual network computing (VNC) is a type of remote-control software that makes it possible to control another computer over a network connection. Keystrokes and mouse clicks are transmitted from one computer to another, allowing technical support staff to manage a desktop, server, or other networked device without being in the same physical location.

VNC works on a client/server model: a VNC viewer (or client) is installed on the local computer and connects to the server component, which must be installed on the remote computer.
Chapter 6
Results and Discussion
Chapter 7
Conclusion
This project was started with the problem statement of how we can send a vehicle from one place to another while it stabilizes itself through a self-balancing technique. This problem was proposed to be resolved using a vehicle called the 'Smart Image Processing Robot', which was based on the concept of controlling the vehicle by displaying a set of symbols to navigate it to its destination. The underlying concept was to move the vehicle autonomously from one place to another while performing image processing. Furthermore, real-time image processing was also performed in the project. One of the aims was to develop an autonomous vehicle for surveillance at low cost. All these aims and objectives were successfully achieved by the end of the project.
References

1. https://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html
2. http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_canny/py_canny.html
3. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
4. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
5. https://www.pyimagesearch.com/2014/04/21/building-pokedex-python-finding-game-boy-screen-step-4-6/
6. https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
7. http://roboticssamy.blogspot.pt/
8. http://www.robotshop.com/letsmakerobots/rs4-self-balancing-raspberry-pi-image-processing-robot
9. http://socialledge.com/sjsu/index.php/S15:_Self-Balancing_Robot
APPENDIX A: CODES

BALANCING CODE:
#include <Wire.h>
#define in_1a 8
#define in_1b 9
#define in_2a 6
#define in_2b 11
#define en1 2
#define en2 3
#define en 51
#define d0 31
#define d1 33
float Acceleration_angle;
float Gyro_angle;
float Total_angle;
/////////////////PID CONSTANTS/////////////////
double kp=18.0;//3.55//16//good p=18// kazim motor = 8
double ki=2.5;//.000545; //0.01;//0.003//1.7//.8//0.5 // k motor=0.4//2.5
double kd=1.0;//1.0427; //2.05;//2.05//1.4//.7//0.3 k motor =0.4//1.5
///////////////////////////////////////////////
void setup() {
  TCCR3B = TCCR3B & B11111000 | B00000010;  // adjust the timer 3 prescaler (PWM frequency)
  Wire.begin();                             // begin the I2C communication
  TWBR = 12;                                // set the I2C clock speed to 400 kHz
  Wire.beginTransmission(0x68);             // slave address (for the MPU-6050)
  Wire.write(0x6B);                         // address of the register to be written
  Wire.write(0);                            // write 0 to register 0x6B to wake the MPU
  Wire.endTransmission(true);               // end transmission with the MPU
  //attachInterrupt(digitalPinToInterrupt(2),forward,HIGH);
  //attachInterrupt(digitalPinToInterrupt(3),reverse,HIGH);
  /*
  if(gyro_error==0)
  {
    for(int i=0; i<1000; i++)
    {
      Wire.beginTransmission(0x68);          // begin, send the slave address (in this case 0x68)
      Wire.write(0x43);                      // first address of the gyro data
      Wire.endTransmission(false);
      Wire.requestFrom(0x68,4,true);         // we ask for just 4 registers
      Gyr_rawX=Wire.read()<<8|Wire.read();   // once again we shift and sum
      Gyr_rawY=Wire.read()<<8|Wire.read();
      if(i==999)
      {
        Gyro_raw_error_x = Gyro_raw_error_x/1000;
        //Gyro_raw_error_y = Gyro_raw_error_y/1000;
        gyro_error=1;
      }
    }
  }
  if(acc_error==0)
  {
    for(int a=0; a<1000; a++)
    {
      Wire.beginTransmission(0x68);
      Wire.write(0x3B);                      // ask for the 0x3B register, which corresponds to AcX
      Wire.endTransmission(false);
      Wire.requestFrom(0x68,6,true);
      Acc_rawX=(Wire.read()<<8|Wire.read())/4096.0;   // each value needs two registers
      Acc_rawY=(Wire.read()<<8|Wire.read())/4096.0;
      Acc_rawZ=(Wire.read()<<8|Wire.read())/4096.0;
      Acc_angle_error_x = Acc_angle_error_x + ((atan((Acc_rawY)/sqrt(pow((Acc_rawX),2) + pow((Acc_rawZ),2)))*rad_to_deg));
      //Acc_angle_error_y = Acc_angle_error_y + ((atan(-1*(Acc_rawX)/sqrt(pow((Acc_rawY),2) + pow((Acc_rawZ),2)))*rad_to_deg));
      if(a==999)
      {
        Acc_angle_error_x = Acc_angle_error_x/1000;
        //Acc_angle_error_y = Acc_angle_error_y/1000;
        acc_error=1;
      }
    }
  }
  */
void loop() {
  digitalRead(d0);
  digitalRead(d1);
  digitalWrite(en,HIGH);
  timePrev = time;                         // keep a record of the time of the previous loop
  time = millis();                         // built-in function to record the current time
  elapsedTime = (time - timePrev) / 1000;  // convert time from milliseconds to seconds
  //Serial.println(millis());
  /* elapsedTime is the time that passed since the previous loop.
     This is the value that we use in the formulas as "elapsedTime", in seconds.
     We work in ms, so we have to divide the value by 1000 to obtain seconds. */
  // (the Wire request preceding these accelerometer reads was lost at a page break in this copy)
  Acc_rawY=Wire.read()<<8|Wire.read();     // similar
  Acc_rawZ=Wire.read()<<8|Wire.read();     // similar
  Acceleration_angle = atan((Acc_rawY/4096.0)/sqrt(pow((Acc_rawX/4096.0),2) + pow((Acc_rawZ/4096.0),2)))*rad_to_deg; // divide by 4096 since the +/-8g range is selected for the accelerometer
  Serial.print("ANGLE= ");
  Serial.print(Total_angle);   // we take the total angle as just the one angle about the x-axis
  Serial.print("\t");
  /*
  else
  {
    error = Total_angle - desired_angle;
  }
  */
  if(error > -10 && error < 10)  // apply the integral term only in a small range around the setpoint, since it just fine-tunes the error
  {
    pid_i = pid_i+(ki*error);    // sum the previous value of the integral with the present error
  }
  if(pid_i>200)pid_i=200;
  if(pid_i<-200)pid_i=-200;      // impose a limit on the total integral value so that it may not become very large
  /*
  The derivative term is the amount of error produced in a certain amount of time,
  divided by that time. For that we use a variable called previous_error.
  We subtract that value from the actual error and divide it all by the elapsed
  time. Finally we multiply the result by the derivative constant.
  */
  PID = pid_p + pid_i + pid_d;   // the total PID output is the sum of the P, I and D terms
  Serial.print("error= ");
  Serial.print(error);
  Serial.print("\t");
  Serial.print("PID= ");
  Serial.print(PID);
  if (PID >= 0)
  {
    pwmLeft = initial_val + PID;   // the PID output added to the initial motor value gives the PWM applied to the motors
    pwmRight = initial_val + PID;
  }
  if (PID < 0)
  {
    pwmLeft = initial_val - PID;
    pwmRight = initial_val - PID;
  }
Serial.print("\t");
Serial.print(pwmRight);
Serial.print("\t");
Serial.println(pwmLeft);
if(PID <= 0 && PID >= -200)  // if the robot is falling to one side, drive the motors towards the falling side
{
digitalWrite(in_1a,LOW);
digitalWrite(in_1b,HIGH);
digitalWrite(in_2a,LOW);
digitalWrite(in_2b,HIGH);
analogWrite(en1,pwmRight);
analogWrite(en2,pwmLeft);
}
if(PID >= 0 && PID <= 200)   // if the robot is falling to the other side, drive the motors towards that side
{
digitalWrite(in_1a,HIGH);
digitalWrite(in_1b,LOW);
digitalWrite(in_2a,HIGH);
digitalWrite(in_2b,LOW);
analogWrite(en1,pwmRight);
analogWrite(en2,pwmLeft);
}
}

ESP8266 CLIENT CODE:
int d0=16;
int d1=5;
int d2=4;
int d3=0;
int d4=2;
int d5=14;
int d6=12;
int d7=13;
int d8=15;
/**
* BasicHTTPClient.ino
*
* Created on: 24.05.2015
*
*/
#include <Arduino.h>
#include <ESP8266WiFi.h>
#include <ESP8266WiFiMulti.h>
#include <ESP8266HTTPClient.h>
#define USE_SERIAL Serial
ESP8266WiFiMulti WiFiMulti;
void setup() {
pinMode(d1, OUTPUT);
pinMode(d2, OUTPUT);
pinMode(d3, OUTPUT);
pinMode(d4, OUTPUT);
pinMode(d5, OUTPUT);
USE_SERIAL.begin(115200);
// USE_SERIAL.setDebugOutput(true);
USE_SERIAL.println();
USE_SERIAL.println();
USE_SERIAL.println();
WiFiMulti.addAP("Mujtaba_03030103828", "12345678");
void loop() {
  // wait for WiFi connection
  if((WiFiMulti.run() == WL_CONNECTED))
  {
    HTTPClient http;
    USE_SERIAL.print("[HTTP] begin...\n");
    // configure target server and url
    //http.begin("https://192.168.1.12/test.html", "7a 9c f4 db 40 d3 62 5a 6e 21 bc 5c cc 66 c8 3e a1 45 59 38"); //HTTPS
    http.begin("http://192.168.0.2/"); //HTTP
    USE_SERIAL.print("[HTTP] GET...\n");
    // start connection and send HTTP header
    int httpCode = http.GET();
    if(httpCode > 0) {
      if(httpCode == HTTP_CODE_OK) {
        String payload = http.getString();  // restored from the explanation in Chapter 5
        if(payload=="01")
        {
          digitalWrite(d2, LOW);
          digitalWrite(d1, HIGH);   // turn the pin on (HIGH is the voltage level)
          //delay(1000);            // wait for a second
          //digitalWrite(d1, LOW);  // turn the pin off by making the voltage LOW
          //delay(1000);
          USE_SERIAL.println("MOVE FORWARD");
        }
        else if(payload=="10")
        {
          digitalWrite(d1, LOW);
          digitalWrite(d2, HIGH);   // turn the pin on (HIGH is the voltage level)
          //delay(1000);            // wait for a second
          //digitalWrite(d2, LOW);  // turn the pin off by making the voltage LOW
          //delay(1000);
          USE_SERIAL.println("MOVE RIGHT");
        }
else if(payload=="11")
{
digitalWrite(d1, HIGH);
digitalWrite(d2, HIGH);
        else
        {
          digitalWrite(d1, LOW);
          digitalWrite(d2, LOW);
          //delay(500);   // wait for half a second
          USE_SERIAL.println("keep balance position");
        }
      }
    }
    else
    {
      USE_SERIAL.printf("[HTTP] GET... failed, error: %s\n", http.errorToString(httpCode).c_str());
    }
http.end();
}
delay(1000);
}
PYTHON IMAGE PROCESSING CODE:

import numpy as np
import cv2
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)
TRIG = 23
ECHO = 24
LED = 18
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(LED, GPIO.OUT)
kernel = np.ones((5,5), np.uint8)
img1 = cv2.imread('arrowL.jpg', 0)
img3 = cv2.imread('arrowR.jpg', 0)
cap = cv2.VideoCapture(0)                      # restored from Chapter 5
resized_image1 = cv2.resize(img1, (320, 240))  # reference signs resized to the frame size (restored from Chapter 5)
resized_image3 = cv2.resize(img3, (320, 240))
while(True):
    buzz = 0
    ret, frame = cap.read()
    GPIO.output(TRIG, False)
    time.sleep(0.5)
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    while GPIO.input(ECHO)==0:
        pulse_start = time.time()
    while GPIO.input(ECHO)==1:
        pulse_end = time.time()
    distance = (pulse_end - pulse_start) * 17150   # restored from the formula in Chapter 3
    if distance <= 35:   # process signs only when an obstacle is near (inferred from the else branch below)
        # (the grayscale/blur/Canny steps producing 'edged' were lost at a page break in this copy)
        (cnts, hierarchy) = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
        screenCnt = None
        #*****************************************************************************
        pts = order_points(pts)
        warped = four_point_transform(frame, pts)
        #*****************************************************************************
        ret, thresh1 = cv2.threshold(warped, 80, 255, cv2.THRESH_BINARY)
        re = cv2.resize(thresh1, (320, 240))
        gray1 = cv2.cvtColor(re, cv2.COLOR_BGR2GRAY)
        imgxor0 = cv2.bitwise_xor(gray1, resized_image1)
        imgxor2 = cv2.bitwise_xor(gray1, resized_image3)
        nzCount0 = cv2.countNonZero(imgxor0)
        nzCount2 = cv2.countNonZero(imgxor2)
        print(nzCount0)
        print(nzCount2)
        if nzCount0 <= 55600:
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(frame, 'turn left', (20, 210), font, 1, (255, 255, 255), 2, 2)
        elif nzCount2 <= 57500:
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(frame, 'turn right', (20, 210), font, 1, (255, 255, 255), 2, 2)
        else:
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(frame, 'not match found', (20, 210), font, 1, (255, 255, 255), 2, 2)
        #cv2.imshow("comparison with left sign", imgxor0)
        #cv2.imshow("comparison with right sign", imgxor2)
        #cv2.imshow("Warped", re)
        #cv2.imwrite("meri_tasweer.jpg", re)
        cv2.imshow("orignal video", frame)
    else:
        print("distance greater than 35")
        font = cv2.FONT_HERSHEY_SIMPLEX
        cv2.putText(frame, 'distance<25', (20, 210), font, 1, (255, 255, 255), 2, 2)
        cv2.imshow("orignal video", frame)
cap.release()
cv2.destroyAllWindows()