
Smart Image Processing Robot

Group Members:

MUJTABA WAJID QURESHI (140180)

SYEDA UME AYMUN (140312)

MUHAMMAD JALEEL (140246)

Bachelor of Electrical Engineering

(2014-2018)

PROJECT SUPERVISOR

MR. MUMAJJED UL MUDASSIR

ASSISTANT PROFESSOR

DEPARTMENT OF ELECTRICAL ENGINEERING

FACULTY OF ENGINEERING

AIR UNIVERSITY, ISLAMABAD

A Final Year Project Report

Submitted to

Air University

In partial fulfillment of the requirements

For the Degree of

Bachelor of Electrical Engineering

Smart Image Processing Robot

SUBMITTED BY

MUJTABA WAJID QURESHI (140180)

SYEDA UME AYMUN (140312)

MUHAMMAD JALEEL (140246)

(2014-2018)
DEPARTMENT OF ELECTRICAL ENGINEERING

April 2018

Approval for Submission


It is to certify that the project report titled

“Smart Image Processing Robot”

Has met the required standard for submission

In partial fulfillment of the requirements

For the award of degree of

Bachelor of Electrical Engineering

At

Air University, Islamabad.

Project Supervisor

(Mr. Mumajjed Ul Mudassir)

(Assistant Professor)

Head of Department

Dr Shahid Baqar
Acknowledgements

Foremost, we thank the Almighty, for the completion of this project was only possible by His grace. We would like to thank our supervisor, Sir Mumajjed-Ul-Mudassir, for his continuous support, his patience and his guidance throughout our project. We would also like to thank Sir Abid and Sir Attique-ur-Rehman, who helped us out whenever we asked for it. We also thank our parents, who provided an unending supply of prayers and constant support throughout the course of this project. Last but certainly not least, we thank our friends, who encouraged us at every point to make it this far.


Abstract

In the last decade, the open source community has broadened to make it possible for people to build complex products at home. Self-balancing robots are progressively becoming popular for their unique ability to move around on two wheels. They are characterized by their high maneuverability and outstanding agility. This project undertakes the construction and implementation of a two-wheeled robot that is not just capable of balancing itself on two wheels but also navigates its way around with the help of a detecting device (an image processing system) attached to it. The robot can be considered a merger of two units: the balancing unit and the image processing unit. The balancing unit performs all functions that keep the robot upright, whereas the image processing unit performs the specific task assigned to it. The balancing unit runs a PID control loop on a microcontroller responsible for the robot's motor control, which improves the system's stability. This system can be used as a base model to accomplish complex tasks that would otherwise be performed by humans, some of which include footprint analysis in wildlife reserves, autonomous indoor navigation, etc.

Table of Contents
Acknowledgements ................................................................................................................ I

Abstract ................................................................................................................................. II

Table of Contents ................................................................................................................ III

List of Figures .......................................................................................................... V

List of Tables ........................................................................................................... VI

Chapter 1 ................................................................................................................... 1
1 Introduction ............................................................................................................. 1
1.1 Background .......................................................................................................... 1
1.2 Motivation ............................................................................................................ 1
1.3 Problem Statement ............................................................................................... 2
1.4 Objectives ............................................................................................................. 2
1.5 Overall Block Diagram ........................................................................................ 3

Chapter 2 ................................................................................................................... 4
2 Literature Review .................................................................................................... 4

Chapter 3 ................................................................................................................... 5
3 Hardware Design ..................................................................................................... 5
3.1 Hardware Components ......................................................................................... 6
3.1.1 Microcontroller ............................................................................................ 6
3.1.2 Ultrasonic Sensor - HC-SR04 ....................................................................... 7
3.1.3 DC Geared Motor ......................................................................................... 9
3.1.4 Camera ........................................................................................................ 11
3.1.5 Raspberry Pi ................................................................................................ 12
3.1.6 MPU-6050 ................................................................................................... 13
3.1.7 ESP8266 ...................................................................................................... 14
3.2 Circuit Diagram .................................................................................................. 16
3.2.1 Motor Driver Circuit .................................................................................... 16
3.2.1.1 Motor Selection Calculation ..................................................................... 16
3.2.2 Ultrasonic Sensors Interface Circuit ............................................................ 16
3.2.3 Calculations for Ultrasonic Sensor .............................................................. 17

Chapter 4 .................................................................................................................. 18
4.1 Angle Estimation and Balancing ........................................................................ 18
4.1.1 Angle Estimation ......................................................................................... 18
4.2 How to Code the MPU-6050 (Gyro + Accelerometer) ...................................... 19
4.2.1 Degrees of Freedom (6 DOF) ...................................................................... 19
4.2.2 3-Axis Accelerometer .................................................................................. 19
4.2.3 3-Axis Gyroscope ........................................................................................ 19
4.2.4 DMP ............................................................................................................ 22
4.2.5 FIFO Buffer ................................................................................................. 22
4.3 I2C Communication ........................................................................................... 22
4.3.1 Communication Methodology ..................................................................... 24
4.3.2 Basic Functions ............................................................................................ 24
4.3.3 Configuration Registers ............................................................................... 24
4.3.4 Data Conversion .......................................................................................... 28

4.4 Complementary Filter
4.5 What is PID (Proportional, Integral and Derivative Control Algorithm) .......... 29
4.6 How to Tune PID for a Self-Balancing Robot ................................................... 29

Chapter 5 .................................................................................................................. 31
5 Software Design ..................................................................................................... 31
5.1.1 Algorithms Used for Ball Following ........................................................... 31
5.1.2 Algorithms Used for Sign Detection ........................................................... 38
5.1.3 Server Communication ................................................................................ 45
5.2 Software Used .................................................................................................... 49
5.2.1 Arduino IDE ................................................................................................ 49
5.2.2 PyCharm ...................................................................................................... 49
5.2.3 Proteus 8 Pro ................................................................................................ 49
5.2.4 VNC ............................................................................................................. 49

Chapter 6 .................................................................................................................. 51
6 Results and Discussion .......................................................................................... 51
6.1 Operation and Working ...................................................................................... 51
6.2 Problems Faced .................................................................................................. 52

Chapter 7 .................................................................................................................. 52
Conclusion ................................................................................................................ 52

References ................................................................................................................ 53

Appendix A: Code .................................................................................................... 54

List of Figures

Fig 1.1………………………………………………………………………………..1
Fig 1.2………………………………………………………………………………..3
Fig 3.1……………………………………………………………............…………..8
Fig 3.2………………………………………………………………………………..9
Fig 3.3……………………………………………………………….………………10
Fig 3.4…………………………………………………………………….…………10
Fig 3.5……………………………………………………………….………………11
Fig 3.6………………………………………………………………….……………12
Fig 3.7…………………………………………………………………….…………13
Fig 3.8……………………………………………………………………….………14
Fig 3.9………………………………………………………….……………………16
Fig 3.10……………………………………………………………….……………..16
Fig 3.11……………………………………………………………….……………..17
Fig 4.1……………………………………………………………………….………31
Fig 4.2…………………………………………………………….…………………32
Fig 5.1………………………………………………………….....……………........33
Fig 5.2……………………………………………………………………….………34
Fig 5.3………………………………………………………….....……………........35
Fig 5.4………………………………………………………….....……………........37
Fig 5.5………………………………………………………….....……………........39
Fig 5.6………………………………………………………….....……………........40
Fig 5.7………………………………………………………….....……………........41
Fig 5.8………………………………………………………….....……………........44
Fig 5.9………………………………………………………….....……………........46
Fig 5.10………………………………………………………….....…………….......47
List of Tables

Table 3.1……………………………………………………………………………….…9

Table 3.2………………………………………………………………….………………12

Table 3.3………………………………………………………………….………………15

Table 5.1…………………………………………………………….…………….……...4


Chapter 1
Introduction

1.1 Background
This report explains the design and advantages of the project titled Smart Robot with Image Processing. The project consists of a motorized vehicle with an onboard camera interfaced for image processing. The user displays symbols/arrows to navigate the robot to the destination point, while the robot balances itself throughout this path. The robot is shown in Figure 1.1.

Figure 1.1 Smart Image Processing Robot

1.2 Motivation
In the last decade, the open source community has broadened to make it possible for people to build complex products at home. Self-balancing robots are progressively becoming popular for their unique ability to move around on two wheels. They are characterized by their high maneuverability and outstanding agility. Navigation has been the most essential and yet the most difficult aspect of building such a mobile robot. For instance, if this robot were made to carry some slope-sensitive object from one place to another with the help of special sensors, yet could not reach the target on its own, how could we regard the robot as useful? This shows how imperative navigation is in building a mobile robot.

1.3 Problem Statement


Nowadays, mechanical automation has become more essential, since a significant part of industry is endeavoring to upgrade its equipment. This advancement has produced increasingly capable robots that guarantee excellent results. Recently, a huge number of industrial robots have been devised to help societies run their daily lives.
A self-balancing robot is truly an essential machine. It balances itself on two wheels and provides the unique stability which differentiates it from other, ordinary robots. This ability allows it to navigate various terrains, sharp corners, etc., and solves a number of challenges in industry and society.
Beyond that, its potential for future development is tremendous. By adding a collision avoidance system, a considerable range of new mobile robots with various capabilities can be designed.
The basic idea of self-balancing is to drive the wheels in the direction in which the robot tilts, while following the external directions given through the camera. This balancing is itself very complex to achieve. Another problem is carrying out image processing during the robot's destabilization, as the robot must be stable for the camera to capture the signs.
Once these have been settled, the primary target can be achieved.

1.4 Aims and Objectives


The aims and objectives of our project are as follows:
• To be able to balance on two wheels.
• To be stable, with the shortest settling time and smallest overshoot, and not make any large unwanted movements.
• To be able to use a camera and Raspberry Pi for image processing.
• To be economical.


1.5 Overall Block Diagram


The overall block diagram is shown in Figure 1.2. The general design of the robot is a two-layered rectangular body on two wheels. The wheels are placed parallel to each other. The layers are made of a special material known as Elpolic, which is basically a fiber sheet enclosed between two aluminum sheets. The bottom layer carries the wheels and comprises the electronic circuitry, which includes the microcontroller, angle sensor, power circuit, motors and motor driver. The topmost layer contains a LiPo battery.
The balancing and image processing units each have a separate CPU. The balancing unit contains a microcontroller, an Arduino UNO, which runs at 16 MHz. The image processing unit contains a single-board computer, a Raspberry Pi, which runs at 1.2 GHz. We interface an ultrasonic distance sensor with the Raspberry Pi, give a trigger pulse and receive an echo pulse. The ultrasonic distance sensor uses sonar to detect obstacles and to measure the distance to them.
An inertial measurement unit (IMU), a microcontroller, a motor driver and two motors form the balancing unit. The microcontroller continuously reads the data from the IMU and calculates the angle of tilt of the robot with respect to the vertical. Based on this data, the microcontroller then sends appropriate control signals to the motor driver to drive the motors.

Figure 1.2 Block diagram


With the help of the camera, video is recorded for later image processing. In image processing, we extract frames from the video and apply different image processing techniques; change is also detected from the recorded video. The other techniques used are edge detection, histogram analysis, distance measurement (in pixels), image resizing, non-linear filtering, rotation and flipping.

Chapter 2
Literature Review

Robotic innovation has been a principal of cutting-edge development for more than half a century. As robots and their peripheral equipment become more sophisticated, robust and miniaturized, these systems are increasingly being utilized for entertainment, military and surveillance purposes. A self-balancing robot is described as a two-wheeled robot that is not just capable of balancing itself on two wheels but also directs its way around with the help of a detecting device (an image processing system) attached to it, for specific purposes.

There are various microcontrollers on the market, with capabilities ranging from basic input/output to high-end processing. These diverse sorts of microcontroller are purpose-made for general application. In this work, we propose a Raspberry Pi and Arduino UNO based robot in which these boards control the robot's navigation and stabilization.

Robotics has always played a vital part in the human psyche. The dream of creating a machine that replicates human thought and physical features extends throughout the existence of mankind. Growth in technology over the past fifty years has established the basics of making these dreams come true. Robotics is now achievable through the miniaturization of the microprocessors which perform the processing and computations. New forms of sensor devices are being developed all the time, further providing machines with the ability to perceive the world around them in many ways. Effective and efficient control system designs offer the robot the ability to control itself and operate autonomously.
Artificial intelligence (AI) is becoming a definite possibility with advancements in non-linear control systems such as neural networks and fuzzy controllers. Improved synthetics and materials allow robust and cosmetically aesthetic designs to be implemented for the construction and visual aspects of the robot. Two-wheeled robots are one variation of robot that has become a standard topic of research and exploration for young engineers and robotics enthusiasts. They offer the opportunity to develop control systems that can maintain the stability of an otherwise unstable system. This type of system is also known as an inverted pendulum. This


project aims to bring this, and many of the previously mentioned aspects of a robot, together in the building of a two-wheeled balancing robot with a non-linear, fuzzy controller. This field of research is essential, as robots offer an opportunity to improve the quality of life for every member of the human race. This will be achieved through the reduction of human exposure to hazardous conditions, dangerous environments and harmful chemicals, and the provision of continual 24-hour assistance and monitoring for people with medical conditions, etc. Robots will be employed in many applications within society, including carers, assistants and security.

Chapter 3
Hardware Design
3.1 Hardware Components
This section discusses all the components used in this project.
3.1.1 Microcontroller Arduino Uno
The microcontroller used in the project is the Arduino UNO. The Arduino Uno is a
microcontroller board based on the 8-bit ATmega328P microcontroller. Along with the
ATmega328P, it includes other components, such as a crystal oscillator, serial
communication, and a voltage regulator, to support the microcontroller. The Arduino Uno has
14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog
input pins, a USB connection, a power barrel jack, an ICSP header and a reset button.
The ATmega328P package used on this board is a 28-pin PDIP. The major specifications are as
follows.

The 14 digital input/output pins can be used as input or output pins by using the
pinMode(), digitalRead() and digitalWrite() functions in Arduino programming. Each
pin operates at 5 V and can provide or receive a maximum of 40 mA of current, and has an
internal pull-up resistor of 20–50 kΩ, disconnected by default. Of these 14 pins, some have specific functions as listed below:

• Serial Pins 0 (Rx) and 1 (Tx): The Rx and Tx pins are used to receive and transmit TTL serial data. They connect to the corresponding pins of the on-board USB-to-TTL serial chip.
• External Interrupt Pins 2 and 3: These pins can be configured to trigger an interrupt on a low value, a rising or falling edge, or a change in value.
• PWM Pins 3, 5, 6, 9, 10 and 11: These pins provide an 8-bit PWM output via the analogWrite() function.
• SPI Pins 10 (SS), 11 (MOSI), 12 (MISO) and 13 (SCK): These pins are used for SPI communication.
• Built-in LED Pin 13: This pin is connected to a built-in LED; when pin 13 is HIGH the LED is on, and when pin 13 is LOW it is off.

Along with the 14 digital pins, there are 6 analog input pins, each of which provides 10 bits of resolution, i.e. 1024 different values. They measure from 0 to 5 volts, but this upper limit can be changed by using the AREF pin with the analogReference() function.

• Analog pin 4 (SDA) and pin 5 (SCL) are also used for TWI (I2C) communication using the Wire library.

Arduino Uno has a couple of other pins as explained below:

• AREF: Used to provide reference voltage for analog inputs with analogReference()
function.


• Reset Pin: Making this pin LOW resets the microcontroller.

Figure 3.1 Pin layout

3.1.2 Ultrasonic Sensor – HC-SR04


Product features:
The ultrasonic ranging module HC-SR04 provides a 2 cm – 400 cm non-contact
measurement function, with ranging accuracy down to 3 mm. The module includes an
ultrasonic transmitter, a receiver and a control circuit. The basic principle of operation is:
a) Drive the trigger (TRIG) pin high for at least 10 µs.
b) The module automatically sends eight 40 kHz pulses and detects whether a pulse
signal comes back.
c) If a signal comes back, the ECHO pin goes high; the duration of this high level is the
time from sending the ultrasonic burst to its return. Test distance = (high-level time ×
velocity of sound (340 m/s)) / 2.

Figure 3.2 Sensor timing diagram

SENSOR PARAMETERS:

Table 3.1 Sensor Parameter


SENSOR LOS:

Figure 3.3 Sensor timing diagram

Figure 3.4 Ultrasonic Sensor - HC-SR04

3.1.3 DC Geared Motors


Geared DC motors can be described as an extension of the plain DC motor. A geared DC
motor has a gear assembly attached to the motor. The speed of a motor is measured in
rotations of the shaft per minute, termed RPM. The gear assembly helps in
increasing the torque and reducing the speed. Using the correct combination of gears in a
gear motor, its speed can be reduced to any desired figure. This concept, where gears
reduce the speed of the vehicle yet increase its torque, is known as gear reduction. This
section explores all the minor and major details that make up the gear head and hence the
working of a geared DC motor.
External Structure:
At first sight, the external structure of a DC geared motor looks like a straightforward
extension of the basic DC motor.

Figure 3.5 DC geared motor

Description of motors:
• Gear shaft: 4 mm
• Rated Voltage: 12 V
• Rated Current: 1.6 A
• No-load Current: 260 mA
• Rated Torque: 7 kg·cm
• Rated Speed: 270 rpm
• No-load Speed: 320 rpm


Motor Controller:
The motors are controlled using the L298N motor driver. Different input combinations are
sent to the motor driver to control the motors. The motor control combinations are explained
in the table below.

Motor 1    Motor 2    Result
Forward    Forward    Forward
Forward    Stop       Right
Stop       Forward    Left
Stop       Stop       Stop

Table 3.2 Motors Working

3.1.4 Camera
An 8 MP Raspberry Pi compatible camera featuring the high-quality Sony IMX219 image
sensor. The Sony IMX219 is a CMOS image sensor. The frame rate of this camera is
30 frames/s. The sensor is capable of 3280 × 2464 pixel static images
and 640 × 480 pixels for video. It attaches to the Pi via the dedicated standard CSI interface.
It is a supplement to the official Raspberry Pi camera, meeting demands
for different lens mounts, field of view (FOV) and depth of field (DOF), as well as
a motorized IR-cut filter for both daylight and night vision.

Figure 3.6 IP camera



3.1.5 Raspberry Pi 3 model B (1GB)


The Raspberry Pi 3 Model B is the third-generation Raspberry Pi. This powerful credit-card-sized single-board computer can be utilized for many applications. It supersedes the original Raspberry Pi Model B+ and the Raspberry Pi 2 Model B. The Raspberry Pi 3 Model B has a more powerful processor, 10x faster than the original Raspberry Pi. It also includes wireless LAN and Bluetooth connectivity, making it a good solution for powerful connected designs.

Here are the complete specs for the Pi 3:

• SoC: Broadcom BCM2837 (roughly 50% faster than the Pi 2)
• CPU: 1.2 GHz quad-core ARM Cortex-A53 (ARMv8 instruction set)
• GPU: Broadcom VideoCore IV @ 400 MHz
• Memory: 1 GB LPDDR2-900 SDRAM
• USB ports: 4
• Network: 10/100 Mbps Ethernet, 802.11n wireless LAN, Bluetooth 4.0

Figure 3.7 Raspberry Pi 3 Model B

• 40 GPIO pins
• Pin numbering systems:
  ➢ BCM
  ➢ BOARD
• All pins read and write at a 3.3 V logic level


Figure 3.8 Raspberry Pi GPIO Pinout Diagram

3.1.6 MPU 6050


We need two sensors: one for sensing the angle, using the gyroscope, and one for sensing motion, using the accelerometer. For this we selected the MPU-6050, since it can sense both. It provides very fast sensing of the angle, which is a key requirement for our project. Values taken from the MPU-6050 datasheet are attached (Table 3.3), which tell us about its features.

Table 3.3 MPU 6050 Parameters

3.1.7 ESP8266:

The ESP8266 is a chip with which manufacturers are making wirelessly networkable microcontroller modules. More specifically, the ESP8266 is a system-on-a-chip (SoC) with capabilities for 2.4 GHz Wi-Fi (802.11 b/g/n, supporting WPA/WPA2), general-purpose input/output (16 GPIO), Inter-Integrated Circuit (I²C), analog-to-digital conversion (10-bit ADC), Serial Peripheral Interface (SPI), I²S interfaces with DMA (sharing pins with GPIO), UART (on dedicated pins, plus a transmit-only UART that can be enabled on GPIO2), and pulse-width modulation (PWM). It employs a 32-bit RISC CPU based on the Tensilica Xtensa L106 running at 80 MHz (or overclocked to 160 MHz). It has a 64 KB boot ROM, 64 KB instruction RAM and 96 KB data RAM. External flash memory can be accessed through SPI.


Various vendors have consequently created a multitude of modules containing the


ESP8266 chip at their cores. Some of these modules have specific identifiers,
including monikers such as "Wi07c" and "ESP-01" through "ESP-13"; while other
modules might be ill-labeled and merely referred to by a general description — e.g.,
"ESP8266 Wireless Transceiver." ESP8266-based modules have demonstrated
themselves as a capable, low-cost, networkable foundation for facilitating end-point
IoT developments. The AI-Thinker modules are succinctly labeled ESP-01 through
ESP-13. NodeMCU boards extend upon the AI-Thinker modules.

We are using the ESP8266-12E for our project. It is shown in the figure below.

We use the ESP as a microcontroller with Wi-Fi capability. The ESP is programmed
through the Arduino IDE by the following process:
• First, install the ESP8266 board package in the Arduino IDE.
• Select the board being used, and also select the port of the computer/laptop to which the USB-to-serial converter is connected.
• The ESP has two modes:
  o Programming mode
  o Operating mode
• To upload code, switch to programming mode.
• After uploading, reset the module again to use it in operating mode.

3.2 Circuit Diagrams

Figure 3.9 Circuit Diagram/ Complete Schematics


3.2.1 Motor Driver circuit

Figure 3.10 Motor Driver schematics


3.2.1.1 Calculation For Motor Selection:

• Maximum weight of our robot after placing all components = 3 kg
• Radius of robot tyre = 4 cm = 0.04 m
• Number of tyres used = 2
• Weight per tyre = total weight / 2 = 3 kg / 2 = 1.5 kg
• Required torque per motor = (1.5 kg)(4 cm) = 6 kg·cm

3.2.2 Ultrasonic Sensors Interface Circuit with Raspberry Pi

Figure 3.11 Ultrasonic interfaced circuit

We interface an ultrasonic distance sensor with the Raspberry Pi, give a trigger pulse and
receive an echo pulse.
The ultrasonic distance sensor uses sonar to detect obstacles and to measure the
distance to them.
To determine the distance, the sensor measures the elapsed time between sending and
receiving the waves. The speed of sound in air is about 343 m/s.
We generate a trigger pulse of at least 10 µs from the Raspberry Pi and interface a voltage
divider to limit the echo signal received at the Pi to ≤ 3.3 V.

3.2.3 Calculations For Ultrasonic sensor

We design the voltage divider so that there is 3.3 V or less at the output, where Vin is the 5 V pulse received on the echo pin:

Vout = Vin × R2 / (R1 + R2)

• By supposing R1 we can calculate R2: with R1 = 1 kΩ and R2 = 2 kΩ, Vout = 5 V × 2/3 ≈ 3.3 V.
• Distance from object = 343 × elapsed time / 2 (in metres)
• Converting into cm:

Distance = 34300 × time / 2
Distance = 17150 × time
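
Putting the trigger pulse, echo timing and the 17150 × time conversion together, here is a minimal Python sketch of the measurement (the BCM pin numbers 23 and 24 are hypothetical choices; the ECHO pin is assumed to be fed through the voltage divider described above):

import time
import RPi.GPIO as GPIO

TRIG = 23  # hypothetical BCM pin driving the sensor's trigger input
ECHO = 24  # hypothetical BCM pin reading the divided-down echo output

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    GPIO.output(TRIG, True)          # at least 10 us trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:     # time how long the echo pin stays high
        end = time.time()
    return 17150 * (end - start)     # distance = 17150 x time, as derived above

print("%.1f cm" % distance_cm())
GPIO.cleanup()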


Chapter 4
4.1 ANGLE ESTIMATION AND BALANCING

4.1.1 ANGLE ESTIMATION:

To find the direction and angle of the tilt, we use an inertial sensor unit.
It comprises an accelerometer and a gyroscope, which read linear acceleration and
angular velocity.
The accelerometer gives an accurate reading averaged over a sufficient interval of time, but it
is highly susceptible to noise resulting from sudden jerking movements of the robot: since
the accelerometer measures linear acceleration, a sudden jerk throws off the sensor's
accuracy. The gyroscope measures angular velocity, which is then integrated to find the angle
of tilt. Over a small interval of time the gyroscope value is very accurate, but since the
gyroscope experiences drift, and integration compounds the error, after some time the
reading becomes unreliable. Thus, we require some way to combine these two values. This is
done with a complementary filter.
It is basically a high-pass filter and a low-pass filter combined, where the high-pass acts
on the gyroscope and the low-pass on the accelerometer. It makes use of the gyroscope for
short-term estimation and the accelerometer for the absolute reference.
This simple filter is easy to implement, experimentally tuned, and demands very little
processing power.

How does an accelerometer work?



Fig 4.1 Working of Accelerometer


An accelerometer (in this model) works on the principle of the piezoelectric effect. Imagine a
cuboidal box with a small ball inside it, as in the picture above. The walls of this
box are made of piezoelectric crystals. Whenever you tilt the box, the ball is forced
to move in the direction of the inclination due to gravity. The wall with which the ball
collides creates tiny piezoelectric currents. In total there are three pairs of opposite
walls in a cuboid, and each pair corresponds to an axis in 3D space: the X, Y and Z axes.
Depending on the current produced by the piezoelectric walls, we can determine the
direction of inclination and its magnitude.

How does a gyroscope work?

Figure 4.2 Gyroscope

Gyroscopes work on the principle of Coriolis acceleration. Imagine a fork-like structure in
constant back-and-forth motion, held in place by piezoelectric crystals. Whenever you tilt
this arrangement, the crystals experience a force in the direction of the inclination, caused
by the inertia of the moving fork. The crystals thus produce a current as a consequence of
the piezoelectric effect, and this current is amplified. The values are then refined by the
host microcontroller.


4.2 HOW WE PROGRAM THE MPU-6050 :-

Here is a short description of every unit of the MPU-6050 that helps in programming it. We briefly describe what DOF is, how many axes are supported by the accelerometer, and how many axes are supported by the gyroscope.

4.2.1 Degrees of freedom

The MPU-6050 is a 6 DOF (degrees of freedom) sensor, which means that it gives six values as output: three values from the accelerometer and three from the gyroscope.

4.2.2 3-Axis Accelerometer:

The accelerometer measures acceleration. It can be used to sense linear motion and vibration, and to infer orientation from the force of gravity. The accelerometer has full-scale ranges of ±2 g, ±4 g, ±8 g and ±16 g.

4.2.3 3-Axis Gyroscope:

The gyro measures angular rotation, which can be used to infer orientation. The gyro has full-scale ranges of ±250, ±500, ±1000 and ±2000 °/sec (dps).

4.2.4 DMP (Digital Motion Processor)

The MPU-6050 IMU contains a DMP, which fuses the accelerometer and gyroscope data
together to minimize the effects of the errors inherent in each sensor. The DMP
computes the results, can convert them to Euler angles, and can perform other
computations with the data as well.
The MPU-6050 operates from a VDD power supply voltage range of 2.375 V – 3.46 V.
Additionally, the MPU-6050 provides a VLOGIC reference pin in addition to VDD.

4.2.5 FIFO Buffer

The sensor contains a 1024-byte FIFO buffer. The MPU-6050 can be programmed to place
sensor values in this FIFO buffer, and the Arduino then reads the buffer.
The FIFO buffer is used together with the interrupt signal: if there is data in the FIFO
buffer, the MPU-6050 signals the Arduino with the interrupt.

4.3 I2C Communication Introduction:


The Inter-Integrated Circuit (I2C) protocol is a protocol intended to allow multiple
“slave” digital integrated circuits to communicate with one or more “master” chips. In
our case the master device is the Arduino UNO and the slave device is the MPU-6050. The
MPU-6050 supports the I2C serial interface only. I2C requires two lines for communication:

Serial Data Line (SDA)


Serial Clock Line (SCL)

The SCL line is the clock signal which synchronizes the data transfer between the
devices on the I2C bus, and it is generated by the master device. The SDA line carries the
data.
I2C can support a multi-master system, allowing more than one master to communicate
with all devices on the bus. Basic I2C communication uses transfers of 8 bits, or
bytes. Each I2C slave device has a 7-bit address: bits 1 to 7 carry the address
while bit 0 is used to signal reading or writing. If bit 0 is set to 1, the master
device will read from the slave device, and if bit 0 is set to 0, the master device will
write to the slave device.
In the normal state both lines, SCL and SDA, are high. The communication is initiated by
the master device: it generates the start condition, followed by the address of the slave
device. If bit 0 of the address byte is set to 0 the master writes to the
slave, and if bit 0 is set to 1 the master reads from the slave. At the end, the master
device generates the stop condition.
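
As a small illustration of how the address byte is formed (a minimal Python sketch; 0x68 is the MPU-6050's 7-bit address):

MPU_ADDR = 0x68                                 # 7-bit slave address of the MPU-6050

def address_byte(addr, read):
    # bits 7..1 carry the address, bit 0 is the read/write flag
    # (1 = master reads from the slave, 0 = master writes to it)
    return (addr << 1) | (1 if read else 0)

print(hex(address_byte(MPU_ADDR, read=False)))  # 0xd0 (write)
print(hex(address_byte(MPU_ADDR, read=True)))   # 0xd1 (read)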


Start Condition: The SDA line switches from a high voltage level to a low
voltage level before the SCL line switches from high to low.

Stop Condition: The SDA line switches from a low voltage level to a high voltage
level after the SCL line switches from low to high.

Address Frame: The address frame is a 7- or 10-bit sequence, unique to each
slave, that identifies the slave when the master wants to talk to it.

Read/Write Bit: A single bit at the end of the address frame informs the slave
whether the master wants to receive data from it or send data to it. The read/write
bit is at a low voltage level if the master wants to send data; if the
master is requesting data from the slave, the bit is at a high voltage level.

ACK/NACK Bit: To ensure that the data is received, each frame in a
message is followed by an acknowledge/no-acknowledge bit. If an address frame
or data frame was successfully received, an ACK bit is returned to the sender
by the receiving device.

Figure 5.3 SCL and SDA working

In the normal condition both the SCL and SDA lines are in the high state.
Communication is initiated by the master, which sends the start condition followed by the
address of the slave device. If the 0th bit of the address byte is 0, the master writes to
the slave device; otherwise the next byte is read from the slave. Once all the bytes have
been read or written, the master generates the stop condition. This is the valid indication
for the other devices that the communication has ended and the communication bus can
be used by the other connected devices.

4.3.1 HOW COMMUNICATION METHODOLOGY WORKS :-

1. The master sends the start condition to every connected slave device by
switching the SDA line from high to low voltage level before switching the SCL
line from high to low.

Figure 5.4 Communication 1

2. The master sends the address of the slave with which it wants to communicate, along
with the read/write bit.
3. Each slave compares the address sent by the master with its own address. If the
address matches, the slave returns an ACK bit by pulling the SDA line low for one bit;
if the address from the master does not match the slave's own address, the slave leaves
the SDA line high.

4. The master sends or receives the data frame.

5. An ACK bit is returned to the sending device in order to acknowledge a successful
receipt of each data frame.

6. To stop the data transmission, the master sends a stop condition to the
slave by switching SCL high before switching SDA high.

4.3.2 Basic Functions


The following basic commands are used to initiate the communication between the slave
and the master:


• The Wire.begin() function initiates the Wire library. We also need to initiate serial
communication, because we will use the Serial Monitor to show the data from the sensor.

• In loop(), we start with the Wire.beginTransmission() function, which begins the
transmission to the sensor, the 3-axis accelerometer in our case.

• With the Wire.write() function we ask for the data from the two registers of the axis.

• Wire.endTransmission() ends the transmission and transmits the data from the
registers.

• With the Wire.requestFrom() function we request the transmitted data, i.e. the two bytes
from the two registers.

• The Wire.available() function returns the number of bytes available for retrieval,
which we check against the number of requested bytes, 2 bytes in our case.

• With the Wire.read() function we read the bytes from the two registers of the X axis.
At the end we print the data to the Serial Monitor.

4.3.3 CONFIGURATION SETTINGS :-

Power Management Register:



Accelerometer Register:


Gyroscope Register:

4.3.4 Accelerometer and Gyroscope Data Conversion from Raw to Real (Euler Formula):

Accelerometer Data:

The configuration settings of the accelerometer and gyroscope were described earlier. Here
we explain how data is gathered from the accelerometer registers and how this data is
converted into a form that is useful and understandable for us, i.e. from raw bytes and bits
to a real angle. First, each value is read as an 8-bit byte. We know that the whole value for
ACX, ACY or ACZ is 16 bits, so we combine the two registers that are placed adjacent to each
other (whose locations were described earlier) into one 16-bit value. Then, according to the
selected sensitivity, we divide the whole value by the scale factor given in the datasheet.
Finally, we apply the Euler formula to get the exact value of the angle from the
accelerometer.
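
In this project the Arduino performs this conversion, but purely as an illustration, here is a minimal Python sketch of the same steps as if the MPU-6050 were read directly from the Raspberry Pi's I2C bus (it assumes the smbus2 package, address 0x68, and the default ±2 g range, whose datasheet scale factor is 16384 LSB/g):

import math
from smbus2 import SMBus

MPU_ADDR = 0x68
ACCEL_XOUT_H = 0x3B        # first of six accelerometer registers (high byte, then low)
ACCEL_SCALE = 16384.0      # LSB per g at the default +/-2 g full-scale range

def read_word(bus, reg):
    hi, lo = bus.read_i2c_block_data(MPU_ADDR, reg, 2)
    val = (hi << 8) | lo                          # combine the two adjacent 8-bit registers
    return val - 65536 if val > 32767 else val    # two's-complement sign correction

with SMBus(1) as bus:
    bus.write_byte_data(MPU_ADDR, 0x6B, 0)        # clear the sleep bit to wake the sensor
    ax = read_word(bus, ACCEL_XOUT_H) / ACCEL_SCALE
    ay = read_word(bus, ACCEL_XOUT_H + 2) / ACCEL_SCALE
    az = read_word(bus, ACCEL_XOUT_H + 4) / ACCEL_SCALE
    # tilt from the gravity vector (the "Euler formula" referred to above)
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    print("roll = %.2f deg" % roll)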


Gyroscope Data :
Code :-
Wire.beginTransmission(0x68);               // MPU-6050 slave address
Wire.write(0x43);                           // GYRO_XOUT_H: first gyroscope data register
Wire.endTransmission(false);                // repeated start: keep the bus for the read
Wire.requestFrom(0x68, 4, true);            // burst-read 4 bytes (X and Y, high byte first)

Gyr_rawX = Wire.read() << 8 | Wire.read();  // combine the two 8-bit registers into 16 bits
Gyr_rawY = Wire.read() << 8 | Wire.read();
Gyro_angle = Gyr_rawX / 32.8;               // 32.8 LSB per deg/s at the +/-1000 dps setting

We know that the MPU-6050 is the slave device and the Arduino is the master device. We
start by beginning the transmission with the address (0x68) of the slave device, which makes
the slave active. The next step is to write the register address from which we will start
getting our data: we start with register 0x43, which corresponds to GYRO_XOUT. Next, we
continue the I2C communication without ending it, because the X and Y gyroscope values
occupy four adjacent registers; we take these register values in a burst, from GYRO_XOUT
to GYRO_YOUT. In the MPU-6050 all registers are 8 bits wide, but we need the GYRO_XOUT,
GYRO_YOUT and GYRO_ZOUT values as 16-bit quantities, so we shift the 8-bit value from
the high register left by eight bits, OR it with the next 8-bit value, and save the result in a
new variable.

4.4 Application of the Complementary Filter:


Sensor Fusion:

To remove the high-frequency distortion in the accelerometer reading, it is passed
through a low-pass filter. The angular velocity obtained from the gyroscope is integrated to
obtain the angle, which is then passed through a high-pass filter to remove the low-frequency
distortions. The results obtained from the low-pass and high-pass filters are then
summed to find the estimated angle.

Total_angle = 0.98 * (Total_angle + Gyro_angle * elapsedTime) + 0.02 * Acceleration_angle;
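
A minimal Python sketch of the same blend, using the 0.98/0.02 weights of the line above (the gyro rate in °/s and accelerometer angle in degrees are assumed to come from the conversion steps described earlier):

ALPHA = 0.98   # weight of the integrated gyro estimate (the high-pass side)

def fuse_step(angle, gyro_rate, accel_angle, dt):
    # gyro integration gives the short-term estimate; the accelerometer
    # angle supplies the drift-free long-term reference
    return ALPHA * (angle + gyro_rate * dt) + (1 - ALPHA) * accel_angle

# example: one update of the estimate with a 10 ms loop time (values are made up)
angle = fuse_step(angle=1.5, gyro_rate=-20.0, accel_angle=1.2, dt=0.01)

This update is called once per control-loop iteration with the elapsed time dt, exactly as the Arduino line above does with elapsedTime.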


Chapter 5
Software Design
5.1 Image processing Algorithms

5.1.1 Algorithm for ball following :-


The algorithms we have used in our project are explained in two parts. The first part
explains the ball following test; the second part explains the sign detection algorithm.
This covers all the image processing algorithms.

BALL FOLLOWING

Fig 5.1 Flow Chart



BALL FOLLOWING TEST:


1. Capturing the video :-

For capturing video we use the cv2.VideoCapture() function. Its argument is the device
index, i.e. which camera we are using; we pass 0, -1 or 1. After capturing the video we
separate it into frames so that we can apply our techniques to each frame, and at the end
we release the camera using the release() method.

import cv2

cap = cv2.VideoCapture(0)      # device index 0: the first attached camera

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # ... per-frame processing goes here ...

cap.release()                  # free the camera when done

Fig 5.2 App View in making

2. Resizing the video :-

We need to resize our video so that our processing becomes faster. For that purpose we
change the width and height of the video using the cap.set(3, 320) and cap.set(4, 240)
commands. The resolution is 640 × 480 by default, and we change it to 320 × 240.

cap.set(3,320)
cap.set(4,240)


3. CONVERTING TO HSV :-

Fig 5.3 Conversion to HSV
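
The conversion step itself is a one-liner; a minimal sketch (continuing from the capture loop above, and assuming the hue/sat/val arrays used by the later snippets come from a split of the HSV image):

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # OpenCV frames are BGR, so convert BGR -> HSV
hue, sat, val = cv2.split(hsv)                 # separate the three components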

4. Creating a trackbar :-

We create trackbars so that we can find critical values for our hue, saturation and value
range components. For that purpose we use the cv2.createTrackbar() function. The first
argument is the trackbar name, the second argument is the window name, the third argument
is the default value, the fourth is the maximum value, and the fifth is the callback
function, which runs every time the trackbar value changes.

cv2.createTrackbar('hmin', 'HueComp',12,179,nothing)

5. GETTING TRACKBAR POSITION :-

cv2.getTrackbarPos(trackbarname, winname)

• trackbarname – Name of the trackbar.


• winname – Name of the window that is the parent of the trackbar

hmn = cv2.getTrackbarPos('hmin','HueComp')
hmx = cv2.getTrackbarPos('hmax','HueComp')

6. Apply thresholding :-

Thresholding means that if a pixel value is greater than some cutoff value, it is assigned
one value (say, white), and otherwise it is assigned some other value (say, black). For this
purpose the function cv2.threshold() can be used. The first argument passed to the threshold
function is the source image, which must first be converted to grayscale. The second
argument is the threshold value used to classify the pixel values. The third argument is
the maximum value, which is given to pixels whose value exceeds the threshold. OpenCV
gives us several options for thresholding:

• cv2.THRESH_BINARY
• cv2.THRESH_BINARY_INV
• cv2.THRESH_TRUNC
• cv2.THRESH_TOZERO
• cv2.THRESH_TOZERO_INV

Here, however, we have done the thresholding by a simpler technique: applying a bitwise
AND operator to the in-range masks of the hue, saturation and value components.


# Apply thresholding to each HSV component
hthresh = cv2.inRange(np.array(hue), np.array(hmn), np.array(hmx))
sthresh = cv2.inRange(np.array(sat), np.array(smn), np.array(smx))  # S bounds from its trackbars
vthresh = cv2.inRange(np.array(val), np.array(vmn), np.array(vmx))  # V bounds from its trackbars

# keep only the pixels that fall inside all three ranges
tracking = cv2.bitwise_and(hthresh, cv2.bitwise_and(sthresh, vthresh))

Fig 5.4 Applying thresholding

7. APPLYING MORPHOLOGICAL TRANSFORMATIONS :-

Morphological transformations are operations related to image shape, most often applied to
binary images. They need two inputs: one is our original image, the second is called the
structuring element, or kernel matrix, which decides the nature of the operation we are
going to apply to our binary image. Erosion and dilation are the two basic morphological
transformations; opening and closing are derived from them.

EROSION :-
The erosion operation erodes away the boundaries of the foreground object, i.e. it erodes
the thing which appears as the object; the rest is considered background. The foreground is
kept white so that it is treated as the object. The kernel matrix convolves with the image.
Because we supply a binary image, a pixel in the original image is either 1 or 0; after
erosion a pixel remains 1 only if all the pixels under the kernel are 1, otherwise it is
made zero.

erosion = cv2.erode(img,kernel,iterations = 1)
DILATION :-

Dilation increases the white region in the image, i.e. the size of the foreground object increases.

dilation = cv2.dilate(img,kernel,iterations = 1)
OPENING :-

For noise removal, erosion is followed by dilation. Erosion removes white noise, but it
also shrinks our object. So we dilate it afterwards: the noise is gone from the erosion
step, and the disturbed object size is corrected by dilation.

CLOSING :-

Closing is dilation followed by erosion. It is useful for removing noise that occurs inside
the foreground object: in closing, small black points on the object are removed.

closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

8. GAUSSIAN BLUR :-

The Gaussian blur technique is applied using a Gaussian kernel. It is done with the
function cv2.GaussianBlur(). We must specify the width and height of the kernel, which
should be positive and odd. We also specify the standard deviations in the X and Y
directions, sigmaX and sigmaY respectively. Gaussian filtering is highly effective at
removing Gaussian noise from the image.


blur = cv2.GaussianBlur(img,(5,5),0)

Fig 5.5 Applying Gaussian Blur

9. DRAWING BOUNDARIES :-

To draw a circle, we need its center coordinates and radius. The first argument of
cv2.circle() is the image on which we are going to draw the circle, the second argument is
the center coordinates, the third argument is the radius of the circle, the fourth argument
gives the color of the drawn circle, and the last argument gives the thickness of the circle
(-1 fills it).

img = cv2.circle(img,(447,63), 63, (0,0,255), -1)

Fig 5.6 drawing boundary



5.1.2 Algorithm for sign detection

Fig 5.7 Sign Detection Flow chart

1. Capturing and resizing the video :-

We use the cv2.VideoCapture() function for capturing video; its argument for taking
video on the Raspberry Pi is 0, because the Pi camera is at port 0. After capturing, we
separate the frames in order to proceed further.

cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

We need to resize our video so that our processing becomes faster and to match the
resolution of our Raspberry Pi. For that purpose we change the width and height of the
video: argument 3 accesses the width and argument 4 accesses the height, and we then set
the resolution of our own choice.

cap.set(3, 320)
cap.set(4, 240)


2. Color conversion and smoothing :-

The next step is to find the edges in our images. Finding edges requires that we first
convert the image to grayscale, then apply some smoothing technique, and then apply Canny
edge detection. In our code we apply cv2.cvtColor() for the conversion to gray, and then we
apply a bilateral filter using the built-in cv2.bilateralFilter() function. It is highly
effective at noise removal while keeping edges sharp.
A bilateral filter combines a Gaussian filter of space with a Gaussian function of pixel
intensity difference. What is a Gaussian filter? A Gaussian filter takes the neighbourhood
around a pixel and finds its Gaussian-weighted average; it is a function of space alone,
that is, only nearby pixels are considered while filtering. The Gaussian function of
intensity difference makes sure only those pixels with intensity similar to the central
pixel are considered for blurring. So the filter preserves edges, since pixels at edges
have large intensity variation.

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)

3. Finding Edges :-
We use Canny edge detection to find the edges in our image. The cv2.Canny() function is
used for edge detection; it takes as input the smoothed image produced in the previous step,
and the next two arguments are the minimum and maximum threshold values for the edges it
has to find in the image. A further argument is aperture_size, the size of the Sobel kernel
used for finding image gradients; in other words, it sets how much change in intensity is
considered a gradient.

edged = cv2.Canny(gray, 30, 200)

4. FINDING CONTOURS :-

Now we have to find where the sign is located in the whole image. In order to find the
sign portion in our edged image, we need to find the contours in the image. A contour is
the outline of an object.
To find contours in an image, we use the cv2.findContours function. Three parameters are
required. The first is the image; we pass in our edged image. The second parameter,
cv2.RETR_TREE, tells OpenCV to compute the relationship between contours.

We could have also used the cv2.RETR_LIST option. Finally, we tell OpenCV to compress
the contours to save space using cv2.CHAIN_APPROX_SIMPLE. In return, the
cv2.findContours function gives us a list of contours.

5. Sorting the Contours :-
Now we have contours, but we do not yet know whether a contour surrounds just the sign
portion or some other part of the image. The first thing we should do is reduce the number
of contours we need to process. We know the area of our sign is quite large relative to the
rest of the regions in the image, so we sort our contours from largest to smallest by
calculating the area of each contour using cv2.contourArea, and keep only the 15 largest
contours.

image, cnts, hierarchy = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:15]

6. Sorting Out Rectangular Contours :-

Now we have the 15 largest contours in the image. Next we approximate each contour
using cv2.arcLength and cv2.approxPolyDP. These methods are used to approximate
the polygonal curve of a contour. To approximate a contour, we need to supply our
level of approximation precision; in this case, we use 2% of the perimeter of the
contour. The precision is an important value. Our sign is a rectangle, which has four
vertices, so among the 15 largest contours we retain the ones whose approximated
polygon has four vertices and discard the rest, as the sketch below shows.
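
A minimal sketch of this filtering loop (assuming cnts from the previous step; screenCnt keeps the first four-vertex contour found):

screenCnt = None
for c in cnts:
    peri = cv2.arcLength(c, True)                     # perimeter of the closed contour
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)   # approximate at 2% precision
    if len(approx) == 4:                              # four vertices -> rectangle candidate
        screenCnt = approx
        break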

7. Warp Perspective Method :-

So far we have four points, but they are not in any arranged form to be processed; that is,
we do not know which are the top-most corners of the rectangle and which are the
bottom-most. (Knowing the top-most and bottom-most corners is important because, if the
image taken from the camera is not in a frame parallel to the camera but at some angle to
it, applying this method helps us take that rectangle out in a flat 2D format.)

Fig 5.8 Applying Warp perspective

8. Order_Points Function :-

We defined an order_points(pts) function for this purpose. This function takes pts (a list
of four points specifying the (x, y) coordinates of each corner of the rectangle), which we
calculated earlier using the contour.
The actual ordering itself can differ, as long as it is made consistently by the algorithm.
We separate the points as top-left, top-right, bottom-right and bottom-left.

Memory for the four ordered points is assigned with:



rect = np.zeros((4, 2), dtype="float32")

Then the top-left point is found, which has the smallest x + y sum, and the bottom-right
point, which has the largest x + y sum. Next we have to find the top-right and bottom-left
points, which is done by taking the difference y − x for each point using the np.diff
function.

The coordinates associated with the smallest difference will be the top-right point,
whereas the coordinates with the largest difference will be the bottom-left point, as the
sketch below shows.
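
Putting the description together, a minimal sketch of the order_points function (assuming pts is a NumPy array of shape (4, 2) and numpy is imported as np; note that np.diff on an (x, y) pair computes y − x):

import numpy as np

def order_points(pts):
    # returns the points ordered: top-left, top-right, bottom-right, bottom-left
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]      # top-left: smallest x + y sum
    rect[2] = pts[np.argmax(s)]      # bottom-right: largest x + y sum
    diff = np.diff(pts, axis=1)      # y - x for each point
    rect[1] = pts[np.argmin(diff)]   # top-right: smallest difference
    rect[3] = pts[np.argmax(diff)]   # bottom-left: largest difference
    return rect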

9. Four Point_Transform Function:-

We now have four points in order, i.e. we know which point is top-left, top-right,
bottom-left and bottom-right. What we do in this function is calculate the maximum width
and maximum height, which we can do using the distance formula.

We calculate the top width and bottom width and apply the max() function to find the
maximum width of the two; similarly we calculate the left height and right height and
check which of the two is the maximum, using the max() function.

Once we have the maximum height and maximum width, we define the new image
dimensions. The first entry in the list is (0, 0), indicating the top-left corner. The
second entry is (maxWidth - 1, 0), which corresponds to the top-right corner. Then we
have (maxWidth - 1, maxHeight - 1), which is the bottom-right corner. Finally, we
have (0, maxHeight - 1), which is the bottom-left corner. This is how we define the
dimensions of the new image.

To extract the image inside the rectangular shape we use the
cv2.getPerspectiveTransform function. This function requires two arguments: rect,
which is the list of 4 points in the original image, and dst, which is our list of
transformed points. The cv2.getPerspectiveTransform function returns M, the
actual transformation matrix.


Then we apply the cv2.warpPerspective function. We pass in the image, our transform
matrix M, and the width and height of our output image. The output of
cv2.warpPerspective is our warped image.
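
Putting steps 8 and 9 together, a condensed sketch of four_point_transform from
Appendix A (assuming cv2 and numpy are imported, and order_points is defined as above):

    def four_point_transform(image, pts):
        rect = order_points(pts)
        (tl, tr, br, bl) = rect
        # output width/height are the longest edges (distance formula)
        widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
        widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
        maxWidth = max(int(widthA), int(widthB))
        heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
        heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
        maxHeight = max(int(heightA), int(heightB))
        # destination corners of the new top-down image
        dst = np.array([
            [0, 0],
            [maxWidth - 1, 0],
            [maxWidth - 1, maxHeight - 1],
            [0, maxHeight - 1]], dtype="float32")
        M = cv2.getPerspectiveTransform(rect, dst)
        return cv2.warpPerspective(image, M, (maxWidth, maxHeight))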

Fig 5.9 Four Point Transform

10. Thresholding Of Warped Image :-

Since the warped image is the region of interest taken from our original image, it may
contain noise and other artifacts that we do not want, so we apply a threshold to
convert the warped image to pure black and white, i.e. every pixel becomes either zero
or one. We set the threshold at 80: on a scale of 0 to 255 this is closer to 0, because
we are mainly concerned with the black portion, which is our region of interest. After
thresholding we have a clean image, which we then resize so that it becomes equal in
size to our video frames.
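
In code (taken from the listing in Appendix A), the threshold at 80 and the resize to
the 320x240 frame size are:

    # pixels below 80 become 0 (black), the rest 255 (white)
    ret, thresh1 = cv2.threshold(warped, 80, 255, cv2.THRESH_BINARY)
    # resize so the result matches the video frame size
    re = cv2.resize(thresh1, (320, 240))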

11. Image Comparison :-
Now we have our final converted video frames, on which we can apply the comparison
operation.
For this purpose we first read our reference images using the cv2.imread() function,
then resize them to the size of our video frames. We then use the bitwise XOR function
to compare the reference images with the frames taken from the video. When the images
are XORed, the portions that are the same in both images give a black result, i.e. 0,
and the portions that do not match give a white result, i.e. a nonzero answer.
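
A minimal sketch of this comparison, following Appendix A (gray1 is the thresholded,
resized warped frame; the arrow file names are the ones used in our code):

    img1 = cv2.imread('arrowL.jpg', 0)             # left-arrow reference, grayscale
    img3 = cv2.imread('arrowR.jpg', 0)             # right-arrow reference, grayscale
    resized_image1 = cv2.resize(img1, (320, 240))
    resized_image3 = cv2.resize(img3, (320, 240))
    # matching pixels XOR to 0 (black); mismatched pixels come out nonzero (white)
    imgxor0 = cv2.bitwise_xor(gray1, resized_image1)
    imgxor2 = cv2.bitwise_xor(gray1, resized_image3)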

Fig 5.10 Image Comparison

Fig 5.11 Truth table of the XOR gate

12. Check On Counting Of Nonzero Bits :-

After the bitwise XOR we have a result image whose pixels are either zero or one, where
the unmatched portion of the image is one. So, by counting the ones in the image with
cv2.countNonZero, we can decide whether the sign is matched or not.
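
The decision step, with the empirically tuned mismatch thresholds from Appendix A:

    nzCount0 = cv2.countNonZero(imgxor0)   # mismatched pixels against the left arrow
    nzCount2 = cv2.countNonZero(imgxor2)   # mismatched pixels against the right arrow
    if nzCount0 <= 55600:                  # few mismatches -> left sign detected
        decision = 'turn left'
    elif nzCount2 <= 57500:                # few mismatches -> right sign detected
        decision = 'turn right'
    else:
        decision = 'no match found'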


5.1.3 SERVER COMMUNICATION ALGORITHMS :-

WIFI Interface circuit (NODE MCU WEB CONNECTIVITY):-

Flow diagram:-

Block diagram:-
The basic working of our project is shown in the block diagram below (Arduino, motor
driver, and the surrounding units).

Working Explanation:

Working Explanation of the Server on Raspberry Pi:-

We have to establish a connection between our image processing part, which is the
Raspberry Pi, and the Arduino installed on our robot, so we establish the connection
with a WiFi module on the Arduino side and a server installed on the Raspberry Pi.
The process starts with the installation of some web server language libraries on the
Raspberry Pi. The pre-installations we performed for the web server created on the
Raspberry Pi are as follows:
• sudo apt-get update
• sudo bash
• apt-get install apache2 apache2-doc apache2-utils
• apt-get install libapache2-mod-php5 php5 php-pear php5-xcache
• apt-get install php5-mysql
• apt-get install mysql-server mysql-client

After these pre-installations on the Raspberry Pi, we have to test whether the server
is working: type the Raspberry Pi's IP address into a web browser. If it shows the
default server page, it is working; if not, the pre-installations have to be done
again.

If it is working, then we are on our way to making our own web server, which is done in
the PHP language. Our purpose is simply to read some pins on the Raspberry Pi and
continuously refresh the page so that it stays current over time.
The commands we use to read the pins are:


$pinstatus=shell_exec("gpio -g read 17");

This command reads the pin status of pin no. 17. The next step after reading the pin
status is to display it on the server page, which is done with the command:
echo $pinstatus;

The job of making the server is almost done; the last thing we have to do is
continuously refresh the web server page, which is done with:
header("Refresh: 0;");

The value 0 means that we refresh our page after every zero seconds, which means
continuously. (Note that header() must be called before any output is echoed.)
Here is the index file, which we edited; it is located in the /var/www directory on the
Raspberry Pi.

This code is written in the PHP language and saved in a file named index.php. The file
can be accessed using the Raspberry Pi's IP address in a browser.
These commands are used to show the pin status of the Raspberry Pi GPIO.
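
A sketch of that index.php, reconstructed from the commands quoted above; the second
pin number (27) is only illustrative, since the combined two-digit status ("01", "10",
"11") is what the robot's client code expects:

    <?php
    header("Refresh: 0;");                   // refresh the page continuously
    $pin1 = shell_exec("gpio -g read 17");   // read BCM pin 17 with the gpio utility
    $pin2 = shell_exec("gpio -g read 27");   // second direction pin (illustrative)
    echo trim($pin1) . trim($pin2);          // publish e.g. "01", "10" or "11"
    ?>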

On the robot end an ESP8266-12E, acting as the client, reads the status of the
Raspberry Pi GPIO pins from the web server. After reading the pin status, the ESP
processes the data and drives its own GPIO pins to control the direction of the robot.

Working Explanation of the ESP8266:-

The client code for the NodeMCU can be found in the Arduino IDE examples
(BasicHTTPClient); it shows how to connect to a web server. The things we have to edit
in that code are explained below in detail:

WiFiMulti.addAP("Mujtaba", "12345678");

This is the information for the access point through which we reach our server: we have
to provide the SSID, i.e. the name of our router, and the password of our router.

After this we wait for the WiFi connection to be established, and then begin our
connection with:
http.begin("http://192.168.0.2/");

This is the IP address of our web server, which we earlier set on our Raspberry Pi.
After successful communication with the Raspberry Pi we have to get the data from it:

httpCode > 0

This condition is satisfied only when the HTTP header has been sent and the response
header has been handled.

if(httpCode == HTTP_CODE_OK) {
String payload = http.getString();

After a successful connection we get the data from the server with this http.getString
command and save the string in a string variable for later use and conditioning. We
then convert the string received from the server into a form that is useful to us,
apply conditions on it, and turn our GPIO pins on and off accordingly.
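
Condensed from the full client code in Appendix A, the conditioning on the received
string looks like this:

    String payload = http.getString();   // raw page body from the server
    payload.replace("\n", "");           // strip newline(s) so only the digits remain
    if (payload == "01") {
      digitalWrite(d2, LOW);
      digitalWrite(d1, HIGH);            // signal "move forward" to the Arduino
    } else if (payload == "10") {
      digitalWrite(d1, LOW);
      digitalWrite(d2, HIGH);            // "move right"
    } else if (payload == "11") {
      digitalWrite(d1, HIGH);
      digitalWrite(d2, HIGH);            // "move left"
    } else {
      digitalWrite(d1, LOW);
      digitalWrite(d2, LOW);             // hold the balance position
    }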

One of the problems in turning the GPIO pins on and off is that the NodeMCU pin labels
(D0, D1, ...) differ from the ESP8266 GPIO numbering, and we faced some difficulty
locating the pins by referring to the pin diagram. The mapping used in our code
(Appendix A) is D0 = GPIO16, D1 = GPIO5, D2 = GPIO4, D3 = GPIO0, D4 = GPIO2,
D5 = GPIO14, D6 = GPIO12, D7 = GPIO13 and D8 = GPIO15.


5.2 Software Used

5.2.1 ARDUINO IDE


Arduino is both an open-source software library and an open-source breakout board for
the popular AVR microcontrollers. The Arduino IDE (Integrated Development Environment)
is the program used to write code, and comes in the form of a downloadable file on the
Arduino website. The Arduino board is the physical board that stores and performs the
code uploaded to it. Both the software package and the board are referred to as
"Arduino."

5.2.2 PyCharm
PyCharm is an Integrated Development Environment (IDE) used in computer programming,
specifically for the Python language. It provides code analysis, a graphical debugger,
an integrated unit tester, integration with version control systems (VCSes), and
support for web development with Django.

5.2.3 Proteus 8 Pro


It is used for simulating circuits. Proteus 8 Professional is a software package that
includes models of all the controllers, ICs, power supplies, etc. All the codes were
first verified in Proteus and then underwent hardware testing.

5.2.4 Virtual Network Computing (VNC)


Virtual network computing (VNC) is a type of remote-control software that makes it
possible to control another computer over a network connection. Keystrokes and mouse
clicks are transmitted from one computer to another, allowing technical support staff
to manage a desktop, server, or other networked device without being in the same
physical location.
VNC works on a client/server model: a VNC viewer (or client) is installed on the local
computer and connects to the server component, which must be installed on the remote
computer.


Chapter 6
Results and Discussion

6.1 Operation and working:


To operate the project, the following steps should be followed:
• Connect the battery to the robot.
• Show a sign; the camera captures it and the Raspberry Pi performs the image processing.
• The direction is sent by the Raspberry Pi to the server.
• The data on the server is read by the NodeMCU (ESP8266 module).
• According to the data present on the server, the NodeMCU sets its specific pin high.
• The pin logic on the NodeMCU is read by the microcontroller (Arduino), which turns
on the motors accordingly.
• The MPU6050 calculates the angle of tilt and sends it to the microcontroller, which
then drives the motors correspondingly to stabilize the robot.
• After reaching the destination, the robot stops itself.

6.2 Problems Faced:


The following problems were faced during the making of this project:
1. Motor Selection
The motor we first chose had specifications suited to linear motion, but our model
required rotational performance, so a motor with both high rpm and high torque was used
instead.
2. PID Tuning
The robot was jerky and the camera image was blurred, so detailed and careful PID
tuning was required.
3. Image Processing
• Making algorithms for extracting the region of interest.
• Finding the cut-point for differentiating the images for the left/right indications.
• Using I2C and serial communication at the same time.

Chapter 7
Conclusion

This project was started with the problem statement of how to send a vehicle from one
place to another while it stabilizes itself through a self-balancing technique. This
problem was proposed to be resolved using a vehicle called the 'Smart Image Processing
Robot', based on the concept of controlling the vehicle by displaying a set of symbols
to navigate it to its destination. The underlying concept was to move the vehicle
autonomously from one place to another while performing image processing. Furthermore,
real-time image processing was also performed in the project. One of the aims was to
develop a low-cost autonomous vehicle for surveillance. All these aims and objectives
were successfully achieved by the end of the project.


References:-

1. https://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html
2. http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_canny/py_canny.html
3. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
4. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
5. https://www.pyimagesearch.com/2014/04/21/building-pokedex-python-finding-game-boy-screen-step-4-6/
6. https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
7. http://roboticssamy.blogspot.pt/
8. http://www.robotshop.com/letsmakerobots/rs4-self-balancing-raspberry-pi-image-processing-robot
9. http://socialledge.com/sjsu/index.php/S15:_Self-Balancing_Robot

APPENDIX A :- CODES
BALANCING CODE :-

#include <Wire.h>

int16_t Acc_rawX, Acc_rawY, Acc_rawZ,Gyr_rawX, Gyr_rawY, Gyr_rawZ;

#define in_1a 8
#define in_1b 9
#define in_2a 6
#define in_2b 11
#define en1 2
#define en2 3
#define en 51
#define d0 31
#define d1 33

float Acceleration_angle;
float Gyro_angle;
float Total_angle;

float elapsedTime, time, timePrev;


int i;
float rad_to_deg = 180/3.141592654;

float PID, pwmLeft, pwmRight, error1, previous_error1;


float pid_p=0.0;
float pid_i=0.0;
float pid_d=0.0;

/////////////////PID CONSTANTS/////////////////
double kp=18.0;//3.55//16//good p=18// kazim motor = 8
double ki=2.5;//.000545; //0.01;//0.003//1.7//.8//0.5 // k motor=0.4//2.5
double kd=1.0;//1.0427; //2.05;//2.05//1.4//.7//0.3 k motor =0.4//1.5
///////////////////////////////////////////////

int initial_val=50; //initial value to the motors at which they start

float desired_angle =2.00; //equilibrium value of angle

float error,previous_error; // for diff btw real and desired


float acc_error=0,gyro_error=0;
float Gyro_raw_error_x=0;
float Acc_angle_error_x=0;

void setup() {
  TCCR3B = TCCR3B & B11111000 | B00000010;
  Wire.begin();                    // begin the wire communication
  TWBR = 12;                       // set the I2C clock speed to 400kHz
  Wire.beginTransmission(0x68);    // slave address (for MPU6050)
  Wire.write(0x6B);                // address of the register to be written or read
  Wire.write(0);                   // writing 0 resets register 0x6B
  Wire.endTransmission(true);      // end transmission with the MPU

  Wire.beginTransmission(0x68);    // start communication with the address
  Wire.write(0x1C);                // accelerometer configuration register
  Wire.write(0x10);                // set the register bits as 00010000 (+/- 8g full scale range)
  Wire.endTransmission(true);      // 8g selected

  Wire.beginTransmission(0x68);    // slave address
  Wire.write(0x1B);                // gyro configuration register
  Wire.write(0x10);                // set the register bits as 00010000 (1000 degrees per second full scale range)
  Wire.endTransmission(true);      // end the transmission with the gyro

  Serial.begin(115200);            // serial monitor baud rate

  pinMode(in_1a,OUTPUT);           // logic pins for the motors
  pinMode(in_1b,OUTPUT);
  pinMode(in_2a,OUTPUT);
  pinMode(in_2b,OUTPUT);
  pinMode(en1,OUTPUT);
  pinMode(en2,OUTPUT);
  pinMode(en,OUTPUT);
  pinMode(d0,INPUT);
  pinMode(d1,INPUT);

  time = millis();                 // start counting time in milliseconds

  //attachInterrupt(digitalPinToInterrupt(2),forward,HIGH);
  //attachInterrupt(digitalPinToInterrupt(3),reverse,HIGH);
  /*
  if(gyro_error==0)
  {
    for(int i=0; i<1000; i++)
    {
      Wire.beginTransmission(0x68);        // begin, send the slave address (in this case 0x68)
      Wire.write(0x43);                    // first address of the gyro data
      Wire.endTransmission(false);
      Wire.requestFrom(0x68,4,true);       // we ask for just 4 registers

      Gyr_rawX=Wire.read()<<8|Wire.read(); // once again we shift and sum
      Gyr_rawY=Wire.read()<<8|Wire.read();

      Gyro_raw_error_x = Gyro_raw_error_x + (Gyr_rawX/32.8);
      //Gyro_raw_error_y = Gyro_raw_error_y + (Gyr_rawY/32.8);

      if(i==999)
      {
        Gyro_raw_error_x = Gyro_raw_error_x/1000;
        //Gyro_raw_error_y = Gyro_raw_error_y/1000;
        gyro_error=1;
      }
    }
  }

  if(acc_error==0)
  {
    for(int a=0; a<1000; a++)
    {
      Wire.beginTransmission(0x68);
      Wire.write(0x3B);                    // ask for the 0x3B register - corresponds to AcX
      Wire.endTransmission(false);
      Wire.requestFrom(0x68,6,true);

      Acc_rawX=(Wire.read()<<8|Wire.read())/4096.0;   // each value needs two registers
      Acc_rawY=(Wire.read()<<8|Wire.read())/4096.0;
      Acc_rawZ=(Wire.read()<<8|Wire.read())/4096.0;

      Acc_angle_error_x = Acc_angle_error_x + ((atan((Acc_rawY)/sqrt(pow((Acc_rawX),2) + pow((Acc_rawZ),2)))*rad_to_deg));
      //Acc_angle_error_y = Acc_angle_error_y + ((atan(-1*(Acc_rawX)/sqrt(pow((Acc_rawY),2) + pow((Acc_rawZ),2)))*rad_to_deg));

      if(a==999)
      {
        Acc_angle_error_x = Acc_angle_error_x/1000;
        //Acc_angle_error_y = Acc_angle_error_y/1000;
        acc_error=1;
      }
    }
  }
  */
} // end of setup()

void loop() {

  int s0 = digitalRead(d0);        // direction command bits from the NodeMCU
  int s1 = digitalRead(d1);
  digitalWrite(en,HIGH);
  timePrev = time;                 // keep a record of the time of the previous loop
  time = millis();                 // built-in function to record time
  elapsedTime = (time - timePrev) / 1000;   // conversion of time from milliseconds to seconds
  //Serial.println(millis());

  /* The timeStep is the time that elapsed since the previous loop.
   * This is the value that we will use in the formulas as "elapsedTime"
   * in seconds. We work in ms so we have to divide the value by 1000
   * to obtain seconds. */

  Wire.beginTransmission(0x68);    // MPU6050 slave device address
  Wire.write(0x3B);                // address of register AcX
  Wire.endTransmission(false);
  Wire.requestFrom(0x68,6,true);   // Ax, Ay, Az are 16 bits each, so we request 6 registers in a burst

  Acc_rawX=Wire.read()<<8|Wire.read();   // take the value of the first register at 0x3B, shift it
                                         // left by 8, then OR it with the next register so that the
                                         // complete 16 bits are saved in a single variable
  Acc_rawY=Wire.read()<<8|Wire.read();   // similar
  Acc_rawZ=Wire.read()<<8|Wire.read();   // similar


  Acceleration_angle = atan((Acc_rawY/4096.0)/sqrt(pow((Acc_rawX/4096.0),2) + pow((Acc_rawZ/4096.0),2)))*rad_to_deg;
  // 8g full scale is selected for the accelerometer, so each value is divided by 4096.0;
  // atan applies the tilt-angle formula and rad_to_deg converts from radians to degrees

  Wire.beginTransmission(0x68);    // slave address
  Wire.write(0x43);                // first address of the gyro data
  Wire.endTransmission(false);     // read request
  Wire.requestFrom(0x68,4,true);   // read the values of 4 registers in a burst

  Gyr_rawX=Wire.read()<<8|Wire.read();   // the gyroscope value is also 16 bits, so read 8 bits,
                                         // shift left, and read the next 8 bits
  Gyr_rawY=Wire.read()<<8|Wire.read();   // similar

  Gyro_angle = Gyr_rawX/32.8 - (Gyro_raw_error_x);   // conversion of the raw value
  Gyro_angle = Gyro_angle*elapsedTime;

  /*---X axis angle---*/
  Total_angle = 0.98*(Total_angle + Gyro_angle) + 0.02*Acceleration_angle;   // application of the complementary filter
  //Total_angle=-Total_angle;

  Serial.print("ANGLE= ");
  Serial.print(Total_angle);       // the total angle is taken as just the one angle about the x-axis
  Serial.print("\t");

  if((s0==HIGH && s1==HIGH) || (s0==HIGH && s1==LOW) || (s0==LOW && s1==HIGH))
  {
    desired_angle=4.00;
  }
  else
  {
    desired_angle=1.00;
  }

  error = Total_angle - desired_angle;   // with the desired angle at 0 the error would just be the total angle

  /*///////////////////////////P I D///////////////////////////////////*/
  /*
  if(Total_angle>=-0.5 && Total_angle<0)
  { error = 0;
  }
  else if(Total_angle<=0.5 && Total_angle>=0)
  {
    error = 0;
  }
  else
  { error = Total_angle - desired_angle;
  }
  */

  pid_p = kp*error;                // Kp is directly linked with the error, so pid_p is a direct multiple of the error
  if(pid_p>200)pid_p=200;
  if(pid_p<-200)pid_p=-200;        // limit the proportional term so the value cannot become very large

  if(error > -10 && error < 10)    // the integral term only fine-tunes the error, so it is applied in a small range
  {
    pid_i = pid_i+(ki*error);      // summation of the previous error values with the present value
  }
  if(pid_i>200)pid_i=200;
  if(pid_i<-200)pid_i=-200;        // limit the total integral value so it cannot become very large

  /*
  As we know, the derivative term is the amount of error produced in a certain amount
  of time, divided by that time. For that we use a variable called previous_error:
  we subtract that value from the actual error, divide by the elapsed time,
  and finally multiply the result by the derivative constant. */

  pid_d = kd*((error - previous_error)/elapsedTime);   // the derivative term follows the speed of the error,
                                                       // i.e. how fast the error is changing
  if(pid_d>200)pid_d=200;
  if(pid_d<-200)pid_d=-200;
  /* The final PID value is the sum of these 3 parts */

  PID = pid_p + pid_i + pid_d;     // the total PID is the sum of the P, I and D terms

Serial.print("error= ");
Serial.print(error);
Serial.print("\t");
Serial.print("PID= ");
Serial.print(PID);

  if(PID < -200)
  {
    PID=-200;   // limit the total PID so that, summed with the initial value, the PWM does not exceed 255
  }
  if(PID > 200)
  {
    PID=200;
  }

  if (PID >= 0)
  {
    pwmLeft = initial_val + PID;   // the PID plus the initial motor value gives the PWM applied to the motors
    pwmRight = initial_val + PID;
  }

  if (PID < 0)
  {
    pwmLeft = initial_val - PID;
    pwmRight = initial_val - PID;
  }


  if(pwmRight > 255)
  {
    pwmRight=255;
  }

  if(pwmLeft > 255)
  {
    pwmLeft=255;
  }

  if(s0==HIGH && s1==HIGH)         // adjust the wheel PWMs according to the direction bits from the NodeMCU
  {
    pwmRight=pwmLeft+20;
    pwmLeft=pwmLeft-20;
  }
  else if(s0==HIGH && s1==LOW)
  {
    pwmLeft=pwmRight+20;
    pwmRight=pwmRight-20;
  }
  else if(s0==LOW && s1==HIGH)
  {
    pwmRight=pwmRight;
    pwmLeft=pwmLeft;
  }
  else
  {
    pwmRight=pwmRight;
    pwmLeft=pwmLeft;
  }

Serial.print("\t");
Serial.print(pwmRight);
Serial.print("\t");
Serial.println(pwmLeft);

  if(PID <=0 && PID >= -200)       // if the robot is falling to one side, it moves towards the falling side
  {
    digitalWrite(in_1a,LOW);
    digitalWrite(in_1b,HIGH);
    digitalWrite(in_2a,LOW);
    digitalWrite(in_2b,HIGH);
    analogWrite(en1,pwmRight);
    analogWrite(en2,pwmLeft);
  }
  if(PID >=0 && PID <= 200)        // if the robot is falling to the other side, it moves towards that side
  {
    digitalWrite(in_1a,HIGH);
    digitalWrite(in_1b,LOW);
    digitalWrite(in_2a,HIGH);
    digitalWrite(in_2b,LOW);
    analogWrite(en1,pwmRight);
    analogWrite(en2,pwmLeft);
  }

previous_error = error; //value used in derivative error


}//end of loop

SERVER COMMUNICATION CODE :-

int d0=16;
int d1=5;
int d2=4;
int d3=0;
int d4=2;
int d5=14;
int d6=12;
int d7=13;
int d8=15;
/**
* BasicHTTPClient.ino
*
* Created on: 24.05.2015
*
*/

#include <Arduino.h>

#include <ESP8266WiFi.h>
#include <ESP8266WiFiMulti.h>
#include <ESP8266HTTPClient.h>
#define USE_SERIAL Serial

ESP8266WiFiMulti WiFiMulti;

void setup() {
pinMode(d1, OUTPUT);
pinMode(d2, OUTPUT);
pinMode(d3, OUTPUT);
pinMode(d4, OUTPUT);
pinMode(d5, OUTPUT);
USE_SERIAL.begin(115200);
// USE_SERIAL.setDebugOutput(true);

USE_SERIAL.println();
USE_SERIAL.println();
USE_SERIAL.println();

for(uint8_t t = 4; t > 0; t--) {


USE_SERIAL.printf("[SETUP] WAIT %d...\n", t);
USE_SERIAL.flush();
delay(1000);
}

  WiFiMulti.addAP("Mujtaba_03030103828", "12345678");
} // end of setup()

void loop() {
// wait for WiFi connection
if((WiFiMulti.run() == WL_CONNECTED))
{

HTTPClient http;

USE_SERIAL.print("[HTTP] begin...\n");
    // configure target server and URL
    //http.begin("https://192.168.1.12/test.html", "7a 9c f4 db 40 d3 62 5a 6e 21 bc 5c cc 66 c8 3e a1 45 59 38"); //HTTPS
    http.begin("http://192.168.0.2/"); //HTTP

USE_SERIAL.print("[HTTP] GET...\n");
// start connection and send HTTP header
int httpCode = http.GET();

// httpCode will be negative on error


    if(httpCode > 0) {
      // HTTP header has been sent and the server response header has been handled
      USE_SERIAL.printf("[HTTP] GET... code: %d\n", httpCode);

// file found at server


if(httpCode == HTTP_CODE_OK) {
String payload = http.getString();
USE_SERIAL.println(payload);
//payload.remove(4, 5);
//USE_SERIAL.println(payload);
//payload.remove(0, 3);
// USE_SERIAL.println(payload.substring(4,5));
payload.replace("\n","");
USE_SERIAL.println(payload);

        if(payload=="01")
        {
          digitalWrite(d2, LOW);
          digitalWrite(d1, HIGH);   // turn the pin on (HIGH is the voltage level)
          //delay(1000);            // wait for a second
          //digitalWrite(d1, LOW);  // turn the pin off by making the voltage LOW
          //delay(1000);
          USE_SERIAL.println("MOVE FORWARD");
        }
        else if(payload=="10")
        {
          digitalWrite(d1, LOW);
          digitalWrite(d2, HIGH);
          //delay(1000);
          //digitalWrite(d2, LOW);
          //delay(1000);
          USE_SERIAL.println("MOVE RIGHT");
        }
        else if(payload=="11")
        {
          digitalWrite(d1, HIGH);
          digitalWrite(d2, HIGH);
          //delay(1000);
          //delay(1000);
          USE_SERIAL.println("MOVE LEFT");
        }
        else
        {
          digitalWrite(d1, LOW);
          digitalWrite(d2, LOW);
          //delay(500);
          //delay(1000);
          USE_SERIAL.println("keep balance position");
        }
      }
    }
else
{
USE_SERIAL.printf("[HTTP] GET... failed, error: %s\n",
http.errorToString(httpCode).c_str());
}

http.end();
}

delay(1000);
}

SIGN DETECTION CODE :-

# import the necessary packages

import numpy as np
import cv2
import cv
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BCM)

TRIG = 23
ECHO = 24
LED=18

GPIO.setup(TRIG,GPIO.OUT)
GPIO.setup(LED,GPIO.OUT)
GPIO.setup(ECHO,GPIO.IN)

# ********************* FUNCTION ORDER POINTS *********************
def order_points(pts):
    # initialize a list of coordinates that will be ordered
    # such that the first entry in the list is the top-left,
    # the second entry is the top-right, the third is the
    # bottom-right, and the fourth is the bottom-left
    rect = np.zeros((4, 2), dtype="float32")

    # the top-left point will have the smallest sum, whereas
    # the bottom-right point will have the largest sum
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]

    # now, compute the difference between the points: the
    # top-right point will have the smallest difference,
    # whereas the bottom-left will have the largest difference
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]

    # return the ordered coordinates
    return rect
# ********************* ORDER POINTS FUNCTION END *****************

# ********************* FUNCTION FOUR POINT TRANSFORM *************
def four_point_transform(image, pts):
    # obtain a consistent order of the points and unpack them
    # individually
    rect = order_points(pts)
    (tl, tr, br, bl) = rect

    # compute the width of the new image, which will be the
    # maximum distance between bottom-right and bottom-left
    # x-coordinates or the top-right and top-left x-coordinates
    widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
    widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
    maxWidth = max(int(widthA), int(widthB))

    # compute the height of the new image, which will be the
    # maximum distance between the top-right and bottom-right
    # y-coordinates or the top-left and bottom-left y-coordinates
    heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
    heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
    maxHeight = max(int(heightA), int(heightB))

    # now that we have the dimensions of the new image, construct
    # the set of destination points to obtain a "birds eye view",
    # (i.e. top-down view) of the image, again specifying points
    # in the top-left, top-right, bottom-right, and bottom-left
    # order
    dst = np.array([
        [0, 0],
        [maxWidth - 1, 0],
        [maxWidth - 1, maxHeight - 1],
        [0, maxHeight - 1]], dtype="float32")

    # compute the perspective transform matrix and then apply it
    M = cv2.getPerspectiveTransform(rect, dst)
    warped = cv2.warpPerspective(image, M, (maxWidth, maxHeight))

    # return the warped image
    return warped
# ********************* FOUR POINT TRANSFORM FUNCTION END *********

kernel = np.ones((5,5),np.uint8)

img1=cv2.imread('arrowL.jpg',0)
img3=cv2.imread('arrowR.jpg',0)

resized_image1 = cv2.resize(img1, (320, 240))


resized_image3 = cv2.resize(img3, (320, 240))

# Take input from webcam


cap = cv2.VideoCapture(0)

# Reduce the size of video to 320x240 so rpi can process faster


cap.set(3,320)
cap.set(4,240)

while(True):

    buzz = 0
    ret, frame = cap.read()

    # trigger the ultrasonic sensor and time the echo
    GPIO.output(TRIG, False)
    time.sleep(0.5)
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    while GPIO.input(ECHO)==0:
        pulse_start = time.time()

    while GPIO.input(ECHO)==1:
        pulse_end = time.time()

    pulse_duration = pulse_end - pulse_start
    distance = pulse_duration * 17150
    distance = round(distance, 2)
    print "Distance:", distance, "cm"

    #ratio = frame.shape[0] / 300.0
    #orig = frame.copy()

    # convert the image to grayscale, blur it, and find edges
    # in the image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)
    edged = cv2.Canny(gray, 30, 200)

    # find contours in the edged image, keep only the largest
    # ones, and initialize our screen contour
    #_, cnts, _ = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    (cnts, hierarchy) = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
    screenCnt = None

    # loop over our contours
    for c in cnts:
        # approximate the contour
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)

        # if our approximated contour has four points, then
        # we can assume that we have found our screen
        if len(approx) == 4:
            screenCnt = approx
            break

    # draw a rectangle around the screen (only when one was found)
    orig = frame.copy()
    if screenCnt is not None:
        cv2.drawContours(frame, screenCnt, -1, (0, 0, 255), 3)

    if distance >= 25 and distance <= 35 and screenCnt is not None:
        #time.sleep(0.1)
        pts = screenCnt.reshape(4, 2)
        #**************************************************************
        pts = order_points(pts)
        warped = four_point_transform(frame, pts)
        #**************************************************************
        ret, thresh1 = cv2.threshold(warped, 80, 255, cv2.THRESH_BINARY)
        re = cv2.resize(thresh1, (320, 240))
        gray1 = cv2.cvtColor(re, cv2.COLOR_BGR2GRAY)
        imgxor0 = cv2.bitwise_xor(gray1, resized_image1)
        imgxor2 = cv2.bitwise_xor(gray1, resized_image3)
        nzCount0 = cv2.countNonZero(imgxor0)
        nzCount2 = cv2.countNonZero(imgxor2)
        print(nzCount0)
        print(nzCount2)
        if nzCount0 <= 55600:
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(frame, 'turn left', (20, 210), font, 1, (255, 255, 255), 2, 2)
        elif nzCount2 <= 57500:
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(frame, 'turn right', (20, 210), font, 1, (255, 255, 255), 2, 2)
        else:
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(frame, 'no match found', (20, 210), font, 1, (255, 255, 255), 2, 2)
        #cv2.imshow("comparison with left sign", imgxor0)
        #cv2.imshow("comparison with right sign", imgxor2)
        #cv2.imshow("Warped", re)
        #cv2.imwrite("meri_tasweer.jpg", re)
        cv2.imshow("original video", frame)

    else:
        print("distance outside the 25-35 cm window")
        cv2.imshow("original video", frame)
        font = cv2.FONT_HERSHEY_SIMPLEX
        cv2.putText(frame, 'distance out of range', (20, 210), font, 1, (255, 255, 255), 2, 2)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

GPIO.cleanup()
cap.release()
cv2.destroyAllWindows()