
A PROJECT REPORT ON

AUTONOMOUS VEHICLE USING IMAGE PROCESSING


A project report submitted in partial fulfillment of the requirements
for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
ELECTRONICS & COMMUNICATION ENGINEERING
(2016-2020)

Submitted by
A.SHIRISHA (16JN1A0404)
B.SIREESHA (16JN1A0407)
G.MOUNIKA (16JN1A0426)
V.SREE ROHITHA (16JN1A0453)
G.SRIDEVI (16JN1A0454)

Under the esteemed guidance of


Mr. M. SRIHARI, M.Tech
Assistant Professor, Dept. of ECE

DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

KAKINADA INSTITUTE OF ENGINEERING & TECHNOLOGY

(Approved by AICTE, Govt. of AP. Affiliated to Jawaharlal Nehru Technological
University Kakinada, Yanam Road, Korangi-533461)
(2016-2020)

CERTIFICATE
This is to certify that the thesis entitled “AUTONOMOUS VEHICLE USING IMAGE
PROCESSING” is a bonafide work of A.SHIRISHA, B.SIREESHA, G.MOUNIKA, V.SREE
ROHITHA and G.SRIDEVI, carried out in partial fulfillment of the requirement for the
award of the degree of BACHELOR OF TECHNOLOGY in ELECTRONICS AND
COMMUNICATION ENGINEERING at KAKINADA INSTITUTE OF
ENGINEERING AND TECHNOLOGY FOR WOMEN, affiliated to
JNTUK, KAKINADA, and is a record of bonafide work carried out by them under my guidance and
supervision. The results embodied in this thesis have not been submitted to any other University
or Institute for the award of any degree.

Project Guide Head of department

Mr. M.SRIHARI, M. Tech Mr. V.V.SUBHASH, M. Tech


Department of ECE Department of ECE

EXTERNAL EXAMINER
ACKNOWLEDGEMENT

It gives us immense pleasure to acknowledge all those who helped us throughout in making
this project a great success.

With profound gratitude we thank Mr. Y. RAMA KRISHNA, M. Tech, MBA, Principal,
Kakinada Institute of Engineering and Technology, for his timely suggestions, which helped us
to complete this project work successfully.

Our sincere thanks and deep sense of gratitude go to Mr. V.V. SUBHASH, M. Tech, Head of the
Department of ECE, for his valuable guidance in completing this project successfully.

We express our profound sense of gratitude to our project guide Mr. M. SRIHARI, M. Tech,
Assistant Professor, Department of ECE, for his valuable guidance, comments, suggestions and
encouragement throughout the course of this project.

We are thankful to both the teaching and non-teaching staff of the ECE department for their kind
cooperation and all sorts of help in bringing out this project work successfully.

OUR PROJECT MEMBERS

A.SHIRISHA (16JN1A0404)
B.SIREESHA (16JN1A0407)
G.MOUNIKA (16JN1A0426)
V.SREE ROHITHA (16JN1A0453)
G.SRIDEVI (16JN1A0454)
DECLARATION

We hereby declare that the work presented in the dissertation entitled
“AUTONOMOUS VEHICLE USING IMAGE PROCESSING”, submitted to
JNTU Kakinada, is a record of original work done by us under the guidance of
Mr. M. SRIHARI, M. Tech, Asst. Professor, Electronics & Communication
Engineering. This project work is submitted in partial fulfilment of the
requirement for the award of the degree of Bachelor of Technology in
Electronics & Communication Engineering. The results embodied in this project
report have not been submitted to any other University or Institute for the award
of any degree or diploma.

This work has not been previously submitted to any other institution
or University for the award of any other degree or diploma.

OUR PROJECT MEMBERS

A.SHIRISHA (16JN1A0404)
B.SIREESHA (16JN1A0407)
G.MOUNIKA (16JN1A0426)
V.SREE ROHITHA (16JN1A0453)
G.SRIDEVI (16JN1A0454)
TABLE OF CONTENTS

CHAPTER TITLE PAGE NO

ABSTRACT I

LIST OF FIGURES II-III

LIST OF TABLES IV

1. INTRODUCTION 1-8

1.1 Introduction 1
1.2 Description 1
1.3 Explanation 2
1.3.1 Lane Detection 2
1.3.2 Image Processing 3
1.3.3 Edge Detection 4
1.3.4 Histogram 4
1.3.4 (a) Histogram of Monochrome image 5
1.3.4 (b) Application of Histogram 6
1.4 Advantages of Autonomous vehicle 7
1.5 Keywords 7
1.6 Existing system 7
1.7 Proposed system 8
2. INTRODUCTION TO EMBEDDED SYSTEMS 9-16
2.1 Introduction 9
2.2 Background 10
2.3 Characteristics 11-13
2.3.1 User Interface 11
2.3.2 Processors in embedded systems 12
2.3.3 Readymade computer boards 12
2.3.4 Peripherals 13
2.4 Applications 14
2.5 Types of embedded systems 15
2.5.1 Small scale embedded systems 15
2.5.2 Medium scale embedded system 15
2.5.3 Sophisticated embedded systems 15
2.6 Advantages 16

3. HARDWARE COMPONENTS 17-60


3.1 Raspberry Pi 3B+ 18-23
3.1.1 Processor: BCM2837B0 18
3.1.2 Central processing unit 19
3.1.3 Memory 19
3.1.4 Networking 19
3.1.5 Video 20
3.1.6 Audio port 21
3.1.7 USB 21
3.1.8 GPIO 21
3.1.9 Power management integrated circuit (PMIC) 22
3.1.10 Power port 23
3.1.11 Operating system and storage support 23
3.2 Arduino Uno 24
3.2.1 Micro controller 25
3.2.2 Architecture 26
3.2.3 Features 26
3.2.4 Types of Arduino 27
3.2.5 Pin out of Arduino Uno 28
3.2.6 Communication 29
3.2.7 Advantages of Arduino 30
3.2.8 Applications of Arduino 30
3.3 Raspberry camera 31
3.3.1 Pin description 32
3.3.2 Features 32
3.4 Vibration sensor 33
3.4.1 Specification 34
3.5 Gas sensor 34
3.5.1 Different types of gas sensors 35
3.5.2 Applications 35
3.6 GSM module 35
3.6.1 Specification 36
3.6.2 Module pin out 37
3.7 DC gear BO motor 37
3.7.1 Performance curve 38
3.7.2 Features 38
3.8 Battery 38
3.9 Buck converter 40
3.9.1 Features 41
3.9.2 Specifications 41
3.10 Boost converter 42
3.10.1 Description 42
3.10.2 Features 42
3.10.3 Operation 43
3.10.4 Applications 43
3.11 Motor driver (L298N) 43
3.11.1 Power supply 44
3.11.2 L298MIC 44
3.11.3 L298N pin out 44
3.11.4 Output pins 45
3.11.5 Control pins 45
3.12 I2C module 46
3.12.1 I2C interface 46
3.12.2 Applications 47
3.13 LCD (liquid crystal display) 47
3.13.1 Features 47
3.13.2 Shapes and sizes 48
3.13.3 Pin description 48
3.13.4 LCD screen 48
3.13.5 LCD connection 49
3.13.6 LCD initialisation 49
3.14 Cooling fan 50
3.15 SD card (Secure Digital card) 50
3.15.1 2003: Mini cards 51
3.15.2 2004-2005: Micro cards 51
3.15.3 2006-2008: SDHC and SDIO 52
3.15.4 2009-present: SDXC 52
3.15.5 Class 53
3.15.6 Speed class 53
3.15.7 UHS speed class 53
3.15.8 Video speed class 53
3.15.9 Advantages 54
3.16 GPS module 54
3.16.1 Introduction 54
3.16.2 Pin configuration 55
3.16.3 Technical specifications 55
3.16.4 Features 55
3.16.5 Advantages 56
3.16.6 Applications 56
3.17 LED 56
3.17.1 Advantages 57
3.17.2 Applications 57
3.18 Transistor (bc547) 57
3.18.1 BC547 as a switch 57
3.18.2 Applications 58
3.19 Toggle switch 58
3.20 Car chassis 59
3.21 Wheels 59
3.21.1 Standard/fixed wheels 59
3.21.2 Orientable wheel 60
3.21.3 Omni wheels 60
3.21.4 Conclusion 60

4. SOFTWARE USED 61-82


4.1 Arduino IDE 61-62
4.1.1 Development process 63
4.1.2 Project creation process 63
4.1.3 Basic Arduino command library 65
4.2 Raspbian operating system 66
4.2.1 Installation 67
4.2.2 Raspberry pi configuration 69
4.2.3 Advantages of Raspbian 71
4.3. Open computer vision (CV) 72
4.3.1 Reading, writing and displaying images 73
4.3.2 Changing colour spaces 74
4.3.3 Resizing images 74
4.3.4 Image rotation 75
4.3.5 Image translation 75
4.3.6 Adaptive thresholding 76
4.3.7 Image segmentation 77
4.3.8 Bitwise operators 77
4.3.9 Edge detection 78
4.3.10 Image filtering 79
4.3.11 Image contours 79
4.3.12 Scale invariant feature transform (SIFT) 80
4.3.13 Speeded up robust features (SURF) 80
4.3.14 Feature matching 81
4.3.15 Face detection 81
4.3.16 End nodes 82

5. RESULT 83
6. CONCLUSION AND FUTURE SCOPE 84
APPENDIX SOURCE CODE 85-103
ABSTRACT
An autonomous vehicle is a self-driving vehicle that drives itself without any human
intervention. In this project, the vehicle drives autonomously by detecting traffic signs, symbols,
obstacles, and the path ahead. The Raspberry Pi camera is used to detect obstacles placed in
front of the vehicle and also detects the path of the vehicle. Using image processing, the camera
identifies the path and the vehicle moves along it. Traffic signs are detected by the camera, and
the vehicle travels according to them. Using GPS, the vehicle shares its location with the user;
if any obstacle blocks the vehicle, it sends a message to the user through GSM and shares its
location through GPS. If the vehicle meets with an accident, it likewise sends a message to the
user and shares its location. During low-light conditions, the vehicle turns on its lights using an
LDR sensor.

Image processing plays a vital role in detecting the path of the vehicle. The main objective of
the project is to make a vehicle drive by itself while detecting traffic signs, symbols, obstacles,
and the path. The main functions of this project are traffic sign detection, obstacle detection,
accident and gas detection, light detection to automatically switch on the vehicle's lights during
low-light conditions, and location sharing using GPS.

LIST OF FIGURES
S.NO Figure No Figure Name Page no

1. 1.1 Original image 2


2. 1.2 ROI image in area 2
3. 1.3 Gray scale conversion of the ROI region 3
4. 1.4 Gray stretch image 3
5. 1.5 Median filter image 4
6. 1.6 Canny edge detection output image 4
7. 1.7 Pixel intensities for 1bit,2bit,3bit, 4bit image data 5
8. 1.8 Black and white image and its histogram 5
9. 1.9 8 bit gray scale image and its histogram 6
10. 1.10 Histogram of original image and thresholding results 6
11. 1.11 Histogram of two thresholding images 6
12. 2.1 PCB board 9
13. 2.2 Types of embedded systems 15
14. 3.1 Block diagram 17
15. 3.2 Raspberry pi board 18
16. 3.3 BCM2837 BO with heat spreader 18
17. 3.4 Random access memory (RAM) 19
18. 3.5 Wi-Fi/BT components 19
19. 3.6 LAN7515 Gigabit Ethernet and USB 2.0 chip 20
20. 3.7 Display connectors (DSI) 20
21. 3.8 Audio jack 21
22. 3.9 USB ports 21
23. 3.10 GPIO configuration 21
24. 3.11 Power management integrated circuit (PMIC) 22
25. 3.12 USB type C 23
26. 3.13 SD slot 23
27. 3.14 Arduino Uno 24
28. 3.15 Micro controller 25
29. 3.16 Pin configuration 25
30. 3.17 Architecture 26
31. 3.18 Pin configuration of Arduino 28
32. 3.19 Raspberry pi camera 31
33. 3.20 Vibration sensor 33
34. 3.21 Gas sensor 34
35. 3.22 GSM module 36
36. 3.23 Pin out of SIM 800 37
37. 3.24 DC gear BO motor 37
38. 3.25 Performance curve 38
39. 3.26 Lithium ion battery 40
40. 3.27 Buck converter 40
41. 3.28 Boost converter 42
42. 3.29 L298N IC 44
43. 3.30 Pin out 44
44. 3.31 Output pins of L298N 45
45. 3.32 Directional control pin 46
46. 3.33 I2C module 46
47. 3.34 LCD display 48
48. 3.35 Cooling fan 50
49. 3.36 Micro SD card 51
50. 3.37 GPS pin out 55
51. 3.38 LED 56
52. 3.39 Transistor 58
53. 3.40 Toggle switch 58
54. 3.41 Car chassis 59
55. 4.1 Arduino software 61
56. 4.2 Arduino licence agreement 62
57. 4.3 Installation path 62
58. 4.4 Complete installation 62
59. 4.5 Arduino sketch 63
60. 4.6 Setup and loop functions 64
61. 4.7 Open cv OS 72
62. 4.8 Reading images 73
63. 4.9 Changing the colours of images 74
64. 4.10 Image rotation 75
65. 4.11 Image translation 76
66. 4.12 Adaptive thresholding 77
67. 4.13 Image segmentation 77
68. 4.14 Bitwise operators 78
69. 4.15 Edge detection 78
70. 4.16 Image filtering 79
71. 4.17 Image contours 80
72. 4.18 SIFT image 80
73. 4.19 Feature matching 81
74. 4.20 Face detection 82
75. 5.1 Result 83

LIST OF TABLES

S.NO TABLE NO TABLE NAME PAGE NO


1 3.1 Types of Arduino 27
2 3.2 Pin Description of LCD 48


CHAPTER 1
INTRODUCTION

1.1 INTRODUCTION

Image processing is one of the main drivers of automation, security and safety-related applications
in the electronics industry. Most image-processing techniques treat the image as a two-dimensional
signal and apply standard signal-processing methods to it. Images are also handled as 3D signals,
where the third dimension is time or the z-axis. Highly efficient, low-memory and reliable solutions
can be achieved by combining embedded systems and image processing to bring out the benefits of
both. Google is one of the billion-dollar companies that has demonstrated its own driverless car, a
design that does away with all conventional controls including the steering wheel, along with other
astonishing technologies.

In its driverless car, Google has included not only image processing but also many other remarkable
technologies, one of the most important being Lidar, which stands for "Light Detection and Ranging".
It consists of a cone- or puck-shaped device that projects lasers which bounce off objects to create
a high-resolution map of the environment in real time. In addition to helping driverless cars "see",
Lidar is used to create fast, accurate 3D scans of landscapes, buildings, cultural heritage sites and
foliage. Other technologies include a bumper-mounted radar for collision avoidance, an aerial that
reads precise geo-location, ultrasonic sensors on the rear wheels that detect and avoid obstacles, and
software programmed to interpret common road signs. Apart from these, there are altimeters,
gyroscopes, and tachometers that determine the precise position of the car and offer highly accurate
data for the car to operate safely. Apart from Google, many other companies like Tesla, Audi, and
Uber have also developed and extensively tested their own driverless cars. This report concentrates
on how image processing can be used in vehicles to drive the automotive industry towards completely
autonomous and highly secure pathways. A real-time embedded-system environment is inevitable in
an automotive application. Also, since the scale of the industry is very high, the solutions should be
cost-efficient, fast and reliable.

1.2 DESCRIPTION:

In autonomous vehicles, the driving commands from a human driver are replaced by a controller
or a microcomputer system that generates these commands from the information it receives as input.
Since this report deals with the applications of image processing in the autonomous control of a
vehicle, the input given to the microcomputer system is the visual information obtained from a
camera mounted on the vehicle. This section explains in detail some of the important functions in
autonomous vehicles, such as lane detection, traffic sign detection, obstacle detection,


accident detection, etc., which use the processing of received image inputs, and the algorithms
used in them. Lane detection performs robust, real-time detection of road lane markers using the
Hough transform, in which edge detection is implemented with the Canny edge-detection technique.
Traffic sign detection covers the recognition of road traffic signs, in which the concept of
polynomial approximation of digital curves is used in the detection module.

1.3 EXPLANATION:

1.3.1 LANE DETECTION:

Lane detection is one of the main parts of self-driving-car algorithm development. On-board
cameras are kept in and around the car to capture images of the road and the car's surroundings in
real time [1]. When the vehicle appears to deviate from the lane, or the vehicle safety distance is too
small, the system can alert the driver in time to avoid dangerous situations. The basic concept of
lane detection is that, from the image of the road, the on-board controller should understand the
limits of the lane and should warn the driver when the vehicle is moving closer to the lane markings.
In an autonomous car, lane detection is important to keep the vehicle in the middle of the lane at all
times, other than while changing lanes. Lane departure warning systems have already found their
way into most high-end passenger cars currently on the market. A typical lane detection algorithm
can be split into simple steps, illustrated by the sketch after this list:
1. Select the ROI (Region of Interest)
2. Image preprocessing (gray range/image noise subtraction)
3. Get the edge information from the image (edge detection)
4. Hough transform (or other algorithms) to decide the lane markings
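As a concrete illustration of step 4, the following minimal sketch applies OpenCV's probabilistic
Hough transform to a Canny edge map (the edge-detection step itself is described in Section 1.3.3).
The file name and the threshold values are illustrative placeholders, not values from this project.

    import cv2
    import numpy as np

    # Assume `edges` is the binary Canny edge map of the preprocessed ROI
    # (steps 1-3); see Sections 1.3.2 and 1.3.3.
    edges = cv2.Canny(cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE), 50, 150)

    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    # rho and theta set the accumulator resolution; threshold, minLineLength and
    # maxLineGap are illustrative values trading sensitivity against false hits.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)

    frame = cv2.imread("road.jpg")
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # lane candidates
    cv2.imwrite("lanes.jpg", frame)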

FIGURE 1.1: Original Image

FIGURE 1.2: ROI area

Step 1: Select the ROI. The images collected by the on-board camera are colour images. Each pixel
in the image is made up of three colour components, R, G, and B, which contain a large amount of
information. Processing these images directly makes the algorithm consume a lot of time.

A better approach to this problem is to select a region of interest (ROI) from the original image by
concentrating on just the region that interests us, namely the region where the lane lines are
generally present. Processing only the ROI can greatly reduce the running time of the algorithm and
improve its speed. The original image is shown in Figure 1.1 and the ROI region is shown in
Figure 1.2.
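A minimal sketch of this ROI-selection step follows, assuming OpenCV and NumPy; the trapezoid
vertices are placeholder fractions of the frame size that would be tuned to the actual camera
mounting.

    import cv2
    import numpy as np

    image = cv2.imread("road.jpg")          # placeholder input frame
    h, w = image.shape[:2]

    # Keep only the lower trapezoid of the frame, where lane lines normally
    # appear; the vertex positions are placeholder fractions of the frame size.
    roi_vertices = np.array([[(0, h - 1), (int(0.45 * w), int(0.6 * h)),
                              (int(0.55 * w), int(0.6 * h)), (w - 1, h - 1)]],
                            dtype=np.int32)

    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, roi_vertices, 255)            # white inside the ROI polygon
    roi = cv2.bitwise_and(image, image, mask=mask)   # black out everything else
    cv2.imwrite("roi.jpg", roi)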

1.3.2 IMAGE PROCESSING:

Most road images have a lot of associated noise, so before any further processing steps we need to
remove it. This is typically done through image preprocessing, which includes gray-scale conversion
of the colour image, gray stretching, and median filtering to eliminate image noise and other
interference. Gray stretching increases the contrast between the lane and the road, which makes the
lane lines more prominent.

Equation (1) represents the function applied to an RGB image to convert it to gray scale:

L(x, y) = 0.21 R(x, y) + 0.72 G(x, y) + 0.07 B(x, y)    (1)

where R, G, and B are the red, green, and blue components of the image, and (x, y) is the position
of a pixel.
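The sketch below applies Equation (1) directly with NumPy; note that OpenCV loads images in
B, G, R order, and that cv2.cvtColor uses slightly different standard weights. The file name is a
placeholder.

    import cv2
    import numpy as np

    bgr = cv2.imread("roi.jpg")                 # OpenCV loads pixels in B, G, R order
    b, g, r = cv2.split(bgr.astype(np.float32))

    # Equation (1): L(x, y) = 0.21 R + 0.72 G + 0.07 B
    gray = (0.21 * r + 0.72 * g + 0.07 * b).astype(np.uint8)

    # cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY) does the same job with the
    # slightly different standard weights 0.299, 0.587, and 0.114.
    cv2.imwrite("gray.jpg", gray)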

Figure 1.3 Gray-scale conversion of the ROI region

Figure 1.4 Gray stretch image

The methods of image filtering include frequency-domain filtering and spatial-domain filtering.
Spatial-domain filtering is simpler and faster than frequency-domain filtering, and it can remove
salt-and-pepper noise from the original image while preserving the edge details of the image.


The main principle of the median filter is to replace the value of each pixel with the median of the
pixel values in its neighbourhood. The image after median filtering is shown in Figure 1.5.

Figure 1.5 Median filtered image
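A one-call sketch of this step with OpenCV's median filter; the 5×5 kernel size is an illustrative
choice, not a value taken from this project.

    import cv2

    gray = cv2.imread("gray.jpg", cv2.IMREAD_GRAYSCALE)

    # Replace each pixel with the median of its 5x5 neighbourhood; this removes
    # salt-and-pepper noise while preserving edges better than a mean filter.
    # The kernel size must be odd and is a tuning parameter.
    filtered = cv2.medianBlur(gray, 5)
    cv2.imwrite("median.jpg", filtered)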

1.3.3 EDGE DETECTION:

The next step is to perform edge detection on the output of the preprocessing stage. Edge detection
essentially detects the lines around the objects in an image. One of the most common methods is
Canny edge detection, introduced by John F. Canny of the University of California, Berkeley, in
1986. It uses multiple steps, including Gaussian filtering and intensity-gradient analysis, to
determine the edges.

Figure 1.6 Canny Edge Detection Output image

In recent research, one of the main goals has been to develop more efficient edge-detection
algorithms that detect edges reliably across varying image quality. For that purpose, alternative
approaches to edge detection have been proposed that are claimed to be more efficient than the
Canny edge detector.
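A minimal Canny sketch with OpenCV follows; the 50/150 hysteresis thresholds are illustrative,
not values taken from this project.

    import cv2

    filtered = cv2.imread("median.jpg", cv2.IMREAD_GRAYSCALE)

    # Canny internally applies Gaussian smoothing, computes intensity gradients,
    # thins them by non-maximum suppression, and links edge pixels by hysteresis
    # between the low and high thresholds.
    edges = cv2.Canny(filtered, 50, 150)
    cv2.imwrite("edges.jpg", edges)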

1.3.4 HISTOGRAM:

Digital images are composed of two-dimensional integer arrays that represent individual
components of the image, which are called picture elements, or pixels. The number of bits used to
represent these pixels determines the number of gray levels available to describe each pixel.
The pixel values in a black-and-white image can be either 0 (black) or 1 (white), representing the
darker and brighter areas of the image, respectively, as shown in Figure 1.7.


Figure 1.7 Available pixel intensities for 1-bit, 2-bit, 3-bit, and 4-bit image data

If n bits are used to represent a pixel, then there will be 2^n pixel values ranging from 0 to (2^n − 1).
Here 0 and (2^n − 1) correspond to black and white, respectively, and all other intermediate values
represent shades of gray. Such images are said to be monochromatic (see Figure 1.7).

A combination of multiple monochrome images results in a color image. For example, an RGB
image is a combined set of three individual 2-D pixel arrays that are interpreted as red, green, and
blue color components.
An image histogram is a graph of pixel intensity (on the x-axis) versus the number of pixels (on the
y-axis). The x-axis spans all available gray levels, and the y-axis indicates the number of pixels that
have a particular gray-level value. Multiple gray levels can be combined into groups in order to
reduce the number of individual values on the x-axis.
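A short sketch of computing such a histogram, using either OpenCV or NumPy; the input file
name is a placeholder.

    import cv2
    import numpy as np

    gray = cv2.imread("gray.jpg", cv2.IMREAD_GRAYSCALE)

    # 256 bins, one per possible 8-bit gray level: hist[v] is the number of
    # pixels whose intensity equals v.
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])

    # The same histogram computed with NumPy:
    hist_np, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))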

1.3.4. (a) Histogram of a Monochrome Image:

Figure 1.8(a) shows a simple 4 × 4 black-and-white image whose histogram is shown in Figure
1.8(b). The first vertical line of the histogram (at gray level 0) indicates that there are 4 black
pixels in the image; the second line indicates that there are 12 white pixels.

Figure 1.8 A black-and-white image and its histogram

Figure 1.9(a) is a gray-scale image. The four pixel intensities (including black and white) of this
image are represented by the four vertical lines of the associated histogram in Figure 1.9(b). Here
the x-axis values span from 0 to 255, which means that there are 256 (= 2^8) possible pixel
intensities.


Figure 1.9 An 8-bit gray-scale image and its histogram

1.3.4. (b) Applications of Histogram:

a) Thresholding

A gray scale image can be converted into a black-and-white image by choosing a threshold and
converting all values above the threshold to the maximum intensity and all values below the
threshold to the minimum intensity. A histogram is a convenient means of identifying an
appropriate threshold.

In Figure 1.10, the pixel values are concentrated in two groups, and the threshold would be a value
in the middle of these two groups.

Fig 1.10 Histogram of original image and thresholding results

Fig 1.11 Histogram of original image and two thresholding attempts

In Figure 1.11, the more continuous nature of the histogram indicates that the image is not a good
candidate for thresholding, and that finding the ideal threshold value would be difficult.
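The sketch below shows both a manually chosen threshold and Otsu's method, which derives the
threshold automatically from the histogram and works best exactly when the histogram has two
clear groups of values, as in Figure 1.10. The 127 cutoff and the file name are placeholders.

    import cv2

    gray = cv2.imread("gray.jpg", cv2.IMREAD_GRAYSCALE)

    # Manual threshold: pixels above 127 become white (255), the rest black (0).
    _, bw = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # Otsu's method picks the threshold automatically from the histogram; it
    # works well precisely when the histogram is bimodal.
    t, bw_otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print("Otsu threshold:", t)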


b) Image Enhancement:

Image enhancement refers to the process of transforming an image so as to make it more visually
appealing or to facilitate further analysis. It can involve simple operations (addition,
multiplication, logarithms, etc.) or advanced techniques such as contrast stretching and histogram
equalization.

An image histogram can help us to quickly identify processing operations that are appropriate for
a particular image. For example, if the pixel values are concentrated in the far-left portion of the
histogram (this would correspond to a very dark image), we can improve the image by shifting the
values toward the center of the available range of intensities, or by spreading the pixel values such
that they more fully cover the available range.
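A brief sketch of both operations with OpenCV and NumPy; the input file name is a placeholder
for a dark image.

    import cv2
    import numpy as np

    gray = cv2.imread("dark.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder dark image

    # Histogram equalization spreads the concentrated intensities across the
    # full 0-255 range, brightening a dark image and improving its contrast.
    equalized = cv2.equalizeHist(gray)

    # Simple contrast stretch: map [min, max] of the image linearly onto [0, 255].
    lo, hi = int(gray.min()), int(gray.max())
    stretched = ((gray.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1))
    stretched = stretched.astype(np.uint8)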

1.4 ADVANTAGES OF AUTONOMOUS VEHICLES:

• Reduced accidents
• Reduced traffic congestion
• Reduced CO2 emissions
• Increased lane capacity
• Lower fuel consumption
• Last-mile services
• Reduced travel time and transportation costs

1.5 KEYWORDS :

Image processing, obstacle detection, GSM, GPS, gas sensor, LCD display, Raspberry pi camera,
vibration sensor.

1.6 EXISTING SYSTEM:

In the existing system, the autonomous vehicle drives by itself using image processing; the
histogram of an image, Canny edge detection, gray scaling of the path, and median filtering are the
techniques used to drive the vehicle along its path. But if the vehicle faces any obstacle in its path,
it stops at that position and we cannot identify its location. If the vehicle encounters disturbances
from the surrounding environment, it needs to detect and identify them and send information to the
user. To overcome these drawbacks, we propose a system with additional features for the
autonomous vehicle.


1.7 PROPOSED SYSTEM:

In the proposed system, along with image processing, we have implemented GPS, a GSM system,
information display through an LCD, smoke detection, accident prevention, and signal lights on
the vehicle. GPS shows the location of the vehicle so that we can identify it, and GSM sends
messages to the user. The LCD is used to display information sent by the user through GSM. If we
need to stop the vehicle, we can send the message "STOP"; the vehicle receives the message
through GSM, stops moving, and shares the location where it stopped through GPS. If we need the
location of the vehicle along its path, we send the text "L" to the vehicle, and the vehicle then
reports its location. If any obstacle is present in front of the vehicle's path, the vehicle waits for
some time until the obstacle is removed, or else it sends the message "AN OBSTACLE IS IN MY
PATH, I CANT MOVE FORWARD!". If shocks or vibrations occur around the vehicle, it sends a
message through GSM and shares the location. If heavy smoke is detected in the path of the vehicle,
it sends the message "HEAVY SMOKE IS IN MY PATH, I CANT MOVE FORWARD!" and
shares its location. Any load information or notification that we want to show to others can be
displayed on the vehicle through the LCD; the information is sent through GSM and displayed.
The signal lights of the vehicle work as follows: when the vehicle turns left, the left LED glows;
when it turns right, the right LED glows; and if the vehicle stops moving, both LEDs glow. A
sketch of this alert-messaging scheme is given after this paragraph.
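A simplified sketch of how such alerts could be sent through a SIM800-style GSM module over a
serial link, using the standard AT+CMGF/AT+CMGS text-mode commands. The serial port, baud
rate, and phone number are placeholders, and the pyserial package is assumed; the actual project
code is in the appendix.

    import time
    import serial  # pyserial; the port name and baud rate below are placeholders

    gsm = serial.Serial("/dev/ttyS0", 9600, timeout=2)

    def send_sms(number, text):
        """Send one SMS using standard SIM800 text-mode AT commands."""
        gsm.write(b"AT+CMGF=1\r")                  # select SMS text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        gsm.write(text.encode() + b"\x1a")         # Ctrl-Z terminates the message
        time.sleep(3)

    # Example alert from the proposed system (the phone number is a placeholder):
    send_sms("+910000000000", "AN OBSTACLE IS IN MY PATH, I CANT MOVE FORWARD!")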

The vibration sensor detects heavy vibrations or shocks from the surroundings during travel and
stops the vehicle immediately. We also added obstacle avoidance using the Raspberry Pi camera,
which detects objects near the vehicle. The Raspberry Pi camera scans the path, filters the images
of the path, and sends information about the path to the vehicle, so the vehicle moves along a
particular path. Traffic sign detection is the method we implemented so that the vehicle detects
the traffic signs and obeys them along its way; through traffic sign detection, the vehicle recognises
the symbols and follows the path.


CHAPTER 2

INTRODUCTION TO EMBEDDED SYSTEMS

2.1 INTRODUCTION:

An embedded system is a computer system—a combination of a computer processor, computer


memory, and input/output peripheral devices—that has a dedicated function within a larger
mechanical or electrical system. It is embedded as part of a complete device often including
electrical or electronic hardware and mechanical parts. Because an embedded system typically
controls physical operations of the machine that it is embedded within, it often has real-time
computing constraints. Embedded systems control many devices in common use today. Ninety-eight
percent of all microprocessors manufactured are used in embedded systems.

Modern embedded systems are often based on microcontrollers (i.e. microprocessors with
integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips
for memory and peripheral interface circuits) are also common, especially in more complex
systems. In either case, the processor(s) used may be types ranging from general purpose to those
specialized in a certain class of computations or even custom designed for the application at hand.
A common standard class of dedicated processors is the digital signal processor (DSP).

Fig 2.1 PCB Board

Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce
the size and cost of the product and increase the reliability and performance. Some embedded
systems are mass-produced, benefiting from economies of scale.

Embedded systems range from portable devices such as digital watches and MP3 players, to large
stationary installations like traffic light controllers, programmable logic controllers, and large


complex systems like hybrid vehicles, medical imaging systems, and avionics. Complexity varies
from low, with a single microcontroller chip, to very high with multiple units, peripherals and
networks mounted inside a large equipment rack.

2.2 BACKGROUND:

The origins of the microprocessor and the microcontroller can be traced back to the MOS integrated
circuit, which is an integrated circuit chip fabricated from MOSFETs (metal-oxide-semiconductor
field-effect transistors) and was developed in the early 1960s. By 1964, MOS chips had reached
higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further
increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI)
with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI
chips to computing was the basis for the first microprocessors, as engineers began recognizing that
a complete computer processor system could be contained on several MOS LSI chips.[5]

The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett
AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip
microprocessor was the Intel 4004, released in 1971. It was developed by Federico Faggin, using
his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and
Busicom engineer Masatoshi Shima.
One of the very first recognizably modern embedded systems was the Apollo Guidance
Computer, developed ca. 1965 by Charles Stark Draper at the MIT
Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was
considered the riskiest item in the Apollo project as it employed the then newly developed
monolithic integrated circuits to reduce the computer's size and weight.

An early mass-produced embedded system was the Autonetics D-17 guidance computer for the
Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the
D-17 was replaced with a new computer that represented the first high-volume use of integrated
circuits.

Since these early applications in the 1960s, embedded systems have come down in price and there
has been a dramatic rise in processing power and functionality. An early microprocessor, the Intel
4004 (released in 1971), was designed for calculators and other small systems but still required
external memory and support chips. By the early 1980s, memory, input and output system
components had been integrated into the same chip as the processor forming a microcontroller.
Microcontrollers find applications where a general-purpose computer would be too costly. As the
cost of microprocessors and microcontrollers fell the prevalence of embedded systems increased.


Today, a comparatively low-cost microcontroller may be programmed to fulfill the same role as a
large number of separate components. With microcontrollers, it became feasible to replace, even
in consumer products, expensive knob-based analog components such as potentiometers and
variable capacitors with up/down buttons or knobs read out by a microprocessor. Although in this
context an embedded system is usually more complex than a traditional solution, most of the
complexity is contained within the microcontroller itself. Very few additional components may be
needed and most of the design effort is in the software. Software prototyping and testing can be
quicker compared with the design and construction of a new circuit not using an embedded processor.

2.3 CHARACTERISTICS:

Embedded systems are designed to do some specific task, rather than be a general-purpose
computer for multiple tasks. Some also have real-time performance constraints that must be met,
for reasons such as safety and usability; others may have low or no performance requirements,
allowing the system hardware to be simplified to reduce costs.

Embedded systems are not always standalone devices. Many embedded systems consist of small
parts within a larger device that serves a more general purpose. For example, the Gibson Robot
Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot
Guitar is, of course, to play music. Similarly, an embedded system in an automobile provides a
specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are stored
in read-only memory or flash memory chips. They run with limited computer hardware resources:
little memory, small or non-existent keyboard or screen.

2.3.1 User interface:

Embedded systems range from no user interface at all, in systems dedicated only to one task, to
complex graphical user interfaces that resemble modern computer desktop operating systems.
Simple embedded devices use buttons, LEDs, graphic or character LCDs (HD44780 LCD for
example) with a simple menu system.

More sophisticated devices that use a graphical screen with touch sensing or screen-edge buttons
provide flexibility while minimizing space used: the meaning of the buttons can change with the
screen, and selection involves the natural behavior of pointing at what is desired. Handheld systems
often have a screen with a "joystick button" for a pointing device.

Some systems provide a user interface remotely with the help of a serial (e.g. RS-232, USB, I²C, etc.)
or network (e.g. Ethernet) connection. This approach gives several advantages: it extends the
capabilities of the embedded system, avoids the cost of a display, simplifies the BSP (board support
package), and allows one to build a rich user interface on the PC. A good example of this is the
combination of an embedded


web server running on an embedded device (such as an IP camera) or a network router. The user
interface is displayed in a web browser on a PC connected to the device, therefore needing no
software to be installed.

2.3.2 Processors in embedded systems:

Examples of properties of typical embedded computers, when compared with general-purpose


counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit
cost. This comes at the price of limited processing resources, which make them significantly more
difficult to program and to interact with. However, by building intelligence mechanisms on top of
the hardware, taking advantage of possible existing sensors and the existence of a network of
embedded units, one can both optimally manage available resources at the unit and network levels
as well as provide augmented functions, well beyond those available. For example, intelligent
techniques can be designed to manage power consumption of embedded systems.

Embedded processors can be broken into two broad categories. Ordinary microprocessors (μP) use
separate integrated circuits for memory and peripherals. Microcontrollers (μC) have on-chip
peripherals, thus reducing power consumption, size and cost. In contrast to the personal computer
market, many different basic CPU architectures are used since the software is custom-developed
for an application and is not a commodity product installed by the end user. Both Von Neumann
and various degrees of Harvard architectures are used. RISC as well as non-RISC processors are
found. Word lengths vary from 4-bit to 64-bit and beyond, although the most typical remain
8/16-bit. Most architectures come in a large number of different variants and shapes, many of which
are also manufactured by several different companies.

Numerous microcontrollers have been developed for embedded systems use. General-purpose
microprocessors are also used in embedded systems, but they generally require more support
circuitry than microcontrollers.

2.3.3 Ready-made computer boards:

PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for
small, low-volume embedded and ruggedized systems, mostly x86-based. These are often
physically small compared to a standard PC, although still quite large compared to most simple
(8/16-bit) embedded systems. They often use DOS, Linux, NetBSD, or an embedded real-time
operating system such as MicroC/OS-II, QNX or VxWorks. Sometimes these boards use non-x86
processors.

In certain applications, where small size or power efficiency are not primary concerns, the
components used may be compatible with those used in general-purpose x86 personal computers.
Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly
integrated, physically smaller or have other attributes making them attractive to embedded


engineers. The advantage of this approach is that low-cost commodity components may be used
along with the same software development tools used for general software development. Systems
built in this way are still regarded as embedded since they are integrated into larger devices and
fulfill a single role. Examples of devices that may adopt this approach are ATMs and arcade
machines, which contain code specific to the application.

However, most ready-made embedded systems boards are not PC-centered and do not use the ISA
or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having
a standardized bus connecting discrete components, and the environment for both hardware and
software tools may be very different.

One common design style uses a small system module, perhaps the size of a business card, holding
high density BGA chips such as an ARM-based system-on-a-chip processor and peripherals,
external flash memory for storage, and DRAM for runtime memory. The module vendor will
usually provide boot software and make sure there is a selection of operating systems, usually
including Linux and some real-time choices. These modules can be manufactured in high volume,
by organizations familiar with their specialized testing issues, and combined with much lower
volume custom mainboards with application-specific external peripherals.

Implementation of embedded systems has advanced so that they can easily be implemented with
already-made boards that are based on worldwide accepted platforms. These platforms include, but
are not limited to, Arduino and Raspberry Pi.

2.3.4 Peripherals:

Embedded systems talk with the outside world via peripherals, such as:

Serial Communication Interfaces (SCI): RS-232, RS-422, RS-485, etc.


Synchronous Serial Communication Interface: I2C, SPI, SSC and ESSI (Enhanced Synchronous
Serial Interface)
Universal Serial Bus (USB)
Multi Media Cards (SD cards, Compact Flash, etc.)
Networks: Ethernet, LonWorks, etc.
Fieldbuses: CAN-Bus, LIN-Bus, PROFIBUS, etc.
Timers: PLL(s), Capture/Compare and Time Processing Units
Discrete IO: aka General Purpose Input/output (GPIO)
Analog to Digital/Digital to Analog (ADC/DAC)
Debugging: JTAG, ISP, BDM Port, BITP, and DB9 ports

2.4 APPLICATIONS:


Embedded systems are commonly found in consumer, industrial, automotive, home appliances,
medical, commercial and military applications.

Telecommunications systems employ numerous embedded systems from telephone switches for
the network to cell phones at the end user. Computer networking uses dedicated routers and
network bridges to route data.

Consumer electronics include MP3 players, television sets, mobile phones, video game consoles,
digital cameras, GPS receivers, and printers. Household appliances, such as microwave ovens,
washing machines and dishwashers, include embedded systems to provide flexibility, efficiency
and features. Advanced HVAC systems use networked thermostats to more accurately and
efficiently control temperature that can change by time of day and season. Home automation uses
wired- and wireless-networking that can be used to control lights, climate, security, audio/visual,
surveillance, etc., all of which use embedded devices for sensing and controlling.

Transportation systems from flight to automobiles increasingly use embedded systems. New
airplanes contain advanced avionics such as inertial guidance systems and GPS receivers that also
have considerable safety requirements. Various electric motors — brushless DC motors, induction
motors and DC motors — use electronic motor controllers. Automobiles, electric vehicles, and
hybrid vehicles increasingly use embedded systems to maximize efficiency and reduce pollution.
Other automotive safety systems include anti-lock braking system (ABS), Electronic Stability
Control (ESC/ESP), traction control (TCS) and automatic four-wheel drive.

Medical equipment uses embedded systems for vital signs monitoring, electronic stethoscopes for
amplifying sounds, and various medical imaging (PET, SPECT, CT, and MRI) for non-invasive
internal inspections. Embedded systems within medical equipment are often powered by industrial
computers.

Embedded systems are used in transportation, fire safety, safety and security, medical applications
and life-critical systems. Unless connected to wired or wireless networks via on-chip 3G cellular
or other methods for IoT monitoring and control purposes, these systems can be isolated from
hacking and thus be more secure. For fire safety, the systems can be designed to
have a greater ability to handle higher temperatures and continue to operate. In dealing with
security, the embedded systems can be self-sufficient and be able to deal with cut electrical and
communication systems.

A new class of miniature wireless devices called motes consists of networked wireless sensors. Wireless
sensor networking, WSN, makes use of miniaturization made possible by advanced IC design to
couple full wireless subsystems to sophisticated sensors, enabling people and companies to
measure a myriad of things in the physical world and act on this information through IT monitoring
and control systems. These motes are completely self-contained, and will typically run off a battery
source for years before the batteries need to be changed or charged.


Embedded Wi-Fi modules provide a simple means of wirelessly enabling any device that
communicates via a serial port.

2.5 TYPES OF EMBEDDED SYSTEMS:

Three types of Embedded Systems are:


1. Small Scale
2. Medium Scale
3. Sophisticated

Fig. 2.2 types of embedded systems

2.5.1. Small Scale Embedded Systems:

This embedded system can be designed with a single 8- or 16-bit microcontroller and can be operated
from a battery. For developing small-scale embedded systems, an editor, an assembler, an IDE, and
a cross-assembler are the most vital programming tools.

2.5.2. Medium Scale Embedded Systems:

These types of embedded systems are designed using 16- or 32-bit microcontrollers. These systems
have both hardware and software complexities. C, C++, Java, and source-code engineering tools,
etc., are used to develop this kind of embedded system.

2.5.3. Sophisticated Embedded Systems:

This type of embedded system has many hardware and software complexities. You may require
IPs, ASIPs, PLAs, configurable processors, or scalable processors. For the development of this
system, you need hardware and software co-design, and the components need to be combined in
the final system.


2.6 ADVANTAGES:

Here are the pros/benefits of using an embedded system:


1. It is able to cover a wide variety of environments.
2. Less likely to encounter errors.
3. Embedded systems use simplified hardware, which reduces costs overall.
4. Offers an enhanced performance.
5. The embedded system is useful for mass production.
6. The embedded system is highly reliable.
7. It has very few interconnections.
8. The embedded system is small in size.
9. It has a fast operation.
10. Offers improved product quality.
11. It optimizes the use of system resources.
12. It has a low power operation.


CHAPTER- 3

HARDWARE COMPONENTS

The block diagram of the system is shown in Figure 3.1. A 12 V Li-ion battery feeds a boost
converter and a 5 V buck converter. The Raspberry Pi (master board), fitted with a heat-sink
cooling fan, connects to the Raspberry Pi camera. The Arduino Uno (slave board) interfaces the
LCD, gas sensor, vibration sensor, GPS module, GSM module, and the left and right indicator
LEDs, and drives four DC motors (M) through the L298N motor driver.

Fig 3.1 BLOCK DIAGRAM


3.1 RASPBERRY PI 3B+:

The Raspberry Pi 3 Model B+ is the final version of the third-generation single-board computer,
boasting a 64-bit quad-core processor running at 1.4 GHz, dual-band 2.4 GHz and 5 GHz
wireless LAN, Bluetooth 4.2/BLE, faster Ethernet, and Power over Ethernet (PoE) capability via
a separate PoE HAT. The dual-band wireless LAN comes with modular compliance
certification, allowing the board to be designed into end products with significantly reduced
wireless-LAN compliance testing, improving both cost and time to market. The Raspberry Pi
3 Model B+ maintains the same mechanical footprint as both the Raspberry Pi 2 Model B and the
Raspberry Pi 3 Model B.

Fig 3.2 raspberry pi board

3.1.1. PROCESSOR: BCM2837B0:

Fig 3.3 BCM2837B0 with heat spreader

The processor is built using the system-on-chip (SoC) method. This is the Broadcom chip used in
the Raspberry Pi 3B+. The underlying architecture of the BCM2837B0 is identical to that of the
BCM2837A0 chip used in other versions of the Pi; the ARM core hardware is the same, only the
frequency is rated higher. The BCM2837B0 chip is packaged slightly differently from the other
processors and most notably includes a heat spreader for better thermals. This allows higher clock
frequencies (or running at lower voltages to reduce power consumption) and more accurate
monitoring and control of the chip's temperature.


3.1.2. CENTRAL PROCESSING UNIT (CPU):

The Pi 3 Model B+ has a 1.4 GHz 64-bit quad-core Broadcom ARM Cortex-A53 processor with
512 KB of shared cache memory. Improved packaging, alongside a heat spreader, has helped to
boost the processor's performance.

3.1.3. MEMORY:

Fig 3.4 Random Access Memory (RAM)

The Raspberry Pi 3 has 1 GB of LPDDR2 SDRAM (Low-Power Double Data Rate Synchronous
Dynamic Random Access Memory), a type of double-data-rate SDRAM that consumes less
power. Alongside the 1 GB of RAM, it has a Level 1 (L1) cache of 32 KB and a Level 2 (L2)
cache of 512 KB.

3.1.4. NETWORKING:

It has Gigabit Ethernet over USB 2.0 with a maximum of about 300 Mbps of data transmission. It
also has Power-over-Ethernet support (with a separate PoE HAT) and improved PXE network and
USB mass-storage booting. For wireless connectivity it offers 2.4 GHz and 5 GHz
IEEE 802.11 b/g/n/ac wireless LAN, Bluetooth 4.2, and BLE.

Fig 3.5 Wi-Fi/BT components

The wireless and Bluetooth components are now inside a metallised can. This component group
has been FCC-approved as a module, which means that if you incorporate a Pi 3B+ into a product,

wireless/BT transmission doesn't require any further certification. The can also has the
Raspberry Pi logo embossed on it, which is a rather nice touch.

The Pi 3B+'s 802.11ac at 5 GHz delivers up to 100 Mbit/s. If you operate in a busy Wi-Fi
environment, switching to 5 GHz can give some significant improvements. It's always good to
have choices.

Gigabit (1000BASE-T) Ethernet
The Pi 3B+ uses a Microchip LAN7515 chip for Ethernet and its USB 2.0 hub, so it can take
advantage of a Gigabit Ethernet connection; however, because of USB 2.0 limitations, its
maximum throughput is about 330 Mbit/s.

Fig 3.6 Microchip LAN7515 Gigabit Ethernet and USB 2.0 Hub chip

This is still a big performance hike and will be plenty for most applications. My broadband is
100 Mbit/s, so the Pi 3B+ is now able to fully utilise that speed; with the older Pi 3B I could get
a maximum of around 60 Mbit/s.

3.1.5. VIDEO:

Fig.3.7 display connector (DSI)

The Raspberry Pi 3 has one full-size HDMI port and a MIPI DSI display port. DSI is a Display
Serial Interface defined by the Mobile Industry Processor Interface (MIPI) Alliance, aimed at
reducing the cost of display controllers, and is designed for use with the Raspberry Pi Touch
Display. There is also a MIPI CSI camera port, a Camera Serial Interface bus used to transfer
data from the camera module. At the right-hand edge of the board you'll find 40 metal pins, split
into two rows of 20 pins.


3.1.6. AUDIO PORT:

It has a 3.5 mm audio jack; the HDMI port also serves as an audio output on the Raspberry Pi.

Fig 3.8 Audio jack

3.1.7. USB:

The BCM2837 USB port is On-The-Go (OTG) capable. If using it as either a fixed slave or a fixed
master, tie the USB OTGID pin to ground. The USB port (pins USB DP and USB DM) must be
routed as 90-ohm differential PCB traces. Note that the port is capable of being used as a true OTG
port, though there is no official documentation; some users have had success making this work.

Fig 3.9 USB ports


3.1.8. GPIO:

GPIO stands for General-Purpose Input/Output, a type of pin found on an integrated circuit that
does not have one specific function. There are 40 GPIO pins on the Raspberry Pi. While most of
the pins are used for sending a signal to a certain component, the function of GPIO pins is
customizable and controllable by software.

Fig 3.10 GPIO configuration


A powerful feature of the Raspberry Pi is the row of GPIO (general-purpose input/output) pins
along the top edge of the board. A 40-pin GPIO header is found on all current Raspberry Pi boards
(unpopulated on Pi Zero and Pi Zero W). Prior to the Pi 1 Model B+ (2014), boards comprised a
shorter 26-pin header.

Any of the GPIO pins can be designated (in software) as an input or output pin and used for a wide
range of purposes.

Note: the numbering of the GPIO pins is not in numerical order; GPIO pins 0 and 1 are present
on the board (physical pins 27 and 28) but are reserved for advanced use (see below).
Voltages
Two 5V pins and two 3V3 pins are present on the board, as well as a number of ground pins (0V),
which are unconfigurable. The remaining pins are all general-purpose 3V3 pins, meaning outputs
are set to 3V3 and inputs are 3V3-tolerant.
Outputs
A GPIO pin designated as an output pin can be set to high (3V3) or low (0V).
Inputs

A GPIO pin designated as an input pin can be read as high (3V3) or low (0V). This is made easier
with the use of internal pull-up or pull-down resistors. Pins GPIO2 and GPIO3 have fixed pull-up
resistors, but for other pins this can be configured in software.
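A minimal sketch of driving one output pin and reading one input pin from Python with the
RPi.GPIO library; the pin numbers (GPIO17, GPIO18) are arbitrary examples, not the pins used
in this project.

    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)   # use BCM (GPIO) numbering rather than physical pins

    GPIO.setup(18, GPIO.OUT)                            # GPIO18 as output (an LED, say)
    GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)   # GPIO17 as input with pull-up

    try:
        while True:
            # Light the LED while the input reads low (e.g. a button to ground).
            GPIO.output(18, GPIO.input(17) == GPIO.LOW)
            time.sleep(0.05)
    finally:
        GPIO.cleanup()       # release the pins on exit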

3.1.9. PMIC (POWER MANAGEMENT INTEGRATED CIRCUIT):

Fig 3.11 The Raspberry Pi's power management integrated circuit (PMIC)

A plastic-covered chip can be seen at the bottom edge of the board, just behind the middle
set of USB ports. This is the USB controller, responsible for running the four USB ports.
Next to it is an even smaller chip, the network controller, which handles the Raspberry Pi's
Ethernet network port. A final black chip, smaller than the rest, can be found a little above the
power connector at the edge of the board; this is the power management integrated circuit
(PMIC), and it handles turning the power that comes in from the micro USB port into the power
the Pi needs to run.


3.1.10. POWER PORT:


This port is used to connect the Raspberry Pi to a power source. The micro USB power port is a
common sight on smartphones, tablets, and other portable devices. While you could use a standard
mobile charger to power the Pi, for best results you should use the official Raspberry Pi power
supply.

Fig 3.12 Micro USB power port

The power supply requirements differ between Raspberry Pi models. All models require a 5.1 V
supply, but the current required generally increases with the model; the Raspberry Pi 3 requires a
2.5 A micro USB power supply.

The power requirements of the Raspberry Pi increase as you make use of the various interfaces.
The GPIO pins can safely draw 50 mA distributed across all the pins, the HDMI port uses 50 mA,
the camera module requires 250 mA, and keyboards and mice can take as little as 100 mA or over
1000 mA.

3.1.11. OPERATING SYSTEM AND STORAGE SUPPORT:

Raspbian OS is the official operating system for the Raspberry Pi; other systems such as Ubuntu,
other Linux distributions, and Windows 10 IoT Core are also supported. The OS is installed on a
MicroSD card, and the MicroSD slot is located on the bottom of the Raspberry Pi board.

Fig 3.13 SD Slot in raspberry pi

The MicroSD card slot on the Raspberry Pi 3 is located just below the Display Serial Interface
connector, on the other side of the board. Insert the MicroSD card loaded with NOOBS into the
slot and plug in the power supply.


3.2 ARDUINO UNO:

The Arduino Uno is an open-source microcontroller board based on the Microchip ATmega328P
microcontroller and developed by Arduino.cc. The board is equipped with sets of digital and analog
input/output pins that may be interfaced to various expansion boards and other circuits. The board
has 14 digital input/output pins and 6 analog input pins, and is programmable with the Arduino
IDE (Integrated Development Environment) via a Type-B USB cable. It can be powered by the
USB cable or by an external 9-volt battery, though it accepts voltages between 7 and 20 volts.

Fig 3.14 Arduino Uno

The word 'UNO' means 'one' in Italian and was chosen to mark the initial release of the Arduino
software. The Uno board is the first in a series of USB-based Arduino boards. The ATmega328
on the board comes preprogrammed with a boot loader that allows uploading new code to it without
the use of an external hardware programmer.

Arduino is an open-source computer hardware and software company, project, and user community
that designs and manufactures single-board microcontrollers and microcontroller kits for building
digital devices and interactive objects that can sense and control objects in the physical world.

Since Arduino is open source, the CAD and PCB design files are freely available. Anyone can buy
a pre-assembled original Arduino board or a cloned board from another company, and you can also
build an Arduino yourself or for sale. Although it is allowed to build and sell cloned Arduino
boards, it is not allowed to use the name Arduino and the corresponding logo. Most boards are
designed around the Atmel ATmega328.


3.2.1 MICROCONTROLLER:

The ATmega328P is a high-performance, low-power 8-bit microcontroller from Microchip based
on the AVR RISC architecture. It is the most popular of all AVR controllers, as it is used in
Arduino boards. The ATmega328 is a very popular microcontroller chip produced by Atmel; it
has 32 KB of flash memory, 1 KB of EEPROM, and 2 KB of internal SRAM.

Fig 3.15 Microcontroller


The Atmega328 is one of the microcontroller chips that are used with the popular Arduino
Duemilanove boards. The Arduino Duemilanove board comes with either 1 of 2 microcontroller
chips, the Atmega168 or the Atmega328. Of these 2, the Atmega328 is the upgraded, more
advanced chip. Unlike the Atmega168 which has 16K of flash program memory and 512 bytes of
internal SRAM, the Atmega328 has 32K of flash program memory and 2K of Internal SRAM.
Functions associated with the pins must be known in order to use the device appropriately.
ATmega-328 pins are divided into different ports which are given in detail below.
All of the AVR ports are shown in the figure given below.
AREF is an Analog reference pin for Analog to digital converter.

Fig.3.16.pin configuration of ATmega328


a) VCC is a digital voltage supply.
b) AVCC is a supply voltage pin for analog to digital converter.
c) GND denotes Ground and it has a 0V.


d) Port B consists of the pins from PB0 to PB7. This port is an 8-bit bidirectional input/output port with internal pull-up resistors.
e) Port C consists of the pins from PC0 to PC6. Pins PC0 to PC5 serve as analog inputs to the analog-to-digital converter; if the ADC is not used, they act as bidirectional input/output pins. The output buffers of port C have symmetrical drive characteristics with both high sink and source capability.
f) Port D consists of the pins from PD0 to PD7. It is also an 8-bit input/output port with internal pull-up resistors.
(Note that the ATmega328P has no Port A; the ADC inputs described for some other AVR devices correspond here to port C.)

3.2.2 ARCHITECTURE:

RISC stands for Reduced Instruction Set Computer. It is a computer instruction set design that allows a microprocessor to use fewer cycles per instruction (CPI) than a Complex Instruction Set Computer (CISC). A RISC computer has a small set of simple and general instructions, rather than a large set of complex and specialized ones.

Fig 3.17 Architecture

The main distinguishing feature of RISC is that the instruction set is optimized for a highly regular instruction pipeline flow. RISC processors approach a CPI (clocks per instruction) of one cycle. This is due to the optimization of each instruction on the CPU and a technique called pipelining.

3.2.3 FEATURES:

 It is an open-source design. An advantage of being open source is that it has a large community of people using and troubleshooting it, which makes it easy to get help when debugging projects.
 It has 14 digital pins and 6 analog pins. These pins are used to connect external hardware to your Arduino Uno board.


 It runs at a 16 MHz clock speed, which is fast enough for most applications.
 It has 32 KB of flash memory for storing the code, 1 KB of EEPROM (Electrically Erasable Programmable Read-Only Memory) and 2 KB of SRAM (Static Random Access Memory).
 It has a button to reset the program on the chip.

3.2.4 TYPES OF ARDUINO:

Arduino Model    Year Introduced    Microcontroller

Diecimila        2007               ATmega168V
LilyPad          2007               ATmega168V/ATmega328V
Nano             2008               ATmega328/168
Mini             2008               ATmega168
Pro Mini         2008               ATmega328
Duemilanove      2008               ATmega328/168
Mega             2009               ATmega1280
Fio              2010               ATmega328P
Uno              2010               ATmega328P
Ethernet         2011               ATmega328
Mega ADK         2011               ATmega2560
Leonardo         2012               ATmega32U4
Esplora          2012               ATmega32U4
Micro            2012               ATmega32U4


3.2.5 PIN OUT OF ARDUINO UNO:


The Arduino Uno pin diagram is shown below. The board comprises 14 digital I/O pins, of which 6 can be used as PWM outputs, 6 analog inputs, a 16 MHz quartz crystal resonator, a USB connection, a power jack, an ICSP header, and a reset (RST) button.

a) Power Supply
a. The Arduino can be powered through an external power supply or a USB connection. The external power supply (6 to 20 volts) mainly includes a battery or an AC-to-DC adapter. An adapter is connected by plugging a center-positive 2.1 mm plug into the board's power jack. Battery terminals can be connected to the Vin and GND pins.

b) Vin :

Fig 3.18 pin configuration of arduino

Vin is the input voltage to the Arduino board when it is using an external power supply, as opposed to the 5 volts supplied via the USB connection or another regulated power supply. Voltage can also be supplied to the board through this pin.

c) 5Volts
This pin provides the regulated 5 V used to power the microcontroller and the other components on the Arduino board. It comes from the input voltage through the on-board regulator.

d) 3.3V
A 3.3 V supply voltage is generated by the on-board regulator; the maximum current draw is 50 mA.

e) GND
GND (ground) pins

f) Memory


The ATmega328 microcontroller has 32 KB of flash memory, of which 0.5 KB is used by the bootloader; it also includes 2 KB of SRAM and 1 KB of EEPROM.

g) Input and Output


The Arduino Uno includes 14 digital pins which can be used as inputs or outputs by using the functions pinMode(), digitalRead(), and digitalWrite(). These pins operate at 5 V, and every digital pin can source or sink 20 mA and includes an internal 20 kΩ to 50 kΩ pull-up resistor (disconnected by default). The absolute maximum current on any pin is 40 mA, which must not be exceeded to avoid damaging the microcontroller. Additionally, some of the pins of an Arduino have specific functions.

 Serial: 0 (RX) and 1 (TX). Used to receive (RX) and transmit (TX) TTL serial data. These pins are connected to the corresponding pins of the ATmega8U2 USB-to-TTL serial chip.
 External interrupts: 2 and 3. These pins can be configured to trigger an interrupt on a low value, a rising or falling edge, or a change in value. See the attachInterrupt() function for details.
 PWM: 3, 5, 6, 9, 10, and 11. Provide 8-bit PWM output with the analogWrite() function.
 SPI: 10 (SS), 11 (MOSI), 12 (MISO), 13 (SCK). These pins support SPI communication using the SPI library.
 LED: 13. There is a built-in LED connected to digital pin 13. When the pin is HIGH, the LED is on; when the pin is LOW, it is off.

The Uno has 6 analog inputs, labeled A0 through A5, each of which provides 10 bits of resolution (i.e. 1024 different values). By default they measure from ground to 5 volts, though it is possible to change the upper end of their range using the AREF pin and the analogReference() function. Additionally, some pins have specialized functionality:

 TWI: A4 or SDA pin and A5 or SCL pin. Support TWI (I2C) communication using the Wire library.

There are a couple of other pins on the board:

 AREF: reference voltage for the analog inputs. Used with analogReference().
 Reset: bring this line LOW to reset the microcontroller. Typically used to add a reset button to shields which block the one on the board.
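To make the pin functions above concrete, the short sketch below is a minimal illustration (not part of the project firmware; the sensor on A0 is an assumed example, while pin 13 is the built-in LED):

// Minimal sketch illustrating digital and analog pin use on the Uno.
// The sensor on A0 is an assumption for illustration; pin 13 is the built-in LED.
const int ledPin = 13;
const int sensorPin = A0;

void setup() {
  pinMode(ledPin, OUTPUT);              // configure pin 13 as a digital output
  Serial.begin(9600);                   // open the hardware serial port
}

void loop() {
  int raw = analogRead(sensorPin);      // 0..1023 for 0..5 V by default
  float volts = raw * (5.0 / 1023.0);   // convert the 10-bit reading to volts
  digitalWrite(ledPin, volts > 2.5 ? HIGH : LOW);   // LED on above mid-scale
  Serial.println(volts);                // print the value to the serial monitor
  delay(200);
}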

3.2.6 COMMUNICATION:

The Arduino Uno has a number of facilities for communicating with a computer, another Arduino,
or other microcontrollers. The ATmega328 provides UART TTL (5V) serial communication,
which is available on digital pins 0 (RX) and 1 (TX). An ATmega16U2 on the board channels this
serial communication over USB and appears as a virtual com port to software on the computer.
The '16U2 firmware uses the standard USB COM drivers, and no external driver is needed.
However, on Windows, an .inf file is required. The Arduino software includes a serial monitor
which allows simple textual data to be sent to and from the Arduino board. The


RX and TX LEDs on the board will flash when data is being transmitted via the USB-to-serial chip
and USB connection to the computer (but not for serial communication on pins 0 and 1).

The SoftwareSerial library allows serial communication on any of the Uno's digital pins. The ATmega328 also supports I2C (TWI) and SPI communication. The Arduino software includes a Wire library to simplify use of the I2C bus; see the documentation for details. For SPI communication, use the SPI library.
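As a minimal illustration of these facilities (the choice of pins 2 and 3 for the software serial port is an assumption made for this example), the sketch below bridges the hardware serial monitor and a peripheral attached through SoftwareSerial:

// Minimal sketch: hardware serial to the PC, SoftwareSerial to a peripheral.
// Pins 2 (RX) and 3 (TX) are an assumed wiring choice for illustration.
#include <SoftwareSerial.h>

SoftwareSerial link(2, 3);    // RX, TX

void setup() {
  Serial.begin(9600);         // USB serial monitor (hardware pins 0 and 1)
  link.begin(9600);           // software serial port on pins 2 and 3
}

void loop() {
  if (link.available()) Serial.write(link.read());     // peripheral -> PC
  if (Serial.available()) link.write(Serial.read());   // PC -> peripheral
}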

3.2.7 ADVANTAGES OF ARDUINO:

 It is cheap, and there is no need to connect many cables.
 The Arduino software is compatible with all major operating systems, such as Windows, Linux, and Mac OS.
 It is widely used in real-time applications.
 The hardware, the software, and the IDE are all open source.
 It is easy to connect sensors, electronic components and motors with jumper cables.
 No long setup is needed on the board: just plug it in and the code runs.
 A variety of shields can be connected to extend the board's capabilities.
 Basic knowledge of programming is enough to code the Arduino.

3.2.8 APPLICATIONS OF ARDUINO UNO:

1. OBSTACLE AVOIDANCE ROBOT USING ARDUINO:

The main concept of this project is to design a robot that uses ultrasonic sensors to avoid obstacles. It can perform tasks with some guidance or automatically. The vehicle's intelligence is built around an Arduino board, designed with an ATmega328 microcontroller from the Atmel AVR family.

2. HOME AUTOMATION:

This project is to design a home automation system using an Arduino board with Bluetooth, controlled by an Android-OS-based smartphone. The project gives a modern solution using smartphones. Here the Bluetooth module is attached to the Arduino board at the receiver side, while on the transmitter side a Graphical User Interface (GUI) application on the smartphone sends ON/OFF commands to the receiver, where the home appliances are connected. By touching the particular GUI element, the different appliances can be turned ON or OFF using this technology.


3.3 RASPBERRY PI CAMERA:

The Raspberry Pi camera board plugs directly into the CSI connector on the Raspberry Pi. It is able to deliver a crystal-clear 5 MP image or 1080p HD video recording at 30 fps (latest version 1.3). Custom designed and manufactured by the Raspberry Pi Foundation in the UK, the Raspberry Pi camera board features a 5 MP (2592x1944 pixel) OmniVision OV5647 sensor in a fixed-focus module. The module attaches to the Raspberry Pi by way of a 15-pin ribbon cable to the dedicated 15-pin MIPI Camera Serial Interface (CSI), which was designed especially for interfacing to cameras. The CSI bus is capable of extremely high data rates, and it exclusively carries pixel data to the BCM2835 processor. The board itself is tiny, at around 25 mm x 20 mm x 9 mm, and weighs just over 3 g, making it perfect for mobile or other applications where size and weight are important. The sensor has a native resolution of 5 megapixels and a fixed-focus lens on board. In terms of still images, the camera is capable of 2592x1944 pixel images, and it also supports 1080p at 30 fps, 720p at 60 fps and 640x480p 60/90 video recording. The camera is supported in the latest version of Raspbian, the Raspberry Pi's preferred operating system.

Fig 3.19 Raspberry pi camera

The OV5647 is a 5-megapixel CMOS image sensor built on OmniVision's proprietary 1.4-micron OmniBSI backside-illumination pixel architecture. The OV5647 delivers 5-megapixel photography in addition to high-frame-rate 720p/60 and 1080p/30 high-definition (HD) video capture in an industry-standard camera module size of 8.5 x 8.5 x 5 mm, making it an ideal solution for the mainstream mobile phone market.

The 720p/60 HD video is captured in full field of view (FOV) with 2x2 binning to double the
sensitivity and improve signal-to-noise ratio (SNR). The post binning re-sampling filter helps
minimize spatial and aliasing artifacts to provide superior image quality.


OmniBSI technology offers significant performance benefits over front-side illumination technology, such as increased sensitivity per unit area, improved quantum efficiency, reduced crosstalk and reduced photo-response non-uniformity, which all contribute to significant improvements in image quality and color reproduction. Additionally, OmniVision CMOS image sensors use proprietary sensor technology to improve image quality by reducing or eliminating common lighting/electrical sources of image contamination, such as fixed-pattern noise and smearing, to produce a clean, fully stable color image.

The low power OV5647 supports a digital video parallel port or high-speed two-lane MIPI
interface, and provides full frame, windowed or binned 10-bit images in RAW RGB format. It
offers all required automatic image control functions, including automatic exposure control,
automatic white balance, automatic band filter, automatic 50/60 Hz luminance detection, and
automatic black level calibration

The Pi camera module is a portable, lightweight camera that supports the Raspberry Pi. It communicates with the Pi using the MIPI Camera Serial Interface protocol. It is normally used in image processing, machine learning, or surveillance projects. It is commonly used in surveillance drones since the camera's payload is very low.

3.3.1 PIN DESCRIPTION:

PIN 1: DGND (Ground Power ground)


PIN 2: CAM_D0_N (Output MIPI data lane0 negative output)
PIN 3: CAM_D0_P (Output MIPI data lane0 positive output)
PIN 4: DGND (Ground Power ground)
PIN 5: CAM_D1_N (Output MIPI data lane1 negative output)
PIN 6: CAM_D1_P (Output MIPI data lane1 positive output)
PIN 7: DGND (Ground Power ground)
PIN 8: CAM_C_N (Output MIPI clock negative output)
PIN 9: CAM_C_P (Output MIPI clock positive output)
PIN 10: DGND (Ground Power ground)
PIN 11: POWER_EN (Input Camera module power enable active high)
PIN 12: LED_EN (Input Reserved)
PIN 13: SCL (Input Two-Wire Serial Interface Clock)
PIN 14: SDA (Bi-directional Two-Wire Serial Interface Data I/O)
PIN 15: +3.3V POWER (3.3v Power supply)

3.3.2 FEATURES OF RASPBERRY PI CAMERA:

 It is a 5 MP OmniVision OV5647 camera module.


 It is fully compatible with both the model A and B Raspberry Pi.


 Still picture resolution: 2592*1944.


 Video: supports 1080p at 30 fps, 720p at 60 fps and 640x480 at 60/90 fps recording.
 It consists of 15 pin MIPI camera serial interface which plugs directly into the Raspberry Pi
board.
 The Size of the camera module is 20*25*9 mm.
 Weight 3g.
 Max video resolution: 1080p
 Max frame rate: 30fps
 Support FREX/ STROBE feature
 Size: 36 x 36 mm
 5MPixel sensor
 Integral IR filter

3.4 VIBRATING SENSOR:

The vibration sensor is also called a piezoelectric sensor. Piezoelectric sensors are versatile devices used for measuring various processes. They use the piezoelectric effect to measure changes in acceleration, pressure, temperature, force or strain by converting them into an electrical charge. Such sensors are even used to detect fragrances in the air by directly measuring capacitance and quality factor.

Fig3.20 Vibration sensor

VIBRATION SENSOR WORKING PRINCIPLE:

A vibration sensor operates on an optical or mechanical principle to detect the vibrations of the observed system.

The sensitivity of these sensors normally ranges from 10 mV/g to 100 mV/g, and lower and higher sensitivities are also available. The sensitivity should be selected based on the application, so it is essential to know the range of vibration amplitudes to which the sensor will be exposed during measurements.
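A minimal reading sketch is given below (the wiring is an assumption for illustration: the module's switch output on Arduino digital pin 4, taken here as active-HIGH; some modules are active-LOW instead):

// Minimal sketch polling a vibration sensor module's switch output.
// Pin 4 and the active-HIGH output are assumptions for illustration.
const int vibPin = 4;

void setup() {
  pinMode(vibPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(vibPin) == HIGH) {    // switch output asserts on vibration
    Serial.println("Vibration detected");
    delay(500);                         // crude debounce interval
  }
}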

3.4.1 SPECIFICATIONS:


The vibration detector is designed for security practice. When the vibration sensor alarm recognizes movement or vibration, it sends a signal to a control panel. It is a new type of omni-directional, high-sensitivity security vibration detector.

 Sensitivity: height adjustable
 Consistency and interchangeability: good
 Reliability and interference: accurate triggering, strong anti-interference
 Automatic reset: strong
 Signal post-processing: simple
 Output signal: switch signal
 No external vibration-analysis board: vibration analysis is performed by the internal amplifier circuit
 Detection direction: omni-directional
 Output pulse width: proportional to the vibration signal amplitude
 Operating voltage: 12 V DC (red V+, shield V-)
 Sensitivity threshold: greater than or equal to 0.2 g
 Frequency range: 0.5 Hz ~ 20 Hz
 Operating temperature range: -10 ~ 50 °C

3.5 GAS SENSOR:

A gas sensor is a device which detects the presence or concentration of gases in the atmosphere.
Based on the concentration of the gas the sensor produces a corresponding potential difference by
changing the resistance of the material inside the sensor, which can be measured as output voltage.
Based on this voltage value the type and concentration of the gas can be estimated.

Fig 3.21 GAS sensor


The type of gas the sensor could detect depends on the sensing material present inside the sensor.
Normally these sensors are available as modules with comparators as shown above. These
comparators can be set for a particular threshold value of gas concentration. When the
concentration of the gas exceeds this threshold the digital pin goes high. The analog pin can be
used to measure the concentration of the gas.


The gas sensor consists of a sensing element which comprises the following parts.

1. Gas sensing layer


2. Heater Coil
3. Electrode line
4. Tubular ceramic
5. Electrode

3.5.1 DIFFERENT TYPES OF GAS SENSORS:

Gas sensors are typically classified into various types based on the type of the sensing element it
is built with. Below is the classification of the various types of gas sensors based on the sensing
element that are generally used in various applications:

 Metal Oxide based gas Sensor.


 Optical gas Sensor.
 Electrochemical gas Sensor.
 Capacitance-based gas Sensor.
 Calorimetric gas Sensor.
 Acoustic based gas Sensor.

The gas sensor module basically consists of 4 terminals

 Vcc – Power supply


 GND – Power supply
 Digital output – This pin gives an output either in logical high or logical low (0 or 1) that
means it displays the presence of any toxic or combustible gases near the sensor.
 Analog output – This pin gives an output continuous in voltage which varies based on the
concentration of gas that is applied to the gas sensor.
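The sketch below is a minimal illustration of reading both outputs (the wiring, analog output on A0 and digital comparator output on pin 7, is an assumption for this example):

// Minimal sketch reading both outputs of a gas sensor module.
// The wiring (A0 and pin 7) is an assumption for illustration.
const int gasAnalogPin = A0;
const int gasDigitalPin = 7;

void setup() {
  pinMode(gasDigitalPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(gasAnalogPin);     // concentration-dependent value
  int alarm = digitalRead(gasDigitalPin);   // HIGH once the threshold is crossed
  Serial.print("Gas level: ");
  Serial.print(level);
  Serial.println(alarm == HIGH ? "  ALARM" : "");
  delay(500);
}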

3.5.2 APPLICATIONS OF GAS SENSOR:

 Used in industries to monitor the concentration of toxic gases.
 Used in households to detect emergency incidents.
 Used at oil rig locations to monitor the concentration of the gases that are released.
 Used in hotels to detect and deter smoking.
 Used for air quality checks in offices.
 Used in air conditioners to monitor the CO2 levels.
 Used in detecting fire.

3.6 GSM MODULE:

GSM is a mobile communication modem standard; it stands for Global System for Mobile communication (GSM). The idea behind cellular mobile communication was developed at Bell Laboratories in the 1970s. GSM is the most widely used mobile communication system in the world. GSM is an open and digital cellular

technology used for transmitting mobile voice and data services, operating in the 850 MHz, 900 MHz, 1800 MHz and 1900 MHz frequency bands.

Fig 3.22 GSM module SIM 800

The GSM system was developed as a digital system using the time division multiple access (TDMA) technique for communication purposes. GSM digitizes and compresses the data, then sends it down a channel together with two other streams of client data, each in its own particular time slot. The digital system is able to carry data rates from 64 kbps to 120 Mbps. The SIM800L is a miniature cellular module which allows GPRS transmission, sending and receiving SMS, and making and receiving voice calls.

Low cost, a small footprint and quad-band frequency support make this module a perfect solution for any project that requires long-range connectivity. After power is connected, the module boots up, searches for a cellular network and logs in automatically. An on-board LED displays the connection state (no network coverage: fast blinking; logged in: slow blinking).

3.6.1 SPECIFICATIONS

 Supply voltage: 3.8V - 4.2V


 Recommended supply voltage: 4V
 Power consumption:
o sleep mode < 2.0mA
o idle mode < 7.0mA
o GSM transmission (avg): 350 mA
o GSM transmission (peak): 2000 mA
 Module size: 25 x 23 mm
 Interface: UART (max. 2.8V) and AT commands
 SIM card socket: microSIM (bottom side)
 Supported frequencies: Quad Band (850 / 900 / 1800 / 1900 MHz)
 Antenna connector: IPX
 Status signaling: LED


 Working temperature range: -40 to +85 °C

3.6.2 MODULE PINOUT:

Fig 3.23 Pin out of SIM 800

Pin out (bottom side - left):


RING (not marked on PCB; first from top, square) - LOW state while receiving a call.
DTR - sleep mode control. Default is the HIGH state (module in sleep mode, serial communication disabled); after setting it LOW, the module will wake up.

pin out (bottom side - right):

 NET – antenna
 VCC - supply voltage
 RESET - reset
 RXD - serial communication
 TXD - serial communication
 GND – ground
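As an illustration of driving the module, the sketch below sends a single SMS using standard SIM800 AT commands (a minimal sketch under assumptions: the module is wired to pins 2 and 3, the destination number is a placeholder, and the module's RXD needs level shifting since it is not 5 V tolerant):

// Minimal sketch sending one SMS through a SIM800 module via AT commands.
// Pins 2 (RX) and 3 (TX) and the phone number are placeholders.
#include <SoftwareSerial.h>

SoftwareSerial gsm(2, 3);    // RX, TX

void setup() {
  gsm.begin(9600);
  delay(3000);                                // allow network registration
  gsm.println("AT+CMGF=1");                   // select SMS text mode
  delay(500);
  gsm.println("AT+CMGS=\"+910000000000\"");   // destination number (placeholder)
  delay(500);
  gsm.print("Alert from the vehicle");        // message body
  gsm.write(26);                              // Ctrl+Z terminates and sends the SMS
}

void loop() {}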

3.7 DC GEAR BO MOTOR:

Fig 3.24 DC gear BO motor

The DC gear BO motor is a battery-operated (BO) DC motor. A DC motor converts electrical energy into mechanical energy, and the addition of a gearhead reduces the speed while increasing the torque output. The BO series straight motor gives good torque and rpm at lower operating voltages.

A gear motor is an all-in-one combination of a motor and gearbox. The addition of a gear head to
a motor reduces the speed while increasing the torque output. The most important parameters in
regards to gear motors are speed (rpm), torque (lb-in) and efficiency (%). In order to select the


most suitable gear motor for your application you must first compute the load, speed and torque
requirements for your application. ISL Products offers a variety of Spur Gear Motors, Planetary
Gear Motors and Worm Gear Motors to meet all application requirements.
Most of our DC motors can be complemented with one of our unique gearheads, providing you with a highly efficient gear motor solution.

3.7.1 DC Motor Performance Curve:

Fig 3.25 performance curve


Speed/Revolutions (N) - (unit: rpm) indicated as a straight line that shows the relationship between the gear motor's torque and speed. This line shifts laterally depending on voltage increase or decrease.

Current (I) - (unit: A) indicated by a straight line, from no load to full motor lock. This shows the relationship between current and torque.

Torque (T) - (unit: gf-cm) the load borne by the motor shaft, represented on the X-axis.

Efficiency (η) - (unit: %) calculated from the input and output values, represented by the dashed line. To maximize the gear motor's potential, it should be used near its peak efficiency.

Output (P) - (unit: W) the amount of mechanical energy the gear motor puts out.
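As a worked relation (these are standard gear-motor formulas, stated here for convenience rather than read off the curve), the output power and efficiency follow from the quantities defined above, with the torque converted to SI units (1 gf-cm is approximately 9.81 x 10^-5 N-m):

\[ P \;=\; \frac{2\pi N}{60}\,T, \qquad \eta \;=\; \frac{P}{V\,I}\times 100\,\% \]

For example, at N = 100 rpm and T = 4 kg-cm (about 0.392 N-m), P is roughly (2π x 100 / 60) x 0.392, i.e. about 4.1 W.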

3.7.2 FEATURES:

 It has the capability to absorb shock and vibration as a result of elastic compliance.
 Its operating voltage ranges from 3 to 12 V DC.
 It is lightweight.
 RPM: 100
 Output torque: 4 kg-cm

3.8 BATTERY:

Lithium-ion battery or Li-ion battery (abbreviated as LIB) is a type of rechargeable battery.


Lithium-ion batteries are commonly used for portable electronics and electric vehicles and are


growing in popularity for military and aerospace applications. A prototype Li-ion battery was
developed by Akira Yoshino in 1985, based on earlier research by John Goodenough, Stanley
Whittingham, Rachid Yazami and Koichi Mizushima during the 1970s–1980s, and then a
commercial Li-ion battery was developed by a Sony and Asahi Kasei team led by Yoshio Nishi in
1991.

In the batteries, lithium ions move from the negative electrode through an electrolyte to the positive
electrode during discharge, and back when charging. Li-ion batteries use an intercalated lithium
compound as the material at the positive electrode and typically graphite at the negative electrode.
The batteries have a high energy density, no memory effect (other than LFP cells) and low self-discharge. They can, however, be a safety hazard since they contain a flammable electrolyte, and if
damaged or incorrectly charged can lead to explosions and fires. Samsung was forced to recall
Galaxy Note 7 handsets following lithium-ion fires, and there have been several incidents
involving batteries on Boeing 787s.

Chemistry, performance, cost and safety characteristics vary across LIB types. Handheld
electronics mostly use lithium polymer batteries (with a polymer gel as electrolyte) with lithium
cobalt oxide (LiCoO2) as cathode material, which offers high energy density, but presents safety
risks, especially when damaged. Lithium iron phosphate (LiFePO4), lithium ion manganese oxide
battery (LiMn2O4, Li2MnO3, or LMO), and lithium nickel manganese cobalt oxide
(LiNiMnCoO2 or NMC) offer lower energy density but longer lives and less likelihood of fire or
explosion. Such batteries are widely used for electric tools, medical equipment, and other roles.
NMC and its derivatives are widely used in electric vehicles.

Research areas for lithium-ion batteries include extending lifetime, increasing energy density,
improving safety, reducing cost, and increasing charging speed, among others. Research has been
under way in the area of non-flammable electrolytes as a pathway to increased safety based on the
flammability and volatility of the organic solvents used in the typical electrolyte. Strategies include
aqueous lithium-ion batteries, ceramic solid electrolytes, polymer electrolytes, ionic liquids, and
heavily fluorinated systems.

Li-ion batteries provide lightweight, high energy density power sources for a variety of devices.
To power larger devices, such as electric cars, connecting many small batteries in a parallel circuit
is more effective and more efficient than connecting a single large battery. Such devices include:

Portable devices:
These include mobile phones and smartphones, laptops and tablets, digital cameras and
camcorders, electronic cigarettes, handheld game consoles and torches (flashlights).

Power tools:
Li-ion batteries are used in tools such as cordless drills, sanders, saws, and a variety of garden equipment including string trimmers and hedge trimmers.


Electric vehicles:
Electric vehicle batteries are used in electric cars, hybrid vehicles, electric motorcycles and
scooters, electric bicycles, personal transporters and advanced electric wheelchairs. Also radio-
controlled models, model aircraft, aircraft, and the Mars Curiosity rover.

Fig 3.26 Lithium -ion battery

Li-ion batteries are used in telecommunications applications. Secondary non-aqueous lithium batteries provide reliable backup power to load equipment located in a network environment of a
typical telecommunications service provider. Li-ion batteries compliant with specific technical
criteria are recommended for deployment in the Outside Plant (OSP) at locations such as
Controlled Environmental Vaults (CEVs), Electronic Equipment Enclosures (EEEs), and huts, and
in uncontrolled structures such as cabinets. In such applications, li-ion battery users require
detailed, battery-specific hazardous material information, plus appropriate fire-fighting
procedures, to meet regulatory requirements and to protect employees and surrounding equipment.

3.9 BUCK CONVERTER (LM2596):

The LM2596 series of regulators are monolithic integrated circuits that provide all the active
functions for a step-down (buck) switching regulator, capable of driving a 3-A load with excellent
line and load regulation

Fig 3.27 buck converter


These devices are available in fixed output voltages of 3.3 V, 5 V, 12 V, and an adjustable output
version.


Requiring a minimum number of external components, these regulators are simple to use and
include internal frequency compensation, and a fixed frequency oscillator.

The LM2596 series operates at a switching frequency of 150 kHz, thus allowing smaller sized filter
components than what would be required with lower frequency switching regulators. Available in
a standard 5-pin TO-220 package with several different lead bend options, and a 5-pin TO-263 surface-mount package.

The newer LMR33630 offers reduced BOM cost, higher efficiency, and an 85% reduction in solution size, among many other features.

A standard series of inductors are available from several different manufacturers optimized for use
with the LM2596 series. This feature greatly simplifies the design of switch-mode power supplies.
Other features include a ±4% tolerance on output voltage under specified input voltage and output
load conditions, and ±15% on the oscillator frequency. External shutdown is included, featuring
typically 80 μA standby current. Self-protection features include a two stage frequency reducing
current limit for the output switch and an over temperature shutdown for complete protection under
fault conditions.

3.9.1 FEATURES:

a. New product available: LMR33630 36-V, 3-A, 400 kHz synchronous converter
b. 3.3-V, 5-V, 12-V, and adjustable output versions
c. Adjustable version output voltage range: 1.2-V to 37-V ±4% maximum over line and load
conditions
d. Available in TO-220 and TO-263 packages
e. 3-A output load current ,Input voltage range up to 40 V
f. Requires only four external components
g. Excellent line and load regulation specifications
h. 150-kHz Fixed-frequency internal oscillator
i. TTL shutdown capability
j. Low power standby mode, IQ, typically 80 μA
k. High efficiency • Uses readily available standard inductors
l. Thermal shutdown and current-limit protection

3.9.2 SPECIFICATIONS:

a. Maximum supply voltage (VIN): 45 V
b. SD/SS pin input voltage: 6 V

c. Delay pin voltage: 1.5 V
d. Flag pin voltage: -0.3 V to 45 V
e. Feedback pin voltage: -0.3 V to 25 V
f. Output voltage to ground (steady-state): -1 V
g. Power dissipation: internally limited
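For the adjustable output version, the output voltage is programmed by an external resistor divider on the feedback pin. A hedged sketch of the standard datasheet relation (with R1 from the feedback pin to ground and R2 from the output to the feedback pin):

\[ V_{OUT} \;=\; V_{REF}\left(1 + \frac{R_2}{R_1}\right), \qquad V_{REF} \approx 1.23\ \text{V} \]

For example, R1 = 1 kΩ and R2 = 3.07 kΩ give an output of about 5 V.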

3.10 BOOST CONVERTER (MT3608):


A boost converter or step-up converter is a DC-to-DC power converter that steps up voltage from
its input to its output. It is a class of switched-mode power supply (SMPS) containing at least two
semiconductors (a diode and a transistor) and at least one energy storage element: a
capacitor, inductor, or the two in combination. To reduce voltage ripple, filters made of capacitors
(sometimes in combination with inductors) are normally added to such a converter's output (load-
side filter) and input (supply-side filter).

Fig 3.28 boost converter

3.10.1. DESCRIPTION:

The MT3608 is a constant frequency, 6-pin SOT23 current mode step-up converter intended for
small, low power applications. The MT3608 switches at 1.2MHz and allows the use of tiny, low
cost capacitors and inductors 2mm or less in height. Internal soft-start results in small inrush
current and extends battery life. The MT3608 features automatic shifting to pulse frequency
modulation mode at light loads. The MT3608 includes under-voltage lockout, current limiting, and
thermal overload protection to prevent damage in the event of an output overload. The MT3608 is
available in a small 6-pin SOT-23 package.

3.10.2 FEATURES

• Integrated 80mΩ Power MOSFET


• 2V to 24V Input Voltage
• 1.2MHz Fixed Switching Frequency
• Internal 4A Switch Current Limit
• Adjustable Output Voltage
• Internal Compensation
• Up to 28V Output Voltage
• Up to 97% Efficiency.
• Available in a 6-Pin SOT23-6 Package.


3.10.3 OPERATION

The MT3608 uses a fixed frequency, peak current mode boost regulator architecture to regulate
voltage at the feedback pin. The operation of the MT3608 can be understood by referring to the
block diagram of Figure 3. At the start of each oscillator cycle the MOSFET is turned on through
the control circuitry. To prevent sub-harmonic oscillations at duty cycles greater than 50 percent,
a stabilizing ramp is added to the output of the current sense amplifier and the result is fed into the
negative input of the PWM comparator. When this voltage equals the output voltage of the error
amplifier the power MOSFET is turned off. The voltage at the output of the error amplifier is an
amplified version of the difference between the 0.6V band gap reference voltage and the feedback
voltage. In this way the peak current level keeps the output in regulation. If the feedback voltage
starts to drop, the output of the error amplifier increases. This results in more current flowing through the power MOSFET, thus increasing the power delivered to the output. The MT3608 has
internal soft start to limit the amount of input current at startup and to also limit the amount of
overshoot on the output.
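The 0.6 V reference above also fixes the output voltage through the external feedback divider. A hedged sketch of the usual datasheet relation (with R1 from the output to the FB pin and R2 from FB to ground):

\[ V_{OUT} \;=\; V_{REF}\left(1 + \frac{R_1}{R_2}\right), \qquad V_{REF} = 0.6\ \text{V} \]

For example, R1 = 73.3 kΩ with R2 = 10 kΩ gives an output of about 5 V.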

Input voltage: 2 to 24 V
Max output current: 2 A
Adjustment: 25-turn trimpot
Efficiency: up to 93%
Switching frequency: 1.2 MHz

3.10.4 APPLICATIONS

• Battery-Powered Equipment
• Set-Top Boxes
• LCD Bias Supply
• DSL and Cable Modems and Routers
• Networking cards powered from PCI or PCI express slots

3.11 MOTOR DRIVER (L298N):

At the heart of the module is the big black chip with the chunky heat sink: the L298N.
The L298N is a dual-channel H-Bridge motor driver capable of driving a pair of DC motors. That
means it can individually drive up to two motors making it ideal for building two-wheel robot
platforms.


3.11.1 POWER SUPPLY:

L298N Motor Driver Module - 5V Jumper, Power Supply Pins & Regulator
The L298N motor driver module is powered through 3-pin 3.5mm-pitch screw terminals. It consists of pins for the motor power supply (Vs), ground and the 5V logic power supply (Vss). The L298N motor driver IC actually has two input power pins, 'Vss' and 'Vs'. From the Vs pin the H-Bridge gets its power for driving the motors, which can be 5 to 35V. Vss is used for driving the logic circuitry, which can be 5 to 7V. Both sink to a common ground named 'GND'.

3.11.2 L298N IC:

The L298 is an integrated monolithic circuit in 15-lead Multiwatt and PowerSO20 packages. It is a high-voltage, high-current dual full-bridge driver designed to accept standard TTL logic levels and drive inductive loads such as relays, solenoids, DC and stepping motors. Two enable inputs
are provided to enable or disable the device independently of the input signals. The emitters of the
lower transistors of each bridge are connected together and the corresponding external terminal
can be used for the connecting of an external sensing resistor. An additional supply input is
provided so that the logic works at a lower voltage.

Fig 3.29 L298N IC

3.11.3 L298N PIN OUT:

Fig 3.30 pin out

 VCC pin supplies power for the motor. It can be anywhere between 5 to 35 V. Remember, if the 5V-EN jumper is in place, you need to supply about 2 V more than the motor's actual voltage requirement in order to get maximum speed out of your motor.


 GND is a common ground pin

 5V pin supplies power for the switching logic circuitry inside L298N IC. If the 5V-EN
jumper is in place, this pin acts as an output and can be used to power up your Arduino. If
the 5V-EN jumper is removed, you need to connect it to the 5V pin on Arduino.

 ENA pins are used to control speed of Motor A. Pulling this pin HIGH(Keeping the jumper
in place) will make the Motor A spin, pulling it LOW will make the motor stop. Removing
the jumper and connecting this pin to PWM input will let us control the speed of Motor A.

 IN1 & IN2 pins are used to control spinning direction of Motor A. When one of them is
HIGH and other is LOW, the Motor A will spin. If both the inputs are either HIGH or LOW
the Motor A will stop.

 IN3 & IN4 pins are used to control spinning direction of Motor B. When one of them is
HIGH and other is LOW, the Motor B will spin. If both the inputs are either HIGH or LOW
the Motor B will stop.

 OUT1 & OUT2 pins are connected to Motor A; OUT3 & OUT4 are connected to Motor B.

3.11.4 OUTPUT PINS:

Fig 3.31 output pins of L298N

The L298N motor driver's output channels for motors A and B are broken out to the edge of the module with two 3.5mm-pitch screw terminals. You can connect two DC motors having voltages between 5 and 35 V to these terminals. Each channel on the module can deliver up to 2 A to the DC motor. However, the amount of current supplied to the motor depends on the system's power supply.

3.11.5 CONTROL PINS:

For each of the L298N‘s channels, there are two types of control pins which allow us to control
speed and spinning direction of the DC motors at the same time viz. Direction control pins & Speed
control pins.


Fig 3.32 Direction Control Pins

L298N Motor Driver Module - Spinning Direction Control Pins


Using the direction control pins, we can control whether the motor spins forward or backward.
These pins actually control the switches of the H-Bridge circuit inside L298N IC.

The module has two direction control pins for each channel. The IN1 and IN2 pins control the
spinning direction of the motor A while IN3 and IN4 control motor B.

The speed control pins, ENA and ENB, enable the motors: pulling them HIGH makes the motors spin, and pulling them LOW makes them stop. By driving them with Pulse Width Modulation (PWM) we can actually control the speed of the motors.

The module usually comes with a jumper on these pins. When this jumper is in place, the motor is enabled and spins at maximum speed. If you want to control the speed of the motors programmatically, you need to remove the jumpers and connect these pins to PWM-enabled pins on the Arduino, as in the sketch below.
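A minimal control sketch under assumed wiring (ENA on PWM pin 9 with its jumper removed, IN1 on pin 8, IN2 on pin 7) looks like this:

// Minimal sketch: Motor A forward at ~60% duty, then backward, then stop.
// The pin assignments are an assumed wiring choice for illustration.
const int ENA = 9;    // PWM-capable pin for speed control
const int IN1 = 8;    // direction input 1
const int IN2 = 7;    // direction input 2

void setup() {
  pinMode(ENA, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
}

void loop() {
  digitalWrite(IN1, HIGH);    // IN1 HIGH, IN2 LOW -> spin forward
  digitalWrite(IN2, LOW);
  analogWrite(ENA, 150);      // ~60% of 255 sets the speed
  delay(2000);

  digitalWrite(IN1, LOW);     // reversed polarity -> spin backward
  digitalWrite(IN2, HIGH);
  delay(2000);

  analogWrite(ENA, 0);        // 0% duty -> motor stops
  delay(1000);
}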

3.12. I2C MODULE:

I2C is a serial protocol for two-wire interface to connect low-speed devices like microcontrollers,
EEPROMs, A/D and D/A converters, I/O interfaces and other similar peripherals in embedded
systems. It was invented by Philips and now it is used by almost all major IC manufacturers.
Each I2C slave device needs a unique address; addresses must still be obtained from NXP (formerly Philips Semiconductors).

Fig 3.33 I2C module


3.12.1 I2C Interface:

I2C uses only two wires: SCL (serial clock) and SDA (serial data). Both need to be pulled up with
a resistor to +Vdd. There are also I2C level shifters which can be used to connect to two I2C buses
with different voltages.
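A common first test of a bus wired this way is an address scan. The minimal sketch below (illustrative only, using the Arduino Wire library; on the Uno, A4 is SDA and A5 is SCL) probes every 7-bit address and reports which devices acknowledge:

// Minimal I2C bus scanner using the Wire library.
#include <Wire.h>

void setup() {
  Wire.begin();
  Serial.begin(9600);
  for (byte addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {    // 0 means a device ACKed this address
      Serial.print("Device found at 0x");
      Serial.println(addr, HEX);
    }
  }
}

void loop() {}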


3.12.2 APPLICATION:

 Describing connectable devices via small ROM configuration tables to enable "plug and play" operation, such as Serial Presence Detect (SPD) EEPROMs on dual in-line memory modules (DIMMs) and Extended Display Identification Data (EDID) for monitors via VGA, DVI and HDMI connectors.
 Accessing real-time clocks and NVRAM chips that keep user settings.
 Accessing low-speed DACs and ADCs.
 Changing contrast, hue, and color balance settings in monitors (via Display Data Channel).

3.13 LCD (Liquid Crystal Display):

LCD stands for Liquid Crystal Display. LCD is finding wide spread use replacing LEDs
(seven segment LEDs or other multi segment LEDs) because of the following reasons:
1. The declining prices of LCDs.
2. The ability to display numbers, characters and graphics. This is in contrast to LEDs,
which are limited to numbers and a few characters.
3. Ease of programming for characters and graphics.
These components are 'specialized' for use with microcontrollers, which means that they cannot be activated by standard IC circuits. They are used for writing different messages on a miniature LCD.

The model described here is, owing to its low price and wide capabilities, the one most frequently used in practice. It is based on the HD44780 controller (Hitachi) and can display messages in two lines of 16 characters each. It displays all the letters of the alphabet, Greek letters, punctuation marks, mathematical symbols and so on. In addition, it is possible to display symbols that the user makes up on their own. Automatic message shifting on the display (shift left and right), a cursor, backlighting and similar features are considered useful characteristics.

3.13.1 FEATURES:
 Interface with either 4-bit or 8-bit microprocessor.
 Display data RAM: 80 x 8 bits (80 characters).
 Character generator ROM: 5 x 7 dot-matrix character patterns.
 Character generator RAM: 8 user-defined 5 x 8 dot-matrix patterns.
 Display data RAM and character generator RAM may be accessed by the microprocessor.
 Numerous instructions: Clear Display, Cursor Home, Display ON/OFF, Cursor ON/OFF, Blink Character, Cursor Shift, Display Shift.


 Built-in reset circuit is triggered at power ON.

3.13.2 SHAPES AND SIZES:

Even limited to character-based modules, there is still a wide variety of shapes and sizes available. Line lengths of 8, 16, 20, 24, 32 and 40 characters are all standard, in one-, two- and four-line versions. Several different LC technologies exist. 'Supertwist' types, for example, offer improved contrast and viewing angle over the older 'twisted nematic' types. Some modules are available with backlighting, so that they can be viewed in dimly-lit conditions. The backlighting may be either 'electro-luminescent', requiring a high-voltage inverter circuit, or simple LED illumination.

Fig: 3.34 LCD Displays

3.13.3 PIN DESCRIPTION:

Pin Symbol I/O Description


1 Vss - Ground
2 Vcc - +5v power supply
3 VEE - Power supply to control contrast
4 RS I RS=0 selects command register
RS=1 selects data register
5 R/W I R/w=0 for write, R/w=1 for read
6 E I/O Enable
7 DB0 I/O The 8-bit data bus
8 DB1 I/O The 8-bit data bus
9 DB2 I/O The 8-bit data bus
10 DB3 I/O The 8-bit data bus
11 DB4 I/O The 8-bit data bus
12 DB5 I/O The 8-bit data bus
13 DB6 I/O The 8-bit data bus
14 DB7 I/O The 8 bit data bus

3.13.4 LCD SCREEN:

LCD screen consists of two lines with 16 characters each. Each character consists of 5x7 dot
matrix. Contrast on display depends on the power supply voltage and whether messages are


displayed in one or two lines. For that reason, variable voltage 0-Vdd is applied on pin marked as
Vee. Trimmer potentiometer is usually used for that purpose. Some versions of displays have built
in backlight (blue or green diodes). When used during operation, a resistor for current limitation should be used (as with any LED).

3.13.5 LCD CONNECTION:

Depending on how many lines are used for the connection to the microcontroller, there are 8-bit and 4-bit LCD modes. The appropriate mode is determined at the beginning of the process, in a phase called 'initialization'. In the first case, the data are transferred through outputs D0-D7 as has already been explained. In the case of 4-bit LCD mode, in order to save valuable I/O pins of the microcontroller, only the 4 higher bits (D4-D7) are used for communication, while the others may be left unconnected. Consequently, each piece of data is sent to the LCD in two steps: the four higher bits are sent first (those that would normally be sent through lines D4-D7), and the four lower bits are sent afterwards. With the help of initialization, the LCD will correctly connect and interpret each piece of data received. Besides, given that data are rarely read from the LCD (data are mainly transferred from the microcontroller to the LCD), one more I/O pin may be saved by simply connecting the R/W pin to ground. Such saving has its price.
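With the standard Arduino LiquidCrystal library, a 4-bit connection of this kind reduces to a few lines. The sketch below is a minimal illustration (the pin assignments are an assumed wiring, with R/W tied to ground as described above):

// Minimal 4-bit HD44780 sketch using the LiquidCrystal library.
// Assumed wiring: RS=12, E=11, D4=5, D5=4, D6=3, D7=2; R/W to ground.
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);    // RS, E, D4, D5, D6, D7

void setup() {
  lcd.begin(16, 2);              // 16 characters x 2 lines
  lcd.print("AUTONOMOUS");       // write to the first line
  lcd.setCursor(0, 1);           // column 0 of the second line
  lcd.print("VEHICLE");
}

void loop() {}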

3.13.6 LCD INITIALIZATION:

Once the power supply is turned on, the LCD is automatically cleared. This process lasts approximately 15 ms. After that, the display is ready to operate. The mode of operation is set by default. This means that:

1. Display is cleared
2. Mode
DL = 1 Communication through 8-bit interface
N = 0 Messages are displayed in one line
F = 0 Character font 5 x 8 dots
3. Display/Cursor on/off
D = 0 Display off
U = 0 Cursor off

B = 0 Cursor blink off


4. Character entry
ID = 1 Addresses on display are automatically incremented by 1
S = 0 Display shift off
Which initialization algorithm is performed depends on whether the connection to the microcontroller uses the 4-bit or the 8-bit interface. All that is left to do after that is to give basic commands and, of course, to display messages.


3.14. COOLING FAN:

Electronic cooling fans move air to cool electronic devices such as computers and appliances.
They are also used in telecommunications, military, and general industrial applications as well as
in heating, ventilation, and air conditioning (HVAC) systems. These are used to cool the CPU
(central processing unit) heat sink. Effective cooling of a concentrated heat source such as a
large-scale integrated circuit requires a heat sink, which may be cooled by a fan; a fan alone will not prevent overheating of the small chip.

Fig 3.35 cooling fan

While in earlier personal computers it was possible to cool most components using natural
convection (passive cooling), many modern components require more effective active cooling. To
cool these components, fans are used to move heated air away from the components and draw
cooler air over them. Fans attached to components are usually used in combination with a heat sink
to increase the area of heated surface in contact with the air, thereby improving the efficiency of
cooling. Fan control is not always an automatic process. A computer's BIOS (basic input/output
system) can control the speed of the built-in fan system for the computer. A user can even
supplement this function with additional cooling components or connect a manual fan controller
with knobs that set fans to different speeds.

In the IBM PC compatible market, the computer's power supply unit (PSU) almost always uses an
exhaust fan to expel warm air from the PSU. Active cooling on CPUs started to appear on the Intel
80486, and by 1997 was standard on all desktop processors. Chassis or case fans, usually one
exhaust fan to expel heated air from the rear and optionally an intake fan to draw cooler air in
through the front, became common with the arrival of the Pentium 4 in late 2000.

3.15 SD CARD (SECURE DIGITAL CARD):

Secure Digital, officially abbreviated as SD, is a proprietary non-volatile memory card format
developed by the SD Card Association (SDA) for use in portable devices.

The standard was introduced in August 1999 by joint efforts between SanDisk, Panasonic
(Matsushita Electric) and Toshiba as an improvement over MultiMediaCards (MMC), and has


become the industry standard. The three companies formed SD-3C, LLC, a company that licenses
and enforces intellectual property rights associated with SD memory cards and SD host and
ancillary products.

It was designed to compete with the Memory Stick, a DRM product that Sony had released the
year before. Developers predicted that DRM would induce wide use by music suppliers concerned
about piracy.

The trademarked "SD" logo was originally developed for the Super Density Disc, which was the
unsuccessful Toshiba entry in the DVD format war. For this reason the D within the logo resembles
an optical disc.

At the 2000 Consumer Electronics Show (CES) trade show, the three companies announced the
creation of the SD Association (SDA) to promote SD cards. The SD Association, headquartered in
San Ramon, California, United States, started with about 30 companies and today consists of about
1,000 product manufacturers that make interoperable memory cards and devices. Early samples of
the SD Card became available in the first quarter of 2000, with production quantities of 32 and 64
MB cards available three months later.

3.15.1 2003: Mini cards:

The miniSD form was introduced at the March 2003 CeBIT by SanDisk Corporation, which announced and demonstrated it. The SDA adopted the miniSD card in 2003 as a small form-factor extension to the SD card standard. While the new cards were designed especially for mobile phones, they are
usually packaged with a miniSD adapter that provides compatibility with a standard SD memory
card slot.

3.15.2 2004–2005: Micro cards:

Fig 3.36 micro sd card

The microSD removable miniaturized Secure Digital flash memory cards were originally named T-Flash or TF, abbreviations of TransFlash. TransFlash and microSD cards are functionally identical, allowing either to operate in devices made for the other. SanDisk had conceived microSD when its chief technology officer and the chief technology officer of Motorola concluded that current memory cards were too large for mobile phones. The card was originally called T-Flash, but just before product launch, T-Mobile sent a cease-and-desist letter to SanDisk claiming that T-Mobile owned the trademark on T-(anything),


and the name was changed to TransFlash. At CTIA Wireless 2005, the SDA announced the small microSD form factor along with SDHC secure digital high capacity formatting in excess of 2 GB (2000 MB) with a minimum sustained read and write speed of 17.6 Mbit/s. SanDisk induced the SDA to administer the microSD standard. The SDA approved the final microSD specification on July 13, 2005. Initially, microSD cards were available in capacities of 32, 64, and 128 MB.

3.15.3 2006–2008: SDHC and SDIO:

The SDHC format, announced in January 2006, brought improvements such as 32 GB storage capacity and mandatory support for FAT32 filesystems. In April, the SDA released a detailed specification for the non-security-related parts of the SD memory card standard and for the Secure Digital Input Output (SDIO) cards and the standard SD host controller.

In September 2006, SanDisk announced the 4 GB miniSDHC. Like the SD and SDHC, the
miniSDHC card has the same form factor as the older miniSD card but the HC card requires HC
support built into the host device. Devices that support miniSDHC work with miniSD and
miniSDHC, but devices without specific support for miniSDHC work only with the older miniSD
card. Since 2008, miniSD cards are no longer produced.

3.15.4 2009–present: SDXC

In January 2009, the SDA announced the SDXC family, which supports cards up to 2 TB and speeds up to 300 MB/s. It features mandatory support for the exFAT filesystem. SDXC was announced at the Consumer Electronics Show (CES) 2009 (January 7-10). At the same show, SanDisk and Sony also announced a comparable Memory Stick XC variant with the same 2 TB maximum as SDXC, and Panasonic announced plans to produce 64 GB SDXC cards. On March 6, Pretec introduced the first SDXC card, a 32 GB card with a read/write speed of 400 Mbit/s. But only early in 2010 did compatible host devices come onto the market, including Sony's Handycam HDR-CX55V camcorder, Canon's EOS 550D (also known as Rebel T2i) digital SLR camera, a USB card reader from Panasonic, and an integrated SDXC card reader from JMicron. The earliest laptops to integrate SDXC card readers relied on a USB 2.0 bus, which does not have the bandwidth to support SDXC at full speed.

In early 2010, commercial SDXC cards appeared from Toshiba (64 GB), Panasonic (64 GB and 48 GB), and SanDisk (64 GB). In early 2011, Centon Electronics, Inc. (64 GB and 128 GB) and Lexar (128 GB) began shipping SDXC cards rated at Speed Class 10. Pretec offered cards from 8


GB to 128 GB rated at Speed Class 16. In September 2011, SanDisk released a 64 GB microSDXC
card. Kingmax released a comparable product in 2011.

3.15.5 Class:

The SD Association defines standard speed classes for SDHC/SDXC cards indicating minimum
performance (minimum serial data writing speed). Both read and write speeds must exceed the
specified value. The specification defines these classes in terms of performance curves that
translate into the following minimum read-write performance levels on an empty card and
suitability for different applications.
The SD Association defines three types of Speed Class ratings: the original Speed Class, UHS
Speed Class, and Video Speed Class.

3.15.6 Speed Class:

Speed Class ratings 2, 4, and 6 assert that the card supports the respective number of megabytes
per second as a minimum sustained write speed for a card in a fragmented state. Class 10 asserts
that the card supports 10 MB/s as a minimum non-fragmented sequential write speed and uses a
High Speed bus mode. The host device can read a card's speed class and warn the user if the card
reports a speed class that falls below an application's minimum need. The graphical symbol for the
speed class has a number encircled with 'C' (C2, C4, C6, and C10).

3.15.7 UHS Speed Class:

UHS-I and UHS-II cards can use UHS Speed Class rating with two possible grades: class 1 for
minimum read/write performance of at least 10 MB/s ('U1' symbol featuring number 1 inside 'U')
and class 3 for minimum write performance of 30 MB/s ('U3' symbol featuring 3 inside 'U'),
targeted at recording 4K video. Before November 2013, the rating was branded UHS Speed Grade
and contained grades 0 (no symbol) and 1 ('U1' symbol). Manufacturers can also display standard
speed class symbols (C2, C4, C6, and C10) alongside, or in place of UHS speed class.
UHS memory cards work best with UHS host devices. The combination lets the user record HD
resolution videos with tapeless camcorders while performing other functions. It is also suitable for
real-time broadcasts and capturing large HD videos.

3.15.8 Video Speed Class:

Video Speed Class defines a set of requirements for UHS cards to match the modern MLC NAND
flash memory and supports progressive 4K and 8K video with minimum sequential writing speeds
of 6-90 MB/s. The graphical symbols use 'V' followed by a number designating write speed (V6,
V10, V30, V60, and V90).

3.15.9 ADVANTAGES:


 Memory cards are reliable because they have no moving parts (unlike a hard drive), they can be labelled to reflect their contents, and they are not affected by magnetic fields (unlike tape).

 Memory cards use non-volatile memory, which keeps the data on the card stable: the data are not threatened by loss of power, and the card does not need to be refreshed. As solid-state storage, they are free of mechanical problems or damage, they are small, light and compact with high storage capacity, and they require little energy.

 Memory cards are very portable: they can be used in small, lightweight, low-power devices, they produce no noise in operation, and they allow more immediate access.

 Memory cards come in all sorts of sizes; 128 GB SD cards are now common, they offer relatively large storage space, and they can easily be inserted into and removed from the memory card slots of different devices.

 Memory cards are used in various devices such as cameras, computers and mobile phones, they are easy to keep track of, and larger cards can be used cost-effectively.

 Data backup is an important task: if a company fails to preserve its information, it may lose work, money, clients and customers, so it is very important to keep backups of data to avoid losing something crucial.

 Memory cards come in different sizes and formats, they store digital information on a device with no moving parts, and they can store information from a computer or an external device.

3.16 GPS MODULE:

3.16.1 INTRODUCTION:

The GPS QUESTAR TTL is a compact all-in-one GPS module solution intended for a broad range
of Original Equipment Manufacturer (OEM) products, where fast and easy system integration and
minimal development risk are required. The receiver continuously tracks all satellites in view and
provides accurate satellite positioning data. The GPS QUESTAR TTL is optimized for applications
requiring good performance, low cost, and maximum flexibility; it is suitable for a wide range of
OEM configurations including handhelds, sensors, asset tracking, PDA-centric personal navigation
systems, and vehicle navigation products. Its 56 parallel channels and 4100 search bins provide fast
satellite signal acquisition and short startup time. An acquisition sensitivity of –140 dBm and a
tracking sensitivity of –162 dBm offer good navigation performance even in urban canyons with
limited sky view. Satellite-based augmentation systems, such as WAAS and EGNOS, are supported
to yield improved accuracy. A TTL-level serial interface is provided on the interface connector. A
supply voltage of 3.8 V~5.0 V is supported.


3.16.2 PIN CONFIGURATION:

1. G: power ground
2. R: serial port input; connect to the Arduino or USB-to-serial TXD
3. T: serial port output; connect to the Arduino or USB-to-serial RXD
4. V: 3.3 V to 5 V power supply

Fig 3.37 GPS pin out

3.16.3 TECHNICAL SPECIFICATIONS:

• Industry-standard 25 × 25 × 4 mm high-sensitivity GPS antenna
• UART / TTL, 3.3 V
• KDS 0.5 ppm high-precision TCXO
• Built-in RTC crystal and picofarad capacitors for faster hot start
• Built-in EEPROM with a rich set of free configuration parameters
• 1 Hz–5 Hz positioning update rate
• Support for AssistNow Online and AssistNow Offline A-GPS services
• GPS, GALILEO, SBAS (WAAS, EGNOS, MSAS, GAGAN) hybrid engine
• Power supply: 3.3 V to 5 V

3.16.4 FEATURES:
• Model: QUESTAR
• Based on u-blox chip: UBX-G6010-ST
• C/A code 1.023 MHz code stream
• Receive band: L1 [1575.42 MHz]
• Tracking channels: 50
• Supports DGPS [WAAS, EGNOS and MSAS]
• Positioning performance: 2D plane 5 m [average]; 3.5 m [average] with DGPS assistance
• Drift: <0.02 m/s
• Timing accuracy: 1 µs
• Reference coordinate system: WGS-84
• Maximum altitude: 18,000 m


• Maximum speed: 500 m/s
• Acceleration: <4 g

3.16.5 ADVANTAGES:

• Data rate: 9600 bps (default) [optional: 1200, 2400, 4800, 19200, 38400, 57600, 115200,
230400, 460800, 921600]
• Output sentences: NMEA 0183 V3.0 (GGA, GSA, GSV, RMC, VTG, GLL); the protocol data
output can be configured as required (see the example sketch after this list).
• Data refresh rate: 1 Hz–5 Hz.
• PPS indicator: the LED stays steady before a position fix is obtained and flashes once
positioning is achieved.
• AGPS: supports an independent auxiliary positioning system.
• Enable control: supports an external IO trigger to switch the state of the control module.
• Satellite quality control: a rich set of satellite quality-control software settings.
• Scenarios: walking mode, car mode, static mode, portable mode and airborne mode, with
2D & 3D positioning freely selectable by the user.
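
As a quick illustration of the serial interface described above, here is a minimal Arduino sketch (a sketch only: the software-serial pins 4 and 3 are assumptions, not fixed by the module) that forwards the raw NMEA sentences from the module's T pin to the PC serial monitor:

#include <SoftwareSerial.h>

// Assumed wiring: GPS T (output) -> Arduino pin 4, GPS R (input) -> Arduino pin 3
SoftwareSerial gpsSerial(4, 3); // RX, TX

void setup()
{
    Serial.begin(9600);    // serial monitor
    gpsSerial.begin(9600); // default GPS data rate (see specifications above)
}

void loop()
{
    // Forward every NMEA character (e.g. $GPGGA, $GPRMC sentences) to the PC
    if (gpsSerial.available())
    {
        Serial.write(gpsSerial.read());
    }
}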

3.16.6 APPLICATION:

• Fleet management / asset tracking
• LBS (location-based services) and AVL systems
• Security systems
• Hand-held devices for personal positioning and travel navigation

3.17 LED :

Fig 3.38 LED

A light-emitting diode (LED) is essentially a p-n junction diode. When carriers are injected across
a forward-biased junction, it emits incoherent light. Most commercial LEDs are realized using a
highly doped n region and a p junction. LEDs are usually built on an n-type substrate, with an
electrode attached to the p-type layer deposited on its surface. P-type substrates, while less
common, occur as well. Many commercial LEDs, especially GaN/InGaN, also use a sapphire
substrate.


3.17.1 ADVANTAGES:

• LEDs produce more light per watt than incandescent bulbs; this is useful in battery powered or
energy-saving devices.
• LEDs can emit light of an intended color without the use of color filters that traditional lighting
methods require. This is more efficient and can lower initial costs.
• The solid package of the LED can be designed to focus its light. Incandescent and fluorescent
sources often require an external reflector to collect light and direct it in a usable manner.
• When used in applications where dimming is required, LEDs do not change their color tint as
the current passing through them is lowered, unlike incandescent lamps, which turn yellow.
• LEDs are ideal for use in applications that are subject to frequent on-off cycling, unlike
fluorescent lamps that burn out more quickly when cycled frequently, or High Intensity Discharge
(HID) lamps that require a long time before restarting.
• LEDs, being solid state components, are difficult to damage with external shock. Fluorescent and
incandescent bulbs are easily broken if dropped on the ground.
• LEDs can have a relatively long useful life. A Philips LUXEON K2 LED has a lifetime of about
50,000 hours, whereas fluorescent tubes are typically rated at about 30,000 hours, and
incandescent light bulbs at 1,000–2,000 hours.

3.17.2 APPLICATIONS:

LEDs have many applications. The following are a few examples:

• Devices, medical applications, clothing, toys
• Remote controls (TVs, VCRs)
• Lighting
• Indicators and signs

3.18 TRANSISTOR (BC547):

The BC547 is an NPN transistor, hence the collector–emitter path is left open (reverse biased) when
the base pin is held at ground and is closed (forward biased) when a signal is provided to the base
pin. The BC547 has a gain value of 110 to 800; this value determines the amplification capacity of
the transistor. The maximum current that can flow through the collector pin is 100 mA, so loads
that consume more than 100 mA cannot be connected through this transistor. To bias the transistor,
current must be supplied to the base pin; this base current (IB) should be limited to 5 mA.

3.18.1 BC547 as Switch:

When a transistor is used as a switch it is operated in the saturation and cut-off regions, as
explained above. A transistor acts as a closed switch when the base–emitter junction is forward
biased and as an open switch when it is reverse biased; this biasing can be achieved by supplying the required


amount of current to the base pin. As mentioned, the biasing current should be a maximum of 5 mA;
anything more than 5 mA will damage the transistor, hence a resistor is always added in series with
the base pin.
The value of this resistor (RB) can be calculated using the formula below:
RB = (Vin – VBE) / IB
where Vin is the voltage driving the base, VBE is about 0.7 V for the BC547, and the base current
IB depends on the collector current IC (IB ≈ IC / hFE). The value of IB should not exceed 5 mA.
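
As a worked illustration (an assumed example, not from the datasheet: a 5 V Arduino pin driving the base and a 15 mA LED load on the collector), the base resistor can be chosen and the switch exercised like this:

// IB = IC / hFE(min) = 15 mA / 110 ≈ 0.14 mA; use ~10x overdrive for hard
// saturation: IB ≈ 1.4 mA (well under the 5 mA limit).
// RB = (Vin - VBE) / IB = (5 V - 0.7 V) / 1.4 mA ≈ 3.1 kΩ -> use a 3.3 kΩ resistor.

const int basePin = 8; // assumed Arduino pin driving the base through RB

void setup()
{
    pinMode(basePin, OUTPUT);
}

void loop()
{
    digitalWrite(basePin, HIGH); // forward bias: transistor saturates, load ON
    delay(1000);
    digitalWrite(basePin, LOW);  // base at ground: transistor cuts off, load OFF
    delay(1000);
}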

Fig 3.39 transistor

3.18.2 APPLICATIONS :

 Driver Modules like Relay Driver, LED driver etc..


 Amplifier modules like Audio amplifiers, signal Amplifier etc..
 Darlington pair

3.19 TOGGLE SWITCH:

In a toggle switch you have a lever that you flip to one side or the other to make the current flow
to one side or the other, or not flow at all. There are several types of toggle switches. These are
characterized by the pole and the throw: a pole represents a contact, and the throw represents the
number of connections each pole can make.

Fig 3.40 Toggle switch

Toggle switches are classified into four types, namely:

• SPST (single pole, single throw)
• SPDT (single pole, double throw)
• DPST (double pole, single throw)
• DPDT (double pole, double throw)


3.20 CAR CHASSIS:

A chassis is the load-bearing framework of an artificial object, which structurally supports the
object in its construction and function. An example of a chassis is a vehicle frame, the underpart
of a motor vehicle, on which the body is mounted; if the running gear such as wheels and
transmission, and sometimes even the driver's seat, are included, then the assembly is described as
a rolling chassis.

Fig 3.41 Car chassis

In an electronic device (such as a computer), the chassis consists of a frame or other internal
supporting structure on which the circuit boards and other electronics are mounted.

In some designs, such as older radio and television sets, the chassis is mounted inside a heavy, rigid
cabinet, while in other designs, such as modern computer cases, lightweight covers or panels are
attached to the chassis. The combination of chassis and outer covering is sometimes called an
enclosure.

3.21 WHEELS :

God created legs for locomotion and man created wheels for the same purpose, and the wheel is one
of the greatest inventions of the human era. Wheels are your best bet for robots, as they are easy to
design and implement and practical for robots that require speed. They also do not suffer from
static or dynamic stability problems, as the center of gravity of the robot does not change when it
is in motion or standing still, and they do not require complex models, designs and algorithms. The
disadvantage is that they are not stable on uneven or rough terrain, or on extremely smooth
surfaces, where they tend to slip and skid.

3.21.1 STANDARD / FIXED WHEELS:

This wheel has two degrees of freedom and can move forward or in reverse. The center of the
wheel is fixed to the robot chassis, and the angle between the robot chassis and the wheel plane is
constant. Fixed wheels are commonly seen in most WMRs (wheeled mobile robots), where the
wheels are attached to motors and are used to drive and steer the robot.

3.21.2 ORIENTABLE WHEEL:


These wheels are mounted to a fork which holds the wheel in place. Orientable wheels are normally
used to balance a robot and are very unlikely to be used to drive one. There are two kinds of
orientable wheels: centered and off-centered.

3.21.3 OMNI WHEELS :

These are the best choice for a robot that requires multi-directional movement. They are normal
wheels with passive wheels (rollers) attached around the circumference of the center wheel. Omni
wheels can move in any direction and exhibit low resistance whichever way they move. The small
wheels are attached in such a way that their axes are perpendicular to the axis of the bigger center
wheel, which allows the wheel to roll even parallel to its own axis. Omni wheels are sometimes
known as Swedish wheels and can be used to both drive and steer a robot. The Mecanum wheel is
also a type of omni wheel, with the exception that the rollers are attached at a 45° angle around the
circumference of the bigger wheel.

3.21.4 CONCLUSION:

The best wheel for your robot depends on the design and requirements. Fixed wheels are good
for simply connecting wheels to a motor and driving or steering. Orientable and spherical wheels
are good for balancing a robot (especially when two wheels drive and you require a third
balancing wheel; also known as auxiliary wheel). Swedish wheels are good for both driving and
steering, but come with their disadvantages.


CHAPTER-4

SOFTWARE USED

4.1 ARDUINO IDE:

Arduino can sense the environment by receiving input from a variety of sensors and can
affect its surroundings by controlling lights, motors, and other actuators. The microcontroller on
the board is programmed using the Arduino programming language (based on Wiring) and the
Arduino development environment (based on Processing). Arduino projects can be stand-alone,
or they can communicate with software running on a computer (e.g. Flash, Processing, and Max/
MSP). Arduino is a cross-platform program; you'll have to follow different instructions for your
particular OS.

Fig 4.1 Arduino Software

As you can see, downloads are available for Windows, Mac OS X and both 32- and 64-bit
Linux. This example will download the software on a Windows system with admin rights. If you
are installing the software and do not have administrator rights on your system, you will want to
download the ZIP file instead of the installer. Download the file (you may be asked for a donation,
but that isn't necessary) and save it to your computer. Depending on your internet connection
speed, it may take a while. Once it downloads, run it.


If asked whether you want to allow it to make changes to your computer, say yes (that's the
only way you can get it installed). Next, you should see the licensing agreement.

Fig 4.2 Arduino license agreement

Click I Agree if you do agree.

Next you will see the Setup and Installation options screen.
Click Next. It will ask you where you want it installed on the next screen, and it's best to
go with the default that it suggests.

Fig 4.3 Installation path


Click Install, and it will begin the installation process.
After the main installation is complete, you will be asked to install the drivers; click
"Install". Once installation is finished, you will receive a notification screen that will look
something like this:

Fig 4.4 Complete installation


4.1.1 The Development Process:


Here is the process for creating a program to run on your Arduino:
1. Create the sketch in the Arduino software
2. Verify the sketch
3. Correct any errors that are indicated (like typos or misspelled variable names)
4. Compile the sketch
5. Upload the resulting program to your Arduino
6. Test your program
7. Rewire or rewrite code as needed
8. Return to Step 2
In the Arduino software, you will notice that Steps 4 and 5 occur at the same time. You will
probably also notice that Steps 3 and 7 are the most frustrating and time consuming. The
first step is to plug the square end of a USB data cable into your Arduino, and the other end into
your computer. Next, start the Arduino program. Your firewall may block it, so you may need to
give it permission to be allowed through. You will then see the Arduino development
interface.

Fig 4.5 Arduino sketch

4.1.2 Project Creation Process:


Here we are going to look at the Arduino project creation process for programming and
using an Arduino microcontroller. We are going to study the phases involved, then revisit the
simple example from the previous section.
Introduction
There are seven phases to creating a working microcontroller project:
1. Specify
2. Design
3. Prototype
4. Algorithm
5. Sketch
6. Compile and Upload


7. Test and Debug


Now we are going to look at these phases in more detail.
Specify:
Before you can create a good microcontroller project, you must decide exactly what it needs
to accomplish. Then ask things like this: What kind of input does it need? What kind of output
needs to be achieved? What will you do with the input? How will you generate the output?
Design:
You will need to design a circuit, within the limitations of your Arduino board, to achieve
the input and output. At this stage, you will begin to look at what kind of electrical or electronic
parts you will need, such as resistors, sensors, etc. Make a list of what you need, and research what
you don't know. You will also need to select which pins you want (or must) use.
Prototype:
The next step is to build a prototype of your circuit. You can do this directly on the
breadboard, or if you prefer you can use an online prototyping tool.
Algorithm:
This is an often neglected aspect of program development. Before you dive into writing
sketch code, take some time to think through what your sketch needs to do. When you open up the
Arduino environment to create a new sketch, this is what you see:

Fig 4.6 Arduino setup and loop functions

The sketch is divided into two parts: setup and loop. Consider these the first guidelines on
how to develop a working sketch.
The setup portion is where you put code that needs to run only once. This includes things
like setting certain pins to HIGH, specifying whether a pin should be used as input or output,
assigning certain values to variables, etc. This code will run once each time the Arduino board is
powered up. Decide what commands need to run once, and plan to place them here.
The loop section is the main portion of the code and will keep running until you power off
the Arduino. This is the more challenging part of developing the algorithm. A minimal example is
sketched below.
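
To make the setup/loop split concrete, here is a minimal sketch (the LED on pin 13 is an assumption; it matches the on-board LED of most Arduino boards):

const int ledPin = 13; // assumed: on-board LED

void setup()
{
    pinMode(ledPin, OUTPUT); // runs once at power-up
}

void loop()
{
    digitalWrite(ledPin, HIGH); // repeats until power-off
    delay(500);
    digitalWrite(ledPin, LOW);
    delay(500);
}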
Sketch
Here is where you begin to type in the actual commands, being careful about spelling and
syntax.


Compile and Upload

In the example, we saw that we could verify and compile the code at the same time. This
is a step that takes place under the hood, so to speak. As long as we have typed in the code in a
way the computer can understand, there shouldn't be any issues with compiling.
Next, the code must be uploaded to the Arduino; it doesn't do you any good until it is uploaded.
Test and Debug
This is the time-consuming part of programming. When you run your test, why doesn't it work
correctly? I would start by checking the code again, then checking the circuit.
4.1.3 Basic Arduino Command Library:
There is a set of basic commands needed to interact with the Arduino board. In this
chapter, we look at the most common digital and analog I/O functions you would use in a sketch
for the Arduino.
Digital I/O Functions:
There are three functions for digital input and output: one to set the mode of the pin (is it
going to be an input pin or an output pin), one to write to the pin (set it to HIGH or LOW), and
one to read the current status of the pin (HIGH or LOW). The commands and their basic structure
are shown below, followed by a short example sketch. The values that are italicized are called
parameters and are used to provide information to the functions so that they can work properly.
1. pinMode(pin, mode):
• The pin number must be an integer value
• There are three possible modes: INPUT, OUTPUT, and INPUT_PULLUP
2. digitalWrite(pin, value):
• The pin number must be an integer value
• The values are either HIGH or LOW
3. digitalRead(pin):
• The pin number must be an integer value
• Will return a value of HIGH or LOW
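
A minimal sketch tying these three calls together (the pin numbers, 2 for a push button and 13 for an LED, are assumptions):

const int buttonPin = 2; // assumed: push button to ground
const int ledPin = 13;   // assumed: on-board LED

void setup()
{
    pinMode(buttonPin, INPUT_PULLUP); // reads HIGH when not pressed
    pinMode(ledPin, OUTPUT);
}

void loop()
{
    // Light the LED while the button is held down
    if (digitalRead(buttonPin) == LOW)
    {
        digitalWrite(ledPin, HIGH);
    }
    else
    {
        digitalWrite(ledPin, LOW);
    }
}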

Analog I/O:
As discussed earlier, the Arduino boards include pins for performing analog input and
output. One command is used to set a reference voltage (the value used as the maximum range of
the input voltage), another is used to read the analog voltage, and the last is used to write the
analog voltage.
Here are the commands:
analogReference(type):
• You can choose from 5 options
• DEFAULT is 5 volts (on 5 V Arduino boards) or 3.3 volts (on 3.3 V Arduino boards)
• INTERNAL is a built-in reference that varies with the type of processor
• INTERNAL1V1 is a built-in 1.1 V reference, but is only available on the Mega
• INTERNAL2V56 is a built-in 2.56 V reference that is also available only on the Mega
• EXTERNAL means that whatever voltage is applied to the AREF pin will be used as the
reference voltage


analogRead(pin):

• This reads whatever the analog voltage level is at the pin
• It returns an integer value representing the voltage reading at the pin

analogWrite(pin, dutyCycle):

• This command writes a PWM value to the pin
• The duty cycle is a value between 0, which means always off, and 255, which means
always on
• This can be used for things like dimming or strobing an LED
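
For example, the following sketch (a sketch only: A0 for a potentiometer and pin 9 for an LED are assumptions) reads an analog voltage and echoes it as a PWM duty cycle:

const int potPin = A0; // assumed: potentiometer wiper on analog pin A0
const int pwmPin = 9;  // assumed: LED with series resistor on PWM pin 9

void setup()
{
    pinMode(pwmPin, OUTPUT);
}

void loop()
{
    int reading = analogRead(potPin); // 0..1023
    analogWrite(pwmPin, reading / 4); // scale to the PWM range 0..255
}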

The source code for the IDE is released under the GNU General Public License. The
Arduino IDE supplies a software library from the Wiring project, which provides many common
input and output procedures. User-written code only requires two basic functions, for starting the
sketch and the main program loop, which are compiled and linked with a program stub main() into
an executable cyclic-executive program with the GNU toolchain, also included with the IDE
distribution. The Arduino IDE converts the executable code into a text file in hexadecimal
encoding that is loaded into the Arduino board by a loader program in the board's firmware. By
default, avrdude is used as the uploading tool to flash the user code onto official Arduino boards.

4.2 RASPBIAN OPERATING SYSTEM:

Raspberry Pi OS is the recommended operating system for normal use on a Raspberry Pi.

Raspberry Pi OS is a free operating system based on Debian, optimized for the Raspberry Pi
hardware. Raspberry Pi OS comes with over 35,000 packages: precompiled software bundled in a
nice format for easy installation on your Raspberry Pi.

Raspberry Pi OS is a community project under active development, with an emphasis on improving


the stability and performance of as many Debian packages as possible.

The Raspberry Pi should work with any compatible SD card, although there are some guidelines
that should be followed:

SD card size (capacity):

For installation of Raspberry Pi OS with desktop and recommended software (Full) via NOOBS
the minimum card size is 16GB. For the image installation of Raspberry Pi OS with desktop and
recommended software, the minimum card size is 8GB. For Raspberry Pi OS Lite image
installations we recommend a minimum of 4GB. Some distributions, for example LibreELEC and
Arch, can run on much smaller cards.

Note: Only the Raspberry Pi 3A+, 3B+ and Compute Module 3+ can boot from an SD card larger
than 256 GB. This is because there was a bug in the SoC used on previous models of Pi.


SD card class:

The card class determines the sustained write speed for the card; a class 4 card will be able to write
at 4 MB/s, whereas a class 10 should be able to attain 10 MB/s. However, it should be noted that
this does not mean a class 10 card will outperform a class 4 card for general usage, because often
this write speed is achieved at the cost of read speed and increased seek times.

SD card physical size:

The original Raspberry Pi Model A and Raspberry Pi Model B require full-size SD cards. From
the Model B+ (2014) onwards, a micro SD card is required.

Troubleshooting:

We recommend buying the Raspberry Pi SD card which is available here, as well as from other
retailers; this is an 8GB class 6 micro SD card (with a full-size SD adapter) that outperforms almost
all other SD cards on the market and is a good value solution.

If you are having trouble with corruption of your SD cards, make sure you follow these steps:

1. Make sure you are using a genuine SD card. There are many cheap SD cards available which
are actually smaller than advertised or which will not last very long.
2. Make sure you are using a good quality power supply. You can check your power supply by
measuring the voltage between TP1 and TP2 on the Raspberry Pi; if this drops below 4.75V
when doing complex tasks then it is most likely unsuitable.
3. Make sure you are using a good quality USB cable for the power supply. When using a lower
quality power supply, the TP1->TP2 voltage can drop below 4.75V. This is generally due to
the resistance of the wires in the USB power cable; to save money, USB cables have as little
copper in them as possible, and as much as 1V (or 1W) can be lost over the length of the cable.
4. Make sure you are shutting your Raspberry Pi down properly before powering it off.
Type sudo halt and wait for the Pi to signal it is ready to be powered off by flashing the
activity LED.

5. Finally, corruption has been observed if you are overclocking the Pi. This problem has been
fixed previously, although the workaround used may mean that it can still happen. If after
checking the steps above you are still having problems with corruption, please let us know.

4.2.1 INSTALLATION:

Raspberry Pi have developed a graphical SD card writing tool that works on Mac OS, Ubuntu
18.04 and Windows, and is the easiest option for most users as it will download the image and
install it automatically to the SD card.


a) Download the latest version of Raspberry Pi Imager and install it.


b) Connect an SD card reader with the SD card inside.
c) Open Raspberry Pi Imager and choose the required OS from the list presented.
d) Choose the SD card you wish to write your image to.
e) Review your selections and click 'WRITE' to begin writing data to the SD card.
f) Note: if using the Raspberry Pi Imager on Windows 10 with Controlled Folder Access
enabled, you will need to explicitly allow the Raspberry Pi Imager permission to write the
SD card. If this is not done, the Raspberry Pi Imager will fail with a "failed to write" error.

Using other tools

Most other tools require you to download the image first, then use the tool to write it to your SD
card.

Download the image

Official images for recommended operating systems are available to download from the Raspberry
Pi website downloads page.

Alternative distributions are available from third-party vendors.

You may need to unzip .zip downloads to get the image file (.img) to write to your SD card.

Note: the Raspberry Pi OS with Raspberry Pi Desktop image contained in the ZIP archive is over
4 GB in size and uses the ZIP64 format. To uncompress the archive, an unzip tool that supports
ZIP64 is required. The following zip tools support ZIP64:

7-Zip (Windows)
The Unarchiver (Mac)
Unzip (Linux)
Writing the image
How you write the image to the SD card will depend on the operating system you are using.

Linux
Mac OS
Windows
Chrome OS
Boot your new OS
You can now insert the SD card into the Raspberry Pi and power it up.


For the official Raspberry Pi OS, if you need to manually log in, the default user name is pi, with
password raspberry. Remember the default keyboard layout is set to UK.

You should change the default password straight away to ensure your Raspberry Pi is secure.

4.2.2 RASPBERRY PI CONFIGURATION:

You will be shown raspi-config on first booting into Raspberry Pi OS. To open the configuration
tool after this, simply run the following from the command line:

sudo raspi-config

The sudo is required because you will be changing files that you do not own as the pi user.

You should see a blue screen with options in a grey box in the center, like so:

Raspi-config main screen


Moving around the menu
Use the up and down arrow keys to move the highlighted selection between the options available.
Pressing the right arrow key will jump out of the Options menu and take you to the <Select> and
<Finish> buttons. Pressing left will take you back to the options. Alternatively, you can use the
Tab key to switch between these.

Note that in long lists of option values (like the list of timezone cities), you can also type a letter
to skip to that section of the list. For example, entering L will skip you to Lisbon, just two options
away from London, to save you scrolling all the way through the alphabet.

What does raspi-config do?

Generally speaking, raspi-config aims to provide the functionality to make the most common
configuration changes. This may result in automated edits to /boot/config.txt and various standard
Linux configuration files. Some options require a reboot to take effect. If you changed any of those,
raspi-config will ask if you wish to reboot now when you select the <Finish> button.

Menu options

Change User Password:

The default user on Raspberry Pi OS is pi with the password raspberry. You can change that here.
Read about other users.


Network Options:
From this submenu you can set the host name, your wireless LAN SSID, and pre-shared key, or
enable/disable predictable network interface names.

Hostname:
Set the visible name for this Pi on a network.

Boot Options:
From here you can change what happens when your Pi boots. Use this option to change your boot
preference to command line or desktop. You can choose whether boot-up waits for the network to
be available, and whether the Plymouth splash screen is displayed at boot-up.

Localization Options:
The localization submenu gives you these options to choose from: keyboard layout, time zone,
locale, and wireless LAN country code.

Change locale:
Select a locale, for example en_GB.UTF-8 UTF-8.

Change time zone:


Select your local time zone, starting with the region, e.g. Europe, and then selecting a city, e.g.
London. Type a letter to skip down the list to that point in the alphabet.

Change keyboard layout:


This option opens another menu which allows you to select your keyboard layout. It will take a
long time to display while it reads all the keyboard types. Changes usually take effect immediately,
but may require a reboot.

Change wireless country:


This option sets the country code for your wireless network.

Interfacing Options:
In this submenu there are the following options to enable/disable: Camera, SSH, VNC, SPI, I2C,
Serial, 1-wire, and Remote GPIO.

Camera: Enable/disable the CSI camera interface.

SSH: Enable/disable remote command line access to your Pi using SSH

SSH allows you to remotely access the command line of the Raspberry Pi from another computer.
SSH is disabled by default. Read more about using SSH on the SSH documentation


page. If connecting your Pi directly to a public network, you should not enable SSH unless you
have set up secure passwords for all users.

VNC: Enable/disable the RealVNC virtual network computing server.

SPI: Enable/disable SPI interfaces and automatic loading of the SPI kernel module, needed for
products such as PiFace.

I2C: Enable/disable I2C interfaces and automatic loading of the I2C kernel module.

Serial: Enable/disable shell and kernel messages on the serial connection.

1-wire: Enable/disable the Dallas 1-wire interface. This is usually used for DS18B20 temperature
sensors.

Overclock:

It is possible to overclock your Raspberry Pi's CPU. The default is 700MHz but it can be set up to
1000MHz. The overclocking you can achieve will vary; overclocking too high may result in
instability. Selecting this option shows the following warning:

Be aware that overclocking may reduce the lifetime of your Raspberry Pi. If overclocking at a
certain level causes system instability, try a more modest overclock. Hold down the Shift key
during boot to temporarily disable overclocking.

4.2.3 ADVANTAGES OF RASPBIAN:

Developer support:

Raspbian pulls more attention from the Raspberry Pi Foundation, given that it is the official
Raspberry Pi OS. This results in the development of more features and utility software. With
support from the Raspberry Pi community, it is therefore easy to set up this distribution and get
going. For this reason, Raspbian also comes pre-installed with office programs, a web browser,
Minecraft and some programming languages (Scratch, Python, C/C++).

Lightweight Nature:

Raspbian is fast and light. Processes share the same resources during execution without the need
for creating process-specific resources, unlike in heavyweight systems. This therefore increases
the efficiency and speed of the operating system. Since the adoption of Epiphany-based software,
the OS has gained noticeable speed, unlike previous versions which ran on Midori. All Raspbian

programs are also created in a way that increases performance efficiency. A good example is the
command line used to play media (the OMX command line).

Learner-oriented:

What really separates great software from good software is the ability to be mindful of the end
user. The Pi was created especially for educational purposes, and that remains true. What
therefore makes Raspbian more advantageous to most Pi users is its choice of the best learning
tools and programs, from simple beginner-friendly programming languages like Python and Ruby
to simple teaching software like Scratch. I think Raspbian always stands out from the rest.

Simplicity and User-friendly:

Raspbian OS is as easy to maintain as it is to use. The commands are quite easy, and whenever you
need to install software, the repository will always provide an updated version of it. The
repository also boasts plenty of software, encompassing most of what you will need.

4.3 OPEN CV (COMPUTER VISION):

Computer vision is one of the hottest fields in the industry right now. You can expect plenty of
job openings to come up in the next 2–4 years. The question then is: are you ready to take
advantage of these opportunities? Take a moment to ponder this: which applications or products
come to your mind when you think of computer vision? The list is HUGE. We use some of them
every day! Features like unlocking our phones using face recognition, our smartphone cameras,
self-driving cars: computer vision is everywhere.

Fig 4.7 OpenCV

OpenCV, or the Open Source Computer Vision library, started out as a research project at Intel. It's
currently the largest computer vision library in terms of the sheer number of functions it holds.

OpenCV contains implementations of more than 2500 algorithms! It is freely available for
commercial as well as academic purposes. And the joy doesn't end there! The library has interfaces
for multiple languages, including Python, Java, and C++.


The first OpenCV version, 1.0, was released in 2006, and the OpenCV community has grown leaps
and bounds since then.

Now, let's turn our attention to the idea behind this article: the plethora of functions OpenCV
offers! We will be looking at OpenCV from the perspective of a data scientist and learning about
some functions that make the task of developing and understanding computer vision models easier.

4.3.1 Reading, Writing and Displaying Images

Machines see and process everything using numbers, including images and text. How do you
convert images to numbers – I can hear you wondering. Two words – pixel values:

Fig 4.8 Reading images

Every number represents the pixel intensity at that particular location. In the above image, I have
shown the pixel values for a grayscale image where every pixel contains only one value i.e. the
intensity of the black colour at that location.

Note that colour images will have multiple values for a single pixel. These values represent the
intensity of the respective channels – Red, Green and Blue channels for RGB images, for instance.

Reading and writing images is essential to any computer vision project. And the OpenCV library
makes this function a whole lot easier.

By default, the imread function reads images in the BGR (Blue-Green-Red) format. We can read
images in different formats using extra flags in the imread function:

• cv2.IMREAD_COLOR: Default flag for loading a colour image
• cv2.IMREAD_GRAYSCALE: Loads images in grayscale format
• cv2.IMREAD_UNCHANGED: Loads images in their given format, including the alpha
channel. The alpha channel stores the transparency information: the higher the value of the
alpha channel, the more opaque the pixel is
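
The flags above are the Python names; in the C++ API used by this project's appendix code the equivalents are cv::IMREAD_COLOR, cv::IMREAD_GRAYSCALE and cv::IMREAD_UNCHANGED. A minimal read/display/write sketch (the file name test.jpg is an assumption):

#include <opencv2/opencv.hpp>

int main()
{
    // Read the same file as colour and as grayscale (test.jpg is an assumed name)
    cv::Mat colour = cv::imread("test.jpg", cv::IMREAD_COLOR);
    cv::Mat gray = cv::imread("test.jpg", cv::IMREAD_GRAYSCALE);
    if (colour.empty()) return -1; // file missing or unreadable

    cv::imshow("colour", colour); // display until a key is pressed
    cv::waitKey(0);
    cv::imwrite("test_gray.png", gray); // write the grayscale copy to disk
    return 0;
}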


4.3.2 Changing Color Spaces:

A color space is a protocol for representing colors in a way that makes them easily reproducible.
We know that grayscale images have single pixel values and color images contain 3 values for
each pixel – the intensities of the Red, Green and Blue channels.

Most computer vision use cases process images in RGB format. However, applications like video
compression and device independent storage – these are heavily dependent on other color spaces,
like the Hue-Saturation-Value or HSV color space.

As you will understand, an RGB image consists of the colour intensities of the different colour
channels, i.e. the intensity and colour information are mixed in the RGB colour space, whereas in
the HSV colour space the colour and intensity information are separated from each other. This
makes the HSV colour space more robust to lighting changes.

OpenCV reads a given image in the BGR format by default. So, you'll need to change the colour
space of your image from BGR to RGB when reading images using OpenCV. Let's see how to do
that:

Fig 4.9 changing colours of an image
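
In the C++ API the same conversion is a single cvtColor call (a short sketch, assuming an input file input.jpg):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.jpg"); // OpenCV loads as BGR by default
    if (bgr.empty()) return -1;

    cv::Mat rgb, hsv;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB); // reorder channels for RGB consumers
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV); // separate colour (H, S) from intensity (V)
    return 0;
}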

4.3.3 Resizing Images

Machine learning models work with a fixed-size input. The same idea applies to computer vision
models as well: the images we use for training our model must be of the same size.

Now this might become problematic if we are creating our own dataset by scraping images from
various sources. That's where the function of resizing images comes to the fore.

Images can be easily scaled up and down using OpenCV. This operation is useful for training deep
learning models when we need to convert images to the model's input shape. Different
interpolation and downsampling methods are supported by OpenCV and can be selected with the
following parameters:

1. INTER_NEAREST: Nearest-neighbour interpolation
2. INTER_LINEAR: Bilinear interpolation
3. INTER_AREA: Resampling using pixel area relation
4. INTER_CUBIC: Bicubic interpolation over a 4×4 pixel neighbourhood
5. INTER_LANCZOS4: Lanczos interpolation over an 8×8 neighbourhood
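
A short C++ sketch of both scaling directions (input.jpg and the 400×240 target size are assumptions; the latter matches the frame size used in this project's appendix code):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg");
    if (img.empty()) return -1;

    cv::Mat smaller, fixed;
    // Halve the image; INTER_AREA is the usual choice when shrinking
    cv::resize(img, smaller, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
    // Force an exact size; INTER_LINEAR is the usual choice when enlarging
    cv::resize(img, fixed, cv::Size(400, 240), 0, 0, cv::INTER_LINEAR);
    return 0;
}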

4.3.4 Image Rotation:

"You need a large amount of data to train a deep learning model." I'm sure you must have come
across this line of thought in one form or another. It's partially true – most deep learning algorithms
are heavily dependent on the quality and quantity of the data.

But what if you do not have a large enough dataset? Not all of us can afford to manually collect
and label images.

Suppose we are building an image classification model for identifying the animal present in an
image. So, both the images shown below should be classified as 'dog':

But the model might find it difficult to classify the second image as a dog if it was not trained on
such images. So what should we do?

Let me introduce you to the technique of data augmentation. This method allows us to generate
more samples for training our deep learning model. Data augmentation uses the available data
samples to produce the new ones, by applying image operations like rotation, scaling, translation,
etc. This makes our model robust to changes in input and leads to better generalization.

Fig 4.10 image rotation

Rotation is one of the most used and easiest to implement data augmentation techniques. As the
name suggests, it involves rotating the image by an arbitrary angle and giving it the same label as
the original image. Think of the times you have rotated images on your phone to achieve a certain
angle – that's basically what this function does.
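
In C++ this is a rotation matrix plus an affine warp (a sketch; input.jpg and the 30° angle are assumed values):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg");
    if (img.empty()) return -1;

    // Rotate 30 degrees about the image centre, keeping the original scale
    cv::Point2f center(img.cols / 2.0f, img.rows / 2.0f);
    cv::Mat M = cv::getRotationMatrix2D(center, 30.0, 1.0);

    cv::Mat rotated;
    cv::warpAffine(img, rotated, M, img.size());
    cv::imwrite("rotated.jpg", rotated);
    return 0;
}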

4.3.5 Image Translation:


Image translation is a geometric transformation that maps the position of every object in the image
to a new location in the final output image. After the translation operation, an object present at
location (x,y) in the input image is shifted to a new position (X,Y):

X = x + dx

Y = y + dy

Here, dx and dy are the respective translations along different dimensions.

Image translation can be used to add shift invariance to the model: by translation we can change
the position of the object in the image, giving more variety to the model, which leads to better
generalizability and works in difficult conditions, i.e. when the object is not perfectly aligned to
the center of the image.

This augmentation technique can also help the model correctly classify images with partially
visible objects. Take the below image for example. Even when the complete shoe is not present in
the image, the model should be able to classify it as a Shoe.

This translation function is typically used in the image pre-processing stage. Check out the below
code to see how it works in a practical scenario:

Fig 4.11 image translation
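
A C++ equivalent builds the 2×3 matrix [[1, 0, dx], [0, 1, dy]] directly (dx = 50 and dy = 30 are assumed example shifts):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg");
    if (img.empty()) return -1;

    double dx = 50, dy = 30; // assumed shifts along x and y
    cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, dx,
                                           0, 1, dy);

    cv::Mat shifted;
    cv::warpAffine(img, shifted, M, img.size()); // X = x + dx, Y = y + dy
    cv::imwrite("shifted.jpg", shifted);
    return 0;
}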

4.3.6 Adaptive Thresholding:

In the case of adaptive thresholding, different threshold values are used for different parts of the
image. This function gives better results for images with varying lighting conditions, hence the
term "adaptive".

Otsu's binarization method finds an optimal threshold value for the whole image. It works well for
bimodal images (images with 2 peaks in their histogram).
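
Both variants are essentially one call each in C++ (a sketch; gray.png is an assumed grayscale input):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("gray.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return -1;

    cv::Mat adaptive, otsu;
    // Local threshold: 11x11 Gaussian-weighted neighbourhood, constant offset 2
    cv::adaptiveThreshold(gray, adaptive, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                          cv::THRESH_BINARY, 11, 2);
    // Global threshold chosen automatically by Otsu's method
    cv::threshold(gray, otsu, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    return 0;
}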


Fig 4.12 adaptive thresholding

4.3.7 Image Segmentation (Watershed Algorithm):

Image segmentation is the task of classifying every pixel in the image to some class. For example,
classifying every pixel as foreground or background. Image segmentation is important for
extracting the relevant parts from an image.

The watershed algorithm is a classic image segmentation algorithm. It considers the pixel values
in an image as topography. For finding the object boundaries, it takes initial markers as input. The
algorithm then starts flooding the basin from the markers till the markers meet at the object
boundaries.

Let's say we have a topography with multiple basins. Now, if we fill the different basins with water
of different colours, then the intersection of the different colours will give us the object boundaries.
This is the intuition behind the watershed algorithm.
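
A condensed C++ version of the usual marker-based pipeline (a sketch under assumptions: coins.jpg as input, Otsu binarization, and no explicit background marker, which a full pipeline would add):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("coins.jpg");
    if (img.empty()) return -1;

    // Binarize, then find "sure foreground" peaks via the distance transform
    cv::Mat gray, bin, dist, sureFg;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::distanceTransform(bin, dist, cv::DIST_L2, 5);

    double maxVal;
    cv::minMaxLoc(dist, nullptr, &maxVal);
    cv::threshold(dist, sureFg, 0.7 * maxVal, 255, cv::THRESH_BINARY);
    sureFg.convertTo(sureFg, CV_8U);

    // Each foreground blob becomes one initial marker (basin)
    cv::Mat markers;
    cv::connectedComponents(sureFg, markers); // CV_32S labels

    // Flood from the markers; boundary pixels come back labelled -1
    cv::watershed(img, markers);
    img.setTo(cv::Scalar(0, 0, 255), markers == -1); // draw boundaries in red
    cv::imwrite("segmented.jpg", img);
    return 0;
}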

Fig 4.13 image segmentation

4.3.8 Bitwise Operations:

Bitwise operations include AND, OR, NOT and XOR. You might remember them from your
programming class! In computer vision, these operations are very useful when we have a mask
image and want to apply that mask over another image to extract the region of interest.
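
For instance (a sketch; img.jpg and mask.png, a single-channel 0/255 mask, are assumed inputs):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("img.jpg");
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE); // 0/255 mask
    if (img.empty() || mask.empty()) return -1;

    // Keep only the pixels where the mask is non-zero
    cv::Mat roi;
    cv::bitwise_and(img, img, roi, mask);
    cv::imwrite("roi.jpg", roi);
    return 0;
}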


Fig 4.14 bitwise operations


In the above figure, we can see an input image and its segmentation mask calculated using the
watershed algorithm. Further, we have applied the bitwise 'AND' operation to remove the
background from the image and extract the relevant portions from the image. Pretty awesome stuff!

4.3.9 Edge Detection

Edges are the points in an image where the image brightness changes sharply or has discontinuities.
Such discontinuities generally correspond to:

• Discontinuities in depth
• Discontinuities in surface orientation
• Changes in material properties
• Variations in scene illumination

Edges are very useful features of an image that can be used for different applications like the
classification of objects in the image and localization. Even deep learning models calculate edge
features to extract information about the objects present in an image.

Edges are different from contours, as they are not related to objects; rather, they signify changes
in the pixel values of an image. Edge detection can be used for image segmentation and even for
image sharpening.
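
The Canny detector, which this project's appendix code also uses for lane finding, reduces to a single call (gray.png and the thresholds 100/200 are assumed values):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("gray.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return -1;

    cv::Mat edges;
    cv::Canny(gray, edges, 100, 200); // low/high hysteresis thresholds
    cv::imwrite("edges.png", edges);
    return 0;
}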

Fig 4.15 edge detection

4.3.10 Image Filtering:


In image filtering, a pixel value is updated using its neighbouring values. But how are these values
updated in the first place? Well, there are multiple ways of updating pixel values, such as selecting
the maximum value from the neighbours, using the average of the neighbours, etc. Each method
has its own uses.

Gaussian filtering is also used for image blurring; it gives different weights to the neighbouring
pixels based on their distance from the pixel under consideration.

For image filtering, we use kernels. Kernels are matrices of numbers of different shapes like 3 x 3,
5 x 5, etc. A kernel is used to calculate the dot product with a part of the image. When calculating
the new value of a pixel, the kernel center is overlapped with the pixel. The neighbouring pixel
values are multiplied with the corresponding values in the kernel. The calculated value is assigned
to the pixel coinciding with the center of the kernel.
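
A Gaussian blur with a 5×5 kernel (sigma = 1.5 is an assumed value) looks like this in C++:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg");
    if (img.empty()) return -1;

    cv::Mat blurred;
    // 5x5 Gaussian kernel, sigma = 1.5 in both directions
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 1.5);
    cv::imwrite("blurred.jpg", blurred);
    return 0;
}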

Fig 4.16 image filtering

In the above output, the image on the right shows the result of applying Gaussian kernels on an
input image. We can see that the edges of the original image are suppressed. The Gaussian kernel
with different values of sigma is used extensively to calculate the Difference of Gaussian for our
image. This is an important step in the feature extraction process because it reduces the noise
present in the image.

4.3.11 Image Contours

A contour is a closed curve of points or line segments that represents the boundaries of an object
in the image. Contours are essentially the shapes of objects in an image.

Unlike edges, contours are not part of an image. Instead, they are an abstract collection of points
and line segments corresponding to the shapes of the object(s) in the image.

We can use contours to count the number of objects in an image, categorize objects on the basis of
their shapes, or select objects of particular shapes from the image.
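
Counting and outlining objects with contours takes a few lines in C++ (binary.png, an already-thresholded image, is an assumed input):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);
    if (bin.empty()) return -1;

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::printf("objects found: %zu\n", contours.size()); // one outer contour per object

    cv::Mat vis;
    cv::cvtColor(bin, vis, cv::COLOR_GRAY2BGR);
    cv::drawContours(vis, contours, -1, cv::Scalar(0, 255, 0), 2);
    cv::imwrite("contours.png", vis);
    return 0;
}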


Fig 4.17 image contours

4.3.12 Scale Invariant Feature Transform (SIFT)

Keypoints are a concept you should be aware of when working with images. These are basically
the points of interest in an image. Keypoints are analogous to the features of a given image.

They are locations that define what is interesting in the image. Keypoints are important because,
no matter how the image is modified (rotation, shrinking, expanding, distortion), we will always
find the same keypoints for the image.

Scale Invariant Feature Transform (SIFT) is a very popular keypoint detection algorithm. It
consists of the following steps:

• Scale-space extrema detection
• Keypoint localization
• Orientation assignment
• Keypoint descriptor
• Keypoint matching

Features extracted from SIFT can be used for applications like image stitching, object detection,
etc. The below code and output show the keypoints and their orientation calculated using SIFT.
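
With OpenCV 4.4 or later (an assumption; in earlier versions SIFT lives in the contrib xfeatures2d module), keypoint detection is:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return -1;

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    sift->detectAndCompute(img, cv::noArray(), keypoints, descriptors);

    cv::Mat vis;
    cv::drawKeypoints(img, keypoints, vis, cv::Scalar::all(-1),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS); // shows scale and orientation
    cv::imwrite("sift.jpg", vis);
    return 0;
}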

Fig 4.18 SIFT image

4.3.13 Speeded-Up Robust Features (SURF):

Speeded-Up Robust Features (SURF) is an enhanced version of SIFT. It works much faster and is
more robust to image transformations. In SIFT, the scale space is approximated using the Laplacian
of Gaussian. Wait, that sounds too complex. What is the Laplacian of Gaussian?


The Laplacian is a kernel used for calculating the edges in an image. The Laplacian kernel works
by approximating a second derivative of the image; hence it is very sensitive to noise. We generally
apply the Gaussian kernel to the image before the Laplacian kernel, thus giving it the name
Laplacian of Gaussian.

In SURF, the Laplacian of Gaussian is calculated using a box filter (kernel). The convolution with
a box filter can be done in parallel for different scales, which is the underlying reason for the
enhanced speed of SURF (compared to SIFT). There are other neat improvements like this in
SURF – I suggest going through the research paper to understand them in depth.

4.3.14 Feature Matching

The features extracted from different images using SIFT or SURF can be matched to find similar
objects/patterns present in different images. The OpenCV library supports multiple feature-
matching algorithms, like brute-force matching and kNN feature matching, among others.
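
A brute-force match of two SIFT descriptor sets (a sketch; img1.jpg and img2.jpg are assumed inputs, and the extraction step follows the SIFT snippet above):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("img1.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("img2.jpg", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return -1;

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, des1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, des2);

    cv::BFMatcher matcher(cv::NORM_L2, true); // L2 distance suits SIFT's float descriptors
    std::vector<cv::DMatch> matches;
    matcher.match(des1, des2, matches);

    cv::Mat vis;
    cv::drawMatches(img1, kp1, img2, kp2, matches, vis);
    cv::imwrite("matches.jpg", vis);
    return 0;
}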

Fig 4.19 feature matching

In the above image, we can see that the keypoints extracted from the original image (on the left)
are matched to keypoints of its rotated version. This is because the features were extracted using
SIFT, which is invariant to such transformations.

4.3.15 Face Detection:

OpenCV supports Haar-cascade based object detection. Haar cascades are machine learning based
classifiers that calculate different features like edges, lines, etc. in the image. These classifiers are
then trained using multiple positive and negative samples.

Trained classifiers for different objects like faces, eyes, etc. are available in the OpenCV GitHub
repo, and you can also train your own Haar cascade for any object.
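
This is the same mechanism the appendix code of this project uses for stop-sign and obstacle detection. A face-detection variant (a sketch; haarcascade_frontalface_default.xml ships with OpenCV, and photo.jpg is an assumed input):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::CascadeClassifier faceCascade;
    if (!faceCascade.load("haarcascade_frontalface_default.xml"))
        return -1; // cascade file not found

    cv::Mat img = cv::imread("photo.jpg");
    if (img.empty()) return -1;

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray); // same preprocessing as the appendix code

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces);
    for (const cv::Rect& r : faces)
        cv::rectangle(img, r, cv::Scalar(0, 0, 255), 2);

    cv::imwrite("faces.jpg", img);
    return 0;
}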

Make sure you go through the below excellent article that teaches you how to build a face detection
model from video using OpenCV:


• Building a Face Detection Model from Video using Deep Learning (OpenCV Implementation)

Fig 4.20 Face detection

And if you're looking to learn the face detection concept from scratch, then this article should be
of interest.

4.3.16 End Notes:

OpenCV is truly an all-encompassing library for computer vision tasks. I hope you tried out all
the above code on your machine; the best way to learn computer vision is by applying it on
your own. I encourage you to build your own applications and experiment with OpenCV as much
as you can.

OpenCV is continually adding new modules for the latest algorithms from machine learning; do
check out their GitHub repository and get familiar with the implementations. You can even
contribute to the library, which is a great way to learn and interact with the community.


CHAPTER 5

RESULT

The main objective of the project is for the vehicle to drive by itself using image processing
techniques. Using the Raspberry Pi camera, the vehicle detects the path to move forward, and
obstacle avoidance is also performed with the camera, as it detects obstacles placed in front of the
vehicle. With this image processing technique the vehicle scans the desired path, filters the
images of the path, and drives itself. With such autonomous vehicles the effort of human driving
is reduced. In the first stage the path of the vehicle is captured by the Raspberry Pi camera; in the
second stage, grayscaling of the path image is done, after which Canny edge detection is applied,
and thus the path of the vehicle is obtained.

Fig 5.1 Result


CHAPTER 6

CONCLUSION AND FUTURE SCOPE

CONCLUSION:

Many practical applications are found using these image processing algorithms, and further
studies are being carried out. Various image processing techniques have been presented through
these papers, which have high-end real-time applications in day-to-day life. The important
takeaway terms are computer vision techniques, obstacle avoidance, and traffic sign detection.
Various algorithms and filters are used to achieve highly efficient data extraction from images.
After evaluating the end results of the papers analyzed, it can be concluded that 75% were found
successful in real time in embedded systems. In a real-time environment these autonomous
vehicles help in industrial zones and in daily life as well. The software development environment
OpenCV has also been discussed in these papers. The histogram technique used here has different
uses in other fields too.

FUTURE SCOPE:

In the future the features of the vehicle can be improved: a high-quality, durable megapixel
camera can be used for path detection so that the image processing performs at its best. By using
an LDR sensor we can light up the body of the vehicle so that it can also travel in low-light
conditions. By developing the image processing technique further, the vehicle can easily detect
the path to travel and reduce travelling time.


APPENDIX SOURCE CODE

1. MASTER CODE (RASPBERRY PI):

Program:

#include <opencv2/opencv.hpp>
#include <raspicam_cv.h>
#include <iostream>
#include <chrono>
#include <ctime>
#include <wiringPi.h>

using namespace std;


using namespace cv;
using namespace raspicam;

// Image Processing variables


Mat frame, Matrix, framePers, frameGray, frameThresh, frameEdge, frameFinal,
frameFinalDuplicate, frameFinalDuplicate1;
Mat ROILane, ROILaneEnd;
int LeftLanePos, RightLanePos, frameCenter, laneCenter, Result, laneEnd;

RaspiCam_Cv Camera;

stringstream ss;

vector<int> histrogramLane;
vector<int> histrogramLaneEnd;

Point2f Source[] = {Point2f(40,135),Point2f(360,135),Point2f(0,185), Point2f(400,185)};


Point2f Destination[] = {Point2f(100,0),Point2f(280,0),Point2f(100,240), Point2f(280,240)};

//Machine Learning variables


CascadeClassifier Stop_Cascade, Object_Cascade;
Mat frame_Stop, RoI_Stop, gray_Stop, frame_Object, RoI_Object, gray_Object;
vector<Rect> Stop, Object;
int dist_Stop, dist_Object;

void Setup ( int argc,char **argv, RaspiCam_Cv &Camera )


{
Camera.set ( CAP_PROP_FRAME_WIDTH, ( "-w",argc,argv,400 ) );
Camera.set ( CAP_PROP_FRAME_HEIGHT, ( "-h",argc,argv,240 ) );
Camera.set ( CAP_PROP_BRIGHTNESS, ( "-br",argc,argv,50 ) );
Camera.set ( CAP_PROP_CONTRAST ,( "-co",argc,argv,50 ) );
Camera.set ( CAP_PROP_SATURATION, ( "-sa",argc,argv,50 ) );
Camera.set ( CAP_PROP_GAIN, ( "-g",argc,argv ,50 ) );
Camera.set ( CAP_PROP_FPS, ( "-fps",argc,argv,0));
}

void Capture()
{
Camera.grab();
Camera.retrieve( frame);
cvtColor(frame, frame_Stop, COLOR_BGR2RGB);
cvtColor(frame, frame_Object, COLOR_BGR2RGB);
cvtColor(frame, frame, COLOR_BGR2RGB);
}

void Perspective()
{
line(frame,Source[0], Source[1], Scalar(0,0,255), 2);
line(frame,Source[1], Source[3], Scalar(0,0,255), 2);
line(frame,Source[3], Source[2], Scalar(0,0,255), 2);
line(frame,Source[2], Source[0], Scalar(0,0,255), 2);

Matrix = getPerspectiveTransform(Source, Destination);


warpPerspective(frame, framePers, Matrix, Size(400,240));
}

void Threshold()
{
cvtColor(framePers, frameGray, COLOR_RGB2GRAY);
inRange(frameGray, 230, 255, frameThresh);
Canny(frameGray,frameEdge, 900, 900, 3, false);
add(frameThresh, frameEdge, frameFinal);
cvtColor(frameFinal, frameFinal, COLOR_GRAY2RGB);
cvtColor(frameFinal, frameFinalDuplicate, COLOR_RGB2BGR); //used in histrogram function only


cvtColor(frameFinal, frameFinalDuplicate1, COLOR_RGB2BGR); //used in histrogram function only
}

void Histrogram()
{
histrogramLane.resize(400);
histrogramLane.clear();

for(int i=0; i<400; i++) //frame.size().width = 400


{
ROILane = frameFinalDuplicate(Rect(i,140,1,100));
divide(255, ROILane, ROILane);
histrogramLane.push_back((int)(sum(ROILane)[0]));
}

histrogramLaneEnd.resize(400);
histrogramLaneEnd.clear();
for (int i = 0; i < 400; i++)
{
ROILaneEnd = frameFinalDuplicate1(Rect(i, 0, 1, 240));
divide(255, ROILaneEnd, ROILaneEnd);
histrogramLaneEnd.push_back((int)(sum(ROILaneEnd)[0]));

}
laneEnd = sum(histrogramLaneEnd)[0];
cout<<"Lane END = "<<laneEnd<<endl;
}

void LaneFinder()
{
vector<int>:: iterator LeftPtr;
LeftPtr = max_element(histrogramLane.begin(), histrogramLane.begin() + 150);
LeftLanePos = distance(histrogramLane.begin(), LeftPtr);

vector<int>:: iterator RightPtr;


RightPtr = max_element(histrogramLane.begin() +250, histrogramLane.end());
RightLanePos = distance(histrogramLane.begin(), RightPtr);

line(frameFinal, Point2f(LeftLanePos, 0), Point2f(LeftLanePos, 240), Scalar(0, 255,0), 2);


line(frameFinal, Point2f(RightLanePos, 0), Point2f(RightLanePos, 240), Scalar(0,255,0), 2);
}


void LaneCenter()
{
laneCenter = (RightLanePos-LeftLanePos)/2 +LeftLanePos;
frameCenter = 188;

line(frameFinal, Point2f(laneCenter,0), Point2f(laneCenter,240), Scalar(0,255,0), 3);


line(frameFinal, Point2f(frameCenter,0), Point2f(frameCenter,240), Scalar(255,0,0), 3);

Result = laneCenter-frameCenter;
}

void Stop_detection()
{
if(!Stop_Cascade.load("//home//pi//Desktop//MACHINE LEARNING//Stop_cascade.xml"))
{
printf("Unable to open stop cascade file");
}

RoI_Stop = frame_Stop(Rect(200,0,200,140));
cvtColor(RoI_Stop, gray_Stop, COLOR_RGB2GRAY);
equalizeHist(gray_Stop, gray_Stop);
Stop_Cascade.detectMultiScale(gray_Stop, Stop);

for(int i=0; i<Stop.size(); i++)


{
Point P1(Stop[i].x, Stop[i].y);
Point P2(Stop[i].x + Stop[i].width, Stop[i].y + Stop[i].height);

rectangle(RoI_Stop, P1, P2, Scalar(0, 0, 255), 2);


putText(RoI_Stop, "Stop Sign", P1, FONT_HERSHEY_PLAIN, 1, Scalar(0, 0, 255, 255), 2);
dist_Stop = (-1.07)*(P2.x - P1.x) + 102.597;    // linear fit: box width (px) -> distance (cm)

ss.str(" ");
ss.clear();
ss<<"D = "<<dist_Stop<<"cm";
putText(RoI_Stop, ss.str(), Point2f(1,130), 0, 1, Scalar(0,0,255), 2);
}
}
void Object_detection()
{
if(!Object_Cascade.load("//home//pi//Desktop//MACHINE LEARNING//Object_cascade.xml"))
{
    printf("Unable to open Object cascade file");
}

RoI_Object = frame_Object(Rect(200,0,200,140));
cvtColor(RoI_Object, gray_Object, COLOR_RGB2GRAY);
equalizeHist(gray_Object, gray_Object);
Object_Cascade.detectMultiScale(gray_Object, Object);

for (int i = 0; i < Object.size(); i++)
{
Point P1(Object[i].x, Object[i].y);
Point P2(Object[i].x + Object[i].width, Object[i].y + Object[i].height);

rectangle(RoI_Object, P1, P2, Scalar(0, 0, 255), 2);


putText(RoI_Object, "Object", P1, FONT_HERSHEY_PLAIN, 1, Scalar(0, 0, 255, 255), 2);
dist_Object = (-1.07)*(P2.x - P1.x) + 102.597;   // same calibration as the stop sign

ss.str(" ");
ss.clear();
ss<<"D = "<<dist_Object<<"cm";
putText(RoI_Object, ss.str(), Point2f(1,130), 0, 1, Scalar(0,0,255), 2);
}
}

int main(int argc, char **argv)
{

wiringPiSetup();
pinMode(21, OUTPUT);
pinMode(22, OUTPUT);
pinMode(23, OUTPUT);
pinMode(24, OUTPUT);


Setup(argc, argv, Camera);

cout<<"Connecting to camera"<<endl;
if (!Camera.open())
{
    cout<<"Failed to Connect"<<endl;
    return -1;    // abort: nothing can run without the camera
}

cout<<"Camera Id = "<<Camera.getId()<<endl;

while(1)
{

auto start = std::chrono::system_clock::now();

Capture();
Perspective();
Threshold();
Histrogram();
LaneFinder();
LaneCenter();
Stop_detection();
Object_detection();

if (dist_Stop > 5 && dist_Stop < 20)
{
digitalWrite(21, 0);
digitalWrite(22, 0); //decimal = 8
digitalWrite(23, 0);
digitalWrite(24, 1);
cout<<"Stop Sign"<<endl;
dist_Stop = 0;

goto Stop_Sign;
}

if (laneEnd > 3000)
{
digitalWrite(21, 1);

digitalWrite(22, 1); //decimal = 7
digitalWrite(23, 1);
digitalWrite(24, 0);
cout<<"Lane End"<<endl;
}

if (Result == 0)
{
digitalWrite(21, 0);
digitalWrite(22, 0); //decimal = 0
digitalWrite(23, 0);
digitalWrite(24, 0);
cout<<"Forward"<<endl;
}

else if (Result > 0 && Result < 10)
{
digitalWrite(21, 1);
digitalWrite(22, 0); //decimal = 1
digitalWrite(23, 0);
digitalWrite(24, 0);
cout<<"Right1"<<endl;
}

else if (Result >= 10 && Result < 20)
{
digitalWrite(21, 0);
digitalWrite(22, 1); //decimal = 2
digitalWrite(23, 0);
digitalWrite(24, 0);
cout<<"Right2"<<endl;
}

else if (Result >= 20)    // >= so that Result == 20 is not skipped
{
digitalWrite(21, 1);
digitalWrite(22, 1); //decimal = 3
digitalWrite(23, 0);
digitalWrite(24, 0);
cout<<"Right3"<<endl;
}


else if (Result < 0 && Result > -10)
{
digitalWrite(21, 0);
digitalWrite(22, 0); //decimal = 4
digitalWrite(23, 1);
digitalWrite(24, 0);
cout<<"Left1"<<endl;
}

else if (Result <= -10 && Result > -20)
{
digitalWrite(21, 1);
digitalWrite(22, 0); //decimal = 5
digitalWrite(23, 1);
digitalWrite(24, 0);
cout<<"Left2"<<endl;
}

else if (Result <= -20)   // <= so that Result == -20 is not skipped
{
digitalWrite(21, 0);
digitalWrite(22, 1); //decimal = 6
digitalWrite(23, 1);
digitalWrite(24, 0);
cout<<"Left3"<<endl;
}

Stop_Sign:
if (laneEnd > 3000)
{
    ss.str(" ");
    ss.clear();
    ss<<" Lane End";
    putText(frame, ss.str(), Point2f(1,50), 0, 1, Scalar(255,0,0), 2);
}

else if (Result == 0)
{
    ss.str(" ");
    ss.clear();
    ss<<"Result = "<<Result<<" (Move Forward)";
    putText(frame, ss.str(), Point2f(1,50), 0, 1, Scalar(0,0,255), 2);
}

else if (Result > 0)
{
    ss.str(" ");
    ss.clear();
    ss<<"Result = "<<Result<<" (Move Right)";
    putText(frame, ss.str(), Point2f(1,50), 0, 1, Scalar(0,0,255), 2);
}

else if (Result < 0)
{
    ss.str(" ");
    ss.clear();
    ss<<"Result = "<<Result<<" (Move Left)";
    putText(frame, ss.str(), Point2f(1,50), 0, 1, Scalar(0,0,255), 2);
}

namedWindow("orignal", WINDOW_KEEPRATIO);
moveWindow("orignal", 0, 100);
resizeWindow("orignal", 640, 480);
imshow("orignal", frame);

namedWindow("Perspective", WINDOW_KEEPRATIO);
moveWindow("Perspective", 640, 100);
resizeWindow("Perspective", 640, 480);
imshow("Perspective", framePers);

namedWindow("Final", WINDOW_KEEPRATIO);
moveWindow("Final", 1280, 100);
resizeWindow("Final", 640, 480);
imshow("Final", frameFinal);

namedWindow("Stop Sign", WINDOW_KEEPRATIO);


moveWindow("Stop Sign", 1280, 580);
resizeWindow("Stop Sign", 640, 480);

KIETW- ECE Page 93


DEPARTMENT
AUTONOMOUS VEHICLE USING IMAGE PROCESSING

imshow("Stop Sign", RoI_Stop);

namedWindow("Object", WINDOW_KEEPRATIO);
moveWindow("Object", 640, 580);
resizeWindow("Object", 640, 480);
imshow("Object", RoI_Object);

    waitKey(1);
    auto end = std::chrono::system_clock::now();
    std::chrono::duration<double> elapsed_seconds = end - start;

    float t = elapsed_seconds.count();
    int FPS = 1/t;    // processing rate of one loop iteration
    //cout<<"FPS = "<<FPS<<endl;
}

return 0;
}
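Note: every steering branch above writes the same four pins, so the decision is
effectively a 4-bit command code on wiringPi pins 21 (bit 0, LSB) to 24 (bit 3,
MSB). As a minimal sketch of that idea (WriteCommand is a hypothetical helper,
not part of the original listing), each block of four digitalWrite() calls could
be collapsed into:

// Hypothetical helper: put a 4-bit command code on the GPIO lines
// sampled by the Arduino slave.
void WriteCommand(int code)
{
    digitalWrite(21, (code >> 0) & 1);   // bit 0 -> Arduino D0
    digitalWrite(22, (code >> 1) & 1);   // bit 1 -> Arduino D1
    digitalWrite(23, (code >> 2) & 1);   // bit 2 -> Arduino D2
    digitalWrite(24, (code >> 3) & 1);   // bit 3 -> Arduino D3
}

For example, the "Right2" branch is equivalent to WriteCommand(2); and the
stop-sign branch to WriteCommand(8);.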


2. SLAVE CODE (ARDUINO UNO):

PROGRAM:

int i = 0;                 // set by Object(): counts completed avoidance manoeuvres
unsigned long int j = 0;   // steering iterations counted while i > 0

const int EnableL = 5;
const int HighL = 6;    // LEFT SIDE MOTOR
const int LowL = 7;

const int EnableR = 10;
const int HighR = 8;    // RIGHT SIDE MOTOR
const int LowR = 9;

const int D0 = 0;   // Raspberry pin 21, LSB
const int D1 = 1;   // Raspberry pin 22
const int D2 = 2;   // Raspberry pin 23
const int D3 = 3;   // Raspberry pin 24, MSB

int a, b, c, d, data;   // the four input bits and the assembled command

void setup() {

pinMode(EnableL, OUTPUT);
pinMode(HighL, OUTPUT);
pinMode(LowL, OUTPUT);

pinMode(EnableR, OUTPUT);
pinMode(HighR, OUTPUT);
pinMode(LowR, OUTPUT);

pinMode(D0, INPUT_PULLUP);
pinMode(D1, INPUT_PULLUP);
pinMode(D2, INPUT_PULLUP);
pinMode(D3, INPUT_PULLUP);
}

void Data()
{
a = digitalRead(D0);
b = digitalRead(D1);
c = digitalRead(D2);
d = digitalRead(D3);


data = 8*d + 4*c + 2*b + a;   // assemble the 4-bit command (d = MSB)
}
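// Command codes assembled above (pin 24 is the MSB, pin 21 the LSB); the
// mapping mirrors the master code: 0 = forward, 1/2/3 = right (increasingly
// sharp), 4/5/6 = left, 7 = U-turn at lane end, 8 = wait at a stop sign,
// 9 = drive around an object, 10 = halt for 2 s, above 10 = stop (see loop()).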

void Forward()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR, 255);
}

void Backward()
{
digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
analogWrite(EnableL,255);

digitalWrite(HighR, HIGH);
digitalWrite(LowR, LOW);
analogWrite(EnableR, 255);
}

void Stop()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,0);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR, 0);
}

void Left1()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,160);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR, 255);
}

void Left2()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,90);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR, 255);
}

void Left3()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,50);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR, 255);
}

void Right1()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,160); //200

}
void Right2()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR, 90);    //160
}

void Right3()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);

digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR, 50);    //100
}
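// Note: Left1-3 and Right1-3 steer differentially: both motors run forward,
// but the PWM duty on the inner side drops from 255 to 160, 90 or 50, giving
// progressively sharper turns while the outer wheel keeps full speed.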

void UTurn()
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);

analogWrite(EnableL, 250);
analogWrite(EnableR, 250); //forward
delay(1000);

analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);

digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW); // left
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(700);

analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);

digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW); // forward
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(900);

analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);


digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW); //left
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(700);

analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(1000);

digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW);    // forward
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(300);
}
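// Note: UTurn() is an open-loop manoeuvre driven purely by timing: stop,
// drive forward, pivot left, forward again, pivot left once more and resume,
// with delay() values evidently tuned for this chassis and track.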

void Object()
{

analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(1000);

digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH); //left
analogWrite(EnableL, 250);
analogWrite(EnableR, 250);
delay(500);

analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(200);

digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH); //forward
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);


analogWrite(EnableR, 255);
delay(1000);

analogWrite(EnableL, 0);    //stop
analogWrite(EnableR, 0);
delay(200);

digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, HIGH); //right
digitalWrite(LowR, LOW);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(500);

analogWrite(EnableL, 0);    //stop
analogWrite(EnableR, 0);
delay(1000);

digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW); // forward
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(500);

i = i+1;
}
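// Note: Object() performs an open-loop avoidance pass (stop, swerve left,
// drive past, swerve right, resume) and increments i. While i > 0, loop()
// counts steering iterations in j, and once j exceeds 25000 it calls
// Lane_Change() to shift the car back toward its original lane.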

void Lane_Change()
{

analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(1000);

digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, HIGH);
digitalWrite(LowR, LOW); //Right
analogWrite(EnableL, 250);
analogWrite(EnableR, 250);
delay(500);

analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(200);


digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH); //forward
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(800);

analogWrite(EnableL, 0);    //stop
analogWrite(EnableR, 0);
delay(200);

digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW); //LEFT
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(500);

analogWrite(EnableL, 0);    //stop
analogWrite(EnableR, 0);
delay(1000);

digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW);    // forward
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(500);
}

void loop()
{
if (j > 25000)
{
Lane_Change();
i = 0;
j = 0;
}

Data();
if(data==0)
{
Forward();


if (i>0)
{
j = j+1;
}
}

else if(data==1)
{
Right1();
if (i>0)
{
j = j+1;
}
}

else if(data==2)
{
Right2();
if (i>0)
{
j = j+1;
}
}

else if(data==3)
{
Right3();
if (i>0)
{
j = j+1;
}
}

else if(data==4)
{
Left1();
if (i>0)
{
j = j+1;
}
}

else if(data==5)
{
Left2();
if (i>0)
{
j = j+1;
}
}


else if(data==6)
{
Left3();
if (i>0)
{
j = j+1;
}
}

else if(data==7)
{
UTurn();
}

else if (data==8)    // stop sign: halt for 4 s, then creep forward
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(4000);

analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(1000);
}

else if(data==9)
{
Object();
}

else if(data==10)    // halt for 2 s
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(2000);
}

else if(data>10)
{
Stop();
}

}