KWECE-A-TEAM1-Autonomous Vehicle Using Image Processing-Documentation
BACHELOR OF TECHNOLOGY
IN
ELECTRONICS & COMMUNICATION ENGINEERING
(2016-2020)
Submitted by
A.SHIRISHA (16JN1A0404)
B.SIREESHA (16JN1A0407)
G.MOUNIKA (16JN1A0426)
V.SREE ROHITHA (16JN1A0453)
G.SRIDEVI (16JN1A0454)
CERTIFICATE
This is to certify that the thesis entitled “AUTONOMOUS VEHICLE USING IMAGE
PROCESSING” is a bonafide work of A.SHIRISHA, B.SIREESHA, G.MOUNIKA, V.SREE
ROHITHA, G.SRIDEVI, carried out in partial fulfillment of the requirements for the
award of the degree of BACHELOR OF TECHNOLOGY in ELECTRONICS AND
COMMUNICATION ENGINEERING at KAKINADA INSTITUTE OF
ENGINEERING AND TECHNOLOGY FOR WOMEN, affiliated to
JNTUK, KAKINADA, and is a record of bonafide work carried out by them under my guidance and
supervision. The results embodied in this thesis have not been submitted to any other University
or Institute for the award of any degree.
EXTERNAL EXAMINER
ACKNOWLEDGEMENT
It gives us immense pleasure to acknowledge all those who helped us throughout in making
this project a great success.
With profound gratitude we thank Mr. Y. RAMA KRISHNA, M.Tech, MBA, Principal,
Kakinada Institute of Engineering and Technology, for his timely suggestions, which helped us
complete this project work successfully.
Our sincere thanks and deep sense of gratitude to Mr. V.V.SUBHASH, M. Tech, Head of the
Department ECE, for his valuable guidance, in completion of this project successfully.
We are thankful to both Teaching and Non-Teaching staff of ECE department for their kind
cooperation and all sorts of help bringing out this project work successfully.
A.SHIRISHA (16JN1A0404)
B.SIREESHA (16JN1A0407)
G.MOUNIKA (16JN1A0426)
V.SREE ROHITHA (16JN1A0453)
G.SRIDEVI (16JN1A0454)
DECLARATION
This work has not been previously submitted to any other institution
or University for the award of any other degree or diploma.
A.SHIRISHA (16JN1A0404)
B.SIREESHA (16JN1A0407)
G.MOUNIKA (16JN1A0426)
V.SREE ROHITHA (16JN1A0453)
G.SRIDEVI (16JN1A0454)
TABLE OF CONTENTS
ABSTRACT I
LIST OF TABLES IV
1. INTRODUCTION 1-8
1.1 Introduction 1
1.2 Description 1
1.3 Explanation 2
1.3.1 Lane Detection 2
1.3.2 Image Processing 3
1.3.3 Edge Detection 4
1.3.4 Histogram 4
1.3.4 (a) Histogram of Monochrome image 5
1.3.4 (b) Application of Histogram 6
1.4 Advantages of Autonomous vehicle 7
1.5 Keywords 7
1.6 Existing system 7
1.7 Proposed system 8
2. INTRODUCTION TO EMBEDDED SYSTEMS 9-16
2.1 Introduction 9
2.2 Background 10
2.3 Characteristics 11-13
2.3.1 User Interface 11
2.3.2 Processors in embedded systems 12
2.3.3 Readymade computer boards 12
2.3.4 Peripherals 13
2.4 Applications 14
2.5 Types of embedded systems 15
2.5.1 Small scale embedded systems 15
2.5.2 Medium scale embedded system 15
2.5.3 Sophisticated embedded systems 15
2.6 Advantages 16
5. RESULT 83
6. CONCLUSION AND FUTURE SCOPE 84
APPENDIX SOURCE CODE 85-103
ABSTRACT
An autonomous vehicle is a self-driving vehicle that drives by itself without any human
intervention. Here the autonomous vehicle operates by detecting traffic signs, symbols, obstacles
and the path of the vehicle. A Raspberry Pi camera is used to detect obstacles placed in
front of the vehicle and also to detect its path. Using the concept of image
processing, the camera detects the path and the vehicle moves along it. Traffic
signs are detected by the camera and the vehicle travels according to those signs. Using
GPS the vehicle shares its location with the user; if any obstacle blocks the vehicle, it
sends a message to the user over GSM and shares its location via GPS. When the vehicle
meets with an accident, it likewise sends a message to the user and shares its location. During
low-light conditions the vehicle switches on its lights using an LDR sensor.
The concept of image processing plays a vital role in the detection of the vehicle's path. The
main objective of the project is to make a vehicle drive by itself while detecting traffic signs,
symbols, obstacles, the path, etc. The main functions of this project are traffic sign detection,
obstacle detection, accident and gas detection, light detection for automatically switching on the
vehicle's lights during low-light conditions, and location sharing using GPS.
LIST OF FIGURES
S.NO Figure No Figure Name Page no
LIST OF TABLES
AUTONOMOUS VEHICLE USING IMAGE PROCESSING
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
Image processing is one of the main drivers of automation, security and safety related applications
in the electronics industry. Most image-processing techniques treat the
image as a two-dimensional signal and apply standard signal-processing methods to it. Images
are also handled as 3D signals, where the third dimension is time or the z-axis. Highly efficient,
low memory and reliable solutions can be achieved by utilizing Embedded Systems and Image
processing to bring out the benefits of both for applications. Google is one of the billion-dollar
companies that has demonstrated its own driverless car, a design that does away with all
conventional controls, including the steering wheel, and other astonishing technologies. In their
driverless car, Google has included not only image processing but also many other remarkable
technologies, one of the most important of which is Lidar, which stands for "Light Detection and Ranging".
It consists of a cone or puck-shaped device that projects lasers which bounce off objects to create
a high-resolution map of the environment in real time. In addition to helping driverless cars
"see", Lidar is used to create fast, accurate 3D scans of landscapes, buildings, cultural heritage
sites and foliage. Some of the other technologies include a bumper-mounted radar for collision
avoidance, an aerial that reads the precise geo-location, ultrasonic sensors on the rear wheels that detect
and avoid obstacles, software programmed to interpret common road signs, etc. Apart
from these, there are altimeters, gyroscopes, and tachymeters that determine the very precise
position of the car and offer highly accurate data for the car to operate safely. Apart from Google,
many other companies like Tesla, Audi, and Uber have also developed their own driverless cars
and have tested them extensively. This work concentrates on how image processing can be used in
vehicles to drive the automotive industry toward completely autonomous and highly secure pathways. A
real-time embedded-system environment is inevitable in an automotive application. Also, the scale
of the industry is very high, so the solutions should be cost-efficient, fast and reliable.
1.2 DESCRIPTION:
In autonomous vehicles, the driving commands from a human driver are replaced by a controller
or a microcomputer system that generates these commands from the information it gets as its input.
Since this paper deals with the applications of image processing in autonomous control of a
vehicle, the input given to the microcomputer system is the visual information obtained from a
camera mounted on the vehicle. This section explains in detail some of the important components of
autonomous vehicles, such as lane detection, traffic sign detection, obstacle detection,
accident detection, etc., which use the processing of received image inputs and the algorithms
used in them. Lane detection represents a robust and real-time detection of road lane markers using
the concept of Hough transform in which the edge detection is implemented using the canny edge
detection technique. Traffic Sign Detection includes the recognition of road traffic signs in which,
the concept of Polynomial approximation of digital curves is used in the detection module.
1.3 EXPLANATION:
1.3.1 LANE DETECTION:
Lane detection is one of the main parts of self-driving car algorithm development. On-board
cameras are kept in and around the car to capture images of the road and the car's surroundings in real
time [1]. When the vehicle appears to deviate from the lane or the safe following distance becomes too small,
the system can promptly alert the driver to avoid dangerous situations. The basic concept of lane detection is
that, from the image of the road, the on-board controller should understand the limits of the lane
and should warn the driver when the vehicle is moving closer to the lane markings. In an autonomous car,
lane detection is important to keep the vehicle in the middle of the lane at all times, other than
while changing lanes. Lane departure warning systems have already crawled into most of the high-
end passenger cars currently on the market. A typical lane detection algorithm can be split into simple
steps:
1. Select the ROI (Region of Interest)
2. Image preprocessing (gray range/image noise subtraction)
3. Get the edge information from the image (edge detection)
4. Hough Transform (or other algorithms) to decide the lane markings
Step 1: Select the ROI. The images collected by the on-board camera are color images. Each pixel
in the image is made up of three color components, R, G, and B, which contain a large amount of
information. Processing these images directly makes the algorithm consume a lot of time. A
better approach is to select the region of interest (ROI) from the original image by
concentrating on just the region of the image that interests us, namely the region where the lane
lines are generally present. Processing only the ROI can greatly reduce the running time of the
algorithm. The original image is shown in Figure 1.1 and the ROI is shown
in Figure 1.2.
1.3.2 IMAGE PROCESSING:
Most road images have a lot of associated noise. So, before we do any image-processing
steps we need to remove that noise. This is typically done through image preprocessing. Image
preprocessing includes gray scale conversion of color image, gray stretch, median filter to eliminate
the image noise and other interference information. Gray stretch can increase the contrast between
the lane and the road, which makes the lane lines more prominent.
Equation (1) represents the function which is to be applied to an RGB image to convert it to Gray
Scale.
L(x, y) = 0.21 R(x, y) + 0.72 G(x, y) + 0.07 B(x, y) - (1)
where
R - red component of the image
G - green component of the image
B - blue component of the image
x, y - position of a pixel
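The weighted sum in equation (1) can be sketched in a few lines of NumPy (the function name and the one-pixel test image are illustrative, not taken from the project's appendix code):

```python
import numpy as np

def rgb_to_gray(img):
    """Apply equation (1): a weighted sum of the R, G and B planes.

    img is an H x W x 3 array; the result is an H x W gray image.
    """
    weights = np.array([0.21, 0.72, 0.07])
    return img[..., :3] @ weights

# A pure-red pixel keeps only 21% of full intensity,
# since red contributes least to perceived brightness after green.
red = np.zeros((1, 1, 3))
red[0, 0, 0] = 255.0
gray = rgb_to_gray(red)
```

The uneven weights reflect the eye's sensitivity: green contributes most to perceived luminance, blue least.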
Image filtering methods include frequency-domain filtering and spatial-domain
filtering. Spatial-domain filtering is simpler and faster than frequency-domain filtering.
Spatial-domain filtering can remove salt-and-pepper noise from the original image while preserving
the edge details of the image.
The main principle of the median filter is to replace each pixel's value with the median of the
pixel values in its neighborhood. The image after median filtering is shown in the corresponding figure.
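A minimal 3 x 3 median filter can be sketched as follows (a plain NumPy loop for clarity; a real implementation would call an optimized library routine):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: replace each interior pixel with the median
    of its neighborhood; border pixels are left unchanged for brevity."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Median of the 3x3 window centered on (i, j),
            # always taken from the original (unfiltered) image.
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# A lone salt-noise pixel in a flat region is removed,
# while the surrounding values are preserved.
noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0
clean = median_filter3(noisy)
```

Because the median ignores extreme outliers, a single bright noise pixel vanishes without blurring nearby edges, which is exactly why the median filter suits salt-and-pepper noise.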
1.3.3 EDGE DETECTION:
The next step is to perform edge detection on the output of the preprocessing. It basically
detects the lines around the objects in the images. One of the common methods of edge detection
is Canny edge detection, introduced by John F. Canny of the University of California, Berkeley,
in 1986. It uses multiple steps, including Gaussian filtering and intensity-gradient analysis, to
determine edges.
In recent research, one of the main goals is to develop a more efficient edge detection algorithm
for better detection of edges across varying image quality. For that purpose, an alternative approach to
edge detection is sometimes used, which is claimed to be a more efficient method than the Canny edge detector.
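Step 4 of the pipeline, the Hough transform, can be sketched as a small accumulator-voting routine over the edge map (a toy version for illustration; production code would typically use an optimized library implementation such as OpenCV's):

```python
import numpy as np

def hough_strongest_line(edges, n_theta=180):
    """Vote every edge pixel into (rho, theta) accumulator bins using
    rho = x*cos(theta) + y*sin(theta), then return the strongest bin
    as (rho, theta in degrees, number of votes)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # Each edge pixel votes once per candidate angle.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - diag, float(np.rad2deg(thetas[t_idx])), int(acc.max())

# A vertical edge at x == 4: all 10 collinear pixels concentrate
# their votes in one bin of the near-vertical line family (|rho| == 4).
edge_map = np.zeros((10, 10), dtype=int)
edge_map[:, 4] = 1
rho, theta, votes = hough_strongest_line(edge_map)
```

Collinear pixels all fall into the same (rho, theta) bin, so a peak in the accumulator identifies a lane marking even when the edge map is noisy or broken.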
1.3.4 HISTOGRAM:
Digital images are composed of two-dimensional integer arrays that represent individual
components of the image, which are called picture elements, or pixels. The number of bits used to
represent these pixels determines the number of gray levels used to describe each pixel.
The pixel values in black-and-white images can be either 0 (black) or 1 (white), representing the
darker and brighter areas of the image, respectively, as shown in Figure 1.7(a).
Figure 1.7 Available pixel intensities for 1-bit, 2-bit, 3-bit, and 4-bit image data
If n bits are used to represent a pixel, then there will be 2^n pixel values ranging from 0 to (2^n - 1).
Here 0 and (2^n - 1) correspond to black and white, respectively, and all other intermediate values
represent shades of gray. Such images are said to be monochromatic (Figures 1.7(b) through 1.7(d)).
A combination of multiple monochrome images results in a color image. For example, an RGB
image is a combined set of three individual 2-D pixel arrays that are interpreted as red, green, and
blue color components.
An image histogram is a graph of pixel intensity (on the x-axis) versus number of pixels (on the
y-axis). The x-axis has all available gray levels, and the y-axis indicates the number of pixels that
have a particular gray-level value. Multiple gray levels can be combined into groups in order to
reduce the number of individual values on the x-axis.
Figure 1.8(a) shows a simple 4 × 4 black-and-white image whose histogram is shown in Figure
1.8(b). Here the first vertical line of the histogram (at gray level 0) indicates that there are 4 black
pixels in the image. The second line indicates that there are 12 white pixels in the image.
Figure 1.9(a) is a gray-scale image. The four pixel intensities (including black and white) of this
image are represented by the four vertical lines of the associated histogram in Figure 1.9(b). Here the
x-axis values span from 0 to 255, which means that there are 256 (= 2^8) possible pixel intensities.
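The 4 x 4 example of Figure 1.8 can be reproduced with a short histogram routine (illustrative code; the placement of the four black pixels is an assumption, since only the counts are given in the text):

```python
import numpy as np

def gray_histogram(img, levels=256):
    """Count how many pixels take each gray level 0 .. levels-1."""
    return np.bincount(img.ravel(), minlength=levels)

# A 4x4 image like Figure 1.8(a): 4 black pixels (placed here on the
# diagonal for illustration), the remaining 12 pixels white.
img = np.full((4, 4), 255, dtype=np.uint8)
np.fill_diagonal(img, 0)
hist = gray_histogram(img)
# hist[0] counts the black pixels, hist[255] the white ones.
```

The histogram discards all spatial layout: any arrangement of 4 black and 12 white pixels yields the same two spikes.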
a) Thresholding
A gray scale image can be converted into a black-and-white image by choosing a threshold and
converting all values above the threshold to the maximum intensity and all values below the
threshold to the minimum intensity. A histogram is a convenient means of identifying an
appropriate threshold.
In Figure 1.10, the pixel values are concentrated in two groups, and the threshold would be a value
in the middle of these two groups.
In Figure 1.11, the more continuous nature of the histogram indicates that the image is not a good
candidate for thresholding, and that finding the ideal threshold value would be difficult.
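Once a threshold has been read off the histogram, thresholding itself is a one-line operation (a sketch; the threshold value 128 below is an arbitrary illustration, not one derived from a real histogram):

```python
import numpy as np

def binarize(img, t):
    """Pixels above the threshold t become white (255), the rest black (0)."""
    return np.where(img > t, 255, 0).astype(np.uint8)

# Two dark pixels and two bright pixels separate cleanly at t = 128.
gray = np.array([[10, 200],
                 [90, 240]], dtype=np.uint8)
bw = binarize(gray, 128)
# bw == [[0, 255], [0, 255]]
</imports>```

With a bimodal histogram like Figure 1.10, any threshold in the valley between the two groups produces the same black-and-white result, which is why such images threshold well.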
b) Image Enhancement:
Image enhancement refers to the process of transforming an image so as to make it more visually
appealing or to facilitate further analysis. It can involve simple operations (addition,
multiplication, logarithms, etc.) or advanced techniques such as contrast stretching and histogram
equalization.
An image histogram can help us to quickly identify processing operations that are appropriate for
a particular image. For example, if the pixel values are concentrated in the far-left portion of the
histogram (this would correspond to a very dark image), we can improve the image by shifting the
values toward the center of the available range of intensities, or by spreading the pixel values such
that they more fully cover the available range.
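Spreading the pixel values over the available range, as described above, is usually done with a linear contrast stretch (gray stretch); a minimal sketch, assuming the image is not constant-valued:

```python
import numpy as np

def contrast_stretch(img, out_max=255.0):
    """Linearly map the darkest pixel to 0 and the brightest to out_max,
    spreading all intermediate values over the full range."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) * out_max / (hi - lo)

# A dark, low-contrast patch (values 40..80) now spans 0..255.
dark = np.array([[40.0, 50.0],
                 [60.0, 80.0]])
stretched = contrast_stretch(dark)
```

This is exactly the "gray stretch" step mentioned in the preprocessing stage: it increases the contrast between the lane and the road so the lane lines stand out.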
1.4 ADVANTAGES OF AUTONOMOUS VEHICLES:
Reduced Accidents.
Reduced Traffic Congestion.
Reduced CO2 Emissions.
Increased Lane Capacity.
Lower Fuel Consumption.
Last Mile Services.
Reduced Travel Time and Transportation Costs.
1.5 KEYWORDS:
Image processing, obstacle detection, GSM, GPS, gas sensor, LCD display, Raspberry Pi camera,
vibration sensor.
1.6 EXISTING SYSTEM:
In the existing system the autonomous vehicle drives by itself using image processing;
histogram of an image, Canny edge detection, gray scaling of the path and median filtering are the
techniques used to drive the vehicle along its path. But if the vehicle faces any obstacle in its path,
it stops at that position and we cannot identify its location. If the vehicle encounters a disturbance in its
path from the surrounding environment, it needs to detect and identify it and send the information to the
user. So, to overcome these drawbacks, we proposed a system with additional features for the autonomous
vehicle.
1.7 PROPOSED SYSTEM:
In the proposed system, along with image processing we have implemented GPS, a GSM system,
information display through an LCD display, smoke detection, accident prevention, and signal lights
on the vehicle. The GPS shows the location of the vehicle so that we can identify it, and the GSM module
sends messages to the user. The LCD display is used to show the information sent by the user
through GSM. If we need to stop the vehicle, we can send the message "STOP"; the vehicle receives the
message through GSM, stops its movement, and shares the location where it stopped through GPS. If we
need the location of the vehicle along its path, we send the text "L" to the vehicle, and the vehicle then
shares its current location. If any obstacle is present in front of the vehicle's path, it waits for some
time until the obstacle is removed, or else it sends the message "AN OBSTACLE IS IN MY PATH, I
CANT MOVE FORWARD!" If shocks or vibrations occur around the vehicle, it sends a message through
GSM and shares the location. If heavy smoke is detected in the path of the vehicle, it sends the message
"HEAVY SMOKE IS IN MY PATH, I CANT MOVE FORWARD!" and shares its location. Any
load information or notifications which we want to show to others can be displayed on the
vehicle through the LCD display; the information is sent over GSM and then displayed. The
signal lights of the vehicle work as follows: when the vehicle turns left, the left LED glows; when the
vehicle turns right, the right LED glows; and if the vehicle stops its movement, both
LEDs glow.
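The "STOP"/"L" message protocol described above can be sketched as a small command dispatcher (the class, method names and placeholder coordinates are hypothetical illustrations; the real firmware in the appendix wires this logic to the GSM and GPS modules):

```python
class VehicleStub:
    """Hypothetical stand-in for the real controller: records whether
    the motors are stopped and returns a fixed GPS fix."""
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

    def location(self):
        return "16.98N, 82.24E"   # placeholder coordinates, not real data

def handle_sms(text, vehicle):
    """Dispatch an incoming GSM text to the vehicle.
    'STOP' halts the motors and replies with the GPS fix;
    'L' replies with the current location only."""
    cmd = text.strip().upper()
    if cmd == "STOP":
        vehicle.stop()
        return "STOPPED AT " + vehicle.location()
    if cmd == "L":
        return "LOCATION: " + vehicle.location()
    return "UNKNOWN COMMAND"

v = VehicleStub()
reply = handle_sms("stop", v)   # motors halt, reply carries the GPS fix
```

Normalizing the incoming text with `strip().upper()` makes the commands tolerant of the casing and whitespace that real SMS input tends to carry.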
A vibration sensor is used so that if the vehicle experiences heavy vibrations or shocks along its travel
path, it detects them and stops the vehicle immediately. We also added obstacle avoidance to
the vehicle using the Raspberry Pi camera, which detects objects near the vehicle. The
Raspberry Pi camera used in the vehicle scans the path, filters the images of the path, and
sends information about the path to the vehicle, so the vehicle moves along a particular path. Traffic
sign detection is the method we implemented in this vehicle to detect traffic signs and obey the rules
of the road. Through traffic sign detection the vehicle recognizes the symbols and follows the path
accordingly.
CHAPTER 2
INTRODUCTION TO EMBEDDED SYSTEMS
2.1 INTRODUCTION:
Modern embedded systems are often based on microcontrollers (i.e. microprocessors with
integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips
for memory and peripheral interface circuits) are also common, especially in more complex
systems. In either case, the processor(s) used may be types ranging from general purpose to those
specialized in a certain class of computations or even custom designed for the application at hand.
A common standard class of dedicated processors is the digital signal processor (DSP).
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce
the size and cost of the product and increase the reliability and performance. Some embedded
systems are mass-produced, benefiting from economies of scale.
Embedded systems range from portable devices such as digital watches and MP3 players, to large
stationary installations like traffic light controllers, programmable logic controllers, and large
complex systems like hybrid vehicles, medical imaging systems, and avionics. Complexity varies
from low, with a single microcontroller chip, to very high with multiple units, peripherals and
networks mounted inside a large equipment rack.
2.2 BACKGROUND:
The origins of the microprocessor and the microcontroller can be traced back to the MOS integrated
circuit, which is an integrated circuit chip fabricated from MOSFETs (metal-oxide- semiconductor
field-effect transistors) and was developed in the early 1960s. By 1964, MOS chips had reached
higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further
increased in complexity at a rate predicted by Moore's law, leading to large- scale integration (LSI)
with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI
chips to computing was the basis for the first microprocessors, as engineers began recognizing that
a complete computer processor system could be contained on several MOS LSI chips.[5]
The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett
AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip
microprocessor was the Intel 4004, released in 1971. It was developed by Federico Faggin, using
his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and
Busicom engineer Masatoshi Shima.
One of the very first recognizably modern embedded systems was the Apollo Guidance
Computer, developed around 1965 by Charles Stark Draper at the MIT
Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was
considered the riskiest item in the Apollo project, as it employed the then newly developed
monolithic integrated circuits to reduce the computer's size and weight.
An early mass-produced embedded system was the Autonetics D-17 guidance computer for the
Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the
D-17 was replaced with a new computer that represented the first high-volume use of integrated
circuits.
Since these early applications in the 1960s, embedded systems have come down in price and there
has been a dramatic rise in processing power and functionality. An early microprocessor, the Intel
4004 (released in 1971), was designed for calculators and other small systems but still required
external memory and support chips. By the early 1980s, memory, input and output system
components had been integrated into the same chip as the processor forming a microcontroller.
Microcontrollers find applications where a general-purpose computer would be too costly. As the
cost of microprocessors and microcontrollers fell the prevalence of embedded systems increased.
Today, a comparatively low-cost microcontroller may be programmed to fulfill the same role as a
large number of separate components. With microcontrollers, it became feasible to replace, even
in consumer products, expensive knob-based analog components such as potentiometers and
variable capacitors with up/down buttons or knobs read out by a microprocessor. Although in this
context an embedded system is usually more complex than a traditional solution, most of the
complexity is contained within the microcontroller itself. Very few additional components may be
needed and most of the design effort is in the software. Software prototype and test can be quicker
compared with the design and construction of a new circuit not using an embedded processor.
2.3 CHARACTERISTICS:
Embedded systems are designed to do some specific task, rather than be a general-purpose
computer for multiple tasks. Some also have real-time performance constraints that must be met,
for reasons such as safety and usability; others may have low or no performance requirements,
allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems consist of small
parts within a larger device that serves a more general purpose. For example, the Gibson Robot
Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot
Guitar is, of course, to play music. Similarly, an embedded system in an automobile provides a
specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are stored
in read-only memory or flash memory chips. They run with limited computer hardware resources:
little memory, small or non-existent keyboard or screen.
2.3.1 User Interface:
Embedded systems range from no user interface at all, in systems dedicated only to one task, to
complex graphical user interfaces that resemble modern computer desktop operating systems.
Simple embedded devices use buttons, LEDs, graphic or character LCDs (HD44780 LCD for
example) with a simple menu system.
More sophisticated devices that use a graphical screen with touch sensing or screen-edge buttons
provide flexibility while minimizing space used: the meaning of the buttons can change with the
screen, and selection involves the natural behavior of pointing at what is desired. Handheld systems
often have a screen with a "joystick button" for a pointing device.
Some systems provide user interface remotely with the help of a serial (e.g. RS-232, USB, I²C, etc.)
or network (e.g. Ethernet) connection. This approach gives several advantages: extends the
capabilities of embedded system, avoids the cost of a display, simplifies BSP and allows one to
build a rich user interface on the PC. A good example of this is the combination of an embedded
web server running on an embedded device (such as an IP camera) or a network router. The user
interface is displayed in a web browser on a PC connected to the device, therefore needing no
software to be installed.
2.3.2 Processors in embedded systems:
Embedded processors can be broken into two broad categories. Ordinary microprocessors (μP) use
separate integrated circuits for memory and peripherals. Microcontrollers (μC) have on-chip
peripherals, thus reducing power consumption, size and cost. In contrast to the personal computer
market, many different basic CPU architectures are used since the software is custom-developed
for an application and is not a commodity product installed by the end user. Both von Neumann
and various degrees of Harvard architectures are used. RISC as well as non-RISC processors
are found. Word lengths vary from 4-bit to 64-bit and beyond, although the most typical remain
8/16-bit. Most architectures come in a large number of different variants and shapes, many of which
are also manufactured by several different companies.
Numerous microcontrollers have been developed for embedded systems use. General-purpose
microprocessors are also used in embedded systems; but generally, require more support circuitry
than microcontrollers.
2.3.3 Readymade computer boards:
PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for
small, low-volume embedded and ruggedized systems, mostly x86-based. These are often
physically small compared to a standard PC, although still quite large compared to most simple
(8/16-bit) embedded systems. They often use DOS, Linux, NetBSD, or an embedded real-time
operating system such as MicroC/OS-II, QNX or VxWorks. Sometimes these boards use non-x86
processors.
In certain applications, where small size or power efficiency are not primary concerns, the
components used may be compatible with those used in general-purpose x86 personal computers.
Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly
integrated, physically smaller or have other attributes making them attractive to embedded
engineers. The advantage of this approach is that low-cost commodity components may be used
along with the same software development tools used for general software development. Systems
built in this way are still regarded as embedded since they are integrated into larger devices and
fulfill a single role. Examples of devices that may adopt this approach are ATMs and arcade
machines, which contain code specific to the application.
However, most ready-made embedded systems boards are not PC-centered and do not use the ISA
or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having
a standardized bus connecting discrete components, and the environment for both hardware and
software tools may be very different.
One common design style uses a small system module; perhaps the size of a business card, holding
high density BGA chips such as an ARM-based system-on-a-chip processor and peripherals,
external flash memory for storage, and DRAM for runtime memory. The module vendor will
usually provide boot software and make sure there is a selection of operating systems, usually
including Linux and some real-time choices. These modules can be manufactured in high volume,
by organizations familiar with their specialized testing issues, and combined with much lower
volume custom mainboards with application-specific external peripherals.
Implementation of embedded systems has advanced so that they can easily be implemented with
already-made boards that are based on worldwide accepted platforms. These platforms include, but
are not limited to, Arduino and Raspberry Pi.
2.3.4 Peripherals:
A close-up of the SMSC LAN91C110 (SMSC 91x) chip, an embedded Ethernet chip
Embedded systems talk with the outside world via peripherals, such as:
2.4 APPLICATIONS:
Embedded systems are commonly found in consumer, industrial, automotive, home appliances,
medical, commercial and military applications.
Telecommunications systems employ numerous embedded systems from telephone switches for
the network to cell phones at the end user. Computer networking uses dedicated routers and
network bridges to route data.
Consumer electronics include MP3 players, television sets, mobile phones, video game consoles,
digital cameras, GPS receivers, and printers. Household appliances, such as microwave ovens,
washing machines and dishwashers, include embedded systems to provide flexibility, efficiency
and features. Advanced HVAC systems use networked thermostats to more accurately and
efficiently control temperature that can change by time of day and season. Home automation uses
wired- and wireless-networking that can be used to control lights, climate, security, audio/visual,
surveillance, etc., all of which use embedded devices for sensing and controlling.
Transportation systems from flight to automobiles increasingly use embedded systems. New
airplanes contain advanced avionics such as inertial guidance systems and GPS receivers that also
have considerable safety requirements. Various electric motors — brushless DC motors, induction
motors and DC motors — use electronic motor controllers. Automobiles, electric vehicles, and
hybrid vehicles increasingly use embedded systems to maximize efficiency and reduce pollution.
Other automotive safety systems include anti-lock braking system (ABS), Electronic Stability
Control (ESC/ESP), traction control (TCS) and automatic four-wheel drive.
Medical equipment uses embedded systems for vital signs monitoring, electronic stethoscopes for
amplifying sounds, and various medical imaging (PET, SPECT, CT, and MRI) for non-invasive
internal inspections. Embedded systems within medical equipment are often powered by industrial
computers.
Embedded systems are used in transportation, fire safety, safety and security, medical applications
and life-critical systems. Unless connected to wired or wireless networks via on-chip 3G cellular
or other methods for IoT monitoring and control purposes, these systems can be isolated from
hacking and thus be more secure. For fire safety, the systems can be designed to
withstand higher temperatures and continue to operate. In dealing with
security, the embedded systems can be self-sufficient and able to cope with cut electrical and
communication systems.
Motes, a new class of miniature wireless devices, are networked wireless sensors. Wireless
sensor networking, WSN, makes use of miniaturization made possible by advanced IC design to
couple full wireless subsystems to sophisticated sensors, enabling people and companies to
measure a myriad of things in the physical world and act on this information through IT monitoring
and control systems. These motes are completely self-contained, and will typically run off a battery
source for years before the batteries need to be changed or charged.
Embedded Wi-Fi modules provide a simple means of wirelessly enabling any device that
communicates via a serial port.
Small-scale embedded systems can be designed with a single 8- or 16-bit microcontroller and can
be operated from a battery. For developing a small-scale embedded system, an editor, an assembler,
an IDE, and a cross-assembler are the most vital programming tools.
Medium-scale embedded systems are designed using 16- or 32-bit microcontrollers. These systems
involve both hardware and software complexities. C, C++, Java, and source-code engineering tools
are used to develop this kind of embedded system.
Sophisticated embedded systems have many hardware and software complexities. They may require
IPs, ASIPs, PLAs, configurable processors, or scalable processors. The development of such a
system requires hardware/software co-design, and the components need to be combined in the
final system.
2.6 ADVANTAGES:
CHAPTER- 3
HARDWARE COMPONENTS
[Block diagram: a 12V Li-ion battery feeds boost and buck converters (5V rails) that power the
controller; an LCD (16*2), gas sensor, GSM module, and L298N motor driver driving four DC
motors (M) are connected to it.]
3.1 RASPBERRY PI 3B +:
The Raspberry Pi 3 Model B+ is the final version of the third-generation single-board computer,
boasting a 64-bit quad-core processor running at 1.4 GHz, dual-band 2.4 GHz and 5 GHz
wireless LAN, Bluetooth 4.2/BLE, faster Ethernet, and Power over Ethernet (PoE) capability via
a separate PoE HAT. The dual-band wireless LAN comes with modular compliance
certification, allowing the board to be designed into end products with significantly reduced
wireless LAN compliance testing, improving both cost and time to market. The Raspberry Pi
3 Model B+ maintains the same mechanical footprint as both the Raspberry Pi 2 Model B and the
Raspberry Pi 3 Model B.
The processor is built using a system-on-chip (SoC) approach. The Broadcom BCM2837B0 is the
chip used in the Raspberry Pi 3B+. Its underlying architecture is identical to that of the BCM2837A0
chip used in other versions of the Pi; the ARM core hardware is the same, only the frequency is rated
higher. The BCM2837B0 chip is packaged slightly differently from the other processors, and most
notably includes a heat spreader for better thermals. This allows higher clock frequencies (or
running at lower voltages to reduce power consumption), and more accurate monitoring and
control of the chip's temperature.
The Pi 3 Model B+ has a 1.4 GHz 64-bit quad-core Broadcom ARM Cortex-A53
processor with 512 KB of shared cache memory. The improved packaging, alongside the heat
spreader, has helped to boost the processor's performance.
3.1.3. MEMORY:
The Raspberry Pi 3 has 1 GB of LPDDR2 SDRAM (Low-Power Double Data Rate Synchronous
Dynamic Random Access Memory), a type of double-data-rate SDRAM that consumes less
power. It has a 32 KB Level 1 (L1) cache and a 512 KB Level 2 (L2) cache alongside the 1 GB
of RAM.
3.1.4. NETWORKING:
It has Gigabit Ethernet over USB 2.0 with a maximum of about 300 Mbps of data throughput. It
also has Power-over-Ethernet support (with a separate PoE HAT) and improved PXE network and
USB mass-storage booting. It also offers wireless connectivity: 2.4 GHz and 5 GHz
IEEE 802.11 b/g/n/ac wireless LAN, Bluetooth 4.2, and BLE.
The wireless and Bluetooth components are now inside a metallised can. This component group
has been FCC-approved as a module, which means that if you incorporate a Pi 3B+ into a product,
wireless/BT transmission doesn't require any further certification. The can also has the
Raspberry Pi logo embossed on it, which is a rather nice touch.
The Pi 3B+'s 802.11ac at 5 GHz delivers up to around 100 Mbit/s. If you operate in a busy Wi-Fi
environment, switching to 5 GHz can give some significant improvements. It's always good to
have choices.
Gigabit (1000BASE-T) Ethernet
The Pi 3B+ uses a Microchip LAN7515 chip as a combined Ethernet controller and USB 2.0 hub.
It can therefore take advantage of a Gigabit Ethernet connection, but because of USB 2.0
limitations, its maximum throughput is around 300 Mbit/s.
Fig 3.6 Microchip LAN7515 Gigabit Ethernet and USB 2.0 Hub chip
This is still a big performance hike and will be plenty for most applications. My broadband is
100 Mbit, so the Pi3B+ is now able to fully utilise that speed. With the older Pi3B I could get a
maximum speed of around 60 Mbit.
3.1.5. VIDEO:
The Raspberry Pi 3 has one full-size HDMI port and a MIPI DSI display port. The Display Serial
Interface (DSI), specified by the Mobile Industry Processor Interface (MIPI) Alliance, aims at
reducing the cost of display controllers and is designed for use with the Raspberry Pi Touch
Display. There is also a MIPI CSI camera port, a camera serial interface bus used to transfer
data from the camera module. At the right-hand edge of the board you'll find 40 metal pins,
split into two rows of 20 pins.
The Raspberry Pi provides audio output through both a 3.5 mm audio jack and the HDMI port.
3.1.7. USB:
The BCM2837 USB port is On-The-Go (OTG) capable. If it is used as either a fixed slave or a
fixed master, the USB OTGID pin should be tied to ground. The USB port (pins USB DP and USB
DM) must be routed as 90-ohm differential PCB traces. Note that the port is capable of being used
as a true OTG port; however, there is no official documentation, although some users have had
success making this work.
GPIO stands for General-Purpose Input/Output, a type of pin found on an integrated circuit that
does not have a specific dedicated function. There are 40 GPIO pins on the Raspberry Pi. While
most pins are used for sending a signal to a certain component, the function of GPIO pins is
customizable and controllable by software.
A powerful feature of the Raspberry Pi is the row of GPIO (general-purpose input/output) pins
along the top edge of the board. A 40-pin GPIO header is found on all current Raspberry Pi boards
(unpopulated on Pi Zero and Pi Zero W). Prior to the Pi 1 Model B+ (2014), boards comprised a
shorter 26-pin header.
Any of the GPIO pins can be designated (in software) as an input or output pin and used for a wide
range of purposes.
Note: the numbering of the GPIO pins is not in numerical order; GPIO pins 0 and 1 are present
on the board (physical pins 27 and 28) but are reserved for advanced use (see below).
Voltages
Two 5V pins and two 3V3 pins are present on the board, as well as a number of ground pins (0V),
which are unconfigurable. The remaining pins are all general-purpose 3V3 pins, meaning outputs
are set to 3V3 and inputs are 3V3-tolerant.
Outputs
A GPIO pin designated as an output pin can be set to high (3V3) or low (0V).
Inputs
A GPIO pin designated as an input pin can be read as high (3V3) or low (0V). This is made easier
with the use of internal pull-up or pull-down resistors. Pins GPIO2 and GPIO3 have fixed pull-up
resistors, but for other pins this can be configured in software.
More
A plastic-covered chip can be seen at the bottom edge of the board, just behind the middle
set of USB ports. This is the USB controller, responsible for running the four USB ports.
Next to this is an even smaller chip, the network controller, which handles the Raspberry Pi's
Ethernet network port. A final black chip, smaller than the rest, can be found a little above the
micro USB power connector at the upper-left of the board; this is known as a power management
integrated circuit (PMIC), and it handles turning the power that comes in from the micro USB port
into the power the Pi needs to run.
The power supply requirements differ between Raspberry Pi models. All models require a 5.1 V
supply, but the current required generally increases with the model. The Raspberry Pi 3
requires a 2.5 A micro-USB power supply.
The power requirements of the Raspberry Pi increase as you make use of the various interfaces.
The GPIO pins can safely draw 50 mA distributed across all the pins, the HDMI port uses 50 mA,
the camera module requires 250 mA, and keyboards and mice can take as little as 100 mA or over
1000 mA.
Raspbian OS is the official operating system for the Raspberry Pi. Other systems, such as Ubuntu,
other Linux distributions, and Windows 10 IoT Core, are also supported. The OS is installed on a
MicroSD card; the MicroSD slot is located on the bottom of the Raspberry Pi board.
The Micro SD card slot on the Raspberry Pi 3 is located just below the Display Serial Adapter on
the other side. Insert the Micro SD card which was loaded with NOOBS in the slot and plug in
power supply.
The Arduino Uno is an open-source microcontroller board based on the Microchip ATmega328P
microcontroller and developed by Arduino.cc. The board is equipped with sets of digital and analog
input/output pins that may be interfaced to various expansion boards and other circuits. The board
has 14 digital input/output pins and 6 analog input pins, and is programmable with the Arduino
IDE (Integrated Development Environment) via a Type-B USB cable. It can be powered by the
USB cable or by an external 9-volt battery, though it accepts voltages between 7 and 20 volts.
The word 'Uno' means 'one' in Italian and was chosen to mark the initial release of the Arduino
software. The Uno board is the first in a series of USB-based Arduino boards. The ATmega328
on the board comes preprogrammed with a bootloader that allows new code to be uploaded to it
without the use of an external hardware programmer.
It is an open source computer hardware and software company, project, and user community that
designs and manufactures single-board microcontrollers and microcontroller kits for building
digital devices and interactive objects that can sense and control objects in the physical world.
Since Arduino is open source, the CAD and PCB design files are freely available. Anyone can buy
a pre-assembled original Arduino board or a cloned board from another company. You can also
build an Arduino for yourself or for sale. Although it is permitted to build and sell cloned
Arduino boards, it is not permitted to use the name Arduino and the corresponding logo. Most
boards are designed around the Atmel ATmega328.
3.2.1 MICROCONTROLLER:
a) Port A consists of the pins PA0 to PA7. These pins serve as analog inputs to the analog-
to-digital converter. If the analog-to-digital converter is not used, port A acts as an eight (8)
bit bidirectional input/output port.
b) Port B consists of the pins PB0 to PB7. This port is an 8-bit bidirectional port with
internal pull-up resistors.
c) Port C consists of the pins PC0 to PC7. The output buffers of port C have symmetrical
drive characteristics with both high sink and source capability.
d) Port D consists of the pins PD0 to PD7. It is also an 8-bit input/output port with
internal pull-up resistors.
3.2.2 ARCHITECTURE:
RISC stands for Reduced Instruction Set Computer. It is a computer instruction set design that
allows a microprocessor to use fewer cycles per instruction (CPI) than a complex instruction
set computer (CISC). A RISC computer has a small set of simple and general instructions, rather
than a large set of complex and specialized ones.
The main distinguishing feature of RISC is that the instruction set is optimized for a highly regular
instruction pipeline flow. RISC processors can approach a CPI (clocks per instruction) of one
cycle. This is due to the optimization of each instruction on the CPU and a technique called
pipelining.
3.2.3 FEATURES:
It is an open-source design. An advantage of being open source is that it has a
large community of people using and troubleshooting it, which makes it easy to get help in
debugging projects.
It has 14 digital pins and 6 analog pins. These pins are used to connect external hardware
to your Arduino Uno board.
It has a 16 MHz clock speed, which is fast enough for most applications.
It has 32 KB of flash memory for storing the code, 1 KB of EEPROM (Electrically
Erasable Programmable Read-Only Memory) and 2 KB of SRAM (Static Random Access
Memory).
It has a button to reset the program on the chip.
a) Power Supply
a. The Arduino can be powered from an external power supply or from the USB
connection. The external power supply (6 to 20 volts) is typically a battery or an
AC-to-DC adapter. An adapter is connected by plugging a center-positive plug
(2.1 mm) into the power jack on the board. Battery terminals can be connected to
the Vin and GND pins.
b) Vin:
Vin is the input voltage to the Arduino when it is using an external power supply, as
opposed to the 5 volts from the USB connection or another regulated power supply
(RPS). Voltage can also be supplied through this pin.
c) 5V
This regulated supply powers the microcontroller as well as the other components
used on the Arduino board. It comes from the input voltage via the on-board
regulator.
d) 3.3V
A 3.3 V supply is generated by the on-board regulator; the maximum current draw
is 50 mA.
e) GND
GND (ground) pins.
f) Input and output pins
Serial: 0 (RX) and 1 (TX). Used to receive (RX) and transmit (TX) TTL serial data. These
pins are connected to the corresponding pins of the ATmega8U2 USB-to-TTL serial chip.
External Interrupts: 2 and 3. These pins can be configured to trigger an interrupt on a low
value, a rising or falling edge, or a change in value. See the attachInterrupt() function for
details.
PWM: 3, 5, 6, 9, 10, and 11. Provide 8-bit PWM output with the analogWrite() function.
SPI: 10 (SS), 11 (MOSI), 12 (MISO), 13 (SCK). These pins support SPI communication using
the SPI library.
LED: 13. There is a built-in LED connected to digital pin 13. When the pin is at a HIGH value,
the LED is on; when the pin is LOW, it is off.
The Uno has 6 analog inputs, labeled A0 through A5, each of which provides 10 bits of resolution
(i.e. 1024 different values). By default they measure from ground to 5 volts, though it is possible
to change the upper end of their range using the AREF pin and the analogReference() function.
Additionally, some pins have specialized functionality:
TWI: A4 or SDA pin and A5 or SCL pin. Support TWI communication using the Wire
library. There are a couple of other pins on the board:
AREF. Reference voltage for the analog inputs. Used with analogReference().
Reset. Bring this line LOW to reset the microcontroller. Typically used to add a reset button
to shields which block the one on the board.
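The 10-bit analog-to-digital conversion described above can be illustrated with a small sketch (the function name is our own; one common convention divides by 1023 so that a full-scale reading maps exactly to the reference voltage, though dividing by 1024 is also seen):

```python
def adc_to_volts(raw, vref=5.0, bits=10):
    """Convert a raw analogRead()-style value to volts for a given reference.
    A 10-bit ADC yields raw values 0..1023; full scale maps to vref."""
    if not 0 <= raw < 2 ** bits:
        raise ValueError("raw reading out of range")
    return raw * vref / (2 ** bits - 1)

# A raw reading of 512 on the default 5 V reference is roughly 2.5 V.
mid_scale = adc_to_volts(512)
```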
3.2.6 COMMUNICATION:
The Arduino Uno has a number of facilities for communicating with a computer, another Arduino,
or other microcontrollers. The ATmega328 provides UART TTL (5V) serial communication,
which is available on digital pins 0 (RX) and 1 (TX). An ATmega16U2 on the board channels this
serial communication over USB and appears as a virtual com port to software on the computer.
The '16U2 firmware uses the standard USB COM drivers, and no external driver is needed.
However, on Windows, an .inf file is required. The Arduino software includes a serial monitor
which allows simple textual data to be sent to and from the Arduino board. The
RX and TX LEDs on the board will flash when data is being transmitted via the USB-to-serial chip
and USB connection to the computer (but not for serial communication on pins 0 and 1).
A Software Serial library allows for serial communication on any of the Uno's digital pins. The
ATmega328 also supports I2C (TWI) and SPI communication. The Arduino software includes a
Wire library to simplify use of the I2C bus; see the documentation for details. For SPI
communication, use the SPI library.
It is cheap.
The Arduino software is compatible with all common operating systems,
such as Linux, Windows, and macOS.
It can be used for real-time applications.
The hardware, the software, and the IDE are all open source.
Sensors, electronic components and motors are easy to connect with jumper cables.
No lengthy setup is needed on the board; just plug it in and the code runs.
The price is low and few cables need to be connected.
It is widely used in real-time applications.
A variety of shields can be attached to extend its functionality.
Basic programming knowledge is enough to code the Arduino.
1. OBSTACLE AVOIDANCE ROBOT:
The main concept of this project is to design a robot that uses ultrasonic sensors to avoid
obstacles. It can perform tasks with some guidance or automatically. The robot vehicle's
intelligence is built in through the Arduino board, which is designed around the ATmega328
microcontroller from the Atmel family.
2. HOME AUTOMATION:
This project is to design a home automation system using an Arduino board with Bluetooth,
controlled by an Android-based smartphone. This project gives a modern
solution using smartphones. Here the Bluetooth module is attached to the Arduino board on
the receiver side, while on the transmitter side a Graphical User Interface (GUI)
application on the smartphone sends ON/OFF commands to the receiver, where the home
appliances are connected. By touching the particular GUI element, the different appliances can
be turned ON or OFF using this technology.
The Raspberry Pi camera board plugs directly into the CSI connector on the Raspberry Pi. It is
able to deliver a crystal-clear 5 MP still image or 1080p HD video recording at 30 fps (latest
version: 1.3). Custom designed and manufactured by the Raspberry Pi Foundation in the UK, the
Raspberry Pi camera board features a 5 MP (2592*1944 pixel) OmniVision OV5647 sensor in a
fixed-focus module. The module attaches to the Raspberry Pi by way of a 15-pin ribbon cable to
the dedicated 15-pin MIPI Camera Serial Interface (CSI), which was designed especially for
interfacing to cameras. The CSI bus is capable of extremely high data rates, and it exclusively
carries pixel data to the BCM2835 processor. The board itself is tiny, at around
25 mm * 20 mm * 9 mm, and weighs just over 3 g, making it perfect for mobile or other
applications where size and weight are important.
The sensor itself has a native resolution of 5 megapixels and has a fixed-focus lens on board. In
terms of still images, the camera is capable of 2592*1944 pixel images, and it also supports 1080p
video at 30 fps, 720p at 60 fps and 640*480 at 60/90 fps video recording. The camera is supported
in the latest version of Raspbian, the Raspberry Pi's preferred operating system.
The OV5647 is a 5-megapixel CMOS image sensor built on OmniVision's proprietary 1.4-micron
OmniBSI™ backside-illumination pixel architecture. The OV5647 delivers 5-megapixel
photography in addition to high-frame-rate 720p/60 and 1080p/30 high-definition (HD) video
capture in an industry-standard camera module size of 8.5 x 8.5 x 5 mm, making it an ideal
solution for the mainstream mobile phone market.
The 720p/60 HD video is captured in full field of view (FOV) with 2x2 binning to double the
sensitivity and improve signal-to-noise ratio (SNR). The post binning re-sampling filter helps
minimize spatial and aliasing artifacts to provide superior image quality.
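The idea of 2x2 binning, where each group of four neighbouring pixels is combined into one output pixel, can be sketched in a few lines (this illustration averages the four pixels; real sensors may instead sum the charges, which is what doubles the sensitivity):

```python
def bin_2x2(pixels):
    """Combine each 2x2 block of a grayscale image (a list of rows with even
    dimensions) into a single averaged output pixel, halving width and height."""
    h, w = len(pixels), len(pixels[0])
    return [
        [
            (pixels[r][c] + pixels[r][c + 1]
             + pixels[r + 1][c] + pixels[r + 1][c + 1]) / 4
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]
```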
The sensor also offers reduced crosstalk and photo-response non-uniformity, both of which
contribute to significant improvements in image quality and color reproduction. Additionally,
OmniVision CMOS image sensors use proprietary sensor technology to improve image quality by
reducing or eliminating common lighting/electrical sources of image contamination, such as
fixed-pattern noise and smearing, to produce a clean, fully stable color image.
The low power OV5647 supports a digital video parallel port or high-speed two-lane MIPI
interface, and provides full frame, windowed or binned 10-bit images in RAW RGB format. It
offers all required automatic image control functions, including automatic exposure control,
automatic white balance, automatic band filter, automatic 50/60 Hz luminance detection, and
automatic black level calibration
The Pi camera module is a portable, lightweight camera that supports the Raspberry Pi. It
communicates with the Pi using the MIPI camera serial interface protocol. It is normally used in
image processing, machine learning or surveillance projects. It is commonly used in
surveillance drones since the payload of the camera is very low.
The vibration sensor is also called a piezoelectric sensor. These sensors are flexible devices
used for measuring various processes. The sensor uses the piezoelectric effect,
converting changes in acceleration, pressure, temperature, force or strain into an electrical
charge. The same piezoelectric principle is also used in sensors that detect fragrances in the air
by directly measuring capacitance and quality factor.
The sensitivity of these sensors normally ranges from 10 mV/g to 100 mV/g, although lower
and higher sensitivities are also available. The sensitivity should be selected based on
the application, so it is essential to know the range of vibration amplitudes to which the
sensor will be exposed during measurements.
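Given a sensor's sensitivity in mV/g, the measured output voltage maps directly to acceleration; as a small illustrative sketch (the function name is our own):

```python
def accel_from_output(v_out_mv, sensitivity_mv_per_g=100.0):
    """Estimate acceleration (in g) from the sensor output voltage (mV),
    given the sensor's sensitivity in mV per g."""
    return v_out_mv / sensitivity_mv_per_g

# A 250 mV reading on a 100 mV/g sensor corresponds to about 2.5 g.
reading_g = accel_from_output(250.0, 100.0)
```

This also shows why sensitivity must match the expected amplitude range: a 10 mV/g sensor needs ten times the vibration to produce the same output voltage.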
3.4.1 SPECIFICATIONS:
The vibration sensor detector is designed for security applications. When the vibration sensor
alarm recognizes movement or vibration, it sends a signal to a control panel. It is a new type of
omni-directional, high-sensitivity security vibration detector with omni-directional detection.
A gas sensor is a device which detects the presence or concentration of gases in the atmosphere.
Based on the concentration of the gas the sensor produces a corresponding potential difference by
changing the resistance of the material inside the sensor, which can be measured as output voltage.
Based on this voltage value the type and concentration of the gas can be estimated.
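The resistance-to-voltage relationship described above is typically realised with a load resistor forming a voltage divider with the sensing element; a minimal sketch (function names are our own, assuming the usual divider arrangement):

```python
def sensor_output_voltage(vcc, r_sensor, r_load):
    """Output of the voltage divider formed by the sensing element (r_sensor)
    and a fixed load resistor (r_load): Vout = Vcc * RL / (Rs + RL)."""
    return vcc * r_load / (r_sensor + r_load)

def sensor_resistance(vcc, v_out, r_load):
    """Invert the divider to recover the sensing element's resistance,
    from which the gas concentration can then be estimated."""
    return r_load * (vcc - v_out) / v_out
```

As the gas concentration changes the element's resistance, the output voltage moves accordingly, which is the quantity actually measured.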
The gas sensor consists of a sensing element which comprises the following parts.
Gas sensors are typically classified into various types based on the type of the sensing element it
is built with. Below is the classification of the various types of gas sensors based on the sensing
element that are generally used in various applications:
GSM is a mobile communication modem; it stands for Global System for Mobile communication
(GSM). The concept of GSM was developed at Bell Laboratories in 1970. It is the most widely
used mobile communication system in the world. GSM is an open, digital cellular
technology used for transmitting mobile voice and data services, operating in the 850 MHz,
900 MHz, 1800 MHz and 1900 MHz frequency bands.
The GSM system was developed as a digital system using the time-division multiple access
(TDMA) technique for communication. A GSM system digitizes and compresses the data, then
sends it down a channel with two different streams of client data, each in its own particular time
slot. The digital system has the ability to carry data rates from 64 kbps to 120 Mbps. The SIM800L
is a miniature cellular module which allows GPRS transmission, sending and receiving SMS, and
making and receiving voice calls.
Its low cost, small footprint and quad-band frequency support make this module a perfect solution
for any project that requires long-range connectivity. After power is connected, the module boots
up, searches for a cellular network and logs in automatically. An on-board LED displays the
connection state (no network coverage: fast blinking; logged in: slow blinking).
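The SIM800L is driven over its serial pins with AT commands. As a sketch of the standard text-mode SMS flow (AT, AT+CMGF=1 and AT+CMGS are documented SIM800 commands; the helper function and the phone number placeholder are our own, and a serial library would be needed to actually write these strings to the module):

```python
def sms_command_sequence(number, text):
    """Build the AT command strings for sending one SMS in text mode.
    Each string would be written to the module's serial RXD line in turn,
    waiting for the module's response between commands."""
    return [
        "AT",                    # handshake: module should answer OK
        "AT+CMGF=1",             # select SMS text mode
        f'AT+CMGS="{number}"',   # start a message to this number
        text + "\x1a",           # message body, terminated by Ctrl-Z
    ]

seq = sms_command_sequence("+10000000000", "Gas leak detected!")
```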
3.6.1 SPECIFICATIONS
NET – antenna
VCC - supply voltage
RESET - reset
RXD - serial communication
TXD - serial communication
GND – ground
The DC gear BO motor is a battery-operated (BO) DC gear motor. A DC motor converts electrical
energy into mechanical energy. The addition of a gearhead to a motor reduces the speed and
increases the torque output. The BO series straight motor gives good torque and rpm at lower
operating voltages.
A gear motor is an all-in-one combination of a motor and gearbox. The addition of a gearhead to
a motor reduces the speed while increasing the torque output. The most important parameters for
gear motors are speed (rpm), torque (lb-in) and efficiency (%). To select the most suitable gear
motor, you must first compute the load, speed and torque requirements of your application. ISL
Products offers a variety of spur gear motors, planetary gear motors and worm gear motors to
meet all application requirements.
Most of our DC motors can be complemented with one of our unique gearheads, providing you
with a highly efficient gear motor solution.
Current (I) – (unit: A) indicated by a straight line, from no load to full motor lock. This shows the
relationship between current and torque.
Torque (T) – (unit: gf-cm) the load borne by the motor shaft, represented on the X-axis.
Efficiency (η) – (unit: %) calculated from the input and output values, represented by the dashed
line. To maximize the gear motor's potential, it should be used near its peak efficiency.
Output (P) – (unit: W) the amount of mechanical energy the gear motor puts out.
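The relationship between these curves can be made concrete with a small sketch (function names are our own; torque is taken in N·m here rather than gf-cm for simplicity): mechanical output power is P = T·ω, and efficiency is the ratio of mechanical output to electrical input.

```python
import math

def output_power_w(torque_nm, speed_rpm):
    """Mechanical output power P = T * omega, in watts."""
    omega = speed_rpm * 2 * math.pi / 60  # convert rpm to rad/s
    return torque_nm * omega

def efficiency(torque_nm, speed_rpm, volts, amps):
    """Efficiency = mechanical output power / electrical input power."""
    return output_power_w(torque_nm, speed_rpm) / (volts * amps)
```

For example, a motor delivering 0.1 N·m at 60 rpm while drawing 0.5 A at 6 V is producing about 0.63 W mechanically from 3 W electrical, i.e. roughly 21% efficiency.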
3.7.2 FEATURES:
3.8 BATTERY:
A lithium-ion (Li-ion) battery is a type of rechargeable battery widely used in portable electronics
and electric vehicles, and it is growing in popularity for military and aerospace applications. A
prototype Li-ion battery was
developed by Akira Yoshino in 1985, based on earlier research by John Goodenough, Stanley
Whittingham, Rachid Yazami and Koichi Mizushima during the 1970s–1980s, and then a
commercial Li-ion battery was developed by a Sony and Asahi Kasei team led by Yoshio Nishi in
1991.
In the batteries, lithium ions move from the negative electrode through an electrolyte to the positive
electrode during discharge, and back when charging. Li-ion batteries use an intercalated lithium
compound as the material at the positive electrode and typically graphite at the negative electrode.
The batteries have a high energy density, no memory effect (other than in LFP cells) and low self-
discharge. They can, however, be a safety hazard since they contain a flammable electrolyte, and if
damaged or incorrectly charged can lead to explosions and fires. Samsung was forced to recall
Galaxy Note 7 handsets following lithium-ion fires, and there have been several incidents
involving batteries on Boeing 787s.
Chemistry, performance, cost and safety characteristics vary across LIB types. Handheld
electronics mostly use lithium polymer batteries (with a polymer gel as electrolyte) with lithium
cobalt oxide (LiCoO2) as the cathode material, which offers high energy density but presents
safety risks, especially when damaged. Lithium iron phosphate (LiFePO4), lithium ion manganese oxide
battery (LiMn2O4, Li2MnO3, or LMO), and lithium nickel manganese cobalt oxide
(LiNiMnCoO2 or NMC) offer lower energy density but longer lives and less likelihood of fire or
explosion. Such batteries are widely used for electric tools, medical equipment, and other roles.
NMC and its derivatives are widely used in electric vehicles.
Research areas for lithium-ion batteries include extending lifetime, increasing energy density,
improving safety, reducing cost, and increasing charging speed, among others. Research has been
under way in the area of non-flammable electrolytes as a pathway to increased safety based on the
flammability and volatility of the organic solvents used in the typical electrolyte. Strategies include
aqueous lithium-ion batteries, ceramic solid electrolytes, polymer electrolytes, ionic liquids, and
heavily fluorinated systems.
Li-ion batteries provide lightweight, high energy density power sources for a variety of devices.
To power larger devices, such as electric cars, connecting many small batteries in a parallel circuit
is more effective and more efficient than connecting a single large battery. Such devices include:
Portable devices:
These include mobile phones and smartphones, laptops and tablets, digital cameras and
camcorders, electronic cigarettes, handheld game consoles and torches (flashlights).
Power tools:
Li-ion batteries are used in tools such as cordless drills, sanders, saws, and a variety of garden
equipment including whipper-snippers and hedge trimmers.
Electric vehicles:
Electric vehicle batteries are used in electric cars, hybrid vehicles, electric motorcycles and
scooters, electric bicycles, personal transporters and advanced electric wheelchairs. Also radio-
controlled models, model aircraft, aircraft, and the Mars Curiosity rover.
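The idea of combining many small cells into a larger pack can be sketched as follows (cells in series add voltage, cells in parallel add capacity; the function name and example cell figures are illustrative only):

```python
def pack_specs(cell_voltage, cell_capacity_ah, series, parallel):
    """Nominal voltage and capacity of an S x P pack of identical cells:
    series strings add voltage, parallel strings add amp-hour capacity."""
    return (cell_voltage * series, cell_capacity_ah * parallel)

# e.g. a 3S2P pack of nominal 3.7 V, 2.0 Ah cells: about 11.1 V, 4.0 Ah
pack_v, pack_ah = pack_specs(3.7, 2.0, series=3, parallel=2)
```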
The LM2596 series of regulators are monolithic integrated circuits that provide all the active
functions for a step-down (buck) switching regulator, capable of driving a 3-A load with excellent
line and load regulation
Requiring a minimum number of external components, these regulators are simple to use and
include internal frequency compensation, and a fixed frequency oscillator.
The LM2596 series operates at a switching frequency of 150 kHz, thus allowing smaller sized filter
components than what would be required with lower frequency switching regulators. Available in
a standard 5-pin TO-220 package with several different lead bend options, and a 5- pin TO-263
surface mount package.
A standard series of inductors are available from several different manufacturers optimized for use
with the LM2596 series. This feature greatly simplifies the design of switch-mode power supplies.
Other features include a ±4% tolerance on output voltage under specified input voltage and output
load conditions, and ±15% on the oscillator frequency. External shutdown is included, featuring
typically 80 μA standby current. Self-protection features include a two stage frequency reducing
current limit for the output switch and an over temperature shutdown for complete protection under
fault conditions.
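The basic behaviour of a step-down (buck) converter like the LM2596 can be summarised by the ideal continuous-conduction relation D = Vout/Vin; as a sketch (the function name is our own, and the formula ignores switching and conduction losses):

```python
def buck_duty_cycle(v_in, v_out):
    """Ideal continuous-mode duty cycle of a step-down (buck) converter:
    D = Vout / Vin, valid only when 0 < Vout < Vin (losses ignored)."""
    if not 0 < v_out < v_in:
        raise ValueError("buck converter requires 0 < Vout < Vin")
    return v_out / v_in

# Stepping the 12 V battery rail down to 5 V needs a duty cycle of ~42%.
d = buck_duty_cycle(12.0, 5.0)
```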
3.9.1 FEATURES:
a. New product available: LMR33630 36-V, 3-A, 400 kHz synchronous converter
b. 3.3-V, 5-V, 12-V, and adjustable output versions
c. Adjustable version output voltage range: 1.2-V to 37-V ±4% maximum over line and load
conditions
d. Available in TO-220 and TO-263 packages
e. 3-A output load current; input voltage range up to 40 V
f. Requires only four external components
g. Excellent line and load regulation specifications
h. 150-kHz Fixed-frequency internal oscillator
i. TTL shutdown capability
j. Low power standby mode, IQ, typically 80 μA
k. High efficiency; uses readily available standard inductors
l. Thermal shutdown and current-limit protection
3.9.2 SPECIFICATIONS:
3.10.1. DESCRIPTION:
The MT3608 is a constant frequency, 6-pin SOT23 current mode step-up converter intended for
small, low power applications. The MT3608 switches at 1.2MHz and allows the use of tiny, low
cost capacitors and inductors 2mm or less in height. Internal soft-start results in small inrush
current and extends battery life. The MT3608 features automatic shifting to pulse frequency
modulation mode at light loads. The MT3608 includes under-voltage lockout, current limiting, and
thermal overload protection to prevent damage in the event of an output overload. The MT3608 is
available in a small 6-pin SOT-23 package.
3.10.2 FEATURES
3.10.3 OPERATION
The MT3608 uses a fixed frequency, peak current mode boost regulator architecture to regulate
voltage at the feedback pin. The operation of the MT3608 can be understood by referring to the
block diagram of Figure 3. At the start of each oscillator cycle the MOSFET is turned on through
the control circuitry. To prevent sub-harmonic oscillations at duty cycles greater than 50 percent,
a stabilizing ramp is added to the output of the current sense amplifier and the result is fed into the
negative input of the PWM comparator. When this voltage equals the output voltage of the error
amplifier the power MOSFET is turned off. The voltage at the output of the error amplifier is an
amplified version of the difference between the 0.6V band gap reference voltage and the feedback
voltage. In this way the peak current level keeps the output in regulation. If the feedback voltage
starts to drop, the output of the error amplifier increases. This results in more current flowing
through the power MOSFET, thus increasing the power delivered to the output. The MT3608 has
internal soft-start to limit the amount of input current at startup and also to limit the amount of
overshoot on the output.
• Input voltage: 2 V to 24 V
• Efficiency: up to 93%
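The 0.6V band-gap reference described in the operation section sets the regulated output through an external feedback divider. A minimal sketch of that relation (in Python for illustration; the resistor values are hypothetical examples, and the formula assumed is the standard boost-converter feedback relation Vout = Vref · (1 + R1/R2)):

```python
# MT3608 output voltage set by the feedback divider.
# Vref = 0.6 V is the error-amplifier band-gap reference named in the text.
# R1 is the top resistor (Vout -> FB), R2 the bottom resistor (FB -> GND).

VREF = 0.6  # volts

def boost_output_voltage(r1_ohms, r2_ohms):
    """Regulated output voltage for a given feedback divider."""
    return VREF * (1 + r1_ohms / r2_ohms)

# Hypothetical example: R1 = 7.2 kOhm, R2 = 1 kOhm -> 4.92 V output
print(boost_output_voltage(7200, 1000))
```

With R2 fixed, raising R1 raises the output voltage proportionally, which is how modules based on this chip implement their adjustment potentiometer.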
3.10.4 APPLICATIONS
• Battery-Powered Equipment
• Set-Top Boxes
• LCD Bias Supply
• DSL and Cable Modems and Routers
• Networking cards powered from PCI or PCI express slots
At the heart of the module, under the chunky heat sink, is the L298N IC.
The L298N is a dual-channel H-Bridge motor driver capable of driving a pair of DC motors. That
means it can individually drive up to two motors, making it ideal for building two-wheel robot
platforms.
L298N Motor Driver Module - 5V Jumper, Power Supply Pins & Regulator
The L298N motor driver module is powered through 3-pin 3.5mm-pitch screw terminals. These
consist of pins for the motor power supply (Vs), ground, and the 5V logic power supply (Vss). The
L298N motor driver IC actually has two input power pins, viz. 'Vss' and 'Vs'. From the Vs pin the
H-Bridge gets the power for driving the motors, which can be 5 to 35V. Vss is used for driving the
logic circuitry, which can be 5 to 7V. Both sink to a common ground named 'GND'.
The L298 is an integrated monolithic circuit in 15-lead Multiwatt and PowerSO20 packages. It
is a high-voltage, high-current dual full-bridge driver designed to accept standard TTL logic levels
and drive inductive loads such as relays, solenoids, DC and stepping motors. Two enable inputs
are provided to enable or disable the device independently of the input signals. The emitters of the
lower transistors of each bridge are connected together and the corresponding external terminal
can be used for connecting an external sensing resistor. An additional supply input is provided so
that the logic works at a lower voltage.
The VCC pin supplies power for the motor. It can be anywhere between 5 to 35V. Remember,
if the 5V-EN jumper is in place, you need to supply 2 V more than the motor's actual voltage
requirement in order to get maximum speed out of your motor.
The 5V pin supplies power for the switching logic circuitry inside the L298N IC. If the 5V-EN
jumper is in place, this pin acts as an output and can be used to power up your Arduino. If
the 5V-EN jumper is removed, you need to connect it to the 5V pin on the Arduino.
The ENA pin is used to control the speed of Motor A. Pulling this pin HIGH (keeping the jumper
in place) will make Motor A spin, pulling it LOW will make the motor stop. Removing the
jumper and connecting this pin to a PWM input will let us control the speed of Motor A.
The IN1 & IN2 pins are used to control the spinning direction of Motor A. When one of them is
HIGH and the other is LOW, Motor A will spin. If both inputs are either HIGH or LOW,
Motor A will stop.
The IN3 & IN4 pins are used to control the spinning direction of Motor B. When one of them is
HIGH and the other is LOW, Motor B will spin. If both inputs are either HIGH or LOW,
Motor B will stop.
The OUT1 & OUT2 pins are connected to Motor A; OUT3 & OUT4 are connected to Motor B.
The L298N motor driver's output channels for motors A and B are broken out to the edge of
the module with two 3.5mm-pitch screw terminals.
You can connect two DC motors having voltages between 5 to 35V to these terminals.
Each channel on the module can deliver up to 2A to the DC motor. However, the amount of
current supplied to the motor depends on the system's power supply.
For each of the L298N's channels, there are two types of control pins, which allow us to control
the speed and spinning direction of the DC motors at the same time, viz. direction control pins and
speed control pins.
The module has two direction control pins for each channel. The IN1 and IN2 pins control the
spinning direction of the motor A while IN3 and IN4 control motor B.
Pulling these pins HIGH will make the motors spin; pulling them LOW will make them stop. But
with Pulse Width Modulation (PWM), we can actually control the speed of the motors.
The module usually comes with a jumper on these pins. When this jumper is in place, the motor is
enabled and spins at maximum speed. If you want to control the speed of motors programmatically,
you need to remove the jumpers and connect them to PWM-enabled pins on Arduino.
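The direction and enable logic just described can be summarized as a small truth table. A sketch of one channel's behavior (in Python as a stand-in for the Arduino pin writes; pin names follow the module's labels, and the 0-255 range is the Arduino PWM duty scale):

```python
def motor_state(in1, in2, ena_pwm):
    """Behavior of one L298N channel.
    in1, in2: booleans on the direction pins IN1/IN2.
    ena_pwm:  0-255 PWM duty applied to ENA.
    Returns (direction, effective speed)."""
    if in1 == in2:            # both HIGH or both LOW -> motor stops
        return ("stop", 0)
    direction = "forward" if in1 else "reverse"
    return (direction, ena_pwm)

print(motor_state(True, False, 200))   # spins forward at duty 200
print(motor_state(True, True, 200))    # stops regardless of ENA
```

The same table applies to Motor B with IN3/IN4 and ENB.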
I2C is a serial protocol for two-wire interface to connect low-speed devices like microcontrollers,
EEPROMs, A/D and D/A converters, I/O interfaces and other similar peripherals in embedded
systems. It was invented by Philips and now it is used by almost all major IC manufacturers.
Each I2C slave device needs an address – addresses must still be obtained from NXP (formerly
Philips Semiconductors).
I2C uses only two wires: SCL (serial clock) and SDA (serial data). Both need to be pulled up with
a resistor to +Vdd. There are also I2C level shifters, which can be used to connect two I2C buses
with different voltages.
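On the wire, every I2C transaction opens with a byte that combines the 7-bit slave address with a read/write flag. A sketch of that framing (the address value 0x3C is a hypothetical example):

```python
def i2c_address_byte(addr7, read):
    """Build the first byte of an I2C transfer:
    the 7-bit slave address occupies the upper bits,
    and the R/W flag (1 = read, 0 = write) is bit 0."""
    if not 0 <= addr7 <= 0x7F:
        raise ValueError("I2C addresses are 7 bits")
    return (addr7 << 1) | (1 if read else 0)

print(hex(i2c_address_byte(0x3C, read=False)))  # write to device 0x3C
print(hex(i2c_address_byte(0x3C, read=True)))   # read from device 0x3C
```

This is why the same device often appears under two consecutive "8-bit addresses" in low-level documentation: they are the write and read forms of one 7-bit address.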
3.12.2 APPLICATION:
• Describing connectable devices via small ROM configuration tables to enable "plug and play"
operation, such as Serial Presence Detect (SPD) EEPROMs on dual in-line memory modules
(DIMMs), and Extended Display Identification Data (EDID) for monitors via VGA, DVI and
HDMI connectors.
• Accessing real-time clocks and NVRAM chips that keep user settings.
• Accessing low-speed DACs and ADCs.
• Changing contrast, hue, and color balance settings in monitors (via Display Data Channel).
LCD stands for Liquid Crystal Display. LCDs are finding widespread use, replacing LEDs
(seven-segment LEDs or other multi-segment LEDs) because of the following reasons:
1. The declining prices of LCDs.
2. The ability to display numbers, characters and graphics. This is in contrast to LEDs,
which are limited to numbers and a few characters.
3. Ease of programming for characters and graphics.
These components are "specialized" for use with microcontrollers, which means that
they cannot be activated by standard IC circuits. They are used for writing different messages on
a miniature LCD.
The model described here is, owing to its low price and great possibilities, the one most frequently
used in practice. It is based on the HD44780 controller (Hitachi) and can display messages in two
lines with 16 characters each. It displays all the letters of the alphabet, Greek letters, punctuation
marks, mathematical symbols etc. In addition, it is possible to display symbols that the user makes
up on their own. Automatic message shifting on the display (shift left and right), appearance of the
cursor, backlight etc. are considered useful characteristics.
3.13.1 FEATURES:
• Interface with either a 4-bit or an 8-bit microprocessor.
• Display data RAM: 80 × 8 bits (80 characters).
• Character generator ROM: 5 × 7 dot-matrix character patterns.
• Character generator RAM: user-programmed 5 × 7 dot-matrix patterns.
• Display data RAM and character generator RAM may be accessed by the microprocessor.
• Numerous instructions: Clear Display, Cursor Home, Display ON/OFF, Cursor ON/OFF,
Blink Character, Cursor Shift, Display Shift.
Even limited to character-based modules, there is still a wide variety of shapes and sizes
available. Line lengths of 8, 16, 20, 24, 32 and 40 characters are all standard, in one, two and four
line versions. Several different LC technologies exist. "Supertwist" types, for example, offer
improved contrast and viewing angle over the older "twisted nematic" types. Some modules
are available with backlighting, so that they can be viewed in dimly-lit conditions. The back
lighting may be either "electro-luminescent", requiring a high-voltage inverter circuit, or simple
LED illumination.
The LCD screen consists of two lines with 16 characters each. Each character consists of a 5x7
dot matrix. Contrast on the display depends on the power supply voltage and whether messages
are displayed in one or two lines. For that reason, a variable voltage 0-Vdd is applied on the pin
marked Vee. A trimmer potentiometer is usually used for that purpose. Some versions of displays
have a built-in backlight (blue or green diodes). When used during operation, a resistor for current
limitation should be used (as with any LED).
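The current-limiting resistor mentioned for the backlight follows Ohm's law applied to the voltage left over after the diode's forward drop. A worked sketch (the supply, forward voltage and target current below are assumed example values, not figures from this module's datasheet):

```python
def led_series_resistor(v_supply, v_forward, i_led_amps):
    """Series resistance that limits the LED current:
    R = (Vsupply - Vforward) / Iled."""
    return (v_supply - v_forward) / i_led_amps

# Assumed example: 5 V supply, 2 V forward drop, 15 mA target -> 200 Ohm
print(led_series_resistor(5.0, 2.0, 0.015))
```

In practice the next higher standard resistor value is chosen so the current stays at or below the target.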
Depending on how many lines are used for connection to the microcontroller, there are 8-bit and
4-bit LCD modes. The appropriate mode is determined at the beginning of the process, in a phase
called "initialization". In the first case, the data are transferred through outputs D0-D7 as has
already been explained. In the case of 4-bit LCD mode, for the sake of saving valuable I/O pins of
the microcontroller, only the 4 higher bits (D4-D7) are used for communication, while the others
may be left unconnected. Consequently, each byte is sent to the LCD in two steps: the four higher
bits are sent first (those that normally would be sent through lines D4-D7), and the four lower bits
afterwards. With the help of initialization, the LCD will correctly connect and interpret each byte
received. Besides, given that data are rarely read from the LCD (data mainly are transferred from
microcontroller to LCD), one more I/O pin may be saved by simply connecting the R/W pin to
ground. Such saving has its price.
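The two-step 4-bit transfer described above amounts to splitting each byte into two nibbles and sending the high half first. A sketch:

```python
def to_4bit_transfers(byte):
    """Split a data byte into the two nibbles presented on D4-D7,
    high nibble first, as in the HD44780 4-bit mode."""
    return [(byte >> 4) & 0x0F, byte & 0x0F]

print([hex(n) for n in to_4bit_transfers(0xA7)])  # high nibble 0xA, then 0x7
```

The receiving controller reassembles the byte only because initialization has told it to expect nibble pairs; the same pin states in 8-bit mode would be interpreted as two separate (garbage) bytes.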
Once the power supply is turned on, the LCD is automatically cleared. This process lasts for
approximately 15 ms. After that, the display is ready to operate. The mode of operation is set by
default. This means that:
1. Display is cleared
2. Mode
DL = 1 Communication through 8-bit interface
N = 0 Messages are displayed in one line
F = 0 Character font 5 x 8 dots
3. Display/Cursor on/off
D = 0 Display off
C = 0 Cursor off
Electronic cooling fans move air to cool electronic devices such as computers and appliances.
They are also used in telecommunications, military, and general industrial applications as well as
in heating, ventilation, and air conditioning (HVAC) systems. They are used to cool the CPU
(central processing unit) heat sink. Effective cooling of a concentrated heat source such as a
large-scale integrated circuit requires a heat sink, which may in turn be cooled by a fan; a fan
alone will not prevent overheating of the small chip.
While in earlier personal computers it was possible to cool most components using natural
convection (passive cooling), many modern components require more effective active cooling. To
cool these components, fans are used to move heated air away from the components and draw
cooler air over them. Fans attached to components are usually used in combination with a heat sink
to increase the area of heated surface in contact with the air, thereby improving the efficiency of
cooling. Fan control is not always an automatic process. A computer's BIOS (basic input/output
system) can control the speed of the built-in fan system for the computer. A user can even
supplement this function with additional cooling components or connect a manual fan controller
with knobs that set fans to different speeds.
In the IBM PC compatible market, the computer's power supply unit (PSU) almost always uses an
exhaust fan to expel warm air from the PSU. Active cooling on CPUs started to appear on the Intel
80486, and by 1997 was standard on all desktop processors. Chassis or case fans, usually one
exhaust fan to expel heated air from the rear and optionally an intake fan to draw cooler air in
through the front, became common with the arrival of the Pentium 4 in late 2000.
Secure Digital, officially abbreviated as SD, is a proprietary non-volatile memory card format
developed by the SD Card Association (SDA) for use in portable devices.
The standard was introduced in August 1999 by joint efforts between SanDisk, Panasonic
(Matsushita Electric) and Toshiba as an improvement over MultiMediaCards (MMC), and has
become the industry standard. The three companies formed SD-3C, LLC, a company that licenses
and enforces intellectual property rights associated with SD memory cards and SD host and
ancillary products.
It was designed to compete with the Memory Stick, a DRM product that Sony had released the
year before. Developers predicted that DRM would induce wide use by music suppliers concerned
about piracy.
The trademarked "SD" logo was originally developed for the Super Density Disc, which was the
unsuccessful Toshiba entry in the DVD format war. For this reason the D within the logo resembles
an optical disc.
At the 2000 Consumer Electronics Show (CES) trade show, the three companies announced the
creation of the SD Association (SDA) to promote SD cards. The SD Association, headquartered in
San Ramon, California, United States, started with about 30 companies and today consists of about
1,000 product manufacturers that make interoperable memory cards and devices. Early samples of
the SD Card became available in the first quarter of 2000, with production quantities of 32 and 64
MB cards available three months later.
The miniSD form was introduced at the March 2003 CeBIT by SanDisk Corporation, which
announced and demonstrated it. The SDA adopted the miniSD card in 2003 as a small form factor
extension to the SD card standard. While the new cards were designed especially for mobile
phones, they are usually packaged with a miniSD adapter that provides compatibility with a
standard SD memory card slot.
The microSD removable miniaturized Secure Digital flash memory cards were originally named
T-Flash or TF, abbreviations of TransFlash. TransFlash and microSD cards are functionally
identical, allowing either to operate in devices made for the other. SanDisk had conceived microSD
when its chief technology officer and the chief technology officer of Motorola concluded that
current memory cards were too large for mobile phones. The card was originally called T-Flash,
but just before product launch, T-Mobile sent a cease-and-desist letter to SanDisk claiming that
T-Mobile owned the trademark on T-(anything), and the name was changed to TransFlash. At
CTIA Wireless 2005, the SDA announced the small microSD form factor along with SDHC secure
digital high capacity formatting in excess of 2 GB with a minimum sustained read and write speed
of 17.6 Mbit/s. SanDisk induced the SDA to administer the microSD standard. The SDA approved
the final microSD specification on July 13, 2005. Initially, microSD cards were available in
capacities of 32, 64, and 128 MB.
Figure: a microSDHC card holding 8 billion bytes, shown above a section of magnetic-core
memory (used until the 1970s) that holds eight bytes using 64 cores; the card covers approximately
20 bits (2 1/2 bytes) of the core memory.
The SDHC format, announced in January 2006, brought improvements such as 32 GB storage
capacity and mandatory support for FAT32 filesystems. In April, the SDA released a detailed
specification for the non-security related parts of the SD memory card standard and for the Secure
Digital Input Output (SDIO) cards and the standard SD host controller.
In September 2006, SanDisk announced the 4 GB miniSDHC. Like the SD and SDHC, the
miniSDHC card has the same form factor as the older miniSD card but the HC card requires HC
support built into the host device. Devices that support miniSDHC work with miniSD and
miniSDHC, but devices without specific support for miniSDHC work only with the older miniSD
card. Since 2008, miniSD cards are no longer produced.
In January 2009, the SDA announced the SDXC family, which supports cards up to 2 TB and
speeds up to 300 MB/s. It features mandatory support for the exFAT filesystem. SDXC was
announced at the Consumer Electronics Show (CES) 2009 (January 7–10). At the same show,
SanDisk and Sony also announced a comparable Memory Stick XC variant with the same 2 TB
maximum as SDXC, and Panasonic announced plans to produce 64 GB SDXC cards. On March 6,
Pretec introduced the first SDXC card, a 32 GB card with a read/write speed of 400 Mbit/s. But
only early in 2010 did compatible host devices come onto the market, including Sony's Handycam
HDR-CX55V camcorder, Canon's EOS 550D (also known as Rebel T2i) Digital SLR camera, a
USB card reader from Panasonic, and an integrated SDXC card reader from JMicron. The earliest
laptops to integrate SDXC card readers relied on a USB 2.0 bus, which does not have the
bandwidth to support SDXC at full speed.
In early 2010, commercial SDXC cards appeared from Toshiba (64 GB), Panasonic (64 GB and
48 GB), and SanDisk (64 GB). In early 2011, Centon Electronics, Inc. (64 GB and 128 GB) and
Lexar (128 GB) began shipping SDXC cards rated at Speed Class 10. Pretec offered cards from
8 GB to 128 GB rated at Speed Class 16. In September 2011, SanDisk released a 64 GB
microSDXC card. Kingmax released a comparable product in 2011.
3.15.5 Class:
The SD Association defines standard speed classes for SDHC/SDXC cards indicating minimum
performance (minimum serial data writing speed). Both read and write speeds must exceed the
specified value. The specification defines these classes in terms of performance curves that
translate into the following minimum read-write performance levels on an empty card and
suitability for different applications.
The SD Association defines three types of Speed Class ratings: the original Speed Class, UHS
Speed Class, and Video Speed Class.
Speed Class ratings 2, 4, and 6 assert that the card supports the respective number of megabytes
per second as a minimum sustained write speed for a card in a fragmented state. Class 10 asserts
that the card supports 10 MB/s as a minimum non-fragmented sequential write speed and uses a
High Speed bus mode. The host device can read a card's speed class and warn the user if the card
reports a speed class that falls below an application's minimum need. The graphical symbol for the
speed class has a number encircled with 'C' (C2, C4, C6, and C10).
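The host-side check described above (warn the user when the card's class falls below the application's need) reduces to comparing the class's guaranteed minimum write speed with the required bitrate. A sketch using the class-to-MB/s mapping given in the text:

```python
# Minimum sustained write speed guaranteed by each original Speed Class.
SPEED_CLASS_MBPS = {2: 2, 4: 4, 6: 6, 10: 10}

def card_fast_enough(speed_class, required_mb_per_s):
    """True if the card's Speed Class guarantees the required write speed;
    unknown classes are treated conservatively as 0 MB/s."""
    return SPEED_CLASS_MBPS.get(speed_class, 0) >= required_mb_per_s

print(card_fast_enough(10, 6))  # Class 10 card for a 6 MB/s video stream
print(card_fast_enough(4, 6))   # Class 4 card falls short -> warn the user
```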
UHS-I and UHS-II cards can use UHS Speed Class rating with two possible grades: class 1 for
minimum read/write performance of at least 10 MB/s ('U1' symbol featuring number 1 inside 'U')
and class 3 for minimum write performance of 30 MB/s ('U3' symbol featuring 3 inside 'U'),
targeted at recording 4K video. Before November 2013, the rating was branded UHS Speed Grade
and contained grades 0 (no symbol) and 1 ('U1' symbol). Manufacturers can also display standard
speed class symbols (C2, C4, C6, and C10) alongside, or in place of, the UHS speed class.
UHS memory cards work best with UHS host devices. The combination lets the user record HD
resolution videos with tapeless camcorders while performing other functions. It is also suitable for
real-time broadcasts and capturing large HD videos.
Video Speed Class defines a set of requirements for UHS cards to match the modern MLC NAND
flash memory and supports progressive 4K and 8K video with minimum sequential write speeds
of 6-90 MB/s. The graphical symbols use 'V' followed by a number designating write speed (V6,
V10, V30, V60, and V90).
3.15.9 ADVANTAGES:
Memory cards are reliable because they have no moving parts (unlike a hard drive), they can be
easily labelled on their covers for organization, and they are not affected by magnetic fields
(unlike tape).
Memory cards use nonvolatile memory that keeps the data on the card stable: the data are not
threatened by loss of the power source and do not need to be refreshed. Being solid-state devices,
memory cards are free of mechanical problems or damage; they are small, light and compact with
high storage capacity, and they require less energy.
Memory cards are very portable: they can be used easily in small, lightweight and low-power
devices, they do not produce noise in operation, and they allow more immediate access.
Memory cards come in all sorts of sizes; 128GB SD cards are increasingly common, they offer
relatively large storage space, they fit easily into the memory card slots of different devices, and
they are easily removable.
Memory cards are used in various devices such as cameras, computers or mobile phones, they
are easy to carry, and larger capacities can be more cost-efficient.
Data backup is an important task: if a company fails to save its information, it might lose work,
lose money and lose clients and customers as well, so it is very important to keep backups of data
to avoid losing something crucial.
Memory cards come in different sizes and formats, they store digital information on a device
with no moving parts, and they can store information from a computer or external device.
3.16.1 INTRODUCTION:
The GPS QUESTAR TTL is a compact all-in-one GPS module solution intended for a broad range
of Original Equipment Manufacturer (OEM) products, where fast and easy system integration and
minimal development risk is required. The receiver continuously tracks all satellites in view and
provides accurate satellite positioning data. The GPS QUESTAR TTL is optimized for applications
requiring good performance, low cost, and maximum flexibility; suitable for a wide range of OEM
configurations including handhelds, sensors, asset tracking, PDA-centric personal navigation
system, and vehicle navigation products. Its 56 parallel channels and 4100 search bins provide fast
satellite signal acquisition and short startup time. Acquisition sensitivity of –140dBm and tracking
sensitivity of –162dBm offers good navigation performance even in urban canyons having limited
sky view. Satellite-based augmentation systems, such as WAAS and EGNOS, are supported to
yield improved accuracy. A TTL-level serial interface is provided on the interface connector. A
supply voltage of 3.8V~5.0V is supported.
1. G: Power Ground
2. R: serial port input, Arduino or USB to serial port TXD
3. T: serial port output, Arduino or USB to serial port RXD
4. V: 3.3 to 5V power supply.
3.16.4 FEATURES:
Model : QUESTAR
Based on u-Blox chip : UBX-G6010-ST
C / A code 1.023MHz code stream
Receive bands : L1 [1575.42 MHz]
Tracking channels : 50
Support DGPS [WAAS, EGNOS and MSAS]
Positioning performance:
o 2D plane: 5m [average]
o 2D plane: 3.5m [average], with DGPS assistance
Drift : <0.02m / s
Timing accuracy : 1us
Reference coordinate system: WGS-84
Maximum Altitude :18,000 m
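Assuming the module reports position through standard NMEA 0183 sentences (typical of u-blox based receivers, though the sentence format is not stated above), each sentence ends with an XOR checksum of the characters between '$' and '*', which the host should verify before trusting a fix. A validation sketch:

```python
def nmea_checksum_ok(sentence):
    """Verify an NMEA 0183 sentence such as '$GPGGA,...*hh'.
    The checksum hh is the XOR of all characters between '$' and '*',
    written as two hexadecimal digits."""
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, claimed = sentence[1:].split("*", 1)
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return calc == int(claimed[:2], 16)

# Widely-quoted example GGA sentence (position fix data):
good = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(nmea_checksum_ok(good))
```

A sentence corrupted in transit (a single changed character) fails the check and should be discarded.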
3.16.5 ADVANTAGES:
3.16.6 APPLICATION:
3.17 LED :
A Light Emitting Diode (LED) is essentially a p-n junction diode. When carriers are injected
across a forward-biased junction, it emits incoherent light. Most commercial LEDs are realized
using a highly doped n region and a p junction. LEDs are usually built on an n-type substrate, with
an electrode attached to the p-type layer deposited on its surface. P-type substrates, while less
common, occur as well. Many commercial LEDs, especially GaN/InGaN, also use a sapphire
substrate.
3.17.1 ADVANTAGES:
• LEDs produce more light per watt than incandescent bulbs; this is useful in battery powered or
energy-saving devices.
• LEDs can emit light of an intended color without the use of color filters that traditional lighting
methods require. This is more efficient and can lower initial costs.
• The solid package of the LED can be designed to focus its light. Incandescent and fluorescent
sources often require an external reflector to collect light and direct it in a usable manner.
• When used in applications where dimming is required, LEDs do not change their color tint as
the current passing through them is lowered, unlike incandescent lamps, which turn yellow.
• LEDs are ideal for use in applications that are subject to frequent on-off cycling, unlike
fluorescent lamps that burn out more quickly when cycled frequently, or High Intensity Discharge
(HID) lamps that require a long time before restarting.
• LEDs, being solid state components, are difficult to damage with external shock. Fluorescent and
incandescent bulbs are easily broken if dropped on the ground.
• LEDs can have a relatively long useful life. A Philips LUXEON k2 LED has a life time of about
50,000 hours, whereas Fluorescent tubes typically are rated at about 30,000 hours, and
incandescent light bulbs at 1,000–2,000 hours.
3.17.2 APPLICATIONS:
BC547 is an NPN transistor, hence the collector and emitter will be left open (reverse biased)
when the base pin is held at ground, and will be closed (forward biased) when a signal is provided
to the base pin. BC547 has a gain value of 110 to 800; this value determines the amplification
capacity of the transistor. The maximum amount of current that can flow through the collector pin
is 100mA, hence we cannot connect loads that consume more than 100mA using this transistor.
To bias the transistor we have to supply current to the base pin; this current (IB) should be limited
to 5mA.
When a transistor is used as a switch it is operated in the Saturation and Cut-Off regions as
explained above. As discussed, a transistor will act as a closed switch during forward bias
(saturation) and as an open switch during reverse bias (cut-off); this biasing can be achieved by
supplying the required amount of current to the base pin. As mentioned, the biasing current should
be a maximum of 5mA. Anything more than 5mA will kill the transistor; hence a resistor is always
added in series with the base pin.
The value of this resistor (RB) can be calculated using the formula below.
RB = (VIN – VBE) / IB
Where VIN is the drive voltage (5V when driven from an Arduino pin), VBE is about 0.7V for the
BC547, and the base current (IB) depends on the collector current (IC). The value of IB should
not exceed 5mA.
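A worked example of the base-resistor calculation, taking the Arduino's 5 V output as the drive voltage, VBE ≈ 0.7 V for a silicon transistor, and a hypothetical 1 mA base current:

```python
def base_resistor(v_drive, v_be, i_b_amps):
    """Series base resistance: RB = (Vdrive - VBE) / IB."""
    return (v_drive - v_be) / i_b_amps

# 5 V drive, 0.7 V base-emitter drop, 1 mA base current -> 4300 Ohm
print(base_resistor(5.0, 0.7, 0.001))
```

In practice a nearby standard value (e.g. 4.7 kΩ) is used; with the transistor's minimum gain of 110, a 1 mA base current comfortably saturates the switch for loads well under the 100 mA collector limit.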
3.18.2 APPLICATIONS :
In a toggle switch you have a lever that you turn to one side or to the other to make the current
flow to one side or to the other, or to not flow at all. There are several types of toggle switches.
These are characterized by the pole and the throw. A pole represents a contact, and the throw
represents the number of connections that each pole can make.
A chassis is the load-bearing framework of an artificial object, which structurally supports the
object in its construction and function. An example of a chassis is a vehicle frame, the underpart
of a motor vehicle, on which the body is mounted; if the running gear such as wheels and
transmission, and sometimes even the driver's seat, are included, then the assembly is described as
a rolling chassis.
In an electronic device (such as a computer), the chassis consists of a frame or other internal
supporting structure on which the circuit boards and other electronics are mounted.
In some designs, such as older sets, the chassis is mounted inside a heavy, rigid cabinet, while in
other designs, such as modern computer cases, lightweight covers or panels are attached to the
chassis. The combination of chassis and outer covering is sometimes called an enclosure.
3.21 WHEELS :
God created legs for locomotion and man created wheels for the same purpose, which is one of
the greatest inventions of the human era. Wheels are your best bet for robots as they are easy to
design and implement, and practical for robots that require speed. They also do not suffer from
static or dynamic stability problems, as the center of gravity of the robot does not change whether
it is in motion or standing still, and they do not require complex models, designs and algorithms.
The disadvantage is that they are not stable on uneven or rough terrain, and on extremely smooth
surfaces they tend to slip and skid.
This wheel has two degrees of freedom and can traverse forward or in reverse. The center of the
wheel is fixed to the robot chassis. The angle between the robot chassis and wheel plane is
constant. Fixed wheels are commonly seen in most WMRs (wheeled mobile robots), where the
wheels are attached to motors and are used to drive and steer the robot.
These wheels are mounted to a fork which holds the wheel in place. Orientable wheels are normally
used to balance a robot and very unlikely to be used to drive a robot. There are two kinds of
Orientable wheels: Centered and Off-centered Orientable wheels.
The best choice for a robot that requires multi-directional movement. These wheels are normal
wheels with passive wheels (rollers) attached around the circumference of the center wheel. Omni
wheels can move in any direction and exhibits low resistance when they move in any direction.
The small wheels are attached in such a way that the axis of the small wheels are perpendicular to
the axis of the bigger center wheel which makes the wheel to rotate even parallel to its own
axis.Omni wheels are sometimes known as Swedish wheels and can be used to both drive and steer
a robot. Mecanum Wheel is also a type of Omni wheel with the exception that rollers are attached
at 45° angle around the circumference of another bigger wheel.
3.21.4 CONCLUSION:
The best wheel for your robot depends on the design and requirements. Fixed wheels are good
for simply connecting wheels to a motor and driving or steering. Orientable and spherical wheels
are good for balancing a robot (especially when two wheels drive and you require a third
balancing wheel; also known as auxiliary wheel). Swedish wheels are good for both driving and
steering, but come with their disadvantages.
CHAPTER-4
SOFTWARE USED
Arduino can sense the environment by receiving input from a variety of sensors and can
affect its surroundings by controlling lights, motors, and other actuators. The microcontroller on
the board is programmed using the Arduino programming language (based on Wiring) and the
Arduino development environment (based on Processing). Arduino projects can be stand-alone,
or they can communicate with software running on a computer (e.g. Flash, Processing, and
Max/MSP). Arduino is a cross-platform program. You'll have to follow different instructions for
your particular OS.
As you can see, downloads are available for Windows, Mac OS X and both 32- and 64-bit
Linux. This example will download the software on a Windows system with admin rights. If you
are installing the software and do not have administrator rights on your system, you will want to
download the ZIP file instead of the installer. Download the file (you may be asked for a donation,
but that isn't necessary) and save it to your computer. Depending on your internet connection
speed, it may take a while. Once it downloads, run it.
If asked whether you want to allow it to make changes to your computer, say yes (that's the
only way you can get it installed). Next, you should see the licensing agreement.
The Sketch is divided into two parts: setup and loop. Consider these the first guidelines on
how to develop a working sketch.
The setup portion is where you put code that needs to run only once. This includes things
like setting certain pins to HIGH, specifying whether a pin should be used as input or output,
assigning certain values to variables, etc. This code will run once each time the Arduino board is
powered up. Decide what commands need to run once, and plan to place them here.
The loop section is the main portion of the code that will keep running until you power off
the Arduino. This is the more challenging part of developing the algorithm.
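The setup-once, loop-forever structure described above can be mimicked outside the board (sketched here in Python for illustration; on the Arduino itself this is the C++ setup()/loop() pair, and the runtime, not the sketch, calls them):

```python
# Emulation of the Arduino execution model: setup() runs once,
# then loop() runs repeatedly until power-off.
state = {"initialized": False, "iterations": 0}

def setup():
    # One-time work: pin modes, initial pin levels, variable initialization.
    state["initialized"] = True

def loop():
    # The main algorithm, repeated forever on real hardware.
    state["iterations"] += 1

def run_sketch(cycles):
    """Stand-in for the Arduino runtime, bounded to a cycle count."""
    setup()
    for _ in range(cycles):
        loop()

run_sketch(3)
print(state)  # setup ran once; loop ran three times
```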
Sketch
Here is where you begin to type in the actual commands, being careful about spelling and
syntax.
Analog I/O:
As discussed earlier, the Arduino boards include pins for performing Analog input and
output. One command is used to set a reference voltage (the value used as the maximum range of
the input voltage), another is used to read the Analog voltage, and the last is used to write the
Analog voltage.
Here are the commands:
Analog Reference (type):
You can choose from 5 options:
DEFAULT is 5 volts (on 5V Arduino boards) or 3.3 volts (on 3.3V Arduino boards)
INTERNAL is a built-in reference that varies with the type of processor
INTERNAL1V1 is a built-in 1.1V reference, only available on the Mega
INTERNAL2V56 is a built-in 2.56V reference that is also available only on the Mega
EXTERNAL means that you will use whatever voltage is applied to the
AREF pin for the reference voltage
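With the reference voltage chosen as above, a raw analogRead() count converts to volts by scaling over the converter's full range (a sketch; the 0-1023 range matches the 10-bit ADC on boards like the Uno and Mega):

```python
def counts_to_volts(raw, v_ref=5.0):
    """Convert a 10-bit analogRead() result (0-1023) to volts,
    given the selected reference voltage."""
    if not 0 <= raw <= 1023:
        raise ValueError("10-bit ADC results range from 0 to 1023")
    return raw * v_ref / 1023.0

print(counts_to_volts(1023))       # full scale -> the reference voltage
print(counts_to_volts(512, 3.3))   # roughly half scale on a 3.3 V board
```

Choosing a smaller reference (e.g. INTERNAL1V1) trades measurement range for finer resolution over small signals.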
The source code for the IDE is released under the GNU General Public License. The
Arduino IDE supplies a software library from the Wiring project, which provides many common
input and output procedures. User-written code only requires two basic functions, for starting the
sketch and the main program loop, that are compiled and linked with a program stub main() into
an executable cyclic executive program with the GNU toolchain, also included with the IDE
distribution. The Arduino IDE employs the program avrdude to convert the executable code into a
text file in hexadecimal encoding that is loaded into the Arduino board by a loader program in the
board's firmware. By default, avrdude is used as the uploading tool to flash the user code onto
official Arduino boards.
Raspberry Pi OS is the recommended operating system for normal use on a Raspberry Pi.
Raspberry Pi OS is a free operating system based on Debian, optimized for the Raspberry Pi
hardware. Raspberry Pi OS comes with over 35,000 packages: precompiled software bundled in a
nice format for easy installation on your Raspberry Pi.
The Raspberry Pi should work with any compatible SD card, although there are some guidelines
that should be followed:
For installation of Raspberry Pi OS with desktop and recommended software (Full) via NOOBS
the minimum card size is 16GB. For the image installation of Raspberry Pi OS with desktop and
recommended software, the minimum card size is 8GB. For Raspberry Pi OS Lite image
installations we recommend a minimum of 4GB. Some distributions, for example LibreELEC and
Arch, can run on much smaller cards.
Note: Only the Raspberry Pi 3A+, 3B+ and Compute Module 3+ can boot from an SD card larger
than 256 GB. This is because there was a bug in the SoC used on previous models of Pi.
SD card class:
The card class determines the sustained write speed for the card; a class 4 card will be able to write
at 4MB/s, whereas a class 10 should be able to attain 10 MB/s. However, it should be noted that
this does not mean a class 10 card will outperform a class 4 card for general usage, because often
this write speed is achieved at the cost of read speed and increased seek times.
The original Raspberry Pi Model A and Raspberry Pi Model B require full-size SD cards. From
the Model B+ (2014) onwards, a micro SD card is required.
Troubleshooting:
We recommend buying the official Raspberry Pi SD card, which is available from Raspberry Pi as
well as from other retailers; this is an 8GB class 6 micro SD card (with a full-size SD adapter) that
outperforms almost all other SD cards on the market and is a good value solution.
If you are having trouble with corruption of your SD cards, make sure you follow these steps:
1. Make sure you are using a genuine SD card. There are many cheap SD cards available which
are actually smaller than advertised or which will not last very long.
2. Make sure you are using a good quality power supply. You can check your power supply by
measuring the voltage between TP1 and TP2 on the Raspberry Pi; if this drops below 4.75V
when doing complex tasks then it is most likely unsuitable.
3. Make sure you are using a good quality USB cable for the power supply. When using a lower
quality power supply, the TP1->TP2 voltage can drop below 4.75V. This is generally due to
the resistance of the wires in the USB power cable; to save money, USB cables have as little
copper in them as possible, and as much as 1V can be lost over the length of the cable.
4. Make sure you are shutting your Raspberry Pi down properly before powering it off.
Type sudo halt and wait for the Pi to signal it is ready to be powered off by flashing the
activity LED.
5. Finally, corruption has been observed if you are overclocking the Pi. This problem has been
fixed previously, although the workaround used may mean that it can still happen. If after
checking the steps above you are still having problems with corruption, please let us know.
4.2.1 INSTALLATION:
Raspberry Pi has developed a graphical SD card writing tool that works on Mac OS, Ubuntu
18.04 and Windows; it is the easiest option for most users, as it will download the image and
install it automatically to the SD card.
Most other tools require you to download the image first, then use the tool to write it to your SD
card.
Official images for recommended operating systems are available to download from the Raspberry
Pi website downloads page.
You may need to unzip .zip downloads to get the image file (.img) to write to your SD card.
Note: the Raspberry Pi OS with Raspberry Pi Desktop image contained in the ZIP archive is over
4GB in size and uses the ZIP64 format. To uncompress the archive, an unzip tool that supports
ZIP64 is required. The following zip tools support ZIP64:
7-Zip (Windows)
The Unarchiver (Mac)
Unzip (Linux)
Writing the image
How you write the image to the SD card will depend on the operating system you are using.
Linux
Mac OS
Windows
Chrome OS
Boot your new OS
You can now insert the SD card into the Raspberry Pi and power it up.
For the official Raspberry Pi OS, if you need to manually log in, the default user name is pi, with
password raspberry. Remember the default keyboard layout is set to UK.
You should change the default password straight away to ensure your Raspberry Pi is secure.
You will be shown raspi-config on first booting into Raspberry Pi OS. To open the configuration
tool after this, simply run the following from the command line:
sudo raspi-config
The sudo is required because you will be changing files that you do not own as the pi user.
You should see a blue screen with options in a grey box in the center.
Note that in long lists of option values (like the list of timezone cities), you can also type a letter
to skip to that section of the list. For example, entering L will skip you to Lisbon, just two options
away from London, to save you scrolling all the way through the alphabet.
Generally speaking, raspi-config aims to provide the functionality to make the most common
configuration changes. This may result in automated edits to /boot/config.txt and various standard
Linux configuration files. Some options require a reboot to take effect. If you changed any of those,
raspi-config will ask if you wish to reboot now when you select the <Finish> button.
Menu options
The default user on Raspberry Pi OS is pi with the password raspberry. You can change that here.
Read about other users.
Network Options:
From this submenu you can set the host name, your wireless LAN SSID, and pre-shared key, or
enable/disable predictable network interface names.
Hostname:
Set the visible name for this Pi on a network.
Boot Options:
From here you can change what happens when your Pi boots. Use this option to change your boot
preference to command line or desktop. You can choose whether boot-up waits for the network to
be available, and whether the Plymouth splash screen is displayed at boot-up.
Localization Options:
The localization submenu gives you these options to choose from: keyboard layout, time zone,
locale, and wireless LAN country code.
Change locale:
Select a locale, for example en_GB.UTF-8 UTF-8.
Interfacing Options:
In this submenu there are the following options to enable/disable: Camera, SSH, VNC, SPI, I2C,
Serial, 1-wire, and Remote GPIO.
SSH allows you to remotely access the command line of the Raspberry Pi from another computer.
SSH is disabled by default. Read more about using SSH on the SSH documentation
page. If connecting your Pi directly to a public network, you should not enable SSH unless you
have set up secure passwords for all users.
SPI: Enable/disable SPI interfaces and automatic loading of the SPI kernel module, needed for
products such as PiFace.
I2C: Enable/disable I2C interfaces and automatic loading of the I2C kernel module.
1-wire: Enable/disable the Dallas 1-wire interface. This is usually used for DS18B20 temperature
sensors.
Overclock:
It is possible to overclock your Raspberry Pi's CPU. The default is 700MHz but it can be set up to
1000MHz. The overclocking you can achieve will vary; overclocking too high may result in
instability. Selecting this option shows the following warning:
Be aware that overclocking may reduce the lifetime of your Raspberry Pi. If overclocking at a
certain level causes system instability, try a more modest overclock. Hold down the Shift key
during boot to temporarily disable overclocking.
Developer support:
Raspbian receives more attention from the Raspberry Pi Foundation, given that it is the official
Raspberry Pi OS. This results in the development of more features and utility software. With
support from the Raspberry Pi community, it is therefore easy to set up this distribution and get
going. For this reason, Raspbian also comes pre-installed with office programs, a web browser,
Minecraft and some programming languages (Scratch, Python, C/C++).
Lightweight Nature:
Raspbian is fast and light. Processes share the same resources during execution without the need
for creating process-specific resources, unlike in heavyweight systems. This increases the
efficiency and speed of the operating system. Since the adoption of Epiphany-based software,
the OS has gained noticeable speed, unlike previous versions which ran on Midori. All Raspbian
programs are also created in a way that increases performance efficiency; a good example is the
OMX command-line media player.
Learner-oriented:
What really separates great software from good software is the ability to be mindful of the end
user. The Pi was created especially for educational purposes, and that remains true. What makes
Raspbian most advantageous to Pi users is its choice of the best learning tools and programs,
from simple beginner-friendly programming languages like Python and Ruby to simple teaching
software like Scratch; Raspbian stands out from the rest.
Raspbian is as easy to maintain as it is to use. The commands are quite easy, and whenever you
need to install software, the repository will always provide an updated version of it. The
repository also boasts plenty of software, covering most of what you will need.
Computer vision is one of the hottest fields in the industry right now. You can expect plenty of
job openings to come up in the next 2-4 years. The question then is – are you ready to take
advantage of these opportunities? Take a moment to ponder this – which applications or products
come to your mind when you think of computer vision? The list is HUGE; we use some of them
every day! Features like unlocking our phones using face recognition, our smartphone cameras,
self-driving cars – computer vision is everywhere.
OpenCV, or Open Source Computer Vision library, started out as a research project at Intel. It's
currently the largest computer vision library in terms of the sheer number of functions it holds.
OpenCV contains implementations of more than 2500 algorithms! It is freely available for
commercial as well as academic purposes. And the joy doesn't end there! The library has interfaces
for multiple languages, including Python, Java, and C++.
The first OpenCV version, 1.0, was released in 2006 and the OpenCV community has grown leaps
and bounds since then.
Now, let's turn our attention to the plethora of functions OpenCV offers! We will be looking at
OpenCV from the perspective of a data scientist and learning about some functions that make the
task of developing and understanding computer vision models easier.
Machines see and process everything using numbers, including images and text. How do you
convert images to numbers – I can hear you wondering. Two words – pixel values:
Every number represents the pixel intensity at that particular location. In a grayscale image,
every pixel contains only one value, i.e. the intensity of the black colour at that location.
Note that colour images will have multiple values for a single pixel. These values represent the
intensity of the respective channels – Red, Green and Blue channels for RGB images, for instance.
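The pixel-value idea needs no library at all; here a grayscale image is just a 2-D list of intensities in 0-255, and a colour pixel is a triple of channel intensities (a minimal sketch, not OpenCV code):

```python
# A 3x3 grayscale image: one intensity per pixel (0 = black, 255 = white)
gray = [
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 128],
]

# A colour pixel carries one intensity per channel (R, G, B for an RGB image)
rgb_pixel = (255, 64, 0)

def intensity(img, row, col):
    """Return the intensity stored at (row, col)."""
    return img[row][col]

print(intensity(gray, 0, 2))   # top-right pixel: 255 (white)
print(rgb_pixel[0])            # red channel: 255
```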
Reading and writing images is essential to any computer vision project. And the OpenCV library
makes this function a whole lot easier.
By default, the imread function reads images in the BGR (Blue-Green-Red) format. We can read
images in different formats using extra flags in the imread function:
A color space is a protocol for representing colors in a way that makes them easily reproducible.
We know that grayscale images have single pixel values and color images contain 3 values for
each pixel – the intensities of the Red, Green and Blue channels.
Most computer vision use cases process images in RGB format. However, applications like video
compression and device independent storage – these are heavily dependent on other color spaces,
like the Hue-Saturation-Value or HSV color space.
As you can see, an RGB image consists of the colour intensities of the different colour channels,
i.e. the intensity and colour information are mixed in the RGB colour space, whereas in the HSV
colour space the colour and intensity information are separated from each other. This makes the
HSV colour space more robust to lighting changes.
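The separation of colour and intensity can be checked with Python's standard colorsys module (used here as a stand-in for OpenCV's cvtColor): halving the brightness of a colour changes only V, leaving H and S untouched.

```python
import colorsys

bright = (0.8, 0.2, 0.2)   # a red; channel values in [0, 1]
dark   = (0.4, 0.1, 0.1)   # the same red under half the light

h1, s1, v1 = colorsys.rgb_to_hsv(*bright)
h2, s2, v2 = colorsys.rgb_to_hsv(*dark)

print(h1 == h2, s1 == s2)   # hue and saturation are unchanged: True True
print(v1, v2)               # only the value (brightness) differs: 0.8 vs 0.4
```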
OpenCV reads a given image in the BGR format by default. So, you'll need to change the color
space of your image from BGR to RGB when reading images using OpenCV. Let's see how to do
that:
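Since BGR and RGB differ only in channel order, the conversion is just a per-pixel reversal; a pure-Python sketch of what cv2.cvtColor(img, cv2.COLOR_BGR2RGB) amounts to:

```python
def bgr_to_rgb(img):
    """Reverse the channel order of every pixel: (B, G, R) -> (R, G, B)."""
    return [[tuple(reversed(px)) for px in row] for row in img]

bgr = [[(255, 0, 0), (0, 0, 255)]]   # pure blue, pure red in BGR order
print(bgr_to_rgb(bgr))               # [[(0, 0, 255), (255, 0, 0)]]
```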
Machine learning models work with a fixed sized input. The same idea applies to computer vision
models as well. The images we use for training our model must be of the same size.
Now this might become problematic if we are creating our own dataset by scraping images from
various sources. That's where the function of resizing images comes to the fore.
Images can be easily scaled up and down using OpenCV. This operation is useful for training deep
learning models when we need to convert images to the model's input shape. Different
interpolation and downsampling methods are supported by OpenCV, which can be used by the
following parameters:
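Resizing boils down to sampling the source grid at scaled coordinates. The simplest interpolation, nearest neighbour, can be sketched in a few lines (OpenCV's cv2.resize implements this and smoother methods such as bilinear and bicubic):

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize: each output pixel copies the closest source pixel."""
    h, w = len(img), len(img[0])
    return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

small = [[1, 2],
         [3, 4]]
print(resize_nearest(small, 4, 4))
# each source pixel is repeated into a 2x2 block when upscaling by 2
```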
"You need a large amount of data to train a deep learning model." I'm sure you must have come
across this line of thought in one form or another. It's partially true – most deep learning algorithms
are heavily dependent on the quality and quantity of the data.
But what if you do not have a large enough dataset? Not all of us can afford to manually collect
and label images.
Suppose we are building an image classification model for identifying the animal present in an
image. So, both the images shown below should be classified as 'dog':
But the model might find it difficult to classify the second image as a Dog if it was not trained on
such images. So what should we do?
Let me introduce you to the technique of data augmentation. This method allows us to generate
more samples for training our deep learning model. Data augmentation uses the available data
samples to produce the new ones, by applying image operations like rotation, scaling, translation,
etc. This makes our model robust to changes in input and leads to better generalization.
Rotation is one of the most used and easy to implement data augmentation techniques. As the name
suggests, it involves rotating the image at an arbitrary angle and providing it the same label as the
original image. Think of the times you have rotated images in your phone to achieve certain angles
– that's basically what this function does.
Image translation is a geometric transformation that maps the position of every object in the image
to a new location in the final output image. After the translation operation, an object present at
location (x,y) in the input image is shifted to a new position (X,Y):
X = x + dx
Y = y + dy
Image translation can be used to add shift invariance to the model: by translation we can change
the position of the object in the image, giving more variety to the model, which leads to better
generalization even in difficult conditions, i.e. when the object is not perfectly aligned to the
centre of the image.
This augmentation technique can also help the model correctly classify images with partially
visible objects. Take the below image for example. Even when the complete shoe is not present in
the image, the model should be able to classify it as a Shoe.
This translation function is typically used in the image pre-processing stage. Check out the below
code to see how it works in a practical scenario:
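A pure-Python sketch of the mapping X = x + dx, Y = y + dy (not the original listing; in OpenCV the same shift is done by warpAffine with the matrix [[1, 0, dx], [0, 1, dy]]). Pixels shifted outside the frame are dropped and vacated positions take a fill value:

```python
def translate(img, dx, dy, fill=0):
    """Shift every pixel from (x, y) to (x + dx, y + dy), filling uncovered pixels."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            X, Y = x + dx, y + dy
            if 0 <= X < w and 0 <= Y < h:
                out[Y][X] = img[y][x]
    return out

img = [[1, 2],
       [3, 4]]
print(translate(img, 1, 0))   # shift one pixel right: [[0, 1], [0, 3]]
```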
In the case of adaptive thresholding, different threshold values are used for different parts of the
image. This function gives better results for images with varying lighting conditions – hence the
term "adaptive".
Otsu's binarization method finds an optimal threshold value for the whole image. It works well for
bimodal images (images with 2 peaks in their histogram).
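Otsu's method can be written out in plain Python: build the histogram, then sweep every candidate threshold and keep the one that maximises the between-class variance (a sketch of the idea behind cv2.threshold with the THRESH_OTSU flag):

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold maximising between-class variance of the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                  # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg              # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal image: dark background around 10-12, bright object around 200-205
pixels = [10] * 50 + [12] * 50 + [200] * 50 + [205] * 50
t = otsu_threshold(pixels)
print(t)   # a value between the two modes
```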
Image segmentation is the task of classifying every pixel in the image to some class. For example,
classifying every pixel as foreground or background. Image segmentation is important for
extracting the relevant parts from an image.
The watershed algorithm is a classic image segmentation algorithm. It considers the pixel values
in an image as topography. For finding the object boundaries, it takes initial markers as input. The
algorithm then starts flooding the basin from the markers till the markers meet at the object
boundaries.
Let's say we have topography with multiple basins. Now, if we fill different basins with water of
different color, then the intersection of different colors will give us the object boundaries. This is
the intuition behind the watershed algorithm.
Bitwise operations include AND, OR, NOT and XOR. You might remember them from your
programming class! In computer vision, these operations are very useful when we have a mask
image and want to apply that mask over another image to extract the region of interest.
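Masking with a bitwise AND is easy to see at the pixel level: where the mask is 255 (all bits set) the image value survives, where it is 0 the result is 0. This is a pure-Python sketch of what cv2.bitwise_and does with a mask image:

```python
def bitwise_and(img, mask):
    """Per-pixel AND; mask pixels of 0 or 255 clear or keep the image values."""
    return [[p & m for p, m in zip(prow, mrow)] for prow, mrow in zip(img, mask)]

img  = [[ 10,  20],
        [ 30,  40]]
mask = [[255,   0],
        [  0, 255]]
print(bitwise_and(img, mask))   # [[10, 0], [0, 40]]
```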
Edges are the points in an image where the image brightness changes sharply or has discontinuities.
Such discontinuities generally correspond to:
Discontinuities in depth
Discontinuities in surface orientation
Changes in material properties
Variations in scene illumination
Edges are very useful features of an image that can be used for different applications like
classification of objects in the image and localization. Even deep learning models calculate edge
features to extract information about the objects present in the image.
Edges are different from contours as they are not related to objects rather they signify the changes
in pixel values of an image. Edge detection can be used for image segmentation and even for image
sharpening.
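The "sharp change in brightness" definition above can be tested directly: a large difference between neighbouring pixels marks an edge. This is the crudest possible edge detector; Sobel and Canny in OpenCV refine the same idea with smoothing and hysteresis:

```python
def horizontal_edges(row, thresh):
    """Mark positions where the intensity jump to the next pixel exceeds thresh."""
    return [1 if abs(row[x + 1] - row[x]) > thresh else 0
            for x in range(len(row) - 1)]

# a dark region meeting a bright region: the edge sits at the jump
scanline = [10, 12, 11, 200, 202, 201]
print(horizontal_edges(scanline, thresh=50))   # [0, 0, 1, 0, 0]
```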
In image filtering, a pixel value is updated using its neighbouring values. But how are these values
updated in the first place? Well, there are multiple ways of updating pixel values, such as selecting
the maximum value from the neighbours, using the average of the neighbours, etc. Each method
has its own uses.
Gaussian filtering is also used for image blurring; it gives different weights to the neighbouring
pixels based on their distance from the pixel under consideration.
For image filtering, we use kernels. Kernels are matrices of numbers of different shapes like 3 x 3,
5 x 5, etc. A kernel is used to calculate the dot product with a part of the image. When calculating
the new value of a pixel, the kernel center is overlapped with the pixel. The neighbouring pixel
values are multiplied with the corresponding values in the kernel. The calculated value is assigned
to the pixel coinciding with the center of the kernel.
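The kernel mechanics described above amount to a sliding dot product. A minimal "valid" convolution over a 2-D list (cv2.filter2D is the library version, with border handling added):

```python
def convolve(img, kernel):
    """Slide the kernel over the image; each output pixel is the dot product
    of the kernel with the image patch it currently covers."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(w - kw + 1)]
            for y in range(h - kh + 1)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
box = [[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]]          # an (unnormalised) box-blur kernel
print(convolve(img, box))  # [[45]]: the sum of all nine pixels
```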
In the above output, the image on the right shows the result of applying Gaussian kernels on an
input image. We can see that the edges of the original image are suppressed. The Gaussian kernel
with different values of sigma is used extensively to calculate the Difference of Gaussian for our
image. This is an important step in the feature extraction process because it reduces the noise
present in the image.
A contour is a closed curve of points or line segments that represents the boundaries of an object
in the image. Contours are essentially the shapes of objects in an image.
Unlike edges, contours are not part of an image. Instead, they are an abstract collection of points
and line segments corresponding to the shapes of the object(s) in the image.
We can use contours to count the number of objects in an image, categorize objects on the basis of
their shapes, or select objects of particular shapes from the image.
Keypoints are a concept you should be aware of when working with images. They are basically
the points of interest in an image, analogous to the features of a given image.
They are locations that define what is interesting in the image. Keypoints are important, because
no matter how the image is modified (rotation, shrinking, expanding, distortion), we will always
find the same keypoints for the image.
Scale Invariant Feature Transform (SIFT) is a very popular keypoint detection algorithm. It
consists of the following steps: scale-space extrema detection, keypoint localization, orientation
assignment, and keypoint descriptor generation.
Features extracted from SIFT can be used for applications like image stitching, object detection,
etc. The below code and output show the keypoints and their orientation calculated using SIFT.
Speeded-Up Robust Features (SURF) is an enhanced version of SIFT. It works much faster and is
more robust to image transformations. In SIFT, the scale space is approximated using the Laplacian
of Gaussian. Wait – that sounds too complex. What is the Laplacian of Gaussian?
The Laplacian is a kernel used for calculating the edges in an image. The Laplacian kernel works
by approximating a second derivative of the image. Hence, it is very sensitive to noise. We
generally apply the Gaussian kernel to the image before the Laplacian kernel, thus giving it the
name Laplacian of Gaussian.
In SURF, the Laplacian of Gaussian is calculated using a box filter (kernel). The convolution with
box filter can be done in parallel for different scales which is the underlying reason for the
enhanced speed of SURF (compared to SIFT). There are other neat improvements like this in
SURF – I suggest going through the research paper to understand this in-depth.
The features extracted from different images using SIFT or SURF can be matched to find similar
objects/patterns present in different images. The OpenCV library supports multiple
feature-matching algorithms, like brute force matching and KNN feature matching, among others.
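Brute-force matching is exactly what it sounds like: for every descriptor in one image, scan all descriptors in the other and keep the closest (cv2.BFMatcher does this, usually with a ratio test on top). A toy version with 2-D descriptors:

```python
def match_bruteforce(desc_a, desc_b):
    """For each descriptor in desc_a, return the index of its nearest
    neighbour in desc_b under squared Euclidean distance."""
    def sqdist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return [min(range(len(desc_b)), key=lambda j: sqdist(a, desc_b[j]))
            for a in desc_a]

desc_a = [(0.0, 1.0), (5.0, 5.0)]
desc_b = [(5.1, 4.9), (0.2, 0.9)]
print(match_bruteforce(desc_a, desc_b))   # [1, 0]
```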
In the above image, we can see that the keypoints extracted from the original image (on the left)
are matched to keypoints of its rotated version. This is because the features were extracted using
SIFT, which is invariant to such transformations.
OpenCV supports Haar cascade based object detection. Haar cascades are machine learning based
classifiers that calculate different features like edges, lines, etc. in the image. These classifiers are
trained using multiple positive and negative samples.
Trained classifiers for different objects like faces, eyes, etc. are available in the OpenCV GitHub
repo, and you can also train your own Haar cascade for any object.
OpenCV is truly an all-encompassing library for computer vision tasks. The best way to learn
computer vision is by applying it on your own, so build your own applications and experiment
with OpenCV as much as you can.
OpenCV is continually adding new modules for the latest machine learning algorithms, so do
check out its GitHub repository and get familiar with the implementations. You can even contribute
to the library, which is a great way to learn and interact with the community.
CHAPTER 5
RESULT
The main objective of the project is a vehicle that drives itself using image processing techniques.
Using the Raspberry Pi camera, the vehicle detects the path to move forward; obstacle avoidance
is also done with the camera, as it detects the obstacles placed in front of the vehicle. Through this
image processing technique the vehicle scans the desired path, filters the images of the path, and
drives itself; such autonomous vehicles reduce the effort of human driving. In the first stage the
path of the vehicle is captured by the Raspberry Pi camera; in the second stage grayscaling of the
path image is done; Canny edge detection is then applied, and thus the path of the vehicle is
obtained.
CHAPTER 6
CONCLUSION:
Many practical applications are found using these image processing algorithms, and further
studies are being carried out. Various image processing techniques have been presented through
these papers, which have high-end real-time applications in day-to-day life. The important
take-away terms are computer vision techniques, obstacle avoidance, and traffic sign detection.
Various algorithms and filters are used to achieve highly efficient data extraction from images.
After evaluating the end results of the papers analyzed, it can be concluded that 75% were found
successful in real time in embedded systems. In a real-time environment these autonomous
vehicles help in industrial zones and in daily life. The software development environment OpenCV
has also been discussed in these papers. The histogram technique used here has different uses in
other fields too.
FUTURE SCOPE:
In the future, the features of the vehicle can be improved: a high-quality, durable megapixel
camera can be used for path detection so that image processing performs at its best. By using an
LDR sensor we can light the body of the vehicle so that it can also travel in low-light conditions.
By developing the image processing technique further, the vehicle can more easily detect the path
to travel and reduce travelling time.
Program:
#include <opencv2/opencv.hpp>
#include <raspicam_cv.h>
#include <iostream>
#include <chrono>
#include <ctime>
#include <wiringPi.h>
using namespace std;
using namespace cv;
using namespace raspicam;

RaspiCam_Cv Camera;
stringstream ss;

vector<int> histrogramLane;
vector<int> histrogramLaneEnd;

// Declarations inferred from their usage later in the listing
Mat frame, frame_Stop, frame_Object, framePers, frameGray, frameThresh, frameEdge;
Mat frameFinal, frameFinalDuplicate, frameFinalDuplicate1;
Mat ROILaneEnd, RoI_Stop, gray_Stop, RoI_Object, gray_Object;
CascadeClassifier Stop_Cascade, Object_Cascade;
vector<Rect> Stop, Object;
Point2f Source[4];
int LeftLanePos, RightLanePos, laneCenter, frameCenter, Result, laneEnd;
int dist_Stop, dist_Object;

void Setup(int argc, char **argv)   // function header restored; camera set-up runs once
{
    Camera.set ( CAP_PROP_FRAME_WIDTH, ( "-w",argc,argv,400 ) );
    Camera.set ( CAP_PROP_FRAME_HEIGHT, ( "-h",argc,argv,240 ) );
    Camera.set ( CAP_PROP_BRIGHTNESS, ( "-br",argc,argv,50 ) );
    Camera.set ( CAP_PROP_CONTRAST ,( "-co",argc,argv,50 ) );
    Camera.set ( CAP_PROP_SATURATION, ( "-sa",argc,argv,50 ) );
    Camera.set ( CAP_PROP_GAIN, ( "-g",argc,argv ,50 ) );
    Camera.set ( CAP_PROP_FPS, ( "-fps",argc,argv,0));
}
void Capture()
{
    Camera.grab();
    Camera.retrieve(frame);
    cvtColor(frame, frame_Stop, COLOR_BGR2RGB);
    cvtColor(frame, frame_Object, COLOR_BGR2RGB);
    cvtColor(frame, frame, COLOR_BGR2RGB);
}
void Perspective()
{
    line(frame, Source[0], Source[1], Scalar(0,0,255), 2);
    line(frame, Source[1], Source[3], Scalar(0,0,255), 2);
    line(frame, Source[3], Source[2], Scalar(0,0,255), 2);
    line(frame, Source[2], Source[0], Scalar(0,0,255), 2);
    // the warp of this region into framePers was omitted from the original listing
}
void Threshold()
{
    cvtColor(framePers, frameGray, COLOR_RGB2GRAY);
    inRange(frameGray, 230, 255, frameThresh);
    Canny(frameGray, frameEdge, 900, 900, 3, false);
    add(frameThresh, frameEdge, frameFinal);
    cvtColor(frameFinal, frameFinal, COLOR_GRAY2RGB);
    cvtColor(frameFinal, frameFinalDuplicate, COLOR_RGB2BGR);  // used in Histrogram() only
    cvtColor(frameFinal, frameFinalDuplicate1, COLOR_RGB2BGR); // restored: second copy used for the end-of-lane histogram
}
void Histrogram()
{
    histrogramLane.resize(400);
    histrogramLane.clear();

    // lane histogram (restored; mirrors the loop below, as the original ROI values were cut from this excerpt)
    for (int i = 0; i < 400; i++)
    {
        Mat ROILane = frameFinalDuplicate(Rect(i, 0, 1, 240));
        divide(255, ROILane, ROILane);
        histrogramLane.push_back((int)(sum(ROILane)[0]));
    }

    histrogramLaneEnd.resize(400);
    histrogramLaneEnd.clear();
    for (int i = 0; i < 400; i++)
    {
        ROILaneEnd = frameFinalDuplicate1(Rect(i, 0, 1, 240));
        divide(255, ROILaneEnd, ROILaneEnd);
        histrogramLaneEnd.push_back((int)(sum(ROILaneEnd)[0]));
    }
    laneEnd = sum(histrogramLaneEnd)[0];
    cout<<"Lane END = "<<laneEnd<<endl;
}
void LaneFinder()
{
    vector<int>::iterator LeftPtr;
    LeftPtr = max_element(histrogramLane.begin(), histrogramLane.begin() + 150);
    LeftLanePos = distance(histrogramLane.begin(), LeftPtr);

    // right-lane search restored to mirror the left-lane search above
    vector<int>::iterator RightPtr;
    RightPtr = max_element(histrogramLane.begin() + 250, histrogramLane.end());
    RightLanePos = distance(histrogramLane.begin(), RightPtr);
}
void LaneCenter()
{
laneCenter = (RightLanePos-LeftLanePos)/2 +LeftLanePos;
frameCenter = 188;
Result = laneCenter-frameCenter;
}
void Stop_detection()
{
if(!Stop_Cascade.load("//home//pi//Desktop//MACHINE LEARNING//Stop_cascade.xml"))
{
printf("Unable to open stop cascade file");
}
RoI_Stop = frame_Stop(Rect(200,0,200,140));
cvtColor(RoI_Stop, gray_Stop, COLOR_RGB2GRAY);
equalizeHist(gray_Stop, gray_Stop);
Stop_Cascade.detectMultiScale(gray_Stop, Stop);
ss.str(" ");
ss.clear();
ss<<"D = "<<dist_Stop<<"cm";
putText(RoI_Stop, ss.str(), Point2f(1,130), 0,1, Scalar(0,0,255), 2);
}
void Object_detection()
{
if(!Object_Cascade.load("//home//pi//Desktop//MACHINE
LEARNING//Object_cascade.xml"))
{
printf("Unable to open Object cascade file");
}
RoI_Object = frame_Object(Rect(200,0,200,140));
cvtColor(RoI_Object, gray_Object, COLOR_RGB2GRAY);
equalizeHist(gray_Object, gray_Object);
Object_Cascade.detectMultiScale(gray_Object, Object);
ss.str(" ");
ss.clear();
ss<<"D = "<<dist_Object<<"cm";
putText(RoI_Object, ss.str(), Point2f(1,130), 0,1, Scalar(0,0,255), 2);
}
int main(int argc, char **argv)
{
    wiringPiSetup();
    pinMode(21, OUTPUT);
    pinMode(22, OUTPUT);
    pinMode(23, OUTPUT);
    pinMode(24, OUTPUT);

    Setup(argc, argv);
    if (!Camera.open())   // condition restored around the original error message
    {
        cout<<"Failed to Connect"<<endl;
    }
    cout<<"Camera Id = "<<Camera.getId()<<endl;
    while(1)
    {
        auto start = std::chrono::system_clock::now();   // frame timer, used for FPS below

        Capture();
        Perspective();
        Threshold();
        Histrogram();
        LaneFinder();
        LaneCenter();
        Stop_detection();
        Object_detection();

        if (Result == 0)
        {
            digitalWrite(21, 0);
            digitalWrite(22, 0); // decimal = 0
            digitalWrite(23, 0);
            digitalWrite(24, 0);
            cout<<"Forward"<<endl;

            ss.str(" ");
            ss.clear();
            ss<<"Result = "<<Result<<" (Move Forward)";
            putText(frame, ss.str(), Point2f(1,50), 0,1, Scalar(0,0,255), 2);
        }
namedWindow("orignal", WINDOW_KEEPRATIO);
moveWindow("orignal", 0, 100);
resizeWindow("orignal", 640, 480);
imshow("orignal", frame);
namedWindow("Perspective", WINDOW_KEEPRATIO);
moveWindow("Perspective", 640, 100);
resizeWindow("Perspective", 640, 480);
imshow("Perspective", framePers);
namedWindow("Final", WINDOW_KEEPRATIO);
moveWindow("Final", 1280, 100);
resizeWindow("Final", 640, 480);
imshow("Final", frameFinal);
namedWindow("Object", WINDOW_KEEPRATIO);
moveWindow("Object", 640, 580);
resizeWindow("Object", 640, 480);
imshow("Object", RoI_Object);
        waitKey(1);
        auto end = std::chrono::system_clock::now();
        std::chrono::duration<double> elapsed_seconds = end - start;
        float t = elapsed_seconds.count();
        int FPS = 1/t;
        //cout<<"FPS = "<<FPS<<endl;
    }
    return 0;
}
PROGRAM:
// Pin assignments (the numbers here are placeholders; set them to match the wiring used)
const int EnableL = 5, HighL = 6, LowL = 7;
const int EnableR = 10, HighR = 8, LowR = 9;
const int D0 = 2, D1 = 3, D2 = 4, D3 = 11;

int i = 0;
unsigned long int j = 0;
int a, b, c, d, data;
void setup() {
pinMode(EnableL, OUTPUT);
pinMode(HighL, OUTPUT);
pinMode(LowL, OUTPUT);
pinMode(EnableR, OUTPUT);
pinMode(HighR, OUTPUT);
pinMode(LowR, OUTPUT);
pinMode(D0, INPUT_PULLUP);
pinMode(D1, INPUT_PULLUP);
pinMode(D2, INPUT_PULLUP);
pinMode(D3, INPUT_PULLUP);
}
void Data()
{
a = digitalRead(D0);
b = digitalRead(D1);
c = digitalRead(D2);
d = digitalRead(D3);
data = 8*d+4*c+2*b+a;
}
void Forward()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,255);
}
void Backward()
{
digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
analogWrite(EnableL,255);
digitalWrite(HighR, HIGH);
digitalWrite(LowR, LOW);
analogWrite(EnableR,255);
}
void Stop()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,0);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,0);
}
void Left1()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,160);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,255);
}
void Left2()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,90);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,255);
}
void Left3()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,50);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,255);
}
void Right1()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,160); //200
}
void Right2()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,90); //160
}
void Right3()
{
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
analogWrite(EnableL,255);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableR,50); //100
}
void UTurn()
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);
analogWrite(EnableL, 250);
analogWrite(EnableR, 250); //forward
delay(1000);
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);
digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW); // left
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(700);
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW); // forward
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(900);
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(400);
digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW); //left
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(700);
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(1000);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(300);
}
void Object()
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(1000);
digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH); //left
analogWrite(EnableL, 250);
analogWrite(EnableR, 250);
delay(500);
analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(200);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH); //forward
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(1000);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, HIGH); //right
digitalWrite(LowR, LOW);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(500);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW); // forward
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(500);
i = i+1;
}
void Lane_Change()
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(1000);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, HIGH);
digitalWrite(LowR, LOW); //Right
analogWrite(EnableL, 250);
analogWrite(EnableR, 250);
delay(500);
analogWrite(EnableL, 0);
analogWrite(EnableR, 0); //stop
delay(200);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH); //forward
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(800);
digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, LOW); //LEFT
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 255);
analogWrite(EnableR, 255);
delay(500);
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW); // forward
digitalWrite(LowR, HIGH);
analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(500);
}
void loop()
{
if (j > 25000)
{
Lane_Change();
i = 0;
j = 0;
}
Data();
if(data==0)
{
Forward();
if (i>0)
{
j = j+1;
}
}
else if(data==1)
{
Right1();
if (i>0)
{
j = j+1;
}
}
else if(data==2)
{
Right2();
if (i>0)
{
j = j+1;
}
}
else if(data==3)
{
Right3();
if (i>0)
{
j = j+1;
}
}
else if(data==4)
{
Left1();
if (i>0)
{
j = j+1;
}
}
else if(data==5)
{
Left2();
if (i>0)
{
j = j+1;
}
}
else if(data==6)
{
Left3();
if (i>0)
{
j = j+1;
}
}
else if(data==7)
{
UTurn();
}
else if (data==8)
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(4000);
analogWrite(EnableL, 150);
analogWrite(EnableR, 150);
delay(1000);
}
else if(data==9)
{
Object();
}
else if(data==10)
{
analogWrite(EnableL, 0);
analogWrite(EnableR, 0);
delay(2000);
}
else if(data>10)
{
Stop();
}
}