
Autonomous Cars

Chapter One: Introduction

“Autonomous Driving is a Reality.”

For the past hundred years, innovation within the automotive sector has brought major technological advances, leading to safer, cleaner and more affordable vehicles. In the early decades of the 21st century the industry appears to be on the cusp of revolutionary change, with the potential to dramatically reshape not just the competitive landscape but also the way we interact with vehicles, and indeed the future design of our roads and cities. The revolution, when it comes, will be engendered by the advent of the autonomous, or self-driving, car.

A glance at the communications of many car manufacturers, suppliers and technology companies
shows that they usually refer to ‘automated driving’, and only rarely to ‘autonomous driving’. The
former term is the umbrella term that includes several phases of automation beginning with driver-
assistance systems. The latter describes the final stage of automation, the situation in which the
system takes over all the steering, accelerating and braking manoeuvres. In this phase, a person is
out of the loop and the car is able to act totally alone at all times and in all traffic situations. Vehicles
that have reached this step can be described as autonomous, driverless or self-driving. In short, the term automation expresses the self-acting aspect of a machine; autonomy goes beyond that and means the self-governing ability of an entire system.

So far, we have been talking about autonomous cars in the sense of the furthest and most advanced stage of vehicle automation, in which the system is responsible for all driving tasks everywhere and at all times. With this full automation there is no driver anymore; all the occupants of a vehicle are just passengers. It will take quite a few years until many of these vehicles are seen on the roads, but some vehicles are already equipped with considerable automation today.

Level 0 is the starting point: there is no automation, and the driver steers, accelerates and brakes without any system support. With Levels 1 and 2, the system takes over more and more of these driving tasks. However, the driver is obliged to monitor the system, the traffic and the surroundings permanently, and must be able to take over the driving function at any time. Level 3 means that the driver no longer has to monitor the activity continuously, because the system indicates to him or her when it is necessary to take control of the vehicle. With Level 4, the system masters all driving functions in normal operation and in defined surroundings, but the driver can overrule the system's decisions. The last stage is Level 5, where the car is capable of driving on its own, even when no driver is in the vehicle.

Some manufacturers' autonomous vehicles have already emerged from the concept phase and passed thorough tests to take their place on the roads. Currently they operate in controlled environments, but they will soon be found in normal traffic.

For example, an Audi car drove itself from San Francisco to Las Vegas, and another driverless Audi reached a top speed of 149.1 miles per hour (240 km per hour) on a racetrack (see Figure 1.1). NIO's electric sports car, the EP9, completed the Circuit of the Americas Formula 1 racetrack in Austin in a spectacular 2:40.33 minutes. Mercedes has presented its F 015, which, with its futuristic design and many innovative features, provides an impression of the autonomous mobility of the coming years (see Figure 1.2).

Department of Computer Engineering

Figure 1.1. Driverless Race Car of Audi.

Source: Audi AG

Figure 1.2. Mercedes’ Self-Driving F015

Source: Daimler AG

For several years, Google has been testing its well-known vehicles in California and other US states. Tesla has equipped some of its cars with software, cameras and radar, enabling them to drive autonomously in certain traffic situations. Volvo plans to put cars that can drive in autonomous mode on the beltway around Gothenburg, Sweden. Many other car manufacturers, such as Ford, General Motors, BMW, Toyota, Kia, Nissan and Volkswagen, are working on prototypes of self-driving cars or have already tested them in road traffic.


The CTOs of many car manufacturers and technology companies agree that achieving the first 90 per cent of autonomous driving is nothing special. It's the last 10 per cent, in the most challenging traffic situations and especially in urban areas, that makes the difference. That is why the vehicles have to be tested in as many traffic situations as possible, so that experience is gained about their reactions. A similar argument is made by Jen-Hsun Huang, CEO of Nvidia, who demands an accuracy of 99.999999 per cent in the development of autonomous cars, whereby the last percentage point can only be achieved at very great expense. Toyota and the RAND Corporation have published calculations of the number of miles self-driving cars have to be driven before they can be assessed as roadworthy, because the algorithms required for driverless cars learn from exposure to many road traffic situations. The more traffic situations these algorithms are exposed to, the better prepared they are to master a new one. Designing this training process so that the accuracy demanded by Jen-Hsun Huang is obtained will be a crucial challenge in the development of autonomous vehicles.
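The scale of the testing problem can be sketched with a simple zero-failure reliability calculation in the spirit of the RAND study; the failure rate and confidence level below are illustrative assumptions, not figures taken from the study itself.

```python
import math

def miles_to_demonstrate(rate_per_mile, confidence):
    """Miles of failure-free test driving needed to show, with the given
    confidence, that the true failure rate is below rate_per_mile
    (zero-failure Poisson/exponential model)."""
    return -math.log(1.0 - confidence) / rate_per_mile

# Human drivers in the US average very roughly 1 fatality per 100 million miles.
human_rate = 1.0 / 100e6

# Miles needed to demonstrate at-least-human safety with 95% confidence:
print(f"{miles_to_demonstrate(human_rate, 0.95):.3g} miles")
```

Even under these generous assumptions the answer is on the order of hundreds of millions of miles, which is why simulation and fleet-scale data collection matter so much.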

When discussing what fault tolerance might be acceptable, it should be borne in mind that people are more likely to forgive mistakes made by other people than mistakes made by machines. This also applies to driving errors, which are more likely to be overlooked if they were committed by a human driver rather than by a machine. This means that autonomous vehicles will only be accepted if they make significantly fewer errors than human drivers. For this reason alone, the precision demanded by Jen-Hsun Huang is indispensable.


Chapter Two: Literature Survey

1. "End to End Learning for Self-Driving Cars" (2016) by Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao and Karol Zieba. NVIDIA empirically demonstrated that CNNs are able to learn the entire task of lane and road following without manual decomposition into road or lane-marking detection, semantic abstraction, path planning and control. A small amount of training data, from less than a hundred hours of driving, was enough to train the car to operate in diverse conditions: on highways, local and residential roads, in sunny, cloudy and rainy weather. The CNN can learn meaningful road features from a very sparse training signal (steering alone).

2. "Driverless Car: Autonomous Driving Using Deep Reinforcement Learning in Urban Environment" (2018) by Abdur R. Fayjie, Sabir Hossain, Doukhni Oualid and Deok-Jin Lee. The authors present a reinforcement-learning-based approach to autonomous driving, implemented with a Deep Q-Network. The reward function and agent actions were defined, and the neural network was trained to maximize the positive reward.

3. "Car Detection for Autonomous Vehicle: LIDAR and Vision Fusion Approach Through Deep Learning Framework" (2017) by Xinxin Du, Marcelo H. Ang Jr. and Daniela Rus. The authors propose a deep learning framework for car detection that fuses LIDAR and camera data. They show that, by considering the point cloud and exploiting multi-layer information, the framework generates high-quality proposal boxes more efficiently, and the final detection performance is further improved.


4. "Self-Driving Cars: A Peep into the Future" (2017) by T. Banerjee, S. Bose, A. Chakraborty, T. Samadder, Bhaskar Kumar and T. K. Rana. The paper discusses how the entire system functions and describes its division into obstacle sensing, locomotion, traffic-light detection, information to the user and navigation of the vehicle. It also states that, since the vehicle is powered by green energy, it will not contribute to global warming and pollution.

5. "MIT Autonomous Vehicle Technology Study: Large-Scale Deep Learning Based Analysis of Driver Behaviour and Interaction with Automation" (2018) by Lex Fridman, Daniel E. Brown, Michael Glazer, William Angell, Spencer Dodd, Benedikt Jenik, Jack Terwilliger, Julia Kindelsberger, Li Ding, Sean Seaman, Hillary Abraham, Alea Mehler, Andrew Sipperley, Anthony Pettinato, Bobbie Seppelt, Linda Angell, Bruce Mehler and Bryan Reimer. The paper presents the methodology behind the MIT-AVT study, which aims to define and inspire the next generation of naturalistic driving studies. Its guiding principle is to leverage computer vision and deep learning to automatically extract patterns of human interaction with various levels of autonomous vehicle technology, analysing the entire driving experience in large-scale data while using human expertise and qualitative analysis to gain case-specific understanding. To date, the dataset includes 99 participants, 11,846 days of participation, 405,807 miles and 5.5 billion video frames.


Chapter Three: Motivation and Objectives

Studying autonomous driving is quite a challenge, because new findings on the subject, often contradictory, appear every day. Ideas, concepts and technologies relating to self-driving vehicles are emerging all over the world, and it is hardly possible to gain a detailed overview of them all. This report therefore cannot claim to be an entirely consistent description that is accurate in every detail; it is more like the collected journal of an expedition that is not yet complete. It was worthwhile setting out on this expedition, because there is probably no other technology that will so fundamentally transform our economic and social lives. The time has come to address the subject of autonomous mobility and to make it a subject of social discourse, thus contributing to changing our lives for the better.

Automobiles have fascinated me since my childhood. I love driving cars: it gives me control and power over the machine, a feeling of freedom, movement, pride and pleasure. To me, a car means freedom. Cars are a form of self-expression; they are part of my identity. And just as I love cars, I also have a passion for Artificial Intelligence. Combining the two gives the field of self-driving cars, and that is what made me take it up as my seminar topic.


CHAPTER FOUR: DETAILS OF DESIGN/TECHNOLOGY/ANALYTICAL AND/OR EXPERIMENTAL WORK

The technologies upon which autonomous driving is based blur the boundaries between the automotive industry and the robotics, electronics and software industries. Software, with its program code and algorithms, as well as cameras and sensors using radar, ultrasound and lasers, are all gaining importance. Meanwhile, the hardware of a vehicle, the chassis, suspension and other mechanical parts, is losing importance. So it is not surprising that technology companies such as Microsoft, Apple, Google, Nvidia, Mobileye, NuTonomy and Qualcomm are occupied with autonomous driving and have developed their own driverless vehicles. Even traditional automotive suppliers such as Aisin, Delphi, Bosch, Denso, Continental, TRW, Schaeffler and Magna are either preparing their own prototypes of self-driving cars or working on the key components for autonomous driving.

The essence of autonomous driving is the development of vehicles into cyber-physical systems that combine mechanical and electronic components. A vehicle's hardware and software exchange data with the infrastructure (the Internet of Things), and the vehicle is controlled or monitored by a processing unit.

But what do we mean by an autonomous car?

4.a What is an Autonomous Car?

Technically, an autonomous car is a vehicle capable of sensing its environment and moving with little or no human input. These cars use a variety of techniques to detect their surroundings, such as radar, LIDAR, laser light, odometry, GPS and computer vision. They require a broad range of technologies and infrastructure to operate properly: each vehicle must continuously collect and interpret vast amounts of information, and every system of the car must work with the surrounding environment.

4.b Different Levels of Autonomous Driving

It may come as a surprise to learn that vehicle automation is not an either/or proposition. And not
everyone wants complete automation. Putting your car on automatic pilot in fair weather along a
standard stretch of highway may be fine, but any driver would want the option to regain control of
the car should circumstances demand it. The autonomy of vehicles is graded on a scale from zero to
five, with zero meaning no autonomy and five signifying complete autonomy.

The standards organization Society of Automotive Engineers (SAE) International has defined these
levels in a report on “Taxonomies and Definitions” regarding automated driving systems. SAE
descriptions for the levels are given in the following subtitles.

1. Level Zero: No Automation


There was a time when cars had no computers at all, and in the early days they
didn’t even have power steering or power brakes.
At level zero, all aspects of the driving task are in the hands of the driver.


“A vehicle that fits into this category relies on a human to dictate every driving
action.” The driver has full control.

2. Level One: Driver Assistance


At this level, the automobile includes some built-in capabilities to operate the
vehicle.
The vehicle may assist the driver with tasks like steering or
acceleration/deceleration.
Most modern cars fit into this level. If your vehicle has adaptive cruise control or
lane-keeping technology, it’s probably at level one.

3. Level Two: Partial Automation


At this level of automation, two or more automated functions work together to
relieve the driver of control. An example is a system with both adaptive cruise
control and automatic emergency braking.
The driver must remain fully engaged with the driving tasks, but you might notice
some transfer of control from man to machine.

4. Level Three: Conditional Automation


This level is marked by both the execution of steering and acceleration/deceleration
and the monitoring of the driving environment.
In levels zero through two, the driver does all the monitoring. At level three, the
driver is still required, but the automobile can perform all aspects of the driving
tasks under some circumstances.
Level three and higher qualify as an automated driving system (ADS). Another
commonly used term is highly automated vehicle (HAV).
There's a big jump in capability between levels two and three. The driver must still
keep his or her eyes on the road, ready to take over at a moment's notice.
But a level three vehicle can handle certain parts of the trip on its own, mainly
highway driving.

5. Level Four: High Automation


Level four vehicles don’t need a human driver. The vehicle can essentially do all the
driving, but the driver can intervene and take control as needed.
This level of automation means that the car can perform all driving functions “under
certain conditions.” The test vehicles currently on the road would fall under this
category.

6. Level Five: Full Automation


A completely automated vehicle can perform all driving functions under all
conditions. In this situation, humans are just passengers.
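The six levels described above can be captured in a small sketch. The helper names below are my own shorthand, but the monitoring boundary (levels 0 to 2) and the ADS boundary (level 3 and up) follow the SAE definitions summarized in this section:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_must_monitor(level: SAELevel) -> bool:
    """Levels 0-2 require the driver to monitor the system, traffic and
    surroundings continuously; from Level 3 up, the system prompts the
    driver when a takeover is needed."""
    return level <= SAELevel.PARTIAL_AUTOMATION

def is_ads(level: SAELevel) -> bool:
    """Level 3 and higher qualify as an automated driving system (ADS)."""
    return level >= SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))      # True
print(is_ads(SAELevel.CONDITIONAL_AUTOMATION))               # True
```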

4.c Different Challenges faced while designing the Autonomous Cars

1. Road Conditions
Road conditions can be highly unpredictable and vary from place to place. In some cases
there are no smooth, marked highways; in other cases the roads are highly
deteriorated: no lane markings, undefined lanes, potholes, and mountain and
tunnel roads where external signals for direction are not very clear.

2. Weather Conditions
Weather conditions are another spoilsport. The weather may be sunny and clear, or
rainy and stormy, and autonomous cars must work in all of it. There is absolutely
no scope for failure or downtime.

3. Traffic Conditions
Autonomous cars will have to drive in all sorts of traffic conditions. They will
share the road with other autonomous cars and, at the same time, with many humans,
and wherever humans are involved, a lot of emotion is involved. Traffic may be
highly moderated and self-regulated, but there are often cases where people break
traffic rules, or an object turns up unexpectedly. In dense traffic, even movement
of a few centimetres per minute matters. A car cannot wait endlessly for the
traffic to clear before it starts moving; if many such cars wait for the traffic to
clear, the result may ultimately be a traffic deadlock.

4. Accident Liability
The most important aspect of autonomous cars is accident liability. In an
autonomous car, the software is the main component that drives the car and makes
all the important decisions. While initial designs had a person physically placed behind
the steering wheel, newer designs showcased by Google have no dashboard or
steering wheel at all. Additionally, because of the nature of autonomous cars, the occupants will
mostly be in a relaxed state and may not be paying close attention to the traffic conditions.
In situations where their attention is needed, by the time they need to act it may be too late
to avert the situation.

5. Radar interference
Autonomous cars use lasers and radar for navigation. The lasers are mounted on the
roof, while the radar sensors are mounted on the body of the vehicle. Radar works by
detecting reflections of radio waves from surrounding objects: on the road, a car
continuously emits radio-frequency waves, which are reflected from the surrounding cars and
other objects near the road, and appropriate action is then taken based on the radar readings.
When this technology is used by hundreds of vehicles on the road, a car may not be able to
distinguish its own signal from the signal of another vehicle. Even if multiple
frequencies are available for radar, the frequency range is likely to be insufficient for all
the vehicles manufactured.

The challenges to rolling out autonomous cars on the road are many even today. But so is the
determination of our scientists, engineers and problem solvers from various disciplines. The
collective effort of the industry will one day make the autonomous car on the road a reality, and the
benefits will be huge: it will not only save fuel and encourage efficient transportation and shared
services, but also help save many of the lives that are regularly lost in road accidents.


The solution to most of the challenges mentioned above lies in putting an AI-powered
supercomputer in the car. To this end, NVIDIA is developing ever more powerful and intelligent AI
compute boards. One such example is the NVIDIA DRIVE AGX Pegasus.

4.d NVIDIA Drive AGX Pegasus

It's hard to argue with calling NVIDIA the market leader in providing integrated compute platforms
for automated driving development. Since the launch of the original Drive PX system in 2015, NVIDIA
has continued cranking up the performance and capability of its development platform. One of the
key factors in NVIDIA's success in this space has been an integrated platform that includes a board
with processing chips and all the necessary input/output for connecting sensors and communicating
with other vehicle systems. This makes it a lot easier for developers to buy the system and connect it
in the vehicle without having to assemble a bespoke solution from parts. NVIDIA also provides an array
of software frameworks for deep learning that enable developers to get started quickly. To get to so-
called Level 5 automation, which can operate in any environment at any time, NVIDIA is now offering
the Drive AGX Pegasus. The Pegasus system uses two Xaviers and two of the company's next-
generation GPUs to crank out 320 TOPS.

Given below is the spec sheet of the Drive AGX Pegasus.

GPU microarchitecture:  Volta (12 nm)
Computing:              2x Tegra Xavier
CPU:                    16x Carmel ARM64
GPU:                    2x Volta iGPU (512 CUDA cores), 2x Turing dGPU
Accelerator:            2x DLA
Memory:                 LPDDR4
Storage:                128 GB eMMC
Performance:            320 INT8 TOPS (total)
TDP:                    500 W


4.e Components in an Autonomous Cars

A. Drive AGX Platform


NVIDIA Drive AGX self-driving compute platforms are built on NVIDIA Xavier™, the world's
first processor designed for autonomous driving. The auto-grade Xavier system-on-a-chip
(SoC) is in production today and is architected for safety, incorporating six different types of
processors to run redundant and diverse algorithms for AI, sensor processing, mapping and
driving. Leveraging Xavier, Drive AGX platforms process data from camera, lidar, radar and
ultrasonic sensors to understand the complete 360-degree environment in real time, localize
the vehicle on a map, and plan a safe path forward.
NVIDIA Drive AGX Pegasus achieves an unprecedented 320 TOPS of deep learning performance with an
architecture built on two NVIDIA Xavier processors and two next-generation Tensor Core
GPUs. This energy-efficient, high-performance AI computer runs an array of deep neural
networks simultaneously and is designed to safely handle highly automated and fully
autonomous driving. No steering wheel or pedals required.

Figure 4.e.1. Drive AGX Platform

Source: NVIDIA

B. Camera
All autonomous cars utilize some form of camera technology, but the actual camera
technology and setup vary between driverless cars. The cameras are used to identify road
markings and traffic signals. Some self-driving cars can operate using just a single camera
embedded in the windshield; other autonomous cars require several cameras
mounted on the vehicle's exterior with slight separation, giving overlapping views
of the car's surroundings. The goal of a driverless car's cameras is to help the car's computer
build up a composite picture of the surrounding world in order to drive safely. The
technology behind these cameras functions like the human eye, providing overlapping
images to the vehicle's computer, which then determines things like depth of field, peripheral
movement and the dimensionality of objects.

Figure 4.e.2. Cameras in Autonomous Car

Source: Google Images
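The overlapping views are what make depth measurable: an object appears at slightly different positions in two separated cameras, and that shift (the disparity) encodes distance. A minimal sketch of the stereo geometry, with a hypothetical focal length and camera baseline:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from two overlapping camera views: an object shifted by
    `disparity_px` pixels between the left and right images lies at
    (focal length * baseline / disparity) metres from the cameras."""
    if disparity_px <= 0:
        raise ValueError("object must be visible in both views")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, cameras 30 cm apart,
# a car seen 14 px apart between the two images.
print(stereo_depth(700, 0.30, 14.0))  # 15.0 (metres)
```

Note how small disparities map to large distances, which is why stereo depth degrades for far-away objects and radar or LIDAR take over there.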


C. Radar Sensor
Autonomous cars typically have bumper-mounted radar sensor units (two in front and two
in back). These help the vehicle detect road dynamics such as detours, traffic delays, vehicle
collisions, and other obstacles, by sending a signal to the on-board processor to apply the
brakes and/or move out of the way. This technology works in conjunction with other
features on the car such as inertial measurement units, gyroscopes, and a wheel encoder to
send accurate signals to the processing unit (i.e. brain) of the vehicle.

Figure 4.e.3. Radar Sensor

Source: Google Images

D. LIDAR (Light Detection and Ranging) Sensor


The LIDAR unit, which looks like a spinning siren light, provides driverless cars with highly
accurate long-range detection, with ranges of up to 100 meters. As it spins, it continuously
scans the world around the car and builds a 3D omnidirectional view, letting the car see
potential hazards by bouncing a laser beam off the surfaces surrounding it in order to
accurately determine the identity and distance of each object. Rotating LIDAR units are
normally mounted on top of the car, providing an unobstructed 360-degree view. The unit
generates raw information about the world, which is then sent to the car's brain to process,
just as a human driver's eyes feed his or her brain.

Figure 4.e.4. LIDAR Sensor

Source: Google Images
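Each revolution of the spinning unit yields (angle, range) pairs, and turning those into points the planner can reason about is simple trigonometry. A minimal sketch in two dimensions; the four-beam scan below is a made-up example, real units fire hundreds of thousands of beams per second:

```python
import math

def lidar_scan_to_points(ranges_m, angle_start=0.0, angle_step=None):
    """Convert one revolution of a spinning LIDAR (a list of range
    readings taken at evenly spaced angles) into 2D (x, y) points
    in the car's frame of reference."""
    if angle_step is None:
        angle_step = 2 * math.pi / len(ranges_m)
    points = []
    for i, r in enumerate(ranges_m):
        a = angle_start + i * angle_step
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Four beams, 90 degrees apart, each hitting an obstacle 10 m away.
print(lidar_scan_to_points([10.0, 10.0, 10.0, 10.0]))
```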


E. GNSS and IMU


Global Navigation Satellite System (GNSS) refers to a constellation of satellites providing
signals from space that transmit positioning and timing data to GNSS receivers. The receivers
then use this data to determine location. By definition, GNSS provides global coverage. The
performance of GNSS is assessed using four criteria:

1. Accuracy: the difference between a receiver’s measured and real position, speed or
time;
2. Integrity: a system’s capacity to provide a threshold of confidence and, in the event of
an anomaly in the positioning data, an alarm;
3. Continuity: a system’s ability to function without interruption;
4. Availability: the percentage of time a signal fulfils the above accuracy, integrity and
continuity criteria.

An inertial measurement unit (IMU) is an electronic device that measures and reports a
body's specific force, angular rate and sometimes the magnetic field surrounding the body,
using a combination of accelerometers and gyroscopes, and sometimes also magnetometers.
IMUs are typically used to maneuver aircraft, including unmanned aerial vehicles (UAVs),
spacecraft, including satellites and landers, and autonomous vehicles.
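GNSS and the IMU complement each other: GNSS gives absolute but noisy fixes, while the IMU gives smooth motion estimates that drift over time. One common way to combine them is a complementary filter. A one-dimensional sketch, where the blending gain and the measurement values are illustrative assumptions only:

```python
def complementary_filter(gnss_pos, imu_vel, dt, alpha=0.98, pos0=0.0):
    """Blend dead-reckoned IMU velocity (smooth, but drifts) with noisy
    absolute GNSS fixes (no drift) along one axis. alpha close to 1
    trusts the IMU short-term while GNSS corrects long-term drift."""
    est = pos0
    history = []
    for z, v in zip(gnss_pos, imu_vel):
        predicted = est + v * dt                    # integrate the IMU
        est = alpha * predicted + (1 - alpha) * z   # pull toward GNSS
        history.append(est)
    return history

# Vehicle moving at a steady 10 m/s; GNSS fixes are 1-2 m noisy.
gnss = [1.3, 2.2, 2.8, 4.5, 5.1]
imu = [10.0] * 5
print(complementary_filter(gnss, imu, dt=0.1))
```

The fused track stays close to the true positions (1, 2, 3, 4, 5 metres) even though the individual GNSS fixes are off by up to half a metre.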

4.f Working of Autonomous Car

The working of an autonomous car involves five core components.

Figure 4.f.1. 5 Core Components

Source: Wired


1. Computer Vision
Computer Vision is how we use cameras to see the road. Humans demonstrate the power of
vision by handling a car with basically just two eyes and a brain. For a self-driving car, we can
use camera images to find lane lines, or track other vehicles on the road.

Figure 4.f.2. Computer Vision

Source: Medium
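Finding lane lines in a camera image ultimately means isolating the bright lane-marking pixels and fitting a line through them. A deliberately simplified sketch on a synthetic frame; real pipelines add color filtering, perspective correction and curve fitting:

```python
import numpy as np

def fit_lane_line(image, threshold=200):
    """Find bright (lane-marking) pixels in a grayscale image and fit
    a straight line x = m*y + b through them, as a minimal stand-in
    for a real lane-detection pipeline (one line assumed in view)."""
    ys, xs = np.nonzero(image > threshold)
    m, b = np.polyfit(ys, xs, 1)
    return m, b

# Synthetic 100x100 frame with a lane line along x = 0.5*y + 10.
frame = np.zeros((100, 100))
for y in range(100):
    frame[y, int(0.5 * y + 10)] = 255

m, b = fit_lane_line(frame)
print(round(m, 2), round(b, 1))  # recovers roughly slope 0.5, offset 10
```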

2. Sensor Fusion
Sensor Fusion is how we integrate data from other sensors, like radar and lasers, together
with camera data, to build a comprehensive understanding of the vehicle's environment. As
good as cameras are, there are certain measurements, like distance and velocity, at which
other sensors excel, and other sensors can work better in adverse weather, too. By
combining all of our sensor data, we get a richer understanding of the world.

Figure 4.f.3. Sensor Fusion

Source: Google Images
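At its core, fusing two sensors that measure the same quantity is inverse-variance weighting, which is exactly the measurement-update step of a one-dimensional Kalman filter. A sketch with made-up radar and camera readings:

```python
def fuse(mean1, var1, mean2, var2):
    """Fuse two noisy measurements of the same quantity (e.g. a radar
    range and a camera range), weighting each by its certainty: the
    lower-variance sensor pulls the estimate harder, and the fused
    variance is smaller than either input."""
    mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return mean, var

# Radar says the lead car is 20.0 m ahead (variance 1.0);
# the camera says 22.0 m but is less certain (variance 4.0).
mean, var = fuse(20.0, 1.0, 22.0, 4.0)
print(mean, var)  # 20.4 0.8
```

The fused estimate lands closer to the radar (the more certain sensor), and its variance (0.8) is lower than either sensor alone, which is the whole point of fusion.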

3. Localization
Localization is how we figure out where we are in the world, which is the next step after we
understand what the world looks like. We all have cell phones with GPS, so it might seem
like we know where we are all the time already. But in fact, GPS is only accurate to within
about 1–2 meters. Think about how big 1–2 meters is! If a car were wrong by 1–2 meters, it
could be off on the sidewalk hitting things. So we have much more sophisticated
mathematical algorithms that help the vehicle localize itself to within 1–2 centimetres.

Figure 4.f.4. Localization

Source: Google Images
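One classic way to localize far more precisely than raw GPS is a histogram (Bayes) filter that matches what the sensors see against a stored map. A toy one-dimensional sketch; the map cells and the hit/miss sensor model are invented for illustration:

```python
def localize(belief, world, measurement, hit=0.9, miss=0.1):
    """One measurement update of a histogram (Bayes) filter: map cells
    consistent with the observation gain probability mass, the rest
    lose it, and the result is renormalized to sum to 1."""
    posterior = [b * (hit if cell == measurement else miss)
                 for b, cell in zip(belief, world)]
    total = sum(posterior)
    return [p / total for p in posterior]

# A 5-cell map; the car sees a lane marking and sharpens its belief
# from "anywhere" to "probably at one of the two marking cells".
world = ['road', 'marking', 'road', 'road', 'marking']
belief = [0.2] * 5  # uniform prior: no idea where we are
print(localize(belief, world, 'marking'))
```

Repeating this update while the car moves quickly concentrates the belief on a single cell, which is how map matching gets from metre-level GPS down to centimetre-level estimates.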


4. Path Planning
Path planning is the next step, once we know what the world looks like, and where in it we
are. In the path planning phase, we chart a trajectory through the world to get where we
want to go. First, we predict what the other vehicles around us will do. Then we decide
which maneuver we want to take in response to those vehicles. Finally, we build a trajectory,
or path, to execute that maneuver safely and comfortably.

Figure 4.f.5. Path Planning

Source: Google Images
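The predict / decide / build sequence can be caricatured in a few lines. The maneuver costs and the straight-line waypoint generator below are placeholders for what is, in practice, a heavy prediction-and-optimization problem:

```python
def choose_maneuver(costs):
    """Decide: pick the maneuver with the lowest total cost (safety,
    comfort and progress are assumed to be folded into one number)."""
    return min(costs, key=costs.get)

def build_trajectory(x0, x1, steps):
    """Build: a straight-line stand-in for the trajectory generator,
    a list of evenly spaced waypoints from x0 to x1."""
    return [x0 + (x1 - x0) * i / steps for i in range(steps + 1)]

# Predict step (assumed done): the lead car ahead is slow, so the
# lane change to the left scores best.
costs = {'keep_lane': 5.0, 'change_left': 2.0, 'change_right': 7.0}
maneuver = choose_maneuver(costs)
# Lateral trajectory for a 3.5 m lane change in 5 steps.
print(maneuver, build_trajectory(0.0, 3.5, 5))
```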

5. Control
Control is the final step in the pipeline. Once we have the trajectory from our path planning
block, the vehicle needs to turn the steering wheel and hit the throttle or the brake, in order
to follow that trajectory.

Figure 4.f.6. Control

Source: Audi AG
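A common minimal controller for following the planned trajectory is PID control on the cross-track error, the lateral distance between the car and the trajectory. A sketch, with illustrative gains rather than tuned ones:

```python
class PID:
    """Minimal PID steering controller: turns cross-track error (how
    far the car is from the planned trajectory) into a steering
    command that pushes the car back toward the path."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # accumulates steady-state error
        self.prev_error = None   # for the derivative (damping) term

    def steer(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt
        self.prev_error = error
        return -(self.kp * error
                 + self.ki * self.integral
                 + self.kd * derivative)

# Car starts 1 m to the right of the trajectory; the negative output
# steers it back left. Gains here are illustrative only.
pid = PID(kp=0.2, ki=0.001, kd=2.0)
print(pid.steer(error=1.0, dt=0.1))
```

The proportional term corrects the current offset, the integral term removes persistent bias (e.g. a crowned road), and the derivative term damps the correction so the car does not oscillate across the line.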


CHAPTER FIVE: DISCUSSIONS AND CONCLUSIONS

Currently, there are many different technologies available that can assist in creating autonomous
vehicle systems. Items such as GPS, automated cruise control and lane-keeping assistance are
available to consumers on some luxury vehicles. The combination of these technologies with other
systems, such as video-based lane analysis, steering and brake actuation systems, and the programs
necessary to control all the components, will become a fully autonomous system. The problem is
winning people's trust in allowing a computer to drive a vehicle for them; because of this, research
and testing must be done repeatedly to assure a near-foolproof final product. The product will not be
accepted instantly, but over time, as the systems become more widely used, people will realize its
benefits. The implementation of autonomous vehicles will also raise the problem of replacing human
work with computers. There will not be an instant change in society, but it will become more
apparent over time as autonomous vehicles are integrated into society.


CHAPTER SIX: BIBLIOGRAPHY

[1] X. Du, M. H. Ang and D. Rus, "Car detection for autonomous vehicle: LIDAR and vision fusion
approach through deep learning framework", 2017 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), Vancouver, BC, 2017, pp. 749-754.
[2] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon
Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao and Karol
Zieba, "End to End Learning for Self-Driving Cars", NVIDIA Corporation, Holmdel, NJ 07735, 2016.
[3] T. Banerjee, S. Bose, A. Chakraborty, T. Samadder, B. Kumar and T. K. Rana, "Self-driving cars: A
peep into the future", 2017 8th Annual Industrial Automation and Electromechanical Engineering
Conference (IEMECON), Bangkok, 2017, pp. 33-38.
[4] A. R. Fayjie, S. Hossain, D. Oualid and D. Lee, "Driverless Car: Autonomous Driving Using Deep
Reinforcement Learning in Urban Environment", 2018 15th International Conference on Ubiquitous
Robots (UR), Honolulu, HI, 2018, pp. 896-901.
[5] Lex Fridman, Daniel E. Brown, Michael Glazer, William Angell, Spencer Dodd, Benedikt Jenik, Jack
Terwilliger, Julia Kindelsberger, Li Ding, Sean Seaman, Hillary Abraham, Alea Mehler, Andrew
Sipperley, Anthony Pettinato, Bobbie Seppelt, Linda Angell, Bruce Mehler, Bryan Reimer, “MIT
Autonomous Vehicle Technology Study: Large-Scale Deep Learning Based Analysis of Driver Behavior
and Interaction with Automation”, 2018 77 Massachusetts Ave, Cambridge, MA 02139, USA, 2018.
[6] “WIRED.” Wired, Conde Nast, www.wired.com.
[7] “Medium – a Place to Read and Write Big Ideas and Important Stories.” Medium, medium.com.
[8] books.emeraldinsight.com. N.p., n.d. Web. 18 Apr. 2019.
<https://books.emeraldinsight.com/resources/pdfs/chapters/9781787148345-TYPE23-NR2.pdf>.


CHAPTER SEVEN: ANNEXURE 1
Plagiarism Report
