Autonomous Baja Paper Published
Abstract
This research project introduces a novel modular system aimed at transforming off-road Baja vehicles
into autonomous entities while still maintaining the option for manual operation. The primary objective
of this study involves the design and development of a Level 3 autonomous vehicle prototype, utilizing
a Society of Automotive Engineers eBaja vehicle equipped with advanced actuators and exteroceptive
sensors. The outcome of this project yielded a drive-by-wire system, enabling the Baja vehicle to
autonomously localize itself through sensor input, generate accurate surrounding maps, and efficiently
plan trajectories. In simulated environments, the vehicle successfully navigated a series of obstacles
based on given maps. The goal is to enhance the software system's modularity and real-time capability,
empowering the vehicle to autonomously traverse challenging off-road terrains for the purpose of
aiding distressed individuals in rescue scenarios. Furthermore, this article addresses the criticality of
sensor integrity, emphasizing the potential risks associated with tampering or manipulation of sensor
data. As the lives of individuals are at stake, any compromises to the accuracy and reliability of sensor-
generated data can lead to catastrophic consequences. Consequently, a comprehensive exploration of
attacks targeting various sensor types employed by autonomous vehicles is discussed. At Level 3,
autonomous vehicles assume full control over driving tasks, although human drivers must remain
vigilant and ready to intervene if the advanced driver assistance systems require assistance or encounter
functional limitations. Pending approval, drivers will have the ability to temporarily disengage from
steering responsibilities, allowing them to engage in activities such as video streaming, email
correspondence, and communication with colleagues.
Keywords: Autonomous vehicle, vehicle technology, self-driving vehicles, driverless cars, artificial
intelligence in vehicles, robotics in automotive, advanced driver assistance systems, ADAS
INTRODUCTION
As autonomous driving technology matures, it is crucial to address critical challenges such as protecting
user privacy and mitigating risks associated with hacking and terrorism [1].
The popularity of vehicular technology has surged in recent times, with autonomous driving emerging
as a prominent and widely discussed topic. To develop intelligent transportation systems that are safe
and reliable, it is essential to deploy precise positioning technologies that can effectively handle various
uncertainties, including pedestrian behavior, unexpected obstacles, and diverse road conditions [2].
Achieving public trust and the successful realization of fully autonomous vehicles (AVs) necessitates the
demonstration of exceptional accuracy by these technologies in addressing these challenges.
An AV, also known as a self-driving or driverless car, possesses the remarkable capability to perceive
its environment and navigate without human intervention. This groundbreaking achievement is made
possible through the utilization of a diverse range of sensors, including cameras, radar systems, and
lidar, along with sophisticated computer algorithms and advanced machine learning techniques. By
analyzing data collected from these sensors, AVs can effectively detect and respond to traffic conditions,
road hazards, and obstacles, enabling them to navigate to their intended destinations safely and
efficiently without the need for human input [3–7].
In the realm of AVs, a fully automated driving system is employed to enable the vehicle to adapt and
respond to external conditions that are typically managed by a human driver. This entails establishing
communication links between vehicles and infrastructure to facilitate Vehicle-to-Vehicle (V2V) and
Vehicle-to-Infrastructure (V2I) communication. Through V2V communication, vehicles can exchange
pertinent information such as local traffic data and driving intentions. In essence, AVs can be described
as intelligent cars or “robocars” that utilize a combination of sensors, computer processors, and
comprehensive databases, such as maps, to assume partial or complete control of driving functions from
human operators. The integration of this technology into cars promises numerous advantages, including
a potential reduction in accidents, energy consumption, and overall pollution levels [8–10].
It is important to note that automated driving is not a binary concept, but rather a progressive
evolution. Automakers are gradually introducing active safety and self-driving features to their vehicles.
These features are often categorized based on their combined control of acceleration and braking
(longitudinal control) and steering (lateral control). While some features may exhibit similar
functionalities, they may differ in terms of the level of human control versus autonomous system
control, aligning with different levels of driving automation [11].
The Society of Automotive Engineers (SAE) uses the term "automated" rather than "autonomous" to
describe these vehicles. One reason for this distinction is that the term "autonomy" carries implications
beyond the electromechanical realm. A fully autonomous car would possess self-awareness and the
ability to make independent choices [12].
To illustrate, consider instructing the car to take you to work: a fully autonomous car might decide to
divert to the beach instead, whereas a fully automated car would strictly follow your instructions and
navigate to the designated destination without deviations [13].
Although the terms “self-driving” and “autonomous” are often used interchangeably, they have slight
distinctions. A self-driving car has the capability to drive itself in certain or even all situations, but with
the requirement that a human passenger remains present and prepared to assume control if necessary.
Self-driving cars typically fall within Level 3 (conditional driving automation) or Level 4 (high driving
automation) classifications. They may also be subject to geofencing, which restricts their operation to
specific geographic areas. In contrast, a fully autonomous Level 5 vehicle possesses the freedom to
operate anywhere without limitations.
The SAE, specifically J3016, has established a standardized framework consisting of six levels of
driving automation. These levels have been widely adopted, including by the U.S. Department of
Transportation. Each level represents a distinct degree of automation and can be summarized as follows:
• Level 0: No Automation: The vehicle relies entirely on human control, with no automated features
present.
• Level 1: Driver Assistance: The vehicle incorporates driver-assist technologies that provide
limited automation in specific functions, such as adaptive cruise control or lane-keeping
assistance. However, the driver retains primary control and responsibility.
• Level 2: Partial Automation: The vehicle can concurrently control steering and
acceleration/deceleration but requires the driver to remain engaged and vigilant.
• Level 3: Conditional Automation: The vehicle can manage most driving tasks under specific
conditions but still necessitates the driver’s ability to intervene when prompted.
• Level 4: High Automation: The vehicle can independently handle most driving functions within
predefined conditions or operational domains. However, there may be exceptional situations
where the driver might need to assume control.
• Level 5: Full Automation: The vehicle is fully autonomous and proficient in executing all driving
tasks across any situation or environment without human intervention. Level 5 vehicles possess
no restrictions and can operate anywhere.
These six levels of automation serve as a comprehensive framework for categorizing the capabilities
and limitations of self-driving and AVs. They facilitate effective communication and understanding
among researchers, manufacturers, policymakers, and the general public in the rapidly evolving field
of autonomous transportation (see Figure 1).
Each level of automation necessitates the integration of additional sensor layers as vehicles
progressively assume responsibilities previously managed by human drivers. For instance, a Level 1
vehicle might be equipped with a single radar and camera, while a Level 5 vehicle, which must be
capable of navigating any environment it encounters, requires comprehensive 360-degree sensing
across various sensor types.
Advanced driver-assistance systems (ADAS) are electronic systems integrated into vehicles that
leverage cutting-edge technologies to assist drivers. These systems encompass a wide array of active
safety features, with the terms “ADAS” and “active safety” often used interchangeably.
ADAS utilizes sensors such as radar and cameras within the vehicle to perceive the surrounding
environment. Based on the perception data, ADAS can provide valuable information to the driver or
autonomously initiate appropriate actions.
ADAS functionalities that offer information to the driver commonly incorporate the term “warning”
in their names. For instance, if the system detects the presence of an object, such as another vehicle or
a cyclist, in a location that may be challenging for the driver to observe, features like blind-spot warning
or rear backup warning will alert the driver. Similarly, if the system determines that the vehicle is
deviating from its designated lane, it can activate lane departure warning to notify the driver.
In summary, ADAS technologies play a pivotal role in enhancing driving safety and efficiency. They
not only assist drivers in various scenarios but also prove beneficial for individuals with physical or
cognitive impairments that might otherwise impede their ability to operate a vehicle. By leveraging
sensors and advanced algorithms, ADAS contributes to a safer and more accessible driving experience.
NEED/MOTIVATION
The development of AVs is driven by several key motivations, each with significant implications for
the future of transportation:
• Improved safety: A primary driving force behind AV development is the goal of enhancing road
safety. By reducing accidents caused by human error, which presently account for a significant
portion of road incidents, self-driving cars have the potential to make roads safer for all users.
• Increased efficiency: AVs offer the prospect of improved transportation efficiency by addressing
challenges such as traffic congestion and parking limitations. Through optimized traffic flow,
reduced idle time, and streamlined parking, self-driving cars can enhance overall transportation
efficiency, leading to time and fuel savings.
• Accessibility: Another compelling motivation is to enhance accessibility to transportation for
individuals who face limitations in driving, such as the elderly or people with disabilities. Self-
driving cars can provide a means for these individuals to regain their mobility and independence,
expanding their access to essential services and opportunities.
• Environmental benefits: AVs hold the promise of significant environmental benefits. By
optimizing driving behavior, reducing traffic congestion, and implementing efficient routing
systems, self-driving cars can contribute to a reduction in greenhouse gas emissions and other
pollutants, thereby mitigating the impact of transportation on the environment.
• Economic benefits: The development and deployment of AVs have the potential to create new
industries, generate employment opportunities, and spur economic growth. Additionally, as self-
driving technology advances and becomes more widespread, transportation costs are expected to
decrease, benefiting both individuals and businesses.
In summary, the motivation behind AVs is centered around creating a transportation system that is
safer, more efficient, and accessible to a broader range of individuals. By addressing key challenges and
harnessing the potential of advanced technology, self-driving cars have the capacity to transform the
way we travel, benefiting society (see Figure 2).
CHALLENGES OF AV
The advancement and implementation of AVs encounter several noteworthy challenges, including
the following:
• Safety and reliability: Ensuring the safety and reliability of AVs is a paramount challenge. These
vehicles must be capable of operating safely and dependably in diverse road and weather
conditions, promptly detecting and responding to unexpected events, and minimizing the risk of
accidents. Achieving robust safety and reliability necessitates rigorous testing, validation
processes, and continuous improvements.
• Legal and regulatory issues: The development and deployment of AVs give rise to complex legal
and regulatory considerations. Issues such as liability and insurance, privacy protection, and
compliance with existing regulations pose significant challenges. Establishing comprehensive
legal and regulatory frameworks that address these concerns and facilitate the safe and ethical
deployment of AVs is imperative.
• Infrastructure and connectivity: AVs rely on advanced infrastructure and high-speed connectivity
to function effectively. Technologies such as 5G networks, precise GPS systems, and high-
definition mapping are essential components. However, the development and deployment of such
infrastructure, especially in remote or rural areas, pose substantial challenges that require
substantial investment and planning.
• Technical complexity: The development of AVs involves addressing various technical challenges.
Advancements in sensor technology, sophisticated machine learning algorithms, robust human–
machine interfaces, and reliable cybersecurity measures are crucial for achieving safe and
efficient autonomous driving. Overcoming these technical complexities requires continuous
innovation and collaboration across interdisciplinary fields.
• Public perception and acceptance: AVs represent a relatively novel technology, and their
widespread adoption may encounter public skepticism and resistance. Building public awareness,
addressing concerns, and fostering trust in autonomous driving technologies are significant
challenges. Educating the public about the potential benefits, safety measures, and ethical
considerations of AVs is vital for their acceptance and integration into society.
The development and deployment of AVs confront multiple substantial challenges. Addressing safety,
legal and regulatory aspects, infrastructure requirements, technical complexities, and public perception
are critical to realizing the full potential of autonomous driving. Meeting these challenges demands
concerted efforts from industry stakeholders, policymakers, researchers, and the public to ensure the
successful integration of AVs into our transportation ecosystem.
Several automakers are actively developing Level 3 autonomous driving technology, with some
vehicles already available on the market. However, the deployment of such technology raises significant
technical, ethical, and legal challenges. Regulatory bodies are diligently evaluating the safety and
reliability of these systems to ensure their adherence to robust standards.
As this technology continues to evolve, further advancements, stringent testing, and continued
collaboration among industry stakeholders, policymakers, and regulators are necessary to address the
complexities associated with Level 3 autonomous driving and ensure the safe integration of these
vehicles on our roads (see Figure 3).
To implement stereo vision, the team explored two standard approaches. The first involved a
proprietary implementation using two separate cameras, while the second utilized an established stereo
vision unit consisting of two cameras along with specialized software. Both options were evaluated
based on factors such as image acquisition speed, range, resolution, depth perception accuracy, and field
of view.
Faster image acquisition enables a higher frame rate, allowing the system to stay updated with real-
time information. Longer range image sensors facilitate earlier detection of environmental obstacles,
providing more time for decision-making. Higher resolution leads to clearer images, thereby enhancing
the accuracy of image processing. Additionally, precise depth perception contributes to improved
decision-making capabilities. A larger field of view allows for capturing more comprehensive
environmental information.
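As a concrete illustration of how these factors interact, the following minimal C++ sketch computes depth from stereo disparity using the standard pinhole relation depth = focal length × baseline / disparity. The focal length, baseline, and disparity values are illustrative placeholders, not the parameters of the camera options actually evaluated.

```cpp
#include <iostream>

// Minimal stereo-depth sketch: depth = f * B / d (pinhole model).
// The numbers below are illustrative, not the evaluated hardware's specifications.
double depthFromDisparity(double focalLengthPx, double baselineM, double disparityPx) {
    if (disparityPx <= 0.0) return -1.0;   // no valid match at this pixel
    return focalLengthPx * baselineM / disparityPx;
}

int main() {
    const double focalLengthPx = 700.0;  // assumed focal length in pixels
    const double baselineM     = 0.12;   // assumed 12 cm camera separation
    // A wider baseline or higher resolution (more pixels of disparity per metre)
    // improves long-range depth accuracy at the cost of a narrower overlap region.
    for (double disparity : {40.0, 20.0, 10.0, 5.0}) {
        std::cout << "disparity " << disparity << " px -> depth "
                  << depthFromDisparity(focalLengthPx, baselineM, disparity) << " m\n";
    }
    return 0;
}
```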
However, it is important to note that cameras alone may not suffice for long-range detection,
especially in adverse weather conditions. In situations where the cameras face visibility limitations, an
alternative detection method becomes necessary. Long-range sensors are crucial for detecting obstacles
in the vehicle’s path and estimating their distance, enabling swift reactions. Two standard options for
long-range distance sensing are light detection and ranging (LiDAR) and range-finding radar. LiDAR
employs laser beams to detect objects and measure their distance, while radar uses electromagnetic
waves projected within a cone area to detect and receive signals reflected from obstacles.
Implementing stereo vision and incorporating long-range sensors, such as LiDAR or radar, play
integral roles in facilitating accurate perception and efficient obstacle detection for the Baja vehicle.
These technologies contribute to the vehicle’s ability to navigate safely and make informed decisions,
even in challenging environments or when visibility is compromised.
Radar operates by measuring the range and movement of objects based on the time and frequency of
the returning signal. On the other hand, LiDAR is an active sensor that calculates object distances by
emitting pulsed laser lights and measuring the time it takes for the reflected pulses to return. LiDAR
technology can be categorized into 2D and 3D LiDAR systems. In a 2D LiDAR system, a single laser
emitter and a rotating reflection platform are used to direct laser beams to all angles within the detection
range, creating a 2D map of the environment at the LiDAR’s mounting height. In contrast, 3D LiDAR
utilizes multiple layered laser beams to generate several 2D cross-sectional areas, which are then
combined to construct a 3D representation of the surroundings.
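As a rough illustration of the time-of-flight principle that pulsed LiDAR relies on, the distance to a reflector is half the measured round-trip time multiplied by the propagation speed (the speed of light). The helper below is a generic sketch, not a model of any particular sensor.

```cpp
#include <iostream>

// Time-of-flight ranging sketch: range = (c * t_round_trip) / 2.
// Illustrative only; real LiDAR and radar units compute range internally.
constexpr double kSpeedOfLight = 299792458.0;  // m/s

double rangeFromRoundTrip(double roundTripSeconds) {
    return kSpeedOfLight * roundTripSeconds / 2.0;
}

int main() {
    // A pulse returning after roughly 667 nanoseconds corresponds to about 100 m.
    std::cout << rangeFromRoundTrip(667e-9) << " m\n";
    return 0;
}
```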
While both radar and LiDAR offer long-range detection capabilities, LiDAR holds several
advantages over radar in the field of autonomous driving. Laser light provides more precise illumination
of objects compared with radar waves. Unlike radar sensors with wide cone-shaped wavefronts, LiDAR
beams focus on specific areas within their direction. Additionally, LiDAR signals tend to have less noise
compared with radar signals, as radar can generate noise due to unwanted reflections. LiDAR, in
contrast, reflects less and operates using active laser signals that are independent of external
light sources.
It is worth noting that 2D LiDAR systems can only gather distance information within a 2D plane,
which means they may not detect or measure obstacles that are higher or lower than that plane. In the
context of this project, relying on 2D LiDAR alone, the Baja vehicle would simply choose to navigate
around an object, since it could not identify the specific object obstructing its path.
LiDAR sensors rely on the rotation of an inner reflective platform to emit laser beams at different angles,
resulting in a scanning frequency limited by the speed of the rotation mechanism. Additionally, LiDAR
systems can struggle with transparent or reflective objects, such as windows and mirrors; however,
these concerns are largely irrelevant for off-road vehicles.
Point clouds are three-dimensional arrays of points that represent the spatial distribution of objects and
surfaces in each environment. They provide a detailed representation of the surroundings and are
commonly used in various applications, including autonomous driving, mapping, and object recognition.
The LiDAR system provides a real-time, high-resolution view of its surroundings, capturing point
cloud data up to a range of 328 ft (100 m). Unlike 2D LiDAR systems that are limited to points on the
same mounting plane, the LiDAR’s multiple vertical planes offer a comprehensive understanding of
objects in the environment. This rich point cloud data is instrumental in object detection, environment
perception, and risk mitigation.
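To make this concrete, the sketch below converts polar LiDAR returns (range, azimuth, elevation layer) into the Cartesian points of a cloud. The layer angles, sweep, and ranges are placeholder values, not the geometry of the sensor actually used.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Sketch: convert polar LiDAR returns (range, azimuth, elevation) into 3D points.
// Angles and layer spacing are illustrative, not the real sensor's geometry.
struct Point3D { double x, y, z; };

Point3D toCartesian(double rangeM, double azimuthRad, double elevationRad) {
    const double horiz = rangeM * std::cos(elevationRad);
    return { horiz * std::cos(azimuthRad),      // forward
             horiz * std::sin(azimuthRad),      // left
             rangeM * std::sin(elevationRad) }; // up
}

int main() {
    std::vector<Point3D> cloud;
    const double layers[] = {-0.028, -0.009, 0.009, 0.028};   // assumed layer angles (rad)
    for (double elev : layers)
        for (double az = -0.5; az <= 0.5; az += 0.01)          // a narrow forward sweep
            cloud.push_back(toCartesian(25.0, az, elev));       // fake 25 m returns
    std::cout << "points in cloud: " << cloud.size() << "\n";
    return 0;
}
```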
Using laser beams in four stacked planes, the LiDAR measures the distance and direction of objects
relative to its position. The LD-MRS, equipped with the LiDAR, can track objects without the need for
additional hardware or software. It provides essential information such as position, speed, size, direction
of movement, and age (scans, time) for objects within its field of view. Remarkably, the LD-MRS has
the capability to track up to 128 objects simultaneously, with real-time processing.
Figure 6 illustrates the process of object detection with the LiDAR. A laser pulse is transmitted from
the LD-MRS to the object, and the reflected pulse is collected and processed by the LiDAR. This
information, along with other processed data, is transmitted over Ethernet for further analysis and
decision-making. The LD-MRS incorporates multi-layer technology, enabling compensation for pitch
angles. This means that even when attached to a vehicle like the Baja vehicle, the LiDAR can accurately
detect objects even during braking or acceleration maneuvers.
In addition to the LiDAR, GPS (Global Positioning System) plays a vital role in this project. While
the Stereo Vision Camera, LiDAR, and Radar contribute to the Baja vehicle’s perception, accurate
localization is crucial for achieving Level 3 autonomy. GPS provides essential location information,
enabling the vehicle to determine its position and navigate within its environment accurately.
Combining perception sensors like the Stereo Vision Camera, LiDAR, Radar, and GPS localization is
essential for creating a robust and reliable autonomous driving system at Level 3 (see Figure 6).
To address the intermittent GPS signal in off-road environments, the team incorporated motion
sensors, specifically an inertial measurement unit (IMU). An IMU combines readings from a gyroscope
and an accelerometer to estimate a six-degree-of-freedom pose comprising position (three dimensions)
and orientation (three dimensions). Motion sensors, such as IMUs, are valuable for measuring acceleration, velocity, and
position in both translation and rotation.
Using an IMU alongside GPS helps mitigate velocity and position drift, which can occur due to small
errors or drift in acceleration measurements during the integration process. IMUs excel at rapidly
calculating poses and serve as interoceptive sensors to enhance feedback loops within the system.
Microelectromechanical systems (MEMS) are employed in IMU sensors to measure pose through electrical
and mechanical means, compactly packaged for practical use. The current trend in IMU sensors
involves fusing data from multiple separate MEMS sensors to mitigate the limitations of individual
sensors. An attitude and heading reference system (AHRS) integrates data from a three-axis gyroscope,
a three-axis accelerometer, and a three-axis magnetometer.
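As an illustration of the kind of gyroscope/accelerometer fusion an AHRS performs, the sketch below blends an integrated gyroscope rate with an accelerometer-derived tilt angle using a simple complementary filter. The gain, sample period, and sample data are assumed values, not those of the IMU used on the vehicle.

```cpp
#include <cmath>
#include <iostream>

// Complementary-filter sketch for one tilt axis (e.g. pitch).
// alpha, dt and the sample data are assumptions for illustration only.
double fusePitch(double previousPitchRad, double gyroRateRadPerS,
                 double accelX, double accelZ, double dt, double alpha = 0.98) {
    // Short-term: integrate the gyro rate (smooth but drifts over time).
    const double gyroEstimate = previousPitchRad + gyroRateRadPerS * dt;
    // Long-term: tilt from the gravity direction (noisy but drift-free).
    const double accelEstimate = std::atan2(accelX, accelZ);
    return alpha * gyroEstimate + (1.0 - alpha) * accelEstimate;
}

int main() {
    double pitch = 0.0;
    const double dt = 0.01;                       // assumed 100 Hz update rate
    for (int i = 0; i < 100; ++i)                 // one second of fake samples
        pitch = fusePitch(pitch, 0.05, 0.5, 9.7, dt);
    std::cout << "fused pitch: " << pitch << " rad\n";
    return 0;
}
```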
In addition to the primary detection system comprising a camera and LiDAR, the Baja vehicle
requires additional sensors to sense obstacles effectively. Ultrasonic sensors are among the additional
sensors employed in the system. Ultrasonic sensors utilize sound waves to detect nearby objects and
provide crucial information for obstacle detection, ensuring the vehicle can navigate safely through
its environment.
By combining the capabilities of the camera, LiDAR, IMU, and ultrasonic sensors, the AV is
equipped with a comprehensive perception system that allows it to perceive its surroundings accurately
and make informed decisions while navigating challenging off-road environments.
To account for potential obstacles that may appear on the sides of the vehicle, outside the field of
view of the camera or LiDAR, additional side sensors are incorporated. These side sensors serve a more
general purpose and do not require the same level of detailed information as the cameras or LiDAR.
Multiple side sensors are employed to provide the vehicle with a comprehensive awareness of its
surroundings.
For rear sensors, ultrasonic sensors are chosen due to their affordability, availability in bulk, and
ability to provide proximity distance measurements. Ultrasonic sensors utilize sound waves to
determine the distance to an object. They emit ultrasonic sound waves at specific frequencies, measure
the time it takes for the waves to hit an object and return to the sensor, and calculate the distance using
the formula: D = (Vsound * time) / 2.
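To ground the formula, here is a minimal C++ sketch of the calculation; the speed of sound used (roughly 343 m/s in air at 20 °C) and the example echo time are assumptions for illustration, not measured values from the project's sensors.

```cpp
#include <iostream>

// Ultrasonic ranging sketch: D = (v_sound * t) / 2, where t is the round-trip echo time.
// 343 m/s assumes air at roughly 20 degrees C; the next paragraph notes why this varies.
double distanceMeters(double echoRoundTripSeconds, double speedOfSoundMps = 343.0) {
    return speedOfSoundMps * echoRoundTripSeconds / 2.0;
}

int main() {
    // An echo returning after about 5.8 ms corresponds to roughly 1 m.
    std::cout << distanceMeters(5.8e-3) << " m\n";
    return 0;
}
```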
However, ultrasonic sensors are influenced by ambient temperature, which affects the speed of sound.
Interference between multiple ultrasonic sensors can alter the waveform and potentially invalidate
collected data. Additionally, certain materials can absorb the sound waves, further impacting
sensor performance.
To address these challenges, multiple readings are taken and averaged to obtain a more accurate
result. Values outside a certain threshold can be filtered out. It is important to note that ultrasonic sensors
are not the primary location sensor on the vehicle, and their purpose is to provide an approximation of
the area around the sides and back of the vehicle rather than extremely precise data.
To manage multiple sensors, the detection area of each sensor is carefully examined, and their
placement is optimized to minimize interference. Alternatively, the sensors can be programmed to ping
distance measurements during separate time intervals to avoid overlapping waves.
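The mitigation steps described above can be sketched as follows: the speed of sound is adjusted for ambient temperature with the standard linear approximation, several readings per sensor are collected, outliers beyond a threshold around the median are discarded, and the remainder are averaged. The threshold, sample count, and round-robin ping order are illustrative assumptions rather than the project's tuned values.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Sketch of the mitigation steps: temperature-compensated speed of sound,
// median-based outlier rejection, and averaging. Values are illustrative only.
double speedOfSound(double airTempC) {
    return 331.3 + 0.606 * airTempC;   // m/s, standard linear approximation
}

double filteredDistance(std::vector<double> samplesM, double thresholdM = 0.30) {
    std::sort(samplesM.begin(), samplesM.end());
    const double median = samplesM[samplesM.size() / 2];
    double sum = 0.0;
    int kept = 0;
    for (double s : samplesM)
        if (std::fabs(s - median) <= thresholdM) { sum += s; ++kept; }
    return kept > 0 ? sum / kept : median;
}

int main() {
    std::cout << "speed of sound at 35 C: " << speedOfSound(35.0) << " m/s\n";
    // Five pings from one sensor; the 4.9 m reading is cross-talk and is rejected.
    // Pinging each sensor in its own time slot (round-robin) avoids such interference.
    std::cout << filteredDistance({1.02, 0.98, 1.05, 4.90, 1.00}) << " m\n";
    return 0;
}
```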
The range of the ultrasonic sensors determines the maximum distance at which they can detect
objects. However, their primary function is to detect potential collisions from the sides. The cone-
shaped detection pattern of the sensor influences the initial size and final shape of the waveform emitted
by the sensor.
By incorporating side and rear ultrasonic sensors, the AV enhances its ability to detect objects outside
the field of view of the primary sensors, enabling safer navigation and obstacle avoidance.
The waveform of the ultrasonic sensor evolves as it propagates to its maximum distance, with the
sensitivity of the sensor determined by the maximum range divided by its resolution. In the context of
this project, a highly sensitive system is not necessary as the objects being sensed are generally large.
Therefore, an inch-level sensitivity increment is more than sufficient.
In Level 3 autonomous driving, data fusion plays a crucial role due to the integration of data from
multiple sensors. However, data fusion requires substantial computing resources, particularly in the case
of computer vision tasks. To address this, the team decided to use a Computing Unit (CU) for handling
larger tasks and smaller embedded processors to handle specific tasks as directed by the CU. This
approach helps optimize processing speed and power management, as lower level functions can be
offloaded from the CU to smaller embedded processors, reducing the workload of the CU. The CU acts
as the system’s central processing unit, acquiring real-world feedback and performing computations on
sensor data.
For lower-level tasks and quick prototyping, the Arduino microcontroller platform has been
instrumental. Its integrated development environment and libraries handle low-level register access,
eliminating the need to search through board documentation for the appropriate registers. This
facilitates the rapid assembly of motor control and sensor reading circuitry. Furthermore, these
microcontrollers are low-power, providing an advantage over microprocessors. To make an
informed decision in selecting a CU, a thorough comparison between microcontrollers and
microprocessors is necessary.
By employing a combination of the CU, embedded processors, and microcontrollers, the software
system for Level 3 autonomous driving efficiently manages the processing and integration of sensor
data, facilitating real-time decision-making and control in the AV.
Both microcontrollers and microprocessors have a central processing unit and peripheral
components. Microcontrollers are commonly found in various devices such as digital cameras, washing
machines, and microwaves. They typically have limited memory and lower performance compared with
microprocessors, but their efficiency lies in their low power consumption.
Microprocessors, on the other hand, offer higher computing power comparable to standard
computers. They often run operating systems, have more memory capacity, and can handle demanding
computing tasks efficiently. Unlike microcontrollers that typically run a single program repeatedly,
microprocessors can handle multiple applications simultaneously, making them suitable for running
simulation programs and processing data concurrently.
Affordable and easily programmable microcontrollers, like the Arduino platform, are well-suited for
quick prototyping and seamless integration into larger projects like the one at hand. Microcontrollers
excel at controlling peripherals such as motors and actuators, enabling them to handle tasks such as
steering, braking, and acceleration. Meanwhile, the CU takes charge of data processing and
visualization. The Arduino can also be used to enable or disable the autonomous capability of the
vehicle. Additionally, the Arduino platform can implement proportional-integral-derivative (PID)
control, a feedback loop commonly used in robotic systems. This control loop continuously compares
the desired sensor value with the current sensor value and adjusts the output to reach the desired value
rapidly, smoothly, and accurately. Figure 7 illustrates the feedback loop employed in this context.
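To make the feedback loop concrete, a minimal Arduino-style PID sketch is shown below. The gains, the analog feedback pin, and the PWM output pin are hypothetical placeholders, not the wiring or tuning actually used on the Baja vehicle.

```cpp
#include <Arduino.h>

// Minimal PID sketch for one actuator (e.g. steering position).
// Pins and gains below are hypothetical, not the vehicle's real configuration.
const int FEEDBACK_PIN = A0;     // assumed potentiometer on the steering column
const int OUTPUT_PIN   = 9;      // assumed PWM pin driving the steering motor
const float KP = 2.0, KI = 0.1, KD = 0.05;

float setpoint = 512.0;          // desired sensor value (raw ADC counts)
float integral = 0.0, lastError = 0.0;
unsigned long lastMs = 0;

void setup() {
  pinMode(OUTPUT_PIN, OUTPUT);
  lastMs = millis();
}

void loop() {
  const unsigned long now = millis();
  const float dt = (now - lastMs) / 1000.0;
  if (dt < 0.01) return;                        // run the loop at roughly 100 Hz
  lastMs = now;

  const float error = setpoint - analogRead(FEEDBACK_PIN);
  integral += error * dt;
  const float derivative = (error - lastError) / dt;
  lastError = error;

  float output = KP * error + KI * integral + KD * derivative;
  output = constrain(output, -255, 255);        // clamp to the PWM range
  analogWrite(OUTPUT_PIN, (int)abs(output));    // magnitude only; direction pin omitted
}
```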
Enabling smooth acceleration, deceleration, steering, and power steering requires implementing a
coordinated motion strategy. Communication between the CU and the Arduino can be achieved through
serial messages, ensuring that even if the CU fails, the Baja vehicle can still be manually controlled
with the added advantage of power steering. The Arduino Uno, based on the ATmega328
microcontroller, will be utilized for this purpose.
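As an illustration of how such serial messages might look, the sketch below parses simple comma-separated commands arriving from the CU over the Arduino's serial port. The one-letter command codes, value ranges, and baud rate are assumptions made for this example, not the project's actual protocol.

```cpp
#include <Arduino.h>

// Sketch of a CU -> Arduino serial command link. The command codes, value
// ranges, and 115200 baud rate are assumptions, not the project's protocol.
void applySteering(int angleDeg)  { /* drive the steering actuator (omitted) */ }
void applyThrottle(int percent)   { /* drive the throttle actuator (omitted) */ }
void applyBrake(int percent)      { /* drive the brake actuator (omitted) */ }

void setup() {
  Serial.begin(115200);
}

void loop() {
  if (!Serial.available()) return;
  String line = Serial.readStringUntil('\n');   // e.g. "S,30" or "T,40"
  const int comma = line.indexOf(',');
  if (comma < 0) return;                        // malformed message, ignore
  const char cmd = line.charAt(0);
  const int value = line.substring(comma + 1).toInt();

  switch (cmd) {
    case 'S': applySteering(value); break;      // steering angle in degrees
    case 'T': applyThrottle(value); break;      // throttle percent
    case 'B': applyBrake(value);    break;      // brake percent
    default:  break;                            // unknown command, ignore
  }
  // If no valid messages arrive (e.g. CU failure), the sketch simply stops
  // issuing commands and the driver retains manual control with power steering.
}
```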
To achieve a high level of perception like human capabilities, the Baja vehicle will be equipped with
several sensors:
• A 3D LiDAR
• A 3D stereo camera
• GPS
• IMU
• RADAR
• Ultrasonic sensors
Integrating these sensors and actuators for Level 3 autonomy necessitates the development of a
software architecture that efficiently collects and interprets system data to activate the appropriate
actuators. The software system must be capable of gathering data from multiple mounted sensors and
consolidating it into a unified 3D point cloud. It should utilize visual odometry and positioning sensors
to accurately determine the vehicle’s real-time localization and utilize this information to generate an
occupancy grid. The occupancy grid helps the vehicle identify accessible and restricted areas. Moreover,
the software system should employ the occupancy grid to generate a trajectory and communicate the
planned path through commands, guiding the vehicle to follow the path and reach its desired destination.
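A hedged sketch of the occupancy-grid idea follows: points from the fused cloud are binned into a 2D grid around the vehicle, and any cell containing points above a height threshold is marked occupied. The grid size, resolution, and height threshold are illustrative assumptions, not the values used in the project.

```cpp
#include <iostream>
#include <vector>

// Occupancy-grid sketch: bin fused point-cloud points into a 2D grid around
// the vehicle. Resolution, extent, and height threshold are assumed values.
struct Point3D { double x, y, z; };          // metres, vehicle frame

class OccupancyGrid {
 public:
  OccupancyGrid(double sizeM, double resolutionM)
      : res_(resolutionM), cells_(static_cast<int>(sizeM / resolutionM)),
        grid_(cells_ * cells_, false) {}

  // Mark the cell under each point that is tall enough to count as an obstacle.
  void insert(const std::vector<Point3D>& cloud, double minObstacleHeightM = 0.25) {
    for (const auto& p : cloud) {
      if (p.z < minObstacleHeightM) continue;          // likely ground, skip
      const int ix = static_cast<int>(p.x / res_) + cells_ / 2;
      const int iy = static_cast<int>(p.y / res_) + cells_ / 2;
      if (ix >= 0 && ix < cells_ && iy >= 0 && iy < cells_)
        grid_[iy * cells_ + ix] = true;                // cell is occupied
    }
  }

  bool occupied(int ix, int iy) const { return grid_[iy * cells_ + ix]; }

 private:
  double res_;
  int cells_;
  std::vector<bool> grid_;
};

int main() {
  OccupancyGrid grid(40.0, 0.5);                       // 40 m x 40 m, 0.5 m cells
  grid.insert({{5.0, 0.0, 0.6}, {5.2, 0.1, 0.8}});     // a fake obstacle ahead
  std::cout << "cell ahead occupied: " << grid.occupied(50, 40) << "\n";
  return 0;
}
```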
In the realm of software development, the team also focused on sensor fusion, exploring how data
from each sensor would be combined and interpreted to enhance perception and decision-making.
The project team recognized the necessity of multi-sensor data fusion to achieve a comprehensive
and reliable understanding of the environment. By combining data from multiple sensors, a cohesive
and detailed world image can be created. Sensor fusion, which utilizes different sensor methods to
collect data about the same objects, enhances reliability compared with data fusion alone. However,
sensor data may contain uncertainties and noise, requiring appropriate handling before fusion.
This sensor fusion technique is crucial for tracking the vehicle’s true location and constructing a map
of the surrounding terrain and obstacles. Simultaneous Localization and Mapping (SLAM) is a method
employed to interpret the fused sensor data and estimate geometric features within a global reference
frame. It utilizes these features to estimate the robot's position. In this project, SLAM is combined with
GPS and IMU modules to correct for location drift caused by imperfections in visual data. This process
ensures safe navigation from point A to point B, avoiding collisions with real-world objects.
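As a simplified illustration of the drift-correction step, the sketch below blends a SLAM/odometry position estimate with an absolute GPS fix using a fixed-gain, Kalman-like update whenever a fix is available. The gain and example values are assumptions standing in for the project's actual estimator.

```cpp
#include <iostream>

// Sketch: correct a drifting SLAM/odometry position with an absolute GPS fix.
// The fixed gain is an assumption standing in for a properly tuned filter.
struct Position2D { double x, y; };

Position2D correctWithGps(Position2D slamEstimate, Position2D gpsFix,
                          bool gpsValid, double gain = 0.2) {
  if (!gpsValid) return slamEstimate;          // no fix: keep dead-reckoned pose
  return { slamEstimate.x + gain * (gpsFix.x - slamEstimate.x),
           slamEstimate.y + gain * (gpsFix.y - slamEstimate.y) };
}

int main() {
  Position2D slam{102.4, 48.1};                // drifted dead-reckoned estimate
  Position2D gps{100.0, 50.0};                 // noisy but drift-free fix
  Position2D fused = correctWithGps(slam, gps, true);
  std::cout << fused.x << ", " << fused.y << "\n";   // nudged toward the GPS fix
  return 0;
}
```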
At a lower level, the Arduino microcontroller platform has significantly aided project
development. It offers a user-friendly environment for rapid prototyping and seamless integration. The
Arduino Uno, specifically chosen for this project, enables power steering, controls the vehicle’s
steering, braking, and acceleration, and can activate or deactivate the autonomous capability.
Additionally, the Arduino is well-suited for implementing Proportional Integral Derivative control, a
widely used feedback loop in robotics, ensuring precise control and reaching desired values effectively.
In summary, sensor fusion, SLAM, and the utilization of the Arduino platform are integral
components of the software architecture, enabling comprehensive data acquisition, interpretation, and
actuation in the pursuit of autonomous navigation. As noted earlier, the Arduino platform's accessible
development environment, straightforward integration of motor control and sensor reading circuitry,
and low power consumption make it particularly well suited to this role.
In the context of AVs, the Arduino platform finds diverse applications. One such application is sensor
control, where Arduino boards can be utilized to manage sensors like LiDAR, radar, or camera sensors.
By programming an Arduino board, it becomes possible to receive input data from these sensors,
process it, and transmit the relevant information to the main computer.
Arduino boards are also effective in actuator control, providing the means to manage crucial
components such as steering, braking, and acceleration. By receiving signals from the vehicle's main
computer, Arduino boards can translate them into appropriate actions for the actuators, enabling precise
control over vehicle movements.
Data processing is another area where Arduino boards excel. They can perform tasks such as data
filtering and smoothing to enhance the quality of sensor data before transmitting it to the main computer.
Additionally, Arduino boards can handle real-time calculations and control decisions, contributing to
efficient and responsive AV operation.
Furthermore, Arduino boards can be employed for communication control within the vehicle system.
For instance, they can manage wireless communication modules, facilitating interaction between the
vehicle and external entities such as other vehicles or traffic infrastructure.
IPG CARMAKER
IPG Carmaker is a specialized software tool developed by IPG Automotive, a German company, to
facilitate the design, testing, and validation of ADAS, autonomous driving functions, and vehicle
dynamics simulations. This comprehensive simulation environment empowers automotive engineers
and researchers to create, verify, and enhance vehicle systems and components.
With IPG Carmaker, users can simulate diverse driving scenarios encompassing various road types,
weather conditions, and traffic situations. The software offers detailed modeling capabilities for
essential vehicle components like the powertrain, chassis, and suspension. Furthermore, it provides
provisions for incorporating sensors and actuators into the simulations, enabling comprehensive
evaluation of their performance.
One of the significant advantages of IPG Carmaker is its ability to replicate complex traffic scenarios
by modeling the behavior of other vehicles and pedestrians. This feature enhances the realism of the
simulations and allows for a thorough assessment of the vehicle's interaction with its surroundings.
The software's user interface, as shown in the provided image, presents a range of customizable input
options. Users can specify details such as car type, wheel type, and road characteristics, among others,
to tailor the simulation environment according to their requirements.
MODELLING OF VEHICLE
Figures 8 and 9 show the overall vehicle data set, in which values can be assigned to different fields
such as body type, powertrain (electric or combustion), wheel parameters, sensor mountings, and so on.
In the additional tab of the data set, special control models such as lateral and longitudinal control can
be specified.
MODELLING OF TRACK
Figure 10 illustrates the inclusion of various road types in the simulation environment, such as urban
streets, forested areas, uneven terrains, and uphill climbs. This feature enables comprehensive testing
of the vehicle across different terrains and facilitates data collection. It allows us to observe the vehicle's
performance and behavior in diverse scenarios, providing insights into areas where the vehicle may
deviate from user instructions or encounter challenges.
SIMULATION
Pedestrian Crossing
Upon approaching a pedestrian crossing, the Baja vehicle utilizes its array of sensors and cameras to
detect the presence of pedestrians and accurately identify the location of the crossing (see Figure 15).
It then adjusts its speed, gradually slowing down and eventually coming to a complete stop at a safe
distance from the pedestrians, ensuring their safety while crossing. The Baja vehicle cautiously proceeds
through the intersection only if the pedestrian crossing is clear, prioritizing the well-being of pedestrians
and avoiding any sudden or unpredictable maneuvers that may pose a risk to them. The Baja vehicle
remains stationary if pedestrians are still crossing, patiently waiting until the crossing is completely
clear before resuming its journey (see Figure 16).
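The stopping behaviour can be thought of as a simple speed-command rule: if the crossing is occupied, command the highest speed from which the vehicle can still stop a safe margin before it. The sketch below is an illustrative reconstruction, with assumed deceleration limit, safety margin, and speeds; it is not the IPG Carmaker manoeuvre definition itself.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

// Sketch of a crossing-handling speed rule. Deceleration limit and safety
// margin are assumed values for illustration only.
double commandedSpeed(double distToCrossingM, bool crossingOccupied,
                      double cruiseSpeedMps, double maxDecelMps2 = 2.5,
                      double safetyMarginM = 3.0) {
  if (!crossingOccupied) return cruiseSpeedMps;          // crossing clear: proceed
  const double stopDist = std::max(0.0, distToCrossingM - safetyMarginM);
  // v = sqrt(2 * a * d): the highest speed from which the vehicle can still stop in stopDist.
  return std::min(cruiseSpeedMps, std::sqrt(2.0 * maxDecelMps2 * stopDist));
}

int main() {
  for (double d : {40.0, 20.0, 10.0, 3.0})
    std::cout << d << " m out -> " << commandedSpeed(d, true, 8.0) << " m/s\n";
  return 0;
}
```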
Animal Crossing
When encountering animal crossings in rural areas, Baja vehicles employ the same approach as with
pedestrian crossings. The vehicle’s sensors and cameras detect the presence of animals and identify the
crossing location. It adjusts its speed, slowing down and stopping at a safe distance to allow the animals
to cross safely. The Baja vehicle proceeds cautiously through the crossing if it is clear, prioritizing the
well-being of the animals and avoiding any sudden or unsafe maneuvers. It remains stationary until the
animals have safely crossed the road, ensuring their safety throughout the process.
Signal Detection
When approaching a traffic signal, the Baja vehicle slows down and comes to a stop at a safe distance
from the signal. This is achieved by using the vehicle’s sensors and cameras to detect the signal and
adjust the vehicle’s speed accordingly (see Figure 17).
After coming to a stop at a traffic signal, the Baja vehicle utilizes its AI algorithms and mapping
technologies to interpret the signal and make a decision based on its destination and current traffic
conditions. Once the signal turns green, the vehicle accelerates smoothly and safely through the
intersection. It is programmed to take necessary precautions, including emergency braking or swerving
if needed, to prevent collisions and ensure the safety of both the passengers and other road users.
Vehicle Overtaking
When preparing for an overtaking maneuver, a Baja vehicle relies on its sensors and cameras to scan
the surrounding traffic, including vehicles in the target and adjacent lanes. By utilizing advanced AI
algorithms and mapping technologies, the vehicle analyzes the traffic patterns to determine the optimal
timing and path for the maneuver. It signals its intention to other vehicles, gradually increases speed
while maintaining a safe distance, and adjusts its trajectory to avoid collisions. Once the overtaking
maneuver is completed, the vehicle smoothly and safely returns to its original lane. This ensures a secure
and efficient overtaking process while prioritizing the safety of all road users (see Figure 20).
Lane Change
Lane changes are a common maneuver for AVs navigating through traffic. To execute a lane change,
the vehicle utilizes its sensors and cameras to scan the surrounding traffic, including the target lane,
adjacent lanes, and blind spots. By leveraging AI algorithms and mapping technologies, the vehicle
analyzes traffic patterns to determine the optimal timing and path for the maneuver. It signals its
intention to nearby vehicles and smoothly steers into the target lane while maintaining a safe distance
from others. Adjusting speed and trajectory, the vehicle carefully navigates traffic conditions to avoid
any potential collisions. This ensures a safe and efficient lane change while prioritizing the well-being
of all road users (see Figure 21).
RESULTS
To enable autonomous driving capabilities, the Baja vehicle underwent extensive analysis of its
acceleration, braking, and steering systems. This allowed for the efficient control of these mechanical
components. With these advancements, the vehicle was transformed into an autonomous system capable
of navigating a predefined track, avoiding obstacles along the way. The design, testing, and simulation
of the Baja vehicle were conducted using the IPG Carmaker software, ensuring a comprehensive
evaluation of its performance.
To enable Level 3 automation, several technical features were tested and simulated using software.
These features included pedestrian and animal crossing detection, traffic sign recognition, signal
interpretation, lane change assistance, and vehicle overtaking. The software incorporated sensors such as a
3D solid-state LiDAR, side-mounted radar sensors, and a front-facing stereo camera to successfully
implement these features. For software integration, the Arduino Mega 2560 microcontroller was chosen
due to its popularity, cost-effectiveness, and ease of integration compared with the Jetson Nano.
CONCLUSION
The Baja vehicle underwent a transformation into an autonomous system capable of navigating to a
specified destination while avoiding obstacles. Level 3 autonomous driving represents a significant
advancement in vehicle automation, enabling the vehicle to operate independently in certain scenarios
while requiring driver readiness to assume control if needed. This level of autonomy enhances safety
and efficiency, particularly on highways and in off-road situations with dedicated lanes and moderate
traffic. Nevertheless, the deployment of Level 3 autonomous driving technology presents notable
challenges, including technology reliability, cybersecurity risks, ethical considerations, and legal
complexities. Regulators are actively assessing the safety and effectiveness of these systems, urging
automakers to further enhance their technology to address these concerns.
Level 3 autonomous driving represents a significant step towards AVs, but there is still room for
progress to reach higher levels of automation, such as Level 4 and Level 5. These advanced levels would
enable vehicles to operate in a broader range of conditions without requiring driver intervention.
Achieving these higher levels of autonomy remains a goal for the continued development of AV
technology.
Acknowledgement
The authors wish to express their gratitude to Dr. Suhas Mohite and Dr. Nagesh Chougule, Dean and
HoD of Mechanical Engineering Department of COEP Tech University respectively, for their
encouragement and permission to publish this work. The authors would also like to thank Mr.
Ramanathan S and Mr. Chinmaya Sharma of Automotive Test System for providing a free license and
training on IPG CARMAKER.
REFERENCES
1. Martínez-Díaz M, Soriguera F. Autonomous vehicles: theoretical and practical challenges. J Intell
Transp Syst. 2018; 22 (4): 283–301.
2. Parekh D, Poddar N, Rajpurkar A, Chahal M, Kumar N, Joshi GP et al. A review on autonomous
vehicles: progress, methods, and challenges. IEEE Access. 2022; 10: 144169–144189.
3. Jemmali MA, Mouftah HT. Improved control design for autonomous vehicles. Int J Info Technol
Control Autom Syst. 2022; 12 (1): 1–13.
4. Chen Y, Yu H, Zhang J, Cao D. Lane-exchanging driving strategy for autonomous vehicle via
trajectory prediction and model predictive control. Chin J Mech Eng. 2022; 35: 71. doi:
10.1186/s10033-022-00748-7.
5. Pushpakanth A, Dhavalikar MN. Development of steering control system for autonomous vehicle.
Int J Eng Adv Technol. 2022;11 (6): 1586–1592.
6. Ayala R, Khan Mohd T. Sensors in autonomous vehicles: A survey. IEEE Access. 2021; 9: 160369–
160382.
7. Azam S, Munir F, Sheri AM, Kim J, Jeon M. System, design and experimental validation of
autonomous vehicle in an unconstrained environment. IEEE Access. 2020; 8: 196857–196868.
8. Salman UH, Khalel MI, Abdullah AA. Development of autonomous vehicles. Int J Adv Comput
Sci Appl. 2018; 9 (9): 174–179.
9. Bratulescu RA, Vatasoiu RI, Sucic G, Mitroi SA, Vochin MC, Sachian MA. Object detection in
autonomous vehicles. Sensors. 2022; 22 (21): 375–380. doi: 10.1109/WPMC55625.2022.10014804.
10. Ghariblu H. Decision making of an autonomous vehicle in a freeway travelling. Int J Automot Eng.
2022; 13 (2): 280–286.
11. Orlický A, Mashko A, Mík J. Assessment of external interface of autonomous vehicles. Transport
means. 2021; 2021: 223–228.
12. Bautista-Camino P, Barranco-Gutiérrez AI, Cervantes I, Rodríguez-Licea M, Prado-Olivarez J,
Pérez-Pinal FJ. Local path planning for autonomous vehicles based on the natural behavior of the
biological action-perception motion. Sensors. 2022; 22 (5): 2512.
13. Yeong J, Velasco-Hernandez G, Barry J, Walsh J. Sensor and sensor fusion technology in
autonomous vehicles: a review. Sensors (Basel). 2021; 21 (6): 2140. doi: 10.3390/s21062140,
PMID 33803889.