2020 Aptiv whitepaper

The white paper discusses the critical role of radar in automotive sensing and perception systems, emphasizing its advantages in various weather conditions and cost-effectiveness for active safety features. It highlights the integration of machine learning to enhance radar performance and improve object detection and classification, ultimately supporting advanced driver-assistance systems. The document also outlines the importance of sensor fusion, combining data from multiple sensors to create a comprehensive environmental model for automated vehicles.

WHITE PAPER

Machine Learning Takes Automotive Radar Further
Some may think that the greatest challenge to automating vehicles is in developing
the algorithms that tell a vehicle where and how to drive – the planning and
policy. It is not. The greatest challenge lies in sensing and perception, in building
a perception system that can reliably create the most accurate and robust
environmental model for the planning and policy functions to act upon. In this way,
perception systems are fundamental to enabling higher levels of automation.

As OEMs look for the best perception systems to deploy in their vehicles to enable
lifesaving, active safety capabilities, radar offers a multitude of benefits, including low
system cost and resiliency through a wide range of weather and lighting conditions.

These attributes make radar an ideal foundation for building any vehicle’s environmental
model, and they become especially critical as vehicles move beyond basic warning
functions and into assistance and automation functions. Centralizing the intelligence
and applying machine learning in just the right way can turbocharge the performance,
ensuring that vehicles capitalize on radar’s strengths while fusing its data with that of
other sensing modalities. In doing so, OEMs can create the best canvas on which to
design and implement planning and policy functions that provide advanced features
and solve the most challenging corner cases.


Machine Learning and Radar

Active safety capabilities save lives and prevent accidents. For example, forward collision warning with automatic emergency braking reduces rear-end collisions by 50%, according to the Insurance Institute for Highway Safety. In a 2019 Consumer Reports survey, 57% of vehicle owners said an advanced driver-assistance feature in their vehicle had prevented them from getting into an accident. These solutions typically employ a forward-facing radar or camera – or ideally, both.

The challenge for OEMs in the coming years will be to bring more advanced active safety features to the market in a cost-effective way, allowing OEMs to offer the capabilities on more models and bring them to more consumers – while at the same time laying the groundwork for higher levels of automation, which will have to address the most difficult sensing challenges.

Success depends on two primary functions: the quality of the information provided by the sensors, and the ability of the compute to interpret that data. On the sensor side, radar-centric solutions provide an excellent foundation for this path. On the compute side, a machine learning system can use the data coming from radar sensors and combine it with data from other sources to create a very robust picture of a vehicle's environment.

BENEFITS OF RADAR

The main sensors in use today on vehicles are radar and cameras, with ultrasonics playing a role in short distances at low speeds and lidar used in autonomous driving.

Part of the reason radar is widely used is that it can reliably indicate how far away an object is. Typical long-range automotive radars can provide range measurements on objects that are as much as 300 meters to 500 meters away. Cameras, by contrast, have to try to estimate how far away an object is based on the size of the object in the camera's image and other factors. Even leveraging a stereoscopic approach, this can be challenging. Further, resolution becomes an issue, as a single pixel in a camera image is very broad at long range, making it harder for a camera to discern those objects. Focusing optics can help, but they limit the field of view, leading to a challenging compromise typical of camera-based perception systems.

At the same time, radar makes inherent measurements of relative speed, so while it is providing a range measurement, it can also tell how quickly something is moving toward the vehicle or away from it. Cameras and lidars may need to take multiple images over time to estimate relative speed.
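Radar's inherent relative-speed measurement follows from the two-way Doppler relation. As a minimal sketch (using the 77 GHz carrier common in automotive radar; the numbers are illustrative, not taken from a specific Aptiv sensor):

```python
# Sketch: recovering relative speed from a radar Doppler shift.
# Standard two-way Doppler relation; numbers are illustrative only.

C = 299_792_458.0      # speed of light, m/s
F_CARRIER = 77e9       # common automotive radar carrier frequency, Hz

def doppler_shift(relative_speed_mps: float) -> float:
    """Two-way Doppler shift (Hz) for a target closing at relative_speed_mps."""
    return 2.0 * relative_speed_mps * F_CARRIER / C

def relative_speed(doppler_hz: float) -> float:
    """Invert the shift back to relative speed in m/s."""
    return doppler_hz * C / (2.0 * F_CARRIER)

shift = doppler_shift(30.0)  # a car closing at 30 m/s (108 km/h)
print(f"Doppler shift: {shift:.0f} Hz")   # ~15.4 kHz
print(f"Recovered speed: {relative_speed(shift):.1f} m/s")
```

A camera, lacking this direct frequency measurement, has to difference position estimates across frames to get the same quantity.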


Because radar uses radio waves instead of light to detect objects, it works well in rain, fog, snow and smoke. This stands in contrast to optical technologies such as cameras – or, in the future, lidar – which are generally susceptible to the same challenges as the human eye. Consider the last time you were blinded by direct sunlight while driving, or tried to see clearly through a windshield covered with dirt and grime. Optical sensors have the same challenges, but radars can still see well in those cases. And unlike cameras, radar does not need a high-contrast scene or illumination to sense well at night.

Figure 1. Radar can perceive its environment in a variety of weather and lighting conditions.

Radar also provides an OEM significant packaging flexibility, thanks to its ability to work when placed behind opaque surfaces. Optical technologies need to be able to "see" the road, which requires them to be visible from the outside of a vehicle – preferably at a high point so they can have good line of sight and stay clear of road dirt and grime. Radar, by contrast, can be placed behind vehicle grilles, in bumpers, or otherwise hidden away, giving designers significant flexibility to focus on vehicle aesthetics.

Figure 2. Radar can be located behind the outer body of a vehicle.

WHERE TO USE OPTICAL SENSORS

Cameras are well suited for object classification. Only a camera can read street signs, and a camera is best at telling if an object is another vehicle, a pedestrian, a bicycle or even a dog. Each of those objects is going to behave differently, so the vehicle's system will be better able to anticipate movements if it knows exactly what it is looking at.

Lidar has drawn attention because it offers some unique strengths. It can take direct range measurements at high resolution and form a grid, where each grid cell has a particular distance associated with it. Because lidar operates at a much higher frequency, it has a much shorter wavelength than traditional radar – and that means it can provide higher angle resolution than radar, allowing lidar to identify the edges of objects more precisely.

One downside of lidar is that it needs to have a clean and clear surface in front of it to be effective, which of course can be especially problematic on a moving vehicle. One unfortunate yet well-placed beetle could render a vehicle sightless.

An equally significant issue is that lidar is a less mature technology than radar, which means it is much more expensive. The expense limits how widely lidar can be used in today's high-volume automotive marketplace.

To ensure a reliable and safe solution, a vehicle should have access to a combination of different sensing technologies and then use sensor fusion (see sidebar) to bring those inputs together to gain the best possible understanding of the environment. But even if that isn't possible – if the cameras are smudged and the lidar is having bug-splatter issues – the radars in the vehicle can deliver excellent information, especially when paired with the right machine learning algorithms.

SENSOR FUSION

Sensor fusion is the ability to bring together inputs from multiple radars, lidars and cameras to form a single
model or image of the environment around a vehicle. The resulting model is more accurate because it balances
the strengths of the different sensors. Vehicle systems can then use the information provided through sensor
fusion to support more-intelligent actions.

Of course, the more sensors on a vehicle, the more challenging fusion becomes, but also the more opportunity
exists to improve performance.

In the past, the processing power to analyze sensor data to determine and track objects had been packaged
with the cameras or radars. With Aptiv’s Satellite Architecture approach, the processing power is centralized
into a more powerful active safety domain controller, allowing for data to be collected from each sensor and
fused in the domain controller.
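One classical way fusion balances sensor strengths can be shown with a minimal sketch of inverse-variance weighting, a standard estimation technique (the noise variances below are invented for illustration and do not describe Aptiv's fusion algorithms): each sensor's estimate is weighted by how much it is trusted, and the fused estimate is more certain than either input alone.

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of two range
# estimates. Noise variances are illustrative assumptions, not real specs.

def fuse(estimates):
    """Fuse (value, variance) pairs into one weighted estimate.

    Returns (fused_value, fused_variance); the fused variance is always
    smaller than the smallest input variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Radar measures range precisely; a camera's range estimate is coarser.
radar_range = (49.8, 0.25)   # metres, variance in m^2
camera_range = (52.0, 4.0)

fused, var = fuse([radar_range, camera_range])
print(f"fused range: {fused:.2f} m, variance: {var:.3f} m^2")
```

The fused value sits close to the radar's precise measurement while still incorporating the camera's input, which mirrors how fusion lets each modality contribute where it is strongest.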

Sensor comparison chart:

• RADAR: long-range sensing; object movement; all-weather performance
• LIDAR: precise 3D object detection; range accuracy; free-space detection
• CAMERA: object classification; object angular position; scene context

The chart rates radar, lidar, camera and fusion on object detection, pedestrian detection, weather conditions, lighting conditions, dirt, velocity, distance accuracy, distance range, data density, classification and packaging (+ = strength, O = capability, — = weakness).


MACHINE LEARNING

Machine learning is a subset of artificial intelligence that refers to a system's ability to be trained through experience with different scenarios. As vehicles become more automated, developers can use machine learning to train systems to identify objects and to better understand their environment with less data.

One challenge machine learning helps address with radar is edge detection. Radar's longer wavelengths produce lower resolution that can lead to under-resolved targets, making it difficult to tell where a target's edges are. When that happens, it becomes challenging to interpret the data and resolve the scene. Engineers are working on ways to improve the resolution of radar, such as moving up from the common 77 GHz frequency used in today's automotive applications to 120 GHz or higher, with a corresponding reduction in wavelength. That allows a much higher resolution for the same size sensor. Even with today's radars, however, machine learning can help to characterize different scenarios when the data is difficult to describe through standard algorithms.

Developers can present many examples of objects in a particular category to a machine learning system, and it can learn how signals are scattered by complex objects with many reflection points. It can take advantage of contextual information. And it can even learn from simultaneous data provided by cameras, lidars or HD maps to classify objects based on radar signals.

Further benefits are possible if we use machine learning judiciously. Instead of taking a brute-force approach and applying machine learning to all of the raw data provided by a radar, we can do some classical preprocessing and then apply machine learning just to those portions that make sense.

AN AUTOMOTIVE FIRST

Aptiv pioneered advanced driver-assistance systems (ADAS) technologies in 1999 with an adaptive cruise control system for the Jaguar XKR. Using a microwave radar in the front of the vehicle, the adaptive cruise control (ACC) system measured the distance and relative speed of the preceding vehicle and used throttle and braking to ensure that the Jaguar stayed 1 to 2 seconds behind it.

The technology won a PACE Award, but the radar was expensive, so the capability was aimed narrowly at luxury vehicles. Engineers joked that if you bought the radar, you got the car for free. Many generations of hardware later, the technology is smaller, lighter and less than one-tenth the cost. Radar has proven successful through decades of harsh use, and vehicles of all levels now rely on the technology to provide active safety features to consumers.
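The split between classical preprocessing and machine learning can be illustrated with a toy pipeline (a sketch under simplifying assumptions; real FMCW processing chains and adaptive detectors such as CFAR are considerably more involved): a 2-D FFT converts raw samples into a range-Doppler map, and a simple threshold isolates the regions of interest that would be handed to a learned model.

```python
import numpy as np

# Toy sketch of classical preprocessing ahead of machine learning:
# 1) 2-D FFT of raw samples -> range-Doppler map
# 2) threshold -> regions of interest
# Only the ROI cells would be passed to a learned model downstream.

rng = np.random.default_rng(0)

n_chirps, n_samples = 64, 128
raw = 0.1 * rng.standard_normal((n_chirps, n_samples))  # noise floor

# Inject one synthetic target: a sinusoid whose frequencies map to
# range bin 20 and Doppler bin 10 (illustrative, uncalibrated units).
t_fast = np.arange(n_samples)
t_slow = np.arange(n_chirps)[:, None]
raw += np.cos(2 * np.pi * (20 * t_fast / n_samples + 10 * t_slow / n_chirps))

# Classical step: range FFT (fast time) and Doppler FFT (slow time).
rd_map = np.abs(np.fft.fft2(raw))

# Isolate regions of interest with a fixed threshold (real systems use
# adaptive detectors such as CFAR).
threshold = 10 * rd_map.mean()
rois = np.argwhere(rd_map > threshold)
print("ROI cells (doppler_bin, range_bin):", rois.tolist())
```

Because the input is real-valued, the target shows up as a peak and its mirror image; only those few cells, rather than all 8,192, would need to be examined by the downstream classifier.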


Many automotive radars utilize an array of antennas to measure angle. In classical radar signal processing, the digitized signals from each antenna are converted to range and speed. The signals are compared across the antenna array to make angle measurements. An example of preprocessing would be to use classical signal processing to isolate regions of interest, to focus on objects with certain ranges and speeds. The signals from each antenna with a common range and speed can then be used to train a system.

Common radars can utilize up to 12 antennas, and five or more radars can be employed on a single vehicle. Those antennas allow digital beamforming, where the signals from each individual antenna are digitized and then combined digitally. The result is that the radars sample the signals one time and then form beams in as many different directions as necessary. By looking across these arrays and analyzing the places where the radars overlap, the system can deduce the angles of different objects.

This kind of analysis gives the system a rich basis of information to feed into a neural network, which in turn can apply machine learning to produce an even clearer picture of the scene. Without this interim step, an AI system would have to determine the scene from the raw digitized signals themselves in real time, which means it would need to be extremely powerful and therefore more expensive and resource-intensive, and it would require long training sequences to figure out what to make of the data. Plus, such a system would be difficult to troubleshoot – if the vehicle detected an object that was not there, for example, it could be difficult to figure out where the processing went wrong. Combining classical processing with machine learning can provide some orthogonality in the data processing, which increases the robustness of the system.

While the data provided by a radar is more complex than what comes in from vision systems – providing range and range rate in addition to location of objects – it is also quite valuable. It is well worth the effort to intelligently sift through the data to extract meaning. Aptiv's 20-year history of working with automotive radar – we were the first to put a radar in a Jaguar in 1999 to enable adaptive cruise control – has given us the expertise needed to pull out the relevant data in the most efficient way.

COST AND POWER ADVANTAGE

Emerging architectures have satellite radars distributed throughout a vehicle, connected via Ethernet to a central system-on-chip with a machine-learning accelerator. Aptiv is using this kind of Satellite Architecture to process data from five radars or more and keep costs down. The approach is highly data efficient, and the machine-learning models can run on processors that cost less and consume less power than alternatives.

For example, an implementation that processes data from six short-range radars would use about 1W, whereas an implementation processing data from six cameras could consume 10W to 15W, and a high-end graphical processing unit consumes around 100W.

In another example, machine learning can glean information on range and free-space detection from radar-generated data to deliver results that are close to lidar, but at radar's lower cost.

Potential savings come from not having to build parallel implementations of processing, RAM and communications in every sensor, and from the efficiencies gained from centralizing software in a domain controller. The lower cost means that even standard or entry-level vehicles can be equipped with this lifesaving technology.
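The digital beamforming idea can be sketched for an idealized uniform linear array with half-wavelength spacing (the 12-antenna count comes from the text; the geometry and noise-free single-snapshot setup are simplifying assumptions): the same digitized snapshot is steered toward every candidate angle, and the strongest beam response reveals the target's direction.

```python
import numpy as np

# Digital beamforming sketch: one sampled snapshot across a 12-antenna
# uniform linear array (half-wavelength spacing) is steered digitally in
# many directions at once. Idealized geometry; illustrative only.

n_ant = 12
true_angle_deg = 20.0

# Phase progression across the array for a plane wave from true_angle.
k = np.pi * np.sin(np.radians(true_angle_deg))  # 2*pi*d/lambda, d = lambda/2
snapshot = np.exp(1j * k * np.arange(n_ant))    # one complex sample per antenna

# Form beams toward every candidate angle from the SAME snapshot.
angles = np.linspace(-90, 90, 361)
steering = np.exp(-1j * np.pi
                  * np.sin(np.radians(angles))[:, None] * np.arange(n_ant))
beam_power = np.abs(steering @ snapshot) ** 2

estimated = angles[np.argmax(beam_power)]
print(f"estimated angle: {estimated:.1f} deg")
```

This is the "sample once, form beams in as many directions as necessary" property: the beams are products of the stored snapshot, so adding more look directions costs only computation, not another measurement.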


CHALLENGING SCENARIOS

There are many scenarios that human drivers encounter every day that do not lend themselves to easy
solutions when it comes to advanced driver assistance systems. If there is an object in the road, is it safe
to drive over? How should the vehicle adjust its driving if an adjacent truck creates a blind spot? Machine
learning coupled with radar can address these and many more concerns. Here are a few examples.

Debris in the road

Small objects or debris in the road can pose a challenge, particularly at high speeds. Radar with machine learning has been shown to improve detection range by more than 50%, enabling the system to track small objects at 200 meters, which allows plenty of time for the vehicle to either change lanes or come to a safe stop.

Objects that are safe to drive over

Human drivers often take for granted their ability to gauge whether an object on the road is something they could drive over. They do not estimate that the object is 5 cm or 10 cm high. They tend to act on intuition – a feeling, perhaps, based on their past experience. A machine learning system can also be trained with objects that are safe to drive over and those that are not. Programmers can create a portion of the overall processing chain focused on this question as a special subset of object classification – "over-drivable," yes or no? – and pass the answer on to software that can take action if needed.
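A toy version of such an "over-drivable" stage might look like the following sketch, which trains a one-feature logistic regression on synthetic height estimates (the feature, the height threshold and the data are invented for illustration; a real system would learn from labeled radar returns, not a single hand-picked measurement):

```python
import numpy as np

# Toy sketch of a dedicated "over-drivable?" stage in the processing
# chain. Features and data are synthetic inventions for illustration.

rng = np.random.default_rng(1)

# Synthetic training set: feature = estimated obstacle height in metres.
# Low objects (manhole covers, flattened boxes) are over-drivable.
heights = rng.uniform(0.0, 0.5, 200)
labels = (heights < 0.12).astype(float)   # 1 = safe to drive over

# One-feature logistic regression trained with plain gradient descent.
w, b = 0.0, 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(w * heights + b)))
    w -= 5.0 * np.mean((p - labels) * heights)
    b -= 5.0 * np.mean(p - labels)

def over_drivable(height_m: float) -> bool:
    """Binary verdict to pass downstream to planning software."""
    return 1.0 / (1.0 + np.exp(-(w * height_m + b))) > 0.5

print(over_drivable(0.05), over_drivable(0.30))
```

The point of the sketch is the structure, not the model: the question is carved out as its own small classification problem, and only the yes/no answer flows on to the software that decides whether to act.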

Vulnerable road users

Vulnerable road users include bicyclists and motorcyclists. This has been a particular area of focus for
regulatory and rating agencies because these users have little protection in the event of a collision, and
they can be more difficult to identify than other vehicles. Machine learning reduces misses by 70%
compared to classical radar signal processing, and sensor fusion with other sensing modalities can
improve detection further.


Pedestrians

Detecting pedestrians can present unique challenges for any kind of sensor, particularly in a cluttered urban environment where many pedestrians could be crossing a street and walking in different directions. By using all dimensions of radar data as described earlier, however, advanced machine learning techniques can help the vehicle see pedestrians in the cluttered environment. They can even spot pedestrians behind a parked car or other obstruction that may hide them from view.

Figure panels: occluded pedestrians; pedestrian near path and parked vehicle; pedestrian alert during rear parking maneuver.


Low-reflectivity road boundaries

Some road boundaries, such as flat concrete walls seen from acute angles, do not reflect radar strongly.
Machine learning can use robust segmentation and signal processing across range, Doppler and antenna
response over time to figure out where those boundaries are.

Blind spot

Sensor occlusion – a blind spot created by another object, like a large truck – is one of the biggest
challenges of automated driving. It is less a problem of failing to detect occluded objects than it is the
fact that today’s systems are not fully aware of their blind spots. Human drivers have learned to account
for unseen possibilities and guard against threats that may be hidden. Aptiv’s perception approach
creates this awareness and allows upstream functions to act defensively, as a human driver would.


Stopped car in lane

Machine learning can help provide accurate object detection and tracking, including object boundaries and robust separation. With advanced processing methods, we can decrease position error and object-heading error by more than 50%, which means that the vehicle is better able to tell when another vehicle is stopped in its lane.

360-degree sensing

Aptiv’s sensor fusion approach brings together inputs from various sensors around a vehicle. If the
vehicle is equipped with enough sensors, this means it can have a 360-degree view of its environment,
and that complete picture will help the vehicle make better decisions. Machine learning helps the system
identify objects within that scope, classifying them as cars, trucks, motorcycles, bicycles, pedestrians and
so forth. It can determine their heading. And it can help separate and identify stationary or slow-moving
objects.


Tracking inside a tunnel

Machine learning can also help a vehicle understand when it is inside a tunnel. Tunnels have historically
been a challenging environment for radar. The tunnel walls provide a reflective surface, which can result in
a very high number of detections that can overwhelm a radar’s capacity to process targets. Also, these
reflections can come from high elevation angles, which can make stationary targets difficult to identify
as such. Further, tunnels will often have fans to help clear stagnant air, and the spinning blades
of the fan could confuse a radar into thinking it is seeing a moving object. All of these issues can be
mitigated by making adjustments to the radar processing when the vehicle is in a tunnel. By applying
machine learning to radar data processing, the system is able to filter out noise from positive detections
with much greater accuracy than classical methods have allowed. It can now better interpret radar returns
in tunnels and other closed environments, classify targets such as fans, and effectively solve radar’s
tunnel challenge.
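One of the adjustments described, suppressing ceiling reflections once the vehicle is believed to be in a tunnel, can be sketched as a simple elevation-angle filter (the threshold and the detection format are assumptions for illustration, not Aptiv's processing):

```python
# Minimal sketch of one tunnel mitigation: when the vehicle is believed
# to be in a tunnel, discard detections arriving from high elevation
# angles, which are likely ceiling reflections. Threshold and detection
# format are illustrative assumptions.

MAX_ELEVATION_DEG_IN_TUNNEL = 15.0

def filter_detections(detections, in_tunnel):
    """Each detection is (range_m, elevation_deg); drop ceiling echoes."""
    if not in_tunnel:
        return detections
    return [d for d in detections
            if abs(d[1]) <= MAX_ELEVATION_DEG_IN_TUNNEL]

detections = [(40.0, 2.0),    # vehicle ahead
              (12.0, 35.0),   # reflection off the tunnel ceiling
              (25.0, -3.0)]   # stationary object at road level
print(filter_detections(detections, in_tunnel=True))
```

A learned classifier would go further than a fixed angle cut, for example recognizing the characteristic Doppler signature of a spinning fan, but the filter shows where such tunnel-specific logic slots into the processing chain.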

THE ROAD FROM HERE

As OEMs look to bring active-safety capabilities to their full range of vehicles, they will need sensors that
are cost-effective and able to deliver data in challenging conditions, and the intelligence to get the most
useful information from the data. They can achieve that through machine learning and a combination of
sensors anchored by radar. Innovations such as Aptiv’s RACam can package those sensors – in this case,
radar and camera – into one compact unit.

Aptiv’s Satellite Architecture centralizes the intelligence that receives data from those sensors, improving
performance by keeping latency low and reducing sensor mass by up to 30%. OEMs can then develop
differentiating features for various levels of automated driving on top of this robust base of sensing and
perception technology, building from Level 1 automation to Level 2 and Level 2+.

Longer term, Aptiv’s Smart Vehicle Architecture enables the overall vision by structuring the electrical
and electronic architecture of a vehicle in a way that makes the most sense for its sensing and perception
needs, creating a path to Level 3 and Level 4 automation. In the meantime, OEMs can take important
steps today to help democratize active safety and ensure that everyone has access to these lifesaving
technologies.


ABOUT THE AUTHOR

Rick Searcy
Advanced Radar Systems Manager

Rick manages the development of advanced radar systems for Aptiv, a position he has
held since 2013. He has been involved in the development of every radar produced at
the company since 1994.

Rick is located in Kokomo, Indiana. He earned his master’s degree from the University of
Michigan, where he studied applied electromagnetics and digital signal processing.

LEARN MORE AT APTIV.COM/ADVANCED-SAFETY →
