Machine Learning and Radar
2020 Aptiv Whitepaper
As OEMs look for the best perception systems to deploy in their vehicles to enable
lifesaving, active safety capabilities, radar offers a multitude of benefits, including low
system cost and resiliency through a wide range of weather and lighting conditions.
These attributes make radar an ideal foundation for building any vehicle’s environmental
model, and they become especially critical as vehicles move beyond basic warning
functions and into assistance and automation functions. Centralizing the intelligence
and applying machine learning in just the right way can turbocharge performance,
ensuring that vehicles capitalize on radar’s strengths while fusing its data with that of
other sensing modalities. In doing so, OEMs can create the best canvas on which to
design and implement planning and policy functions that provide advanced features
and solve the most challenging corner cases.
Because radar uses radio waves instead of light to detect objects, it works well in rain, fog, snow and smoke. This stands in contrast to optical technologies such as cameras – or in the future, lidar – which are generally susceptible to the same challenges as the human eye. Consider the last time you were blinded by direct sunlight while driving, or tried to see clearly through a windshield covered with dirt and grime. Optical sensors have the same challenges, but radars can still see well in those cases. And unlike cameras, radar does not need a high-contrast scene or illumination to sense well at night.

A vehicle will be better able to anticipate movements if it knows exactly what it is looking at.

Lidar has drawn attention because it offers some unique strengths. It can take direct range measurements at high resolution and form a grid, where each grid cell has a particular distance associated with it. Because lidar operates at a much higher frequency, it has a much shorter wavelength than traditional radar – and that means it can provide higher angle resolution than radar, allowing lidar to identify the edges of objects more precisely.
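To make the wavelength comparison concrete, a quick back-of-the-envelope calculation can be done with the common rule of thumb that beamwidth in radians is roughly wavelength divided by aperture size. The sketch below assumes a 77 GHz radar, a 905 nm lidar, and illustrative aperture sizes; it is not a model of any particular sensor.

```python
# Rough comparison of angular resolution for radar and lidar.
# Rule of thumb: beamwidth (radians) ~ wavelength / aperture.
# The 77 GHz and 905 nm values are typical; the apertures are assumptions.
import math

C = 299_792_458.0  # speed of light, m/s

radar_wavelength = C / 77e9   # 77 GHz automotive radar -> about 3.9 mm
lidar_wavelength = 905e-9     # 905 nm near-infrared lidar

def beamwidth_deg(wavelength_m: float, aperture_m: float) -> float:
    """Approximate beamwidth in degrees from the wavelength/aperture rule."""
    return math.degrees(wavelength_m / aperture_m)

print(f"radar wavelength: {radar_wavelength * 1e3:.2f} mm")
print(f"lidar wavelength: {lidar_wavelength * 1e9:.0f} nm")

# Assume a 10 cm radar antenna aperture and a 2.5 cm lidar receive optic:
# the far shorter lidar wavelength yields a much narrower beam, i.e. finer
# angle resolution and crisper object edges.
print(f"radar beamwidth:  {beamwidth_deg(radar_wavelength, 0.10):.2f} degrees")
print(f"lidar beamwidth:  {beamwidth_deg(lidar_wavelength, 0.025):.5f} degrees")
```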
SENSOR FUSION
Sensor fusion is the ability to bring together inputs from multiple radars, lidars and cameras to form a single
model or image of the environment around a vehicle. The resulting model is more accurate because it balances
the strengths of the different sensors. Vehicle systems can then use the information provided through sensor
fusion to support more-intelligent actions.
Of course, the more sensors on a vehicle, the more challenging fusion becomes, but also the more opportunity
exists to improve performance.
In the past, the processing power to analyze sensor data to determine and track objects had been packaged
with the cameras or radars. With Aptiv’s Satellite Architecture approach, the processing power is centralized
into a more powerful active safety domain controller, allowing for data to be collected from each sensor and
fused in the domain controller.
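As a rough illustration of what the domain controller gains from receiving data from every sensor, the sketch below fuses position estimates for one object from two sensors using inverse-variance weighting. The sensor names and variances are invented for the example; a production fusion system would rely on tracking filters and temporal alignment well beyond this one-shot combination.

```python
# Minimal object-level fusion sketch: combine position estimates from several
# sensors with inverse-variance weighting. Illustrative only; real domain
# controllers use tracking filters (e.g., Kalman filters) over time.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # e.g. "front_radar", "front_camera" (example names)
    x: float          # longitudinal position, meters
    y: float          # lateral position, meters
    var: float        # assumed position variance for this sensor, m^2

def fuse(detections: list[Detection]) -> tuple[float, float]:
    """Inverse-variance weighted mean of the detections' positions."""
    weights = [1.0 / d.var for d in detections]
    total = sum(weights)
    x = sum(w * d.x for w, d in zip(weights, detections)) / total
    y = sum(w * d.y for w, d in zip(weights, detections)) / total
    return x, y

# The same object seen by a radar (good range, coarser angle) and a camera
# (good angle, coarser range); variances here are illustrative assumptions.
obs = [
    Detection("front_radar",  42.3, 1.9, var=0.10),
    Detection("front_camera", 41.1, 1.6, var=0.60),
]
print(fuse(obs))  # fused estimate dominated by the lower-variance radar
```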
[Figure: comparison of sensor strengths across weather conditions, lighting conditions, dirt, velocity, distance accuracy, distance range, data density, classification and packaging. Lidar: precise 3D object detection, range accuracy, free-space detection. Camera: object classification, object angular position, scene context.]
MACHINE LEARNING
Many automotive radars utilize an array of antennas to measure angle. In classical radar signal processing, the digitized signals from each antenna are converted to range and speed. The signals are compared across the antenna array to make angle measurements. An example of preprocessing would be to use classical signal processing to isolate regions of interest, to focus on objects with certain ranges and speeds. The signals from each antenna with a common range and speed can then be used to train a system.

While the data provided by a radar is more complex than what comes in from vision systems – providing range and range rate in addition to location of objects – it is also quite valuable. It is well worth the effort to intelligently sift through the data to extract meaning. Aptiv's 20-year history of working with automotive radar – we were the first to put a radar in a Jaguar in 1999 to enable adaptive cruise control – has given us the expertise needed to pull out the relevant data in the most efficient way.
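A minimal sketch of that preprocessing chain, using synthetic data: FFTs across fast time and slow time turn the per-antenna samples into a range-Doppler map, and the complex responses of all antennas at a strong range/speed cell form a feature vector that could feed a learned model. The array sizes are placeholders, not the parameters of any Aptiv radar.

```python
# Sketch of classical radar preprocessing feeding a learned model.
# Shapes and data are synthetic placeholders, not real radar parameters.
import numpy as np

rng = np.random.default_rng(0)

n_antennas, n_chirps, n_samples = 4, 64, 256
# Raw ADC cube: antennas x chirps (slow time) x samples (fast time)
raw = rng.standard_normal((n_antennas, n_chirps, n_samples)) \
    + 1j * rng.standard_normal((n_antennas, n_chirps, n_samples))

# Range FFT along fast time, Doppler FFT along slow time.
range_fft = np.fft.fft(raw, axis=2)
range_doppler = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)

# Non-coherent sum across antennas to find strong range/Doppler cells.
power = np.sum(np.abs(range_doppler) ** 2, axis=0)
doppler_bin, range_bin = np.unravel_index(np.argmax(power), power.shape)

# Region of interest around the peak: the per-antenna complex responses at a
# common range and speed, which could then be used to train or run a model
# that estimates angle or classifies the target.
roi = range_doppler[:, doppler_bin, range_bin]
features = np.concatenate([roi.real, roi.imag])
print(features.shape)  # (2 * n_antennas,) feature vector for one detection
```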
CHALLENGING SCENARIOS
There are many scenarios that human drivers encounter every day that do not lend themselves to easy
solutions when it comes to advanced driver assistance systems. If there is an object in the road, is it safe
to drive over? How should the vehicle adjust its driving if an adjacent truck creates a blind spot? Machine
learning coupled with radar can address these and many more concerns. Here are a few examples.
Vulnerable road users include bicyclists and motorcyclists. This has been a particular area of focus for
regulatory and rating agencies because these users have little protection in the event of a collision, and
they can be more difficult to identify than other vehicles. Machine learning reduces misses by 70%
compared to classical radar signal processing, and sensor fusion with other sensing modalities can
improve detection further.
Pedestrians
Detecting pedestrians can present unique challenges for any kind of sensor, particularly in a cluttered urban environment where many pedestrians may be crossing a street and walking in different directions. By using all dimensions of radar data as described earlier, however, advanced machine learning techniques can help the vehicle see pedestrians in that cluttered environment – even behind a parked car or other obstruction that may hide them from view.
[Figures: occluded pedestrians – pedestrian near path and parked vehicle; pedestrian alert during rear parking maneuver.]
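One intuition for why the extra radar dimensions help with pedestrians: a walking person's limbs spread the Doppler measurements within a single detection cluster, while a rigid object does not. The toy example below computes that spread; the threshold and values are invented, and a real system would learn from many such features rather than a single hand-tuned rule.

```python
# Illustrative feature: the Doppler spread of a cluster of radar detections.
# A walking pedestrian's limbs produce a spread of range rates around the
# torso's speed, while a rigid object returns a much tighter spread.
# The threshold and data below are invented for the example.
import statistics

def doppler_spread(range_rates: list[float]) -> float:
    """Standard deviation of range rates within one detection cluster (m/s)."""
    return statistics.pstdev(range_rates)

pedestrian_cluster = [1.1, 1.4, 0.6, 1.9, 0.3, 1.2]   # torso plus swinging limbs
sign_post_cluster = [0.02, 0.01, 0.03, 0.02]          # rigid, stationary

for name, cluster in [("pedestrian", pedestrian_cluster),
                      ("sign post", sign_post_cluster)]:
    spread = doppler_spread(cluster)
    label = "pedestrian-like" if spread > 0.3 else "rigid object"
    print(f"{name}: spread {spread:.2f} m/s -> {label}")
```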
Road boundaries
Some road boundaries, such as flat concrete walls seen from acute angles, do not reflect radar strongly. Machine learning can use robust segmentation and signal processing across range, Doppler and antenna response over time to figure out where those boundaries are.
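As a simplified picture of boundary estimation, the sketch below accumulates weak, stationary detections over several cycles and fits a smooth curve to them. The synthetic data and the quadratic model are assumptions standing in for the learned segmentation described above.

```python
# Fit a lateral boundary y(x) to stationary detections accumulated over time.
# Synthetic data; a polynomial fit stands in for the learned segmentation.
import numpy as np

rng = np.random.default_rng(2)

# Stationary, low-amplitude detections collected over several radar cycles,
# scattered along a concrete wall that curves gently away from the road.
x = rng.uniform(5.0, 80.0, size=120)                 # longitudinal, meters
y_true = 3.5 + 0.002 * (x - 5.0) ** 2                # lateral, meters
y = y_true + rng.normal(scale=0.4, size=x.size)      # measurement noise

# Least-squares quadratic fit recovers the boundary despite weak single returns.
coeffs = np.polyfit(x, y, deg=2)
boundary = np.poly1d(coeffs)
print(f"estimated lateral offset at 40 m: {boundary(40.0):.2f} m "
      f"(true {3.5 + 0.002 * 35.0 ** 2:.2f} m)")
```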
Blind spot
Sensor occlusion – a blind spot created by another object, like a large truck – is one of the biggest
challenges of automated driving. It is less a problem of failing to detect occluded objects than it is the
fact that today’s systems are not fully aware of their blind spots. Human drivers have learned to account
for unseen possibilities and guard against threats that may be hidden. Aptiv’s perception approach
creates this awareness and allows upstream functions to act defensively, as a human driver would.
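The geometric core of that awareness can be sketched directly: each detected object casts an angular shadow from the sensor's point of view, and directions inside that shadow should be treated as unknown rather than clear. The positions and object width below are invented for the illustration.

```python
# Sketch: compute the azimuth intervals occluded by detected objects, so that
# downstream planning can treat those sectors as "unknown" rather than "clear".
# Geometry only; positions and widths are illustrative.
import math

def shadow_interval(x: float, y: float, half_width: float) -> tuple[float, float]:
    """Azimuth interval (radians) occluded by an object at (x, y) with the
    given lateral half-width, as seen from the sensor at the origin."""
    center = math.atan2(y, x)
    dist = math.hypot(x, y)
    half_angle = math.atan2(half_width, dist)
    return (center - half_angle, center + half_angle)

def is_occluded(azimuth: float, shadows: list[tuple[float, float]]) -> bool:
    return any(lo <= azimuth <= hi for lo, hi in shadows)

# A large truck one lane to the right, 15 m ahead, about 1.3 m half-width.
shadows = [shadow_interval(15.0, -3.5, 1.3)]

# A direction that happens to fall behind the truck: the system cannot confirm
# it is clear, so it should behave defensively there.
ped_azimuth = math.atan2(-3.5, 22.0)
print("sector occluded:", is_occluded(ped_azimuth, shadows))
```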
Machine learning can help provide accurate object detection and tracking, including object boundaries
and robust separation. With advanced processing methods, we can decrease position error and object-
heading error by more than 50%, which means that the vehicle is better able to tell when another vehicle
is stopped in its lane.
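A simplified version of the stopped-vehicle decision, with an assumed lane half-width and thresholds: because radar measures range rate relative to the ego vehicle, a target whose over-the-ground speed is near zero and whose lateral offset lies within the lane can be flagged as stopped ahead.

```python
# Flag a stationary target in the ego lane from radar measurements.
# Thresholds and lane width are illustrative assumptions.
import math

LANE_HALF_WIDTH_M = 1.8
STATIONARY_SPEED_M_S = 0.5

def over_ground_speed(range_rate: float, azimuth: float, ego_speed: float) -> float:
    """Target speed along the line of sight after removing ego motion.
    range_rate: measured radial velocity (negative = closing), m/s
    azimuth: target bearing relative to the ego heading, radians
    ego_speed: ego vehicle speed, m/s
    """
    # A stationary target seen while driving forward shows a range rate of
    # roughly -ego_speed * cos(azimuth); adding that back isolates the
    # target's own motion along the line of sight.
    return range_rate + ego_speed * math.cos(azimuth)

def stopped_in_lane(range_m: float, azimuth: float, range_rate: float,
                    ego_speed: float) -> bool:
    lateral = range_m * math.sin(azimuth)
    stationary = abs(over_ground_speed(range_rate, azimuth, ego_speed)) < STATIONARY_SPEED_M_S
    return stationary and abs(lateral) < LANE_HALF_WIDTH_M

# Ego at 25 m/s; a target 60 m ahead, nearly centered, closing at about 25 m/s:
print(stopped_in_lane(range_m=60.0, azimuth=0.01, range_rate=-24.9, ego_speed=25.0))
```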
360-degree sensing
Aptiv’s sensor fusion approach brings together inputs from various sensors around a vehicle. If the
vehicle is equipped with enough sensors, this means it can have a 360-degree view of its environment,
and that complete picture will help the vehicle make better decisions. Machine learning helps the system
identify objects within that scope, classifying them as cars, trucks, motorcycles, bicycles, pedestrians and
so forth. It can determine their heading. And it can help separate and identify stationary or slow-moving
objects.
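Whether a given sensor set really yields a 360-degree view can be checked mechanically by taking the union of each sensor's azimuth field of view and looking for gaps. The mounting angles and fields of view below are a made-up layout, not a specific Aptiv configuration.

```python
# Check whether a set of sensor fields of view covers the full 360 degrees
# around the vehicle. The layout below is an invented example configuration.

def coverage_gaps(sensors: list[tuple[float, float]]) -> list[int]:
    """Return integer azimuth angles (degrees) not inside any sensor's field
    of view. Each sensor is (mounting_azimuth_deg, field_of_view_deg)."""
    gaps = []
    for angle in range(360):
        covered = False
        for mount, fov in sensors:
            diff = ((angle - mount + 180.0) % 360.0) - 180.0  # wrap to [-180, 180)
            if abs(diff) <= fov / 2.0:
                covered = True
                break
        if not covered:
            gaps.append(angle)
    return gaps

# Front sensor plus four corner radars (angles and fields of view assumed).
sensors = [
    (0.0, 90.0),      # forward-facing radar/camera
    (45.0, 150.0),    # front-left corner radar
    (-45.0, 150.0),   # front-right corner radar
    (135.0, 150.0),   # rear-left corner radar
    (-135.0, 150.0),  # rear-right corner radar
]
gaps = coverage_gaps(sensors)
print("full 360-degree coverage" if not gaps else f"gaps at {gaps[:5]} ...")
```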
Machine learning can also help a vehicle understand when it is inside a tunnel. Tunnels have historically
been a challenging environment for radar. The tunnel walls provide a reflective surface, which can result in
a very high number of detections that can overwhelm a radar’s capacity to process targets. Also, these
reflections can come from high elevation angles, which can make it difficult to recognize stationary targets as such. Further, tunnels often have fans to help clear stagnant air, and the spinning blades
of the fan could confuse a radar into thinking it is seeing a moving object. All of these issues can be
mitigated by making adjustments to the radar processing when the vehicle is in a tunnel. By applying
machine learning to radar data processing, the system is able to filter out noise from positive detections
with much greater accuracy than classical methods have allowed. It can now better interpret radar returns
in tunnels and other closed environments, classify targets such as fans, and effectively solve radar’s
tunnel challenge.
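Two of those mitigations can be sketched as simple rules on a detection list: discard returns from implausibly high elevation angles, and flag clusters whose Doppler spreads symmetrically around zero at a fixed range as likely fans. The thresholds and detection format are illustrative; in practice a learned classifier replaces these hand-tuned rules.

```python
# Simple rule-based version of two tunnel mitigations; a learned classifier
# replaces these hand-tuned thresholds in practice. Data format and thresholds
# are illustrative assumptions.
from dataclasses import dataclass

MAX_ELEVATION_DEG = 10.0      # steeper returns likely bounced off the ceiling
FAN_DOPPLER_SPREAD_M_S = 2.0  # symmetric Doppler spread at a fixed range

@dataclass
class Detection:
    range_m: float
    elevation_deg: float
    range_rate_m_s: float

def filter_ceiling(dets: list[Detection]) -> list[Detection]:
    """Drop detections arriving from implausibly high elevation angles."""
    return [d for d in dets if abs(d.elevation_deg) <= MAX_ELEVATION_DEG]

def looks_like_fan(cluster: list[Detection]) -> bool:
    """A fan shows both approaching and receding blade returns at one range."""
    rates = [d.range_rate_m_s for d in cluster]
    ranges = [d.range_m for d in cluster]
    return (max(ranges) - min(ranges) < 1.0
            and max(rates) > FAN_DOPPLER_SPREAD_M_S
            and min(rates) < -FAN_DOPPLER_SPREAD_M_S)

fan_cluster = [Detection(80.2, 8.0, 6.5), Detection(80.4, 8.2, -5.9),
               Detection(80.3, 7.9, 0.4)]
print(looks_like_fan(fan_cluster))  # True: not a vehicle to brake for
```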
THE ROAD FROM HERE
As OEMs look to bring active-safety capabilities to their full range of vehicles, they will need sensors that
are cost-effective and able to deliver data in challenging conditions, and the intelligence to get the most
useful information from the data. They can achieve that through machine learning and a combination of
sensors anchored by radar. Innovations such as Aptiv’s RACam can package those sensors – in this case,
radar and camera – into one compact unit.
Aptiv’s Satellite Architecture centralizes the intelligence that receives data from those sensors, improving
performance by keeping latency low and reducing sensor mass by up to 30%. OEMs can then develop
differentiating features for various levels of automated driving on top of this robust base of sensing and
perception technology, building from Level 1 automation to Level 2 and Level 2+.
Longer term, Aptiv’s Smart Vehicle Architecture enables the overall vision by structuring the electrical
and electronic architecture of a vehicle in a way that makes the most sense for its sensing and perception
needs, creating a path to Level 3 and Level 4 automation. In the meantime, OEMs can take important
steps today to help democratize active safety and ensure that everyone has access to these lifesaving
technologies.
Rick Searcy
Advanced Radar Systems Manager
Rick manages the development of advanced radar systems for Aptiv, a position he has
held since 2013. He has been involved in the development of every radar produced at
the company since 1994.
Rick is located in Kokomo, Indiana. He earned his master’s degree from the University of
Michigan, where he studied applied electromagnetics and digital signal processing.
LEARN MORE AT APTIV.COM/ADVANCED-SAFETY →