Advanced LiDAR Systems

Candidate: Vittorio Giarola (Student ID 1152954)
Supervisor: Prof. Arturo Lorenzoni
Co-supervisor: Prof. Wei Wei
To my Family
Abstract
One of the most interesting laser applications is LiDAR. This system measures the distance between a laser source and a target by hitting the target with pulsed light and then detecting the reflected ray with a suitable detector. Differences in laser return times and wavelengths can then be used to build digital 3D representations of the target. This is the basic idea behind this complex and fascinating instrument. The wish to understand better how it works, and to exploit modern applications in obstacle detection, made me curious and at the same time excited to work in this research field. The final aim of this work is to show that it is possible to design and manufacture a LiDAR prototype that combines good performance with a low price, typically less than 250 dollars. Moreover, this work sets out to develop an ecosystem able to display the LiDAR data in real time; in other words, an integrated environment.
Contents

1 Project Development
  1.1 Why Project Management
  1.2 The Procedure
2 Introduction to LiDAR Systems
3 Fundamentals on LiDAR
  3.1 The Sensor
    3.1.1 Technology and Operations
    3.1.2 Interface
  3.2 Arduino
    3.2.1 Programming
    3.2.2 Connection Between Arduino and the Sensor
    3.2.3 Power
    3.2.4 Inputs and Outputs
  3.3 Servo Motors
    3.3.1 How It Works
    3.3.2 Control Strategy
  3.4 Processing v3
    3.4.1 Export
4 LiDAR Platform v1
  4.1 Sensor Control Strategy
  4.2 Servo Control Strategy
5 LiDAR Platform v2
  5.1 Stepper Motors
    5.1.1 Micro Stepper Driver and Wiring Scheme
  5.2 Experiment Set-up
    5.2.1 Camera Specifications and Features
    5.2.2 Carbon Fiber Single Layer Pre-preg
    5.2.3 Equipment List
  5.3 Experimental Data
    5.3.1 Data Report
    5.3.2 Conclusions
  5.4 New Structure
  5.5 Results and Scan
6 Conclusions
Chapter 1
Project Development
Considering that this project is strongly technology and product oriented, we think it is best to follow a mindset based on project management. It is therefore important to walk the well-known path of PM, paying strong attention to the agile method. This choice comes from the need for a flexible, constantly adjustable development strategy that can deliver the project while treating time as a main driver of the process. These are projects for which the goal is clearly defined but the solution is not; their life cycles are usually called Iterative and Adaptive. We hope this short introduction has been sufficiently clear in explaining why we would like to proceed this way.
Each cycle involves not only task completion for newly defined functions and features but also further solution definition through function and feature discovery. It is the discovery part of the Adaptive PMLC (Project Management Life Cycle) models that sets them apart from other PMLC models.[1] An Adaptive PMLC model consists of a number of phases that are repeated in cycles, with a feedback loop after each cycle is completed. Each cycle proceeds based on an incomplete and limited understanding of the solution. Each cycle learns from the preceding cycles and plans the next cycle in an attempt to converge on an acceptable solution. An Adaptive PMLC model is missing both depth and breadth of the solution. Figure 1.1 depicts the Adaptive PMLC model for projects that meet the conditions of an incomplete solution due to missing features and functions. In the Adaptive PMLC model, as with other Agile approaches, the degree to which the solution is known might vary over a wide range, from knowing a lot to knowing not quite all. The less that is known about the solution, the more risk, uncertainty, and complexity will be present. To remove the uncertainty associated with these projects, the solution has to be discovered through a continuous change process from cycle to cycle. That change process is designed to create convergence to a complete solution. In the absence of that convergence, Adaptive projects are frequently cancelled and restarted in some other, more promising direction.
[Figure 1.1: the Adaptive PMLC model — Scope, Plan Iteration, Launch Iteration, Monitor & Control Iteration, Close Iteration, looping to the next iteration until Close Project]
If somebody asks how the project is organized, pointing out all the steps and describing its macroscopic phases, we can easily reply by explaining Fig. 1.3, which reports the milestones of the project. By continuously checking and monitoring the status of the project, taking into account the successful achievement of the milestones, we are able to understand how we are proceeding. Fig. 1.4 shows the timeline of the whole project. The first step is related to the acquisition of information. At this stage it is fundamental to study LiDAR technology by itself and then research the prior art. On this last topic, we will first focus our attention on market products and secondly on open source projects. The open source community offers a huge variety of examples and related projects from which we can take inspiration. Moreover, the DIY spirit fits perfectly our idea of keeping the cost of our platform low. In the second stage we expect to identify the components necessary to manufacture the LiDAR, at least in its first version. This version will be a learning trial from which we expect to gather experience and understand the limits and boundaries of the system. Once we have a precise idea of the equipment, it will be necessary to understand how to organize our work.
[Figure: work breakdown of the project — What is a LiDAR (LiDAR's physics, common LiDAR applications, state of the art), LiDAR trial Version 1.0, LiDAR 2.0 (standard rotating system, settings and description)]
(Footnote 2: IDE stands for integrated development environment.)
[Figure 1.4: project timeline from March to June, organized by tasks and subtasks, with milestones and delivery dates]
Chapter 2
Introduction to LiDAR Systems
[Figure: simplified LiDAR block scheme — a transmitter section (laser and beam optics) and a receiver section (optical detector and data acquisition)]
The scheme is presented only to give a general idea of the concept behind LiDAR technology. We can recognise:
• Scanner and optics: how fast images can be developed is also affected by the speed at which they are scanned. There are several options to scan the azimuth and elevation, including dual oscillating plane mirrors, a combination with a polygon mirror, and a dual-axis scanner. Optic choices affect the angular resolution and the range that can be detected. A hole mirror or a beam splitter are options to collect a return signal;
• Repetition rate: this is the rate at which the laser is pulsing, and it is measured in kilohertz (kHz). The laser emits extremely quick pulses, so it is important to check that all the equipment, such as the transmitter and receiver, is able to operate at the maximum laser pulsing speed;
• Scan angle: this is measured in degrees and is the distance that the
scanner moves from one end to the other. We will adjust the angle
depending on the application and the accuracy of the desired data
product;
• Nominal point spacing (NPS): the rule is simple enough: the more points that are hit in the collection, the better we will define the targets. The point sample spacing varies depending on the application. It is also good to keep in mind that LiDAR systems are random sampling systems: it is not possible to determine exactly where the points are going to hit on the target area. The only thing we can decide is how many times the target areas are going to be hit; for example, we can choose a higher frequency of points to better define the targets;
• Cross track resolution: this is the spacing of the pulses from the LiDAR
system in the scanning direction, or perpendicular to the direction that
the platform is moving;
• Swath: this is the actual distance of the area of coverage for the LiDAR system. It can vary depending on the scan angle. Mobile LiDAR has a swath too, but it is usually fixed and depends on the particular sensor. For these systems you might not hear the word "swath"; it may instead be referred to as the "area of coverage" and will vary depending on the repetition rate of the sensor;
• Overlap: it’s the amount of redundant area that is covered between
flight lines or swaths within an area of interest. Overlap is not a wasted
effort, because sometimes it provides more accuracy.[3]
So, the accuracy of the system depends not only on the laser's features but also on the properties of the receiver and of the system in general. Here we have to be honest and add some more information: during the development of this work we are basically interested in mechanical scanning systems. Even though mechanical rotational detection systems have some limits and disadvantages, they are quite easy to implement and develop. One of the major drawbacks is that the scanning speed limits the performance of the LiDAR and at the same time reduces the life of the rotating components, causing faults and premature failure. As with just about any technology that requires precision, you cannot count on LiDAR collection without proper sensor calibration. All sensors should be calibrated routinely, and calibration is recommended after every installation into a platform. How you calibrate depends on the system and the LiDAR provider, but the results are similar. Terrestrial LiDAR scanners, however, have self-calibrations that take place before every collection. In addition, all LiDAR systems (whether in the air or on the ground) are initially calibrated by the manufacturer in a lab and during field trials. Whenever a hardware component is repaired or replaced on a LiDAR sensor head, the system should be recalibrated. It is also important to remember that every further modification and adjustment of the system calls for a revision and check-up of the new configuration.
Terrestrial laser scanning happens on the Earth's surface and can be either stationary or mobile. Stationary terrestrial scanning is most common as a survey method, for example in conventional topography, monitoring, cultural heritage documentation and forensics.[7] The 3D point clouds acquired from these types of scanners can be matched with digital images taken of the scanned area from the scanner's location to create realistic-looking 3D models in a relatively short time when compared to other technologies. Each point in the point cloud is given the colour of the pixel of the image taken from the same angle as the laser beam that created the point. Mobile LiDAR (also mobile laser scanning) is when two or more scanners are attached to a moving vehicle to collect data along a path. These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs. One example application is surveying streets, where power lines, exact bridge heights, bordering trees, etc. all need to be taken into account. Instead of collecting each of these measurements individually in the field with a tachymeter, a 3D model can be created from a point cloud in which all of the needed measurements can be made, depending on the quality of the data collected. This eliminates the problem of forgetting to take a measurement, so long as the model is available, reliable and has an appropriate level of accuracy. Terrestrial LiDAR mapping involves a process of occupancy grid map generation. The area is divided into an array of grid cells, and each cell stores the height values of the LiDAR returns that fall into it. A binary map is then created by applying a particular threshold to the cell values for further processing. The next step is to process the radial distance and z-coordinates from each scan to identify which 3D points correspond to each specified grid cell, leading to the process of data formation.[8]
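To make the grid-generation step more concrete, the following C++ fragment is a minimal toy sketch of the idea (our own illustration, not the algorithm of any particular survey package; the cell size, grid extent and height threshold are assumed values):

#include <vector>

// Minimal occupancy-grid sketch: bin LiDAR points into cells,
// keep the maximum height per cell, then threshold to a binary map.
struct Point { double x, y, z; };

int main() {
    const double cellSize = 0.5;   // assumed cell size in metres
    const double threshold = 1.0;  // assumed height threshold in metres
    const int nx = 100, ny = 100;  // assumed grid dimensions

    std::vector<double> heights(nx * ny, 0.0);
    std::vector<Point> cloud = { {1.2, 3.4, 1.7}, {10.0, 5.0, 0.3} }; // example points

    for (const Point& p : cloud) {
        int ix = static_cast<int>(p.x / cellSize);
        int iy = static_cast<int>(p.y / cellSize);
        if (ix < 0 || ix >= nx || iy < 0 || iy >= ny) continue; // outside the grid
        double& h = heights[iy * nx + ix];
        if (p.z > h) h = p.z;  // store the highest return in the cell
    }

    // Binary map: 1 where the stored height exceeds the threshold.
    std::vector<int> binary(nx * ny);
    for (int i = 0; i < nx * ny; i++)
        binary[i] = heights[i] > threshold ? 1 : 0;
    return 0;
}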
The very first generations of automotive adaptive cruise control systems used only LiDAR sensors, and a more intensive use of these technologies is expected in the near future.
2.2.5 Energy
Combining LiDAR technology and GIS (Geographic Information System), it is possible to develop a method for predicting city-wide electricity gains from photovoltaic panels, based on detailed 3D urban massing models combined with Daysim-based hourly irradiation simulations, typical meteorological year climatic data and hourly calculated rooftop temperatures. The resulting data can be combined with online mapping technologies and search engines, as well as with a financial module that provides building owners interested in installing a photovoltaic system on their rooftop with meaningful data regarding spatial placement and returns.[10] So, in this sense, LiDAR could help the diffusion of RES (renewable energy sources) in a more efficient and rational way. This is only a simple example of what it is possible to conceive in the energy world thanks to LiDAR technology. Another important and relevant example is the status monitoring of power transmission lines. In the electric power transmission area, LiDAR can support the laying, maintenance and management of the power grid. In power line design, the terrain and geographic features within the entire design area can be learned from LiDAR data; especially in areas with thick trees, the area and amount of trees to be cut down can be estimated. In power line repair and maintenance, the height of the line at any location can be measured and calculated from the LiDAR data points on the power line and the elevation of the corresponding exposed spot on the ground, which facilitates repair and maintenance.[11] Other relevant applications, such as agriculture, archaeology or biology, are omitted here for reasons of time and will not be explained in detail.
It works both during the day and at night, and reliably even in adverse weather conditions. Mass production will start at the end of 2019 and the product will be on the market in early 2020, as scheduled. The price is not yet known, but rumors suggest around 7,000 dollars (even though we do not know whether this is a realistic number).
Description                          Value
Wavelength                           830 nm
Maximum pulse energy                 5 nJ
Pulse duration                       4 ns
Pulse repetition frequency           2.16 MHz
Beam divergence (FWHM, full angle)   0.4 mrad
Mirror rotation                      30 Hz
Base rotation                        2.5 mHz
Chapter 3
Fundamentals on LiDAR
As we said in the previous sections, the realization of this first platform is essential to get confident with the equipment and, in particular, to study more deeply the interaction between hardware and software. The language necessary to control the MCU, or better the microcontroller, that we will use constitutes a completely new barrier. So, for this first period it is necessary to study and practice what is written in books, web manuals and tutorials.
This chapter reports and explains the basic principles and how things work; in particular:
• Sensor;
• Micro-controller (Arduino);
• Servo motors;
• Processing v3.
Specifications          Measurements
Size (l x w x h)        20 x 48 x 40 mm
Weight                  22 g
Operating temperature   -20 to 60 °C

Specifications          Measurements
Power                   5 Vdc nominal; 4.5 Vdc min., 5.5 Vdc max.
Current                 105 mA idle; 135 mA continuous operation
On start-up the device performs a receiver bias correction routine, sending a reference signal directly from the transmitter to the receiver. It stores the transmit signature, sets the time delay for "zero" distance, and recalculates this delay periodically after several measurements. Next, the device initiates a measurement by performing a series of acquisitions. Each acquisition is a transmission of the main laser signal while recording the return signal at the receiver. If there is a signal match, the result is stored in memory as a correlation record. The next acquisition is summed with the previous result. When an object at a certain distance reflects the laser signal back to the device, these repeated acquisitions cause a peak to emerge out of the noise at the corresponding distance location in the correlation record.
The device integrates acquisitions until the signal peak in the correlation record reaches a maximum value. If the returned signal is not strong enough for this to occur, the device stops at the predetermined maximum acquisition count. Signal strength is calculated from the magnitude of the signal record peak, and a valid signal threshold is calculated from the noise floor. If the peak is above the threshold, the measurement is considered valid and the device will calculate the distance; otherwise it will report 1 cm. When beginning the next measurement, the device clears the signal record and starts the sequence again.[15]
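To make the idea of the correlation record more concrete, the following small C++ sketch is a toy model of the summing process (our own illustration, not Garmin's firmware; the number of bins, the echo strength and the threshold are assumed values):

#include <array>
#include <cstdlib>
#include <cstdio>

// Toy model of the correlation record: repeated noisy acquisitions are
// summed bin by bin; the bin holding a real return grows steadily while
// the zero-mean noise does not, so a peak emerges.
int main() {
    const int bins = 100;           // distance bins of the correlation record
    const int target = 42;          // bin where the (assumed) target sits
    std::array<int, bins> record{}; // correlation record, starts at zero

    for (int acq = 0; acq < 50; acq++) {           // 50 summed acquisitions
        for (int b = 0; b < bins; b++)
            record[b] += std::rand() % 3 - 1;      // random noise in [-1, 1]
        record[target] += 2;                       // weak echo at the target bin
    }

    // Peak detection against a simple noise-floor threshold (assumed).
    int peakBin = 0;
    for (int b = 1; b < bins; b++)
        if (record[b] > record[peakBin]) peakBin = b;

    const int threshold = 30;  // assumed validity threshold
    if (record[peakBin] > threshold)
        std::printf("valid return in bin %d\n", peakBin);
    else
        std::printf("no valid return\n");
    return 0;
}

The point is simply that the summed echo grows linearly with the number of acquisitions while the zero-mean noise does not, which is why the peak eventually clears the threshold.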
3.1.2 Interface
Initialization On power-up or reset, the device performs a self-test sequence and initializes all registers with default values. After roughly 22 ms, distance measurements can be taken through the I2C interface or the mode control pin. Before explaining how the LiDAR Lite v3's I2C works, it is better to give the reader some basic, general information about I2C itself; only then will it be possible to understand clearly how the LiDAR data transmission works.[15]
I2C description I2C-bus compatible ICs do not only assist designers; they also give a wide range of benefits to equipment manufacturers. In addition, I2C-bus compatible ICs increase system design flexibility by allowing simple construction of equipment variants and easy upgrading to keep designs up to date. In this way, an entire family of equipment can be developed around a basic model. Upgrades for new equipment, or enhanced-feature models (i.e. extended memory, remote control, etc.), can then be produced simply by clipping the appropriate ICs onto the bus. If a larger ROM is needed, it is simply a matter of selecting a microcontroller with a larger ROM. As new ICs supersede older ones, it is easy to add new features to equipment or to increase its performance by simply unclipping the outdated IC from the bus and clipping on its successor.
For 8-bit oriented digital control applications, such as those requiring microcontrollers, certain design criteria can be established:
• The cost of connecting the various devices within the system must be
minimized;
• Overall efficiency depends on the devices chosen and the nature of the
interconnecting bus structure.
I2C LiDAR Lite interface The device has a 2-wire, I2C-compatible serial interface. It can be connected to an I2C bus as a slave device, under the control of an I2C master device, and supports 400 kHz Fast Mode data transfer. The I2C bus operates internally at 3.3 Vdc; an internal level shifter allows the bus to run at a maximum of 5 Vdc, and internal 3 kΩ pull-up resistors ensure this functionality and allow for simple connection to the I2C host. The sensor module has a 7-bit slave address with a default value of 0x62; the effective 8-bit I2C addresses are 0x64 for write and 0x65 for read. The device will not respond to a general call, and support is not provided for 10-bit addressing. Please note some additional information:
• This device does not work with repeated START conditions: it must first receive a STOP condition before a new START condition;
• The ACK and NACK items are responses from the master device to the slave device;
• The last NACK in a read is technically optional, but the formal I2C protocol states that the master shall not acknowledge the last byte (see the sketch after this list).
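To sketch what these rules look like in practice, the fragment below reads one measurement with raw Wire calls rather than the library used later in Chapter 4. The register values (command register 0x00, trigger value 0x04, status register 0x01, result registers read from 0x8f) reflect our reading of the LIDAR-Lite v3 operating manual and should be double-checked there; note that every write is closed with a STOP before the following read, since the device does not support repeated START conditions:

#include <Wire.h>

const uint8_t LIDAR_ADDR = 0x62;  // default 7-bit slave address

void setup() {
  Serial.begin(115200);
  Wire.begin();
}

void loop() {
  // Trigger an acquisition with receiver bias correction (0x04 -> register 0x00).
  Wire.beginTransmission(LIDAR_ADDR);
  Wire.write(0x00);
  Wire.write(0x04);
  Wire.endTransmission();          // sends STOP: no repeated START allowed

  // Poll the status register until the busy flag (bit 0) clears.
  uint8_t status = 0x01;
  do {
    Wire.beginTransmission(LIDAR_ADDR);
    Wire.write(0x01);
    Wire.endTransmission();        // STOP before the read
    Wire.requestFrom(LIDAR_ADDR, (uint8_t)1);
    status = Wire.read();
  } while (status & 0x01);

  // Read the two result bytes; 0x8f is 0x0f with the auto-increment bit set.
  Wire.beginTransmission(LIDAR_ADDR);
  Wire.write(0x8f);
  Wire.endTransmission();
  Wire.requestFrom(LIDAR_ADDR, (uint8_t)2);
  int high = Wire.read();
  int low = Wire.read();
  int distanceCm = (high << 8) | low;
  Serial.println(distanceCm);      // distance in centimetres
  delay(100);
}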
Specifications                        Measurements
Range (70% reflective target)         40 m
Resolution                            ±1 cm
Accuracy < 5 m                        ±2.5 cm
Accuracy ≥ 5 m                        ±10 cm (mean ±1% of maximum distance; ripple ±1% of maximum distance)
Update rate (70% reflective target)   270 Hz typical; 650 Hz fast mode; >1000 Hz short range only
Repetition rate                       50 Hz default; 500 Hz max
Specifications                      Measurements
Wavelength                          905 nm (nominal)
Total laser power (peak)            1.3 W
Mode of operation                   Pulsed (256 pulse max. pulse train)
Pulse width                         0.5 µs (50% duty cycle)
Pulse train repetition frequency    10-20 kHz (nominal)
Energy per pulse                    <280 nJ
Beam diameter at laser aperture     12 x 2 mm
Divergence                          8 mrad
3.2 Arduino
Arduino Uno is a microcontroller board based on the ATmega328P. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz quartz crystal, a USB connection, a power jack, an ICSP header and a reset button.
Microcontroller                 ATmega328P
Operating voltage               5 V
Input voltage (recommended)     7-12 V
Input voltage (limit)           6-20 V
Digital I/O pins                14 (of which 6 provide PWM output)
PWM digital I/O pins            6
Analog input pins               6
DC current per I/O pin          20 mA
DC current for 3.3 V pin        50 mA
Flash memory                    32 KB (ATmega328P), of which 0.5 KB used by bootloader
SRAM                            2 KB (ATmega328P)
EEPROM                          1 KB (ATmega328P)
Clock speed                     16 MHz
LED_BUILTIN                     13
Length                          68.6 mm
Width                           53.4 mm
Weight                          25 g
3.2.1 Programming
Arduino Uno schematics are open-source hardware. The ATmega328 comes preprogrammed with a bootloader that allows you to upload new code to it without the use of an external hardware programmer. It communicates using the original STK500 protocol. You can also bypass the bootloader and program the microcontroller through the ICSP (In-Circuit Serial Programming) header using Arduino ISP or similar. The ATmega16U2 (or 8U2 in the rev1 and rev2 boards) firmware source code is available in the Arduino repository. The ATmega16U2/8U2 is loaded with a DFU bootloader, which can be activated by connecting the solder jumper on the back of the board and then resetting the 8U2 (Rev1 boards), or by pulling the 8U2/16U2 HWB line to ground (Rev2 or later boards), making it easier to put into DFU mode.
You can then use Atmel's FLIP software (Windows) or the DFU programmer (Mac OS X and Linux) to load new firmware, or you can use the ISP header with an external programmer (overwriting the DFU bootloader).
It is then possible to update and upload the code via USB thanks to the appropriate software. It is important to say that Arduino code is quite simple and intuitive. To get an idea of the basic Arduino IDE environment, please take a look at Fig. 3.2.
Somebody could raise their hand and ask why it is highly recommended to put a capacitor between the ground and the 5V pin. The answer is quite immediate and easy: the capacitor is recommended to mitigate inrush current when the device is enabled, as it is possible to see in Fig. 3.3:
- 680 µF capacitor (+) to Arduino 5V;
- 680 µF capacitor (-) to Arduino GND.
3.2.3 Power
The Arduino Uno board can be powered via USB connection or with an
external power supply. The power source is selected automatically. External
(non-USB) power can come either from an AC-to-DC adapter (wall-wart) or
battery. The adapter can be connected by plugging a 2.1mm center-positive
plug into the board’s power jack. Leads from a battery can be inserted
in the GND and Vin pin headers of the POWER connector. The board
can operate on an external supply from 6 to 20 volts. If supplied with less
than 7V, however, the 5V pin may supply less than five volts and the board
may become unstable. If using more than 12V, the voltage regulator may
overheat and damage the board. The recommended range is 7 to 12 volts.
The power pins are as follows (recap):
• Vin. The input voltage to the Arduino/Genuino board when it is using an external power source (as opposed to 5 volts from the USB connection or another regulated power source). You can supply voltage through this pin or, if supplying voltage via the power jack, access it through this pin;
• 5V. This pin outputs a regulated 5V from the regulator on the board. The board can be supplied with power from the DC power jack (7-12V), the USB connector (5V), or the VIN pin of the board (7-12V). Supplying voltage via the 5V or 3.3V pins bypasses the regulator and can damage your board.
3.2.4 Inputs and Outputs
Each of the 14 digital pins on the Uno can be used as an input or output, using the pinMode(), digitalWrite(), and digitalRead() functions. They operate at 5 volts. Each pin can provide or receive 20 mA as the recommended operating condition and has an internal pull-up resistor (disconnected by default) of 20-50 kΩ. A maximum of 40 mA must not be exceeded on any I/O pin to avoid permanent damage to the microcontroller. In addition, some pins have specialized functions (a minimal usage sketch follows this list):
• Serial: 0 (RX) and 1 (TX). Used to receive (RX) and transmit (TX) TTL serial data. These pins are connected to the corresponding pins of the ATmega8U2 USB-to-TTL serial chip;
• PWM: 3, 5, 6, 9, 10, and 11. Provide 8-bit PWM output with the analogWrite() function;
• LED: 13. There is a built-in LED driven by digital pin 13. When the pin is HIGH the LED is on; when the pin is LOW, it is off.
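A minimal usage sketch of these functions (our own example; the input pin choice is arbitrary):

const int buttonPin = 2;   // arbitrary input pin
const int ledPin = 13;     // built-in LED

void setup() {
  pinMode(buttonPin, INPUT_PULLUP); // enable the internal pull-up resistor
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // With the pull-up enabled, the pin reads LOW when a button to ground is pressed.
  if (digitalRead(buttonPin) == LOW) {
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
}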
Inside a servo, a small DC motor is powered by a battery or a proper DC source and spins at high RPM (rotations per minute) but puts out very low torque. An arrangement of gears takes the high speed of the motor and slows it down while at the same time increasing the torque. The basic law of physics applies: work = force × distance. A tiny electric motor does not have much torque, but it can spin really fast (small force, big distance). The gear design inside the servo case converts the output to a much slower rotation speed but with more torque (big force, little distance); the amount of actual work, though, is the same. Gears in an inexpensive servo motor are generally made of plastic to keep it lighter and less costly (see Fig. 3.6a). On a servo designed to provide more torque for heavier work, the gears are made of metal (see Fig. 3.6b) and are harder to damage. With a small DC motor, you apply power from a battery, and the motor spins. Unlike a simple DC motor, however, a servo's spinning motor shaft is slowed way down with gears. A positional sensor on the final gear is connected to a small circuit board (see Fig. 3.7). The sensor tells this circuit board how far the servo output shaft has rotated. The electronic input signal from the computer feeds into that circuit board. The electronics on the circuit board decode the signals to determine how far the user wants the servo to rotate. It then compares the desired position to the actual position and decides which direction to rotate the shaft so it gets to the desired position. That is what makes servo motors so useful: once you tell them what you want done, they do the job without your help. This automatic seeking behaviour of servo motors makes them perfect for many robotic applications.
A lot of different kinds of servo motors are available these days:
• Positional rotation servo: this is the most common type of servo motor. The output shaft rotates through about 90, 180 or 270 degrees. It has physical stops placed in the gear mechanism to prevent turning beyond these limits, to protect the rotational sensor;
• Continuous rotation servo: this is quite similar to the common positional rotation servo, except that it can turn continuously in either direction; the control signal sets the speed and direction of rotation instead of a target position;
• Linear servo: this is also like the positional rotation servo motor described above, but with additional gears (usually a rack and pinion mechanism) to change the output from circular to back-and-forth. These servos are not easy to find, though.
The servo is controlled by a pulse train that determines the position of the shaft: based on the duration of the pulse sent via the control wire, the rotor will turn to the desired position. The servo motor expects to see a pulse every 20 milliseconds (ms), in other words at 50 Hz, and the length of the pulse determines how far the motor turns. For example, for a common 180° servo, a 1.5 ms pulse will make the motor turn to the 90° position. Shorter than 1.5 ms moves it in the counter-clockwise direction toward the 0° position, and anything longer than 1.5 ms turns the servo in a clockwise direction toward the 180° position. The motor's speed, then, is proportional to the difference between its actual position and the desired position: if the motor is near the desired position it will turn slowly, otherwise it will turn fast. This is called proportional control. It means the motor will only run as hard as necessary to accomplish the task at hand, a very efficient way to achieve the rotation; see Fig. 3.8.
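A minimal sketch of this pulse-width mapping (our own illustration; the pin number is arbitrary and the 1-2 ms range is the nominal one, individual servos may need slight calibration):

#include <Servo.h>

Servo panServo;
const int servoPin = 9;  // arbitrary PWM-capable pin

void setup() {
  panServo.attach(servoPin);  // the Servo library generates the 50 Hz signal
}

void loop() {
  // Map an angle (0-180 degrees) to a pulse width of 1000-2000 microseconds:
  // 1.5 ms corresponds to the 90-degree centre position.
  for (int angle = 0; angle <= 180; angle += 45) {
    int pulseUs = map(angle, 0, 180, 1000, 2000);
    panServo.writeMicroseconds(pulseUs);
    delay(500);  // give the servo time to reach the position
  }
}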
In the next chapter the results of the servo test carried out in the lab with an oscilloscope will be presented (see Chap. 4, Sec. 4.2.1).
3.4 Processing v3
Processing is a flexible software sketchbook and a language for learning how
to code within the context of the visual arts. Since 2001, Processing has
promoted software literacy within the visual arts and visual literacy within
technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping.
Its features are:[18]
• Free to download and open source;
3.4.1 Export
The export feature packages a sketch to run within a Web browser. When
code is exported from Processing it is converted into Java code and then
compiled as a Java applet. When a project is exported, a series of files are
written to a folder named applet that is created within the sketch folder. All
files from the sketch folder are exported into a single Java Archive (JAR)
file with the same name as the sketch. For example, if the sketch is named
Sketch 123, the exported file will be called Sketch 123.jar. Every time a
sketch is exported, the contents of the applet folder are deleted and the files
are written from scratch. Any changes previously made to the index.html
file are lost. Media files not needed for the applet should be deleted from the
data folder before it is exported to keep the file size small. For example, if
there are unused images in the data folder, they will be added to the JAR file,
thus needlessly increasing its size. In addition to exporting Java applets for
the Web, Processing can also export Java applications for the Linux, Macintosh, and Windows platforms. When "Export Application" is selected
from the File menu, folders will be created for each of the operating sys-
tems specified in the Preferences. Each folder contains the application, the
source code for the sketch, and all required libraries for a specific platform.
Additional and updated information about the Processing environment is
available at www.processing.org/reference/environment.
Chapter 4
LiDAR Platform v1
Firstly, it is necessary to reaffirm that this first platform will be part of a bigger learning phase. So, we will investigate how it is possible to build such a structure, discovering problems step by step and figuring out how to overcome them. Moreover, this first version will be essential to understand and define the boundaries of version one and, at the same time, to foresee possible solutions for improving the machine's performance towards version 2.
Now, I will list the key components of this version, which will be discussed in detail in further sections of this chapter:
• Sensor;
• Arduino;
• Arduino Expansions;
• Simple Structure;
• Power Bank;
As it is possible to imagine, before putting all the parts together it is necessary to focus on the single parts, test specific components in the lab if required, and then assemble the whole platform, solving integration problems and testing again. This last test is essential, since it is like a benchmark between the requirements around which the platform has been built and the performance that the final product is actually able to deliver.
At the beginning of the process a general timeline has been deployed.
The timeline itself presents different steps and phases, which correspond to different implementations and functionalities. Going on with the process, taking into account time constraints and possible improvements of the platform, we will decide what to do first and what to exclude. See Fig. 4.1.
Anyway, right now it is time to show the architecture of this first prototype, see Fig. 4.2. It is possible to see how the parts and tools have been assembled together to form the LiDAR system. Since the structure has to be portable, we have decided to provide the power necessary to feed the equipment through a 5V-output power bank. The full capacity of the power supply will be 10000 mAh, necessary to feed the sensor, controller, motors and the bluetooth module. So, in simple words, the two motors move the sensor (pan and tilt movement); as soon as the movement is completed, the sensor acquisition is triggered by the Arduino, which simultaneously controls the servos; the measurement is then elaborated by the MCU and sent to the PC via serial. Thanks to the adoption of master and slave wireless transmission of the information, it is possible to keep the circuit simple and reduce the number of wires used without compromising the transmission speed. So, the master device will be connected to the Arduino board expansion (directly mounted on the Arduino PCB itself) with four pins: two for power (GND and 5V) and two for data communication (Tx and Rx). The master will then be coupled with its slave, which will be directly plugged into the USB port of the PC. In this way, the slave can be powered and at the same time transmit data via the serial port to the Processing v3 sketch that runs on the machine.
The only thing that is missing, though it can be seen on the schematic, is the Arduino servo expansion. By using it, we make the motors' power supply independent of the Arduino and, at the same time, gain an easy way to control and feed the two servos.
To use the sensor it is first necessary to install the library in the Arduino library folder. Once this step has been done, the sensor can be easily controlled and triggered, as said before.
Firstly, it is mandatory to define the LiDAR object with the proper syntax. Then, it is necessary to initiate the communication via I2C (the transmission mode we have chosen). At that point, a single command triggers the sensor and captures the measurement. The sensor can operate in different modes and with two levels of accuracy; the deal is a compromise between speed and reliability of the data gathered by the sensor. The following code listing explains the basics of how to operate the sensor:
1  /*---------------------------------------------
2  This example shows how to initialize, configure,
3  and read distance from a LIDAR-Lite connected over the I2C interface.
4
5  Connections:
6  LIDAR-Lite 5 Vdc (red) to Arduino 5v
7  LIDAR-Lite I2C SCL (green) to Arduino SCL
8  LIDAR-Lite I2C SDA (blue) to Arduino SDA
9  LIDAR-Lite Ground (black) to Arduino GND
10
11 (Capacitor recommended to mitigate inrush current when device is enabled)
12 680uF capacitor (+) to Arduino 5v
13 680uF capacitor (-) to Arduino GND
14 ---------------------------------------------*/
15 #include <Wire.h>      // Standard Library
16 #include <LIDARLite.h> // LiDAR's Library
17
18 LIDARLite myLidarLite;  // LiDAR object definition
19
20 void setup()
21 {
22   Serial.begin(115200); // Initialize serial connection to display distance readings
23   /*
24   begin(int configuration, bool fasti2c, char lidarliteAddress)
25   -> Starts the sensor and I2C.
26
27   Parameters
28   ---------------------------------------------
29   configuration: Default 0.
30   Selects one of several preset configurations.
31   fasti2c: Default 100 kHz. I2C base frequency. (If true, I2C frequency is set to 400 kHz.) */
32   myLidarLite.begin(0, true); // Set configuration to default and I2C to 400 kHz
33   /* configure(int configuration, char lidarliteAddress)
34   Selects one of several preset configurations.
35   Parameters
36   ---------------------------------------------
37   configuration: Default 0.
38   0: Default mode, balanced performance.
39   1: Short range, high speed. Uses 0x1d maximum acquisition count.
40   2: Default range, higher speed short range. Turns on quick termination detection for faster measurements at short range (with decreased accuracy).
41   3: Maximum range. Uses 0xff maximum acquisition count.
42   4: High sensitivity detection. Overrides default valid measurement detection algorithm, and uses a threshold value for high sensitivity and noise.
43   5: Low sensitivity detection. Overrides default valid measurement detection algorithm, and uses a threshold value for low sensitivity and noise. */
44   myLidarLite.configure(0); // Change this number to try out alternate configurations
45 }
46 void loop()
47 {
48   /* distance(bool biasCorrection, char lidarliteAddress)
49   Take a distance measurement and read the result.
50   Parameters
51   ---------------------------------------------
52   biasCorrection: Default true. Take acquisition with receiver bias correction. If set to false, measurements will be faster. Receiver bias correction must be performed periodically (e.g. 1 out of every 100 readings). */
53
54   // Take a measurement with receiver bias correction and print to serial terminal
55   Serial.println(myLidarLite.distance());
56
57   // Take 99 measurements without receiver bias correction and print to serial terminal
58   for (int i = 0; i < 99; i++)
59   {
60     Serial.println(myLidarLite.distance(false));
61   }
62 }
Listing 4.1: Sensor acquisition code
Taking a look at code listing 4.1, it is possible to see that on:
• Line 18, the LiDAR object is defined;
• Line 32, I2C communication starts, with default settings (value = 0) and speed set to 400 kHz (true);
• Line 44, the LiDAR operational mode is set (value = 0, balanced performance);
• Line 55, the acquisition is triggered with receiver bias correction, while the loop at lines 58-61 takes faster measurements without bias correction; as we have said, the measure without correction is faster but less accurate.
In the mathematical convention, the azimuth angle is measured counterclockwise from the x-axis to the y-axis rather than clockwise from north (0°) to east (90°) as in the horizontal coordinate system. The polar angle is often replaced by the elevation angle measured from the reference plane; an elevation angle of zero is at the horizon. The spherical coordinate system generalizes the two-dimensional polar coordinate system. It can also be extended to higher-dimensional spaces and is then referred to as a hyper-spherical coordinate system.
Spherical coordinates determine the position of a point in three-dimensional space based on the distance ρ from the origin and two angles, θ and φ. The following graphics (see Fig. 4.6) may help in understanding spherical coordinates better. Here we derive the relationship between spherical and Cartesian coordinates.
Since φ is the angle the hypotenuse makes with the z-axis leg of the right triangle, the z-coordinate of P (i.e., the height of the triangle) is z = ρ cos φ. The length of the other leg of the right triangle is the distance from P to the z-axis, which is r = ρ sin φ. The distance of the point Q from the origin is the same quantity. The cyan triangle, shown both in the original 3D coordinate system on the left and in the xy-plane on the right, is the right triangle whose vertices are the origin, the point Q, and its projection onto the x-axis. In the right plot, the distance from Q to the origin, which is the length of the hypotenuse of the right triangle, is labelled r. As θ is the angle this hypotenuse makes with the x-axis, the x- and y-components of the point Q (which are the same as the x- and y-components of the point P) are given by x = r cos θ and y = r sin θ. Since r = ρ sin φ, these components can be rewritten as x = ρ sin φ cos θ and y = ρ sin φ sin θ. In summary, the formulas are:
x = ρ sin φ cos θ
y = ρ sin φ sin θ
z = ρ cos φ
So, once the sensor has measured the distance, some lines in the Arduino code perform this transformation and then print a string with (x, y, z) on the serial port. This operation is quite easy thanks to the println instruction. It would also be possible to print the data to a file, but we preferred to transmit the data via serial: in this way real-time elaboration is immediate.
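A sketch of that conversion step on the Arduino side (our own illustration; the function and variable names are assumptions, since the platform code itself is not listed here):

// Convert a distance reading and the two scan angles to Cartesian
// coordinates and print "x y z" over the serial port.
// theta: pan angle; phi: tilt angle from the vertical, both in radians (assumed).
void printCartesian(float rho, float theta, float phi) {
  float x = rho * sin(phi) * cos(theta);
  float y = rho * sin(phi) * sin(theta);
  float z = rho * cos(phi);

  Serial.print(x); Serial.print(' ');
  Serial.print(y); Serial.print(' ');
  Serial.println(z);  // the newline terminates the point record
}

The space-separated "x y z" string matches what the Processing sketch below expects when it splits each serial line into three components.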
5  import java.util.Calendar;
6  import java.text.SimpleDateFormat;
7  // global variable initialization
8  Serial serial;
9  int serialPortNumber = 0;
10 float angle = 6.5f;
11 float angleIncrement = 0;
12 float xOffset = 3.0f;
13 float xOffsetIncrement = 0;
14 float yOffset = 152.0f;
15 float yOffsetIncrement = 0;
16 float scaleIncrement = 0;
17 float h;
18 ArrayList<PVector> vectors; // vector initialization to store data
19 int lastPointIndex = 0;
20 int lastPointCount = 0;
21
22 void setup() {
23   size(800, 600, P3D); // environment initialization
24   colorMode(HSB, 360, 100, 100);
25   noSmooth();
26   vectors = new ArrayList<PVector>();
27   String[] serialPorts = Serial.list(); // serial port read ->
28   String serialPort = serialPorts[serialPortNumber];
29   println("Using serial port \"" + serialPort + "\"");
30   println("To use a different serial port, change serialPortNumber:");
31   printArray(serialPorts);
32   serial = new Serial(this, serialPort, 115200); } // <-
33 void draw() {
34   String input = serial.readStringUntil(10);
35   if (input != null) {
36     String[] components = split(input, ' ');
37     if (components.length == 3) {
38       vectors.add(new PVector(float(components[0]), float(components[1]), float(components[2]))); } }
39   background(0);
40   translate(width / 2, height / 2, -50);
41   rotateY(angle);
42   int size = vectors.size();
43   for (int index = 0; index < size; index++) {
44     PVector v = vectors.get(index);
45     if (index == size - 1) {
46       // draw red line to show recently added LIDAR scan point
47       if (index == lastPointIndex) {
48         lastPointCount++;
49       } else {
50         lastPointIndex = index;
51         lastPointCount = 0;
52       }
53       if (lastPointCount < 10) {
54         stroke(0, 100, 100);
55         line(xOffset, yOffset, 0, v.x * scale + xOffset, -v.z * scale + yOffset, -v.y * scale); } }
56     h = sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
57     stroke(360 - (h * 0.4), 100, 100);
58     point(v.x * scale + xOffset, -v.z * scale + yOffset, -v.y * scale); }
59   angle += angleIncrement;
60   xOffset += xOffsetIncrement;
61   yOffset += yOffsetIncrement;
62   scale += scaleIncrement; }
63 void keyPressed() {
64   if (key == 'q') {
65     // zoom in
66     scaleIncrement = 0.02f;
67   } else if (key == 'z') {
68     // zoom out
69     scaleIncrement = -0.02f;
70   } else if (key == 'p') {
71     // erase all points
72     vectors.clear();
73   } else if (key == 'a') {
74     // move left
75     xOffsetIncrement = -1f;
76   } else if (key == 'd') {
77     // move right
78     xOffsetIncrement = 1f;
79   } else if (key == 'w') {
80     // move up
81     yOffsetIncrement = -1f;
82   } else if (key == 'x') {
83     // move down
84     yOffsetIncrement = 1f;
85   } else if (key == CODED) {
86     if (keyCode == LEFT) {
87       // rotate left
88       angleIncrement = -0.015f;
89     } else if (keyCode == RIGHT) {
90       // rotate right
91       angleIncrement = 0.015f; }}}
92 void keyReleased() {
93   if (key == 'q') {
94     scaleIncrement = 0f;
95   } else if (key == 'z') {
96     scaleIncrement = 0f;
97   } else if (key == 'a') {
98     xOffsetIncrement = 0f;
99   } else if (key == 'd') {
100    xOffsetIncrement = 0f;
101  } else if (key == 'w') {
102    yOffsetIncrement = 0f;
103  } else if (key == 'x') {
104    yOffsetIncrement = 0f;
105  } else if (key == 's') {
106    saveToFile();
107  } else if (key == CODED) {
In the following part the algorithm is commented to explain its different parts and functionalities. As for every sketch or program, at the beginning the libraries needed to run, compile and interpret the algorithm are imported. Then the global variables are declared.
• Line 23, this instruction declares and initializes the size and renderer of the environment through which the point cloud will be shown: a window of 800 × 600 pixels. Before drawing 3D forms in Processing, it is necessary to tell the software to draw with a 3D renderer. The default renderer in Processing draws only two-dimensional shapes, but there are additional options (P3D and OPENGL) to render 3D forms. P3D is the simplest and most compatible renderer, and it requires no additional libraries;
• Line 24, definition of the color mode and ranges of the environment. Processing uses the RGB color model as its default for working with color, but the HSB specification can be used instead to define colors in terms of their hue, saturation, and brightness. The hue of a color is what most people normally think of as the color name: yellow, red, blue, orange, green, violet. A pure hue is an undiluted color at its most intense. The saturation is the degree of purity in a color: the continuum from the undiluted, pure hue to its most diluted and dull. The brightness of a color is its relation to light and dark.[22] Here colorMode(HSB, 360, 100, 100) sets hue to range over 0-360, and saturation and brightness over 0-100. Somebody could raise a hand and ask why the HSB system has been chosen rather than the more common and intuitive RGB. The answer lies in something that will be explained in detail later: a color is assigned to each point according to its distance from the sensor. If the RGB system were adopted, the result would be a saturation of the color, and points close to each other would be displayed with the same color. This is something we do not want, since the color of the points contributes to creating a more realistic three-dimensional effect;
• Line 26-32, initialization of the vector in which data are then stored, and establishment of the serial port communication;
• Line 33, this part is responsible for drawing the points in the proper position, according to the coordinates. To make the display easier to understand and intuitive, the last point displayed on the screen is connected by a red line to the origin of the axes. In this way, it is possible to follow the progression of the point cloud representation;
• Line 66-111, this part of the code is responsible for the navigation system. By pressing the keyboard arrows it is possible to rotate the point cloud around a vertical or horizontal axis, while pressing the W, X, A, D keys translates the point cloud vertically or horizontally. Pressing Q zooms in, while Z zooms out. Last but not least, P erases all points and S saves the point cloud to file;
• Line 113-125, this is the part related to the save-to-file function. By pressing the S key, the save function is invoked. The output file will contain the three-dimensional coordinates; the file is saved in the Processing v3 installation folder with a name of the form "yyMMdd HHmmss.xyz".
Results This paragraph reports samples from the first test carried out with LiDAR platform v1. At the very beginning the Processing sketch was not able to display colors (see Fig. 4.8 and Fig. 4.9), while Fig. 4.10 represents the evolution to colors. As said before, colors help to identify objects better and give a more accurate three-dimensional aspect. Anyway, it is necessary to be honest and say that this first platform has given results that are not as good as expected. In fact, the density of the point cloud is not sufficient to give a precise idea of the object or obstacle; the only thing we can infer is the approximate shape of things or rooms. This prototype just underlines and shows that it is possible to obtain a 3D representation of the surrounding environment employing simple hardware, as we did. But, for sure, the platform requires improvements as well as upgrades. The first noticeable issue is the point cloud density: it is not enough to show the scanned objects with precision. For this reason, the next version should improve this parameter dramatically. In the following section new proposals to solve this problem will be presented.
Rank   Priorities
1      Code / algorithm (servo control, serial data transmission, data processing)
2      Motors
3      Structure
1)
• + Easy to display.
All of those three options, being software related, involve no extra cost, since there is no need to buy anything: it is just programming and code. Moreover, it is true that on the internet it is possible to find open source software that can guide you in the development of a new application or a new software part; but at the same time, we would have to spend a lot of time to make this process smooth and reliable. Anyway, regarding the motors, it is necessary to evaluate whether a different technology could bring better performance.
2)
a. To reach a high level of accuracy it is possible to buy new servos with better performance and specs. The main problem we must face is the instability of the PWM signal to the servos. In fact, we tested the servo with an oscilloscope and found out that the PWM signal is not stable and oscillates. This can lead to servo rotations that are not fully controllable and therefore uncertain. Considering these problems, it is wise to take into account different motor technologies, for example stepper motors. Steppers are used in critical applications where the minimum required step is really small, or in tasks where precise positioning is an issue. With an improvement of the motor performance it is possible to scan the surrounding environment more accurately.
• + Accuracy;
• + Reliability;
• + Increase the performance of the whole system;
• - Time and cost: expensive (especially switching to other technologies);
• - Test in lab;
• - Slow down the scanning speed (↑ accuracy ↓ speed).
3)
Chapter 5
LiDAR Platform v2
This is probably the most important chapter of this work. In fact, it is fundamental to show that we understood the problems and issues which emerged in the first version and, furthermore, that we have been able to propose a solution that suits our requirements and allows us to reach the expected quality standard. The decision that has been made consists in the radical change from servo motors to steppers. This solution brings as a consequence the need to re-write the code that controls the pan and tilt movements, adapting it to the newly implemented technology. Besides that, the other parts will remain substantially the same.
Just to summarize: the big change is related to hardware, and it involves big changes in the software related to motion control. Other small software implementations will be carried out in this phase, but their impact on the final product is really limited, or not substantially relevant. Anyway, considering such big changes, it is mandatory to conceive a new set of experiments able to reveal the reliability and performance of the new motion system in particular.
In the second version of our LiDAR we adopted stepper motors. On paper, this kind of motor can execute very small rotations: a NEMA 17 stepper motor, for example, can deliver a minimum rotation of 1.8°. If a stepper motor is coupled with a micro-stepper driver, it is possible to obtain fractions of this angle (typically 1/2, 1/4, 1/8, 1/32). Since the micro-stepper represents an open-loop control, there is the need to check and verify the mechanical accuracy of the motors.
The new motion system must:
• be accurate;
• be reliable.
Stepper motors have a magnetized geared core that is surrounded by a num-
ber of coils which act as electromagnets. Despite the real number of coils,
electrically, we can draw a simplified scheme that comprehends only two
coils in a stepper motor, divided into a number of small coils. By precisely
controlling the current in the coils, the motor shaft can be made to move
in discrete steps, as illustrated in the following diagrams (Fig. 5.2): In the
first diagram the coil at the top is energized by applying electricity in the
polarity shown. The magnetized shaft is attracted to this coil and then locks
into place. Now look what happens when the electricity is removed from
the top coil and applied to the other coil (Fig. 5.3). The shaft is attracted
to the second coil and locks into place there. The jump between the two
positions is one step (in this illustration a step is 90 degrees, in actual fact
a stepper motor usually steps just a fraction of this. The diagrams are sim-
plified for clarity). We have seen how the motor shaft moves to lock itself
into place in front of an attracting electromagnet, each magnet represents
one step. It is, however, possible to move the motor shaft into positions
between steps. This is known as ”micro-stepping”. In order to understand
5.1. STEPPER MOTORS 67
In order to understand how micro-stepping works, see Fig. 5.4. In this illustration the current has been applied to both coils in equal amounts, which causes the motor shaft to lock into place halfway between the two coils: this is known as a "half step". The principle can be extended to quarter steps, eighth steps and even sixteenth steps, by controlling the ratio of the current applied to the two coils so as to attract the motor shaft to a position between the coils but closer to one than the other. Commonly this operation is performed by a micro-stepper driver, and choosing a good-quality driver is quite important to reduce vibrations and obtain smooth operation. By using micro-stepping it is possible to move the shaft of a stepper motor by a fraction of a degree, allowing for extremely precise positioning.
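To make the current-ratio idea concrete, the following is a minimal sketch assuming the common sine/cosine current profile (the internal profile of any specific driver is not documented here, and the function and variable names are our own illustrative choices):

#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Sine/cosine micro-stepping sketch: for micro-step k out of m
// subdivisions of one full step, split the full-scale current
// between coil A and coil B so that the shaft settles at an
// intermediate position between the two coils.
void coilCurrents(int k, int m, double iMax, double& iA, double& iB) {
    double theta = (kPi / 2.0) * k / m;  // electrical angle, 0..90 degrees
    iA = iMax * std::cos(theta);         // current leaving coil A...
    iB = iMax * std::sin(theta);         // ...is moved into coil B
}

int main() {
    double iA, iB;
    // Quarter-stepping (m = 4), assuming a 1.3 A full-scale current.
    for (int k = 0; k <= 4; ++k) {
        coilCurrents(k, 4, 1.3, iA, iB);
        std::printf("micro-step %d: iA = %.3f A, iB = %.3f A\n", k, iA, iB);
    }
    return 0;
}

At k = m/2 the two coils receive the same current, which is exactly the "half step" situation of Fig. 5.4.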
Stepper motors can be divided into two categories: bipolar and unipolar. Since we don't want to explain everything in detail, we will only discuss bipolar steppers (the type we are interested in and the one on which the experiment is based and conceived). Bipolar stepper motors (see Fig. 5.5) consist of two coils of wire (electrically; they are actually split into several physical coils) and generally have four connections, two per coil. The simplified diagrams of stepper operation presented in the previous section all refer to bipolar stepper motors. An advantage of bipolar stepper motors is that they make use of the entire coil winding, so they are more efficient.
A microcontroller is needed to control and command the stepper motor through the driver. In our case we use an Arduino board, flashed through its IDE; other microcontrollers can be used with a very similar result. Now that it has been explained how a stepper and its control work, it is possible to go further and present the experiment.
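As a minimal, hedged sketch of this control (pin numbers, timings and the step count are illustrative assumptions, not the project's exact firmware), a step/dir driver such as the DM320 can be pulsed from an Arduino as follows:

// Minimal step/dir control sketch for a micro-stepper driver
// (DM320-class). PUL_PIN and DIR_PIN are hypothetical pin choices;
// the pulse widths must respect the driver's minimum timing
// requirements (check the datasheet).
const int PUL_PIN = 9;   // pulse (step) input of the driver
const int DIR_PIN = 8;   // direction input of the driver

void setup() {
  pinMode(PUL_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
}

// Execute n micro-steps in the given direction.
void doSteps(long n, bool clockwise) {
  digitalWrite(DIR_PIN, clockwise ? HIGH : LOW);
  for (long i = 0; i < n; i++) {
    digitalWrite(PUL_PIN, HIGH);
    delayMicroseconds(50);    // pulse high time
    digitalWrite(PUL_PIN, LOW);
    delayMicroseconds(450);   // gap between pulses sets the speed
  }
}

void loop() {
  doSteps(5485, true);        // e.g. the 5485 steps of the first data set
  delay(2000);                // pause before repeating
}

Each pulse on PUL_PIN advances the motor by one micro-step, so the angular resolution is entirely delegated to the driver's micro-stepping setting.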
1. The camera captures the absolute orientation angle of the fiber pre-
preg;
Item: Structure
Function: Holds the CCD (high-resolution) camera in place and positions the metallic plate fixed to the stage (described below).

Item: Rotary Stage
Function: A rotary stage couples a stepper motor with a gearing system and transforms the rotation of the stepper's shaft into a high-precision rotation of a mechanical stage. Stage gear ratio: 1:90; stage plate size: 60 mm.

Item: Stepper motor & driver
Function: NEMA 17, minimum rotation 1.8°, maximum current 1.3 A, torque 0.22 Nm, code 42HAP34BL4. The motor is coupled to and driven by the DM320, shown and presented above. With a driver providing 1/20 micro-stepping, and considering the gear ratio of the stage, a minimum stage rotation of 0.001° is obtained.

Item: CCD Camera
Function: In a CCD image sensor, pixels are represented by p-doped metal-oxide-semiconductor (MOS) capacitors. These capacitors are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. More details in the dedicated section.

Item: Software interface
Function: Triggers the acquisition of the image and saves the result on the PC. The picture is then processed, and the inclination of the fibers is detected and shown on the screen.

Item: Single layer Carbon Fiber prepreg
Function: The tool that reveals the rotation. Thanks to its stretched fiber profile, it is possible to detect the rotation of the fiber positioned firmly on the stage.
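Putting the numbers of the table together, the minimum rotation of the stage follows directly from the full-step angle, the micro-stepping factor and the gear ratio:
$$\theta_{stage} = \frac{1.8^\circ}{20 \times 90} = 0.001^\circ$$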
By knowing the number of steps that the stepper has executed, and taking into account what has been said in point 2), it is possible to compare the expected value with the value measured by the camera in the second picture. One thing is worth underlining: the measurement does not depend on the absolute values gathered by the camera. We are interested in the difference between the two values rather than in the absolute numbers, which means we can tolerate some error related to non-zero alignment, since the error will presumably affect the two measures in the same way. This is a very good result: as a consequence we have a stable and reliable method to measure what we are looking for. The only unknown at this point is whether the camera will be able to detect a single step of the whole system. Even if it cannot, it is not a problem: we can let the motor execute a precise number of steps (e.g. 10, 100, 1000, 10000) and then check the amplitude of the average step by simply dividing the angle measured by the camera by the number of steps. Last but not least, some annotations on the experimental setup are necessary:
• The fiber must be well positioned and fixed on the rotary platform that lies on the rotary stage. In detail, four pieces of thin tape have been attached to the plate and the pre-preg sheet has been firmly anchored;
• The pre-preg sheet has been cleaned with some alcohol in order to remove pollution and dirt from the surface. This step requires patience, since it is very easy to damage the carbon fiber layer; if the sheet is damaged, the software struggles to recognize the fiber pattern and the output measure is not accurate;
• A special light has been used to illuminate the fiber sample: in this way it is possible to increase the contrast factor, which helps a lot in the pattern recognition of the fiber.
$$\mathrm{Measure}_1 = 35.1457^\circ$$
$$\mathrm{Measure}_2 = 88.1025^\circ$$
$$\Delta = \mathrm{Measure}_2 - \mathrm{Measure}_1 = 52.9568^\circ$$
The next step is to calculate the average amplitude of one step. The operation is pretty easy: it is just necessary to divide $\Delta$ by the number of steps $n_s$:
$$\frac{52.9568^\circ}{5485\ \mathrm{steps}} = 0.00966\ ^\circ/\mathrm{step}$$
With simple math it is possible to calculate the theoretical stepping angle and the relative error:
$$\theta_{th} = \frac{1.8^\circ \cdot \frac{1}{2}}{90} = 0.01^\circ$$
$$\varepsilon = \frac{0.01 - 0.00966}{0.01} \times 100 = 3.431\%$$
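The same data reduction can be written in a few lines of code; this is only a sketch of the arithmetic above (the names are ours, not part of the acquisition software), and the printed error differs slightly from the 3.431% quoted above because no intermediate rounding is applied:

#include <cmath>
#include <cstdio>

int main() {
    // First data set: two absolute orientations measured by the camera.
    double measure1 = 35.1457;   // [deg]
    double measure2 = 88.1025;   // [deg]
    long   nSteps   = 5485;      // micro-steps executed between the two shots

    double delta    = measure2 - measure1;   // rotation actually performed
    double avgStep  = delta / nSteps;        // average micro-step amplitude
    double thStep   = 1.8 * 0.5 / 90.0;      // theoretical: 1.8 deg x 1/2, gear 1:90
    double relError = std::fabs(thStep - avgStep) / thStep * 100.0;

    std::printf("avg = %.5f deg/step, theoretical = %.5f deg/step, error = %.3f %%\n",
                avgStep, thStep, relError);
    return 0;
}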
As it is possible to see, the data are quite reliable (see Tab. 5.3), which means that the experimental setup is able to detect what we wanted to measure. As already mentioned, in the following data analysis we will underline the boundaries of the system. In any case, the data prove that the stepper is a good solution to implement and integrate into the LiDAR project, since it is possible to obtain small and accurate rotations. Before showing the second set of measurements, it is worth presenting some diagrams, to give an immediate idea of the data set. The following diagrams (Fig. 5.13 and Fig. 5.14) show the correlation between the average micro-step and the total number of steps. In this way it is easy to see that the asymptotic value is quite close to the expected value (the theoretical step, as computed before). The second diagram (Fig. 5.14) shows a zoomed area close to the origin of the axes. The system turns out to be inaccurate for small rotations and small angles, especially for a number of steps smaller than 100: when the rotation is smaller than 1°, the experimental setup struggles to detect the orientation of the fiber. The next step is to present the second data set, to see whether the data are confirmed, and then to propose an explanation for the inaccuracy of the measurements. As said, the second set has been acquired setting the micro-stepping factor to 1/4 (Tab. 5.4); the relative diagram is reported in Fig. 5.15. In this case too the experiment shows an inability to detect very small rotations, but asymptotically it reaches the expected value of 0.005°.
5.3.2 Conclusions
• The data show that the system is reliable and allows us to check what we wanted to check: the accuracy of a stepper motor. This was the initial goal;
• The values are quite similar to the ones we expected to find, at least for a sufficiently big number of steps. It is possible to infer that, when the rotation of the stage is smaller than 1°, the relative error is huge. A possible explanation is related to the averaging operation: with small angles the error is split over a small number of steps, while with bigger angles the error related to the detection of the carbon fiber orientation is split over a big number of steps. This brings us to think that the algorithm used to identify the carbon fiber pattern could be revised to become more robust and reliable: it has proved sufficiently accurate with quite wide rotations, but not with small or very small ones;
• It is very important to set the camera with the right diaphragm aperture and focus, and to increase the luminance contrast as much as possible:
$$C_w = \frac{L_s - L_b}{L_b}$$
where $C_w$ indicates the luminance contrast, $L_s$ the luminance of the target object and $L_b$ the luminance of the background;
• It is also important to pay attention to the positioning of the carbon fiber on the metallic plate. Vibrations or accidental collisions (with other structural parts) can modify its position and give wrong results;
• The quality of the carbon fiber used is relevant as well. Generally, when the carbon fiber is flexible and malleable it is more difficult to recognize the pattern; that is why it is recommended to partially activate the resin through heat, UV light or other equivalent methods;
• The adoption of stepper technology has proved to be the right solution to improve the density of the LiDAR point cloud. By coupling ad-hoc electrical driving and mechanical systems it is possible to obtain densities close to one million points while guaranteeing a high level of accuracy.
Chapter 6
Conclusions
This last chapter is essentially a recap of what has been treated and presented in this work. The main goal of the project was to develop a versatile LiDAR platform able to scan the surrounding environment with decent accuracy and point cloud quality. We repeat once more that quality of the point cloud actually means density: the denser a point cloud is, the more accurate the final representation. The main goal was to meet these requirements while dealing with a constraint on costs: the final platform should cost roughly 250 dollars. The following list recaps the whole process, with some conclusions and notes added:
1. We started from the basic knowledge, from scratch, studying the single components and parts of the project, learning the theory and trying to figure out how to solve a complex problem such as the one we had in input;
2. The LiDAR v1 has then been designed and manufactured. The development of this platform has been challenging and required us to put together broad knowledge and multidisciplinary skills, basically related to servo motor control, Arduino servo management and integration, the LiDAR sensor and other various equipment. Moreover, the project required us to develop from scratch a brand new visualization environment that shows in real time the point cloud representing the virtualization of the surrounding reality. This environment has been written on a Java platform called Processing v3 (a visual image processing system);
then, the most efficient and profitable solutions have been adopted in version 2;
5. Talking about the LiDAR prototype and its possible applications, the most immediate is the 3D virtual scan and point cloud representation of interiors (what we did in the test phase). With further improvements and updates, the machine could be used as a low-cost platform applicable to: