
NAME: Ekeneme Mac-anthony Ifesinachi

REG NO: 20201209423


DEPT: MECHATRONICS ENGINEERING

Assignment on MCE 523 (Mobile Robotics)
QUESTION 1:
Discuss the theory and working principle of the LiDAR sensor

INTRODUCTION
LiDAR (Light Detection and Ranging) is a remote sensing technology that uses laser pulses to
measure distances and create precise 3D maps of objects, surfaces, or environments. It is
widely used in applications such as autonomous vehicles, robotics, topographic mapping, and
atmospheric studies.

THEORY OF OPERATION


LiDAR works on the principle of Time-of-Flight (ToF), where a laser emits pulses of light that
travel to an object and reflect back to the sensor. The system measures the time taken for the
light to return and calculates the distance using:

Distance = (c × t) / 2

where:

c = speed of light (≈ 3 × 10⁸ m/s)

t = time taken for the light pulse to return


The division by 2 accounts for the round-trip travel of the light pulse.
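
As a minimal illustration of this relation (a sketch with a made-up echo time, not tied to any particular LiDAR device or driver), the calculation can be expressed as:

```python
# Minimal sketch of the time-of-flight distance calculation described above.
# The echo time used in the example is illustrative, not from a real sensor.

SPEED_OF_LIGHT = 3.0e8  # metres per second (approximate)

def tof_distance(echo_time_s: float) -> float:
    """One-way distance for a measured round-trip echo time."""
    return SPEED_OF_LIGHT * echo_time_s / 2.0

# Example: a pulse that returns after 66.7 nanoseconds
print(tof_distance(66.7e-9))  # ~10 metres
```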

WORKING PRINCIPLE
A LiDAR system consists of the following main components:
●​ Laser Emitter:
Emits laser pulses, typically in the infrared or ultraviolet spectrum.
The pulse rate can range from thousands to millions of pulses per second.

●​ Scanner & Optics


Uses rotating mirrors or MEMS (Micro-Electro-Mechanical Systems) to direct laser beams
across a wide field of view.
Determines the angle of the returning beam, helping construct a 3D map.

● Photodetector & Receiver


Captures the reflected light and measures the time delay.
Can use photodiodes or avalanche photodiodes (APDs) for high sensitivity.

●​ Processing Unit
Uses algorithms to calculate distance, detect objects, and generate 3D point clouds.
A Schematic Diagram of a LiDAR Sensor

Citations:
Reference: Zhang, J. & Singh, S. (2017). "LOAM: Lidar Odometry and Mapping in
Real-time." Robotics: Science and Systems.

Source: Texas Instruments. "Understanding LiDAR Technology: Operation, Components, and Applications."

A LIDAR SENSOR

QUESTION 2:
Write a comprehensive review of:
● The Wheel/Motor Sensor
● The Ground-Based Beacon
● The GPS
● The Active Ranging Sensors, and
● The Optical Encoders
Include citations and diagrams.

1) WHEEL/MOTOR SENSOR
Wheel and motor sensors monitor wheel speed, position, and torque in robotic and
automotive applications. They are critical in autonomous navigation, vehicle dynamics control,
and motion tracking.
Types and Working Principles
1. Hall Effect Sensor:
-​ Uses a magnetic field to detect rotational movement.
-​ Outputs a voltage proportional to wheel/motor speed.

2. Rotary Encoder:
-​ Measures angular position or speed by detecting marks on a rotating disk.
- Can be incremental (counts pulses) or absolute (provides exact position); a minimal counting sketch follows this list.

3. Tachometer
-​ Measures rotational speed (RPM) using an optical or magnetic pickup.
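
As referenced above, here is a minimal sketch of how incremental pulse counts from a rotary encoder can be turned into wheel odometry; the resolution and wheel radius are assumed illustrative values, not from a specific device.

```python
import math

# Sketch: converting incremental encoder pulse counts into wheel distance and
# speed for odometry. Resolution and wheel radius are assumed values.

PULSES_PER_REV = 1024   # encoder pulses per wheel revolution (assumed)
WHEEL_RADIUS_M = 0.05   # wheel radius in metres (assumed)

def wheel_odometry(pulse_count: int, dt_s: float) -> tuple[float, float]:
    """Return (distance travelled in m, speed in m/s) over the interval dt_s."""
    revolutions = pulse_count / PULSES_PER_REV
    distance = revolutions * 2.0 * math.pi * WHEEL_RADIUS_M
    return distance, distance / dt_s

# Example: 512 pulses counted over a 0.1 s interval
print(wheel_odometry(512, 0.1))  # ~(0.157 m, 1.57 m/s)
```

As noted under Limitations below, wheel slip makes such dead-reckoned estimates drift, so in practice they are fused with other sensors.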

Applications
●​ Autonomous vehicles (wheel speed measurement)
●​ Robotic navigation (odometry)
●​ Electric motors (torque and speed control)

Advantages
●​ High precision in motion tracking
●​ Works in various environments
●​ Compatible with different motor types

Limitations
●​ Wheel slip can affect accuracy
●​ Magnetic interference affects Hall sensors

A Schematic Diagram of a Wheel/Motor Sensor


A Wheel/Motor Sensor
Citations:
●​ Reference: Bosch Automotive Handbook (10th Edition). Robert Bosch GmbH.
●​ Source: Honeywell. "Hall Effect Sensor Application Notes."

2) GROUND-BASED BEACON (GBB)


A Ground-Based Beacon (GBB) is a stationary device placed on or near the ground that
emits signals for positioning, navigation, tracking, or communication purposes. It acts as a fixed
reference point that mobile agents (drones, robots, aircraft, autonomous vehicles) detect to
determine their location or navigate a defined area.
The beacon continuously or intermittently transmits signals such as:
●​ Radio Frequency (RF)
●​ Infrared (IR)
●​ Ultrasonic
●​ Laser pulses
●​ Light Emissions
The mobile system uses the received signal strength, angle of arrival, or time-of-flight to
calculate distance or location relative to the beacon.
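
To make this positioning idea concrete, the sketch below estimates a 2D position from measured distances to three fixed beacons using simple least-squares trilateration; the beacon coordinates and ranges are hypothetical values, not from any specific system.

```python
import numpy as np

# Sketch of 2D trilateration from ranges to three fixed ground beacons.
# Beacon positions and measured ranges are made-up illustrative values.

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions (m)
ranges = np.array([5.0, 8.06, 6.71])                        # measured distances (m)

def trilaterate(beacons: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Linearise the range equations against the first beacon, solve least squares."""
    x0, y0 = beacons[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(ranges[0]**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    position, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return position

print(trilaterate(beacons, ranges))  # estimated (x, y) ~ [3.0, 4.0]
```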

The applications of Ground-Based Beacons are as follows:


●​ Aerospace / UAV Operations
-​ UAV precision landing
-​ Drone localization in GPS-denied zones
-​ Flight path guidance

●​ Autonomous Vehicles & Robotics;


-​ Indoor navigation and mapping
-​ Robotic ground swarm control
-​ Warehouse robot localization

● Aviation and Marine:


- Airplane instrument landing systems
- Marine navigation aids and hazard marking

●​ Industrial / Mining / Construction


-​ Automated equipment localization
-​ Asset tracking
-​ Tunnel navigation aids

●​ Search and Rescue


- Beacon placement in disaster zones for locating survivors or guiding rescue drones

The advantages of Ground-Based Beacons are:


●​ High Accuracy: Especially in short-range or GPS-denied environments
●​ Real-time Positioning: Continuous location updates
●​ Cost-Effective: Simple beacon infrastructure reduces complexity
●​ Versatility: Can use RF, IR, sound, or optical signals
●​ Scalable: Multiple beacons can form a coverage network

The limitations of Ground-Based Beacons include:


●​ Line-of-Sight Dependency: Optical or IR beacons need unobstructed views
●​ Signal Interference: RF beacons are prone to interference in crowded spectrum
environments
●​ Limited Range: Ground-based systems may not cover large outdoor areas
●​ Environment Sensitivity: Weather or dust can affect optical/IR beacons
●​ Maintenance: Physical beacons need power sources and periodic maintenance

The technologies enabling Ground-Based Beacons are as follows:


●​ Radio Transmitters (VHF, UHF)
●​ Infrared LEDs and Receivers
●​ Ultrasonic Transducers
●​ Laser Rangefinders
●​ Time-of-Flight (ToF) Sensors

Future trends include:


●​ Integration with AI and ML for adaptive signal control
●​ Use in autonomous vehicle V2X (Vehicle-to-Everything) communication
●​ Miniaturization and low-power IoT-enabled beacon systems
●​ Enhanced accuracy with 5G-based ground beacons
A Schematic Diagram of a Ground-Based Beacon

3) GPS (GLOBAL POSITIONING SYSTEM)


GPS provides global location data by receiving signals from satellites. It helps in autonomous
navigation, geolocation, and mapping.

Working Principle:
- A GPS receiver receives signals from at least four satellites.
- Uses trilateration (range-based positioning) to determine position.
- Time delays in signal arrival are used to calculate the distance to each satellite (a minimal sketch of this pseudorange idea follows).
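
A minimal sketch of the pseudorange idea behind this; the transmit/receive times are illustrative, and a real receiver also solves for its own clock bias, which is why the fourth satellite is needed.

```python
# Sketch of the GPS pseudorange calculation: the apparent distance to a
# satellite is the signal travel time multiplied by the speed of light.
# The times below are illustrative values only.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pseudorange(transmit_time_s: float, receive_time_s: float) -> float:
    """Apparent distance to one satellite from the signal travel time."""
    return SPEED_OF_LIGHT * (receive_time_s - transmit_time_s)

# Example: a signal that took about 70 milliseconds to arrive
print(pseudorange(0.000, 0.070))  # ~20,985 km, a plausible satellite range
```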

Applications:
-​ Autonomous vehicle navigation
-​ Mapping and surveying
-​ Military and aerospace applications

Advantages
●​ Works worldwide with high accuracy
●​ Passive system (does not require transmission)
●​ Can integrate with other sensors for better precision

Limitations
●​ Signal blockage in tunnels, dense forests, or urban areas
●​ Requires open sky for best performance
●​ Affected by atmospheric conditions
A Schematic Diagram of the Global Positioning System (GPS)
Reference: Misra, P. & Enge, P. (2006). Global Positioning System: Signals, Measurements,
and Performance.

Source: National Coordination Office for Space-Based PNT (www.gps.gov)

4) ACTIVE RANGING SENSOR


Active ranging sensors measure the distance to objects by emitting a signal and analysing its reflection. They are commonly used in robotics, automotive sensing, and industrial applications.

Types and Working Principles


●​ LiDAR (Light Detection and Ranging):
Emits laser pulses and measures Time-of-Flight (ToF).
●​ Radar (Radio Detection and Ranging):
Uses radio waves for long-range distance measurement.
●​ Sonar (Sound Navigation and Ranging):
Uses sound waves to detect underwater objects (see the ranging sketch after this list).
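
The same time-of-flight relation used for LiDAR applies to ultrasonic/sonar ranging, only with the speed of sound instead of the speed of light. A minimal sketch with an assumed echo time:

```python
# Sketch of ultrasonic (sonar-style) ranging: same time-of-flight idea as
# LiDAR, but using the speed of sound. The echo time is an illustrative value.

SPEED_OF_SOUND_AIR = 343.0  # metres per second at roughly 20 °C

def sonar_distance(echo_time_s: float) -> float:
    """One-way distance for a measured round-trip ultrasonic echo."""
    return SPEED_OF_SOUND_AIR * echo_time_s / 2.0

# Example: an echo received 11.7 milliseconds after the ping was emitted
print(sonar_distance(0.0117))  # ~2.0 metres
```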

APPLICATIONS
-​ Autonomous vehicles (LiDAR for 3D mapping)
-​ Military and aerospace (Radar for tracking)
-​ Marine navigation (Sonar for underwater detection)

Advantages
●​ High precision and real-time data
●​ Works in various environments (e.g., Radar in fog, Sonar underwater)
●​ Suitable for autonomous navigation

Limitations
●​ LiDAR is expensive and sensitive to weather
●​ Sonar has limited range in air
●​ Radar can be affected by interference
A Schematic Diagram of an Active Ranging Sensor
Reference: Groves, P.D. (2013). Principles of GNSS, Inertial, and Multisensor Integrated
Navigation Systems.

Source: Texas Instruments. "Ultrasonic Sensing: Operation and Applications.”

5) OPTICAL ENCODERS
Optical encoders measure position, speed, and rotation using light detection through an
encoded disk. They are widely used in robotics and motor control.

Working Principle
- An LED light source shines through a rotating disk with markings.
-​ A photodetector counts the number of interruptions to determine position or speed.

There are two main types:
- Incremental Encoder: Counts pulses to track position (see the decoding sketch below).
-​ Absolute Encoder: Provides a unique position value at every step.
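
As a minimal sketch of how an incremental (quadrature) encoder's two offset channels are decoded into a signed position count; the signal samples are synthetic, not readings from a real device.

```python
# Sketch of quadrature decoding for an incremental optical encoder: the two
# offset channels (A and B) give both a count and the direction of rotation.

def decode_quadrature(samples: list[tuple[int, int]]) -> int:
    """Accumulate a signed position count from successive (A, B) logic levels."""
    order = [(0, 0), (1, 0), (1, 1), (0, 1)]  # one electrical cycle, forward
    position = 0
    prev = samples[0]
    for cur in samples[1:]:
        if cur == prev:
            continue
        step = (order.index(cur) - order.index(prev)) % 4
        position += 1 if step == 1 else -1 if step == 3 else 0  # 2 = invalid, skip
        prev = cur
    return position

# Example: two forward steps followed by one reverse step
print(decode_quadrature([(0, 0), (1, 0), (1, 1), (1, 0)]))  # 1
```

A real driver would typically do this decoding in hardware or an interrupt routine, but the transition logic is the same.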

APPLICATIONS
●​ Motor speed control
●​ Robot arm positioning
●​ CNC machines and automation

ADVANTAGES
●​ High accuracy and resolution
●​ Fast response time
●​ Works well in industrial environments

LIMITATIONS
●​ Sensitive to dirt and dust
●​ Requires precise alignment
●​ High-resolution encoders can be expensive.
A Schematic Diagram of an Optical Encoder
Reference: HEIDENHAIN Corporation. "Basics of Rotary Encoders: Overview of Measuring
Principles."

Source: Rockwell Automation. "Optical Encoders – Principles and Applications.”

QUESTION 3:
Give a brief comparison between the human vision system and the camera

The PIN-HOLE THEORY states that; “When light rays from an object pass through a
small hole, they form a real and inverted image on the opposite side”. Both the human
vision system and the camera operate based on this principle, though each has specialized
mechanisms for controlling light and focusing the image.
Comparison:
In a camera, light enters through a small adjustable opening called the aperture. The size of
this aperture controls the amount of light reaching the camera’s image sensor or film. Inside the
camera, a lens system focuses the incoming light rays, forming a clear image on the sensor.
This image is real and inverted, but the camera's software or internal systems adjust it for
proper display.
SIMILARLY, in the human vision system, light enters the eye through the pupil, which acts
like the pin-hole. The size of the pupil is controlled by the iris, adjusting automatically depending
on the brightness of the surroundings. The eye's lens, with the help of ciliary muscles, changes
shape to focus light directly onto the retina, which serves as the image formation surface similar
to a camera sensor. Like in the camera, the image formed on the retina is real and inverted.
However, the human brain processes this inverted image and interprets it as upright.

IN CONCLUSION, both the camera and the human eye work on the pin-hole principle:
allowing light to pass through an opening, focusing it, and forming an image on a sensitive
surface. The key difference is that the camera corrects the inverted image electronically, while
the human brain naturally adjusts it during perception.
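
The shared pin-hole geometry can be written as a simple projection: a point at height X and depth Z maps to an inverted image point at x = -f·X/Z, where f is the distance from the aperture to the image plane. A minimal sketch with assumed values:

```python
# Sketch of the pinhole projection underlying both the camera and the eye:
# a point at height X and distance Z from the aperture maps to an inverted
# image point at x = -f * X / Z. The values below are illustrative.

def pinhole_project(X_m: float, Z_m: float, focal_length_m: float) -> float:
    """Image-plane coordinate of a point at height X_m and depth Z_m."""
    return -focal_length_m * X_m / Z_m  # negative sign: the image is inverted

# Example: a 1.8 m tall feature 10 m away, imaged with a 35 mm focal length
print(pinhole_project(1.8, 10.0, 0.035))  # ~-0.0063 m (6.3 mm, inverted)
```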
QUESTION 4:
A schematic of the operation of the CCD

The CCD operation sequence is as follows (a numerical sketch of the signal chain appears after the steps):

●​ Light Entry:
Light from the scene passes through the camera lens and strikes the CCD sensor.

●​ Photon to Electron Conversion (Photoelectric Effect):


Each pixel in the CCD is a photodiode that absorbs incoming photons and converts them into
electrical charges (electrons).
The amount of charge is proportional to the intensity of the light hitting each pixel.

●​ Charge Storage:
The electrons generated are stored in potential wells created by applying voltages to
electrodes above each pixel.
Each pixel holds a packet of charge representing the light intensity at that location.

●​ Charge Transfer (Coupling):


Once exposure is complete, the charges are shifted row by row (like a bucket brigade)
towards the readout register.
The process is controlled by clock pulses applied to the electrodes.

●​ Readout Register:
The final row of charges enters a horizontal shift register, where each pixel’s charge is
transferred sequentially to an amplifier.

●​ Charge to Voltage Conversion:


The amplifier converts the charge packets into voltage signals.
This voltage is directly proportional to the amount of light captured by each pixel.

●​ Signal Processing and Output:


The analog voltage signals are processed and converted to digital values by an
Analog-to-Digital Converter (ADC).
The digital image is then formed and stored or displayed.
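
To tie these steps together, here is a minimal numerical sketch of the signal chain described above (photons → stored electrons → voltage → digital number); the quantum efficiency, full-well capacity, gain, and ADC parameters are assumed illustrative values, not those of any particular CCD.

```python
# Sketch of the CCD signal chain: photons -> stored electrons -> voltage ->
# digital value. All parameters are assumed, illustrative values.

FULL_WELL_ELECTRONS = 20_000   # pixel saturation capacity (assumed)
QUANTUM_EFFICIENCY = 0.6       # fraction of photons converted (assumed)
VOLTS_PER_ELECTRON = 5e-6      # output amplifier gain (assumed)
ADC_BITS = 12
ADC_FULL_SCALE_V = 0.1

def ccd_pixel_value(photons: int) -> int:
    """Digital number produced by one pixel for a given photon count."""
    electrons = min(int(photons * QUANTUM_EFFICIENCY), FULL_WELL_ELECTRONS)
    voltage = electrons * VOLTS_PER_ELECTRON
    code = int(voltage / ADC_FULL_SCALE_V * (2**ADC_BITS - 1))
    return min(code, 2**ADC_BITS - 1)

# Example: a moderately bright pixel receiving 10,000 photons during exposure
print(ccd_pixel_value(10_000))  # ~1228 out of 4095
```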
