Robotics and Automation
INTRODUCTION
LiDAR (Light Detection and Ranging) is a remote sensing technology that uses laser pulses to
measure distances and create precise 3D maps of objects, surfaces, or environments. It is
widely used in applications such as autonomous vehicles, robotics, topographic mapping, and
atmospheric studies.
Distance = (c * t) / 2
where:
c = the speed of light (approximately 3 × 10^8 m/s)
t = the round-trip travel time of the laser pulse
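As a minimal illustration of this time-of-flight relation, the Python sketch below computes a distance from an assumed (hypothetical) round-trip pulse time:

# Minimal time-of-flight calculation for a single LiDAR pulse.
# The round-trip time used below is a made-up illustrative value.

C = 3.0e8  # approximate speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance = (c * t) / 2, since the pulse travels out and back."""
    return C * round_trip_time_s / 2.0

t = 200e-9  # 200 ns round trip (hypothetical measurement)
print(f"Distance: {tof_distance(t):.2f} m")  # -> Distance: 30.00 m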
WORKING PRINCIPLE
A LiDAR system consists of the following main components:
● Laser Emitter:
Emits laser pulses, typically in the infrared or ultraviolet spectrum.
The pulse rate can range from thousands to millions of pulses per second.
● Processing Unit:
Uses algorithms to calculate distance, detect objects, and generate 3D point clouds (see the sketch below).
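As a rough sketch of the point-cloud generation step, the Python snippet below converts hypothetical (range, azimuth, elevation) readings from a scanning LiDAR into Cartesian points; the readings themselves are invented for illustration:

import math

def to_cartesian(r, azimuth_rad, elevation_rad):
    """Spherical-to-Cartesian conversion for one LiDAR return."""
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# Three hypothetical returns: (range in metres, azimuth, elevation).
readings = [(10.0, 0.0, 0.0),
            (10.0, math.radians(45), 0.0),
            (8.0, math.radians(90), math.radians(3))]
point_cloud = [to_cartesian(r, az, el) for r, az, el in readings]
for p in point_cloud:
    print(tuple(round(c, 2) for c in p))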
A Schematic Diagram of a LiDAR Sensor
Citations:
Reference: Zhang, J. & Singh, S. (2017). "LOAM: Lidar Odometry and Mapping in
Real-time." Robotics: Science and Systems.
A LIDAR SENSOR
QUESTION 2:
Write a comprehensive review of:
● The wheel/motor sensor
● The ground-based sensor
● The GPS
● The active ranging sensors, and
● The optical encoders
Include citations and diagrams.
1) WHEEL/MOTOR SENSOR
Wheel and motor sensors monitor wheel speed, position, and torque in robotic and
automotive applications. They are critical in autonomous navigation, vehicle dynamics control,
and motion tracking.
Types and Working Principles
1. Hall Effect Sensor:
- Uses a magnetic field to detect rotational movement.
- Outputs a voltage proportional to wheel/motor speed.
2. Rotary Encoder:
- Measures angular position or speed by detecting marks on a rotating disk.
- Can be incremental (counts pulses) or absolute (provides exact position); a pulse-to-speed conversion sketch follows this list.
3. Tachometer:
- Measures rotational speed (RPM) using an optical or magnetic pickup.
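As a minimal sketch of how pulses from a Hall effect sensor or incremental encoder become a speed estimate, the Python function below converts a pulse count taken over a short sampling interval into RPM and linear wheel speed; the pulses-per-revolution and wheel radius are assumed example values:

import math

PULSES_PER_REV = 20     # assumed sensor resolution (pulses per wheel revolution)
WHEEL_RADIUS_M = 0.05   # assumed wheel radius in metres

def wheel_speed(pulse_count: int, interval_s: float):
    """Convert pulses counted in an interval to RPM and linear speed (m/s)."""
    revs = pulse_count / PULSES_PER_REV
    rpm = revs / interval_s * 60.0
    linear = revs * 2.0 * math.pi * WHEEL_RADIUS_M / interval_s
    return rpm, linear

rpm, v = wheel_speed(pulse_count=50, interval_s=0.1)
print(f"{rpm:.0f} RPM, {v:.2f} m/s")  # -> 1500 RPM, 7.85 m/s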
Applications
● Autonomous vehicles (wheel speed measurement)
● Robotic navigation (odometry; a differential-drive pose-update sketch follows this list)
● Electric motors (torque and speed control)
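Since odometry is listed as an application, here is a minimal differential-drive pose update that integrates incremental left/right wheel distances (obtained from the wheel sensors) into a robot position and heading. The wheel base and the sample increments are assumed values, and wheel slip is ignored:

import math

WHEEL_BASE_M = 0.3  # assumed distance between the two drive wheels

def update_pose(x, y, theta, d_left, d_right):
    """Dead-reckoning update from incremental left/right wheel travel (metres)."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE_M
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

pose = (0.0, 0.0, 0.0)
for step in [(0.10, 0.10), (0.10, 0.12), (0.10, 0.12)]:  # hypothetical encoder increments
    pose = update_pose(*pose, *step)
print(tuple(round(v, 3) for v in pose))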
Advantages
● High precision in motion tracking
● Works in various environments
● Compatible with different motor types
Limitations
● Wheel slip can affect accuracy
● Magnetic interference affects Hall sensors
3) THE GPS (GLOBAL POSITIONING SYSTEM)
The Global Positioning System (GPS) is a satellite-based navigation system that provides position, velocity, and time information to a receiver anywhere on Earth.
Working Principle:
- A GPS receiver gets signals from at least four satellites.
- Uses trilateration (distances to several satellites) to determine position.
- Time delays in signal arrival are used to calculate the distance to each satellite (see the sketch after this list).
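As a simple illustration of the time-delay idea, the Python sketch below converts hypothetical signal travel times from four satellites into range estimates; a real receiver would then solve for its position (and its own clock error) from these ranges:

C = 299_792_458.0  # speed of light in m/s

def range_from_delay(travel_time_s: float) -> float:
    """Distance to a satellite = signal travel time * speed of light."""
    return travel_time_s * C

# Hypothetical one-way travel times (seconds) from four satellites.
delays = [0.0715, 0.0702, 0.0689, 0.0731]
ranges_km = [range_from_delay(t) / 1000.0 for t in delays]
print([round(r) for r in ranges_km])  # approximate ranges in kilometres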
Applications:
- Autonomous vehicle navigation
- Mapping and surveying
- Military and aerospace applications
Advantages
● Works worldwide with high accuracy
● Passive system (the receiver only listens and does not transmit)
● Can integrate with other sensors for better precision
Limitations
● Signal blockage in tunnels, dense forests, or urban areas
● Requires open sky for best performance
● Affected by atmospheric conditions
A Schematic Diagram of the Global Positioning System (GPS)
Reference: Misra, P. & Enge, P. (2006). Global Positioning System: Signals, Measurements,
and Performance.
4) ACTIVE RANGING SENSORS
Active ranging sensors (such as LiDAR, Radar, and Sonar) emit energy in the form of light, radio waves, or sound and measure the reflected signal to determine the distance to an object.
APPLICATIONS
- Autonomous vehicles (LiDAR for 3D mapping)
- Military and aerospace (Radar for tracking)
- Marine navigation (Sonar for underwater detection)
Advantages
● High precision and real-time data
● Works in various environments (e.g., Radar in fog, Sonar underwater)
● Suitable for autonomous navigation
Limitations
● LiDAR is expensive and sensitive to weather
● Sonar has limited range in air
● Radar can be affected by interference
A Schematic Diagram of an Active Ranging Sensor
Reference: Groves, P.D. (2013). Principles of GNSS, Inertial, and Multisensor Integrated
Navigation Systems.
5) OPTICAL ENCODERS
Optical encoders measure position, speed, and rotation using light detection through an
encoded disk. They are widely used in robotics and motor control.
Working Principle
- An LED light source shines through a rotating disk with markings.
- A photodetector counts the number of interruptions to determine position or speed (see the sketch after this list).
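To make the counting idea concrete, the Python sketch below converts an accumulated pulse count from an incremental optical encoder into an angular position, assuming a hypothetical disk with 1024 lines and x4 quadrature decoding:

LINES_PER_REV = 1024      # assumed number of marks on the encoder disk
QUADRATURE = 4            # x4 decoding: both edges of both channels are counted
COUNTS_PER_REV = LINES_PER_REV * QUADRATURE

def angle_degrees(count: int) -> float:
    """Angular position of the shaft from the accumulated pulse count."""
    return (count % COUNTS_PER_REV) * 360.0 / COUNTS_PER_REV

print(angle_degrees(1024))  # -> 90.0 degrees (a quarter turn)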
APPLICATIONS
● Motor speed control
● Robot arm positioning
● CNC machines and automation
ADVANTAGES
● High accuracy and resolution
● Fast response time
● Works well in industrial environments
LIMITATIONS
● Sensitive to dirt and dust
● Requires precise alignment
● High-resolution encoders can be expensive.
A Schematic Diagram of an Optical Encoder
Reference: HEIDENHAIN Corporation. "Basics of Rotary Encoders: Overview of Measuring
Principles."
3) GIVE A BRIEF COMPARISON BETWEEN THE HUMAN VISION SYSTEM AND THE
CAMERA
The PIN-HOLE THEORY states that: “When light rays from an object pass through a
small hole, they form a real and inverted image on the opposite side”. Both the human
vision system and the camera operate based on this principle, though each has specialized
mechanisms for controlling light and focusing the image.
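The pin-hole idea can be written as a simple projection: a point at height X and distance Z from the hole maps to an image coordinate of -f*X/Z, where f is the distance from the hole to the image surface and the minus sign is the inversion described above. A minimal Python sketch with illustrative numbers:

def pinhole_project(X: float, Z: float, f: float) -> float:
    """Image coordinate of a point at height X and depth Z for a pin-hole at the origin.
    The image surface sits a distance f behind the hole, so the image is inverted."""
    return -f * X / Z

# A 2 m tall object 10 m from the hole, image surface 0.05 m behind the hole.
print(pinhole_project(X=2.0, Z=10.0, f=0.05))  # -> -0.01, i.e. a 1 cm inverted image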
The comparison is as follows:
In a camera, light enters through a small adjustable opening called the aperture. The size of this aperture controls the amount of light reaching the camera’s image sensor or film. Inside the camera, a lens system focuses the incoming light rays, forming a clear image on the sensor. This image is real and inverted, but the camera's software or internal systems adjust it for proper display.
SIMILARLY, in the human vision system, light enters the eye through the pupil, which acts
like the pin-hole. The size of the pupil is controlled by the iris, adjusting automatically depending
on the brightness of the surroundings. The eye's lens, with the help of ciliary muscles, changes
shape to focus light directly onto the retina, which serves as the image formation surface similar
to a camera sensor. Like in the camera, the image formed on the retina is real and inverted.
However, the human brain processes this inverted image and interprets it as upright.
IN CONCLUSION, both the camera and the human eye work on the pin-hole principle:
allowing light to pass through an opening, focusing it, and forming an image on a sensitive
surface. The key difference is that the camera corrects the inverted image electronically, while
the human brain naturally adjusts it during perception.
4) A SCHEMATIC OF THE OPERATION OF THE CCD
● Light Entry:
Light from the scene passes through the camera lens and strikes the CCD sensor.
● Charge Generation:
Incident photons free electrons in the silicon of each pixel through the photoelectric effect, in proportion to the light intensity.
● Charge Storage:
The electrons generated are stored in potential wells created by applying voltages to electrodes above each pixel.
Each pixel holds a packet of charge representing the light intensity at that location.
● Charge Transfer:
The stored charge packets are shifted vertically, row by row, toward the readout register.
● Readout Register:
The final row of charges enters a horizontal shift register, where each pixel’s charge is transferred sequentially to an amplifier (a simplified readout simulation follows this list).
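A toy Python simulation of the readout sequence described above is sketched below: rows of charge packets (arbitrary units) are shifted into a horizontal register one at a time, and the register is then clocked out pixel by pixel to the amplifier. The 3x3 frame is invented for illustration:

# Toy CCD readout: shift rows of charge into a horizontal register,
# then clock that register out one pixel at a time. Values are arbitrary units.

frame = [
    [10, 50, 20],
    [30, 80, 40],
    [ 5, 25, 15],
]

def read_out(pixels):
    """Yield charges in the order a CCD would deliver them to the amplifier."""
    rows = [row[:] for row in pixels]
    while rows:
        horizontal_register = rows.pop()      # bottom row enters the readout register
        while horizontal_register:
            yield horizontal_register.pop(0)  # shift one charge packet to the amplifier

print(list(read_out(frame)))  # -> [5, 25, 15, 30, 80, 40, 10, 50, 20]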