
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; Volume 5, Issue III, March 2017
www.ijraset.com
DOI: 10.22214/ijraset.2017.3211
SLAM Using LIDAR for UGV
I. Harish [1], S. Raja [2], J. Gowtham [3], T. M. Jayakumar [4], B. Karthik [5], G. Karthik [6]
[1, 2] Assistant Professors, [3, 4, 5, 6] UG Students, Dept. of ECE,
Sri Shakthi Institute of Engineering and Technology, Coimbatore, TN, India.

Abstract: Simultaneous localization and mapping (SLAM) is the problem of a mobile robot building a map of its environment while estimating its own position within that map at the same time; solving it is central to making a vehicle genuinely intelligent. EKF-SLAM, based on the extended Kalman filter, is the most widely used SLAM algorithm. A LIDAR-based approach reduces positioning error compared with dead reckoning and offers a simpler, more generic model than EKF-SLAM formulations built on a vehicle kinematics model, while placing lower requirements on the hardware acquisition system. Because the algorithm is more robust than traditional EKF-SLAM, it provides a useful reference for SLAM research and opens a new direction for SLAM based on the differential model of vehicle motion.
Keywords: LIDAR, SLAM, autonomous vehicle, EKF-SLAM

I. INTRODUCTION
An unmanned ground vehicle (UGV) is a vehicle that operates while in contact with the ground and without an onboard human
presence. Generally, the vehicle will have a set of sensors to observe the environment, and will either autonomously make decisions
about its behavior or pass the information to a human operator at a different location who will control the vehicle through tele-
operation. The UGV is also known as the land-based counterpart to unmanned aerial vehicles and remotely operated underwater
vehicles. Unmanned robots are being actively developed for both civilian and military use to perform a variety of dull, dirty, and
dangerous activities.
A robot may also be able to learn autonomously. Autonomous learning is the capability to gain new knowledge or skills without outside assistance. Such a robot can:
- adjust its strategies based on its surroundings,
- adapt to its surroundings without outside assistance, and
- develop a sense of ethics regarding mission goals.

II. LIDAR
LIDAR refers to a remote sensing technology that emits intense, focused beams of light and measures the time it takes for the
reflections to be detected by the sensor. This data can be used to compute ranges, or distances, to objects. In this manner, lidar is
analogous to radar (radio detection and ranging), except that it is based on discrete pulses of laser light. The three-dimensional coordinates (e.g., x, y, z or latitude, longitude, and elevation) of the target objects are computed from (1) the time difference between the laser pulse being emitted and returned, (2) the angle at which the pulse was "fired," and (3) the absolute location of the sensor on or above the surface of the Earth.
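
To make steps (1)-(3) concrete, the following minimal Python sketch (our own illustration; the function name, angle conventions, and values are assumptions, not taken from the paper or any vendor API) converts a pulse's two-way travel time and beam angles into a 3D point offset from a known sensor position:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def lidar_point(t_emit, t_return, azimuth, elevation, sensor_pos):
    """Convert one lidar pulse into a 3D point.

    t_emit, t_return : pulse emission/detection times (s)
    azimuth, elevation : beam angles (rad)
    sensor_pos : (3,) absolute sensor position the point is offset from
    """
    # Two-way travel time -> one-way range (step 1)
    rng = C * (t_return - t_emit) / 2.0
    # Beam angles give the direction of the offset (step 2)
    offset = rng * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    # Absolute sensor location anchors the point (step 3)
    return np.asarray(sensor_pos, dtype=float) + offset

# e.g., a pulse returning after ~0.33 us, 30 deg azimuth, level beam
p = lidar_point(0.0, 0.33e-6, np.radians(30), 0.0, sensor_pos=[0, 0, 2.0])
print(p)  # a point roughly 49.5 m from the sensor
```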
There are two classes of remote sensing technologies that are differentiated by the source of energy used to detect a target: passive
systems and active systems. Passive systems detect radiation that is generated by an external source of energy, such as the sun, while
active systems generate and direct energy toward a target and subsequently detect the radiation. Lidar systems are active systems
because they emit pulses of light (i.e. the laser beams) and detect the reflected light. This characteristic allows lidar data to be
collected at night when the air is usually clearer and the sky contains less air traffic than in the daytime. In fact, most lidar data are
collected at night. Unlike radar, lidar cannot penetrate clouds, rain, or dense haze and must be flown during fair weather.

III. ROLE OF LIDAR IN SLAM ALGORITHM


Accuracy is one of the primary reasons for the use of lidar data. Lidar is an accurate, cost-effective method for collecting
topographic elevation data for large areas (Fowler and others, 2007). As a result, determining the required level of data accuracy and
documenting the achieved level is an important part of data collection and its subsequent use. Typically a data set is collected with a
target accuracy value. The vendor can vary flight and instrument parameters to achieve the required accuracy and cost specifications.
Once the data have been collected and processed, they are tested to make sure that the collection and subsequent operations were
successful in meeting the desired specifications. Documenting data accuracy is important to ensure proper and widespread use and to
maximize data utility. Data accuracy is commonly provided in quality assessment documents and in the metadata.
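
As a hedged illustration of such an accuracy test (the numbers below are invented; real assessments follow vendor and agency specifications), a common summary statistic is the vertical root-mean-square error (RMSE) between lidar-derived elevations and independently surveyed check points:

```python
import numpy as np

def vertical_rmse(lidar_z, survey_z):
    """Root-mean-square vertical error between lidar elevations and
    independently surveyed check-point elevations (same units)."""
    lidar_z = np.asarray(lidar_z, dtype=float)
    survey_z = np.asarray(survey_z, dtype=float)
    return float(np.sqrt(np.mean((lidar_z - survey_z) ** 2)))

# e.g., elevations sampled from the lidar data at five surveyed points
print(vertical_rmse([10.12, 9.87, 11.03, 10.44, 9.95],
                    [10.05, 9.90, 11.10, 10.40, 10.02]))  # ~0.06 m
```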

Fig. 1: The UGV.

IV. ACQUIRING IMAGES USING MATLAB TOOL

Fig. 2: Extraction of the image using the MATLAB tool.

Fig. 3: Extraction of the image using the MATLAB tool.

A. Map Learning

Map learning cannot be separated from the localization process, and a difficulty arises when errors in localization are incorporated into the map. This joint problem is referred to as simultaneous localization and mapping (SLAM). An important additional problem is to determine whether the robot is in a part of the environment that has already been stored or one it has never visited. One way to solve this problem is to use electric beacons, near-field communication (NFC), Wi-Fi, visible light communication (VLC), Li-Fi, or Bluetooth.
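
To make the coupling between localization and mapping concrete, here is a minimal, hedged sketch of one EKF-SLAM predict/update cycle in Python (the paper does not give its equations; the unicycle motion model, range/bearing measurement model, and noise values below are standard textbook assumptions). Because the state vector stacks the robot pose and a landmark, correcting the pose estimate also corrects the map:

```python
import numpy as np

# State: [x, y, theta, lx, ly] -- robot pose plus one landmark position.
x = np.array([0.0, 0.0, 0.0, 5.0, 1.0])   # initial guess
P = np.eye(5) * 0.1                        # state covariance
Q = np.diag([0.05, 0.05, 0.02, 0, 0])      # motion noise (landmark static)
R = np.diag([0.1, 0.05])                   # range/bearing measurement noise

def predict(x, P, v, w, dt):
    """EKF prediction with a unicycle (differential-drive) motion model."""
    th = x[2]
    x = x + np.array([v*dt*np.cos(th), v*dt*np.sin(th), w*dt, 0, 0])
    F = np.eye(5)                  # Jacobian of the motion model
    F[0, 2] = -v*dt*np.sin(th)
    F[1, 2] =  v*dt*np.cos(th)
    return x, F @ P @ F.T + Q

def update(x, P, z):
    """EKF update with a range/bearing observation of the landmark."""
    dx, dy = x[3] - x[0], x[4] - x[1]
    q = dx*dx + dy*dy
    r = np.sqrt(q)
    zhat = np.array([r, np.arctan2(dy, dx) - x[2]])  # predicted measurement
    H = np.array([[-dx/r, -dy/r,  0,  dx/r,  dy/r],  # measurement Jacobian
                  [ dy/q, -dx/q, -1, -dy/q,  dx/q]])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    innov = z - zhat
    innov[1] = (innov[1] + np.pi) % (2*np.pi) - np.pi  # wrap bearing
    return x + K @ innov, (np.eye(5) - K @ H) @ P

x, P = predict(x, P, v=1.0, w=0.1, dt=0.1)
x, P = update(x, P, z=np.array([5.0, 0.3]))
```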

B. Path Planning
Path planning is an important issue, as it allows a robot to get from point A to point B. The effectiveness of real-time motion planning depends on the accuracy of the map (or floor plan), on robot localization, and on the number of obstacles. Topologically, the problem of path planning is related to the shortest-path problem of finding a route between two nodes in a graph.
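
Since path planning reduces to a shortest-path search on a graph, a minimal Dijkstra sketch in Python illustrates the idea (the waypoint graph below is an invented example, not from the paper):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph: {node: [(neighbor, cost), ...]}."""
    pq = [(0.0, start, [start])]   # (cost so far, node, path taken)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# A tiny map: nodes are waypoints, weights are travel costs.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A", "D"))  # (4.0, ['A', 'B', 'C', 'D'])
```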

C. Robot Navigation
Outdoor robots can use GPS in much the same way as automotive navigation systems. Indoor robots, by contrast, can navigate from a floor plan instead of a map, combined with wireless localization hardware.
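
As a hedged sketch of the outdoor case (the reference point and the small-area equirectangular approximation are our assumptions), a GPS fix is typically converted into local planar coordinates before it is handed to the planner:

```python
import math

EARTH_R = 6_371_000.0  # mean Earth radius, m

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Approximate local (east, north) offsets in metres from a reference
    fix, using an equirectangular projection (adequate for small areas)."""
    east = math.radians(lon - ref_lon) * EARTH_R * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_R
    return east, north

# e.g., two fixes roughly 100 m apart
print(gps_to_local(11.0269, 76.9559, 11.0260, 76.9550))
```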

V. RESULTS AND DISCUSSIONS


"Lidar is an advanced sensor for driverless vehicles, since it enables highly precise and robust localization across a wide range of conditions," explains Karl Iagnemma, CEO of nuTonomy, a Cambridge, Mass., startup that is currently testing self-driving cars in Singapore. But Iagnemma points out that the size, complexity, and cost of the current generation of lidar sensors are significant obstacles to the commercialization of any technology that depends on them. Many autonomous cars have relied on the HDL-64E lidar sensor from Silicon Valley-based Velodyne, which scans 2.2 million data points in its field of view each second and can pinpoint the location of objects up to 120 meters away with centimeter accuracy. But the sensor itself weighs more than 13 kilograms and costs US $80,000. This year, Velodyne announced the VLP-32A, which offers a 200-meter range in a 600-gram package. With a target cost of $500 (at automotive-scale production), the VLP-32A would be two orders of magnitude cheaper than its predecessor but still too expensive to be integrated into driverless cars intended for the consumer market. A substantial amount of recent academic and industry research has been focused on making lidar sensors smaller, easier to manufacture, and cheaper. At the CES 2016 electronics show, Quanergy Systems, in Sunnyvale, Calif., demonstrated a prototype solid-state lidar sensor designed for driverless cars. It uses an optical phased array to steer laser pulses rather than a rotating system of mirrors and lenses. Quanergy projects that its sensor will cost $250 in volume production, and it should be available to automotive original equipment manufacturers in early 2017. Meanwhile, two startups are working on $100 automotive lidar systems that both say will be released in 2018. Innoviz, in Israel, is promising a "high-definition solid-state lidar" with better resolution and a larger field of view than those in existing sensors. Innoluce, in the Netherlands, is using a microelectromechanical mirror system to scan and steer a laser beam instead of the solid-state approach; its engineers claim it will outperform optical phased arrays in both range and resolution.

Fig. 4: The two-dimensional image produced by the autonomous vehicle.


A complete overview of the environment is gathered by the autonomous vehicle with the help of the SLAM algorithm: the individual scans are stitched together by SLAM into a single map. The result is a complete and accurate map of the surroundings, which is processed and then used by the vehicle for autonomous navigation.
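
As a hedged sketch of that stitching step (assuming each scan arrives with a SLAM-estimated 2D pose; the toy data below are invented), points from each local scan are transformed into the global frame and accumulated into one map:

```python
import numpy as np

def stitch_scans(scans, poses):
    """Merge local 2D lidar scans into one global point set.

    scans : list of (N_i, 2) arrays of points in each scan's local frame
    poses : list of (x, y, theta) SLAM-estimated poses, one per scan
    """
    world_points = []
    for pts, (x, y, th) in zip(scans, poses):
        # Rotate each scan by its pose heading, then translate
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        world_points.append(pts @ R.T + np.array([x, y]))
    return np.vstack(world_points)

# Two toy scans of the same wall, seen from two different poses
scan_a = np.array([[1.0, 0.0], [1.0, 0.5]])
scan_b = np.array([[0.5, 0.0], [0.5, 0.5]])
global_map = stitch_scans([scan_a, scan_b], [(0, 0, 0.0), (0.5, 0, 0.0)])
print(global_map)  # both scans land on the same wall at x = 1.0
```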

VI. CONCLUSION
Autonomous vehicles are the future of transportation. SLAM refined by LIDAR technology is one of the best ways to implement autonomous driving and mapping. LIDAR has the upper hand in accuracy, which helps in accurately measuring the distances to nearby vehicles and objects, and the SLAM algorithm maps the environment for the autonomous vehicle from the data sent by the LIDAR.

REFERENCES
[1] J. Weingarten and R. Siegwart, "EKF-based 3D SLAM for structured environment reconstruction," 2-6 Aug. 2005.
[2] Yangming Li and Edwin B. Olson, "Structural tensors for general purpose LIDAR feature extraction," 15 Aug. 2011.
[3] Gangqiang Zhao and Junsong Yuan, "Curb detection and tracking using 3D-LIDAR scanner," 21 Feb. 2013.
[4] Andrzej Bieszczad, "Identifying landmark cues with LIDAR laser scanner data taken from multiple viewpoints," 10 Dec. 2015.
[5] Akshay A. Mane, Mahesh N. Parihar, Sharad P. Jadhav, and Rahul Gadre, "Data acquisition analysis in SLAM applications," 16 Mar. 2017.
[6] He Mengwen, Eijiro Takeuchi, Yoshiki Ninomiya, and Shinpei Kato, "Robust virtual scan for obstacle detection in urban environments," 8 Aug. 2016.
[7] Thomas Theelen, Vision Research 48 (2008), 2569–2577.
