461 Navigation

Path planning involves navigating from a starting point to a goal while considering various factors. It is shaped by variables such as environmental obstacles, robot capabilities, and success metrics. A wide array of algorithms is used for path planning, ranging from reactive methods that use immediate sensor data to mapping-based approaches that rely on environment representations. Effective path planning considers the environment layout, robot abilities, and goals to determine the most suitable path.


Path Planning

Path planning is a multifaceted challenge that involves navigating from a starting point to a goal amidst various
considerations. Here are some key points to consider:

1. Complexity Beyond a Simple Question: While the question might seem straightforward — "How do I get to my
goal?" — the answer involves numerous variables and considerations that significantly impact the path planning
process.

2. Variables Impacting Path Planning: Visibility of the goal, presence or absence of a map, nature of obstacles
(known, unknown, static, or dynamic), importance of speed versus path smoothness, available computational
resources, and precision of motion control all play vital roles in determining the optimal path.

3. Diverse Collection of Algorithms: Path planning encompasses a wide array of algorithms and strategies. These
algorithms vary in complexity, adaptability, and suitability for different scenarios. Some focus on mapping-based
approaches, while others rely on reactive or heuristic-based methods.

4. Considerations: Environment, Success Metrics, Robot Capabilities: Path planning is deeply influenced by three
primary factors:
- Environment: The nature of the surroundings, presence of obstacles, layout, and dynamics of the environment.
- Success Metrics: Goals or objectives, whether it's reaching the goal swiftly, following a smooth path, or
optimizing for specific criteria.
- Robot Capability: This includes computational power, sensory abilities, precision in movement, and the agility of
the robot or system.

In essence, path planning involves navigating through a complex interplay of environmental factors, objective-based
metrics, and the capabilities of the robot or system. It's not just about finding a route but considering a multitude of
variables to determine the most suitable path given the specific context and constraints of the situation.
Visual Homing (Purely Reactive):

Purely reactive navigation strategies, as described in visual homing, rely solely on immediate sensory input to guide a
system or agent toward a goal without maintaining a map or detailed environmental representation. In this context:

1. Measuring Visual (x, y) Position of Goal:


- The system assesses visual information in real-time to estimate the relative position of the goal in its visual field.
This could involve recognizing visual cues or landmarks representing the goal and determining its approximate (x, y)
position.

2. Moving to Bring Goal to Visual Center:


- Upon perceiving the goal, the system or agent adjusts its orientation and movements to bring the goal into the
center of its visual perception. This reorientation helps in aligning the direction of movement towards the goal.

3. Proportional Control (if the goal is visible) and Random Walk (if not):
- When the goal is within sight, proportional control mechanisms are employed to regulate the system's movements
in direct proportion to the difference between the goal's current position and the desired visual center. This allows for
smooth and accurate movement towards the goal.

- In cases where the goal is not visible or lost from sight, the system might resort to a random walk strategy,
navigating in various directions or exploring the environment without specific guidance until the goal is reacquired or
comes back into view.

These reactive strategies allow the system to adapt its movements based on immediate sensory feedback, making
decisions in real-time without needing a predefined map or extensive prior knowledge of the environment. However,
these approaches might lack the ability to plan ahead or optimize paths based on long-term considerations, relying
instead on instant sensory input to navigate towards the goal.
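
To make this concrete, here is a minimal sketch of one visual-homing control step, assuming a hypothetical camera interface that reports the goal's horizontal pixel position (or None when the goal is out of view) and a set_velocity(forward, turn) actuator; the image width, gain, and speeds are illustrative values:

```python
# Minimal visual-homing sketch; sensor/actuator interfaces are hypothetical.
import random

IMAGE_WIDTH = 640   # assumed camera resolution (pixels)
K_TURN = 0.005      # proportional gain (tuning assumption)

def homing_step(goal_pixel_x, set_velocity):
    """One control step: steer toward the goal if visible, else random walk."""
    if goal_pixel_x is not None:
        # Error = horizontal offset of the goal from the image center.
        error = goal_pixel_x - IMAGE_WIDTH / 2
        turn_rate = -K_TURN * error   # proportional control toward visual center
        forward = 0.3                 # constant forward speed (m/s, assumed)
    else:
        # Goal lost from view: random walk until it is reacquired.
        turn_rate = random.uniform(-1.0, 1.0)
        forward = 0.1
    set_velocity(forward, turn_rate)
```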

Bug-Based Path Planning

Bug-based algorithms, despite their simplicity, offer robust solutions for navigating environments with
obstacles and limited knowledge. Here are some key points:

1. Obstacle Navigation without a Map: Bug algorithms excel in scenarios where robots encounter obstacles but
lack a detailed map of the environment. They rely on local sensor data and simple behaviors rather than an intricate
map representation.

2. Global Goal Information, Local Environmental Awareness: Having access to global goal direction or distance
provides a sense of orientation for the robot, enabling it to navigate towards the goal. However, the robot's knowledge
about the environment is restricted to what it can sense locally, without a comprehensive map.

3. Adaptability to Varying Environments: Whether it's an outdoor scenario with a building obstructing the path or
an indoor setting with furniture blocking the way, bug algorithms adapt by leveraging their reactive nature. They use
techniques like visual homing (orienting towards the goal based on visual cues), wall-following (navigating around
obstacles by following their contours), and odometry (estimating position based on movement) to maneuver.

4. Simple yet Effective Behavior-Based Approach: Bug algorithms rely on a set of simple and provable behaviors
that guide the robot. These behaviors, such as trying to move towards the goal using visual cues, following along
obstacles, and keeping track of movement, are combined to achieve effective navigation.

5. No Need for Detailed Mapping: Unlike traditional mapping-based approaches, bug algorithms do not require
building a map of the environment in advance. Instead, they rely on reactive behaviors that respond to immediate
sensory information, allowing for quick and adaptive navigation.
6. Surprising Power in Simplicity: Despite their simplicity, bug-based algorithms can be remarkably powerful in
navigating complex environments. By combining basic behaviors, they enable robots to circumvent obstacles and
progress toward goals, showcasing their versatility and effectiveness.

In summary, bug-based algorithms offer an intuitive, computation-friendly approach to navigation, making them highly
adaptable and surprisingly powerful in scenarios where detailed mapping might not be available or necessary.
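
As an illustration of how few behaviors are involved, below is a minimal Bug2-style state-machine sketch. The robot interface (pose(), obstacle_ahead(), follow_wall(), and the hit-point bookkeeping) is hypothetical, standing in for the visual homing, wall-following, and odometry behaviors described above:

```python
# Minimal Bug2-style sketch; the `robot` interface is a hypothetical stand-in
# for the homing, wall-following, and odometry behaviors named above.
import math

def bug2_step(robot, goal, state):
    """One decision step; returns the (possibly updated) behavior state."""
    x, y, _theta = robot.pose()
    if state == "GO_TO_GOAL":
        if robot.obstacle_ahead():
            robot.remember_hit_point(x, y)   # record where the obstacle was met
            state = "FOLLOW_WALL"
        else:
            robot.turn_toward(goal)          # homing: orient toward the goal
            robot.move_forward()
    elif state == "FOLLOW_WALL":
        robot.follow_wall()                  # keep the obstacle on one side
        # Leave the wall once back on the start-goal line, closer to the goal
        # than the point where the obstacle was first hit.
        if robot.on_start_goal_line(x, y) and \
           math.dist((x, y), goal) < robot.hit_point_distance(goal):
            state = "GO_TO_GOAL"
    return state
```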

Metric / Global Path Planning:

When a robot possesses full knowledge of its environment, including a map and the locations of both the robot and
the goal, the task shifts to finding an optimal path from the robot's position to the goal. This optimization usually
revolves around minimizing distance or time taken to reach the goal. The process of metric or global path planning
involves two main components:

1. Map Representation ("Graph"):


- Feature-based maps: These maps describe the environment based on distinct features like office numbers,
landmarks, or other recognizable points of interest.
- Grid-based maps: The environment is represented as a grid where each cell denotes an area, allowing for
Cartesian or quadtree-based representations. These grids define areas as obstacles or free spaces.
- Polygonal maps: They involve geometric decompositions of the environment into polygons, outlining the
boundaries and obstacles within the space.

2. Path Finding Algorithms:


- Shortest-Path Graph Algorithms: These algorithms work on the map representation to find the shortest path from
the robot's current location to the goal. They include methods like Breadth-First-Search (BFS) and the A* algorithm.

- Breadth-First-Search (BFS): BFS systematically explores all the neighbor nodes at the present depth before
moving on to nodes at the next depth level. It guarantees finding the shortest path on an unweighted graph (see the BFS sketch below).

- A* Algorithm: A* is a popular algorithm for finding the shortest path in a weighted graph. It uses heuristics to
efficiently search and navigate through the map, balancing between moving towards the goal and exploring new
paths based on estimated costs (see the A* sketch below).
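
To ground these two algorithms, here are minimal sketches of each on a 2D grid where free cells are 0 and obstacles are 1; the grid encoding and 4-connected neighborhood are assumptions. First, BFS, which finds a shortest path when every step has equal cost:

```python
# Minimal BFS shortest-path sketch on a 2D grid (free=0, obstacle=1 assumed).
from collections import deque

def bfs_path(grid, start, goal):
    """Return a shortest path (list of cells) on an unweighted grid, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []                 # reconstruct by walking parent pointers
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None                       # goal unreachable
```

A* adds a heuristic (here Manhattan distance, which is admissible on a 4-connected grid) so the search is focused toward the goal:

```python
# Minimal A* sketch on the same grid encoding, with unit step costs.
import heapq

def astar_path(grid, start, goal):
    """Return a minimum-cost path from start to goal, or None."""
    def h(cell):   # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    g = {start: 0}                      # best known cost-so-far per cell
    parent = {start: None}
    frontier = [(h(start), start)]      # priority queue ordered by f = g + h
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                cost = g[cell] + 1
                if cost < g.get(nxt, float("inf")):
                    g[nxt] = cost
                    parent[nxt] = cell
                    heapq.heappush(frontier, (cost + h(nxt), nxt))
    return None
```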

These algorithms leverage the map representation and the specific features of the environment to compute the
optimal path efficiently. They take into account the geometry of the space, the locations of obstacles, and the desired
optimality criteria to plan a path that minimizes distance or time. By employing these methods, robots or autonomous
systems can plan their routes intelligently, considering the layout of the environment and making informed decisions
to reach their goals optimally.

Feature-Based Map Representation → Map Representation → Global Path Planning → Path Planning

Here's a concise note summarizing the concept of topological or landmark-based maps:


Also known as Topological or Landmark-based Maps.

Features Recognizable by the Robot:


- Encompass both natural landmarks (such as corners, doorways, and hallways) and artificial ones (like office door
numbers or specific tags designed for robot recognition).
Gateways: Landmarks serving as decision points (e.g., intersections) guiding directional choices.
Distinguishable Places: Unique and easily identifiable landmarks within the environment aiding in navigation and
orientation.

Graph Representation of the World:


Graph Structure: The environment is represented as a graph connecting these landmarks.
Edges: Represent actual routes or paths between landmarks, detailing how to navigate from one landmark to
another.
Navigation Along Edges: Typically allows for visual or reactive navigation, enabling movement from one landmark
to another.
Attributes of Edges: Can store additional information like distance, time, or other relevant details aiding in path
planning.

Analogy with Google Maps for Humans:


- Relates to the human-readable guidance provided by platforms like Google Maps, which offer turn-by-turn
instructions based on landmarks or points of interest, simplifying navigation for humans.

Topological or landmark-based maps offer a navigational framework for robots, relying on recognizable landmarks
and decision points to guide their movements. This representation simplifies navigation and path planning by focusing
on key identifiable features rather than detailed geometric information.
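
A minimal sketch of such a graph, with landmark nodes and attributed edges, might look like the following; the landmark names, distances, and edge behaviors are illustrative assumptions:

```python
# Minimal topological map sketch: landmarks as nodes, traversable routes as
# attributed edges (names, distances, and behaviors are illustrative).
topological_map = {
    "office_101": {"hallway_A": {"distance_m": 6.0, "behavior": "wall_follow"}},
    "hallway_A": {
        "office_101": {"distance_m": 6.0, "behavior": "wall_follow"},
        "lobby": {"distance_m": 14.5, "behavior": "visual_homing"},
    },
    "lobby": {"hallway_A": {"distance_m": 14.5, "behavior": "visual_homing"}},
}

def neighbors(node):
    """Landmarks reachable from `node`, with edge attributes for planning."""
    return topological_map.get(node, {})

print(neighbors("hallway_A"))
```

Each edge carries the information a reactive behavior needs to traverse it, matching the idea that navigation along edges is visual or reactive.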

Path-Finding Algorithms in Map Representation

Path-finding algorithms are instrumental in navigating a graph-based map representation. Here's a
concise note summarizing these algorithms:

Map Representation as a Weighted Graph:


- All map representations are structured as weighted graphs, where nodes represent locations, and edges between
nodes signify connections or paths with associated weights or costs.

One-Time Computation:
- A significant advantage is that paths need to be computed only once for a given graph representation, reducing
computational overhead.

Algorithm for Computing Shortest Paths:


- The primary goal is to calculate the shortest paths within the graph, determining the optimal route from a starting
point to a destination.
- Waypoint Representation: The resulting path is often represented as a series of waypoints, guiding the robot
along the calculated route.

Single Path Search Algorithms:


- Breadth-First-Search (BFS): Effective for simple graphs, systematically explores neighbor nodes before moving
deeper, suitable for finding the shortest path in unweighted or uniform-cost graphs.
- A* Search: Ideal for large graphs, combines elements of BFS and heuristic information to efficiently navigate
and find the shortest path, considering both cost and estimated distance to the goal.

Gradient Path Algorithms for Multiple Paths:


- Fixed Base Station or Destination: These algorithms aim to discover multiple paths towards a specific fixed
destination, exploring various routes rather than focusing solely on the shortest path.
- Examples: Methods like BFS, Dijkstra's algorithm, and wavefront algorithms, among others, are used to explore
and uncover multiple possible paths towards a specific goal (see the wavefront sketch below).

These algorithms serve diverse purposes, either finding the shortest path from one point to another or exploring
multiple paths towards a fixed destination, catering to various navigation needs within a graph-based map
representation.
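
As a sketch of the gradient idea, a wavefront can be computed by running BFS outward from the fixed destination, labeling every free cell with its cost-to-go; the grid encoding (free=0, obstacle=1) is an assumption:

```python
# Minimal wavefront sketch: BFS from the *destination* labels each free cell
# with its cost-to-go, so any start position can simply descend the gradient.
from collections import deque

def wavefront(grid, goal):
    """Cost-to-go map toward `goal` on a 2D grid (free=0, obstacle=1)."""
    rows, cols = len(grid), len(grid[0])
    cost = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and nxt not in cost:
                cost[nxt] = cost[(r, c)] + 1
                queue.append(nxt)
    # From any cell, repeatedly stepping to the lowest-cost neighbor
    # yields a shortest path to the goal.
    return cost
```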

Localization

Localization in robot navigation refers to the process by which a robot determines its own position within an
environment. It involves the robot's ability to understand and accurately estimate its location relative to its
surroundings, often in the context of a predefined map or in an unknown environment.

Types of Localization

Dead Reckoning:
Dead reckoning, or odometry, enables a robot or autonomous system to estimate its position by continuously tracking
and integrating its internal movements over time. Using wheel encoders or inertial measurement units (IMUs), it
measures parameters like distance traveled, direction changes, and rotations. For instance, a mobile robot equipped
with wheel encoders records the rotations and distances moved by its wheels, calculating its new position relative to
the starting point. However, due to accumulated errors from sensor imprecisions or slippage, dead reckoning can
lead to position drift over time. An example is a robotic vacuum cleaner using dead reckoning to navigate within a
room, estimating its position based on wheel rotations. While efficient for short-term navigation, it might slightly
deviate from its actual position over extended operation periods due to inherent sensor inaccuracies.

This navigation method involves incrementally updating the position by taking discrete "steps" based on measured
movements, like distance and direction changes. Inertial Navigation Systems (INS) exemplify dead reckoning,
incorporating sophisticated sensors such as accelerometers and gyroscopes. These sensors enhance measurements
of instantaneous velocity and orientation changes, compensating for external influences like momentum or external
forces. Initially prevalent in expensive systems like satellites or submarines due to their accuracy, the increasing
availability of low-cost Inertial Measurement Units (IMUs) has made dead reckoning more accessible. However, while
dead reckoning via IMUs provides cost-effective navigation, it remains susceptible to accumulated errors over time
due to sensor imprecisions or external interferences, potentially leading to position drift in extended operations.
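
A minimal odometry sketch for a differential-drive robot is shown below; the wheel radius and wheel base are illustrative assumptions, and the accumulated-error caveat applies exactly as described above:

```python
# Minimal dead-reckoning sketch: integrating wheel-encoder increments for a
# differential-drive robot (wheel radius and baseline are assumed values).
import math

WHEEL_RADIUS = 0.05   # meters (assumed)
WHEEL_BASE = 0.30     # distance between the wheels, meters (assumed)

def odometry_update(x, y, theta, d_left_rad, d_right_rad):
    """Update pose from left/right wheel rotation increments (radians)."""
    d_left = d_left_rad * WHEEL_RADIUS             # arc length of each wheel
    d_right = d_right_rad * WHEEL_RADIUS
    d_center = (d_left + d_right) / 2              # distance moved by the center
    d_theta = (d_right - d_left) / WHEEL_BASE      # change in heading
    x += d_center * math.cos(theta + d_theta / 2)  # integrate at mid-heading
    y += d_center * math.sin(theta + d_theta / 2)
    theta = (theta + d_theta) % (2 * math.pi)
    return x, y, theta    # encoder noise and slippage accumulate over time
```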

Landmark Sensing:
Localization through triangulation involves determining a robot's position by measuring distances to known landmarks
or beacons in its environment. This method utilizes geometric principles to calculate the robot's position based on
distance measurements to multiple landmarks with known locations. For instance, in indoor environments, visual
beacons with known positions can be used; the robot measures the distance to these beacons using sensors like
cameras or laser rangefinders, enabling triangulation to estimate its position. Similarly, in outdoor scenarios, GPS
receivers triangulate signals from multiple satellites to determine the robot's location. Radio or cellular towers and
their signal strengths also provide a basis for triangulation-based localization, allowing the robot to infer its position
relative to these known landmarks by measuring signal strengths from various towers. These methods leverage
geometric principles and known landmark positions to accurately estimate the robot's location within its
environment.

Landmark-based navigation operates in contrast to dead reckoning by utilizing measurements to
external landmarks with known positions. These landmarks can be diverse, ranging from visual markers to radio
towers or GPS satellites. For instance, GPS satellites function as "landmarks" in GPS-based localization. These
satellites continuously transmit messages containing time of transmission and their precise positions. A GPS receiver
calculates distance by measuring signal transmission time (based on the speed of light), enabling it to determine its
distance from each satellite. In 3D space, a GPS receiver determines its position by intersecting spheres around at
least four satellites, finding the point where these spheres intersect, representing the receiver's location in three
dimensions. This method allows for accurate positioning by leveraging known positions of external landmarks.
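
To make the geometry concrete, here is a minimal 2D trilateration sketch that solves for a position from ranges to three beacons with known positions (the beacon layout and ranges are illustrative; GPS performs the analogous computation in 3D):

```python
# Minimal 2D trilateration sketch; beacon positions and ranges are illustrative.
import numpy as np

def trilaterate(beacons, ranges):
    """Least-squares (x, y) from >=3 beacon positions and measured ranges."""
    (x1, y1), r1 = beacons[0], ranges[0]
    A, b = [], []
    # Subtracting the first circle equation from each other one linearizes the
    # system: 2(xi-x1)x + 2(yi-y1)y = r1^2 - ri^2 + xi^2 - x1^2 + yi^2 - y1^2.
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos    # estimated (x, y)

# A point equidistant (~7.07 m) from beacons at (0,0), (10,0), (0,10) is ~(5, 5).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))
```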

State Estimation (uncertainty in motion & sensing):


Probabilistic reasoning methods like Kalman Filters and Particle Filters are vital in estimating a robot's position by
incorporating both motion and sensor data, accounting for uncertainties in navigation. Kalman Filters employ a
recursive mathematical approach to fuse incoming sensor measurements with predictions from motion models. For
instance, in autonomous vehicles, a Kalman Filter combines GPS data with vehicle speed and direction to estimate
the vehicle's position and correct measurement errors over time. Conversely, Particle Filters, also called Monte Carlo
Localization, use a probabilistic sampling approach, representing the robot's possible positions as a collection of
particles. For instance, a robot navigating in a complex environment with noisy sensors can use Particle Filters to
maintain a cloud of particles, assigning higher weights to those aligning with sensor measurements. These particles
evolve based on motion and sensor updates, providing a more robust estimation of the robot's position, especially in
scenarios with high uncertainty or complex environments.

State estimation integrates both motion (like dead reckoning) and sensing (landmarks) considering their inherent
uncertainties. Despite individual errors, combining them harnesses their complementary nature, enhancing overall
accuracy in determining a system's state amidst uncertainty.

Kalman Filter: Kalman Filters harness Gaussian mathematics to model uncertainties, offering a versatile approach
for state estimation beyond just localization. They excel in various applications, like enhancing GPS accuracy in cars
or enabling precise navigation for lawnmowers using beacons. Additionally, in warehouse settings, Kalman Filters
facilitate robust localization for robots navigating amid diverse obstacles. For instance, in cars, a Kalman Filter
combines GPS data with vehicle dynamics, refining position estimation by accounting for sensor noise and vehicle
motion. Similarly, a lawnmower utilizing beacons for localization leverages Kalman Filters to fuse sensor readings,
enabling accurate positioning despite environmental challenges. Warehouse robots employ these filters to optimize
their navigation amidst dynamically changing surroundings, ensuring reliable and efficient operations.

“Ideal for systems with linear dynamics and Gaussian noise. They work well when the system's behavior can
be effectively modeled by linear equations and when uncertainties follow a Gaussian distribution. Kalman
Filters are computationally efficient and are often used in scenarios like sensor fusion in cars (combining
GPS with inertial sensors) or in applications where there's a continuous flow of sensor data.”
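
A minimal one-dimensional sketch of the predict/update cycle is given below; the process noise Q, measurement noise R, and time step are illustrative values, and real localization applies the same cycle to a multi-dimensional state with matrix algebra:

```python
# Minimal 1D Kalman filter sketch: estimating position along a line from a
# velocity command (motion model) and a noisy position sensor (values assumed).
def kalman_1d(x, P, u, z, Q=0.05, R=0.5, dt=0.1):
    """One predict/update cycle. x: estimate, P: variance, u: velocity, z: measurement."""
    # Predict: move according to the motion model; uncertainty grows by Q.
    x = x + u * dt
    P = P + Q
    # Update: blend in the measurement, weighted by the Kalman gain K.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0
for z in [0.11, 0.23, 0.28, 0.42]:    # simulated noisy position readings
    x, P = kalman_1d(x, P, u=1.0, z=z)
print(x, P)                           # estimate converges; variance shrinks
```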

Particle Filter: Particle Filters, also known as Monte Carlo Localization, leverage discrete distributions of "particles" to
model uncertainty, akin to sampling or histograms. They excel in navigating complex and ambiguous environments,
such as a robot exploring a building with a map. In this scenario, a particle filter maintains a set of particles
representing possible robot positions. As the robot moves and gathers sensor data, these particles are adjusted
based on the measurements, allowing for a more accurate estimation of the robot's location. Particularly beneficial in
situations where traditional methods struggle due to environment complexity or ambiguity, particle filters offer a robust
approach for localization and navigation in dynamic and intricate surroundings like indoor environments.

“ Suited for scenarios involving nonlinear and non-Gaussian systems. They excel in complex and ambiguous
environments where traditional methods struggle. Particle Filters maintain a cloud of particles, making them
flexible in representing complex probability distributions. They are advantageous in cases like robot
localization in cluttered indoor environments, handling multimodal distributions or scenarios where the
system dynamics are nonlinear and non-Gaussian.”
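
Below is a minimal one-dimensional Monte Carlo localization sketch; the wrap-around world, noise levels, and direct position measurement are simplifying assumptions, but the move-weight-resample cycle is the core of the method:

```python
# Minimal 1D particle filter sketch; world size and noise levels are assumed.
import math
import random

N, WORLD = 500, 100.0
particles = [random.uniform(0, WORLD) for _ in range(N)]   # unknown start

def pf_step(particles, u, z, motion_noise=0.5, sensor_noise=2.0):
    """Move every particle, weight by sensor agreement, then resample."""
    # Motion update: apply the commanded motion plus noise to each particle
    # (the world wraps around, a simplifying assumption).
    moved = [(p + u + random.gauss(0, motion_noise)) % WORLD for p in particles]
    # Measurement update: weight each particle by the likelihood of the
    # observation z (here, a noisy reading of absolute position).
    weights = [math.exp(-((p - z) ** 2) / (2 * sensor_noise**2)) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

particles = pf_step(particles, u=1.0, z=42.0)
print(sum(particles) / N)   # crude position estimate from the particle cloud
```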
Mapping and Exploration

Mapping: Mapping involves creating a representation or memory of the spatial layout and structure of an
environment as a robot or system navigates through it. It aims to build a comprehensive understanding of the
surroundings, often using sensors and data collected during movement. Mapping is crucial in various fields like
robotics, where it enables tasks such as returning to a home base after completing a task, efficient room cleaning by
robots, or systematically searching for survivors in disaster zones. Additionally, it's pivotal in scenarios like mapping
collapsed structures such as mines or buildings for rescue operations, aiding responders in understanding the layout
to plan effective strategies.

Exploration : On the other hand, exploration deals with the process of navigating and covering an unknown or
unexplored environment systematically. It aims to ensure complete coverage of the space without prior knowledge of
its shape or size. In applications like robotics, exploration is essential for tasks such as efficiently covering an area
during cleaning tasks or systematically searching an unknown environment for survivors. It's also crucial in scenarios
like mapping uncharted territories or exploring hazardous zones, ensuring comprehensive coverage while minimizing
blind spots or missed areas.

Both mapping and exploration are fundamental in various fields, particularly in robotics, disaster response, and
exploration scenarios, facilitating efficient and comprehensive coverage of unknown or unfamiliar environments while
aiding in tasks ranging from navigation to spatial understanding and search and rescue operations.

Occupancy Grid and Sensor Model

An Occupancy Grid provides a structured representation of a map by dividing the environment into grid cells, with
each cell labeled as "occupied," "empty," or "unknown." This grid-based approach facilitates spatial understanding
and navigation for robots or autonomous systems. Occupancy Grids are instrumental in mapping environments,
allowing robots to discern navigable paths, obstacles, or unexplored areas based on the status of each grid cell.

On the other hand, a Sensor Model plays a crucial role in constructing Occupancy Grids by translating raw sensor
measurements into values that determine the occupancy status of each grid cell. Different sensors, such as LIDAR,
depth cameras, or 360-degree vision systems, generate distinct types of data. The Sensor Model's task is to interpret
these sensor readings, varying in type and scope, into meaningful information for the occupancy grid. For instance,
LIDAR or depth cameras provide precise distance measurements, aiding in distinguishing between occupied and
unoccupied grid cells, while a 360-degree vision/ranging system offers a broader perspective but might have different
limitations or accuracy levels in representing the environment within the grid cells. The Sensor Model's design and
calibration depend on the specific sensor types and configurations employed by the robot, ensuring accurate
mapping and navigation within the Occupancy Grid.
How the Sensor Model and Occupancy Grid Work
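
A minimal sketch of this interaction is given below: an inverse sensor model, expressed here as hit and miss probabilities, is folded into each grid cell in log-odds form as a range beam passes through it. The probabilities and the dictionary-based grid are illustrative assumptions:

```python
# Minimal sensor-model sketch: log-odds occupancy updates along one range beam
# (hit/miss probabilities and the dict-based grid are assumed, not prescribed).
import math

P_HIT, P_MISS = 0.7, 0.4
L_HIT = math.log(P_HIT / (1 - P_HIT))     # log-odds increment for "occupied"
L_MISS = math.log(P_MISS / (1 - P_MISS))  # log-odds decrement for "empty"

def update_ray(log_odds, ray_cells):
    """ray_cells: cells from the sensor to the detected obstacle, endpoint last."""
    for cell in ray_cells[:-1]:
        # The beam passed through these cells, so they are likely empty.
        log_odds[cell] = log_odds.get(cell, 0.0) + L_MISS
    end = ray_cells[-1]
    # The beam stopped at the last cell, so it is likely occupied.
    log_odds[end] = log_odds.get(end, 0.0) + L_HIT

def occupancy(log_odds, cell):
    """Convert a cell's accumulated log-odds back into a probability."""
    return 1 - 1 / (1 + math.exp(log_odds.get(cell, 0.0)))

grid = {}
update_ray(grid, [(0, 0), (0, 1), (0, 2), (0, 3)])  # obstacle sensed at (0, 3)
print(occupancy(grid, (0, 1)), occupancy(grid, (0, 3)))
```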
How Frontier-Based Exploration Works

Frontier-Based Exploration operates by identifying frontiers—boundaries between known and unknown areas—in a
grid-based map. A frontier cell is an unknown cell adjacent to at least one known, empty cell. The key concept
involves selecting a frontier cell, often the closest one, and planning a path to explore it. This process repeats until no
more frontier cells exist, indicating completion of the mapping process.

In a finite world, where the grid map represents the entire environment, any algorithm systematically exploring frontier
cells is guaranteed to cover the entire area. The systematic approach ensures that every frontier cell gets explored,
and as the robot moves through these frontiers, it continuously updates the map by encountering and categorizing
new cells as either occupied or empty. Eventually, with no frontier cells left unexplored, the map is considered
complete, ensuring comprehensive coverage of the entire grid world. This systematic exploration strategy guarantees
full coverage within a finite grid environment, allowing the robot to effectively map and explore the entire known area.
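
A minimal frontier-detection sketch over such a grid might look like the following; the dictionary-based map with "empty", "occupied", and "unknown" labels is an assumption:

```python
# Minimal frontier-detection sketch; the dict-based grid labeling is assumed.
def frontier_cells(grid, rows, cols):
    """Unknown cells adjacent to at least one known empty cell."""
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid.get((r, c), "unknown") != "unknown":
                continue
            adjacent = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
            if any(grid.get(n) == "empty" for n in adjacent):
                frontiers.append((r, c))
    # Exploration repeats: pick a frontier (e.g., the nearest), plan a path to
    # it, update the map, and stop when this list comes back empty.
    return frontiers
```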