Group 1 - AI
7.2 Navigation and Path Planning
1. Environment Mapping
Sensors Used:
Mapping Approaches:
Occupancy Grids: Divide the space into grids, labeling cells as free, occupied, or unknown (a minimal sketch follows this list).
Semantic Maps: Incorporate object-level information into the map, such
as labeling "table" or "door."
Topological Maps: Represent the environment as a graph, where nodes
are key locations and edges represent paths between them.
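To make the occupancy-grid idea concrete, the sketch below stores the map as a small NumPy array whose cells take one of three states; the grid size, the integer labels, and the update rule are illustrative assumptions rather than part of any particular mapping system.

```python
import numpy as np

# Illustrative cell states for a toy occupancy grid.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

# A 10 x 10 map, initially unexplored.
grid = np.full((10, 10), UNKNOWN, dtype=np.int8)

def update_cell(grid, row, col, hit):
    """Mark a cell as occupied if a sensor reported an obstacle there, otherwise free."""
    grid[row, col] = OCCUPIED if hit else FREE

# Hypothetical sensor readings: (row, column, obstacle detected?)
for r, c, hit in [(2, 3, True), (2, 4, True), (5, 5, False)]:
    update_cell(grid, r, c, hit)

print((grid == OCCUPIED).sum(), "occupied cells,", (grid == UNKNOWN).sum(), "unknown cells")
```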
2. Localization
Localization ensures the robot knows its precise position and orientation within the
mapped environment.
Techniques:
Challenges:
3. Path Planning
Path planning algorithms compute the optimal route from the robot's current location to its goal while avoiding obstacles and considering constraints such as time or energy efficiency.
Popular Algorithms:
Optimization Goals:
Shortest distance, minimal energy consumption, fastest time, or a
combination of these.
4. Obstacle Avoidance
Real-time obstacle detection and avoidance are critical for safe navigation. Robots
rely on sensor data to recognize obstacles and adjust their paths dynamically.
Key Techniques:
Challenges:
5. Motion Control
Motion control ensures the robot follows the planned path smoothly and accurately
while maintaining stability.
Key Considerations:
Key Components of Autonomous Navigation:
Sensing and Perception:
1. Use sensors (e.g., LIDAR, GPS, IMU, cameras) to gather data about
the environment.
2. Analyze the data to understand obstacles, landmarks, or dynamic
changes.
Decision-Making:
Mobility and Actuation:
Communication:
Applications:
Autonomous Vehicles:
Self-driving cars rely on advanced path planning and navigation for safe
transport.
Examples include Tesla Autopilot and Waymo.
Service Robots:
1. Used in warehouses (e.g., Amazon Robotics) for sorting, picking, and
transporting goods.
2. Delivery robots in urban areas for last-mile delivery.
Exploration Robots:
Healthcare:
Path Planning Techniques in Autonomous Robotic Systems:
1. Grid-Based Methods
Grid-based methods divide the environment into a uniform grid, where each cell
represents a distinct region of space. Each cell can either be marked as traversable or
non-traversable, based on the presence of obstacles or constraints. These methods are
widely used in structured and static environments due to their simplicity and
efficiency.
Grid-based methods are commonly applied in applications such as pathfinding for robots, video games, and logistics systems. However, their performance depends on the size and resolution of the grid, as well as the complexity of the environment.
1.1. A* Algorithm
The A* (A-Star) algorithm is one of the most popular and efficient grid-based search
algorithms. It calculates the shortest path between a start node and a goal node by
combining two factors:
1. The actual cost (g): The cost of the path from the start node to the current node.
2. The heuristic cost (h): An estimated cost from the current node to the goal node.
The algorithm prioritizes nodes based on their total estimated cost, which is given by:
f(n) = g(n) + h(n)
Optimality: If the heuristic function is admissible (i.e., it never overestimates the true
cost), the A* algorithm guarantees finding the shortest path.
Efficiency: By using a heuristic function, A* can significantly reduce the number of
nodes it needs to explore compared to other exhaustive search methods.
1. Initialize the cost g of the start node to zero and of all other nodes to infinity.
2. Add the start node to a priority queue (min-heap) ordered by f(n) = g(n) + h(n).
3. Iteratively extract the node with the smallest f-value from the queue and update the g-values of its neighbors.
4. Repeat until the goal node is reached or all reachable nodes have been explored. A minimal implementation sketch follows.
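Below is a minimal A* sketch on a 4-connected grid, using the Manhattan distance as an admissible heuristic; the grid encoding (0 = free, 1 = occupied) and the helper names are illustrative assumptions rather than a reference implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid of 0 (free) / 1 (occupied) cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible Manhattan heuristic
    g = {start: 0}
    parent = {}
    open_set = [(h(start), start)]           # priority queue ordered by f = g + h
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:                  # reconstruct the path by walking parents
            path = [current]
            while current in parent:
                current = parent[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative_g = g[current] + 1
                if tentative_g < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative_g
                    parent[(nr, nc)] = current
                    heapq.heappush(open_set, (tentative_g + h((nr, nc)), (nr, nc)))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

Because the Manhattan heuristic never overestimates the true cost on a 4-connected grid with unit step costs, the returned path is shortest whenever a path exists.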
1.2. Limitations
High-Dimensional Spaces: In spaces with more than two or three dimensions (e.g., robotic arms with six degrees of freedom), the size of the grid grows exponentially, making these methods impractical.
Grid-based methods often produce paths that follow the grid’s discrete structure,
resulting in non-smooth or jagged trajectories. These paths may require additional
post-processing (e.g., smoothing) for practical use in real-world applications.
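As a sketch of the post-processing step mentioned above, the function below shortcuts a jagged grid path by jumping to the farthest waypoint that can be reached along a straight, collision-free segment; the sampling-based segment check and the grid encoding (0 = free, 1 = occupied) are simplifying assumptions suitable only for coarse grids.

```python
import numpy as np

def segment_free(grid, p, q, samples=20):
    """Approximate check that the straight segment from p to q stays in free space."""
    for t in np.linspace(0.0, 1.0, samples):
        r = int(round(p[0] + t * (q[0] - p[0])))
        c = int(round(p[1] + t * (q[1] - p[1])))
        if grid[r][c] == 1:          # 1 = occupied
            return False
    return True

def shortcut_smooth(grid, path):
    """Greedy shortcutting: repeatedly jump to the farthest directly reachable waypoint."""
    if not path:
        return path
    smoothed, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_free(grid, path[i], path[j]):
            j -= 1
        smoothed.append(path[j])
        i = j
    return smoothed

# Typical usage: smoothed = shortcut_smooth(grid, path), where path comes from a grid planner.
```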
2. Sampling-Based Methods
Sampling-based methods are widely used for solving path planning problems in high-
dimensional and complex environments. Instead of discretizing the entire
environment like grid-based methods, they generate random samples in the
configuration space (C-space) and focus only on regions relevant to finding a
feasible path. These methods are highly scalable and can handle problems with
complex constraints and obstacle configurations.
The core idea behind sampling-based methods is to avoid explicit computation of the
entire configuration space, which can be computationally prohibitive in high-
dimensional spaces. Instead, they approximate the solution by sampling a subset of
the space and connecting these samples to construct a path.
Steps in RRT (a minimal implementation sketch follows the list):
1. Random Sampling: Generate a random sample in the configuration space.
2. Nearest Node Selection: Identify the nearest node in the current tree to the random
sample using a distance metric.
3. Tree Extension: Extend the tree from the nearest node toward the random sample,
often using a predefined step size.
4. Collision Checking: Verify that the new branch does not intersect with obstacles in
the environment.
5. Repeat: Continue until a path is found to the goal or a termination condition is met.
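The following is a minimal 2-D RRT sketch in a continuous workspace with circular obstacles; the step size, goal bias, obstacle model, and stopping test are all illustrative assumptions, and collision checking is done only at the new node for brevity.

```python
import math
import random

STEP = 0.5        # tree extension step size (assumed)
GOAL_BIAS = 0.1   # probability of sampling the goal directly (assumed)

def collision_free(p, obstacles):
    """A point is free if it lies outside every circular obstacle (cx, cy, radius)."""
    return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)

def rrt(start, goal, obstacles, bounds=(0.0, 10.0), iters=5000):
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        # 1. Random sampling (occasionally biased toward the goal).
        sample = goal if random.random() < GOAL_BIAS else (
            random.uniform(*bounds), random.uniform(*bounds))
        # 2. Nearest node selection under the Euclidean metric.
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        # 3. Tree extension by a fixed step toward the sample.
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        new = (nearest[0] + STEP * (sample[0] - nearest[0]) / d,
               nearest[1] + STEP * (sample[1] - nearest[1]) / d)
        # 4. Collision checking on the new node (segment checks omitted for brevity).
        if not collision_free(new, obstacles):
            continue
        nodes.append(new)
        parent[new] = nearest
        # 5. Stop once the tree reaches the neighborhood of the goal.
        if math.dist(new, goal) < STEP:
            path = [goal, new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

print(rrt((1.0, 1.0), (9.0, 9.0), [(5.0, 5.0, 1.5)]))
```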
Advantages of RRT
Challenges of RRT
Suboptimal Paths: RRT focuses on feasibility rather than optimality and may
produce paths that are far from the shortest or smoothest.
Path Refinement: Often requires post-processing to improve path quality or
smoothness.
Randomness: The performance and quality of the solution can vary due to its
reliance on random sampling.
Steps in PRM (a minimal sketch follows the list):
1. Sampling: Generate a set of random points in the configuration space, ensuring they
are in the free space (collision-free regions).
2. Connection: For each sampled point, connect it to nearby points using a local planner
if the path between them is collision-free. This step forms the edges of the graph.
3. Graph Search: Use graph search algorithms (e.g., Dijkstra or A*) to find a path
between the start and goal nodes in the constructed road map.
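A compact PRM sketch in the same 2-D setting is shown below; the number of samples, the connection radius, and the circular-obstacle model are illustrative assumptions, and the final query is answered with Dijkstra's algorithm over the constructed roadmap.

```python
import heapq
import math
import random

def free(p, obstacles):
    """A point is free if it lies outside every circular obstacle (cx, cy, radius)."""
    return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)

def segment_free(p, q, obstacles, samples=20):
    """Approximate straight-segment collision check by dense sampling."""
    return all(free((p[0] + t / samples * (q[0] - p[0]),
                     p[1] + t / samples * (q[1] - p[1])), obstacles)
               for t in range(samples + 1))

def prm(start, goal, obstacles, n_samples=200, radius=2.0, bounds=(0.0, 10.0)):
    # 1. Sampling: keep only collision-free random configurations.
    nodes = [start, goal]
    while len(nodes) < n_samples + 2:
        p = (random.uniform(*bounds), random.uniform(*bounds))
        if free(p, obstacles):
            nodes.append(p)
    # 2. Connection: link nearby nodes whose connecting segment is collision-free.
    edges = {n: [] for n in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            d = math.dist(a, b)
            if d < radius and segment_free(a, b, obstacles):
                edges[a].append((b, d))
                edges[b].append((a, d))
    # 3. Graph search: Dijkstra from start to goal over the roadmap.
    dist, queue, parent = {start: 0.0}, [(0.0, start)], {}
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            path = [u]
            while u in parent:
                u = parent[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(queue, (d + w, v))
    return None

print(prm((1.0, 1.0), (9.0, 9.0), [(5.0, 5.0, 1.5)]))
```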
Advantages of PRM
Multi-Query Capability: Once the road map is constructed, it can be reused for
multiple queries, making it efficient for static environments.
Adaptability: Can handle complex environments with non-linear constraints and
obstacles.
Challenges of PRM
Key Applications
1. Robotic Manipulators: Path planning for robotic arms with many degrees of
freedom (DOF), such as in industrial assembly lines or medical surgeries.
2. Drones and UAVs: Navigation of unmanned aerial vehicles through cluttered
environments, such as forests or urban settings.
3. Autonomous Vehicles: Planning paths through complex, obstacle-laden roadways.
4. Space Exploration: Planning feasible paths for rovers or robotic arms in
extraterrestrial environments with uncertain terrains.
5. Animation and Games: Generating realistic motion paths for characters or objects in
virtual environments.
3. Potential Field Methods
Potential field methods treat the robot as a particle in a field, where attractive forces pull it toward the goal and repulsive forces push it away from obstacles (a minimal sketch follows the two potential terms below).
Attractive Potential: pulls the robot toward the goal, typically increasing with the distance to the goal.
Repulsive Potential: pushes the robot away from obstacles, increasing sharply as the robot gets closer to them.
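Below is a minimal sketch of the combined field, assuming a quadratic attractive potential toward the goal and a repulsive potential that acts only within a fixed influence radius of each point obstacle; the gains, radius, and step size are illustrative assumptions.

```python
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 20.0, 2.0   # attractive gain, repulsive gain, influence radius (assumed)

def attractive_force(pos, goal):
    """Negative gradient of the quadratic potential 0.5 * K_ATT * ||pos - goal||^2."""
    return -K_ATT * (pos - goal)

def repulsive_force(pos, obstacles):
    """Sum of repulsive forces; each obstacle only acts within distance RHO0."""
    force = np.zeros(2)
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < RHO0:
            force += K_REP * (1.0 / rho - 1.0 / RHO0) / rho**2 * (diff / rho)
    return force

def step(pos, goal, obstacles, gain=0.02):
    """One gradient-descent step on the combined potential field."""
    return pos + gain * (attractive_force(pos, goal) + repulsive_force(pos, obstacles))

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.0])]   # an off-axis obstacle the robot can skirt around
for _ in range(600):
    pos = step(pos, goal, obstacles)
print(pos)  # ends near the goal here; a head-on obstacle could still trap the robot in a local minimum
```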
3.2. Limitations
Local Minima: The robot may get trapped in local minima, unable to reach the goal.
Narrow Passages: Difficulty navigating through narrow areas due to overlapping
forces.
3.3. Enhancements
Methods such as artificial potential fields combined with gradient descent or hybrid techniques can mitigate some of these issues.
4. Machine Learning-Based Methods
Machine learning (ML) techniques have emerged as powerful tools for solving path
planning problems, particularly in environments with uncertainty, dynamic changes,
or incomplete information. Unlike traditional methods, ML-based approaches can
learn patterns from data and adapt to various scenarios, making them highly versatile
for both structured and unstructured environments.
4.1. Reinforcement Learning
Reinforcement Learning (RL) is a popular framework for learning path planning policies
through interaction with the environment. In RL, an agent learns to navigate from a
start to a goal by maximizing cumulative rewards obtained through trial and error.
The path planning task is typically formulated as a Markov Decision Process (MDP), which consists of the following (a small tabular sketch follows the list):
States (S): Represent the robot's position and configuration in the environment.
Actions (A): Possible movements or decisions the robot can make.
Rewards (R): Feedback indicating the quality of the agent's decisions (e.g., positive
for moving closer to the goal, negative for collisions).
Transitions (T): Probabilities of reaching a new state given the current state and
action.
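To make the MDP formulation concrete, the sketch below runs tabular Q-learning on a tiny deterministic grid world; the layout, reward values, and learning parameters are assumptions chosen purely for illustration.

```python
import random

ROWS, COLS = 4, 4
GOAL = (3, 3)
OBSTACLES = {(1, 1), (2, 2)}                  # occupied cells (assumed layout)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic transition: +10 at the goal, -5 for collisions, -1 per move."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state, -5.0, False        # bounce back with a collision penalty
    if (r, c) == GOAL:
        return (r, c), 10.0, True
    return (r, c), -1.0, False

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(len(ACTIONS))}

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update toward the bootstrapped target.
        best_next = max(Q[(next_state, i)] for i in range(len(ACTIONS)))
        Q[(state, a)] += ALPHA * (reward + GAMMA * (0.0 if done else best_next) - Q[(state, a)])
        state = next_state

# Greedy rollout of the learned policy from the start cell.
state, path = (0, 0), [(0, 0)]
for _ in range(20):
    a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
    state, _, done = step(state, ACTIONS[a])
    path.append(state)
    if done:
        break
print(path)
```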
Deep Q-Learning extends the traditional Q-learning algorithm by using a deep neural
network to approximate the Q-value function, which represents the expected
cumulative reward for each state-action pair.
Scalability: Handles large and continuous state-action spaces, which are common in
high-dimensional environments.
Exploration-Exploitation Tradeoff: Uses techniques like epsilon-greedy strategies
to balance exploration of new paths and exploitation of learned policies.
Memory Replay: Stores past experiences in a replay buffer and trains the neural network on sampled batches to improve learning stability and efficiency (a small sketch of these mechanics follows).
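The snippet below sketches the two mechanics just described, a replay buffer and epsilon-greedy action selection, independently of any particular network; the capacity, batch size, and placeholder transition are illustrative assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer of (state, action, reward, next_state, done) transitions."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size=32):
        # Uniformly sampled mini-batch used to decorrelate consecutive experiences.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

# Typical usage inside a training loop (values are placeholders):
buffer = ReplayBuffer()
buffer.push(((0, 0), 2, -1.0, (0, 1), False))
action = epsilon_greedy([0.1, -0.3, 0.5, 0.0], epsilon=0.1)
batch = buffer.sample()
```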
Policy Gradient methods directly optimize the policy (a mapping from states to
actions) rather than estimating the Q-value function. The policy is represented as a
probabilistic model, and the goal is to maximize the expected cumulative reward.
Continuous Actions: Suitable for environments with continuous action spaces (e.g.,
robotic arm movement).
End-to-End Optimization: Optimizes the entire decision-making process in one step,
leading to smoother and more robust trajectories.
Common Algorithms: Include Proximal Policy Optimization (PPO), Trust Region Policy Optimization (TRPO), and Soft Actor-Critic (SAC). A minimal policy-gradient sketch follows.
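As a minimal illustration of the policy-gradient idea, the sketch below trains a small softmax policy with the REINFORCE update on a toy one-dimensional corridor in which the agent must walk right to reach a goal cell; the environment, network size, and hyperparameters are assumptions, and practical systems would use the algorithms listed above (PPO, TRPO, SAC).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

N_CELLS = 5   # toy 1-D corridor; the goal is the right-most cell (assumed environment)

def run_episode(policy):
    """Roll out one episode and return per-step log-probabilities and rewards."""
    state, log_probs, rewards = 0, [], []
    for _ in range(20):
        x = F.one_hot(torch.tensor(state), N_CELLS).float()
        dist = Categorical(logits=policy(x))
        action = dist.sample()                  # 0 = move left, 1 = move right
        log_probs.append(dist.log_prob(action))
        state = max(0, state - 1) if action.item() == 0 else state + 1
        if state == N_CELLS - 1:
            rewards.append(1.0)                 # reached the goal
            break
        rewards.append(-0.1)                    # small per-step penalty
    return log_probs, rewards

policy = nn.Sequential(nn.Linear(N_CELLS, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=0.01)

for episode in range(300):
    log_probs, rewards = run_episode(policy)
    # Monte-Carlo returns G_t, then the REINFORCE loss -sum_t log pi(a_t|s_t) * G_t.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final episode length:", len(rewards))   # should shrink as the policy improves
```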
4.2. Neural Network-Based Methods
Neural networks are used to process spatial and temporal data for path prediction and
decision-making in complex environments. Two primary types of networks are
commonly employed:
CNNs excel at processing spatial data, making them ideal for grid-based or image-
based representations of the environment.
Use Case: CNNs can process occupancy grids, maps, or aerial images to predict
feasible paths or identify navigable regions.
Advantages: Efficient for static environments and can generalize well to similar scenarios (a small model sketch follows).
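The sketch below defines a small convolutional network that takes a single-channel occupancy grid and scores four candidate motion directions; the input resolution, layer sizes, and discrete-action output head are illustrative assumptions rather than a reference architecture.

```python
import torch
import torch.nn as nn

class GridCNN(nn.Module):
    """Maps a 1 x 32 x 32 occupancy grid to scores for 4 motion directions."""
    def __init__(self, n_actions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, grid):
        return self.head(self.features(grid))

# One random occupancy grid (batch of 1, single channel, 32 x 32 cells).
model = GridCNN()
scores = model(torch.rand(1, 1, 32, 32))
print(scores.shape)  # torch.Size([1, 4])
```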
RNNs are designed to process sequential or temporal data, making them suitable for
dynamic environments where the robot's actions depend on past states.
Use Case: RNNs are used to model motion patterns, predict future trajectories, or
handle scenarios with dynamic obstacles.
Variants: Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) are commonly used to address the vanishing gradient problem in traditional RNNs (a small sketch follows).
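A matching sketch for the sequential case: an LSTM consumes a short history of 2-D positions of a moving obstacle and predicts its next position; the sequence length, hidden size, and one-step prediction head are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Predicts the next (x, y) position from a history of past positions."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, history):
        # history: (batch, time, 2); use the last hidden state for the prediction.
        output, _ = self.lstm(history)
        return self.head(output[:, -1, :])

model = TrajectoryLSTM()
# A batch of one obstacle track: 8 past positions moving diagonally.
history = torch.tensor([[[float(t), float(t)] for t in range(8)]])
print(model(history).shape)  # torch.Size([1, 2]) -> predicted next position
```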
4.3. Generalization and Adaptability
3. Drone Pathfinding: Efficient navigation in GPS-denied environments using onboard sensors and learned models.
4. Space Exploration: Path planning for planetary rovers operating in uncertain terrains
with minimal prior information.
Challenges in Autonomous Robotic Navigation:
1. Dynamic Environments
Key Issues:
Moving Obstacles: Robots must detect and respond to dynamic obstacles, such as
vehicles, pedestrians, or animals, without collisions.
Uncertainty: Environmental factors, such as weather conditions (e.g., rain or fog) or
sensor noise, can degrade performance.
Crowded Areas: Navigating through crowded spaces, such as shopping malls or
urban streets, requires robust path planning and real-time decision-making.
Time Sensitivity: Processing environmental data and recalculating paths quickly
enough to react in real-time is computationally intensive.
Possible Solutions:
Anticipation Models: Predict the future trajectories of moving obstacles using
machine learning techniques.
2. Energy Efficiency
Key Issues:
Possible Solutions:
3. Localization Accuracy
Key Issues:
GPS Limitations: GPS signals may be unavailable or unreliable in certain
environments, such as dense urban areas or indoors.
Sensor Drift: Over time, IMUs (Inertial Measurement Units) and other sensors can
accumulate errors, reducing localization accuracy.
Environmental Complexity: Feature-poor or dynamic environments (e.g., empty
corridors, changing furniture layouts) challenge localization systems.
Possible Solutions:
4. Safety and Ethics
Ensuring the ethical deployment of autonomous robots and guaranteeing safety for humans and the environment are paramount.
Key Issues:
Human Safety: Autonomous robots must operate without causing harm to humans,
especially in shared spaces like roads or workplaces.
Decision-Making Ethics: In scenarios where harm is unavoidable (e.g., collision
risks), robots must make ethical decisions (e.g., "trolley problem" scenarios).
Data Privacy: Robots equipped with cameras and sensors collect vast amounts of
data, potentially infringing on individuals’ privacy.
Bias in ML Models: Machine learning-based decision-making may inadvertently
reflect biases present in the training data, leading to unfair or unsafe outcomes.
Possible Solutions:
Safety Standards: Adhere to established safety regulations (e.g., ISO 13482 for
personal care robots) and conduct rigorous testing before deployment.
Fail-Safe Mechanisms: Implement emergency stop systems, redundancy, and error
recovery protocols to handle unexpected situations.
Transparent AI: Develop interpretable AI models to ensure accountability and
understand the decision-making process.
Ethical Frameworks: Incorporate ethical considerations into design and policy, such
as prioritizing human safety above all else.
Privacy Protection: Use data anonymization techniques and secure communication
channels to safeguard user privacy.
Summary
The choice of path planning technique for autonomous robotic systems depends
largely on the specific requirements of the system and the environment in which it
operates. Grid-based methods are highly effective in structured and static
environments, offering reliable and optimal solutions with algorithms like A* and
Dijkstra. However, their efficiency diminishes in larger, dynamic, or high-
dimensional spaces. In such cases, sampling-based methods like RRT (Rapidly
Exploring Random Tree) and PRM (Probabilistic Roadmap) are more suitable. These
methods excel in complex, high-dimensional spaces, offering scalability and
flexibility, though they do not guarantee optimal solutions. To address their
limitations, post-processing techniques and hybrid approaches are often employed.
Despite the potential of these methods, autonomous robotic systems face several
challenges, including dealing with dynamic environments, optimizing energy
consumption, ensuring accurate localization in GPS-denied spaces, and addressing
ethical concerns around safety and decision-making. Overcoming these challenges
will require advances in hardware, software, and policy-making. As technology
progresses, the integration of environment mapping, localization, advanced path
planning, obstacle avoidance, and precise motion control will continue to enhance the
reliability and versatility of robotic systems. This will unlock new possibilities for
robotics in various industries, ranging from logistics to healthcare, driving the future
of autonomous systems.