
Module 4

2: Sector Boundaries in Robotic Navigation

• Title: Computation of Sector Boundaries & Their Impact

• Content:

o Definition: Sector boundaries divide the robot's surrounding environment (often its
local perception field) into angular "sectors" or "cones." These boundaries are
typically defined by obstacles or free space openings.

o Computation:

▪ Usually based on sensor data (e.g., LiDAR scans, depth camera point clouds).

▪ Points from obstacles are projected onto a 2D plane around the robot.

▪ Angular sorting of these points helps identify gaps (free space) and obstacle
clusters.

▪ Sector boundaries are drawn along the angular limits of these gaps or the
edges of obstacle clusters.

▪ Algorithms such as the Vector Field Histogram (VFH) and the Dynamic Window Approach (DWA) compute these boundaries either implicitly or explicitly.

o Impact on Motion Planning:

▪ Local Navigation: Helps the robot quickly identify clear paths and steer away
from immediate obstacles.

▪ Reactive Behavior: Enables rapid response to dynamic environments or unknown obstacles.

▪ Goal-Oriented Movement: Robot prioritizes sectors that lead towards the goal.

▪ Reduced Computation: Instead of processing every sensor point, planning focuses on summarized sector information.

▪ Example: A robot's LiDAR scan identifies an opening of 30 degrees to its left. This defines a free-space sector. The robot can then plan to move into this sector.
3: Advantages and Challenges of Sector Boundaries

• Title: Determining Sector Boundaries: Advantages & Challenges

• Content:

o Advantages:

▪ Real-time Processing: Enables fast computation for reactive navigation, crucial for dynamic environments.

▪ Simplicity: Conceptually straightforward and relatively easy to implement.

▪ Computational Efficiency: Reduces the complexity of collision avoidance by summarizing environment information.

▪ Direct Steering Commands: Can directly map sector information to robot steering and velocity commands.

▪ Robustness: Less sensitive to individual sensor noise compared to point-by-point processing.

o Challenges:

▪ Resolution Dependence: The choice of sector angular resolution impacts fidelity; too coarse misses details, too fine increases computation.

▪ Ambiguity in Cluttered Environments: In dense obstacle fields, distinguishing clear paths can be difficult, leading to small or non-existent free sectors.

▪ Local Minima: Can lead to oscillations or getting stuck if the robot faces a symmetric obstacle configuration.

▪ "Corridor" Problem: May struggle to navigate narrow corridors optimally without a global context.

▪ Static vs. Dynamic: Handling dynamic obstacles that move through sectors is more complex than handling static ones.

▪ Feature Extraction Difficulty: Accurately identifying the "edges" of free space from noisy sensor data can be challenging.

4: Computing Sector Boundaries in Unknown Environments

• Title: Method to Compute Sector Boundaries for Mobile Robots in Unknown Environments

• Content:

o Approach: Sensor-driven, reactive, and often iterative.

o Method (Example using LiDAR-like sensors):


1. Sensor Data Acquisition: Robot continuously acquires range readings (e.g.,
LiDAR points) from its surroundings.

2. Polar Grid Representation: Transform the Cartesian sensor points into a polar coordinate system relative to the robot's center. Discretize the angular space into "sectors" or "bins" (e.g., 1-degree or 5-degree sectors).

3. Obstacle Density/Proximity Calculation: For each sector, calculate a measure of obstacle presence. This could be:

▪ The minimum range reading in that sector.

▪ The number of obstacle points in that sector.

▪ An "occupancy value" or "cost" for the sector.

4. Free Space Identification: Identify contiguous sequences of sectors with low obstacle density / high minimum range readings. These represent potential free paths.

5. Boundary Determination: The angular limits of these contiguous free-space sequences define the sector boundaries. These are the angular directions where the environment transitions from free space to occupied space (or vice versa).

6. Path Selection/Steering: Based on these identified free sectors, the robot's local planner chooses the sector that best balances goal direction, clearance from obstacles, and robot dynamics.

o Example: If a LiDAR sweep shows clear readings from 0 to 45 degrees and an obstacle at 46 degrees, a boundary lies at 45 degrees. Another clear path from 60 to 90 degrees defines further boundaries at 60 and 90 degrees. A minimal code sketch of this procedure is given below.
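o Code sketch (illustrative): a minimal Python version of steps 2–5 above, treating each 1-degree bin as free when its range reading exceeds a clearance threshold. The function name, NumPy usage, and the 1 m threshold are assumptions for illustration, not taken from the source.

```python
import numpy as np

def free_sector_boundaries(ranges, angles, clear_dist=1.0):
    """Return (start_angle, end_angle) pairs of contiguous free sectors.

    ranges : range reading per angular bin
    angles : bin centre angles (same length as ranges), in radians
    clear_dist : a bin counts as free if its reading exceeds this distance
    """
    free = ranges > clear_dist                       # occupancy test per bin
    boundaries, start = [], None
    for i, is_free in enumerate(free):
        if is_free and start is None:                # a free run begins
            start = i
        elif not is_free and start is not None:      # the free run ends
            boundaries.append((angles[start], angles[i - 1]))
            start = None
    if start is not None:                            # run reaches the last bin
        boundaries.append((angles[start], angles[-1]))
    return boundaries

# Example scan: openings from 0-45 degrees and 60-90 degrees, obstacles elsewhere
angles = np.deg2rad(np.arange(360))
ranges = np.full(360, 0.5)
ranges[0:46] = 5.0
ranges[60:91] = 5.0
print([(round(np.degrees(a)), round(np.degrees(b)))
       for a, b in free_sector_boundaries(ranges, angles)])   # [(0, 45), (60, 90)]
```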

5: The "Peg-in-a-Hole" Problem

• Title: The "Peg-in-a-Hole" Problem and its Significance

• Content:

o Definition: A fundamental robotics problem where a robot manipulator must insert a peg (cylindrical, square, etc.) into a corresponding hole.

o Challenges: Despite its apparent simplicity for humans, it's highly challenging for
robots due to:

▪ Tight Tolerances: The peg and hole often have very small clearances (e.g.,
microns to millimeters).

▪ Uncertainty: Imperfections in robot calibration, sensor noise, part manufacturing tolerances, and environmental variations.

▪ Contact Dynamics: Complex forces and torques arise during insertion, requiring active control.

o Significance in Robotics and Automation:

▪ Benchmark Task: A classic benchmark for testing robot precision, dexterity, sensing capabilities, and control algorithms.

▪ Industrial Applications: Crucial in assembly lines for manufacturing products (e.g., engine assembly, electronic component placement, consumer goods).

▪ Micro-assembly: Relevant for very small-scale assembly.

▪ Space Exploration: Essential for docking, repair, and sample collection.

▪ Surgical Robotics: Analogous problems arise in minimally invasive surgery.

▪ Foundation for Complex Tasks: Solving peg-in-hole lays the groundwork for
more intricate manipulation tasks.

6: Challenges in Aligning Peg with Hole

• Title: Challenges in Aligning Peg with Hole: Real-World Uncertainties

• Content:

o 1. Robot Calibration Errors:

▪ Issue: Inaccuracies in the robot's kinematic model, joint encoders, or tool center point (TCP) calibration.

▪ Impact: The robot believes its end-effector is in one position, but it's slightly
off, leading to misalignment.

o 2. Part Manufacturing Tolerances:

▪ Issue: The peg or hole may not be perfectly manufactured (e.g., slightly oval,
tapered, off-center).

▪ Impact: Even if the robot is perfectly aligned, the physical parts themselves
introduce errors.

o 3. Sensor Noise and Limitations:

▪ Issue: Vision systems have pixel resolution limits; force/torque sensors have
noise and drift.

▪ Impact: Inaccurate feedback about the peg's position, orientation, or contact forces.

o 4. Environmental Variations:

▪ Issue: Temperature changes, vibrations, or lighting variations can affect part positions or sensor readings.

▪ Impact: Dynamic changes make a static, pre-programmed solution insufficient.

o 5. Compliant Behavior and Stiffness:


▪ Issue: Robot's inherent stiffness or compliance can lead to deflection under
contact forces, complicating control.

▪ Impact: The robot might "push" against the hole instead of sliding in.

o 6. Pose Estimation Accuracy:

▪ Issue: Accurately determining the 6D pose (position and orientation) of both the peg and the hole simultaneously.

▪ Impact: Small angular errors can lead to large positional errors at the peg tip,
causing jamming.

7: Peg-in-a-Hole: Mathematical Formulation and Task Planning Application

• Title: "Peg-in-a-Hole": Mathematical Formulation & Task Planning

• Content:

o Mathematical Formulation (Simplified):

▪ Goal: Minimize the positional and angular misalignment between the peg's
axis and the hole's axis.

▪ Let the peg's pose be P_p = (x_p, y_p, z_p, α_p, β_p, γ_p) and the hole's pose be P_h = (x_h, y_h, z_h, α_h, β_h, γ_h).

▪ Objective Function: Minimize the error E = ||P_p − P_h||² (or a more sophisticated metric involving both position and orientation).

▪ Constraints:

▪ Contact Forces: F_contact ≤ F_max (prevent jamming/damage).

▪ Torques: T_contact ≤ T_max.

▪ Clearance: D_peg ≤ D_hole (geometric constraint).

▪ Friction Model: F_friction = μ · F_normal.

▪ Control Input: Robot joint velocities or end-effector forces.

▪ Feedback: Force/torque readings, vision data (relative pose).
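o Code sketch (illustrative): a small, hedged Python sketch of the formulation above. The helper names, the positional/angular weights, and the force/torque thresholds are assumptions for illustration only, not the source's method.

```python
import numpy as np

def alignment_error(peg_pose, hole_pose, w_pos=1.0, w_rot=0.1):
    """Weighted pose error E between peg and hole.

    Poses are (x, y, z, alpha, beta, gamma); the weights trade off positional
    against angular misalignment (small-angle treatment; toy values).
    """
    diff = np.asarray(peg_pose, float) - np.asarray(hole_pose, float)
    pos_err = np.linalg.norm(diff[:3])        # positional misalignment
    rot_err = np.linalg.norm(diff[3:])        # angular misalignment
    return w_pos * pos_err**2 + w_rot * rot_err**2

def insertion_allowed(f_contact, t_contact, f_max=10.0, t_max=1.0):
    """Check the contact-force and torque constraints before pushing further."""
    return np.linalg.norm(f_contact) <= f_max and np.linalg.norm(t_contact) <= t_max

peg  = (0.402, 0.101, 0.250, 0.00, 0.01, 0.00)
hole = (0.400, 0.100, 0.250, 0.00, 0.00, 0.00)
print(alignment_error(peg, hole), insertion_allowed([1.2, 0.4, 3.0], [0.05, 0.02, 0.0]))
```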

o Application in Task Planning:

▪ Hierarchical Planning:

▪ Global Plan: Move the robot arm to a pre-insertion approach pose (gross motion planning).

▪ Fine Motion Plan (Peg-in-Hole Strategy): Execute a sequence of guarded motions and compliant motions based on sensor feedback:

▪ Search/Alignment: Spiral search, active compliance, or vibratory motions to locate the hole opening.

▪ Insertion: Apply controlled force along the insertion axis while maintaining minimal lateral forces/torques.

▪ Detection of Success/Failure: Monitor force profiles to confirm successful insertion or detect jamming.

▪ Robustness: Mathematical models and control strategies are crucial to handle the uncertainties and achieve robust insertion.

8: Simulating Planar Motion for a Differential Drive Robot

• Title: Simulating Planar Motion for a Differential Drive Robot

• Content:

o Differential Drive Robot: A robot with two independent wheels, usually driven by
separate motors, allowing it to move forward/backward and rotate by varying the
speeds of the wheels.

o Planar Motion: Movement restricted to a 2D plane (x, y, and orientation θ).

o Simulation Process:

1. Kinematic Model: Define the robot's kinematic equations:

▪ Let vL and vR be the linear velocities of the left and right wheels.

▪ Let w be the robot's wheelbase (the distance between the two wheels).

▪ Robot's linear velocity: v = (vL + vR) / 2.

▪ Robot's angular velocity: ω = (vR − vL) / w.

▪ Change in x, y, θ over a small time step Δt:

▪ Δx = v cos(θ) Δt

▪ Δy = v sin(θ) Δt

▪ Δθ = ω Δt

2. Environment Representation: Create a 2D map (e.g., grid map) with obstacles.

3. Robot State: Initialize robot's pose (x0, y0, θ0).

4. Control Input: Provide desired wheel velocities (vL, vR) or robot linear/angular velocities (v, ω).

5. State Update Loop:

▪ At each time step Δt:

▪ Calculate new velocities (v,ω) from control inputs.

▪ Update robot's pose: (x_{k+1}, y_{k+1}, θ_{k+1}) = (x_k + Δx, y_k + Δy, θ_k + Δθ).

▪ Collision Detection: Check if the new robot pose collides with any obstacles in the environment.

6. Visualization: Render the robot's movement and environment in real-time.
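o Code sketch (illustrative): a minimal Python version of the kinematic update loop above; the wheel speeds, wheelbase, and time step are example values, not taken from the source.

```python
import math

def step(x, y, theta, v_l, v_r, wheelbase, dt):
    """One Euler integration step of the differential-drive kinematic model."""
    v = (v_l + v_r) / 2.0             # linear velocity of the chassis
    omega = (v_r - v_l) / wheelbase   # angular velocity (yaw rate)
    x += v * math.cos(theta) * dt     # Δx = v cos(θ) Δt
    y += v * math.sin(theta) * dt     # Δy = v sin(θ) Δt
    theta += omega * dt               # Δθ = ω Δt
    return x, y, theta

# Drive a gentle left arc for 5 seconds with a 10 ms time step
pose = (0.0, 0.0, 0.0)
for _ in range(500):
    pose = step(*pose, v_l=0.20, v_r=0.25, wheelbase=0.30, dt=0.01)
print(pose)
```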

9: Uncertainties & Dynamic Obstacles in Planar Motion Simulation

• Title: Uncertainties & Dynamic Obstacles in Planar Motion Simulation Accuracy

• Content:

o Impact of Uncertainties:

▪ Sensor Noise: Simulated sensors (e.g., range finders) might have noise
models. If not, the simulation is overly optimistic.

▪ Actuator Noise/Errors: Wheel slippage, motor inefficiencies, or encoder errors can lead to discrepancies between commanded and actual wheel velocities.

▪ Localization Errors: Simulated robot's "true" position might drift from its "estimated" position if a localization model (e.g., SLAM) is not included.

▪ Model Simplification: Simulating a point robot instead of a rigid body, or ignoring friction, airflow, etc., reduces accuracy.

▪ Discretization Errors: Large time steps (Δt) or coarse environmental grids can
lead to accumulated errors or missed collisions.

▪ Impact: Leads to divergence between simulated and real-world behavior, rendering the simulation less useful for real-world deployment without robust error handling or state estimation.

o Impact of Dynamic Obstacles:

▪ Collision Prediction: Predicting future positions of moving obstacles adds complexity. Simple simulations might only check current collisions.

▪ Path Re-planning: Requires continuous re-planning or local adjustments of the robot's path to avoid moving obstacles.

▪ Inter-robot Collision: In multi-robot simulations, coordinating movement to avoid collisions between agents is challenging.

▪ Impact: The simulation must model obstacle dynamics (velocity, acceleration) and incorporate predictive collision detection, significantly increasing computational load and algorithm complexity.

10: Simple Planar Motion Simulation Framework & Real-World Implications

• Title: Simple Planar Motion Simulation Framework & Real-World Implications

• Content:
o Simple Simulation Framework Design:

▪ Components:

▪ Environment Module: Grid-based map (0 = free, 1 = obstacle), static obstacles defined.

▪ Robot Module: State variables (x, y, theta), kinematic model.

▪ Controller Module: Takes goal/waypoints, outputs desired wheel velocities. Can be basic (e.g., "go-to-goal" or pure pursuit).

▪ Sensor Module (Optional): Simulate simple range sensors (e.g., rays cast from robot).

▪ Collision Detection Module: Checks robot's circular/rectangular footprint against obstacle cells.

▪ Visualization Module: Plots robot and environment.

▪ Main Loop: Iterates through time steps, updates robot state, checks
collisions, renders.
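o Code sketch (illustrative): the framework above condensed into a few dozen lines of Python. The grid values, controller gains, robot radius, and goal position are assumptions, and the controller is the basic "go-to-goal" behaviour mentioned above (visualization omitted).

```python
import math

GRID = [  # 0 = free, 1 = obstacle; one cell = 0.5 m
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
CELL = 0.5

def in_collision(x, y, radius=0.2):
    """Circular robot footprint checked against occupied grid cells."""
    for r, row in enumerate(GRID):
        for c, occ in enumerate(row):
            if occ:
                cx, cy = (c + 0.5) * CELL, (r + 0.5) * CELL   # cell centre
                if math.hypot(cx - x, cy - y) < radius + CELL / 2:
                    return True
    return False

def go_to_goal(x, y, theta, gx, gy, v_max=0.3, k_turn=1.5):
    """Basic go-to-goal controller: turn toward the goal, drive forward."""
    heading = math.atan2(gy - y, gx - x)
    err = math.atan2(math.sin(heading - theta), math.cos(heading - theta))
    return v_max, k_turn * err                     # (v, omega)

# Main loop: controller -> kinematic update -> collision check -> goal check
x, y, theta, dt = 0.25, 0.25, 0.0, 0.05
gx, gy = 2.0, 0.25
for _ in range(1000):
    v, omega = go_to_goal(x, y, theta, gx, gy)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    if in_collision(x, y):
        print("collision at", round(x, 2), round(y, 2)); break
    if math.hypot(gx - x, gy - y) < 0.05:
        print("goal reached"); break
```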

o Real-World Implications:

▪ Algorithm Validation: Test new path planning or control algorithms in a controlled environment before deploying on physical hardware.

▪ Rapid Prototyping: Quickly develop and iterate on robot behaviors without the cost and time of physical experiments.

▪ Safety Testing: Simulate dangerous scenarios (e.g., near-collision events) that would be risky in the real world.

▪ Parameter Tuning: Optimize control parameters (e.g., speeds, turning rates) for desired performance.

▪ Debugging: Easier to identify and fix logical errors in robot code in a simulated environment.

▪ Training: Training reinforcement learning agents for navigation tasks.

▪ Cost Reduction: Reduces hardware wear-and-tear and associated operational costs.

11: Polygon Penetration Algorithm for Collision Detection

• Title: Polygon Penetration Algorithm for Collision Detection

• Content:

o Concept: A robust method for collision detection between two or more convex
polygons. It determines if two polygons overlap and, if so, by how much (the
penetration depth) and in what direction (the minimum translation vector, MTV).

o Core Principle: Separating Axis Theorem (SAT):


▪ Two convex polygons do not overlap if and only if there exists a line (called a
"separating axis") on which their projections do not overlap.

▪ Conversely, if their projections overlap on all possible separating axes, then the polygons are penetrating.

o How it Helps Detect Collisions:

▪ Identify Potential Axes: For each polygon, consider the normal vectors to
each of its edges as potential separating axes. Also, for 3D, consider cross-
products of edge directions.

▪ Project Vertices: Project all vertices of both polygons onto each candidate
separating axis.

▪ Check for Overlap: For each axis, determine the minimum and maximum
projected values for both polygons, creating intervals.

▪ Collision Condition: If there is any axis where the projected intervals do not
overlap, then the polygons are not colliding.

▪ Penetration Calculation: If all axes show overlap, then a collision is occurring. The axis with the minimum overlap (smallest gap between projections) indicates the direction and depth of the minimum penetration. This is the MTV.

o Benefit: Provides not just a boolean "collision/no collision" but also quantitative
information (MTV) useful for collision response (e.g., pushing objects apart).
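o Code sketch (illustrative): a compact Python/NumPy implementation of the SAT test with MTV computation for 2D convex polygons. The vertex data at the bottom is invented for illustration.

```python
import numpy as np

def project(poly, axis):
    """Project a polygon's vertices onto an axis; return the (min, max) interval."""
    dots = poly @ axis
    return dots.min(), dots.max()

def sat_mtv(poly_a, poly_b):
    """Separating Axis Theorem for two convex polygons (N x 2 vertex arrays).

    Returns None if the polygons do not overlap, otherwise the minimum
    translation vector (MTV) that pushes poly_a out of poly_b.
    """
    best_overlap, best_axis = np.inf, None
    for poly in (poly_a, poly_b):
        for i in range(len(poly)):
            edge = poly[(i + 1) % len(poly)] - poly[i]
            axis = np.array([-edge[1], edge[0]])        # edge normal
            axis = axis / np.linalg.norm(axis)
            min_a, max_a = project(poly_a, axis)
            min_b, max_b = project(poly_b, axis)
            overlap = min(max_a, max_b) - max(min_a, min_b)
            if overlap <= 0:                            # separating axis found
                return None
            if overlap < best_overlap:                  # remember smallest overlap
                best_overlap, best_axis = overlap, axis
    # Orient the MTV so it pushes poly_a away from poly_b
    direction = poly_a.mean(axis=0) - poly_b.mean(axis=0)
    if np.dot(direction, best_axis) < 0:
        best_axis = -best_axis
    return best_overlap * best_axis

square   = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)
triangle = np.array([[1.5, 1.0], [3.5, 1.0], [2.5, 3.0]], float)
print(sat_mtv(square, triangle))   # [-0.5  0.] -> push the square 0.5 to the left
```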

12: Demonstrating Polygon Penetration Algorithm

• Title: Polygon Penetration Algorithm: Example

• Content:

o [Diagram Placeholder: Illustrate two slightly overlapping convex polygons]

o Example Scenario:

▪ Consider two convex polygons, Polygon A (e.g., a square) and Polygon B (e.g., a triangle).

▪ Step 1: Identify Axes:

▪ Axes for Polygon A: Normals to its 4 edges.

▪ Axes for Polygon B: Normals to its 3 edges.

▪ Step 2: Project Vertices: For each axis, project every vertex of Polygon A and
Polygon B onto that axis. This gives you two intervals (min/max projected
values) per axis.

▪ Step 3: Check Overlap:

▪ For each axis, compare the intervals.


▪ If, for any axis, the intervals do not overlap (e.g., A's projection is
[0,5] and B's is [7,10]), then there is NO collision. The algorithm
stops.

▪ If all axes show overlapping intervals (e.g., A's projection is [0,5] and
B's is [3,8]), a collision is detected.

▪ Step 4: Determine MTV:

▪ Among all axes that show overlap, calculate the magnitude of the
overlap for each.

▪ The axis with the smallest overlap determines the minimum translation vector (MTV). The direction of the MTV is the normal of that axis, and its magnitude is the smallest overlap.

o Illustration: Draw lines representing axes. Show how vertices project onto these
lines. Indicate the overlapping intervals and point out the axis corresponding to the
minimum penetration.

13: Polygon Penetration vs. Bounding Box Collision Detection

• Title: Polygon Penetration vs. Bounding Box Collision Detection

• Content:

o Bounding Box-Based Collision Detection:

▪ Concept: Encloses complex objects within simpler geometric primitives (bounding boxes, spheres, capsules, etc.). Collision is first checked between these simplified bounding volumes.

▪ Types: Axis-Aligned Bounding Boxes (AABB), Oriented Bounding Boxes (OBB), Bounding Spheres.

▪ How it Works: Check if bounding boxes/spheres overlap. If they do, a potential collision exists. If not, no collision.

▪ Advantages:

▪ Extremely Fast: Simple mathematical checks (e.g., min/max coordinates for AABB).

▪ Good for Broad-Phase: Excellent for quickly ruling out non-collisions between many objects.

▪ Disadvantages:

▪ Approximation: Prone to "false positives" (bounding boxes overlap, but the actual objects don't).

▪ No Penetration Depth: Doesn't tell you how much objects penetrate or in what direction.

▪ Tight Fit Issues: Less accurate for irregularly shaped objects.


o Polygon Penetration (SAT-based):

▪ Concept: Checks for exact collision between convex polygons using the
Separating Axis Theorem.

▪ How it Works: As explained previously, projects polygons onto axes and checks for overlap.

▪ Advantages:

▪ Precise: Provides exact collision detection (no false positives for convex shapes).

▪ Penetration Information: Yields the minimum translation vector (MTV), crucial for collision response.

▪ Handles Rotation: Naturally handles rotated polygons.

▪ Disadvantages:

▪ Computationally More Expensive: More calculations than simple bounding box checks.

▪ Limited to Convex Polygons (for SAT): Non-convex polygons need to be decomposed into convex parts.

▪ Still a "Narrow-Phase" Test: Often used after a broad-phase bounding box check to refine collision detection for potential overlaps.
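o Code sketch (illustrative): for contrast with the SAT sketch earlier, a broad-phase AABB overlap test reduces to a few comparisons. The box tuples below are invented examples.

```python
def aabb_overlap(box_a, box_b):
    """Axis-aligned bounding boxes given as (min_x, min_y, max_x, max_y)."""
    return (box_a[0] <= box_b[2] and box_b[0] <= box_a[2] and   # x intervals overlap
            box_a[1] <= box_b[3] and box_b[1] <= box_a[3])      # y intervals overlap

print(aabb_overlap((0, 0, 2, 2), (1.5, 1.0, 3.5, 3.0)))  # True: candidate for narrow-phase SAT
print(aabb_overlap((0, 0, 2, 2), (5.0, 5.0, 6.0, 6.0)))  # False: ruled out cheaply
```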

14: Importance of Simulation in Robotic Task Planning

• Title: Importance of Simulation in Robotic Task Planning

• Content:

o Definition: Robotic task planning involves determining a sequence of actions or operations a robot must perform to achieve a high-level goal (e.g., "assemble a car door"). Simulation provides a virtual environment in which these plans can be designed, tested, and optimized.

o Importance:

1. Safety: Test complex or dangerous scenarios without risking damage to expensive robots or injury to personnel.

2. Cost Reduction: Significantly reduces the need for physical prototypes and
repeated real-world trials, saving money on hardware, materials, and energy.

3. Time Efficiency: Allows for rapid iteration and testing of multiple task plans
and strategies much faster than with physical robots.

4. Debugging & Verification: Easier to identify logical errors, unexpected behaviors, or collisions in a controlled virtual environment.

5. Performance Optimization: Tune parameters, refine sequences, and optimize robot movements for speed, energy, or smoothness.

6. "What-If" Analysis: Explore different solutions or handle various uncertainties (e.g., part variations) without physical setup changes.

7. Training & Development: Train AI/ML models for complex tasks, or train human operators and engineers on robotic systems.

8. Offline Programming: Pre-program robot trajectories and logic, minimizing downtime on the actual production line.

o Examples:

1. Assembly Line Design: Simulate different robot layouts and task allocations
for a new product to find the most efficient setup.

2. Space Exploration: Simulate rover movements, sample collection, and instrument deployment on Mars to test autonomy and handle delays.

3. Surgical Robotics: Practice complex surgical procedures in a virtual environment before operating on patients.

4. Warehouse Automation: Test fleet management algorithms and pick-and-place strategies for thousands of items.

15: Components of a Robotic Task Planning Simulation Setup

• Title: Components of a Robotic Task Planning Simulation Setup

• Content:

o 1. Robot Model:

▪ Description: Kinematic (joint limits, links, DH parameters) and dynamic (mass, inertia, friction) properties of the robot. Includes end-effector/tool.

▪ Example: URDF (Universal Robot Description Format) files.

o 2. Environment Model:

▪ Description: 3D CAD models of the workspace, fixed obstacles (tables, walls), workpieces, fixtures, and other machinery.

▪ Example: STL, OBJ, or COLLADA files for objects.

o 3. Physics Engine:

▪ Description: Simulates realistic physical interactions: gravity, friction, collisions, joint limits, contact forces.

▪ Example: ODE (Open Dynamics Engine), Bullet, NVIDIA PhysX, MuJoCo, Gazebo's physics engine.

o 4. Sensor Models:
▪ Description: Simulate sensor data (e.g., LiDAR, cameras, force/torque
sensors, depth sensors) with realistic noise and limitations.

▪ Example: Simulated point clouds, rendered camera images, force readings during contact.

o 5. Controller/Planner Modules:

▪ Description: Implement and test path planning algorithms (A*, RRT), motion
controllers (PID, impedance control), and task-level planners.

▪ Example: ROS Navigation Stack, custom planning libraries.

o 6. Human-Machine Interface (HMI) / Visualization:

▪ Description: Graphical user interface for defining tasks, monitoring simulation, and visualizing robot movements and environment.

▪ Example: RViz, custom GUIs built with OpenGL or similar.

o 7. Scripting/Programming Interface:

▪ Description: Allows users to write code to define tasks, control the simulation, and analyze results.

▪ Example: Python APIs for simulation environments (e.g., CoppeliaSim, Isaac Sim).

16: Testing Grasp & Motion Strategies in Manipulator Simulations

• Title: Testing Grasp & Motion Strategies in Manipulators via Simulation

• Content:

o Grasp Strategy Testing:

▪ Methodology:

1. Define Grasp Poses: Specify potential gripper approach and grasp points on target objects.

2. Grasp Quality Metrics: Evaluate grasp stability (e.g., force closure, resistance to slippage) under simulated forces and torques.

3. Collision Checking: Ensure the gripper doesn't collide with the object or environment during approach, grasping, and retraction.

4. Kinematic Reachability: Check if the robot arm can reach the desired grasp pose without exceeding joint limits or self-colliding.

5. Tolerance Analysis: Simulate slight variations in object position/orientation to test grasp robustness.

▪ Benefits: Quickly test hundreds or thousands of grasp candidates without physical setup. Avoid damage to grippers or objects.
o Motion Strategy Testing:

▪ Methodology:

1. Path Planning Algorithms: Test different global and local path planning algorithms (e.g., RRT, PRM) for efficiency, collision avoidance, and smoothness.

2. Trajectory Generation: Simulate different acceleration/deceleration profiles and joint interpolation methods.

3. Collision Avoidance: Rigorously test the robot's ability to avoid both static and dynamic obstacles throughout its motion.

4. Constraint Satisfaction: Verify that joint limits, speed limits, and acceleration limits are respected.

5. Force Control (for compliant tasks): Simulate contact events and verify that desired contact forces are maintained.

6. Task Sequence Validation: Ensure the entire sequence of motions (e.g., pick, move, place) is feasible and collision-free.

▪ Benefits: Identify potential collisions, optimize movement time, ensure smooth and safe operation, and validate control strategies.

17: Role of "Source" and "Goal" Scenes in Task Planning

• Title: Role of "Source" and "Goal" Scenes in Task Planning

• Content:

o Scene: A snapshot of the environment's state at a particular moment, including the robot's configuration, the location and orientation of all objects, and the state of any relevant features (e.g., open/closed doors).

o Source Scene:

▪ Definition: Represents the initial state of the world before the robot begins a
task.

▪ Role:

▪ Initial Conditions: Defines the starting configuration of the robot and the layout of all relevant objects.

▪ Problem Definition: Establishes the "knowns" at the beginning of the planning process.

▪ State Representation: Serves as the first state in a state-space search or planning graph.

▪ Example: Robot is at charging station, parts A, B, C are in bin X, assembly fixture is empty.
o Goal Scene:

▪ Definition: Represents the desired final state of the world after the robot has
successfully completed the task.

▪ Role:

▪ Objective Specification: Clearly defines what needs to be achieved by the robot.

▪ Termination Condition: The planning process concludes when a path or sequence of actions leads to this state.

▪ Verification: Used to verify if a plan is successful.

▪ Example: Robot is at charging station, parts A, B, C are assembled on fixture Y, bin X is empty.

o Significance: Task planning is fundamentally about finding a sequence of actions that transforms the source scene into the goal scene while satisfying various constraints (e.g., collision avoidance, kinematic limits, temporal order).

18: Planning & Optimizing Transitions Between Source and Goal Scenes

• Title: Planning & Optimizing Transitions Between Source and Goal Scenes

• Content:

o Process:

1. High-Level Task Decomposition: Break down the overall transition from source to goal into a series of intermediate sub-goals or actions (e.g., "pick," "place," "move_to").

2. State-Space Search: Use AI planning algorithms (e.g., STRIPS, PDDL solvers, hierarchical planners) to find a logical sequence of actions that transform the current scene into the next desired scene, until the goal is reached.

3. Motion Planning for Each Action: For each high-level action (e.g., "pick A"),
detailed motion planning is performed to generate collision-free trajectories
for the robot's arm or base. This might involve:

▪ Pre-grasp approach: Moving to a position above the object.

▪ Grasp: Closing the gripper.

▪ Post-grasp retreat: Moving away from the object.

▪ Transfer motion: Moving the object to its destination.

4. Constraint Satisfaction: Ensure that all actions respect physical constraints (e.g., object stability during transport, robot joint limits, tool clearances).

5. Feedback and Replanning: In dynamic or uncertain environments, sensor feedback is used to update the current scene. If deviations occur, the plan is re-evaluated or re-optimized.
o Optimization:

1. Time Optimization: Minimize the total time to complete the task (e.g.,
shortest path, fastest joint speeds without violating limits).

2. Energy Efficiency: Minimize energy consumption (e.g., smooth motions, avoiding unnecessary accelerations).

3. Smoothness: Generate trajectories that are kinematically smooth and reduce wear and tear.

4. Collision Probability: Optimize for maximum clearance from obstacles.

5. Multi-Objective Optimization: Often involves balancing competing objectives.
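o Code sketch (illustrative): a toy illustration of the state-space search over symbolic scenes described above. The two STRIPS-like actions and their facts are invented for illustration; a real planner would use a PDDL solver and far richer domains.

```python
from collections import deque

# Toy STRIPS-like actions: (preconditions, facts added, facts deleted)
ACTIONS = {
    "pick_A":  ({"A_in_bin", "gripper_empty"}, {"holding_A"}, {"A_in_bin", "gripper_empty"}),
    "place_A": ({"holding_A"},                 {"A_on_fixture", "gripper_empty"}, {"holding_A"}),
}

def plan(source, goal):
    """Breadth-first search over symbolic scenes from source to goal."""
    frontier = deque([(frozenset(source), [])])
    visited = {frozenset(source)}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:                      # all goal facts hold in this scene
            return actions
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None

source = {"A_in_bin", "gripper_empty"}     # source scene
goal   = {"A_on_fixture"}                  # goal scene
print(plan(source, goal))                  # ['pick_A', 'place_A']
```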

19: Vision Systems for Source & Goal Scene Detection

• Title: Vision Systems for Detecting and Distinguishing Source & Goal Scenes

• Content:

o Role of Vision Systems: Crucial for autonomous robots to perceive and understand
their environment, allowing them to:

▪ Identify current objects and their poses (position and orientation).

▪ Detect the presence or absence of specific components.

▪ Verify the completion of sub-tasks.

▪ Adapt to variations in the environment.

o How Implemented:

▪ Scene Understanding:

▪ Object Detection: Using deep learning (e.g., CNNs like YOLO, Faster
R-CNN) to identify specific objects (e.g., "peg," "hole," "assembly
base") within the camera's field of view.

▪ Instance Segmentation: Precisely outlining each identified object.

▪ 3D Pose Estimation: Using techniques like ICP (Iterative Closest Point) with depth cameras (e.g., Intel RealSense, Azure Kinect) or multi-view stereo to determine the 3D position and orientation of objects.

▪ Scene Comparison:

▪ Source Scene Detection: Compare the current perceived scene (from camera data) with a stored model of the expected source scene. This confirms the task is ready to begin.

▪ Goal Scene Detection/Verification: After actions, the vision system re-scans the environment. It compares the current perceived scene with the desired goal scene model to verify task completion.

▪ Feature Matching: Use feature descriptors (e.g., SIFT, ORB) to match features between the current image and a reference image representing the goal scene.

▪ Semantic Segmentation: Identify different regions or objects in the scene to understand its overall configuration.

▪ Adaptation to Changes: Vision systems allow the robot to handle slight variations in object placement or dynamically changing environments, which is critical for robust task execution.
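o Code sketch (illustrative): one way to implement the feature-matching step with OpenCV's ORB descriptors. The image paths and the match-count threshold are placeholders/assumptions, not values from the source.

```python
import cv2

def matches_goal_scene(current_path, goal_path, min_matches=40):
    """Compare the current view against a goal-scene reference image using ORB features."""
    img_cur = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)
    img_goal = cv2.imread(goal_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_cur, None)
    kp2, des2 = orb.detectAndCompute(img_goal, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # Keep only reasonably close descriptor matches before counting them
    good = [m for m in matches if m.distance < 50]
    return len(good) >= min_matches

# Example (hypothetical file names):
# print(matches_goal_scene("current_view.png", "goal_scene_reference.png"))
```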

20: Sub-Problems of Task Planning in Robotic Systems

• Title: Three Key Sub-Problems of Task Planning in Robotic Systems

• Content:

o 1. What-to-do (Action Sequence Planning / High-Level Planning):

▪ Explanation: Determining the logical sequence of discrete actions a robot needs to perform to achieve a high-level goal, without worrying about how those actions are physically executed. This is often symbolic AI planning.

▪ Example: To "assemble a chair," this sub-problem determines: (1) Pick leg 1, (2) Attach leg 1 to seat, (3) Pick leg 2, (4) Attach leg 2 to seat...

o 2. Where-to-go (Path and Motion Planning):

▪ Explanation: Given a high-level action (e.g., "move to grasp object"), this sub-problem determines the specific, collision-free trajectory in the robot's configuration space (C-space) for its joints or base to execute that action. This is the geometric aspect.

▪ Example: For "pick leg 1," this determines the precise joint angles and
velocities for the robot arm to move from its current pose to a pre-grasp
pose, then to the grasp pose, and then to a safe retreat pose.

o 3. How-to-do (Execution Control / Manipulation Primitives):

▪ Explanation: How the robot physically interacts with the environment to execute a specific action. This involves low-level control, sensory feedback, and dealing with contact and forces.

▪ Example: For "attach leg 1 to seat," this involves:

▪ Fine alignment using visual or force feedback.

▪ Applying controlled insertion force.

▪ Detecting successful insertion.


21: Addressing Task Planning Sub-Problems in Industrial Assembly

• Title: Addressing Task Planning Sub-Problems in Industrial Robotic Arms (Assembly)

• Content:

o Scenario: Assembling a car door using an industrial robotic arm.

o 1. What-to-do (Action Sequence Planning):

▪ Approach: Often pre-programmed based on product design and assembly flow. CAD models of the car door and components are used to define the assembly sequence. Task planners (sometimes human-in-the-loop, sometimes automated) define: "pick window regulator," "fasten window regulator," "pick door handle," "mount door handle," etc.

▪ Tools: Specialized assembly planning software, sometimes integrated with PLM (Product Lifecycle Management) systems.

o 2. Where-to-go (Path and Motion Planning):

▪ Approach: Offline programming in simulation software is common. Robot paths are generated and optimized for collision avoidance (with the car door, fixture, other robots) and cycle time. For complex paths, algorithms like RRT-Connect or PRM are used. Trajectories are then taught-in or exported to the robot controller.

▪ Tools: Robot simulation software (e.g., Process Simulate, RoboDK, KUKA.Sim, ABB RobotStudio), motion planning libraries (e.g., OMPL).

o 3. How-to-do (Execution Control / Manipulation Primitives):

▪ Approach: This is where sensor feedback becomes critical.

▪ Vision-Guided Picking: Cameras detect the exact pose of parts in bins for precise picking.

▪ Force-Compliant Insertion: For fastening bolts or inserting components, force-torque sensors on the robot wrist provide feedback. The robot executes compliant motions (e.g., spiral search, force control) to handle misalignments.

▪ Error Handling: Logic is built into the robot program to react to contact forces exceeding thresholds (e.g., retry insertion, signal an error).

▪ Tools: Robot programming languages (KRL, RAPID), force control libraries, machine vision systems.

22: Interdependence of Task Planning Sub-Problems

• Title: Interdependence of Task Planning Sub-Problems with Examples


• Content:

o The three sub-problems (What-to-do, Where-to-go, How-to-do) are highly interdependent and cannot be solved in isolation.

o Example: Assembling a Component (e.g., snapping two plastic parts together):

▪ 1. What-to-do (Action Sequence): The high-level plan decides "Pick Part A,"
then "Pick Part B," then "Assemble A to B."

▪ Interdependence 1 (What-to-do informs Where-to-go): The choice of "Assemble A to B" directly dictates the need for specific motion plans: the robot must move Part A to Part B's location, ensuring their snap-fit features align. This constrains the where-to-go planning for A and B.

▪ 2. Where-to-go (Path and Motion): The motion planner designs collision-free trajectories for picking A, picking B, and bringing them together.

▪ Interdependence 2 (Where-to-go informs How-to-do): The final approach path defined by where-to-go heavily influences the how-to-do (e.g., if the approach is straight down, insertion might need active force control; if it's angled, different compliant strategies apply). A collision during where-to-go means the how-to-do cannot proceed.

▪ 3. How-to-do (Execution Control): The robot executes the precise motion to snap the parts, using force feedback to detect successful engagement.

▪ Interdependence 3 (How-to-do informs What-to-do/Where-to-go): If the how-to-do fails (e.g., parts don't snap, or forces are too high), it feeds back. This might trigger a re-plan of the where-to-go (e.g., trying a slightly different approach) or even a re-evaluation of the what-to-do (e.g., if the parts are incompatible, re-order, or call for human intervention).

o Conclusion: Solving one sub-problem often reveals constraints or opportunities for the others, necessitating an iterative or hierarchical approach.

23: Scene Analysis in Robotics & Implementation in Autonomous Systems

• Title: Scene Analysis in Robotics & Implementation in Autonomous Systems

• Content:

o Definition: Scene analysis in robotics refers to the process by which an autonomous system perceives, interprets, and understands its surrounding environment from raw sensor data. It involves recognizing objects, determining their properties (pose, identity), understanding relationships between objects, and identifying free space and obstacles.

o Implementation in Autonomous Systems (e.g., Autonomous Vehicles):

1. Sensor Data Acquisition:


▪ Input: Cameras (monocular, stereo, surround-view), LiDAR (3D point
clouds), Radar (distance, velocity), Ultrasonic sensors.

2. Perception (Low-Level Processing):

▪ Object Detection & Recognition: Deep learning models (CNNs) identify pedestrians, vehicles, traffic signs, lane markings, etc.

▪ Object Tracking: Kalman Filters, Particle Filters, or more advanced trackers estimate object trajectories over time.

▪ Semantic Segmentation: Pixel-level classification of the scene (e.g., road, sidewalk, sky, building).

▪ 3D Reconstruction: Using stereo vision or LiDAR to build a 3D map of the environment and estimate object depths/poses.

3. Scene Understanding (High-Level Reasoning):

▪ Situation Awareness: Combining information from multiple sensors (sensor fusion) to build a comprehensive, coherent understanding of the dynamic scene (e.g., "that is a pedestrian crossing the road at 5 mph").

▪ Behavior Prediction: Predicting the future actions of other agents (e.g., "that car is likely to turn left").

▪ Mapping: Building and updating a local or global map of the environment, including static obstacles and dynamic elements.

4. Decision Making & Planning:

▪ The derived scene understanding feeds into the decision-making module, which then informs the path planning and control modules (e.g., "slow down," "change lane," "stop").

o Overall Goal: To create a rich, actionable representation of the world for the robot to
safely and effectively navigate and interact.

24: Optimal Part Ordering in Assembly Line Using Scene Analysis

• Title: Optimal Part Ordering in Assembly Line Using Scene Analysis

• Content:

o Problem: In a robotic assembly line, parts often arrive in bins or on conveyors in arbitrary order. Optimal part ordering (sequencing) can significantly impact efficiency. Scene analysis is key to achieving this.

o Method for Optimal Part Ordering:

1. Scene Capture: Robot's vision system (e.g., overhead camera, 3D sensor) captures the current state of the parts presentation area (e.g., a bin of mixed parts).

2. Part Identification and Pose Estimation:

▪ Scene analysis algorithms (e.g., trained deep learning models) detect and identify each individual part instance in the bin (e.g., "Part A," "Part B," "Part C").

▪ For each identified part, its precise 3D pose (position and orientation) is estimated.

3. Grasp Planning and Collision Checking:

▪ For each identified part, potential grasp points and approach paths
for the robot gripper are computed.

▪ Crucially, collision checks are performed to ensure that picking one part does not cause a collision with other parts in the bin or the bin itself.

4. Feasibility Analysis: Determine which parts are "pickable" at the current moment (i.e., reachable and collision-free to grasp).

5. Optimization Criteria:

▪ Task Sequence Constraint: The assembly process often dictates a fixed order (e.g., "Part A must be assembled before Part B").

▪ Accessibility: Prioritize parts that are easiest to reach and grasp (e.g., on top, least occluded).

▪ Minimizing Robot Motion: Choose parts that require minimal robot movement or reorientation to pick.

▪ Tool Change Minimization: If different tools are needed, prioritize picking all parts requiring one tool before switching.

6. Dynamic Re-ordering: As parts are picked, the scene changes. The scene analysis and optimization process continuously re-evaluates the "pickable" parts and re-orders the remaining parts based on the updated scene, ensuring dynamic efficiency.
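o Code sketch (illustrative): a greedy version of the re-ordering loop described above. The part data, occlusion sets, and cost weights are invented for illustration; a real system would derive them from the estimated poses and grasp feasibility of steps 1-4.

```python
def order_parts(parts, gripper_pos):
    """Greedy ordering: repeatedly pick the cheapest currently feasible part.

    Each part is a dict with 'name', 'pos' (x, y), 'occluded_by' (set of names),
    and 'tool'. Cost trades off travel distance against tool changes.
    """
    remaining, order, current_tool = list(parts), [], None
    pos = gripper_pos
    while remaining:
        present = {q["name"] for q in remaining}
        feasible = [p for p in remaining if not (p["occluded_by"] & present)]
        def cost(p):
            travel = ((p["pos"][0] - pos[0])**2 + (p["pos"][1] - pos[1])**2) ** 0.5
            tool_change = 0.0 if p["tool"] == current_tool else 5.0   # illustrative penalty
            return travel + tool_change
        best = min(feasible, key=cost)
        order.append(best["name"])
        pos, current_tool = best["pos"], best["tool"]
        remaining.remove(best)
    return order

parts = [
    {"name": "A", "pos": (0.1, 0.2), "occluded_by": set(), "tool": "gripper"},
    {"name": "B", "pos": (0.3, 0.1), "occluded_by": {"A"}, "tool": "gripper"},
    {"name": "C", "pos": (0.2, 0.4), "occluded_by": set(), "tool": "suction"},
]
print(order_parts(parts, gripper_pos=(0.0, 0.0)))   # ['A', 'B', 'C']
```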

25: How Part Ordering Affects Task Completion Time and Efficiency

• Title: How Part Ordering Affects Task Completion Time & Efficiency

• Content:

o Direct Impact on Task Completion Time:

▪ Reduced Robot Travel Time: Optimal ordering can minimize the distance the
robot's end-effector has to travel between pick-up and drop-off locations.

▪ Fewer Tool Changes: If a task requires multiple tools, picking all parts that
use Tool A before switching to Tool B saves significant time compared to
frequent tool changes.
▪ Minimized Jitter/Oscillation: Planning smooth transitions between pick
locations.

▪ Optimized Joint Movements: More efficient joint trajectories can reduce overall cycle time.

o Impact on Efficiency:

▪ Increased Throughput: Faster completion times directly translate to higher production rates.

▪ Reduced Energy Consumption: Less motion and more efficient trajectories use less power.

▪ Reduced Wear and Tear: Smoother, more optimized motions reduce stress
on robot joints and motors, extending robot lifespan.

▪ Minimized Collision Risk: Planning based on accessibility ensures that picking one part doesn't disturb or collide with others, reducing rework.

▪ Improved System Robustness: A well-ordered plan is less prone to unexpected errors or jams, leading to more reliable operation.

▪ Example: Imagine assembling a product with 10 different screws. If the robot has to pick screw A, fasten it, then pick screw B, fasten it, and so on, it's very inefficient. If it can pick all 10 screws in one go, then fasten them sequentially, it's much faster.

26: Key AI Components in Autonomous Vehicle Decision-Making & Navigation

• Title: Key AI Components in Autonomous Vehicle Decision-Making & Navigation

• Content:

o 1. Perception (Computer Vision & Sensor Fusion):

▪ Role: Interprets raw sensor data (cameras, LiDAR, radar) to build a comprehensive understanding of the environment.

▪ AI Techniques:

▪ Deep Learning (CNNs): Object detection (vehicles, pedestrians, cyclists), semantic segmentation (road, lane lines, sidewalks, sky), traffic sign/light recognition.

▪ Sensor Fusion (Kalman Filters, Particle Filters, Extended Kalman Filters): Combines data from multiple sensors to create a more robust and accurate representation of the environment, tracking dynamic objects, and estimating ego-vehicle pose.

o 2. Prediction:

▪ Role: Forecasts the future behavior and trajectories of other dynamic agents
(vehicles, pedestrians).
▪ AI Techniques:

▪ Recurrent Neural Networks (RNNs/LSTMs): Learn temporal patterns in agent movements.

▪ Probabilistic Graphical Models: Model uncertainties in predictions.

▪ Behavioral Models: Rule-based or learned models of typical road user behavior.

o 3. Planning (Path Planning & Behavioral Planning):

▪ Role: Determines the vehicle's optimal future actions (path, speed, lane
changes) to reach the destination safely and efficiently.

▪ AI Techniques:

▪ Reinforcement Learning (RL): Learn optimal driving policies through trial and error in simulated environments.

▪ Graph Search Algorithms (A*, Dijkstra): For global path planning.

▪ Model Predictive Control (MPC): Optimizes control inputs over a future horizon, handling constraints.

▪ Behavior Trees / State Machines: Define high-level driving behaviors (e.g., "follow lane," "change lane," "turn at intersection").

o 4. Localization & Mapping (SLAM - Simultaneous Localization and Mapping):

▪ Role: Determines the vehicle's precise position and orientation within a map, and continuously builds/updates the map.

▪ AI Techniques:

▪ Probabilistic Methods (Kalman Filters, Particle Filters, Graph SLAM): Estimate state and map with uncertainty.

▪ Neural Networks: For feature extraction in visual SLAM.

27: Sensor Fusion in Autonomous Vehicle Navigation

• Title: Sensor Fusion for Autonomous Vehicle Navigation

• Content:

o Definition: Sensor fusion is the process of combining data from multiple diverse
sensors (e.g., cameras, LiDAR, radar, GPS, IMU) to obtain a more accurate, reliable,
and complete understanding of the environment and the vehicle's own state than
any single sensor could provide alone.

o How it Assists Navigation:

1. Increased Robustness & Redundancy:


▪ Handles Sensor Failures: If one sensor fails or is occluded (e.g.,
camera in heavy rain), other sensors can compensate.

▪ Mitigates Sensor Limitations: Cameras provide rich semantic information but struggle with depth at long range or in low light. LiDAR excels at precise 3D mapping but might be affected by fog. Radar is robust in adverse weather and provides velocity but has lower spatial resolution. Fusion combines strengths.

2. Enhanced Accuracy:

▪ Combining noisy measurements from different sensors can lead to a more precise estimate of an object's position, velocity, or the vehicle's own pose (e.g., fusing GPS with IMU for accurate localization).

3. Improved Object Detection & Tracking:

▪ LiDAR provides precise range and depth for object geometry. Camera provides visual features for classification. Radar provides accurate velocity measurements. Fusing these allows for robust detection, classification, and tracking of dynamic objects.

4. Better Environment Understanding:

▪ Creates a richer, more comprehensive 3D model of the surroundings, essential for path planning and obstacle avoidance.

5. Dealing with Ambiguity:

▪ A single sensor might have ambiguous readings (e.g., a camera might struggle to differentiate between a shadow and a pothole). Fusion can resolve such ambiguities using complementary information.

o Common Techniques: Kalman Filters, Extended Kalman Filters (EKF), Unscented Kalman Filters (UKF), Particle Filters, Deep Learning-based fusion architectures.
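o Code sketch (illustrative): a minimal 1-D Kalman filter predict/update cycle to make the fusion idea concrete. The noise variances and measurements are toy values; a real AV fuses multi-dimensional states with EKF/UKF variants.

```python
def kalman_1d(x, p, u, z, q=0.01, r=0.5):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p : previous state estimate and its variance
    u    : commanded displacement this step (motion model)
    z    : noisy position measurement (e.g., GPS-like)
    q, r : process and measurement noise variances (toy values)
    """
    # Predict: propagate the state and grow its uncertainty
    x_pred, p_pred = x + u, p + q
    # Update: blend prediction and measurement by their relative confidence
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [1.1, 2.3, 2.9, 4.2]:         # noisy measurements of positions 1, 2, 3, 4
    x, p = kalman_1d(x, p, u=1.0, z=z)
print(round(x, 2), round(p, 3))
```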

28: Ethical & Technical Challenges of Autonomous Vehicles in Public Environments

• Title: Ethical & Technical Challenges of Autonomous Vehicles in Public Environments

• Content:

o Ethical Challenges:

1. The Trolley Problem: In unavoidable accident scenarios, how should the AV be programmed to prioritize (e.g., minimize harm to occupants vs. pedestrians)? Who decides these rules, and are they universally acceptable?

2. Responsibility and Liability: Who is legally responsible in case of an accident involving an AV (manufacturer, software developer, owner, operator)?

3. Job Displacement: Mass adoption of AVs could displace professional drivers (taxis, trucks), leading to significant societal disruption.

4. Privacy: AVs collect vast amounts of data about their surroundings and occupants. How is this data secured and used?

5. Equity and Access: Will AV technology exacerbate social inequalities if it's only accessible to certain demographics or areas?

o Technical Challenges:

1. Robustness in Adverse Conditions: Reliable operation in heavy rain, snow, fog, bright sunlight, or unusual lighting conditions.

2. Perception of Edge Cases/Novelty: Handling rare, unusual, or unseen scenarios (e.g., unpredictable human behavior, unusual road debris, non-standard signage).

3. Sensor Limitations & Failures: Dealing with sensor degradation, malicious interference, or complete failure.

4. Predicting Human Behavior: Accurately predicting the actions of highly unpredictable pedestrians, cyclists, and human drivers.

5. Validation & Verification: Proving the safety and reliability of complex AI systems, especially when they exhibit emergent behaviors.

6. Cybersecurity: Protecting AVs from hacking, jamming, or other cyber threats.

7. Mapping & Localization in Dynamic Environments: Maintaining accurate maps and localization in areas with frequent construction, changing traffic patterns, or adverse GPS conditions.

8. Ethical Decision-Making Implementation: Translating complex ethical considerations into unambiguous, executable code.

29: AI and ML in Space Missions (Chandrayaan/Mars Rovers)

• Title: AI & ML in Space Missions: Chandrayaan & Mars Rovers

• Content:

o Overall Role: AI and ML are critical for enhancing autonomy, scientific discovery,
fault detection, and mission efficiency in deep space, where human intervention is
delayed or impossible.

o Specific Applications:

1. Autonomous Navigation (Mars Rovers):

▪ Path Planning: AI algorithms (e.g., "AutoNav" for Curiosity/Perseverance) analyze terrain data (from stereo cameras, LiDAR) to autonomously plan safe, efficient paths around obstacles, craters, and dangerous slopes. Reduces reliance on ground control for every meter.

▪ Hazard Avoidance: ML models identify "keep-out" zones (rocks, deep sand) based on visual features.

▪ Localization: Vision-based odometry uses ML to track movement by matching features in consecutive images.

2. Target Selection & Scientific Autonomy:

▪ Target Identification (e.g., "AEGIS" on Mars Rovers): AI analyzes images to autonomously detect scientifically interesting features (e.g., specific rock formations, mineral veins) and prioritize them for further investigation (e.g., taking close-up images, deploying instruments).

▪ Data Prioritization: ML algorithms can identify high-value scientific data to transmit first when bandwidth is limited.

3. Resource Management:

▪ Power Management: AI optimizes power usage for instruments and locomotion based on mission goals and available solar energy.

▪ Thermal Control: ML models predict optimal heating/cooling cycles.

4. Fault Detection and Diagnostics:

▪ ML algorithms monitor telemetry data for anomalies, predicting potential failures and suggesting corrective actions before critical events occur.

▪ Chandrayaan: While public details on ML in Chandrayaan's autonomous navigation are less explicit than for the Mars rovers, it would leverage AI/ML for:

▪ Autonomous Landing: Image processing for terrain-relative navigation and hazard detection during descent.

▪ Orbiter Data Analysis: AI for classifying lunar surface features and identifying ice deposits from spectral data.

30: Task Planning Challenges for Mars Rovers During Terrain Traversal

• Title: Task Planning Challenges for Mars Rovers During Terrain Traversal

• Content:

o 1. High Latency and Communication Delays:

▪ Challenge: Round-trip communication times to Mars range from several minutes to over 40 minutes. Direct teleoperation is impossible.
▪ Impact on Planning: Requires high levels of onboard autonomy for path
planning, decision-making, and error handling. Plans are sent in batches, and
the rover executes them autonomously.

o 2. Unknown and Unpredictable Terrain:

▪ Challenge: The Martian surface is highly varied with rocks, sand dunes,
craters, slopes, and potential hazards. Maps are incomplete.

▪ Impact on Planning: Requires robust local perception and reactive planning to avoid immediate obstacles and navigate unknown terrain. Path planners must prioritize safety over optimality.

o 3. Limited Resources (Power, Computational):

▪ Challenge: Solar power availability varies; limited onboard computational power.

▪ Impact on Planning: AI algorithms must be computationally efficient. Balancing exploration vs. power-hungry activities.

o 4. Robot Mobility and Kinematic Constraints:

▪ Challenge: Rovers have complex rocker-bogie suspensions, which can handle rough terrain but also introduce complex kinematic constraints.

▪ Impact on Planning: Planning must account for stability, wheel slip, and the
ability to traverse different terrain types.

o 5. Scientific Objectives vs. Safety:

▪ Challenge: Balancing the need to reach scientifically interesting targets with ensuring the rover's safety. Sometimes the most interesting terrain is also the most dangerous.

▪ Impact on Planning: Requires sophisticated decision-making to weigh scientific value against risk, potentially needing human override for high-risk maneuvers.

o 6. Long-Term Mission Degradation:

▪ Challenge: Dust accumulation, radiation, and temperature extremes cause wear and tear, and potential component failures.

▪ Impact on Planning: Autonomy needs to monitor system health, adapt planning to degraded capabilities, or even self-diagnose and perform basic repairs.

31: Robotic Autonomy: Critical for Space Exploration Missions

• Title: Robotic Autonomy: Critical for Space Exploration Missions

• Content:

o Why Autonomy is Paramount:


1. Overcoming Communication Delays: As distances to planets increase (Mars,
Jupiter's moons), light-speed communication delays become prohibitive for
direct human control. Autonomy allows robots to react immediately to
unforeseen circumstances.

2. Operating in Hostile Environments: Many space environments are lethal to humans (vacuum, radiation, extreme temperatures, toxic atmospheres). Autonomous robots can perform tasks without life support.

3. Unstructured and Unknown Environments: Robots encounter unpredictable terrains, weather, and scientific targets. Autonomy allows them to adapt, explore, and make decisions without constant human supervision.

4. Resource Optimization: Onboard AI can manage power, thermal, and data resources efficiently, extending mission lifespan and maximizing scientific return.

5. Increased Productivity: Autonomous robots can operate 24/7 (solar permitting) and process data much faster than humans, maximizing data collection and scientific discovery.

6. Enabling Complex Science: Autonomy enables sophisticated scientific tasks like autonomous drilling, sample acquisition, and in-situ analysis, requiring precise interaction with the environment.

7. Human Safety: Keeping humans out of dangerous situations, especially during initial reconnaissance and hazardous operations.

o Specific Examples:

1. Mars Rovers (Spirit, Opportunity, Curiosity, Perseverance): Their "AutoNav" and "AEGIS" systems allow them to autonomously traverse terrain and identify scientific targets, greatly extending their exploration range and scientific output beyond what teleoperation alone could achieve. Perseverance's sample caching also relies on high levels of autonomy.

2. Deep Space Probes (e.g., Voyager, Cassini): While not "robots" in the typical
sense, they have increasing levels of onboard autonomy for fault detection,
anomaly resolution, and scientific data management due to immense
communication delays.

3. Future Missions (e.g., Europa Clipper, Lunar South Pole exploration): Will
rely on advanced autonomy for navigating complex icy terrains, drilling into
sub-surface oceans, and operating in permanently shadowed regions where
direct human control is impossible.
