Module 4
• Title: Method to Compute Sector Boundaries for Mobile Robots in Unknown Environments
• Content:
o Definition: Sector boundaries divide the robot's surrounding environment (often its
local perception field) into angular "sectors" or "cones." These boundaries are
typically defined by obstacles or free space openings.
o Computation:
▪ Usually based on sensor data (e.g., LiDAR scans, depth camera point clouds).
▪ Points from obstacles are projected onto a 2D plane around the robot.
▪ Angular sorting of these points helps identify gaps (free space) and obstacle
clusters.
▪ Sector boundaries are drawn along the angular limits of these gaps or the
edges of obstacle clusters.
o Role in Local Navigation: Helps the robot quickly identify clear paths and steer away from immediate obstacles (see the sketch below).
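A minimal sketch of the gap-finding step above, assuming a simple 2D scan given as (angle, range) pairs; the function name sector_boundaries and the max_range/min_gap thresholds are illustrative choices, not from the source:

```python
import math

def sector_boundaries(scan, max_range=5.0, min_gap=0.3):
    """Find angular boundaries between obstacle clusters and free-space gaps.

    `scan` is a list of (angle_rad, range_m) pairs sorted by angle.
    A reading at or beyond max_range is treated as free space.
    """
    boundaries = []
    for (a1, r1), (a2, r2) in zip(scan, scan[1:]):
        free1, free2 = r1 >= max_range, r2 >= max_range
        # A boundary sits where the scan switches between obstacle and
        # free space, or where a large radial jump separates two clusters.
        if free1 != free2 or (not free1 and abs(r1 - r2) > min_gap):
            boundaries.append((a1 + a2) / 2.0)
    return boundaries

# Toy scan: walls on both sides, a 60-degree opening straight ahead.
scan = [(math.radians(a), 1.0 if abs(a) > 30 else 5.0)
        for a in range(-90, 91, 5)]
print([round(math.degrees(b), 1) for b in sector_boundaries(scan)])
# -> [-32.5, 32.5]
```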
• Content:
o Advantages:
o Challenges:
▪ Local Minima: Can lead to oscillations or getting stuck if the robot faces a
symmetric obstacle configuration.
• Content:
o Challenges: Despite its apparent simplicity for humans, the peg-in-hole insertion task is highly challenging for robots due to:
▪ Tight Tolerances: The peg and hole often have very small clearances (e.g.,
microns to millimeters).
o Significance:
▪ Foundation for Complex Tasks: Solving peg-in-hole lays the groundwork for more intricate manipulation tasks.
• Content:
o 1. Pose and Calibration Errors:
▪ Impact: The robot believes its end-effector is in one position, but it's slightly off, leading to misalignment.
o 2. Part Tolerances:
▪ Issue: The peg or hole may not be perfectly manufactured (e.g., slightly oval, tapered, off-center).
▪ Impact: Even if the robot is perfectly aligned, the physical parts themselves introduce errors.
o 3. Sensor Limitations:
▪ Issue: Vision systems have pixel resolution limits; force/torque sensors have noise and drift.
o 4. Environmental Variations:
▪ Impact: The robot might "push" against the hole instead of sliding in.
▪ Impact: Small angular errors can lead to large positional errors at the peg tip,
causing jamming.
• Content:
▪ Goal: Minimize the positional and angular misalignment between the peg's
axis and the hole's axis.
▪ Let the peg's pose be P_p = (x_p, y_p, z_p, α_p, β_p, γ_p) and the hole's pose be P_h = (x_h, y_h, z_h, α_h, β_h, γ_h).
▪ Constraints:
▪ Torques: T_contact ≤ T_max.
▪ Hierarchical Planning:
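As a rough illustration of the alignment objective above, a hedged sketch that scores the positional and angular misalignment between P_p and P_h; the misalignment helper and the naive per-angle Euler difference are simplifying assumptions (a real system would compare rotations properly, e.g., via quaternions):

```python
import numpy as np

def misalignment(peg_pose, hole_pose):
    """Positional and angular error between two (x, y, z, α, β, γ) poses."""
    peg, hole = np.asarray(peg_pose, float), np.asarray(hole_pose, float)
    pos_err = np.linalg.norm(peg[:3] - hole[:3])
    # Wrap each Euler-angle difference into [-pi, pi) before taking the norm;
    # a naive per-angle difference, adequate only for small misalignments.
    ang = (peg[3:] - hole[3:] + np.pi) % (2 * np.pi) - np.pi
    return pos_err, np.linalg.norm(ang)

# Peg 2 mm off laterally and ~1 degree tilted relative to the hole axis.
print(misalignment((0.002, 0.0, 0.05, 0.017, 0.0, 0.0),
                   (0.0,   0.0, 0.05, 0.0,   0.0, 0.0)))
# -> approximately (0.002, 0.017)
```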
• Content:
o Differential Drive Robot: A robot with two independent wheels, usually driven by
separate motors, allowing it to move forward/backward and rotate by varying the
speeds of the wheels.
o Simulation Process:
▪ Let v_L and v_R be the linear velocities of the left and right wheels.
▪ With wheel separation L, the robot's body velocities are v = (v_R + v_L)/2 and ω = (v_R − v_L)/L.
▪ Δx = v·cos(θ)·Δt
▪ Δy = v·sin(θ)·Δt
▪ Δθ = ω·Δt
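These update equations translate directly into a one-step Euler integrator. A minimal sketch; the function name diff_drive_step and the wheel-separation and velocity values are illustrative:

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_sep, dt):
    """One Euler-integration step of the differential-drive model above."""
    v = (v_right + v_left) / 2.0            # forward velocity
    omega = (v_right - v_left) / wheel_sep  # angular (turn) velocity
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Gentle left arc: right wheel slightly faster, 10 ms steps for 5 s.
pose = (0.0, 0.0, 0.0)
for _ in range(500):
    pose = diff_drive_step(*pose, v_left=0.9, v_right=1.0,
                           wheel_sep=0.3, dt=0.01)
```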
• Content:
o Impact of Uncertainties:
▪ Sensor Noise: Simulated sensors (e.g., range finders) should include noise models; without them, the simulation is overly optimistic.
▪ Localization Errors: Simulated robot's "true" position might drift from its
"estimated" position if a localization model (e.g., SLAM) is not included.
▪ Discretization Errors: Large time steps (Δt) or coarse environmental grids can
lead to accumulated errors or missed collisions.
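As a sketch of the sensor-noise point above, a simple Gaussian noise model for a simulated range finder; the sigma and max_range values are arbitrary assumptions:

```python
import random

def noisy_range(true_range, sigma=0.02, max_range=10.0):
    """Range-finder reading with Gaussian noise, clamped to sensor limits."""
    return min(max(random.gauss(true_range, sigma), 0.0), max_range)

# Ten simulated readings of a wall 2.5 m away.
readings = [noisy_range(2.5) for _ in range(10)]
```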
• Content:
o Simple Simulation Framework Design:
▪ Components:
▪ Main Loop: Iterates through time steps, updates robot state, checks collisions, renders (see the sketch below).
o Real-World Implications:
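A minimal, self-contained sketch of the main loop described under Components above; the Robot class, in_collision helper, and all values are stand-ins, not from the source:

```python
import math

class Robot:
    """Stand-in point robot moving at constant velocity."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.vx, self.vy = 0.5, 0.2

    def update(self, dt):
        self.x += self.vx * dt
        self.y += self.vy * dt

def in_collision(robot, obstacles, radius=0.1):
    """Point-vs-circle check against (x, y) obstacle centers."""
    return any(math.hypot(robot.x - ox, robot.y - oy) < radius
               for ox, oy in obstacles)

def run_simulation(steps=1000, dt=0.01):
    robot, obstacles = Robot(), [(2.0, 0.8)]
    for step in range(steps):
        robot.update(dt)                    # update robot state
        if in_collision(robot, obstacles):  # check collisions
            print(f"collision at step {step}")
            break
        # rendering/logging of the scene would happen here each iteration

run_simulation()
```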
• Content:
o Concept: A robust method for collision detection between convex polygons (applied pairwise). It determines whether two polygons overlap and, if so, by how much (the penetration depth) and in what direction (the minimum translation vector, MTV).
▪ Identify Potential Axes: For each polygon, consider the normal vectors to
each of its edges as potential separating axes. Also, for 3D, consider cross-
products of edge directions.
▪ Project Vertices: Project all vertices of both polygons onto each candidate
separating axis.
▪ Check for Overlap: For each axis, determine the minimum and maximum
projected values for both polygons, creating intervals.
▪ Collision Condition: If there is any axis where the projected intervals do not
overlap, then the polygons are not colliding.
o Benefit: Provides not just a boolean "collision/no collision" but also quantitative
information (MTV) useful for collision response (e.g., pushing objects apart).
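A hedged 2D implementation sketch of SAT as described above; sat_collision, project, and edge_normals are illustrative names, and the polygons are assumed convex and given as (x, y) vertex lists:

```python
import math

def project(poly, axis):
    """Project a polygon's vertices onto an axis -> (min, max) interval."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def edge_normals(poly):
    """Unit normals of each edge; these are the candidate separating axes."""
    normals = []
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        nx, ny = y2 - y1, x1 - x2                # perpendicular to the edge
        length = math.hypot(nx, ny)
        normals.append((nx / length, ny / length))
    return normals

def sat_collision(poly_a, poly_b):
    """SAT test for two convex 2D polygons.

    Returns (False, None) if a separating axis exists, otherwise
    (True, (axis, depth)) where axis/depth describe the MTV.
    """
    best_axis, best_depth = None, float("inf")
    for axis in edge_normals(poly_a) + edge_normals(poly_b):
        min_a, max_a = project(poly_a, axis)
        min_b, max_b = project(poly_b, axis)
        overlap = min(max_a, max_b) - max(min_a, min_b)
        if overlap <= 0:          # separating axis found: no collision
            return False, None
        if overlap < best_depth:  # remember the minimum-penetration axis
            best_axis, best_depth = axis, overlap
    return True, (best_axis, best_depth)
```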
• Content:
o Example Scenario:
▪ Step 2: Project Vertices: For each axis, project every vertex of Polygon A and
Polygon B onto that axis. This gives you two intervals (min/max projected
values) per axis.
▪ If all axes show overlapping intervals (e.g., A's projection is [0,5] and
B's is [3,8]), a collision is detected.
▪ Among all axes that show overlap, calculate the magnitude of the overlap for each; the axis with the smallest overlap gives the minimum translation vector (MTV).
o Illustration: Draw lines representing axes. Show how vertices project onto these
lines. Indicate the overlapping intervals and point out the axis corresponding to the
minimum penetration.
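Using the sat_collision sketch from above, the scenario with projections [0,5] and [3,8] looks like this (the square coordinates are illustrative):

```python
# Two axis-aligned squares; on the x-axis their projections are [0, 5]
# and [3, 8], overlapping by 2 -- the smallest overlap over all axes.
square_a = [(0, 0), (5, 0), (5, 5), (0, 5)]
square_b = [(3, 1), (8, 1), (8, 4), (3, 4)]
print(sat_collision(square_a, square_b))
# -> (True, ((1.0, 0.0), 2)): translating B by 2 along x separates them
```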
• Content:
▪ Advantages:
▪ Disadvantages:
▪ Concept: Checks for exact collision between convex polygons using the
Separating Axis Theorem.
▪ Advantages:
▪ Disadvantages:
• Content:
o Importance:
2. Cost Reduction: Significantly reduces the need for physical prototypes and
repeated real-world trials, saving money on hardware, materials, and energy.
3. Time Efficiency: Allows for rapid iteration and testing of multiple task plans
and strategies much faster than with physical robots.
7. Training & Development: Train AI/ML models for complex tasks, or train
human operators and engineers on robotic systems.
o Examples:
1. Assembly Line Design: Simulate different robot layouts and task allocations
for a new product to find the most efficient setup.
• Content:
o 1. Robot Model:
o 2. Environment Model:
o 3. Physics Engine:
o 4. Sensor Models:
▪ Description: Simulate sensor data (e.g., LiDAR, cameras, force/torque
sensors, depth sensors) with realistic noise and limitations.
o 5. Controller/Planner Modules:
▪ Description: Implement and test path planning algorithms (A*, RRT), motion
controllers (PID, impedance control), and task-level planners.
o 7. Scripting/Programming Interface:
• Content:
▪ Methodology:
▪ Methodology:
• Content:
o Source Scene:
▪ Definition: Represents the initial state of the world before the robot begins a
task.
▪ Role:
o Goal Scene:
▪ Definition: Represents the desired final state of the world after the robot has successfully completed the task.
▪ Role:
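One hypothetical way to encode source and goal scenes so a planner can compare them; the dictionary layout, object names, poses, and the inserted_in relation are assumptions for illustration only:

```python
# Each scene: object poses (x, y, z in meters) plus symbolic relations.
source_scene = {
    "objects": {
        "peg":  {"pose": (0.40, 0.10, 0.02), "graspable": True},
        "base": {"pose": (0.60, 0.00, 0.00), "graspable": False},
    },
    "relations": [],  # nothing assembled yet
}
goal_scene = {
    "objects": {
        "peg":  {"pose": (0.60, 0.00, 0.05), "graspable": True},
        "base": {"pose": (0.60, 0.00, 0.00), "graspable": False},
    },
    "relations": [("peg", "inserted_in", "base")],  # desired end state
}
```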
• Title: Planning & Optimizing Transitions Between Source and Goal Scenes
• Content:
o Process:
3. Motion Planning for Each Action: For each high-level action (e.g., "pick A"),
detailed motion planning is performed to generate collision-free trajectories
for the robot's arm or base. This might involve:
o Optimization Criteria:
1. Time Optimization: Minimize the total time to complete the task (e.g., shortest path, fastest joint speeds without violating limits).
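As a small worked example of the time-optimization criterion, a sketch of the minimum time for a single joint move under velocity and acceleration limits (the standard trapezoidal-profile result; the limit values are illustrative):

```python
import math

def min_move_time(distance, v_max, a_max):
    """Minimum time for a 1-DOF move with a trapezoidal velocity profile."""
    d_ramp = v_max ** 2 / a_max              # accel + decel distance combined
    if distance >= d_ramp:                   # cruise phase exists
        return 2 * v_max / a_max + (distance - d_ramp) / v_max
    return 2 * math.sqrt(distance / a_max)   # triangular profile instead

# A 1.2 rad joint move with 1.5 rad/s and 3.0 rad/s^2 limits -> 1.3 s.
print(min_move_time(1.2, 1.5, 3.0))
```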
• Title: Vision Systems for Detecting and Distinguishing Source & Goal Scenes
• Content:
o Role of Vision Systems: Crucial for autonomous robots to perceive and understand
their environment, allowing them to:
o How Implemented:
▪ Scene Understanding:
▪ Object Detection: Using deep learning (e.g., CNNs like YOLO, Faster
R-CNN) to identify specific objects (e.g., "peg," "hole," "assembly
base") within the camera's field of view.
▪ Scene Comparison:
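A minimal sketch of the scene-comparison step, assuming the detector has already reduced each scene to a map of object labels to 2D positions; diff_scenes and the tolerance value are illustrative:

```python
def diff_scenes(src, goal, tol=0.01):
    """Diff two detected scenes, each a map of label -> (x, y) position.

    Returns labels to add, to remove, and those that moved more than tol.
    """
    to_add = sorted(goal.keys() - src.keys())
    to_remove = sorted(src.keys() - goal.keys())
    moved = sorted(k for k in src.keys() & goal.keys()
                   if abs(src[k][0] - goal[k][0]) > tol
                   or abs(src[k][1] - goal[k][1]) > tol)
    return to_add, to_remove, moved

src = {"peg": (0.40, 0.10), "base": (0.60, 0.00)}
goal = {"peg": (0.60, 0.00), "base": (0.60, 0.00)}
print(diff_scenes(src, goal))  # -> ([], [], ['peg'])
```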
• Content:
▪ Example: For "pick leg 1," this determines the precise joint angles and
velocities for the robot arm to move from its current pose to a pre-grasp
pose, then to the grasp pose, and then to a safe retreat pose.
• Content:
▪ 1. What-to-do (Action Sequence): The high-level plan decides "Pick Part A,"
then "Pick Part B," then "Assemble A to B."
• Content:
o Overall Goal: To create a rich, actionable representation of the world for the robot to
safely and effectively navigate and interact.
• Content:
▪ For each identified part, potential grasp points and approach paths
for the robot gripper are computed.
5. Optimization Criteria:
6. Dynamic Re-ordering: As parts are picked, the scene changes. The scene
analysis and optimization process continuously re-evaluates the "pickable"
parts and re-orders the remaining parts based on the updated scene,
ensuring dynamic efficiency.
• Title: How Part Ordering Affects Task Completion Time & Efficiency
• Content:
▪ Reduced Robot Travel Time: Optimal ordering can minimize the distance the robot's end-effector has to travel between pick-up and drop-off locations (see the ordering sketch below).
▪ Fewer Tool Changes: If a task requires multiple tools, picking all parts that
use Tool A before switching to Tool B saves significant time compared to
frequent tool changes.
▪ Minimized Jitter/Oscillation: Planning smooth transitions between pick
locations.
o Impact on Efficiency:
▪ Reduced Wear and Tear: Smoother, more optimized motions reduce stress
on robot joints and motors, extending robot lifespan.
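A hedged sketch of ordering for reduced travel time: a greedy nearest-neighbor pick order (simple and fast, though not globally optimal); the part names and coordinates are invented for illustration:

```python
def nearest_neighbor_order(parts, start=(0.0, 0.0)):
    """Greedy pick order: always visit the closest remaining part.

    `parts` maps part name -> (x, y) pick location. Fast but not
    globally optimal; good enough to show why ordering matters.
    """
    remaining, order, pos = dict(parts), [], start
    while remaining:
        name = min(remaining,
                   key=lambda n: (remaining[n][0] - pos[0]) ** 2
                                 + (remaining[n][1] - pos[1]) ** 2)
        pos = remaining.pop(name)
        order.append(name)
    return order

print(nearest_neighbor_order({"leg1": (0.2, 0.9),
                              "leg2": (0.1, 0.1),
                              "top":  (0.7, 0.4)}))
# -> ['leg2', 'top', 'leg1'] starting from the origin
```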
• Content:
▪ AI Techniques:
o 2. Prediction:
▪ Role: Forecasts the future behavior and trajectories of other dynamic agents
(vehicles, pedestrians).
▪ AI Techniques:
▪ Role: Determines the vehicle's optimal future actions (path, speed, lane
changes) to reach the destination safely and efficiently.
▪ AI Techniques:
▪ AI Techniques:
• Content:
o Definition: Sensor fusion is the process of combining data from multiple diverse
sensors (e.g., cameras, LiDAR, radar, GPS, IMU) to obtain a more accurate, reliable,
and complete understanding of the environment and the vehicle's own state than
any single sensor could provide alone.
2. Enhanced Accuracy:
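A minimal sketch of fusing redundant scalar measurements by inverse-variance weighting, the standard static-fusion result (a real stack would typically use a Kalman filter); the sensor variances here are illustrative:

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of scalar estimates.

    `measurements` is a list of (value, variance) pairs. The fused
    variance is smaller than any single sensor's variance.
    """
    weights = [1.0 / var for _, var in measurements]
    fused_var = 1.0 / sum(weights)
    fused_val = fused_var * sum(w * v
                                for w, (v, _) in zip(weights, measurements))
    return fused_val, fused_var

# LiDAR (precise), radar, and camera estimates of the same range (m).
print(fuse([(10.02, 0.01), (9.8, 0.25), (10.3, 0.5)]))
# -> (~10.02, ~0.0094): close to the most trusted sensor, more confident
```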
• Content:
o Ethical Challenges:
o Technical Challenges:
• Content:
o Overall Role: AI and ML are critical for enhancing autonomy, scientific discovery,
fault detection, and mission efficiency in deep space, where human intervention is
delayed or impossible.
o Specific Applications:
3. Resource Management:
• Title: Task Planning Challenges for Mars Rovers During Terrain Traversal
• Content:
▪ Challenge: The Martian surface is highly varied with rocks, sand dunes,
craters, slopes, and potential hazards. Maps are incomplete.
▪ Impact on Planning: Planning must account for stability, wheel slip, and the
ability to traverse different terrain types.
• Content:
o Specific Examples:
2. Deep Space Probes (e.g., Voyager, Cassini): While not "robots" in the typical
sense, they have increasing levels of onboard autonomy for fault detection,
anomaly resolution, and scientific data management due to immense
communication delays.
3. Future Missions (e.g., Europa Clipper, Lunar South Pole exploration): These will rely on advanced autonomy for navigating complex icy terrain, drilling toward sub-surface oceans, and operating in permanently shadowed regions where direct human control is impossible.