M Tech Unit V Notes

UNIT V

Robotics: Introduction, Tasks, Parts, Effectors, Sensors, Architectures, Configuration Spaces, Navigation and Motion Planning, Introduction to AI-Based Programming Tools

Robotics is a field that integrates mechanical engineering, electrical engineering, computer science, and artificial intelligence to design and develop robots. A robot is a
programmable mechanical device capable of sensing, processing information, and
performing tasks autonomously or semi-autonomously. Robotics plays a significant role
in modern technology, automating tasks across industries, improving efficiency, and
enhancing precision.

1. INTRODUCTION
Robotics is a distinct branch of study within Artificial Intelligence concerned with creating intelligent robots or machines. Robotics combines electrical engineering, mechanical engineering, and computer science & engineering: robots have a mechanical construction and electrical components, and are programmed using a programming language. Although Robotics and Artificial Intelligence have different objectives and applications, most people treat robotics as a subset of Artificial Intelligence (AI). Some robots look very similar to humans, and, if enabled with AI, they can also perform like humans.

In earlier days, robotic applications were very limited, but robots have become smarter and more efficient by combining with Artificial Intelligence. AI has played a crucial role in the industrial sector, matching or exceeding human work in productivity and quality. In this unit, 'Robotics and Artificial Intelligence', we will discuss robots and Artificial Intelligence and their various applications, advantages, and differences. Let's start with the definition of a robot.
What is a robot?
A robot is a machine, sometimes human-like in appearance, capable of performing complex actions and replicating certain human movements automatically by means of commands given to it through programming. Examples: Drug Compounding Robot, Automotive Industry Robots,
Order Picking Robots, Industrial Floor Scrubbers and Sage Automation Gantry Robots, etc.
Characteristics of Robots
 Autonomy: Robots can perform tasks with minimal or no human intervention.
 Programmability: They can be programmed for different tasks.
 Interaction: Robots interact with the environment through sensors and effectors.
History of Robotics
 Ancient Era: Early concepts of automated machines date back to ancient Greece (e.g.,
Archytas' mechanical pigeon).
 20th Century: The term robot was introduced by Karel Čapek in his 1921 play R.U.R.
 Modern Robotics: Began with the invention of the Unimate robotic arm in 1961, which
revolutionized industrial automation.
Applications of Robotics
1. Industrial Robotics:
 Used for manufacturing, welding, and assembly lines.
 Example: FANUC robots in automobile factories.
2. Medical Robotics:
 Robots assist in surgery, diagnostics, and rehabilitation.
 Example: Da Vinci Surgical System for minimally invasive surgery.
3. Exploration Robotics:
 Used in space, underwater, and hazardous environments.
 Example: NASA's Perseverance Rover for Mars exploration.
4. Service Robotics:
 Robots for domestic and commercial tasks.
 Example: Roomba vacuum cleaners.
5. Autonomous Vehicles:
 Self-driving cars, drones, and delivery robots.
 Example: Tesla Autopilot and Amazon Prime Air drones.
2. TASKS
Robots perform a diverse range of tasks based on their design, capabilities, and application
domains. Tasks are broadly categorized as follows:
1. Manipulation Tasks:
o Robots manipulate objects using robotic arms and end effectors.
o Examples:
 Pick-and-place robots in assembly lines.
 Robots used for painting, welding, and assembling parts.
2. Locomotion Tasks:
o Robots move through the environment using various mechanisms.
o Examples:
 Wheeled robots: Autonomous mobile robots (AMRs) used in warehouses.
 Legged robots: Quadrupeds like Boston Dynamics' Spot for rough terrain.
 Flying robots: Drones for surveillance and delivery.
3. Perception Tasks:
o Robots perceive the environment using sensors and computer vision algorithms.
o Examples:
 Recognizing objects using cameras and LiDAR.
 Detecting obstacles during navigation.
4. Navigation Tasks:
o Robots autonomously navigate to specific locations while avoiding obstacles.
o Examples:
 Self-driving cars using SLAM (Simultaneous Localization and Mapping).
 Drones navigating delivery routes.
5. Hazardous Environment Tasks:
o Robots perform tasks in environments dangerous for humans.
o Examples:
 Robots used for nuclear cleanup.
 Underwater robots for deep-sea exploration.
6. Collaborative Tasks:
o Collaborative robots (cobots) work alongside humans.
o Examples:
 Cobots assisting workers in factories.
 Robots handling logistics in warehouses.
7. Manufacturing: Assembly, welding, painting, material handling, machine tending.
8. Logistics: Warehousing, transportation, delivery.
9. Healthcare: Surgery, rehabilitation, drug delivery, patient assistance.
10. Agriculture: Harvesting, planting, weeding, livestock monitoring.
11. Space Exploration: Sample collection, planetary exploration, maintenance of space
stations.
12. Defense: Surveillance, reconnaissance, bomb disposal.
13. Service Robots: Household chores, entertainment, education.
3. PARTS
A robot is composed of several critical components that allow it to function effectively.
These parts include hardware and software systems.
PARTS OF A ROBOT
 Link: A rigid body connecting two joints.
 Joint: The connection between two links, allowing relative motion (e.g., revolute,
prismatic).
 Chassis (Frame)
 The robot's physical body or structure that supports and holds all other parts together.
 Examples: Metal frame, plastic shell, or any structural base.
 Motors
 Provide motion and control the movement of the robot’s parts.
 Types: DC motors, stepper motors, servo motors, etc.
 Wheels or Legs
 For mobility, allowing the robot to move.
 Examples:
o Wheels: For wheeled robots (e.g., robots on tracks or with wheels).
o Legs: For walking robots (e.g., quadrupeds, bipedal robots).
 Arms (Robotic Arms)
 Used to perform tasks like picking up, manipulating, or assembling objects.
 Examples: Robotic arms with servos or motors to enable movement.
 Sensors
 Enable the robot to perceive its environment and gather data for decision-making.
 Examples:
o Ultrasonic Sensors: Measure distances.
o Infrared Sensors: Detect obstacles or measure proximity.
o Cameras: For vision and object detection.
o Temperature Sensors: Measure heat levels.
o Force/Torque Sensors: Detect physical contact or force.
 Grippers/End Effectors
 Tools attached to the robot for interacting with the environment (e.g., grabbing,
lifting, or manipulating objects).
 Examples:
o Claws: For picking up objects.
o Vacuum Grippers: For lifting objects using suction.
o Specialized Tools: (e.g., welding tools, screwdrivers, etc.).
 Batteries
 Provide electrical energy to power the robot's components.
 Types: Lithium-ion, lithium-polymer, or other rechargeable batteries.
 Cables and Connectors
 Facilitate the connection of electrical components for power supply and data transfer.
 LED Indicators
 Used for status indicators or to show the robot's operating condition (e.g., power
on/off, error signals).

COMPONENTS OF A ROBOT

Key Components:
1. Power Supply:
o Provides energy to the robot.
o Examples:
 Rechargeable batteries (Lithium-ion).
 Solar cells for renewable energy.
 Fuel cells for long-duration tasks.
2. Controllers:
o The "brain" of the robot that processes data, executes programs, and controls actions.
o Examples:
 Microcontrollers like Arduino.
 Advanced processors like NVIDIA Jetson for AI applications.
3. Actuators:
o Convert energy into motion.
o Types:
 Electric Actuators: Servo motors, stepper motors.
 Hydraulic Actuators: Use pressurized fluid for high-power tasks.
 Pneumatic Actuators: Use compressed air for lightweight operations.
4. Sensors:
o Collect data about the robot's environment (explained in detail below).
5. End Effectors:
o Tools or devices that allow the robot to interact with the environment.
o Examples:
 Grippers: For picking up objects.
 Special Tools: Drills, screwdrivers, or welding torches.
6. Mechanical Structure:
o The body or frame of the robot that houses components and provides support.
o Example:
 Robotic arms, wheeled chassis, legged mechanisms.
Parts vs Components
 Parts are individual physical elements that make up the robot. They are simpler and
generally don't function on their own.
o Examples: Motors, sensors, wheels, arms, batteries.
 Components are functional units made up of multiple parts working together to
perform specific tasks.
o Examples: Control System, Sensor System, Actuation System, Power
System, Communication System.

4. EFFECTORS (END EFFECTORS)


Definition of Effectors in Robotics and AI
In robotics, effectors are the components that allow a robot to interact with its environment
through physical actions. These are the "outputs" of a robot’s behavior and are crucial for
carrying out tasks. Effectors respond to commands or decisions made by the robot's control
system, enabling it to perform various functions like moving, manipulating objects, or even
interacting with humans.
In the context of AI, effectors are used by AI systems in robots to carry out actions based on
the AI's perception, decision-making, and control. AI uses data from sensors to decide how to
use these effectors effectively.
What are End Effectors?
 The essential "tool" at the end of a robot's arm, enabling it to interact with its
environment.
 The interface between the robot and its task.
 Can be mechanical, electromechanical, or specialized devices.
Role of Effectors in Robotics and AI
 Interaction with the Environment: Effectors allow robots to physically interact with
the world. This can include moving around, manipulating objects, or performing tasks
in the real world.
 Action Execution: Effectors execute physical actions based on the decisions made by
the AI. This enables robots to move, pick up, assemble, or change things in the
environment.
 Autonomy and Precision: In robots with AI, effectors are controlled autonomously
or semi-autonomously, allowing the robot to operate without human input. The AI
optimizes the use of effectors for precise and effective task completion.
 Feedback Loop: Effectors receive feedback from sensors (e.g., tactile sensors,
cameras) to adjust actions in real time. This feedback helps AI control the effectors
dynamically, enhancing accuracy and responsiveness.

Types of End Effectors

 Grippers: The most common type, used for grasping and manipulating objects.
o Mechanical Grippers: Use jaws or fingers to grip objects.
o Vacuum Grippers: Use suction to lift objects.
o Magnetic Grippers: Used for handling ferromagnetic materials.
o Adhesive Grippers: Utilize adhesives to grasp objects.

Process Tools: Designed for specific tasks, such as:

o Welding Torches
o Spray Nozzles (for painting)
o Cutting Tools
o Sanding/Grinding Tools
o Assembly Tools
 Soft Robotics: Utilizing flexible materials for improved adaptability.
Key Considerations in End Effector Selection
 Payload: The weight the end effector must handle.
 Reach: The distance the end effector needs to reach.
 Precision: The level of accuracy required for the task.
 Speed: The necessary speed of operation.
 Environment: The conditions the end effector will operate in (e.g., temperature,
humidity).
 Task: The specific job the end effector must perform.
Advanced End Effector Technologies
 Soft Robotics: Using soft materials for grippers, allowing for more gentle handling of
delicate objects.
 Bio-inspired Design: Mimicking biological systems for improved dexterity and
adaptability.
 Sensor Integration: Incorporating sensors into end effectors for enhanced feedback
and control.
Examples of End Effector Applications
 Manufacturing: Assembly, welding, painting, material handling.
 Healthcare: Surgery, rehabilitation, prosthetics.
 Logistics: Packaging, sorting, palletizing.
 Agriculture: Harvesting, planting, pruning.
 Space Exploration: Sample collection, manipulation of equipment.

5.SENSORS
Introduction
Sensors in robotics are devices used to measure the condition of a robot and the state of its environment; their role is modelled on the functions of the human sensory organs.
Robots receive a broad range of data about their surroundings, such as position, size,
orientation, velocity, distance, temperature, weight, force, etc. This information is what
allows the robot to function efficiently while interacting with its environment to perform
complex tasks.
Robot sensors work on the principle of energy conversion, also known as transduction. Different robots require different sensors to achieve the necessary degree of control and to respond flexibly to their environment.
TYPES OF ROBOT SENSORS
There are many different types of robot sensors, discussed in the following sections:
Light Sensors
The light sensor is used to detect light and it usually generates a voltage difference. Robotic
light sensors are of two types: Photovoltaic cells and photoresistors. Photovoltaic cells are
applied when changing solar radiation energy to electrical. Naturally, these sensors are
commonly used in the production of solar robots.
Photoresistors, on the other hand, change their resistance with light intensity: the more light falls on them, the lower the resistance. These light sensors are inexpensive, so they are widely used in robots.
Sound Sensor
A sound sensor detects a sound and converts it into an electrical signal. By applying this type
of sensor, robots can navigate through sound, even to the point of creating a sound-controlled
robot that recognizes and responds to specific sounds or series of sounds, to carry out certain
tasks.
Temperature Sensor
A temperature sensor is used to detect temperature changes within the environment. This
sensor mainly uses the voltage difference principle to get the temperature change, thereby
generating the equivalent temperature value of the environment. There are different types of
temperature sensor ICs (integrated circuits) used to detect temperature, including LM34,
TMP37, TMP35, TMP36, LM35, etc. These sensors can be used in robots required to work in
extreme weather conditions like an ice glacier or a desert.
Contact Sensor
Contact sensors are also known as touch sensors. They mainly function to detect a change in
velocity, position, acceleration, torque, or force at the joints of the manipulator and the end-
effector in robots. Physical contact is required for these sensors to efficiently direct the robot
to act accordingly. The sensor is executed in different switches such as a limit switch, button
switch, and tactile bumper switch.
The application of contact sensors is commonly found in obstacle avoidance robots. Upon
detection of any obstacle, it transmits a signal to the robot so that it may perform various
actions like reversing, turning, or simply stopping.
Proximity Sensor
In robotics, a proximity sensor is used to detect objects that are close to a robot and measure
the distance between a robot and particular objects without making physical contact. This is
possible because the sensors emit a field or signal and detect changes in it. Common types of proximity sensor include photoresistors, infrared transceivers, and ultrasonic sensors.
Infrared (IR) Transceivers
An infrared (IR) transceiver or sensor measures and detects infrared radiation in its
environment. Infrared sensors are either active or passive. Active infrared sensors both emit
and detect infrared radiation, using two parts: a light-emitting diode (LED) and a receiver.
These active transceivers act as proximity sensors, and they are commonly used in robotic
obstacle detection systems.
On the other hand, passive infrared (PIR) sensors only detect infrared radiation and do not
emit it from the LED. Passive sensors are mostly used in motion-based detection.
Ultrasonic Sensor
An ultrasonic sensor is a device that measures the distance of a specific object by emitting
ultrasonic sound waves and converts the reflected sound into an electrical signal. Ultrasonic
sensors radiate sound waves toward an object and determine its distance by detecting
reflected waves. This is why they are mainly used as proximity sensors, applied in robotic
obstacle detection systems and anti-collision safety systems.
Photoresistor
Photoresistors are devices that modify resistance depending on the amount of light placed
over them. They are also called light-dependent resistors (LDR). Due to their sensitivity to
light, they are often used to detect the presence or absence of light and measure the light
intensity. With photoresistors, more light means less resistance. [2]
Distance Sensor
Distance sensors are used to define the distance of an object from another object without
needing any physical contact. Distance sensors work by emitting a signal and measuring the
difference when the signal returns. Depending on the technology, this signal can be infrared,
LED, or ultrasonic waves, which is why distance sensors are commonly associated with
ultrasonic sensors.
Ultrasonic Distance Sensors
An ultrasonic distance sensor is a tool that measures the distance to an object using high-
frequency sound waves. Ultrasonic sensors work by emitting sound waves at a much higher
frequency than humans can hear, and then wait for the reflected sound. By measuring the time between sending the pulse and receiving the echo, and multiplying by the speed of sound, the sensor determines the distance to the target.
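As a worked illustration of this timing calculation, here is a minimal Python sketch that converts a measured echo time into distance. The speed of sound (about 343 m/s in air at 20 °C) and the example echo time are assumed values for illustration, not tied to any particular sensor model.

# Minimal sketch: ultrasonic distance from echo round-trip time.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def distance_from_echo(echo_time_s):
    # The pulse travels to the target and back, so halve the total path.
    return SPEED_OF_SOUND * echo_time_s / 2.0

print(distance_from_echo(0.0058))  # echo after 5.8 ms -> about 0.99 m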

6. ARCHITECTURES
What are Robot Architectures?
 A framework or design pattern that organizes the different components of a robot's
software system.
 Provides a structure for how the robot perceives its environment, makes decisions,
and acts.
 Influences the robot's behavior, capabilities, and overall performance.
Key Components of Robot Architectures
1. Perception:
o Sensing: Acquiring information from the environment through sensors (e.g.,
cameras, lidar, sonar).
o Perception: Processing sensory data to understand the environment (e.g.,
object recognition, mapping).
2. Planning:
o World Modeling: Creating an internal representation of the environment.
o Goal Setting: Defining the desired outcome or task.
o Path Planning: Determining the sequence of actions to achieve the goal.
3. Action:
o Control: Generating commands for the robot's actuators (e.g., motors,
grippers).
o Execution: Carrying out the planned actions.
Types of Robot Architectures
1. Behavior-Based Architectures:
o Focus on reactive behaviors and simple actions.
o Organized in layers, with lower layers handling basic reflexes and higher
layers handling more complex behaviors.
o Example: Subsumption Architecture
2. Planning-Based Architectures:
o Emphasize deliberative planning and decision-making.
o Involve complex reasoning and world modeling.
o Example: Classical AI Planning
3. Hybrid Architectures:
o Combine elements of both behavior-based and planning-based approaches.
o Aim to balance reactive and deliberative capabilities.
o Example: Three-Layer Architecture
4. Cognitive Architectures:
o Inspired by human cognition, aiming to replicate human-like intelligence.
o Incorporate concepts like perception, attention, memory, and learning.
o Example: SOAR architecture
5. Reactive Architectures:
o Respond directly to sensor input with simple rules, without maintaining a detailed world model.
o Example: A robot vacuum navigating a room by detecting and avoiding obstacles. (A minimal sketch of a reactive control rule appears after this list.)
6. Deliberative Architectures:
o Robots plan actions based on detailed internal models of the environment and use planning algorithms for reasoning and decision-making.
o Example: Autonomous vehicles planning routes based on traffic data and maps.
7. Probabilistic Architectures:
o Deal with uncertainty in the environment through statistical models.
o Example: Drones navigating in uncertain weather conditions or GPS unavailability.
8. Neural Network-Based Architectures:
o Based on artificial neural networks, these architectures use deep learning for perception, decision-making, and control.
o Example: Robots using computer vision to recognize and manipulate objects.
9. Multi-Agent Architectures:
o Involve multiple robots (agents) working together (or against each other) to complete tasks.
o Example: Swarm robotics, where multiple robots collaborate to explore an area or perform tasks.
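To make the reactive style concrete, here is a minimal Python sketch of a single sense-act rule. The functions read_distance and set_velocity are hypothetical stand-ins for real sensor and motor drivers, and the 0.5 m safety threshold is likewise an assumed value.

def reactive_step(read_distance, set_velocity, safe_distance=0.5):
    # One sense-act cycle: no world model, no planning, just a rule.
    d = read_distance()  # metres to the nearest obstacle ahead
    if d < safe_distance:
        set_velocity(linear=0.0, angular=0.8)  # stop and turn away
    else:
        set_velocity(linear=0.3, angular=0.0)  # cruise forward

# On a real robot this rule would run inside a fixed-rate control loop,
# e.g. while True: reactive_step(lidar_min_range, base_command); sleep(0.05)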
7. CONFIGURATION SPACE (C-SPACE)
Configuration space is central to robot motion planning, representing all possible states or configurations a robot can assume.
1.What is a Configuration Space?
Definition: The configuration space (C-space) is the space of all possible configurations of a
robot. Each configuration is a unique combination of the robot’s position and its internal
parameters (e.g., angles of joints in a robotic arm or position in space for a mobile robot).
Purpose: It provides a way to represent the robot's possible states and facilitates the problem
of motion planning by transforming it into a problem of navigating through this space.
2.C-SPACE REPRESENTATION
Robot Configuration: A configuration is typically represented as a point in a multi-
dimensional space. The dimensionality of the C-space depends on the number of degrees of
freedom (DOF) of the robot.
Example: A mobile robot moving in a 2D plane has a 3D configuration (x, y, θ), where x and y represent position and θ represents orientation.
Example (Arm Robot): A robotic arm with 6 joints will have a 6D C-space, each representing
the angle or position of each joint.
C-Space Obstacles: The C-space is usually subdivided into free space (where the robot can
move) and obstacle space (where the robot cannot move due to obstacles).
Free Space (F): The portion of C-space where the robot can operate without colliding with
obstacles.
Obstacle Space (O): The portion of C-space where the robot would collide with obstacles in
the environment.
3.DIMENSIONALITY OF CONFIGURATION SPACE
Degrees of Freedom (DOF): The dimensionality of the C-space corresponds to the degrees of
freedom of the robot. For a robot with n degrees of freedom, the C-space will have n
dimensions.
Example 1: A simple 2D mobile robot has 3 DOF: x (position in the x-axis), y (position in the
y-axis), and θ (orientation). Thus, the C-space is 3-dimensional.
Example 2: A 6-DOF robotic arm has a 6-dimensional C-space corresponding to the joint
angles of each arm segment.
4.OBSTACLE REPRESENTATION IN C-SPACE
Obstacle Space: In the C-space, obstacles are represented as C-space obstacles, which are
regions of the C-space where the robot’s configuration causes a collision with the
environment.
For a mobile robot, this might involve a circle or polygonal area in the 2D plane that the
robot cannot occupy.
For a robot arm, the C-space obstacles will depend on the joint angles and positions that
cause parts of the arm to collide with objects.
Translation and Rotation: The obstacles in the real world, when mapped into the C-space,
become more complex shapes due to the translation and rotation of the robot's components.
Rigid Bodies: If the robot is rigid (non-deformable), obstacles in C-space are more easily
represented. However, deformable robots (e.g., soft robots) present more complex obstacle
representations.
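The following minimal Python sketch illustrates how a workspace obstacle induces a C-space obstacle for a two-link planar arm. The link lengths, the circular obstacle, and the crude endpoint-only collision test are simplifying assumptions chosen for illustration.

import math

L1, L2 = 1.0, 1.0                         # assumed link lengths
OBS_CENTRE, OBS_RADIUS = (1.2, 0.8), 0.3  # assumed circular workspace obstacle

def in_obstacle(x, y):
    return math.hypot(x - OBS_CENTRE[0], y - OBS_CENTRE[1]) <= OBS_RADIUS

def config_collides(theta1, theta2):
    # Forward kinematics: elbow and end-effector positions.
    ex = L1 * math.cos(theta1)
    ey = L1 * math.sin(theta1)
    tx = ex + L2 * math.cos(theta1 + theta2)
    ty = ey + L2 * math.sin(theta1 + theta2)
    # Crude test: only the two link endpoints are checked for collision.
    return in_obstacle(ex, ey) or in_obstacle(tx, ty)

# Discretize the 2D C-space and label obstacle configurations.
N = 90  # grid resolution per joint
c_obstacle = [(i, j) for i in range(N) for j in range(N)
              if config_collides(2 * math.pi * i / N, 2 * math.pi * j / N)]
print(len(c_obstacle), "of", N * N, "sampled configurations collide")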
5.TYPES OF CONFIGURATION SPACES
Discrete C-Space: In some applications, the robot’s possible configurations can be discretized
(e.g., for a grid-based map). Each grid cell or discrete configuration corresponds to a state in
the space.
Example: A robot navigating a 2D grid where each cell represents a unique configuration.
Continuous C-Space: In more realistic settings, the configuration space is continuous, and the
robot's state can be any point within a continuous range of values.
Example: A robotic arm whose joint angles can be any value within a continuous range, such
as 0 to 180 degrees.
6.Challenges in Configuration Space
High Dimensionality: The higher the degrees of freedom (DOF) of a robot, the higher the
dimensionality of the configuration space. For example, a robot with 10 DOF (such as a
complex robot arm) will have a 10-dimensional C-space, which is difficult to visualize and
navigate.
Curse of Dimensionality: As the DOF increases, the C-space becomes exponentially larger,
making path planning more computationally expensive.
Complex Obstacles: Mapping complex real-world environments into C-space requires careful
consideration of how obstacles move and interact with the robot’s configurations.
7.C-SPACE EXAMPLES IN ROBOTICS
Mobile Robots:
A mobile robot’s configuration space might be a 2D space where the robot can move along a
floor, considering obstacles like walls and furniture. The robot’s configuration is represented
by position and orientation (x, y, θ).
Robotic Arms:
A robotic arm’s configuration space involves the angles of its joints, which makes the C-
space high-dimensional. Each combination of joint angles represents a different
configuration.
Multi-Robot Systems:
In multi-robot systems, each robot’s C-space must be considered in relation to other robots.
This can lead to a high-dimensional joint configuration space for the entire system.

8. NAVIGATION AND MOTION PLANNING

1. Navigation and Motion Planning in Robotics

Navigation refers to the robot's ability to move from one point to another in a physical
environment while avoiding obstacles and optimizing its path. Motion planning is the
process of determining how the robot should move, given its starting point, goal, and
environmental constraints.

Key Concepts in Navigation and Motion Planning:

 Path Planning:
o Path planning is the process of determining the sequence of movements a
robot must take to reach a goal. This often involves determining an optimal
path that minimizes time, energy, or risk.
o The planning algorithm takes into account various constraints, such as
obstacles, safety zones, and terrain types.
o Types of Path Planning:
 Global Path Planning: Involves planning a route based on a global
map (or model) of the environment. Common algorithms include:
 A* Algorithm: Finds the shortest path to the goal by considering the cost of movement so far and estimating the remaining distance using a heuristic.
 Dijkstra’s Algorithm: Similar to A*, but it does not use
heuristics and ensures the shortest path based purely on
distance.
 Local Path Planning: Focuses on navigating in real-time based on
dynamic sensor data. It allows a robot to respond to immediate
obstacles and changes in the environment.
 Dynamic Window Approach (DWA): Uses real-time velocity
and obstacle data to adjust the robot's path to avoid collisions.
 Motion Planning:
o Motion planning is focused on determining how a robot's body, including its
limbs or effectors, should move through space to perform tasks such as
grasping, assembling, or avoiding obstacles.
o Motion planning can be divided into two primary stages:
 Kinematic Planning: Deals with the geometric aspects of motion,
considering the positions and orientations of objects and the robot. It
doesn't account for forces or dynamics.
 Dynamic Planning: Takes into account forces, velocities, and
accelerations during movement, providing more realistic motion
trajectories.
 Robot Localization:
o Localization is the process of determining the robot's position within its
environment. It can be achieved using sensors like GPS (in outdoor
environments), lidar, or computer vision.
o Simultaneous Localization and Mapping (SLAM): SLAM is a key
technique that allows robots to build a map of their environment while
simultaneously determining their location on that map. It is crucial for mobile
robots operating in unknown environments.
 Obstacle Avoidance:
o Robots must be able to detect and avoid obstacles in their path. This can be
done using sensors like:
 Lidar: Uses laser beams to map the environment and detect obstacles.
 Ultrasonic Sensors: Measures distance using sound waves.
 Cameras: Provide visual data for object detection and avoidance.
o Reactive Approaches: Focus on immediate responses to obstacles, using
sensor data to avoid collisions in real-time.
o Predictive Approaches: Predict the movement of obstacles and plan paths
that avoid future collisions.
 Algorithms for Motion Planning:
o Rapidly-exploring Random Trees (RRT): A fast motion planning algorithm
used to explore large search spaces by incrementally building a tree of feasible
trajectories (a minimal sketch appears after this list).
o Probabilistic Roadmaps (PRM): A technique for motion planning in
complex environments that creates a map of possible paths by randomly
sampling the environment.

 Mapping: Creating a representation of the environment.
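Below is a minimal RRT sketch for a point robot in an assumed 10 x 10 workspace with one circular obstacle. Straight-line steering, a fixed step size, point-only collision checks, and the obstacle geometry are simplifications chosen to keep the example short.

import math, random

random.seed(0)
START, GOAL = (0.0, 0.0), (9.0, 9.0)
OBS_CENTRE, OBS_RADIUS = (5.0, 5.0), 2.0  # assumed circular obstacle
STEP, GOAL_TOL, MAX_ITERS = 0.5, 0.5, 5000

def collision_free(p):
    return math.hypot(p[0] - OBS_CENTRE[0], p[1] - OBS_CENTRE[1]) > OBS_RADIUS

nodes, parent = [START], {START: None}
for _ in range(MAX_ITERS):
    sample = (random.uniform(0, 10), random.uniform(0, 10))
    nearest = min(nodes, key=lambda n: math.dist(n, sample))
    d = math.dist(nearest, sample)
    if d < 1e-9:
        continue
    # Steer a fixed step from the nearest node toward the random sample.
    new = (nearest[0] + STEP * (sample[0] - nearest[0]) / d,
           nearest[1] + STEP * (sample[1] - nearest[1]) / d)
    if not collision_free(new):
        continue  # skip configurations inside the obstacle
    nodes.append(new)
    parent[new] = nearest
    if math.dist(new, GOAL) < GOAL_TOL:
        path, n = [], new  # recover the path by walking back through parents
        while n is not None:
            path.append(n)
            n = parent[n]
        print("path found with", len(path), "nodes")
        break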

Applications of Navigation and Motion Planning:

 Autonomous Vehicles: Path planning and motion control for self-driving cars that
must navigate through complex environments while avoiding obstacles and obeying
traffic rules.
 Industrial Robots: Robots used in manufacturing must plan paths for tasks such as
picking and placing items or assembling parts while avoiding collisions with workers
or equipment.
 Robotic Surgery: Motion planning ensures that robotic arms operate with high
precision and avoid damage to surrounding tissues during medical procedures.
 Service Robots: Robots in home or healthcare environments use navigation and
motion planning to move through rooms, avoid obstacles, and assist users.

9. INTRODUCTION TO AI-BASED PROGRAMMING TOOLS FOR ROBOTICS

AI-based programming tools enable the development and deployment of intelligent behaviors
in robots, enhancing their ability to perform tasks autonomously or semi-autonomously.
These tools leverage machine learning, computer vision, reinforcement learning, and other AI
techniques to enable robots to perceive, learn, and interact with their environment.

Key AI-Based Programming Tools for Robotics:

1. ROS (Robot Operating System):
o ROS is a popular open-source framework for building robotic applications. It
provides a collection of libraries and tools to help software developers create
robot behavior and functionalities.
o ROS 2: An improved version of ROS that provides better real-time
capabilities, enhanced security, and support for multi-robot systems.
o ROS Tools for Navigation and Motion Planning:
 Navigation Stack: ROS provides a pre-built navigation stack that
integrates path planning, localization, and obstacle avoidance.
 MoveIt!: A motion planning framework for robot manipulation, including
arm motion, grasping, and trajectory planning. (A minimal ROS 2 node
sketch appears after this tools list.)
2. OpenAI Gym:
o OpenAI Gym is a toolkit for developing and comparing reinforcement
learning (RL) algorithms. It provides a variety of simulation environments,
including robotics-related tasks, for training AI agents to solve problems
through trial and error.
o It allows robotics developers to experiment with different RL algorithms for
tasks such as autonomous navigation, task planning, and robotic manipulation.
(A minimal environment-interaction loop is sketched after this tools list.)
3. TensorFlow and PyTorch:
o TensorFlow and PyTorch are two popular machine learning frameworks used
in robotics for developing AI-based models, such as deep learning models for
vision, speech, and decision-making.
o TensorFlow: Provides support for deploying AI models on embedded systems
or robotic hardware and supports reinforcement learning, computer vision, and
control algorithms.
o PyTorch: Known for its flexibility and ease of use, PyTorch is widely used
for developing neural networks and integrating AI models into robotic
systems.
4. VPL (Visual Programming Languages):
o Visual programming languages are used to create AI programs through
graphical interfaces rather than traditional coding. These tools are useful for
beginners and those without strong programming backgrounds.
o Examples include:
 Blockly: A visual programming language that is often used to teach
basic concepts of robotics and AI.
 Node-RED: A flow-based programming tool for wiring together
hardware devices, APIs, and online services. It can be used for robotics
applications like sensor integration and control.
5. Gazebo Simulator:
o Gazebo is an open-source robotics simulation tool that integrates with ROS. It
provides a high-fidelity simulation environment for testing robotic navigation,
sensor integration, and motion planning algorithms without needing physical
robots.
o Developers can simulate complex environments, including sensors like lidar,
cameras, and GPS, to test the AI behavior of robots in virtual worlds before
deployment.
6. CoppeliaSim (V-REP):
o CoppeliaSim is a versatile robot simulation platform used to simulate and
control robots. It includes features for motion planning, robot perception, and
AI-based behavior control.
o CoppeliaSim allows integration with ROS and offers APIs for scripting robot
behaviors, making it an ideal tool for AI-based programming and robot design.
7. DeepMind Lab:
o DeepMind Lab is a 3D environment for AI research, specifically for training
AI systems to perform complex tasks in simulated environments.
o It can be used for developing reinforcement learning-based control systems,
such as teaching robots to navigate or interact with their environment.
8. Roboflow:
o Roboflow is an AI-powered tool for training computer vision models for
robotics. It helps developers build models for object detection, classification,
and tracking, which are essential for robot navigation and manipulation.
9. MATLAB and Simulink:
o MATLAB is widely used for developing algorithms and simulations,
including robotics applications. Simulink, its associated simulation platform,
is often used to model robotic systems and simulate motion planning
algorithms.
o MATLAB supports AI and machine learning toolboxes, making it easier to
implement AI-based algorithms for control, perception, and decision-making.
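As a concrete taste of ROS 2 programming with rclpy, here is a minimal node sketch that publishes velocity commands. It assumes a mobile base that listens for geometry_msgs/Twist messages on a cmd_vel topic, as many ROS-based drivers conventionally do; the node name and speed are illustrative.

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class DriveForward(Node):
    def __init__(self):
        super().__init__('drive_forward')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # fire at 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.2  # drive forward at an assumed 0.2 m/s
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(DriveForward())

if __name__ == '__main__':
    main()

And here is a minimal interaction loop in the classic OpenAI Gym style, running a random placeholder policy on the CartPole task. The reset/step signatures differ between older Gym releases and the newer Gymnasium package, so treat this as a version-dependent sketch.

import gym

env = gym.make('CartPole-v1')
obs = env.reset()          # classic API; Gymnasium returns (obs, info)
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward
print('episode return:', total_reward)
env.close()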
Key Concepts in AI-Based Programming for Robotics:

 Reinforcement Learning (RL): RL involves teaching robots to make decisions based
on rewards and penalties. It is used in applications like autonomous navigation or
learning tasks such as walking, flying, or playing games (see the tabular Q-learning
sketch after this list).
 Computer Vision: Robots use computer vision (with deep learning) to interpret the
environment using cameras or other vision sensors. It is essential for tasks like object
recognition, navigation, and inspection.
 Natural Language Processing (NLP): Robots can be trained to understand and
respond to human language, enabling more intuitive interaction.
 Sensor Fusion: Combining data from multiple sensors (e.g., vision, lidar, and IMU)
to create a more accurate and reliable understanding of the environment.
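As a tiny, self-contained illustration of the reinforcement learning idea above, the sketch below runs tabular Q-learning on an assumed one-dimensional corridor world (six cells, goal at the right end). The world, rewards, and hyperparameters are invented for illustration.

import random

random.seed(1)
N_STATES, GOAL = 6, 5          # a 1-D corridor; reach the rightmost cell
ACTIONS = (-1, +1)             # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection over the two action indices.
        a = random.randrange(2) if random.random() < EPSILON \
            else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.01  # reward reaching the goal
        # Standard Q-learning update: Q += alpha * (target - Q).
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 3) for q in Q])  # learned value of the best action per state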
ADDITIONAL ROBOT ARCHITECTURE STYLES
In addition to the types listed in Section 6 (reactive, deliberative, hybrid, cognitive, probabilistic, neural network-based, and multi-agent), the following styles are also commonly distinguished:
1. Behaviour-Based Architecture: The robot's overall activity emerges from executing several different behaviours.
2. Hierarchical Architecture: Control is organized in layers, from low-level reflexes up to high-level planning.
3. Modular Architecture: Functionality is split into different independent modules.
4. Distributed Architecture: Processing is distributed across multiple nodes.
A typical example of the hybrid style is a robot that avoids obstacles in real time while planning complex movements.


1. Navigation and Motion Planning

Navigation and motion planning are key areas in robotics, autonomous systems, and AI.
These fields involve planning and controlling the movement of a robot or agent within an
environment to reach a goal while avoiding obstacles.

a. Key Concepts in Navigation and Motion Planning

1. Path Planning
o Goal: Find an optimal path from a start point to a destination.
o Types of Path Planning:
 Global Planning: Uses global maps of the environment (static). It is
computationally expensive and works well in structured environments
(e.g., grid-based maps).
 Local Planning: Uses local sensory data to plan a path in real-time.
It’s more adaptive but may have limitations in complex environments.
2. Motion Planning
o Goal: Not only find a path but also determine how to move along that path,
considering constraints like velocity, acceleration, and robot configuration.
o Key Challenges:
 Collision Avoidance: Ensuring the robot avoids obstacles.
 Dynamic Environments: Adapting to moving obstacles and real-time
changes.
 Non-holonomic Constraints: Constraints that limit the motion of a
robot, such as not being able to move sideways for wheeled robots.
3. Types of Motion Planning Algorithms
o Grid-Based Methods:
 A* Algorithm: A popular pathfinding algorithm that combines heuristic
search with shortest-path search (a grid-based sketch appears after this list).
 Dijkstra’s Algorithm: A simpler algorithm focused solely on the
shortest path in weighted graphs.
o Sampling-Based Methods:
 Rapidly-exploring Random Tree (RRT): Generates random nodes
and expands the tree towards the goal.
 Probabilistic Roadmaps (PRM): Samples points in the configuration
space and connects them to form a roadmap.
o Optimization-Based Methods:
 These methods minimize an objective function (such as energy or
time) while ensuring the path is feasible.
4. Planning in Dynamic Environments
o Replanning: In dynamic environments, the plan may need frequent updates.
o SLAM (Simultaneous Localization and Mapping): An important technique
where a robot maps its environment while simultaneously tracking its position.
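The following minimal Python sketch runs A* on an assumed toy occupancy grid ('#' cells blocked), using a Manhattan-distance heuristic and unit step costs; it returns only the path length to stay short.

import heapq

GRID = ["....#....",
        "....#....",
        "....#....",
        ".........",
        "....#...."]           # assumed toy map; '#' marks an obstacle
START, GOAL = (0, 0), (4, 8)

def h(a, b):
    # Manhattan-distance heuristic: admissible on a 4-connected grid.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(start, goal):
    open_set = [(h(start, goal), 0, start)]  # entries are (f, g, cell)
    g = {start: 0}
    while open_set:
        f, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if (0 <= r < len(GRID) and 0 <= c < len(GRID[0])
                    and GRID[r][c] != '#' and cost + 1 < g.get((r, c), float('inf'))):
                g[(r, c)] = cost + 1
                heapq.heappush(open_set, (cost + 1 + h((r, c), goal), cost + 1, (r, c)))
    return None  # no path exists

print("shortest path length:", astar(START, GOAL))  # 12 on this map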

b. Factors Influencing Motion Planning

 Robot Configuration: The type of robot (e.g., mobile robot, drone, manipulator)
affects the motion planning strategy.
 Environment Complexity: The presence of static and dynamic obstacles.
 Efficiency Requirements: The balance between the speed of the algorithm and the
quality of the solution (optimal vs. suboptimal).

2. Introduction to AI-Based Programming Tools

AI-based programming tools enable the development of intelligent systems capable of
making decisions, learning from data, and improving performance over time. These tools
help developers integrate AI in robotics, machine learning, and other applications.

a. Key AI-Based Programming Tools

1. TensorFlow
o Overview: An open-source framework developed by Google for building and
deploying machine learning models.
o Key Features:
 Supports deep learning and neural networks.
 Allows building both simple and highly complex models.
 Provides tools for distributed computing, making it scalable.
o Applications: Robotics, image and speech recognition, natural language
processing (NLP).
2. PyTorch
o Overview: An open-source deep learning framework developed by
Facebook’s AI Research lab.
o Key Features:
 Dynamic computation graph, making it flexible and easier to debug.
 Strong support for GPU acceleration.
 Easy integration with other Python libraries.
o Applications: Used in computer vision, reinforcement learning, and natural
language processing. (A minimal PyTorch training-step sketch appears after
this tools list.)
3. OpenAI Gym
o Overview: A toolkit for developing and comparing reinforcement learning
algorithms.
o Key Features:
 Offers a wide variety of simulated environments for testing.
 Facilitates the development of AI agents that can learn through
interaction with the environment.
o Applications: Reinforcement learning research, robotics, and automation.
4. ROS (Robot Operating System)
o Overview: A flexible framework for building robot software, not an operating
system in the traditional sense.
o Key Features:
 Provides libraries and tools to help software developers create robot
applications.
 Includes motion planning, perception, control, and hardware
abstraction.
 Supports integration with AI algorithms for perception, decision-
making, and control.
o Applications: Autonomous vehicles, industrial robots, drones, and mobile
robots.
5. OpenCV (Open Source Computer Vision Library)
o Overview: A popular library for computer vision tasks.
o Key Features:
 Real-time computer vision.
 Includes algorithms for image processing, object detection, and motion
tracking.
o Applications: Visual navigation for robots, object detection, augmented
reality. (A minimal edge-detection sketch appears after this tools list.)
6. MATLAB and Simulink
o Overview: Widely used for numerical computing, simulations, and model-
based design.
o Key Features:
 Built-in tools for signal processing, control systems, and machine
learning.
 Simulink offers block diagram modeling, useful for robotic control
systems and simulations.
o Applications: Robot kinematics, control systems, and testing AI algorithms in
simulated environments.
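To ground the PyTorch entry above, here is a minimal sketch of one supervised training step. The task (mapping four range readings to three steering classes), the network size, and the random batch are invented placeholders.

import torch
from torch import nn

# Hypothetical task: map 4 range readings to 3 steering classes.
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 3),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a random placeholder batch.
x = torch.randn(8, 4)          # 8 samples of 4 sensor readings
y = torch.randint(0, 3, (8,))  # 8 target class labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))

And to ground the OpenCV entry, here is a minimal edge-detection pipeline of the kind often used as a preprocessing step for visual navigation; the input filename frame.png is an assumed placeholder.

import cv2

img = cv2.imread('frame.png')  # any camera frame or image path works here
if img is None:
    raise SystemExit('image not found')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress sensor noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite('edges.png', edges)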
b. AI-Driven Motion Planning Tools

 Deep Reinforcement Learning (DRL): A cutting-edge AI technique that can be
applied to motion planning problems. DRL allows robots to learn optimal navigation
strategies through interaction with the environment.
 Simultaneous Localization and Mapping (SLAM) with AI: AI can be used to
improve SLAM algorithms by enabling better decision-making when faced with new
or uncertain environments.
 AI for Collision Avoidance: AI tools, like deep learning-based neural networks, can
be used to predict and avoid dynamic obstacles in real-time.

c. Advantages of AI in Navigation and Motion Planning

 Adaptability: AI systems can adapt to dynamic and changing environments.
 Efficiency: AI algorithms can learn and optimize motion planning strategies over
time, potentially improving their performance in new situations.
 Complex Decision Making: AI tools enable robots to make complex decisions by
considering multiple factors such as energy consumption, path length, and obstacle
avoidance.

Conclusion

Understanding the combination of navigation and motion planning with AI-based tools is
essential for developing advanced autonomous systems. With tools like ROS, TensorFlow,
and PyTorch, developers can create intelligent robots capable of efficient, adaptive, and
robust motion planning even in dynamic and uncertain environments. AI techniques, such as
reinforcement learning and deep neural networks, have further revolutionized the way robots
plan and execute movements.
