Robotics and Intelligent Systems
(Open Elective)
Session: 2024-25 (Even)
Branch: Robotics & AI Semester: 4th
Unit 1
History, state of the art, Need for AI in Robotics. Thinking and acting humanly, intelligent agents,
structure of agents.
Introduction to Robotics and Intelligent Systems
Robotics and Intelligent Systems is an interdisciplinary field that integrates mechanical
engineering, electronics, computer science, artificial intelligence (AI), and control systems to
design and develop autonomous and semi-autonomous machines. These systems can perceive
their environment, make decisions, and execute actions to achieve specific tasks with minimal
human intervention.
Key Components of Robotics and Intelligent Systems
1. Sensors and Perception
o Collect data from the environment using cameras, LiDAR, ultrasonic sensors,
infrared sensors, and more.
o Essential for navigation, object recognition, and environmental awareness.
2. Actuators and Motion Control
o Motors, servos, and hydraulic systems allow robots to move and interact with
their surroundings.
o Used in robotic arms, mobile robots, and industrial automation.
3. Embedded Systems and Microcontrollers
o Robots rely on embedded systems for real-time processing and decision-making.
o Examples: Arduino, Raspberry Pi, ESP32, and custom-built microcontrollers.
4. Artificial Intelligence and Machine Learning
o Enables robots to learn from data, recognize patterns, and adapt to new
environments.
o AI-driven applications include speech recognition, image processing, and
autonomous navigation.
5. Communication and Networking
o Wireless communication (Wi-Fi, Bluetooth, Zigbee) allows remote control and
cloud connectivity.
o Used in IoT-enabled robotic systems and industrial automation.
6. Autonomous Systems and Control Algorithms
o Algorithms like PID control, fuzzy logic, and reinforcement learning help robots
function independently.
o Used in self-driving cars, drones, and robotic process automation (RPA).
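The PID control mentioned above can be made concrete with a minimal discrete-time sketch; the gains, the toy plant, and all names below are illustrative assumptions, not a reference implementation:

```python
class PIDController:
    """Minimal discrete-time PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant (position changes by control * dt) toward 1.0
pid = PIDController(kp=2.0, ki=0.5, kd=0.1)
position = 0.0
for _ in range(200):
    position += pid.update(setpoint=1.0, measurement=position, dt=0.05) * 0.05
print(round(position, 3))
```

In a real robot the "plant" line would be replaced by reading a sensor and commanding an actuator; the controller structure stays the same.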
Applications of Robotics and Intelligent Systems
• Industrial Automation – Robotic arms in manufacturing, assembly lines, and quality
inspection.
• Healthcare – Surgical robots, rehabilitation devices, and AI-assisted diagnostics.
• Autonomous Vehicles – Self-driving cars, delivery drones, and automated guided
vehicles (AGVs).
• Smart Homes and Assistive Technology – Home automation, robotic vacuum cleaners,
and elderly care robots.
• Agriculture – Precision farming, automated harvesting, and pest control robots.
History of AI:
Ancient Foundations
• Ancient Greece and China:
o Early mechanical concepts, like Archytas of Tarentum's flying pigeon (4th
century BCE), a steam-powered mechanical bird.
o Chinese inventors created automata like water clocks and mechanical animals.
• Middle Ages:
o Islamic engineers like Al-Jazari (12th century) built programmable water clocks
and humanoid automata for entertainment and practical use.
Renaissance Period
• Leonardo da Vinci (15th century):
o Designed a humanoid robot (mechanical knight) based on his understanding of
human anatomy and mechanics.
• Rise of clockwork automata in Europe, used in churches, clocks, and royal courts to
showcase mechanical innovation.
Industrial Revolution
• Introduction of mechanized systems in manufacturing during the late 18th and early 19th
centuries.
• Jacquard Loom (1804):
o One of the first programmable machines, which used punch cards to automate
textile weaving.
• These systems laid the groundwork for modern automation.
20th Century: Birth of Modern Robotics
• 1920s:
o The term "robot" was coined by Karel Čapek in his play R.U.R. (Rossum’s
Universal Robots), depicting mechanical workers.
• 1940s:
o Norbert Wiener introduced cybernetics, the study of control and communication
in living organisms and machines.
o Isaac Asimov introduced the Three Laws of Robotics, exploring the relationship
between humans and robots.
• 1950s-1960s:
o Unimate (1961): The first industrial robot, created by George Devol, was used in
automobile manufacturing for tasks like welding.
o Shakey the Robot (1966): The first general-purpose mobile robot, capable of
reasoning and navigating, developed at Stanford Research Institute.
Rise of Intelligent Systems
• 1970s:
o The Stanford Arm and MIT’s Robot Hand advanced robotic manipulation.
o Development of AI programming languages like LISP allowed robots to perform
more complex tasks.
• 1980s:
o Emergence of vision systems, enabling robots to "see" and recognize objects.
o Japan became a leader in robotics, producing industrial and humanoid robots
(e.g., Honda’s early prototypes of humanoids).
21st Century: Robotics and AI Integration
• 2000s:
o Autonomous Robots: Robots like the Roomba (robotic vacuum cleaner) entered
homes.
o ASIMO (Honda, 2000): Advanced humanoid robot capable of walking, running,
and recognizing faces and voices.
o Military and space exploration saw significant robotic advancements with
unmanned aerial vehicles (UAVs) and Mars rovers.
• 2010s:
o AI and Machine Learning:
▪ Robots began using neural networks and deep learning for tasks like
speech recognition and autonomous driving.
▪ Advancements in natural language processing allowed robots to
understand and respond in conversational language.
o Collaborative Robots (Cobots):
▪ Robots designed to work alongside humans safely in industrial settings.
• 2020s:
o Humanoid Robots: Robots like Boston Dynamics’ Atlas displayed exceptional
agility and balance.
o Soft Robotics: Inspired by biological organisms, these robots are used in delicate
environments such as medical surgeries.
o AI Integration: Systems like OpenAI’s models contribute to enhanced robot
intelligence and interaction.
Key Applications in Modern Times
• Industrial Robotics:
o Automation of manufacturing processes with precision and efficiency.
• Healthcare:
o Robotic surgery systems like da Vinci Surgical Robot revolutionized medical
operations.
• Space Exploration:
o Robots like the Perseverance Rover explore Martian terrain.
• Autonomous Vehicles:
o Self-driving technology integrates robotics, AI, and sensors for navigation and
safety.
• Service and Social Robots:
o Robots in customer service, caregiving, and home assistance are becoming
commonplace.
State of the Art in Robotics & AI
Definition
The state of the art in Robotics and AI refers to the most advanced technologies, methods, and
applications currently achieved in these fields, representing the cutting edge of innovation and
development.
Key Features
1. Advanced Capabilities
o Autonomy: Robots perform tasks without human intervention.
o Adaptability: Learning and adapting to dynamic environments (e.g., AI-powered
decision-making).
o Precision: Enhanced accuracy in tasks like surgeries or manufacturing.
2. Integration with Emerging Technologies
o Internet of Things (IoT): Real-time data exchange and monitoring.
o Cloud Robotics: Centralized data processing and sharing.
o Edge Computing: On-device data processing for real-time decisions.
o 5G Networks: High-speed communication and low latency for remote control.
3. Human-Robot Interaction (HRI)
o Natural Language Processing (NLP) enables robots to understand and respond to human
speech.
o Emotion recognition for improved interaction.
4. Applications
o Healthcare: Surgical robots, rehabilitation devices, diagnostics.
o Autonomous Systems: Self-driving cars, UAVs, and drones.
o Industrial Automation: Collaborative robots (Cobots) in manufacturing and logistics.
o Space Exploration: Robotic rovers like NASA’s Perseverance.
Examples of State-of-the-Art Robotics & AI
1. Sophia (Hanson Robotics): Humanoid robot with advanced conversational AI and human-like
expressions.
2. Boston Dynamics’ Robots: Robots like Spot and Atlas, showcasing advanced mobility and
adaptability.
3. Tesla’s Autopilot: AI-powered autonomous vehicle system.
Challenges
1. Ethics: Job displacement, privacy, and ethical decision-making in AI.
2. Safety: Ensuring reliable and secure operation of AI systems.
3. Scalability: Reducing costs and making technology accessible globally.
Future Trends
1. Neuro-robotics: Integration of robotics with neural control systems for advanced prosthetics.
2. Soft Robotics: Flexible robots for delicate tasks in healthcare and agriculture.
3. Swarm Robotics: Collective behavior in robots for disaster recovery and environmental
monitoring.
4. Quantum Computing in AI: Faster AI computations for robotics.
Need for AI in Robotics
Artificial Intelligence (AI) plays a crucial role in enhancing the capabilities of robotic systems.
Traditional robots follow predefined instructions, but AI enables them to learn, adapt, and make
decisions autonomously. The integration of AI in robotics is essential for improving automation,
efficiency, and intelligent decision-making in various applications.
1. Enhancing Perception and Sensing
• AI helps robots process data from sensors like cameras, LiDAR, ultrasonic, and infrared
sensors.
• Enables object detection, facial recognition, and environment mapping for autonomous
navigation.
2. Decision-Making and Problem-Solving
• AI-powered robots can analyze situations and make real-time decisions.
• Machine Learning (ML) and Deep Learning allow robots to recognize patterns and
predict outcomes.
• Used in self-driving cars, industrial automation, and robotic surgery.
3. Adaptive Learning and Self-Improvement
• AI enables robots to learn from past experiences and improve their performance.
• Reinforcement Learning (RL) allows robots to optimize their actions for better efficiency.
• Example: AI-powered warehouse robots learn to optimize pick-and-place operations.
4. Human-Robot Interaction (HRI)
• AI allows robots to understand speech, gestures, and emotions, improving human
interaction.
• Natural Language Processing (NLP) helps in voice commands and communication.
• Example: AI assistants like Sophia (humanoid robot) and Pepper (social robot).
5. Autonomy and Mobility
• AI-powered robots can navigate and operate independently in dynamic environments.
• Used in autonomous vehicles, drones, and space exploration robots (e.g., NASA’s Mars
Rover).
• AI-based Simultaneous Localization and Mapping (SLAM) enables real-time path
planning.
6. Predictive Maintenance and Fault Detection
• AI helps robots predict failures and perform self-diagnostics.
• Reduces downtime and maintenance costs in industries like manufacturing and
healthcare.
• Example: AI-based industrial robots monitor wear and tear to prevent breakdowns.
7. Multi-Robot Coordination and Swarm Robotics
• AI enables multiple robots to collaborate and coordinate tasks efficiently.
• Used in swarm robotics for applications like disaster management, search and rescue, and
logistics.
• Example:
Amazon’s AI-driven warehouse robots work in coordination for fast deliveries.
8. Robotics in Healthcare and Assistive Technology
• AI-driven robots assist in surgeries, rehabilitation, and elderly care.
• Example: AI-assisted surgical robots like the da Vinci Surgical System.
• Exoskeleton robots use AI to assist the mobility of individuals with disabilities.
Intelligent Systems:
In order to design intelligent systems, it is important to categorize them into four categories
(Luger and Stubblefield, 1993; Russell and Norvig, 2003):
1. Systems that think like humans
2. Systems that think rationally
3. Systems that behave like humans
4. Systems that behave rationally
Scientific Goal: To determine which ideas about knowledge representation, learning, rule
systems, search, and so on explain various sorts of real intelligence.
Engineering Goal: To solve real world problems using AI techniques such as Knowledge
representation, learning, rule systems, search, and so on.
Traditionally, computer scientists and engineers have been more interested in the engineering
goal, while psychologists, philosophers and cognitive scientists have been more interested in
the scientific goal.
Cognitive Science: Think Human-Like
a. Requires a model of human cognition; sufficiently precise models allow simulation by
computers.
b. Focus is not just on behavior and I/O, but also on the reasoning process itself.
c. Goal is not just to produce human-like behavior but to produce a sequence of steps of the
reasoning process, similar to the steps followed by a human in solving the same task.
Laws of thought: Think Rationally
a. The study of mental faculties through the use of computational models; that is, the study of
the computations that make it possible to perceive, reason, and act.
b. Focus is on inference mechanisms that are provably correct and guarantee an optimal solution.
c. Goal is to formalize the reasoning process as a system of logical rules and procedures of
inference.
d. Develop systems of representation that allow inferences like:
“Socrates is a man. All men are mortal. Therefore Socrates is mortal.”
Turing Test: Act Human-Like
a. The art of creating machines that perform functions requiring intelligence when performed by
people; that is, the study of how to make computers do things which, at the moment, people do
better.
b. Focus is on action rather than on intelligent behavior centered around a representation of the world.
c. Example: Turing Test
o Three rooms contain a person, a computer, and an interrogator.
o The interrogator can communicate with the other two by teletype (so that the machine cannot
imitate the appearance or voice of the person).
o The interrogator tries to determine which is the person and which is the machine.
o The machine tries to fool the interrogator into believing that it is the human, and the person
also tries to convince the interrogator that he or she is the human.
o If the machine succeeds in fooling the interrogator, we conclude that the machine is
intelligent.
Rational Agent: Act Rationally
a. Tries to explain and emulate intelligent behavior in terms of computational processes; that is,
it is concerned with the automation of intelligence.
b. Focus is on systems that act sufficiently well, if not optimally, in all situations.
c. Goal is to develop systems that are rational and sufficient.
Intelligent Agents:
What is an Agent?
An agent is anything that perceives its environment through sensors and acts upon
that environment through actuators. An agent runs in a cycle of perceiving, thinking,
and acting. An agent can be:
o Human Agent: A human agent has eyes, ears, and other organs which work as sensors,
and hands, legs, and the vocal tract which work as actuators.
o Robotic Agent: A robotic agent can have cameras and infrared range finders as
sensors and various motors as actuators.
o Software Agent: A software agent can take keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents, such as thermostats, cellphones, and
cameras; we ourselves are agents too.
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: A sensor is a device which detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
Actuators: Actuators are the components of a machine that convert energy into motion.
Actuators are responsible for moving and controlling a system. An actuator can
be an electric motor, a gear, a rail, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screen.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using
sensors and actuators to achieve goals. An intelligent agent may learn from the
environment to achieve its goals. A thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: Decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.
Rational Agent:
A rational agent is an agent which has clear preferences, models uncertainty, and acts
in a way that maximizes its performance measure over all possible actions.
A rational agent is said to do the right thing. AI is about creating rational agents
that use game theory and decision theory in various real-world scenarios.
For an AI agent, rational action is most important because, in reinforcement
learning, the agent receives a positive reward for each best possible action and a
negative reward for each wrong action.
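The reward-driven behavior described above can be sketched with a minimal tabular Q-learning loop; the corridor world, reward values, and parameters below are illustrative assumptions:

```python
import random

# Toy corridor: states 0..4, goal at state 4; actions move left (-1) or right (+1)
ACTIONS = [-1, +1]
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), 4)
        r = 1.0 if s_next == 4 else 0.0   # positive reward only at the goal
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# After training, the greedy policy moves right in every non-goal state
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(4)]
print(policy)
```

Warehouse robots optimizing pick-and-place operations use the same idea, just with far larger state and action spaces.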
Note: Rational agents in AI are very similar to intelligent agents.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be
judged on the basis of the following points:
o The performance measure which defines the success criterion.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The sequence of percepts to date.
Note: Rationality differs from omniscience because an omniscient agent knows the
actual outcome of its actions and acts accordingly, which is not possible in reality.
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It
can be viewed as:
Agent = Architecture + Agent Program
Following are the main three terms involved in the structure of an AI agent:
Architecture: The architecture is the machinery that an AI agent executes on.
Agent Function: The agent function maps a percept sequence to an action:
f : P* → A
Agent program: An agent program is an implementation of the agent function. The agent
program executes on the physical architecture to produce the function f.
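A minimal sketch of the mapping f : P* → A, assuming a table-driven agent program; the table entries and percept names below are purely illustrative:

```python
class TableDrivenAgent:
    """Agent program implementing f : P* -> A via a lookup table on the percept sequence."""

    def __init__(self, table):
        self.table = table      # maps percept sequences (tuples) to actions
        self.percepts = []      # the percept sequence P* seen so far

    def __call__(self, percept):
        self.percepts.append(percept)
        # Look up the whole percept history; fall back to a no-op if unlisted
        return self.table.get(tuple(self.percepts), "NoOp")

# Hypothetical table for a two-step interaction
table = {
    ("Clean",): "Right",
    ("Clean", "Dirty"): "Suck",
}
agent = TableDrivenAgent(table)
print(agent("Clean"))
print(agent("Dirty"))
```

Table-driven agents are conceptually simple but impractical for real environments, since the table grows exponentially with the length of the percept sequence.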
PEAS Representation
PEAS is a model used to describe an AI agent. When we define an AI agent
or rational agent, we can group its properties under the PEAS representation model. It
is made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
PEAS for self-driving cars:
For a self-driving car, the PEAS representation will be:
Performance: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrians
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
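The PEAS description can be captured as a simple record; the field names are an illustrative choice, and the values mirror the self-driving-car table above:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # P: performance measure
    environment: list   # E: environment
    actuators: list     # A: actuators
    sensors: list       # S: sensors

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(self_driving_car.performance)
```

Writing the PEAS tuple down explicitly like this is a useful first step before choosing an agent architecture.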
Structure of an AI Agent
To understand the structure of Intelligent Agents, we should be familiar with
Architecture and Agent programs. Architecture is the machinery that the agent
executes on. It is a device with sensors and actuators, for example, a robotic car, a
camera, and a PC. An agent program is an implementation of an agent function. An
agent function is a map from the percept sequence (the history of all that an agent has
perceived to date) to an action.
Agent = Architecture + Agent Program
There are many examples of agents in artificial intelligence. Here are a few:
• Intelligent personal assistants: These are agents that are designed to help users
with various tasks, such as scheduling appointments, sending messages, and
setting reminders. Examples of intelligent personal assistants include Siri, Alexa,
and Google Assistant.
• Autonomous robots: These are agents that are designed to operate
autonomously in the physical world. They can perform tasks such as cleaning,
sorting, and delivering goods. Examples of autonomous robots include the
Roomba vacuum cleaner and the Amazon delivery robot.
• Gaming agents: These are agents that are designed to play games, either against
human opponents or other agents. Examples of gaming agents include chess
playing agents and poker-playing agents.
• Fraud detection agents: These are agents that are designed to detect fraudulent
behavior in financial transactions. They can analyze patterns of behavior to
identify suspicious activity and alert authorities. Examples of fraud detection
agents include those used by banks and credit card companies.
• Traffic management agents: These are agents that are designed to manage
traffic flow in cities. They can monitor traffic patterns, adjust traffic lights, and
reroute vehicles to minimize congestion. Examples of traffic management agents
include those used in smart cities around the world.
• A software agent has keystrokes, file contents, and received network packets that
act as sensors, and screen displays, files, and sent network packets that act as
actuators.
• A Human-agent has eyes, ears, and other organs which act as sensors, and hands,
legs, mouth, and other body parts act as actuators.
• A Robotic agent has Cameras and infrared range finders which act as sensors and
various motors act as actuators.
Types of Agents
Agents can be grouped into the following classes based on their degree of perceived
intelligence and capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent
• Multi-agent systems
• Hierarchical agents
Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of
the current percept. Percept history is the history of all that an agent has perceived to
date. The agent function is based on the condition-action rule. A condition-action rule
is a rule that maps a state i.e., a condition to an action. If the condition is true, then the
action is taken, else not. This agent function only succeeds when the environment is fully
observable. For simple reflex agents operating in partially observable environments,
infinite loops are often unavoidable. It may be possible to escape from infinite loops if the
agent can randomize its actions.
Problems with simple reflex agents:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The condition-action rule table is usually too big to generate and store.
• If any change occurs in the environment, the collection of rules needs to
be updated.
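A simple reflex agent's condition-action rules can be sketched for the classic two-square vacuum world; the percept format and the rules themselves are illustrative:

```python
def simple_reflex_vacuum_agent(percept):
    """Acts on the current percept only; no percept history is kept."""
    location, status = percept
    # Condition-action rules: condition on the left, action on the right
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))
print(simple_reflex_vacuum_agent(("A", "Clean")))
```

Note that the function has no memory: if the percept does not fully describe the world, the agent can loop forever, which is exactly the partial-observability problem described above.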
Model-Based Reflex Agents
It works by finding a rule whose condition matches the current situation. A model-based
agent can handle partially observable environments by the use of a model about the
world. The agent has to keep track of the internal state which is adjusted by each
percept and that depends on the percept history. The current state is stored inside the
agent which maintains some kind of structure describing the part of the world which
cannot be seen.
Updating the state requires information about:
• How does the world evolve independently of the agent?
• How do the agent’s actions affect the world?
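A minimal sketch of a model-based reflex agent for a two-square vacuum world, assuming the agent perceives only its current square and keeps an internal model of the other (states and rules are illustrative):

```python
class ModelBasedVacuumAgent:
    """Handles partial observability by keeping an internal model of the world."""

    def __init__(self):
        self.model = {"A": None, "B": None}   # internal state: both squares unknown

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status          # update internal state from the percept
        if status == "Dirty":
            self.model[location] = "Clean"     # sucking will leave this square clean
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                      # the model says the whole world is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent(("A", "Clean")))   # B is still unknown, so go inspect it
print(agent(("B", "Clean")))   # model now records both squares as clean
```

Unlike the simple reflex version, this agent can stop once its model says the job is done, even though it never perceives both squares at once.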
Goal-Based Agents
These kinds of agents take decisions based on how far they currently are from their
goal (a description of desirable situations). Their every action is intended to reduce
their distance from the goal. This allows the agent a way to choose among multiple
possibilities, selecting the one which reaches a goal state. The knowledge that supports
its decisions is represented explicitly and can be modified, which makes these agents
more flexible. They usually require search and planning. The goal-based agent’s
behavior can easily be changed.
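Choosing among possibilities by how much each reduces the distance to the goal can be sketched as follows; the grid, the move set, and the Manhattan-distance measure are illustrative assumptions:

```python
def goal_based_step(position, goal):
    """Choose the move that most reduces the Manhattan distance to the goal."""
    moves = {"right": (1, 0), "left": (-1, 0), "up": (0, 1), "down": (0, -1)}

    def distance(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Evaluate each candidate action by the distance of the state it leads to
    return min(moves, key=lambda m: distance((position[0] + moves[m][0],
                                              position[1] + moves[m][1])))

print(goal_based_step(position=(0, 0), goal=(3, 2)))
```

Full goal-based agents replace this one-step greedy choice with search and planning over whole action sequences, but the principle of scoring states against an explicit goal is the same.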
Utility-Based Agents
Agents that choose actions based on a preference (utility) for each state are called
utility-based agents. When there are multiple possible alternatives, utility-based
agents decide which one is best. Sometimes achieving the desired goal is not enough. We may
look for a quicker, safer, cheaper trip to reach a destination. Agent happiness should be
taken into consideration. Utility describes how “happy” the agent is. Because of the
uncertainty in the world, a utility agent chooses the action that maximizes the expected
utility. A utility function maps a state onto a real number which describes the associated
degree of happiness.
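Choosing the action with the highest expected utility can be sketched in a few lines; the actions, outcome probabilities, and utility values are made up for illustration:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical routes to a destination: each action has uncertain outcomes
actions = {
    "highway":  [(0.9, 10), (0.1, -50)],   # usually fast, small risk of a long jam
    "backroad": [(1.0, 6)],                # slower, but certain
}

# Pick the action maximizing expected utility, as a utility-based agent would
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))
```

Here the highway's expected utility (0.9·10 + 0.1·(−50) = 4) is lower than the backroad's certain 6, so the agent prefers the safer route even though the goal "reach the destination" is achieved either way.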
Learning Agent
A learning agent in AI is a type of agent that can learn from its past experiences; that
is, it has learning capabilities. It starts acting with basic knowledge and then adapts
automatically through learning. A learning agent has four main conceptual
components:
1. Learning element: It is responsible for making improvements by learning from the
environment.
2. Critic: The learning element takes feedback from the critic, which describes how well
the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting the external action.
4. Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.
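The four components can be sketched as a skeleton; the class, the method names, and the vacuum-style rule are illustrative, not a standard API:

```python
class LearningAgent:
    """Skeleton of the four conceptual components of a learning agent."""

    def __init__(self):
        self.rules = {}                            # knowledge the performance element uses

    def performance_element(self, percept):
        return self.rules.get(percept, "NoOp")     # selects the external action

    def critic(self, percept, action):
        # Feedback against a fixed performance standard: dirt should be sucked up
        return 1.0 if (percept == "Dirty" and action == "Suck") else 0.0

    def learning_element(self, percept, action, feedback):
        # Improve the rules whenever the critic reports poor performance
        if percept == "Dirty" and feedback == 0.0:
            self.rules["Dirty"] = "Suck"

    def problem_generator(self):
        return "Explore"                           # suggest new, informative experiences

agent = LearningAgent()
first = agent.performance_element("Dirty")         # agent starts with no knowledge
agent.learning_element("Dirty", first, agent.critic("Dirty", first))
second = agent.performance_element("Dirty")        # after learning, it acts correctly
print(first, second)
```

The loop at the bottom shows the interaction: the performance element acts, the critic scores the action, and the learning element updates the knowledge the performance element will use next time.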