Unit 1: Intro To Robotics: Zeroth Law, First Law, Second Law, Third Law
Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with a higher-order law.
TYPES OF ROBOTS:
1. Stationary robots: perform repeating tasks without ever moving an inch (in contrast to robots that explore areas or imitate a human being).
2. Autonomous robots: self-supporting or, in other words, self-contained; in a way they rely on their own 'brains'.
3. Virtual robots: just programs, building blocks of software inside a computer.
4. Remote-controlled robots: carry out difficult and usually dangerous tasks without a human being at the spot.
5. Mobile robots: move through their environment by rolling or walking.
SENSORS:
Manipulation:
2. AI/ML IN ROBOTICS:
What is AI?
It is the simulation of human intelligence in machines enabling them to perform
cognitive tasks. These tasks include learning, reasoning, problem-solving, perception,
language understanding, and decision-making. AI systems can be rule-based
(symbolic AI) or data-driven (machine learning and deep learning), allowing them to
improve performance over time. AI is used in various fields, such as healthcare, finance,
social media, automation, and robotics.
MACHINE LEARNING: Branch of AI that enables systems to learn and improve from
experience without explicit programming. It develops algorithms that analyse data,
identify patterns, and make decisions. The goal is for computers to learn automatically
and adapt without human intervention.
Uses and Abuses:
• Predict the outcomes of elections
• Identify and filter spam messages from e-mail
• Foresee criminal activity
• Automate traffic signals according to road conditions
• Produce financial estimates of storms and natural disasters
• Examine customer churn
• Create auto-piloting planes and auto-driving cars
• Stock market prediction
• Target advertising to specific types of consumers
Recognizing patterns: Pattern recognition is the automated recognition of patterns and
regularities in data. It has applications in statistical data analysis, signal processing,
image analysis, information retrieval, data compression, and machine learning.
How do machines learn?
Machines learn through machine learning (ML), which enables systems to improve their
performance based on experience rather than explicit programming. The learning process
generally follows these steps:
1. Data Collection – Machines require large datasets to recognize patterns and make
predictions.
2. Data Processing – The collected data is cleaned, structured, and prepared for
analysis.
3. Model Training – A learning algorithm builds a model by finding patterns in the
processed data.
4. Evaluation – The trained model is tested on new data to assess its accuracy and
performance.
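The learning steps above can be sketched with a toy example. The dataset (hours studied vs. exam score) and the simple least-squares line used as the "model" below are invented for illustration, not taken from the notes:

```python
# Minimal sketch of the learning steps using 1-D linear regression
# (least squares), with no external libraries. All data is made up.

def fit_line(xs, ys):
    """Train: find the slope/intercept minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# 1. Data collection: toy dataset (hours studied -> exam score).
data = [(1, 52), (2, 55), (3, 61), (4, 64), (5, 70), (6, 73)]

# 2. Data processing: split into training and evaluation sets.
train, test = data[:4], data[4:]

# 3. Model training.
slope, intercept = fit_line([x for x, _ in train], [y for _, y in train])

# 4. Evaluation: mean absolute error on unseen data.
mae = sum(abs((slope * x + intercept) - y) for x, y in test) / len(test)
print(round(slope, 2), round(intercept, 2), round(mae, 2))
```

A real pipeline would use richer features and models, but the collect/process/train/evaluate structure is the same.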
Scope of ML in Robotics:
AI enhances vision for object detection, grasping for optimal handling, motion control for
dynamic interaction and obstacle avoidance, and data analysis for proactive decision-
making.
While computer vision focuses purely on image processing through algorithms, robot
vision or machine vision involves both software and hardware (like cameras and
sensors) that allow robots to perceive and interact with their environment.
Machine vision is responsible for advancements in robot guidance and inspection
systems, enabling automation.
The key difference is that robot vision also considers kinematics, meaning how a
robot understands its position and movements in a 3D space, allowing it to
physically interact with its surroundings.
SELF-SUPERVISED LEARNING:
• It is a type of machine learning where the model generates its own labels from raw data
instead of relying on manually labelled data.
• It serves as a bridge between supervised and unsupervised learning, allowing AI to learn
representations from vast amounts of unlabelled data.
• In robotics, a priori (pre-existing) training and data captured at close range are used to
interpret long-range, ambiguous sensor data. EXAMPLES – Watch-Bot, FSVMs, road probabilistic
distribution model (RPDM)
MULTI-AGENT LEARNING:
Multi-Agent Learning (MAL) is a branch of machine learning in which multiple AI agents learn and
interact within a shared environment. These agents can collaborate, compete, or coexist to
achieve individual or collective goals, which requires coordination and negotiation between agents.
Inverse Optimal Control (IOC), also called Inverse Reinforcement Learning (IRL), is the process
of recovering an unknown reward function in a Markov Decision Process (MDP) by observing an
expert's optimal behaviour (policy).
Markov Decision Process (MDP) – A framework used in reinforcement learning where an agent
takes actions in a given state to maximize rewards over time.
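As a minimal sketch of an MDP being solved, the snippet below runs the Bellman optimality update (value iteration) on an invented three-state environment; the states, actions, and rewards are illustrative only:

```python
# Hypothetical 3-state MDP solved by value iteration: the agent takes
# actions in each state to maximize discounted reward over time.
# States: 0, 1, 2 (state 2 is an absorbing goal). Deterministic sketch.

# transitions[state][action] = (next_state, reward)
transitions = {
    0: {"right": (1, 0.0), "stay": (0, 0.0)},
    1: {"right": (2, 1.0), "stay": (1, 0.0)},
    2: {"stay": (2, 0.0)},          # absorbing goal state
}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman optimality update until values settle.
V = {s: 0.0 for s in transitions}
for _ in range(100):
    V = {s: max(r + gamma * V[s2] for (s2, r) in acts.values())
         for s, acts in transitions.items()}

print(V)
```

State 1 is worth the full reward of 1.0 (it can reach the goal in one step), while state 0 is worth 0.9 because the same reward arrives one step later and is discounted by gamma. IRL/IOC runs this setup in reverse: the reward values are unknown and are inferred from an expert's observed policy.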
3. ROBOT CONFIGURATIONS:
WRIST: 3 DOF (pitch – up/down, yaw – left/right, roll – rotation about the arm axis)
CONTROL METHODS: Control could mean two things. One is motion control strategy, i.e.,
whether a robot is servo controlled or not, and the other one is how the motion path is achieved,
i.e., point-to-point or continuous.
COORDINATE TRANSFORMATION:
2D Affine Transformations:
1. 2D AND 3D TRANSLATION
2. 2D AND 3D SCALING: combined; along the X axis; along the Y axis
3. 2D REFLECTION: along X; along Y
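The matrices these headings refer to do not survive in the notes; the standard homogeneous-coordinate forms they describe are:

```latex
% 2D translation by (t_x, t_y) and 3D translation by (t_x, t_y, t_z):
T_{2D} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix},
\qquad
T_{3D} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\
                         0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}

% 2D scaling: combined (s_x, s_y); set s_y = 1 to scale along X only,
% or s_x = 1 to scale along Y only. 3D scaling extends this to a 4x4
% matrix with diagonal (s_x, s_y, s_z, 1).
S_{2D} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}

% 2D reflection along X (negates y) and along Y (negates x):
R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix},
\qquad
R_y = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
```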
1) A sensor is a device that detects a physical quantity (such as light, temperature, or
pressure) and converts it into a signal that can be measured or interpreted.
2) A transducer is a device that converts one form of energy into another. It takes an input
signal in one physical form and transforms it into an output signal of a different form,
typically electrical.
3) An actuator is a device that converts energy (usually electrical, hydraulic, or pneumatic)
into mechanical motion. It is used to move or control a mechanism or system.
CLASSIFICATION OF SENSORS:
1) BASED ON MEASURED QUANTITIES
Non-Linearity Error: Refers to the deviation of the actual output from an ideal straight-
line response over the measurement range. It indicates how much the sensor's output deviates
from a perfectly linear relationship between input and output.
Hysteresis Error: Occurs when a sensor or transducer gives different output values for the
same input, depending on whether the input is increasing or decreasing. This means the
output follows a different path when measured during an increasing input versus a
decreasing input, creating a loop-like effect.
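As a sketch, non-linearity error can be quantified as the worst-case deviation of the measured output from the ideal straight line through the endpoints, expressed as a percentage of full-scale output. The calibration readings below are hypothetical:

```python
# Sketch: non-linearity error as max deviation from the endpoint line,
# as a percentage of full-scale output. All readings are made up.

inputs  = [0, 25, 50, 75, 100]        # e.g. temperature in deg C
outputs = [0.0, 1.3, 2.6, 3.8, 5.0]   # sensor output in volts (hypothetical)

# Ideal straight line between the first and last calibration points.
span_in = inputs[-1] - inputs[0]
span_out = outputs[-1] - outputs[0]
ideal = [outputs[0] + span_out * (x - inputs[0]) / span_in for x in inputs]

# Worst-case deviation, expressed as % of full-scale output.
error_pct = max(abs(m, ) if False else abs(m - i)
                for m, i in zip(outputs, ideal)) / span_out * 100
print(round(error_pct, 1))  # worst deviation as % of full scale
```

Hysteresis error would be measured similarly, but by comparing two sweeps of readings, one with increasing input and one with decreasing input, at the same input points.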
SENSORS IN ROBOTS:
There are generally two categories of sensors used in robotics: those for internal
purposes and those for external purposes.
INTERNAL: Used to monitor and control the various joints of the robot; they form a feedback
control loop with the robot controller.
Examples of internal sensors include potentiometers and optical encoders, while
tachometers of various types can be deployed to control the speed of the robot arm.
EXTERNAL: Sensors that are placed outside the robot and help it interact with its
surroundings, equipment, or objects in a workspace. They are essential for coordinating
robotic operations in industrial environments.
Light Detection and Ranging (LiDAR) is a remote sensing technology that uses laser
pulses to measure distances and create 3D models.
Thresholding is a binary conversion technique in which each pixel is converted into a
binary value, either black or white.
Region growing is a collection of segmentation techniques in which pixels are
grouped in regions called grid elements based on attribute similarities.
Edge detection considers the intensity change that occurs in the pixels at the
boundary or edges of a part.
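Thresholding can be sketched in a few lines; the 3x3 grayscale "image" and the cutoff value of 128 below are made up for illustration:

```python
# Minimal sketch of thresholding: each grayscale pixel (0-255) becomes
# 0 (black) or 1 (white) depending on a cutoff value. Toy 3x3 image.

image = [
    [ 12, 200,  34],
    [180,  90, 220],
    [ 60, 255,  10],
]
threshold = 128

binary = [[1 if px >= threshold else 0 for px in row] for row in image]
for row in binary:
    print(row)
```

Choosing the cutoff well matters in practice; methods such as Otsu's algorithm pick it automatically from the image histogram.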
UNIT: 03 (KINEMATICS)
Degrees of freedom (DoF) is the number of independent movements a
robot can make. It's also used to describe the motion capabilities of robots,
including humanoid robots.
Forward kinematics refers to the process of determining the position and orientation of the
end-effector (tool or hand) of a robot given the values of its joint parameters (angles or
displacements).
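Forward kinematics can be illustrated with a hypothetical two-link planar arm; the link lengths and joint angles below are arbitrary:

```python
import math

# Forward kinematics sketch for a 2-link planar (2R) arm: given the
# joint angles, compute the end-effector (x, y) position.

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Angles in radians; l1, l2 are the link lengths."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at 0: arm stretched out along the x-axis.
print(forward_kinematics(0.0, 0.0))      # -> (2.0, 0.0)

# First joint at 90 degrees: arm points along the y-axis.
x, y = forward_kinematics(math.pi / 2, 0.0)
print(round(x, 6), round(y, 6))
```

Inverse kinematics is the reverse problem, computing the joint angles that place the end-effector at a desired (x, y), and generally has multiple (or no) solutions.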