
Artificial Intelligence
Chapter 25
Robotics
Definition of Robotics
 A robot is…
• “An active artificial agent whose environment is the physical
world”
--Russell and Norvig
• “A programmable, multifunction manipulator designed to move
material, parts, tools or specific devices through variable
programmed motions for the performance of a variety of tasks”
--Robot Institute of America
 Robotics brings together many of the concepts we have seen earlier in the
book
Robotics
 Robots are physical agents that perform tasks by manipulating the physical
world.
 To do so, they are equipped with effectors such as legs, wheels, joints, and
grippers.
 Effectors have a single purpose: to assert physical forces on the
environment.
 Robots are also equipped with sensors, which allow them to perceive their
environment.
Robot Categories
 Most of today’s robots fall into one of three
primary categories.
 Manipulators, or robot arms, are physically
anchored to their workplace, for example in a
factory assembly line or on the International
Space Station.
 Manipulator motion usually involves a chain of
controllable joints, enabling such robots to
place their effectors in any position within the
workplace.
Robot Categories cont…
 The second category is the mobile
robot.
 Mobile robots move about their
environment using wheels, legs, or
similar mechanisms.
 The planetary rover shown in the figure
explored Mars for a period of three months in 1997.
Robot Categories cont…
 The third type of robot combines
mobility with manipulation, and is
often called a mobile manipulator.
 Humanoid robots mimic the human
torso.
 Figure 25.1(b) shows two early
humanoid robots, both manufactured
by Honda Corp. in Japan.
Other fields of robotics
 The field of robotics also includes prosthetic devices
• (artificial limbs, ears, and eyes for humans)
 intelligent environments
• such as an entire house that is equipped with sensors and effectors
 and multibody systems, wherein robotic action is achieved through swarms
of small cooperating robots.
Robots in the real world
 Real robots must cope with environments that are partially observable,
stochastic, dynamic, and continuous.
 Many robot environments are sequential and multi-agent as well.
 Partial observability and stochasticity are the result of dealing with a large,
complex world.
 Practical robotic systems need to embody prior knowledge about the robot,
its physical environment, and the tasks that the robot will perform so that
the robot can learn quickly and perform safely.
ROBOT HARDWARE
 The success of real robots depends at least as much on the design of sensors
and effectors that are appropriate for the task as on the algorithms that control them.
 Sensors:
 Sensors are the perceptual interface between robot and environment.
 Passive sensors, such as cameras, are true observers of the environment:
• they capture signals that are generated by other sources in the environment.
 Active sensors, such as sonar, send energy into the environment.
• Active sensors tend to provide more information than passive sensors
ROBOT HARDWARE cont..
 Effectors:
 Effectors are the means by which robots move and change the shape of their
bodies.
 To understand the design of effectors, it helps to talk about motion and
shape in the abstract, using the concept of a degree of freedom (DOF). We
count one degree of freedom for each independent direction in which a
robot, or one of its effectors, can move. For example, a rigid body moving
freely in space has six degrees of freedom: three for its (x, y, z) position and
three for its angular orientation.
Robotic Perception
 Perception is the process by which robots map sensor measurements into
internal representations of the environment.
 Perception is difficult because sensors are noisy, and the environment is
partially observable, unpredictable, and often dynamic.
 Good internal representations for robots have three properties:
• they contain enough information for the robot to make good decisions,
• they are structured so that they can be updated efficiently,
• and they are natural in the sense that internal variables correspond to natural state
variables in the physical world.
Robotic Perception cont…
 Robot perception can be viewed as temporal inference from sequences of
actions and measurements.
 Xt is the state of the environment (including the robot) at time t, Zt is the
observation received at time t, and At is the action taken after the
observation is received.
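 To make this concrete, the following is a minimal sketch of one step of recursive state estimation (a Bayes filter) over a discrete set of states; the transition_model and sensor_model arguments are hypothetical placeholders standing in for the robot's motion and sensor models, not part of any particular library.

# A minimal sketch of recursive state estimation (Bayes filtering) over a
# discrete set of states. transition_model and sensor_model are hypothetical
# callables supplied by the user of this function.

def bayes_filter_step(belief, action, observation, states,
                      transition_model, sensor_model):
    """One update of the belief over X_t given action A_{t-1} and observation Z_t.

    belief:            dict mapping each state to its current probability
    transition_model:  transition_model(x_next, x, action) = P(x_next | x, action)
    sensor_model:      sensor_model(observation, x) = P(observation | x)
    """
    # Prediction step: push the belief through the motion model.
    predicted = {}
    for x_next in states:
        predicted[x_next] = sum(
            transition_model(x_next, x, action) * belief[x] for x in states
        )

    # Correction step: weight each state by how well it explains the observation,
    # then renormalize so the probabilities sum to one.
    unnormalized = {x: sensor_model(observation, x) * predicted[x] for x in states}
    total = sum(unnormalized.values())
    return {x: p / total for x, p in unnormalized.items()}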
Robotic Perception cont…
 Localization and mapping:
 Localization is the problem of finding out where things are—including the
robot itself.
 Knowledge about where things are is at the core of any successful physical
interaction with the environment.
 For example, robot manipulators must know the location of objects they
seek to manipulate; navigating robots must know where they are to find
their way around.
 Let us consider a mobile robot that moves
slowly in a flat 2D world.
 Let us also assume the robot is given an
exact map of the environment.
 The pose of such a mobile robot is defined
by its two Cartesian coordinates with
values x and y and its heading with value θ
 If we arrange those three values in a
vector, then any particular state is given by
Xt = (xt, yt, θt).
 The new state Xt+1 is obtained from Xt by an
update of vtΔt in position, along the current
heading θt, and of ωtΔt in orientation.
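 As a worked example, here is a minimal sketch of this deterministic pose update in Python. Real robots add motion noise to this prediction, so treat it only as the idealized kinematic model described above; the function name predict_pose is illustrative.

import math

# Idealized kinematic update for the 2D mobile robot described above:
# state X_t = (x_t, y_t, theta_t), translational velocity v_t, rotational
# velocity omega_t, over a small time step dt.

def predict_pose(x, y, theta, v, omega, dt):
    """Return the predicted pose X_{t+1} after moving for dt seconds."""
    x_new = x + v * dt * math.cos(theta)      # move v*dt along the heading
    y_new = y + v * dt * math.sin(theta)
    theta_new = theta + omega * dt            # turn by omega*dt
    return x_new, y_new, theta_new

# Example: driving at 0.5 m/s while turning at 0.1 rad/s for one second.
print(predict_pose(0.0, 0.0, 0.0, v=0.5, omega=0.1, dt=1.0))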
ROBOTIC SOFTWARE ARCHITECTURES
 A methodology for structuring algorithms is called a software architecture.
 An architecture includes languages and tools for writing programs, as well as
an overall philosophy for how programs can be brought together.
 Modern-day software architectures for robotics must decide how to
combine reactive control and model-based deliberative planning.
 Reactive control is sensor-driven and appropriate for making low-level
decisions in real time.
 However, it rarely yields a plausible solution at the global level, because
global control decisions depend on information that cannot be sensed at the
time of decision making. For such problems, deliberative planning is a more
appropriate choice.
 Most robot architectures use reactive techniques at the lower levels of
control and deliberative techniques at the higher levels.
 Architectures that combine reactive and deliberative techniques are called
hybrid architectures.
Subsumption architecture
 The subsumption architecture (Brooks, 1986) is a framework for assembling
reactive controllers out of finite state machines.
 Nodes in these machines may contain tests for certain sensor variables, in
which case the execution trace of a finite state machine is conditioned on
the outcome of such a test.
 The subsumption architecture enables programmers to compose robot
controllers from interconnected finite state machines.
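 Purely as an illustration, here is a minimal sketch of a subsumption-style reactive controller written as a finite state machine in Python; the state names and the obstacle_ahead sensor variable are hypothetical, and a real controller would be wired to actual sensors and actuators.

# A reactive controller expressed as a finite state machine. The execution
# trace is conditioned on a test of a sensor variable, as described above.

def reactive_controller(state, sensors):
    """Map (current FSM state, sensor readings) to (next state, command)."""
    if state == "FORWARD":
        if sensors["obstacle_ahead"]:          # test on a sensor variable
            return "TURN", "rotate_left"
        return "FORWARD", "drive_forward"
    if state == "TURN":
        if not sensors["obstacle_ahead"]:
            return "FORWARD", "drive_forward"
        return "TURN", "rotate_left"
    raise ValueError(f"unknown state: {state}")

# Example control loop with made-up sensor readings.
state = "FORWARD"
for reading in [{"obstacle_ahead": False}, {"obstacle_ahead": True},
                {"obstacle_ahead": False}]:
    state, command = reactive_controller(state, reading)
    print(state, command)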
Three-layer architecture
 Hybrid architectures combine reaction with deliberation.
 The most popular hybrid architecture is the three-layer architecture which
consists of a
• reactive layer,
• an executive layer,
• and a deliberative layer.
 The reactive layer provides low-level control to the robot. It is characterized
by a tight sensor–action loop. Its decision cycle is often on the order of
milliseconds.
Three-layer architecture
 The executive layer (or sequencing layer) serves as the glue between the
reactive layer and the deliberative layer.
 It accepts directives from the deliberative layer, and sequences them for the
reactive layer.
 For example, the executive layer might handle a set of via-points generated
by a deliberative path planner, and make decisions as to which reactive
behavior to invoke.
 Decision cycles at the executive layer are usually on the order of a second.
The executive layer is also responsible for integrating sensor information into
an internal state representation.
Three-layer architecture
 The deliberative layer generates global solutions to complex tasks using
planning.
 Because of the computational complexity involved in generating such
solutions, its decision cycle is often on the order of minutes. The deliberative
layer (or planning layer) uses models for decision making.
 Those models might be either learned from data or supplied, and they may
utilize state information gathered at the executive layer.
 Some robot software systems possess additional layers, such as user
interface layers that control the interaction with people, or a multi-agent
level for coordinating a robot’s actions with those of other robots operating in
the same environment.
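 To make the division of labor concrete, below is a minimal sketch of how the three layers might be organized in code; the class and method names are hypothetical illustrations rather than an actual robot framework, and the point is only the differing responsibilities and decision cycles.

# Hypothetical organization of the three layers. Each class corresponds to
# one layer; comments note the typical decision cycle from the text.

class ReactiveLayer:
    def control(self, sensor_reading):
        """Tight sensor-action loop, runs every few milliseconds."""
        return "stop" if sensor_reading.get("obstacle_ahead") else "drive"

class ExecutiveLayer:
    def __init__(self, plan):
        self.via_points = list(plan)       # directives from the deliberative layer

    def next_behavior(self, state_estimate):
        """Sequences the plan for the reactive layer, roughly once per second."""
        if self.via_points and state_estimate == self.via_points[0]:
            self.via_points.pop(0)         # reached a via-point, advance to the next
        return self.via_points[0] if self.via_points else None

class DeliberativeLayer:
    def plan(self, start, goal):
        """Global planning; may take minutes on hard problems."""
        return [start, goal]               # trivially direct "plan" for this sketch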
Pipeline architecture
 Just like the subsumption architecture, the pipeline architecture executes
multiple processes in parallel.
 However, the specific modules in this architecture resemble those in the
three-layer architecture.
 Data enters this pipeline at the sensor interface layer.
 The perception layer then updates the robot’s internal models of the
environment based on this data.
 Next, these models are handed to the planning and control layer, which
adjusts the robot’s internal plans and turns them into actual controls for the
robot. Those are then communicated back to the vehicle through the vehicle
interface layer.
Pipeline architecture
 The key to the pipeline architecture is that this all happens in
parallel.
 While the perception layer processes the most recent sensor data,
the control layer bases its choices on slightly older data.
 In this way, the pipeline architecture is similar to the human brain.
 We perceive, plan, and act all at the same time.
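 Purely to illustrate the parallelism, here is a minimal sketch in Python using two threads connected by queues; the stage bodies are hypothetical placeholders for the real perception and planning/control computations.

import queue
import threading

# Perception and planning/control run in parallel and pass data through
# queues, so the control stage always works on slightly older data than the
# perception stage, as described above.

sensor_q, model_q = queue.Queue(), queue.Queue()

def perception_stage():
    while True:
        measurement = sensor_q.get()
        if measurement is None:                   # sentinel: shut the pipeline down
            model_q.put(None)
            break
        model_q.put({"estimate": measurement})    # update the internal model

def planning_control_stage():
    while True:
        model = model_q.get()
        if model is None:
            break
        print("control based on", model)          # turn the plan into controls

threads = [threading.Thread(target=perception_stage),
           threading.Thread(target=planning_control_stage)]
for t in threads:
    t.start()
for m in [1, 2, 3, None]:                         # stand-in for the sensor interface layer
    sensor_q.put(m)
for t in threads:
    t.join()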
