
Robotics and Autonomous Systems

Lecture 2: Mobile Robotics

Richard Williams
Department of Computer Science
University of Liverpool

Today

• Today we’ll start to look at the main problems faced by mobile robots.
• This sets up the issues we’ll consider for the first half of the course.
• We’ll also consider how these issues relate to the idea of agency.

Autonomy

• A really autonomous vehicle makes its own decisions about what to do.
• Many autonomous vehicles are not really autonomous.
• They are teleoperated.
• The notion of an agent can help us understand what this requires.

[Figure: an agent in its environment; sensors deliver percepts, effectors carry out actions.]

What is an agent?

• As we said before:

  An agent is a computer system that is situated in some environment, and that is capable of autonomous action in that environment in order to meet its delegated objectives.

• It is all about decisions:
  • An agent has to choose what action to perform.
  • An agent has to decide when to perform an action.
• Trivial (non-interesting) agents:
  • thermostat;
  • light switch;
  • unix daemon (e.g., biff).
• More interesting agents are intelligent.

Intelligent Agents

• An intelligent agent is a computer system capable of flexible autonomous action in some environment.
  By flexible, we mean:
  • reactive;
  • pro-active;
  • social.
• All these properties make it able to respond to what is around it.

Abstract Architectures for Agents

• Assume the world may be in any of a finite set E of discrete, instantaneous states:

  E = {e, e′, . . .}

• Agents are assumed to have a repertoire of possible actions available to them, which transform the state of the world:

  Ac = {α, α′, . . .}

• Actions can be non-deterministic, but only one state ever results from an action.
• A run, r, of an agent in an environment is a sequence of interleaved world states and actions:

  r : e0 --α0--> e1 --α1--> e2 --α2--> e3 --α3--> . . . --α(u-1)--> eu
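
• To make these definitions concrete, here is a small Python sketch, purely illustrative: it represents states, actions and a run in the sense just defined (the particular state and action names are made up).

    # A literal rendering of the abstract model: states and actions are opaque
    # labels, and a run is a start state followed by alternating
    # (action, state) pairs. All names here are illustrative.
    from typing import List, Tuple

    State = str    # elements of E
    Action = str   # elements of Ac

    # r : e0 --a0--> e1 --a1--> e2 ...
    Run = Tuple[State, List[Tuple[Action, State]]]

    example_run: Run = ("e0", [("north", "e1"), ("north", "e2"), ("east", "e3")])

    def final_state(run: Run) -> State:
        """The state a run ends in."""
        start, steps = run
        return steps[-1][1] if steps else start
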
Runs of agents

• When actions are deterministic each state has only one possible successor.
• A run would look something like the following:

[Figure: a robot stepped through a grid world one action at a time: North!, North!, East!, North!]

• Which we might picture as so:

[Figure: the whole run drawn as a chain of states joined by the chosen actions.]

• When actions are non-deterministic, a run (or trajectory) is the same, but the set of possible runs is more complex.

Runs of agents

• In fact it is more complex still, because all of the runs we pictured start from the same state.
• Let R be the set of all such possible finite sequences (over E and Ac).
  This is the set of all runs from all starting states.
• R^E is the subset of R containing the sequences that end with a state.
  • All the ones where the agent needs to make a decision.

Agents

• We can think of an agent as being a function which maps runs to actions:

  Ag : R^E → Ac

• Thus an agent makes a decision about what action to perform based on the history of the system that it has witnessed to date.
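
• In this abstract sense an agent is just a function from histories to actions. The Python sketch below (names invented for illustration) makes the point that the choice can depend on the whole run, not only the state it ends in.

    # An agent as a function from the history it has witnessed to an action.
    # For simplicity the run is reduced to the sequence of states visited.
    from typing import Callable, Sequence

    State = str
    Action = str
    Agent = Callable[[Sequence[State]], Action]   # Ag : R^E -> Ac

    def history_sensitive_agent(run: Sequence[State]) -> Action:
        """Choose differently when the current state has been visited before,
        so the same final state can lead to different decisions."""
        current = run[-1]
        return "west" if run.count(current) > 1 else "north"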

Agents

[Figure: runs that reach the same state by different routes, with the agent choosing different actions (North!, West!) in each case.]

• Potentially the agent will reach a different decision when it reaches the same state by different routes.

Purely Reactive Agents

• Some agents decide what to do without reference to their history: they base their decision making entirely on the present, with no reference at all to the past.
• We call such agents purely reactive:

  action : E → Ac

• A thermostat is a purely reactive agent:

  action(e) = off, if e = temperature OK
              on,  otherwise
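
• Written out as code, the thermostat might look like this minimal Python sketch (the boolean percept is a simplification of “temperature OK”).

    # Purely reactive agent: action : E -> Ac, with no memory of past states.
    Action = str  # "on" or "off"

    def thermostat_action(temperature_ok: bool) -> Action:
        """action(e) = off if the temperature is OK, on otherwise."""
        return "off" if temperature_ok else "on"

    # The same state always produces the same action.
    assert thermostat_action(True) == "off"
    assert thermostat_action(False) == "on"
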
Purely Reactive Agents

[Figure: the same branching runs as before, but now the same action is chosen whenever the same state is reached.]

• A reactive agent will always do the same thing in the same state.

A reactive robot

• A simple reactive program for a robot might be:

  Drive forward until you bump into something. Then, turn to the right. Repeat.
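
• The same program as a purely reactive control loop in Python; the robot and sensor interface (bumper_pressed, execute) is hypothetical, invented for illustration.

    # Reactive bump-and-turn robot: the action depends only on the current
    # percept (bumped or not), never on history.
    def reactive_step(bumped: bool) -> str:
        """Drive forward until you bump into something, then turn right."""
        return "turn_right" if bumped else "forward"

    def run_reactive(robot, steps: int = 100) -> None:
        for _ in range(steps):
            command = reactive_step(robot.bumper_pressed())  # hypothetical sensor
            robot.execute(command)                           # hypothetical effector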


Agents with state

[Figure: agent architecture with internal state: percepts from the Environment enter through see, next updates the internal state, and action chooses the actions sent back to the environment.]

• The see function is the agent’s ability to observe its environment, whereas the action function represents the agent’s decision making process.
• Output of the see function is a percept:

  see : E → Per

• The agent has some internal data structure, which is typically used to record information about the environment state and history.
• Let I be the set of all internal states of the agent.
• The action-selection function action is now defined as a mapping from internal states to actions:

  action : I → Ac

• An additional function next is introduced, which maps an internal state and percept to an internal state:

  next : I × Per → I

• This says how the agent updates its view of the world when it gets a new percept.
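
• Read as an interface, the three functions have the following shapes; the Python aliases below are just a typed restatement of the signatures above.

    # The agent-with-state interface: see observes, next updates the internal
    # state, action chooses what to do. E, Per, I and Ac are placeholders for
    # whatever concrete types a particular agent uses.
    from typing import Callable, TypeVar

    E = TypeVar("E")      # environment states
    Per = TypeVar("Per")  # percepts
    I = TypeVar("I")      # internal states
    Ac = TypeVar("Ac")    # actions

    SeeFn = Callable[[E], Per]        # see    : E -> Per
    NextFn = Callable[[I, Per], I]    # next   : I x Per -> I
    ActionFn = Callable[[I], Ac]      # action : I -> Ac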

Agents with state

1. Agent starts in some initial internal state i0.
2. Observes its environment state e, and generates a percept see(e).
3. Internal state of the agent is then updated via the next function, becoming next(i0, see(e)).
4. The action selected by the agent is action(next(i0, see(e))). This action is then performed.
5. Goto (2).

A robot with state

[Figure: the agent-with-state architecture applied to the robot: percepts, see, state, next, action, actions.]

• per is a bool that indicates “against an object”.
• i is an integer, “against object for n steps”.
• see updates per each step, indicating if the robot is against an object.
• next is as follows:

  next(i, per) = i + 1, if per = true
                 0,     otherwise
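
• Putting the control loop together with this robot gives something like the Python sketch below. The robot interface and the particular action rule (turning away from an obstacle, and backing up once the robot has been stuck for a few steps, as described next) are illustrative assumptions rather than anything fixed by the slides.

    # The agent-with-state control loop, specialised to the bump-counting robot:
    # per is True when the robot is against an object, and i counts how many
    # consecutive steps it has been against one.
    def see(robot) -> bool:
        """Percept: are we currently against an object? (hypothetical sensor)"""
        return robot.bumper_pressed()

    def next_state(i: int, per: bool) -> int:
        """next(i, per) = i + 1 if per is true, 0 otherwise."""
        return i + 1 if per else 0

    def choose_action(i: int, stuck_limit: int = 3) -> str:
        """Illustrative action function: turn away from an obstacle, and back
        up if turning has not freed the robot after stuck_limit steps."""
        if i == 0:
            return "forward"
        if i <= stuck_limit:
            return "turn_right"
        return "back_up"

    def control_loop(robot, steps: int = 100) -> None:
        i = 0                                 # 1. start in initial internal state i0
        for _ in range(steps):
            per = see(robot)                  # 2. observe, generate a percept
            i = next_state(i, per)            # 3. update the internal state
            robot.execute(choose_action(i))   # 4. select and perform an action
                                              # 5. go to 2
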
A robot with state

• Now the robot can take more sophisticated action.
• For example, backing up if it cannot turn away from the wall immediately.
• This is an example of a common situation in robotics.
  • Trading memory and computation for sensing.

What is mobile robotics?

• Last time we boiled the challenges of mobile robotics down to:
  • Where am I?
  • Where am I going?
  • How do I get there?
• Now we’ll start talking about how to answer these questions.

The pieces we need

• Locomotion and Kinematics
  How to make the robot move; the tradeoff between manoeuvrability and ease of control.
• Perception
  How to make the robot “see”. Dealing with uncertainty in sensor input and a changing environment. Tradeoff between cost, data and computation.
• Localization and Mapping
  Establish the robot’s position, and an idea of what its environment looks like.
• Planning and navigation
  How the robot can find a route to its goal, and how it can follow the route.

General control architecture

[Figure: general control architecture diagram.]

What makes it (particularly) hard

• Changing environment.
  • Things change.
  • Things get in the way.
• No compact model available.
  • How do you represent this all?
• Many sources of uncertainty.
  • All information comes from sensors, which have errors.
  • The process of extracting useful information from sensors has errors.

The basic operations

• We start with what the robot can “see”.
  There are several forms this might take, but it will depend on:
  • What sensors the robot has.
  • What features can be extracted.

[Figure: an example set of features extracted from sensor data.]

• (These are not a particularly likely set of features.)

Map

• A map then says, for example, how these features sit relative to one another.

Localization

• A robot localizes by identifying features and the position in the map from which it could see them.
• Lanser et al (1996)
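
• As a toy illustration of this idea (not the method of Lanser et al), the Python sketch below finds the map positions from which a set of observed feature offsets would be consistent with a known feature map; all names and numbers are invented.

    # Toy localization: find robot positions from which the observed features
    # would appear at the observed offsets. Assumes a known heading and exact
    # measurements; purely illustrative.
    from typing import Dict, List, Tuple

    Point = Tuple[int, int]

    def localize(feature_map: Dict[str, Point],
                 observation: Dict[str, Point]) -> List[Point]:
        """Return positions consistent with seeing feature f at offset
        observation[f] from the robot."""
        candidates = None
        for name, (dx, dy) in observation.items():
            fx, fy = feature_map[name]
            positions = {(fx - dx, fy - dy)}
            candidates = positions if candidates is None else candidates & positions
        return sorted(candidates or [])

    # A door at (5, 5) seen 2 cells ahead and a corner at (8, 5) seen 5 cells
    # ahead pin the robot down to (3, 5).
    print(localize({"door": (5, 5), "corner": (8, 5)},
                   {"door": (2, 0), "corner": (5, 0)}))   # [(3, 5)]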

Navigation

• Navigation is then a combination of finding a path through the map . . .

• . . . and avoiding things that get in the way.
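
• As a toy version of both steps (finding a path through the map, and avoiding the things in it), the sketch below runs breadth-first search over a small occupancy grid; the grid, coordinates and 4-connected moves are illustrative assumptions.

    # Toy navigation: breadth-first search over an occupancy grid, treating
    # occupied cells as "things that get in the way". Grid is illustrative.
    from collections import deque
    from typing import List, Optional, Tuple

    Cell = Tuple[int, int]

    def find_path(grid: List[List[int]], start: Cell, goal: Cell) -> Optional[List[Cell]]:
        """Return a shortest 4-connected path from start to goal, or None.
        grid[y][x] == 1 marks an occupied (blocked) cell."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            current = frontier.popleft()
            if current == goal:
                path = []
                while current is not None:
                    path.append(current)
                    current = came_from[current]
                return path[::-1]
            x, y = current
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < cols and 0 <= ny < rows \
                        and grid[ny][nx] == 0 and (nx, ny) not in came_from:
                    came_from[(nx, ny)] = current
                    frontier.append((nx, ny))
        return None

    # A 4x4 map with a wall in the middle; the path goes around it.
    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 0]]
    print(find_path(grid, (0, 0), (3, 3)))
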
How do we put these pieces together?

• A system architecture specifies how these pieces fit together.
• Consider these to be refinements of the “agent with state” from above.

[Figure: the agent-with-state architecture again: percepts, see, state, next, action, actions, Environment.]

• Breaking down next and action into additional pieces.
• Adding in new aspects of state I.

Approach: Classical/Deliberative

• Complete modeling
• Function based
• Horizontal decomposition

Approach: Behaviour-based

• Sparse or no modeling
• Behavior based
• Vertical decomposition
• Bottom up

Approach: Hybrid

• A combination of the above.
• Exactly how best to combine them, nobody knows.
• Typical approach is:
  • Let “lower level” pieces be behavior based:
    • Localization
    • Obstacle avoidance
    • Data collection
  • Let more “cognitive” pieces be deliberative:
    • Planning
    • Map building

Summary

• Last time we talked about what the main challenges of mobile robotics are.
• This lecture started to describe how we can meet these challenges.
• We covered the main things we need to be able to autonomously control a robot.
• Along the way we looked at how notions of agency, and what this means for autonomy, can help.
