Course3 - Lecture Video Transcripts
********
Okay, so then in course number three, I'm going to talk about formal verification
and synthesis techniques. Meaning that, okay, let's say we are given an autonomous
system, and we are given a property of interest; we would like to see if the
system satisfies that property of interest. The formal verification algorithm is
going to answer that question. Formal verification follows a model-based design
paradigm: specify, design, verify, refine. So in formal verification we deal with
the specification requirement, what the system is supposed to do. I talk about
specifications in course number two. Of course there is modeling involved; I talk
about modeling in course number one, a description of what the system actually
does. And then there is design involved, the structured creation of artifacts to
modify the system behavior. When I talk about formal synthesis, we actually design
the system in a way that we know it is going to be correct. But let's say somebody
else designs a system, and we are given the designed system and asked to formally
verify whether it is correct. In the verification step, which I will talk about in
course number three, we certify that the system does what it's supposed to do. And
of course there is refinement involved. Why? Because what if the verification step
says no, the system does not satisfy the property of interest? Then we need to go
back, refine our design, and again apply formal verification. This is an iterative
scheme, repeated until the verification procedure says yes, meaning that the
system does satisfy the property of interest.
Okay, so how do we assess system correctness? Formal methods are mathematically
based techniques for both the specification and the verification of software and
hardware systems. One thing which is important, and I want you to understand it,
is that there is a major difference between verification and testing. Testing is a
simulation- or execution-based inspection of the system's correctness. Let's look
at an example and see why formal verification and testing are different. Consider
a system represented by the abstract recursion x(t+1) = f(x(t)). Here f can be a
nonlinear function, x is a vector which changes with time, and time changes
discretely. Assume f(0) = 0. Now we are asked to show that this system is
asymptotically stable with respect to the origin, the equilibrium point. Meaning
that no matter where you put the vector x at time zero, as time elapses, the norm
of x(t) converges to zero as t goes to infinity. We are asked to verify this.
Okay, so there are two ways we can go about this. One way is testing. What if I
simulate this system, meaning that I start this recursive equation at different
initial conditions, I just run the system, I compute the vector x(t) at different
times, and I look at how it evolves in time? Does it get closer to the origin? If
it gets closer and closer to the origin, then I might suspect that yes, the system
might be asymptotically stable, but I still cannot guarantee it. Why is that?
Because the system can start from within a continuous set. There are uncountably
many points the system can start from, so I cannot simulate all of them, agreed?
We cannot simulate the system inside the computer for uncountably many starting
points, so that's not doable. Also, I cannot simulate the system for an infinite
time horizon; I can only simulate the system for a finite time horizon. So these
are the drawbacks of testing: I cannot simulate for uncountably many initial
conditions or starting points, and I cannot simulate the system forever, only for
a finite time horizon. But now let's see how formal verification works in this
case. I can actually formally show this system is asymptotically stable with
respect to the equilibrium point, simply if I can find a function V whose domain
is the same set that the state vector lives in and whose codomain is the
nonnegative real numbers. If I can find a function V with these two properties,
V(x) = 0 if and only if x = 0, and, for any nonzero x, V(f(x)) - V(x) < 0, then
I'm done. Meaning that I was able to formally prove that this system is
asymptotically stable without doing any testing, simply by searching for a
function V with these two properties. So that's the formal way of showing
asymptotic stability, and it also shows the drawbacks of testing. This is the
difference between employing testing to show some property of interest and a
formal way of showing the property of interest. Okay, so let's compare formal
verification versus testing. In formal verification we are able to provide a
mathematical proof of the absence of errors relative to the specification and the
model; testing cannot guarantee the absence of errors. In formal verification we
use formal specifications, for example, linear temporal logic or automata, whose
formulation requires highly qualified engineers, whereas testing requires less
mathematical skill. Lastly, formal verification is computationally expensive and
often fails even in naive applications to real-world systems, whereas testing is
computationally cheap. I want to emphasize this sentence by Dijkstra: "Program
testing can be used to show the presence of bugs, but never to show their
absence." This sentence nicely formulates the fundamental difference between
formal verification and testing. In this specialization, we mainly talk about
formal verification because we are interested in showing the absence of software
errors and bugs in designing autonomous systems which are safety-critical and
life-critical.
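As a toy illustration of the contrast just described, here is a minimal sketch in Python. The scalar system f(x) = x/2 and the candidate V(x) = x² are my own illustrative choices, not from the lecture: for them, the two Lyapunov-style conditions can be verified by pencil and paper, while simulation alone can only suggest convergence.

```python
# A minimal sketch of the Lyapunov-style argument above, for the scalar
# system x(t+1) = f(x(t)) with f(x) = x/2 and candidate V(x) = x**2.
# (f and V are chosen here for illustration; they are not from the lecture.)

def f(x):
    """System dynamics: x(t+1) = x(t) / 2, so f(0) = 0."""
    return x / 2.0

def V(x):
    """Lyapunov candidate: nonnegative, and V(x) = 0 iff x = 0."""
    return x * x

# Condition 1: V(x) = 0 if and only if x = 0.
assert V(0.0) == 0.0

# Condition 2: V(f(x)) - V(x) < 0 for every nonzero x.
# Here V(f(x)) - V(x) = x**2/4 - x**2 = -(3/4) x**2, which is negative
# for all x != 0 -- a pencil-and-paper fact we only spot-check below.
for x in [-10.0, -0.5, 0.1, 3.0, 100.0]:
    assert V(f(x)) - V(x) < 0

# The testing drawback, by contrast: simulating from finitely many starts
# over a finite horizon can only suggest, never prove, stability.
x = 7.0
for _ in range(50):
    x = f(x)
print(abs(x))  # very close to 0, but this alone is not a proof
```

Note that the asserted decrease condition is checked here only on sample points; the actual proof is the closed-form identity in the comment, which holds for every nonzero x.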
So, why should we care about formal verification? Because it provides a formal
proof of system correctness, and it is absolutely necessary for safety-critical
and highly reliable systems. Okay, so let's look at some examples where formal
verification was not used; the incidents that happened actually justify the
benefit of using formal methods in deploying such systems. In the 1990s, a
software error in the baggage handling system postponed the opening of Denver's
airport for nine months, at a loss of 1.1 million US dollars per day. You see,
software errors and bugs can be very costly. In fact, some studies have claimed
that software errors and bugs cost the global economy hundreds of billions of
dollars annually.
So in 1994, a software bug in the floating-point division unit of Intel's Pentium
processor caused a loss of around 500 million US dollars to replace faulty
processors. And as you watch in the video, in 1996 the Ariane 5 rocket got
destroyed, and the main reason was a software defect due to a conversion of a
64-bit floating-point value to a 16-bit signed integer value; it cost around $380
million. Another example: in 2013, the Honda car company had to recall around
350,000 minivans due to a software defect that caused harsh application of the
brakes without any driver action. Again, a great deal of money was lost. So now,
let's look at some positive examples in which formal verification has been used to
deploy an autonomous system in a reliable way. Of course, after this bug in
Intel's Pentium in 1994, Intel started having a team working on formal
verification.
After that event, Intel went on to use formal verification extensively in the
development of CPU architectures. In the development of the Pentium 4, formal
verification was actually used to find a number of bugs that could have led to a
similar recall incident as this one, had they gone undetected. So, in 2012,
Curiosity landed on Mars with verified code, using the SPIN model checker. If you
are interested, this is actually a very good read regarding the use of the SPIN
model checker to formally verify some parts of the software that was used in
Curiosity.
So, this slide tells you what formal verification, or model checking, is in a
nutshell. In formal verification, we are given a system and requirements. The
very first thing we need to do, since the system in general is a complex physical
system, is come up with a model of that system, a mathematical model. And this is
what I discuss in course number 1. Same thing for the requirements: sometimes we
are given requirements in terms of English sentences, so we need to compile a
formal way of describing those requirements. This is what I explain in course
number 2; for example, linear temporal logic or automata are nice tools to
represent properties of interest. So now, let's assume the property of interest
has been described in a formal way and we have a model of our system; we feed
both of them into a model checker. The model checker's job is to check if the
model satisfies the specification and give us the answer, yes or no. But it is
also able to provide a counterexample, a trace of the system showing why the
system does not satisfy the property of interest. Under some assumptions on our
abstract model, the model checking algorithm is able to generate a trace of the
system that violates the property of interest. And that's one of the beautiful
things about formal verification.
Okay, now, let's see what formal synthesis is, which will also be discussed in
course number 3.
So, synthesis is the process of generating the description of a system in terms
of related low-level components from some high-level description of the expected
behavior. Synthesis, if successful, avoids many manual design steps: synthesis
follows describe-and-synthesize, as opposed to specify-design-verify-refine.
Remember, in formal verification, the system has already been designed and we are
asked to formally verify whether the system satisfies the property of interest.
If yes, we are happy. If the answer is no, we need to go back, redesign our
system, and again apply formal verification. Again, if formal verification says
yes, we are happy, but it might still say no. That means we need to go back again
and redesign our system, right? This is an iterative scheme, but formal synthesis
tries to avoid this repetition. It says, okay, tell me what the property of
interest is, and I try to design your system in one shot, such that it satisfies
the property of interest.
So, formal synthesis is a methodology which mechanizes the design step and
integrates the verification step into one unifying procedure. It's a new
model-based design paradigm, also known as correct-by-construction synthesis,
meaning that I am constructing the system in a way that I also guarantee is
correct, so you don't need to apply formal verification anymore. In formal
synthesis, you get correctness when you design the system, so you bypass formal
verification.
So, at a very high level, the conceptual problem around formal synthesis is this:
we are given a system, let's say a car, and we are given a property of interest,
let's say expressed in linear temporal logic. We are asked to compute an
artifact, a software code, also called a controller C, such that if we apply this
artifact to our system, their composition provably satisfies the requirement of
interest, you see? In the context of the car, S, let's say, is our car, and the
specification of interest, let's say, is no collision. So now the synthesis
question is asking: design a controller C, for example an autopilot, such that if
I apply that autopilot algorithm or software in my car, it provably guarantees
that no collision will happen. That's what a synthesis problem is, conceptually.
Spoiler alert: I should tell you this, because you might say formal synthesis is
very interesting, since it removes the repetitive scheme we might face in formal
verification, where, if the answer is no, we need to go and redesign our system
and again apply formal verification. The catch is that, computationally, formal
synthesis can be much more complex than formal verification. That's the reason
some people might still prefer to use formal verification, and go through this
repetitive scheme, rather than use formal synthesis, due to the computational
complexity around formal synthesis. This is something I will talk about in Course
3. And this slide gives us, in a nutshell, what formal synthesis is. Again, we
are given a system, let's say a car, and we are given requirements of interest.
For the system we need to build a mathematical model, right? This mathematical
model can be described, for example, using a finite state machine. We are also
given the requirements, let's say in the form of English sentences, so we need to
generate a formal way of representing the requirements as well. As I said, in
course number two I talk about how to formally describe properties of interest
using linear temporal logic or automata. So now we feed this abstract model and
the specification into the synthesis engine. What formal synthesis tries to
accomplish is to synthesize a controller which enforces the specification over
the abstract model. The outcome of synthesis is a provably correct controller, a
provably correct artifact, that we apply to our abstract model, and we already
know it will satisfy the property of interest. It's already correct by design.
Let's look at a synthesis example. Let's say we are interested in asymptotic
stability of what we call a linear system. What is the specification? The state
of the system should converge to 0 as time goes to infinity, also known as
asymptotic stability. Again, I talk about this property in course number two.
Let's say the model of the system is given by the linear difference equation
x(t+1) = A x(t) + B u(t). You see, this vector x, we call it the state vector,
contains quantities of interest about our system, let's say velocity,
acceleration, position. And this u is a vector containing the actuation signal
that we need to provide to the system so that the system achieves the property of
interest. So now, here is an example of formal synthesis. Here A is an n by n
matrix and B is an n by m matrix. If the controllability matrix constructed from
A and B is full rank, then we can actually design an artifact, or controller, of
the form u = -K x, which is also linear, such that when we apply this policy to
our system, we know the system becomes asymptotically stable. For example, if you
are familiar with MATLAB, you just need to run the pole-placement function in
MATLAB; you then get the controller right away and you know it's actually
correct. So if you apply this controller (there has to be a negative sign in
u = -K x, which I forgot to add on the slide), then you end up with the matrix
A minus BK, and you can check that the eigenvalues of this matrix are inside the
unit circle. Because eigenvalues can be complex, being inside the unit circle
means the magnitudes of the eigenvalues are less than 1. In this case, you get
asymptotic stability. So the question is, what do we do if this controllability
matrix is not full rank? Then you need to go and modify the system design,
because by modifying the system, you are able to modify the matrices A and B, and
hopefully you can achieve the condition that the controllability matrix becomes
full rank. If it's full rank, then you know how to design the controller in a
formal way. This is just an example of what I mean by formal synthesis: being
able to design an artifact or controller, namely, in this case, the matrix K,
such that if I compute the actuation signal u based on this equation, then I know
the system, equipped with that u, is asymptotically stable. Here x(t) is your
measurement, right? You measure the states of the system at time t, you feed them
into the computer, the computer multiplies K by that vector, feeds the input to
the actuator of the system, and this loop just continues at every time step. In
this case, you achieve asymptotic stability, and you know it is formally correct,
meaning that the input calculated using this equation formally ensures asymptotic
stability of the system.
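To make the recipe concrete, here is a hand-sized sketch in plain Python. The matrices A, B and the gain K below are illustrative choices, not from the lecture, and K was computed by pencil and paper for this particular A and B (a deadbeat design placing both eigenvalues of A - BK at 0) rather than with MATLAB's pole-placement routine.

```python
# A toy instance of the synthesis recipe above, with hand-sized matrices.
# System: x(t+1) = A x(t) + B u(t), controller: u(t) = -K x(t).

A = [[1.0, 1.0],
     [0.0, 1.0]]          # 2 x 2, a discrete-time double integrator
B = [[0.0],
     [1.0]]               # 2 x 1

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Controllability matrix [B, AB]; full rank here since det = 0*1 - 1*1 = -1.
AB = matvec(A, [B[0][0], B[1][0]])
ctrb = [[B[0][0], AB[0]], [B[1][0], AB[1]]]
det = ctrb[0][0] * ctrb[1][1] - ctrb[0][1] * ctrb[1][0]
assert det != 0, "not controllable; redesign A, B"

# K chosen by hand so that A - B K has both eigenvalues at 0 ("deadbeat"):
K = [1.0, 2.0]

def step(x):
    u = -(K[0] * x[0] + K[1] * x[1])     # u = -K x  (note the minus sign)
    Ax = matvec(A, x)
    return [Ax[0] + B[0][0] * u, Ax[1] + B[1][0] * u]

x = [5.0, -3.0]          # arbitrary initial state
for _ in range(2):       # eigenvalues at 0 => the state dies out in 2 steps
    x = step(x)
print(x)                 # [0.0, 0.0]
```

With eigenvalues strictly inside (but not at) the unit circle, the same loop would converge asymptotically instead of in finitely many steps.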
Okay, again, this is the outline of this specialization. Course number one is
mainly about modeling: difference equations, hybrid automata, timed automata,
labeled transition systems. I talk about what I mean by the behavior of a system,
and what I mean by serial or feedback composition of systems. All these topics
will be studied in course number one. In course number two, I talk about
low-level and high-level specifications; the low level is mainly stability, and
at the high level I talk about invariance, reachability, and linear temporal
logic formulas. I talk about properties expressed by automata, for example,
nondeterministic Büchi automata and nondeterministic finite automata. All these
details will be discussed in course number two. Finally, in course number 3, I
talk about formal verification and synthesis. How can we analyze a reachability
property? How can we do formal verification if we are given a linear temporal
logic formula? And I talk about topics around what we call barrier certificates;
these are tools that allow us, for example, in the context of barriers, to show
that the system is safe with a mathematical guarantee. Then, finally, in course
number 3, I conclude by talking about formal synthesis, and in particular I use
what are called abstraction-based techniques. What are those techniques that can
be used to synthesize artifacts, to synthesize software, such that, if I apply
that software to the system, the system satisfies the property of interest?
Okay, this is the big picture. What I want to emphasize is that this red part is
the part that I am talking about in this specialization. In this specialization,
I'm not going to talk about implementation and code generation. There are other
courses out there that talk about code generation, scheduling, real-time
analysis, and all these things; those are not the focus of this specialization.
Here, we mainly talk about modeling, about requirements of interest, and then
about how to formally verify whether the model satisfies the requirements of
interest, or how to design artifacts for the model such that, when I apply the
artifacts to the model, their interaction satisfies the property of interest. So
we do not cover actuator and sensor models. I'm not talking about perception in
this class, and I don't talk about embedded processor design; all these things
have their own dedicated courses that you can pursue to understand perception and
image processing. I don't talk about real-time analysis and scheduling; there are
dedicated courses out there which talk just about real-time analysis and
scheduling, like response-time analysis, worst-case execution time, multitasking,
all these things. I don't talk about concurrency, like models of concurrent
programs or synchronous reactive languages. And I will not talk about embedded
code generation in this specialization. These are out of the scope of this
specialization. Here, we mainly ask: how do we model autonomous systems? How do
we model properties of interest for autonomous systems? How do we verify, in a
formal way, that the model satisfies the property of interest? Or how are we able
to design artifacts for the model such that the composition of the artifact and
the model satisfies the property of interest? Okay, why should you be excited
about this specialization? Formal methods for autonomous systems with mixed
continuous and discrete dynamics is a hot topic in industry. Many companies,
including Toyota, Waymo, Zoox, GM, Nuro, Denso, and Bosch, are actually hiring
people with expertise in formal methods for hybrid systems. Formal verification
and controller synthesis are also subjects of current research in academia, with
lots of opportunities to contribute in terms of an undergraduate or graduate
thesis. And the other good news is that there are mature tools for formal
verification of software; some of the names, for example, are SPIN and
CPAchecker. These are all tools available online that we can leverage to formally
verify software. There are academic tools for formal verification of hybrid
systems; many of them actually do reachability analysis. PHAVer, SpaceEx, Flow*:
these tools are all available, and we can use them to formally verify hybrid
systems. There are academic tools for synthesis from LTL specifications; these
are some of the examples. And eventually there are some proof-of-concept tools
for formal synthesis of continuous-space systems. Some of those tools' names
include SCOTS, pFaces, LTLMoP, TuLiP, AMYTISS, and OmegaThreads. SCOTS, pFaces,
AMYTISS, and OmegaThreads have been developed in my group; TuLiP was developed at
Caltech and LTLMoP at Cornell University. Okay, I just did a quick search the
other day to see what open positions out there ask students to have knowledge of
formal methods, and right away, with a simple search, I was able to come up with
many openings. You can see here I put some of the examples from Cruise, Nuro,
Zoox, and Tesla; they're asking for people who have expertise in formal methods.
And that's the topic which I'm covering in this specialization.
So after this specialization, you will be able to create a formal model of an
autonomous system which is actually useful for the verification or synthesis
task. You will be able to express various system properties in a rigorous and
precise manner, for example, using linear temporal logic formulas. You will be
able to understand the basic algorithms and notions underlying verification for
stability, invariance, reachability, or even more complex linear temporal logic
properties. You will be able to understand the basic algorithms underlying
controller synthesis for stability, invariance, and reachability. You will be
able to show the correctness of a closed-loop system, at least for some toy
examples. And you will be able to use the learned topics and leverage them to
either verify or synthesize controllers for some autonomous systems using the
existing tools which I explained in the previous slide.
Okay, as you know, there is no prerequisite for this specialization. But what I
expect is that you have some basic knowledge of linear algebra and differential
equations. It would be nice if you can think abstractly, because the number of
real-world examples is kept at a minimum. And it's also good if you are able to
read math. For example, I'm going to use notations like f : X × U → X; this means
that f is a function, its domain is the Cartesian product of X and U, and its
codomain is the set X. And the notation K ⊆ X means that K is a subset of X.
That's what I expect: that you are able to read these types of mathematical
expressions in this class, to be able to follow many of the topics which I'm
covering in this specialization.
In the previous course, I talked about the specifications of interest for
autonomous systems. If you recall, I talked about regular properties and
Omega-regular properties, and then I started talking about which types of
machines can be used to recognize regular safety properties. Remember, regular
safety properties can be described, or characterized, using automata called
nondeterministic finite automata. Then, regarding Omega-regular properties, I
mentioned that they can be recognized using a machine called a nondeterministic
Büchi automaton. I also talked about linear-time properties, in particular linear
temporal logic. If you recall, I mentioned that linear temporal logic formulas
also describe properties of interest, which can likewise be recognized using
nondeterministic Büchi automata.
Now let's delve into our last course. We already know how to build a model of an
autonomous system based on course number 1, and we know how to describe
properties of interest and requirements of interest for autonomous systems using
the different classes of logics I explained in course number 2. Now the question
is: I would like to verify whether the system satisfies a property of interest,
or to design a controller, another system C, whose feedback composition with the
original system satisfies the property of interest. Let's first look into
verification for finite systems.
I will mention how we can check regular safety properties if you have a finite
system, and how we can check Omega-regular properties when you have a finite
system. Let's look first into checking regular safety properties, and let's have
a recap. Remember, I talked about linear-time properties, LT for short. An LT
property over a set of atomic propositions AP is a language P of infinite words
over the alphabet. What is the alphabet? The power set of the atomic
propositions. Any language of infinite words over this alphabet is called an LT
property. Now, if I am given a simple system S without any blocking states, then
we say the system satisfies the LT property P under an appropriate labeling map.
What's the role of the labeling map? It takes a state and maps it to symbols of
the alphabet. Remember, symbols of the alphabet are subsets of AP; that's the
reason I use two arrows, which means a set-valued map. The system satisfies P if
and only if, when we apply the labeling map L over the behavior of S, the traces
generated by applying L over the behavior of S are a subset of P. Remember that
applying the labeling map L over the behavior of S means applying L over all
those infinite state sequences x for which the pair of u and x belongs to the
behavior of S. Now, our job is to verify a given regular safety property; let's
say P is a regular safety property.
Assume also that the system is finite. I would like to check whether a finite
system S satisfies a regular safety property, but in a purely mechanical,
systematic way. Let's have a recap of regular safety properties.
P is called regular safety if and only if the bad-prefix set of P is regular.
Remember, what was the bad-prefix set of P? The set of bad prefixes of P. If this
set can be recognized using a nondeterministic finite automaton, then P is called
a regular safety property.
A safety property P is called regular if the set of all bad prefixes of P can be
recognized using a nondeterministic finite automaton; that is, there is a finite
automaton, which can be nondeterministic, whose language is equal to the set of
all bad prefixes of P. Now, how can I leverage a systematic approach to verify
whether a finite system satisfies a regular safety property in a mechanical and
systematic manner?
Let's have an example. What is our regular safety property? It says: a and not b
never holds twice in a row. That's our safety property. Of course, what are all
the bad prefixes? Any finite-length word in which a-and-not-b appears twice in a
row is considered a bad prefix of P. Can I construct an NFA which recognizes
those bad prefixes? Yes, look at this NFA. For finitely many times, I do not care
what the symbols in the word are; you see, I put true. But then after that, I see
a and not b, and right after that, I again see a and not b. Once a-and-not-b
appears twice in a row in a finite word, that word is already a bad prefix of P,
and after that, I do not care. The language of this NFA captures the set of all
bad prefixes of P.
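A minimal sketch of this bad-prefix NFA in Python, with the states named q0, q1, q2 for illustration (the lecture's slide may use different names):

```python
# A sketch of the bad-prefix NFA described above, for the safety property
# "a and not b never holds twice in a row". Symbols are subsets of
# AP = {'a', 'b'}; the NFA accepts exactly the finite words containing
# two consecutive symbols satisfying (a and not b).

def a_and_not_b(sym):
    return 'a' in sym and 'b' not in sym

def delta(q, sym):
    """Nondeterministic transition relation of the NFA."""
    succ = set()
    if q == 'q0':
        succ.add('q0')                     # self-loop labeled "true"
        if a_and_not_b(sym):
            succ.add('q1')                 # first a & !b seen
    elif q == 'q1':
        if a_and_not_b(sym):
            succ.add('q2')                 # second a & !b in a row
    elif q == 'q2':
        succ.add('q2')                     # accepting sink, "true" self-loop
    return succ

def accepts(word):
    """Subset-construction simulation: is this finite word a bad prefix?"""
    current = {'q0'}
    for sym in word:
        current = set().union(*(delta(q, sym) for q in current)) if current else set()
    return 'q2' in current

# {'a'} twice in a row is a bad prefix; interleaving {'b'} breaks the pair.
print(accepts([{'a'}, {'a'}]))          # True
print(accepts([{'a'}, {'b'}, {'a'}]))   # False
```

Note the nondeterminism: the run that sits in q0 forever coexists with runs that guess where the offending pair starts, which is exactly why a self-loop labeled "true" suffices at q0.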
Now, we are interested in verifying regular safety properties. We are given a
finite simple system without any blocking states, and we are given a regular
safety property P. The reason we call it regular is that there is an NFA which
accepts the set of bad prefixes of P. Question: we would like to check, does
system S satisfy P? The method I'm going to present here relies on an analogy
between these two bullet points: checking language inclusion for NFAs is
equivalent to model checking for regular safety properties. But rather than
checking language inclusion, I'm going to resort to the NFA representing the set
of bad prefixes of P, as well as my finite simple system S, take their product,
and reason about the product of the system S and the NFA. I will explain in
detail how the product is constructed in the next lecture. In a nutshell, this
diagram shows what we mean by verifying a regular safety property. We are given a
finite simple system S and a regular safety property P. What we do first is
construct an NFA accepting the set of bad prefixes of P. Great, I have an NFA,
and it's finite; I also have a simple system, also finite.
Checking whether the system satisfies the original regular safety property P then
boils down to an invariant check on the product between the system S and the NFA
accepting all the bad prefixes of P. When you take the product, if the final
states in the product never get visited, that implies the system S does satisfy
the original property P. Any path in the product that reaches an accepting state
provides a counterexample: when you project it onto the behavior of the system,
that behavior already shows why the system violates property P. You see, the nice
thing about model checking is that not only are we able to say whether the system
satisfies P, yes or no; even if it does not satisfy P, we can actually construct
the path, or the behavior, in the system S that violates the original regular
safety property P.
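The invariant check just described can be sketched as a plain reachability search over the product. The tiny product graph below is a made-up placeholder, not the one on the slide; the point is only the mechanics: breadth-first search from the initial states, returning a counterexample path if an accepting state is reachable.

```python
# A minimal sketch of the invariant check on the (already built) product:
# no path from an initial state to an accepting state => S satisfies P;
# a path found is a counterexample trace. Graph below is illustrative.
from collections import deque

def find_counterexample(initial, successors, accepting):
    """BFS over the product; returns a path to an accepting state, or None."""
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        state = queue.popleft()
        if state in accepting:               # a bad prefix is reachable
            path = [state]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))      # counterexample trace
        for nxt in successors.get(state, []):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                              # accepting unreachable: S |= P

# Tiny hypothetical product graph: (system state, NFA state) pairs.
succ = {
    ('x0', 'q0'): [('x1', 'q0'), ('x2', 'q1')],
    ('x2', 'q1'): [('x0', 'q2')],            # ('x0', 'q2') is accepting
    ('x1', 'q0'): [('x1', 'q0')],
}
trace = find_counterexample([('x0', 'q0')], succ, {('x0', 'q2')})
print(trace)   # [('x0', 'q0'), ('x2', 'q1'), ('x0', 'q2')]
```

Projecting the first components of the returned trace (x0, x2, x0, ...) gives the system behavior that witnesses the violation, exactly as described above.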
In the previous lecture, I mentioned that in order to verify whether a finite
system satisfies a regular safety property systematically, we need to take the
product of the system S and the NFA representing the set of bad prefixes of the
original property, and then, on the product, check whether an accepting state can
be reached from the initial states of the product. Now, the question is, how do
we take the product of a system and an NFA? Here, in this slide, I explain how we
take the product of a finite system S and an NFA A. We are given a simple system
S together with the labeling map L that maps each state to a subset of the atomic
propositions; that's the reason I use two arrows, meaning L is a set-valued map.
Our alphabet in this case is the power set of the atomic propositions, and we are
also given an NFA A defined over the same alphabet. Now, the product of S and the
NFA A is itself a system. Let's see what the ingredients of this new system
resulting from the product are. The state set of the product is the product of
the state set of our original system S and the state set of the NFA. The input
set is the same as the input set of the system. Now I need to tell you what the
set of initial states is and how the state transition map of this product system
is defined. Here is the definition of the state transition map: if I start from a
state in the product, namely the pair (x, q), under the input u, I go to a new
pair (x', q') if and only if, in the original system, I go from state x to x'
under u, and in the NFA, I go from q to q' under the label of x'. You see, under
the label of x', not x. I repeat again: starting from a state in the product,
namely the pair (x, q), under input u, I go to a new state pair (x', q') if and
only if, in the original system S, starting from x under u, I go to x', and
inside the NFA, starting from state q under the label of x', I go to q'. What
about the initial states of the product system?
The set initial states for the product is a set of all pairs, x0,q in which x0 is
an initial state of the original system S. Q is the successor if you start from any
initial states in the NFA and under the label of x0. You see? Q is the successor in
the NFA, starting from any initial states in the NFA and reached under the label of
x0. By the way, I abuse notation here. I apply Delta over a set rather than a
point. this is the definition of Delta of Q hat, A. You take the union of Delta of
Q,A in which Q belongs to Q hat. I abuse notation here slightly, and what this mean
exactly the Delta of a set of a state rather than a single state. In the next
lecture, what I'm going to do, I will actually provide an example of taking the
product of a system and an NFA.
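This product construction can be sketched in a few lines of code. Below is a minimal Python sketch, assuming the system and the NFA are given as plain dictionaries; the names (`build_product`, `sys_trans`, `nfa_delta`, and so on) are my own, not notation from the lecture.

```python
def build_product(sys_trans, sys_init, labels, nfa_delta, nfa_init):
    """Product of a finite system S and an NFA A (illustrative sketch).

    sys_trans: dict (x, u) -> set of successor states x'
    labels:    dict x -> frozenset of atomic propositions, L(x)
    nfa_delta: dict (q, label) -> set of successor NFA states q'
    """
    # Initial states: pairs (x0, q) where q is reached from some NFA
    # initial state under the label of x0 (the delta-over-a-set notation).
    init = {(x0, q)
            for x0 in sys_init
            for q0 in nfa_init
            for q in nfa_delta.get((q0, labels[x0]), set())}

    nfa_states = {q for (q, _) in nfa_delta}
    trans = {}
    # (x, q) --u--> (x', q')  iff  x --u--> x' in S
    # and q --L(x')--> q' in the NFA (note: label of x', not x).
    for (x, u), x_succs in sys_trans.items():
        for q in nfa_states:
            pairs = {(x2, q2)
                     for x2 in x_succs
                     for q2 in nfa_delta.get((q, labels[x2]), set())}
            if pairs:
                trans[((x, q), u)] = pairs
    return init, trans
```

Feeding it the running example from the next lecture reproduces by machine exactly the initial pairs and transitions computed there by hand.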
0:02
Let's look at an example of the product of a system and an NFA. Here, we are given
a simple system, S. It contains four states. What you see written underneath each
state is actually the label of that state. If you're in state 10, its label is y.
If you're in 01, its label is also y. If you're in 00 or 11, the labels are empty.
The system has two initial states, 10 and 00, and you can see the transitions among
them. This is the NFA; remember, what is the original regular safety property? It
says never should two y in a row appear in any word. Of course, the bad prefixes
are the set of all finite words in which we see two y appearing in a row. Now, I
need to draw an NFA that accepts the set of all finite words in which we see two y
in a row. Here is the corresponding NFA. Look, if you're in state q_0, finitely
many times no y, and then you take a transition under y, you go to q_1. Under not
y, the negation of y, you can go back to q_0. But in order to get accepted, you
have to reach the accepting state; we have to visit the accepting state. For that,
you have to take another y. Then you will have seen two y consecutively in a finite
word accepted by this NFA. This NFA admits the set of all bad prefixes of the
original regular safety property. What is the original regular safety property?
Never two y in a row. Now let's take the product of this simple system and this
NFA. The very first thing is, what is the set of initial states in the product?
Remember, it says here: the set of initial states consists of pairs of an initial
state of the system and a q that can be reached from the initial states of the NFA
under the label of that initial state. Here, the initial state of the NFA is q_0.
Very good. How many initial states do I have in the simple system? Two. One has
label empty, one has label y. Great. Let's look at 10. Its label is y. If I'm in
q_0, under y, I go to what? I go to q_1. That means (10, q_1) is an initial state
of the product here.
Play video starting at :3:8 and follow transcript3:08
What about this one, 00? What is its label? Empty. If you're in q_0, under empty,
where do I go? Empty means not y, so I go to q_0 itself. Then (00, q_0) is also an
initial state in the product. Great. We have two initial states in the product, and
we found both of them. Now, let's start from these two initial states and start
adding the transitions. Great. Look, if I'm in state 10, I have a transition to 01,
and when you go to 01, what is its label? y. Great. If you're in state q_1, under
y, where do I go? To q_f. You see, from 10 I go to 01, and from q_1, under the
label of 01, I go to q_f. Then I have a transition from (10, q_1) to (01, q_f).
Great. But I also have a transition from 10 to 11. You see? What is the label of
11? Empty. Now, come here. If you're in state q_1, under empty, where do I go? I go
to q_0; empty is not y. Great. Then I also have a transition from (10, q_1) to
(11, q_0), because I go from 10 to 11, the label of 11 is empty, and from q_1 under
empty, I go to q_0. Great. Now let's come and talk about the other initial
condition, 00. If I'm in 00, I have a self-loop to 00 and the label is empty.
Great. If you're in q_0 under empty, where do I go? I go to q_0. Then (00, q_0)
also has a self-loop to itself. Why? Because from 00 I go to 00, and from q_0 under
label empty, I go to q_0. Great. But I also have a transition to 10. Look at this
transition. Great. What is the label of 10? y. Now come here. Look at state q_0:
under y, where do I go? q_1. Great. Then I have a transition from (00, q_0) to
(10, q_1), which is this transition. I'm done with this state, so now let's come
back to this state, (11, q_0). If you're in 11, there is a transition to 00, and
the label of 00 is empty. Great. If you're in q_0 under empty, you go to q_0. Then
you go to (00, q_0). I also have a transition to 10, and the label of 10 is y. Now
look at state q_0: under y, I go to q_1. Then I have a transition to (10, q_1) as
well.
Now, let's go toward this state, (01, q_f). From state 01, I have a self-loop to 01,
and the label of 01 is y. If I'm in q_f, under y, I go to q_f. Then I have a
self-loop from (01, q_f) to itself. I also have a transition from 01 to 11, and the
label of 11 is empty. If I'm in q_f, under empty, I have a self-loop to q_f. Great.
Then I go to (11, q_f). That's another transition. Now, let's look at the state 11.
I have a transition to 00 and a transition to 10. Let's look at the transition to
00. The label of 00 is empty. If I'm in q_f under empty, I go to q_f. That means
from (11, q_f), I go to a new state, (00, q_f). I also have a transition to 10, and
the label of 10 is y. If I'm in q_f under y, I go to q_f. Then from (11, q_f), I go
to (10, q_f) here. Now I am done with this state. I continue in a similar way for
the remaining two states as well, and this figure shows all possible transitions in
the product. Now the question of interest is, how am I supposed to use this product
to verify whether the simple system satisfies the property of interest? Without
looking at the product, given that this system is very simple, let's check if the
system does satisfy the property. The answer is no. Why is that? Because if I start
from this initial state, 10, I already see y. Then if I take this transition to 01,
I see another y. I already have a trace generated by my system in which two y
appear in a row. I already know that the system doesn't satisfy the original safety
property of interest, which was asking for never two y in a row. Now, in this case,
it was a very simple system; with our own eyes, we could see it. But in general,
your system can have billions or trillions of states. It's not easy. That's the
reason we have to resort to the product. Now let's look at the product. The product
says: can you start from an initial state in the product and reach a state
containing the accepting state q_f? The answer is yes, here. If I start in this
state, (10, q_1), in one transition I go to q_f. Since I can reach the accepting
state, that means the system does not satisfy the original property of interest,
and for the counterexample, the trace of the system that violates the property, you
can actually look, based on the product graph, at the path in the product graph
that reaches the accepting state. Here is one: 10 to 01. That already provides a
word, {y}{y}, which is a bad prefix for the original property, because there is y
here and then another y. I end up with a finite word which is a bad prefix for the
original property. The product does not satisfy "never reach an accepting state",
because, in fact, I do reach the accepting state. Hence, the original system does
not satisfy the original regular safety property.
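The reachability check just described can be sketched as a breadth-first search over the product graph. This is an illustration with names of my own choosing (`find_bad_prefix`, `successors`), not the lecture's notation: it returns the projected system run forming a counterexample, or None when no accepting state is reachable.

```python
from collections import deque

def find_bad_prefix(init_pairs, successors, is_accepting):
    """BFS over the product graph. If a pair whose NFA component is
    accepting is reachable, walk the parent pointers back and keep
    only the system component: that projected run is a counterexample."""
    parent = {p: None for p in init_pairs}
    queue = deque(init_pairs)
    while queue:
        pair = queue.popleft()
        if is_accepting(pair[1]):
            run = []
            while pair is not None:
                run.append(pair[0])        # project onto the system state
                pair = parent[pair]
            return run[::-1]               # counterexample run in S
        for nxt in successors(pair):
            if nxt not in parent:          # parent map doubles as visited set
                parent[nxt] = pair
                queue.append(nxt)
    return None  # accepting states unreachable: S satisfies the property
```

On the product built in this lecture, starting from (10, q_1) and (00, q_0), this returns the run 10, 01, whose trace {y}{y} is exactly the bad prefix found by eye.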
Course3Week2Lecture4 - An Example
===================================
0:05
Let's look at this example and try to solve it. We are given the following simple
system, as depicted here, and we are given the regular safety property. It says:
always, if a is valid and b and not c was valid somewhere before, then a and b do
not hold thereafter, at least until c holds. There are two parts to this question.
The first part says: define an NFA that admits the set of bad prefixes of P_safe,
and then decide whether the system satisfies P_safe by taking the product. If it
does not satisfy it, provide a counterexample. First of all, let me change the PDF.
Yes. Look at these examples here. They provide you some examples to understand this
property better.
Play video starting at :1:16 and follow transcript1:16
I mean, for example,
Play video starting at :1:22 and follow transcript1:22
{a}, {a, b}, and then {a, b, c}. This actually belongs to the prefixes of P_safe.
The other example is this one: {a, b}, and then {b, c}. This also belongs to it.
The property says: always, if a is valid and b and not c was valid somewhere
before, then a and b do not hold thereafter, at least until c holds. The first
finite word is not a bad prefix, because, you see, a is valid here in this
location, and b and not c was valid somewhere before, which is in this location.
Then a and b do not hold thereafter, at least until c holds: c holds here, and then
a and b also hold. So this is not a bad prefix. What about the second one? In the
second one, let's see: a is valid in the first instant, and was b and not c valid
somewhere before? No. Then we don't even need to check the
implication. Now let's actually write some bad prefixes in this case, for you to
understand: {a, c}, {a}, {a, b, c}. This actually belongs to the bad prefixes of
P_safe. Why is that? Because, look: a is valid, b and not c was valid somewhere
before, so a and b should not hold thereafter at least until c holds, which is not
true. Why? Because a is valid here, b and not c was valid before, but then, before
c got valid, a was valid again. That actually is a bad prefix. Another example is
this one: a, {a, c}, {a, c}, a... no wait, this is b actually; let me fix this.
This is also a bad prefix. Why is that? I tell you. Because a is valid here, and b
and not c was valid somewhere... no, I made a mistake, so here, this one: a is
valid here, b and not c was valid before, so a and b should not hold thereafter at
least until c holds. But look, a holds here before c holds.
Play video starting at :4:50 and follow transcript4:50
Now, let's see. The first part of the question was asking: define an NFA whose
language is equal to the set of bad prefixes of the property. This is the NFA I
plotted here. In q_0, under not b and not c, we have a self-loop; under b and not
c, we go to q_1, which has a self-loop under not a. We also have states q_2 and
q_3. Under a, we go from q_1 to q_2; under c and not a, we come back to q_1. We go
to q_3 under (a or b) and not c. Then we have a self-loop on q_2 under (not a and
not b and not c) or (a and c). This NFA, in fact, admits the set of bad prefixes of
the property. Why is that? Look, when we start from q_0,
Play video starting at :6:37 and follow transcript6:37
so we have a self-loop under not b and not c. From q_0 to q_1, we have b and not c.
In q_1, we have a self-loop under not a. But then we take the transition under a,
and we go to q_2: a happens, and before that, we had b and not c, so we're good.
From q_2, we can go back to q_1 under c and not a. Now, in order to go to the
accepting state: you see, we saw a happen here; a is here, and b and not c was
before. Then the property said a and b do not hold thereafter at least until c
holds.
Play video starting at :7:39 and follow transcript7:39
Look at the transition going to the accepting state: a or b holds, and c doesn't
hold. That means we see a or b before we see c. Any word accepted by this NFA is a
bad prefix for the original safety property. Let me erase those. Now what I'm going
to do is take the product of this NFA with the system, in order to verify whether
the system satisfies the property. Here, first of all, what is the initial state in
this case? Remember, I said the initial state is the pair of an initial state of
the system and any successor in the NFA reached from the NFA's initial states under
the label of the system's initial state. What is the initial state of the system?
It's s_0. What is the label of s_0? {a, b}. In the NFA, under {a, b}, which means b
and not c, I go from q_0 to q_1. That means (s_0, q_1) is the initial state in the
product. Now, if I am in s_0, I have transitions: I can go to s_1, or I can go to
s_3. I can go to s_1 under Beta, and I can go to s_3 under Alpha. In fact, when I
go to s_1, what is the label of s_1? {a, b, c}. When I go to s_3, what is the label
of s_3? {a, c}. If I look at the NFA, if I am in q_1, under the label of s_3 I go
to q_2, and under the label of s_1 I also go to q_2. So then I can write: I have
these transitions. Under Alpha, I go to the state (s_3, q_2), and under Beta, I go
to (s_1, q_2). Now, if I am in s_3, look here: from state s_3, I have a transition
to s_1, and the label of s_1 is {a, b, c}. In state q_2, I have a self-loop under
{a, b, c} as well, so then I go here under Gamma, and I continue the same way.
Look, from state s_1, I have a transition to s_4. What is the label of s_4?
{a, c}. I do go from q_2 to q_2 under {a, c}, so then I go from here to here under
Gamma. Similarly, I can go from this state to the one at the top under Alpha, and
eventually I have a transition to (s_5, q_3). Is that true? Yes. Look, from state
s_4, I go to s_5 under Beta. What is the label of s_5? {a, b}. And I do go from q_2
to q_3 under {a, b}. And this is the product.
Play video starting at :11:54 and follow transcript11:54
Does the system satisfy the property of interest? The answer is no. Why? Because
the product contains the accepting state, and the accepting state is reachable.
Play video starting at :12:11 and follow transcript12:11
So we conclude. The question was asking: does S satisfy P_safe? No, S does not
satisfy it. Let me write it here: S does not satisfy P_safe. I was also asked for a
counterexample. Here is the counterexample. The counterexample is given by the
following initial path fragment in the product, starting from (s_0, q_1): any path
that reaches the accepting state, through q_1, then q_2, q_2, q_2, and finally q_3.
Now let's project it onto the state components. Then what am I going to get? If I
project it, I'm going to get this state sequence: s_0, s_3, s_1, s_4, s_5. This
finite state run is the one causing the counterexample. Let's look at the trace of
it, the word corresponding to this: {a, b}, {a, c}, {a, b, c}, {a, c}, {a, b}. So
obviously, you see, the trace of this run belongs to the bad prefixes of P_safe. So
we were able, systematically, by taking the product of the system and the NFA, to
verify that the system does not satisfy the original property. The product
structure also allows you to find the counterexample easily, because we just need
to find a path on the product graph that reaches the accepting states and then
project it over the states of the system; that actually is a finite state path in
the original system that generates a word inside the bad prefixes of the original
property.
0:04
In the previous lectures, we studied how we can verify that a system satisfies a
regular safety property. The way we did it: we took the product of the system and
the NFA representing the set of bad prefixes of the original property. Then, on the
product graph, we checked whether an accepting state of the NFA is reachable. If
that was the case, it meant the system does not satisfy the original regular safety
property; otherwise, it does satisfy it. Now, today, I'm going to talk about how we
can verify omega-regular properties. We are given a simple system without any
blocking states, together with the labeling map L, which maps each state to a
symbol of the alphabet, which is a subset of the atomic propositions. We are given
an omega-regular property P, and we're asked to check: does the system satisfy the
omega-regular property, yes or no? The way we're going to tackle this problem: we
construct an NBA A for the set of bad behaviors, meaning that the language
characterized by A is actually the complement of our original omega-regular
property P. You see, the language of the NBA is the complement of the omega-regular
property; these are the bad behaviors. Now what we need to check is this: take the
set of all words generated by the system, which is extracted by simply applying the
labeling map over the behaviors of the system, and intersect it with the language
of A, which contains all the bad words or bad behaviors. If this intersection is
empty, that implies the system does satisfy the original property P. But we cannot
construct these sets directly: this set and this set can potentially be infinite.
We cannot just construct those sets in a computer and then take the intersection;
that's not doable. The way we're going to do it is by resorting to the product of
the system and the NBA A. We build the product of the system and the NBA, and we
check whether, on the product, the acceptance condition of A is violated. Why that?
Because we don't want the system to match the behavior of the NBA A, since the NBA
A was constructed to admit the set of bad behaviors. If the product does not
satisfy the acceptance condition of A, that means the original system satisfies the
original property P. What is the acceptance condition of A? Because A is an NBA, an
accepting state needs to be visited infinitely often in order for a word to be
accepted by A. But we don't want words to get accepted. That means, in the product,
we have to check the negation of that. The negation of "infinitely often an
accepting state" is equivalent to saying "eventually forever, no accepting state",
or "eventually always, no accepting state". We need to check this. I'm going to
delve into this more in the next lectures: how we can actually check this
systematically.
Play video starting at :4:24 and follow transcript4:24
In order to check omega-regular properties, we need to check whether the product of
the system and the NBA representing the bad behaviors satisfies "eventually always
no accepting states". This requires techniques for checking what is called a
persistence property on a finite system. What is the definition of a persistence
property? Let me elaborate here. Let P be a linear-time property. P is called a
persistence property if there exists a propositional formula Phi over the set of
atomic propositions AP such that P is the set of all infinite words over the
alphabet in which, for all but finitely many indices i, A_i satisfies Phi. You see?
P is called a persistence property if there exists a propositional formula Phi over
the set of atomic propositions such that P is the set of all infinite words A_0,
A_1, A_2, all the way to infinity, over our corresponding alphabet, such that for
all but finitely many of those indices, A_i satisfies Phi; A_i satisfies Phi for
all i but finitely many of them. Here are other ways of saying it: from some moment
on, Phi is true always; or, eventually, Phi is forever true. These are called
persistence properties.
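Written out symbolically (my own transcription of the definition on the slide, with Phi the propositional formula), a persistence property has the shape:

```latex
P_{\mathrm{pers}} \;=\; \Bigl\{\, A_0 A_1 A_2 \cdots \in \bigl(2^{AP}\bigr)^{\omega}
  \;\Bigm|\; A_i \models \Phi \ \text{for all but finitely many } i \,\Bigr\}
```

which is exactly the "eventually always Phi" shape described above.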
Play video starting at :6:40 and follow transcript6:40
In a nutshell, this diagram illustrates how we can verify a finite system against
omega-regular properties. We are given a finite simple system S, and we are given
an omega-regular property P. First, we need to construct an NBA A for the set of
bad behaviors; that is, we need to build an NBA whose language is the complement of
the language of P. Then, we need to take the product of the system and the NBA.
Then we need to apply our model checking over the product graph. Namely, on the
product, we have to check a persistence property. If the persistence property does
hold, that means the original system satisfies the original omega-regular property;
otherwise, it doesn't, and we are also able to come up with a counterexample, which
in this case is going to be an infinite state run that violates the original
property. You see, on the product, we have to do the persistence checking: we take
the product of the system S and the NBA A, and we check whether the product
satisfies "eventually forever no accepting state", that is, from some moment on, no
accepting state. This is called a persistence property. Later on, in a future
lecture, I will explain how we can verify a persistence property over the product
more systematically.
0:03
Now the question is, how do we take the product of a system and an NBA? Actually,
the product construction is exactly the same as taking the product of the system
and an NFA. Consider we are given a simple system S, together with the labeling map
L that maps each state to a subset of the atomic propositions, and we are given an
NBA, which is also defined over the same alphabet, by the way. Now, the product of
S and the NBA is itself a system. What is the state set? It is the product of the
state set of the system and the state set of the NBA. The input set is the same
input set as the system S. What about the state transition map, as well as the set
of initial states? Here is how the state transition map is defined. Again, I said
it is the same as for the product of the system and an NFA. Starting from a state
pair (x, q), under u, we go to the state pair (x', q') if and only if, in the
original system S, I go from state x to x' under input u (see this part), and in
the NBA, I go from state q to q' under the label of x'. You see, that's how the
state transition map is defined. I go from the pair (x, q) under u to (x', q') if
and only if, in system S, I go from x to x' under u, and inside the NBA, I go from
q to q' under the label of x'. What about the set of initial states? X_0' is the
set of all pairs (x_0, q) such that q can be reached from some of the initial
states inside the NBA under the label of x_0. Again, remember, we abuse the
notation: this delta(Q_0, L(x_0)) means you take the union of delta(q_0, L(x_0))
over all q_0 belonging to the set of initial states. That's how we construct the
set of initial states in the product, exactly the same as we did for the product of
the system and the NFA.
0:04
Let's again reiterate how we are going to verify whether a finite system satisfies
a given omega-regular property. We are given a finite simple system S, without any
terminal states, together with the labeling map, L, that maps each state to a
subset of the atomic propositions. We are also given an NBA. This NBA accepts the
language that is the complement of the original property we would like to verify
against; this NBA represents the bad behaviors of an omega-regular property, P. In
other words, the language of the NBA is equal to the complement of P. Now look, the
following statements are all equivalent. System S satisfies the original
omega-regular property P; if and only if the traces of the system (meaning that you
apply the labeling map over all the behaviors of the system), intersected with the
language of the NBA that admits the bad behaviors, form an empty set; if and only
if you take the product of the system S and the NBA, and the product satisfies
"eventually always no accepting state", or "eventually forever no accepting state".
All these three statements are equivalent. I'll reiterate: these two sets can be
infinite. We cannot even construct and store these sets in a computer, so taking
their intersection is almost impossible. Rather than following Statement number 2,
we actually follow Statement number 3, meaning that, because S is finite and the
NBA is also finite, the product graph is going to be a finite graph. Now, on the
product system, we just need to check "eventually always no accepting states", and
we don't need to worry about taking the intersection at all.
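Statement number 3 can be sketched algorithmically: persistence fails exactly when the product has a reachable cycle through an accepting state. Below is a naive two-pass Python sketch of that check (real model checkers use an on-the-fly nested depth-first search or an SCC decomposition instead); the function names are my own.

```python
def violates_persistence(init_states, successors, is_accepting):
    """Does the product contain a reachable cycle through an accepting
    state?  If yes, 'eventually always no accepting state' fails, so the
    original system violates the omega-regular property."""
    # Pass 1: collect every state reachable from the initial states.
    reachable, stack = set(init_states), list(init_states)
    while stack:
        s = stack.pop()
        for t in successors(s):
            if t not in reachable:
                reachable.add(t)
                stack.append(t)
    # Pass 2: from each reachable accepting state, search for a path
    # back to itself, i.e. a cycle through that accepting state.
    for acc in (s for s in reachable if is_accepting(s)):
        seen, stack = set(), list(successors(acc))
        while stack:
            s = stack.pop()
            if s == acc:
                return True       # accepting state lies on a reachable cycle
            if s not in seen:
                seen.add(s)
                stack.extend(successors(s))
    return False                  # persistence holds: system satisfies P
```

The two-pass version is quadratic in the worst case; the nested DFS mentioned in the later lectures achieves the same answer in linear time.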
0:05
Let's look at an example. We are given a finite simple system S, which is this, and
we are given an omega-regular property P. The property says: each sent message will
eventually be delivered. That's the original property that we would like to see
whether the system satisfies. Because the system is very simple, we can just look
at it and readily verify whether the system satisfies this property, yes or no.
Look: if I start from the starting location, under this transition I go to this
state, which just tries to send. Now there are two possibilities. The message might
get delivered, but nondeterministically I may go to lost, and I might actually take
this loop and end up getting trapped inside it. That means a sent message will not
eventually get delivered. So we can readily see from this system that the system
does not satisfy this omega-regular property. Because, look: if we start from here,
under start, we go here, then we take this transition nondeterministically, and
what if we get stuck in this loop? The message never gets delivered. So the system
does not satisfy the property. But we would like to verify this by taking the
product, by the product construction. Okay, first, we need to build an NBA that
admits the bad behaviors, meaning that it admits the complement of P. Okay, what is
the complement of P? The complement of P is: the message is never delivered after
some try. Okay, what is the NBA recognizing this type of behavior, meaning, never
delivered after some try? Here, look at this NBA. We start from state q_0; finitely
many times, we do not care, and then, under a transition with try and not
delivered, we go to the accepting state. And then we get stuck in the accepting
state under not delivered, so the message doesn't get delivered. Of course, the
transition going outside the accepting state doesn't matter, because it is not
coming back to the accepting state. I mean, you can still keep it; the reason we
added this transition is that we wanted to make the NBA somehow deterministic at
the accepting state. But actually, let's keep this state, because later on, in the
next slide, I will show you that even if you had removed it, it would not have
affected the product or the conclusion we're going to draw from the product. So
that's the reason; for the sake of illustration, let me actually keep this
transition. But you see, this transition does not play any role in the language of
the NBA, so we could also remove it. Okay, so now let's take the
product of the system and the NBA. Okay, first of all, what is the set of initial
states in the product? We start in start, and from q_0 under start, where do I go?
If you're in q_0 under start, you go to q_0. Then (start, q_0) is the initial state
of the product, great, and it's the only initial state. Now let's look at the other
transitions. Look, I have a transition from start to try, right? Great. From q_0
under try, where do I go? You can go to q_0, and you can also go to q_f, you see?
And similarly, let's look at the other transitions. I'm not going to do all of
them, but let's practice some. Here, I'm in (try, q_0). If you're in try, there is
a transition to lost. Okay, in q_0 under lost, I have a self-loop; true means
everything. So I go to (lost, q_0), great. In lost, I also have a transition back
to try. So, if you are in q_0 under try, where do I go? You can go to q_f, and you
can also go to q_0. I also have a transition from try to deliver. If you're in q_0
under deliver, I go to q_0; so from (try, q_0), I go to (deliver, q_0). And
similarly, I can construct the rest of the transitions, and this is going to be the
final product graph, or product system. So now let's check: does the product
satisfy
"eventually forever no accepting state"? Meaning, in the product, do I get settled
in states which are not accepting? Or, equivalently, I need to check: in the
product, do I visit any of the accepting states infinitely often, true or false?
Let's check. Look at this path: if you're in q_0, under this transition, you go
here, and this is an accepting state. Then you have this transition to the
accepting state, and you have this loop, right? And that's causing the problem. Why
is that? Because I was able to find a path in the product in which the accepting
state is visited infinitely often, which means I do not satisfy "eventually always
no accepting state", because I am visiting an accepting state infinitely often.
That already means this persistence property does not hold; hence, the original
system doesn't satisfy the original omega-regular property.
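To make the walkthrough concrete, here is a small end-to-end sketch of this example in Python. The state names, labels, and transitions are my reading of the lecture's figure, so they are assumptions rather than an exact transcription: the proposition try is taken to hold in state try, and delivered in state deliver.

```python
# System S (transitions guessed from the lecture's figure: assumption).
trans = {'start': ['try'], 'try': ['deliver', 'lost'],
         'lost': ['try'], 'deliver': ['start']}
labels = {'start': set(), 'try': {'try'},
          'lost': set(), 'deliver': {'delivered'}}

def nba_step(q, lab):
    """NBA for 'never delivered after some try' (the bad behaviors)."""
    succ = []
    if q == 'q0':
        succ.append('q0')                       # true self-loop
        if 'try' in lab and 'delivered' not in lab:
            succ.append('qf')                   # try and not delivered
    elif q == 'qf' and 'delivered' not in lab:
        succ.append('qf')                       # stuck while not delivered
    return succ

def prod_succ(pair):
    x, q = pair
    return [(x2, q2) for x2 in trans[x] for q2 in nba_step(q, labels[x2])]

# Reachable product states from the initial state (start, q0).
init = [('start', q) for q in nba_step('q0', labels['start'])]
reachable, stack = set(init), list(init)
while stack:
    s = stack.pop()
    for t in prod_succ(s):
        if t not in reachable:
            reachable.add(t)
            stack.append(t)

# Is some accepting pair (x, qf) on a cycle, e.g. the try/lost loop?
def on_cycle(acc):
    seen, stack = set(), list(prod_succ(acc))
    while stack:
        s = stack.pop()
        if s == acc:
            return True
        if s not in seen:
            seen.add(s)
            stack.extend(prod_succ(s))
    return False

violated = any(q == 'qf' and on_cycle((x, q)) for (x, q) in reachable)
print(violated)  # prints True: persistence fails, so S violates P
```

The accepting cycle the search finds is exactly the (try, q_f), (lost, q_f) loop discussed above.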
0:03
So let's have a quick comparison between checking regular safety versus
omega-regular properties. This slide summarizes most of the discussion I had in the
previous slides regarding verifying regular safety properties and verifying
omega-regular properties. For a regular safety property P, remember, we said the
system S satisfies the regular safety property P if and only if, when we take the
product of the system and the NFA representing the set of bad prefixes of P and
check in the product whether any accepting state is reachable, the product
satisfies "forever no accepting state", or "always no accepting state"; in other
words, if in the product none of the accepting states is reachable from the initial
states of the product, we conclude that the system satisfies the original regular
safety property P. In the case of an omega-regular property P, remember, we said a
system S satisfies the omega-regular property P if and only if, when you take the
product of the system S and the NBA (the NBA accepting the set of all bad
behaviors, so the language of the NBA is the complement of P), the product
satisfies "eventually always no accepting state", or "eventually forever no
accepting state"; then that means the system S does satisfy the original
omega-regular property P. So you see, in the product, we have to check that none of
the accepting states is visited infinitely often, and if that's the case, that
means the system S satisfies the original omega-regular property P.
0:05
Now we need to check: how can we systematically check the persistence condition on
the product system? Let's again look into persistence checking in detail.
Previously, everything was through an example; it was a very simple example, and we
could see with our own eyes whether the persistence condition was satisfied. But
when the system is very large, containing billions or trillions of states, then the
product is going to be extremely large, and we need an algorithmic way of verifying
a persistence condition. Let's say we are given a finite simple system without
terminal states, and we are given a persistence condition "eventually always a",
where a is an element of the atomic propositions. We're interested in this simpler
question, because if we are able to answer it, then we can also answer the
persistence checking on the product system. For now, forget the product; we still
want to check a persistence condition, but for a plain system. So, does system S
satisfy "eventually always a"? Does this hold? Let's look at the negation. Let's
say S doesn't satisfy "eventually always a". What does that mean? It means, if and
only if, there is a state path in the system, a state sequence s_0, s_1, s_2, s_3,
all the way to infinity, such that s_i does not satisfy a for infinitely many
indices i. You see, "S doesn't satisfy eventually always a" is equivalent to the
statement that there is a state sequence s_0, s_1, s_2, and so on to infinity,
inside the system, such that, for infinitely many indices, the s_i in that state
sequence does not satisfy a. This is also equivalent to another statement: there
exists a reachable state... Going from here to here, we're using the fact that the
system is finite, right? Since the system is finite, when you look at an infinite
state sequence, what does that mean? It means one of the states is going to appear
infinitely many times in that sequence. That means we can actually break down this
state sequence into a finite prefix and a cycle from that state, s, to itself.
Play video starting at :3:5 and follow transcript3:05
Now these two statements are equivalent to this statement: if and only if there
exists a reachable state s which doesn't satisfy A, and there is a cycle from s
to itself. And this is also equivalent to this statement: if and only if there
exists a non-trivial reachable strongly connected component C, such that the
intersection of C with all those states in the system that don't satisfy A is
non-empty.
Play video starting at :3:47 and follow transcript3:47
By the way, these statements are valid under the assumption that S is finite.
What is a strongly connected component? This is the definition. A strongly
connected component is a maximal set of states that are reachable from each
other. In a graph, a part of the graph is called a strongly connected component
if it is a maximal set of states which are reachable from each other: all the
states inside the strongly connected component are reachable from each other,
and the set is maximal with this property. A strongly connected component is
called non-trivial if it has at least one edge: either it's going to be one
state with a self-loop, or it contains two or more states. In that case, the
strongly connected component is called non-trivial. By the way, look at this
last equivalent statement. It's asking for a non-trivial reachable strongly
connected component. Keep in mind, it has to be non-trivial.
Play video starting at :5:15 and follow transcript5:15
A strongly connected component is a maximal set of states in which each state is
reachable from any other state. Now, let's see how we can leverage all these
equivalences to our advantage, to be able to offer a systematic solution for
persistence checking on the product when we are verifying an omega regular
property. Let's go back and look at this example again. Remember, this was one
of the examples where we saw the system did not satisfy the original omega
regular property, because when we constructed the product, one accepting state
was reachable on the product, and we could visit an accepting state infinitely
often in the product, so the persistence condition was not satisfied. We also
looked at the system, and because the system is so simple, we were able to
readily say that, yes, the system does not satisfy the original omega regular
property: if you start from start, you go to try, and then non-deterministically
you go to lost, and you can take this loop infinitely often, so the message
never gets delivered. The original property was asking that each sent message
will eventually be delivered, but that is not the case if we get stuck in this
loop. This was the NBA representing the bad behaviors. The language of this NBA
was the complement of P.
Now, let's look at the product system, and let's answer persistence checking on
the product system by resorting to a strongly connected component analysis.
Here, if you recall, this was the product system, and I'm able to actually
divide the product into three reachable, non-trivial, strongly connected
components. Why is that? So I have three non-trivial strongly connected
components. All of them are non-trivial because each of them contains two or
more states, and they are all reachable from the initial state. This one is
reachable from the initial state because the initial state is actually one of
the states in it. This strongly connected component is also reachable from the
initial state, and this strongly connected component is also reachable. Great.
Look, every state in the strongly connected component is reachable from every
other state. For example, here, is this state reachable from this one? Yes,
through this path. What about this one? Through this path. What about this one,
can I go from here to here? Yes, through this path. Can I go from here to here?
Yes, through this path. Go here, here and here. You see, it is a maximal set of
states in which they are all reachable from each other. What about C2? Look,
these states are not reachable from this state, so we did not include those
states here. Are these two states reachable from each other? Yes, I can go from
here to here, and I can go from here to here. These states, I cannot go from
this state to these two states; that's the reason those states are not included
in C2. I can go from this state here, and every state here is reachable from
every other state. Great. Now, what was the persistence property? Eventually
always no accepting state. In this example, we have the reachable strongly
connected components, and all of them are also non-trivial. Now, does any of
them contain an accepting state? Yes. Look, C2 contains two states, both of
which are accepting states. That means what? C2 contains states which do not
satisfy "not accepting state", or in other words, which do satisfy "accepting
state". Because one of the strongly connected components contains at least one
accepting state, that already implies that an accepting state can be visited
infinitely often, so the persistence condition is not satisfied, hence the
original system does not satisfy the original omega regular property. If you do
this strongly connected component analysis and none of the strongly connected
components, none of the non-trivial ones, contains an accepting state, then the
persistence condition is satisfied. Now you might ask: I transferred the problem
of checking the persistence condition to finding strongly connected components
over the product system, so how do we search for strongly connected components?
The good news is there are several well-established algorithms in the literature
that you can simply leverage to compute the strongly connected components of a
graph. In our case, we are talking about the product system, and you can look at
it as a graph. What are those algorithms? For example, one of them is called
Kosaraju's algorithm, we also have Tarjan's strongly connected component
algorithm, and we also have the path-based strong component algorithm. These are
efficient algorithms. They can be used for very large graphs, and they're also
very fast.
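As a sketch of what such an analysis might look like in code (the graph encoding and state names are illustrative assumptions, not the lecture's example; a production checker would use an iterative SCC implementation), the following computes strongly connected components with Kosaraju's algorithm and then checks the persistence condition: eventually always no accepting state holds if and only if no reachable non-trivial SCC contains an accepting state.

```python
def sccs(graph):
    """Kosaraju's algorithm: return the strongly connected components.

    graph: dict mapping each node to a list of successor nodes.
    """
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs1(v)
        order.append(u)  # post-order: u finishes after its descendants
    for u in graph:
        if u not in seen:
            dfs1(u)
    # Transpose the graph and explore it in reverse finishing order.
    rev = {u: [] for u in graph}
    for u in graph:
        for v in graph[u]:
            rev.setdefault(v, []).append(u)
    comps, assigned = [], set()
    def dfs2(u, comp):
        assigned.add(u)
        comp.append(u)
        for v in rev.get(u, []):
            if v not in assigned:
                dfs2(v, comp)
    for u in reversed(order):
        if u not in assigned:
            comp = []
            dfs2(u, comp)
            comps.append(comp)
    return comps

def persistence_holds(graph, initial, accepting):
    """Eventually-always-no-accepting-state holds iff no reachable
    non-trivial SCC contains an accepting state."""
    reach, stack = set(initial), list(initial)
    while stack:
        u = stack.pop()
        for v in graph.get(u, []):
            if v not in reach:
                reach.add(v)
                stack.append(v)
    for comp in sccs(graph):
        # Non-trivial: two or more states, or a single state with a self-loop.
        nontrivial = len(comp) > 1 or comp[0] in graph.get(comp[0], [])
        if nontrivial and reach & set(comp) and set(comp) & accepting:
            return False
    return True

# Toy product: the cycle q1 <-> q2 contains the accepting state q2.
g = {"q0": ["q1"], "q1": ["q2"], "q2": ["q1"]}
print(persistence_holds(g, ["q0"], {"q2"}))  # False: q2 is visited infinitely often
```

In the toy example the check returns `False`, mirroring the lecture's conclusion: an SCC containing an accepting state is reachable, so the persistence condition fails.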
0:06
So now the question is, how can we verify that a finite system satisfies a given
linear temporal logic formula, right? We call it the LTL model checking problem.
So we are given a finite simple system S over a set of atomic propositions AP.
We are also given an LTL formula varphi over the set of atomic propositions AP.
So the model checking question is this: does system S satisfy the LTL formula
varphi, does this hold? So how can we check this? The basic idea is to try to
refute S satisfying varphi by searching for a state path pi in S such that pi
does not satisfy the LTL formula varphi, or pi satisfies the negation of the LTL
formula varphi. So this already gives me an idea how to check if a system S
satisfies a given LTL formula varphi. What I need to do is take the negation of
the LTL formula, you know, not varphi. So now I need to construct an NBA.
Remember, for any LTL formula there exists an NBA, and the negation of varphi is
itself an LTL formula. So I need to construct an NBA A that admits the words of
not varphi. Then I need to search for a path pi inside system S such that, if
you apply the labeling map L over this state sequence, or if you look at the
corresponding trace of this state sequence, that trace is inside the words of
not varphi, or inside the language of the NBA. And if I can come up with at
least one state sequence pi with this property, that means the system S does not
satisfy the original LTL formula varphi. But let's do it in a systematic way. I
mean, I already explained in the previous lectures how to verify if a system S
satisfies an omega regular property, right? LTL formulas also describe omega
regular properties, so I should be able to utilize those results to answer this
problem here. What I need to do is simply construct the product of the system S
and the NBA, and again on the product, check the persistence condition. So hence
the LTL model checking problem, in a nutshell, gets very easy. We are given a
finite system S, we are given an LTL formula varphi, and we are asked to check:
does the system satisfy the LTL formula varphi? So what I'm going to do is take
the negation of varphi. The negation of varphi is itself an LTL formula, hence
there exists an NBA admitting all the words that satisfy the negation of varphi.
So I construct an NBA A for not varphi, and this is called the NBA recognizing
bad behaviors.
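The product construction behind this recipe can be sketched as follows. This is a simplified version based on one common textbook definition, where a product state (s, q) steps to (s', q') when s steps to s' in the system and the NBA reads the label of s'; the lecture's exact definition may differ in details, and all the names in the example are assumptions.

```python
def product(sys_trans, sys_init, label, nba_delta, nba_init):
    """Build the product of a transition system and an NBA.

    sys_trans: dict state -> list of successor states
    label:     dict state -> frozenset of atomic propositions (the letter)
    nba_delta: dict (nba_state, letter) -> set of successor NBA states
    """
    # Initial product states: the NBA takes one step reading the label
    # of an initial system state.
    init = {(s, q)
            for s in sys_init
            for q0 in nba_init
            for q in nba_delta.get((q0, label[s]), set())}
    trans, todo, seen = {}, list(init), set(init)
    while todo:
        (s, q) = todo.pop()
        succs = set()
        for s2 in sys_trans.get(s, []):
            for q2 in nba_delta.get((q, label[s2]), set()):
                succs.add((s2, q2))
        trans[(s, q)] = succs
        for p in succs:
            if p not in seen:
                seen.add(p)
                todo.append(p)
    return init, trans

# Tiny example: system s0 -> s1 -> s1 -> ..., with "a" holding only in s1,
# against an (assumed) two-state NBA reading letters over {a}.
fa, fe = frozenset({"a"}), frozenset()
nba_delta = {("q0", fe): {"q0"}, ("q0", fa): {"q1"},
             ("q1", fa): {"q1"}, ("q1", fe): {"q1"}}
init, trans = product({"s0": ["s1"], "s1": ["s1"]},
                      ["s0"], {"s0": fe, "s1": fa}, nba_delta, ["q0"])
print(init)  # {('s0', 'q0')}
```

On the resulting product graph one then runs exactly the persistence check from the earlier slides.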
Play video starting at :3:59 and follow transcript3:59
So then I take the product of the system and the NBA A, and I need to check the
persistence condition on the product system: eventually always no accepting
state. If that's true, then the original system S does satisfy the LTL formula
varphi; otherwise, it does not. And an infinite state sequence on the product
system that visits an accepting state infinitely often, when I project it to a
state sequence of my original system, is in fact a counterexample. By the way, I
use F and G here because, remember, F stands for eventually and G stands for
always. I didn't write out eventually always no accepting state; I wrote the
acronyms, F standing for eventually, capital G standing for always. Remember
when I talked about the syntax of LTL formulas? I used both the diamond and
capital F to represent eventually, and I also used the square and capital G to
denote always.
0:04
So remember, I already explained this in the previous course: for any given
linear temporal logic formula, we can construct an NBA whose language is equal
to the language of the linear temporal logic formula, right? So let me actually
reiterate that fact. For any LTL formula varphi over the set of atomic
propositions AP, there always exists an NBA A over this alphabet, which is the
power set of the atomic propositions, such that the set of all infinite words
satisfying the LTL formula varphi is equal to the language of that specific NBA
A. We will not discuss the formal proof of this fact, right? Instead, what we're
going to do is look at some examples in the next lecture and try to construct an
NBA for a given LTL formula varphi.
0:05
So, as I mentioned previously, verifying if a system satisfies a given linear
temporal logic formula actually boils down to what we already discussed in terms
of verifying if a system satisfies an omega regular property. Why is that?
Because when you are given an LTL formula, the set of bad behaviors is the set
of words satisfying the negation of that formula, right? And we already know
that for any LTL formula, there exists an NBA whose language is equal to the set
of all the words that satisfy that LTL formula. Hence, I need to construct an
NBA for the negation of the original LTL formula. Now I can actually take the
product of the system and the NBA admitting all the bad behaviors, and then on
the product I need to do persistence checking. So this is already covered by
model checking of omega regular properties, right? Because LTL formulas are a
class of omega regular properties, the problem is already solved by what I
already explained in terms of verifying omega regular properties. Here, I will
rather concentrate my focus on providing some examples to see how one can
construct a non-deterministic Büchi automaton for a given linear temporal logic
formula. Let's start with some examples.
Play video starting at :1:42 and follow transcript1:42
So assume the original property given to us says next A, okay? So now let's look
at the negation of the formula. The negation of varphi is the negation of next
A, and the negation goes inside the next, right? It becomes next not A, great. I
just need to build an NBA that admits all the words that satisfy next not A.
Here, look at this NBA. In the first instance, the label is true, so it can be
anything. But in order to get accepted, I need to go to q2, so at the next
instance it's not A, and after that I don't care, right? I have a self-loop on
the accepting state under true. So you see, the set of words accepted by this
NBA is exactly those words that satisfy this LTL formula next not A.
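To see concretely which words the automaton should accept, here is a tiny sketch (my own illustrative encoding, not from the lecture) that evaluates the formula next not A directly on ultimately periodic words, where a word is a finite prefix followed by a repeated cycle and a letter is the set of atomic propositions true at that instant:

```python
def letter(prefix, cycle, t):
    """Letter at time t of the ultimately periodic word prefix . cycle^omega."""
    if t < len(prefix):
        return prefix[t]
    return cycle[(t - len(prefix)) % len(cycle)]

def holds_next_not_a(prefix, cycle):
    """Semantics of the LTL formula X(not a): 'a' must be false at time 1."""
    return "a" not in letter(prefix, cycle, 1)

# {a}{}{}... satisfies X(not a); {}{a}{a}... does not.
print(holds_next_not_a([{"a"}], [set()]))  # True
print(holds_next_not_a([set()], [{"a"}]))  # False
```

Any word the NBA above accepts should make this evaluation return `True`, and vice versa, which is a handy sanity check when constructing small automata by hand.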
Play video starting at :2:46 and follow transcript2:46
So here I just put the syntax of the NBA for your reference. I mean, you already
know the definition of the NBA and its syntax. It has a finite set of states,
it's defined over an alphabet, there is a state transition relation involved,
and we have a set of initial states and a set of accepting states, right? And
remember, we defined a run for an infinite word, right?
Play video starting at :3:13 and follow transcript3:13
It is a state sequence of this form in which q0 belongs to the initial set, and
I go from q0 to q1 under A0, from q1 to q2 under A1, and so on, so forth. And
it's called accepting if this state sequence visits an accepting state
infinitely often. I mean, you're already familiar with the syntax and semantics
of NBAs; I just put it here as a recap. And of course, the language of the NBA
is the set of all infinite words accepted by the NBA. Okay, here was an example
of an NBA accepting the negation of the LTL formula. So now, if you had a system
and you wanted to check if the system satisfies the original LTL formula, what
you need to do is take the product of that system and this NBA that admits all
the bad behaviors, and then on the product system, check the persistence
condition. Another example: the LTL formula is a or b, okay? This is the
corresponding NBA for this LTL formula. Look, it's the union of these two NBAs;
actually, you can consider it as one NBA. The top one admits a, the bottom one
admits b, so together: a or b. After that, we don't care. What about an NBA that
admits always a, right? Remember, box a means always a. Here, see, in q0, I have
a self-loop under a. So any word accepted by this NBA satisfies always a.
Play video starting at :5:17 and follow transcript5:17
Even if you take this transition, it's not going to be accepted because there is no
way of coming back to the accepting state.
Play video starting at :5:28 and follow transcript5:28
What about an NBA that accepts the language of this LTL formula, infinitely
often a? Remember, box diamond a also means infinitely often a. Okay, look here.
If I start from the initial state q0, in order to get accepted, I need to take a
and go to q1. Of course, I can take a here many times, and I can also take not
a. But in order to get accepted, I still have to keep taking a, right? So if you
look at all the state sequences of this NBA that visit the accepting state
infinitely often, they actually produce the letter a infinitely often as well.
Play video starting at :6:24 and follow transcript6:24
Look at this one. Look at this LTL formula. It says always, a and not b implies
eventually b, okay? Of course, the implication holds if the left-hand side
doesn't hold, right? And the negation of the left-hand side is not a or b. Here
on the accepting state, I have a self-loop under not a or b, because under not a
or b, I don't even need to check what comes after the implication, right? But
what if a and not b holds? If a and not b holds, then for finitely many steps I
may not see b, but in order to get accepted, eventually I have to see b, right?
This transition to the accepting state is under b. So this NBA admits exactly
those words that satisfy this LTL formula.
Play video starting at :7:38 and follow transcript7:38
So by the way, the empty letter to the power omega is accepted by this NBA, and
that's correct, right? Because a and not b doesn't hold under the empty letter,
so then you don't need to check what comes after the implication. This word is
also accepted, right? a b, a b, a b, because every time there is an a,
immediately after that, b appears. And again, because it's periodic, every time
you see a, immediately after that, b is true. So it satisfies this LTL formula.
Play video starting at :8:24 and follow transcript8:24
What about eventually always a? Look at this NBA. For finitely many steps the
label is true, meaning we don't care, but in order to get accepted, eventually a
is true, and a remains true forever after that. You see, that means eventually
always a. Of course, this transition does not play any role in the language of
the NBA. Why? Because it's not coming back to the accepting state. So you see,
these are all possible runs. q0 q0 q0 and so on: if you take the self-loop on q0
forever, this is not accepted by the NBA; you have to go and visit the accepting
state infinitely often. What about q0 and then q1 to the power omega? It is
accepted. q0 q0 and then q1 to the power omega is also accepted, right? Or q0 q0
q0 and then eventually q1 to the power omega, also accepted by this NBA. So this
NBA accepts all the words that satisfy the LTL formula eventually always a.
0:04
In the previous lectures, I talked about how we can verify finite systems.
Remember, we divided it into two parts: verifying finite systems against regular
safety properties, and verifying finite systems against omega regular
properties. Remember, in the case of verifying against regular safety
properties, we took the product of the finite system and the NFA representing
the bad prefixes of the original property, and then on the product, we just
checked a very simple invariance condition: is an accepting state reachable in
the product? If no accepting state is reachable, that means the system satisfies
the original regular safety property. In the case of an omega regular property,
we constructed an NBA for the bad behaviors, we took the product of the system
and the NBA, and then on the product, we checked a persistence condition. We
checked eventually always no accepting state, or in other words, we checked
whether, in the product, an accepting state can be visited infinitely often. If
yes, that means the system doesn't satisfy the original omega regular property;
otherwise, it does satisfy it. Now today, I'm going to talk about how we can
synthesize controllers for finite systems to satisfy some property of interest.
Now we are shifting gears from verification to synthesis: how can we actually
design systems to be correct? We also call them correct by construction.
Play video starting at :1:57 and follow transcript1:57
Let's look at some definitions regarding satisfaction and realizability of LTL
formulas. Let S be a simple system with no blocking pairs. Let varphi be an LTL
formula over a set of atomic propositions AP. Let L be this map; we also call it
the labeling function. Here, you can see the domain of the labeling map is the
product of the input set and the state set. In the previous lectures, the domain
was only the state set, but here we modify it to also include the input set,
without loss of generality. The co-domain is the set of subsets of the atomic
propositions. As you know, those subsets are the symbols or letters of the
alphabet. Now, consider any element of the behavior; remember, because the
system has no blocking pairs, behaviors are actually infinite input and state
sequences.
Play video starting at :3:13 and follow transcript3:13
If you look at any element of the behavior under this labeling map, it actually
induces an infinite word over the alphabet. Why is that? Because you apply L
over this pair. These are infinite sequences, and when you apply L, at every
instant you get a subset of the atomic propositions, which is a symbol of the
alphabet. Then, by applying L over this pair of infinite sequences, what we see
is an infinite sequence over the alphabet, or in other words, an infinite word,
and that's how we actually relate a system to a property of interest. The system
generates behaviors, infinite input state sequences, but on the other hand,
properties of interest are sets of infinite words over the alphabet. With the
help of this map L, we can translate: we can go from behaviors to infinite
words, the words which are generated by the system under the labeling map L. In
fact, this word at any time instant t is obtained by applying the labeling
function to the pair (u(t), x(t)). P(varphi), if you do recall, is a property
over the alphabet; remember, P(varphi) was the set of all infinite words
satisfying the LTL formula varphi. Now we can use the labeling function L to
define a property over the input state domain; we can actually bring the
property from the alphabet domain to the input state domain. How? We call it
P_L(varphi), over U times X, and it is simply the set of all those pairs of
input state sequences such that, when you apply the labeling map L over those
infinite input state sequences, they are going to be inside the set P(varphi).
Remember, P(varphi) was the set of all infinite words satisfying the LTL formula
varphi. Now, P_L(varphi) is simply the set of all infinite input state sequences
such that, when you apply L over them, they are going to be inside P(varphi).
And now we say a specific infinite input state sequence satisfies an LTL formula
varphi if simply this pair of infinite sequences belongs to this set. So the
whole discussion of this slide is just to tell you how we can relate a system to
a property of interest in terms of satisfaction.
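The translation from behaviors to words can be sketched as follows (the labeling map and all the names here are illustrative assumptions, not from the lecture):

```python
def trace(inputs, states, L):
    """Apply the labeling map to an input/state sequence to get a word.

    L: dict mapping a pair (input, state) to a frozenset of atomic
       propositions, i.e. one letter of the alphabet.
    """
    return [L[(u, x)] for u, x in zip(inputs, states)]

# Hypothetical labeling: proposition "goal" holds exactly when the state is x2.
L = {("u0", "x0"): frozenset(), ("u0", "x1"): frozenset(),
     ("u0", "x2"): frozenset({"goal"})}
word = trace(["u0", "u0", "u0"], ["x0", "x1", "x2"], L)
print(word)
```

A behavior satisfies varphi exactly when the word produced this way lies in P(varphi); in practice one works with finite prefixes or lasso representations of the infinite sequences.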
Play video starting at :6:53 and follow transcript6:53
An infinite input state sequence satisfies an LTL formula varphi if that pair
belongs to P_L(varphi). In other words, this input state sequence satisfies the
LTL formula varphi if, when you apply the labeling function over these input
state pairs, that generates an infinite word, and that infinite word satisfies
the corresponding LTL formula; then you say the sequence satisfies that LTL
formula.
Play video starting at :7:36 and follow transcript7:36
In summary, we say the simple system satisfies an LTL formula varphi under the
labeling map L if and only if the behavior of the system is a subset of
P_L(varphi). Meaning that if all input state pairs generated by the system
satisfy the LTL formula varphi, then we can claim that the system satisfies the
LTL formula varphi.
Play video starting at :8:16 and follow transcript8:16
The second point says: the LTL formula varphi is realizable on the system under
the labeling map L if P_L(varphi) is realizable on S, which means there exists
another system C which is feedback composable with S, and the behavior of the
feedback composition of C and S is a subset of P_L(varphi). Let's dive into the
second bullet point in more detail. We say the LTL formula varphi is realizable
on system S under the labeling map L if P_L(varphi) is realizable on S, which
means there exists a controller, or a system C, which is feedback composable
with S, and the behavior of their feedback composition is a subset of
P_L(varphi). Look, the behavior of the system S itself might not be a subset of
P_L(varphi). We are asked to design a controller, to design another system,
capital C, such that C is feedback composable with S and the behavior of the
feedback composition is a subset of P_L(varphi). Let's provide an example. Let's
say your system S is a car, and you're asked to design a controller or an
autopilot that controls the car and avoids colliding with the car in front. Now,
this autopilot which I'm talking about can actually be expressed as this system
C. Even though it's software implemented on a computer, you can look at it at a
high level as a system C. We are trying to design the system C, or autopilot C,
or controller C, that is feedback composable with S, and when you apply it to
the system, the feedback composition ensures that the car will not collide with
the car in front. I'm just trying to put it in context for more clarification.
Play video starting at :11:15 and follow transcript11:15
For the sake of simplicity, I'm going to deal with only a fragment of LTL
formulas. For the sake of this course, I assume the system is simple, and the
LTL formula varphi, over an atomic proposition P, is any of the properties
listed here. It's either safety, which means varphi is equal to always P, where
P is a member of the atomic propositions. Or it can be reachability: eventually
P, or finally P. Or it can be persistence: eventually always P. Or it can be
recurrence: always eventually P. Or it can be reactivity, meaning that varphi
can be this formula, which says infinitely often P implies infinitely often Q.
Depending on which of these varphi we are dealing with, what we're going to do
in this course is derive an algorithm. The output of the algorithm is a set D,
which is a subset of the state set. It's also called the domain of the
controller, or the domain of the system C. Meaning that if you start anywhere
inside D, you always have control actions that you can apply to your system in a
closed-loop fashion such that the system, composed with that C, will actually
satisfy the property of interest, which is one of these. Ultimately, I will
present two results for each of those specifications of interest. We're going to
answer this question: is varphi realizable on S under L? That's the case if and
only if the set of initial states is a subset of D. Why is that? Because, as you
recall, I said if you start inside D, there is always a control action that can
steer the system in a closed-loop fashion in order to satisfy the property of
interest. Now, since the system can only get initialized from capital X_0, if
capital X_0 is a subset of D, that means the system can only start from points
inside the domain of the controller, and then the system C can actually navigate
and steer the system and ensure that it will satisfy the property of interest.
That's one result. We will also provide the other result: if X_0 is a subset of
D, then we can derive a system C from D such that C is feedback composable with
S and their feedback composition will satisfy varphi. This is exactly the
synthesis problem we are interested in solving in this course. These are the two
results that I will present for each of those properties. Number 1: varphi is
realizable on S if and only if X_0 is a subset of D, and I'm going to tell you
how this D is constructed for each of those properties of interest. The next
point is: if it's realizable, how does C look? I will actually tell you how the
system C looks, and I will show you that it's feedback composable by
construction with system S, and their feedback composition will in fact satisfy
those properties of interest.
0:04
Central to the synthesis algorithms I'm going to explain in this course is an
algorithm which is called a fixed point algorithm. Let's see what the fixed
point looks like. All the algorithms are described by so-called fixed point
expressions. Central to those descriptions is in fact a monotone function G,
whose domain is the power set of the state set and whose co-domain is also the
power set of the state set. The function G is called monotone if it has this
property: for any two subsets Z, Z' of the state set X, Z being a subset of Z'
implies that G(Z) is a subset of G(Z'). If G has this property, then G is called
a monotone function. A subset Z* is a fixed point of G if Z* = G(Z*). A fixed
point Z hat is called a maximal fixed point if, first of all, it is a fixed
point, meaning Z hat is equal to G(Z hat), and in addition, for any Z, Z being a
subset of G(Z) implies Z is a subset of Z hat. A fixed point Z check of G is
called a minimal fixed point if, first of all, it is a fixed point, meaning Z
check is equal to G(Z check), and in addition, G(Z) being a subset of Z implies
Z check is a subset of Z. We're going to use the notations nu and mu to denote
the maximal, respectively minimal, fixed point of the operator G. These
notations come from mu calculus: nu means maximal fixed point and mu means
minimal fixed point. We're just borrowing these notations from mu calculus to
denote the maximal or minimal fixed point of a monotone operator. Keep in mind
that these fixed point expressions, and the monotone function behind them,
actually provide the main ingredients in the synthesis algorithms, which I will
explain later, for designing the controller C.
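As a tiny illustration of these definitions (the state set, transition relation, and operator below are my own toy example, not from the lecture), take G(Z) to be the target states together with the states having some successor in Z; monotonicity and the fixed point property can then be checked directly:

```python
# Toy state set, transition relation, and target set (illustrative assumptions).
X = {1, 2, 3, 4}
succ = {1: {2}, 2: {3}, 3: {3}, 4: {4}}
T = {3}

def G(Z):
    """Candidate monotone operator: T plus states with a successor in Z."""
    return T | {x for x in X if succ[x] & Z}

# Monotonicity on sample pairs: Z subset of Z' implies G(Z) subset of G(Z').
for Z, Zp in [(set(), {3}), ({3}, {3, 4}), ({2}, {2, 3})]:
    assert Z <= Zp and G(Z) <= G(Zp)

# Z* = {1, 2, 3} is a fixed point: applying G leaves it unchanged.
print(G({1, 2, 3}) == {1, 2, 3})  # True
```

Note that state 4 stays outside this fixed point: its only successor is itself and it is not a target, so G never pulls it in.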
0:05
So let's look into the computation of fixed points. In general, there are going to be two types of iteration: one is called non-growing iteration, and the other one is called non-shrinking iteration. In the case of non-growing iterations, at iteration 0 we start from the whole state set X, and then we apply the fixed point operator starting from X, and then we continue, right? So in this case, when we start from the whole state set X, at each iteration we actually start removing elements, right? Due to how the operator is defined, which comes later for the specific specifications I talked about; I will explain in detail how this operator is defined in closed form. But in the case of non-growing iterations, when you start iteration 0 from the whole set X and then apply G, you start removing elements from this set X, so you remove and remove and remove. And let's say eventually you get a fixed point; that fixed point is going to be a subset of the whole state set X. That's the reason it's called a non-growing iteration, because you always remove. In the other case, which is called non-shrinking iteration, at iteration 0 you start from the empty set, and then you start applying the corresponding operator G. And then you start adding to that empty set; you add and add and add until you get a fixed point. That's the reason it's called non-shrinking iteration: at every iteration the set actually gets larger and larger until we get a fixed point, which means there is not going to be any more addition to the set. So now that
brings me to this nice theorem about fixed point computation. Let G be a monotone function and the state set be finite. Then there exist integers i and j such that the maximal fixed point and the minimal fixed point will both be reached within a finite number of iterations. And when we get the fixed point, the maximal fixed point is going to be exactly the set constructed at iteration i, and the minimal fixed point is going to be exactly the set constructed at iteration j. Keep in mind, this theorem is only valid if the state set X is finite. Unfortunately, when X is not finite, we cannot guarantee termination of the fixed point iteration, and in general it might not even terminate. So this theorem, which is a very interesting result, says: if G is a monotone function, right? Look at its domain, the power set of the state set, and its codomain, also the power set of the state set. And if the state set of the system is finite, namely if the system itself is finite, then there exist finite iteration counts i and j such that both the maximal and the minimal fixed point of the operator G will be reached within a finite number of iterations. For those of you interested in the proof, this is the proof of the theorem, which I'm not going to go into because it's quite involved, but if you're interested, you can look at it yourself.
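As a sketch of the two iteration schemes (this code is my own illustration, not from the lecture), both can be written as a loop that applies G until nothing changes; the theorem above guarantees termination when the underlying set is finite:

```python
# Sketch (not from the lecture): fixed point iteration for a monotone
# operator G on the power set of a finite set.

def max_fixed_point(G, universe):
    """Non-growing iteration: start from the whole set and apply G until stable."""
    Z = frozenset(universe)
    while True:
        Z_next = frozenset(G(Z))
        if Z_next == Z:
            return Z
        Z = Z_next

def min_fixed_point(G):
    """Non-shrinking iteration: start from the empty set and apply G until stable."""
    Z = frozenset()
    while True:
        Z_next = frozenset(G(Z))
        if Z_next == Z:
            return Z
        Z = Z_next

# A small made-up monotone operator on subsets of {0, 1, 2, 3}:
# G(Z) always contains 0, plus the successor of every element of Z.
G = lambda Z: frozenset({0} | {x + 1 for x in Z if x + 1 <= 3})
```

For this made-up G, both loops terminate after finitely many steps, exactly as the theorem promises; the non-shrinking one grows from the empty set to {0, 1, 2, 3} one element at a time.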
0:04
If you recall, in the previous lectures, I talked about this monotone map, which is central in fixed point iterations. Now, the question is, how do we define that monotone operator? In order to define the monotone operator, of course depending on which property we would like to enforce over the system, first I need to define the notion of the pre map. Let's see how the pre map is defined. Let S be a simple system, and let's assume it's finite. We define the pre map for a set Z, which is a subset of the product of the input set and the state set, simply by: pre of Z is the set of all input state pairs (u, x) such that the state transition map F applied on x under input u is not empty and is a subset of the projection of Z onto X. Let's digest this definition. Again, pre of Z is the set of all input state pairs such that if I start from the state x under input u, I will go to some other states, and those states have to be a subset of the projection of Z onto X. Keep in mind, Z is a set which is defined over the product of inputs and states. That's the reason I said when you start from x under u, you go to a bunch of states, and those states have to be a subset of the projection of Z onto X. So pre of Z is the set of non-blocking input state pairs for which all successor states are in the projection of Z onto X. One of the good properties of pre is that pre is monotone. Let's look at this example in order to get familiarized with the pre map. Look at this simple finite system. We are asked, what is the pre of (a,2)? Let's see. First of all, let's project (a,2) over the state set; I get state 2. Now the question is, from which states under which inputs will I end up going to state 2? If I'm in state 2 under b, I go to state 2, so (b,2) has to be inside the pre. If I'm in state 3 under a, I also go to state 2, so (a,3) should also be inside the pre. What about (a,1)? That's very interesting. I did not include (a,1) inside the pre; you might ask why, because from state 1 under a, I might go to 2. But that's the thing. You see, in state 1 under a, we have non-determinism, meaning that if I'm in state 1 under a, I might go to 2, or I might stay in state 1. That means the set of successors of state 1 under a is not only {2}, it's {1,2}, which is not a subset of {2}. That's the reason I did not include (a,1). So the pre of (a,2) is (a,3) and (b,2). What about the pre of {(b,2), (a,3)}? First of all, let's project it over the state set. We get states 2 and 3. Very good. We already know from which state under which input we can go to state 2: it's (b,2) and (a,3). What about state 3? We reach state 3 from state 1 under b, so we add (b,1). The pre is then (b,1), (b,2) and (a,3). What about the pre of (a,1)? Let's project it over the state set: state 1. From which state under which input do I go to state 1? The answer is: it's empty. Why is that? You might say, if you're in state 1 under a, you go to 1. The problem is, because of the non-determinism, if I'm in state 1 under a, I might go to 1, or I might go to 2. The successors of state 1 under a are {1,2}, which is not a subset of {1}. That's the reason the pre of (a,1) is empty. What about the pre of (a,3)? First, project (a,3) over the state set. We get state 3. From which state under which input do I go to state 3? From state 1 under b. Then the pre of (a,3) is (b,1). That's how the pre is defined.
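To make the definition concrete, the pre map on this example system can be sketched in a few lines of Python (the dictionary encoding of the transition map F is my own assumption, not fixed by the lecture):

```python
# Transition map F of the example: F[(state, input)] = set of successor states.
F = {
    (1, 'a'): {1, 2},   # non-determinism: from state 1 under a, we may stay in 1 or go to 2
    (1, 'b'): {3},
    (2, 'b'): {2},
    (3, 'a'): {2},
}

def pre(Z):
    """All non-blocking (input, state) pairs whose successors all lie in
    the projection of Z onto the state set."""
    proj = {x for (_, x) in Z}
    return {(u, x) for (x, u), succ in F.items() if succ and succ <= proj}
```

Evaluating pre({('a', 2)}) gives {('b', 2), ('a', 3)}, as in the lecture: the pair ('a', 1) is excluded because the successor set {1, 2} is not contained in {2}.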
0:05
So, okay, let's look at the specifications that we're going to cover in this course. Let's start with safety specifications and see how we can actually synthesize a controller C which is feedback composable with our original system, such that their feedback composition satisfies a safety property. What is the problem statement here? We are given a finite simple system. We are given a set of atomic propositions AP and a labeling function L. We are also given a safety formula varphi, which is equal to always psi, in which psi is a propositional formula over the set of atomic propositions AP, okay? And we are interested in solving this realizability problem: is this safety property realizable on system S under the labeling map L? See, this is a synthesis problem. Can we realize, can we enforce, this safety formula over the system? And the second part says: if varphi is realizable on S under L, we have to provide a system C that is feedback composable with S, so that the feedback composition satisfies varphi. Let's provide
an example here. Let's say system S is a car, and what is the safety formula? For
example, no collision with the car in front, right? That's a safety formula. No
collision with the car in front, or always keep a safe distance with the car in
front. So, okay, realizability is asking: are we able to enforce this no-collision requirement over our self-driving car? And if the answer is yes, can we design a system C, which is simply going to be software code implemented in the car's computer, such that it controls the car and ensures the safety objective, which is no collision with the car in front? So let's put it in context, right? At a high level of abstraction, that is the problem we are actually trying to solve. So, okay, I mean, safety specifications are among the most often used specifications in practice. Here
are some examples of the safety specifications which I am handling here for the
sake of synthesizing a controller C. For example, in the context of adaptive cruise
control, what is the safety specification of interest? Always the distance of the
car with the car in front should be positive. Of course, rather than zero, we put a D_safe, a safe distance, right? Or buffer overflow: always, the memory has to be greater than or equal to 0 and less than or equal to 50. If you recall, in our first course I talked about this baking line. In that case, what was the safety property? Always, box in oven implies the temperature has to be between 26 and 28. Or even mutual exclusion: always not (green one and green two), right? If you look at a junction with two traffic lights, both of them cannot be green at the same time, right? These are some examples of safety specifications showing
why safety specifications are very important in real world applications. Okay, now
the question is, how do we solve the realizability problem, and how do we come up
with a controller C for a given system, okay? This is the solution to the safety synthesis problem. First, we need to define the set Z sub psi. What is Z sub psi? It is the set of all input state pairs such that when you apply the labeling map to them, L(u, x) satisfies the propositional formula psi. Look, the property of interest was always psi: varphi was equal to always psi, and psi is a propositional logic formula. So Z sub psi is simply the set of input state pairs such that when you apply the labeling map to them, they satisfy the propositional formula psi. Now remember, in the fixed point iteration, an operator map G was central to the fixed point algorithm. So now, in the context of the safety synthesis problem, I'm telling you how this monotone operator is defined concretely. In this case, the operator G is a map from the power set of U x X to the power set of U x X, and it is defined as follows: G(Z) is simply pre(Z) intersection with Z sub psi. Since pre is monotone, this intersection is also monotone, so this map G is guaranteed to be monotone. So this is how the monotone map in the fixed point algorithm of our safety synthesis problem is defined: pre(Z) intersection with Z sub psi. So now I can state this nice theorem about realizability for safety. Varphi, which is equal to always psi, is realizable on a finite simple system S under labeling map L if and only if the set of initial states is a subset of the projection of the maximal fixed point onto the state set. What is this maximal fixed point? It is the maximal fixed point that comes from this monotone operator: use this monotone operator to iterate the fixed point algorithm. If S is finite, we are guaranteed that within a finite number of iterations we're going to get the maximal fixed point. The maximal fixed point is a set defined over the product of the input set and the state set, so now let's project it onto the state set. If X0, the set of initial states, is a subset of that projection, then we say varphi is realizable on the system. Okay, that
answers the realizability problem. Now the other question is: how do we design the controller C which is feedback composable with the system S, such that their feedback composition satisfies this property of interest? That brings me to this corollary, controller synthesis for safety. Suppose that the set of initial states is in fact a subset of the projection of the maximal fixed point onto the state set. Now let C be this static system. Remember from course number one: it's static because its state set is a singleton, right? It is a static system with a strict transition function and an output map Hc(q, x), which is defined as this. So you see, all the ingredients in this system C are defined, right? C is just a self loop, because its state set is a singleton: you start from q, and you have a self loop over q, right? And you see, the input of C is the state set of the system, and the output set of C is the input set of the system. So the only thing we have to define is the output map Hc. Because C is static, Hc(q, x) is given in terms of a map Hc prime of x, for a given x. So we only need to define this Hc prime of x. Okay, let's see how this Hc prime is defined. First of all, Hc prime is a set valued map: for a given x, it gives you a subset of the input set, right? So Hc prime is a set valued map; for a given state value, it gives you a subset of U, and it satisfies this condition: if x belongs to the projection of the maximal fixed point onto the state set, then what should u be? Hc prime of x is the set of all those inputs u for which the pair (u, x) belongs to that maximal fixed point. Otherwise, it can be any input. And the C that comes from this formulation is guaranteed to be feedback composable with system S, and their feedback composition satisfies the property of interest, always psi.
Play video starting at :10:51 and follow transcript10:51
So let's repeat this corollary again. Suppose that the set of initial states is a subset of the projection of the maximal fixed point onto the state set, meaning that always psi is realizable. Then the system C is simply a static system, right? Since it's static, all the ingredients are already defined; the only thing I have to tell you is how this Hc prime of x is defined. Given our maximal fixed point, Hc prime of x is defined as follows: if x belongs to the projection of the maximal fixed point onto the state set, then Hc prime of x is the set of all those inputs u for which the pair (u, x) belongs to the maximal fixed point. Remember, the maximal fixed point is a subset of U x X. But if x does not belong to this projection, then we don't care; the input can be anything, right? That's the reason I put the whole input set. And then this corollary guarantees that this C is feedback composable with the system S, and their feedback composition satisfies varphi, which is always psi.
0:04
Let's look at an example of a system and safety synthesis using the maximal fixed point iteration. Consider we are given this simple system: the state set is the set containing 1, 2, 3, the initial state set contains only state 1, and the input set is the set containing a and b. In this case, we are given the set of atomic propositions, which is the set containing 1, 2, 3, a and b: the union of the state set and the input set. What is the labeling map? The definition of the labeling map in this example, which is given to us, is that L(u, x) is simply the set containing u and x. What is the safety property of interest in this case study? Varphi is equal to always (input a or state 2). This is the safety specification for which we are asked to answer the realizability question and, if the answer is yes, design a controller C. What is the property of interest? Always, the input is a or the state is 2. Let's start the procedure. First, we need to construct the set Z sub psi. Remember, Z sub psi is the set of all input state pairs such that when you apply the labeling map to them, they satisfy the propositional formula psi. Great. In this case, what is the propositional formula psi? It is: input a or state 2. Then Z sub psi is: input a with any state, or state 2 with any input. Great. Now let's apply the maximal fixed point iteration.
Play video starting at :2:23 and follow transcript2:23
This is the operator we're going to use: pre(Z) intersection with Z sub psi. This is a maximal fixed point iteration, meaning it's a non-growing iteration. At iteration zero, we start from everything: Z_0 is everything, U x X, and then at each iteration we try to remove input state pairs until we get a fixed point.
Play video starting at :2:53 and follow transcript2:53
What is Z_1? Remember, Z_1 is pre(Z_0) intersection with Z sub psi. Great. What is pre(Z_0)? Remember, Z_0 is everything, U x X. First, let's project U x X onto the state set. The projection is going to be the whole state set X, that is, 1, 2, 3. Now let's see from which state under which input we can go to any state in 1, 2, 3. (b,2), because from 2 under b I can go to 2. From 3 under a I can go to 2, so (a,3) is also included. From 1 under b I can go to 3, so (b,1) is also included. If I'm in 1 under a, I can go to 1 or 2, so (a,1) is also included. These are all possible input state pairs such that the successors are going to be inside the state set X. Now take the intersection of this with Z sub psi. If you take the intersection, it's going to be (a,1), (a,3) and (b,2). This is going to be Z_1. What is Z_2? Z_2 is pre(Z_1) intersection with Z sub psi. Now I need to compute the pre of Z_1, the set we constructed in the previous iteration. Let's compute pre(Z_1). This is Z_1; let's project it onto the state set. I get 1, 2, 3. Great. It's going to be the same as in the previous step, because in the previous step the projection onto the state set was also 1, 2, 3, so if I compute pre(Z_1), it's going to be the same as pre(Z_0). When you take the intersection with Z sub psi, it's going to be the same as Z_1, so Z_2 is going to be equal to Z_1, and we got a fixed point. Z_2 is equal to Z_1, so our maximal fixed point is equal to Z_1. Now let's see how the controller looks in this case. Look: if you're in state 1, you have to apply input a. If you're in state 3, you have to apply input a. And if you're in state 2, you have to apply input b. That's how Hc prime is defined: Hc prime of 1 is equal to a, Hc prime of 2 is equal to b, and Hc prime of 3 is equal to a. That's how we solve the safety synthesis problem by applying the maximal fixed point iteration.
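The computation above can be reproduced in a short Python sketch (the dictionary encoding of the system is my own assumption; the iteration and the controller extraction follow the recipe from the lecture):

```python
# Example system (encoding assumed): F[(state, input)] = set of successor states.
F = {(1, 'a'): {1, 2}, (1, 'b'): {3}, (2, 'b'): {2}, (3, 'a'): {2}}
X, U = {1, 2, 3}, {'a', 'b'}

def pre(Z):
    """Non-blocking (input, state) pairs whose successors all lie in proj_X(Z)."""
    proj = {x for (_, x) in Z}
    return {(u, x) for (x, u), succ in F.items() if succ and succ <= proj}

# Z_psi for psi = "input a or state 2"
Z_psi = {(u, x) for u in U for x in X if u == 'a' or x == 2}

# Non-growing iteration of G(Z) = pre(Z) & Z_psi, starting from all of U x X.
Z = {(u, x) for u in U for x in X}
while True:
    Z_next = pre(Z) & Z_psi
    if Z_next == Z:
        break
    Z = Z_next

winning = {x for (_, x) in Z}
realizable = {1} <= winning                      # initial state set X0 = {1}
Hc = {x: {u for (u, s) in Z if s == x} for x in winning}
```

Running this gives the same maximal fixed point {(a,1), (a,3), (b,2)} and the same controller: input a in states 1 and 3, input b in state 2.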
0:05
So in the previous lectures, I talked about safety specification synthesis, right? The property of interest was always psi, in which psi was a propositional formula over the set of atomic propositions. We were able to come up with a monotone function for the safety synthesis, which was simply pre(Z) intersection with Z sub psi. And then we showed that the maximal fixed point that comes from this monotone function can in fact answer the realizability problem for safety. And if the answer to realizability for safety was yes, then we could actually design the controller C, which was feedback composable with the system, such that their feedback composition would enforce the safety property. And then we looked at this example. By the way, if you are interested in understanding the proof of the theorem on realizability for safety, and also the corollary, I refer you to this detailed proof. So now let's
look into reachability specifications. What is the problem statement in this case? Again, we are given a finite simple system S. We are given a set of atomic propositions AP and a labeling function L. We are given a reachability formula: varphi is equal to eventually psi, also written finally psi, where psi is a propositional formula over the set of atomic propositions AP. We are interested in solving this realizability problem: is this reachability formula realizable on system S under the labeling map L? And if it is realizable, provide a system C which is feedback composable with S, such that their feedback composition will in fact satisfy this reachability formula. Reachability is also one of the important properties for designing autonomous systems. For example, in the context of cruise control, you can say: eventually reach the desired velocity, right? Or, for example, for program termination, we can say: eventually reach a final state. If you recall the baking line example I described in the first course, there we could have: eventually the box should be inside the oven. Or in the context of robot task planning, for example, the robot should eventually reach station number one, or should eventually reach the charging station, or eventually reach a target region.
Play video starting at :3:25 and follow transcript3:25
So reachability is also an important property of interest when we design autonomous systems. Now let's see how we can actually solve the reachability synthesis problem. Again, similar to the safety synthesis problem, we need to define the set Z sub psi, and it's exactly the same as before: Z sub psi is the set of all input state pairs such that when you apply the labeling map, the label of (u, x) in fact satisfies the propositional formula psi, same as before. So now let's see how the monotone function G is defined in order to apply the fixed point algorithm. In this case, the monotone function G, look at its domain, is the power set of U x X, and its codomain is also the power set of U x X. Let's see how it is defined: G(Z) is pre(Z) union Z sub psi. Look at the difference: in the safety synthesis problem, the monotone operator G was pre(Z) intersection Z sub psi, but here G(Z) is pre(Z) union Z sub psi.
Play video starting at :5: and follow transcript5:00
So now we have all the ingredients to introduce this nice theorem on realizability
for reachability.
Play video starting at :5:9 and follow transcript5:09
There exists a finite system C which is feedback composable with S, such that their feedback composition satisfies the reachability specification under the labeling map L, if and only if the initial state set is a subset of the minimal fixed point of this map projected onto the state set.
Play video starting at :5:40 and follow transcript5:40
So let's repeat again the theorem about realizability for reachability. There exists a finite system C which is feedback composable with our original system S, such that their feedback composition in fact satisfies this reachability formula under the labeling map L, if and only if the initial state set is a subset of the projection of the minimal fixed point onto the state set.
Play video starting at :6:26 and follow transcript6:26
So now the question is, okay, how do we construct the controller C? Let's say we apply the fixed point iteration and we get the minimal fixed point. How can we actually construct C using this minimal fixed point? Here, reachability is going to be slightly trickier than safety. In order to construct C, we need to define the following function j. Look: its domain is the state set, its codomain is the natural numbers union infinity, and j(x) is given by the infimum over the integers i such that x is inside the i-th set that comes from the fixed point iteration, projected onto X. Keep in mind, we are taking the infimum, meaning the lowest index i such that x appears inside the projection of the i-th set onto the state set.
Play video starting at :7:47 and follow transcript7:47
What if, for a given x, x does not appear in any of these sets? Then j(x) is going to be infinity.
Play video starting at :7:55 and follow transcript7:55
Otherwise, j(x) is going to be the smallest index of the set generated by the fixed point iteration such that, when you project that set onto the state set, x is inside it. I will go through an example, and then it will become clearer how this j(x) is defined and used. This corollary now tells us how to use this j(x) in order to construct the controller C.
Play video starting at :8:39 and follow transcript8:39
So suppose that the set of initial states is a subset of the projection of the minimal fixed point onto the state set. Let C be a static system with a strict transition function. Remember, same as for safety: when we say it's static, all the ingredients are already known except the output map. In this case, the output map is defined as this. You see, the only unknown ingredient in the output map of C is this Hc prime of x, which is a set valued map: its domain is the state set, and its codomain is the power set of the input set. That's the reason I say it's a set valued map. Let's see how this Hc prime is defined. Hc prime of x is defined as follows: if j(x), which is defined here, is less than infinity, then Hc prime of x is the set of all those inputs u for which (u, x) belongs to the particular set in the fixed point iteration at the iteration given by j(x). Otherwise, it can be any input; that's the reason I put the whole input set. The C that comes from this construction is in fact feedback composable with the system S, and their feedback composition in fact satisfies the reachability formula varphi.
Play video starting at :10:25 and follow transcript10:25
So again, look how Hc prime in the definition of C is defined. Hc prime is a set valued map, and it is given by: if j(x) is not infinity, Hc prime of x is the set of all those inputs u for which the pair (u, x) belongs to the corresponding set in the fixed point iteration at iteration j(x). Otherwise, it can be any input.
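As a small generic sketch (the list-of-iterates encoding is my own assumption, not from the lecture): given the iterates Z_0, Z_1, ... of the minimal fixed point computation, j(x) is the smallest index whose projection contains x, and Hc prime of x collects the inputs paired with x in that iterate.

```python
import math

# Sketch (encoding assumed): iterates is the list [Z_0, Z_1, ...] produced by
# the minimal fixed point iteration; each Z_i is a set of (input, state) pairs.
def make_controller(iterates, X, U):
    Hc = {}
    for x in X:
        # j(x): smallest i such that x is in the projection of Z_i onto the states
        j = next((i for i, Zi in enumerate(iterates)
                  if any(s == x for (_, s) in Zi)), math.inf)
        # inputs paired with x in Z_{j(x)}; any input if j(x) is infinity
        Hc[x] = {u for (u, s) in iterates[j] if s == x} if j < math.inf else set(U)
    return Hc
```

For instance, with iterates [{}, {('a', 2)}] over states {1, 2}, state 2 gets j = 1 and input a, while state 1 never appears and may receive any input.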
0:05
So let's look at an example of a reachability synthesis problem. We are given a finite simple system. The state set is 1, 2, 3. What is the initial state set? It contains only state number 1. What are the inputs? a and b. And in this example, we are given the set of atomic propositions as the union of the set of states and the set of inputs, meaning the set of propositions is 1, 2, 3, a and b. What is the labeling map? The label of the pair (u, x) is simply the set containing u and x, very simple. And what is the reachability property of interest? Varphi is equal to eventually state 2, so that's what we are trying to see if it's realizable: eventually state 2. Okay, before we go through all of this, you know, the minimal fixed point and defining the monotone map, let's look at the system. Let's see if eventually 2 is even possible. Is it realizable? The answer is yes. Why is that? When you start from state 1, which action should we apply? In state 1, you cannot apply a. Why is that? Because if you apply a, you have non-determinism: you might go to 2, but you might also get stuck in state number 1, right? A self loop. But what if you apply b? If you apply b from state 1, you go to state 3, and if you're in state 3, you apply a and you reach state 2. So you can see this reachability property is readily realizable for this system, right? You can check with your eyes that yes, if you're in state 1, you apply action b, you go to state 3; if you're in state 3, you apply action a, you go to state 2, and then you are done, right? You reached state 2. But now we want to verify this by applying the minimal fixed point iteration and see if we get the same result. Okay, let's first construct Z sub psi. Z sub psi is equal to (a,2), (b,2), right? The state has to be 2, but the input can be a or b, right? Okay, so now let's apply the fixed point iteration. This is a non-shrinking iteration, meaning that at iteration zero we start from the empty set, and at each iteration we keep adding to it until we reach a fixed point, meaning no element can be added anymore, right? And that's when we terminate. Okay, so Z_0 is empty. Remember, this is our fixed point operator, the monotone function pre(Z) union Z sub psi. Okay, what is the pre of Z_0? I mean, Z_0 is empty, right? Pre of empty is empty, correct, very good. So then Z_1 is equal to pre of empty, which is empty, union Z sub psi. So Z_1 is going to be equal to Z sub psi, which is just (a,2), (b,2).
Play video starting at :3:42 and follow transcript3:42
So now let's compute pre of Z_1. In order to compute pre of Z_1, let's project Z_1 onto the state set. We only have one state, which is state 2. Okay, let's look at state 2: from which state, under which input, can we reach state 2? If you're in state 2 under b, you can reach state 2. If you're in state 3 under a, you can reach state 2. If you're in state 1, unfortunately, if you apply a, you might go to state 2, or you might also stay in state 1, so that's the reason we cannot include (a,1), due to the non-determinism, right? Because the set of successors has to be a subset: if you are in state 1 and apply a, the set of successors is {1,2}, which is not a subset of {2}, so we cannot include (a,1). So then pre of Z_1 is equal to (b,2) and (a,3).
Play video starting at :4:55 and follow transcript4:55
So then Z_2 is equal to pre(Z_1) union Z sub psi, which is going to be (a,2), (b,2), (a,3). Great. Now we compute pre of Z_2. Let's take the projection of Z_2 onto the state set: which states do we get? 2 and 3. Let's see from which state under which input we can reach state 2 or state 3. For state 2, we already know: (b,2) and (a,3). For state 3: from state 1 under b, you can reach state 3, so we add (b,1). Then pre of Z_2 is equal to (b,2), (a,3) and (b,1). Now let's take the union of this set and Z sub psi: we get (a,2), (b,2), (a,3) and (b,1). That's our Z_3. So now let's do the projection: let's project Z_3 onto the state set. Which states do we get? 1, 2 and 3. Now let's compute the pre of Z_3: from which state under which input can we go to 1, 2 or 3? Okay, you know: if you are in 2 under b, if you are in 3 under a, if you are in 1 under b, and if you are in 1 under a. So then pre of Z_3 is equal to (b,2), (a,3), (a,1) and (b,1).
Play video starting at :6:29 and follow transcript6:29
Let's take the union of this with Z sub psi; then we get (a,2), (b,2), (a,3), (b,1) and (a,1), and this is Z_4. So let's compute pre of Z_4. Let's take the projection of Z_4 onto the state set: we get 1, 2 and 3. Okay, from which state under which input can we go to state 1, 2 or 3? Pre of Z_4 is going to be exactly equal to pre of Z_3: it's going to be (b,2), (a,3), (b,1) and (a,1). So again, if you take the union of this with Z sub psi, you're going to get exactly the same thing as Z_4. So you see, we got a fixed point: Z_5 is equal to Z_4, hence Z_4 is our minimal fixed point. Great.
Play video starting at :7:25 and follow transcript7:25
So is the reachability formula realizable? Let's see. Let's project the minimal fixed point onto the state set: which states do we get? 1, 2 and 3. Is the initial state one of them? The answer is yes, hence this reachability specification is realizable over this system. So now we need to construct the controller C, right? The system C which is feedback composable with S, such that their feedback composition satisfies the reachability property. To do so, we need to construct this index function, right? j(x). Remember, j(x) was the lowest index i such that x is inside the projection of the i-th set that comes from the fixed point iteration. Okay, let's do the projections. Remember, Z_0 was empty, so its projection onto the state set is going to be empty. Z_1: let's project it onto the state set; we get state 2. Z_2: if we project it onto the state set, we get 2 and 3. Z_3: when we project it onto the state set, we get 1, 2 and 3. And Z_4: when we project it, we also get 1, 2 and 3. So here are the projections, right? Now, what is j of state 2? Okay, in which set in the fixed point iteration does state 2 appear the soonest? Let's look: does state 2 appear in Z_0? No, because Z_0 is empty. Does it appear in the projection of Z_1 onto the state set? Yes. Look, if you project Z_1 onto the state set, you get state 2. So yes, state 2 appears here, so j of 2 is actually the index 1. What about j of state 3? Look, state 3 appears for the first time inside Z_2, so j of 3 is this index, which is 2. What about state 1? State 1 appears for the first time in Z_3, so j of 1 is equal to 3. See: j of 1 is 3, j of 3 is 2, j of 2 is 1, great. Now we can define the map Hc prime of x, which is the main ingredient in order to define the system C, which is feedback composable with S. Okay, Hc prime of x, this is the definition, right? Okay, so what is Hc prime of 2? Hc prime of 2 is a or b; it's going to be the whole input set, great. What about Hc prime of 3? It's going to be a. What about Hc prime of 1? It's going to be b. So Hc prime of 1 is b, Hc prime of 3 is a, Hc prime of 2 is a or b. Look again at Hc prime of 2: you have to go inside Z_1 and see which inputs appear next to 2. In Z_1, both a and b appear, so that means when you are in state 2, you can apply a or b.
Play video starting at :11:21 and follow transcript11:21
So what about Hc prime of three? Look, j of three was two, so you go inside Z two, and then you check what input is affiliated with three. The input is a, so Hc prime of three is a. We do the same thing for state one. Remember, j of one was equal to three, so we go and check the set Z three. The input affiliated with state one is b, hence Hc prime of one is equal to b. And that is exactly compatible with what we came up with readily by looking at the state diagram of the system. Remember, I said when you're in state one, you have to apply b. Look, exactly: the input affiliated with one in Z three is b. When you're in state three, you have to apply a; remember, the input affiliated with state three in Z two was a. And when you're in state two, you don't need to worry about anything, because the reachability is already satisfied. Another question is why Hc prime of one is not a comma b. That is due to the non-determinism: if you apply a in state one, you might get trapped in state one and never progress to reach state two.
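The index function and controller output map just walked through can be replayed in a few lines. This is a minimal sketch of my own, not code from the course; the pair sets are the ones read out in the walkthrough (Z_0 is empty and Z_4 repeats Z_3), so treat their exact contents as an assumption:

```python
# Fixed-point iterates as I read them off the walkthrough (an assumption):
iterates = [
    {('a', 2), ('b', 2)},                      # Z_1: state 2 paired with both inputs
    {('a', 2), ('b', 2), ('a', 3)},            # Z_2: adds state 3 with input a
    {('a', 2), ('b', 2), ('a', 3), ('b', 1)},  # Z_3: adds state 1 with input b
]

def index_j(x):
    """Lowest index i such that x is in the projection of Z_i over the states."""
    for i, Z in enumerate(iterates, start=1):
        if any(s == x for (_, s) in Z):
            return i
    return float('inf')

def Hc_prime(x):
    """Inputs paired with x in Z_{j(x)}: the inputs that make progress from x."""
    return {u for (u, s) in iterates[index_j(x) - 1] if s == x}

print({x: index_j(x) for x in (1, 2, 3)})  # {1: 3, 2: 1, 3: 2}
print(Hc_prime(1), Hc_prime(2), Hc_prime(3))
```

Scanning the iterates in order automatically finds the lowest index, which is exactly the definition of j(x) in the lecture.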
Play video starting at :12:58 and follow transcript12:58
And that's what the controller looks like.
Play video starting at :13:3 and follow transcript13:03
Everything is known except HC, and HC is defined as this; I defined HC prime of x in the previous slide. So you have all the ingredients, and the composed system is given as this, and the composed system in fact does satisfy the reachability property, eventually two. This is the proof of the theorem on realizability for reachability, in case you are interested in understanding the proof, and also the proof for the construction of the controller using that index function j of x, in case you are interested.
0:05
In the previous lectures I talked about synthesis algorithms for safety and reachability specifications. The specification of interest which we're dealing with today is called a persistence property. And I'm going to explain how we can come up with a fixed point algorithm for synthesizing the artifact, or the system C, which is feedback composable with our original system, so that their feedback composition satisfies this so-called persistence property. So what is the problem statement? We are given a finite simple system S, we are given a set of atomic propositions AP, and a labeling map L. A persistence formula varphi is, you see, a combination of a reachability and a safety property: eventually always varPsi, where varPsi is a propositional formula over the given set of atomic propositions AP. We are interested in solving this realizability problem: is this persistence property varphi, eventually always varPsi, realizable on S under the labeling function L? And if it is realizable on S, provide a system C which is feedback composable with our original system S, so that their feedback composition satisfies this persistence formula varphi. Note that eventually always varPsi is often also referred to as a co-Buchi objective.
Play video starting at :1:57 and follow transcript1:57
So, the persistence specification has a lot of applications. For example, in control theory, it's a property which is very close to stability. It says eventually always a small neighborhood of the origin. What does that mean? It means the trajectory of the system should converge to a small neighborhood of the origin and stay there forever. Here, eventually always P, where P is equal to 1 if and only if the norm of the state value is less than or equal to epsilon. So in this case, this property says the trajectory should eventually converge to a ball around the equilibrium point, where the radius of the ball is epsilon, and stay there forever afterwards. For example, in the context of cruise control, the car should reach a desired velocity if there is no car detected in front, right? That can actually be written as a persistence property: eventually always desired velocity. Or in the context of robot task planning, the robot, for example, should eventually go to station one and stay there forever, right? We can represent that using this persistence property: eventually always station one, okay? So now let's see how we can formulate the fixed point operator to solve this persistence specification. So,
similar to the reachability and safety specifications, we define the set Z sub varPsi, which is again the set of all input state pairs for which, when you apply the labeling function to the pair, the result satisfies our propositional formula varPsi. Now we have this theorem regarding realizability for the persistence property. There exists a finite system C, such that the feedback composition of C and S satisfies our persistence property varphi under the labeling map L, if and only if the set of initial states of the system is a subset of the projection of this fixed point Z infinity over the state set, in which this fixed point is actually the fixed point of this nested minimal and maximal fixed point. So, this equation might look very complex. But what is going on? Because here we have two nested temporal operators, we also require a nested fixed point operator. So in fact we have two fixed points intertwined. We have an outer fixed point, which is a minimal fixed point algorithm, and we have an inner fixed point, which is a maximal fixed point operator. You can look at it as two for loops. The outer for loop implements the minimal fixed point, and the inner for loop implements the maximal fixed point. So the outer for loop is a non-shrinking operator, like growing, but the inner for loop is a non-growing fixed point operator, right? So in the outer for loop, or the outer fixed point, we start at iteration zero from the empty set, and then we start adding, right? And in order to add at each iteration, you need to actually solve the inner maximal fixed point iteration, because here we have two temporal operators.
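The two nested loops just described can be sketched in code. This is a minimal sketch of my own (not course material), where `pre` stands for the controllable-predecessor map over input-state pairs and `Z_psi` for the set of pairs satisfying varPsi:

```python
def persistence_fixed_point(pre, Z_psi, all_pairs):
    """Compute mu Z. nu Y. (pre(Y) & Z_psi) | pre(Z).

    Outer loop: minimal fixed point, starts empty and grows.
    Inner loop: maximal fixed point, starts from all pairs and shrinks.
    """
    Z = set()                          # outer iterate Z_0 = empty set
    while True:
        Y = set(all_pairs)             # inner iterate Y_0 = everything
        while True:
            Y_next = (pre(Y) & Z_psi) | pre(Z)
            if Y_next == Y:            # inner (maximal) fixed point reached
                break
            Y = Y_next
        if Y == Z:                     # outer (minimal) fixed point reached
            return Z
        Z = Y
```

On a finite system both loops terminate, since the inner iterates can only shrink and the outer iterates can only grow.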
Play video starting at :6:23 and follow transcript6:23
But the good news is this theorem actually provides exactly the fixed point that is going to solve this persistence property. In case you are interested in coming up with the nested fixed point for more sophisticated properties, I refer you to the mu calculus. There you can actually find more sophisticated nestings of fixed points in order to solve more sophisticated, let's say, linear temporal logic formulas. But as I said, for the sake of this course, we are limiting ourselves to very specific LTL formulas, right? So, okay, great, we know that this fixed point will actually solve this persistence property. So now the question is how C is constructed. Again, similar to the reachability case, you need to construct those sets. As you solve this fixed point, you can actually construct the sets Z_i; they come with the construction. These are the outer sets, right? They're constructed by the outer fixed point. So you need to construct these sets Z_i. Then, again similar to reachability, we need to define this index map j. Given any state, it provides a number, a non-negative integer which can also be infinity, and it is defined as this: j of x is the lowest index i for which x belongs to the projection of those Z_i, which are defined here, over the state set. So now, with the definition of this indexing map, we can define what the controller looks like in this case. Suppose that the persistence property is realizable over the system, right? So suppose that X0 is a subset of the projection of Z infinity over the state set. The system C, or the controller C, is a static system with this strict transition function and a strict output map of this form.
So now my job is only to define how Hc prime of x is defined. Hc prime of x is a set-valued map; its domain is the state set, its codomain is the input set. And it is defined as this: Hc prime of x is the set of all inputs u for which the pair u comma x belongs to Z sub j(x), as long as j(x) is not infinity. If j(x) is infinity, then it's going to be the set of all inputs. So I repeat: the set-valued map Hc prime is defined as this, Hc prime of x is the set of all inputs u for which the pair u comma x belongs to Z sub j(x), as long as j(x) is not infinity. If j(x) is infinity, then Hc prime of x is equal to the set of all inputs. So C, defined based on this corollary, is feedback composable with S, and their feedback composition satisfies this persistence formula. If you are interested in the proof of the realizability result for persistence, this is the proof; you can look at the details if you are interested in how the proof works. And here is also the proof of the corollary for the construction of the controller: why the controller defined here is in fact feedback composable, and why the feedback composition satisfies the persistence specification.
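The corollary's Hc prime can be read straight off the iterates. A minimal sketch of my own (the list `iterates` holding Z_1, Z_2, ... and the example pair sets at the bottom are hypothetical, for illustration only):

```python
def Hc_prime(x, iterates, all_inputs):
    """Inputs paired with x in Z_{j(x)}; all inputs when j(x) is infinity."""
    for Z in iterates:                    # scanning in order finds the lowest index j(x)
        if any(s == x for (_, s) in Z):
            return {u for (u, s) in Z if s == x}
    return set(all_inputs)                # x never appears: j(x) = infinity

# Hypothetical iterates, for illustration:
iterates = [{('a', 4)}, {('a', 4), ('b', 5)}]
print(Hc_prime(4, iterates, {'a', 'b'}))  # {'a'}
print(Hc_prime(3, iterates, {'a', 'b'}))  # all inputs, since j(3) is infinity
```

The fallback to the full input set mirrors the "if j(x) is infinity" branch of the corollary.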
0:03
In the previous lecture, I introduced the fixed point operator for the persistence property. If you recall, I said that operator was actually a nesting of two fixed point operators: the outer one was a minimal fixed point iteration, and the inner one was a maximal fixed point iteration. Now let's apply that fixed point operator to this simple case study and see if we can answer the realizability problem. We are given a simple system containing five states: one, two, three, four, five. State 1 is the initial state. We have two actions, a and b. The atomic propositions are the union of the set of states and the set of inputs: 1, 2, 3, 4, 5, a, b, and our labeling map L of (u, x) is simply the set containing u and x. What is the specification of interest? Eventually always state two or four. Let's look at this simple system. It's a very simple system, so let's see if we can readily verify whether this property is realizable. The answer is yes. Why is that? Because when we start from state 1, which action should I use, b or a? I cannot use b, because due to the non-determinism, I might get stuck in state 1 and never even reach state 2 or 4. So I can only apply action a. If I apply action a at state 1, where do I go? I go to state number 2. Now, if I am at state number 2, I cannot apply a, because I would go to state 3, which is blocking. That means I could not eventually reach two or four and stay there forever. So I should apply only action b. If I apply action b, I have non-determinism: I might get stuck in state 2, which is a good thing, I might go to state number 4, or I might go to state number 5. If I go to state number 4, which action should I apply in state number 4? Action a. I stay in four forever, and that's a good thing. If I go to state number 5, which action should I apply at state number 5? Action b, because that brings me to state 4, and then in four I can apply a and stay there forever afterwards. So we can readily verify that this persistence property is in fact realizable. Let's see if we get the same answer by applying the fixed point which I introduced in the previous lecture, and verify that it is realizable on S under the labeling map L. Let me bring up a blank page here.
Play video starting at :3:5 and follow transcript3:05
We need to compute Z infinity,
Play video starting at :3:10 and follow transcript3:10
which is equal to mu Z, that's the outer fixed point, and nu Y, this is the inner fixed point. The outer one is minimal, the inner one is maximal.
Play video starting at :3:34 and follow transcript3:34
I'm simply using the formula, which I introduced in the previous lecture.
Play video starting at :3:44 and follow transcript3:44
First of all, what is Z_varPsi? It is simply the set of input state pairs where the input doesn't matter, but the state has to be two or four. That's the persistence target: two or four.
Play video starting at :4:7 and follow transcript4:07
Z zero is empty, because the outer fixed point is a growing iteration: we start from the empty set and grow it. What is Z_1? For Z_1, we need to solve a maximal fixed point. That's the inner loop.
Play video starting at :4:38 and follow transcript4:38
Remember, the inner fixed point is a maximal fixed point; it's a shrinking iteration. That means at iteration zero of the inner iteration, we start from everything. Then let's iterate the maximal fixed point, the inner iteration.
Play video starting at :5:8 and follow transcript5:08
If I do that, this is the set of input state pair I get.
Play video starting at :5:16 and follow transcript5:16
Now let's compute the
Play video starting at :5:28 and follow transcript5:28
which is equal to the following. By the way, these pre sets are very easy to compute by looking at the state diagram of the system, and you can readily verify them yourself as well.
Play video starting at :5:47 and follow transcript5:47
I mainly try to spend more time on telling you how these nested fixed points work together.
Play video starting at :5:57 and follow transcript5:57
{(b,2), (a,4), (b,4), (a,5), (b,5)}.
Play video starting at :6:20 and follow transcript6:20
Now we need to compute Z_2^Nu, which is equal to pre of Z_1^Nu intersection Z_varPsi,
Play video starting at :6:41 and follow transcript6:41
which gives me only (a,4).
Play video starting at :6:46 and follow transcript6:46
See, it's a shrinking iteration.
Play video starting at :6:52 and follow transcript6:52
What is the pre of Z_1^Nu? This is the pre,
Play video starting at :7:11 and follow transcript7:11
(a,1), (a,4) and (b,5).
Play video starting at :7:17 and follow transcript7:17
Here, look: this is Z_1^Nu projected, right? If you project it over the state set, the states are two or four; then see what is the pre of that. From which state, under which action, can I go to two or four? So (a,1), (a,4) and (b,5), exactly.
Play video starting at :7:57 and follow transcript7:57
I need another page. Let's continue with Z_3^Nu,
Play video starting at :8:15 and follow transcript8:15
which is equal to pre of (a,4) intersection with Z_varPsi, and that gives me (a,4) as well. It seems we are getting a fixed point from the inner loop. Very good. Now I have Z_1; that's our fixed point, and Z_1 is (a,4). Now we compute Z_2. You see? We did all these things to update one iteration of the outer fixed point. Now we go back again to the inner fixed point. What is Z_2 equal to?
Play video starting at :9:14 and follow transcript9:14
Again, I need to solve a maximal fixed point to compute Z_2.
Play video starting at :9:25 and follow transcript9:25
It is pre of Y intersection Z_varPsi, union pre of Z_1. Z_0 was empty, and Z_1 is the pair (a,4). Now let's solve this fixed point. Again, what is Z_0^Nu? Again, it's everything, U times X. Remember, it's a shrinking iteration. What about Z_1^Nu? Z_1^Nu is pre of Z_0^Nu intersection Z_varPsi, union pre of Z_1.
Play video starting at :10:23 and follow transcript10:23
You can readily verify that this is equal to {(a,2), (b,2), (a,4), (b,4), (b,5)}.
Play video starting at :10:49 and follow transcript10:49
Very good. Now let's compute the pre of Z_1.
Play video starting at :11: and follow transcript11:00
In fact, we use that to compute Z_2^Nu. It's {(a,4), (b,5)}.
Play video starting at :11:15 and follow transcript11:15
What about Z_2^Nu?
Play video starting at :11:25 and follow transcript11:25
It is {(b,2), (a,4), (b,4)}
Play video starting at :11:44 and follow transcript11:44
union {(b,5)}.
Play video starting at :12:1 and follow transcript12:01
What is the pre of Z_1^Nu? It is {(a,1),
Play video starting at :12:29 and follow transcript12:29
(b,2), (a,4), (b,4), (b,5)}.
Play video starting at :12:47 and follow transcript12:47
Now I see Z_3^Nu is equal to Z_2^Nu. We got a fixed point.
Play video starting at :13:6 and follow transcript13:06
The pre of Z_2^Nu
Play video starting at :13:14 and follow transcript13:14
is equal to the pre of Z_1^Nu.
Play video starting at :13:23 and follow transcript13:23
That's why we get a fixed point. Now I can update Z_2 in the outer fixed point, which is equal to {(b,2), (a,4), (b,4),
Play video starting at :14:2 and follow transcript14:02
(b,5)}. Now we can continue with Z_3.
Play video starting at :14:13 and follow transcript14:13
Now we try to iterate the outer iteration for a third time, which again requires solving another fixed point in the inner loop.
Play video starting at :14:29 and follow transcript14:29
Let's add another page.
Play video starting at :14:42 and follow transcript14:42
What is the pre of Z_2 which is needed to compute Z_3?
Play video starting at :14:49 and follow transcript14:49
You already have Z_2. Project it over the state set: {2, 4, 5}. From which state, under which input, can I reach 2 or 4 or 5? That's the set of input state pairs:
{(a,1), (b,2), (a,4), (b,4), (b,5)}.
Play video starting at :15:33 and follow transcript15:33
Again, we continue with another maximal fixed point.
Play video starting at :15:58 and follow transcript15:58
What I'm going to do is, since I have already done the calculation, I'm not going to repeat everything; let me just record the result. I have already solved, let's say, the inner fixed point, and the fixed point I get is equal to
Play video starting at :16:34 and follow transcript16:34
{(b,2),
Play video starting at :16:52 and follow transcript16:52
(a,4), (b,4)} union pre(Z_2).
Play video starting at :17:4 and follow transcript17:04
Great. Now I can go and update Z_3. Z_3 is equal to, you see, I did all this to be able to update Z_3 in the outer iteration. You have two for loops, the outer one and the inner one. This is {(a,1),
Play video starting at :17:34 and follow transcript17:34
(b,2), (a,4), (b,4), (b,5)}.
Play video starting at :17:50 and follow transcript17:50
Now, what was Z_4?
Play video starting at :17:56 and follow transcript17:56
Z_4 is equal to, so now I need to solve another maximal fixed point to update it.
Play video starting at :18:18 and follow transcript18:18
Union pre(Z_3).
Play video starting at :18:25 and follow transcript18:25
What is the pre(Z_3)? You already have Z_3; project it over the state set: {1, 2, 4, 5}. From which state, under which input, can I go to 1, 2, 4, or 5? This is the pre(Z_3).
Play video starting at :18:42 and follow transcript18:42
State 1 under a, State 1 under b, State 2 under b, State 4 under a,
Play video starting at :18:57 and follow transcript18:57
State 4 under b,
Play video starting at :19:2 and follow transcript19:02
State 5 under b,
Play video starting at :19:9 and follow transcript19:09
and State 5 under a.
Play video starting at :19:17 and follow transcript19:17
Now, again, I solve a fixed point here, and I
Play video starting at :19:37 and follow transcript19:37
am not solving it again; I have already solved the inner fixed point. We get a fixed point after two iterations, which is equal to {(b,2), (a,4), (b,4)} union pre(Z_2).
Play video starting at :20:22 and follow transcript20:22
Actually, that should be pre(Z_3).
Play video starting at :20:31 and follow transcript20:31
I did all these things to come up with Z_4.
Play video starting at :20:47 and follow transcript20:47
Now I would like to compute Z_5,
Play video starting at :20:55 and follow transcript20:55
another iteration of the outer fixed point, which again requires solving a maximal fixed point in the inner loop,
Play video starting at :21:26 and follow transcript21:26
pre(Z_4). In this case, we get a fixed point.
Play video starting at :21:52 and follow transcript21:52
Let's just write it. In this case, we get a fixed point, Z_5 = Z_4.
Play video starting at :22:5 and follow transcript22:05
That is equal to Z infinity.
Play video starting at :22:23 and follow transcript22:23
Is it realizable? Yes. Why is that? Because if I project
Play video starting at :22:32 and follow transcript22:32
Z_4 over the state set, what do I get? {1, 2, 4, 5}, which contains the initial state, one, so it's realizable.
Play video starting at :22:49 and follow transcript22:49
The formula varphi is realizable.
Play video starting at :23:1 and follow transcript23:01
Now I need to define the controller. Let's also define the controller very fast.
Play video starting at :23:15 and follow transcript23:15
C is equal to a static system with a strict transition function F_c and a strict output map H_c.
Play video starting at :23:41 and follow transcript23:41
Now, I need to define the index map, j(x).
Play video starting at :23:58 and follow transcript23:58
j(x) is the lowest index i such that x belongs to the projection of Z_i over the state set.
Play video starting at :24:9 and follow transcript24:09
What is j(1)? Let's look at state 1 and see in which set it first appears. Does it appear in Z_1? Let's see: no, because Z_1 is this one; it did not appear in Z_1. What about Z_2? Did it appear in Z_2? This is Z_2; no, it also did not appear in Z_2. What about Z_3? It did appear in Z_3, so j(1) is equal to three. And j(2)? State 2 first appeared in Z_2.
Play video starting at :24:57 and follow transcript24:57
So j(2) is two. State 3 did not appear in any of them, so we put infinity. State 4 appeared in Z_1, so we put one. State 5 appeared in Z_2, so we put two. Great. Now we have all these cases. Look, j(1) was three. That means H_c of state 1 is equal to: look at Z_3 and see what is the input corresponding to one. Here is Z_3. What is the input corresponding to one? The input is a, so put a here.
Play video starting at :25:55 and follow transcript25:55
j(2) was two.
Play video starting at :26:2 and follow transcript26:02
For H_c(2): if you look at Z_2, the input assigned to two is b,
Play video starting at :26:16 and follow transcript26:16
and j(3) is infinity, so that means H_c(3) is a or b. Then j(4) = 1; if you look at Z_1, what is the input assigned to four? It is a. Finally, j(5) is two; if you look at Z_2, the input assigned to five is b. That's the controller being defined.
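The whole computation above can be replayed in code. This is a sketch of mine, not course material; in particular the successor sets below are my reconstruction of the (unseen) state diagram from the sets computed in the lecture, so treat them as an assumption:

```python
# post[(state, input)] = possible successor states (my reading of the diagram)
post = {
    (1, 'a'): {2}, (1, 'b'): {1, 2},
    (2, 'a'): {3}, (2, 'b'): {2, 4, 5},
    # state 3 is blocking: no outgoing transitions
    (4, 'a'): {4}, (4, 'b'): {5},
    (5, 'a'): {1}, (5, 'b'): {4},
}
states, inputs = {1, 2, 3, 4, 5}, {'a', 'b'}

def pre(Z):
    """Pairs (u, x) all of whose successors land in the state projection of Z."""
    target = {x for (_, x) in Z}
    return {(u, x) for x in states for u in inputs
            if post.get((x, u)) and post[(x, u)] <= target}

Z_psi = {(u, x) for x in (2, 4) for u in inputs}     # "state is 2 or 4"
all_pairs = {(u, x) for x in states for u in inputs}

Z, iterates = set(), []                  # outer mu: grow from the empty set
while True:
    Y = set(all_pairs)                   # inner nu: shrink from everything
    while True:
        Y_next = (pre(Y) & Z_psi) | pre(Z)
        if Y_next == Y:
            break
        Y = Y_next
    if Y == Z:
        break
    Z, iterates = Y, iterates + [Y]

print(sorted({x for (_, x) in Z}))       # [1, 2, 4, 5]: contains x0 = 1, realizable

INF = float('inf')
j = {x: next((i for i, Zi in enumerate(iterates, 1)
              if x in {s for (_, s) in Zi}), INF) for x in states}
Hc = {x: ({u for (u, s) in iterates[j[x] - 1] if s == x} if j[x] < INF
          else set(inputs)) for x in states}
print(j)   # {1: 3, 2: 2, 3: inf, 4: 1, 5: 2}
```

Under this reconstruction the code reproduces the lecture's answer: Z_5 = Z_4, j = (3, 2, infinity, 1, 2), and the controller applies a at states 1 and 4, b at states 2 and 5, and anything at state 3.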
0:05
In this lecture, we're going to talk about the synthesis algorithm for recurrence
specification. So far we talked about invariance specification, reachability
specification, persistent specification, and now we are dealing with recurrence
specification. Let's see what the problem statement is. Consider a finite simple system S, a set of atomic propositions AP, and a labeling function L, exactly the same as in the previous synthesis problems. Now, a recurrence formula varphi is equal to always eventually varPsi. Remember, in persistence, the order of these two operators was different: in persistence we had diamond box varPsi, but in recurrence we have box diamond varPsi, which also means infinitely often varPsi, where varPsi is a propositional formula over the set of atomic propositions AP. Now we are interested in solving this realizability problem. Is this
varphi realizable on this system S under the labeling map L? If it is realizable,
we want to also provide a system C, or controller, which is feedback composable
with S, so that their feedback composition, you see feedback composition of C and
S, satisfies varphi. Also, I would like to remark that this always eventually varPsi is often referred to as a Buchi objective in the literature. What are the properties of interest that we can actually formulate as a recurrence property? For example, in robotic task and motion planning, sometimes a robot has to visit a region infinitely often: for example, it has to visit the pickup station infinitely often, or it has to visit the drop-off station infinitely often. Or in the context of self-driving cars, one of the assumptions we make about the traffic signal is that it is green infinitely often. You see, many properties of interest can be formulated as Buchi objectives, or recurrence specifications. Now let's see what the fixed-point operator is going to look like in this case. Again, the same as in the previous synthesis problems, we define the set Z_varPsi, which is, again, the set of all input state pairs for which, when you apply the labeling map to them, they satisfy the propositional logic formula varPsi. Now, let's look at this theorem, which talks about realizability for recurrence, or Buchi objectives.
Play video starting at :3:21 and follow transcript3:21
There exists a finite system C, or controller C, such that the feedback composition of C with S satisfies this recurrence objective under the labeling map L, if and only if the set of initial states of the system, X_0, is a subset of the projection of the fixed point Z_infinity over the state set. Remember, Z_infinity is a set of input state pairs; we project it over the state set. If the set of initial conditions is a subset of that, then the realizability problem has a positive answer, and we can actually construct a controller C. In this case, let's see how this fixed point is defined. Now the fixed point is defined like this. It's actually the opposite of the fixed point which was defined for the persistence problem, in the sense that, remember, in the case of persistence, we had two nested fixed points: there was an outer fixed-point algorithm, which was a minimal fixed point, and there was an inner fixed point, which was a maximal fixed point. In the case of recurrence, it's the other way around. The outer fixed-point algorithm is a maximal fixed-point algorithm, meaning that it's a non-growing algorithm. The inner fixed-point algorithm is a minimal fixed-point algorithm; it's a non-shrinking, or growing, algorithm. Remember, we have two for loops. The inner for loop implements the minimal fixed point, which is a growing algorithm. The outer for loop is a maximal fixed point, or a shrinking fixed-point algorithm, which is defined like this. Now, let's see how the controller is actually defined in this case. Again, the same as in the persistence problem, we need to define the sets Z_i. Those are the sets that get computed by the outer for loop at every iteration i. We have Z_0, Z_1, Z_2, Z_3, until we get a fixed point. Guess what? If the system is finite, we will 100% get a fixed point after finitely many iterations. Now, those sets that get constructed
for the outer fixed point at each iteration, let's call them Z_i. Now, with the help of those Z_i, we define this index map j. The domain of the index map j is the state set; the co-domain is the non-negative integers, which can also include infinity, and it's defined like this: for a given state x, j(x) is equal to the lowest index i for which x belongs to the projection of Z_i over the state set. What if the value of x does not appear in any of the Z_i? We only have finitely many of the Z_i, because we reach a fixed point. If x does not belong to any of them, then the corresponding index is infinity. Now, with the definition of j(x), we have this interesting corollary for controller synthesis for recurrence. Suppose that the set of initial states of the original system S is a subset of the projection of the fixed point over the state set. Let C be a static system with a strict transition function and a strict output map of this form. All the ingredients of C are defined the same as in the previous synthesis problems. The only missing part is H_c prime of x. H_c prime of x is a set-valued map; its domain is the state set, its co-domain is the input set, and it is given by: for a given state x, if j(x) is less than infinity, then H_c prime of x is the set of inputs u for which the pair (u, x) belongs to Z_j(x).
Play video starting at :8:19 and follow transcript8:19
You are already familiar, from the problem I solved in the previous lecture for the persistence property, with how to use this index j to define H_c prime of x. It is the same for the recurrence specification. What if the index is infinity? Then H_c prime of x is the set of all possible inputs. We have the guarantee that the controller C defined like this is 100% feedback composable with our system S, and their feedback composition satisfies our recurrence objective varphi.
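The swapped nesting can be sketched the same way as for persistence. The exact expression is on the slide; what follows is one standard form of the Buchi fixed point, consistent with the operator order described here (outer maximal, inner minimal), so treat the formula itself as an assumption of this sketch:

```python
def recurrence_fixed_point(pre, Z_psi, all_pairs):
    """Compute nu Z. mu Y. ((pre(Z) & Z_psi) | pre(Y)).

    Outer loop: maximal fixed point, starts from all pairs and shrinks.
    Inner loop: minimal fixed point, starts empty and grows.
    """
    Z = set(all_pairs)                 # outer iterate Z_0 = everything
    while True:
        Y = set()                      # inner iterate Y_0 = empty set
        while True:
            Y_next = (pre(Z) & Z_psi) | pre(Y)
            if Y_next == Y:            # inner (minimal) fixed point reached
                break
            Y = Y_next
        if Y == Z:                     # outer (maximal) fixed point reached
            return Z
        Z = Y
```

Compared with the persistence sketch, only the roles of the two loops and the places where Z and Y enter the update are exchanged.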
Play video starting at :9:6 and follow transcript9:06
In case you are interested in the proof of the realizability for recurrence, this
is the proof. You can go through it in case you are interested. Also the proof for
the system C being feedback composable and the feedback composition satisfying the
property of interest, you can also look at this proof.
0:05
Now, one might ask: what do we do if we have a more sophisticated specification? What I'm going to do here is provide examples of some more complex specifications. I will let you know how the fixed point operator is defined for the first example, and then I leave the rest to you, to investigate yourself how the fixed point operator is actually defined for numbers 2, 3, and 4. Let's look at specification number 1.
Play video starting at ::51 and follow transcript0:51
We are given a set of atomic propositions, AP, which is a, g, a_1, a_2, g_1, g_2, t_1, t_2, m_1 and m_2, and a labeling function L. We're asked for fixed point expressions from which you can determine if the following specifications are realizable. Furthermore, assume that the specifications are realizable, and derive the controller that enforces the specification. So let's look at number 1. We have two recurrence goals: the formula is equal to infinitely often t_1 and infinitely often t_2. We have a conjunction of two recurrence properties.
Play video starting at :1:40 and follow transcript1:40
The main task for us is to define the fixed point expression whose fixed point can be utilized, or leveraged, to define the controller which is feedback composable with the system S, such that the feedback composition satisfies this conjunction of two recurrence goals. Let's see what the expression looks like in this case. Let me introduce an extra page here.
Play video starting at :2:19 and follow transcript2:19
What I'm going to do is, first, I'm going to define these sets called Z_t_i.
Play video starting at :2:29 and follow transcript2:29
Z_t_i is equal to the set of input state pairs such that, when you apply the labeling map to them,
Play video starting at :2:49 and follow transcript2:49
they satisfy propositional logic t_i.
Play video starting at :3: and follow transcript3:00
Here, i can be one or two. Now, this is the fixed point expression for, you know, satisfying these two recurrence objectives.
Play video starting at :3:21 and follow transcript3:21
Of course, the outer fixed point is a maximal fixed point. Now, the inner one is actually an intersection of two minimal fixed point iterations.
Play video starting at :3:49 and follow transcript3:49
As you can see, for more sophisticated objectives, the fixed point expression also
becomes more and more complex. You might ask what people usually do when they are
given a very complex linear temporal logic formula. I should let you know that
people do not actually use fixed point expressions directly to do the synthesis.
This is out of the scope of this course, but there are ways to build an automaton
for the given LTL objective or omega-regular objective, for example a Rabin or
parity automaton. Then what they do is take the product of that automaton with the
finite transition system, and on the product they solve a game to solve the
required synthesis problem. That's more systematic than fixed point iteration. As
you can see, as the property gets more and more complex, our fixed point expression
also becomes more and more complex. Look at this case: we have a conjunction of two
recurrence properties, and it's already getting very complex even to write the
fixed point expression. But this is just for the sake of exercise; for more
sophisticated properties, people actually resort to automata-based ways of doing
the synthesis. Now, in this case, you can actually run this fixed point. You have
two nested loops: the outer loop is a maximal fixed point, and in the inner loop
you need to solve two minimal fixed points and then take their intersection. When
you are done, you again iterate the outer loop. Here, you need to define these two
sets. The interesting fact about this property is that your controller is not
static anymore. It's actually dynamic.
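To make the shape of this nested fixed point concrete, here is a minimal Python sketch on a hypothetical four-state transition system. The encoding (the state and input sets, the `cpre` controllable-predecessor operator, and the target sets `T1`, `T2` standing for the states whose label satisfies t_1 and t_2) is an illustrative assumption of this sketch, not the lecture's notation. It computes the winning set for the conjunction of two recurrence objectives: an outer maximal fixed point over the intersection of two inner minimal fixed points, as described above.

```python
# Hypothetical finite transition system: states, inputs, and a
# transition map POST[(x, u)] -> set of successor states.
STATES = {0, 1, 2, 3}
INPUTS = {'a', 'b'}
POST = {
    (0, 'a'): {1}, (0, 'b'): {0},
    (1, 'a'): {2}, (1, 'b'): {0},
    (2, 'a'): {3}, (2, 'b'): {1},
    (3, 'a'): {3}, (3, 'b'): {2},
}

def cpre(Z):
    """Controllable predecessor: states from which some input
    forces all successors into Z."""
    return {x for x in STATES
            if any((x, u) in POST and POST[(x, u)] <= Z for u in INPUTS)}

def reach_within(target, Z):
    """Inner minimal (mu) fixed point: states that can be forced to
    reach target-intersected-with-cpre(Z)."""
    W = set()
    while True:
        W_new = (target & cpre(Z)) | cpre(W)
        if W_new == W:
            return W
        W = W_new

def two_recurrence_winning_set(T1, T2):
    """Outer maximal (nu) fixed point over the intersection of the
    two inner minimal fixed points, for visiting T1 and T2
    infinitely often."""
    Z = set(STATES)
    while True:
        Z_new = reach_within(T1, Z) & reach_within(T2, Z)
        if Z_new == Z:
            return Z
        Z = Z_new

print(two_recurrence_winning_set({0}, {2}))
```

In this toy system every state can cycle through both target states, so the winning set is the whole state set; with an empty second target, the winning set is empty.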
Play video starting at :6:9 and follow transcript6:09
Now let's see what the controller looks like in this case. Let's say we compute
the fixed point; we have Z_infinity. Now I need to define what the controller looks
like.
Play video starting at :6:30 and follow transcript6:30
You actually use the fixed point you get because you need to use its pre to define
the set.
Play video starting at :6:48 and follow transcript6:48
That's Z_i^t_1. You also need to do Z_i^t_2. Now here is what the controller looks like.
Play video starting at :7:8 and follow transcript7:08
Again, the controller is not static anymore. It's a dynamic controller. Let me
actually introduce another blank page.
Play video starting at :7:24 and follow transcript7:24
What does the controller look like? Here. Now, the controller has two states. State
1 is the initial state. Now, let's see how F_C and H_C are defined.
Play video starting at :7:46 and follow transcript7:46
If you have state 1,
Play video starting at :7:52 and follow transcript7:52
you go to state 2 if x belongs to the projection of Z^t_1 onto the states.
Otherwise, you stay in state 1.
Play video starting at :8:14 and follow transcript8:14
The same thing for
Play video starting at :8:21 and follow transcript8:21
if you're in state 2 under state x, you go to state 1 if x belongs to the
projection of Z^t_2 onto the states; otherwise you stay in state 2.
Play video starting at :8:48 and follow transcript8:48
Again, you need to define the index map J, where J(x) is the lowest index i such
that x belongs to
Play video starting at :9:6 and follow transcript9:06
the projection of Z_i^t_k, for k equal to one or two. Now we have defined this
index map. We already defined F_C; now let's see how H_C is defined. H_C(1,x) is
equal to the set of inputs u for which the pair (u,x) belongs to Z^t_1 at index
Play video starting at :10:16 and follow transcript10:16
J_t1(x), if J_t1(x) is less than infinity. Otherwise, it is going to be the whole
input set.
Play video starting at :10:35 and follow transcript10:35
Actually, let me do one thing: I can define both of them at once. Rather than
writing 1 here, I can write k, and then I can use k throughout, where k can be one
or two. That's it. You see, in this case we are dealing with a dynamic controller,
but it has only two states, because we had a conjunction of two recurrence
properties. That's how we defined the index map, and the index map is used to
define the output map of the controller.
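The two-mode controller described above can be sketched as follows. The sets Z^t_1 and Z^t_2, and the sample trajectory, are hypothetical placeholders (the real ones come from the fixed point computation), and the index map J is simplified away; the sketch only shows the mode-switching transition map F_C and the output map H_C.

```python
# Hypothetical (input, state) pair sets standing in for the fixed
# point sets Z^t_1 and Z^t_2 computed earlier.
Z = {
    1: {('a', 0), ('b', 1)},   # pairs assumed to satisfy t_1
    2: {('a', 2), ('b', 3)},   # pairs assumed to satisfy t_2
}
# Projection of each Z^t_k onto the states.
PROJ = {k: {x for (_, x) in Z[k]} for k in (1, 2)}

def F_C(mode, x):
    """Controller transition map: switch mode once the current
    recurrence target has been visited."""
    if mode == 1 and x in PROJ[1]:
        return 2
    if mode == 2 and x in PROJ[2]:
        return 1
    return mode

def H_C(mode, x, all_inputs=('a', 'b')):
    """Controller output map: inputs u with (u, x) in Z^t_mode,
    falling back to the whole input set otherwise."""
    admissible = {u for u in all_inputs if (u, x) in Z[mode]}
    return admissible if admissible else set(all_inputs)

# The mode remembers which of the two recurrence targets to chase next.
mode = 1
for x in [0, 2, 1, 3]:          # a hypothetical state trajectory
    u = sorted(H_C(mode, x))[0]
    mode = F_C(mode, x)
```

The design point is that the controller's internal state (the mode) is exactly what makes it dynamic rather than static: it records which of the two targets to pursue next.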
Play video starting at :11:28 and follow transcript11:28
These fixed point expressions solve this conjunction of two recurrence properties.
Again, I repeat: using fixed point expressions to solve more sophisticated linear
temporal logic properties is doable, but you need to come up with these nested
fixed point algorithms, which can be very tedious and complex. So for more complex
LTL properties, which are out of the scope of this course, people actually use
automata-based controller synthesis, which is more systematic and straightforward,
although its complexity might be higher than using fixed points. I'm not going into
the details. Here, I provided an example of what happens if we have this
sophisticated objective. I came up with a fixed point expression, which was this
one, and this fixed point expression solves that synthesis objective. This is also
how the controller is defined, which is not a static controller anymore but in fact
a dynamic one. You can, with a little searching, find out what the fixed point
expressions look like for problems number 2, 3, or 4. I leave that to you: if
you're curious to understand what the fixed points for numbers 2, 3, and 4 look
like, you can investigate yourself. But as I said, this is out of the scope of this
course, and I just wanted to show you an example. As the property gets more
complex, the fixed point expression also gets much more complex. In fact, people do
not use fixed point expressions anymore for solving more sophisticated LTL
properties; they resort to automata-based controller synthesis. That actually
brings us to the end of the fixed point synthesis algorithms.
0:04
In the past few lectures, I talked about verification and some synthesis
algorithms, if you recall. One thing we should stress now is that all those
verification and synthesis algorithms I explained could be performed on a computer
in a purely algorithmic fashion. But keep in mind that the main ingredient was that
the given system was finite. Now, the question of interest is that many autonomous
systems, including self-driving cars, drones, ships, and so on, cannot necessarily
be described using finite systems. The underlying system representation for these
complex autonomous systems is in fact not finite; it can be infinite or even
uncountable. So the question you might ask is: how am I able to leverage those
verification techniques and the fixed point synthesis algorithms I explained in the
previous lectures so that they can be applied to these types of physical systems,
whose state variables or physical quantities evolve continuously in the space? In
order to answer this question, what I'm going to talk about is called abstraction.
What we're going to do is try to construct a finite representation of those
infinite systems; we try to construct a finite abstraction of those
continuous-space systems. This is the focus of the next few lectures: how to
construct those abstractions. What is the main ingredient that helps us construct
them? It is the establishment of a relation between the original system and its
finite abstraction, which is called a feedback refinement relation. Later on, I
will explain how those abstractions can be constructed and computed. Let's see:
what is the motivation behind having a system relation, namely a feedback
refinement relation, between the original system and its finite abstraction? Look
at these two closed loop diagrams. You are given two systems, S_1 and S_2, and a
specification Phi. Suppose we have a controller C for system S_2, and we know that
this closed loop, the feedback composition of C and S_2, satisfies Phi. Now, the
question is, how can we modify C to C' so that C' is feedback composable with S_1
and their feedback composition satisfies Phi? I should also add that we already
know there is an underlying relation between system S_1 and system S_2. The
question is how that relation should be defined so that it allows us to take a
controller which has been designed for a different system, namely S_2, and bring it
to our original system, namely S_1. We also want to be able to transfer the proof
of correctness, in the sense that if I know the feedback composition of C and S_2
satisfies the property of interest Phi, I also want to get this proof in the
original domain. Meaning, I want the feedback composition of C' and S_1 to also
satisfy the property of interest Phi. Now, the question is, how should I define the
relation between these two systems to get this nice property: refining the
controller and bringing the proof from one domain to the other? The construction of
C' is often called controller refinement, because we are refining the controller
from the abstract domain (on the right hand side, S_2; let's call it the abstract
system) to the concrete domain (on the left hand side, S_1 is our concrete system).
Play video starting at :4:57 and follow transcript4:57
The other motivation for using this kind of system relation is system replacement.
Let's say we are given an interconnected system. This interconnected system
consists of a number of subsystems; in this particular diagram, it consists of four
subsystems, S_1, S_2, all the way to S_4. Let's say this large-scale interconnected
system satisfies some property of interest. Now, we would like to replace component
S_1 with S_1', because, for example, S_1 is broken, or S_1' performs better, or
S_1' is an updated version of S_1. There are many reasons we might want to replace
a component: there is a new component with new functionality, or an updated
version. Now, if you substitute S_1 with S_1', the question is, what are the
conditions so that the new interconnected system, composed of S_1', S_2, S_3, and
S_4, still satisfies Phi? Here, if I'm able to establish the right relation between
S_1 and S_1', then we don't need to worry about the satisfaction of the property by
the overall large-scale interconnected system. This is another interesting
application of having such a system relation: it allows us to plug and play
interchangeably. We can take out a component, put in a new component, and still be
able to show that the interconnected system with the new component in it still
satisfies the original property of interest. In order to have this strong property
for the interconnected system, we need to be able to show that S_1 and S_1' have
this relation between them.
0:04
So [COUGH] let's look at the concrete definition of a feedback refinement relation.
This relation is the main ingredient to build a finite abstraction of a
continuous-space or infinite system, and then be able to use that finite
abstraction to design a controller and eventually refine that controller back to
the original system. So, in order to define a feedback refinement relation, first I
need to define what I mean by admissible inputs. Given a system S, we define the
set of admissible inputs at state x by this notation: U_S(x) is equal to the set of
all inputs u for which the pair (x, u) is non-blocking. In other words, if I'm in a
state x and I apply that u, there is an outgoing transition; the successor set is
not going to be empty. So that's what I mean by the admissible inputs of system S
at state x: the set of all inputs for which, if I start from x, there is an
outgoing transition. It's not going to be empty, it's not a trap state, it's going
to be non-blocking. So now let's define feedback refinement relation. Let's say we
are given two systems, S1 and S2, and let's assume the input set of system 2 is a
subset of the input set of system 1. A strict relation Q (look, Q is a binary
relation, right?), which is a subset of the product of the state sets of systems 1
and 2, is a feedback refinement relation from system S1 to system S2, denoted by
this notation. Being strict means Q(x1) is non-empty for any x1 in the state set
X1; we already introduced what I mean by a strict relation. The following must hold
for all pairs (x1, x2) inside the relation. So let's see what those three
conditions are. Number 1: if x1 is an initial state in system 1, then x2 has to be
an initial state in system 2.
Play video starting at :3:9 and follow transcript3:09
Number 2: the admissible inputs at state x2 in system 2 should be a subset of the
admissible inputs at state x1 in system S1.
Play video starting at :3:23 and follow transcript3:23
And then finally, number 3, which is the most important one. It says: for any input
u which is admissible at state x2 in system S2, if I apply that input at state x1
in system S1, I will go to a bunch of states. Now the Q-image of those states where
I'm landing has to be a subset of the set of successor states which I can reach in
system 2 from state x2 under input u.
Play video starting at :4:6 and follow transcript4:06
So let's wrap up: what do I mean by those conditions of a feedback refinement
relation? In other words, for any pair (x1, x2) in the relation, condition number 1
says that if x1 is an initial state in system S1, then x2 needs to be an initial
state in system S2.
Play video starting at :4:33 and follow transcript4:33
Condition 2 says every admissible input of system S2 at state x2 is also admissible
at state x1 in system S1. And finally, condition 3 says every successor in system
1, starting from state x1 under u, when mapped through the feedback refinement
relation Q, is contained in the set of successors in system 2 starting from state
x2 under input u. This is the concrete definition of a feedback refinement
relation, which, by the way, is very important for building or computing a finite
system representing our original infinite system, while at the same time being able
to use that finite system, apply our algorithmic synthesis techniques, the fixed
point algorithms, over that finite system, design a controller, and refine the
controller back to the original system. I will explain more on this refinement in
the next lectures.
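As a rough sketch, the three conditions can be checked mechanically on finite systems. The dictionary encoding of a system below ('states', 'init', 'inputs', and a 'post' map from (state, input) pairs to successor sets) is an assumption of this sketch, not notation from the lecture.

```python
def admissible(S, x):
    """U_S(x): inputs with a non-blocking transition from x."""
    return {u for u in S['inputs'] if S['post'].get((x, u))}

def is_frr(S1, S2, Q):
    """Check the three feedback refinement relation conditions for
    finite systems S1, S2 and a relation Q given as a set of
    (x1, x2) state pairs."""
    Qmap = {}
    for (x1, x2) in Q:
        Qmap.setdefault(x1, set()).add(x2)
    # Q must be strict: every state of S1 is related to something.
    if any(not Qmap.get(x1) for x1 in S1['states']):
        return False
    for (x1, x2) in Q:
        # 1) initial states of S1 are matched by initial states of S2
        if x1 in S1['init'] and x2 not in S2['init']:
            return False
        # 2) inputs admissible in the abstraction are admissible below
        if not admissible(S2, x2) <= admissible(S1, x1):
            return False
        # 3) successors in S1, mapped through Q, are successors in S2
        for u in admissible(S2, x2):
            image = set().union(*(Qmap[y] for y in S1['post'][(x1, u)]))
            if not image <= S2['post'][(x2, u)]:
                return False
    return True
```

Note that condition 2 is checked before condition 3 on each pair, so by the time the successor check runs, the input u is guaranteed to be admissible in S1 as well.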
0:05
>> Let's look at an example of a feedback refinement relation. Here we are given
two simple systems, S_1 and S_2, and we are asked to check if the relation Q, which
is this set of state pairs from systems 1 and 2, is a feedback refinement relation
from system S_1 to system S_2. So what are we going to do? We're going to check all
three conditions. First of all, remember, Q has to be a strict relation, so that's
the very first thing we have to check. It is strict. Why is that? Because for any
state in system S_1, Q of x_1 is not empty. Look how many states we have: 1, 2, 3,
4, 5. For all of them, Q of each state is not empty; there is a state of system 2
related to each of those states. That already implies Q is strict. Now the very
first condition says for any state pair (x_1, x_2) in Q, if x_1 is an initial state
in system S_1, then x_2 also has to be initial in system S_2. Let's see if that's
the case. Let's look at the pair (1, 1'). Is 1 initial in system S_1? Yes. So that
means 1' has to be initial in system S_2. Is it? Yes. Now we come to (5, 1'). Is 5
an initial state? Yes; remember, we mark initial states with a sourceless arrow. Is
1' an initial state? Yes. For (2, 2'), 2 is not an initial state; 3 is not an
initial state; 4 is not an initial state. Then we are done: the very first
condition of feedback refinement relation holds. What about the second condition,
saying any admissible input in system 2 also has to be admissible in system 1?
Let's look at the pair (1, 1'). What are the admissible inputs at 1'? a and b. So
that means a and b also have to be admissible at state 1. Is this the case? Yes:
look, in state 1, under both a and b, you have successors. Let's look at the next
pair, (5, 1'). The inputs admissible at 1' are a and b, so a and b also have to be
admissible at state 5. Is this the case? Yes: in state 5, under a you go to 4 or 2,
and under b you go to 1. Now let's look at the next pair, (2, 2'). Which inputs are
admissible at 2'? Only a. So a has to be admissible at state 2. Is that the case?
Yes: under a, we might go to 3 or 4. The next pair is (3, 3'). What inputs are
admissible at 3'? There is no input; the set of admissible inputs at 3' is empty.
And the same is the case at state 3: the set of admissible inputs at state 3 is
also empty. Very good. Then let's look at the last pair, (4, 2'). What is
admissible at 2'? Input a. So a has to be admissible at 4 as well. Is that the
case? Yes: under a, in state 4, we go to state 4. Great, the second condition is
also true. And the last condition says: for any input admissible at state x_2, if
we apply that input in system S_1, Q of all the successors has to be a subset of
the successors in system 2. So let's check that.
Play video starting at :4:13 and follow transcript4:13
If I'm in state 1' and I apply input b, where do I go? To 1'. Okay, now come to
state 1 and apply b. Where do I go? I go to state 1, or I might go to state 5. What
is the Q-image of {1, 5}? Q of 1 is 1', and Q of 5 is also 1'. So is {1'} a subset
of the successors of 1' under b? Yes, great. If I'm in 1', I can also apply a. Now
come back and look at the original system. If you're in state 1 and you apply a,
where do you go? To state 2. What is Q of 2? 2', and it's a subset of the
successors of 1' under a: you go to 2'. Now let's go to the next pair, (5, 1'). The
same thing: what is admissible at 1'? b and a. If I apply b at 1', I go to 1'. Now
look at state 5: if you apply b, where do I go? I go to state 1. Q of 1 is 1', and
1' is a successor of 1' under b. Yes. I can also apply a. So if I go to the
concrete system and apply a from state 5, where do I go? To 4 or 2. Now the Q-image
of {2, 4} has to be a subset of {2'}. Is that the case? Yes: 2 and 4 are both
related to 2'. Now we go to the next pair, (2, 2'). If I'm in state 2' in system
S_2, what is admissible? Only a. Okay, so if I apply a in state 2 in system 1,
where do I go? To 3 or 4. That means the Q-image of {3, 4} has to be a subset of
{2', 3'}. Is that the case? Yes: Q of 3 is 3' and Q of 4 is 2'. Great.
Play video starting at :6:51 and follow transcript6:51
So now we come to the pair (3, 3'). If I'm in 3', there is no successor, so there
is no input available; and in state 3, there is also no input available. Great. Now
we come to the last pair, (4, 2'). Look: if I'm in 2', I can apply action a. So
let's apply action a at state 4. Where do I go? I go to state 4. Is Q of 4 a subset
of {2', 3'}? Yes: 4 is related to 2'. So we also satisfy the last condition of
feedback refinement relation. Hence Q, which is this set of pairs, is in fact a
feedback refinement relation from system S_1 to system S_2. Now look what happens
here: even though system S_1 itself is a finite system, I was able to use the
concept of feedback refinement relation to reduce its size. Look: S_2 has only
three states. Now I can use S_2 to, let's say, design a controller, because it's
smaller and hence computationally less challenging. Then I can refine that
controller back to system S_1. Of course, the feedback refinement relation can also
be used in the case where S_1 is an infinite system and S_2 is a finite system.
Such an example comes in the future lectures.
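The whole check above can be reproduced in a few lines. The transition sets below are a transcription of the example's diagrams as described in this lecture (in particular, state 5 under a is taken to go to 2 or 4, consistent with the Q-image check above); since each state of S_1 relates to exactly one state of S_2 here, Q is encoded as a plain map.

```python
# The two systems from the example, encoded as transition maps
# (x, u) -> set of successors.  Primed states of S_2 are written "1p" etc.
POST1 = {(1, 'a'): {2}, (1, 'b'): {1, 5},
         (2, 'a'): {3, 4},
         (4, 'a'): {4},
         (5, 'a'): {2, 4}, (5, 'b'): {1}}
POST2 = {('1p', 'a'): {'2p'}, ('1p', 'b'): {'1p'},
         ('2p', 'a'): {'2p', '3p'}}
INIT1, INIT2 = {1, 5}, {'1p'}
Q = {1: '1p', 5: '1p', 2: '2p', 3: '3p', 4: '2p'}  # the relation, as a map

def adm(post, x):
    """Admissible inputs at x: inputs with an outgoing transition."""
    return {u for (y, u) in post if y == x}

ok = True
for x1, x2 in Q.items():
    ok &= (x1 not in INIT1) or (x2 in INIT2)            # condition 1
    ok &= adm(POST2, x2) <= adm(POST1, x1)              # condition 2
    for u in adm(POST2, x2):                            # condition 3
        ok &= {Q[y] for y in POST1[(x1, u)]} <= POST2[(x2, u)]
print(ok)  # prints True
```

All five pairs pass all three conditions, matching the conclusion of the lecture.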
0:04
In this lecture, I'm going to show the usefulness of having a feedback refinement
relation, in the sense of how we can leverage a feedback refinement relation to
actually refine a controller from an abstract system to a concrete system. Consider
we are given two systems: S_1, which we can call the concrete system, and S_2,
which we can call an abstraction. Let's assume the input set of system 2 is a
subset of the input set of system 1. In addition, consider we have a controller, a
system C for system S_2, which is feedback composable with system S_2.
Play video starting at :1:2 and follow transcript1:02
Also, let's assume that if system C is non-blocking, that implies S_2 is also
non-blocking, namely this condition, which I have denoted using a star. This
equation might look a little complex, but in English terms, what we are saying here
is: if system C is non-blocking, then the abstraction also has to be non-blocking.
So what do we have? We know that there is a system C feedback composable with S_2,
and if C is non-blocking, S_2 is also non-blocking. That brings me to the main
theorem. Also assume Q is strict and is a feedback refinement relation from system
S_1 to S_2. Let's again reiterate: we are given two systems, S_1 and S_2; we can
call S_1 the concrete system and S_2 the abstraction. We have a strict relation Q,
and we know it's a feedback refinement relation from system S_1 to system S_2. In
addition, assume there is a controller for system S_2. If system S_2 is a finite
abstraction, a finite system, we can always apply the fixed point algorithms, which
I explained in the previous lectures, to design this system C, which is feedback
composable with S_2 and whose feedback composition satisfies one of those
properties of interest. So we assume there is a system C which is feedback
composable with S_2, and we assume Q is a feedback refinement relation from system
S_1 to system S_2. In addition, there is a technical assumption as well, which is
not a big deal, which says: if C is non-blocking, that implies S_2 is also
non-blocking. This condition will be used to prove the main theorem, which I
explain now. Let Q be a feedback refinement relation from system S_1 to system S_2,
let C be feedback composable with S_2, and let non-blockingness of C imply
non-blockingness of S_2; then we get items 2 and 3. What is item 2? Item 2 says C
is also feedback composable with the serial interconnection of Q and S_1. And item
3 says the behavior of the feedback interconnection on the right is actually a
subset of the behavior of the feedback interconnection on the left. Let's digest
these two interesting results. The first says you can actually use the feedback
refinement relation Q as a quantizer after the concrete system S_1, and then send
the output of the quantizer to the same controller that we designed for system S_2.
That's how the refinement works. It's very easy: you put the feedback refinement
relation Q as a so-called quantizer after the output of system S_1; the state
information of system S_1 goes into Q, and then the output of Q goes to the
controller C. Item number 3 says every behavior you see on the right, in the
concrete domain, is a subset of the behavior you see on the left. Since I designed
the controller C for the finite abstraction to satisfy some property of interest,
and since the behavior of the closed loop system on the right is a subset of the
behavior of the closed loop system on the left, that means this closed loop on the
right also satisfies the property of interest. We get that for free.
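The refinement recipe, feeding the concrete state through the quantizer Q and then through the abstract controller C, can be sketched in a few lines; the particular maps below are hypothetical placeholders.

```python
# Sketch of controller refinement: the refined controller C' is the
# serial interconnection of the quantizer Q and the abstract
# controller C.  Both maps are hypothetical placeholders; in general
# Q is the (possibly non-deterministic) set membership relation.
Q = {0.1: 'cell_a', 0.2: 'cell_a', 0.7: 'cell_b'}   # concrete state -> cell
C = {'cell_a': 'u1', 'cell_b': 'u2'}                # controller for S_2

def refined_controller(x1):
    """C': quantize the concrete state through Q, then apply the
    controller designed for the abstraction S_2."""
    x2 = Q[x1]          # the quantizer step
    return C[x2]        # the abstract controller's output
```

The controller C itself is untouched; only its input is rerouted through Q, which is exactly why the correctness proof transfers from the abstract closed loop to the concrete one.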
0:05
In the previous lecture, I explained the main theorem behind the feedback
refinement relation: how we can refine a controller from one system to another
system. Now, as a result of that theorem, we have this corollary, which says: if Q
is a feedback refinement relation from system S_1 to system S_2, then 1 and star
imply these two bullets. In case you forgot what conditions 1 and star were: 1 was
simply saying C is feedback composable with the system S_2, and condition star was
simply saying that if C is non-blocking, that implies S_2 also has to be
non-blocking. Now, if we know Q is a feedback refinement relation from system S_1
to S_2, and we assume C is feedback composable with system S_2 and non-blockingness
of C implies non-blockingness of S_2, then we have these two properties. First, the
serial interconnection of C and Q is feedback composable with S_1. Second, for any
input-state sequence which belongs to the behavior of this closed loop, there
exists an input-state sequence of that closed loop such that the state
trajectories x_1 and x_2 are, at any given time, pairwise in the relation. Maybe
the most important property is the first bullet point. Remember, in the previous
lecture, I talked about this closed loop. But guess what: if you put a dashed line
around the serial interconnection of Q and C, that's our refined controller. That's
our C'. Remember, we designed C for the abstract system S_2. The question was: if Q
is a feedback refinement relation from system S_1 to system S_2, what is the
refined version of C for system S_1? The refined version of C, namely C', is simply
the serial interconnection of Q and C. That is the refined version of C that can be
used directly for the concrete system S_1. That's how the refinement works. If you
have a feedback refinement relation Q from system S_1 to system S_2, you know that
this refined controller is going to be feedback composable with system S_1, and not
only that, their feedback composition will satisfy the same property as the
feedback composition of C with S_2.
Play video starting at :3:29 and follow transcript3:29
You might ask why we need this star property, that non-blockingness of C implies
non-blockingness of S_2. We can show it through this simple example. Here we are
given two systems, S_1 and S_2, very simple, very static. You see, S_2 has a
blocking state, while S_1 doesn't.
Play video starting at :4: and follow transcript4:00
You can easily see that this Q, which simply contains one element, (1,1), is a
feedback refinement relation from system S_1 to system S_2; you can readily verify
it. Now consider this controller C. It's also static. This controller C is, in
fact, feedback composable with S_2. C is also feedback composable with S_1.
However, condition star is not satisfied. Why is that?
Play video starting at :4:39 and follow transcript4:39
Remember, non-blockingness of C should imply non-blockingness of S_2. Here, C is
non-blocking, but S_2 is blocking. The condition star is not satisfied. Now let's
see why this will cause a problem. Look here: if you apply this controller C to
system S_1, then (A,1) to the power omega belongs to the behavior of the feedback
composition of C and S_1. If you apply A at state 1, what do you see? You see an
infinite state sequence of 1s and an infinite input sequence of As. However, this
behavior does not belong to the feedback composition of C and S_2, because the
feedback composition of C and S_2 is in fact empty.
Play video starting at :5:51 and follow transcript5:51
The behavior is empty because at state 1 we are blocked; we don't have any outgoing
transition.
Play video starting at :6:2 and follow transcript6:02
At best, the behavior contains only one element; it's not an infinite sequence,
because system S_2 is blocking. Now you see, the behavior of this feedback
composition is not a subset of the behavior of that feedback composition, whereas
we wanted this nice behavioral inclusion. Do you see that? Why doesn't this work?
Mainly because condition star is not satisfied. This simple example explains why we
need that non-blockingness of C should imply non-blockingness of S_2.
Play video starting at :6:48 and follow transcript6:48
If you're interested in the proof of the theorem and the corollary, this is a
sketch of the proof. You can work through it yourself; it's quite detailed, but it
shows why we need, for example, the star condition to show the behavioral
inclusion.
0:05
So today I'm going to provide a way of constructing abstractions for systems. We
focus on abstractions whose state set is a cover of the state set of the original,
or concrete, system, meaning that X2 is a set of non-empty subsets of X1. For
example, if you look at this figure, let's say X1 is this two-dimensional set, and
each of these circles is a subset of it. Each of these circles can be one state of
the abstract system. So remember: each state of the abstract system is itself a
subset of the state set of the concrete system. Now, if you look at the concrete
system, whose state set is this two-dimensional set, what is the cardinality of
that set? It is infinite, in fact uncountable. Why? Because how many points do we
have in this two-dimensional set? Uncountably many. However, we can put a finite
cover over it. How many of these circles do we need in order to cover this set?
Finitely many. Each of these circles is one state of the abstract system, and their
union covers the original state set, which contained uncountably many points. So
every cell x2 is in fact a subset of X1; look, every cell, each circle, is itself a
subset of X1. In this case, the feedback refinement relation Q, or the quantizer Q,
is the simple set membership relation. Meaning what? Meaning that for any given
point x1 in our original state set X1, the relation or quantizer Q picks a cell x2,
in a non-deterministic fashion, such that x1 belongs to x2. If you look at this
figure, look at the point x1 here. This point belongs to two cells, because it's in
the intersection of two circles. So that means Q(x1) is going to be either of these
two cells in the abstract domain.
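A minimal sketch of such a set membership quantizer, assuming hypothetical circular cells: for a point x1, Q returns all cover elements containing it, and a non-deterministic quantizer may then pick any one of them.

```python
import math

# Hypothetical circular cover elements: name -> (center_x, center_y, radius).
CELLS = {'c1': (0.0, 0.0, 1.0), 'c2': (1.5, 0.0, 1.0), 'c3': (0.75, 1.0, 1.0)}

def Q(x1):
    """All cells x2 with x1 in x2 (the set membership relation); a
    non-deterministic quantizer would pick any one of these."""
    px, py = x1
    return {name for name, (cx, cy, r) in CELLS.items()
            if math.hypot(px - cx, py - cy) <= r}
```

A point in the overlap of two circles is related to both cells, which is exactly the non-determinism described above.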
Play video starting at :3:7 and follow transcript3:07
So, okay, now let's see how the computation of the abstraction works. Let's say we
have two systems, S1 and S2, and let's assume the state set of system S2 is a cover
of X1 by non-empty sets.
Play video starting at :3:30 and follow transcript3:30
I mean, X2 is a cover using non-empty subsets of X1. Here I assume uniform cover
elements, all of them circles of the same size, but you can also have non-uniform
cover elements. The theorem doesn't restrict the shape of those subsets; they can
have any shape as long as their union contains the original state set, namely X1.
So now I can state the theorem about the computation of the abstraction. The set
membership relation is a feedback refinement relation from system S1 to system S2
if and only if the following hold. First, if x1 is an initial state in our original
system S1, and x1 belongs to a cover element x2, then the cover element x2 has to
be an initial state in system 2. Second, as we know, the input set of system 2 has
to be a subset of the input set of system 1; and if a point x1 belongs to a cover
element x2, then the admissible inputs of the cover element x2 in system S2 have to
be a subset of the admissible inputs at state x1 in system S1. And then finally,
look at two cover elements x2 and x2' in X2, and let's say an input u is admissible
at cover element x2 in system S2.
Play video starting at :6:11 and follow transcript6:11
And then take the intersection with a cover element x2 prime. So look at this: it is the
successor set in system S1 when you start from the set x2 under input u. This
successor set is going to be a set inside X1. If its intersection with a
cover element x2 prime is non-empty, then in the abstract domain you should have a
transition from x2 under input u to x2 prime. So you see, here I am evaluating F1
over a set; remember, x2 is a cover element. So that requires computation of the
reachable set. I will come back to this point later. So let's actually go over
the theorem again. So the theorem said the set membership relation is a feedback
refinement relation from system S1 to system S2 if and only if, so remember the
state set of the system S2, we already know it's a cover of the state set of system
one, right? Okay, so now set membership relation is a feedback refinement relation
from system S1 to system S2 if and only if, condition number one says, if point x1
is an initial state in system S1, and if x1 belongs to a cover element x2, x2 also
has to be an initial state in system S2. Condition two says that if the point x1
belongs to the cover element x2, then the admissible inputs at x2 in system S2 have
to be a subset of the admissible inputs at state x1 in system S1.
Play video starting at :8:34 and follow transcript8:34
Finally, the third condition says: how do we build the transitions among
the cover elements? It says start from a cover element x2 and apply an input u which is
admissible at cover element x2. Now you have to compute the set of all
successors in system S1 when you start from the cover element x2 under the input
u. That's going to be a set inside X1. Now take the intersection of that set
with the other cover elements inside the state set of system S2. If that
intersection is non-empty, in the abstract domain you have to put a transition from
cover element x2 to those other cover elements which intersect this
reachable set under the input u. Now you might ask, how do we compute F1(x2, u)?
Because x2 here is not a point, it is a set. So that requires computation of the reachable
set, which I briefly explain in the next few lectures. But in
summary, that's how the abstraction is constructed, using the cover set and
the set membership relation, right?
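As a small illustration, the three conditions of the theorem can be checked mechanically for finite systems. The following is a hedged Python sketch; the function name and the dictionary encoding of the systems are my own, not from the lecture (U2 being a subset of U1 is assumed separately):

```python
def membership_is_frr(F1, X1_init, U1_adm, cover, X2_init, U2_adm, F2):
    """Check the theorem's three conditions for the set-membership map
    x1 -> {x2 : x1 in x2} being a feedback refinement relation S1 -> S2.
    F1, F2: dict (state, input) -> set of successor states;
    U1_adm, U2_adm: dict state -> set of admissible inputs;
    cover: dict abstract state -> set of concrete states it contains."""
    for x2, cell in cover.items():
        # Condition 1: if x1 is initial and x1 in x2, then x2 is initial in S2.
        if cell & X1_init and x2 not in X2_init:
            return False
        # Condition 2: inputs admissible at x2 must be admissible at every x1 in x2.
        if any(not U2_adm[x2] <= U1_adm[x1] for x1 in cell):
            return False
        # Condition 3: F1(x2, u) intersects x2'  =>  x2' is a successor in S2.
        for u in U2_adm[x2]:
            reach = set().union(*(F1[(x1, u)] for x1 in cell))
            if any(reach & cellp and x2p not in F2.get((x2, u), set())
                   for x2p, cellp in cover.items()):
                return False
    return True
```

Running it on a tiny hypothetical two-state system and its one-cell-per-state cover returns True exactly when all three conditions hold.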
0:05
So let's look at an example of constructing an abstraction using a cover set.
Consider we are given a simple system S1 as depicted on the left. And then
let's look at a cover of the state set of S1. We have 3 cover elements: 1 prime,
which is a subset of the state set of S1 containing states 1 and 5; 2 prime, which
is a cover element containing states 2 and 4; and 3 prime, a cover element
containing only state 3, right? So now we would like to build an abstraction of
system S1 using those cover elements. Okay, very good. First of all, we need to see
which of these cover elements can be initial states. So if you
look at system S1, 1 and 5 are initial states, right? And both 1 and 5 are inside
cover element 1 prime. So that means in the abstract domain, 1 prime, cover element
1 prime has to be an initial state. Here, see, 1 prime is an initial state as well
great. So we only have 1 initial state in the abstract domain. Now we need to
see which inputs are admissible at those cover elements. So look, if I look at
state 1, a and b are admissible. If I look at 5, a and b are admissible. Hence a
and b are also admissible at 1 prime. If you look at 2 and 4: in 2, only a is
admissible; in 4, only a is admissible. Then at cover element 2 prime, only a is
admissible. And if you look at 3, nothing is admissible. Hence at cover element 3
prime, nothing is admissible either, great. So we are able to come up with the set
of admissible inputs at those cover elements, right? So now let's put the
transitions among them. Okay, so if I'm in state 1 under a, where do I go? I go to
2. And if I'm in state 5 under a, where do I go? I go to 4 and 2. And then guess
what? 2 and 4 are both inside cover element 2 prime. So that means in the abstract
domain, from state 1 prime I go to state 2 prime under action a. What about action
b? If you are in state 1 under b, you go to state 1 or 5. And if you are in 5 under
b, you go to 1. So that means if you are in cover element 1 prime, under action b
you will have a self loop on cover element 1 prime, because in the concrete domain
you go to state 1 or 5, and those are inside cover element 1 prime. So then in the
abstract domain, if I'm in 1 prime under b, I have a self loop, as depicted here.
Okay, so now let's go to state 2 prime. Remember, we said only action a is
admissible, great. So if I'm in 2 under a, I might go to 3 or 4. And if I'm in 4
under a, I will go to 4, great. But look, 3 is inside cover element 3 prime and 4
is inside cover element 2 prime. So that means if you are in cover element 2 prime
under a, you might have a self loop or you might go to cover element 3 prime,
great. And eventually, if you are in state 3 prime, the cover element containing
state 3, there is no outgoing transition from state 3. So that means there will not
be any outgoing transition from cover element 3 prime either. So here we came up
with the abstraction of system S1 using cover elements of the state set of system
S1. And this relation Q, which is constructed using the set membership relation, is
in fact a feedback refinement relation from system S1 to system S2 as constructed
here.
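The worked example above can be reproduced in a few lines of Python. This is a sketch; the dictionary encoding of S1 is my own, but the states, inputs, and cover elements are exactly those from the lecture:

```python
# The example system S1 from the lecture: states 1..5, inputs a and b.
F1 = {  # concrete transitions: (state, input) -> set of successors
    (1, 'a'): {2},      (5, 'a'): {2, 4},
    (1, 'b'): {1, 5},   (5, 'b'): {1},
    (2, 'a'): {3, 4},   (4, 'a'): {4},
}
X1_init = {1, 5}

# Cover of the state set: 1' = {1, 5}, 2' = {2, 4}, 3' = {3}.
cover = {"1'": {1, 5}, "2'": {2, 4}, "3'": {3}}

# A cover element is initial if it contains an initial state of S1.
init2 = {x2 for x2, cell in cover.items() if cell & X1_init}

# An input is admissible at a cover element only if it is admissible
# at every concrete state inside that element.
adm2 = {x2: {u for u in ('a', 'b') if all((x1, u) in F1 for x1 in cell)}
        for x2, cell in cover.items()}

# Put a transition x2 -u-> x2' whenever F1(x2, u) intersects x2'.
trans2 = {}
for x2, cell in cover.items():
    for u in adm2[x2]:
        reach = set().union(*(F1[(x1, u)] for x1 in cell))
        trans2[(x2, u)] = {x2p for x2p, cellp in cover.items() if reach & cellp}
```

This reproduces the abstraction from the lecture: 1' is the only initial state, only a is admissible at 2', nothing at 3', and 2' has a self loop plus a transition to 3' under a.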
0:06
Now we have all the ingredients to talk about how can we build finite abstractions
of sample-and-hold linear control systems. Remember, in the first course when I was
talking about modeling, I was actually able to show you that many systems around
us, many physical systems, many electrical systems, can be modeled using linear
control systems. Now let's assume we are given continuous dynamics described by
this linear ordinary differential equation. Remember, ξ(t) is our state
vector; it contains the physical variables of our system. Now let's
look at the sample-and-hold version of the system, which is a simple system S in
which X is the state set, in this case the n-dimensional Euclidean space R^n, where
n is the number of state variables in the system. The set of initial states is a
subset of R^n. By the way, there is a typo here; just ignore this one. Our input
set is also a subset of the m-dimensional Euclidean space R^m. What about our
state transition function? Remember, if you start from state x under the input u,
you go to the state computed using this linear map, matrix A_d times x plus matrix
B_d times u, in which A_d is constructed from the matrix A of the continuous-time
system: it is the matrix exponential e to the power A times the sampling time. In
fact, you can use Matlab to easily compute this matrix with the function expm: you
pass it the matrix A multiplied by the sampling time, and it computes A_d for you.
B_d is simply this integral. In fact, there is a function in Matlab where you put A
and B and the sampling time, and it gives you A_d and B_d. It's called c2d,
continuous to discrete. Now, if you have A_d and B_d, you can easily compute the
successor of any state under any input, because it becomes A_d*x + B_d*u. You
simply do these two matrix-vector multiplications, sum them up, and that's the
successor state of the system. Now we are interested in constructing a finite
abstraction of the sample-and-hold version of a linear control system. Keep in
mind, the state set either is the
whole n-dimensional Euclidean space or is a subset of n-dimensional Euclidean
space. But keep in mind, the state set of our original system is continuous, hence
uncountable, and we would like to construct a finite abstraction of our system
using the notion of feedback refinement relation. What do we do? In order to build
the abstraction, we need to build a cover of the state set. Here, our cover
elements are simply hyper-rectangles. What do I mean by hyper-rectangle? Our cover
elements are hyper-rectangles centered at c with radius r. What does that mean?
This r is a vector in R^n and c is also a vector in R^n. The hyper-rectangle is
defined in the sense that when you are at the center c, the radius in each
direction, because it's an n-dimensional space, is going to be the corresponding
component of the vector r. By the way, this vector r is positive component-wise.
Each entry of r tells you how much the rectangle spreads out in each direction.
What about the input set of the abstract domain? It is some finite subset of the
original input set. X hat is the union of X_b hat and X_o hat. What is X_b hat? The
collection of those cover elements, which are in the form of hyper-rectangles. Then
X_o hat contains what is called the overflow symbol. For example, let's say your
state set is only a compact subset of R^n. Everything outside that compact or
bounded subset you can represent using one symbol alone, which is called X_o hat,
the overflow symbol. Anything outside the bounded or compact set, you can represent
by X_o hat.
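As an aside, the same A_d and B_d computation the lecture does with Matlab's expm and c2d can be sketched in Python with SciPy. The helper name c2d_zoh and the double-integrator example matrices are my own choices for illustration; the method uses the standard augmented-matrix-exponential trick:

```python
import numpy as np
from scipy.linalg import expm

def c2d_zoh(A, B, tau):
    """Zero-order-hold discretization: returns (A_d, B_d) with
    A_d = e^(A*tau) and B_d = (integral_0^tau e^(A*s) ds) @ B,
    computed via the matrix exponential of an augmented matrix."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * tau)          # exp([[A, B], [0, 0]] * tau)
    return E[:n, :n], E[:n, n:]

# Example (my own): a double integrator with sampling time tau = 0.1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
A_d, B_d = c2d_zoh(A, B, 0.1)

# Successor of state x under input u in the sample-and-hold system:
x, u = np.array([1.0, 2.0]), np.array([0.5])
x_next = A_d @ x + B_d @ u
```

For this example the successor is just two matrix-vector products plus a sum, exactly as described above.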
Play video starting at :5:54 and follow transcript5:54
We already defined what X hat is: X hat is the union of X_b hat and X_o hat. For
X_b hat, you simply put a uniform grid on the original compact or bounded state set
of your system. In this case, the cover elements are hyperrectangles centered at c
with radius r. The vector r has positive components, and since the cover elements
are hyperrectangles, in each dimension you can have a different radius. X_o hat
simply represents everything outside the original compact set representing the
state set of your system: everything outside is represented with one symbol, and we
call it X_o hat. U hat is simply a finite subset of U. The question is how the
state transition map F hat is defined. That's the main ingredient we still need to
understand. Remember, it requires a computation of the reachable set or its
over-approximation. Let me tell you how F hat is computed. If x hat is the overflow
state, then F hat(x hat, u) for any u is going to be the overflow state again.
Otherwise, if x hat is this
hyperrectangle, for example, here, denoted by this hyperrectangle in this figure,
in order to compute, F hat of this hyperrectangle on the input u, what we need to
do in the case that we are given a linear system of this form, with this
corresponding discrete sample-and-hold matrices, first we need to compute
c prime and r prime. What is c prime? c prime is A_d*c plus B_d*u: it is the center
of a new hyperrectangle containing all reachable states of the original system when
it starts from this hyperrectangle and applies the input u. Great, that's the
center of the resulting hyperrectangle containing all successor points. What about
its radius r prime? The radius is computed using this equation: the matrix
exponential of Metzler(A) times our sampling time, applied to the radius of the
original hyperrectangle. Remember, the sampling time comes in because we are
looking at the sample-and-hold version of our original system. Now, what you do is
construct this new hyperrectangle centered at c prime with radius r prime.
Remember, r prime is a vector with all components positive, because in different
dimensions you can have a different stretch for the hyperrectangle. Now what
you do is take the intersection of this newly constructed orange hyperrectangle
with all cover elements of your cover set. In this case, the cover set contains all
these hyperrectangles; they are also uniform. When you take the intersection, you
actually intersect with
Play video starting at :10:34 and follow transcript10:34
here. I put the intersection, and so this is one hyperrectangle,
Play video starting at :10:45 and follow transcript10:45
1, 2, 3, 4, 5, 6, 7, 8. The orange hyperrectangle, which was calculated with those
parameters, is intersecting with eight hyperrectangles, or cover elements, in the
cover set. That means in the abstract domain we have to put transitions,
non-deterministically, from this hyperrectangle to those eight hyperrectangles
which are intersecting with the orange hyperrectangle. Now you have to repeat this
procedure for all possible abstract inputs, all possible inputs inside U hat, and
for all possible cover elements. That's how we construct a finite representation of
our original system, or its sample-and-hold version, which is guaranteed to be
finite as long as we are interested in working in a bounded set of states of R^n.
You might ask how this Metzler(A) is constructed. This is the definition of
Metzler(A)_ij: the elements on the main diagonal of A you keep as they are, and for
any off-diagonal element, you take its absolute value.
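Assuming the formulas above (c prime = A_d*c + B_d*u and r prime = e^(Metzler(A)*tau) applied to r), one abstraction step can be sketched in Python: over-approximate the reachable set by a hyperrectangle and collect the grid cells it intersects. The grid convention here (cells of radius eta with centers at 2*eta*k) and the example matrices are my own choices for illustration, not from the lecture:

```python
import numpy as np
from scipy.linalg import expm
from itertools import product

def metzler(A):
    """Metzler(A): keep the main diagonal, absolute values off-diagonal."""
    M = np.abs(A)
    np.fill_diagonal(M, np.diag(A))
    return M

def reach_hyperrectangle(A, A_d, B_d, tau, c, r, u):
    """Over-approximate the successors of the rectangle (center c, radius r)
    under input u by a rectangle (c_prime, r_prime), per the lecture's
    formulas: c' = A_d c + B_d u,  r' = e^(Metzler(A) tau) r."""
    c_prime = A_d @ c + B_d @ u
    r_prime = expm(metzler(A) * tau) @ r
    return c_prime, r_prime

def intersecting_cells(c_prime, r_prime, eta):
    """Indices of uniform grid cells (radius eta per dimension, centers at
    2*eta*k) that intersect the rectangle [c' - r', c' + r']."""
    lo = np.ceil((c_prime - r_prime - eta) / (2 * eta)).astype(int)
    hi = np.floor((c_prime + r_prime + eta) / (2 * eta)).astype(int)
    return list(product(*[range(l, h + 1) for l, h in zip(lo, hi)]))

# Example (my own): double integrator, sampling time 0.1, grid radius 0.1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
tau, eta = 0.1, 0.1
A_d = expm(A * tau)
B_d = np.array([[0.005], [0.1]])   # precomputed for B = [[0], [1]] and this tau
c, r, u = np.zeros(2), np.array([0.1, 0.1]), np.array([0.0])
cp, rp = reach_hyperrectangle(A, A_d, B_d, tau, c, r, u)
cells = intersecting_cells(cp, rp, eta)   # abstract successors of this cell
```

Repeating this for every cover element and every abstract input yields the transition map of the finite abstraction, as described above.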
Play video starting at :12:28 and follow transcript12:28
This is how we construct finite abstraction of a continuous space system or an
infinite system using cover set construction, and the abstraction comes with the
feedback refinement relation, which is a simple set membership relation.
Play video starting at :12:58 and follow transcript12:58
I recommend you watch the next lecture, which has been developed by my former PhD
student. Mahmoud is going to present two of the tools we have developed in our lab,
namely SCOTS and also OmegaThreads.
Play video starting at :13:24 and follow transcript13:24
It's open source and available. You can use SCOTS to build a finite abstraction of
a continuous-space control system. By the way, the system doesn't need to be
linear; it can also be nonlinear. Not only that, SCOTS also comes with minimal and
maximal fixed-point iterations. You can actually use SCOTS to design a controller C
for safety specifications, reachability specifications, and persistence properties
as well; these are natively implemented inside SCOTS. If you're interested in
understanding how this finite abstraction construction works for control systems, I
recommend you also watch the next lecture; Mahmoud will explain. There are also
some instructions in the lecture on how to install the tool, and then Mahmoud
explains how to use the tool to construct finite abstractions and to use those
abstractions to design controllers for properties of interest that are natively
implemented inside SCOTS. Everything I explained here, putting a grid of
hypercubes, computing the over-approximation of the reachable set as the orange
hyperrectangle, is implemented inside the tool SCOTS. Again, I recommend you watch
the next lecture, which goes through installing SCOTS and playing with it on some
case studies.