Tentacular AI

Selmer Bringsjord1, Naveen Sundar G1, Atriya Sen1, Matthew Peveler1, Biplav Srivastava2, Kartik Talamadupula2
1 Rensselaer Polytechnic Institute (RPI); RAIR Lab
2 IBM Research
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
…
3 Quick Overview
We give a quick and informal overview of TAI. We have a set
of agents a1 , . . . , an . Each agent has an associated (implicit
or explicit) contract that it should adhere to. Consider one
particular agent τ. During the course of this agent's lifetime, …
⁶ The layering of TAI is in fact anticipated by the increasingly powerful axiom-centric cognition described in [Bringsjord, 2015], which takes Peano Arithmetic as central.

⁷ Though out of reach for now, given that our chief objective is but an informative introduction to TAI, the relationship between our conception of cognitive consciousness, which is central to TAI agents (Attribute #4 above), and consciousness as conceived by Chella is a fertile topic for future investigation. A multi-faceted discussion of artificial consciousness is, by the way, to be had in [Chella and Manzotti, 2007]. For a first-draft axiomatization of the brand of consciousness central to TAI agents, see [Bringsjord et al., 2018].

Figure 2: Space of Logical Calculi. There are five dimensions that cover the entire, vast space of logical calculi. The due-West dimension holds those calculi powering the Semantic Web (which are generally short of first-order logic = L1), and includes so-called description logics. Both NW and NE include logical systems with wffs that are allowed to be infinitely long, and are, needless to say, hard to compute with and over. SE is higher-order logic, which has a robust automated theorem-proving community gathered around it. It is the SW direction that holds the cognitive calculi described in the present paper and associated with TAI; the star refers to those specific cognitive calculi called out in these pages by us.
To make the above notions more concrete, we use a version of a computational logic. The logic we use is the deontic cognitive event calculus (DCEC). This calculus is a first-order modal logic. Figure 2 shows the region where DCEC is located in the overall space of logical calculi. DCEC belongs to the cognitive calculi family of logical calculi (denoted by a star in Figure 2 and expanded in Figure 3). DCEC has a well-defined syntax and inference system; see Appendix A of [Govindarajulu and Bringsjord, 2017a] for a full description. The inference system is based on natural deduction [Gentzen, 1935], and includes all the introduction and elimination rules for first-order logic, as well as inference schemata for the modal operators and related structures.

This system has been used previously in [Govindarajulu and Bringsjord, 2017a; Govindarajulu et al., 2017] to automate versions of the doctrine of double effect (DDE), an ethical principle with deontological and consequentialist components. While describing the calculus in full is beyond the scope of this paper, we give a quick overview of the system below.

… inheritance programming language. We show below some of the important sorts used in DCEC.

Sort        Description
Agent       Human and non-human actors.
Time        The Time sort stands for time in the domain; times can be simple, such as ti, or complex, such as birthday(son(jack)).
Event       Used for events in the domain.
ActionType  Action types are abstract actions. They are instantiated at particular times by actors. Example: eating.
Action      A subtype of Event for events that occur as actions by agents.
Fluent      Used for representing states of the world in the event calculus.
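As an illustration only (our own sketch, not part of the DCEC formalism or the authors' implementation), the sort hierarchy above can be mirrored as host-language types; the class and function names below are hypothetical, with action(a, α) building an Action event from an agent and an action type:

```python
from dataclasses import dataclass

# Hypothetical encoding of the DCEC sorts as Python types (illustrative only).
@dataclass(frozen=True)
class Agent:             # human and non-human actors
    name: str

@dataclass(frozen=True)
class ActionType:        # abstract actions, e.g. "eating"
    name: str

@dataclass(frozen=True)
class Event:             # things that happen in the domain
    label: str

@dataclass(frozen=True)
class Action(Event):     # subtype of Event: an event performed by an agent
    agent: Agent = None
    atype: ActionType = None

def action(a: Agent, alpha: ActionType) -> Action:
    """Mirror of the DCEC term action(a, α)."""
    return Action(label=f"action({a.name}, {alpha.name})", agent=a, atype=alpha)

jack = Agent("jack")
running = ActionType("running")
print(action(jack, running).label)  # action(jack, running)
```

Making Action a subtype of Event in the host language directly reflects the sort table's subtyping claim.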
Dialects of DCEC have also been used to formalize and automate highly intensional (i.e. cognitive) reasoning processes, such as the false-belief task [Arkoudas and Bringsjord, 2008] and akrasia (succumbing to the temptation to violate moral principles) [Bringsjord et al., 2014]. Arkoudas and Bringsjord [2008] introduced the general family of cognitive event calculi to which DCEC belongs, by way of their formalization of the false-belief task. More precisely, DCEC is a sorted (i.e. typed) quantified modal logic (also known as sorted first-order modal logic) that includes the event calculus, a first-order calculus used for commonsense reasoning.

The syntax has two components: a first-order core and a modal system that builds upon this first-order core. The figures below show the syntax and inference schemata of DCEC. The first-order core of DCEC is the event calculus [Mueller, 2006]. Commonly used function and relation symbols of the event calculus are included. Fluents, events, and times are the three major sorts of the event calculus. Fluents represent states of the world as first-order terms. Events are things that happen in the world at specific instants of time. Actions are events that are carried out by an agent. For any action type α and agent a, the event corresponding to a carrying out α is given by action(a, α). For instance, if α is "running" and a is "Jack", action(a, α) denotes "Jack is running". Other calculi (e.g. the situation calculus) for modeling commonsense and physical reasoning can easily be swapped in place of the event calculus.

[Figure 3: the cognitive calculi family, including DCEC∗ and DCEC∗e.]
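To make the event-calculus core concrete, here is a minimal, illustrative sketch (our own simplification over discrete times, not the authors' system): a fluent holds at time t if some earlier event initiated it and no intervening event terminated it. All predicate and fluent names are hypothetical:

```python
# Illustrative mini event calculus over discrete time points.
happens = {("action(jack, enterRoom)", 1), ("action(jack, leaveRoom)", 3)}
initiates = {"action(jack, enterRoom)": "inRoom(jack)"}
terminates = {"action(jack, leaveRoom)": "inRoom(jack)"}

def holds(fluent: str, t: int) -> bool:
    """True iff some event initiated `fluent` before t and no later event
    terminated it before t."""
    started = [s for (e, s) in happens if s < t and initiates.get(e) == fluent]
    if not started:
        return False
    last_start = max(started)
    ended = [s for (e, s) in happens
             if last_start < s < t and terminates.get(e) == fluent]
    return not ended

print(holds("inRoom(jack)", 2))  # True: initiated at time 1
print(holds("inRoom(jack)", 4))  # False: terminated at time 3
```

The same query interface could be backed by the situation calculus instead, which is the swappability the paragraph above notes.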
4.1 Syntax

[The DCEC∗ syntax box appears here.]

… P for perception and public announcements, B for belief, D for desire, I for intention, and, finally and crucially, a dyadic deontic operator …
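For illustration only (our own hypothetical term structure, not a fixed DCEC API), the modal layer can be represented as a small AST whose constructors mirror the operators just listed, e.g. B for belief and the dyadic deontic operator O:

```python
from dataclasses import dataclass
from typing import Union

# Illustrative AST for a fragment of the DCEC modal layer (hypothetical names).
@dataclass(frozen=True)
class Atom:
    pred: str                      # e.g. "clean(kitchen)"

@dataclass(frozen=True)
class B:                           # B(a, t, φ): agent a believes φ at time t
    agent: str
    time: str
    phi: "Formula"

@dataclass(frozen=True)
class O:                           # O(a, t, φ, χ): a is obligated to do χ when φ
    agent: str
    time: str
    phi: "Formula"
    chi: "Formula"

Formula = Union[Atom, B, O]

# "a believes at t that it is obligated to clean when the kitchen is dirty":
f = B("a", "t", O("a", "t", Atom("dirty(kitchen)"), Atom("clean(kitchen)")))
print(isinstance(f.phi, O))  # True: modal operators nest freely
```

Because formulas are ordinary terms, operators nest to arbitrary depth, which is what lets the calculus express iterated attitudes such as beliefs about obligations.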
4.2 Inference Schemata

The figure below shows a fragment of the inference schemata for DCEC. First-order natural deduction introduction and elimination rules are not shown. Inference schemata IK and IB let us model idealized systems that have their knowledge and beliefs closed under the DCEC proof theory. While humans are not deductively closed, these two rules let us model more closely how more deliberate agents such as organizations, nations, and other strategic actors reason. (Some dialects of cognitive calculi restrict the number of iterations on intensional operators.) I13 ties intentions directly to perceptions (this model does not take into account agents that could fail to carry out their intentions). I14 dictates how obligations get translated into known intentions.

Inference Schemata (Fragment)

[IK]  K(a, t1, Γ), Γ ⊢ φ, t1 ≤ t2  ⟹  K(a, t2, φ)
[IB]  B(a, t1, Γ), Γ ⊢ φ, t1 ≤ t2  ⟹  B(a, t2, φ)
[I4]  K(a, t, φ)  ⟹  φ
[I13] t < t′, I(a, t, ψ)  ⟹  P(a, t′, ψ)
[I14] B(a, t, φ), B(a, t, O(a, t, φ, χ)), O(a, t, φ, χ)  ⟹  K(a, t, I(a, t, χ))

4.3 Semantics

The semantics for the first-order fragment is the standard first-order semantics. The truth-functional connectives ∧, ∨, →, ¬ and quantifiers ∀, ∃ for pure first-order formulae all have the standard first-order semantics. The semantics of the modal operators differs from what is available in the so-called Belief-Desire-Intention (BDI) logics [Rao and Georgeff, 1991] in many important ways. For example, DCEC explicitly rejects possible-worlds semantics and model-based reasoning, instead opting for a proof-theoretic semantics and the associated type of reasoning commonly referred to as natural deduction [Gentzen, 1935; Francez and Dyckhoff, 2010]. Briefly, in this approach, meanings of modal operators are defined via arbitrary computations over proofs.

⁸ An overview of this list is given lucidly in [McNamara, 2010].
⁹ See [Govindarajulu and Bringsjord, 2017a] for an example of how obligations and prohibitions can be used in DCEC.

Figure 4: TAI Working Through Time. A TAI agent initially considers a goal and then has to produce a proof for the non-existence of a non-tentacular plan that uses only this agent. Then τ recruits a set of other relevant agents to help with its goal.

5 Defining TAI

We denote the state-of-affairs at any time t by a set of formulae Γ(t). This set of formulae will also contain any obligations and prohibitions on different agents. For each agent ai at time t, there is a contract c(ai, t) ⊆ Γ(t) that describes ai's obligations, prohibitions, etc. An agent a at any time t then comes up with a goal g so that its contract is satisfied.⁹ The agent believes that if g does not hold then its contract at some future t + δ will be violated:

B(a, t, ¬g → ¬⋀c(a, t + δ))

Then the agent tries to come up with a plan involving a sequence of actions to satisfy the goal.

We make these notions more precise. An agent a has a set of actions that it can perform at different time points. For instance, a vacuuming agent can have movement along a plane as its possible actions, while an agent on a phone can have displaying a notification as an action. We denote this by can(a, α, t), with the following additional axiom:

Axiom: ¬can(a, α, t) → ¬happens(action(a, α), t)

We now define a consistent plan below:

Consistent Plan

A consistent plan ρ⟨a1,...,an⟩ at time t is a sequence of agents a1, . . . , an with corresponding actions α1, . . . , αn and times t1, . . . , tn such that Γ ⊢ (t < ti < tj) for i < j and for all
agents ai we have:

1. can(ai, αi, ti);
2. happens(action(ai, αi), ti) is consistent with Γ(t).

Note that a consistent plan ρ⟨...⟩ can be represented by a term in our language. We introduce a new sort Plan and a variable-arity predicate symbol plan(ρ, a1, . . . , an) which says that ρ is a plan involving a1, . . . , an.

A goal is simply any formula g. A consistent plan satisfies a goal g if:

Γ(t) ∪ {happens(action(a1, α1), t1), . . . , happens(action(an, αn), tn)} ⊢ g

We use Γ ⊢ (ρ → g) as a shorthand for the above. The above definitions of plans and goals give us the following important constraint needed for defining TAI. This differentiates our planning formalism from other planning systems and makes it more appropriate as an architecture for a general-purpose tentacular AI system.

Uniform Planning Constraint

Plans and goals should be represented and reasoned over in the language of the planning system.

Leveraging the above requirement, we can define two levels of TAI agents. A Level(1) TAI system corresponding to an agent τ is a system that comes up with a goal g at time t0 to satisfy its contract, and produces a proof that there is no consistent plan that involves only the agent τ. Then τ comes up with a plan that involves one or more other agents. A Level(1) TAI agent starts with knowledge about what actions are possible for other agents.

Level(1) TAI Agents

Prerequisite: For any a, α, t, we have:
Γ ⊢ can(a, α, t) → K(τ, t0, can(a, α, t))

Then:
1. τ produces a proof that no plan exists for g involving just itself, and τ declares that there is no such plan:
Γ ⊢ S(τ, t0, ¬∃ρ : (plan(ρ, τ) ∧ ρ → g))
…

Level(2) TAI Agents

Prerequisite: For any a, α, t, we have:
Γ ⊢ can(a, α, t) → B(τ, t0, can(a, α, t))

The TAI agents above can be considered first-order tentacular agents. We can also have a higher-order TAI agent that intentionally engages in actions that trigger one or more other agents to act in tentacular fashion as described above. The need for the uniform planning constraint is clearer when we consider higher-order agents.

6 A Hierarchy of TAI Agents

The TAI formalization above gives rise to multiple hierarchies of tentacular agents. We discuss some of these below.

Syntactic Goal Complexity: The goal g can range in complexity from simple propositional statements, e.g. cleanKitchen, to first-order statements, e.g. ∀r : Room : clean(r), and to intensional statements representing cognitive states of other agents, e.g. B(a, now, B(b, now, ∀r : clean(r))).

Goal Variation: According to the definition above, an agent a qualifies as being tentacular if it plans for just one goal g in tentacular fashion as laid out in the conditions above. We could have agents that plan for a number of varied and different goals in tentacular fashion.

Plan Complexity: For many goals, there will usually be multiple plans involving different actions (with different costs and resources used) and executed by different agents.

[Figure key: Agent; Environment (e.g. House 18, NY State Road System); s = Sensor; e = Effector.]
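As an illustration only (our own simplification, not the authors' proof-theoretic system), the two Consistent Plan conditions can be checked procedurally when Γ(t) is reduced to a finite set of can(a, α, t) facts; all names below are hypothetical:

```python
# Illustrative check of the Consistent Plan conditions, assuming (our
# simplification) Γ(t) is just a finite set of can(a, α, t) facts plus the
# axiom ¬can(a, α, t) → ¬happens(action(a, α), t).
def consistent_plan(steps, gamma_can, t):
    """steps: list of (agent, action_type, time) triples in plan order;
    gamma_can: set of permitted (agent, action_type, time) triples;
    t: the planning time."""
    times = [ti for (_, _, ti) in steps]
    # Ordering condition: Γ ⊢ t < ti < tj for i < j (strictly increasing).
    if not all(t < ti for ti in times) or times != sorted(set(times)):
        return False
    # Condition 1: can(ai, αi, ti). Condition 2 then follows via the axiom:
    # a step outside gamma_can would make happens(action(ai, αi), ti)
    # inconsistent with Γ(t).
    return all((a, alpha, ti) in gamma_can for (a, alpha, ti) in steps)

gamma = {("roomba", "vacuum", 1), ("phone", "notify", 2)}
print(consistent_plan([("roomba", "vacuum", 1), ("phone", "notify", 2)],
                      gamma, 0))  # True
print(consistent_plan([("phone", "notify", 2), ("roomba", "vacuum", 1)],
                      gamma, 0))  # False: times not increasing
```

In the paper's own setting this check is carried out by proof search over Γ(t) rather than by set membership; the sketch only shows the shape of the two conditions.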