
Tentacular Artificial Intelligence, and the Architecture Thereof, Introduced

Selmer Bringsjord¹, Naveen Sundar G.¹, Atriya Sen¹, Matthew Peveler¹, Biplav Srivastava², Kartik Talamadupula²

¹ Rensselaer Polytechnic Institute (RPI); RAIR Lab
² IBM Research

Abstract

We briefly introduce herein a new form of distributed, multi-agent artificial intelligence, which we refer to as "tentacular." Tentacular AI is distinguished by six attributes, which among other things entail a capacity for reasoning and planning based in highly expressive calculi (logics), and which enlists subsidiary agents across distances circumscribed only by the reach of one or more given networks.

1 Introduction

We briefly introduce herein a new form of distributed, multi-agent artificial intelligence. An AI artifact is currently understood as an agent with a predetermined set of goals, a set of fixed inputs and outputs, and obligations and permissions. The agent does not have any leeway in accomplishing its goals or adhering to its obligations, prohibitions, or other legal/ethical principles that bind it. Do we need agents that go beyond these limitations? A humble example follows: During your daily commute to work, an agent ac in your car observes that there is more traffic than usual headed toward the local store. It then consults a weather service and finds that a major storm is headed toward your town. ac conveys this information to ah, an agent in your home. ah then communicates with an agent ap on your phone and finds out that you do not know about the storm coming your way, as you have not made any preparations for it; and, as further evidence of your ignorance, you have not read any notifications about the storm. ah then infers from your calendar that you may not have enough time to get supplies after you read your notifications later in the day. ah commands ac to recommend to you a list of supplies to shop for on your way home, including at least n items in certain categories (e.g. 3 gallons of bottled water).

AI of today, as defined by any orthodox, comprehensive overview of it (e.g. [Russell and Norvig, 2009]), consists in the design, creation, implementation, and analysis of artificial agents.¹ Each such agent a takes in information about its particular environment E (i.e. takes in percepts of E), engages in some computation, and then, on the strength of that computation, performs an action or actions in that environment. (Of course, for an agent that persists, this cycle iterates through time.) On this definition, a computer program that implements, say, the factorial function n! qualifies as an artificial agent (let's dub it aFAC), one operating in the environment EN of basic arithmetic; and the human who has conceived and written this program has built an artificial agent. While plenty of the artificial agents touted today are rather more impressive than aFAC, our aim is to bring to the world, within a decade, a revolutionary kind of AI that yields artificial agents with a radically higher level of intelligence (including intelligence high enough to qualify the agents as cognitively conscious) and power. This envisioned AI we call Tentacular AI, or just 'TAI' for short (rhymes with 'pie'). Before presenting architectural-level information about TAI, we give an example that's a bit more robust than our first-paragraph one.

Let's suppose that an AI agent aHOME overseeing a home is charged with the single, unassuming task of moving a cup on the home's kitchen table onto a saucer that is also on that table. How shall the agent make this goal happen? If the AI can delegate to a robot in the house capable of manipulating standard tabletop objects in a narrow tabletop environment ETABLE, and that robot is at the table or can get there in a reasonable amount of time, then of course aHOME can direct the robot to pick up the cup and put it on the saucer. This is nothing to write home about, since AI of today has given us agent-robot combos that, in labs (our own, e.g.) and soon enough in homes across the technologized world, can do this kind of thing routinely and reliably. In fact, this kind of capability to find plans and move tabletop objects around in order to obtain goals in tabletop environments² has been a solved problem from the research point of view for decades (e.g. [Genesereth and Nilsson, 1987]). Not only that, but there are longstanding theorems telling us that the intrinsic difficulty of finding plans to move various standard tabletop objects in arbitrary starting configurations in tabletop environments is algorithmically solvable and generally tractable.³

¹ This is the exact phrase used by Russell and Norvig [2009]. Other comprehensive overviews match the Russell-Norvig orientation; e.g. [Luger, 2008].
² Such environments are variants of those traditionally termed 'blocks-worlds.'
³ E.g., see [Gupta, 1992].
However, AI of today is, if you will, living a bit of a lie. Why? Because in real life, the agent aHOME would not be operating in only the tabletop environment ETABLE; rather, the idea is that this agent should be able to understand and manage the overall environment EHOME of the home, which surely comprises much more than the stuff standardly on one kitchen table! Homes can have parents, kids, dogs, visitors, ... ad indefinitum.

For example, suppose that aHOME finds that the tabletop robot is broken, having been mangled by the home's frisky beagle. Then how does aHOME solve the problem of getting the saucer moved? Artificial agents of today capable of the kind of planning that worked before this complication are now hamstrung. But not so a TAI agent. One reason is that TAI agents are capable of human-level communication. In certain circumstances the most efficient way for the agent aHOME to accomplish the task may be to simply say politely via I/IoT (Internet or Internet of Things), through a speaker or a smartphone or a pair of smart glasses, to a human in the home (of whose mind the TAI agent has a model) sitting at the table in question: "Would you be so kind as to place that cup on top of the saucer?" Of course, aHOME may not be so fortunate as to have the services of a human available: maybe no human is at home, yet the task must be completed. In this case, a TAI agent can still get things done, in creative fashion. E.g., suppose that in the home a family member received beforehand a small blimp that can fly around inside the home and pick things up.⁴ The TAI agent might then activate and use this blimp through I/IoT to put the cup atop the saucer.

But what, more precisely, is a TAI agent? We say that a TAI agent must be:

D1 Capable of problem-solving. Whereas, as we've noted, standard AI counts simple mappings from percepts to actions as bona fide AI, TAI agents must be capable of problem-solving. This may seem like an insignificant first attribute of TAI, but a consequence that stems from this attribute should be noted: since problem-solving entails capability across the main sub-divisions of AI, TAI agents have multi-faceted power. Problem-solving requires capability in these sub-areas of AI: planning, reasoning, learning, communicating, creativity (at least relatively simple forms thereof), and, for making physical changes in physical environments, cognitive robotics.⁵ Hence, all TAI agents can plan, reason, learn, communicate; and they are creative and capable of carrying out physical actions.

D2 Capable of solving at least important instances of problems that are at and/or above Turing-unsolvable problems. AI of today, when capable of solving problems, invariably achieves this success on problems that are merely algorithmically solvable and tractable (e.g., checkers, chess, Go).

D3 Able to supply justification, explanation, and certification of supplied solutions, how they were arrived at, and that these solutions are safe/ethical. We thus say that the problem-solving of a TAI agent is rationalist. This label reflects the requirement that any proposed solution to the problem discovered by a TAI agent must be accompanied by a justification that defends and explains that the proposed solution is a solution, and, when appropriate, also that the solution (and indeed perhaps the process used to obtain the solution) has certain desirable properties. Minimally, the justification must include an argument or proof for the relevant conclusions. In addition, the justification must be verified, formally; we thus say that certification is provided by a TAI agent.

D4 Capable of "theory-of-mind" level reasoning, planning, and communication. Discussion of this attribute is omitted to save space; see e.g. [Arkoudas and Bringsjord, 2009] for our lab's first foray into automated reasoning at this level. (The truth is, it's more accurate to say the fourth requirement is that a TAI agent must have cognitive consciousness, as this phenomenon is explained and axiomatized in [Bringsjord et al., 2018].)

D5 Capable of creativity, minimally to the level of so-called m-creativity. Creativity in artificial agents, and the engineering thereof, has been discussed in a number of places by Bringsjord (e.g. [Bringsjord and Ferrucci, 2000]), but recently Bringsjord and Sen [2016] have called for a form of creativity in artificial agents using I/IoT.

D6 Has "tentacular" power wielded throughout I/IoT, Edge Computing, and cyberspace. This is the most important attribute possessed by TAI agents, and is reflected in the 'T' in 'TAI.' To say that such agents have tentacular problem-solving power is to say that they can perceive and act through the I/IoT (or equivalent networks) and cyberspace, across the globe. TAI agents thus operate in a planet-sized, heterogeneous environment that spans the narrower, fixed environments used to define conventional, present-day AI, such as is found in [Russell and Norvig, 2009].

⁴ Such a blimp is a simple adaptation of what is readily available as a relatively inexpensive toy.
⁵ Cognitive robotics is defined in [Levesque and Lakemeyer, 2007] as a type of robotics in which all substantive actions performed by the robots are a function of the cognitive states (e.g. beliefs & intentions) of these robots.

2 Related Work

Given the limited scope of the present paper, we only make some brief comments about related work, which can be partitioned for convenience into that which can plausibly be regarded as on the road toward the level of expressivity and associated automated reasoning that TAI requires, and prior work that provides a stark and illuminating contrast with TAI.

First, as to work we see as reaching toward TAI, we note that recently Miller et al. [2018] present a planning framework that they call social planning, in which the agent under consideration can plan and act in a manner that takes account of the beliefs of other agents. The goal for an agent in social planning can either be a particular state of the external world, or a set of beliefs of other agents (or a mix of both). The system is built upon a simplified version of a propositional modal logic (unlike our system, presented below, which is more expressive and can accommodate more complex goals, e.g. goals over unbounded domains or goals that involve numerical quantification; such statements require going beyond propositional modal logic). In addition, certainly the NARS system from Wang [2006] has elements that one can rationally view as congenial to TAI. For instance, NARS is multi-layered and reasoning-centric. On the other hand, the 'N' in 'NARS' is for 'Non-axiomatic,' and TAI, and indeed the entire approach to logicist AI pursued by at least Bringsjord and Govindarajulu, seeks whenever possible to leverage automated reasoning over powerful axiom systems, such as Peano Arithmetic.⁶
In addition, TAI is deeply and irreducibly intensional, while NARS appears to be purely extensional. Clever management of computational resources in TAI is clearly going to be key, and we see the work of Thorisson and colleagues (e.g. [Helgason et al., 2012]) to be quite relevant to TAI and the challenges the implementation of it will encounter. For a final example of work that is generally aligned with TAI, we bring to the reader's attention a recent comprehensive treatment of proof-based work in computer science: [Arkoudas and Musser, 2017]. As TAI is steadfastly proof-based AI, this tome provides very nice coverage of the kind of work required to implement TAI.

Secondly, for illuminating contrast, we note first that some have considered the concept of corporate intelligence composed of multiple agents, including machines, where inspiration comes from biology. A case in point is the fascinating modeling in [Seidita et al., 2016].⁷ In our case, TAI is a thoroughly formal conception independent of terrestrial biology, one that is intended to include types of agents of greater intelligence than those currently on Earth. Another illuminating contrast comes via considering established languages for planning that are purely extensional in nature (e.g. PDDL, which in its early form is given in [Mcdermott et al., 1998]), and therefore quite different from planning of the type that is required for TAI, which must be intensional in character (since cognitive calculi are intensional computational logics). MA-PDDL is an extension of PDDL for handling domains with multiple agents with varying actions and goals [Kovacs, 2012], and as such would seem to be relevant to TAI. But unlike the social planning discussed above, MA-PDDL does not aim to change the beliefs (nor for that matter other epistemic attitudes) of other agents. While MA-PDDL could be used to do so, representing beliefs and other cognitive states in PDDL's extensional language can lead to undesirable consequences, as demonstrated in [Bringsjord and Govindarajulu, 2012]. Extensions of the original PDDL (PDDL1), for example PDDL3 [Gerevini and Long, 2004], are still extensional in nature.

This concludes the related-work section. Note that below we describe and define TAI from the point of view of AI planning.

⁶ The layering of TAI is in fact anticipated by the increasingly powerful axiom-centric cognition described in [Bringsjord, 2015], which takes Peano Arithmetic as central.
⁷ Though out of reach for now, given that our chief objective is but an informative introduction to TAI, the relationship between our conception of cognitive consciousness, which is central to TAI agents (Attribute #4 above), and consciousness as conceived by Chella, is a fertile topic for future investigation. A multi-faceted discussion of artificial consciousness is by the way to be had in [Chella and Manzotti, 2007]. For a first-draft axiomatization of the brand of consciousness central to TAI agents, see [Bringsjord et al., 2018].

3 Quick Overview

We give a quick and informal overview of TAI. We have a set of agents a1, ..., an. Each agent has an associated (implicit or explicit) contract that it should adhere to. Consider one particular agent τ. During the course of this agent's lifetime, the agent comes up with goals to achieve so that its contract is not violated. Some of these goals might require an agent to exercise some or all of the six attributes D1–D6. We formalize this using planning as shown in Figure 1. As shown in the figure, if some goal is not achievable on its own, τ can seek to recruit other agents by leveraging their resources, beliefs, obligations etc.

Figure 1: TAI Informal Overview. We have an architecture for how a TAI agent τ might operate. τ continuously comes up with goals based on its contract. If a goal is not achievable using τ's own resources, τ has to employ other agents in achieving this goal. To successfully do so τ would need to have one or more of the D1–D6 attributes. (The flowchart shows τ moving from the obligations and prohibitions of its contract to a goal, trying to generate a plan of actions α1, α2, ..., αn and executing it, or else analyzing other relevant agents and coming up with other goals.)
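For readers who prefer code to diagrams, the following self-contained Python sketch is ours alone and is not part of the formal system developed below: the contract, goal, plan, and ability structures are deliberately toy stand-ins. It captures only the shape of the Figure 1 loop: derive a goal from the contract, try to plan with one's own resources, and only then recruit other agents.

```python
# A minimal, self-contained sketch (ours) of the Figure-1 control loop for a
# TAI agent tau. Goals, plans, and abilities are deliberately simplistic stand-ins.

def generate_goal(contract, state):
    """Return the first contractual condition not satisfied in the current state."""
    return next((g for g in contract if g not in state), None)

def plan_with(agents, goal, abilities):
    """Return (agent, action) if some agent in `agents` can bring about `goal`."""
    for agent in agents:
        if goal in abilities.get(agent, set()):
            return (agent, f"achieve({goal})")
    return None          # no plan using only these agents

def tai_step(tau, contract, state, other_agents, abilities):
    goal = generate_goal(contract, state)
    if goal is None:
        return None                                   # contract currently satisfied
    plan = plan_with([tau], goal, abilities)          # first try tau's own resources
    if plan is None:                                  # tentacular step: recruit help
        plan = plan_with([tau] + other_agents, goal, abilities)
    return plan

# Example: tau cannot move the cup itself, but the tabletop robot can.
abilities = {"tau": set(), "robot": {"cup_on_saucer"}}
print(tai_step("tau", ["cup_on_saucer"], set(), ["robot"], abilities))
# -> ('robot', 'achieve(cup_on_saucer)')
```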
4 The Formal System

To make the above notions more concrete, we use a version of a computational logic. The logic we use is the deontic cognitive event calculus (DCEC). This calculus is a first-order modal logic. Figure 2 shows the region where DCEC is located in the overall space of logical calculi. DCEC belongs to the cognitive calculi family of logical calculi (denoted by a star in Figure 2 and expanded in Figure 3). DCEC has a well-defined syntax and inference system; see Appendix A of [Govindarajulu and Bringsjord, 2017a] for a full description. The inference system is based on natural deduction [Gentzen, 1935], and includes all the introduction and elimination rules for first-order logic, as well as inference schemata for the modal operators and related structures.

Figure 2: Space of Logical Calculi. There are five dimensions that cover the entire, vast space of logical calculi. The due-West dimension holds those calculi powering the Semantic Web (which are generally short of first-order logic = L1), and includes so-called description logics. Both NW and NE include logical systems with wffs that are allowed to be infinitely long, and are needless to say hard to compute with and over. SE is higher-order logic, which has a robust automated theorem-proving community gathered around it. It's the SW direction that holds the cognitive calculi described in the present paper, and associated with TAI; and the star refers to those specific cognitive calculi called out in these pages by us.

This system has been used previously in [Govindarajulu and Bringsjord, 2017a; Govindarajulu et al., 2017] to automate versions of the doctrine of double effect (DDE), an ethical principle with deontological and consequentialist components. While describing the calculus is beyond the scope of this paper, we give a quick overview of the system below. Dialects of DCEC have also been used to formalize and automate highly intensional (i.e. cognitive) reasoning processes, such as the false-belief task [Arkoudas and Bringsjord, 2008] and akrasia (succumbing to temptation to violate moral principles) [Bringsjord et al., 2014]. Arkoudas and Bringsjord [2008] introduced the general family of cognitive event calculi to which DCEC belongs, by way of their formalization of the false-belief task. More precisely, DCEC is a sorted (i.e. typed) quantified modal logic (also known as sorted first-order modal logic) that includes the event calculus, a first-order calculus used for commonsense reasoning.

Figure 3: Cognitive Calculi. The cognitive calculi family is composed of a number of related calculi. Arkoudas and Bringsjord introduced the first member in this family, CEC, to model the false-belief task. The smallest member in this family, µC, has been used to model uncertainty in quantified beliefs [Govindarajulu and Bringsjord, 2017b]. DCEC and variants have been used in the modelling of ethical principles and theories and their implementations. (The figure shows the nested members DCEC*e, DCEC*, DCEC, CEC, and µC within the family of cognitive calculi, CC.)

4.1 Syntax

As mentioned above, DCEC is a sorted calculus. A sorted system can be regarded as analogous to a typed single-inheritance programming language. We show below some of the important sorts used in DCEC.

Sort        Description
Agent       Human and non-human actors.
Time        The Time type stands for time in the domain, e.g. simple, such as ti, or complex, such as birthday(son(jack)).
Event       Used for events in the domain.
ActionType  Action types are abstract actions. They are instantiated at particular times by actors. Example: eating.
Action      A subtype of Event for events that occur as actions by agents.
Fluent      Used for representing states of the world in the event calculus.

The syntax has two components: a first-order core and a modal system that builds upon this first-order core. The figures below show the syntax and inference schemata of DCEC. The first-order core of DCEC is the event calculus [Mueller, 2006]. Commonly used function and relation symbols of the event calculus are included. Fluents, events and times are the three major sorts of the event calculus. Fluents represent states of the world as first-order terms. Events are things that happen in the world at specific instants of time. Actions are events that are carried out by an agent. For any action type α and agent a, the event corresponding to a carrying out α is given by action(a, α). For instance, if α is "running" and a is "Jack", action(a, α) denotes "Jack is running". Other calculi (e.g. the situation calculus) for modeling commonsense and physical reasoning can easily be switched out in place of the event calculus.

DCEC Syntax

S ::= Agent | ActionType | Action ⊑ Event | Moment | Fluent

f ::= action : Agent × ActionType → Action
      | initially : Fluent → Formula
      | holds : Fluent × Moment → Formula
      | happens : Event × Moment → Formula
      | clipped : Moment × Fluent × Moment → Formula
      | initiates : Event × Fluent × Moment → Formula
      | terminates : Event × Fluent × Moment → Formula
      | prior : Moment × Moment → Formula

t ::= x : S | c : S | f(t1, ..., tn)

φ ::= q : Formula | ¬φ | φ ∧ ψ | φ ∨ ψ | ∀x : φ(x)
      | P(a, t, φ) | K(a, t, φ) | C(t, φ) | S(a, b, t, φ) | S(a, t, φ)
      | B(a, t, φ) | D(a, t, φ) | I(a, t, φ)
      | O(a, t, φ, (¬)happens(action(a*, α), t′))

The modal operators present in the calculus include the standard operators for knowledge K, belief B, desire D, intention I, etc. The general format of an intensional operator is K(a, t, φ), which says that agent a knows at time t the proposition φ. Here φ can in turn be any arbitrary formula. Also, note the following modal operators: P for perceiving a state, C for common knowledge, S for agent-to-agent communication and public announcements, B for belief, D for desire, I for intention, and, finally and crucially, a dyadic deontic operator O that states when an action is obligatory or forbidden for agents. It should be noted that DCEC is one specimen in a family of extensible cognitive calculi.
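As a purely illustrative aside, and not the authors' implementation, DCEC-style terms and intensional operators can be mocked up as nested data structures. The Python sketch below is a much-simplified assumption-laden rendering of the grammar just given; nothing in it is canonical, and operators with other arities (such as C(t, φ) and S(a, b, t, φ)) would need their own shapes.

```python
# A rough, illustrative encoding (ours, not part of DCEC) of a few constructs
# from the grammar above as plain Python data.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Term:
    functor: str
    args: Tuple["Term", ...] = ()

def action(agent: Term, action_type: Term) -> Term:
    """Build the term action(a, alpha) from the first-order core."""
    return Term("action", (agent, action_type))

@dataclass(frozen=True)
class Modal:
    operator: str        # e.g. "K", "B", "P", "D", "I"
    agent: Term
    time: Term
    formula: object      # an atomic Term or another Modal (formulas can nest)

jack, running, t1 = Term("jack"), Term("running"), Term("t1")
# happens(action(jack, running), t1): "Jack is running at t1"
jack_runs = Term("happens", (action(jack, running), t1))
# K(jack, t1, happens(action(jack, running), t1)): Jack knows he is running
knows = Modal("K", jack, t1, jack_runs)
print(knows)
```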
The calculus also includes a dyadic (arity = 2) deontic operator O. It is well known that the unary ought in standard deontic logic leads to contradictions. Our dyadic version of the operator blocks the standard list of such contradictions, and beyond.⁸ Declarative communication of φ between a and b at time t is represented using S(a, b, t, φ).

⁸ An overview of this list is given lucidly in [McNamara, 2010].

4.2 Inference Schemata

The figure below shows a fragment of the inference schemata for DCEC. First-order natural deduction introduction and elimination rules are not shown. Inference schemata IK and IB let us model idealized systems that have their knowledge and beliefs closed under the DCEC proof theory. While humans are not deductively closed, these two rules let us model more closely how more deliberate agents such as organizations, nations and other strategic actors reason. (Some dialects of cognitive calculi restrict the number of iterations on intensional operators.) I13 ties intentions directly to perceptions (this model does not take into account agents that could fail to carry out their intentions). I14 dictates how obligations get translated into known intentions.

Inference Schemata (Fragment)

[IK]   K(a, t1, Γ),  Γ ⊢ φ,  t1 ≤ t2   ⟹   K(a, t2, φ)
[IB]   B(a, t1, Γ),  Γ ⊢ φ,  t1 ≤ t2   ⟹   B(a, t2, φ)
[I4]   K(a, t, φ)   ⟹   φ
[I13]  t < t′,  I(a, t, ψ)   ⟹   P(a, t′, ψ)
[I14]  B(a, t, φ),  B(a, t, O(a, t, φ, χ)),  O(a, t, φ, χ)   ⟹   K(a, t, I(a, t, χ))

4.3 Semantics

The semantics for the first-order fragment is the standard first-order semantics. The truth-functional connectives ∧, ∨, →, ¬ and quantifiers ∀, ∃ for pure first-order formulae all have the standard first-order semantics. The semantics of the modal operators differs from what is available in the so-called Belief-Desire-Intention (BDI) logics [Rao and Georgeff, 1991] in many important ways. For example, DCEC explicitly rejects possible-worlds semantics and model-based reasoning, instead opting for a proof-theoretic semantics and the associated type of reasoning commonly referred to as natural deduction [Gentzen, 1935; Francez and Dyckhoff, 2010]. Briefly, in this approach, meanings of modal operators are defined via arbitrary computations over proofs.
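To convey the operational flavour of schema IK under this proof-theoretic reading, here is a toy, self-contained sketch of ours; the entailment check is a deliberately trivial stub, not DCEC's proof theory. It merely shows the shape of the rule: whatever an idealized agent knows at t1 and can derive from that knowledge is known at any later t2.

```python
# A toy rendering (ours) of the spirit of schema IK. Formulas are strings and
# "derivation" is a stub lookup; this is not DCEC's actual proof machinery.

def derivable(premises, phi):
    """Stub entailment check: only trivial membership counts as Gamma |- phi."""
    return phi in premises

def apply_IK(knowledge, t1, t2, phi):
    """If K(a, t1, Gamma), Gamma |- phi, and t1 <= t2, then K(a, t2, phi)."""
    gamma = knowledge.get(t1, set())
    if t1 <= t2 and derivable(gamma, phi):
        knowledge.setdefault(t2, set()).add(phi)
    return knowledge

k = {1: {"storm", "storm -> shop"}}
apply_IK(k, 1, 4, "storm")
print(k[4])   # {'storm'}: knowledge persists to the later time point
```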
Figure 4: TAI Working Through Time. A TAI agent initially considers a goal and then has to produce a proof for the non-existence of a non-tentacular plan that uses only this agent. Then τ recruits a set of other relevant agents to help with its goal.

5 Defining TAI

We denote the state-of-affairs at any time t by a set of formulae Γ(t). This set of formulae will also contain any obligations and prohibitions on different agents. For each agent ai at time t, there is a contract c(ai, t) ⊆ Γ(t) that describes ai's obligations, prohibitions etc. An agent a at any time t then comes up with a goal g so that its contract is satisfied.⁹ The agent believes that if g does not hold then its contract at some future t + δ will be violated:

B(a, t, ¬g → ¬⋀ c(a, t + δ))

Then the agent tries to come up with a plan involving a sequence of actions to satisfy the goal.

We now make these notions more precise. An agent a has a set of actions that it can perform at different time points. For instance, a vacuuming agent can have movement along a plane as its possible actions, while an agent on a phone can have displaying a notification as an action. We denote this by can(a, α, t) with the following additional axiom:

Axiom: ¬can(a, α, t) → ¬happens(action(a, α), t)

We now define a consistent plan below:

Consistent Plan. A consistent plan ρ⟨a1,...,an⟩ at time t is a sequence of agents a1, ..., an with corresponding actions α1, ..., αn and times t1, ..., tn such that Γ ⊢ (t < ti < tj) for i < j, and for all agents ai we have:
1. can(ai, αi, ti)
2. happens(action(ai, αi)) is consistent with Γ(t).

Note that a consistent plan ρ⟨...⟩ can be represented by a term in our language. We introduce a new sort Plan and a variable-arity predicate symbol plan(ρ, a1, ..., an) which says that ρ is a plan involving a1, ..., an. A goal is also any formula g. A consistent plan satisfies a goal g if:

Γ(t) ∪ {happens(action(a1, α1), t1), ..., happens(action(an, αn), tn)} ⊢ g

We use Γ ⊢ (ρ → g) as a shorthand for the above. The above definitions of plans and goals give us the following important constraint needed for defining TAI. This differentiates our planning formalism from other planning systems and makes it more appropriate for an architecture for a general-purpose tentacular AI system.

Uniform Planning Constraint. Plans and goals should be represented and reasoned over in the language of the planning system.

Leveraging the above requirement, we can define two levels of TAI agents. A Level(1) TAI system corresponding to an agent τ is a system that comes up with a goal g at time t0 to satisfy its contract, and produces a proof that there is no consistent plan that involves only the agent τ. Then τ comes up with a plan that involves one or more other agents. A Level(1) TAI agent starts with knowledge about what actions are possible for other agents.

Level(1) TAI Agents

Prerequisite. For any a, α, t, we have:

Γ ⊢ can(a, α, t) → K(τ, t0, can(a, α, t))

Then:
1. τ produces a proof that no plan exists for g involving just itself, and τ declares that there is no such plan.

   Γ ⊢ S(τ, t0, ¬∃ρ : (plan(ρ, τ) ∧ ρ → g))

2. τ produces a plan for g involving itself and one or more other agents, and declares that plan.

   Γ ⊢ S(τ, t0, plan(ρ, a1, ..., τ, ..., an) ∧ ρ → g)

The agent may not always have knowledge about what other agents can do; the TAI agent may have imperfect knowledge about other agents. The agent can gain information about other agents' actions, their obligations, prohibitions, etc. by observing them or by reading specifications governing these agents. In this case, we get a Level(2) TAI agent. We need to modify only the prerequisite condition above.

Level(2) TAI Agents

Prerequisite. For any a, α, t, we have:

Γ ⊢ can(a, α, t) → B(τ, t0, can(a, α, t))

The TAI agents above can be considered first-order tentacular agents. We can also have a higher-order TAI agent that intentionally engages in actions that trigger one or more other agents to act in tentacular fashion as described above. The need for having the uniform planning constraint is clearer when we consider higher-order agents.

⁹ See [Govindarajulu and Bringsjord, 2017a] for an example of how obligations and prohibitions can be used in DCEC.

6 A Hierarchy of TAI Agents

The TAI formalization above gives rise to multiple hierarchies of tentacular agents. We discuss some of these below.

Syntactic Goal Complexity. The goal g can range in complexity from simple propositional statements, e.g. cleanKitchen, to first-order statements, e.g. ∀r : Room : clean(r), and to intensional statements representing cognitive states of other agents, e.g. B(a, now, B(b, now, ∀r : clean(r))).

Goal Variation. According to the definition above, an agent a qualifies as being tentacular if it plans for just one goal g in tentacular fashion as laid out in the conditions above. We could have agents that plan for a number of varied and different goals in tentacular fashion.

Plan Complexity. For many goals, there will usually be multiple plans involving different actions (with different costs and resources used) and executed by different agents.

Figure 5: Pictorial Overview. A bit of explanation: that some agents are within agents indicates that the outer agent knows and/or believes everything relevant about the inner agent; hence as agents are increasingly cognitively powerful, the depth of their epistemic attitudes grows (reflected in formulae with iterated belief/knowledge operators). Agents grow in size/intelligence in lockstep with the logical calculi upon which they are based increasing in expressivity and reasoning power; L0 is zero-order logic, L1 is e.g. first-order logic, and the particular cognitive calculus DCEC is shown. Rotation indicates simply that, through time, agents perceive and act. (Key: agents and environments, the latter including a house and the NY State road system, with sensors s and effectors e.)
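As a concrete, if deliberately toy, illustration of the Level(1) behaviour defined in Section 5, the Python sketch below is ours: the ability table stands in for can(a, α, t), and the printed S(...) strings stand in for the declared proof and plan. Real TAI agents would, per the definitions above, produce and verify actual proofs rather than exhaust a lookup table.

```python
# A minimal sketch (ours) of Level(1) behaviour: tau first finds that no plan
# using only itself reaches the goal, then announces a plan recruiting others.
from itertools import count

def plans(agents, can, goal):
    """Yield one-step plans (agent, action, time) that achieve the goal."""
    for t, agent in zip(count(1), agents):
        for alpha in can.get(agent, set()):
            if alpha == goal:
                yield (agent, alpha, t)

def level1_step(tau, others, can, goal):
    solo = next(plans([tau], can, goal), None)
    if solo is None:
        print(f"S({tau}, t0, 'no plan for {goal} involving just {tau}')")
        tentacular = next(plans([tau] + others, can, goal), None)
        if tentacular is not None:
            print(f"S({tau}, t0, 'plan: {tentacular}')")
        return tentacular
    return solo

can = {"tau": set(), "blimp": {"cup_on_saucer"}}
level1_step("tau", ["blimp"], can, "cup_on_saucer")
```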
7 Examples and Embryonic Implementation

In this section, we present a formal sketch of a TAI agent and then describe, using another example, ongoing work in implementing a TAI system.

7.1 Example

Consider the example given in the beginning. We have a human j and three artificial agents: ac in the car, ah in the home, and ap, an agent managing scheduling and calendar information. We present some of the formulae in Γ.

f1: B(ac, t0, crowded(store) → unusual)
f2: P(ac, t1, crowded(store))
f3: ∀t : O(ac, t, unusual, happens(action(ac, check(weather)), t + 1))
    ∀t : B(ac, t, f3)
f4: ∀a : happens(action(a, check(weather)), t3) → K(a, t4, storm)
f5: ∀t : O(ac, t, storm, S(ac, ah, storm, t + 1))
    ∀t : B(ac, t, f5)

The above formulae first state the fact that ac observes the store being crowded. ac's contract states that the agent should check a weather service if it finds something unusual. The formulae also state that if an agent checks the weather at t3, the agent will get a prediction about an incoming storm. ac's contract places an obligation on it to inform ah if it believes that a storm is incoming.

f6: ∀t : O(ah, t, storm, ∀s : quantity(s) > 0)
f7: K(ah, t5, (shops(j, today) ∨ shops(j, tomorrow)) → ∀s : quantity(s) > 0)
f8: ∀t : B(ah, t, happens(action(ac, recc(shops(j))), t) → shops(j))
f9: ∀t : B(ah, t, happens(action(ah, req(ac, shops(j))), t) → happens(action(ac, recc(shops(j))), t))

The first formula above states that ah ought to see to it that supplies are stocked in the event of a storm. Then we have that ah knows that the human j shopping today or tomorrow can result in the supplies being stocked. ah gets information from ap that shopping tomorrow is not possible (this formula is not shown). Then we have formulae stating the effects of ac recommending the shopping action to j. The goal for ah is ∀s : quantity(s) > 0, and a plan for it is built up using ah, ac and j.

7.2 Toward an Implementation

We describe an example scenario that we are targeting for an embryonic implementation. Beforehand, a number of contracts have been executed that bind the adult parents P1 and P2 in a home H, and also bind a number of artificial agents in H, including a TAI agent (τ) that oversees the home. (Strictly speaking, the agents wouldn't have entered into contracts, but they would know that their human owners have done so, and they would know what the contracts are.)

It's winter in Berlin NY. Night. Outside, a blizzard. The mother and father of the home H, and their two toddler children, are fast asleep. The smartphone of each parent is set to "Do Not Disturb", with incoming clearance for only close family. There is no landline phone. A carbon monoxide sensor in the basement, near the furnace, suddenly shows a readout indicating an elevated level, which proceeds to creep up. τ perceives this, forms hypotheses about what is causing the elevated reading, and believes on the basis of using a cognitive calculus that the reading is accurate (to some likelihood factor). The nearest firehouse is notified by τ. No alarm sounds in the house. τ runs a diagnostic and determines that the battery for the central auditory alarm is shot. The reading creeps up higher, and now even the sensors in the upstairs bedrooms where the humans are asleep show an elevated, and climbing, level. τ perceives this too.

Unfortunately, τ reasons that by the time the firemen arrive, permanent neurological damage or even death may well (again, with some likelihood factor) be caused in the case of one or more members of the family. Should the alarm company have programmed the sensor to report to a central command, still, any human command is fallible. The company may be negligent, or a phone call may be the only option at their disposal, or they may dispatch personnel who arrive too late. Without enlisting the help of other artificial agents in planning and reasoning, τ can't save the family; τ knows this on the basis of proof/argument.

However, τ can likely wake the family up, starting with the parents, in any number of ways. However, each of these ways entails violation of at least one legal prohibition that has been created by contracts that are in place. These contracts have been analyzed by an IBM service, which has stocked the mind of τ with knowledge of legal obligations in DCEC, or rather in a dialect that has separate obligation operators for legal (Ol) and moral (Om) obligations. The moral obligation to save the family overrides the legal prohibitions, however. τ turns on the TV in the master bedroom at maximum volume, and flashes a warning to leave the house immediately because of the lethal gas building up. (There are many other alternatives, of course. TAI could break through Do Not Disturb, e.g.)
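A crude sketch of ours of the override reasoning at the heart of this scenario follows; the numeric ranking of Om over Ol and the candidate action list are illustrative assumptions, not a formalization within DCEC.

```python
# A highly simplified sketch (ours) of the 7.2 override reasoning: candidate
# wake-up actions carry legal prohibitions (Ol), and the goal of saving the
# family carries a moral obligation (Om) that outranks them in this toy ordering.

LEGAL, MORAL = 1, 2          # Om outranks Ol

candidate_actions = {
    "turn_on_bedroom_tv_max_volume": {"violates": [("Ol", "no_night_noise")]},
    "break_through_do_not_disturb":  {"violates": [("Ol", "respect_dnd_setting")]},
}

def permissible(action, driving_obligation_rank):
    """Take an action only if every norm it violates is outranked by the driving obligation."""
    ranks = {"Ol": LEGAL, "Om": MORAL}
    return all(driving_obligation_rank > ranks[kind]
               for kind, _ in candidate_actions[action]["violates"])

# Om(tau, now, save_family) drives the decision:
for a in candidate_actions:
    if permissible(a, MORAL):
        print("execute:", a)
        break
```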
7.3 Toward Using Smart-City Infrastructure

The European Initiative on Smart Cities [eur, 2018] is an effort by the European Commission [ec, 2018] to improve the quality of life throughout Europe, while progressing toward energy and climate objectives. Many of its goals are relevant to and desirable in the world at large. TAI has the potential to be instrumental in achieving many of these, such as smart appliances (in the manner discussed in the previous sub-section) and intelligent traffic management. Indeed, the scope and objectives of the Initiative may conceivably be considerably broadened with a pervasive application of TAI.

We briefly point at a simple scenario that expands on the vision of the European Initiative's smart-transportation goals.

Parking space is very scarce on a work-day in midtown Manhattan. A busy executive will need to park near several offices over the course of the day, and these locations change over the week.

The executive's car consults her calendar. Based on past patterns, it interpolates locations where it believes she intends to park. It communicates with other cars parked at these locations, and determines when their owners are likely to return, based on their expressed (and inferable) intentions and current locations. Adjusting for the location of our executive, traffic conditions and changes in her agenda, it determines the optimal parking locations dynamically, throughout her busy day. Of course, in the spirit of TAI, all other cars would have their movement adjusted accordingly, through time.¹⁰

¹⁰ TAI applications like this give rise to privacy concerns which could possibly be resolved by employing either differential privacy [Dwork, 2008] or privacy based on zero-knowledge proofs [Gehrke et al., 2011].

8 Conclusion & Future Work

We have introduced Tentacular AI, and a number of architectural elements thereof, and are under no illusion that we have accomplished more than this. At AEGAP 2018, we will demonstrate TAI in action in both the scenarios sketched above; implementation is currently underway. Despite the nascent state of the TAI research program, we hope to have provided a promising, if inchoate, overview of tentacular AI, an overview which, given the centrality of highly expressive languages for novel planning and reasoning, we hope is of interest to some, maybe even many, at this dawn of the "internet of things" and its vibrant intersection with AI.

9 Acknowledgments

The TAI project is made possible by joint support from RPI and IBM under the AIRC Program; we are grateful for this support. Some of the research reported on herein has been enabled by support from ONR and AFOSR, and for this too we are grateful.

References

[Arkoudas and Bringsjord, 2008] K. Arkoudas and S. Bringsjord. Toward Formalizing Common-Sense Psychology: An Analysis of the False-Belief Task. In T.-B. Ho and Z.-H. Zhou, editors, Proceedings of the Tenth Pacific Rim International Conference on Artificial Intelligence (PRICAI 2008), number 5351 in Lecture Notes in Artificial Intelligence (LNAI), pages 17–29. Springer-Verlag, 2008.

[Arkoudas and Bringsjord, 2009] K. Arkoudas and S. Bringsjord. Propositional Attitudes and Causation. International Journal of Software and Informatics, 3(1):47–65, 2009.

[Arkoudas and Musser, 2017] Konstantine Arkoudas and David Musser. Fundamental Proof Methods in Computer Science: A Computer-Based Approach. MIT Press, Cambridge, MA, 2017.

[Bringsjord and Ferrucci, 2000] S. Bringsjord and D. Ferrucci. Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus, a Storytelling Machine. Lawrence Erlbaum, Mahwah, NJ, 2000.

[Bringsjord and Govindarajulu, 2012] S. Bringsjord and N. S. Govindarajulu. Given the Web, What is Intelligence, Really? Metaphilosophy, 43(4):361–532, 2012. This URL is to a preprint of the paper.

[Bringsjord and Sen, 2016] Selmer Bringsjord and Atriya Sen. On Creative Self-Driving Cars: Hire the Computational Logicians, Fast. Applied Artificial Intelligence, 30:758–786, 2016. The URL here goes only to an uncorrected preprint.

[Bringsjord et al., 2014] Selmer Bringsjord, Naveen Sundar Govindarajulu, Daniel Thero, and Mei Si. Akratic Robots and the Computational Logic Thereof. In Proceedings of ETHICS • 2014 (2014 IEEE Symposium on Ethics in Engineering, Science, and Technology), pages 22–29, Chicago, IL, 2014. IEEE Catalog Number: CFP14ETI-POD.

[Bringsjord et al., 2018] S. Bringsjord, P. Bello, and N. S. Govindarajulu. Toward Axiomatizing Consciousness. In D. Jacquette, editor, The Bloomsbury Companion to the Philosophy of Consciousness, pages 289–324. Bloomsbury Academic, London, UK, 2018.

[Bringsjord, 2015] Selmer Bringsjord. Theorem: General Intelligence Entails Creativity, assuming .... In T. Besold, M. Schorlemmer, and A. Smaill, editors, Computational Creativity Research: Towards Creative Machines, pages 51–64. Atlantis/Springer, Paris, France, 2015. This is Volume 7 in Atlantis Thinking Machines, edited by Kai-Uwe Kuhnberger of the University of Osnabruck, Germany.

[Chella and Manzotti, 2007] Antonio Chella and Ricardo Manzotti, editors. Artificial Consciousness. Imprint Academic, Exeter, UK, 2007.

[Dwork, 2008] Cynthia Dwork. Differential Privacy: A Survey of Results. In International Conference on Theory and Applications of Models of Computation, pages 1–19. Springer, 2008.

[ec, 2018] The European Commission's Priorities. https://ec.europa.eu/commission/index_en, 2018. [Online; accessed 25-June-2018].

[eur, 2018] European Initiative on Smart Cities. https://setis.ec.europa.eu/set-plan-implementation/technology-roadmaps/european-initiative-smart-cities, 2018. [Online; accessed 25-June-2018].
[Francez and Dyckhoff, 2010] Nissim Francez and Roy Dyckhoff. Proof-theoretic Semantics for a Natural Language Fragment. Linguistics and Philosophy, 33:447–477, 2010.

[Gehrke et al., 2011] Johannes Gehrke, Edward Lui, and Rafael Pass. Towards Privacy for Social Networks: A Zero-Knowledge Based Definition of Privacy. In Theory of Cryptography Conference, pages 432–449. Springer, 2011.

[Genesereth and Nilsson, 1987] M. Genesereth and N. Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann, Los Altos, CA, 1987.

[Gentzen, 1935] Gerhard Gentzen. Investigations into Logical Deduction. In M. E. Szabo, editor, The Collected Papers of Gerhard Gentzen, pages 68–131. North-Holland, Amsterdam, The Netherlands, 1935. This is an English version of the well-known 1935 German version.

[Gerevini and Long, 2004] Alfonso Gerevini and Derek Long. Plan Constraints and Preferences in PDDL3. Technical report, Department of Electronics for Automation, University of Brescia, 2004. This is the language of the Fifth International Planning Competition.

[Govindarajulu and Bringsjord, 2017a] Naveen Sundar Govindarajulu and Selmer Bringsjord. On Automating the Doctrine of Double Effect. In Carles Sierra, editor, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 4722–4730, Melbourne, Australia, 2017. Preprint available at this url: https://arxiv.org/abs/1703.08922.

[Govindarajulu and Bringsjord, 2017b] Naveen Sundar Govindarajulu and Selmer Bringsjord. Strength Factors: An Uncertainty System for a Quantified Modal Logic, 2017. Presented at the Workshop on Logical Foundations for Uncertainty and Machine Learning at IJCAI 2017, Melbourne, Australia.

[Govindarajulu et al., 2017] Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh, and Matthew Peveler. Beyond the Doctrine of Double Effect: A Formal Model of True Self-Sacrifice. International Conference on Robot Ethics and Safety Standards, 2017.

[Gupta, 1992] Naresh Gupta. On the Complexity of Blocks-world Planning. Artificial Intelligence, 52:223–254, 1992.

[Helgason et al., 2012] Helgi Helgason, Eric Nivel, and Kristinn Thórisson. On Attention Mechanisms for AGI Architectures: A Design Proposal. In J. Bach, B. Goertzel, and M. Iklé, editors, Proceedings of the Fifth Conference on Artificial General Intelligence, pages 89–98, Berlin, Germany, 2012. Springer.

[Kovacs, 2012] Daniel L. Kovacs. A Multi-Agent Extension of PDDL3.1. In Proceedings of the 3rd Workshop on the International Planning Competition (IPC), ICAPS-2012, pages 25–29, Atibaia, Brazil, 2012. ICAPS.

[Levesque and Lakemeyer, 2007] Hector Levesque and Gerhard Lakemeyer. Chapter 24: Cognitive Robotics. In Handbook of Knowledge Representation, Amsterdam, The Netherlands, 2007. Elsevier.

[Luger, 2008] George Luger. Artificial Intelligence: Structures and Strategies for Complex Problem Solving (6th Edition). Pearson, London, UK, 2008.

[Mcdermott et al., 1998] D. Mcdermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso, D. Weld, and D. Wilkins. PDDL – The Planning Domain Definition Language. Technical Report CVC TR-98-003, Yale Center for Computational Vision and Control, 1998.

[McNamara, 2010] P. McNamara. Deontic Logic. In Edward Zalta, editor, The Stanford Encyclopedia of Philosophy. 2010. McNamara's (brief) note on a paradox arising from Kant's Law is given in an offshoot of the main entry.

[Miller et al., 2018] Tim Miller, Adrian R. Pearce, and Liz Sonenberg. Social Planning for Trusted Autonomy, pages 67–86. Springer International Publishing, Cham, 2018.

[Mueller, 2006] E. Mueller. Commonsense Reasoning: An Event Calculus Based Approach. Morgan Kaufmann, San Francisco, CA, 2006. This is the first edition of the book. The second edition was published in 2014.

[Rao and Georgeff, 1991] A. S. Rao and M. P. Georgeff. Modeling Rational Agents Within a BDI-architecture. In R. Fikes and E. Sandewall, editors, Proceedings of Knowledge Representation and Reasoning (KR&R-91), pages 473–484, San Mateo, CA, 1991. Morgan Kaufmann.

[Russell and Norvig, 2009] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River, NJ, 2009. Third edition.

[Seidita et al., 2016] Valeria Seidita, Antonio Chella, and Maurizio Carta. A Biologically Inspired Representation of the Intelligence of a University Campus. Procedia Computer Science, 88:185–190, 2016.

[Wang, 2006] Pei Wang. Rigid Flexibility: The Logic of Intelligence. Springer, Dordrecht, The Netherlands, 2006. This book is Volume 34 in the Applied Logic Series, edited by Dov Gabbay and Jon Barwise.
