Towards Intelligent Autonomous Control Systems Architecture and Fundamental Issues Antsaklis, P. J., Passino, K. M., & Wang, S. J. (1989)
© 1989 Kluwer Academic Publishers. Printed in the Netherlands.
and
S.J. WANG
Jet Propulsion Laboratory, MS 198-326, 4800 Oak Grove Drive, Pasadena, CA 91109
Abstract. Autonomous control systems are designed to perform well under significant uncertainties in the
system and environment for extended periods of time, and they must be able to compensate for system failures
without external intervention. Intelligent autonomous control systems use techniques from the field of arti-
ficial intelligence to achieve this autonomy. Such control systems evolve from conventional control systems
by adding intelligent components, and their development requires interdisciplinary research. A hierarchical
functional intelligent autonomous control architecture is introduced here and its functions are described
in detail. The fundamental issues in autonomous control system modelling and analysis are discussed.
Key words. Automatic control theory, autonomous control, intelligent control, intelligent systems,
hierarchical systems, hybrid systems, artificial intelligence.
1. Introduction
Autonomous control systems must perform well under significant uncertainties in the
plant and the environment for extended periods of time and they must be able to
compensate for system failures without external intervention. Such autonomous
behavior is a very desirable characteristic of advanced systems. An autonomous
controller provides high-level adaptation to changes in the plant and environment. To
achieve autonomy, the methods used for control system design should utilize both (i)
algorithmic-numeric methods, based on the state-of-the-art conventional control,
identification, estimation, and communication theory, and (ii) decision-making-
symbolic methods, such as the ones developed in computer science and specifically in
the field of artificial intelligence (AI). In addition to supervising and tuning the control
algorithms, the autonomous controller must also provide a high degree of tolerance
to failures. To ensure system reliability, failures must first be detected, isolated, and
identified, and subsequently a new control law must be designed if it is deemed
necessary. The autonomous controller must be capable of planning the necessary
sequence of control actions to be taken to accomplish a complicated task. It must be
able to interface with other systems as well as with the operator, and it may need
learning capabilities to enhance its performance while in operation.
Advanced planning, learning, and expert systems, among others, must work
together with conventional control systems in order to achieve autonomy. The need
for quantitative methods to model and analyze the dynamical behavior of such
autonomous systems presents significant challenges well beyond current capabilities.
It is clear that the development of autonomous controllers requires significant inter-
disciplinary research effort as it integrates concepts and methods from areas such as
control, identification, estimation, and communication theory, computer science,
especially artificial intelligence, and operations research.
In this paper, an autonomous controller architecture is introduced and discussed in
detail. For such controllers to become a reality, certain fundamental questions should
be studied and resolved first. These fundamental problems are identified, formulated
and discussed, and future research directions are outlined. Next, the focus of this
paper is established and a detailed description of the results is given.
Autonomous controllers can be used in a variety of systems from manufacturing
to unmanned space, atmospheric, ground, and underwater exploratory vehicles. In
this paper, we develop an autonomous controller architecture for future space
vehicles. Referring to a particular class of control problems has the advantage that the
development addresses relatively well-defined control needs rather than abstract
requirements. Furthermore, the autonomous control of space vehicles is highly
demanding; consequently the developed architecture is general enough to encompass
all related autonomy issues. Future space vehicles must be capable of autonomous
operation to accomplish their missions. Emerging aeromaneuvering vehicles such as
the Aeroassisted Orbital Transfer Vehicle and the Aerospace Plane will be required
to maneuver at high altitudes and hypersonic velocities in a flight regime characterized
by significant uncertainty in atmospheric density and aerodynamic characteristics.
Uncertainty in these parameters may cause significant deviation from the nominal
trajectory, conceivably leading to the loss of the vehicle. Significant time and com-
munication constraints during the atmospheric flight dictate that the vehicles should
perform autonomously for extended periods of time since pilot or ground support
intervention may not be possible. Future space systems, such as manned space
platforms, contain significant flexible structural components. Model uncertainties and
system parameter variations require advanced adaptive control techniques to meet
stability and performance specifications. An autonomous adaptive control system is
needed to deal with gross fundamental and environmental changes in the system. For
space systems these include hardware failures, docking disturbances, payload arti-
culation, and man-motion disturbances.
It should be stressed that all the results presented here apply to any autonomous
control system. In other classes of applications, the architecture, or parts of it, can be
used directly and the same fundamental concepts and characteristics identified here
are valid.
The architecture of autonomous controllers necessary for the operation of
advanced planetary and aeromaneuvering space vehicles is developed here. The
concepts and methods needed to successfully design such an autonomous controller
are introduced and discussed. A hierarchical functional autonomous controller archi-
tecture is described in detail; it is designed to ensure the autonomous operation of the
control system and it allows interaction with the pilot/ground station and the systems
on board the autonomous vehicle. Note that a shorter version of these results has
appeared in [4].
Section 2 gives a brief history of the development of control systems to motivate
the necessity for autonomous controllers. The functions, characteristics, and benefits
of autonomous control are outlined. Next, it is explained that plant complexity and
design requirements dictate how sophisticated a controller must be. From this it can
be seen that often it is appropriate to use methods from operations research or AI to
achieve autonomy. Such methods are studied in intelligent control theory. An over-
view of some relevant research literature in the field of intelligent autonomous control
is given, together with references that outline research directions. In Section 3, an
autonomous control functional architecture for future space vehicles is introduced.
The controller is hierarchical, with three levels, the execution level (lowest level), the
coordination level (middle level), and the management and organization level (highest
level). The general characteristics of the overall architecture, including those of the
three levels are explained, and an example to illustrate their functions is given. In
Section 4, fundamental issues and attributes of intelligent autonomous system archi-
tectures are described. An approach to the quantitative, systematic modelling, analy-
sis, and design of autonomous controllers is discussed. It is a 'hybrid' approach since
it is proposed to use both conventional analysis techniques based on difference and
differential equations, together with new techniques for the analysis of systems
described with a symbolic formalism such as finite automata. The more global,
macroscopic view of dynamical systems, taken in the development of autonomous
controllers, suggests the use of a model with a hybrid or nonuniform structure, which
in turn requires the use of a hybrid analysis. Finally, some concluding remarks are
given in Section 5.
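The hybrid modelling idea can be made concrete with a small sketch. In the toy example below (all mode names, dynamics, and numbers are illustrative assumptions, not from the paper), a finite automaton supplies the symbolic, event-driven part of the model, while a difference equation with a mode-dependent feedback law supplies the conventional part:

```python
# Sketch of a hybrid model: a finite automaton supervising
# difference-equation dynamics. All names and numbers are illustrative.

# Discrete part: automaton states and event-driven transitions.
TRANSITIONS = {
    ("nominal", "failure_detected"): "safe_mode",
    ("safe_mode", "failure_cleared"): "nominal",
}

# Continuous part: one difference equation x[k+1] = a*x[k] + b*u[k]
# per discrete mode, with a mode-specific feedback gain.
MODES = {
    "nominal":   {"a": 1.0, "b": 1.0, "gain": 0.8},
    "safe_mode": {"a": 0.5, "b": 0.2, "gain": 0.3},
}

def step(mode, x, event=None):
    """One step of the hybrid model: take a discrete transition
    (if an event occurred), then advance the continuous state."""
    mode = TRANSITIONS.get((mode, event), mode)
    p = MODES[mode]
    u = -p["gain"] * x                      # simple state feedback
    x_next = p["a"] * x + p["b"] * u
    return mode, x_next

mode, x = "nominal", 10.0
for event in [None, "failure_detected", None]:
    mode, x = step(mode, x, event)
```

The point of the sketch is only that both formalisms act on the same state trajectory: the automaton decides *which* difference equation is in force, and the difference equation decides *where* the state goes next.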
The need to achieve the demanding control specifications for increasingly complex
dynamical systems has been addressed by using more complex mathematical models
such as nonlinear and stochastic ones, and by developing more sophisticated design
algorithms for say, optimal control. The use of highly complex mathematical models,
however, can seriously inhibit our ability to develop control algorithms. Fortunately,
simpler plant models, for example linear models, can be used in the control design;
this is possible because of the feedback used in control which can tolerate significant
model uncertainties. Controllers can then be designed to meet the specifications
around an operating point, where the linear model is valid and then via a scheduler
a controller emerges which can accomplish the control objectives over the whole
operating range. This is, for example, the method typically used for aircraft flight
control. In autonomous control, we need to significantly increase the operating range
of the plant. We must be able to deal with significant uncertainties in models of
increasingly complex dynamical systems in addition to increasing the validity range
of our control methods. This will involve the use of intelligent decision-making
processes to generate control actions so that a performance level is maintained, even
though there are drastic changes in the operating conditions.
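The operating-point approach described above is essentially gain scheduling. As a minimal sketch under assumed numbers (the schedule values and the choice of scheduling variable are hypothetical), linear controller gains are designed at a few operating points and a scheduler interpolates between them:

```python
# Sketch of gain scheduling: linear controllers designed around a few
# operating points, with a scheduler interpolating between them.
# Operating points and gains below are illustrative, not from the paper.
import bisect

# (operating variable, controller gain) pairs, e.g. gain vs. airspeed.
SCHEDULE = [(100.0, 2.0), (300.0, 1.2), (500.0, 0.6)]

def scheduled_gain(v):
    """Linearly interpolate the gain at operating condition v,
    clamping outside the designed range."""
    points = [p for p, _ in SCHEDULE]
    if v <= points[0]:
        return SCHEDULE[0][1]
    if v >= points[-1]:
        return SCHEDULE[-1][1]
    i = bisect.bisect_right(points, v)
    (v0, k0), (v1, k1) = SCHEDULE[i - 1], SCHEDULE[i]
    return k0 + (k1 - k0) * (v - v0) / (v1 - v0)

def control(v, error):
    """Proportional control action with the scheduled gain."""
    return scheduled_gain(v) * error
```

Each stored gain is valid only near its operating point; the scheduler is what extends the design over the whole operating range, which is precisely the range an autonomous controller must enlarge further.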
There are needs today that cannot be successfully addressed with the existing
conventional control theory. They mainly pertain to the area of uncertainty. Heuristic
methods may be needed to tune the parameters of an adaptive control law. New
control laws to perform novel control functions should be designed while the system
is in operation. Learning from past experience and planning control actions may be
necessary. Failure detection and identification is needed. These functions have been
performed in the past by human operators. To increase the speed of response, to
relieve the pilot from mundane tasks, and to protect operators from hazards, autonomy
is desired. It should be pointed out that several functions proposed in later sections,
to be part of the autonomous controller, have been performed in the past by separate
systems; examples include fault trees in chemical process control for failure diagnosis
and hazard analysis, and control system design via expert systems.
2.3. INTELLIGENT AUTONOMOUS CONTROL
The necessity for a succession of increasingly complex control systems from classical
to adaptive and intelligent control, to meet the ever-increasing performance requirements
of current and future complex dynamical systems, is described. The basic
elements of intelligent controllers are highlighted and an outline of the relevant
research on intelligent control is given.
(i) Future space vehicles will be increasingly complex. Some characteristics that
are needed in the model used to design their controller can only be described
by symbolic representation techniques.
(ii) Control functions normally performed by the pilot, crew, or ground station
must be incorporated into the controller for autonomous operation.
Therefore, expert personnel's control decisions will have to be automated.
(iii) Human intervention in the control process should be allowed. A facility to
interrupt the autonomous operation of the controller in case of design objec-
tive changes or controller failures should be included.
The need to use intelligent autonomous control stems from the need for an increased
level of autonomy in achieving complex control tasks. In the next section a number
of intelligent control research results which have appeared in the literature are
outlined.
[Figure 1. The hierarchical functional architecture of the intelligent autonomous controller. The management and organization level (I) comprises the interface and the control executive, with upper-level generation, monitoring, and assessing functions (upper management: decision making and learning). The coordination level (II) comprises the control manager (IIa; middle management: decision making, learning, and algorithms) and the control implementation supervisor (IIb; lower management), with its scheduler, learning, adaptive control, and identification functions. The execution level (III) contains the identification and state estimation algorithms and the information distributor, realized in software and hardware.]
Commands are issued by higher levels to lower levels and response data flows from
lower levels upwards. Parameters of subsystems can be altered by systems one level
above them in the hierarchy. There is a delegation and distribution of tasks from
higher to lower levels and a layered distribution of decision-making authority. At each
level, some preprocessing occurs before information is sent to higher levels. If requested,
data can be passed from the lowest subsystem to the highest, e.g., for display. All
subsystems provide status and health information to higher levels. Human inter-
vention is allowed even at the control implementation supervisor level (IIb).
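The command-down, data-up discipline described above can be sketched as follows; the level names and the trivial "preprocessing" step are illustrative assumptions:

```python
# Sketch of the command-down / status-up discipline between levels.
# Level names and the preprocessing step are illustrative.

class Level:
    def __init__(self, name, subordinate=None):
        self.name = name
        self.subordinate = subordinate   # the level directly below, if any

    def command(self, task):
        """Delegate a task downward; collect status flowing upward."""
        if self.subordinate is None:
            raw = f"{self.name} executed '{task}'"
        else:
            raw = self.subordinate.command(task)
        # Each level preprocesses (here, just annotates) before
        # reporting status and health to the level above.
        return f"[{self.name}: ok] {raw}"

execution = Level("execution")
coordination = Level("coordination", execution)
management = Level("management", coordination)

status = management.command("repair satellite")
```

Note that a command issued at the top traverses every level downward, and the response data accumulates one status annotation per level on the way back up, mirroring the layered distribution of authority described above.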
The specific functions at each level are described in detail in later sections. Here we
present a simple illustrative example to clarify the overall operation of the auto-
nomous controller. Suppose that the pilot desires to repair a satellite. After dialogue
with the control executive via the interface, the task is refined to 'repair satellite using
robot A'. This is arrived at using the capability assessing, performance monitoring,
and planning functions of the control executive. The control executive decides if the
repair is possible, under the current performance level of the system, and in view of
near term planned functions. The control executive, using its planning capabilities,
sends a sequence of subtasks sufficient to achieve the repair to the control manager. This
sequence could be to order robot A to: 'go to satellite at coordinates xyz', 'open
repair hatch', 'repair'. The control manager, using its planner, divides say the first
subtask, 'go to satellite at coordinates xyz', into smaller subtasks: 'go from start to
x1y1z1', then 'maneuver around obstacle', 'move to x2y2z2', ..., 'arrive at the repair
site and wait'. The other subtasks are divided in a similar manner. This information
is passed to the control implementation supervisor, which recognizes the task and uses
stored control laws to accomplish the objective. The subtask 'go from start to x1y1z1'
can, for example, be implemented using stored control algorithms to first proceed
forward 10 meters, then to the right 15 degrees, etc. These control algorithms are
executed in the controller at the execution level utilizing sensor information; the
control actions are implemented via the actuators.
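The repair example amounts to a recursive refinement of tasks against stored plans. A minimal sketch, with a hypothetical plan library mirroring the subtasks above:

```python
# Sketch of hierarchical task decomposition, as in the repair example.
# The plan library below is illustrative.

PLANS = {
    # control executive -> control manager
    "repair satellite using robot A": [
        "go to satellite at coordinates xyz", "open repair hatch", "repair"],
    # control manager -> implementation supervisor
    "go to satellite at coordinates xyz": [
        "go from start to x1y1z1", "maneuver around obstacle",
        "arrive at the repair site and wait"],
    # implementation supervisor -> execution level
    "go from start to x1y1z1": [
        "proceed forward 10 meters", "turn right 15 degrees"],
}

def decompose(task):
    """Recursively expand a task into the primitive control actions
    executed at the lowest level."""
    if task not in PLANS:          # primitive: no further refinement
        return [task]
    actions = []
    for subtask in PLANS[task]:
        actions.extend(decompose(subtask))
    return actions

primitives = decompose("repair satellite using robot A")
```

Each level of the hierarchy owns one layer of this expansion; the recursion bottoms out at the stored control algorithms of the execution level.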
It is important at this point to discuss the dexterity of the controller. The execution
level of a highly dexterous controller is very sophisticated and it can accomplish
complex control tasks. In a dexterous controller, the implementation supervisor can
issue commands such as 'move 15 centimeters to the right' and 'grip standard, fixed
dimension cylinder'; in a less dexterous one, it must completely dictate each mode
of each joint (in a manipulator): 'move joint 1 15 degrees', then 'move joint 5 3
degrees', etc. The simplicity and level of abstraction of the macro commands in an
autonomous controller depend on its dexterity. The more sophisticated the execution
level is, the simpler are the commands that the control implementation supervisor
needs to issue. Notice that a very dexterous robot arm may itself
have a number of autonomous functions. If two such dexterous arms were used to
complete a task which required the coordination of their actions, then the arms would
be considered to be two dexterous actuators and a new supervisory autonomous
controller would be placed on top for the supervision and coordination task. In
general, this can happen recursively, adding more intelligent autonomous controllers
as the lower level tasks, accomplished by autonomous systems, need to be supervised.
The functional architecture for the execution level of the autonomous controller is
shown in Figure 2. Its main function is to generate, via the use of numeric algorithms,
low level control actions as dictated by the higher levels of the controller, and apply
them to the vehicle. It senses the responses of the vehicle and environment, processes
it to identify parameters, estimates states, or detects vehicle failures, and passes this
information to the higher levels.
The Sensor and Actuator subsystems are depicted in Figure 2. These devices, which
physically accomplish the functions of the autonomous controller, are at the lowest
level of the architecture. The complexity of these devices depends on the dexterity of
the controller. All sensors which provide information from the vehicle and environ-
ment to any component in the autonomous controller are included here. On the
execution level, the controller will need feedback information about control variables.
The state estimator and parameter identifier also use such outputs for their respective
tasks. The failure detection and identification (FDI) algorithms need these outputs
and those of special failure sensors to enable them to detect failures. To perform
'execution monitoring' for the planning systems at the higher levels, the dynamical
response of the system must be sensed and passed to the planning system so that
it can determine if a plan has failed. The implementation supervisor also needs
sensor information so that it can, for instance, make the smooth transition in the
[Figure 2. The functional architecture of the execution level (III). The control implementation supervisor (IIb) exchanges information with the information assessor and FDI IIb above; through the information distributor it directs the controller, the parameter identifier and state estimator, and the FDI algorithms, which interact with the vehicle via the actuators and sensors.]
The functional architecture for coordination level IIb is shown in Figure 3.

[Figure 3. The functional architecture of coordination level IIb. The control implementation supervisor (status monitoring, crisis management) reports to the control manager and FDI IIa above, and coordinates the scheduler, the adaptive tuner, the information assessor, and FDI IIb.]

Coordination level IIb receives commands to perform predetermined specific control tasks from
the control manager in the level above. It provides the appropriate sequence of
control and identification algorithms to the execution level below. Its ability to deal
with extensive uncertainties is limited.
The main function of the control implementation supervisor shown in Figure 3, is to
carry out the sequence of control actions dictated by the control manager. It can
accomplish predetermined control actions and cope with limited predetermined crisis
situations. The supervisor receives the sequence of control tasks to be accomplished
from the control manager and it has access to a variety of control models, and control,
identification and estimation algorithms. It selects appropriate reference signals for
the controller and it optimizes the subsequences of actions to accomplish the tasks
dictated by the above levels in the best way possible. The supervisor uses the scheduler
to decide what models and algorithms to use in the controller and identifier; it uses
the tuner to decide how to adapt parameters in the algorithms, which are currently
used, and it sends this information to the execution level. It monitors the status of the
system at IIb and III, i.e., what algorithms and models are currently used, and the
health of the systems. The supervisor does performance monitoring on the IIb and III
levels using information provided by the information assessor and FDI IIb. It
contains a crisis management facility to deal with certain failures. This includes a
number of methods to maintain performance or to maintain a certain degree of safety
in operations, while degrading performance gracefully. For example, if a failure in an
actuator or sensor is detected, it can switch to an alternative control method using
other actuators or sensors to maintain performance. If performance cannot be main-
tained, it should degrade gracefully, guaranteeing safety (stability). It will take the
necessary steps to maintain stability after a failure is detected and it is isolated
and identified. The control implementation supervisor uses learning to improve the
implementation of the (predetermined) control forms. It thus improves the speed and
accuracy of tuning with experience, it improves its crisis management and the schedul-
ing of algorithms, and it learns how to more efficiently optimize the overall opera-
tions, as a good human supervisor would do; it also learns completely new control
methods sent from the level above. It informs the control manager about the health
of the system in levels IIb and III and about its status (the progress in performing the
tasks), and it notifies the manager if failures or unexplained (at that level) performance
degradation are occurring.
The main function of the Scheduler shown in Figure 3, is to determine, during the
performance of a specific control function, if certain conditions are met in order to
switch to alternative control laws (and plant models) and to appropriate identifica-
tion, estimation and FDI algorithms. It receives information from the implementation
supervisor as to the control function to be performed, together with information
about the plant models and their validity range, the corresponding control laws, and
the rest of the algorithms. Based on information it receives from the supervisor it
decides when to switch to the proper algorithms and models. The criteria for switching
are predetermined, in perhaps tabular form, and they also depend on information from
environmental sensors. This information is transmitted from the higher level through
the supervisor. An example is the scheduler used for docking control: depending,
for instance, on approach speed and attitude, an appropriate new control law is
selected. Here the scheduler also selects the corresponding plant model when necessary.
The scheduler does not deal with crisis situations.
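The scheduler's predetermined, tabular switching criteria can be sketched as a lookup over operating conditions; the thresholds, law names, and model names below are illustrative assumptions for a docking-type task:

```python
# Sketch of the scheduler's predetermined, tabular switching criteria
# for a docking-type task. Thresholds and names are illustrative.

# Each row: (max approach speed, control law, plant model).
SWITCHING_TABLE = [
    (0.1,          "fine_docking_law", "contact_model"),
    (1.0,          "approach_law",     "near_field_model"),
    (float("inf"), "transfer_law",     "far_field_model"),
]

def select(approach_speed):
    """Pick the control law and corresponding plant model for the
    current operating condition from the predetermined table."""
    for max_speed, law, model in SWITCHING_TABLE:
        if approach_speed <= max_speed:
            return law, model
    raise ValueError("unreachable: the table covers all speeds")
```

Because the criteria are fixed in advance, the scheduler needs no reasoning capability of its own; it only tests conditions supplied through the supervisor and environmental sensors.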
The main function of the adaptive tuner shown in Figure 3 is to determine, during
the execution of particular algorithms, if specific conditions are met in order to adjust,
i.e., tune, certain parameters in the adaptation laws. It receives information from the
implementation supervisor as to the current algorithm being executed, control and
identification algorithms, and also information from the information assessor (via the
supervisor) necessary to decide first if timing is appropriate. Then based on predeter-
mined criteria, it selects the new values for the parameters in the adaptation laws. The
criteria for tuning will be based on excessive output, state, and parameter errors, and
the selection of the new adaptive parameter values will depend on algorithms or
heuristic rules using performance measures and actual past and present inputs
and outputs. In this way parameter tuning of identification and control algorithms
(adaptive, robust, optimal) is accomplished.
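A minimal sketch of such a tuning rule, with hypothetical thresholds and a simple heuristic (raise the adaptation gain when recent errors are excessive, within predetermined limits):

```python
# Sketch of the adaptive tuner: adjust an adaptation-law parameter only
# when errors exceed predetermined criteria. Numbers are illustrative.

ERROR_THRESHOLD = 0.5       # tune only on excessive average error
GAIN_STEP = 1.5             # multiplicative adjustment per tuning event
GAIN_LIMITS = (0.01, 10.0)  # predetermined safe range for the gain

def tune(adaptation_gain, recent_errors):
    """Return a new adaptation gain based on recent output errors.
    Heuristic rule: raise the gain when the average absolute error is
    excessive, otherwise leave it unchanged; always clamp to limits."""
    avg = sum(abs(e) for e in recent_errors) / len(recent_errors)
    if avg > ERROR_THRESHOLD:
        adaptation_gain *= GAIN_STEP
    lo, hi = GAIN_LIMITS
    return min(max(adaptation_gain, lo), hi)
```

A real tuner would, as the text notes, combine such heuristic rules with performance measures computed from past and present inputs and outputs; the sketch only shows the trigger-then-adjust structure.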
The main function of the information assessor shown in Figure 3 is to process and
distribute sensor, state and parameter information to the information distributor
(execution level) and the implementation supervisor. It receives information from the
supervisor as to the current plant model, control, estimation and identification, and
FDI algorithms, and it instructs the information distributor to pass the necessary
sensor information to the controller, identifier, and FDI systems. It receives, from the
identifier via the distributor, information about the current model parameter and
state estimates. After instruction from the supervisor it supplies to the supervisor
processed information such as errors in state and parameter estimation. To do this,
it uses sensor data supplied by the distributor and models supplied by the supervisor.
This processed information is used by the tuner, the scheduler, and the control
implementation supervisor for performance monitoring.
The main function of the FDI IIb subsystem shown in Figure 3 is to supervise the
FDI algorithms (execution level) and to detect and identify, using algorithms and
heuristic methods, failures that occurred at the execution level. It passes the informa-
tion about the current models used from the supervisor to the FDI algorithms. It
sends appropriate FDI algorithms to be executed to the lower level. It receives the
outputs of those algorithms. It compares them with additional information from the
supervisor, and it proceeds, after detecting a failure, to isolate and identify it. It
informs the FDI IIa subsystem about the status of the failure and it also informs the
supervisor so that predetermined crisis measures can be taken if necessary and
possible. If the crisis cannot be dealt with at that level, the information is passed to
the FDI IIa, and the designer via the control manager.
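Residual-based detection of this kind can be sketched as follows; the component names and thresholds are illustrative assumptions:

```python
# Sketch of residual-based failure detection and isolation (FDI).
# A residual is the gap between a measured output and the output
# predicted by the current model; names and thresholds are illustrative.

THRESHOLDS = {"gyro": 0.05, "thruster": 0.2, "star_tracker": 0.01}

def detect_and_isolate(measured, predicted):
    """Compare measurements against model predictions and report
    which components (if any) show an excessive residual."""
    failed = []
    for component, threshold in THRESHOLDS.items():
        residual = abs(measured[component] - predicted[component])
        if residual > threshold:
            failed.append(component)
    return failed   # an empty list means no failure detected
```

In the architecture above, the numeric residual tests run at the execution level, while FDI IIb supplies the current models, interprets the results heuristically, and escalates unresolved failures to FDI IIa.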
[Figure: The control executive at the management and organization level, with its generation, monitoring, assessing, planning, and learning functions and its interface to the pilot/ground station.]
5. Concluding Remarks
A hierarchical functional autonomous controller architecture was introduced. In
particular, the architecture for the control of future space vehicles was described in
detail; it was designed to ensure the autonomous operation of the control system and
it allowed interaction with the pilot and crew/ground station, and the systems on
board the autonomous vehicle. The fundamental issues in autonomous control system
modelling and analysis were discussed. It was proposed to utilize a hybrid approach
to modelling and analysis of autonomous systems. This will incorporate conventional
control methods based on differential equations and new techniques for the analysis
of systems described with a symbolic formalism. In this way, the well developed
theory of conventional control can be fully utilized. It should be stressed that auto-
nomy is the design requirement and intelligent control methods appear, at present, to
offer some of the necessary tools to achieve autonomy. A conventional approach may
evolve and replace some or all of the 'intelligent' functions. Note that this paper is
based on the results presented in [3].
It was shown that in addition to conventional controllers, the autonomous control
system incorporates planning, learning, and FDI. An initial study of the FDI problem
incorporating both conventional and AI FDI techniques was reported in [45]. Fur-
thermore, AI planning systems were modelled and analyzed in a Petri net framework
in [46].
It must be stressed that the results presented here apply to any autonomous control
system. For other applications, the architecture, or parts of it, and the ideas discussed
here are valid. For instance, to achieve a certain level of autonomy for a particular
application one may modify the functional architecture by removing the management
and organization level. In this case, the limited version of the autonomous controller
would not provide for a user interface, goal generation, high level learning, etc. In
general, modifying the controller for certain applications entails the removal of
portions of the functional architecture which limits the attainable degree of auto-
nomy. Hence, to use the above results for a different application one must decide what
level of autonomy is needed and then include in the autonomous controller architec-
ture those components necessary to achieve it.
Acknowledgement
This work was partially supported by the Jet Propulsion Laboratory, Pasadena,
California.
References
1. Albus, J., et al., Theory and practice of intelligent control, Proc. 23rd IEEE COMPCON, pp. 19-39
(1981).
2. Albus, J.S., et al., Hierarchical control of intelligent machines applied to space station telerobotics,
Proc. Space Telerobotics Workshop, pp. 155-166 (1988).
3. Antsaklis, P.J. and Passino, K.M., Autonomous control systems: Architecture and concepts for future
space vehicles, Final report, Jet Propulsion Laboratory Contract, Oct. 1987.
4. Antsaklis, P.J., Passino, K.M., and Wang, S.J., Autonomous control systems: Architecture and
fundamental issues, Proc. Amer. Control Conference, Atlanta, pp. 602-607 (1988).
5. Astrom, K.J., et al., Expert control, Automatica 22, 277-286 (1986).
6. Atkinson, D.J., Telerobot task planning and reasoning: Introduction to JPL AI research, Proc. Space
Telerobotics Workshop, pp. 339-350 (1988).
7. Balaram, J., et aL, Run time control architecture for the JPL telerobot, Proc. Space Telerobotics
Workshop, pp. 211-222, (1987).
8. Bhatt, R., et al., A real time pilot for an autonomous robot, Proc. IEEE Internat. Syrup. Intelligent
Control, pp. 135-139 (1987).
9. Blank, G., Responsive system control using register vector grammar, Proc. 1EEE Internat. Symp.
Intelligent Control, pp. 461-466 (1987).
10. Charniak, E. and McDermott, D., Introduction to Artificial Intelligence, Addison Wesley, Reading,
Mass. (1985).
1I. Crosscope, J. and Bonnell, R., An integrated intelligent controller employing both conceptual and
procedural knowledge, Proc. IEEE Internat. Symp. Intelligent Control, pp. 416-422 (1987).
12. Cruz, J.B. and Stubberud, A.R., Knowledge based approach to multiple control coordination in
complex systems, Proc. 1EEE lnternat. Syrup. Intelligent Control, pp. 50-53 (1987).
13. DeJong, K., Intelligent control: Integrating AI and control theory, Proc. IEEE Trends and Applications
1983, pp. 158-161 (1983).
14. Despain, A.M. and Patt, Y.N., Aquarius - a high performance computing system for sysmbolic/
numeric applications, Proc. COMPCON S'85, pp. 376-382 (1985).
340 P.J. ANTSAKLIS ET AL.
15. Dudziak, M.J., et aL, IVC: An intelligent vehicle controller with real-time strategic replanning, Proc.
IEEE lnternat. Syrup. Intelligent Control pp. 145-I52 (1987).
16. Dudziak, M.J., SOLON: An autonomous vehicle mission planner, Proc. Space Telerobotics Workshop,
pp. 289-302 (1987).
17. Farsaie, A., et al., Intelligent controllers for an autonomous system, Proc. 1EEE Internat. Symp.
Intelligent Control, pp. 154--158 (1987).
18. Findeisen, W., et al., Control and Coordination in Hierarchical Systems, Wiley, New York (1980).
19. Fiorio, G., Integration of multi-hierarchy control architectures for complex systems, Proc. IEEE
Internat. Syrup. Intelligent Control pp. 71-79 (1987).
20. Firschein, O., et al., Artificial Intelligence for Space Station Automation, Noyes, New Jersey (1986).
21. Freidland, P. and Lum, H., Building intelligent systems: Artificial intelligence research at NASA Ames
Research Center, Proc. Space Telerobotics Workshop, pp. 19-26 (1988).
22. Fu, K.S., Learning control systems and intelligent control systems: An intersection of artificial intelli-
gence and automatic control, IEEE Trans. Automatic Control pp. 70-72 (1971).
23. Gartrell, C.F., et aL, The use of expert systems for adaptive control of large space stuctures, Proc.
AIAA Guidance Navigation and Control Conf., pp. 376-385 (1985).
24. Gevarter, W.B., Artificial Intelligence, Noyes, NJ (1984).
25. Graglia, P. and Meystel, A., Planning minimum time trajectory in the traversability space of a robot,
Proc. IEEE lnternat. Symp. Intelligent Control pp. 82-87 (1987).
26. Guha, A. and Dudziak, M., Knowledge based controllers for autonomous system, Proc. IEEE Work-
shop Intelligent Control pp. 134-138 (1985).
27. Handelman, D.A. and Stengel, R.F., Combining qualitative and quantitative reasoning in aircraft
failure diagnosis, AIAA Guidance Navigation and Control Conf., pp. 366-375 (1985).
28. Hawker, J. and Nagel, R., World models in intelligent control systems, Proc. IEEE Internat. Symp.
Intelligent Control pp. 482-488 (1987).
29. Hodgson, J., Structures for intelligent systems, Proc. IEEE Internat. Symp. Intelligent Control
pp. 348-353 (1987).
30. Jenkins, L., Space telerobotic systems: Applications and concepts, Proc. Space Telerobotics Workshop,
pp. 29-34 (1988).
31. Kitzmiller, C.T. and Kowalik, J.S., Coupling symbolic and numeric computing in KB systems, AI
Magazine 18, 85-90 (1987).
32. Knight, J.F. and Passino, K.M., Decidability for Temporal Logic in Control Theory, Proc. 25th
Allerton Conf., pp. 335-344 (1987).
33. Krogh, B., Controlled Petri nets and maximally permissive feedback logic, Proc. 25th Allerton Conf.,
pp. 317-326 (1987).
34. Lutz, P., Autonomous mobile robots in industrial production environment, in L.O. Hertzberger and
F.C.A. Groen (eds.) Intelligent Autonomous Systems, North-Holland, NY (1987) (Proc. of an Interna-
tional Conference in Amsterdam, Dec. 1986.)
35. Mendel, J. and Zapalac, J., The application of techniques of artificial intelligence to control system
design, in Advances in Control Systems, C.T. Leondes (ed.), Academic Press, NY (1968).
36. Mesarovic, M., Macko, D. and Takahara, Y., Theory of Hierarchical, Multilevel, Systems, Academic
Press, NY (1970).
37. Meystel, A., Intelligent control: Issues and perspectives, Proc. IEEE Workshop Intelligent Control
pp. 1-15 (1985).
38. Meystel, A., Nested hierarchical controller for intelligent mobile autonomous system, in L.O.
Hertzberger and F.C.A. Groen (eds.), Intelligent Autonomous Systems, North Holland, NY (1987) (Proc. of
an International Conference in Amsterdam, Dec. 1986.)
39. Meystel, A., Planning/control architectures for master dependent autonomous systems with non-
homogeneous knowledge representation, Proc. IEEE Internat. Symp. Intelligent Control, pp. 31-41
(1987).
40. Meystel, A., Nested hierarchical controller with partial autonomy, Proc. Space Telerobotics Workshop,
pp. 251-270 (1988).
41. Moizer, A. and Pagurek, B., An onboard navigation system for autonomous underwater vehicles, in
L.O. Hertzberger and F.C.A. Groen (eds.), Intelligent Autonomous Systems, North Holland, NY
(1987) (Proc. of an International Conference in Amsterdam, Dec. 1986.)
42. Ostroff, J.S., Real time computer control of discrete systems modelled by extended state machines: A
temporal logic approach, PhD dissertation, Report No. 8618, Dept. of Elect. Eng., University of
Toronto, Jan. 1987.
43. Passino, K.M., Restructurable controls and artificial intelligence, McDonnell Aircraft Internal
Report, IR-0392, April 1986.
44. Passino, K.M. and Antsaklis, P.J., Restructurable controls study: An artificial intelligence approach
to the fault detection and identification problem, Final Report, McDonnell Douglas Contract, Oct.
1986.
45. Passino, K.M. and Antsaklis, P.J., Fault detection and identification in an intelligent restructurable
controller, Journal of Intelligent and Robotic Systems 1, 145-161 (1988).
46. Passino, K.M. and Antsaklis, P.J., Artificial intelligence planning problems in a Petri net framework,
Proc. Amer. Control Conference, pp. 626-631 (1988).
47. Pearson, G., Mission planning for autonomous systems, Proc. Space Telerobotics Workshop, pp. 303-
306 (1988).
48. Raulefs, P. and Thorndyke, P.W., An architecture for heuristic control of real time processes, Proc.
Space Telerobotics Workshop, pp. 139-148 (1988).
49. Saridis, G.N., Toward the realization of intelligent controls, Proc. IEEE 67, 1115-1133 (1979).
50. Saridis, G., Intelligent controls for advanced automated processes, Proc. Automated Decision Making
and Problem Solving Conf., NASA CP-2180, May 1980.
51. Saridis, G.N., Intelligent robot control, IEEE Trans. Automatic Control, AC-28, 547-556 (1983).
52. Saridis, G.N., Foundations of the theory of intelligent controls, Proc. IEEE Workshop Intelligent
Control, pp. 23-28 (1985).
53. Saridis, G.N., Knowledge implementation: structures of intelligent control systems, Proc. IEEE
Internat. Symp. Intelligent Control, pp. 9-17 (1987).
54. Saridis, G.N. and Valavanis, K.P., Software and hardware for intelligent robots, Proc. Space
Telerobotics Workshop, pp. 241-250 (1988).
55. Schenker, P.S., Program objectives and technology outreach, Proc. Space Telerobotics Workshop,
pp. 3-18 (1988).
56. Shapiro, S.C. (ed.), Encyclopedia of Artificial Intelligence, Wiley, NY (1987).
57. Stark, L., Telerobotics: Research needs for evolving space stations, Proc. Space Telerobotics Work-
shop, pp. 91-94 (1988).
58. Stengel, R.F., AI theory and reconfigurable flight control systems, Princeton University Report
1664-MAE, June 1984 (see also 1665 by D. Handelman).
59. Stephanou, H.E., An evidential framework for intelligent control, Proc. 1EEE Workshop Intelligent
Control, pp. 118-123 (1985).
60. Thistle, J.G. and Wonham, W.M., Control problems in a temporal logic framework, Int. J. Control
44, 943-976 (1986).
61. Trankle, T.L., Sheu, P. and Rabin, U.H., Expert system architecture for control system design, Proc.
Amer. Control Conference, pp. 1163-1169 (1986).
62. Turner, P.R., et al., Autonomous systems: Architecture and implementation, Jet Propulsion Labora-
tories, Report No. JPL D-1656, August 1984.
63. Valavanis, K.P., A mathematical formulation for the analytical design of intelligent machines. PhD
Dissertation, Electrical and Computer Engineering Dept., Rensselaer Polytechnic Institute, Troy NY,
Nov. 1986.
64. Valavanis, K.P. and Saridis, G.N., Architectural models for intelligent machines, Proc. 25th Conf.
Decision and Control, Athens, Greece (1986).
65. Valavanis, K.P. and Saridis, G.N., Information theoretic modelling of intelligent robotic systems, Part
I: The organization level, Proc. 26th Conf. Decision and Control, Los Angeles, pp. 619-626 (1987).
66. Valavanis, K.P. and Saridis, G.N., Information theoretic modelling of intelligent robotic systems,
Part II: The coordination and execution levels, Proc. 26th Conf. Decision and Control, Los Angeles,
pp. 627-633 (1987).
67. Villa, A., Hybrid knowledge based/analytical control of uncertain systems, Proc. IEEE Internat. Symp.
Intelligent Control, pp. 59-70 (1987).
68. Waldon, S., et al., Updating and organizing world knowledge for an autonomous control system, Proc.
IEEE Internat. Symp. Intelligent Control pp. 423-430 (1987).
69. Wolfe, W.J. and Raney, S.D., Distributed intelligence for supervisory control, Proc. Space Telero-
botics Workshop, pp. 139-148 (1988).
70. Wos, L., Automated Reasoning: 33 Basic Research Problems, Prentice Hall, NJ (1988).
71. Zadeh, L.A., Fuzzy logic, Computer, April 1988, 83-93.
72. Zeigler, B.P., Knowledge representation from Newton to Minsky and beyond, Journal of Applied
Artificial Intelligence 1, 87-107 (1987).