Intelligent Diagnosis and
Prognosis of Industrial
Networked Systems



AUTOMATION AND CONTROL ENGINEERING
A Series of Reference Books and Textbooks

Series Editors
FRANK L. LEWIS, Ph.D., Fellow IEEE, Fellow IFAC
Professor
Automation and Robotics Research Institute
The University of Texas at Arlington

SHUZHI SAM GE, Ph.D., Fellow IEEE
Professor
Interactive Digital Media Institute
The National University of Singapore

Classical Feedback Control: With MATLAB® and Simulink®, Second Edition, Boris J. Lurie and Paul J. Enright
Synchronization and Control of Multiagent Systems, Dong Sun
Subspace Learning of Neural Networks, Jian Cheng Lv, Zhang Yi, and Jiliu Zhou
Reliable Control and Filtering of Linear Systems with Adaptive Mechanisms,
Guang-Hong Yang and Dan Ye
Reinforcement Learning and Dynamic Programming Using Function
Approximators, Lucian Buşoniu, Robert Babuška, Bart De Schutter,
and Damien Ernst
Modeling and Control of Vibration in Mechanical Systems, Chunling Du
and Lihua Xie
Analysis and Synthesis of Fuzzy Control Systems: A Model-Based Approach,
Gang Feng
Lyapunov-Based Control of Robotic Systems, Aman Behal, Warren Dixon,
Darren M. Dawson, and Bin Xian
System Modeling and Control with Resource-Oriented Petri Nets, Naiqi Wu
and MengChu Zhou
Sliding Mode Control in Electro-Mechanical Systems, Second Edition,
Vadim Utkin, Jürgen Guldner, and Jingxin Shi
Optimal Control: Weakly Coupled Systems and Applications, Zoran Gajić,
Myo-Taeg Lim, Dobrila Skatarić, Wu-Chung Su, and Vojislav Kecman
Intelligent Systems: Modeling, Optimization, and Control, Yung C. Shin
and Chengying Xu
Optimal and Robust Estimation: With an Introduction to Stochastic Control
Theory, Second Edition, Frank L. Lewis, Lihua Xie, and Dan Popa
Feedback Control of Dynamic Bipedal Robot Locomotion, Eric R. Westervelt,
Jessy W. Grizzle, Christine Chevallereau, Jun Ho Choi, and Benjamin Morris
Intelligent Freight Transportation, edited by Petros A. Ioannou
Modeling and Control of Complex Systems, edited by Petros A. Ioannou
and Andreas Pitsillides
Wireless Ad Hoc and Sensor Networks: Protocols, Performance, and Control,
Jagannathan Sarangapani
Stochastic Hybrid Systems, edited by Christos G. Cassandras
and John Lygeros
Hard Disk Drive: Mechatronics and Control, Abdullah Al Mamun,
Guo Xiao Guo, and Chao Bi
Autonomous Mobile Robots: Sensing, Control, Decision Making
and Applications, edited by Shuzhi Sam Ge and Frank L. Lewis



Automation and Control Engineering Series

Intelligent Diagnosis and


Prognosis of Industrial
Networked Systems

Chee Khiang Pang


National University of Singapore,
Kent Ridge, Singapore

Frank L. Lewis
University of Texas,
Arlington, TX, USA

Tong Heng Lee


National University of Singapore,
Kent Ridge, Singapore

Zhao Yang Dong


The Hong Kong Polytechnic University, Hung Hom,
Kowloon, Hong Kong PRC

Boca Raton London New York

CRC Press is an imprint of the


Taylor & Francis Group, an informa business



CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2011 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed in the United States of America on acid-free paper


Version Date: 20110429

International Standard Book Number: 978-1-4398-3933-1 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information stor-
age or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com



Dedication

To those I love, and those who love me.


C. K. Pang

To Galina.
F. L. Lewis
Contents
Preface.......................................................................................................................xi

Nomenclature........................................................................................................... xv

List of Figures ........................................................................................................xvii

List of Tables...........................................................................................................xxi

Chapter 1 Introduction ..........................................................................................1


1.1 Diagnosis and Prognosis....................................................................2
1.1.1 Parametric Based .....................................................................2
1.1.2 Non-Parametric Based .............................................................3
1.2 Applications in Industrial Networked Systems .................................5
1.2.1 Modal Parametric Identification (MPI) ...................................5
1.2.2 Dominant Feature Identification (DFI)....................................6
1.2.3 Probabilistic Small Signal Stability Assessment .....................6
1.2.4 Discrete Event Command and Control ....................................7
Chapter 2 Vectors, Matrices, and Linear Systems ................................................9
2.1 Fundamental Concepts ......................................................................9
2.1.1 Vectors .....................................................................................9
2.1.2 Matrices ................................................................................. 11
2.2 Linear Systems ................................................................................ 14
2.2.1 Introduction to Linear Systems.............................................. 14
2.2.2 State-Space Representation of LTI Systems.......................... 15
2.2.3 Linearization of Non-Linear Systems.................................... 18
2.3 Eigenvalue Decomposition and Sensitivity ..................................... 19
2.3.1 Eigenvalue and Eigenvector................................................... 19
2.3.2 Eigenvalue Decomposition .................................................... 21
2.3.3 Generalized Eigenvectors ...................................................... 23
2.3.4 Eigenvalue Sensitivity to Non-Deterministic System
Parameters.............................................................................. 27
2.3.5 Eigenvalue Sensitivity to Link Parameters ............................ 28
2.4 Singular Value Decomposition (SVD) and Applications ................ 32
2.4.1 Singular Value Decomposition (SVD) .................................. 32
2.4.2 Norms, Rank, and Condition Number ................................... 34
2.4.3 Pseudo-Inverse....................................................................... 34
2.4.4 Least Squares Solution .......................................................... 34


2.4.5 Minimum-Norm Solution Using SVD ..................................36


2.5 Boolean Matrices.............................................................................37
2.5.1 Binary Relation......................................................................38
2.5.2 Graphs....................................................................................40
2.5.3 Discrete-Event Systems .........................................................40
2.6 Conclusion .......................................................................................44
Chapter 3 Modal Parametric Identification (MPI) ..............................................47
3.1 Introduction .....................................................................................47
3.2 Servo-Mechanical-Prototype Production Cycle ..............................50
3.2.1 Modal Summation .................................................................53
3.2.2 Pole-Zero Product..................................................................54
3.2.3 Lumped Polynomial ..............................................................55
3.3 Systems Design Approach...............................................................55
3.4 Modal Parametric Identification (MPI) Algorithm..........................62
3.4.1 Natural Frequencies fi and Damping Ratios ζi .....................64
3.4.2 Reformulation Using Forsythe’s Orthogonal Polynomials....68
3.4.3 Residues Ri ............................................................................73
3.4.4 Error Analysis........................................................................75
3.5 Industrial Application: Hard Disk Drive Servo Systems.................76
3.6 Results and Discussions...................................................................82
3.7 Conclusion .......................................................................................84
Chapter 4 Dominant Feature Identification (DFI) ..............................................85
4.1 Introduction .....................................................................................85
4.2 Principal Component Analysis (PCA).............................................90
4.2.1 Approximation of Linear Transformation X .........................90
4.2.2 Approximation in Range Space by Principal Components ...91
4.3 Dominant Feature Identification (DFI)............................................92
4.3.1 Data Compression..................................................................92
4.3.2 Selection of Dominant Features ............................................93
4.3.3 Error Analysis........................................................................95
4.3.4 Simplified Computations .......................................................97
4.4 Time Series Forecasting Using Force Signals and Static Models ...97
4.4.1 Determining the Dominant Features......................................99
4.4.2 Prediction of Tool Wear .........................................................99
4.4.3 Experimental Setup..............................................................100
4.4.4 Effects of Different Numbers of Retained Singular Values q
and Dominant Features p.....................................................105
4.4.5 Comparison of Proposed Dominant Feature Identification
(DFI) and Principal Feature Analysis (PFA) .......................108
4.5 Time Series Forecasting Using Acoustic Emission Signals and
Dynamic Models............................................................................113
4.5.1 ARMAX Model Based on DFI............................................114

4.5.2 Experimental Setup..............................................................116


4.5.3 Comparison of Standard Non-Dynamic Prediction Models
with Dynamic ARMAX Model ...........................................121
4.5.4 Comparison of Proposed ARMAX Model Using ELS with
DFI, MRM Using RLS with DFI, and MRM Using RLS
with Principal Feature Analysis (PFA) ................................123
4.5.5 Effects of Different Numbers of Retained Singular Values
and Features Selected ..........................................................125
4.5.6 Comparison of Tool Wear Prediction Using AE Measure-
ments and Force Measurements...........................................130
4.6 DFI for Industrial Fault Detection and Isolation (FDI) .................131
4.6.1 Augmented Dominant Feature Identification (ADFI) .........132
4.6.2 Decentralized Dominant Feature Identification (DDFI)......132
4.6.3 Fault Classification with Neural Networks..........................132
4.6.4 Experimental Setup..............................................................134
4.6.5 Fault Detection Using 120 Features ....................................142
4.6.6 Augmented Dominant Feature Identification (ADFI) and
NN for Fault Detection ........................................................143
4.6.7 Decentralized Dominant Feature Identification (DDFI) and
NN for Fault Detection ........................................................145
4.7 Conclusion .....................................................................................149
Chapter 5 Probabilistic Small Signal Stability Assessment..............................151
5.1 Introduction ...................................................................................152
5.2 Power System Modeling: Differential Equations ..........................155
5.2.1 Synchronous Machines........................................................156
5.2.2 Exciter and Automatic Voltage Regulator (AVR)................163
5.2.3 Speed Governor and Steam Turbine ....................................165
5.2.4 Interaction between a Synchronous Machine and Its Con-
trol Systems .........................................................................167
5.3 Power System Modeling: Algebraic Equations.............................168
5.3.1 Stator Equations...................................................................168
5.3.2 Network Admittance Matrix YN ..........................................171
5.3.3 Reduced Admittance Matrix YR .........................................172
5.4 State Matrix and Critical Modes....................................................173
5.5 Eigenvalue Sensitivity Matrix........................................................178
5.5.1 Sensitivity Analysis of the New England Power System ....183
5.5.2 Statistical Functions.............................................................187
5.5.3 Single Variate Normal PDF of αi ........................................193
5.5.4 Multivariate Normal PDF ....................................................194
5.5.5 Probability Calculations ......................................................195
5.5.6 Small Signal Stability Region..............................................198
5.6 Impact of Induction Motor Load ...................................................201
5.6.1 Composite Load Model for Sensitivity Analysis.................203

5.6.2 Motor Load Parameter Sensitivity Analysis........................205


5.6.3 Parametric Changes and Critical Modes Mobility...............214
5.6.4 Effect of the Number of IMs on Overall Sensitivity
(with 30% IM load) .............................................................215
5.6.5 Effect on Overall Sensitivity with Different Percentages of
IM Load in the Composite Load..........................................220
5.7 Discussion......................................................................................221
5.8 Conclusion .....................................................................................222
Chapter 6 Discrete Event Command and Control.............................................225
6.1 Introduction ...................................................................................225
6.2 Discrete Event C2 Structure for Distributed Teams ......................227
6.2.1 Task Sequencing Matrix (TSM) ..........................................229
6.2.2 Resource Assignment Matrix (RAM)..................................233
6.2.3 Programming Multiple Missions .........................................235
6.3 Conjunctive Rule–Based Discrete Event Controller (DEC)..........237
6.3.1 DEC State Equation.............................................................237
6.3.2 DEC Output Equations ........................................................239
6.3.3 DEC as a Feedback Controller ............................................239
6.3.4 Functionality of the DEC.....................................................239
6.3.5 Properness and Fairness of the DEC Rule Base ..................241
6.4 Disjunctive Rule–Based Discrete Event Controller (DEC) ...........244
6.5 DEC Simulation and Implementation............................................245
6.5.1 Simulation of Networked Team Example............................246
6.5.2 Implementation of Networked Team Example on Actual
WSN ....................................................................................247
6.5.3 Simulation of Multiple Military Missions Using FCS ........247
6.6 Conclusion .....................................................................................252
Chapter 7 Future Challenges.............................................................................255
7.1 Energy-Efficient Manufacturing ....................................................255
7.2 Life Cycle Assessment (LCA).......................................................257
7.3 System of Systems (SoS)...............................................................260
References..............................................................................................................263

Index ......................................................................................................................279
Preface
In an era of intensive competition when asset usage and plant operating efficiencies
must be maximized, unexpected downtime due to machinery failure has become more
costly and unacceptable than before. To cut operating costs and increase revenues, in-
dustries have an urgent need for prediction of fault progression and remaining lifespan
of industrial machines, processes, and systems. As such, predictive maintenance has
been actively pursued in the manufacturing industries in recent years where equip-
ment outages are forecasted, and maintenance is carried out only when necessary.
Prediction leads to improved management and hence effective usage of equipment,
and multifaceted guarantees are increasingly being given for industrial machines,
processes, products, and services, etc. To ensure successful condition-based mainte-
nance, it is necessary to detect, identify, and classify different kinds of failure modes
in the manufacturing processes as early as possible.
With the pressing need for increased machine longevity and early process fault detection, intelligent diagnosis and prognosis have become an important field of interest in engineering. For example, an engineer who mounts an acoustic sensor onto a spindle motor would like to know when the ball bearings will be worn out and need to be changed, without having to halt the ongoing milling processes and thereby decrease the industrial yield. Or a scientist working on sensor networks would like to know which sensors are redundant during process monitoring and can be pruned off to save operational and computational overheads. These realistic scenarios illustrate the need for new or unified perspectives on challenges in system analysis and design for engineering applications.
Currently, most works on Condition-Based Monitoring (CBM), Fault Detection and Isolation (FDI), or even Structural Health Monitoring (SHM) consider solely the integrity of independent modules, even though complex integrated industrial processes consist of several mutually interacting components. Most literature on diagnosis and prognosis is also mathematically involved, which makes it hard for potential readers not working in this field to follow and appreciate the state-of-the-art technologies. As such, a good intelligent diagnosis and prognosis architecture should consider crosstalk to facilitate actions and decisions across the synergetic integration of composite systems, while maintaining overall stability. This "big-picture" approach will also limit the intrinsic uncertainties and variabilities within the interacting components, while suppressing possible extrinsic socio-techno intrusions and uncertainties from the human interface layer.
Adding to the current literature available in this research arena, this book provides
an overview of linear systems theory and the corresponding matrix operations re-
quired for intelligent diagnosis and prognosis of industrial networked systems. With
the essential theoretical fundamentals covered, automated mathematical machineries
are developed and applied to targeted realistic engineering systems. Our results show the effectiveness of these tool sets for many time-triggered and event-triggered industrial applications, including forecasting of machine tool wear in industrial cutting machines, sensor and feature reduction for industrial FDI, identification of critical resonant modes in mechatronic systems for systems design in research and development (R&D), probabilistic small signal stability assessment in large-scale interconnected power systems, and discrete event command and control for military applications, just to name a few. It should be noted that these developed tool sets are highly portable, and can be readily adopted and applied to many other engineering applications.

Outline
This book is intended primarily as a bridge between academics in universities, practicing engineers in industry, and scientists working in research institutes. The book is carefully organized into chapters, each providing an introductory section tailored to cover the essential background materials, followed by specific industrial applications to realistic engineering systems and processes. To reach out to a wider audience, linear matrix operators and indices are used to formulate mathematical machineries and provide formal decision software tools that can be readily appreciated and applied. The book is organized into seven chapters with the following contents:
• Chapter 1: Introduction
Intelligent diagnosis and prognosis using model-based and non-model-based methods in the current literature are discussed. The various application domains in realistic industrial networked systems are also introduced.
• Chapter 2: Vectors, Matrices, and Linear Systems
Fundamental concepts of linear algebra and linear systems are reviewed along with eigenvalue and singular value decompositions. The usage of both real and binary matrices for diagnosis and prognosis applications is also discussed.
• Chapter 3: Modal Parametric Identification (MPI)
Proposes a Modal Parametric Identification (MPI) algorithm for fast iden-
tification of critical modal parameters in R&D of mechatronic systems. A
systems design approach with enhanced MPI is proposed for mechatronic
systems and verified with frequency responses of dual-stage actuators in
commercial hard disk drives (HDDs).
• Chapter 4: Dominant Feature Identification (DFI)
Proposes a Dominant Feature Identification (DFI) software framework for
advanced feature selection when using inferential sensing in online moni-
toring of industrial systems and processes. A mathematical tool set which
guarantees minimized least squares error in feature reduction and clustering
is developed. The proposed techniques are verified with experiments on tool
wear prediction in industrial high speed milling machines and fault detection
in a machine fault simulator.
• Chapter 5: Probabilistic Small-Signal Stability Assessment
Proposes analytical and numerical methods to obtain eigenvalue sensitivities with respect to non-deterministic system parameters and load models for large-scale interconnected power systems. A probabilistic small-signal stability assessment method is proposed, and verified with extensive simulations on the New England 39-Bus Test System.
• Chapter 6: Discrete Event Command and Control
Proposes the use of binary matrices and algebra for command and control of
discrete event–triggered systems. A mathematically justified framework is
provided for distributed networked teams on multiple missions. This is veri-
fied with simulations and experiments on a wireless sensor network (WSN),
as well as simulation on a military ambush attack mission.
• Chapter 7: Future Challenges
Provides conclusions and future work directions for intelligent diagnosis and prognosis in the areas of energy-efficient manufacturing, life cycle assessment, and system-of-systems architecture.

Learning Outcomes
The developed tools allow for higher-level decision making and command in the synergetic integration of several industrial processes and stages, thereby shortening failure and fault analysis across the entire industrial production life cycle. This shortens production time while reducing failures through early identification and detection of the key factors that can lead to potential faults. As such, engineers and managers are empowered with the knowledge and know-how to make important decisions and policies. The tools can also be used to educate fellow researchers and the public about the advantages of various technologies.
Potential readers not working in the relevant fields can also appreciate the literature covered, even without prior knowledge and exposure, and will still be able to apply the proposed tool sets to address industrial problems arising from evolving or even emerging behavior in networked systems or processes, e.g., sensor fusion, pattern recognition, and reliability studies. The proposed mathematical machineries aim to provide methodologies for making autonomous decisions that meet present and uncertain future needs quantitatively, without compromising the ad-hoc "add-on" flexibility of network-centered operations.
Many universities have also established programs and courses in this new field, with cross-faculty and interdisciplinary research going on in this arena as well. As such, this book can also serve as a textbook for an intermediate to advanced module in control engineering, systems reliability, diagnosis and prognosis, etc. We also hope that the book is concise enough to be used for self-study, or as a recommended text, for a single advanced undergraduate or postgraduate module on intelligent diagnosis and prognosis, FDI, CBM, or SHM.

Acknowledgments
Last but not least, we would like to acknowledge our loved ones for their love, under-
standing, and encouragement throughout the entire course of preparing this research
book. This book was also made possible with the help of our colleagues, collaborators,
as well as students and members in our research teams. This work was supported in
part by Singapore MOE AcRF Tier 1 Grant R-263-000-564-133, NSF Grant ECCS-
0801330, ARO Grant W91NF-05-1-0314, AFOSR Grant FA9550-09-1-0278, and
Hong Kong Polytechnic University Grant #ZV3E.

Chee Khiang Pang


Frank L. Lewis
Tong Heng Lee
Zhao Yang Dong
Nomenclature
ADFI Augmented Dominant Feature Identification
AE Acoustic Emission
ARFIMA AutoRegressive Fractionally Integrated Moving Average
ARIMA AutoRegressive Integrated Moving Average
ARMAX Auto-Regressive Moving-Average with eXogenous input/s
AVR Automatic Voltage Regulator
BEP Best Efficiency Point
BIBO Bounded-Input-Bounded-Output
BU Business Unit
C2 Command and Control
CAD Computer-Aided Design
CAPEX CAPital EXpenditure
CBM Condition-Based Monitoring
CDF Cumulative Distribution Function
CF Characteristic Function
CNC Computer Numerical Control
dRAM disjunctive-input Resource Assignment Matrix
DAE Differential and Algebraic Equation
DDFI Decentralized Dominant Feature Identification
DEC Discrete Event Control
DFI Dominant Feature Identification
DSA Dynamic Signal Analyzer
DSP Digital Signal Processing
EA Evolutionary Algorithm
ELS Extended Least Squares
EPRI Electric Power Research Institute
FACTS Flexible Alternating Current Transmission Systems
FCS Future Combat System
FDI Fault Detection and Isolation
FEA Finite Element Analysis
FEM Finite Element Modeling
FFBD Functional Flow Block Diagram
FFT Fast Fourier Transform
GA Genetic Algorithm
GHG Green House Gas
HDD Hard Disk Drive
HHT Hilbert–Huang Transform
HMM Hidden Markov Model
HTN Hierarchical Task Network
HVDC High-Voltage Direct Current
IM Induction Motor
IPP Independent Power Producer
ISO Independent System Operator
JAUGS Joint Architecture for Unmanned Ground System
KLT Karhunen–Loève Transform


LCA Life Cycle Assessment


LDV Laser Doppler Vibrometer
LS Least Squares
LSE Least Square Error
LITP Linear-In-The-Parameter
LTI Linear Time-Invariant
MIMO Multi-Input-Multi-Output
MPI Modal Parametric Identification
MRE Mean Relative Error
MRM Multiple Regression Model
MSE Mean Square Error
MTBF Mean Time Between Failure
NN Neural Network
OODA Observe, Orient, Decide, and Act
OPEX OPerations EXpense
OS Overall Sensitivity
O&S Operation and Support
PCA Principal Component Analysis
PDA Personal Digital Assistant
PDF Probability Density Function
PFA Principal Feature Analysis
PHM Prognostic Health Management
PN Petri Net
PSS Power System Stabilizer
PZT Lead-Zirconate-Titanate (Pb-Zr-Ti)
R&D Research & Development
RAM Resource Assignment Matrix
RBF Radial Basis Function
RBS Rule-Based System
RDM Resource Dependency Matrix
RLS Recursive Least Squares
RMS Root Mean Square
RTO Regional Transmission Organization
SD Standard Deviation
SHM Structural Health Monitoring
SISO Single-Input-Single-Output
SNR Signal-to-Noise Ratio
SoS System-of-Systems
SVD Singular Value Decomposition
TCM Tool Condition Monitoring
TIA Totally Integrated Automation
TOC Total Ownership Cost
TPM Technical Performance Metric
TRADOC TRAining and DOCtrine command
TSM Task Sequencing Matrix
UAV Unmanned Aerial Vehicle
UGS Unattended Ground Sensor
UGV Unmanned Ground Vehicle
VCM Voice Coil Motor
WSN Wireless Sensor Network
ZIP Constant impedance (Z), current (I), and power (P)
List of Figures
2.1 Dynamic model of linear system. ................................................................... 14
2.2 Graph of binary relation matrix M. ................................................................ 39
2.3 Directed graph. ............................................................................................... 41
2.4 Complete architecture of DEC in a WSN....................................................... 42

3.1 R&D teams in a typical servo-mechanical-prototype production cycle in mechatronic industries .................................................................... 51
3.2 A four-phase systems design approach. ......................................................... 57
3.3 Top-level functional flow block diagram for mechatronic product devel-
opment. ........................................................................................................... 58
3.4 Functional flow block diagram for mechatronic R&D. .................................. 59
3.5 Initial synthesized design for mechatronic R&D workflow. .......................... 60
3.6 New proposed design for mechatronic R&D workflow.................................. 61
3.7 R&D teams in a typical servo-mechanical-prototype production cycle in
HDD industries. .............................................................................................. 62
3.8 Picture of VCM with appended PZT active suspension in a dual-stage
configuration for HDDs. The downward arrows represent the input signals,
and the upward arrows represent the corresponding outputs.......................... 77
3.9 Frequency responses. Dashed: LDV measurement data of PZT active sus-
pension. Solid: From identified model using proposed modal identification
algorithm......................................................................................................... 77
3.10 Frequency responses. Dashed: LDV measurement data of VCM. Solid:
From identified model using proposed modal identification algorithm. ........ 79
3.11 Frequency responses. Left: −5% shift in resonant frequencies. Right: +5%
shift in resonant frequencies. Dashed: LDV measurement data of PZT
active suspension. Solid: From identified model using proposed modal
identification algorithm. ................................................................................. 80
3.12 Frequency responses. Left: −5% shift in resonant frequencies. Right: +5%
shift in resonant frequencies. Dashed: LDV measurement data of VCM.
Solid: From identified model using proposed modal identification
algorithm......................................................................................................... 81

4.1 Proposed DFI algorithm showing feature space Rn, compressed feature space Rq, and data (singular value) space Rm. ............................... 95
4.2 High speed milling machine. ........................................................................100
4.3 Ball nose cutter. ............................................................................................101
4.4 Flank wear at cutting edge............................................................................101
4.5 Experimental setup. ......................................................................................102
4.6 Three axis cutting force signals (Fx , Fy , Fz ). .................................................103


4.7 Flowchart of the proposed DFI for choosing the number of principal com-
ponents via SVD, and clustering using the K-means algorithm to select the
dominant feature subset. ...............................................................................106
4.8 Plot of principal components vs. singular values. ........................................108
4.9 MRM using sixteen dominant features and the RLS algorithm. ..................109
4.10 MRM using random selected four features {fm, fod, sod, vf} and tool wear
comparison. ..................................................................................................110
4.11 Examples of MRMs using four dominant features, three principal compo-
nents, and the RLS algorithm. ......................................................................111
4.12 Examples of MRMs using four dominant features, three principal compo-
nents, and the RLS algorithm (zoomed).......................................................112
4.13 Piezotron Acoustic Emission Sensor used, 100 to 900 kHz.........................117
4.14 Experimental setup. ......................................................................................119
4.15 Unprocessed AE signal during cutting process. ...........................................119
4.16 Stages in evolution of tool wear....................................................................120
4.17 Plot of principal components vs. singular values. ........................................123
4.18 RLS using all sixteen force and AE features................................................124
4.19 ARMAX models using all sixteen force and AE features............................125
4.20 Examples of tool wear prediction using four dominant features and three
principal components....................................................................................126
4.21 Examples of prediction using four dominant features and three principal
components...................................................................................................131
4.22 Two-layer NN trained using backpropagation for industrial fault classifi-
cation after proposed ADFI and DDFI. ........................................................133
4.23 Machine fault simulator and the eight sensors. ............................................134
4.24 Zoomed-in view of the four vibration sensors Ax, Ay, A2x, A2y, with the
corresponding data cables attached. .............................................................136
4.25 Current clamp meters and Futek torque sensor. ...........................................137
4.26 Schematic of a typical ball bearing...............................................................138
4.27 Plot of principal components vs. singular values. ........................................142

5.1 Block diagram of IEEE Type 1 Exciter and AVR system. ...........................164
5.2 Block diagram of simplified exciter and AVR system..................................165
5.3 Simplified block diagram of speed governor system....................................166
5.4 New England test system..............................................................................174
5.5 Composite load model consisting of a ZIP load and an IM load. ................203
5.6 IM transient-state equivalent circuit. ............................................................204
5.7 Effect on the OS of the load modeled as a composite load with a certain
percentage of IM and ZIP loads. ..................................................................209
5.8 Least damped local mode movement with a change in the location of
composite load from Bus 3 to Bus 29. .........................................................210
5.9 Least damped inter-area mode movement with a change in the location of
composite load from Bus 3 to Bus 29. .........................................................211
5.10 Root loci of the local mode for the given operating condition for the dif-
ferent values of the IM load parameters. ......................................................214

5.11 Root loci of the inter-area mode for the given operating condition for the
different values of the IM load parameters...................................................215
5.12 S.D. of the motor loads with a variable number of IMs in the system. ........217
5.13 OS of the load with 16 and 17 motors in the system....................................218
5.14 OS of the load with 11 and 12 motors in the system at Bus 23 and Bus 24. 219
5.15 Overall sensitivity of the load with a different percentage of dynamic load
component. ...................................................................................................220

6.1 C2 rule–based discrete event control for distributed networked teams. .......227
6.2 Sample mission scenario from [240]. ...........................................................229
6.3 Simulation results of DEC sequencing mission tasks in the networked team
example.........................................................................................................246
6.4 Simulation results of DEC sequencing mission tasks with increased Mis-
sion 1 priority................................................................................................247
6.5 DEC virtual reality interface panoramic view of the configuration of the
mobile WSN during real-world experiments................................................248
6.6 Experimental results showing the task event trace of the WSN. ..................248
6.7 Simulated battlefield with networked military team using ambush attack
tactics. ...........................................................................................................250
6.8 DEC sequencing mission tasks in the Networked Team Example. Simula-
tion. ...............................................................................................................253

7.1 Relationship between Best Efficiency Point (BEP) and Mean Time Be-
tween Failure (MTBF)..................................................................................256
7.2 Power data collected in learning machine runs are used to form detection
buffers. ..........................................................................................................257
List of Tables
3.1 Identified Modal Parameters of PZT Active Suspension ............................... 78
3.2 Identified Modal Parameters of VCM ............................................................ 79
3.3 Identified Modal Parameters of Perturbed PZT Active Suspension ............... 81
3.4 Identified Modal Parameters of Perturbed VCM............................................ 82

4.1 Experimental Components ...........................................................................103


4.2 Features and Nomenclature ..........................................................................104
4.3 Principal Components and Singular Values .................................................107
4.4 Results of DFI Method Using Three Retained Singular Values...................109
4.5 Results of PFA Method Using Three Retained Singular Values ..................110
4.6 Results of DFI Method Using Four Retained Singular Values.....................111
4.7 Results of PFA Method Using Four Retained Singular Values ....................112
4.8 Experimental Components ...........................................................................118
4.9 AE Features and Nomenclature....................................................................121
4.10 Principal Components and Singular Values .................................................122
4.11 Comparison of Model Accuracies ................................................................125
4.12 Results of DFI and ARMAX Model with ELS (New DFI) Using Three
Retained Singular Values..............................................................................127
4.13 Results of DFI and MRM with RLS Using Three Retained Singular Values ........127
4.14 Results of PFA and MRM with RLS Using Three Retained Singular Values ........128
4.15 Results of DFI and ARMAX Model with ELS (New DFI) Using Four
Retained Singular Values..............................................................................128
4.16 Results of DFI and MRM RLS Using Four Retained Singular Values ........129
4.17 Results of PFA and MRM RLS Using Four Retained Singular Values........129
4.18 Nomenclature and Corresponding Sensors...................................................135
4.19 (AI0-AI2) Motor Current Features and Nomenclature.................................139
4.20 AI3 Torque Features and Nomenclature.......................................................139
4.21 (AI4-AI7) Acceleration Features and Nomenclature ...................................140
4.22 Machine Status and Representation..............................................................140
4.23 The 25 Most Dominant Principal Components and Singular Values ...........141
4.24 Machine Status and Fault Detection .............................................................143
4.25 Features Selection Using ADFI....................................................................144
4.26 Machine Status and Fault Detection Using Reduced Number of Sensors
and Features from ADFI and NN .................................................................144
4.27 Feature Selection Using Convention DFI–Normal.......................................145
4.28 Feature Selection Using Convention DFI–Bearing ......................................146
4.29 Feature Selection Using Convention DFI–Imbalance ..................................146
4.30 Feature Selection Using Convention DFI–Loose Belt .................................147
4.31 Feature Selection Using Convention DFI–Bearing Outer Race Fault..........147


4.32 Feature Selection Using Proposed DDFI .....................................................148


4.33 Machine Status and Fault Detection Using Reduced Number of Sensors
and Features from DDFI and NN .................................................................148

5.1 Critical Eigenvalues and Their Locations at 830MW Level ........................178


5.2 Sensitivity Factor of Critical Eigenvalue to Exciter Gain KA of Generator
at Bus 30 Using the Analytical Approach ....................................................183
5.3 Sensitivity Factor of Critical Eigenvalue to Exciter Gain KA of Generator
at Bus 30 Using the Numerical Approach ....................................................184
5.4 Sensitivity Analysis Errors for Different Exciter Gain KA Perturbation Lev-
els..................................................................................................................184
5.5 Comparison of Analytical to Numerical Approaches in Computing the
Eigenvalue Sensitivities (Speed Governor Gain KT G1 at Bus 30, Sensitivity
Values ×10−3 ) ..............................................................................................185
5.6 Percentage Errors for Eigenvalue Sensitivity Computation between Nu-
merical and Analytical Methods at Bus 30 (Numerical Method Uses 1%
Perturbation) .................................................................................................185
5.7 Eigenvalue Sensitivity Analysis of the Governor Time Constant TT G6 at
Machines 30–32............................................................................................186
5.8 Eigenvalue Sensitivity Analysis of the Governor Time Constant TT G6 at
Machines 33–35............................................................................................186
5.9 Eigenvalue Sensitivity Analysis of the Governor Time Constant TT G6 at
Machines 36–38............................................................................................187
5.10 Eigenvalue Sensitivity of the Governor Gain KT G2 ......................................187
5.11 Eigenvalue Sensitivity of the Governor Gain KT G2 ......................................188
5.12 Eigenvalue Sensitivity of the Governor Gain KT G2 ......................................188
5.13 Identify the Most Sensitive Eigenvalue to Exciter Gain KA at Each Generator ........189
5.14 Identify the Most Sensitive Eigenvalue to Speed Governor Gain KT G1 at
Each Generator .............................................................................................189
5.15 Identify the Most Sensitive Eigenvalue to Speed Governor Gain KT G3 at
Each Generator .............................................................................................190
5.16 Identify the Most Sensitive Eigenvalue to Speed Governor Time Con-
stant TT G4 at Each Generator ........................................................................190
5.17 Identify the Most Sensitive Eigenvalue to Speed Governor Time Con-
stant TT G5 at Each Generator ........................................................................191
5.18 Hermite Polynomial and Its Indices .............................................................198
5.19 Critical Eigenvalues and Their Locations at 830MW Level ........................198
5.20 Hermite Polynomial and Its Indices of the New England System ...............199
5.21 Sensitivity Computation with Respect to KA at Buses 31 and 32.................201
5.22 Parametric Sensitivity of the Critical Eigenvalues for Composite Load at
Bus 3 (50% IM) ............................................................................................212
5.23 Parametric Sensitivity of the Critical Eigenvalues for Composite Load at
Bus 4 (70% IM) ............................................................................................213
5.24 OS of the Composite Loads with the Variable Number of IMs Modeled in
the System ....................................................................................................216

6.1 Mission 1–Task Sequence for Deployed WSN ............................................232


6.2 Mission 2–Task Sequence for Deployed WSN ............................................236
6.3 Matrix Multiply in the OR/AND Algebra ....................................................238
6.4 Suppressing Enemy Troop 1.........................................................................249
6.5 Suppressing Enemy Troop 2.........................................................................251
Chapter 1
Introduction
Traditionally, diagnosis and prognosis are terms commonly used in the medical do-
main; diagnosis being the identification of diseases through careful observation of the
patients’ symptoms and results from various in-depth examinations, while prognosis
is the prediction of the various outcomes of the illness and corresponding remedies,
including the anticipation of recovery from the expected course if no contingencies
arise. With the introduction of these concepts to the field of engineering, diagnosis
now becomes the art of identification of engineering systems’ failure through ob-
servation of the sensory signals from the machines and equipment, while prognosis
is the prediction of failure and the provision of corresponding engineering solutions to achieve the desired outcomes. Intelligent diagnosis and prognosis is extremely important in this era of intensive manufacturing competition, where the challenges faced by industrial manufacturing processes include maximizing productivity, ensuring high product quality, and reducing production time while simultaneously minimizing production cost. Unanticipated and unresolved failures result in machine and equipment downtime, and increase OPerations EXpenditure (OPEX) and CAPital EXpenditure (CAPEX) through halted production and the intervention of engineers, respectively. This decreases revenue and is unacceptable in modern
competitive manufacturing and production systems.
The desire and need for accurate diagnostic tools with predictive prognostic capabilities have been around since human beings began to invent and operate complex and expensive machinery after the Industrial Revolution. With rapid technological advancement, intelligent diagnosis and prognosis is of extreme importance in today's industrial networked systems, ranging from complex manufacturing and large-scale interconnected power systems to aerospace vehicles, military and merchant ships, and the automotive industry. While the application domains might differ, the goals are identical: to maximize equipment uptime and to minimize maintenance and operating costs. As manning levels are reduced and equipment becomes more complex, intelligent maintenance schemes must replace the old prescheduled and labor-intensive planned maintenance systems to ensure that equipment continues to function [1].
It is thus obvious that modern engineering systems require intelligent machine and system fault diagnosis and prognosis, since employing these capabilities reduces Operation and Support (O&S) costs and Total Ownership Costs (TOCs). In this chapter, we will give a survey of the diagnosis and prognosis methods in current literature, as well as those of technical proximity, e.g., Condition-Based Maintenance (CBM), Prognostic Health Management (PHM), Fault Detection and Isolation (FDI), Structural Health Monitoring (SHM), etc. We will then discuss the novelty of our proposed methodologies and their application to realistic industrial networked systems.


1.1 DIAGNOSIS AND PROGNOSIS


The intelligence of diagnosis and prognosis methodologies has increased tremen-
dously over the past few decades. After a positive diagnosis of impending fault or
failure in the engineering system or process, prognosis takes place and provides ample
time for rectification before total breakdown or instability. While this time window and the corresponding corrective measures vary from application to application, the main functions include, but are not limited to, detection, isolation, quantification, prediction, anticipation, and correction. These applications now range from manual to semi-automated to fully automated, and have been applied to manufacturing, commercial, and defense systems, among others. Generally, they can be classified into parametric-based or non-parametric-based diagnosis and prognosis.

1.1.1 PARAMETRIC BASED


Parametric-based methods for diagnosis and prognosis are attractive because the un-
derlying physics or statistics can be captured and understood. In this section, we
discuss some of these research efforts for realistic engineering systems that appear in
current literature.
The Markov Process is a mathematical model used to represent random evolution
of a memoryless system, i.e., a model for which the likelihood of a given future state at
any given moment depends only on its present state and not on any past states. Such a
process is analogous to “threshold-detection.” Over the years, Markov Processes have
been successfully used by researchers to model past failure states or predict future
states of processes or machines in aerospace, automotive, and defense industries, etc.
A Hidden Markov Model (HMM) has only one discrete hidden state variable and a set of discrete or continuous observation nodes. As tool wear degradation is one of the main failure modes in large-scale industrial cutting and milling machines, tool wear monitoring and prediction of remaining useful life is a very important and practical consideration. Cutting tool wear monitoring and useful life prediction were modeled using HMMs in [2][3], via self-organizing maps and dynamic HMMs in [4][5], and via continuous HMMs in [6]. Similarly, the monitoring of bearing health in rotary machines is also studied in current literature. A strategy to optimize bearing maintenance schedules was proposed with the application of condition-based monitoring techniques in [7]. A robust condition-based maintenance algorithm and remaining useful life prediction using HMM techniques were developed for the gearboxes of Westland and SH-60 helicopters in [8][9]. Diagnosis of pump systems was also studied using a new autoregressive hidden semi-Markov model in [10].
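To make the HMM machinery concrete, the following minimal sketch implements the standard forward algorithm for a discrete-observation HMM in Python with NumPy; the two-state tool-condition model, its transition and emission probabilities, and the observation sequence are purely illustrative assumptions and are not taken from the cited studies.

```python
import numpy as np

# Hypothetical two-state tool-condition HMM: state 0 = "sharp", state 1 = "worn".
A = np.array([[0.95, 0.05],      # state transition probabilities
              [0.00, 1.00]])     # wear is assumed irreversible in this toy model
B = np.array([[0.7, 0.2, 0.1],   # emission probabilities of three discretized
              [0.1, 0.3, 0.6]])  # vibration levels given each hidden state
pi = np.array([0.99, 0.01])      # initial state distribution

def forward(obs, A, B, pi):
    """Scaled forward algorithm: returns log P(obs | model) and filtered state."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha, log_lik = alpha / c, np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha, log_lik = alpha / c, log_lik + np.log(c)
    return log_lik, alpha

obs = [0, 0, 1, 1, 2, 2, 2]            # discretized sensor readings over time
log_lik, filtered = forward(obs, A, B, pi)
print(log_lik, filtered)               # filtered[1] is P(tool is worn | data)
```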
The maintenance strategy has also evolved over the years. A real-time health
prognosis and dynamic preventive maintenance policy was developed for equipment
under aging Markovian deterioration in [11]. A Markov analysis technique was used to calculate reliability measures for safety systems involving several typical reliability-related factors in [12]. A structural parallel system with multiple failures was studied using a Markov chain-line sampling method in [13].

Besides Markovian methods, time series models are also used to forecast future
events based on known past events. Time series analyses are often divided into two classes: frequency-domain methods and time-domain methods. The former are based on frequency information from spectral analysis or wavelet analysis, while the latter are based mainly on examination of autocorrelation and cross-correlation information. Autocorrelation analysis examines serial dependence, while spectral analysis examines cyclic behavior which might not be related to seasonality.
In the Box-Jenkins methodology (named after the statisticians George Box and
Gwilym Jenkins), AutoRegressive Moving Average (ARMA) models are used to find
the best fit of a time series to past values of this time series for forecasts. Models for
time series data can have many forms and represent different stochastic processes.
When modeling variations in the level of a process, three broad classes of practical
importance are the AR models, Integrated (I) models, and MA models. These three
classes depend linearly on previous data points. Combinations of these ideas produce
ARMA and AutoRegressive Integrated Moving Average (ARIMA) models. The Au-
toRegressive Fractionally Integrated Moving Average (ARFIMA) model generalizes
the former three. The ARMA model has been used successfully to monitor and forecast past, present, and future conditions of machines by various researchers in the automotive and aeronautical fields. The power consumption of the active suspension of an automotive system is predicted in [14] using a novel pseudo-linear method for the estimation of fractionally integrated ARMA models. The authors in [15] used AR and ARMA models to predict automobile body shop assembly process variation. A steam turbine rotor failure was forecasted in [16] using vibration signals and the ARMA model. Condition-based maintenance forecasts for aerospace engine performance parameters were analyzed using the ARMA model in [17]. A simple and fast soft-wired tool wear condition monitoring scheme was developed using the ARMA model in [18]. A method to predict the future conditions of machines based on one-step-ahead prediction of time-series forecasting techniques and regression trees is proposed in [19].
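As a minimal illustration of such time-domain forecasting, the sketch below fits an AR(p) model by ordinary least squares and produces a one-step-ahead prediction; the synthetic degradation-like signal and the chosen model order are illustrative assumptions rather than data or settings from the cited works.

```python
import numpy as np

def fit_ar_least_squares(y, p):
    """Fit y[t] = a_1*y[t-1] + ... + a_p*y[t-p] + e[t] by least squares."""
    Y = y[p:]
    X = np.column_stack([y[p - k: len(y) - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return a

def one_step_forecast(y, a):
    """Predict the next sample from the most recent p observations."""
    p = len(a)
    return float(np.dot(a, y[-1:-p - 1:-1]))

# Synthetic signal: slow drift plus oscillation and measurement noise.
rng = np.random.default_rng(0)
t = np.arange(500)
y = 0.002 * t + 0.5 * np.sin(0.2 * t) + 0.05 * rng.standard_normal(t.size)

a = fit_ar_least_squares(y, p=4)
print("AR coefficients:", a)
print("one-step-ahead forecast:", one_step_forecast(y, a))
```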

1.1.2 NON-PARAMETRIC BASED


For non-parametric-based approaches, heuristic methods involving Evolutionary Al-
gorithms (EAs), Genetic Algorithms (GAs), Neural Networks (NNs), fuzzy logic, etc.,
and their combinations have been used for improved intelligence. GAs are inspired by the evolutionary nature of biological systems, and are search techniques consisting of mutation, selection, reproduction (inheritance), and recombination stages. GAs are also commonly used for complex optimization, with applications in finance, parametrized industrial design, time series prediction, and signal processing, etc.
In current literature, GAs are used in a layered approach based on Q-learning (a
reinforcement learning technique) to determine the best weighting for optimal control
and design problems in [20]. The authors in [21] presented an algorithm for deter-
mining an optimal loading of elements in series-parallel systems. An early warning
method was proposed based on a discrete event simulation that evaluates cost-effects
of an arbitrary allocation of failure risks for a simulated 3-machine production system in [22]. The authors in [23] proposed an optimization procedure based on GA to search for the most cost-effective maintenance schedule, considering both production gains and maintenance expenses. A multivariate Bayesian process mean-control
problem for a finite production run (under the assumption that the observations are
values of independent normally-distributed vectors of random variables) is presented
in [24]. The author in [25] modeled repairable systems using hierarchical Bayesian
models that are a compromise between the “bad-as-old” and “good-as-new” assumptions. An aircraft rotor dynamic system was studied by applying a continuous wavelet transform that utilizes harmonic forcing satisfying combination resonance. The authors in [26] performed health monitoring and evaluation of dynamic characteristics in gear sets using wavelet transform methods. A condition monitoring system was set up for a turbine using vibration data and a state-space model whose associated recursive algorithms (Kalman filter and fixed-interval smoothing) provide the basis for probability-of-failure estimation in [27]. Kalman filters were also used to track changes in features like vibration levels, mode frequencies, or other waveform signature features. Prognostic utility for the signature features is determined by transitional failure experiments
in [28].
For the case of NNs, information processing algorithms are employed to mimic a human brain, i.e., activation of functions via the “firing” of neurons. A typical NN consists of a large number of highly interconnected processing elements (neurons) which, through proper activation, work in unison to solve specific problems like pattern recognition, time series prediction, or data classification. In an artificial NN, other intelligent techniques can be merged with the conventional NN to complete even more difficult or higher hierarchical tasks. In the literature, two popular artificial intelligence techniques, i.e., artificial NNs and expert systems, are commonly used for machine diagnosis; their combinations include fuzzy-logic systems, fuzzy-neural networks, and neural-fuzzy systems, etc.
The authors in [29] developed a new hydraulic valve fluid field model based on
non-dimensional artificial NNs to provide an accurate and numerically efficient tool in
an automatic transmission hydraulic control system design. Their results show better
performance than the conventional computational fluid dynamics technique, which
is numerically inefficient and time consuming. The turbine, compressor, and gear
wear statuses were studied in [30], and an NN model was proposed which predicts
temperature faster in comparison with the original models for Honeywell turbine and
compressor components. The authors in [31] evaluated the performance of recurrent
neural networks and neuro-fuzzy systems predictors using two-benchmark gear wear
data sets. Through comparison, it is found that if a neuro-fuzzy system is properly
trained, it performs better than recurrent neural networks in both forecasting accuracy
and training efficiency. Artificial intelligence techniques have been applied to machine
diagnosis and have shown improved performance over conventional approaches.
As such, it is obvious that novel techniques in intelligent diagnosis and prognosis have become an important field of interest in engineering science, with the pressing need for increased machine longevity and early fault detection. In this book, we develop the essential theories for tackling application issues faced during complex decision making in a multi-attribute space. This is done
through use of linear matrix operators and indices to formulate mathematical ma-
chineries and provide formal decision software tools which can be readily appreciated
and used by engineers from industries as well as researchers from research institutes
and academia. The developed tools allow for higher level decision making and com-
mand in synergetic integration between several industrial processes and stages, for
shorter time in failure and fault analysis in the entire industrial production life cy-
cle. The enhanced automated systems level toolsets also address industrial problems
arising from evolving and emerging behavior in networked systems or processes, and
analyze methodologies to make autonomous decisions that meet present and uncer-
tain future needs quantitatively without compromising the ad hoc “add-on” flexibility
of network-centered operations. This shortens production time while reducing failure
by early identification and detection of the key factors which lead to potential faults.
As such, engineers and managers are empowered with the knowledge and know-how
to make important decisions and policies, as well as educate fellow researchers and
public about the advantages of various technologies.

1.2 APPLICATIONS IN INDUSTRIAL NETWORKED SYSTEMS


In this section, we detail some of the realistic engineering domains of industrial
networked systems where intelligent diagnosis and prognosis are needed. With the
complexity and scale of problems encountered, it is essential that linear operations
with closed-form solutions are provided for quick and easy analyses. The identified
applications are modeled using linear systems theory, and diagnosis and prognosis
using proposed solutions are carried out with advanced matrix manipulations.

1.2.1 MODAL PARAMETRIC IDENTIFICATION (MPI)


Mechatronic systems—the integration of mechanical, electrical, computational, and
control systems—have pervaded products ranging from large-scale anti-lock braking systems in cars to small-scale microsystems in portable mobile phones. To sustain market
shares in the highly competitive consumer electronics industries, continual improve-
ments in servo evaluation and performance of products are essential, in particular for
portable devices requiring ultra-high data capacities and ultra-strong disturbance re-
jection capabilities. Moreover, pressures to minimize time-to-market of new products
imply a necessarily high schedule compression of the end-to-end servomechanical
design and evaluation cycle, i.e., from mechanical structural designing, prototyping,
to servo control system design to meet target specifications.
Furthering our earlier research to improve yield through rapid Modal Parametric
Identification (MPI) of critical resonant modes for prognosis with mechanical actuator
redesign using Least-Squares (LS) of frequency response data packed in matrices [32],
we enhance the proposed MPI using Forsythe’s method of complex orthogonal poly-
nomial transformation [33]. The new MPI avoids ill-conditioned numerical solutions
during computation of matrices [34], and the calculated modal parameters are stored
in a central repository for technical sharing. This immediately follows from the pro-
posed integrated systems design methodology in managing complex R&D processes
for high-performance mechatronic industries [35].

1.2.2 DOMINANT FEATURE IDENTIFICATION (DFI)


In an era of intensive competition, the new challenges faced by industrial manufac-
turing processes include maximizing productivity, ensuring high product quality, and
reducing the production time while minimizing the production cost simultaneously.
As such, it is crucial that asset usage and plant operating efficiency be maximized,
as unexpected downtime due to machinery failure has become more costly than be-
fore. Predictive maintenance is thus actively pursued in the manufacturing industry
in recent years, where equipment outages are predicted and maintenance is carried
out only when necessary.
To ensure successful condition based maintenance, it is necessary to detect, iden-
tify, and classify different kinds of failure modes in the manufacturing process. One
of the causes of delay in manufacturing processes is machine downtime or failure of
the machining tools. Failure of machine tools can also affect the production rate and quality of products, and detection of the tool state thus plays a vital role in manufacturing [36].
In an industrial cutting machine, it is not possible to estimate and predict the state of the tool directly. However, it is easy to obtain information that is highly correlated to tool wear through inferential sensing, which can be achieved by signal processing of the signals collected from the embedded sensors. In [37], we developed a Dominant Feature Identification (DFI) algorithm to identify the dominant features from a dynamometer force sensor which affect tool wear, based on the features-versus-time data concatenated in a matrix. A new DFI approach is proposed in [38] to predict tool wear dynamically from the dominant features selected from an acoustic sensor. Following the success of DFI, it is also extended in [39] to greatly reduce the number of sensors and features required for industrial fault detection.

1.2.3 PROBABILISTIC SMALL SIGNAL STABILITY ASSESSMENT


Due to the deregulation of power industry in many countries, the traditionally ver-
tically integrated power systems have been experiencing dramatic changes leading
to competitive electricity markets. Power system planning in such an environment is
now facing increasing requirements and challenges because of the deregulation. In
particular, it introduces a variety of uncertainties to system planning. The traditional
deterministic power system analysis techniques have been found in many cases to have
limited capability to reveal the increasing uncertainties in today’s power systems. The
power system operation and planning demonstrate probabilistic characteristics which
requires emphasis on probabilistic techniques.
To study the probabilistic small signal stability on large-scale interconnected power
systems, it is imperative to obtain a linearized model of the entire power system under
consideration in state-space form, where the key generation and control parameters are
coupled with the state transition matrix. In this chapter, we investigate power system
state matrix sensitivity characteristics with respect to system parameter uncertainties
with analytical and numerical approaches, and identify those parameters that have
great impact on system eigenvalues and, therefore, on the system stability properties [40]. A
key probabilistic power system analysis technique is the probabilistic power system
small signal stability assessment technique. With the many factors such as demand
uncertainty, market price elasticity, and unexpected system congestions, it is more
appropriate to have probabilistic power system stability assessment results rather
than a deterministic one, especially for the sake of risk management in a competitive
electricity market [41]. We present a framework of probabilistic power system small
signal stability assessment technique, fully supported with detailed probabilistic anal-
ysis and case studies. The results can be used as a valuable reference for utility power
system small signal stability assessment, probabilistically and reliably, and can be
used to help Regional Transmission Organizations (RTOs) and Independent System
Operators (ISOs) perform planning studies under the open access environment [42].
On the other hand, load modeling also plays an important role in power system dy-
namic stability assessment. One of the widely used methods in assessing load model
impact on system dynamic response is parametric sensitivity analysis. A composite
load model-based load sensitivity analysis framework is proposed. It enables compre-
hensive investigation into load modeling impacts on system stability considering the
dynamic interactions between load and system dynamics. The effect of the location
of individual as well as patches of composite loads in the vicinity on the sensitivity
of the oscillatory modes are also investigated. The impact of load composition on the
overall sensitivity of the load is also discussed [43].

1.2.4 DISCRETE EVENT COMMAND AND CONTROL


Traditional design methodologies for discrete event workflow systems that employ
trial-and-error approaches for evaluating designs followed by implementations are
time consuming and costly. If tasks are not scheduled to be activated correctly, serious
problems might occur in the workflow, including blocking and deadlock phenomena,
which will halt the entire discrete event system. As such, it is imperative to perform
an integrated inquiry of how task structures impact discrete-event task scheduling
efficiencies. This is instrumental in the presence of competitive due-date targets and
time-sensitive operations in many workflow systems, where managers typically seek
process re-engineering solutions using concurrent process engineering, etc.
In military systems, the TRADOC Pamphlet 525-66 Battle Command and Battle
Space Awareness capabilities prescribe expectations that networked teams will per-
form in a reliable manner under changing mission requirements, varying resource
availability and reliability, and resource faults, etc., during mission execution in mil-
itary applications. In [44], a Command and Control (C2) structure is presented that
allows for computer-aided execution of the networked team decision-making process,
control of force resources, shared resource dispatching, and adaptability to change
based on battlefield conditions. A mathematically justified networked computing envi-
ronment called the Discrete Event Control (DEC) Framework based on binary matri-
ces is provided. DEC has the ability to provide the logical connectivity among all team
participants including mission planners, field commanders, war-fighters, and robotic
platforms, etc. The proposed data management tools are developed and demonstrated
in a simulation and implementation study using a distributed Wireless Sensor Net-
work (WSN). A simulation example on a battlefield with networked Future Combat
System (FCS) teams deploying ambush attack tactics is also presented [45]. The re-
sults show that the tasks of multiple missions are correctly sequenced in real-time,
and that shared resources are suitably assigned to competing tasks under dynamically
changing conditions without conflicts and bottlenecks.
In this chapter, we review the parametric-based and non-parametric-based diagno-
sis and prognosis methods that exist in current literature. Some examples of realistic engineering applications in industrial networked systems where intelligent diagnosis and prognosis are needed are also identified, and linear matrix operations are proposed for fast closed-form analysis.
With these covered, we will introduce the essential theoretical fundamentals of
vectors, matrices, and linear systems in the following chapter. Advanced matrix oper-
ations like eigenvalue decomposition and Singular Value Decomposition (SVD) are
also presented, along with binary matrices, which are useful for modeling and control
of event-triggered systems.
2 Vectors, Matrices, and Linear Systems
Linear algebra has a wide range of applications in various domains for diagnosis and
prognosis of engineering systems. In this chapter, we first review the fundamental con-
cepts of linear algebra, including domain, range, transformation, and null spaces, etc.
This is followed by the introduction of linear systems, which covers linearization of
non-linear systems. Finally, we investigate a special type of matrix, the Boolean matrix, and its usage in system modeling of graphs and Discrete Event Control
(DEC) of event-triggered discrete event systems.

2.1 FUNDAMENTAL CONCEPTS


In this section, we introduce the basic definitions of vectors and matrices. We also
review the concepts of norms and spaces, as well as some useful matrix operators
which are commonly encountered in different fields of engineering.

2.1.1 VECTORS
A vector is an n-tuple of numbers arranged either in a column or row [46]. This
column or row arrangement hence gives a vector magnitude and direction.
For example,
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad (2.1)
where x is a column vector of length n, while

y = \begin{bmatrix} y_1 & y_2 & \cdots & y_m \end{bmatrix} \qquad (2.2)
is a row vector of length m. The elements in x and y can be real or complex depending
on application domain.
Norms are generally functions which map a vector onto a scalar. Using x in (2.1)
as an example, the Euclidean norm ||x||_2 of x is

||x||_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{x^T x}, \qquad (2.3)
where the superscript T denotes the transpose operation. The Euclidean norm is also
called the Euclidean length, L2 distance, or L2 norm. As such, the Euclidean norm of the difference of two vectors is in essence a measure of the physical proximity between them, i.e., the greater the norm of the difference, the “farther apart” or more dissimilar the two vectors are.
Other interesting vector norms include the Taxicab norm or Manhattan norm
||x||_1 = \sum_{i=1}^{n} |x_i|, \qquad (2.4)

which is the L1 distance or L1 norm.


The infinity norm or supremum norm is

||x||_\infty = \max(|x_1|, |x_2|, \cdots, |x_n|), \qquad (2.5)

where max() denotes the maximum value of the elements therein.


Now given a set of k vectors x_1, x_2, \cdots, x_k of the same length n, the inner product of x_1 and x_2 is denoted as x_1 \cdot x_2 or \langle x_1, x_2 \rangle, where

\langle x_1, x_2 \rangle = x_1^T x_2 = x_2^T x_1 = \sum_{i=1}^{n} x_{1,i}\, x_{2,i} = ||x_1||\,||x_2|| \cos\theta, \qquad (2.6)

where \theta is the angle between vectors x_1 and x_2. The outer product of x_1 and x_2 is x_1 \otimes x_2, where

x_1 \otimes x_2 = x_1 x_2^T = \left( x_2 x_1^T \right)^T. \qquad (2.7)

Note that the inner product produces a scalar, while the outer product produces a
matrix of size n × n.
Relating the concept of inner product back to norms, the norms ||x1 || and ||x2 ||
of x1 and x2 satisfy

||x1 + x2 || ≤ ||x1 || + ||x2 ||, (2.8)

or the triangle inequality, and

|x1 , x2 | ≤ ||x1 || · ||x2 ||, (2.9)

or the Cauchy-Schwarz inequality.


The cross product between x_1 and x_2 (defined for vectors of length n = 3) is written as x_1 \times x_2 and produces a vector normal to both x_1 and x_2 of magnitude

||x_1 \times x_2|| = ||x_1||\,||x_2|| \sin\theta. \qquad (2.10)
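As a quick numerical check of these vector operations, the sketch below evaluates the inner, outer, and cross products and the norms (2.3)-(2.5) with NumPy; the two example vectors are arbitrary.

```python
import numpy as np

# Arbitrary example vectors of length 3 (so the cross product is defined).
x1 = np.array([1.0, 2.0, 2.0])
x2 = np.array([2.0, 0.0, 1.0])

inner = x1 @ x2               # scalar x1^T x2, as in (2.6)
outer = np.outer(x1, x2)      # 3x3 matrix x1 x2^T, as in (2.7)
cross = np.cross(x1, x2)      # vector normal to both x1 and x2, as in (2.10)

l2   = np.linalg.norm(x1)           # Euclidean norm, (2.3)
l1   = np.linalg.norm(x1, 1)        # Taxicab norm, (2.4)
linf = np.linalg.norm(x1, np.inf)   # infinity norm, (2.5)

print(inner, l2, l1, linf)
print(np.dot(cross, x1), np.dot(cross, x2))   # both are ~0 (orthogonality)
```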



A linear combination of these vectors is defined as the vector

c1 x1 + c2 x2 + · · · + ck xk , (2.11)

where c_1, c_2, \cdots, c_k are scalars. The vectors are said to be linearly dependent if there exist scalars c_1, c_2, \cdots, c_k, not all zero, satisfying

c_1 x_1 + c_2 x_2 + \cdots + c_k x_k = 0. \qquad (2.12)

As such, the vectors are linearly independent if

c1 x1 + c2 x2 + · · · + ck xk = 0 (2.13)

has only the trivial solution, i.e., c1 = c2 = · · · = ck = 0.


Now consider a set of k vectors x_1, x_2, \cdots, x_k of the same length n. The set V containing x_1, x_2, \cdots, x_k and all their linear combinations is said to form a vector
space. Alternatively, the vectors x1 , x2 , · · · , xk are said to span the vector space V.
Some properties of a vector space include
• Addition: If x1 ∈ V and x2 ∈ V, then x1 + x2 ∈ V, and
• Scalar Multiplication: If x1 ∈ V, then c1 x1 ∈ V for all arbitrary scalars c1 .

It is clear that the null vector 0 is a vector in all vector spaces V.


For any arbitrary vector x ∈ V, we can express it as

x = c_1 x_1 + c_2 x_2 + \cdots + c_k x_k. \qquad (2.14)

If, in addition, the vectors x_1, x_2, \cdots, x_k are linearly independent, they are said to constitute a basis for the vector space V. The largest number
of linearly independent vectors in V is d, where d is the dimension of V or dim(V).
Note that although the choice of a basis is non-unique, d is fixed for a given vector
space V.

2.1.2 MATRICES
A matrix A is a rectangular array of numbers where
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & a_{ij} & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \qquad (2.15)

and A is said to be an m × n matrix of m rows and n columns. The entry or element of
the matrix in the ith row and jth column is ai j . As such, A can be seen as a rectangular
arrangement of mn-tuple numbers, or a concatenation of vectors of the same length.
A matrix with one column or one row is hence, a vector.
The 1-norm of A is

||A||_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|, \qquad (2.16)

which is simply the maximum absolute column sum of A. The ∞-norm of A is
defined as
||A||_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|, \qquad (2.17)

or the maximum absolute row sum of A. The 2-norm of A is

||A||_2 = \sqrt{\lambda_{\max}(A^H A)} = \sigma_{\max}(A), \qquad (2.18)

where \lambda_{\max}(\cdot) denotes the maximum eigenvalue of its argument, and \sigma_{\max}(\cdot) denotes the maximum singular value. The superscript H is the Hermitian or conjugate transpose
operator.
Another interesting matrix norm is the Frobenius norm ||A||F of A where

||A||_F = \sqrt{\mathrm{tr}\{A^H A\}} = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2} = \sqrt{\sum_{i=1}^{\min\{m,n\}} \sigma_i^2}, \qquad (2.19)

where tr\{\cdot\} is the trace operation, or sum of the diagonal elements of a square matrix, and \sigma_i are the singular values of A. More details on eigenvalues and singular values will
be provided in future sections.
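As a minimal numerical sketch with an arbitrary example matrix, the matrix norms above can be checked directly against their definitions using NumPy.

```python
import numpy as np

A = np.array([[1.0, -2.0, 3.0],
              [0.0,  4.0, -1.0]])    # arbitrary 2x3 example matrix

one_norm = np.linalg.norm(A, 1)        # max absolute column sum, (2.16)
inf_norm = np.linalg.norm(A, np.inf)   # max absolute row sum, (2.17)
two_norm = np.linalg.norm(A, 2)        # largest singular value, (2.18)
fro_norm = np.linalg.norm(A, 'fro')    # Frobenius norm, (2.19)

sigma = np.linalg.svd(A, compute_uv=False)
assert np.isclose(two_norm, sigma.max())
assert np.isclose(fro_norm, np.sqrt((np.abs(A) ** 2).sum()))
assert np.isclose(fro_norm, np.sqrt((sigma ** 2).sum()))
print(one_norm, inf_norm, two_norm, fro_norm)
```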
Now given an m × n matrix A, the vector spaces spanned by the set of linearly
independent row vectors and column vectors of A are called the row space and column
space of A, respectively. The vector space consisting of all possible solutions of Ax= 0
is called the null space of A, kernel of A, or simply Ker(A). This is because if A is
viewed as a linear transformation, the null space of a matrix is precisely the set of
vectors that is mapped to the null vector.
The maximum number of linearly independent row vectors of a matrix A (or the dimension of the row space of A) is called the row rank or simply the rank of A. Nullity
of a matrix A is defined as the dimension of the null space of A. For an m × n matrix A,
the following theorems hold [47].

Theorem 2.1

The number of linearly independent rows of A is the number of linearly independent columns of A. In other words, the row rank of A is also the column rank of A and the rank of A.

Theorem 2.2

For a given m × n matrix A, rank(A)+ nullity(A) = n.

The theorems are self-explanatory and the proofs are omitted here for brevity.
In linear algebra, linear transformations can be represented by matrices. If T is
a linear transformation mapping Rn → Rm where x ∈ Rn and y ∈ Rm are vectors,
then T : x → y = Ax for some m × n matrix A. Here, A is called the transformation
matrix.
The domain of a function is the set of “input” or argument values for which the
function is defined. In other words, the function provides a unique “output” or “value”
for each member of the domain [48]. As such, A can now be viewed as an operator
which manipulates x to produce y. In the same sense as matrix norm is discussed
above, we can define an operator norm or induced norm ||A|| of A as

||A|| = \sup_{x \neq 0} \frac{||Ax||}{||x||} = \sup_{||x||=1} ||Ax||. \qquad (2.20)

For a linear transformation T which is represented by a transformation matrix A


where A ∈ Rm×n that maps Rn to Rm , we define the domain space of A as all possible
“input” vectors x where x ∈ Rn . If the linear transformation is unconstrained, one
easily sees that its domain is exactly Rn .
Conversely, given a transformation matrix A ∈ Rm×n that maps Rn → Rm , the
range space of A is defined as the set of all possible “output” vectors y ∈ Rm [47]. As
such, it is easy to see that the range space of A is exactly the column space of A. This is also the image of the corresponding matrix transformation,
as the mapping Ax = y is actually taking all possible linear combinations of column
vectors of A. Therefore, for a transformation matrix A ∈ Rm×n , we denote R(A) as
the range space of A.
Let A be an m × n matrix. The product of A and the n-dimensional vector x can
be written in terms of the inner product of vectors as follows
Ax = \begin{bmatrix} r_1 x \\ r_2 x \\ \vdots \\ r_m x \end{bmatrix}, \qquad (2.21)

where r1 , r2 , · · · , rm denote the rows of the matrix A. It follows that x is in the null
space of A if and only if x is orthogonal (or perpendicular) to each of the row vectors
of A. This is because the inner product of two vectors is zero if they are orthogonal.
The row space of a matrix A is the span of the row vectors of A. Using a similar
reasoning as above, the null space of A is the orthogonal complement to the row
space. That is, a vector x lies in the null space of A if and only if it is perpendicular
to every vector in the row space of A.
The left null space of A ∈ Rm×n is the set of all vectors x ∈ Rm such that xT A = 0.
It is the same as the null space of the transpose of A, or AT . The left null space is the
orthogonal complement to the column space of A. This can be seen by writing the
product of the matrix AT and the vector x in terms of the inner product of vectors
A^T x = \begin{bmatrix} c_1^T x \\ c_2^T x \\ \vdots \\ c_n^T x \end{bmatrix}, \qquad (2.22)

where c1 , c2 , · · · , cn are the column vectors of A. Thus, AT x = 0 if and only if x is


orthogonal (perpendicular) to each of the column vectors of A. It follows that the null
space of AT is the orthogonal complement to the column space of A. As such for a
matrix A, the column space, row space, null space, and left null space are sometimes
referred to as the four fundamental subspaces [49].
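To make the rank-nullity relation and these subspaces concrete, the sketch below uses the SVD of an arbitrary example matrix to compute its rank and an orthonormal basis for its null space, verifying Theorem 2.2 numerically.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # second row is twice the first,
              [1.0, 0.0, 1.0]])  # so rank(A) = 2 and nullity(A) = 1

m, n = A.shape
U, s, Vt = np.linalg.svd(A)

tol = max(m, n) * np.finfo(float).eps * s.max()
rank = int((s > tol).sum())
null_basis = Vt[rank:].T         # columns span the null space (kernel) of A
range_basis = U[:, :rank]        # columns span the column (range) space of A

print("rank:", rank, "nullity:", n - rank)        # rank + nullity = n
print("A @ null_basis ~ 0:", np.allclose(A @ null_basis, 0))
```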

2.2 LINEAR SYSTEMS


After introducing the concepts of vectors and matrices, we will discuss their use in the representation of linear systems and linearized non-linear systems in this section.

2.2.1 INTRODUCTION TO LINEAR SYSTEMS


A typical dynamic linear system model is shown in Figure 2.1.

FIGURE 2.1 Dynamic model of linear system.

Let y1(t) be the output produced by an input signal u1(t) and y2(t) be the output produced by another input signal u2(t). The system is said to be linear if [50]
1. when the input is α u1(t), the corresponding output is α y1(t), where α is a scalar; and
2. when the input is u1(t) + u2(t), the corresponding output is y1(t) + y2(t).
Or equivalently, when the input is α u1 (t) + β u2 (t), the corresponding output
is α y1 (t) + β y2 (t). As such, a linear system is said to obey the Principle of Su-
perposition since it has such a property.
A system is said to be Linear Time-Invariant (LTI) if it is linear and, for a time-shifted input signal u(t − t0), the output of the system is also time-shifted, y(t − t0). To
see if a system satisfies the LTI criterion, we first find the output y1 (t) that corresponds
to the input u1 (t). Next, we let u2 (t) = u1 (t − t0 ) and then find the corresponding
output y2 (t). If y2 (t) = y1 (t − t0 ), the linear system is said to be LTI. In other words,
an LTI system satisfies the Principle of Superposition and for the same input signal,
the output produced by the system today will be exactly the same as that produced by
the system at any other time. This reproducibility and precision allow us to predict
and analyze linear systems.
A system is said to have memory if the value of y(t) at any particular time t1 depends on the input over the interval (−∞, t1], i.e., on the historical profile of the input signal. Conversely, a system is said to have no memory if the value of y(t) at any particular time t1 depends only on the input at time t1. On the other hand, a causal system is a system
where the output y(t) at a particular time t1 depends solely on its input for t ≤ t1 . All
physical systems are causal by nature. Similarly, a system is said to be non-causal if
the value of y(t) at a particular time t1 depends on its input u(t) for some t > t1 .
The signal u(t) is said to be bounded if |u(t)| < β < ∞ ∀t, where β is a positive,
real, and finite scalar. A system is said to be Bounded-Input-Bounded-Output (BIBO)
stable if the output y(t) produced by any bounded input u(t) is also bounded.

2.2.2 STATE-SPACE REPRESENTATION OF LTI SYSTEMS


LTI systems in general can be represented by transfer functions (ratios of polynomials in the Laplace operator “s” in the continuous-time domain or the z-transform operator “z” in the discrete-time domain) in the frequency domain, or by a state-space representation using matrices in the time domain. As such, a state-space representation is a mathematical
model of a physical system as a set of inputs, outputs, and state variables, related
by first-order differential equations and expressed as vectors in matrix form. The
state space representation provides a systematic and convenient way to represent and
analyze systems with multiple inputs and outputs.
Consider a single input nth order LTI differential equation
\frac{d^n x(t)}{dt^n} + a_{n-1}\frac{d^{n-1} x(t)}{dt^{n-1}} + \cdots + a_i \frac{d^i x(t)}{dt^i} + \cdots + a_0 x(t) = b_0 u(t), \qquad (2.23)
where u(t) is the input, x(t) is the state, and ai and b0 are constants. The most
straightforward method for choosing n state variables to represent this system is to let
the state variables be equal to x(t) and its first (n − 1) derivatives. If the state variables
are denoted by ξ , we can rewrite (2.23) as
\xi_1(t) = x(t), \quad \xi_2(t) = \frac{dx(t)}{dt}, \quad \ldots, \quad \xi_n(t) = \frac{d^{n-1} x(t)}{dt^{n-1}}, \qquad (2.24)

and the n differential equations resulting from these definitions become

\dot{\xi}_1(t) = \xi_2(t), \quad \dot{\xi}_2(t) = \xi_3(t), \quad \ldots, \quad \dot{\xi}_{n-1}(t) = \xi_n(t),
\dot{\xi}_n(t) = -a_0 \xi_1(t) - a_1 \xi_2(t) - \cdots - a_{n-1} \xi_n(t) + b_0 u(t). \qquad (2.25)

Putting all the states into a vector, the above system of equations can be rewritten
as
\begin{bmatrix} \dot{\xi}_1 \\ \dot{\xi}_2 \\ \vdots \\ \dot{\xi}_{n-1} \\ \dot{\xi}_n \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & 0 \\ 0 & 0 & 0 & \ddots & 0 \\ 0 & 0 & \cdots & 0 & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix} \begin{bmatrix} \xi_1 \\ \xi_2 \\ \vdots \\ \xi_{n-1} \\ \xi_n \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ b_0 \end{bmatrix} u(t). \qquad (2.26)

Now if the output of the dynamic system y(t) is one (or a linear combination) of
the state variables, e.g., ξ1 (t), we can write y(t) in the output equation as
y(t) = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \end{bmatrix} \begin{bmatrix} \xi_1 \\ \xi_2 \\ \vdots \\ \xi_{n-1} \\ \xi_n \end{bmatrix}. \qquad (2.27)
Defining x(t) = \begin{bmatrix} \xi_1 & \xi_2 & \cdots & \xi_{n-1} & \xi_n \end{bmatrix}^T, we can rewrite the system model in the form of state-space representation as

\dot{x}(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t), \qquad (2.28)

where
A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & 0 \\ 0 & 0 & 0 & \ddots & 0 \\ 0 & 0 & \cdots & 0 & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ b_0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \end{bmatrix}, \quad D = 0. \qquad (2.29)
Now, x ∈ Rn is the state vector, y is the output, and u is the input (or control)
vector. A ∈ Rn×n is the state matrix, B ∈ Rn is the input matrix, C ∈ R1×n is the output
matrix, and D = 0 is the feedthrough (or feedforward) matrix. In this example, we
have chosen a Single-Input-Single-Output (SISO) system for simplicity but without
loss of generality, and this framework can be readily applied to systems of higher
dimensions with multiple inputs and outputs.
The state equation can now be expressed as a transfer function in frequency domain
as [46]
\frac{Y(s)}{U(s)} = C(sI - A)^{-1} B + D, \qquad (2.30)
where U(s) and Y (s) are the system’s input and output, respectively, and s is the
Laplace Transform operator.
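As a minimal numerical sketch, the transfer function (2.30) can be evaluated at any complex frequency directly from the state-space matrices; the second-order companion-form matrices below use illustrative coefficient values (a_0 = 4, a_1 = 0.4, b_0 = 4), not a system from the text.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])     # companion form with a0 = 4, a1 = 0.4
B = np.array([[0.0],
              [4.0]])            # b0 = 4
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

omega = 2.0                                    # rad/s, near this example's resonance
G = transfer_function(A, B, C, D, 1j * omega)
print("gain:", abs(G[0, 0]), "phase (rad):", np.angle(G[0, 0]))
```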
For general Multi-Input-Multi-Output (MIMO) systems, (2.28) is written as
ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t), (2.31)
where A ∈ Rn×n , B ∈ Rn×p , C ∈ Rq×n , and D ∈ Rq×p for a system with p inputs and q
outputs. In this general formulation, all matrices are allowed to be time-variant, i.e.,
their entries may vary with time. However for LTI systems, these matrices contain
constant real numbers.
Depending on the assumptions taken, the state-space model representation can
assume the following forms, and variable t can be continuous or discrete. In the
latter case, the time variable is usually indicated as sampled instant k. In general, the
following representations are typical and most encountered in literature and practice
• Continuous time-invariant
ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t),
• Continuous time-variant
ẋ(t) = A(t)x(t) + B(t)u(t),
y(t) = C(t)x(t) + D(t)u(t),
• Discrete time-invariant
x(k + 1) = Ax(k) + Bu(k),
y(k) = Cx(k) + Du(k),
• Discrete time-variant
x(k + 1) = A(k)x(k) + B(k)u(k),
y(k) = C(k)x(k) + D(k)u(k)

depending on domain of application.
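To illustrate the discrete time-invariant form above, the sketch below propagates x(k + 1) = Ax(k) + Bu(k) and y(k) = Cx(k) + Du(k) over a unit-step input; the matrices are small illustrative values rather than a model taken from the text.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])       # illustrative 2-state discrete-time model
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate(A, B, C, D, u_seq, x0):
    """Propagate the discrete time-invariant state-space model over u_seq."""
    x, outputs = x0, []
    for u in u_seq:
        u = np.atleast_2d(u)
        y = C @ x + D @ u
        outputs.append(float(y[0, 0]))
        x = A @ x + B @ u        # x(k+1) = A x(k) + B u(k)
    return np.array(outputs)

y = simulate(A, B, C, D, np.ones(20), np.zeros((2, 1)))   # unit-step response
print(y)
```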

2.2.3 LINEARIZATION OF NON-LINEAR SYSTEMS


However, all physical systems are inherently non-linear by nature. In order to create
a linear model from a non-linear system, the non-linear system under consideration
is linearized about a particular point based on the Taylor series expansion.
We recall that the Taylor series expansion for a general function f (x) is


f(x) = \sum_{n=0}^{\infty} \frac{1}{n!} \left. \frac{d^n f(x)}{dx^n} \right|_{x = x_0} (x - x_0)^n, \qquad (2.32)

and this series is said to be expanded about the point x = x0 , commonly referred to
as the operating point or equilibrium point. For functions that are relatively smooth,
the magnitudes of the terms in this series decrease as higher order derivatives are
introduced, and an approximation of a function can be achieved by selecting only
some lower order terms.
Using the above definition, we have
f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2} f''(x_0)(x - x_0)^2 + \cdots \approx f'(x_0)\,x + f(x_0) - f'(x_0)\,x_0, \qquad (2.33)

and a linear relation is obtained if we retain only the first two terms of expansion.
This approximation is referred to as the linearization of f (x), and it is important that
the region of operation be near x0 for the approximation to be valid.
In a MIMO setting, the function f depends on multiple variables and each function
in f can be expanded into a Taylor series and thus linearized separately. Alterna-
tively, one can use matrix-vector notation and the linearized version of the non-linear
function f is
 
f(x) = f(x_0) + \left. \frac{\partial f(x)}{\partial x} \right|_{x_0} (x - x_0) + \frac{1}{2} (x - x_0)^T \left. \frac{\partial^2 f(x)}{\partial x^2} \right|_{x_0} (x - x_0) + \cdots \approx f(x_0) + \left. \frac{\partial f(x)}{\partial x} \right|_{x_0} (x - x_0), \qquad (2.34)

and the linear relation is obtained if we retain the first two terms as usual. In this
expression, the first order derivative of f(x) is a derivative of an m × 1 vector with
respect to an n × 1 vector. This results in an m × n matrix commonly known as the
Jacobian, whose (i, j)th element is \partial f_i / \partial x_j.
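As a minimal sketch of this linearization step, the Jacobian in (2.34) can be approximated by finite differences to obtain the state matrix about an equilibrium point; the simple pendulum model and its constants below are assumed for illustration and do not come from the text.

```python
import numpy as np

def pendulum(x, u):
    """Non-linear pendulum: x = [angle, angular rate], u = applied torque."""
    g_over_l, damping = 9.81, 0.2           # illustrative physical constants
    theta, omega = x
    return np.array([omega,
                     -g_over_l * np.sin(theta) - damping * omega + u])

def numerical_jacobian(f, x0, u0, eps=1e-6):
    """Finite-difference approximation of the Jacobian df/dx at (x0, u0)."""
    n = len(x0)
    J = np.zeros((n, n))
    f0 = f(x0, u0)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx, u0) - f0) / eps
    return J

x_eq, u_eq = np.array([0.0, 0.0]), 0.0      # hanging equilibrium point
A = numerical_jacobian(pendulum, x_eq, u_eq)
print(A)    # approximately [[0, 1], [-9.81, -0.2]]
```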

2.3 EIGENVALUE DECOMPOSITION AND SENSITIVITY


With the fundamental concepts of vector, matrices, and their uses in representation of
linear systems and linearized non-linear systems, we will further discuss the concepts
eigenvalues, eigenvectors, and their corresponding computational methods in this
section. Specific applications to find the sensitivity of eigenvalues to non-deterministic
system parameters and link parameters will also be detailed.

2.3.1 EIGENVALUE AND EIGENVECTOR


Given a square matrix A ∈ Rn×n, a non-zero vector x is defined to be an eigenvector
of A if it satisfies
Ax = λ x (2.35)
for some scalar λ . λ is called an eigenvalue of A corresponding to the eigenvector x.
If A is the state matrix of a linear system, λ is also the pole of the system, and the
corresponding eigenvector x is also known as the mode shape.
The above equation can also be rewritten as
(A − λ I)x = 0, (2.36)
where I ∈ Rn×n is an identity matrix. We can therefore also interpret an eigenvector x
as being a vector from the null space of (A − λ I) corresponding to an eigenvalue λ .
Each eigenvector is associated with a specific eigenvalue, but one eigenvalue can be
associated with several or even with an infinite number of eigenvectors.
If A is viewed as a linear transformation, the eigenvector x has the property that its
direction is not changed by the transformation A, but that it is only scaled by a factor
of λ . In general, most vectors x will not satisfy (2.36), since linear transformation A
generally rotates, scales, or shears x. Alternatively, if A is a multiple of the iden-
tity matrix, i.e., no vectorial change in directions, then all non-zero vectors are also
eigenvectors.
Under the linear transformation A, the eigenvectors experience merely a change
in magnitude but no change in direction. If λ = 1, the vector remains unchanged
(unaffected by the transformation). If λ = −1, the vector flips to the opposite direction.
This is defined as a reflection.
With these concepts in mind, we have the following lemmas [51].

Lemma 2.1

If x is an eigenvector of the linear transformation A with eigenvalue λ, then any scalar multiple αx is also an eigenvector of A with the same eigenvalue. Similarly, if more than one eigenvector shares the same eigenvalue λ, any linear combination of these eigenvectors will itself be an eigenvector with eigenvalue λ.

Together with the zero vector, the eigenvectors of A with the same eigenvalue form
a linear subspace of the vector space called an eigenspace. The set of all eigenvalues
is called the spectrum of A.

Lemma 2.2

The eigenvectors corresponding to different eigenvalues are linearly independent; in particular, in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different eigenvalues.

Conventionally, the word eigenvector formally refers to the right eigenvector v, where Av = λv; v is the most commonly used eigenvector. However, there is also the less commonly known left eigenvector w, defined by wT A = λwT. It is worth noting that the eigenvalues are exactly identical since the poles of the system remain unchanged under different representations, and that the left and right eigenvectors can be normalized such that wT v = 1. Both left and right eigenvectors are important for our proposed probabilistic small signal assessment methodology, and more details will be provided in Chapter 5.
In order to compute eigenvectors, one has to first find the eigenvalues depicted earlier in (2.36). For a non-zero x to exist, it is imperative that A − λI be singular, i.e., not invertible. This condition is equivalent to solving

det(A − λ I) = 0, (2.37)

where det() is the determinant operation.


Equation (2.37) is also called the characteristic equation of A, and the left-hand
side is called the characteristic polynomial. When expanded, this gives a polynomial
equation for λ . Neither the eigenvector nor its components are present in the char-
acteristic equation. With the computed eigenvalues λ , we can then substitute them
back into (2.36) to obtain the corresponding eigenvectors x of A. A simple example
is provided for illustration.

Example 2.1: A linear transformation on the real plane A is given by

 
A = \begin{bmatrix} 1 & 0 \\ 1 & 2 \end{bmatrix}. \qquad (2.38)

The eigenvalues of A can be obtained by solving the characteristic equation


   
\det(A - \lambda I) = \det\left( \begin{bmatrix} 1 & 0 \\ 1 & 2 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \right) = \det \begin{bmatrix} 1-\lambda & 0 \\ 1 & 2-\lambda \end{bmatrix} = 0 \;\Rightarrow\; (1-\lambda)(2-\lambda) = 0 \;\therefore\; \lambda = 1, 2, \qquad (2.39)

and the eigenvalues of A are thus λ = 1 and λ = 2. With these eigenvalues, we can
proceed to find the corresponding eigenvectors.
Consider the eigenvalue λ = 2 and let x = \begin{bmatrix} x_1 & x_2 \end{bmatrix}^T. Now, the eigenvector equation (2.36) becomes

\begin{bmatrix} -1 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0. \qquad (2.40)

This matrix equation represents a system of two linear equations

−x1 = 0, (2.41)
x1 = 0, (2.42)

which implies that x_1 = 0. We are free to choose any real number for x_2 except zero. By selecting x_2 = 1, the corresponding eigenvector is \begin{bmatrix} 0 & 1 \end{bmatrix}^T.
Similarly, considering the eigenvalue λ = 1, the eigenvector equation is thus

\begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0. \qquad (2.43)

This matrix equation represents a linear equation

x1 + x2 = 0. (2.44)

As per above, we are free to choose any real number for x_1 except zero. By selecting x_1 = 1 and x_2 = −x_1 = −1, the corresponding eigenvector is \begin{bmatrix} 1 & -1 \end{bmatrix}^T.

It is obvious that the complexity of the eigenvalue problem increases rapidly with the degree of the characteristic polynomial, i.e., the dimension of the vector space. As the dimension increases, closed-form solutions are generally unavailable and numerical methods are used to find the eigenvalues and their corresponding eigenvectors.
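In practice such numerical computation is a single library call; the sketch below uses NumPy to recover the eigenvalues and eigenvectors of the matrix in Example 2.1 (the returned eigenvectors are normalized, and are therefore scalar multiples of those derived above).

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 2.0]])              # matrix of Example 2.1

eigvals, eigvecs = np.linalg.eig(A)     # columns of eigvecs are eigenvectors
print(eigvals)                          # 1 and 2, possibly in a different order
print(eigvecs)                          # columns proportional to [1, -1] and [0, 1]

# Check the defining relation A x = lambda x for every pair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```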

2.3.2 EIGENVALUE DECOMPOSITION


A matrix can be factorized into the canonical form and represented in terms of its
eigenvalues and eigenvectors. This is known as eigenvalue decomposition.

Let A ∈ Rn×n have n linearly independent eigenvectors x_1, x_2, \cdots, x_n. As such, A can be factorized as

A = M \Lambda M^{-1}, \qquad (2.45)

where M ∈ Rn×n is the matrix whose ith column is the eigenvector x_i of A, and \Lambda is a diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e., \Lambda_{ii} = \lambda_i.
In general, the eigenvectors x1, x2 , · · · , xn are normalized though it is not a necessity,
since a normalized eigenvector is also a valid eigenvector itself. A non-normalized
set of eigenvectors can also be used as the columns of M. This is always true and can
be intuitively understood by noting that the magnitude of the eigenvectors in M is
“cancelled” in the decomposition with M−1 .
If A can be eigenvalue-decomposed and if none of its eigenvalues are zero, then A
is non-singular and its inverse is given by

A^{-1} = M \Lambda^{-1} M^{-1}. \qquad (2.46)

Since Λ is a diagonal matrix, its inverse is simply given by



\Lambda^{-1} = \mathrm{diag}\left( \lambda_i^{-1} \right) \qquad (2.47)

and A−1 can be easily obtained.


Eigenvalue decomposition has many practical engineering applications, and is numerically efficient when high matrix powers (or power series) of A are required. To compute A^n where n ∈ Z^+, it is easy to see that

A^n = \underbrace{M \Lambda M^{-1} \, M \Lambda M^{-1} \cdots M \Lambda M^{-1}}_{n\ \text{times}} = M \Lambda^n M^{-1}, \qquad (2.48)

and the required calculations are greatly simplified with \Lambda^n = \mathrm{diag}\left( \lambda_i^n \right).

Furthermore, if A is symmetrical, i.e., ai j = a ji , it will have n linearly independent


eigenvectors which can be chosen such that they are orthogonal to each other with
unity norm. As such, we can decompose A as

A = M \Lambda M^T, \qquad (2.49)

where the superscript T is the matrix transpose operator. The eigenvectors obtained
are real, mutually orthogonal, and provide a basis for Rn . Also, M is orthonor-
mal, i.e., MT = M−1 , and Λ is also real.
However, if A ∈ Cn×n is normal and has an orthogonal eigenvector basis, it can be
decomposed as
A = U \Lambda U^H, \qquad (2.50)
where the superscript H is the Hermitian operator and U is a unitary matrix. Fur-
thermore if A is Hermitian, Λ will be real. If A is unitary, Λ takes all its values on the
complex unit circle.

In general, the product of the eigenvalues is equal to the determinant of A, or


\det(A) = \prod_{i=1}^{n} \lambda_i, \qquad (2.51)

while the sum of the eigenvalues is equal to the trace of A


\mathrm{tr}(A) = \sum_{i=1}^{n} \lambda_i. \qquad (2.52)

The eigenvectors of A−1 are the same as the eigenvectors of A.
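A short numerical sketch with an arbitrary diagonalizable example matrix confirms the decomposition (2.45), the power formula (2.48), and the determinant and trace identities (2.51)-(2.52):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # arbitrary symmetric example matrix

lam, M = np.linalg.eig(A)               # eigenvalues and modal matrix
Lam = np.diag(lam)

assert np.allclose(A, M @ Lam @ np.linalg.inv(M))                 # (2.45)
A5 = M @ np.diag(lam ** 5) @ np.linalg.inv(M)                     # (2.48)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
assert np.isclose(np.linalg.det(A), lam.prod())                   # (2.51)
assert np.isclose(np.trace(A), lam.sum())                         # (2.52)
print("eigenvalues:", lam)
```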


If A ∈ Rn×n does not have a complete set of n linearly independent eigenvec-
tors, one or more of the eigenvalues are repeated or the same λi is a multiple root
of the characteristic equation. Due to this multiplicity of repeated eigenvalues, the
computation of the corresponding eigenvectors will be more difficult, and eigenvalue
decomposition of A cannot be performed since we will not have a sufficient set of n
eigenvectors to construct the n × n modal matrix M. To resolve this difficulty, the
notion of a generalized eigenvector [52] will be presented here.

2.3.3 GENERALIZED EIGENVECTORS


A generalized eigenvector of a matrix A is a non-zero vector v, which has associated
with it an eigenvalue λi having algebraic multiplicity m > 1 and satisfying

(A - \lambda_i I)^m v = 0. \qquad (2.53)

In order to compute the generalized eigenvectors, there are currently several meth-
ods available. The two most widely-used algorithms are the bottom-up and top-down
approaches with different pros and cons [46]. In this chapter, we will introduce the
top-down algorithm to compute the generalized eigenvectors.
The index of a repeated eigenvalue λi is denoted ηi , where ηi is the smallest
integer η such that
\mathrm{rank}(A - \lambda_i I)^{\eta} = n - m_i, \qquad (2.54)
where n is the dimension of the space of (A − λi I), and mi is the algebraic multiplicity
of λi . The top-down algorithm is a procedure depicted by the following steps [52]:

• Step 1: For an eigenvalue λi with index ηi , find all linearly independent


solutions to the simultaneous set of matrix equations given by

(A - \lambda_i I)^{\eta_i} x = 0, \qquad (2.55)
(A - \lambda_i I)^{\eta_i - 1} x \neq 0. \qquad (2.56)

Each such solution will start a different “chain” of generalized eigenvectors.


Because rank(A − λ_i I)^{η_i} = n − m_i, there will be no more than n − (n − m_i) = m_i solutions, each of which is a generalized eigenvector. We denote these solutions as v_1^1, v_2^1, \cdots, v_{m_i}^1.

• Step 2: Begin generating further generalized eigenvectors by computing the


chain for each j = 1, 2, · · · , mi by solving

(A - \lambda_i I)v_j^1 = v_j^2, \quad (A - \lambda_i I)v_j^2 = v_j^3, \quad (A - \lambda_i I)v_j^3 = v_j^4, \quad \ldots \qquad (2.57)

until we get

(A - \lambda_i I)v_j^{\eta_i} = 0, \qquad (2.58)

which indicates that v_j^{\eta_i} is a regular eigenvector.


• Step 3: The length of these chains is ηi. There may also be chains of shorter
length. If the chains of length ηi do not produce the full set of generalized
eigenvectors, begin the procedure again by finding all solutions to

(A - \lambda_i I)^{\eta_i - 1} x = 0, \qquad (2.59)
(A - \lambda_i I)^{\eta_i - 2} x \neq 0, \qquad (2.60)

and repeating the procedure when necessary. This will produce chains of
lengths ηi − 1. Continue until all generalized eigenvectors have been found.
• Step 4: Repeat this procedure for other repeated eigenvalues.
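As a small numerical aid to Step 1, the rank condition (2.54) can be checked directly; the sketch below applies it to the defective matrix of Example 2.2 below and recovers the index η = 3 for the repeated eigenvalue λ = 2.

```python
import numpy as np

# Defective matrix of Example 2.2; lambda = 2 has algebraic multiplicity 5.
A = np.array([[3, -1,  1,  1,  0,  0],
              [1,  1, -1, -1,  0,  0],
              [0,  0,  2,  0,  1,  1],
              [0,  0,  0,  2, -1, -1],
              [0,  0,  0,  0,  1,  1],
              [0,  0,  0,  0,  1,  1]], dtype=float)

lam, mult = 2.0, 5
n = A.shape[0]
N = A - lam * np.eye(n)

# Increase the power until rank((A - lam*I)^eta) drops to n - mult = 1.
eta = 1
while np.linalg.matrix_rank(np.linalg.matrix_power(N, eta)) != n - mult:
    eta += 1
print("index eta =", eta)   # prints 3; the ranks of N, N^2, N^3 are 4, 2, 1
```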

It is now clear that the case of n independent eigenvectors makes construction


of the modal matrix much simpler. However, this is generally not true for the case
of repeated eigenvalues, and the motivation for computing generalized eigenvectors
is that we can use them in the modal matrix when we have an insufficient number
of regular eigenvectors. When generalized eigenvectors are included in the modal
matrix, the resulting matrix operator in the new basis will be almost diagonal, since
generalized eigenvectors xi are chained to the regular eigenvectors xη as

Ax_\eta = \lambda x_\eta,
Ax_{\eta-1} = \lambda x_{\eta-1} + x_\eta,
\vdots \qquad (2.61)
Ax_2 = \lambda x_2 + x_3,
Ax_1 = \lambda x_1 + x_2, \qquad (2.62)

and hence a Jordan block as a submatrix with the following structure


\begin{bmatrix} \lambda & 1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & 0 & \ddots & 0 \\ 0 & 0 & \lambda & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \ddots & 1 & 0 \\ 0 & 0 & 0 & \ddots & \lambda & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{bmatrix} \qquad (2.63)
can be obtained.
The size of this block will be the length of the chain. This implies that the size of
the largest Jordan block will be the size of the longest chain of eigenvectors, which
is the index η of the eigenvalues. The modal matrix in Jordan canonical form will
be constructed with both regular and generalized eigenvectors by concatenating the
regular eigenvectors with the chains of generalized eigenvectors. We illustrate these
concepts with an example [46].

Example 2.2: Given a defective or rank-deficient matrix

A = \begin{bmatrix} 3 & -1 & 1 & 1 & 0 & 0 \\ 1 & 1 & -1 & -1 & 0 & 0 \\ 0 & 0 & 2 & 0 & 1 & 1 \\ 0 & 0 & 0 & 2 & -1 & -1 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{bmatrix}, \qquad (2.64)

we have the following characteristic polynomial as

|A - \lambda I| = (\lambda - 2)^5 \lambda, \qquad (2.65)

which provides eigenvalues of λ_1 = 0 and λ_2 = 2 with m_2 = 5. For λ_1 = 0, we can compute the corresponding eigenvector as x_1 = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & -1 \end{bmatrix}^T.
However for λ2 = 2, we must first compute the index of λ2 . Following the above-
mentioned procedure, we have n − m2 = 6 − 5 = 1, and the index η2 can be found
from

\mathrm{rank}(A - 2I)^1 = 4 \neq 1, \qquad (2.66)
\mathrm{rank}(A - 2I)^2 = 2 \neq 1, \qquad (2.67)
\mathrm{rank}(A - 2I)^3 = 1, \qquad (2.68)

which gives η_2 = 3. This implies that there will be a chain of eigenvectors of length three. We proceed to find a solution to

(A - 2I)^3 x = 0, \qquad (2.69)
(A - 2I)^2 x \neq 0, \qquad (2.70)
Another random document with
no related content on Scribd:
úgyse érti meg soha. A katona, az katona. És mink tudunk bánni
vele. Szivünk joga.
(Némán emelkednek föl a liften. Többé nincs köztük szó a
dologról.)
SZEGÉNYSÉG.

(Idő: délutáni öt óra. Szín: a nagykörút, hol most egymásután


gyulladnak ki a nagy villamoslámpák a boltok előtt. Személyek: két
nagyon kis fiú. Az egyik gazdag, a másik szegény. A gazdag fiun
szilszkin-sapka és kötött keztyű van, a szegény fiunak kendő van a
nyakában és rongyos, széttaposott czipő a lábán.)
A gazdag: Én azért szólítottalak meg téged, mert látom, hogy
szegény vagy.
A szegény: És mit akarsz tőlem?
A gazdag: Én jószívű fiu vagyok és segíteni akarok rajtad, mert az
apám mindig azt mondja, hogy segíteni kell a szegényen. De én nem
tudom, hogy mi a baja a szegénynek. Ha tudnám, hogy mi a baja,
könnyebben tudnék segíteni rajta.
A szegény: Igen, én nagyon szegény fiu vagyok. A Szigetvári-
utczában én vagyok a legszegényebb fiu.
A gazdag: Hát lássuk: Nincs pénzed?
A szegény: De van. Most adtam el az utolsó ujságot és most van
hatvan krajczárom.
A gazdag: Nekem csak két krajczárom van, mert annyit kapok, ha
elmegyek hazulról, hogy legyen nálam pénz, ha jön egy koldus.
A szegény: Nekem hatvan krajczárom van és abból nem is kell adni
a koldusnak. Ha én adnék a koldusnak, felpofozna az apám.
A gazdag: Eddig még minden szegény gyereknek, a kivel
találkoztam, több pénze volt, mint nekem. Hát akkor nem ebben van
a szegénység. Majd én kérdezek, te meg majd felelj. Jó?
A szegény: Jó.
A gazdag: Hát van neked gonosz mostohád?
A szegény: Nincs. Hanem két mamám van. Az egyik az igazi
mamám, a másik meg csak közös háztartás. Mindig csúfolják is az
apámat, hogy két nővel él. De a másik is rokon.
A gazdag: Hát akkor gondoskodnak rólad.
The poor boy: Oh, yes.
The rich boy: Well, now I ask you again: doesn't the poor child get anything to eat?
The poor boy: Of course he does. Only he doesn't eat hot meat and he doesn't eat vegetables; he eats bread and he eats bacon.
The rich boy: What's that? You don't have to eat meat? You don't have to eat vegetables? How lucky you are! And you're allowed to eat bacon? I bet you've got bacon on you right now.
(He gazes longingly at the poor boy's pocket.)
The poor boy: I sure have! Only I'm not giving you any.
The rich boy: How lucky you are! I have to give the poor some of everything I've got. Dear God, dear God, how lucky you people are.
(They walk along quietly side by side.)
The rich boy: Well then, I ask you again. They say the poor shiver and freeze and tremble. Aren't you cold?
The poor boy: I'm not cold, because I've got my grandmother's big shawl round my neck.
The rich boy: I've got a Petőfi collar round my neck and the wind always blows into my windpipe.
The poor boy: How does it blow into your windpipe?
The rich boy: It just blows in. You ought to know, you've got a windpipe too. Don't you know what a windpipe is? It's that stone that's in a person's neck, in front. If you feel your neck, you can feel there's a stone in it, in front. Or a piece of iron. Or I don't know... anyway, something hard. That's the windpipe. And the wind blows into it. And that's bad. But it's only bad for me, because I've got a collar. It's not bad for you, because you've got a shawl. And now I see that things aren't even so bad for you people. It's much worse for us rich.
The poor boy: It is not worse.
The rich boy: It is too.
The poor boy: Is not.
The rich boy: Is too.
(They trudge on. The rich boy is thinking hard.)
The rich boy (to himself): Maybe it's all in the religion.
The poor boy: What did you say?
The rich boy: Do you have religion?
The poor boy: I do.
The rich boy: When?
The poor boy: Tuesdays and Fridays, from three to four.
The rich boy: That's when I have it too. So then you're the same religion as me. Because if you were a different religion, you'd have religion on Wednesday from four to five, and if you were a Jew you'd have it on Thursday and you wouldn't greet the priest.
The poor boy: But I have it on Tuesdays and Fridays, from three to four, and I do greet the priest.
The rich boy: That's the Roman Catholic religion.
The poor boy: That's the one.
The rich boy: Then that makes no difference either. (He laughs.)
The poor boy: What are you laughing at?
The rich boy: Because I thought poor people were Jews even in religion.
The poor boy: That's not even true. There's an awful lot of rich Jews.
(On they go, ambling along.)
The rich boy: Well, well. Now I really don't know. When you're sick, do you have to stay in bed?
The poor boy: No.
The rich boy: Why not?
The poor boy: Because I haven't even got a bed.
The rich boy: Then where do you sleep?
The poor boy: In the corner there's a blanket, on top of that there's a couple more blankets, then there's a few rags, then there's a pillow, and there's a cover. That's not a bed, it's a den.
The rich boy: A den? What a beautiful name it has. Den. Den. Den.
The poor boy: Yes indeed, a den.
The rich boy: And you're allowed to sleep in it. My brother and I made one like that once, and we begged and begged my mother to let us sleep there, but she wouldn't. And then one time, when my father and mother were at the theatre, we made one out of rugs and quilts and slept in it till ten o'clock, but then they came home, punished us and chased us into bed. And it's so lovely!
The poor boy: Punished you? Did they beat you?
The rich boy: No. In our house nobody gets beaten.
The poor boy: Then what gets done?
The rich boy: I had to write out a hundred times the sentence: A child's place at night is in bed.
The poor boy: And it isn't even true.
The rich boy: Isn't that how they punish you?
The poor boy: No. I get a box on the ear from my father, and then I can go and play.
The rich boy: Oh, my father is much crueler. I always have to write out tasks like that.
The poor boy: It must be terrible.
(They trudge on and on.)
The rich boy: Well, I'm going home now. But it was no use my speaking to you, because I didn't find out anything. How could I help you, when things are much worse for me than for you? You're much better off than I am. I would give everything to be allowed to be poor. (His eyes light up): Why, it must be magnificent, being poor!
The poor boy (proudly): It is! It is!
The rich boy (looking at him with envy): Where are you going now?
The poor boy: Roaming around.
The rich boy (in a half-crying voice): Oh... oh... how lovely...
The poor boy: Well, so long.
The rich boy: So long.
The poor boy (smacking him on the back, saying): You're it!
(A great chase. The poor boy runs straight into the biggest patch of mud. The rich boy, who had very nearly caught up with him, stops at the edge of the mud. The poor boy stands in the middle of it, ankle-deep in the black slush, and grins from there. He even shouts.)
The rich boy: And he can even go into the mud... and he's allowed to shout the filthiest things... and...
(Choking with bitter tears, he plods off home.)
END OF AN ACT.

(This here is a little study. It would be downright pretentious to call it a «dramaturgical sketch», though I would dearly love to. So: the end of an act. The end of the first act of a comedy.
Characters:
A woman (28–30 years old).
A gentleman (35–36 years old).
A young man (28 years old).
The set represents the bank of a river. Beyond it, blue mountains wrapped in the mist of an autumn morning. The river is the colour of opal. The leaves on the trees are all yellow, yellowish green and red. It is early morning.)

SCENE XIV.

The woman, the gentleman.
The gentleman (entering): What? You here?
The woman (shivering a little): Here I am, as you can see.
The gentleman: You get up this early?
The woman: At the hotel they stoke my stove from the outside, and today they rattled it so hard that I woke at half past six. After that I couldn't get back to sleep. So I made up my mind and got dressed. I came down to enjoy this cool autumn-morning mood.
The gentleman: I always get up at this hour.
The woman (does not answer. She gazes across to the far bank).
The gentleman: It so rarely happens that I find you alone.
The woman: Indeed?
The gentleman: Indeed.
(Pause.)
The gentleman: And yet it is no great art to find a widow alone, if a man puts his mind to it.
The woman: And it is no great art for a widow to remain alone, if she puts her mind to it.
The gentleman: Er... shall I go?
The woman: I'm not saying that. But you know... it's chilly, it's a dark, overcast day... at such times people's thoughts are chilly, dark and overcast too.
The gentleman: So I see. But I think that is easy to remedy.
The woman: How?
The gentleman: One says a warm, bright word. A word that gives heat. And at once it is warm and the sun is shining.
The woman: For instance.
The gentleman: For instance: love.
(Pause.)
The woman: How do you mean that?
The gentleman: I love you. I have loved you for two years, and for two years I have hidden it in such a way that you feel it in me at every glance, that it shines out of my every look, that it screams from my every word.
The woman: And?
The gentleman: And... that is all. Now we are alone, so now I say it. It does me good to say it. I love you, I love you, with the serious love of a mature man, a love that was not warmed over a blaze of straw but browned over a slow fire.
The woman: You talk like a sentimental cook.
The gentleman: You're joking.
The woman: You men are unbelievably stupid. Your clumsiness knows no bounds.
The gentleman: Why?
The woman (turning towards him, smiling): Have you no feeling for style? What time is it now? Seven. What day is it today? The twenty-ninth of September. What is the weather like? A fine, bright, warm, melancholy autumn day? The devil it is. Everything is sheer chill, everything is damp, shivery, cold, dismal. A great cellar mood sits on my soul. I have only just got up and have not yet had breakfast. I didn't sleep enough, I'm in a bad temper, my washing water was unpleasantly cold, so it is cross, sleepy, shivering and on an empty stomach that I receive this declaration of love. You unfortunate man. A declaration of love that has been browning over a slow fire for two years. A declaration of love that does not suit the meteorological conditions. A declaration that ought to be made in the depths of some silky, carpeted, curtained room, at dusk, with the fire crackling in the hearth, while I, blushing a little, lie on the divan in a soft cashmere gown, the way hateful old bad writers have worn it threadbare, but the way it is nonetheless pleasant in real life. It ought to be spoken in silence, but in a soft, warm silence. This silence here is a cold, hard silence. Do you understand? That, in turn, hateful new bad writers have worn threadbare, but it is true all the same. What you have just done is out of style, it is bad, it is clumsy. You let your capital earn interest for two years, you kept nicely silent, and now for the whole of it, compound interest included, you have bought some worthless paper. Your capital has gone up in smoke. In one moment you have spoiled everything.
The gentleman: But...
The woman: And now don't speak, because you bear a terrible resemblance to the man whose hat the wind carries off the Margit Bridge into the Danube, and who snatches after it and reaches down into the water for it. One of the greatest and finest feats in the world is not to grab for your hat at such a moment. Brutus killed his father; in antiquity that was the example of a man who could rule his feelings. The Brutus of the future will be the man who does not grab for his hat when the wind tears it off his head. Killing a father is easy. When you gentlemen can rule over a single reflex of yours, then you may talk of strength of soul and presence of mind. Don't speak now. Don't stammer. Don't flounder. Don't make yourself ridiculous. Be silent.
(Pause.)
The gentleman: So I was out of style.
The woman: In the highest degree.
The gentleman: I admit it.
The woman: There, that's what I like. At least you have learned something you can put to good use when you move among women. I will even confess to you that I do not know whether your attempt might not have succeeded, had you made it at a time when the disposition for it was there. Learn that a real woman can be caught on this one point. You must come at the right moment. At least now you will lose, for a few years, the inclination to talk of love to a hungry, chilled, ill-tempered woman in ugly weather, out of doors, at a moment when everything occupies that woman's mind more than love, from which this situation is as far as...
The gentleman: Spare me that crushing simile. Thank you for teaching me something. I feel I shall yet make use of it in life. I kiss your hand.
(He raises his hat and goes off to the left. The moment he disappears on the left, the young man enters from the right.)

SCENE XV.

The woman, the young man.
The woman (falling on the young man's neck): At last, you're here!
The young man: You sweet thing.
The woman: Kiss me... kiss me... On this beautiful, sad autumn morning I crave the warmth of your lips more than ever. Hold me. Now it is good, now it is sweet, now it is happiness. Never, only now.
(They embrace.) (Curtain.)
MATCH-HEAD SOLUTION.

(Seven o'clock in the evening. The child has just come home from the City Park with the governess. The lady of the house is sitting on the balcony, reading the evening edition of the Lloyd.)
The maid (rushing in): Madam, for God's sake, please come quickly.
The mistress: What is it?
The maid: The governess is mixing matches.
The mistress: She's doing what?
The maid: Please come...
(She runs ahead, the mistress after her. They open the door of the governess's room. The governess is sitting at the table, crumbling match heads into a glass of warm water the way one crumbles a roll into coffee. Meanwhile she is quietly crying.)
The mistress: What are you doing, miss?
The governess: Me?
(She suddenly slumps onto the table and bursts into sobs. The maid quickly takes the glass and the penny matches away from in front of her.)
The mistress: Go out, Eszti.
The maid: Yes, madam. (She goes out.)
The mistress: What is this, miss? Speak.
(The governess does not answer, only cries. This goes on for ten minutes.)
The mistress (calling out): Pista, come in.
(The child comes in and stares.)
The mistress: Where were you and the governess this afternoon?
The boy: In the park.
The mistress: And... what is the matter with the governess now?
The boy: I don't know. Maybe the first lieutenant offended her.
The mistress: What first lieutenant?
The boy: We have a first lieutenant. He always waits for us behind the Zserbó.
The mistress: Lovely. And did he wait today too?
The boy: Today we were there first. Then the first lieutenant only came later, and they quarrelled.
The mistress: They quarrelled?
The boy: Not right away, only afterwards.
The mistress: Well, how was it? Tell me the whole thing nicely, tell me what you heard.
The boy: Well, we came and we waited and then the first lieutenant came. Not a hussar, just a foot soldier. And the governess greeted him with her mouth and the first lieutenant greeted her back with his mouth, he even greeted right into her ear. Oh, you're tickling, oh, you're tickling. That's what the governess said. Oh, you're tickling.
The mistress: Go on.
The boy: Well, she told him he was a tickler and then they sat down on the bench. Then they talked about something and the first lieutenant said he was being transferred to Croatia. But the governess said: that's not true, she even asked me: «isn't that so, Pista, it isn't true?» And I said it wasn't true. Then little by little the governess got sad and cried and scolded the first lieutenant, but the first lieutenant comforted her and greeted her neck with his mouth and greeted her whole face all over. That much you know, that much you know. That's what the governess said. That much you know, and nothing else.
The mistress: Go on.
The boy: But then there was just crying and greeting, and the governess said the whole thing is a lie, you only want to leave me in the lurch, you officers are all like this, why did I ever give my love to an officer.
The mistress: Aren't you ashamed of yourself, miss, to say such things in front of the child?
(The governess, motionless, goes on crying little puddles onto the table.)
The boy: Then the first lieutenant swore in German and told the governess in German that he loved her. Nicht wahr, nicht wahr, that's what the governess said, so that I wouldn't understand. Nicht wahr. That it isn't true.
The mistress: Go on.
The boy: Then all at once the governess got very angry and said: «well, if you really must know, you are leaving me just when you are making me miserable.»
The mistress: All right, all right. What came next?
The boy: Well, that he is making her miserable, and that she will kill herself, because she is going to be a mother.
The mistress: Miss, miss, do you hear this, you shameless swine? And this is the sort I entrust my little boy to.
(No answer.)
The boy: That she will kill herself because she is afraid of the shame, and that it's easy for you, because you are an officer, but what am I to do, because the whole world will spit in my face; and the first lieutenant wanted to greet her, but the governess pushed him away and asked me: «isn't it true, Pista, that the lieutenant uncle is a vile man?» and I said he was vile, and the first lieutenant clipped me over the head and wanted to hug the governess again, and said he is going to Croatia and will send eight forints from there.
(The governess sobs aloud.)
The mistress (coldly): Go on.
The boy: Then the first lieutenant left, and the governess and I ran after him, and the governess asked: at least tell me your address, and the first lieutenant said: I'll send it in a letter, and then we went to the Aréna coffee-house and the governess telephoned for a long time and came out and said: «you see, Pista, what a vile man he is, he isn't even being transferred to Croatia.» Then we came home.
The mistress: All right, my boy, you may go.
The boy: I kiss your hand. (He goes out.)
(Long pause.)
The mistress: Miss!
The governess (rising from the table and standing humbly before the mistress): Madam, please...
The mistress: Hold your tongue.
The governess: But in my own justification...
The mistress: I don't give a hoot about that. It is none of my concern. You... you...
The governess: Madam, please, not a word of it is true... of what I said to the first lieutenant, I mean. I only wanted to win back his love.
The mistress: It isn't true?
The governess: Of course not. I only wanted to test whether, if I said that, he would have the face to leave me.
The mistress: And?
The governess: Well, you can see, he had the face.
The mistress: And this comedy here with the matches?
The governess: I wanted to put his love to the test once more... if I drank the match water and it got into the newspaper... Whether he would have come to me. Whether he would have had the face to leave me in the hospital.
The mistress: He would have, you may rest assured.
The governess: But I'm not going to drink it now.
The mistress: Quite right. And since, thank God, you neither feel yourself to be a mother nor have drunk any matches, I can now declare with an easy conscience that you are dismissed from this house, and if I still see you here an hour from now, my husband will throw you out. Pack up your belongings at once, and march.
The governess: Only let me say goodbye to my little Pista.
The mistress: You may say goodbye to him, you impudent creature. Just look at her. First lieutenants, and lies, and matches... I have never in my life heard such unheard-of impudence. Be glad I am not telling my husband, for if I told him, God knows what he would do to you! March out of my house!
(She goes out, slams the door and goes on reading the evening Lloyd. Inside, the governess begins to pack and sings. Out of the song, with an aching quaver, one line rings out clearly: «my poor heart will surely break for you.»)
THE CAB.

(The persons of this conversation: a lady and a man. I do not say woman, or female, or fine dame, because with the word «lady» I should like to indicate her age as well. A woman is called a lady only during five particular years of her life. The man, for his part, is a man. That is to say, he too is of manly age.)
The lady: I haven't seen you in a hundred years.
The man: No one regrets that more than I do.
The lady: But I'm glad to see you now, because we can chat about old things. Very old things, so old that perhaps you no longer even remember them.
The man: Strange that this is now the second time you have addressed me as if we were both sixty years old. First you said you hadn't seen me in a hundred years. Now you mention very old things.
The lady: Well?
The man: Women only treat the past as the distant past when they want to tell something that needs an excuse. And for the time being there is no better excuse than time.
The lady: Interesting. Very interesting.
The man: What is?
The lady: That you guessed it. That you guessed the truth. Indeed, I do want to tell you something. And men only ever guess that when they themselves figure in the woman's story.
The man: What? I figure in it?
The lady: Very much.
(Pause. The man racks his brains.)
The man: Nothing comes to mind.
The lady: I'll help you. I'll tell it, but I ask for discretion.
The man: Come now, come now...
The lady: My request does not concern the story. You may tell that to anyone. But as to the date, give me your word of honour.
The man (silently offers his hand).
The lady: You see, it happened ten years ago. Here, in Pest. In October.
The man: I don't remember.
The lady: Ten years ago, in October, we were at an evening party together. Your hair was brown then, like a chestnut. And I have grown much blonder since. We dined outdoors in the City Park. My husband had gone to Berlin for two weeks, and I went to the party by myself. At one o'clock after midnight I grew tired of the merriment and announced to the hostess that I was going home. You were standing there beside the hostess, looking very deep into my eyes.
The man: That... that... that is beginning to come back to me.
The lady: You tormented me badly all evening. At that time you had been in love with me for three weeks. You didn't speak of it, but you tormented me with those certain mute grimaces. And with movements: you went out, came in, stood up, sat down; in short, you behaved the way very clumsy young lovers behave.
The man: Aha... aha.
The lady: And when I said to the hostess: «well, Teréz, I'm off home now», you slunk away from her side, and when I stepped out of the vestibule into the street to get into a cab, you suddenly appeared there.
The man: Yes.
The lady: And you said you wanted to see me home.
The man: Yes.
The lady: I laughed at this madness, I thought your offer boundlessly improper, but then I consented, and for two reasons. First, because you offered so innocently, so naively, that I thought: «look at that, he doesn't even know what a dreadful thing he is asking.» And second, because... I was almost in love with you.
(Long pause.)
The man: Whaaat? What?
The lady: Yes, yes.
The man (wide-eyed): You were in love?
The lady: No. But almost. You know, I was at the point where the rest begins to depend on the man. A woman bears the thing calmly for a certain time, and then all at once she feels: «well, from now on the rest is his business.»
The man: And you felt that?
The lady: I did.
The man: Why didn't you say so?
The lady: Because the whole natural history of this thing is precisely that the woman does not say so. It is the man's business to tell apart the time when the woman does not feel it from the time when she merely does not say it.
The man: Oh, what an ass I am.
The lady (sighing): Let's leave that. Let's go on with the case. When you offered, for a moment I didn't know what to do. Then some mad fever ran through me and I said: all right. And then you suddenly cried out: «Shall I go for a cab?» And that was your first mistake.
The man: What was?
The lady: That you went for a cab and left me alone for two minutes. You may imagine how much I liked you, if I put up even with that. For there is nothing a woman resents so much as being left to cool off. Then the cab arrived.
The man: A one-horse cab.
The lady: Exactly. Nice of you to still remember the genre. Because everything turns on that. It was a one-horse cab and not a fiacre, one horse and not two.
The man: I couldn't get anything else.
The lady: You should have. For what happened?
The man: We got into the cab.
The lady: Exactly. And we drove homeward for twenty-five minutes. Do you know the difference between a one-horse cab and a fiacre?
The man: No. No. No.
The lady: First of all, the windows of a one-horse cab rattle, and you cannot hear what the person sitting beside you is saying. Then, in October, a one-horse cab is damp and bleak. In a fiacre neither the wheels nor the windows rattle. The wheels are covered with rubber, the window frames with felt. In a fiacre, therefore, there is a pleasant, muffled silence; there one can speak in fine nuances. There one can utter indifferent words with a peculiar emphasis. You asked me: «How do you feel?» It was a very witty question, and I answered: «fine». The window clattered, the wheels rattled, the cab jolted, and so I had to shout: «fine, fine, fi-i-ine!» Whereas in the soft, quiet fiacre I could have lowered my eyes and said softly: «fine». And I could have mixed into my voice some bashful encouragement, some quiet embarrassment, from which you would at once have sensed that I felt good in your company, that I was a little afraid, but that it was pleasant... and so on, and so on, and so on. That «fine», had I been able to say it quietly, with my own particular nuances, would have been downright encouragement for you. But said very loudly, as it was, it meant: «I feel fine, fine, fine, now leave me in peace.» And then we were silent in the cab for about five minutes. And in a one-horse cab one cannot even be silent. Because in the quiet, soft fiacre my silence would have been noticeable, and you would have asked: «Why are you silent?» And then perhaps I would have begun to cry.
The man: Oh, oh!
The lady: But in a one-horse cab silence attracts no notice. Because it is natural that in all that rattling one does not feel like talking. To put it delicately: in a one-horse cab you cannot hear the silence. So that too fell through. And then, one by one, everything fell through, because after all a man cannot ask even of a woman in love that she confess everything with brutal clarity; how, then, could he ask it of one who is only almost in love, and who can really make herself understood only by little sighs and odd shifts of emphasis...
The man: Quite so, oh what an ass I am, quite so, oh what an ass I am.
The lady: And after I got out of the cab and you said goodbye to me, I never saw you again. You did well to keep away, since after all you got no encouragement from me. This is the first time I have seen you since. Well then, let me tell you that if you had brought a fiacre that night...
The man: Oh, oh!
The lady: On such trifles do the loveliest things depend. And now don't answer, and don't pity yourself, and don't pity me either. And now, as a punishment, see me home to my husband.
The man: I'll go for a cab.
The lady: Do, because it is raining.
The man: But this time... this time I shall bring a one-horse cab deliberately, out of respect.
The lady: No, no. Out of respect bring a fiacre, and show by it, as I too shall show, that by now even the fiacre is in vain. It is dreadful how you men never get it right. Go. Go, and remember that when you see a woman home, always choose the carriage with as many horses harnessed to it as possible. Go.
(The man goes sadly for a cab. The wry memory of old foolishness draws his face into a smile. And the rain falls, quietly.)
THE VERY DISTINGUISHED YOUNG MAN.

(Scene: anywhere where there is a table and two chairs. Characters: the artiste and the young man. The artiste is somewhat older than the young man. The young man is much younger than the artiste. The table stands in the middle, the artiste sits on one of the chairs, the young man stands beside the artiste, and the other chair is empty. The young man will only sit down on it later.)
The artiste: Well, explain it.
The young man: But you won't be angry?
The artiste: No.
The young man: Well, how exactly was it? It was like this: there was a concert in Transylvania at which you sang. You came from Vienna, had breakfast at the station and travelled straight on towards Transylvania. I didn't know you; the concert committee had written to me that we would hold a rehearsal there on the spot and that in the evening I would accompany you at the concert. So we travelled on the same train, and we were introduced there at the Sas Hotel.
The artiste: That's right.
The young man: After the concert we didn't talk, and we met again only at the train in the morning.
The artiste: Yes.
The young man: The organizers, who hadn't even gone to bed, escorted us out to the train still in their tailcoats; they handed you five hundred crowns in an envelope, and handed me a hundred and fifty crowns in an envelope. With that the two of us got into a compartment...
The artiste: Which was badly heated.
The young man: Which was not heated at all, which was also a great misfortune, but a circumstance on which I do not for the moment wish to dwell.
The artiste: Let's go on.
The young man: Let's go on. It was cold, so I put on gloves, and God knows, when I put on gloves a certain air of distinction at once takes possession of my whole being. You ladies do not understand this, but every simple citizen who does not often put on gloves will agree with me that gloves have a great effect on a man.
The artiste: Go on, go on.
The young man: Please don't look down on the important details. This glove business, for instance, was very important. Because from then on I began to speak to you in a kind of glove tone.
The artiste: I didn't notice. You simply spoke politely, pleasantly, as anyone else would have spoken in your place.
The young man: That is just why I say it was a glove tone, because among my composer friends I am famous for being the greatest rascal. I am rude and mocking with women; with me a lady may blush as much as she pleases; women first come to loathe me, to hate me, then to envy me, they want to get me for themselves, they want to rule over me, whereupon I grow ever ruder and ever more malicious with them, whereupon, seeing the struggle is fruitless, they begin to fall in love with me.
The artiste: A fine piece of nonsense you have just recited here.
The young man: It is not as much nonsense as it looks. If a man shows himself hard to get in front of a woman, that alone is a good investment. The German novels for young girls talk about «cold» men. That was the old method. But by the new method the young man is warm and temperamental, only he acts as if he loathed the lady in question; he raves and enthuses madly about, say, the aeroplane, so that in the end the lady says to herself: «why the aeroplane, and why not me?» The lady thinks: «if he did not look down on me, he could say such beautiful things about me too», and in the end the lady catches herself envying the young man's hat, because the hat gets to go home with him.
The artiste: And then?
The young man: Then that's all.
The artiste: That is your whole method?
The young man: It is.
The artiste: Magnificent.
(They are silent.)
The young man: It is easy to criticize it now. But to carry it through, that is hard. You are laughing at it now.
The artiste: Of course I'm laughing.
The young man: But that is just the point. I have told you my method now, because on the Kolozsvár express I failed to try it out. By now this is only a dissection.
The artiste: A dissection?
The young man: It is. If I myself were lying on the dissecting table at the clinic, cut into a few pieces, you could not easily imagine that anyone could ever have fallen in love with me. Could you? Well, what I have just told you here is merely the corpse of my rascality, of my superficial, superior, frivolously charming, childishly defiant, conceited and yet so tame, so likeable conqueror's manner. There is certainly nothing in that to adore; I know it myself.
The artiste: And so?
The young man: And so it was murdered partly by the gloves, which are truly no rascal's prop, and partly by the cold in the compartment, which is the enemy of all light chatter and of any manner founded on mobility. We sat hunched in the corner, and the talk was of the aeroplane. I spoke of the aeroplane with a certain disgust, with a kind of distinguished wryness, which pleased you very much.
The artiste: That's right.
The young man: Whereas what I needed was for it not to please you. I could not awaken in you that little antipathy out of which I make love, with which I shape myself, in a woman's eyes, into the untamable, unconquerable colt; for the more untamable someone seems to a woman, the more she would like to tame and gentle him. But the mild boredom and polite manner with which I handled the conversation awakened sympathy in you, and sympathy is of no use to me, because I can do nothing with it.
The artiste: But you were charming, polite.
The young man: Because by then I had given you up. By then I felt like a eunuch who has renounced the woman and whose only concern now is to see to it that the woman is comfortable. For if the compartment had been heated and I had not had gloves on, I would have been trying to make the woman uncomfortable.
The artiste: Lovely.
The young man: And so I grew more and more distinguished. I carefully wrapped you up in my travelling rug, which had a reassuring effect on you. There is a great deal in that: it means I made you motionless, I was not curious about your legs, I disfigured your slender figure into a bundled-up mass, I practically fixed you in the corner of the compartment, I practically did not even want you to move, to lean towards me. And I said things to match: I played the pedantic, orderly, careful family man, I began to talk about other people in a club-room tone, I praised correct gentlemen, though I adore scoundrels, but I observed that this was how you felt at ease, and I had no reason to unsettle you with a new view of the world.
The artiste: So that is why you were distinguished?
The young man: That's why.
The artiste: That is why you were refined, charming, clever, polite?
The young man: That's why.
The artiste: That is why I liked you, that is why I said to myself: «interesting, what a noble, distinguished, serious little fellow this little nobody of a musician is»?
The young man: That's why.
The artiste: And all this was because you were bored with me, because you were too lazy to undertake my conquest, because it was cold and because you had already put on your gloves.
The young man: That's right. One thing drew the next after it. The cold drew the gloves, the gloves the rug, the rug the friendly solicitude, the friendly solicitude the politeness, the refined tone, the commonplace club-room outlook and the drawing-room tone of comedy.
The artiste: In other words, you looked down on me.
The young man: I did.
The artiste: You did not see the woman in me, only the female.
The young man: Only the female.
The artiste: I did not interest you.
The young man: No.
The artiste: I do not interest you now either?
The young man: Not now either.
The artiste: In your eyes I am an insignificant little operetta singer who travels about and sings for a few hundred crowns.
The young man: That's right.
The artiste: And I do not even sing so well?
The young man: No.
The artiste: And I am not even so beautiful?
The young man: No.
The artiste: God be with you.
(She rises.)
The young man: Where are you going?
The artiste: Away from here. Because I can see your method beginning. The scoundrel tone is beginning.
The young man: Stay here.
(Now he sits down on the other chair.)
The artiste: Thank you, no. By now it would be useless anyway for you to try the method out on me. Whatever you said, I would keep thinking: this, now, is the method.
The young man: In other words: one must never tell a woman the method in advance.
The artiste: No, my boy. Rather, one must never give up a woman on account of any psychological idiocy whatsoever. Wrapped in that rug, I spent the whole journey thinking that you were the man for me. This is the kind I need. I was caught then, without any method.
The young man: Really?
The artiste: Really. Precisely because I fell not for the method but for the man, who can never be so great an artist as to hide himself behind a role, as you small-time conquerors believe, that was no reason to look down on me.
The young man: No?
The artiste: No.
The young man (his eyes lighting up): Really?
The artiste: As truly as I am now truly going to leave you standing here.
(Without a word, she departs forever.)
