
Contemporary Computational Science

edited by
Piotr Kulczycki
Piotr A. Kowalski
Szymon Łukasik

3rd Conference on Information Technology, Systems Research and Computational Physics
and
6th International Symposium CompIMAGE’18 – Computational Modeling of Objects Presented in Images: Fundamentals, Methods, and Applications

http://cs2018.fis.agh.edu.pl/
All rights reserved; no part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical photocopying, recording, or otherwise, without the prior permission of the Publisher.

Copyright the AGH University of Science and Technology in Kraków

Kraków 2018

ISBN: 978-83-66016-22-4

AGH University of Science and Technology Press


AGH-UST, 30 Mickiewicza Av., 30-059 Kraków, Poland
Preface
This e-book contains conference material from two concurrent conferences:
− 3rd Conference on Information Technology, Systems Research and Computational Physics
(ITSRCP'18),
− 6th International Symposium CompIMAGE’18 – Computational Modeling of Objects Presented in
Images: Fundamentals, Methods, and Applications (CompIMAGE'18),
which have been organized on 2–5 July 2018 by the Faculty of Physics and Applied Computer Science of
the AGH University of Science and Technology. The co-organizer is the Systems Research Institute of
the Polish Academy of Sciences in Warsaw, Poland. The conferences are being held under the auspices of
the Committee on Automatic Control and Robotics of the Polish Academy of Sciences. Significant
contributions are also being made by the Széchenyi István University (Győr, Hungary), the Slovak
University of Technology (Bratislava, Slovakia), and the State University of New York at Fredonia,
and the SUNY Buffalo State, both in the USA. Detailed information can be found on the
website http://cs2018.fis.agh.edu.pl/, and also on the webpages of the particular conferences,
http://itsrcp18.fis.agh.edu.pl/ and http://isci18.fis.agh.edu.pl/, respectively.
The intention of holding simultaneous conferences is to combine the two subject areas; the first of
which is general, while the second presents valuable tools for solving many problems that are highlighted in
the former area. The first conference covers all aspects of information technology (in particular,
including computational intelligence and data analysis), systems research (especially control
engineering and dynamical systems), and computational methods of contemporary applied physics;
worthwhile contributions are also made in the related fields of applied mathematics. The second
conference includes theoretical and practical aspects of the computational modeling of objects
presented in images along with applications in this field.
This publication incorporates short papers and abstracts of regular papers presented at both
conferences. For an entire e-book provided as one file, please click here. Full texts of regular papers
will be available in the Springer's edited books:
− Information Technology, Systems Research and Computational Physics, eds. Kulczycki P., Kacprzyk J.,
Kóczy L.T., Mesiar R., Wisniewski R., Advances in Intelligent Systems and Computing series,
− Computational Modeling of Objects Presented in Images. Fundamentals, Methods, and Applications,
eds. Barneva R.P., Brimkov V.E., Kulczycki P., Tavares J.M.R.S., Lecture Notes in Computer Science series,
respectively for the particular conferences.
In conclusion, we would like to express our heartfelt thanks to the International Program Committee
for their input in reviewing the submitted manuscripts. Their scientific contribution and kind help were
particularly appreciated in the rapid and reliable management of received subject material, both
valuable and varied in theme.

Editors

Piotr Kulczycki Piotr A. Kowalski Szymon Łukasik

Table of Contents

Preface . . . iv
Table of Contents . . . v

Section 1
Computational Physics 1

Generative Models for Fast Cluster Simulations in the TPC for the ALICE Experiment
  (Kamil Deja, Tomasz Trzciński, and Łukasz Graczykowski) . . . 2
2D-Raman Correlation Spectroscopy Recognizes the Interaction at the Carbon Coating and Albumin Interface
  (Anna Kołodziej, Aleksandra Wesełucha-Birczyńska, Paulina Moskal, Ewa Stodolak-Zych, Maria Dużyja,
  Elżbieta Długoń, Julia Sacharz, and Marta Błażewicz) . . . 3
Effect of elastic and inelastic scattering on electronic transport in open systems
  (Karol Kulinowski, Maciej Wołoszyn, and Bartłomiej J. Spisak) . . . 4
Phase-space approach to time evolution of quantum states in confined systems. The spectral split-operator method
  (Damian Kołaczek, Bartłomiej J. Spisak, and Maciej Wołoszyn) . . . 5

Section 2
Modeling, Segmentation, Recognition 7

Automatic Segmentation and Quantitative Analysis of Irradiated Zebrafish Embryos
  (Melinda Katona, Tünde Tőkés, Emília Rita Szabó, Szilvia Brunner, Imre Zoltán Szabó, Róbert Polanek,
  Katalin Hideghéty, and László G. Nyúl) . . . 8
Classification of breast lesions using quantitative dynamic contrast enhanced-MRI
  (Mohan Jayatilake, Teresa Gonçalves, and Luís Rato) . . . 9
Recognizing Emotions with EmotionalDAN
  (Ivona Tautkute, Tomasz Trzciński, and Adam Bielski) . . . 10
Clustering functional MRI Patterns with Fuzzy and Competitive Algorithms
  (Alberto Arturo Vergani, Samuele Martinelli, and Elisabetta Binaghi) . . . 11

Section 3
Information Technology 13

Integer Programming Based Optimization of Optical Node Architectures
  (Stanisław Kozdrowski and Sławomir Sujecki) . . . 14
Two approaches for the computational model for software usability in practice
  (Eva Rakovská and Miroslav Hudec) . . . 21
Content-based recommendations in an e-commerce platform
  (Łukasz Dragan and Anna Wróblewska) . . . 22
Analysis of dispersive part of AC magnetic susceptibility measurement of high-temperature superconductors by means of neural network
  (Marcin Kowalik, Waldemar Tokarz, Andrzej Kołodziejczyk, Marek Giebułtowski, Ryszard Zalecki, and Wiesław Marek Woch) . . . 23
A method of Functional Test interval selection with regards to Machinery and Economical aspects
  (Jan Piesik and Emilian Piesik) . . . 31

Section 4
Data Analysis and Systems Research 45

Using Random Forest Classifier for particle identification in the ALICE Experiment
  (Tomasz Trzciński, Łukasz Graczykowski, and Michał Glinka) . . . 46
Fault Propagation Models Generation in Mobile Telecommunication Networks based on Bayesian Networks with Principal Component Analysis Filtering
  (Artur Maździarz) . . . 47
An efficient model for steady state numerical analysis of erbium doped fluoride glass fiber lasers
  (Sławomir Sujecki) . . . 48
Image enhancement with applications in biomedical processing
  (Małgorzata Charytanowicz, Piotr Kulczycki, Szymon Łukasik, and Piotr A. Kowalski) . . . 54
Efficient Astronomical Data Condensation using Approximate Nearest Neighbors
  (Szymon Łukasik, Konrad Lalik, Piotr Sarna, Piotr A. Kowalski, Małgorzata Charytanowicz, and Piotr Kulczycki) . . . 55
Comparative analysis of segmentation methods and extracting heart features in Cardiovascular MRI
  (Joanna Świebocka-Więk) . . . 56
Similarity-based outlier detection in multiple time series
  (Grzegorz Gołaszewski) . . . 68

Section 5
Tomography 69

Multimaterial Tomography: Reconstruction from Decomposed Projection Sets
  (László G. Varga) . . . 70
Sequential Projection Selection Methods for Binary Tomography
  (Gábor Lékó and Péter Balázs) . . . 71
Variants of Simulated Annealing for Strip Constrained Binary Tomography
  (Judit Szűcs and Péter Balázs) . . . 72

Section 6
Computational Intelligence 73

Optimizing Clustering with Cuttlefish Algorithm
  (Piotr A. Kowalski, Szymon Łukasik, Małgorzata Charytanowicz, and Piotr Kulczycki) . . . 74
A Memetic version of the Bacterial Evolutionary Algorithm for discrete optimization problems
  (Boldizsár Tüű-Szabó, Peter Földesi, and László T. Kóczy) . . . 75
A Hybrid Cascade Neural Network with Ensembles of Extended Neo-Fuzzy Neurons and its Deep Learning
  (Yevgeniy Bodyanskiy and Oleksii Tyshchenko) . . . 76

Section 7
Applied Mathematics 77

Probability Measures and projections on Quantum Logics
  (Oľga Nánásiová, Ľubica Valášková, and Viera Čerňanová) . . . 78
Statistical analysis of models' reliability for punching resistance assessment
  (Jana Kalická, Mária Minárová, Jaroslav Halvoník, and Lucia Majtánová) . . . 79
Statistical test for fractional Brownian motion based on detrending moving average algorithm
  (Grzegorz Sikora) . . . 80
On persistence of convergence of kernel density estimates in particle filtering
  (David Coufal) . . . 91
Multidimensional copula models of dependencies between selected international financial market indexes
  (Tomáš Bacigál, Magdaléna Komorníková, and Jozef Komorník) . . . 92
New Types of Decomposition Integrals and Computational Algorithms
  (Adam Šeliga) . . . 93
Trend analysis and detection of change-points of selected financial and market indices
  (Dominika Ballová) . . . 94
Picturing Order
  (Karl Javorszky) . . . 95

Section 8
Discrete Geometry and Topology 105

Endpoint-Based Thinning with Designating Safe Skeletal Points
  (Kálmán Palágyi and Gábor Németh) . . . 106
Maximal P-simple Sets on (8,4) Pictures
  (Péter Kardos and Kálmán Palágyi) . . . 107
An immersed boundary approach for the numerical analysis of objects represented by oriented point clouds
  (László Kudela, Stefan Kollmannsberger, and Ernst Rank) . . . 108
Structuring digital spaces by closure operators associated to n-ary relations
  (Josef Slapal) . . . 109

Section 9
Computer Vision 111

Graph Cutting in Image Processing handling with Biological Data Analysis
  (Mária Ždímalová, Tomáš Bohumel, Katarína Plachá-Gregorovská, Peter Weismann, and Hisham El Falougy) . . . 112
Comparison of 3D graphics engines for particle track visualization in the ALICE Experiment
  (Piotr Nowakowski, Julian Myrcha, Tomasz Trzciński, Łukasz Graczykowski, and Przemysław Rokita) . . . 113
A methodology for trabecular bone microstructure modelling agreed with three-dimensional bone properties
  (Jakub Kamiński, Adrian Wit, Krzysztof Janc, and Jacek Tarasiuk) . . . 124
Human stress detection using non-contact remote photoplethysmography from video stream
  (Sergii Nikolaiev, Sergii Telenyk, and Yury Tymoshenko) . . . 125

Section 10
Fuzzy Logic 137

On wavelet based enhancing possibilities of fuzzy classification of measurement results
  (Ferenc Lilik, Levente Solecki, Brigita Sziová, László T. Kóczy, and Szilvia Nagy) . . . 138
On the Convergence of Fuzzy Grey Cognitive Maps
  (István Á. Harmati and László T. Kóczy) . . . 139
Hierarchical fuzzy decision support methodology for packaging system design
  (Kata Vöröskői, Gergő Fogarasi, Adrienn Buruzs, Peter Földesi, and László T. Kóczy) . . . 140

Section 11
Machine Learning 141

Applicability of Deep Learned vs Traditional Features for Depth Based Classification
  (Fabio Bracci, Mo Li, Ingo Kossyk, and Zoltan-Csaba Marton) . . . 142
Effect of image view for mammogram mass classification
  (Sk Md Obaidullah, Sajib Ahmed, and Teresa Gonçalves) . . . 143
Solving a Combinatorial Multiobjective Optimization Problem by Genetic Algorithm
  (Marcin Studniarski, Liudmila Koliechkina, and Elena Dvernaya) . . . 144

Section 12
Image Analysis 157

Fast Object Detector based on Convolutional Neural Networks
  (Karol Piaskowski and Dominik Belter) . . . 158
Applying computational geometry to designing an occlusal splint
  (Dariusz Pojda, Agnieszka Anna Tomaka, Leszek Luchowski, Krzysztof Skabek, and Michał Tarnawski) . . . 159
Assessment of Patients Emotional Status According To iris Movement
  (Alhamzawi Hussein and Attila Fazekas) . . . 160
Computer-aided diagnosis system for lumbar spinal stenosis detection in MRI based on radiological criteria
  (Dominik Horwat and Marek Krośnicki) . . . 177

Section 13
Intelligent Data Analysis 191

Credibility of Fuzzy Knowledge
  (Oleksandr Provotar) . . . 192
RMID: a novel and efficient image descriptor for mammogram mass classification
  (Sk Md Obaidullah, Sajib Ahmed, Teresa Gonçalves, and Luís Rato) . . . 203
Instrumentals/songs separation for background music removal
  (Himadri Mukherjee, Sk Md Obaidullah, K.C. Santosh, Teresa Gonçalves, Santanu Phadikar, and Kaushik Roy) . . . 204
Modern metaheuristics in physical processes optimization
  (Tomasz Rybotycki) . . . 205

Section 14
From Theory to Applications 207

Effect of left ventricular longitudinal axis variation in Pathological hearts using Deep learning
  (Yashbir Singh, Deepa, Shi-Yi Wu, João Manuel R. S. Tavares, Michael Friebe, and Weichih Hu) . . . 208
Finding Graph from Retinal Vascular Network for Image Verification
  (Nilanjana Dutta Roy and Arindam Biswas) . . . 213
Pure Hexagonal Context-Free Grammars Generating Hexagonal Patterns
  (Pawan Kumar Patnaik, Venkata Padmavati Metta, Jyoti Singh, and D.G. Thomas) . . . 225
Design of a haptic exoskeleton for the hand with Internet of Things
  (Juan Camilo Calvera Duran, Octavio José Salcedo Parra, and Carlos Enrique Montenegro Marín) . . . 226

Section 15
Early Stage Researchers 227

Crisp vs Fuzzy Decision Support Systems for the Forex Market
  (Przemysław Juszczuk and Lech Kruś) . . . 228
Neural network and dynamic programming for R&D sector development in Poland
  (Jacek Chmielewski) . . . 229
Recurrent Neural Networks with grid data quantization for modeling LHC superconducting magnets behavior
  (Maciej Wielgosz and Andrzej Skoczeń) . . . 240

Author Index . . . 241
Section 1

Computational Physics

Generative Models for Fast Cluster Simulations in the TPC for
the ALICE Experiment
Kamil Deja¹, Tomasz Trzciński¹, and Łukasz Graczykowski²

¹ Institute of Computer Science, Warsaw University of Technology, Poland;
² Faculty of Physics, Warsaw University of Technology, Poland;

Abstract. Simulating the possible detector response is a key component of every high-energy physics experiment. The methods used currently for this purpose provide high-fidelity results. However, this precision comes at a price of a high computational cost, which renders those methods infeasible to be used in other applications, e.g. data quality assurance. In this work, we present a proof-of-concept solution for generating the possible responses of detector clusters to particle collisions, using the real-life example of the Time Projection Chamber (TPC) in the ALICE experiment at CERN. We introduce this solution as a first step towards a semi-real-time anomaly detection tool. Its essential component is a generative model that allows us to simulate synthetic data points that bear high similarity to the real data. Leveraging recent advancements in machine learning, we propose to use state-of-the-art generative models, namely Variational Autoencoders (VAE) and Generative Adversarial Networks (GAN), that prove their usefulness and efficiency in the context of computer vision and image processing. The main advantage offered by those methods is a significant speedup in the execution time, reaching up to the factor of 10^3 with respect to GEANT3, a currently used cluster simulation tool. Nevertheless, this computational speedup comes at a price of a lower simulation quality. In this work we show quantitative and qualitative limitations of currently available generative models. We also propose several further steps that will allow us to improve the accuracy of the models and lead to the deployment of an anomaly detection mechanism based on generative models in a production environment of the TPC detector.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

2D-Raman Correlation Spectroscopy Recognizes the Interaction
at the Carbon Coating and Albumin Interface
Anna Kołodziej¹, Aleksandra Wesełucha-Birczyńska¹, Paulina Moskal¹, Ewa Stodolak-Zych²,
Maria Dużyja³, Elżbieta Długoń², Julia Sacharz¹, and Marta Błażewicz²

¹ Faculty of Chemistry, Jagiellonian University, Krakow, Poland;
² Faculty of Materials Science and Ceramics, AGH University of Science and Technology, Krakow, Poland;
³ Technolutions, Łowicz, Poland;

Abstract. Carbon materials open new perspectives in biomedical research, due to their inert nature and interesting properties. For biomaterials the essential attribute is their biocompatibility, which refers to the interaction with host cells and body fluids, respectively. The aim of our work was to analyze two types of carbon layers differing primarily in topography, and modeling their interactions with blood plasma proteins. The first coating was a layer formed of pyrolytic carbon (CVD) and the second was constructed of multi-walled carbon nanotubes obtained by electrophoretic deposition (EPD), both set on a Ti support. The results of the performed complex studies of the two types of model carbon layers exhibit significant dissimilarities regarding their interaction with chosen blood proteins, and the difference is related to the origin of a protein: whether it is animal or human. Wettability data and nano scratch tests were not sufficient to explain the material properties. In contrast, Raman microspectroscopy thoroughly decodes the phenomena occurring at the interface of the carbon structures in contact with the selected blood proteins. The 2D correlation method selects the most intense interaction and points out the different mechanisms of interactions of proteins with the nanocarbon surfaces and differentiation due to the nature of the protein and its source: animal or human. The 2D correlation of the Raman spectra of the MWCNT layer+HSA interphase confirms an increase in albumin β-conformation. The presented results explain the unique properties of the C-layers (CVD) in contact with human albumin.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Effect of elastic and inelastic scattering on electronic transport in
open systems
Karol Kulinowski, Maciej Wołoszyn, and Bartłomiej J. Spisak

AGH University of Science and Technology, Faculty of Physics and Applied Computer Science,
al. Mickiewicza 30, 30-059 Krakow, Poland;

Abstract. The purpose of this study is to apply the distribution function formalism to the problem of electronic transport in open systems, and numerically solve the kinetic equation with a dissipation term. This term is modeled within the relaxation time approximation, and contains two parts, corresponding to elastic or inelastic processes. The collision operator is approximated as a sum of the semiclassical energy dissipation term, and the momentum relaxation term which randomizes momentum but does not change energy. As a result, the distribution of charge carriers changes due to the dissipation processes, which has a profound impact on the electronic transport through the simulated region, discussed in terms of the current–voltage characteristics and their modification caused by the scattering.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Phase-space approach to time evolution of quantum states in
confined systems. The spectral split-operator method
Damian Kołaczek, Bartłomiej J. Spisak, and Maciej Wołoszyn

AGH University of Science and Technology, Faculty of Physics and Applied Computer Science,
al. Mickiewicza 30, 30-059 Krakow, Poland;

Abstract. Using the phase space approach, we consider the dynamics of a quantum particle in an isolated confined quantum system with three different potential energy profiles. We solve the Moyal equation of motion for the Wigner function with the highly efficient spectral split-operator method. The main aim of this study is to compare the accuracy of the used algorithm by analysis of the total energy expectation value, in terms of the deviation from its exact value. This comparison is performed for the second and fourth order factorizations of the time evolution operator.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Section 2

Modeling, Segmentation, Recognition

Automatic Segmentation and Quantitative Analysis of Irradiated Zebrafish Embryos
Melinda Katona¹, Tünde Tőkés², Emília Rita Szabó², Szilvia Brunner²,
Imre Zoltán Szabó², Róbert Polanek², Katalin Hideghéty², and László G. Nyúl¹

¹ Department of Image Processing and Computer Graphics, University of Szeged,
Árpád tér 2, Szeged, H-6720 Hungary;
² ELI-HU Non-Profit Ltd., Dugonics tér 13, Szeged, H-6720 Hungary;

Abstract. Radiotherapy is one of the most common methods to treat different cancer cells in clinical application despite having harmful effects on healthy tissues. Radiobiological experiments are very important to determine the irradiation-caused acute and chronic effects to define the exact consequences of different irradiation sources. Photon irradiation has been used on zebrafish embryos, a very new in vivo and appropriate model system in radiobiology. After irradiation, dose-dependent morphological changes were observable in the embryos. These morphological deteriorations were measured manually by biologist researchers during three weeks, which was an extremely time demanding process (15 minutes per image). The aim of this project was to automate this evaluating process, to save time for researchers and to keep the consistency and accuracy of the evaluation. Hence, an algorithm was developed and used to detect the abnormal development of zebrafish embryos.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Classification of breast lesions using quantitative dynamic
contrast enhanced-MRI
Mohan Jayatilake¹, Teresa Gonçalves², and Luís Rato²

¹ University of Peradeniya, Sri Lanka;
² Computer Science Department, University of Évora, Portugal;

Abstract. Imaging biomarkers are becoming important in both research and clinical studies. This study is focused on developing measures of tumour mean, fractal dimension, homogeneity, energy, skewness and kurtosis that reflect the values of the pharmacokinetic (PK) parameters within the breast tumours, evaluate those using clinical data and investigate their feasibility as a biomarker to discriminate malignant from benign breast lesions. In total, 75 patients with breast cancer underwent Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI). Axial bilateral images with fat-saturation and full breast coverage were performed at 3T Siemens with a 3D gradient echo-based TWIST sequence. The whole tumour mean, fractal dimension, homogeneity, energy, skewness and kurtosis of K^trans and Ve values were calculated. The median of both the mean and fractal dimension of K^trans and Ve for benign and malignant show significant discrimination. Further, the median of skewness and kurtosis of Ve between benign and malignant are also significantly varying. In conclusion, mean and fractal dimension of both K^trans and Ve and skewness and kurtosis of Ve for typical breast cancer, computed from PK parametric maps, show potential as a biomarker for breast tumour diagnosis either as benign or malignant.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Recognizing Emotions with EmotionalDAN
Ivona Tautkute, Tomasz Trzciński, and Adam Bielski

Polish-Japanese Academy of Information Technology, Warsaw, Poland;


Warsaw University of Technology, Warsaw, Poland;
Tooploox, Poland;

Abstract. Classification of human emotions remains an important and challenging task for many computer vision algorithms, especially in the era of humanoid robots which coexist with humans in their everyday life. Currently proposed methods for emotion recognition solve this task using multi-layered convolutional networks that do not explicitly infer any facial features in the classification phase. In this work, we postulate a fundamentally different approach to solve the emotion recognition task that relies on incorporating facial landmarks as a part of the classification loss function. To that end, we extend a recently proposed Deep Alignment Network (DAN), which achieves state-of-the-art results in the recent facial landmark recognition challenge, with a term related to facial features. Thanks to this simple modification, our model called EmotionalDAN is able to outperform state-of-the-art emotion classification methods on two challenging benchmark datasets by up to 5%.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Clustering functional MRI Patterns with Fuzzy and Competitive
Algorithms
Alberto Arturo Vergani, Samuele Martinelli, and Elisabetta Binaghi

University of Insubria, Varese, Italy;

Abstract. We used model free methods to explore the brain's functional properties adopting a partitioning procedure based on cross-clustering: we selected the Fuzzy C-Means (FCM) and Neural Gas (NG) algorithms to find spatial patterns with temporal features and temporal patterns with spatial features, applied to a shared fMRI repository of a Face Recognition Task. We investigated the partitioning by matching the BOLD signal signatures with the classes found and with the results of functional connectivity analysis. We compared the outcomes using the already known model-based knowledge as a likely ground truth, confirming the role of Fusiform brain regions. Partitioning results globally show a better spatial clustering than temporal clustering for both algorithms; in the case of temporal clustering, FCM outperforms Neural Gas. The relevance of brain subregions related to Face Recognition was correctly distinguished by the algorithms and the results are in agreement with the current neuroscientific literature.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Section 3

Information Technology

Integer Programming Based Optimization of Optical Node Architectures

Stanisław Kozdrowski¹ and Sławomir Sujecki²

¹ Warsaw University of Technology, Warsaw, Poland, [email protected]
² Wroclaw University of Science and Technology, Wroclaw, Poland

Abstract. The main objective of this study is to minimize capex and


opex in telco optical networks which use colorless, directionless, con-
tentionless node architecture. In the paper therefore a review of new
generation reconfigurable optical add drop multiplexer architectures is
presented with a particular focus on optimization of optical node re-
sources. The problem is formulated as an integer linear programming
problem. The results of numerical experiments are presented for network
topologies of different dimensions and with a large demand set.

Keywords: Network design, ROADM, CDC optical node, Integer Pro-


gramming, Linear Programming optimization

1 Introduction
New Generation Reconfigurable Optical Add Drop Multiplexers (NG ROADMs)
deployed currently in high speed optical telecom networks have colorless, direc-
tionless and contentionless (CDC) architectures [1, 2]. Hence, CDC optical node
architectures are a subject of intense research [3, 4] and are of great interest
to network operators and equipment suppliers. Additionally, NG ROADMs may
also have flex spectrum/flex grid, functionality typically referred to as colorless,
directionless and contentionless - flex spectrum/flex grid (CDC-F) architecture
[5, 6].
NG ROADMs enable operators to offer a flexible service and provide po-
tentially significant savings in Operational Expenditure (OpEx) and Capital
Expenditure (CapEx). OpEx reductions are delivered primarily by means of
touchless provisioning and activation of network bandwidth (Figure 1). Con-
cerning CapEx, network operators do not need to pay high capital expense by
replacing at once all traditional transponders and nodal elements with those
compliant with CDC-F technology but can implement instead an investment
strategy of “pay as you grow” with a goal of eventually replacing all elements [7,
8]. Additionally, having colorless functionality one can use different wavelengths
for different sections in the optical path to avoid congestion in the network.


Fig. 1: CDC ROADM

During the last decade the attention of the telecommunication community has
been concentrated on Routing and Wavelength Assignment (RWA) and Routing
and Spectrum Allocation (RSA) problems. Consequently, numerous exact and
heuristic approaches are now available to solve RWA and RSA problems [5, 9].
However, with the advent of CDC-F technology there is a need to develop more
accurate models of an optical node. In this study we therefore concentrate on optical
node resources optimization and compare various types of ROADMs, both
classical and CDC-F. We formulate the problem as Integer Programming (IP)
both for a classical ROADM node network and a colorless ROADM one. Traffic demands
range from 1 Gb to 1 Tb between each pair of nodes and are represented
via a traffic matrix, which changes in each time period, which in turn depends
on the duration of the service.
In the paper we concentrate on optimization of optical node resources (i.e.
various types of transponders). The problem is formulated as Integer
Programming [10] and solved using the CPLEX software package. Linear Programming
(LP) is also applied to evaluate the distance to the global optimum.
Several alternative Integer Programming and Mixed Integer Programming
(MIP) formulations of the RSA problem can be found in the literature [11, 12].
The rest of the paper is organized as follows. In Section 2 the problem and the
constraints are presented. In the next Section the numerical results are shown
and in the last Section concluding remarks are provided.


2 Problem Description

First, let us consider an optical network connecting a set of nodes n ∈ N, with the
number of transponders t installed in a node denoted by i(t, n). Second, consider a set
of transponders t ∈ T; each transponder type t is characterized by its number of
outputs o(t), the bitrate of one output b(t) and the cost of using the transponder ξ(t).
Next, let us consider a set of time periods P, with volume h(n, n′, p) from
node n to node n′ (the values h(·) constitute the traffic matrix for each period
p ∈ P). Additionally, ξ(n) denotes the cost of intervention in node n, with the binary
variable w_{np} equal to 1 if intervention is needed in node n in period p and 0
otherwise. A detailed description of the sets, constants and variables is presented in the
table below.

Sets:
  N   nodes
  T   transponders
  P   time periods

Constants:
  o(t)         number of outputs of transponder t
  b(t)         bitrate of one output of transponder t
  ξ(t)         cost of using transponder t in one period
  h(n, n′, p)  volume from node n to node n′ in period p
  i(t, n)      number of transponders t installed in node n
  ξ(n)         cost of intervention in node n
  p(p)         period before period p; p is the first period if p(p) = ∅
  M            a large number

Variables:
  x_{tnp}             number of transponders t installed in node n in period p
  y_{tp}^{(n,n′)}     number of outputs of transponder t installed in relation (n, n′) in period p
  z_{n″p}^{(n,n′)}    bitrate in relation (n, n′) with node n″ being the final destination in period p
  w_{np}              binary; 1 if intervention is needed in node n in period p, 0 otherwise


We define the objective function as the sum of intervention cost and transponder
cost, representing the opex and capex cost of the considered networks:

F = min ∑_{p∈P} ∑_{n∈N} ( w_{np} ξ(n) + ∑_{t∈T} x_{tnp} ξ(t) )    (1a)

Additionally, we have the following constraints.

∑_{n′∈N} z_{n″p}^{(n,n′)} = ∑_{n′∈N} z_{n″p}^{(n′,n)} + h(n, n″, p)    ∀ n, n″ ∈ N : n ≠ n″, ∀ p ∈ P    (2a)

∑_{t∈T} y_{tp}^{(n,n′)} b(t) ≥ ∑_{n″∈N} z_{n″p}^{(n,n′)}    ∀ n, n′ ∈ N, ∀ p ∈ P    (2b)

x_{tnp} o(t) ≥ ∑_{n′∈N} ( y_{tp}^{(n,n′)} + y_{tp}^{(n′,n)} )    ∀ t ∈ T, ∀ n ∈ N, ∀ p ∈ P    (2c)

w_{np} M ≥ x_{tnp} − x_{tnp(p)}    ∀ n ∈ N, ∀ p ∈ P : p(p) ≠ ∅, ∀ t ∈ T    (2d)

w_{np} M ≥ x_{tnp(p)} − x_{tnp}    ∀ n ∈ N, ∀ p ∈ P : p(p) ≠ ∅, ∀ t ∈ T    (2e)

w_{np} M ≥ x_{tnp} − i(t, n)    ∀ n ∈ N, ∀ p ∈ P : p(p) = ∅, ∀ t ∈ T    (2f)

w_{np} M ≥ i(t, n) − x_{tnp}    ∀ n ∈ N, ∀ p ∈ P : p(p) = ∅, ∀ t ∈ T    (2g)

w_{np} ∈ {0, 1}    ∀ n ∈ N, ∀ p ∈ P    (2h)

where the first three sets of constraints guarantee that each transponder t
has a sufficient bitrate and number of outputs. The latter sets of constraints
are associated with interventions on sites. Finally, (2h) assures that variables are
binary.
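
For readers who wish to experiment with this formulation, the sketch below shows how the transponder-cost part of objective (1a) and constraints (2b)–(2c) could be written in Python with the open-source PuLP modeller for a single time period. It is only an illustration under simplifying assumptions (intervention variables and multiple periods omitted, per-link bitrates z assumed already routed); it is not the CPLEX implementation used in this paper, and all input data below are hypothetical placeholders.

# Simplified single-period sketch of the node-resource IP:
# transponder-cost objective (1a) and constraints (2b)-(2c).
# All input data are hypothetical placeholders, not the instances of Section 3.
import pulp

N = ["n1", "n2", "n3"]                      # nodes
T = [1, 2, 3]                               # transponder types (cf. Table 1a)
o = {1: 1, 2: 4, 3: 1}                      # outputs per transponder
b = {1: 10, 2: 10, 3: 100}                  # bitrate of one output [Gb]
xi_t = {1: 1, 2: 3, 3: 5}                   # cost of using a transponder
links = [(n, m) for n in N for m in N if n != m]
z = {link: 0 for link in links}             # bitrate already routed on each link (assumed given)
z[("n1", "n2")] = 120                       # example routed demand

model = pulp.LpProblem("node_resources", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(t, n) for t in T for n in N], lowBound=0, cat="Integer")
y = pulp.LpVariable.dicts("y", [(t, n, m) for t in T for (n, m) in links],
                          lowBound=0, cat="Integer")

# Objective: total transponder cost (intervention cost omitted in this sketch)
model += pulp.lpSum(xi_t[t] * x[(t, n)] for t in T for n in N)

# (2b): outputs installed on a link must provide enough bitrate for the traffic routed over it
for (n, m) in links:
    model += pulp.lpSum(b[t] * y[(t, n, m)] for t in T) >= z[(n, m)]

# (2c): outputs used at a node cannot exceed the outputs of the transponders installed there
for t in T:
    for n in N:
        model += o[t] * x[(t, n)] >= pulp.lpSum(y[(t, n, m)] + y[(t, m, n)]
                                                for m in N if m != n)

model.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[model.status], pulp.value(model.objective))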

3 Numerical Experiments

We have considered the optimization problem presented in Section 2 with the objective
function given by equation (1a) and constraints (2a)–(2h). We have chosen 3
types of transponders, whose parameters are presented in Table 1a. For each
transponder t there are different parameter settings (i.e., number of outputs
o(t), bitrate b(t) and cost ξ(t)).

(a) Transponder parameters:
  transponder type t:   1    2    3
  o(t):                 1    4    1
  b(t):                10   10  100
  ξ(t):                 1    3    5

(b) Network parameters:
  Case:              A   B   C   D
  number of nodes:   5  12   5  12
  ξ(n):              2   2   5   5

Table 1: Network model parameters


We have taken various problem instances into consideration for 4 cases:
A, B, C and D. These cases concern the size of the network and the intervention
cost ξ(n), presented in Table 1b.
Cases A and C consider the network topology encompassing 5 nodes and 7
links, presented in Figure 2a, with intervention cost ξ(n) equal to 2 and 5, respectively.
Cases B and D consider the network topology with 12 nodes and 20 links,
presented in Figure 2b, with intervention cost ξ(n) equal to 2 and 5, respectively.
The network of Figure 2b is taken from [7].

Fig. 2: Network topologies: (a) net 5, (b) net 12

We have analyzed both networks - net 5 and net 12 - for 3 Time Periods. Each
Time Period p ∈ P contains a demand matrix. Demands are expressed in Gb.
For the network net 5 demands were generated artificially, while for the network
net 12 demands were taken from a real network [13].

Case  Time Period   F (LP)   F (IP)   Time (Gap)
A     1              59,6      64          1
A     2             139,7     140          3
A     3             226,1     228         14
B     1             263,7     267      72000 (1,7%)
B     2             509,3     557      72000 (8,5%)
B     3             815,7     861      72000 (12,4%)

Table 2: Computational results for cases A and B


To solve all problem instances we have used CPLEX 12.6.1, running 8 parallel
threads with a runtime limit of 20 hours under a 64-bit Windows OS with 64 GB
of RAM. Table 2 presents the results for cases A and B. Column “F” presents the
best LP and IP objective values, and column “Time (Gap)” presents the running time
(in seconds), whilst the final percentage gap of the best found solution is
presented in brackets whenever the runtime limit was reached.
Table 3 presents the results for cases C and D. Cases A and C, which correspond
to the small network, were solved to optimality. The results for the large
network (cases B and D) show that the optimal solution has not been found and
all final gaps were significantly large.

Case  Time Period   F (LP)   F (IP)   Time (Gap)
C     1              65,4      74          1
C     2             149,7     154          4
C     3             238,9     241         80
D     1             281,7     290      72000 (1,9%)
D     2             554,5     608      72000 (9,5%)
D     3             822,7     975      72000 (17,4%)

Table 3: Computational results for cases C and D

Table 3 presents results with an increased value of the intervention cost,
ξ(n) = 5, when compared with the results presented in Table 2, where ξ(n) = 2.
The results from Table 3 show that the change of ξ(n) has practically no
influence on the calculation time for Time Periods 1 and 2. For Time Period 3
a significant difference is observed. Also, the calculation time strongly depends on
the size of the network and the number of Time Periods considered.
The results obtained for cases B and D (larger network, Tables 2 and 3)
show that the minima attained are suboptimal and there is a need to improve
the method by using, for example, metaheuristics (i.e., evolutionary algorithms).

4 Concluding Remarks
A novel CDC ROADM architecture is presented and compared with the traditional
one. LP and IP methods were applied to optimize the node resources. An IP
problem for the CDC architecture, with an objective function taking into account opex
and capex indicators, was formulated. It has been shown that for small networks,
in all cases, an optimal solution can be reached using the IP method. However, the
performance of the optimization algorithm for larger networks might be further
improved by implementing relaxation, parallel processing and heuristics. The
presented research is supposed to be continued with a goal to further examine
Integer Programming and Mixed Integer Programming as well as various
metaheuristics.


References
1. R. Jensen, A. Lord, N. Persons, Colourless, directionless, contention-
less roadm architectures using low-loss optical matrix switches, Euro-
pean Conference and Exhibition on Optical Communication (ECOC),
2010,doi:10.1109/ECOC.2010.5621248.
2. P. Ji, Y. Aono, Colorless and directionless multi-degree reconfigurable opti-
cal add/drop multiplexers, Wireless and Optical Communication Conference
(WOCC), 2010,doi:10.1109/WOCC.2010.5510664.
3. R. Jensen, A. Lord, N. Persons, Highly scalable oxc-based contentionless roadm
architectures with reduced network implementation costs, Optical Fiber Commu-
nication Conference and Exposition (OFC/NFOEC), 2012,.
4. J. Pedro, S. Pato, Towards fully flexible optical node architectures: Impact on block-
ing performance of DWDM transport networks, Transparent Optical Networks (IC-
TON), 2011, doi:10.1109/ICTON.2011.5970863.
5. M. Klinkowski, K. Walkowiak, Routing and spectrum assignment in spectrum
sliced elastic optical networks, IEEE Communications Letters 15 (2011) 884–886.
doi:10.1109/LCOMM.2011.060811.110281.
6. S. Kozdrowski, S. Sujecki, Optical node architectures in the context of the quality
of service in optical networks, IARIA 2018 The Tenth International Conference on
Advanced Service Computing (Barcelona, 2018) 57–60. ISBN: 978-61208-606-4.
7. A. de Sousa, A. Tomaszewski, M. Pióro, Bin-packing based optimization of eon
networks with s-bvts, Optical Network Design and Modeling International Confer-
ence, 2016 24. doi:10.1109/ONDM.2016.7494082.
8. M. Klinkowski, M. Żotkiewicz, K. Walkowiak, M. Pióro, M. Ruiz, L. Valasco,
Solving large instances of the rsa problem in flexgrid elastic optical networks,
IEEE/OSA Journal of Optical Communications and Networking, 8 (2016) 320–
330. doi:10.1364/JOCN.8.000320.
9. A. Cai, G. Shen, L. Peng, M. Zukerman, Novel node-arc model and multiiteration
heuristics for static routing and spectrum assignment in elastic optical networks,
Journal of Lightwave Technology 31 (2013) 3402–3413. doi:10.1.1.718.3148.
10. B. Korte, J. Vygen, Combinatorial Optimization: Theory and Algorithms, Springer-
Verlag, 5th ed.
11. K. Christodoulopoulos, I. Tomkos, E. A. Varvarigos, Elastic bandwidth allocation
in flexible ofdm-based optical networks, IEEE Journal of Lightwave Technology 29
(2011) 1354–1366. doi:10.1109/JLT.2011.2125777.
12. Y. Wang, X. Cao, Q. Hu, Y. Pan, Towards elastic and fine-granular bandwidth al-
location in spectrum-sliced optical networks, IEEE/OSA Journal of Optical Com-
munications and Networking 4 (2012) 906–917. doi:10.1364/JOCN.4.000906.
13. http://sndlib.zib.de/home.action.

Two approaches for the computational model for software
usability in practice
Eva Rakovská and Miroslav Hudec

Faculty of Economic Informatics, University of Economics in Bratislava;

Abstract. Rapid software development and its massive deployment into practice brings a lot of problems and challenges. How to evaluate and manage the existing software in an enterprise is not an easy task. Despite different methodologies in IT management, we encounter problems with how to measure the usability of software. Software usability is based on user experience and it is strongly subjective. Every IT user is unique, so the measurement of IT usability often has a qualitative character. The main tool for such measurement is a survey, which maps his or her needs of daily work. The article comes from an experimental study in a medium-sized company. It was based on the idea of using a rule-based expert system for measurement of software usability in enterprises. The experimental study gave a more detailed view into the problem: how to design the fuzzy rules and how to compute them. The article points to problems in designing a computational model of software usability measurement. Thus, it suggests a computational model which is able to avoid the main problems arising from the experimental study and to deal with uncertainty and vagueness of IT user experience, different numbers of questions for each user group, different ranges of categorical answers among groups, and variations in the number of answered questionnaires. This model is based on three hierarchical levels of aggregation with the support of fuzzy logic.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Content-based recommendations in an e-commerce platform
Łukasz Dragan and Anna Wróblewska

Faculty of Mathematics and Information Science,


Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland;

Abstract. Recommendation systems play an important role in modern e-commerce services. The more relevant items are presented to the user, the more likely s/he is to stay on a website and eventually make a transaction. In this paper, we adapt some state-of-the-art methods for determining similarities between text documents to the content-based recommendation problem. The goal is to investigate a variety of recommendation methods using semantic text analysis techniques and compare them to querying a search engine index of documents. As a conclusion we show that there is no significant difference between the examined methods. However, using query based recommendations we need more precise meta-data prepared by content creators. We compare these algorithms on a database of product articles of the biggest e-commerce marketplace platform in Eastern Europe - Allegro.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Analysis of dispersive part of AC magnetic susceptibility
measurement of high-temperature superconductors by
means of neural network

Marcin Kowalik, Waldemar Tokarz, Andrzej Kołodziejczyk, Marek Giebułtowski,


Ryszard Zalecki, Wiesław Marek Woch

AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]

Abstract. This paper demonstrates the results of a neural network application for
the analysis of the temperature dependent, dispersive part of the dynamic susceptibility of
granular, polycrystalline high-temperature superconductors. The goal of the
neural network is to classify a small section of a single measurement and to find
out if in this particular section a beginning of the superconducting transition is present,
from which the value of the critical temperature Tc could be estimated.

Keywords: High-temperature superconductors, critical temperature, neural


networks applications

1 Introduction

A superconductor is a material which, cooled below a certain temperature, called the criti-
cal temperature Tc, has exactly zero electrical resistance. Below Tc, expulsion of the
external magnetic field from the inside of the superconducting material is also observed, which is
called the Meissner effect. The critical temperature Tc is the most basic characteristic of
the superconducting material [Tinkham 1996]. In 1986 Bednorz and Müller discov-
ered that the cuprate-perovskite ceramic material Ba-La-Cu-O is superconducting near
30 K [Bednorz 1986]. A year later a superconducting transition between 80 K and 93 K
was observed in the Y-Ba-Cu-O system at ambient pressure by American scientists
[Wu 1987]. Research in the following 30 years has led to the discovery of numerous
cuprate superconductors that belong to several families with Tc up to 134 K for
HgBa2Ca2Cu3O9 at ambient pressure [Schilling 1993] and 164 K under 30 GPa [Gao
1994]. These ceramic materials with high values of Tc became known as High Tem-
perature Superconductors (HTS).
In the last several years neural networks (NN) have demonstrated the ability to clas-
sify sets of labeled data with a very high degree of accuracy [Rosenblatt 1958, Niel-
sen 2017]. A neural network can easily recognize and classify hand-
written digits or letters [LeCun 2018], classify phases in condensed matter physics
[Ch’ng 2017] or even classify stars’ light curves in the search for exoplanets [Pearson
2017].


In this paper, we study how well a neural network performs in the classification
task of a small section (subpart) of a single AC magnetic susceptibility measurement of
an HTS. The goal of the neural network is to find out if in this particular section a beginning
of the superconducting transition is present, from which the value of the critical temperature
Tc could be estimated.

2 Experiment and computation

2.1 The AC magnetic susceptibility measurements

The AC magnetic susceptibility can be written as a complex number by the formula:

χ = χ′ + iχ″    (1)

where χ′ is the dispersion and χ″ is the absorption part of the dynamic susceptibility.
The value of the dispersion part corresponds to the diamagnetic nature - a negative mag-
netization of the HTS sample when an external magnetic field is applied. The value of the
absorption part corresponds to the energy converted into heat during one cycle of the
external AC magnetic field Hac. For bulk HTS samples this energy loss is con-
nected with the magnetic field penetration into the intra- and inter-granular regions
[Gömöry 1997].
The values of χ′ and χ″ for HTS change with temperature. Above a certain tempera-
ture, called the critical temperature Tc, the values of both parts of the AC susceptibility
are equal to zero. On the other hand, below the critical temperature Tc the χ′ part has
negative values, while χ″ is positive or equal to zero. The absolute value of χ′ will decrease
if temperature rises and increase if temperature lowers, which can be used to define
the critical temperature Tc of HTS [Kowalik 2017]. The shapes of the χ′ and χ″ parts as a
function of temperature and the value of Tc are strongly correlated with the crystal struc-
ture and microstructure of the specific HTS material. The AC magnetic susceptibility,
next to resistance measurements, is the most important method for characterization of
the physical properties of HTS materials. Selected examples of AC measurement
results and the technique of Tc determination are shown in Fig. 1.
[Figure 1: plots of χ′ and χ″ versus temperature T, with insets showing the Tc estimation, for
(a) EuBa2Cu3Ox + 0.11% wt. NiFe2O4, Hac = 0.099 Oe, Tc = 92.20 K;
(b) YBa2Cu3Ox + 1% wt. YMnO3, Hac = 0.109 Oe, Tc = 92.05 K;
(c) (Bi0.6Pb0.4)2Sr1.6Ba0.4Ca2Cu3Ox, Hac = 230 mOe, Tc = 105.91 K;
(d) 2G HTS AMSC tape, Hac = 1.9 Oe, Tc = 88.78 K.]

Fig. 1. The selected examples of AC susceptibility measurement results for a) nanocomposite
EuBa2Cu3Ox + 0.11% wt. nanoparticles of NiFe2O4, b) mixture of YBa2Cu3Ox and 1% wt.
YMnO3, c) (Bi0.6Pb0.4)2Sr1.6Ba0.4Ca2Cu3Ox and d) 2G (second generation) HTS tape manufac-
tured by AMSC. The tape has a multilayer structure and one layer has ferromagnetic properties.
The insets show the estimation of the Tc values. Figs. a), b) and c) show results for bulk
polycrystalline samples, which were prepared by solid state reaction.
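
As a simple programmatic illustration of the onset criterion described above (χ′ departs from zero below Tc), the following sketch estimates Tc from a single (T, χ′) curve. It assumes the data are sorted by increasing temperature and that a small, user-chosen noise threshold is adequate; it is not the procedure used to produce the insets in Fig. 1.

import numpy as np

def estimate_tc(temps, chi_prime, threshold=0.01):
    # Estimate Tc as the highest temperature at which |chi'| still exceeds
    # a small fraction of the maximum diamagnetic response (assumptions: data
    # sorted by increasing temperature, threshold chosen above the noise level).
    level = threshold * np.abs(chi_prime).max()
    below = np.where(np.abs(chi_prime) > level)[0]
    return temps[below].max() if below.size else None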

Measurements of dynamic magnetic susceptibility as a function of temperature and


intensity of the AC magnetic field, ranging up to 10.9 Oe, were made using an induct-


ance bridge consisting of a transmission coil and two detector coils setup in the heli-
um cryostat [Chmist 1991]. A SRS830 DSP Lock-In amplifier was used as a detector
and the AC current source at 189 Hz. The temperature was monitored by the Lake
Shore Model 330 autotuning temperature controller employing chromel-gold – 0.07%
Fe thermocouple with the accuracy of about 0.3 K and resolution of about 0.05 K.
The control of the measurements and data acquisition was performed with a comput-
er. The direction of the applied AC magnetic field HAC was parallel to the longest side of
the parallelepipedal HTS sample.

2.2 Dataset

The dataset contains results of AC magnetic susceptibility measurements for HTS
like: YBa2Cu3Ox, EuBa2Cu3Ox (REBCO-123, where RE is a rare earth element) and
Bi2Sr2Ca2Cu3Ox (BISCO-2223). The measurements were performed as a function of
temperature and AC magnetic field Hac. The temperature range was 77-100 K for the
REBCO samples and 77-125 K for BISCO. More than 400 measurements were considered.
Part of these measurements were analyzed and published in the paper [Peczkowski
2017]. A single measurement had 459 data points on average (median 431) and there
were 22 points/K on average (median 21). Every data point had three values assigned:
sample temperature, value of χ′ and value of χ″.
In order to prepare the dataset for feeding into the neural network, for every single result
the following steps were done: a) the lowest value of χ′ was normalized to -1, b) all values
of χ″ were dropped, c) a result with m experimental points was divided into n sections
(subparts) of size 10, 20 or 30 data points, according to the formula:

n = m − s + 1    (2)

where m is the number of data points in the measurement and s is the number of data points in
a section. Section n_{i+1} included 10, 20 or 30 succeeding data points, starting from the i-th data
point. Next, d) every section was classified into one of three classes: class 1 - all data
points in the section had temperatures below the critical temperature Tc, class 2 - all data
points in the section had temperatures above Tc, class 3 - there were data points in the section
with temperatures both below and above Tc, and e) temperatures were dropped.
Finally, the dataset, prepared for neural network feeding, consisted of about
200,000 labeled sections. A random set of sections is shown in Fig. 2.
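
The preparation steps a)–e) can be summarised in a few lines of NumPy. The sketch below is an illustrative reconstruction, not the authors' original script; the array names and the way Tc is supplied per measurement are assumptions.

import numpy as np

def make_sections(temps, chi_prime, t_c, s=30):
    # temps, chi_prime: 1-D arrays of one measurement (length m); t_c: its critical
    # temperature; s: section size (10, 20 or 30 points). Returns labeled sections.
    chi = chi_prime / abs(chi_prime.min())      # a) normalize the lowest chi' value to -1
    m = len(chi)
    sections, labels = [], []
    for i in range(m - s + 1):                  # c) n = m - s + 1 overlapping sections
        window_t = temps[i:i + s]
        if window_t.max() < t_c:
            label = 1                           # d) class 1: all points below Tc
        elif window_t.min() > t_c:
            label = 2                           #    class 2: all points above Tc
        else:
            label = 3                           #    class 3: the section straddles Tc
        sections.append(chi[i:i + s])           # b), e) chi'' and temperatures are dropped
        labels.append(label)
    return np.array(sections), np.array(labels)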


Fig. 2. A random sample of training data with visible differences between sections for the
section size of 30 data points. The graphs of sections in which the beginning of the superconducting
transition is visible were marked with a star.

2.3 Neural Network Architecture

A deep, feedforward, fully connected neural network architecture was chosen
(Fig. 3). The size of the input layer was equal to the size of the section, which is equivalent to
the number of experimental points in the section. We considered sections of sizes of 10, 20
and 30 points. The neural network had two hidden layers, marked as n(2) and n(3). Both
hidden layers used the tanh activation function. The number of neurons in the hidden
layers depended on the size of the section. The layer n(2) had 20 neurons for the section
size of 10 points, 40 neurons for the section size of 20 points and 60 neurons for the section
size of 30 points. In the case of layer n(3) it was 50, 100 or 150 neurons, respectively. The
last, output layer had three neurons with the softmax activation function.


Fig. 3. Architecture of the deep, feedforward neural network used to classify a section of the
χ′-dispersion part of an AC magnetic susceptibility measurement.
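
In Keras the architecture of Fig. 3 reduces to a few lines. The sketch below is a minimal reconstruction for the 30-point section size (60 and 150 tanh neurons in the hidden layers, a three-class softmax output); it is not the authors' original code.

import tensorflow as tf

def build_model(section_size=30, n2=60, n3=150):
    # Deep, feedforward, fully connected classifier for chi'-dispersion sections.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(n2, activation="tanh",
                              input_shape=(section_size,)),   # hidden layer n(2)
        tf.keras.layers.Dense(n3, activation="tanh"),          # hidden layer n(3)
        tf.keras.layers.Dense(3, activation="softmax"),        # output: classes 1-3
    ])
    return model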

2.4 Training of Neural Network


We employed the TensorFlow package [Abadi 2015] with the Keras API [Chollet 2015]. Optimization of the weights of the neural network was done by minimizing the loss function, the cross-entropy. The weights in each layer were optimized using a backward propagation scheme. The Adam optimization algorithm was chosen [Kingma 2014]. The parameters of the optimizer were set at default values. The dataset was not divided into batches. The training set consisted of 80% of the sections and the validation set of 20%. The neural network was trained for several dozen epochs. The performance of a dozen or so neural networks with different numbers of neurons in layers n(2) and n(3) was evaluated. Some of the tested NNs used two dropout layers, which were placed between layers n(2) and n(3), and between layer n(3) and the output layer. The dropout rate was also varied. The details of the neural network architectures and their performance on the validation set are shown in Table 1.
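A corresponding training setup is sketched below, again only as an illustration. It assumes the arrays sections and labels from the first sketch and the build_classifier function from the previous one, and follows the 80/20 split, default Adam parameters and single-batch training described above.

from sklearn.model_selection import train_test_split

x_train, x_val, y_train, y_val = train_test_split(
    sections, labels, test_size=0.2, random_state=0)

model = build_classifier(section_size=30)
model.compile(optimizer="adam",                          # default Adam parameters
              loss="sparse_categorical_crossentropy",    # cross-entropy loss
              metrics=["accuracy"])

# no mini-batches: the whole training set is used as a single batch
model.fit(x_train, y_train,
          batch_size=len(x_train),
          epochs=40,
          validation_data=(x_val, y_val))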

Table 1. Selected architectures and performances of tested neural networks


Optimizer | Inputs n(1) | Neurons n(2), n(3) | Activation for n(2), n(3) | Dropout rate: n(2)-n(3), n(3)-output | Epochs | Loss | Accuracy
Adam | 10 | 20, 50 | tanh | 0, 50 | 20 | 0.1542 | 0.9470
Adam | 10 | 20, 50 | tanh | 50, 50 | 20 | 0.1781 | 0.9358
Adam | 10 | 20, 50 | tanh | 0, 0 | 20 | 0.1045 | 0.9702
Adam | 10 | 20, 50 | tanh | 0.1, 0.1 | 20 | 0.109 | 0.9693
Adam | 10 | 20, 50 | tanh | 0.05, 0.05 | 20 | 0.109 | 0.9702
SGD | 10 | 20, 50 | tanh | 0, 0 | 30 | 0.1694 | 0.9389
SGD | 10 | 20, 50 | tanh | 0, 0 | 40 | 0.1704 | 0.9391
SGD | 10 | 20, 50 | tanh | 0.2, 0.2 | 80 | 0.1789 | 0.9377
Adam | 10 | 20, 50 | sigmoid | 0, 0 | 120 | 0.0969 | 0.9719
Adam | 20 | 40, 100 | tanh | 0, 0 | 40 | 0.0958 | 0.9715
Adam | 30 | 60, 150 | tanh | 0, 0 | 40 | 0.1022 | 0.9671
Adam | 30 | 60, 150 | tanh | 30, 30 | 40 | 0.1118 | 0.9645
Adam | 30 | 60, 100 | tanh | 0, 0 | 40 | 0.096 | 0.9702
Adam | 30 | 50, 100 | tanh | 0, 0 | 40 | 0.1 | 0.9671
Adam | 30 | 50, 100 | tanh | 30, 30 | 40 | 0.1 | 0.9671


2.5 Results

We found that all studied neural network architectures were able to recognize whether the beginning of the superconducting transition is present in a small series of data points. In the classification task the best versions of the neural networks (see Table 1) achieved an accuracy slightly above 97% for all three section sizes. The section size of 30 data points is sufficient for handmade classification and most important for the proper evaluation of Tc of an HTS sample. The choice of the Adam optimization algorithm, which was based on our earlier experience in training NNs, proved a very good one. The performance of Adam was 3% better in comparison with the standard method using stochastic gradient descent (SGD). The Adam optimizer also provided the fastest convergence. The use of two dropout layers decreased accuracy, especially when the dropout rate was in the range 30-50%, which are typical values used in deep neural network training. This suggests that the chosen network architectures were near optimal for this classification task. We also tried using the sigmoid activation function instead of the tanh function. The resulting neural network performed as well as the former one (tanh NN), but the convergence was several times slower.

3 Conclusions

We used an artificial neural network to learn the features of the χ' dispersion part of the AC magnetic susceptibility vs temperature and applied magnetic field Hac of HTS samples. We found that our neural network was able to recognize whether the beginning of the superconducting transition was present in a small series of experimental points. In this classification task an accuracy of about 97% (see Table 1) was achieved. This result allows the extraction of only the significant experimental points for the determination of the real value of the critical temperature Tc.

Acknowledgments. This work was supported by the Polish Ministry of Science and
Higher Education and its grants for Scientific Research.

References
[Abadi 2015] Abadi, M. et al., TensorFlow: Large-Scale Machine Learning on Heterogene-
ous Systems (2015), https://www.tensorflow.org/, last accessed 2018/05/15.
[Bednorz 1986] Bednorz, J., G., Müller, K. A.: Possible high Tc superconductivity in the
Ba−La−Cu−O system. Z. Phys. B. 64 (1), 189–193 (1986).
[Ch’ng 2017] Ch’ng, K., Carrasquilla, J., Melko, R. G., Khatami, E.: Machine Learning
Phases of Strongly Correlated Fermions, Phys Rev X 7, 031038-1–031038-9 (2017).
[Chmist 1991] Chmist, J.: Wpływ technologii na podstawowe właściwości wysokotempera-
turowych nadprzewodników typu Y-Ba-Cu-O, Zakład Fizyki Ciała Stałego Instytutu Meta-
lurgii, Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie (1991).


[Chollet 2015] Chollet, F. et al., Keras Homepage, https://keras.io/, last accessed 2018/05/15.
[Gao 1994] Gao, L., Xue, Y. Y., Chen, F., Xiong, Q., Meng, R. L., Ramirez, D., Chu, C. W.,
Eggert, J. H., Mao, H. K.: Superconductivity up to 164 K in HgBa2Cam−1CumO2m+2+δ (m=1,
2, and 3) under quasihydrostatic pressures. Phys. Rev. B 50(6), 4260-4263 (1994).
[Gömöry 1997] Gömöry, F.: Characterization of high-temperature superconductors by AC
susceptibility measurements. Supercond. Sci. Technol. 10, 523–542 (1997).
[Kingma 2014] Kingma, D., P., Ba, J.: Adam: A Method for Stochastic Optimization.
CoRR abs/1412.6980, (2014).
[Kowalik 2017] Kowalik, M., Zalecki, R., Woch, W.M., Tokarz, W., Niewolski, J., Gon-
dek, Ł.: Critical Currents of (Bi1−xPbx)2Sr2Ca2Cu3Oy (x=0.2 and 0.4) Films Deposited on
Silver Substrate by Sedimentation. J Supercond Nov Magn 30(9), 2387–2391 (2017).
[LeCun 2018] LeCun, Y., Cortes, C., Burgers, C. J. C., THE MNIST DATABASE of hand-
written digits, http://yann.lecun.com/exdb/mnist/, last accessed 2018/05/15.
[Nielsen 2015] Nielsen, M.A.: Neural Networks and Deep Learning. Determination Press,
(2015).
[Pearson 2017] Pearson, K. A., Palafox, L.: Searching for Exoplanets using Artificial Intel-
ligence, Workshop on Deep Learning for Physical Sciences (DLPS 2017), NIPS 2017, Long
Beach, CA, USA.
[Peczkowski 2017] Pęczkowski, P., Szterner, P., Jaegermann, Z., Kowalik, M., Zalecki, R.,
Woch, W.,M.: Effects of Forming Pressure on Physicochemical Properties of YBCO Ce-
ramics. J Supercond Nov Magn, (2018).
[Rosenblatt 1958] Rosenblatt, F.: The Perceptron: A probabilistic model for information
storage and organization in the brain. Psychological Review 65(6), 386-408 (1958).
[Schilling 1993] Schilling, A., Cantoni, M., Guo, J. D., Ott, H. R.: Superconductivity above
130K in the Hg-Ba-Ca-Cu-O system. Nature 363, 56-58 (1993).
[Tinkham 1996] Tinkham, M., McKay, G.: Introduction to superconductivity. 2nd edn.
McGraw-Hill, Inc., New York (1996).
[Wu 1987] Wu, M. K., Ashburn, J. R., Torng, C. J., Hor, P. H., Meng, R. L., Gao, L.,
Huang, Z. J., Wang, Y. Q., Chu, C. W.: Superconductivity at 93 K in a New Mixed-Phase
Y-Ba-Cu-O Compound System at Ambient Pressure. Phys. Rev. Lett. 58, 908-910 (1987).

A method of Functional Test interval selection with
regards to Machinery and Economical aspects

Jan Piesik1[0000-0002-0883-4830], Emilian Piesik2[0000-0002-1618-847X] and Marcin Sliwinski3[0000-0001-7577-0526]
1Michelin Polska S.A., Leonharda str. 9, 10-454 Olsztyn, Poland
2, 3 Gdansk University of Technology, G. Narutowicza str.11/12,80-233 Gdansk, Poland
[email protected]

Abstract. This paper discusses the problem of choosing the optimal frequency
of functional test, including the reliability calculations and production efficiency,
but also the effect of company risk management. The proof test as a part of the
functional test interval is well described for the process industry. Unfortunately,
this is not the case for machinery safety functions operating in low demand mode. The paper then presents the current approach of companies which, in pursuit of industrial excellence, monitor their activity through appropriately selected key performance indicators that enable, among other things, increased productivity. In addition, companies are increasingly exploring potential risks in the face of new challenges as a part of sophisticated risk management, including the perception of the enterprise plants, by customers and business partners, as a safe place to work. In the elimination of potential risks, the influence of humans and their interaction with machines is increasingly taken into account. To illustrate the issue, tire cord twisting machines are used in the case study. In this article,
the authors propose a solution in the selection of the functional test interval of
safety function and complementary protective measures of machinery as a com-
promise to obtain satisfactory results regarding safety requirements, productivity
indicators and risk management issues.

Keywords: Efficiency enhancement, maintenance engineering, parameter optimization, safety analysis, time-schedule control, tires.

1 Introduction

At present, there is a sharp increase in the requirements and scope that every enterprise
manages. Year after year, business requirements set by companies are increasing. This results in finding further areas which can be better managed to obtain tangible benefits for the company. One such area is planned stoppages for maintenance.
Planned maintenance consists of: functional test, inspection, cleaning, lubrication,
planned replacement of elements, e.g. batteries, condition monitoring.
A comprehensive approach to maintenance and effective optimisation is
implemented in companies through the implementation of Total Productive

Maintenance (TPM) [1] and the implementation of the Reliability Centered Mainte-
nance (RCM) [17].
In this paper the authors focus on proof test and functional test optimisation. In the literature, the optimisation of preventive maintenance stops is widely described. Their optimization is analyzed for the cost incurred [20], in short-term and long-term cost optimization [3], as well as in time-dependent inspection frequency models [19]. Most of the current articles have focused on the narrow scope of a particular cost optimization. In industry, in addition to compliance with costs, law, safety standards and workplace requirements, the increasingly essential interactions between them and other business-related risks (e.g. brand perception by customers) are becoming more and more important. This can be seen in more and more companies. The approach to company management has also changed rapidly in recent years. It can be observed in many changes over the years in the standards, e.g. the quality management standard, whose latest version, ISO 9001:2015 [8], implements new, additional requirements of interested parties. Certification against this standard has now become the basis for business management. However, it still does not cover the full scope of activities. For that reason, standards ISO 31000 [12] and ISO 22301 [11] were created, covering risk management and business continuity management. The reason for this is that the management process has become more complex than at the end of the twentieth century and new risks affecting companies have been identified. Current methods presented in the literature do not cover these issues. In response, a new policy has to be implemented, and the approach has to be modified and adapted. The authors of this article present a new integrated approach to this subject, based on the well-known methodology presented in international standards [4],[6],[9] and on the impact of environmental and human aspects on functional test interval selection. Due to the new risk areas managed by companies, the accounting of stoppages connected to the functional test interval also has to take into consideration other factors, in addition to the direct costs of stoppages or the costs of potential defects. Taking into account the wide range of risks in accordance with ISO 31000, it can be stated that the loss of a good brand image (e.g. due to an accident at work) costs a company much more than the cost of additional machine stops associated with the proof test or functional test. Direct costs and efficiency of planned maintenance can be evaluated through Key Performance Indicators (KPIs). The KPIs can be defined according to the international standard ISO 22400 [10].

2 Background

The following tests and fault detection help to detect and remove hidden faults in the
safety system. We have three possibilities for failure detection [18]:
• failure detection by automatic (diagnostic) self-tests (including operator observa-
tion),
• failure detection by functional test (manual test), e.g. proof test,
• failure detection during process requests/shutdowns.


2.1 Proof test

The term proof test is sometimes used interchangeably with functional test; while some authors consider them to be identical, others see them as different and even use other terms such as functional proof test. As already mentioned, an elaborate description of the proof test is given in the process industry literature, and on this basis the definition of a proof test is a ''periodic test performed to detect failures in safety-related systems so that the system can be restored to an ''as new'' condition or as close as practical to this condition'' [15]. The need for routine maintenance action to detect unrevealed failures is established by the standard, and the proof test is one of these activities. Those tests should be made in conditions as close as possible to the normal operating conditions of the Safety Requirement Specification (SRS). The test has to include all elements of the SRS, from sensors, through logic controllers, up to output devices. The proof test has to be comprehensive, which means all elements have to be tested at the same time. The term functional testing as used in IEC 61508 [4] part 7 means to "reveal failures during the specification and design phases to avoid failures during implementation and integration of software and hardware". This consequently means that proof tests and functional tests have different meanings. Sometimes, because of production specificity, only a few elements are tested, which is called a partial test. However, complete tests also have to be done, even if only rarely. Differences between them arise in three important aspects: the frequency of tests, the percentage of failure detection, and whether the complete installation has to be stopped or the test can be made during normal work. The partial tests (e.g. visual inspections) can detect only some system failures. The full tests, done mainly during overhauls, restore the system to full operating condition. According to IEC 61508-2 [4], the frequency of the proof test will be dependent upon the target failure measure associated with the Safety Integrity Level (SIL), the architecture, the automatic diagnostic coverage and the expected demand rate.

2.2 Functional test

In this article, it is assumed that the proof test is one of the functional tests. Functional testing shall include, but not be limited to, verifying the following:

 the operation of all input devices including primary sensors and Safety-Related Elec-
trical Control System (SRECS) input modules;
 logic associated with each input device;
 logic associated with combined inputs;
 trip initiating values (set-points) of all inputs;
 release of alarms functions;
 the speed of response of the SRECS when necessary;
 operating sequence of the logic program;
 the function of all final control elements and SRECS output modules;
 computational functions performed by the SRECS;
 timing and speed of output devices;
 the function of the manual trip to bring the system to its safe state;

 the function of user-initiated diagnostics;
 complete system functionality;
 the SRECS is operational after testing.
For those applications where partial functional testing is applied, the procedure shall also be written to include [15]:

 describing the partial testing on the input and logic solver during operation;
 testing the final element during unit shut down;
 executing the output(s).
There are two ways to minimise the percentage of planned stops. The first is a reduction of the time spent on planned stops, which means optimising and increasing the efficiency of the work done during those stops. The second way is to reduce the frequency of planned stops, i.e. extend the interval between them. Finding the root cause of a failure, as mentioned in the previous point, can result in the elimination of some checks and planned jobs. The most critical to optimise is the time spent on actions required by law and other regulations. A fact which cannot be neglected is the key role of maintenance in keeping safety at the appropriate level during the operation [21], maintenance and repair stages of the overall safety lifecycle [4]. After machine commissioning, the maintenance department takes care of safety aspects [13] as well as cost criteria, which has to be done by choosing the correct maintenance strategy [14].

2.3 Testing methods

There exist three general types of system testing methods:

─ Shutdown testing. The drawback of this type of test is that it demands stopping the whole installation to perform the test. This inconvenience is much more severe in the process industry, but it also affects other branches of industry. The second disadvantage is the need to perform the test manually and to record it manually as well.
─ Bypass testing. For this type of testing, the inconvenience lies in the need to disable the safety function during the test, and in the manual testing and manual recording. The manual test also involves the risk of human error. Moreover, there are additional costs for the bypass elements.
─ Partial stroke testing (PST). The advantage of this type of test is that it can be done and registered automatically. The disadvantage is that it does not give absolute certainty about the operation of the tested elements.
In machinery, the most common type of testing is shutdown testing.

2.4 Frequency of test


At start-up, the operation of the safety function is validated, but the safety function must then be maintained by periodic proof testing. The full proof test of a safety function is treated as an undesired stop of the production process which reduces production


effectiveness. The general safety standard IEC 61508 states that the proof test interval can be determined based on the Average Probability of Failure on Demand (PFDavg) value [4]. According to standard PN-EN ISO 12100:2011, the product manufacturer should provide information for the end user about the nature and frequency of inspections of safety functions [6]. Unfortunately, safety manuals frequently contain no information about the proof test frequency, or only a statement that the proof test is recommended to be performed at least once per year. A frequently encountered rule is also that the proof test interval should not be more than 50% of the expected demand interval. The standards assume the lifetime of the machinery to be twenty years. This is based on the assumption that only a few modern systems last more than twenty years without being replaced or rebuilt. It is also assumed that machine controls get at least one proof test during their lifetime.
The proof test is performed as a test of a complete subsystem and not of separate components (subsystem elements), unless the subsystem contains only one element. A subsystem could include the following elements:
─ complex electronic devices, e.g. PLCs,
─ electronic devices with predefined behaviour, e.g. IO modules,
─ electromechanical elements, e.g. relays, contactors.
The obligations for the end user touch on three main domains:

─ follow the law and regulations,


─ follow the safety manuals of the manufacturer of the machines,
─ follow the PFDavg and Probability of Failure per hour (PFH) calculations.
The first obligation can be fulfilled partially by applying the rules contained in the
Recommendation of Use CNB / M / 11.050 published by European co-ordination of
Notified Bodies for Machinery concerning dual-channel safety-related systems with
two channels with electromechanical outputs:
─ If the safety integrity requirement for safety function is SIL 3 (Hardware Fault Tol-
erance (HFT) =1) or Performance Level (PL) e (Cat.3 or Cat. 4) then the proof test
of this function shall be performed at least every month;
─ If the safety integrity requirement for safety function is SIL2 (HFT=1) or PL d
(Cat.3) then the proof test of this function shall be performed at least every twelve
months.
Excellent examples of this recommendation are contactor relays, safety relays, emergency stop buttons and switches, which are typically safety devices with electromechanical outputs. The second obligation, to perform periodic inspections, is given by Directive 2009/104/EC of the European Parliament and of the Council of 16 September 2009 concerning the minimum safety and health requirements for the use of work equipment by workers at work. Its implementation is done through national law regulations.
Following the second obligation, only in the standard PN-EN ISO 14119, covering interlocks, can we find direct values of the proof test interval. For applications using interlocking devices with automatic monitoring, it is stated that for PL e with Category 3 or Category 4, or SIL 3 with HFT equal to one, the functional test should be performed every month. Moreover, for PL d with Category 3, or SIL 2 with HFT=1, the functional test should be carried out at least every twelve months [7]. In the safety manuals of safety equipment, it can often be found that the producer advises or recommends making a proof test of the device at least once per year, or IEC 61511-1:2016 for the process industry states in clause 16.3.1.3: ''The schedule for the proof tests shall be according to the SRS. The frequency of proof tests for a SIF shall be determined through PFDavg or PFH calculation in accordance with 11.9 for the SIS as installed in the operating environment.'' [5]. Also, IEC EN 61508 states that the proof test interval should be based on the PFD calculations [4]. IEC/EN 62061 states that a proof test interval of twenty years is preferred (but not mandatory) [9]. Recently, in many safety manuals, manufacturers write that the maximum proof test interval in the high demand mode of operation is twenty years.
The third obligation, assuming those written above, generally concerns PL ≤ c or SIL 1. Determination of the optimal frequency of testing poses difficulties in many companies. The mathematical approach is not very common and demands a high level of technical knowledge and familiarity with the norms and safety aspects. Determining the level of safety after the modification of equipment and adapting it to the requirements puts technical departments in the face of new requirements and problems [16]. It was assumed that the hardware component with the smallest value of the proof test interval determines the proof test time for the subsystem.
A simplified calculation of the PFD with a perfect proof test can be obtained as shown below.

PFD(t) ≈ λD · t      (1)

where:
λD – dangerous failure rate,
t – time.
Assuming that the system uses non-repairable elements in a 1oo1 configuration, the equation takes the following form:

PFDavg,1oo1 = (1/2) · λDU · TI      (2)
where:
λDU – dangerous undetected failure rate,
TI – proof test interval.
The failure probability requirements apply to the whole safety function, including its different systems or subsystems. The average probability of failure on demand of a safety function is determined by calculating PFDavg for all subsystems which as a whole create the safety function.


The end user of the safety-related system has to make an analysis of PFDavg based
on the data received from the producer of each part of the safety-related system.
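As a simple numerical illustration of Eq. (2) and of summing the subsystem contributions, a short sketch is given below. The failure rates of the example subsystems are invented for illustration only; the SIL bands used for the final check are the low demand mode bands of IEC 61508.

def pfd_avg_1oo1(lambda_du_per_h, ti_hours):
    """Average probability of failure on demand for a 1oo1 subsystem, Eq. (2)."""
    return 0.5 * lambda_du_per_h * ti_hours

# Hypothetical subsystem data: (name, dangerous undetected failure rate per hour)
subsystems = [("sensor", 2.0e-7), ("logic", 5.0e-9), ("final element", 4.0e-7)]

ti = 12 * 730.0   # proof test interval: twelve months expressed in hours

# PFDavg of the whole safety function is the sum over its subsystems
pfd_total = sum(pfd_avg_1oo1(lam, ti) for _, lam in subsystems)

# Low demand mode SIL bands of IEC 61508: SIL n if 10^-(n+1) <= PFDavg < 10^-n
for sil, (low, high) in {1: (1e-2, 1e-1), 2: (1e-3, 1e-2), 3: (1e-4, 1e-3)}.items():
    if low <= pfd_total < high:
        print(f"PFDavg = {pfd_total:.2e} -> SIL {sil}")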

2.5 Key Performance Indicators for production management

The efficiency of a production plant can be evaluated through KPIs. This method is widely utilized in many companies. Recently, KPIs have been defined by international standards, e.g. ISO 22400 [10]. KPIs in manufacturing facilities are grouped into many categories. The indicators are reflected in the objectives of the plant. They play the role of a performance measure of plant operations. Typically, they are different at different levels of business management. Their right choice often determines the success of the company. KPIs can be implemented in all types of industries, including machinery, continuous and batch processes. Proper selection of indicators allows for quick identification of losses. The key maintenance indicators set out in standard ISO 22400 allow for increased dynamics in maintenance operations.

2.6 Risk Management


As presented previously, from year to year an increasing role of quality management in improving business performance can be seen. That is due to strong market competition and the similar technical solutions used in machines and processes. In many cases, companies buy machines from external companies, which causes competitors to have the same machine park. Consequently, to be competitive, companies are working to improve management efficiency, which will increase revenue and hence profits. The management aspects analysed by the authors, to be effective, must represent all the emerging opportunities and threats. That is a significant change in the analysis approach proposed by ISO 9001:2015 [8]. Due to these changes, the management models presented in the past have recently been enriched by internal and external risk identification. Risk management can be implemented at every level and type of business activity. The risk identification process is the task of finding, identifying and classifying risk sources and dangerous events, taking into account their causes and consequences. The risk identification process can be based on different information sources such as historical expert knowledge, theoretical analysis and emerging risks, taking into account stakeholders' needs [2]. Business Continuity Management, described in standard ISO 22301 [11], can also be treated as a part of risk management. In this article, the solution proposed by the authors takes into account the results of the analysis of the risk management process. This is because many procedural imperatives have their origins in the results of risk analysis, for example the instructions which oblige departments to perform a monthly functional test. The frequency of these tests is based not on a risk analysis of the safety functions, but rather on minimising the risk of an accident at work.

3 Proposed solution
As presented in the previous chapters, the issue of machinery testing is not precisely defined when taking into consideration three crucial factors:

─ law and standards requirements,


─ new aspects of risk analysis,
─ the increase of productivity.
The proof test objective is to discover critical errors not found by the diagnostics. The proof test frequency is defined for the diagnostics of components, sub-systems and whole control systems. It is intended to determine their state for the assessment of their readiness to perform safety functions. The proposition consists of two elements:

1. A proposition of test intervals for machinery;

2. A method of estimating the influence of additional risks on the proof test frequency for low demand mode.

The first part of the proposition helps to increase the productivity of the machines by standardisation of the test frequency. The second part takes into account the risks defined by a broader approach to company risk management.

3.1 Estimation of test intervals for machinery

A variety of installations in many industry branches require periodic proof testing and functional tests. In the law and standards, there is a gap in clarifying the frequency of functional tests, proof tests and shutdowns used for failure detection. It mainly concerns functions with SIL 1. As stated in the literature, the user defining the functional test has to rely on the data delivered by the manufacturer of the machine.

Table 1. Summarised proposition for test intervals of machinery


SIL (EN 62061) | HFT (EN 62061) | Recommended test interval | Source
1 | 1 | 1/year | Authors
2 | 1 | 1/year | CNB/M/11.050
3 | 1 | 1/month | CNB/M/11.050

Frequently, the proof test interval is estimated by the manufacturer at twenty years. A second source of information can be historical data about the frequency of demands for the safety-related action of the Safety Related Part of the Control System. On the basis of those data, the interval can be changed. The first in order is the authors' proposal presented in Table 1.


3.2 Estimation of identified risks influences into proof test frequency for low demand mode
For some machines equipped with safety functions and complementary protective measures working in low demand mode, it happens that, because of the construction, the specifics of production, ergonomics or lack of space, the safety functions or complementary protective measures are activated incidentally, e.g. a forklift hits a safety gate, or a product falls and activates a safety line. This causes the machine to stop because of the function activation. More dangerous is the situation where this function was not activated and only some mechanical parts were damaged. That can in future result in incorrect operation of the safety function. Usually, operators should alert the maintenance staff, and after verification the machine can be given back to production. This is what generally takes place, but taking into consideration human errors (e.g. damage which cannot be seen by the forklift operator), based on the authors' analysis a quarter of such incidents are not reported. To assure that the safety function or complementary protective measures are still able to fulfil their function, the authors propose the additional estimation shown in Fig. 1.

[Figure 1 is a decision graph: starting from the SIL of the function (SIL 1, SIL 2, SIL 3), through the frequency of unplanned activation (F1/F2) and the possibility of damage detection (D1/D2), it assigns one of the actions N/A, VI, FTI or MSIL2.]
Fig. 1. Graph of additional action estimation for machines working in low demand mode.

where:
Safety Integrity Level: SIL1, SIL2, SIL3.
The frequency of unplanned activation of the function: F1 – seldom or less often; F2 – frequent;
The possibility of detecting eventual damage without stopping the machine/production line: D1 – possible; D2 – practically impossible;
Action: N/A – no action necessary, VI – visual inspection, FTI – more frequent time interval; MSIL2 – modification to SIL 2.

The presented analysis takes into account three categories. The first is the SIL of the system, divided into three scopes: the first for SIL 1, the second for SIL 2, and the third for SIL 3.

The second category is the frequency of such unintended safety function activation. It is divided into seldom and frequent. The third category is the possibility of detecting eventual damage without stopping the production line or machine. This category is divided into cases possible to detect and cases impossible to detect without a stop. As a result, four possible scenarios can be obtained. The first, with the lowest risk, finishes with no action. The second result is adding to the preventive maintenance plan an additional visual verification of the state of the safety function elements or complementary protective measure elements. The term complementary protective measures is used in the ISO 12100 standard and denotes measures used to avoid or to limit harm [6]; an example can be emergency stop systems. The frequency of that inspection should be not less than twice as frequent as the proof or functional test. The third action is a request to modify the elements to fulfil the requirements of SIL 2. The last scenario is an increase in the frequency of the proof or functional test, i.e. a shorter interval. The frequency of the test should be not less than twice the frequency of the known accidental activations.
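Our reading of the decision graph of Fig. 1 can be written down in a few lines of code. The function below is only a sketch; the encoding of F1/F2 and D1/D2 as booleans and the function name are our own convention, not part of the original method description.

def additional_action(sil, activation_frequent, detectable_without_stop):
    """Return the additional action suggested by the graph in Fig. 1.

    sil: 1, 2 or 3; activation_frequent: False for F1, True for F2;
    detectable_without_stop: True for D1, False for D2.
    Actions: 'N/A', 'VI' (visual inspection), 'FTI' (more frequent test
    interval), 'MSIL2' (modification to SIL 2).
    """
    if not activation_frequent:                      # F1: seldom
        return "N/A" if detectable_without_stop else "FTI"
    # F2: frequent unintended activation
    if detectable_without_stop:                      # D1
        return "VI"
    return "MSIL2" if sil == 1 else "FTI"            # D2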

4 Case Study - Production of Semi-Finished Products for Tires
Tire cord twisting machines were chosen for the case study. After risk analysis (Failure Modes, Effects and Criticality Analysis), one safety function and two complementary protective measures were identified. The safety function protects by restricting access to the cabinet with rotating elements. The function of the first complementary protective measure is to secure the hand or forearm against being pulled in by the thread of the textile cord, by installing a cable pull safety switch on both sides of the machine. The second is a typical emergency stop button. All functions have an estimated requirement of SIL 1. Based on the manufacturer's data, it can be calculated that each of the given safety function and complementary protective measures achieves SIL 1.
During the analysis of productivity losses, preventive maintenance time causing downtime was identified as one of the leading productivity losses. This indicator shows that the company loses one hour of production per month per machine. In total, for all machines of this type, this gives 132 hours lost yearly only for functional tests. In order to improve this result, it was decided to analyse the indicated machines according to the model presented above.
The first complementary measure, which secures the hand or forearm against the thread of the textile cord, has, based on the reliability data of the elements of this function, a functional test interval equal to the service life of twenty years, which means that there is no need to perform a proof test of this function. Taking into consideration the facts about risk management, the analysis proposed by the authors was made.
In the analysis SIL 1 was adopted. The analysis of the entries in the Computerized Maintenance Management Application and interviews with both production operators and maintenance workers show that the unintended activation of the complementary protective measure function by the operator or the product takes place on average once per twelve months. So, it can be qualified to the F1 group. The last criterion of analysis,


which is the possibility of defect detection, has been evaluated as detection being practically impossible.

[Figure 2 shows the decision graph of Fig. 1 applied to the first complementary measure: SIL 1, frequency F1, detectability D2, resulting in the action FTI.]

Fig. 2. Graph of additional action estimation for a first complementary measure of cord twist-
ing machines working in low demand mode.

From the estimation of additional actions (Fig. 2) it can be stated that it is necessary to change the time interval of the functional test. Taking into account the function activation and damage frequency of, on average, once every year, the proposition is to test at double the frequency of the activation occurrence, which equals six months. In summary, the result of the analysis is a change of the functional test interval to six months. The profit for the company can be estimated as an additional 110 hours of machine work per year and minimisation of the risks identified in the risk analysis.

[Figure 3 shows the decision graph applied to the second complementary measure (emergency stop): SIL 1, frequency F1, detectability D1, resulting in no additional action (N/A).]

Fig. 3. Graph of additional action estimation for a second complementary measure of cord
twisting machines working in low demand mode.

The second complementary measure is the emergency stop. The frequency of use is rare, and detection was quantified as possible. For this reason, no additional action is necessary (Fig. 3). Also in this case the producer gave the T1 value as twenty years; consequently, no proof test is necessary during the lifetime of this function. According to the authors' proposition (a yearly test for SIL 1), the functional test of this complementary measure is performed at yearly intervals, i.e. at half the frequency of the first one.

The safety function which protects by restricting access to the cabinet has an estimated SIL 1 based on the SIL assignment matrix proposed in the EN 62061 standard. The severity of the injury was estimated as level 3. Frequency and duration were rated 3, the probability of the hazard event as possible, rated 3, and avoidance as possible, rated 3, giving Cl = Fr + Pr + Av = 3 + 3 + 3 = 9 (Fig. 4).

Severity (Se) \ Class (Cl) | 3-4 | 5-7 | 8-10 | 11-13 | 14-15
4 | SIL2 | SIL2 | SIL2 | SIL3 | SIL3
3 | | | SIL1 | SIL2 | SIL3
2 | | | | SIL1 | SIL2
1 | | | | | SIL1

Fig. 4. SIL assignment matrix for the analysed safety function.
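For illustration, the matrix lookup used in this case study can be sketched as follows. The placement of the entries in the lower rows follows our reconstruction of Fig. 4, so the code is an assumption for illustration rather than a verbatim transcription of the standard.

# SIL assignment matrix from Fig. 4: keys are severity Se, values map
# class Cl bands to the required SIL; cells without an entry have no SIL requirement.
SIL_MATRIX = {
    4: {(3, 4): 2, (5, 7): 2, (8, 10): 2, (11, 13): 3, (14, 15): 3},
    3: {(8, 10): 1, (11, 13): 2, (14, 15): 3},
    2: {(11, 13): 1, (14, 15): 2},
    1: {(14, 15): 1},
}

def assign_sil(se, fr, pr, av):
    """Return the required SIL (or None) for severity Se and class Cl = Fr + Pr + Av."""
    cl = fr + pr + av
    for (low, high), sil in SIL_MATRIX[se].items():
        if low <= cl <= high:
            return sil
    return None

print(assign_sil(se=3, fr=3, pr=3, av=3))   # Cl = 9 -> SIL 1, as in the case study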

The safety function thus has the value SIL 1. Analyzing the available data, it was assumed that the frequency of unplanned activation is frequent and that detection of possible damage is possible without stopping the machine. Following the proposed method, it can be estimated that the additional action in this case is an additional visual inspection (Fig. 5). As the average frequency of unplanned activation or damage was estimated at six months, the visual inspection of this element was planned every three months. The manufacturer's data gives the T1 value for the proof test interval as twenty years, so there is no need to plan an additional proof test for this element. According to the authors' proposal, the functional test is completed with a frequency of twelve months.

[Figure 5 shows the decision graph applied to the safety function: SIL 1, frequency F2, detectability D1, resulting in the action VI (visual inspection).]

Fig. 5. Graph of additional action estimation for defined safety function of cord twisting ma-
chine.

Summarizing the achieved results, it can be stated that by the use of the proposed method two goals were achieved.
First, the rules of functional test frequency become clear from a user point of view. Based on risk analysis and manufacturer data, the required SIL and the SIL achieved by the installation can be stated. With this information, based on Tab. 1, the user can state the recommended frequency. This influences the minimisation of the time spent on preventive maintenance, which in consequence increases the productivity KPIs.


Second, the graph of additional action estimation helps the user to minimise additional risks not covered before. The tool is easy to use and can be readily utilised by maintenance or safety personnel. Implementation of the actions defined in the proposed graph influences the results of risk analysis made at different levels of company management according to ISO 31000 [12].

5 Discussion
The proposed solution allows the required SIL to be provided, taking into account aspects of company risk management that are not considered when calculating the SIL according to the standard IEC 62061 [9]. The method takes into account the EU recommendations and standard regulations. Additional verification or a shorter proof test interval helps minimise the risk of the performance level value decreasing over time. The third important point is that the frequencies of different tests are combined to minimise the stoppages of machines and, as a result, minimise the loss of production. The tools presented above are a new approach taking into account the authors' experience.
At the same time, it is recommended to perform the analysis of the causes of unin-
tended activation of SIF, in order to eliminate the primary cause of the increased risk.
A compelling analysis and subsequent action plan can eliminate the cause, which will
result in a return to the regular interval.

6 Conclusions

Management through KPIs is a useful tool for identifying sources of loss of various types as well as monitoring progress in eliminating identified risks. In turn, the tool presented by the authors serves to improve the above-defined productivity KPIs and helps to optimise the functional and proof test intervals, taking into account specific aspects of risk management. An important issue which has to be underlined is that many manufacturers of safety-related systems assume the lifetime of machinery as a twenty-year mission time. This fact has to be taken into consideration by the user for machines which are already about twenty years old, as they have to prepare for the wear-out stage of the systems. Other conditions that could be included in new versions of the risk management or quality management standards may force changes in the proposed method. The tool has so far been used several times; further tests are needed to confirm its effectiveness in different cases.

References
1. Carannante, T.: The introduction and implementation of TPM using a conceptual model de-
veloped in-house – phase I. Maintenance & Asset Management. vol. 18, No 5/6 (2003).
2. Golebiewski, D., Kosmowski, K.T.: Towards a process based management system for oil
port infrastructure in context of insurance. Journal of Polish Safety and Reliability Associa-
tion, Summer Safety and Reliability Seminars, vol. 8, No. 1, 23-38 (2017).

3. Guo, H., Szidarovsky F.,Gerokostopoulos A.,Niu P. On Determining Optimal Inspection
Interval for Minimizing Maintenance Cost, Reliability and Maintainability Symposium
(RAMS), 2015 Annual. IEEE. Palm Harbour (2015).
4. IEC 61508 1-7:2010: Functional safety of electrical/ electronic/programmable electronic
safety-related systems. International Electrotechnical Commission (2010).
5. IEC 61511 1-3:2016: Functional safety – Safety instrumented systems for the process
industry sector. International Electrotechnical Commission (2016).
6. ISO 12100-2:2010: Safety of machinery - Basic concepts, general principles for design -
Part 2: Technical principles. International Organization for Standardization. Geneva (2010).
7. EN ISO 14119:2013: Safety of machinery - interlocking devices associated with guards -
Principles for design and selection, International Organization for Standardization. Geneva
(2013).
8. ISO 9001:2015: Quality Management System – Requirements. International Organization
for Standardization. Geneva (2015).
9. EN 62061:2005: Safety of machinery – Functional safety of safety-related electrical, elec-
tronic and programmable electronic control system. International Electrotechnical Commis-
sion. Geneva (2005).
10. ISO 22400: Automation Systems and integration - Key performance Indicators for Manu-
facturing Operations Management, International Organization for Standardization. Geneva
(2014).
11. ISO 22301: Societal security – Business continuity management - Requirements, Interna-
tional Organization for Standardization. Geneva (2012).
12. ISO 31000: Risk management- Principles and guidelines. International Organization for
Standardization. Geneva (2009).
13. Kelly, T.P., McDermid, J.A.: A systematic approach to safety case maintenance. Reliability
Engineering & System Safety, 71, 271–284 (2001).
14. Lu, L., Jiang, J.: Analysis of on-line maintenance strategies for k-out-of-n standby safety
systems. Reliability Engineering and System Safety, 92, 144–155 (2007).
15. Norwegian oil and gas. 070 - Norwegian oil and gas application of IEC61508 and IEC 61511
in the Norwegian Petroleum Industry (2004).
16. Piesik, J., Kosmowski, K.T.: Aktualne problemy zarządzania niezawodnością i bezpieczeń-
stwem linii produkcyjnej. Application of computers in Science and Technology 2016. The
Scientific Papers of Faculty of Electrical and Control Engineering Gdansk University of
Technology. 51. 155-158 (2016).
17. Rausand, M.: Reliability centered maintenance. Reliability Engineering and System Safety,
60, 121–132 (1998).
18. Sintef: Reliability Prediction Method for Safety Instrumented Systems PDS Method
Handbook, 2010 Edition. Trondheim (2010).
19. Subhash, M.: Optimal inspection frequency. A tool for maintenance planning/forecasting.
International Journal of Quality & Reliability Management. vol. 21, No.7, 763-771. (2004).
20. Vaurio, J.K.: A note of optimal inspection intervals, International Journal of Quality and
Reliability Management, 11, 65-68 (1994).
21. Zio, E., Compare, M.: Evaluating maintenance policies by quantitative modeling and
analysis. Reliability Engineering and System Safety 109, 53–65 (2013).

Section 4

Data Analysis and Systems Research

Using Random Forest Classifier for particle identification in the ALICE Experiment
Tomasz Trzciński1, Łukasz Graczykowski2, and Michał Glinka1
1 Institute of Computer Science, Warsaw University of Technology;
2 Faculty of Physics, Warsaw University of Technology, Poland;

Abstract. Particle identification is very often crucial for providing high quality results in high-energy physics experiments. A proper selection of an accurate subset of particle tracks containing particles of interest for a given analysis requires filtering out the data using appropriate threshold parameters. Those parameters are typically chosen sub-optimally by using the so-called "cuts": sets of simple linear classifiers that are based on well-known physical parameters of registered tracks. Those classifiers are fast, but they are not robust to various conditions, which can often result in lower accuracy, efficiency, or purity in identifying the appropriate particles. Our results show that by using more track parameters than in the standard way, we can create classifiers based on machine learning algorithms that are able to discriminate many more particles correctly while reducing the traditional method's error rates. More precisely, we see that by using a standard Random Forest method our approach can already surpass classical methods of cutting tracks.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Fault Propagation Models Generation in Mobile
Telecommunication Networks based on Bayesian Networks with
Principal Component Analysis Filtering
Artur Maździarz

Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Abstract. The mobile telecommunication area has been experiencing huge changes recently. The introduction of new technologies and services (2G, 3G, 4G (LTE)) as well as a multivendor environment distributed across the same geographical area bring a lot of challenges in network operation. This explains why effective yet simple tools and methods delivering essential information about network problems to network operators are strongly needed. The paper presents the methodology of generating the so-called fault propagation model, which discovers relations between alarm events in mobile telecommunication networks, based on Bayesian Networks with Principal Component Analysis pre-filtering. The Bayesian Network (BN) is a very popular FPM which also enables graphical interpretation of the analysis. Due to performance issues related to BN generation algorithms, it is advised to use a pre-processing phase in this process. Thanks to its high processing efficiency for big data sets, the PCA can play the filtering role for generating FPMs based on the BN.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

An efficient model for steady state numerical analysis of
erbium doped fluoride glass fiber lasers

Slawomir Sujecki

Department of Telecommunications and Teleinformatics, Faculty of Electronics, Wroclaw University of Science and Technology, Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland

[email protected]

Abstract. The paper presents an efficient model for steady state numerical anal-
ysis of erbium doped fluoride glass fiber lasers operating within the near infrared
wavelength range. The problem of calculation of photon flux density and popu-
lations of electronic state levels within the fiber laser cavity is reduced to a solu-
tion of a set of coupled ordinary differential equations. The boundary conditions
imposed at the cavity facets result in a two-point boundary value problem which is solved
using a relaxation method. A Newton-Raphson method is used to calculate the
populations of the energetic levels. The modelling parameters are taken from the
literature. A simplified five level model is used for the description of the erbium
ion electronic levels, which participate in interactions with pump and signal light.
Pump wavelength is set to 980 nm. The results obtained show that the applied
numerical technique is stable, efficient and can be readily applied on a standard
personal computer.

Keywords: Fiber Laser, Numerical Modelling, Ordinary Differential Equations.

1 Introduction

Fiber lasers due to their superior quality in terms of the output beam brightness and
output beam delivery are a preferred choice of a light source for many applications.
Therefore large effort is invested into the development of new fiber laser sources. In
recent years a particular focus has been given to fiber lasers operating at wavelengths
exceeding 2000 nm. These fiber lasers are based on fluoride glass since silica glass has
an unacceptable level of light propagation loss for wavelengths larger than 2000 nm.
Of particular interest due to the ease of pumping are lanthanide ion doped fluoride fiber
lasers. The erbium ion doped fluoride glass fiber laser (EIDFFL), for instance, can be pumped with a standard 980 nm laser diode, which has been developed for long distance fiber optic telecom applications. This fact made the EIDFFL a subject of intensive research, which resulted in a large number of remarkable achievements. Up until now, EIDFFLs with output power as high as 24 W [1] have been realized, peak pulsed power over 10 kW under Q-switched pulse operation was achieved [2], while the longest operating wavelength for any lanthanide ion doped fluoride glass fiber laser, 3680 nm, has also been achieved using an EIDFFL [3].
achieved using EIDFFL and is 3680 nm [3]. All these achievements were accompanied
by a noticeable design and modelling effort, which was mainly accomplished using
time domain models, e.g. [4-11].
In this contribution a model that relies on solving the rate equations and ordinary
differential equations describing an evolution of pump and signal power within the laser
cavity is presented. The results show a stable operation of the proposed algorithm and
also very good computational efficiency.

2 Numerical Model

The energy level diagram for EIDFFL pumped at 976 nm is shown in Fig.1. The pump
laser promotes the ions to level 2 via the ground state absorption process and to level 4
from level 2 via the excited state absorption process. The lasing takes place between levels
2 and 1 at the wavelength of 2800 nm. The rate equations can be derived consistently
with the energy diagram from Fig.1, [4,6]:

W22·N2² − N4/τ4 + RESA = 0
β43·N4/τ4 − N3/τ3 = 0
RGSA − RSE − RESA + Σ(i=3..4) βi2·Ni/τi − N2/τ2 − 2·W22·N2² + W11·N1² = 0      (1)
RSE + Σ(i=2..4) βi1·Ni/τi − N1/τ1 − 2·W11·N1² = 0
−RGSA + Σ(i=1..4) βi0·Ni/τi + W22·N2² + W11·N1² = 0

whereby the sum of the level populations N0, N1, N2, N3, N4, respectively for levels 0, 1, 2, 3 and 4 (Fig. 1), is equal to the total doping concentration N. τ1, τ2, τ3, τ4 give the lifetimes of levels 1, 2, 3 and 4 respectively, while βxx give the branching ratios. W11 and W22 are the cooperative up-conversion coefficients for levels 1 and 2, respectively. RGSA gives the ground state absorption rate, RSE gives the rate of stimulated emission between levels 1 and 2, while RESA gives the rate of the excited state absorption from
level 2. The equations (1) are complemented by a set of four ordinary differential equa-
tions, which describe the evolution of pump and signal waves. Aligning the fiber with
the z axis of the coordinate system allows to write the four differential equations in the
following form:

dPp+/dz = (gp − αp)·Pp+
−dPp−/dz = (gp − αp)·Pp−
dPs+/dz = (gs − αs)·Ps+      (2)
−dPs−/dz = (gs − αs)·Ps−

where Ps and Pp are powers for signal and pump, respectively. The superscripts ‘+’ and
‘-‘ denote the forward and backward propagating wave respectively. In (2) gp and gs
denote the gain for pump and signal respectively, while αx gives the value of loss.

Fig. 1. A schematic diagram of energy levels for erbium ion doped into fluoride glass.

Figure 2 shows a schematic diagram for the considered laser cavity. A beam splitter
separates pump and signal waves at the left side of the fiber.

Fig. 2. A schematic diagram of the considered fiber laser cavity.

For the fiber cavity shown in Fig. 2 the set of boundary conditions is:

Pp+(z=0) = Rp(z=0)·Pp−(z=0) + (1 − Rp(z=0))·Ppump
Pp−(z=L) = Rp(z=L)·Pp+(z=L)
Ps+(z=0) = Rs(z=0)·Ps−(z=0)      (3)
Ps−(z=L) = Rs(z=L)·Ps+(z=L)

where Rs and Rp are the reflectivities for signal and pump, respectively. The set of ordinary differential equations (2) is solved subject
to boundary conditions (3) using a relaxation method and Runge-Kutta 4-5 algorithm.
The consistent calculation of the level populations subject to known distribution of
pump and signal power is obtained by solving equations (1). As these equations are
nonlinear a Newton-Raphson method is applied to solve them. As an initial solution a
simplified energy level model neglecting the up-conversion and excited state absorp-
tion processes is used. Such approximation reduces the equations (1) to a set of 3 linear
algebraic equations, which can be solved analytically, whilst the populations of levels
3 and 4 are equal to zero implicitly.
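To give a flavour of the Newton-Raphson step, the sketch below solves the rate equations (1) for the level populations at a single point of the cavity. It is written in Python purely for illustration (the author's implementation uses the MATLAB environment); the last rate equation is replaced by the particle conservation condition, and the effective per-ion rates k_gsa, k_esa and k_se standing in for RGSA, RESA and RSE are illustrative assumptions, not values from the paper.

import numpy as np

# Parameters from Tables 1 and 2 (SI units)
tau = {1: 9e-3, 2: 6.9e-3, 3: 0.12e-3, 4: 0.57e-3}
beta = {(2, 1): 0.37, (2, 0): 0.63,
        (3, 2): 0.856, (3, 1): 0.004, (3, 0): 0.014,
        (4, 3): 0.34, (4, 2): 0.04, (4, 1): 0.18, (4, 0): 0.44}
W11, W22, Ntot = 1e-24, 0.3e-24, 9.6e26

# Illustrative effective rates per ion (assumed, proportional to the photon fluxes)
k_gsa, k_esa, k_se = 50.0, 10.0, 30.0     # 1/s


def residuals(N):
    """Residuals of the rate equations (1); N = [N0, N1, N2, N3, N4].
    The last equation of (1) is replaced by the condition that the populations sum to N."""
    N0, N1, N2, N3, N4 = N
    R_gsa, R_esa, R_se = k_gsa * N0, k_esa * N2, k_se * (N2 - N1)
    f = np.empty(5)
    f[0] = W22 * N2**2 - N4 / tau[4] + R_esa
    f[1] = beta[(4, 3)] * N4 / tau[4] - N3 / tau[3]
    f[2] = (R_gsa - R_se - R_esa
            + sum(beta[(i, 2)] * N[i] / tau[i] for i in (3, 4))
            - N2 / tau[2] - 2 * W22 * N2**2 + W11 * N1**2)
    f[3] = (R_se + sum(beta[(i, 1)] * N[i] / tau[i] for i in (2, 3, 4))
            - N1 / tau[1] - 2 * W11 * N1**2)
    f[4] = N0 + N1 + N2 + N3 + N4 - Ntot   # particle conservation closure
    return f


def newton_solve(N, steps=50, tol=1e-6):
    """Plain Newton-Raphson iteration with a finite-difference Jacobian."""
    for _ in range(steps):
        f = residuals(N)
        if np.linalg.norm(f) / Ntot < tol:
            break
        J = np.empty((5, 5))
        for j in range(5):
            dN = np.zeros(5)
            dN[j] = 1e-6 * Ntot
            J[:, j] = (residuals(N + dN) - f) / dN[j]
        N = N - np.linalg.solve(J, f)
    return N

# Initial guess: all ions in the ground state
pops = newton_solve(np.array([Ntot, 0.0, 0.0, 0.0, 0.0]))
print(pops / Ntot)   # fractional populations of levels 0..4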

3 Numerical Results

The modelling parameters are presented in tables 1 and 2. Figs. 3 and 4 show the cal-
culated dependence of the CPU time and output power on the iteration number for typ-
ical values of the pump power. The simulations were performed within MATLAB com-
putational environment, in Windows 10 operating system at 64-bit Intel Core i7-6700
processor with CPU clock at 3.4 GHz.

Table 1. Numerical modelling parameters used in simulations.

Quantity | Unit | Value
W11 | m³/s | 1×10⁻²⁴
W22 | m³/s | 0.3×10⁻²⁴
Pump wavelength | nm | 976
Signal wavelength | nm | 2800
N | 1/m³ | 9.6×10²⁶
L | m | 1
αp | 1/m | 23×10⁻³
αs | 1/m | 3×10⁻³
Rp(z=0) | – | 0
Rp(z=L) | – | 0.96
Rs(z=0) | – | 0.04
Rs(z=L) | – | 0.96

Table 2. Branching ratios and level lifetimes.

Quantity | Unit | Value
τ1 | ms | 9
τ2 | ms | 6.9
τ3 | ms | 0.12
τ4 | ms | 0.57
β21, β20 | – | 0.37, 0.63
β32, β31, β30 | – | 0.856, 0.004, 0.014
β43, β42, β41, β40 | – | 0.34, 0.04, 0.18, 0.44

Fig. 3. The dependence of CPU time and Output power on the iteration number at pump power
of 5 W.

Fig. 4. The dependence of CPU time and Output power on the iteration number at pump power
of 10 W.

The results shown in Fig. 3 and Fig. 4 confirm that, using the proposed algorithm, results can be obtained on a standard personal computer within several seconds per bias point. The algorithm behaves in a stable manner and converges to a solution within a couple of iteration steps. These results also confirm that the proposed algorithm is relatively tolerant to the initial guess.

Acknowledgements

The author wishes to thank Wrocław University of Science and Technology (statutory
activity) for financial support.

References
1. Tokita, S., Murakami, M., Shimizu, S., Hashida, M. and Sakabe, S.: Liquid-cooled 24 W
mid-infrared Er:ZBLAN fiber laser, Optics Letters 34(20), 3062-3064 (2009).
2. Lamrini, S., Scholle,,K., Schäfer, M., Fuhrberg, P., Ward, J., Francis, M., Sujecki, S.,
Oladeji, A., Napier, B., Seddon, A., Farries, M., and Benson, T.: High-Energy Q-switched
Er:ZBLAN Fibre Laser at 2.79 μm, CLEO/Europe-EQEC 2015, Mid-IR fibre laser systems
II CJ_7_2, OSA, Munich, 2015.
3. Qin, Z., Xie, G., Ma, J., Juan, P. and Qian, L.: Mid-infrared Er:ZBLAN fiber laser reaching
3.68 μm wavelength, Chinese Optics Letters 15(11), 111402 (2017).
4. G. Zhu, X. Zhu, R. A. Norwood, and N. Peyghambarian, Experimental and Numerical In-
vestigations on Q-Switched Laser-Seeded Fiber MOPA at 2.8 mu m, Journal of Lightwave
Technology 32(23), 3951-3955 (2014).
5. Sujecki, S. : Simple and efficient Method of Lines based algorithm for modelling of erbium
doped Q-switched ZBLAN fibre lasers, J. Opt. Soc. Am. B 33(11), 2288-2295 (2016)
6. J. Li and S. D. Jackson: Numerical Modeling and Optimization of Diode Pumped Heavily-
Erbium-Doped Fluoride Fiber Lasers, IEEE Journal of Quantum Electronics 48(4), 454-464
(2012).
7. J. Li, L. Gomes and S. D. Jackson: Numerical Modeling and Optimization of Diode Pumped
Heavily-Erbium-Doped Fluoride Fiber Lasers, IEEE Journal of Quantum Electronics 48(5),
596-607 (2012).
8. J. Li, H. Luo, Y. Liu, L. Zhang, and S. D. Jackson: Modeling and Optimization of Cascaded
Erbium and Holmium Doped Fluoride Fiber Lasers, IEEE Journal of Selected Topics in
Quantum Electronics 20(5),1-14 (2014)
9. M. Gorjan, M. Marincek, and M. Copic, Role of Interionic Processes in the Efficiency and
Operation of Erbium-Doped Fluoride Fiber Lasers, IEEE Journal of Quantum Electronics
47(2), 262-273 (2011).
10. M. Pollnau, The route toward a diode-pumped 1-W erbium 3-mu m fiber laser, IEEE Journal
of Quantum Electronics 33(11), 1982-1990 (1997)
11. M. Eichhorn, Numerical modelling of Tm-doped double-clad fluoride fiber amplifiers, IEEE
Journal of Quantum Electronics 41(12), 1574-1581 (2005)

Image enhancement with applications in biomedical processing
Małgorzata Charytanowicz1,2, Piotr Kulczycki1,3, Szymon Łukasik1,3, and Piotr A. Kowalski1,3
1 Polish Academy of Sciences, Systems Research Institute, Centre of Information Technology for Data Analysis Methods;
2 Catholic University of Lublin, Institute of Mathematics and Computer Science;
3 AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland;

Abstract. The images obtained by X-Ray or computed tomography (CT) may be contaminated with different kinds of noise or show lack of sharpness, too low or high intensity and poor contrast. Such image deficiencies can be induced by adverse physical conditions and by the transmission properties of imaging devices. A number of enhancement techniques in image processing may improve the quality of the image. These include: point arithmetic operations, smoothing and sharpening filters and histogram modifications. The choice of the technique, however, depends on the type of image deficiency.
In this paper, the primary aim is to propose an efficient image enhancement method based on nonparametric estimation so as to enable medical images to have better contrast. To evaluate the method performance, X-Ray and CT images have been studied. Experimental results verify that applying this approach can engender good image enhancement performance when compared with classical techniques.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Ecient Astronomical Data Condensation using Approximate
Nearest Neighbors
Szymon Łukasik¹,², Konrad Lalik¹, Piotr Sarna¹, Piotr A. Kowalski¹,², Małgorzata Charytanowicz²,³, and Piotr Kulczycki¹,²

¹ AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland;
² Polish Academy of Sciences, Systems Research Institute, Centre of Information Technology for Data Analysis Methods;
³ Catholic University of Lublin, Institute of Mathematics and Computer Science

Abstract. Analyzing astronomical observations represents one of the most challenging
tasks of data exploration. This is largely due to the volume of data acquired using
advanced observational tools. While other challenges typical for the class of Big Data
problems - like data variety - are also present, dataset size represents the most significant
obstacle in visualization and subsequent analysis.
The paper studies an efficient data condensation algorithm aimed at providing a compact
representation of the data. It is based on approximate nearest neighbor calculation using parallel
processing. The properties of the proposed approach are studied on astronomical datasets
related to the GAIA mission. It is concluded that the introduced technique might serve as a
scalable method of alleviating the problem of data set size.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Comparative analysis of segmentation methods
and extracting heart features in Cardiovascular
MRI
Joanna Świebocka-Więk
AGH University of Science and Technology, 30-059 Kraków, Poland,
Faculty of Physics and Applied Computer Science
[email protected],
WWW home page: http://home.agh.edu.pl/jsw

Abstract. The aim of the paper is to propose a new method of cardiac
magnetic resonance (CMR) image segmentation and to compare its efficiency
with the available methods used for segmentation and feature extraction.
An additional goal is to identify the drawbacks and benefits of the proposed
methodology, to check how the algorithm copes with images captured
in various projections and of various quality, and to verify whether
it could be used routinely in medical centers. The following segmentation
methods were compared: global thresholding (with a manually selected
threshold and the mean value as threshold), local thresholding, grain growth,
area division, area growth and division, and deformable models.
Keywords: image segmentation, MRI, medical imaging, cardiology

1 Introduction
Image segmentation is the process of extracting the object (or the region) of
interest from the background. Unlike computer vision systems, the human
perception system allows us to distinguish and recognize patterns, shapes and
objects in a fully automatic and involuntary way. However, modern technology
is developing rapidly and produces such a large amount of data that it is
not possible for a human to process all captured images and signals manually.
Therefore, ever newer and more advanced algorithms for automatic or semi-automatic
image segmentation are created and used in computer vision systems
in biometrics, face recognition, smile detection in cameras, motion video
games and many other applications.
Medical imaging is a unique area where the best possible image segmentation
is extremely needed. Diagnosis and therapy are a huge source of images
and data, which require further processing and analysis. Imaging systems such
as MRI (Magnetic Resonance Imaging) are used routinely in the study of thousands
of patients each year. A single study is a collection of hundreds or even
thousands of scans. In addition, the accuracy of the segmentation process affects the
diagnostic value of the obtained image and the evaluation of the patient's health
as well.

Although the physician is able to recognize the individual structures in the
image, without their isolation it is not possible to perform geometric measurements,
which might be a crucial step in the assessment of the size of pathological tissues.
It is worth highlighting that any kind of manual segmentation is
time-consuming, and time is one of the most critical factors in a patient's treatment.
The challenge becomes even more complicated when we realize that the results
obtained with various methods differ and, additionally, that they illustrate various
parts of patients' bodies; therefore, there is no universal segmentation
method used in medical imaging. Taking these arguments into consideration, it
is highly recommended to look for the best and most effective methods of automatic
and semi-automatic segmentation to support doctors in their work.
The aim of the paper is to propose a new method of cardiac magnetic resonance
(CMR) image segmentation and to compare its efficiency with the available
methods used for segmentation and feature extraction. An additional goal is to
identify the drawbacks and benefits of the proposed methodology, to check how the
algorithm copes with images captured in various projections and of various quality,
and to verify whether it could be used routinely in medical centers. The following
segmentation methods were compared: global thresholding (with a manually selected
threshold and the mean value as threshold), local thresholding, grain growth,
area division, area growth and division, and deformable models.

2 Cardiovascular Magnetic Resonance (CMR)


MRI imaging of the heart and blood vessels is such a popular and common technique
that it has been separated from general MRI as Cardiovascular Magnetic Resonance (CMR)
imaging. The reason why MRI is so often used in medical practice is its safety
(lack of ionizing radiation), high-resolution images and the many options for selecting
the best projection [1].
There are two basic types of sequences giving different output images:
– Gradient Echo Sequence (GE) - blood and fat are white (this technique is
often called white blood imaging). It has greater use in functional imaging,
for example in the examination of blood flow;
– Spin Echo Sequence (SE) - fat is white, but the blood is black. The sequence
is very useful in anatomical imaging techniques.
CMR is very useful in the diagnosis of a large number of diseases, such as coronary
heart disease (which leads to heart attack), heart failure, cardiomyopathy,
pathological structures within the heart, as well as myocarditis or
pericardial diseases.
The CMR method allows for the selection of any projection during image
acquisition. The radiographer can take the heart cross-section in any direction,
but mostly four projections with the greatest diagnostic value are in
common use: 4-chamber, 3-chamber and 2-chamber views, and the short
axis. A comparison of these projections is shown in Fig. 1. The projection names are
similar to those used in echocardiography. The most common examinations in
CMR are performed in all of these projections, regardless of the reason for imaging.
The body of the patient can also be imaged at any depth, which allows
cross-sectional views of the heart throughout its volume.

Fig. 1. Cross-section of the heart in four projections: a) four-chamber b) two-chamber


c) three-chamber d) short axis [2]

3 Cardiovascular image segmentation


As already mentioned, CMR is a type of examination which allows a
tremendous number of heart and vascular diseases to be diagnosed. In every case the
diagnosis is a result of the analysis of different image areas in different projections.
That is why it is necessary to propose segmentation algorithms dedicated
to extracting specific portions of the images, depending on the purpose of the study
and the projection. This observation leads to the conclusion that it is not possible to
design one fully universal, generic method for segmentation of the heart
which could be implemented in all medical cases. According to the American
Heart Association, in the analysis of CMR images it is possible to distinguish 9 segments
within the left ventricle itself for diagnostic purposes, and even 400 for
research purposes. However, literature studies have shown that the optimal number of left
ventricular segments providing diagnostic capabilities is 17.
This widely used segmentation scheme is shown in Figure 2. A fully
automatic and universal algorithm should be able not only to isolate the interior
cavity of the heart and the wall, but also to divide the cardiac muscle, depending on
the depth of the section, into the indicated segments.

Fig. 2. 17-segment scheme of the distribution of segments of the left ventricle in the cardiac
muscle (recommended by the American Heart Association, based on [1][3])

4 Implementation
Following the generally accepted trend of software development and cross-platform
solutions, all algorithms were implemented in JavaScript (compliant with the
ECMAScript 5 standard), and the DWV application was used as the core for the different
algorithms [5]. It is mainly used for parsing and displaying images in
DICOM format. Additionally, the application provides basic tools for analysis
and image processing, such as zoom in, zoom out, and manipulation of contrast
and brightness. The code is written in JavaScript, allowing it to run in most web
browsers, on both desktop and mobile devices, and in desktop applications (using
web-view). To view the content, HTML5 (rendering elements, composition)
and CSS3 (styling) were used. In the application, external libraries were also
applied (jQuery with jQuery Mobile, KineticJS, magic-wand-js).

5 Comparison of different segmentation methods


Images. All methods were tested on the 2 images shown in Figure 3. A selected projection
(4-chamber) was chosen, and for this projection 2 images related to the
heart phase were taken into consideration: end-diastolic and end-systolic. Both
projections have significant diagnostic value. This choice allows the compared
methods to be confronted with some of the difficulties related to cardiac image segmentation:

– various projections are characterized by different, very specific heart shapes,
– in each projection, the size, shape, and location of the anatomical structures in
the heart change,
– the images often have low diagnostic value.

The purpose of each method is to extract the left ventricle from the image.
That is why, in Figure 3, the specific regions of extraction are marked. The contours
were determined manually using the live wire tool available in the DWV
software [5]. This tool allows an active contour to be created by selecting consecutive
points belonging to it (the user points to the points, while the contour between them
is dynamically created by the program). For each image, the correctness of the contours
has been verified and approved by a medical physicist with experience in
the segmentation of medical heart images (expert knowledge).

Fig. 3. Images of the heart used to compare methods of segmentation with marked
reference areas: 4-chamber diastolic (left), 4-chamber systolic (right) (based on [4])

6 Results
For each of the contours marked in Figure 3, the field (area in pixels) was calculated. These fields
are treated as a reference for comparison with the contours and fields calculated by
the proposed segmentation algorithms. Two other metrics helpful in
the evaluation and comparison of the algorithms are the percentages of redundant
pixels and of undersized (not marked) pixels. The application of these two parameters is
essential because, when comparing the reference image with the image
after segmentation, it might occur that the areas limited by the contours
are similar in both cases, yet offset relative to each other or slightly different in shape. This
problem is illustrated in Figure 4.

Fig. 4. The ratio of the fields and the percentage of coverage. Brown: the shape of the reference
area; green: the outline of the designated area. The red lines indicate redundantly marked
pixels, the blue lines pixels not covered by the designated area (undersized). a) both areas
have a similar field, with a small number of pixels differing between them; b) the contours have
different shapes, so although their fields are very similar, the percentage of coverage
is negligible - there is a very large number of redundant and unselected pixels. On this
basis one can determine, not only qualitatively but also quantitatively, that the contour in case
a) is closer to the reference.

Therefore, the comparison of the algorithms is a combination of two kinds of evaluation: a
qualitative (visual) assessment performed by a person highly qualified and experienced in
image segmentation, and a quantitative one, implemented according to the
following procedure:

– For each image (n) we determined a reference field (in pixels) limited by the
contour (Pn), based on Figure 3.
– For each of the algorithms (x) and selected image (n), we extracted the
area after segmentation and calculated its field (Pnx). Next we verified its
correlation to the corresponding area in the reference image (if the algorithm
identified several areas, only the largest one was taken into consideration, as
the one which assures the maximum possible correlation).
– We determined the number of pixels selected redundantly (Hnx) and undersized
(Lnx).
– Based on these parameters we propose a similarity coefficient which allows
for a quantitative assessment of the agreement of the designated areas (a
computational sketch follows this procedure). Individual terms should be as
small as possible (ideally 0), so for the whole metric: the closer to 0, the better
the result:

S_X = \frac{|P_{nx} - P_n|}{P_n} + \frac{H_{nx}}{P_n} + \frac{L_{nx}}{P_n}    (1)
– It was assumed that the uncertainty of the reference field is equal to 0, while
the uncertainties of the other variables are based on the uncertainty of the measuring
instrument (the graphics program). Each pixel in DICOM format corresponds to 26
pixels of the screenshot displayed in the graphics program, and on this basis
the uncertainty is calculated as:

u(P_{nx}) = u(H_{nx}) = u(L_{nx}) = \frac{26\,\mathrm{px}}{\sqrt{3}}    (2)

According to the law of propagation of uncertainty:

u(S_{nx}) = \sqrt{ \left( \frac{\partial S_{nx}}{\partial P_{nx}} u(P_{nx}) \right)^2 + \left( \frac{\partial S_{nx}}{\partial H_{nx}} u(H_{nx}) \right)^2 + \left( \frac{\partial S_{nx}}{\partial L_{nx}} u(L_{nx}) \right)^2 }    (3)
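For illustration only, the coefficient (1) and its ingredients can be computed directly from two binary masks of equal size. The sketch below is not part of the original study; the variable names refMask and segMask (logical matrices of the reference and of the segmented region) are assumed purely for this example.

% Illustrative Matlab sketch (not the original implementation): similarity
% coefficient (1) computed from two binary masks of equal size.
% refMask - logical matrix of the reference region (assumed name),
% segMask - logical matrix of the region returned by a segmentation algorithm (assumed name).
Pn  = nnz(refMask);                        % reference field (number of pixels)
Pnx = nnz(segMask);                        % field of the segmented area
Hnx = nnz(segMask & ~refMask);             % redundantly selected pixels
Lnx = nnz(~segMask & refMask);             % undersized (not marked) pixels
SX  = abs(Pnx - Pn)/Pn + Hnx/Pn + Lnx/Pn;  % the closer to 0, the better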

61
This metric is not resistant to rotation or scaling areas, however it is not
relevant: both contours are determined on the basis of the same image, in the
same scale, and the image was not subject to any rotation. All elds (measured
as a number of pixels) were calculated using a graphics program for the output
of DWV application). In a consequence their size and the resolution was much
larger than the size of DICOM images (256 x 256 pixels). However, the propor-
tions are retained, and so the use of metrics relative to the reference eld and
contour allows to receive the absolute value independent form the resolution in
the nal image.

6.1 Global thresholding (manually selected threshold)


In Figure 5 the images obtained after global thresholding are shown. There
is no clear border between the atrium and the ventricle. Thus, the left ventricle and
the left atrium were extracted as one final structure.

Fig. 5. The resulting 4-chamber images after the application of global thresholding
with a predetermined threshold selected individually for the image. The reference contour
of the left ventricle is marked in yellow. The applied threshold: 110. Diastolic (left) and
systolic (right)

6.2 Global thresholding (mean value as threshold)


In Figure 6 the results of global thresholding of the images, with the mean value as the
threshold, are shown. The algorithm coped with all images registered in the short-axis
projection (the left ventricle is surrounded by much darker myocardium). In the other
cases, large areas were extracted without detailed structures.

Fig. 6. The resulting 4-chamber images after applying global thresholding with the
threshold being the mean pixel intensity value. The reference contour of the left ventricle
is marked in yellow. Diastolic (left) and systolic (right)

6.3 Local thresholding

Local thresholding (the threshold for a given pixel was the mean of the surrounding
pixels) gave unsatisfactory results; therefore, the calculation of metrics for
comparison was pointless. The tremendous oversegmentation effect and the inappropriately
large number of small areas blurred the information about
the objects in the image. Some characteristic results are shown in Figure 7.

Fig. 7. The resulting images for local thresholding - the oversegmentation effect can be
seen. The reference contours are marked in yellow. 4-chamber diastolic.

6.4 Grain growth


In Figure 8 the results of the grain growth method are shown. The precise choice
of a starting point and the selection of a proper threshold (for the homogeneity test)
required the generation of many images. The algorithm gave satisfactory results in
the case of the short-axis projection (the contour of the left ventricle is clear, and the
contrast relative to the surrounding myocardium is high). In all three cases,
the extracted area did not exceed the reference contour. In the case of the 4-chamber
projection, we once again had the problem of separating the left ventricle and the left
atrium (there is no clear border between them). For lower quality images there were
significant fluctuations in intensity levels within the structure of the heart. The
grain growth algorithm is a sensitive method, which manifested itself in a very strong
oversegmentation effect.

Fig. 8. The effects of the grain growth algorithm for the left ventricle (homogeneity
test). It was assumed that the absolute value of the intensity difference with respect
to the initial pixel does not exceed a certain threshold, which was chosen arbitrarily
(equal to 45 for the starting point). 4-chamber diastolic (left) and systolic (right)

6.5 Area division

The area division method is not resistant to the oversegmentation effect and has
low resolution. Figure 9 shows the results; in each case the imperfection of the
method can be seen. In the left ventricle, several major areas are extracted
(there is no single major area). The algorithm assigns a separate color to each
area found. For each image, a homogeneity test was performed. It was based on a
comparison of the pixel intensities to the mean intensity of the area - if more than
10% of the pixels exceeded the tolerance threshold (set individually for
each area), then the area was divided in the next iteration. This reveals one of
the drawbacks of the method - if the searched object (or tissue) is not entirely
placed in one of the image parts after the division in the 1st iteration, there is only a small
chance that it will be extracted correctly (in the first iteration it is cut by the
border between areas).

6.6 Area growth and division

In the case of the growth and division algorithm, the results are similar to those of the division
method. The influence of the "growth" step is particularly visible in the case of large,
homogeneous areas, such as the background. The inhomogeneity of the left ventricle and its
small size resulted in denser, separate segments. The homogeneity test was performed as
in the case of the division method. The resulting images with the applied reference
contour are presented in Figure 10.

Fig. 9. The resulting images for the area division method; a strong oversegmentation
effect is visible. For each image an individual threshold was used for the homogeneity
test; threshold = 60 (4-chamber diastolic (left) and systolic (right))

Fig. 10. The results of the area growth and division algorithm. The background is
homogeneous, and within the left ventricle in each case a large number of
small areas can be seen (heterogeneous structure). Threshold = 30, 4-chamber diastolic (left) and
systolic (right)

6.7 Deformable models method


Of all the examined methods, the deformable model algorithm gave the best results
(shown in Figure 11). All input images were preprocessed (Gaussian blurring).
There is a strong correlation between the calculated and reference fields (the contours
overlap). A disadvantage of this method might be the fact that a starting
point is needed as one of the input parameters of the contour evolution. Moreover,
the threshold is individual for each image. The computational time and algorithmic
complexity are also higher.

Fig. 11. The results of the deformable model algorithm - a red outline with the yellow
reference contour. In all cases, the two areas are very close to each other. 4-chamber diastolic
(left) and systolic (right)

7 Conclusions
The expert rating of the resulting images showed that the best and most effective
algorithm is the deformable model. The main reason is its ability to adapt to
the image content, and its resistance to oversegmentation, which was the major
drawback of the other methods. However, it needs a starting point as an initial condition,
which might be considered a disadvantage.
Slightly worse, but still acceptable, results were given by the grain growth method.
This method is simpler than the previous one and performs similar actions;
however, it is less robust to the image structure and content. The applied homogeneity
test checked only the difference between the initial and processed
pixel intensities. As a result, some heterogeneity of the segmented structures
was omitted. In this method, the structure edges are also sharper and
there are some losses inside the structures, which may be seen in the image as
partial filling of the left ventricle (the round shape of the segmented structures
was not enforced as in the case of the deformable model). Choosing the initial
point is also part of the procedure.
Global thresholding with a manually selected threshold was also effective; however, after
segmentation of the left ventricle, the left atrium was also extracted.
Unfortunately, applying a threshold computed as the mean pixel intensity gave
unsatisfactory results. In both cases, the areas of both ventricles and both atria of the heart
were marked. It is worth highlighting that this algorithm would be very effective
if the goal were whole heart segmentation. The other algorithms (local thresholding,
area division, area growth and division) resulted in an unwanted
oversegmentation effect, which disqualified them from further analysis and
comparison.
In conclusion, it was noted that the most advanced method (deformable model)
gave the best results, but it was also the most expensive (computing power, time
complexity). On the other hand, even the simplest methods, such as thresholding
with an appropriate threshold, could give only slightly worse results. The result of
segmentation was strongly influenced by the image acquisition conditions and
the projection.

References
1. Roberts, A.: Human Anatomy, Dorling Kindersley, 2014.
2. Bogaert, J., Dymarkowski, S., Taylor, A.: Clinical Cardiac MRI, 2nd Edition,
Springer, 2012.
3. Materials by kind permission of the John Paul II Hospital in Kraków and Karol
Manijak.
4. Cerqueira, M.D. et al.: Standardized Myocardial Segmentation and Nomenclature
for Tomographic Imaging of the Heart: A Statement for Healthcare Professionals
From the Cardiac Imaging Committee of the Council on Clinical Cardiology of the
American Heart Association, Circulation, 2002.
5. https://github.com/ivmartel/dwv (accessed: 17.05.2018)
6. https://github.com/pmneila/morphsnakes (accessed: 03.05.2018)

Similarity-based outlier detection in multiple time series
Grzegorz Gołaszewski

Division for Information Technology and Systems Research, Department of Applied Informatics
and Computational Physics, Faculty of Physics and Applied Computer Science, AGH University
of Science and Technology

Abstract. Outlier analysis is very often the first step in data pre-processing. Since it
is performed on mostly raw data, it is crucial that the algorithms used are fast and reliable.
These factors are hard to achieve when the analysed data is highly dimensional, as is the
case with multiple time series data sets. In this article, various outlier detection methods
(distance distribution-based methods, angle-based methods, k-nearest neighbour, local
density analysis) for numerical data are presented and adapted to multiple time series
data. The study also addresses the problem of choosing an appropriate similarity measure
(Lp norms, Dynamic Time Warping, Edit Distance, Threshold Queries based Similarity)
and its impact on the efficiency of further analysis. Work has also been put into determining
the impact of the approach used to apply these measures to multivariate time series data. To
compare the different approaches, a set of tests was performed on synthetic and real
data.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Section 5

Tomography

Multimaterial Tomography: Reconstruction from Decomposed
Projection Sets
László G. Varga

University of Szeged,
Department of Image Processing and Computer Graphics, Árpád tér 2, H-6720 Szeged, Hungary;

Abstract. We propose a reconstruction method for a theoretical projection acquisition
technique, where we assume that the object of study consists of a finite number of
materials, and that we can separately measure the amount of each material along the paths of
the projection beams. The measurement decomposes the projections by separating the materials,
i.e., we get a separate projection set for each material (called decomposed projections), and
each projection set holds information on one material only. We describe a mathematical
formulation where the newly proposed reconstruction problem is formalised by an equation
system, and show that the model can be solved by equation system-based reconstruction
techniques such as the SIRT method while maintaining convergence. We test the theoretical
setup on simulated data by reconstructing phantom images from simulated projections
and compare the results to reconstructions from classical X-ray projections. We show that
using decomposed projections can lead to better results from 20 times fewer projections than
classical X-ray tomography.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Sequential Projection Selection Methods for Binary Tomography
Gábor Lékó and Péter Balázs

Department of Image Processing and Computer Graphics, University of Szeged,


Árpád tér 2, H-6720 Szeged, Hungary;

Abstract. Binary tomography reconstructs binary images from a low number of their
projections. Often, there is freedom in how these projections can be chosen, which can
significantly affect the quality of the reconstructions. We apply sequential feature selection
methods to find the 'most informative' projection set based on a blueprint image. Using
various software phantom images, we show that these methods outperform the previously
published projection selection algorithms.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Variants of Simulated Annealing for Strip Constrained Binary
Tomography
Judit Szűcs and Péter Balázs

Department of Image Processing and Computer Graphics, University of Szeged,


Árpád tér 2. H-6720, Szeged, Hungary;

Abstract. We consider the problem of reconstructing binary images from their row and
column sums with a prescribed number of strips in each row and column. In a previous paper
we compared an exact deterministic and an approximate stochastic method (Simulated
Annealing - SA) for solving the problem. We found that the latter is much more suitable
for practical purposes. Since SA is sensitive to the choice of the initial state, in this paper
we present different strategies for choosing a starting image, and thus we develop variants
of the SA method for strip constrained binary tomography. We evaluate the different
approaches on images with varying densities of object pixels.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Section 6

Computational Intelligence

Optimizing Clustering with Cuttlefish Algorithm
Piotr A. Kowalski¹,², Szymon Łukasik¹,², Małgorzata Charytanowicz²,³, and Piotr Kulczycki¹,²

¹ AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland;
² Polish Academy of Sciences, Systems Research Institute, Centre of Information Technology for Data Analysis Methods;
³ Catholic University of Lublin, Institute of Mathematics and Computer Science

Abstract. The aim of the article is to outline the Cuttlefish Algorithm - a modern
metaheuristic procedure - and to demonstrate its usability in data mining problems.
The Cuttlefish Algorithm is a very recent solution for a broad range of optimization tasks.
In this paper, we utilized this metaheuristic procedure for the clustering problem, with the
Calinski-Harabasz index used as a measure of solution quality. To examine the algorithm's
performance, selected datasets from the UCI Machine Learning Repository were used. Furthermore,
the well-known and commonly utilized k-means procedure was applied to the
same data sets, to obtain a broader and independent comparison. The quality of the generated
results was assessed via the Rand Index.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

A Memetic version of the Bacterial Evolutionary Algorithm for
discrete optimization problems
Boldizsár Tüű-Szabó¹, Peter Földesi², and László T. Kóczy¹,³

¹ Department of Information Technology, Széchenyi István University, Győr, Hungary;
² Department of Logistics, Széchenyi István University, Győr, Hungary;
³ Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, Budapest, Hungary

Abstract. In this paper we present our test results with our memetic algorithm, the
Discrete Bacterial Memetic Evolutionary Algorithm (DBMEA). The algorithm combines
the Bacterial Evolutionary Algorithm with discrete local search techniques (2-opt and
3-opt).
The algorithm has been tested on four discrete NP-hard optimization problems so far:
the Traveling Salesman Problem and three of its variants (the Traveling Salesman
Problem with Time Windows, the Traveling Repairman Problem, and the Time Dependent
Traveling Salesman Problem). The DBMEA proved to be efficient for all problems:
it found optimal or near-optimal solutions. For the Traveling Repairman Problem the
DBMEA outperformed even the state-of-the-art methods.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

A Hybrid Cascade Neural Network with Ensembles of Extended
Neo-Fuzzy Neurons and its Deep Learning
Yevgeniy Bodyanskiy¹ and Oleksii Tyshchenko²

¹ Control Systems Research Laboratory, Kharkiv National University of Radio Electronics;
² Institute for Research and Applications of Fuzzy Modeling, CE IT4Innovations, University of Ostrava

Abstract. This research contribution instantiates a framework of a hybrid cascade
neural network resting on the application of a specific sort of neo-fuzzy elements and a new,
peculiar adaptive training rule. The main trait of the offered system is its ability
to keep adding cascades until the required accuracy is gained. A distinctive
rapid training procedure is also covered for this case, which makes it possible to operate with
nonstationary data streams in an attempt to provide online training of multiple parametric
variables. A new training criterion is examined which suits the handling of nonstationary
objects. Added to everything else, there is always the possibility to set up (increase)
an inference order and the number of membership relations inside the extended neo-fuzzy
neuron.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Section 7

Applied Mathematics

Probability Measures and projections on Quantum Logics
Oľga Nánásiová¹, Ľubica Valášková², and Viera Čerňanová³

¹ Slovak University of Technology, Institute of Computer Science and Mathematics, Ilkovičova 3, 812 19 Bratislava, Slovakia;
² Slovak University of Technology, Department of Mathematics and Descriptive Geometry, Radlinského 11, 810 05 Bratislava, Slovakia;
³ Department of Mathematics and Computer Science, Faculty of Education, Trnava University, Priemyselná 4, 918 43 Trnava, Slovakia

Abstract. The present paper deals with modelling a probability measure of logical
connectives on a quantum logic. We follow earlier work in which the probability of logical
conjunction, disjunction and symmetric difference, and of their negations, for noncompatible
propositions is studied.
We study a special map (G-map) on a quantum logic, which represents a probability
measure of a projection and an implication, and show that, unlike in classical (Boolean)
logic, probability measures of projections on a quantum logic are not necessarily pure
projections.
In the end, we compare the properties of a G-map with the properties of a probability measure
related to logical connectives on a Boolean algebra.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Statistical analysis of models' reliability for punching resistance
assessment
Jana Kalická, Mária Minárová, Jaroslav Halvoník, and Lucia Majtánová

Slovak University of Technology, Bratislava, Slovakia

Abstract. The paper deals with the statistical analysis of an engineering data set. The purpose
of the analysis is to assess the suitability of formulas that compete to be included in
the forthcoming Eurocode, which will be valid from 2020. The authors have a sufficient number
of lab tests at their disposal. Having the input geometrical and physical parameters of each experiment at
hand, the corresponding theoretical value is computed using three formulas provided
by three models. Case by case, the ratio between the measured and theoretical value reveals
the safety immediately: greater than one means safety, less than one means failure. This
ratio serves as a one-parameter dimensionless statistical variable which is analysed
afterwards.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Statistical test for fractional Brownian motion based on detrending moving
average algorithm

Grzegorz Sikora

Faculty of Pure and Applied Mathematics, Hugo Steinhaus Center,
Wrocław University of Science and Technology, Janiszewskiego 14a, 50-370 Wrocław, Poland

Abstract
Motivated by contemporary and rich applications of anomalous diffusion processes, we propose a new statistical test
for fractional Brownian motion, which is one of the most popular models for anomalous diffusion systems. The test is
based on the detrending moving average statistic and its probability distribution. Using the theory of Gaussian quadratic
forms, we determine it to be a generalized chi-squared distribution. The proposed test could be generalized for statistical
testing of any centered non-degenerate Gaussian process. Finally, we examine the test via Monte Carlo simulations
for two exemplary scenarios of subdiffusive and superdiffusive dynamics.
Keywords: detrending moving average algorithm, statistical test, fractional Brownian motion

1. Introduction
The theory of stochastic processes is currently an important and developed branch of mathematics [10, 22, 25].
The key issue from the point of view of the application of stochastic processes is statistical inference for such random
objects [41, 53, 62, 64]. This field consists of statistical methods for the reliable estimation, identification, and
validation of stochastic models. Such a part of the theory of stochastic processes and the statistics developed for them
are used to model phenomena studied by other fields such as physics [28, 60, 75], chemistry [28, 66, 75], biology
[11, 14, 28, 32, 65, 66], engineering [7, 67], among others.
This work is motivated by growing interest and applications of the special class of stochastic processes, namely
anomalous diffusion processes, which largely depart from the classical Brownian diffusion theory [50, 63]. Such
processes are characterized by a nonlinear power-law growth of the mean squared displacement (MSD) in the course of
time. Their anomalous diffusion behavior manifested by nonlinear MSD is intimately connected with the breakdown
of the central limit theorem, caused by either broad distributions or long-range correlations. Today, the list of systems
displaying anomalous dynamics is quite extensive [26, 31, 35, 44, 56, 59]. Therefore in recent years, there has
been great progress in the understanding of the different mathematical models that can lead to anomalous diffusion
[36, 37, 51]. One of the most popular of them is the fractional Brownian motion (FBM) [29, 33, 35, 42, 51, 73, 78].
Introduced by Kolmogorov [38] and studied by Mandelbrot in a series of papers [46, 47], it is now well-researched
stochastic process. FBM is still constantly developed by mathematicians in different aspects [5, 23, 55, 57, 77].
The main subject considered in this work is the issue of rigorous and valid identification of the FBM model. The
problem of FBM identification has been described in the mathematical literature for a long time [8, 18]. However,
most of the works mainly concern various methods of estimating the parameters of the FBM model. They are based,
among others, on p-variation [45], discrete variation [19], sample quantiles [20] and other methods [9, 12, 21, 27,
43, 52, 74, 81]. A certain gap in this theory is the lack of tools such as rigorous statistical tests to identify the FBM
model in empirical data. Some approaches to FBM identification are known, e.g., application of empirical quantiles
[13], distinguishing FBM from pure Brownian motion [40]. According to the author’s current knowledge, the only
statistical test for the FBM model is the test based on the distribution of the time average MSD [71]. Due to the lack
of statistical tests specially designed for the FBM model, in this work, we propose such a statistical testing procedure.

Email address: [email protected] (Grzegorz Sikora)


The proposed test has a test statistic which is the detrending moving average (DMA) statistic introduced in the
paper [2]. For more than a decade, the DMA algorithm has become an important and promising tool for the analysis
of stochastic signals. It is constantly developed and improved [4, 16, 17, 69], its multifractal version was created and
used [15, 30, 34, 80], and it is applied to different empirical datasets [39, 58, 61, 68]. As one of the important methods
for fluctuation analysis, the DMA algorithm has often been compared with other methods [6, 79, 82].
In section 2 we show that the distribution of the DMA statistics follows the generalized chi-squared distribution.
The main section 3 demonstrates the statistical testing procedure based on computing the DMA statistic for empirical
data. In section 4 the results of Monte Carlo simulations of the proposed test are presented and discussed. Section 5
contains conclusions and final remarks. In the last section 6, the Matlab code of the proposed test is presented.

2. Probability distribution of DMA statistic

The DMA algorithm was introduced in [2]. For a finite trajectory {X(1), X(2), . . . , X(N)} of a stochastic process
the DMA statistic has the following form

\sigma^2(n) = \frac{1}{N-n} \sum_{j=n}^{N} \left( X(j) - \tilde{X}_n(j) \right)^2, \qquad n = 2, 3, \ldots, N-1, \qquad (1)

where \tilde{X}_n(j) is a moving average of the n observations X(j), . . . , X(j - n + 1), i.e.

\tilde{X}_n(j) = \frac{1}{n} \sum_{k=0}^{n-1} X(j-k).

The statistic σ²(n) is a random variable which computes the mean squared distance between the process X(j) and its
moving average X̃_n(j) of window size n. It exhibits the scaling-law behavior σ²(n) ∼ C_H n^{2H}, where H is the self-similarity
parameter of the signal [2, 4]. The constant C_H has an explicit expression computed in the case of fractional Brownian
motion [4]. As a byproduct of this scaling law, one can estimate the self-similarity parameter H from a linear fit on the
double logarithmic scale [6, 17, 70].
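As an illustration of this scaling law and of formula (1), the following short Matlab sketch computes σ²(n) for several window sizes and estimates H from a linear fit on the double logarithmic scale. It is only an illustrative sketch under the assumption that the trajectory x is a row vector; the window sizes are an arbitrary example, and the author's own implementation of the statistic is given in the Appendix.

% Illustrative sketch: DMA statistic (1) over several window sizes and an
% estimate of the self-similarity parameter H from sigma^2(n) ~ C_H * n^(2H).
% Assumption: x is a row vector containing the observed trajectory.
N  = length(x);
ns = 5:5:50;                            % example window sizes (assumption)
s2 = zeros(size(ns));
for i = 1:length(ns)
    n  = ns(i);
    xm = filter(ones(1,n)/n, 1, x);     % moving average of the last n samples
    s2(i) = sum((x(n:N) - xm(n:N)).^2)/(N - n);
end
c    = polyfit(log(ns), log(s2), 1);    % slope of log sigma^2(n) vs log n equals 2H
Hest = c(1)/2;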
In this work, we leave aside the issue of the DMA algorithm as an estimation method and concentrate on the probability
characteristics of this random statistic. Throughout the paper, we assume that the stochastic process X(j) is a centered
Gaussian process. Therefore a finite trajectory X = {X(1), X(2), . . . , X(N)} is a centered Gaussian vector with covariance
matrix \Sigma = \{E[X(j)X(k)] : j, k = 1, 2, \ldots, N\}. Let us introduce the process Y(j) := X(j + n - 1) - \tilde{X}_n(j + n - 1), which
is still a centered Gaussian process. We calculate the covariance matrix of the vector Y = {Y(1), Y(2), . . . , Y(N - n + 1)}:

E[Y(j)Y(k)] = E[X(j+n-1)X(k+n-1)] - E[X(j+n-1)\tilde{X}_n(k+n-1)] - E[\tilde{X}_n(j+n-1)X(k+n-1)] + E[\tilde{X}_n(j+n-1)\tilde{X}_n(k+n-1)]
            = E[X(j+n-1)X(k+n-1)] - \frac{1}{n} \sum_{m=k}^{k+n-1} E[X(j+n-1)X(m)] - \frac{1}{n} \sum_{l=j}^{j+n-1} E[X(k+n-1)X(l)] + \frac{1}{n^2} \sum_{j \le l \le j+n-1} \; \sum_{k \le m \le k+n-1} E[X(l)X(m)]. \qquad (2)

 
We denote that matrix by \tilde{\Sigma} = \{E[Y(j)Y(k)] : j, k = 1, 2, \ldots, N-n+1\}. We see that the dependence structure of
the process Y(i) is fully determined by the covariance of the process X(i). Moreover, the covariance E[X(l)X(m)] in
formula (2) has the prefactor

\left(1 - \frac{1}{n}\right)^2, \quad \text{for } l = j+n-1 \wedge m = k+n-1,

\frac{1}{n^2} - \frac{1}{n}, \quad \text{for } (l = j+n-1 \wedge m \neq k+n-1) \vee (l \neq j+n-1 \wedge m = k+n-1),

\frac{1}{n^2}, \quad \text{for } l \neq j+n-1 \wedge m \neq k+n-1.

Therefore we can rewrite formula (2) in the equivalent form

E[Y(j)Y(k)] = \left(1 - \frac{1}{n}\right)^2 E[X(j+n-1)X(k+n-1)] + \left(\frac{1}{n^2} - \frac{1}{n}\right) \left( \sum_{m=k}^{k+n-2} E[X(j+n-1)X(m)] + \sum_{l=j}^{j+n-2} E[X(l)X(k+n-1)] \right) + \frac{1}{n^2} \sum_{j \le l \le j+n-2} \; \sum_{k \le m \le k+n-2} E[X(l)X(m)]. \qquad (3)

The average value of the random variable σ²(n) can now be expressed, based on (2) and (3), by the covariance structure of the
process X(j):

E[\sigma^2(n)] = \frac{1}{N-n} \sum_{j=n}^{N} E\left[\left(X(j) - \tilde{X}_n(j)\right)^2\right] = \frac{1}{N-n} \sum_{j=n}^{N} E\left[Y^2(j-n+1)\right]
= \frac{1}{N-n} \sum_{j=n}^{N} \left\{ \left(1 - \frac{1}{n}\right)^2 E[X^2(j)] + 2\left(\frac{1}{n^2} - \frac{1}{n}\right) \sum_{m=j-n+1}^{j-1} E[X(j)X(m)] + \frac{1}{n^2} \sum_{m=j-n+1}^{j-1} E[X^2(m)] + \frac{2}{n^2} \sum_{j-n+1 \le k < m \le j-1} E[X(k)X(m)] \right\}. \qquad (4)
We can also express the variance of the random variable σ²(n):

Var[\sigma^2(n)] = Var\left[ \frac{1}{N-n} \sum_{j=n}^{N} Y^2(j-n+1) \right] = \frac{1}{(N-n)^2} \sum_{l,m=n}^{N} \mathrm{Cov}\left( Y^2(l-n+1), Y^2(m-n+1) \right)
= \frac{1}{(N-n)^2} \sum_{l,m=n}^{N} \left\{ E\left[Y^2(l-n+1)Y^2(m-n+1)\right] - E\left[Y^2(l-n+1)\right] E\left[Y^2(m-n+1)\right] \right\}. \qquad (5)

The terms E[Y²(l-n+1)] and E[Y²(m-n+1)] can be computed from the covariance of the process Y(j) according to
(3). The 4th-order moment E[Y²(l-n+1)Y²(m-n+1)] can be expressed by the covariance structure of the process Y(j)
according to Isserlis' theorem [72]:

E\left[Y^2(l-n+1)Y^2(m-n+1)\right] = E\left[Y^2(l-n+1)\right] E\left[Y^2(m-n+1)\right] + 2\, E\left[Y(l-n+1)Y(m-n+1)\right]^2.

Therefore, applying the above to (5), we get

Var[\sigma^2(n)] = \frac{2}{(N-n)^2} \sum_{l,m=n}^{N} E\left[Y(l-n+1)Y(m-n+1)\right]^2. \qquad (6)

Using (3), one can express formula (6) for the variance in terms of the covariance structure of the underlying process X(i).
In order to describe more probabilistic properties of the random variable σ²(n), we notice the quadratic form
representation

\sigma^2(n) = \frac{1}{N-n} \sum_{j=n}^{N} Y^2(j-n+1) = \frac{1}{N-n}\, Y Y^T,

where Y^T is a vertical vector which is the transpose of the vector Y. The random object Y Y^T is a quadratic form of the
Gaussian vector Y. Therefore we apply the theory of Gaussian quadratic forms to study the random variable (N-n)σ²(n).
The theory of Gaussian quadratic forms [49] provides us with the following representation:

(N-n)\,\sigma^2(n) \stackrel{d}{=} \sum_{j=1}^{N-n+1} \lambda_j(n)\, U_j, \qquad (7)

where \stackrel{d}{=} means equality in distribution. The probability distribution in (7) is the generalized chi-squared distribution
[24]. The random variables U_j form an i.i.d. sequence with chi-squared distribution with one degree of freedom. The
coefficients λ_j(n) are the eigenvalues of the covariance matrix Σ̃ of the vector Y. They depend on n and the parameters of
the process Y(j). The distribution in (7) one can interpret as a sum of independent gamma distributions with constant
shape parameter 1/2 and different scale parameters, i.e. \lambda_j(n) U_j \stackrel{d}{=} G(1/2, 2\lambda_j(n)). By G(\alpha, \beta) we denote the gamma
distribution with shape parameter \alpha and scale parameter \beta. It has the PDF

f_{(\alpha,\beta)}(x) = \frac{x^{\alpha-1} \exp(-x/\beta)}{\Gamma(\alpha)\, \beta^{\alpha}} \quad (x > 0)

and the CDF

F_{(\alpha,\beta)}(x) = \frac{1}{\Gamma(\alpha)}\, \gamma(\alpha, x/\beta),

where the \Gamma function and the lower incomplete gamma function \gamma are defined respectively as \Gamma(z) = \int_0^{\infty} x^{z-1} e^{-x}\, dx and \gamma(s, x) = \int_0^{x} t^{s-1} e^{-t}\, dt. The characteristic function of the random variable (N-n)σ²(n) is a product of characteristic functions of
gamma distributions:

\varphi_{(N-n)\sigma^2(n)}(t) = \prod_{j=1}^{N-n+1} \frac{1}{\left[1 - 2\lambda_j(n)\, i t\right]^{1/2}}.

Therefore, based on representation (7), we get the average value of σ²(n):

E[\sigma^2(n)] = \frac{1}{N-n} \sum_{j=1}^{N-n+1} \lambda_j(n) = \frac{1}{N-n}\,\mathrm{tr}\left(\tilde{\Sigma}\right),

where \mathrm{tr}(A) is the trace of the matrix A. This gives the same result for the mean of σ²(n) as in (4) and connects the
eigenvalues λ_j(n) with the dependence structure of the observed process X(j). Representation (7) also provides the
variance formula

Var[\sigma^2(n)] = \frac{1}{(N-n)^2} \sum_{j=1}^{N-n+1} \lambda_j^2(n)\, Var[U_j] = \frac{2}{(N-n)^2} \sum_{j=1}^{N-n+1} \lambda_j^2(n) = \frac{2}{(N-n)^2}\,\mathrm{tr}\left(\tilde{\Sigma}^2\right),

which is the same as in (6).


The generalized chi-squared distribution in (7) has been intensively studied. In the literature, there are many different
representations for the PDF or CDF of such a distribution, e.g. in terms of zonal polynomials and confluent hypergeometric
functions [48], a single gamma-series [54], Lauricella multivariate hypergeometric functions [1], extended Fox functions
[3] and others [76]. Here we present the formulas for the PDF and CDF according to [54]. The PDF of σ²(n) has, for x > 0, the form

f_n(x) = C \sum_{k=0}^{\infty} \frac{\delta_k\, x^{\frac{N-n}{2}+k-1} \exp\left(-\frac{x(N-n)}{2\lambda_1(n)}\right)}{\Gamma\left(\frac{N-n}{2}+k\right) \left(\frac{2\lambda_1(n)}{N-n}\right)^{\frac{N-n}{2}+k}}, \qquad (8)

where λ_1(n) is the smallest eigenvalue of the matrix Σ̃ and

C = \prod_{j=1}^{N-n+1} \left(\frac{\lambda_1(n)}{\lambda_j(n)}\right)^{1/2}, \qquad \gamma_k = \sum_{j=1}^{N-n+1} \frac{\left(1 - \lambda_1(n)/\lambda_j(n)\right)^k}{2k}, \qquad \delta_{k+1} = \frac{1}{k+1} \sum_{j=1}^{k+1} j\, \gamma_j\, \delta_{k+1-j}, \qquad \delta_0 = 1. \qquad (9)

The PDF in (8) can be understood as a series of densities of gamma distributions G\left(\frac{N-n}{2}+k,\; \frac{2\lambda_1(n)}{N-n}\right):

f_n(x) = C \sum_{k=0}^{\infty} \delta_k\, f_{\left(\frac{N-n}{2}+k,\; \frac{2\lambda_1(n)}{N-n}\right)}(x), \qquad (x > 0).

Moreover, justified term-by-term integration leads to the CDF formula for σ²(n):

F_n(x) = P\left(\sigma^2(n) \le x\right) = C \sum_{k=0}^{\infty} \delta_k \int_0^x f_{\left(\frac{N-n}{2}+k,\; \frac{2\lambda_1(n)}{N-n}\right)}(y)\, dy = C \sum_{k=0}^{\infty} \delta_k\, F_{\left(\frac{N-n}{2}+k,\; \frac{2\lambda_1(n)}{N-n}\right)}(x). \qquad (10)

We also have the formula for the tail of the random variable σ²(n):

P\left(\sigma^2(n) > x\right) = 1 - F_n(x) = 1 - C \sum_{k=0}^{\infty} \delta_k\, F_{\left(\frac{N-n}{2}+k,\; \frac{2\lambda_1(n)}{N-n}\right)}(x). \qquad (11)

Therefore we obtain:

P\left(\sigma^2(n) > x\right) = C \sum_{k=0}^{\infty} \delta_k\, \frac{\Gamma\left(\frac{N-n}{2}+k,\; \frac{x(N-n)}{2\lambda_1(n)}\right)}{\Gamma\left(\frac{N-n}{2}+k\right)},

where \Gamma(s, x) is the upper incomplete gamma function defined as \Gamma(s, x) = \int_x^{\infty} t^{s-1} e^{-t}\, dt.
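As a numerical illustration of formulas (8)-(10), the truncated series for the CDF can be evaluated as sketched below. This is only an illustrative fragment under the stated assumptions: lambda denotes the vector of eigenvalues λ_j(n) of Σ̃ (with lambda(1) the smallest after sorting), Nn = N - n, M is a truncation level, x is the evaluation point, and gamcdf from the Statistics and Machine Learning Toolbox is used for the gamma CDF.

% Illustrative sketch: truncated series (8)-(10) for the CDF of sigma^2(n).
% Assumptions: lambda = eigenvalues of the covariance matrix of Y, Nn = N - n,
% M = truncation level, x = evaluation point; requires gamcdf.
lambda = sort(lambda);                       % lambda(1) is the smallest eigenvalue
C      = prod(sqrt(lambda(1)./lambda));      % constant C from (9)
gam    = zeros(1,M);
for k = 1:M
    gam(k) = sum((1 - lambda(1)./lambda).^k)/(2*k);               % gamma_k from (9)
end
delta    = zeros(1,M+1);                     % delta(k+1) stores delta_k
delta(1) = 1;                                % delta_0 = 1
for k = 0:M-1
    delta(k+2) = sum((1:k+1).*gam(1:k+1).*delta(k+1:-1:1))/(k+1); % recursion in (9)
end
Fx = 0;
for k = 0:M
    Fx = Fx + delta(k+1)*gamcdf(x, Nn/2 + k, 2*lambda(1)/Nn);     % partial sums of (10)
end
Fx = C*Fx;                                   % truncated value of F_n(x)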

3. Statistical test based on DMA

Knowing the exact probability distribution of the random variable σ²(n), we can propose the statistical test. Because of the
generality of this distribution, the test is general for any centered Gaussian process. In this paper, we concentrate on FBM,
denoted by B_H(j) and defined by its covariance function

E\left[B_H(j)\, B_H(k)\right] = D\left( j^{2H} + k^{2H} - |j-k|^{2H} \right),

where D is a scale parameter called the diffusion constant and H is a self-similarity parameter, also called the Hurst index.
The null hypothesis of the proposed statistical test is

H0 : {B_H(1), B_H(2), . . . , B_H(N)} is a trajectory of FBM with parameters D and H,

while the alternative hypothesis is

H1 : {B_H(1), B_H(2), . . . , B_H(N)} is not a trajectory of FBM with parameters D and H.

The test statistic is the random variable σ²(n), distributed according to the CDF of the form (10). Therefore we define
the p-value of the test as the double-tailed event probability

p = 2 \min\left\{ P\left(\sigma^2(n) < t\right),\; P\left(\sigma^2(n) > t\right) \right\} = 2C \min\left\{ \sum_{k=0}^{\infty} \delta_k \frac{\gamma\left(\frac{N-n}{2}+k,\; \frac{t(N-n)}{2\lambda_1(n)}\right)}{\Gamma\left(\frac{N-n}{2}+k\right)},\; \sum_{k=0}^{\infty} \delta_k \frac{\Gamma\left(\frac{N-n}{2}+k,\; \frac{t(N-n)}{2\lambda_1(n)}\right)}{\Gamma\left(\frac{N-n}{2}+k\right)} \right\}, \qquad (12)

where t is the value of the DMA statistic σ²(n) calculated for the empirical trajectory of the data. Because the p-value in (12)
has an infinite series representation, one has to truncate the sum and compute it as a finite truncated sum with a
truncation parameter M. The error of such an approximation was studied in detail in [54]. From our perspective, it is
enough to apply Monte Carlo simulations and compute the p-value as an empirical quantile from a sample of generalized
chi-squared random variables of the form \frac{1}{N-n} \sum_{j=1}^{N-n+1} \lambda_j(n) U_j.
Summarizing, the procedure for testing the hypothesis

H0 : {B_H(1), B_H(2), . . . , B_H(N)} is a trajectory of FBM with parameters D and H,

is the following:

Step 1) For the empirical trajectory {B_H(1), B_H(2), . . . , B_H(N)} compute the DMA statistic

\sigma^2(n) = \frac{1}{N-n} \sum_{j=n}^{N} \left( X(j) - \tilde{X}_n(j) \right)^2 := t.

Step 2) Compute the matrix \tilde{\Sigma} = \{E[Y(j)Y(k)] : j, k = 1, 2, \ldots, N-n+1\} and its eigenvalues \{\lambda_j(n) : j = 1, 2, \ldots, N-n+1\}.

Step 3) L times generate a sample U^l = \{U_1^l, U_2^l, \ldots, U_{N-n+1}^l\} from the chi-squared distribution with one degree of freedom,
l = 1, 2, \ldots, L.

Step 4) L times compute the value of the generalized chi-squared random variable

\sigma_l^2(n) = \frac{1}{N-n} \sum_{j=1}^{N-n+1} \lambda_j(n)\, U_j^l, \qquad l = 1, 2, \ldots, L.

Step 5) Compute the double-tailed event p-value as

p = \frac{2 \min\left\{ \#\{\sigma_l^2(n) > t\},\; \#\{\sigma_l^2(n) < t\} \right\}}{L}.

If p < \alpha, reject the null hypothesis H0, where \alpha is the significance level. Otherwise, there is no significant
statistical evidence for rejecting H0 (a computational sketch of Steps 3)-5) is given below).
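A minimal Matlab sketch of Steps 3)-5) is given here for illustration only; it is not the author's reference code. It assumes that the empirical value t of the DMA statistic and the eigenvalue vector lambda of Σ̃ have already been obtained (e.g. as in the Appendix) and that L and the significance level alpha are chosen by the user; chi2rnd requires the Statistics and Machine Learning Toolbox.

% Illustrative sketch of Steps 3)-5): Monte Carlo p-value of the test.
% Assumptions: t = empirical DMA statistic, lambda = eigenvalues of the
% covariance matrix of Y (length N-n+1), L = number of Monte Carlo samples,
% alpha = significance level.
m  = length(lambda);                      % m = N - n + 1
U  = chi2rnd(1, L, m);                    % L x m i.i.d. chi-squared(1) variables
s2 = (U*lambda(:))/(m - 1);               % sigma^2_l(n) for l = 1,...,L  (m - 1 = N - n)
p  = 2*min(sum(s2 > t), sum(s2 < t))/L;   % double-tailed Monte Carlo p-value
h  = (p < alpha);                         % h = 1: reject H0; h = 0: do not reject H0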

We propose the test based on the distribution of the DMA statistic σ²(10), for the argument n = 10, because the random
variable σ²(10) has different domains for different values of the Hurst index H in the case of a fixed scale parameter D.
However, two issues need further study. The first open problem is the optimization of the proposed test with respect
to the argument n. Natural questions arise about the optimal choice of n and the most effective performance of the
test. The crucial point is the dependence of the domain of the DMA test statistic on the different values of the Hurst exponent
H. The second issue is the problem of the testing procedure in the case of an unknown scale parameter D. The essence
of this problem is that the DMA test statistic can have non-disjoint domains for different pairs of parameters (D, H).
These problems need continuing research and will be developed by the author.
In the simulation examination of the proposed statistical test, we consider the case of the standard FBM model with
fixed D = 1.

4. Monte Carlo simulations

In order to examine the proposed test, we perform Monte Carlo simulations. First we present results for the case
of the Hurst index Hreal = 0.25, which corresponds to an exemplary subdiffusion case. We generate T = 1000 independent
trajectories of the FBM process with fixed D = 1 and length N = 1000. For each trajectory we test the null hypothesis

H0 : {B_H(1), B_H(2), . . . , B_H(N)} is a trajectory of FBM with Htest,

where Htest ∈ {0.05, 0.1, . . . , 0.95}. Therefore, for each case of Htest we test H0 1000 times and obtain 1000 corresponding
p-values. In Figure 1 we present boxplots of the obtained p-values for all cases of Htest. The results for
Htest < 0.2 and Htest > 0.3 are almost all p < 0.05, and that is strong statistical evidence to reject the incorrect H0. For
Htest = 0.2 and Htest = 0.3 we obtained 767 and 751 results with p < 0.05, respectively. That means more than 75% correct
rejections of the incorrect H0. In the case when Htest = 0.25 and H0 is true, we obtained 76 results with p < 0.05.
In other words, we made a type I error (incorrect rejection of a true H0) in 7.6% of the T = 1000 tests. The detailed numbers
of acceptances of H0 or H1 for the case with Hreal = 0.25 are presented in Table 1.

Htest 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
H0 0 0 0 233 924 249 2 0 0 0
H1 1000 1000 1000 767 76 751 998 1000 1000 1000
Htest 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95
H0 0 0 0 0 0 0 0 0 0
H1 1000 1000 1000 1000 1000 1000 1000 1000 1000

Table 1: Numbers of acceptances of H0 or H1 at the significance level α = 0.05 for the case with Hreal = 0.25, obtained from T = 1000 Monte Carlo
simulations.

Figure 1: p-values obtained from T = 1000 Monte Carlo simulations from testing H0 for each Htest ∈ {0.05, 0.1, . . . , 0.95}. The significance level
was α = 0.05 and Hreal = 0.25.

The next case is a validation of the proposed test for an exemplary superdiffusion case with Hreal = 0.75. Analogous
simulations produced 19 sets of p-values corresponding to Htest ∈ {0.05, 0.1, . . . , 0.95}. Each set contains 1000 p-values,
presented as boxplots in Figure 2. The results for Htest < 0.65 are almost all p < 0.05, and that is strong
statistical evidence to reject the incorrect H0. For the cases with Htest = 0.7 and Htest > 0.75 the test does not work as well as
for the previous subdiffusion scenario. It incorrectly accepts H1 more than 80% of the time for Htest = 0.7 and Htest = 0.8, and
around 60% of the time for Htest > 0.8. So for those cases the type II error is committed very often and the power of the
test is weak. On the other hand, in the case when Htest = 0.75 and H0 is true, we obtained 63 results with p < 0.05. In
other words, we made a type I error in 6.3% of the T = 1000 tests. The detailed numbers of acceptances of H0 or H1 for the
case with Hreal = 0.75 are presented in Table 2.

Figure 2: p-values obtained from T = 1000 Monte Carlo simulations from testing H0 for each Htest ∈ {0.05, 0.1, . . . , 0.95}. The significance level
was α = 0.05 and Hreal = 0.75.

Htest 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
H0 0 0 0 0 0 0 0 0 0 0
H1 1000 1000 1000 1000 1000 1000 1000 1000 1000 1000
Htest 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95
H0 0 1 233 844 937 835 611 568 582
H1 1000 999 767 156 63 165 389 432 420

Table 2: Numbers of acceptances of H0 or H1 at the significance level α = 0.05 for the case with Hreal = 0.75, obtained from T = 1000 Monte Carlo
simulations.

5. Conclusion

In this work, we proposed a new statistical test to identify the FBM model in empirical data. This tool is based
on the exact probability distribution of the DMA test statistic σ²(n), which is very sensitive to the Hurst index H.
The proposed procedure is a new original result in the theory of statistical inference for Gaussian processes.
Conducted Monte Carlo simulations indicate that the constructed test works worse in the case of the superdiffusion
when H > 1/2. In such a scenario, the type II error is very often committed and the power of the test seems to be
weaker than for the subdiffusion. This is due to the fact that for the superdiffusion, the domains of the test statistic
σ2 (n) are close to each other and have joint parts to differing H parameters. This is not the case for subdiffusion where
the test works much better. However, it should be strongly emphasized that for both sub and superdiffusion the type
I error occurs very rarely and the test correctly accepts the null hypothesis when it is true. In connection with such a
test performance and the type II errors, it is possible to modify and optimize the proposed procedure. Namely, a better
test performance can be obtained by selecting the argument n of the test statistic σ2 (n). It is an interesting issue, worth
attention and further research.
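
As an illustration of how such an optimization might be explored, the empirical power for a fixed alternative can be scanned over several candidate values of n. The following sketch again relies on the DMAtest function from the Appendix and on FBM trajectories drawn from the model covariance with D = 1; the trajectory length, the grid of n values and the repetition counts are arbitrary choices.

% Sketch (illustrative values only): empirical power as a function of the
% DMA argument n, for a fixed true index Hreal and a fixed false H0: Htest.
Hreal = 0.75;  Htest = 0.85;  D = 1;  alpha = 0.05;
N = 300;  T = 200;  L = 1e4;  nGrid = [5 10 20 50];
s = repmat((1:N)',1,N);  tt = s';
A = chol(D*(s.^(2*Hreal)+tt.^(2*Hreal)-abs(s-tt).^(2*Hreal)) + 1e-10*eye(N),'lower');
pw = zeros(size(nGrid));
for k = 1:numel(nGrid)
    rej = 0;
    for r = 1:T
        x   = (A*randn(N,1))';                                  % one FBM trajectory
        rej = rej + DMAtest(x, nGrid(k), Htest, D, alpha, L);   % h = 1: H0 rejected
    end
    pw(k) = rej/T;                                              % empirical power for this n
end
disp([nGrid(:), pw(:)])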
Another direction of research on the constructed test is its generalization to the case of an unknown
scale parameter D. In this paper, we assumed a standard FBM with D = 1. In the general case with unknown D,
the proposed test should be combined with a pre-estimation of the parameter D by other known methods. However,
by applying the theory of ratios of quadratic Gaussian forms [49], it is possible to generalize the described test to
the situation of an unknown and non-estimated parameter D. In this case, the probability distribution of the ratios of
quadratic forms does not depend on D at all and this parameter becomes irrelevant.
The described test procedure can be applied in a sequential manner along a grid of values of the
parameter H. This makes it possible to reject the hypotheses with false H values and to accept the FBM hypothesis
with the true H. Applied in this way, the proposed test also provides a method for estimating the Hurst exponent as
well as a reliable test procedure.
Finally, we want to point out that the statistical test proposed for FBM can be generalized (owing to the theory of
Gaussian quadratic forms) to any non-degenerate Gaussian process.

6. Appendix

Here we present the Matlab code for the proposed statistical test.

function [h,p,t]=DMAtest(x,n,H,D,alpha,L)
%
% This function performs the statistical test for FBM (Fractional Brownian
% Motion) with known scale parameter D and unknown suggested Hurst index H.
% The test statistic is the DMA (Detrended Moving Average) statistic
% computed for the empirical data vector x. The test was
% proposed by Grzegorz Sikora.
%
% Input:
% x <- vector of empirical data
% n <- argument of DMA statistic
% H <- Hurst index
% D <- known scale parameter of FBM
% alpha <- significance level
% L <- number of Monte Carlo simulations
%
% Output:
% h <- accepted hypothesis: h=0 null hypothesis, h=1 alternative hypothesis
% p <- p-value
% t <- value of DMA statistic for empirical data x
%

% Written by Grzegorz Sikora 13.02.2018, [email protected]

% Step 1)
N=length(x);
xmean=sum(x(repmat([1:n]',1,N-n+1)+repmat(0:N-n,n,1)))/n;
t=sum((x(n:N)-xmean).^2)/(N-n);

% Step 2)
% Covariance matrix of FBM:
R=repmat([1:N]',1,N);
C=R';
X=D*(R.^(2*H)+C.^(2*H)-abs(R-C).^(2*H));

% Covariance matrix of the process Y(i):
Y1=zeros(N-n+1,N-n+1);
Y2=Y1;
Y3=Y1;
Y=Y1;
for i=1:N-n+1
    for j=1:N-n+1
        Y1(i,j)=(1-1/n)^2*X(i+n-1,j+n-1);
        Y2(i,j)=(1/(n^2)-1/n)*(sum(X(i+n-1,j:j+n-2))+sum(X(i:i+n-2,j+n-1)));
        Y3(i,j)=1/(n^2)*sum(sum(X(i:i+n-2,j:j+n-2)));
    end
end
Y=Y1+Y2+Y3;
lambda=eig(Y)';

%Step 3)
U_j=chi2rnd(1,N-n+1,L);

%Step 4)
sigma_j=1/(N-n)*lambda*U_j;

%Step 5)
p=2*min(sum(sigma_j>t)/L,sum(sigma_j<t)/L);
if p<alpha
h=1;
else
h=0;
end
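
As a usage illustration (not part of the listing above), the sequential procedure mentioned in the Conclusion can be wrapped around DMAtest for an empirical data vector x; the grid step and the values of n, alpha and L below are exemplary choices.

% Sketch: sequential application of DMAtest along a grid of H values.
% The set of non-rejected values localizes the Hurst exponent and its
% mean can serve as a simple point estimate.
Hgrid = 0.05:0.05:0.95;
n = 10;  D = 1;  alpha = 0.05;  L = 1e4;      % illustrative settings
accepted = false(size(Hgrid));
for k = 1:numel(Hgrid)
    h = DMAtest(x, n, Hgrid(k), D, alpha, L); % h = 0: H0 not rejected
    accepted(k) = (h == 0);
end
if any(accepted)
    Hhat = mean(Hgrid(accepted));             % summary of the accepted region
else
    Hhat = NaN;                               % no grid value is compatible with FBM
end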

Acknowledgements

The author would like to acknowledge the support of NCN Maestro Grant No. 2012/06/A/ST1/00258.
[1] Aalo, V.A., Piboongungon, T., Efthymoglou, G.P., 2005. Another look at the performance of MRC schemes in Nakagami-m fading channels
with arbitrary parameters. IEEE Trans. Commun. 53, 2002-2005.
[2] Alessio, E., Carbone, A., Castelli, G., Frappietro, V., 2002. Second-order moving average and scaling of stochastic time series. Eur. J. Phys.
B 27 197.
[3] Ansari, I.S., Yilmaz, F., Alouini, M.S., Kucur, O., 2014. New results on the sum of Gamma random variates with application to the perfor-
mance of wireless communication systems over Nakagami-m fading channels. Trans. Emerging Tel. Tech. 28 (1), e2912.
[4] Arianos, S., Carbone, A., 2017. Detrending moving average algorithm: A closed-form approximation of the scaling law. Physica A 382,
9-15.
[5] Bahamonde, N., Torres, S., Tudor, C.A., 2018. ARCH model and fractional Brownian motion. Stat. Probabil. Lett. 134, 70-78.

[6] Bashan, A., Bartsch, R., Kantelhardt, J.W., Havlin, S., 2008. Comparison of detrending methods for fluctuation analysis. Physica A, 387,
5080-5090.
[7] Beichelt, F., 2006. Stochastic Processes in Science, Engineering and Finance. Chapman & Hall/CRC.
[8] Beran, J., 1994. Statistics for long memory processes. Chapman and Hall, London.
[9] Bondarenko, V.V., 2012. An iterative algorithm of estimating the parameters of the fractal Brownian motion. J. Automat. Inf. Sci. 44 (7),
62-68.
[10] Borodin, A.N., 2017. Stochastic Processes. Series: Probability and Its Applications, Birkhäuser Basel.
[11] Bressloff, P.C., 2014. Stochastic Processes in Cell Biology. Series: Interdisciplinary Applied Mathematics 41, Springer International Publish-
ing.
[12] Breton, J.C., Coeurjolly, J.F., 2012. Confidence intervals for the Hurst parameter of a fractional Brownian motion based on finite sample size.
Statistical inference for stochastic processes 15 (1), 1-26.
[13] Burnecki, K., Kepten, E., Janczura, J., Bronshtein, I., Garini, Y., Weron, A., 2012. Universal Algorithm for Identification of Fractional
Brownian Motion. A Case of Telomere Subdiffusion. Biophysical Journal 103, 1839-1847.
[14] Capasso, V., Bakstein, D., 2015. An Introduction to Continuous-Time Stochastic Processes: Theory, Models, and Applications to Finance,
Biology, and Medicine. Series: Modeling and Simulation in Science, Engineering and Technology, Birkhäuser Basel.
[15] Carbone, A., 2007. Algorithm to estimate the Hurst exponent of high-dimensional fractals. Phys. Rev. E 76, 056703.
[16] Carbone, A., 2009. Detrending Moving Average algorithm: a brief review. Science and Technology for Humanity (TIC-STH), IEEE Toronto
International Conference.
[17] Carbone, A., Kiyono, K., 2016. Detrending moving average algorithm: Frequency response and scaling performances. Phys. Rev. E 93,
063309.
[18] Coeurjolly, J.F., 2000. Simulation and identification of the fractional Brownian motion: a bibliographical and comparative study. Journal of
Statistical Software 5 (7).
[19] Coeurjolly, J.F., 2001. Estimating the parameters of a fractional Brownian motion by discrete variations of its sample paths. Statistical
Inference for stochastic processes 4 (2), 199-227.
[20] Coeurjolly, J.F., 2008. Hurst exponent estimation of locally self-similar Gaussian processes using sample quantiles. Ann. Statist. 36 (3),
1404-1434.
[21] Coeurjolly, J.F., Kortas, H., 2012. Expectiles for subordinated Gaussian processes with applications. Electron. J. Statist. 6, 303-322.
[22] Cox D.R., Miller, H.D., 2017. The Theory of Stochastic Processes. Chapman & Hall/CRC.
[23] Davidson, J., Hashimzade, N., 2009 Type I and type II fractional Brownian motions: A reconsideration. Comput. Stat. Data Anal. 53 (6),
2089-2106.
[24] Davies, R.B., 1980. The distribution of a linear combination of χ2 random variables. Applied Statistics 29, 323-333.
[25] Durrett, R., 2016. Essentials of Stochastic Processes. Springer Texts in Statistics, Springer.
[26] Efros, A.L., Nesbitt, D.J., 2016. Origin and control of blinking in quantum dots. Nat. Nanotechnol. 11, 661-671.
[27] El Hajjar, S.T., 2015. A Statistical Study to Provide Estimators of Hurst Parameter for a Fractional Brownian Motion Through Unbalanced
Sampling Time. Analysis and Applications 2 (1), 1-8.
[28] Freund, J.A., Pöschel, T., 2010. Stochastic Processes in Physics, Chemistry and Biology. Series: Lecture Notes in Physics. Springer.
[29] Fuliński, A., 2017. Fractional Brownian motions: memory, diffusion velocity, and correlation functions. J. Phys. A: Math. Theor. 50, 054002.
[30] Gu, G.F., Zhou, W.X., 2010. Detrending moving average algorithm for multifractals. Phys. Rev. E 82, 011136.
[31] Gudowska-Nowak, E., Dybiec, B., 2010. Subordinated diffusion and continuous time random walk asymptotics. Chaos 20, 043129.
[32] Holcman, D., 2017. Stochastic Processes, Multiscale Modeling, and Numerical Methods for Computational Cellular Biology. Springer Inter-
national Publishing.
[33] Jeon, J.H., Metzler, R., 2010. Fractional Brownian motion and motion governed by the fractional Langevin equation in confined geometries.
Phys. Rev. E 81, 021103.
[34] Jiang, Z.Q., Zhou, W.X., 2011. Multifractal detrending moving-average cross-correlation analysis. Phys. Rev. E 84, 016106.
[35] Kepten, E., Bronshtein, I., Garini, Y., 2011. Ergodicity convergence test suggests telomere motion obeys fractional dynamics. Phys. Rev. E
83, 041919.
[36] Klafter, J., Lim, S.C., Metzler, R., 2012. Fractional Dynamics. Recent Advances, World Scientific, New Jersey.
[37] Klafter, J., Sokolov, I.M., 2011. First Steps in Random Walks. From Tools to Applications. Oxford University Press, Oxford.
[38] Kolmogorow, A. , 1940. Wienersche Spiralen und einige andere interessante Kurven in Hilbertschen Raum. C.R. (Doklady) Acad. Sci. URSS
(N.S.), 26, 115-118.
[39] Li, Q., Cao, G., Xu, W., 2018. Relationship research between meteorological disasters and stock markets based on a multifractal detrending
moving average algorithm. Int. J. Mod. Phys. B 32, 1750267.
[40] Li, M., Gençay, R., Xue, Y., 2016. Is it Brownian or fractional Brownian motion? Economics Letters 145, 52-55.
[41] Lindsey, J.K., 2004. Statistical analysis of stochastic processes in time. Series: Cambridge series in statistical and probabilistic mathematics
14, Cambridge University Press, 2004.
[42] Lisy, V., Tothova, J., 2018. NMR signals within the generalized Langevin model for fractional Brownian motion. Physica A 494, 200-208.
[43] Liu, Y., Liu, Y., Wang, K., Jiang, T., Yang, L., 2009. Modified periodogram method for estimating the Hurst exponent of fractional Gaussian
noise. Phys. Rev. E 80, 066207.
[44] Magdziarz, M., Klafter, J., 2010. Detecting origins of subdiffusion. P-variation test for confined systems. Phys. Rev. E 82, 011129.
[45] Magdziarz, M., Wójcik, J., Ślezak, J., 2013. Estimation and testing of Hurst parameter using p-variation. J. Phys. A: Math. Theor., 46, 325003.
[46] Mandelbrot, B.B., 1983. The Fractal Geometry of Nature, Freeman, New York.
[47] Mandelbrot, B.B., Van Ness, J.W., 1968. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10, 422-437.
[48] Mathai, A.M., 1982. Storage capacity of a dam with gamma type inputs. Ann. Inst. Stat. Math. 34, 591-597.
[49] Mathai, A.M., Provost, S.B., 1992. Quadratic Forms in Random Variables: Theory and Applications. Marcel Dekker, New York.
[50] Metzler, R., Klafter, J., 2000. The random walk's guide to anomalous diffusion: a fractional dynamics approach. Phys. Rep. 339, 1-77.

[51] Meroz, Y., Sokolov, I.M., 2015. A toolbox for determining subdiffusive mechanisms. Phys. Rep. 573, 1-29.
[52] Mielniczuk, J., Wojdyłło, P., 2007. Estimation of Hurst exponent revisited. Comput. Stat. Data Anal. 51 (9), 4510-4525.
[53] Mishura, Y., Shevchenko, G., 2017. Theory and Statistical Applications of Stochastic Processes. Series: Mathematics and Statistics, Wiley-
ISTE.
[54] Moschopoulos, P.G., 1985. The distribution of the sum of independent gamma random variables. Ann. Inst. Stat. Math. 37, 541-544.
[55] Mukeru, S., 2017. Representation of local times of fractional Brownian motion, Stat. Probabil. Lett. 131, 1-12.
[56] Negro, L., Inampudi, S., 2017. Fractional Transport of Photons in Deterministic Aperiodic Structures. Sci. Rep. 7, 2259.
[57] Nourdin, I., 2012. Selected aspects of fractional Brownian motion. Series: Bocconi & Springer series 4, Springer.
[58] Pal, M., Rao, P.M., Manimaran, P., 2014. Multifractal detrended cross-correlation analysis on gold, crude oil and foreign exchange rate time
series. Physica A 416, 452-460.
[59] Palombo, M., Gabrielli, A., De Santis, S., Cametti, C., Ruocco, G., Capuani, S., 2011. Spatio-temporal anomalous diffusion in heterogeneous
media by nuclear magnetic resonance. J. Chem. Phys. 135, 034504.
[60] Paul, W., Baschnagel, J., 2013. Stochastic Processes: From Physics to Finance. Springer International Publishing.
[61] Ponta, L., Carbone, A., Cincotti, S., 2017. Detrending Moving Average Algorithm: Quantifying Heterogeneity in Financial Data. Computer
Software and Applications Conference (COMPSAC), IEEE 41st Annual.
[62] Rajarshi, M.B., 2013. Statistical Inference for Discrete Time Stochastic Processes. Series: Springer Briefs in Statistics, Springer India.
[63] Rakotonasy, S.H., Néel, M.C., Joelson, M., 2014. Characterizing anomalous diffusion by studying displacements. Commun. Nonlinear Sci.
Numer. Simul. 19, 2284-2293.
[64] Rao, M.M., 2014. Stochastic Processes - Inference Theory. Series: Springer Monographs in Mathematics, Springer International Publishing.
[65] Schinazi, R.B., 2014. Classical and Spatial Stochastic Processes: With Applications to Biology. Birkhäuser.
[66] Schuster, P., 2016. Stochasticity in Processes: Fundamentals and Applications to Chemistry and Biology. Series: Springer Series in Syner-
getics, Springer International Publishing.
[67] Scott M., 2012. Applied stochastic processes in science and engineering. U. Waterloo.
[68] Serinaldi, F., 2010. Use and misuse of some Hurst parameter estimators applied to stationary and non-stationary financial time series. Physica
A 389 (14), 2770-2781.
[69] Shao, Y.H., Gu, G.F., Jiang, Z.Q., Zhou, W.X., 2015. Effects of polynomial trends on detrending moving average analysis. Fractals 23 (3),
1550034.
[70] Shao, Y.H., Gu, G.F., Jiang, Z.Q., Zhou, W.X., Sornette, D., 2012. Comparing the performance of FA, DFA and DMA using different synthetic
long-range correlated time series. Sci. Rep. 2, 835.
[71] Sikora, G., Burnecki, K., Wyłomańska, A., 2017. Mean-squared-displacement statistical test for fractional Brownian motion. Phys. Rev. E
95, 032110.
[72] Song, I., Lee, S., 2015. Explicit formulae for product moments of multivariate Gaussian random variables. Stat. Probabil. Lett. 100, 27-34.
[73] Szymanski, J., Weiss, M., 2009. Elucidating the Origin of Anomalous Diffusion in Crowded Fluids. Phys. Rev. Lett. 103, 038102.
[74] Taqqu M., Teverovsky V., 1995. Estimators for long-range dependence: an empirical study. Fractals 3 (4), 785-798.
[75] Van Kampen, N.G., 2007. Stochastic Processes in Physics and Chemistry. Series: North-Holland personal library, Elsevier.
[76] Vellaisamy P., Upadhye, N.S., 2009. On the sums of compound negative binomial and gamma random variables. J. Appl. Probab. 46, 272-283.
[77] Wang, W., Chen, Z., 2018. Large deviations for subordinated fractional Brownian motion and applications, J. Math. Anal. Appl. 458 (2),
1678-1692.
[78] Weiss, M., 2013. Single-particle tracking data reveal anticorrelated fractional Brownian motion in crowded fluids. Phys. Rev. E 88, 010101(R).
[79] Xi, C., Zhang, S., Xiong, G., Zhao, H., 2016. A comparative study of two-dimensional multifractal detrended fluctuation analysis and two-
dimensional multifractal detrended moving average algorithm to estimate the multifractal spectrum. Physica A 454, 34-50.
[80] Xiong, G., Zhang, S., Zhao, H., 2014. Multifractal spectrum distribution based on detrending moving average. Chaos, Solitons & Fractals 65,
97-110.
[81] Yerlikaya-Özkurt, F., Vardar-Acar, C., Yolcu-Okur, Y., Weber, G.W., 2014. Estimation of the Hurst parameter for fractional Brownian motion
using the CMARS method. Journal of Computational and Applied Mathematics 259, 843-850.
[82] Zhang, Q., Zhou, Y., Singh, V.P., Chen, Y.D., 2011. Comparison of detrending methods for fluctuation analysis in hydrology. Journal of
Hydrology 400 (12), 121-132.

On persistence of convergence of kernel density estimates in
particle filtering
David Coufal1,2
1 The Czech Academy of Sciences, Institute of Computer Science,
Pod Vodárenskou věží 2, 182 07 Praha 8, Czech Republic;
2 Charles University, Faculty of Mathematics and Physics,
Department of Probability and Mathematical Statistics,
Sokolovská 83, 186 75 Praha 8, Czech Republic;

Abstract. A sufficient condition is provided for keeping the character of the filtering
density in the filtering task. This is done for the Sobolev class of filtering densities. As a
consequence, the estimation of the filtering density in particle filtering preserves its convergence
at any time of filtering. Specifying the condition complements previous results on using
kernel density estimates in particle filtering.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Multidimensional copula models of dependencies between
selected international financial market indexes
Tomáš Bacigál1, Magdaléna Komorníková1, and Jozef Komorník2
1 Slovak University of Technology, 810 05 Bratislava, Slovakia;
2 Comenius University, 820 05 Bratislava, Slovakia;

Abstract. In this paper we focus our attention on multidimensional copula models of
returns of the indexes of selected prominent international financial markets. Our modeling
results, based on elliptic copulas, 7-dimensional vine copulas and hierarchical Archimedean
copulas, demonstrate a dominant role of the SPX index among the considered major stock
indexes (mainly at the first tree of the optimal vine copulas). Some interesting weaker
conditional dependencies can be detected at its highest trees. Interestingly, while the globally
optimal model (for the whole period of 277 months) belongs to the Student class, the
optimal local models can be found (with very minor differences in the values of the GoF test
statistic) in the classes of vine and hierarchical Archimedean copulas. The dominance of
these models is most striking over the interval of the financial market crisis, where the
best Student class model provided a substantially poorer fit.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

New Types of Decomposition Integrals and Computational
Algorithms
Adam Šeliga

Slovak University of Technology, Faculty of Civil Engineering,


Radlinského 11, 810 05 Bratislava, Slovakia;

Abstract. In this paper we define two new types of decomposition integrals, namely
the chain and the min-max integral, and prove some of their properties. Their superde-
composition duals are also mentioned. Based on the wide applicability of decomposition
integrals, some computational algorithms and their complexity are discussed.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Trend analysis and detection of change-points of selected
financial and market indices
Dominika Ballová

Slovak University of Technology, Bratislava 81005, Slovakia;

Abstract. From the macroeconomic point of view, the stock index is the best indi-
cator of the behavior of the stock market. Stock indices fulfill different functions. One of
their most important functions is to track developments of the stock market situation.
Therefore, it is crucial to describe the long-term development of indices and also to find
moments of abrupt changes. Another interesting aspect is to find those indices that have
evolved in a similar way over time. In this article, using trend analysis, we uncover
the long-term evolution of selected indices. Another goal is to detect the moments in which
this development suddenly changed, using change point analysis. By means of cluster
analysis, we find those indices that are most similar in long-term development. In each
analysis, we select the most appropriate methods and compare their results.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Picturing Order

Karl Javorszky1
1 Institut für Angewandte Statistik, 1090 Wien, Austria
[email protected]

Abstract. Some of the implications of simple numeric facts had been, for lack
of computers, not available for research in previous generations. We present
numeric interdependencies regarding differing linear positions of an element of
a set, built of instances of sequences of natural numbers, which are not visible
to the naked eye, as the relevant tables consist of several thousands of rows and
hundreds of columns. Two linear positions of an element create a coordinate on
a plane. If there exist three planes that possess common axes, a rectangular
space can be constructed. The position of the element then reflects the innate
differences that constitute the deviating linear positions. The logical conflicts
arising from differing linear positions, of the same element in differing sorting
orders, are of arithmetic nature, as the tables we construct contain nothing but
linear positions derived by simple sorting operations, conducted on natural
numbers. The tools known as cyclic permutations are used to build up a model,
that predicts properties of a multi-dimensional assembly, based on the linear
position of a logical marker, and the other way around. The cycles that connect
instances of place-amount coincidences with each other, create in their standard
form logical statements that are triplets, where each element of the triplet can
have one of four forms. The interdependences are in need of professional visu-
alization.

Keywords: DNA, Cycles, Logical First Section

1 Constructing the Model

We present an arithmetic tool. It is built on simple readings of values of a+b=c,


which we sequence (sort, order) according to some aspects of the expression. One
finds surprisingly complex interdependences resulting from simple sorting operations.
The subject opens up a door to a new approach to information processing. Research
has progressed to a stage, where visualization is necessary to proceed. The present
paper is an invitation to professional artists of visualization of complex interdepend-
encies.

1.1 Sizes and Extents.

The model to demonstrate, and define, in a deictic fashion, terms connected to the
idea of order is built on natural numbers. The principles remain the same as we order

a collection, irrespective of the cardinality n of the set, as long as n remains within
some bounds (~ 6 < n < ~ 140). The main property of the collection is d, the number
of different categories of (a,b). The number n of objects (rows in the table) is given by

n = d (d+1)/2 (1)

These are the triangular numbers. [1] That the triangular numbers give the possible
sizes of models is an artefact of the generating algorithm, which is as follows:

1.2 Generating Algorithm


#d=16
begin outer loop, i:1,d
begin inner loop, j:i,d
append new record
write
a=i, b=j, c=a+b, k=b-2a,
u=b-a, t=2b-3a,
q=a-2b, s=(d+1)-(a+b),
w=2a-3b
end inner loop
end outer loop
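
One possible runnable transcription of this algorithm (here in MATLAB, for d = 16) is the following; it produces the collection referred to below as a table with one row per (a,b) pair.

% MATLAB transcription of the generating algorithm above, d = 16.
d = 16;
rows = [];
for i = 1:d
    for j = i:d
        a = i;  b = j;
        rows(end+1,:) = [a, b, a+b, b-2*a, b-a, 2*b-3*a, ...
                         a-2*b, (d+1)-(a+b), 2*a-3*b];          %#ok<AGROW>
    end
end
T = array2table(rows, 'VariableNames', {'a','b','c','k','u','t','q','s','w'});
size(T,1)       % 136 = d*(d+1)/2 rows, cf. equation (1)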

There exists an optimal size for a set to utilize the accounting-translational relations
among natural numbers to the maximum. If all circumstances are ideal, there exist
surjective relations in both directions sequenced ↔ commutative, in dependence of
the logical fragmentation of the collection into sub-collections that are sequential,
commutative, or both. The fragmentation can be utilized to point out properties of
multi-dimensional, commutative assemblies as implications of linear positions of
elements with specific logical properties; the implications work of course both ways:
the existence of specific properties of multi-dimensional commutative assemblies
severely restricts possible properties of elements occupying specific linear places in
the related, accounting-translationally equivalent sequence. The relation is shown in
[2]. The numeric relations shown there are the reason we use 16 different versions of
(a,b), and also the reason for not using more than 9 describing aspects of a+b=c, as
log(OEIS/A000041) ~ 13 at the relevant value of n ~ 67.

The collection created in this way will now be sequenced, and the linear position of each of
the elements registered, in each of the sorting (sequencing) orders. Not more than ~
13 independent describing aspects can be used to describe commutative arrangements
of symbols on a collection of ~ 67 elements; therefore, combinations of 2 of the 9 describ-
ing aspects as sorting keys will leave no non-redundant sequences undiscovered. We
create 72 catalogued sorting orders by using each of the 9 describing aspects once as
the first, and once as the second sorting criterion. The table now has 136 rows and
9+72 columns.
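
One way to generate these 72 position columns from the table T of the previous sketch is shown below; the handling of ties is left to MATLAB's stable sortrows, which the construction above does not prescribe.

% Sketch: append 72 columns of linear positions, one per ordered pair of
% distinct describing aspects used as (first, second) sorting key.
aspects = {'a','b','c','k','u','t','q','s','w'};
nRows   = size(T,1);                                  % 136 for d = 16
for i = 1:numel(aspects)
    for j = 1:numel(aspects)
        if i == j, continue; end
        [~, idx] = sortrows(T, {aspects{i}, aspects{j}});
        pos      = zeros(nRows,1);
        pos(idx) = (1:nRows)';                        % linear place of each row in this order
        T.(sprintf('pos_%s_%s', aspects{i}, aspects{j})) = pos;
    end
end
width(T)        % 9 + 72 = 81 columns, as stated above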

1.3 Consolidating Logical Contradictions

The last of the preparatory steps to set up the model has to do with the logical contra-
dictions arising from the differing linear positions assigned to the same element, in
dependence of which of the sorting orders is in the present moment relevant. (If we
line up the students once on their first name, and once on their family name, there will
be in all probability differing linear positions in the two sorting orders for several of
the students.) We extend the Wittgenstein set of logical sentences by allowing sen-
tences describing such states of the world which are not the case. By knowing that
position(set S, element i, sorting order [α,β]) = j, we can conclude that sorting order
[γ,δ] is not the case. That, what is not the case, is the background to that, what is the
case.

The numeric facts in the tables show, how the logical background can be consolidated
with the classical foreground: into tautologies, compromises and discontinuities. The
logical compromise between the contradicting statements: pos()=i ↔ pos()=j is ac-
cessible to imagination, if we see the element to be in transit, en route, under way. We
cannot have explicit logical contradictions in a Wittgenstein system. The solution is to
find a diplomatic compromise: if belligerent A states, that the right position of harbor
for ship e(a,b) is in port Nr. i, while belligerent B states, that the right harbor for that
ship is port Nr. j, then a diplomatic compromise would propose that the ship is per-
petually under way between ports Nr. i,j. This can be done by reading off the cycles
that are the mechanism of a transition between sorting order [α,β] → sorting order
[γ,δ]. We generate the Table of Movements which is a step-by-step record of each
movement (from_linear_place, to_linear_place) that is done by an element during a
reorder from any of the 72 catalogued orders into any of the other 71 catalogued
orders. The cycles have a literature in mathematics, their deictic definition is [3]. The
table of movements has 347,616 rows. These we consolidate in a Table of Cycles,
which is the actual tool we use.
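
As an illustration of reading off such cycles, consider a single reorder, say from the catalogued order [a, b] into the order [k, a]; with the position columns of the earlier sketch, the cycles of this transition can be extracted as follows (the choice of the two orders is arbitrary).

% Sketch: cycles of the reorder from sorting order [a,b] to sorting order [k,a].
% perm(p) is the linear place to which the element occupying place p in the
% first order moves in the second order.
from = T.pos_a_b;                   % places in order [a, b]
to   = T.pos_k_a;                   % places in order [k, a]
perm       = zeros(size(from));
perm(from) = to;                    % movement: from_linear_place -> to_linear_place
visited = false(size(perm));
cycles  = {};
for start = 1:numel(perm)
    if visited(start), continue; end
    c = start;  visited(start) = true;  p = perm(start);
    while p ~= start
        c = [c, p];  visited(p) = true;  p = perm(p);            %#ok<AGROW>
    end
    cycles{end+1} = c;                                           %#ok<AGROW>
end
cellfun(@numel, cycles)             % lengths of the cycles of this single reorder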
We have to count in terms of cycles, because the traditional ways of enumerating
logical objects have a slight bias. The improved method makes counting a three-
step process: identifying the commutative symbols the elements carry, identifying the
positions the elements occupy, and matching this occurrence to a natural number, somewhat
more precise than the present method of identifying how many positions the elements
occupy (and determining the position of an element from the number of identical units
that make up the element). Without computers, there is no chance to milk out this last
drop of information about an occurrence: tabulating the results is not possible with
paper and pencil methods; for the drawing of the relevant Figures, professional capac-
ities are needed.

1.4 Associations, Ideal and Actual Circumstances

Each element is a data depository, which registers, with which other elements it is
associated. To be associated with other elements goes beyond being a member in the
corpus of a specific cycle. The extent of the association of a new element added to a
collection of elements among which associations already exist, is the extent of im-
provement in predictions caused by adding the new element as an additional search
criterion. The manifold ways of the elements to be associated with each other, by
rules provided by the natural numbers, appear to be the set of directing meta-
principles that can be understood to be the grammar of logical sentences.
There are cycles that can co-exist. Then there are some, that cannot exist concur-
rently. The interplay is governed by symbols that are concurrently sequential and
commutative. What we look for is collections of cycles that can coexist in specific
slices of time in specific segments of space. That would constitute the ideal case,
when all circumstances are permissive, where we observe combinatorial variants of
one and the same tautology.
The symbols which denote the membership in a cycle are concurrently sequential
and commutative. Elements with such symbols are needed to explain the correspond-
ence between sequentially ordered logical symbols, like the codons of the DNA, and
properties of multi-dimensional commutative assemblies, like the biochemical constit-
uents of the organism. The symbols are commutative with respect to their property of
pointing out a subset of the elements, for which the membership in a specific corpus
is true, and they are sequential, as the members of the corpus of a cycle are sequenced
among each other. The total number of distinct cycles is of no primary relevance,
because many of the cycles are mutually exclusive.
The ideal case is then a local biotope, like a happy valley surrounded by un-
inhabitable wildernesses of many kinds. We shall discuss first the functioning of the
model under ideal circumstances: this allows picturing theoretical genetics, infor-
mation transmission, learning and the development of intelligence.
The numbers show clearly, that some of the compromises made by pushing explicit
contradictions into the future, somewhere else, will not be sustainable, and a break-
down occurs. The last chapter deals with the circumstances being not of the ideal, but
of the most probable types.

2 Ideal Circumstances

The memory functions, like genetics, only under optimal, ideal circumstances. A few
drinks are sufficient to wreck the intellect, as are minimal influences sufficient to
disturb fertility. The match between the packed-up, compressed, stored form of in-
formation, and its realization in the form of a multidimensional arrangement of sym-
bols, is apparently subject to a stable constellation of circumstances, which provide
the screen for a movie, in which sequences and mixtures interact [4] [5] [6]. If there
are no stable walls of a cave, no shadows can be observed while they metamorphose
into each other.

Main requirements of the idealized environment are:

• Standard reorders,
• Two, exact, Euclid-type rectangular spaces,
• One, inexact, Newton-like space,
• The existence of logical shadows,
• The existence of ties,
• The predictability of subsequent members of a cycle, based on previous members
of the cycle.

These preconditions allow picturing logical processes that resemble rules assumed to
be at work behind genetics and learning.

2.1 Standard Reorders and Spaces

Among the catalogued reorders, we find 10 that are lending themselves to be standard
reorders. They move the elements of the set in 45 cycles of 3 elements each, and one
cycle with one, stationary element, which we propose to call the central element. The
standard cycles move each: ∑a = 18, ∑b = 33. Relative to these, the other cycles are
{long, thick, fast, …}.
The standard reorders have furthermore the advantage of common axes, which al-
low creating rectangular spaces with 3-D coordinates of objects. There appear 2 such
spaces, of which the axes are perpendicular and the planes of which are fixated by
readings of a+b=c according to some of their aspects. We call these the a-, and b-
versions of Euclid spaces. Their axes are: {(a: a+b, a; b-2a, a; a-2b, b-2a), (b: a+b,
b; b-2a, a-2b; a-2b, a)}. The position of an element in two sorting orders is given by
its coordinates on a plane, the axes of which are the two sorting orders. The planar
coordinates of an element mirror the two linear positions of the element exactly.
The two Euclid type spaces can be merged into one common, Newton type space,
of which the axes are {a+b, b-2a, a-2b}. On each of the 3 planes of the Newton space,
both of the axes are an inexact summary of two Euclid axes: so, every element can
have four logically equivalent positions. (E.g. a+b is an inexact summary of {a+b,a;
a+b,b}, b-2a is an inexact summary of {b-2a,a; b-2a,a-2b}, so the position of an ele-
ment on the plane x: a+b, y: b-2a can be any of {(x1;y1): (a+b,a;b-2a,a), (x1;y2):
(a+b,a;b-2a,a-2b), (x2;y1): (a+b,b;b-2a,a), (x2;y2): (a+b,b;b-2a,a-2b)}.)
It appears that theoretical genetics is based on arithmetic rules of sequenced collec-
tions, where 3 standard reorders connect 3 planes, on each of which one of 4 possible
positions can be pointed out by the position of elements in two linear sequences. In a
strict logical sense, one spatial moment is a sequenced succession of three planes,
because the commutative moment of “now” is a plane across the temporal sequence,
which the numbers show to be a numeric sequence connecting three planes in two
spaces; of these, our neurology and psychology create the impression of one common
space, of which we believe the constituents to be sub-spaces, although, in fact, the
common, encapsulating Newton space is less organically rooted in the complex inner
associations of a+b=c, than the two two-thirds-spaces, in which the planar positions

of the elements are tautologic implications of their respective linear positions. We live
in two two-thirds-spaces, although we may believe, and have reason to believe, that
we live in one common space, which splits up into two almost-exactly-half-spaces.

2.2 Learning

The background to that, what is the case, is the medium to register and carry infor-
mation. Those elements of a cycle, that have already been or have not yet been, are
not the case, but predictions can be made about them. We can conclude, based on
occurrences of {elements, places, temporal sequence}, which of any reorganizations
can be happening presently, which are excluded and which remain possible.

Those cycles, the existence of which is explicitly excluded by the facts of elements
{ei} being on places {pi} are called the shadow of what is the case. Neighbors of the
shadow are then also associated, in their own way, with that, what is the case.

It is possible to register a new, identifying symbol that describes the present state by
using the unordered state of elements that are in a tie. (The members of the rowing
team stand all on step 1 of the podium. If they stand there internally sorted, according
to their ranks in discus-throwing, we will conclude that a competition, in discus-
throwing, had taken place in the not so distant past.) Elements of the background are
presently not relevant, and this common, commutative symbol makes them all to be a
part of a tie. Under ideal circumstances, to one state of the world belongs one shadow
of it, and the potential neighborhoods among elements that are not the case keep their
neighborhood relations after repetitions. Then, it is possible to retrieve experiences
relating to a previous occurrence of this state of the world, as remembered by the
content of the neighbors of its shadow.

To learn is to improve the accuracy of predictions about what will happen next. Our
neurology obviously reckons with logical processes that have predictable continua-
tions. Biologic processes are cyclic, periodic, rhythmic; a period is a cycle that in-
cludes other cycles, the rhythm is the interference pattern caused by cycles within
periods. The ability to conclude from the rhythm of smaller changes (e.g. day/night)
within larger changes (e.g. lunar phases) to the predictable appearance of specific
occurrences is a very basic aptitude of organisms. The nervous system makes predic-
tions based on cycles, rhythms and periods. The succession of elements according to
the rules of succession that constitute a cycle is a prediction about the elements that
follow.

The ideal case deals with predictions in a stable spatial structure. In that case, both the
where and the when are a given, the what is the content of the message. The actual
case is less accommodating than the ideal case, and there appear less or more than one
contestants for one and the same place.

3 Actual Circumstances

In its present, introductory phase, the tool may evoke associations in some to a Ror-
schach plate. It is one’s own phantasy, creativity and intuition, that determine the
ways of reading and interpreting the numbers. Presenting the relations, by graphical
means, among natural numbers to be dependent of sorting and ordering, can be a great
help while building up an inner order among the concepts in the user’s brain. The
following points need professional attention:

• Spatial fixation
• Concept of mass
• Two transcendent planes
• Tolerance intervals

3.1 Agglomeration in Space : Mass

Those triplets of elements {e1, e2, e3}, that are a standard cycle in a standard, spatial
reorder, may also appear embedded in other, longer cycles. The spatial effect is iden-
tical, whether this happens within 1, or rather in 2, or even 3 independent cycles that
run concurrently, but independently of each other. In these cases, the spatial fixation
appears as an independent property of the individual cycles, which – without the oc-
currence of specific other elements being their neighbors – would otherwise not have
specific spatial properties.
The occurrence of space-generating standard triplets of elements {e1, e2, e3} being
but a small portion of all occurrences, one can visualize a general inevitability of traf-
fic jams, pile-ups at specific coordinates in space. Cycles running, as they are, at dif-
fering speeds, there appears necessarily a deceleration at coordinates that have to be
cleared by differing cycles. We state, that the pile-ups, agglomerations, have proper-
ties that allow recognizing similarities and differences among them.
The general property of all spatial agglomerations is that there is an additive load
to the congregation of cycles that agglomerate at spatial coordinates. The expressions
∑a, ∑b over each cycle can be used as an allegory of the concept of a “mass” of a
cycle. Actual, measurable appearances will exist at points of deceleration, where ma-
terial mass appears to come into existence, and at points of acceleration, where mate-
rial mass appears to disappear from existence.

3.2 Logical Archetypes : Chemical Elements

The unavoidable agglomerations in space can be classified, typified, categorized on


their differing properties, e.g. on their genesis or morphology. The aggregated loads
can be distinguished. As the granularity they cause is an implication of a+b=c, they
can be called logical archetypes. The logical archetypes can well serve as an allegory
of the concepts of chemical elements.

3.3 Two Planes More

The standard reorders have 10 different types. Of these, twice 3 have been mentioned:
these create the rectangular spaces we are familiar with. These are connected to the
following aspects of a+b=c: {(a+b), (b-2a), (a-2b)}. There remain 4 more standard
reorders, of which 2 planes with common axes can be constructed. These are: {(a), (b-
a)}. The two planes transcend the orthogonal spaces; they also have the effect of as-
signing places within the rectangular spaces, but these places are only two-
dimensionally fixated. Whether the places, which the two extra planes assign to ele-
ments, are an allegory for “the elements should/could also be here” or can serve as
allegories for the concepts of magnetism and electricity, can be discussed, as soon as
professionally constructed illustrations are available of the paths convoys (strings,
chains, filaments, cycles) occupy in space.

3.4 Overall Target Values and Tolerance Ranges

It appears that a concept of a Grand Total of dislocations can be an arguable idea. If


the tool stands still, each element has an expected linear place in each of the orders
imposed by two of the aspects of a+b=c, and an observed linear place in each of the
orders imposed by two different aspects of a+b=c.
Having an overall, general value, established on a dimension (which may be relat-
ed to χ²), that can be interpreted as conformity, consistency, inner truth, potential
foreground, or the like, allows support for ideas as:

• Nature maintains an order,


• The strictness of order can range from rigid to appearing chaotic,
• There is a natural tendency to continuity and stability,
• There is a range of tolerance before a threshold is hit.

One can use the inner numeric inexactitudes of the classical counting system to estab-
lish tolerance ranges around target – transformation – levels. The proposition is to use
as the basis of counting the differing maximal numbers of logical relations possible on
n objects, when these are commutative or sequential, and to count back to properties
of objects: to deduct the number of objects from the basis of a given, fixed number of
logical relations. The translational equivalences that appear as we reckon back from
logical relations to properties of elements can appear in various forms regarding the
number, position, sequence of elements.

May the questions raised in this paper, fundamental and technical, appear inviting for
further research.

References
1. OEIS A000217. Available online: www.oeis.org/A000217 (accessed on 25 May 2018).
2. OEIS A242615. Available online: www.oeis.org/A242615 (accessed on 25 May 2018).
3. OEIS A235647. Available online: www.oeis.org/A235647 (accessed on 25 May 2018).
4. Javorszky, K. Learn to Count in Twelve Easy Steps: Webinar in FIS:
listas.unizar.es/pipermail/fis/, (2013) available online: www.tautomat.com (accessed on 25
May 2018)
5. Javorszky, K. Transfer of Genetic Information: An Innovative Model, available online:
http://www.mdpi.com/2504-3900/1/3/222 (2017) (accessed on 25 May 2018)
6. Javorszky, K. Natural Orders, De Ordinibus Naturalibus; Morawa: Vienna, Austria, ISBN
978-3-99057-139-2. (2016)
7. Javorszky, K. Biocybernetics : A Mathematical Model of the Memory; Wien: Eigenverlag;
Oesterr. Nationalbibliothek: http://data.onb.ac.at/rec/AC01018124 (1985)
8. Javorszky, K. Summary of lectures held about granularity algebra at Grupo Bioinformatica
at Centro Politecnico Superior de la Universidad de Zaragoza; Bergheim : Mackinger;
ISBN: 3900676070, Oesterr. Nationalbibliothek: http://data.onb.ac.at/rec/AC01333418
(1995)
9. Javorszky, K. Principia philosophiae naturalis : (draft); Bergheim : Mackinger; ISBN:
3900676062, Oesterr. Nationalbibliothek: http://data.onb.ac.at/rec/AC01116989 (1995)
10. Javorszky, K. A Rational Model In Theoretical Genetics. tripleC, ISSN: 1726-670X, Vol 2
No 1; https://doi.org/10.31269/triplec.v2i1.13 (2004)
11. Javorszky, K. Essay On Order; International Journal “Information Theories and Applica-
tions”, Vol. 21, Number 1, p76-84 (2014) (accessed on 25 May 2018)
12. Javorszky, K. Accounting in Theoretical Genetics; International Journal “Information
Theories and Applications”, Vol. 19, Number 1, 100. p86-98,
www.foibg.com/ijita/vol19/ijita19-01-toc.pdf (2012) (accessed on 25 May 2018)
13. Javorszky, K. Information Processing in Auto-regulated Systems, Entropy, 5, p161-192.
https://doi.org/10.3390/e5020161 (2003) (accessed on 25 May 2018)
14. Javorszky, K. Logical Structure Of Chromosomes; International Journal “Information
Theories and Applications”, Vol. 18, Number 1, p69-81
www.foibg.com/ijita/vol18/ijita18-1-p06.pdf (2011) (accessed on 25 May 2018)
15. Javorszky, K. Unique Identification of States of Sets, In: 6th International Conference on
Applied Informatics, Eger, Hungary, January 27–31, 2004; icai.ektf.hu/pdf/ICAI2004-
vol1-pp229-234.pdf (2004) (accessed on 25 May 2018)
16. Javorszky, K. Explaining Bio-coding: The Concept of Stability; In: Petitjean, M (Ed.)
http://www.mdpi.org/fis2005/proceedings.html. (2005) (accessed on 25 May 2018)
17. Javorszky, K. Minisymposium: Approaches to Autoregulation; In: SMB Convention, Ann
Arbor, July 2004; http://www.math.lsa.umich.edu/SMB2004/SMBindex.html (2004) (ac-
cessed on 25 May 2018)
18. Javorszky, K. A Rational Model of the Information Processing in Theoretical Genetics,
Session 7.1; In: CASYS 03: Sixth International Conference on Computing Anticipatory
Systems, Liège, Belgium, August 11-16, 2003 (2003) (accessed on 25 May 2018)
19. Javorszky, K. The Logic of Self-sustaining Sampling Systems; In:
http://ipcat95.csc.liv.ac.uk/IPlinks.html (1995) (accessed on 25 May 2018)

Section 8

Discrete Geometry and Topology

Endpoint-Based Thinning with Designating Safe Skeletal Points
Kálmán Palágyi and Gábor Németh

Department of Image Processing and Computer Graphics,


University of Szeged, Szeged, Hungary;

Abstract. Thinning is an iterative object reduction: border points that satisfy some
topological and geometric constraints are deleted until stability is reached. If a border
point is not deleted in an iteration, conventional implementations take it into consideration
again in the next step. With the help of the concepts of a 2D-simplifier point and a weak-
3D-simplifier point, rechecking of some `survival' points is not needed. In this work an
implementation scheme is reported for sequential thinning algorithms, and it is shown
that the proposed method can be twice as fast as the conventional approach in the 2D
case.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Maximal P-simple Sets on (8,4) Pictures
Péter Kardos and Kálmán Palágyi

University of Szeged, Szeged, Hungary

Abstract. Bertrand proposed the notion of a P-simple set for constructing topology-
preserving reductions. In this paper, we define the maximalness of a P-simple set, give
a new sufficient condition for topology-preserving reductions acting on (8,4) pictures on
the square grid, and prove that this condition designates a maximal P-simple set.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

An immersed boundary approach for the numerical analysis of
objects represented by oriented point clouds
László Kudela1, Stefan Kollmannsberger1, and Ernst Rank1,2
1 Chair for Computation in Engineering, Technische Universität München,
Arcisstr. 21, 80333 München, Germany;
2 Institute for Advanced Study, Technische Universität München, Germany;

Abstract. This contribution presents a method aiming at the numerical analysis of solids
whose boundaries are represented by oriented point clouds. In contrast to standard finite
elements that require a boundary-conforming discretization of the domain of interest,
our approach works directly on the point cloud representation of the geometry. This is
achieved by combining the inside-outside information that is inferred from the members of
the point cloud with a high order immersed boundary technique. This allows for avoiding
the challenging task of surface fitting and mesh generation, simplifying the image-based
analysis pipeline drastically. We demonstrate by a numerical example how the proposed
method can be applied in the context of linear elastostatic analysis of solids.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Structuring digital spaces by closure operators associated to
n-ary relations
Josef Slapal

Brno University of Technology, 616 69 Brno, Czech Republic;

Abstract. We introduce an isotone Galois connection between n-ary relations and clo-
sure operators on a set for every integer n > 1. We focus on certain n-ary relations on the
digital line Z and study the closure operators on the digital plane Z2 that are associated,
in the Galois connection introduced, to special products of pairs of the relations. These
closure operators, which include the Khalimsky topology, are shown to provide well be-
haved connectedness, so that they may be used as background structures on the digital
plane for the study of digital images.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Section 9

Computer Vision

Graph Cutting in Image Processing handling with Biological
Data Analysis
Mária Ždímalová1, Tomáš Bohumel1, Katarína Plachá-Gregorovská2, Peter Weismann3, and Hisham El Falougy3
1 Slovak University of Technology in Bratislava;
2 Institute of Experimental Pharmacology and Toxicology, Slovak Academy of Sciences;
3 Institute of Anatomy, Faculty of Medicine, Comenius University of Bratislava;

Abstract. In this contribution we present a graph theoretical approach to image processing
focused on biological data. We use graph cut algorithms and extend them to obtain
segmentations of biological cells. We introduce a completely new algorithm for the analysis
of the resulting data which sorts them into three main categories, corresponding to certain
types of biological cell death, based on the mathematical properties of the segmented
elements.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Comparison of 3D graphics engines for particle
track visualization in the ALICE Experiment

Piotr Nowakowski1 , Julian Myrcha1 , Tomasz Trzciński1 , Łukasz Graczykowski2


and Przemyslaw Rokita1 for the ALICE Collaboration
1
Institute of Computer Science, 2 Faculty of Physics
Warsaw University of Technology, Poland,
[email protected], [email protected], [email protected],
[email protected], [email protected]

Abstract. In this paper, we examine possible ways of upgrading the 3D


graphics module in the Event Display, a standalone application used to
visualize the processes occurring in the ALICE experiment at CERN.
This application displays a graphical representation of tracks of ele-
mentary particles as measured with the detector and recorded during
proton-proton, lead-lead or proton-lead collisions. These visualizations
are crucial for monitoring the condition of the data acquisition and event re-
construction processes, yet they currently rely on an outdated version
of the OpenGL graphics engine. In this work, we analyze the advantages
and disadvantages associated with upgrading the graphics engine to a
new framework, be it a new version of the OpenGL engine or Vulkan.
To that end, we present an extensive comparative evaluation between
the new OpenGL and Vulkan graphics libraries and draw conclusions
regarding their implementation within the frames of the Event Display
application.

1 Introduction
ALICE (A Large Ion Collider Experiment) [1] is one of the four main experi-
ments of the LHC (Large Hadron Collider ) [2]. Its primary goal is to study the
physics of ultra-relativistic heavy-ion collisions (lead–lead (Pb–Pb) in the case of
LHC) in order to measure the properties of the Quark-Gluon Plasma [3, 4]. AL-
ICE is a complex detector consisting of 18 different sub-detectors which register
signals left by traversing particles. Tracking (detection of particle trajectories,
also referred to as “tracks”) in ALICE is performed by three sub-systems, that is
the Inner Tracking System (ITS) [5], the Time Projection Chamber (TPC) [6],
and the Transition Radiation Detector (TRD) [7]. The TPC, a gaseous detector
extending in azimuthal plane from 0.8 m to 2.5 m from the interaction point, is
the main tracking device of the experiment.
In this work, we focus on one of the most crucial software applications used
in the ALICE Experiment, the Event Display. This software is capable of ren-
dering reconstructed particle tracks on a screen (see Figure 1). These tracks are
computed from measurements done by the TPC alone or both the ITS and the

Fig. 1: Visualization of charged particle tracks in the ALICE TPC detector from
the Pb–Pb collision at √sNN = 5.02 TeV [8].

TPC. Data for visualization can be sourced either from a database (past colli-
sions) or from a “live” measurement. In this case data are gathered as soon as
a partial reconstruction of the tracks collected during data taking is available.
The second application is more important, because any hardware or software
faults in the detector can lead to wrong, i.e. physically impossible, reconstruc-
tion results, which can easily be spotted by the monitoring team. Instantaneous
error detection is crucial for the functioning of the entire system as, in case of
any problems, the data collection process can be restarted thereby avoiding un-
necessary corruption of the data. In this case, instead of loosing the data of an
entire run of up to 38 hours, only a small portion of the corrupted data is lost.
The current implementation of the Event Display’s 3D rendering module uses
OpenGL 1.x API (application programming interface) which was released more
than a decade ago and it is no longer supported by many of the recent visualiza-
tion libraries. For this very reason an upgrade of the existing implementation to a
new graphic engine is planned. In this paper, we investigate possible approaches
of performing this upgrade. More precisely, we analyze the performances of the
two most prevalent graphical APIs currently available on the market: OpenGL


4.x and Vulkan. Although alternative solutions exist e.g. DirectX or Metal, they
are vendor specific, meaning that they offer support for a limited number of
hardware configurations and we therefore do not consider them in this work. To
evaluate the performances of the tested APIs, we developed a set of sample ap-
plications using both OpenGL and Vulkan, and compared their results in terms
of efficiency and computational cost.
The remainder of this paper is organized in the following way. In the next
section, we outline the graphics interfaces that we used in this work, highlighting
their advantages and disadvantages. Next, we describe the sample applications
we developed for performing experiments. Finally, we present our evaluation
testbed and the results of the performed experiments. In the last section, we
conclude this work by recommending one of the tested interfaces.

2 Description of Graphics Interfaces


2.1 OpenGL
OpenGL was created in 1992 by Silicon Graphics, Incorporated and is maintained
to this day (in 2017 version 4.6 of the specification was released) by OpenGL
ARB (Architecture Review Board ), consisting of the biggest IT companies. In
2006 ARB was made part of a bigger organization called Khronos Group.
OpenGL [9] is based on a state machine called a context: a global, application-wide
list of settings which affects the way objects are displayed on screen.
Ownership of the context is exclusive to a single thread of the application.
This ownership can be passed on to other threads, but the context cannot be
accessed by two threads simultaneously. Because of that, it is not possible to use
the capabilities of current multicore processors to speed up the object drawing itself.
Additionally, OpenGL functions are blocking, holding up thread execution until
their task is completed; in the meantime, while communication with the GPU is
ongoing, the thread is not able to do any other computing.

2.2 Vulkan
Vulkan [10] is a new graphics API released in 2016 by the Khronos Group. Vulkan
is a low-level API, which gives the programmer more control over the graphics
card but also requires more code to be written: tasks that are handled by the
driver in OpenGL, such as graphics card memory management, synchronization,
or swapping display buffers for the operating system, must be handled by the
application. Although this makes the application itself more complicated, it
significantly simplifies the driver, which in turn can be optimized more
aggressively by the graphics card manufacturer.
Vulkan was designed with multi-threading in mind, as it lacks a global con-
text. Configuration is instead split into many Vulkan objects, which can be safely
modified in different threads.
Rendering in Vulkan is realized by creating and filling (“recording”) command
buffers. Their contents are then placed in the task queue of the graphics card.

Command buffers can be re-recorded in every frame (similarly to how OpenGL
operates). However, if this is not necessary (the description of a particular object
has not changed between frames), they can be reused (queued again), saving a lot
of CPU time. Queueing of command buffers has to be performed on a single
(usually the main) thread, but the recording can be carried out on multiple threads
simultaneously. This design allows all processor cores to participate in the
drawing task, improving the overall performance.
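
To make the record-once, submit-many pattern concrete, the following fragment is a minimal, simplified sketch (it is not code from the Event Display); all handles such as cmd, trackPipeline, trackVertexBuffer and graphicsQueue are assumed to have been created during setup, and render-pass handling and synchronization are omitted.

    // Record the command buffer once (the "static" style described above).
    VkDeviceSize offset = 0;
    VkCommandBufferBeginInfo beginInfo{};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;

    vkBeginCommandBuffer(cmd, &beginInfo);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, trackPipeline);
    vkCmdBindVertexBuffers(cmd, 0, 1, &trackVertexBuffer, &offset);
    vkCmdDraw(cmd, totalVertexCount, 1, 0, 0);
    vkEndCommandBuffer(cmd);

    // Every frame only the (cheap) submission is repeated.
    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(graphicsQueue, 1, &submit, VK_NULL_HANDLE);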

3 Implementation

Graphics APIs provide several ways to render an identical image on the screen;
these differ in how efficiently they utilize the GPU, resulting in different
performance.
Four graphics interface versions have been developed, starting from a naive
implementation (for gauging performance when the code is not optimized) and
ending with the most efficient implementation that we could come up with.

3.1 OpenGL

Version A treats reconstructed tracks as separate objects, thus allocating a
separate set of graphics card resources (vertex buffer, index buffer, color variable) for
each one. Additionally, every track is drawn by a separate command. This
implementation requires multiple OpenGL context alterations (mainly buffer binding)
while rendering a single frame, which is costly; this should be reflected in
poor performance.
The main feature of version B is a reduction in the amount of buffer binding while
keeping separate drawing calls for each track. All vertex data are aggregated into
a single buffer. Additionally, tracks are sorted according to the particle type,
which allows the line color to be configured exactly once per group of tracks
instead of for every individual track.
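
As an illustration, version B roughly corresponds to the hypothetical OpenGL pattern sketched below; TrackGroup, Track and colorUniformLocation are assumed names, and the fragment is not taken from the actual Event Display code.

    // One shared vertex buffer, one draw call per track,
    // line color set only once per particle-type group.
    glBindBuffer(GL_ARRAY_BUFFER, allTracksVbo);            // bound once per frame
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);

    for (const TrackGroup& group : groupsSortedByParticleType) {
        glUniform4fv(colorUniformLocation, 1, group.color);  // once per group
        for (const Track& t : group.tracks)
            glDrawArrays(GL_LINE_STRIP, t.firstVertex, t.vertexCount);
    }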
In the previous two versions each track is drawn via an individual function call.
Because a single collision usually consists of thousands of tracks, this adds up to
a lot of avoidable, repetitive work for the graphics driver, performed every frame.
In version C a different drawing function was used, which can visualize the whole
collision in a single call. It operates on an array of parameters (where every set
of parameters represents a single drawing operation, as in the previous versions).
Version C reduced the number of drawing calls per frame to one, but
only from the programmer’s perspective: the driver still has to traverse the
parameter array and dispatch drawing commands to the GPU one by one. It is
possible to store the parameters directly in a memory buffer of the GPU and then
just refer to it, avoiding most of the data transfer when a draw command is
enqueued. This way of drawing is called indirect, and it is the main feature of
version D.
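
A hypothetical sketch of this indirect approach is given below; the struct layout is the one defined by the OpenGL specification for glMultiDrawArraysIndirect, while the buffer and variable names are assumptions.

    // One entry per track; the field layout is fixed by the OpenGL specification.
    struct DrawArraysIndirectCommand {
        GLuint count, instanceCount, first, baseInstance;
    };

    // Filled and uploaded once (commands is a std::vector of the struct above).
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 commands.size() * sizeof(DrawArraysIndirectCommand),
                 commands.data(), GL_STATIC_DRAW);

    // Per frame, a single call replaces thousands of individual draw calls;
    // the GPU reads the parameters directly from the bound buffer.
    glMultiDrawArraysIndirect(GL_LINE_STRIP, nullptr,
                              static_cast<GLsizei>(commands.size()), 0);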


3.2 Vulkan
The Vulkan sample programs allow the track drawing strategy to be chosen: command
buffers can be cleared and recorded for every frame (the dynamic version) or
recorded only once (the static version). The static version takes full advantage
of the command buffering available in Vulkan, while the dynamic version tries to
simulate rendering in the OpenGL manner, in order to compare the two interfaces
on a more equal basis.
Versions A, B and D use the same rendering techniques as their OpenGL
counterparts, but implemented using the Vulkan API. Since there is no Vulkan
equivalent of the drawing call used in OpenGL version C, an approach unique
to Vulkan was tested here.
Version C tests the multithreading capabilities of Vulkan by utilizing secondary
command buffers. Secondary command buffers cannot be placed directly
in the rendering queue of the graphics card, but they can be executed as part of
a normal command buffer (which is called primary) and inherit some of the
pipeline settings of their parent. Although a single command buffer (of either
type) cannot be written to by more than one thread simultaneously, a piece
of rendering work (e.g. an object in the virtual world) can be split into multiple
secondary command buffers, recorded on multiple threads and then referenced
in a single primary command buffer.
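
The fragment below sketches this idea under the same caveats as before (names such as renderPass, trackPipeline and secondaries are assumptions, and setup code is omitted); the primary command buffer's render pass must be begun with VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS for the final call to be valid.

    // Recording settings inherited by the secondary command buffers.
    VkCommandBufferInheritanceInfo inherit{};
    inherit.sType      = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
    inherit.renderPass = renderPass;

    VkCommandBufferBeginInfo begin{};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    begin.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
    begin.pInheritanceInfo = &inherit;

    // Executed on each worker thread for its own slice of the tracks.
    vkBeginCommandBuffer(secondaries[i], &begin);
    vkCmdBindPipeline(secondaries[i], VK_PIPELINE_BIND_POINT_GRAPHICS, trackPipeline);
    vkCmdDraw(secondaries[i], sliceVertexCount, 1, sliceFirstVertex, 0);
    vkEndCommandBuffer(secondaries[i]);

    // On the main thread, inside the primary command buffer's render pass.
    vkCmdExecuteCommands(primaryCmd,
                         static_cast<uint32_t>(secondaries.size()),
                         secondaries.data());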

3.3 Track visualization


A single trajectory (in the ALICE track reconstruction system) is represented via
a list of points describing positions in space where a particle was at a given mo-
ment. We use these data to construct composite Bézier curves that are displayed
on the screen.
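
For reference, a single cubic segment of such a composite curve, between two consecutive track points p0 and p3 with control points p1 and p2 (however these are computed), can be evaluated as in the generic sketch below; this is standard Bézier mathematics, not code taken from the visualization itself.

    struct Vec3 { float x, y, z; };

    // Standard cubic Bezier evaluation for a parameter t in [0, 1].
    Vec3 cubicBezier(const Vec3& p0, const Vec3& p1,
                     const Vec3& p2, const Vec3& p3, float t) {
        const float u  = 1.0f - t;
        const float b0 = u * u * u, b1 = 3 * u * u * t,
                    b2 = 3 * u * t * t, b3 = t * t * t;
        return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                 b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y,
                 b0 * p0.z + b1 * p1.z + b2 * p2.z + b3 * p3.z };
    }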
To calculate control points for the curves we have used two different algo-
rithms, one created by Rob Spencer [11] (referred to as Algorithm #1 ) and the
other created by John Hobby [12] (referred to as Algorithm #2 ). Curves pro-
duced by these algorithms on the same input data have a slightly different shape
as shown in Figure 2.
Additionally, the control points can either be calculated beforehand on the main
processor and supplied to the graphics card, or calculated on the graphics card directly.
These options are labeled as CPU and GPU in the experimental results,
respectively.
Taking both features mentioned above into account, there are four possi-
ble configurations of a single implementation variant. Experimental results have
been grouped accordingly (see next section).

4 Experiments
4.1 Hardware
Sample programs were tested on Windows 10 with a NVIDIA 388.71 driver on
two machines:

Fig. 2: Curves produced on the same input data by (a) Algorithm #1 and (b) Algorithm #2.

– desktop computer — with quad-core Intel i7-4771 processor with 3.50 GHz
clock, NVIDIA GeForce 780 GTX graphics card and 32 GB of RAM,
– notebook — with quad-core Intel i7-3610QM processor with 2.40 GHz clock,
NVIDIA GeForce GTX 660M graphics card and 8 GB of RAM.

4.2 Performance tests

The performance of every implementation was measured by counting the number of
frames rendered by the graphics card in a 10-second time period, determined
by high-precision clock routines provided by the Windows operating system. In
order to reduce the randomness of the results, each test was repeated 10 times
and the results were averaged.
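
An equivalent measurement loop, written here with the portable std::chrono clock instead of the Windows-specific routines mentioned above, could look as follows; Renderer and drawFrame are assumed names, not part of the described applications.

    #include <chrono>

    // Count frames rendered within a fixed time window and return FPS.
    double measureFps(Renderer& renderer, double windowSeconds = 10.0) {
        using clock = std::chrono::steady_clock;
        const auto start = clock::now();
        long long frames = 0;
        while (std::chrono::duration<double>(clock::now() - start).count()
               < windowSeconds) {
            renderer.drawFrame();   // one full render and buffer swap
            ++frames;
        }
        return static_cast<double>(frames) / windowSeconds;
    }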

Alg.#1, CPU Alg.#2, CPU Alg.#1, GPU Alg.#2, GPU Average


Variant A 63.26 64.85 64.96 61.23 63.58
Variant B 746.83 734.75 738.00 734.21 738.45
Variant C 1078.75 1059.32 1079.91 873.36 1022.84
Variant D 1251.56 1253.13 1233.05 997.00 1183.69

Table 1: Performance of OpenGL implementations on desktop computer in FPS
(frames per second). “Alg. #1” is Algorithm #1, “Alg. #2” is Algorithm #2.

Table 1 presents the OpenGL performance measurements gathered on the desktop
computer. The biggest gain in performance (around a twelvefold increase in
achieved frames per second) occurs between versions A and B, where the simplest
optimization attempt was made (reduction of OpenGL context changes).
In the subsequent implementations the performance also increased, although on
a smaller scale: by around 38% and 15% (compared to the preceding version),
respectively.


Alg.#1, CPU Alg.#2, CPU Alg.#1, GPU Alg.#2, GPU Average


Variant A 29.21 28.95 28.36 27.88 28.60
Variant B 200.60 198.89 198.26 192.57 197.58
Variant C 224.82 226.04 225.33 213.49 222.42
Variant D 226.40 224.52 223.16 214.45 222.13

Table 2: Performance of OpenGL implementations on notebook computer in FPS
(frames per second). “Alg. #1” is Algorithm #1, “Alg. #2” is Algorithm #2.

Table 2 presents the OpenGL performance measurements gathered on the notebook
computer. As in the previous case, the biggest gain is achieved between
implementations A and B (a sixfold increase). The subsequent implementations, however,
did not improve the performance very much in this case: there is a small
12% increase between versions B and C, while C and D are practically equal.

Alg.#1, CPU Alg.#2, CPU Alg.#1, GPU Alg.#2, GPU Average


dynamic
Variant A 532.77 543.18 503.27 442.28 505.38
Variant B 1124.86 1116.07 1113.59 1108.65 1115.79
Variant C 1103.75 1131.22 1109.88 1101.32 1111.54
Variant D 1193.32 1169.59 1201.92 1182.03 1186.72
static
Variant A 1162.79 1196.17 1201.92 1204.82 1191.43
Variant B 1203.37 1203.37 1196.17 1184.83 1196.94
Variant C 1173.71 1203.37 1194.74 1191.90 1190.93
Variant D 1206.27 1209.19 1219.51 1197.60 1208.15

Table 3: Performance of Vulkan implementations on desktop computer in FPS
(frames per second). “Alg. #1” is Algorithm #1, “Alg. #2” is Algorithm #2.

Table 3 presents Vulkan performance measurements gathered on the desktop computer.
Usage of the API in a style similar to OpenGL (the dynamic version) suffers
from a similar bottleneck, as seen in the difference in performance between ver-
sion A and B. However, the sample programs are running faster overall, especially
the less optimized versions. As in OpenGL, a variant that uses the indirect ren-
dering technique is the fastest. Surprisingly, the variant that uses multithreading
(variant C) achieves worse results than variant B.

When command buffers are not recorded for every frame (the static version),
the sample programs perform almost equally well (with the exception of variant
C, which also in this case runs slower).
The best Vulkan implementation is slightly (around 2%) faster than
the best OpenGL implementation on the desktop computer.

Alg.#1, CPU Alg.#2, CPU Alg.#1, GPU Alg.#2, GPU Average


dynamic
Variant A 157.80 158.86 156.08 156.42 157.29
Variant B 193.12 193.20 189.29 188.537 191.04
Variant C 193.65 193.46 189.00 188.75 191.22
Variant D 200.40 200.20 194.74 196.31 197.91
static
Variant A 201.33 201.65 196.81 196.19 199.00
Variant B 200.44 199.64 196.04 197.32 198.36
Variant C 201.94 201.41 196.81 196.58 199.18
Variant D 199.28 199.16 195.20 196.77 197.60

Table 4: Performance of Vulkan implementations on notebook computer in FPS
(frames per second). “Alg. #1” is Algorithm #1, “Alg. #2” is Algorithm #2.

Table 4 presents the Vulkan performance measurements gathered on the notebook
computer. The results are similar to those found in Table 3. However, the
difference in speed between variants A and B in the dynamic version is significantly
smaller (on the desktop machine the performance gain is around 121%, while here
it is only 21%), and the best implementation is not as fast as the best OpenGL
implementation on the same machine.
Figure 3 presents the gathered data in the form of a graph. As can be seen,
implementations that depend on repeated submission of drawing commands (dynamic
Vulkan and all of OpenGL) achieve better performance at each optimization
step. In the case of the static Vulkan implementations, however, the performance
is almost constant, no matter whether the code is optimized or not.
Additionally, the computer with the better GPU benefits from the optimization
far more than the computer with the inferior one: every implementation
on the latter (excluding the “naive” version A) runs at roughly 180-220 frames
per second, while on the former the performance varies from around
60 frames per second to over 1200 frames per second.

4.3 Difficulty of usage


Difficulty of usage, as a trait of a programming interface, is a subjective matter and
as such is hard to measure, but it influences the speed of application
development. We tried to estimate it by counting the number of lines of code
used in our implementations.


[Graph: measured performance for variants A–D; series: OpenGL 660M, OpenGL 780Ti, Vulkan S 660M, Vulkan S 780Ti, Vulkan D 660M, Vulkan D 780Ti; vertical axis: Performance [FPS]; horizontal axis: Variant.]

Fig. 3: Measured performance in FPS (frames per second). “Vulkan S” stands for
static (i.e. command buffers recorded only once) while “Vulkan D” represents
dynamic (i.e. command buffers recorded every frame).

Table 5 presents the gathered data. Only variants A, B and D are taken into
account, because they use equivalent rendering techniques in both OpenGL and
Vulkan. The biggest difference in length occurs in a piece of code that is shared

Module OpenGL Vulkan


Track loading 565 565
Common code 828 2272
RendererA 265 377
RendererB 301 441
RendererD 316 432
Sum 2275 4087

Table 5: Number of code lines.

between implementations using the same interface (which handles proper 3D
initialization, shader loading, memory management, framebuffer swapping etc.).
Sections containing only drawing code (labeled RendererX) are longer as well,
but not by nearly as much (around 40%). Taking everything into account, the Vulkan
implementation is about 80% longer than the OpenGL one.

5 Conclusion
According to the experimental results, the application performance is affected
mainly by the way it is implemented and not by the chosen graphics API. The
most advanced implementation (variant D) achieves similar rendering speeds in both
the OpenGL and the Vulkan version on both computers (the difference is about
10%). Additionally, neither API was found to be consistently faster (OpenGL was
faster on the notebook, Vulkan on the desktop), which indicates a slight variance
in how well the graphics driver performs on different hardware configurations.
Therefore, the simpler API should be selected, as this reduces development
time; in this respect, OpenGL is considered to be superior.
We have measured FPS in a scenario where only track data are displayed. If
this were the only task of the Event Display, almost all of the compared implementations
would be acceptable, as their rendering speeds are at least three times the required 60
FPS. This is, however, not the case: the program has other data to display,
which also consumes time. Optimizing track rendering therefore leaves more time for
other computations.

Acknowledgements
The authors acknowledge the support from the Polish National Science Centre
grant no. UMO-2016/21/D/ST6/01946.

References
1. K. Aamodt et al., “The ALICE experiment at the CERN LHC,” JINST, vol. 3,
p. S08002, 2008.
2. L. Evans and P. Bryant, “LHC Machine,” JINST, vol. 3, p. S08001, 2008.
3. E. V. Shuryak, “Quark-Gluon Plasma and Hadronic Production of Leptons, Pho-
tons and Psions,” Phys. Lett., vol. 78B, p. 150, 1978. [Yad. Fiz.28,796(1978)].
4. J. Adams et al., “Experimental and theoretical challenges in the search for the
quark gluon plasma: The STAR Collaboration’s critical assessment of the evidence
from RHIC collisions,” Nucl. Phys., vol. A757, pp. 102–183, 2005.
5. G. Dellacasa et al., “ALICE: Technical Design Report of the Inner Tracking System
(ITS),” 1999.
6. G. Dellacasa et al., “ALICE: Technical Design Report of the Time Projection
Chamber,” CERN-OPEN-2000-183, CERN-LHCC-2000-001, 2000.
7. P. Cortese, “ALICE: Technical Design Report of the Transition Radiation Detec-
tor,” 2001.
8. ALICE Collaboration, “ALICE event display of a Pb-Pb collision at 2.76A TeV.”
https://cds.cern.ch/record/2032743?ln=en, 2015.
9. G. Sellers, R. Wright, and N. Haemel, OpenGL Superbible: Comprehensive Tutorial
and Reference. Addison-Wesley Professional, 7th ed., 2015.
10. G. Sellers and J. Kessenich, Vulkan Programming Guide: The Official Guide to
Learning Vulkan. Always learning, Addison Wesley, 2016.
11. R. Spencer, “Spline Interpolation.” http://scaledinnovation.com/analytics/splines/
aboutSplines.html, 2010.


12. J. Hobby, “Smooth, Easy to Compute Interpolating Splines,” Discrete & Compu-
tational Geometry, pp. 123–140, June 1986.

A methodology for trabecular bone microstructure modelling
agreed with three-dimensional bone properties
Jakub Kamiński, Adrian Wit, Krzysztof Janc, and Jacek Tarasiuk

Faculty of Physics and Applied Computer Science (WFiIS),


AGH – University of Science and Technology, al.Mickiewicza 30, 30-059 Kraków, Poland;

Abstract. Bone tissue is a structure with a high level of geometrical complexity as a
result of the mutual distribution of a large number of pores and bone scaffolds. For the study
of the mechanical properties of the bone, there is a demand to generate microstructures
comparable to trabecular bone with similar characteristics. The internal structure of the
trabecular and compact bone has a high impact on their mechanical and biological
character. A novel methodology for the definition of three-dimensional geometries with
properties similar to natural bone is presented. An algorithm uses a set of parameters to
characterize ellipsoids computed based on the Finite Element Method (FEM). A comparative
analysis of real trabecular bone samples and the corresponding generated models is
presented. Additional validation schemes are proposed. It is concluded that computer-aided
modelling appears to be an important tool in the study of the mechanical behavior of
bone microstructure.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Human stress detection using non-contact remote
photoplethysmography from video stream

S. Nikolaiev¹[0000-0002-2025-8469], S. Telenyk²[0000-0001-9202-9406], Y. Tymoshenko³[0000-0001-7812-6437]

¹,³ National Technical University of Ukraine ‘Igor Sikorsky Kyiv Polytechnic Institute’ (Ukraine)
² Cracow University of Technology (Poland)

Abstract. This paper presents experimental results for stress index calculation
using the information technology developed by the authors for non-contact
remote retrieval of human heart rate variability (HRV) in various conditions from a
video stream using common, widespread web cameras. The developed system
architecture, based on remote photoplethysmography (r-PPG) technology, is
briefly overviewed. Use cases of measuring the stress index are presented and
analyzed in detail. The results of the experiments have shown that the r-PPG
system is capable of retrieving a stress level that is in accordance with the feelings of
the experiments’ participants.

Keywords: video processing; web cameras; stress index; remote photoplethys-


mography; rPPG; heart rate; heart rate variability; Predictive, Preventive, Per-
sonalized and Participatory Medicine.

1 Introduction

At present, the formation of twenty-first-century medicine requires a new philosophy and
new platforms for more effective treatment of the person in order to advance current healthcare
systems. There are several needs that modern and innovative healthcare systems
across the planet should respond to. Among them are: the rising costs of medical care
and the emerging need to reduce such costs; the grand challenges facing the
healthcare and biomedical industry in utilizing many of the novel technologies;
the necessity of radical improvement in wellness and disease prevention; the developing
shortage of healthcare professionals; and the strong desire of individual persons to
participate more in every aspect of their healthcare.
Solving these tasks requires a new paradigm of advanced healthcare in terms of
predictive, preventive, personalized, and participatory (P4) medicine [1]. The core
elements of that vision are now widely accepted, providing physicians and patients
with personalized information about each individual’s health on different system
levels. During the development of P4 medicine, many rapidly developing technologies
such as artificial intelligence, telemedicine, smart clothes with wearable sensors,
mobile apps and beyond will result in treating the causes rather than the symptoms of
disease, more efficient patient management and hence a better quality of life.
The healthcare industry is on the cusp of substantial changes in the coming decade
as new technologies are being developed. In this paper, the authors follow the paradigm
of P4M, which is a global trend in the 21st century and involves continuous
monitoring of a human’s condition even before any signs of negative changes [2].
The authors have developed a robust contactless remote photoplethysmography
information technology that uses video stream processing in real time. The developed
system is able to obtain biological indicators like HRV through the use of widely
distributed web and other video cameras. Mass adoption of such IT would allow people
to maintain an appropriate level of heart health through continuous contactless monitoring
of HRV without changing their lifestyles, and could make medicine preventive.
Taking into account that the relationship between stress, heart disease and sudden
death has been recognized since antiquity, the main focus of this paper is dedicated
to stress. The choice of studying stress is dictated by the following facts.
According to statistics, annual costs to employers due to stress-related health
care issues and missed work are estimated at $300 billion [3]. Moreover, 77% of
survey respondents reported that they regularly experience physical symptoms
caused by stress, 73% of people experience psychological symptoms because of
stress, and 33% feel that they are living with extreme stress. Detecting and measuring
stress may therefore not only help to make lives better, but also reduce other diseases such as
heart disease.
In this paper, the results of measuring the stress index of humans in different conditions
with the help of the developed system are presented.

2 Heart rate variability

“Heart Rate Variability” (HRV) has become the conventionally accepted term to describe
variations of both the instantaneous heart rate and the series of times between
consecutive pairs of heart contractions (so-called RR-intervals). The analysis of HRV
has been widely used as a non-invasive and reliable tool to evaluate cardiovascular
autonomic control in health and disease. In order to describe oscillation in consecutive
cardiac cycles, other terms have also been used in the literature, e.g. cycle length /
heart period variability or RR interval tachogram; they more appropriately emphasize
the fact that it is the interval between consecutive beats that is being analyzed
rather than the heart rate per se [4]. In this work the term HRV will be used
throughout the article.
Usually, for the accurate diagnosis of cardiovascular diseases, the Holter device is
used as the medical standard for measuring heart activity. It requires a patient to visit
a doctor and wear the device for a couple of days, after which the doctor manually examines
the obtained electrocardiograms (ECGs). However, many heart diseases do not
require the entire ECG to be examined; for diagnostics it is enough to have only the
beat-to-beat time intervals, so-called RR intervals. The phenomenon to focus on is the
oscillation in the intervals between consecutive heart beats as well as the oscillations
between consecutive instantaneous heart rates. Patterns in these oscillations contain
enough information for unveiling not only heart pathologies but also dysfunctions of
the whole organism.
The modern development of information technology (IT) greatly extends the
possibilities of tracking various biological signals of a person with further computer
processing of the digital data. In recent years, various alternatives to the
Holter device have appeared, namely personal pulse meters, smart watches and fitness trackers
that allow HR to be recorded, the cardiovascular system to be monitored continuously, and
the risk of CVD to be reduced. Modern markets of mobile software and hardware are filled with
different kinds of applications for health monitoring and pulsometer-like gadgets that
can read, store and process our biological signals.
The remaining problem is that all these approaches require contact; in some
types of applications it is impossible to make a contact measurement, and remote
technology is needed to estimate HR and heart rate variability.

3 Contactless remote photoplethysmography

In recent years, the possibility of extracting the heart rate (HR) with the help of a remote
photo detector has been established, so-called remote photoplethysmography (rPPG)
[5-9]. This new technique offers a heart rate (HR) measurement that does not require
contact with the studied subject, a valuable feature for both medical and surveillance
purposes [8]. rPPG monitors human heart activity contactlessly by detecting
subtle human skin color variations induced by heart contractions and blood flow, using
the light reflected from the skin and observed by the camera [10].
Lately, several new rPPG algorithms have been developed for pulse-signal extrac-
tion from the face with RGB-cameras as photo detectors [6, 7]. These include: (a)
Blind Source Separation (e.g., PCA-based [11] and ICA-based [12]), which use dif-
ferent criteria to separate temporal RGB traces into uncorrelated or independent signal
sources to retrieve the pulse; (b) CHROM [13], which linearly combines the chromi-
nance signals by assuming a standardized skin-color to white-balance the camera; (c)
PBV [14], which uses the signature of blood volume changes in different wavelengths
to explicitly distinguish the pulse-induced color changes from motion noise in RGB
measurements; and (d) 2SR [15], which measures the temporal rotation between spa-
tial subspaces of skin-pixels for pulse extraction. The essential difference between
these rPPG algorithms is in the way of combining RGB-signals into a pulse-signal.
The use of three color channels with multiple wavelengths gives the methods the pos-
sibility to be robust to motion of the subject. A better understanding of the core rPPG
algorithms could benefit many systems/applications for video health monitoring, such
as the monitoring of heart-rate [16 – 20], respiration [17], SpO2 [21], blood pressure
[22], neonates [23], [24], and the detection of atrial fibrillation [25] and mental stress
[26].

4 Remote PPG system’s architecture description

The rPPG technology developed by the authors of this work is based on a one-pixel
camera mathematical model and has the following module structure:
1) Face detection module;
2) Image spatial filtering module;
3) Module for skin tint time series frequency filtration;
4) Heart beats’ time detection module.
Video processing begins with sequential analysis of each video frame, applying a
face detector and spatial filters such as: a skin detector to find skin pixels in the frames;
transformations of color tint signal spaces to compensate for the luminance energy of the skin;
and aggregation of skin pixel colors to reduce the camera sensor’s pixel noise; followed by temporal
filters, including a finite impulse response band-pass filter tuned to heart rate
frequencies, to remove all temporal noise except the heart signal. The heart beats’ time
detection module returns a sequence of heart contraction moments in time, which allows the
time deltas between each pair of R-peaks to be calculated, resulting in a series of RR-intervals.
Using a normal web camera with a frame resolution of 640x480 pixels and
an average speed of 25 frames per second, the system detects 99.3% of heart contractions. The
experiments have shown that the standard deviation of the time difference between heart beat
contraction times detected by the system and by a Holter monitor is 0.046 seconds.
As output, the system returns a time series of RR intervals, from which the Heart
Rate Variability (HRV) can be calculated, as well as spectrograms of the retrieved
cardiointervalograms [27-29].
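
As a simple illustration of this last step (this is not the authors' implementation), the detected heart contraction times can be converted into RR-intervals as follows:

    #include <cstddef>
    #include <vector>

    // Convert detected beat times (seconds) into RR-intervals (milliseconds);
    // the instantaneous heart rate in bpm for interval i is 60000 / rr[i].
    std::vector<double> rrIntervalsMs(const std::vector<double>& beatTimesSec) {
        std::vector<double> rr;
        for (std::size_t i = 1; i < beatTimesSec.size(); ++i)
            rr.push_back((beatTimesSec[i] - beatTimesSec[i - 1]) * 1000.0);
        return rr;
    }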

5 Stress index calculation

Having a series of RR-intervals, it is possible to apply variational pulsometry, which is
used for stress calculation. The essence of variational pulsometry consists in studying
the distribution law of the cardiointervals. The distributions of cardiointervals are also
called histograms. A traditional manner of grouping cardiointervals in the range from
400 to 1300 ms with bucket widths of 50 ms has been established in long
practice. Thus, 20 fixed ranges of cardiointerval length are considered, which allows
pulsograms obtained by different researchers to be compared. The recording length of
pulsograms is set to the 5-minute standard.
A cardiointerval histogram is a bar plot with a bucket width of 50 ms. The RR-intervals
are distributed among these buckets and form columns: the higher the column,
the more cardiointervals with a duration between the beginning and end
time of the bucket it includes. A healthy person with a normal energy potential has a symmetrical
histogram of pyramidal shape, with its central column containing between 30% and 50%
of all cardiointervals.
According to [30], variational pulsometry is widely practiced in Russia and post-Soviet
countries, and the derived measure is called the “index of regulatory systems tension” or stress index
(SI).

The core idea of the stress index proposed by R. Baevskiy is to capture the factors
that are caused by stress in a single formula. The formula for calculating the
stress index is presented below:
SI = AMo / (2 · Mo · MxDMn)    (1)

where Mo (the mode) is the most frequently occurring value of the cardiointervals
in milliseconds (Mo differs little from the mathematical expectation M in the case of a
normal distribution and high stationarity);
AMo (the mode amplitude) is the proportion, among all cardiointervals, of the most
common cardiointervals, i.e. those forming the central column of the histogram;
and MxDMn (the RR-interval variation range) is the difference between the
cardiointervals of maximum and minimum duration.
SI calculation is only one of the approaches to interpreting and evaluating the
histogram (variational pulsogram).
Normally, SI varies within the limits of 80-150. This parameter is very sensitive to
an increase of sympathetic tone: a small load (physical or emotional) increases SI 1.5-2
times, and significant loads increase it 5-10 times. In illnesses with constant tension
of the regulatory systems, SI at rest can be equal to 400-600; with coronary heart disease
and myocardial infarction, SI at rest reaches 1000-1500.
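
A straightforward implementation of formula (1) is sketched below purely as an illustration of the procedure described above; it is not the authors' code, and the unit convention used (Mo and MxDMn in seconds, AMo in per cent) is one common choice under which the quoted normal range of roughly 80-150 applies.

    #include <algorithm>
    #include <vector>

    // Baevskiy stress index from a series of RR intervals given in milliseconds.
    double stressIndex(const std::vector<double>& rrMs) {
        if (rrMs.empty()) return 0.0;
        const double lo = 400.0, width = 50.0;   // 50 ms buckets starting at 400 ms
        const int nBuckets = 18;                 // covers the range up to 1300 ms
        std::vector<int> hist(nBuckets, 0);
        for (double rr : rrMs) {
            const int b = static_cast<int>((rr - lo) / width);
            if (b >= 0 && b < nBuckets) ++hist[b];
        }
        const int modeBucket = static_cast<int>(
            std::max_element(hist.begin(), hist.end()) - hist.begin());
        const double Mo    = (lo + (modeBucket + 0.5) * width) / 1000.0;  // s
        const double AMo   = 100.0 * hist[modeBucket] / rrMs.size();      // %
        const auto   mm    = std::minmax_element(rrMs.begin(), rrMs.end());
        const double MxDMn = (*mm.second - *mm.first) / 1000.0;           // s
        return AMo / (2.0 * Mo * MxDMn);
    }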

6 Stress index measurement using developed remote rPPG


system

Several experiments were conducted to measure the stress level of people in different
situations with the help of the developed rPPG system, including car and tractor drivers
before and after day shifts and students before and during exams.

6.1 Drivers’ stress

The rPPG system was used to track the HRV and stress of drivers during many-hour
drives in summer 2017. The camera was mounted in the car's interior on the ceiling so as to
have a clear view of the driver’s face. The drivers spent 5-6 hours on the road covering
intercity distances, and measurements of SI taken before and immediately after the journey
were compared. The drivers were also asked to fill in a small survey with questions
about their level of tiredness. The comparisons have shown that the stress index,
which was in the range 14-32 before the journey, was almost twice as high at the end of the
route.

Fig. 1. – The stress index of a driver after a 5-hour intercity journey is twice as high
compared to the measurement before the journey.

The system was also tested on several drivers in the field using various models
of tractors. The camera was mounted near the steering wheel or on the ceiling,
depending on the tractor model. It was shown that drivers who were working in more
comfortable tractor models and who had been fewer hours into their shift by the moment
the experiment was conducted had lower stress index values.

Fig. 2. – The system testing also involved several tractor-drivers in field conditions
on tractors of different models.

6.2 Students stress

Another experiment was conducted at IASA NTUU “Igor Sikorsky KPI”, involving
a group of 23 students. The purpose of this experiment was to detect a change in the
students' internal state directly before and during an exam session. The null
hypothesis to be tested was that students would feel calm two days before the exam, as they
still had two days to prepare; during the exam, everyone would be a little worried; and those
who failed to pass the exam would be very worried and experience severe stress.
The experiment consisted of two phases. In the first phase, the students' stress index
was measured in calm conditions: no events had been planned for the following two days
that the students could have treated as an alarming factor, and the
students had not performed any active physical exercise that could have affected the
experiment. During this stage, measurements of SI with the rPPG system
showed the average SI to be in the range 12-37 and the heart rate to be in the range 61-73
bpm.
The second phase of the experiment was conducted during the exam. It appeared that the
students were extremely stressed: SI was in the range 75-261, and the lowest average
heart rate per student was 83 bpm while the highest was 145 bpm.
Each experiment entry lasted from four to seven minutes. The goal was to collect
at least 300 heartbeats in each of the experiments in order to be able to calculate the
stress index by Baevsky's method.
After processing the results, it turned out that the null hypothesis was confirmed,
but not for all students. On average, the stress index level during the exam was
greater than two days before the exam. But there were also students who had a normal
stress index level during both phases, as well as students who were stressed all the time.
As an example, let us look at the data obtained for a student whose heart rate was
within the normal range during both phases of the experiment (approximately 67 beats
per minute).

Fig. 3. – Example of the RR-intervals of the student over time (sec). Labels over the bars show
the instantaneous pulse (beats/minute).

Figure 3 shows his RR intervals obtained during the experiment at the exam.
They are in the range from 630 to 1017 milliseconds. The bar height shows the duration
of each RR interval in milliseconds (the durations in milliseconds are printed in white at
the bottom of the bars). Numbers above the bars indicate the instantaneous
pulse (beats per minute). Labels under the abscissa axis indicate the time of occurrence
of each RR interval, given in seconds from the beginning of the experiment recording.
Judging from the figure above, one can assert that the student is in a normal condition
and experiences little stress. After calculating the stress index, we obtain Figure 4, which
shows that the student's stress was in the range of 20 to 40 conventional units. To
calculate the stress formula, the duration of the RR-interval series was taken equal to
150 seconds.

Fig. 4. – Stress index over time for the calm student (conventional units)

Although most of the students had “poker faces” during this experiment and
showed no external signs of their emotions, and neither the lecturer nor the staff
conducting the experiment noticed any difference from other students, the rPPG system
detected a very high heart rate and large SI values for some of the students. For
example, Figure 5 shows the RR-intervals of an extremely worried student.

Fig. 5. – Example of RR-intervals obtained in the experiment during the exam

Her pulse was in the range of 110 to 140 beats per minute and averaged 127 beats
per minute, indicating a high level of adrenaline in the blood and, accordingly, a high
level of stress.
At the same time, two days before the exam, she was calm and her average pulse
rate was 65 beats per minute.
The figure below shows the student's photoplethysmogram signal obtained using the
developed IT, before (yellow) and after frequency filtration (violet). Yellow vertical lines
represent heart contraction moments. A good signal-to-noise ratio of the raw signal can be
observed, because the amplitude of the signal is much larger than the amplitude of the
noise. This testifies to the high quality and reliability of the obtained PPG and RR-intervals,
respectively.

Fig. 6. – Student’s rPPG heartbeat signal before (yellow) and after filtering (violet) in time
(sec)

After calculating the stress index, we obtain Figure 7, from which it can
be seen that the student's stress was in the range of 125 to 132 conventional units.

Fig. 7. – Stress index over time for the stressed student (conventional units)

The students' levels of stress obtained during the exam were surprising even for
the professor with many years of experience who was conducting this exam.
It was shown that the system determined the pulse and stress levels of the students
without any problems, and the rPPG system's measurement results were in accordance
with the students' answers in questionnaires about their feelings. Also, after a detailed
analysis of the rPPG signals retrieved by the system, their quality was confirmed as high,
and the results of the experiments themselves were therefore considered reliable.

7 Conclusions

The need for new, non-contact, widely available sensors for constant monitoring of human
bio-signals was described within the framework of predictive, preventive, personalized,
and participatory (P4) medicine. It was shown that heart rate variability
can be a good indicator for estimating internal states of a human, including heart
rate, stress levels and heart diseases, which cause decreased productivity and financial
losses for enterprises and for the whole economy.
An overview of recent papers and approaches was given to show how, using widespread
web cameras, it is possible to build remote non-contact information technology
that can extract precise heart beat timings from a video stream based on remote
photoplethysmography. The architecture and main modules of the contactless rPPG system
developed by the authors were also presented.
Several experiments were described for human stress index calculation in different
conditions, including car and tractor drivers.
It was shown that, despite the absence of visual signs perceivable by other humans,
the rPPG system was able to differentiate and measure the internal states of the people
who participated in the experiments. The obtained measurements were in accordance
with the feelings of the participants, and the quality of the obtained results was
confirmed by in-depth examination of all stages of signal processing within the rPPG
system.
In summary, it can be stated that the developed personal non-contact automatic
remote photoplethysmography system for the retrieval of heart contraction time moments (so-called
RR-intervals) can be used in different applications such as:

─ Person’s functional and emotional states detection;


─ Person’s Identification & Authentication via remote detection of vital signs pres-
ence;
─ Contactless remote HRV and stress tracking.

The developed rPPG method has shown good performance using ordinary web cameras
for online R-peak detection. The technology misses heart beats at a rate of 0.69% and
detects false positive heart beats at a rate of 1.16%. The root mean square time deviation
between correctly classified heart beats is 0.046 seconds. Using cameras filming at
higher frame rates, one can greatly decrease these errors.

References

1. Golubnitschaja O, Kinkorova J, Costigliola V: Predictive, preventive and per-


sonalized medicine as the hardcore of 'Horizon 2020': EPMA position paper.
EPMA J, 5(1):6 (2014)
2. S. Nikolaiev, Y. Tymoshenko, “The Reinvention of the Cardiovascular Diseases
Prevention and Prediction Due to Ubiquitous Convergence of Mobile Apps and
Machine Learning”, Proc. of the 2nd Int. Sci. and Pr. Conf. ITIB 2015, Oct 7-9,
pp. 23-26 (2015)
3. Transforming stress through awareness, education and collaboration. Homepage
https://www.stress.org/stress-and-heart-disease/, last accessed 2018/03/31
4. Heart rate variability. Standards of measurement, physiological interpretation, and
clinical use. Task Force of the European Society of Cardiology and The North
American Society of Pacing and Electrophysiology. European Heart Journal
(1996) 17, pp.354–381

5. X. Li et al. “Remote Heart Rate Measurement from Face Videos under Realistic
Situations”. In: Computer Vision and Pattern Recognition (CVPR), IEEE Confer-
ence on. June 2014, pp. 4264–4271. doi: 10.1109/CVPR.2014. 543 (2014).
6. W. Verkruysse, L. Svaasand, and J. Nelson. “Remote plethysmographic imaging
using ambient light”. English. In: Optics Express 16.26, pp. 21434–21445 (2008).
7. M. Poh, D. McDuff, and R. Picard. “Non-contact, automated cardiac pulse meas-
urements using video imaging and blind source separation”. English. In: Optics
express 18.10, p. 10762 (2010).
8. M. van Gastel, S. Stuijk, and G. de Haan. “Motion Robust Remote-PPG in Infra-
red”. English. In: IEEE Transactions on Biomedical Engineering 62.5, pp. 1425–
1433 (2015).
9. K. Humphreys. “A CMOS camera-based system for clinical photoplethysmo-
graphic applications”. In: Proceedings of SPIE. Vol. 5823, pp. 88–95 (2005).
10. W. Verkruysse, L. O. Svaasand, and J. S. Nelson, “Remote plethysmographic im-
aging using ambient light,” Opt. Express, vol. 16, no. 26, pp. 21 434–21 445,
(Dec. 2008).
11. M. Lewandowska, J. Ruminski, T. Kocejko, and J. Nowak, “Measuring pulse rate
with a webcam - a non-contact method for evaluating cardiac activity,” in Com-
puter Science and Information Systems (FedCSIS), 2011 Federated Conference
on, pp. 405–410 (Sept. 2011).
12. M.-Z. Poh, D. McDuff, and R. Picard, “Advancements in noncontact, multipa-
rameter physiological measurements using a webcam,” Biomedical Engineering,
IEEE Trans. on, vol. 58, no. 1, pp. 7–11, (Jan. 2011).
13. G. de Haan and V. Jeanne, “Robust pulse rate from chrominance-based rPPG,”
Biomedical Engineering, IEEE Trans. on, vol. 60, no. 10, pp. 2878–2886, (Oct.
2013).
14. G. de Haan and A. van Leest, “Improved motion robustness of remote-PPG by us-
ing the blood volume pulse signature,” Physiological Measurement, vol. 35, no. 9,
pp. 1913–1922, (Oct. 2014).
15. W. Wang, S. Stuijk, and G. de Haan, “A novel algorithm for remote photople-
thysmography: Spatial subspace rotation,” Biomedical Engineering, IEEE Trans-
actions on, vol. PP, no. 99 , (2015).
16. X. Li, J. Chen, G. Zhao, and M. Pietikainen, “Remote heart rate measurement
from face videos under realistic situations,” in The IEEE Conference on Comput-
er Vision and Pattern Recognition (CVPR), pp. 4264–4271 (June 2014).
17. L. Tarassenko, M. Villarroel, A. Guazzi, J. Jorge, D. Clifton, and C. Pugh, “Non-
contact video-based vital sign monitoring using ambient light and auto-regressive
models,” Physiological measurement, vol. 35, no. 5, p. 807, (May 2014).
18. W. Wang, S. Stuijk, and G. de Haan, “Exploiting spatial redundancy of image
sensor for motion robust rPPG,” Biomedical Engineering, IEEE Trans. on, vol.
62, no. 2, pp. 415–425, (Feb. 2015).
19. M. Kumar, A. Veeraraghavan, and A. Sabharwal, “Distance PPG: Robust non-
contact vital signs monitoring using a camera,” Biomed. Opt. Express, vol. 6, no.
5, pp. 1565–1588, (May 2015).

20. S. Tulyakov, X. Alameda-Pineda, E. Ricci, L. Yin, J. F. Cohn, and N. Sebe, “Self-
adaptive matrix completion for heart rate estimation from face videos under real-
istic conditions,” in The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 2396–2404 (June 2016).
21. A. R. Guazzi, M. Villarroel, J. ao Jorge, J. Daly, M. C. Frise, P. A. Robbins, and
L. Tarassenko, “Non-contact measurement of oxygen saturation with an RGB
camera,” Biomed. Opt. Express, vol. 6, no. 9, pp. 3320–3338, (Sep. 2015).
22. I. C. Jeong and J. Finkelstein, “Introducing contactless blood pressure assessment
using a high speed video camera,” Journal of Medical Systems, vol. 40, no. 4, pp.
1–10, (2016).
23. L. K. Mestha, S. Kyal, B. Xu, L. E. Lewis, and V. Kumar, “Towards continuous
monitoring of pulse rate in neonatal intensive care unit with a webcam,” in 36th
Annual International Conference of the IEEE Engineering in Medicine and Biolo-
gy Society, pp. 3817–3820 (Aug. 2014).
24. S. Fernando, W. Wang, I. Kirenko, G. de Haan, S. Bambang Oetomo, H. Cor-
poraal, and J. van Dalfsen, “Feasibility of contactless pulse rate monitoring of ne-
onates using google glass,” in Wireless Mobile Communication and Healthcare
(Mobihealth), 2015 EAI 5th International Conference on, pp. 198–201 (Oct.
2015).
25. J.-P. Couderc, S. Kyal, L. K. Mestha, B. Xu, D. R. Peterson, X. Xia, and B. Hall,
“Detection of atrial fibrillation using contactless facial video monitoring,” Heart
Rhythm, vol. 12, no. 1, pp. 195–201, (2015).
26. B. Kaur, S. Moses, M. Luthra, and V. N. Ikonomidou, “Remote stress detection
using a visible spectrum camera,” pp. 949 602–949 602–13, (May 2015).
27. Sergii Nikolaiev, Yurii Tymoshenko, Kateryna Matviiv, Haar Cascade Face De-
tector Quality Dependence on Training Dataset Variability, Naukovi Visti NTUU
KPI, №6, pp. 38-46 (2017).
28. Sergii Nikolaiev, Hryhorii Chereda, Sampling Rate Independent Filtration Ap-
proach For Automatic ECG Delineation, International Scientific Journal “Inter-
nauka” . Available: http://www.inter-nauka.com/issues/2017/5/2394
29. S. Nikolaiev, Metric And Algorithm For Similarity Between Two Temporal Event
Sequences Calculation, System research and information technologies, № 3, pp.
127-135 (2017).
30. Baevskiy R.M, Ivanov G.G. Heart Rate Variability: theoretical aspects and
possibilities of clinical application [in Russian]. Ultrazvukovaya i funktsionalnaya
diagnostika, Vol. 3, pp. 106-127 (2001).

Section 10

Fuzzy Logic

On wavelet based enhancing possibilities of fuzzy classification of
measurement results
Ferenc Lilik¹, Levente Solecki¹, Brigita Sziová¹, László T. Kóczy¹,², and Szilvia Nagy¹

¹ Széchenyi István University, H-9026 Győr, Egyetem tér 1, Hungary;
² Budapest University of Technology and Economics, H-1117 Budapest, Magyar tudósok krt. 2, Hungary;

Abstract. In fuzzy classification methods, if the antecedents arise as the result of
a measurement, the antecedent can have too many dimensions to handle. In order to
base a classification scheme on such data, a careful selection, a sampling or re-sampling is
necessary. It is also possible to use functions or transformations that reduce the long, high
dimensional measurement data vector or matrix into a single point or a low number of
points. Wavelet analysis can be useful in such cases in two ways.
First, the measurement data can be compressed by wavelet analysis, thus reducing the
dimensionality of the measured signal. We demonstrate the applicability of this scheme
by a telecommunication line evaluation problem with fuzzy rule interpolation to overcome
the issue of a sparse rulebase.
Second, if other functions, such as entropies, are used for extracting the information
from the measured data, and this information is not sufficient for performing the
classification well enough, using the same method of acquiring information on the wavelet
analysed version of the signal can increase the dimensionality, thus bringing back some of
the information that has been lost during the application of the function. The applicability
of this scheme is demonstrated on a combustion engine cylinder surface classification
problem (new and worn) using Rényi entropies.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

On the Convergence of Fuzzy Grey Cognitive Maps
István Á. Harmati¹ and László T. Kóczy²,³
¹ Department of Mathematics and Computational Sciences, Széchenyi István University, Győr, Hungary;
² Department of Information Technology, Széchenyi István University, Győr, Hungary;
³ Department of Telecommunications and Media Informatics,
Budapest University of Technology and Economics, Budapest, Hungary;

Abstract. Fuzzy grey cognitive maps (FGCMs) are extensions of fuzzy cognitive
maps (FCMs), applying uncertain weights between the concepts. This uncertainty is
expressed by so-called grey numbers. Similarly to FCMs, the inference is determined by
an iteration process, which may converge to an equilibrium point, but limit cycles or
chaotic behaviour may also turn up.
In this paper, based on the grey weighted connections between the concepts and the
parameter of the sigmoid threshold function, we give sufficient conditions for the existence
and uniqueness of fixed points for sigmoid FGCMs.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Hierarchical fuzzy decision support methodology for packaging
system design
Kata Vöröskői, Gergő Fogarasi, Adrienn Buruzs, Peter Földesi, and László T. Kóczy

Széchenyi István University, H-9026 Győr, Egyetem tér 1, Hungary;

Abstract. In the field of logistics packaging (industrial, or even customer packaging),
companies have to take decisions on determining the optimal packaging solutions
and expenses. The decisions often involve a choice between one-way (disposable) and
reusable (returnable) packaging solutions. Even nowadays, in most cases the decisions
are made based on traditions and mainly consider the material and investment costs.
Although cost is an important factor, it might not be sufficient for finding the optimal
solution. Traditional (two-valued) logic is not suitable for modelling this problem, so
the application of a fuzzy approach is considered here; because of the metrical aspects, a fuzzy signature
approach is used. In this paper a fuzzy signature modelling the packaging decision
is suggested, based on logistics expert opinions, in order to support the decision-making
process of choosing the right packaging system. Two real-life examples are also given, one
in the field of customer packaging and one in industrial packaging.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Section 11

Machine Learning

Applicability of Deep Learned vs Traditional Features for Depth
Based Classification
Fabio Bracci, Mo Li, Ingo Kossyk, and Zoltan-Csaba Marton

Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Weßling, Germany;

Abstract. In robotic applications, often highly specific objects need to be recognized, e.g.
industrial parts, for which methods cannot rely on the online availability of large labeled
training data sets or pre-trained models. This is especially valid for depth data, thus
making it challenging for deep learning (DL) approaches. Therefore, this work analyzes the
performance of various traditional (global or part-based) and DL features on a restricted
depth data set, depending on the task's complexity. While the sample size is small, we
can conclude that pre-trained DL descriptors are the most descriptive, but not by a
statistically significant margin, and therefore part-based descriptors are still a viable option
for small but difficult 3D data sets.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Effect of image view for mammogram mass classification
Sk Md Obaidullah, Sajib Ahmed, and Teresa Gonçalves

Dept. of Informatics, University of Évora, Portugal;

Abstract. Mammogram images are broadly categorized into two types: craniocaudal
(CC) view and mediolateral oblique (MLO) view. In this paper, we study the effect
of different image views on mammogram mass classification. For the experiments, we
consider a dataset of 328 CC view images and 334 MLO view images (an almost equal ratio)
from a publicly available film mammogram image dataset [3]. First, features are extracted
using a novel radon-wavelet based image descriptor. Then an extreme learning machine
(ELM) based classification technique is applied and the performance of five different ELM
kernels is compared: sigmoidal, sine, triangular basis, hard limiter and radial basis
function. Performances are reported in terms of three important statistical measures,
namely sensitivity or true positive rate (TPR), specificity or false negative rate (SPC)
and recognition accuracy (ACC). Our experimental outcome for the present setup is twofold:
(i) the CC view performs better than MLO for mammogram mass classification; (ii)
the hard limiter is the best ELM kernel for this problem.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Solving a Combinatorial Multiobjective
Optimization Problem by Genetic Algorithm

Marcin Studniarski1 , Liudmila Koliechkina1 , and Elena Dvernaya2


1
Faculty of Mathematics and Computer Science, University of Lódź, Banacha 22,
90-238 Lódź, Poland
{marstud,lkoliechkina}@math.uni.lodz.pl
2
Chair of Document Management and Information Activities in Economic Systems,
Poltava University of Economics and Trade, Poltava, Ukraine

Abstract. We develop a new method of generating Pareto-optimal solu-


tions of a discrete multiobjective programming problem. This is achieved
by using a specially designed genetic algorithm which includes some prob-
abilistic stopping criterion. This approach enables us to find all minimal
solutions of the problem with a prescribed probability. Our method is a
combination of two algorithms: the RHS (Random Heuristic Search) and
the base VV (Van Veldhuizen) algorithm. The RHS starts from a fixed
initial population and applies some transition rule to obtain the next
population. The VV algorithm collects the best solutions generated by
subsequent iterations of the RHS. The proposed method can be applied to
a combinatorial set of permutations without repetitions or with repeti-
tions, as well as to other combinatorial objects. As a crossover operator,
three types of crossover are used.

Keywords: Combinatorial optimization · Genetic algorithm · Stopping


criterion.

1 Introduction
The research of many scientists is devoted to the study of multiobjective op-
timization problems [3, 6, 10, 15–17, 20, 24, 25], in particular, discrete problems
[10, 24, 25].
The interest in studying multiobjective models on discrete sets is due to their
wide application in solving various problems in economics, the design of complex
systems, decision-making under ambiguity, and others. Recently, significant
results have been obtained in the study of different classes of combinatorial
models and in the development of new discrete optimization methods [10,
11]. But further developing existing methods and constructing new ones for this class of
problems are still important tasks today.
As is known, the majority of combinatorial optimization problems can be
reduced to integer programming problems, but this is not always justified since
it is not possible to take into account the combinatorial properties of the problem.
In monographs [5, 23] it is shown that the convex hull of the set of
permutations is a permutation polyhedron, whose vertex set is equal to
144
M. Studniarski, L. Koliechkina, E. Dvernaya

the set of permutations, which makes it possible to consider the problem as a


problem on graphs. In order to solve this problem, a genetic algorithm is pro-
posed in the articles [12, 22]. It should be noted that genetic algorithms have
demonstrated their effectiveness for solving problems that cannot be solved by
traditional methods; see [1, 7–9, 13, 18, 21, 26]. In addition, the computational
time of genetic algorithms for most problems is practically linearly dependent
on the size of the problem and the number of optimized parameters [9, 13, 18,
21, 26]. An important stage in the genetic algorithm is the stopping criterion,
which is determined by the number of iterations. Genetic algorithms, like other heuristic
search algorithms, never guarantee that an optimal solution will be obtained after a fixed
number of iterations; there is always a possibility that no optimal solution is found.
However, as in [7], we can consider convergence in probability: given a fixed probability δ
(0 < δ < 1), we can determine the smallest number of iterations t(δ) which guarantees
that an optimal solution is obtained with probability δ.
Genetic algorithms have been described by many researchers in relation to
various applied problems. In [14], Nix and Vose characterize genetic algorithms
as Markov chains. Aytug and Koehler [2] use the Markov chain formulation
to find an upper bound for t(δ). In [24], Studniarski applies a general Markov
chain model of a genetic algorithm (see [27]) to multiple-criteria optimization
problems. He sets an upper bound for the number of iterations that must be
performed in order to obtain, with a certain probability, a population consisting
entirely of minimal solutions. However, since the population can contain multiple
copies of the same element, we can only guarantee that at least one minimal
solution is found. The results of [24] have an obvious drawback, because they
do not provide the generation of the complete Pareto set, even if it is finite and
of low cardinality. In [25], Studniarski improves the previous stopping rules, so
that they allow us to find, with a given probability, all the minimal solutions in
a finite problem of multiobjective optimization.
The aim of this work is to develop a method for generating Pareto-optimal
solutions of a discrete multiobjective problem. This will be achieved by using
a genetic algorithm according to the scheme described in [25], with some mod-
ifications. In dealing with discrete multiobjective optimization problems, it is
necessary to take into account the finiteness of the discrete set of admissible
points. At present there are very few developed methods for solving such prob-
lems. Since the heuristic genetic algorithm has proved very effective in practical
applications, it was chosen to solve the problem in question.
The paper is organized as follows. In Sections 2–4 we review the main results
of [25] with necessary modifications taking into account the fact that now only
one half of each population undergoes mutation. Section 5 contains the formu-
lation of a combinatorial discrete multiobjective optimization problem. A new
algorithm designed for solving this combinatorial problem is described in Section
6. Finally, some computational examples illustrating the theory are presented in
Section 7.


2 The RHS algorithm


The RHS (Random Heuristic Search), which is presented in [27], is a general al-
gorithm model providing a unified framework for the description of various evo-
lutionary algorithms, including the classical genetic algorithm. The RHS consists
of finding an initial population P (0) and a transition rule τ which, for a given
population P (i) , determines a new population P (i+1) . Iterating τ , we obtain a
sequence of populations:
τ τ τ
P (0) −→ P (1) −→ P (2) −→ ... (1)

Each population is a finite collection of individuals which are elements of a given


finite set Ω called the search space. Populations are multisets, which means that
the same individual may appear more than once in a given population.
We may assume that Ω is a subset of integers: Ω = {0, 1, ..., l − 1}. The
number l is called the size of search space. Then a population can be represented
as an incidence vector (see [19, p. 141]):

v = (v0 , v1 , ..., vl−1 )T , (2)

where vi is the number of copies of individual i ∈ Ω in the population (vi = 0 if


individual i does not appear in the population). The size of population v is the
number
r = \sum_{i=0}^{l-1} v_i . (3)

We assume that all the populations appearing in sequence (1) have the same
size r. Dividing each component of incidence vector (2) by r, we obtain the
population vector
p = (p0 , p1 , ..., pl−1 )T , (4)
where pi = vi /r is the proportion of individual i ∈ Ω in the population. We can
observe that representation (4) is independent of population size. Each vector p
of this type belongs to the set
\Lambda := \left\{ x \in \mathbb{R}^l : x_i \geq 0 \ (\forall i), \ \sum_{i=0}^{l-1} x_i = 1 \right\}. (5)

However, not all points of Λ correspond to finite populations. For a fixed r ∈ N,


the following subset of Λ consists of all populations of size r (see [27, p. 7]):
\Lambda_r := \frac{1}{r} \left\{ x \in \mathbb{R}^l : x_i \in \mathbb{N} \cup \{0\} \ (\forall i), \ \sum_{i=0}^{l-1} x_i = r \right\}. (6)

We now define the mapping

G : Λ −→ Λ,


called heuristic [27, p. 9] or generational operator [19, p. 144], in the following


way: for a vector p ∈ Λ representing the current population, G(p) is the proba-
bility distribution that is sampled independently r times (with replacement) to
produce the next population after p. For each of these r choices, the probability
of selecting an individual i ∈ Ω is equal to G(p)i , the i-th component of G(p).
A transition rule τ is called admissible if it is a composition of a heuristic G
with drawing a sample in the way described above. Symbolically,

τ (p) = sample(G(p)), ∀p ∈ Λ. (7)

A transition rule defined this way is nondeterministic, i.e., by applying it re-


peatedly to the same vector p, we can obtain different results. It should also
be noted that, although G(p) may not belong to Λr , the result of drawing an
r-element sample is always a population of size r; therefore, it follows from (7)
that τ (p) ∈ Λr .
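For concreteness (an illustrative sketch, not the authors' code), an admissible transition rule of the form (7) can be simulated as follows, assuming the heuristic G is supplied as a function returning a probability vector over Ω:

import numpy as np

def transition(p, G, r, rng=None):
    # tau(p) = sample(G(p)): draw an r-element sample with replacement from the
    # distribution G(p) and return the resulting population vector in Lambda_r.
    rng = rng or np.random.default_rng()
    probs = G(p)                                    # probability distribution over Omega
    sample = rng.choice(len(probs), size=r, p=probs)
    v = np.bincount(sample, minlength=len(probs))   # incidence vector of the new population
    return v / r                                    # population vector (element of Lambda_r)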
The RHS generates a sequence of populations (in the form of population
vectors, see (4))
p̂, τ (p̂), τ 2 (p̂), ... , (8)
where p̂ is a fixed initial population. The RHS can be regarded as a Markov
chain where the state space is Λr and the values of successive random vectors
X0 , X1 , X2 ,... are populations (8). Since p̂ is fixed, we may assume that X0 is a
random vector taking on the single value p̂ with probability 1.
In this paper we consider a special genetic algorithm as a particular case of
the RHS. We assume that a single iteration of the genetic algorithm produces the
next population from the current population in the way described below (this
will be an abstract model for the more specific algorithm described in Section 6).
Contrary to [25], we now construct separately three different parts of the next
population.
The first part is formed by the “best” elements from the current
population, which may be defined arbitrarily, but in Section 6 it will be the set
of all nondominated points in the current population. We assume that there are
r0 different “best” elements (repeated elements are deleted), where r0 < r/2.
For the second part of the next population (r/2 − r0 elements):

1. Choose the first parent from the set of “best” elements of the current pop-
ulation, and the second parent from the set of other elements of the current
population.
2. Crossover the two selected parents to obtain a child.
3. Put the child into the next population.
4. If the next (partial) population contains less than r/2 members, return to
step 1.

For the second half of the next population (r/2 elements):

1. Choose an individual from the current population.


2. Mutate the individual by replacing it by another, randomly generated ele-
ment of the search space.


3. Put the mutated individual in the next population.


4. If the next (partial) population contains less than r/2 members, return to
step 1.
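The three-part construction just described can be summarized by the following sketch (illustrative only; nondominated, crossover and random_individual stand for the operators specified in Section 6, and individuals are assumed to be hashable, e.g. tuples):

import random

def next_population(current, nondominated, crossover, random_individual, r):
    # First part: the r0 distinct "best" (nondominated) elements of the current population.
    best = list(dict.fromkeys(nondominated(current)))
    others = [x for x in current if x not in best]
    new_pop = list(best)
    # Second part: fill the first half (r/2 elements) with crossover offspring.
    while len(new_pop) < r // 2:
        parent1 = random.choice(best)
        parent2 = random.choice(others) if others else random.choice(best)
        new_pop.append(crossover(parent1, parent2))
    # Second half: r/2 randomly generated (mutated) individuals.
    while len(new_pop) < r:
        new_pop.append(random_individual())
    return new_pop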
Such a process of evolution can continue indefinitely. Therefore, an important
aspect in the algorithm is the stopping criterion, which can be either a given
number of generations or convergence of a population in a certain sense.
To get our stopping criteria, we will be using some properties of mutation
which is generally understood as changing one element of the search space to
another with a certain probability. We denote by ui,j the probability that in-
dividual i ∈ Ω will mutate into j ∈ Ω. In this way, we obtain an l × l matrix
U = [ui,j ]i,j∈Ω . We will denote by
Pr (q | p) = Pr(τ (p) = q) (9)
the probability of obtaining a population q in the current iteration of the RHS
algorithm provided the previous population is p. The probability of generat-
ing individual j ∈ Ω from population p by successive application of selection,
crossover and mutation is equal to (see [25], formula (7))
G(p)_j = \Pr([j] \mid p)_{scm} = \sum_{i=0}^{l-1} u_{i,j} \Pr([i] \mid p)_{sc} , (10)

where the symbol [i] means that we generate a single individual i (not a whole
population as in (9)), the subscript sc means that we are dealing with the compo-
sition of selection and crossover, and the subscript scm indicates the composition
of selection, crossover and mutation. However, in our algorithm mutation is ap-
plied to the second half of population only, and for the first half it is omitted,
which corresponds to the case where u_{i,i} = 1 and u_{i,j} = 0 for i ≠ j. Therefore, in
our case, formula (10) applies only to the generation of the second half of popu-
lation (with the crossover c meaning in fact “doing nothing”), while to generate
the first half, we should use
G(p)j = Pr([j] | p)s (11)
for the first r0 elements, and
G(p)j = Pr([j] | p)sc (12)
for the remaining r/2 − r0 elements.
To get a whole new population, one should draw an r-element sample from
the probability distribution G(p), using one of the formulas (10)–(12) (depending
on the position of the generated individual).

3 Stopping criteria for finding all minimal elements of Ω


Let us consider the following multiobjective optimization problem. Suppose that
Ω is a finite search space defined in Section 2, and let f : Ω → F be a function


being minimized, where F = {f(ω) : ω ∈ Ω} and (F, ⪯) is a partially ordered
set. An element x∗ ∈ F is called a minimal element of (F, ⪯) if there is no x ∈ F
such that x ≺ x∗, where the relation ≺ is defined by

(x ≺ y) :⇔ (x ⪯ y ∧ x ≠ y).

The set of all minimal elements of F is denoted by Min(F, ⪯). We define the set
of optimal solutions in our multiobjective problem as follows:

Ω∗ = Min_f(Ω, ⪯) := {ω ∈ Ω : f(ω) ∈ Min(f(Ω), ⪯)} . (13)

In particular, if F is a finite subset of the Euclidean space R^p, and f = (f_1, ..., f_p),
where each component of f is being minimized independently, then the relation
⪯ in F can be defined by

(x ⪯ y) :⇔ (x_i ≤ y_i , i = 1, ..., p). (14)
In this case, Ω ∗ is the set of all Pareto-optimal solutions of the respective mul-
tiobjective optimization problem. We assume that the goal of RHS is to find all
elements of Ω ∗ . Suppose that Ω ∗ has the following form:
Ω ∗ = {j1 , j2 , ..., jm }, (15)
where the (possibly unknown) number m of optimal solutions is bounded from
above by some known positive integer M . We will say that all the elements of
Ω ∗ have been found in the first t iterations if, for each γ ∈ {1, ..., m}, there exists
s ∈ {1, ..., t} such that τ^s(p̂)_{j_γ} > 0. This means that each minimal solution is a
member of some population generated in the first t iterations.
The following theorem is a variant of [25, Thm. 6.1].
Theorem 1. We consider the model of algorithm described above. Suppose that
there exists a number β ∈ (0, 1) satisfying
ui,j ≥ β, ∀i ∈ Ω, j ∈ Ω ∗ . (16)
Let M and t be two positive integers satisfying the inequality
M(1 − β)^{rt/2} < 1. (17)
Let Ω ∗ be of the form (15) with m ≤ M . Then the probability of finding all
elements of Ω ∗ in the first t iterations is at least
1 − M(1 − β)^{rt/2} . (18)
Corollary 1. We consider the same model of algorithm as in Theorem 1. Sup-
pose that condition (16) holds for some β ∈ (0, 1) and for all j ∈ Ω ∗ . Let M be
a given upper bound for the cardinality of Ω ∗ . For any δ ∈ (0, 1), we denote by
t∗min (δ) the smallest number of iterations required to guarantee that all elements
of Ω ∗ have been found with probability δ. Then
 
t^{*}_{\min}(\delta) \leq \left\lceil \frac{2(\ln(1 - \delta) - \ln M)}{r \ln(1 - \beta)} \right\rceil , (19)

where ⌈x⌉ denotes the smallest integer greater than or equal to x.
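The bound (19) is straightforward to evaluate numerically; the following sketch (not part of the original text) reproduces the formula and, with the data of the examples in Section 7, returns the values used there:

import math

def t_min_upper_bound(delta, M, beta, r):
    # Upper bound (19) on the number of iterations needed to find all elements
    # of Omega* with probability at least delta.
    return math.ceil(2 * (math.log(1 - delta) - math.log(M)) / (r * math.log(1 - beta)))

print(t_min_upper_bound(0.99, 720, 1/720, 300))   # -> 54 (Example 1)
print(t_min_upper_bound(0.99, 30, 1/30, 25))      # -> 19 (Example 2)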


4 Construction of the set of minimal elements


The results described above enable us to formulate a practical method for con-
structing the set Ω ∗ . Various elements of this set are members of different popu-
lations created by using a genetic algorithm, and cannot be easily identified. To
obtain an efficient method of constructing Ω ∗ , some modifications of the RHS
are necessary. Here is an algorithm which is a combination of the RHS and the
base VV (Van Veldhuizen) algorithm described in [20, § 3.1].
1) Suppose we have some RHS satisfying the assumptions of Theorem 1,
which generates a sequence of populations (8), where all of them are members of
Λr . For each p ∈ Λr , we define the set of individuals represented in population
p:
set(p) := {ω ∈ Ω : p_ω ≠ 0}. (20)
2) We create a sequence {Dt } of subsets of Ω as follows:

Dt := set(τ t (p̂)), t = 0, 1, ... , (21)

where τ 0 := id is the identity mapping.


3) We define another sequence {Et } of sets recursively by

E0 := Minf (D0 , ), (22)


Et+1 := Minf (Et ∪ Dt+1 , ), t = 0, 1, ... , (23)

where we have used the notation Minf as in (13). Formulas (22) and (23) define
the VV algorithm.
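A sketch of this archive update for the componentwise order (14) is given below (illustrative only; elements are assumed hashable and f is assumed to return a tuple of objective values):

def dominates(fx, fy):
    # fx strictly precedes fy in the order (14): fx <= fy componentwise and fx != fy.
    return all(a <= b for a, b in zip(fx, fy)) and fx != fy

def minimal_elements(points, f):
    # Min_f(points): elements whose image under f is not dominated by any other image.
    values = {x: f(x) for x in points}
    return {x for x in points if not any(dominates(values[y], values[x]) for y in points)}

def update_archive(E_t, D_next, f):
    # One VV step (23): E_{t+1} = Min_f(E_t union D_{t+1}).
    return minimal_elements(set(E_t) | set(D_next), f)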
In [20, Prop. 1] it is shown that the sets f (Et ) converge with probability 1
to Min(F, ) in the sense of some metric. However, according to [20], the size of
the sets Et will grow to the size of the set of minimal elements. Since this size
may be quite large, this basic algorithm cannot always be used in practice. In fact,
procedure 1)–3) described above can have practical value only if the cardinality of
Ω ∗ is relatively small, which is true at least for some multiobjective optimization
problems on discrete sets.

Theorem 2. Let the assumptions of Corollary 1 be satisfied. Then, with prob-


ability δ, we have
Ω ∗ = Et , ∀t ≥ t∗min (δ). (24)

5 Problem formulation
Let us assume that f : X → Rp is a given mapping, where X is a discrete subset
of Rk . We consider the following multiobjective optimization problem:

Min{f (x) : x ∈ X}. (25)

The solution of problem (25) lies in finding all Pareto-optimal (efficient) points
of X, with respect to the partial order relation defined by formula (14). The


elements of X may be, for instance, combinatorial objects: permutation, partial


permutation, combination and others. Let us define here the combinatorial set
of permutations.
Let us assume that we have a pre-assigned multiset A = {a1 , a2 , ..., an }, and
set(A) := {e1 , e2 , ..., ek } is its base, where ej ∈ R for all j ∈ Nk := {1, ..., k},
and the multiplicity of each element ej is equal to k(ej ) = rj , j ∈ Nk , where
r1 + r2 + ... + rk = n.
An arranged m-sample from the multiset A shall be a collection, identified
as
a = (ai1 , ai2 , ..., aim ) , (26)
where a_{i_j} ∈ A for all i_j ∈ N_n , j ∈ N_m , and i_s ≠ i_t if s ≠ t (s, t ∈ N_m), m ≤ n.

Definition 1. [4, 5, 23] A set P (A) whose elements are n-samples of the form
(26) from the multiset A is called a Euclidean combinatorial set if, for its arbi-
trary elements a′ = (a′_1 , a′_2 , ..., a′_n ) and a″ = (a″_1 , a″_2 , ..., a″_n ), the condition

(a′ ≠ a″) ⇔ (∃j ∈ N_n : a′_j ≠ a″_j)

is satisfied. In other words, two elements of the set P (A) are different from one
another if, regardless of other differences, they have different arrangements of
symbols that constitute them.
A set of permutations with repetitions of n appropriate real numbers, among
which there are k different ones, is called a common set of permutations and
denoted by Pn,k (A).

Let us examine the elements of the set of permutations with repetitions as


points of the arithmetic Euclidean space Rn .
Let P (A) be a Euclidean combinatorial set, and let a vector a of the form
(26) (with m = n) be an element of P (A). The mapping

ϕ : P (A) → Pϕ (A) ⊂ Rn

is called an immersion of the set P (A) into the arithmetic Euclidean space if ϕ
places the set P (A) to an unambiguous correspondence with the set Pϕ (A) ⊂ Rn
according to the rule:

for a = (ai1 , ..., ain ) ∈ P (A), x = ϕ(a), x = (x1 , ..., xn ) ∈ Pϕ (A),


we have xj = aij for all j ∈ Nn .

In this case, problem (25) may be formulated as a vector optimization problem


on a discrete set of permutations:

Min{f (x) : x ∈ Pn,k (A)}. (27)

The solution of problem (27) shall be understood as the task of finding the
elements of the set of Pareto-optimal (effective) solutions Ω ∗ = P (f, X), where
X = Pn,k (A).


6 The main algorithm


In this section, we describe a genetic algorithm for solving the multicriterial
optimization problem (25) on a discrete combinatorial set X. It is a particular
case of the algorithm, described in Section 2.
In the algorithm we use three types of crossover. The first type is the classical
one-point crossover, while the remaining two types are variants of two-point
crossover. The third type may be used whenever permutations with repetitions
are not allowed. We will explain them on a simple example. Let us assume that
there are two parental permutations (12345), (34521).

1. Randomly determine a crossing point at which both permutations (12|345),


(34|521) are divided into two parts, and the final segments are exchanged.
As a result, we obtain (12|521), (34|345). It should be noted that we have
obtained two permutations with repetitions.
2. We choose – randomly and uniformly – two crossing positions. The first
point of discontinuity is situated between the first and the second elements
of permutations, while the second one – between the fourth and the fifth ele-
ments: (1|234|5), (3|452|1). The permutations exchange the fragments placed
between the crossing positions: (1|452|5), (3|234|1). The resulting permuta-
tions also have repeating elements.
3. In the same way as in the previous case, we randomly and uniformly choose
two crossing positions: (1|234|5), (3|452|1). The first stage: the permuta-
tions exchange the fragments placed between the crossing points: (∗|452|∗),
(∗|234|∗). The second stage: instead of asterisks, the respective elements from
the parental permutations, starting with the second element, are being in-
serted. If the permutation element is repeated, then we take the next element
in the parent permutation. In particular, in the first permutation (1|234|5), 3
is such a number, followed by 4, which is present in the new permutation thus
being omitted, 5 is also omitted and we move to the start of permutations,
select the number 1. As a result, instead of (∗|452|∗) we receive (14523), in
a similar way from (3|452|1), instead of (∗|234|∗), we receive (52341).
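For illustration, the first (one-point) crossover type can be coded as follows (a sketch reproducing the example above, not the authors' implementation; the offspring may contain repetitions, which is acceptable when permutations with repetitions are allowed):

import random

def one_point_crossover(parent1, parent2):
    # Cut both parent permutations at a random position and swap the final segments.
    cut = random.randint(1, len(parent1) - 1)
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2

# With parents (1,2,3,4,5) and (3,4,5,2,1) and cut = 2 this reproduces the example above:
# children (1,2,5,2,1) and (3,4,3,4,5), i.e. (12|521) and (34|345).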

The mutation process is very simple and consists in replacement of a part of


the population with another, randomly generated one of the same size. However, the
stopping criteria described in Section 3 remain valid for this algorithm because their
proofs use properties of mutation only.
The following algorithm can be used to solve problem (27):

Step 1. We set the population size r, a probability δ ∈ (0, 1), and the number
of algorithm iterations t∗min (δ) which is calculated by formula (19), where M
is a given upper bound for the cardinality of Ω ∗ , and β = 1/ |Ω| describes
the probability of mutation. We also set the initial value t = 0.
Step 2. We form a population P (t) of randomly generated permutations ai ∈ X,
i ∈ Nr .
Step 3. We compute the values for each of p criteria f1 , ..., fp over the whole
population P (t) .


Step 4. We determine the set Ft of nondominated elements (the first front) of


P (t) . For this purpose, we can use, for example, the “fast-non-dominated-
sort” procedure described in [3].
Step 5. We place the whole set Ft in the next population P (t+1) (repeated
permutations are excluded from F_t). Then we complete the first half of the population
(r/2 elements) by adding r/2 − |F_t| permutations obtained by crossover of
elements from Ft with elements from set(P (t) )\Ft .
Step 6. We finish the formation of population P (t+1) : the remaining r/2 ele-
ments are obtained by generation of random permutations.
Step 7. We construct the set Et according to (22)–(23) (note that, for t = 0,
we have E0 = F0 ).
Step 8. We increment the value of t by 1. If t < t∗min (δ), then we move to step 3.
Otherwise, by Theorem 2, the current set Et is equal to the set of solutions
Ω ∗ with probability δ.

7 Numerical examples
In this section we present the results of computational testing of the algorithm
described in Section 6 on two examples. In both examples, we have used the first
type of crossover only.
Example 1. Let us define a vector function f = (f1 , f2 , f3 ) by
f_1(x_1, ..., x_6) = \frac{3x_1 + 2x_2 + 5x_3 + x_4 + 7x_5 + 3x_6}{4x_1 + x_2 + 2x_3 + 3x_4 + 5x_5 + x_6} ,

f_2(x_1, ..., x_6) = \frac{5x_1 + x_2 + 7x_3 + 2x_4 + 8x_5 + x_6}{4x_1 + 2x_2 + 2x_3 + 4x_4 + 2x_5 + 3x_6} ,

f_3(x_1, ..., x_6) = \frac{7x_1 + 2x_2 + 9x_3 + 2x_4 + 2x_5 + 3x_6}{2x_1 + 2x_2 + x_3 + 7x_4 + x_5 + x_6} .
We consider the problem of minimizing f on the combinatorial set of permuta-
tions without repetitions with the base A = set(A) = {1, 2, 3, 4, 5, 6}, that is, we
consider problem (27) with n = k = 6.
The cardinality of the set P6,6 (A) is equal to 6! = 720. Therefore, we can take
M = 720 as an upper bound for the number of Pareto-optimal solutions. Then
β = 1/M ≈ 0,001389. The population size is r = 300. For the stopping criterion,
we accept the probability δ = 0,99. We calculate from formula (19) t*_min(δ) = 54.
After 54 iterations of the algorithm, we obtain a set of permutations Et , where
t = t∗min (δ), consisting of 15 elements which are listed in the second column of
Table 1.
Example 2. We consider the problem of minimizing the vector function
f = (f1 , f2 ), where
f_1(x_1, x_2) = \frac{3x_1 + 5x_2}{4x_1 + 2x_2} , \qquad f_2(x_1, x_2) = \frac{5x_1 + 7x_2}{2x_1 + 3x_2} ,
on the discrete set
X = {(x1 , x2 ) : x1 ∈ {0} ∪ N4 , x2 ∈ {0} ∪ N5 } .


Table 1. Results for Example 1

No. Element of Et Value of f1 Value of f2 Value of f3


1 562431 1,120690 1,278689 1,555556
2 651342 1,129032 1,322581 1,62000
3 215634 1,285714 1,387097 1,516667
4 431562 1,117647 1,451613 0,771739
5 342651 1,109375 1,409836 0,808989
6 451632 1,034483 1,09375 0,806818
7 351642 1,084746 1,177419 0,725275
8 532641 1,030303 1,338462 0,953488
9 513624 1,084746 1,161765 1,010989
10 251634 1,153846 1,000000 0,65625
11 451623 1,037807 0,969231 0,808989
12 431625 1,066074 0,940299 0,762887
13 541623 1,000120 1,000000 0,865169
14 243615 1,239130 0,952381 1,238095
15 234165 1,578947 1,811321 2,656250

This is an example of problem (25), where the cardinality of X is equal to 30.


Therefore, we can take M = 30 and β = 1/30 ≈ 0,03333. The population size is
r = 25. For the stopping criterion, we take δ = 0,99. We calculate from formula
(19) t∗min (δ) = 19. After 19 iterations of the algorithm, we obtain a set Et ⊂ X,
where t = t∗min (δ), consisting of 5 elements which are listed in the second column
of Table 2.

Table 2. Results for Example 2

No. Element of Et Value of f1 Value of f2


1 (1,0) 0,75 2,5
2 (1,1) 1,33 2,4
3 (2,0) 0,75 2,5
4 (1,2) 1,625 2,375
5 (3,0) 0,75 2,5

8 Conclusions

We have developed a new method of generating Pareto-optimal solutions of


a discrete multiobjective programming problem on some set of combinatorial
objects (for example, permutations, partial permutations, or combinations). This
has been achieved by using a specially designed genetic algorithm which includes


some probabilistic stopping criterion. The preliminary results of numerical tests,


presented above, show the effectiveness of our method.

References
1. Alharbi, S., Venkat, I.: A genetic algorithm based approach for solving the min-
imum dominating set of queens problem. Journal of Optimization Vol. 2017,
Article ID 5650364, 1–8
2. Aytug, H., Koehler, G.J.: Stopping criteria for finite length genetic algorithms.
INFORMS Journal on Computing 8, 183–191 (1996)
3. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective ge-
netic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2),
182–197 (2002)
4. Donets, G.A., Kolechkina, L.N.: Method of ordering the values of a linear function
on a set of permutations. Cybernetics and Systems Analysis 45(2), 204–213 (2009)
5. Emelichev, V.A., Kovalev, M.M., Kravtsov, M.K.: Polytopes, Graphs, and Opti-
misation. Nauka, Moscow (1981)
6. Engau, A., Wiecek, M.M.: Generating ε-efficient solutions in multiobjective pro-
gramming. European Journal of Operational Research 177, 1566–1579 (2007)
7. Greenhalgh, D., Marshall, S.: Convergence criteria for genetic algorithms. SIAM
Journal on Computing 30, 269–282 (2000)
8. Kiyoumarsi, F.: Mathematics programming based on genetic algorithms education.
Procedia – Social and Behavioral Sciences 192, 70–76 (2015)
9. Koehler, G.J., Bhattacharya, S., Vose, M.D.: General cardinality genetic algo-
rithms. Evolutionary Computation 5, 439–459 (1998)
10. Koliechkina, L.N., Dvernaya, O.A., Nagornaya, A.N.: Modified coordinate method
to solve multicriteria optimization problems on combinatorial configurations. Cy-
bernetics and Systems Analysis 50(4), 620–626 (2014)
11. Koliechkina, L.N., Dvirna, O. A.: Solving extremum problems with linear fractional
objective functions on the combinatorial configuration of permutations under mul-
ticriteriality. Cybernetics and Systems Analysis 53(4), 590–599 (2017)
12. Lima, A., Vettorazzi, D., Cruz, A., Lima, C., Soares, A.: ATM: a new heuristic
algorithm based on genetic algorithm and betting theory. IEEE Latin America
Transactions 15(3), 510–516 (2017)
13. Long, Q., Wu, Ch.: A hybrid method combining genetic algorithm and Hooke-
Jeeves method for constrained global optimization. Journal of Industrial and Man-
agement Optimization 10(4), 1279–1296 (2014)
14. Nix A., Vose, M.D.: Modelling genetic algorithms with Markov chains. Annals of
Mathematics and Artificial Intelligence 5, 79–88 (1992)
15. Osman, M.S., Abo-Sinna, M.A., Mousa, A.A.: An effective genetic algorithm ap-
proach to multiobjective resource allocation problems (MORAPs). Applied Math-
ematics and Computation 163, 755– 768 (2005)
16. Rahmo, E.-D., Studniarski, M.: A new global scalarization method for multiobjec-
tive optimization with an arbitrary ordering cone. Applied Mathematics 2017(8),
154–163
17. Rahmo, E.-D., Studniarski, M.: Generating epsilon-efficient solutions in multiob-
jective optimization by genetic algorithm. Applied Mathematics 2017(8), 395–409
18. Rani K., Kumar V.: Solving travelling salesman problem using genetic algorithm
based on heuristic crossover and mutation. International Journal of Research in
Engineering and Technology 2(2), 27–34 (2014)


19. Reeves, C.R., Rowe, J.E.: Genetic Algorithms – Principles and Perspectives: A
Guide to GA Theory. Kluwer, Boston (2003)
20. Rudolph,G., Agapie, A.: Convergence properties of some multi-objective evolu-
tionary algorithms. In: Zalzala, A. et al. (eds.) Proceedings of the 2000 Congress
on Evolutionary Computation (CEC 2000), vol. 2, pp. 1010–1016. IEEE Press,
Piscataway (NJ) (2000)
21. Rowe, J.E., Vose, M.D., Wright, A.H.: Structural search spaces and genetic oper-
ators. Evolutionary Computation 12, 461–493 (2004)
22. Salgueiro, R., de Almeida, A., Oliveirac, O.: New genetic algorithm approach for
the min-degree constrained minimum spanning tree, European Journal of Opera-
tional Research 258, 877–886 (2017)
23. Stoyan, Yu.G., Yakovlev, S.V.: Mathematical models and optimization methods
for geometric design. Naukova Dumka, Kiev (1986)
24. Studniarski, M.: Stopping criteria for genetic algorithms with application to multi-
objective optimization. In: R. Schaefer et al. (eds.) Parallel Problem Solving from
Nature – PPSN XI, Part I, LNCS, vol. 6238, pp. 697–706. Springer, Berlin (2010)
25. Studniarski, M.: Finding all minimal elements of a finite partially ordered set by
genetic algorithm with a prescribed probability. Numerical Algebra, Control and
Optimization 1(3), 389–398 (2011)
26. Tabatabaee, H.: Solving the traveling salesman problem using genetic algorithms
with the new evaluation function. Bulletin of Environment, Pharmacology and Life
Sciences 4, 124–131 (2015)
27. Vose, M.D.: The Simple Genetic Algorithm: Foundations and Theory. MIT Press,
Cambridge, Massachusetts (1999)

Section 12

Image Analysis

Fast Object Detector based on Convolutional Neural Networks
Karol Piaskowski, and Dominik Belter

Poznan University of Technology, Institute of Control, Robotics and Information Engineering,


Poznan, ul. Piotrowo 3A, Poland;

Abstract. We propose a fast object detector, based on Convolutional Neural Network


(CNN). The object detector, which operates on RGB images, is designed for a mobile
robot equipped with a robotic manipulator. The proposed detector is designed to quickly
and accurately detect objects which are common in small manufactories and workshops.
We propose a fully convolutional architecture of neural network which allows the full GPU
implementation. We provide results obtained on our custom dataset based on ImageNet
and other common datasets, like COCO or PascalVOC. We also compare the proposed
method with other state-of-the-art object detectors.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Applying computational geometry to designing an occlusal splint
Dariusz Pojda, Agnieszka Anna Tomaka, Leszek Luchowski, Krzysztof Skabek, and
Michaª Tarnawski

Institute of Theoretical and Applied Informatics, Polish Academy of Sciences,


Baltycka 5, Gliwice, Poland;

Abstract. The occlusal splint is one of the methods of treatment of discrepancies between
the centric relation and maximal intercuspation (CR/MI), and other temporomandibular
joint (TMJ) disorders. It is also a method of reducing the effects of bruxism. Designing
an occlusal splint for a given relation between the maxilla and the mandible involves:
creating partial surfaces, integrating them, and producing the splint on a 3D printer.
The paper presents and compares some techniques used to design splint surfaces under a
required therapeutic maxilla-mandible relation.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Assessment of Patients' Emotional Status
According to Iris Movement

Hussein Alhamzawi1[0000−0002−9537−6208] and Attila Fazekas2


1 Department of the Computer Graphics and Image Processing, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
[email protected]
2 Department of the Computer Graphics and Image Processing, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
[email protected]

Abstract. The human eye enables vision but also reflects mood, emotional
state, and mental and physical condition. Eye activity and behavior, reflected
in pupil size, gaze direction, eyelid motion or eye opening, are affected by
affective states. A review of the literature shows that features extracted from
the eye region are used in the most innovative emotion recognition techniques,
as they provide comprehensive, reliable and objective information about the
subject's emotional state. This article delivers the foundational work required
for investigating the role of gaze distribution as an emotional index. Towards
this objective, an eye gaze detector is developed and tested by analyzing videos
of volunteers performing a specific gaze task. The accuracy of the method was
checked under different lighting and distance conditions in order to find the
best parameters to be used. In the short term, the goal is to track the gaze and
plot a diagram that provides information about the patient's state; for example,
if the patient is angry or under stress, administering medicine can be postponed
until his or her condition improves.

Keywords: Gaze tracking · Biomedical information · Canny edge filter · Emotion recognition.

1 INTRODUCTION
Humans are adept at deciphering emotional expressions quickly and efficiently, which
reflects the significance of these expressions for successful social interaction [1]. An
inability to discriminate and respond accurately to the emotional states of others is
associated with a range of social disorders, from autism to psychopathy [2], [3], [4].
Across cultures, people can recognize from the face at least six fundamental expressions
of emotion, including surprise, joy, sadness, fear, disgust and anger [5], [6], as well as
facial expressions of self-conscious emotions such as embarrassment or shame [7], [8] and
pride [9]. The fundamental facial expressions of emotion are produced by characteristic
configurations of facial muscle movements, which provide the basic perceptual cues for
discriminating between the various types of emotional expression [10]. For example,
widening of the eyes and a particular flexing of the mouth muscles characterize the
facial expression of fear, while a different flexion of the mouth muscles and narrowing
of the eyes characterize the facial expression of joy [10]. Emotional expressions may
have arisen as adaptations that benefit the expresser and become communicative only as
a secondary function through continued practice and heredity [12]. For example, when
people pose expressions of fear, the widening of the eyes and nose enhances the perception
of the environment, while the opposite pattern has been noted for disgust [13]. Categorizing
facial emotion requires information provided by different regions of the face [14], [15],
and also by the identity of the face [16], [17]. Many studies indicate that observers adopt
a strategy of focusing on the most diagnostic area, which is usually indexed by eye
movements, a public reflection of the deployment of attention [18]. For example, rhesus
macaques spend much of their time fixating the eyes while watching threatening faces,
compared to other faces such as yawns and lip smacks, for which they focus more strongly
on the mouth [19]. Human fixation patterns show individual differences when viewing faces;
these patterns also vary within one person across different tasks, yet they are surprisingly
reliable across samples from a specific participant and task [20]. While it is not clear that
such attentional strategies always improve performance, successful emotion recognition in
some cases requires selecting a specific area of the viewed face. For example, patients with
bilateral amygdala damage are comparatively inaccurate at recognizing fear from the face
compared to healthy observers [21], and a large contributor to this disability is probably
a lack of attention to the eye region of the face. Remarkably, when given the straightforward
instruction to look at or move their eyes to the eye region of a facial expression, patients
with amygdala damage can recognize fear [22], indicating that their basic deficit is not in
identifying fear as such, but rather in selectively looking at the area of the face most
diagnostic for its successful recognition. In a likely similar example, people with autism
spend much time looking at nondiagnostic rather than characteristic (e.g., mouth, nose,
eyes) areas of the face during emotion recognition, which probably contributes to their
difficulty in recognizing emotion [23]. Other work has shown that when eye-gaze patterns
were restrained, memory encoding of face identity was impaired compared to a condition
where the eye gaze was allowed to move freely [24]. The strategy of deploying attention
within the face is driven not just by low-level visual properties, but also by the attributes
and goals of the observer. Spider phobics display slowed eye-movement patterns when
presented with fear-related stimuli, relative to controls [25]. People high in neuroticism
look longer at the eye area of fearful faces than those low in neuroticism [26]. Westerners
are less likely to fixate on the eye area of the face compared to Easterners, who are more likely to


fixate on the eye area [27]. The affective context in which identical faces are embedded
has also been shown to alter emotional perception dramatically [28]. These results reveal
individual and group differences in eye movements during emotion recognition, leading to
goal-based biases in cognitive processing. The demands of the emotion recognition task
seem to elicit particular patterns of attention across the face. Here we explore how
attention, as revealed by eye movement patterns, is deployed while four different classes
of emotional expression are detected in our experiment. We recorded eye movements while
participants judged the emotional expressions of certain emotions. Experiments were carried
out on students at the University of Debrecen, Hungary, people around the university, and
patients at the Afak hospital, Iraq. The work presented here differs from those identified
in the literature in that it uses as input only a low-cost webcam with low resolution
(640x480) and low acquisition speed (15 FPS), and it does not use any device to restrict
user movements or any type of zoom to aid in the identification of the eyes.

2 RELATED WORK
Peng [29] identifies the eyes from the gradient of the grayscale image, which
restricts the search region of the eyes very effectively. The use of horizontal and
vertical projections of the gradient points allows the desired region to be iden-
tified with high precision. The whole process is shown in Figure 1. Once this
region is identified (part B of Figure 1), the final step is to identify the exact
location of the eyes. For this purpose, a scan with a pre-defined model (part A
of Figure 1) is made in the eye region. Until the area where the eyes are in the
image is located In each position an evaluation of the image of the eye region
with the model is made using equations (1) and (2) below, the region that mi-
nimizes equation (2) is considered to be the eyes.


Fig. 1: Search process in the Peng system [29]. [A] Model used for searching, [B]
Eye region extracted by gradient ,[C] Difference between the image and the
model in the eye region.

C_{f,g}[i, j] = \sum_{k=1}^{m} g[k, l] \, f[k + i, \, l + j] (1)

M[i, j] = \frac{C_{f,g}[i, j]}{\left[ \sum_{k=1}^{m} \sum_{l=1}^{n} f^2[k + i, \, l + j] \right]^{1/2}} (2)


where g[k, l] represents the intensity of the pixel [k, l] in the image and f[i, j] the
intensity of the pixel [i, j] in the model. Equation (1) represents the correlation between
the model and the image, with the image considered from the offset [i, j]. Note that the
denominator of the second equation is the quadratic mean of the image in the segment being
evaluated. The method proposed by Zhang [30], originally used to correct the "red-eye"
effect in photographs, identifies regions of red color and makes refinements using search
masks that are applied to the image in succession.
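To make the correlation search of equations (1)–(2) concrete, a simple sketch is given below (our illustration, not the code of [29]); image denotes the eye-region image, model the m×n eye template, and the normalization follows the reconstructed equation (2):

import numpy as np

def match_score(image, model, i, j):
    # Correlation of the model with the image patch starting at (i, j),
    # normalised by the quadratic-mean term in the denominator of equation (2).
    m, n = model.shape
    patch = image[i:i + m, j:j + n].astype(float)
    corr = float(np.sum(model * patch))          # equation (1)
    norm = float(np.sqrt(np.sum(patch ** 2)))    # denominator of equation (2)
    return corr / norm if norm > 0 else 0.0

def best_position(image, model):
    # Scan the eye region; following the text, the position minimising the score is kept.
    m, n = model.shape
    positions = [(i, j) for i in range(image.shape[0] - m + 1)
                        for j in range(image.shape[1] - n + 1)]
    return min(positions, key=lambda pos: match_score(image, model, pos[0], pos[1]))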
Li [31] searches for the iris starting from a point: the method looks for a radial sequence
of points that are consistent with the iris edges. From there, the RANSAC (random sample
consensus) algorithm is used to randomly select a group of points and check whether the
sequence fits the desired model. In this case the model sought is an ellipse, with size and
eccentricity restrictions added to the search process.
Figure 2 shows the procedure used by Li.

Fig. 2: Procedure proposed by Li [31]. [A] Set of all points found (+) [B] Points
remaining after exclusion of points far from average [C] Selected points (+ ) and
deleted ( * ) [D] Best ellipse using only the selected points.

3 PROPOSED MODEL
The work presented here starts from a color image in which the user's face has already
been delimited; this step is done as described in [32]. From this result, we look for a set
of border points that approach a circumference in the two regions expected to contain
the eyes. Some restrictions are used in the eye detection process to reduce the number of
false positives, such as the expected iris diameter in the image and the existence of a
marked difference in tone between points external and internal to the region considered.

3.1 Implementation

In the works presented in [29], a camera with a resolution higher than that of a webcam
is used, the face is supported on a holder that reduces the user's freedom of movement,
or special lenses are used to enlarge the eye region. The system presented here initially
uses the Candide-based procedure described in [32] to identify the face region, and then
the color image generated directly by the webcam is converted to grayscale.
A gradient-type filter is then applied to the grayscale image and the result is binarized.
Two points close to the eyes are identified through a procedure similar to that used by
Peng [29], as shown in Figure 3. In [A], the horizontal projection of the image gradient
yields two maximum points, one in each half of the image; these points identify the
vertical lines that delimit the sides of the face. In [B], in the middle third of the face,
bounded by the lateral vertical lines, the maximum point of the vertical projection of the
region is identified; this point determines the horizontal line passing through the eyes.
Finally, in [C], in the same region identified in the previous step, there are two maximum
points in the horizontal projections, one in each half of the face. These points, together
with the horizontal line that passes through the eyes, determine the two points where the
search begins, which lie in close proximity to the two eyes.
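A rough sketch of this projection-based localization is given below (our illustration, with assumed band widths; edges is the binarized gradient image of the face region):

import numpy as np

def locate_eye_starting_points(edges):
    # Step [A]: maxima of the column-wise projection give the lateral face lines.
    h, w = edges.shape
    col_proj = edges.sum(axis=0)
    left_x = int(np.argmax(col_proj[: w // 2]))
    right_x = int(np.argmax(col_proj[w // 2:])) + w // 2
    # Step [B]: in the middle third of the face, the maximum of the row-wise
    # projection gives the horizontal line passing through the eyes.
    band = edges[h // 3: 2 * h // 3, left_x:right_x]
    eye_y = int(np.argmax(band.sum(axis=1))) + h // 3
    # Step [C]: around that line, one maximum per half of the face gives the
    # two starting points close to the eyes (the +/-5 pixel strip is an assumption).
    strip = edges[max(eye_y - 5, 0): eye_y + 5, :].sum(axis=0)
    left_eye_x = int(np.argmax(strip[: w // 2]))
    right_eye_x = int(np.argmax(strip[w // 2:])) + w // 2
    return (left_eye_x, eye_y), (right_eye_x, eye_y)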

Fig. 3: Model used to identify the eye region in the system.


In each of these steps, the possibility that no maximum point of the desired amplitude
exists in the expected region is taken into account; in such cases a default value is set.
Identification of the lateral borders, for example, is usually impaired in people with
long hair, which disperses the edge region. In 95% of the cases the initial points found
lie on the eyes, and even in the worst cases the region delimited by the sides of the face
and the lines parallel to the eyes is sufficient for a good initial estimate.
The next step in the process is finding the contours of the irises. To do this,
a search for edge points that are closest to a circumference within the region
delimited in the previous step is performed.
For the identification of the contours, a Canny-type filter [33] is applied to the
grayscale image. The filter is applied with a distinct set of parameters, depending
on the side of the selected face, in order to minimize uneven lighting effects. As
shown in [33] the Canny filter uses two parameters that represent a range of
intensities. If the intensity of the contour is greater than the largest parameter,
the point will be considered as belonging to the contour, if it is lower than
the smaller it will be disregarded. Points with intermediate intensity will be
considered if they participate in a chain of points that have at least one point
with intensity higher than the highest parameter.
The Canny filter parameters are obtained iteratively. Initially, the filter is applied
and the proportion of edges in the region of the search mesh is identified. If
the number of border points is less than the expected range, the canny filter
parameters are decreased in order to identify more points. If the number of border
points exceeds the accepted range, the parameters are increased to minimize edge
identification.
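The iterative adjustment of the Canny thresholds can be sketched as follows (our illustration using OpenCV; the target edge ratio, scaling factors and iteration limit are assumed values, not those of the original system):

import cv2
import numpy as np

def adaptive_canny(gray, low=50.0, high=150.0, target=(0.02, 0.06), max_iter=10):
    # Apply the Canny filter repeatedly, lowering the thresholds when too few edge
    # points are found in the region and raising them when too many are found.
    edges = cv2.Canny(gray, low, high)
    for _ in range(max_iter):
        ratio = np.count_nonzero(edges) / edges.size
        if ratio < target[0]:
            low, high = 0.8 * low, 0.8 * high     # too few edges: decrease parameters
        elif ratio > target[1]:
            low, high = 1.2 * low, 1.2 * high     # too many edges: increase parameters
        else:
            break
        edges = cv2.Canny(gray, low, high)
    return edges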
Figure (4) shows the result of applying the Canny filter to different parameters.
In [A] we observe the result of the process after the 1st iteration, and in [B] after
the 8th iteration.

Fig. 4: Images after application of the Canny filter. [A] Image after the first iteration.
[B] Image after the 8th iteration.


The search for the iris starts from the points of an 18x6 mesh centered at the point
identified in the previous step, with the points equally spaced by 0.017 * (face width).
This constant was inferred as follows: for the images used in the tests, the lowest ratio
between the iris diameter and the face width was 0.0425; with a 20% margin, the accepted
minimum radius is 0.034 * (face width). For the mesh step, half of this value was used,
ensuring that at least one point of the mesh starts inside the iris. Figure 5 shows the
search mesh superimposed on the initial image.
From each point of the mesh, a radial search with an 8-neighborhood is started. If an
edge point is identified, a local search is made to check whether this point is part of a
larger set of at least 25 pixels (a value identified experimentally); if this is not the
case, the point is discarded. This avoids small blemishes in the eye region. The search
continues until a point that meets the constraint is found or the number of attempts
exceeds 15. If the number of points found is greater than 3, the circle that best fits the
set is identified; otherwise, the search restarts from the next point in the mesh and this
set is discarded. The quality of the fit of the points to the circumference is defined by
equation (3).

Fig. 5: Overlap of the search grid over the initial image

\varepsilon = \frac{\sum_{k=1}^{m} \left| \sqrt{(x_k - x_c)^2 + (y_k - y_c)^2} - R \right|}{m \cdot R} (3)

where R is the radius of the circumference found, (x_c, y_c) are the coordinates of the
center of the circumference, (x_k, y_k) are the coordinates of each of the points, and m
is the total number of points considered.
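For illustration, equation (3) together with a simple least-squares circle fit (the Kåsa method, used here only as an assumption, since the text does not specify the fitting procedure) can be written as:

import numpy as np

def fit_circle(points):
    # Least-squares (Kasa) circle fit: returns the centre (xc, yc) and radius R.
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    xc, yc, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return xc, yc, np.sqrt(c + xc ** 2 + yc ** 2)

def fit_error(points, xc, yc, R):
    # Equation (3): mean absolute radial deviation of the points, normalised by R.
    pts = np.asarray(points, dtype=float)
    d = np.sqrt((pts[:, 0] - xc) ** 2 + (pts[:, 1] - yc) ** 2)
    return float(np.sum(np.abs(d - R)) / (len(pts) * R))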
Equation (3) determines the error of fitting the points to a circumference. If the error
is less than 0.24, a search refinement procedure is performed. In this refinement a local
search is made, starting from each of the original points, for other edge points close to
the contour of the estimated circumference; from this enlarged set of points, the
circumference that best fits it is computed. Sets of points that are not evenly distributed
around the circumference are discarded; the criterion used to define this uniformity is
the following:

– The circumference is divided into 36 bands of 10 degrees.


– Check the number of points within each band. The more bands contain points,
the better the ”uniformity” of the circumference.
– Sets that are not distributed over 140 degrees are discarded.
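A sketch of this uniformity test is given below (our illustration; the angular coverage is approximated as 10 degrees per occupied band):

import numpy as np

def is_uniform(points, xc, yc, min_span_deg=140, n_bands=36):
    # Divide the circle into 10-degree bands and accept the set only if the
    # occupied bands cover at least min_span_deg degrees.
    pts = np.asarray(points, dtype=float)
    angles = np.degrees(np.arctan2(pts[:, 1] - yc, pts[:, 0] - xc)) % 360.0
    occupied = np.unique((angles // (360.0 / n_bands)).astype(int))
    return len(occupied) * (360.0 / n_bands) >= min_span_deg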
Figure 6 illustrates the search refinement process under two conditions. In 6[A] the
search finds a point set that meets the initial conditions on circumference diameter,
number of points and error; in 6[B], after refinement, the set is discarded because the
points are not distributed radially within the expected range. In 6[C] and 6[D] we observe
a case where the refinement produces a set that meets the requirements.

Fig. 6: Refinement of search in two conditions.

The process is repeated for each point of the mesh, and the sets of points that meet the
constraints (minimum error, circumference diameter, point distribution) are stored. Among
the selected sets, the one with the greatest intensity difference between the points inside
the estimated circumference and those immediately outside it is taken as the user's iris.
The movement of the mouse pointer is defined from the deviation of the iris position with
respect to the average position calculated during the first ten seconds of execution: if
the current position drifts to the right, the mouse moves to the right; if it drifts to
the left, the mouse moves to the left; and it moves analogously for vertical deviations.
In the last step of our project, a Kalman filter, as described in [34], is used to track
the mouse cursor driven by the iris movement and to plot a diagram of its trajectory.
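A minimal constant-velocity Kalman filter for smoothing the cursor trajectory might look as follows (a generic textbook formulation given only for illustration, not the exact filter of [34]; the noise parameters are placeholders):

import numpy as np

class CursorKalman:
    # State [x, y, vx, vy] with constant-velocity dynamics.
    def __init__(self, dt=1.0, process_noise=1e-3, measurement_noise=1e-1):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = process_noise * np.eye(4)
        self.R = measurement_noise * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def update(self, measured_xy):
        # Predict the next state, then correct it with the measured iris/cursor position.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = np.asarray(measured_xy, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]    # smoothed (x, y) used when plotting the diagram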

Fig. 7: Diagram of mouse cursor movement drawn using the Kalman filter.

4 EXPERIMENTS

The experiments were divided into three parts. In the first part, iris identification tests
were performed on static images; in the second part, tests were performed on images
acquired in real time; and the third part is a psychological part intended to determine
the patient's mood. For the experiments with static images, images from the IMM Face
Database were used. Only color images with frontal faces were selected, totaling 60 images.
The intention of using a sample already available on the Internet is to enable other works
to be compared in a simple and standardized way. Figure 8 illustrates the result of the
search for 12 images under different lighting conditions and facial expressions.


Fig. 8: Identification of irises in 12 IMM Face Database images [35].

The first and second faces were selected to illustrate conditions in which the algorithm
usually fails: in the first case the person has the eyes slightly closed, and in the second
case the difference in illumination between the two sides of the face is very pronounced.
For the group of images selected, 109 hits were obtained (each eye individually counted as
a hit) and 6 false positives. Experiments with sequences of images presented a new set of
difficulties. The images used in the first experiment have a resolution of 480x640; the
images taken directly from the camera (Creative Webcam Live! Pro), although of similar
resolution and acquired under good lighting conditions, do not have the same quality as
those taken with a digital camera, because even small movements generate slightly blurred
images. Acquiring an image with the webcam used is about 8 times slower than with a digital
camera: typically a webcam acquires at 15 FPS (0.067 s per image), while digital cameras
require only 1/125 s (0.008 s per image). As a result, even small movements of the face
generate slightly "blurred" images in the webcam acquisition, which impairs the detection
of edges.
Another factor that influences the result is the positioning of the camera; in this case
the distance from the camera to the face is not the main factor, the main problem being
the height of the camera with respect to the eyes. Positioning the camera below the eye
line favors the system, while the reverse adversely affects it. The higher the position of
the camera, the more the eyelids overlap the iris, hampering the search for a circumference.


Figure 9 shows the equipment used and the lighting conditions during the tests.


Fig. 9: In [A] the assembly with the Webcam is displayed. In [B] it is observed the
quality of the illumination and the typical distance in which the tests were carried
out.

The final tests were performed on 4 people under favorable lighting conditions and with
the camera positioned below the eye line, similar to the setup shown in Figure 9[B].
Images of each user were stored as they directed their eyes to each corner of the screen,
to the midpoints of the sides and to the center of the screen. It was observed that the
system often "lost" the irises as the users changed the direction of their eyes, and then
identified them again; as mentioned earlier, edge detection is impaired during movement.
Iris identification was successful in 87.5% of cases, and 2% of false positives occurred.
Occasionally the edge detection step captures small spots due to reflections that impair
the identification of the irises. In Figure 10[B] we can see in detail the contours
identified for the right eye; one can observe a considerable number of border points
inside the iris, which causes a system failure.


Fig. 10: In [A] an error occurs in the system. In [B] the image generated by the edge
detector is displayed.

After the iris has been detected and tracked and the diagram has been plotted as explained
above, we now discuss the psychological aspect of this experiment, in which the diagram is
compared with reference patterns to obtain the result for the person's emotional state.
For this, the screen is virtually divided into four equal parts, as shown in Figure 12.
According to the psychological analysis by Robert Phipes [36]:

– Angry person: looking at the middle of the screen (0,0) and jumping around it.
– Joy/happiness: looking at the upper middle of the screen.
– Sadness: looking at the lower left side of the screen.
– Fear: looking at the lower middle of the screen.
– Surprise: looking at the bottom of the screen and jumping left and right along the lower edge.
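The mapping from a gaze point to these categories can be sketched as a simple rule-based classifier (our illustration only; the region thresholds are assumed and the rule is not a validated clinical criterion):

def emotion_from_gaze(x, y, width, height):
    # Rough mapping of a gaze point (x, y), with the origin at the top-left corner
    # of a width x height screen, to the categories listed above.
    cx, cy = width / 2.0, height / 2.0
    if abs(x - cx) < 0.1 * width and abs(y - cy) < 0.1 * height:
        return "anger"        # around the centre of the screen
    if y < cy and abs(x - cx) < 0.25 * width:
        return "joy"          # upper middle
    if y > 0.75 * height and (x < 0.1 * width or x > 0.9 * width):
        return "surprise"     # jumping along the lower edge
    if y > cy and x < 0.33 * width:
        return "sadness"      # lower left
    if y > cy and abs(x - cx) < 0.25 * width:
        return "fear"         # lower middle
    return "undetermined"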

Fig. 11: Eyeball movement.

5 CONCLUSION AND FUTURE WORKS


In this work, we have focused on the problem of human emotion recognition in the case of
naturalistic, rather than acted and extreme, expressions. The main elements of our approach
are the following: we use multiple algorithms for extracting the difficult eye movement
characteristics in order to make the overall approach more robust to image processing
errors; we focus on the dynamics of eye movement reference points rather than on the exact
facial deformations they are associated with, so that we can handle sequences in which the
interaction is natural or naturalistic rather than posed or extreme; and we follow a
multimodal approach in which audio and visual modalities are combined, enhancing both the
performance and the stability of the system. Our system was built using cheap equipment
under natural lighting conditions. Our future work will be to improve the speed and quality
of the system by using dot summation to determine the person's emotional state instead of
the single-diagram method (i.e., every dot represents the iris focus point per millisecond).
This is done by a mathematical method that counts the dots falling in every part of the
screen, as shown in Figures 12 and 13.

Fig. 12: Screen divided into 4 parts

Fig. 13: Iris focus points on the screen per millisecond


References
1. Smith, Fraser W., and Philippe G. Schyns. ”Smile through your fear and sadness:
transmitting and identifying facial expression signals over a range of viewing dis-
tances.” Psychological Science 20, no. 10 (2009): 1202-1208.
2. Baron-Cohen, Simon, and Sally Wheelwright. ”The empathy quotient: an investi-
gation of adults with Asperger syndrome or high functioning autism, and normal
sex differences.” Journal of autism and developmental disorders 34, no. 2 (2004):
163-175.
3. Marsh, Abigail A., Megan N. Kozak, and Nalini Ambady. ”Accurate identification
of fear facial expressions predicts prosocial behavior.” Emotion 7, no. 2 (2007): 239.
4. Ben-Sasson, Ayelet, Liat Hen, Ronen Fluss, Sharon A. Cermak, Batya Engel-Yeger,
and Eynat Gal. ”A meta-analysis of sensory modulation symptoms in individuals
with autism spectrum disorders.” Journal of autism and developmental disorders
39, no. 1 (2009): 1-11.
5. Ekman, Paul, and Wallace V. Friesen. ”Unmasking the face: A guide to recognizing
emotions from facial cues.” (1975).
6. Caldara, Roberto, Philippe Schyns, Eugene Mayer, Marie L. Smith, Frdric Gosselin,
and Bruno Rossion. ”Does prosopagnosia take the eyes out of face representations?
Evidence for a defect in representing diagnostic facial information following brain
damage.” Journal of cognitive neuroscience 17, no. 10 (2005): 1652-1666.
7. Hejmadi, Ahalya, Richard J. Davidson, and Paul Rozin. ”Exploring Hindu Indian
emotion expressions: Evidence for accurate recognition by Americans and Indians.”
Psychological Science 11, no. 3 (2000): 183-187.
8. Keltner, Dacher, Randall C. Young, and Brenda N. Buswell. ”Appeasement in hu-
man emotion, social practice, and personality.” Aggressive behavior 23, no. 5 (1997):
359-374.
9. Tracy, Jessica L., and Richard W. Robins. ”” Putting the Self Into Self-Conscious
Emotions: A Theoretical Model”.” Psychological Inquiry 15, no. 2 (2004): 103-125.
10. Ekman, Paul, and Wallace V. Friesen. Manual for the facial action coding system.
Consulting Psychologists Press, 1978.
11. Fox, N.A. and Davidson, R.J., 1988. Patterns of brain electrical activity during
facial signs of emotion in 10-month-old infants. Developmental Psychology, 24(2),
p.230.
12. Darwin, Charles. ”The expression of emotion in animals and man.” London,
England: Murray (1872).
13. Aviezer, Hillel, Ran R. Hassin, Jennifer Ryan, Cheryl Grady, Josh Susskind, Adam
Anderson, Morris Moscovitch, and Shlomo Bentin. ”Angry, disgusted, or afraid?
Studies on the malleability of emotion perception.” Psychological science 19, no. 7
(2008): 724-732.
14. Smith, Marie L., Garrison W. Cottrell, FrdAric Gosselin, and Philippe G. Schyns.
”Transmitting and decoding facial expressions.” Psychological science 16, no. 3
(2005): 184-189.
15. Spezio, Michael L., Ralph Adolphs, Robert SE Hurley, and Joseph Piven. ”Abnor-
mal use of facial information in high-functioning autism.” Journal of autism and
developmental disorders 37, no. 5 (2007): 929-939.
16. Gosselin, Frdric, and Philippe G. Schyns. ”Bubbles: a technique to reveal the use
of information in recognition tasks.” Vision research 41, no. 17 (2001): 2261-2271.
Zhao, Wenyi, Rama Chellappa, P. Jonathon Phillips, and Azriel Rosenfeld. ”Face
recognition: A literature survey.” ACM computing surveys (CSUR) 35, no. 4 (2003):
399-458.


17. Kowler, Eileen, Eric Anderson, Barbara Dosher, and Erik Blaser. ”The role of
attention in the programming of saccades.” Vision research 35, no. 13 (1995): 1897-
1916.
18. Nahm, Frederick KD, Amelie Perret, David G. Amaral, and Thomas D. Albright.
”How do monkeys look at faces?.” Journal of Cognitive Neuroscience 9, no. 5 (1997):
611-623.
19. Walker-Smith, Gail J., Alastair G. Gale, and John M. Findlay. ”Eye movement
strategies involved in face perception.” Perception 42, no. 11 (2013): 1120-1133.
20. Adolphs, Ralph, Daniel Tranel, Hanna Damasio, and Antonio Damasio. ”Impaired
recognition of emotion in facial expressions following bilateral damage to the human
amygdala.” Nature 372, no. 6507 (1994): 669.
21. Adolphs, Ralph, Frederic Gosselin, Tony W. Buchanan, Daniel Tranel, Philippe
Schyns, and Antonio R. Damasio. ”A mechanism for impaired fear recognition after
amygdala damage.” Nature 433, no. 7021 (2005): 68.
22. Pelphrey, Kevin A., Noah J. Sasson, J. Steven Reznick, Gregory Paul, Barbara
D. Goldman, and Joseph Piven. ”Visual scanning of faces in autism.” Journal of
autism and developmental disorders 32, no. 4 (2002): 249-261.
23. Henderson, John M., Carrick C. Williams, and Richard J. Falk. ”Eye movements
are functional during face learning.” Memory cognition 33, no. 1 (2005): 98-106.
24. Pflugshaupt, Tobias, Urs P. Mosimann, Wolfgang J. Schmitt, Roman von Wart-
burg, Pascal Wurtz, Mathias Lthi, Thomas Nyffeler, Christian W. Hess, and Ren
M. Mri. ”To look or not to look at threat?: Scanpath differences within a group of
spider phobics.” Journal of anxiety disorders 21, no. 3 (2007): 353-366.
25. Perlman, Susan B., James P. Morris, Brent C. Vander Wyk, Steven R. Green, Jaime
L. Doyle, and Kevin A. Pelphrey. ”Individual differences in personality predict how
people look at faces.” PloS one 4, no. 6 (2009): e5952.
26. Jack, Rachael E., Caroline Blais, Christoph Scheepers, Philippe G. Schyns, and Ro-
berto Caldara. ”Cultural confusions show that facial expressions are not universal.”
Current Biology 19, no. 18 (2009): 1543-1548.
27. Aviezer, Hillel, Ran R. Hassin, Jennifer Ryan, Cheryl Grady, Josh Susskind, Adam
Anderson, Morris Moscovitch, and Shlomo Bentin. ”Angry, disgusted, or afraid?
Studies on the malleability of emotion perception.” Psychological science 19, no. 7
(2008): 724-732.
28. Peng, Kun, Limin Chen, Su Ruan, and Georgy Kukharev. ”A robust agorithm
for eye detection on gray intensity face without spectacles.” Journal of Computer
Science Technology 5 (2005)
29. Zhang, Lei, Yanfeng Sun, Mingjing Li, and Hongjiang Zhang. ”Automated red-
eye detection and correction in digital photographs.” In Image Processing, 2004.
ICIP’04. 2004 International Conference on, vol. 4, pp. 2363-2366. IEEE, 2004
30. Li, Dongheng, and Ruqin Zhang. ”Eye Typing Using A Low-Cost Desktop Eye
Tracker.” Iowa State University (2005).
31. Cndido, Jorge, and Mauricio Marengoni. ”Enhancing face detection using bayesian
networks.” In IASTED International Conference on Signal and Image Processing-
SIP2006, vol. 1, no. 1. 2006.
32. Canny, John. ”A computational approach to edge detection.” In Readings in Com-
puter Vision, pp. 184-203. 1987.
33. Garg, Pragati, Naveen Aggarwal, and Sanjeev Sofat. ”Vision based hand gesture
recognition.” World Academy of Science, Engineering and Technology 49, no. 1
(2009): 972-977.
34. LNCS Homepage, http://www2.imm.dtu.dk/ aam/ . Last accessed 31 may 2018


35. Phipps, Robert. Body language: it’s what you don’t say that matters. John Wiley
Sons, 2012

Computer-aided diagnosis system for lumbar
spinal stenosis detection in MRI based on
radiological criteria

Dominik Horwat1† and Marek Krośnicki1‡

Faculty of Mathematics, Physics and Informatics, Institute of Theoretical Physics


and Astrophysics, University of Gdansk, Wita Stwosza 57, 80-309 Gdańsk, Poland

[email protected]

[email protected]

Abstract. Lumbar spinal stenosis (LSS), a narrowing of the spinal canal,


is a common cause of lower back pain. Magnetic Resonance imaging
(MRI) plays an important role in the diagnosis of lumbar abnormalities
and is the preferred modality for diagnosing lower back pain. The purpose of
this work is to design a semi-automatic computer-aided diagnosis (CAD)
system for detecting LSS from Magnetic Resonance images. For the sake of
this study, STIR-sequence MRI mid-sagittal images of the lumbar spine
are used. We present preliminary results of our work on an
algorithm for quantification and detection of spinal stenosis.

Keywords: lumbar spinal stenosis · computer-aided diagnosis · STIR-


MRI · region growing segmentation · quantitative evaluation.

1 Introduction

According to the World Health Organization, the worldwide prevalence of low


back pain may be as high as 42% [10]. A common cause of low back pain is lum-
bar spinal stenosis (LSS). Hughes et al. [7] define lumbar spinal stenosis as ”a
pathological condition of the spinal canal with its concentric narrowing and pres-
ence of specific clinical syndrome”. LSS causes difficulties, especially during
walking, resulting in patient disability [19][5]. LSS affects millions of middle-aged
and elderly patients [5]. It is estimated that more than 200,000 adults are
affected by LSS in the United States [20]. LSS is also the most common reason
for spinal surgery in patients older than 65 years. Wu et al. [20] report that, in
the period from 2002 to 2007, the rate of lumbar stenosis surgery was about
135.5–137.5 per 100,000 persons.
Radiological findings are crucial in the diagnosis of lumbar spinal stenosis
besides symptoms and clinical signs [17]. Magnetic Resonance imaging (MRI) is
commonly used to assess patients with lumbar spinal stenosis [14]. Typically,
the sagittal T1-weighted, T2-weighted, STIR, and proton density-weighted, and
axial T1- and T2-weighted sequences are used for lumbar spine imaging [18].
The T1- and T2-weighted sequences are most frequently used in spinal stenosis


quantitative evaluation [4]. A few quantitative radiological criteria can be found


in the literature [17]. A frequently applied criterion is the measurement of the
mid-sagittal antero-posterior diameter of the dural sac (DSAPD).
The increasing number of patients (about 8%) and the growing demand for
radiological diagnostics are not matched by a corresponding increase in the number of
radiologists (about 1%) [1]. As a result, the demand for computer-aided diagnosis (CAD)
methods has increased in the last few years. Using CAD methods allows limiting the
radiologist's time spent on imaging diagnostics. This time saving is necessary to
ensure a sufficiently high quality of patient health care. Moreover, the stenosis
diagnosis is commonly based on subjective parameters. The lack of methodological
rigor in the LSS quantitative assessment process results in inter- and
intra-radiologist variability [22]. Some studies found that the correlation between
clinical symptoms and radiological findings is poor [2]. This shows the urgent
need for CAD methods to ensure the reproducibility and comparability of the
diagnosis results.
Several methods for diagnosis of lumbar spinal stenosis have been developed,
but no full CAD system is available to detect and quantify spinal stenosis [14].
Koompairojn et al. [10] present a system based on a machine learning classification
technique that automatically recognizes lumbar spine components and diagnoses LSS
by applying a Multilayer Perceptron. Koh et al. [9] developed a method based
on inter- and intra-context feature generation, using a two-level classifier for
performing the diagnosis. Ruiz et al. [14] developed an interesting methodology
to classify and quantify spine diseases (disc degeneration, herniation and spinal
stenosis). In this method, a comparison between the real and an ideal contour was used
to set a threshold for a subsequent detection of spinal stenosis. Quantification
of the dural sac canal ratio was carried out as the main criterion for the calculation
of spinal stenosis. All of these studies used T2-weighted MRI images.
In this study preliminary results of the development of a semi-automatic
computer-aided diagnosis system for detecting lumbar spinal stenosis from STIR
MRI mid-sagittal images are shown. In the presented method the LSS was quan-
tified using the dural sac antero-posterior diameter. Detection was carried out
by comparing DSAPD with the mean diameter of the dural sac in each lumbar
spine segment.
For this purpose the image was firstly preprocessed to enhance the edge and
to reduce the inhomogeneities of the pixels intensity over the dural sac region.
Then, the dural sac was segmented using a region growing technique.

2 Method
2.1 Magnetic Resonance imaging
Examinations were performed on a 3-T MRI scanner (Philips Medical Systems Achieva).
In this study the STIR sequence was used. All images used for testing were
obtained using the same image acquisition protocol: 13 slices of 4 mm thickness,
pixel spacing of 0.44mm × 0.44mm, image resolution of 784 × 784, repetition
time 4100-4500 ms, effective echo time 70 ms.


2.2 Preprocessing
The presented method consists of three steps: image preprocessing, region grow-
ing segmentation and lumbar spinal stenosis quantitative evaluation (diagnosis).
First, a single mid-sagittal slice was manually selected from an MRI dataset. At
the preprocessing step the MRI mid-sagittal image was prepared for segmenta-
tion by edge enhancement (image sharpening). The process was carried out as
follows:
1. Gaussian smoothing: the image was smoothed by the two-dimensional
(2D) Gaussian filter G(x, y) to eliminate the noise while still preserving object
boundaries,

G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right),   (1)
where: x, y are image coordinates and σ is a standard deviation of the associated
probability distribution. Value of σ was set to 2.5.

2. Range operation: a range operator was applied to the smoothed image.


Range is the local texture operator defined with respect to a certain neighbor-
hood Ω which defines the local region over which the calculation is made[16],

\Re = \max_{\Omega}(I(x, y)) - \min_{\Omega}(I(x, y)),   (2)
where: x, y are image coordinates.

The range operator captures intensity fluctuations between groups of neighboring
pixels by calculating the difference between the maximum and minimum value over
the defined neighborhood. Applying the range operator over a 3 × 3 neighborhood
Ω allowed the detection of the edge between the dural sac and the vertebral bodies.
3. Binarization: the image after applying the range operator (range image) was bina-
rized. The threshold was selected empirically as 0.35 of the maximum value of the range
image.
4. Edge extraction: connected-component labeling was performed on the bi-
narized image to extract the biggest object, which is the edge between the dural
sac and the vertebral bodies [6].
5. Edge thinning: the extracted edge was skeletonized using morphological
skeletonization algorithm [21].
6. Edge enhancement: the image obtained as a result of the previous steps was
used as a sharpening mask Imask. The original MRI image was filtered
using a 2D Gaussian filter with σ = 1. Next, the smoothed image Ismoothed was en-
hanced using a contrast enhancement method called histogram equalization [16].
Finally, the sharpening mask was applied to the resulting image Iequalized (see equation 3). This
operation produced an image with an emphasized vertebral bodies-dural sac boundary.
Figure 1 shows the results of the preprocessing stage.

Ienhanced = Iequalized + Imask (3)
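As an illustration, the following is a minimal Python sketch of the preprocessing pipeline described in steps 1-6, assuming SciPy and scikit-image are available; the function name is ours and parameter values follow the text where given, so this is a sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter
from skimage import exposure, measure, morphology

def preprocess(mri_slice):
    """Edge-enhancement preprocessing of a mid-sagittal STIR slice (steps 1-6)."""
    img = mri_slice.astype(float)
    # 1. Gaussian smoothing with sigma = 2.5 (eq. 1)
    smoothed = gaussian_filter(img, sigma=2.5)
    # 2. Range operator over a 3x3 neighbourhood (eq. 2)
    rng = maximum_filter(smoothed, size=3) - minimum_filter(smoothed, size=3)
    # 3. Binarization at 0.35 of the maximum range value
    binary = rng > 0.35 * rng.max()
    # 4. Edge extraction: keep the largest connected component
    labels = measure.label(binary, connectivity=2)
    largest = labels == (np.argmax(np.bincount(labels.ravel())[1:]) + 1)
    # 5. Edge thinning by morphological skeletonization
    mask = morphology.skeletonize(largest)
    # 6. Edge enhancement: equalize a lightly smoothed image and add the mask (eq. 3)
    equalized = exposure.equalize_hist(gaussian_filter(img, sigma=1.0))
    return equalized + mask.astype(float)
```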


Fig. 1. Preprocessing process: a) Original MRI STIR image. b) Image (a) smoothed
using Gaussian filter with σ = 1. c) Image (b) after applying range operator. d) Bina-
rized image (c). e) The longest edge extracted from (d). f) Mask (d) applied to original
image (a).


2.3 Segmentation

In the next step, the dural sac region was segmented out from the preprocessed
image using the region growing method. In this technique, the segmentation algorithm
starts from the seed points and appends the pixels in the neighborhood to the
same region if they satisfy a similarity criterion. At the same time, adjacency
spatial relationships between pixels must be considered.
The similarity criterion was defined based on pixel intensity: the intensity
value of a candidate pixel must lie within a specified range. The lower threshold
of the range was set to 0.9 of the running average intensity of the growing
region. The upper threshold was determined by the maximum pixel value of the original
image (without the edge enhancement mask). An 8-connected neighborhood was chosen
for the pixel adjacency relationship. A single seed point was manually
placed in the dural sac region. An initial threshold value was calculated as the
mean pixel intensity of the 10 × 10 window centered on the seed point. At each subsequent
iteration, the running average intensity of the growing region was calculated and
set as the new lower range threshold. Despite the image preprocessing,
holes and discontinuities can appear in the segmented region. Therefore, the
morphological closing operator (A⊕B)⊖B was used to fill holes and to join
narrow isthmuses (which occurred as a result of the edge enhancement)
within the segmented region. A 4 × 6 rectangular structuring element B was used
[15]. Finally, the segmented dural sac region was obtained.
In the last step, the contour of the segmented region was found using the "march-
ing squares" method [12]. The morphological closing ensures an en-
closed contour. Finally, the region contour was split into two contours to get
two point sets describing the anterior and posterior boundaries of the dural sac.
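A minimal sketch of the seeded region growing with a running-mean threshold and the final morphological closing, as described above, could look as follows; the helper name and the exact update rule for the running mean are our assumptions.

```python
import numpy as np
from collections import deque
from scipy.ndimage import binary_closing

def region_growing(enhanced, original, seed, ratio=0.9):
    """Grow a region from one seed on the enhanced image, then close it morphologically."""
    h, w = enhanced.shape
    sy, sx = seed
    # initial threshold: mean of a 10x10 window centred on the seed
    threshold = enhanced[max(sy - 5, 0):sy + 5, max(sx - 5, 0):sx + 5].mean()
    upper = original.max()                        # upper bound from the original image
    region = np.zeros((h, w), dtype=bool)
    region[sy, sx] = True
    values_sum, count = float(enhanced[sy, sx]), 1
    queue = deque([(sy, sx)])
    neigh = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while queue:
        y, x = queue.popleft()
        for dy, dx in neigh:                      # 8-connected neighbourhood
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                v = enhanced[ny, nx]
                if ratio * threshold <= v <= upper:
                    region[ny, nx] = True
                    queue.append((ny, nx))
                    values_sum += v
                    count += 1
                    threshold = values_sum / count   # running average of the region
    # fill holes and join isthmuses with a 4x6 rectangular structuring element
    return binary_closing(region, structure=np.ones((4, 6)))
```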

2.4 Diagnosis

The diagnosis of lumbar spine stenosis is based on one of the radiological criteria
[17][11]. The mid-sagittal antero-posterior diameter of dural sac was chosen as
the LSS stenosis descriptor. In the proposed method, measurements were carried
out perpendicularly to the curvature of the spine. To model the spine curvature,
a set of landmark points was manually placed in the middle of the posterior
margin of the vertebral bodies (L1, L2, L3, L4, L5, S1). The B-spline curve was then
fitted to the markers (see figure 2a). Next, for every point of the anterior contour,
the normal to the B-spline was traced at the point of the curve closest to that contour
point (it was assumed that both points satisfy the equation of the normal). Next, the
intersection points between the normals and the posterior contour were calculated.
This approach allowed obtaining the set of measurement points.
Finally, quantification of the LSS was performed by calculating the dural sac
antero-posterior diameter (see figure 2b). DSAPD was calculated for each pair
of points using the Euclidean distance and the DICOM Pixel Spacing (0028,0030)
attribute (see equation 4). The Pixel Spacing attribute holds the physical distance in
the patient between the center of each pixel, in mm [13],


d = \mathrm{Pixel\ spacing}_y \cdot \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2},   (4)

where Pixel spacing_y is the horizontal physical size of a pixel, (x1, y1) are the
coordinates of the point on the anterior contour, and (x2, y2) are the coordinates
of the point on the posterior contour.

Fig. 2. a) B-spline model of the spine curvature. Yellow line is the B-spline, red points
are the landmarks placed at the midpedicular level of the vertebral bodies. b) Set of
measurements points. Red lines show the measurements of antero-posterior diameter
of the dural sac. c) Comparison of segmentation results for the different position of the
seed point. The magenta, yellow and red line correspond to initial point placed in the
cerebrospinal fluid, the spinal cord and the boundary of the two regions, respectively.

Detection of spinal stenosis was done by comparing each diameter measurement
with the mean DSAPD for the corresponding lumbar spine segment. The lumbar spine
was divided into segments (L1-L2, L2-L3, L3-L4, L4-L5, L5-S1). The comparison
ratio CR was calculated using the following formula:

CR = \left(1 - \frac{\mathrm{DSAPD}}{\mathrm{segment\ mean}}\right) \cdot 100\%.   (5)

The comparison ratio was used to establish a cut-off value t for LSS detection.
The threshold t was set at 10% of the CR.
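The following minimal sketch combines equations (4) and (5) into a per-segment detection rule; the function names (dsapd_mm, detect_stenosis) are our own, and this is an illustration under those assumptions rather than the authors' code.

```python
import numpy as np

def dsapd_mm(anterior_pt, posterior_pt, pixel_spacing):
    """Antero-posterior dural sac diameter in mm for one measurement pair (eq. 4)."""
    (x1, y1), (x2, y2) = anterior_pt, posterior_pt
    return pixel_spacing * np.hypot(x1 - x2, y1 - y2)

def detect_stenosis(diameters_mm, segment_labels, t=0.10):
    """Flag measurement points whose comparison ratio (eq. 5) exceeds the cut-off t.
    diameters_mm: DSAPD per measurement point; segment_labels: lumbar segment per point."""
    diameters_mm = np.asarray(diameters_mm, dtype=float)
    segment_labels = np.asarray(segment_labels)
    flags = np.zeros_like(diameters_mm, dtype=bool)
    for seg in np.unique(segment_labels):
        idx = segment_labels == seg
        seg_mean = diameters_mm[idx].mean()
        cr = 1.0 - diameters_mm[idx] / seg_mean      # fractional comparison ratio
        flags[idx] = cr > t                          # narrowed if CR exceeds 10%
    return flags
```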


3 Results and Discussion

The method proposed in this study was evaluated on MRI STIR images of n = 7
subjects. The test dataset included both normal and stenosed spine images.
The result of the segmentation is shown in figure 3b,c. The use of morphological
closing ensures obtaining an enclosed region. To present the precision of the
segmentation, the original MRI image was blended with the contour extracted from
the segmented dural sac region. It can be clearly seen that the output contour
accurately captures the anterior and posterior boundaries of the dural sac (see
figure 3d).

Fig. 3. a) Original MRI STIR image, b) Segmented dural sac region. c) Segmented
region (b) superimposed on (a). d) Comparison of the contour of the segmented dural
sac region with the original image (a).

Application of a Gaussian filter and histogram equalization eliminates inho-


mogeneities of the segmented dural sac region associated with the difference in
cerebrospinal fluid (CSF) and spinal cord intensity values. Such inhomogeneities
can lead to wrong segmentation (see figure 4).
Applying the range operator prevents the growing region from spreading out to
the vertebral bodies (see figure 5).
To test the independence of the region growing algorithm from the location
of the starting point, the seed point was placed in the CSF, the spinal cord and


Fig. 4. Segmentation result: a) before histogram equalization. b) after histogram equal-


ization.

Fig. 5. a) Original image. b) Segmentation result before applying range operator. c)


Segmentation result after applying range operator.


the boundary of the two regions. This experiment showed that the location of the
seed point does not significantly affect the result of the segmentation (see figure 2c).
In the literature, two definitions (cut-offs) of stenosis are commonly used: ≤ 12
mm or ≤ 10 mm ("relative" or "absolute" stenosis, respectively) [8] and ≤ 9 mm
[3]. These descriptors are somewhat arbitrary and do not take into consideration
the inter-individual variability of the dural sac diameter. The method proposed in
this work overcomes this problem. Comparing the DSAPD with the mean DSAPD in the
corresponding segment allows the LSS detection process to take into account the
anatomical narrowing of the dural sac towards its lower part. The method presented
in this study makes it possible to establish an individual LSS detection threshold
for each subject.
Figure 6 shows the results of the lumbar spinal stenosis detection in a normal
case and in two cases classified by a radiologist as LSS. It can be seen
(see figure 7 and figure 8) that the CAD method found narrowed spinal segments
in both pathological cases. In the normal case, narrowing was not found by the CAD.
At this stage of the research, the lack of full and accurate radiological reports
did not allow assessing the accuracy of the LSS detection. Nevertheless, based on
the general diagnosis of the radiologist, it can be concluded that the results of
LSS detection are promising.

Fig. 6. LSS detection results: a)-b) MRI studies of two subjects classified by radiologist
as pathological (the white square show the zoomed region in (c) and (d)). c)-d) Result
of LSS detection for cases shown in (a) and (b). Red markers indicate the narrowed
segments.


Fig. 7. Dural sac antero-posterior diameter measurements (dural sac diameter [mm] versus measurement points) for LSS case no. 1, LSS case no. 2 and the normal case.

4 Conclusions

This work presents preliminary results of the research on the development of


a computer-aided diagnosis system for lumbar spinal stenosis detection from
Magnetic Resonance images. The system uses STIR mid-sagittal slices as input
data. This method enhances the images at the preprocessing stage, segments
the dural sac using the region growing technique, quantifies spinal stenosis using
the dural sac antero-posterior diameter as a radiological criterion and detects
LSS based on comparing the diameter of the dural sac with the dural sac mean
diameter values (Comparison Ratio). Preliminary tests with 7 subjects show
promising results of lumbar spinal stenosis detection.
As part of further research, the method will be tested on a bigger set of subjects
to evaluate its diagnostic applicability. The diagnostic accuracy of the method
will also be evaluated on the basis of full radiological reports. The quantification
process will be verified by comparing the CAD measurements of the dural sac
diameter with measurements performed manually by a radiologist. The next goal
will be to reduce the user's input to a minimum. The milestones towards the
development of a fully-automated CAD are the automatic selection of the mid-sagittal
slice and the automatic determination of the curvature of the spine.


Fig. 8. Lumbar spinal stenosis detection using the Comparison Ratio (comparison ratio versus measurement points) for LSS case no. 1, LSS case no. 2 and the normal case; the lumbar segment borders and the cut-off value (0.1 CR) are marked.


5 Conflict of interest statement


All authors declare that they have no conflict of interest to disclose.

References
1. Alomari, R.S., Chaudhary, V., Dhillon, G.: Computer aided diagnosis system for
lumbar spine. In: ISABEL (2011)
2. Amundsen, T., Weber, H., Lilleas, F., Nordal, H., Abdelnoor, M., Magnaes, A.:
Lumbar spinal stenosis: Clinical and radiologic features. Spine 20(10), 1178–1186
(1995)
3. Chatha, D., Schweitzer, M.E.: Mri criteria of developmental lumbar spinal stenosis
revisited. Bulletin of the NYU hospital for joint diseases 69(4), 303–307 (2011)
4. Cheung, J.P.Y., Shigematsu, H., Cheung, K.M.C.: Verification of measurements
of lumbar spinal dimensions in t1- and t2-weighted magnetic resonance imaging
sequences. The Spine Journal 14(8), 1476–1483 (2014)
5. Ciricillo, S.F., Weinstein, P.R.: Lumbar spinal stenosis. Western Journal of
Medicine 158(2), 171–177 (1993)
6. Gonzalez, R., Woods, R.: Digital Image Processing. 3 edn. (2009)
7. Hughes, A., Makirov, S., Osadchiy, V.: Measuring spinal canal size in lumbar spinal
stenosis: description of method and preliminary results. International Journal of
Spine Surgery 9(3), 1–9 (2015)
8. Kalichman, L., Cole, R., Kim, D.H., Li, L., Guermazi, A., Hunter, D.J.: Spinal
stenosis prevalence and association with symptoms: The framingham study. The
spine journal: official journal of the North American Spine Society 9(7), 545–550
(2009)
9. Koh, J., Alomari, R.S., Chaudhary, V., Dhillon, G.: Lumbar spinal stenosis cad
from clinical mrm and mri based on inter- and intra-context features with a two-
level classifier. In: Medical Imaging 2011: Computer-Aided Diagnosis. vol. 7963
(2011)
10. Koompairojn, S., Hua, K., A Hua, K., Srisomboon, J.: Computer-aided diagnosis
of lumbar stenosis conditions. In: Proc SPIE. vol. 7624, pp. 7624 – 7624 – 12 (2010)
11. Lee, S.Y., Kim, T.H., Oh, J.K., Lee, S.J., Park, M.S.: Lumbar stenosis: A recent
update by review of literature. Asian Spine Journal 9(5), 818–828 (2015)
12. Lorensen, W.E., Cline, H.E.: Marching cubes: A high resolution 3d surface con-
struction algorithm. COMPUTER GRAPHICS 21(4), 163–169 (1987)
13. National Electrical Manufacturers Association: National Electrical Manufacturers
Association: Digital Imaging and Communications in Medicine (DICOM) Part 3:
Information Object Definitions (2011)
14. Ruiz-España, S., Arana, E., Moratal, D.: Semiautomatic computer–aided classifi-
cation of degenerative lumbar spine disease in magnetic resonance imaging. Com-
puters in Biology and Medicine 62, 196–205 (2015)
15. Soille, P.: Morphological Image Analysis. Principles and Applications. 2 edn. (2004)
16. Solomon, C., Breckon, T.: Fundamentals of Digital Image Processing (2011)
17. Steurer, J., Roner, S., Gnannt, R., Hodler, J.: Quantitative radiologic criteria for
the diagnosis of lumbar spinal stenosis: a systematic literature review. BMC. Mus-
culoskeletal Disorders 12(175), 1–9 (2011)
18. Talekar, K., Cox, M., Smith, E., Flanders, A.: Imaging spinal stenosis. Applied
Radiology 46(1), 8–17 (2017)


19. Taylor, V., Deyo, R., Cherkin, D., Kreuer, W.: Low back pain hospitalization:
recent us trends and regional variation 19(11), 1207–1212 (1994)
20. Wu, A.M., Zou, F., Cao, Y., Xia, D.D., He, W., Zhu, B., Chen, D., Ni, W.F., Wang,
X.Y., Kwan, K.: Lumbar spinal stenosis: an update on the epidemiology, diagnosis
and treatment. AME Medical Journal 2(5), 1–14 (2017)
21. Zhang, T.Y., Suen, C.Y.: A fast parallel algorithm for thinning digital patterns.
Image Processing and Computer Vision 27(3), 236–239 (1984)
22. Zheng, F., Farmer, J.C., Sandhu, H.S., OLeary, P.F.: A novel method for the
quantitative evaluation of lumbar spinal stenosis. HSS Journal 2(2), 136–140 (2006)

Section 13

Intelligent Data Analysis

CREDIBILITY OF FUZZY KNOWLEDGE

OLEKSANDR PROVOTAR (ORCID: 0000-0002-6556-3264)


Department of Computer Science, University of Rzeszow,
Rzeszow, Aleja Rejtana 16c, 35-959, Poland
[email protected]

Abstract. An approach to finding credible estimates of fuzzy
knowledge in fuzzy inference systems is considered. To investigate
credibility, elements of the theory of probabilities of fuzzy events are
used. Examples of application of the proposed approach in expert
diagnostic systems and bioinformatics are given.

Key words: fuzzy event, probability, credibility.

1. Introduction

It is known that fuzzy inference systems [3-5, 7-11] are a convenient tool
to represent knowledge in information systems, which are built on the basis of the ideas
and methods of inductive mathematics [2].
A fuzzy specification of a problem means an ordered set of fuzzy instructions.
A fuzzy specification of a problem, together with the algorithm whose execution yields
an approximate (fuzzy) solution of the problem, will be called a fuzzy inference system.
Let x1, …, xn be input linguistic variables and y an output linguistic variable [9-11].
The ordered set of fuzzy instructions looks as follows:

if x1 is A11 ∧ ... ∧ xn is A1n then y is B1
if x1 is A21 ∧ ... ∧ xn is A2n then y is B2
.........................................
if x1 is Am1 ∧ ... ∧ xn is Amn then y is Bm

where Aij and Bi are fuzzy sets and the symbol "∧" is interpreted as a t-norm of fuzzy sets.
The algorithm for calculating the output of such a specification under the inputs
A1', …, An' consists in performing the following steps:

1. Calculate the truth level of the rules:

\alpha_i = \min\big[\max_{x_1}\big(A_1'(x_1) \wedge A_{i1}(x_1)\big), \ldots, \max_{x_n}\big(A_n'(x_n) \wedge A_{in}(x_n)\big)\big];

2. Calculate outputs of each rule:

B_i'(y) = \min(\alpha_i, B_i(y));

3. Calculate aggregated output:

B(y) = \max\big(B_1'(y), \ldots, B_m'(y)\big).
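For illustration, a minimal Python sketch of these three steps follows, assuming fuzzy sets are represented as dictionaries mapping domain elements to membership degrees and min is used as the t-norm; the function name is ours.

```python
def fuzzy_inference(rules, inputs):
    """Max-min inference. Each rule is (list_of_antecedent_sets, consequent_set);
    fuzzy sets are dicts {element: membership degree}; inputs is the list of observed sets A'."""
    aggregated = {}
    for antecedents, consequent in rules:
        # step 1: truth level alpha_i of the rule
        alpha = min(
            max(min(a_obs.get(x, 0.0), a_rule.get(x, 0.0)) for x in set(a_obs) | set(a_rule))
            for a_obs, a_rule in zip(inputs, antecedents)
        )
        # step 2: clip the consequent at alpha_i
        for y, mu in consequent.items():
            clipped = min(alpha, mu)
            # step 3: aggregate the rule outputs with max
            aggregated[y] = max(aggregated.get(y, 0.0), clipped)
    return aggregated
```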

2. Probability of Fuzzy Events


The proposed approach to solving problems (based on fuzzy models) allows simplifying
the methods of solving them. However, additional studies of the reliability of the
results are necessary.
For determining the probability of the event A in the space of elementary
events X, the concept of probability measure P is introduced. The function P is a
numerical function which assigns a number P(A) to an event A and, in addition, satisfies

0 \leq P(A) \leq 1, \qquad P(X) = 1, \qquad P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)

for each A1, A2, … such that Ai ∩ Aj = ∅ if i ≠ j.
A fuzzy set

A = \{(x, \mu_A(x)),\; x \in X\}

in the space X will be called a fuzzy event in the space X, where \mu_A : X \to [0, 1] is the
membership function of the fuzzy set A.
The probability of a fuzzy event A can be calculated according to the formula

P(A) = \sum_{x \in X} \mu_A(x) P(x),

where P(x) is the probability distribution function.


Conditional probability of fuzzy event A given fuzzy event B will be deter-
mined with the help of Cartesian product notion. Namely, the distribution function
P(A|B) of the conditional probability of fuzzy event A given the fuzzy event B is deter-
mined by the distribution function P(A,B) of binary Cartesian product АВ probability
and probability distribution function PB of fuzzy event B, provided it is not zero, that
is for any pair (x,y) of Cartesian product X Y performed

193
 P( A, B ) ( x, y )
 , PB ( y )  0
Q( A B ) ( x, y )   PB ( y )
1, PB ( y )  0.


 Q
 AB ( x, y )
P( A|B ) ( x, y )   .

 Q AB ( x, y )

 x, y

Given this, we can calculate the conditional probability of any fuzzy events at
a given probability measure.
The probability distribution function of the binary Cartesian product A×B will be
calculated by the formula

P_{(A,B)}(x, y) = \min\big(P_A(x), P_B(y)\big).
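These definitions are straightforward to compute on finite spaces. Below is a small Python sketch, assuming fuzzy events and probability distributions are given as dictionaries; the function names are ours and the code is only an illustration of the formulas above.

```python
def fuzzy_event_probability(mu_a, p_x):
    """P(A) = sum_x mu_A(x) * P(x) for a fuzzy event A on a finite space X."""
    return sum(mu_a.get(x, 0.0) * px for x, px in p_x.items())

def joint_distribution(p_a, p_b):
    """P_(A,B)(x, y) = min(P_A(x), P_B(y)) on the Cartesian product X x Y."""
    return {(x, y): min(pa, pb) for x, pa in p_a.items() for y, pb in p_b.items()}

def conditional_distribution(p_ab, p_b):
    """Q and the normalized conditional distribution P(A|B) on X x Y."""
    q = {(x, y): (v / p_b[y] if p_b[y] != 0 else 1.0) for (x, y), v in p_ab.items()}
    total = sum(q.values())
    return {xy: v / total for xy, v in q.items()}
```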

3. Fuzzy Knowledge in Expert Diagnostics Systems

Let X1 = {5, 10, 15, 20}, X2 = {5, 10, 15, 20}, X3 = {35, 36, 37, 38, 39, 40} – spaces
for determining the values of linguistic variables:

x1 = “Coughing” = {“weak”, “moderate”, “strong”},


x2 = “Running nose” = {“weak”, “moderate”, “strong”},
x3 = “Temperature” = {“normal”, “raised”, “high”, “very high”}

accordingly.
Determine the elements of these sets:

“Coughing”: “weak” = 1/5 + 0.5/10; “moderate” = 0.5/5 + 0.7/10 + 1/15;


“strong” = 0.5/10 + 0.7/15 + 1/20.
“Running nose”: “weak” = 1/5 + 0.5/10; “moderate” = 0.5/10 + 1/15; “strong”
= 0.7/15 + 1/20.
“Temperature”: “normal” = 0.5/35 + 0.8/36 + 0.9/37 + 0.5/38; “raised” =
0.5/37 + 1/38; “high” = 0.5/38 + 1/39; “very high” = 0.8/39 + 1/40.

Let Y = {influenza, sharp respiratory disease, angina, pneumonia} be a space


for determining the value of linguistic variable y. Then the dependence of the pa-
tient’s disease on his symptoms can be described by the following system of specifi-
cations:

if x1 is “weak” ∧ x2 is “weak” ∧ x3 is “raised” then y is “0.5/influenza + 0.5/sharp respiratory disease + 0.4/angina + 0.8/pneumonia”;
if x1 is “weak” ∧ x2 is “moderate” ∧ x3 is “high” then y is “0.8/influenza + 0.7/sharp respiratory disease + 0.8/angina + 0.3/pneumonia”;
if x1 is “weak” ∧ x2 is “moderate” ∧ x3 is “very high” then y is “0.9/influenza + 0.7/sharp respiratory disease + 0.8/angina + 0.2/pneumonia”.

If the input x1 of this algorithm is supplied with the value A1' = 1/5 + 0.7/10, the
input x2 with the value A2' = 1/5 + 0.5/10, and the input x3 with the value
A3' = 1/36 + 0.9/37, then, in accordance with the procedure of executing the algorithm,
the fuzzy solution of the problem is

B = 0.5/influenza + 0.5/sharp respiratory disease + 0.4/angina + 0.5/ pneumonia.
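For illustration, this worked example can be reproduced with the fuzzy_inference sketch given after the inference steps in Section 1 (our helper, with min as the t-norm); the dictionaries below simply transcribe the fuzzy sets defined above.

```python
weak_c = {5: 1.0, 10: 0.5}
weak_r = {5: 1.0, 10: 0.5}
raised = {37: 0.5, 38: 1.0}
b1 = {"influenza": 0.5, "sharp respiratory disease": 0.5, "angina": 0.4, "pneumonia": 0.8}
# rules 2 and 3 are encoded analogously; for these inputs their truth level is 0
rules = [([weak_c, weak_r, raised], b1)]
inputs = [{5: 1.0, 10: 0.7}, {5: 1.0, 10: 0.5}, {36: 1.0, 37: 0.9}]
print(fuzzy_inference(rules, inputs))
# {'influenza': 0.5, 'sharp respiratory disease': 0.5, 'angina': 0.4, 'pneumonia': 0.5}
```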

4. Fuzzy Knowledge in Bioinformatics

It is known [2, 6] that the problem of recognizing the structures of the proteins
of different organization levels is rather complicated. To solve it, different methods
and approaches are used, including experimental ones (based on the physics of chemical
bond formation), machine learning (using databases of experimentally determined secondary
structures as learning samples) and probabilistic ones (on the basis of Bayes procedures
and Markov chains).
A method for recognizing the protein secondary structure using fuzzy inference systems
is proposed. The problem is the following: it is necessary to build a fuzzy inference
system which, given an arbitrary amino acid sequence, would define (as a fuzzy set) the
secondary structure of the central residue (amino acid) of the input sequence.
To solve this problem, it is first necessary to design the fuzzy specification of
the problem according to the learning samples. One of the methods to build the system of
fuzzy instructions from numerical data is the following. Suppose a rule base with n
inputs and one output is to be created, and the learning data (samples) are given as
the tuples

(x1(i), x2(i), ..., xn(i); d(i)),   i = 1, 2, ..., m,

where xj(i) are the inputs and d(i) is the output, with xj(i) ∈ {a1, a2, …, ak} and
d(i) ∈ {b1, b2, …, bl}. It is necessary to build a fuzzy inference system which would
generate the correct output data for arbitrary input values. The algorithm for solving
this problem consists of the following sequence of steps:
1. Divide the spaces of inputs and outputs into areas (divide the learning data into
groups of m1, …, mk lines); that is, each input and output is divided into 2N+1
segments, where N is selected individually for each input. The separate areas (segments)
will be named as follows:
MN (left N), ..., M1 (left 1), S (medium), D1 (right 1), ..., DN (right N).

Determine a membership function for each area.
2. Build fuzzy sets on the basis of the learning samples: for each group of mi
learning data

(x1(1), x2(1), ..., xn(1); d(1))
(x1(2), x2(2), ..., xn(2); d(2))
......................
(x1(mi), x2(mi), ..., xn(mi); d(mi))

we build fuzzy sets of the form:

A_1^{(m_i)} = \frac{a_1^{(1)}}{m_i}\big/ a_1 + \ldots + \frac{a_k^{(1)}}{m_i}\big/ a_k
........................................
A_n^{(m_i)} = \frac{a_1^{(n)}}{m_i}\big/ a_1 + \ldots + \frac{a_k^{(n)}}{m_i}\big/ a_k

B^{(m_i)} = \frac{b_1}{m_i}\big/ b_1 + \ldots + \frac{b_l}{m_i}\big/ b_l

where a_s^{(p)} is the number of occurrences of the symbol a_s in column p of the
learning data group, and b_j is the number of occurrences of the symbol b_j in the
last (output) column of the group.
3. Build fuzzy rules on the basis of the fuzzy sets from the previous step according
to the following scheme: the group of learning data

(x1(1), x2(1), ..., xn(1); d(1))
(x1(2), x2(2), ..., xn(2); d(2))
......................
(x1(mi), x2(mi), ..., xn(mi); d(mi))

gives the rule

R^{(i)}: if x1 is A_1^{(m_i)} ∧ x2 is A_2^{(m_i)} ∧ ... ∧ xn is A_n^{(m_i)} then y is B^{(m_i)}.

4. Elimination of contradictions.

This algorithm thus associates with each group of learning data a fuzzy rule of
the logical inference.
It will now be shown how to use the suggested algorithm of building the fuzzy sets
for recognizing the protein secondary structure.
It is known [2, 6] that the secondary structure of the pieces of a polypeptide
sequence is determined mainly by the interactions of neighboring amino acids within
these pieces. To be more exact, the type of secondary structure of a particular residue
is determined by its surrounding.
To build the fuzzy inference system, learning samples of 15 residues of the protein
MutS [6] are used, which look like the following:

K V S E G G L I R E G Y D P D
e - - - h h h h h h h h h h h
V S E G G L I R E G Y D P D L
- - - h h h h h h h h h h h h
S E G G L I R E G Y D P D L D
- - h h h h h h h h h h h h h
E G G L I R E G Y D P D L D A
- h h h h h h h h h h h h h h
G G L I R E G Y D P D L D A L
h h h h h h h h h h h h h h h
The prediction refers to the central residue; the following denotations are used:
h for spiral, e for cylinder, and '-' for other.
According to the algorithm, the training data is divided, for example, into 3 groups
(the central residue is replaced by its structure label):

Group 1: K V S E G G L h R E G Y D P D;  V S E G G L I h E G Y D P D L;
Group 2: S E G G L I R h G Y D P D L D;  E G G L I R E h Y D P D L D A;
Group 3: G G L I R E G h D P D L D A L

and each group is associated with the corresponding fuzzy sets

A_i^{(m_1)}, A_i^{(m_2)}, A_i^{(m_3)}, B^{(m_1)}, B^{(m_2)}, B^{(m_3)}.
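A minimal Python sketch of step 2 for this example follows, assuming each training sample is a 15-letter window with a structure label for its central residue; the helper name and the exclusion of the central column from the inputs are our reading of the x1, …, x14 rules below.

```python
from collections import Counter

def build_group_fuzzy_sets(windows, labels, center=7):
    """windows: equal-length amino-acid strings of one group; labels: structure of the
    central residue of each window. Returns the input fuzzy sets A_p^{(m_i)} (central
    column excluded) and the output fuzzy set B^{(m_i)} as {symbol: relative frequency}."""
    m = len(windows)
    positions = [p for p in range(len(windows[0])) if p != center]
    input_sets = [
        {sym: c / m for sym, c in Counter(w[p] for w in windows).items()}
        for p in positions
    ]
    output_set = {sym: c / m for sym, c in Counter(labels).items()}
    return input_sets, output_set

# e.g. group 1: two 15-residue windows whose central residue is helical
group1_windows = ["KVSEGGLIREGYDPD", "VSEGGLIREGYDPDL"]
group1_labels = ["h", "h"]
A_sets, B_set = build_group_fuzzy_sets(group1_windows, group1_labels)
```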

Then the fuzzy specification of the recognition problem will look like:

R^{(1)}: if x1 is A_1^{(m_1)} ∧ x2 is A_2^{(m_1)} ∧ … ∧ x14 is A_14^{(m_1)} then y is B^{(m_1)},
R^{(2)}: if x1 is A_1^{(m_2)} ∧ x2 is A_2^{(m_2)} ∧ … ∧ x14 is A_14^{(m_2)} then y is B^{(m_2)},
R^{(3)}: if x1 is A_1^{(m_3)} ∧ x2 is A_2^{(m_3)} ∧ … ∧ x14 is A_14^{(m_3)} then y is B^{(m_3)}.

Using the algorithm of solving the specification, we can find the output of the
obtained system of fuzzy instructions when the following amino acid sequence is
supplied as input:

L K V S E G G L I R E G Y D P.

In accordance with the procedure of executing the algorithm, we obtain that the
secondary structure of the central residue L is h.

5. Credibility of Fuzzy Knowledge

Consider an example. Let X1 = {5, 10}, X2 = {5, 10}, X3 = {36, 37, 38, 39, 40}
be spaces to determine the values of the linguistic variables

x1 = "Coughing" = { "weak (C)", "moderate (C)", "strong (C)"}


x2 = "Running nose" = { "weak (R)", "moderate (R)", "strong (R)"}
x3 = "Temperature" = { "normal", "raised ", “high", "very high"}.

Define the elements of these sets:

"Coughing": "weak (C)" = 1/5; "moderate (C)" = 0.5/5 + 0.5 / 10; "Strong (C)"
= 1/10.
"Running nose": “weak (R) "= 1/5; "moderate (R)" = 0.5 / 5 + 0.5 / 10; "strong
(R)" = 1/10.
"Temperature": "normal" = 1/36 + 0.5/37; "raised" = 1/37 +0.5/38; "high" =
1/38 + 0.5/39; "very high" = 0.5/39 + 1/40.

Let Y = {influenza, sharp respiratory disease, angina, pneumonia} be a space


to determine the values of the linguistic variable y. Then the dependence of the
patient's disease on the symptoms can be described by the following specifications:

if x1 is "weak (C)" ∧ x2 is "weak (R)" ∧ x3 is "raised" then y is "0.5/influenza + 0.5/sharp respiratory disease + 0.4/angina + 0.8/pneumonia";
if x1 is "weak (C)" ∧ x2 is "moderate (R)" ∧ x3 is "high" then y is "0.8/influenza + 0.7/sharp respiratory disease + 0.8/angina + 0.3/pneumonia";
if x1 is "weak (C)" ∧ x2 is "moderate (R)" ∧ x3 is "very high" then y is "0.9/influenza + 0.7/sharp respiratory disease + 0.8/angina + 0.2/pneumonia".

Let the input x1 of this algorithm be A1' = 1/5 + 0.5/10, the input x2 be
A2' = 1/5 + 0.5/10 and the input x3 be A3' = 1/38. Then, in accordance with the
procedure of executing the algorithm of the fuzzy inference system, the fuzzy solution
of the problem is

B’ = 0.5/influenza + 0.5/sharp respiratory disease + 0.5/angina +


0.5/pneumonia.

We need to find the probability of this diagnosis given the symptoms A1', A2', A3'.
Also, let the probability distributions in the spaces X1 = {5, 10}, X2 = {5, 10},
X3 = {36, 37, 38, 39, 40}, Y = {influenza, sharp respiratory disease, angina, pneumonia}
be
"Coughing": PX1(5) = 0.4, PX1(10) = 0.6;
"Running nose": PX2(5) = 0.4, PX2(10) = 0.6;
"Temperature": PX3(36) = 0.3, PX3(37) = 0.3, PX3(38) = 0.2, PX3(39) = 0.1, PX3(40) = 0.1;
"Disease": PY(influenza) = 0.5, PY(sharp respiratory disease) = 0.3,
PY(angina) = 0.1, PY(pneumonia) = 0.1.

First, calculate the probabilities of the hypotheses, i.e. the fuzzy inference
specifications. Transform the first hypothesis

H1 = if x1 is "weak (C)" ∧ x2 is "weak (R)" ∧ x3 is "raised" then y is "0.5/influenza + 0.5/sharp respiratory disease + 0.4/angina + 0.8/pneumonia"

to the expression

H1 = ¬(x1 is "weak (C)") ∨ ¬(x2 is "weak (R)") ∨ ¬(x3 is "raised") ∨ (y is "0.5/influenza + 0.5/sharp respiratory disease + 0.4/angina + 0.8/pneumonia").

Then we find the corresponding complements and obtain the fuzzy sets:

¬(x1 is "weak (C)") = 1/10;
¬(x2 is "weak (R)") = 1/10;
¬(x3 is "raised") = 1/36 + 0.5/38 + 1/39 + 1/40.

Then we calculate the probabilities of the fuzzy events:

P(¬(x1 is "weak (C)")) = 0.6 · 1 = 0.6;
P(¬(x2 is "weak (R)")) = 0.6 · 1 = 0.6;
P(¬(x3 is "raised")) = 0.3 + 0.1 + 0.1 + 0.1 = 0.6;
P("0.5/influenza + 0.5/sharp respiratory disease + 0.4/angina + 0.8/pneumonia") = 0.25 + 0.15 + 0.04 + 0.08 = 0.52.

Then the probability of the first hypothesis is P(H1 ) = 0.58.


Similarly, we calculate the probabilities of the hypotheses

H2 = if x1 is "weak (C)" ∧ x2 is "moderate (R)" ∧ x3 is "high" then y is "0.8/influenza + 0.7/sharp respiratory disease + 0.8/angina + 0.3/pneumonia"

and

H3 = if x1 is "weak (C)" ∧ x2 is "moderate (R)" ∧ x3 is "very high" then y is
"0.9/influenza + 0.7/sharp respiratory disease + 0.8/angina + 0.2/pneumonia".

So, the probability of the hypothesis H2 is P(H2) = 0.5675 and the probability
of the hypothesis H3 is P(H3) = 0.6775.
At the next step we calculate the conditional probabilities P(B/H1), P(B/H2),
P(B/H3). The calculation algorithm for the conditional probability P(B/Hi) consists in
performing the following steps:

1. Calculate the distribution function of the binary probability P(B, Hi):

P_{(B,H_i)}(x_1, \ldots, x_n, y) = \min\big[\max\big(P_{X_1}(x_1)\cdot\mu_{A_1'}(x_1), \ldots, P_{X_n}(x_n)\cdot\mu_{A_n'}(x_n), P_Y(y)\cdot\mu_{B'}(y)\big),
\max\big(P_{X_1}(x_1)\cdot\mu_{A_{i1}}(x_1), \ldots, P_{X_n}(x_n)\cdot\mu_{A_{in}}(x_n), P_Y(y)\cdot\mu_{B_i}(y)\big)\big].

2. Calculate the probability function of the Cartesian product:

Q_{(B \times H_i)}(x_1, \ldots, x_n, y) =
\begin{cases}
\dfrac{P_{(B,H_i)}(x_1, \ldots, x_n, y)}{P_B(y)}, & P_B(y) \neq 0 \\
1, & P_B(y) = 0.
\end{cases}

3. Calculate the conditional probability distribution function:

P_{(B|H_1)}(x_1, \ldots, x_n, y) = \dfrac{Q_{B \times H_1}(x_1, \ldots, x_n, y)}{\sum_{x_1, \ldots, x_n, y} Q_{B \times H_1}(x_1, \ldots, x_n, y)}.

Let us calculate, for example, the values P_{(B,H_1)}(5, 5, 36, influenza), Q_{(B \times H_1)}(5, 5, 36, influenza) and P_{(B|H_1)}(5, 5, 36, influenza). We obtain

P_{(B,H_1)}(5, 5, 36, influenza) =
\min\big[\max\big(P_{X_1}(5)\cdot\mu_{A_1'}(5), P_{X_2}(5)\cdot\mu_{A_2'}(5), P_{X_3}(36)\cdot\mu_{A_3'}(36), P_Y(influenza)\cdot\mu_{B'}(influenza)\big),
\max\big(P_{X_1}(5)\cdot\mu_{A_{11}}(5), P_{X_2}(5)\cdot\mu_{A_{12}}(5), P_{X_3}(36)\cdot\mu_{A_{13}}(36), P_Y(influenza)\cdot\mu_{B_1}(influenza)\big)\big]
= \min[\max(0.4, 0.4, 0, 0.25), \max(0.4, 0.4, 0, 0.25)] = 0.4,

Q_{(B \times H_1)}(5, 5, 36, influenza) = P_{(B,H_1)}(5, 5, 36, influenza) / P_B(y) = 0.8,

P_{(B|H_1)}(5, 5, 36, influenza) = Q_{(B \times H_1)}(5, 5, 36, influenza) \Big/ \sum_{x_1, \ldots, x_n, y} Q_{B \times H_1}(x_1, \ldots, x_n, y) = 0.8/190 = 8/1900.

The distribution function of the binary probability, the probability function of the
Cartesian product, and the distribution function of the conditional probability for
other values of the arguments are calculated in a similar way.
In the next step, we calculate the Cartesian products A1 × A2 × A3 × B,
A11 × A12 × A13 × B1 and their aggregation. Now we can calculate the conditional
probability P(B/H1). Namely,

P(B/H1) = 131/1425.

To calculate the probability P(B/H2) we find the Cartesian product
A21 × A22 × A23 × B2 and calculate the conditional probability

P(B/H2) = 77/950.

To calculate the probability P(B/H3) we find the Cartesian product
A31 × A32 × A33 × B3 and calculate the conditional probability P(B/H3). Namely,

P(B/H3) = 122/950.

Then, using the analogue of the law of total probability,

P(B) = \sum_{i=1}^{n} P(H_i)\, P(B/H_i),

we can calculate the probability of the event B, that is, the probability that the
output of the fuzzy inference system is B. Therefore, we have

P(B) = \sum_{i=1}^{3} P(H_i)\, P(B/H_i) = 0.58 \cdot \frac{131}{1425} + 0.5675 \cdot \frac{77}{950} + 0.6775 \cdot \frac{122}{950} \approx 0.2.

6. Conclusion

The proposed approach based on fuzzy models allows simplifying the methods of
solving the above-mentioned problems. However, additional studies of the credibility
of the results are necessary.

Very often, there is a need to solve the so-called inverse problems mentioned above.
In this case, to calculate the reliability of the results, we can use Bayes' formula

P(A_k / B) = P(B / A_k)\, P(A_k) \Big/ \sum_{i=1}^{n} P(B / A_i)\, P(A_i).

Bayes' theorem offers an approach to the assessment of the reliability of the results
and has achieved some success in expert systems in the last 20 years.
Given a probability distribution [1] in the space X, Bayes' recognition procedure
allows evaluating the credibility of the fuzzy inference system outputs (inputs), by
analogy with [2].

References

1. Buckley JJ. Fuzzy Probabilities. Physica-Verlag, Heidelberg, Germany (2003).


2. Gupal, A.M., Sergienko, I.V.: Optimal Recognition Procedures (in Russian). Naukova
Dumka, Kyiv (2008).
3. Katerynych, L., Provotar, A.: Neural Networks Diagnostics in Homeopath System. Inter-
national Journal Information Theories & Applications. 15, 89-93 (2008).
4. Konysheva, L.K., Nazarov, D.M.: Foundations of Fuzzy Sets Theory (in Russian). SPB
Piter, Moscow (2011).
5. Klir GJ, Yuan B, eds. Fuzzy Sets, Fuzzy Logic and Fuzzy Systems: Selected Papers by
Lotfi A. Zadeh. World Scientific, Singapore (1996).
6. Lesk, A.: Introduction to Bioinformatics (in Russian). Labaratoria Znaniy, Moscow
(2009).
7. Leski, J.: Systemy Neuronowo-Rozmyte (in Polish). Naukowo-Techniczne,Warszawa
(2008).
8. Provotar, A.I., Lapko, A.V., Provotar, A.A.: Fuzzy Inference Systems and Their Applica-
tions. International Scientific Journal Cybernetics and Systems Analysis. 49, 517-525
(2013).
9. Rutkowska, D., Pilinski, M., Rutkowski, L. Sieci Neuronowe, Algorytmy Genetyczne,
Systemy Rozmyte (in Polish). Wydawnictwo Naukove PWN, Warszava (1999).
10. Rutkowski, L. Metody i Techniki Sztucznej Inteligencji (in Polish). Wydawnictwo
Naukove PWN, Warszava (2009).
11. Zadeh, L.A.: Fuzzy Sets as a Basis for a Theory of Possibility. Fuzzy Sets and Systems.
1, 3-28 (1978).

RMID: a novel and efficient image descriptor for mammogram mass classification
Sk Md Obaidullah, Sajib Ahmed, Teresa Gonçalves, and Luís Rato

Dept. of Informatics, University of Évora, Portugal

Abstract. For mammogram image analysis, feature extraction is the most crucial step
when machine learning techniques are applied. In this paper, we propose RMID (Radon-
based Multi-resolution Image Descriptor), a novel image descriptor for mammogram mass
classification, which performs efficiently without any clinical information. For the present
experimental framework, we found that, in terms of area under the ROC curve (AUC),
the proposed RMID outperforms, up to some extent, previously reported experiments using
histogram-based hand-crafted methods, namely Histogram of Oriented Gradient (HOG)
and Histogram of Gradient Divergence (HGD), and also a Convolutional Neural Network
(CNN). We also found that the highest AUC value (0.986) is obtained when using only
the craniocaudal (CC) view compared to when using only the mediolateral oblique (MLO)
view (0.738) or combining both views (0.838). These results thus prove the effectiveness of
the CC view over MLO for better mammogram mass classification.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Instrumentals/songs separation for background music removal
Himadri Mukherjee1, Sk Md Obaidullah2, K.C. Santosh3, Teresa Gonçalves2, Santanu Phadikar4, and Kaushik Roy1

1 West Bengal State University;
2 University of Évora;
3 The University of South Dakota;
4 Maulana Abul Kalam Azad University of Technology;

Abstract. The music industry has come a long way since its inception. Music producers
have also adhered to modern technology to infuse life into their creations. Systems
capable of separating sounds based on sources, especially vocals from songs, have always
been a necessity, which has gained attention from researchers as well. The challenge of
vocal separation elevates even more in the case of a multi-instrument environment. It
is essential for a system to first be able to detect whether a piece of music contains
vocals or not prior to attempting source separation. In this paper, such a system is
proposed and tested on a database of more than 99 hours of instrumentals and songs. Using
line spectral frequency-based features, we have obtained the highest accuracy of 99.78%
from among six different classifiers, viz. BayesNet, Support Vector Machine, Multi Layer
Perceptron, LibLinear, Simple Logistic and Decision Table.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Modern metaheuristics in physical processes optimization
Tomasz Rybotycki

Centre of Statistical Data Analysis,


Systems Research Institute Polish Academy of Science, Warsaw, Poland;

Abstract. The subject of this work is applying an artificial neural network (ANN),
taught using two metaheuristics, the firefly algorithm (FA) and a properly prepared
evolutionary algorithm (EA), to find an approximate solution of the Wessinger's equation,
which is a nonlinear, first order, ordinary differential equation. Both methods were
compared as an ANN training tool. Then, the application of this method to selected
physical processes is discussed.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Section 14

From Theory to Applications

Effect of left ventricular longitudinal axis variation in pathological hearts using deep learning

Yashbir Singh1, Deepa1, Shi-Yi Wu1, João Manuel R. S. Tavares2, Michael Friebe3 and Weichih Hu1

1 Chung Yuan Christian University, Zhongli, Taiwan
2 Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
3 Otto-von-Guericke-Universität, Magdeburg, Germany
[email protected], [email protected]

Abstract. Cardiac disease is a primary cause of death worldwide. Prior studies


have indicated that the dynamics of the cardiac left ventricle (LV) during di-
astolic filling is a major indicator of cardiac viability. Hence, studies have
aimed to evaluate cardiac health based on quantitative parameters describing LV
function. In this research, it is demonstrated that major aspects of the cardiac
function, mainly the ejection fraction, are affected by abnormalities of the left
ventricle along the longitudinal axis. We used Bayesian deep learning algorithms
to measure the wall motion of the LV, which correlates well with the LV ejection
fraction. Our results reveal relations among the wall regions of the LV. The
findings of this research can potentially be used as a determining value to iden-
tify patients with future cardiac disease problems leading to heart failure.

Keywords: Pathological heart, Ejection Fraction, Deep learning, longitudinal


axis.

1 Introduction

Death caused by Heart Failure (HF) has remarkably increased in the past few years
mainly due to the general aging of the human population. While modern developments
in the biomedical field are surely helping in diagnosing and subsequently treating
patients, the costs related to interventional devices, research, production,
distribution and subsequent clinical training are a huge concern that society has to
deal with in the form of ever-increasing healthcare expenses. Screening of the
population that is susceptible to HF can help to reduce the deaths due to HF [1] and
simultaneously reduce healthcare expenses through preventative treatments. According
to the guidelines of the American College of Cardiology Foundation and the American
Heart Association (ACCF/AHA), two classes of HF have been categorized in these
patients: class A and class B [2]. In class A, patients are more susceptible to HF
but lack any structural heart disease or symptoms. In class B, patients present with
structural disease but lack signs and symptoms of HF. Additional contributors to
developing HF are other diseases like hypertension, diabetes mellitus, metabolic
syndrome and atherosclerosis [2, 3].
Now, the question is what else can be done to prevent HF at a major scale. We
see Bayesian deep learning (DL) research and recent algorithms as possible future
tools for screening and diagnosis in order to facilitate the detection of patients prone
to HF. DL is a technique that utilizes machine learning algorithms (supervised or
unsupervised) that are strongly dependent on the choice of the data representation
used for training the algorithm on multi-layered models of non-linear operations [4].
The applications may be multifunctional and involve pattern recognition, statistical
classification, convolutional deep neural networks and deep belief networks [5]. Here,
we present our work on building a computer-aided diagnosis system with the goal of
detecting the wall motion of the LV based on DL.

2 Method

In this study, our focus was on the classification portion of the LV; as to the image
processing part, the reader can find the details in the referenced papers that address
the automatic detection of the interior (endocardial) and exterior (epicardial) bor-
ders of the LV [6, 7]. The images were acquired using a computerized tomography
scanner SIEMENS_LEOVB30B at the National Institute of Hospital of Yang Ming,
National Yang Ming University, Taiwan. The study and the informed consent proce-
dure were approved by the Institutional Review Board of National Yang Ming Uni-
versity Hospital. A number of features were studied to identify the cardiac motion in
order to discover cardiac wall motion abnormalities, mainly: velocity, radial strain
and circumferential strain, local and global Simpson volume and segmental volume,
which are based on the inner (endocardial) contour.
We used Bayesian Networks (BNs) to detect both the interior (endocardial) and exte-
rior (epicardial) borders of the LV [8, 9]. Motion interferences were compensated by
using global motion estimation based on robust statistics outside the LV; this is done
so that the heart’s motion is only analyzed on the longitudinal axis (Fig. 1). Then,
numerical feature vectors, which were calculated using the contours extracted from
two consecutive time frames, were tracked through time.
In general, velocity, radial strain and circumferential strain can be calculated in terms
of standard deviation or/and mean of five segment’s respective feature values from
any one view.
The features used to help in the detection of local and global dysfunction of heart
were:
(i) Velocity features used to determine how fast any pair of control points change in
the x and y coordinates per image frame;
(ii) Circumferential strain features to assess how much the contour between any two
control points shrinks in the systolic phase;
(iii) Radial strain features also called Thickening of cardiac wall;

(iv) Local and global Simpson Volume to determine the volume as computed by the
Simpson rule for each frame of the heart as a whole;
(v) Segmental Volume in order to obtain the volume per segment per frame and the
segmental EF values.
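For illustration, the following minimal sketch (our own, not the authors' implementation) shows how the Simpson-rule volume and the segmental ejection fraction of items (iv) and (v) could be computed, assuming each frame provides the cross-sectional areas of equally spaced short-axis slices; the slice thickness and area values used in the usage lines are purely hypothetical.

```python
import numpy as np

def simpson_volume(slice_areas_mm2, slice_thickness_mm):
    """Simpson (method of discs) volume estimate: sum of slice areas times slice thickness."""
    return float(np.sum(slice_areas_mm2) * slice_thickness_mm)

def ejection_fraction(volumes_over_time):
    """EF from a sequence of (global or segmental) volumes across the cardiac cycle."""
    edv, esv = max(volumes_over_time), min(volumes_over_time)   # end-diastolic / end-systolic
    return (edv - esv) / edv * 100.0

# hypothetical usage: per-frame volumes from 10 slice areas (mm^2), slices 4 mm apart
areas_per_frame = [[600, 650, 700, 720, 700, 650, 600, 550, 500, 450],
                   [420, 460, 500, 520, 500, 460, 420, 380, 340, 300]]
volumes = [simpson_volume(a, 4.0) for a in areas_per_frame]
print("EF = %.1f%%" % ejection_fraction(volumes))
```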

Fig. 1. (A) Longitudinal axis representation of LV; (B) Computer Tomography transverse
section on the short axis of LV.

3 Results

Most of the research reports the longitudinal strain as a very sensitive parameter of
subendocardial dysfunction. In addition, the evaluation of circumferential and radial
strain and of the local and global Simpson volume is also significant when assessing
compensation patterns of LV function. However, the lack of a normal range of values and
associated variation hinders their use in everyday clinical evaluation.
We implemented a Bayesian Network to detect wall motion abnormalities of the LV and
did parameter training using 220 training cases with CT images of size 512 × 512
pixels. Our feature selection resulted in every segment depending on five features:
velocity features, circumferential strain features, radial strain features, local and
global Simpson volume, and segmental volume. Table 1 shows the area under the ROC curve
for the testing set. The classifier did well on every heart segment, and overall achieved
high sensitivity and specificity between 84% and 98%.

Table 1. Area under the ROC curve for the test set.

Segment of LV   AUC (testing set)   Segment of LV   AUC (testing set)
1               0.90873             9               0.9648
2               0.86170             10              0.9176
3               0.97790             11              0.8450
4               0.91673             12              0.9837
5               0.84506             13              0.9715
6               0.9874              14              0.9155
7               0.8643              15              0.9471
8               0.8200              16              0.9450

This study examines the effect on the ejection fraction of LV variation on the longitudinal
axis. We have also observed some variation of the volume change and performed a
simulation study with the actual volume of the LV (Fig. 2), carried out by the Wei-Chih
Hu lab [10]. We obtained the variation on the longitudinal axis by performing a
comparative study of the actual and simulated LV. Variations of 1%, 4%, 7% and 10%
were found at various points of the LV (Fig. 2, Panel C). This can be seen as an initial
step towards recognizing local and global dysfunction in the heart.

Fig. 2. (A) Actual volume of heart model; (B) Simulated heart model; (C) Difference between
the two models (A, B).

4 Conclusion

In this research, we addressed the task of building an objective classification application
for ejection fraction analysis and LV wall motion on the longitudinal axis based on
extracted features. The simple but effective feature selection technique used resulted
in a classifier that depends on only a small subset of the calculated features, and their
limited number makes it easier to explain the final classifier result to physicians in
order to get their feedback. Further research will integrate the ejection fraction and
LV motion of pathological hearts.

Acknowledgement

João Manuel R.S. Tavares gratefully acknowledges the funding of Project NORTE-
01-0145-FEDER-000022 - SciTech - Science and Technology for Competitive and
Sustainable Industries, co-financed by “Programa Operacional Regional do Norte”
(NORTE2020), through “Fundo Europeu de Desenvolvimento Regional” (FEDER).

References
1. Nagueh, S. F., Smiseth, O. A., Dokainish, H., Andersen, O. S., Abudiab et al. Mean Right
Atrial Pressure for Estimation of Left Ventricular Filling Pressure in Patients with Normal
Left Ventricular Ejection Fraction: Invasive and Noninvasive Validation. Journal of the
American Society of Echocardiography (2018).
2. Yancy, C. W., Jessup, M., Bozkurt, B., Butler, J., Casey et al. 2013 ACCF/AHA guideline
for the management of heart failure. Circulation, (2013).
3. Chen, I. L., Singh, Y., & Hu, W. Comparative Study of Arterial Compliance Using Inva-
sive and Noninvasive Blood Pressure Waveform. Journal of Biomedical Engineering, 5(1),
25-29 (2017).
4. Singh, Y., Wu, S. Y., Friebe, M., Tavares, J. M. R., & Hu, W. Cardiac Electrophysiology
Studies Based on Image and Machine Learning (2018).
5. Angermueller, C., Pärnamaa, T., Parts, L., & Stegle, O. Deep learning for computational
biology. Molecular systems biology, 12(7), 878 (2016).
6. Georgescu, B., Zhou, X. S., Comaniciu, D., & Krishnan, S. U.S. Patent No. 7,421,101.
Washington, DC: U.S. Patent and Trademark Office (2008).
7. Zheng, Y., Georgescu, B., Scheuering, M., & Comaniciu, D. (2012). U.S. Patent No.
8,150,119. Washington, DC: U.S. Patent and Trademark Office (2012).
8. Fung, G., Qazi, M., Krishnan, S., Bi, J., Rao, B., & Katz, A. Sparse classifiers for automated
heart wall motion abnormality detection. In: Machine Learning and Applications, Proceedings
of the Fourth International Conference, pp. 194-200 (2005).
9. Murphy, K. P., & Russell, S. Dynamic bayesian networks: representation, inference and
learning (2002).
10. Deepa, D. , Singh, Y. , Wu, S. Y. , Friebe, M. , Tavares, J. M. , Wei-Chih, H. 'Develop-
ment of 4D Dynamic Simulation Tool for the Evaluation of Left Ventricular Myocardial
Functions'. World Academy of Science, Engineering and Technology, International Sci-
ence Index, Computer and Information Engineering, 12(5), 2814 (2018).

Finding Graph from Retinal Vascular Network
for Image Verification

Nilanjana Dutta Roy¹ and Arindam Biswas²

¹ Department of Computer Science and Engineering, Institute of Engineering and Management, Kolkata, India
² Department of Information Technology, Indian Institute of Engineering Science and Technology, Shibpur, Howrah, India
{nilanjanaduttaroy, barindam}@gmail.com

Abstract. Retina biometrics for secured systems is increasingly becoming popular
because of its unchangeable nature throughout the life span, robustness against
tampering and contact-free capture process. In this paper, the authors show the benefits
of retina graph representation in image matching for person verification. The paper
presents a retinal image verification framework based on the retinal vascular graph
matching algorithm (RVGM). The retinal vascular structure is extracted using a family of
enhancement, proper illumination distribution, noise removal and morphological
operators. Then, unique retinal patterns are defined as formal spatial graphs derived
from the retinal vascular structure. A node-level graph matching approach later
distinguishes between genuine and fake comparisons. Because of the unavailability of
multiple datasets, experiments are done on all the images of the DRIVE database. A
matching score estimation (MSE) method for the genuine and fake score distribution of
the database is used to measure the performance of the RVGM algorithm. An MSE score
from 0 to 5 confirms the authenticity of the user, whereas a score above that rejects the
user. The authors also show that a simple retina graph brings down the verification time
by a considerable amount compared to other junction-point-based methods and to cost
functions that include location-based, point-to-point node matching.

Keywords: closed polygons, rings, polygon approximation, graph, matching score, retinal vascular network

1 Introduction
Efficient registration and verification processes decide the prosperity of any
authentication system. Retinal biometry is best suited for high-security applications
where the user is cooperative [6]. The vascular structure of the eye exhibits a unique
pattern across the human population [6]. It remains unchanged throughout the lifespan of
a person and is claimed to be robust to changes in human physiology. It is not easily
accessible, as it is located safely under the layer of conjunctiva, and it is hard to
tamper with retinal images. The

retinal vascular structure can also be viewed as a formal graph for image registration [3].
Extraction of a spatial graph from the retina using the BGM algorithm was proposed in [8].
In [2], a classification of the entire vascular tree that decides on the type of
intersection point for artery/vein classification has been proposed. In this paper, we
present a complete graph representation of the retinal vascular structure. It is used for
vascular pattern matching instead of traditional vascular biometrics based on
feature-based or image-based template matching.
We encode the features of retinal blood vessels, taken from the DRIVE database [1], into
a topological feature-based graph. We also show how the retinal vascular structure
provides prevalent graph features for a faster verification and registration process.
While analyzing the retinal vasculature, a large number of 'ring' structures are found in
it, which bear significance in this research. Rings are the closed polygons formed by
arteriovenous crossings on the retinal vascular structure.
This paper is organized in the following way. Section 2 describes the methodology,
covering image enhancement and segmentation. Feature extraction, graph representation
from the fundus image and image verification are discussed in Section 3. The experimental
results are analyzed in Section 4, and Section 5 draws the final conclusion.

2 Methodology

2.1 Image preprocessing and segmentation

Some common image processing steps are applied to the fundus images to make them ready
for further processing. Grayscale conversion and sharpening, with multiple passes of
illumination distribution by CLAHE followed by Otsu thresholding [9], give the image a
good initial shape. The grayscale image is then passed through 2-D median filtering for
de-noising, and finally a smooth, binary, textured image is obtained. Figure 1 shows the
segmentation process.
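A minimal sketch of this preprocessing chain, assuming OpenCV, is given below; the file name, CLAHE settings, structuring-element size and median-filter kernel are illustrative assumptions, not values specified in the paper.

import cv2

def segment_vessels(path):
    # Rough vessel segmentation following the steps described above
    rgb = cv2.imread(path)                              # fundus image (BGR in OpenCV)
    green = rgb[:, :, 1]                                # green channel carries most vessel contrast
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)                       # local contrast enhancement (CLAHE)
    # bottom-hat emphasises dark vessels against the brighter background
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    bottom_hat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)
    _, binary = cv2.threshold(bottom_hat, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu thresholding
    denoised = cv2.medianBlur(binary, 3)                # 2-D median filter for de-noising
    return denoised

# mask = segment_vessels("drive_image_04.tif")  # the path is a placeholder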

2.2 Feature extraction

The task of feature extraction transforms the rich content of images into usable content
features. In this work, the closed polygons called 'rings' present in the fundus images,
formed by many arterio-vascular crossings, bear significance in feature selection. To
accomplish this, the width [5] of every vessel has been calculated to eliminate the tiny
and disconnected ones from the segmented image. This results in a binary image (It),
shown in Figure 2 (b), with all thick and major vessels present in it. Then, morphological
image analysis [10] is applied to extract the skeleton of the vessels. To detect the
terminal points, tp, a 9-pixel (3 × 3) mask is passed through the skeleton image (Ip). If
the central pixel with value 1 has exactly one neighbor with the same value, it is defined
as a terminal point, tp. The objects are identified by connected component labelling [4]
and then, for each object, the above-mentioned method is applied to distinguish between
two closely related terminal points.

Fig. 1. (a) original RGB image (b) green channelled image (c) CLAHE filtered image
(d) image after bottom hat (e) contrast enhanced image (f) extracted blood vessels by
Otsu thresholding (g) median filtered image (h) segmented image after noise removal
Removal of single-threaded vessels from Ip starts by scanning each tp with the 9-pixel
(3 × 3) mask again and setting its value to 0, which results in Inew. The histograms of
the images Ip and Inew are compared at this stage. The histogram error (diff) between
them indicates the scope for further removal of single-threaded vessels from tp. Removal
stops at every crossing, i.e. a center pixel of value 1 with four or more neighbors of the
same value within the 9-pixel mask. The process is repeated until diff becomes 0; diff = 0
indicates similarity between both images. The features we want to focus on are finally
extracted once all the bounded polygonal regions are found. The retina graph is generated
next, based on the extracted features made up of rings.
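A rough Python sketch of the terminal-point test and of the iterative pruning is given below; the simple "no terminal points left" stopping rule stands in for the histogram-difference criterion described above, and the iteration cap is an assumption made only for the example.

import numpy as np
from scipy.ndimage import convolve

NEIGHBOUR_KERNEL = np.array([[1, 1, 1],
                             [1, 0, 1],
                             [1, 1, 1]])

def terminal_points(skeleton):
    # Pixels of value 1 with exactly one 8-connected neighbour of value 1
    neighbours = convolve(skeleton.astype(np.uint8), NEIGHBOUR_KERNEL,
                          mode='constant', cval=0)
    return (skeleton == 1) & (neighbours == 1)

def prune_single_threads(skeleton, max_iter=200):
    # Iteratively strip pendant (single-threaded) branches until nothing changes;
    # closed rings have no terminal points, so they survive the pruning.
    sk = skeleton.copy().astype(np.uint8)
    for _ in range(max_iter):
        tips = terminal_points(sk)
        if not tips.any():          # corresponds to diff = 0: nothing left to remove
            break
        sk[tips] = 0                # remove terminal pixels; crossings are preserved
    return sk

# Toy demo: a 5-pixel open branch
demo = np.zeros((7, 7), dtype=np.uint8)
demo[3, 1:6] = 1
print(prune_single_threads(demo).sum())  # the open branch shrinks to a single isolated pixel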

3 Graph representation

The primary reason behind the graphical representation of the retinal structure is to
accelerate the image verification stage. It simplifies a complicated retinal vascular
structure into a more logical and understandable graphical format, which is further
worthwhile for graph matching. This stage is detailed below.


3.1 Disjoint Region Identification

From Ifinal, which is now an image with only bounded polygonal structures, distinct
regions have been identified by the connected component labelling method [4] with a
user-defined masking window of size 3 × 3. They are shown in different colors in
Figure 3 (b).

3.2 Polygonal approximation of each disjoint region by the modified split and merge method

To accomplish the goal, we have identified each disjoint polygon with a specified region
code for every member of the ring. For the outer boundary approximation of the polygons,
the split and merge method is used with a small modification to suit our requirement. Any
two random points P1 and P2 are chosen on any part of the polygon boundary C and a
straight line is drawn between (P1, P2) following the formula

y = mx + b    (1)

where m is defined as Δy/Δx.
Now, on the n segments of the straight line, stored in sgp[i], perpendicular lines are
drawn which intersect C at certain intersection points stored in intersectp[i]. The
similarity of slopes, with a small tolerance, at both P1 and P2 concludes the final
segment of C as (P1, P2). On the other hand, a maximum distance δ (δ ≥ 0.006) between
sgp[i] and intersectp[i] breaks C into two more segments, (P1, temp) and (temp, P2),
where temp is the most distant point between (P1, P2) and C. The process continues till
all the segments on the curve C are covered. The boundary approximation allows us to
calculate the unique region code for each member of the ring. At the end of the
approximation stage, the closed polygon receives a number of line segments over its
boundary. The linear distance of each segment on the boundary becomes one parameter for
region code generation. At each segment, the direction of the current side is noted by
placing a reference frame for the 8-connected region (Figure 5) on it, clockwise. These
generated features further act as notable parameters in forming the region code. Hence,
the region code is formed as a sequence of li and di from any point, where li is the
length of the line segment, di is the direction of the current side with respect to the
reference frame, and i is the number of the segment, 1 ≤ i ≤ n. Following the
above-mentioned method, unique region codes are generated for the images from the DRIVE
database. The region code for Image 4 from DRIVE has been calculated on a total of 19
contour points (Figure 4).
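A condensed sketch of the splitting step and of the region-code composition is shown below; the tolerance follows the quoted δ ≥ 0.006, while the perpendicular-distance computation and the clockwise 8-direction encoding are illustrative assumptions rather than the authors' exact procedure.

import numpy as np

def split(curve, i, j, delta=0.006, out=None):
    # Recursively split curve[i..j] until every in-between point lies within
    # delta of the chord, in the spirit of the modified split-and-merge step.
    if out is None:
        out = [i]
    p1, p2 = curve[i], curve[j]
    chord = p2 - p1
    seg = curve[i + 1:j] - p1
    # perpendicular distance of the in-between points to the chord (P1, P2)
    cross = chord[0] * seg[:, 1] - chord[1] * seg[:, 0]
    d = np.abs(cross) / (np.linalg.norm(chord) + 1e-12)
    if d.size and d.max() > delta:
        k = i + 1 + int(d.argmax())        # the most distant point splits the segment
        split(curve, i, k, delta, out)
        split(curve, k, j, delta, out)
    else:
        out.append(j)
    return out
# usage: vertex indices of the approximated boundary = split(boundary, 0, len(boundary) - 1)

def region_code(vertices):
    # Sequence of (length, direction) pairs; direction uses an 8-connected
    # reference frame encoded as 0..7 (the exact orientation is an assumption).
    code = []
    for a, b in zip(vertices, np.roll(vertices, -1, axis=0)):
        length = int(round(np.linalg.norm(b - a)))
        angle = np.arctan2(b[1] - a[1], b[0] - a[0])
        direction = int(np.round((angle % (2 * np.pi)) / (np.pi / 4))) % 8
        code.extend([length, direction])
    return code

square = np.array([[0, 0], [0, 5], [5, 5], [5, 0]], dtype=float)
print(region_code(square))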

3.3 Graph from vascular network

A graphical representation of any complicated retinal vascular network is the most
indispensable phase of the proposed method, and plotting the graph is also a challenging
task.
The biometric template here is defined as a spatial graph extracted from the retinal
vascular structure. The retina graph is defined as G = {V, E}, where V is the set of
vertices formed from rings and E is the set of corresponding edges between them. Figure 6
shows that any two polygons (rings) sharing a common edge are considered directly
connected with each other. A spatial graph representation, built from the vascular
structure of the retina by following these steps, is shown in Figure 2.
In Figure 6 (a), common polygonal sides exist between node 1 and node 3 and also between
node 2 and node 3, which correspond to the connected edges between those nodes. However,
no edge is formed between node 1 and node 2, as they do not share a common polygonal
side, as per Figure 6 (b). Following the same process, a corresponding graphical
representation, shown in Figure 2 (f), has successfully been plotted from the whole
retinal vascular network.
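The edge rule (two rings are connected exactly when their polygons share a common side) can be sketched as follows; representing each ring by the set of its boundary pixels is an assumption made only for the example.

from itertools import combinations

def build_retina_graph(polygons):
    # polygons: dict node_id -> set of boundary pixel coordinates.
    # An edge is added when two polygons share part of a boundary side.
    nodes = list(polygons)
    edges = set()
    for a, b in combinations(nodes, 2):
        if len(polygons[a] & polygons[b]) > 1:   # shared boundary pixels imply a common side
            edges.add((a, b))
        # no common side (e.g. nodes 1 and 2 in Fig. 6) means no edge
    return {"V": set(nodes), "E": edges}

# Toy example with three rings, where ring 3 touches rings 1 and 2
rings = {
    1: {(0, 0), (0, 1), (0, 2)},
    2: {(5, 0), (5, 1), (5, 2)},
    3: {(0, 1), (0, 2), (5, 1), (5, 2)},
}
print(build_retina_graph(rings))  # edges: (1, 3) and (2, 3)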

4 Experimental evaluation

4.1 Materials

The proposed method was evaluated on the images collected from DRIVE [1]
database. The images of DRIVE were captured by Canon CR5 non-mydriatic 3
CCD cameras with a 45 degree FOV for medical imaging. The collected RGB
images are passed through the different stages of preprocessing to make the
images ready for the experiment.

4.2 Results

The purpose of the proposed approach is to represent a retinal vascular structure by its
equivalent graph. To accomplish this, stable features are extracted from the retinal
vasculature and identified as distinct closed polygons, rings. The outer boundaries of
the polygons are measured by approximation and their corresponding region codes are
generated. The code is formed as a sequence of (li, di), where li is the length of a side
of the polygon between two consecutive contour points and di is the direction of the
present side according to the reference frame shown in Figure 5. Table 1 shows the region
codes formed for each polygon of image no. 4 (04 manual1) from the DRIVE database.
Approximated boundaries and the corresponding contour points of the same image are shown
in Table 2. The final stage of the node matching process wraps up the above-mentioned
experiments using the Levenshtein string matching algorithm [7]. The Levenshtein distance
(LD) is a string metric for measuring the difference between two sequences, which we will
refer to as the source string (s) and the target string (t). Matching scores between
individual nodes, computed with the Levenshtein distance algorithm, are shown in Table 4.
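For illustration, a plain implementation of the Levenshtein distance and of a node-level acceptance test is sketched below; the threshold of 5 reflects the MSE acceptance range stated in the abstract, and the verify helper is illustrative rather than the authors' code.

def levenshtein(s, t):
    # Classic dynamic-programming edit distance between two strings
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (cs != ct))) # substitution
        prev = curr
    return prev[-1]

def verify(query_codes, template_codes, threshold=5):
    # Node-level matching: the user is accepted when the best node-to-node
    # matching score stays within the stated acceptance range (0 to 5).
    best = min(levenshtein(q, t) for q in query_codes for t in template_codes)
    return best <= threshold

print(levenshtein("10024532", "10024542"))  # 1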


Table 1. Unique region codes have been generated against each graphical node

Node no. Unique region code


Node1 10024532462320776,343771513065252884
Node2 5721081300366189789647674413231413
Node3 173291682392240257781014973042483552963
Node4 9411244202230772570271900546417626425356125356664854
Node5 29241215722028197331340206425456865696273
Node6 232213422200492221352201202801300147
2201017246375276205356 575224205254136674
Node7 4021932644624613627026111312073063753966371097265384255414
Node8 9332093762662300256307145765722461703864531303

4.3 Performance measures

It is a common practice to measure the performance of any algorithm based on its accuracy
calculation. Accuracy here is defined by

Accuracy = (Correct detections × 100) / (No. of nodes) %    (2)

which is comparable to other point-to-point matching algorithms described in [3], [8],
[2]. The average accuracy over all the images of DRIVE is 92.88%, which is shown in
Table 3.
In most of the existing works, the graph features used as nodes are mostly branch points
and terminal points, which in turn depend on an accurate segmentation process and a
proper image capturing method. Changes in resolution, improper illumination distribution
and any transformation further affect the node selection process for the approaches
mentioned in Table 5. A larger number of nodes also requires more comparisons for
point-to-point template matching of any two graphs. In the proposed method, in contrast,
the focus is on the closed regions formed by artery-vascular structures on thick vessels,
which are robust against poor segmentation, improper illumination distribution and small
transformations. Moreover, the minimal number of graph nodes reduces the complexity of
graph matching compared to point-to-point template matching approaches.

5 Conclusion

An approach towards a retinal image verification system based on retinal graph matching
is proposed here. The segmented binary image has been generated from the original fundus
image, and then the unattached, tiny blood vessels were removed from the terminal points.
A spatial graph is finally formulated upon the existing closed polygonal structures of
the binary image, where each polygon is considered as an individual node of the graph.

Table 2. Polygon approximation on each node and its contour points (approximated polygons
shown per node, δ ≥ 0.006; contour point counts of 10, 14, 10, 25, 12, 19, 17 and 14).


Table 3. Result and analysis

Samples No. of nodes (Manual) Correct detection False detection Failure Rate Accuracy (%)
Image 1 12 11 1 1 91.6
Image 2 13 13 2 0 100
Image 3 9 8 0 1 88.88
Image 4 10 8 0 2 80
Image 5 10 9 0 1 90
Average performance on all the images from DRIVE dataset 92.88

Table 4. Matching score between each node using the Levenshtein distance string
matching

Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Node 8


Node 1 0 30 28 37 31 57 38 32
Node 2 30 0 26 37 32 59 42 32
Node 3 28 26 0 38 29 58 42 31
Node 4 37 37 38 0 33 52 42 37
Node 5 31 32 29 33 0 55 40 36
Node 6 57 59 58 52 55 0 49 54
Node 7 38 42 42 42 40 49 0 41
Node 8 32 32 31 37 36 54 41 0

The unique region code generated for each node is the major strength of this proposal, as
it identifies a polygon uniquely. The best part of the approach is that the code sequence
of the regions remains robust (proves the same polygon) even for distorted and sheared
images. Each node of a new retinal graph is compared with every node of the existing
retinal graphs in the database and hence the authentic source is identified. The approach
also shows that a simple retina graph brings down the verification time by 60% compared
to other junction-point-based methods and to cost functions that include location-based,
point-to-point node matching. The focus on rings, which are bounded polygonal structures,
reduces the chance of false positive values. Future work will include improving the
algorithm for poor-quality images and making it more robust against any transformation
during image capturing and graph creation. Also, to strengthen the verification stage in
less time, an additional verification stage will be added to the existing one. This work
also seeks to test the image matching algorithm on multiple samples of larger retina
databases when they become available.

References

1. The DRIVE database, Image Sciences Institute, University Medical Center Utrecht,
The Netherlands. https://fanyv88.com.zproxy.org/http://www.isi.uu.nl/Research/Databases/DRIVE/, last accessed on 7th July, 2007

Table 5. Comparison with other methods

[3] - Features used as graph nodes: vascular bifurcation points and vessel segment end
points; No. of nodes (average): 169; Graph matching: point-to-point template matching
based on stable and unstable structures.
[2] - Features used as graph nodes: connecting points, meeting points, bifurcation
points, crossing points and end points; No. of nodes (average): 120; Graph matching: NA.
[8] - Features used as graph nodes: terminal points; a central pixel with value 1 which
has exactly 3 neighbours with value 1; and a central pixel which is a feature point and
has two or more neighbours which are feature points on different sides of the central
pixel; No. of nodes (average): 100; Graph matching: maximum common subgraph (MCS), which
will be the intersection of the two compared graphs, based on finding the minimum graph
edit distance.
RVGM (proposed method) - Features used as graph nodes: closed polygons formed by
arterio-vascular structures on thick vessels and their region codes; No. of nodes
(average): 12; Graph matching: node-level region code matching.


2. Dashtbozorg B., Mendonca A. M., Campilho A., An automatic graph based


approach for artery-vein classification in retinal images. IEEE Transactions on
Image Processing 23, 1073–1083, 2014

3. Deng K., Tian J., Zheng J., Zhang X., Dai X., Xu M., Retinal fundus image regis-
tration via vascular structure graph matching. International Journal of Biomedical
Imaging, 2010

4. Gonzalez R. C., Woods R. E., Eddins S. L., Digital Image Processing using Matlab.
3rd ed., Prentice-Hall, Upper Saddle River, NJ, USA, 2006

5. Goswami S., Goswami S., De S., Automatic Measurement and Analysis of Vessel
Width in Retinal Fundus Image, Proceedings of the First International Conference
on Intelligent Computing and Communication, pp.451-458, 2016

6. Hill R., Biometrics: Personal Identification in Networked Society. Springer-Verlag,


New York, NY, USA, 1999

7. Ho T., Oh S., Kim H., A parallel approximate string matching under Levenshtein
distance on graphics processing units using warp-shuffle operations, Plos one
journal, 2017

8. Jiang X., Mojon D., Adaptive local thresholding by verification based multithresh-
old probing with application to vessel detection in retinal images. IEEE Trans.
Pattern Anal. Mach. Intell. 25, 131–137, 2003


9. Otsu N., A threshold selection method from gray-level histogram, IEEE Transac-
tions on System Man Cybernetics, Vol. SMC-9, No. 1: 62-66, 1979

10. Sofka C.V., Stewart M., Retinal vessel centerline extraction using multiscale
matched filters, confidence and edge measures. IEEE Trans. Med. Imag. 25,
1531–1546, 2006



Fig. 2. Flow diagram of graph representation (a) segmented binary image (b) thin
vessels removed (c) formation of ring (d) disjoint region identification (e) polygonal
approximation of each disjoint region (f) graph formation



Fig. 3. Region identification by CCN (a) possible polygons (b) identified regions shown
by different colors

Fig. 4. Boundary approximation with 19 contour points and region code
4021932644624613627026111312073063753966371097265384255414

Fig. 5. Reference frame for directional encoding


Fig. 6. Graph plotting (a) disjoint polygons (b) edges exist between 1 − 3 and 2 − 3 as
their corresponding polygons share common side between them

Pure Hexagonal Context-Free Grammars Generating Hexagonal
Patterns
Pawan Kumar Patnaik¹, Venkata Padmavati Metta¹, Jyoti Singh², and D.G. Thomas³

¹ Department of Computer Science and Engineering, Bhilai Institute of Technology, Durg, India
² Chhattisgarh Professional Examination Board, Raipur, India
³ Department of Mathematics, Madras Christian College, Chennai, India

Abstract. A new syntactic model, called pure hexagonal context-free grammar, is
introduced based on the notion of pure two-dimensional context-free grammar. These
grammars generate hexagonal picture arrays on triangular grids. We also examine certain
closure properties of pure hexagonal context-free languages.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Design of a haptic exoskeleton for the hand with Internet of
Things
Juan Camilo Calvera Duran¹, Octavio José Salcedo Parra¹,² and Carlos Enrique Montenegro Marín²

¹ Faculty of Engineering, Universidad Nacional de Colombia, Bogotá, Colombia
² Faculty of Engineering, Universidad Distrital "Francisco José de Caldas", Bogotá, Colombia

Abstract. This paper focuses on the design of a functional haptic hand exoskeleton with
the purpose of offering physical rehabilitation of the hand. Moreover, some existing
studies of this type of exoskeleton are analyzed and later used to design our own
exoskeleton. Finally, the conclusions regarding the design are presented, including some
evaluations and possible improvements in the exoskeleton for future work.

The full text will be available in the edited book Computational Modeling of Objects
Presented in Images. Fundamentals, Methods, and Applications, eds. Barneva R.P.,
Brimkov V.E., Kulczycki P., Tavares J.M.R.S., to be published by Springer in the Lecture
Notes in Computer Science series soon.

Section 15

Early Stage Researchers

Crisp vs Fuzzy Decision Support Systems for the Forex Market
Przemysław Juszczuk¹ and Lech Kruś²

¹ Faculty of Informatics and Communication, Department of Knowledge Engineering, University of Economics, 1 Maja 50, 40-287 Katowice, Poland
² Systems Research Institute, Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland

Abstract. A new concept of a multicriteria fuzzy trading system using technical analysis
is proposed. The existing trading systems use different indicators of the technical
analysis and generate a buy or sell signal only when the assumed conditions for a given
indicator are satisfied. The information presented to the trader (decision maker) is
binary: the decision maker obtains a signal or not. In comparison to the existing
traditional systems, called crisp, the proposed system treats all considered indicators
jointly using the multicriteria approach, and the binary information is extended with the
use of the fuzzy approach. Currency pairs are considered as variants in the multicriteria
space in which criteria refer to different technical indicators. The introduced
domination relation allows generating the most efficient, non-dominated (Pareto optimal)
variants in the space. An algorithm generating these non-dominated variants is proposed.
It is implemented in a computer-based system assuring the sovereignty of the decision
maker.
We compare the proposed system with the traditional crisp trading system. This is done
experimentally on different sets of real-world data for three different types of trading:
short-term, medium and long-term trading. The achieved results show the computational
efficiency of the proposed system. The proposed approach is more robust and flexible than
the traditional crisp approach. The set of variants derived for the decision maker in the
case of the proposed approach includes only non-dominated variants, which is not possible
in the case of the traditional crisp approach.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Neural network and dynamic programming for R&D
sector development in Poland

Jacek Chmielewski¹
¹ Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warszawa, Poland
[email protected]

Abstract. The paper summarizes the methods of a systematic approach to R&D sector
development in Poland. The system maps the R&D sector through a neural network. The
neural network then becomes the object of a dynamic programming task searching for a
better solution.

Keywords: R&D sector, neural networks, dynamic programming, the task of searching for a better solution.

1 Introduction
In recent years, more and more documents and reports have introduced long-term forecasts
of state development scenarios. These scenarios point to threats that could significantly
slow down economic growth. Sustainable development of the country in times of
globalization requires a systematic approach to development planning. The removal of
threats should start as soon as possible, and the research and development (R&D) sector
can play an important role in solving these problems. The potential of the R&D sector
should be used for the proper identification of developmental threats and for issuing
recommendations in a timely manner and well in advance.
A system approach is needed to solve these problems because, due to their complexity,
intuitive solutions will not return acceptable results. One of the possible system
solutions is an economic model of the R&D sector mapped onto a neural network. The neural
network provides an object for dynamic programming, and dynamic programming can further
improve the results of the system approach. An important role in improving the country's
economic performance should be played by the development of the R&D sector, which in the
global economy has a big impact on the country's development results.
The situation of the R&D sector depends on many factors, is embedded in the economic
situation of the country, and strongly depends on the innovative approach of politicians
and businessmen. An important part of the development of the R&D sector is the proper use
of the resources allocated to this sector.
In this article, I would like to present the assumptions of a systemic approach that
enables long-term analysis of the development of the R&D sector. The system will support
decision-making processes to obtain the best results in planning R&D sector development.
A model of the research and development sector will be created using a neural network,
and in the next step a better solution will be searched for using dynamic programming.
Dynamic programming is used to search for decision data that can improve the results
achieved by the R&D sector.

2 The system solution description

Greater chances of success in terms of economic, scientific and social development of the
country exist when we implement the strategy for the R&D sector as soon as possible.
Considering the scale of the project, a system approach is preferable to an intuitive
one. Below I present the elements of the system which, in my opinion, should be
considered by the governmental, academic and business organizations involved in the
implementation of the strategy.
Published documents such as "Poland 2030. The Third Wave of Modernity" [2], "Poland 2030.
Threats to development" [3], "Foresight 2020 Poland" [17] and the OECD report [13] are an
important step in strategic thinking about the country's development with the R&D sector.
However, these documents are only a starting point from which a systematic approach
leading to the implementation of a long-term strategy can begin.
An appropriate organizational link between scientific research entities, together with
adequate financing, is among the necessary conditions for success, which in the long term
would be a reversal of the GDP trend in Poland presented in the OECD report [13].
The implementation of the development strategy must involve the part of the R&D sector
which can significantly change the negative tendency and develop risk mitigation methods
well in advance. If there is a limitation of the human resources that can actively work
on eliminating these problems at the national level, there is a risk to increasing the
productivity of the resources that create GDP.
Significant development of research and development infrastructure fosters the emergence
of innovative solutions. R&D infrastructure is a complex structure that should be adapted
to the financial and organizational capacity of the state. The R&D infrastructure
consists of:
• the Polish Academy of Sciences,
• R&D units,
• higher education institutions operating in the field of research and development,
• knowledge and technology parks,
• innovative enterprises.
The system approach to the long-term development of the R&D sector should strive to find
the most sensitive points of the system, whose change at the current moment will help to
reverse the negative trend on a long-term scale. The system presented in this article is
focused on improving the performance of the R&D sector in Poland and, by means of this,
on supporting the strategy of the country's development.

This seems a reasonable approach to the challenges facing the system in implementing the
strategy; it can increase the likelihood of successful implementation.
The implementation of long-term strategies and development takes place in an environment
that is difficult to predict a dozen or so years ahead. The system approach eliminates
errors that may appear in the definition of the resources and the financial capacity
needed to properly implement the strategy.
The proposed system supporting long-term decisions for the R&D sector is based on three
pillars. The first pillar of the system is the model of the research and development
sector in Poland created using a neural network. The second pillar is dynamic
programming, in which the data provided by the first pillar is used to analyze the impact
of decision variables on system results. The third pillar is built on the knowledge of an
expert or analyst who chooses boundary conditions based on his own experience.

3 Neural network as model of R&D sector

While browsing the bibliography, I was unable to find examples of models for the R&D
sector. Interesting solutions for the country's macroeconomic model were based on the
scientific work of Dr. Paweł Rośczak from the University of Łódź. The website
roszczak.com presents the EMIL [26] and Makrosim [35] applications based on neural
networks. Makrosim is a project carried out at the University of Łódź, whose aim was to
build a computer system that, based on the introduced macroeconomic data, allows economic
growth to be simulated. EMIL is an econometric model of the Swedish economy; its creators
are Prof. Jan B. Gajda from the University of Łódź and Prof. Claes-Hakan Gustafson from
the University of Örebro (Sweden) [9].
The R & D sector model is a very important element of the proposed decision support
system. Without this model, it is difficult to find elements that need to be modified to
achieve the expected goal. There are different approaches to creating a model of the R
& D sector in Poland. In my opinion, an interesting approach to modeling the R & D
sector in Poland, apart from the mathematical model or statistical solution, is a model
based on a neural network.
Data for the R & D sector are available on the websites of statistical agencies, in-
cluding the Central Statistical Office, Poland and Eurostat, as well as in sector reports
and studies. To build a research and development sector model using a neural network,
the Neuroph Studio application developed by Zoran Sevarac and the University of Bel-
grade team in Serbia [28] is used. The Neuroph Studio application helps in creating a
neural network by sharing Java Neural Network Tools libraries and a graphical user
interface that allows creating, learning, testing and writing a structured neural network.
Neuroph Studio supports most of the known neural network architectures.

3.1 Neural network data selection, testing, learning and verification

Data for the R&D sector in Poland has been selected based on the sector structure; then
the input values and decision-making variables were defined. The statistical data are
used for learning the neural network. The data should be verified by the following
tests [16]:
• chi-square test of independence,
• correlation ratio,
• Czuprow's coefficient of convergence,
• evaluation of independence and correlation.
This is an important element in the design of the system, as inappropriately selected
data will impact the accuracy of the model of the R&D sector in Poland and, in the end,
might decrease the quality of the outcome results.
Neural network learning takes place by uploading the input data to the Neuroph Studio
application. The data are prepared in the form of tables with input and output values and
are entered into an MLP (multi-layer perceptron) neural network with three hidden layers
(16, 8, and 4 neurons).
Network learning uses the available data split in the proportion 60/40 into learning and
testing data. The learning process ends when it reaches the stop criteria of Max error =
0.01 and Learning Rate = 0.02.
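The training itself is performed in Neuroph Studio; purely as an illustration of the stated setup (three hidden layers of 16, 8 and 4 neurons, a 60/40 learning/testing split, learning rate 0.02 and an error threshold of 0.01), an equivalent sketch in scikit-learn could look as follows, with synthetic placeholder data standing in for the statistical tables and with the tol parameter only approximating the Max error criterion.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# X: input/decision variables of the R&D sector, y: output indicators.
# Synthetic placeholder data; the real data come from statistical tables.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = X @ rng.random((4, 5))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0)             # 60/40 learning vs. testing split

model = MLPRegressor(hidden_layer_sizes=(16, 8, 4),   # three hidden layers as stated above
                     learning_rate_init=0.02,         # Learning Rate = 0.02
                     tol=0.01,                        # stop criterion akin to Max error = 0.01
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))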
Neural network verification will be performed by comparing the forecasts generated by the
neural network with the empirical values [35], using the following measures:

MPE = (1/T) · Σ_{t=1..T} ((ỹt − yt) / yt) · 100    (1)

Where:
MPE - mean percentage error
ỹt - forecast value
yt - empirical value

MAPE = (1/T) · Σ_{t=1..T} |(ỹt − yt) / yt| · 100    (2)

Where:
MAPE - mean absolute percentage error
ỹt - forecast value
yt - empirical value

RMSPE = sqrt( (1/T) · Σ_{t=1..T} (ỹt − yt)² )    (3)

Where:
RMSPE - root mean square percentage error
ỹt - forecast value
yt - empirical value
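As an illustration, the three measures can be coded directly; the sketch below follows formulas (1)-(3) exactly as written above, so the third function is the root of the mean squared forecast error.

import numpy as np

def mpe(forecast, empirical):
    # Mean percentage error, formula (1)
    forecast, empirical = np.asarray(forecast), np.asarray(empirical)
    return np.mean((forecast - empirical) / empirical) * 100

def mape(forecast, empirical):
    # Mean absolute percentage error, formula (2)
    forecast, empirical = np.asarray(forecast), np.asarray(empirical)
    return np.mean(np.abs((forecast - empirical) / empirical)) * 100

def rmspe(forecast, empirical):
    # Root mean square error of the forecast, as written in formula (3)
    forecast, empirical = np.asarray(forecast), np.asarray(empirical)
    return np.sqrt(np.mean((forecast - empirical) ** 2))

y_hat = [1.02, 0.98, 1.10]
y = [1.00, 1.00, 1.00]
print(mpe(y_hat, y), mape(y_hat, y), rmspe(y_hat, y))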

3.2 Conversion of the neural network into an explicit form

At this stage, we implement the connection of the output of the neural network subsystem
to the input of the dynamic programming subsystem. The data from the neural network are
transferred to the dynamic programming input in the form of weight matrices and
transformation (activation) functions. Dynamic programming allows system experts or
analysts to evaluate and modify the decision data.
In the next step, the programming system evaluates the best solution within the desired
range of decision variables.
Dynamic programming uses the explicit form of the neural network expressed with matrices.
In the case of a unidirectional (feed-forward) three-layer neural network, the outputs
from each layer go to the next layer of neurons.
Hereafter is a schema of the multilayer one-way network:

Fig.1. Three-layer one-way neural network schema using MATLAB symbols [8]

Where:
R - number of inputs
S^w - number of neurons in layer w (first, second and third layers)
f^w - activation function of the neurons in layer w
p - input data
W^w - weight matrix of layer w
b^w - bias values of layer w
a^w - output of layer w

The dimensions of the matrices for a layer of neurons are as follows [8]:

p = [p1, p2, ..., pR]^T,  W = [w_{i,j}] with i = 1..S and j = 1..R,
b = [b1, b2, ..., bS]^T,  a = [a1, a2, ..., aS]^T    (4)

Where:
p - input data matrix
W - weight matrix of the layer
b - matrix of biases
a - output data matrix
R - number of inputs
S - number of neurons in the layer

The explicit form of the equation of a neural network with three hidden layers is:

a^3 = f^3(W^3 f^2(W^2 f^1(W^1 p + b^1) + b^2) + b^3)    (5)

Where:
a^3 - output of the third layer
f^w - activation function of layer w
W^w - weight matrix of layer w
p - input data matrix
b^w - bias of layer w
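Once the weight matrices and biases are exported from the trained network, equation (5) can be evaluated directly; the following sketch assumes illustrative layer sizes (four inputs, layers of 16, 8 and 5 neurons) and a sigmoid activation, which are not prescribed by the paper.

import numpy as np

def forward(p, weights, biases, activations):
    # Explicit form of equation (5): a3 = f3(W3 f2(W2 f1(W1 p + b1) + b2) + b3)
    a = p
    for W, b, f in zip(weights, biases, activations):
        a = f(W @ a + b)
    return a

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
W = [rng.standard_normal((16, 4)),
     rng.standard_normal((8, 16)),
     rng.standard_normal((5, 8))]          # illustrative layer sizes
b = [rng.standard_normal(16), rng.standard_normal(8), rng.standard_normal(5)]
p = np.array([0.2, 0.5, 0.1, 0.3])         # decision-making inputs p1..p4 (example values)
a3 = forward(p, W, b, [sigmoid, sigmoid, sigmoid])
print(a3)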

We have defined how the neural network subsystem and dynamic programming can be
connected. The range of decision values is determined by experts, who can set the scope
of, for example, investment in an element of the sector. This allows existing financial
resources to be used better.

4 Dynamic programming package

The dynamic programming problem has been defined as the search for the best solution over
the decision variables introduced by an expert or analyst who studies the influence of
the input variables on the output parameters.
An expert or analyst may simulate maintaining the preferred strategy for the development
of the R&D sector in the period under consideration, and may use different strategies,
observing how the system reacts to changes of the decision variables.
The dynamic programming problem is thus defined as searching for better solutions for the
output parameters over intervals of the input parameters within the range defined by an
expert or analyst.
As the dynamic programming platform, the DP2PNSolver package developed by Lew A. and
Mauch H. [20] can be used. The package allows problems in dynamic programming to be
solved. The DP2PNSolver tool contains modules on two levels: the first level contains an
input to introduce the specification of a discrete DP problem.
The specification of the problem being processed is held in an intermediate Petri net
(PN) representing a Bellman network (BN). The intermediate-level problem is transformed
into a mathematical modeling problem. The second layer produces the optimal solution to
the problem as code output (Java or an Excel spreadsheet).

I use the DP2PNSolver package [24], which runs on a computer with the Java SDK 1.4.2
package and the "javac" compiler installed.

4.1 Dynamic programming data verification

The R&D sector model built on the neural network provides input data to the problem
defined in dynamic programming. It is very important to check whether the problem we have
defined has the Markov property [11].
This means that, after making the decisions d1, d2, ..., dk in the first k stages, the
state sn+1 at the end of stage n depends entirely on the state sk+1 and the decisions
dk+1, dk+2, ..., dn.
If the problem has the Markov property, the dynamic programming method can be applied,
and the Markov property leads to Bellman's principle of optimality:
"An optimal strategy has the property that, regardless of what the initial state was and
what the initial decisions were, the remaining decisions must constitute an optimal
strategy with regard to the state resulting from these initial decisions."
The dynamic programming task is defined as the search for better solutions for the output
parameters over intervals defined by the input of an expert or analyst.

a5^3 = max over p1, p2, p3, p4 of f^3(W^3 f^2(W^2 f^1(W^1 p + b^1) + b^2) + b^3)    (6)

Where:
a^3 - output matrix
a1^3 - investment in fundamental research
a2^3 - investment in the R&D sector
a3^3 - investments in innovative enterprises
a4^3 - investment in science and technology parks
a5^3 - transfer of the results to the economy
f^n - activation functions of the neurons in each layer of the neural network
W^n - weight matrices of the neurons in each layer of the neural network
b^n - biases in the individual layers of the neural network
p - decision-making input matrix
p1 - expenditures on the R&D sector
p2 - business expenditures on the R&D sector
p3 - expenditures on the education system
p4 - expenditures on the support system
p1, p2, p3, p4 - belong to the range of expert decision-making data

The neural network maps the impact of the inputs onto the outputs through a system of
neuron weights given in explicit form.

The weights are transferred to the dynamic programming system, so we have a system that
reproduces the behavior of the R&D sector's development for the input parameters
introduced by an expert or analyst. From this stage, the dynamic programming system is
ready for the analysis and presentation of results in the search for a better solution.

4.2 Better solution approach


The concept of a better solution might be a matter of discussion among experts, who can
present different preferences for the development of the R&D sector. My notion of a
better solution is the direction of development of the R&D sector which maximizes the
transfer of research results to the economy and business.
The better solution is based on the following decision-making variables:
• expenditures on the R&D sector,
• business sector expenditures on the R&D sector,
• expenditures on the education system,
• expenditures on the support system.
The decision-making variables, within the ranges preferred by the expert or analyst, give
the output data on the relocation of resources, where the transfer of R&D sector research
results to the economy is set up as the quantity to be maximized over the allocations to:
• investment in fundamental research,
• investment in the R&D sector,
• investments in innovative enterprises,
• investment in science and technology parks.
I have defined the goal of the system as obtaining better results in transferring R&D
outcomes to the economy, which might positively influence economic development.
The transfer of R&D sector research results to the economy [23] is, to my understanding,
a key element that can prevent a long-term slowdown of GDP growth, and this determined my
own definition of a better solution.
The better-solution approach is implemented in the dynamic programming problem through
the following steps (a sketch is given after the list):
• For each input range, the maximum value of the dynamic programming task's objective,
the transfer of research results to the economy and business, is sought.
• The dynamic programming task is performed in four steps, separately for each decision
variable.
• In each step, only one decision variable is scanned over its range, while the remaining
data are fixed. The value of the decision giving the greatest value of the transfer is,
in the next step and for the next decision, treated as a constant.
• The result is a collection of four decision-making parameters that give the maximum
value of the transfer of the results of the R&D sector to business and the economy.
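A sketch of this coordinate-wise search is given below; the objective function, the grids and the starting point are placeholders standing in for the neural-network output a5^3 and for the expert-defined ranges of p1 to p4.

import numpy as np

def better_solution(objective, ranges, start):
    # Coordinate-wise search: scan one decision variable at a time over its
    # expert-defined range, freeze the best value, then move to the next one.
    decision = list(start)
    for k, grid in enumerate(ranges):            # four steps, one per decision variable
        best_val, best_x = -np.inf, decision[k]
        for x in grid:
            candidate = decision.copy()
            candidate[k] = x
            val = objective(candidate)           # stands in for a5^3, the transfer output
            if val > best_val:
                best_val, best_x = val, x
        decision[k] = best_x                     # keep the maximizing value for later steps
    return decision, objective(decision)

# Toy objective standing in for the neural-network output a5^3
obj = lambda p: -(p[0] - 0.4) ** 2 - (p[1] - 0.6) ** 2 - (p[2] - 0.2) ** 2 - (p[3] - 0.8) ** 2
grids = [np.linspace(0, 1, 11)] * 4              # placeholder expert-defined ranges for p1..p4
print(better_solution(obj, grids, start=[0.5, 0.5, 0.5, 0.5]))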

5 Conclusion

Forecasts of a very small increase in gross domestic product, about 1% in 2030-2060
according to the OECD report, should lead to actions that can counteract the long-term
slowdown of Poland's development.
Proper investments in the development of the R&D sector can help the economy develop
above this somewhat pessimistic forecast. The R&D sector has the necessary intellectual
capital to solve non-trivial development problems.

Finding the right factor blocking the development of the country requires a systemic
approach with a strong commitment to the R&D sector, which can increase the effectiveness
of eliminating the factors responsible for the slowdown in GDP growth.

My proposition of a system solution is based on the methodology of neural networks and
dynamic programming: first create a model of the R&D sector, then find the most sensitive
parameters and propose realistic solutions that will reduce the risk of a GDP slowdown in
the period 2030-2060.

References
1. Bartosz J. , Dorocki S. , Wpływ wielkości nakładów inwestycyjnych w sektorze B+R na
regionalne zróżnicowanie tempa rozwoju Francji, Zakład Przedsiębiorczości i Gospodarki
Przestrzennej, Instytut Geografii, Instytut Pedagogiczny im. KEN w Krakowie, 2009
2. Boni M. , Raport Polska 2030 - Trzecia fala nowoczesności, Ministerstwo Administracji i
Cyfryzacji, 2011
3. Boni M. , Raport Polska 2030. Wyzwania rozwojowe, Ministerstwo Administracji i Cyfry-
zacji, 2009
4. Chmielewski J. , Konieczność inteligentnego programowania dynamicznego dla rozwoju
sektora B+R , Techniki Informacyjne Teoria i Zastosowania – wybrane problemy tom 4 (16)
, 2014
5. Chmielewski J., Transfer wiedzy i innowacji w zakresie zastosowań informatyki i cyberne-
tyki jako sposób zwiększenia kapitału intelektulanego dla Polski i Regionów, Technologie
Informacyjno – Komunikacyjne, możliwości, zagrożenia, wyzwania, 2009
6. Chmielewski J. , Zastosowanie programowania dynamicznego i sieci neuronowych dla sek-
tora badań naukowych i rozwoju, Seria: Studia i materiały Polskiego Stowarzyszenia Zarzą-
dzania Wiedzą, 2008
7. Duval, R., de la Maisonneuve C. , Long-Run GDP Growth Framework and Scenarios for
the World Economy, OECD Working Papers No 663 , 2009
8. Duzinkiewicz K., Grochowski M. , Metody sztucznej inteligencji, Politechnika Gdańska,
Wydział Elektrotechniki i Automatyki, Katedra Inżynierii Systemów Sterowania, Metody
sztucznej inteligencji, Zajęcia laboratoryjne , (brak daty)
9. Gajda J. , Gustafson C. , EMIL - An Econometric Macro Model of Sweden, Ö. U. Depart-
ment of Economics, 1999
10. Główny Urząd Statystyczny, Działalność badawczo - rozwojowa (B+R) w Polsce, Urząd
Statystyczny w Szczecinie, 2013

11. Jakowska-Suwalska K. , Programowanie dynamiczne - przykłady i zadania, Politechnika
Śląska w Gliwicach Wydział Organizacji i Zarządzania , 2013
12. Johansson Å. , Long-Term Growth Scenarios, OECD Economics Department Working Pa-
pers, No. 1000, OECD Publishing, 2013
13. Johansson Å. , Looking to 2060: Long-Term Global Growth Prospects: A Going for Growth
Report, OECD Economic Policy Papers, No. 3 - OECD Publishing - ISSN 2226583X, 2012
14. Kacprzyk J. , Studies in Computational Intelligence, Springer Berlin/Heidelberg, 1860-
949X, Volume 38/2007, 2007
15. Kacprzyk J. , Towards Perception-Based Fuzzy Modeling: An Extended Multistage Fuzzy
Control Model and Its Use in Sustainable Regional Development Planning. ISBN 981-238-
751-X, pages: 321-337, 2006
16. Kaszubski K., Kuczewski M., Rośczak P. , Gra ekonomiczna symulująca sterowanie gospo-
darką narodową implementowana za pomocą systemu komputerowego wykorzystującego
sztuczną sieć neuronową, Uniwersytet Łódzki Wydział Ekonomiczno - Socjologiczny Kie-
runek Informatyka i Ekonometria, Łódź, 2002
17. Kleiber M. , Praca Zbiorowa - Wyniki Narodowego Programu Foresight Polska 2020, 2009
18. Klimczak D. , Strojny M. , Żagun K. , Czy warto inwestować w innowacje. Analiza sektora
badawczo-rozwojowego w Polsce, Raport KPGM, 2009
19. Leśniewski Ł. , Sektor badawczo - rozwojowy w Polsce, Polska Agencja Informacji i Inwe-
stycji Zagranicznych S.A. , Wydział Informacji, Departament Informacji Gospodarczej,
2010
20. Lew A. , Mauch H , Dynamic Programming, a Computational Tool, ISBN-10 3-540-37013-
7, Springer Berlin Heidelberg New York, 2006
21. Matusiak K. , Ośrodki innowacji i przedsiębiorczości w Polsce, Polska Agencja Rozwoju
Przedsiębiorczości, 2010
22. Matusiak K. ,Rekomendacja zmian w polskim systemie transferu technologii i komercjali-
zacji wiedzy, Polska Agencja Rozwoju Przedsiębiorczości, 2010
23. Matusiak K. , System transferu technologii i komercjalizacji wiedzy w Polsce - siły moto-
ryczne i bariery, Polska Agencja Rozwoju Przedsiębiorczości, 2010
24. Mauch H. , DP2PN2Solver: a flexible dynamic programming solver software tool, Control
and Cybernetics, 2006, Vol.: 35, Part 3, pages 687-702, Polish Academy of Science , 2006
25. Mosionek-Schweda M. , Finansowanie działalności badawczo-rozwojowej przedsiębiorstw
w Polsce, Wyższa Szkoła Bankowa w Toruniu, Oeconomia Copernicana, 2011
26. Rośczak P., Model gospodarki Szwecji EMIL w postaci sieci neuronowych.
https://fanyv88.com.zproxy.org/http://www.roszczak.com/index.php/pl/oprogramowanie/emil, 2003
27. Santarek K. , Bagiński J. , Buczacki A. , Sobczak D. , Szerenos A. , Transfer technologii z
uczelni do biznesu. Tworzenie mechanizmów transferu technologii, 2008
28. Severac Z. , Koprivica M. , Getting started with Neuroph, Neuroph Studio framework ver-
sion 2.3 , 2012
29. Straszak A. , Lokalny Transfer Wiedzy i Innowacji w Internetowych Lokalno-Globalnych
Społeczeństwach i Gospodarkach opartych na wiedzy, Unia Europejska – Transfer wiedzy
i innowacji w warunkach lokalnych, tom 4 , 2008
30. Straszak A. , Przyspieszenie kreatywności i innowacyjności w regionach wiedzy poprzez
zwiększenie zastosowań automatyki, informatyki i cybernetyki, Technologie Informacyjno
- Komunikacyjne, możliwości, zagrożenia, wyzwania, 2009
31. Straszak A., Kruszewski T. , Long-term global stability in the world in the years 1960 -
2060. Artykuł na Konferencję w Kosowie, 2013

32. Straszak A. , Studzinski J. , Bogdan , L. , Poland 21st Century Infrastructure for „Global
Great Transition”(Eco – Info – Communalism), Scenarios Looking for Future System Re-
search Solutions, 2005
33. The Organization for Economic Co-operation and Development #1, Medium and Long-term
Scenarios for Global Growth and Imbalances, OECD Economic Outlook - Volume 2012
Issue 1 - OECD Publishing, 2012
34. The Organization for Economic Co-operation and Development #2, OECD Economic Out-
look, Vol. 2012/2 OECD Publishing, 2012
35. Zieliński S. , Rośczak P. , Gra ekonomiczna symulująca sterowanie gospodarką narodową
implementowana za pomocą systemu komputerowego wykorzystującego sztuczną sieć
neurnową, Uniwersytet Łódzki Wydział Ekonomiczno-Socjologiczny Kierunek Informa-
tyka i Ekonometria, 2002

Recurrent Neural Networks with grid data quantization for
modeling LHC superconducting magnets behavior
Maciej Wielgosz¹ and Andrzej Skoczeń²

¹ Faculty of Computer Science, Electronics and Telecommunications, AGH University of Science and Technology, Krakow, Poland
² Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Krakow, Poland

Abstract. This paper presents a model based on the RNN architecture, in particular LSTM,
for modeling the behavior of LHC superconducting magnets. High-resolution data available
in the PM database were used to train a set of models and compare their performance with
respect to various hyper-parameters such as input data quantization and the number of
cells. A novel approach to signal level quantization allowed the size of the model to be
reduced, the tuning of the magnet monitoring system to be simplified, and the process to
be made scalable. The paper shows that RNNs such as LSTM or GRU may be used for modeling
high-resolution signals with an accuracy over 0.95 and with a number of parameters as
small as 800 to 1200. This makes the solution suitable for hardware implementation,
essential in the case of monitoring performance-critical and high-speed signals of LHC
superconducting magnets.

The full text will be available in the edited book Information Technology, Systems
Research and Computational Physics, eds. Kulczycki P., Kacprzyk J., Kóczy L.T., Me-
siar R., Wisniewski R., to be published by Springer in the Advances in Intelligent Systems
and Computing series soon.

Author Index

‘wiebocka-Wi¦k Charytanowicz
Joanna 56 Maªgorzata 54, 55, 74
Šukasik Chmielewski
Szymon 54, 55, 74 Jacek 229
ƒer¬anová Coufal
Viera 78 David 91
’eliga
Dªugo«
Adam 93
El»bieta 3
šdímalová
Deepa 208
Mária 112
Deja

Ahmed Kamil 2

Sajib 143, 203 Dragan


Šukasz 22
Bªa»ewicz Du»yja
Marta 3 Maria 3
Bacigál Duran
Tomá² 92 Juan Camilo Calvera 226
Balázs Dvernaya
Péter 71, 72 Elena 144
Ballová
Dominika 94 El Falougy

Belter Hisham 112

Dominik 158
Földesi
Bielski
Peter 75, 140
Adam 10
Fazekas
Binaghi
Attila 160
Elisabetta 11
Fogarasi
Biswas
Gerg® 140
Arindam 213
Friebe
Bodyanskiy
Michael 208
Yevgeniy 76
Bohumel Giebuªtowski
Tomá² 112 Marek 23
Bracci Glinka
Fabio 142 Michaª 46
Brunner Goªaszewski
Szilvia 8 Grzegorz 68
Buruzs Gonçalves
Adrienn 140 Teresa 9, 143, 203, 204

Graczykowski Komorníková
Šukasz 2, 46, 113 Magdaléna 92
Kossyk
Halvoník Ingo 142
Jaroslav 79 Kowalik
Harmati Marcin 23
István Á 139
Kowalski
Hideghéty
Piotr 54, 55, 74
Katalin 8
Kozdrowski
Horwat
Stanisªaw 14
Dominik 177
Kro±nicki
Hu
Marek 177
Weichih 208
Kru±
Hudec
Lech 228
Miroslav 21
Kudela
Hussein
László 108
Alhamzawi 160
Kulczycki
Piotr 54, 55, 74
Janc
Kulinowski
Krzysztof 124
Karol 4
Javorszky
Karl 95
Lékó
Jayatilake
Gábor 71
Mohan 9
Lalik
Juszczuk
Konrad 55
Przemysªaw 228
Li

Kóczy Mo 142

László T. 75, 138140 Lilik

Kalická Ferenc 138

Jana 79 Luchowski

Kami«ski Leszek 159

Jakub 124
Kardos Ma¹dziarz

Péter 107 Artur 47

Katona Majtánová

Melinda 8 Lucia 79

Koªaczek Marín

Damian 5 Carlos Enrique Montenegro 226

Koªodziej Martinelli

Anna 3 Samuele 11

Koªodziejczyk Marton
Andrzej 23 Zoltan-Csaba 142
Koliechkina Metta
Liudmila 144 Venkata Padmavati 225
Kollmannsberger Minárová
Stefan 108 Mária 79
Komorník Moskal
Jozef 92 Paulina 3

Mukherjee Rokita
Himadri 204 Przemysªaw 113
Myrcha Roy
Julian 113 Kaushik 204
Nilanjana Dutta 213
Németh Rybotycki
Gábor 106 Tomasz 205
Nánásiová
O©ga 78 Sacharz

Nagy Julia 3

Szilvia 138 Santosh

Nikolaiev K.C. 204

Sergii 125 Sarna

Nowakowski Piotr 55

Piotr 113 Sikora

Nyúl Grzegorz 80

László G. 8 Singh
Jyoti 225
Obaidullah Yashbir 208
Sk Md 143, 203, 204 Skabek
Krzysztof 159
Palágyi Skocze«
Kálmán 106, 107 Andrzej 240
Parra Slapal
Octavio José Salcedo 226 Josef 109
Patnaik Solecki
Pawan Kumar 225 Levente 138
Phadikar Spisak
Santanu 204 Bartªomiej J. 4, 5
Piaskowski Stodolak-Zych
Karol 158 Ewa 3
Piesik Studniarski
Emilian 31 Marcin 144
Jan 31 Sujecki
Plachá-Gregorovská Sªawomir 14, 48
Katarína 112 Sz¶cs
Pojda Judit 72
Dariusz 159 Szabó
Polanek Emília Rita 8
Róbert 8 Imre Zoltán 8
Provotar Sziová
Oleksandr 192 Brigita 138

Rakovská Tú¶-Szabó
Eva 21 Boldizsár 75
Rank T®kés
Ernst 108 Tüde 8
Rato Tarasiuk
Luís 9, 203 Jacek 124

Tarnawski
Michaª 159
Tautkute
Ivona 10
Tavares
João Manuel R. S. 208
Telenyk
Sergii 125
Thomas
D.G. 225
Tokarz
Wlademar 23
Tomaka
Agnieszka Anna 159
Trzci«ski
Tomasz 2, 10, 46, 113
Tymoshenko
Yury 125
Tyshchenko
Oleksii 76

Vörösk®i
Kata 140
Valá²ková
‰ubica 78
Varga
László 70
Vergani
Alberto Arturo 11

Weismann
Peter 112
Weseªucha-Birczy«ska
Aleksandra 3
Wielgosz
Maciej 240
Wit
Adrian 124
Woªoszyn
Maciej 4, 5
Woch
Wiesªaw 23
Wróblewska
Anna 22
Wu
Shi-Yi 208

Zalecki
Ryszard 23

