Structural-Damage Detection With Big Data Using Parallel Computing

Int. J. Mach. Learn. & Cyber.
DOI 10.1007/s13042-015-0453-3

ORIGINAL ARTICLE

Chi Tran ([email protected])
The Faculty of Geodesy, Geospatial and Civil Engineering, WM University in Olsztyn, Olsztyn, Poland

Abstract The lives of many people and vital social activities depend on the good functioning of civil structures such as nuclear power plants, large bridges, pipelines, and others, especially during and after high winds, earthquakes, or environmental changes. We can recognize structural damage in existing systems through signals of stiffness reductions in their components and changes in their observed displacements under certain load or environmental conditions. This can be formulated as an inverse problem and solved by a learning approach using neural networks (NN), where the network functions as an associative memory device capable of satisfactory diagnostics even in the presence of noisy or incomplete measurements. The idea is not new; however, obtaining, in real time, an advance warning of the onset of durability/structural problems at a stage when preventive action is possible has created an exciting new field within civil engineering. It refers to the broad concept of assessing the ongoing in-service performance of structures using a variety of measurement techniques, e.g., the global positioning system with real-time kinematics (GPS-RTK), wireless sensor networks (WSN), and others. This leads, in this case, to so-called big data: a term applied to data sets whose size is beyond the ability of commonly used computer software and hardware to undertake their acquisition, access, and analytics in a reasonable amount of time. To solve this problem, one can use a Hadoop-like batch processing system built on the distributed parallel processing paradigm called MapReduce; such a system handles the volume and variety aspects of big data. MapReduce is a programming model of Google and an associated implementation for processing and generating large data sets using a group of machines. Although the continuous data (big data) originating from GPS-RTK and WSN is not studied in this paper, and only one selected data set is used to determine the technical state of a structure in real time, a big-data approach is used here to solve the following problems. Firstly, the small amount of data available for neural network training may lead to strange, impractical solutions beyond the permissible area. To overcome this, the small available data set is supplemented by numerical data generated from the integration of Monte Carlo simulation and the finite element method (the MCSFEM model). Moreover, the learning approach using neural networks gives a guaranteed effect only if the network has, among the various required conditions, a complete set of training data and a good (optimal) architecture. The integration of Monte-Carlo simulation with NN (the MCSNN model) is selected and formulated to find the optimal (global) architecture of the NN. Both the MCSFEM and MCSNN models require expensive computing, which leads to the necessity of parallel computing based on the multiprocessor system-on-chip (MPSoC) architecture. Secondly, the use of the MPSoC architecture leads to the emergence of multiple NNs, named a net of neural networks (NoNN), rather than a single NN. The simple trial-and-error approach for a single neural network is insufficient to determine an optimal NN architecture from the NoNN. To deal with this problem we develop, in this paper, a new distributed parallel processing scheme using the Master-Slave structure in the MapReduce framework, rather than Google's Hadoop-like batch processing system. Our method is based on single instruction, multiple data (SIMD) technology using a computer with multiprocessor systems on a chip (or a computer with multiple cores, CPUs), without necessarily using Google's software and a group of machines. It is used for both the MCSFEM model and the MCSNN model in order to generate a virtual data set and to find the optimal NN architecture from the NoNN. It enables us to create many numerical data quickly: eighty thousand architectures can be created (using a computer with eight CPUs) instead of eight thousand (using a computer with one CPU) in the same computation time. The effect obtained is presented in a numerical example of real-time local damage detection in plane steel truss structures, so that one can react quickly when problems appear, or predict new trends in the near future.

Keywords Damage detection · Big data · Finite element method · Monte-Carlo simulation · Artificial neural network · Parallel computing · Single instruction multiple data · Multiprocessor systems on chips

1 Introduction

As we know, a change in any one of the mechanical and geometrical properties of a building structure leads to a change in its response. Many applications of artificial neural networks (NN) have been developed, as in [3, 5, 13, 16] for a PSO-based neural network, in [4] for adaptive-network-based fuzzy inference system models, and others, and applied to a variety of structures. Monitoring of building structures refers to comparing actual structural responses with values derived from the design process, in order to obtain an early warning of threats at a stage when preventive action is possible. Usually, it consists of observing the system over a period of time using periodically spaced measurements, extracting features from these measurements, and analyzing these features to determine the current state of health (or damage) of the system. A supervised learning mode using NN can be applied to structural local damage detection; this is no longer a new idea in the field of structural health monitoring.

On the one hand, incomplete or insufficient data (a small available data set) may lead, in the NN training process, to solutions beyond the permissible area. Then we must, in addition to the small available data set, generate so-called virtual/numerical data belonging to the desired range; in this paper, this is done by integrating Monte Carlo simulation and the finite element method (the MCSFEM model). Here, we combine the rich mathematical possibilities of Monte Carlo simulation with the high accuracy of the mechanical aspects of the finite element method, in order to ensure uniform structural properties of the numerical data and their relationship to the characteristics of the studied structures.

On the other hand, the NN effect is guaranteed only if the network has, among the various required conditions, an optimal (good enough) architecture. To solve this optimization issue we can usually use the gradient method; however, it gives only a locally optimal solution, i.e., the optimal NN architecture is not guaranteed. We can use, in this case, two global approaches: integration of the multiple-neural-networks model with either a genetic algorithm or Monte-Carlo simulation (global optimization methods). The use of a genetic algorithm, according to the studies of many authors, is not suitable for NN computing. Therefore, the integration of Monte-Carlo simulation with NN (the MCSNN model), used to find the optimal (global) architecture of the NN, is selected and formulated.

Also, obtaining an advance warning, in real time, of the onset of structural problems at a stage when preventive action is possible has created an exciting new field of civil engineering. We need new tools, such as the global positioning system with real-time kinematic technology (GPS-RTK, see [5]) or wireless sensor networks (WSN). These provide continuous observation, giving us information about the displacements of structural elements in real time. The information originating from such tools can be a collection of data sets in a "large size" framework, referred to as "big data" (see [7, 14]). Big data, according to Tien [10], is a data set that is large, fast, dispersed, unstructured, and beyond the ability of available hardware and software facilities to undertake its acquisition in a reasonable amount of time and space.

Hence, both the MCSFEM model and the MCSNN model require expensive computing, due to mathematical models with the following characteristics: big data, real time, multitasking, and a lot of parallelism. The performance demanded by this computing leads to the necessity of multiprocessor system architectures on a single chip (MPSoC), endowed with complex communication infrastructures such as networks on chip (NoC). We need, in this case, to integrate hardware and software (hardware-dependent software) rather than the sum of traditional hardware and hardware-independent software; otherwise, it is worthless from a decision-making perspective in a range of areas, including business, science, engineering, defense, education, and others. The MPSoC computer architecture (see Popovici et al. [9]), in which embedded applications are changing from single-processor-based systems to intensive data communication and parallel programming techniques, with efficient automation of mapping parallel software onto parallel hardware, enables us to solve our mathematical model. However, the use of the MPSoC architecture leads to the
emergence of multiple NNs, named a net of neural networks (NoNN), rather than a single NN. The simple trial-and-error approach for a single neural network is insufficient to determine an optimal NN architecture from the NoNN. One can use a Hadoop-like batch processing system with a distributed parallel processing paradigm called MapReduce, which is a programming model of Google and an associated implementation for processing and generating large data sets (big data, or discretized stream data [15]) using a group of machines. A new approach presented in the Hadoop-like batch processing system (see [17]) has evolved and matured over the past few years into an excellent offline data processing platform for big data, which, according to [2], can crunch a huge volume of data using the distributed parallel processing paradigm of Google's MapReduce. Here, application developers specify the computation in terms of a map and a reduce function, and the underlying MapReduce job scheduling system automatically parallelizes the computation across a cluster of machines. It provides a standardized framework for implementing large-scale distributed computation. However, it does not have a mechanism to identify the parallelism of NN computations integrated with Monte Carlo simulation (MCS) and the finite element method (FEM).

To solve this problem, we develop a new distributed parallel processing model based on single instruction, multiple data (SIMD) technology using the Master-Slave structure. The SIMD technology enables us to split big data into n data sets, processed at the same time on q processors (CPUs), rather than processed on a single CPU with single instruction, single data (SISD) technology. Results will be obtained from a single laptop with multiple cores (CPUs), through a combination of multiprocessing systems and parallel programming techniques that efficiently automate the mapping of parallel software onto parallel hardware, without Google's software and a cluster of machines. The computing time using the SIMD technology will be about n times shorter compared with the SISD technology for the same problem. Multiprocessing systems (MPS), according to [1], help us to solve very large problems such as weather modeling, designing systems that satisfy limited budgets, and improving programmer productivity. They are used, in this paper, for both the MCSFEM model and the MCSNN model, in order to generate a virtual data set and to find the optimal NN architecture from the NoNN. This enables us to create many numerical data quickly: eighty thousand architectures can be created (using a computer with eight CPUs) instead of only eight thousand (using a computer with one CPU) in the same computation time.

2 Structural damage detection of truss structures

Generally, from the static displacements U and the prescribed load F, the static equilibrium of structures in the FEM framework is represented in the form:

KU = F    (1)

K denotes the stiffness matrix, K = f(E, S, L, θ), where each structural element has modulus of elasticity E, cross-sectional area S, and length L, and is inclined at an angle θ measured counterclockwise from the positive global X axis. To find the value U from the available information on K and F, we can use the direct problem, presented as a feedforward approach for structural damage detection. It is demonstrated by the direct mapping W as:

W : S ∈ R^m → U ∈ R^n    (2)

where n expresses the number of nodes and m denotes the number of structural members. With constant structural geometry, one approach to introducing damage is through variations in the cross-sectional areas S of the structural members. The structure is described as a set of finite elements categorized into undamaged and damaged states at different degradation levels. The structural damage is defined as the maximum stiffness reduction at one or more local elements, or the loss of stiffness of the structure through a reduction in the cross-sectional area of any element used in computing the entries k_ij of the stiffness matrix K. A disturbance process using Monte-Carlo simulation is carried out through variations in the cross-sectional areas S_i, i = 1, 2, …, m, which is presented in the form:

S_i = S_0 − 0.5 S_0 f    (3)

where S_0 denotes the initial cross-sectional area of all elements of the studied structure, and f denotes uniform random numbers distributed in the interval [0, 1]. The relation between the static displacement values U_j, j = 2, 3, …, n, and the different values of the cross-sectional area S_i, i = 1, 2, …, m, of each structural element (or the stress in each element of the structure, σ_i, i = 1, 2, …, m) is presented in the form:

U_j ⇔ S_i or U_j ⇔ σ_i; i = 1:m, j = 1:n    (4)

Assume that a number of experimentally observed displacements, U_obs, are available for different conditions of structural damage. The damage-detection problem is formulated as follows: for a given vector U_obs, find the vector S* such that KU_obs = F, where S* denotes a threat element of the structure, which is determined by:
S* = S_i : max_{i = 1:m} |S_0i − S_i|    (5)

This is mathematically equivalent to obtaining a mapping Φ that represents the feedback problem between the known response of a structure and its physical state:

Φ : U ∈ R^n → S ∈ R^m    (6)

Finally, the process of early damage detection is presented in Fig. 1a.

2.1 Big data (BD)

Data originating from the global positioning system with real-time kinematics (GPS-RTK) and from wireless sensor networks (WSN) leads to so-called big data. Big data, according to Wikipedia, "is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools". The dimensions that have become common for characterizing big data, and which together with volume, according to Gottfried Vossen [12], are called the "4 Vs of big data", are the velocity, or the speed with which data is produced and needs to be consumed; the variety the data can have; and the veracity the data comes with. Velocity refers to the fact that data often comes in the form of streams, which do not give the consumer a chance to store them for whatever purpose but require acting on the data instantly. Variety means that data can come in different forms, such as unstructured (e.g., text), semi-structured (e.g., XML documents), or structured (e.g., a table). Veracity refers to the fact that the data may or may not be trustworthy or uncertain. These characteristic properties of big data are summarized in Fig. 1b.

Artificial neural network (NN) According to the demands, feedforward and feedback, of our structural damage detection problem, we use a layered feedforward network with nonlinear differentiable transfer functions, tan-sigmoid and log-sigmoid, which enables us to use backpropagation training on a representative set of input/target pairs rather than training the network on all possible input/target pairs. Generally, an NN consists of interconnected processing elements, called nodes or neurons, that work together to produce an output function. The two-layer network is shown in Figs. 2 and 3, where q denotes the number of inputs; m, the number of neurons of the first layer; h, the number of neurons of the second layer; k, the number of outputs; and a_i, i = 1, 2, …, k, the outputs of the network. Each input is weighted with an appropriate weight w, and the neurons use the tan-sigmoid/log-sigmoid transfer functions to generate their output.

A network can have several layers of neurons, and in all cases it is not obvious how many neural layers, and how many neurons in each, to include in the network. Two neural layers provide the flexibility necessary to model complex-shaped solution surfaces and are recommended as a general function approximator: such a network can approximate any function with a finite number of discontinuities arbitrarily well, given sufficient neurons in the hidden (first) layer. The two-layer network used in this work is shown in the abbreviated form (Fig. 3). The network I/O mapping, p ↔ a, is fit to a given data set of q exemplar input vectors {p(q)} and their associated training output vectors {a(k)} by adjusting the weight vectors W and L of coefficients so that a(k) = t(k) (approximation of the targets t).
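As an illustration, the forward pass of the two-layer feedforward network described above can be sketched in Python instead of the MATLAB toolbox. This is a minimal sketch only: the weights are random placeholders, and the sizes q = 9, m = 46, k = 9 are borrowed from the paper's numerical example purely for demonstration.

```python
import numpy as np

def tansig(x):
    """Tan-sigmoid transfer function of the first layer (MATLAB's tansig)."""
    return np.tanh(x)

def logsig(x):
    """Log-sigmoid transfer function of the second layer (MATLAB's logsig)."""
    return 1.0 / (1.0 + np.exp(-x))

def two_layer_net(p, W1, b1, W2, b2):
    """Map a q-dimensional input p to k outputs through m hidden neurons."""
    a1 = tansig(W1 @ p + b1)       # first-layer outputs
    return logsig(W2 @ a1 + b2)    # network outputs a_1, ..., a_k

# Illustrative sizes and random placeholder weights (assumptions).
rng = np.random.default_rng(0)
q, m, k = 9, 46, 9
W1, b1 = rng.normal(size=(m, q)), np.zeros(m)
W2, b2 = rng.normal(size=(k, m)), np.zeros(k)
a = two_layer_net(rng.normal(size=q), W1, b1, W2, b2)
```

In training, W1, b1, W2, b2 would be adjusted by backpropagation so that the outputs a approximate the targets t; here the log-sigmoid output layer simply guarantees outputs in (0, 1).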
Neuron number of each layer Generally, there is no direct and precise way of determining the most appropriate number of neurons to include in each neural layer, and this problem becomes more complicated as the number of hidden layers in the network increases. A range of different configurations of neurons is usually considered, and the one with the best performance is accepted. Failures in the use

Fig. 1 a Early structural damage detection. b The defining of Big Data

Fig. 2 Two-layer network (inputs p_1, …, p_q; weight matrices W and V; layer sizes q, m, h; outputs a_1, …, a_k)
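The configuration scan described above (try a range of neuron counts, keep the best) can be sketched as follows. This is not the paper's program: as a cheap stand-in for a full backpropagation run, each candidate uses a fixed random tanh hidden layer with a least-squares output layer, and the target function is invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)[:, None]
t = np.sin(2.0 * np.pi * x).ravel()        # toy target function (assumption)

def train_and_score(m):
    """Score a 1-m-1 network: fixed random tanh hidden layer plus a
    least-squares output layer; returns the training MSE."""
    W1 = rng.normal(size=(1, m))
    b1 = rng.normal(size=m)
    H = np.tanh(x @ W1 + b1)               # hidden-layer outputs, shape (40, m)
    w2, *_ = np.linalg.lstsq(H, t, rcond=None)
    return float(np.mean((H @ w2 - t) ** 2))

candidates = [2, 5, 10, 20, 40]            # range of configurations considered
mse = {m: train_and_score(m) for m in candidates}
best = min(mse, key=mse.get)               # configuration with best performance
```

The selection rule is exactly the one stated in the text: every candidate architecture is scored, and the one with the smallest error is accepted.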
123
Int. J. Mach. Learn. & Cyber.
123
Int. J. Mach. Learn. & Cyber.
• We have, for example, the available data (training data):
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [1.22, 3.16, 4.42, 2.28, 0.18, 2.01, 7.9, 8.3, 10.1].
• Any measurable function from x(9) to y(9) can be approximated as closely as desired, in the range x ∈ [1, 9], through regression by p = polyfit(x,y,7); f = polyval(p,x), derived from the MATLAB program. We obtain the approximated function:

y = f(x) = 0.0030x^7 − 0.0947x^6 + 1.1666x^5 − 7.2207x^4 + 24.0056x^3 − 43.3274x^2 + 41.6380x − 14.9533    (8)

Graphical results describing the x-y relation according to the available data ('o') and the regression results ('*') are shown in Fig. 4a. Using this approximated function (Eq. 8) for the required data:

x1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
y1 = [1.22, 3.16, 4.42, 2.28, 0.18, 2.01, 7.9, 8.3, 10.1, 13.0, 15.0, 9.0, 8.0, 11.0, 35.0],

we get the graphical results shown in Fig. 4b. The fit is good for this data on the interval x ∈ [1, 9]. Beyond this interval, the graph shows that the polynomial behavior takes over and the approximation quickly deteriorates (on the interval outside the available data, x ∈ (9, 15]). We have, in this case, insufficient training data.

Bigdata To supplement the available data in order to reach sufficient training data, we perform a combination of the rich mathematical possibilities of Monte Carlo simulation with the high accuracy of the mechanical aspects of the finite element method (MCSFEM). Consequently, we can obtain a lot of data, named big data (massive data sets). Many different hardware architectures apart from traditional central processing units (CPUs) can be used to process the data. The common point of these architectures is their massive inherent parallelism, as well as a programming model other than that of the classical von Neumann CPUs.

4 Multi-processor system-on-chip (MPSoC)

Parallel computing Parallel computing is a form of computation based on either computer-internet integration or multiprocessor/multicore computers, in which many calculations are carried out simultaneously. It operates on the principle that large problems can be divided into smaller ones, which are then solved in parallel. An MPSoC is an integrated circuit (IC), or system on a chip (SoC), that integrates all components of a computer or other electronic system into a single chip. Software plays a critical role in MPSoC design because the chip will not do anything without software. Beyond its hardware architecture, an MPSoC system generally runs a set of software applications divided into tasks, and an operating system devoted to managing both hardware and software through a middleware layer, e.g., drivers. A heterogeneous multiprocessor provides the computational concurrency required to handle concurrent real-world events in real time, by reducing conflicts among processing elements and tasks. In an MPSoC, either hardware or software can be used to solve a problem. An abstract view of the software-hardware integration of an MPSoC is presented in the following figure. The software design for an MPSoC is more complex than simple software compilation: an ideal software design flow allows the software developer to implement the application in a high-level language, without considering the low-level architecture details. In an ideal design flow, the software generation targeting a specific architecture consists of a set of automatic final application software code generation and hardware-dependent software (HdS) code generation (Fig. 5c); the classical approach is shown in Fig. 5b. The HdS is made of lower software layers that may incorporate an operating system (OS), communication management, and a hardware abstraction layer to allow the
Fig. 4 Fitting the required data by the function f(x), for x, y (a) and x1, y1 (b)
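The regression example above can be reproduced outside MATLAB. The following Python sketch uses numpy.polyfit/numpy.polyval in place of polyfit/polyval, and checks the two observations made in the text: a good fit on [1, 9], and rapid deterioration beyond the available data (at x = 15 the polynomial is far from the required value 35.0).

```python
import numpy as np

# Available (training) data from the example above.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
y = np.array([1.22, 3.16, 4.42, 2.28, 0.18, 2.01, 7.9, 8.3, 10.1])

p = np.polyfit(x, y, 7)                         # degree-7 regression (MATLAB: polyfit(x, y, 7))
inside = np.max(np.abs(np.polyval(p, x) - y))   # fit error inside the data range [1, 9]
outside = np.polyval(p, 15.0)                   # extrapolation to x = 15
gap = abs(outside - 35.0)                       # distance from the required value y1(15) = 35.0
```

The fit error `inside` is tiny, while `gap` is enormous: the degree-7 polynomial interpolates the nine training points almost exactly but is useless outside them, which is precisely the "insufficient training data" problem that the MCSFEM-generated big data is meant to cure.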
(Slaves) e_1 = |a_1 − t_1|, e_2 = |a_2 − t_2|, …, e_i = |a_i − t_i|, …, e_m = |a_m − t_m|

In the computer program, EE denotes the mean square error, MSE, i.e., EE = MSE.

• The numbers of neurons of the first and second layers, m ∈ M and h ∈ H (h = q for this example), are required to seek the optimal structure of the neural network.
• EE is obtained from the different slaves after the training process; the relationship between NN numbers and MSEs is represented graphically in Fig. 10.

Fig. 10 Relationship between NN numbers and MSEs; net{5} = optimal architecture of the neural network at CPU5: 9-46-9-9 (m* = 46)
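The Master-Slave selection step illustrated in Fig. 10 amounts to a map step (each slave trains and scores its candidate architectures) and a reduce step (the master keeps the architecture with the smallest MSE). Below is a minimal Python sketch of that pattern; a toy error curve with its minimum at m = 46 stands in for a slave's full training run, and a thread pool stands in for the eight CPUs of the paper's SMP architecture.

```python
from concurrent.futures import ThreadPoolExecutor

def mse(m):
    """Stand-in for one slave's training run: the MSE of a network with
    m neurons in the first layer (toy curve, minimum placed at m = 46)."""
    return (m - 46) ** 2 / 1.0e4 + 3.2e-4

def master(candidates, n_slaves=8):
    """Map: scatter candidate architectures to the slaves.
    Reduce: keep the architecture with the smallest MSE."""
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        scored = list(pool.map(lambda m: (m, mse(m)), candidates))
    return min(scored, key=lambda pair: pair[1])

best_m, best_mse = master(range(5, 60))
```

With the real MCSNN model, mse() would be the expensive training of one NN from the NoNN, and the data-parallel map step is what yields the roughly n-fold SIMD speedup claimed in the text.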
Finally, the integration of NN computing and FEM based on the MPSoC enables us to recognize the threat to this structure, which is located in the seventh element, having the cross-sectional area S_7 = S* = 0.0028 m², a value that does not belong to the training data set (compatible with the FEM computation, with error e = 0.0032 − 0.0028 = 0.0004 m²). Note that this error will be reduced with the growth of the training data set and of engineering experience.

6 Conclusions

Today we can talk about terabytes and petabytes of data, rather than the gigabyte of data that was once considered imposing. In engineering problems, we must adapt to data originating from a host of new sources, such as GPS, wireless devices, sensors, and streaming communication generated by machine-to-machine interactions. The data is continuous and unstructured, not constrained to rigid structures of rows and columns, and is proving very hard to process by traditional computation methods. The performance demanded by such applications requires migration from single-processor-based systems to intensive data communication and the use of multiprocessor architectures on a single chip (MPSoC). To solve the big-data problems, one can use a Hadoop-like batch processing system with the distributed parallel processing paradigm called MapReduce, which is a programming model of Google and an associated implementation for processing and generating large data sets (big data) using a group of machines; it provides a standardized framework for implementing large-scale distributed computation. Although the continuous data (big data) originating from GPS-RTK and WSN is not studied in this paper, and only one selected data set is used to determine the technical state of a structure in real time, the big-data approach is used here to solve the problems concerning both the MCSFEM and MCSNN models. However, Google's MapReduce model does not have a mechanism to identify the parallelism of NN computations integrated with Monte Carlo simulation (MCS) and the finite element method (FEM). Therefore, we developed a new distributed parallel processing model based on single instruction, multiple data (SIMD) technology using the Master-Slave structure. It enables us to split big data into n data sets, processed at the same time on n processors (CPUs), rather than on a single CPU with single instruction, single data (SISD) technology. This paper gives a brief description that indicates the enormous potential of parallel processing, through an example and use case, within the framework of developments in computer science. It indicates that multicore computing is the issue of how to fit multiple processing cores onto a single chip to produce a functional and efficient whole.

The heart of any prediction system is the prediction model. There are various machine learning algorithms available for different types of prediction systems. Any prediction system using an NN will have a higher probability of correctness if the model is built using good training samples and a good (optimal) NN architecture. We would like to emphasize, in this work, the advantage of using the MapReduce model based on the MPSoC to solve big-data and multitasking problems (NN, NoNN, FEM, and MCSFEM) for building these models. It also turns out that combining multicore and coprocessor technology provides extreme computing power for highly CPU-time-intensive applications. The improvement in performance gained by the use of the MPSoC depends on the software algorithms used and on the hardware; the new model of the MapReduce system used in this paper targets a hardware architecture made of eight identical processors (CPUs), called a homogeneous symmetrical multiprocessing (SMP) architecture.

Some issues should be considered in the future:

• Real-time data processing challenges are very complex, and handling the velocity of continuous data is not an easy task. We would like to handle, in the future, the parallel processing of this data in order to extract meaningful information from the moving stream; i.e., we need another model that allows high-level programming of heterogeneous multiprocessor architectures.
• More processors may lead to better results, but how many are appropriate for this case should be considered in the future.
• There is increasing interest in the development of smart structures with built-in fault detection systems that would provide damage and/or threat warnings in the fields of civil engineering.

References

1. Almasi GS, Gottlieb A (1994) Highly parallel computing, 2nd edn. Benjamin Cummings, Redwood City
2. Bhattacharya D, Mitra M (2013) Analytics on big fast data using real time stream data processing architecture. EMC Proven Professional Knowledge Sharing
3. Chau KW (2007) Application of a PSO-based neural network in analysis of outcomes of construction claims. Autom Constr 16(5):642–646
4. Cheng CT et al (2005) Long-term prediction of discharges in Manwan Hydropower using adaptive-network-based fuzzy inference systems models. Lect Notes Comput Sci 3612:1152–1161
5. Hwang J, Yun H, Park S-K, Lee D, Hong S (2012) Optimal method of RTK-GPS/Accelerometer integration to monitor the displacement of structures. Sensors 12:1014–1034. doi:10.3390/s120101014
6. Laemmer L, Sziveri J, Topping BHV (1997) MPI for the transtech parastation. Advances in Computational Mechanics with Parallel and Distributed Processing Ltd., Edinburgh
7. Mayer-Schonberger V, Cukier K (2013) Big data: a revolution that will transform how we live, work and think. Houghton Mifflin Harcourt Publishing Company, New York
8. Neural Network Toolbox User's Guide, v. 2010b. https://www.mathworks.com/help/pdf_doc/nnet/nnet_ug.pdf. Accessed 2 Nov 2015
9. Popovici K, Rousseau F, Jerraya AA, Wolf M (2010) Embedded software design and programming of Multiprocessor System-on-Chip. Springer, Heidelberg
10. Tien JM (2013) Big data: unleashing information. J Syst Sci Syst Eng 22(2):127–151. doi:10.1007/s11518-013-5219-4
11. Tran C, Srokosz P (2010) The idea of PGA stream computations for soil slope stability evaluation. Comptes Rendus Méc 338(9):499–509
12. Vossen G (2014) Big data as the new enabler in business and other intelligence. Vietnam J Comput Sci 1:3–14. doi:10.1007/s40595-013-0001-6
13. Weber F (2004) Research activity in smart structures at EMPA. In: Guran A, Valasek M (eds) 3rd International Congress on Mechatronics MECH2K4, CTU in Prague, Czech Republic
14. Yang LT, Chen J (2014) Guest editorial: special issue on scalable computing for big data. Big Data Res 1:3–5. http://www.elsevier.com/locate/bdr
15. Zaharia M, Das T, Li H, Hunter T, Shenker S, Stoica I (2013) Discretized streams: fault-tolerant streaming computation at scale. http://dx.doi.org/10.1145/2517349.2522737
16. Zang C, Imregun M (2001) Combined neural network and reduced FRF techniques for slight damage detection using measured response data. Arch Appl Mech 71:525–536
17. Zhao Y, Wu J, Liu C (2014) Dache: a data aware caching for big-data applications using the MapReduce framework. Tsinghua Sci Technol 19(1):39–50