TSPR20 - Draft Scientific Report Assignment - Group 7
Introduction
Safety critical systems are those where failure results in catastrophic loss of life or injury
and damage to property and the environment (Bharathi & Selvarani, 2019)
(Törnblom & Nadjm-Tehrani, 2018). These safety critical systems have
requirements on accuracy as well as on the timeliness with which results are delivered;
prime examples exist in automotive and healthcare applications, where rigorous
monitoring and dependability validation are carried out, and strict adherence to ISO
standards such as ISO 26262-6 is a must for software engineers in the automotive
industry (Bharathi & Selvarani, 2019).
Methods
Artificial intelligence and machine learning can be used for early identification of
errors at the design stage in order to reduce the rigorous testing effort required during
the verification stage.
In this regard, Bharathi & Selvarani (2019) employed a case study approach
in which they used a stochastic model called the Hidden Markov Model (HMM). They
used the HMM to build a state machine of all possible behavioural conditions of an anti-
lock braking system (ABS) and its related transitions for four major parameters
(stopping distance, tire torque, wheel speed and vehicle speed). Hidden states are
defined as those caused by underlying invisible factors, yet they are still observable
and measurable in the same way as the visible operational and error states (Bharathi
& Selvarani, 2019). The approach of Bharathi & Selvarani (2019) is to first identify
the hidden states of the system and then compute the probability of failure by first
determining the temporal distribution of error occurrence, and then using the HMM to
traverse the states with the aid of the forward-backward algorithm.
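To make the idea concrete, the following is a minimal sketch of the forward pass of an HMM in Python. The two hidden states, the two observable symbols, and all probability values are hypothetical placeholders chosen for illustration; they are not the ABS parameters or probabilities used by Bharathi & Selvarani (2019).

import numpy as np

# Hypothetical HMM: two hidden states ("nominal", "design-error") and two
# observable symbols ("operational" = 0, "error" = 1).
A = np.array([[0.9, 0.1],    # transition probabilities between hidden states
              [0.3, 0.7]])
B = np.array([[0.95, 0.05],  # emission probabilities P(observation | hidden state)
              [0.40, 0.60]])
pi = np.array([0.8, 0.2])    # initial hidden-state distribution

def forward(observations):
    """Return P(observation sequence) using the forward algorithm."""
    alpha = pi * B[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
    return alpha.sum()

# Probability of observing the sequence: operational, operational, error.
print(forward([0, 0, 1]))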
Behavioural models are state machines, and thus model-based test tools (e.g. LBTest
3.x) can be used to scrutinise the system and generate a learned model. This learned
model can then be used to test safety and security properties of the system using
model checkers (Meinke, 2017). Any anomalies found during this analysis are
then classified as true negatives or false negatives via execution of test
cases on the system under test (Meinke, 2017). The LBTest tool uses
propositional linear temporal logic (PLTL) to model the temporal behaviour of the
system. PLTL works with two basic modal operators; for example, (p U q) means that
p is true until q is true.
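As an illustration of the "until" operator, the short Python sketch below evaluates (p U q) over a finite trace. The finite-trace semantics, the trace encoding, and the proposition names "braking" and "stopped" are our own simplifying assumptions for illustration; they are not taken from LBTest or the cited papers.

# Each trace element maps proposition names to truth values at that time step.
def until(trace, p, q):
    """(p U q): p holds at every step until some step at which q holds."""
    for state in trace:
        if state[q]:
            return True
        if not state[p]:
            return False
    return False  # q never became true within the trace

trace = [
    {"braking": True, "stopped": False},
    {"braking": True, "stopped": False},
    {"braking": True, "stopped": True},
]
# "The vehicle keeps braking until it has stopped."
print(until(trace, "braking", "stopped"))  # True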
Results
Bharathi & Selvarani (2019) relied on three performance metrics, namely precision, recall and
F-measure, when assessing the machine learning algorithm HMM. They claimed that the HMM had
helped in identifying more temporal sequences of error occurrence, and concluded that the HMM
provided satisfactory recall performance.
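For reference, the three metrics are related as shown in the short sketch below. The counts are hypothetical and are not taken from the paper; they simply illustrate the standard definitions.

# tp = correctly flagged error sequences, fp = false alarms, fn = missed errors.
def precision_recall_f_measure(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

print(precision_recall_f_measure(tp=18, fp=2, fn=4))  # (0.90, 0.82, 0.86) approx.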
Khosrowjerdi, Meinke, & Rasmusson (2017) showed, through the use of two industrial case studies,
that they were able to use temporal logic to model informal behavioural requirements, and they
demonstrated the success of LBT in surfacing known and unknown errors. Their results showed
the viability of modelling many behavioural requirements, and that their tools were capable of
achieving high test throughput for low-latency Engine Control Unit tests. They further observed
that their models avoid a major source of false positive and false negative tests usually
found in model-based testing, because they incorporated machine learning techniques.
It was interesting to read that Bharathi & Selvarani (2019) state that although their approach
provided good results for the ABS case study in terms of providing temporal sequences of error
occurrence, the same framework may not map directly to other safety critical systems outside
the automotive industry. This poses a significant threat to the validity of such a machine
learning approach to testing, and as such researchers need to assess the validity on a case-by-
case basis.
We see that there are advantages to using machine learning with model-based testing, as this
approach appears to be independent of platform and code and has the added advantage of working
with a black-box testing approach. This is particularly useful, as LBT can be used without needing
to expose the intellectual property of the unit under test. We are also encouraged by the results of
Meinke (2017) on the good scalability of LBT and its potential for co-operative open cyber-
physical systems-of-systems (CO-CPS).
It would be useful to further explore how the harder-to-interpret and harder-to-explain nature of
machine learning algorithms will be viewed under the ISO 26262 standard. Our chosen papers did not
address this aspect; however, a more thorough literature review should surface some answers.
Finally, it was very useful to learn how the case study methodology is used in research on machine
learning based testing within the automotive industry.
References
Bharathi, R., & Selvarani, R. (2019). A Machine Learning Approach for Quantifying the Design
Error Propagation in Safety Critical Software System. IETE Journal of Research, 1–15.
Khosrowjerdi, H., Meinke, K., & Rasmusson, A. (2017). Learning-based testing for safety
critical automotive applications. 5th International Symposium on Model-Based Safety
and Assessment, 197–211.
Meinke, K. (2017). Learning-Based Testing of Cyber-Physical Systems-of-Systems: A
Platooning Study. European Workshop on Performance Engineering, 135–151.
Törnblom, J., & Nadjm-Tehrani, S. (2018). Formal verification of random forests in safety-
critical applications. International Workshop on Formal Techniques for Safety-Critical
Systems (pp. 55–71). Springer.