
Evaluation of Machine Learning approaches for testing

Safety-Critical Systems within the Automotive Industry

Alice Namutebi, Haval Kadhem


21 April 2020

Introduction

Safety-critical systems are those where failure results in catastrophic loss of life, injury, or
damage to property and the environment (Bharathi & Selvarani, 2019)
(Törnblom & Nadjm-Tehrani, 2018). These safety-critical systems have
requirements on accuracy as well as on the timeliness with which results are delivered;
prime examples exist in automotive and healthcare applications, where rigorous
monitoring and dependability validation are carried out, and strict adherence to ISO
standards such as ISO 26262-6 is a must for software engineers in the automotive
industry (Bharathi & Selvarani, 2019).

It is often cost- or technology-prohibitive to guarantee that a safety-critical system
is failure free; instead, such systems should fail gracefully without violating any
safety constraints (Bharathi & Selvarani, 2019). Safety-critical software system
development thus requires design and implementation methods that
prevent faults from migrating into catastrophic failures. In the context of software systems,
errors in the system can be analysed to ascertain a failure pattern
(Bharathi & Selvarani, 2019).

Recent advances in Machine Learning (ML) algorithms have accelerated their
adoption in complex, automated and safety-critical systems (Törnblom & Nadjm-
Tehrani, 2018). This increase in adoption is primarily attributed to the ability of ML
algorithms to learn and work with incomplete knowledge to arrive at a generalisation.

Although it is interesting to consider how safety can be assured when ML is used in
safety-critical systems, that is not the purpose of this short report. Rather, this
report explores how ML can be used to test software in safety-critical systems,
with emphasis on case studies from the automotive industry.

Methods

Artificial intelligence and machine learning can be used for early identification of
errors at the design stage in order to reduce rigorous testing efforts during the
verification stage.

In this regard, Bharathi & Selvarani (2019) employed a case-study approach
in which they used a stochastic model, the Hidden Markov Model (HMM). They
used an HMM to build a state machine of all possible behaviour conditions of an anti-
lock braking system (ABS) and its related transitions for four major parameters
(stopping distance, tire torque, wheel speed and vehicle speed). Hidden states are
defined as those caused by underlying invisible factors, yet they
are still observable and measurable in the same way as the visible operational and error
states (Bharathi & Selvarani, 2019). The approach of Bharathi
& Selvarani (2019) is to first identify the hidden states of the system and then
compute the probability of failure by first determining the temporal distribution of
error occurrence, and then using the HMM to traverse the states with the aid of the
forward-backward algorithm.
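
To make the HMM idea concrete, the sketch below runs the forward-backward algorithm over a toy observation sequence. The state names, transition, emission and initial probabilities are purely illustrative assumptions; the actual model in Bharathi & Selvarani (2019) is built from their ABS case-study data.

```python
import numpy as np

# Toy HMM for illustration only; states and probabilities are assumed,
# not taken from Bharathi & Selvarani (2019).
states = ["operational", "error", "hidden_fault"]
observations = ["normal", "deviation"]

A = np.array([[0.90, 0.07, 0.03],    # transition probabilities
              [0.40, 0.50, 0.10],
              [0.20, 0.30, 0.50]])
B = np.array([[0.95, 0.05],          # emission probabilities
              [0.30, 0.70],
              [0.60, 0.40]])
pi = np.array([0.98, 0.01, 0.01])    # initial state distribution

def forward_backward(obs):
    """Posterior probability of each (hidden) state at every time step."""
    T, N = len(obs), len(states)
    alpha = np.zeros((T, N))          # forward: P(obs[0..t], state_t = i)
    beta = np.zeros((T, N))           # backward: P(obs[t+1..] | state_t = i)
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Mostly normal readings with two deviations in the middle.
sequence = [0, 0, 1, 1, 0]
for t, row in enumerate(forward_backward(sequence)):
    print(f"t={t}: " + ", ".join(f"{s}={p:.2f}" for s, p in zip(states, row)))
```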

An alternative approach is employed by Khosrowjerdi, Meinke, & Rasmusson
(2017), namely Learning-Based Testing (LBT). LBT combines machine learning
with model-based testing, and its focus is on using run-time observations of a
system under test (SUT) to reverse engineer the SUT's behavioural model
(Khosrowjerdi, Meinke, & Rasmusson, 2017).
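
The core LBT loop can be pictured as: observe the SUT, learn a behavioural model from the observations, check requirements on that model, and confirm any violations by executing them on the SUT. The fragment below is a minimal sketch of that loop; the SUT, input alphabet and requirement are hypothetical, and the real LBTest tool uses far more sophisticated automaton-learning and model-checking algorithms.

```python
# Minimal sketch of the learning-based testing loop; not the LBTest tool.

def sut(history):
    """Hypothetical system under test: returns 'locked' if the last
    lock/unlock command seen was 'lock', otherwise 'unlocked'."""
    state = "unlocked"
    for cmd in history:
        state = "locked" if cmd == "lock" else "unlocked"
    return state

def learn_model(traces):
    """Reverse engineer the SUT's behaviour as a table: input history -> output."""
    return {tuple(t): sut(t) for t in traces}

def check_requirement(model):
    """Assumed safety requirement: any history ending in 'lock' must yield
    'locked'. Returns a counterexample trace if the learned model violates it."""
    for history, output in model.items():
        if history and history[-1] == "lock" and output != "locked":
            return list(history)
    return None

# Observe some executions, learn a model, then check the requirement on it.
traces = [[], ["lock"], ["lock", "unlock"], ["unlock", "lock"]]
counterexample = check_requirement(learn_model(traces))
print("no violation found" if counterexample is None
      else f"counterexample to confirm on the SUT: {counterexample}")
```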

Behavioural models are state machines, so model-based test tools such as LBTest 3.x
can be used to scrutinise the SUT and generate a learned model. This learned model
can then be used to test safety and security properties of the system using
model checkers (Meinke, 2017). Any anomalies found during this analysis are
then classified as true negatives or false negatives via execution of test
cases on the system under test (Meinke, 2017). The LBTest tool uses
propositional linear temporal logic (PLTL) to model the temporal behaviour of the
system. PLTL works with two basic modal operators; for example, (p U q) means p is true
until q is true.
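
As an illustration of the until operator's meaning over a finite execution trace, the snippet below evaluates (p U q): q must hold at some step and p must hold at every earlier step. This is only a semantics sketch under assumed predicate names, not the PLTL model checker used by LBTest.

```python
# Finite-trace semantics sketch for the PLTL until operator (p U q).

def until(trace, p, q):
    for i, state in enumerate(trace):
        if q(state):                             # q eventually becomes true ...
            return all(p(s) for s in trace[:i])  # ... and p held until then
    return False

# Hypothetical observations: brake stays requested until the wheel locks.
trace = [{"brake": True, "locked": False},
         {"brake": True, "locked": False},
         {"brake": True, "locked": True}]
print(until(trace, p=lambda s: s["brake"], q=lambda s: s["locked"]))  # True
```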

Khosrowjerdi, Meinke, & Rasmusson (2017) applied their study to two
automotive case studies supplied by Scania CV. The first case study
was a remote engine start (ESTA) application, where they needed to test
“behavioral modeling of a time dependent system in the presence of somewhat
complex temporal sequences” (Khosrowjerdi, Meinke, & Rasmusson, 2017, p.
202). The second case study was the dual-circuit steering (DCS) system; in addition to
behavioural modelling, the authors were “investigating the capability of the
tool to find undiscovered discrepancies in the SUT” (Khosrowjerdi, Meinke, &
Rasmusson, 2017, p. 202).

Results

Bharathi & Selvarani (2019) relied on three performance metrics, namely precision, recall and
F-measure, when assessing the HMM machine learning algorithm. They claimed that the HMM had
helped in identifying more temporal sequences of error occurrence, and concluded that the HMM
provided satisfactory recall performance.
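
For reference, the three metrics are defined from the counts of true positives (tp), false positives (fp) and false negatives (fn); the counts below are illustrative assumptions, not figures from the paper.

```python
# Precision, recall and F-measure from assumed, illustrative counts.
tp, fp, fn = 18, 4, 3

precision = tp / (tp + fp)   # fraction of flagged error sequences that are real
recall = tp / (tp + fn)      # fraction of real error sequences that were flagged
f_measure = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f}, recall={recall:.2f}, F-measure={f_measure:.2f}")
```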

Khosrowjerdi, Meinke, & Rasmusson (2017) showed through two industrial case studies
that they were able to use temporal logic to model informal behavioural requirements, and they
demonstrated the success of LBT in surfacing both known and unknown errors. Their results showed
the viability of modelling many behavioural requirements, and that their tools were capable of
achieving high test throughput for low-latency Engine Control Unit (ECU) tests. They further observed
that, because they incorporated machine learning techniques, their models avoid a major source of
false positive and false negative tests usually found in model-based testing.

Discussion and conclusion

It was interesting to read that Bharathi & Selvarani (2019) state that, although their approach
provided good results for the ABS case study in terms of providing temporal sequences of error
occurrence, the same framework may not map directly to safety-critical systems outside the
automotive industry. This poses a significant threat to the validity of such machine-learning-based
testing approaches, and researchers therefore need to assess validity on a case-by-case
basis.

We see advantages in combining machine learning with model-based testing, as this
approach appears to be independent of platform and code and has the added advantage of working
as a black-box testing approach. This is particularly useful, as LBT can be used without needing to
expose the intellectual property of the unit under test. We are also encouraged by the results of
Meinke (2017) on the good scalability of LBT and its potential for co-operative open cyber-
physical systems-of-systems (CO-CPS).

It would be useful to further explore how the harder-to-interpret and harder-to-explain nature of
machine learning algorithms will be viewed under ISO 26262. Our chosen papers did not address
this aspect; however, a more thorough literature review should surface some answers.

Finally, it was very useful to learn how case study methodology is used in research on machine
learning based testing within the automotive industry.

References
Bharathi, R., & Selvarani, R. (2019). A Machine Learning Approach for Quantifying the Design
Error Propagation in Safety Critical Software System. IETE Journal of Research, 1-15.
Khosrowjerdi, H., Meinke, K., & Rasmusson, A. (2017). Learning-based testing for safety
critical automotive applications. 5th International Symposium on Model-Based Safety
and Assessment, 197-211.
Meinke, K. (2017). Learning-Based Testing of Cyber-Physical Systems-of-Systems: A
Platooning Study. European Workshop on Performance Engineering, 135-151.
Törnblom, J., & Nadjm-Tehrani, S. (2018). Formal verification of random forests in safety-
critical applications. International Workshop on Formal Techniques for Safety-Critical
Systems, 55-71. Springer.
