
UNIT-3: Fundamentals of Functional Safety and EMS

3.2.1 Functional safety life cycle


Functional safety needs consideration from the very beginning. This requires an analysis of hazards which starts with the design and follows the development process. The accompanying safety design starts on a system level and breaks down into subordinate activities, just as normal development processes do. A crucial point in the cycle is the step in which the development process splits into hardware and software development. Here the more abstract safety process splits into more concrete practices. Similarly important is the step in which the processes for hardware and software merge back into a single process. Finally, a validation is necessary.

When the development is finished and the first product leaves the factory, most of the functional safety work is done, but not all. Functional safety follows the whole product life cycle, including manufacturing, service and decommissioning. Figure 3.1 illustrates this life cycle.

During the life cycle, further developments which are not directly related to the item under consideration must be taken into account if there are interactions. If, for instance, the vehicle dynamics control is the developed item, it also depends on the engine control, so modifications of the engine control cannot be ignored. Furthermore, external developments, sometimes just the advancing state of technology, happen in parallel and need to be considered.

3.4 Software development


For hardware engineers, software is difficult to grasp, because neither work progress nor quality and safety can be estimated visually. Theoretically, software can be changed quickly and easily, but exploiting this property intensively very often leads to quality or even safety problems. Another property is that for software, there is no distinction between development and manufacturing (except writing onto a physical device). The necessity to cope with the invisibility of software has led to highly structured and systematic development processes. These processes have emerged less from anticipatory reason than from the pain of many failures over decades. Long-time software engineers have considered failures in quality, costs or delivery the normal case. Today, there is a tendency for hardware engineers to learn from the processes introduced for software. Sometimes this approach works, but the very different nature of hardware and software must be kept in mind. One trailblazer of software engineering has been Barry W. Boehm; many of his seminal papers are summarised in [214]. He can be considered one of the inventors of process models. A more recent approach is the assessment of development processes. In software engineering, configuration management makes sure not to mix up configurations (source code files belonging together) or versions of source code files. In hardware-related development, this term is less familiar, but similar tools have been introduced.

3.4.1 Process models


One aspect of software development has been the early development of process models. A process model collects the working steps to be done and arranges them in a reasonable order. One of these models, which is highly risk oriented (project risks and product risks), has been Boehm's spiral model [18]. One basic idea has been a development in repeated cycles with intermediate evaluations; today, we find something similar in the development cycles of automotive components, with an A sample to gather experience first, a B sample to evaluate further functions, a C sample to evaluate the final function implementation in theory (in practice, even with the C sample, new features are implemented) and finally a D sample which should not bear new product development, but evaluates for the first time the production process for series production (so from the development perspective, it is a series product; from the production perspective, it is the trial run). Over many years, a typical sequence of steps which can be repeated for each sample cycle has evolved: requirements, specification, design, implementation (i.e. coding in the software process), module test, integration, system test, acceptance test. Since software engineers like to draw these steps as a diagram of cascaded rectangles, it has been dubbed the waterfall model. The awareness that the later steps have a lot to do with the earlier steps can also be represented graphically, e.g. by arrows. It turns out that an acceptance test searches for deviations from the requirements, a system test for deviations from the specification, the integration fails if the software deviates from its original design, and the module or unit tests show coding bugs. These relations can be represented graphically in a V as shown in Figure 3.5. The small return arrows symbolise iterations. In practice, iterations often cross multiple stages; they might even return to the requirements stage. Sometimes it is discussed whether such iterations are necessary or allowed; development practice clearly answers this question with 'yes'.

Figure 3.5 V model
This V model is the standard process model in the automotive industry far beyond software, and the functional safety standard ISO 26262 implicitly refers to it. If several samples are delivered, the V model can be applied to each sample stage individually.

Usually, engineering products have an inherent hierarchy. In this case, the design on one product level yields the requirements for the sub-product on the level below. Sub-products can be modules of an electronic device, but the software and hardware are also sub-products of an ECU. In particular, a system design delivers requirements for the hardware and the software, leading into two parallel subordinate V models. With integration, both sub-Vs merge again, so finally the system test and the acceptance test of the whole product can be done. Figure 3.6 shows an example of how ISO 26262 splits the system V model into a hardware V and a software V and merges them again. Putting several items (hardware, software or hardware and software modules) together is called integration. Integration is often critical; even in the fortunate case that everything seems to work together at first sight, further tests are necessary to scrutinise the hardware–software interface (HSI) and module interfaces.
Although the V model is common in the automotive industry and functional safety standards such as ISO 26262 tacitly assume the V model, there are also two grave disadvantages. One disadvantage is that serious problems, in particular a misunderstanding of requirements, are discovered very late in the tests before delivery, and usually there is no time left to fix them. The second problem appears in the cooperation between car manufacturer and supplier: the rigid structure makes it nearly impossible to react flexibly to new customer requirements, and it is not realistic to believe that the customer has documented all his requirements at the beginning of the V. There are other process models called 'agile' models which respond to this problem. Two of them, extreme programming and scrum, are successfully used in other industries for pure software development, whereas the automotive industry is highly reluctant. One argument against the use of these agile models in the automotive industry is the difficulty or maybe even impossibility of developing hardware this way. Another reason is that in recent years, a lot of standardisation work including ISO 26262 has built upon the V model.

Figure 3.6 Double V
3.4.2 Development assessments
One key thought of quality management is that good development processes yield good products, although there is no proof of this assumption. Experience shows that sometimes even good processes yield inferior products, and good products have sometimes been developed in an incredibly chaotic way. But it is reasonable to believe that the chance to develop reproducibly good products increases with adequately good processes. In particular, if a development is assigned to a supplier, the product quality is not known in advance, but the process quality can be known. So the desire to estimate the capability of a development without analysing finished products is understandable.

Many assessment methods were first devised in the software industry; two of them have spread widely in the automotive industry, i.e. CMM (capability maturity model) and SPICE (software process improvement and capability determination, which must not be confused with the homonymous circuit simulator).

CMM was developed by Carnegie Mellon University and published in 1986. The assessment checks good practices of software engineering, called key process areas. For each of the five CMM levels (except the lowest one), a set of several practices needs to be implemented and documented. Serious problems are the high costs of CMM assessments, bureaucracy increasing dramatically with higher levels, and the monitoring of people's performance at high levels, which contradicts labour legislation in some countries. The author has experienced systematic trials with projects on different CMM levels where, in spite of higher process quality, product quality even decreased with higher CMM levels. The number of projects in these trials was too small for meaningful and universally applicable statistics, but it became obvious that the increasing bureaucratic load, without taking additional people into the project, killed the time left for engineering tasks. From this experience, for good and safe products, one might target a reasonable, local optimum of process quality and not just the theoretically possible maximum of process quality. Since 2000, CMM has been superseded by a similar system called CMMI (CMM integrated), or more precisely CMMI-DEV for development, which covers systems engineering including hardware development. Whereas CMMI has lost relevance in the European automotive industry, it is still common in the USA.

SPICE has been standardised by the ISO [96–98]. It shares some basic ideas with CMM(I) and uses a completely different vocabulary, but there are also many true differences in details. One principal difference is that SPICE does not assign one level to the whole development process, but individual levels to different process areas. A special adaptation of SPICE beyond the standard is automotive SPICE [67].
3.4.3 Configuration management
Large software products are composed of hundreds or even thousands of files which contain the source code of a product. In one software build, files which do not belong together could erroneously be combined. Each file evolves in several versions which need to be tracked, which can be complicated in particular if versions additionally split up into variants, e.g. if an ECU is designed for different cars (which is the normal case). While one engineer works on a file, another engineer might fix a bug in the same file; later, the first engineer might overwrite the bugfix with his changes. These are a few examples of things which could go wrong when many engineers work on many files; there are tools which avoid such problems. A structured working procedure using such tools is called configuration management.

It is obvious that similar things might happen in hardware development, where a product consists of many components. And in fact, similar tools are used increasingly by hardware engineers, although product life-cycle management is the more frequent term in electrical and mechanical engineering.

3.5.1 Reliability

From past experience, an approximate failure rate λ(t) can be given for hardware components. In reliability theory, there is no distinction between failure and fault as is common in functional safety, so in this subsection, we use the term failure rate as usual in technical reliability. This failure rate says how many failures per time this component will have. If we assume R(t = 0) = 1, then the relation between reliability and failure rate is given as

R(t) = exp(−∫₀ᵗ λ(τ) dτ)  (3.12)
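For a constant failure rate λ, this relation reduces to R(t) = e^(−λt). A minimal numerical sketch (the 100 FIT failure rate and the 10,000 h mission time are hypothetical values chosen only for illustration):

```python
import math

def reliability(lam: float, t: float) -> float:
    """R(t) for a constant failure rate lam, i.e. R(t) = exp(-lam * t)."""
    return math.exp(-lam * t)

# Hypothetical component with lam = 100 FIT (100e-9 failures per hour):
lam = 100e-9
r = reliability(lam, 10_000)   # survival probability after 10,000 h
mttf = 1.0 / lam               # mean time to failure for constant lam
```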
3.5.2 Reliability block diagrams and redundancy
In the FTA, we have seen OR-linked events and AND-linked events; in the first case, fault or failure probabilities are added, in the second case they are multiplied. If the probability of an event depends on the failure rate of a component or sub-system (we have already discussed that reliability and safety are different concepts, but in some cases they are strongly related), we can have a look at failure rates λ(t) (failures per time) of combined systems.

Reliability block diagrams address a similar problem as the FTA does, but the crucial value here is the reliability R(t), which has already been introduced in the beginning of this chapter.
An example is a transistor power stage. Suppose we want to prevent a failure to switch on from having an effect on the application, and we decide to use a parallel circuit of two MOSFETs as T1 and T2 in Figure 3.7 (a parallel circuit of bipolar transistors would be thermally unstable and must be avoided). One of the two transistors must work to perform the required function; this case is called a 1-out-of-2 redundancy. If Ri is the reliability of the requested function of transistor i, the total reliability is R12 = R1 + R2 − R1R2 [17]. Now let us imagine that both MOSFETs have a common driver circuit at their gates. The circuit fails if the transistor combination fails or if the driver circuit fails; in other terms, both functions, that of the driver circuit and that of the transistors, are required. In this case, the total reliability is the product of the partial reliabilities. If R3 is the reliability of the driver, then the total reliability is R123 = R12R3, which is equivalent to R123 = R1R3 + R2R3 − R1R2R3.

Sometimes, transistors are paralleled to double power. In this case, we need both transistors for the required function (maybe there is a still useful reduced function with one remaining transistor; then this reduced function is formally a different function and must be specified with its own reliability). If we truly need both transistors, the reliability of exactly the same circuit is a different one, i.e. R123 = R1R2R3. We get the same expression if we do not consider the reliability of the function 'closing', but of the function 'opening'. Figure 3.7 shows clearly that the reliability block diagram strongly depends on the function. In case of an electronic circuit, it does not necessarily represent its physical structure, e.g. the parallel connection of both transistors in this example.
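These block-diagram reductions can be checked numerically. A short sketch; the reliability values are hypothetical, chosen only to exercise the formulas:

```python
def parallel(*rs: float) -> float:
    """1-out-of-n redundancy: the block fails only if every element fails."""
    p_fail = 1.0
    for r in rs:
        p_fail *= (1.0 - r)
    return 1.0 - p_fail

def series(*rs: float) -> float:
    """All elements are required, so the reliabilities multiply."""
    total = 1.0
    for r in rs:
        total *= r
    return total

r1 = r2 = 0.99   # hypothetical MOSFET reliabilities R1, R2
r3 = 0.999       # hypothetical gate-driver reliability R3

r12 = parallel(r1, r2)          # R12 = R1 + R2 - R1*R2
r123 = series(r12, r3)          # 1oo2 transistors in series with driver
r123_2oo2 = series(r1, r2, r3)  # both transistors needed: R1*R2*R3
```

For two elements, `parallel` reproduces R1 + R2 − R1R2 exactly, since 1 − (1 − R1)(1 − R2) expands to the same expression.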

Figure 3.7 Examples of reliability block diagrams


Reliability block diagrams are standardised in IEC 61078 [78].
A common term is hardware fault tolerance (HFT). It is the number of hardware faults a system or sub-system (more accurately, a safety function of a system or sub-system) can withstand without failure. An HFT of 0 means that a single hardware fault is sufficient for failure. The HFT can be increased by redundancy.
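For the common k-out-of-n architectures, the HFT follows directly as n − k. A small sketch (the "koon" shorthand means "k out of n channels must work"):

```python
def hardware_fault_tolerance(n: int, k: int) -> int:
    """HFT of a k-out-of-n architecture: up to n - k channels may fail."""
    if not 1 <= k <= n:
        raise ValueError("need 1 <= k <= n")
    return n - k

print(hardware_fault_tolerance(2, 1))  # 1oo2 redundancy -> 1
print(hardware_fault_tolerance(2, 2))  # 2oo2 (both channels needed) -> 0
```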

3.6 Functional safety and EMC


The target of functional safety is to keep the risk within an acceptable order. Of course, EMI is not the only source of possibly dangerous faults. A rule of thumb is to assign 10 per cent of the complete tolerable risk as the limit of EMC-related risk.

For functional safety, it is not sufficient to prove EMC by successful tests according to the usual test standards (Chapter 9). Functional safety must be assured under all regular working conditions [22], considering all component tolerances, and in some cases even under misuse conditions, not just under the test conditions. Principal differences between testing and operation can be environmental conditions (in particular temperature), age, use and misuse cases, or even the present state of the software running on a micro-controller during an interference. A common practice is to increase test levels to reach some safety margin, but even with extreme safety margins, this approach does not guarantee safety absolutely, while it drives costs (which are not lost, because they also boost quality).
A standard which links functional safety to EMC is IEC 61000-1-2 [84]. To some extent, it also advocates the present overtesting philosophy. It tries to connect other EMC standards of the IEC 61000 series to the generic functional safety standard IEC 61508 [79], so it is kept general without consideration of specific automotive EMC and functional safety standards. It refers to IEC 61025 to introduce the FTA. It helps to define requirements for EMC and functional safety, including testing, and shows some examples in its annexes. The IEEE is working on a supplemental standard 1848 [91] which is closely related to the IET Code of Practice for Electromagnetic Resilience [92].

3.7 Functional safety and quality


Functional safety requires deep reflection about faults, errors and failures. Without development for functional safety, many of them would stay undiscovered. Searching for dangerous malfunctions also discovers those possible malfunctions which are not dangerous and helps to document them in a systematic manner. These malfunctions are typical quality issues which can be fixed after discovery.

Development for safety requires strict processes. Although experience shows that product quality does not automatically benefit from process quality (product quality might even suffer from overly bureaucratic processes if staff is kept short), the ways to establish processes in safety engineering and in quality management are similar, so it is straightforward to have a process which covers both safety and quality. The third issue, security, can also be integrated into the process.

One common quality standard in automotive engineering is IATF 16949 [69], which also mentions some topics of functional safety, in particular the necessity of an FMEA. Formally, the extension of an FMEA to quality improvement can be accomplished simply by reducing the risk priority threshold (or redefining the priority levels in [8]) to a value which is deemed critical for quality; of course, there are accordingly more measures to be taken then.

In hardware engineering, functional safety often depends on the reliability of components. It is obvious that the overall reliability of the product also benefits beyond safety-critical aspects.
3.8 Standards
3.8.1 History
Although not directly mandatory, respecting current standards is useful, but not sufficient. If any damage occurs, the burden of proof that a product has been developed according to the state of the art shifts easily to the manufacturer. Valid standards are considered the state of the art; deviating from these standards is possible as long as they are not explicitly mentioned in legal sources, but even then, deviation needs good reasons. A first, very general standard about functional safety is IEC 61508 [79] about the functional safety of electric/electronic or programmable systems. This standard was released by the International Electrotechnical Commission (IEC) in 1998. It can be called a historical milestone; before its release, it was not generally accepted to control safety-critical functions electronically.

Although the authors of IEC 61508 had manufacturing systems in mind, it has become a generic standard about functional safety. Later, many dedicated standards have been derived from it, such as ISO 26262 for passenger cars. Examples of other derived standards are IEC 61511 for the process industry, IEC 61513 for nuclear power plants or EN 50128 for railways. Although not dedicated to commercial vehicles or motorcycles, ISO 26262 has documented a state of the art which could partially be applied to those vehicles as well. Actually, a second edition of ISO 26262 is under development which explicitly considers road vehicles other than passenger cars (literally all series production road vehicles, excluding mopeds).

A more recent development is ISO 25119 [156–159] for agricultural machines, which is to some extent based on the experience with ISO 26262.

3.9 Functional safety of autonomous vehicles


What makes autonomous vehicles so special that they deserve a separate section in this chapter? One difference is that safety considerations rely much on human reaction if something goes wrong. In an autonomous vehicle, a human reaction will not happen within a sufficient time. If laws stipulate the final responsibility of a person called the driver, this is far from the reality of a distracted person in an autonomous car and even challenges the sense of autonomous driving. ISO 26262 also assumes the existence of a driver, and therefore it might be unsuitable in its present form for autonomous cars. Another issue is the complexity of decisions: there are not just right or wrong, safe or unsafe decisions, but even ethical and unethical decisions, which exceed the state of the art in functional safety. It is obvious that an infinite space of possible situations which occur in traffic needs to be considered. If deep learning is employed, the behaviour is not reproducible and not testable. Furthermore, it will take a long time to get a mature technology, with either many accidents or an extraordinarily careful driving style like that of an insecure human driver of advanced age (who in turn might urge other traffic participants into risky manoeuvres), although in the long term, the main cause of accidents, human failure, will lose its relevance.
In a typical safety analysis (FMEA or HARA), one criterion to quantify the relevance of a fault is the controllability. A formal approach to consider the absent driver is to set the controllability to its worst value for all items where an interaction is required. This will lead to a completely new hazard pattern of well-established sub-systems and so to changes of the system architecture in places where no problem is expected intuitively.
It is more difficult to tackle the ethical problem. The functional safety community has no ready answer yet as to how ethics can be integrated into safety concepts. An easier ethical problem is the question whether a rule should be violated to avoid an accident, e.g. whether a continuous line should be crossed to prevent a probable collision. Nearly every human driver would do so. The different severity levels in hazard analyses might be a good starting point to find a solution here. If a casualty cannot be avoided, it is much more difficult to decide who should preferably be killed.

Autonomous driving will lead into many unforeseeable situations. This infinite space of situations must be restricted. Anxious people strictly avoid situations in which they cannot estimate the consequences of their actions. So it could be a strategy to put the system into a safe state if a situation cannot be mastered in a different way, although it might be quite troublesome if an autonomous car often pushes to the curb and stops.

For deep-learning behaviour, limits are necessary. If the system strictly stays within these limits, it is testable. One must be aware that even relatively simple systems today are no longer completely testable.

On the way to gaining experience with autonomous driving, the only chance seems to be not to implement technology in series faster than it can be understood from the risk side.

3.3.7 Hazard and risk assessment


The HARA or HRA (also called hazard and risk analysis) is done during the conception of a product. The term is coined by ISO 26262 and described in its Part 3, Chapter 7 [135], but of course, it can be done independently of the application of the standard. The resulting document is a detailed list of all hazards and risks, defining requirements for development. There is no strict scheme; usually, it is an extended FMEA (Section 3.3.3) with the scope to derive safety goals, in a further step functional safety requirements and then technical requirements. It could be implemented as a table which lists functions, malfunctions, conditions, situations, persons exposed to a hazard, effects, an estimation of severity with reasons, an estimation of exposure with reasons, an estimation of controllability with reasons, consequences for development, safe state, tolerance time and further items if deemed appropriate. It is very typical of a HARA to define situations as one of the first steps, on the one hand as an aid to find hazards; but in particular, the exposure rating depends very much on driving or operation situations and their frequency, whereas in a typical FMEA, probabilities of failures are often derived from component statistics instead. So exposures (HARA) and probabilities (FMEA) are similar and comparable concepts, but they are not the same.

Situations can be coarsely structured by driving direction, acceleration, deceleration, velocity, special driving situations, traffic situation, parking situation, environmental conditions (temperature, air pressure, humidity, dirt, road quality, visibility/weather), crash situations, driver activity (brake, clutch, accelerator, gearshift, hand brake) and other users' activities. A detailed catalogue with exposure ratings according to ISO 26262 is available from the German automotive industry association [228, in German].
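The severity, exposure and controllability ratings are combined into an ASIL. As a hedged sketch only: the authoritative mapping is the ASIL determination table in ISO 26262-3; the sum shortcut below is a commonly cited equivalent of that table for the regular S1–S3, E1–E4, C1–C3 combinations, and severity 0 (no danger) is treated as QM as described above.

```python
def asil(s: int, e: int, c: int) -> str:
    """ASIL from severity (S1-S3), exposure (E1-E4), controllability (C1-C3).

    Shortcut over the ISO 26262-3 table: the class numbers are summed;
    sums of 7, 8, 9, 10 map to ASIL A, B, C, D, anything lower is QM.
    Severity 0 (not dangerous) is handled as QM before the table applies.
    """
    if s == 0:
        return "QM"
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("class number out of range")
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(s + e + c, "QM")

print(asil(3, 4, 3))  # most critical combination -> ASIL D
print(asil(2, 3, 1))  # sum 6 -> QM
```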

Table 3.1 shows as an example a small excerpt of a HARA for the power train. In practice, the HARA can be split into more than one table. A very obvious difference to an FMEA is that all non-dangerous malfunctions get severity 0, so they are merely QM issues and not hazards. In an FMEA, malfunctions which are not dangerous can also reach a high score if their probability is high.
3.3.2 Fault tree analysis


The FTA identifies causes of faults or failures, so it is clearly a deductive analysis. Beyond the way described in IEC 61025 [77], many different ways of doing an FTA in practice have evolved. It can be done qualitatively as a root cause breakdown or quantitatively, leading to a failure metric such as the probabilistic metric for random hardware failures (PMHF) as requested by ISO 26262. Like a DFA, an FTA can also help to identify common causes of different faults.

Figure 3.2 shows as an example an incomplete fault tree without probabilities for the case that after releasing the accelerator the car does not stop accelerating. The diagram ramifies until all causes are deduced directly or indirectly from root causes (also called basic events) which cannot be traced back further. Complete large FTAs are usually drawn in a modular way. In practice, it is difficult to identify a criterion for aborting the chain of causes and defining one cause in the chain as the root cause, because these chains tend to infinity.

The example shows that in some cases (in practice in most cases), the causes of an event are OR-linked: e.g. input voltage 2 reaches its maximum when ground is interrupted (in this case, the potentiometer in the pedal remains connected to the positive supply with its positive terminal and to the input with its slider) or if the input has a direct short circuit to the supply voltage. In other cases, events are AND-linked: it is not directly dangerous if one return spring is broken, but acceleration goes on if both springs are broken. This second case is known as redundancy.
The probabilities of OR-linked events add if they are mutually exclusive; the probabilities of AND-linked events multiply if they are independent. Most OR-linked events are not mutually exclusive but can occur at the same time. Then both probabilities overlap, and the intersection must be counted only once, not once for each contributing event. So if p(E1) is the probability of event 1, p(E2) the probability of event 2 and p(E1 ∧ E2) the probability of common occurrence, then

p(E1 ∨ E2) = p(E1) + p(E2) − p(E1 ∧ E2)  (3.3)

In case of more than two events, the sieve formula [35, in Portuguese] (also attributed to Poincaré and Sylvester) applies, e.g. for three events E1, E2 and E3:

p(E1 ∨ E2 ∨ E3) = p(E1) + p(E2) + p(E3) − p(E1 ∧ E2) − p(E1 ∧ E3) − p(E2 ∧ E3) + p(E1 ∧ E2 ∧ E3)  (3.4)
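The sieve formula can be checked numerically. The sketch below assumes independent events, so the probability of a joint occurrence is the product of the member probabilities (an assumption of this example, not of the formula itself); the basic-event probabilities are hypothetical:

```python
from itertools import combinations
from math import prod

def p_or(probs):
    """P(E1 or ... or En) by the sieve formula for independent events:
    alternating sum over all non-empty subsets of joint probabilities."""
    total = 0.0
    for k in range(1, len(probs) + 1):
        sign = (-1) ** (k + 1)
        for subset in combinations(probs, k):
            total += sign * prod(subset)
    return total

p1, p2, p3 = 0.1, 0.2, 0.3      # hypothetical basic-event probabilities
two = p_or([p1, p2])            # p1 + p2 - p1*p2 = 0.28
three = p_or([p1, p2, p3])      # the seven-term sieve expansion
```

For independent events, the result must equal 1 minus the probability that no event occurs, which gives a convenient cross-check of the alternating sum.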