Specification and Verification
FUNDAMENTALS OF SOFTWARE TESTING – MODEL-BASED TESTING & RUNTIME VERIFICATION
CHRISTIAN COLOMBO
UNIVERSITY OF MALTA
NOVEMBER 2022
OUTLINE
• High-Level Introduction
• A Specification Language – Finite State Automata
• An Overview of Runtime Verification
• Model-Based Testing
• System Verification in a Nutshell
PART I
HIGH-LEVEL INTRODUCTION
SPECIFICATION
(Diagram: the spec carves out the acceptable behaviours from the set of all behaviours.)
IMPLEMENTATION?
(Diagram: where does the implementation sit relative to the spec within the set of all behaviours?)
IMPLEMENTATION
• The behaviours allowed by the implementation are a subset of those allowed by the
specification
Specificatio
n
Impleme
nt-ation
Does the
implementation really
follow the specification?
THE TESTING PERSPECTIVE
(Diagram: a Test drives the Implementation; an Oracle judges the outcome.)
THE RUNTIME VERIFICATION PERSPECTIVE
(Diagram: the User drives the Implementation; a Monitor observes it.)
LIGHTING SYSTEM EXAMPLE
PART II
A SPECIFICATION LANGUAGE – FINITE STATE AUTOMATA
REMEMBER THIS?
ALGORITHM
(Diagram: the lighting algorithm expressed as a finite state automaton with dim transitions.)
LIGHTING SYSTEM SIMULATOR
(Demo: simulator screenshot; events include lifting the receiver.)
PART III
AN OVERVIEW OF RUNTIME VERIFICATION
RUNTIME VERIFICATION
(Diagram, built up in steps: an RV tool takes the Specification and generates a Monitor/Verifier that runs alongside the System.)
RV ARCHITECTURE
(Diagram: the System sends events to the Monitor/Verifier, which replies with ok/error verdicts based on the Specification.)
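In code, this architecture reduces to a very small interface. A minimal Java sketch (the names Monitor, Verdict, and step are illustrative, not taken from any particular RV tool):

// The instrumented system pushes each event to the monitor,
// which answers with an ok/error verdict.
public interface Monitor<E> {
    enum Verdict { OK, ERROR }
    Verdict step(E event); // called once per observed event
}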
SEPARATION OF CONCERNS
(Diagram, built up in steps: the RV tool translates the Specification into weaving instructions, and a weaving tool injects the resulting monitor into the System.)
SEPARATION OF CONCERNS: ASPECT-ORIENTED PROGRAMMING
• Consider adding a logging feature for all types of money transfers, which can be toggled on or off.
• Either we add a line to each transfer method:
void transfer() {
    if (log.enabled) { log.write("…"); }
    ...
}
• Or we capture the logging concern once as an aspect and weave it in, as sketched below.
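A minimal AspectJ sketch of the aspect-oriented alternative (the Account class, the transfer method signature, and the Log helper are illustrative assumptions):

// Hypothetical aspect: the logging concern is written once and woven
// into every matching transfer method, instead of being repeated inline.
public aspect TransferLogging {
    before(): execution(void Account.transfer(..)) {
        if (Log.enabled) {                       // assumed toggle flag
            Log.write(thisJoinPoint.getSignature().toString());
        }
    }
}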
PROPERTY SPECIFICATION LANGUAGES: AUTOMATA
(Diagram: an automaton over events such as login, transfer, and logout; transitions may carry conditions, e.g. u.login | !u.isBlackListed().)
PROPERTY SPECIFICATION LANGUAGES: AUTOMATA
(Diagram: an ATM automaton with events insertCard(), badPin(), goodPin(), and withdrawCard(); insertCard() initialises wpin=0, and reaching wpin=3 via badPin() signals a violation.)
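Such an automaton can be run directly as a monitor. A minimal Java sketch (the state names and the wiring to a real ATM are assumptions; only the events and the wpin counter come from the diagram):

// Executable version of the ATM automaton above.
public class AtmPinMonitor {
    private enum State { IDLE, AWAITING_PIN, LOGGED_IN, VIOLATION }
    private State state = State.IDLE;
    private int wpin = 0; // wrong-PIN counter

    public void insertCard()   { state = State.AWAITING_PIN; wpin = 0; }
    public void goodPin()      { if (state == State.AWAITING_PIN) state = State.LOGGED_IN; }
    public void badPin() {
        if (state == State.AWAITING_PIN && ++wpin == 3) state = State.VIOLATION;
    }
    public void withdrawCard() { if (state != State.VIOLATION) state = State.IDLE; }
    public boolean violated()  { return state == State.VIOLATION; }
}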
COMPLETE PICTURE
(Diagram, built up in steps: (1) the USER writes the Specification; (2) the LARVA compiler translates it into AspectJ code, which instruments the MONITORED SYSTEM so that EVENTS flow to the VERIFYING SYSTEM and FEEDBACK flows back; (3) verdicts are recorded in a LOG.)
PROPERTY SPECIFICATIONS
(Diagram: an automaton over login, logout, and badpassword events, flagging repeated bad passwords.)
PROPERTIES IN LARVA
(Diagram: a Larva automaton whose transitions are labelled in the form event \ condition \ action, e.g. login \ \ count=0, plus logout and timer@30 timeout transitions.)
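For orientation, the bad-logins property might be written in a Larva script roughly as follows. This is an approximation of the Larva (DATE) syntax written from memory; the state names and the count threshold are assumptions, so treat it as a sketch rather than verbatim tool input:

PROPERTY badlogins {
    STATES {
        BAD      { toomanybad }
        NORMAL   { loggedin   }
        STARTING { loggedout  }
    }
    TRANSITIONS {
        loggedout -> loggedin   [login\\count=0;]
        loggedout -> loggedout  [badpassword\count<2\count++;]
        loggedout -> toomanybad [badpassword\count>=2\]
        loggedin  -> loggedout  [logout]
    }
}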
ASPECTJ
// Excerpts from the advice code generated by the Larva compiler (identifiers as generated):
User u;
u = u1;
_cls_badlogins31 _cls_inst =
    _cls_badlogins31._get_cls_badlogins31_inst(u); // look up this user's monitor instance
_cls_inst.u1 = u1;                                 // pass the event's bindings to the monitor
_cls_inst._call(thisJoinPoint.getSignature().toString(), 4/*logout*/); // fire the logout event
}

User u;
u = u1;
_cls_badlogins31 _cls_inst =
    _cls_badlogins31._get_cls_badlogins31_inst(u);
_cls_inst.sessionID = sessionID;                   // this advice also passes the session ID
_cls_inst.u1 = u1;
LOG SAMPLE
AUTOMATON::> badlogins(user) STATE::>loggedout
aspects._asp_badlogin80.ajc$before$aspects__asp_badlogi(_asp_badlogin80.aj:30)
PART IV
MODEL-BASED TESTING
TESTING
(Diagram: a Driver exercises the Implementation; an Oracle checks the results.)
WHY MODEL-BASED TESTING?
(Diagram: Driver, Implementation, and Oracle, with the Specification alongside.)
TESTING
(Diagram: both the Driver and the Oracle are constructed manually from the Specification.)
MODEL-BASED TESTING
(Diagram: the Model plays the roles of both Driver and Oracle for the Implementation.)
WHERE ARE THE BUGS?
(Diagram: bugs may hide in the Model, in the Implementation, or in how either relates to the Specification.)
HOW CAN WE FIND THEM?
(Diagram: check the correspondence between the Model and the Implementation, both of which are derived from the Specification.)
GENERIC MODEL-BASED TESTING
TEST SUITE GENERATION
• Random
• Greedy algorithm
• Lookahead
EXECUTING TESTS
(Diagram: applying an action takes the Model to the model-after-action.)
MODEL-BASED TESTING
(Diagram: applying actions to the Model generates test cases; the model-after-action is used for checking test output.)
EXAMPLE
(Diagram: applying the Dim action takes Model (Normal) to Model (Dim); does the system under test correspond?)
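The execution loop behind these diagrams is small. A Java-style sketch (Model, Sut, Action, and their methods are illustrative types, not from any particular framework):

// Illustrative MBT harness: drive the model and the SUT in lockstep
// and compare their observable states after every action.
static void runTest(Model model, Sut sut, List<Action> testCase) {
    for (Action a : testCase) {
        model.apply(a);      // expected effect on the model
        sut.perform(a);      // actual effect on the implementation
        if (!model.state().equals(sut.observableState()))
            throw new AssertionError("Model and SUT diverge after " + a);
    }
}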
MEASURING TEST EFFECTIVENESS
Assertions
THE SYSTEM: VENDING MACHINE
• Define a Model
  • Normally using an FSM with variables, conditions, and actions
• Build a graph of possible transitions
• Generate abstract tests
• Add operations on the SUT
• Update any tracked variables
• Add JUnit assertions on the SUT after applying actions
MODEL IMPLEMENTS THE FSMMODEL INTERFACE
DECLARE ACTIONS THAT CAN BE CARRIED OUT WITH THE @ACTION ANNOTATION
GUARDS CAN BE INTRODUCED
GENERATING TESTS IN MODELJUNIT
• Random tester
• Greedy tester
  • Decides at the current state which transitions are yet to be visited
• Lookahead tester
  • Looks ahead to find unvisited transitions
• Choose a test generation algorithm
• Choose how many tests you want
MEASURE COVERAGE
• Add a coverage metric
• Print statistics
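Putting these steps together, a sketch in the style of ModelJUnit (the vending-machine model is illustrative, and exact API details may vary across ModelJUnit versions):

import nz.ac.waikato.modeljunit.*;
import nz.ac.waikato.modeljunit.coverage.TransitionCoverage;

public class VendingMachineModel implements FsmModel {
    private int credit = 0;                                // tracked model variable

    public Object getState()           { return credit; } // abstract model state
    public void reset(boolean testing) { credit = 0; }    // restart the model (and SUT)

    @Action public void insertCoin() { credit += 25; }    // also drive the SUT here

    public boolean vendGuard() { return credit >= 100; }  // guard: vend only when affordable
    @Action public void vend()       { credit -= 100; }   // also assert on the SUT here

    public static void main(String[] args) {
        Tester tester = new GreedyTester(new VendingMachineModel());
        tester.buildGraph();                               // explore the transition graph
        tester.addCoverageMetric(new TransitionCoverage());
        tester.generate(50);                               // choose how many test steps
        tester.printCoverage();                            // print statistics
    }
}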
PART V
SYSTEM VERIFICATION IN A NUTSHELL
WHY TEST?
EXECUTION GRAPH
(Diagram: a branching graph of program states labelled with propositions such as X, Y, A, B and their negations, ending in RET or ~RET.)
THE QUESTION
(Diagram: the execution graph again, with its RET and ~RET outcomes.)
TESTING: THE CHALLENGES… (1)
• Test-case generation is challenging.
• Path coverage: What percentage of paths (e.g. going through the then or the else branch of a conditional) have we tried?
• Property coverage: What percentage of property cases have we covered? E.g. is "(if the user is whitelisted then the transaction should pass) and (if the user is blacklisted then the transaction should be frozen)" tested only for graylisted users, or for gray- and whitelisted users?
SUMMARY: TESTING
• Pros:
• (Relatively) easy to set up
• Can be performed throughout the development phase
• (Usually) tests can be rerun across versions
• Cons:
• Difficult to generate paths intelligently
• Can only talk about what happens, not about what may happen
• We can (normally) never say “Correct”
WHAT ABOUT MODEL CHECKING?
(Diagram: the execution graph; model checking explores all paths.)
MODEL CHECKING: THE CHALLENGES
• Pros:
• We can (possibly) say “Correct”
• We can talk about richer properties than in testing
• Cons:
• Does not scale up
• Despite the original intention, the tools are far from providing
push-button technology
• We require a formal semantics of the language
VERIFICATION SUMMARY
• Observation 1:
a. Checking a whole system is usually computationally expensive…
b. But checking a single path is usually not (hence oracles in testing).
• Observation 2:
a. Generating a representative set of paths is tough…
b. because the system may go through some other path which we may
not have checked before… (hence complete coverage in model
checking).
A LOGICAL CONCLUSION…
• So why not:
  • check properties only on certain execution paths,
  • but continue checking after deployment to ensure that any execution paths followed by the live system do not violate the property (this is runtime monitoring),
  • and if they do, fix the system or just stop it (this is runtime verification).
SO FINALLY, RUNTIME VERIFICATION
(Diagram: the execution graph; only the path actually executed is checked.)
RV: THE CHALLENGES… (1)
• Pros:
• We can say “The system never continues after failure.”
• Scales up.
• Easy to adopt.
• Cons:
• We can never say “The system is correct.”
• Overheads (time and memory) may be prohibitive.
• Can only talk about finite traces.
• Can only be performed at runtime.
• Does not remove bugs, but stops their consequences.
SUMMARY