
dSoftArk Noter Q2 2013-2014

Mathias Skovgaard Birk, 20116447, DAT2


January 6, 2014

Contents
1 Test-driven development
  1.1 Definitions
  1.2 TDD Rhythm
  1.3 TDD Principles
  1.4 JUnit syntax
  1.5 Benefits
  1.6 Liabilities
  1.7 Perspective

2 Systematic black-box testing
  2.1 Definitions
  2.2 Testing Approaches
  2.3 Partitioning Heuristics
  2.4 EC Table
  2.5 Test Case Table
  2.6 Myers' Rules for Valid/Invalid ECs
  2.7 Boundary Value Analysis
  2.8 Conditions/Parameters
  2.9 Key points
  2.10 Perspective

3 Variability management
  3.1 Definitions
  3.2 Variability points
  3.3 Handling variability
    3.3.1 Source code copying
    3.3.2 Parametric
    3.3.3 Polymorphic
    3.3.4 Compositional
  3.4 The 3-1-2 Process
  3.5 STRATEGY Pattern
    3.5.1 Roles
    3.5.2 Benefits and liabilities
  3.6 ABSTRACT FACTORY Pattern
    3.6.1 Roles
    3.6.2 Benefits and liabilities

4 Test stubs and unit/integration testing
  4.1 Definitions
  4.2 Test stub
  4.3 Comment
  4.4 Key points

5 Design patterns
  5.1 Definitions
  5.2 The 3-1-2 Process
  5.3 Design pattern components
  5.4 Roles, Responsibility, Protocols
  5.5 Pattern Fragility

6 Compositional design

7 Frameworks

8 Patterns
  8.1 Creational
    8.1.1 Abstract Factory
    8.1.2 Builder
  8.2 Structural
    8.2.1 Adapter
    8.2.2 Composite
    8.2.3 Decorator
    8.2.4 Facade
    8.2.5 Proxy
    8.2.6 Null Object
    8.2.7 Model View Controller
  8.3 Behavioral
    8.3.1 Command
    8.3.2 Iterator
    8.3.3 Observer
    8.3.4 State
    8.3.5 Strategy
    8.3.6 Template Method
1 Test-driven development
Emphasis on applying the rhythm and using/understanding the values and TDD principles.

1.1 Definitions
Def. Testing Testing is the process of executing software in order to find failures.
Def. Failure A failure is the situation in which the behavior of the executing software
deviates from what is expected.
Def. Defect A defect is the algorithmic cause of a failure: some code logic that is
incorrectly implemented.
Def. Test case A test case is a definition of input values and expected output values
for the unit under test.
Def. Unit under test (UUT) The unit under test is some part of the system that
we consider to be a whole.
Def. Unit Testing Unit testing is the process of executing a software unit in isolation
in order to find failures in the unit itself.
Def. Integration Test Integration testing is the process of executing a software unit
in collaboration with other units in order to find failures in their interactions.
Def. Refactoring Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure.
Def. Direct input Direct input is values or data, provided directly by the testing code,
that affect the behaviour of the unit under test (UUT).
Def. Indirect input Indirect input is values or data, that cannot be provided directly
by the testing code, that affect the behaviour of the unit under test (UUT).
Def. Depended-On Unit (DOU) A unit in the production code that provides values or behaviour that affect the behaviour of the unit under test. NOTE: This relates to test stubs, where the DOU is for example a clock or a random-number generator that in production provides input that varies.
Def. Test Stub A test stub is a replacement of a real depended-on unit (DOU) that
feeds indirect input, defined by the test code, into the unit under test (UUT).

1.2 TDD Rhythm


1. Quickly add a test.
2. Run all tests and see the new test fail
3. Make a little change
4. Run all tests and see them all succeed
5. Refactor to remove duplication
6. (Run all tests again to check refactoring!)
In other words, you add a test from your test list, run it, and see it fail (to make sure the test actually tests something; if in doubt, you can always run an assertTrue(false) to verify that the test is being executed). Next, you make the smallest possible change to the production code - SMALL STEPS - to see that it works. Then the tests are run again, and finally everything is refactored to achieve clean code. Run all tests once more to check that the refactoring has not changed the behaviour of the code. This last step is in a sense part of regression testing, i.e. running all test cases often to ensure that the behaviour of the system has not changed - which is the advantage of automated tests over manual ones.
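
A minimal sketch of one pass through this rhythm, assuming a hypothetical Dollar class (the names are illustrative and not taken from the course code):

import org.junit.Test;
import static org.junit.Assert.*;

public class DollarTest {
    // Step 1-2: quickly add a test; it fails as long as times() is unimplemented or wrong.
    @Test
    public void shouldMultiplyAmountByFactor() {
        Dollar five = new Dollar(5);
        assertEquals(10, five.times(2).amount());
    }
}

// Step 3-4: smallest possible change ("Fake It"): return a constant so the test passes.
class Dollar {
    private final int amount;
    Dollar(int amount) { this.amount = amount; }
    Dollar times(int factor) { return new Dollar(10); } // fake it
    int amount() { return amount; }
}

// Step 5-6: refactor and re-run all tests. A second test (Triangulation) would then force
// the real implementation: return new Dollar(amount * factor);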

1.3 TDD Principles


Test First: When should you write your tests? Before you write the code that is to be
tested.
Test List: What should you test? Before you begin, write a list of all the tests you
know you will have to write. Add to it as you find new potential tests.
One Step Test: Which test should you pick next from the test list? Pick a test that
will teach you something and that you are confident you can implement.
Isolated Test: How should the running of tests affect one another? Not at all.
Evident Tests: How do we avoid writing defective tests? By keeping the testing code
evident, readable and as simple as possible.
Fake It (Till You Make It): What is your first implementation once you have a broken test? Return a constant. Once you have your tests running, gradually transform it.
Triangulation: How do you most conservatively drive abstraction with tests? Abstract
only when you have two or more examples.
Assert First: When should you write the asserts? Try writing them first.

Evident Data: How do you represent the intent of the data? Include expected and
actual results in the test itself, and make their relationship apparent. You are
writing tests for the reader, not just for the computer.
Obvious Implementation: How do you implement simple operations? Just implement them.
Representative Data: What data do you use for your tests? Select a small set of
data where each element represents a conceptual aspect or special computational
processing.
Automated Test: How do you test your software? Write an automated test.
Test Data: What data do you use for test-first tests? Use data that makes the tests
easy to read and follow. If there is a difference in the data, then it should be
meaningful. If there isn't a conceptual difference between 1 and 2, use 1.
Child Test: How do you get a test case running that turns out to be too big? Write a
smaller test case that represents the broken part of the bigger test case. Get the
smaller test case running. Reintroduce the larger test case.
Regression Test: What is the first thing you do when a defect is reported? Write the
smallest possible test that fails and that, once run, will be repaired.
It should be noted that if you use Fake It, you should afterwards add a new test to your test list. That test should then force the Fake It implementation to be completed. An example of such an approach is Triangulation, where you extend the requirements on the algorithm so that returning a constant is no longer good enough.
In addition, there are two other important principles that apply not only to TDD but to development in general:
Break: What do you do when you feel tired or stuck? Take a break.
Do Over: What do you do when you are feeling lost? Throw away the code and start
over.

1.4 JUnit syntax


Remember the import statements:

import org.junit.*;
import static org.junit.Assert.*;

Assertion methods:
assertTrue
assertFalse
assertNull
assertNotNull
assertEquals

A String parameter can be used as a description, e.g. assertTrue("Should be true", true);
Use @Test to mark a test method.
Use @Before to mark a setup method (fixture) that is run before every single test.
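
A small, self-contained test class illustrating the syntax above (the Thermometer class and its methods are hypothetical, used only for illustration):

import org.junit.*;
import static org.junit.Assert.*;

public class TestThermometer {
    private Thermometer thermometer;

    @Before
    public void setUp() {
        // Fixture: runs before every single test method.
        thermometer = new Thermometer();
    }

    @Test
    public void shouldConvertZeroCelsiusToThirtyTwoFahrenheit() {
        assertEquals("0 C should be 32 F", 32.0, thermometer.toFahrenheit(0.0), 0.001);
    }

    @Test
    public void shouldRejectTemperaturesBelowAbsoluteZero() {
        assertFalse("Below absolute zero is invalid", thermometer.isValidCelsius(-300.0));
    }
}

// Hypothetical production class under test.
class Thermometer {
    public double toFahrenheit(double celsius) { return celsius * 9.0 / 5.0 + 32.0; }
    public boolean isValidCelsius(double celsius) { return celsius >= -273.15; }
}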

1.5 Benefits
Clean (maintainable) code that works (reliable)
Fast feedback gives the programmer confidence (when something fails, you always know the defect lies in the previous step)
Trustworthy/reliable software (because of the test cases)
Only the desired behaviour is programmed (since we only program what the tests require)
No driver code, since we have the test cases
A structured programming process

1.6 Liabilities
If the interfaces for some reason have to change, then all tests have to change as well. Tests must be kept simple, otherwise they are worthless to the reader.

1.7 Perspective
TDD relates to integration testing, test stubs, black-box testing, and good software (flexible, reliable).

2 Systematic black-box testing


Emphasis on applying and understanding equivalence partitioning techniques and boundary value analysis.

2.1 Definitions
Def. Systematic testing Systematic testing is a planned and systematic process with
the explicit goal of finding defects in some well-defined part of the system.

Def. Testing Testing is the process of executing software in order to find failures.
Def. Failure A failure is the situation in which the behavior of the executing software
deviates from what is expected.
Def. Defect A defect is the algorithmic cause of a failure: some code logic that is
incorrectly implemented.
Def. Test case A test case is a definition of input values and expected output values
for the unit under test.
Def. Unit under test (UUT) The unit under test is some part of the system that
we consider to be a whole.
Def. Black-box testing The UUT is treated as a black box. The only knowledge we
have to guide our testing is the specification of the UUT and a general knowledge
of common programming techniques and algorithmic constructs.
Def. White-box testing The full implementation of the UUT is known, so the actual
code can be inspected in order to generate test cases.
Def. Equivalence class (EC) A subset of all possible inputs to the UUT, that has
the property that if one element in the subset demonstrates a defect during testing,
then we assume that all other elements in the subset will demonstrate the same
defect.
Def. Soundness For a set of equivalence classes to be sound, the ECs must uphold the criteria of Coverage, Representation and Disjointness.
Def. Coverage Every possible input belongs to one of the equivalence classes.
Def. Representation If a failure is demonstrated on a particular member of an equivalence class, the same failure is demonstrated by any other member of that class.
Def. Disjointness No input belongs to more than one equivalence class.
Myers' rule for valid ECs Until all valid ECs have been covered, define a new test case that covers as many of the uncovered valid ECs as possible.
Myers' rule for invalid ECs Until all invalid ECs have been covered, define a new test case that covers one, and only one, of the uncovered invalid ECs.

2.2 Testing Approaches

No testing: Very small methods are not worth testing, since their functionality is so simple that the test code becomes larger than the production code. This applies, for example, to getters/setters.
Explorative testing: Explorative testing means using intuition and experience to test. This style can be very useful for methods of medium complexity and is also the dominant style used within TDD.
Systematic testing: Systematic testing is used when we are dealing with very complex methods, or methods with a very low failure tolerance (e.g. software for spacecraft). Here we systematically find equivalence classes for the conditions that affect the system and build our test cases from them.

2.3 Partitioning Heuristics

Basically, we simply (re)partition the ECs whenever we are in doubt about whether the Representation criterion is upheld. ECs are often split into valid and invalid ones, depending on the validity of the input. Note that valid input can also be invalid if, for example, it makes the method bail out (e.g. an if statement at the start of a method).
A good way to partition your equivalence classes is to look for conditions in the specification of the UUT. Given a condition, ECs can be derived using the following guidelines:
Range If a condition is specified as a range of values, select one valid EC that covers
the allowed range, and two invalid ECs, one above and one below the end of the
range.
Set If a condition is specified as a set of values then define an EC for each value in the
set and one EC containing all elements outside the set.
Boolean If a condition is specified as a must be condition then define one EC for the
condition being true and one EC for the condition being false.

2.4 EC Table
An example of an EC table is shown here:

Figure 1
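
Since the figure is not reproduced in this text version, here is a minimal illustrative EC table for a hypothetical condition "month must lie in the range 1..12" (the example is assumed, not from the original notes):

  Condition                  Valid ECs        Invalid ECs
  1 <= month <= 12 (range)   [1] 1..12        [2] month < 1
                                              [3] month > 12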

2.5 Test Case Table
An example of a test case table is shown here:

Figure 2
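
Continuing the assumed month example, a minimal test case table built from those ECs with Myers' rules (Section 2.6) could look like this:

  Test case   Input (month)   ECs covered   Expected output
  TC1         7               [1]           accepted
  TC2         0               [2]           rejected
  TC3         13              [3]           rejected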

2.6 Myers' Rules for Valid/Invalid ECs

Test cases can be formed from the ECs with the help of Myers' rules:
1. Until all valid ECs have been covered, define a test case that covers as many uncovered valid ECs as possible.
2. Until all invalid ECs have been covered, define a test case whose element lies in only a single invalid EC. There must be only one in order to avoid masking.
In addition, it is a good idea to write tests that lie right on the boundaries (boundary value analysis, three for each boundary), so that simple programmer mistakes, such as a missing = in an if statement, are caught.

2.7 Boundary Value Analysis

Boundary value analysis means choosing test data on both sides of the boundaries between equivalence classes. If a valid EC range goes from 1 to 5, you would test at 0, 1, 5 and 6. It is combined with systematic testing, but is really also used indirectly in TDD when choosing test data.
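
A minimal JUnit sketch of exactly those boundary tests, assuming a hypothetical Rating.isValid(int) method that accepts the range 1..5:

import org.junit.Test;
import static org.junit.Assert.*;

public class TestRatingBoundaries {
    // Boundary value analysis for the valid range 1..5: test at 0, 1, 5 and 6.
    @Test public void shouldRejectZero() { assertFalse(Rating.isValid(0)); }
    @Test public void shouldAcceptOne()  { assertTrue(Rating.isValid(1)); }
    @Test public void shouldAcceptFive() { assertTrue(Rating.isValid(5)); }
    @Test public void shouldRejectSix()  { assertFalse(Rating.isValid(6)); }
}

// Hypothetical production code.
class Rating {
    static boolean isValid(int rating) { return rating >= 1 && rating <= 5; }
}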

2.8 Conditions/Parameters
It is not enough to look at the parameters of the method under test. You have to look at conditions rather than parameters.

2.9 Key points

Do not test on preconditions.
Do not test for excessively stupid software. Partitioning of ECs cannot predict stupid programmers and easter eggs.
Myers' heuristics must be applied thoughtfully, not uncritically.

2.10 Perspective
TDD and good software (flexible, reliable).

3 Variability management
Emphasis on applying the four different techniques for handling variability and analysing
their benefits and liabilities.

3.1 Definitions
Def. Change by modification Change by modification is when software changes are
introduced by modifying existing production code.
Def. Code bloat Code bloat is the production of code that is perceived as unnecessarily long, slow or otherwise wasteful of resources.
Def. Switch creep Switch creep is the tendency that conditional statements become
more and more complex as software ages.
Def. Change by addition Change by addition is when software changes are introduced by adding new production code instead of modifying existing.
Def. Flexibility The capability of the software product to support added/enhanced
functionality purely by adding software units and specifically not by modifying
existing software units.
Def. Maintainability (ISO 9126) The capability of the software product to be modified. Modifications may include corrections, improvements or adaptation of the
software to changes in environment, and in requirements and functional specifications.
Def. Analysability (ISO 9126) The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be
modified to be identified.
Def. Changeability (ISO 9126) The capability of the software product to enable a
specified modification to be implemented.
Def. Stability (ISO 9126) The capability of the software product to avoid unexpected effects from modifications of the system.
Def. Testability (ISO 9126) The capability of the software product to enable a modified system to be validated.


3.2 Variability points
A variability point is a section of code that has to vary. We use this when we need some form of reuse, either in the form of extra requirements on the functionality (e.g. the algorithm must behave differently every Saturday night) or when new customers want the same product with a small change.

3.3 Handling variability

3.3.1 Source code copying

The crudest solution is to make a direct source code copy and then only change the small place where the variation is wanted. Benefits and liabilities:
Benefits:
Simple
Fast
Decouples the variants completely (decoupling)
Liabilities:
Maintaining several systems is cumbersome - the multiple maintenance problem
It is hard to fix a defect in one variant if the defect is in code the variants have in common; the defect must be fixed in every variant
The variants drift apart and end up as different products
3.3.2 Parametric

Another solution is to control the variants with if or switch statements in the code (see the sketch below). Benefits and liabilities:
Benefits:
Simple
No multiple maintenance problem
Liabilities:
Change by modification, with a high risk of introducing defects in old code
Code bloat (with 43 variants it becomes very hard to keep an overview)
Poor analysability/readability - ifs upon ifs upon ifs
Cohesion problem from the added responsibility - the specific class suddenly also becomes responsible for handling variants
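
A minimal sketch of the parametric approach, using a hypothetical rate-calculation example (all names are illustrative):

// Parametric variability: the variant is selected by a parameter and a switch statement.
public class RateCalculator {
    public enum Variant { STANDARD, WEEKEND } // hypothetical variants

    private final Variant variant;

    public RateCalculator(Variant variant) { this.variant = variant; }

    public int calculateRate(int minutes) {
        // Every new variant means modifying this method (change by modification).
        switch (variant) {
            case WEEKEND: return minutes * 1;
            default:      return minutes * 2;
        }
    }
}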

3.3.3 Polymorphic

A third solution is to handle variants by extending classes and overriding the methods you want to change. Benefits and liabilities:
Benefits:
No multiple maintenance problem
Change by modification only the first time, after which it is change by addition for each subclass
Easy to read
Liabilities:
Increased number of classes
You can only inherit from one class (C# and Java), so reuse across variants is hard
Objects are bound at compile time (the variant cannot be changed at run time)
3.3.4 Compositional

A fourth, and the preferred, solution is to handle variants by delegating the variant to its own abstraction in the form of an interface, which is then composed with the products (see the sketch after this list). Benefits and liabilities:
Benefits:
Easy to read
Run-time binding
Separation of responsibilities
Separation of tests
The variant is selected in only one place and is easy to find
Inheritance is still possible if the need arises
Change by addition, reliable
Liabilities:
Increased number of interfaces and classes
Users must be familiar with the solution and all of its delegations
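
A minimal sketch of the compositional approach (which is essentially the STRATEGY pattern of Section 3.5), continuing the hypothetical rate example from above:

// The varying behaviour is extracted into its own abstraction (an interface).
interface RateStrategy {
    int calculateRate(int minutes);
}

// Each variant is change by addition: a new implementation, no modification of existing code.
class StandardRateStrategy implements RateStrategy {
    public int calculateRate(int minutes) { return minutes * 2; }
}

class WeekendRateStrategy implements RateStrategy {
    public int calculateRate(int minutes) { return minutes * 1; }
}

// The product (the same RateCalculator as above, now rewritten) delegates to the strategy,
// so the variant can be chosen, and even changed, at run time.
class RateCalculator {
    private RateStrategy strategy;
    RateCalculator(RateStrategy strategy) { this.strategy = strategy; }
    int calculateRate(int minutes) { return strategy.calculateRate(minutes); }
}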

3.4 The 3-1-2 Process

The 3-1-2 process is a general recipe for handling variability well:
1. Identify some behaviour/operation that varies
2. Extract a responsibility that covers this operation, and express it in an interface
3. Perform the operation by delegating the task to a concrete implementation of the interface, which is one variant of the operation.
The name comes from:
1. Program to an interface, not an implementation
2. Favour object composition over class inheritance
3. Consider what should vary in your design, and encapsulate the behaviour that varies

3.5 STRATEGY Pattern


The STRATEGY pattern is one of the best design patterns for handling variations.

Figure 3

3.5.1 Roles

Strategy: Specifies the responsibility and the interface of the algorithm
ConcreteStrategy: Defines the concrete operation that fulfils the responsibility
Context: Performs the work for the Client by delegating to a strategy

3.5.2 Benefits and liabilities

Benefits:
Strategies eliminate conditional statements
Avoids subclassing
Avoids multiple maintenance
Change by addition, not modification
Makes separate testing of Context and Strategy possible
Liabilities:
Increased number of objects
Clients must know the strategies
A small performance overhead from the delegation

3.6 ABSTRACT FACTORY Pattern


Abstract Factory can be used to gather the construction of variants in one place and control it by letting factories assemble components into a complete product.

Figure 4


3.6.1 Roles

Abstract Factory: Defines a common interface for object creation
ProductX: Defines the interface for an object, ConcreteProductXY (product X in variant Y)
ConcreteFactoryY: Is responsible for creating the Products belonging to a variant Y, which is a family of objects that are consistent with each other
3.6.2 Benefits and liabilities

Benefits:
Low coupling between client and products
Makes it easy to exchange product families
Ensures consistency among products
Change by addition, not modification
The parameter list of the client's constructor stays intact
Liabilities:
Introduces extra classes and objects
Supporting new aspects of variation is hard - it requires changes in all concrete factories
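
A minimal sketch of these roles, assuming a hypothetical GUI widget example (all names are illustrative):

// ProductX roles: one interface per product in the family.
interface Button { void paint(); }
interface Scrollbar { void scroll(); }

// ConcreteProductXY roles: product X in variant Y (here Y = Motif).
class MotifButton implements Button {
    public void paint() { System.out.println("Motif button"); }
}
class MotifScrollbar implements Scrollbar {
    public void scroll() { System.out.println("Motif scrollbar"); }
}

// Abstract Factory role: a common interface for creating the whole family.
interface WidgetFactory {
    Button createButton();
    Scrollbar createScrollbar();
}

// ConcreteFactoryY role: creates products that belong to, and are consistent within, variant Y.
class MotifWidgetFactory implements WidgetFactory {
    public Button createButton() { return new MotifButton(); }
    public Scrollbar createScrollbar() { return new MotifScrollbar(); }
}

// The client depends only on the abstract roles; swapping the factory swaps the whole family.
public class Client {
    public static void main(String[] args) {
        WidgetFactory factory = new MotifWidgetFactory();
        factory.createButton().paint();
        factory.createScrollbar().scroll();
    }
}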

4 Test stubs and unit/integration testing


Emphasis on applying test stubs and understanding the testing levels of unit/integration/system testing.

4.1 Definitions
Direct input Direct input is values or data, provided directly by the testing code, that
affect the behavior of the unit under test (UUT).
Indirect input Indirect input is values or data, that cannot be provided directly by
the testing code, that affect the behavior of the unit under test (UUT).
Depended-on unit (DOU) A unit in the production code that provides values or
behavior that affect the behavior of the unit under test.
Test stub A test stub is a replacement of a real depended-on unit that feeds indirect
input, defined by the test code, into the unit under test.
Unit under test A unit under test is some part of a system that we consider to be a
whole.
Unit test Unit testing is the process of executing a software unit in isolation in order
to find defects in the unit itself.
Integration test Integration testing is the process of executing a software unit in collaboration with other units in order to find defects in their interactions.
System test System testing is the process of executing the whole software system in
order to find deviations from the specified requirements.
Test stub Gets indirect input under control.
Test spy Records indirect output in order to check whether the UUT issues the right commands.
Mock object A double, created and programmed dynamically by a mock library, that may serve both as a stub and as a spy.
Fake object A fast but realistic replacement (used when the real UUT-DOU is slow). A double whose purpose is to be a high-performance replacement for a slow or expensive DOU.

4.2 Test stub
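The original notes illustrate this with a figure only; below is a minimal code sketch of the idea, assuming a hypothetical DiscountPolicy unit whose depended-on unit is the system clock (all names are illustrative):

import org.junit.Test;
import static org.junit.Assert.*;

// Abstraction of the DOU: the real implementation would read the system clock (indirect input).
interface Clock {
    boolean isWeekend();
}

// Test stub: replaces the real DOU and feeds indirect input defined by the test code.
class FixedClock implements Clock {
    private final boolean weekend;
    FixedClock(boolean weekend) { this.weekend = weekend; }
    public boolean isWeekend() { return weekend; }
}

// UUT: its behaviour depends on indirect input from the Clock DOU.
class DiscountPolicy {
    private final Clock clock;
    DiscountPolicy(Clock clock) { this.clock = clock; }
    int discountPercent() { return clock.isWeekend() ? 10 : 0; }
}

public class TestDiscountPolicy {
    @Test
    public void shouldGiveTenPercentDiscountOnWeekends() {
        DiscountPolicy policy = new DiscountPolicy(new FixedClock(true));
        assertEquals(10, policy.discountPercent());
    }

    @Test
    public void shouldGiveNoDiscountOnWeekdays() {
        DiscountPolicy policy = new DiscountPolicy(new FixedClock(false));
        assertEquals(0, policy.discountPercent());
    }
}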


4.3 Comment
You have to know when not to test; for example, you should trust that companies like Sun and Microsoft test their own software.

4.4 Key points
Many software units depend on indirect input that influences their behavior. Typical indirect inputs are external resources like hardware sensors, random-number generators, system clocks, etc. Test stubs replace the real units and allow the testing code to control the indirect input.

5 Design patterns
Emphasis on finding the proper design pattern for a problem at hand and applying it.

5.1 Definitions
Def. Software Architecture The software architecture of a computing system is the
structures of the system, which comprise software elements, the externally visible
properties of those elements and the relationships among them.
Def. Design Pattern A design pattern is defined by a set of roles, each role having a
specific set of responsibilities, and by a well defined protocol (interaction pattern)
between these roles.
Def. Design Pattern (Gamma et al.) Patterns are descriptions of communicating
objects and classes that are customized to solve a general design problem in a
particular context.
Def. Design Pattern (Beck et al.) A design pattern is a particular prose form of
recording design information such that designs which have worked well in the past
can be applied again in similar situations in the future.
NOTE: "prose form" means a template, and this template always contains at least: Name, Problem, Solution and Consequences.
Def. Pattern Fragility Pattern fragility is the property of design patterns that their benefits can only be fully utilized if the pattern's object structure and protocol are implemented correctly.
Def. Coupling Coupling is a measure of how strongly dependent one software unit is on other software units.
Def. Cohesion Cohesion is a measure of how strongly related and focused the responsibilities of a software unit are.
Law of Demeter Do not collaborate with indirect objects.
NOTE: Also called "Don't Talk to Strangers"
Def. Object Orientation (Nygaard et al.) A program execution is viewed as a physical model simulating the behaviour of either a real or imaginary part of the world.
NOTE: Also known as the model-centric focus


Def. Object Orientation (Budd) An object-oriented program is structured as a community of interacting agents called objects. Each object has a role to play. Each
object provides a service or performs an action that is used by other members of
the community.
NOTE: Also known as the responsibility-centric focus
Def. Responsibility The state of being accountable and dependable to answer a request.
Def. Protocol A convention detailing the expected sequence of interactions or actions
expected by a set of roles.
Def. Role (Software) A set of responsibilities and associated protocol with associated
roles.

5.2 The 3-1-2 Process

The 3-1-2 process is a general recipe for handling variations in different ways. It goes like this (the numbering refers to the three underlying principles listed in Section 3.4):
3. Identify an operation that will vary
1. Create a responsibility that covers this operation and express it in an interface.
2. Perform the operation by delegating the task to a subordinate object that implements the interface.

5.3 Design pattern components

Design patterns are defined in several different ways, but the definition we follow here is:
Def. Design Pattern A design pattern is defined by a set of roles, each role having a specific set of responsibilities, and by a well defined protocol (interaction pattern) between these roles.

We use patterns to implement various best-practice design components in order to achieve some quality (typically flexibility and maintainability). A design pattern is typically described by at least four things:
1. A name (Name)
2. A problem the pattern solves (Problem)
3. A solution (Solution)
4. The consequences of the solution (Consequences)
In the book, most patterns also contain a description of roles (Roles), a UML diagram (Structure) and an intent of the pattern (Intent).

5.4 Roles, Responsibility, Protocols

Patterns are built up of Roles, the Responsibilities of these Roles, and their mutual communication (Protocols). Let us define these three terms:
Def. Role (Software) A set of responsibilities and associated protocol with associated roles.
Def. Responsibility The state of being accountable and dependable to answer a request.
Def. Protocol A convention detailing the expected sequence of interactions or actions expected by a set of roles.

UML cannot show roles, so it is important to judge patterns not by their diagrams but by their Intent and Roles, with the associated Responsibilities and Protocols.

5.5 Pattern Fragility

Patterns are a means to reach a goal, not a goal in themselves! An incorrectly implemented pattern can therefore give you all the liabilities of the pattern in question and none of its benefits. Pattern fragility is defined as:
Def. Pattern Fragility Pattern fragility is the property of design patterns that their benefits can only be fully utilized if the pattern's object structure and protocol are implemented correctly.

Typical mistakes are (see the sketch below):
1. Using class names in declarations instead of interface names
2. Binding to a class at the wrong time (i.e. at a point where the class can no longer be varied)
3. Making a quick fix for some variability (typically with parameters) because of stress or deadlines
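
A minimal sketch of mistake 1, with hypothetical names:

// Hypothetical abstraction and variant.
interface Sorter { void sort(int[] data); }
class QuickSorter implements Sorter {
    public void sort(int[] data) { java.util.Arrays.sort(data); }
}

class FragileClient {
    // Mistake 1: declared with the class name - the variant can no longer be swapped.
    private QuickSorter sorter = new QuickSorter();
}

class RobustClient {
    // Declared with the interface name - any Sorter variant can be injected later.
    private Sorter sorter = new QuickSorter();
}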

6 Compositional design
Emphasis on applying compositional design principles and relating them to the concepts of behavior, responsibilities, roles, and multi-dimensional variance.

7 Frameworks
Emphasis on designing frameworks and understanding framework theory.

8 Patterns
In this section the different design patterns are presented; among other things, UML diagrams are used to illustrate the patterns.

8.1 Creational

8.1.1 Abstract Factory
p. 217

8.1.2 Builder
p. 301

8.2 Structural

8.2.1 Adapter
p. 295

8.2.2 Composite
p. 322

8.2.3 Decorator
p. 289

8.2.4 Facade
p. 282

8.2.5 Proxy
p. 317

8.2.6 Null Object
p. 325

8.2.7 Model View Controller
Define a loosely coupled design to form the architecture of graphical user interfaces having multiple windows and handling user input from mouse, keyboard, or other input sources.
p. 342

8.3 Behavioral

8.3.1 Command
p. 308

8.3.2 Iterator
p. 312

8.3.3 Observer
p. 335

8.3.4 State
p. 185

8.3.5 Strategy
p. 130

8.3.6 Template Method
p. 366
