
Topics in Software Engineering – Summary

Lecture 1- Overview and Lifecycle models (Part 1)

Software engineering is a set of methodologies, techniques and tools to build high-quality
software in a cost-effective way. It is a systematic approach which operates under specific
development constraints and follows a defined process.

It is not really an engineering discipline but has the potential to become one.

In general, engineering is: creating cost-effective solutions (economical use of all resources,
including money) to practical problems (problems that matter to people outside the
engineering domain) by applying scientific knowledge (solving problems using science,
mathematics and design analysis) to build things (usually something in the real world) in the
service of mankind (developing technology and expertise that will support society).

“Engineering shares prior solutions rather than relying always on virtuoso problem solving”

The development of engineering can be divided into 3 phases


1. Craft- amateurs and virtuosos, knowledge does not propagate. Usually there is a
waste of material and small scale production with little commercialization
a. start-ups, early large systems
2. Commercialization- Skilled workers, training in operational procedures with concern
for cost and material and large scale production.
a. most software production these days
b. What they use: Structured tools (IDEs), Automated testing, lifecycle models
3. Engineering – educated professionals, using scientific analysis and theory. Enabling
new application and within specialized market segments.
a. These are rare cases
b. Using algorithms, data structures and computation models.

Currently software engineering is a misleading term because it diverts the attention from
what needs to be done- need to focus on creating the theory and science for a real
engineering discipline.
The path to “true” engineering:
1. Define the knowledge required for experts- this is rapidly evolving and there are
some early attempts to do so (IEEE SWEBOK)
2. Make this knowledge accessible- Documentation of libraries, wikis, forums. These trade
authority, accuracy and organization for extensiveness and being up to date.
3. Support repetition and re-use- through design patterns, standard libraries and
integrated environments.
4. Professional specialization- no one can master everything. Specializing in
technologies, applications and problem classes.
5. Improve coupling between science and commercial practice- Better technology
transfer from theory to practice. Practical problems to motivate theoretical
investigations.

By Ran Erez
Measuring progress- the “software development problem” will never be “solved” because
we will always build systems at the edge of our capabilities. We can measure the progress
by the expansion of this edge (of our capabilities) or by the growth of classes of problems
which are solved routinely.

Software engineering is different than other engineering disciplines because:


1. There is no fundamental theory
2. Change is constantly happening and the technology is rapidly evolving
3. Negligible manufacturing costs
4. No borders

The software crisis


The difficulty of writing useful and efficient computer programs in the required time. The
software crisis was due to the rapid increases in computer power and the complexity of the
problems that could now be tackled. With the increase in the complexity of the software,
many software problems arose because existing methods were insufficient. The main cause
is that improvements in computing power had outpaced the ability of programmers to
effectively utilize those capabilities.

Main reasons for the software crisis;


1. Rising demand for software as part of the transition from HW to SW
2. Increase in the required software development effort due to the increase in product
size and complexity
3. Too slow an increase in development productivity.

Comparison between a program and a software product


Program Software Product
Typically small Large
Author is sole user Many users
Single author Team of developers
No proper UI Well designed UI
No proper documentation Well documented
Ad hoc Systematic development

The evidence for the crisis


1. Over 40% of projects cost more than planned
2. Over 61% of projects take more time than planned
3. Over 42% of projects require modifications and corrections in house to be usable
4. A report from 1990 states that
a. only 2% of software was productively used and another 3% after
modifications.
b. 29% was undeliverable
c. 26% was incorrect
d. 47% of the software was not used a short time after delivery.

Bugs are also evidence for the crisis.

Traditional software engineering phases:
1. Requirements engineering- try to understand what kind of system we need to build
2. Design- a high-level structure of the system we will build, which can become more
detailed
3. Implementation- writing code that implements the design we defined
4. Verification and validation- make sure it behaves as intended
5. Maintenance – after first release correct bugs found by users, changes in the
environment, new features.

Lecture 2 – Overview and Lifecycle models – part 2

Software lifecycle models – a sequence of decisions that determine the history of the
software you build and others will use. A model should answer the questions "what
should I do next?" and "how long should I do it for?". Factors that influence which
model to use include project size, criticality and the expected variability of the
requirements. The choice of a lifecycle model is of fundamental importance.

Traditional software engineering phases:

Requirements engineering- The process of establishing the needs of the stakeholders that
the software should address. This is important because the cost of correcting an error depends
on the number of decisions that are based on it: the later we correct an error, the higher its
cost will be.

The requirement engineering phase can be divided into the following steps
1. Elicitation – collect the requirements from stakeholders and other sources
2. Analysis- study and gain a deeper understanding of the collected requirements
3. Specification- represent, organize, (semi-)formalize, store
4. Validation- check for consistency, completeness and non-redundancy.
5. Management- manage change through the lifetime of the project

These are not necessarily linear steps; the process is typically iterative.

Design - requirements are analyzed in order to produce a description of the internal
structure and organization of the system. The description will serve as the basis for the
construction of the actual system. Consists of:
1. design activities
a. Architectural design
b. Interface design
c. Component design
d. Data structure and algorithm design
2. design products
a. System structure
b. Component specification and their interfaces

Implementation- realizing the design, creating the actual software system. The principles
which guide this phase are;
1. Reduction of complexity
2. Anticipation of diversity
3. Structuring for validation (design for testability)
4. Use of external and internal standards.

Verification and Validation- Check that the system meets its specification and fulfills its
purpose. Verification- Did we build the system right? (At the unit, integration and system
levels). Validation- Did we build the right system? (is it something the customers want?)

Maintenance- Sustain the software system as it evolves after its first delivery to its users.
The system will evolve because of bugs found by users, changes in the environment and
feature requests. Maintenance activities can be;
1. Corrective- The released software may have bugs that are found by the users
2. Adaptive- software doesn’t run in isolation, there are new operating systems, new
standards, etc.
3. Perfective- users are never satisfied; someone will want new or enhanced features
to be added. Suggestions need to be evaluated and implemented.
4. Preventive – sometimes changes are needed for internal reasons- for example code
reorganization which needs to be planned, implemented and tested.

Software lifecycle model


Describes how the phases should be put together from start to finish. Lifecycle models
determine the order of the phases and the criteria for moving between phases.

Example of models:

Waterfall- pure model

appeared in an article by Royce (1970, republished 1987). The author himself writes: "I believe in this
concept, but the implementation described above is risky and invites failures".

The pure Waterfall model works well when there is a stable product definition, the domain
is well known and the technologies are well understood. The main advantage is that the
model helps finding errors early. The main disadvantage is that it is not flexible and
problematic in situations where the requirements are partial and expected to change, the
developers are not domain experts and the technology is new and evolving.

Waterfall – pure model with fallback- a variant in which development can fall back to earlier steps, so each step may be visited more than once

The Spiral model
Incremental risk oriented lifecycle model, proposed in an article by Boehm (1988).

The motivations;
1. Code-driven process- the "code and fix" approach; no design leads to poor code and
angry clients
2. Document-driven process – the waterfall model- each step produces a new document;
the requirement for fully developed documents is unrealistic
3. Risk-driven process- supports iterative development, deciding how to proceed so as
to reduce the risk of failure.

The spiral model consists of several rounds of development:


1. System concept
2. Requirements
3. Design

In each round we mitigate risks. We define objectives, map alternatives, recognize the
constraints of each alternative, use prototyping and analysis to reduce risk and plan the next
step. The next step will depend on the biggest remaining risk. At the end perform a
sequence of coding->Testing->Integration.

How to use the model:
1. Start with hypothesis that something can be done
2. Round 1: concept and lifecycle plan
3. Round 2: top level requirements
4. Additional rounds: preliminary design, detailed design
5. May go back and redo previous round if needed.

If the evaluation at some stage shows that it won’t work- Stop. At the end you have a
workable design that addresses all risk-> go on to coding.

Risks- developing software is full of uncertainty. This implies risks. We need to quantify that
risk. The Risk Exposure = Probability X Loss.

This can be used to choose between alternatives and select the one where the risk exposure
is smaller.
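As a sketch, the risk-exposure calculation can be written out directly; the probabilities and losses below are made-up figures for illustration only:

```python
# Risk exposure = probability of the risk materializing x the loss if it does.

def risk_exposure(probability: float, loss: float) -> float:
    """Quantify a risk as its expected loss."""
    return probability * loss

# Hypothetical figures for two design alternatives (losses in person-days):
exposure_a = risk_exposure(0.30, 100)  # 30% chance of a 100-day schedule slip
exposure_b = risk_exposure(0.05, 400)  # 5% chance of a 400-day rework

# Prefer the alternative with the smaller risk exposure.
best = min(("A", exposure_a), ("B", exposure_b), key=lambda t: t[1])
print(best[0], round(best[1], 1))  # B 20.0
```

Here the less likely but more expensive risk (B) still has the smaller exposure, which is exactly the trade-off the formula makes visible.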

 The main advantages of the model – Risk reduction, Functionality can be added and
software is produced early in the lifecycle
 The main disadvantages are- risk analysis requires specific expertise, it is highly
dependent on risk analysis and is a complex process.

Evolutionary Prototyping model

Consists of 4 main phases;


1. Initial concept
2. Design and implementation of the initial prototype- The developers implement parts
of the system that they understand
3. Refine prototype until acceptable (iterative)- partial system to show to customers,
the feedback is used as the basis for the next iteration
4. Complete and release prototype.

 Main advantages- immediate feedback and low risk of developing the wrong system
 Main disadvantages- difficult to plan (forecast cost and time) and can become an
excuse for a code and fix approach.

Throwaway Prototyping
The prototype is used to gather the requirements and is thrown away at the end of the
requirement gathering phase. The real system will be implemented from scratch.

 The main advantages- can be done quickly and no risk of technical debt
 The main disadvantages- user confuses prototype with real system and it influences
his expectations, the cost and time of implementing the prototype.

The unified process (Rational UP)


This is not really a process but more of a framework. It needs to be adjusted to each project
according to need. Many refinements and extensions such as Agile unified process and
Enterprise unified process.

The principles of UP;


1. Iterative and incremental – 4 phases divided into multiple iterations
2. Use case driven – development is based on usage scenarios
3. Architecture centric – Defining and refining the architecture is a major activity,
baseline architecture a major milestone.
4. Risk focused- activities in initial phases designed to reduce risk, later phases fill
details.

The model is based on phases and workflows

It is similar to doing a waterfall in each iteration, but each step in the waterfall continues
across all iterations.

Model phases;
1. Inception phase-
a. Scope- in this phase we define the scope, what the system will do and the use
cases
b. Establish the business case- cost vs. benefit, may discover that the project is
not worth doing
c. Draw initial contract- but maintain the possibility of cancellation later.
d. Outcome- Green light for the project
2. Elaboration phase
a. Fill in details of requirements and more use cases
b. Identify and mitigate risks (UC for requirement risks and prototypes for
technical risks)
c. Use UML to create skeletal models (only the important details)
d. Create baseline architecture which can change later
e. Draw contract for the whole system.
f. Outcome: project architecture and contract
3. Construction
a. Create a work plan- estimate the time for each UC; assign UCs based on
risk, priority and work capacity
b. Execute iterations- maintain rhythm- if schedule is running late, move a use
case to a later iteration
c. Test and integrate
d. Refactor
e. Outcome: initial running system installed, feedback from users
4. Transition
a. First running system installed (beta testing)

b. Bug fixes based on user feedback
c. User training

Model Workflows
The workflows are iterative and continuous. Testing is done from the beginning, even when
there is no code to test: that time is used to check the validity and completeness of the
artifacts produced so far and to plan future code tests.

1. Requirement workflow
a. Users don't know what they want or need, and even if they do they can't
specify it precisely and fully.
b. Developers' background is different from users' background- they don't know
what the user assumes to be self-evident.
c. In this workflow we need to understand the domain and create a domain
model
d. We have 2 types of requirements- functional (use cases) and non-functional
(performance, etc.)
e. Things may change during the project- new ideas, requirements or things we
missed in the past
f. Use Case- a tool to foster discussion of requirements, focuses on the goals for
using the system, identify roles and actors (not necessarily human) and
interactions with the system. It is expressed in use case diagrams which
include scenario description and alternative\failure scenarios.
2. Analysis workflow
a. Translate requirements from the language of the client to the language of the
designer
b. Resource usage and conflicts
c. More formal notations
d. Organize requirements
3. Design workflow
a. Shape the system and find a form that lives up to the requirements
b. Less abstract than analysis
c. Define sub-systems and interfaces between them
d. Take constraints into account
4. Implementation
a. Implement the classes in the design
b. Integrate with results of previous iterations
c. Map to platform
5. Test workflow
a. Unit tests for new code
b. Integration tests for each build
c. System test at the end of iterations
d. Record tests and act on results (open bugs)

Agile models
A group of software development methods based on highly iterative and incremental
development

1. TDD three main phases


a. Write test cases that encode the requirements- they will fail
b. Write just enough code to make the tests pass
c. Refactor to improve code quality
2. Other methods; SCRUM, XP
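The three TDD phases can be sketched in Python; the slugify function and its test are an invented example, not taken from the lecture:

```python
# TDD cycle sketch: 1) write a failing test, 2) write just enough code to
# make it pass, 3) refactor.

# Phase 1 ("red"): the test encodes the requirement.  Before slugify()
# exists, running this test fails.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  A   B ") == "a-b"

# Phase 2 ("green"): write just enough code to make the test pass.
# Phase 3 ("refactor"): this one-liner is already the cleaned-up version.
def slugify(title: str) -> str:
    """Turn a page title into a URL slug (lower-case, dash-separated)."""
    return "-".join(title.lower().split())

test_slugify()
print("all tests pass")
```

In a real project the test would live in a test suite (e.g. pytest or unittest) and the red/green/refactor loop would repeat for each small requirement.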

Classic mistakes in software development projects;


1. People mistakes
a. Heroics – the idea that one person can do everything; it puts too much
responsibility on one individual and encourages risk taking while discouraging
teamwork
b. Work environment- there is evidence that productivity increases in nice
working environments
c. Poor management- lack of leadership, or leadership exercised in the wrong
way. Brooks’s Law- Adding people to a late project only makes it later
2. Process mistakes
a. Scheduling – underestimate the efforts required for different parts of the
project. Overestimating the ability of people\tools
b. General planning- not enough planning, abandoning planning due to
pressure.
c. The cone of uncertainty- estimation accuracy vs. phase- the earlier we are in
the process the bigger is the uncertainty.
3. Product mistakes
a. Too many requirements
b. Feature creep- adding features as the project progresses
c. Research is different than development. The more research it requires the
more risk it adds to the project
4. Technology mistakes
a. Switching tools – start using a new tool in the middle is never a good idea
b. No version control system
c. Silver bullet syndrome- too much reliance on the benefits of previously
unused technology
d. Technology alone doesn’t solve problems.

Lecture 3 – Requirements Engineering (RE)

The goal is to understand and specify the purpose of a software system.

Requirements are a description of what a software system has to do, not how to do it: the
functionality or features required to satisfy the stakeholders. Requirements are
usually described informally in text, and sometimes more formally using user stories,
use cases or state transition diagrams.

Why is requirements engineering important? One of the major reasons for project failures is
not getting the requirements right in the first place: if we don't understand them we will
build the wrong software. This step is critical for the success of the software development
effort.

Software Requirements Specification (SRS)- The final outcome of the requirements phase is
a document called the SRS. RE in general, and the SRS in particular, describe what the system is
supposed to do and not how. Often the SRS is an important part of the contract signed
between the customer and the dev organization.

Software intensive system = software + hardware + context

Software quality- is not a function of the software itself but a function of both the software
and its purpose. Quality is "fitness for purpose". RE is mostly about identifying the
purpose of the software system.

Why is defining requirements hard?


1. Complexity of the purpose of the system- size, number of functions,
dependencies
2. Difficult to make knowledge explicit
3. Changing requirements- they change over time, customer change their minds,
environment changes
4. Multiple stakeholders with conflicting goals- hard to come to an agreeable set of
requirements.
5. Lack of completeness- difficult to define all of the system requirements. Some
requirements are missing and the system will eventually lack important
functionality
6. Lack of pertinence – trying to avoid incompleteness, one may collect too many
requirements including unnecessary and conflicting ones.

From the definition of requirements we learn;


1. RE is not a phase or a stage
2. Communication of requirements is as important as analysis
3. Can’t say anything about quality if we don’t understand the purpose
4. Context- designers need to know how the system will be used
5. Real world needs
6. Need to identify all stakeholders not just the customer and user
7. Partly about what is possible

What are requirements?

C- the hardware on which the software runs, operating system


P- The code of the program

D- Things that are true of the world regardless of the existence of the program
R- Things we would like to achieve by delivering the system that we build

S- The specifications- the bridge between the two domains. They are formal descriptions of
what the system should do to meet the requirements, written in terms of shared
phenomena- things that are observable in both domains.

Examples of shared phenomena


 Events in the world that the machine can sense (e.g. a button, sensor input)
 Actions that the machine can cause in the world (e.g. image on a screen, beep)

Requirements are the description of the system services and constraints that are generated
during the RE process.

Types of requirements
 User vs. system requirements
o User- written for customers in natural language, a means for
communicating with customers and stakeholders.
o System- written for developers. Contain detailed functional and non-
functional requirements. Tell dev what to build
 Functional vs. non-functional requirements
o Not a clear cut distinction
o Functional –what the system does, typically have well-defined satisfaction
criteria.
o Non-functional- refers to the system's qualities such as performance,
reliability, adaptability.
 Domain requirements

Problems in Requirements Elicitation (collection)


1. Thin spread of domain knowledge- many sources, difficult to collect from all of
them
2. Limited observability- subjects change behavior when analyst is watching
3. Bias- conscious or unconscious bias of the person describing the system-to-be
can produce wrong or misleading descriptions.

Traditional methods for collection include background reading, hard data samples,
interviews, surveys, meetings.

Requirement modeling – representing the requirements so that they can be communicated
and analyzed

Requirement analysis- after collection and modeling we need to analyze them;

1. Verification- check for correctness and completeness, and make sure they are not
ambiguous.
2. Validation – check if the requirements define a system the customer actually wants
3. Risk analysis- which requirements may change? which are we not sure about?

Requirements prioritization- resources are limited and we may not be able to satisfy all
requirements. We typically categorize requirements into; critical / nice to have / useful

Effort and risk – beyond the priorities we need to get effort and risk estimations, this will
typically be a rough estimation (High\Medium\Low)

Requirement engineering process - not a one-time, sequential process but an iterative
one, repeated until we are happy with the results.

Properties of good requirements


1. Simple – each requirement should express one specific functionality
2. Unambiguous
3. Testable
4. Organized –grouped and refined, prioritized.
5. Numbered for traceability.

Lecture 4 – Requirement representation

Most of the SRSs in the industry are written in natural language (NL). Estimates from 1996
say that 90% are in NL and less than 10% in NL + formal language; only 1% are written solely in
a formal language. NL makes it easier to communicate the requirements, but its
disadvantages are that it doesn't encourage structure and order, it is very difficult to
analyze automatically, and it can be understood in multiple ways.

NL can create ambiguity, there are 2 types of ambiguity;


1. Sub-conscious disambiguation- The reader of an ambiguous phrase is not even
aware that there is an interpretation other than the one that first came to mind. The
reader understands one interpretation and thinks it is the only one.
2. Sub-conscious ambiguation – The writer of an ambiguous phrase is not even aware
that he wrote a phrase that has an interpretation other than the one he had in mind
when he created it. The writer has an interpretation and thinks it is the only one possible.

Solving ambiguity can be done via a restricted language and sentence patterns. Effective
usage requires dedicated tools (editors and parsers). Sentence patterns are an example of
the use of controlled NL. The objective of a controlled language is to increase the readability
of any kind of technical documentation. This is accomplished by reducing the inherent
ambiguity of NL through a restricted grammar and a fixed vocabulary.

Example of controlled language – ACE (Attempto Controlled English)
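Not ACE itself, but a toy illustration of the idea: a couple of hypothetical sentence patterns for functional requirements, enforced with regular expressions (the patterns and example sentences are invented for this sketch):

```python
import re

# Two hypothetical controlled patterns for functional requirements:
#   "The system shall <verb phrase>."
#   "When <condition>, the system shall <verb phrase>."
PATTERNS = [
    re.compile(r"^The system shall [a-z].*\.$"),
    re.compile(r"^When .+, the system shall [a-z].*\.$"),
]

def conforms(requirement: str) -> bool:
    """True if the sentence matches one of the allowed patterns."""
    return any(p.match(requirement) for p in PATTERNS)

print(conforms("The system shall log every failed login attempt."))  # True
print(conforms("Logins might be logged somewhere."))                  # False
```

A real controlled language such as ACE uses a full restricted grammar and vocabulary, not just surface patterns, but the effect is the same: sentences outside the allowed forms are rejected before they can introduce ambiguity.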

Ambiguity should not be confused with incompleteness: requirements should strive to be
complete to a certain extent, but are always incomplete. Ambiguity should also not be
confused with imprecision: requirements may be imprecise on purpose, and over-precision
is not necessarily better.

User Stories
“As a Developer, I want to change the status of a bug report, in order to let others know it
has been fixed”.

User role (who?) wants to have a capability (what?) for the value (why?)
This is very popular, especially in agile methods. The mapping between NL requirements and
user stories is not 1:1; one NL requirement may translate into several user stories. In some
books user stories are called "narratives". User stories are simple, short and good for
communication with stakeholders, and they assist with acceptance tests; but they use NL,
so they might also be ambiguous.
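The role/capability/value template can be captured in a few lines of code; a minimal sketch, reusing the bug-report story from above:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    role: str        # who?
    capability: str  # what?
    value: str       # why?

    def narrative(self) -> str:
        """Render the story in the standard "As a ... I want ..." form."""
        return (f"As a {self.role}, I want to {self.capability}, "
                f"in order to {self.value}.")

story = UserStory("Developer",
                  "change the status of a bug report",
                  "let others know it has been fixed")
print(story.narrative())
```

Structuring stories this way also makes it easy to generate acceptance-test skeletons or a backlog listing from the same data.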

Use Cases
Used for functional requirement specification and analysis. A use case describes how the
user will use the system to accomplish his goals.

Detailed use cases are written as usage scenarios, listing specific sequence of actions and
interactions between the actors and the system

Actors- an actor is an entity which is external to the system; it can be human or another
system. An actor plays a role in the UC, and can participate in more than one UC.

Types of actors
1. Initiating Actor- initiates the use case to achieve a goal
2. Participating Actor – participates but doesn’t initiate

Use Case Diagram

Every actor must be associated with at least one UC.

Relations between UCs


 <<include>> - The behavior of one UC must be used in the behavior of another UC
 <<extend>> - The behavior of one UC may be used in the behavior of another UC,
depending on conditional extension points. The extended (base) UC may be
executed on its own, without the extension.

Use Case description schema

Errors in use case diagrams;


1. Actors are not part of the system
2. UC diagrams do not model processes/ workflows

Traceability Matrix- Maps system requirements to use cases in order to check that all
requirements are covered by use cases and that there are no irrelevant use cases, and to
prioritize the work on the UCs.
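A minimal sketch of such a matrix check, with invented requirement and use-case IDs:

```python
# Hypothetical requirement and use-case IDs, for illustration only.
requirements = ["R1", "R2", "R3"]
use_cases = {"UC1": ["R1", "R2"],   # use case -> requirements it covers
             "UC2": ["R2"],
             "UC3": []}             # covers nothing -> possibly irrelevant

covered = {r for reqs in use_cases.values() for r in reqs}
uncovered = [r for r in requirements if r not in covered]
irrelevant = [uc for uc, reqs in use_cases.items() if not reqs]

print("uncovered requirements:", uncovered)    # ['R3']
print("possibly irrelevant UCs:", irrelevant)  # ['UC3']
```

The same mapping can drive prioritization: use cases covering many high-priority requirements are tackled first.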

Summary of RE
1. Problem Description vs. solution description- useful to separate the descriptions, still
the problem and solution interact
2. What vs. How- specifications say what the system should do, not how
3. Application Domain vs. Machine Domain
4. Functional vs. Non-functional requirements
5. Systems vs. software- RE deals with software intensive systems = software + context
+HW, not just software
6. Indicative vs. optative descriptions- indicative describes the world as it is now and
optative describe what we would like to bring. Application domain properties are
indicative whereas requirements are optative.
7. Verification vs. Validation- Verification- software meets specification, validation- we
are solving the right problem
8. Capturing vs. Synthesizing- We don’t just capture requirements we analyze and
negotiate them along with reasonable estimates for the unknown.

RE Future challenges
 Modeling the environment- model inference from execution logs
 Richer models for non-functional requirements- better standard means to make all
the “ilities” easy to formalize and test
 Reuse of requirements models – like design patterns for requirements
 Multidisciplinary training for requirements practitioners- Who is a requirement
engineer?
 Scale, complexity, variability, uncertainty
 Formalization and disambiguation using NLP- automatically detect ambiguity, asking
the engineer to clarify them. Automatically translate NL to formal specification.

Lecture 5 – Formal Specifications: Temporal Logics

We want to create a formal bridge between the application domain and the machine domain.
We want formal specifications to avoid the problems of NL (ambiguity, no automatic
processing). Formal specs have syntax and semantics and can be processed automatically.

LTL – Linear Temporal Logic


Introduced in 1977, LTL is a logic to represent and reason about propositions qualified in terms of
time. The formulas are not statically true or false in a model. Consists of Boolean operators
+ temporal operators.

Linear vs. branching model of time


Discrete vs. continuous model of time
Point-based vs. interval based

Models time as a sequence of states (a computation path) extending to the future.


LTL formulas are interpreted over transition systems.

Transition system- a structure M = <S, ->, L>, where S is a set of states, -> is the transition
relation, and L is the labeling function L: S -> P(AP) (mapping each state to a subset of the
atomic propositions).

A path π (pi) in a model M is an infinite sequence of states s1, s2, s3, ... from S such that
for all i there is a transition from si to si+1.
π^i denotes the suffix of the path starting at si.

Path satisfaction- whether a path π satisfies an LTL formula is denoted by |=.

Temporal Operators
X- is neXt
U- is Until

Model Satisfaction- M,s |= formula if and only if every path of M starting at s satisfies
the formula.
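As a toy illustration of path satisfaction, the X and U operators can be evaluated over a trace. Note the simplification: real LTL paths are infinite, so this finite-trace evaluator is only a sketch (close in spirit to finite-trace LTL), with invented formula encoding:

```python
# Minimal LTL evaluation over a *finite* trace (a simplification: real LTL
# paths are infinite).  A trace is a list of sets of atomic propositions;
# formulas are nested tuples, e.g. ("U", ("ap", "a"), ("ap", "b")).

def holds(formula, trace, i=0):
    op = formula[0]
    if op == "ap":                        # atomic proposition at position i
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "X":                         # neXt: formula holds one step later
        return holds(formula[1], trace, i + 1)
    if op == "U":                         # a U b: b eventually, a until then
        a, b = formula[1], formula[2]
        for k in range(i, len(trace)):
            if holds(b, trace, k):
                return True
            if not holds(a, trace, k):
                return False
        return False
    raise ValueError(op)

# Example: "req holds until grant becomes true" on a three-state trace.
trace = [{"req"}, {"req"}, {"grant"}]
print(holds(("U", ("ap", "req"), ("ap", "grant")), trace))  # True
```

F and G can be derived from this core: F b is true U b, and G a is not F not a.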

Some things cannot be expressed in LTL, for example;


 “From any state it is possible to get to a restart state”

 "The lift can remain idle with the door closed"
 We cannot express these examples because we cannot directly assert the existence
of paths.

Equivalence between 2 LTL formulas – 2 formulas are equivalent if for every model M and
all paths P in M, formula 1 accepts <-> formula 2 accepts

Additional LTL operators


 Weak until – W
 Release (dual of until) R- aRb – "b must be true and remain true up to and including a
state where a becomes true (if there is one)"

We also have past LTL operators

LTL patterns
Because LTL formulas are hard to read, there are known specification patterns, for example
"'p' is false between 'q' and 'r'". The patterns are classified as;
 Occurrence patterns – talk about the occurrence of a given state during system
execution
 Order patterns- talk about the relative order in which multiple states occur during
system execution.

Each pattern has a scope- When during program execution the pattern must hold;
 Global
 Before R (relate to first occurrence)
 After Q (relate to first occurrence)
 Between Q and R
 After Q until R

Occurrence Patterns
 Absence (Never)
 Universality (Globally, always)
 Existence (future, Eventually)
 Bounded existence – part of the system’s execution that contains at most a
specified number of instances of a designated state

Order patterns
 Basic precedence and response
o Precedence- the occurrence of the first is a necessary pre-condition for an
occurrence of the second (resource is only granted in response for a request)
o Response- describe cause-effect relationships. Occurrence of the first (cause)
must be followed by the second. (resource must be granted after it is
requested)
 Chain precedence and response- Generalization of the above.
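For reference, the basic patterns above (with global scope) have standard LTL formulations; here G = always, F = eventually, W = weak until, and ! is negation:

```
Absence (never p):        G !p
Universality (always p):  G p
Existence (eventually p): F p
Response (p causes q):    G (p -> F q)
Precedence (p before q):  !q W p
```

The scoped variants (before R, after Q, between Q and R) are obtained by relativizing these formulas to the first occurrences of Q and R.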

CTL – Computation Tree Logic

LTL is a linear-time logic; CTL is a branching-time logic. In addition to the expressive power of LTL we
have path quantifiers;
 A- all paths
 E- exists a path
Format of CTL – each temporal operator (U,W,X,G,F) must have a path quantifier (A,E)
before it.
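The property "from any state it is possible to get to a restart state" (AG EF restart), which LTL cannot express, reduces to plain graph reachability; a sketch over an invented transition system:

```python
from collections import deque

# A toy transition system: state -> list of successor states (assumed example).
succ = {"s0": ["s1"], "s1": ["s2", "restart"], "s2": ["s2"], "restart": ["s0"]}

def ef(target, state):
    """EF target at `state`: can some path reach a state satisfying target?"""
    seen, queue = {state}, deque([state])
    while queue:
        s = queue.popleft()
        if target(s):
            return True
        for t in succ[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

is_restart = lambda s: s == "restart"
# AG EF restart: can *every* state reach a restart state?
print(all(ef(is_restart, s) for s in succ))  # False: s2 is a sink
```

Full CTL model checking evaluates nested path-quantified formulas the same way, computing the set of states satisfying each subformula bottom-up.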

Applications of formal specifications


 Checking consistency
 Test generation
 Formal verification- input a model M and a spec and output "yes" if M satisfies the spec
and "no" otherwise (sometimes the output is a proof or a counterexample)
 Synthesis- Input the Spec, output an M such that M satisfies the spec or no if no such
M exists

Lecture 6 – Models

Model- a simplified representation of a system

The model criteria


 Mapping criterion- a model is always a model of some original. The original doesn’t
necessarily exist yet, and a model may act as an “original” for another model: a program
design is a model for the code to be written, which in turn is a model for the computation to
perform.
 Reduction criterion (abstraction)- the model mirrors some properties of the original
but not all of them, seems like a weakness but actually a real strength.
 Pragmatic criterion- the model can replace the original for some purpose

Descriptive vs. prescriptive models- The first means the model is mapped to an existing
original and the second means to something to be created. Transient- first prescriptive then
descriptive (a model can be in both but from different originals’ perspective).

Usage of models in SE
We use them to better handle the growing complexity of the systems we build. We use it at
all stages of the software development lifecycle. We model structure, behavior and process

Modeling Language- gives engineers a predefined set of basic abstractions on top of which they can
define new abstractions (models) for the system they build. In many ways the language
used determines the way engineers think about the systems they build.

A modeling language ML is a structure <Syn,Sem, sem> where;


 Syn – set of syntactically correct expressions- Defines the set of allowed expressions,
this can be textual or visual
 Sem- Semantic domain- provides meaning for each expression (in some domain),
typically infinite
 sem- the semantic mapping sem: Syn -> P(Sem) (the power set of the semantic domain)-
maps each element in the syntax of the language (each correct expression) to its
meaning (a subset of the semantic domain)
 The syntax, semantic domain and mapping each need a representation; the
representation itself is given in another language with its own Syn, Sem, sem.
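A toy illustration of the <Syn, Sem, sem> structure (the mini-language here is invented purely for the sketch; real modeling languages are far richer):

```python
# Toy modeling language ML = <Syn, Sem, sem>: expressions describe subsets
# of a small semantic domain of integers (a finite domain, just for the sketch).

Syn = {"even", "odd", "all"}   # the set of syntactically correct expressions
Sem = set(range(10))           # the semantic domain

def sem(expr):
    """The semantic mapping sem: Syn -> P(Sem)."""
    if expr == "even":
        return {n for n in Sem if n % 2 == 0}
    if expr == "odd":
        return {n for n in Sem if n % 2 == 1}
    if expr == "all":
        return set(Sem)
    raise ValueError(f"not in Syn: {expr}")

print(sem("even"))  # the subset of Sem this expression denotes
```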

UML- Unified Modeling Language
A set of visual languages for specifying, constructing and documenting software systems. 13
types of diagrams defined using a common meta-model. UML uses a bootstrapping
approach where a small part of UML is used to define UML itself.

The 4-layer meta-model hierarchy;


 Meta Meta-model- defines the language for specifying a meta-model (called MOF)
 Meta-model – defines the language for specifying models (instance of the meta
meta-model). Here we have the meta-classes “Association” and “Class”
 Model layer- defines an instance of the meta-model (used to describe a problem
domain, solution, etc.)
 Instance layer- defines run-time instances of the model

OCL – Object Constraint Language


A textual language used to express constraints. Used by modelers to define constraints on
relationships and to define pre- and post-conditions on operations.

Lecture 7 – Model of Structure

Object Oriented Models- UML object and class diagrams

Objects- individual elements of a system. An object diagram describes the objects and their
relationships; it is a snapshot of the objects at a specific point in time.

Classes- A class is a “construction plan” for a set of similar objects of a system. Objects are
instances of classes. Each class has;
 Attributes- structural characteristics of a class, determine the state of the object.
 Operations- behavioral characteristics of a class, means to read or update the state

Attribute Syntax

For each attribute;

Visibility;
 + public
 - private
 # protected (only to class and sub-classes)
 ~ package (classes in the same package)

Derived attribute- value can be derived from other attributes, marked with /

Name- name of the attribute


Type- User defined classes, data type which can be <<primitive>>, composite <<datatype>>
and <<enumeration>>

Multiplicity- the number of values the attribute may contain; by default it is 1. Notation:
[min..max]; there can be no upper/lower limits.

Default value – used if the attribute value is not set explicitly by the user.

Properties – pre-defined: {readOnly}, {unique}, {ordered}. An attribute specified as a set is
{unordered, unique}; as a list, {ordered, non-unique}.

Operation Syntax

Parameters- in= input parameters, out= output, inout= combined input/output parameters.
Type- type of return value

Class variables and class operations are marked with an underline (they are relevant to the
class as a whole, not just a single instance)

Relations between classes


Dependency – “…depends on…”: changes in one class may affect the other class; marked
with a dashed line. Example: an object of class A is a parameter to an operation of class B.

Association- models possible relations between instances of classes. A binary association
connects instances of two classes with one another.

Navigability- an object knows its partner objects and can access their visible attributes and
operations. Indicated by open arrow head. Non navigability is indicated by a cross

A can access B’s attributes and operations


B cannot access A’s attributes and operations.

When navigability is undefined it is assumed that both can access each other.

Multiplicity- the number of objects that may be associated with exactly one object of the
opposite side.

Role- describes the way in which an object is involved in an association relationship.

Association Class – a means to assign attributes to the relationship between classes rather
than to a class itself.

It is useful when modeling n:m associations. The association class can be unique (the
default) or non-unique.

Aggregation- “…has a…”, it is a special form of association, used to express that a class is
part of another class. There are 2 types of aggregations;
 Shared aggregation (Weak)- expresses weak belonging of the parts to a whole, parts
also exist independently. Multiplicity at the aggregating end may be >1. Syntax is a
hollow diamond at the aggregating end.
 Composition (Strong)- the composite object and its parts are existence-dependent.
A part can be contained in at most 1 composite object at a specific point in
time. If the composite object is deleted, its parts are deleted as well.

Generalizations- “...is a…”- all attributes, operations, associations and aggregations that
are specified for a general class (super-class) are passed on to its subclasses. Every instance of
a subclass is indirectly an instance of the superclass. Subclasses inherit all characteristics
except private ones. A subclass may have further characteristics, associations and
aggregations. An abstract class can be used to highlight common characteristics of its
subclasses; useful in the context of generalization relationships, notation {abstract}. UML
allows multiple inheritance: a subclass can have many super-classes.

Code Generation – class diagrams are often created with the intention of implementing the
modeled elements in an OO programming language. There are commercial tools that
generate code from them.
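A rough sketch of what such code generation looks like (the model format and generator below are hypothetical; commercial tools work from a real UML meta-model):

```python
# Hypothetical class-diagram-to-code generator: turns a (name, attributes)
# model element into a class skeleton (a toy sketch, not a real UML tool).

def generate_class(name, attributes):
    """Emit a Python class skeleton from a simple model element."""
    lines = [f"class {name}:"]
    params = ", ".join(f"{a}=None" for a in attributes)
    lines.append(f"    def __init__(self, {params}):")
    for a in attributes:
        lines.append(f"        self._{a} = {a}  # private by convention")
    return "\n".join(lines)

# Generate the lecture-hall class from its (tiny) model and check it compiles.
src = generate_class("LectureHall", ["free"])
print(src)
exec(src)  # the generated skeleton is itself valid code
```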

UML Class Diagrams can also have constrained generalization sets. That is generalizations
of a common super class can be grouped into sets. A generalization set can have a name.

Application in research
 Finite satisfiability – some class diagrams (CDs) have no instances or allow only infinite
instances; research on CD analysis has suggested automated ways to identify such cases
and provide suggestions for repair (using Linear Programming and reductions to SAT)
 Consistency checking – the question whether the object model (instance) is in the
semantics of the class diagram.

Modal object diagrams (MOD)- an extension of object diagrams with positive/negative and
example/invariant modalities. MOD analysis checks whether a CD satisfies a MOD
specification by reduction to SAT.
In the design methodology we first design positive examples, then design negative examples
after gaining more knowledge, and finally define positive and negative invariants in later
stages. MOD serves as a rich and formal means of communication between domain experts
and engineers.

Lecture 8 – Models of Behavior

State Machines-
State machines are part of UML; they have states, transitions, types of events, types of
states, and entry and exit points. They are based on David Harel’s 1987 article on Statecharts.

Introduction – every object goes through a number of different states during its lifetime.
The state machine diagram describes the transitions and the behavior of objects in each state.

Example on lecture hall with a class diagram (1 class with private attribute “free”, and 2
public methods occupy() and release())
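The lecture hall example can be sketched in code as a minimal two-state machine (the state names “Free”/“Occupied” are assumed here; the slide may name them differently):

```python
# Minimal sketch of the lecture hall state machine: two assumed states,
# with occupy() and release() as the triggering events.

class LectureHall:
    def __init__(self):
        self.state = "Free"          # the initial pseudo-state leads here

    def occupy(self):
        if self.state == "Free":     # transition only enabled in Free
            self.state = "Occupied"

    def release(self):
        if self.state == "Occupied": # transition only enabled in Occupied
            self.state = "Free"

hall = LectureHall()
hall.occupy()
print(hall.state)  # Occupied
hall.release()
print(hall.state)  # Free
```

Note that an event with no enabled transition in the current state is simply ignored, which matches the state machine semantics of discarding unhandled events.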

Example on digital clock – the inc() method increments. The initial state sets hours and
minutes to 0. The first state sets the hour and beeps; pressing increment increments the hour
by 1. Then we move to minutes and increment them upon pressing set.

/ - marks “Activity”, like what the code does.

A class diagram doesn’t need to describe everything, and neither does a state machine. It is a
model and hence doesn’t show everything, only what we want to focus on.

State – a node in the graph (represented by a rounded rectangle). A state can be active; this
means that the object is in this state and all the activities under do/ can be executed (an
activity can consist of multiple actions, like assignments)

 entry/ – activities that are executed when entering the state
 exit/ – activities that are executed when exiting the state
 do/ – activities that are executed while in the state

Transitions – Change from one state to another

Event -> trigger – comes from outside; used to trigger the transition into other states.
Guard -> condition – a Boolean expression with no side effects. The guard is checked only if
the event occurred. If the guard is true, all the activities in the current state are terminated,
all exit activities are executed and the transition is taken. If it is false, no state transition
takes place. The guard decides whether to make the transition, assuming the event
happened.
Activity -> effect – a sequence of actions executed during the state transition.

What’s the difference between putting the activity on the entry of the next state or on the
transition itself? If the next state has many incoming transitions it is better to put the activity
on the transition itself. If the next state can only be reached from one state we can do it in
the entry of the next state.

We have both internal and external transitions.

 Internal- act1 -> act3 -> act2
 External – act2 is performed before act3, because we exit.
 Empty transition- in s1 we do A1, and when it is completed a completion event is
generated that initiates the transition to s2

Types of events
 Signal event- Receipt of a signal (SMS)
 Call event – operation or method call e.g. register(exam)
 Time event-
o relative – based on the time of occurrence of a previous event, e.g. after(5 seconds)
o absolute – e.g. when(time == 16:00)
 Any receive event – keyword = “all” occurs when any event occurs that does not
trigger another transition from the active state.

 Completion event (the empty event) assumed to be auto-generated after we finish


entry and do activities

 Change event- Boolean expression constantly being checked

State charts can be non-deterministic- it is syntactically allowed, but when code is generated
the semantics are not well defined: it can be one transition or the other. Some tools cannot
compile these situations.

Change event vs. guard


 Change event- e.g. “after 90 minutes”; checked permanently, without a need for an
event, except time
 Guard- e.g. “lecture ended”; somebody signals this situation and the evaluation is
(time >= 90 minutes)

Different states
 Initial – pseudo-state, the start of the machine. No incoming edges- the system is
never in this state; it is a control structure. If there is more than 1 outgoing edge, the
guards must be mutually exclusive and cover all possible cases to ensure that exactly
one target state is reached. No events are allowed on the outgoing edges (except new())
 Final- a real state that marks the end of the sequence of states. The object can remain
in the final state forever.
 Terminate node (X) – pseudo state- terminates the state machine. The modeled
object ceases to exist (=is deleted)
 Decision node- pseudo state, used to model alternative transitions.
 Parallelization node- pseudo state – 1 incoming edge, more than 1 outgoing edge.
Splits the control flow into multiple concurrent flows.
 Synchronization node- merges multiple concurrent flows. More than 1 incoming
edges and 1 outgoing edge.
 Composite states- states can have sub-states (nested states). Only one of the sub-
states is active at any point in time. Arbitrary nesting depth. There can be transitions
between sub-states and also out of the composite state. For activities we first
perform the entry activities of the composite state and only then the entry activities
of the sub-state. We can start directly with a sub-state, but it harms encapsulation.
 Orthogonal states – a composite state separated into two regions by a dashed line.
When the orthogonal state is active, one sub-state from each region is active. When
exiting, all regions must come to a completion event before exiting the orthogonal
state. Orthogonality enables saving an exponential number of states.

 Submachine states – a means to reuse parts of state machine diagrams in other state
machine diagrams, like a subroutine in programming languages. As soon as the
submachine state is activated, the behavior of the submachine is executed. Notation:
state:submachineState.
 History State- remembers which sub-state of a composite state was the last active
one. Activates the old sub-state and all entry activities are conducted sequentially
from the outside to the inside of the composite state. Exactly one outgoing edge of
the history state points to a sub-state which is used if the composite state was never
active.
o Shallow history(H)- restores the state that is on the same level of the
composite state.
o Deep history(H*)- can go as deep as we want to remember the exact state of
the sub-state.

State Representation
(Notation symbols omitted in this text version.) The states and pseudo-states with their own
symbols: initial state, final state, terminate node (X), decision node, parallelization node,
synchronization node, shallow history state (H), deep history state (H*).

Entry and Exit points- we can view composite states as encapsulation. A composite state can
be entered or exited via a state other than its initial and final states. With entry/exit points,
the external transition need not know the internal structure of the composite state.

We use state machines for


1. Documentation and communication- the most common usage
2. Simulation and code generation
3. Analysis (formal- not widely used): checking and proving properties on the model
is much easier than on manually written code.

Interaction diagrams

They model the inter-object behavior, meaning the interaction between objects. An
interaction specifies how messages are exchanged between interaction partners (human or
non-human). The focus is on the order of events; the states are not explicitly seen.
We model concrete scenarios, short stories of behavior.

Sequence diagram- a two-dimensional diagram (horizontal – interaction partners, vertical-
chronological sequence of events).
 Interaction partners are depicted as lifelines. In the head of the lifeline there is a
rectangle with the expression roleName:Class. Role is a more general concept than
object. One object can have different roles in different situations.
 The body of the lifeline is vertical with a dashed line, represent the lifetime of the
object associated with it.
 On different lifelines, an event drawn lower does not necessarily happen after one drawn higher (slide 46)

Messages

 Synchronous message- filled arrowhead
 Asynchronous message – open arrowhead
 Response message- may be omitted if content and location are obvious.
 Found message- the sender of the message is unknown or irrelevant.
 Lost message – the receiver of the message is unknown or irrelevant.
 Time-consuming message- a message with duration.

Object creation –dashed arrow, keyword “new”

Object destruction- the object is deleted, marked with a large cross (X) at the end of the
lifeline

Combined Fragments
Combined fragments model various control structures; there are 12 predefined operators,
such as “alt” and “loop”:
 Branches and loops (alt, loop, opt, break)
 Concurrency and order (seq, strict, par, critical)
 Filters and assertions (ignore, consider, neg)

Alt- models alternative sequences, it is like switch case- divided into a few regions based on
the “status”. Guards are used to select the one path to be executed (guard expression in
square brackets).

Opt- optional sequence, like an if statement without the “else” part

Loop- expresses that a sequence is to be executed repeatedly. The loop has a number of
times to execute (a min-max range); the default is * (unbounded), and it stops when the
guard is false.

Break- simple form of exception handling. If the guard is true we do what’s in the “break”
and jump out of the fragment we’ve been in.

Seq- the default order of events: a partial order. This is weak sequencing: events on the same
lifeline are ordered from top to bottom; the order between events on different lifelines is
undefined.

Strict- sequential interaction with order. The order of event occurrences on different
lifelines between different operands is significant.

Par – sets aside chronological order between messages in different operands. The execution
paths of different operands can be interleaved but the restrictions of each operand need to
be respected.


Co-Region – to model concurrent events of a single lifeline. Area of the lifeline to be covered
by the co-region is marked by square brackets rotated by 90 degrees

Critical – atomic area in the interaction, to make sure that certain parts of the interaction
are not interrupted by unexpected events.

Lecture 9 – Black Box Testing

Failure – Observable incorrect behavior


Fault (a.k.a. bug, defect)- incorrect code. A fault is a necessary but insufficient condition for
the occurrence of a failure.
Error- the cause of the fault; typically a human error, conceptual or a typo.

Different approaches to check software correctness


 Testing- using a system to try to make it fail.
 Inspections- reviews, effective in practice and widely used.
 Formal verification and proof of correctness- prove that the program satisfies the
specification. Not always possible and not common in the industry.

Approach            | Pros                           | Cons
Testing             | No false alarms                | Highly incomplete
Inspections         | Can be systematic and thorough | Informal and subjective
Formal verification | Provides strong guarantees     | Requires formal specifications,
                    |                                | which are expensive and complex
                    |                                | to get and use

Aspects of testing – executing a system on a tiny subset of its input domain


 Dynamic technique – program must be executed
 Optimistic approximation – assuming that what we see for the subset is consistent
with the rest of the domain
 A test is successful if it makes the program fail- we cannot prove the absence of
errors but only reveal their presence.

Testing granularity
 Unit testing- each module is tested in isolation
 Integration testing- test the interactions between modules
 System testing- test the entire system, both functional and non-functional testing
 Acceptance testing
 Regression testing

Black box and white box testing


 BB Testing- based on the description of the software, tries to cover as much of the
specified behavior as possible. Doesn’t look at the code.
 WB testing- based on the code, tries to cover as much of the coded behavior as
possible (can’t reveal failures due to missing implementation paths).

BB Testing advantages
 Focus on the domain
 No need for code- allows early test design
 Catches logical errors
 Applicable at all levels of granularity.

Process for BB testing
1. Start with the functional requirements
2. Identify independently testable features
3. Identify relevant inputs
a. Exhaustive testing- usually not possible
b. Random testing- picks inputs uniformly and avoids bias; very low probability of
finding bugs that way and very weak confidence regarding test coverage.
c. Manual testing- very common; however, it produces similar tests, many of
which add little value in discovering defects, resulting in large coverage gaps.
d. CTD- Combinatorial Test Design- maximizes the ability to detect bugs by
addressing the identified variables and the interactions amongst variables.
Minimizes time and cost: each test covers multiple variables and their
interactions. Testing individual variables gives a 20%-68% chance of detection,
with pairs of variables it goes up to 54%-98%, and with 5-tuples it is ~97%. A
CTD algorithm finds a small test plan which covers 100% of a given interaction
level. We have to add constraints, because we can’t just skip a test with an
invalid combination: each test in the CTD test plan covers multiple unique
legal combinations, so skipping a test loses all these combinations and we
won’t have 100% interaction coverage. An example of a CTD algorithm is IPO
(In Parameter Order), a greedy algorithm which incrementally extends a set of
partial tests.
4. Derive test case specifications
5. Generate actual test cases
6. Final result: test suite- a set of test cases.
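A simplified greedy sketch of CTD pairwise (2-way) coverage, in the spirit of IPO-style greedy algorithms (the parameter model below is invented for the example):

```python
# Greedy pairwise (2-way) CTD sketch: repeatedly pick the full test that
# covers the most still-uncovered (parameter, value) pairs.
from itertools import combinations, product

def pairs(test, names):
    """All value pairs a single test covers, one per parameter pair."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

def pairwise_suite(params):
    names = sorted(params)
    all_tests = [dict(zip(names, vals))
                 for vals in product(*(params[n] for n in names))]
    uncovered = set().union(*(pairs(t, names) for t in all_tests))
    suite = []
    while uncovered:  # each pick covers at least one new pair, so this ends
        best = max(all_tests, key=lambda t: len(pairs(t, names) & uncovered))
        suite.append(best)
        uncovered -= pairs(best, names)
    return suite

params = {"os": ["linux", "win"], "browser": ["ff", "chrome"], "db": ["pg", "mysql"]}
suite = pairwise_suite(params)
print(len(suite))  # far fewer than the 8 exhaustive tests
```

Constraints (forbidden combinations) would be handled by filtering `all_tests` before the greedy loop, so every remaining test is legal, which mirrors the point above about not simply skipping invalid tests afterwards.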

Finite state machine model based testing


From an informal specification to an FSM model; systematically cover the possible behaviors
in the model. A test case = a sequence of states / transitions in the FSM model.

State coverage (a very weak criterion) < transition coverage (much better) < k-tuple
transition coverage (even better). The tradeoff is between the coverage criterion and the
number of test cases. There are tools that generate test cases based on the above criteria.
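A minimal sketch of deriving a transition-coverage test suite from an FSM (the dict-based FSM format is assumed for the example):

```python
# Transition coverage sketch: one test case per transition, each test being
# a shortest event path from the initial state to that transition's source,
# followed by the transition's own event.
from collections import deque

def transition_cover(init, trans):
    """trans: dict state -> list of (event, next_state). Returns event sequences."""
    # shortest event path to each reachable state (BFS from init)
    reach = {init: []}
    q = deque([init])
    while q:
        s = q.popleft()
        for ev, t in trans.get(s, []):
            if t not in reach:
                reach[t] = reach[s] + [ev]
                q.append(t)
    # one test per transition whose source state is reachable
    return [reach[s] + [ev] for s in trans for ev, _ in trans[s] if s in reach]

fsm = {"Free": [("occupy", "Occupied")], "Occupied": [("release", "Free")]}
tests = transition_cover("Free", fsm)
print(tests)  # [['occupy'], ['occupy', 'release']]
```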

Considerations in FSM testing


 Very general applicability- at all granularity levels, tools are available
 Abstraction is key- tradeoff between level of detail and size of the model, number of
test cases
 Automation is key- very difficult to automate the step from informal specs to model
but very easy to generate automatic tests from a model.

Lecture 10 – White Box Testing

BB testing is based on specifications of what the software should do. It is good if it covers
everything in the specifications.

WB- based on the code, tries to see how much of the coded behavior is covered. Cannot
find faults in things not in the code.

The assumption that executing a faulty statement is a necessary condition for finding it is not
always valid (e.g., faults can be found in code review without execution).

The advantages of WB testing – it can be used to compare the quality of test suites (which
one covers more parts of the code), and it can be measured objectively and automatically.

1. Is BB testing subjective? Modeling of the parameters and possible values is
subjective, but once they are decided it is objective.
2. Is objective better than subjective? Subjective is more relevant to what we want to
achieve; not everything which is easily measurable is necessarily better for what we
want to achieve.

Test requirements – which elements of the code should be executed


Test specification- presented as constraints on the input (in printSum we need both a+b > 0
and a+b < 0 as constraints).

Test cases- actual values (possibly including output value) which cover the requirements
with their specification.
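The printSum example can be made concrete as follows (the function body is a stand-in assumed for the sketch; only the two input constraints come from the lecture):

```python
# Requirements -> specifications -> test cases for a printSum-style function.
# The implementation below is a hypothetical stand-in for the example.

def print_sum(a, b):
    total = a + b
    if total > 0:
        return f"positive: {total}"
    else:
        return f"non-positive: {total}"

# Test specifications as input constraints, test cases as concrete values:
#   spec 1: a + b > 0  -> e.g. (3, 2)
#   spec 2: a + b < 0  -> e.g. (-3, -2)
print(print_sum(3, 2))    # covers the a+b > 0 constraint
print(print_sum(-3, -2))  # covers the a+b < 0 constraint
```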

WB Coverage criteria
 SC- Statement coverage- the requirements are all the statements in the program,
measured as the number of executed statements / the number of statements in the
program. Essentially, which lines of the program were covered. The most common
coverage criterion in the industry; the typical coverage target is 80%-90%. The lowest
form of WB testing.
 BC – Branch coverage (decision coverage)- the limitation of SC is that it doesn’t help
find cases which are not dealt with. We need a stronger coverage which will uncover
other cases. A useful representation is a control flow graph (CFG): nodes represent
statements and edges represent control.

The test requirements are the branches in the program. The coverage measure is the
number of executed branches / the total number of branches. BC is better than SC
because 100% branch coverage implies 100% statement coverage: BC subsumes
statement coverage (but not the opposite).

 CC – Condition coverage- looks at all Boolean subexpressions in the program. The test
requirements are to cover all individual Boolean subexpressions. Coverage = the
number of conditions that are both T and F (counted if we have a test which makes it
true and another test which makes it false) / the total number of conditions. We know
that BC is stronger than SC, but is CC stronger than BC? No — they are incomparable.
 BCC- Branch and condition coverage – the test requirements are the branches and the
individual conditions in the program. Since BC and CC are incomparable it makes
sense to combine them. Measure = (conditions that are both T and F + number of
executed branches) / (total conditions and branches). BCC subsumes both BC and CC
by definition.
 MC/DC- Modified condition/decision coverage- testing only important combinations
of conditions. The requirement is that each condition should affect the decision
outcome independently: the set of test cases will demonstrate that each of the
conditions can affect the decision outcome independently of the other conditions.
The algorithm for doing it can be greedy (slide 42): we get branch coverage from tests
1 and 5 but don’t have CC for b; if we add test 3 we add CC for b. MC/DC subsumes
BCC by definition, because we have to have CC + BC (=BCC) + independence.
 PC- Path Coverage- covers all combinations of branches.
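The BC/CC incomparability claim above can be demonstrated by hand-counting coverage for a two-condition decision (conditions are evaluated non-short-circuit in this sketch, purely for counting purposes):

```python
# Hand-counted branch and condition coverage for the decision
# "if a > 0 or b > 0" over a given set of test inputs.

def run(tests):
    branches = set()                   # decision outcomes taken (True/False)
    conds = {"a": set(), "b": set()}   # outcomes seen per individual condition
    for a, b in tests:
        c1, c2 = a > 0, b > 0          # both conditions evaluated for counting
        conds["a"].add(c1)
        conds["b"].add(c2)
        branches.add(c1 or c2)
    bc = len(branches) / 2                             # 2 possible outcomes
    cc = sum(len(v) == 2 for v in conds.values()) / 2  # 2 conditions
    return bc, cc

# 100% CC but only 50% BC: the decision is always True
print(run([(1, -1), (-1, 1)]))   # (0.5, 1.0)
# 100% BC but only 50% CC: b is never positive
print(run([(1, -1), (-1, -1)]))  # (1.0, 0.5)
```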

Eclipse WB testing tool- “JaCoCo” (Java Code Coverage)- helps measure code coverage in
Java automatically. It marks lines green (covered), yellow (decision nodes only partially
covered) and red (not covered).

Focus- a tool for combinatorial test design: define attributes and values.

TDD- the first thing we do is write test cases based on the requirements. They will fail (might
not even compile). Then we write just enough code to make them pass, and then refactor to
improve code quality.
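A minimal test-first sketch (the slugify feature is hypothetical): the test is written first and fails because the function does not exist yet, then just enough code is added to make it pass:

```python
# Step 1 (test first): the test encodes the requirement before any code exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agile  ") == "agile"

# Step 2: just enough implementation to turn the test green.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3 would be refactoring while keeping the test green.
test_slugify()
print("green")
```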

Mutation testing-
Fault-based testing to catch typical faults; it evaluates how good our test suite is (not
coverage). The basic idea is to catch typical bugs, such as a loop boundary written with >=
instead of >, or a mix-up in variable naming.
1. We start with a program and a test suite which passes. Our goal is to assess how
good the test suite is.
2. We create a number of similar programs (mutants); each differs from the
original program in one small way.
3. We execute all mutants on the same test suite. If our test suite makes every mutant
fail (kills all mutants), the test suite is considered good enough.

Mutants should not be syntactic errors (the compiler will catch those), trivial, or semantically
equivalent to the original.

We are interested in mutants that cause the original program to behave differently or
mutants that none of our test cases kill. Mutants are automatically generated based on
predefined program modification rules.

Assumptions
 Programmers write programs that are nearly correct
 Coupling effect assumption – if our test suite is able to distinguish between our
program and very slight changes (i.e. kills all mutants) it is sensitive enough to find
complex bugs in the actual programs.
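The whole loop can be sketched in a few lines (the program, mutation rules and test suite below are toy examples invented for the sketch):

```python
# Mutation testing sketch: generate mutants by swapping one operator,
# run the same test suite against each, and count how many get killed.

ORIGINAL = "def max2(a, b):\n    return a if a >= b else b\n"

def make_mutants(src):
    """One mutant per simple operator-substitution rule."""
    rules = [(">=", ">"), (">=", "<"), (">=", "==")]
    return [src.replace(old, new, 1) for old, new in rules if old in src]

def run_suite(src):
    """Compile the source and return True iff every test assertion passes."""
    ns = {}
    exec(src, ns)
    f = ns["max2"]
    return f(2, 1) == 2 and f(1, 2) == 2 and f(3, 3) == 3

killed = sum(1 for m in make_mutants(ORIGINAL) if not run_suite(m))
print(f"{killed}/{len(make_mutants(ORIGINAL))} mutants killed")
```

Note that the `>=` to `>` mutant survives here: for max it computes the same function as the original, i.e. it is a semantically equivalent mutant, exactly the kind that should be excluded when counting.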

Lecture 11 – Agile & Guest Lecture (Ittay Dror from Akamai)

The motivation for Agile is that the assumptions of the traditional software development
processes are not so relevant anymore. The cost of change has dramatically decreased. If the
cost decreases (maybe even becoming flat), then upfront work becomes a liability. If there is
ambiguity and volatility in the requirements, then it is good to delay decisions.

Agile principles
 Focus on code rather than on design (to avoid unnecessary documentation)
 Value people over process (developers, customers)
 Iterative approach (deliver working solutions quickly, change quickly)
 Customer involvement (feedback throughout the process)
 Expecting that the requirements will change
 Mentality of simplicity (As simple as possible)

XP –Extreme Programming-
XP is a lightweight methodology for small to medium size teams, developing software in the
face of vague or rapidly changing requirements.

Lightweight humanistic discipline of software development;


 Lightweight- process kept to a minimum
 Humanistic- centered around people, developers and customers
 Discipline- includes practices that one needs to follow
 Driving metaphor – steering rather than planning

XP Principles
 Communication – between developers, customers. Keep the information flowing
 Simplicity – look for the simplest thing that works
 Feedback at all levels- developers write test cases which provides immediate
feedback. Developers estimate cost of stories as soon as they get them from
customers. Customers and developers develop system test cases together.
 Courage – To throw away code if it does not work, or change it if you find a way to
improve it, can build and test quickly so be brave.

XP Practices
 Incremental Planning
o Select user stories for this release
o Break stories into development tasks
o Plan release
o Iteratively develop, integrate and test
o Release software
o Evaluate system and iterate (back to selecting stories for next release)
 Small releases – No big release at the end of a long development process. Instead
working in small releases and short development cycles (Release every 1-2 weeks).
The advantages is that the customer gets new value in every release, provides rapid
feedback. There is a sense of accomplishment for the developers while minimizing

the risk of developing the wrong product and quickly adapting the code to new
requirements.
 Simple Design- Avoid complex design at the beginning of the project, instead simple
design to meet the requirements. Fewest possible classes and methods, just the
amount of design we need to get the system to work. Design is happening more
often because we will change the code a lot.
 Test first- a feature that does not have an automated test doesn’t exist. You first
write the test for the feature before you develop it. (Similar to TDD)
 Refactoring- Taking code with suboptimal design and improving it while preserving
functionality. Refactor often, as soon as you see an opportunity, but not on
speculation, only refactor on demand.
 Pair Programming- All code is written in pairs, 2 people looking at 1 machine. The
two programmers alternate between two roles, programming and strategizing.
Empirical studies show that the productivity of a pair of programmers is higher than
the total productivity of two programmers working individually.
 Continuous integration- integrating and testing every few hours or a day at most.
 On-site customer- The customer is an actual member of the team, sits with them and
brings the requirements. To those who say that it is too expensive- maybe the
system is not worth developing in the first place.

SCRUM – a popular instance of agile process

In scrum we have 3 actors;


1. Product Owner- representing the customer, responsible for the product backlog,
user stories and other requirements, manages and prioritizes them
2. Team- responsible for delivering shippable increments, estimate efforts on backlog
items. Consists of 4-9 people.
3. Scrum Master- responsible for the overall scrum process, facilitates communication,
events, overall supervision of the process.

High level process


 Product Backlog- single source of requirements, ordered by value, risk, priority,
necessity. The list is changing and maintained by the product owner.
 Sprint planning and backlog- items are selected and converted into tasks, creating the
sprint backlog
 Sprint every 2-4 weeks, with a daily scrum- each day starts with a 15-minute stand-up
meeting
 Sprint Review and retrospective- at the end of the 2-4 weeks, 4 hour meeting
discussion of accomplishments, issues, demo, backlog
 Potentially shippable product increment.

Guest lecture by Ittay Dror from Akamai

Product lifecycle
• Scope
• Product Architecture
• Engineering plan
• Review
• Execution & Revisions
• Main dev lifecycle

Development lifecycle
• Engineering plan
• Cross teams
• High level
• Major functionalities
• Integration points
• Execution
• Multiple team, each with its own methodology
• Release every 3 2-week sprints
• Postmortem, plan
• “squads” – dev & qa
• Jira epics, stories and tasks
• Git branches
• Release & ongoing pushes

Different systems, different teams- work is happening cross group, technology and lifecycles

Security Concerns
• Data leakage
• Customers
• Adversaries
• Competitors
• Eavesdroppers
• Data interception & manipulation
• STRIDE – Spoofing, Tampering, Repudiation, Information disclosure, Denial of
service, Elevation of privilege
• Secure channels
• Client & server certs between systems
• Configuration safety & rollback
• Permissions

Rollout & migration – they had a new version running in parallel with v1; they had to migrate
customers and manage the onboarding to the new platform and the off-boarding from the
old platform
