Software Engineering 3 & 4 TBD
Software design:
It is the first stage of the Software Design Lifecycle. It defines:
Software architecture:
It is the blueprint for a software system and the highest level of a software
design. It defines:
While there are several different software architecture patterns, we are going to
look at the following:
● Serverless architecture
● Event-driven architecture
● Microservices Architecture
● Call and return Architecture
● Data Flow and Data Centered Architectures
● Hierarchical Architecture
● Component based architecture
Serverless Architecture
This pattern can be used to build software and services without managing the
infrastructure. A third party manages the servers, the backend, and other
services. This lets you focus on fast, continuous software delivery. When you
don’t have to worry about managing the infrastructure and planning for
expansion, you have more time to look at the value that can be added to your
software and services.
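The idea can be sketched as a function-as-a-service handler. The event shape and handler signature below are assumptions modeled loosely on common FaaS platforms, not any specific provider's API:

```python
import json

# A minimal FaaS-style handler sketch. The platform invokes the handler
# with an event; you write only the business logic, never server code.
def handler(event, context=None):
    """Receive an event, run the business logic, return a response."""
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    # Serverless platforms typically expect a status code plus a body.
    return {"statusCode": 200, "body": json.dumps(body)}

print(handler({"name": "Ada"}))
```

The provider scales instances of this function up and down on demand, which is what removes capacity planning from the developer's concerns.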
Event-driven Architecture
This pattern uses events to trigger communication between decoupled producers
and consumers. A consumer might process the event, or it might only be
impacted by the event.
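A minimal in-process sketch of this producer/consumer decoupling (the event bus, event names, and payloads are invented for illustration):

```python
from collections import defaultdict

# Minimal in-process event bus: producers publish events without knowing
# who (if anyone) will consume them, which is the decoupling this
# architecture provides.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, payload):
        # Deliver the event to every registered consumer.
        for callback in self._subscribers[event_type]:
            callback(payload)

bus = EventBus()
received = []
bus.subscribe("order_placed", lambda order: received.append(order))
bus.publish("order_placed", {"id": 42})
print(received)  # [{'id': 42}]
```

In production systems the bus is usually an external broker, but the producer/consumer relationship is the same.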
Hierarchical Architecture
Applications
Widely applied in the areas of parallel and distributed computing.
Advantages
Call and return architecture is a specific type of hierarchical architecture that focuses
on managing control flow through function calls, while hierarchical architecture is a
broader organizational principle encompassing various layered structures.
Data Design
Key Aspects of Data Design:
Data Types:
Defining the types of data (e.g., integers, strings, dates) to be stored and
manipulated.
Data Relationships:
Establishing how different data elements relate to each other (e.g., one-to-one,
one-to-many).
Data Constraints:
Implementing rules to ensure data integrity and consistency (e.g., primary keys,
foreign keys, validation rules).
Data Structures:
Choosing appropriate data structures (e.g., arrays, linked lists, hash tables) to
optimize performance.
Database Design:
Transforming the data model into a database schema, considering factors like
normalization and performance.
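As a sketch of these ideas, the following SQLite schema (table and column names invented for illustration) shows data types, a one-to-many relationship, and integrity constraints in action:

```python
import sqlite3

# Data-design concepts in a SQLite schema: data types, a one-to-many
# relationship, and integrity constraints (primary/foreign keys, checks).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,   -- data type + primary-key constraint
        email TEXT NOT NULL UNIQUE   -- validation rule: required and unique
    )""")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),  -- one-to-many
        total       REAL CHECK (total >= 0)                    -- validation rule
    )""")
conn.execute("INSERT INTO customer (id, email) VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO orders (id, customer_id, total) VALUES (1, 1, 9.99)")
try:
    # Violates the foreign-key constraint: customer 99 does not exist.
    conn.execute("INSERT INTO orders (id, customer_id, total) VALUES (2, 99, 1.0)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The database refuses the inconsistent row itself, which is exactly the data-integrity guarantee the constraints are there to provide.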
Assessing Alternative Architectural Designs
There are basically two types of assessments:
The Software Engineering Institute has developed an Architecture Trade-off
Analysis Method (ATAM) that establishes an iterative evaluation process
for software architectures.
I. ATAM
The design analysis activities that follow are performed iteratively:
1. Collect scenarios.
2. Elicit requirements, constraints, and environment description.
3. Describe the architectural patterns that have been chosen to address the
scenarios and requirements.
The following are architectural views:
● Module view → analysis of work assignments and information hiding
● Process view → analysis of performance
● Data flow view → analysis of the degree to which the architecture
meets the functional requirements
Using the results of steps 5 and 6, some of the architectural alternatives
may be eliminated. Also, one or more of the ATAM steps may be modified
and the ATAM process re-applied.
II. Architectural Complexity
This assessment technique, proposed by Zhao, measures complexity in terms of
three types of dependencies:
Sharing dependencies − represent shared resources; e.g., two consumers share
the same data, so the data is a dependency.
Flow dependencies − represent the dependence relationships between
producers and consumers.
Constraint dependencies − represent constraints on the relative flow of control
among a set of activities; e.g., two components cannot execute at the same
time, so the execution of one component depends on the other.
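The constraint-dependency example, that two components cannot execute at the same time, can be sketched with a lock (the components and workload are invented for illustration):

```python
import threading

# Constraint dependency sketch: two components must not execute their
# critical sections simultaneously; a lock enforces the constraint.
counter = 0
lock = threading.Lock()

def component(increments):
    global counter
    for _ in range(increments):
        with lock:          # the constraint: mutual exclusion
            counter += 1

t1 = threading.Thread(target=component, args=(100_000,))
t2 = threading.Thread(target=component, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 200000 — the constraint keeps the shared data consistent
```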
Mapping Data Flow into a Software Architecture
Efferent : "To carry away from".
Afferent: "To carry to".
In transform flow, data flows into the system and is transformed into an
internal form required for processing; after processing, the transformed data
flows out of the system, converted back into real-world (external) form.
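A minimal sketch of transform flow as three stages (the data and function names are invented for illustration):

```python
# Transform flow sketch: incoming flow converts external data to an
# internal form, a transform center processes it, and outgoing flow
# converts it back to external form.
def incoming(raw_lines):
    # Incoming flow: external (real-world) data -> internal form
    return [int(line.strip()) for line in raw_lines]

def transform(values):
    # Transform center: the core processing of the internal data
    return [v * v for v in values]

def outgoing(values):
    # Outgoing flow: internal form -> external form
    return [f"result={v}" for v in values]

print(outgoing(transform(incoming(["1", "2", "3"]))))
# ['result=1', 'result=4', 'result=9']
```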
● Step 2: Review and refine the data flow diagrams for the software.
Modeling Component-Level Design
The foundation of any software architecture is component-level design
— you break a system into its component parts and define how they
interact. Done well, component-level design facilitates reuse, reduces
complexity, and enables parallel development.
Component-based architecture focuses on the decomposition of the design
into individual functional or logical components that represent well-defined
communication interfaces containing methods, events, and properties.
What is a Component?
A component is a software object, intended to interact with other
components, encapsulating certain functionality or a set of functionalities. It
has a clearly defined interface and conforms to a recommended behavior
common to all components within an architecture.
A software component can be defined as a unit of composition
with a contractually specified interface and explicit context
dependencies only. That is, a software component can be
deployed independently and is subject to composition by third
parties.
Characteristics of Components
Reusability − Components are usually designed to be reused in
different situations in different applications. However, some
components may be designed for a specific task.
Replaceable − Components may be freely substituted with other
similar components.
Not context specific − Components are designed to operate in
different environments and contexts.
Extensible − A component can be extended from existing
components to provide new behavior.
Encapsulated − A component exposes interfaces, which allow the
caller to use its functionality, and does not reveal details of its internal
processes or any internal variables or state.
Independent − Components are designed to have minimal
dependencies on other components.
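These characteristics can be sketched with an abstract interface and a replaceable implementation (the component names are invented for illustration):

```python
from abc import ABC, abstractmethod

# A component with a contractually specified interface: callers depend
# only on the interface, so conforming implementations are replaceable.
class PaymentComponent(ABC):              # the interface (the contract)
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class FakePayment(PaymentComponent):      # one replaceable implementation
    def __init__(self):
        self.charged = []                 # internal state stays encapsulated

    def charge(self, amount_cents: int) -> bool:
        self.charged.append(amount_cents)
        return True

def checkout(payment: PaymentComponent, amount_cents: int) -> str:
    # Not context specific: any component honoring the contract works here.
    return "paid" if payment.charge(amount_cents) else "declined"

print(checkout(FakePayment(), 1999))  # paid
```

Swapping `FakePayment` for a real gateway implementation would not require changing `checkout`, which is the replaceability the text describes.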
● Describe persistent data sources (databases and files) and identify
the classes required to manage them.
● Develop and elaborate behavioral representations for a class or
component. This can be done by elaborating the UML state diagrams
created for the analysis model and by examining all use cases that are
relevant to the design class.
● Elaborate deployment diagrams to provide additional implementation
detail.
Principles
● Cohesion: high
● Coupling: very low or minimal (loosely coupled)
Object Constraint Language (OCL)
Users of the Unified Modeling Language and other
languages can use OCL to specify constraints and other expressions
attached to their models.
Why OCL
In order to write unambiguous constraints, so-called formal
languages have been developed. The disadvantage of traditional formal
languages is that they are usable only by persons with a strong mathematical
background, and difficult for the average business or system modeler to
use.
● Expression language
OCL is a pure expression language. Therefore, an OCL expression is
guaranteed to be without side effects. It cannot change anything in the model.
This means that the state of the system will never change because of an OCL
expression, even though an OCL expression can be used to specify such a state
change (e.g., in a post-condition). All values for all objects, including all links, will
not change. Whenever an OCL expression is evaluated, it simply delivers a
value.
● Modeling language
OCL is a modeling language, not a programming language. It is not possible to
write program logic or flow-control in OCL. You especially cannot invoke
processes or activate non-query operations within OCL. Because OCL is a
modeling language in the first place, not everything in it is promised to be directly
executable.
As a modeling language, all implementation issues are out of scope and cannot
be expressed in OCL. Each OCL expression is conceptually atomic. The state of
the objects in the system cannot change during evaluation.
● Formal language
OCL is a formal language where all constructs have a formally defined
meaning. The specification of OCL is part of the UML specification.
The above Figure gives an overview of the OCL type system in the form of a feature
model. Using a tree-like description, feature models allow describing mandatory and
optional features of a subject, and specifying alternative as well as conjunctive
features. In particular, the figure depicts the different kinds of available types.
Applications of OCL
Key Words
Context
Self
Invariant
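As an illustration of these keywords, an OCL invariant such as `context Account inv: self.balance >= 0` can be mirrored by a side-effect-free check in code (the `Account` class is invented for illustration):

```python
# OCL invariant (illustrative):  context Account inv: self.balance >= 0
# A Python sketch of evaluating the same constraint.
class Account:
    def __init__(self, balance):
        self.balance = balance

def invariant_holds(account):
    # Like an OCL expression, this only evaluates to a value; it never
    # changes the state of the object it inspects (no side effects).
    return account.balance >= 0

print(invariant_holds(Account(100)))  # True
print(invariant_holds(Account(-5)))   # False
```

The `context` names the class the constraint applies to, `self` refers to the instance being evaluated, and the `inv` expression must hold for every instance at all times.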
Example
User Interface
User interface is the first impression of a software system from the user's
point of view. Therefore, any software system must satisfy the requirements
of the user. A UI mainly performs two functions −
Golden Rules
Interface types
UI Design Process
UI Analysis
Interface Design Steps
UI Design patterns
UI Design Issues & Challenges
UI Design Evaluation
Software Quality assurance (SQA)
Software Quality Factors
Statistical quality assurance reflects a growing trend toward quantitative quality
assessment using statistical methods. It involves collecting defect data,
categorizing it, and using techniques like the Pareto principle to pinpoint the
most significant causes of defects. This allows for targeted corrective actions
to reduce defects and improve overall quality.
Process Steps
1) Collect and categorize information (i.e., causes) about software defects that
occur
2) Attempt to trace each defect to its underlying cause (e.g., nonconformance
to specifications, design error, violation of standards, poor communication with
the customer)
3) Using the Pareto principle (80% of defects can be traced to 20% of all
causes), isolate the vital 20% of causes
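The three steps above can be sketched as a small Pareto analysis (the defect log below is invented sample data):

```python
from collections import Counter

# Statistical SQA sketch: categorize defects by underlying cause, then
# isolate the "vital few" causes that explain ~80% of all defects.
defects = (
    ["incomplete specification"] * 40 +
    ["misinterpreted customer communication"] * 25 +
    ["violation of coding standards"] * 10 +
    ["error in data representation"] * 5
)
counts = Counter(defects)          # step 1: collect and categorize
total = sum(counts.values())
cumulative = 0
vital_few = []
for cause, n in counts.most_common():   # step 2: causes, worst first
    cumulative += n
    vital_few.append(cause)
    if cumulative / total >= 0.8:       # step 3: stop at ~80% of defects
        break
print(vital_few)
# ['incomplete specification', 'misinterpreted customer communication']
```

Corrective action then targets only the vital-few causes, where it pays off most.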
Sample of Errors
● Incomplete or erroneous specifications
● Misinterpretation of customer communication
● Intentional deviation from specifications
● Violation of programming standards
● Errors in data representation
● Inconsistent component interface
● Errors in design logic
● Incomplete or erroneous testing
● Inaccurate or incomplete documentation
● Errors in programming language translation of design
● Ambiguous or inconsistent human/computer interface
● ISO 9000 describes quality assurance elements in generic terms that can be applied to
any business.
● To be ISO-compliant, processes should adhere to the standards described.
● Ensures quality planning, quality control, quality assurance and quality improvement.
Software Reliability
SQA Plan
The plan identifies the SQA responsibilities
1. Purpose
2. Reference
6. Code control
8. Testing methodology
SQA Activities
#1) Creating an SQA Management Plan: This involves charting out a blueprint of
how SQA will be carried out in the project with respect to the engineering
activities, while ensuring that you corral the right talent/team.
#2) Setting the Checkpoints: The SQA team sets up periodic quality checkpoints to
ensure that product development is on track and shaping up as expected.
#4) Conduct Formal Technical Reviews: An FTR is traditionally used to evaluate the
quality and design of the prototype. In this process, a meeting is conducted with the
technical staff to discuss the quality requirements of the software and the design quality
of the prototype. This activity helps in detecting errors in the early phases of the SDLC
and reduces rework effort later.
#5) Formulate a Multi-Testing Strategy: The multi-testing strategy employs different
types of testing so that the software product can be tested well from all angles to ensure
better quality.
#6) Enforcing Process Adherence: This activity involves coming up with processes and
getting cross-functional teams to buy in on adhering to set-up systems. This activity is a
blend of two sub-activities:
Process Evaluation: This ensures that the set standards for the project are
followed correctly. Periodically, the process is evaluated to make sure it is working
as intended and whether any adjustments need to be made.
#7) Controlling Change: This step is essential to ensure that the changes we make are
controlled and informed. Several manual and automated tools are employed to make this
happen. By validating the change requests, evaluating the nature of change, and
controlling the change effect, it is ensured that the software quality is maintained during
the development and maintenance phases.
#8) Measure Change Impact: The QA team actively participates in determining the
impact of changes that are brought about by defect fixing or infrastructure changes, etc.
This step has to consider the entire system and business processes to ensure there are no
unexpected side effects. For this purpose, we use software quality metrics that allow
managers and developers to observe the activities and proposed changes from the
beginning to the end of the SDLC and initiate corrective action wherever required.
#9) Performing SQA Audits: The SQA audit inspects the actual SDLC process followed
vs. the established guidelines that were proposed. This is to validate the correctness of the
planning and strategic process vs. the actual results. This activity could also expose any
non-compliance issues.
#11) Manage Good Relations: The strength of the QA team lies in its ability to maintain
harmony with various cross-functional teams. QA vs. developer conflicts should be kept
to a minimum, and we should look at everyone working toward the common goal of a
quality product. No one is superior or inferior to anyone else; we are all one team.
Testing Strategies
In the world of software engineering, ensuring the reliability and quality of software
applications is paramount. A critical aspect of achieving this goal is through
comprehensive testing. Test strategies play a vital role in orchestrating the testing
process, guiding the efforts of software development teams to systematically identify
and rectify defects, thereby enhancing the overall quality of the software. In this
article, we will explore the testing strategies in software engineering.
A test strategy helps allocate resources effectively, including human resources, time,
and testing tools. This ensures that testing efforts are proportional to the software's
complexity and importance.
Mitigating Risks:
Test strategies identify potential risks and challenges in the testing process and outline
mitigation measures to address them.
Testing Methodologies:
Outlines the testing methodologies and techniques to be employed during the testing
process.
Testing Levels:
Specifies the different testing levels to be performed, such as unit testing, integration
testing, system testing, and acceptance testing.
Test Environment:
Describes the environment in which testing will be conducted, including hardware,
software, and network configurations.
Test Deliverables:
Lists the documents and artifacts that will be produced during the testing process,
such as test plans, test cases, and defect reports.
Resource Allocation:
Details the resources required for testing, including human resources, tools, and
infrastructure.
Introduction:
An overview of the purpose and scope of the test strategy document.
Testing Objectives:
Clearly defined goals and objectives of the testing efforts.
Testing Scope:
The areas and functionalities that will be tested, along with any specific exclusions.
Testing Approach:
The overall approach to testing, including the types of testing to be performed and the
order in which they will occur.
Test Environment:
Details about the hardware, software, and network configurations used for testing.
Resource Allocation:
Information about the roles and responsibilities of team members involved in testing,
as well as the tools and equipment required.
Testing Schedule:
A timeline that outlines the testing phases, milestones, and deadlines.
Testing Deliverables:
A list of documents, reports, and artifacts that will be produced during the testing
process.
Exit Criteria:
The conditions that must be met for testing to be considered complete.
Considering the process from a procedural point of view, testing within the context of
software engineering is actually a series of four steps that are implemented
sequentially. The steps are shown in Figure
Unit testing makes heavy use of testing techniques that exercise specific paths in a
component’s control structure to ensure complete coverage and maximum error
detection. Next, components must be assembled or integrated to form the complete
software package.
Unit testing focuses verification effort on the smallest unit of software design—the
software component or module. Using the component-level design description as a
guide, important control paths are tested to uncover errors within the boundary of the
module. The relative complexity of tests and the errors those tests uncover is limited
by the constrained scope established for unit testing. The unit test focuses on the
internal processing logic and data structures within the boundaries of a component.
This type of testing can be conducted in parallel for multiple components.
Unit-test considerations:
The module
interface is tested to ensure that information properly flows into and out of the
program unit under test. Local data structures are examined to ensure that data stored
temporarily maintains its integrity during all steps in an algorithm’s execution. All
independent paths through the control structure are exercised to ensure that all
statements in a module have been executed at least once. Boundary conditions are
tested to ensure that the module operates properly at boundaries established to limit or
restrict processing. And finally, all error-handling paths are tested. Data flow across a
component interface is tested before any other testing is initiated.
Selective testing of execution paths is an essential task during the unit test. Test cases
should be designed to uncover errors due to erroneous computations, incorrect
comparisons, or improper control flow.
Boundary testing is one of the most important unit testing tasks. Software often fails
at its boundaries. That is, errors often occur when the nth element of an n-dimensional
array is processed, when the ith repetition of a loop with i passes is invoked, when the
maximum or minimum allowable value is encountered. Test cases that exercise data
structure, control flow, and data values just below, at, and just above maxima and
minima are very likely to uncover errors.
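A minimal sketch of boundary-focused unit tests (the `clamp` function and its ranges are invented for illustration):

```python
import unittest

# Unit-testing sketch: exercise values just below, at, and just above
# each boundary, which is where the text says software most often fails.
def clamp(value, low, high):
    """Restrict value to the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampBoundaryTests(unittest.TestCase):
    def test_below_at_and_above_minimum(self):
        self.assertEqual(clamp(-1, 0, 10), 0)   # just below the minimum
        self.assertEqual(clamp(0, 0, 10), 0)    # at the minimum
        self.assertEqual(clamp(1, 0, 10), 1)    # just above the minimum

    def test_below_at_and_above_maximum(self):
        self.assertEqual(clamp(9, 0, 10), 9)    # just below the maximum
        self.assertEqual(clamp(10, 0, 10), 10)  # at the maximum
        self.assertEqual(clamp(11, 0, 10), 10)  # just above the maximum

if __name__ == "__main__":
    unittest.main(argv=["clamp-tests"], exit=False)
```

Each test targets one boundary of the component's constrained scope, matching the unit-test considerations listed above.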
Integration testing addresses the issues associated with the dual problems of
verification and program construction. Test-case design techniques that focus on
inputs and outputs are more prevalent during integration, although techniques that
exercise specific program paths may be used to ensure coverage of major control
paths. After the software has been integrated (constructed), a set of high-order
tests is conducted.
Validation testing provides final assurance that software meets all informational,
functional, behavioral, and performance requirements.
System testing: The last high-order testing step falls outside the boundary of software
engineering and into the broader context of computer system engineering. Software,
once validated, must be combined with other system elements (e.g., hardware, people,
databases). System testing verifies that all elements mesh properly and that overall
system function/performance is achieved.
Black-Box Testing:
In this method, testers evaluate the software through its inputs and outputs, from
the user's perspective, without looking at the internal code.
● Strengths:
Focuses on user perspectives, effective for functional and usability testing.
Does not require knowledge of internal code.
● Weaknesses:
Limited coverage of code paths, may miss certain logic errors.
● Best for:
Validating functionality, usability, and user scenarios. Detects issues related to
inputs, outputs, and user experience.
White-Box Testing:
In this method, testers dive deep into the software's internal code and logic. They
create tests to cover different paths the code can take, ensuring that each part works as
intended. It's like dissecting the software to verify its accuracy.
● Strengths:
Thorough coverage of code paths, effective for logic and structural testing.
Provides insights into code quality.
● Weaknesses:
May not catch integration or external system issues. Requires knowledge of
internal code.
● Best for:
Verifying code logic, complex algorithms, and integration points. Detects issues
within the code structure and logic.
Regression Testing:
Whenever changes or updates are made to the software, regression testing kicks in.
Testers run tests that were already done before to make sure these changes haven't
caused any new problems or broken existing functions.
● Strengths:
Ensures new changes don't break existing functionality. Efficient for identifying
regressions.
● Weaknesses:
May not catch new defects outside the scope of previous tests.
● Best for:
Validating software after changes, updates, or enhancements. Detects issues
caused by recent modifications.
Smoke Testing:
Before thorough testing begins, a smoke test is done to quickly check if the basic
functions of the software are operational. This is like a preliminary check to catch any
major issues early on.
● Strengths:
Quickly identifies major issues in basic functionalities. Provides initial
assessment of software stability.
● Weaknesses:
Limited coverage and depth of testing.
● Best for:
Initial assessment before in-depth testing. Detects defects that could hinder
further testing.
Exploratory Testing:
Testers take an open-ended approach here. They interact with the software without
strict plans, trying to find hidden problems that might not be caught by scripted tests.
It's like an adventure to discover unexpected issues.
● Strengths:
Finds unexpected defects, focuses on user behavior. Flexible and adaptable
approach.
● Weaknesses:
May lack repeatability and documentation.
● Best for:
Identifying hidden defects, usability issues, and scenarios not covered by
scripted tests.
Performance Testing:
This type of testing focuses on the software's speed, stability, and scalability. Testers
simulate different workloads to ensure the software can handle various levels of
demand without crashing or slowing down.
● Strengths:
Measures software speed, stability, and scalability. Identifies performance
bottlenecks.
● Weaknesses:
May not uncover all usability or functional issues.
● Best for:
Ensuring software can handle different workloads and stress conditions.
Detects performance-related bottlenecks.
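A minimal performance-measurement sketch, timing the same invented workload under two sizes (real performance tests would use dedicated tools and realistic loads):

```python
import time

# Performance-testing sketch: measure elapsed time for the same
# operation at different workload sizes to see how it scales.
def workload(n):
    return sum(i * i for i in range(n))

for n in (10_000, 100_000):
    start = time.perf_counter()
    workload(n)
    elapsed = time.perf_counter() - start
    print(f"n={n}: {elapsed:.6f}s")
```

Comparing the timings across sizes is the crudest form of scalability analysis; load-testing tools extend the same idea to concurrent users and sustained traffic.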
Security Testing:
Testers concentrate on finding any weak points in the software's security. They look
for vulnerabilities that hackers might exploit to gain unauthorized access or steal
sensitive data.
● Strengths:
Identifies vulnerabilities and security loopholes. Ensures data protection.
● Weaknesses:
May not catch all functional or usability defects.
● Best for:
Uncovering security weaknesses, vulnerabilities, and potential breaches.
Usability Testing:
Here, testers evaluate the software's user-friendliness. They assess how easy it is for
users to navigate, understand, and accomplish tasks within the software. This testing
ensures a positive and intuitive user experience.
● Strengths:
Evaluates user-friendliness and user experience. Ensures intuitive interaction.
● Weaknesses:
May not uncover underlying technical issues.
● Best for:
Assessing user interaction, navigation, and overall satisfaction.
Factors in Choosing a Testing Strategy
Project Requirements:
The type of software being developed, how complex it is, and what it's going to be
used for all influence the testing approach chosen. For instance, if the software is
critical and needs to be super reliable, a more thorough testing strategy might be
necessary.
Risk Analysis:
Identifying potential problems or risks in the software is crucial. This helps in picking
the right testing strategies that can effectively tackle and minimize these risks. For
example, if there's a risk of data loss, thorough testing around data handling would be
important.
Testing Objectives:
The goals set for testing matter too. Whether it's finding specific types of bugs or
ensuring the software can handle a certain number of users, these objectives guide
which strategies are most suitable.
Stakeholder Expectations:
Considering what end-users, clients, and other stakeholders expect from the software
is essential. The chosen testing strategy should align with these expectations. For
instance, if the software is meant to be user-friendly, the testing should heavily focus
on usability.