Software Engineering 3 & 4 TBD

Software architecture serves as a blueprint for organizing and conceptualizing systems, defining essential components and their interactions. It encompasses characteristics such as system setup, fundamental elements, and high-level structures, while distinguishing itself from software design, which focuses on individual modules and specifications. Various architecture patterns, like microservices and event-driven architecture, help address common development challenges, and data design plays a crucial role in structuring and managing information within software systems.


Creating an Architectural Design

What is software architecture?

Like the blueprint of a building, a bridge, or any other kind of structure, software architecture is used to organize and conceptualize a system. It includes a definition of which elements and components need to be in the system, which components need to interact with each other, and what type of environment the software needs to operate in.

Software architecture defines the structure and constraints that the software developers will need to work within. It includes the documentation, charts, diagrams, and anything else used to facilitate communication with stakeholders.

Characteristics of software architecture

Architecture characteristics define the software's requirements and what it is expected to do. Some of the characteristics shared by software architectures include:

A description of the overall system setup: This includes the structure of the software you want to build. To make it easier for stakeholders to understand, you might want to create a visual representation with diagrams and charts. Visuals are a great way to show relationships between components and subsystems. They give everybody involved insight into the architecture and give you perspective as you analyze the structure, look for ways to improve it, or plan an expansion to an existing system.

A definition of fundamental elements: Software architecture defines the core set of elements and properties that are required to build the system. It does not document every element in detail; it simply identifies the structures that are required to build the software's core functionality. For example, a web browser and a web server describe the core elements needed for a user to interact with the internet.

A description of high-level structures: The development teams need to make decisions about high-level structures that describe things like system availability, performance, scalability, reliability and fault tolerance, configuration and support, and monitoring and maintenance.

A description of what is being built: You are likely building software or a system to address the needs and requirements of stakeholders. But you can't always fully develop everything that the stakeholders ask for. A description of what you are building can help you manage stakeholder expectations. Use diagrams, flowcharts, and process documents to keep stakeholders informed and to avoid feature and scope creep.

Software Design Vs Software Architecture

Software design:

It is a stage of the software design life cycle. It defines:

●​ How the individual modules and components of a system will be designed
●​ Detailed software properties
●​ Specifications that will help developers implement the software
●​ How all components, modules, functions, and so on will be built

Software architecture:

It is the blueprint for a software system and the highest level of software design. It defines:

●​ Which elements and components need to be in the system
●​ Which components need to interact
●​ The type of environment the software needs to operate in

Software architecture patterns

Developers often run into similar problems while working on a project. Software architecture patterns give developers a way to solve these problems whenever they come up. They are important because they help developers to be more productive and efficient.

A software architecture can be described in several ways:
●​ UML
●​ Architectural views (functional and non-functional)
●​ Architecture Description Language (ADL), which defines the software architecture formally and semantically

While there are several different software architecture patterns, we are going to look at the following:

●​ Serverless architecture
●​ Event-driven architecture
●​ Microservices architecture
●​ Call and return architecture
●​ Data flow and data-centered architectures
●​ Hierarchical architecture
●​ Component-based architecture

Serverless architecture pattern

This pattern can be used to build software and services without managing the
infrastructure. A third-party is used to manage servers, the backend, and other
services. This lets you focus on fast, continuous software delivery. When you
don’t have to worry about managing the infrastructure and planning for
expansion, you have more time to look at the value that can be added to your
software and services.
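As a sketch of what this looks like in practice, the following assumes an AWS-Lambda-style platform; the handler signature and event shape are illustrative assumptions, not part of these notes. The point is that the developer supplies only this function, while the third party provisions and scales the servers.

```python
# Hypothetical serverless function: the platform, not the developer,
# manages and scales the servers; we only supply a handler.
import json

def handler(event, context=None):
    """Illustrative Lambda-style entry point (names are assumptions).

    The provider invokes this once per request; no server process
    is managed by the application developer.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```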

Event-driven software architecture

This type of architecture relies on events to trigger actions, communications, and other services in a decoupled system. An event can be anything that changes the current state. Think about when a customer adds bank information to the payment options section in their account on an e-commerce website. The event can carry the state, like when a purchase is completed. Or the event can be an identifier, like when a notification is sent that an order has been placed successfully.

Event-driven architecture includes event producers and event consumers. The producers detect events and transmit them to the event consumers. A consumer might process the event, or it might only be impacted by the event.
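The producer/consumer decoupling described above can be sketched with a minimal in-process event bus. This is an illustration only; real systems typically use a message broker, and the "order placed" event mirrors the e-commerce example in the text.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus: producers publish events and
    decoupled consumers subscribe by event name (a sketch, not a
    production-grade broker)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, consumer):
        self._subscribers[event_name].append(consumer)

    def publish(self, event_name, payload):
        # The producer does not know who (if anyone) consumes the event.
        for consumer in self._subscribers[event_name]:
            consumer(payload)

# Example: an "order placed" event, as in the text.
bus = EventBus()
notifications = []
bus.subscribe("order_placed",
              lambda e: notifications.append(f"Order {e['id']} confirmed"))
bus.publish("order_placed", {"id": 42})
```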

Microservices software architecture

Microservices are multiple applications that work together. Each microservice is developed independently and is designed to solve a specific problem or perform a specific task, but they are also designed to communicate with one another so that together they can achieve business goals.

Because each microservice is developed separately from the others, development is streamlined and deployment is easier. This also increases your ability to quickly scale to meet customer expectations.

Call and Return Architecture

Hierarchical Architecture

Applications

​ Suitable for applications where reliability of software is a critical issue.

​ Widely applied in the areas of parallel and distributed computing.

Advantages

​ Faster computation and easy scalability.


​ Provides robustness as slaves can be duplicated.
​ Slaves can be implemented differently to minimize semantic errors.
​ Easy to decompose the system based on hierarchy refinement.
​ Can be used in a subsystem of object oriented design.

Call and Return architecture vs Hierarchical architecture:

Call and return architecture is a specific type of hierarchical architecture that focuses
on managing control flow through function calls, while hierarchical architecture is a
broader organizational principle encompassing various layered structures.

Data Design

Data design in software engineering is the process of structuring and


organizing data within a software system to ensure efficient storage, retrieval,
and manipulation. It involves determining data types, relationships, and
constraints to effectively manage information for the application's functionality
and performance.

Key Aspects of Data Design:

​ Data Types:​
Defining the types of data (e.g., integers, strings, dates) to be stored and
manipulated.
​ Data Relationships:​
Establishing how different data elements relate to each other (e.g., one-to-one,
one-to-many).
​ Data Constraints:​
Implementing rules to ensure data integrity and consistency (e.g., primary keys,
foreign keys, validation rules).
​ Data Structures:​
Choosing appropriate data structures (e.g., arrays, linked lists, hash tables) to
optimize performance.

​ Database Design:​
Transforming the data model into a database schema, considering factors like
normalization and performance.
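The data types, relationships, and constraints listed above can be expressed concretely as a database schema. The sketch below uses Python's built-in sqlite3 module; the customer/order tables and column names are illustrative assumptions, not from these notes.

```python
import sqlite3

# Sketch of a data design as a database schema: data types, a
# one-to-many relationship, and integrity constraints (keys).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only if enabled
conn.execute("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,   -- constraint: primary key
        name TEXT NOT NULL          -- data type plus a NOT NULL rule
    )""")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),  -- foreign key
        placed_on   TEXT NOT NULL   -- date stored as ISO-8601 text
    )""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (10, 1, '2024-01-15')")

# The foreign-key constraint rejects an order for a nonexistent customer:
try:
    conn.execute("INSERT INTO orders VALUES (11, 99, '2024-01-16')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```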

Levels of Data Design:

●​ Program Component Level: Designing data structures and algorithms for individual software components.
●​ Application Level: Converting the data model into a database structure.
●​ Business Level: Designing data warehouses for data mining and analysis.

Assessing Alternative Architectural Designs
There are basically two types of assessments:

Iterative approach → to analyze design trade-offs
Pseudo-quantitative approach → to assess design quality

Assessing alternative architectural designs in software engineering involves evaluating different architectural styles and patterns to determine the most suitable design for a specific project. This process typically involves considering various factors like:
●​ system requirements,
●​ quality attributes, and
●​ potential trade-offs.

The Software Engineering Institute has developed an Architecture Tradeoff Analysis Method (ATAM) that establishes an iterative evaluation process for software architectures.

I. ATAM
The following design analysis steps are performed iteratively:
1.​ Collect scenarios
2.​ Elicit requirements, constraints, and environment description
3.​ Describe the architectural patterns that have been chosen to address the scenarios and requirements. The following are architectural views:
●​ Module view
●​ Process view → analysis of performance
●​ Data flow view → analysis of the degree to which the architecture meets the functional requirements

4.​ Evaluate the quality attributes by considering each attribute in isolation
5.​ Identify the sensitivity of the quality attributes to various architectural attributes for a specific architectural style
6.​ Critique candidate architectures (as developed in step 3) using the sensitivity analysis conducted in step 5

Using the results of steps 5 and 6, some of the architectural alternatives may be eliminated. Alternatively, one or more of the ATAM steps are modified and the ATAM process is re-applied.

The next assessment technique is architectural complexity, a technique to assess the overall complexity of the architecture.

II. Architectural complexity
Proposed by Zhao. Overall complexity is assessed in terms of three types of dependencies:
Sharing dependencies: e.g., two consumers share the same data, so the data is a dependency.
Flow dependencies: represent the dependence relationships between producers and consumers.
Constraint dependencies: represent constraints on the relative flow of control among a set of activities; e.g., two components cannot execute at the same time, so the execution of one component depends on the other.

III. Architecture Description Language (ADL)
An ADL provides semantic and syntactic notation for describing a software architecture.

Mapping Data Flow into a Software Architecture

Efferent: "to carry away from".
Afferent: "to carry to".

Data flows into the system (incoming, afferent flow), is transformed into the format required by the system, and flows out of the system after processing (outgoing, efferent flow).

●​ Step 2: Review and refine the data flow diagrams for the software
Modeling Component-Level Design
The foundation of any software architecture is component-level design
— you break a system into its component parts and define how they
interact. Done well, component-level design facilitates reuse, reduces
complexity, and enables parallel development.
Component-based architecture focuses on the decomposition of the design into individual functional or logical components that expose well-defined communication interfaces containing methods, events, and properties.

The primary objective of component-based architecture is to ensure component reusability. A component encapsulates the functionality and behaviors of a software element into a reusable and self-deployable binary unit. There are many standard component frameworks, such as COM/DCOM, JavaBeans, EJB, .NET, web services, and grid services. These technologies are widely used in local desktop GUI application design; for example, graphic JavaBeans components, MS ActiveX components, and COM components can be reused by simple drag-and-drop operations.

Component-oriented software design has many advantages over traditional object-oriented approaches.

What is a Component?
A component is a software object, intended to interact with other components, that encapsulates certain functionality or a set of functionalities. It has a clearly defined interface and conforms to a recommended behavior common to all components within an architecture.

A component is a modular, portable, replaceable, and reusable set of well-defined functionality that encapsulates its implementation and exports it as a higher-level interface.

A software component can be defined as a unit of composition with a contractually specified interface and explicit context dependencies only. That is, a software component can be deployed independently and is subject to composition by third parties.

Characteristics of Components
​ Reusability − Components are usually designed to be reused in
different situations in different applications. However, some
components may be designed for a specific task.
​ Replaceable − Components may be freely substituted with other
similar components.
​ Not context specific − Components are designed to operate in
different environments and contexts.
​ Extensible − A component can be extended from existing
components to provide new behavior.
​ Encapsulated − A component depicts the interfaces, which allow the
caller to use its functionality, and do not expose details of the internal
processes or any internal variables or state.
​ Independent − Components are designed to have minimal
dependencies on other components.
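The characteristics above can be sketched in code: the abstract interface below is the component's contract, and the concrete class is one replaceable, encapsulated implementation. The names (a payment component) are illustrative assumptions, not from these notes.

```python
from abc import ABC, abstractmethod

class PaymentComponent(ABC):
    """Illustrative component contract: callers depend only on this
    interface, never on a concrete implementation's internals."""

    @abstractmethod
    def charge(self, amount_cents: int) -> str:
        """Charge an amount; returns a transaction id."""

class MockPayment(PaymentComponent):
    """A replaceable implementation (e.g., for testing) that satisfies
    the same interface, so components can be freely substituted."""

    def __init__(self):
        self._next_id = 0   # encapsulated internal state, not exposed

    def charge(self, amount_cents: int) -> str:
        self._next_id += 1
        return f"txn-{self._next_id}"

# Callers are written against the interface, not the implementation:
component: PaymentComponent = MockPayment()
txn = component.charge(1999)
```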

Conducting Component-Level Design

●​ Recognizes all design classes that correspond to the problem domain as defined in the analysis model and architectural model.
●​ Recognizes all design classes that correspond to the infrastructure domain.
●​ Describes all design classes that are not acquired as reusable components, and specifies message details.
●​ Identifies appropriate interfaces for each component, elaborates attributes, and defines the data types and data structures required to implement them.
●​ Describes processing flow within each operation in detail by means of
pseudo code or UML activity diagrams.

●​ Describes persistent data sources (databases and files) and identifies
the classes required to manage them.
●​ Develops and elaborates behavioral representations for a class or component. This can be done by elaborating the UML state diagrams created for the analysis model and by examining all use cases that are relevant to the design class.
●​ Elaborates deployment diagrams to provide additional implementation
detail.

●​ Demonstrates the location of key packages or classes of components in a system by using class instances and designating specific hardware and operating system environments.
●​ The final decision can be made by using established design principles and guidelines. Experienced designers consider all (or most) of the alternative design solutions before settling on the final design model.

Designing Class-based Components

Principles

Cohesion: high
Coupling: very low or minimal (loosely coupled)
Object Constraint Language (OCL)

The Object Constraint Language (OCL) is an expression language. It describes constraints on object-oriented languages and other modelling artifacts. A constraint can be seen as a restriction on a model or a system. OCL is part of the Unified Modeling Language (UML) and plays an important role in the analysis phase of the software lifecycle.

OCL is a typed, declarative, and side-effect-free specification language. Typed means that each OCL expression evaluates to a type (either one of the predefined OCL types or a type in the model where the OCL expression is used) and must conform to the rules and operations of that type. Side-effect free implies that OCL expressions can query or constrain the state of the system but not modify it.

OCL is a formal language. Users of the Unified Modeling Language and other languages can use OCL to specify constraints and other expressions attached to their models.

Why OCL
In order to write unambiguous constraints, so-called formal languages have been developed. The disadvantage of traditional formal languages is that they are usable only by persons with a strong mathematical background and are difficult for the average business or system modeler to use.

To understand OCL, the component parts of this statement should be examined. OCL has the characteristics of an expression language, a modeling language, and a formal language.

●​ Expression language
OCL is a pure expression language. Therefore, an OCL expression is
guaranteed to be without side effects. It cannot change anything in the model.
This means that the state of the system will never change because of an OCL
expression, even though an OCL expression can be used to specify such a state
change (e.g., in a post-condition). All values for all objects, including all links, will
not change. Whenever an OCL expression is evaluated, it simply delivers a
value.

●​ Modeling language
OCL is a modeling language, not a programming language. It is not possible to
write program logic or flow-control in OCL. You especially cannot invoke
processes or activate non-query operations within OCL. Because OCL is a
modeling language in the first place, not everything in it is promised to be directly
executable.
As a modeling language, all implementation issues are out of scope and cannot
be expressed in OCL. Each OCL expression is conceptually atomic. The state of
the objects in the system cannot change during evaluation.

●​ Formal language
OCL is a formal language where all constructs have a formally defined
meaning. The specification of OCL is part of the UML specification.

Why a formal language?


In object-oriented modeling, a graphical model, like a class model, is not enough for a
precise and unambiguous specification. There is a need to describe additional
constraints about the objects in the model. Such constraints are often described in
natural language. Practice has shown that this will always result in ambiguities. To write
unambiguous constraints so-called formal languages have been developed.

The above Figure gives an overview of the OCL type system in the form of a feature
model. Using a tree-like description, feature models allow describing mandatory and
optional features of a subject, and to specify alternative features as well as conjunctive
features. In particular, the figure pictures the different kinds of available types.

Applications of OCL

OCL can be used for a number of different purposes:

●​ To specify invariants on classes and types in the class model
●​ To specify type invariants for stereotypes
●​ To describe pre- and post-conditions on operations and methods
●​ To describe guards
●​ As a navigation language
●​ To specify constraints on operations

Key Words

Context

Self

Invariant

Example
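As a concrete illustration of the keywords context, self, and invariant, here is the classic Company example adapted from the OCL specification; the Company/Person model and attribute names are assumptions for illustration, not a model from these notes:

```ocl
-- Invariant: in the context of Company, "self" denotes the Company
-- instance being constrained; the invariant must hold for every instance.
context Company
    inv enoughEmployees: self.numberOfEmployees > 50

-- Pre- and post-conditions on an operation:
context Company::hireEmployee(p : Person)
    pre:  not self.employee->includes(p)
    post: self.employee->includes(p)
          and self.numberOfEmployees = self.numberOfEmployees@pre + 1
```

Note that the post-condition can refer to the value of an attribute before the operation via `@pre`, yet the expression itself never changes the system state, consistent with OCL being side-effect free.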

User Interface
The user interface is the first impression of a software system from the user's point of view; therefore any software system must satisfy the requirements of the user. A UI mainly performs two functions:

​ Accepting the user's input
​ Displaying the output

​ A UI has its own syntax and semantics. The syntax comprises component types such as text, icons, and buttons, while usability summarizes the semantics. The quality of a UI is characterized by its look and feel (syntax) and its usability (semantics).

​ There are basically two major kinds of user interface:
​ a) Textual
​ b) Graphical (menu-based and direct manipulation)

​ Software in different domains may require different styles of user interface. For example, a calculator needs only a small area for displaying numbers but a big area for commands, while a web page needs forms, links, tabs, and so on.
Golden Rules

Interface types

UI Design Process

UI Analysis

Interface Design Steps

UI Design patterns

UI Design Issues & Challenges

UI Design Evaluation

Software Quality assurance (SQA)

Software Quality Factors

Statistical Software Quality Assurance

Statistical Software Quality Assurance (SQA) focuses on improving software quality by identifying and addressing the root causes of defects using statistical methods. It involves collecting defect data, categorizing it, and using techniques like the Pareto principle to pinpoint the most significant causes of defects. This allows for targeted corrective actions to reduce defects and improve software quality.

Process Steps
1) Collect and categorize information (i.e., causes) about software defects that
occur
2) Attempt to trace each defect to its underlying cause (e.g., nonconformance
to specifications, design error, violation of standards, poor communication with
the customer)
3) Using the Pareto principle (80% of defects can be traced to 20% of all
causes), isolate the 20%
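The three steps above can be sketched in a few lines of Python. The defect categories and counts below are made-up illustrative data, not measurements from these notes.

```python
from collections import Counter

# Step 1-2: collect defects and categorize each by its underlying cause.
defects = [
    "incomplete specification", "incomplete specification",
    "incomplete specification", "incomplete specification",
    "design error", "design error", "design error",
    "violation of standards", "violation of standards",
    "poor customer communication",
]
counts = Counter(defects)
total = len(defects)
ranked = counts.most_common()   # most frequent causes first

# Step 3 (Pareto): find the smallest set of causes covering >= 80% of defects.
covered, vital_few = 0, []
for cause, n in ranked:
    vital_few.append(cause)
    covered += n
    if covered / total >= 0.8:
        break
```

Corrective action would then target only the causes in `vital_few`, since fixing them removes most of the observed defects.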

Sample of Errors
●​ Incomplete or erroneous specifications
●​ Misinterpretation of customer communication
●​ Intentional deviation from specifications
●​ Violation of programming standards
●​ Errors in data representation
●​ Inconsistent component interface
●​ Errors in design logic
●​ Incomplete or erroneous testing
●​ Inaccurate or incomplete documentation
●​ Errors in programming language translation of design
●​ Ambiguous or inconsistent human/computer interface

ISO 9000 Quality Standards

●​ ISO 9000 describes quality assurance elements in generic terms that can be applied to
any business.

●​ It treats an enterprise as a network of interconnected processes.

●​ To be ISO-compliant processes should adhere to the standards described.

●​ Elements include organizational structure, procedures, processes and resources.

●​ Ensures quality planning, quality control, quality assurance and quality improvement.

Software Reliability

Software reliability is defined as the probability of failure-free operation of a computer program in a specified environment for a specified time.

It can be measured, directed, and estimated.

A measure of software reliability is the mean time between failures, where:

MTBF = MTTF + MTTR

MTTF → mean time to failure
MTTR → mean time to repair

Software availability is the probability that a program is operating according to requirements at a given point in time:

Availability = MTTF / (MTTF + MTTR) * 100 %
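The MTBF and availability formulas can be turned into a small worked example. The numbers used are illustrative, not from these notes.

```python
def mtbf(mttf_hours: float, mttr_hours: float) -> float:
    """MTBF = MTTF + MTTR, per the text."""
    return mttf_hours + mttr_hours

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Availability = MTTF / (MTTF + MTTR) * 100 %, per the text."""
    return mttf_hours / (mttf_hours + mttr_hours) * 100

# Illustrative numbers: a system that runs 950 hours between failures
# and takes 50 hours to repair is available 95% of the time.
a = availability(950, 50)
```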

SQA Plan

The Software Quality Assurance Plan comprises the procedures, techniques, and tools that are employed to make sure that a product or service aligns with the requirements defined in the SRS (Software Requirement Specification).

The plan identifies the SQA responsibilities of the team and lists the areas that need to be reviewed and audited. It also identifies the SQA work products.

The SQA plan document consists of the following sections:

1. Purpose

2. Reference

3. Software configuration management

4. Problem reporting and corrective action

5. Tools, technologies, and methodologies

6. Code control

7. Records: Collection, maintenance, and retention

8. Testing methodology

SQA Activities

Given below is the list of SQA activities:

#1) Creating an SQA Management Plan Creating an SQA Management plan involves
charting out a blueprint of how SQA will be carried out in the project with respect to the
engineering activities while ensuring that you corral the right talent/team.

#2) Setting the Checkpoints The SQA team sets up periodic quality checkpoints to
ensure that product development is on track and shaping up as expected.

#3) Support/Participate in the Software Engineering team's requirement gathering Participate in the software engineering process to gather high-quality specifications. For gathering information, a designer may use techniques such as interviews and FAST (Function Analysis System Technique). Based on the information gathered, the software architects can prepare the project estimation using techniques such as WBS (Work Breakdown Structure), SLOC (Source Lines of Code), and FP (Function Point) estimation.

#4) Conduct Formal Technical Reviews An FTR is traditionally used to evaluate the
quality and design of the prototype. In this process, a meeting is conducted with the
technical staff to discuss the quality requirements of the software and the design quality
of the prototype. This activity helps in detecting errors in the early phase of SDLC and
reduces rework effort later.

#5) Formulate a Multi-Testing Strategy The multi-testing strategy employs different
types of testing so that the software product can be tested well from all angles to ensure
better quality.

#6) Enforcing Process Adherence This activity involves coming up with processes and
getting cross-functional teams to buy in on adhering to set-up systems. This activity is a
blend of two sub-activities:

Process Evaluation: This ensures that the set standards for the project are
followed correctly. Periodically, the process is evaluated to make sure it is working
as intended and if any adjustments need to be made.

Process Monitoring: Process-related metrics are collected in this step at a designated time interval and interpreted to understand if the process is maturing as we expect it to.

#7) Controlling Change This step is essential to ensure that the changes we make are
controlled and informed. Several manual and automated tools are employed to make this
happen. By validating the change requests, evaluating the nature of change, and
controlling the change effect, it is ensured that the software quality is maintained during
the development and maintenance phases.

#8) Measure Change Impact The QA team actively participates in determining the
impact of changes that are brought about by defect fixing or infrastructure changes, etc.
This step has to consider the entire system and business processes to ensure there are no
unexpected side effects. For this purpose, we use software quality metrics that allow
managers and developers to observe the activities and proposed changes from the
beginning till the end of SDLC and initiate corrective action wherever required.

#9) Performing SQA Audits The SQA audit inspects the actual SDLC process followed vs. the established guidelines that were proposed. This is to validate the correctness of the planning and strategic process vs. the actual results. This activity could also expose any non-compliance issues.

#10) Maintaining Records and Reports It is crucial to keep the necessary documentation related to SQA and share the required SQA information with the stakeholders. Test results, audit results, review reports, change request documentation, etc. should be kept current for analysis and historical reference.

#11) Manage Good Relations The strength of the QA team lies in its ability to maintain harmony with various cross-functional teams. QA vs. developer conflicts should be kept to a minimum, and we should look at everyone working towards the common goal of a quality product. No one is superior or inferior to anyone else; we are all one team.

Testing Strategies

In the world of software engineering, ensuring the reliability and quality of software
applications is paramount. A critical aspect of achieving this goal is through
comprehensive testing. Test strategies play a vital role in orchestrating the testing
process, guiding the efforts of software development teams to systematically identify
and rectify defects, thereby enhancing the overall quality of the software. In this
article, we will explore the testing strategies in software engineering.

Objectives of Test Strategy


The testing strategies in software engineering have the following objectives:

Defining Testing Goals:


A test strategy clarifies the objectives of testing, be it verifying functionality,
assessing performance, ensuring security or all of these combined.

Ensuring Adequate Coverage:


It determines the scope of testing, specifying what parts of the software will be tested
and to what extent. This helps ensure comprehensive coverage of the application.

Optimizing Resource Allocation:

A test strategy helps allocate resources effectively, including human resources, time,
and testing tools. This ensures that testing efforts are proportional to the software's
complexity and importance.

Mitigating Risks:
Test strategies identify potential risks and challenges in the testing process and outline
mitigation measures to address them.

Guiding the Testing Team:


By providing a structured plan, a test strategy guides the testing team in their efforts,
leading to more efficient and effective testing.

Features of Test Strategy Document


The testing strategies in software engineering typically include the following features
in the test strategy document:

Scope and Objectives:


Clearly defines the scope of testing, the goals to be achieved, and the intended
outcomes.

Testing Methodologies:
Outlines the testing methodologies and techniques to be employed during the testing
process.

Testing Levels:
Specifies the different testing levels to be performed, such as unit testing, integration
testing, system testing, and acceptance testing.

Test Environment:
Describes the environment in which testing will be conducted, including hardware,
software, and network configurations.

Entry and Exit Criteria:


Sets the conditions that must be met to initiate testing (entry criteria) and the
conditions that signify the completion of testing (exit criteria).

Test Deliverables:
Lists the documents and artifacts that will be produced during the testing process,
such as test plans, test cases, and defect reports.

Resource Allocation:
Details the resources required for testing, including human resources, tools, and
infrastructure.

Risks and Mitigation Strategies:


Identifies potential risks related to testing and provides strategies to mitigate those
risks.

Components of Test Strategy Document


A typical test strategy document comprises several key components:

Introduction:
An overview of the purpose and scope of the test strategy document.

Testing Objectives:

Clearly defined goals and objectives of the testing efforts.

Testing Scope:
The areas and functionalities that will be tested, along with any specific exclusions.

Testing Approach:
The overall approach to testing, including the types of testing to be performed and the
order in which they will occur.

Test Environment:
Details about the hardware, software, and network configurations used for testing.

Resource Allocation:
Information about the roles and responsibilities of team members involved in testing,
as well as the tools and equipment required.

Testing Schedule:
A timeline that outlines the testing phases, milestones, and deadlines.

Testing Deliverables:
A list of documents, reports, and artifacts that will be produced during the testing
process.

Risks and Mitigation:


Identification of potential risks and a plan for managing and mitigating them.

Exit Criteria:

The conditions that must be met for testing to be considered complete.

Approval and Sign-Off:


The process for reviewing, approving, and obtaining sign-off on the test strategy
document.

Considered from a procedural point of view, testing within the context of
software engineering is actually a series of four steps that are implemented
sequentially: unit testing, integration testing, validation testing, and system
testing. The steps are shown in the figure.

Initially, tests focus on each component individually, ensuring that it
functions properly as a unit; hence the name unit testing.

Unit testing makes heavy use of testing techniques that exercise specific paths in a
component’s control structure to ensure complete coverage and maximum error
detection. Next, components must be assembled or integrated to form the complete
software package.

Unit testing focuses verification effort on the smallest unit of software design—the
software component or module. Using the component-level design description as a
guide, important control paths are tested to uncover errors within the boundary of the
module. The relative complexity of tests and the errors those tests uncover is limited
by the constrained scope established for unit testing. The unit test focuses on the
internal processing logic and data structures within the boundaries of a component.
This type of testing can be conducted in parallel for multiple components.

Unit-test considerations:

The module interface is tested to ensure that information properly flows into
and out of the program unit under test. Local data structures are examined to
ensure that data stored temporarily maintains its integrity during all steps in
an algorithm's execution. All independent paths through the control structure
are exercised to ensure that all statements in a module have been executed at
least once. Boundary conditions are tested to ensure that the module operates
properly at boundaries established to limit or restrict processing. And
finally, all error-handling paths are tested. Data flow across a component
interface is tested before any other testing is initiated.

Selective testing of execution paths is an essential task during the unit test. Test cases
should be designed to uncover errors due to erroneous computations, incorrect
comparisons, or improper control flow.

Boundary testing is one of the most important unit-testing tasks. Software
often fails at its boundaries: errors often occur when the nth element of an
n-dimensional array is processed, when the ith repetition of a loop with i
passes is invoked, or when the maximum or minimum allowable value is
encountered. Test cases that exercise data structure, control flow, and data
values just below, at, and just above maxima and minima are very likely to
uncover errors.
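The idea can be sketched with a hypothetical `clamp` function (invented for the example, not from the text): the test cases deliberately probe values just below, at, and just above each boundary.

```python
def clamp(value, low, high):
    """Restrict value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# Boundary cases: just below, at, and just above each limit.
assert clamp(-1, 0, 10) == 0    # just below the minimum
assert clamp(0, 0, 10) == 0     # at the minimum
assert clamp(1, 0, 10) == 1     # just above the minimum
assert clamp(9, 0, 10) == 9     # just below the maximum
assert clamp(10, 0, 10) == 10   # at the maximum
assert clamp(11, 0, 10) == 10   # just above the maximum
```

An off-by-one mistake such as writing `min(value, high - 1)` would be caught immediately by the "at the maximum" case.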

A good design anticipates error conditions and establishes error-handling paths to


reroute or cleanly terminate processing when an error does occur.
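A minimal sketch of testing those error-handling paths, using a hypothetical `withdraw` helper: each invalid request must raise a clean, specific error rather than corrupting data or failing obscurely.

```python
def withdraw(balance, amount):
    """Hypothetical helper: deduct amount, rejecting invalid requests."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Each error-handling path is exercised explicitly.
for bad_amount in (0, -5, 150):
    try:
        withdraw(100, bad_amount)
        raise AssertionError(f"expected ValueError for amount={bad_amount}")
    except ValueError:
        pass  # the error path terminated processing cleanly

# The normal path still works.
assert withdraw(100, 40) == 60
```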

Integration testing addresses the issues associated with the dual problems of
verification and program construction. Test-case design techniques that focus
on inputs and outputs are more prevalent during integration, although
techniques that exercise specific program paths may be used to ensure coverage
of major control paths. After the software has been integrated, a set of
high-order tests is conducted.

Validation testing provides final assurance that software meets all
informational, functional, behavioral, and performance requirements.

System testing: The last high-order testing step falls outside the boundary of
software engineering and into the broader context of computer system
engineering. Software, once validated, must be combined with other system
elements (e.g., hardware, people, databases). System testing verifies that all
elements mesh properly and that overall system function/performance is
achieved.

Most Common Testing Strategies


Black Box Testing:
This approach checks the software's behavior without looking at the actual code
inside. Testers act like users, supplying various inputs and checking whether
the outputs match what's expected. This helps ensure the software does what
it's supposed to do without needing to understand the complex code behind it.

●​ Strengths:​
Focuses on user perspectives, effective for functional and usability testing.
Does not require knowledge of internal code.
●​ Weaknesses:​
Limited coverage of code paths, may miss certain logic errors.
●​ Best for:​
Validating functionality, usability, and user scenarios. Detects issues related to
inputs, outputs, and user experience.
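As a sketch (the `is_leap_year` function is illustrative, not from the text), the tester derives cases from the calendar specification alone and never inspects the implementation:

```python
# System under test: treated as a black box. Its internals are irrelevant
# to the tester, who works only from the written specification.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Cases derived from the specification, not from the code.
cases = {
    2024: True,    # divisible by 4
    2023: False,   # not divisible by 4
    1900: False,   # century year not divisible by 400
    2000: True,    # century year divisible by 400
}
for year, expected in cases.items():
    assert is_leap_year(year) == expected, year
```

Because the cases come from the specification, the same suite stays valid even if the implementation is completely rewritten.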

White Box Testing:

In this method, testers dive deep into the software's internal code and logic. They
create tests to cover different paths the code can take, ensuring that each part works as
intended. It's like dissecting the software to verify its accuracy.

●​ Strengths:​
Thorough coverage of code paths, effective for logic and structural testing.
Provides insights into code quality.
●​ Weaknesses:​
May not catch integration or external system issues. Requires knowledge of
internal code.
●​ Best for:​
Verifying code logic, complex algorithms, and integration points. Detects issues
within the code structure and logic.
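A minimal white-box sketch, assuming a hypothetical `classify` function: the cases are chosen by reading the code so that every branch executes at least once.

```python
def classify(score):
    """Hypothetical grader with three distinct control paths."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")  # branch 1: error path
    if score >= 60:
        return "pass"                           # branch 2
    return "fail"                               # branch 3

# One case per branch gives full branch coverage of this function.
assert classify(75) == "pass"
assert classify(30) == "fail"
try:
    classify(120)
    raise AssertionError("error branch was not taken")
except ValueError:
    pass
```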

Regression Testing:
Whenever changes or updates are made to the software, regression testing kicks
in. Testers rerun previously executed tests to make sure these changes haven't
caused any new problems or broken existing functionality.

●​ Strengths:​
Ensures new changes don't break existing functionality. Efficient for identifying
regressions.
●​ Weaknesses:​
May not catch new defects outside the scope of previous tests.
●​ Best for:​
Validating software after changes, updates, or enhancements. Detects issues
caused by recent modifications.
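One common mechanism is a golden-case suite: expected outputs recorded before a change and rerun after it. A sketch with a hypothetical `slugify` function:

```python
# Function that was recently refactored (hypothetical example).
def slugify(title):
    return "-".join(title.lower().split())

# Golden cases recorded before the change; rerunning them after every
# modification confirms that existing behavior has not regressed.
GOLDEN = [
    ("Hello World", "hello-world"),
    ("  Spaces   everywhere ", "spaces-everywhere"),
    ("Already-slugged", "already-slugged"),
]

for title, expected in GOLDEN:
    assert slugify(title) == expected, (title, slugify(title))
```

In practice the whole existing suite, not just a handful of cases, is rerun (often automatically in CI) after each change.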

Smoke Testing:

Before thorough testing begins, a smoke test is done to quickly check if the basic
functions of the software are operational. This is like a preliminary check to catch any
major issues early on.

●​ Strengths:​
Quickly identifies major issues in basic functionalities. Provides initial
assessment of software stability.
●​ Weaknesses:​
Limited coverage and depth of testing.
●​ Best for:​
Initial assessment before in-depth testing. Detects defects that could hinder
further testing.
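A smoke test can be as small as a handful of assertions against the critical entry points. A sketch, with `create_app` standing in for whatever factory boots the real system:

```python
def create_app():
    """Stand-in for the real application factory."""
    return {"status": "up", "db_connected": True}

def smoke_test():
    # Only the make-or-break basics: does it start, does it respond?
    app = create_app()
    assert app is not None, "application failed to start"
    assert app.get("status") == "up", "health check failed"
    assert app.get("db_connected"), "database connection unavailable"
    return "smoke test passed"

print(smoke_test())
```

If any of these checks fail, deeper testing is pointless until the build is fixed.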

Exploratory Testing:
Testers take an open-ended approach here. They interact with the software without
strict plans, trying to find hidden problems that might not be caught by scripted tests.
It's like an adventure to discover unexpected issues.

●​ Strengths:​
Finds unexpected defects, focuses on user behavior. Flexible and adaptable
approach.
●​ Weaknesses:​
May lack repeatability and documentation.
●​ Best for:​
Identifying hidden defects, usability issues, and scenarios not covered by
scripted tests.

Performance Testing:

This type of testing focuses on the software's speed, stability, and scalability. Testers
simulate different workloads to ensure the software can handle various levels of
demand without crashing or slowing down.

●​ Strengths:​
Measures software speed, stability, and scalability. Identifies performance
bottlenecks.
●​ Weaknesses:​
May not uncover all usability or functional issues.
●​ Best for:​
Ensuring software can handle different workloads and stress conditions.
Detects performance-related bottlenecks.
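A minimal sketch using the standard library's `timeit`: measure the average latency of an operation under repetition and fail if it exceeds a budget. Both `handle_request` and the 50 ms budget are illustrative, not from the text.

```python
import timeit

def handle_request(n=1000):
    """Stand-in for the operation whose speed is under test."""
    return sum(i * i for i in range(n))

# Averaging over many runs smooths out one-off timing noise.
runs = 200
total = timeit.timeit(handle_request, number=runs)
avg_ms = (total / runs) * 1000
print(f"average latency: {avg_ms:.3f} ms over {runs} runs")

# The test fails when the latency budget is exceeded, exactly the way a
# functional test fails on a wrong answer.
assert avg_ms < 50, "latency budget exceeded"
```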

Security Testing:
Testers concentrate on finding any weak points in the software's security. They look
for vulnerabilities that hackers might exploit to gain unauthorized access or steal
sensitive data.

●​ Strengths:​
Identifies vulnerabilities and security loopholes. Ensures data protection.
●​ Weaknesses:​
May not catch all functional or usability defects.
●​ Best for:​
Uncovering security weaknesses, vulnerabilities, and potential breaches.
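One concrete, runnable illustration is an SQL-injection check using Python's built-in `sqlite3` module (the table and inputs are invented for the demo): the test documents that string concatenation is exploitable while a parameterized query is not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

assert unsafe == [("s3cret",)]  # the injection leaked data
assert safe == []               # parameterization blocked it
```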

Usability Testing:
Here, testers evaluate the software's user-friendliness. They assess how easy it is for
users to navigate, understand, and accomplish tasks within the software. This testing
ensures a positive and intuitive user experience.

●​ Strengths:​
Evaluates user-friendliness and user experience. Ensures intuitive interaction.
●​ Weaknesses:​
May not uncover underlying technical issues.
●​ Best for:​
Assessing user interaction, navigation, and overall satisfaction.

How to Choose from Different Software Testing Strategies?


The choice of testing strategies in software engineering depends on several factors:

Project Requirements:
The type of software being developed, how complex it is, and what it's going to be
used for all influence the testing approach chosen. For instance, if the software is
critical and needs to be super reliable, a more thorough testing strategy might be
necessary.

Risk Analysis:
Identifying potential problems or risks in the software is crucial. This helps in picking
the right testing strategies that can effectively tackle and minimize these risks. For
example, if there's a risk of data loss, thorough testing around data handling would be
important.

Budget and Resources:


The amount of time, money, and skilled people available affects the testing strategy
choice. Some strategies demand more time and effort than others. If there are tight
resource constraints, a more focused and efficient strategy might be preferred.

Testing Objectives:

The goals set for testing matter too. Whether it's finding specific types of bugs or
ensuring the software can handle a certain number of users, these objectives guide
which strategies are most suitable.

Stakeholder Expectations:
Considering what end-users, clients, and other stakeholders expect from the software
is essential. The chosen testing strategy should align with these expectations. For
instance, if the software is meant to be user-friendly, the testing should heavily focus
on usability.
