
🎃

SEPM Answer Key

Doubts

1. COCOMO II or Basic COCOMO ? Syllabus says cocomo II, we studied II, question bank says basic : basic Cocomo

2. What diagrams to be added in for White Box and Black Box testing : Adding gears in a box makes it black box

3. Is the question bank enough for the exams?: it will have to be

Chapter 2
Explain concept of Requirement Engineering and Requirement Modeling

💡 The Process of

Establishing the services that the customer requires from a system

The constraints under which it operates and is developed

Requirement: A requirement can range from a high-level abstract statement of a service or of a system constraint to
a detailed mathematical functional specification

Requirement Modeling uses a combination of text and diagrammatic forms to depict requirements in a way that is
relatively easy to understand, straightforward to review for correctness, completeness and consistency.

Types of Requirements
1. User Requirements: Written for customers, it is a

a. collection of statements in natural language that

b. give a description of the services the system provides and



c. its operational constraints

2. System Requirements: A structured document that gives the

a. detailed description of the system services

b. Written as a contract between client and contractor.

3. Software Specification

a. Detailed software description that can serve as the basis for design or implementation.

b. Typically written for software developers.

Requirement Modeling Steps


Problem Recognition: Understanding the need for the system

Evaluation and Synthesis

Software engineers build the analysis model using the requirements elicited from the customer

Define all externally observable data objects and evaluate data flow

Define software functions

Understand the behavior of the system

Establish system interface characteristics

Uncover the design constraints

Modelling

To validate software requirements, you need to examine them from a number of different points of view.

Scenario-based modeling represents the system from the user's point of view

Data modeling represents the information space and data objects that the software will manipulate and the relationships
among them.

Class based modeling defines objects, attributes and relationships

These models are refined and analyzed to assess their clarity, completeness and consistency.

Specification: Building the SRS (Software Requirements Specification)

Review: The SRS is reviewed by the project manager and refined.

Draw Use case diagram and Activity Diagram for Airline Booking System

Verify Correctness

Use Case Diagram



Draw Level-0,1,2 DFDs for Online Banking System

Verify Correctness

The Level-0 DFD, also known as the Context Diagram, provides an overview of the entire system.

The Level-1 DFD expands on the processes shown in the Level-0 DFD. It breaks down the high-level process into
subprocesses, providing a more detailed view of how data moves within the system.

The Level-2 DFD further decomposes the subprocesses from the Level-1 DFD into finer details. It provides a more granular
view of the processes identified in Level-1, breaking them down into more detailed subprocesses and data flows.



Level 2 Online Banking System

Explain concept of Requirement Analysis and Requirement Gathering

💡 Requirement Analysis: It involves a thorough examination and interpretation of gathered requirements to ensure that
they are complete, accurate, and feasible. The primary goal of requirement analysis is to transform high-level
requirements into a detailed understanding of what the software system needs to accomplish.

Requirement Gathering: process of collecting information from stakeholders to identify their needs, expectations, and
constraints.

Key Activities in Requirement Analysis:

1. Review and Clarification:

Reviewing the collected requirements to identify any inconsistencies, contradictions, or ambiguities. The analysis
process often involves seeking clarification from stakeholders to ensure a shared understanding.

2. Organizing and Prioritizing:

Organizing requirements into a structured format and establishing priorities based on their importance and impact
on the system.

3. Modeling:



Developing models and diagrams to represent different aspects of the system, such as use cases, data flow
diagrams, and entity-relationship diagrams. These models help in visualizing the system's functionality and
interactions.

4. Feasibility Study:

Assessing the feasibility of implementing the proposed system. This includes analyzing technical, economic,
operational, legal, and scheduling aspects to determine the project's viability.

5. Risk Analysis:

Identifying potential risks associated with the proposed system and developing strategies to mitigate or manage
these risks.

Key Activities in Requirement Gathering

1. Stakeholder Identification:

Identifying and involving all relevant stakeholders who have an interest or role in the software system. This includes
end-users, customers, project managers, and other impacted parties.

2. Communication:

Establishing effective communication channels to interact with stakeholders. This may involve conducting
interviews, surveys, workshops, or informal discussions.

3. Document Analysis:

Reviewing existing documentation, such as business documents, user manuals, and current system specifications,
to gain insights into the requirements.

4. Brainstorming:

Facilitating brainstorming sessions with stakeholders to gather ideas, requirements, and potential functionalities of
the system.

5. Observation:

Observing how users interact with existing systems or processes to identify pain points, challenges, and areas for
improvement.

Write a short note on Scenario based model


Scenario-Based Model in Software Engineering:
Software engineers leverage scenario-based modeling to understand user-system interactions by characterizing requirements
through various analysis models. This involves creating scenarios, including use cases, activity diagrams, and swimlane
diagrams, to represent the overall system functionality.
Key Concepts:

Scenarios and Use Cases:

Scenarios are sequences of steps describing interactions between users and the system. Use cases and activity
diagrams are employed to expose the functionalities of the system.

Actors:

Actors, representing entities interacting with the system, carry out use cases. Associations between actors and use
cases are identified.

Relationships:

1. Association:

Identifies interactions between actors and use cases.

2. Include Relationship:

Specifies that one use case includes the functionality of another.



3. Extend Relationship:

Specifies that one use case extends the behavior of another under certain conditions.

Modeling Techniques:

Activity Diagram:

Graphical representation of interaction flow within specific scenarios. It includes forks and branches for parallel activities
and transitions.

Swimlane Diagram:

A partitioned activity diagram where activities are grouped according to the responsible class or entity.

What is Software Requirement Specification document, explain key features of IEEE standard Software Requirement
Specification document

💡 A Software Requirements Specification (SRS) document is a comprehensive and detailed description of the intended
behavior and functionalities of a software system

Key Features of IEEE Standard Software Requirement Specification Document

1. Introduction:

Purpose: Clearly state the purpose of the SRS document and its intended audience.

Scope: Define the scope of the software, including what is included and excluded.

2. Overall Description:

Product Perspective: Describe how the software fits into the broader system or context.

Product Functions: Provide a high-level overview of the system's functionalities.

User Classes and Characteristics: Identify the different user classes and their characteristics.

Operating Environment: Specify the environments in which the software will operate.

3. External Interface Requirements:

User Interfaces: Describe the interfaces the system will have with users.

Hardware Interfaces: Specify any hardware interfaces required.

Software Interfaces: Identify software interfaces with other systems or components.

Communication Interfaces: Define communication protocols and data formats.

4. System Features:

Provide a detailed description of each functional requirement, organized by feature. Include input, processing, and
output aspects.

5. Non-Functional Requirements:

Performance Requirements: Specify performance criteria, such as response time and throughput.

Safety and Security Requirements: Define safety and security considerations.

Explain with diagram different Requirements Modeling Approaches

(Questions can be asked w.r.t. any system, like a library management system, bus booking system, etc.)



⚠️ GPT Answer
- Skipping diagrams cuz earlier questions have done them

Requirements modeling is a critical phase in software engineering, and different approaches can be used to represent and
analyze system requirements.

1. Use Case Diagram:


Explanation:
Use Case Diagrams depict the interactions between actors (users or external systems) and the system under consideration.
They focus on the functionalities the system provides and the external entities that interact with the system.
Diagram:
In the context of a Library Management System:

Actors: Librarian, Library Member.

Use Cases: Issue Book, Return Book, Search Catalog, Manage Member, Manage Inventory.

2. Entity-Relationship Diagram (ERD):


Explanation:
Entity-Relationship Diagrams model the data entities and their relationships within a system. Entities are represented as
rectangles, and relationships between entities are represented by connecting lines.
Diagram:
In the context of a Library Management System:

Entities: Book, Member, Author.

Relationships: Book is written by Author, Member borrows Book.

3. Data Flow Diagram (DFD):


Explanation:
Data Flow Diagrams represent the flow of data within a system. They consist of processes, data stores, data flows, and external
entities. Processes transform input data into output data.

Diagram:
In the context of a Library Management System:

Processes: Issue Book, Return Book, Update Catalog.

Data Stores: Catalog, Member Database.

Data Flows: Borrowed Books, Member Information.

Additional Explanation:
1. Use Case Diagram:

Scenario: A library member wants to borrow a book.

Actors: Librarian, Library Member.

Use Cases: Librarian issues a book, Library Member borrows a book.

2. Entity-Relationship Diagram (ERD):



Scenario: Tracking book information and member details.

Entities: Book, Member, Author.

Relationships: Book is written by Author, Member borrows Book.

3. Data Flow Diagram (DFD):

Scenario: Handling book issuance and return processes.

Processes: Issue Book, Return Book, Update Catalog.

Data Stores: Catalog, Member Database.

Data Flows: Borrowed Books, Member Information.

These modeling approaches complement each other, providing a comprehensive view of the system's functionalities, data
entities, and data flow. In practice, a combination of these models is often used to ensure a thorough representation of system
requirements.

Chapter 3 ✅
Define the term Software Metrics? What are direct and Indirect software measures
Software project metrics are quantitative measurements used to assess and evaluate various aspects of a software
development project. These metrics provide data-driven insights into the project's progress, quality, and efficiency, helping
project managers and teams make informed decisions.

Direct measures of the software process include cost and effort applied. Direct measures of the product include lines of code (LOC) produced, execution speed, memory size, and defects reported over some set period of time.

Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, maintainability

❗ Don’t know if this is enough for a 5 marker. Need to elaborate more here

Following the decomposition technique for LOC, an estimation table is developed.


A range of LOC estimates is developed for each function.

A review of historical data indicates that



Average productivity was 620 LOC / pm

Labor rate was $8,000 per month

Find out

1. Cost per line of code

Cost Per LOC = Labor Rate / Productivity = $8,000 / 620 ≈ $12.90 (≈ $13 per LOC)

2. Estimated project cost

Total Cost = Total LOC × Cost Per LOC = 33,200 × $13 ≈ $431,600
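As a quick cross-check, a minimal Python sketch of the same LOC-based calculation (the 33,200 LOC total is taken from the estimation table above):

```python
# Minimal sketch: LOC-based cost and effort estimation from the figures above.
total_loc = 33_200      # total LOC from the decomposition/estimation table
productivity = 620      # LOC per person-month (historical average)
labor_rate = 8_000      # dollars per person-month

cost_per_loc = labor_rate / productivity   # ~ $12.90 per LOC
total_cost = total_loc * cost_per_loc      # ~ $428,400 (≈ $431,600 at $13/LOC)
effort = total_loc / productivity          # ~ 53.5 person-months

print(f"Cost per LOC: ${cost_per_loc:.2f}")
print(f"Estimated project cost: ${total_cost:,.0f}")
print(f"Estimated effort: {effort:.1f} person-months")
```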

Explain with Example Basic COCOMO Model and its advantages and drawbacks?
The Constructive Cost Model (COCOMO) is a widely used software cost estimation model that was introduced by Barry
Boehm in the late 1970s. It provides a framework for estimating the effort, time, and cost required to develop a software
project. COCOMO comes in three variants: Basic COCOMO, Intermediate COCOMO, and Detailed COCOMO. Here, I'll
explain the Basic COCOMO model, along with its advantages and drawbacks.

Basic COCOMO Model:


The Basic COCOMO model estimates effort as a function of the size of the software product. The formula is:

Effort = a × (KLOC)^b

where:

Effort is the effort required in person-months.

KLOC is the estimated size of the software product in thousands of lines of code.

a and b are constants derived from historical data.

Example:
Let's say we want to estimate the effort for a software project with an estimated size of 50,000 lines of code. If historical data suggests that a = 2.4 and b = 1.05, then the effort can be calculated as follows:

Effort = 2.4 × (50)^1.05 ≈ 145.9 person-months
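The same calculation as a minimal Python sketch (constants a = 2.4 and b = 1.05 as given above):

```python
# Basic COCOMO: Effort = a * (KLOC ** b), in person-months.
a, b = 2.4, 1.05   # constants derived from historical data (given)
kloc = 50          # estimated size: 50,000 LOC

effort = a * (kloc ** b)
print(f"Estimated effort: {effort:.1f} person-months")   # ~145.9
```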

Advantages of Basic COCOMO:


1. Simplicity: The Basic COCOMO model is straightforward and easy to understand, making it accessible to a wide range
of users.

2. Quick Estimates: Since it relies on a simple formula based on size, Basic COCOMO allows for quick and early
estimates, which can be useful for project planning.

3. Versatility: It can be used in the early stages of a project when detailed information is not available. As the project
progresses, more detailed estimation models like Intermediate and Detailed COCOMO can be employed.

Drawbacks of Basic COCOMO:


1. Sensitivity to Size: Basic COCOMO heavily depends on the estimated size of the software product. Small errors in size
estimation can lead to significant errors in effort estimation.



2. Limited Factors: It considers only one factor (size) for effort estimation, neglecting other factors such as personnel
capability, product complexity, and development environment.

3. Generic Constants: The model uses generic constants (a and b) that are derived from historical data. These constants may not be applicable to all types of projects and organizations.

In summary, Basic COCOMO provides a simple and quick way to estimate software development effort based on size, but it
has limitations and may not be suitable for all projects, especially those with unique characteristics or requirements.

Explain with suitable example FP based Cost Estimation

Steps
The process of calculating Function Points (FP) involves the following steps:

1. Identify Function Types:

External Inputs (EI): Identify user inputs affecting the system.

External Outputs (EO): Identify system outputs to users.

External Inquiries (EQ): Identify user inquiries that result in data retrieval.

Internal Logical Files (ILF): Identify internal files storing data.

External Interface Files (EIF): Identify external files referenced by the system.

2. Count Function Types:

Count the number of occurrences of each function type in the software.

3. Assign Complexity Weights:

Assign complexity weights to each function type based on factors such as data complexity, transaction
complexity, and environmental factors.

4. Calculate Unadjusted Function Points (UFP):

Use the formula: UFP = ∑(Count × Weight) for each function type

5. Apply Value Adjustment Factor (VAF):

Evaluate value adjustment factors, considering various factors that influence development effort.

Calculate VAF using a formula that considers the degree of influence of these factors.

6. Calculate Adjusted Function Points (AFP):

Use the formula: AFP = UFP × VAF to get the adjusted function points.

Example
Given:

I = 30 (Number of external inputs)

IW = 4 (Complexity weighting factor for external inputs)

O = 60 (Number of external outputs)

OW = 5 (Complexity weighting factor for external outputs)

E = 23 (Number of external inquiries)

EW = 4 (Complexity weighting factor for external inquiries)

F = 8 (Number of files)

FW = 10 (Complexity weighting factor for files)

N = 2 (Number of external interfaces)

NW = 7 (Complexity weighting factor for external interfaces)



Calculate the unadjusted Function Points

UFP = I × IW + O × OW + E × EW + F × FW + N × NW

UFP = 30 × 4 + 60 × 5 + 23 × 4 + 8 × 10 + 2 × 7 = 606

The Value Adjustment Factor (VAF) is determined by the 14 value adjustment factors. Given that four factors are not applicable (each with a value of 0), four factors have a value of 3, and the remaining six factors have a value of 4, the VAF is calculated as follows:

Total Value Adjustment Factor = 4 × 3 + 6 × 4 = 36

VAF = 0.65 + (0.01 × Total Value Adjustment Factor) = 0.65 + (0.01 × 36) = 0.65 + 0.36 = 1.01

Calculate Adjusted Function Points (AFP)

Adjusted Function Points = UFP × VAF = 606 × 1.01 = 612.06
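The same computation as a minimal Python sketch, using the counts and weights given above:

```python
# Function Point calculation from the given counts and complexity weights.
counts  = {"EI": 30, "EO": 60, "EQ": 23, "ILF": 8, "EIF": 2}
weights = {"EI": 4,  "EO": 5,  "EQ": 4,  "ILF": 10, "EIF": 7}

ufp = sum(counts[t] * weights[t] for t in counts)   # 606

# 14 value adjustment factors: four rated 0, four rated 3, six rated 4.
total_vaf_score = 4 * 0 + 4 * 3 + 6 * 4             # 36
vaf = 0.65 + 0.01 * total_vaf_score                 # 1.01

afp = ufp * vaf                                      # ~612.06
print(f"UFP = {ufp}, VAF = {vaf:.2f}, AFP = {afp:.2f}")
```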

Software Engineering | Functional Point (FP) Analysis - GeeksforGeeks
https://www.geeksforgeeks.org/software-engineering-functional-point-fp-analysis/

Compare LOC based and FP based Software cost estimation Models

Lines of Code (LOC) Based Software Cost Estimation:


1. Focus on Code Size:

Characteristics: LOC-based models estimate project effort and cost based on the size of the code. The size is
measured in lines of code, and the assumption is that there is a linear relationship between the amount of code
and the effort required.

Advantages: Simple and intuitive, especially for projects where code size is a significant factor.

2. Direct Measure:

Measurement: LOC is a direct measure, providing a tangible and concrete metric for project size.

Advantages: Straightforward to count, and tools can automate the process.

3. Challenges:

Drawbacks: Fails to capture differences in complexity, programming languages, and development practices. Can
be influenced by coding styles and doesn't account for differences in productivity among developers.

4. Example Model:

Formula: E = a × (KLOC)^b, where E is effort, KLOC is the size in thousands of lines of code, and a and b are constants.

Function Point (FP) Based Software Cost Estimation:


1. Focus on Functionality:

Characteristics: FP-based models measure software size based on the functionality it delivers to users,
considering inputs, outputs, inquiries, internal logical files, and external interface files.

Advantages: Reflects the software's functionality, making it more language- and implementation-independent.

2. Indirect Measure:



Measurement: FP is an indirect measure, providing a size metric that incorporates multiple factors such as
complexity, functionality, and user interactions.

Advantages: Better captures the overall value delivered by the software, accounting for differences in design
and implementation.

3. Flexibility:

Advantages: Provides flexibility by allowing different types of projects to be measured using the same metric,
facilitating comparisons and benchmarking.

4. Formula:

UFP = ∑(Count × Complexity Weight), summed over the function types at their low, medium, or high complexity weightings

Comparisons
1. Granularity:

LOC: Granular, focusing on the size of the code at a low level.

FP: Provides a more abstract, high-level view of software size based on functionality.

2. Language Independence:

LOC: Highly dependent on the programming language and coding practices.

FP: More language-independent, making it suitable for comparing projects across different technologies.

3. Complexity Consideration:

LOC: Does not explicitly consider complexity but assumes a linear relationship.

FP: Incorporates complexity factors in its calculation, providing a more nuanced size metric.

4. Estimation Process:

LOC: Relatively straightforward to count, but may not capture the full scope of software functionality.

FP: Requires a more in-depth understanding of the software's functionality, involving the identification and
classification of various function types.

5. Applicability:

LOC: Often used for traditional, code-centric projects.

FP: Suitable for a broader range of projects, including those with diverse technologies and development
methodologies.

Both LOC and FP-based models have their strengths and weaknesses, and the choice between them often depends on
the nature of the project and the information available during the estimation process. FP models are generally considered
more versatile and suitable for a wider range of projects, especially in modern software development environments.

Explain concept of Project Scheduling & Tracking

💡 Software project scheduling is an action that distributes estimated effort across the planned project duration by
allocating the effort to specific software engineering tasks

1. Compartmentalization: The project must be compartmentalized into a number of manageable activities and tasks.



2. To accomplish the compartmentalization, both the product and the process are refined

3. Interdependency: The interdependency of each compartmentalized activity or task must be determined.

a. Some tasks must occur in sequence, whereas others can occur in parallel.

b. Some activities cannot be completed until the work product from another task is complete.

4. Time Allocation: Each task to be scheduled must be allocated some work units (person-days of effort).

a. Each task must have some start and end date that is a function of the inter-dependencies and whether work will be
conducted full time or part-time

5. Effort Validation: Ensuring that an allocated task is assigned the required amount of resources

Example: Consider a project that has three assigned software engineers (i.e., three person-days are available per day of assigned effort). On a given day, seven concurrent tasks must be accomplished, each requiring 0.50 person-days of effort. That is 3.5 person-days of allocated effort against 3 available: more effort has been allocated than there are people to do the work.

6. Defined Responsibilities: Every task that is scheduled should be assigned to a specific team member

7. Defined Outcomes: Every task that is scheduled should have a defined outcome.

For software projects, the outcome is normally a work product (e.g., the design of a component) or a part of a work
product. Work products are often combined in deliverables

8. Defined milestones: Every task or group of tasks should be associated with a project milestone.

a. A milestone is accomplished when one or more work products have been reviewed for quality and approved.

💡 Project tracking involves monitoring and updating the project's progress against the established schedule. It helps
project managers and team members ensure that the project stays on track, identify and address issues or delays
promptly, and make informed decisions to keep the project moving forward.

Conducting periodic project status meetings in which each team member reports progress and problems

Evaluating the results of all reviews conducted throughout the software engineering process

Determining whether formal project milestones have been accomplished by the scheduled date

Comparing the actual start date to the planned start date for each project task listed in the resource table

Meeting informally with practitioners to obtain their subjective assessment of progress to date and problems on the
horizon

Using earned value analysis to assess progress quantitatively

Write a short note on

1. Lines of Code (LOC) Based Software Cost Estimation


1. Focus on Code Size:

Characteristics: LOC-based models estimate project effort and cost based on the size of the code. The size is
measured in lines of code, and the assumption is that there is a linear relationship between the amount of code
and the effort required.

Advantages: Simple and intuitive, especially for projects where code size is a significant factor.

2. Direct Measure:

Measurement: LOC is a direct measure, providing a tangible and concrete metric for project size.

Advantages: Straightforward to count, and tools can automate the process.



3. Challenges:

Drawbacks: Fails to capture differences in complexity, programming languages, and development practices. Can
be influenced by coding styles and doesn't account for differences in productivity among developers.

4. Example Model:

Formula: E = a × (KLOC)^b, where E is effort, KLOC is the size in thousands of lines of code, and a and b are constants.

2. Function Point (FP) Based Software Cost Estimation


1. Focus on Functionality:

Characteristics: FP-based models measure software size based on the functionality it delivers to users,
considering inputs, outputs, inquiries, internal logical files, and external interface files.

Advantages: Reflects the software's functionality, making it more language- and implementation-independent.

2. Indirect Measure:

Measurement: FP is an indirect measure, providing a size metric that incorporates multiple factors such as
complexity, functionality, and user interactions.

Advantages: Better captures the overall value delivered by the software, accounting for differences in design
and implementation.

3. Flexibility:

Advantages: Provides flexibility by allowing different types of projects to be measured using the same metric,
facilitating comparisons and benchmarking.

4. Formula:

UFP = ∑(Count × Complexity Weight), summed over the function types at their low, medium, or high complexity weightings

3. COCOMO Model: SAME Answer as above?


What are the issues in measuring the software size using LOC as metric

💡 Choose points out of this

While Lines of Code (LOC) is a common metric for measuring software size, it has several limitations and issues that can
impact the accuracy and reliability of its application. Some of the key issues with using LOC as a metric for software size
include:

1. Language Dependence:

LOC is highly dependent on the programming language used. Different languages have different syntax and
conventions, which can lead to variations in the number of lines needed to express the same functionality. This
makes LOC less comparable across projects using different languages.

2. Coding Styles:

Coding styles and practices can influence the number of lines of code. Two developers implementing the same
functionality may produce different LOC counts based on their coding styles, formatting preferences, and use of
code comments.



3. Code Duplication:

LOC does not differentiate between unique and duplicated code. In cases where code is copied and pasted, LOC
may overstate the actual size of the software, as duplicated lines are counted multiple times.

4. Variability in Complexity:

LOC does not capture the inherent complexity of the code or the problem being solved. Two pieces of code with the
same LOC count may have vastly different levels of complexity, making it an inadequate measure of the software's
intricacy.

5. Non-functional Code:

LOC does not distinguish between functional code (code that directly contributes to the software's functionality) and
non-functional code (comments, whitespace, boilerplate code). This can lead to inaccurate assessments of the effort
required for development.

6. Code Efficiency:

Focusing solely on LOC does not account for code efficiency or performance. More efficient and optimized code may
have fewer lines but achieve the same functionality as less optimized and more verbose code.

7. Evolution of Code:

Over time, software undergoes changes, updates, and optimizations. The evolution of code may result in
modifications that do not significantly impact the LOC count but are crucial for maintaining and improving the
software.

8. Ignorance of Functionality:

LOC does not directly measure the functionality delivered by the software. A small change in functionality may result
in a disproportionately large change in LOC, or vice versa, making it challenging to assess the actual impact on the
software.

9. Inadequate for Non-Code Artifacts:

LOC is primarily designed for measuring code size and may not be suitable for assessing non-code artifacts such as
documentation, configuration files, or data definitions.

10. Difficulty in Estimation:

Estimating the number of lines of code accurately before or during the early stages of development is challenging. Initial estimations may not account for the full complexity of the project.

Chapter 4 ✅

What are Design Principles ?What is the benefit of modular design?

💡 Design principles are fundamental concepts and guidelines that guide the process of creating effective and
efficient designs, whether in the fields of software engineering, architecture, industrial design, or other disciplines

💡 Software design encompasses the set of principles, concepts, and practices that lead to the development of a
high-quality system or product.

Module: Separate and addressable components that together make up the software.

Monolithic software is hard to track; hence dividing a single piece of software into a number of modules has become common practice.



Divide and conquer strategy: Divide the problem into smaller sub problems and solve them

Increased number of modules will mean increased efforts for each module.

Avoid overmodularity and undermodularity

Benefits of Modular Design:

Modular design involves breaking down a system into smaller, independent, and interchangeable modules or components.
The benefits of adopting a modular design approach include:

1. Ease of Maintenance:

Modules can be developed, tested, and maintained independently. Changes or updates to one module are less likely
to impact other modules, making maintenance more straightforward.

2. Reusability:

Modular components can be reused in different parts of the system or even in other projects, promoting a more
efficient development process.

3. Scalability:

The system can be easily scaled by adding or replacing modules without affecting the entire system. This facilitates
both horizontal and vertical scalability.

4. Parallel Development:

Different teams or developers can work on different modules simultaneously, speeding up the development process
and reducing time-to-market.

5. Debugging and Testing:

Isolating modules makes it easier to identify and fix bugs. Additionally, testing can be performed on individual
modules, leading to more effective and focused testing efforts.

6. Enhanced Collaboration:

Modular design facilitates collaboration among teams or developers, as they can work on separate modules without
interfering with each other's work.

7. Flexibility and Adaptability:

Changes or updates to one module do not necessarily impact the entire system. This flexibility allows for easier
adaptation to evolving requirements.

8. Encapsulation:

Modules encapsulate their internal details, exposing only the necessary interfaces to the rest of the system. This
helps in hiding implementation details and reducing dependencies.

9. Maintainability:

Modular design contributes to the overall maintainability of a system by providing a clear structure, reducing
complexity, and enabling easier updates or modifications.

What is a cohesive module? What are the different types of Cohesion?

A cohesive module performs only one task in the software procedure with little interaction with other modules.

Cohesive module performs only one thing

Types of cohesion

Coincidentally cohesive: A module whose tasks are only loosely related to one another is called coincidentally cohesive.

Logically cohesive: A module that performs tasks that are logically related to each other is called logically cohesive.



Temporal cohesion: A module in which the tasks must be executed within the same span of time is called temporally cohesive.

Procedural cohesion: When the processing elements of a module are related to one another and must be executed in a specific order, the module is called procedurally cohesive.

Communicational cohesion: When the processing elements of a module operate on the same data, the module is called communicationally cohesive.

The goal is to achieve high cohesion for modules in the system.

What is Coupling? What are the various types of coupling


Represents how modules can be "connected" with other modules or the outside world

Coupling is the measure of the degree of interdependence between the modules

Measure of interconnection among modules in a program structure

Depends on the interface complexity between modules

Strive for lowest possible coupling among modules in software design

Types of Coupling

Data Coupling: parameter passing or data interaction

Control Coupling: Modules share related control data

Common Coupling: Common data or global data is shared among the modules

Content Coupling: Occurs when one module makes use of data or control information maintained within the boundary of another module
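A small illustrative sketch (the functions are hypothetical, not from the source) contrasting data coupling with common coupling:

```python
# Data coupling: modules interact only through explicitly passed parameters.
def compute_tax(amount: float, rate: float) -> float:
    return amount * rate

# Common coupling: modules share mutable global data (higher, worse coupling).
TAX_RATE = 0.18  # global shared across functions

def compute_tax_global(amount: float) -> float:
    return amount * TAX_RATE  # hidden dependency on shared global state

print(compute_tax(100.0, 0.18))   # caller sees every dependency explicitly
print(compute_tax_global(100.0))  # dependency on TAX_RATE is implicit
```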

Define Modularity? Explain Advantages and Disadvantages of Modularity

Modularity:
Modularity is a design concept that involves breaking down a complex system into smaller, independent, and
interchangeable modules or components. These modules encapsulate specific functionalities, have well-defined interfaces,
and can operate independently or in conjunction with other modules. The goal of modularity is to create a system that is
easier to understand, develop, test, maintain, and scale.

Advantages of Modularity:

1. Ease of Understanding:

Breaking a system into modular components makes it easier to understand, as developers can focus on one module
at a time without being overwhelmed by the entire system's complexity.

2. Ease of Development:

Different teams or developers can work on different modules simultaneously, speeding up the development process.
Each module can be developed and tested independently.

3. Reusability:

Modular components can be reused in different parts of the system or even in other projects. This promotes
efficiency by leveraging existing, well-tested modules.

4. Scalability:

Systems designed with modularity in mind are more scalable. New features or capabilities can be added by
integrating new modules, and the system can be scaled horizontally or vertically.

5. Flexibility and Adaptability:

Changes or updates to one module do not necessarily impact the entire system. This flexibility allows for easier
adaptation to evolving requirements without affecting the entire codebase.

Disadvantages of Modularity:



1. Overhead of Interfaces:

The need for well-defined interfaces between modules introduces an overhead in terms of design and
documentation.

2. Coordination Challenges:

Coordinating interactions between modules can be challenging, especially in large and complex systems. Proper
communication and synchronization are crucial.

3. Dependency Management:

Managing dependencies between modules can be complex. Changes in one module may affect other modules,
requiring careful dependency management.

4. Testing Complexity:

While modular design facilitates independent testing of modules, testing the entire system's interactions and
integration can be complex and time-consuming.

5. Increased Memory Usage:

Modular systems may consume more memory due to the need to load multiple modules into memory, especially in
cases where modules are not loaded on demand.

Differentiate between Coupling and Cohesion

Software Engineering | Differences between Coupling and Cohesion - GeeksforGeeks
https://www.geeksforgeeks.org/software-engineering-differences-between-coupling-and-cohesion/

| Coupling | Cohesion |
| --- | --- |
| Represents how modules are connected with other modules or the outside world | Cohesive modules only perform one thing |
| Interface complexity is decided | Information (data) hiding is achieved |
| Goal: achieve the lowest coupling | Goal: achieve the highest cohesion |
| Types: Data, Control, Common, Content | Types: Coincidental, Logical, Temporal, Procedural, Communicational |

Explain different Architectural Styles

Software designing is the process of translating the analysis model into the design model

1. Architecture design defines the relationship between major structural elements of the software. The architectural styles
and design patterns can be used to achieve the requirements defined for the system

2. Interface design: Describes how the software communicates with systems that interoperate with it and with the humans who use it. Interface design therefore represents the flow of information and the specific types of behavior.

3. Component Level Design: The component-level design stage provides a dedicated purpose for each component and
describes how the interface, algorithms, data structure, and communication methods of each component will function to
carry out a process.

a. Transforms structural elements of software architecture into procedural description of software module. The
information used by component design is obtained from class based model, flow based model and behavioral
model

Architecture Design



House Analogy
Analogous to the floor plan of a house. The floor plan depicts the overall layout of the rooms; their size, shape, and relationship to one another; and the doors and windows that allow movement into and out of the rooms. The floor plan gives us an overall view of the house.

Gives a layout for the overall view of the software


The architecture design element is depicted as a set of interconnected subsystems, often derived from analysis packages
within the requirements model.
Each subsystem may have its own architecture.

Built from

1. Data Flow Models or Class Diagrams

2. Information obtained from the application domain

3. Architectural patterns and styles


Component Level Design

House Analogy
A set of detailed drawings (and specifications) for each room in a house. These drawings
depict wiring and plumbing within each room, the location of electrical receptacles and wall
switches, faucets, sinks, showers, tubs, drains, cabinets, and closets.

Component-level design for software fully describes the internal detail for each software component

The component level design defines data structures for all local data objects and algorithmic detail for processing that
occurs within a component and an interface that allows access to all component operations

The design details of a component can be modeled at many different levels of abstraction.

Detailed procedural flow for a component can be represented using either pseudocode (a programming language-like
representation) or some other diagrammatic form (e.g., flowchart or box diagram)

Algorithmic structure follows the rules established for structured programming (i.e., a set of constrained procedural
constructs).

Data structures, selected based on the nature of the data objects to be processed, are usually modeled using
pseudocode or the programming language to be used for implementation.

Interface Design

House Analogy
Analogous to a set of detailed drawings (and specifications) for the doors, windows, and
external utilities of a house. These drawings depict the size and shape of doors and windows,
the manner in which they operate, the way in which utility connections (e.g., water, electrical,
gas, telephone) come into the house and are distributed among the rooms depicted in the floor
plan.

The interface design has three important elements

1. User Interface: Incorporates aesthetic elements, ergonomic elements, technical elements.

2. External Interfaces to other systems: Requires definitive information about the entity to which information is sent or
received.

a. Should also incorporate error checking and security features



3. Internal interfaces between various design components

These interface design elements allow the software to communicate externally and enable internal communication
and collaboration among the components that populate the software architecture.

What is Component? Explain concept of UML in Component Design

💡 Component: A modular, deployable, and replaceable part of a system that encapsulates implementation and
exposes a set of interfaces.

UML (Unified Modeling Language) is a standardized modeling language used in software engineering to visually represent
and document software systems. UML provides a set of diagrams and notation for representing various aspects of software
design, including class diagrams, sequence diagrams, and component diagrams.

The Visual Collaboration Platform for Every Team | Miro
https://miro.com/diagramming/what-is-a-uml-component-diagram/

1. Components:

In UML component diagrams, a component is represented by a rectangular box with the component's name written
inside. The box typically includes the component's provided and required interfaces.

2. Interfaces:

Interfaces are elements of components or classes that deliver function to other components or classes.

Provided interfaces represent services or functionalities offered by a component

Required interfaces represent services or functionalities that a component needs from its environment.

3. Dependencies:

Dependencies between components are represented by arrows pointing from the dependent component to the
component on which it depends. Dependencies can be used to indicate relationships such as usage, association, or
generalization.



4. Connectors:

Connectors are used to show the flow of information between components, dependencies, and communication
channels

For example, a connector might represent the flow of user login information from the authentication component to
the data management component.

5. Ports:

Ports represent points of interaction on a component.

Ports are depicted as small squares on the edges of the component box and are connected to interfaces.

Write a short note on Architectural Design

Architecture Design

House Analogy
Analogous to the floor plan of a house. The floor plan depicts the overall layout of the rooms; their size, shape, and relationship to one another; and the doors and windows that allow movement into and out of the rooms. The floor plan gives us an overall view of the house.

💡 the process of defining a collection of hardware and software components and their interfaces to establish the
framework for the development of a computer system

The architecture design element is depicted as a set of interconnected subsystems, often derived from analysis packages
within the requirements model.

Each subsystem may have its own architecture.

Built from

1. Data Flow Models or Class Diagrams

2. Information obtained from the application domain



3. Architectural patterns and styles

Key Aspects of Architectural Design:

1. System Structure:

Define the overall structure of the system, including the organization of components, modules, layers, and
subsystems. This involves deciding how the system will be decomposed into manageable and cohesive parts.

2. Component Identification:

Identify the major components of the system, considering their responsibilities, functionalities, and interactions.
Components may include user interfaces, application logic, databases, external interfaces, and more.

3. Data Design:

Design the data architecture, including data models, databases, and data flow. Specify how data will be stored,
retrieved, processed, and shared among different components.

4. Interface Design:

Define the interfaces between system components, specifying how they will communicate and interact. This involves
determining the methods, protocols, and data formats used for communication.

5. Architectural Patterns:

Choose appropriate architectural patterns or styles that align with the system's requirements. Common architectural
patterns include client-server, layered architecture, microservices, and event-driven architecture.

6. Scalability and Performance:

Consider scalability and performance requirements during architectural design. Decide how the system will handle
growing user loads, data volumes, and performance demands.

7. Security Considerations:

Incorporate security measures into the architectural design, addressing issues such as data protection, access
control, authentication, and encryption.

8. Reliability and Fault Tolerance:

Design the system to be reliable and resilient. Consider mechanisms for error handling, fault tolerance, and recovery
to ensure the system's availability and robustness.

9. Technology Selection:

Choose appropriate technologies, frameworks, and tools that align with the architectural decisions. Consider factors
such as development platforms, databases, communication protocols, and third-party integrations.

10. Maintainability and Extensibility:

Plan for the long-term maintainability and extensibility of the system. Design components and interfaces in a way
that facilitates future updates, enhancements, and modifications.

What is User Interface Design? Explain 3 golden rules of UI design

💡 User interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that
clearly communicate to the user what's important.

The three golden rules of UI design as stated by Theo Mandel are



1. Place the user in control:

a. Define interaction modes in a way that does not force the user into unnecessary or undesired actions: The user should be able to easily enter and exit each mode with little or no effort.

b. Provide for flexible interaction: Different people prefer different interaction mechanisms; some might use keyboard commands, some a mouse, some a touch screen. Hence all relevant interaction mechanisms should be provided.

2. Reduce the user’s memory load:

Reduce demand on short-term memory: When users are involved in complex tasks, the demand on short-term memory is significant. The interface should therefore be designed to reduce the need to remember previously performed actions, inputs, and results.

Establish meaningful defaults: An initial set of meaningful defaults should be provided for the average user; if a user needs different settings, they should be able to change or add the required features.

3. Make the interface consistent:

Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide consistent indicators that tell users where they are in the workflow. Users should also know which page they navigated from and where they can navigate to from the current page.

Maintain consistency across a family of applications: The development of some set of applications all should follow
and implement the same design, rules so that consistency is maintained among applications.

If past interactive models have created user expectations, do not make changes unless there is a compelling reason.

Write a short note on UI design

💡 User interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that
clearly communicate to the user what's important.

Key Aspects of UI Design:

1. User-Centered Design:

UI design begins with a deep understanding of the target users and their needs. Designers use user-centered design
principles, involving users in the design process through research, personas, and usability testing.

2. Visual Design:

Visual design involves the use of color, typography, imagery, and layout to create an aesthetically pleasing and
cohesive interface. Visual elements should align with the brand identity and contribute to a positive emotional
response from users.

3. Information Architecture:

Information architecture organizes and structures content in a way that is logical and easy to navigate. This includes
defining the hierarchy of information, creating clear navigation paths, and ensuring content discoverability.

4. Interaction Design:

Interaction design focuses on defining how users will interact with the interface. It includes designing intuitive
navigation, clear calls to action, and interactive elements that guide users through the intended workflow.

5. Usability:

Usability is a critical factor in UI design. Designers strive to create interfaces that are easy to learn, efficient to use,
and error-tolerant. Usability testing helps identify areas for improvement and ensures a positive user experience.

6. Consistency:

Consistency in design elements, terminology, and layout enhances user predictability and comprehension. A
consistent UI promotes a sense of familiarity, making it easier for users to navigate and understand the system.



Importance of UI Design:

1. Enhanced User Experience:

A well-designed UI contributes to a positive and enjoyable user experience, increasing user satisfaction and
engagement.

2. Increased Usability:

Usable interfaces reduce the learning curve for users, making it easier for them to accomplish tasks and navigate
the system.

3. Brand Image:

UI design plays a role in shaping the brand image. A visually appealing and consistent interface reinforces the brand
identity and professionalism.

4. Efficient Workflows:

Thoughtful UI design streamlines workflows, helping users accomplish tasks efficiently and without unnecessary
friction.

Chapter 5 (Need Diagrams) ✅


1. Explain with example Unit testing, Integration testing, Validation testing and System testing

2. What is White Box Testing ,Explain with Diagram how white box testing can be performed?

3. What is Black Box Testing? Explain with Diagram how black box testing can be performed?

4. Compare White Box Testing and Black Box Testing

5. Compare Unit Testing and Integration Testing

💡 Instead of the answering the repeated questions, I’m just adding the notes per topic here. We can then adjust the
amount we write

Answered Questions
Explain different Test Characteristics
Testing Objectives

1. A good test case has a high probability of finding an undiscovered error

2. A successful test case is one that uncovers an as-yet undiscovered error

Testing Principles

1. All tests should be traceable to customer requirements.

2. Tests should be planned long before testing begins.

3. The Pareto principle can be applied to software testing - 80 % of all errors uncovered during testing will likely be
traceable to 20 % of all program modules.

4. Testing should begin "in the small" and progress toward testing "in the large"

5. Exhaustive testing is not possible.

6. To be most effective, testing should be conducted by an independent third party.



Why is testing important?
• Generally, testing is a process that requires more effort than any other software engineering activity. Testing is a set of activities that can be planned in advance and conducted systematically.
• If it is conducted haphazardly, time will be wasted and, even worse, more errors may get introduced.

What is Software Testing? Explain different Software Testing Strategies

💡 A critical element of software quality assurance and represents the ultimate review of software, design and coding.

Software is tested to uncover errors in it, that were made when the software was being designed or constructed.

Unit Testing
Focuses verification effort on the smallest unit of software design: the software component or module.

Using the component-level design description as a guide, important control paths are tested to uncover errors within
the boundary of the module.

Because a component is not a stand-alone program, driver and/or stub software must often be developed for each
unit test.

"driver code" typically refers to the code that is responsible for initializing and invoking the units of code (such as
functions or methods) being tested.

Stubs replace modules that are invoked by the component to be tested. A stub uses the subordinate module's
interface, prints verification of entry and returns control to the module undergoing testing.
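A minimal, hypothetical sketch of a driver and a stub in Python (names are illustrative, not from the source):

```python
# Unit under test: depends on a subordinate pricing module not yet built.
def total_price(cart, get_price):
    return sum(get_price(item) for item in cart)

# Stub: mimics the subordinate module's interface, prints verification
# of entry, and returns a canned value to the unit under test.
def price_stub(item):
    print(f"stub called for: {item}")
    return 10.0

# Driver: initializes inputs, invokes the unit, and checks the result.
if __name__ == "__main__":
    assert total_price(["pen", "book"], price_stub) == 20.0
    print("unit test passed")
```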

Integration Testing
A systematic technique of conducting tests to uncover errors associated with interfacing components

The objective is to take unit-tested components and build a program structure that has been dictated by design

Incremental integration: The program is constructed and tested in small increments, where errors are easier to isolate
and correct.

Interfaces are more likely to be tested completely, and a systematic test approach may be applied.

Validation Testing
Software Validation Tests are done to confirm conformity with requirements

After each validation test case has been conducted, one of two possible conditions exists:

The function or performance characteristic conforms to specification and is accepted

A deviation from specification is uncovered and a deficiency list is created.

Errors at this point often mean that the scheduled delivery will be delayed.

Alpha and Beta Testing


Acceptance tests are conducted by the customer (or the end users) to uncover errors while using the system.

Alpha Testing
The alpha test is conducted at the developer’s site by a representative group of end users.

The software is used in a natural setting with the developer “looking over the shoulder” of the users and recording
errors and usage problems.

Beta Testing
Conducted at one or more end-user sites.



The developer is not present

The beta test is a “live” application of the software in an environment that cannot be controlled by the developer.

Customer records all the problems

Developers then attempt to fix all the issues reported.

Explain different White-box test design techniques

White-box test design techniques, also known as structural or glass-box testing techniques, involve creating test cases
based on an understanding of the internal logic, code structure, and paths of the software application. These techniques aim
to ensure that various code segments, conditions, and branches are thoroughly tested.

1. Statement Coverage:

Objective: Ensure that each statement in the code is executed at least once during testing.

Approach: Design test cases to cover individual statements in the source code.

Formula:

Statement Coverage = (Number of Executed Statements / Total Number of Statements) × 100

2. Branch Coverage:

Objective: Ensure that all branches (decision points) in the code are taken at least once during testing.

Approach: Design test cases to cover all possible branches, including both true and false conditions.

Formula:

Branch Coverage = (Number of Executed Branches / Total Number of Branches) × 100

3. Condition Coverage (Seems similar to Branch Coverage):

Objective: Ensure that each boolean condition in the code evaluates to both true and false during testing.

Approach: Design test cases to exercise each condition in both true and false states.

Formula:

Condition Coverage = (Number of Executed Conditions / Total Number of Conditions) × 100

Good Explanation 👇
def example_function(a, b, c):
    # Two decision points: 'a > 0' and 'b > 0'; 'c' is unused here.
    if a > 0:
        result = "Positive A"
    elif b > 0:
        result = "Positive B"
    else:
        result = "Non-Positive"
    return result

Now, let's break down the conditions and branches:

Branches:

1. a > 0 (True branch)

2. a > 0 (False branch, but leading to the next condition b > 0 )

Conditions:



1. a > 0

2. b > 0

To achieve 100% branch coverage, you need to ensure that both branches are executed at least once. However,
achieving 100% condition coverage requires that each condition is evaluated in both true and false states.
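
For instance, the following inputs (a minimal sketch using `example_function` above) show the difference:

```python
# Test inputs for example_function (defined above).
assert example_function(1, 0, 0) == "Positive A"    # a > 0 is True
assert example_function(0, 1, 0) == "Positive B"    # a > 0 False, b > 0 True
# The two calls above already achieve 100% branch coverage of `a > 0`,
# but b > 0 has only evaluated to True; condition coverage needs one more:
assert example_function(0, 0, 0) == "Non-Positive"  # b > 0 evaluates to False
```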

4. Loop Coverage:

Objective: Ensure that loops are adequately tested, including zero iterations, single iterations, and multiple
iterations.

Approach: Design test cases that exercise loops under different scenarios, such as empty loops, loops with a single
iteration, and loops with multiple iterations.
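
As a quick illustration (the `total` helper below is hypothetical), the three loop scenarios can be exercised like this:

```python
def total(values):
    s = 0
    for v in values:  # loop under test
        s += v
    return s

assert total([]) == 0           # zero iterations (empty loop)
assert total([5]) == 5          # exactly one iteration
assert total([1, 2, 3]) == 6    # multiple iterations
```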

5. Path Coverage:

Objective: Ensure that all possible paths through the code are tested.

Approach: Design test cases to cover different paths, considering all possible combinations of branches and
conditions.

Challenge: Path coverage can be complex for large programs with numerous paths, and achieving 100% path
coverage may be impractical.

6. Data Flow Coverage:

Objective: Ensure that variables are defined and used correctly throughout the program.

Approach: Design test cases to trace the flow of data through the program, including variable assignments and
references.

Focus: Identify instances of uninitialized variables, unused variables, and potential data flow issues.

7. Boundary Value Analysis (BVA):

Objective: Focus on testing values at the boundaries of input domains.

Approach: Design test cases using values at the edges or boundaries of valid input ranges.

Example: For an input range of 1 to 100, test with values like 1, 100, 2, 99, and values just outside the specified
range.
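
A minimal sketch of such boundary tests, assuming a hypothetical `in_range` validator for the 1 to 100 range:

```python
def in_range(x):
    return 1 <= x <= 100  # valid input domain: 1..100

# Boundary value analysis: the edges, just inside, and just outside.
assert in_range(1) and in_range(100)            # on the boundaries
assert in_range(2) and in_range(99)             # just inside
assert not in_range(0) and not in_range(101)    # just outside
```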

8. Mutation Testing (Test the test cases):

Objective: Evaluate the effectiveness of test cases by introducing intentional faults (mutations) into the code and
checking if the test cases detect these faults.

Approach: Introduce mutations into the code, such as changing operators, modifying constants, or deleting
statements, and observe if the test cases can identify the changes.
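
A toy illustration (names are hypothetical): the mutant flips `+` to `-`, and an effective test case should detect ("kill") it:

```python
def add(a, b):           # original unit under test
    return a + b

def add_mutant(a, b):    # mutation: operator changed from + to -
    return a - b

def test_addition(fn):
    return fn(2, 3) == 5

assert test_addition(add)             # the test passes on the original
assert not test_addition(add_mutant)  # the test kills the mutant
```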

Each of these white-box test design techniques has its strengths and limitations. Testers often use a combination of these
techniques to achieve comprehensive coverage and ensure the effectiveness of their testing efforts. The choice of technique
depends on factors such as the nature of the software, testing objectives, and resource constraints.



White Box Testing (WHAT DIAGRAM?)
White-box testing of software is predicated on close examination of procedural detail. Logical paths through the software and collaborations between components are tested by exercising specific sets of conditions and/or loops.

Presents logistical issues due to number of possibilities

Working process of white box testing:

Input: Requirements, Functional specifications, design documents, source code.

Processing: Performing risk analysis to guide through the entire process.

Proper test planning: Designing test cases to cover the entire code; executing them, then rinsing and repeating until error-free software is reached. The results are then communicated.

Output: Preparing final report of the entire testing process.

Basis Path
The basis path method enables the test-case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.

Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time
during testing
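
As a small sketch, reusing `example_function` from the condition-coverage discussion above: with two decision points (`a > 0` and `b > 0`), its cyclomatic complexity is V(G) = 2 + 1 = 3, so the basis set contains three independent paths:

```python
# Assumes example_function from the white-box techniques section is in scope.
basis_set = [
    (1, 0, 0),  # path 1: a > 0          -> "Positive A"
    (0, 1, 0),  # path 2: a <= 0, b > 0  -> "Positive B"
    (0, 0, 0),  # path 3: a <= 0, b <= 0 -> "Non-Positive"
]
for a, b, c in basis_set:
    print(example_function(a, b, c))  # every statement executes at least once
```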

Control Structure Testing


Control structure testing is used to increase the coverage area by testing various control structures present in the program.
The different types of testing performed under control structure testing are as follows (can be taken from answer above as
well, should be more than enough)

1. Condition Testing: Condition testing is a test-case design method that ensures the logical conditions and decision statements are free from errors. Errors present in logical conditions can include incorrect Boolean operators, missing parentheses in a Boolean expression, and errors in relational operators or arithmetic expressions.

2. Data Flow Testing: The data flow test method chooses test paths of a program based on the locations of the definitions and uses of the variables in the program. The approach assumes that each statement in a program is assigned a unique statement number and that each function cannot modify its parameters or global variables.



3. Loop Testing: Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs.

a. Simple Loops Test

b. Nested Loops test

c. Concatenated Loops Test etc.


Black Box Testing (WHAT DIAGRAM?)

Reference: https://www.geeksforgeeks.org/software-engineering-black-box-testing/

A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the
software

Usually applied during later stages of testing.

Black-box testing attempts to find errors in the following categories:

1. incorrect or missing functions,

2. interface errors,

3. errors in data structures or external database access,

4. behavior or performance errors

5. initialization and termination errors

Procedure
Black box testing is a type of testing where the tester focuses solely on the software's functionality without knowledge of
its internal code structure. The goal is to verify that the software behaves as expected based on specified requirements.

Steps in Black Box Testing:

1. Identify inputs (causes) and outputs (effects).

2. Develop a cause-effect graph.

3. Transform the graph into a decision table.



4. Convert decision table rules to test cases.

More generally, the black-box testing procedure unfolds as follows:

1. Input Specification:

The tester defines a set of test inputs based on the software's specifications and requirements. These inputs
represent the conditions under which the software will be tested.

2. Test Case Design:

Test cases are designed to cover various scenarios, including normal operation, boundary cases, and error
conditions. Each test case specifies the input data, the expected output, and the conditions for executing the test.

3. Test Execution:

The designed test cases are executed on the software without any knowledge of its internal logic or code
structure. The tester interacts with the software through its user interface, APIs, or other specified entry points.

4. Output Evaluation:

The tester observes and evaluates the software's outputs or responses to the test inputs. This involves comparing
the actual results with the expected results specified in the test cases.

5. Defect Reporting:

If discrepancies are found between the actual and expected results, the tester reports these as defects. The
defects are documented with details such as the steps to reproduce, the observed behavior, and any other
relevant information.

6. Regression Testing:

As the software undergoes changes or updates, regression testing is performed by re-executing the black box
test cases to ensure that the modifications do not introduce new defects or impact existing functionalities.



Graph Based Testing

Definition
Graph testing begins by creating a graph of important objects and their relationships, then
devising a series of tests that will cover the graph so that each object and relationship is
exercised and errors are uncovered.

Equivalence Testing

Definition

Equivalence partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.

An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all
character data) that might otherwise require many test cases to be executed before the
general error is observed.

Test-case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition.

An equivalence class represents a set of valid or invalid states for input conditions. Typically, an input condition is either a
specific numeric value, a range of values, a set of related values, or a Boolean condition
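
For instance, a minimal sketch (the 18 to 60 age range and the `accepts` function are assumed purely for illustration):

```python
def accepts(age):
    return 18 <= age <= 60  # valid input condition: a range of values

# Equivalence classes: one valid class, two invalid classes.
# One representative test per class exercises the whole partition.
assert accepts(35)        # valid class: 18..60
assert not accepts(17)    # invalid class: below the range
assert not accepts(61)    # invalid class: above the range
```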

Black Box vs White Box


| Criteria | Black Box Testing | White Box Testing |
| --- | --- | --- |
| Definition | It is a way of testing the software in which the internal structure, the program, or the code is hidden and nothing is known about it. | It is a way of software testing in which the tester has knowledge about the internal structure, the code, or the program of the software. |
| Done by | It is mostly done by software testers. | It is mostly done by software developers. |
| Knowledge of Implementation | No knowledge of implementation is needed. | Knowledge of implementation is required. |
| Knowledge of Code | Implementation of code is not needed for black box testing. | Code implementation is necessary for white box testing. |
| Knowledge of Programming | No knowledge of programming is required. | It is mandatory to have knowledge of programming. |
| Document needed | This testing can be initiated based on the requirement specifications document. | This type of testing is started after a detailed design document. |
| Algorithm Testing Suitability | It is not suitable or preferred for algorithm testing. | It is suitable for algorithm testing. |
| Time Consumed | It is the least time consuming. | It is the most time consuming. |
| Techniques | Decision table testing, all-pairs testing, equivalence partitioning, error guessing | Control flow testing, data flow testing, branch testing |
| Types | Functional testing, non-functional testing, regression testing | Path testing, loop testing, condition testing |
| Exhaustiveness | It is less exhaustive as compared to white box testing. | It is comparatively more exhaustive than black box testing. |

Honestly, useless points after this 👇

| Black Box Testing | White Box Testing |
| --- | --- |
| It is a functional test of the software. | It is a structural test of the software. |
| It is the behavior testing of the software. | It is the logic testing of the software. |
| It is applicable to the higher levels of testing of software. | It is generally applicable to the lower levels of software testing. |
| It can be referred to as outer or external software testing. | It is the inner or the internal software testing. |
| It is also called closed testing. | It is also called clear box testing. |
| It can be done by trial-and-error ways and methods. | Data domains along with inner or internal boundaries can be better tested. |
| Example: Searching something on Google by using keywords | Example: Using inputs to check and verify loops |

Unit Testing

Definition
Focuses on verification effort on the smallest unit of software design - the software component or
module.

Using the component-level design description as a guide, important control paths are tested to uncover errors within the
boundary of the module.

The test focuses on the internal processing logic and data structures of a single component.

Developing tests before the code for a component is made is often done to ensure that you write code that passes all the
tests.

Simplified when the code has high cohesion

When a single component has only one function, the number of test cases is reduced and errors become easier to uncover.

Considerations (while testing)



The module interface is tested to ensure that information flows in and out properly from the program unit under test

Local data structures are examined to ensure data integrity.

All independent paths in the control structure are tested to ensure that all statements are executed at least once

All boundary conditions are tested as well to ensure that the module operates properly at boundaries established.

All error handling paths are tested as well

Data flow across the interface is checked first; if data does not flow in and out properly, all other tests are moot.

Boundary conditions, such as the last element or the final iteration of a loop, are where most errors occur (experience 🥲)
Procedures
Because a component is not a stand-alone program, driver and/or stub software must often be developed for each unit
test.

Stubs replace modules that are invoked by the component to be tested. A stub uses the subordinate module's interface,
prints verification of entry and returns control to the module undergoing testing

Driver code and stubs are both overhead: code that is written for testing but not shipped in the final product.

Sometimes testing may need to be postponed until the integration step is completely carried out

Example

Consider a software application for a calculator. In unit testing, you would test each operation of the calculator (e.g., addition,
subtraction, multiplication) as an individual unit. For instance, you would check that the addition function produces the correct
result for various input combinations. Each operation is tested in isolation to ensure it works as expected.
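
A minimal sketch of such a unit test with Python's unittest module (the `add` function stands in for the calculator's addition operation):

```python
import unittest

def add(a, b):  # hypothetical calculator operation under test
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_zero(self):
        self.assertEqual(add(0, 0), 0)

if __name__ == "__main__":
    unittest.main()
```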

Integration Testing

💡 A systematic technique of conducting tests to uncover errors associated with interfacing components
The objective is to take unit-tested components and build a program structure that has been dictated by design

Incremental integration: The program is constructed and tested in small increments, where errors are easier to isolate and
correct. Interfaces are more likely to be tested completely, and a systematic test approach may be applied.

Procedure
There are various types of integration testing

1. Top Down Integration

a. Starts testing from the highest level of the software hierarchy and progressively integrates lower-level modules.

b. Advantage: Testing of major control functions early.

c. Disadvantage: Needs a lot of stubs

2. Bottom Up Integration

a. Begins testing from the lower-level modules and incrementally integrates higher-level components.

b. Advantage: Easier test-case design and a lack of stubs

c. Disadvantage: The program does not exist as an entity until the last module is added

3. Regression Testing

a. Involves rerunning existing test cases to ensure that new changes or additions to the codebase do not negatively
impact the existing functionalities.



4. Smoke Testing

a. A preliminary, high-level test that checks if the basic functionalities of a software build are working correctly, providing
a quick assessment of the build's stability.

Tester should identify and test the critical modules as much as possible.

Example

Imagine an e-commerce website with separate modules for user authentication and order processing. In integration testing,
you would validate that when a user places an order, the order processing module interacts correctly with the user
authentication module. This ensures a seamless flow of data and functionality between these two components.
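
A minimal sketch of this scenario with hypothetical names: order processing is first integration-tested against a stub of the authentication module, which is later replaced by the real one:

```python
def authenticate_stub(user):
    """Stands in for the real authentication module."""
    print(f"stub: authenticate({user!r}) called")  # verification of entry
    return True  # canned response

def place_order(user, item, authenticate=authenticate_stub):
    if not authenticate(user):
        return "rejected"
    return f"order placed for {item}"

# Integration test across the authentication/order-processing interface.
assert place_order("alice", "book") == "order placed for book"
```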
Validation Testing

Definition
Software Validation Tests are done to confirm conformity with requirements

After each validation test case has been conducted, one of two possible conditions exists:

1. The function or performance characteristic conforms to specification and is accepted

2. A deviation from specification is uncovered and a deficiency list is created.

Errors at this point often mean that the scheduled delivery will be delayed.

Configuration Review
Ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary
detail to bolster the support activities

Alpha and Beta Testing


Acceptance tests are conducted by the customer (or the end users) to uncover errors while using the system.

Alpha Testing
The alpha test is conducted at the developer’s site by a representative group of end users. The software is used in a natural
setting with the developer “looking over the shoulder” of the users and recording errors and usage problems.

Beta Testing
Conducted at one or more end-user sites.

The developer is not present

The beta test is a “live” application of the software in an environment that cannot be controlled by the developer.

Customer records all the problems

Developers then attempt to fix all the issues reported.

Example
Consider a social media application where users can post text updates. In validation testing, you would check if the
application correctly validates user input. For instance, you would ensure that the application rejects posts that exceed a
character limit and prompts users to provide required information, such as a post title.
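
A minimal sketch of such validation checks (the 280-character limit and the function names are assumptions, not from the source):

```python
MAX_CHARS = 280  # assumed character limit

def validate_post(title, body):
    if not title:
        return "error: title is required"
    if len(body) > MAX_CHARS:
        return "error: post exceeds character limit"
    return "accepted"

assert validate_post("Hello", "short post") == "accepted"
assert validate_post("", "x") == "error: title is required"
assert validate_post("Hi", "x" * 300) == "error: post exceeds character limit"
```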



System Testing
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly
performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data
recovery, and restart are evaluated for correctness

Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper
penetration.

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.

Performance testing is designed to test the run-time performance of software within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process

Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to
operate. In addition, deployment testing examines all installation procedures and specialized installation software (e.g.,
“installers”) that will be used by customers, and all documentation that will be used to introduce the software to end users

Example: Imagine a complex enterprise resource planning (ERP) system used by a manufacturing company. In system
testing, you would test end-to-end scenarios, such as creating a new order, processing it through the system, updating
inventory, and generating financial reports. This type of testing ensures that the entire ERP system functions seamlessly
as a cohesive unit.

These conceptual examples help illustrate the different levels and focuses of testing in the software development
lifecycle. Unit testing ensures the correctness of individual components, integration testing verifies interactions between
components, validation testing checks adherence to requirements, and system testing evaluates the overall functionality
of the complete system.

Unit testing vs Integration Testing


| Criteria | Unit Testing | Integration Testing |
| --- | --- | --- |
| Definition | In unit testing, each module of the software is tested separately. | In integration testing, all modules of the software are tested combined. |
| Tester Knowledge | In unit testing, the tester knows the internal design of the software. | In integration testing, the tester doesn't know the internal design of the software. |
| Performing Order | Unit testing is performed first of all testing processes. | Integration testing is performed after unit testing and before system testing. |
| AKA | Unit testing is white box testing. | Integration testing is black box testing. |
| Tester Role | Unit testing is performed by the developer. | Integration testing is performed by the tester. |
| Defect detection ease | Easy | Difficult |
| Dependency on completion | It tests parts of the project without waiting for others to be completed. | It tests only after the completion of all parts. |
| Cost | Unit testing is less costly. | Integration testing is more costly. |
| Scope | Unit testing observes only the functionality of the individual units. | Error detection takes place when modules are integrated to create an overall system. |
| Specification | Module specification is done initially. | Interface specification is done initially. |
| External dependencies | The proper working of your code with external dependencies is not ensured by unit testing. | The proper working of your code with external dependencies is ensured by integration testing. |
| Maintenance cost | Maintenance is cost effective. | Maintenance is expensive. |
| Speed of execution | Fast execution as compared to integration testing. | Slower because of the integration of modules. |
| Exposure to code | Unit testing results in in-depth exposure to the code. | Integration testing results in detailed visibility of the integration structure. |

Chapter 6
Explain concept of Risk Analysis & Management

Risk: An uncertainty that may arise due to choices made in the past and can cause heavy losses.
Risk Management: The process of making decisions based on an evaluation of the factors that threaten the business.

Risk Analysis
1. Risk Identification:

Identify potential risks by considering all aspects of the project, including technical, organizational, and external
factors. This can be done through brainstorming, historical data analysis, and expert interviews.

2. Risk Assessment:

Assess the probability of each identified risk occurring and estimate the potential impact on the project. Risks are often assessed in terms of likelihood, severity, and the ability to detect them (a quantified sketch follows this list).

3. Risk Prioritization:

Prioritize risks based on their significance. Risks with high impact and high probability are often given priority, but
other factors such as the project phase and available resources may also influence prioritization.

4. Risk Documentation:

Document identified risks, including their descriptions, potential impacts, likelihood, and proposed mitigation or
contingency plans. This documentation serves as a reference throughout the project.
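
One common way to quantify the assessment step is risk exposure, RE = P × C, where P is the probability that the risk occurs and C is the cost to the project if it does. A minimal sketch (the risks and figures below are invented for illustration):

```python
# RE = probability * cost; higher exposure means higher priority.
risks = [
    {"name": "key developer leaves",  "probability": 0.30, "cost": 40_000},
    {"name": "requirements change",   "probability": 0.70, "cost": 15_000},
    {"name": "server hardware fails", "probability": 0.10, "cost": 60_000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["cost"]

# Prioritize: highest exposure first.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"{r['name']}: RE = {r['exposure']:,.0f}")
```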

Risk Management
1. Risk Mitigation:

Develop and implement strategies to reduce the probability or impact of identified risks. This may involve taking
preventive actions, improving processes, or incorporating additional resources.

2. Risk Avoidance:

In some cases, it may be possible to avoid certain risks altogether. This could involve changing project plans,
technologies, or methodologies to eliminate the possibility of a particular risk occurring.



3. Risk Transfer:

Transfer the impact of a risk to a third party, often through insurance or outsourcing. This is a common strategy for
risks that are beyond the control of the project team.

4. Risk Acceptance:

Accepting certain risks without taking specific actions to mitigate them. This is a valid strategy when the potential
impact is low, the cost of mitigation is too high, or when there are no practical mitigation measures available.

5. Contingency Planning:

Develop contingency plans to address potential risks if they materialize. Contingency plans outline the steps to be
taken if a risk event occurs, helping the project team respond quickly and effectively.

6. Continuous Monitoring:

Regularly monitor the project environment for new risks, changes in existing risks, or the effectiveness of
implemented risk management strategies. Adjust the risk management plan as needed throughout the project
lifecycle.

Write a note on Risk Mitigation, Monitoring and Management Plan (RMMM).

RMMM
Risk mitigation, monitoring and management

Risk Mitigation
Preventing the risks in the first place.

Objective: Developing strategies and action plans to reduce the impact of identified risks.

Methods: Proactive measures to reduce the likelihood of occurrence (preventive) and responsive actions to address
consequences (contingency).

Output: Mitigation plans and strategies.

Some of the steps that can be taken to ensure risk mitigation include

1. Communication with staff to find probable risk

2. Find and eliminate all the causes that can create risk before the project starts

3. Conduct timely reviews in order to speed up work

Risk Monitoring
Objective: Continuously tracking and reassessing identified risks throughout the project.

Methods: Regular status meetings, progress reports, and ongoing risk analysis.

Output: Updated risk registers and documentation, adjustments to mitigation plans based on changing circumstances

Factors monitored include:

1. The degree to which the team performs with a spirit of teamwork

2. The types of problems that are occurring

3. The behavior of developers as the pressure of the project varies

The objective of Risk Monitoring is

1. To check the reality of the predicted risks

2. To ensure the steps defined to avoid the risks are implemented well

3. To gather the information which can be useful for analyzing the risk.

Risk Management



1. Risk management and contingency planning assume that mitigation efforts have failed and that the risk has become a reality.

2. In general, risk management tools assist in generic risk identification by

a. providing a list of typical project and business risks,

b. provide checklists or other “interview” techniques that assist in identifying project specific risks,

c. assign probability and impact to each risk,

d. support risk mitigation strategies, and generate many different risk-related reports

Document
The RMMM plan documents all work performed as part of risk analysis and is used by the project manager as part of the
overall project plan.

Some software teams do not develop a formal RMMM document. Rather, each risk is documented individually using a
risk information sheet

In most cases, the RIS is maintained using a database system so that creation and information entry, priority ordering,
searches, and other analysis may be accomplished easily
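
A minimal sketch of a risk information sheet as a record type (the fields are illustrative; real RIS formats vary):

```python
from dataclasses import dataclass

@dataclass
class RiskInformationSheet:
    risk_id: str
    description: str
    probability: float      # estimated likelihood, 0..1
    impact: str             # e.g. "marginal", "critical", "catastrophic"
    mitigation_plan: str
    status: str = "open"    # updated as monitoring proceeds

ris = RiskInformationSheet(
    risk_id="R-007",
    description="Third-party API may change without notice",
    probability=0.4,
    impact="critical",
    mitigation_plan="Isolate the API behind an internal adapter layer",
)
print(ris)
```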

Once RMMM has been documented and the project has begun, risk mitigation and monitoring steps commence.

Explain the concept of The Software Configuration Management (SCM)

The software configuration management process defines a series of tasks that have four primary objectives:

1. to identify all items that collectively define the software configuration,

2. to manage changes to one or more of these items,

3. to facilitate the construction of different versions of an application.

4. to ensure that software quality is maintained as the configuration evolves over time.

Version Control
Version control combines procedures and tools to manage different versions of configuration objects that are created during
the software process

1. A project repository that stores all relevant configuration objects

2. A version management capability that stores all versions of a configuration object (or enables construction of any version from differences in past versions)

3. A make facility that enables you to collect all relevant configuration objects and construct a specific version of the
software

4. Version control and change control systems often implement an issues tracking (also called bug tracking) capability that enables the team to record and track the status of all outstanding issues associated with each configuration object.

Change Control
Too much change control and we create problems. Too little, and we create other problems.

For a large software project, uncontrolled change rapidly leads to chaos. For such projects, change control combines
human procedures and automated tools to provide a mechanism for the control of change.

A change request is submitted and evaluated to assess technical merit, potential side effects, overall impact on other
configuration objects and system functions, and the projected cost of the change

The results of the evaluation are presented as a change report, which is used by a change control authority (CCA)—a
person or group that makes a final decision on the status and priority of the change.



An engineering change order (ECO) is generated for each approved change. The ECO describes the change to be
made, the constraints that must be respected, and the criteria for review and audit.
