SEPM Answer Key
Doubts
1. COCOMO II or Basic COCOMO? The syllabus says COCOMO II and we studied COCOMO II, but the question bank says Basic: go with Basic COCOMO.
2. What diagrams are to be added for White Box and Black Box testing? Adding gears in a box makes it a black box.
Chapter 2
Explain concept of Requirement Engineering and Requirement Modeling
💡 Requirement Engineering is the process of eliciting, analyzing, documenting, and validating the services required of a system and the constraints under which it will operate and be developed.
Requirement: A requirement can range from a high-level abstract statement of a service or of a system constraint to
a detailed mathematical functional specification
Requirement Modeling uses a combination of text and diagrammatic forms to depict requirements in a way that is
relatively easy to understand, straightforward to review for correctness, completeness and consistency.
Types of Requirements
1. User Requirements: Written for customers, these are high-level, abstract statements (in natural language plus diagrams) of the services the system provides and the constraints under which it operates.
2. System Requirements: A structured document setting out detailed descriptions of the system's functions, services, and operational constraints; it defines exactly what is to be implemented.
3. Software Specification
a. Detailed software description that can serve as the basis for design or implementation.
Software engineers build it using the requirements elicited from the customer
Modelling
To validate software requirements, you need to examine them from a number of different points of view.
Scenario-based modeling represents the system from the user's point of view
Data modeling represents the information space and data objects that the software will manipulate and the relationships
among them.
These models are refined and analyzed to assess their clarity, completeness and consistency.
Draw Use case diagram and Activity Diagram for Airline Booking System
Verify Correctness
Verify Correctness
The Level-0 DFD, also known as the Context Diagram, provides an overview of the entire system.
The Level-1 DFD expands on the processes shown in the Level-0 DFD. It breaks down the high-level process into
subprocesses, providing a more detailed view of how data moves within the system.
The Level-2 DFD further decomposes the subprocesses from the Level-1 DFD into finer details. It provides a more granular
view of the processes identified in Level-1, breaking them down into more detailed subprocesses and data flows.
💡 Requirement Analysis: It involves a thorough examination and interpretation of gathered requirements to ensure that
they are complete, accurate, and feasible. The primary goal of requirement analysis is to transform high-level
requirements into a detailed understanding of what the software system needs to accomplish.
Requirement Gathering: process of collecting information from stakeholders to identify their needs, expectations, and
constraints.
1. Review and Clarification: Reviewing the collected requirements to identify any inconsistencies, contradictions, or ambiguities. The analysis process often involves seeking clarification from stakeholders to ensure a shared understanding.
2. Organization and Prioritization: Organizing requirements into a structured format and establishing priorities based on their importance and impact on the system.
3. Modeling:
Representing the requirements using scenario-based, data, and flow models so that they can be reviewed for clarity, completeness, and consistency.
4. Feasibility Study:
Assessing the feasibility of implementing the proposed system. This includes analyzing technical, economic,
operational, legal, and scheduling aspects to determine the project's viability.
5. Risk Analysis:
Identifying potential risks associated with the proposed system and developing strategies to mitigate or manage
these risks.
1. Stakeholder Identification:
Identifying and involving all relevant stakeholders who have an interest or role in the software system. This includes
end-users, customers, project managers, and other impacted parties.
2. Communication:
Establishing effective communication channels to interact with stakeholders. This may involve conducting
interviews, surveys, workshops, or informal discussions.
3. Document Analysis:
Reviewing existing documentation, such as business documents, user manuals, and current system specifications,
to gain insights into the requirements.
4. Brainstorming:
Facilitating brainstorming sessions with stakeholders to gather ideas, requirements, and potential functionalities of
the system.
5. Observation:
Observing how users interact with existing systems or processes to identify pain points, challenges, and areas for
improvement.
Scenarios are sequences of steps describing interactions between users and the system. Use cases and activity
diagrams are employed to expose the functionalities of the system.
Actors:
Actors, representing entities interacting with the system, carry out use cases. Associations between actors and use
cases are identified.
Relationships:
1. Association: A link between an actor and the use cases in which that actor participates.
2. Include Relationship: One use case incorporates the behavior of another, commonly reused use case.
Modeling Techniques:
Activity Diagram:
Graphical representation of interaction flow within specific scenarios. It includes forks and branches for parallel activities
and transitions.
Swimlane Diagram:
A partitioned activity diagram where activities are grouped according to the responsible class or entity.
What is Software Requirement Specification document, explain key features of IEEE standard Software Requirement
Specification document
💡 A Software Requirements Specification (SRS) document is a comprehensive and detailed description of the intended
behavior and functionalities of a software system
1. Introduction:
Purpose: Clearly state the purpose of the SRS document and its intended audience.
Scope: Define the scope of the software, including what is included and excluded.
2. Overall Description:
Product Perspective: Describe how the software fits into the broader system or context.
User Classes and Characteristics: Identify the different user classes and their characteristics.
Operating Environment: Specify the environments in which the software will operate.
3. External Interface Requirements:
User Interfaces: Describe the interfaces the system will have with users.
4. System Features:
Provide a detailed description of each functional requirement, organized by feature. Include input, processing, and
output aspects.
5. Non-Functional Requirements:
Performance Requirements: Specify performance criteria, such as response time and throughput.
(Questions can be asked w.r.t. any system, like a library management system, bus booking system, etc.)
Requirements modeling is a critical phase in software engineering, and different approaches can be used to represent and
analyze system requirements.
Use Cases: Issue Book, Return Book, Search Catalog, Manage Member, Manage Inventory.
Diagram:
In the context of a Library Management System:
Additional Explanation:
1. Use Case Diagram:
These modeling approaches complement each other, providing a comprehensive view of the system's functionalities, data
entities, and data flow. In practice, a combination of these models is often used to ensure a thorough representation of system
requirements.
Chapter 3 ✅
Define the term Software Metrics? What are direct and Indirect software measures
Software project metrics are quantitative measurements used to assess and evaluate various aspects of a software
development project. These metrics provide data-driven insights into the project's progress, quality, and efficiency, helping
project managers and teams make informed decisions.
Direct measures of the software process include cost and effort applied. Direct measures of the product include lines of code
(LOC) produced, execution speed,
memory size, and defects reported over some set period of time.
Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, maintainability
❗ Don’t know if this is enough for a 5 marker. Need to elaborate more here
Find out: Productivity and Total Cost
Total Cost = Total LOC × Cost per LOC = 33,200 × $1.30 = $43,160
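The problem statement above is truncated (the effort figure needed for productivity is not shown), so the sketch below uses a hypothetical effort of 4 person-months purely to illustrate how the two LOC-based metrics are computed.

```python
# Minimal sketch of LOC-based productivity and cost metrics.
# The effort figure below is hypothetical; the exercise above only
# gives total LOC and cost per LOC.

total_loc = 33_200          # total delivered lines of code
cost_per_loc = 1.30         # dollars per LOC
effort_pm = 4.0             # person-months (assumed for illustration)

productivity = total_loc / effort_pm        # LOC per person-month
total_cost = total_loc * cost_per_loc       # dollars

print(f"Productivity: {productivity:.0f} LOC/person-month")
print(f"Total cost:   ${total_cost:,.2f}")   # $43,160.00
```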
Explain with Example Basic COCOMO Model and its advantages and drawbacks?
The Constructive Cost Model (COCOMO) is a widely used software cost estimation model that was introduced by Barry
Boehm in the late 1970s. It provides a framework for estimating the effort, time, and cost required to develop a software
project. COCOMO comes in three variants: Basic COCOMO, Intermediate COCOMO, and Detailed COCOMO. Here, I'll
explain the Basic COCOMO model, along with its advantages and drawbacks.
Effort = a × (KLOC)^b person-months
where:
KLOC is the estimated size of the software product in thousands of lines of code, and a and b are constants derived from historical project data for the class of project (organic, semi-detached, or embedded).
Example:
Let's say we want to estimate the effort for a software project with an estimated size of 50,000 lines of code (50 KLOC). If historical data suggests that a = 2.4 and b = 1.05, then the effort can be calculated as follows:
Effort = 2.4 × (50)^1.05 ≈ 2.4 × 60.8 ≈ 146 person-months
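A small sketch of the Basic COCOMO calculation for this example. The schedule constants c = 2.5 and d = 0.38 are the standard organic-mode values; they are not given in the text above, so treat them as assumptions.

```python
# Basic COCOMO sketch for the example above (organic-mode constants).
# Effort = a * KLOC^b (person-months), Duration = c * Effort^d (months).
# c and d are the standard organic-mode values, assumed here.

def basic_cocomo(kloc: float, a: float = 2.4, b: float = 1.05,
                 c: float = 2.5, d: float = 0.38) -> tuple[float, float]:
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # months
    return effort, duration

effort, duration = basic_cocomo(50)                 # 50 KLOC
print(f"Effort:    {effort:.1f} person-months")     # ~145.9
print(f"Duration:  {duration:.1f} months")          # ~16.6
print(f"Avg staff: {effort / duration:.1f} persons")
```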
Advantages:
1. Simplicity: The model is easy to understand and apply, requiring only an estimate of program size.
2. Quick Estimates: Since it relies on a simple formula based on size, Basic COCOMO allows for quick and early estimates, which can be useful for project planning.
3. Versatility: It can be used in the early stages of a project when detailed information is not available. As the project
progresses, more detailed estimation models like Intermediate and Detailed COCOMO can be employed.
Drawbacks:
1. Limited Accuracy: Only size is considered; cost drivers such as personnel capability, product complexity, and hardware constraints are ignored.
2. Dependence on Size Estimates: An accurate KLOC figure is hard to obtain early in a project, and errors in the size estimate propagate directly into the effort estimate.
3. Generic Constants: The model uses generic constants (a and b) that are derived from historical data. These constants may not be applicable to all types of projects and organizations.
In summary, Basic COCOMO provides a simple and quick way to estimate software development effort based on size, but it
has limitations and may not be suitable for all projects, especially those with unique characteristics or requirements.
Steps
The process of calculating Function Points (FP) involves the following steps:
External Inputs (EI): Identify user inputs that supply distinct application data to the system.
External Outputs (EO): Identify outputs such as reports, screens, and messages produced for the user.
External Inquiries (EQ): Identify user inquiries that result in data retrieval.
Internal Logical Files (ILF): Identify logical groups of data maintained within the system.
External Interface Files (EIF): Identify external files referenced by the system.
Assign complexity weights to each function type based on factors such as data complexity, transaction
complexity, and environmental factors.
Evaluate value adjustment factors, considering various factors that influence development effort.
Calculate VAF using a formula that considers the degree of influence of these factors.
Example
Given:
UFP = I × IW + O × OW + E × EW + F × FW + N × NW
UFP = 606
The Value Adjustment Factor (VAF) is determined from the 14 value adjustment factors. Given that four factors are not applicable (each with a value of 0), four factors have a value of 3, and the remaining six factors have a value of 4, the VAF is calculated as follows:
Total Value Adjustment Factor = 4 × 3 + 6 × 4 = 36
VAF = 0.65 + (0.01 × Total Value Adjustment Factor) = 0.65 + (0.01 × 36) = 0.65 + 0.36 = 1.01
FP = UFP × VAF = 606 × 1.01 = 612.06 ≈ 612
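A sketch of the FP arithmetic used in this example. The individual function counts and weights behind UFP = 606 are not given above, so UFP is taken as a known input; the 14 value-adjustment factors follow the description (four at 0, four at 3, the remaining six at 4).

```python
# Function Point sketch for the example above.
# UFP = 606 is taken as given; the 14 value-adjustment factors follow
# the description: four not applicable (0), four rated 3, six rated 4.

ufp = 606
factors = [0] * 4 + [3] * 4 + [4] * 6           # 14 general system characteristics

total_degree_of_influence = sum(factors)        # 4*3 + 6*4 = 36
vaf = 0.65 + 0.01 * total_degree_of_influence   # 0.65 + 0.36 = 1.01
fp = ufp * vaf                                  # 606 * 1.01 = 612.06

print(f"VAF = {vaf:.2f}")
print(f"FP  = {fp:.2f}")
```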
Characteristics: LOC-based models estimate project effort and cost based on the size of the code. The size is
measured in lines of code, and the assumption is that there is a linear relationship between the amount of code
and the effort required.
Advantages: Simple and intuitive, especially for projects where code size is a significant factor.
2. Direct Measure:
Measurement: LOC is a direct measure, providing a tangible and concrete metric for project size.
3. Challenges:
Drawbacks: Fails to capture differences in complexity, programming languages, and development practices. Can
be influenced by coding styles and doesn't account for differences in productivity among developers.
4. Example Model:
Formula: E = a × (KLOC)^b, where E is effort, KLOC is the size in thousands of lines of code, and a and b are constants.
Characteristics: FP-based models measure software size based on the functionality it delivers to users,
considering inputs, outputs, inquiries, internal logical files, and external interface files.
Advantages: Reflects the software's functionality, making it more language- and implementation-independent.
2. Indirect Measure:
Advantages: Better captures the overall value delivered by the software, accounting for differences in design
and implementation.
3. Flexibility:
Advantages: Provides flexibility by allowing different types of projects to be measured using the same metric,
facilitating comparisons and benchmarking.
4. Formula: FP = UFP × VAF; effort and cost are then estimated from the FP count using historical productivity data (e.g., FP per person-month).
Comparisons
1. Granularity:
LOC: Provides a fine-grained, code-level view of size tied directly to the implementation.
FP: Provides a more abstract, high-level view of software size based on functionality.
2. Language Independence:
LOC: Highly language-dependent; the same functionality can require very different line counts in different languages.
FP: More language-independent, making it suitable for comparing projects across different technologies.
3. Complexity Consideration:
LOC: Does not explicitly consider complexity but assumes a linear relationship.
FP: Incorporates complexity factors in its calculation, providing a more nuanced size metric.
4. Estimation Process:
LOC: Relatively straightforward to count, but may not capture the full scope of software functionality.
FP: Requires a more in-depth understanding of the software's functionality, involving the identification and
classification of various function types.
5. Applicability:
LOC: Most useful when the implementation language and coding practices are known and consistent across the projects being compared.
FP: Suitable for a broader range of projects, including those with diverse technologies and development methodologies.
Both LOC and FP-based models have their strengths and weaknesses, and the choice between them often depends on
the nature of the project and the information available during the estimation process. FP models are generally considered
more versatile and suitable for a wider range of projects, especially in modern software development environments.
💡 Software project scheduling is an action that distributes estimated effort across the planned project duration by
allocating the effort to specific software engineering tasks
1. Compartmentalization: The project must be compartmentalized into a number of manageable activities and tasks.
2. Interdependency: The interdependency of each compartmentalized activity or task must be determined.
a. Some tasks must occur in sequence, whereas others can occur in parallel.
b. Some activities cannot be completed until the work product from another task is complete.
4. Time Allocation: Each task to be scheduled must be allocated some work units (person-days of effort).
a. Each task must have some start and end date that is a function of the inter-dependencies and whether work will be
conducted full time or part-time
5. Effort Validation: Ensuring that an allocated task is assigned the required amount of resources
Example: For example, consider a project that has three assigned software engineers (e.g., three person-days are available per day of assigned effort). On a given day, seven concurrent tasks must be accomplished. Each task requires 0.50 person-days of effort, so 7 × 0.50 = 3.5 person-days have been allocated against only 3 person-days available: more effort has been allocated than there are people to do the work.
6. Defined Responsibilities: Every task that is scheduled should be assigned to a specific team member
7. Defined Outcomes: Every task that is scheduled should have a defined outcome.
For software projects, the outcome is normally a work product (e.g., the design of a component) or a part of a work
product. Work products are often combined in deliverables
8. Defined milestones: Every task or group of tasks should be associated with a project milestone.
a. A milestone is accomplished when one or more work products has been reviewed for quality and has been
approved.
💡 Project tracking involves monitoring and updating the project's progress against the established schedule. It helps
project managers and team members ensure that the project stays on track, identify and address issues or delays
promptly, and make informed decisions to keep the project moving forward.
Conducting periodic project status meetings in which each team member reports progress and problems
Evaluating the results of all reviews conducted throughout the software engineering process
Determining whether formal project milestones have been accomplished by the scheduled date
Comparing the actual start date to the planned start date for each project task listed in the resource table
Meeting informally with practitioners to obtain their subjective assessment of progress to date and problems on the
horizon
Characteristics: LOC-based models estimate project effort and cost based on the size of the code. The size is
measured in lines of code, and the assumption is that there is a linear relationship between the amount of code
and the effort required.
Advantages: Simple and intuitive, especially for projects where code size is a significant factor.
2. Direct Measure:
Measurement: LOC is a direct measure, providing a tangible and concrete metric for project size.
Drawbacks: Fails to capture differences in complexity, programming languages, and development practices. Can
be influenced by coding styles and doesn't account for differences in productivity among developers.
4. Example Model:
Formula: E = a × (KLOC)^b, where E is effort, KLOC is the size in thousands of lines of code, and a and b are constants.
Characteristics: FP-based models measure software size based on the functionality it delivers to users,
considering inputs, outputs, inquiries, internal logical files, and external interface files.
Advantages: Reflects the software's functionality, making it more language- and implementation-independent.
2. Indirect Measure:
Measurement: FP is an indirect measure, providing a size metric that incorporates multiple factors such as
complexity, functionality, and user interactions.
Advantages: Better captures the overall value delivered by the software, accounting for differences in design
and implementation.
3. Flexibility:
Advantages: Provides flexibility by allowing different types of projects to be measured using the same metric,
facilitating comparisons and benchmarking.
4. Formula: FP = UFP × VAF; effort and cost are then estimated from the FP count using historical productivity data (e.g., FP per person-month).
While Lines of Code (LOC) is a common metric for measuring software size, it has several limitations and issues that can
impact the accuracy and reliability of its application. Some of the key issues with using LOC as a metric for software size
include:
1. Language Dependence:
LOC is highly dependent on the programming language used. Different languages have different syntax and
conventions, which can lead to variations in the number of lines needed to express the same functionality. This
makes LOC less comparable across projects using different languages.
2. Coding Styles:
Coding styles and practices can influence the number of lines of code. Two developers implementing the same
functionality may produce different LOC counts based on their coding styles, formatting preferences, and use of
code comments.
3. Code Duplication:
LOC does not differentiate between unique and duplicated code. In cases where code is copied and pasted, LOC may overstate the actual size of the software, as duplicated lines are counted multiple times.
4. Variability in Complexity:
LOC does not capture the inherent complexity of the code or the problem being solved. Two pieces of code with the
same LOC count may have vastly different levels of complexity, making it an inadequate measure of the software's
intricacy.
5. Non-functional Code:
LOC does not distinguish between functional code (code that directly contributes to the software's functionality) and
non-functional code (comments, whitespace, boilerplate code). This can lead to inaccurate assessments of the effort
required for development.
6. Code Efficiency:
Focusing solely on LOC does not account for code efficiency or performance. More efficient and optimized code may
have fewer lines but achieve the same functionality as less optimized and more verbose code.
7. Evolution of Code:
Over time, software undergoes changes, updates, and optimizations. The evolution of code may result in
modifications that do not significantly impact the LOC count but are crucial for maintaining and improving the
software.
8. Ignorance of Functionality:
LOC does not directly measure the functionality delivered by the software. A small change in functionality may result
in a disproportionately large change in LOC, or vice versa, making it challenging to assess the actual impact on the
software.
9. Non-code Artifacts:
LOC is primarily designed for measuring code size and may not be suitable for assessing non-code artifacts such as documentation, configuration files, or data definitions.
10. Estimation Difficulty:
Estimating the number of lines of code accurately before or during the early stages of development is challenging. Initial estimations may not account for the full complexity of the project.
Chapter 4 ✅
💡 Design principles are fundamental concepts and guidelines that guide the process of creating effective and
efficient designs, whether in the fields of software engineering, architecture, industrial design, or other disciplines
💡 Software design encompasses the set of principles, concepts, and practices that lead to the development of a
high-quality system or product.
Module: Separate and addressable components that together make up the software.
Monolithic software is hard to track, so dividing a single piece of software into a number of modules has become common practice.
Increasing the number of modules reduces the effort needed for each individual module, but increases the effort required to integrate the modules, so there is an optimal range for the number of modules.
Modular design involves breaking down a system into smaller, independent, and interchangeable modules or components.
The benefits of adopting a modular design approach include:
1. Ease of Maintenance:
Modules can be developed, tested, and maintained independently. Changes or updates to one module are less likely
to impact other modules, making maintenance more straightforward.
2. Reusability:
Modular components can be reused in different parts of the system or even in other projects, promoting a more
efficient development process.
3. Scalability:
The system can be easily scaled by adding or replacing modules without affecting the entire system. This facilitates
both horizontal and vertical scalability.
4. Parallel Development:
Different teams or developers can work on different modules simultaneously, speeding up the development process
and reducing time-to-market.
5. Easier Testing and Debugging:
Isolating modules makes it easier to identify and fix bugs. Additionally, testing can be performed on individual modules, leading to more effective and focused testing efforts.
6. Enhanced Collaboration:
Modular design facilitates collaboration among teams or developers, as they can work on separate modules without
interfering with each other's work.
7. Flexibility:
Changes or updates to one module do not necessarily impact the entire system. This flexibility allows for easier adaptation to evolving requirements.
8. Encapsulation:
Modules encapsulate their internal details, exposing only the necessary interfaces to the rest of the system. This
helps in hiding implementation details and reducing dependencies.
9. Maintainability:
Modular design contributes to the overall maintainability of a system by providing a clear structure, reducing
complexity, and enabling easier updates or modifications.
A cohesive module performs only one task in the software procedure with little interaction with other modules.
Types of cohesion
Coincidental cohesion: A module in which the set of tasks are only loosely related to each other is coincidentally cohesive.
Logical cohesion: A module that performs tasks that are logically related to each other is logically cohesive.
Procedural cohesion: When the processing elements of a module are related to one another and must be executed in a specific order, the module is procedurally cohesive.
Communicational cohesion: When the processing elements of a module share the same data, the module is communicationally cohesive.
Types of Coupling
Common Coupling: Common data or global data is shared among the modules
Content Coupling: Content Coupling occurs when one module makes use of data or control information maintained
in another module
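An illustrative, hypothetical sketch of the coupling ideas above: the first pair of functions shares a global variable (common coupling), while the second function receives the same information explicitly through a parameter (often called data coupling, the loosest form). All names are made up for illustration.

```python
# Hypothetical sketch contrasting common coupling with explicit parameter
# passing. Names are illustrative only.

# --- Common coupling: both functions read/write the same global ---
discount_rate = 0.10            # global data shared by the functions below

def price_with_global(amount: float) -> float:
    return amount * (1 - discount_rate)

def update_discount(new_rate: float) -> None:
    global discount_rate        # any change here silently affects price_with_global
    discount_rate = new_rate

# --- Looser (data) coupling: the dependency is passed in explicitly ---
def price_with_param(amount: float, rate: float) -> float:
    return amount * (1 - rate)

print(price_with_global(100))          # 90.0, depends on hidden global state
update_discount(0.20)
print(price_with_global(100))          # 80.0, changed by another module
print(price_with_param(100, 0.10))     # 90.0, behaviour is explicit and testable
```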
Modularity:
Modularity is a design concept that involves breaking down a complex system into smaller, independent, and
interchangeable modules or components. These modules encapsulate specific functionalities, have well-defined interfaces,
and can operate independently or in conjunction with other modules. The goal of modularity is to create a system that is
easier to understand, develop, test, maintain, and scale.
Advantages of Modularity:
1. Ease of Understanding:
Breaking a system into modular components makes it easier to understand, as developers can focus on one module
at a time without being overwhelmed by the entire system's complexity.
2. Ease of Development:
Different teams or developers can work on different modules simultaneously, speeding up the development process.
Each module can be developed and tested independently.
3. Reusability:
Modular components can be reused in different parts of the system or even in other projects. This promotes
efficiency by leveraging existing, well-tested modules.
4. Scalability:
Systems designed with modularity in mind are more scalable. New features or capabilities can be added by
integrating new modules, and the system can be scaled horizontally or vertically.
5. Flexibility:
Changes or updates to one module do not necessarily impact the entire system. This flexibility allows for easier adaptation to evolving requirements without affecting the entire codebase.
Disadvantages of Modularity:
1. Design and Documentation Overhead:
The need for well-defined interfaces between modules introduces an overhead in terms of design and documentation.
2. Coordination Challenges:
Coordinating interactions between modules can be challenging, especially in large and complex systems. Proper
communication and synchronization are crucial.
3. Dependency Management:
Managing dependencies between modules can be complex. Changes in one module may affect other modules,
requiring careful dependency management.
4. Testing Complexity:
While modular design facilitates independent testing of modules, testing the entire system's interactions and
integration can be complex and time-consuming.
5. Memory Overhead:
Modular systems may consume more memory due to the need to load multiple modules into memory, especially in cases where modules are not loaded on demand.
Coupling vs. Cohesion: Cohesion measures how strongly the elements within a single module belong together (high cohesion is desirable), while coupling measures the degree of interdependence between modules (low coupling is desirable).
Software designing is the process of translating the analysis model into the design model
1. Architecture design defines the relationship between major structural elements of the software. The architectural styles
and design patterns can be used to achieve the requirements defined for the system
2. Interface design: Describes how software communicates with the system. These systems interact with each other as
well as with the humans who operate them. Thus interface design represents the flow of information and specific type of
behavior.
3. Component Level Design: The component-level design stage provides a dedicated purpose for each component and
describes how the interface, algorithms, data structure, and communication methods of each component will function to
carry out a process.
a. Transforms structural elements of software architecture into procedural description of software module. The
information used by component design is obtained from class based model, flow based model and behavioral
model
Component-Level Design
Built from: the class-based, flow-oriented, and behavioral elements of the requirements model.
House Analogy
A set of detailed drawings (and specifications) for each room in a house. These drawings
depict wiring and plumbing within each room, the location of electrical receptacles and wall
switches, faucets, sinks, showers, tubs, drains, cabinets, and closets.
Component-level design for software fully describes the internal detail for each software component
The component level design defines data structures for all local data objects and algorithmic detail for processing that
occurs within a component and an interface that allows access to all component operations
The design details of a component can be modeled at many different levels of abstraction.
Detailed procedural flow for a component can be represented using either pseudocode (a programming language-like
representation) or some other diagrammatic form (e.g., flowchart or box diagram)
Algorithmic structure follows the rules established for structured programming (i.e., a set of constrained procedural
constructs).
Data structures, selected based on the nature of the data objects to be processed, are usually modeled using
pseudocode or the programming language to be used for implementation.
Interface Design
House Analogy
analogous to a set of detailed drawings (and specifications) for the doors, windows, and
external utilities of a house. These drawings depict the size and shape of doors and windows,
the manner in which they operate, the way in which utility connections (e.g., water, electrical,
gas, telephone) come into the house and are distributed among the rooms depicted in the floor
plan.
1. The user interface (UI): The design of the interface through which human users interact with the software.
2. External interfaces to other systems, devices, or networks: Requires definitive information about the entity to which information is sent or received.
3. Internal interfaces between various design components: Describe how the components that populate the architecture communicate and collaborate with one another.
These interface design elements allow the software to communicate externally and enable internal communication
and collaboration among the components that populate the software architecture.
💡 Component: A modular, deployable, and replaceable part of a system that encapsulates implementation and
exposes a set of interfaces.
UML (Unified Modeling Language) is a standardized modeling language used in software engineering to visually represent
and document software systems. UML provides a set of diagrams and notation for representing various aspects of software
design, including class diagrams, sequence diagrams, and component diagrams.
https://miro.com/diagramming/what-is-a-uml-component-diagram/
1. Components:
In UML component diagrams, a component is represented by a rectangular box with the component's name written
inside. The box typically includes the component's provided and required interfaces.
2. Interfaces:
Interfaces are elements of components or classes that deliver function to other components or classes. Provided interfaces represent services that a component offers to its environment, while required interfaces represent services or functionalities that a component needs from its environment.
3. Dependencies:
Dependencies between components are represented by arrows pointing from the dependent component to the
component on which it depends. Dependencies can be used to indicate relationships such as usage, association, or
generalization.
4. Connectors:
Connectors are used to show the flow of information between components, dependencies, and communication channels.
For example, a connector might represent the flow of user login information from the authentication component to
the data management component.
5. Ports:
Ports are depicted as small squares on the edges of the component box and are connected to interfaces.
Architecture Design
House Analogy
floor plan of a house. The floor plan depicts the overall layout of the rooms; their size, shape, and
relationship to one another; and the doors and windows that allow movement into and out of the
rooms. The floor plan gives us an overall view of the house
💡 the process of defining a collection of hardware and software components and their interfaces to establish the
framework for the development of a computer system
The architecture design element is depicted as a set of interconnected subsystems, often derived from analysis packages
within the requirements model.
Built from: analysis packages and other elements of the requirements model, together with the architectural styles and patterns available for the problem domain.
1. System Structure:
Define the overall structure of the system, including the organization of components, modules, layers, and
subsystems. This involves deciding how the system will be decomposed into manageable and cohesive parts.
2. Component Identification:
Identify the major components of the system, considering their responsibilities, functionalities, and interactions.
Components may include user interfaces, application logic, databases, external interfaces, and more.
3. Data Design:
Design the data architecture, including data models, databases, and data flow. Specify how data will be stored,
retrieved, processed, and shared among different components.
4. Interface Design:
Define the interfaces between system components, specifying how they will communicate and interact. This involves
determining the methods, protocols, and data formats used for communication.
5. Architectural Patterns:
Choose appropriate architectural patterns or styles that align with the system's requirements. Common architectural
patterns include client-server, layered architecture, microservices, and event-driven architecture.
6. Scalability and Performance:
Consider scalability and performance requirements during architectural design. Decide how the system will handle growing user loads, data volumes, and performance demands.
7. Security Considerations:
Incorporate security measures into the architectural design, addressing issues such as data protection, access
control, authentication, and encryption.
8. Reliability and Resilience:
Design the system to be reliable and resilient. Consider mechanisms for error handling, fault tolerance, and recovery to ensure the system's availability and robustness.
9. Technology Selection:
Choose appropriate technologies, frameworks, and tools that align with the architectural decisions. Consider factors
such as development platforms, databases, communication protocols, and third-party integrations.
10. Maintainability and Extensibility:
Plan for the long-term maintainability and extensibility of the system. Design components and interfaces in a way that facilitates future updates, enhancements, and modifications.
💡 User interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that
clearly communicate to the user what's important.
1. Place the user in control:
a. Define the interaction modes in a way that does not force the user into unnecessary or undesired actions: the user should be able to enter and exit a mode with little or no effort.
b. Provide for flexible interaction: different people use different interaction mechanisms (keyboard commands, mouse, touch screen, etc.), so all appropriate interaction mechanisms should be provided.
2. Reduce the user's memory load:
a. Reduce demand on short-term memory: when users are involved in complex tasks, the demand on short-term memory is significant, so the interface should be designed to reduce the need to remember previous actions, inputs, and results.
b. Establish meaningful defaults: an initial set of defaults should be provided for the average user, while still allowing the user to change or add settings as needed.
3. Make the interface consistent:
a. Allow the user to put the current task into a meaningful context: many interfaces have dozens of screens, so provide consistent indicators so that the user knows the context of the work being done, which page they navigated from, and where they can navigate to next.
b. Maintain consistency across a family of applications: a related set of applications should all follow the same design rules so that consistency is maintained among them.
c. If past interactive models have created user expectations, do not make changes unless there is a compelling reason.
💡 User interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that
clearly communicate to the user what's important.
1. User-Centered Design:
UI design begins with a deep understanding of the target users and their needs. Designers use user-centered design
principles, involving users in the design process through research, personas, and usability testing.
2. Visual Design:
Visual design involves the use of color, typography, imagery, and layout to create an aesthetically pleasing and
cohesive interface. Visual elements should align with the brand identity and contribute to a positive emotional
response from users.
3. Information Architecture:
Information architecture organizes and structures content in a way that is logical and easy to navigate. This includes
defining the hierarchy of information, creating clear navigation paths, and ensuring content discoverability.
4. Interaction Design:
Interaction design focuses on defining how users will interact with the interface. It includes designing intuitive
navigation, clear calls to action, and interactive elements that guide users through the intended workflow.
5. Usability:
Usability is a critical factor in UI design. Designers strive to create interfaces that are easy to learn, efficient to use,
and error-tolerant. Usability testing helps identify areas for improvement and ensures a positive user experience.
6. Consistency:
Consistency in design elements, terminology, and layout enhances user predictability and comprehension. A
consistent UI promotes a sense of familiarity, making it easier for users to navigate and understand the system.
1. Enhanced User Experience:
A well-designed UI contributes to a positive and enjoyable user experience, increasing user satisfaction and engagement.
2. Increased Usability:
Usable interfaces reduce the learning curve for users, making it easier for them to accomplish tasks and navigate
the system.
3. Brand Image:
UI design plays a role in shaping the brand image. A visually appealing and consistent interface reinforces the brand
identity and professionalism.
4. Efficient Workflows:
Thoughtful UI design streamlines workflows, helping users accomplish tasks efficiently and without unnecessary
friction.
2. What is White Box Testing ,Explain with Diagram how white box testing can be performed?
3. What is Black Box Testing? Explain with Diagram how black box testing can be performed?
💡 Instead of answering the repeated questions, I'm just adding the notes per topic here. We can then adjust how much we write.
Answered Questions
Explain different Test Characteristics
A good test has a high probability of finding an error.
A good test is not redundant; it should not simply duplicate the purpose of another test.
A good test should be "best of breed": of a group of similar tests, use the one most likely to uncover a whole class of errors.
A good test should be neither too simple nor too complex; combining several tests into one can mask errors.
Testing Objectives
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
Testing Principles
1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle can be applied to software testing: 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
4. Testing should begin "in the small" and progress toward testing "in the large".
💡 A critical element of software quality assurance and represents the ultimate review of software, design and coding.
Software is tested to uncover errors in it, that were made when the software was being designed or constructed.
Unit Testing
Focuses on verification effort on the smallest unit of software design - the software component or module.
Using the component-level design description as a guide, important control paths are tested to uncover errors within
the boundary of the module.
Because a component is not a stand-alone program, driver and/or stub software must often be developed for each
unit test.
"driver code" typically refers to the code that is responsible for initializing and invoking the units of code (such as
functions or methods) being tested.
Stubs replace modules that are invoked by the component to be tested. A stub uses the subordinate module's
interface, prints verification of entry and returns control to the module undergoing testing.
Integration Testing
A systematic technique of conducting tests to uncover errors associated with interfacing components
The objective is to take unit-tested components and build a program structure that has been dictated by design
Incremental integration: The program is constructed and tested in small increments, where errors are easier to isolate
and correct.
Interfaces are more likely to be tested completely, and a systematic test approach may be applied.
Validation Testing
Software Validation Tests are done to confirm conformity with requirements
After each validation test case has been conducted, one of two possible conditions exists: (1) the function or performance characteristic conforms to the specification and is accepted, or (2) a deviation from the specification is uncovered and a deficiency list is created.
Errors at this point often mean that the scheduled delivery will be delayed.
Alpha Testing
The alpha test is conducted at the developer’s site by a representative group of end users.
The software is used in a natural setting with the developer “looking over the shoulder” of the users and recording
errors and usage problems.
Beta Testing
Conducted at one or more end-user sites.
The beta test is a “live” application of the software in an environment that cannot be controlled by the developer.
White-box test design techniques, also known as structural or glass-box testing techniques, involve creating test cases
based on an understanding of the internal logic, code structure, and paths of the software application. These techniques aim
to ensure that various code segments, conditions, and branches are thoroughly tested.
1. Statement Coverage:
Objective: Ensure that each statement in the code is executed at least once during testing.
Approach: Design test cases to cover individual statements in the source code.
Formula: Statement Coverage = (number of statements executed / total number of statements) × 100%
2. Branch Coverage:
Objective: Ensure that all branches (decision points) in the code are taken at least once during testing.
Approach: Design test cases to cover all possible branches, including both true and false conditions.
Formula: Branch Coverage = (number of branches exercised / total number of branches) × 100%
3. Condition Coverage:
Objective: Ensure that each boolean condition in the code evaluates to both true and false during testing.
Approach: Design test cases to exercise each condition in both true and false states.
Formula: Condition Coverage = (number of condition outcomes exercised / total number of condition outcomes) × 100%
Good Explanation 👇
```python
def example_function(a, b, c):
    if a > 0:
        result = "Positive A"
    elif b > 0:
        result = "Positive B"
    else:
        result = "Non-Positive"
    return result
```
Branches: the if/elif/else structure produces three branches ("Positive A", "Positive B", "Non-Positive").
Conditions:
1. a > 0
2. b > 0
To achieve 100% branch coverage, you need to ensure that every branch (the if, elif, and else outcomes) is executed at least once. Achieving 100% condition coverage additionally requires that each individual condition is evaluated in both its true and false states.
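A hypothetical set of test cases for example_function above: the three inputs chosen here exercise every branch and also evaluate each condition in both true and false states. The function is repeated so the snippet runs on its own.

```python
# Hypothetical test cases for example_function above.
# Three inputs cover every branch (if / elif / else) and evaluate each
# condition (a > 0, b > 0) in both true and false states.

def example_function(a, b, c):
    if a > 0:
        result = "Positive A"
    elif b > 0:
        result = "Positive B"
    else:
        result = "Non-Positive"
    return result

test_cases = [
    ((1, 0, 0), "Positive A"),     # a > 0 is True
    ((0, 1, 0), "Positive B"),     # a > 0 False, b > 0 True
    ((0, 0, 0), "Non-Positive"),   # a > 0 False, b > 0 False
]

for args, expected in test_cases:
    actual = example_function(*args)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{args} -> {actual} (expected {expected}): {status}")
```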
4. Loop Coverage:
Objective: Ensure that loops are adequately tested, including zero iterations, single iterations, and multiple
iterations.
Approach: Design test cases that exercise loops under different scenarios, such as empty loops, loops with a single
iteration, and loops with multiple iterations.
5. Path Coverage:
Objective: Ensure that all possible paths through the code are tested.
Approach: Design test cases to cover different paths, considering all possible combinations of branches and
conditions.
Challenge: Path coverage can be complex for large programs with numerous paths, and achieving 100% path
coverage may be impractical.
6. Data Flow Testing:
Objective: Ensure that variables are defined and used correctly throughout the program.
Approach: Design test cases to trace the flow of data through the program, including variable assignments and references.
Focus: Identify instances of uninitialized variables, unused variables, and potential data flow issues.
7. Boundary Value Analysis:
Objective: Ensure correct behavior at the edges of input domains, where errors tend to cluster.
Approach: Design test cases using values at the edges or boundaries of valid input ranges.
Example: For an input range of 1 to 100, test with values like 1, 100, 2, 99, and values just outside the specified range.
8. Mutation Testing:
Objective: Evaluate the effectiveness of test cases by introducing intentional faults (mutations) into the code and checking if the test cases detect these faults.
Approach: Introduce mutations into the code, such as changing operators, modifying constants, or deleting statements, and observe if the test cases can identify the changes.
Each of these white-box test design techniques has its strengths and limitations. Testers often use a combination of these
techniques to achieve comprehensive coverage and ensure the effectiveness of their testing efforts. The choice of technique
depends on factors such as the nature of the software, testing objectives, and resource constraints.
Proper test planning: Designing test cases to cover the entire code. Execute rinse-repeat until error-free software is
reached. Also, the results are communicated.
Basis Path
The basis path method enables the test-case designer to derive a logical complexity measure of a procedural design (the cyclomatic complexity, V(G) = E − N + 2 for a flow graph with E edges and N nodes) and use this measure as a guide for defining a basis set of execution paths.
Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
1. Condition Testing: Condition testing is a test-case design method that ensures the logical conditions and decision statements in a program are free from errors. The errors present in logical conditions can be incorrect Boolean operators, missing parentheses in a Boolean expression, errors in relational operators, arithmetic expressions, and so on.
2. Data Flow Testing: The data flow test method chooses the test paths of a program based on the locations of the definitions and uses of the variables in the program. The approach assumes that each statement in a program is assigned a unique statement number and that each function cannot modify its parameters or global variables.
A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the
software. Black-box testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. behavior or performance errors, and
5. initialization and termination errors.
Procedure
Black box testing is a type of testing where the tester focuses solely on the software's functionality without knowledge of
its internal code structure. The goal is to verify that the software behaves as expected based on specified requirements.
1. Input Specification:
The tester defines a set of test inputs based on the software's specifications and requirements. These inputs
represent the conditions under which the software will be tested.
2. Test Case Design:
Test cases are designed to cover various scenarios, including normal operation, boundary cases, and error conditions. Each test case specifies the input data, the expected output, and the conditions for executing the test.
3. Test Execution:
The designed test cases are executed on the software without any knowledge of its internal logic or code
structure. The tester interacts with the software through its user interface, APIs, or other specified entry points.
4. Output Evaluation:
The tester observes and evaluates the software's outputs or responses to the test inputs. This involves comparing
the actual results with the expected results specified in the test cases.
5. Defect Reporting:
If discrepancies are found between the actual and expected results, the tester reports these as defects. The
defects are documented with details such as the steps to reproduce, the observed behavior, and any other
relevant information.
6. Regression Testing:
As the software undergoes changes or updates, regression testing is performed by re-executing the black box
test cases to ensure that the modifications do not introduce new defects or impact existing functionalities.
Definition
Graph testing begins by creating a graph of important objects and their relationships, then
devising a series of tests that will cover the graph so that each object and relationship is
exercised and errors are uncovered.
Equivalence Testing
Definition
Equivalence partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all
character data) that might otherwise require many test cases to be executed before the
general error is observed.
Test-case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition.
An equivalence class represents a set of valid or invalid states for input conditions. Typically, an input condition is either a
specific numeric value, a range of values, a set of related values, or a Boolean condition
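A hypothetical sketch of equivalence partitioning for a field that must accept an integer between 1 and 100: one representative value is drawn from the valid class and from each invalid class (below range, above range, non-numeric) rather than testing every possible input. The accept_quantity function and the 1 to 100 range are assumptions for illustration.

```python
# Hypothetical equivalence-partitioning sketch for an input field that
# accepts integers from 1 to 100 (black-box: only the spec is used).

def accept_quantity(value) -> bool:
    """System under test: returns True only for integers in [1, 100]."""
    return isinstance(value, int) and 1 <= value <= 100

# One representative test case per equivalence class.
partitions = {
    "valid (1-100)":        (50, True),
    "invalid: below range": (0, False),
    "invalid: above range": (101, False),
    "invalid: non-numeric": ("ten", False),
}

for name, (value, expected) in partitions.items():
    result = accept_quantity(value)
    status = "PASS" if result == expected else "FAIL"
    print(f"{name:24} input={value!r:7} -> {result} ({status})")
```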
| Aspect | Black Box Testing | White Box Testing |
| --- | --- | --- |
| Knowledge of implementation | No knowledge of implementation is needed. | Knowledge of implementation is required. |
| Knowledge of code | Implementation of code is not needed for black box testing. | Code implementation is necessary for white box testing. |
| Document needed | This testing can be initiated based on the requirement specifications document. | This type of testing is started after a detailed design document. |

Honestly, useless points after this:

| | Black Box Testing | White Box Testing |
| --- | --- | --- |
| | It is a functional test of the software. | It is a structural test of the software. |
| | It is the behavior testing of the software. | It is the logic testing of the software. |
| | It is applicable to the higher levels of software testing. | It is generally applicable to the lower levels of software testing. |
| | Can be done by trial and error ways and methods. | Data domains along with inner or internal boundaries can be better tested. |
Unit Testing
Definition
Focuses on verification effort on the smallest unit of software design - the software component or
module.
Using the component-level design description as a guide, important control paths are tested to uncover errors within the
boundary of the module.
The test focuses on the internal processing logic and data structures of a single component.
Developing tests before the code for a component is made is often done to ensure that you write code that passes all the
tests.
When a single component has only one function, it allows for test cases to be reduced and increase the ease in
uncovering errors.
All independent paths in the control structure are tested to ensure that all statements are executed at least once
All boundary conditions are tested as well to ensure that the module operates properly at boundaries established.
Data flow across the module interface is checked first; if data does not enter and exit the module properly, all other tests are moot.
Boundary conditions are things like the last element of a data structure or the final iteration of a loop; this is where most errors occur (experience 🥲).
Procedures
Because a component is not a stand-alone program, driver and/or stub software must often be developed for each unit
test.
Stubs replace modules that are invoked by the component to be tested. A stub uses the subordinate module's interface,
prints verification of entry and returns control to the module undergoing testing
Driver code and stubs both are overhead (code that is not shipped in the final product)
Sometimes testing may need to be postponed until the integration step is completely carried out
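A minimal hypothetical sketch of the driver/stub idea described above: the component under test normally calls a subordinate tax-lookup module, which is replaced here by a stub, while the driver supplies inputs and checks the result. All names are illustrative.

```python
# Hypothetical driver/stub sketch. The real subordinate module (a tax-rate
# lookup) is replaced by a stub; the driver feeds inputs to the component
# under test and reports the outcome. Neither ships with the product.

def compute_total(amount: float, tax_lookup) -> float:
    """Component under test: adds tax obtained from a subordinate module."""
    return amount * (1 + tax_lookup(amount))

def tax_lookup_stub(amount: float) -> float:
    """Stub: prints verification of entry and returns a fixed rate."""
    print(f"stub called with amount={amount}")
    return 0.10

def driver() -> None:
    """Driver: initializes test data, invokes the unit, checks the result."""
    expected = 110.0
    actual = compute_total(100.0, tax_lookup_stub)
    print("PASS" if abs(actual - expected) < 1e-9 else f"FAIL: {actual}")

driver()
```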
Example
Consider a software application for a calculator. In unit testing, you would test each operation of the calculator (e.g., addition,
subtraction, multiplication) as an individual unit. For instance, you would check that the addition function produces the correct
result for various input combinations. Each operation is tested in isolation to ensure it works as expected.
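A small unittest sketch matching the calculator example: the add operation (a hypothetical function defined inline) is tested in isolation with several input combinations.

```python
# Unit-test sketch for the calculator example. The add() function is a
# hypothetical stand-in for the unit under test.
import unittest

def add(a, b):
    return a + b

class TestAddition(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_zero(self):
        self.assertEqual(add(0, 7), 7)

if __name__ == "__main__":
    unittest.main()
```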
Integration Testing
💡 A systematic technique of conducting tests to uncover errors associated with interfacing components
The objective is to take unit-tested components and build a program structure that has been dictated by design
Incremental integration: The program is constructed and tested in small increments, where errors are easier to isolate and
correct. Interfaces are more likely to be tested completely, and a systematic test approach may be applied.
Procedure
There are various types of integration testing:
1. Top Down Integration
a. Starts testing from the highest level of the software hierarchy and progressively integrates lower-level modules.
2. Bottom Up Integration
a. Begins testing from the lower-level modules and incrementally integrates higher-level components.
c. Disadvantage: Program does not exist as an entity until the last program is added
3. Regression Testing
a. Involves rerunning existing test cases to ensure that new changes or additions to the codebase do not negatively
impact the existing functionalities.
4. Smoke Testing
a. A preliminary, high-level test that checks if the basic functionalities of a software build are working correctly, providing a quick assessment of the build's stability.
Tester should identify and test the critical modules as much as possible.
Example
Imagine an e-commerce website with separate modules for user authentication and order processing. In integration testing,
you would validate that when a user places an order, the order processing module interacts correctly with the user
authentication module. This ensures a seamless flow of data and functionality between these two components.
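A hypothetical sketch of the e-commerce scenario above as an integration test: simplified stand-ins for the authentication and order-processing modules are exercised together to check that data flows correctly across their interface.

```python
# Hypothetical integration-test sketch: simplified authentication and
# order-processing modules are tested together, not in isolation.

# Module 1: user authentication
_sessions = {}

def login(username: str, password: str):
    if password == "secret":            # toy check for illustration
        token = f"token-{username}"
        _sessions[token] = username
        return token
    return None

# Module 2: order processing (depends on the authentication module)
def place_order(token: str, item: str) -> str:
    user = _sessions.get(token)
    if user is None:
        raise PermissionError("invalid session token")
    return f"order for {item} placed by {user}"

# Integration test: exercise the two modules across their interface
token = login("alice", "secret")
assert token is not None
print(place_order(token, "book"))        # uses data produced by login()

try:
    place_order("bad-token", "book")     # interface must reject bad tokens
except PermissionError as e:
    print(f"rejected as expected: {e}")
```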
Validation Testing
Definition
Software Validation Tests are done to confirm conformity with requirements
After each validation test case has been conducted, one of two possible conditions exists: (1) the function or performance characteristic conforms to the specification and is accepted, or (2) a deviation from the specification is uncovered and a deficiency list is created.
Errors at this point often mean that the scheduled delivery will be delayed.
Configuration Review
Ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary
detail to bolster the support activities
Alpha Testing
The alpha test is conducted at the developer’s site by a representative group of end users. The software is used in a natural
setting with the developer “looking over the shoulder” of the users and recording errors and usage problems.
Beta Testing
Conducted at one or more end-user sites.
The beta test is a “live” application of the software in an environment that cannot be controlled by the developer.
Example
Consider a social media application where users can post text updates. In validation testing, you would check if the
application correctly validates user input. For instance, you would ensure that the application rejects posts that exceed a
character limit and prompts users to provide required information, such as a post title.
Difference
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly
performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data
recovery, and restart are evaluated for correctness
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper
penetration.
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
Performance testing is designed to test the run-time performance of software within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process
Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to
operate. In addition, deployment testing examines all installation procedures and specialized installation software (e.g.,
“installers”) that will be used by customers, and all documentation that will be used to introduce the software to end users
Example: Imagine a complex enterprise resource planning (ERP) system used by a manufacturing company. In system
testing, you would test end-to-end scenarios, such as creating a new order, processing it through the system, updating
inventory, and generating financial reports. This type of testing ensures that the entire ERP system functions seamlessly
as a cohesive unit.
These conceptual examples help illustrate the different levels and focuses of testing in the software development
lifecycle. Unit testing ensures the correctness of individual components, integration testing verifies interactions between
components, validation testing checks adherence to requirements, and system testing evaluates the overall functionality
of the complete system.
| Aspect | Unit Testing | Integration Testing |
| --- | --- | --- |
| Definition | In unit testing, each module of the software is tested separately. | In integration testing, all modules of the software are tested combined. |
| Tester knowledge | In unit testing, the tester knows the internal design of the software. | In integration testing, the tester doesn't know the internal design of the software. |
| AKA | Unit testing is white box testing. | Integration testing is black box testing. |
| Scope | Unit testing observes only the functionality of the individual units. | Error detection takes place when modules are integrated to create an overall system. |
| External dependencies | The proper working of your code with external dependencies is not ensured by unit testing. | The proper working of your code with external dependencies is ensured by integration testing. |
| Speed of execution | Fast execution as compared to integration testing. | Slower because of the integration of modules. |
| Exposure to code | Unit testing results in in-depth exposure to the code. | Integration testing provides less detailed visibility of the code. |
Chapter 6
Explain concept of Risk Analysis & Management
Risk: Uncertainty that may occur due to choices made in the past and that can cause heavy losses.
Risk Management: The process of making decisions based on an evaluation of the factors that threaten the business.
Risk Analysis
1. Risk Identification:
Identify potential risks by considering all aspects of the project, including technical, organizational, and external
factors. This can be done through brainstorming, historical data analysis, and expert interviews.
2. Risk Assessment:
Assess the probability of each identified risk occurring and estimate the potential impact on the project. Risks are
often assessed in terms of likelihood, severity, and the ability to detect them.
3. Risk Prioritization:
Prioritize risks based on their significance. Risks with high impact and high probability are often given priority, but
other factors such as the project phase and available resources may also influence prioritization.
4. Risk Documentation:
Document identified risks, including their descriptions, potential impacts, likelihood, and proposed mitigation or
contingency plans. This documentation serves as a reference throughout the project.
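The assessment and prioritization steps above are often quantified as risk exposure, RE = probability × cost impact. A small sketch with hypothetical figures is shown below; the risk descriptions and numbers are made up for illustration.

```python
# Hypothetical sketch: quantify risk assessment as exposure = probability * cost,
# then prioritize by exposure (all figures are made up for illustration).

risks = [
    # (description,                          probability, cost impact in $)
    ("Key developer leaves mid-project",       0.30,  40_000),
    ("Third-party API changes its contract",   0.50,  15_000),
    ("Requirements grow beyond estimate",      0.60,  25_000),
]

assessed = [
    {"risk": name, "probability": p, "cost": c, "exposure": p * c}
    for name, p, c in risks
]

# Highest exposure first: these risks get mitigation attention first.
for entry in sorted(assessed, key=lambda r: r["exposure"], reverse=True):
    print(f"{entry['risk']:40} RE = ${entry['exposure']:>9,.0f}")
```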
Risk Management
1. Risk Mitigation:
Develop and implement strategies to reduce the probability or impact of identified risks. This may involve taking
preventive actions, improving processes, or incorporating additional resources.
2. Risk Avoidance:
In some cases, it may be possible to avoid certain risks altogether. This could involve changing project plans,
technologies, or methodologies to eliminate the possibility of a particular risk occurring.
3. Risk Transfer:
Transfer the impact of a risk to a third party, often through insurance or outsourcing. This is a common strategy for risks that are beyond the control of the project team.
4. Risk Acceptance:
Accepting certain risks without taking specific actions to mitigate them. This is a valid strategy when the potential
impact is low, the cost of mitigation is too high, or when there are no practical mitigation measures available.
5. Contingency Planning:
Develop contingency plans to address potential risks if they materialize. Contingency plans outline the steps to be
taken if a risk event occurs, helping the project team respond quickly and effectively.
6. Continuous Monitoring:
Regularly monitor the project environment for new risks, changes in existing risks, or the effectiveness of
implemented risk management strategies. Adjust the risk management plan as needed throughout the project
lifecycle.
RMMM
Risk mitigation, monitoring and management
Risk Mitigation
Preventing the risks in the first place.
Objective: Developing strategies and action plans to reduce the impact of identified risks.
Methods: Proactive measures to reduce the likelihood of occurrence (preventive) and responsive actions to address
consequences (contingency).
Some of the steps that can be taken to ensure risk mitigation include
2. Find and eliminate all the causes that can create risk before the project starts
Risk Monitoring
Objective: Continuously tracking and reassessing identified risks throughout the project.
Methods: Regular status meetings, progress reports, and ongoing risk analysis.
Output: Updated risk registers and documentation, adjustments to mitigation plans based on changing circumstances
1. The degree to which the team performs with the spirit of team work
2. To ensure the steps defined to avoid the risks are implemented well
3. To gather the information which can be useful for analyzing the risk.
Risk Management
b. provide checklists or other “interview” techniques that assist in identifying project specific risks,
d. support risk mitigation strategies, and generate many different risk-related reports
Document
The RMMM plan documents all work performed as part of risk analysis and is used by the project manager as part of the
overall project plan.
Some software teams do not develop a formal RMMM document. Rather, each risk is documented individually using a
risk information sheet
In most cases, the RIS is maintained using a database system so that creation and information entry, priority ordering,
searches, and other analysis may be accomplished easily
Once RMMM has been documented and the project has begun, risk mitigation and monitoring steps commence.
The software configuration management process defines a series of tasks that have four primary objectives:
1. to identify all items that collectively define the software configuration,
2. to manage changes to one or more of these items,
3. to facilitate the construction of different versions of an application, and
4. to ensure that software quality is maintained as the configuration evolves over time.
Version Control
Version control combines procedures and tools to manage different versions of configuration objects that are created during
the software process
1. A repository (project database) that stores all relevant configuration objects.
2. A version management capability that stores all versions of a configuration object (or enables construction of any version from differences in past versions).
3. A make facility that enables you to collect all relevant configuration objects and construct a specific version of the
software
4. Version control and change control systems often implement an issues-tracking (also called bug-tracking) capability that enables the team to record and track the status of all outstanding issues associated with each configuration object.
Change Control
Too much change control and we create problems. Too little, and we create other problems.
For a large software project, uncontrolled change rapidly leads to chaos. For such projects, change control combines
human procedures and automated tools to provide a mechanism for the control of change.
A change request is submitted and evaluated to assess technical merit, potential side effects, overall impact on other
configuration objects and system functions, and the projected cost of the change
The results of the evaluation are presented as a change report, which is used by a change control authority (CCA)—a
person or group that makes a final decision on the status and priority of the change.
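A hypothetical sketch of the change-control flow described above as a simple state machine: a change request moves from submitted to evaluated (producing a change report), and the change control authority (CCA) then approves or rejects it. The class, statuses, and fields are illustrative only, not part of any real tool.

```python
# Hypothetical sketch of the change-control flow described above:
# submit -> evaluate (change report) -> CCA decision (approve/reject).
# States and fields are illustrative only.
from dataclasses import dataclass, field

ALLOWED = {
    "submitted": {"evaluated"},
    "evaluated": {"approved", "rejected"},
}

@dataclass
class ChangeRequest:
    title: str
    status: str = "submitted"
    history: list = field(default_factory=list)

    def transition(self, new_status: str, note: str = "") -> None:
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.history.append(f"{self.status} -> {new_status}: {note}")
        self.status = new_status

cr = ChangeRequest("Add audit logging to login module")
cr.transition("evaluated", "change report: low risk, 2 days effort")
cr.transition("approved", "CCA: schedule for next release")
print(cr.status, cr.history)
```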