
SOFTWARE ENGINEERING

QUESTION BANK

1. What is the aim of a feasibility study?

A feasibility study is an assessment of the practicality of a proposed plan or project.


It analyzes the viability of a project to determine whether the project or venture is likely to
succeed. The study is also designed to identify potential issues and problems that could arise
while pursuing the project.

2. What is Control flow structure?

Control flow structures are mechanisms that determine the order in which statements
in a program are executed, enabling conditional execution and repetition. They allow
programs to make decisions and repeat actions, making them flexible and powerful.
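The two basic control flow structures, selection and iteration, can be sketched in Python as follows (the function name and threshold are illustrative):

```python
def classify_and_sum(numbers, threshold=10):
    """Sums only the values above `threshold` (names are illustrative)."""
    total = 0
    for n in numbers:        # iteration: repeat the body for each element
        if n > threshold:    # selection: choose a branch at run time
            total += n
    return total

print(classify_and_sum([5, 12, 20]))  # 12 + 20 = 32
```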

3. List the task regions in the Spiral model.

The spiral model's task regions, also known as phases or activities, are Planning, Risk
Analysis, Engineering, and Evaluation.

4. Define SRS.

In software development, an SRS, or Software Requirements Specification, is a document that details the functional and non-functional requirements of a software product, serving as a blueprint for its development.

5. What are the activities of Requirements Analysis and Specification phase?

The requirements analysis and specification phase involves identifying stakeholders, eliciting their needs, analyzing and documenting those needs, and then validating the requirements to ensure they are complete, consistent, and meet stakeholder expectations.

6. List down the principles of a software design.


The most common design principles in system design are:
1. Separation of Concerns
2. Encapsulation and Abstraction
3. Loose Coupling and High Cohesion
4. Scalability and Performance
5. Resilience and Fault Tolerance
6. Security and Privacy

7. What is Coupling?

In software engineering, coupling is the degree of interdependence between software modules, i.e., how much one module relies on the internals of another. Low (loose) coupling is desirable because modules can then be changed, tested, and reused independently.
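In software design, coupling describes how strongly modules depend on each other. A minimal Python sketch (all class and attribute names are invented for illustration) contrasts tight and loose coupling:

```python
class Database:
    """Hypothetical data source with an internal attribute."""
    def __init__(self):
        self.rows = [("alice", 3), ("bob", 7)]

class TightReportPrinter:
    """Tightly coupled: depends directly on Database's internal attribute."""
    def print_report(self, db):
        return [f"{name}: {count}" for name, count in db.rows]

class LooseReportPrinter:
    """Loosely coupled: depends only on a narrow interface (a callable),
    so the data source can change without touching this class."""
    def __init__(self, fetch_rows):
        self.fetch_rows = fetch_rows

    def print_report(self):
        return [f"{name}: {count}" for name, count in self.fetch_rows()]

db = Database()
printer = LooseReportPrinter(lambda: db.rows)
print(printer.print_report())  # ['alice: 3', 'bob: 7']
```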
8. What is functional independence in software design?

In software design, functional independence means a module or function performs a single, well-defined task with minimal interaction or dependencies on other modules, promoting modularity and maintainability.

9. List the different types of views supported in UML diagram?

UML supports several views of a system, including the user view (use case diagram),
structural view (class, component, package, composite structure, object diagrams), behavioral
view (activity, state machine, sequence, communication diagrams), and
implementation/deployment view (component, deployment diagrams).

10. Define validation.

Validation is the process of assessing whether the final product meets the needs and
expectations of the end-user and other stakeholders, ensuring it functions as intended in real-
world scenarios.

11. What is failure?


A failure occurs when the software deviates from its expected behavior, producing
incorrect results or failing to perform its intended function, often due to underlying defects or
faults.

12. What are the different levels of software testing?

Software testing occurs at different levels, including unit, integration, system, and
acceptance testing, each focusing on verifying different aspects of the software's functionality
and quality.
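As a small illustration of the unit level, the checks below exercise a single hypothetical function in isolation (the function and its values are invented for the example):

```python
def apply_discount(price, percent):
    """Function under test (illustrative): price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit level: test one function in isolation with typical and edge inputs.
assert apply_discount(200.0, 25) == 150.0
assert apply_discount(99.99, 0) == 99.99
print("unit tests passed")
```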

13. Why do we use a life cycle model? Explain.

SDLC is a method, approach, or process that is followed by a software development organization while developing any software. SDLC models were introduced to follow a
disciplined and systematic method while designing software. With the software
development life cycle, the process of software design is divided into small parts, which
makes the problem more understandable and easier to solve. SDLC comprises a detailed
description or step-by-step plan for designing, developing, testing, and maintaining the
software.

14. Explain the organization of the SRS document.

Software Requirement Specification (SRS) Format, as the name suggests, is a
complete specification and description of requirements of the software that need to be
fulfilled for the successful development of the software system. These requirements can be
functional as well as non-functional depending upon the type of requirement. The
interaction between different customers and contractors is done because it is necessary to
fully understand the needs of customers.
15. List the different techniques for representing complex logic.

To represent complex logic, you can use techniques like logical representation, decision table
testing, propositional logic, Boolean algebra, fuzzy logic, and simulation.
Here's a more detailed breakdown:
1. Logical Representation:
 This involves using a language with defined rules to represent knowledge, ensuring clarity
and avoiding ambiguity.
 It's a fundamental method for communicating knowledge to machines, allowing for the
representation of facts and conclusions based on conditions.
2. Decision Table Testing:
 This technique helps design test cases based on all possible combinations of inputs and
outputs, useful for testing complex logic, business rules, and decision-making processes.
 It simplifies the representation of complex business logic and helps identify missing logic or
gaps in requirements.
3. Propositional Logic:
 Propositional logic focuses on the relationships between propositions (statements that can be
true or false) using logical connectives like "and", "or", "not", and "if-then".
 It's often used in scenarios where the knowledge domain is simple and the relationships
between propositions are straightforward.
4. Boolean Algebra:
 This is a mathematical system that provides a way to represent and simplify digital logic
circuits.
 It's used for performing complex calculations and designing circuits involving logic gates.
5. Fuzzy Logic:
 Fuzzy logic deals with situations where information is imprecise or uncertain, allowing for
the representation of "degrees of truth" rather than just true or false.
 It's often used in situations where multiple factors need to be considered, such as in advanced
software trading models.
6. Simulation:
 Simulation models can be used to visualize potential outcomes and refine processes before
implementing changes in the real world.
 This indirect approach is valuable for process improvement and decision-making, ensuring
that changes are data-driven.
7. Other Techniques:
 Semantic Networks:
These represent knowledge as a network of nodes and links, where nodes represent concepts
and links represent relationships.
 Frame-Based Representation:
This approach uses frames (structured data) to represent knowledge, with slots for different
attributes and values.
 Rule-Based Representation:
This uses rules to represent knowledge, where each rule specifies a condition and an action.
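A short sketch of decision table testing in Python, using a hypothetical loan rule: every combination of condition values is enumerated so no rule combination is left untested.

```python
from itertools import product

def approve_loan(has_income, good_credit):
    """Hypothetical business rule under test."""
    return has_income and good_credit

# Decision table testing: enumerate every combination of condition values.
table = [(income, credit, approve_loan(income, credit))
         for income, credit in product([True, False], repeat=2)]

for row in table:
    print(row)  # (condition 1, condition 2, resulting action)
```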
16. Summarize the different types of Cohesion.

Cohesion, in software engineering, describes how closely related the functions and
responsibilities within a module are, and different types exist, ranging from high to low
cohesion, with examples including functional, sequential, communicational, procedural,
temporal, logical, and coincidental cohesion.
Here's a breakdown of the different types of cohesion:
 Functional Cohesion (Highest):
All elements within a module contribute to a single, well-defined task or function.
 Sequential Cohesion:
The output of one element serves as the input for another, creating a sequence of operations.
 Communicational Cohesion:
Elements within a module operate on the same data or contribute to the same data structure.
 Procedural Cohesion:
Elements are grouped based on the sequence of execution, or the procedures they perform.
 Temporal Cohesion:
Elements are grouped based on the time they are processed during execution, such as
initialization or shutdown tasks.
 Logical Cohesion:
Elements are grouped because they perform similar kinds of activities, even if not directly
related to a single function.
 Coincidental Cohesion (Lowest):
Elements are grouped arbitrarily with little to no meaningful relationship to each other.
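The difference between the highest and lowest levels can be sketched in Python (function names are illustrative):

```python
def mean(values):
    """Functional cohesion: every statement serves one well-defined task."""
    return sum(values) / len(values)

def misc_utilities(values, text):
    """Coincidental cohesion: unrelated actions grouped arbitrarily --
    it both averages numbers and reverses a string."""
    return sum(values) / len(values), text[::-1]

print(mean([2, 4, 6]))               # 4.0
print(misc_utilities([2, 4], "ab"))  # (3.0, 'ba')
```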

17. Write short note on Interaction diagrams.

Interaction diagrams in UML (Unified Modeling Language) model how the objects and components of a system communicate and collaborate to achieve a specific goal; the main kinds are sequence diagrams and communication diagrams. An Interaction Overview Diagram (IOD) complements these by providing a high-level view of the flow of control between interactions, including the sequence of actions, decisions, and interactions between different components or objects.
 Interaction diagrams give a general overview of system behavior that stakeholders can
understand without getting bogged down in specifics.
 They help with requirements analysis, documentation, communication, and system
design. Overall, they enable stakeholders to comprehend complicated systems and make
educated decisions.
18. Describe about software documentation.

In the software development process, software documentation is the information that describes the product to the people who develop, deploy and use it. It includes the technical manuals and online material, such as online versions of manuals and help capabilities. The term is sometimes used to refer to source information about the product discussed in design documentation, code comments, white papers and session notes.

Software documentation is a way for engineers and programmers to describe their product and the process they used in creating it in formal writing. Early computer users were sometimes simply given the engineers' or programmers' notes. As software development became more complicated and formalized, technical writers and editors took over the documentation process.

Software documentation shows what the software developers did when creating the software
and what IT staff and users must do when deploying and using it. Documentation is often
incorporated into the software's user interface and also included as part of help
documentation. The information is often divided into task categories, including the following:

 evaluating

 planning

 setting up or installing

 customizing

 administering

 using

 maintaining
19. Explain some representative coding standards.

Coding Standards in Software Engineering

Some of the coding standards are given below:

1. Limited use of globals: These rules specify which types of data can be declared global and which cannot.
2. Standard headers for different modules: For better understanding and maintenance of the code, the headers of different modules should follow a standard format. A typical header, as used in many companies, contains:
 Name of the module
 Date of module creation
 Author of the module
 Modification history
 Synopsis of the module about what the module does
 Different functions supported in the module along with their input output parameters
 Global variables accessed or modified by the module
3. Naming conventions for local variables, global variables, constants and
functions: Some of the naming conventions are given below:
 Meaningful and understandable variable names help anyone to understand the reason for using them.
 Local variables should be named using camel case lettering starting with small letter
(e.g. localData) whereas Global variables names should start with a capital letter
(e.g. GlobalData). Constant names should be formed using capital letters only
(e.g. CONSDATA).
 It is better to avoid the use of digits in variable names.
 The names of functions should be written in camel case starting with small letters.
 The name of a function must describe the reason for using the function clearly and briefly.
4. Indentation: Proper indentation is very important to increase the readability of the code.
For making the code readable, programmers should use White spaces properly. Some of
the spacing conventions are given below:
 There must be a space after giving a comma between two function arguments.
 Each nested block should be properly indented and spaced.
 Proper Indentation should be there at the beginning and at the end of each block in the
program.
 All braces should start from a new line, and the code following the closing brace should also start from a new line.
5. Error return values and exception handling conventions: All functions that encounter an error condition should return either 0 or 1, to simplify debugging.
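A short Python sketch applying several of these conventions (standard header, naming conventions, and an error-return value); all names and numbers are illustrative, not a prescribed standard:

```python
"""Module: orderTotals (illustrative header following the standard above).
Author, creation date, and modification history would be listed here.
Synopsis: computes an order total with an optional tax rate.
"""

TAXRATE = 0.08          # constant: capital letters only (value is hypothetical)
GlobalOrderCount = 0    # global variable: starts with a capital letter

def computeTotal(itemPrices, taxRate=TAXRATE):
    """Camel-case function name that briefly describes its purpose."""
    global GlobalOrderCount
    subTotal = sum(itemPrices)       # local variables in camel case
    GlobalOrderCount += 1
    if subTotal < 0:                 # error-return convention: 0 on bad input
        return 0
    return round(subTotal * (1 + taxRate), 2)

print(computeTotal([10.0, 5.0]))  # 15 * 1.08 = 16.2
```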
20. Explain the Waterfall model. What are the problems that are encountered when the waterfall model is applied?

The Waterfall Model is a classical software development methodology, first introduced by Winston W. Royce in 1970. It is a linear and sequential approach to software development that consists of several phases, each of which must be completed in a specific order. This classical waterfall model is simple and idealistic, and it was once very popular. Today it is not used as widely, but it remains important because most other types of software development life cycle models are derivatives of it.
Features of Waterfall Model
Following are the features of the waterfall model:
1. Sequential Approach: The waterfall model involves a sequential approach to software
development, where each phase of the project is completed before moving on to the next
one.
2. Document-Driven: The waterfall model depends on documentation to ensure that the
project is well-defined and the project team is working towards a clear set of goals.
3. Quality Control: The waterfall model places a high emphasis on quality control and
testing at each phase of the project, to ensure that the final product meets the
requirements and expectations of the stakeholders.
4. Rigorous Planning: The waterfall model involves a careful planning process, where the
project scope, timelines, and deliverables are carefully defined and monitored throughout
the project lifecycle.
Overall, the waterfall model is used in situations where there is a need for a highly
structured and systematic approach to software development. It can be effective in ensuring
that large, complex projects are completed on time and within budget, with a high level of
quality and customer satisfaction.

The Classical Waterfall Model suffers from various shortcomings, so we can't use it in real projects; instead, we use other software development life cycle models that are based on the classical waterfall model. Below are some major drawbacks of this model.
 No Feedback Path: In the classical waterfall model evolution of software from one
phase to another phase is like a waterfall. It assumes that no error is ever committed by
developers during any phase. Therefore, it does not incorporate any mechanism for error
correction.
 Difficult to accommodate Change Requests: This model assumes that all the customer
requirements can be completely and correctly defined at the beginning of the project, but
the customer’s requirements keep on changing with time. It is difficult to accommodate
any change requests after the requirements specification phase is complete.
 No Overlapping of Phases: This model recommends that a new phase can start only
after the completion of the previous phase. But in real projects, this can’t be maintained.
To increase efficiency and reduce cost, phases may overlap.
 Limited Flexibility: The Waterfall Model is a rigid and linear approach to software
development, which means that it is not well-suited for projects with changing or
uncertain requirements. Once a phase has been completed, it is difficult to make changes
or go back to a previous phase.
 Limited Stakeholder Involvement: The Waterfall Model is a structured and sequential
approach, which means that stakeholders are typically involved in the early phases of the
project (requirements gathering and analysis) but may not be involved in the later
phases (implementation, testing, and deployment).
 Late Defect Detection: In the Waterfall Model, testing is typically done toward the end
of the development process. This means that defects may not be discovered until late in
the development process, which can be expensive and time-consuming to fix.
 Lengthy Development Cycle: The Waterfall Model can result in a lengthy development
cycle, as each phase must be completed before moving on to the next. This can result in
delays and increased costs if requirements change or new issues arise.
When to Use Waterfall Model?
Here are some cases where the use of the Waterfall Model is best suited:
 Well-understood Requirements: Before beginning development, there are precise,
reliable, and thoroughly documented requirements available.
 Very Little Changes Expected: During development, very little adjustments or
expansions to the project’s scope are anticipated.
 Small to Medium-Sized Projects: Ideal for more manageable projects with a clear
development path and little complexity.
 Predictable: Projects that are predictable, low-risk, and able to be addressed early in the
development life cycle are those that have known, controllable risks.
 Regulatory Compliance is Critical: Circumstances in which paperwork is of utmost
importance and stringent regulatory compliance is required.
 Client Prefers a Linear and Sequential Approach: This situation describes the client’s
preference for a linear and sequential approach to project development.
 Limited Resources: Projects with limited resources can benefit from a set-up strategy,
which enables targeted resource allocation.
The Waterfall approach involves less user interaction in the product development process; the product can only be shown to the end user when it is ready.

21. Explain about the formal system development techniques.

Formal system development techniques are mathematically rigorous approaches used to specify, develop, and verify software and hardware systems, aiming for higher reliability and correctness by using mathematical proofs and models instead of relying solely on testing.
 Mathematical Rigor:
Formal methods employ mathematical languages and logic to describe system behavior,
ensuring precision and clarity.
 Specification and Verification:
 Formal Specification: This involves creating a precise, mathematical model of the system's
intended behavior, outlining what the system should do, not how.
 Formal Verification: This uses mathematical proofs to demonstrate that the system's
implementation (or a model of it) adheres to the formal specification, ensuring correctness and
identifying potential errors early in the development process.
Benefits:
 Improved Reliability: By rigorously verifying the system's behavior, formal methods can help
reduce the risk of errors and improve the overall reliability of the system.
 Early Error Detection: Formal methods allow for the detection of potential problems during
the design phase, rather than waiting for runtime testing, which can be more costly and time-
consuming.
 Enhanced Trust and Safety: In safety-critical systems, formal methods can provide a high
degree of confidence in the system's correctness and safety, which is crucial for applications
like medical devices or autonomous vehicles.

Examples of Formal Methods:


 Model Checking: A technique that systematically explores all possible states of a model to
ensure that certain properties hold.
 Theorem Proving: Using automated or interactive theorem provers to prove properties of the
system based on its formal specification.
 Formal Specification Languages: Languages like Z, B, and VDM are used to create formal
models of systems.
Challenges:
 Complexity: Formal methods can be complex and require specialized knowledge and tools.
 Cost and Time: Developing and verifying systems using formal methods can be more time-
consuming and costly than traditional approaches.
 Limited Adoption: Despite their potential benefits, formal methods are not widely used in
industry, partly due to the challenges mentioned above.
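A minimal model-checking sketch in Python (not a real tool, and the counter model is invented for illustration): it exhaustively explores every reachable state of a toy system and checks a safety property, in the spirit of the model checking technique described above.

```python
def step(state):
    """Transition relation of a toy model: a counter modulo 4 (illustrative)."""
    return (state + 1) % 4

def check_property(initial=0, prop=lambda s: 0 <= s <= 3):
    """Exhaustively explore every reachable state and check a safety property.
    Returns (True, None) if the property holds everywhere, else (False, state)."""
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        if not prop(s):
            return False, s          # a counterexample state
        frontier.append(step(s))
    return True, None

print(check_property())  # (True, None): the property holds in every state
```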

22. Summarize on the Data Flow Diagrams with an example.

DFD is the abbreviation for Data Flow Diagram. The flow of data in a system or
process is represented by a Data Flow Diagram (DFD). It also gives insight into the inputs
and outputs of each entity and the process itself. Data Flow Diagram (DFD) does not have a
control flow and no loops or decision rules are present. Specific operations, depending on
the type of data, can be explained by a flowchart. It is a graphical tool, useful for communicating with users, managers and other personnel, and for analyzing existing as well as proposed systems.

It should be pointed out that a DFD is not a flowchart. In drawing the DFD, the designer
has to specify the major transforms in the path of the data flowing from the input to the
output. DFDs can be hierarchically organized, which helps in progressively partitioning
and analyzing large systems.

It provides an overview of:

 What data the system processes.
 What transformations are performed.
 What data are stored.
 What results are produced, etc.

Data Flow Diagram can be represented in several ways. The Data Flow Diagram (DFD)
belongs to structured-analysis modeling tools. Data Flow diagrams are very popular
because they help us to visualize the major steps and data involved in software-system
processes.
Characteristics of Data Flow Diagram (DFD)
Below are some characteristics of Data Flow Diagram (DFD):
 Graphical Representation: Data Flow Diagrams (DFDs) use different symbols and notation to represent data flow within a system, which simplifies complex models.
 Problem Analysis: DFDs are very useful in understanding a system and can be effectively used during analysis. They are quite general and are not limited to problem analysis for software requirements specification.
 Abstraction: A DFD provides an abstraction of a complex model, i.e., it hides unnecessary implementation details and shows only the flow of data and processes within the information system.
 Hierarchy: A DFD provides a hierarchy of a system. A high-level diagram (the 0-level diagram) gives an overview of the entire system, while lower-level diagrams (1-level DFDs and beyond) give a detailed data flow for each individual process.
 Data Flow: The primary objective of a DFD is to visualize the data flow between external entities, processes and data stores. Data flow is represented by an arrow symbol.
 Ease of Understanding: DFDs can be easily understood by both technical and non-technical stakeholders.
 Modularity: Modularity can be achieved using DFDs, as they break a complex system into smaller modules or processes. This makes the analysis and design of a system easier.

Types of Data Flow Diagram (DFD)

There are two types of Data Flow Diagram (DFD):

1. Logical Data Flow Diagram
2. Physical Data Flow Diagram

Logical Data Flow Diagram (DFD)

A logical data flow diagram mainly focuses on the system process and illustrates how data flows in the system. It concentrates on high-level processes and data flow without diving deep into technical implementation details. Logical DFDs are used in various organizations for the smooth running of a system; for example, in a banking software system, a logical DFD describes how data moves from one entity to another.

23. Discuss the concepts used in the object-oriented approach.

Object-oriented programming (OOP) relies on key concepts like objects, classes, inheritance,
encapsulation, abstraction, and polymorphism to organize and structure code, promoting
modularity, reusability, and maintainability.
OOP (Object-Oriented Programming) is a programming paradigm based on the concept of objects, which can contain data in the form of fields (attributes or properties) and code in the form of procedures (methods or functions). In languages such as Java, the core OOP concepts include encapsulation, inheritance, polymorphism, and abstraction.
Here's a more detailed explanation of these concepts:
 Objects:
Objects are the fundamental building blocks of OOP, representing entities or concepts in the
real world with both data (attributes) and behavior (methods).
 Classes:
Classes are blueprints or templates for creating objects, defining their attributes and
methods.
 Inheritance:
Inheritance allows classes to inherit properties and behaviors from other classes (parent or
superclass), promoting code reusability and establishing relationships between classes.
 Encapsulation:
Encapsulation bundles data (attributes) and the methods that operate on that data within a
class, restricting direct access to the internal state of an object and promoting data
protection.
 Abstraction:
Abstraction focuses on presenting only the essential details of an object or class to the user,
hiding unnecessary implementation details.
 Polymorphism:
Polymorphism allows objects of different classes to respond to the same method call in their
own way, enabling flexible and adaptable code.
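The concepts above can be sketched together in one short Python example (the class names and values are illustrative):

```python
from abc import ABC, abstractmethod

class Shape(ABC):                      # abstraction: expose only what matters
    @abstractmethod
    def area(self):
        ...

class Rectangle(Shape):                # class: a blueprint for creating objects
    def __init__(self, width, height):
        self._width = width            # encapsulation: leading underscore marks
        self._height = height          # these attributes as internal state

    def area(self):
        return self._width * self._height

class Square(Rectangle):               # inheritance: reuses Rectangle's code
    def __init__(self, side):
        super().__init__(side, side)

shapes = [Rectangle(2, 3), Square(4)]  # objects: instances of the classes
print([s.area() for s in shapes])      # polymorphism: same call, per-class result
```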

24. Explain the various types of White-box testing methods.

White box testing is a Software Testing Technique that involves testing the internal
structure and workings of a Software Application. The tester has access to the source code
and uses this knowledge to design test cases that can verify the correctness of the software
at the code level.

White box testing is also known as Structural Testing or Code-based Testing, and it is
used to test the software’s internal logic, flow, and structure. The tester creates test cases to
examine the code paths and logic flows to ensure they meet the specified requirements.

White box testing involves testing a software application with an extensive understanding of its internal code and structure. This type of testing allows testers to create detailed test cases based on the application's design and functionality.

Here are some key types of tests commonly used in white box testing:

 Path Testing: White box testing checks all possible execution paths in the program to make sure that each function behaves as expected. It helps verify that all logical conditions in the code function correctly and efficiently, avoiding unnecessary steps and promoting code reusability.
 Input and Output Validation: By providing different inputs to a function, white box testing checks that the function gives the correct output each time. This helps confirm that the software consistently produces the required results under various conditions.
 Security Testing: This focuses on finding security issues in the code. Tools like static code analysis are used to check the code for potential security flaws and to verify that the application follows best practices for secure development.
 Loop Testing: This checks that loops (for or while loops) in the program operate correctly and efficiently. It verifies that each loop handles its variables correctly and doesn't cause errors like infinite loops or logic flaws.
 Data Flow Testing: This involves tracking the flow of variables through the program. It ensures that variables are properly declared, initialized, and used in the right places, preventing errors related to incorrect data handling.

Types Of White Box Testing

White box testing can be done for different purposes at different places. There are three main types of White Box testing, which are as follows:

 Unit Testing: Checks whether each part or function of the application works correctly, and verifies that the application meets its design requirements during development.
 Integration Testing: Examines how different parts of the application work together. It is done after unit testing, to make sure components work well both alone and together.
 Regression Testing: Verifies that changes or updates don't break existing functionality. It checks that the application still passes all existing tests after updates.

White Box Testing Techniques

One of the main benefits of white box testing is that it allows for testing every part of an
application. To achieve complete code coverage, white box testing uses the following
techniques:

1. Statement Coverage: In this technique, the aim is to traverse all statements at least once, so each line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps in pointing out faulty code.

[Figure: Statement Coverage flowchart]


2. Branch Coverage: Branch coverage focuses on testing the decision points or conditional branches in the code. It checks whether both possible outcomes (true and false) of each conditional statement are tested. In this technique, test cases are designed so that each branch from all decision points is traversed at least once; in a flowchart, all edges must be traversed at least once.

[Figure: Branch Coverage flowchart]
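The gap between statement and branch coverage can be seen in a small hypothetical Python function: one test can execute every line while still missing a branch.

```python
def safe_ratio(x, y):
    """Hypothetical function: falls back to a default divisor when y is 0."""
    if y == 0:
        y = 1
    return x / y

# This one test executes every statement (statement coverage)...
assert safe_ratio(10, 0) == 10.0
# ...but branch coverage also demands the false branch of `y == 0`:
assert safe_ratio(10, 2) == 5.0
print("both branches exercised")
```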

3. Condition Coverage: In this technique, all individual conditions must be covered, as shown in the following example:

 READ X, Y
 IF(X == 0 || Y == 0)
 PRINT ‘0’
 #TC1 – X = 0, Y = 55
 #TC2 – X = 5, Y = 0

4. Multiple Condition Coverage: In this technique, all the possible combinations of the
possible outcomes of conditions are tested at least once. Let’s consider the following
example:

 READ X, Y
 IF(X == 0 || Y == 0)
 PRINT ‘0’
 #TC1: X = 0, Y = 0
 #TC2: X = 0, Y = 5
 #TC3: X = 55, Y = 0
 #TC4: X = 55, Y = 5
5. Basis Path Testing: In this technique, a control flow graph is made from the code or flowchart, and then the cyclomatic complexity is calculated. The cyclomatic complexity defines the number of independent paths, so that a minimal number of test cases can be designed, one for each independent path.

Steps:

 Make the corresponding control flow graph
 Calculate the cyclomatic complexity
 Find the independent paths
 Design test cases corresponding to each independent path
 V(G) = P + 1, where P is the number of predicate nodes in the flow graph
 V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
 V(G) = Number of non-overlapping regions in the graph
 #P1: 1 – 2 – 4 – 7 – 8
 #P2: 1 – 2 – 3 – 5 – 7 – 8
 #P3: 1 – 2 – 3 – 6 – 7 – 8
 #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
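For example, applying V(G) = E - N + 2 to a small, hypothetical flow graph (the edge list below is invented, roughly matching an if-else inside a loop):

```python
# Edges of a small, hypothetical control flow graph.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2   # V(G) = E - N + 2
print(v_g)  # 3, so design at least 3 test cases (one per independent path)
```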

6. Loop Testing: Loops are widely used and are fundamental to many algorithms; hence, their testing is very important. Errors often occur at the beginnings and ends of loops.

 Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1 and n+1 passes
 Nested loops: For nested loops, all the loops are set to their minimum count, and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop and
this is worked outwards till all the loops have been tested.

 Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each. If they’re not independent, treat them like nesting.
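The simple-loop test cases listed above can be sketched in Python for a hypothetical loop of size n = 5 (the function and data are invented for the example):

```python
def sum_first(values, n):
    """Loop under test (illustrative): sums the first n elements, clamped."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]           # a simple loop over n = 5 elements
assert sum_first(data, 0) == 0   # skip the loop entirely
assert sum_first(data, 1) == 1   # exactly one pass
assert sum_first(data, 2) == 3   # two passes
assert sum_first(data, 3) == 6   # m passes, where m < n
assert sum_first(data, 4) == 10  # n - 1 passes
assert sum_first(data, 6) == 15  # attempted n + 1 passes, clamped safely
print("loop tests passed")
```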

Black Box vs White Box vs Gray Box Testing

In the process of the STLC, three primary testing approaches are used: Black Box Testing, White Box Testing, and Gray Box Testing. These methods differ in the level of knowledge the tester has about the application and in how they approach the testing process.
