Software Engineering Question Bank 1
QUESTION BANK
Control flow structures are mechanisms that determine the order in which statements
in a program are executed, enabling conditional execution and repetition. They allow
programs to make decisions and repeat actions, making them flexible and powerful.
The spiral model's task regions, also known as phases or activities, are Planning, Risk
Analysis, Engineering, and Evaluation.
4. Define SRS.
An SRS (Software Requirements Specification) is a document that completely describes what
the proposed software should do, specifying all functional and non-functional requirements,
and serves as an agreement between the customer and the development team.
7. What is Coupling?
In software engineering, coupling is the measure of the degree of interdependence between
modules: how much one module relies on the internals of another. Low (loose) coupling is
desirable because it lets modules be understood, tested, and changed independently.
8. What is functional independence in software design?
A module is functionally independent when it performs a single, well-defined task and has
minimal interaction with other modules. Functional independence is achieved through high
cohesion and low coupling.
UML supports several views of a system, including the user view (use case diagram),
structural view (class, component, package, composite structure, object diagrams), behavioral
view (activity, state machine, sequence, communication diagrams), and
implementation/deployment view (component, deployment diagrams).
Validation is the process of assessing whether the final product meets the needs and
expectations of the end-user and other stakeholders, ensuring it functions as intended in real-
world scenarios.
Software testing occurs at different levels, including unit, integration, system, and
acceptance testing, each focusing on verifying different aspects of the software's functionality
and quality.
To represent complex logic, you can use techniques like logical representation, decision table
testing, propositional logic, Boolean algebra, fuzzy logic, and simulation.
Here's a more detailed breakdown:
1. Logical Representation:
This involves using a language with defined rules to represent knowledge, ensuring clarity
and avoiding ambiguity.
It's a fundamental method for communicating knowledge to machines, allowing for the
representation of facts and conclusions based on conditions.
2. Decision Table Testing:
This technique helps design test cases based on all possible combinations of inputs and
outputs, useful for testing complex logic, business rules, and decision-making processes.
It simplifies the representation of complex business logic and helps identify missing logic or
gaps in requirements.
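To make this concrete, here is a minimal Python sketch (the rule and all names are invented for illustration) that enumerates every condition combination of a two-condition business rule, which is exactly what a decision table does:

```python
from itertools import product

# Hypothetical rule: a loan is approved only when the applicant has
# sufficient income AND a good credit score.
def loan_approved(has_income: bool, good_credit: bool) -> bool:
    return has_income and good_credit

# A decision table enumerates every combination of conditions, so no
# input pairing is accidentally left untested.
decision_table = [
    (income, credit, loan_approved(income, credit))
    for income, credit in product([True, False], repeat=2)
]

for income, credit, approved in decision_table:
    print(f"income={income!s:<5} credit={credit!s:<5} -> approved={approved}")
```

Generating the combinations mechanically is what makes missing logic visible: any row whose expected outcome is unclear points to a gap in the requirements.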
3. Propositional Logic:
Propositional logic focuses on the relationships between propositions (statements that can be
true or false) using logical connectives like "and", "or", "not", and "if-then".
It's often used in scenarios where the knowledge domain is simple and the relationships
between propositions are straightforward.
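As a small illustration (the proposition names P and Q are assumed), the connectives can be evaluated mechanically to build a truth table and check a formula such as (P and Q) -> P:

```python
from itertools import product

# "if-then" is false only when the antecedent is true and the
# consequent is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Truth table for the formula (P and Q) -> P, which is a tautology.
rows = [(p, q, implies(p and q, p)) for p, q in product([True, False], repeat=2)]
for p, q, holds in rows:
    print(f"P={p!s:<5} Q={q!s:<5} (P and Q) -> P: {holds}")
print(all(holds for _, _, holds in rows))  # True: holds in every row
```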
4. Boolean Algebra:
This is a mathematical system that provides a way to represent and simplify digital logic
circuits.
It's used for performing complex calculations and designing circuits involving logic gates.
5. Fuzzy Logic:
Fuzzy logic deals with situations where information is imprecise or uncertain, allowing for
the representation of "degrees of truth" rather than just true or false.
It's often used in situations where multiple factors need to be considered, such as in advanced
software trading models.
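A minimal sketch of a fuzzy membership function (the 20 and 30 degree thresholds are purely illustrative): instead of a hard true/false cutoff for "hot", the function returns a degree of truth between 0.0 and 1.0:

```python
# Degree to which a temperature counts as "hot", on a 0.0-1.0 scale,
# instead of a hard true/false cutoff. Thresholds are illustrative.
def hot_membership(temp_c: float) -> float:
    if temp_c <= 20:
        return 0.0
    if temp_c >= 30:
        return 1.0
    return (temp_c - 20) / 10  # linear ramp between the thresholds

print(hot_membership(10))  # 0.0: not hot at all
print(hot_membership(25))  # 0.5: "somewhat hot"
print(hot_membership(35))  # 1.0: fully hot
```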
6. Simulation:
Simulation models can be used to visualize potential outcomes and refine processes before
implementing changes in the real world.
This indirect approach is valuable for process improvement and decision-making, ensuring
that changes are data-driven.
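As an illustrative toy (all probabilities invented), a Monte Carlo simulation can estimate an outcome before changing a real process; here, how often a two-step review catches a defect:

```python
import random

# Toy process: a defect is caught if either a code review or a test run
# detects it; the detection probabilities below are invented.
def run_trial(rng: random.Random, p_review: float = 0.6, p_test: float = 0.7) -> bool:
    return rng.random() < p_review or rng.random() < p_test

rng = random.Random(42)  # fixed seed keeps the experiment repeatable
trials = 10_000
caught = sum(run_trial(rng) for _ in range(trials))
# Analytically the catch rate is 1 - 0.4 * 0.3 = 0.88; the simulation
# lets us confirm that before changing the real review process.
print(f"estimated catch rate: {caught / trials:.2f}")
```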
7. Other Techniques:
Semantic Networks:
These represent knowledge as a network of nodes and links, where nodes represent concepts
and links represent relationships.
Frame-Based Representation:
This approach uses frames (structured data) to represent knowledge, with slots for different
attributes and values.
Rule-Based Representation:
This uses rules to represent knowledge, where each rule specifies a condition and an action.
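A minimal rule-based representation can be sketched as condition/action pairs (the facts and actions here are invented examples):

```python
# Each rule pairs a condition (a predicate over a fact dictionary)
# with an action; facts and actions are invented examples.
rules = [
    (lambda facts: facts["temperature"] > 80, "turn on cooling"),
    (lambda facts: facts["temperature"] < 15, "turn on heating"),
]

def fire_rules(facts: dict) -> list:
    # Collect the action of every rule whose condition holds.
    return [action for condition, action in rules if condition(facts)]

print(fire_rules({"temperature": 90}))  # ['turn on cooling']
print(fire_rules({"temperature": 20}))  # []: no rule fires
```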
16. Summarize the different types of Cohesion.
Cohesion, in software engineering, describes how closely related the functions and
responsibilities within a module are, and different types exist, ranging from high to low
cohesion, with examples including functional, sequential, communicational, procedural,
temporal, logical, and coincidental cohesion.
Here's a breakdown of the different types of cohesion:
Functional Cohesion (Highest):
All elements within a module contribute to a single, well-defined task or function.
Sequential Cohesion:
The output of one element serves as the input for another, creating a sequence of operations.
Communicational Cohesion:
Elements within a module operate on the same data or contribute to the same data structure.
Procedural Cohesion:
Elements are grouped based on the sequence of execution, or the procedures they perform.
Temporal Cohesion:
Elements are grouped based on the time they are processed during execution, such as
initialization or shutdown tasks.
Logical Cohesion:
Elements are grouped because they perform similar kinds of activities, even if not directly
related to a single function.
Coincidental Cohesion (Lowest):
Elements are grouped arbitrarily with little to no meaningful relationship to each other.
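The contrast between the highest and lowest levels can be sketched in code (all contents invented): the first function forms a functionally cohesive unit on its own, while the remaining two would exhibit coincidental cohesion if grouped into one module:

```python
# Functional cohesion: every line serves the single task
# "compute payroll tax" (the allowance and rate are invented).
def payroll_tax(gross: float, rate: float = 0.2) -> float:
    taxable = max(gross - 1000, 0)  # illustrative tax-free allowance
    return taxable * rate

# Coincidental cohesion: these two share a module only by accident;
# nothing meaningful relates them to each other or to payroll_tax.
def reverse_string(s: str) -> str:
    return s[::-1]

def kilometres_to_miles(km: float) -> float:
    return km * 0.621371

print(payroll_tax(3000))  # 400.0
```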
It includes the technical manuals and online material, such as online versions of
manuals and help capabilities. The term is sometimes used to refer to source information
about the product discussed in design documentation, code comments, white papers and
session notes.
Software documentation shows what the software developers did when creating the software
and what IT staff and users must do when deploying and using it. Documentation is often
incorporated into the software's user interface and also included as part of help
documentation. The information is often divided into task categories, including the following:
evaluating
planning
setting up or installing
customizing
administering
using
maintaining
19. Explain some representative coding standards.
1. Limited use of globals: These rules specify which types of data may be declared
global and which may not.
2. Standard headers for different modules: For better understanding and maintenance of
the code, the header of each module should follow a standard format. A typical header,
as used in many companies, contains the following:
Name of the module
Date of module creation
Author of the module
Modification history
Synopsis of the module about what the module does
Different functions supported in the module along with their input output parameters
Global variables accessed or modified by the module
3. Naming conventions for local variables, global variables, constants and
functions: Some of the naming conventions are given below:
Meaningful and understandable variable names help anyone understand why a
variable is used.
Local variables should be named in camel case starting with a lowercase letter
(e.g. localData), whereas global variable names should start with a capital letter
(e.g. GlobalData). Constant names should be formed using capital letters only
(e.g. CONSDATA).
It is better to avoid the use of digits in variable names.
Function names should be written in camel case starting with a lowercase letter,
and should describe the purpose of the function clearly and briefly.
4. Indentation: Proper indentation is very important to increase the readability of the code.
To make the code readable, programmers should use white space properly. Some of
the spacing conventions are given below:
There must be a space after the comma between two function arguments.
Each nested block should be properly indented and spaced.
Proper indentation should be there at the beginning and at the end of each block in the
program.
All braces should start from a new line, and the code following the end of braces should
also start from a new line.
5. Error return values and exception handling conventions: All functions that
encounter an error condition should return a 0 or a 1, to simplify debugging.
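A short sketch pulling these standards together (module details are invented): a standard header, an all-capitals constant, camel-case names, and consistent indentation:

```python
"""
Module: temperatureUtils (all header details are illustrative)
Created: 2024-01-15
Author: Jane Doe
Modification history: initial version
Synopsis: conversion between Celsius and Fahrenheit temperatures
Functions: celsiusToFahrenheit(celsiusValue) -> float
Globals: none accessed or modified
"""

# Constant name formed with capital letters only.
FREEZINGPOINT = 32.0

def celsiusToFahrenheit(celsiusValue):
    # Local variable in camel case, starting with a lowercase letter.
    fahrenheitValue = celsiusValue * 9 / 5 + FREEZINGPOINT
    return fahrenheitValue

print(celsiusToFahrenheit(100))  # 212.0
```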
20.Explain Water fall model. What are the problems that are encountered when
waterfall model is applied?
The Classical Waterfall Model suffers from various shortcomings, so it is rarely used in real
projects; instead, we use other software development life cycle models that are based on the
classical waterfall model. Below are some major drawbacks of this model.
No Feedback Path: In the classical waterfall model evolution of software from one
phase to another phase is like a waterfall. It assumes that no error is ever committed by
developers during any phase. Therefore, it does not incorporate any mechanism for error
correction.
Difficult to accommodate Change Requests: This model assumes that all the customer
requirements can be completely and correctly defined at the beginning of the project, but
the customer’s requirements keep on changing with time. It is difficult to accommodate
any change requests after the requirements specification phase is complete.
No Overlapping of Phases: This model recommends that a new phase can start only
after the completion of the previous phase. But in real projects, this can’t be maintained.
To increase efficiency and reduce cost, phases may overlap.
Limited Flexibility: The Waterfall Model is a rigid and linear approach to software
development, which means that it is not well-suited for projects with changing or
uncertain requirements. Once a phase has been completed, it is difficult to make changes
or go back to a previous phase.
Limited Stakeholder Involvement: The Waterfall Model is a structured and sequential
approach, which means that stakeholders are typically involved in the early phases of the
project (requirements gathering and analysis) but may not be involved in the later
phases (implementation, testing, and deployment).
Late Defect Detection: In the Waterfall Model, testing is typically done toward the end
of the development process. This means that defects may not be discovered until late in
the development process, which can be expensive and time-consuming to fix.
Lengthy Development Cycle: The Waterfall Model can result in a lengthy development
cycle, as each phase must be completed before moving on to the next. This can result in
delays and increased costs if requirements change or new issues arise.
When to Use Waterfall Model?
Here are some cases where the use of the Waterfall Model is best suited:
Well-understood Requirements: Before beginning development, there are precise,
reliable, and thoroughly documented requirements available.
Very Little Changes Expected: During development, few adjustments or
expansions to the project’s scope are anticipated.
Small to Medium-Sized Projects: Ideal for more manageable projects with a clear
development path and little complexity.
Predictable: Projects that are predictable, low-risk, and able to be addressed early in the
development life cycle are those that have known, controllable risks.
Regulatory Compliance is Critical: Circumstances in which paperwork is of utmost
importance and stringent regulatory compliance is required.
Client Prefers a Linear and Sequential Approach: This situation describes the client’s
preference for a linear and sequential approach to project development.
Limited Resources: Projects with limited resources can benefit from a set-up strategy,
which enables targeted resource allocation.
The Waterfall approach involves less user interaction in the product development process.
The product can only be shown to the end user when it is ready.
DFD is the abbreviation for Data Flow Diagram. The flow of data in a system or
process is represented by a Data Flow Diagram (DFD). It also gives insight into the inputs
and outputs of each entity and the process itself. Data Flow Diagram (DFD) does not have a
control flow and no loops or decision rules are present. Specific operations, depending on
the type of data, can be explained by a flowchart. It is a graphical tool, useful for
communicating with users, managers, and other personnel. It is useful for analyzing existing
as well as proposed systems.
It should be pointed out that a DFD is not a flowchart. In drawing the DFD, the designer
has to specify the major transforms in the path of the data flowing from the input to the
output. DFDs can be hierarchically organized, which helps in progressively partitioning
and analyzing large systems.
It provides an overview of the system as a whole.
Data Flow Diagram can be represented in several ways. The Data Flow Diagram (DFD)
belongs to structured-analysis modeling tools. Data Flow diagrams are very popular
because they help us to visualize the major steps and data involved in software-system
processes.
Characteristics of Data Flow Diagram (DFD)
Below are some characteristics of Data Flow Diagram (DFD):
Graphical Representation: A Data Flow Diagram (DFD) uses different symbols and
notations to represent data flow within a system, which simplifies a complex model.
Problem Analysis: Data Flow Diagrams (DFDs) are very useful in understanding a
system and can be effectively used during analysis. DFDs are quite
general and are not limited to problem analysis for software requirements specification.
Abstraction: A Data Flow Diagram (DFD) provides an abstraction of a complex model;
it hides unnecessary implementation details and shows only the flow of data and
processes within an information system.
Hierarchy: A Data Flow Diagram (DFD) provides a hierarchical view of a system. The
high-level diagram (the 0-level diagram) provides an overview of the entire system, while
lower-level diagrams (the 1-level DFD and beyond) provide a detailed view of the data flow
through individual processes.
Data Flow: The primary objective of a Data Flow Diagram (DFD) is to visualize the data
flow between external entities, processes, and data stores. Data flow is represented by an
arrow symbol.
Ease of Understanding: A Data Flow Diagram (DFD) can be easily understood by both
technical and non-technical stakeholders.
Modularity: Modularity can be achieved using a Data Flow Diagram (DFD), as it breaks
a complex system into smaller modules or processes. This makes analysis and
design of the system easier.
Object-oriented programming (OOP) relies on key concepts like objects, classes, inheritance,
encapsulation, abstraction, and polymorphism to organize and structure code, promoting
modularity, reusability, and maintainability.
OOPs (Object-Oriented Programming) is a programming paradigm based on the concept of
objects, which can contain data in the form of fields (attributes or properties) and code in the
form of procedures (methods or functions). In Java, OOPs concepts include encapsulation,
inheritance, polymorphism, and abstraction.
Here's a more detailed explanation of these concepts:
Objects:
Objects are the fundamental building blocks of OOP, representing entities or concepts in the
real world with both data (attributes) and behavior (methods).
Classes:
Classes are blueprints or templates for creating objects, defining their attributes and
methods.
Inheritance:
Inheritance allows classes to inherit properties and behaviors from other classes (parent or
superclass), promoting code reusability and establishing relationships between classes.
Encapsulation:
Encapsulation bundles data (attributes) and the methods that operate on that data within a
class, restricting direct access to the internal state of an object and promoting data
protection.
Abstraction:
Abstraction focuses on presenting only the essential details of an object or class to the user,
hiding unnecessary implementation details.
Polymorphism:
Polymorphism allows objects of different classes to respond to the same method call in their
own way, enabling flexible and adaptable code.
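The concepts above can be shown together in a minimal sketch (class names invented); although the text mentions Java, the same ideas are illustrated here in Python:

```python
# Minimal sketch (class names invented) showing abstraction, inheritance,
# encapsulation, and polymorphism together.
from abc import ABC, abstractmethod

class Shape(ABC):                      # abstraction: only the essential interface
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):                # inheritance: Rectangle is-a Shape
    def __init__(self, width: float, height: float):
        self._width = width            # encapsulation: internal state kept
        self._height = height          # behind a leading underscore

    def area(self) -> float:
        return self._width * self._height

class Circle(Shape):
    def __init__(self, radius: float):
        self._radius = radius

    def area(self) -> float:
        return 3.14159 * self._radius ** 2

# Polymorphism: both objects answer the same area() call in their own way.
shapes = [Rectangle(2, 3), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # [6, 3.14]
```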
White box testing is a Software Testing Technique that involves testing the internal
structure and workings of a Software Application. The tester has access to the source code
and uses this knowledge to design test cases that can verify the correctness of the software
at the code level.
White box testing is also known as Structural Testing or Code-based Testing, and it is
used to test the software’s internal logic, flow, and structure. The tester creates test cases to
examine the code paths and logic flows to ensure they meet the specified requirements.
White box testing involves testing a software application with an in-depth understanding of
its internal code and structure. This type of testing allows testers to create detailed test
cases based on the application’s design and functionality.
Here are some key types of tests commonly used in white box testing:
Path Testing: Path testing checks all possible execution paths in the program to make
sure each function behaves as expected. It helps verify that all logical conditions in the
code function correctly and efficiently, avoiding unnecessary steps and improving code
reusability.
Input and Output Validation: By providing different inputs to a function, white box
testing checks that the function gives the correct output each time. This helps to
confirm that the software consistently produces the required results under various
conditions.
Security Testing: Security testing focuses on finding security issues in the code. Tools
like static code analysis are used to check the code for potential security flaws and to
verify that the application follows best practices for secure development.
Loop Testing: Loop testing checks that loops (for or while loops) in the program operate
correctly and efficiently. It checks that a loop handles its variables correctly and doesn’t
cause errors like infinite loops or logic flaws.
Data Flow Testing: This involves tracking the flow of variables through the program. It
ensures that variables are properly declared, initialized, and used in the right places,
preventing errors related to incorrect data handling.
Unit Testing: Unit testing checks whether each part or function of the application works
correctly, ensuring the application meets design requirements during development.
Integration Testing: Integration testing examines how different parts of the application
work together. It is performed after unit testing to make sure components work well both
alone and together.
Regression Testing: Regression testing verifies that changes or updates don’t break
existing functionality of the code, checking that the application still passes all existing
tests after updates.
One of the main benefits of white box testing is that it allows for testing every part of an
application. To achieve complete code coverage, white box testing uses the following
techniques:
1. Statement Coverage: In this technique, the aim is to traverse all statements at least
once. Hence, each line of code is tested. In the case of a flowchart, every node must be
traversed at least once. Since all lines of code are covered, it helps in pointing out faulty
code.
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
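The pseudocode above can be translated directly into a runnable form (function name invented) so TC1 and TC2 can actually be executed:

```python
# Direct translation of the pseudocode; returns "0" instead of printing
# so the behaviour is easy to check.
def check_zero(x: int, y: int):
    if x == 0 or y == 0:
        return "0"

# TC1 and TC2 from above: together they execute every statement.
print(check_zero(0, 55))  # TC1
print(check_zero(5, 0))   # TC2
```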
4. Multiple Condition Coverage: In this technique, all the possible combinations of the
possible outcomes of conditions are tested at least once. Let’s consider the following
example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
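The four combinations TC1 to TC4 can be run against a direct translation of the condition (function name invented):

```python
# The same condition, exercised with every combination of outcomes.
def zero_condition(x: int, y: int) -> bool:
    return x == 0 or y == 0

cases = [(0, 0), (0, 5), (55, 0), (55, 5)]  # TC1..TC4
results = [zero_condition(x, y) for x, y in cases]
print(results)  # [True, True, True, False]
```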
5. Basis Path Testing: In this technique, control flow graphs are made from code or
flowchart and then Cyclomatic complexity is calculated which defines the number of
independent paths so that the minimal number of test cases can be designed for each
independent path.
Steps:
1. Draw the control flow graph of the code or flowchart.
2. Calculate the Cyclomatic complexity, V(G) = E - N + 2, where E is the number of
edges and N is the number of nodes in the graph.
3. Identify the independent paths; their number equals V(G).
4. Design one test case for each independent path.
6. Loop Testing: Loops are widely used and these are fundamental to many algorithms
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1, n, and n+1 passes
Nested loops: For nested loops, all the loops are set to their minimum count, and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop and
this is worked outwards till all the loops have been tested.
Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each. If they’re not independent, treat them like nesting.
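The simple-loop schedule above can be sketched as follows (function and data invented); note that n + 1 passes would deliberately probe the out-of-range boundary:

```python
# Loop under test: sum of the first k items of a list of size n.
def sum_first(items: list, k: int) -> int:
    total = 0
    for i in range(k):  # the simple loop being exercised
        total += items[i]
    return total

data = [1, 2, 3, 4, 5]  # n = 5
n = len(data)
# Skip entirely, one pass, two passes, m < n passes, n-1 and n passes.
for passes in (0, 1, 2, 3, n - 1, n):
    print(passes, sum_first(data, passes))
# k = n + 1 would raise IndexError, exposing exactly the off-by-one
# boundary this test schedule is designed to probe.
```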
In the process of STLC, three primary testing approaches are used: Black Box
Testing, White Box Testing, and Gray Box Testing. These methods differ in
the level of knowledge the tester has about the application and in how they
approach the testing process.