Software Engineering Important Questions and Answers
Unit 1
1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is said to be
correct if it covers all the requirements that are actually expected from the system.
2. Completeness:
Completeness of SRS indicates every sense of completion including the numbering of all the pages,
resolving the to be determined parts to as much extent as possible as well as covering all the
functional and non-functional requirements properly.
3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set of
requirements. Examples of conflict include differences in terminologies used at separate places,
logical conflicts like time period of report generation, etc.
4. Unambiguousness:
An SRS is said to be unambiguous if every requirement stated in it has only one interpretation. Ways to prevent ambiguity include the use of modelling techniques like ER diagrams, proper reviews and buddy checks, etc.
5. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily accepting changes to
the system to some extent. Modifications should be properly indexed and cross-referenced.
6. Verifiability:
An SRS is verifiable if there exists a specific technique to quantifiably measure the extent to which every requirement is met by the system. For example, a requirement stating that the system must be user-friendly is not verifiable; listing such requirements should be avoided.
7. Traceability:
One should be able to trace a requirement to a design component and then to a code segment in the program. Similarly, one should be able to trace a requirement to the corresponding test cases.
8. Design Independence:
There should be an option to choose from multiple design alternatives for the final system. More
specifically, the SRS should not include any implementation details.
9. Testability:
An SRS should be written in such a way that it is easy to generate test cases and test plans from the document.
Sequence Diagram:
• The sequence diagram is a UML diagram used to visualize the sequence of calls in a system that performs a specific functionality.
• The sequence diagram is used when the time sequence is the main focus.

Collaboration Diagram:
• The collaboration diagram is also a UML diagram; it is used to visualize the organization of the objects and their interaction.
• The collaboration diagram is used when object organization is the main focus.
2. Non-functional Requirements:
Non-functional requirements describe characteristics of the system that cannot be expressed as functions, such as the maintainability of the system, the portability of the system, the usability of the system, etc. Non-functional requirements may include:
1. Reliability issues
2. Accuracy of results
3. Human-computer interface issues
4. Constraints on the system implementation, etc.
3. Goals of Implementation:
The goals of implementation section documents general suggestions regarding development. These suggestions guide trade-offs among design goals. The goals of implementation section may document issues such as revisions to the system functionality that will be needed in the future, new devices to be supported in the future, reusability concerns, etc. These are things the developers should keep in mind throughout development so that the developed system can meet some aspects that are not needed straightaway.
9. Define Use case diagram? Draw and explain symbols for the same.
Use cases are represented with a labeled oval shape. Stick figures represent actors in the process, and
the actor's participation in the system is modeled with a line between the actor and use case. To depict
the system boundary, draw a box around the use case itself.
Advantages:
• Each phase must be completed before the next phase of development begins.
• Suited for smaller projects where requirements are well defined.

Disadvantages:
• Errors can be fixed only during the phase in which they occur.
• Not desirable for complex projects where requirements change frequently.
Incremental Model
The various phases of incremental model are as follows:
a) Requirement analysis: In the first phase of the incremental model, product analysis experts identify the requirements, and the requirement analysis team develops an understanding of the system's functional requirements. This phase plays a crucial role in developing software under the incremental model.
b) Design & Development: In this phase of the incremental model of the SDLC, the design of the system functionality and the development method are completed. Whenever the software adds new functionality, the incremental model applies the design and development phase to it.
c) Testing: In the incremental model, the testing phase checks the performance of each existing function
as well as additional functionality. In the testing phase, the various methods are used to test the
behavior of each task.
d) Implementation: The implementation phase covers the coding of the development system. It involves the final coding of the design produced in the design and development phase, whose functionality is then tested in the testing phase. After each completion of this phase, the working functionality of the product is enhanced and upgraded up to the final system product.
13. Draw a Sequence diagram for online ordering of food delivery System.
14. Describe the Evolutionary Model with its advantages and disadvantages.
15. Explain Deployment diagram with an example.
Unit 2
Cocomo (Constructive Cost Model) is a regression model based on LOC, i.e. the number of Lines of Code. It is
a procedural cost estimate model for software projects and is often used as a process of reliably
predicting the various parameters associated with making a project such as size, effort, cost, time, and
quality. It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects, which makes
it one of the best-documented models. The key parameters which define the quality of any software
products, which are also an outcome of the Cocomo are primarily Effort & Schedule:
• Effort: Amount of labor that will be required to complete a task. It is measured in person-months units.
• Schedule: Simply means the amount of time required for the completion of the job, which is, of course,
proportional to the effort put in. It is measured in the units of time such as weeks, months.
Different models of Cocomo have been proposed to predict the cost estimation at different levels,
based on the amount of accuracy and correctness required. All of these models can be applied to a
variety of projects, whose characteristics determine the value of constant to be used in subsequent
calculations. These characteristics pertaining to different system types are mentioned below. Boehm’s
definition of organic, semidetached, and embedded systems:
1. Organic – A software project is said to be an organic type if the team size required is adequately small,
the problem is well understood and has been solved in the past and also the team members have a
nominal experience regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached type if the vital characteristics such as
team size, experience, knowledge of the various programming environment lie in between that of
organic and Embedded. The projects classified as Semi-Detached are comparatively less familiar and
difficult to develop compared to the organic ones and require more experience and better guidance and
creativity. Eg: Compilers or different Embedded Systems can be considered of Semi-Detached type.
3. Embedded – A software project requiring the highest level of complexity, creativity, and experience
requirement fall under this category. Such software requires a larger team size than the other two
models and also the developers need to be sufficiently experienced and creative to develop such
complex models.
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model
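The Basic COCOMO model above can be sketched numerically. Effort = a·(KLOC)^b person-months and Schedule = c·(Effort)^d months, using Boehm's 1981 coefficients for the three system types (the 32 KLOC project below is a made-up example):

```python
# Basic COCOMO estimation (coefficients a, b, c, d from Boehm, 1981).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # Effort = a * (KLOC)^b
    schedule = c * effort ** d    # Schedule = c * (Effort)^d
    return effort, schedule

# A hypothetical 32 KLOC organic project:
effort, schedule = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} person-months, Schedule: {schedule:.1f} months")
```

For an organic 32 KLOC project this gives roughly 91 person-months of effort over about 14 months, which illustrates why effort grows faster than linearly with size.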
• Reusability. Code can be reused through inheritance, meaning a team does not have to write the same
code multiple times.
• Productivity. Programmers can construct new programs quicker through the use of multiple libraries and
reusable code.
• Easily upgradable and scalable. Programmers can implement system functionalities independently.
• Interface descriptions. Descriptions of external systems are simple, due to the message-passing techniques used for object communication.
• Security. Using encapsulation and abstraction, complex code is hidden, software maintenance is easier
and internet protocols are protected.
• Flexibility. Polymorphism enables a single function to adapt to the class it is placed in. Different objects
can also pass through the same interface.
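The "Reusability" and "Flexibility" points above can be sketched in a few lines. This is a minimal illustrative example (the class names are made up): subclasses reuse `describe` through inheritance, and one loop handles different objects polymorphically through the same interface:

```python
# Reuse via inheritance, flexibility via polymorphism (illustrative sketch).
class Shape:
    def area(self):
        raise NotImplementedError

    def describe(self):
        # Inherited (reused) by every subclass; no duplication needed.
        return f"{type(self).__name__}: area {self.area():.1f}"

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

# Polymorphism: different objects pass through the same interface.
for shape in (Rectangle(3, 4), Circle(1)):
    print(shape.describe())
```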
Coupling:
While creating, you should aim for low coupling, i.e., dependency among modules should be low.

Cohesion:
While creating, you should aim for high cohesion, i.e., a cohesive component/module focuses on a single function (i.e., single-mindedness) with little interaction with other modules of the system.
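A small sketch of what "high cohesion, low coupling" looks like in code (the module and function names are hypothetical): every function serves the single purpose of tax calculation, and the functions communicate with callers only through parameters and return values rather than shared global state:

```python
# A cohesive "tax" module: all functions serve one purpose (high cohesion),
# and data flows only through parameters/returns (low coupling).

def taxable_income(gross, deductions):
    """Single responsibility: compute the taxable base."""
    return max(gross - deductions, 0)

def tax_due(gross, deductions, rate=0.25):
    # Reuses taxable_income instead of duplicating the rule; reads and
    # writes no global state shared with other modules.
    return taxable_income(gross, deductions) * rate

print(tax_due(50000, 10000))  # → 10000.0
```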
Risk Management:
A software project can be affected by a large variety of risks. In order to be able to systematically identify the important risks that might affect a software project, it is necessary to categorize risks into different classes. The project manager can then examine which risks from each class are relevant to the project.
There are three main categories of risks that may affect a software project:
1. Project Risks:
Project risks concern various kinds of budgetary, schedule, personnel, resource, and customer-related issues. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project; it is very difficult to control something that cannot be seen. In any manufacturing project, such as car manufacturing, the project manager can see the product taking shape.
For example, he can see that the engine is fitted, then that the doors are fitted, that the car is being painted, etc., and so can easily assess the progress of the work and control it. The invisibility of the product being developed is an important reason why many software projects suffer from the risk of schedule slippage.
2. Technical Risks:
Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. Technical risks also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team's insufficient knowledge about the project.
3. Business Risks:
This type of risk includes the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.
User interface is the front-end application view with which the user interacts in order to use the software. The user can manipulate and control the software as well as the hardware by means of the user interface. Today, user interfaces are found almost everywhere digital technology exists: computers, mobile phones, cars, music players, airplanes, ships, etc.
User interface is part of the software and is designed in such a way that it is expected to give the user insight into the software. UI provides the fundamental platform for human-computer interaction.
UI can be graphical, text-based, audio-video based, depending upon the underlying hardware and
software combination. UI can be hardware or software or a combination of both.
The software becomes more popular if its user interface is:
• Attractive
• Simple to use
• Responsive in short time
• Clear to understand
• Consistent on all interfacing screens
UI is broadly divided into two categories:
1. Command Line Interface (CLI)
2. Graphical User Interface (GUI)
Function-Oriented Metrics are also known as Function Point Model. This model generally focuses on the
functionality of the software application being delivered. These methods are actually independent of
the programming language that is being used in software applications and based on calculating the
Function Point (FP). A function point is a unit of measurement that measures the business functionality
provided by the business product.
To determine whether a particular entry is simple, average, or complex, a criterion is needed and should be developed by the organization. With the help of observations or experiments, the different weighting factors should be determined as shown in the table. With the help of these tables, the count table can be computed.
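The function point calculation described above can be sketched as follows. This uses the standard "average" complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7) and the usual adjustment formula FP = UFP × (0.65 + 0.01 × ΣFi); the counts and factor ratings below are made-up example values:

```python
# Function Point (FP) computation with standard "average" weights.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, adjustment_factors):
    """counts: measurement-parameter counts; adjustment_factors: 14 ratings (0-5)."""
    ufp = sum(AVG_WEIGHTS[k] * n for k, n in counts.items())  # Unadjusted FP
    caf = 0.65 + 0.01 * sum(adjustment_factors)               # Complexity Adjustment Factor
    return ufp * caf

# Hypothetical counts: 24 inputs, 46 outputs, 8 inquiries, 4 internal files,
# 2 external interfaces, with all 14 adjustment factors rated "4".
counts = {"EI": 24, "EO": 46, "EQ": 8, "ILF": 4, "EIF": 2}
fp = function_points(counts, [4] * 14)
print(round(fp, 2))
```

Here UFP = 412 and CAF = 0.65 + 0.56 = 1.21, giving about 498.5 function points, which is independent of the programming language used, as the section notes.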
12. State and Explain the Quality metrics.
13. Write a short note on Levels of Testing.
Software Testing is an activity performed to identify errors so that they can be removed to obtain a product of greater quality. Software testing is required to assure and maintain the quality of software and to represent the ultimate review of specification, design, and coding. There are different levels of testing:
1. Unit Testing :
In this type of testing, errors are detected in every component or unit by individually testing the components or units of the software, to ensure that they are fit for use. A unit is the smallest testable part of the software.
2. Integration Testing :
In this testing, two or more unit-tested modules are integrated and tested together, i.e., the interacting components are verified to check whether the integrated modules work as expected; interface errors are also detected.
3. System Testing :
In system testing, the complete, integrated software is tested, i.e., all the elements forming the system are tested as a whole to verify that the system meets its requirements.
4. Acceptance Testing :
It is a kind of testing conducted to ensure that the requirements of the users are fulfilled prior to delivery and that the software works correctly in the user's working environment.
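Unit testing, the first level above, can be sketched with Python's built-in `unittest` framework. The function under test here is a made-up example; each test method verifies one behavior of the unit in isolation:

```python
# A minimal unit test with Python's unittest (function under test is inline).
import unittest

def discounted_price(price, percent):
    """Unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestDiscountedPrice(unittest.TestCase):
    def test_normal_discount(self):
        # Verifies the happy path of the unit.
        self.assertAlmostEqual(discounted_price(200, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Verifies the unit rejects out-of-range input.
        with self.assertRaises(ValueError):
            discounted_price(200, 150)

if __name__ == "__main__":
    unittest.main()
```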
Unit 3
1. Functional errors
This is a broad type of error that happens whenever software doesn’t behave as intended. For
example, if the end user clicks the “Save” button, but their entered data isn’t saved, this is a functional
error. After some investigation, a software tester may identify a more specific culprit behind the error
and reclassify it as a different type of bug.
2. Syntax errors
A syntax error occurs in the source code of a program and prevents the program from being properly
compiled. This type of error is very common and typically occurs when there are one or more missing
or incorrect characters in the code. For example, a single missing bracket could cause a syntax error.
Compiling programs typically indicate where a syntax error has occurred so the programmer can fix it.
3. Logic errors
A logic error represents a mistake in the software flow and causes the software to behave incorrectly.
This type of error can cause the program to produce an incorrect output, or even hang or crash. Unlike
syntax errors, logic errors will not prevent a program from compiling.
A common logic error is the infinite loop. Due to poorly written code, the program repeats a sequence
endlessly until it crashes or halts due to external intervention, such as the user closing a browser
window or turning the power off.
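The infinite-loop logic error above can be made concrete with a small made-up example: a `!=` loop condition that is skipped entirely for odd inputs never terminates, while a `>` condition is guaranteed to become false:

```python
# A classic logic error: a loop condition that never becomes false.
#
# Buggy version (shown only as a comment -- it would hang for odd inputs):
#   i = 9
#   while i != 0:   # 9, 7, 5, 3, 1, -1, ... skips 0 and never stops
#       i -= 2
#
# Corrected version: ">" guarantees the condition eventually becomes false.
def countdown(start, step=2):
    steps = []
    i = start
    while i > 0:          # terminates for any start value
        steps.append(i)
        i -= step
    return steps

print(countdown(9))  # → [9, 7, 5, 3, 1]
```

Note that both versions compile and run, which is exactly why logic errors are harder to catch than syntax errors.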
4. Calculation errors
Anytime software returns an incorrect value — whether it’s one the end user sees or one that’s passed
to another program — that’s a calculation error. This could happen for several reasons:
While such an error can be costly in certain contexts — like in banking, where an incorrect calculation
can result in the loss of money — hunting down the calculation error is typically just a matter of math.
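One concrete, reproducible source of calculation errors is binary floating point, which cannot represent most decimal fractions exactly; this is why the banking example above typically calls for decimal arithmetic:

```python
# Binary floats cannot represent 0.1 or 0.2 exactly, so a "correct"
# calculation still returns a slightly wrong value.
from decimal import Decimal

subtotal = 0.1 + 0.2
print(subtotal == 0.3)          # False: subtotal is 0.30000000000000004

# Decimal performs exact decimal arithmetic, appropriate for money:
exact = Decimal("0.1") + Decimal("0.2")
print(exact == Decimal("0.3"))  # True
```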
5. Unit-level bugs
David LaVine, founder of RocLogic Marketing and a former engineer, says unit-level software bugs are
the most common. They’re also typically the easiest to fix.
After your software is initially coded, you need to see how it works through unit testing — taking a
small, logical section of code and verifying that it performs as designed. This is where various forms of
state machine bugs, calculation errors, and basic logic bugs are often uncovered.
“The bugs are relatively easy to isolate when you’re dealing with a small amount of code that’s within
your control,” LaVine says. “They’re also relatively easy to replicate because there aren’t a lot of
complex, asynchronous interactions taking place yet.”
6. System-level integration bugs
This type of bug occurs when two or more pieces of software from separate subsystems interact erroneously. Often the two sets of code are written by different developers. LaVine explains that even when there's a solid set of requirements for developers to follow, there's usually some level of interpretation required or details that get overlooked, causing the interaction between two pieces of software to fail.
“System-level integration bugs are harder to fix because you’re dealing with more than one piece of
software, so the complexity increases while overall visibility decreases," LaVine says. "This class of bug
is often caused by things like byte-swapping, message parsing, or memory overflow issues."
7. Out-of-bounds bugs
LaVine notes that these types of software bugs show up when the end user interacts with the software in ways that weren't expected. This often occurs when the user sets a parameter outside the limits of intended use, such as entering a significantly larger or smaller number than coded for, or inputting an unexpected data type, like text where a number should be.
Six Sigma is the process of producing high and improved quality output. This can be done in two phases
– identification and elimination. The cause of defects is identified and appropriate elimination is done
which reduces variation in the whole process. A Six Sigma process is one in which 99.99966% of all the products produced have the same features and are free from defects.
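The 99.99966% figure corresponds to the conventional Six Sigma target of about 3.4 defects per million opportunities (DPMO), which follows directly from the yield:

```python
# Converting the Six Sigma yield (99.99966%) into defects per million
# opportunities (DPMO).
yield_fraction = 0.9999966
dpmo = (1 - yield_fraction) * 1_000_000
print(round(dpmo, 1))  # → 3.4
```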
Characteristics of Six Sigma:
The Characteristics of Six Sigma are as follows:
1. Statistical Quality Control:
Six Sigma is derived from the Greek letter σ (sigma), which denotes standard deviation in statistics. Standard deviation is used for measuring the quality of output.
2. Methodical Approach:
Six Sigma is a systematic approach applied through DMAIC and DMADV, which can be used to improve the quality of production. DMAIC stands for Define-Measure-Analyze-Improve-Control, while DMADV stands for Define-Measure-Analyze-Design-Verify.
3. Fact and Data-Based Approach:
The statistical and methodical method shows the scientific basis of the technique.
Structural testing is basically related to the internal design and implementation of the software i.e. it
involves the development team members in the testing team. It basically tests different aspects of the
software according to its types. Structural testing is just the opposite of behavioral testing.
Test Oracle is a mechanism, different from the program itself, that can be used to test the accuracy of
a program’s output for test cases.
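A test oracle can be sketched as a property checker: rather than hard-coding expected outputs, it verifies, independently of the program under test, the properties any correct output must have. This hypothetical oracle judges the output of a sorting routine:

```python
# A test oracle for sorting: checks the defining properties of a correct
# result instead of comparing against a single hard-coded answer.
def sort_oracle(original, result):
    """Return True iff `result` is a correctly sorted permutation of `original`."""
    is_ordered = all(a <= b for a, b in zip(result, result[1:]))
    is_permutation = sorted(original) == sorted(result)  # same elements, same counts
    return is_ordered and is_permutation

data = [3, 1, 2]
print(sort_oracle(data, sorted(data)))  # → True
print(sort_oracle(data, [1, 3, 2]))     # → False (not ordered)
```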
The test plan is the basis of all software testing. It is the most crucial activity, as it ensures that all planned activities are available and listed in an appropriate sequence.
Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the set of
activities which ensure processes, procedures as well as standards are suitable for the project and
implemented correctly.
Software Quality Assurance is a process which works parallel to development of software. It focuses on
improving the process of development of software so that problems can be prevented before they
become a major issue. Software Quality Assurance is a kind of Umbrella activity that is applied
throughout the software process.
Software Quality Assurance has:
1. A quality management approach
2. Formal technical reviews
3. Multi testing strategy
4. Effective software engineering technology
5. Measurement and reporting mechanism
It is the most established, effective measure of quantifying and calculating the business value of testing.
There are four categories to measure cost of quality: Prevention costs, Detection costs, Internal failure
costs, and External failure costs.
These are explained as follows below.
1. Prevention costs include cost of training developers on writing secure and easily maintainable code
2. Detection costs include the cost of creating test cases, setting up testing environments, revisiting
testing requirements.
3. Internal failure costs include costs incurred in fixing defects just before delivery.
4. External failure costs include product support costs incurred by delivering poor quality software.
11. Differentiate between White Box testing and Black Box Testing.
Black Box Testing:
1. It is a way of software testing in which the internal structure, code, or program of the software is hidden and nothing is known about it.
2. Implementation of code is not needed for black box testing.
3. No knowledge of implementation is needed.

White Box Testing:
1. It is a way of testing the software in which the tester has knowledge about the internal structure, code, or program of the software.
2. Code implementation is necessary for white box testing.
3. Knowledge of implementation is required.
Verification:
Verification is the process of checking that the software achieves its goal without any bugs. It ensures that we are building the product right, i.e., it verifies whether the developed product fulfills the requirements we have specified.
Verification is Static Testing.
Activities involved in verification:
1. Inspections
2. Reviews
3. Walkthroughs
4. Desk-checking
Validation:
Validation is the process of checking whether the software product is up to the mark, i.e., whether it meets the high-level requirements. It checks that we are developing the right product, i.e., it validates the actual product against the expected product.
Validation is Dynamic Testing.
Activities involved in validation:
1. Black box testing
2. White box testing
3. Unit testing
4. Integration testing