Open Software - Engg - UGC NET Old Paper - 2004 - 17 PDF

Uploaded by

Mukesh Bagaria
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
11 views107 pages

Open Software - Engg - UGC NET Old Paper - 2004 - 17 PDF

Uploaded by

Mukesh Bagaria
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 107

14. SOFTWARE ENGINEERING
Paper-II
System Development Life Cycle (SDLC): Steps, Waterfall model,
Prototypes, Spiral model
Software Metrics: Software Project Management.
Software Design: System design, detailed design, function oriented design,
object oriented design, user interface design. Design level metrics.
Coding and Testing: Testing level metrics, Software quality and reliability,
Clean room approach, software reengineering.
Paper-III
Software development models, Requirement analysis and specifications,
Software design, Programming techniques and tools, Software validation and
quality assurance techniques, Software maintenance and advanced concepts,
Software management.

Paper Name No. of Questions

1. Paper - II December - 2004 5


2. Paper - II June - 2005 7
3. Paper - II December - 2005 5
4. Paper - II June - 2006 5
5. Paper - II December - 2006 6
6. Paper - II June - 2007 4
7. Paper - II December - 2007 5
8. Paper - II June - 2008 5
9. Paper - II December - 2008 3
10. Paper - II June - 2009 5
11. Paper - II December - 2009 3
12. Paper - II June - 2010 5
13. Paper - II December - 2010 6
14. Paper - II June - 2011 4
15. Paper - II December- 2011 7
16. Paper - II June - 2012 7
17. Paper - III June - 2012 4
18. Paper - II December - 2012 3
19. Paper - III December - 2012 3
20. Paper - II June - 2013 5
21. Paper - III June - 2013 6
22. Paper - II June - 2013 (Retest) 4
23. Paper - III June - 2013 (Retest) 5
24. Paper - II December - 2013 4
25. Paper - III December - 2013 6
26. Paper - II June - 2014 5
27. Paper - III June - 2014 6
28. Paper - II December - 2014 5
29. Paper - III December - 2014 5
30. Paper - II June - 2015 4
31. Paper - III June - 2015 7
32. Paper - II December - 2015 2
33. Paper - III December - 2015 3
34. Paper - II July - 2016 4
35. Paper - III July - 2016 6
36. Paper - II August - 2016 (Retest) 6
37. Paper - III August - 2016(Retest) 5
38. Paper - II Jan - 2017 5
39. Paper - III Jan - 2017 6
1. Paper - II December - 2004
1. The main objective of designing various modules of a software
system is:
(A) To decrease the cohesion and to increase the coupling
(B) To increase the cohesion and to decrease the coupling
(C) To increase the coupling only
(D) To increase the cohesion only
Ans: B
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling
arrangements.

2. Three essential components of a software project plan are:


(A) Team structure, Quality assurance plans, Cost estimation
(B) Cost estimation, Time estimation, Quality assurance plan
(C) Cost estimation, Time estimation, Personnel estimation
(D) Cost estimation, Personnel estimation, Team structure
Ans: B

The triple constraint for software projects comprises cost, time and
quality. It is essential for a software organization to deliver a quality
product, keep the cost within the client's budget constraints and deliver
the project as per schedule.

3. Reliability of software is dependent on:


(A) Number of errors present in software
(B) Documentation
(C) Testing suites
(D) Development Processes
Ans: A
Software reliability testing is a testing technique that evaluates a
software's ability to function consistently under given environmental
conditions, and helps uncover issues in the software's design and
functionality.
Parameters involved in Reliability Testing:
Dependent elements of reliability Testing:
Probability of failure-free operation
Length of time of failure-free operation
The environment in which it is executed
Key Parameters that are measured as part of reliability are given below:
MTTF: Mean Time To Failure
MTTR: Mean Time To Repair
MTBF: Mean Time Between Failures (= MTTF + MTTR)
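The relationship between these parameters can be checked with a minimal
sketch; the figures below are hypothetical, chosen only for illustration:

```python
# Hypothetical reliability figures, in hours.
mttf = 950.0   # Mean Time To Failure: average uptime between failures
mttr = 50.0    # Mean Time To Repair: average downtime per failure

# As stated above, MTBF = MTTF + MTTR.
mtbf = mttf + mttr

# A commonly derived figure: steady-state availability = MTTF / MTBF.
availability = mttf / mtbf

print(mtbf)          # 1000.0
print(availability)  # 0.95
```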

4. In transform analysis, input portion is called:


(A) Afferent branch (B) Efferent branch
(C) Central Transform (D) None of the above
Ans: A
Transform analysis identifies the primary functional components
(modules) and the high level inputs and outputs for these components.
The first step in transform analysis is to divide the DFD into 3 types of
parts:
• Input • Logical processing • Output
The input portion of the DFD includes processes that transform
input data from physical (e.g. character from terminal) to logical
forms (e.g. internal tables, lists, etc.). Each input portion is called
an afferent branch.
The output portion of a DFD transforms output data from logical to
physical form. Each output portion is called an efferent
branch. The remaining portion of a DFD is called the central
transform.
In the next step of transform analysis, the structure chart is derived by
drawing one functional component for the central transform.

5. The Function Point (FP) metric is:


(A) Calculated from user requirements
(B) Calculated from Lines of code
(C) Calculated from software’s complexity assessment
(D) None of the above
Ans: C
Function point metric computes the size of a software product (in units
of function points or FPs) using five characteristics of the product, as
shown in the following expression. The size of a product in function
points (FP) can be expressed as the weighted sum of these five problem
characteristics. The weights associated with the five characteristics
were proposed empirically and validated by observations over many
projects. Function point is computed in two steps. The first step is to
compute the unadjusted function point (UFP).

UFP = (Number of inputs)*4 + (Number of outputs)*5 + (Number of
inquiries)*4 + (Number of files)*10 + (Number of interfaces)*10

The second step is to compute the Complexity Adjustment Factor (CAF)
from the 14 complexity adjustment values (each rated 0 to 5). If N is
their sum, then:
CAF = 0.65 + 0.01N
Then,
Delivered Function Points (FP) = CAF x Raw FP
This FP can then be used in various metrics, such as:
Cost = $ / FP
Quality = Errors / FP
Productivity = FP / person-month
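The two-step computation above can be sketched in Python; the counts and
the adjustment total N below are hypothetical, chosen only to exercise the
formulas:

```python
def unadjusted_fp(inputs, outputs, inquiries, files, interfaces):
    # Weights taken from the UFP expression above.
    return inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 10

def adjusted_fp(ufp, n):
    # n is the sum of the 14 complexity adjustment values (0..5 each),
    # so CAF ranges from 0.65 to 1.35.
    caf = 0.65 + 0.01 * n
    return caf * ufp

# Hypothetical project counts:
ufp = unadjusted_fp(inputs=10, outputs=8, inquiries=5, files=3, interfaces=2)
print(ufp)                      # 40 + 40 + 20 + 30 + 20 = 150
print(adjusted_fp(ufp, n=35))   # CAF = 1.00, so roughly 150.0
```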

2. Paper-II June-2005

6. Which of the following tools is not required during system analysis


phase of system development Life cycle?
(A) CASE Tool
(B) RAD Tool
(C) Reverse engineering tool
(D) None of these
Ans: C
CASE stands for Computer Aided Software Engineering: the development
and maintenance of software projects with the help of various automated
software tools.
CASE Tools
CASE tools are sets of software application programs used to automate
SDLC activities. CASE tools are used by software project managers,
analysts and engineers to develop software systems.
A number of CASE tools are available to simplify various stages of the
Software Development Life Cycle: Analysis tools, Design tools, Project
management tools, Database management tools and Documentation tools, to
name a few.

Upper Case Tools - Upper CASE tools are used in planning,


analysis and design stages of SDLC.
Lower Case Tools - Lower CASE tools are used in implementation,
testing and maintenance.
Integrated Case Tools - Integrated CASE tools are helpful in all the
stages of SDLC, from Requirement gathering to Testing and
documentation.
The RAD (Rapid Application Development) model is based on
prototyping and iterative development with no specific planning
involved. The process of writing the software itself involves the
planning required for developing the product.
Rapid Application development focuses on gathering customer
requirements through workshops or focus groups, early testing of the
prototypes by the customer using iterative concept, reuse of the existing
prototypes (components), continuous integration and rapid delivery.
7. A black hole in a DFD is a:
(A) A data store with no inbound flows
(B) A data store with only inbound flows
(C) A data store with more than one inbound flow
(D) None of these.
Ans: B
A processing step may have input flows but no output flows. This
situation is sometimes called a black hole.

8. The capability maturity model (CMM) defines 5 levels:


(a) Level 1 (i) Managed
(b) Level 2 (ii) Defined
(c) Level 3 (iii) Repeatable
(d) Level 4 (iv) Initial
(e) Level 5 (v) Optimized
correct matching is:
a b c d e
(A) (i) (ii) (iii) (iv) (v)
(B) (iv) (iii) (ii) (i) (v)
(C) (v) (i) (iii) (ii) (iv)
(D) (v) (ii) (i) (iii) (iv)
Ans: B
The Software Engineering Institute (SEI) Capability Maturity Model
(CMM) specifies an increasing series of levels of a software
development organization. The higher the level, the better the software
development process; however, reaching each higher level is an
expensive and time-consuming process.
At the initial level, processes are disorganized, even chaotic. Success is
likely to depend on individual efforts, and is not considered to be
repeatable, because processes would not be sufficiently defined and
documented to allow them to be replicated.
At the repeatable level, basic project management techniques are
established, and successes could be repeated, because the
requisite processes would have been made established, defined, and
documented.
At the defined level, an organization has developed its own standard
software process through greater attention to documentation,
standardization, and integration.
At the managed level, an organization monitors and controls its own
processes through data collection and analysis.
At the optimizing level, processes are constantly being improved through
monitoring feedback from current processes and introducing
innovative processes to better serve the organization's particular needs.

9. Which one of the following is not a software process model?


(A) Linear sequential model
(B) Prototyping model
(C) The spiral model
(D) COCOMO model
Ans: D
A Process Model describes the sequence of phases for the entire lifetime
of a product. Therefore it is sometimes also called the Product Life
Cycle. This covers everything from the initial commercial idea until the
final de-installation or disassembly of the product after its use.
The Waterfall Model
The waterfall model is believed to have been the first process model
introduced and widely followed in software engineering. The innovation
was that, for the first time, software engineering was divided into
separate phases. In the early 1970s there was no awareness of splitting
up software development into different phases; programs were very small
and had only a few requirements.
The V Model
A further development of the waterfall model led to the so called "V-
Model". If you look at it closely the individual steps of the process
are almost the same as in the waterfall model. However, there is one big
difference. Instead of going down the waterfall in a linear way the
process steps are bent upwards at the coding phase, to form
the typical V shape.
The Spiral Model
The Spiral Model is the most flexible and agile of all traditional
software process models. The process begins at the centre
position. From there it moves clockwise in traversals. Each traversal of the
spiral usually results in a deliverable. It is not clearly defined what
this deliverable is. This changes from traversal to traversal.

10. System Development Life-cycle has following stages:


(I) Requirement analysis (II) Coding
(III) Design (IV) Testing
Which option describes the correct sequence of stages?
(A) III, I, IV, II
(B) II, III, I, IV
(C) I, III, IV, II
(D) None of the above
Ans: D
Following are the seven phases of the SDLC:
1. Planning
2. Systems Analysis
3. Systems Design
4. Development
5. Testing
6. Implementation
7. Maintenance
11. Which one is measure of software complexity ?
(A) Number of lines of code (LOC)
(B) Number of man years
(C) Number of function points (FP)
(D) All of the above
Ans: A
Three main measures of software complexity
1. Halstead's Complexity Measures
2. Cyclomatic Complexity Measures
3. Function Point
Number of lines of code (LOC) and Number of man years are size
measures.

12. Which type of coupling is least preferred ?


(A) Content coupling
(B) Data coupling
(C) Control coupling
(D) Common coupling
Ans: A
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling
arrangements.
Types of Coupling
Coupling can be "low" (also "loose" and "weak") or "high" (also "tight"
and "strong"). Some types of coupling, in order of
highest to lowest coupling, are as follows:
Content coupling (high)
Content coupling is when one module modifies or relies on the internal
workings of another module (e.g., accessing local data of
another module). Therefore changing the way the second module
produces data (location, type, timing) will lead to changing the
dependent module.
Common coupling
Common coupling is when two modules share the same global data (e.g.,
a global variable). Changing the shared resource implies
changing all the modules using it.
External coupling
External coupling occurs when two modules share an externally imposed
data format, communication protocol, or device
interface.
Control coupling
Control coupling is one module controlling the flow of another, by
passing it information on what to do (e.g., passing a what-to-do flag).
Stamp coupling (Data-structured coupling)
Stamp coupling is when modules share a composite data structure and use
only a part of it, possibly a different part (e.g., passing a whole
record to a function that only needs one field of it). This may lead to
changing the way a module reads a record because a field, which the
module doesn't need, has been modified.
Data coupling
Data coupling is when modules share data through, for example,
parameters. Each datum is an elementary piece, and these are
the only data shared (e.g., passing an integer to a function that
computes a square root).
Message coupling (low)
This is the loosest type of coupling. It can be achieved by state
decentralization (as in objects), and component communication is done
via parameters or message passing.
No coupling
Modules do not communicate at all with one another.
Coupling, from tightest to loosest: Content > Common > Control > Stamp >
Data. Tighter coupling means more interdependency, more coordination and
more information flow between modules.
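The two extremes can be contrasted in a small Python sketch; the class
and function names here are invented for illustration only:

```python
# Content coupling (high): the report function reaches into the internal
# _records list of Store, so any change to that layout breaks it.
class Store:
    def __init__(self):
        self._records = [("alice", 10), ("bob", 20)]

def report_content_coupled(store):
    # Relies on another module's internals -- content coupling.
    return sum(amount for _, amount in store._records)

# Data coupling (low): the caller passes only the elementary data needed.
def report_data_coupled(amounts):
    return sum(amounts)

store = Store()
print(report_content_coupled(store))  # 30
print(report_data_coupled([10, 20]))  # 30
```

Both produce the same result, but only the data-coupled version survives
a change to how Store keeps its records.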

3. Paper - II December - 2005


13. The testing of software against SRS is called:
(A) Acceptance testing (B) Integration testing
(C) Regression testing (D) Series testing
Ans: A
Testing Levels
Testing itself may be defined at various levels of the SDLC. The testing
process runs parallel to software development: before moving to the next
stage, a stage is tested, validated and verified. Separate testing is
done just to make sure that there are no hidden bugs or issues left in
the software. Software is tested at various levels -
Unit Testing
While coding, the programmer performs some tests on that unit of the
program to know if it is error free. Testing is performed under the
white-box testing approach. Unit testing helps developers verify that
individual units of the program are working as per requirements and are
error free.
Integration Testing
Even if the units of software are working fine individually, there is a
need to find out whether the units, when integrated together, would also
work without errors, e.g. argument passing and data updating.
System Testing
The software is compiled as product and then it is tested as a whole.
This can be accomplished using one or more of the following tests:
Functionality testing - Tests all functionalities of the software
against the requirement.
Performance testing - This test proves how efficient the software
is. It tests the effectiveness and average time taken by the software to do
desired task. Performance testing is done by means of load testing and
stress testing where the software is put under high user and data load
under various environment conditions.
Security & Portability - These tests are done when the software is
meant to work on various platforms and accessed by number of
persons.
Acceptance Testing
When the software is ready to hand over to the customer, it has to go
through the last phase of testing, where it is tested for user
interaction and response. This is important because even if the software
matches all user requirements, if the user does not like the way it
appears or works, it may be rejected.
Alpha testing - The team of developers themselves perform alpha
testing by using the system as if it is being used in a work
environment. They try to find out how a user would react to some action
in the software and how the system should respond to inputs.
Beta testing - After the software is tested internally, it is handed
over to the users to use under their production environment, for
testing purposes only. This is not yet the delivered product. Developers
expect that users at this stage will surface minor problems that were
missed earlier.
Regression Testing
Whenever a software product is updated with new code, feature or
functionality, it is tested thoroughly to detect if there is any
negative impact of the added code. This is known as regression testing.

14. The lower degree of cohesion is:


(A) logical cohesion (B) coincidental cohesion
(C) procedural cohesion (D) communicational cohesion
Ans: B
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling
arrangements.
Types of Cohesion
Coincidental cohesion:
A module is said to have coincidental cohesion, if it performs a set of
tasks that relate to each other very loosely, if at all. In this case, the module
contains a random collection of functions. It is likely that the functions have
been put in the module out of pure coincidence without any thought or
design. For example, in a transaction processing system (TPS), the get-input,
print-error, and summarize-members functions are grouped into one
module. The grouping does not have any relevance to the structure of
the problem.
Logical cohesion:
A module is said to be logically cohesive, if all elements of the module
perform similar operations, e.g. error handling, data input, data output, etc. An
example of logical cohesion is the case where a set of print functions
generating different output reports are arranged into a single module.
Temporal cohesion:
When a module contains functions that are related by the fact that all the
functions must be executed in the same time span, the module is said to exhibit
temporal cohesion. The set of functions are responsible for
initialization, start-up, shutdown of some process, etc. exhibit temporal
cohesion.
Procedural cohesion:
A module is said to possess procedural cohesion, if the set of functions
of the module are all part of a procedure (algorithm) in which certain sequence
of steps have to be carried out for achieving an objective, e.g. the algorithm for
decoding a message.
Communicational cohesion:
A module is said to have communicational cohesion, if all functions of
the module refer to or update the same data structure, e.g. the set of
functions defined on an array or a stack.
Sequential cohesion:
A module is said to possess sequential cohesion, if the elements of a
module form the parts of sequence, where the output from one element
of the sequence is input to the next. For example, in a TPS, the get-input,
validate-input, sort-input functions are grouped into one module.
Functional cohesion:
Functional cohesion is said to exist, if different elements of a module
cooperate to achieve a single function. For example, a module containing all
the functions required to manage employees’ pay-roll exhibits functional
cohesion. Suppose a module exhibits functional cohesion and we are asked to
describe what the module does, then we would be able to describe it using a
single sentence.
Types of Cohesion
Functional cohesion (Most Required)
Sequential cohesion
Communicational cohesion
Procedural cohesion
Temporal cohesion
Logical cohesion
Coincidental cohesion (Least Required)
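The two ends of this scale can be sketched in Python; the module
contents and figures below are invented for illustration:

```python
# Coincidental cohesion (least desirable): unrelated tasks grouped by
# accident, as in the TPS example above (get-input, print-error,
# summarize-members thrown into one module).
def misc_get_input(data): return data.strip()
def misc_print_error(msg): return f"ERROR: {msg}"
def misc_summarize(members): return len(members)

# Functional cohesion (most desirable): every function cooperates toward
# a single purpose -- here, a toy payroll computation.
def gross_pay(hours, rate): return hours * rate
def tax(gross, rate=0.2): return gross * rate
def net_pay(hours, rate):
    g = gross_pay(hours, rate)
    return g - tax(g)

print(net_pay(40, 10.0))  # 400.0 - 80.0 = 320.0
```

The payroll module can be described in one sentence ("computes net
pay"), which is the informal test for functional cohesion given above;
no single sentence describes the miscellaneous module.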

15. The reliability of the software is directly dependent upon:


(A) Quality of the design
(B) Programmer’s experience
(C) Number of error
(D) Set of user requirements
Ans: C
Software reliability testing is a testing technique that evaluates a
software's ability to function consistently under given environmental
conditions, and helps uncover issues in the software's design and
functionality.
Parameters involved in Reliability Testing:
Dependent elements of reliability Testing:
Probability of failure-free operation
Length of time of failure-free operation
The environment in which it is executed
Key Parameters that are measured as part of reliability are given below:
MTTF: Mean Time To Failure
MTTR: Mean Time To Repair
MTBF: Mean Time Between Failures (= MTTF + MTTR)
16. Successive layer of design in software using bottom-up design is
called:
(A) Layer of Refinement (B) Layer of Construction
(C) Layer of abstraction (D) None of the above
Ans: C
Top-down design uses layers of refinement.
Bottom-up design uses layers of abstraction.

17. Sliding window concept of software project management is:


(A) Preparation of comprehensible plan
(B) Preparation of the various stages of development
(C) Ad-hoc planning
(D) Requirement analysis
Ans: B
Project planning is a very tedious task. If we plan the complete
development process up front, then, especially for big and risky
projects, it is very tough to plan completely and accurately.
Hence, we prefer to make plans as the stages are successively completed.
As a new stage arrives, we make a separate plan for it. This technique
is called sliding window planning, and the project is planned more
accurately in successive development stages.

4. Paper - II June - 2006

18. In software project planning, work Breakdown structure must be


.................
(A) A graph (B) A tree
(C) A Euler’s graph (D) None of the above
Ans: B
Dividing complex projects into simpler and manageable tasks is the
process identified as Work Breakdown Structure (WBS). Usually, project
managers use this method to simplify project execution. In WBS, much
larger tasks are broken down into manageable chunks of work. These
chunks can be easily supervised and estimated.
WBS is not restricted to a specific field when it comes to application.
This methodology can be used for any type of project management.
In a WBS diagram, the project scope is graphically expressed. Usually
the diagram starts with a graphic object or a box at the top, which represents
the entire project. Then, there are sub-components under the box.
Gantt chart is used for tracking the progression of the tasks derived by
WBS.
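The tree structure required of a WBS can be sketched as nested nodes;
the task names and person-day estimates below are hypothetical:

```python
# A WBS is a tree: the root is the whole project, children are chunks of
# work, and leaves carry the estimates.
wbs = {
    "name": "Payroll System", "estimate": 0, "children": [
        {"name": "Analysis", "estimate": 10, "children": []},
        {"name": "Design", "estimate": 15, "children": []},
        {"name": "Build", "estimate": 0, "children": [
            {"name": "Coding", "estimate": 30, "children": []},
            {"name": "Testing", "estimate": 20, "children": []},
        ]},
    ],
}

def total_estimate(node):
    # Roll leaf estimates up the tree, as a project manager would.
    return node["estimate"] + sum(total_estimate(c) for c in node["children"])

print(total_estimate(wbs))  # 10 + 15 + 30 + 20 = 75
```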

19. In Software Metrics, McCABE’s cyclomatic number is given by


following formula:
(A) c=e-n+2p (B) c=e-n-2p
(C) c=e+n+2p (D) c=e-n*2p
Ans: A
Cyclomatic complexity is a source code complexity measurement that is
being correlated to a number of coding errors. It is calculated by developing a
Control Flow Graph of the code that measures the number of linearly-
independent paths through a program module.
The lower a program's cyclomatic complexity, the lower the risk to
modify it and the easier it is to understand. It can be represented
using the formula:
Cyclomatic complexity = E - N + 2P
where,
E = number of edges in the flow graph,
N = number of nodes in the flow graph,
P = number of connected components (P = 1 for a single program graph).
Example :
IF A = 10 THEN
IF B > C THEN
A= B
ELSE
A= C
ENDIF
ENDIF
Print A
Print B
Print C

The cyclomatic complexity is calculated from the control flow graph of
this code, which has seven nodes and eight edges; hence the cyclomatic
complexity is 8 - 7 + 2 = 3.
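The formula from the answer, c = e - n + 2p, can be written as a
one-line helper and applied to the example's flow graph:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P; P is 1 for a single connected flow graph.
    return edges - nodes + 2 * components

# The flow graph of the example above has 8 edges and 7 nodes.
print(cyclomatic_complexity(edges=8, nodes=7))  # 3
```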

20. In a good software design, ................ coupling is desirable between


modules.
(A) Highest (B) Lowest
(C) Internal (D) External
Ans: B
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling
arrangements

21. System study yields the following:


(A) Requirement specifications
(B) Prevailing process description
(C) Data source identification
(D) All the above
Ans: D

22. The COCOMO model is used for ..................


(A) software design
(B) software cost estimation
(C) software cost approximation
(D) software analysis
Ans: B
COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry
W. Boehm. It divides the software product into three categories of
software: organic, semi-detached and embedded.
Boehm’s [1981] definition of organic, semidetached, and embedded
systems are elaborated below.
Organic: A development project can be considered of organic type, if
the project deals with developing a well understood application program, the
size of the development team is reasonably small, and the team members are
experienced in developing similar types of projects.
Semidetached: A development project can be considered of
semidetached type, if the development consists of a mixture of experienced and
inexperienced staff. Team members may have limited experience on related
systems but may be unfamiliar with some aspects of the system being
developed.
Embedded: A development project is considered to be of embedded type if
the software being developed is strongly coupled to complex hardware, or
if stringent regulations on the operational procedures exist.
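The basic COCOMO effort equation, Effort = a x (KLOC)^b, with Boehm's
[1981] coefficients for the three categories, can be sketched as below;
the 32 KLOC project size is hypothetical:

```python
# Basic COCOMO: Effort (person-months) = a * (KLOC) ** b,
# with Boehm's [1981] coefficients for the three product classes.
COEFFICIENTS = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def effort_person_months(kloc, mode):
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

# A hypothetical 32 KLOC organic project:
print(round(effort_person_months(32, "organic"), 1))  # roughly 91 person-months
```

For the same size, the embedded coefficients yield a noticeably larger
effort, reflecting the tighter hardware and regulatory constraints.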

5. Paper - II December - 2006

23. Which possibility among the following is invalid in case of a Data


Flow Diagram ?
(A) A process having in-bound data flows more than out-bound data
flows
(B) A data flow between two processes
(C) A data flow between two data stores
(D) A data store having more than one in-bound data flows
Ans: C
DFD is a graphical representation of the "flow" of data through an
information system, modelling its process aspects. A DFD is often
used as a preliminary step to create an overview of the system, which can later
be elaborated. DFDs can also be used for the visualization of data
processing (structured design). A DFD shows what kind of
information will be input to and output from the system, where the data will
come from and go to, and where the data will be stored. It does not show
information about the timing of process or information about whether
processes will operate in sequence or in parallel.
24. Software Cost Performance index (CPI) is given by:
(A) BCWP/ACWP (B)
(C) BCWP−ACWP (D) BCWP−BCWS
Where: BCWP stands for Budgeted Cost of Work Performed
BCWS stands for Budget Cost of Work Scheduled
ACWP stands for Actual Cost of Work Performed
Ans: A
Schedule Performance Index (SPI) and Cost Performance Index (CPI),
like variances, allow you to assess the health of a project.
In specific, SPI and CPI help you analyze the efficiency of schedule
performance and cost performance of any project.
Cost Performance Indicator
Cost Performance Indicator (CPI) is an index showing the efficiency of
the utilization of the resources on the project. CPI can be calculated using the
following formula:
CPI = Earned Value (EV) ⁄ Actual Cost (AC)
OR
CPI = BCWP ⁄ ACWP
The formula mentioned above gives the efficiency of the utilization of
the resources allocated to the project.
A CPI value above 1 indicates the efficiency of utilizing the
resources allocated to the project is good.
A CPI value below 1 indicates the efficiency of utilizing the
resources allocated to the project is not good.
To Complete Cost Performance Indicator
To Complete Cost Performance Indicator (TCPI) is an index showing the
efficiency at which the resources on the project should be utilized for the
remainder of the project. It can be calculated using the following formula:
TCPI = ( Total Budget − EV ) ⁄ ( Total Budget − AC )
OR
TCPI = ( Total Budget − BCWP ) ⁄ ( Total Budget − ACWP )
The formula mentioned above gives the efficiency at which the
project team should be utilized for the remainder of the project.
A TCPI value above 1 indicates the utilization of the project team for
the remainder of the project can be stringent.
A TCPI value below 1 indicates the utilization of the project team for
the remainder of the project should be lenient.
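The CPI and TCPI formulas above can be applied directly; the budget
figures in this sketch are hypothetical:

```python
def cpi(bcwp, acwp):
    # Cost Performance Index = BCWP / ACWP (earned value / actual cost).
    return bcwp / acwp

def tcpi(total_budget, bcwp, acwp):
    # Efficiency needed on the remaining work to finish on budget:
    # (Total Budget - BCWP) / (Total Budget - ACWP).
    return (total_budget - bcwp) / (total_budget - acwp)

# Hypothetical project: budget 1000, work worth 400 earned, 500 spent.
print(cpi(400, 500))         # 0.8  -> over cost so far
print(tcpi(1000, 400, 500))  # 1.2  -> remaining work must be run tighter
```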

25. Software Risk estimation involves following two tasks:


(A) risk magnitude and risk impact
(B) risk probability and risk impact
(C) risk maintenance and risk impact
(D) risk development and risk impact
Ans: B

26. In a object oriented software design, ‘Inheritance’ is a kind


of...................
(A) relationship (B) module
(C) testing (D) optimization
Ans: A
One of the advantages of an object-oriented programming language is
code reuse. There are two ways we can reuse code: either by
implementation inheritance (an IS-A relationship) or by object
composition (a HAS-A relationship).

27. Reliability of software is directly dependent on:


(A) quality of the design
(B) number of errors present
(C) software engineer’s experience
(D) user requirement
Ans: B

28. ‘Abstraction’ is......................step of Attribute in a software design.


(A) First (B) Final
(C) Last (D) Middle
Ans: A

6. Paper - II June - 2007

29. Which of the following combination is preferred with respect to


cohesion and coupling?
(A) low and low B) low and high
(C) high and low (D) high and high
Ans: C

30. Difference between flow-chart and data-flow diagram is:


(A) there is no difference
(B) usage in high level design and low level design
(C) control flow and data flow
(D) used in application programs and system programs
Ans: C
Flowchart vs Data Flow Diagram (DFD)
• The main difference between a flow chart and a data flow diagram is
that a flow chart presents the steps to complete a process, whereas a
data flow diagram presents the flow of data.
• A flow chart does not have any input from or output to an external
source, whereas a data flow diagram describes the path of data from an
external source to an internal store or vice versa.
• The timing and sequence of a process is aptly shown by a flow chart,
whereas a data flow diagram does not describe whether data processing
takes place in a particular order or whether several processes take
place simultaneously.
• Data flow diagrams define the functionality of a system, whereas flow
charts show how to make a system function.
• Flow charts are used in designing a process, but data flow diagrams
are used to describe the path of data that will complete that process.

31. Match the following:


(a) Unit test (i) Requirements
(b) System test (ii) Design
(c) Validation test (iii) Code
(d) Integration test (iv) System Engineering
Which of the following is true?
(a) (b) (c) (d)
(A) (ii) (iii) (iv) (i)
(B) (i) (ii) (iv) (iii)
(C) (iii) (iv) (i) (ii)
(D) None of the above
Ans: D

32. Problems with waterfall model are:


1. Real projects rarely follow the sequential flow this model proposes
2. It is often difficult for the customer to state all requirements
explicitly up front
3. A working model is available only at the end
4. Developers are delayed unnecessarily
Which of the following is true?
(A) 1 and 4 only (B) 2 and 3 only
(C) 1, 2 and 3 only (D) 1, 2, 3 and 4
Ans: D
Advantages of waterfall model:
This model is simple and easy to understand and use.
It is easy to manage due to the rigidity of the model – each phase has
specific deliverables and a review process.
In this model phases are processed and completed one at a time. Phases
do not overlap.
Waterfall model works well for smaller projects where requirements are
very well understood.
Disadvantages of waterfall model:
Once an application is in the testing stage, it is very difficult to go back
and change something that was not well-thought out in the concept stage.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for projects where requirements are at a moderate to
high risk of changing.
When to use the waterfall model:
This model is used only when the requirements are very well known,
clear and fixed.
Product definition is stable.
Technology is understood.
There are no ambiguous requirements
Ample resources with required expertise are available freely
The project is short.

7. Paper - II December - 2007

33. A major defect in water fall model in software development is that:


(A) the documentation is difficult
(B) a blunder at any stage can be disastrous
(C) a trial version is available only at the end of the project
(D) the maintenance of the software is difficult
Ans: C
No working software is produced until late during the life cycle.
Disadvantages of waterfall model:
Once an application is in the testing stage, it is very difficult to go back
and change something that was not well-thought out in the concept stage.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for projects where requirements are at a moderate to
high risk of changing.

34. Function point metric of a software also depends on the:


(A) number of function needed
(B) number of final users of the software
(C) number of external inputs and outputs
(D) time required for one set of output from a set of input data
Ans: C
Function point metric of a software depends on the
Number of inputs
Number of outputs
Number of inquiries
Number of files
Number of interfaces
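The five counts above combine into an unadjusted function point (UFP) total by multiplying each count by a complexity weight and summing. A minimal sketch, assuming the standard IFPUG weights for "average" complexity; the component counts themselves are made up for illustration:

```python
# Standard IFPUG weights for "average" complexity (assumption: the text
# does not give weights; these are the commonly published values).
WEIGHTS_AVERAGE = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_fp(counts):
    """UFP = sum over the five component types of (count x weight)."""
    return sum(counts[k] * WEIGHTS_AVERAGE[k] for k in WEIGHTS_AVERAGE)

counts = {  # hypothetical application
    "external_inputs": 10,
    "external_outputs": 8,
    "external_inquiries": 6,
    "internal_files": 4,
    "external_interfaces": 2,
}
print(unadjusted_fp(counts))  # 10*4 + 8*5 + 6*4 + 4*10 + 2*7 = 158
```

Each count can also be weighted as simple or complex instead of average; only the weight table changes.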

35. An error message produced by an interactive system should have:


(A) always the error code
(B) the list of mistakes done by the user displayed
(C) a non-judgemental approach
(D) the past records of the occurrence of the same mistake
Ans: C
An error message should take a non-judgemental approach, describing the
problem without blaming the user; this is a standard user-interface
design guideline.

36. System development cost estimation with use-cases is problematic


because:
(A) of paucity of examples
(B) the data can be totally incorrect
(C) the expertise and resource available are not used
(D) the problem is being over simplified
Ans: B
A Use-Case is a series of related interactions between a user and a
system that enables the user to achieve a goal. Use-Cases are a way to capture
functional requirements of a system. The user of the system is referred to as an
‘Actor’.
The use case point method is a useful model of estimating effort
and cost on software development
Use Case Points = (UUCP + AW) * TCF * EF.
The total use case points equals the sum of unadjusted use case points
plus the actor weight, multiplied by both the technical complexity
factor and the environmental factor.
The concept of UCP is similar to FPs.
The number of UCPs in a project is based on the following −
The number and complexity of the use cases in the system.
The number and complexity of the actors on the system.
Various non-functional requirements (such as portability,
performance, maintainability) that are not written as use cases.
The environment in which the project will be developed (such
as the language, the team’s motivation, etc.)
Estimation with UCPs requires all use cases to be written with a goal
and at approximately the same level, giving the same amount of
detail.

UCP can be used only when requirements are written in the form of use
cases.
Dependent on goal-oriented, well-written use cases. If the use cases
are not well or uniformly structured, the resulting UCP may not be
accurate.
Technical and environmental factors have a high impact on UCP. Care
needs to be taken while assigning values to the technical and
environmental factors.
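The UCP formula quoted above can be sketched as a small calculation. The classification counts and factor values below are made-up illustrations, and the per-use-case and per-actor weights (10 for an average use case, 2 for an average actor) follow the common Karner scheme, stated here as an assumption rather than taken from the text:

```python
# Use Case Points = (UUCW + UAW) * TCF * EF, per the formula above.
def use_case_points(uucw, uaw, tcf, ef):
    return (uucw + uaw) * tcf * ef

# Hypothetical project: 10 average use cases (weight 10 each)
# and 3 average actors (weight 2 each).
uucw = 10 * 10   # unadjusted use case weight
uaw = 3 * 2      # actor weight
ucp = use_case_points(uucw, uaw, tcf=1.0, ef=0.95)
print(round(ucp, 1))  # (100 + 6) * 1.0 * 0.95 = 100.7
```

The TCF and EF values would themselves be derived from rating the technical and environmental factors mentioned above.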

37. The approach to software testing is to design test cases to:


(A) break the software
(B) understand the software
(C) analyze the design of sub processes in the software
(D) analyze the output of the software
Ans: A
Testing should begin at the module level. The focus of testing should be
concentrated on the smallest programming units first and then expanded
to other parts of the system.

8. Paper - II June - 2008

38. In software development, value adjustment factors include the


following among others:
(A) the criticality of the performance and reusability of the code.
(B) number of lines of code in the software.
(C) number of technical manpower and hardware costs.
(D) time period available and the level of user friendliness.
Ans: A
The Value Adjustment Factor (VAF) consists of 14 "General System
Characteristics", or GSCs.
These GSCs represent characteristics of the application under
consideration. Each is weighted on a scale from 0 (low) to 5 (high).
When you sum up the values of these 14 GSCs you get a value named
"Total Degree of Influence", or TDI. As the arithmetic shows, the
TDI can vary from 0 (when all 14 GSCs are rated 0) to 70 (when all
are rated 5).
Before getting into the VAF formula, let me quickly list the 14 GSCs:
1. Data Communication
2. Distributed data processing
3. Performance
4. Heavily used configuration
5. Transaction rate
6. Online data entry
7. End user efficiency
8. Online update
9. Complex processing
10. Reusability
11. Installation ease
12. Operational ease
13. Multiple sites
14. Facilitate change
Given this background information, you can see with the following formula:
VAF = (TDI*0.01) + 0.65
that the VAF can vary in range from 0.65 (when all GSCs are low) to 1.35
(when all GSCs are high).
Adjusted FP Count = Unadjusted FP Count * VAF
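The VAF arithmetic above can be sketched directly; the GSC ratings and the unadjusted count of 158 below are hypothetical values chosen only to show the calculation:

```python
# VAF = (TDI * 0.01) + 0.65, where TDI is the sum of the 14 GSC ratings
# (each rated 0..5, so TDI ranges 0..70 and VAF ranges 0.65..1.35).
def value_adjustment_factor(gsc_ratings):
    assert len(gsc_ratings) == 14
    tdi = sum(gsc_ratings)          # Total Degree of Influence
    return tdi * 0.01 + 0.65

ratings = [3] * 14                   # hypothetical: all GSCs rated "average"
vaf = value_adjustment_factor(ratings)   # TDI = 42 -> VAF = 1.07
adjusted_fp = 158 * vaf              # Adjusted FP = Unadjusted FP * VAF
print(round(vaf, 2), round(adjusted_fp, 2))
```

Rating every GSC 0 gives the minimum VAF of 0.65; rating every GSC 5 gives the maximum of 1.35.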

39. While designing the user interface, one should:


(A) use as many short cuts as possible.
(B) use as many defaults as possible.
(C) use as many visual layouts as possible.
(D) reduce the demand on short-term memory.
Ans: D
Short-term memory (also called primary or active memory) is the capacity
for holding, but not manipulating, a small amount of information in an
active, readily available state. The duration of short-term memory is a
matter of seconds. From this fact we can see that reducing the demand on
short-term memory helps in the design of a user interface.

40. In software cost estimation, base estimation is related to:


(A) cost of similar projects already completed.
(B) cost of the base model of the present project.
(C) cost of the project with the base minimum profit.
(D) cost of the project under ideal situations.
Ans: A

41. In clean room software engineering:


(A) only eco-friendly hardware is used.
(B) only hired facilities are used for development.
(C) correctness of the code is verified before testing.
(D) implementation is done only after ensuring correctness.
Ans: D
• The name “cleanroom” is derived from the cleanrooms used to fabricate
semiconductors
• The philosophy focuses on defect avoidance rather than defect
removal
Cleanroom is a shift in practice, from:
- Individual craftsmanship to peer-reviewed engineering
- Sequential development to incremental development
- Individual unit testing to team correctness verification
- Informal coverage testing to statistical usage testing
- Unknown reliability to measured reliability
- Informal design to disciplined engineering specification and design
42. Water fall model for software development is:
(A) a top down approach.
(B) a bottom up approach.
(C) a sequential approach.
(D) a consequential approach.
Ans: C
In Waterfall model, typically, the outcome of one phase acts as the input
for the next phase sequentially.
The sequential phases in Waterfall model are −
Requirement Gathering and analysis − All possible requirements
of the system to be developed are captured in this phase and
documented in a requirement specification document.
System Design − The requirement specifications from first phase are
studied in this phase and the system design is prepared. This system
design helps in specifying hardware and system requirements and helps
in defining the overall system architecture.
Implementation − With inputs from the system design, the system is
first developed in small programs called units, which are integrated in
the next phase. Each unit is developed and tested for its functionality,
which is referred to as Unit Testing.
Integration and Testing − All the units developed in the
implementation phase are integrated into a system after testing of each
unit. Post integration the entire system is tested for any faults and
failures.
Deployment of system − Once the functional and non-functional
testing is done; the product is deployed in the customer environment or
released into the market.
Maintenance − There are some issues which come up in the client
environment. To fix those issues, patches are released. Also to enhance
the product some better versions are released. Maintenance is done to
deliver these changes in the customer environment.

9. Paper - II December - 2008


43. Software Quality Assurance(SQA) encompasses:
(A) verification
(B) validation
(C) both verification and validation
(D) none of the above
Ans: C
Difference between verification and validation
1. Verification addresses the concern "Are you building it right?";
validation addresses the concern "Are you building the right thing?"
2. Verification ensures that the software system meets all the
functionality; validation ensures that the functionalities meet the
intended behaviour.
3. Verification takes place first and includes the checking of
documentation, code, etc.; validation occurs after verification and
mainly involves the checking of the overall product.
4. Verification is done by developers; validation is done by testers.
5. Verification has static activities, as it includes collecting
reviews, walkthroughs, and inspections to verify software; validation
has dynamic activities, as it includes executing the software against
the requirements.
6. Verification is an objective process, and no subjective decision
should be needed to verify software; validation is a subjective process
and involves subjective decisions on how well a software works.
QA- To ensure whether the requirement meets the developed product.
QC-Set of activities developed for evaluating a developed product
Testing-It is the process of evaluating a developed product with the
intent of finding errors.
Verification-is the process of checking whether the developing product
(not the final product) meets requirements, the verification starts from
the development
Validation-Checking whether the developed product (final product)
meets all the requirements
Detection-To find bugs in the developed system.

44. Which level is called as “defined” in capability maturity model?


(A) level 0 (B) level 3
(C) level 4 (D) level 1
Ans: B
In CMMI models with a staged representation, there are five maturity levels designated by the
numbers 1 through 5
1. Initial
2. Managed
3. Defined
4. Quantitatively Managed
5. Optimizing

45. COCOMO model is used for:


(A) product quality estimation
(B) product complexity estimation
(C) product cost estimation
(D) all of the above
Ans: C
COCOMO (Constructive Cost Estimation Model) was proposed by Boehm
[1981]. According to Boehm, software cost estimation should be done
through three stages:
Basic COCOMO,
Intermediate COCOMO,
and Complete COCOMO.
Basic COCOMO Model The basic COCOMO model gives an
approximate estimate of the project parameters.
The basic COCOMO estimation model is given by the following
expressions:
Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 months
Where • KLOC is the estimated size of the software product expressed in
Kilo Lines of Code,
• a1, a2, b1, b2 are constants for each category of software products,
• Tdev is the estimated time to develop the software, expressed in months,
• Effort is the total effort required to develop the software product,
expressed in person months (PMs).
Example: Assume that the size of an organic type software product has
been estimated to be 32,000 lines of source code. Assume that
the average salary of software engineers be Rs. 15,000/- per month.
Determine the effort required to develop the software product
and the nominal development time.
From the basic COCOMO estimation formula for organic software:
Effort = 2.4 × (32)^1.05 ≈ 91 PM
Nominal development time = 2.5 × (91)^0.38 ≈ 14 months
Cost required to develop the product = 14 × 15,000 = Rs. 210,000/-
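The worked example above can be checked numerically, using the organic-mode constants a1 = 2.4, a2 = 1.05, b1 = 2.5, b2 = 0.38 from the basic COCOMO formula:

```python
# Basic COCOMO, organic mode, for the 32 KLOC example above.
KLOC = 32
effort = 2.4 * KLOC ** 1.05      # estimated effort in person-months
tdev = 2.5 * effort ** 0.38      # nominal development time in months
cost = round(tdev) * 15_000      # Rs., at Rs. 15,000 per engineer-month
print(round(effort), round(tdev), cost)  # 91 14 210000
```

Note that the cost here follows the example's convention of development time times monthly salary (a nominal-size team of one at each point in time would be an unusual assumption in practice; the figure is illustrative).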

10. Paper - II June - 2009

46. Capability Maturity Model is meant for:


(A) Product (B) Process
(C) Product and Process (D) None of the above
Ans: B
CMM can be used to assess an organization against a scale of
five process maturity levels. It is a methodology used to develop and refine an
organization's software development process.

47. In the light of software engineering software consists of:


(A) Programs (B) Data
(C) Documentation (D) All of the above
Ans: D
Software includes computer programs, libraries and related non-
executable data, such as online documentation or digital media.

48. Which one of the following ISO standard is used for software
process?
(A) ISO 9000 (B) ISO 9001
(C) ISO 9003 (D) ISO 9000-3
Ans: D
SEI = ‘Software Engineering Institute’ at Carnegie-Mellon University;
initiated by the U.S. Defense Department to help improve software
development processes.
CMM = ‘Capability Maturity Model’, developed by the SEI. It’s a
model of 5 levels of organizational ‘maturity’ that determine
effectiveness in delivering quality software.
ISO = ‘International Organization for Standards’ – The ISO 9001, 9002,
and 9003 standards concern quality systems that are assessed by
outside auditors, and they apply to many kinds of production and
manufacturing organizations, not just software. The most comprehensive is
9001, and this is the one most often used by software development
organizations. It covers documentation, design, development,
production, testing, installation, servicing, and other processes. ISO
9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to
software development organizations.
ISO 9000 is a set of standards for quality assurance systems.
ISO 9001 covers quality assurance in design, development, production,
installation and servicing.
ISO 9000-3 gives guidelines for the application of ISO 9001 to the
development, supply, and maintenance of software.

49. Which of the following is used for test data generation?


(A) White box (B) Black box
(C) Boundary-value analysis (D) All of the above
Ans: C
Boundary value analysis is a type of black box or specification based
testing technique in which tests are performed using the boundary values.
Example:
An exam has a pass boundary at 50 percent, merit at 75 percent and
distinction at 85 percent. The Valid Boundary values for this scenario will be
as follows:
49, 50 - for pass
74, 75 - for merit
84, 85 - for distinction
Boundary values are validated against both the valid boundaries and
invalid boundaries.
The Invalid Boundary Cases for the above example can be given as
follows:
0 - for lower limit boundary value
101 - for upper limit boundary value
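The valid boundary pairs in the example above can be generated mechanically: for each grade boundary b, the values just below and at the boundary (b − 1, b) are the valid boundary cases. A minimal sketch:

```python
def boundary_cases(boundaries):
    """For each boundary b, generate the valid boundary values (b-1, b)."""
    return sorted(v for b in boundaries for v in (b - 1, b))

# Pass at 50%, merit at 75%, distinction at 85%, as in the example above.
print(boundary_cases({50, 75, 85}))  # [49, 50, 74, 75, 84, 85]
```

The invalid outer cases (below the lowest possible mark and above the highest) would be added separately, as the example text does.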

50. Reverse engineering is the process which deals with:


(A) Size measurement (B) Cost measurement
(C) Design recovery (D) All of the above
Ans: C
Design recovery is a phase of reverse engineering that deals with the
extraction of design abstractions from the source code.

11. Paper - II December - 2009

51. Software Engineering is a discipline that integrates ………….. for


the development of computer software.
(A) Process (B) Methods
(C) Tools (D) All
Ans: D

52. Recorded software attributes can be used in the following endeavors


:
(i) Cost and schedule estimates.
(ii) Software product reliability predictions.
(iii) Managing the development process.
(iv) No where
Codes :
(A) (i) (ii) (iv)
(B) (ii) (iii) (iv)
(C) (i) (ii) (iii)
(D) (i) (ii) (iii) (iv)
Ans: C

53. Black Box testing is done


(A) to show that s/w is operational at its interfaces i.e. input and
output.
(B) to examine internal details of code.
(C) at client side.
(D) none of above.
Ans: A
Black-box testing is a method of software testing that examines the
functionality of an application based on the specifications. It is also known as
Specifications based testing. Independent Testing Team usually performs this
type of testing during the software testing life cycle.
This method of test can be applied to each and every level of software
testing such as unit, integration, system and acceptance testing.
Behavioural Testing Techniques:
There are different techniques involved in Black Box testing.
Equivalence Class
Boundary Value Analysis
Domain Tests
Orthogonal Arrays
Decision Tables
State Models
Exploratory Testing
All-pairs testing

12. Paper - II June - 2010

54. S1 : I teach algorithms and maths.


S2 : My professor teaches maths, electronics and computer science.
S3 : I have a student of maths.
S4 : Algorithm is a part of computer science.
S5 : Maths students know computer science.
What would be the chromatic number of a graph, vertices of which are the
actors/entities that are involved in the sentences S1 to S5 and edges-to represent
the associations/relationships amongst the entities/actors as expressed in the
sentences S1 to S5 above?
(A) 2 (B) 3
(C) 4 (D) None of these
Ans:

55. Software engineering primarily aims on


(A) reliable software
(B) cost effective software
(C) reliable and cost effective software
(D) none of the above
Ans: C

56. Top-down design does not require


(A) step-wise refinement (B) loop invariants
(C) flow charting (D) modularity
Ans: B
Top-down design requires stepwise refinement, flow charting and
modularity; it has nothing to do with loop invariants.
A top-down approach (also known as stepwise design) is essentially the
breaking down of a system to gain insight into the sub-systems that make it up.
In a top-down approach an overview of the system is formulated, specifying
but not detailing any first-level subsystems. Each subsystem is then refined in
yet greater detail, sometimes in many additional subsystem levels,
until the entire specification is reduced to base elements. Once these base
elements are recognised then we can build these as computer modules. Once
they are built we can put them together, making the entire system from these
individual components.
A loop invariant is a condition [among program variables] that is
necessarily true immediately before and after each iteration of a loop.

57. Which model is simplest model in Software Development ?


(A) Waterfall model (B) Prototyping
(C) Iterative (D) None of these
Ans: A

58. Design phase will usually be


(A) top-down (B) bottom-up
(C) random (D) centre fringing
Ans: A
Design phase usually has a Top-down design approach.
- Top-down design is used to solve the complex problems.
- It breaks the problem into parts, which helps us to clarify what needs
to be done.
- Breaking the problem into parts allows more than one person to work on
the solution.
- Parts of the solution may turn out to be reusable.

13. Paper - II December - 2010

59. “Black” in “Black-box” testing refers to


(A) Characters of the movie “Black”
(B) I – O is hidden
(C) Design is hidden
(D) Users are hidden
Ans: C
Because only input and expected output are known, the whole design and
process of converting input to output is not known in black box testing.

60. Prototyping is used to


(A) test the software as an end product
(B) expand design details
(C) refine and establish requirements gathering
(D) None of the above
Ans: C
Prototyping is one of the SDLC models. It starts with requirement
gathering and the building of a quick prototype, which is an early
approximation of the final product. This prototype is then evaluated by
the customer/user and used to refine the requirements for the software
to be developed.

61. Which one of these are not software maintenance activities?


(A) Error correction
(B) Adaptation
(C) Implementation of Enhancement
(D) Establishing scope
Ans: D
Software maintenance can be divided into three components.
Corrective, Adaptive and Perfective Maintenance.
Corrective maintenance involves correcting errors or actual faults in the
software. So this is option A.
Adaptive maintenance is the changes needed as a consequence of some
change in the environment in which the system must operate. And this
is option B.
Finally perfective maintenance refers to changes that originate from user
requests.
In fact, the adaptive and perfective categories can be joined together
and called enhancements, which is option C. So the only one not covered
is option D, and D is the right answer.

62. The system specification is the first deliverable in the computer


system engineering process which does not include
(A) Functional Description
(B) Cost
(C) Schedule
(D) Technical Analysis
Ans: A
The system specification document describes the system and gives a
high-level view of what the system will provide. The system specification is
the guide that will allow details on hardware, software and test requirements.
So functional description will not be a part of system specification. So the
correct answer is A.

63. The COCOMO model was introduced in the book title “Software
Engineering Economics” authored by
(A) Abraham Silberschatz
(B) Barry Boehm
(C) C.J. Date
(D) D.E. Knuth
Ans: B

64. The Warnier diagram enables analyst


(A) to represent information hierarchy in a compact manner
(B) to further identify requirement
(C) to estimate the total cost involved
(D) None of the above
Ans: A
The Warnier diagram enables analyst to represent information hierarchy
in a compact manner. It is also referred to as Warnier-Orr diagram. It is a
graphic charting technique used in software engineering for system analysis
and design.
14. Paper - II June - 2011

65. Which one of the items listed below is not one of the software
engineering layers ?
(A) Process (B) Manufacturing
(C) Method (D) Tools
Ans: B
The software engineering layers, from top to bottom:
Tools
Methods
Process
A quality focus

66. What is the first stage in program development ?


(A) Specification and design
(B) System Analysis
(C) Testing
(D) None of the above
Ans: B
Planning or analysis
Designing
Coding
Testing

67. By means of a data flow diagram, the analyst can detect


(A) Task duplication (B) Unnecessary delays
(C) Task overlapping (D) All of the above
Ans: D

68. Which of these are the 5 generic software engineering framework


activities ?
(A) Communication, planning, modelling, construction, deployment
(B) Communication, risk management, measurement, production,
reviewing
(C) Analysis, designing, programming, Debugging, maintenance
(D) Analysis, planning, designing, programming, testing
Ans: A
Communication
- This framework activity involves heavy communication and
collaboration with the customers (and other stakeholders) and encompasses
requirements gathering and other related activities.
Planning
- This activity establishes a plan for the software engineering work that
follows.
- It describes the technical tasks to be conducted, the risks that are likely,
the resources that will be required, the work products to be produced and a
work schedule.
Modeling
- This activity encompasses the creation of models that allow the
developer and the customer to better understand software
requirements and the design that will achieve those requirements.
Construction
- This activity combines code generation (either manual or automated)
and the testing that is required to uncover errors in the code.
Deployment
- The software (as a complete entity or as a partially completed
increment) is delivered to the customer who evaluates the delivered product
and provides feedback based on the evaluation.
These five generic framework activities can be used during the
development of small programs, the creation of large Web
applications and for the engineering of large, complex computer-based
systems.

15 Paper - II December- 2011

69. For a data entry project for office staff who have never used
computers before (user interface and user-friendliness are extremely
important), one will use
(A) Spiral model (B) Component based model
(C) Prototyping (D) Waterfall model
Ans: C

70. An SRS
(A) establishes the basis for agreement between client and the supplier.
(B) provides a reference for validation of the final product.
(C) is a prerequisite to high quality software.
(D) all of the above.
Ans: D
The purpose of the SRS is to:
1. Establish the basis for agreement between the customers and the
suppliers on what the software product is to do. The complete
description of the functions to be performed by the software specified in
the SRS will assist the potential user to determine if the software
specified meets their needs or how the software must be modified to
meet their needs
2. Provide a basis for developing the software design. The SRS is the most
important document of reference in developing a design
3. Reduce the development effort. The preparation of the SRS forces the
various concerned groups in the customer's organisation to thoroughly
consider all of the requirements before design work begins. A complete
and correct SRS reduces effort wasted on redesign, recoding and
retesting. Careful review of the requirements in the SRS can reveal
omissions, misunderstandings and inconsistencies early in the
development cycle when these problems are easier to correct
4. Provide a basis for estimating costs and schedules. The description of
the product to be developed as given in the SRS is a realistic basis for
estimating project costs and can be used to obtain approval for bids or
price estimates
5. Provide a baseline for validation and verification. Organisations can
develop their test documentation much more productively from a good
SRS. As a part of the development contract, the SRS provides a baseline
against which compliance can be measured
6. Facilitate transfer. The SRS makes it easier to transfer the software
product to new users or new machines. Customers thus find it easier to
transfer the software to other parts of their organisation and suppliers
find it easier to transfer it to new customers
7. Serve as a basis for enhancement. Because the SRS discusses the
product but not the project that developed it, the SRS serves as a basis
for later enhancement of the finished product. The SRS may need to be
altered, but it does provide a foundation for continued product
evaluation.
71. McCabe’s cyclomatic metric V(G) of a graph G with n vertices, e
edges and p connected component is
(A) e
(B) n
(C) e – n + p
(D) e – n + 2p
Ans: C
Cyclomatic number of a graph: V(G) = e − n + p
where,
e = number of edges in the graph,
n = number of vertices (nodes) in the graph,
p = number of connected components.
This is McCabe's graph-theoretic definition, matching option (C). For a
program's control flow graph, the complexity is usually computed as
V(G) = E − N + 2P; for example, a connected control flow graph with
seven nodes and eight edges has cyclomatic complexity 8 − 7 + 2 = 3.
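The calculation can be sketched in code, computing V(G) = E − N + 2P with P found by a depth-first search over the (undirected) graph. The edge list below is a hypothetical connected flow graph with seven nodes and eight edges:

```python
def cyclomatic_complexity(nodes, edges):
    """V(G) = E - N + 2P, where P = number of connected components."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)            # treat the flow graph as undirected
    seen, components = set(), 0
    for n in nodes:              # count components via iterative DFS
        if n not in seen:
            components += 1
            stack = [n]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x] - seen)
    return len(edges) - len(nodes) + 2 * components

nodes = range(1, 8)  # 7 nodes
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 2), (6, 7)]
print(cyclomatic_complexity(nodes, edges))  # 8 - 7 + 2*1 = 3
```

For the usual single-component flow graph, this reduces to the familiar E − N + 2.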

72. Emergency fixes known as patches are result of


(A) adaptive maintenance
(B) perfective maintenance
(C) corrective maintenance
(D) none of the above
Ans: C
Corrective maintenance is a maintenance task performed to identify,
isolate, and rectify a fault so that the failed equipment, machine, or system can
be restored to an operational condition within the tolerances or limits
established for in-service operations.

73. Design recovery from source code is done during


(A) reverse engineering (B) re-engineering
(C) reuse (D) all of the above
Ans: D
Reverse engineering, also called back engineering, is the process of
extracting knowledge or design information from anything man-made and
reproducing it, or reproducing anything based on the extracted
information.

74. Following is used to demonstrate that the new release of software


still performs the old one did by rerunning the old tests :
(A) Functional testing (B) Path testing
(C) Stress testing (D) Regression testing
Ans: D
One regression testing technique is "retest all". This technique re-runs
all the test cases on the current program to check its integrity. Though
it is expensive, as it needs to re-run all the cases, it ensures that
there are no errors because of the modified code.

75. Software risk estimation involves following two tasks :


(A) Risk magnitude and risk impact
(B) Risk probability and risk impact
(C) Risk maintenance and risk impact
(D) Risk development and risk impact
Ans: B

16. Paper - II June - 2012


76. Main aim of software engineering is to produce
(A) program
(B) software
(C) within budget
(D) software within budget in the given schedule
Ans: D

77. Key process areas of CMM level 4 are also classified by a


process which is
(A) CMM level 2 (B) CMM level 3
(C) CMM level 5 (D) All of the above
Answer: C
Every higher CMM level by default includes all the KPAs of the lower
levels, so the level 4 KPAs are included in level 5.

78. Validation means


(A) are we building the product right
(B) are we building the right product
(C) verification of fields
(D) None of the above
Answer: B

79. If a process is under statistical control, then it is


(A) Maintainable (B) Measurable
(C) Predictable (D) Verifiable
Answer: C
Statistical Process Control (SPC) is a group of tools and techniques
used to determine the stability and predictability of a process.

80. In a function oriented design, we


(A) minimize cohesion and maximize coupling
(B) maximize cohesion and minimize coupling
(C) maximize cohesion and maximize coupling
(D) minimize cohesion and minimize coupling
Answer: B
81. Which of the following metric does not depend on the
programming language used ?
(A) Line of code (B) Function count
(C) Member of token (D) All of the above
Answer: B
Function points are computed from direct measures of the information
domain of a business software application and assessment of its complexity.

82. Reliability of software is directly dependent on


(A) quality of the design
(B) number of errors present
(C) software engineers experience
(D) user requirement
Answer: B

17. Paper - III June - 2012

83. While unit testing a module, it is found that for a set of test
data, maximum 90% of the code alone were tested with a probability of
success 0.9. The reliability of the module is
(A) atleast greater than 0.9
(B) equal to 0.9
(C) atmost 0.81
(D) atleast 1/0.81
Ans: C
Code coverage: at most 90% of the code was tested
Probability of success: 0.9
So the reliability of the module is at most 0.9 × 0.9 = 0.81

84. Consider the following pseudo-code :


If (A > B) and (C > D) then
A= A+ 1
B=B+1
Endif
The cyclomatic complexity of the pseudo-code is
(A) 2 (B) 3
(C) 4 (D) 5
Ans: B
Explanation:
V(G) = E − N + 2P = 5 − 4 + 2 × 1 = 3
(the flow graph of the pseudo-code has E = 5 edges, N = 4 nodes and
P = 1 connected component; equivalently, the two simple conditions in
the compound predicate give V(G) = 2 + 1 = 3)


85. Are we building the right product? This statement refers to


(A) Verification (B) Validation
(C) Testing (D) Software quality assurance
Ans: B
Validation occurs after verification and mainly involves the checking of
the overall product.
Validation-Checking whether the developed product (final product)
meets all the requirements
It is done by testers.
It is a subjective process
It has dynamic activities, as it includes executing the software against the
requirements.

86. Which one of the following statements is incorrect ?


(A) The number of regions corresponds to the cyclomatic complexity.
(B) Cyclomatic complexity for a flow graph G is V(G) = N–E+2, where
E is the number of edges and N is the number of nodes in the flow
graph.
(C) Cyclomatic complexity for a flow graph G is V(G) = E–N+2, where
E is the number of edges and N is the number of nodes in the flow
graph.
(D) Cyclomatic complexity for a flow graph G is V(G) = P + 1, where P
is the number of predicate nodes contained in the flow graph G.
Ans: B
(A) is correct: the cyclomatic complexity equals the number of regions
of the flow graph (number of closed regions + 1).
(B) is incorrect: the formula is V(G) = E − N + 2, not N − E + 2.
(C) is correct: this is the standard way to calculate V(G) when the
edges and nodes are given.
(D) is correct: V(G) = number of predicate (decision) nodes + 1.
Cyclomatic complexity is a software metric (measurement) used to
indicate the complexity of a program: V(G) = E − N + 2P, where E is the
number of edges, N the number of nodes, and P the number of connected
components of the flow graph.

18. Paper - II December - 2012

87. Component level design is concerned with


(A) Flow oriented analysis (B) Class based analysis
(C) Both of the above (D) None of the above
Ans: C
Component-level design is created by transforming the structural elements defined by the software architecture into a procedural description of the software components, using information obtained from the analysis model: class-based elements, flow-oriented elements, and behavioural elements.

88. RAD stands for ………………


(A) Rapid and Design
(B) Rapid Aided Development
(C) Rapid Application Development
(D) Rapid Application Design
Ans: C

89. …………... is an “umbrella” activity that is applied throughout the software engineering process.
(A) Debugging (B) Testing
(C) Designing (D) Software quality assurance
Ans: D

19. Paper - III December - 2012

90. The factors that determine the quality of a software system are
(A) correctness, reliability
(B) efficiency, usability, maintainability
(C) testability, portability, accuracy, error tolerances, expandability,
access control, audit.
(D) All of the above
Ans: D

91. A program P calls two subprograms P1 and P2. P1 can fail 50% times
and P2 40% times. Then P can fail
(A) 50% (B) 60%
(C) 10% (D) 70%
Ans: D
Program P fails when either P1 fails or P2 fails, i.e. P(P1 fails) + P(P2 fails).
But this sum double-counts the case when both P1 and P2 fail at the same time, i.e. P(P1 fails ∩ P2 fails), since that case is counted in both terms.
Therefore, assuming independent failures, the failure probability of P is
P(P1 fails) + P(P2 fails) - P(P1 fails) × P(P2 fails)
= 50/100 + 40/100 - (50/100 × 40/100)
= 90/100 - 20/100
= 70/100
= 70%
Alternatively:
P1 fails 50% of the time, so it succeeds 50% of the time: 0.5.
P2 fails 40% of the time, so it succeeds 60% of the time: 0.6.
Success rate = both P1 and P2 succeed = 0.5 × 0.6 = 0.3.
Failure rate = 1 - success rate = 1 - 0.3 = 0.7 = 70%.
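The same arithmetic as a minimal Python sketch (assuming, as above, that P1 and P2 fail independently):

```python
def program_failure_probability(p1_fail, p2_fail):
    # P fails unless both subprograms succeed (independence assumed):
    # P(fail) = 1 - (1 - p1)(1 - p2) = p1 + p2 - p1*p2
    return 1 - (1 - p1_fail) * (1 - p2_fail)

print(program_failure_probability(0.5, 0.4))  # 0.7
```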

92. …………… establishes information about when, why and by whom


changes are made in a software.
(A) Software Configuration Management.
(B) Change Control.
(C) Version Control.
(D) An Audit Trail.
Ans: D
A Software Configuration Management (SCM) Plan defines the strategy
to be used for change management.
SCM Tool Features
Audit trails - establishes additional information about when,
where, why, and by whom changes were made
Software Configuration Management Task
Identification (tracking multiple versions to enable efficient
changes)
Version control (control changes before and after release to
customer)
Change control (authority to approve and prioritize changes)
Configuration auditing (ensure changes made properly)
Reporting (tell others about changes made)
20. Paper - II June - 2013

93. COCOMO stands for


(A) COmposite COst MOdel
(B) COnstructive COst MOdel
(C) COnstructive COmposite MOdel
(D) COmprehensive COnstruction MOdel
Ans: B
The Constructive Cost Model (COCOMO) is an algorithmic software
cost estimation model.

94. Match the following:


a. Good quality i. Program does not fail for a specified time in a given
environment
b. Correctness ii. Meets the functional requirements
c. Predictable iii. Meets both functional and non-functional
requirements
d. Reliable iv. Process is under statistical control
Codes:
a b c d
(A) iii ii iv i
(B) ii iii iv i
(C) i ii iv iii
(D) i ii iii iv
Ans: A
Reliable: the program does not fail for a specified time in a given environment, so d → i.
Predictable: the process is under statistical control, so c → iv.
Correctness: meets the functional requirements, so b → ii.
Good quality is more than just correctness: it means meeting both functional and non-functional requirements, so a → iii.

95. While estimating the cost of software, Lines of Code (LOC) and
Function Points (FP) are used to measure which one of the following?
(A) Length of code (B) Size of software
(C) Functionality of software (D) None of the above
Ans: B
Both FP and LOC are units of measurement of software size. The size of the software to be developed is required in order to produce accurate estimates of the effort, cost and duration of a software project. Most parametric estimation models, such as COCOMO, accept size expressed in either FP or LOC as input.

96. A good software design must have


(A) High module coupling, High module cohesion
(B) High module coupling, Low module cohesion
(C) Low module coupling, High module cohesion
(D) Low module coupling, Low module cohesion
Ans: C

97. Cyclometric complexity of a flow graph G with n vertices and e edges is
(A) V(G) = e+n-2
(B) V(G) = e-n+2
(C) V(G) = e+n+2
(D) V(G) = e-n-2
Ans: B
This is the easiest way to calculate the complexity when the numbers of edges and nodes are given: cyclomatic complexity for a flow graph G is V(G) = E - N + 2, where E is the number of edges and N is the number of nodes in the flow graph.
21. Paper - III June - 2013

98. The following three golden rules:


(i) Place the user in control
(ii) Reduce the user’s memory load
(iii) Make the interface consistent are for
(A) User satisfaction
(B) Good interface design
(C) Saving system’s resources
(D) None of these
Ans: B
These rules are known as Mandel’s Golden Rules
Theo Mandel [MAN97] coined three “golden rules”:
1. Place the user in control.
2. Reduce the user’s memory load.
3. Make the interface consistent.
These golden rules actually form the basis for a set of user interface
design principles that guide this important software design activity.

99. Software safety is a ................... activity that focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail.
(A) Risk mitigation, monitoring and management
(B) Software quality assurance
(C) Software cost estimation
(D) Defect removal efficiency
Ans: B
Explanation:
Software quality assurance (SQA) is a process that ensures that
developed software meets and complies with defined or standardized
quality specifications. SQA is an ongoing process within the software
development life cycle (SDLC) that routinely checks the developed
software to ensure it meets desired quality measures.
Software safety is a Software quality assurance (SQA) activity that
focuses on the identification and assessment of potential hazards that
may affect software negatively and cause an entire system to fail.

100. The Software Maturity Index (SMI) is defined as


SMI = [Mf – (Fa + Fc + Fd)] / Mf
Where
Mf = the number of modules in the current release.
Fa = the number of modules in the current release that have been added.
Fc = the number of modules in the current release that have been changed.
Fd = the number of modules in the current release that have been deleted.
The product begins to stabilize when
(A) SMI approaches 1
(B) SMI approaches 0
(C) SMI approaches -1
(D) None of the above
Ans: A
Explanation:
SMI = (M – (A + C + D)) / M
SMI = 1 – N/M, where M is the total number of modules in the current
version of the system and N is the number of modules added, changed
or deleted between the previous version and this one.
SMI can be a measurement of product stability, when SMI approaches
1.0 the product is stable. When correlated with the time it takes to
complete a version of the software, you have an indication of the
maintenance effort needed.
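The SMI formula above can be sketched as follows (a minimal illustration with hypothetical module counts):

```python
def software_maturity_index(mf, fa, fc, fd):
    """SMI = [Mf - (Fa + Fc + Fd)] / Mf."""
    return (mf - (fa + fc + fd)) / mf

# A release of 100 modules with 2 added, 3 changed and 1 deleted:
print(software_maturity_index(100, 2, 3, 1))  # 0.94, approaching stability
```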

101. Match the following:


a. Watson-Felix model i. Failure intensity
b. Quick-Fix model ii. Cost estimation
c. Putnam resource allocation model iii. Project planning
d. Logarithmetic- Poisson Model iv. Maintenance
Codes:
a b c d
(A) ii i iv iii
(B) i ii iv iii
(C) ii i iii iv
(D) ii iv iii i
Ans: D
Explanation:
The Watson-Felix model is used for cost estimation.
The Putnam resource allocation model is used in project planning.
The Quick-Fix model is used in software maintenance.
The Logarithmic Poisson model is used for failure intensity.

102. ............... is a process model that removes defects before they can precipitate serious hazards.
(A) Incremental model
(B) Spiral model
(C) Cleanroom software engineering
(D) Agile model
Ans: C
Explanation:
The cleanroom software engineering process is a software development
process intended to produce software with a certifiable level of reliability.
The cleanroom process was originally developed by Harlan Mills and several
of his colleagues including Alan Hevner at IBM.
It is a process model that removes defects before they can precipitate
serious hazards.

103. Equivalence partitioning is a .................. method that divides the input domain of a program into classes of data from which test cases can be derived.
(A) White-box testing
(B) Black-box testing
(C) Orthogonal array testing
(D) Stress testing
Ans: B
Explanation:
White Box Testing is a software testing method in which the internal
structure/ design/ implementation of the item being tested is known to the
tester. … Programming knowledge and implementation knowledge (internal
structure and working) is required in White Box testing, which is not necessary
in Black Box testing.
White Box Testing Techniques:
Statement Coverage – This technique is aimed at exercising all
programming statements with minimal tests.
Branch Coverage – This technique is running a series of tests to ensure
that all branches are tested at least once.
Path Coverage – This technique corresponds to testing all possible paths
which means that each statement and branch is covered.
Typical black-box test design techniques include:
Decision table testing
All-pairs testing
Equivalence partitioning
Boundary value analysis
Cause–effect graph
Error guessing
State transition testing
Use case testing
User story testing
Domain analysis
Combining technique
22. Paper - II June - 2013 (Re test)

104. The .................. of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of these components, and the relationship among them.
(A) E-R diagram (B) Data flow diagram
(C) Software architecture (D) Software design
Ans: C
“The software architecture of a program or computing system is the
structure or structures of the system, which comprise software
elements, the externally visible properties of those elements, and the
relationships among them. Architecture is concerned with the public
side of interfaces; private details of elements—details having to do
solely with internal implementation—are not architectural.”

105. Working software is not available until late in the process in
(A) Waterfall model (B) Prototyping model
(C) Incremental model (D) Evolutionary Development model
Ans: A

106. Equivalence partitioning is a ................ testing method that divides the input domain of a program into classes of data from which test cases can be derived.
(A) White box (B) Black box
(C) Regression (D) Smoke
Ans: B

107. Consider the following characteristics:
(i) Correct and unambiguous
(ii) Complete and consistent
(iii) Ranked for importance and/or stability and verifiable
(iv) Modifiable and Traceable
Which of the following is true for a good SRS?
(A) (i), (ii) and (iii)
(B) (i), (iii) and (iv)
(C) (ii), (iii) and (iv)
(D) (i), (ii), (iii) and (iv)
Ans: D
Any good requirement should have these 6 characteristics:
Complete.
Consistent.
Feasible.
Modifiable.
Unambiguous.
Testable.
23. Paper-III June- 2013 Retest

108. Improving processing efficiency or performance or restructuring of software to improve changeability is known as
(A) Corrective maintenance
(B) Perfective maintenance
(C) Adaptive maintenance
(D) Code maintenance
Ans: B
There are four types of maintenance, namely, corrective, adaptive,
perfective, and preventive.
Corrective maintenance is concerned with fixing errors that are observed
when the software is in use.
Adaptive maintenance is concerned with the change in the software that
takes place to make the software adaptable to new environment such
as to run the software on a new operating system.
Perfective maintenance is concerned with changes that improve the software, such as adding new functionalities, improving processing efficiency or performance, or restructuring the code to improve changeability.
Preventive maintenance involves implementing changes to prevent the occurrence of errors.

109. In ..............., modules A and B make use of a common data type, but perhaps perform different operations on it.
(A) Data coupling (B) Stamp coupling
(C) Control coupling (D) Content coupling
Ans: B
Two modules are stamp coupled if they communicate via a passed data structure that contains more information than necessary for them to perform their functions.
110. Sixty (60) reusable components were available for an application.
If only 70% of these components can be used, rest 30% would have to be
developed from scratch. If average component is 100 LOC and cost of
each LOC is Rs 14, what will be the risk exposure if risk probability is
80% ?
(A) Rs 25,200 (B) Rs 20,160
(C) Rs 25,160 (D) Rs 20,400
Ans: B
Total reusable components planned = 60.
Custom developed = 0.3 × 60 = 18 components.
Total development cost = 18 × 100 × 14 = Rs 25,200.
Risk exposure = 0.8 × 25,200 = Rs 20,160.
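The same computation as a short Python sketch (all figures taken from the question):

```python
total_components = 60
custom_built = total_components * 30 // 100  # 30% built from scratch = 18
cost = custom_built * 100 * 14               # 18 components x 100 LOC x Rs 14/LOC
risk_exposure = 0.8 * cost                   # risk probability x cost impact
print(cost, risk_exposure)  # 25200 20160.0
```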

111. Equivalence class partitioning approach is used to divide the input domain into a set of equivalence classes, so that if a program works correctly for a value, then it will work correctly for all the other values in that class. This is used .................
(A) to partition the program in the form of classes.
(B) to reduce the number of test cases required.
(C) for designing test cases in white box testing.
(D) all of the above.
Ans: B
Equivalence partitioning is a method for deriving test cases. In this
method, equivalence classes (for input values) are identified such that
each member of the class causes the same kind of processing and output to
occur. The values at the extremes (start/end values or lower/upper end values)
of such class are known as Boundary values. Analyzing the behaviour of a
system using such values is called Boundary value analysis (BVA).
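A small illustration of the idea, using a hypothetical input domain (an age field that is valid for 18 to 60 inclusive): one representative value per equivalence class replaces exhaustive testing.

```python
def validate_age(age):
    """Accepts ages 18..60 inclusive (hypothetical requirement)."""
    if age < 18:
        return "rejected: too low"
    if age > 60:
        return "rejected: too high"
    return "accepted"

# One test case per equivalence class is enough; boundary value
# analysis would add the boundary values 17, 18, 60 and 61.
for test_case in (5, 30, 75):
    print(test_case, "->", validate_age(test_case))
```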

112. The failure intensity for a basic model as a function of failures experienced is given as λ(μ) = λ0[1 – μ/V0], where λ0 is the initial failure intensity at the start of execution, μ is the average or expected number of failures at a given point in time, and V0 is the total number of failures that would occur in infinite time.
Assume that a program will experience 100 failures in infinite time, and that the initial failure intensity was 10 failures/CPU hr. Then the decrement of failure intensity per failure will be
(A) 10 per CPU hr.
(B) 0.1 per CPU hr.
(C) –0.1 per CPU hr.
(D) 90 per CPU hr.
Ans: C
The decrement per failure is the slope dλ/dμ = –λ0/V0 = –10/100 = –0.1 per CPU hr.
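A minimal Python sketch of the basic execution-time model, with the values from the question:

```python
def failure_intensity(lambda0, mu, v0):
    """Basic model: lambda(mu) = lambda0 * (1 - mu / v0)."""
    return lambda0 * (1 - mu / v0)

lambda0, v0 = 10.0, 100.0
print(-lambda0 / v0)                       # decrement per failure: -0.1
print(failure_intensity(lambda0, 50, v0))  # intensity after 50 failures: 5.0
```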

24. Paper-II December-2013

113. The relationship of data elements in a module is called


(A) Coupling (B) Modularity
(C) Cohesion (D) Granularity
Ans: C
Explanation:
Cohesion is an indication of the relationships within a module; coupling is an indication of the relationships between modules. While designing, you should strive for high cohesion and low coupling.

114. Software Configuration Management is the discipline for systematically controlling
(A) the changes due to the evolution of work products as the project
proceeds.
(B) the changes due to defects (bugs) being found and then fixed.
(C) the changes due to requirement changes
(D) all of the above
Ans: D
In software engineering, software configuration management (SCM or
S/W CM) is the task of tracking and controlling changes in
the software, part of the larger cross-disciplinary field of configuration
management. SCM practices include revision control and the
establishment of baselines.
Software Configuration Management (SCM) is the overall management of a software design project as it evolves into a software product or system. This includes technical aspects of the project, all levels of communication, organization, and the control of modifications (changes) to the project plan made by the programmers during the development phase. Software Configuration Management is also called Software Control Management.

115. Which one of the following is not a step of requirement engineering?
(A) Requirement elicitation (B) Requirement analysts
(C) Requirement design (D) Requirement documentation
Ans: C
These may include:
1. Requirements inception or requirements elicitation
2. Requirements identification - identifying new requirements
3. Requirements analysis and negotiation - checking requirements and
resolving stakeholder conflicts
4. Requirements specification (e.g., software requirements
specification; SRS) - documenting the requirements in a requirements
document
5. Systems modelling - deriving models of the system, often using a
notation such as the Unified Modelling Language (UML) or
the Lifecycle Modelling Language (LML)
6. Requirements validation - checking that the documented requirements
and models are consistent and meet stakeholder needs
7. Requirements management - managing changes to the requirements as
the system is developed and put into use

116. Testing of software with actual data and in actual environment is called
(A) Alpha testing (B) Beta testing
(C) Regression testing (D) None of the above
Ans: B
Beta Testing of a product is performed by "real users" of the software
application in a "real environment" and can be considered as a form
of external user acceptance testing.

Alpha Testing vs. Beta Testing:
Performed by: Alpha testing is performed by testers who are usually internal employees of the organization; beta testing is performed by clients or end users who are not employees of the organization.
Location: Alpha testing is performed at the developer's site; beta testing is performed at the client location or by the end user of the product.
Scope: Reliability and security testing are not performed in depth in alpha testing; reliability, security and robustness are checked during beta testing.
Technique: Alpha testing involves both white-box and black-box techniques; beta testing typically uses black-box testing.
Environment: Alpha testing requires a lab or testing environment; beta testing does not require any lab or testing environment, as the software is made available to the public in a real-time environment.
Duration: A long execution cycle may be required for alpha testing; only a few weeks of execution are required for beta testing.
Fixes: Critical issues or fixes can be addressed by developers immediately in alpha testing; most of the issues or feedback collected from beta testing are implemented in future versions of the product.

25. Paper-III December-2013

117. Given a flow graph with 10 nodes, 13 edges and one connected component, the number of regions and the number of predicate (decision) nodes in the flow graph will be
(A) 4, 5 (B) 5, 4
(C) 3, 1 (D) 13, 8
Ans: B
E = edges, N = nodes.
V(G) = E - N + 2 = 13 - 10 + 2 = 5
Also V(G) = P + 1, where P = number of predicate nodes (nodes that contain a condition); hence P = 4.
Number of regions = 5 (equal to the cyclomatic complexity).

118. Function points can be calculated by


(A) UFP*CAF (B) UFP*FAC
(C) UFP*Cost (D) UFP*Productivity
Ans: A
Explanation:
UFP (Unadjusted Function Point)
CAF (Complexity Adjustment Factor)
Function Point FP = UFP*CAF

119. Match the following:


List-I List-II
a. Data coupling i. Module A and Module B have shared data
b. Stamp coupling ii. Dependency between modules
is based on the fact they communicate
by only passing of data.
c. Common coupling iii. When complete data structure is
passed from one module to another.
d. Content coupling iv. When the control is passed from
one module to the middle of another.
Codes:
a b c d
(A) iii ii i iv
(B) ii iii i iv
(C) ii iii iv i
(D) iii ii iv i
Ans: B
a. Data coupling → ii. Dependency between modules is based on the fact that they communicate by only passing of data.
b. Stamp coupling → iii. When a complete data structure is passed from one module to another.
c. Common coupling → i. Module A and Module B have shared data.
d. Content coupling → iv. When the control is passed from one module to the middle of another.

120. A process which defines a series of tasks that have the following
four primary objectives is known as
1. to identify all items that collectively define the software configuration.
2. to manage changes to one or more of these items.
3. to facilitate the construction of different versions of an application.
4. to ensure that software quality is maintained as the configuration
evolves over time.
(A) Software Quality Management Process
(B) Software Configuration Management Process
(C) Software Version Management Process
(D) Software Change Management Process
Ans: B
Software Configuration Management Process has following primary
objective:
1. to identify all items that collectively define the software configuration
2. to manage changes to one or more of these items
3. to facilitate the construction of different versions of an application
4. to ensure that software quality is maintained as the configuration
evolves over time

121. One weakness of boundary value analysis and equivalence partitioning is
(A) they are not effective.
(B) they do not explore combinations of input circumstances.
(C) they explore combinations of input circumstances
(D) none of the above
Ans: B
One weakness of boundary-value analysis and equivalence partitioning is that they do not explore combinations of input circumstances. For example, a program might fail only if the product of the number of questions and the number of students exceeds some limit (the program runs out of memory, for example); boundary-value testing of each input separately would not necessarily detect such an error.

122. Which one of the following is not a software myth?


(A) Once we write the program and get it to work, our job is done.
(B) Project requirements continually change, but change can be easily
accommodated because software is flexible.
(C) If we get behind schedule, we can add more programmers and catch
up.
(D) If an organization does not understand how to control software
projects internally, it will invariably struggle when it outsources software
projects.
Ans: D
Some of the most prevalent myths are:

The Waterfall Method of design, the idea that it is both possible,


efficient and good practice to completely specify a system
before building it, and to execute the steps of a software project
sequentially rather than iteratively. This was popularized by a
paper that described the method as an example of poor
development practices, but which people took as an example of good
practice
That customers or end users will know what they want and will be able
to articulate it.
That some language, technology, or popular method, other than the one
you are currently using, is a silver bullet that will magically
solve your problems.
The Mythical Man Month, the idea that adding people to a development team makes it more efficient in a linear fashion.
That coming to agreement on a specification means agreeing on the
actual features, even though specifications are fuzzy and
subject to different interpretations.
That development works best when there is just one way to do it, where the programmer's freedom is severely restricted by the language.
That development works best when there is more than one way to accomplish a task, where programmers have complete freedom.
That design patterns are universal, rather than examples of limitations
in the expressiveness of particular programming languages.
That the best technical solution wins.
That you can parse HTML with a regular expression
That marketing doesn't matter, and is best left to "suits".
That software can be estimated accurately.
That software development can be effectively and profitably sold as
fixed cost, fixed timeframe projects.
That objects are the best way to model anything in the real world. That
modeling real-world entities is the way objects are most
commonly used.
That data should always be hidden within objects, and the object
should provide all the operations necessary to work with that
data.
That JavaScript has something to do with Java.
That logic can and should always be completely separated from
presentation.
That software development is mostly about having good math skills, is
best taught by studying theoretical computer science, and is
best done by people who are highly mathematical. That solving logic
puzzles is the best way to gauge a software engineer's ability.
The idea that software is mostly about what's visible on the surface, and that what is happening underneath the design isn't worth paying attention to or understanding; a belief especially held by nontechnical managers and clients.
That writing software is a good profession for people who lack people
skills.
That software can be effectively mocked up or designed in some other
medium, such as wireframes or Photoshop comps, because designing in
the actual medium (such as HTML and CSS) is too hard and expensive.
That designers can't or won't learn any coding, and must be protected from real code.
That design is just a layer of decoration applied on the surface, and is
much less important than good engineering.
The idea that software can be reliably built up on a stack of
abstractions, and that you only need to understand the topmost,
abstract layers, rather than the underlying implementations. See Joel
Spolsky's Law of Leaky Abstractions for a discussion of why this is a myth.
That when you finally release your new app or website, you're done.

26. Paper-II June-2014

123. KPA in CMM stands for


(A) Key Process Area (B) Key Product Area
(C) Key Principal Area (D) Key Performance Area
Ans: A
CMM stands for Capability Maturity Model. This was developed by
Software Engineering Institute(SEI). This is a well known benchmark
for judging the quality level of the processes in the organization. CMM defines
five levels determined on the basis of organisation’s support for certain “key”
process areas known as KPAs. The five levels are:
CMM Level 1 (Initial): Undefined, chaotic, no process in place.
CMM Level 2 (Repeatable): Basic project management processes are in place.
CMM Level 3 (Defined): In addition to level 2 activities, the organization has defined processes and management activities are documented.
CMM Level 4 (Managed): Standards are built for the organization; managed through setting standards.
CMM Level 5 (Optimising): Leveraging knowledge and innovative ideas; continuous process improvement and prevention of the occurrence of defects.

124. Which one of the following is not a risk management technique for managing the risk due to unrealistic schedules and budgets?
(A) Detailed multi source cost and schedule estimation
(B) Design Cost
(C) Incremental development
(D) Information hiding
Ans: D
Information hiding is a design technique (used against the risk of continuing streams of requirement changes), not a technique for managing unrealistic schedules and budgets; the other three options are standard techniques for that risk.

125. ................ of a system is the structure or structures of the system which comprise software elements, the externally visible properties of these elements and the relationship amongst them.
(A) Software construction (B) Software evolution
(C) Software architecture (D) Software reuse
Ans: C

126. In function point analysis, the number of complexity adjustment factors is
(A) 10 (B) 12
(C) 14 (D) 20
Ans: C
Calculation of the VAF (Value Adjustment Factor) is based on the TDI (Total Degree of Influence) of the 14 General System Characteristics (GSC):
1. TDI = sum of the DI of the 14 General System Characteristics, where DI stands for Degree of Influence.
2. The 14 GSC are:
1. Data Communication
2. Distributed Data Processing
3. Performance
4. Heavily Used Configuration
5. Transaction Rate
6. Online Data Entry
7. End-User Efficiency
8. Online Update
9. Complex Processing
10. Reusability
11. Installation Ease
12. Operational Ease
13. Multiple Sites
14. Facilitate Change
3. Each GSC is rated on a scale of 0-5.

127. Regression testing is primarily related to


(A) Functional testing (B) Development testing
(C) Data flow testing (D) Maintenance testing
Ans: D
Once a system is deployed, it is in service for years or even decades. During this time the system and its operational environment are often corrected, changed or extended. Testing carried out during this phase is called maintenance testing.
Maintenance testing usually consists of two parts:
The first is testing the changes that have been made because of corrections to the system, because the system has been extended, or because additional features have been added to it.
The second is regression testing, to prove that the rest of the system has not been affected by the maintenance work.
When any modification is made to the application, even a small change to the code, it can introduce unexpected issues. Along with the new changes it becomes very important to test whether the existing functionality is intact, and this is achieved by regression testing.
Regression testing is used when:
Any new feature is added
Any enhancement is done
Any bug is fixed
Any performance related issue is fixed
27. Paper-III June-2014

128. Software testing is


(A) the process of establishing that errors are not present.
(B) the process of establishing confidence that a program does what it is
supposed to do.
(C) the process of executing a program to show that it is working as per
specifications.
(D) the process of executing a program with the intent of finding errors.
Ans: D
Software testing is a process of executing a program or application with
the intent of finding the software bugs.

129. Assume that a program will experience 200 failures in infinite time. It has now experienced 100 failures. The initial failure intensity was 20 failures/CPU hr. Then the current failure intensity will be
(A) 5 failures/CPU hr
(B) 10 failures/CPU hr.
(C) 20 failures/CPU hr.
(D) 40 failures/CPU hr.
Ans: B
Using the basic model, λ(μ) = λ0[1 – μ/V0] = 20 × (1 – 100/200) = 10 failures/CPU hr.

130. Consider a project with the following functional units :


Number of user inputs = 50
Number of user outputs = 40
Number of user enquiries = 35
Number of user files = 06
Number of external interfaces = 04
Assuming all complexity adjustment factors and weighing factors as average,
the function points for the project will be
(A) 135 (B) 722
(C) 675 (D) 672
Ans: D
FP (function points) = UFP × CAF,
where CAF = complexity adjustment factor = [0.65 + 0.01 × ΣFi],
Fi (i = 1 to 14) are the degrees of influence, and
UFP = unadjusted function points = Σ Wij Zij (i = 1 to 5).
Each degree of influence is rated on the basis of 14 questions:
0 - no influence, 1 - incidental, 2 - moderate, 3 - average, 4 - significant, 5 - essential.
Function point components and weighting factors:
Component                     Simple  Average  Complex
No. of user inputs               3       4        6
No. of user outputs              4       5        7
No. of user inquiries            3       4        6
No. of files                     7      10       15
No. of external interfaces       5       7       10
Now, coming to the question:
UFP = ΣWijZij = 50×4 + 40×5 + 35×4 + 6×10 + 4×7 = 628, as all weighting factors are average.
CAF = 0.65 + 0.01 × (14 × 3) = 1.07
FP = UFP × CAF = 628 × 1.07 = 672 (approx.)
Hence the answer is (D).
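The whole computation can be sketched in Python (average-complexity weights taken from the table above; the dictionary keys are illustrative names, not standard terminology):

```python
# Average-complexity weights for the five function point components.
AVG_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
               "files": 10, "interfaces": 7}

def function_points(counts, total_degree_of_influence):
    """Return (UFP, FP) where FP = UFP * (0.65 + 0.01 * TDI)."""
    ufp = sum(counts[name] * AVG_WEIGHTS[name] for name in counts)
    caf = 0.65 + 0.01 * total_degree_of_influence
    return ufp, ufp * caf

counts = {"inputs": 50, "outputs": 40, "inquiries": 35,
          "files": 6, "interfaces": 4}
ufp, fp = function_points(counts, 14 * 3)  # all 14 factors rated average (3)
print(ufp, round(fp))  # 628 672
```
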
131. Match the following :
List – I List – II
a. Correctness i. The extent to which a software tolerates the
unexpected problems
b. Accuracy ii. The extent to which a software meets its
specifications
c. Robustness iii. The extent to which a software has specified
functions
d. Completeness iv. Meeting specifications with precision
Codes :
a b c d
(A) ii iv i iii
(B) i ii iii iv
(C) ii i iv iii
(D) iv ii i iii
Ans: A
Correctness: meets its specifications → ii
Accuracy: meeting specifications with precision, i.e. how precisely the answer is computed → iv
Robustness: the extent to which the software tolerates unexpected problems without crashing → i
Completeness: the extent to which the software has the specified functions → iii

132. Which one of the following is not a definition of error ?


(A) It refers to the discrepancy between a computed, observed or
measured value and the true, specified or theoretically
correct value.
(B) It refers to the actual output of a software and the correct output.
(C) It refers to a condition that causes a system to fail.
(D) It refers to human action that results in software containing a defect
or fault.
Ans: C
A condition that causes a system to fail is a fault, not an error; the other options are accepted definitions of error.

133. Which one of the following is not a key process area in CMM level
5?
(A) Defect prevention
(B) Process change management
(C) Software product engineering
(D) Technology change management
Ans: C
list of KPAs for each Maturity Level.
Level 1 - Initial
Level 2 - Repeatable
a. Requirements Management
b. Software Project Planning
c. Software Project Tracking & Oversight
d. Software Subcontract Management
e. Software Quality Assurance
f. Software Configuration Management
Level 3 - Defined
a. Organizational Process Focus
b. Organizational Process Definition
c. Training Program
d. Integrated Software Management
e. Software Product Engineering
f. Intergroup Coordination
g. Peer Reviews
Level 4 - Managed
a. Quantitative Process Management
b. Software Quality Management
Level 5 - Optimizing
a. Defect Prevention
b. Technology Change Management
c. Process Change Management

28. Paper-II December-2014


134. ................... are applied throughout the software process.
(A) Framework activities (B) Umbrella activities
(C) Planning activities (D) Construction activities
Ans: B
Umbrella Activities (applied throughout process)
Software project tracking and control
Risk management
Software quality assurance
Formal technical reviews
Measurement
Software configuration management
Reusability management
Work product preparation and production

135. Requirement Development, Organizational Process Focus,


Organizational Training, Risk Management and Integrated Supplier
Management are process areas required to achieve maturity level
(A) Performed (B) Managed
(C) Defined (D) Optimized
Ans: C
Requirement Development, Organizational Process Focus, Organizational
Training, Risk Management and Integrated Supplier Management are all
process areas required at maturity level 3 (Defined). The full list of
KPAs for each maturity level is given under question 133 above.

136. The software ................. of a program or a computing system is


the structure or structures of the system, which comprise software
components, the externally visible properties of those components, and the
relationships among them.
(A) Design (B) Architecture
(C) Process (D) Requirement
Ans: B

137. Which one of the following set of attributes should not be


encompassed by effective software metrics?
(A) Simple and computable
(B) Consistent and objective
(C) Consistent in the use of units and dimensions
(D) Programming language dependent
Ans: D
Software Metric Attributes
Simple and computable - It should be relatively easy to learn how to
derive the metric, and its computation should not demand inordinate
effort or time.
Empirically and intuitively persuasive - The metric should satisfy the
engineer's intuitive notions about the product attribute under
consideration (e.g., a metric that measures module cohesion should
increase in value as the level of cohesion increases).
Consistent and objective - The metric should always yield results that
are unambiguous. An independent third party should be able to derive
the same metric value using the same information about the software.
Consistent in its use of units and dimensions - The mathematical
computation of the metric should use measures that do not lead to bizarre
combinations of units. For example, multiplying people on the project
teams by programming language variables in the program results in a
suspicious mix of units that are not intuitively persuasive.
Programming language independent - Metrics should be based on the
analysis model, the design model, or the structure of the program itself.
They should not be dependent on the vagaries of programming language
syntax or semantics.
An effective mechanism for high-quality feedback - That is, the metric
should provide a software engineer with information that can lead to a
higher-quality end product.

138. Which one of the following is used to compute cyclomatic


complexity ?
(A) The number of regions – 1
(B) E – N + 1, where E is the number of flow graph edges and N is the
number of flow graph nodes.
(C) P – 1, where P is the number of predicate nodes in the flow graph G.
(D) P + 1, where P is the number of predicate nodes in the flow graph G.
Ans: D
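As a quick check, the standard identities for a single connected flow graph are V(G) = E − N + 2 = P + 1 = number of regions (option (B) uses E − N + 1, which is why it is wrong). A minimal sketch with a made-up if/else flow graph:

```python
# Cyclomatic complexity computed two equivalent ways and cross-checked.

def cyclomatic(edges, nodes, predicates):
    v_edges = edges - nodes + 2   # V(G) = E - N + 2
    v_pred  = predicates + 1      # V(G) = P + 1  (option D)
    assert v_edges == v_pred      # both formulas must agree
    return v_pred

# Flow graph of a single if/else: 4 nodes, 4 edges, 1 predicate node.
print(cyclomatic(edges=4, nodes=4, predicates=1))   # 2
```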

29. Paper- III December- 2014

139. To compute function points (FP), the following relationship is


used
FP = Count – total × (0.65 + 0.01 × (Fi)) where Fi (i = 1 to n) are value
adjustment factors (VAF) based on n questions. The value of n is
(A) 12 (B) 14
(C) 16 (D) 18
Ans: B
The value adjustment factor (VAF) is based on the TDI (Total Degree of
Influence of the 14 General System Characteristics):
TDI = sum of the DI of the 14 General System Characteristics,
where DI stands for Degree of Influence.
The 14 GSCs are:
1. Data Communication
2. Distributed Data Processing
3. Performance
4. Heavily Used Configuration
5. Transaction Rate
6. Online Data Entry
7. End-User Efficiency
8. Online Update
9. Complex Processing
10. Reusability
11. Installation Ease
12. Operational Ease
13. Multiple Sites
14. Facilitate Change
Each GSC is rated on a scale of 0-5.

140. Assume that the software team defines a project risk with 80%
probability of occurrence of risk in the following manner :
Only 70 percent of the software components scheduled for reuse will be
integrated into the application and the remaining functionality will have to be
custom developed. If 60 reusable components were planned with average
component size as 100 LOC and software engineering cost for each LOC as $
14, then the risk exposure would be
(A) $ 25,200 (B) $ 20,160
(C) $ 17,640 (D) $ 15,120
Ans: B
Total reusable components planned = 60.
Components to be custom developed = 0.3 × 60 = 18.
Cost of custom development = 18 × 100 LOC × $14/LOC = $25,200.
Risk exposure = 0.8 × $25,200 = $20,160.
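The same computation as a short Python sketch, using the figures from the question:

```python
# Risk exposure = (probability of the risk) x (cost if the risk occurs).

def risk_exposure(probability, cost):
    return probability * cost

planned = 60                     # reusable components planned
custom  = round(0.30 * planned)  # 30% must be custom developed -> 18
cost    = custom * 100 * 14      # 18 components x 100 LOC x $14/LOC
print(round(risk_exposure(0.80, cost)))   # 20160
```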

141. Maximum possible value of reliability is


(A) 100 (B) 10
(C) 1 (D) 0
Ans: C
Max possible reliability is 1 or 100 %
142. ‘FAN IN’ of a component A is defined as
(A) Count of the number of components that can call, or pass control, to
a component A
(B) Number of components related to component A
(C) Number of components dependent on component A
(D) None of the above
Ans: A
'FAN IN' is simply a count of the number of other components that can
call, or pass control to, Component A.
'FAN OUT' is the number of components that are called by Component A.

143. Temporal cohesion means


(A) Coincidental cohesion
(B) Cohesion between temporary variables
(C) Cohesion between local variables
(D) Cohesion with respect to time
Ans: D
A temporally cohesive module is one whose elements are functions that
are related in time.
Example: consider a module called "On_Really_Bad_Failure" that is invoked
when a Really_Bad_Failure happens. The module performs several tasks that
are not functionally similar or logically related, but all must happen at
the moment the failure occurs. The module might:
- cancel all outstanding requests for services
- cut power to all assembly line machines
- notify the operator console of the failure
- make an entry in a database of failure records

30. Paper - II June - 2015

144. In which testing strategy requirements established during


requirements analysis are validated against developed software?
(A) Validation testing (B) Integration testing
(C) Regression testing (D) System testing
Ans: A
Software Validation: The process of evaluating software during or at the
end of the development process to determine whether it satisfies specified
requirements.
Software Testing Strategy
Unit Testing – makes heavy use of testing techniques that exercise specific
control paths to detect errors in each software component individually
Integration Testing – focuses on issues associated with verification and
program construction as components begin interacting with one another
Validation Testing – provides assurance that the software validation criteria
(established during requirements analysis) meets all functional, behavioral,
and performance requirements
System Testing – verifies that all system elements mesh properly and that
overall system function and performance has been achieved

145. Which process model is also called as classic life cycle model?
(A) Waterfall model (B) RAD model
(C) Prototyping model (D) Incremental model
Ans: A
The waterfall model is one of the process models used in software
development. It is also called the classic life cycle model, as it
suggests a systematic, sequential approach to software development. It is
one of the oldest models followed in software engineering.
The process begins with the communication phase, where the customer
specifies the requirements, and then progresses through the other phases
of planning, modelling, construction and deployment of the software.

146. Cohesion is an extension of:


(A) Abstraction concept (B) Refinement concept
(C) Information hiding concept (D) Modularity
Ans: C
Cohesion is a natural extension of the information hiding concept. A
cohesive module performs a single task within a software procedure,
requiring little interaction with procedures being performed in other
parts of a program. Simply stated, a cohesive module should (ideally) do
just one thing.

147. Which one from the following is highly associated activity of


project planning?
(A) Keep track of the project progress.
(B) Compare actual and planned progress and costs
(C) Identify the activities, milestones and deliverables produced by a
project
(D) Both (B) and (C)
Ans: C
Management activities
The job of a software manager depends on the organization and the
software product being developed. However, most managers
have a certain responsibility for the following activities relating to a
software project:
1. Proposal writing. The first task of managers in a software project is
writing a proposal. It describes the objectives of the project and how it
will be carried out. It usually includes cost and schedule estimates.
2. Project planning and scheduling. Project planning is concerned with
identifying the activities, milestones and deliverables produced by
the development project.
3. Project cost estimation. Cost estimation is concerned with estimating the
resources required to accomplish the project plan.
4. Project monitoring and reviews. Project monitoring is a continuing
project activity. The manager monitors the progress of the project and
compares the actual and planned progress and costs. Project reviews are
concerned with reviewing overall progress and technical development
of the project and checking whether the project and the goals are still
aligned.
5. Personnel selection and evaluation. Project managers are usually
responsible for selecting people with appropriate skill and experience
to work on the project.
6. Report writing and presentations. Project managers are usually
responsible for reporting on the project to both the client and contractor
organizations. They must be able to present this information during
progress reviews.
31. Paper-III June-2015

148. Module design is used to maximize cohesion and minimize


coupling. Which of the following is the key to implement this rule?
(A) Inheritance (B) Polymorphism
(C) Encapsulation (D) Abstraction
Ans: C
Abstraction and encapsulation are complementary concepts: abstraction
focuses on the observable behavior of an object... encapsulation focuses
upon the implementation that gives rise to this behavior... encapsulation
is most often achieved through information hiding, which is the
process of hiding all of the secrets of object that do not contribute to its
essential characteristics.
In other words: abstraction = the object externally; encapsulation
(achieved through information hiding) = the object internally.
149. Verification:
(A) refers to the set of activities that ensure that software correctly
implements a specific function.
(B) gives Ans to the question - Are we building the product right ?
(C) requires execution of software
(D) both (A) and (B)
Ans: D
Verification vs. Validation:
1. Verification addresses the concern "Are you building it right?";
validation addresses the concern "Are you building the right thing?"
2. Verification ensures that the software system meets all the
functionality; validation ensures that the functionalities meet the
intended behaviour.
3. Verification takes place first and includes checking of documentation,
code, etc.; validation occurs after verification and mainly involves
checking of the overall product.
4. Verification is done by developers; validation is done by testers.
5. Verification involves static activities (collecting reviews,
walkthroughs, and inspections to verify the software); validation
involves dynamic activities (executing the software against the
requirements).
6. Verification is an objective process in which no subjective decisions
are needed; validation is a subjective process involving subjective
decisions on how well the software works.
150. Which design metric is used to measure the compactness of the
program in terms of lines of code?
(A) Consistency (B) Conciseness
(C) Efficiency (D) Accuracy
Ans: B
McCall's Software Metrics - (Subjective)
Auditability - The ease with which conformance to standards can be
checked.
Accuracy - The precision of computations and control.
Communication commonality - The degree to which standard interfaces,
protocols, and bandwidth are used.
Completeness - The degree to which full implementation of required
function has been achieved.
Conciseness - The compactness of the program in terms of lines of
code.
Consistency - The use of uniform design and documentation techniques
throughout the software development project.
Data commonality - The use of standard data structures and types
throughout the program.
Error tolerance - The damage that occurs when the program encounters
an error.
Execution efficiency - The run-time performance of a program.
Expandability - The degree to which architectural, data, or procedural
design can be extended.
Generality - The breadth of potential application of program
components.
Hardware independence - The degree to which the software is
decoupled from the hardware on which it operates.
Instrumentation - The degree to which the program monitors its own
operation and identifies errors that do occur.
Modularity - The functional independence of program components.
Operability - The ease of operation of a program.
Security - The availability of mechanisms that control or protect
programs and data.
Self-documentation - The degree to which the source code provides
meaningful documentation.
Simplicity - The degree to which a program can be understood without
difficulty.
Software system independence - The degree to which the program is
independent of nonstandard programming language features, operating
system characteristics, and other environmental constraints.
Traceability - The ability to trace a design representation or actual
program component back to requirements.
Training - The degree to which the software assists in enabling new
users to apply the system.
151. Requirements prioritization and negotiation belongs to:
(A) Requirements validation (B) Requirements elicitation
(C) Feasibility Study (D) Requirement reviews
Ans: B
The involvement of stakeholders such as future system end users is a key
success factor for Software Engineering (SE) in general, and for
Requirements Engineering (RE) in particular
Several process models are available to describe RE activities. Key
activities include requirements elicitation, prioritization and
negotiation. Requirements elicitation is the process of seeking,
capturing and consolidating requirements from available requirements
sources (e.g. stakeholders). The requirements gathered should be
prioritized. The priority of a requirement shows its importance in
comparison to others; it also helps decide which requirements to include
in a project. Furthermore, prioritization supports requirements
negotiation which focuses on conflict resolution by finding a settlement
that mostly satisfies all stakeholders.

152. Adaptive maintenance is a maintenance which .............


(A) Correct errors that were not discovered till testing phase.
(B) is carried out to port the existing software to a new environment.
(C) improves the system performance.
(D) both (B) and (C)
Ans: B
Adaptive maintenance: Modification of a software product performed
after delivery to keep a software product usable in a changed or
changing environment.

153. A Design concept Refinement is a:


(A) Top-down Approach (B) Complementary of Abstraction
concept
(C) Process of elaboration (D) All of the above
Ans: D
Refinement is surely a top-down approach, as we are extracting a
particular type, e.g. from a Person entity to a Software Engineer.
Refinement starts with a statement of function defined at the abstract
level and decomposes it in a stepwise fashion until programming language
statements are reached.
It is a process of elaboration.
Abstraction and refinement are complementary concepts: abstraction
suppresses low-level details while refinement reveals them.
Refinement is actually a process of elaboration. It begins with a
statement of function (or description of information) that is
defined at a high level of abstraction. That is, the statement describes
function or information conceptually but provides no information about
the internal workings of the function or the internal structure of
information.
Refinement causes the designer to elaborate on the original statement,
providing more and more detail as each successive refinement
(elaboration) occurs. Abstraction and refinement are complementary
concepts. Abstraction enables a designer to specify procedure and data
and yet suppress low-level details. Refinement helps the designer to
expose low-level details as design progresses.

154. A software design is highly modular if :


(A) cohesion is functional and coupling is data type.
(B) cohesion is coincidental and coupling is data type.
(C) cohesion is sequential and coupling is content type.
(D) cohesion is functional and coupling is stamp type.
Ans: A
Types of cohesion, from most desirable to least desirable:
Functional cohesion (most desirable)
Sequential cohesion
Communicational cohesion
Procedural cohesion
Temporal cohesion
Logical cohesion
Coincidental cohesion (least desirable)
Types of coupling, from least desirable to most desirable:
Content coupling (least desirable)
Common coupling
External coupling
Control coupling
Stamp coupling
Data coupling (most desirable)

32. Paper-II December-2015


155. In software testing, how the error, fault and failure are related to
each other?
(A) Error leads to failure but fault is not related to error and failure
(B) Fault leads to failure but error is not related to fault and failure
(C) Error leads to fault and fault leads to failure
(D) Fault leads to error and error leads to failure
Ans: C
Error/bug/defect/mistake --> a human action that produces an incorrect
result.
Fault --> a state of the software caused by an error.
Failure --> a deviation of the software from its expected delivery or
service.
For example, you are driving a car and the road forks:
1) left --> Mumbai
2) right --> Delhi
You have to go to Delhi, which means you must turn the steering wheel to
the right, but by mistake you turn it to the left. That wrong action is
the "error", because a human action is involved. The car heading towards
Mumbai is the "fault" state, and actually arriving in Mumbai instead of
Delhi is the "failure", because you had to reach Delhi but ended up in
Mumbai.
Defect: variance between expected and actual result.

156. Which of the following is not a software process model?


(A) Prototyping (B) Iterative
(C) Timeboxing (D) Glassboxing
Ans: D
The answer is D: there is no process model named "glassboxing" (though
glass-box testing does exist).
A and B are well-known process models.
In the timeboxing model, development is done iteratively as in the
iterative enhancement model; however, in the timeboxing model each
iteration is done in a timebox of fixed duration.
33. Paper-III December-2015

157. Which one of the following non-functional quality attributes is


not highly affected by the architecture of the software ?
(A) Performance (B) Reliability
(C) Usability (D) Portability
Answer: C
Achieving quality attributes must be considered throughout design,
implementation, and deployment. No quality attribute is entirely
dependent on design, nor is it entirely dependent on implementation or
deployment. Satisfactory results are a matter of getting the big picture
(architecture) as well as the details (implementation) correct. For
example:
Usability involves both architectural and non architectural aspects. The
non architectural aspects include making the user interface
clear and easy to use. Should you provide a radio button or a check box? What
screen layout is most intuitive? What typeface is
most clear? Although these details matter tremendously to the end user
and influence usability, they are not architectural because they belong
to the details of design. Whether a system provides the user
with the ability to cancel operations, to undo operations, or to re-use data
previously entered is architectural, however. These
requirements involve the cooperation of multiple elements.
Non-functional requirements are:
Performance – for example Response Time, Throughput, Utilization, Static
Volumetric
Scalability
Capacity
Availability
Reliability
Recoverability
Maintainability
Serviceability
Security
Regulatory
Manageability
Environmental
Data Integrity
Usability
Interoperability
157a. Which one of the following statements is incorrect ?
(A) Pareto analysis is a statistical method used for analyzing causes, and is one
of the
primary tools for quality management.
(B) Reliability of a software specifies the probability of failure-free operation
of that software for a given time duration.
(C) The reliability of a system can also be specified as the Mean Time To
Failure (MTTF).
(D) In white-box testing, the test cases are decided from the specifications or
the
requirements.
Answer: D
157. Which one of the following statements, related to the
requirements phase in Software
Engineering, is incorrect ?
(A) “Requirement validation” is one of the activities in the requirements
phase.
(B) “Prototyping” is one of the methods for requirement analysis.
(C) “Modelling-oriented approach” is one of the methods for specifying the
functional
specifications.
(D) “Function points” is one of the most commonly used size metric for
requirements.
Answer: C

34. Paper-II July-2016 (Retest)

158. The ................ model is preferred for software development


when the requirements are not clear.
(A) Rapid Application Development (B) Rational Unified
Process
(C) Evolutionary Model (D) Waterfall Model
Answer: C
Software Process Models
Waterfall Model (classic life cycle - old fashioned but reasonable
approach when requirements are well understood)
Incremental Models (deliver software in small but usable pieces, each
piece builds on pieces already delivered)
Evolutionary Models
Prototyping Model (good first step when customer has a legitimate need,
but is clueless about the details, developer needs to resist pressure to
extend a rough prototype into a production product)
Spiral Model (couples iterative nature of prototyping with the controlled
and systematic aspects of the Waterfall Model)
Concurrent Development Model (concurrent engineering - allows
software teams to represent the iterative and concurrent element of any
process model)

159. Which of the following is not included in waterfall model?


(A) Requirement analysis (B) Risk analysis
(C) Design (D) Coding
Answer: B

160. The cyclomatic complexity of a flow graph V(G), in terms of


predicate nodes is:
(A) P + 1 (B) P - 1
(C) P - 2 (D) P + 2
Where P is number of predicate nodes in flow graph V(G).
Answer: A

161. The extent to which a software tolerates the unexpected


problems, is termed as:
(A) Accuracy (B) Reliability
(C) Correctness (D) Robustness
Answer: D
LIST-1 LIST-II
a. Correctness ii. The extent to which a software meets its
specifications
b. Accuracy iv. Meeting specifications with precision
c. Robustness i. The extent to which a software tolerates the
unexpected problems
d. Completeness iii. The extent to which a software has specified
functions
35. Paper-III July-2016
162. A server crashes on the average once in 30 days, that is, the
Mean Time Between Failures (MTBF) is 30 days. When this happens, it
takes 12 hours to reboot it, that is, the Mean Time to Repair (MTTR) is 12
hours. The availability of server with these reliability data values is
approximately:
(A) 96.3% (B) 97.3%
(C) 98.3% (D) 99.3%
Answer: C
The server is not available for 12 hours after every 30 days of
operation, so:
total hours = 30×24 + 12 = 732
available hours = 30×24 = 720
availability = (30×24) / (30×24 + 12) = 720/732 = 60/61 ≈ 0.9836 = 98.3%
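The same availability computation as a minimal sketch (times from the question; the options truncate 98.36% to 98.3%):

```python
# Availability = MTBF / (MTBF + MTTR), with both times in hours.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

mtbf = 30 * 24   # 30 days between failures, in hours
mttr = 12        # 12 hours to reboot
print(round(100 * availability(mtbf, mttr), 2))   # 98.36
```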

162. Match the software maintenance activities in List-I to its meaning


in List-II.
List-I          List-II
I. Corrective   (a) Concerned with performing activities to reduce the
                software complexity, thereby improving program
                understandability and increasing software maintainability.
II. Adaptive    (b) Concerned with fixing errors that are observed when
                the software is in use.
III. Perfective (c) Concerned with the change in the software that takes
                place to make the software adaptable to a new environment
                (both hardware and software).
IV. Preventive  (d) Concerned with the change in the software that takes
                place to make the software adaptable to changing user
                requirements.
Codes:
I II III IV
(A) (b) (d) (c) (a)
(B) (b) (c) (d) (a)
(C) (c) (b) (d) (a)
(D) (a) (d) (b) (c)
Answer: B

163. Match each application/software design concept in List-I to its


definition in List-II.
List-I List-II
I. Coupling (a) Easy to visually inspect the design of the
software and understand its purpose.
II. Cohesion (b) Easy to add functionality to a software
without having to redesign it.
III. Scalable (c) Focus of a code upon a single goal.
IV. Readable (d) Reliance of a code module upon other code
modules.
Codes:
I II III IV
(A) (b) (a) (d) (c)
(B) (c) (d) (a) (b)
(C) (d) (c) (b) (a)
(D) (d) (a) (c) (b)
Answer: C

164. Software safety is quality assurance activity that focuses on hazards


that
(A) affect the reliability of a software component
(B) may cause an entire system to fail.
(C) may result from user input errors.
(D) prevent profitable marketing of the final product
Answer: B

165. Which of the following sets represent five stages defined by


Capability Maturity
Model (CMM) in increasing order of maturity?
(A) Initial, Defined, Repeatable, Managed, Optimized.
(B) Initial, Repeatable, Defined, Managed, Optimized.
(C) Initial, Defined, Managed, Repeatable, Optimized.
(D) Initial, Repeatable, Managed, Defined, Optimized.
Answer: B

166. The number of function points of a proposed system is calculated


as 500. Suppose that the system is planned to be developed in Java and
the LOC/FP ratio of Java is 50. Estimate the effort (E) required to
complete the project using the effort formula of basic
COCOMO given below:
E = a(KLOC)b
Assume that the values of a and b are 2.5 and 1.0 respectively.
(A) 25 person months (B) 75 person months
(C) 62.5 person months (D) 72.5 person months
Answer: C
LOC/FP = 50, so LOC = 500 × 50 = 25,000, i.e. KLOC = 25.
E = a(KLOC)^b = 2.5 × (25)^1.0
= 62.5 person-months
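The estimate can be sketched as follows (values from the question; the function name is our own):

```python
# Basic COCOMO effort E = a * (KLOC ** b), after converting FP to LOC.

def cocomo_effort(fp, loc_per_fp, a=2.5, b=1.0):
    kloc = fp * loc_per_fp / 1000.0
    return a * (kloc ** b)   # person-months

print(cocomo_effort(500, 50))   # 62.5
```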

36. Paper-II August-2016 (Retest)

167. Which of the following is used to determine the specificity of


requirements?
(A) n1/n2 (B) n2/n1
(C) n1+n2 (D) n1–n2
Where n1 is the number of requirements for which all reviewers have
identical interpretations, n2 is number of requirements in a
specification.
Answer: A
Metrics for the Analysis Model
These metrics examine the analysis model with a view to predicting the
size of the resultant system. One of the most commonly used metrics is
the function point (FP) metric.
The following metric can be used to assess the specificity (lack of
ambiguity) of the requirements:
Q1 = n1/n2
where n1 is the number of requirements for which all reviewers had
identical interpretations, and n2 is the total number of requirements:
n2 = nf + nnf
where nf is the number of functional requirements and nnf is the number
of non-functional requirements.
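A minimal sketch of the metric; the review counts below are made up purely for illustration:

```python
# Specificity (lack of ambiguity): Q1 = n1 / n2, with n2 = nf + nnf.

def specificity(n1, nf, nnf):
    return n1 / (nf + nnf)

# Hypothetical review: 18 of 20 requirements interpreted identically.
print(specificity(n1=18, nf=15, nnf=5))   # 0.9
```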

168. The major shortcoming of waterfall model is


(A) the difficulty in accommodating changes after requirement analysis.
(B) the difficult in accommodating changes after feasibility analysis.
(C) the system testing.
(D) the maintenance of system.
Answer: A
Explanation:
The main drawback of the waterfall model is the difficulty of
accommodating change after the process is underway. In principle, a
phase has to be complete before moving onto the next phase. Waterfall
model problems include:
1) Difficult to address change
Inflexible partitioning of the project into distinct stages makes it difficult
to respond to changing customer requirements. Therefore, this model
is only appropriate when the requirements are well- understood and
changes will be fairly limited during the design process. Few business systems
have stable requirements.
2) Very few real-world applications
The waterfall model is mostly used for large systems engineering
projects where a system is developed at several sites. In those
circumstances, the plan-driven nature of the waterfall model helps
coordinate the work.

169. The quick design of a software that is visible to end users leads
to ............
(A) iterative model (B) prototype model
(C) spiral model (D) waterfall model
Answer: B
Explanation:
Prototype model: The basic idea here is that instead of freezing the
requirements before a design or coding can proceed, a
throwaway prototype is built to understand the requirements. This
prototype is developed based on the currently known
requirements. By using this prototype, the client can get an “actual feel” of
the system, since the interactions with prototype can enable the
client to better understand the requirements of the desired
system. Prototyping is an attractive idea for complicated and large
systems for which there is no manual process or existing system to help
determine the requirements.
Waterfall model is the simplest model of software development
paradigm. It says the all the phases of SDLC will function one after
another in linear manner.
Iterative model leads the software development process in iterations. It
projects the process of development in cyclic manner repeating every step
after every cycle of SDLC process.
The ‘V-model’ is also used by many companies in their products.
‘V-model’ stands for the ‘Verification’ and ‘Validation’ model. In the
V-model, the developer’s life cycle and the tester’s life cycle are
mapped to each other. In this model testing is done side by side with
development.
Likewise ‘Incremental model’, ‘RAD model’, ‘Iterative model’ and
‘Spiral model’ are also used based on the requirement of the customer
and need of the product.
Big Bang Model
This model is the simplest model in its form. It requires little planning,
lots of programming and lots of funds. This model is conceptualized
around the big bang of universe.

170. For a program of k variables, boundary value analysis yields


.............. test cases.
(A) 4k – 1 (B) 4k
(C) 4k + 1 (D) 2k – 1
Answer: C
Explanation:
Boundary value analysis yields 4n + 1 test cases,
robustness testing yields 6n + 1 test cases, and
worst-case testing yields 5^n test cases (for n input variables).
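These counts are easy to tabulate; a small sketch using the standard boundary-value formulas for n input variables:

```python
# Test-case counts for a program with n input variables.

def bva_cases(n):        return 4 * n + 1   # boundary value analysis
def robustness_cases(n): return 6 * n + 1   # robustness testing
def worst_case_cases(n): return 5 ** n      # worst-case testing

print(bva_cases(3), robustness_cases(3), worst_case_cases(3))   # 13 19 125
```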

171. The extent to which a software performs its intended functions


without failures, is termed as
(A) Robustness (B) Correctness
(C) Reliability (D) Accuracy
Answer: C
Explanation:
Reliability is the probability of failure-free operation of a system over a
specified time, within a specified environment, for a specified purpose.
Robustness is the ability of a computer system to cope with errors during
execution and to cope with erroneous input.
Correctness of an algorithm is asserted when it is said that the algorithm
is correct with respect to a specification.
Accuracy of a measurement system is the degree of closeness of
measurements of a quantity to that quantity’s true value.
37. Paper- III August-2016(Retest)
172. Match each software lifecycle model in List – I to its description
in List – II:
List – I List – II
I. Code-and-Fix a. Assess risks at each step; do most
critical action first.
II. Evolutionary prototyping b. Build an initial small requirement
specifications, code it, then
“evolve” the specifications and code as needed.
III. Spiral c. Build initial requirement specification
for several releases, then
design-and-code in sequence
IV. Staged Delivery d. Standard phases (requirements, design,
code, and test) in order
V. Waterfall e. Write some code, debug it, and repeat
(i.e. ad-hoc)
Codes :
I II III IV V
(A) e b a c d
(B) e c a b d
(C) d a b c e
(D) c e a b d
Answer: A
Code-and-Fix------->Write some code, debug it, repeat (i.e. ad-hoc)
Evolutionary prototyping--->Build an initial small requirement
specifications, code it, then “evolve” the specifications
& code as needed
Spiral------>Assess risks at each step; do most critical action first.
Staged Delivery ---> Build initial requirement specification for several
releases, then design-and-code in sequence
Waterfall ------->Standard phases (requirements, design, code, test) in
order
173. Match each software term in List – I to its description in List – II:
List – I List – II
I. Wizards a. Forms that provide structure for a
document
II. Templates b. A series of commands grouped into a
single command
III. Macro c. A single program that incorporates
most commonly used tools
IV. Integrated Software d. Step-by-step guides in application
software
V. Software Suite e. Bundled group of software programs
Codes :
I II III IV V
(A) d a b c e
(B) b a d c e
(C) d e b a c
(D) e c b a d
Answer: A
174. The ISO quality assurance standard that applies to software
engineering is
(A) ISO 9000 : 2004 (B) ISO 9001 : 2000
(C) ISO 9002 : 2001 (D) ISO 9003 : 2004
Answer: B
ISO 9001:2000 specifies requirements for a quality management system
where an organization needs to demonstrate its ability to
consistently provide product that meets customer and applicable
regulatory requirements, and aims to enhance customer
satisfaction through the effective application of the system,
including processes for continual improvement of the system and the
assurance of conformity to customer and applicable
regulatory requirements.
All requirements of this International Standard are generic and are
intended to be applicable to all organizations, regardless of
type, size and product provided.
175. Which of the following are external qualities of a software
product?
(A) Maintainability, reusability, portability, efficiency, correctness.
(B) Correctness, reliability, robustness, efficiency, usability.
(C) Portability, interoperability, maintainability, reusability.
(D) Robustness, efficiency, reliability, maintainability, reusability.
Answer: B
Internal Quality determines your ability to move forward on a project
External Quality determines the fulfilment of stakeholder requirements
External Quality Characteristics: Correctness, Usability, Efficiency,
Reliability, Integrity, Adaptability, Accuracy, and
Robustness.
Internal Quality Characteristics: Maintainability, Flexibility, Portability,
Reusability, Readability, Testability, and Understandability.
Software with a high internal quality is easy to change, easy to add new
features, and easy to test. Software with a low internal
quality is hard to understand, difficult to change, and troublesome to
extend. Measures like McCabe’s Cyclomatic Complexity, Cohesion,
Coupling and Function Points can all be used to understand internal
quality.
External software quality is a measure of how the system as a whole
meets the requirements of stakeholders. Does the
system provide the functionality required? Is the interface clear and
consistent? Does the software provide the expected business value?
176. Which of the following is/are CORRECT statement(s) about
version and release?
I. A version is an instance of a system, which is functionally identical but
non-functionally distinct from other instances of a system.
II. A version is an instance of a system, which is functionally distinct in
some way from other system instances.
III. A release is an instance of a system, which is distributed to users
outside of the development team.
IV. A release is an instance of a system, which is functionally identical
but non-functionally distinct from other instances of a system.
(A) I and III (B) II and IV
(C) I and IV (D) II and III
Answer: D
Build vs. Release vs. Version:
1. A build is an executable file. A release is a build which is ready to
use. A version is an extension of the build.
2. A build is handed over to the tester to test the developed part of the
project. A release is handed over to the client/customer after completion
of the development and testing phases. A version number is given to each
release made according to the client's additional requirements.
3. A build refers to software which is still in testing, or which is not
tested yet. A release refers to software which is no longer in testing. A
version refers to a variation of an earlier or original form of the software.
4. A build can be rejected by the test team if a defect is found or it does
not meet a certain requirement. One release can have several builds
associated with it. A version is based on a build, not vice versa.
5. A build is nothing but a part of the application. A release is nothing
but the application. A version is nothing but the application.
6. Example: a component is a build; "Apple released the new iPhone 4"
describes a release; "I have downloaded the latest version, IE9"
describes a version.
177. An Operating System (OS) crashes on the average once in 30
days, that is, the Mean Time Between Failures (MTBF) = 30 days. When
this happens, it takes 10 minutes to recover the OS, that is, the Mean Time
To Repair (MTTR) = 10 minutes. The availability of the OS with these
reliability figures is approximately:
(A) 96.97% (B) 97.97%
(C) 99.009% (D) 99.97%
Answer: D
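Explanation: availability is MTBF / (MTBF + MTTR), with both times expressed in the same unit. A quick sketch of the arithmetic:

```python
# Availability = MTBF / (MTBF + MTTR), both expressed in minutes.
mtbf = 30 * 24 * 60   # 30 days = 43,200 minutes
mttr = 10             # 10 minutes

availability = mtbf / (mtbf + mttr)
print(f"{availability:.4%}")   # 99.9769%, i.e. approximately 99.97% -> option (D)
```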
38. Paper-II Jan-2017

178. Software Engineering is an engineering discipline that is
concerned with:
(A) how computer systems work.
(B) theories and methods that underlie computers and software systems.
(C) all aspects of software production.
(D) all aspects of computer-based systems development, including
hardware, software and process engineering.
Ans C
179. Which of the following is not one of three software product
aspects addressed by McCall's software quality factors?
(A)Ability to undergo change.
(B) Adaptability to new environments.
(C) Operational characteristics
(D) Production costs and scheduling
Ans D
Explanation
McCall’s software quality factors address the ability to undergo change,
adaptability to new environments, and operational characteristics.
McCall identified three main perspectives for characterizing the quality
attributes of a software product.
These perspectives are:-
Product revision (ability to change).
Product transition (adaptability to new environments).
Product operations (basic operational characteristics).
Product revision
The product revision perspective identifies quality factors that influence the
ability to change the software product, these factors are:-
Maintainability, the ability to find and fix a defect.
Flexibility, the ability to make changes required as dictated by the
business.
Testability, the ability to Validate the software requirements.
Product transition
The product transition perspective identifies quality factors that influence the
ability to adapt the software to new environments:-
Portability, the ability to transfer the software from one environment to
another.
Reusability, the ease of using existing software components in a different
context.
Interoperability, the extent, or ease, to which software components work
together.
Product operations
The product operations perspective identifies quality factors that influence the
extent to which the software fulfils its specification:-
Correctness, the functionality matches the specification.
Reliability, the extent to which the system performs without failure.
Efficiency, system resource (including CPU, disk, memory, network)
usage.
Integrity, protection from unauthorized access.
Usability, ease of use.
In total McCall identified the 11 quality factors broken down by the 3
perspectives, as listed above.
For each quality factor McCall defined one or more quality criteria (a way of
measurement), in this way an overall quality assessment could be made of a
given software product by evaluating the criteria for each factor.
So the answer is (D) Production costs and scheduling.
180. Which of the following statement(s) is/are true with respect to
software architecture?
S1: Coupling is a measure of how well the things grouped together in a module
belong together logically.
S2: Cohesion is a measure of the degree of interaction between software
modules.
S3: If coupling is low and cohesion is high then it is easier to change one
module without affecting others.
(A) Only S1 and S2
(B) Only S3
(C) All of S1, S2 and S3
(D) Only S1
Ans B
Explanation
S1: false-> Coupling is a measure of the degree of interaction between
software modules.
S2: false -> Cohesion is a measure of how well the things grouped together in
a module belong together logically.
S3: true-> If coupling is low and cohesion is high then it is easier to change
one module without affecting others.
181. The Prototyping model of software development is:
(A) a reasonable approach when requirements are well-defined.
(B) a useful approach when a customer cannot define requirements
clearly.
(C) the best approach to use for projects with large development teams.
(D) a risky model that rarely produces a meaningful product.
Ans B
Explanation
prototype model -> A useful approach when a customer cannot
define requirements clearly.

182. A software design pattern used to enhance the functionality of an
object at run-time is:
(A) Adapter
(B) Decorator
(C) Delegation
(D) Proxy
Ans B
Explanation
Decorator-- In object-oriented programming, the decorator
pattern (also known as Wrapper, an alternative naming shared with
the Adapter pattern) is a design pattern that allows behavior to be added
to an individual object, either statically or dynamically, without affecting the
behavior of other objects from the same class.
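As a minimal sketch of the pattern (the Notifier and decorator class names are illustrative, not from any particular library), behaviour is layered onto an object at run-time by wrapping it:

```python
class Notifier:
    """The base component whose behaviour we want to enhance."""
    def send(self, msg):
        return [f"email: {msg}"]

class NotifierDecorator:
    """Base decorator: wraps another notifier and forwards to it."""
    def __init__(self, wrapped):
        self.wrapped = wrapped
    def send(self, msg):
        return self.wrapped.send(msg)

class SMSDecorator(NotifierDecorator):
    def send(self, msg):
        # Forward to the wrapped object, then add behaviour.
        return super().send(msg) + [f"sms: {msg}"]

class SlackDecorator(NotifierDecorator):
    def send(self, msg):
        return super().send(msg) + [f"slack: {msg}"]

# Decorators stack at run-time without modifying Notifier itself.
notifier = SlackDecorator(SMSDecorator(Notifier()))
print(notifier.send("build failed"))
# ['email: build failed', 'sms: build failed', 'slack: build failed']
```

Because each decorator both is a notifier and has one, wrappers can be composed in any order at run-time, which is exactly the run-time enhancement the question asks about.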
39. Paper-III Jan-2017

183. Which of the following statement(s) is/are TRUE with regard to
software testing?
I. Regression testing technique ensures that the software product runs correctly
after the changes during maintenance.
II. Equivalence partitioning is a white-box testing technique that divides the
input domain of a program into classes of data from which test cases can be
derived.
A. only I
B. only II
C. both I and II
D. neither I nor II
Ans A
Regression testing is a type of software testing that verifies that software
previously developed and tested still performs correctly even after it was
changed or interfaced with other software.

Equivalence partitioning is a black-box testing approach, not a white-box
one. Equivalence Partitioning (or Equivalence Class Partitioning) is a
black-box technique (the code is not visible to the tester) which can be
applied at all levels of testing, like unit, integration, system, etc. In this
technique, you divide the set of test conditions into partitions whose
members can be considered the same.
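As an illustration (the age field and its 18..60 valid range are made up for this sketch), the input domain is split into classes and one representative value per class becomes a test case:

```python
def age_class(age):
    """Classify an input into one of three equivalence partitions."""
    if age < 18:
        return "invalid-low"    # partition: age < 18
    if age <= 60:
        return "valid"          # partition: 18 <= age <= 60
    return "invalid-high"       # partition: age > 60

# One representative per partition stands in for every other member.
print([age_class(a) for a in (10, 30, 75)])
# ['invalid-low', 'valid', 'invalid-high']
```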
184. Which of the following are facts about a top-down software
testing approach?
I. Top-down testing typically requires the tester to build method stubs.
II. Top-down testing typically requires the tester to build test drivers.
A. only I
B. Only II
C. Both I and II
D. Neither I nor II
Ans A
In top-down integration testing, modules are tested from the top of the
control hierarchy downwards, so lower-level modules that have not yet
been integrated are replaced by stubs. Test drivers are needed in
bottom-up testing, not in top-down testing.
185. Match the terms related to Software Configuration Management
(SCM) in List-I with the descriptions in List-II.
List-I List-II
I. Version A. An instance of a system that is distributed to customers.
II. Release B. An instance of a system which is functionally identical to
other instances, but designed for different hardware/software
configurations.
III. Variant C. An instance of a system that differs, in some way, from
other instances.
Codes:
I II III
A. B C A
B. C A B
C. C B A
D. B A C
Ans D
I. Version B. An instance of a system which is functionally identical to
other instances, but designed for different hardware/software
configurations.
II. Release A. An instance of a system that is distributed to customers.
III. Variant C. An instance of a system that differs, in some way, from
other instances.
186. A software project was estimated at 352 Function Points (FP). A
four person team will be assigned to this project consisting of an architect,
two programmers, and a tester. The salary of the architect is Rs.80,000 per
month, the programmer Rs.60,000 per month and the tester Rs.50,000 per
month. The average productivity for the team is 8 FP per person month.
Which of the following represents the projected cost of the project?
A. Rs.28,16,000
B. Rs.20,90,000
C. Rs.26,95,000
D. Rs.27,50,000
Ans D
Total: 352 FP; average productivity: 8 FP per person-month; team of 4.
Duration = 352 / (8 * 4) = 11 months.
Cost = (1 architect + 2 programmers + 1 tester) * 11 months
= (80,000 + 2 * 60,000 + 50,000) * 11 = Rs. 27,50,000.
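The same arithmetic as a short sketch:

```python
# Effort: 352 FP at 8 FP per person-month across a 4-person team.
fp = 352
productivity = 8                 # FP per person-month
team_size = 4
duration = fp / (productivity * team_size)    # 11.0 months

# Monthly salary bill: one architect, two programmers, one tester.
monthly_cost = 80_000 + 2 * 60_000 + 50_000   # Rs. 2,50,000 per month
cost = monthly_cost * duration
print(cost)   # 2750000.0 -> Rs. 27,50,000, option (D)
```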
188. Complete each of the following sentences in List-I on the left
hand side by filling in the word or phrase from List-II on the right hand
side that best completes the sentence:
List-I List-II
I. Determining whether you have built the right system is called
............. A. Software testing
II. Determining whether you have built the system right is called
......... B. Software verification
III. ............ is the process of demonstrating the existence of defects or
providing confidence that they do not
appear to be present.
C. Software debugging
IV. .......... is the process of discovering the cause of a defect and fixing it.
D. Software validation
Codes:
I II III IV
A. B D A C
B. B D C A
C. D B C A
D. D B A C
Ans D
whether you have built the right system is called ............. D)
Validation
whether you have built the system right is called ..............
B)Verification
.......is the process of demonstrating the existence of defects ..........
...A)Testing
or providing confidence that they do not appear to be present
.... is the process of discovering the cause of a defect and fixing it
.........C)Debugging
189. A software company needs to develop a project that is estimated
as 1000 function points and is planning to use JAVA as the programming
language, whose approximate lines of code per function point is accepted
as 50. Considering a=1.4 as the multiplicative factor and b=1.0 as the
exponent factor for the basic COCOMO effort equation, and c=3.0 as the
multiplicative factor and d=0.33 as the exponent factor for the basic
COCOMO duration equation, approximately how long does the project
take to complete?
A. 11.2 months
B. 12.2 months
C. 13.2 months
D. 10.2 months
Ans B
For the basic COCOMO model:
Effort E = a * (KLOC)^b person-months.
Here a = 1.4, b = 1.0, and total LOC = 1000 * 50 = 50,000 = 50 KLOC,
so E = 1.4 * 50 = 70 person-months.
Development time D = c * (E)^d months.
Here c = 3, d = 0.33, so D = 3 * (70)^0.33 ≈ 3 * 4.06 ≈ 12.2 months,
so the right answer is (B) 12.2 months.
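The same computation as a sketch, using the constants given in the question:

```python
# Basic COCOMO: E = a * KLOC**b (person-months), D = c * E**d (months).
fp = 1000
loc_per_fp = 50
kloc = fp * loc_per_fp / 1000   # 50 KLOC

a, b = 1.4, 1.0
effort = a * kloc ** b          # 70 person-months

c, d = 3.0, 0.33
duration = c * effort ** d
print(round(duration, 1))       # 12.2 months -> option (B)
```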