ENGINEERING COLLEGES
IMPORTANT QUESTIONS AND ANSWERS
CS8494 - SOFTWARE ENGINEERING (SE)
Department of Computer Science & Engineering
Prepared by
Sl. No. | Name of the Faculty | Designation | Affiliating College
1 | Mrs. A.P. Subapriya | AP / CSE | SCADIT
TEXT BOOK:
REFERENCES:
TABLE OF CONTENTS
Sl. No | Unit | Topic / Portions to be Covered | Hours Required / Planned | Cumulative Hrs | Books Referred
22 | 3 | Architectural Design | 1 | 23 | T1
23 | 3 | Architectural styles | 1 | 24 | T1
24 | 3 | Architectural Mapping using Data Flow | 2 | 26 | T1
25 | 3 | User Interface Design: Interface analysis | 1 | 27 | T1
26 | 3 | Interface Design | 1 | 28 | T1
27 | 3 | Component level Design: Designing Class based components | 1 | 29 | T1
28 | 3 | Traditional Components | 1 | 30 | T1
UNIT – IV - TESTING AND IMPLEMENTATION
29 | 4 | Software testing fundamentals, Internal and external views of Testing | 1 | 31 | T1
30 | 4 | White box testing | 1 | 32 | T1
31 | 4 | Basis path testing - control structure testing | 1 | 33 | T1
32 | 4 | Black box testing | 2 | 35 | T1
33 | 4 | Regression Testing | 1 | 36 | T1
34 | 4 | Unit Testing | 1 | 37 | T1
35 | 4 | Integration Testing | 1 | 38 | T1
36 | 4 | Validation Testing | 1 | 39 | T1
37 | 4 | System Testing and Debugging | 1 | 40 | T1
38 | 4 | Software Implementation Techniques: Coding practices, Refactoring | 1 | 41 | T1
UNIT – V - PROJECT MANAGEMENT
39 | 5 | Estimation: FP Based, LOC Based, Make/Buy Decision | 1 | 42 | T1
40 | 5 | COCOMO II | 1 | 43 | T1
41 | 5 | Planning, Project Plan, Planning Process | 1 | 44 | R1
42 | 5 | RFP, Risk Management: Identification, Projection, RMMM | 1 | 45 | T1
43 | 5 | Scheduling and Tracking | 1 | 46 | T1
44 | 5 | Relationship between people and effort, Task Set & Network | 1 | 47 | T1
45 | 5 | EVA | 1 | 48 | T1
46 | 5 | Process and Project Metrics | 2 | 50 | T1
12. Depict the relationship between work product, task, activity and system. Nov: 16
Work product: A work product may begin as an analysis made during the development of a project, creating a type of proposal for the project that a company cannot deliver until the project has received approval.
Task: A task can also be an activity that is imposed on another person, or a demand on the powers of another. A task can be said to be any activity that is done with a particular purpose.
Activity: An activity is a major unit of work to be completed in achieving the objectives of a process. An activity has precise starting and ending dates, incorporates a set of tasks to be completed, consumes resources, and results in work products. An activity may have a precedence relationship with other activities.
System: A set of detailed methods, procedures and routines created to carry out a specific activity, perform a duty, or solve a problem.
13. What led to the transition from product-oriented development to process-oriented development? May: 16
The goal of product engineering is to translate the customer's desire for a set of defined capabilities into a working product. The product is a standalone entity that can be produced by a development organization.
A process is the structured set of activities that are required to develop a software system.
15. If you have to develop a word-processing software product, what process model will you choose? Justify your answer. Nov: 16
The prototyping model would be used to develop this application. The product is built in a series of increments throughout the project, so the system can be developed in pieces. This strategy allows additions as requirements evolve, and process changes can be accommodated.
PART-B
1. Neatly explain all the Prescriptive process models and Specialized process models. May: 03, 05, 06, 09, 10, 14, 16; Dec: 04, 08, 09, 12, 16
"Prescriptive" means that the model prescribes a set of process elements (framework activities, software engineering actions, tasks, work products, quality assurance, and change control mechanisms) for each project. Each process model also prescribes a process flow (also called a work flow), that is, the manner in which the process elements are interrelated to one another.
The software process model is also known as the Software Development Life Cycle (SDLC) model or software paradigm.
The various prescriptive process models are the waterfall model (and its V-model variation), the RAD model, and the evolutionary models (prototyping and the spiral model).
[Figure: Prescriptive process models - the generic framework activities: Communication (project initiation, requirements gathering), Planning (estimating, scheduling, tracking), Modeling (analysis, design), Construction (code, test), and Deployment (delivery, support, feedback).]
The waterfall model, sometimes called the classic life cycle, is a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modeling, construction, and deployment, culminating in ongoing support of the completed software. A variation in the representation of the waterfall model is called the V-model. Represented in the figure, the V-model depicts the relationship of quality assurance actions to the actions associated with communication, modeling, and early construction activities.
The prototyping paradigm, in contrast, gives stakeholders something to see and use and to provide feedback regarding the delivery and their requirements.
[Figure: The RAD model - each team works in parallel through Data Modeling, Process Modeling, Application Generation, and Testing & Turnover.]
The spiral model has two main distinguishing features: i) one is a cyclic approach for incrementally growing a system's degree of definition and implementation while decreasing its degree of risk; ii) the other is a set of anchor point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory system solutions. Using the spiral model, software is developed in a series of evolutionary releases. During early iterations, the release might be a model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced.
A spiral model is divided into a set of framework activities defined by the
software engineering team. For illustrative purposes, each of the framework activities
represents one segment of the spiral path. Risk is considered as each revolution is
made. Anchor point milestones—a combination of work products and conditions that
are attained along the path of the spiral—are noted for each evolutionary pass.
The first circuit around the spiral might result in the development of a product
specification; each pass through the planning region results in adjustments to the
project plan. Cost and schedule are adjusted based on feedback derived from the
customer after delivery. The project manager adjusts the planned number of
iterations required to complete the software.
Advantages
The spiral model is a realistic approach to the development of large-scale
systems and software. Because software evolves as the process progresses,
the developer and customer better understand and react to risks at each
evolutionary level.
COCOMO MODEL
2. What is COCOMO Model? Explain in detail. May: 07, 08, 14, Dec:05,13
Model 1. The Basic COCOMO model is a static, single-valued model that computes
software development effort (and cost) as a function of program size expressed in
estimated lines of code (LOC).
Model 2. The Intermediate COCOMO model computes software development effort
as a function of program size and a set of "cost drivers" that include subjective
assessments of product, hardware, personnel and project attributes.
Model 3. The Advanced COCOMO model incorporates all characteristics of the
intermediate version with an assessment of the cost driver's impact on each step
(analysis, design, etc.) of the software engineering process.
The COCOMO models are defined for three classes of software projects. Using Boehm's terminology these are:
(1) Organic mode - relatively small, simple software projects in which small teams with good application experience work to a set of less than rigid requirements (e.g., a thermal analysis program developed for a heat transfer group);
(2) Semi-detached mode - an intermediate (in size and complexity) software project in which teams with mixed experience levels must meet a mix of rigid and less than rigid requirements (e.g., a transaction processing system with fixed requirements for terminal hardware and database software);
(3) Embedded mode - a software project that must be developed within a set of tight hardware, software and operational constraints (e.g., flight control software for aircraft).
Let us understand each model in detail:
1. Basic Model: The basic COCOMO model estimates the software development effort using only lines of code. The equations in this model are

E = a_b (KLOC)^(b_b)
D = c_b (E)^(d_b)
P = E / D

where E is the effort applied in person-months, D is the development time in chronological months, and P is the number of people required. KLOC means kilo (thousands of) lines of code for the project. The coefficients a_b, b_b, c_b, d_b for the three modes are given in Table 1.

Table 1. Basic COCOMO Model
Software Project | a_b | b_b  | c_b | d_b
Organic          | 2.4 | 1.05 | 2.5 | 0.38
Semi-detached    | 3.0 | 1.12 | 2.5 | 0.35
Embedded         | 3.6 | 1.20 | 2.5 | 0.32
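As a quick numerical illustration (not part of the original text), the Python sketch below computes effort, duration, and average staffing for a hypothetical 32-KLOC organic-mode project using the Basic COCOMO equations and the coefficients in Table 1; the project size and the helper name basic_cocomo are illustrative assumptions.

# Basic COCOMO sketch: E = a_b*(KLOC)^b_b, D = c_b*(E)^d_b, P = E/D
# Coefficients are the standard Boehm values from Table 1.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b       # person-months
    duration = c * effort ** d   # chronological months
    people = effort / duration   # average staff size
    return effort, duration, people

e, t, p = basic_cocomo(32, "organic")   # hypothetical 32 KLOC project
print(f"Effort: {e:.1f} pm, Duration: {t:.1f} months, Staff: {p:.1f} persons")

For the assumed 32 KLOC organic project this yields roughly 91 person-months spread over about 14 months.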
ESTIMATION
1. Software Sizing
The accuracy of a software project estimate is predicated on a number of things:
(1) The degree to which you have properly estimated the size of the product to be
built;
(2) The ability to translate the size estimate into human effort, calendar time, and
dollars (a function of the availability of reliable software metrics from past projects);
(3) The degree to which the project plan reflects the abilities of the software team;
and
(4) The stability of product requirements and the environment that supports the
software engineering effort.
If a direct approach is taken, size can be measured in lines of code (LOC). If
an indirect approach is chosen, size is represented as function points (FP).Putnam
and Myers suggest four different approaches to the sizing problem:
• "Fuzzy logic" sizing: To apply this approach, the planner must identify the type of application, establish its magnitude on a qualitative scale, and then refine the magnitude within the original range.
• Function point sizing: The planner develops estimates of the information domain characteristics.
• Standard component sizing: Software is composed of a number of different "standard components" that are generic to a particular application area. For example, the standard components for an information system are subsystems, modules, screens, reports, interactive programs, batch programs, files, LOC, and object-level instructions.
• Change sizing: This approach is used when a project encompasses the use of
existing software that must be modified in some way as part of a project. The planner
estimates the number and type (e.g., reuse, adding code, changing code, and
deleting code) of modifications that must be accomplished.
2 Problem-Based Estimation
LOC and FP data are used in two ways during software project estimation:
(1) As estimation variables to "size" each element of the software and
(2) As baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques. Yet both have a
number of characteristics in common.
LOC or FP (the estimation variable) is then estimated for each function.
Function estimates are combined to produce an overall estimate for the entire
project. In general, LOC/pm or FP/pm averages should be computed by project
domain. That is, projects should be grouped by team size, application area,
complexity, and other relevant parameters. Local domain averages should then be
computed. When a new project is estimated, it should first be allocated to a domain,
and then the appropriate domain average for past productivity should be used in
generating the estimate.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning. When LOC is used as the
estimation variable, decomposition is absolutely essential and is often taken to
considerable levels of detail. The greater the degree of partitioning, the more likely
reasonably accurate estimates of LOC can be developed.
For FP estimates, decomposition works differently. Each of the information
domain characteristics—inputs, outputs, data files, inquiries, and external
interfaces—as well as the 14 complexity adjustment values are estimated. The
resultant estimates can then be used to derive an FP value that can be tied to past
data and used to generate an estimate. Using historical data or (when all else fails) intuition, estimate an optimistic, most likely, and pessimistic size value for each function or count for each information domain value. A three-point or expected value can then be computed.
The expected value for the estimation variable (size) S can be computed as a weighted average of the optimistic (s_opt), most likely (s_m), and pessimistic (s_pess) estimates:

S = (s_opt + 4 s_m + s_pess) / 6
An Example of LOC-Based Estimation
Following the decomposition technique for LOC, an estimation table is developed. A range of LOC estimates is developed for each function. For example, the range of LOC estimates for the 3D geometric analysis function is: optimistic, 4600 LOC; most likely, 6900 LOC; and pessimistic, 8600 LOC. Applying the expected value equation, the estimate for the 3D geometric analysis function is 6800 LOC. Other estimates are derived in a similar fashion.
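The three-point calculation quoted above can be verified with a short sketch; the function name expected_size is just an illustrative label.

# Expected (three-point) size estimate: S = (s_opt + 4*s_m + s_pess) / 6
def expected_size(s_opt, s_m, s_pess):
    return (s_opt + 4 * s_m + s_pess) / 6

print(expected_size(4600, 6900, 8600))   # 3D geometric analysis -> 6800.0 LOC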
Finally, the estimated number of FP is derived:

FP_estimated = count_total x [0.65 + 0.01 x Σ(F_i)] = 375

The organizational average productivity for systems of this type is 6.5 FP/pm. Based on a burdened labor rate of $8000 per month, the cost per FP is approximately $1230. Based on the FP estimate and the historical productivity data, the total estimated project cost is $461,000 and the estimated effort is 58 person-months.
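The arithmetic behind these figures can be reproduced directly; the FP count (375), productivity (6.5 FP/pm), and burdened labor rate ($8000 per month) come from the example above, and the variable names are illustrative.

fp_estimated = 375        # adjusted function points (from the example)
productivity = 6.5        # FP per person-month
labor_rate = 8000         # burdened labor rate, $ per person-month

effort_pm = fp_estimated / productivity     # about 58 person-months
cost_per_fp = labor_rate / productivity     # about $1230 per FP
total_cost = fp_estimated * cost_per_fp     # about $461,000

print(f"Effort: {effort_pm:.0f} pm, Cost/FP: ${cost_per_fp:,.0f}, Total: ${total_cost:,.0f}")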
5. Process-Based Estimation
The most common technique for estimating a project is to base the estimate
on the process that will be used. That is, the process is decomposed into a relatively
small set of tasks and the effort required to accomplish each task is estimated.
Like the problem-based techniques, process-based estimation begins with a
delineation of software functions obtained from the project scope. A series of
framework activities must be performed for each function. Functions and related
framework activities may be represented as part of a table similar to the one
presented. Once problem functions and process activities are melded, you estimate
the effort (e.g., person-months) that will be required to accomplish each software
process activity for each software function. These data constitute the central matrix
of the table.
Average labor rates (i.e., cost/unit effort) are then applied to the effort
estimated for each process activity. It is very likely the labor rate will vary for each
task. Senior staff are heavily involved in early framework activities and are generally
more expensive than junior staff involved in construction and release. Costs and
effort for each function and framework activity are computed as the last step.
If process-based estimation is performed independently of LOC or FP
estimation, we now have two or three estimates for cost and effort that may be
compared and reconciled. If both sets of estimates show reasonable agreement,
there is good reason to believe that the estimates are reliable. If, on the other hand,
the results of these decomposition techniques show little agreement, further
investigation and analysis must be conducted.
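A minimal sketch of the central matrix described above, assuming invented function names, effort figures, and labor rates purely for illustration: rows are software functions, columns are framework activities, and an activity-specific labor rate turns person-months into cost.

# Process-based estimation sketch: effort matrix (function x activity) plus labor rates.
activities = ["analysis", "design", "code", "test"]
labor_rate = {"analysis": 9000, "design": 9000, "code": 7000, "test": 7000}  # $/pm (assumed)

effort_matrix = {                              # person-months, illustrative values only
    "user interface":   [0.50, 1.00, 0.75, 0.50],
    "database":         [0.75, 1.50, 1.00, 0.75],
    "report generator": [0.25, 0.75, 0.50, 0.50],
}

total_effort = sum(sum(row) for row in effort_matrix.values())
total_cost = sum(pm * labor_rate[act]
                 for row in effort_matrix.values()
                 for act, pm in zip(activities, row))
print(f"Total effort: {total_effort:.2f} pm, total cost: ${total_cost:,.0f}")

Totals produced this way are the figures that would then be compared and reconciled with the LOC- or FP-based estimates discussed above.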
6. Estimation with Use Cases
Developing an estimation approach with use cases is problematic for the
following reasons [Smi99]:
• Use cases are described using many different formats and styles—there is no
standard form.
• Use cases represent an external view (the user's view) of the software and can therefore be written at many different levels of abstraction.
• Use cases do not address the complexity of the functions and features that are
described.
• Use cases can describe complex behaviors (e.g., interactions) that involve many
functions and features.
Unlike an LOC or a function point, one person's "use case" may require months of effort while another person's use case may be implemented in a day or two.
PROJECT SCHEDULING
4. Write short notes on i) Project Scheduling ii) Timeline Charts. May: 05, 06, 15
The PNR curve indicates a minimum value t_o that corresponds to the least cost for delivery (i.e., the delivery time that will result in the least effort expended). The curve also indicates that the lowest cost delivery option is t_o = 2t_d. The implication here is that delaying project delivery can reduce costs significantly. Of course, this must be weighed against the business cost associated with the delay. The number of delivered lines of code (source statements), L, is related to effort and development time by the software equation:

L = P x E^(1/3) x t^(4/3)

where E is development effort in person-months, P is a productivity parameter that reflects a variety of factors that lead to high-quality software engineering work (typical values for P range between 2000 and 12,000), and t is the project duration in calendar months.
Rearranging this software equation, we can arrive at an expression for development effort E:

E = L^3 / (P^3 t^4)

where E is the effort expended (in person-years) over the entire life cycle for software development and maintenance and t is the development time in years. The equation for development effort can be related to development cost by the inclusion of a burdened labor rate factor ($/person-year).
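As a sanity check of the rearranged software equation, the short sketch below evaluates E = L^3 / (P^3 t^4) for a hypothetical project; the line count, productivity parameter, and duration are invented values chosen only to exercise the formula.

# Software equation: L = P * E^(1/3) * t^(4/3), rearranged to E = L^3 / (P^3 * t^4)
def development_effort(loc, productivity, duration_years):
    """Effort in person-years for loc delivered lines, parameter P, and time t in years."""
    return loc ** 3 / (productivity ** 3 * duration_years ** 4)

# Hypothetical project: 33,200 LOC, P = 12,000, delivered in 1.3 years
print(f"{development_effort(33_200, 12_000, 1.3):.1f} person-years")

For these assumed values the expression evaluates to roughly 7.4 person-years.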
For example, the task breakdown for a small library system project might specify that:
- the elaboration of the student user takes no more than six days, while the faculty user needs four days;
- when the design of the student user completes, the network protocol (T4) has to be developed, a subtask that requires eleven days, and simultaneously the network management routines (T5) have to be designed, taking up to seven days;
- after the termination of the faculty user subtask, a library directory (T3) should be made over nine days to maintain information about the different users and their addresses;
- the completion of the network protocol and management routines should be followed by design of the overall network control (T7) procedures, taking up to eight days;
- the library directory design should be followed by a subtask, elaboration of library staff (T6), which takes eleven days;
- the software engineering process terminates with testing (T8) for no more than four days.
Project tasks, sometimes called the project work breakdown structure (WBS), are defined for the product as a whole or for individual functions. Both PERT and CPM provide quantitative tools that allow you to (1) determine the critical path, the chain of tasks that determines the duration of the project, (2) establish "most likely" time estimates for individual tasks by applying statistical models, and (3) calculate "boundary times" that define a time "window" for a particular task.
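To make the critical-path idea concrete, the sketch below computes the longest path through the small task network described earlier (the library system example); the T1 and T2 labels for the two elaboration subtasks and the assumption that testing follows both the library staff and network control subtasks are inferred from the prose, not stated explicitly in it.

# Critical-path sketch for the task network described above (durations in days).
duration = {"T1": 6, "T2": 4, "T3": 9, "T4": 11, "T5": 7, "T6": 11, "T7": 8, "T8": 4}
predecessors = {
    "T1": [], "T2": [],          # student user / faculty user elaboration (labels assumed)
    "T3": ["T2"],                # library directory after faculty user
    "T4": ["T1"], "T5": ["T1"],  # network protocol / management routines after student user
    "T6": ["T3"],                # library staff elaboration after library directory
    "T7": ["T4", "T5"],          # overall network control after protocol and routines
    "T8": ["T6", "T7"],          # testing closes the project (assumed)
}

finish = {}
def earliest_finish(task):
    if task not in finish:
        start = max((earliest_finish(p) for p in predecessors[task]), default=0)
        finish[task] = start + duration[task]
    return finish[task]

end = max(duration, key=earliest_finish)
path = [end]
while predecessors[path[-1]]:
    path.append(max(predecessors[path[-1]], key=earliest_finish))
print("Project duration:", earliest_finish(end), "days")
print("Critical path   :", " -> ".join(reversed(path)))

Under these assumptions the critical path is T1 -> T4 -> T7 -> T8, giving a 29-day project; boundary times for the remaining tasks would follow from the same earliest/latest-finish bookkeeping.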
Time-line chart
A time-line chart, also called a Gantt chart, is then generated. A time-line chart can be developed for the entire project; alternatively, separate charts can be developed for each project function or for each individual working on the project. The figure depicts part of a software project schedule that emphasizes the concept scoping task for a word-processing (WP) software product. All project tasks are listed in the left-hand column. The horizontal bars indicate the duration of each task. When multiple bars occur at the same time on the calendar, task concurrency is implied. The diamonds indicate milestones.
Tracking the Schedule
Tracking can be accomplished in a number of different ways:
• Conducting periodic project status meetings in which each team member reports
progress and problems
• Evaluating the results of all reviews conducted throughout the software engineering
process
•Determining whether formal project milestones (the diamonds shown in Figure)
have been accomplished by the scheduled date
• Comparing the actual start date to the planned start date for each project task listed
in the resource table
• Meeting informally with practitioners to obtain their subjective assessment of
progress to date and problems on the horizon
Effort Distribution
A recommended distribution of effort across the software process is often
referred to as the 40-20-40 rule. Forty percent of all effort is allocated to front-end
analysis and design. A similar percentage is applied to back-end testing. The
characteristics of each project dictate the distribution of effort. Work expended on
project planning rarely accounts for more than 2 to 3 percent of effort, unless the
plan commits an organization to large expenditures with high risk.
Customer communication and requirements analysis may comprise 10 to 25
percent of project effort. Effort expended on analysis or prototyping should increase
in direct proportion with project size and complexity. A range of 20 to 25 percent of
effort is normally applied to software design. Time expended for design review and
subsequent iteration must also be considered. Because of the effort applied to
software design, code should follow with relatively little difficulty. A range of 15 to 20
percent of overall effort can be achieved. Testing and subsequent debugging can
account for 30 to 40 percent of software development effort. The criticality of the
software often dictates the amount of testing that is required. If the software is human rated, even higher percentages are typical.
RISK MANAGEMENT
5. What are the categories of software risks? Give an overview about risk
management. May:14,15
Risk analysis and management are actions that help a software team to
understand and manage uncertainty. Many problems can plague a software project.
A risk is a potential problem—it might happen, it might not.
Two characteristics of risk
i) Uncertainty – the risk may or may not happen, that is, there are no 100% risks
(those, instead, are called constraints)
ii) Loss – the risk becomes a reality and unwanted consequences or losses occur
i) Project risks threaten the project plan. They identify potential budgetary, schedule, personnel, resource, stakeholder, and requirements problems.
ii) Technical risks threaten the quality and timeliness of the software to be produced. They identify potential design, implementation, interface, verification, and maintenance problems. Technical risks occur because the problem is harder to solve than you thought it would be.
iii) Business risks threaten the viability of the software to be built and often
jeopardize the project or the product. Candidates for the top five business risks are
(1) building an excellent product or system that no one really wants (market risk), (2)
building a product that no longer fits into the overall business strategy for the
company (strategic risk), (3) building a product that the sales force doesn't understand how to sell (sales risk), (4) losing the support of senior management due to a change in focus or a change in people (management risk), and (5) losing budgetary or personnel commitment (budget risks). It is extremely important to note that simple risk categorization won't always work.
iv) Known risks are those that can be uncovered after careful evaluation of the
project plan, the business and technical environment in which the project is being
developed, and other reliable information sources (e.g., unrealistic delivery date, lack
of documented requirements or software scope, poor development environment).
v) Predictable risks are extrapolated from past project experience (e.g., staff
turnover, poor communication with the customer, dilution of staff effort as ongoing
maintenance requests are serviced).
vi) Unpredictable risks are the joker in the deck. They can and do occur, but they
are extremely difficult to identify in advance.
Risk Identification
Risk identification is a systematic attempt to specify threats to the project plan
(estimates, schedule, resource loading, etc.). By identifying known and predictable
risks, the project manager takes a first step toward avoiding them when possible and
controlling them when necessary. There are two distinct types of risks. Generic risks
and Product-specific risks.
Generic risks are a potential threat to every software project. Product-specific
risks can be identified only by those with a clear understanding of the technology, the
people, and the environment that is specific to the software that is to be built. Some
subset of known and predictable risks in the following generic subcategories:
•Product size—risks associated with the overall size of the software to be built or
modified.
•Business impact—risks associated with constraints imposed by management or the
marketplace.
•Stakeholder characteristics—risks associated with the sophistication of the
stakeholders and the developer's ability to communicate with stakeholders in a
timely manner.
•Process definition—risks associated with the degree to which the software process
has been defined and is followed by the development organization.
RISK PROJECTION
Risk projection, also called risk estimation, attempts to rate each risk in two
ways—(1)The likelihood or probability that the risk is real and (2) the consequences
of the problems associated with the risk, should it occur. Work along with other
managers and technical staff to perform four risk projection steps:
1. Establish a scale that reflects the perceived likelihood of a risk.
2. Delineate the consequences of the risk.
3. Estimate the impact of the risk on the project and the product.
4. Assess the overall accuracy of the risk projection so that there will be no misunderstandings.
Developing a Risk Table
A risk table provides you with a simple technique for risk projection. A sample risk table is illustrated in the figure. We begin by listing all risks in the first
column of the table. This can be accomplished with the help of the risk item
checklists. Each risk is categorized in the second column (e.g., PS implies a project
size risk, BU implies a business risk). The probability of occurrence of each
Risk is entered in the next column of the table. The probability value for each
risk can be estimated by team members individually. One way to accomplish this is
to poll individual team members in round-robin fashion until their collective
assessment of risk probability begins to converge. Next, the impact of each risk is
assessed. Each risk component is assessed using the characterization presented an
impact category is determined. The categories for each of the four risk
components—performance, support, cost, and schedule—are averaged to determine
an overall impact value.
Once the first four columns of the risk table have been completed, the table is
sorted by probability and by impact. High-probability, high-impact risks percolate to
the top of the table, and low-probability risks drop to the bottom. The cutoff line
(drawn horizontally at some point in the table) implies that only risks that lie above
the line will be given further attention. Risks that fall below the line are reevaluated to
accomplish second-order prioritization. A risk factor that has a high impact but a very
low probability of occurrence should not absorb a significant amount of management
time.
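The mechanics of building and sorting the risk table can be sketched in a few lines; the example risks, probabilities, and cutoff value below are illustrative, and the impact column assumes the common 1 = catastrophic to 4 = negligible scale.

# Risk table sketch: sort so high-probability, high-impact risks percolate to the top.
risks = [
    # (risk, category, probability, impact) -- illustrative values only
    ("Size estimate may be significantly low", "PS", 0.60, 2),
    ("Less reuse than planned",                "PS", 0.70, 2),
    ("End users resist system",                "BU", 0.40, 3),
    ("Funding will be lost",                   "CU", 0.40, 1),
    ("Staff inexperienced",                    "ST", 0.30, 2),
]

# Highest probability first; for equal probability, the more severe (lower) impact first.
risk_table = sorted(risks, key=lambda r: (-r[2], r[3]))

cutoff = 0.35   # only risks above this line get further management attention (assumed)
for name, cat, prob, impact in risk_table:
    flag = "*" if prob >= cutoff else " "
    print(f"{flag} {name:<40} {cat}  p={prob:.2f}  impact={impact}")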
PART - C
1. Elaborate on business process engineering and product engineering.
(NOV/DEC/2010)
OR
Business process engineering strives to define data and application
architecture as well as technology infrastructure. Describe what each of
these terms mean and provide an example. (NOV/DEC/2012)
OR
Explain the concept of business process engineering. (MAY/JUN/2012)
(MAY/JUN/2013)
OR
Explain the concept of product engineering. (APR/MAY/2011)
(MAY/JUN/2012)
The technology infrastructure provides the foundation for the data and
application architectures. The infrastructure encompasses the hardware and
software that are used to support the application and data. This includes
computers, operating systems, networks, telecommunication links, storage
technologies, and the architecture (e.g., client/server) that has been designed
to implement these technologies.
Product Engineering
The goal of product engineering is to translate the customer's desire for a set
of defined capabilities into a working product. To achieve this goal, product
engineering—like business process engineering—must derive architecture
and infrastructure.
Once allocation has occurred, system component engineering commences.
System component engineering is actually a set of concurrent activities that
address each of the system components separately: software engineering,
hardware engineering, human engineering, and database engineering. Each
of these engineering disciplines takes a domain-specific view, but it is
important to note that the engineering disciplines must establish and maintain
active communication with one another. Part of the role of requirements
engineering is to establish the interfacing mechanisms that will enable this to
happen.
The element view for product engineering is the engineering discipline itself
applied to the allocated component. For software engineering, this means analysis
and design modeling activities (covered in detail in later chapters) and construction
and integration activities that encompass code generation, testing, and support
steps. The analysis step models allocated requirements into representations of data,
function, and behavior. Design maps the analysis model into data, architectural,
interface, and software component-level designs.
12. Regularly, the team reflects on how to become more effective, and adjusts accordingly.
3. Explain the various phases of software development life cycle (SDLC) and
identify deliverables at each phase.(May/Jun 2011)
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Advantages of Spiral model:
High amount of risk analysis hence, avoidance of Risk is enhanced.
Good for large and mission-critical projects.
Strong approval and documentation control.
Additional Functionality can be added at a later date.
Software is produced early in the software life cycle.
PART – A
[Figure: DFD notation residue - External Entity symbol]
11. What is a prototype in the software process, and what are the types of prototypes? OR
List any two advantages of prototypes. Dec: 13, May: 14, May: 13, Nov: 13
A prototype is an initial version of a system used to demonstrate concepts
and try out design options.
A prototype can be used in:
The requirements engineering process to help with requirements elicitation
and validation;
In design processes to explore options and develop a UI design; In the
testing process to run back-to-back tests.
Types of prototypes
[Figure residue: Householder, Tone Generator]
PART-B
FUNCTIONAL &NON-FUNCTIONAL REQUIREMENTS
Types of Requirements
Types of requirements
User requirements
It is a collection of statements in natural language plus description of the
service the system provides and its operational constraints. It is written for
customers.
Guidelines for Writing User Requirements
For example
Consider a spell checking and correcting system in a word processor. The user requirements can be given in natural language: the system should possess a traditional word dictionary and a user-supplied dictionary. It shall provide a user-activated facility which checks the spelling of words in the document against spellings in the system dictionary and user-supplied dictionaries.
When a word is found in the document which is not given in the dictionary, the system should suggest 10 alternative words. These alternative words should be based on a match between the word found and corresponding words in the dictionaries. When a word is found in the document which is not in any dictionary, the system should propose the following options to the user:
1. Ignore the corresponding instance of the word and go to the next sentence.
2. Ignore all instances of the word.
[Figure: Types of non-functional requirements - interoperability requirements, ethical requirements, legislative requirements, safety requirements.]
2. Narrate the importance of SRS. Explain the typical SRS structure and its parts. Show the IEEE template of the SRS document. Dec: 05, Nov: 12, May: 16
for completing a project with as little cost growth as possible. The SRS is often
referred to as the "parent" document because all subsequent project management
documents, such as design specifications, statements of work, software architecture
specifications, testing and validation plans, and documentation plans, are related to
it.
It's important to note that an SRS contains functional and nonfunctional requirements only; it doesn't offer design suggestions, possible solutions to technology or business issues, or any other information other than what the development team understands the customer's system requirements to be.
A well-designed, well-written SRS accomplishes four major goals:
It provides feedback to the customer. An SRS is the customer's assurance
that the development organization understands the issues or problems to be solved
and the software behavior necessary to address those problems. Therefore, the SRS
should be written in natural language (versus a formal language, explained later in
this article), in an unambiguous manner that may also include charts, tables, data
flow diagrams, decision tables, and so on.
It decomposes the problem into component parts. The simple act of writing
down software requirements in a well-designed format organizes information, places
borders around the problem, solidifies ideas, and helps break down the problem into
its component parts in an orderly fashion.
It serves as an input to the design specification. As mentioned previously, the
SRS serves as the parent document to subsequent documents, such as the software
design specification and statement of work. Therefore, the SRS must contain
sufficient detail in the functional system requirements so that a design solution can
be devised.
It serves as a product validation check. The SRS also serves as the parent
document for testing and validation strategies that will be applied to the requirements
for verification.
SRSs are typically developed during the first stages of "Requirements
Development," which is the initial product development phase in which information is
gathered about what requirements are needed--and not. This information-gathering
stage can include onsite visits, questionnaires, surveys, interviews, and perhaps a
return-on-investment (ROI) analysis or needs analysis of the customer or client's
current business environment. The actual specification, then, is written after the
requirements have been gathered and analyzed.
SRS development process can offer several benefits:
Technical writers are skilled information gatherers, ideal for eliciting and
articulating customer requirements. The presence of a technical writer on the
requirements-gathering team helps balance the type and amount of information
extracted from customers, which can help improve the SRS.
Technical writers can better assess and plan documentation projects and
better meet customer document needs. Working on SRSs provides technical writers
with an opportunity for learning about customer needs firsthand--early in the product
development process.
Technical writers know how to determine the questions that are of concern to
the user or customer regarding ease of use and usability. Technical writers can then
take that knowledge and apply it not only to the specification and documentation
development, but also to user interface development, to help ensure the UI (User
Interface) models the customer requirements.
Technical writers involved early and often in the process, can become an
information resource throughout the process, rather than an information gatherer at
the end of the process.
The IEEE has identified nine topics that must be addressed when designing and writing an SRS:
1. Interfaces
2. Functional Capabilities
3. Performance Levels
4. Data Structures/Elements
5. Safety
6. Reliability
7. Security/Privacy
8. Quality
9. Constraints and Limitations
An SRS document typically includes four ingredients:
1. A template
2. A method for identifying requirements and linking sources
3. Business operation rules
4. A traceability matrix
Table 1 A sample of a basic SRS outline
1. Introduction
1.1 Purpose
1.2 Document conventions
1.3 Intended audience
1.4 Additional information
1.5 Contact information/SRS team members
1.6 References
2. Overall Description
2.1 Product perspective
2.2 Product functions
2.3 User classes and characteristics
2.4 Operating environment
2.5 User environment
2.6 Design/implementation constraints
4. Qualification Provisions: To be determined.
5. Requirements Traceability: To be determined.
i) Inception (2) ii) Elicitation (3) iii) Elaboration (3) iv) Negotiation (2) v) Specification (2) vi) Validation (2) vii) Requirements Management (2)
4. Negotiation
It is not unusual for customers and users to ask for more than can be achieved given limited business resources. The requirements engineer must reconcile these conflicts through a process of negotiation.
Customers, users, and other stakeholders are asked to rank requirements and then discuss conflicts in priority. Risks associated with each requirement are identified and analyzed, and rough "guesstimates" of development effort are made.
5. Specification
A specification can be any one (or more) of the following:
A written document
A set of models
A formal mathematical model
A collection of usage scenarios (use-cases)
A prototype
The specification is the final work product produced by the requirements
engineer. It serves as the foundation for subsequent software engineering activities.
It describes the function and performance of a computer-based system and the
constraints that will govern its development.
6. Validation
Requirements validation examines the specification to ensure that all software
requirements have been stated unambiguously; that inconsistencies, omissions, and
errors have been detected and corrected; and that the work products conform to the
standards established for the process, the project and the product. The primary
requirements validation mechanism is the formal technical review.
The review team that validates requirements includes software engineers,
customers, users, and other stakeholders who examine the specification looking for
errors in content or interpretation, areas where clarification may be required, missing
information, inconsistencies, conflicting requirements, or unrealistic requirements.
• If the system can be engineered using current technology and within budget
• If the system can be integrated with other systems that are used
[Figure: The requirements engineering process - Feasibility Study -> Requirements Elicitation and Analysis -> Requirements Specification -> Requirements Validation, producing a feasibility report and system models.]
Reviews
• Systematic manual analysis of the requirements
Prototyping
• Using an executable model of the system to check requirements.
Test-case generation
• Developing tests for requirements to check testability
Automated consistency analysis
• Checking the consistency of a structured requirements description
Requirements Management
Requirements Management is a set of activities that help the project team
identify, control, and track requirements and changes to requirements at any time as
the project proceeds.
ii) Initiating the Requirements Engineering Process
The process of initiating requirements engineering is the subject of this section. The point of view described here is that all stakeholders (including customers and developers) are involved in the creation of the system requirements documents.
The following are steps required to initiate requirements engineering:
1. Identifying the Stakeholders
A stakeholder is anyone who benefits in a direct or indirect way from the
system which is being developed. Stakeholders are: operations managers, product
managers, marketing people, internal and external customers, end-users,
consultants, product engineers, software engineers, support and maintenance
engineers, etc.
At inception, the requirements engineer should create a list of people who will
contribute input as requirements are elicited. The initial list will grow as stakeholders are contacted, because every stakeholder will be asked: "Who else do you think I should talk to?"
2. Recognizing Multiple Viewpoints
Each of the stakeholders will contribute to the RE process. As information from multiple viewpoints is collected, emerging requirements may be inconsistent or may conflict with one another.
The requirements engineer is to categorize all stakeholder information (including inconsistencies and conflicting requirements) in a way that will allow decision makers to choose an internally consistent set of requirements for the system.
3. Working toward Collaboration
Collaboration does not necessarily mean that requirements are defined by committee. In many cases, stakeholders collaborate by providing their view of requirements, but a strong "project champion" (a business manager or a senior technologist) may make the final decision about which requirements make the final cut.
Behavioral elements: The state diagram is one method for representing the
behavior of a system by depicting its states and the events that cause the system to
change state. A state is any observable mode of behavior. Moreover, the state
diagram indicates what actions are taken as a consequence of a particular event.
Flow-oriented elements: Information is transformed as it flows through a
computer-based system. The system accepts input in a variety of forms; applies
functions to transform it; and produces output in a variety of forms.
5. What are the components of the standard structure for the software
requirements document? May: 14,16
Definition:
CLASSICAL ANALYSIS
The data flow diagram for Sally's Software Shop: first refinement.
The data flow diagram for Sally's Software Shop: second refinement.
The data flow diagram for Sally's Software Shop: part of third refinement.
A Petri net.
PART -C
1) Sample problems related with Structured System Analysis.
1) Tamil Nadu Electricity Board (TNEB) would like to automate its billing process. Customers apply for a connection. EB staff take readings and update the system. Each customer is required to pay charges bi-monthly according to the rates set for the type of connection. Customers can choose to pay either by cash or card. A bill is generated on payment. Dec: 13
i) Give a name for the system.
ii) Draw the Level-0 DFD.
iii) Draw the Level-1 DFD.
[Figure: Level-1 DFD for the EB billing system - external entities: Customer, EB Staff, EB Manager; data store: Record Database; processes (1.0 to 1.5): add connection details when a customer applies for a connection, take meter readings (EB staff), update the record, generate the bill, record the customer's payment, and generate the monthly report for the EB Manager.]
Ans:
1) Stakeholders
Bank customer , Service operator , Hardware and software maintenance
engineers , Database administrators , Banking regulators, Security
administrator
Functional requirements
There should be the facility for the customer to insert a card.
The system should first validate card and PIN.
The system should allow the customer to deposit amount in the bank.
The system should dispense the cash on withdraw.
The system should provide the printout for the transaction.
The system should make the record of the transactions made by
particular customer.
The cash withdrawal is allowed in multiples of 100.
The cash deposit is allowed in multiples of 100.
The customer is allowed to transfer an amount between two accounts.
The customer is allowed to make a balance enquiry.
Each transaction should be completed within 60 seconds. If the time limit is exceeded, the transaction is cancelled.
If there is no response from the bank computer within the time limit after a request is made, the card is rejected with an error message.
The bank dispenses money only after processing the withdrawal request. That means the withdrawal request is processed only if sufficient funds are available in the user's account.
Each bank should process the transactions from several ATM centres at the same time.
The machine should be loaded with sufficient funds.
3) Draw use case and data flow diagrams for a "Restaurant System". The activities of the restaurant system are listed below:
Receive the customer food orders, produce the customer-ordered foods, serve the customers with their ordered foods, collect payment from customers, store customer payment details, order raw materials and pay for labor. May: 2015
[Figure: Use case diagram - actors: Customer, Service Assistant; use cases: Order coffee, Select one recipe, Make payment, Return change, Add recipe, Edit recipe, Delete recipe.]
[Figure: Sequence diagram]
PART – A
1. List down the steps to be followed for user interface design. May: 15
1. During the interface analysis step define interface objects and corresponding
actions or operations.
2. Define the major events in the interface.
3. Analyze how the interface will look like from user‘s point of view.
4. Identify how the user understands the interface with the information provided
along with it.
2. Define software architecture May: 14
Software architecture is a structure of systems which consists of various
components, externally visible properties of these components and the inter-
relationship among these components.
3. Define abstraction. May: 13
Abstraction is a representation of data in which the implementation details are hidden. At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, a more detailed description of the solution is provided. A procedural abstraction refers to a sequence of instructions that have a specific and limited function.
4. List out design methods. May: 12
The two software design methods are
1. Object oriented design
2. Function oriented design
5. What do the design quality attributes "FURPS" mean? Dec: 12
FURPS stands for
1. Functionality: It can be checked by assessing the set of features and capabilities of the functions.
2. Usability: It is assessed by considering human factors, overall aesthetics, consistency, and documentation.
3. Reliability: It is a measure of the frequency and severity of failure.
4. Performance: It is a measure that represents the response of the system.
5. Supportability: It is the ability to adapt to the changes made in the software.
Structured Analysis vs. Object-Oriented Analysis
1. In structured analysis, the process and data are treated separately. | In object-oriented analysis, the process and data are encapsulated in the form of an object.
2. Structured analysis is not suitable for large and complex projects. | For large and complex systems, the object-oriented approach is more suitable.
8. What architectural styles are preferred for the following? Why? Nov 2016
a) Networking b) Web-based systems c) Banking system
Networking - Data-centered architecture. It possesses the property of interchangeability.
Web-based systems - Data-flow architecture. A series of transformations is applied to produce the output data.
Banking system - Call and return architecture. In this style the hierarchical control for call and return is represented.
9. What UI design patterns are used for the following? [Nov / Dec 2016]
a) Page layout b) Tables c) Navigation through menus and web pages d) Shopping cart
Answer:
Page layout - Page grids
Tables - Form wizard
Navigation through menus and web pages - Breadcrumb navigation
Shopping cart - Information dashboard
10. Draw diagram to demonstrate the architecture styles. May: 15
13. Develop CRC model index card for a class account used in banking
application. Nov / Dec 2013
Class: Account
Responsibilities: maintain account number and customer name, cash withdrawal, deposit, safety locker
Collaborator: Bank Customer
PART – B
DESIGN PROCESS & DESIGN CONCEPTS
1. Explain about the various design process & design concepts considered
during design. May: 03.06,07,08, Dec: 05
Process models focus on the design of the business or technical process that the
system must accommodate. Finally, functional models can be used to represent the
functional hierarchy of a system.
iii) Patterns: "A pattern is a named nugget (something valuable) of insight which conveys the essence of a proven solution to a recurring problem within a certain context amidst competing concerns." The intent of each design pattern is to provide
a description that enables a designer to determine (1) whether the pattern is
applicable to the current work, (2) whether the pattern can be reused and (3)
whether the pattern can serve as a guide for developing a similar, but functionally or
structurally different pattern.
iv) Separation of concerns is a design concept that suggests that any complex
problem can be more easily handled if it is subdivided into pieces that can each be
solved and/or optimized independently. A concern is a feature or behavior that is
specified as part of the requirements model for the software. By separating concerns
into smaller, and therefore more manageable pieces, a problem takes less effort and
time to solve.
v) Modularity is "the single attribute of software that allows a program to be intellectually manageable". Software is divided into separately named and
addressable components, sometimes called modules, that are integrated to satisfy
problem requirements.
vi) Information Hiding: The principle of information hiding suggests that modules be "characterized by design decisions that (each) hides from all others." In other words, modules should be specified and
designed so that information (algorithms and data) contained within a module is
inaccessible to other modules that have no need for such information.
vii) The concept of functional independence is a direct outgrowth of separation of
concerns, modularity, and the concepts of abstraction and information hiding.
Independence is assessed using two qualitative criteria: cohesion and coupling.
Cohesion is an indication of the relative functional strength of a module. Coupling is
an indication of the relative interdependence among modules. Cohesion is a natural
extension of the information-hiding concept. A cohesive module performs a single
task, requiring little interaction with other components in other parts of a program.
Coupling is an indication of interconnection among modules in a software structure.
Coupling depends on the interface complexity between modules, the point at which
entry or reference is made to a module, and what data pass across the interface.
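As a small, invented illustration of information hiding and low coupling (it is not taken from the text), the class below hides its internal data structure behind a narrow interface, so client modules are coupled only to two methods rather than to the representation.

# Information-hiding sketch: the internal dict is a hidden design decision that can
# change (e.g., to a database) without affecting any caller.
class SensorRegistry:
    def __init__(self):
        self._sensors = {}                    # hidden representation

    def register(self, sensor_id, zone):
        self._sensors[sensor_id] = zone

    def zone_of(self, sensor_id):
        return self._sensors.get(sensor_id, "unassigned")

registry = SensorRegistry()
registry.register("S-101", "kitchen")
print(registry.zone_of("S-101"))              # client code never touches _sensors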
viii) Refinement is actually a process of elaboration. Refinement helps to reveal
low-level details as design progresses. Both concepts allow you to create a complete
design model as the design evolves.
ix) Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.
When software is refactored, the existing design is examined for redundancy,
unused design elements, inefficient or unnecessary algorithms, poorly constructed or
inappropriate data structures, or any other design failure that can be corrected to
yield a better design.
x) Design classes that refine the analysis classes by providing design detail that will
enable the classes to be implemented, and implement a software infrastructure that
supports the business solution.
Five different types of design classes, each representing a different layer of the
design architecture, can be developed:
• User interface classes define all abstractions that are necessary for human
computer interaction (HCI). In many cases, HCI occurs within the context of a
metaphor (e.g., a checkbook, an order form, a fax machine), and the design classes
for the interface may be visual representations of the elements of the metaphor.
• Business domain classes The classes identify the attributes and services
(methods) that are required to implement some element of the business domain.
• Process classes implement lower-level business abstractions required to fully
manage the business domain classes.
• Persistent classes represent data stores (e.g., a database) that will persist beyond
the execution of the software.
• System classes implement software management and control functions that
enable the system to operate and communicate within its computing environment
and with the outside world.
Each architectural style describes a system category that encompasses (1) a set of components (e.g., a database, computational modules) that perform a function required by a system; (2) a set of connectors that enable "communication, coordination and cooperation" among components; (3) constraints that define how components can be integrated to form the system; and (4) semantic models that enable a designer to understand the overall properties of a system. An architectural style is a transformation that is imposed on the design of an entire system.
The intent is to establish a structure for all components of the system. An
architectural pattern, like an architectural style, imposes a transformation on the
design of architecture. However, a pattern differs from a style in a number of
fundamental ways: (1) the scope of a pattern is less broad, focusing on one aspect
of the architecture rather than the architecture in its entirety; (2) a pattern imposes a
rule on the architecture, describing how the software will handle some aspect of its
functionality at the infrastructure level (e.g., concurrency).
The commonly used architectural styles are
1. Data-centered architectures. A data store (e.g., a file or database) resides at the
center of this architecture and is accessed frequently by other components that
update, add, delete, or otherwise modify data within the store. Figure illustrates a
typical data-centered style. Client software accesses a central repository. In some
cases the data repository is passive. That is, client software accesses the data
independent of any changes to the data or the actions of other client software. A
variation on this approach transforms the repository into a "blackboard" that sends notifications to client software when data of interest to the client changes. Data-centered architectures promote integrability. That is, existing components can be changed and new client components can be added to the architecture without concern about other clients.
3. Call and return architectures. This architectural style enables you to achieve a
program structure that is relatively easy to modify and scale. A number of sub styles
exist within this category:
Fig: Layered Architecture
Defining Archetypes
An archetype is a class or pattern that represents a core abstraction that is
critical to the design of an architecture for the target system. In general, a relatively
small set of archetypes is required to design even relatively complex systems. The
target system architecture is composed of these archetypes, which represent stable
elements of the architecture but may be instantiated many different ways based on
the behavior of the system.
Continuing the discussion of the Safe Home security function, we might define the
following archetypes:
• Node. Represents a cohesive collection of input and output elements of the home
security function. For example a node might be comprised of (1) various sensors and
(2) a variety of alarm (output) indicators.
• Detector. An abstraction that encompasses all sensing equipment that feeds
information into the target system.
• Indicator. An abstraction that represents all mechanisms (e.g., alarm siren, flashing lights, bell) for indicating that an alarm condition is occurring.
• Controller. An abstraction that depicts the mechanism that allows the arming or
disarming of a node. If controllers reside on a network, they have the ability to
communicate with one another. Each of these archetypes is depicted using UML
notation as shown in Figure. For example, Detector might be refined into a class
hierarchy of sensors.
"Transform" mapping is illustrated for a small part of the SafeHome security function. In order to perform the mapping, the type of information flow must be determined. One type of
information flow is called transform flow and exhibits a linear quality. Data flows into
the system along an incoming flow path where it is transformed from an external
world representation into internalized form. Once it has been internalized, it is
processed at a transform center. Finally, it flows out of the system along an outgoing
flow path that transforms the data into external world form.
Transform Mapping
Transform mapping is a set of design steps that allows a DFD with transform flow
characteristics to be mapped into a specific architectural style. To illustrate this
approach, we again consider the SafeHome security function. One element of the
analysis model is a set of data flow diagrams that describe information flow within
the security function. To map these data flow diagrams into software architecture, we
would initiate the following design steps:
Step 1. Review the fundamental system model. The fundamental system model
or context diagram depicts the security function as a single transformation,
representing the external producers and consumers of data that flow into and out of
the function. Figure 1 depicts a level 0 context model, and Figure 2 shows refined
data flow for the security function.
Fig. 2: Level 1 DFD for the SafeHome security function
Step 2. Review and refine data flow diagrams for the software. Information
obtained from the requirements model is refined to produce greater detail. For
example, the level 2 DFD for monitor sensors (Figure 3) is examined, and a level 3
data flow diagram is derived as shown in Figure 4. At level 3, each transform in the
data flow diagram exhibits relatively high cohesion.
Fig. 4: Level 3 DFD for monitor sensors
Step 3. Determine whether the DFD has transform or transaction flow
characteristics. Evaluating the DFD (Figure 4), we see data entering the software
along one incoming path and exiting along three outgoing paths. Therefore, an overall
transform characteristic will be assumed for information flow.
Step 4. Isolate the transform center by specifying incoming and outgoing flow
boundaries. Incoming data flows along a path in which information is converted
from external to internal form; outgoing flow converts internalized data to external
form. Incoming and outgoing flow boundaries are open to interpretation. That is,
different designers may select slightly different points in the flow as boundary
locations. Flow boundaries for the example are illustrated as shaded curves running
vertically through the flow in Figure 4.
The transforms (bubbles) that constitute the transform center lie within the two
shaded boundaries that run from top to bottom in the figure. An argument can be
made to readjust a boundary (e.g., an incoming flow boundary separating read
sensors and acquire response info could be proposed). The emphasis in this design
step should be on selecting reasonable boundaries, rather than lengthy iteration on
placement of divisions.
Step 5. Perform “first-level factoring.” The program architecture derived using this
mapping results in a top-down distribution of control. Factoring leads to a program
structure in which top-level components perform decision making and low level
components perform most input, computation, and output work. Middle-level
components perform some control and do moderate amounts of work. When
transform flow is encountered, a DFD is mapped to a specific structure (a call and
return architecture) that provides control for incoming, transform, and outgoing
information processing.
This first-level factoring for the monitor sensors subsystem is illustrated in Figure 5.
A main controller (called monitor sensors executive) resides at the top of the
program structure and coordinates the following subordinate control functions:
• An incoming information processing controller, called sensor input controller,
coordinates receipt of all incoming data.
• A transform flow controller, called alarm conditions controller, supervises all
operations on data in internalized form (e.g., a module that invokes various data
transformation procedures).
• An outgoing information processing controller, called alarm output controller,
coordinates production of output information.
Although a three-pronged structure is implied by Figure 5, complex flows in
large systems may dictate two or more control modules for each of the generic
control functions described previously. The number of modules at the first level
should be limited to the minimum that can accomplish control functions and still
maintain good functional independence characteristics.
Step 6. Perform "second-level factoring." Each of the calculation transforms of the
transform portion of the DFD is mapped into a module subordinate to the transform
controller. The completed first-iteration architecture is shown in Figure 7.
Step 7. Refine the first-iteration architecture using design heuristics for
improved software quality. A first-iteration architecture can always be refined by
applying concepts of functional independence. Components are exploded or
imploded to produce sensible factoring, separation of concerns, good cohesion,
minimal coupling, and, most important, a structure that can be implemented without
difficulty, tested without confusion, and maintained without grief. There are times, for
example, when the controller for incoming data flow is totally unnecessary, when
some input processing is required in a component that is subordinate to the
transform controller, when high coupling due to global data cannot be avoided, or
when optimal structural characteristics cannot be achieved. Software requirements
coupled with human judgment are the final arbiter.
Cohesion
• Cohesion is the "single-mindedness" of a component
• It implies that a component or class encapsulates only attributes and operations
that are closely related to one another and to the class or component itself
• The objective is to keep cohesion as high as possible.
• The kinds of cohesion can be ranked in order from highest (best) to lowest (worst)
Functional
• A module performs one and only one computation and then returns a result.
Layer
• A higher layer component accesses the services of a lower layer component
Communicational
• All operations that access the same data are defined within one class
Kinds of cohesion
Sequential cohesion
Components or operations are grouped in a manner that allows the first to provide
input to the next and so on in order to implement a sequence of operations
Procedural cohesion
Components or operations are grouped in a manner that allows one to be invoked
immediately after the preceding one was invoked, even when no data are passed
between them
Temporal cohesion
Operations are grouped to perform a specific behavior or establish a certain state
such as program start-up or when an error is detected
Utility cohesion
Components, classes, or operations are grouped within the same category because
of similar general functions but are otherwise unrelated to each other
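As a short illustrative sketch in C (the function names are invented for this example,
not taken from the text), a functionally cohesive module performs exactly one
computation, while a temporally cohesive module merely groups actions that happen
at the same time:

/* Assumed helper declarations, present only so the sketch is self-contained. */
void open_log_file(void);
void load_configuration(void);
void clear_screen(void);

/* Functional cohesion: one computation, one result. */
double average(const double *values, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += values[i];
    return n > 0 ? sum / n : 0.0;
}

/* Temporal cohesion: otherwise unrelated actions grouped only because they
 * all happen at program start-up. */
void on_startup(void)
{
    open_log_file();
    load_configuration();
    clear_screen();
}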
Coupling
• As the amount of communication and collaboration increases between operations
and classes, the complexity of the computer-based system also increases
• As complexity rises, the difficulty of implementing, testing, and maintaining software
also increases
• Coupling is a qualitative measure of the degree to which operations and classes
are connected to one another
• The objective is to keep coupling as low as possible
• The kinds of coupling can be ranked in order from lowest (best) to highest (worst)
• Data coupling: Operation A() passes one or more atomic data operands to
operation B(); the fewer the operands, the lower the level of coupling.
• Stamp coupling
A whole data structure or class instantiation is passed as a parameter to an
operation
• Control coupling Operation A() invokes operation B() and passes a control flag to
B that directs logical flow within B(). Consequently, a change in B() can require a
change to be made to the meaning of the control flag passed by A(), otherwise an
error may result
• Common coupling
A number of components all make use of a global variable, which can lead to
uncontrolled error propagation and unforeseen side effects
• Content coupling
One component secretly modifies data that is stored internally in another component
• Subroutine call coupling: occurs when one operation invokes another.
Usability inspection methods include cognitive walkthrough, which focuses on how
simply a new user can accomplish tasks with the system; heuristic evaluation, in
which a set of heuristics is used to identify usability problems in the UI design; and
pluralistic walkthrough, in which a selected group of people step through a task
scenario and discuss usability issues.
Usability testing – testing of the prototypes on an actual user, often using a
technique called think-aloud protocol, where you ask the user to talk about their
thoughts during the experience. User interface design testing allows the designer to
understand the reception of the design from the viewer's standpoint, and thus
facilitates creating successful applications.
Graphical user interface design – the actual look-and-feel design of the final graphical
user interface (GUI). It may be based on the findings developed during the user
research, and refined to fix any usability problems found through the results of
testing.
Interface analysis, Interface Design
User interface design (UID) or user interface engineering is the design of
websites, computers, appliances, machines, mobile communication devices, and
software applications with the focus on the user's experience and interaction. The
goal of user interface design is to make the user's interaction as simple and efficient
as possible, in terms of accomplishing user goals—what is often called user-
centered design.
Good user interface design facilitates finishing the task at hand without
drawing unnecessary attention to itself. Graphic design and typography are utilized
to support its usability, influencing how the user performs certain interactions and
improving the aesthetic appeal of the design; design aesthetics may enhance or
detract from the ability of users to use the functions of the interface. [1] The design
process must balance technical functionality and visual elements (e.g., mental
model) to create a system that is not only operational but also usable and adaptable
to changing user needs.
Interface design is involved in a wide range of projects from computer
systems, to cars, to commercial planes; all of these projects involve much of the
same basic human interactions yet also require some unique skills and knowledge.
As a result, designers tend to specialize in certain types of projects and have skills
centered on their expertise, whether that be software design, user research, web
design, or industrial design.
Interface design deals with the process of developing a method for two (or
more) modules in a system to connect and communicate. These modules can apply
to hardware, software or the interface between a user and a machine. An example of
a user interface could include a GUI, a control panel for a nuclear power plant, or
even the cockpit of an aircraft. In systems engineering, all the inputs and outputs of a
system, subsystem, and its components are often listed in an interface control
document as part of the requirements of the engineering project.
User Interface design activities:
User interface analysis and design process begins at the interior of the spiral
and encompasses four distinct framework activities:
(1) Interface analysis and modeling,
(2) interface design,
(3) Interface construction, and
(4) Interface validation.
Interface analysis focuses on the profile of the users who will interact with the
system. Skill level, business understanding, and general receptiveness to the new
system are recorded; and different user categories are defined. For each user
category, requirements are elicited. Once general requirements have been defined,
a more detailed task analysis is conducted. Those tasks that the user performs to
accomplish the goals of the system are identified, described, and elaborated.
Interface design is to define a set of interface objects and actions (and their screen
representations) that enable a user to perform all defined tasks in a manner that
meets every usability goal defined for the system.
Interface construction normally begins with the creation of a prototype that enables
usage scenarios to be evaluated. As the iterative design process continues, a user
interface tool kit may be used to complete the construction of the interface.
Interface validation focuses on (1) the ability of the interface to implement every
user task correctly, to accommodate all task variations, and to achieve all general
user requirements; (2) the degree to which the interface is easy to use and easy to
learn, and (3) the users‘ acceptance of the interface as a useful tool in their work.
User Analysis
Information from a broad array of sources can be used to accomplish this:
User Interviews. The most direct approach: members of the software team meet
with end users to better understand their needs, motivations, work culture, and a
myriad of other issues. This can be accomplished in one-on-one meetings or through
focus groups.
Sales input. Sales people meet with users on a regular basis and can gather
information that will help the software team to categorize users and better
understand their requirements.
Marketing input. Market analysis can be invaluable in the definition of market
segments and an understanding of how each segment might use the software in
subtly different ways.
Support input. Support staff talks with users on a daily basis. They are the most
likely source of information on what works and what doesn‘t, what users like and
what they dislike, what features generate questions and what features are easy to
use.
Use cases. As part of the task analysis, the use case is developed to show how an end user
performs some specific work-related task. In most instances, the use case is written
in an informal style (a simple paragraph) in the first-person. For example, assume
that a small software company wants to build a computer-aided design system
explicitly for interior designers.
Workflow analysis. When a number of different users, each playing different roles,
makes use of a user interface, it is sometimes necessary to go beyond task analysis
and object elaboration and apply workflow analysis. This technique allows you to
understand how a work process is completed when several people (and roles) are
involved. Consider a company that intends to fully automate the process of
prescribing and delivering prescription drugs.
A number of user interface design models have been proposed; all suggest some combination
of the following steps:
1. Using information developed during interface analysis; define interface objects
and actions (operations).
2. Define events (user actions) that will cause the state of the user interface to
change. Model this behavior.
3. Depict each interface state as it will actually look to the end user.
4. Indicate how the user interprets the state of the system from information provided
through the interface.
DESIGN HEURISTICS
6. Discuss the design heuristics for effective modularity design.(8m) (4m)
May/June 2013,16
Design Heuristics
1. Evaluate the first iteration of the program structure to reduce coupling and
improve cohesion. The task is to improve module independence once the
program structure has been developed.
2. Attempt to minimize structures with high fan-out; strive for fan-in as depth
increases.
3. Keep the scope of effect of a module within the scope of control of that
module.
If module e makes a decision that affects module r, then the heuristic
is violated, because module r lies outside the scope of control of
module e.
4. Evaluate module interfaces to reduce complexity and redundancy and to improve
consistency.
5. Define modules whose function is predictable, but avoid modules that are
overly restrictive.
PART- C
1. What are the characteristics of good design? Describe the types of coupling
and cohesion. How is design evaluation performed (APR/MAY/2010) (or)
Which is a measure of interconnection among modules in a program
structure? Explain (NOV/DEC/2011) – Answer – Coupling
Purpose of Design
• Design is where customer requirements, business needs, and technical
considerations all come together in the formulation of a product or system
• The design model provides detail about the software data structures,
architecture, interfaces, and components
• The design model can be assessed for quality and be improved before code
is generated and tests are conducted
Does the design contain errors, inconsistencies, or omissions?
Are there better design alternatives?
Can the design be implemented within the constraints, schedule, and cost that
have been established?
• A designer must practice diversification and convergence
The designer selects from design components, component solutions, and
knowledge available through catalogs, textbooks, and experience
The designer then chooses the elements from this collection that meet the
requirements defined by requirements engineering and analysis modeling
Convergence occurs as alternatives are considered and rejected until one
particular configuration of components is chosen
• Software design is an iterative process through which requirements are
translated into a blueprint for constructing the software
Design begins at a high level of abstraction that can be directly traced back to
the data, functional, and behavioral requirements
As design iteration occurs, subsequent refinement leads to design
representations at much lower levels of abstraction
Goals of a Good Design -three characteristics that serve as a guide for the
evaluation of a good design:
• The design must implement all of the explicit requirements contained in the
analysis model
– It must also accommodate all of the implicit requirements desired by
the customer
• The design must be a readable and understandable guide for those who
generate code, and for those who test and support the software
• The design should provide a complete picture of the software, addressing the
data, functional, and behavioral domains from an implementation perspective
Technical criteria for good design
1) A design should exhibit an architecture that
a) Has been created using recognizable architectural styles or patterns
b) Is composed of components that exhibit good design characteristics
c) Can be implemented in an evolutionary fashion, thereby facilitating
implementation and testing
2) A design should be modular; that is, the software should be logically
partitioned into elements or subsystems
3) A design should contain distinct representations of data, architecture,
interfaces, and components
4) A design should lead to data structures that are appropriate for the classes to
be implemented and are drawn from recognizable data patterns
5) A design should lead to components that exhibit independent functional
characteristics
6) A design should lead to interfaces that reduce the complexity of connections
between components and with the external environment
2.Write down software design procedures for data acquisition and control
system(APR/MAY/2010) (NOV/DEC/2013). OR
Explain the various steps involved in analyzing and designing a data
acquisition system
Data acquisition systems collect data from sensors for subsequent processing
and analysis. These systems are used in circumstances where the sensors are
collecting lots of data from the system‘s environment and it isn‘t possible or
necessary to process the data collected in real-time. Data acquisition systems are
commonly used in scientific experiments and process control systems where
physical processes, such as a chemical reaction, happen very quickly.
In data acquisition systems, the sensors may be generating data very quickly,
and the key problem is to ensure that a sensor reading is collected before the sensor
value changes. This leads to a generic architecture. The essential feature of the
architecture of data acquisition systems is that each group of sensors has three
processes associated with it.
These are the sensor process that interfaces with the sensor and converts
analogue data to digital values if necessary, a buffer process, and a process that
consumes the data and carries out further processing. Sensors, of course, can be of
different types, and the number of sensors in a group depends on the rate at which
data arrives from the environment.
The figure below shows two groups of sensors, s1–s3 and s4–s6, and also shows,
on the right, a further process that displays the sensor data. Most data
acquisition systems include display and reporting processes that aggregate the
collected data and carry out further processing.
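A minimal C sketch of the sensor / buffer / consumer split is given below. A
single-threaded ring buffer is assumed purely for illustration; a real data acquisition
system would run the sensor and consumer as concurrent processes:

#include <stdio.h>

#define BUF_SIZE 16

/* Ring buffer shared by the sensor (producer) and the consumer. */
static int buffer[BUF_SIZE];
static int head = 0, tail = 0, count = 0;

/* Sensor process: stores an (already digitised) reading in the buffer. */
void sensor_put(int reading)
{
    if (count == BUF_SIZE) return;        /* buffer full: reading is dropped */
    buffer[tail] = reading;
    tail = (tail + 1) % BUF_SIZE;
    count++;
}

/* Consumer process: takes the oldest reading for further processing/display. */
int consumer_get(int *value)
{
    if (count == 0) return 0;             /* nothing to process */
    *value = buffer[head];
    head = (head + 1) % BUF_SIZE;
    count--;
    return 1;
}

int main(void)
{
    int v;
    sensor_put(42);
    if (consumer_get(&v)) printf("processed reading %d\n", v);
    return 0;
}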
Component Definitions
Component is a modular, deployable, replaceable part of a system that encapsulates
implementation and exposes a set of interfaces
Object-oriented view is that component contains a set of collaborating classes
o Each elaborated class includes all attributes and operations relevant to its
implementation
o All interfaces that enable communication and collaboration with other design classes
are also defined
o Analysis classes and infrastructure classes serve as the basis for object-
oriented elaboration
Traditional view is that a component (or module) resides in the software and
serves one of three roles
o Control components coordinate invocation of all other problem domain
components
o Problem domain components implement a function required by the
customer
o Infrastructure components are responsible for functions needed to support
the processing required in a domain application
o The analysis model data flow diagram is mapped into a module hierarchy
as the starting point for the component derivation
Process-Related view emphasizes building systems out of existing components
chosen from a catalog of reusable components as a means of populating the
architecture
Class-based Component Design
Focuses on the elaboration of domain specific analysis classes and the definition
of infrastructure classes
Coupling
Content coupling – occurs when one component surreptitiously modifies internal
data in another component
Common coupling – occurs when several components make use of a global
variable
Control coupling – occurs when one component passes control flags as
arguments to another
Stamp coupling – occurs when parts of larger data structures are passed
between components
Data coupling – occurs when long strings of arguments are passed between
components
Routine call coupling – occurs when one operation invokes another
Type use coupling – occurs when one component uses a data type defined in
another
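A small C sketch (the functions are invented for illustration) contrasting three of
these coupling levels: data coupling, control coupling and common coupling:

#include <stdio.h>

int shared_total = 0;            /* global variable: the source of common coupling */

/* Data coupling: only atomic operands are passed. */
int add(int a, int b) { return a + b; }

/* Control coupling: the flag passed by the caller steers the callee's logic. */
void print_report(int verbose)
{
    if (verbose) printf("detailed report\n");
    else         printf("summary report\n");
}

/* Common coupling: different functions read/write the same global. */
void accumulate(int x) { shared_total += x; }

int main(void)
{
    printf("%d\n", add(2, 3));   /* data coupling           */
    print_report(1);             /* control coupling (flag) */
    accumulate(5);               /* common coupling         */
    return 0;
}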
PART – A
1. Distinguish between verification and validation Dec: 07,16 May: 09, 13, 14
Verification vs. Validation
• Verification refers to the set of activities that ensure software correctly implements
the specific function; validation refers to the set of activities that ensure that the
software that has been built is traceable to customer requirements.
• Verification asks "Are we building the product right?"; validation asks "Are we
building the right product?"
• Verification starts only after a valid and complete specification is available;
validation begins as soon as the project starts.
• Verification is conducted to ensure that the software meets the specification;
validation is conducted to show that the user requirements are satisfied.
3. What is the difference between alpha testing and beta testing? May: 09
Alpha Testing vs. Beta Testing
• Alpha testing is done by a developer, or by a customer under the supervision of the
developer, at the developer company's premises; beta testing is done by the
customer at the customer's place without any interference from the developer.
• In alpha testing, sometimes the full product is not tested and only the core
functionalities are tested; in beta testing the complete product is tested, and such a
product is usually given out as a free trial version.
7. How is the software testing results related to the reliability of the software?
Dec: 12
During software testing the program is executed with the intention of
finding as many errors as possible. The test cases are designed with the intent of
discovering yet-undiscovered errors. Thus after testing, the majority of serious errors
have been removed from the system and it is turned into a product that satisfies user
requirements; the fewer the failures observed during testing, the higher the expected
reliability of the software.
8. What are the levels at which the testing is done? Dec: 13
Testing can be done at two levels:
i)Component level testing: In the component level testing the individual component is
tested.
ii)System level testing: In system testing, the testing of group of components
integrated to create a system or subsystem is done.
9. What are the classes of loops that can be tested? May: 14
1. Simple loop 2. Nested Loop
3. Concatenated Loop 4. Unstructured Loop
11. In unit testing of a module, it is found for a set of test data, at maximum
90% of the code alone were tested with the probability of success 0.9. What is
the reliability of the module?
For the given set of test data, at most 90% of the code is tested and the probability
of success for the tested code is given as 0.9 (i.e., 90%).
Hence, Reliability = (90/100) * 0.9 = 0.81,
so the reliability of the module is at most 0.81.
PART – B
WHITE BOX TESTING
1. Illustrate white box testing. May: 04, 07, Dec: 07, May: 15
Guarantee that all independent paths within a module have been exercised at
least once
Exercise all logical decisions on their true and false sides,
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to ensure their validity.
Basis path testing:
Basis path testing is a white-box testing technique
To derive a logical complexity measure of a procedural design.
Test cases derived to exercise the basis set are guaranteed to execute every
statement in the program at least one time.
Methods:
1. Flow graph notation
2. Independent program paths or Cyclomatic complexity
3. Deriving test cases
4. Graph Matrices
Flow Graph Notation:
Start with a simple notation for the representation of control flow (called a flow
graph). It represents logical control flow.
Fig. A represents a program control structure and Fig. B maps the flowchart into a
corresponding flow graph.
In Fig. B each circle, called a flow graph node, represents one or more procedural
statements.
A sequence of process boxes and a decision diamond can map into a single
node.
The arrows on the flow graph, called edges or links, represent the flow of control
and are analogous to flowchart arrows.
An edge must terminate at a node, even if the node does not represent any
procedural statement.
Areas bounded by edges and nodes are called regions. When counting
regions, we include the area outside the graph as a region.
When compound conditions are encountered in a procedural design, the flow graph
becomes slightly more complicated.
When translating a PDL segment with compound conditions into a flow graph, a
separate node is created for each simple condition.
Each node that contains a condition is called a predicate node and is
characterized by two or more edges emanating from it.
Independent program paths or Cyclomatic complexity:
An independent path is any path through the program that introduces at least
one new set of processing statements or a new condition.
For example, a set of independent paths for flow graph:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-1-11
Path 4: 1-2-3-6-7-9-1-11
Note that each new path introduces a new edge.
The path 1-2-3-4-5-10-1-2-3-6-8-9-1-11 is not considered to be an independent
path because it is simply a combination of already specified paths and does
not traverse any new edges.
Test cases should be designed to force execution of these paths (basis set).
Every statement in the program should be executed at least once and every
condition will have been executed on its true and false sides.
Cyclomatic complexity is a software metric that provides a quantitative
measure of the logical complexity of a program.
It defines the number of independent paths in the basis set and also provides the
number of tests that must be conducted to ensure that all statements have been
executed at least once.
Cyclomatic complexity can be computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G, is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), is also defined as
V(G) = P + 1
where P is the number of predicate nodes.
The value of V(G) provides an upper bound on the number of test cases required.
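As a small C sketch, the three formulas can be checked against the flow graph whose
independent paths were enumerated above (the edge, node, predicate and region
counts below are assumed from that example, in which V(G) = 4):

#include <stdio.h>

int main(void)
{
    int edges = 13, nodes = 11, predicates = 3, regions = 4;

    printf("V(G) = E - N + 2 = %d\n", edges - nodes + 2);  /* 4 */
    printf("V(G) = P + 1     = %d\n", predicates + 1);     /* 4 */
    printf("V(G) = regions   = %d\n", regions);            /* 4 */
    return 0;
}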
Deriving Test Cases:
It is a series of steps.
The procedure average, depicted in PDL, is used as an example; average, an
extremely simple algorithm, contains compound conditions and loops.
To derive the basis set, follow these steps:
1. Using the design or code as a foundation, draw a corresponding flow graph.
A flow graph is created by numbering those PDL statements that will be mapped
into corresponding flow graph node.
2. Explain the various types of black box testing methods. Dec: 07,16 May: 15
Black box testing:
Also called behavioral testing, focuses on the functional requirements of the
software.
It enables the software engineer to derive sets of input conditions that will fully
exercise all functional requirements for a program.
Black-box testing is not an alternative to white-box techniques but a
complementary approach.
Example:
Consider the send function for a fax application.
Four parameters, P1, P2, P3, and P4, are passed to the send function. Each
takes on three discrete values.
P1 takes on values:
o P1 = 1, send it now
o P1 = 2, send it one hour later
o P1 = 3, send it after midnight
P2, P3, and P4 would also take on values of 1, 2 and 3, signifying other send
functions.
OAT is an array of values in which each column represents a parameter
that can take on a certain set of values called levels.
Each row represents a test case.
Parameters are combined pair-wise rather than representing all possible
combinations of parameters and levels
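A commonly used L9(3^4) orthogonal array is sketched below in C (the array in the
textbook figure may be a different but equivalent L9); its nine rows cover every pair
of parameter values for P1-P4 at three levels each:

#include <stdio.h>

/* One standard L9(3^4) orthogonal array: each pair of columns contains every
 * combination of levels (1..3) exactly once. */
static const int L9[9][4] = {
    {1,1,1,1}, {1,2,2,2}, {1,3,3,3},
    {2,1,2,3}, {2,2,3,1}, {2,3,1,2},
    {3,1,3,2}, {3,2,1,3}, {3,3,2,1},
};

int main(void)
{
    for (int t = 0; t < 9; t++)
        printf("test %d: P1=%d P2=%d P3=%d P4=%d\n",
               t + 1, L9[t][0], L9[t][1], L9[t][2], L9[t][3]);
    return 0;
}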
3. Explain about various testing strategy. May: 05, 06, Dec: 08, May: 10, 13
4. All the basis (independent) paths are tested, ensuring that all statements in the
module have been executed at least once.
5. All error handling paths should be tested.
2. Integration Testing:
Integration testing tests the integration or interfaces between components,
interactions with different parts of the system such as the operating system, file system
and hardware, or interfaces between systems. After integrating two different
components together we perform integration testing; as displayed in the image below,
when two different modules 'Module A' and 'Module B' are integrated, the
integration testing is done on the combination.
An incremental integration strategy includes:
1. Top down integration
2. Bottom up integration
3. Regression Testing
4. Smoke Testing
Advantage:
Big Bang testing has the advantage that everything is finished before integration
testing starts.
Disadvantage:
The major disadvantage is that in general it is time consuming and difficult to trace
the cause of failures because of this late integration.
Top-Down Integration Testing:
(Fig.: module hierarchy used for top-down integration - top-level module A with
subordinate modules B, F, G and lower-level modules D, E.)
1. The main control module is used as a test driver, and stubs are substituted for all
components directly subordinate to it.
2. Subordinate stubs are replaced one at a time with real components (following the
depth-first or breadth-first approach).
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with a real
component.
5. Regression testing may be used to ensure that new errors have not been introduced.
Advantages of Top-Down approach:
The tested product is very consistent because the integration testing is basically
performed in an environment that is almost similar to reality.
Stubs can be written in less time because, compared to drivers,
stubs are simpler to author.
Disadvantages of Top-Down approach:
Basic functionality is tested at the end of cycle
Bottom-Up Integration:
In Bottom-Up Integration the modules at the lowest levels are integrated first,
then integration is done by moving upward through the control structure.
Bottom up Integration process can be performed using following steps.
• Low level components are combined in clusters that perform a specific software
function.
• A driver (control program) is written to coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined moving upward in the program
structure.
Beta testing is testing in which a version of the complete software is tested by the customer at his
or her own site without the developer being present.
7. System Testing:
The system test is a series of tests conducted to fully exercise the computer-based system.
Various types of system tests are:
• Recovery testing
Is intended to check the system's ability to recover from failures
• Security testing
Security testing verifies that system protection mechanism prevents improper
penetration or data alteration
• Stress testing
Determines the breakpoint of a system to establish the maximum service level. The
program is checked to see how well it deals with abnormal resource demands
8. Performance testing
Performance testing evaluates the run-time performance of software.
Performance Testing:
• Stress test.
• Volume test.
• Configuration test (hardware & software).
• Compatibility.
• Regression tests.
• Security tests.
• Timing tests.
• Environmental tests.
• Quality tests.
• Recovery tests.
• Maintenance tests.
• Documentation tests.
• Human factors tests.
Testing Life Cycle:
• Establish test objectives.
• Design criteria (review criteria).
Correct.
Feasible.
Coverage.
Demonstrate functionality.
Principles play an important role in all engineering disciplines and are usually
introduced as part of an educational background in each branch of engineering.
Principle: 1
Testing is the process of exercising a software component using a selected
set of test cases, with the intent of (i) revealing defects, and (ii) evaluating
quality.
This principle supports testing as an execution-based activity to detect
defects. It also supports the separation of testing from debugging since the intent of
the latter is to locate defects and repair the software. The term "software component"
is used in this context to represent any unit of software ranging in size and
complexity from an individual procedure or method, to an entire software system.
The term "defects" as used in this and in subsequent principles represents any
deviations in the software that have a negative impact on its functionality,
performance, reliability, security, and/or any other of its specified quality attributes.
Principle: 2
When the test objective is to detect defects, then a good test case is one that
has a high probability of revealing an as yet undetected defect.
Principle 2 supports careful test design and provides a criterion with which to
evaluate test case design and the effectiveness of the testing effort when the
objective is to detect defects. It requires the tester to consider the goal for each test
case, that is, which specific type of defect is to be detected by the test case. In this
way the tester approaches testing in the same way a scientist approaches an
experiment. In the case of the scientist there is a hypothesis involved that he/she
wants to prove or disprove by means of the experiment.
Principle: 3
Test results should be inspected meticulously.
Testers need to carefully inspect and interpret test results. Several erroneous
and costly scenarios may occur if care is not taken. For example: A failure may be
overlooked, and the test may be granted a "pass" status when in reality the software
has failed the test. Testing may continue based on erroneous test results. The defect
may be revealed at some later stage of testing, but in that case it may be more costly
and difficult to locate and repair.
• A failure may be suspected when in reality none exists. In this case the test
may be granted a "fail" status. Much time and effort may be spent on trying to find
the defect that does not exist. A careful reexamination of the test results could finally
indicate that no failure has occurred. The outcome of a quality test may be
misunderstood, resulting in unnecessary rework, or oversight of a critical problem.
Principle: 4
A test case must contain the expected output or result.
It is often obvious to the novice tester that test inputs must be part of a test
case. However, the test case is of no value unless there is an explicit statement of
the expected outputs or results, for example, a specific variable value must be
observed or a certain panel button that must light up. Expected outputs allow the
tester to determine (i) whether a defect has been revealed, and (ii) pass/fail status
for the test. It is very important to have a correct statement of the output so that
needless time is not spent due to misconceptions about the outcome of a test. The
specification of test inputs and outputs should be part of test design activities.
Principle: 5
Test cases should be developed for both valid and invalid input conditions.
A tester must not assume that the software under test will always be provided
with valid inputs. Inputs may be incorrect for several reasons. For example, software
users may have misunderstandings, or lack information about the nature of the
inputs. They often make typographical errors even when complete/correct
information is available. Devices may also provide invalid inputs due to erroneous
conditions and malfunctions. Use of test cases that are based on invalid inputs is
very useful for revealing defects since they may exercise the code in unexpected
ways and identify unexpected software behavior. Invalid inputs also help developers
and testers evaluate the robustness of the software, that is, its ability to recover
when unexpected events occur.
Principle: 6
The probability of the existence of additional defects in a software component
is proportional to the number of defects already detected in that component.
What this principle says is that the higher the number of defects already
detected in a component, the more likely it is to have additional defects when it
undergoes further testing. For example, if there are two components A and B, and
testers have found 20 defects in A and 3 defects in B, then the probability of the
existence of additional defects in A is higher than B. This empirical observation may
be due to several causes. Defects often occur in clusters and often in code that has
a high degree of complexity and is poorly designed.
Principle: 7
Testing should be carried out by a group that is independent of the
development group.
This principle holds true for psychological as well as practical reasons. It is
difficult for a developer to admit or conceive that software he/she has created and
developed can be faulty. Testers must realize that (i) developers have a great deal of
pride in their work, and (ii) on a practical level it may be difficult for them to
conceptualize where defects could be found. Even when tests fail, developers often
have difficulty in locating the defects since their mental model of the code may
overshadow their view of code as it exists in actuality. They may also have
5. Given a set of numbers 'n', the function FindPrime(a[], n) prints a number
if it is a prime number. Draw a control flow graph, calculate the cyclomatic
complexity and enumerate all paths. State how many test cases are needed to
adequately cover the code in terms of branches, decisions and statements.
Develop the necessary test cases using sample values for 'a' and 'n'.
Dec:13
#include <stdio.h>

/* The numbers in the comments are the statement / flow-graph node numbers
   used for the control flow graph (the graph figure itself is not reproduced). */
void FindPrime(int a[], int n)
{
    int i, j, rem, flag;
    i = 0;                          /* 1  */
    while (i < n)                   /* 2  */
    {
        flag = 0;                   /* 3  */
        j = 2;                      /* 4  (the printed notes read "i = 2"; j is intended) */
        while (j < a[i])            /* 5  */
        {
            rem = a[i] % j;         /* 6  */
            if (rem == 0)           /* 7  */
            {
                flag = 1;           /* 8  */
                break;              /* 9  */
            }
            j++;                    /* 10 */
        }                           /* 11 end of inner while */
        if (flag == 0)              /* 12 */
            printf("%d", a[i]);     /* 13 */
        i++;                        /* 14 */
    }                               /* 15 end of outer while */
}
Test Case:
Precondition:
1. a[] stores the numbers to be tested.
2. j denotes any number within a range used to test divisibility.

Test case 1
  Name        : Divisibility by a number other than 1 or itself
  Description : The remainder is zero when the number a[i] is divisible by some
                number other than 1 or itself.
  Steps       : rem = a[i] % j, where j < a[i]; if (rem == 0) set flag = 1;
                if (flag != 0) then "Number is not prime".
  Status (P/F): P

Test case 2
  Name        : Divisible only by 1 or itself
  Description : The remainder is never zero, i.e., the number a[i] is not divisible
                by any number other than 1 or itself.
  Steps       : rem = a[i] % j, where j < a[i]; if (rem != 0) keep flag = 0;
                if (flag == 0) then "Number is prime".
  Status (P/F): F
6.Consider the pseudocode for simple subtraction given below :[Nov / Dec 16]
(1) Program „Simple Subtraction‟
(2) Input (x,y)
(3) Output (x)
(4) Output (y)
(5) If x > y then DO
(6) x-y =z
(7) Else y-x = z
(8) EndIf
(9) Output (z)
(10) Output “ End Program”
Perform basis path testing and generate test cases.
Solution :
The Flow Graph :
Number of Edges = 7
Number of Nodes = 7
Cyclomatic Complexity = E-N+2
= 7-7+2 = 2
Cyclomatic Complexity = P+1
= 1 +1 = 2
Basis Set of path
Path 1 : 1,2,3,4,5,7
Path 2 : 1,2,3,4,6,7
Test Cases :
Test case 1
  Name        : Checking whether x is greater than y
  Description : Check if x is greater than y or not.
  Step        : If x > y
  Status (P/F): P

Test case 2
  Name        : Validating the output value
  Description : If the condition is true then subtract y from x; else subtract x from y.
  Step        : If x > y then z = x - y, else z = y - x
  Status (P/F): F
VALIDATION TESTING
Coding standards
"Establish programming conventions before you begin programming. It's
nearly impossible to change code to match them later."
There are different coding conventions for different programming languages, so it
may be counterproductive to apply the same conventions across different
languages.
The use of coding conventions is particularly important when a project
involves more than one programmer (there have been projects with thousands of
programmers). It is much easier for a programmer to read code written by someone
else if all code follows the same conventions.
For some examples of bad coding conventions, Roedy Green provides a
lengthy (tongue-in-cheek) article on how to produce unmaintainable code.
Commenting
Due to time restrictions or enthusiastic programmers who want
immediate results for their code, commenting of code often takes a back seat.
Programmers working as a team have found it better to leave comments behind
since coding usually follows cycles, or more than one person may work on a
particular module. However, some commenting can decrease the cost of knowledge
transfer between developers working on the same module.
In the early days of computing, one commenting practice was to leave a
brief description of the following:
1. Name of the module.
2. Purpose of the Module.
3. Description of the Module (In brief).
4. Original Author
5. Modifications
6. Authors who modified code with a description on why it was modified.
However, the last two items have largely been made obsolete by the advent of revision
control systems. Also, regarding complicated logic, it is a good practice to
leave a comment "block" so that another programmer can understand what exactly is
happening.
Unit testing can be another way to show how code is intended to be
used. Modifications and authorship can be reliably tracked using a source-
code revision control system, rather than using comments.
Naming conventions
Use of proper naming conventions is considered good practice.
Sometimes programmers tend to use X1, Y1, etc. as variables and forget to replace
them with meaningful ones, causing confusion.
In order to prevent this waste of time, it is usually considered good
practice to use descriptive names in the code since we deal with real data.
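A tiny C sketch of the difference (the names are invented for this example):

/* Poor names: the intent of the computation is hidden. */
int f(int x1, int y1) { return x1 * y1; }

/* Descriptive names: the same computation, now self-documenting. */
int compute_invoice_total(int unit_price, int quantity)
{
    return unit_price * quantity;
}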
REFACTORING
8. Explain Refactoring in detail. May 2016, Nov 2016
Refactoring is:
Restructuring (rearranging) existing code without changing its external behavior, in
order to improve its internal structure, readability, and maintainability.
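A minimal before/after C sketch (the pricing rule is invented for illustration); the
external behaviour is unchanged, only the internal structure improves:

/* Before refactoring: duplicated logic and unexplained magic numbers. */
double price_before(int quantity)
{
    if (quantity > 100)
        return quantity * 9.50 * 0.9;   /* what do 9.50 and 0.9 mean? */
    return quantity * 9.50;
}

/* After refactoring: same results, clearer structure. */
#define UNIT_PRICE     9.50
#define BULK_DISCOUNT  0.90
#define BULK_THRESHOLD 100

double price_after(int quantity)
{
    double base = quantity * UNIT_PRICE;
    return quantity > BULK_THRESHOLD ? base * BULK_DISCOUNT : base;
}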
PART - C
1. Distinguish between top-down and bottom-up integration. (10)(AUC DEC
2010)
Top-down integration
Top-down integration testing is an incremental approach to construction of
program structure. Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module (main program). Modules
subordinate (and ultimately subordinate) to the main control module are incorporated
into the structure in either a depth-first or breadth-first manner. Referring to Figure
18.6, depth-first integration would integrate all components on a major control path of
the structure. Selection of a major path is somewhat arbitrary and depends on
application-specific characteristics. For example, selecting the left hand path,
components M1, M2 , M5 would be integrated first. Next, M8 or (if necessary for
proper functioning of M2) M6 would be integrated. Then, the central and right hand
control paths are built. Breadth-first integration incorporates all components directly
subordinate at each level, moving across the structure horizontally.
From the figure, components M2, M3, and M4 (a replacement for stub S4) would be
integrated first. The next control level, M5, M6, and so on, follows.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are
substituted for all components directly subordinate to the main control
module.
2. Depending on the integration approach selected (i.e., depth or
breadth first), subordinate stubs are replaced one at a time with
actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced
with the real component.
5. Regression testing may be conducted to ensure that new errors
have not been introduced.
Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction
and testing with atomic modules (i.e., components at the lowest levels
in the program structure).
Because components are integrated from the bottom up, processing
required for
components subordinate to a given level is always available and the
need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called
builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate
test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the
program structure.
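A minimal C sketch of a stub and a driver (the module names are hypothetical): in
top-down integration a stub stands in for a not-yet-integrated subordinate, while in
bottom-up integration a driver feeds test cases to an already-built low-level component:

#include <stdio.h>
#include <assert.h>

/* Top-down: the module under test calls a subordinate that is not yet
 * integrated, so a stub returns a canned value in its place. */
int read_sensor_stub(void) { return 25; }

int monitor_temperature(void)                 /* module under test */
{
    return read_sensor_stub() > 30 ? 1 : 0;   /* 1 = raise alarm */
}

/* Bottom-up: an already-built low-level component... */
int celsius_to_fahrenheit(int c) { return c * 9 / 5 + 32; }

int main(void)                                /* ...exercised by a test driver */
{
    assert(monitor_temperature() == 0);       /* stub keeps the reading below threshold */
    assert(celsius_to_fahrenheit(100) == 212);
    printf("integration test cases passed\n");
    return 0;
}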
Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested
using a driver (shown as a dashed block). Components in clusters 1 and 2 are
subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced
directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with
module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so
forth.
Basis path and loop testing are effective techniques for uncovering a broad
array of path errors. Among the more common errors in computation are
(1) misunderstood or incorrect arithmetic precedence.
(2) mixed mode operations.
(3) incorrect initialization.
(4) precision inaccuracy.
(5) Incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled to one another. Test cases
should uncover errors such as
(1) comparison of different data types,
(2) incorrect logical operators or precedence,
(3) expectation of equality when precision error makes equality unlikely,
(4) incorrect comparison of variables,
(5) improper or nonexistent loop termination,
(6) failure to exit when divergent iteration is encountered, and
(7) Improperly modified loop variables.
Among the potential errors that should be tested when error handling is evaluated
are
1. Error description is unintelligible.
2. Error noted does not correspond to error encountered.
3. Error condition causes system intervention prior to error handling.
4. Exception-condition processing is incorrect.
5. Error description does not provide enough information to assist in the location
of the cause of the error.
Advantage of big-bang:
This approach is simple.
Disadvantages:
• It is hard to debug.
• It is not easy to isolate errors while testing.
• In this approach it is not easy to validate test results.
• An incremental construction strategy includes
1) Top down integration
2) Bottom up integration
3) Regression testing
4) Smoke testing
4. Consider a program for determining the previous date. Its input is a triple of
day, month and year with the values in the range 1 ≤ month ≤ 12, 1 ≤ day ≤ 31,
1990 ≤ year ≤ 2014. The possible outputs would be the previous date or invalid
input date. Design the boundary value test cases. [8 Marks] [May / June 2016]
With n = 3 input variables, boundary value analysis (BVA) yields 4n + 1 = 4(3) + 1 = 13 test cases.
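A small C sketch that enumerates these 13 test cases (the nominal values chosen
below are assumed; any mid-range values would do):

#include <stdio.h>

typedef struct { int min, max, nominal; } Range;

/* Input ranges from the problem statement; nominal values are assumed. */
static const Range ranges[3] = {
    { 1,    31,   15   },   /* day   */
    { 1,    12,   6    },   /* month */
    { 1990, 2014, 2000 },   /* year  */
};

/* Single-fault BVA: for each variable take min, min+1, max-1, max while the
 * others stay nominal, plus one all-nominal case -> 4n + 1 = 13 cases. */
int main(void)
{
    int tc[3], id = 1;

    for (int v = 0; v < 3; v++) {
        int probes[4] = { ranges[v].min, ranges[v].min + 1,
                          ranges[v].max - 1, ranges[v].max };
        for (int p = 0; p < 4; p++) {
            for (int i = 0; i < 3; i++) tc[i] = ranges[i].nominal;
            tc[v] = probes[p];
            printf("TC%02d: day=%d month=%d year=%d\n", id++, tc[0], tc[1], tc[2]);
        }
    }
    printf("TC%02d: day=%d month=%d year=%d (all nominal)\n",
           id, ranges[0].nominal, ranges[1].nominal, ranges[2].nominal);
    return 0;
}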
5. Evaluate the project team‘s ability to control quality of software work products
Product metrics:
1. Aid in the evaluation of analysis and design models
2. Provide an indication of the complexity of procedural designs and source code
3. Facilitate the design of more effective testing techniques
4. Assess the stability of a fielded software product
13. What is RMMM?
The risk mitigation, monitoring, and management plan documents all work
performed as part of risk analysis and is used by the project manager as part of the
overall project plan. Once the RMMM has been documented and the project has
begun risk mitigation and monitoring steps commence.
14. Will exhaustive testing guarantee that the program is 100% correct?
May : 16
No. A program may run correctly as designed for most inputs, yet there is no such
thing as 100% reliability even after very exhaustive testing. Many things within a
user's computer environment can cause a program to not function as designed even
if it works for most other users of that program.
PART – B
PROCESS & PROJECT METRICS
1. Describe two metrics which have been used to measure the software.
May 04,05
Software process and project metrics are quantitative measures. The software
measures are collected by software engineers and software metrics are analyzed by
software managers.
• They are a management tool.
• They offer insight into the effectiveness of the software process and the projects
that are conducted using the process as a framework.
• Basic quality and productivity data are collected.
• These data are analyzed, compared against past averages, and assessed.
• The goal is to determine whether quality and productivity improvements have
occurred.
• The data can also be used to pinpoint problem areas.
• Remedies can then be developed and the software process can be improved.
Use of Measurement:
• Can be applied to the software process with the intent of improving it on a
continuous basis.
• Can be used throughout a software project to assist in estimation, quality control,
productivity assessment, and project control.
• Can be used to help assess the quality of software work products and to assist in
tactical decision making as a project proceeds.
Reason for measure:
• To characterize in order to
• Gain an understanding of processes, products, resources, and environments
• Establish baselines for comparisons with future assessments
• To evaluate in order to determine status with respect to plans
• To predict in order to gain understanding of relationships among processes and
products
• Build models of these relationships
• To improve in order to Identify roadblocks, root causes, inefficiencies, and other
opportunities for improving product quality and process performance
Metric in Process Domain:
• Process metrics are collected across all projects and over long periods of time.
• They are used for making strategic decisions.
• The intent is to provide a set of process indicators that lead to long-term software
process improvement.
• The only way to know how/where to improve any process is to
• Measure specific attributes of the process
• Develop a set of meaningful metrics based on these attributes
• Use the metrics to provide indicators that will lead to a strategy for improvement
Properties of Process Metrics
• Use common sense and organizational sensitivity when interpreting metrics data
• Provide regular feedback to the individuals and teams who collect measures and
metrics
o This is the number of defects per KLOC, where a defect is a verified lack of
conformance to requirements
o Defects
Defects are those problems reported by a program user after the program is
released for general use
o Maintainability
This describes the ease with which a program can be corrected if an error is found,
adapted if the environment changes, or enhanced if the customer has changed
requirements
o Mean time to change (MTTC) :
The time to analyze, design, implement, test, and distribute a change to all users
Maintainable programs on average have a lower MTTC
Defect Removal Efficiency (DRE)
DRE represents the effectiveness of quality assurance activities. The DRE also
helps the project manager to assess the progress of software project as it gets
developed through its scheduled work task. Any errors that remain uncovered and
are found in later tasks are called defects.
The defect removal efficiency can be defined as
DRE = E / (E + D), where DRE is the defect removal efficiency, E is the number of errors
found before delivery of the work product, and D is the number of defects found after delivery.
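A short C sketch with illustrative numbers only (90 errors found before release, 10
defects found after release):

#include <stdio.h>

/* Defect removal efficiency: E = errors found before delivery,
 * D = defects found after delivery. */
double dre(int errors_before, int defects_after)
{
    return (double)errors_before / (errors_before + defects_after);
}

int main(void)
{
    printf("DRE = %.2f\n", dre(90, 10));   /* prints 0.90 */
    return 0;
}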
Measuring Quality
Following are the measure of the software quality:
1. Correctness: Is a degree to which the software produces the desired
functionality. The correctness can be measured as Correctness = Defects per
KLOC.
2. Integrity: Integrity is basically an ability of the system to withstand against the
attacks. There are two attributes that are associated with integrity: threat and
security.
3. Usability: User friendliness of the system or ability of the system that indicates
the usefulness of the system.
4. Maintainability: Is an ability of the system to accommodate the corrections
made after encountering errors, adapting the environment changes in the
system in order to satisfy the user.
2. What are the categories of software risks? Give an overview about risk
management. May: 14
Risk is a potential problem – it might happen and it might not. The conceptual
definition of risk:
o Risk concerns future happenings
o Risk involves change in mind, opinion, actions, places, etc.
compensation and benefits, and the availability of jobs within the company and
outside it are all monitored.
In addition to monitoring these factors, a project manager should monitor the
effectiveness of risk mitigation steps. For example, a risk mitigation step noted here
called for the definition of work product standards and mechanisms to be sure that
work products are developed in a timely manner. This is one mechanism for
ensuring continuity, should a critical individual leave the project. The project
manager should monitor work products carefully to ensure that each can stand on its
own and that each imparts information that would be necessary if a newcomer were
forced to join the software team somewhere in the middle of the project.
Risk management and contingency planning assumes that mitigation efforts
have failed and that the risk has become a reality. Continuing the example, the
project is well under way and a number of people announce that they will be leaving.
If the mitigation strategy has been followed, backup is available, information is
documented, and knowledge has been dispersed across the team.
In addition, you can temporarily refocus resources (and readjust the project
schedule) to those functions that are fully staffed, enabling newcomers who must be
added to the team to "get up to speed." Those individuals who are leaving are asked
to stop all work and spend their last weeks in "knowledge transfer mode." This might
include video-based knowledge capture, the development of "commentary
documents or wikis," and/or meetings with other team members who will remain on
the project.
It is important to note that risk mitigation, monitoring, and management
(RMMM) steps incur additional project cost. For example, spending the time to back
up every critical technologist costs money. Part of risk management, therefore, is to
evaluate when the benefits accrued by the RMMM steps are outweighed by the
costs associated with implementing them. In essence, you perform a classic cost-
benefit analysis.
If risk aversion steps for high turnover will increase both project cost and duration by
an estimated 15 percent, but the predominant cost factor is "backup," management
may decide not to implement this step. On the other hand, if the risk aversion steps
are projected to increase costs by 5 percent and duration by only 3 percent,
management will likely put all into place.
THE RMMM PLAN
A risk management strategy can be included in the software project plan, or
the risk management steps can be organized into a separate risk mitigation,
monitoring, and management plan (RMMM). The RMMM plan documents all work
performed as part of risk analysis and is used by the project manager as part of the
overall project plan.
There are three issues in a strategy for handling risk:
1) Risk avoidance 2) Risk monitoring 3) Risk management
Risk mitigation
Risk mitigation means preventing the risks to occur. Following are the steps to
be taken for mitigating the risks:
1. Communicate with the concerned staff to find out probable risks.
2. Find out and eliminate all those causes that can create risk before the project
starts.
3. Conduct timely reviews in order to speed up the work.
Risk Monitoring
During the risk monitoring process, the following things must be monitored by the
project manager:
1. The approach or the behavior of the team members as pressure of project varies.
2. The degree to which the team performs with the spirit of "teamwork".
3. The type of co-operation among the team members.
4. The types of problems that are occurring.
Risk Management
The project manager performs this task when a risk becomes a reality. If the project
manager has applied risk mitigation effectively, then it becomes much easier to
manage the risks.
Some software teams do not develop a formal RMMM document. Rather,
each risk is documented individually using a risk information sheet. In most cases,
the RIS is maintained using a database system so that creation and information
entry, priority ordering, searches, and other analysis may be accomplished easily.
The format of the RIS is illustrated in the figure. Once RMMM has been
documented and the project has begun, risk mitigation and monitoring steps
commence. As discussed earlier, risk mitigation is a problem avoidance
activity. Risk monitoring is a project tracking
155
activity with three primary objectives: (1) to assess whether predicted risks do, in
fact, occur; (2) to ensure that risk aversion steps defined for the risk are being
properly applied; and (3) to collect information that can be used for future risk
analysis. In many cases, the problems that occur during a project can be traced to
more than one risk. Another job of risk monitoring is to attempt to allocate origin,
that is, to determine which risk(s) caused which problems throughout the project.
156
The count total is computed by multiplying each information domain value by its
weighting factor and summing the results. Software complexity is then assessed by
answering the fourteen complexity adjustment questions; the answers are the
complexity adjustment values Fi, each rated on a scale from 0 (no influence) to 5
(essential).
Function Points (FP) = Count total x (0.65 + (0.01 x Sum(Fi)))
Once the function point count is calculated, we can compute various measures as
follows (a short illustrative sketch follows the list):
Productivity = FP / person – month
Quality = Number of faults / FP
Cost = $ / FP
Documentation = Pages of documentation / FP
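The calculation can be illustrated with a short sketch. The domain counts, weights, and Fi ratings below are made-up example values, not figures taken from any particular project:

```python
# Illustrative sketch of the FP computation described above.
# All counts, weights, and Fi ratings are made-up example values.

# information domain value -> (count, weighting factor) for an assumed project
domain_counts = {
    "external inputs":          (10, 4),
    "external outputs":         (8, 5),
    "external inquiries":       (11, 4),
    "internal logical files":   (13, 10),
    "external interface files": (17, 7),
}

# fourteen complexity adjustment values Fi, each rated 0 (no influence) to 5 (essential)
fi = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 3, 2, 4, 3]

count_total = sum(count * weight for count, weight in domain_counts.values())
fp = count_total * (0.65 + 0.01 * sum(fi))

# Derived measures; effort, faults, cost and pages are assumed sample figures.
effort_pm, faults, cost_rs, doc_pages = 40, 25, 800000, 300
print(f"FP            = {fp:.1f}")
print(f"Productivity  = {fp / effort_pm:.2f} FP/person-month")
print(f"Quality       = {faults / fp:.3f} faults/FP")
print(f"Cost          = {cost_rs / fp:.0f} Rs/FP")
print(f"Documentation = {doc_pages / fp:.2f} pages/FP")
```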
Advantages:
1. This method is independent of programming languages.
2. It is based on the data which can be obtained in early stage of project.
Disadvantages:
1. Many aspects of this method are not validated.
2. The function point itself has no direct physical meaning; it is only a numerical value.
157
COCOMO II MODEL
4. Explain in detail the COCOMO II Model May: 08, Dec: 13, May: 14,16
158
159
Effort = 2.94 * EAF * (KSLOC)^E
where:
EAF is the Effort Adjustment Factor derived from the cost drivers
E is an exponent derived from the five scale drivers
As an example, a project with all Nominal Cost Drivers and Scale Drivers would have
an EAF of 1.00 and exponent, E, of 1.0997. Assuming that the project is projected to
consist of 8,000 source lines of code, COCOMO II estimates that 28.9 Person-
Months of effort is required to complete it:
Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months
Effort Adjustment Factor
The Effort Adjustment Factor in the effort equation is simply the product of the effort
multipliers corresponding to each of the cost drivers for your project.
For example, if your project is rated Very High for Complexity (effort multiplier of
1.34), and Low for Language & Tools Experience (effort multiplier of 1.09), and all of
the other cost drivers are rated to be Nominal (effort multiplier of 1.00), the EAF is
the product of 1.34 and 1.09.
Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46
Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months
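The effort calculation can be sketched as follows. The constant 2.94 and the exponent E = 1.0997 are the values used in the worked example above, and the cost-driver multipliers shown are the two ratings mentioned there:

```python
# Minimal sketch of the COCOMO II effort equation used above:
#   Effort (Person-Months) = A * EAF * (KSLOC ** E)
# A = 2.94 and E = 1.0997 are the values from the worked example; EAF is the
# product of the effort multipliers for the rated cost drivers.

A = 2.94          # constant from the example above
E = 1.0997        # exponent derived from the five scale drivers (nominal case)
kloc = 8          # 8,000 source lines of code

def cocomo2_effort(kloc, effort_multipliers):
    eaf = 1.0
    for m in effort_multipliers:
        eaf *= m
    return A * eaf * (kloc ** E), eaf

# All-nominal case: every cost driver multiplier is 1.00
effort, eaf = cocomo2_effort(kloc, [1.00])
print(f"Nominal : EAF = {eaf:.2f}, Effort = {effort:.1f} PM")   # about 28.9 PM

# Very High complexity (1.34) and Low language & tools experience (1.09)
effort, eaf = cocomo2_effort(kloc, [1.34, 1.09])
print(f"Adjusted: EAF = {eaf:.2f}, Effort = {effort:.1f} PM")   # about 42.3 PM
```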
160
4b) Describe in detail COCOMO model for software cost estimation. Use it to
estimate the effort required to build software for a simple ATM that produces
12 screens, 10 reports and 80 software components. Assume average
complexity and average developer maturity. Use application composition
model with object points. [Nov / Dec 2016]
Solution :
The project Resources are :
1. Human Resource.
2. Reusable Software Resource.
3. Environmental Resource.
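The application composition estimate itself is not worked out above. A possible sketch is given below; it assumes the commonly quoted COCOMO II object-point weights (screen 1/2/3, report 2/5/8, 3GL component 10) and a nominal productivity of 13 NOP per person-month for average developer maturity and capability. These tables are assumptions, not figures stated in this answer:

```python
# Hedged sketch of an application-composition (object point) estimate for the
# ATM example: 12 screens, 10 reports, 80 components, all of average complexity.
# The weights and productivity rate are the commonly quoted COCOMO II values
# and are assumptions here, not figures given in the answer above.

WEIGHTS = {                       # (simple, medium, difficult)
    "screen": (1, 2, 3),
    "report": (2, 5, 8),
    "component": (10, 10, 10),    # 3GL components carry a single weight of 10
}
PRODUCTIVITY = 13                 # NOP per person-month (nominal maturity/capability)

counts = {"screen": 12, "report": 10, "component": 80}

object_points = sum(counts[k] * WEIGHTS[k][1] for k in counts)  # medium weights
reuse_percent = 0                                               # assume no reuse
nop = object_points * (100 - reuse_percent) / 100
effort_pm = nop / PRODUCTIVITY

print(f"Object points = {object_points}")                # 12*2 + 10*5 + 80*10 = 874
print(f"NOP           = {nop:.0f}")
print(f"Effort        = {effort_pm:.1f} person-months")  # roughly 67
```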
5. Explain Software Project Planning May: 05, 06, Dec: 06, 07, May:15
• Software project planning encompasses five major activities: estimation,
scheduling, risk analysis, quality management planning, and change
management planning.
Estimation determines how much money, effort, resources, and time it will take to
build a specific system or product
• The software team first estimates
– The work to be done
– The resources required
– The time that will elapse from start to finish
• Then they establish a project schedule that
161
162
o interfaces
o reliability
Resources
Must estimate resources required to accomplish the development effort
Three major categories of software engineering resources
o People
o Development environment
o Reusable software components
Often neglected during planning but become a paramount concern during the
construction phase of the software process
Each resource is specified with
o A description of the resource
o A statement of availability
o The time when the resource will be required
o The duration of time that the resource will be applied
Off-the-shelf components
o Components are from a third party or were developed for a previous project
o Ready to use; fully validated and documented; virtually no risk
Full-experience components
o Components are similar to the software that needs to be built
o Software team has full experience in the application area of these
components
o Modification of components will incur relatively low risk
Partial-experience components
o Components are related somehow to the software that needs to be built but
will require substantial modification
o Software team has only limited experience in the application area of these
components
o Modifications that are required have a fair degree of risk
New components
o Components must be built from scratch by the software team specifically for
the needs of the current project
o Software team has no practical experience in the application area
o Software development of components has a high degree of risk
Plan                             Description
Quality plan                     Describes the quality procedures and standards that will be used in a project.
Validation plan                  Describes the approach, resources, and schedule used for system validation.
Configuration management plan    Describes the configuration management procedures and structures to be used.
Maintenance plan                 Predicts the maintenance requirements of the system, maintenance costs, and effort required.
Staff development plan           Describes how the skills and experience of the project team members will be developed.
163
6. Write short notes on i) Project Scheduling ii) Timeline Charts May: 05,
06, Dec: 06, 07, May: 15
Software project scheduling is an action that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering
tasks. Schedule identifies all major process framework activities and the product
functions to which they are applied. Scheduling for software engineering projects can
be viewed from two rather different perspectives. In the first, an end date for release
of a computer-based system has already been established, and the organization
must distribute effort within that prescribed time frame. The second view of software
scheduling assumes that rough chronological bounds have been discussed but that
the end date is set by the software engineering organization.
1. Basic Principles
A number of basic principles guide software project scheduling:
Compartmentalization. The project must be compartmentalized into a number of
manageable activities and tasks. To accomplish compartmentalization, both the
product and the process are refined.
Interdependency. The interdependency of each compartmentalized activity or task
must be determined. Some tasks must occur in sequence, while others can occur in
parallel. Some activities cannot commence until the work product produced by
another is available.
Time allocation. Each task to be scheduled must be allocated some number of work
units (e.g., person-days of effort). In addition, each task must be assigned a start
date and a completion date that are a function of the interdependencies and whether
work will be conducted on a full-time or part-time basis.
Effort validation. Every project has a defined number of people on the software
team. As time allocation occurs, you must ensure that no more than the allocated
number of people has been scheduled at any given time.
Defined responsibilities. Every task that is scheduled should be assigned to a
specific team member.
Defined outcomes. Every task that is scheduled should have a defined outcome.
For software projects, the outcome is normally a work product (e.g., the design of a
component) or a part of a work product. Work products are often combined in
deliverables.
Defined milestones. Every task or group of tasks should be associated with a
project milestone. A milestone is accomplished when one or more work products has
been reviewed for quality (Chapter 15) and has been approved.
2. The Relationship between People and Effort
In a small software development project a single person can analyze requirements,
perform design, generate code, and conduct tests. As the size of a project increases,
more people must become involved.
164
The curve indicates a minimum value to that represents the least-cost delivery time
(i.e., the delivery time that will result in the least effort expended). The PNR curve
also indicates that this lowest-cost delivery option occurs at to = 2td, where td is the
nominal delivery time. The implication here is that
delaying project delivery can reduce costs significantly. Of course, this must be
weighed against the business cost associated with the delay. The number of
Delivered lines of code (source statements), L, is related to effort and development
time by the equation:
L = P * E^(1/3) * t^(4/3)
where E is development effort in person-months, P is a productivity parameter that
reflects a variety of factors that lead to high-quality software engineering work
(typical values for P range between 2000 and 12,000), and t is the project duration in
calendar months.
Rearranging this software equation, we can arrive at an expression for development
effort E:
E = L^3 / (P^3 * t^4)
Where E is the effort expended (in person-years) over the entire life cycle for
software development and maintenance and t is the development time in years. The
equation for development effort can be related to development cost by the inclusion
of a burdened labor rate factor ($/person-year).
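A short sketch of these two relationships follows; the productivity parameter and delivered LOC used here are illustrative values only:

```python
# Minimal sketch of the software equation quoted above:
#   L = P * E**(1/3) * t**(4/3)   rearranged to   E = L**3 / (P**3 * t**4)
# P (productivity parameter) and L (delivered LOC) are illustrative assumptions.

def effort_person_years(loc, p, t_years):
    """Development effort E implied by the software equation."""
    return loc ** 3 / (p ** 3 * t_years ** 4)

loc = 33_200      # assumed delivered source statements
p = 12_000        # assumed productivity parameter (typical range 2,000 to 12,000)

# Extending the schedule sharply reduces the implied effort (the PNR trade-off):
for t in (1.3, 1.5, 1.75, 2.0):
    print(f"t = {t:.2f} years -> E = {effort_person_years(loc, p, t):.1f} person-years")
```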
Defining a task set for the software project
A task set is a collection of software engineering tasks, milestones and deliverables
that must be accomplished to complete the project. A task network, called also
activity network, is a graphic representation of the task flow of a project. It depicts the
major software engineering tasks from the selected process model arranged
sequentially or in parallel. Consider the task of developing a software library
information system.
The scheduling of this system must account for the following requirements (the
subtasks are labelled T0 to T8; a short sketch computing the critical path for this
task set is given after the list):
- Initially the work should start with the design of a control terminal class (T0), for no
more than eleven working days.
- Next, the classes for student user (T1) and faculty user (T2) should be designed in
parallel, assuming that
165
the elaboration of the student user takes no more than six days, while the faculty
user needs four days.
- When the design of the student user completes, the network protocol (T4), a
subtask that requires eleven days, has to be developed; simultaneously, the network
management routines (T5) have to be designed, for up to seven days.
- After the termination of the faculty user subtask, a library directory (T3) should be
made, for nine days, to maintain information about the different users and their
addresses.
- The completion of the network protocol and management routines should be
followed by the design of the overall network control procedures (T7), for up to eight
days.
- The library directory design should be followed by a subtask elaboration of library
staff (T6), which takes eleven days.
- The software engineering process terminates with testing (T8), for no more than
four days.
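The sketch below expresses this task set as a task network and computes its critical path. The dependency of the final testing task (T8) on both T6 and T7 is an assumption, since the text only says that testing terminates the process:

```python
# Illustrative sketch: the library-system task set above expressed as a task
# network, with the critical path computed as the longest path through it.
# Durations are the maximum days stated for each subtask.

durations = {"T0": 11, "T1": 6, "T2": 4, "T3": 9, "T4": 11,
             "T5": 7, "T6": 11, "T7": 8, "T8": 4}

# Predecessors of each task, taken from the dependencies described above
# (T8, the final testing task, is assumed to wait for both T6 and T7).
preds = {"T0": [], "T1": ["T0"], "T2": ["T0"], "T3": ["T2"], "T4": ["T1"],
         "T5": ["T1"], "T6": ["T3"], "T7": ["T4", "T5"], "T8": ["T6", "T7"]}

earliest_finish, on_path = {}, {}
for task in ["T0", "T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8"]:  # topological order
    start = max((earliest_finish[p] for p in preds[task]), default=0)
    earliest_finish[task] = start + durations[task]
    on_path[task] = max(preds[task], key=lambda p: earliest_finish[p]) if preds[task] else None

# Walk back from the last task to recover the critical path.
end = max(earliest_finish, key=earliest_finish.get)
path, node = [], end
while node:
    path.append(node)
    node = on_path[node]
print("Critical path   :", " -> ".join(reversed(path)))   # T0 -> T1 -> T4 -> T7 -> T8
print("Project duration:", earliest_finish[end], "days")  # 40 days
```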
Defining a task network
A task network, also called an activity network, is a graphic representation of the task
flow for a project. It is sometimes used as the mechanism through which task
sequence and dependencies are input to an automated project scheduling tool. In its
simplest form (used when creating a macroscopic schedule), the task network
depicts major software engineering actions (e.g., a task network for concept development).
Program evaluation and review technique (PERT) and the critical path method
(CPM) are two project scheduling methods that can be applied to software
development. Both techniques are driven by information already developed in earlier
project planning activities: estimates of effort, a decomposition of the product
function, the selection of the appropriate process model and task set, and
decomposition of the tasks that are selected. Interdependencies among tasks may
be defined using a task network. Tasks,
sometimes called the project work breakdown structure (WBS), are defined for the
product as a whole or for individual functions. Both PERT and CPM provide
quantitative tools that allow you to (1) determine the critical path—the chain of tasks
that determines the duration of the project, (2) establish ―most likely‖ time estimates
for individual tasks by applying statistical models, and (3) calculate ―boundary times‖
that define a time ―window‖ for a particular task.
Time-line chart
166
A time-line chart, also called a Gantt chart, is generated from the schedule. A
time-line chart can be developed for the entire project; alternatively, separate charts
can be developed for each project function or for each individual working on the
project. The figure shows the format of a time-line chart: it depicts a part of a
software project schedule that emphasizes the concept scoping task for a
word-processing (WP) software product. All project tasks are listed in the left-hand
column. The horizontal bars indicate the duration of each task. When multiple bars
occur at the same time on the calendar, task concurrency is implied. The diamonds
indicate the milestones.
Tracking the Schedule
Tracking can be accomplished in a number of different ways:
• Conducting periodic project status meetings in which each team member reports
progress and problems
• Evaluating the results of all reviews conducted throughout the software engineering
process
•Determining whether formal project milestones (the diamonds shown in Figure)
have been accomplished by the scheduled date
• Comparing the actual start date to the planned start date for each project task listed
in the resource table
• Meeting informally with practitioners to obtain their subjective assessment of
progress to date and problems on the horizon
167
168
PROBLEM-BASED ESTIMATION
LOC and FP data are used in two ways during software project estimation:
(1) As estimation variables to ―size‖ each element of the software and
(2) As baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques. Yet both have a
number of characteristics in common.
LOC or FP (the estimation variable) is then estimated for each function.
Function estimates are combined to produce an overall estimate for the entire
project. In general, LOC/pm or FP/pm averages should be computed by project
domain. That is, projects should be grouped by team size, application area,
complexity, and other relevant parameters. Local domain averages should then be
computed. When a new project is estimated, it should first be allocated to a domain,
and then the appropriate domain average for past productivity should be used in
generating the estimate.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning. When LOC is used as the
estimation variable, decomposition is absolutely essential and is often taken to
considerable levels of detail. The greater the degree of partitioning, the more likely
reasonably accurate estimates of LOC can be developed.
For FP estimates, decomposition works differently. Each of the information
domain characteristics—inputs, outputs, data files, inquiries, and external
interfaces—as well as the 14 complexity adjustment values are estimated. The
resultant estimates can then be used to derive an FP value that can be tied to past
data and used to generate an estimate. Using historical data or (when all else fails)
intuition, estimate an optimistic, most likely, and pessimistic size value for each
function or count for each information domain value. A three-point or expected value
can then be computed.
The expected value for the estimation variable (size) S can be computed as a
weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess)
estimates:
S = (Sopt + 4 Sm + Spess) / 6
An Example of LOC-Based Estimation
Following the decomposition technique for LOC, an estimation table is
developed. A range of LOC estimates is developed for each function. For example,
the range of LOC estimates for the 3D geometric analysis function is optimistic, 4600
LOC; most likely, 6900 LOC; and pessimistic, 8600 LOC. Applying the expected-value equation, the
169
expected value for the 3D geometric analysis function is 6800 LOC. Other
estimates are derived in a similar fashion.
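A small sketch of the three-point size estimate, reusing the 3D geometric analysis figures; the productivity and labour rate used to turn size into effort and cost are illustrative values:

```python
# Sketch of the three-point (expected value) size estimate described above:
#   S = (s_opt + 4 * s_m + s_pess) / 6

def expected_size(s_opt, s_m, s_pess):
    return (s_opt + 4 * s_m + s_pess) / 6

s = expected_size(4600, 6900, 8600)
print(f"Expected LOC = {s:.0f}")          # 6800 LOC

# Turning the size estimate into effort and cost uses historical averages;
# the productivity and labour rate below are assumed illustrative figures.
productivity_loc_pm = 620                 # LOC per person-month (assumed)
labour_rate = 8000                        # Rs per person-month (assumed)
effort_pm = s / productivity_loc_pm
print(f"Effort = {effort_pm:.1f} person-months, cost = Rs {effort_pm * labour_rate:,.0f}")
```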
170
171
PART – C
Baseline
• The IEEE (IEEE Std. No. 610.12-1990) defines a baseline as:
A specification or product that has been formally reviewed and agreed upon, that
thereafter serves as the basis for further development, and that can be changed only
through formal change control procedures.
• A baseline is a milestone in the development of software that is marked by the
delivery of one or more software configuration items whose approval has been
obtained through a formal technical review.
• Further changes to the program architecture (which is documented in the design
model) can be made only after each change has been evaluated and approved.
Configuration Management Planning
Configuration planning defines the standards and procedures that should be
used for configuration management. Initially configuration management standards
must be set. These standards must be adapted to fit the requirements and
constraints of the project.
The configuration management plan consists of:
• A definition of the managed configuration items and the schemes used for
identifying them.
• The people who are responsible for the configuration management procedures.
172
• The configuration management policies used for change control and version
management.
• The name of the tool used for configuration management, along with a description
of its use.
• The structure of the configuration management database, so that stored
information can be maintained.
Software Configuration Item Identification
• Program components or functions
• External data
• File structure
For each type of item, a large number of different individual items may be produced.
For instance, there may be many documents for a software project, such as the
project plan, quality plan, test plan, design documents, programs, test reports, and
review reports. These SCIs will be produced during the project, stored, retrieved,
changed, stored again, and so on.
Advantages:
There are several advantages to the Delphi technique.
• One of the most significant is its versatility. The technique can be used in a
wide range of environments, e.g., government planning, business and
industry predictions, and volunteer group decisions.
173
3. Discuss Putnam resources allocation model .Derive the time and effort
equations. May : 16
The Putnam model is an empirical software effort estimation model. As a group,
empirical models work by collecting software project data (for example, effort and
size) and fitting a curve to the data. Future effort estimates are made by providing
size and calculating the associated effort using the equation that fits the original
data. It is one of the earliest of these types of models developed, and is among the
most widely used. A closely related software parametric model is the Constructive
Cost Model (COCOMO).
Putnam used his observations about productivity levels to derive the software
equation:
Size = Productivity * (Effort / B)^(1/3) * Time^(4/3)
174
where:
Size is the product size (whatever size estimate is used by your organization
is appropriate). Putnam uses ESLOC (Effective Source Lines of Code)
throughout his books.
B is a scaling factor and is a function of the project size.
Productivity is the Process Productivity, the ability of a particular software
organization to produce software of a given size at a particular defect rate.
Effort is the total effort applied to the project in person-years.
Time is the total schedule of the project in years.
In practical use, when making an estimate for a software task, the software equation
is solved for effort:
Effort = B * [Size / (Productivity * Time^(4/3))]^3
This estimating method is fairly sensitive to uncertainty in both size and process
productivity. Putnam advocates obtaining process productivity by calibration against
completed projects.
175
One of the key advantages of this model is the simplicity with which it is calibrated.
Most software organizations, regardless of maturity level, can easily collect size,
effort, and duration (time) for past projects. Process productivity, being exponential
in nature, is typically converted to a linear productivity index that an organization
can use to track its own changes in productivity and apply in future effort estimates.
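A minimal sketch of the effort calculation implied by the solved software equation; the size, process productivity, and B value below are illustrative assumptions, not calibrated figures:

```python
# Minimal sketch of the Putnam software equation solved for effort:
#   Effort = B * (Size / (Productivity * Time**(4/3))) ** 3
# Size, process productivity and B below are illustrative assumptions only.

def putnam_effort(size, productivity, time_years, b=0.34):
    """Effort in person-years implied by the Putnam software equation."""
    return b * (size / (productivity * time_years ** (4 / 3))) ** 3

size = 40_000          # assumed effective source lines of code
productivity = 10_000  # assumed process productivity (calibrated from past projects)

# The steep sensitivity of effort to the allowed schedule is clearly visible:
for t in (1.5, 2.0, 2.5):
    print(f"t = {t:.1f} years -> effort = {putnam_effort(size, productivity, t):.2f} person-years")
```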
4. Suppose you have a budgeted cost of a project as Rs. 9,00,000. The project
is to be completed in 9 months. After a month, you have completed 10 percent
of the project at a total expense of Rs. 1,00,000. The planned completion should
have been 15 percent. You need to determine whether the project is on-time
and on-budget. Use the Earned Value Analysis approach and interpret.
[Nov / Dec 2016]
Solution :
Here, BAC = Rs 9,00,000
AC = Rs 1,00,000
The planned value and earned value can be computed as:
Planned Value (PV) = Planned Completion (%) x BAC
= 15% x Rs 9,00,000
= Rs 1,35,000
Earned Value (EV) = Actual Completion (%) x BAC
= 10% x Rs 9,00,000
= Rs 90,000
Compute the earned value performance indices:
Cost Performance Index (CPI) = EV / AC
= Rs 90,000 / Rs 1,00,000 = 0.90
This means for every rupee spent, the project is producing only 90 paise of work.
Schedule Performance Index (SPI) = EV / PV
= Rs 90,000 / Rs 1,35,000 = 0.67
This means for every estimated hour of work, the project team is completing
only 0.67 hours (approximately 40 minutes).
Interpretation: Since both the cost performance index (CPI) and the schedule
performance index (SPI) are less than 1, the project is over budget and behind
schedule. This example project is in major trouble; corrective action needs to be
taken and risk management needs to kick in.
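The same calculation expressed as a short sketch:

```python
# Sketch of the earned value calculation worked above.

bac = 900_000            # budget at completion (Rs)
ac = 100_000             # actual cost after one month (Rs)
planned_pct = 0.15       # planned completion
actual_pct = 0.10        # actual completion

pv = planned_pct * bac   # planned value  = Rs 1,35,000
ev = actual_pct * bac    # earned value   = Rs 90,000

cpi = ev / ac            # cost performance index     = 0.90
spi = ev / pv            # schedule performance index = 0.67 (approx.)

print(f"PV = Rs {pv:,.0f}, EV = Rs {ev:,.0f}")
print(f"CPI = {cpi:.2f}  (over budget if < 1)")
print(f"SPI = {spi:.2f}  (behind schedule if < 1)")
```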
176
Unadjusted function point count:

Information domain value               Count   Weight   FP count
Low external inputs (EI)                10   x    3    =    30
High external outputs (EO)               8   x    4    =    32
Low internal logical files (ILF)        13   x    3    =    39
High external interface files (EIF)     17   x    7    =   119
Average external inquiries (EQ)         11   x    5    =    55
Count total                                                 275

Complexity adjustment factor = 1.10
Adjusted FP = count total x adjustment factor = 275 x 1.10 = 302.5, i.e. about 302 function points.
177
178
1. What led to the transition from product oriented development to process oriented
development? [Page.No:6]
2. Mention the characteristics of software contrasting it with characteristics of
hardware. [Page.No:6]
3. List the good characteristics of a good SRS. [Page.No:40]
4. What are the linkages between data flow and E-R diagram? [Page.No:38]
5. If a module has logical cohesion, what kind of coupling is this module likely to
have? [Page.No:77]
6. What is the need for architectural mapping using data flow? [Page.No:77]
7. How can refactoring be made more effective? [Page.No:110]
8. Why does software fail even after it has passed acceptance testing?
[Page.No:110]
9. List a few process and product metrics. [Page.No:145]
10. Will exhaustive testing guarantee that the program is 100% correct?
[Page.No:146]
PART – B (5x16=80 Marks )
179
OR
(b) (i) An application has the following: 10 low external inputs, 8 high external
outputs, 13 low internal logical files, 17 high external interface files, 11 average
external inquiries, and a complexity adjustment factor of 1.10. What are the
unadjusted and adjusted function point counts? (4) [pg.no:177]
(ii) Discuss Putnam resources allocation model .Derive the time and effort
equations. (12) [pg.no:174]
180
OR
(ii) Consider 7 functions with their estimated lines of code below. (8)
Function LOC
Func1 2340
181
Func2 5380
Func3 6800
Func4 3350
Func5 4950
Func6 2140
Func7 8400
Average productivity based on historical data is 620 LOC/pm and labour rate
is Rs.8000 per month. Find the total estimated project cost and effort. [pg.no:23]
OR
(b) What is the purpose of data flow diagrams? What are the notations
used for the same? Explain by constructing a context flow diagram (level-0 DFD)
and a level-1 DFD for a library management system. [pg.no:60]
13. (a) What is structured design? Illustrate the structured design process from
DFD to structured chart with a case study. (16) [pg.no:87]
OR
14. (a)(i) Consider the pseudocode for simple subtraction given below :[10]
(1) Program ‗Simple Subtraction‘
(2) Input (x,y)
(3) Output (x)
(4) Output (y)
(5) If x > y then DO
(6) x-y =z
(7) Else y-x = z
(8) EndIf
(9) Output (z)
(10) Output ― End Program‖
Perform basis path testing and generate test cases. [pg.no:130]
(ii)What is refactoring? When is it needed? Explain with an example.
[pg.no:134]
OR
182
(b) What is black box testing? Explain the different types of black box testing
strategies. Explain by considering suitable examples. [pg.no:114]
15. (a)(i) Suppose you have a budgeted cost of a project as Rs. 9,00,000. The
project is to be completed in 9 months. After a month, you have completed 10
percent of the project at a total expense of Rs. 1,00,000. The planned completion
should have been 15 percent. You need to determine whether the project is
on-time and on-budget. Use the Earned Value Analysis approach and interpret. (8)
[pg.no:176]
(ii) Consider the following function point components and their complexity. If
the total degree of influence is 52, find the estimated function points. (8)
[pg.no:157]
Function Type    Estimated Count    Complexity
EIF              2                  7
ILF              4                  10
EQ               22                 4
EO               16                 5
EI               24                 4
OR
(b) Describe in detail COCOMO model for software cost estimation. Use it to
estimate the effort required to build software for a simple ATM that produces 12
screens, 10 reports and 80 software components. Assume average complexity
and average developer maturity. Use application composition model with object
points. (16) [pg.no:161]
183
184
Receive the Customer food Orders, Produce the customers ordered foods,
Serve the customer with their ordered foods, Collect Payment from customers,
Store customer payment details, Order Raw Materials for food products, Pay for
Raw Materials and Pay for labor. Pg.No:59
(16)
13. (a). Explain the various coupling and cohesion models used in Software design.
Pg.No :81 (16)
Or
(b). For a case study of your choice show the architectural and Component
design. Pg :68 (16)
14. (a). Describe the various black box and White box testing techniques. Use
Suitable examples for your explanation. Pg.No:93,97
(16)
Or
(b). Discuss about the various Integration and Debugging strategies followed in
Software development. Pg.No :112 (16)
15. (a). State the need for Risk Management and explain the activities under Risk
Management. Pg.No :24 (16)
Or
(b). Write short notes on the following. Pg.No : 138
(i). Project Scheduling. (8)
(ii). Project Timeline chart and Task network. (8)
185
12. (a)(i) What are the components of the standard structure for the software
requirements document? Explain in detail. (8)
(ii) Write the software requirement specification for a system for your
Choice . (8)
Or
(b) What are the types of behavioral models ? Explain with examples.
(ii) Illustrate with the aid of an appropriate example how to design a real
time monitoring and control systems. (6)
186
(ii) What are CASE tools? Explain the role of CASE tools in the software
development process. (8)
Or
(b)(i) Elaborate on software Configuration Management (10)
(ii) What are the categories of software risks? Give an overview about risk
Management. (6)
187
188
189
190
13. (a)(i) Discuss the design heuristics for effective Modularity design (8)
(ii) Explain the architectural styles used in Architectural design (8)
191
Or
(b)(i) List the activities of user interface design process (8)
(ii) Explain the general model of a real time system (8)
14. (a)(i) Explain the integration testing in detail (16)
Or
(b)(i) Write note on unit testing (8)
(ii) Explain the categories of debugging approaches (8)
15. (a)(i) Explain the use of COCOMO model (8)
(ii) Describe the steps involved in project scheduling Process. (8)
Or
(b)(i) Briefly discuss the activities of Software Configuration management. (8)
(ii) Explain the types of software project plan. (8)
192