Software Engineering Overview

SDLC is important in developing large software for the following reasons: 1) It identifies all activities required to develop and maintain software throughout its lifetime by using software development lifecycle models. 2) It provides precise understanding between team members of what needs to be done at each phase to avoid chaos and project failure. 3) It contains entry and exit criteria for each phase to ensure phases are completed properly before moving on. 4) It systematically organizes and controls various development activities and encourages discipline in the software development process.

Uploaded by M KEERTHIKA

Q. Explain Programs v/s Software Products
Q. What is software engineering? Discuss the phases: Control Flow, Data Structure, Data Flow, Object Oriented
Q. Why is SDLC important in developing large software?
Q. Discuss the Models
Q. Write the organization of the software project management plan. What are the important items the SPMP document has to have?
Q. Explain LOC as the measure of the problem size
Q. Discuss the main three techniques of project estimation parameters
Q. Explain the COCOMO Model, which helps to find the approximate software cost
Q. Explain the following terms with respect to Risk Management
Q. Importance of scheduling in project management
Q. What do you understand by Software Requirement Analysis?
Q. What is good software design? Explain cohesiveness and coupling
Q. Give an overview of the object-oriented concept. Discuss the following terms
Q. Explain characteristics of a good user interface design
Q. What is testing? Explain unit testing
Q. Explain black box testing & integration testing
Q. What do you understand by software reliability? Explain reliability metrics and reliability growth modeling
Q. Explain Programs v/s Software Products.

⌦ Programs are developed by individuals for their personal use, whereas software products are developed by teams of software engineers.
⌦ Programs are usually small in size and have limited functionality, but software products are large in size and have multiple users.
⌦ Usually the author of a program himself uses and maintains it, so it may lack a proper user interface and proper documentation. Software products, on the other hand, have multiple users and therefore contain a good user interface, proper user manuals and good documentation support.
⌦ For example, the programs developed by students as part of their class assignments are just programs, not software products. Since a software product has a large number of users, it must be properly designed, carefully implemented and thoroughly tested.
⌦ A program consists of only program code, but a software product consists not only of program code but also of associated documents, such as the requirements specification document, design document, test document and user manual.

Q. What is software engineering? Discuss the phases:
Control Flow
Data Structure
Data Flow
Object Oriented

Definition of Software Engineering

⌦ Software Engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines.
Fritz Bauer
⌦ Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.
IEEE

► Control Flow Design


⌦ As the size and complexity of programs increased, programmers found it difficult not only to write cost-effective and correct programs but also to understand and maintain programs written by other programmers. To overcome these problems, programmers paid attention to the design of the program's control structure. This design is called "Control Flow Design".
⌦ For this purpose, "flow chart" techniques were developed. A program's control structure indicates the sequence in which the program's instructions are executed. It was realized that the use of the GO TO statement should be considered harmful.
⌦ Gradually, everybody accepted and recommended that good programs should have a neat control structure without any complexity.
⌦ This formed the basis of the structured programming methodology. A program is called structured when it uses only the sequence, selection and iteration types of constructs and avoids the use of GOTO statements.

► Data Structure Oriented Design

⌦ As computers became more powerful with the advent of integrated circuits, they were used to solve more complex problems. The control flow based program development techniques were not sufficient to handle these problems, and more effective program development techniques were needed.
⌦ While developing a program, it is more important to consider the design of the data structures of the program than the design of its control structure.
⌦ Design techniques based on this principle are called data structure oriented design techniques.
⌦ The program code structure should correspond to the data structure. Data structure oriented design avoids errors related to data.

► Data – Flow Oriented Design

⌦ As the requirements for more complex, integrated and sophisticated software arose, the new concept of data flow oriented techniques was proposed.
⌦ In this concept, the major data items handled by a system must first be identified, and then the processing required on these data items to produce the required outputs should be determined.
⌦ The data flow techniques identify the different processing stations in a system and the data that flow between the different processing stations.
⌦ This is useful in creating a data flow model of the entire system, which covers all the processing and data flow in the system. The figure below represents the data flow of a car assembly unit, where each processing station consumes certain input items and produces certain outputs.
[Figure: data flow representation of a car assembly unit. The Chassis Store, Engine Store, Door Store and Wheel Store feed a sequence of processing stations: fit engine, fit doors, fit wheels, then paint and test, producing the assembled car.]

⌦ A major advantage of the data flow technology is its simplicity.


⌦ Once the data flow model is formalized, the data flow oriented design techniques transform the developed data flow model into the system design.

► Object Oriented Design

⌦ With further advancements in the field of software design, the data flow oriented design techniques evolved into the concept of object-oriented design.
⌦ An object-oriented technique is a design approach where the natural objects (such as employees, payroll register, etc.) occurring in a problem are first identified, and then the relationships among the objects, such as composition, reference and inheritance, are determined.
⌦ Each object essentially acts as a data hiding or data abstraction entity.
⌦ The object-oriented design approach targets the convenience of users rather than that of developers.

Q. Why is SDLC important in developing large software?
⌦ A software life cycle is a series of identifiable stages that a software product undergoes
during its lifetime.
⌦ A software product development effort usually starts with a feasibility study stage, after which requirements analysis and specification, design, coding, testing and maintenance are undertaken. Each of these stages is called a life cycle phase.

⌦ An SDLC identifies all the activities that are required to develop and maintain the software throughout its lifetime. This is done by using software development life cycle models.

⌦ When a software product is being developed by a team, there must be a precise understanding among the members as to when to do what; otherwise it may lead to chaos and failure of the entire project.
⌦ To avoid this situation, there must be entry and exit criteria for every phase of software development. Only when these entry and exit criteria are satisfied can the corresponding phase be entered or exited.

⌦ SDLC models are descriptive, diagrammatic models of the software life cycle. They precisely contain the different phases of the software life cycle and their internal processes, together with the specific criteria.

⌦ For example, the software requirements specification document, once completed, is reviewed by the developers' team and finally approved by the customers. SRS documents contain well-defined entry and exit criteria for the various phases.

⌦ With a specified SRS document, it becomes easier for the software project manager to monitor the progress of the project. Thus, a major advantage of a well-defined SDLC model is that it helps control and systematically organize the various activities. We can say that the SDLC encourages the development of software in a systematic and disciplined manner.

⌦ When a well-defined SDLC model is adhered to, the project manager can easily tell at which stage (i.e. design, code, test, etc.) of development the project currently is. If no SDLC model is adhered to, it is very difficult to determine the progress of the project, and the project manager has to depend on the estimates of the team members.

⌦ This situation usually leads to a problem known as the 99% complete syndrome, in which there is no definite way to assess the actual progress of the project.

Q.Discuss the Models


Classical Waterfall Model
Prototype Model
Spiral Model

► Classical Waterfall Model

This model divides the life cycle of a software development process into the phases shown below:

Feasibility Study → Requirements Analysis and Specification → Design → Coding and Unit Testing → Integration and System Testing → Maintenance

Classical Waterfall Model

⌦ During each phase of the life cycle model a set of well-defined activities are carried out.
⌦ Each phase of model has well-defined starting & ending point. Therefore software engineers
know precisely when to stop a phase and start the next phase.

Feasibility Study
⌦ The main aim of the feasibility study is to determine whether developing the software product is financially and technically feasible. It involves analysis of the problem and collection of the data which would be input to the system, and of the processing required to produce the outputs of the system.
⌦ The collected data are analyzed to arrive at the following:
☻ An abstract definition of the problem.
☻ Formulation of the different solution strategies.
☻ Examination of the alternative solution strategies and their benefits, indicating the resources required, and the development cost and time for each alternative solution.
☻ At this stage it is determined whether any of the solutions is infeasible due to high cost, resource constraints or extraordinary technical reasons.
Requirement analysis and Specification

⌦ The basic aim of this phase is to understand the exact requirements of the customer and to document them properly. This phase consists of two distinct activities: requirements analysis and requirements specification.
☻ Requirement analysis
☺ The goal of requirements analysis is to collect and analyze all related data to understand the requirements of the customers clearly and to remove inconsistencies and incompleteness in these requirements.
☺ Requirements analysis starts with the collection of all relevant data from users through interviews, discussions and questionnaires. If the data contain any ambiguity, inconsistency or incompleteness, then after resolving them all the requirements are properly organized into the SRS document.

☻ Requirement Specification
☺ In this phase, the users' functional requirements, non-functional requirements, and the special requirements on the maintenance and development of the software are properly organized and documented in the SRS document.
Design
⌦ The goal of the design phase is to transform the requirements specification into a structure that is suitable for implementation in some programming language. There are two distinct approaches to design.
☻ Traditional Approach
☺ During this phase, first the structured analysis of the requirements specification is carried out.
☺ The various processing activities of the system are identified, and the data flow between the processes is also identified. To do this, the data flow diagramming (DFD) technique is used. Based on this, the detailed design is carried out.
☻ Object Oriented
☺ First, the various objects that occur in the problem and the solution are identified, and the relationships among these objects are determined.
☺ This object structure is further refined to obtain the detailed design.
☺ This approach has several advantages, such as lower development effort and time, and better maintainability.
☻ Coding and Unit Testing
☺ The purpose of this phase is to translate the software design into source code.
☺ Each component of the design is implemented independently as a separate module (unit).
☺ These units are tested, debugged and documented. The purpose of unit testing is to determine the correct working of each module or unit. This involves clear definition of the test cases, testing criteria and management of test cases.

☻ Integration and System Testing


☺ During this phase the different modules are integrated in a planned manner.
☺ Integration is carried out through a number of steps. After integration of each module, the partially integrated system is tested.
☺ When all the modules are integrated and tested, system testing is finally carried out according to the system test plan document prepared during the creation of the SRS document. There are three types of testing:
α - Testing
β - Testing
Acceptance Testing
☻ Maintenance
☺ Maintenance of developed software requires much effort and attention. It is necessary to document the steps for maintenance that a user has to carry out.
☺ Maintenance involves one or more of the activities given below.
Corrective maintenance
To correct errors which were not discovered during the software development phases.
Perfective maintenance
To improve the implementation of the system and the functionality of the system according to the customer's requirements.
Adaptive maintenance
Porting the software to a new environment, e.g. to a new computer or to a new operating system.
► Prototype Model

This model suggests that before development of the actual software, a working prototype of the system should be built first and implemented temporarily. The several reasons for developing a prototype are as follows:
⌦ By illustrating the input data formats, messages and reports to the customer, the prototype helps developers get a clear understanding of the customer's needs.
⌦ Prototyping is used to critically examine the technical issues associated with product development. Often, major design decisions depend on issues like the response time of a hardware controller or the efficiency of a sorting algorithm.
⌦ The prototyping model is shown in the figure below.

[Figure: Prototyping Model of software development. Requirements gathering leads to a quick design, from which a prototype is built. The customer evaluates the prototype; customer suggestions feed back into refining the requirements and rebuilding the prototype. Once the prototype is acceptable to the customer, the flow proceeds to Design, Implement, Test and Maintain.]

⌦ The model starts with the initial requirements gathering phase. A quick design is carried out and the prototype is built using several shortcuts. The shortcuts might involve using inefficient, inaccurate or dummy functions.
⌦ The developed prototype is submitted to the customer for evaluation. If any suggestions are given by the customer, then the requirements are refined as per the customer's desire.
⌦ This cycle continues until the user approves the prototype. The actual system is then developed using the classical waterfall model; in this model, the requirements analysis and specification phase is the most important.
⌦ The cost of the overall software might decrease with this model, because many user requirements are properly defined and technical issues are resolved during the execution of the prototype model.
⌦ This minimizes change requests and redesign costs after the system is delivered to the customer.

► Spiral Model

⌦ The spiral model of software development is shown below:

⌦ In the spiral model, software development is carried out in four main phases.

⌦ The 1st phase identifies the objectives of the product and the alternative solutions possible.
⌦ In the 2nd phase, the alternative solutions are evaluated, and project risks are identified and dealt with by developing an appropriate prototype.
⌦ The 3rd phase consists of developing and verifying the next level of the product.
⌦ The 4th phase reviews the results of the phases traversed so far with the customer and plans the next iteration around the spiral.
⌦ With each iteration around the spiral, a progressively more complete version of the software gets built.
⌦ After several iterations around the spiral, all the risks are resolved step by step and the software is ready for development. At this time the waterfall model of software development is adopted.
⌦ The spiral model enables the developer to understand the risks and to resolve them at each evolutionary level.
⌦ This model uses prototyping as a risk reduction mechanism and retains the systematic stepwise approach of the waterfall model.

Q. Write the organization of the software project management plan. What are the important items the SPMP document has to have?

⌦ Once a project is found to be feasible, the project manager undertakes project planning. Project planning consists of the following important activities:
☻ Effort, cost, resource and project duration estimation.
☻ Risk identification, analysis and abatement procedures.
☻ Project scheduling.
☻ Staff organization and staffing plans.
☻ Miscellaneous plans such as the quality assurance plan, configuration management plan, etc.

⌦ Estimation of effort, cost, resources and project duration is an important component of project planning. Several heuristic techniques are available for estimation. The resources to be planned for a project include the total number and skill levels of the people needed. Estimating the number of engineers needed is a difficult task consisting of two parts:
☻ Estimating the difficulty level of the task.
☻ Estimating the productivity level of each individual engineer.

The number of engineers needed depends on the size of the project.


⌦ An important task during the organization of the software plan is the selection of a suitable development process model. Depending upon the type of project, some life cycle phases may have to be omitted or modified, or new phases may have to be added to the selected life cycle model.
⌦ When the project is more complex, the "sliding window planning" technique is used. In this technique, the project is planned more accurately over successive development phases.
⌦ Project managers document the result of the planning phase in a software project management plan (SPMP) document, whose general organization is shown below. In addition to recording the software project plan in the SPMP document, project managers usually record a clear statement of the goals and the major decisions taken at this point. Given below are the important items that the SPMP document should contain and a possible organization of the SPMP document.

1) Introduction
(a) Objective
(b) Major function
(c) Performance Issues
(d) Management and technical constraints.
2) Project Estimates
(a) Historical data used
(b) Estimation techniques used
(c) Effort, resource, cost & project duration estimates
3) Risk management plan
(a) Risk Analysis
(b) Risk identification
(c) Risk Estimation
(d) Risk abatement procedures.
4) Schedule
a) Work breakdown structure
b) Task network representation
c) Gantt chart representation
d) PERT chart representation
5) Project Resources
a) People
b) Hardware & Software
c) Special resources
6) Staff Organization
a)Team Structure
b) Management & Control Plan
7) Project tracking & control Plan
8) Miscellaneous
a) Project tailoring
b) Quality assurance
c) Configuration management
d) Validation & Verification
e) System Testing
f) Delivery, installation & maintenance.
Q. Explain LOC as the measure of the problem size.
⌦ The simplest measure of problem size is lines of code (LOC). This metric is very popular primarily because it is simple to use. It measures the number of source instructions required to solve the problem. Lines used for commenting the code and header lines are ignored.
⌦ Estimating the LOC count at the end of a project is very simple, but at the beginning of a project it is very tricky.
⌦ To estimate the LOC count at the beginning of a project, project managers divide the problem into modules, each module into sub-modules, and so on, until the sizes of the different leaf-level modules can be approximately predicted. By adding up the estimated sizes of the lowest-level modules, project managers arrive at the total size estimate. Here, past experience in developing similar products is helpful. However, LOC as a measure of problem size has several shortcomings:
☻ LOC gives a numerical value of problem size that varies with coding style, as different programmers have different styles. For example, one programmer might write several source instructions on a single line, while another might spread a single instruction across several lines. This problem can be overcome by counting the language tokens in the program rather than the lines of code.
☻ A good problem size measure should consider the overall complexity of the problem and the effort needed to solve it. LOC counts only the number of source lines in the final program; it does not consider the overall complexity of the problem, design, testing, etc. Coding is only a small part of the overall software development process.
☻ The LOC measure does not consider the quality and efficiency of the code. For example, a lengthy and complicated code structure might have more source instructions than a neat and efficient one, and would therefore have a higher LOC count.
☻ LOC considers only textual complexity but not the more important logical complexity. A program having complex logic requires much more effort to develop than a program with very simple logic.
☻ It is very difficult to obtain an accurate LOC estimate from a problem specification. LOC can be computed only after the code has been fully developed. Therefore, LOC is of little use to project managers at the beginning of a project.
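The counting rule above (ignore comment-only lines and blank lines) can be sketched as a small script. This is a sketch only: it assumes a Python-style source where '#' starts a comment, and it does not handle multi-line strings or other comment syntaxes.

```python
def count_loc(source: str) -> int:
    """Count lines of code, ignoring blank lines and comment-only lines.

    Assumes '#' begins a comment, as in Python-style sources; other
    languages would need their own comment markers.
    """
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

program = """# compute a square
x = 5

y = x * x  # an inline comment does not disqualify a code line
print(y)
"""
print(count_loc(program))  # 3: the two assignments and the print
```

Counting language tokens instead of lines, as suggested above, would remove the dependence on coding style.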

Q. Discuss the main three techniques of estimating project parameters.
⌦ During project planning, project managers estimate the following parameters:
☻ Project size
☻ Effort required to develop the software
☻ Project duration
☻ Cost
⌦ These estimates help the project manager inform the customer about the project cost and also help in resource planning and scheduling.
⌦ There are three main estimation techniques:
☻ Empirical estimation techniques
☻ Heuristic techniques
☻ Analytical estimation techniques
► Empirical Estimation techniques

These techniques are based on making an educated guess of the project parameters using past experience. Although these techniques are based on common sense, the different activities involved in estimation have been formalized over the years. There are two types of these techniques:
Expert judgement
Delphi technique
► Heuristic Techniques

These techniques use mathematical expressions. The various heuristic estimation models can be divided into the following classes:
☻ Static single variable models
☻ Static multivariable models
☻ Dynamic multivariable models

A static single variable estimation model uses some previously estimated characteristic of the software product, such as its size. The basic form of this model is:

Resource = c1 * e^d1

where 'e' is a characteristic of the software which has already been estimated, and the resource to be predicted could be the effort, project duration, staff size, etc. The constants 'c1' and 'd1' can be determined using data collected from past projects (historical data). The Basic COCOMO model is an example of a static single variable cost estimation model.

A static multivariable cost estimation model is of the form:

Resource = c1 * e1^d1 + c2 * e2^d2 + …

where e1, e2, … are characteristics of the project which have already been estimated, and c1, c2, … and d1, d2, … are constants. This method is more accurate than static single variable estimation models.

Dynamic multivariable models project the resource requirements as a function of time.
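The static model forms above can be sketched directly. The constants are illustrative placeholders, since real values of the ci and di must be calibrated from historical project data:

```python
def static_single_variable(e: float, c1: float, d1: float) -> float:
    """Resource = c1 * e^d1, where e is an already-estimated
    characteristic of the product, such as its size in KLOC."""
    return c1 * e ** d1

def static_multivariable(chars, coeffs) -> float:
    """Resource = c1*e1^d1 + c2*e2^d2 + ..., where 'chars' holds the
    estimated characteristics ei and 'coeffs' holds (ci, di) pairs."""
    return sum(c * e ** d for e, (c, d) in zip(chars, coeffs))

# With c1 = 2.4 and d1 = 1.05 (the Basic COCOMO organic-effort constants),
# a 32 KLOC product gives an effort of about 91 programmer-months.
print(round(static_single_variable(32, 2.4, 1.05)))  # 91
```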

► Analytical Estimation Techniques

• The results derived by these techniques start from certain basic assumptions regarding the project, so, unlike empirical techniques, these techniques do have a scientific basis.
• This technique is not very useful for planning a development project, but it is very useful for estimating software maintenance efforts.
• For estimating software maintenance efforts, this technique is more useful than both of the previous techniques.
• Halstead's software science is an example of an analytical estimation technique.

Q. Explain the COCOMO Model, which helps to find the approximate software cost.
⌦ The basic COCOMO model provides an approximate estimation of software costs and is given by
the following expressions:

Effort = a1 * (KLOC) ^a2


Tdev = b1 *(Effort) ^b2

Where,
☻ KLOC is the estimated kilo lines of code.
☻ a1,a2,b1,b2 are constants for different categories of software products.
☻ Tdev is the estimated time to develop the software in months.
☻ Effort is the total development effort required to produce the software product, in
programmer – months(PMs).
⌦ Every line of source text is counted as one LOC; thus, if a single instruction spans several lines (say n lines), it is counted as n LOC. The values of a1, a2, b1, b2 for the different categories of products are given below.

Estimation of development effort

For the three classes of software products, the formulas for estimating the effort based on the code size are shown below:

Organic : Effort = 2.4 (KLOC) ^ 1.05 PM


Semidetached : Effort = 3.0 (KLOC) ^ 1.12 PM
Embedded : Effort = 3.6 (KLOC) ^ 1.20 PM

Estimation of Development Time


Organic : Tdev = 2.5 (Effort) ^ 0.38 Months
Semidetached : Tdev = 2.5 (Effort) ^ 0.35 Months
Embedded : Tdev = 2.5 (Effort) ^ 0.32 Months
⌦ Fig. 1 shows a plot of estimated effort vs. size for various product sizes. From Fig. 1 we can observe that the effort is almost linearly proportional to the size of the software product. Actually, effort is somewhat superlinear in problem size.
⌦ The development time vs. the product size in KLOC is plotted in Fig. 2. We can observe that the development time is a sublinear function of the size of the product, i.e. when the size of the product increases by two times, the time to develop does not double but rises moderately.
⌦ This can be explained by the fact that for larger products several parallel activities can be identified, which can be carried out simultaneously by a number of engineers.
⌦ Further, from Fig. 2 we can observe that the development time is roughly the same for all three categories of products. For example, a 60 KLOC program can be developed in approximately 18 months regardless of whether it is of the organic, semidetached or embedded type. There is more scope for parallel activities in system and application programs than in utility programs. Given the estimated effort for a project in programmer-months and the nominal development time, the average staffing level can be determined by simple division.

Example:-

Assume that the size of an organic software product has been estimated to be 32,000 lines of source code. Let us determine the effort required to develop the software product and the nominal development time.
Effort = 2.4 * (32) ^ 1.05 = 91 PM
Nominal development time = 2.5 * (91) ^ 0.38 = 14 months
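The worked example above can be checked with a small calculator that tabulates the Basic COCOMO constants (a1, a2, b1, b2) from the formulas in this section:

```python
# Basic COCOMO constants (a1, a2, b1, b2) for each product category.
COCOMO = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, category: str):
    """Return (effort in PM, Tdev in months, average staffing level)."""
    a1, a2, b1, b2 = COCOMO[category]
    effort = a1 * kloc ** a2   # Effort = a1 * (KLOC)^a2
    tdev = b1 * effort ** b2   # Tdev = b1 * (Effort)^b2
    return effort, tdev, effort / tdev  # staffing by simple division

effort, tdev, staff = basic_cocomo(32, "organic")
print(f"Effort = {effort:.0f} PM, Tdev = {tdev:.0f} months, staff = {staff:.1f}")
# Effort = 91 PM, Tdev = 14 months, staff = 6.6
```

Dividing the effort by the nominal development time gives the average staffing level mentioned above, here roughly 6 to 7 engineers.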

[Fig. 1: Estimated effort vs. product size, for embedded, semidetached and organic products]

[Fig. 2: Nominal development time vs. product size, for embedded, semidetached and organic products]

Q.Explain the following terms with respect to Risk


Management.
Risk Identification
Risk Assessment
Risk Containment

⌦ A risk is any unfavorable event or circumstances that can occur while a large project is
underway.
⌦ Aim of risk management is to deal with all kinds of risks that might affect a project by preparing
contingency plans in advance.

Risk Identification

A project can be affected by various types of risks, so it is necessary to first categorize them into different classes. There are three main classes.
☻ Project Risks
Project risks arise in various forms due to budgetary, schedule, personnel, resource and customer-related problems. An important project risk is schedule slippage.
☻ Technical Risks
Most technical risks occur due to insufficient knowledge about the software product. Technical risks concern potential design, implementation, interfacing, testing and maintenance problems. In addition, ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty and technical obsolescence are also technical risk factors.
☻ Business Risks
These risks include building an excellent product that no one wants, not fulfilling budgetary or personnel commitments, etc.

It is good practice for a software company to prepare a disaster list that contains all the bad events that have happened in the past with software products. This list can be read by the project manager in order to be aware of some of the risks that could arise in a project; a disaster list is therefore very useful for analyzing typical areas of risk.

Risk Assessment

The objective of risk assessment is to rate each risk in two ways:
i) The likelihood of a risk coming true.
ii) The consequences of the problems associated with that risk.
Based on these two factors, we can prioritize the different risks as follows:

P = r * s
where
P is the priority of a risk,
r is the probability of the risk becoming true, and
s is the severity of the damage due to the risk.
If different risks are prioritized in this way, then the most likely and most damaging risks can be handled first, and comprehensive risk abatement procedures can be designed for these risks.
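The P = r * s prioritization can be sketched in a few lines; the risk list and its probability/severity numbers below are made up for illustration:

```python
# Each risk: (description, probability r in [0, 1], severity s on a 1-10 scale).
risks = [
    ("schedule slippage", 0.6, 8),
    ("key engineer leaves", 0.2, 9),
    ("changing specification", 0.5, 5),
]

# Priority P = r * s; handle the highest-priority risks first.
prioritized = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)

for name, r, s in prioritized:
    print(f"{name}: P = {r * s:.1f}")
# schedule slippage: P = 4.8
# changing specification: P = 2.5
# key engineer leaves: P = 1.8
```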

Risk Containment

Once the risks of a project are assessed, steps are initiated to avoid, or at least contain, the most damaging and most likely risks. Different risks require different containment procedures.

The logic behind the containment procedures depends on the skill of the project manager. An important type of risk that occurs in many software projects is schedule slippage. These risks arise primarily due to the intangible nature of software. They can be dealt with by increasing the visibility of the software product: producing the relevant documents during the process whenever feasible and getting these documents reviewed by an appropriate team.

Milestones should be placed at each phase; this helps the manager review progress.

Q. Importance of Scheduling in project management.


⌦ Scheduling is an important activity of project managers. To schedule a project, the project
manager needs to do the following steps:
o Identify the tasks needed to complete the project.
o Determine the dependency among different tasks.
o Establish the most likely estimates for the duration of the identified tasks.
o Plan the starting and ending dates for various tasks.
o Determine the critical path.
⌦ To schedule a software project, first the entire problem is broken down into a set of logical
tasks, which are assigned to different engineers, and the ordering among these tasks is
determined.
⌦ In breaking down the work, we are essentially enumerating the different tasks that needed
to be done.
⌦ In scheduling, we decide the order in which to do these tasks. Task dependencies define a
partial ordering among tasks, i.e. some tasks must precede some other tasks.
⌦ The most important step is to determine the critical path. If any delay occurs along a critical path,
the entire project gets delayed. It is therefore necessary to identify the critical paths in a
schedule. If the critical path is well determined in a schedule, the project manager may
switch resources from a non-critical task to a critical task so that all the milestones along the
critical path are met. It is possible that a schedule may have more than one critical path.

⌦ Several tools are currently available which can help us figure out the critical path in an
unrestricted schedule, but figuring out an optimal schedule with resource limitations and
a large number of parallel tasks is a much harder problem.

⌦ The time schedule for a large task may be too long. Therefore the manager needs to
break large tasks into smaller ones, expecting to find more parallelism, which could lead to
a shorter development time.

⌦ It is not useful to subdivide tasks into units which take less than a week or two to
complete. Finer subdivision means that a disproportionate amount of time must be spent on
estimating and chart revision.
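The critical-path step described above can be sketched as a longest-path computation over the task graph. The task names, durations and dependencies below are illustrative assumptions, not part of the original text:

```python
# A sketch of critical-path length computation over a task graph.
# Task names, durations (in weeks) and dependencies are illustrative.
from functools import lru_cache

durations = {"spec": 3, "design": 5, "code": 7, "test": 4, "docs": 2}
depends_on = {"design": ["spec"], "code": ["design"],
              "docs": ["design"], "test": ["code", "docs"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Longest-duration path from project start to the end of `task`."""
    preds = depends_on.get(task, [])
    return durations[task] + max((earliest_finish(p) for p in preds), default=0)

# The project cannot finish earlier than the longest (critical) path.
project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 19: spec -> design -> code -> test
```

Any delay on a task along the path spec, design, code, test delays `project_length`, whereas docs has slack and can absorb some delay.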

Q. What do you understand by Software Requirement


Analysis?
⌦ The aim of requirement analysis is to obtain a clear and complete understanding of the product to be
developed and of the user requirements.
⌦ For this, the analysts' team usually visits the customer site, collects the data pertaining to the
product to be developed, and analyzes these data to clearly understand the exact requirements.
⌦ For example, if a product is being developed to automate the existing activities of an office, the analyst
can easily study the input data, the output data, the exact formats of these data and the existing
office procedures. If the product involves developing something new for which no working model
exists, then the task of gathering requirements becomes quite difficult.
⌦ Even experienced analysts take considerable time to understand the exact requirements of the
customer. They know that without a clear understanding of the problem, it is almost impossible to
develop a satisfactory solution. So the analysts should first understand the following:
What is the problem?
Why is it important to solve the problem?
What are the possible solutions of the problem?
What exactly are the data inputs and what exactly are the data outputs?
What are the likely complexities that might arise while solving the problem?
⌦ To answer the above questions, the analysts usually carry out the following two main
activities.
o Requirement gathering

The analysts interview the end-users and customers to collect all possible information
regarding the system. If the project involves automating some existing procedures, then
the task of the system analyst becomes a little easier: they can observe the current working
system. However, in the absence of a working system, much more imagination and skill
are required.
o Analysis of gathered requirements

The main purpose of analyzing the collected information is to clearly understand the
exact requirements of the customer and to resolve anomalies, conflicts and inconsistencies
in the gathered requirements. These are resolved by further discussions with the end-users
and customers. Some inconsistencies and anomalies can be detected easily, while others
require a careful study of the problem.

After gathering the requirements and removing all inconsistencies and anomalies, the
SRS document is prepared. The SRS document usually contains all user requirements in
an informal form.

Q. What is good Software design? Explain


cohesiveness and coupling.
A. Software Design:
Most researchers and software engineers agree that software design for general applications must
have a few desirable characteristics. These are listed below:

A good design should capture all the functionalities of the system correctly.
It should be efficient and easily maintainable.
It should be easily understandable.

Understandability is a major factor of good software design because an easily understandable design is
also easy to maintain and change. Unless a design is easily understandable, maintaining it requires a
tremendous effort. An understandable design should have the following features:

Use of consistent and meaningful names for various design components.


Use of a cleanly decomposed set of modules.
Neat arrangement of modules in a hierarchy, i.e. tree like diagram.

Modular design is one of the fundamental principles of a good design. Decomposition of a problem into
modules reduces the complexity greatly and enables to understand each module easily and separately.
Clean decomposition of a design problem into modules means that the modules in a software design
should display high cohesion and low coupling.

Neat arrangement of modules in a hierarchy essentially means low fan-out and abstraction.

Cohesion and Coupling:

Most researchers and engineers agree that a good software design implies clean decomposition of a
problem into modules, and the arrangement of these modules in a neat hierarchy. The primary
characteristics of clean decomposition are high cohesion and low coupling. Cohesion is a measure of the
functional strength of a module, whereas coupling is a measure of the degree of interaction
between two modules.

A module having high cohesion and low coupling is said to be functionally independent of other
modules. By the term functional independence, we mean that a cohesive module performs a single task
or function. A functionally independent module has minimal interaction with other modules. Functional
independence is a key to good design, primarily for the following reasons:

Functional independence reduces error propagation. Therefore, an error existing in one module does not
directly affect other modules and also any error existing in other modules does not directly affect this
module.

Reuse of a module is possible, because each module performs some well-defined and precise function
and the interface of the module with other modules is simple and minimal. Therefore, any such module
can be easily taken out and reused in a different program.

Complexity of the design is reduced, because different modules can be understood in isolation, as
modules are more or less independent of each other.

No quantitative measures are available to determine the degree of cohesion and coupling of a module.
However, by examining the type of cohesion exhibited by a module, we can judge whether it displays
high or low cohesion.
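As a rough illustration of these ideas (the module and function names below are hypothetical, not from the text), a functionally cohesive module does one well-defined task, and modules couple only through simple parameter/return-value interfaces rather than shared data:

```python
# Illustrative sketch of high cohesion and low coupling.

def compute_gross_salary(basic, allowance_rate=0.4):
    """High cohesion: every statement serves the single task of
    computing a gross salary from the basic pay."""
    return basic + basic * allowance_rate

def print_payslip(name, basic):
    """Low coupling: this module interacts with the other one only
    through a simple parameter/return-value interface, not through
    shared global data."""
    print(f"{name}: {compute_gross_salary(basic):.2f}")

print_payslip("A. Engineer", 1000)
```

Because `compute_gross_salary` performs one precise function with a minimal interface, it can be taken out and reused in a different program, and an error in `print_payslip` does not propagate into it.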

Q. Give overview of Object-oriented concept. Discuss


the following terms:
(1) Object
(2) Class
(3) Inheritance
(4) Multiple Inheritances
(5) Abstraction
(6) Encapsulation
(7) Polymorphism
(8) Dynamic Binding
A. The object-oriented approach is a relatively new paradigm for software development. It is a different
approach compared to the traditional function-oriented approach.
In the object oriented approach a system is designed as a set of interacting objects. These objects should
first be identified and then implemented.

Consider an example: a 'chair' is a member of a larger set of objects called 'furniture', which is a 'class'. A
set of generic attributes can be associated with every object of the class furniture, i.e. cost, dimensions,
weight, color etc. The functions that are applied to modify the attributes of an object are called its
'methods'.

The object-oriented approach leads to reuse, and reuse leads to faster software development and higher-quality
programs. Object-oriented software is easy to maintain, adapt and scale.

Object-oriented technology runs through the whole development process: object-oriented requirements
analysis, object-oriented design (OOD), OODBMSs, OO CASE tools. Some important terms related to this
approach are objects, class, inheritance, message and methods, abstraction, encapsulation,
polymorphism, dynamic binding, genericity etc.

Object:
In the object-oriented approach, a system is designed as a set of interacting objects. Each object
represents some real-world entity.
Each object essentially consists of some data that is private to the object and a set of functions that
operate on those data. An object cannot directly access the data internal to another object.
For example, in a word processing application, a paragraph, a line, and a page can be objects. A
library member can be an object of a library automation system. The private data of each member
object can be:
Name of the member.
Code number assigned to this member.
Birth date.
His phone number.
E-mail address.
His membership expiry date.
Books issued to him, etc.
The data internal to an object are called the 'attributes' of the object, and the functions supported by an
object are called its 'methods'.
Each object is essentially a data abstraction entity. Data abstraction means that each object hides
from other objects the way in which its internal data is stored and manipulated.
The principle of data abstraction reduces coupling among the objects and therefore reduces the
overall complexity of the design and also helps code reuse.
When a system is analyzed, developed, and implemented in terms of the natural objects occurring in
it, it becomes easier to understand the design and implementation of the system.
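The library-member object described above can be sketched as a Python class. The attribute and method names below are illustrative assumptions:

```python
# Sketch of the library-member object: private data plus methods.
class LibraryMember:
    def __init__(self, name, code, expiry_date):
        # Attributes: data private to the object; the leading underscore
        # marks them, by convention, as internal to the object.
        self._name = name
        self._code = code
        self._expiry_date = expiry_date
        self._books_issued = []

    # Methods: the functions through which other objects operate on
    # this object's data (data abstraction).
    def issue_book(self, title):
        self._books_issued.append(title)

    def books_issued(self):
        return list(self._books_issued)

member = LibraryMember("Asha", 42, "2025-12-31")
member.issue_book("Software Engineering")
print(member.books_issued())  # ['Software Engineering']
```

Other objects never touch `_books_issued` directly; they send messages such as `issue_book`, which is what keeps coupling among objects low.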

Class:
Objects possessing similar attributes or displaying similar behavior constitute a class.
For example, the set of all employees can constitute a class in an employee payroll system, since
each employee object has similar attributes such as his name, code number, salary, address etc. and
display similar behavior as other employee objects.
Once defined, a class serves as a template for object creation. Thus a class can be considered an
abstract data type.
Each object must be defined as an instance of some class. It means the attributes and behavior of an
object are determined by the class it belongs to.

Inheritance:
Inheritance feature allows us to define a new class by extending or modifying an existing class.
The original class is called the base class (or super class) and the new class obtained through
inheritance is called the derived class (or sub class).
In below figure, library members are the base class for the derived classes faculty, students and staff.
Similarly, students are the base class for the derived classes undergrad, post grad, and research.
For example, in the library information system the library-member base class might define the data
for name, address and library membership number for each member, and its derived classes might
define additional data such as max-number-of-books and max-duration-of-issue, which may vary for
the different member categories.

Figure

A base class is a generalization of its derived classes, i.e. a base class contains only those properties
that are common to all the derived classes.
Important advantage of the inheritance mechanism is code reuse. If certain methods or data are
similar in a set of classes, then instead of defining these methods and data in each of these classes
separately, these are defined only once in the base class and are inherited by each of its subclass.
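A minimal sketch of this base/derived relationship in Python (the `max_books` values are assumptions, not from the text):

```python
# The base class holds the data common to all member categories.
class LibraryMember:
    def __init__(self, name, membership_number):
        self.name = name
        self.membership_number = membership_number

class Student(LibraryMember):   # derived class: inherits the base data
    max_books = 5               # additional data for this category

class Faculty(LibraryMember):
    max_books = 10

s = Student("Ravi", 101)
print(s.name, s.max_books)      # inherited attribute plus subclass data
```

The `__init__` code is written once in the base class and reused by every subclass, which is the code-reuse advantage described above.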

Multiple Inheritance:
Multiple inheritance is the mechanism by which a derived class can inherit attributes and methods
from more than one base class. For example, in some situations research students can also be staff of
the institute, and therefore some of the characteristics of the research class might be similar to the student
class and some other characteristics might be similar to the staff class. In the figure below, multiple
inheritance is represented by arrows drawn directly from the derived class to each of its base classes.
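The research-student example can be sketched as follows (the class bodies are illustrative):

```python
# Multiple inheritance: a research student who is also staff.
class Student:
    def register_course(self, course):
        return f"registered for {course}"

class Staff:
    def draw_salary(self):
        return "salary drawn"

class ResearchStaff(Student, Staff):   # inherits from both base classes
    pass

rs = ResearchStaff()
print(rs.register_course("SE101"), "/", rs.draw_salary())
```

`ResearchStaff` picks up behavior from both base classes without redefining either method.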

Abstraction:
Abstraction is a mechanism for reducing the complexity of software.
It is a way of increasing software productivity because of the fact that software productivity is
inversely proportional to software complexity.
Abstraction is the selective examination of certain aspects of a problem while ignoring other aspects
of the problem. In other words, the main idea behind abstraction is to consider only those aspects of
the problem that are relevant for a certain purpose and to suppress all other aspects of the problem
that are not relevant for the given purpose.
Thus, abstraction mechanism allows us to represent a problem in a simpler way by omitting
unimportant details.
Many different abstractions of the same problem are possible depending on the purpose for which
they are made.
Abstraction not only helps the development engineers understand the problem better, but
also leads to better comprehension of the system design by the end-users and the maintenance team.

Encapsulation:
The property of an object by which it interfaces with the outside world only by means of messages is
referred to as encapsulation.
The data of an object are encapsulated and can be accessed only through message-based communication.
This property offers three advantages.

The internal implementation details of data and procedures are hidden from the outside world. Thus,
it protects data of an object from corruption by other object or from unauthorized access.
Encapsulation hides the internal structure of an object so that interaction with the object is simple
and standardized. This facilitates reuse of objects across different projects. If the internal structure of
an object is modified, other objects are not affected.
Since objects communicate with each other using messages only, they are weakly coupled. The fact
that objects are inherently weakly coupled enhances the understandability of the design.
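A hedged sketch of message-based access (the class is hypothetical; note that Python enforces hiding only by convention and double-underscore name mangling, not strictly):

```python
# Encapsulation sketch: the balance can be changed only through the
# object's methods ("messages").
class Account:
    def __init__(self, balance=0):
        self.__balance = balance      # internal data, hidden from outside

    def deposit(self, amount):
        if amount > 0:                # the method can validate changes,
            self.__balance += amount  # protecting the data from corruption

    def balance(self):
        return self.__balance

a = Account()
a.deposit(100)
print(a.balance())  # 100
# Accessing a.__balance from outside the class raises AttributeError
# because of name mangling.
```

Rejecting a negative deposit inside the method is exactly the protection-from-corruption advantage listed above: no outside object can bypass the check.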

Polymorphism:
Polymorphism literally means poly (many) morphism (forms). It denotes the following.

The same message can result in different actions when received by objects of different types. This is
referred to as static binding.
When different objects are referred to through a pointer, then in case when a message is sent, an
appropriate method is called depending upon the object the pointer is currently pointing to. This is
referred to as dynamic binding.

Using polymorphism, a programmer can send a generic message to a set of objects, which may be of
different types, and leave the exact implementation to the receiving objects. The main advantage of
polymorphism is that it facilitates code reuse and maintenance. Also, new lower-level objects can be
added with minimal changes to existing objects, as illustrated below:
Traditional Code                      Object-oriented code
if(shape == CIRCLE)                   shape.draw();
    Draw_circle();
else if(shape == RECTANGLE)
    Draw_rectangle();

We can see that the object-oriented code is much more concise and easily understandable. Also, suppose
it is later found necessary to add a new graphics drawing primitive (say, an ellipse) to the example
program segment: the procedural code has to be changed by adding a new if-then-else clause, whereas in
the object-oriented program the existing code need not be changed; only a new class for ellipse has to be
defined.
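The object-oriented column of the comparison above can be made runnable as follows (the class names are assumed for illustration):

```python
# The same message `draw` causes different actions depending on the
# type of the receiving object.
class Circle:
    def draw(self):
        return "drawing a circle"

class Rectangle:
    def draw(self):
        return "drawing a rectangle"

# Adding an Ellipse class later needs no change to this loop: the
# generic message is dispatched to whatever object receives it.
for shape in (Circle(), Rectangle()):
    print(shape.draw())
```

The loop never tests the shape's type, so extending the program means defining one new class rather than editing every dispatch site.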

Dynamic binding:
Static binding is said to occur if the address of the called method is known at compile time.
In dynamic binding the address of an invoked method is known only at run time.
Dynamic binding is useful for implementing polymorphism.

Q. EXPLAIN CHARACTERISTICS OF A GOOD USER


INTERFACE DESIGN:
Ans:
The following are the characteristics of a good user interface design.

Speed of learning:-

⇒ A good user interface should be simple to learn. Also, a good user interface should not require its
users to memorize commands. Another important characteristic of a user interface that affects
the speed of learning is consistency. Users should be able to use the same command in different
circumstances for the same task.
⇒ Users can learn an interface faster if the interface is based either on some day-to-day real-life
example or on concepts with which the users are already familiar.

Speed of use:-

The speed of use of a user interface is determined by the amount of time and effort
required to initiate and execute different commands. The time and effort required to initiate and
execute different commands should be minimal.

Speed of recall:-

Once users have learned how to use an interface, the speed with which they recall how to use
the software should be maximized. The speed of recall is improved if the interface is based on
symbolic commands, graphical icons and simple command names.

Attractiveness:-

An attractive user interface catches the user's attention and fancy. In this respect, graphic-
based user interfaces have a definite advantage over text-based interfaces.

Consistency:-

The commands supported by a user interface should be consistent. The basic purpose of
consistency is to allow users to generalize knowledge about one aspect of the interface to
another. Thus, consistency facilitates speed of learning and speed of recall, and also helps in
reducing the error rate.

Feedback:-

A good user interface must provide feedback to various user actions. For example if any
user request takes more than a few seconds to process, the user must be informed that his/her
request is being processed.

Support for multiple skill levels:-


A good user interface must support multiple options for issuing a command, because
users with different experience levels prefer different types of user interface.
For example, an experienced user may prefer the keyboard (shortcut keys), while a new
user may prefer using the mouse at the start.

Error recovery:-

A good user interface should minimize the scope for committing errors while initiating
different commands. Consistency of names, of the command issue procedure and of the behavior
of similar commands, and the simplicity of the command issue procedure minimize error possibilities.

User guidance and on-line help:-

Whenever users need guidance or seek help from the system, they should be provided
with appropriate guidance and help. This is a very important aspect of good user interface
design.

Q. WHAT IS TESTING? EXPLAIN UNIT TESTING:


Ans:

Testing: -
Testing a program consists of providing the program with a set of test input data (or test cases)
and observing whether the program behaves as expected. If the program fails to behave as expected,
the conditions under which the failure occurs are noted for debugging and correction.

Unit testing:-

1. Design of test cases:-

Exhaustive testing of any large and complex system is impractical due to the
extremely large domain of input data values. Therefore, we must design an optimal test
suite of reasonable size to uncover as many errors in the system as possible.
A test suite is a set of all test cases with which a given system is to be tested. The
test cases must be selected using systematic approaches.

There are essentially two main approaches for designing test cases.

⇒ Black box approach.


⇒ White box (glass-box) approach.

In the black-box approach, the test cases are designed using only functional specification
of software, i.e. without any knowledge of the internal structure of the software.
Therefore, the black box testing is also called the functional testing.

In the white-box approach, thorough knowledge of the internal structure of the
software is required. Therefore, white-box testing is also called structural
testing.

2. Drivers and stub modules:-

To test a single module, we need a complete environment that provides all that is
necessary for execution of the module. We need the following to test the module:
⇒ The procedures that the module under test calls and which do not belong to it.
⇒ The non-local data structures that the module accesses.
⇒ A procedure to call the functions of the module under test with appropriate parameters.

However, since the required modules are usually not available until they too have
been unit tested, stubs and drivers are designed to provide the complete
environment for a module. A stub procedure is a dummy procedure that has the
same I/O parameters as the given procedure but has a highly simplified behavior.

A driver module contains the non-local data structures accessed by the module
under test, and also has the code to call the different functions of the module
with appropriate parameter values.

Driver Module

Module under test

Stub Module

Unit testing with the help of stub and driver modules.
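A minimal sketch of this arrangement (the module and procedure names are hypothetical, and the stub's fixed return value is an assumption; the called procedure is passed in as a parameter so the stub can be substituted for the real routine):

```python
# Suppose the module under test calls a not-yet-written procedure
# that looks up a tax rate.

def lookup_tax_rate_stub(state):
    """Stub: same interface as the real procedure, but a highly
    simplified behavior (a fixed dummy rate)."""
    return 0.10

def compute_price(amount, lookup_tax_rate):
    """Module under test: the procedure it calls is injected, so a
    stub can stand in for the missing real module."""
    return amount * (1 + lookup_tax_rate("XX"))

def driver():
    """Driver: sets up the data and calls the module under test with
    appropriate parameter values."""
    result = compute_price(100.0, lookup_tax_rate_stub)
    print(result)  # about 110.0
    return result

driver()
```

Once the real tax-rate module has itself been unit tested, it replaces the stub with no change to `compute_price`.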

Q- EXPLAIN BLACK BOX TESTING & INTEGRATION


TESTING?
ANS:

BLACK – BOX TESTING:-

There are essentially two approaches to design black-box test cases:


⇒ Equivalence Class Partitioning
⇒ Boundary value analysis.

1. Equivalence Class Partitioning:-

• Equivalence partitioning allows us to divide the domain of input values into a set of equivalence
classes, so that the behavior of the program is similar for every input value belonging to the same class.
• The main idea of defining the equivalence classes is that testing the code with any one value
belonging to an equivalence class is as good as testing the software with any other value belonging
to that equivalence class.
• Equivalence class for software can be designed by examining the input data.
• The following are some general guidelines for designing the equivalence classes:
o If an input condition requires a specific value or specifies a range, then one valid and two
invalid equivalence classes should be defined.
o If an input condition specifies a member of a set or is Boolean, then one valid and one
invalid class are defined.

Example 1:-
For software that computes the square root of an integer in the range of 1 to 5000, there are
three equivalence classes:
The set of integers smaller than 1.
The set of integers in the range 1 to 5000.
The set of integers larger than 5000.

Therefore, the test cases must include one value from each class. Thus, a possible test set is
{-5, 500, 6000}.

2. Boundary value analysis:-

• Some typical programming errors occur at the boundaries of the different equivalence classes of
inputs. For example, programmers may improperly use < instead of <=.

• Boundary value analysis leads to selection of test cases at the boundaries of the different
equivalence class.
• Guidelines for designing test cases are as follows:
If an input condition specifies a range of values from a to b, test cases should be designed
with the values a, b, a-1 and b+1.
If an input condition specifies a set of values, the test cases should be designed with the
minimum, the maximum, minimum - 1 and maximum + 1.

For the above example, the test cases must include the values [0, 1, 5000, 5001].
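Both test sets can be exercised against a hypothetical integer square-root routine for the range 1 to 5000 (the routine itself is an assumption used only to illustrate the test cases):

```python
# A hypothetical routine under test, valid only for inputs 1..5000.
import math

def int_sqrt(n):
    if not 1 <= n <= 5000:
        raise ValueError("input must be in the range 1 to 5000")
    return math.isqrt(n)

equivalence_cases = [-5, 500, 6000]    # one value from each class
boundary_cases = [0, 1, 5000, 5001]    # values around both boundaries

for n in equivalence_cases + boundary_cases:
    try:
        print(n, "->", int_sqrt(n))
    except ValueError as err:
        print(n, "-> rejected:", err)
```

A bug such as writing `1 < n` instead of `1 <= n` would be missed by the equivalence set but caught immediately by the boundary case `n = 1`.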

INTEGRATION TESTING:-

During integration testing, different modules of a system are integrated using an integration plan.
The primary objective of integration testing is to test the module interfaces. The following are the
different types of integration testing.

1) Big-Bang Integration Testing:-

⇒ This is the simplest approach of integration testing.


⇒ Here all modules are integrated and then tested.
⇒ This approach is useful only for very small systems. The main problem with this
approach is that once an error is found during integration testing, it is very difficult to
localize, as the error may belong to any of the modules being integrated.

2) Bottom-up Integration Testing:-


⇒ In this testing, each subsystem is tested separately, and then the integrated full system is
tested.
⇒ A subsystem might consist of many modules.
⇒ The primary purpose of testing each subsystem is to test the interface among various
modules making up the subsystem. Both control and data interfaces are tested.
⇒ An advantage is that the several disjoint systems can be tested simultaneously.
⇒ A disadvantage is the complexity that arises when the system is made up of a large number
of small subsystems.
⇒ In a pure bottom – up testing, no stubs are required, only test – drivers are required.

3) Top – Down Integration Testing:-

⇒ In this approach, testing starts with the main routine i.e. root module, and one or
two sub-routines in the system. After the top-level ‘skeleton’ has been tested, the
subroutines of the ‘skeleton’ are immediately combined with it and tested.
⇒ A pure top-down testing does not require any driver routines; only stub
programs are required.
⇒ A disadvantage is that in the absence of lower – level routines, many times it may
become difficult to exercise the top-level routines in the desired manner since lower
level routines perform several low-level functions such as I/O.
4) Mixed Integration Testing:-

⇒ A mixed integration testing follows both the top-down and bottom-up approaches.
⇒ Here, top-down testing can start only after the top-level modules have been coded and unit
tested. Similarly, bottom-up testing can start only after the bottom-level modules are
ready.
⇒ The mixed approach overcomes the shortcomings of the top-down and bottom-up
approaches: testing can start as and when modules become available. Therefore, this
is the most commonly used integration testing technique.
Q. What do you understand by software reliability, and
explain reliability metrics and reliability growth
modeling?
Software Reliability
⌦ Software reliability can be defined as the probability of the product working “correctly” over a
given period of time.
⌦ A software product having a large number of defects would be very unreliable. Reliability would
improve if the number of defects were reduced. However, there is no simple relationship between
system reliability and the number of defects: removing errors from rarely executed parts of a
system makes little difference to the software reliability, but if an error is removed from a
frequently executed part, the improvement in reliability will be greater.

Software reliability and Growth Modelling

A reliability growth model is a mathematical model, which shows how software reliability grows as
errors are detected and removed.

Two very simple reliability growth models are as follows:

Step Function Model


⌦ The simplest reliability growth model is step function model where it is assumed that the
reliability increases by a constant increment each time an error is detected and repaired.
⌦ However, this assumption is highly unrealistic because different errors contribute differently to
reliability growth.

Figure: step function model of reliability growth (ROCOF, rate of occurrence of failure, vs. time).
Jelinski and Moranda Model

The assumption of this model is that the reliability does not increase by a constant amount
each time an error is repaired; rather, the growth of reliability is inversely proportional to
the number of remaining errors.

Although this model is more realistic for many applications, it still suffers from the fact that
most problematic failures are discovered early during the testing process. Repairing these errors
contributes the most to the reliability growth. Therefore, the rate of reliability growth would be
large initially and then slow down later on, contrary to the assumption of this model. There are
some other, more complex reliability growth models, which give a more accurate approximation
of the growth of reliability.
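The Jelinski-Moranda assumption can be sketched numerically; the total error count N and the per-fault rate phi below are illustrative assumptions, not values from the text:

```python
# Jelinski-Moranda idea: the failure rate after fixing some errors is
# proportional to the number of errors still remaining.
N, phi = 100, 0.05   # assumed total errors and per-fault hazard rate

def failure_rate(errors_fixed):
    remaining = N - errors_fixed
    return phi * remaining

# Each fix lowers the rate by the same absolute step (phi), but the
# relative improvement grows as fewer errors remain.
for fixed in (0, 50, 90, 99):
    print(fixed, "errors fixed ->", round(failure_rate(fixed), 2))
```

With these assumed numbers the rate falls from 5.0 to 0.05 as errors are removed, showing why reliability growth per fix is larger when few errors remain, relative to the current failure rate.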
