4. What are Data source and sink? How to represent them in DFDs?
Rectangles represent a source or sink, which is a net originator or consumer of data.
(Google): A source represents any origin of data that is identified as outside the boundary of the process that the DFD is modeling. Similarly, a sink is any destination for data that is outside the boundary of the process that the DFD is modeling.
8. What do you mean by divide and conquer?
For complex tasks, the divide-and-conquer method is used. That is, partition the problem into sub-problems
and then try to understand each sub-problem and its relationship to other sub-problems in an effort to
understand the whole problem. The question here is “partition with respect to what?” Generally, in
analysis, the partition is done with respect to object or function. Most analysis techniques view the
problem as consisting of objects or functions and aim to identify objects or functions and hierarchies
and relationships among them.
10. Differentiate between glass box and white box testing. (Google)
(Glass box testing is another name for white box testing; the contrast, shown below, is between black box and white box testing.)
BLACK BOX TESTING | WHITE BOX TESTING
It is mostly done by software testers. | It is mostly done by software developers.
No knowledge of the implementation is needed. | Knowledge of the implementation is required.
It can be referred to as outer or external software testing. | It is the inner or internal software testing.
It is a functional test of the software. | It is a structural test of the software.
15. Define validation. (Google)
Validation is the process of checking whether the software product is up to the mark, i.e., whether it satisfies the high-level requirements. It checks the validity of the product: whether what we are developing is the right product. It is a comparison of the actual product against the expected product.
23. What are the different types of metrics? When are they used? (Google)
Product metrics describe the characteristics of the product such as size, complexity, design features,
performance, and quality level.
Process metrics can be used to improve software development and maintenance. Examples include the
effectiveness of defect removal during development, the pattern of testing defect arrival, and the
response time of the fix process.
Project metrics describe the project characteristics and execution. Examples include the number of
software developers, the staffing pattern over the life cycle of the software, cost, schedule, and
productivity.
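A rough sketch (not from the notes; all names and figures below are made up) of how such metrics might be computed:

# Hypothetical sketch: one product, one process, and one project metric.
kloc = 12.5                          # product metric: size in thousands of LOC
defects_in_development = 46
defects_after_release = 4

# Process metric: defect removal effectiveness (fraction removed before release).
dre = defects_in_development / (defects_in_development + defects_after_release)

person_months = 25                   # project metric input: total effort
productivity = kloc / person_months  # project metric: KLOC per person-month

print(f"Defect removal effectiveness: {dre:.2%}")
print(f"Productivity: {productivity:.2f} KLOC/person-month")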
27. Which model is used for developing software for automation of existing manual system and why?
Not stated explicitly in the text, but probably the prototyping model.
Unit-1
1. Briefly explain the software engineering problems.
Software Engineering is a systematic approach to the development, operation, maintenance and
retirement of the software. There is another definition for s/w engineering, which states that “Software
engineering is an application of science and mathematics by which the capabilities of computer
equipment are made useful to man via computer programs, procedures and associated
documentation”.
Problem of Scale
A common factor that software engineering must deal with is the issue of scale. Development of a very
large scale system requires a very different set of methods compared to developing a small system; i.e.
methods that are used for developing small systems generally do not scale up to large systems. For
example: consider the problem of counting people in a room versus taking the census of a country. Both
are counting problems, but the methods used are totally different. A different set of methods has to be used for developing large software.
Any large project involves the use of technology and project management. In small projects, informal
methods for development and management can be used. However, for large projects both have to be
much more formal. When dealing with a small software project, the technology and project management requirements are low. However, when the scale changes to larger systems, we have to follow formal methods. For example, if we take 50 bright programmers without formal management and development procedures and ask them to develop a large project, they are unlikely to produce anything of use.
(A project is small if its size is less than 10 KLOC, medium if it is less than 100 KLOC, large if it is less than one million LOC, and very large if the size is several million LOC or more. For example: Python - 200 KLOC, Apache - 100 KLOC, Red Hat Linux - 30000 KLOC, Windows XP - 40000 KLOC.)
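A minimal sketch applying the size thresholds quoted above (the function name is arbitrary):

def project_size(loc: int) -> str:
    """Classify a project by lines of code, using the thresholds quoted above."""
    if loc < 10_000:
        return "small"         # < 10 KLOC
    if loc < 100_000:
        return "medium"        # < 100 KLOC
    if loc < 1_000_000:
        return "large"         # < 1 million LOC
    return "very large"        # several million LOC or more

print(project_size(200_000))   # a 200 KLOC project -> "large"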
Cost, Schedule and Quality
The cost of developing a system is the cost of resources used for the system, which in case of software
are the manpower, hardware, software and other support resources. The manpower component is
predominant as the software development is highly labor-intensive.
Schedule is an important factor in many projects. For some business systems, it is necessary to build the software within a short cycle time. Developing methods that produce high-quality software is another fundamental goal of software engineering. The quality of a software product has three dimensions:
Product Operation, Transition and Revision.
complete, the code is integrated and testing is done. On successful completion of testing, the system is
installed. After this, the regular operations and maintenance take place as shown in the figure.
Each phase begins soon after the completion of the previous phase. Verification and validation activities
are to be conducted to ensure that the output of a phase is consistent with the overall requirements of
the system. At the end of every phase there will be an output. Outputs of earlier phases can be called
work products and they are in the form of documents like requirement document and design
document. The output of the project is not just the final program along with the user manuals but also
the requirement document, design document, project plan, test plan and test results.
Project Outputs of the Waterfall Model
• Requirement document
• Project plan
• System design document
• Detailed design document
• Test plan and test report
• Final code
• Software manuals
• Review report
Reviews are formal meetings to uncover deficiencies in a product. The review reports are the outcomes
of these reviews.
Advantages (Google)
• It allows for departmentalization and managerial control.
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model – each phase has specific deliverables and a
review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• A schedule can be set with deadlines for each stage of development and a product can proceed
through the development process like a car in a car-wash, and theoretically, be delivered on time.
Limitations
• The waterfall model assumes that the requirements of a system can be frozen before the design begins. It is difficult to state all the requirements before starting a project.
• Freezing the requirements usually requires choosing the hardware. A large project might take a few years to complete. If the hardware is selected early, then, given the speed at which hardware technology changes, it will be very difficult to accommodate technological changes.
• The waterfall model stipulates that the requirements be completely specified before the rest of the development can proceed. In some situations, it might be desirable to produce a part of the system first and enhance it later. This cannot be done if the waterfall model is used.
• It is a document-driven model which requires formal documents at the end of each phase. This approach is not suitable for interactive applications.
• In an interesting analysis it was found that the linear nature of the life cycle leads to “blocking states” in which some project team members have to wait for other team members to complete dependent tasks. The time spent waiting can exceed the time spent in productive work.
• The client gets a feel for the software only at the end.
Problems: This model depends heavily on the effort required to build and improve the prototype, which in turn depends on computer-aided prototyping tools. If the prototype is not built efficiently, too much effort will be put into designing it.
but to reduce the cost of testing and maintenance.
Early Defect Removal and Defect Prevention: The longer the delay in detecting an error, the more expensive it is to correct. As the figure given below shows, an error that occurs in the requirement phase, if corrected during acceptance testing, can cost about 100 times more than correcting it in the requirement phase itself. To correct errors after coding, both the design and the code have to be changed, thereby increasing the cost of correction. All defect removal methods are limited in their capabilities and cannot detect all the errors that are introduced. Hence it is better to also provide support for defect prevention.
Process Improvement: Improving the quality and reducing the cost are the fundamental goals of the
software engineering process. This requires the evaluation of the existing process and understanding
the weakness in the process. Software process must be a closed-loop process. The process must be
improved based on previous experiences and each project done using the existing process must feed
information back to facilitate this improvement. This activity of analyzing and improving the process is
largely done by the process management component of the software process. Other processes should also take an active part in this for better performance.
a dramatic change in the infrastructure of the organization itself. This is usually called the law of software evolution. The resulting maintenance is referred to as adaptive maintenance. As a result, software developers need to go through not only the code, but also the documentation associated with it. They should test the whole software again to ensure consistency.
In the old days, hardware was very costly; purchasing a computer required lakhs of rupees. Nowadays hardware costs have decreased dramatically. Software can now cost more than a million dollars and run efficiently on hardware that costs only tens of thousands of dollars.
Late, Costly and Unreliable: Software Engineering is driven by three major factors: cost, schedule and
quality. There are many instances quoted about software projects that are behind the schedule and
have heavy cost overruns. If the completion of a particular project is delayed by a year, the cost of the project may double or increase even more. If the software is not completed in the scheduled period, it becomes very costly.
Unreliability means, the software does not do what it is supposed to do or does something it is not
supposed to do. In software, failures occur due to bugs or errors that get introduced during the design
and development process. Hence, even though the software may fail after operating correctly for some time, the bug that causes the failure was there from the start; it only got executed at the time of failure.
Problem of Change and Rework: Once the software is delivered to the customer, it enters into
maintenance phase. All systems need maintenance. Software needs to be maintained because there are
often some residual errors remaining in the system that must be removed as they are discovered. These
errors, once discovered, need to be removed, leading to the software getting changed. This is sometimes called corrective maintenance.
Software often must be upgraded and enhanced to include more features and provide more services.
This also requires modification of the software. If the operating environment of the software changes, then the software must also be modified accordingly; it must acquire new qualities to fit the new environment. The maintenance due to this is called adaptive maintenance.
8. Explain the SCM lifecycle of an item/Briefly explain the various activities of software configuration
management process.
SCM is a process of identifying and defining the items in the system, controlling the change of these
items throughout their life cycle, recording and reporting the status of items and change requests, and verifying the completeness and correctness of these items. SCM is independent of the development process. The development process handles normal changes, such as a change in code while the programmer is developing it or a change in a requirement while the analyst is gathering the information. However, it cannot handle changes like requirement changes that arrive while coding is being done. Approving changes, evaluating the impact of a change, deciding what needs to be done to accommodate a change request, etc., are the issues handled by SCM. SCM has beneficial effects on the cost, schedule and quality of the product
being developed.
It has three major components:
Configuration Identification: When a change is made, it should be clear what the change has been applied to. This requires a baseline to be established. A baseline forms a reference point in the development of a system and is generally defined after the major phases in the development process. A software baseline represents the software in its most recent state. Some baselines are the requirement baseline, the design baseline and the product baseline or system baseline.
Though the goal of SCM is to control the establishment and changes to these baselines, treating each
baseline as a single entity for the change is undesirable, because the change may be limited to a very
small portion of the baseline. For this reason, a baseline can consist of many software configuration items (SCIs). A baseline is a set of SCIs and their relations. Because a baseline consists of SCIs and an SCI is the basic unit of change control, the SCM process starts with identification of the configuration items.
Once the SCI is identified, it is given a name and becomes the unit of change control.
Change Control: Once the SCIs are identified, and their dependencies are understood, the change
control procedures of SCM can be applied. The decisions regarding the change are generally taken by
the configuration control board [CCB], headed by the configuration manager [CM].
When an SCI is under development, it is considered to be in the working state. It is not under SCM and can be changed freely. Once the developer is satisfied with the SCI, it is given to the CM for review and the item enters the ‘under review’ state. The CM reviews the SCI, and if it is approved, it enters a library, after which the item is formally under SCM. If the item is not approved, it is given back to the developer. This cycle of an SCI is shown in the figure.
Once the SCI is in the library, it cannot be modified without the permission of the CM. An SCI under SCM can be changed only if the change has been approved by the CM. A change is initiated by a change request (CR). The reason for the change can be anything. The CM evaluates the CR primarily by considering the effect of the change on the cost, schedule and quality of the project and the benefits likely to come from the change. Once the CR is accepted, the project manager updates the plan and the CR is implemented by the programmer.
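A small illustrative sketch (hypothetical, not from the text) of the SCI life cycle described above: working, under review, then library, with changes to a library item allowed only through an approved CR.

class SCI:
    def __init__(self, name: str):
        self.name = name
        self.state = "working"          # under development, freely changeable

    def submit_for_review(self):
        assert self.state == "working"
        self.state = "under review"     # handed over to the configuration manager

    def review(self, approved: bool):
        assert self.state == "under review"
        self.state = "library" if approved else "working"

    def apply_change_request(self, cr_approved_by_cm: bool):
        # An SCI in the library may be modified only via an approved CR.
        assert self.state == "library"
        if not cr_approved_by_cm:
            raise PermissionError("Change request rejected by the CM")
        self.state = "working"          # checked out again for the change

item = SCI("design document")
item.submit_for_review()
item.review(approved=True)
item.apply_change_request(cr_approved_by_cm=True)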
Status Accounting and Auditing: The aim of status accounting is to answer questions like: what is the status of a CR (approved/rejected), what is the average effort for fixing a CR, and what is the number of CRs. For status accounting, the main source of information is the CR. Auditing has a different role.
programs should be easy to read and understand.
Testing
Once the programs are available, we can proceed with testing. Testing not only has to uncover errors
introduced during coding, but also errors introduced during previous phases.
The starting point of testing is unit testing. Here each module is tested separately. After this, the
modules are integrated to form sub-systems and then the entire system. During the integration of
modules, integration testing is done to detect design errors. After the system is put together, system
testing is performed. Here, the system is tested against the requirements to see whether all the requirements are met or not. Finally, acceptance testing is performed with the user's real-world data to demonstrate the system to the user.
The risk-driven nature of the spiral model makes it suitable for any application. An important feature of the spiral model is that each cycle of the spiral is completed by a review that covers all the products developed during that cycle, including plans for the next cycle.
In a typical application of the spiral model, one might start with an extra round zero, in which the feasibility of the basic project objectives is studied. In round one, a concept of operation might be developed; the risks here are typically whether or not the goals can be met within the constraints. In round two, the top-level requirements are developed. In succeeding rounds, the actual development may be done. This model is preferable for projects where the risks are high.
Problems:
1) It is difficult to convince customers that the evolutionary approach is controllable.
2) It demands considerable risk-assessment expertise and depends heavily on this expertise for its success.
3) If major risks are not uncovered and managed, major problems may occur.
11. Explain the working of an iterative enhancement model with the help of a diagram.
This model tries to combine the benefits of both prototyping and the waterfall model. The basic idea is that
software should be developed in increments, and each increment adds some functional capability to the
system. This process is continued until the full system is implemented. An advantage of this approach is that it results in better testing, because testing each increment is likely to be easier than testing the entire system. As with prototyping, the increments provide feedback from the client, which will be useful for implementing the final system and helpful for the client in stating the final requirements.
Here a project control list is created. It contains all tasks to be performed to obtain the final
implementation and the order in which each task is to be carried out. Each step consists of removing the next task from the list; designing, coding and testing it; analyzing the partial system obtained after the step; and updating the list based on the analysis. These three phases are called the design phase, the implementation phase and the analysis phase. The process is iterated until the project control list
becomes empty. At this moment, the final implementation of the system will be available.
The first version contains some capability. Based on the feedback from the users and experience with
the current version, a list of additional features is generated. And then more features are added to the
next versions. This type of process model will be helpful only when the system development can be
broken down into stages.
Disadvantage:
This approach will work only if successive increments can actually be put into operation.
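A rough sketch (hypothetical task names, not from the text) of the project-control-list loop described above:

project_control_list = ["core booking function", "payment handling", "reporting"]

def design(task): print("designing:", task)
def implement_and_test(task): print("implementing and testing:", task)
def analyse_partial_system(task):
    print("analysing partial system after:", task)
    return []   # feedback from the analysis may add new tasks to the list

while project_control_list:
    task = project_control_list.pop(0)                          # take the next task
    design(task)                                                # design phase
    implement_and_test(task)                                    # implementation phase
    project_control_list.extend(analyse_partial_system(task))   # analysis phase
# when the list becomes empty, the final implementation is available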
Unit-2
1. Explain the characteristics of an SRS.
A good SRS is:
1. Correct & Complete
2. Unambiguous
3. Verifiable
4. Consistent
5. Ranked for importance and/or stability
6. Modifiable
7. Traceable
A SRS is correct if every requirement included in SRS represents something required in the final system.
An SRS is complete if everything the software is supposed to do and the responses of the software to all
classes of input data are specified in the SRS. Completeness and correctness go hand-in-hand.
An SRS is unambiguous if and only if every requirement stated in it has one and only one interpretation. Requirements are often written in natural language, which is inherently ambiguous. If the requirements are specified using natural language, the SRS writer should ensure that there is no ambiguity. One way to avoid ambiguity is to use some formal requirement specification language. The major disadvantage of using formal languages is the large effort needed to write the SRS and the increased difficulty in understanding formally stated requirements, especially for clients.
An SRS is verifiable if and only if every stated requirement is verifiable. A requirement is verifiable if there exists some cost-effective process that can check whether the final software meets that requirement. Unambiguity is essential for verifiability. Verification of requirements is often done
through reviews.
A SRS is consistent if there is no requirement that conflicts with another. This can be explained with the
help of an example: suppose that there is a requirement stating that process A occurs before process B.
But another requirement states that process B starts before process A. This is the situation of
inconsistency. Inconsistencies in SRS can be a reflection of some major problems.
Generally, all the requirements for software need not be of equal importance. Some are critical. Others
are important but not critical. An SRS is ranked for importance and/or stability if for each requirement
the importance and the stability of the requirement are indicated. Stability of a requirement reflects the
chances of it being changed. Writing SRS is an iterative process.
An SRS is modifiable if its structure and style are such that any necessary change can be made easily
while preserving completeness and consistency. Presence of redundancy is a major difficulty to
modifiability, as it can easily lead to errors. For example, assume that a requirement is stated in two places and that the requirement later needs to be changed. If only one occurrence of the requirement is
modified, the resulting SRS will be inconsistent.
An SRS is traceable if the origin of each requirement is clear and if it facilitates the referencing of each
requirement in future development. Forward traceability means that each requirement should be
traceable to some design and code elements. Backward traceability requires that it is possible to trace
the design and code element to the requirements they support.
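A small illustration (hypothetical requirement and element names, not from the text) of forward and backward traceability as simple mappings:

# Forward traceability: each requirement -> the design/code elements implementing it.
forward = {
    "REQ-1": ["module_login", "table_users"],
    "REQ-2": ["module_report"],
}

# Backward traceability: each design/code element -> the requirements it supports.
backward = {}
for req, elements in forward.items():
    for element in elements:
        backward.setdefault(element, []).append(req)

print(backward["module_login"])   # -> ['REQ-1']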
needed by a procedure, often the entire record is passed, rather than just passing that field of the
record. By passing the record we are increasing the coupling unnecessarily. Essentially, we should keep
the interface of module as simple and small as possible.
The type of information flow along the interfaces is the third major factor affecting coupling. Two kinds of information can flow along an interface: data and control. Passing or receiving control
information indicates that action of a module will depend on this control information, which makes it
more difficult to understand the module and provide its abstraction. Transfer of data information
means that a module passes as input some data to another module and gets in return some data as
output. This allows a module to be treated as a simple input output function that performs some
transformation on the input data to produce the output data. Interfaces with only data communication
result in the lowest degree of coupling, followed by interfaces that only transfer control data. Coupling is considered highest when the data is hybrid (i.e., some data items and some control items are passed between modules).
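A small illustration (hypothetical functions and data, not from the text) of these coupling levels: passing only the needed field (data coupling), passing the whole record, and passing a control flag.

employee = {"id": 17, "name": "A. Rao", "basic_pay": 30000, "dept": "QA"}

# Higher coupling: the whole record is passed although only one field is needed.
def bonus_from_record(emp_record):
    return emp_record["basic_pay"] * 0.1

# Lower (data) coupling: only the needed field crosses the interface.
def bonus_from_pay(basic_pay):
    return basic_pay * 0.1

# Control coupling: a flag tells the callee what to do, so the caller must know
# about the callee's internal logic.
def pay_report(basic_pay, mode_flag):
    if mode_flag == "monthly":
        return basic_pay
    return basic_pay * 12

print(bonus_from_pay(employee["basic_pay"]))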
freedom in creating a DFD that will solve the problem stated in the SRS. So this deals with developing a model for an eventual system. That is, the DFD during design represents how the data will flow in
the system when it is built. In this stage, the major transforms or functions in the software are decided
and the DFD shows the major transforms that the software will have and how the data will flow through
different transforms.
The general rules of drawing a DFD remain the same. As an example, consider the problem of determining the number of different words in an input file. The data flow diagram for this problem is shown in Figure 4.4. This problem has only one input data stream, the input file, while the desired output is the count of different words in the file. To transform the input to the desired output, the first thing we do is form a list of all the words in the file. Then we sort the list, as this will make identifying different words easier. This sorted list is then used to count the number of different words, and the output of that transform is the desired count, which is then printed. This sequence of data transformations is what we have in the data flow diagram.
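A minimal sketch of the same sequence of transforms in code (the file name is hypothetical):

def get_word_list(filename):
    with open(filename) as f:
        return f.read().split()          # transform: input file -> word list

def count_different_words(sorted_words):
    count, previous = 0, None
    for word in sorted_words:            # in a sorted list, comparing with the
        if word != previous:             # previous word is enough to spot a new one
            count += 1
            previous = word
    return count

words = get_word_list("input.txt")
print(count_different_words(sorted(words)))   # transform: sorted list -> count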
data flow diagram and traveling toward the inputs. These are the data elements that are most removed
from the actual outputs but can still be considered outgoing. The MAO data elements may also be
considered the logical output data items.
There will usually be some transforms left between the most abstract input and output data items.
These central transforms perform the basic transformation for the system, taking the most abstract
input and transforming it into the most abstract output. By focusing on the central transforms, the modules implementing these transforms can concentrate on performing the transformation without being concerned with converting the data into the proper format, validating the data, and so forth.
Consider the data flow diagram shown in Figure 4.4. The arcs in the data flow diagram are the most
abstract input and most abstract output. The choice of the most abstract input is obvious. We start
following the input. First, the input file is converted into a word list, which is essentially the input in a
different form. The sorted word list is still basically the input, as it is still the same list, in a different
order. This appears to be the most abstract input because the next data (i.e., count) is not just another
form of the input data. The choice of the most abstract output is even more obvious; count is the
natural choice (a data that is a form of input will not usually be a candidate for the most abstract
output). Thus we have one central transform, count-the-number-of-different-words, which has one
input and one output data item.
First-Level Factoring
Having identified the central transforms and the most abstract input and output data items, we are
ready to identify some modules for the system. Initially we specify a main module, whose purpose is to
invoke the subordinates. The main module is therefore a coordinate module. For each of the most
abstract input data items, an immediate subordinate module to the main module is specified. Each of
these modules is an input module, whose purpose is to deliver to the main module the most abstract
data item for which it is created. Similarly, for each most abstract output data item, a subordinate module that is an output module, which accepts data from the main module, is specified. Each of the arrows connecting these input and output subordinate modules is labeled with the respective abstract
data item flowing in the proper direction.
Finally, for each central transform, a module subordinate to the main is specified. These modules will be
transform modules, whose purpose is to accept data from the main module, and then return the
appropriate data back to the main module. The data items coming to a transform module from the main
module are on the incoming arcs of the corresponding transform in the data flow diagram. The data
items returned are on the outgoing arcs of that transform. Note that here a module is created for a
transform, while input/output modules are created for data items. The structure after the first-level
factoring of the word-counting problem is shown in the above figure.
In the above example, there is one input module, which returns the sorted word list to the main
module. The output module takes from the main module the value of the count. There is only one
central transform in this example, and a module is drawn for that. Note that the data items traveling to
and from this transformation module are the same as the data items going in and out of the central
transform. The main module is the overall control module, which will form the main program or procedure in the implementation of the design. It is a coordinate module that invokes the input
modules to get the most abstract data items, passes these to the appropriate transform modules, and
delivers the results of the transform modules to other transform modules until the most abstract data
items are obtained. These are then passed to the output modules.
Factoring the Input, Output, and Transform Branches
The first-level factoring results in a very high-level structure, where each subordinate module has a lot
of processing to do. To simplify these modules, they must be factored into subordinate modules that
will distribute the work of a module. Each of the input, output, and transformation modules must be
considered for factoring.
The purpose of an input module, as viewed by the main program, is to produce some data. To factor an input module, the transform in the data flow diagram that produced the data item is now treated as a central transform. The process performed for the first-level factoring is repeated here with this new central
transform, with the input module being considered the main module. A subordinate input module is
created for each input data stream coming into the new central transform, and a subordinate transform
module is created for the new central transform. The new input modules now created can then be
factored again, until the physical inputs are reached. Factoring of input modules will usually not yield
any output subordinate modules.
[Figure 4.6/4.7: factored structure charts — the input module get-sorted-list is factored into getting a word, adding it to the word list and sorting the list; the central transform count-the-number-of-different-words is factored into get a word, same as previous?, and increment count.]
The factoring of the input module get-sorted-list in the first-level structure is shown in Figure 4.6. The
transform producing the input returned by this module (i.e., the sort transform) is treated as a central
transform. Its input is the word list. Thus, in the first factoring we have an input module to get the list
and a transform module to sort the list. The input module can be factored further, as the module needs
to perform two functions, getting a word and then adding it to the list. Note that the looping arrow is
used to show the iteration.
The factoring of the output modules is symmetrical to the factoring of the input modules. For an output
module we look at the next transform to be applied to the output to bring it closer to the ultimate
desired output. This now becomes the central transform, and an output module is created for each data
stream. During the factoring of output modules, there will be no input modules.
Factoring the central transform is essentially an exercise in functional decomposition and will depend on the designers' experience and judgment. One way to factor a transform module is to treat it as a problem in its own right and start with a data flow diagram for it. The inputs to the DFD are the data coming into the module and the outputs are the data being returned by the module. Each transform in this DFD represents a sub-transform of this transform. The factoring of the central transform count-the-number-of-different-words is shown in Figure 4.7.
This was a relatively simple transform, and it was not necessary to draw a data flow diagram for it. To determine the number of words, we have to get a word repeatedly, determine if it is the same as the previous word (for a sorted list, this check is sufficient to determine if the word is different from the other words), and then count the word if it is different. For each of these three functions we have a subordinate module, and we get the structure shown in Figure 4.7.
It should be clear that the structure that is obtained depends a good deal on what the most abstract inputs and most abstract outputs are. And as mentioned earlier, this is based on good judgment. Although the judgment may vary among designers, its effect is minimal: a bubble that appears as a transform module at one level may simply appear as a transform module at another level.
In a DFD, data flows are identified by unique names. These names are chosen so that they convey some
meaning about what the data is. However, the precise structure of the data flows is not specified in a
DFD. The data dictionary is a repository of various data flows defined in a DFD. Data dictionary states
the structure of each data flow in the DFD. To define data structure, different notations are used. A
composition is represented by +, selection is represented by / (i.e., an either-or relationship), and repetition may be represented by *. An example of a data dictionary is given below:
Weekly timesheet = Employee_name + Employee_id + [Regular_hrs + Overtime_hrs]*
Pay_rate = [Hourly_pay / Daily_pay / Weekly_pay]
Employee_name = Last_name + First_name + Middle_name
Employee_id = digit + digit + digit + digit
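One possible way (assumed field names, not from the text) to express these data-dictionary entries as program data structures, where + becomes fields of a record, [a/b/c] becomes alternatives, and * becomes a repeated list:

from dataclasses import dataclass
from typing import List

@dataclass
class EmployeeName:          # Employee_name = Last_name + First_name + Middle_name
    last_name: str
    first_name: str
    middle_name: str

@dataclass
class HoursEntry:            # one element of [Regular_hrs + Overtime_hrs]*
    regular_hrs: float
    overtime_hrs: float

@dataclass
class WeeklyTimesheet:       # Weekly timesheet = name + id + repeated hours entries
    employee_name: EmployeeName
    employee_id: str         # four digits, e.g. "0042"
    hours: List[HoursEntry]

sheet = WeeklyTimesheet(EmployeeName("Rao", "Anita", "K"), "0042",
                        [HoursEntry(40, 2), HoursEntry(38, 0)])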
Most of the data flows in the DFD are specified here. Once we have constructed a DFD and associated
data dictionary, we have to verify that they are correct. There is no specific method to do so, but the data dictionary and the DFD are examined against each other: the data defined in the data dictionary should appear somewhere in the DFD and vice versa. Some common errors in DFDs are listed below:
1. Unlabelled Data flows
2. Missing data flows (information required by a process is not available)
3. Extraneous data flows; some information is not being used in any process.
4. Consistency not maintained during refinement.
5. Missing Process
6. Contains some control information.
resemblance to the problem structure.
Functional cohesion is the strongest cohesion. Functional cohesion is when parts of a module are
grouped because they all contribute to a single well-defined task of the module. In a functionally bound
module, all the elements of the module are related to performing a single function. By function, we do
not mean simply mathematical functions; modules accomplishing a single goal are also included.
Functions like "compute square root" and "sort the array" are clear examples of functionally cohesive
modules.
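A small illustration (hypothetical modules, not from the text) contrasting functional cohesion with temporal cohesion, which the tests below also mention:

def sort_array(items):
    """Functionally cohesive: 'sort the array' — a single, well-defined task."""
    return sorted(items)

def initialize_system():
    """Temporally cohesive: actions grouped only because they all run at start-up."""
    open_log_file()
    reset_counters()
    load_configuration()

def open_log_file(): pass
def reset_counters(): pass
def load_configuration(): pass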
How to determine the cohesion level of a module? There is no mathematical formula that can be used.
We have to use our judgment for this. A useful technique for determining if a module has functional
cohesion is to write a sentence that describes, fully and accurately, the function or purpose of the module. The following tests can then be made:
1. If the sentence is a compound sentence, or if it contains more than one verb, the module is probably performing more than one function, and it probably has sequential or communicational cohesion.
2. If the sentence contains words relating to time, like "first," "next," "when" and "after", the module probably has sequential or temporal cohesion.
3. If the predicate of the sentence does not contain a single specific object following the verb (such as "edit all data"), the module probably has logical cohesion.
4. Words like "initialize" and "cleanup" imply temporal cohesion.
Modules with functional cohesion can always be described by a simple sentence. However, if a description is a compound sentence, it does not necessarily mean that the module lacks functional cohesion. But if we cannot describe it using a simple sentence, the module is not likely to have functional cohesion.
………….
}
Usually procedural information is not represented in a structure chart, and the focus is on representing
the hierarchy of modules. However there are some situations where the designer may wish to
communicate certain procedural information explicitly, like major loops and decisions. Such information
can also be in a structure chart. A loop can be represented by a looping arrow. In Figure given below,
module A calls module C and D repeatedly. All the subordinate modules activated within a common
loop are enclosed in the same looping arrow.
Major decisions can be represented similarly. For example, if the invocation of modules C and D in
module A depends on the outcome of some decision, that is represented by a small diamond in the box
for A, with the arrows joining C and D coming out of this diamond, as shown in the figure.
[Figure: two structure charts of module A with subordinates B, C and D — in the first, C and D are enclosed in a looping arrow; in the second, the arrows to C and D come out of a decision diamond in A.]
Modules in a system can be categorized into a few classes. There are some modules that obtain information from their subordinates and then pass it to their superordinate. This kind of module is an input module. Similarly, there are output modules that take information from their superordinate and pass it on to their subordinates. As the names suggest, the input and output modules are typically used for
input and output of data. The input modules get the data from the sources and get it ready to be
processed, and the output modules take the output produced and prepare it for proper presentation to
the environment.
Then there are modules that exist solely for the sake of transforming data into some other form. Such a
module is called a transform module. Most of the computational modules typically fall in this category.
Finally, there are modules whose primary concern is managing the flow of data to and from different
subordinates. Such modules are called coordinate modules. The structure chart representation of the
different types of modules is shown in Figure 4.3. A module can perform functions of more than one
type of module.
The composite module in the figure is an input module from the point of view of its superordinate, as it feeds the data Y to the superordinate. Internally, it is a coordinate module that views its job as getting data X from one subordinate and passing it to another subordinate, which converts it to Y. A structure chart is very useful while creating the design. It shows the modules and their call hierarchy, the interfaces between the modules, and what information passes between the modules. A designer can make effective use of structure charts to represent the models he is creating while designing. However, it is not very useful for representing the final design, as it does not give all the information needed about the design. For example, it does not specify the scope, the structure of data, the specification of each module, etc. Hence it is generally supplemented with a textual specification to convey the design to the implementer.
[Figure 4.3: structure chart notation for the different types of modules — an input module (delivers data to its superordinate), an output module (receives data from its superordinate), a transform module (takes x and returns y), a coordinate module, and a composite module combining these roles.]
8. Explain the components of SRS.
Components of SRS
The basic issues an SRS must address are:
1. Functional Requirements: These specify which output should be produced from the given inputs.
They describe the relationship between the input and output of a system. All operations to be
performed on the input data to obtain the output should be specified. This includes specifying the
validity checks on the input and output data. An important part of the specification is the system behavior in abnormal situations, like invalid inputs or errors during computation. The requirements must clearly state what the system should do if such situations occur: the behavior of the system for invalid inputs and invalid outputs, along with the behavior where the input is valid but the normal operation cannot be performed, should be specified. E.g., in an airline reservation system, a reservation cannot be made even for a valid passenger if the airplane is fully booked. In short, the system behavior for all foreseen inputs and for all foreseen system states should be specified.
2. Performance Requirements: This part of the SRS specifies the performance constraints on the
software system. There are two types of performance requirements: static and dynamic. Static requirements do not impose constraints on the execution characteristics of the system. These include requirements like the number of terminals to be supported, the number of simultaneous operations to be supported, etc. These are also called the capacity requirements of the system.
Dynamic requirements specify constraints on the execution behavior of the system. These typically
include response time and throughput constraints on the system. Acceptable ranges of the different
performance parameters should be specified along with acceptable performance for both normal &
peak workload conditions. All these requirements must be stated in measurable terms, e.g., “the response time of X is less than one second 98% of the time”.
3. Design Constraints: There are a number of factors in the client’s environment that may restrict the
choices of the designer. Such factors include some standards that must be followed, resource limits,
operating environment, reliability and security requirements which may have some impact on the
design of the system. An SRS should identify and specify all such constraints.
Standard Compliance: This specifies the requirements for the standards the system must follow.
The standards may include the report format and accounting procedures. It can also include certain
changes or operations that must be recorded in an audit file.
Hardware Limitations: The software may have to operate on some existing or predetermined hardware, thus imposing restrictions on the design. This can include the type of machines to be
used, operating systems available, languages supported and limits on primary and secondary
storage.
Reliability and Fault Tolerance: These requirements can place major constraints on how the system
is to be designed. Fault tolerance requirements make the system more complex. Requirements on the system behavior in the face of certain kinds of faults are to be specified. Recovery requirements deal with the system behavior in case of failure.
Security: These requirements place restrictions on the use of certain commands, control access to data, provide different kinds of access requirements for different people, require the use of passwords and cryptography techniques, and require that a log of system activity be maintained.
4. External Interface Requirements: All the possible interactions of the software with the people,
hardware and other software should be clearly specified. User interface should be user friendly. To
create user friendly interface one can use GUI tools. A preliminary user manual should be created
with all user commands, screen formats, feedback and error messages, an explanation of how the system will appear to the user, etc. Like other specifications, these should also be precise and verifiable, e.g., “commands should reflect the function they perform”.
For hardware interface requirements, SRS should specify logical characteristics of each interface
between the software product and hardware components.
The interface requirement should specify the interface with other software the system will use or
that will use the system.
9. Explain the activities of requirement process with a proper diagram
The requirement process is the sequence of activities that need to be performed in the requirement phase. There are three basic activities in requirement analysis. They are:
1. Problem analysis or requirement analysis.
2. Requirement specification.
3. Requirement validation.
[Fig. 3.1: the requirement process — client/user needs feed problem analysis, analysis leads to a product description, specification produces the SRS, and validation yields the validated SRS.]
Problem Analysis
Problem analysis is initiated with some general statement of needs. It often starts with a high-level “problem statement”. The client is the originator of these needs. During analysis, the system behavior, constraints on the system, its inputs, and its outputs are analyzed. The basic purpose of this activity is to obtain a thorough understanding of what the software needs to provide.
The requirement specification clearly specifies the requirements in the form of a document. Properly organizing and describing the requirements is an important goal of this activity. Requirements validation focuses on ensuring that what has been specified in the SRS actually reflects the client's needs, and on making sure that the SRS is of good quality. The final activity focuses on validation of the collected requirements. The requirement process terminates with the production of the validated SRS.
Though it seems that the requirement process is a linear sequence of these activities, in reality it is not
so. In reality, there is considerable overlap and feedback between these activities. So, some
parts of the system are analyzed and then specified while the analysis of some other parts is going on. If
validation activities reveal some problem, for a part of the system, analysis and specifications are
conducted again.
The requirement process is represented diagrammatically in figure (a). As shown in the figure, from
specification activity we may go back to the analysis activity. This happens because proper specification is not possible without a clear understanding of the requirements. Once the specification is complete, it goes through the validation activity. This activity may reveal problems in the specification itself, which requires going back to the specification step, which in turn may reveal shortcomings in the understanding of the problem, which requires going back to the analysis activity.
During requirement analysis, the focus is on understanding the system and its requirements. For
complex systems, this is the most difficult task. Hence the concept “divide-and-conquer” i.e.,
decomposing the problem into sub-problems and then understanding the parts and their relationship is
inevitably applied to manage the complexity.
c. User Characteristics
d. General Constraints
e. Assumptions and Dependencies
3. Specific Requirements
a. External Interface Requirements
i. User Interfaces
ii. Hardware Interfaces
iii. Software Interfaces
iv. Communication Interfaces
b. Functional Requirements
i. Mode 1
1. Functional Requirement 1.1
…..
…..
Functional Requirement 1.n
ii. Mode m
1. Functional Requirement m.1
……
…….
c. Performance Requirements
d. Design Constraints
e. Attributes
f. Other Requirements
The introduction section contains the purpose, scope, overview, etc. of the requirements document. It
also contains the references cited in the document and any definitions that are used. Section 2
describes the general factors that affect the product and its requirements. Product perspective is essentially the relationship of the product to other products; it defines whether the product is independent or is a part of a larger product. A general, abstract description of the functions to be performed by the product is given, along with schematic diagrams showing a general view of the different functions and their relationship with each other. Similarly, characteristics of the eventual end user and general constraints
are also specified.
The specific requirements section describes all the details that the software developer needs to know
for designing and developing the system. This is the largest and most important part of the document.
One method to organize the specific requirements is to first specify the external interfaces, followed by
functional requirements, performance requirements, design constraints and system attributes.
The external interface requirements section specifies all the interfaces of the software: to people, other
software, hardware, and other systems. User interfaces are clearly a very important component; they
specify each human interface the system plans to have, including screen formats, contents of menus,
and command structure. In hardware interfaces, the logical characteristics of each interface between
the software and hardware on which the software can run are specified. In software interfaces, all other
software that is needed for this software to run is specified, along with the interfaces. Communication
interfaces need to be specified if the software communicates with other entities in other machines.
In the functional requirements section, the functional capabilities of the system are described. For each
functional requirement, the required inputs, desired outputs, and processing requirements will have to
be specified.
The performance section should specify both static and dynamic performance requirements.
The attributes section specifies some of the overall attributes that the system should have. Any
requirement not covered under these is listed under other requirements. Design constraints specify all
the constraints imposed on design.
followed blindly. The strategy requires the designer to exercise sound judgment and common sense.
The basic objective is to make the program structure reflect the problem as closely as possible. The
structure obtained earlier should be treated as an initial structure, which may get modified. Here we
mention some heuristics that can be used to modify the structure, if necessary.
Module size is often considered the indication of module complexity. In terms of the structure of the
system, modules that are very large may not be implementing a single function and can therefore be
broken into many modules, each implementing a different function. On the other hand, modules that
are too small may not require any additional identity and can be combined with other modules.
However, the decision to split a module or combine different modules should not be based on size
alone. Cohesion and coupling of modules should be the primary guiding factors. A module should be
split into separate modules only if the cohesion of the original module was low, the resulting modules
have a higher degree of cohesion, and the coupling between modules doesn’t increase. Similarly, two or
more modules should be combined only if the resulting module has a high degree of cohesion and the
coupling of the resulting module is not greater than the coupling of the sub-modules. Furthermore, a
module should not be split or combined with another module, if it is a subordinate to many other
modules. In general, a module should contain between 5 and 100 LOC; a size above 100 or below 5 LOC is not desirable.
Another factor to be considered is the “fan-in” and “fan-out” of modules. The fan-in of a module is the number of arrows coming towards the module, indicating the number of its superordinates. The fan-out of a module is the number of arrows going out of that module, indicating the number of subordinates of that module. A very high fan-out is not desirable, as it means that the module has to control and coordinate too many modules. Whenever possible, fan-in should be maximized. In general, the fan-out should not be more than 6.
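A small sketch (hypothetical module names, not from the text) that computes fan-in and fan-out from a structure chart represented as a calls mapping, so the heuristics above can be checked:

calls = {
    "main":   ["get_input", "process", "print_report"],
    "process": ["validate", "compute"],
    "print_report": ["format_line"],
}

fan_out = {m: len(subs) for m, subs in calls.items()}   # subordinates per module

fan_in = {}                                             # superordinates per module
for superordinate, subordinates in calls.items():
    for sub in subordinates:
        fan_in[sub] = fan_in.get(sub, 0) + 1

print(fan_out["main"])      # 3 subordinates
print(fan_in["compute"])    # 1 superordinate

high_fan_out = [m for m, n in fan_out.items() if n > 6]   # candidates to restructure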
Another important factor that should be considered is the correlation of the scope of effect and scope
of control. The scope of effect of a decision (in a module) is the collection of all the modules that contain any processing that is conditional on that decision or whose invocation is dependent on the outcome of the decision. The scope of control of a module is the module itself and all its subordinates (not just the immediate subordinates). The system is usually simpler when the scope of effect of a decision is a subset of the scope of control of the module in which the decision is located. Ideally, the scope of effect should be limited to the modules that are immediate subordinates of the module in which the decision is located. Violation of this rule usually results in a higher degree of coupling between modules.
Unit-3
1. Explain PDL with suitable examples.
One method to present a design is to specify it in a natural language like English. This sometimes leads to misunderstanding, and such imprecise communication is not of much use when converting the design into code. The other extreme is to communicate it precisely in a formal language like a programming language. This type of representation is precise but is not well suited for communicating the design. PDL strikes a middle ground: it is as precise and unambiguous as possible without having too much detail, and it can be easily converted into the required implementation. It is related to pseudocode; but unlike pseudocode, it is written in plain language, without any terms that could suggest the use of a particular programming language or library.
PDL has an overall outer syntax of a structured programming language and contains a vocabulary of a
natural language (English in our case). It can be thought of as "Structured English". Because the
structure of a design expressed in PDL is formal, (using the formal language constructs), automated
processing can be done to some extent on such designs. E.g., the problem of finding the minimum and maximum of a set of numbers in a file and outputting these numbers can be expressed in PDL as shown below:
minmax(infile)
  ARRAY z
  DO UNTIL end of input
    Read an item into z
  ENDDO
  max, min := first item of z
  DO FOR each item in z
    IF max < item THEN set max to item
    IF min > item THEN set min to item
  ENDDO
END
Notice that in the PDL program we have the entire logic of the procedure, but little about the details of
implementation in a particular language. To implement this in a language, each of the PDL statements
will have to be converted into programming language statements.
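One possible translation of the PDL above into a programming language (Python here; the file name is hypothetical). Each PDL statement maps to one or more concrete statements:

def minmax(infile):
    z = []                                  # ARRAY z
    with open(infile) as f:
        for line in f:                      # DO UNTIL end of input
            z.append(float(line))           #   Read an item into z
    maximum = minimum = z[0]                # max, min := first item of z
    for item in z:                          # DO FOR each item in z
        if maximum < item:                  #   IF max < item THEN set max to item
            maximum = item
        if minimum > item:                  #   IF min > item THEN set min to item
            minimum = item
    return minimum, maximum

# print(minmax("numbers.txt"))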
With PDL, a design can be expressed in whatever level of detail that is suitable for the problem. One
way to use PDL is to first generate a rough outline of the entire solution at a given level of detail. When
the design is agreed on at this level, more detail can be added. This allows a successive refinement
approach, and can save considerable cost by detecting the design errors early during the design phase.
It also aids design verification by phases, which helps in developing error-free designs. The structured
outer syntax of PDL also encourages the use of structured language constructs while implementing the
design. The basic constructs of PDL are similar to those of a structured language.
PDL provides an IF construct which is similar to the if-then-else construct of Pascal. Conditions and the statements to be executed need not be stated in a formal language. For a general selection, there is a CASE statement. Some examples of its use are:
CASE of transaction type
CASE of operator type
The DO construct is used to indicate repetition. The construct is indicated by:
DO iteration-criteria
one or more statements
ENDDO
The iteration criteria can be chosen to suit the problem, and unlike a formal programming language,
they need not be formally stated. Examples of valid uses are:
DO WHILE there are characters in input file
DO UNTIL the end of file is reached
DO FOR EACH item in the list EXCEPT when the item is ZERO.
A variety of data structures can be defined and used in PDL, such as lists, tables, scalars, and integers.
Variations of PDL, along with some automated support, are used extensively for communicating
designs.
The goal of structured programming is to ensure that the static structure and the dynamic structure are the same. The objective of structured programming is to write programs so that the sequence of statements executed during the execution of a program is the same as the sequence of statements in the text of that program. As the statements in a program text are linearly organized, the objective of
structured programming becomes developing programs whose control flow during execution is
linearized and follows the linear organization of the program text.
Clearly, no meaningful program can be written as a sequence of simple statements without any
branching or repetition. In structured programming, a statement is not a simple assignment statement; it is a structured statement. The key property of a structured statement is that it has a single entry and a single exit. That is, during execution, the execution of the (structured) statement starts from one defined point and terminates at another defined point. With single-entry and single-exit statements, we can view a program as a sequence of (structured) statements. And if all statements are structured statements, then during execution, the sequence of execution of these statements will be the same as the sequence in the program text. Hence, by using single-entry and single-exit statements, the correspondence between the static and dynamic structures can be obtained. The most commonly used single-entry and single-exit statements are:
Selection: if B then S1 else S2
           if B then S1
Iteration: While B do S
           Repeat S until B
Sequencing: S1; S2; S3; ...
It can be shown that these three basic constructs are sufficient to program any conceivable algorithm.
Modern languages have other such constructs that help linearize the control flow of a program, which
makes it easier to understand a program. Hence, programs should be written so that, as far as possible,
single-entry, single-exit control constructs are used.
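A small illustration (not from the text) using only the three constructs; every statement below has a single entry and a single exit, so the execution order follows the program text:

def classify(numbers):
    positives = 0                # sequencing: one statement after another
    negatives = 0
    for n in numbers:            # iteration: a loop with one entry and one exit
        if n >= 0:               # selection: if-then-else
            positives += 1
        else:
            negatives += 1
    return positives, negatives  # single exit point

print(classify([3, -1, 4, -1, 5]))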
The basic goal, as we have tried to emphasize, is to make the logic of the program simple to understand.
The basic objective of using structured constructs is to linearize the control flow so that the execution
behavior is easier to understand. In linearized control flow, if we understand the behavior of each of the
basic constructs properly, the behavior of the program can be considered a composition of the
behaviors of the different statements.
Overall, it can be said that structured programming, in general, leads to programs that are easier to understand than unstructured programs, and it is a safe approach for achieving this goal. An unstructured construct should be used only if its structured alternative is harder to understand.
Design Walkthroughs
A design walkthrough is an informal, manual method of design verification: the designer explains the design step by step to a small group of peers, and the members ask questions, point out possible errors or seek clarification. A beneficial side effect of walkthroughs is that in the process of articulating and explaining the design in detail, the designer himself can uncover some of the errors.
Walkthroughs are essentially a form of peer review. Due to their informal nature, they are usually not as effective as a formal design review.
Critical Design Review
The purpose of critical design review is to ensure that the detailed design satisfies the specifications laid
down during system design. It is very desirable to detect and remove design errors early, as the cost of
removing them later can be considerably more. Detecting errors in detailed design is the aim of critical
design review.
The critical design review process is similar to other reviews: a group of people get together to discuss the design with the aim of revealing design errors or undesirable properties. The review group includes, besides the author of the detailed design, a member of the system design team, the programmer responsible for ultimately coding the module(s) under review, and an independent software quality engineer. Each member studies the design beforehand and, with the aid of a checklist, marks items that the reviewer feels are incorrect or need clarification. During the meeting the members ask questions and the designer tries to explain the situation. Through this discussion, design errors are revealed.
As with any review, it should be kept in mind that the aim of the meeting is to uncover design errors,
not try to fix them. Fixing is done later. Also, the psychological frame of mind should be healthy, and the
designer should not be put in a defensive position. The meeting should end with a list of action items, to
be acted on later by the designer. The use of checklists, as with other reviews, is considered important
for the success of the review.
Consistency Checkers
Design reviews and walkthroughs are manual processes; the people involved in the review and
walkthrough determine the errors in the design. If the design is specified in PDL or some other formally
defined design language, it is possible to detect some design defects by using consistency checkers.
Consistency checkers are essentially compilers that take as input the design specified in a design
language (PDL). Clearly, they cannot produce executable code, because the inner syntax of PDL allows natural language and many activities are specified only informally, in natural language. However, the module interface specifications (which belong to the outer syntax) are specified formally.
A consistency checker can ensure that any modules invoked or used by a given module actually exist in
the design and that the interface used by the caller is consistent with the interface definition of the
called module. It can also check if the used global data items are defined globally in the design.
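As a rough analogy (the module names below are invented), the kind of interface inconsistency such a checker looks for is a call that does not match the declared interface of the called module; in C, the compiler plays a comparable role for formally specified interfaces:

#include <stdio.h>

/* Interface definition of the called module (names are illustrative). */
int update_balance(int account_id, double amount)
{
    printf("account %d credited %.2f\n", account_id, amount);
    return 0;
}

void post_transaction(void)
{
    update_balance(4711, 250.0);   /* consistent with the interface           */
    /* update_balance(4711);          inconsistent: wrong number of arguments;
                                      this is the kind of mismatch a
                                      consistency checker would flag           */
}

int main(void) { post_transaction(); return 0; }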
Once the problem is clearly understood, the next step is the development of a mathematical model for the problem. In modeling, one has to select the mathematical structures that are best suited to the problem.
The next step is the design of the algorithm. During this step the data structure and program structure
are decided. Once the algorithm is designed, correctness should be verified. No clear procedure can be
given for designing algorithms.
The most common method for designing algorithms or the logic for a module is to use the stepwise
refinement technique. This technique breaks the logic design problem into a series of steps, so that the
development can be done gradually. The process starts by converting the specifications of the module
into an abstract description of an algorithm containing a few abstract statements. In each step, one or
several statements in the algorithm developed so far are decomposed into more detailed instructions.
The refinement terminates when all instructions are sufficiently precise that they can easily be
converted into programming language statements. During refinement, both data and instructions have
to be refined.
The stepwise refinement technique is a top-down method for developing detailed design. To perform
the stepwise refinement, a language is needed to express the logic of a module at different levels of
detail, starting from the specifications of the module. The language should have enough flexibility to
accommodate different levels of precision. Programming languages, which lack this flexibility, cannot be used in this context. PDL is very suitable mainly because of the properties it holds. The outer syntax
of PDL ensures that the design being developed is a computer algorithm whose statements can later be
converted to statements of a programming language. Its flexible natural language-based inner syntax
acts as a plus point in this context.
An Example: Let us consider the problem of counting different words in a text file. Assume that the
COUNT module is specified whose job is to determine the count of different words. During detailed
design we have to determine the logic of this module so that the specifications are met. We will use the
stepwise refinement method for this purpose. For specification purposes, we will use PDL, adapted to C-style syntax. A simple strategy for the first step is given in Figure (a). The primitive operations used in this strategy are very high-level and need to be further refined (as shown in Figure (b)).
Specifically, there are three operations that need refinement. They are
1. read file into the word list, whose purpose is to read all the words from the file and create a word list,
2. sort(wl), which sorts the word list in ascending order, and
3. count different words from a sorted word list.
So far, only one data structure is defined: the word list. As refinement proceeds, more data structures
might be needed.
In the next refinement step, we should select one of the three operations to be refined-and further
elaborate it. In this step we will refine the reading procedure. One strategy of implementing the read
module is to read words and add them to the word list. This is shown in Figure (b). For the next
refinement step we select the counting function. A strategy for implementing this function is shown in
Figure (c). Similarly, we can refine the sort function. Once these refinements are done, we have a design that is sufficiently detailed and needs no further refinement. For more complex problems, many successive refinements might be needed for a single operation.
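Figures (a) and (b) are not reproduced in these notes. The following C-style PDL sketches are plausible reconstructions of what they contain (the exact names are assumptions), followed by the refinement of Figure (c):

/* Figure (a): first-step strategy for the COUNT module (sketch) */
int count (FILE file)
{
    word_list wl;
    read file into wl
    sort (wl);
    cnt = different_words (wl);
    print (cnt);
}

/* Figure (b): refinement of the read operation (sketch) */
read_from_file (FILE file, word_list wl)
{
    initialize wl to empty
    while not end of file
    {
        get a word from the file
        add the word to wl
    }
}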
int different_words (word_list wl)
{
    word last, cur;
    int cnt;
    last = first word in wl
    cnt = 1;
    while not end of list
    {
        cur = next word from wl
        if (cur != last)
        {
            cnt = cnt + 1;
            last = cur;
        }
    }
    return (cnt);
}
Figure (c). Refinement of the function different_words
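The function whose symbolic execution is being discussed is not shown in these notes; a minimal single-path C example of the kind described (the name and the use of a product are assumed from the surrounding text) is:

/* A single-path function: symbolic execution with symbolic inputs
   x, y, z yields the symbolic result x*y*z on the only path. */
int product(int x, int y, int z)
{
    int p;
    p = x * y;
    p = p * z;
    return p;
}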
Here there is only one path in the function, and this symbolic execution is equivalent to checking for all possible values of x, y, and z. (Note that the implied assumption is that input values are such that the machine will be able to perform the product and no overflow will occur.) Essentially, with only one path and an acceptable symbolic result, we can claim that the program is correct.
The different paths followed during symbolic execution can be represented by an "execution tree." A node in this tree represents the execution of a statement, while an arc represents the transition from one statement to another. For each if statement, there are two arcs from the node corresponding to the if statement, one labeled with T (true) and the other with F (false), for the then and else paths. At each branching, the path condition is also often shown in the tree.
Any information in the problem domain typically has only a small number of defined operations performed on it. When the information is represented as data structures, the same principle should be applied, and only some defined operations should be performed on the data structures. This is the principle of information hiding. The information captured in the data structures should be hidden from the rest of the system, and only the access functions on the data structures, which represent the operations performed on the information, should be visible. The other modules access the data only through these access functions. Information hiding reduces the coupling between modules and makes the system more maintainable. It is also an effective tool for managing the complexity of developing software. All object-oriented languages support the concept of information hiding.
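A minimal C sketch of the principle (the counter module is invented for illustration): the data is kept private to one file and the rest of the system sees only the access functions.

/* counter.c -- the representation is hidden inside this module */
static int count = 0;                 /* not visible to other modules */

void counter_reset(void)     { count = 0; }
void counter_increment(void) { count = count + 1; }
int  counter_value(void)     { return count; }

/* Other modules call only counter_reset, counter_increment and
   counter_value (declared in a header), so the representation of the
   counter can change without affecting any caller. */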
8. What are the activities that are undertaken during critical design review?
The purpose of critical design review is to ensure that the detailed design satisfies the specifications laid
down during system design. It is very desirable to detect and remove design errors early, as the cost of
removing them later can be considerably more. Detecting errors in detailed design is the aim of critical
design review.
The critical design review process is similar to other reviews: a group of people get together to discuss the design with the aim of revealing design errors or undesirable properties. The review group includes, besides the author of the detailed design, a member of the system design team, the programmer responsible for ultimately coding the module(s) under review, and an independent software quality engineer. Each member studies the design beforehand and, with the aid of a checklist, marks items that the reviewer feels are incorrect or need clarification. During the meeting the members ask questions and the designer tries to explain the situation. Through this discussion, design errors are revealed.
As with any review, it should be kept in mind that the aim of the meeting is to uncover design errors,
not try to fix them. Fixing is done later. Also, the psychological frame of mind should be healthy, and the
designer should not be put in a defensive position. The meeting should end with a list of action items, to
be acted on later by the designer. The use of checklists, as with other reviews, is considered important
for the success of the review.
A data flow anomaly occurs, for example, when we successively assign two values to a variable without using the earlier value at all, or when we use the value of a variable before assigning any value to it. Data flow anomalies are "suspicious" uses of data in a program. In general, data flow anomalies are technically not errors, and they may go undetected by the compiler. However, they are often a symptom of an error, caused by carelessness in typing or an error in coding. At the very least, the presence of data flow anomalies implies poor coding. Hence, if a program has data flow anomalies, they should be properly addressed.
x = a; x = b;   // x does not appear on any right-hand side (i.e., it is not used at all).
An example of the data flow anomaly is the live variable problem, in which a variable is assigned some
value but then the variable is not used in any later computation. Such an assignment to the variable is
clearly redundant. Another simple example of this is having two assignments to a variable without using
the value of the variable between the two assignments. In this case the first assignment is redundant.
For example, consider the simple case of the code segment given earlier. Clearly, the first assignment
statement is useless. Perhaps the programmer meant to say y = b in the second statement, and
mistyped y as x. In that case, detecting this anomaly and directing the programmer's attention to it can
save considerable effort in testing and debugging. In addition to revealing anomalies, data flow analysis
can provide valuable information for documentation of programs. For example, data flow analysis can
provide information about which variables are modified on invoking a procedure in the caller program
and the value of the variables used in the called procedure (this can also be used to make sure that the
interface of the procedure is minimum, resulting in lower coupling). This information can be useful
during maintenance to ensure that there are no undesirable side effects of some modifications to
a procedure.
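A slightly fuller C sketch of both kinds of anomalies (the names are invented; the code compiles, typically with at most a warning, yet both uses are suspicious):

int anomalies(int a, int b)
{
    int x, y;
    x = a;        /* first definition of x                              */
    x = b;        /* x is redefined without using the earlier value:
                     the first assignment is redundant                  */
    y = y + 1;    /* y is used before any value has been assigned to it */
    return x + y;
}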
In a top-down implementation, the implementation starts from the top of the hierarchy and proceeds
to the lower levels. First the main module is implemented, then its subordinates are implemented, and
their subordinates, and so on. In a bottom-up implementation, the process is the reverse. The
development starts with implementing the modules at the bottom of the hierarchy and proceeds
through the higher levels until it reaches the top. Top-down and bottom-up implementation should not be confused with top-down and bottom-up design. Here, the design is being implemented, and if the
design is fairly detailed and complete, its implementation can proceed in either the top-down or the
bottom-up manner, even if the design was produced in a top-down manner. Which of the two is used
mostly affects testing. All large systems must be built by assembling validated pieces together. The case
with software systems is the same. Parts of the system have to first be built and tested before putting
them together to form the system. Because parts have to be built and tested separately, the issue of
top-down versus bottom-up arises.
Unit-4
1. Explain dataflow based testing with suitable example.
In data flow-based testing, besides the control flow, information about where the variables are defined
and where the definitions are used is also used to specify the test cases. The basic idea behind data
flow-based testing is to make sure that during testing, the definitions of variables and their subsequent uses are exercised.
For data flow-based testing, a definition-use graph for the program is first constructed from the control flow graph of the program. A statement in a node of the flow graph (representing a block of code) has variable occurrences in it. A variable occurrence can be one of the following three types:
• Def represents the definition of a variable. Variables on the left-hand side of an assignment statement are the ones getting defined.
• C-use represents computational use of a variable. Any statement that uses the value of a variable for computational purposes is said to be making a c-use of the variable. In an assignment statement, all variables on the right-hand side have a c-use occurrence.
• P-use represents predicate use. These are all the occurrences of variables in a predicate, which is used for transfer of control.
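For example, in the C fragment below (invented for illustration), the occurrences of the variables can be classified as follows:

int classify(int a, int b)
{
    int x, y;
    x = a + b;      /* def of x; c-use of a and b       */
    if (x > 10)     /* p-use of x (controls the branch) */
        y = x * 2;  /* def of y; c-use of x             */
    else
        y = 0;      /* def of y                         */
    return y;       /* c-use of y                       */
}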
As a large program is continuously changed, its complexity, which reflects deteriorating structure,
increases unless work is done to maintain or reduce it.
The IEEE defined preventive maintenance as “maintenance performed for the purpose of preventing
problems before they occur”. This is the process of changing software to improve its future
maintainability or to provide a better basis for future enhancements.
The preventive change is usually initiated from within the maintenance organization, with the intention of making programs easier to understand and hence facilitating future maintenance work. Preventive change does not usually give rise to a substantial
increase in the baseline functionality.
Preventive maintenance is rare, the reason being that other pressures tend to push it to the end of the queue. For instance, a demand may come to develop a new system that will improve the organization's competitiveness in the market. This will likely be seen as more desirable than spending time and money
on a project that delivers no new function. Still, it is easy to see that if one considers the probability of a
software unit needing change and the time pressures that are often present when the change is
requested, it makes a lot of sense to anticipate change and to prepare accordingly.
• VuGen (Virtual User Generator) for generating and editing scripts.
• Controller for composing scenarios which specify which load generators are used for which script, and
for how long, etc. During runs the Controller receives real-time monitoring data and displays status.
• Analysis which assembles logs from various load generators and formats reports for visualization of run
result data and monitoring data.
Now let us consider control flow-based criteria. The simplest coverage criterion is statement coverage, which requires that each statement of the program be executed at least once during testing. This is also called the all-nodes criterion. This coverage criterion is not very strong and can leave errors undetected.
For example, if there is an if statement in the program without an else part, the statement coverage criterion for this statement will be satisfied by a test case that evaluates the condition to true. No test case is needed to ensure that the condition in the if statement also evaluates to false. This is a major problem, because decisions in programs are potential sources of errors.
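For instance, in the C fragment below (invented for illustration), the single test x = -5 executes every statement, yet the behaviour when the condition is false is never exercised:

int absolute(int x)
{
    if (x < 0)
        x = -x;   /* statement coverage is satisfied by x = -5 alone;  */
    return x;     /* no test forces the condition to evaluate to false */
}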
Another coverage criterion is branch coverage, which requires that each edge in the control flow graph be traversed at least once during testing. In other words, branch coverage requires that each decision in the program be evaluated to both true and false values at least once during testing.
Testing based on the branch coverage criterion is known as branch testing. A problem with branch coverage arises if a decision has many conditions in it (consisting of a Boolean expression with the Boolean operators "and" and "or"). In such a situation, the decision can be evaluated to both true and false without actually exercising all the conditions.
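For instance, in the sketch below (invented for illustration), the two tests (a = 1, b = 0) and (a = 0, b = 0) achieve branch coverage, but the case where b > 0 alone makes the decision true is never exercised, so an error in that condition can go undetected:

int eligible(int a, int b)
{
    if (a > 0 || b > 0)   /* decision with two conditions */
        return 1;         /* true for (a = 1, b = 0)      */
    return 0;             /* false for (a = 0, b = 0)     */
}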
It has been observed that there are many errors whose presence is not detected by branch testing. This
is because some errors are related to some combinations of branches and their presence is revealed by
an execution that follows the path that includes those branches. Hence a more general coverage
criterion which covers all the paths is required. This is called path coverage criterion and testing based
on this criterion is called path testing.
But the problem with this criterion is that programs containing loops can have an infinite number of possible paths. Some methods have been suggested to deal with this problem; one such method is to limit the number of paths that must be covered.
Domain Management: TestDirector projects are now grouped by domain. A domain contains a group of
related TestDirector projects, and assists you in organizing and managing a large number of projects.
Enhanced Reports and Graphs: Additional standard report types and graphs have been added, and the user interface is richer in functionality. The new format enables you to customize more features.
Version Control: Version control enables you to keep track of the changes you make to the testing
information in your TestDirector project. You can use your version control database for tracking manual,
WinRunner and QuickTest Professional tests in the test plan tree and test grid.
Collaboration Module: The Collaboration module, available to existing customers as an optional
upgrade, allows you to initiate an online chat session with another TestDirector user. While in a chat
session, users can share applications and make changes.
TestDirector Advanced Reports Add-in: With the new Advanced Reports Add-in, TestDirector users are
able to maximize the value of their testing project information by generating customizable status and
progress reports. The Advanced Reports Add-in offers the flexibility to create custom report
configurations and layouts, unlimited ways to aggregate and compare data, and the ability to generate
cross-project analysis reports.
Automatic Traceability Notification: The new traceability feature automatically traces changes to testing process entities, such as requirements or tests, and notifies the user via a flag or e-mail. For example, when a requirement changes, the associated test is flagged and the tester is notified that the test may need to be reviewed to reflect the requirement changes.
Coverage Analysis View in Requirements Module: The graphical display enables you to analyze the requirements according to test coverage status and view the associated tests, grouped according to test status.
Hierarchical Test Sets: Hierarchical test sets provide the ability to better organize your test run process
by grouping test sets into folders.
Workflow for all TestDirector Modules: The addition of the script editor to all modules enables
organizations to customize TestDirector to follow and enforce any methodology and best practices.
Improved Customization: With a greater number of available user fields and the ability to add memo fields and create input masks, users can customize their TestDirector projects to capture any data required by their testing process. A new rich edit option adds color and formatting options to all memo fields.
=========================================BYE=========================================