
2 Marks

1. Give the IEEE definition for software and software engineering.


IEEE defines Software as the collection of computer programs, procedures, rules and associated
documentation and data. This definition clearly states that, software is not just programs, but includes
all the associated documentation and data.
Software Engineering is a systematic approach to the development, operation, maintenance and
retirement of the software. There is another definition for s/w engineering, which states that “Software
engineering is an application of science and mathematics by which the capabilities of computer
equipment are made useful to man via computer programs, procedures and associated
documentation”.

2. What is Module? (Google)


A module consists of a single block of code that can be invoked in the way that a procedure, function, or
method is invoked. A module:
• encapsulates code and data to implement a particular functionality.
• has an interface that lets clients access its functionality in a uniform manner.
• is easily pluggable with another module that expects its interface.
• is usually packaged in a single unit so that it can be easily deployed.
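As a minimal illustration (a sketch with invented names, not from the syllabus), a Python module that has these properties:

    # stack.py -- a hypothetical module: the data (_items) is hidden,
    # and clients use only the uniform interface push/pop/peek.
    _items = []

    def push(value):
        # add a value to the top of the stack
        _items.append(value)

    def pop():
        # remove and return the top value
        return _items.pop()

    def peek():
        # return the top value without removing it
        return _items[-1]

A client uses only the interface (import stack; stack.push(10); stack.pop()), so any other module exposing the same interface could be plugged in instead.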

3. What are design walkthroughs?


A design walkthrough is a manual method of verification. A design walkthrough is done in an informal
meeting called by the designer or the leader of the designer's group. The walkthrough group is usually
small. It includes the designer, the group leader and/or another designer of the group. The designer
might just get together with a colleague for the walkthrough or the group leader might require the
designer to have the walkthrough with him.

4. What are Data source and sink? How to represent them in DFDs?
Rectangles represent a source or sink: a net originator or consumer of data.
(Google): A source represents any source of data that is identified as outside the boundary of the
process that the DFD is modeling. Similarly, a sink is any destination for data that is outside the
boundary of the process that the DFD is modeling.

5. What is Data abstraction?


Abstraction is a tool that permits a designer to consider a component at an abstract level without
worrying about the details of the implementation of the component. An abstraction of a component
describes the external behavior of that component without bothering with the internal details that
produce the behavior. The abstract definition of a component is much simpler than the component
itself.

6. Define most abstract input and most abstract output.


The most abstract input data elements (MAI) are those data elements in the data flow diagram that are
furthest removed from the physical inputs but can still be considered inputs to the system. The most
abstract input data elements often have little resemblance to the actual physical data. These are often
the data elements obtained after operations like error checking, data validation, proper formatting, and
conversion are complete. Similarly, we identify the most abstract output data elements (MAO) by
starting from the outputs in the data flow diagram and traveling toward the inputs. These are the data
elements that are most removed from the actual outputs but can still be considered outgoing. The MAO
data elements may also be considered the logical output data items.

7. Define test cases.


Test cases are required to find out the presence of faults in a system. Test cases are the inputs to the
testing process. In order to reveal the correct behavior of the system, it is necessary to have a large set
of valid test cases.

8. What do you mean by divide and conquer?
For complex tasks, the divide and conquer method is used: partition the problem into sub-problems
and then try to understand each sub-problem and its relationship to other sub-problems in an effort to
understand the whole problem. The question here is “partition with respect to what?” Generally, in
analysis, the partition is done with respect to object or function. Most analysis techniques view the
problem as consisting of objects or functions and aim to identify objects or functions and hierarchies
and relationships among them.

9. Define fault, error, failure. (Guaranteed Question)


Error - The term error refers to the discrepancy between a computed, observed or measured value and
the true, specified or theoretically correct value. So we can say that an error is the difference between
the actual output of the software and the correct output. Error is also used to refer to human actions
that result in software containing a defect or fault.
Fault - A fault is a condition that causes a system to fail in performing its required function. It is the basic
reason for software malfunction, and its meaning is similar to that of bug.
Failure - A failure is the inability of a system or component to perform a required function according to
its specification. The existence of a fault is a potential cause of software failure. A software failure occurs
if the behavior of the software is different from the specified behavior.

10. Differentiate between black box and white box testing. (Google)
• Black box testing is mostly done by software testers; white box testing is mostly done by software developers.
• Black box testing needs no knowledge of the implementation; white box testing requires knowledge of the implementation.
• Black box testing can be referred to as outer or external software testing; white box testing is inner or internal software testing.
• Black box testing is a functional test of the software; white box testing is a structural test of the software.

11. Mention any two important aspects of WinRunner


• WinRunner is a functional testing tool.
• WinRunner runs on Windows only.
• To automate manual tests, WinRunner uses TSL (Test Script Language, a C-like language).

12. Define SCM


SCM is the process of identifying and defining the items in the system, controlling the change of these
items throughout their life cycle, recording and reporting the status of items and change requests, and
verifying the completeness and correctness of these items. SCM is independent of the development
process. The development process handles normal changes, such as a change in code while the
programmer is developing it, or a change in the requirements while the analyst is gathering the information.

13. Differentiate between metrics and measurement.


Software metrics are quantifiable measures that can be used to measure certain characteristics of the
software. The quality of the software cannot be measured directly, because software has no physical
attributes. Metrics, measurements and models go together: metrics provide quantification of some
property, measurement provides the actual value for a metric, and models are needed to obtain values
for metrics that cannot be measured directly.

14. Define software process


A process is a particular method of doing something, generally involving several operations. In software
engineering, the software process refers to the method of developing software. A software process is a
set of activities, together with a proper ordering, used to build high-quality software at low cost and
with a short cycle time.

15. Define validation. (Google)
Validation is the process of checking whether the software product is up to the mark, i.e., whether it
meets its high-level requirements. It is the process of checking the validity of the product: it checks that
what we are developing is the right product, by comparing the actual product against the expected product.

16. Define black box testing. (Google)


Black box testing is a software testing method in which the internal structure/design/implementation
of the item being tested is NOT known to the tester. It is mainly applicable to higher levels of testing.
It is a testing approach used to test the software without knowledge of the internal structure of the
program or application.

17. What do you mean by test oracles?


To test any program, we need to have a description of its expected behavior and a method of
determining whether the observed behavior conforms to the expected behavior. For this we need a test
oracle. A test oracle is a mechanism, different from the program itself, that can be used to check the
correctness of the output of the program for different test cases.
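As an illustrative sketch (the function names are invented), Python's built-in sorted() can serve as the oracle for a sort routine under test:

    # The oracle is a mechanism, separate from the program under test,
    # that defines the expected output for a given test case.
    def program_under_test(items):
        return items                    # a deliberately faulty "sort"

    def oracle_check(test_input, observed):
        expected = sorted(test_input)   # trusted reference behavior
        return observed == expected

    data = [3, 1, 2]
    print(oracle_check(data, program_under_test(data)))  # False: failure revealed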

18. What is the use of load runner?


HP LoadRunner is an automated performance and test automation product from Hewlett-Packard for
application load testing: examining system behavior and performance while generating actual load.
HP acquired LoadRunner as part of its acquisition of Mercury Interactive in November 2006.
As a software testing tool, HP LoadRunner works by creating virtual users who take the place of real
users operating client software, such as Internet Explorer, sending requests using the HTTP protocol to
IIS or Apache web servers.

19. Expand KDLOC, SCM, SEPG


KDLOC - Thousands of Delivered Lines Of Code
SCM - Software Configuration Management
SEPG - Software Engineering Process Group

20. What is data Dictionary?


The data dictionary is a repository of the various data flows defined in a DFD. The data dictionary states
the structure of each data flow in the DFD. To define a data structure, different notations are used:
composition is represented by +, selection is represented by | (i.e., an either-or relationship), and
repetition is represented by *.
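For example (a hypothetical entry using this notation), a weekly timesheet data flow might be defined as:

    weekly_timesheet = employee_name + employee_id + (day_entry)*
    day_entry        = date + regular_hours + overtime_hours
    payment_mode     = cheque | bank_transfer | cash

Here + composes fields, ( )* marks a repeating group, and | marks a selection among alternatives.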

21. Define coupling.


Coupling is the degree of interdependence between software modules; a measure of how closely
connected two routines or modules are; the strength of the relationships between modules. Coupling is
usually contrasted with cohesion.

22. What is unit testing?


Unit testing is like regular testing, where programs are executed with some test cases, except that the
focus is on testing smaller programs or modules called units. A unit may be a function, a small collection
of functions, a class or a small collection of classes. Unit testing is essentially for verification of the code
produced during the coding phase. That is, the goal of this testing is to test the internal logic of the
modules.
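A minimal sketch using Python's unittest framework (the unit under test, word_count, is a made-up example):

    import unittest

    def word_count(text):
        # unit under test: counts whitespace-separated words
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_simple_sentence(self):
            self.assertEqual(word_count("the quick brown fox"), 4)

        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()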

23. What are the different types of metrics? When they are used? (Google)
Product metrics describe the characteristics of the product such as size, complexity, design features,
performance, and quality level.

Process metrics can be used to improve software development and maintenance. Examples include the
effectiveness of defect removal during development, the pattern of testing defect arrival, and the
response time of the fix process.
Project metrics describe the project characteristics and execution. Examples include the number of
software developers, the staffing pattern over the life cycle of the software, cost, schedule, and
productivity.

24. What is functional abstraction?


In functional abstraction, a module is specified by the function it performs. For example, a module to
compute the log of a value can be abstractly represented by the function log. Similarly, a module to sort
an input array can be represented by the specification of sorting. Functional abstraction is the basis of
partitioning in function-oriented approaches. That is, when the problem is being partitioned, the overall
transformation function for the system is partitioned into smaller functions that comprise the system
function. The decomposition of the system is in terms of functional modules.
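For instance (an illustrative sketch), a caller depends only on the functional specification of a sort module, not on its algorithm:

    # Functional abstraction: the module is known only by the function it
    # performs ("return the list in non-decreasing order"); whether it uses
    # quicksort or mergesort internally is hidden from every caller.
    def sort(values):
        return sorted(values)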

25. What is detailed design?


In a design document, a more detailed specification is given by explaining in natural language what a
module is supposed to do. These non-formal methods of specification can lead to problems during
coding, because the coder is often a different person from the designer. Even if the designer and the
coder are the same person, problems can occur, as the design can take a long time and the designer
may not remember precisely what the module is supposed to do.
Before the detailed design or code for a module can be developed, the specification of the concerned
module should be given precisely. After the specification, the internal logic for the module is developed,
which will implement the given specifications.

26. What is apache JMeter? why is it used?


Apache JMeter is an open-source Java application used for load testing and measuring the performance
of applications. The principle of JMeter is very simple: if you want to test, e.g., a SOAP interface layer, all
you basically need is the URL and the SOAP request. Starting with that, you can build your test plan, and
this can be as fancy as you want, using variables, counters, parameters, CSV files, loops, logs, etc. There
are almost no limits in designing your test and making it as maintainable as possible.
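For example, a test plan built in the JMeter GUI is typically run headless afterwards; a Python sketch of the standard non-GUI invocation (the file names are placeholders):

    import subprocess

    # -n: non-GUI mode, -t: test plan to run, -l: file to log results to
    subprocess.run(["jmeter", "-n", "-t", "soap_test_plan.jmx", "-l", "results.jtl"])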

27. Which model is used for developing software for automation of existing manual system and why?
Likely the waterfall model: when an existing manual system is being automated, the requirements can be understood well by studying that system, whereas prototyping is mainly useful when there is no manual process or existing system from which to determine the requirements.

Unit-1
1. Briefly explain the software engineering problems.
Software Engineering is a systematic approach to the development, operation, maintenance and
retirement of the software. There is another definition for s/w engineering, which states that “Software
engineering is an application of science and mathematics by which the capabilities of computer
equipment are made useful to man via computer programs, procedures and associated
documentation”.
Problem of Scale
A common factor that software engineering must deal with is the issue of scale. Development of a very
large scale system requires a very different set of methods compared to developing a small system; i.e.
methods that are used for developing small systems generally do not scale up to large systems. For
example: consider the problem of counting people in a room versus taking the census of a country. Both
are counting problems, but the methods used are totally different. Likewise, a different set of methods
has to be used for developing large software.
Any large project involves the use of technology and project management. In small projects, informal
methods for development and management can be used. However, for large projects both have to be
much more formal. When dealing with a small software project, the technology and project management
requirements are low. However, when the scale changes to larger systems, we have to follow formal
methods. For example, if we have 50 bright programmers without formal management and
development procedures and ask them to develop a large project, they will very likely produce nothing of use.
(A project is small if its size is less than 10 KLOC, medium if less than 100 KLOC, large if less
than one million LOC, and very large if the size is many million LOC. For example: Python - 200 KLOC,
Apache - 100 KLOC, Red Hat Linux - 30,000 KLOC, Windows XP - 40,000 KLOC.)
Cost, Schedule and Quality
The cost of developing a system is the cost of the resources used for the system, which in the case of
software are manpower, hardware, software, and other support resources. The manpower component
is predominant, as software development is highly labor-intensive.
Schedule is an important factor in many projects. For some business systems, it is required to build the
software within a short cycle time. Developing methods that produce high-quality software is another
fundamental goal of software engineering. Quality of a software product has three dimensions:
Product Operation, Product Transition and Product Revision.

Software Quality Attributes


Product operation deals with quality factors such as correctness, reliability and efficiency. Product
transition deals with quality factors such as portability and interoperability. Product revision deals with
aspects related to the modification of programs, including factors like maintainability and testability.
Correctness is the extent to which a program satisfies its specifications. Reliability is the property that
defines how well the software meets its requirements. Efficiency is a factor in all issues relating to the
execution of the software; it includes considerations such as response time, memory requirements and
throughput. Usability is the effort required to learn and operate the software properly.
Maintainability is the effort required to locate and fix errors in the programs. Testability is the effort
required to test and check whether a module performs the correct operation or not. Flexibility is the
effort required to modify an operational program (its functionality).
Portability is the effort required to transfer the software from one hardware configuration to another.
Reusability is the extent to which parts of the software can be used in other related applications.
Interoperability is the effort required to couple the system with other systems.
The Problem of Consistency
For an organization, there is another goal: consistency. An organization involved in software
development does not just want low cost and high quality for one project, but wants them consistently.
Consistency of performance is an important factor for any organization; it allows an organization to
predict the outcome of a project with reasonable accuracy and to improve its processes to produce
higher-quality products. To achieve consistency, some standardized procedures must be followed.

2. Explain the waterfall model. Write the advantages, limitations of it.


Waterfall model is the simplest model which states that the phases are organized in a linear order. In
this model, a project begins with feasibility analysis. On successfully demonstrating the feasibility of a
project, the requirement analysis and project planning begins. The design starts after the requirement
analysis is complete, and the coding begins after the design is complete. Once the programming is
complete, the code is integrated and testing is done. On successful completion of testing, the system is
installed. After this, the regular operations and maintenance take place as shown in the figure.
Each phase begins soon after the completion of the previous phase. Verification and validation activities
are to be conducted to ensure that the output of a phase is consistent with the overall requirements of
the system. At the end of every phase there will be an output. The outputs of earlier phases can be
called work products, and they are in the form of documents like the requirement document and the
design document. The output of the project is not just the final program along with the user manuals, but also
the requirement document, design document, project plan, test plan and test results.
Project Outputs of the Waterfall Model
• Requirement document
• Project plan
• System design document
• Detailed design document
• Test plan and test report
• Final code
• Software manuals
• Review report
Reviews are formal meetings to uncover deficiencies in a product. The review reports are the outcomes
of these reviews.

Advantages (Google)
• It allows for departmentalization and managerial control.
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model – each phase has specific deliverables and a
review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• A schedule can be set with deadlines for each stage of development and a product can proceed
through the development process like a car in a car-wash, and theoretically, be delivered on time.

Limitations
• The waterfall model assumes that the requirements of a system can be frozen before the design
begins. It is difficult to state all the requirements before starting a project.
• Freezing the requirements usually requires choosing the hardware. A large project might take a
few years to complete. If the hardware is selected early, then due to the speed at which
hardware technology changes, it will be very difficult to accommodate technological changes.
• The waterfall model stipulates that the requirements be completely specified before the rest of
the development can proceed. In some situations, it might be desirable to produce a part of the
system first and then enhance the system later. This cannot be done if the waterfall model is used.
• It is a document-driven model which requires formal documents at the end of each phase. This
approach is not suitable for interactive applications.
• In an interesting analysis it was found that the linear nature of the life cycle leads to "blocking
states", in which some project team members have to wait for other team members to complete
dependent tasks. The time spent waiting can exceed the time spent in productive work.
• The client gets a feel for the software only at the end.

3. Explain the quality attributes of software engineering.


Product operation deals with quality factors such as correctness, reliability and efficiency. Product
transition deals with quality factors such as portability and interoperability. Product revision deals with
aspects related to the modification of programs, including factors like maintainability and testability.
• Correctness is the extent to which a program satisfies its specifications.
• Reliability is the property that defines how well the software meets its requirements.
• Efficiency is a factor in all issues relating to the execution of the software. It includes
considerations such as response time, memory requirements and throughput.
• Usability is the effort required to learn and operate the software properly.
• Maintainability is the effort required to locate and fix errors in the programs.
• Testability is the effort required to test and check whether a module performs the correct
operation or not.
• Flexibility is the effort required to modify an operational program (its functionality).
• Portability is the effort required to transfer the software from one hardware configuration to
another.
• Reusability is the extent to which parts of the software can be used in other related applications.
• Interoperability is the effort required to couple the system with other systems.

4. Explain prototyping model.


The goal of prototyping is to overcome the limitations of waterfall model. Here a throwaway prototype
is built to understand the requirements. This prototype is developed based on the currently known
requirements. Development of the prototype undergoes design, coding and testing, but each of these
phases is not done very thoroughly or formally. By using the prototype, the client can get actual feel of
the system because the interaction with the prototype can enable the client to better understand the
system. This results in more stable requirements that change less frequently. Prototyping is very much
useful if there is no manual process or existing systems which help to determine the requirements.
Initially, a preliminary version of the requirement specification document is developed, and the end-users
and clients are allowed to use the prototype. Based on their experience with the prototype, they
provide feedback to the developers regarding it and are allowed to suggest changes, if any. Based on
the feedback, the prototype is modified to incorporate the changes suggested, and the clients are again
allowed to use the prototype. This process is repeated until no further change is suggested.
This model is helpful when the customer is not able to state all the requirements. Because the
prototype is a throwaway, only minimal documentation is needed during prototyping. For example, the
design document, test plan, etc. are not needed for the prototype.

Problems: This model depends greatly on the effort required to build and improve the prototype, which
in turn depends on computer-aided prototyping tools. If prototyping is not efficient, too much effort
will be spent building the prototype.

5. Explain any three characteristics of software process.


The fundamental objectives of software processes are optimality and scalability. Optimality means that
the process must be able to produce high quality software at low cost and small cycle time. Scalability
means that, it should also be applicable for large software projects. To achieve these objectives, the
process must have some properties. Some characteristics of the software processes are listed below.
Predictability: Predictability of a process determines how accurately the outcome of following a process
in a project can be predicted before the project is completed. Predictability is a fundamental property of
any process. Effective management of quality assurance activities largely depends on the predictability of
the process. A predictable process is also said to be under statistical control: a process is under
statistical control if following the same process produces similar results. Statistical control implies that
most projects will fall within a bound around the expected value; any data point beyond that bound
implies that the data and the project should be examined, and allowed to pass only if clear evidence is
found that it is a statistical aberration.
Here is an example. Project ONE is very similar to project TWO, which was done two years earlier, and
the costs of the two projects are very close to each other. We can say that the methods used in project
TWO can also be used in project ONE; this indicates that the process is predictable (and if it is not,
following it will result in a loss). Under statistical control, results may vary a little due to random causes,
but not due to process issues.

Support Maintainability and Testability: An important objective of project development is to produce
software that is easy to maintain. Software products are often not easily maintainable, because the
development process used for developing the software does not contain maintainability as a clear goal.
Developers are made responsible for maintenance, at least for a couple of years after developing the software.
Many examples show us that programming is not the major activity on which a programmer spends his
time; testing consumes the most resources during development. The goal of the process should not be
to reduce the effort of design and coding but to reduce the effort of testing and maintenance. Both
testing and maintenance depend heavily on design and coding, and these costs are considerably
reduced if the software is designed and coded to make testing and maintenance easy.
Overall, we can say that the goal of the process should not be to reduce the effort of design and coding,
but to reduce the cost of testing and maintenance.
Early Defect Removal and Defect Prevention: The greater the delay in detecting an error, the more
expensive it is to correct. As the figure given below shows, an error that occurs in the requirement
phase, if corrected during acceptance testing, can cost about 100 times more than correcting the error
in the requirement phase itself. To correct errors after coding, both the design and the code have to be
changed, thereby increasing the cost of correction. All defect removal methods are limited in their
capabilities and cannot detect all the errors that are introduced. Hence it is better to also provide
support for defect prevention.

Process Improvement: Improving the quality and reducing the cost are the fundamental goals of the
software engineering process. This requires the evaluation of the existing process and understanding
the weakness in the process. Software process must be a closed-loop process. The process must be
improved based on previous experiences and each project done using the existing process must feed
information back to facilitate this improvement. This activity of analyzing and improving the process is
largely done by the process management component of the software process, but the other processes
should also take an active part in it for better performance.

6. Write a note on software problem.


Software is not only a collection of computer programs: there is a distinction between a program and a
programming systems product. A program is generally complete in itself and is usually used only by the
author of the program.
A programming system’s product is used largely by people other than the developers of the system. The
users may be from different backgrounds and may expect the software to yield results based on their
requirements. So a proper user-friendly interface should be provided for easy use, and there should be
sufficient documentation to help the users use the system.
IEEE defines Software as the collection of computer programs, procedures, rules and associated
documentation and data. This definition clearly states that, software is not just programs, but includes
all the associated documentation and data.
Note: IEEE stands for Institute of Electrical and Electronic Engineers
The requirement for high-quality software carries many obligations. First, it requires that the software be
thoroughly tested before being used. Secondly, building such software should go through different
phases, with the output of each phase evaluated and reviewed immediately, so that it is easy to remove
any bugs. Aspects like fault tolerance, back-up and recovery, portability etc. must also be taken
into consideration.
Software Is Expensive: The main reason for the high cost of software is that software development
is still labor-intensive. LOC (lines of code) or KLOC (thousands of lines of code) is the most commonly
used criterion to measure the size of software. Because manpower is the dominant resource, the cost of
developing software is usually measured in terms of the person-months of effort spent in development,
and productivity is frequently measured in the industry in terms of LOC or KLOC per person-month.
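For example (illustrative numbers only): if a 50 KLOC system takes 100 person-months of effort, productivity is 50,000 / 100 = 500 LOC per person-month; at, say, $5,000 per person-month, the system costs about $500,000 to develop, or $10 per line of code.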
Another aspect is that software is usually dynamic in nature. Even without bugs, it undergoes changes
as the requirements of the end-users change. The changed software brings changes in the working
environment of the organization, which in turn requires further alterations; sometimes it needs a
dramatic change in the infrastructure of the organization itself. This is usually called the law of software
evolution, and the resulting maintenance is referred to as adaptive maintenance. As a result, software
developers need to go through not only the code, but also the documentation associated with it, and
should test the whole software again to ensure consistency.
In olden days, hardware was very costly: to purchase a computer, lakhs of rupees were required.
Nowadays hardware costs have decreased dramatically, while software can cost more than a million
dollars and can efficiently run on hardware costing only tens of thousands of dollars.
Late, Costly and Unreliable: Software Engineering is driven by three major factors: cost, schedule and
quality. There are many instances quoted about software projects that are behind the schedule and
have heavy cost overruns. If the completion of a particular project is delayed by a year, the cost of the
project may be double or still more. If the software is not completed in the scheduled period, then it will
become very costly.
Unreliability means the software does not do what it is supposed to do, or does something it is not
supposed to do. In software, failures occur due to bugs or errors that get introduced during the design
and development process. Hence, even though the software may fail after operating correctly for some
time, the bug that causes the failure was there from the start; it only got executed at the time of failure.
Problem of Change and Rework: Once the software is delivered to the customer, it enters the
maintenance phase. All systems need maintenance. Software needs to be maintained because there are
often some residual errors remaining in the system that must be removed as they are discovered. These
errors, once discovered, need to be removed, leading to the software getting changed. This is sometimes
called corrective maintenance.
Software often must be upgraded and enhanced to include more features and provide more services.
This also requires modification of the software. If the operating environment of the software changes,
then the software must also be modified accordingly; the software must take on new qualities to fit the
new environment. The maintenance due to this is called adaptive maintenance.

7. Write a note on software metrics, measurement and models.


Software metrics are quantifiable measures that can be used to measure certain characteristics of the
software. The quality of the software cannot be measured directly, because software has no physical
attributes. Metrics, measurements and models go together: metrics provide quantification of some
property, measurement provides the actual value for a metric, and models are needed to obtain values
for metrics that cannot be measured directly.

8. Explain the SCM lifecycle of an item/Briefly explain the various activities of software configuration
management process.
SCM is the process of identifying and defining the items in the system, controlling the change of these
items throughout their life cycle, recording and reporting the status of items and change requests, and
verifying the completeness and correctness of these items. SCM is independent of the development
process. The development process handles normal changes, such as a change in code while the
programmer is developing it, or a change in the requirements while the analyst is gathering the
information. However, it cannot handle changes like requirement changes made while coding is being
done. Approving changes, evaluating the impact of a change, deciding what needs to be done to
accommodate a change request etc. are the issues handled by SCM. SCM has beneficial effects on the
cost, schedule and quality of the product being developed.
It has three major components:
Configuration Identification: When a change is made, it should be clear to what the change has been
applied. This requires a baseline to be established. A baseline forms a reference point in the
development of a system and is generally defined after the major phases in the development process. A
software baseline represents the software in its most recent state. Some baselines are the requirement
baseline, the design baseline, and the product baseline or system baseline.

Though the goal of SCM is to control the establishment of and changes to these baselines, treating each
baseline as a single entity for change is undesirable, because a change may be limited to a very small
portion of the baseline. For this reason, a baseline can consist of many software configuration items
(SCIs): a baseline is a set of SCIs and their relations. Because a baseline consists of SCIs and the SCI is
the basic unit of change control, the SCM process starts with identification of the configuration items.
Once an SCI is identified, it is given a name and becomes the unit of change control.
Change Control: Once the SCIs are identified and their dependencies are understood, the change
control procedures of SCM can be applied. Decisions regarding a change are generally taken by the
configuration control board (CCB), headed by the configuration manager (CM).
When an SCI is under development, it is considered to be in the 'working' state; it is not yet under SCM
and can be changed freely. Once the developer is satisfied with the SCI, it is given to the CM for review
and the item enters the 'under review' state. The CM reviews the SCI and, if it is approved, it enters a
library, after which the item is formally under SCM. If the item is not approved, it is given back to the
developer. This life cycle of an SCI is given in the figure.
Once the SCI is in the library, it cannot be modified without the permission of the CM; an SCI under SCM
can be changed only if the change has been approved by the CM. A change is initiated by a change
request (CR). The reason for change can be anything. The CM evaluates the CR primarily by considering
the effect of the change on the cost, schedule and quality of the project, and the benefits likely to come
from it. Once the CR is accepted, the project manager takes over the planning, and then the CR is
implemented by the programmer.
Status Accounting and Auditing: The aim of status accounting is to answer questions like: what is the
status of a CR (approved/rejected), what is the average effort for fixing a CR, and what is the number of
CRs? For status accounting, the main source of information is the CR itself. Auditing has a different role.

9. Explain the different phases of phased development process.


Phased Development Process
A development process consists of various phases, each phase ending with a predefined output.
Software engineering must consist of these activities:
• Requirement specification for understanding and clearly stating the problem.
• Design for deciding a plan for the solution.
• Coding for implementing the planned solution.
• Testing for verifying the programs.
Requirement Analysis
Requirement analysis is done in order to understand the problem to be solved. In this phase, the
requirements for the software project are collected.
The goal of the software requirement specification phase is to produce the software requirement
specification document. The person responsible for requirement analysis is called the analyst. In
problem analysis, the analyst has to understand the problem. Such analysis requires a thorough
understanding of the existing system, which requires interaction with the client and end-users, as well
as studying the existing manuals and procedures. Once the problem is analyzed, the requirements must
be specified in the requirement specification document.
Software Design
The purpose of the design phase is to plan a solution for the problem specified by the requirement
document. The output of this phase is the design document, which is the blueprint or plan for the
solution and is used later during implementation, testing and maintenance.
Design activity is divided into two phases: system design and detailed design. System design aims to
identify the modules that should be included in the system. During detailed design, the internal logic of
each of the modules specified during system design is decided.
Coding
The goal of coding is to translate the design into code in a given programming language. The aim is to
implement the design in the best possible manner. Testing and maintenance costs are much higher than
the coding cost; therefore, the goal of coding should be to reduce testing and maintenance efforts.
Hence the programs should be easy to read and understand.
Testing
Once the programs are available, we can proceed with testing. Testing not only has to uncover errors
introduced during coding, but also errors introduced during previous phases.
The starting point of testing is unit testing, where each module is tested separately. After this, the
modules are integrated to form sub-systems and then to form the entire system. During integration of
modules, integration testing is done to detect design errors. After the system is put together, system
testing is performed: the system is tested against the requirements to see whether all the requirements
are met or not. Finally, acceptance testing is performed with the user's real-world data to demonstrate
the system to the user.

10. Explain the spiral model with the help of a diagram.


As the name suggests, the activities of this model can be organized like a spiral that has many cycles as
shown in the above figure. Each cycle in the spiral begins with the identification of objectives for that
cycle; the different alternatives that are possible for achieving the objectives and the constraints that
exist. This is the first quadrant of the cycle. The next step is to evaluate different alternatives based on
the objectives and constraints. The focus is based on the risks. Risks reflect the chances that some of the
objectives of the project may not be met. Next step is to develop strategies that resolve the
uncertainties and risks. This step may involve activities like prototyping.

The risk-driven nature of the spiral model makes it suitable for a wide range of applications. An
important feature of the spiral model is that each cycle of the spiral is completed by a review that covers
all the products developed during that cycle, including plans for the next cycle.
In a typical application of spiral model, one might start with an extra round-zero, in which the feasibility
of the basic project objectives is studied. In round one, a concept of operation might be developed; the
risks there are typically whether or not the goals can be met within the constraints. In round two, the
top-level requirements are developed. In succeeding rounds the actual development may be done. In a project
where risks are high, this model is preferable.
Problems:
1) It is difficult to convince customers that the evolutionary approach is controllable.
2) It demands considerable risk-assessment expertise and depends heavily on this expertise for its success.
3) If major risks are not uncovered and managed, major problems may occur.

11. Explain the working of an iterative enhancement model with the help of a diagram.

This model tries to combine the benefits of both the prototyping and waterfall models. The basic idea is
that software should be developed in increments, each increment adding some functional capability to
the system. This process is continued until the full system is implemented. An advantage of this
approach is that it results in better testing, because testing each increment is likely to be easier than
testing the entire system. As with prototyping, the increments provide feedback from the client, which is
useful for implementing the final system and helps the client to state the final requirements.
Here a project control list is created. It contains all the tasks to be performed to obtain the final
implementation and the order in which each task is to be carried out. Each step consists of removing the
next task from the list, then designing, coding and testing it, analyzing the partial system obtained after
the step, and updating the list after the analysis. These three phases are called the design phase, the
implementation phase and the analysis phase. The process is iterated until the project control list
becomes empty, at which point the final implementation of the system is available.
The first version contains some capability. Based on the feedback from the users and experience with
the current version, a list of additional features is generated. And then more features are added to the
next versions. This type of process model will be helpful only when the system development can be
broken down into stages.
Disadvantage:
This approach will work only if successive increments can actually be put into operation.

Unit-2
1. Explain the characteristics of an SRS.
A good SRS is:
1. Correct & Complete
2. Unambiguous
3. Verifiable.
4. Consistent
5. Ranked for importance and/or stability
6. Modifiable
7. Traceable
An SRS is correct if every requirement included in the SRS represents something required in the final system.
An SRS is complete if everything the software is supposed to do, and the responses of the software to all
classes of input data, are specified in the SRS. Completeness and correctness go hand in hand.
An SRS is unambiguous if and only if every requirement stated has one and only one interpretation.
Requirements are often written in natural language, which is inherently ambiguous. If the
requirements are specified using natural language, the SRS writer should ensure that there are no
ambiguities. One way to avoid ambiguity is to use some formal requirement specification language. The
major disadvantage of using formal languages is that a large effort is needed to write the SRS, and
formally stated requirements are more difficult to understand, especially for clients.

An SRS is verifiable if and only if every stated requirement is verifiable. A requirement is verifiable if
there exists some cost-effective process that can check whether the final software meets that
requirement. Unambiguity is essential for verifiability. Verification of requirements is often done
through reviews.
An SRS is consistent if no requirement conflicts with another. This can be explained with the help of an
example: suppose one requirement states that process A occurs before process B, but another
requirement states that process B starts before process A. This is a situation of inconsistency.
Inconsistencies in an SRS can be a reflection of some major problems.
Generally, all the requirements for software need not be of equal importance. Some are critical. Others
are important but not critical. An SRS is ranked for importance and/or stability if for each requirement
the importance and the stability of the requirement are indicated. Stability of a requirement reflects the
chances of it being changed. Writing SRS is an iterative process.
An SRS is modifiable if its structure and style are such that any necessary change can be made easily
while preserving completeness and consistency. Redundancy is a major obstacle to modifiability, as it
can easily lead to errors. For example, assume that a requirement is stated in two places and that the
requirement later needs to be changed. If only one occurrence of the requirement is modified, the
resulting SRS will be inconsistent.
An SRS is traceable if the origin of each requirement is clear and if it facilitates the referencing of each
requirement in future development. Forward traceability means that each requirement should be
traceable to some design and code elements. Backward traceability requires that it is possible to trace
the design and code element to the requirements they support.

2. What is coupling? Explain the various factors that effect on coupling.


Two modules are considered independent if one can function completely without the presence of the
other. If two modules are independent, they are solvable and modifiable separately. However, all the
modules in a system cannot be independent of each other, as they must interact so that together they
produce the desired behavior of the system. The more connections there are between modules, the
more knowledge about one module is required to understand or solve the other module. Hence, the
fewer and simpler the connections between modules, the easier it is to understand one without
understanding the other. The notion of coupling attempts to capture "how strongly" different modules
are interconnected.
Coupling between modules is the strength of the interconnections between modules, or a measure of
the interdependence among modules. In general, the more we must know about module A in order to
understand module B, the more closely connected A is to B. "Highly coupled" modules are joined by
strong interconnections, while "loosely coupled" modules have weak interconnections; independent
modules have no interconnections. To solve and modify a module separately, we would like the module
to be loosely coupled with other modules. The choice of modules decides the coupling between
modules. Because the modules of the software system are created during system design, the coupling
between modules is largely decided during system design and cannot be reduced during
implementation. Coupling is an abstract concept and is not easily quantifiable, so no formulas can be
given to determine the coupling between two modules. However, some major factors can be identified
as influencing coupling between modules. Among them the most important are the type of connection
between modules, the complexity of the interface, and the type of information flow between modules.
Coupling increases with the complexity of the interface between modules. To keep coupling low, we
would like to minimize the number of interfaces per module and the complexity of each interface.
Coupling is reduced if only the defined entry interface of a module is used by other modules (for
example, passing information to and from other modules exclusively through parameters). Coupling
increases if a module is used by other modules via an indirect and obscure interface, like directly
using the internals of a module or using shared variables.
The complexity of the interface is another factor affecting coupling. The more complex each interface is,
the higher will be the degree of coupling. For example, the complexity of the entry interface of a
procedure depends on the number of items being passed as parameters. If a field of a record is needed
by a procedure, often the entire record is passed rather than just that field; by passing the whole record
we are increasing the coupling unnecessarily. Essentially, we should keep the interface of a module as
simple and small as possible.
The type of information flow along the interfaces is the third major factor affecting coupling. Two kinds
of information can flow along an interface: data or control. Passing or receiving control information
means that the action of a module will depend on this control information, which makes it more
difficult to understand the module and provide its abstraction. Transfer of data information means that
a module passes some data as input to another module and gets some data in return as output. This
allows a module to be treated as a simple input-output function that performs some transformation on
the input data to produce the output data. Interfaces with only data communication result in the lowest
degree of coupling, followed by interfaces that only transfer control data. Coupling is considered highest
when the data is hybrid, i.e. some data items and some control items are passed between modules.
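A small illustrative sketch (hypothetical functions) contrasting data flow with control flow across an interface:

    # Data coupling (lowest): the module receives data and returns data,
    # so it can be treated as a simple input-output transformation.
    def average(values):
        return sum(values) / len(values)

    # Control coupling (higher): the caller passes a flag that steers the
    # module's internal action, so the caller must know the module's logic.
    def summarize(values, mode):
        if mode == "average":
            return sum(values) / len(values)
        else:                           # "maximum"
            return max(values)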

3. Explain steps in SDM strategy. / Write a note on SDM strategy.


Structured Design Methodology (SDM) views every software system as having some inputs that are
converted into the desired outputs by the software system. The software is viewed as a transformation
function that transforms the given inputs into the desired outputs, and the central problem of designing
software systems is considered to be properly designing this transformation function. Due to this view
of software, the structured design methodology is primarily function-oriented and relies heavily on
functional abstraction and functional decomposition.
In properly designed systems, it is often the case that a module with subordinates does not actually
perform much computation. The bulk of the actual computation is performed by its subordinates, and
the module itself largely coordinates the data flow between the subordinates to get the computation
done. The subordinates in turn can get the bulk of their work done by their own subordinates, until the
"atomic" modules, which have no subordinates, are reached. Factoring is the process of decomposing a
module so that the bulk of its work is done by its subordinates. A system is said to be completely
factored if all the actual processing is accomplished by bottom-level atomic modules and the non-atomic
modules perform only the jobs of control and coordination. The overall strategy is to identify the input
and output streams and the primary transformations that have to be performed to produce the output.
High-level modules are then created to perform the major activities, and these are later refined. There
are four major steps in this strategy:
1. Restate the problem as a data flow diagram
2. Identify the input and output data elements
3. First-level factoring
4. Factoring of input, output, and transform branches
Restate the Problem as a Data Flow Diagram
To use the SDM, the first step is to construct the DFD for the problem. There is a fundamental
difference between the DFDs drawn during requirements analysis and during structured design. In
requirements analysis, a DFD is drawn to model the problem domain: the analyst has little control over
the problem, and hence his task is to extract from the problem all the information and then represent it
as a DFD. During design, the designer is dealing with the solution domain: the designer has complete
freedom in creating a DFD that will solve the problem stated in the SRS. That is, the designer is
developing a model for an eventual system, and the DFD during design represents how the data will
flow in the system when it is built. In this stage, the major transforms or functions in the software are
decided, and the DFD shows the major transforms that the software will have and how the data will
flow through the different transforms.
The general rules of drawing a DFD remain the same. As an example, consider the problem of
determining the number of different words in an input file. The data flow diagram for this problem is
shown in Figure 4.4. This problem has only one input data stream, the input file, while the desired
output is the count of different words in the file. To transform the input to the desired output, the first
thing we do is form a list of all the words in the file. Then we sort the list, as this will make identifying
different words easier. This sorted list is then used to count the number of different words, and the
output of that transform is the desired count, which is then printed. This sequence of data
transformations is what we have in the data flow diagram.

Identify the Most Abstract Input and Output Data Elements


Most systems have some basic transformations that perform the required operations. However, in most
cases the transformation cannot be easily applied to the actual physical input and produce the desired
physical output. Instead, the input is first converted into a form on which the transformation can be
applied with ease. Similarly, the main transformation modules often produce outputs that have to be
converted into the desired physical output. The goal of this second step is to separate the transforms in
the data flow diagram. For this separation, once the data flow diagram is ready, the next step is to
identify the highest abstract level of input and output.
The most abstract input data elements (MAI) are those data elements in the data flow diagram that are
furthest removed from the physical inputs but can still be considered inputs to the system. The most
abstract input data elements often have little resemblance to the actual physical data. These are often
the data elements obtained after operations like error checking, data validation, proper formatting, and
conversion are complete.
Similarly, we identify the most abstract output data elements (MAO) by starting from the outputs in the
data flow diagram and traveling toward the inputs. These are the data elements that are most removed
from the actual outputs but can still be considered outgoing. The MAO data elements may also be
considered the logical output data items.
There will usually be some transforms left between the most abstract input and output data items.
These central transforms perform the basic transformation for the system, taking the most abstract
input and transforming it into the most abstract output. By focusing on the central transforms, the
modules implementing these transforms can concentrate on performing the transformation without
being concerned with converting the data into the proper format, validating the data, and so forth.
Consider the data flow diagram shown in Figure 4.4; the marked arcs in the data flow diagram are the
most abstract input and the most abstract output. The choice of the most abstract input is obvious: we
start following the input. First, the input file is converted into a word list, which is essentially the input
in a different form. The sorted word list is still basically the input, as it is the same list in a different
order. This appears to be the most abstract input, because the next data item (i.e., the count) is not just
another form of the input data. The choice of the most abstract output is even more obvious: count is
the natural choice (data that is a form of the input will not usually be a candidate for the most abstract
output). Thus we have one central transform, count-the-number-of-different-words, which has one
input and one output data item.
First-Level Factoring
Having identified the central transforms and the most abstract input and output data items, we are
ready to identify some modules for the system. Initially we specify a main module, whose purpose is to
invoke the subordinates. The main module is therefore a coordinate module. For each of the most
abstract input data items, an immediate subordinate module to the main module is specified. Each of
these modules is an input module, whose purpose is to deliver to the main module the most abstract
data item for which it is created. Similarly, for each most abstract output data item, a subordinate
module that is an output module, which accepts data from the main module, is specified. Each of the
arrows connecting these input and output subordinate modules is labeled with the respective abstract
data item flowing in the proper direction.

Finally, for each central transform, a module subordinate to the main is specified. These modules will be
transform modules, whose purpose is to accept data from the main module, and then return the
appropriate data back to the main module. The data items coming to a transform module from the main
module are on the incoming arcs of the corresponding transform in the data flow diagram. The data
items returned are on the outgoing arcs of that transform. Note that here a module is created for a
transform, while input/output modules are created for data items. The structure after the first-level
factoring of the word-counting problem is shown in the above figure.
In the above example, there is one input module, which returns the sorted word list to the main
module. The output module takes from the main module the value of the count. There is only one
central transform in this example, and a module is drawn for that. Note that the data items traveling to
and from this transformation module are the same as the data items going in and out of the central
transform. The main module is the overall control module, which will form the main program or
procedure in the implementation of the design. It is a coordinate module that invokes the input
modules to get the most abstract data items, passes these to the appropriate transform modules, and
delivers the results of the transform modules to other transform modules until the most abstract data
items are obtained. These are then passed to the output modules.
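As an illustration (an addition to these notes, not from the original text), the first-level structure for
the word-counting problem might be coded in C roughly as follows; the function names and the fixed
array bounds are assumptions made for this sketch:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_WORDS 1000   /* assumed bounds, not fixed by the design */
#define MAX_LEN   64

static int cmp(const void *a, const void *b) {
    return strcmp((const char *)a, (const char *)b);
}

/* Input module: delivers the most abstract input (a sorted word list). */
static int get_sorted_list(FILE *in, char wl[][MAX_LEN]) {
    int n = 0;
    while (n < MAX_WORDS && fscanf(in, "%63s", wl[n]) == 1)
        n++;
    qsort(wl, n, MAX_LEN, cmp);
    return n;
}

/* Central transform: counts different words in a sorted list. */
static int count_different_words(char wl[][MAX_LEN], int n) {
    int count = (n > 0) ? 1 : 0;
    for (int i = 1; i < n; i++)
        if (strcmp(wl[i], wl[i - 1]) != 0)
            count++;
    return count;
}

/* Output module: presents the most abstract output (the count). */
static void print_count(int count) {
    printf("%d different words\n", count);
}

/* Main module: a coordinate module that only sequences its subordinates. */
int main(void) {
    static char wl[MAX_WORDS][MAX_LEN];
    int n = get_sorted_list(stdin, wl);
    print_count(count_different_words(wl, n));
    return 0;
}

Note how main does no computation of its own; it only coordinates its input, transform, and output
subordinates, mirroring the first-level structure chart.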
Factoring the Input, Output, and Transform Branches
The first-level factoring results in a very high-level structure, where each subordinate module has a lot
of processing to do. To simplify these modules, they must be factored into subordinate modules that
will distribute the work of a module. Each of the input, output, and transformation modules must be
considered for factoring.
The purpose of an input module, as viewed by the main program, is to produce some data. To factor an
input module, the transform in the data flow diagram that produced the data item is now treated as a
central transform. The process performed for the first-level factoring is repeated here with this new
central transform, with the input module being considered the main module. A subordinate input
module is created for each input data stream coming into the new central transform, and a subordinate
transform module is created for the new central transform. The new input modules now created can
then be factored again, until the physical inputs are reached. Factoring of input modules will usually
not yield any output subordinate modules.
[Figure (structure chart fragment): the central transform count-the-number-of-different-words with
subordinate modules get-a-word, same-as-previous, and increment-count; the arcs carry the word list,
word, count, and flag.]
The factoring of the input module get-sorted-list in the first-level structure is shown in Figure 4.6. The
transform producing the input returned by this module (i.e., the sort transform) is treated as a central
transform. Its input is the word list. Thus, in the first factoring we have an input module to get the list
and a transform module to sort the list. The input module can be factored further, as the module needs
to perform two functions, getting a word and then adding it to the list. Note that the looping arrow is
used to show the iteration.
The factoring of the output modules is symmetrical to the factoring of the input modules. For an output
module we look at the next transform to be applied to the output to bring it closer to the ultimate
desired output. This now becomes the central transform, and an output module is created for each data
stream. During the factoring of output modules, there will be no input modules.
Factoring the central transform is essentially an exercise in functional decomposition and will depend
on the designer's experience and judgment. One way to factor a transform module is to treat it as a
problem in its own right and start with a data flow diagram for it. The inputs to the DFD are the data
coming into the module and the outputs are the data being returned by the module. Each transform in
this DFD represents a sub-transform of this transform. The factoring of the central transform
count-the-number-of-different-words is shown in Figure 4.7.
This was a relatively simple transform, and it was not necessary to draw the data flow diagram. To
determine the number of words, we have to get a word repeatedly, determine if it is the same as the
previous word (for a sorted list, this checking is sufficient to determine if the word is different from
other words), and then count the word if it is different. For each of the three different functions, we
have a subordinate module, and we get the structure shown in Figure 4.7.

It should be clear that the structure that is obtained depends a good deal on what the most abstract
inputs and most abstract outputs are. And as mentioned earlier, this is based on good judgment.
Although the judgment may vary among designers, its effect is minimal. The net effect is that a bubble
that appears as a transform module at one level in one design may appear as a transform module at
another level in a different design.

4. Explain DFD with an example/Data Flow Diagrams and Data Dictionary


Data flow diagrams (DFD) are commonly used during problem analysis. DFDs are quite general and are
not limited to problem analysis; they were in use before the software engineering discipline began.
DFDs are very useful in understanding a system and can be effectively used during analysis. A DFD
shows the flow of data through the system. It views a system as a function that transforms the input
into the desired output. Any complex system will not perform this transformation in a single step, and
the data will typically undergo a series of transformations before it becomes the output. The DFD aims
to capture the transformations that take place within a system to the input data so that eventually the
output data is produced. The agent that performs the transformation of data from one state to another
is called a process and is represented in the form of a circle (or bubble) in the DFD. The processes are
shown by named circles and data flows are represented by named arrows entering or leaving the
bubbles. Rectangles represent a source or sink and are a net originator or consumer of data. An
example of a DFD is given in the figure below.
This diagram represents the basic operations that take place while calculating the pay of employees in
an organization. The basic output is the paycheck, and the sink is the worker here. First, the employee's
record is retrieved using the Employee-ID. Through the ID, the rate of payment and overtime are
obtained and the pay is computed. Later, taxes are deducted and all related information is stored in
company records. Finally, the paycheck is issued. Some conventions used in DFDs are: a labeled arrow
represents an input or output. The need for multiple data flows by a process is represented by a "*"
between the data flows. This symbol represents an AND relationship. For example, if a "*" is there
between two inputs A and B for a process, it means that both A and B are needed for the process.
Similarly, the OR relationship is represented by a "+" between the data flows.
It should be pointed out that a DFD is not a flowchart. A DFD represents the flow of data, while a flow
chart shows the flow of control. A DFD does not include procedural information.

In a DFD, data flows are identified by unique names. These names are chosen so that they convey some
meaning about what the data is. However, the precise structure of the data flows is not specified in a
DFD. The data dictionary is a repository of the various data flows defined in a DFD. The data dictionary
states the structure of each data flow in the DFD. To define the data structure, different notations are
used: composition is represented by +, selection is represented by / (i.e., an either-or relationship),
and repetition is represented by *. An example of a data dictionary is given below:
Weekly_timesheet = Employee_name + Employee_id + [Regular_hrs + Overtime_hrs]*
Pay_rate = [Hourly_pay / Daily_pay / Weekly_pay]
Employee_name = Last_name + First_name + Middle_name
Employee_id = digit + digit + digit + digit
Most of the data flows in the DFD are specified here. Once we have constructed a DFD and its
associated data dictionary, we have to somehow verify that they are correct. There is no specific
method to do so, but the data dictionary and DFD are examined to check that the data described in the
data dictionary appears somewhere in the DFD and vice versa. Some common errors in DFDs are listed
below:
1. Unlabelled data flows.
2. Missing data flows (information required by a process is not available).
3. Extraneous data flows (some information is not being used in any process).
4. Consistency not maintained during refinement.
5. Missing processes.
6. Contains some control information.
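As a rough illustration (an addition to these notes; the field sizes and the bound on timesheet entries
are assumptions), the dictionary entries above map naturally onto C types: composition becomes struct
fields, selection becomes an enum tag, and repetition becomes an array with a count:

#include <stdio.h>

#define MAX_ENTRIES 7            /* assumed bound on timesheet rows */

/* Selection ([a / b / c]) maps naturally onto an enum tag. */
enum pay_rate_kind { HOURLY_PAY, DAILY_PAY, WEEKLY_PAY };

struct pay_rate {
    enum pay_rate_kind kind;
    double amount;
};

/* Composition (a + b + c) maps onto struct fields. */
struct employee_name {
    char last_name[20];
    char first_name[20];
    char middle_name[20];
};

/* Repetition ([...]*) maps onto an array plus a count. */
struct weekly_timesheet {
    struct employee_name employee_name;
    char employee_id[5];          /* digit + digit + digit + digit, NUL-terminated */
    struct { int regular_hrs; int overtime_hrs; } hours[MAX_ENTRIES];
    int n_entries;
};

int main(void) {
    struct weekly_timesheet ts = { { "Doe", "John", "Q" }, "1234",
                                   { { 40, 2 } }, 1 };
    printf("%s: %d regular hrs\n", ts.employee_id, ts.hours[0].regular_hrs);
    return 0;
}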

5. Write a note on decision table.


Decision tables provide a mechanism for specifying complex decision logic. It is a formal, table-based
notation that can be automatically processed to check for qualities like completeness and lack of
ambiguity. A decision table has two parts. The top part lists the different conditions and the bottom
part specifies the different actions. The table specifies under what combination of conditions what
action is to be performed.
Example: Consider the part of a banking system responsible for debiting from accounts. For this part
the relevant conditions are:
C1: The account number is correct.
C2: The signature matches.
C3: There is enough money in the account.
The possible actions are:
A1: Give money.
A2: Give statement that there is not enough money in the account.
A3: Call the police to check for fraud.
These conditions and possible actions can be represented in tabular form as follows:

          1    2    3    4    5
    C1    N    N    Y    Y    Y
    C2         N    N    Y    Y
    C3              N    Y    N
    A1                   X
    A2              X         X
    A3         X
Part of the decision table is shown here. For each condition, a Y in a column indicates yes or true, an N
indicates no or false, and a blank means that the condition can be either true or false. If an action is to
be taken for a particular combination of the conditions, it is shown by an X for that action. If there is
no mark for an action for a particular combination of conditions, the action is not to be performed.
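To make the table's reading concrete, here is a small sketch (an addition; the function and enum names
are hypothetical) of how the rules above might be encoded in C:

#include <stdio.h>
#include <stdbool.h>

/* Actions corresponding to A1..A3 in the table. */
enum action { GIVE_MONEY, STATEMENT_NOT_ENOUGH, CALL_POLICE, NO_ACTION };

/* Each branch mirrors one or more rule columns of the decision table,
   rather than an optimized decision tree. */
enum action debit(bool acct_ok, bool sig_ok, bool enough_money)
{
    if (acct_ok && sig_ok && enough_money)   /* rule 4 */
        return GIVE_MONEY;
    if (acct_ok && !enough_money)            /* rules 3 and 5 */
        return STATEMENT_NOT_ENOUGH;
    if (!acct_ok && !sig_ok)                 /* rule 2 */
        return CALL_POLICE;
    return NO_ACTION;                        /* combinations the table leaves open */
}

int main(void)
{
    printf("%d\n", debit(true, true, true));   /* expect GIVE_MONEY (0) */
    printf("%d\n", debit(true, true, false));  /* expect STATEMENT_NOT_ENOUGH (1) */
    return 0;
}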

6. Define cohesion. Explain different types of cohesion or (levels)


With cohesion, we are interested in determining how closely the elements of a module are related to
each other. Cohesion of a module represents how tightly bound the internal elements of the module
are to one another. In one sense, it is a measure of the strength of relationship between the methods
and data of a class and some unifying purpose or concept served by that class. In another sense, it is a
measure of the strength of relationship between the class’s methods and data themselves. Cohesion
and coupling are clearly related. Usually, the greater the cohesion of each module in the system, the
lower the coupling between modules is. There are several levels of cohesion:
(i) Coincidental (ii) Logical (iii) Temporal (iv) Procedural
(v) Communicational (vi) Sequential (vii) Functional.
Coincidental is the lowest level, and functional is the highest. Functional binding is much stronger than
the rest, while the first two are considered much weaker than others. Coincidental cohesion occurs
when there is no meaningful relationship among the elements of a module. Coincidental cohesion is
when parts of a module are grouped arbitrarily; the only relationship between the parts is that they
have been grouped together. This cohesion can occur if an existing program is modularized by chopping
it into pieces and making the different pieces modules. If a module is created to save duplicate code by
combining some part of code that occurs at many different places, that module is likely to have
coincidental cohesion. In this situation, the statements in the module have no relationship with each
other, and if one of the modules using the code needs to be modified and this modification includes the
common code, it is likely that other modules using the code do not want the code modified.
Consequently, the modification of this "common module" may cause other modules to behave
incorrectly. It is therefore poor practice to create a module merely to avoid duplicate code.
A module has logical cohesion if there is some logical relationship between the elements of a module,
and the elements perform functions that fall in the same logical class. It can also be described as the
cohesion where parts of a module are grouped because they are logically categorized to do the same
thing even though they are different by nature (e.g., grouping all mouse and keyboard routines as input
handling routines). A typical example of this kind of cohesion is a module that performs all the inputs
or all the outputs. In such a situation, if we want to input or output a particular record, we have to
somehow convey this to the module. Often, this will be done by passing some kind of special status
flag, which will be used to determine what statements to execute in the module. This results in hybrid
information flow between modules, which is generally the worst form of coupling between modules.
Logically cohesive modules should be avoided, if possible.
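The flag-passing problem described above can be illustrated with a small hedged sketch (an addition;
the record types are hypothetical): the first function below is logically cohesive and driven by a control
flag, while the single-purpose functions after it show the more cohesive alternative:

#include <stdio.h>

/* Logically cohesive module: one "do all output" routine driven by a flag.
   The caller must pass a status flag, and the module dispatches on it. */
enum record_kind { HEADER, DETAIL, TRAILER };   /* hypothetical record types */

void write_record(enum record_kind kind)
{
    switch (kind) {
    case HEADER:  printf("writing header record\n");  break;
    case DETAIL:  printf("writing detail record\n");  break;
    case TRAILER: printf("writing trailer record\n"); break;
    }
}

/* Higher cohesion: one single-purpose function per record type, so callers
   no longer communicate with the module through a control flag. */
void write_header(void) { printf("writing header record\n"); }
void write_detail(void) { printf("writing detail record\n"); }

int main(void)
{
    write_record(HEADER);   /* flag-based, logically cohesive style */
    write_detail();         /* functionally cohesive alternative */
    return 0;
}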
Temporal cohesion is the same as logical cohesion, except that the elements are also related in time
and are executed together. In other words we can say that Temporal cohesion occurs when parts of a
module are grouped by when they are processed - the parts are processed at a particular time in
program execution (e.g., a function called after catching an exception, which closes open files, creates
an error log, and notifies the user). Modules that perform activities like "initialization," "clean-up,"
and "termination" are usually temporally bound. Temporal cohesion is higher than logical cohesion,
because the elements are all executed together. This avoids the problem of passing the flag, and the
code is usually simpler.
A procedurally cohesive module contains elements that belong to a common procedural unit, i.e.,
procedural cohesion occurs when parts of a module are grouped because they always follow a certain
sequence of execution. For example, a function which checks file permissions and then opens the file,
or a loop or a sequence of decision statements in a module, may be combined to form a separate module.
A module with communicational (informational) cohesion has elements that are related by a reference
to the same input or output data. That is, in a communicationally bound module, the elements are
together because they operate on the same input or output data. In other words, communicational
cohesion occurs when parts of a module are grouped because they operate on the same data (e.g., a
module which operates on the same record of information). Communicationally cohesive modules may
perform more than one function by reference to the same input or output data. An example of this
could be a module to "print and punch record."
Sequential cohesion occurs when parts of a module are grouped because the output from one part is
the input to another part, like an assembly line (e.g., a function which reads data from a file and
processes the data). Here we have a sequence of elements in which the output of one forms the input
to another. Sequential cohesion does not provide any guidelines on how to combine the elements into
modules. Different possibilities exist: combine all in one module; put the first half in one and the rest
in another; put the first third in one and the rest in the other; and so forth. Consequently, a
sequentially bound module may contain several functions or parts of different functions. Sequentially
cohesive modules bear a close resemblance to the problem structure.
Functional cohesion is the strongest cohesion. Functional cohesion is when parts of a module are
grouped because they all contribute to a single well-defined task of the module. In a functionally bound
module, all the elements of the module are related to performing a single function. By function, we do
not mean simply mathematical functions; modules accomplishing a single goal are also included.
Functions like "compute square root" and "sort the array" are clear examples of functionally cohesive
modules.
How does one determine the cohesion level of a module? There is no mathematical formula that can be
used; we have to use our judgment for this. A useful technique for determining if a module has
functional cohesion is to write a sentence that describes, fully and accurately, the function or purpose
of the module. The following tests can then be made:
1. If the sentence is a compound sentence, or contains more than one verb, the module is probably
performing more than one function, and it probably has sequential or communicational cohesion.
2. If the sentence contains words relating to time, like "first," "next," "when" and "after," the module
probably has sequential or temporal cohesion.
3. If the predicate of the sentence does not contain a single specific object following the verb (such as
"edit all data"), the module probably has logical cohesion.
4. Words like "initialize" and "cleanup" imply temporal cohesion.
Modules with functional cohesion can always be described by a simple sentence. However, if a
description is a compound sentence, it does not necessarily mean that the module lacks functional
cohesion. But if we cannot describe it using a simple sentence, the module is not likely to have
functional cohesion.

7. Explain the structure chart.


For a function-oriented design, the design can be represented graphically by structure charts. The
structure of a program is made up of the modules of that program together with the interconnections
between modules. The structure chart of a program is a graphic representation of its structure. In a
structure chart a module is represented by a box with the module name written in the box. An arrow
from module A to module B represents that module A invokes module B. B is called the subordinate of
A, and A is called the superordinate of B. The arrow is labeled by the parameters received by B as input
and the parameters returned by B as output, with the direction of flow of the input and output
parameters represented by small arrows. The parameters can be shown to be data (unfilled circle at the
tail of the label) or control (filled circle at the tail). As an example consider the structure of the following
program, whose structure is shown in Figure 4.1.
main()
{
    int sum, n, N, a[MAX];
    readnums(a, &N); sort(a, N); scanf("%d", &n); sum = add_n(a, n); printf("%d", sum);
}
readnums(int a[], int *N)
{
    ...
}
sort(int a[], int N)
{
    ...
    if (a[i] > a[t]) switch(a[i], a[t]);
    ...
}
/* Add the first n numbers of a */
add_n(int a[], int n)
{
    ...
}
[Figure 4.1 (structure chart): main invokes readnums, sort, and add_n, and sort invokes switch; the
arcs are labeled with the data passed between the modules (e.g., a, N from readnums; n and a, sum
for add_n; x, y between sort and switch).]
Usually procedural information is not represented in a structure chart, and the focus is on representing
the hierarchy of modules. However there are some situations where the designer may wish to
communicate certain procedural information explicitly, like major loops and decisions. Such information
can also be represented in a structure chart. A loop can be represented by a looping arrow. In the
figure given below, module A calls modules C and D repeatedly. All the subordinate modules activated
within a common loop are enclosed in the same looping arrow.
Major decisions can be represented similarly. For example, if the invocation of modules C and D in
module A depends on the outcome of some decision, that is represented by a small diamond in the box
for A, with the arrows joining C and D coming out of this diamond, as shown in the figure.
[Figure: module A with subordinates B, C, and D; on the left a looping arrow encloses C and D, and on
the right the arrows to C and D emerge from a decision diamond in A.]
Modules in a system can be categorized into a few classes. There are some modules that obtain
information from their subordinates and then pass it to their superordinate. This kind of module is an
input module. Similarly, there are output modules that take information from their superordinate and
pass it on to their subordinates. As the name suggests, the input and output modules are typically used
input and output of data. The input modules get the data from the sources and get it ready to be
processed, and the output modules take the output produced and prepare it for proper presentation to
the environment.
Then there are modules that exist solely for the sake of transforming data into some other form. Such a
module is called a transform module. Most of the computational modules typically fall in this category.
Finally, there are modules whose primary concern is managing the flow of data to and from different
subordinates. Such modules are called coordinate modules. The structure chart representation of the
different types of modules is shown in Figure 4.3. A module can perform functions of more than one
type of module.
The composite module in Figure 4.3 is an input module from the point of view of its superordinate, as
it feeds the data Y to the superordinate. Internally, it acts as a coordinate module: it gets data X from
one subordinate and passes it to another subordinate, which converts it to Y. A structure chart is very
useful while creating the design. It shows the modules and their call hierarchy, the interfaces between
the modules, and what information passes between the modules. A designer can make effective use of
structure charts to represent the models he creates while designing. However, it is not very useful for
representing the final design, as it does not give all the information needed about the design. For
example, it does not specify the scope, structure of data, specification of each module, etc. Hence it is
generally supplemented with textual specifications to convey the design to the implementer.

[Figure 4.3 (structure chart notation): an input module delivers data y to its superordinate; an output
module receives data x from its superordinate; a transform module converts x to y; a coordinate module
manages the flow of data between its subordinates; and a composite module combines these roles,
taking x from one subordinate and delivering y to its superordinate.]
8. Explain the components of SRS.
Components of SRS
The basic issues an SRS must address are:
1. Functional Requirements: These specify which output should be produced from the given inputs.
They describe the relationship between the input and output of a system. All operations to be
performed on the input data to obtain the output should be specified. This includes specifying the
validity checks on the input and output data. An important part of the specification is the system
behavior in abnormal situations, like invalid inputs or errors during computation. The functional
requirements must clearly state what the system should do if such situations occur. They should
specify the behavior of the system for invalid inputs and invalid outputs, along with the behavior where
the input is valid but normal operation cannot be performed. E.g., in an airline reservation system, the
reservation cannot be made even for a valid passenger if the airplane is fully booked. In short, the
system behavior for all foreseen inputs and all foreseen system states should be specified.
2. Performance Requirements: This part of the SRS specifies the performance constraints on the
software system. There are two types of performance requirements: static and dynamic. Static
requirements do not impose constraints on the execution characteristics of the system. These include
requirements like the number of terminals to be supported and the number of simultaneous operations
to be supported. These are also called capacity requirements of the system. Dynamic requirements
specify constraints on the execution behavior of the system. These typically include response time and
throughput constraints on the system. Acceptable ranges of the different performance parameters
should be specified, along with acceptable performance for both normal and peak workload conditions.
All these requirements must be stated in measurable terms, e.g., "the response time of x is less than
one second 98% of the time."
3. Design Constraints: There are a number of factors in the client’s environment that may restrict the
choices of the designer. Such factors include some standards that must be followed, resource limits,
operating environment, reliability and security requirements which may have some impact on the
design of the system. An SRS should identify and specify all such constraints.
Standard Compliance: This specifies the requirements for the standards the system must follow.
The standards may include the report format and accounting procedures. It can also include certain
changes or operations that must be recorded in an audit file.
Hardware Limitations: The software may have to operate on some existing or pre-determined
hardware, thus imposing restrictions on the design. This can include the type of machines to be
used, operating systems available, languages supported and limits on primary and secondary
storage.
Reliability and Fault Tolerance: These requirements can place major constraints on how the system
is to be designed, since fault tolerance requirements make the system more complex. Requirements on
the system behavior in the face of certain kinds of faults are to be specified. Recovery requirements
deal with the system behavior in case of failure.
Security: These requirements place restrictions on the use of certain commands, control access to
data, provide different kinds of access requirements for different people, require the use of passwords
and cryptography techniques, and maintain a log of activities of the system.
4. External Interface Requirements: All the possible interactions of the software with the people,
hardware and other software should be clearly specified. The user interface should be user-friendly; to
create a user-friendly interface one can use GUI tools. A preliminary user manual should be created
with all user commands, screen formats, feedback and error messages, and an explanation of how the
system will appear to the user. Like other specifications, these should also be precise and verifiable,
e.g., "commands should reflect the function they perform."
For hardware interface requirements, SRS should specify logical characteristics of each interface
between the software product and hardware components.
The interface requirement should specify the interface with other software the system will use or
that will use the system.

9. Explain the activities of requirement process with a proper diagram
The requirement process is the sequence of activities that need to be performed in the requirement
phase. There are three basic activities in case of requirement analysis. They are:
1. Problem analysis or requirement analysis.
2. Requirement specification.
3. Requirement validation.
Problem Analysis
Problem analysis is initiated with some general statement of needs. It often starts with a high-level
"problem statement". The client is the originator of these needs. During analysis, the system behavior,
constraints on the system, its inputs, and outputs are analyzed. The basic purpose of this activity is to
obtain a thorough understanding of what the software needs to provide.
The requirement specification clearly specifies the requirements in the form of a document. Properly
organizing and describing the requirements is an important goal of this activity. Requirements
validation focuses on ensuring that what has been specified in the SRS indeed constitutes all the
requirements of the software, and on making sure that the SRS is of good quality. The final activity
focuses on validation of the collected requirements. The requirement process terminates with the
production of the validated SRS.
[Fig. 3.1: the requirement process. Client/user needs feed problem analysis, which produces a product
description (the specification); validation of the specification yields the validated SRS.]
Though it seems that the requirement process is a linear sequence of these activities, in reality it is
not so. There will be a considerable overlap and feedback between these activities. So, some parts of
the system are analyzed and then specified while the analysis of other parts is going on. If validation
activities reveal some problem for a part of the system, analysis and specification are conducted again.
The requirement process is represented diagrammatically in Fig. 3.1. As shown in the figure, from the
specification activity we may go back to the analysis activity. This happens because specification is
not possible without a clear understanding of the requirements. Once the specification is complete, it
goes through the validation activity. This activity may reveal problems in the specification itself, which
requires going back to the specification step, which in turn may reveal shortcomings in the
understanding of the problem, which requires going back to the analysis activity.
During requirement analysis, the focus is on understanding the system and its requirements. For
complex systems, this is the most difficult task. Hence the concept "divide-and-conquer", i.e.,
decomposing the problem into sub-problems and then understanding the parts and their relationships,
is inevitably applied to manage the complexity.

10. Explain the general structure of an SRS document.


All the requirements for the system have to be included in a document that is clear and concise. For
this, it is necessary to organize the requirements document as sections and subsections. There can be
many ways to structure requirements documents.
The general structure of an SRS is given below.
1. Introduction
a. Purpose
b. Scope
c. Definitions, Acronyms, and Abbreviations
d. References
e. Overview
2. Overall Description
a. Product Perspective
b. Product Functions

c. User Characteristics
d. General Constraints
e. Assumptions and Dependencies
3. Specific Requirements
a. External Interface Requirements
i. User Interfaces
ii. Hardware Interfaces
iii. Software Interfaces
iv. Communication Interfaces
b. Functional Requirements
i. Mode 1
1. Functional Requirement 1.1
…..
…..
Functional Requirement 1.n
ii. Mode m
1. Functional Requirement m.1
……
…….
c. Performance Requirements
d. Design Constraints
e. Attributes
f. Other Requirements

The introduction section contains the purpose, scope, overview, etc. of the requirements document. It
also contains the references cited in the document and any definitions that are used. Section 2
describes the general factors that affect the product and its requirements. Product perspective is
essentially the relationship of the product to other products, defining if the product is independent or
is a part of a larger product. A general, abstract description of the functions to be performed by the
product is given, along with schematic diagrams showing a general view of different functions and
their relationships with each other. Similarly, characteristics of the eventual end user and general
constraints are also specified.
The specific requirements section describes all the details that the software developer needs to know
for designing and developing the system. This is the largest and most important part of the document.
One method to organize the specific requirements is to first specify the external interfaces, followed by
functional requirements, performance requirements, design constraints and system attributes.
The external interface requirements section specifies all the interfaces of the software: to people, other
software, hardware, and other systems. User interfaces are clearly a very important component; they
specify each human interface the system plans to have, including screen formats, contents of menus,
and command structure. In hardware interfaces, the logical characteristics of each interface between
the software and hardware on which the software can run are specified. In software interfaces, all other
software that is needed for this software to run is specified, along with the interfaces. Communication
interfaces need to be specified if the software communicates with other entities in other machines.
In the functional requirements section, the functional capabilities of the system are described. For each
functional requirement, the required inputs, desired outputs, and processing requirements will have to
be specified.
The performance section should specify both static and dynamic performance requirements.
The attributes section specifies some of the overall attributes that the system should have. Any
requirement not covered under these is listed under other requirements. Design constraints specify all
the constraints imposed on design.

11. Write a note on design heuristics.


The design steps mentioned earlier do not reduce the design process to a series of steps that can be
followed blindly. The strategy requires the designer to exercise sound judgment and common sense.
The basic objective is to make the program structure reflect the problem as closely as possible. The
structure obtained earlier should be treated as an initial structure, which may get modified. Here we
mention some heuristics that can be used to modify the structure, if necessary.
Module size is often considered the indication of module complexity. In terms of the structure of the
system, modules that are very large may not be implementing a single function and can therefore be
broken into many modules, each implementing a different function. On the other hand, modules that
are too small may not require any additional identity and can be combined with other modules.
However, the decision to split a module or combine different modules should not be based on size
alone. Cohesion and coupling of modules should be the primary guiding factors. A module should be
split into separate modules only if the cohesion of the original module was low, the resulting modules
have a higher degree of cohesion, and the coupling between modules doesn’t increase. Similarly, two or
more modules should be combined only if the resulting module has a high degree of cohesion and the
coupling of the resulting module is not greater than the coupling of the sub-modules. Furthermore, a
module should not be split or combined with another module, if it is a subordinate to many other
modules. In general, a module should contain between 5 and 100 LOC; more than 100 or fewer than 5
LOC is not desirable.
Another factor to be considered is the "fan-in" and "fan-out" of modules. Fan-in of a module is the
number of arrows coming toward the module, indicating the number of superordinates. Fan-out of a
module is the number of arrows going out of that module, indicating the number of subordinates for
that module. A very high fan-out is not desirable, as it means that the module has to control and
coordinate too many modules. Whenever possible, fan-in should be maximized. In general, the fan-out
should not be more than 6.
Another important factor that should be considered is the correlation of the scope of effect and the
scope of control. The scope of effect of a decision (in a module) is the collection of all the modules
that contain any processing that is conditional on that decision or whose invocation is dependent on
the outcome of the decision. The scope of control of a module is the module itself and all its
subordinates (not just the immediate subordinates). The system is usually simpler when the scope of
effect of a decision is a subset of the scope of control of the module in which the decision is located.
Ideally, the scope of effect should be limited to the modules that are immediate subordinates of the
module in which the decision is located. Violation of this rule usually results in a higher degree of
coupling between modules.

Unit-3
1. Explain PDL with suitable examples.
One method to present a design is to specify it in a natural language like English. This sometimes leads
to misunderstanding, and such imprecise communication is not of much use when converting the design
into code. The other extreme is to communicate the design precisely in a formal language like a
programming language. This type of representation is precise enough to be implemented directly, but
it forces a level of detail that works against communicating the design. PDL sits between these
extremes: it is as precise and unambiguous as possible without having too much detail, and it can be
easily converted into the required implementation. It is related to pseudocode, but unlike pseudocode,
it is written in plain language without any terms that could suggest the use of any particular
programming language or library.
PDL has an overall outer syntax of a structured programming language and contains a vocabulary of a
natural language (English in our case). It can be thought of as "Structured English". Because the
structure of a design expressed in PDL is formal (using the formal language constructs), automated
processing can be done to some extent on such designs. E.g., the problem of finding the minimum and
maximum of a set of numbers in a file and outputting these numbers can be stated in PDL as shown
below:
minmax(infile)
ARRAY z
DO UNTIL end of input
Read an item into z
ENDDO
max, min := first item of z
DO FOR each item in z
IF max < item THEN set max to item
IF min > item THEN set min to item
ENDDO
END
Notice that in the PDL program we have the entire logic of the procedure, but little about the details of
implementation in a particular language. To implement this in a language, each of the PDL statements
will have to be converted into programming language statements.
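For instance, one possible C rendering of this PDL (an illustrative addition; the array bound and the
integer item type are assumptions the PDL leaves open) is:

#include <stdio.h>

#define MAX_ITEMS 1000   /* assumed bound; the PDL leaves this open */

void minmax(FILE *infile)
{
    int z[MAX_ITEMS];
    int n = 0;

    /* DO UNTIL end of input: read an item into z */
    while (n < MAX_ITEMS && fscanf(infile, "%d", &z[n]) == 1)
        n++;
    if (n == 0)
        return;

    /* max, min := first item of z */
    int max = z[0], min = z[0];

    /* DO FOR each item in z */
    for (int i = 1; i < n; i++) {
        if (max < z[i]) max = z[i];
        if (min > z[i]) min = z[i];
    }
    printf("min = %d, max = %d\n", min, max);
}

int main(void)
{
    minmax(stdin);
    return 0;
}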
With PDL, a design can be expressed in whatever level of detail that is suitable for the problem. One
way to use PDL is to first generate a rough outline of the entire solution at a given level of detail. When
the design is agreed on at this level, more detail can be added. This allows a successive refinement
approach, and can save considerable cost by detecting the design errors early during the design phase.
It also aids design verification by phases, which helps in developing error-free designs. The structured
outer syntax of PDL also encourages the use of structured language constructs while implementing the
design. The basic constructs of PDL are similar to those of a structured language.
PDL provides an IF construct which is similar to the if-then-else construct of Pascal. Conditions and
the statements to be executed need not be stated in a formal language. For a general selection, there is
a CASE statement. Some examples of the CASE statement are:
CASE of transaction type
CASE of operator type
The DO construct is used to indicate repetition. The construct is indicated by:
DO iteration-criteria
one or more statements
ENDDO
The iteration criteria can be chosen to suit the problem, and unlike a formal programming language,
they need not be formally stated. Examples of valid uses are:
DO WHILE there are characters in input file
DO UNTIL the end of file is reached
DO FOR EACH item in the list EXCEPT when the item is ZERO.
A variety of data structures can be defined and used in PDL, such as lists, tables, scalars, and integers.
Variations of PDL, along with some automated support, are used extensively for communicating
designs.

2. Explain structured programming.


The basic objective of the coding activity is to produce programs that are easy to understand. It has
been argued by many that structured programming practice helps develop programs that are easier to
understand. Structured programming, which started in the 1970s, is often regarded as "goto-less"
programming. Although extensive use of gotos is certainly undesirable, structured programs can be
written with the use of gotos. Many structured programming languages support the goto statement,
which can be used in a structured manner, for example, to exit to the end of a routine or return to the
beginning of a loop.
A program has a static structure as well as a dynamic structure. The static structure is the structure of
the text of the program, which is usually just a linear organization of statements of the program. The
dynamic structure of the program is the sequences of statements executed during the execution of the
program. In other words, both the static structure and the dynamic behavior are sequences of
statements. The sequence representing the static structure of a program is fixed. The sequence of
statements during execution can vary from one execution to another.
If the structure of the dynamic behavior resembles the static structure, it is easy to understand the
dynamic behavior of the program. The closer the correspondence between the execution and text
structures, the easier the program is to understand; the more the two structures differ, the harder it
becomes to reason about the program's behavior from the program text.

The goal of structured programming is to ensure that the static structure and the dynamic structures
are the same. The objective of structured programming is to write programs so that the sequence of
statements executed during the execution of a program is the same as the sequence of statements in
the text of that program. As the statements in a program text are linearly organized, the objective of
structured programming becomes developing programs whose control flow during execution is
linearized and follows the linear organization of the program text.
Clearly, no meaningful program can be written as a sequence of simple statements without any
branching or repetition. In structured programming, a statement is not a simple assignment statement,
it is a structured statement. The key property of a structured statement is that it has a single entry and
a single exit. That is, during execution, the execution of the (structured) statement starts from one
defined point and terminates at another defined point. With single-entry and single-exit
statements, we can view a program as a sequence of (structured) statements. And if all statements are
structured statements, then during execution, the sequence of execution of these statements will be
the same as the sequence in the program text. Hence, by using single-entry and single-exit statements,
the correspondence between the static and dynamic structures can be obtained. The most commonly
used single-entry and single-exit statements are:
Selection:  if B then S1 else S2
            if B then S1
Iteration:  While B do S
            Repeat S until B
Sequencing: S1; S2; S3
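As a small added illustration, the two loops below compute the same sum; the structured while version
is a single-entry, single-exit construct whose execution order mirrors the program text, while the goto
version must be traced by hand:

#include <stdio.h>

int main(void)
{
    int a[] = {1, 2, 3, 4};
    int n = 4, sum, i;

    /* Structured version: a single-entry, single-exit while loop. */
    sum = 0;
    i = 0;
    while (i < n) {
        sum += a[i];
        i++;
    }
    printf("%d\n", sum);

    /* Unstructured version: the same computation with gotos; the dynamic
       behavior no longer follows the linear program text. */
    sum = 0;
    i = 0;
top:
    if (i >= n) goto done;
    sum += a[i];
    i++;
    goto top;
done:
    printf("%d\n", sum);
    return 0;
}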
It can be shown that these three basic constructs are sufficient to program any conceivable algorithm.
Modern languages have other such constructs that help linearize the control flow of a program, which
makes it easier to understand a program. Hence, programs should be written so that, as far as possible,
single-entry, single-exit control constructs are used.
The basic goal, as we have tried to emphasize, is to make the logic of the program simple to
understand. The basic objective of using structured constructs is to linearize the control flow so that
the execution behavior is easier to understand. In linearized control flow, if we understand the
behavior of each of the basic constructs properly, the behavior of the program can be considered a
composition of the behaviors of the different statements.
Overall, it can be said that structured programming, in general, leads to programs that are easier to
understand than unstructured programs. Structured programming is a safe approach to achieve this
goal. An unstructured construct should be used only if the structured alternative is harder to
understand.

3. Explain the verification methods of detailed design.


There are a few techniques available to verify that the detailed design is consistent with the system
design. The focus of verification in the detailed design phase is on showing that the detailed design
meets the specifications laid down in the system design. Validating that the system as designed is
consistent with the requirements of the system is not stressed during detailed design. The three
verification methods we consider are design walkthroughs, critical design review and consistency
checkers.
Design Walkthroughs
In software engineering, a walkthrough or walk-through is a form of software review "in which a
designer or programmer leads members of the development team and other interested parties through
a software product, and the participants ask questions and make comments about possible errors,
violation of development standards, and other problems".
A design walkthrough is a manual method of verification. A design walkthrough is done in an informal
meeting called by the designer or the leader of the designer's group. The walkthrough group is usually
small. It includes the designer, the group leader and/or another designer of the group. The designer
might just get together with a colleague for the walkthrough or the group leader might require the
designer to have the walkthrough with him.
In a walkthrough the designer explains the logic step by step, and the members of the group ask
questions, point out possible errors, or seek clarification. A beneficial side effect of walkthroughs is that
in the process of articulating and explaining the design in detail, the designer himself can uncover some
of the errors.
Walkthroughs are essentially a form of peer review. Due to their informal nature, they are usually not
as effective as a formal design review.
Critical Design Review
The purpose of critical design review is to ensure that the detailed design satisfies the specifications laid
down during system design. It is very desirable to detect and remove design errors early, as the cost of
removing them later can be considerably more. Detecting errors in detailed design is the aim of critical
design review.
The critical design review process is similar to the other reviews, in that a group of people get together
to discuss the design with the aim of revealing designs errors or undesirable properties. The review
group includes, besides the author of detailed design, a member of the system design team, the
programmer responsible for ultimately coding the module(s) under review, and an independent
software quality engineer. Each member studies the design beforehand and, with the aid of a checklist,
marks items that the reviewer feels are incorrect or need clarification. The members ask questions and
the designer tries to explain the situation. During the discussion, design errors are revealed.
As with any review, it should be kept in mind that the aim of the meeting is to uncover design errors,
not try to fix them. Fixing is done later. Also, the psychological frame of mind should be healthy, and the
designer should not be put in a defensive position. The meeting should end with a list of action items, to
be acted on later by the designer. The use of checklists, as with other reviews, is considered important
for the success of the review.
Consistency Checkers
Design reviews and walkthroughs are manual processes; the people involved in the review and
walkthrough determine the errors in the design. If the design is specified in PDL or some other formally
defined design language, it is possible to detect some design defects by using consistency checkers.
Consistency checkers are essentially compilers that take as input the design specified in a design
language (PDL). Clearly, they cannot produce executable code, because the inner syntax of PDL allows
natural language, and many activities are specified only in natural language. However, the module
interface specifications (which belong to the outer syntax) are specified formally.
A consistency checker can ensure that any modules invoked or used by a given module actually exist in
the design and that the interface used by the caller is consistent with the interface definition of the
called module. It can also check if the used global data items are defined globally in the design.

4. Write a note on Logic/Algorithm design.


The basic goal in detailed design is to specify the logic for the different modules that have been
specified during system design. Specifying the logic will require developing an algorithm that will
implement the given specifications. Here we consider some principles for designing algorithms or logic
that will implement the given specifications.
An algorithm is a sequence of steps that need to be performed to solve a given problem. The problem
need not be a programming problem. We can, for example, design algorithms for such activities as
cooking dishes (the recipes are nothing but algorithms) and building a table.
In the software development life cycle, we are only interested in algorithms related to specific software.
For this purpose, an algorithm must be an unambiguous procedure for solving the given problem. A
procedure is a finite sequence of well-defined steps or operations, each of which requires a finite
amount of memory and time to complete.
There are a number of steps that one has to perform while developing an algorithm.
The starting step in the design of algorithms is the statement of the problem. The problem for which an
algorithm is being devised has to be precisely and clearly stated and properly understood by the person
responsible for designing the algorithm. For detailed design, the problem statement comes from the
system design.

The next step is development of a mathematical model for the problem. In modeling, one has to select
the mathematical structures that are best suited for the problem.
The next step is the design of the algorithm. During this step the data structure and program structure
are decided. Once the algorithm is designed, correctness should be verified. No clear procedure can be
given for designing algorithms.
The most common method for designing algorithms or the logic for a module is to use the stepwise
refinement technique. This technique breaks the logic design problem into a series of steps, so that the
development can be done gradually. The process starts by converting the specifications of the module
into an abstract description of an algorithm containing a few abstract statements. In each step, one or
several statements in the algorithm developed so far are decomposed into more detailed instructions.
The refinement terminates when all instructions are sufficiently precise that they can easily be
converted into programming language statements. During refinement, both data and instructions have
to be refined.
The stepwise refinement technique is a top-down method for developing detailed design. To perform
the stepwise refinement, a language is needed to express the logic of a module at different levels of
detail, starting from the specifications of the module. The language should have enough flexibility to
accommodate different levels of precision. Due to their lack of flexibility, programming languages
cannot be used in this context. PDL is very suitable, mainly because of certain properties it holds. The
outer syntax of PDL ensures that the design being developed is a computer algorithm whose statements
can later be converted to statements of a programming language, and its flexible natural-language-based
inner syntax is a plus point in this context.
An Example: Let us consider the problem of counting different words in a text file. Assume that the
COUNT module is specified, whose job is to determine the count of different words. During detailed
design we have to determine the logic of this module so that the specifications are met. We will use the
stepwise refinement method for this purpose. For specification purposes, we will use PDL, adapted to
C-style syntax. A simple strategy for the first step is shown below (Figure (a)). The primitive operations
used in this strategy are very high-level and need to be further refined (as shown in Figure (b)).
Specifically, there are three operations that need refinement. They are:
1. read file into the word list, whose purpose is to read all the words from the file and create a
word list,
2. sort(wl), which sorts the word list in ascending order, and
3. count different words from a sorted word list.
So far, only one data structure is defined: the word list. As refinement proceeds, more data structures
might be needed.
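Figures (a) and (b) are not reproduced in these notes. A plausible sketch of what they contain, in the
C-style PDL the text describes (the exact wording of the original figures may differ), is:

/* Figure (a): first-step strategy for the COUNT module */
int count (FILE file)
{
    word_list wl;
    read file into word list wl;
    sort (wl) in ascending order;
    count different words in the sorted list wl;
    return the count;
}

/* Figure (b): refinement of the reading operation */
read_from_file (FILE file, word_list wl)
{
    initialize wl to be empty;
    while not end of file {
        get a word from the file;
        add the word to wl;
    }
}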

In the next refinement step, we should select one of the three operations to be refined and further
elaborate it. In this step we will refine the reading procedure. One strategy for implementing the read
module is to read words and add them to the word list. This is shown in Figure (b). For the next
refinement step we select the counting function. A strategy for implementing this function is shown in
Figure (c). Similarly, we can refine the sort function. Once these refinements are done, we have a
design that is sufficiently detailed and needs no further refinement. For more complex problems, many
successive refinements might be needed for a single operation.
int different_words (word_list wl)
{
    word last, cur;
    int cnt;
    last = first word in wl;
    cnt = 1;
    while not end of list
    {
        cur = next word from wl;
        if (cur != last)
        {
            cnt = cnt + 1;
            last = cur;
        }
    }
    return (cnt);
}
Figure (c). Refinement of the function different_words.

5. Explain symbolic execution and execution tree.


Symbolic Execution
Here the program is "symbolically executed" with symbolic data. Hence the inputs to the program
are not numbers but symbols representing the input data, which can take different values. The
execution of the program proceeds like normal execution, except that it deals with values that are
not numbers but formulas consisting of the symbolic input values. The outputs are symbolic
formulas of input values. These formulas can be checked to see if the program will behave as
expected. This approach is called symbolic execution.
A simple program to compute the product of three positive integers is shown below (Figure 8.3). Let
us consider that the symbolic inputs to the function are xi, yi, and zi. We start executing this function
with these inputs. The aim is to determine the symbolic values of the different variables in the program
after "executing" each statement, so that eventually we can determine the result of executing this
function.
Example:
1. function product (x, y, z: integer): integer;
2. var tmp1, tmp2: integer;
3. begin
4.     tmp1 := x * y;
5.     tmp2 := y * z;
6.     product := tmp1 * tmp2 / y;
7. end;
Function to determine the product.

The symbolic execution of the function product:

After          Values of the variables
statement    x     y     z     tmp1      tmp2      product
1            xi    yi    zi    ?         ?         ?
2            xi    yi    zi    xi*yi     ?         ?
3            xi    yi    zi    xi*yi     yi*zi     ?
4            xi    yi    zi    xi*yi     yi*zi     (xi*yi)*(yi*zi)/yi
Execution tree
Here there is only one path in the function, and this symbolic execution is equivalent to checking for
all possible values of x, y, and z. (Note that the implied assumption is that input values are such that
the machine will be able to perform the product and no overflow will occur.) Essentially, with only one
path and an acceptable symbolic result, we can claim that the program is correct.
The different paths followed during symbolic execution can be represented by an "execution tree." A
node in this tree represents the execution of a statement, while an arc represents the transition from
one statement to another. For each if statement, there are two arcs from the node corresponding to
the if statement, one labeled with T (true) and the other with F (false), for the then and else paths. At
each branching, the path condition is also often shown in the tree.
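As an added illustration of how branches build the execution tree, consider symbolically executing this
small C function; the comments give the path conditions that would label each arc (ai and bi are the
symbolic inputs):

#include <stdio.h>

/* Returns the larger of a and b. Symbolic inputs: a = ai, b = bi. */
int max2(int a, int b)
{
    if (a > b)          /* branch node: T arc has path condition (ai > bi),
                           F arc has path condition (ai <= bi) */
        return a;       /* leaf 1: result ai, under path condition ai > bi */
    return b;           /* leaf 2: result bi, under path condition ai <= bi */
}

int main(void)
{
    /* Two concrete runs, one per path of the execution tree. */
    printf("%d %d\n", max2(5, 3), max2(2, 7));
    return 0;
}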

6. Explain internal documentation.


In the coding phase, the output document is the code itself. However, some amount of internal
documentation in the code can be extremely useful in enhancing the understandability of programs.
Internal documentation of programs is done by the use of comments. All languages provide a means for
writing comments in programs. Comments are textual statements that are meant for the program
reader and are not executed. Comments, if properly written and kept consistent with the code, can be
invaluable during maintenance. The purpose of comments is not to explain in English the logic of the
program. The program itself is the best documentation for the details of the logic. The comments
should explain what the code is doing, not how it is doing it. Comments should be provided for blocks of
code, particularly those parts of code that are hard to follow. Providing comments for modules is most
useful, as modules form the unit of testing, compiling, verification and modification.
A module's comments (its prologue) should contain the following information.
1. Module functionality or what the module is doing.
2. Parameters and their purpose.
3. Assumptions about the inputs, if any.
4. Global variables accessed and/or modified in the module.
An explanation of parameters (whether they are input only, output only, or both input and output; why
they are needed by the module; how the parameters are modified) can be quite useful during
maintenance. Stating how the global data is affected and the side effects of a module is also very useful
during maintenance. In addition, other information can be included, depending on the local coding
standards. Examples are the name of the author, the date of compilation, and the last date of
modification. It should be pointed out that prologues are useful only if they are kept consistent with
the logic of the module. (A prologue is an introductory note about the module, presented at its start.)
If the module is modified, then the prologue should also be modified, if necessary. A prologue that is
inconsistent with the internal logic of the module is probably worse than no prologue at all.
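As a small illustration, a prologue for a hypothetical search module might look like this (the module
name, parameters and details are all invented for the example):

struct employee { int emp_id; char name[40]; };

/*
 * Module     : search_record (hypothetical example)
 * Function   : searches the employee table for the record matching emp_id
 * Parameters : emp_id (input)  - identifier of the employee to locate
 *              rec    (output) - filled with the matching record, if found
 * Returns    : 1 if the record is found, 0 otherwise
 * Assumptions: the employee table is initialized and sorted by emp_id
 * Globals    : reads emp_table; modifies nothing
 * Author     : A. Coder        Last modified: (date)
 */
int search_record(int emp_id, struct employee *rec)
{
    /* body omitted in this sketch */
    (void)emp_id; (void)rec;
    return 0;
}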

7. Explain the concept of information hiding.


A software solution to a problem contains data structures that represent information in the problem
domain. That is, when software is developed to solve a problem, it uses data structures to capture the
information in that problem domain. In the problem domain, in general, only certain operations are
performed on a given piece of information; that is, a piece of information is used only in a limited
number of ways. For example, a ledger in an accountant's office has some defined uses: debit, credit,
check the current balance, etc. An operation where all debits are multiplied together and then divided
by the sum of all credits is typically not performed. So, any information in the problem domain
typically has a small number of defined operations performed on it.
When the information is represented as data structures, the same principle should be applied, and only
certain defined operations should be performed on the data structures. This is the principle of
information hiding. The information captured in the data structures should be hidden from the rest of
the system, and only the access functions on the data structures, which represent the operations
performed on the information, should be visible. The other modules access the data only through
these access functions. Information hiding can reduce the coupling between modules and make the
system more maintainable. It is also an effective tool for managing the complexity of developing
software. All object-oriented languages support the concept of information hiding.
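A minimal sketch of this principle in C, using the ledger example from above (the function names and
representation are assumptions of the sketch): other modules see only an opaque type and its access
functions, while the representation stays private to one file.

/* ledger.h -- the only part visible to other modules */
typedef struct ledger ledger;            /* opaque type: layout is hidden */
ledger *ledger_create(void);
void    ledger_debit(ledger *l, double amount);
void    ledger_credit(ledger *l, double amount);
double  ledger_balance(const ledger *l);

/* ledger.c -- representation and logic, private to this file */
#include <stdlib.h>
struct ledger { double balance; };       /* the hidden data structure */
ledger *ledger_create(void)                { return calloc(1, sizeof(ledger)); }
void    ledger_debit(ledger *l, double a)  { l->balance -= a; }
void    ledger_credit(ledger *l, double a) { l->balance += a; }
double  ledger_balance(const ledger *l)    { return l->balance; }

Because callers cannot touch struct ledger directly, its representation can change later without
affecting any module that uses it.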

8. What are the activities that are undertaken during critical design review?
The purpose of critical design review is to ensure that the detailed design satisfies the specifications laid
down during system design. It is very desirable to detect and remove design errors early, as the cost of
removing them later can be considerably more. Detecting errors in detailed design is the aim of critical
design review.
The critical design review process is similar to the other reviews: a group of people get together to
discuss the design with the aim of revealing design errors or undesirable properties. The review group
includes, besides the author of the detailed design, a member of the system design team, the
programmer responsible for ultimately coding the module(s) under review, and an independent
software quality engineer. Each member studies the design beforehand and, with the aid of a checklist,
marks items that the reviewer feels are incorrect or need clarification. In the meeting, the members ask
questions and the designer tries to explain the situation. During the discussion, design errors are
revealed.
As with any review, it should be kept in mind that the aim of the meeting is to uncover design errors,
not try to fix them. Fixing is done later. Also, the psychological frame of mind should be healthy, and the
designer should not be put in a defensive position. The meeting should end with a list of action items, to
be acted on later by the designer. The use of checklists, as with other reviews, is considered important
for the success of the review.

9. Explain static analysis and its uses.


Analysis of programs by methodically analyzing the program text is called static analysis. It is usually
performed mechanically with the aid of software tools. During static analysis the program itself is not
executed; the program text is the input to the tools. The aim of static analysis tools is to detect errors
or potential errors, or to generate information about the structure of the program that can be useful
for documentation or understanding of the program.
Static analysis tools focus on detecting errors. Two approaches are possible. The first is to detect
patterns in code that are "unusual" or "undesirable" and that are likely to represent defects. The
second is to directly look for defects in the code, i.e. to look for those conditions that can cause
programs to fail when executing.
In both cases, because a static analyzer tries to identify defects without running the code, only by
analyzing it, it sometimes flags situations as errors that are not actually errors. These limitations of a
static analyzer are characterized by its soundness and completeness. Soundness captures the
occurrence of false positives among the errors the static analyzer identifies, and completeness
characterizes how many of the existing errors are missed by the static analyzer.
An advantage is that static analysis sometimes detects the errors themselves, not just the presence of
errors, as in testing. This saves the effort of tracing the error from the data that reveals its presence.
Furthermore, static analysis can provide "warnings" against potential errors and can provide insight
into the structure of the program. It is also useful for determining violations of local programming
standards, which standard compilers will be unable to detect. Extensive static analysis can
considerably reduce the effort needed later during testing.
An anomaly is an abnormal way of doing something. For example, it is abnormal to successively assign
two values to a variable without using the earlier value at all, or to use the value of a variable before
assigning any value to it. Data flow anomalies are "suspicious" uses of data in a program. In general,
data flow anomalies are technically not errors, and they may go undetected by the compiler. However,
they are often a symptom of an error, caused by carelessness in typing or an error in coding. At the
very least, the presence of data flow anomalies implies poor coding. Hence, if a program has data flow
anomalies, they should be properly addressed.
x = a; x = b; // x does not appear on any right-hand side between the two assignments, i.e. the first value is not used at all
An example of a data flow anomaly is the live variable problem, in which a variable is assigned some
value but the variable is then not used in any later computation. Such an assignment is clearly
redundant. Another simple example is having two assignments to a variable without using the value of
the variable between the two assignments; in this case the first assignment is redundant. Consider the
simple code segment given above: clearly, the first assignment statement is useless. Perhaps the
programmer meant to write y = b in the second statement, and mistyped y as x. In that case, detecting
this anomaly and directing the programmer's attention to it can save considerable effort in testing and
debugging.
In addition to revealing anomalies, data flow analysis can provide valuable information for the
documentation of programs. For example, data flow analysis can provide information about which
variables are modified on invoking a procedure in the caller program and the values of the variables
used in the called procedure (this can also be used to make sure that the interface of the procedure is
minimal, resulting in lower coupling). This information can be useful during maintenance to ensure
that there are no undesirable side effects of some modifications to a procedure.
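A small hypothetical fragment of the kind a data flow analyzer would flag:

int f(int a, int b)
{
    int x, y, unused;
    x = a;          /* anomaly: x is redefined below before this value is used */
    x = b;          /* perhaps the programmer meant  y = b;                    */
    unused = a + b; /* anomaly: assigned here but never used later (live variable) */
    y = x + 1;
    return y;
}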

10. Write a note on top down and bottom up approaches in coding.

In a top-down implementation, the implementation starts from the top of the hierarchy and proceeds
to the lower levels. First the main module is implemented, then its subordinates, then their
subordinates, and so on. In a bottom-up implementation, the process is the reverse: development
starts with implementing the modules at the bottom of the hierarchy and proceeds through the higher
levels until it reaches the top.
Top-down and bottom-up implementation should not be confused with top-down and bottom-up
design. Here, the design is being implemented, and if the design is fairly detailed and complete, its
implementation can proceed in either the top-down or the bottom-up manner, even if the design was
produced in a top-down manner. Which of the two is used mostly affects testing. All large systems
must be built by assembling validated pieces together, and the case with software systems is the same:
parts of the system have to be built and tested separately before being put together to form the
system. Because parts have to be built and tested separately, the issue of top-down versus bottom-up
arises, as the sketch below illustrates.
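In the top-down style, a higher-level module can be compiled and tested before its subordinates exist
by temporarily replacing them with stubs. A minimal sketch (the module names are invented for
illustration):

#include <stdio.h>

/* Subordinate module, not yet implemented: a stub stands in for it
 * so that the top-level module can be compiled and tested first. */
void sort_words(void)
{
    printf("sort_words: stub called\n");
}

/* Top-level module, implemented first in a top-down approach. */
void process_document(void)
{
    sort_words();   /* calls its subordinate through the final interface */
}

int main(void)
{
    process_document();
    return 0;
}

In a bottom-up implementation the roles are reversed: sort_words would be written and tested first,
exercised by a small test driver instead of the real process_document.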

Unit-4
1. Explain dataflow based testing with suitable example.
In data flow-based testing, besides the control flow, information about where variables are defined
and where the definitions are used is also employed to specify the test cases. The basic idea behind
data flow-based testing is to make sure that during testing the definitions of variables and their
subsequent uses are exercised.

For data flow-based testing, a definition-use graph for the program is first constructed from the control
flow graph of the program. A statement in a node of the flow graph (each node represents a block of
code) may have variable occurrences in it. A variable occurrence can be one of the following three
types:
• def represents the definition of a variable. Variables on the left-hand side of an assignment
statement are the ones getting defined.
• c-use represents computational use of a variable. Any statement that uses the value of a variable
for computational purposes is said to be making c-use of the variable. In an assignment statement,
all variables on the right-hand side have a c-use occurrence.
• p-use represents predicate use. These are all the occurrences of variables in a predicate, which is
used to transfer control.
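A small hypothetical fragment annotated with these three occurrence types:

int classify(int a, int b)
{
    int x, y = 0;     /* def of y                          */
    x = a + b;        /* def of x;  c-use of a and b       */
    if (x > 0)        /* p-use of x (controls the branch)  */
        y = 2 * x;    /* def of y;  c-use of x             */
    return y;         /* c-use of y                        */
}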

2. Write a note on adaptive, preventive maintenance and corrective maintenance.


In order for a software system to remain useful in its environment it may be necessary to carry out a
wide range of maintenance activities upon it. Generally, there are three different categories of
maintenance activities:
Corrective
Changes necessitated by actual errors in a system are termed corrective maintenance. A defect or “bug”
can result from design errors, logic errors and coding errors. Design errors occur when changes made to
the software are incorrect, incomplete, wrongly communicated or the change request is misunderstood.
Logical errors result from invalid tests and conclusions, incorrect implementation of design specification,
faulty logic flow or incomplete test data.
Coding errors are caused by incorrect implementation of the detailed logic design and incorrect use of
the source code logic. Defects are also caused by data processing errors and system performance
errors. All these errors, sometimes called "residual errors" or "bugs", prevent the software from
conforming to its agreed specification.
In the event of a system failure due to an error, actions are taken to restore operation of the software
system. The approach here is to locate the original specifications in order to determine what the system
was originally designed to do.
However, due to pressure from management, maintenance personnel sometimes resort to emergency
fixes known as “patching”. The nature of this approach gives rise to a range of problems that include
increased program complexity. Corrective maintenance has been estimated to account for 20% of all
maintenance activities.
Adaptive
Any effort that is initiated as a result of changes in the environment in which a software system must
operate is termed adaptive change. Adaptive change is a change driven by the need to accommodate
modifications in the environment of the software system, without which the system would become
increasingly less useful until it became obsolete.
The term environment in this context refers to all the conditions and influences which act from outside
upon the system, for example business rules, government policies, work patterns, software and
hardware operating platforms. A change to the whole or part of this environment will warrant a
corresponding modification of the software.
Unfortunately, with this type of maintenance the user does not see a direct change in the operation of
the system, but the software maintainer must expend resources to effect the change. This task is
estimated to consume about 25% of the total maintenance activity.
Preventive
The long-term effect of corrective, adaptive and perfective change is expressed in Lehman's law of
increasing entropy:

As a large program is continuously changed, its complexity, which reflects deteriorating structure,
increases unless work is done to maintain or reduce it.
The IEEE defined preventive maintenance as “maintenance performed for the purpose of preventing
problems before they occur”. This is the process of changing software to improve its future
maintainability or to provide a better basis for future enhancements.
Preventive change is usually initiated from within the maintenance organization, with the intention of
making programs easier to understand and hence facilitating future maintenance work. Preventive
change does not usually give rise to a substantial increase in the baseline functionality.
Preventive maintenance is rare, the reason being that other pressures tend to push it to the end of the
queue. For instance, a demand may come to develop a new system that will improve the organization's
competitiveness in the market. This will likely be seen as more desirable than spending time and money
on a project that delivers no new function. Still, it is easy to see that if one considers the probability of a
software unit needing change, and the time pressures that are often present when the change is
requested, it makes a lot of sense to anticipate change and to prepare accordingly.

3. Explain SQA Robot and Load Runner.


SQA Robot
SQAForums is the most popular software testing and quality assurance discussion site. It includes over
50 forums that cover almost every area in software testing, quality assurance and quality engineering.
It also hosts a forum for every major software test tool, such as WinRunner, QuickTest Pro and
LoadRunner by HP Mercury Interactive, IBM Rational Robot, TestPartner by Compuware and SilkTest by
Borland (formerly Segue Software), to name a few. It is a place to get help or support on almost any
software testing tool, including WinRunner, LoadRunner, SilkTest, Robot, QARun, eTest, TestComplete
and WebLoad.
Load Runner
HP LoadRunner is an automated performance and test automation product from Hewlett-Packard for
application load testing: examining system behaviour and performance while generating actual load.
HP acquired LoadRunner as part of its acquisition of Mercury Interactive in November 2006.
A software testing tool, HP LoadRunner works by creating virtual users who take the place of real
users, operating client software such as Internet Explorer and sending requests using the HTTP
protocol to IIS or Apache web servers. HP LoadRunner can simulate thousands of concurrent users to
put the application through the rigors of real-life user loads, while collecting information from key
infrastructure components (web servers, database servers, etc.). The results can then be analyzed in
detail to explore the reasons for particular behaviour.
Architecture
HP LoadRunner by default installs 3 icons on the Windows desktop:

• VuGen (Virtual User Generator) for generating and editing scripts.
• Controller for composing scenarios which specify which load generators are used for which script, and
for how long, etc. During runs the Controller receives real-time monitoring data and displays status.
• Analysis which assembles logs from various load generators and formats reports for visualization of run
result data and monitoring data.

4. Explain the equivalence class partitioning.


In this method the domain of all the inputs is divided into a set of equivalence classes, so that if any
test in a class succeeds, then every test in that class will succeed. That is, we want to identify classes of
test cases such that the success of one test case in a class implies the success of the others.
However, without looking at the internal structure of the program, it is impossible to determine ideal
equivalence classes. The equivalence class partitioning method tries to approximate this ideal. An
equivalence class is formed of the inputs for which the behavior of the system is specified to be the
same. Each group of inputs for which the behavior is expected to be different from the others is
considered a separate equivalence class.
For example, the specification of a module that determines the absolute value of integers specifies
one behavior for positive integers and another behavior for negative integers. In this case, we will form
two equivalence classes: one consisting of positive integers and the other consisting of negative
integers.
Equivalence classes are usually formed by considering each condition specified on an input as
specifying a valid equivalence class and one or more invalid equivalence classes. For example, if an
input condition specifies a range of values (say 0 < count < max), then form one valid equivalence class
with that range and two invalid equivalence classes, one with values at or below the lower bound (i.e.
count <= 0) and another with values at or above the upper bound (i.e. count >= max).
It is often useful to consider equivalence classes in the output. For an output equivalence class, the goal
is to generate test cases such that the output for that test case lies in the output equivalence class.
Determining test cases for output classes may be more difficult, but output classes have been found to
reveal errors that are not revealed by just considering the input classes.
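A minimal sketch of test selection for the absolute-value example above, with one representative test
case per equivalence class (the particular values are chosen arbitrarily):

#include <assert.h>
#include <stdlib.h>

int main(void)
{
    assert(abs(12) == 12);   /* representative of the positive-integer class */
    assert(abs(-7) == 7);    /* representative of the negative-integer class */
    return 0;
}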

5. Explain control flow based testing.


In this method, the control flow graph of a program is considered, and coverage of various aspects of
the graph is specified as the criteria. A control flow graph G of a program P has a set of nodes and a set
of edges. A node in this graph represents a block of statements that are always executed together. An
edge (i, j) from node i to node j represents a possible transfer of control after executing the last
statement of the block represented by node i to the first statement of the block represented by node j.
A node corresponding to a block whose first statement is the start statement of P is called the start
node of G. Similarly, a node corresponding to a block whose last statement is an exit statement is
called an exit node.

Now let us consider control flow-based criteria. The simplest coverage criterion is statement coverage,
which requires that each statement of the program be executed at least once during testing. This is
also called the all-nodes criterion. This coverage criterion is not very strong and can leave errors
undetected. For example, if there is an if statement in the program without an else part, the statement
coverage criterion for this statement will be satisfied by a test case that evaluates the condition to
true. No test case is needed that ensures that the condition in the if statement evaluates to false. This
is a major problem, because decisions in programs are potential sources of errors.
Another coverage criterion is branch coverage, which requires that each edge in the control flow graph
be traversed at least once during testing. In other words, branch coverage requires that each decision
in the program be evaluated to true and false values at least once during testing.
Testing based on the branch coverage criterion is known as branch testing. A problem with branch
coverage arises if a decision has many conditions in it (consisting of a Boolean expression with the
Boolean operators "and" and "or"). In such a situation, the decision can be evaluated to true and false
without actually exercising all the conditions.
It has been observed that there are many errors whose presence is not detected by branch testing,
because some errors are related to combinations of branches, and their presence is revealed only by
an execution that follows a path including those branches. Hence a more general coverage criterion,
which covers all the paths, is required. This is called the path coverage criterion, and testing based on
this criterion is called path testing. The problem with this criterion is that programs containing loops
can have an infinite number of possible paths. Some methods have been suggested to solve this
problem; one such method is to limit the number of paths considered.
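A small sketch of the if-without-else situation described above (the function and test values are
invented for illustration):

#include <assert.h>

/* An if statement without an else part. */
int clamp_to_zero(int x)
{
    if (x < 0)
        x = 0;
    return x;
}

int main(void)
{
    assert(clamp_to_zero(-5) == 0);  /* this one test already executes every statement */
    assert(clamp_to_zero(3)  == 3);  /* branch coverage also requires the false edge   */
    return 0;
}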

6. Explain Silk Test.


Silk Test is a tool for automated functional and regression testing of enterprise applications. It was
originally developed by Segue Software, which was acquired by Borland in 2006. Borland was in turn
acquired by Micro Focus International in 2009.
Silk Test offers various clients:
• Silk Test Classic uses the domain-specific 4Test language for automation scripting. It is an object-
oriented language similar to C++, using the concepts of classes, objects, and inheritance.
• Silk4J allows automation in Eclipse using Java as the scripting language.
• Silk4NET allows the same in Visual Studio using VB or C#.
• Silk Test Workbench allows automated testing at the visual level (similar to the former TestPartner),
as well as using VB.NET as the scripting language.

7. Write a short note on WinRunner.


WinRunner is one of the most widely used automated software testing tools. Its main features are:
• Developed by Mercury Interactive.
• A functional testing tool.
• Supports client/server and web technologies such as VB, VC++, D2K, Java, HTML, Power Builder,
Delphi and Siebel (ERP).
• To support .NET, XML, SAP, PeopleSoft, Oracle applications and multimedia, we can use QTP.
• WinRunner runs on Windows only.
• XRunner runs only on UNIX and Linux.
• The tool was developed in C in a VC++ environment.
• To automate manual tests, WinRunner uses TSL (Test Script Language, a C-like language).

8. Write the important features of Test Director. (Google)


Web-based Site Administrator: The Site Administrator includes tabs for managing projects, adding
users and defining user properties, monitoring connected users, monitoring licenses and monitoring
TestDirector server information.

Domain Management: TestDirector projects are now grouped by domain. A domain contains a group of
related TestDirector projects, and assists you in organizing and managing a large number of projects.
Enhanced Reports and Graphs: Additional standard report types and graphs have been added, and the
user interface is richer in functionality. The new format enables you to customize more features.
Version Control: Version control enables you to keep track of the changes you make to the testing
information in your TestDirector project. You can use your version control database for tracking
manual, WinRunner and QuickTest Professional tests in the test plan tree and test grid.
Collaboration Module: The Collaboration module, available to existing customers as an optional
upgrade, allows you to initiate an online chat session with another TestDirector user. While in a chat
session, users can share applications and make changes.
TestDirector Advanced Reports Add-in: With the new Advanced Reports Add-in, TestDirector users are
able to maximize the value of their testing project information by generating customizable status and
progress reports. The Advanced Reports Add-in offers the flexibility to create custom report
configurations and layouts, unlimited ways to aggregate and compare data and ability to generate
cross-project analysis reports.
Automatic Traceability Notification: The new traceability feature automatically traces changes to
testing process entities, such as requirements or tests, and notifies the user via a flag or e-mail. For
example, when a requirement changes, the associated test is flagged and the tester is notified that the
test may need to be reviewed to reflect the requirement changes.
Coverage Analysis View in Requirements Module: The graphical display enables you to analyze the
requirements according to test coverage status and view the associated tests, grouped according to
test status.
Hierarchical Test Sets: Hierarchical test sets provide the ability to better organize your test run process
by grouping test sets into folders.
Workflow for all TestDirector Modules: The addition of the script editor to all modules enables
organizations to customize TestDirector to follow and enforce any methodology and best practices.
Improved Customization: With a greater number of available user fields, the ability to add memo fields
and the ability to create input masks, users can customize their TestDirector projects to capture any
data required by their testing process. A new rich edit option adds color and formatting options to all
memo fields.

9. Explain test case and test criteria.


Test cases are required to find out the presence of faults in a system. Test cases are the inputs to the
testing process. In order to reveal the behavior of the system, it is necessary to have a large set of valid
test cases.
While selecting test cases, the primary objective is to ensure that if there is an error or fault in the
program, it is exercised by one of the test cases. An ideal test case set is one that succeeds only if there
are no errors in the program. One possible ideal set of test cases is one that includes all the possible
inputs to the program; this is often called exhaustive testing. However, exhaustive testing is impractical
and infeasible, as even for small programs the number of elements in the input domain can be
extremely large. Hence, a realistic goal for testing is to select a set of test cases that is close to ideal.
The range and type of test cases to be prepared in order to perform testing depend upon the test
criterion. A test criterion is a condition that must be satisfied by a set of test cases; the criterion
becomes a basis for test selection. For example, if the criterion is that all statements in the program be
executed at least once during testing, then a set of test cases T satisfies this criterion for a program P if
the execution of P with T ensures that each statement in P is executed at least once.
There are two fundamental properties for a testing criterion: reliability and validity. A criterion is
reliable if all the sets of test cases that satisfy the criterion detect the same errors; i.e. every set will
detect exactly the same errors. A criterion is valid if for any error in the program there is some set
satisfying the criterion that will reveal the error.

=========================================BYE=========================================
