Software Engineering

A successful CASE tool should have the following key characteristics: 1. Support a standard software development methodology like UML. 2. Offer flexibility in tool usage and editor choice. 3. Provide strong integration between development stages so changes are reflected everywhere. 4. Integrate with testing software to automate regression and other tests. 5. Support reverse engineering to generate models from existing code.

Uploaded by

MURARI MOUNIKA

Software Engineering Notes READ & PASS Aney Academy

What are the characteristics of a successful CASE tool? Explain briefly.


The characteristics of a successful CASE tool are:
• A standard methodology:-- A CASE tool must support a standard software development methodology and standard modelling techniques. In the present scenario, most CASE tools are moving towards UML.
• Flexibility:-- Flexibility in the use of editors and other tools. The CASE tool must offer flexibility and give the user a choice of editors and development environments.
• Strong integration:-- The CASE tools should be integrated to support all the stages. This implies that if a change is made at any stage, for example in the model, it should be reflected in the code, the documentation and all related design and other documents, thus providing a cohesive environment for software development.
• Integration with testing software:-- The CASE tools must provide interfaces for automatic testing tools that take care of regression and other kinds of testing under changing requirements.
• Support for reverse engineering:-- A CASE tool must be able to generate complex models from already written code.
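The reverse-engineering idea can be illustrated with a minimal sketch (in Python, which these notes do not prescribe; the `Account` class and `extract_model` helper are invented for illustration). The standard `inspect` module is used to recover a simple class model from existing code, much as a CASE tool would generate a diagram from source:

```python
import inspect

class Account:
    """Toy 'existing code' from which a model will be recovered."""
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

def extract_model(cls):
    """Recover a simple model: the class name and its public method signatures."""
    methods = {
        name: str(inspect.signature(fn))
        for name, fn in inspect.getmembers(cls, inspect.isfunction)
        if not name.startswith("_")
    }
    return {"class": cls.__name__, "methods": methods}

model = extract_model(Account)
print(model)
```

A real CASE tool would go further (relationships, attributes, diagrams), but the principle is the same: the model is derived mechanically from the code rather than drawn by hand.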
Write a short note on Java Device Test Suite(JDTS).
The Java Device Test Suite (JDTS) is used to test J2ME applications; implementations of CLDC and MIDP can be evaluated using it. Additional test suites can be added to JDTS depending on the requirement, which enables the developer to execute a selected set of test cases. It is also possible to automate the entire testing process, although in that case the test results do not cover problems that arise from interaction with the application. JDTS provides a Test Console as well as a Test Manager, which help developers test their J2ME applications.
The following types of tests can be conducted using JDTS:
➢ Functional tests→The application is tested to check whether it behaves as intended or not. Its behaviour with both
internal and external components is checked.
➢ Stress tests→During this testing, the application will be run under the upper limits of the memory and the processing power.
➢ Performance tests→During this testing response time is evaluated.
➢ Security tests→In security tests, the MIDlets are checked for their security model which also includes access permissions.
The following are some of the features of JDTS:
1. More than one test suite can be executed at once.
2. Test Manager can accept connections from multiple test consoles concurrently. Each test console might be testing several
devices with the same configuration.
3. Test reports are generated in HTML format.
Define debugging. Explain the characteristics of bugs in detail.
Debugging refers to the process of identifying the cause for defective behavior of a system and addressing that problem. When a
test case uncovers an error, debugging is the process that results in the removal of the error. The debugging process attempts to
match symptoms with causes, thereby leading to error correction. Debugging is the activity of determining the exact nature and
location of the suspected error within the program and fixing the error.
Characteristics of bugs:
1. The symptom and the cause may be geographically remote. Highly coupled program structures exacerbate this situation.
2. The symptom may disappear when another error is corrected.
3. The symptom may actually be caused by non-errors.
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions.
7. The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
8. The symptom may be due to causes that are distributed across a number of tasks running on different processors.
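Characteristic 1 (symptom remote from cause) can be seen in a minimal Python sketch; the function names are invented for illustration. The cause is a mutable default argument in the function definition, but the symptom only surfaces much later, at the second call site:

```python
def add_item(item, items=[]):   # the CAUSE: a mutable default argument
    items.append(item)
    return items

first = add_item("a")    # behaves as expected: ["a"]
second = add_item("b")   # the SYMPTOM appears here: ["a", "b"], not ["b"]

def add_item_fixed(item, items=None):
    """The fix: create a fresh list on each call instead of sharing one."""
    if items is None:
        items = []
    items.append(item)
    return items

print(second, add_item_fixed("b"))  # ['a', 'b'] ['b']
```

Debugging means working back from the surprising second result to the shared default list created once at function definition time.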
Discuss the objectives of software change management process.
Software change management is an umbrella activity that aims at maintaining the integrity of software products and items.
Software change management is a challenging task faced by modern project managers, especially in an environment where
software development is spread across a wide geographic area with a number of software developers in a distributed environment.
Enforcement of regulatory requirements and standards demands robust change management. The aim of change management is to facilitate justifiable changes in the software product. The need for a formal process of change management is acutely felt in the current scenario, where software is developed in a very complex distributed environment, with many versions of the software existing at the same time and many developers involved in the development process using different technologies.
Objectives of software change management process are:
• Configuration identification: The process of identification involves identifying each component name, giving them a version name
and a configuration identification.
• Configuration control: Controlling changes to a product.
• Review: Reviewing the process to ensure consistency among different configuration items.
• Status accounting: Recording and reporting the changes and status of the components.
• Auditing and reporting: Validating the product and maintaining consistency of the product throughout the software life cycle.
Explain organisation of web application teams.
Web applications tend to need much more ongoing support, maintenance and enhancement than other kinds of software. After the initial application has been rolled out comes the stage of maintenance. The need for project management during development itself is sometimes questioned but, as in any other project, good management is critical to success in a web application project as well. The following are the typical roles in web application teams:
1. Webmaster→ This role is not unique to web applications, but is usually otherwise referred to as the administrator. It entails taking
care of the site on a day to day basis and keeping it in good health. It involves close interaction with the support team to ensure
that problems are resolved and the appropriate changes made. The webmaster's activities include ensuring proper access control and
security for the site. This could include authentication, taking care of the machines that house the server and so on.
2. Application Support Team→ In conventional software projects, while maintenance is important, it may or may not be done by the
organisation that developed the project. In web applications, the organisation that developed the site is quite likely to be given the
responsibility of its maintenance. This is because web applications tend to keep evolving and what corresponds to the development
phase in a conventional application is here quite brief and frenetic. The activities here can consist of removing bugs, taking care
of cosmetic irritants, altering the business rules of the application as required, etc.
3. Content Development Team→ Web applications frequently require much more ongoing, sustained effort to retain user interest
because of the need to change the content in the site. In the case of a news site, this updating can happen every few minutes. The
actual news could be coming from a wire feed from some news agencies, or from the organisation’s own sources. A site could be
serving out a wide variety of information and other stories from different interest areas. Content development teams could be
researchers who seek out useful or interesting facts, authors, experts in different areas, and so on.
4. Web Publisher→ This is an important role that connects the content development team to the actual website. The raw material
created by the writers has to be converted into a form that is suitable for serving out of a webserver. It means formatting the
content according to the requirements of a markup language such as HTML. In the case of automated news feeds, such publishing
needs to be as tool driven as possible. The web publisher must have a good understanding of the technical aspects of web servers
and the markup language.
BASELINES→The term baseline is used in the context of software change management. A baseline is an approved software configuration item that has been reviewed and finalised. The baseline serves as a reference for any change. Once a change to a reference baseline is reviewed and approved, it acts as the baseline for the next change(s). A baseline is a set of configuration items (h/w, documents and
s/w components) that have been formally reviewed and agreed upon, thereafter serve as the basis for future development, and
that can be changed only through formal change control procedures. A baseline is functionally complete, i.e., it has a defined
functionality. The features of these functionalities are documented for reference for further changes. An example of a baseline is an
approved design document that is consistent with the requirements.

Version Control →Version control is the management of multiple revisions of the same unit of item during the software
development process. The initial version of the item is given version number Ver 1.0. Subsequent changes to the item which could
be mostly fixing bugs or adding minor functionality, is given as Ver 1.1 and Ver 1.2. After that, a major modification to Ver 1.2 is
given the number Ver 2.0. At the same time, a parallel version of the same item without the major modification is maintained and
given the version number Ver 1.3.

Software engineers use this version control mechanism to track the source code, documentation and other configuration items.
Commercial tools are available for version control, which perform one or more of the following tasks:
• Source code control • Revision control • Concurrent version control
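The `Ver X.Y` numbering scheme above can be sketched in Python (the helper name is invented for illustration). Comparing versions as tuples of integers avoids the string-comparison trap where "Ver 1.10" would incorrectly sort before "Ver 1.2":

```python
def parse_version(ver):
    """Turn a 'Ver X.Y' string into a comparable (major, minor) tuple."""
    # str.removeprefix requires Python 3.9+
    major, minor = ver.removeprefix("Ver ").split(".")
    return (int(major), int(minor))

# The history from the text: 1.0 -> 1.1 -> 1.2 -> 2.0, plus the parallel 1.3.
versions = ["Ver 1.0", "Ver 1.1", "Ver 1.2", "Ver 2.0", "Ver 1.3"]
latest = max(versions, key=parse_version)
print(latest)  # Ver 2.0
```

Real version control tools track far more (branches, merges, history per file), but the ordering of revisions rests on exactly this kind of structured comparison.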
Change Control→Change control is a management process, automated to some extent, that provides a systematic mechanism for handling change. Changes can be initiated by the user or other stakeholders during the maintenance phase, although a change request may even come up during the development phase of the software. The adoption and evolution of changes are carried out in a disciplined manner. The real challenge for the change manager and project leader is to accept and accommodate all justifiable changes without affecting the integrity of the product and without any side effects. A change control report is generated by the technical team listing the extent of the changes and potential side effects. A designated team called the change control authority makes the final decision, based on the change control report, whether to accept or reject the change request. The role of the change control authority is vital for any item which has become a baseline item. All changes to a baseline item must follow a formal change control process.
Explain the concept of cleanroom software engineering.
Cleanroom software engineering is an engineering and managerial process for the development of high-quality software with
certified reliability. Cleanroom was originally developed by Dr. Harlan Mills. The name “Cleanroom” was taken from the electronics
industry, where a physical clean room exists to prevent introduction of defects during hardware fabrication. It reflects the same
emphasis on defect prevention rather than defect removal, as well as certification of reliability for the intended environment of use.
The focus of Cleanroom involves moving from traditional software development practices to rigorous, engineering-based practices.
This software development is based on mathematical principles. It follows the box principle for specification and design. Formal
verification is used to confirm correctness of implementation of specification. Testing is based on statistical principles.
The following principles are the foundation for the Cleanroom-based software development:
• Incremental development under statistical quality control (SQC):→ Incremental development as practiced in Cleanroom provides
a basis for statistical quality control of the development process.
• Software development based on mathematical principles:→ In Cleanroom software engineering development, the key principle is
that, a computer program is an expression of a mathematical function. The Box Structure Method is used for specification and
design, and functional verification is used to confirm that the design is a correct implementation of the specification.
• Software testing based on statistical principles:→ In Cleanroom, software testing is viewed as a statistical experiment. A
representative subset of all possible uses of the software is generated, and performance of the subset is used as a basis for
conclusions about general operational performance.
The following is the phase-wise strategy followed for Cleanroom software development.
• Increment planning:→ The project plan is built around the incremental strategy.
• Requirements gathering:→ Customer requirements are elicited and refined for each increment using traditional methods.
• Box structure specification:→ Box structures isolate and separate the definition of behaviour, data, and procedures at each level
of refinement.
• Formal design:→ Specifications (black-boxes) are iteratively refined to become architectural designs (state-boxes) and
component-level designs (clear boxes).
• Correctness verification:→ Correctness questions are asked and answered, formal mathematical verification is used as required.
• Code generation, inspection, verification:→ Box structures are translated into program language; inspections are used to ensure
conformance of code and boxes, as well as syntactic correctness of code; followed by correctness verification of the code.
• Statistical test planning:→ A suite of test cases is created to match the probability distribution of the projected product usage
pattern.
• Statistical use testing:→ A statistical sample of all possible test cases is used rather than exhaustive testing.
• Certification:→ Once verification, inspection, and usage testing are complete and all defects removed, the increment is certified
as ready for integration.
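The statistical use testing step above can be sketched in Python; the operation names and usage profile are hypothetical. Test cases are drawn so that their frequencies match the projected usage distribution, rather than attempting exhaustive testing:

```python
import random

# Hypothetical usage profile: each operation with its projected usage probability.
usage_profile = {"view_balance": 0.60, "deposit": 0.25, "withdraw": 0.15}

def sample_test_cases(profile, n, seed=0):
    """Draw n test cases whose frequencies approximate the usage distribution."""
    rng = random.Random(seed)            # fixed seed for a reproducible suite
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)

suite = sample_test_cases(usage_profile, 1000)
print(suite.count("view_balance") / len(suite))  # close to 0.60
```

Because the sample mirrors expected use, the failure rate observed on the suite supports statistical conclusions about operational reliability, which is the basis of the certification step.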

Limitations of Cleanroom software engineering:
1. Cleanroom techniques are too theoretical, too mathematical, and too radical for use in real software development.
2. It relies on correctness verification and statistical quality control rather than unit testing.
Explain various testing techniques.
The following are some common testing techniques:
1. White box testing—White box testing is also known as structural testing or glass box testing. Its goal is to test the program at the level of source code. Here the tester has complete knowledge of the programming language and the structure in which the product is developed. White box testing is basically performed by the programmer.
2. Black box testing—Black box testing, also known as functional testing, tests the application software from its functional point of view. In this type of testing, the software is checked to see whether it fulfils the user's requirements or not. Black box testing is basically performed by a test team that does not know the internal code or program logic; the testers exercise the system experimentally.
3. Alpha testing—In alpha testing, a prospective customer uses the software while the developer observes the process. The end user uses the software in natural ways, while the developer focuses on the working mechanism and the troubles the end user runs into.
4. Beta testing—Beta testing is performed at the end user's site, with no developer present. The software product is treated as a live application by the end user. The developer collects feedback and experiences from the end users periodically. On the basis of the users' feedback, the developer reworks the product and releases it again.
5. Acceptance testing—Software for a specific customer is built for one client with specific requirements. Acceptance testing checks whether the actual user is ready to accept the new software product.
6. Stress testing—Stress testing is performed to test the ability of a system to handle abnormal conditions.
7. Security testing—Security testing is performed to test the security level for sensitive data. It ensures that information is accessible only to the right person and in the right place.
8. Recovery testing—Recovery testing is a system test performed to check the recovery mechanism of the system in the case of data loss.
9. Performance testing—Performance testing measures the response time of the system to queries.
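The difference between black box and white box testing can be illustrated with a small Python sketch (the triangle-classification function is a standard textbook example, not from these notes). Black box tests are derived from the specification alone; white box tests use knowledge of the code to exercise every branch:

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its three side lengths."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black box: derived from the specification, ignoring internal structure.
assert classify_triangle(3, 4, 5) == "scalene"

# White box: derived from the code, aiming to cover every branch.
assert classify_triangle(1, 2, 10) == "not a triangle"  # inequality branch
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
```

Note that the black box tester could not have chosen the branch-covering inputs deliberately, since the branch structure is invisible from outside.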
What are the different levels of Capability Maturity Model? Describe them.
Ans.-- The Capability Maturity Model (CMM) assesses the maturity level of a software development organisation. It defines five maturity levels which describe the standards of a development organisation. The five CMM levels for a development organisation are:
Level 1 (Initial)—At this level, software is developed on an ad hoc basis and no strategic approach is used for its development. In Maturity Level 1 organisations, the software process is unpredictable, because if the developing team changes, the process will change. The testing of software is also very simple, and accurate predictions regarding software quality are not possible. SEI's assessment indicates that the vast majority of software organisations are Level 1 organisations. There are no KPAs at this level.
Level 2 (Repeatable)—The organisation satisfies all the requirements of Level 1. At this level, basic project management policies and related procedures are established. Organisations achieving this maturity level learn from the experience of earlier projects.
Key Process Area
1. Software Project Planning 2. Software Project Tracking & Oversight 3. Requirements Management
4. Software Subcontract Management 5. Software Quality Assurance(SQA) 6. Software Configuration Management(SCM)
Level 3 (Defined): The organisation satisfies all the requirements of Level 2. At this maturity level, the software development processes are well defined, managed and documented. Training is imparted to staff to gain the required knowledge. The standard practices are tailored to create new projects.
Key Process Area
1. Organisation Process Focus(OPF) 2. Training Program 3. Organisation Process Definition
4. Integrated Software Management(ISM) 5. Software Product Engineering(SPE) 6. Inter group co-ordination(IC)
7. Peer reviews(PR)
Level 4(Managed)— The organisation satisfies all the requirements of level-3. At this level quantitative standards are set for
software products and processes. The project analysis is done at integrated organisational level and collective database is created.
The organisation’s capability at Level 4 is “predictable” because projects control their products and processes to ensure their
performance within quantitatively specified limits. The quality of software is high.
Key Process Area
1. Quantitative Process Management(QPM) 2. Software Quality Management(SQM)
Level 5 (Optimising): The organisation satisfies all the requirements of Level 4. This is the last level. The organisation at this maturity level is considered almost perfect. At this level, the entire organisation continuously works on process improvement with the help of quantitative feedback obtained from the lower levels. Based on cost-benefit analysis of new technologies, the organisation changes its software development processes.
Key Process Area
1. Defect Prevention(DP) 2. Technology Change Management(TCM) 3. Process Change Management(PCM)
List and explain the features to be considered for function point analysis.
The following features are counted in Function Point Analysis:
• External inputs: A process by which data crosses the boundary of the system. Data may be used to update one or more logical
files. It may be noted that data here means either business or control information.
• External outputs: A process by which data crosses the boundary of the system to the outside. It can be a user report or a system log report.
• External user inquiries: A count of the processes in which both input and output result in data retrieval from the system. These are basically system inquiry processes.
• Internal logical files: A group of logically related data files that resides entirely within the boundary of the application software
and is maintained through external input as described above.
• External interface files: A group of logically related data files that are used by the system for reference purposes only. These data
files remain completely outside the application boundary and are maintained by external applications.
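The five features above feed directly into the function point calculation. A minimal sketch in Python, assuming the standard average-complexity weights (EI = 4, EO = 5, EQ = 4, ILF = 10, EIF = 7) and a hypothetical sum for the 14 value adjustment factors:

```python
# Average-complexity weights for the five function point features (assumed).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, vaf_sum):
    """FP = UFP * (0.65 + 0.01 * sum of the 14 value adjustment factors)."""
    ufp = sum(counts[k] * w for k, w in WEIGHTS.items())  # unadjusted FP
    return ufp * (0.65 + 0.01 * vaf_sum)

# Hypothetical counts for illustration: UFP = 40+40+20+40+14 = 154.
counts = {"EI": 10, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2}
print(round(function_points(counts, vaf_sum=30), 2))  # 146.3
```

In full IFPUG counting, each feature is first rated simple/average/complex with its own weight; the average weights above are a common simplification.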
List and explain different categories of tools that can be used for testing.
Testing tools help testing in varying ways: automatic creation of test cases, execution of test cases, and debugging of defects. The following table lists various popular testing tools, categorised by the type of testing where each tool is used:
Tool              | Category                  | Testing phase
Junit             | Unit testing tool         | Unit testing
SOAPUI            | Integration testing tool  | Service testing, JDBC testing
Selenium          | Web testing               | Cross-browser testing
Jenkins/Hudson    | Continuous integration    | Code coverage reports
HP Quality Center | Test case management      | Management of test cases, scheduling of test cases
HP LoadRunner     | Performance testing tool  | Load testing, stress testing, peak load testing
Define Cohesion and Coupling. Explain various types in each of them.
Cohesion—Cohesion is the measure of how well the internal elements of a module or function are connected to each other. High cohesion means a strong relationship between the internal elements of a function.
There are seven types of cohesion, namely –
• Co-incidental cohesion - It is unplanned and random cohesion, which might be the result of breaking the program into smaller modules for the sake of modularisation. Because it is unplanned, it may cause confusion for the programmers and is generally not accepted.
• Logical cohesion - When logically categorized elements are put together into a module, it is called logical cohesion.
• Temporal Cohesion - When elements of module are organized such that they are processed at a similar point in time, it is called
temporal cohesion.
• Procedural cohesion - When elements of module are grouped together, which are executed sequentially in order to perform a
task, it is called procedural cohesion.
• Communicational cohesion - When elements of module are grouped together, which are executed sequentially and work on
same data (information), it is called communicational cohesion.
• Sequential cohesion - When elements of module are grouped because the output of one element serves as input to another and
so on, it is called sequential cohesion.
• Functional cohesion - It is considered to be the highest degree of cohesion, and it is highly expected. Elements of module in
functional cohesion are grouped because they all contribute to a single well-defined function. It can also be reused.
Coupling—Coupling indicates the dependencies across different modules. If a module depends on many other modules, this is known as strong (tight) coupling; a small number of dependencies indicates loose coupling. The design rule is to aim for loose coupling, so that the individual modules are independent and a change in one module has little impact on another.
There are five levels of coupling, namely -
• Content coupling - When a module can directly access or modify or refer to the content of another module, it is called content
level coupling.
• Common coupling- When multiple modules have read and write access to some global data, it is called common or global
coupling.
• Control coupling- Two modules are called control-coupled if one of them decides the function of the other module or changes its
flow of execution.
• Stamp coupling- When multiple modules share common data structure and work on different part of it, it is called stamp coupling.
• Data coupling- Data coupling is when two modules interact with each other by means of passing data (as parameter). If a module
passes data structure as parameter, then the receiving module should use all its components.
What is scheduling? Explain any two project scheduling techniques with examples.
Scheduling of a software project can be correlated to prioritising various tasks with respect to their cost, time and duration.
Scheduling can be done with resource constraints or time constraints in mind. Depending upon the project, scheduling methods can be static or dynamic in implementation. The following are various scheduling techniques used in software engineering:
• Work Breakdown Structure: The project is scheduled in various phases following a bottom-up or top-down approach. A tree-like
structure is followed without any loops. At each phase or step, milestone and deliverables are mentioned with respect to
requirements. The work breakdown structure shows the overall breakup flow of the project and does not indicate any parallel flow.

The project is split into requirements and analysis, design, coding, testing and maintenance phases. Each phase is further split into multiple submodules.
• Flow Graph: Various modules are represented as nodes with edges connecting nodes. Dependency between nodes is shown by
flow of data between nodes. Nodes indicate milestones and deliverables with the corresponding module implemented. Cycles are
not allowed in the graph. Start and end nodes indicate the source and terminating nodes of the flow.

M1 is the starting module and the data flows to M2 and M3. The combined data from M2 and M3 flow to M4 and finally the project
terminates. The arrows indicate the flow of information between modules.
GANTT CHART:-- A Gantt chart is a type of bar chart, first developed by Karol Adamiecki in 1896, and independently by Henry Gantt
in the 1910s, that illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal elements and
summary elements of a project. Terminal elements and summary elements comprise the work breakdown structure of the project.
Modern Gantt charts also show the dependency (i.e., precedence network) relationships between activities. Gantt charts can be
used to show current schedule status using percent-complete shadings.

SOFTWARE RELIABILITY→Software reliability is defined as the probability that software will provide failure-free operation in a fixed
environment for a fixed interval of time. Software reliability is typically measured per a unit of time, whereas probability of failure is
generally time independent.
Typical assumptions made by software reliability models are as follows:
• The failures are independent of each other.
• The inputs are random samples.
• Failure intervals are independent and all software failures are observed.
• Time between failures is exponentially distributed.
PERT (Program Evaluation and Review Technique)→PERT charts consist of a network of boxes and arrows. The boxes represent activities and the arrows represent task dependencies. PERT is organised by events and activities or tasks. PERT has more advantages and is likely to be used for more complex projects. Through a PERT chart, the various task paths are defined. PERT enables the calculation of the critical path. Each path consists of a combination of tasks which must be completed. The time and the cost associated with each task along a path are calculated, and the path that requires the greatest amount of elapsed time is the critical path. Calculating the critical path enables the project manager to monitor this series of tasks more closely than others and to shift resources to it if it begins to fall behind schedule. PERT controls time and cost during the project and also facilitates finding the right balance between completing a project on time and completing it within budget. Because each task carries several time estimates, there is not one but many possible critical paths, depending on the permutations of the estimates for each task; this makes analysis of the critical path in PERT charts very complex.
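The critical-path calculation described above can be sketched in Python; the task network and durations are hypothetical. The critical path is the chain of dependent tasks with the greatest total duration:

```python
# Hypothetical network: task -> (duration, predecessors), listed in
# topological order (every task appears after its predecessors).
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

def critical_path(tasks):
    """Return the longest-duration path and its length (the critical path)."""
    finish = {}  # earliest finish time per task
    chain = {}   # the path of tasks leading to (and including) each task
    for name, (duration, preds) in tasks.items():
        start = max((finish[p] for p in preds), default=0)
        latest = max(preds, key=lambda p: finish[p], default=None)
        finish[name] = start + duration
        chain[name] = (chain[latest] if latest else []) + [name]
    end = max(finish, key=finish.get)
    return chain[end], finish[end]

print(critical_path(tasks))  # (['A', 'B', 'D'], 12)
```

Here A→B→D takes 3+5+4 = 12 time units versus 3+2+4 = 9 for A→C→D, so A→B→D is the path the manager must monitor most closely. A full PERT analysis would repeat this with optimistic, most likely and pessimistic estimates for each task.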

How are software risks identified?


A risk may be defined as a potential problem. It may or may not occur. But, it should always be assumed that it may occur and
necessary steps are to be taken. Risks can arise from various factors like improper technical knowledge or lack of communication
between team members, lack of knowledge about software products, market status, hardware resources, competing software
companies, etc.
Basis for Different Types of Software risks
• Interface modules: Complete software contains various modules and each module sends and receives information to other
modules and their concerned data types have to match.
• Poor knowledge of tools: If the team or individual members have poor knowledge of tools used in the software product, then the
final product will have many risks, since it is not thoroughly tested.
• Programming Skills: The software product should be able to implement various object oriented techniques and be able to catch
exceptions in case of errors. Various data values have to be checked and in case of improper values, appropriate messages have to
be displayed. If this is not done, then it leads to risk, thereby creating panic in the software computations.
• Management Issues: The management of the organisation should give proper training to the project staff, arrange some
recreation activities, give bonus and promotions and interact with all members of the project and try to solve their necessities at the
best.
• Extra support: The software should be able to support a set of a few extra features around the core product to be developed.
• Customer Risks: Customer should have proper knowledge of the product needed, and should not be in a hurry to get the work
done. He should take care that all the features are implemented and tested.
• External Risks: The software should have backup in CD, pendrive, etc., fully encrypted with full license facilities. The software can
be stored at various important locations to avoid any external calamities like floods, earthquakes, etc.
• Commercial Risks: The organisation should be aware of various competing vendors in the market and various risks involved if their
product is not delivered on time.
Requirements Gathering Tools→Requirements gathering is an art. Requirements are gathered regarding the organisation, including information about its policies, objectives and organisation structure; regarding the user staff, including information about their job functions and personal details; and regarding the functions of the organisation, including information about work flow, work schedules and working procedures.
Requirements gathering tools are as follows:
•Record review: -- A review of recorded documents of the organisation is performed. Procedures, manuals, forms and books are
reviewed to see the format and functions of the present system. However, searching through records makes this technique time-consuming.
•On site observation: -- In case of real life systems, the actual site visit is performed to get a close look of system. It helps the analyst
to detect the problems of existing system.
•Interview: -- A personal interaction with staff is performed to identify their requirements. It requires experience of arranging the
interview, setting the stage, avoiding arguments and evaluating the outcome.
•Questionnaire: -- It examines a large number of respondents simultaneously and gets standardized answers. It gives each person
sufficient time to answer the queries and give correct answers.
Software Engineering Notes READ & PASS Aney Academy
Elaborate the rules for human computer interface design.
Rules for Human Computer Interface Design:
1. Consistency→The interface is designed so as to ensure consistent sequences of actions for similar situations. Terminology used
in prompts, menus, and help screens should be identical. Color scheme, layout and fonts should be consistently applied
throughout the system.
2. Enable expert users to use shortcuts→Use of shortcuts increases productivity. Shortcuts increase the pace of interaction
through special keys and hidden commands.
3. Informative feedback→The feedback should be informative and clear.
4. Error prevention and handling common errors:
➢ Screen design should be such that users are unlikely to make a serious error. Highlight only actions relevant to the current
context. Allow the user to select options rather than fill in details. Do not allow alphabetic characters in numeric fields.
➢ In case of error, it should allow user to undo and offer simple, constructive, and specific instructions for recovery.
5. Allow reversal of action: Allow user to reverse action committed. Allow user to migrate to previous screen.
6. Reduce effort of memorisation by the user→Do not expect the user to remember information. A human mind can hold only a
little information in short term memory. Reduce short term memory load by designing screens that present options clearly using
pull-down menus and icons.
7. Relevance of information: The information displayed should be relevant to the present context of performing certain task.
8. Screen size: Consideration for screen size available to display the information. Try to accommodate selected information in case
of limited size of the window.
9. Minimize data input action: Wherever possible, provide predefined selectable data inputs.
10. Help: Provide help for all input actions explaining details about the type of input expected by the system with example.
Describe the categories of Risk Management.
Risk management plays an important role in ensuring that the software product is error free. Risk management takes care that the
risk is avoided, and if it is not avoidable, then the risk is detected, controlled and finally recovered. A priority is given to each risk and the
highest priority risk is handled first. Various factors of the risk are who are the involved team members, what hardware and
software items are needed, where, when and why are resolved during risk management. The risk manager does scheduling of risks.
There are following categories of risk management: -
1.Risk Avoidance
a. Risk anticipation
b. Risk tools
2. Risk Detection
a. Risk analysis
b. Risk category
c. Risk prioritization
3. Risk Control
a. Risk pending
b. Risk resolution
c. Risk not solvable
4. Risk Recovery
a. Full
b. Partial
c. Extra/alternate features
Write a note on management of risks.
Risk management plays an important role in ensuring that the software product is error free. Firstly, risk management takes care
that the risk is avoided, and if it is not avoidable, then the risk is detected, controlled and finally recovered. A priority is given to each risk
and the highest priority risk is handled first. The risk manager does scheduling of risks. Risk management can be further categorised
as follows:
1.Risk Avoidance
a. Risk anticipation→Various risk anticipation rules are listed according to standards from previous projects’ experience, and
also as mentioned by the project manager.
b. Risk tools→Risk tools are used to test whether the software is risk free. The tools have built-in data base of available risk
areas and can be updated depending upon the type of project.
2. Risk Detection
a. Risk analysis→The risk is analyzed with various hardware and software parameters as probabilistic occurrence (pr), weight
factor (wf), risk exposure (pr * wf).
Risk analysis table

Sl.No.  Risk Name                    Probability of      Weight        Risk exposure
                                     Occurrence (pr)     factor (wf)   (pr * wf)
1.      Stack overflow               5                   15            75
2.      No password-forgot option    7                   20            140
...     ...                          ...                 ...           ...
b. Risk category→Once proper category is identified, priority is given depending upon the urgency of the product.
c. Risk prioritization→Depending upon the entries of the risk analysis table, the maximum risk exposure is given high priority
and has to be solved first.
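As a sketch of the computation behind the table, risk exposure and prioritization can be expressed in a few lines of Python; the risk names and values are the illustrative ones used in the table, not real project data:

```python
# Risk exposure = probability of occurrence (pr) * weight factor (wf);
# the risk with the highest exposure gets the highest priority.
risks = [
    {"name": "Stack overflow", "pr": 5, "wf": 15},
    {"name": "No password-forgot option", "pr": 7, "wf": 20},
]

for risk in risks:
    risk["exposure"] = risk["pr"] * risk["wf"]

# Sort so the maximum risk exposure is handled first.
prioritized = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for rank, risk in enumerate(prioritized, start=1):
    print(rank, risk["name"], risk["exposure"])
```

Here the password risk (exposure 140) outranks the stack overflow (exposure 75), so it would be resolved first.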
3. Risk Control
a. Risk pending→According to priority, low priority risks are pushed to the end of the queue in view of the available resources;
if a risk remains pending for too long, its priority is raised.
b. Risk resolution→The risk manager works out a firm plan for how to resolve the risk.
c. Risk not solvable→If a risk takes too much time and too many resources, then it is dealt with in its totality at the business level of the
organisation; the customer is notified, and the team members propose an alternate solution.
4. Risk Recovery
a. Full→The risk analysis table is scanned and if the risk is fully solved, then corresponding entry is deleted from the table.
b. Partial→The risk analysis table is scanned and due to partially solved risks, the entries in the table are updated and thereby
priorities are also updated.
c. Extra/alternate features→Sometimes it is difficult to remove some risks, and in that case, we can add a few extra features,
which solves the problem.
Compare Waterfall Model with Spiral Model
Waterfall Model Spiral Model
Risk factor is not considered in the waterfall Model. Risk factor is considered in the Spiral Model.
In waterfall the requirements are frozen. In the spiral model the requirements are not frozen.
Waterfall Model is linear sequential model. Spiral model works in loop.
Waterfall model is simple and sequential. Spiral model is complex.
In the waterfall model, communication between customer and In the spiral model there is better communication between
developer requires patience. the developer and the customer.
Waterfall model works well on smaller projects. Spiral model works on larger projects also.
Waterfall model is not an iterative process. Spiral model is partitioned into four quadrants; it is also
 called a meta model.
In waterfall model we cannot go to the previous stage, In spiral model we can go to the previous stage.
Waterfall model is also known as 'linear sequential model'. Spiral model is also known as the 'Boehm Model'.
Waterfall model is a rigid model and can easily be Spiral model is not a rigid model and cannot easily be
managed. managed.
In the waterfall model the customer is not involved. In the spiral model, the customer is made aware of all the
happenings in the software development
Waterfall model is less costlier than spiral model. Spiral Model is costlier than waterfall model.
Waterfall model includes less customer attention or Spiral model includes more customer attention or interaction
interaction rather than spiral model. rather than waterfall model.
Write the structure of SRS.
A software requirement specification is a specification of the software system which provides complete and structured description
of the system. The SRS describes all the requirements of the organisation, focusing on all the functional requirements of the proposed system.
A good quality SRS ensures the development of a good quality product. The Institute of Electrical and Electronics Engineers (IEEE)
provides a standard structure for the SRS, outlined below:
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definition and abbreviation
1.4 References
2.Overall Description
2.1 Product perspective
2.2 Product function
2.3 Operating environment
2.4 Design and implementation constraints
2.5 Assumption and dependencies
3. Specific requirements
3.1 Functional requirement
3.1.1 Users requirement
3.1.2 Administrative requirement
3.1.3 General requirement
3.2 Non Functional Requirement
3.2.1 Error handling
3.2.2 Performance requirement
3.2.3 Safety requirement
3.2.4 Security requirement
3.2.5 Data recovery
4. External interface design
4.1 Input Design
4.2 Output Design
4.3 Report Design
5. Supportive information
5.1 Table of content
5.2 Appendices
What is software review? Explain in detail with an example. Also explain its types.
The purpose of any review is to discover errors in analysis, design, coding, testing and implementation phase of software
development cycle. The other purpose of review is to see whether procedures are applied uniformly and in a manageable manner.
Reviews are basically of two types:
• Informal Technical Review → Informal meeting and informal desk checking.
• Formal Technical Review→ Formal technical review is a software quality assurance activity performed by software engineering
practitioners to improve software product quality. The product is scrutinised for completeness, correctness, consistency, technical
feasibility, efficiency, and adherence to established standards and guidelines by the client organisation.
Objectives of Formal Technical Review
• To uncover errors in logic or implementation • To ensure that the software has been represented according to predefined
standards • To ensure that software under review meets the requirements • To make the project more manageable.
Example of Software Review
The meeting should consist of two to five people and should be restricted to not more than 2 hours. The aim of the review is to
review the product/work, not the performance of the people. When the product is ready, the developer informs the project leader
about the completion of the product and requests for review. The project leader contacts the review leader for the review. The
review leader asks the reviewer to perform an independent review of the product/work before the scheduled FTR.
Some Typical activities for review at each phase are described below:
•Software Concept and Initiation Phase •Software Requirements Analysis Phase •Compatibility •Completeness
•Consistency •Correctness •Traceability •Verifiability and Testability •Software Design Phase •Correctness •Etc..
Explain Software Project estimation in detail, with a neat diagram.
Software project estimation is the process of estimating various resources required for the completion of a project. Effective
software project estimation is an important activity in any software development project. Underestimating software project and
under staffing it often leads to low quality deliverables, and the project misses the target deadline leading to customer
dissatisfaction and loss of credibility to the
company. In a commercial and competitive environment, Software project estimation is crucial for managerial decision making.
Project estimation and tracking help to plan and predict future projects and provide baseline support for project management and
supports decision making.
Software project estimation mainly encompasses the following steps:
1. Estimating the size→ Estimating the size of the software to be developed is the very first step in making an effective estimation
of the project. The customer's requirements and system specification form a baseline for estimating the size of the software. At a
later stage of the project, the system design document can provide additional details for estimating the overall size of the software.
2. Estimating effort→ Once the size of software is estimated, the next step is to estimate the effort based on the size. The
estimation of effort can be made from the organisational specifics of software development life cycle. Depending on
deliverable requirements, the estimation of effort for project will vary. Efforts are estimated in number of man months.
3. Estimating Schedule→ The next step in estimation process is estimating the project schedule from the effort estimated. The
schedule for a project will generally depend on human resources involved in a process. Efforts in man-months are translated to
calendar months.
Schedule in calendar months = 3.0 * (man-months)^(1/3)
The parameter 3.0 is variable, used depending on the situation which works best for the organisation.
4. Estimating Cost→ Cost estimation is the next step. The cost of a project is derived not only from the estimates of effort and
size, but also from other parameters such as hardware, travel expenses, telecommunication costs, training costs etc., which
should also be taken into account.
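The schedule formula in step 3 can be expressed directly in code; this is a minimal sketch, and the constant 3.0 is the organisation-specific tuning parameter the notes mention, not a universal value:

```python
# Schedule (calendar months) = k * (effort in man-months) ** (1/3),
# where k = 3.0 by default and is tuned per organisation.
def schedule_months(effort_man_months, k=3.0):
    return k * effort_man_months ** (1.0 / 3.0)

# Example: 27 man-months of effort translates to about 9 calendar months.
print(round(schedule_months(27), 1))
```

Doubling the effort does not double the calendar schedule; the cube root reflects the fact that adding people shortens a schedule only sub-linearly.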

What is prototyping? Explain the problems and advantages of prototyping in detail.


Prototyping is a process that enables the developer to create a small model of the software. A prototype is a preliminary form or
instance of a system that serves as a model for later stages of the final complete version of the system. A prototype is developed so that
customers, users and developers can learn more about the problem. Thus, prototype serves as a mechanism for identifying
software requirements. It allows the user to explore or criticise the proposed system before developing a full scale system.
Types of Prototype
Throw away prototype:→ In this technique, the prototype is discarded once its purpose is fulfilled and the final system is built from
scratch. The prototype is built quickly to enable the user to rapidly interact with a working system. The duration of prototype
building should be as short as possible, because its advantage exists only if its results are available in a timely fashion.
Evolutionary Prototype:→ In this, the prototype is constructed to learn about the software problems and their solutions in
successive steps. The prototype is initially developed to satisfy few requirements. The prototype once developed is used again and
again. This process is repeated till all requirements are embedded in this and the complete system is evolved.

Problems of Prototyping
1. A common problem with this approach is that people expect much from insufficient effort. As the requirements are loosely
defined, the prototype sometimes gives misleading results about the working of software.
2. The approach of providing early feedback to user may create the impression on user and user may carry some negative biasing
for the completely developed software also.
Advantages of Prototyping→Developing a prototype is a beneficial approach: requirements are clarified early, so the end user
cannot demand the fulfilment of incomplete and ambiguous software needs from the developer.
Disadvantage of Prototyping→The disadvantage of adopting this approach is the large investment involved in software system
maintenance. It requires additional planning for the re-engineering of software.
How is software configuration management done in software development process? Explain.
Software Configuration Management (SCM) is extremely important from the view of deployment of software applications. SCM
controls deployment of new software versions. Software configuration management can be integrated with an automated solution
that manages distributed deployment. This helps companies to bring out new releases much more efficiently and effectively. It also
reduces cost, risk and accelerates time. A current IT department of an organisation has complex applications to manage. These
applications may be deployed on many locations and are critical systems. Thus, these systems must be maintained with very high
efficiency and low cost and time.
Suppose IGNOU has a data entry software version 1.0 for entering assignment marks which is deployed at all the RCs. In case
its version 1.1 is to be deployed, and the re-built software needs to be sent and deployed manually, it would be quite troublesome.
Thus, an automatic deployment tool will be of great use under the control of SCM.
We need an effective SCM with facilities of automatic version control, access control, automatic re-building of software,
build audit, maintenance and deployment. Thus, SCM should have the following facilities:
1. Creation of a configuration that documents a software build and enables versions to be reproduced on demand.
2. Configuration lookup scheme that enables only the changed files to be rebuilt. Thus, the entire application need not be rebuilt.
3. Dependency detection, including hidden dependencies, thus ensuring correct behaviour of the software in partial rebuilding.
4. Ability for team members to share existing objects, thus saving the time of the team members.
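The configuration-lookup facility above (rebuilding only the files that changed) can be illustrated with a minimal, hypothetical sketch based on file timestamps; real SCM tools use far more robust build audits and dependency detection than this:

```python
import os

def needs_rebuild(source, target):
    """Rebuild the target only if it is missing or older than its source file."""
    if not os.path.exists(target):
        return True  # target was never built
    # A source newer than its built target means the target is stale.
    return os.path.getmtime(source) > os.path.getmtime(target)
```

A build driver would call this for every source/target pair and skip the compile step when it returns False, so the entire application need not be rebuilt after a small change.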

McCall’s Quality Factor→A quality criterion is an attribute of a quality factor that is related to software development. For example,
modularity is an attribute of the architecture of a software system.
The McCall quality factors are:
1.Correctness→•A software system is expected to meet the explicitly specified functional requirements and the implicitly expected
non-functional requirements. •If a software system satisfies all the functional requirements, the system is said to be correct.
2.Reliability→•Reliability is a customer perception: customers may consider even an incorrect system to be reliable if the failure
rate is very small and it does not adversely affect their mission objectives. •Thus, an incorrect software can still be considered
to be reliable.
3.Efficiency→•Efficiency concerns to what extent a software system utilizes resources, such as computing power, memory, disk
space, communication bandwidth, and energy. •A software system must utilize as little resources as possible to perform its
functionalities.
4.Integrity:→•A system’s integrity refers to its ability to withstand attacks to its security. •In other words, integrity refers to the
extent to which access to software or data by unauthorized persons or programs can be controlled.
5.Usability→•A software is considered to be usable if human users find it easy to use. •Without a good user interface a software
system may fizzle out even if it possesses many desired qualities.
6.Maintainability→•Maintenance refers to the upkeep of products in response to deterioration of their components due to
continuous use of the products. •Maintenance refers to how easily and inexpensively the maintenance tasks can be performed.
•For software products, there are three categories of maintenance activities: corrective, adaptive and perfective maintenance.
7.Testability→•Testability means the ability to verify requirements. At every stage of software development, it is necessary to
consider the testability aspect of a product. •To make a product testable, designers may have to instrument a design with
functionalities not available to the customer.
8.Flexibility→•Flexibility is reflected in the cost of modifying an operational system. •In order to measure the flexibility of a system,
one has to find an answer to the question: How easily can one add a new feature to a system.
9.Portability→•Portability of a software system refers to how easily it can be adapted to run in a different execution environment.
•Portability gives customers an option to easily move from one execution environment to another to best utilize emerging
technologies in furthering their business.
10.Reusability→•Reusability means if a significant portion of one product can be reused, maybe with minor modifications, in
another product. •Reusability saves the cost and time to develop and test the component being reused.
Explain GSM architecture with the help of a diagram.
GSM Architecture→A GSM network comprises many functional units. These functional units and interfaces can be broadly
divided into:
• The Mobile Station (MS) •The Base Station Subsystem (BSS) • The Network Switching Subsystem (NSS)
• The Operation Support Subsystem (OSS)

Mobile Station (MS): -- It is the mobile phone which consists of the transceiver, the display and the processor and is controlled by a
SIM card operating over the network. A mobile station communicates across the air interface with a base station transceiver in the
same cell in which the mobile subscriber unit is located. The MS exchanges information with the user and adapts it to the
transmission protocols of the air-interface to communicate with the BSS. The MS has two elements. The Mobile Equipment (ME) refers
to the physical device, which comprises the transceiver, digital signal processors, and the antenna. The second element of the MS in
GSM is the Subscriber Identity Module (SIM). The SIM card is unique to the GSM system. It has a memory of 32 KB.
Base Station Subsystem (BSS): -- It acts as an interface between the mobile station and the network subsystem. It consists of the
Base Station Controller which controls the Base Transceiver station and acts as an interface between the mobile station and mobile
switching center. A base station subsystem consists of a base station controller and one or more base transceiver station. Each Base
Transceiver Station defines a single cell. A cell can have a radius of between 100m to 35km, depending on the environment.
Network and switching subsystem (NSS): --It provides the basic network connection to the mobile stations. The basic part of the
Network Subsystem is the Mobile Service Switching Centre which provides access to different networks like ISDN, PSTN etc. It also
consists of the Home Location Register and the Visitor Location Register which provides the call routing and roaming capabilities of
GSM.
Operation and Support System (OSS): -- OSS helps mobile networks to monitor and control their complex systems. The basic reason
for developing the operation and support system is to provide customers cost-effective support and solutions. It helps in managing
centralized, local and regional operational activities required for GSM networks.
SOFTWARE QUALITY→ Software quality can be defined as conformance to explicitly stated and implicitly stated functional
requirements. Good quality software satisfies both explicit and implicit requirements. Software quality is a complex mix of
characteristics and varies from application to application and with the customer who requests it. Software quality is a set of
characteristics that can be measured in all phases of software development.
Measurement of Software Quality (Quality metrics):
• Number of design changes required • Number of errors in the code • Number of bugs during different stages of testing
• Reliability metrics, such as the mean time to failure (MTTF), the expected duration of failure-free operation before the next
failure. This will be discussed in software reliability.
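MTTF can be estimated as the average of the observed failure-free intervals; the sketch below uses hypothetical sample data, purely for illustration:

```python
def mttf(inter_failure_hours):
    """Mean time to failure: the average of observed failure-free intervals (hours)."""
    return sum(inter_failure_hours) / len(inter_failure_hours)

# Hypothetical observed intervals between successive failures, in hours.
print(mttf([10.0, 20.0, 30.0]))  # 20.0
```

A larger MTTF means longer failure-free operation, which customers perceive as higher reliability.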
Stress Testing→ Stress testing a Non-Functional testing technique that is performed as part of performance testing. During stress
testing, the system is monitored after subjecting the system to overload to ensure that the system can sustain the stress. Stress
testing is a simulation technique often used in the banking industry. It is also used on asset and liability portfolios to determine
their reactions to different financial situations. Additionally, stress tests are used to gauge how certain stressors will affect a
company, industry or specific portfolio. Stress tests are usually computer-generated simulation models that test hypothetical
scenarios; however, highly customized stress testing methodology is also often utilized. Stress testing is a useful method for
determining how a portfolio will fare during a period of financial crisis. Stress testing is most commonly used by financial
professionals for regulatory reporting and also for portfolio risk management. The recovery of the system from such phase (after
stress) is very critical as it is highly likely to happen in production environment.
Reasons for conducting Stress Testing:
• It allows the test team to monitor system performance during failures.
• To verify if the system has saved the data before crashing or not.
• To verify if the system prints meaningful error messages while crashing, or prints some random exceptions.
• To verify if unexpected failures do not cause security issues.
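As a minimal illustration of the idea, the sketch below overloads a toy shared counter from many threads and checks that no updates are lost; the Counter class is a hypothetical stand-in for a real system under stress, not banking software:

```python
import threading

class Counter:
    """Toy system under test: a shared counter (purely illustrative)."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # without this lock, overload would expose lost updates
            self.value += 1

def stress(counter, threads=8, calls=1000):
    """Subject the counter to overload from many concurrent threads."""
    def worker():
        for _ in range(calls):
            counter.increment()
    workers = [threading.Thread(target=worker) for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value

# 8 threads * 1000 calls: the value should be exactly 8000 if the
# system sustains the stress without losing any updates.
print(stress(Counter()))
```

A real stress test would push load well beyond normal capacity and also watch how the system recovers afterwards, which is the critical phase mentioned above.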
What is meant by "Software Reengineering"? Explain the phases of Software Reengineering Life cycle.
Reengineering applies reverse engineering to existing system code to extract design and requirements. Reengineering is the
examination, analysis, and alteration of an existing software system to reconstitute it in a new form, and the subsequent
implementation of the new form. The process typically encompasses a combination of reverse engineering, re-documentation,
restructuring, and forward engineering. The goal is to understand the existing software system components (specification, design,
implementation) and then to re-do them to improve the system’s functionality, performance, or implementation. Re-engineering
starts with the code and comprehensively reverse engineers by increasing the level of abstraction as far as needed toward the
conceptual level, rethinking and re-evaluating the engineering and requirements of the current code, then forward engineers using
a waterfall software development life-cycle to the target system.

The two reengineering objectives are given below:
Improve quality:→ Re-engineering is intended to improve software quality and to produce current documentation. Improved
quality is needed to increase reliability, to improve maintainability, to reduce the cost of maintenance, and to prepare for functional
enhancement.
Migration:→ Migration may involve extensive redesign if the new supporting platforms and operating systems are very different
from the original, such as the move from a mainframe to a network-based computing environment.
Software Reengineering Life Cycle
• Requirements analysis phase: →This phase refers to the identification of concrete reengineering goals for a given software.
• Model analysis phase: →This phase refers to documenting and understanding the architecture and the functionality of the legacy
system being reengineered.
• Source code analysis phase: →This phase refers to the identification of the parts of the code that are responsible for violations of
requirements originally specified in the system’s analysis phase.
• Remediation phase: →This phase refers to the selection of a target software structure that aims to repair a design or a source
code defect with respect to a target quality requirement.
• Transformation phase: →This phase consists of physically transforming software structures according to the remediation
strategies selected previously.
• Evaluation phase: →It refers to the process of assessing the new system as well as establishing and integrating the revised system
throughout the corporate operating environment.
What is data dictionary? Explain with an example.
Data dictionary is the repository of information about data items such as origin of data, data structure, data uses and other
metadata information related to data elements. It is used as system of record for structure chart and for other references. Data
dictionaries help to organize and document the information related to data flows, data processes and data elements in a structured
fashion. The main benefits of having data dictionaries are that it:
1. Provides a highly structured definition and details of data elements.
2. Identifies all alias and reduces duplicates within data elements.
3. Helps in developing logic for processes.
4. Helps in report development.
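A minimal sketch of a single data dictionary entry, with illustrative field names and values (the "customer_id" element and its metadata are hypothetical):

```python
# One data dictionary entry for a hypothetical "customer_id" data element.
# The metadata fields mirror the points above: origin, structure, uses, aliases.
data_dictionary = {
    "customer_id": {
        "origin": "Customer registration form",
        "type": "integer",
        "length": 8,
        "aliases": ["cust_id", "client_id"],          # aliases identified, duplicates reduced
        "used_by": ["Billing process", "Monthly order report"],
        "description": "Unique identifier assigned to each customer",
    }
}

entry = data_dictionary["customer_id"]
print(entry["type"], entry["aliases"])
```

Because every alias points back to one definition, processes and reports that use the element stay consistent.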
Define "Black Box Testing" and "White Box Testing". Explain the differences between them.
Black Box Testing White Box Testing
Test cases are derived from the functional specification of the Test cases are derived from the internal design i.e. source
system. code of the system.
The selection of this testing can be done without any reference to The selection of this testing can be done by using the
the program code. specification, design and code of the program.
The testing team does not need to access the source code of the The testing team needs to access the source code of
program; it is concerned only with functionality and features. the program.
Methods for Black Box Testing Methods for White Box Testing
1. Boundary-Value-Analysis 1. Coverage Based Testing
2. Equivalence Partitioning 2. Cyclomatic Complexity
3. Mutation Testing
In black box testing the test team does not know the internal Here the tester has complete knowledge of the
code or program logic; they test the system experimentally. programming language and structure in which the product is
 developed. White box testing is basically performed by the
 programmer.
Black box testing is also known as functional testing. White box testing is also known as structural testing or
glass box testing
Black box testing is to test the application software from its Its goal is to test the program at the level of source code.
functional point of view.
The main focus of black box testing is on the validation of your White Box Testing (Unit Testing) validates internal structure
functional requirements. and working of your software code
Black box testing gives abstraction from code and focuses testing To conduct White Box Testing, knowledge of underlying
effort on the software system behavior. programming language is essential. Current day software
systems use a variety of programming languages and
technologies and it’s not possible to know all of them.
Black box testing facilitates testing communication amongst White box testing does not facilitate testing communication
modules amongst modules
Explain black-box testing with an example.
Black box testing is a software testing technique in which the functionality of the software under test (SUT) is tested without looking at
the internal code structure, implementation details or knowledge of internal paths of the software. This type of testing is based
entirely on the software requirements and specifications. In Black Box Testing we just focus on the inputs and outputs of the software
system without bothering about internal knowledge of the software program. Black Box Testing, also known as Behavioral Testing,
is a software testing method in which the internal structure/ design/ implementation of the item being tested is not known to the
tester. These tests can be functional or non-functional, though usually functional.
For example: an operating system like Windows, a website like Google, a database like Oracle or even your own custom application.
Under Black Box Testing, you can test these applications by just focusing on the inputs and outputs without knowing their internal
code implementation.
EXAMPLE---A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser; providing
inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.
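The same idea can be sketched in code: the tester exercises a hypothetical login function purely through input/output pairs derived from the specification, never inspecting its body:

```python
# Hypothetical system under test; the black-box tester never looks at this body.
def login(username, password):
    return username == "admin" and password == "secret"

# Black-box test cases: (inputs, expected output) pairs taken from the specification.
cases = [
    (("admin", "secret"), True),   # valid credentials
    (("admin", "wrong"), False),   # wrong password
    (("", ""), False),             # empty input
]

for args, expected in cases:
    assert login(*args) == expected, f"failed for input {args}"
print("all black-box cases passed")
```

Swapping in a completely different implementation of login would leave these test cases unchanged, which is exactly the point of black box testing.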
Types of Black Box Testing-There are many types of Black Box Testing but following are the prominent ones -
1. Functional testing - This black box testing type is related to functional requirements of a system; it is done by software testers.
2. Non-functional testing - This type of black box testing is not related to testing of a specific functionality, but non-functional
requirements such as performance, scalability, usability.
3. Regression testing - Regression testing is done after code fixes, upgrades or any other system maintenance to check that the
new code has not affected the existing code.
BLACK BOX TESTING TECHNIQUES: -- Following are some techniques that can be used for designing black box tests.
1. Equivalence partitioning: It is a software test design technique that involves dividing input values into valid and invalid partitions
and selecting representative values from each partition as test data.
2. Boundary Value Analysis: It is a software test design technique that involves determination of boundaries for input values and
selecting values that are at the boundaries and just inside/ outside of the boundaries as test data.
3. Cause Effect Graphing: It is a software test design technique that involves identifying the cases (input conditions) and effects
(output conditions), producing a Cause-Effect Graph, and generating test cases accordingly.
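As an illustrative sketch of the first two techniques, consider a hypothetical input rule (the function and the valid range 18..60 are assumptions, not from the source):

```python
# Hypothetical system under test: an age field valid in the range 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value from each partition.
partitions = {
    "invalid_below": 10,   # invalid partition (< 18)
    "valid": 35,           # valid partition (18..60)
    "invalid_above": 75,   # invalid partition (> 60)
}
assert not is_valid_age(partitions["invalid_below"])
assert is_valid_age(partitions["valid"])
assert not is_valid_age(partitions["invalid_above"])

# Boundary value analysis: values at and just inside/outside each boundary.
boundary_cases = [(17, False), (18, True), (19, True),
                  (59, True), (60, True), (61, False)]
for value, expected in boundary_cases:
    assert is_valid_age(value) == expected
```

Note how boundary value analysis concentrates test data exactly where off-by-one defects are most likely, while equivalence partitioning keeps the total number of cases small.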
Software Engineering Notes READ & PASS Aney Academy
What are CASE Tools? List the features of any two CASE Tools.
CASE tools are the software engineering tools that permit collaborative software development and maintenance. CASE tools
support almost all the phases of the software development life cycle such as analysis, design, etc., including umbrella activities such
as project management, configuration management etc.
CASE tools may support the following development steps:
• Creation of data flow and entity models
• Establishing a relationship between requirements and models
• Development of top-level design
• Development of functional and process description
• Development of test cases.
The CASE tools on the basis of the above specifications can help in automatically generating data base tables, forms and reports,
and user documentation. Most of the CASE tools include one or more of the following types of tools like • Analysis tools, •
Repository to store all diagrams, forms, models and report definitions etc., • Diagramming tools, • Screen and report generators, •
Code generators, • Documentation generators etc.
Computer Aided Software Engineering (CASE) tools automate many software engineering tasks with the help of information created
using the computer. CASE tools support software engineering tasks and are available for different tasks of the Software Development
Life Cycle (SDLC).
SOFTWARE QUALITY ASSURANCE→•Software quality assurance (SQA) is a process that ensures that developed software meets and
complies with defined or standardized quality specifications. •SQA is an ongoing process within the software development life cycle
(SDLC) that routinely checks the developed software to ensure it meets desired quality measures. •SQA helps ensure the development
of high-quality software. •SQA practices are implemented in most types of software development, regardless of the underlying
software development model being used. In a broader sense, SQA incorporates and implements software testing methodologies to
test software. •Rather than checking for quality after completion, SQA processes test for quality in each phase of development until
the software is complete. •With SQA, the software development process moves into the next phase only once the current/previous
phase complies with the required quality standards. •SQA generally works on one or more industry standards that help in building
software quality guidelines and implementation strategies. These standards include the ISO 9000 and capability maturity model
integration (CMMI). •A quality factor represents a behavioral characteristic of a system.
Putnam’s Model→L. H. Putnam developed a dynamic multivariate model of the software development process. It is based on an
assumption about the distribution of effort over the life of the software development, described by the Rayleigh-Norden curve:
P = (K / T^2) t exp(-t^2 / (2T^2))
P = number of persons on the project at time t
K = the area under the Rayleigh curve, which is equal to the total life-cycle effort
T = development time
The Rayleigh-Norden curve is used to derive an equation that relates lines of code delivered to other parameters like development
time and effort at any time during the project:
S = Ck K^(1/3) T^(4/3)
S = number of delivered lines of source code (LOC)
Ck = state-of-technology constant
K = the life-cycle effort in person-years
T = development time.
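A quick numeric sketch of this relation (the values of Ck, K and T below are hypothetical, chosen only to exercise the formula):

```python
# Putnam's size equation: S = Ck * K^(1/3) * T^(4/3).
def delivered_loc(ck, k, t):
    """Delivered source lines of code from technology constant ck,
    life-cycle effort k (person-years) and development time t (years)."""
    return ck * k ** (1 / 3) * t ** (4 / 3)

# Hypothetical figures: Ck = 5000, K = 27 person-years, T = 2 years.
s = delivered_loc(5000, 27, 2)
print(round(s))  # → 37798
```

The cube-root dependence on K shows why doubling the effort budget yields far less than double the delivered code under this model.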
Cyclomatic Complexity→It is a white box testing strategy. This strategy is used to find the number of independent paths through a
program. If the Control Flow Graph (CFG) of a program is given, then the Cyclomatic complexity V(G) can be computed with the
formula: V(G) = E – N + 2, where N = number of nodes of the CFG and E = number of edges in the CFG.
Properties of Cyclomatic complexity are:
• V(G) is the maximum number of independent paths in graph G. • Inserting and deleting functional statements to G does not
affect V(G). • G has only one path if and only if V(G) = 1.
Example
int avinash(int a, int b)
{
    while (a != b) {
        if (a > b)
            a = a - b;
        else
            a = b - a;
    }
    return a;   /* computes gcd(a, b) by repeated subtraction */
}
In the above program, two control constructs are used: a while-loop and an if-then-else. For the CFG of this program (figure
omitted):
Number of nodes of the CFG (N) = 5
Number of edges in the CFG (E) = 6
Then Cyclomatic complexity V(G) = E – N + 2 = 6 – 5 + 2 = 3
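The same computation can be sketched programmatically; the edge list below is one plausible labelling of the CFG above (the node numbering is an assumption: 1 = while test, 2 = if test, 3 = "a = a - b", 4 = "a = b - a", 5 = return):

```python
# Compute V(G) = E - N + 2 from a control flow graph given as an edge list.
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}   # N = distinct nodes
    return len(edges) - len(nodes) + 2            # E - N + 2

# One plausible CFG for avinash(): 5 nodes, 6 edges.
edges = [(1, 2), (1, 5), (2, 3), (2, 4), (3, 1), (4, 1)]
print(cyclomatic_complexity(edges))  # → 3
```

The result matches the hand calculation: two decision points (the while test and the if test) give V(G) = 3.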
Statistical Model→ C.E. Walston and C.P. Felix developed a simple empirical model of software development effort with respect
to the number of lines of code. In this model, LOC is assumed to be directly related to development effort as given below:
E = a L^b
Where L = Number of Lines of Code (LOC)
E = total effort required
a, b = parameters obtained from regression analysis of data.
The final equation is of the following form:
E = 5.2 L^0.91
The productivity of programming effort can be calculated as
P = L / E
Where P = Productivity Index
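A numeric sketch of the model (the project size below is hypothetical; taking L in thousands of lines of code and E in person-months is a common calibration assumption, not stated in the source):

```python
# Walston-Felix effort model: E = 5.2 * L^0.91, productivity P = L / E.
def walston_felix_effort(l):
    """Effort for size l (commonly l in KLOC, effort in person-months)."""
    return 5.2 * l ** 0.91

l = 50                        # hypothetical project size
e = walston_felix_effort(l)   # total effort required
p = l / e                     # productivity index P = L / E
```

Because the exponent 0.91 is below 1, the model predicts slightly better-than-linear scaling of effort with size, which is one of its often-criticised features.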
Global System for Mobile Communications (GSM)→ GSM stands for Global System for Mobile Communications. GSM is one of the
popular architectures on which mobile devices are based. GSM is a digital wireless network standard. All mobile devices that are
based on the GSM standard across the world will have similar capabilities. The GSM standard has some principles of security like
subscriber identity confidentiality, use of a SIM as a security module, subscriber identity authentication, use of triplets and stream
ciphering of user traffic & user control data.
Features of GSM:
• If a mobile device is based on GSM, then it can be used in all those countries where this particular standard is prevailing.
• Almost all the services that are existent in a wireline network are provided by GSM to all the users of mobile devices which are
based on it.
• Though the quality of voice telephony is not excellent, it is not inferior to the systems that are analog based.
• It also provides good security as there is an option to encrypt the information that is being exchanged using this standard.
• There is no need for significant modification of wireline networks due to the establishment of networks based on GSM standard.
Advantages of GSM: • GSM is already used worldwide with millions of subscribers. • International roaming allows a subscriber to
use a single mobile phone throughout Western Europe. • GSM is mature, having started in the mid-80s, and is a more stable
network with robust features. • The availability of SIMs, smart cards that provide secure data encryption, gives GSM an
advantage in mobile commerce.
Disadvantages of GSM:
• Lack of access to American market.
Auditing→ Auditing and reporting help the change management process to ensure whether the changes have been properly
implemented or not, and whether they have any undesired impact on other components. A formal technical review and a software
configuration audit help in ensuring that the changes have been implemented properly during the change process. The aim of
conducting a software audit is to provide an independent evaluation of the software products and processes for compliance with
applicable standards, guidelines, plans, and procedures. A Formal Technical Review generally concentrates on the technical
correctness of the changes to the configuration item, whereas a software configuration audit complements it by checking the
parameters which are not checked in a Formal Technical Review.
A checklist for software configuration audit:
• Whether a formal technical review is carried out to check the technical accuracy of the changes made?
• Whether the changes as identified and reported in the change order have been incorporated?
• Have the changes been properly documented in the configuration items?
• Whether standards have been followed.
• Whether the procedure for identifying, recording and reporting changes has been followed.
As it is a formal process, it is desirable to conduct the audit by a separate team other than the team responsible for incorporating
the changes.
Reporting→ Status reporting is also known as status accounting. It records all changes that lead to each new version of the item.
Status reporting is the bookkeeping of each release. The process involves tracking the change in each version that leads to the
latest version. The report includes the following:
• The person responsible for the change. • The changes incorporated. • The date and time of changes. • The reason for such
changes (e.g., a bug fix). • The effect of the change.
Every time a change is incorporated it is assigned a unique number to identify it from the previous version. Status reporting is of
vital importance in a scenario where a large number of developers work on the same product at the same time and have little
idea about the work of other developers. For example, in source code, reporting the changes may be as below:
---------------------------------------
# Title: Sub routine Insert to Employee Data
# Version: Ver 1.1.2
# Purpose: To insert employee data in the master file
# Author: Raj Kumar
# Date: 23/10/2017
# Auditor: Honey Jaiswal
# Modification History:
# 12/12/2017 : by Avinash Kumar → To fix bugs discovered in the first release
# 4/5/2018 : by Shalini Shilvi → To allow validation in date of birth data
# 6/6/2018 : by Saurav Sunny → To add error checking module as requested by the customer
---------------------------------------
Cohesion vs. Coupling
• Cohesion is the indication of the relationship within a module. Coupling is the indication of the relationships between modules.
• Cohesion shows the module’s relative functional strength. Coupling shows the relative independence among the modules.
• Cohesion is a degree (quality) to which a component/module focuses on a single thing. Coupling is a degree to which a
component/module is connected to the other modules.
• While designing you should strive for high cohesion, i.e., a cohesive component/module focuses on a single task
(single-mindedness) with little interaction with other modules of the system. Conversely, you should strive for low coupling,
i.e., dependency between modules should be less.
• Cohesion is a kind of natural extension of data hiding, for example, a class having all members visible within a package having
default visibility. Making fields private, methods private and classes non-public provides loose coupling.
• Cohesion is an intra-module concept. Coupling is an inter-module concept.
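To make the contrast concrete, here is a small sketch; the classes and figures are hypothetical, chosen only to illustrate the two properties:

```python
# A sketch contrasting high cohesion and low coupling (hypothetical classes).

class TaxCalculator:
    """High cohesion: every member serves a single task - computing tax."""

    def __init__(self, rate):
        self._rate = rate  # private field, hidden from other modules

    def tax_for(self, amount):
        return amount * self._rate


class Invoice:
    """Low coupling: depends only on TaxCalculator's one public method."""

    def __init__(self, amount, calculator):
        self.amount = amount
        self.calculator = calculator

    def total(self):
        return self.amount + self.calculator.tax_for(self.amount)


print(Invoice(100, TaxCalculator(0.25)).total())  # → 125.0
```

Keeping `_rate` private and exposing only `tax_for()` is exactly the data hiding mentioned above: `Invoice` stays loosely coupled because it touches one method, not the calculator's internals.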
Component Based Software Engineering(CBSE)→The goal of component-based software engineering is to increase the
productivity, quality, and decrease time-to-market in software development. CBSE uses Software Engineering principles to apply the
same idea as OOP to the whole process of designing and constructing software systems. It focuses on reusing and adapting existing
components, as opposed to just coding in a particular style. A software component is a nontrivial, independent, and replaceable
part of a system that fulfils a clear function in the context of a well-defined architecture. CBSE is in many ways similar to
conventional or object-oriented software engineering. A software team establishes requirements for the system to be built using
conventional requirements elicitation techniques. The CBSE process identifies not only candidate components but also qualifies
each component’s interface, adapts components to remove architectural mismatches, assembles components into a selected
architectural style, and updates components as requirements for the system change.
Two processes occur in parallel during the CBSE process. These are:
• Domain Engineering
• Component Based Development.
Challenges for CBSE
•Dependable systems and CBSE: The use of CBD in safety-critical domains, real-time systems, and different process-control systems,
in which the reliability requirements are more rigorous, is particularly challenging. A major problem with CBD is the limited
possibility of ensuring the quality and other nonfunctional attributes of the components and thus our inability to guarantee specific
system attributes.
•Tool support: The purpose of Software Engineering is to provide practical solutions to practical problems, and the existence of
appropriate tools is essential for a successful CBSE performance. Development tools, such as Visual Basic, have proved to be
extremely successful, but many other tools are yet to appear – component selection and evaluation tools, component repositories
and tools for managing the repositories, component test tools, component-based design tools, run-time system analysis tools,
component configuration tools, etc.
•Trusted components: Because the trend is to deliver components in binary form and the component development process is
outside the control of component users, questions related to component trustworthiness become of great importance.
•Component certification: One way of classifying components is to certify them. In spite of the common belief that certification
means absolute trustworthiness, it in fact only gives the results of tests performed and a description of the environment in which
the tests were performed. While certification is a standard procedure in many domains, it is not yet established in software in
general and especially not for software components.
•Composition predictability: Even if we assume that we can specify all the relevant attributes of components, it is not known how
these attributes determine the corresponding attributes of systems of which they are composed. The ideal approach of deriving
system attributes from component attributes is still a subject of research.
•Requirements management and component selection: Requirements management is a complex process. A problem of
requirements management is that requirements in general are incomplete, imprecise and contradictory. The process of engineering
requirements is much more complex as the possible candidate components are usually lacking one or more features which meet
the system requirements exactly.
•Long-term management of component-based systems: As component-based systems include sub-systems and components with
independent lifecycles, the problem of system evolution becomes significantly more complex. CBSE is a new approach and there is
little experience as yet of the maintainability of such systems. There is a risk that many such systems will be troublesome to
maintain.
•Development models: Although existing development models demonstrate powerful technologies, they have many ambiguous
characteristics, they are incomplete, and they are difficult to use.
•Component configurations: Complex systems may include many components which, in turn, include other components. In many
cases compositions of components will be treated as components. As soon as we begin to work with complex structures, the
problems involved with structure configuration pop up.
Similarities and differences between Cleanroom and OO Paradigm
The following are the similarities and the differences between Cleanroom software engineering development and the OO
software engineering paradigm.
Similarities
• Lifecycle: both rely on incremental development. • Usage: the cleanroom usage model is similar to the OO use case. • State
Machine Use: the cleanroom state box corresponds to the OO transition diagram. • Reuse: an explicit objective in both process
models.
Key Differences
• Cleanroom relies on decomposition while OO relies on composition. • Cleanroom relies on formal methods while OO allows
informal use case definition and testing. • The OO inheritance hierarchy is a design resource whereas the cleanroom usage
hierarchy is the system itself. • OO practitioners prefer graphical representations while cleanroom practitioners prefer tabular
representations. • Tool support is good for most OO processes, but tool support is usually only found in cleanroom testing, not
design.
E-R diagram→An entity relationship diagram (ERD) shows the relationships of entity sets stored in a database. An entity in this
context is a component of data. An ER diagram is a means of visualizing how the information a system produces is related. There
are five main components of an ER diagram:
• Entities are represented by means of rectangles. Rectangles are named with the entity set they represent (e.g., Student,
Teacher, Project).
• Attributes are the properties of entities. Attributes are represented by means of ellipses. Every ellipse represents one attribute
and is directly connected to its entity (rectangle).
• Actions, which are represented by diamond shapes, show how two entities share information in the database.
• Connecting lines: solid lines that connect attributes to show the relationships of entities in the diagram.
• Cardinality specifies how many instances of an entity relate to one instance of another entity.
Data Flow diagram→A data flow diagram (DFD) illustrates how data is processed by a system in terms of inputs and outputs. As its
name indicates its focus is on the flow of information, where data comes from, where it goes and how it gets stored. A data flow
diagram (DFD) maps out the flow of information for any process or system. A DFD shows what kind of information will be input to
and output from the system.
DFD rules
• Each process should have at least one input and an output. • Each data store should have at least one data flow in and one data
flow out. • Data stored in a system must go through a process. • All processes in a DFD go to another process or a data store.
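The first two rules above can be checked mechanically; the sketch below assumes a simple, hypothetical DFD representation (node and flow names are illustrative):

```python
# Hypothetical DFD representation: flows as (source, target) pairs, nodes
# tagged "process" or "store". Checks the first two DFD rules: every process
# and every data store needs at least one flow in and one flow out.
def rule_violations(nodes, flows):
    """Return names of processes/stores missing an inbound or outbound flow."""
    sources = {s for s, _ in flows}
    targets = {t for _, t in flows}
    return [name for name, kind in nodes.items()
            if kind in ("process", "store")
            and (name not in sources or name not in targets)]

nodes = {"Validate Order": "process", "Orders": "store"}
flows = [("Customer", "Validate Order"), ("Validate Order", "Orders")]
print(rule_violations(nodes, flows))  # → ['Orders'] (no flow out of the store)
```

Adding a flow such as ("Orders", "Generate Report") would clear the violation, since the store would then have both a flow in and a flow out.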
A data flow diagram can dive into progressively more detail by using levels and layers, zeroing in on a particular
piece. DFD levels are numbered 0, 1 or 2, and occasionally go to even Level 3 or beyond. DFD Level 0 is also called a Context
Diagram. It’s a basic overview of the whole system. DFD Level 1 provides a more detailed breakout of pieces of the Context Level
Diagram. DFD Level 2 then goes one step deeper into parts of Level 1. It may require more text to reach the necessary
level of detail about the system’s functioning. Progression to Levels 3, 4 and beyond is possible, but going beyond Level 3 is
uncommon.
Benefits of change control management
•The existence of a formal process of change management helps the developer to identify the responsibility of code for which a
developer is responsible. An idea is achieved about the changes that affect the main product. The existence of such mechanism
provides a road map to the development process and encourages the developers to be more involved in their work.
•Version control mechanism helps the software tester to track the previous version of the product, thereby giving emphasis on
testing of the changes made since the last approved changes. It helps the developer and tester to simultaneously work on multiple
versions of the same product and still avoid any conflict and overlapping of activity.
•The software change management process is used by the managers to keep a control on the changes to the product thereby
tracking and monitoring every change. The existence of a formal process reassures the management. It provides a professional
approach to control software changes.
•It also provides confidence to the customer regarding the quality of the product.
Various levels of testing
To check the functionality of a software application, several techniques of functional testing are used by testers and developers
during the process of software development. The Four Levels of Software Testing are
Unit Testing: --The first level of functional testing, Unit Testing, is the most micro-level of testing performed on a software. The
purpose of this testing is to test individual units of source code together with associated control data to determine if they are fit for
use. Unit testing is performed by the team of developers before the setup is handed over to the testing team to formally execute
the test cases. It helps developers verify the internal design and logic of the written code. One of the biggest benefits of this testing
is that it can be implemented every time a piece of code is modified, allowing issues to be resolved as quickly as possible.
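A minimal unit-test sketch of the idea (the function under test and its discount rules are hypothetical):

```python
import unittest

# Hypothetical unit under test: a pricing helper tested in isolation.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # 25% off 200 should give 150.
        self.assertEqual(apply_discount(200, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Out-of-range input must raise, not silently miscompute.
        with self.assertRaises(ValueError):
            apply_discount(200, 150)

# Run with: python -m unittest <module-name>
```

Because the unit is tested alone, a failing case points directly at this function, which is what makes issues cheap to fix at this level.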
Integration Testing: -- Integration testing allows individuals the opportunity to combine all of the units within a program and test
them as a group. This testing level is designed to find interface defects between the modules/functions. This is particularly
beneficial because it determines how efficiently the units are running together. Keep in mind that no matter how efficiently each
unit is running, if they aren’t properly integrated, it will affect the functionality of the software program. In order to run these types
of tests, individuals can make use of various testing methods, but the specific method that will be used to get the job done will
depend greatly on the way in which the units are defined.
System Testing: --System testing is the first level in which the complete application is tested as a whole. The goal at this level is to
evaluate whether the system has complied with all of the outlined requirements and to see that it meets Quality Standards. System
testing is undertaken by independent testers who haven’t played a role in developing the program. This testing is performed in an
environment that closely mirrors production. System Testing is very important because it verifies that the application meets the
technical, functional, and business requirements that were set by the customer.
Acceptance Testing: --The final level, Acceptance testing (or User Acceptance Testing), is conducted to determine whether the
system is ready for release. During the Software development life cycle, requirements changes can sometimes be misinterpreted in
a fashion that does not meet the intended needs of the users. During this final phase, the user will test the system to find out
whether the application meets their business’ needs. Once this process has been completed and the software has passed, the
program will then be delivered to production.
Factors affecting the task set for the project
• Technical staff expertise: -- All staff members should have sufficient technical expertise for timely implementation of the
project. Meetings have to be conducted, weekly and status reports are to be generated.
• Customer satisfaction: -- Customer has to be given timely information regarding the status of the project. If not, there might be
a communication gap between the customer and the organisation.
• Technology update: -- Latest tools and existing tested modules have to be used for fast and efficient implementation of the
project.
• Full or partial implementation of the project: -- In case the project is very large, to meet the market requirements the
organisation has to satisfy the customer with at least a few modules first. The remaining modules can be delivered at a later stage.
• Time allocation: -- The project has to be divided into various phases and time for each phase has to be given in terms of
person-months, module-months, etc.
• Module binding: -- Module has to bind to various technical staff for design, implementation and testing phases. Their necessary
inter-dependencies have to be mentioned in a flow chart.
• Milestones: -- The outcome for each phase has to be mentioned in terms of quality, specifications implemented, limitations of
the module and latest updates that can be implemented (according to the market strategy).
• Validation and Verification: -- The number of modules verified according to customer specification and the number of modules
validated according to customer’s expectations are to be specified.
The following are the objectives of software change management process:
1. Configuration identification: -- The source code, documents, test plans, etc. The process of identification involves identifying
each component name, giving them a version name (a unique number for identification) and a configuration identification.
2. Configuration control: -- Controlling changes to a product. Controlling release of a product and changes that ensure that the
software is consistent on the basis of a baseline product.
3. Review: Reviewing the process to ensure consistency among different configuration items.
4. Status accounting: -- Recording and reporting the changes and status of the components.
5. Auditing and reporting: Validating the product and maintaining consistency of the product throughout the software life cycle.
Identification of Software Risks →A risk may be defined as a potential problem. It may or may not occur. But, it should always be
assumed that it may occur and necessary steps are to be taken. Risks can arise from various factors like improper technical
knowledge or lack of communication between team members, lack of knowledge about software products, market status, hardware
resources, competing software companies, etc.
Basis for Different Types of Software risks
• Skills or Knowledge: The persons involved in the activities of problem analysis, design, coding and testing have to be fully aware
of the activities and various techniques at each phase of the software development cycle. In case they have partial knowledge or
lack adequate skills, the product may face many risks at the current stage of development or at later stages.
• Interface modules: Complete software contains various modules and each module sends and receives information to other
modules and their concerned data types have to match.
• Poor knowledge of tools: If the team or individual members have poor knowledge of tools used in the software product, then the
final product will have many risks, since it is not thoroughly tested.
• Programming Skills: The code developed has to be efficient, thereby occupying less memory space and fewer CPU cycles to
compute a given task.
• Management Issues: The management of the organisation should give proper training to the project staff, arrange some
recreation activities, give bonuses and promotions, and interact with all members of the project and try to solve their necessities
at the best.
• Extra support: The software should be able to support a set of a few extra features in the vicinity of the product to be developed.
• Customer Risks: The customer should have proper knowledge of the product needed, and should not be in a hurry to get the
work done.
• External Risks: The software should have backup in CDs, tapes, etc., fully encrypted with full license facilities. Encryption is
maintained such that no external person from the team can tap the source code.
Rapid Application Development(RAD)→ This model gives a quick approach for software development and is based on a linear
sequential flow of various development processes. The software is constructed on a component basis. Thus multiple teams are
given the task of different component development. It increases the overall speed of software development. It gives a fully
functional system within very short time. It follows a modular approach for development. The problem with this model is that it may
not work when technical risks are high.
Iterative Enhancement Model→ This model was developed to remove the shortcomings of waterfall model. In this model, the
phases of software development remain the same, but the construction and delivery is done in the iterative mode. In the first
iteration, a less capable product is developed and delivered for use. This product satisfies only a subset of the requirements. In the
next iteration, a product with incremental features is developed. Every iteration consists of all phases of the waterfall model. The
complete product is divided into releases and the developer delivers the product release by release. This model is useful when less
manpower is available for software development. The main disadvantage of this model is that iteration may never end, and the user
may have to endlessly wait for the final product. The cost estimation is also tedious.
What is Software Quality?
Quality software is reasonably bug or defect free, delivered on time and within budget, meets requirements and/or expectations,
and is maintainable. Software quality can be defined as conformance to explicitly stated and implicitly stated functional
requirements. Good quality software satisfies both explicit and implicit requirements. Software quality is a complex mix of
characteristics and varies from application to application and with the customer who requests it.
Causes of error in Software
• Misinterpretation of customers’ requirements/communication • Incomplete/erroneous system specification • Error in logic
• Not following programming/software standards • Incomplete testing • Inaccurate documentation/no documentation
• Deviation from specification • Error in data modeling and representation.
Attributes of Quality
The following are some of the attributes of quality:
Auditability: -- The ability of software being tested against conformance to a standard.
Compatibility: -- The ability of two or more systems or components to perform their required functions while sharing the same
hardware or software environment.
Completeness: -- The degree to which all of the software’s required functions and design constraints are present and fully
developed in the requirements specification, design document and code.
Consistency: -- The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a
system or component.
Correctness: -- The degree to which a system or component is free from faults in its specification, design, and implementation.
The degree to which software, documentation, or other items meet specified requirements.
Feasibility: -- The degree to which the requirements, design, or plans for a system or component can be implemented under
existing constraints.
Modularity: -- The degree to which a system or computer program is composed of discrete components such that a change to
one component has minimal impact on other components.
Predictability: -- The degree to which the functionality and performance of the software are determinable for a specified set of
inputs.
Robustness: -- The degree to which a system or component can function correctly in the presence of invalid inputs or stressful
environmental conditions.
Structuredness: -- The degree to which the SDD (System Design Document) and code possess a definite pattern in their
interdependent parts. This implies that the design has proceeded in an orderly and systematic manner (e.g., top-down,
bottom-up). The modules are cohesive and the software has minimized coupling between modules.
Testability: -- The degree to which a system or component facilitates the establishment of test criteria and the performance of
tests to determine whether those criteria have been met.
Traceability: -- The degree to which a relationship can be established between two or more products of the development
process. The degree to which each element in a software development product establishes its
J2ME →Java supports mobile application development using its J2ME. J2ME stands for Java 2 Platform, Micro Edition. J2ME
provides an environment under which application development can be done for mobile phones, personal digital assistants and
other embedded devices. Like any other Java platform, J2ME includes APIs (Application Programming Interfaces) and Java Virtual
Machines. It includes a range of user interfaces, provides security and supports a large number of network protocols. J2ME also
supports the write once, run anywhere concept. J2ME is one of the popular platforms being used across the world for a number
of mobile devices, embedded devices etc.
The architecture of J2ME consists of a number of components that can be used to construct a suitable Java Runtime Environment
(JRE) for a set of mobile devices. When the right components are selected, it will lead to good memory, processing strength and
I/O capabilities for the set of devices for which the JRE is being constructed.
There are several reasons for the usage of Java technology for wireless application development. Some of them are given below:
• The Java platform is secure and safe. It always works within the boundaries of the Java Virtual Machine. Hence, if something
goes wrong, only the JVM is corrupted; the device is never damaged.
• Automatic garbage collection is provided by Java. In the absence of automatic garbage collection, it is up to the developer to
search for memory leaks.
• Java offers an exception handling mechanism. Such a mechanism facilitates the creation of robust
reason for existing (e.g., the degree to which each element in a bubble chart applications. • Java is portable. Suppose that you
references the requirement that it satisfies). For example, the system’s develop an application using MIDP. This application
functionality must be traceable to user requirements. can be executed on any mobile device that
Understandability: -- The degree to which the meaning of the SRS, SDD, and implements MIDP specification. Due to the feature
code are clear and understandable to the reader. of portability, it is possible to move applications to
Verifiability: -- The degree to which the SRS, SDD, and code have been specific devices over the air.
written to facilitate verification and testing.
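One of the quality attributes listed above, robustness, is easy to make concrete in code: a robust routine keeps functioning correctly when handed invalid input instead of crashing. A minimal Python sketch (the function name and its behaviour are illustrative, not taken from the notes):

```python
def safe_average(values):
    """Return the average of the numeric items in `values`.

    Robust behaviour: non-numeric items are skipped, and an empty or
    all-invalid input yields 0.0 rather than raising ZeroDivisionError.
    """
    numbers = [
        v for v in values
        if isinstance(v, (int, float)) and not isinstance(v, bool)
    ]
    if not numbers:  # invalid or empty input handled gracefully
        return 0.0
    return sum(numbers) / len(numbers)

print(safe_average([10, 20, "oops", None, 30]))  # invalid entries ignored -> 20.0
print(safe_average([]))                          # no crash on empty input -> 0.0
```

A non-robust version would simply compute `sum(values) / len(values)` and fail on the same inputs; the difference between the two is exactly what the robustness attribute measures.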
Software Engineering Notes READ & PASS Aney Academy
Waterfall Model→ The Waterfall Model was the first process model to be introduced. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping between phases. The waterfall model is the earliest SDLC approach used for software development. This type of model is basically suited to projects that are small and have no uncertain requirements. It was the first engineering approach to software development. The waterfall model provides a systematic and sequential approach to software development and is better than the build-and-fix approach, but it does not incorporate any kind of risk assessment. The name comes from the image of a waterfall on the cliff of a steep mountain: once the water has flowed over the edge of the cliff and begun its journey down the side of the mountain, it cannot turn back. It is the same with waterfall development. Once a phase of development is completed, development proceeds to the next phase and there is no turning back.
The sequential phases in Waterfall model are:
• Requirement analysis: All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
• System Design: System Design helps in specifying hardware and system requirements and also helps in defining overall
system architecture.
• Coding: With inputs from system design, the system is first developed in small programs called units, which are integrated in
the next phase. Each unit is developed and tested for its functionality which is referred to as Unit Testing.
• Testing: All the units developed in the implementation phase are integrated into a system after testing of each unit. Post
integration the entire system is tested for any faults and failures.
• Maintenance: Some issues come up in the client environment; to fix them, patches are released. Also, to enhance the product, improved versions are released. Maintenance is done to deliver these changes in the customer environment.
When to use the waterfall model:
• The requirements are very well known, clear and fixed.
• Product definition is stable.
• Technology is understood.
• There are no ambiguous requirements.
• The project is short.
Prototyping Model→ In this model, a working model of the actual software is developed initially. The prototype is just like a sample of the software, having lesser functional capabilities and low reliability, and it does not undergo the rigorous testing phase. The working prototype is given to the customer for operation. The customer, after using it, gives feedback. Analysing the feedback given by the customer, the developer refines, adds to the requirements and prepares the final specification document. Once the prototype becomes operational, the actual product is developed using the normal waterfall model.
The prototyping model has the following features:
(i) It helps in determining user requirements more deeply.
(ii) At the time of actual product development, customer feedback is available.
(iii) It does not consider any type of risk at the initial level.
Characteristics of a successful CASE Tools
A CASE tool must have the following characteristics in order to be used efficiently:
• A standard methodology: A CASE tool must support a standard software development methodology and standard modeling
techniques. In the present scenario most of the CASE tools are moving towards UML.
• Flexibility: Flexibility in the use of editors and other tools. The CASE tool must offer flexibility and give the user a choice of editors and development environments.
• Strong Integration: The CASE tools should be integrated to support all the stages. This implies that if a change is made at any
stage, for example, in the model, it should get reflected in the code documentation and all related design and other documents,
thus providing a cohesive environment for software development.
• Integration with testing software: The CASE tools must provide interfaces for automatic testing tools that take care of regression
and other kinds of testing software under the changing requirements.
• Support for reverse engineering: A CASE tool must be able to generate complex models from already written code.
• On-line help: The CASE tool should provide online help and tutorials.
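The idea behind reverse engineering support can be illustrated with a toy example: extracting a simple class "model" from existing source code. A real CASE tool generates full UML models; this Python sketch only hints at the mechanism, using the standard-library `ast` module (the function name and sample code are illustrative assumptions):

```python
import ast

def extract_class_model(source):
    """Build a simple {class_name: [method_names]} 'model' from Python source."""
    tree = ast.parse(source)
    model = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            # record every method defined directly in the class body
            model[node.name] = [
                item.name for item in node.body
                if isinstance(item, ast.FunctionDef)
            ]
    return model

sample = """
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
"""
print(extract_class_model(sample))  # {'Account': ['deposit', 'withdraw']}
```

A CASE tool does essentially this at a much larger scale, walking the parsed code to recover classes, relationships and dependencies, and then rendering them as diagrams.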
COCOMO Model→ COCOMO stands for Constructive Cost Model. COCOMO is one of the most widely used software estimation models in the world. The model was developed in 1981 by Barry Boehm to give an estimate of the number of person-months it will take to develop a software product. COCOMO is used to estimate the size, effort, duration and cost of software. It is a model for estimating effort, cost, and schedule for software projects.
It provides three levels of models, as follows:
• Basic COCOMO→ The basic COCOMO model estimates the software development effort using only lines of code. It is a static, single-valued model that computes software development effort (and cost) as a function of program size expressed in estimated lines of code (LOC).
• Intermediate COCOMO→ This is an extension of the basic COCOMO model. This estimation model makes use of a set of "cost driver attributes" to compute the cost of software. It computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel and project attributes.
• Detailed COCOMO→ This model computes development effort and cost by incorporating all characteristics of the intermediate level together with an assessment of the cost impact of each step of development, i.e., analysis, design, testing, etc.
The COCOMO models are defined for three classes of software projects. Using Boehm's terminology these are:
• Organic: A small, simple software project where the development team has good experience of the application.
• Semi-detached: An intermediate-size project, based on a mix of rigid and semi-rigid requirements.
• Embedded: A project developed under tight hardware, software and operational constraints. Examples are embedded software and flight control software.
In this model, the development effort equation takes the following form:
E = a × S^b × m
where a and b are constants determined for each model; E = effort; S = size of the source code in LOC; m = multiplier.
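The effort equation above can be sketched in Python. The (a, b) pairs below are Boehm's published constants for basic COCOMO; the duration equation D = c × E^d, its (c, d) constants, and the treatment of m as a product of cost-driver multipliers (equal to 1.0 in the basic model) are standard COCOMO practice but go slightly beyond what these notes state:

```python
# Basic COCOMO sketch using Boehm's 1981 constants.
# E = a * (KLOC ** b) * m   effort in person-months (m = 1.0 in the basic model)
# D = c * (E ** d)          development time in months

COCOMO_CONSTANTS = {
    # project class: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def cocomo_estimate(kloc, project_class="organic", m=1.0):
    """Return (effort in person-months, duration in months)."""
    a, b, c, d = COCOMO_CONSTANTS[project_class]
    effort = a * (kloc ** b) * m   # m > 1.0 models intermediate-COCOMO cost drivers
    duration = c * (effort ** d)
    return effort, duration

effort, duration = cocomo_estimate(32, "organic")
print(f"Effort:   {effort:.1f} person-months")
print(f"Duration: {duration:.1f} months")
```

For the same 32 KLOC size, switching the class to "embedded" raises the estimated effort considerably, which reflects the stricter constraints of that project class.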
Spiral Model→ The spiral model is the only process model that uses risk management as part of the software development process, which makes it suitable for handling today's real-world projects, which have two basic characteristics: complexity and risk. The spiral model uses six stages:
a. Communication, b. Planning, c. Risk management, d. Modelling, e. Construction, f. Deployment
The phases of this model proceed in a cyclic manner, iteration by iteration, in which the output of one iteration is used as the input of the next. After completion of an iteration, the developer again communicates with the customer about the work product developed so far and about further requirements, and then independent planning and risk management are carried out for the next iteration. In the same way, all the iterations of the model are completed, and in the final iteration the product is delivered to the customer. The spiral model thus accommodates later changes requested by the customer, but the risk management stage makes this model costlier than others.