Introduction
Software:
• Software is a set of computer programs, associated documentation and configuration data needed to make these programs operate correctly.
• Software is a set of programs, which is designed to perform a well-defined function.
• Software is computer programs and associated documentation.
• A software system consists of a number of programs, configuration files (used to set up programs),
system documentation (describes the structure of the system) and user documentation (explains
how to use system).
• Software products may be developed for a particular customer or may be developed for a general
market.
Bespoke (custom):
• These are the systems which are commissioned by a particular customer. A software contractor
develops the software especially for that customer. (i.e. developed for a single customer
according to their specification). e.g.: Control system for electronic device, software to support
particular business process.
Software engineering is defined as a process of analyzing user requirements and then designing,
building, and testing software application which will satisfy those requirements.
a) Operational:
This tells us how well software works in operations. It can be measured on:
• Budget
• Usability
• Efficiency
• Correctness
• Functionality
• Dependability
• Security
• Safety
b) Transitional:
This aspect is important when the software is moved from one platform to another:
• Portability
• Interoperability
• Reusability
• Adaptability
c) Maintenance:
This aspect describes how well the software can maintain itself in an ever-changing environment:
• Modularity
• Maintainability
• Flexibility
• Scalability
2. Real-time software:
• Software that monitors, analyzes, or controls real-world events as they occur is called real-time software.
• E.g. Real time manufacturing process control.
1. Business Software:
• Business information processing is the largest single software application area. They include
software that accesses one or more large databases containing business information.
• E.g. Payroll, Inventory, Accounts.
5. Embedded Software:
• Embedded software resides in read-only memory and is used to control products and systems for
the consumer and industrial markets.
• E.g. digital functions in an automobile such as fuel control, dashboard displays, and braking
systems.
Software Characteristics:
Software Characteristics are classified into six major components:
i. Functionality: It refers to the degree of performance of the software against its intended purpose. Required functions are:
ii. Reliability: A set of attributes that bear on the capability of the software to maintain its level of performance under the given conditions for a stated period of time. Required functions are:
iii. Efficiency: It refers to the ability of the software to use system resources in the most effective and efficient manner. The software should make effective use of storage space and execute commands as per the desired timing requirements. Required functions are:
iv. Usability: It refers to the extent to which the software can be used with ease, i.e. the amount of effort or time required to learn how to use the software. Required functions are:
v. Maintainability: It refers to the ease with which modifications can be made in a software system to extend its functionality, improve its performance, or correct errors. Required functions are:
vi. Portability:
A set of attributes that bear on the ability of the software to be transferred from one environment to another with no or minimal changes.
Required functions are:
Software Applications:
The most significant factor in determining which software engineering methods and techniques
are most important is the type of application that is being developed.
iii. Scientific / Engineering Software: Applications in areas such as astronomy, automotive stress analysis, molecular biology, volcanology, space shuttle orbital dynamics, and automated manufacturing.
iv. Embedded Software: These are software control systems that control and manage hardware devices. Examples: software in a mobile phone, software for anti-lock braking in a car, software in a microwave oven to control the cooking process.
v. Product Line Software: It is designed to provide a specific capability for use by many different customers. It can focus on a limited or esoteric marketplace (like inventory control products) or address the mass market (like spreadsheets, computer graphics, multimedia, entertainment, database management, and personal or business financial applications).
vi. Web Applications: Also called "web apps", these are evolving into sophisticated computing environments that not only provide stand-alone features, computing functions, and content to the end user but are also integrated with corporate databases and business applications.
vii. Artificial Intelligence Software: This includes robotics, expert systems, pattern recognition (image and voice), artificial neural networks, game playing, theorem proving, etc. It is used to solve complex problems.
Process:
• The hidden side of engineering is the process, which means how we actually build our product. A software process specifies the abstract set of activities that should be performed to go from user needs to final product. We need to make sure we are following a process that lets us create that product in the most efficient and effective way possible. That means coming up with a process that is robust enough to get work done in an imperfect world but is also highly responsive to change (which is inevitable).
Metrics:
• A metric is a measurement of the degree to which an attribute belongs to a system, product or process.
There are four functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving
2. Understandable:
• Metric computation should be easily understood, and the method of computing the metric should be clearly defined.
3. Applicability:
• Metrics should be applicable in the initial phases of development of the software.
4. Repeatable:
• The metric values should be the same when measured repeatedly, i.e. consistent in nature.
5. Economical:
• Computation of metric should be economical.
6. Language Independent:
• Metrics should not depend on any programming language.
Classification of Software Metrics:
There are three types of software metrics:
1. Product Metrics:
• Product metrics are used to evaluate the state of the product, tracing risks and uncovering potential problem areas. The team's ability to control quality is evaluated.
2. Process Metrics:
• Process metrics pay particular attention to enhancing the long-term process of the team or organisation.
3. Project Metrics:
• Project metrics describe the project characteristics and execution process:
o Number of software developers
o Staffing pattern over the life cycle of the software
o Cost and schedule
o Productivity
Measurement:
• A measurement is a manifestation of the size, quantity, amount or dimension of a particular attribute of a product or process. A software measurement is a quantified attribute of a characteristic of a software product or the software process. It is a fundamental activity within software engineering. The software measurement process is defined and governed by ISO standards.
2. Indirect Measurement:
• In indirect measurement, the quantity or quality to be measured is measured using a related parameter, i.e. by use of a reference.
2) Project:
• From the ideation phase to the deployment phase, we term the process as a project. Many people
work together on a project to build a final product that can be delivered to the customer as per their
needs or demands. So, the entire process that goes on while working on the project must be
managed properly so that we can get a worthy result after completing the project and also so that
the project can be completed on time without any delay.
3) Process:
• Every process that takes place while developing the software, or we can say while working on the
project must be managed properly and separately. For example, there are various phases in a
software development process and every phase has its process like the designing process is
different from the coding process, and similarly, the coding process is different from the testing.
Hence, each process is managed according to its needs and each needs to be taken special care of.
4) Product:
• Even after the development process is completed and we reach our final product, still, it needs to be
delivered to its customers. Hence the entire process needs a separate management team like the
sales department.
Unit: 2
Data Flow Diagrams
A Data Flow Diagram (DFD) is a traditional visual representation of the
information flows within a system. A neat and clear DFD can depict the right
amount of the system requirement graphically. It can be manual, automated, or a
combination of both.
It shows how data enters and leaves the system, what changes the information, and
where data is stored.
Characteristics of DFD
DFDs are commonly used during problem analysis.
DFDs are quite general and are not limited to problem analysis for software
requirements specification.
DFDs are very useful in understanding a system and can be effectively used
during analysis.
It views a system as a function that transforms the inputs into desired outputs.
The DFD aims to capture the transformations that take place within a system
to the input data so that eventually the output data is produced.
The processes are shown by named circles and data flows are represented by
named arrows entering or leaving the bubbles.
A rectangle represents a source or sink and is a net originator or consumer of data. A source or sink is typically outside the main system of study.
Circle: A circle (bubble) shows a process that transforms data inputs into data
outputs.
Data Flow: A curved line shows the flow of data into or out of a process or data
store.
Data Store: A set of parallel lines shows a place for the collection of data items. A
data store indicates that the data is stored which can be used at a later stage or by
the other processes in a different order. The data store can have an element or
group of elements.
Source or Sink: Source or Sink is an external entity and acts as a source of system
inputs or sink of system outputs.
Physical and Logical DFD with example
Logical DFD
Logical DFD depicts how the business operates.
The processes represent the business activities.
The data stores represent the collection of data regardless of how the data are
stored.
It shows business controls.
Physical DFD
Physical DFD depicts how the system will be implemented (or how the
current system operates).
The processes represent the programs, program modules, and manual
procedures.
The data stores represent the physical files and databases, manual files.
It shows controls for validating input data, for obtaining a record, for ensuring successful completion of a process, and for system security.
Features        | Logical DFD       | Physical DFD
System controls | Business controls | Controls for data validation, record status, and system security
For example, Suppose we design a school database. In this database, the student will be an entity with attributes like
address, name, id, age, etc. The address can be another entity with attributes like city, street name, pin code, etc and
there will be a relationship between them.
Component of ER Diagram
1. Entity:
An entity may be any object, class, person or place. In the ER diagram, an entity is represented as a rectangle.
Consider an organization as an example: a manager, product, employee, department, etc. can be taken as an entity.
a. Weak Entity
An entity that depends on another entity is called a weak entity. The weak entity doesn't contain any key attribute of its own. The weak entity is represented by a double rectangle.
2. Attribute
The attribute is used to describe a property of an entity. An ellipse is used to represent an attribute.
For example, id, age, contact number, name, etc. can be attributes of a student.
a. Key Attribute
The key attribute is used to represent the main characteristics of an entity. It represents a primary key. The key
attribute is represented by an ellipse with the text underlined.
b. Composite Attribute
An attribute that is composed of many other attributes is known as a composite attribute. The composite attribute is represented by an ellipse, and its component attributes are represented by ellipses connected to it.
c. Multivalued Attribute
An attribute that can have more than one value is known as a multivalued attribute. A double oval is used to represent a multivalued attribute.
For example, a student can have more than one phone number.
d. Derived Attribute
An attribute that can be derived from another attribute is known as a derived attribute. It is represented by a dashed ellipse.
For example, A person's age changes over time and can be derived from another attribute like Date of birth.
3. Relationship
A relationship is used to describe the relation between entities. Diamond or rhombus is used to represent the
relationship.
Types of relationship are as follows:
a. One-to-One Relationship
When a single instance of an entity is associated with a single instance of another entity through the relationship, it is known as a one-to-one relationship.
For example, a female can marry one male, and a male can marry one female.
b. One-to-many relationship
When a single instance of the entity on the left is associated with more than one instance of the entity on the right through the relationship, it is known as a one-to-many relationship.
For example, a scientist can make many inventions, but each invention is made by only one specific scientist.
c. Many-to-Many Relationship
When more than one instance of the entity on the left is associated with more than one instance of the entity on the right through the relationship, it is known as a many-to-many relationship.
For example, an employee can be assigned to many projects, and a project can have many employees.
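As an illustration of how the ER concepts above might map to code, the following minimal Python sketch models the Student/Address example, with a key attribute, a multivalued attribute, and a relationship between the two entities. The class and field names are hypothetical, chosen only for this example.

```python
# Illustrative only: a hypothetical mapping of the Student/Address example
# (entities, attributes, and a relationship) into Python dataclasses.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Address:                 # entity
    city: str                  # attribute
    street_name: str
    pin_code: str

@dataclass
class Student:                 # entity
    student_id: int            # key attribute (plays the role of a primary key)
    name: str
    age: int                   # could instead be derived from a date-of-birth attribute
    phone_numbers: List[str] = field(default_factory=list)  # multivalued attribute
    address: Optional[Address] = None   # relationship: Student "lives at" Address

s = Student(student_id=1, name="Asha", age=20,
            phone_numbers=["9876543210"],
            address=Address(city="Pune", street_name="MG Road", pin_code="411001"))
print(s.address.city)  # Pune
```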
Advantages of ER diagram
SIMPLE : It is simple to draw an ER diagram when we know entities and
relationships.
EFFECTIVE : It is an effective communication tool.
EASY TO UNDERSTAND : The design of an ER diagram is very logical and hence easy to design and understand. It shows database capabilities such as how tables, keys and columns are used to find a solution to the given question.
INTEGRATED : The ER Model can be easily integrated with relational model.
USEFUL IN DECISION MAKING : Since the entities in the database are analyzed in an ER diagram, by drawing an ER diagram we come to know what kinds of attributes exist and what relationships hold between the entities.
EASY CONVERSION : It can be easily converted to other types of models.
Disadvantages of Er Diagram
LOSS OF INFORMATION: While drawing an ER Model some of the information
can be hidden or lost.
LIMITED RELATIONSHIP: The ER model can represent only limited relationships compared to other models, and it is not possible to indicate primary keys and foreign keys where they are expected.
NO REPRESENTATION FOR DATA MANIPULATION: It is not possible to
represent data manipulation(commands like insert(),delete(),alter(),update()) in ER
model.
NO INDUSTRY STANDARD: There is no industry standard for notations of an ER
diagram.
DIFFICULT TO MODIFY: ER models can be difficult to modify once they are
created. Any changes made to the model may require extensive rework, which can be
time-consuming and expensive.
LIMITED ATTRIBUTE REPRESENTATION: ER models may not be able to
represent all the attributes required for a particular problem domain. This can lead to
either the loss of important data or the creation of a complex and unwieldy model.
LIMITED SUPPORT FOR ABSTRACTION: ER models are not designed to
support abstraction, which can make it difficult to represent complex relationships or
data structures in a simple and intuitive way.
Bank Entity : Attributes of Bank Entity are Bank Name, Code and
Address.
Code is Primary Key for Bank Entity.
Customer Entity : Attributes of Customer Entity are Customer_id,
Name, Phone Number and Address.
Customer_id is Primary Key for Customer Entity.
Branch Entity : Attributes of Branch Entity are Branch_id, Name and
Address.
Branch_id is Primary Key for Branch Entity.
Account Entity : Attributes of Account Entity are Account_number,
Account_Type and Balance.
Account_number is Primary Key for Account Entity.
Loan Entity : Attributes of Loan Entity are Loan_id, Loan_Type and
Amount.
Loan_id is Primary Key for Loan Entity.
Relationships are :
Assignment:-2
i. ER diagram of School Management System
ii. ER diagram of Hotel Management System
iii. ER diagram of Laundry Management System
Unit 3. Feasibility Analysis
A feasibility study is an assessment of the practicality of a proposed project or system. It evaluates
the project’s potential for success based on factors such as cost, value, resources, opportunities,
and threats.
A feasibility study is simply an assessment of the practicality of a proposed project plan or method.
It involves analyzing technical, economic, legal, operational, and time feasibility factors.
1. Purpose:
A feasibility study evaluates whether a project or venture is likely to succeed.
It considers all critical aspects to determine if the project aligns with the organization’s goals
and if it’s worth pursuing.
2. Types of Feasibility Analysis:
Technical Feasibility:
Assesses the practicality of implementing a specific technical solution.
Considers factors like available technology, infrastructure, and expertise.
Operational Feasibility:
Examines how well the proposed solution fits within the existing
organizational processes.
Considers user acceptance, ease of implementation, and potential
disruptions.
Schedule Feasibility:
Evaluates the reasonableness of the project timeline.
Determines whether the project can be completed within the desired
timeframe.
Economic Feasibility:
Analyzes the cost-effectiveness of the project.
Includes assessing initial investment, ongoing operational costs, and
potential returns.
3. Components of a Feasibility Study:
A comprehensive feasibility study includes the following elements:
Historical Background:
Provides context about the business or project.
Product or Service Description:
Clearly defines what the project aims to deliver.
Accounting Statements:
Financial projections, including income statements, balance sheets,
and cash flow forecasts.
Operations and Management Details:
Describes how the project will be executed and managed.
Market Research and Policies:
Investigates market demand, competition, and regulatory
requirements.
Financial Data:
Quantifies costs, revenues, and potential profits.
Legal Requirements and Tax Obligations:
Ensures compliance with legal and tax regulations.
Advantages of Feasibility Analysis
1. Identifies potential risks and challenges early on.
A cost-benefit analysis of a software project typically involves the following steps:
1. Identify Costs: Begin by identifying all the costs associated with the project. These may include development costs (such as labor, software tools, equipment), operational costs (such as maintenance, support, training), and any other relevant expenses.
2. Identify Benefits: Determine the potential benefits that the software project is expected to deliver. These could be tangible benefits such as increased revenue, cost savings, improved efficiency, or intangible benefits like enhanced customer satisfaction, brand reputation, or competitive advantage.
3. Quantify Costs and Benefits: Assign monetary values to both costs and benefits whenever possible. This may involve estimating future revenue or cost savings, considering the time value of money, and accounting for any risks or uncertainties.
4. Calculate Net Benefits: Calculate the net benefits by subtracting the total costs from the total benefits. This provides a quantitative measure of the project's potential profitability or value to the organization.
5. Assess Risks and Uncertainties: Consider the risks and uncertainties associated with the project, such as changes in technology, market conditions, or project scope. This may involve conducting sensitivity analysis or scenario planning to understand how variations in assumptions could affect the outcome.
7. Make Informed Decisions: Use the results of the cost-benefit analysis to make informed decisions about whether to proceed with the project, modify its scope, or abandon it altogether. Consider factors such as strategic alignment, resource constraints, and organizational priorities.
8. Monitor and Review: Continuously monitor the project's progress and periodically review the cost-benefit analysis to ensure that assumptions remain valid and that the expected benefits are being realized. Adjustments may be necessary as circumstances change over time.
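A minimal sketch of steps 1 to 4 above in code, assuming purely hypothetical cost and benefit figures:

```python
# A sketch of the identify / quantify / net-benefit steps of a cost-benefit analysis.
# All figures and category names are hypothetical illustrations, not real project data.

development_costs = {"labour": 50_000, "tools": 5_000, "equipment": 8_000}
operational_costs = {"maintenance": 4_000, "support": 3_000, "training": 2_000}
benefits = {"increased_revenue": 60_000, "cost_savings": 25_000}

total_costs = sum(development_costs.values()) + sum(operational_costs.values())
total_benefits = sum(benefits.values())
net_benefit = total_benefits - total_costs          # step 4: net benefits

benefit_cost_ratio = total_benefits / total_costs   # > 1 suggests benefits outweigh costs

print(f"Total costs:    {total_costs}")
print(f"Total benefits: {total_benefits}")
print(f"Net benefit:    {net_benefit}")
print(f"B/C ratio:      {benefit_cost_ratio:.2f}")
```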
Cost-Benefit Analysis
Pros(Advantage)
Requires data-driven analysis
Limits analysis to only the purpose determined in the initial step of
the process
Results in deeper, potentially more reliable findings
Delivers insights to financial and non-financial outcomes
Cons(Disadvantage)
May be unnecessary for smaller projects
Requires capital and resources to gather data and make analysis
Relies heavily on forecasted figures; if any single critical forecast is
off, estimated findings will likely be wrong.
Return of Investment (ROI)
Return on Investment (ROI) is a financial metric used to evaluate the profitability of an
investment or compare the efficiency of different investments. It measures the ratio of
the net profit generated by an investment relative to the initial cost of the investment.
ROI is typically expressed as a percentage:
ROI = (Net Profit / Initial Investment) × 100
Where:
Net Profit = Total returns (revenue generated or benefits realized) from the
investment minus the total costs or expenses associated with the investment.
Initial Investment = The total cost or initial outlay required to make the
investment.
Example:- Let's say you invest $1,000 in a mutual fund, and after one year it has grown to $1,200. To calculate the return on investment (ROI):
ROI = ((1200 − 1000) / 1000) × 100
    = (200 / 1000) × 100
    = 20%
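The same calculation expressed as a small Python sketch (the figures come from the mutual fund example above):

```python
# A small sketch of the ROI formula used in the example above.
def roi(net_profit: float, initial_investment: float) -> float:
    """Return on Investment as a percentage."""
    return (net_profit / initial_investment) * 100

gain = 1200 - 1000          # net profit from the mutual fund example
print(roi(gain, 1000))      # 20.0 -> an ROI of 20%
```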
Payback Period
In software development, the payback period serves as a financial metric used to
evaluate the time it takes for the investment in developing a software product to be
recovered through the generated revenues or cost savings. Here's how the payback
period is applied in the context of software development:
Calculation:
o Determine the initial investment required for software development, including costs
such as development tools, salaries, and overhead.
o Estimate the annual cash inflow generated by the software, which could include
revenue from sales, subscriptions, or cost savings.
Payback Period Calculation:
o Divide the initial investment by the annual cash inflow to calculate the payback
period.
o The result represents the number of years it will take for the project to recoup its initial investment (a small calculation sketch follows this list).
Interpretation:
o A shorter payback period indicates a quicker return on investment, which is generally
preferable as it reduces financial risk.
o Longer payback periods may indicate higher risk or lower potential returns.
Decision Making:
o The payback period is used as a criterion for decision-making in software
development projects.
o Projects with shorter payback periods are often prioritized as they provide quicker
returns and may be seen as less risky.
Factors Considered:
o Development Costs: Include expenses related to software development, such as
salaries, tools, and infrastructure.
o Revenue Generation: Consider potential revenue sources, including sales, licensing
fees, or cost savings from improved efficiency.
o Market Factors: Assess market demand, competition, and potential disruptions that
may impact revenue projections.
Long-term Sustainability:
o While a shorter payback period is preferred, consider the long-term sustainability
and profitability of the software beyond the payback period.
o Evaluate potential for ongoing revenue generation, customer retention, and
adaptability to changing market conditions.
Comparative Analysis:
o Use the payback period for comparative analysis between different software
development projects or investment options.
o Helps stakeholders prioritize projects based on their expected payback periods,
alongside other financial metrics and qualitative factors.
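To make the "Payback Period Calculation" step above concrete, here is a minimal Python sketch that assumes a single up-front investment and an even annual cash inflow; the figures are hypothetical.

```python
# A minimal payback-period sketch: years needed for cumulative inflows
# to recover the initial investment, assuming an even annual cash inflow.
def payback_period(initial_investment: float, annual_cash_inflow: float) -> float:
    if annual_cash_inflow <= 0:
        raise ValueError("Annual cash inflow must be positive")
    return initial_investment / annual_cash_inflow

# Hypothetical figures: a 120,000 development cost recovered at 40,000 per year.
print(payback_period(120_000, 40_000))   # 3.0 years
```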
Feasibility Report:-
In software development, a feasibility report is a detailed analysis that
evaluates the practicality and viability of a proposed software project. This
report is typically prepared during the initial stages of project planning and
serves as a foundation for decision-making. Here's an overview of what a
feasibility report in software development typically includes:
System Proposal
A system proposal is a comprehensive document that outlines a proposed solution to
address a specific need or problem within an organization. It serves as a formal
request for approval and funding to implement the proposed system.
The components of a system proposal typically include:
1. Introduction:
Provides an overview of the proposal.
Introduces the background, context, and objectives of the proposed system.
2. Problem Statement:
Clearly identifies the existing problem or need that the proposed system aims to
address.
Describes the deficiencies or inefficiencies in the current system or processes.
3. Objectives:
States the specific goals and outcomes that the proposed system intends to
achieve.
Objectives should be measurable, achievable, relevant, and time-bound (SMART).
4. Scope:
Defines the boundaries and limitations of the proposed system.
Outlines the features, functionalities, and deliverables of the system.
Specifies any constraints, assumptions, or exclusions.
5. Methodology:
Explains the approach or methodology that will be used to develop and
implement the proposed system.
Describes the development process, technology stack, and project management
approach.
6. System Requirements:
Details the hardware, software, and infrastructure requirements for the
proposed system.
Includes performance criteria, security considerations, and integration needs.
7. System Design:
Provides a high-level overview of the proposed system architecture.
Describes system components, data flow, user interface design, and database
schema.
8. Implementation Plan:
Outlines the timeline, milestones, and activities for implementing the proposed
system.
Allocates resources, assigns responsibilities, and defines roles within the project
team.
9. Cost Estimation:
Estimates the costs associated with developing and implementing the proposed
system.
Includes expenses for hardware, software, labor, training, maintenance, and
other relevant items.
10. Benefits Analysis:
Evaluates the potential benefits and returns on investment (ROI) of the proposed
system.
Identifies both tangible and intangible benefits, such as cost savings, increased
efficiency, and improved decision-making.
11. Risks and Mitigation Strategies:
Identifies potential risks and challenges that may arise during system
implementation.
Proposes strategies and contingency plans to mitigate risks and minimize their
impact.
12. Conclusion:
Summarizes the key points presented in the proposal.
Reiterates the need for the proposed system and its expected outcomes.
Provides a recommendation for whether the proposal should be approved.
Unit 4
Software Project Planning
Project:
• A project is a temporary endeavor (attempt) undertaken to provide a unique product or service
• A project is a (temporary) sequence of unique complex and connected activities that have one
goal or purpose and that must be completed by a specific time, within budget and according to
specification.
• A project is a sequence of activities that must be completed on time, within budget and
according to specification.
Project planning:
• It involves making detailed plan to achieve the objectives.
• Probably the most time-consuming project management activity.
• Continuous activity from initial concept through to system delivery. Plans must be regularly
revised as new information becomes available.
• Various different types of plan may be developed to support the main software project plan that
is concerned with schedule and budget.
Managing Project
o Defining and setting up project scope
o Managing project management activities
o Monitoring progress and performance
o Risk analysis at every phase
o Take necessary steps to avoid problems or to recover from them
o Act as project spokesperson
Size Estimation:
• Estimation of the size of software is an essential part of Software Project Management. It helps
the project manager to further predict the effort and time which will be needed to build the
project.
• Various measures are used in project size estimation. Some of these are:
▪ Lines of Code
▪ Number of entities in ER diagram
▪ Total number of processes in detailed data flow diagram
▪ Function points
1. Lines of Code (LOC):
Advantages:
• Universally accepted and used in many models like COCOMO.
• Estimation is closer to the developer's perspective.
• Simple to use.
Disadvantages:
• Different programming languages contain a different number of lines for the same functionality.
• No proper industry standard exists for this technique.
• It is difficult to estimate size using this technique in the early stages of a project.
2. Number of entities in ER diagram:
Advantages:
• Size estimation can be done during the initial stages of planning.
• The number of entities is independent of the programming technologies used.
Disadvantages:
• No fixed standards exist; some entities contribute more to project size than others.
• Just like FPA, it is less used in cost estimation models. Hence, it must be converted to LOC.
3. Total number of processes in detailed DFD:
Advantages:
• It is independent of the programming language.
• Each major process can be decomposed into smaller processes. This increases the accuracy of the estimation.
Disadvantages:
• Studying similar kinds of processes to estimate size takes additional time and effort.
• Construction of a DFD is not required for all software projects.
4. Function Point Analysis (FPA):
a) Count the number of functions of each proposed type: We have to find the number of functions belonging to the following types:
• External Inputs: Functions related to data entering the system.
• External Outputs: Functions related to data exiting the system.
• External Inquiries: They lead to data retrieval from the system but do not change the system.
• Internal Files: Logical files maintained within the system. Log files are not included here.
• External Interface Files: These are logical files for other applications which are used by our system.
b) Compute the Unadjusted Function Points(UFP): We have to Categorize each of the five
function types as simple, average or complex based on their complexity. Multiply count of each
function type with its weighting factor and find the weighted sum. The weighting factors for
each type based on their complexity are as follows:
FUNCTION TYPE            | SIMPLE | AVERAGE | COMPLEX
External Inputs          |   3    |    4    |    6
External Outputs         |   4    |    5    |    7
External Inquiries       |   3    |    4    |    6
Internal Files           |   7    |   10    |   15
External Interface Files |   5    |    7    |   10
d) Compute the Value Adjustment Factor (VAF): Use the following formula to calculate VAF, where TDI is the Total Degree of Influence of the general system characteristics:
VAF = (TDI * 0.01) + 0.65
e) Find the Function Point Count: Use the following formula to calculate FPC
FPC = UFP * VAF
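A small Python sketch of steps (a), (b), (d) and (e) above, using the standard complexity weights from the table; the function counts and the TDI value are invented for illustration.

```python
# Function point computation sketch: weighted counts -> UFP -> VAF -> FPC.
WEIGHTS = {                     # (simple, average, complex)
    "EI":  (3, 4, 6),           # External Inputs
    "EO":  (4, 5, 7),           # External Outputs
    "EQ":  (3, 4, 6),           # External Inquiries
    "ILF": (7, 10, 15),         # Internal Logical Files
    "EIF": (5, 7, 10),          # External Interface Files
}

# counts[function_type] = (simple_count, average_count, complex_count); invented values
counts = {"EI": (5, 4, 2), "EO": (3, 4, 1), "EQ": (4, 2, 1),
          "ILF": (2, 1, 0), "EIF": (1, 1, 0)}

ufp = sum(c * w for ftype, cs in counts.items()
                for c, w in zip(cs, WEIGHTS[ftype]))   # step (b): unadjusted FP

tdi = 38                          # assumed total degree of influence (0-70)
vaf = (tdi * 0.01) + 0.65         # step (d): value adjustment factor
fpc = ufp * vaf                   # step (e): function point count

print(ufp, round(vaf, 2), round(fpc, 1))
```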
Advantages:
• It can be easily used in the early stages of project planning.
• It is independent of the programming language.
• It can be used to compare different projects even if they use different technologies (database, language, etc.).
Disadvantages:
• It is not good for real-time systems and embedded systems.
• Many cost estimation models like COCOMO use LOC, and hence FPC must be converted to LOC.
Cost Estimation:
• Cost estimation in software engineering is typically concerned with the financial spend on the
effort to develop and test the software, this can also include requirements review, maintenance,
training, managing and buying extra equipment, servers and software. Many methods have been
developed for estimating software costs for a given project.
• Several estimation procedures have been developed and are having the following attributes in
common.
1. Project scope must be established in advance.
2. Software metrics are used as a basis from which estimates are made.
3. The project is broken into small pieces, which are estimated individually.
To achieve reliable cost and schedule estimates, several options arise:
4. Delay estimation until later in the project.
5. Use relatively simple decomposition techniques to generate project cost and schedule estimates.
6. Acquire one or more automated estimation tools.
The Software Engineering Laboratory established a model, called the SEL model, for estimating its software production. This model is an example of a static, single-variable model.
E = 1.4 L^0.93
DOC = 30.4 L^0.90
D = 4.6 L^0.26
Where E = Effort (in person-months)
DOC = Documentation (number of pages)
D = Duration (in months)
L = Number of lines of code (in KLOC)
Where Wi is the weight factor for the i-th variable and Xi = {-1, 0, +1}; the estimator gives Xi one of the values -1, 0 or +1 depending on whether the variable decreases, has no effect on, or increases the productivity.
Example: Compare the Walston-Felix Model with the SEL model on a software development
expected to involve 8 person-years of effort.
a. Calculate the number of lines of source code that can be produced.
b. Calculate the duration of the development.
c. Calculate the productivity in LOC/PY
d. Calculate the average manning
Solution:
The amount of manpower involved = 8 PY = 96 person-months.
(a) The number of lines of source code can be obtained by inverting the effort equations:
L (SEL) = (96/1.4)^(1/0.93) = 94.264 KLOC ≈ 94,264 LOC
L (W-F) = (96/5.2)^(1/0.91) = 24.632 KLOC ≈ 24,632 LOC
(b) Duration in months can be calculated by means of the duration equations:
D (SEL) = 4.6 (L)^0.26 = 4.6 (94.264)^0.26 ≈ 15 months
D (W-F) = 4.1 (L)^0.36 = 4.1 (24.632)^0.36 ≈ 13 months
(c) Productivity is the number of lines of code produced per person-month (or per person-year).
(d) Average manning is the average number of persons required per month on the project.
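The whole comparison can be reproduced with a short Python sketch (results are approximate because of rounding):

```python
# Reproduces the SEL vs. Walston-Felix comparison above
# (effort E in person-months, size L in KLOC, duration D in months).
effort = 8 * 12                                   # 8 person-years = 96 person-months

L_sel = (effort / 1.4) ** (1 / 0.93)              # invert E = 1.4 * L**0.93  -> ~94.3 KLOC
L_wf  = (effort / 5.2) ** (1 / 0.91)              # invert E = 5.2 * L**0.91  -> ~24.6 KLOC

D_sel = 4.6 * L_sel ** 0.26                       # ~15 months
D_wf  = 4.1 * L_wf ** 0.36                        # ~13 months

prod_sel = L_sel * 1000 / 8                       # LOC per person-year, ~11,780
prod_wf  = L_wf * 1000 / 8                        # ~3,080

manning_sel = effort / D_sel                      # average persons on the project, ~6.4
manning_wf  = effort / D_wf                       # ~7.4

for name, L, D, p, m in [("SEL", L_sel, D_sel, prod_sel, manning_sel),
                         ("W-F", L_wf, D_wf, prod_wf, manning_wf)]:
    print(f"{name}: size={L:.1f} KLOC, duration={D:.1f} months, "
          f"productivity={p:.0f} LOC/PY, average manning={m:.1f}")
```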
1. Organic:
• A development project can be considered to be of the organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects.
• Examples of this type of projects are simple business systems, simple inventory management
systems, and data processing systems.
2. Semidetached:
• A development project can be considered to be of the semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems but may be unfamiliar with some aspects of the system being developed.
• Example of Semidetached system includes developing a new operating system (OS), a Database
Management System (DBMS), and complex inventory management system.
3. Embedded:
• A development project is considered to be of the embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational method exist.
• For Example: ATM, Air Traffic control.
According to Boehm, software cost estimation should be done through three stages:
1. Basic Model
2. Intermediate Model
3. Detailed Model
Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes. Fig shows a plot of estimated effort versus product size.
From the fig, we can observe that the effort is somewhat superlinear in the size of the software product.
Thus, the effort required to develop a product increases very rapidly with project size.
The development time versus the product size in KLOC is plotted in fig. From fig it can be observed
that the development time is a sublinear function of the size of the
product increases by two times, the time to develop the product does not double but rises
moderately. This can be explained by the fact that for larger products, a larger number of activities
which can be carried out concurrently can be identified. The parallel activities can be carried out
simultaneously by the engineers. This reduces the time to complete the project. Further, from fig, it
can be observed that the development time is roughly the same for all three categories of products.
For example, a 60 KLOC program can be developed in approximately 18 months, regardless of
whether it is of organic, semidetached, or embedded type.
From the effort estimation, the project cost can be obtained by multiplying the required effort by the
manpower cost per month. But, implicit in this project cost computation is the assumption that the
entire project cost is incurred on account of the manpower cost alone. In addition to manpower cost,
a project would incur costs due to hardware and software required for the project and the company
overheads for administration, office space, etc.
It is important to note that the effort and the duration estimations obtained using the COCOMO
model are called a nominal effort estimate and nominal duration estimate. The term nominal implies
that if anyone tries to complete the project in a time shorter than the estimated duration, then the
cost will increase drastically. But, if anyone completes the project over a longer period of time than
the estimated, then there is almost no decrease in the estimated cost value.
Example 1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and development time for each of the three modes, i.e., organic, semidetached and embedded.
(ii) Semidetached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 months
Example 2: A project of size 200 KLOC is to be developed. The software development team has average experience on similar types of projects. The project schedule is not very tight. Calculate the effort, development time, average staff size, and productivity of the project.
Solution: The semidetached mode is the most appropriate mode, keeping in view the size, schedule and experience of the development team.
Hence E = 3.0 (200)^1.12 = 1133.12 PM
D = 2.5 (1133.12)^0.35 = 29.3 months
Average staff size (SS) = E/D = 1133.12/29.3 ≈ 38.67 persons
Productivity P = (200 × 1000)/1133.12 ≈ 176 LOC/PM
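A minimal Basic COCOMO sketch in Python, using the standard coefficients for the three modes, reproduces Example 2:

```python
# Basic COCOMO: E = a * KLOC**b (person-months), D = c * E**d (months).
COEFFS = {                      # (a, b, c, d), standard Basic COCOMO values
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b                 # person-months
    duration = c * effort ** d             # months
    staff = effort / duration              # average staff size
    productivity = kloc * 1000 / effort    # LOC per person-month
    return effort, duration, staff, productivity

E, D, SS, P = basic_cocomo(200, "semidetached")
print(f"E={E:.2f} PM, D={D:.2f} months, staff={SS:.2f}, P={P:.0f} LOC/PM")
# -> roughly E=1133 PM, D=29.3 months, staff=38.7, P=176 LOC/PM
```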
2. Intermediate Model:
• The basic Cocomo model considers that the effort is only a function of the number of lines of
code and some constants calculated according to the various software systems.
• The intermediate COCOMO model recognizes these facts and refines the initial estimates
obtained through the basic COCOMO model by using a set of 15 cost drivers based on various
attributes of software engineering.
Classification of Cost Drivers and their attributes:
(i) Product attributes:
o Required software reliability extent
o Size of the application database
o The complexity of the product
(ii) Hardware attributes:
o Run-time performance constraints
o Memory constraints
o The volatility of the virtual machine environment
o Required turnabout time
(iii) Personnel attributes:
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
(iv) Project attributes:
o Use of software tools
o Application of software engineering methods
o Required development schedule
The cost drivers are divided into the four categories listed above.
Intermediate COCOMO equations:
E = a_i (KLOC)^(b_i) * EAF
D = c_i (E)^(d_i)
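As a rough illustration of how the EAF enters the intermediate estimate, the sketch below multiplies a handful of cost-driver ratings together; the multiplier values and driver names here are illustrative placeholders, not the official COCOMO tables.

```python
# Intermediate COCOMO sketch: EAF is the product of the cost-driver multipliers,
# and it scales the nominal effort a_i * KLOC**b_i.
def intermediate_effort(kloc: float, a: float, b: float, cost_drivers: dict) -> float:
    eaf = 1.0
    for multiplier in cost_drivers.values():
        eaf *= multiplier
    return a * kloc ** b * eaf        # E = a_i * (KLOC)**b_i * EAF

# Illustrative ratings: values > 1 increase effort, values < 1 decrease it.
drivers = {"required_reliability": 1.15, "product_complexity": 1.08,
           "analyst_capability": 0.86, "use_of_software_tools": 0.95}
print(round(intermediate_effort(50, 3.0, 1.12, drivers), 1))
```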
COCOMO II:
• COCOMO-II is the revised version of the original Cocomo (Constructive Cost Model)and is
developed at University of Southern California. It is the model that allows one to estimate the
cost, effort and schedule when planning a new software development activity.
• It consists of three sub-models:
1. End User Programming:
• Application generators are used in this sub-model. End user write the code by using these
application generators.
• Example – Spreadsheets, report generator, etc.
2. Intermediate Sector:
• This category will create largely prepackaged capabilities for user programming. Their product
will have many reusable components. Typical firms operating in this sector are Microsoft, Lotus,
Oracle, IBM, Borland, Novell.
3.Infrastructure Sector:
• This category provides infrastructure for the software development like Operating System,
Database Management System, User Interface Management System, Networking System, etc.
2. Stage-II:
• It supports estimation in the early design stage of the project, when less is known about it. For this it uses the Early Design Estimation Model. This model is used in the early design stage of application generators, infrastructure, and system integration.
3. Stage-III:
• It supports estimation in the post architecture stage of a project. For this it uses Post Architecture
Estimation Model. This model is used after the completion of the detailed architecture of
application generator, infrastructure, system integration.
COCOMO 2 Model:
• The COCOMO-II is the revised version of the original Cocomo (Constructive Cost Model) and
is developed at the University of Southern California. This model calculates the development
time and effort taken as the total of the estimates of all the individual subsystems. In this model,
whole software is divided into different modules. Example of projects based on this model is
Spreadsheets and report generator.
1. Project risks:
• Project risks concern different forms of budgetary, schedule, personnel, resource, and customer-related problems. A vital project risk is schedule slippage. Since software is intangible, it is very tough to monitor and control a software project. It is very tough to control something which cannot be seen. For any manufacturing project, such as the manufacturing of cars, the project executive can see the product taking shape.
2. Technical risks:
• Technical risks concern potential method, implementation, interfacing, testing, and maintenance
issue. It also consists of an ambiguous specification, incomplete specification, changing
specification, technical uncertainty, and technical obsolescence. Most technical risks appear due
to the development team's insufficient knowledge about the project.
3. Business risks:
• This type of risk contains the risks of building an excellent product that no one needs, losing budgetary or personnel commitments, etc.
Well-designed input forms and visual display screens should meet the objectives of effectiveness, accuracy, ease of use, consistency, simplicity, and attractiveness. All of these objectives are attainable through the use of basic design principles, knowledge of what is needed as input for the system, and an understanding of how users respond to different elements of forms and screens.
Example
A data entry user who has to enter 100 or more purchase orders every day will care most about the speed of data entry. A usable interface in such a case would require a minimum number of keystrokes to enter an order.
Likewise, a company director, who needs to check on the status of a project once or twice a
week, will care most about the ease with which the system can be used, and the way
information is presented to him.
Keeping the Screen Consistent:
The second guideline for good screen design is to keep the screen display consistent. Screens can be kept consistent by locating information in the same area each time a new screen is accessed. Also, information that logically belongs together should be consistently grouped together. For example, name and address go together, not name and zip code.
Facilitating Movement between screens:
The third guideline for good screen design is to make it easy to move from one screen to
another. One common method for movement is to have users feel as if they are physically
moving to a new screen.
Screens should be designed so that the user is always aware of the status of an action. This can be done in the following ways:
a. Use error messages to provide feedback on mistakes.
b. Use confirmation messages to provide feedback on update actions.
c. Use status messages when some backend process is taking place.
Menu Design
The human -computer dialogue defines how the interaction between the user and the
computer takes place. There are two common types of dialogue.
i. Menu
ii. Questions and answer
With a menu dialogue, a menu displays a list of alternate selections. The user makes a
selection by choosing the number or letter of the desired alternative. Menu dialogues are of
the most common type because they are appropriate for both frequent and infrequent users
of a system.
In menu selection, the user reads a list of items and selects the one most appropriate to their task, usually by highlighting the selection and pressing the return key, or by keying the menu item number.
A menu selection system requires a very complete and accurate analysis of user tasks to ensure that all the necessary functions are supported with clear and consistent terminology.
Question and Answer Dialogue
With a question and answer dialogue, questions and alternative answers are presented. The user selects the alternative that best answers the question. Question and answer dialogues are most appropriate for intermittent users of a system.
Example.
After an order is entered on the Order Entry screen, the system might ask if the user would like to create an invoice for the entered order. To this, the user can offer one of two answers: Yes or No.
Do you want to enter Invoice (Y/N)
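A tiny Python sketch (not from the text) showing both dialogue styles, a numbered menu and a Yes/No question-and-answer prompt; the option labels are hypothetical.

```python
# Illustrates the two common dialogue styles described above.
def menu_dialogue() -> str:
    """Menu dialogue: display numbered alternatives and accept a selection."""
    options = {"1": "Enter order", "2": "Create invoice", "3": "Exit"}
    for key, label in options.items():
        print(f"{key}. {label}")
    choice = input("Select an option: ").strip()
    return options.get(choice, "Invalid selection")

def question_answer_dialogue() -> bool:
    """Question and answer dialogue: a Yes/No confirmation prompt."""
    answer = input("Do you want to enter Invoice (Y/N)? ").strip().upper()
    return answer == "Y"

if __name__ == "__main__":
    print(menu_dialogue())
    if question_answer_dialogue():
        print("Creating invoice...")
```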
Output Design
Output design refers to the process of defining how information generated by a system will be presented to users, including the format, layout, and delivery method, ensuring the data is clear, relevant, and easily understandable to support decision-making and meet user needs. Essentially, it is about designing the "output" of the software, such as reports, screens, or notifications, in a way that is user-friendly and aligns with the system's objectives.
External Outputs
External outputs are created and designed for output devices such as printers. They leave the system to trigger actions on the part of their recipients or to confirm actions to their recipients.
Some of the external outputs are designed as turnaround outputs, which are implemented as a form and re-enter the system as an input.
Internal outputs
Internal outputs remain inside the system and are used by end-users and managers. They support the management in decision making and reporting.
Software Reliability:
Software reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specified period of time.
Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the inputs are free of error.
Software reliability is an essential aspect of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a certain level of reliability, system developers tend to push complexity into the software layer, given the speedy growth of system size and the ease of doing so by upgrading the software.
For example, large next-generation aircraft will have over 1 million source lines of software on-
board; next-generation air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will have
over 5 million source lines of software. While the complexity of software is inversely associated
with software reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.
Failure Classification:
a) Transient: Failures occur only for certain inputs.
b) Permanent: Failures occur for all input values.
c) Recoverable: When failures occur, the system recovers with or without operator intervention.
d) Unrecoverable: The system may have to be restarted.
e) Cosmetic: May cause minor irritations; they do not lead to incorrect results.
3. Process Metrics:
Process metrics quantify useful attributes of the software development process & its environment.
They tell if the process is functioning optimally as they report on characteristics like cycle time &
rework time. The goal of process metrics is to do the job right the first time through the process.
The quality of the product is a direct function of the process. So process metrics can be used to
estimate, monitor, and improve the reliability and quality of software. Process metrics describe the
effectiveness and quality of the processes that produce the software product.
Examples are:
o The effort required in the process
o Time to produce the product
o Effectiveness of defect removal during development
o Number of defects found during testing
o Maturity of the process
4. Fault and Failure Metrics:
A fault is a defect in a program which appears when the programmer makes an error, and it causes a failure when executed under particular conditions. These metrics are used to determine the failure-free execution of software.
To achieve this objective, a number of faults found during testing and the failures or other problems
which are reported by the user after delivery are collected, summarized, and analyzed. Failure
metrics are based upon customer information regarding faults found after release of the software.
The failure data collected is therefore used to calculate failure density, Mean Time between
Failures (MTBF), or other parameters to measure or predict software reliability.
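A small Python sketch of how such failure data might be summarised into the figures mentioned above; the failure times and the KLOC value are invented for illustration.

```python
# Summarise collected failure data into simple reliability figures.
failure_times_hours = [120, 310, 455, 700, 940]   # cumulative times at which failures occurred

# Time between consecutive failures (first interval measured from time 0).
inter_failure_times = [t2 - t1 for t1, t2 in
                       zip([0] + failure_times_hours[:-1], failure_times_hours)]

mtbf = sum(inter_failure_times) / len(inter_failure_times)   # mean time between failures

kloc = 25                                                    # assumed product size
failure_density = len(failure_times_hours) / kloc            # failures per KLOC

print(f"MTBF = {mtbf:.0f} hours, failure density = {failure_density:.2f} failures/KLOC")
```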
Software Quality:
Software product quality is defined in terms of its fitness of purpose. That is, a quality product does precisely what the users want it to do. For software products, fitness of purpose is generally interpreted in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness of purpose" is a satisfactory interpretation of quality for many devices such as a car, a table fan or a grinding machine, for software products "fitness of purpose" is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product, i.e. one that performs all the tasks specified in the SRS document, but which has an almost unusable user interface. Even though it may be functionally correct, we cannot consider it to be a quality product.
The modern view of quality associates a software product with several quality attributes, such as the following:
i. Portability: A software device is said to be portable, if it can be freely made to work in various
operating system environments, in multiple machines, with other software products, etc.
ii. Usability: A software product has better usability if various categories of users can easily invoke
the functions of the product.
iii. Reusability: A software product has excellent reusability if different modules of the product can
quickly be reused to develop new products.
iv. Correctness: A software product is correct if various requirements as specified in the SRS
document have been correctly implemented.
v. Maintainability: A software product is maintainable if bugs can be easily corrected as and when
they show up, new tasks can be easily added to the product, and the functionalities of the product
can be easily modified, etc.
Both kinds of modeling methods are based on observing and accumulating failure data and
analyzing with statistical inference.
Basis | Prediction Models | Estimation Models
Data Reference | Uses historical information. | Uses data from the current software development effort.
When used in development cycle | Usually made before development or test phases; can be used as early as the concept phase. | Usually made later in the life cycle (after some data have been collected); not typically used in concept or development phases.
Time Frame | Predict reliability at some future time. | Estimate reliability at either the present or some future time.
Reliability Models:
A reliability growth model is a numerical model of software reliability, which predicts how
software reliability should improve over time as errors are discovered and repaired. These models
help the manager in deciding how much efforts should be devoted to testing. The objective of the
project manager is to test and debug the system until the required level of reliability is reached.
Following are the software reliability models:
Capability Maturity Model (CMM):
Capability Maturity Model is a common-sense application of software or Business Process
Management and quality improvement concepts to software development and maintenance.
The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process.
CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in
1987.
It is not a software process model. It is a framework which is used to analyse the approach and
techniques followed by any organization to develop a software product.
It also provides guidelines to further enhance the maturity of those software products.
It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
This model describes a strategy that should be followed by moving through 5 different levels.
Each level of maturity shows a process capability level. All the levels except level-1 are further
described by Key Process Areas (KPA’s).
Methods of SEICMM:
There are two methods of SEICMM:
1. Capability Evaluation: Capability evaluation provides a way to assess the software process
capability of an organization. The results of capability evaluation indicate the likely contractor
performance if the contractor is awarded a work. Therefore, the results of the software process
capability assessment can be used to select a contractor.
Levels of CMM:
CMM classifies software development industries into the following five maturity levels:
i. Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also called the chaotic level.
b. Process metrics follow the effectiveness of the process being used, such as average defect
correction time, productivity, the average number of defects found per hour inspection, the
average number of failures detected during testing per LOC, etc. The software process and
product quality are measured, and quantitative quality requirements for the product are met.
Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and
process quality. The process metrics are used to analyze if a project performed satisfactorily.
Thus, the outcome of process measurements is used to calculate project performance rather than
improve the process.
v. Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product measurement data are
evaluated for continuous process improvement.
According to IEEE: “Testing means the process of analyzing a software item to detect the differences between existing and required conditions (i.e. bugs) and to evaluate the features of the software item”.
According to Myers:“Testing is the process of analyzing a program with the intent of finding an
error”.
Software Testing:
• Software testing is a process of identifying the correctness of software by considering all of its attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution of software components to find software bugs, errors or defects.
• Software testing provides an independent view and objective assessment of the software and gives assurance of the fitness of the software. It involves testing all components under the required services to confirm whether they satisfy the specified requirements. The process also provides the client with information about the quality of the software.
Principles of Testing:-
i. All tests should meet the customer's requirements.
ii. To make testing effective, it should be performed by an independent third party.
iii. Exhaustive testing is not possible; we need an optimal amount of testing based on the risk
assessment of the application.
iv. All tests to be conducted should be planned before they are implemented.
v. Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20% of the
program components.
vi. Start testing with small parts and extend it to larger parts.
Verification:
• Verification is the process of checking that the software achieves its goal without any bugs. It is the
process of ensuring that we are building the product right.
• It verifies whether the developed product fulfils the requirements that we have.
• Verification is Static Testing.
• Activities involved in verification:
a) Inspections
b) Reviews
c) Walkthroughs
d) Desk-checking
Validation:
• Validation is the process of checking whether the software product is up to the mark, in other
words, whether it meets the high-level requirements.
• It is the process of checking whether we are developing the right product, i.e. it validates the
actual product against the expected product.
• Validation is the Dynamic Testing.
• Activities involved in validation:
a) Black box testing
b) White box testing
c) Unit testing
d) Integration testing
Difference between Verification and Validation:
• Verification includes checking documents, design, code and programs; validation includes testing
and validating the actual product.
• Verification is static testing; validation is dynamic testing.
• Verification does not include execution of the code; validation includes execution of the code.
• Methods used in verification are reviews, walkthroughs, inspections and desk-checking; methods
used in validation are black box testing, white box testing and non-functional testing.
• Verification checks whether the software conforms to its specifications; validation checks whether
the software meets the requirements and expectations of the customer.
• Verification can find bugs at an early stage of development; validation can only find the bugs that
could not be found by the verification process.
• The goal of verification is the application and software architecture and specification; the goal of
validation is the actual product.
• The quality assurance team performs verification; validation is executed on the software code with
the help of the testing team.
• Verification comes before validation; validation comes after verification.
2. Automation Testing:
• Automation testing, which is also known as Test Automation, is when the tester writes scripts and
uses separate software tools to test the product. This process automates a manual testing process
(a short sketch follows below).
• Automation testing is used to re-run the test scenarios that were performed manually, quickly, and
repeatedly.
• Apart from regression testing, automation testing is also used to test the application from the load,
performance, and stress points of view. It increases test coverage, improves accuracy, and saves
time and money in comparison to manual testing.
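As a minimal sketch of test automation, the script below uses Python's built-in unittest module to
re-run a small set of scenarios automatically. The function under test, apply_discount, is hypothetical
and stands in for real application code, which in practice would be imported from the product being
tested.

    import unittest

    def apply_discount(price, percent):
        # Hypothetical application code under test.
        return round(price * (1 - percent / 100), 2)

    class RegressionSuite(unittest.TestCase):
        # Each test method is a scenario that once had to be checked by hand;
        # the script can now re-run all of them quickly and repeatedly.
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_full_discount(self):
            self.assertEqual(apply_discount(50.0, 100), 0.0)

    if __name__ == "__main__":
        unittest.main()   # runs all scenarios, e.g. python regression_suite.py

Such a script can be executed after every change to the product, which is what makes it suitable for
regression testing.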
Advantages of Black Box Testing:
• Tests are done from a user’s point of view and will help in exposing discrepancies in the
specifications.
• The tester need not know programming languages or how the software has been implemented.
• Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
• Test cases can be designed as soon as the specifications are complete.
Disadvantages of Black Box Testing:
• Only a small number of possible inputs can be tested and many program paths will be left untested.
• Without clear specifications, which is the situation in many projects, test cases will be difficult to
design.
• Tests can be redundant if the software designer/developer has already run a test case.
• Ever wondered why a soothsayer closes their eyes when foretelling events? Much the same is the
case in black box testing: the tester works without seeing the internals of the code.
Advantages of White Box Testing:
• White box testing is very thorough as the entire code and structures are tested.
• It results in the optimization of code, removing errors and helping to remove extra lines of code.
• It can start at an earlier stage, as it doesn't require any interface, unlike black box testing.
• Easy to automate.
Disadvantages of White Box Testing:
• Its main disadvantage is that it is very expensive.
• Redesigning or rewriting the code requires the test cases to be written again.
• Testers are required to have in-depth knowledge of the code and the programming language, as
opposed to black box testing.
• Missing functionalities cannot be detected, as only the code that exists is tested.
• It is very complex and at times not realistic.
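Because white box testing works from the internal structure of the code, its test cases are typically
chosen so that every branch of the code is exercised. The Python sketch below illustrates this idea
with a hypothetical classify function; it is an example of branch-oriented test design under these
assumptions, not a prescribed procedure.

    import unittest

    def classify(age):
        # Hypothetical unit whose internal branches drive the test design.
        if age < 0:
            raise ValueError("age cannot be negative")
        elif age < 18:
            return "minor"
        else:
            return "adult"

    class WhiteBoxTests(unittest.TestCase):
        def test_negative_branch(self):
            with self.assertRaises(ValueError):      # exercises the 'age < 0' branch
                classify(-1)

        def test_minor_branch(self):
            self.assertEqual(classify(10), "minor")  # exercises the 'age < 18' branch

        def test_adult_branch(self):
            self.assertEqual(classify(30), "adult")  # exercises the remaining branch

    if __name__ == "__main__":
        unittest.main()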
Levels of Testing:
A level of software testing is a process in which every unit or component of a software system is
tested. The main goal of these testing levels is to evaluate the system's compliance with the specified
needs.
There are mainly four testing levels:
1. Unit Testing:
A unit is the smallest testable portion of a system or application, which can be compiled, linked,
loaded, and executed. This kind of testing helps to test each module separately.
The aim is to test each part of the software in isolation. It checks whether each component
fulfils its intended functionality or not. This kind of testing is performed by developers (see the
sketch after this list).
2. Integration Testing:
Integration means combining. In this testing phase, different software modules are combined
and tested as a group to make sure that the integrated system is ready for system testing.
Integration testing checks the data flow from one module to other modules. This kind of testing
is performed by testers (see the sketch after this list).
3. System Testing:
System testing is performed on a complete, integrated system. It allows checking the system's
compliance with the requirements. It tests the overall interaction of components. It involves
load, performance, reliability and security testing.
System testing is most often the final test to verify that the system meets the specification. It
evaluates both the functional and non-functional needs of the system.
4. Acceptance Testing:
Acceptance testing is a test conducted to determine whether the requirements of a specification
or contract are met at the time of delivery. Acceptance testing is basically done by the user or
customer. However, other stakeholders can be involved in this process.
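The sketch below, written with Python's built-in unittest module, illustrates the first two levels on two
hypothetical units: parse_scores and build_report stand in for real modules of an application. The
unit-level tests exercise each unit on its own, while the integration-level test combines them and
checks the data flow between them.

    import unittest

    def parse_scores(csv_line):
        # Unit A: turns a line such as "70,80,90" into a list of integers.
        return [int(x) for x in csv_line.split(",")]

    def build_report(scores):
        # Unit B: consumes Unit A's output and produces a summary string.
        return f"count={len(scores)}, average={sum(scores) / len(scores):.1f}"

    class UnitLevelTests(unittest.TestCase):
        # Unit testing: each unit is exercised separately, in isolation.
        def test_parse_scores_alone(self):
            self.assertEqual(parse_scores("70,80,90"), [70, 80, 90])

        def test_build_report_alone(self):
            self.assertEqual(build_report([50, 100]), "count=2, average=75.0")

    class IntegrationLevelTests(unittest.TestCase):
        # Integration testing: the units are combined and the data flow
        # from the parser into the report builder is checked.
        def test_parser_output_feeds_report(self):
            self.assertEqual(build_report(parse_scores("70,80,90")),
                             "count=3, average=80.0")

    if __name__ == "__main__":
        unittest.main()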
Software Maintenance:
• Software Maintenance is the process of modifying a software product after it has been delivered to
the customer.
• Software Maintenance is an inclusive activity that includes error corrections, enhancement of
capabilities, deletion of obsolete capabilities, and optimization.
• The main purpose of software maintenance is to modify and update software application after
delivery to correct faults and to improve performance.
b) Adaptive Maintenance: This includes modifications and updates applied to keep the
software product up to date and tuned to the ever-changing world of technology and business
environments.
c) Perfective Maintenance: This includes modifications and updates done in order to keep the
software usable over a long period of time. It includes new features and new user requirements
for refining the software and improving its reliability and performance.
2. Analysis Phase:
• The feasibility and scope of each validated modification request are determined and a plan is
prepared to incorporate the changes into the software. The inputs to this phase comprise the
validated modification request, an initial estimate of resources, project documentation, and
repository information. The cost of modification and maintenance is also estimated.
3. Design Phase:
• The new modules that need to be replaced or modified are designed as per the requirements
specified in the earlier stages. Test cases are developed for the new design including the safety
and security issues. These test cases are created for the validation and verification of the system.
4. Implementation Phase:
• In the implementation phase, the actual modifications to the software code are made, new
features that support the specifications of the present software are added, and the modified
software is installed. The new modules are coded with the assistance of the structured design
created in the design phase.
1. Quick-Fix Model:
• This is an ad hoc approach used for maintaining the software system. The objective of this model is
to identify the problem and then fix it as quickly as possible. The advantage is that it performs its
work quickly and at a low cost. This model is an approach to modify the software code with little
consideration for its impact on the overall structure of the software system.
4. Boehm's Model:
• Boehm’s Model performs maintenance process based on the economic models and principles. It
represents the maintenance process in a closed loop cycle, wherein changes are suggested and
approved first and then are executed.
1. Non-Technical Factors:
a. Application Domain:
o If the application of the program is defined and well understood, the system requirements may
be definitive and maintenance due to changing needs minimized.
o If the application is entirely new, it is likely that the initial requirements will be modified
frequently as users gain experience with the system.
b. Staff Stability:
o It is easier for the original writer of a program to understand and change an application than for
some other person, who must understand the program by studying the reports and code
listings.
o If the implementer of a system also maintains that system, maintenance costs will be reduced.
o In practice, the nature of the programming profession is such that people change jobs
regularly. It is unusual for one person to develop and maintain an application throughout its
useful life.
c. Program Lifetime:
o Programs become obsolete when the application becomes obsolete, or when their original
hardware is replaced and conversion costs exceed rewriting costs.
e. Hardware Stability:
o If an application is designed to operate on a specific hardware configuration and that
configuration does not change during the program's lifetime, no maintenance costs due to
hardware changes will be incurred.
o However, hardware develops so rapidly that this situation is rare.
o The application must usually be changed to use new hardware that replaces obsolete equipment.
2. Technical Factors:
Technical Factors include the following:
Module Independence:
• It should be possible to change one program unit of a system without affecting any other unit (a
small sketch illustrating this appears after these technical factors).
Programming Language:
• Programs written in a high-level programming language are generally easier to understand than
programs written in a low-level language.
Programming Style:
• The way in which a program is written contributes to its understandability and hence the ease
with which it can be modified.
Documentation:
• If a program is supported by clear, complete yet concise documentation, the task of
understanding the application can be relatively straightforward.
• Program maintenance costs tend to be lower for well-documented systems than for systems
supplied with inadequate or incomplete documentation.
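As a small illustration of the module independence factor above, the Python sketch below keeps the
reporting code dependent only on the save_record interface, so the storage code can be rewritten (for
example, to write to a file or a database) without touching the reporting code. All names here are
hypothetical.

    def save_record(record, store):
        # Storage unit: currently appends to an in-memory list. Its body can be
        # changed freely as long as its signature stays the same.
        store.append(record)

    def record_test_run(total, failures, store):
        # Reporting unit: knows nothing about how records are actually stored;
        # it relies only on the save_record interface.
        save_record(f"tests run: {total}, failures: {failures}", store)

    if __name__ == "__main__":
        log = []
        record_test_run(10, 1, log)
        print(log)   # ['tests run: 10, failures: 1']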
Types of Documentation:
All software documentation can be divided into two main categories: Product Documentation and
Process Documentation.
1. Product documentation:
• Product documentation describes the product that is being developed and provides instructions on
how to perform various tasks with it.
• Product documentation can be broken down into:
a) System documentation: System documentation represents documents that describe the system
itself and its parts. It includes requirements documents, design decisions, architecture descriptions,
program source code, and help guides.
b) User documentation: User documentation covers manuals that are mainly prepared for end-users
of the product and system administrators. User documentation includes tutorials, user guides,
troubleshooting manuals, installation, and reference manuals.
2. Process documentation:
• Process documentation represents all documents produced during development and maintenance
that describe the process itself.
• The common examples of process documentation are project plans, test schedules, reports,
standards, meeting notes, or even business correspondence.