Notes Sepm Unit 2
UNIT-02
Measures, Metrics and Indicators
UNIT-02/LECTURE-01
Measures, Metrics and Indicators: [RGPV/June 2014, 2012(7), June 2011(5)]
Measure: Quantitative indication of the extent, amount, dimension, or size of some attribute of
a product or process. A single data point.
Metrics: The degree to which a system, component, or process possesses a given attribute.
Relates several measures (e.g. average number of errors found per person-hour).
Indicators: A combination of metrics that provides insight into the software process, project or
product.
Direct Metrics: Immediately measurable attributes (e.g. lines of code, execution speed, defects
reported).
Indirect Metrics: Aspects that are not immediately quantifiable (e.g. functionality, quality,
reliability).
Faults:
Errors: Faults found by the practitioners during software development.
Defects: Faults found by the customers after release.
Process Metrics:-
Focus on quality achieved as a consequence of a repeatable or managed process. Strategic and
Long Term.
Statistical Software Process Improvement (SSPI). Error Categorization and Analysis:
All errors and defects are categorized by origin
The cost to correct each error and defect is recorded
The number of errors and defects in each category is computed
Data is analyzed to find categories that result in the highest cost to the organization
Plans are developed to modify the process
Defect Removal Efficiency (DRE). Relationship between errors (E) and defects (D). The ideal is a
DRE of 1:
DRE = E / (E + D)
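As a minimal illustration (the fault counts below are invented, not from the notes), DRE can be computed directly from the counts of faults found before and after release:

# Defect Removal Efficiency: the fraction of total faults removed before release.
def dre(errors_found_before_release, defects_found_after_release):
    return errors_found_before_release / (errors_found_before_release + defects_found_after_release)

# Assumed counts: 95 faults caught during development, 5 reported by customers.
print(dre(95, 5))  # 0.95 -- the closer to 1, the more effective the filtering activities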
Project metrics:-
Used by a project manager and software team to adapt project work flow and technical
activities. Tactical and Short Term.
Purpose:
Minimize the development schedule by making the necessary adjustments to avoid
delays and mitigate problems.
Assess product quality on an ongoing basis.
Metrics:
Effort or time per SE task
Errors uncovered per review hour
Scheduled vs. actual milestone dates
Number of changes and their characteristics
Distribution of effort on SE tasks
Product metrics:-
Focus on the quality of deliverables.
Product metrics are combined across several projects to produce process metrics
Metrics for the product:
Measures of the Analysis Model
Complexity of the Design Model
1. Internal algorithmic complexity
2. Architectural complexity
Provide regular feedback to the individuals and teams who have worked to collect
measures and metrics.
Don’t use metrics to appraise individuals
Work with practitioners and teams to set clear goals and metrics that will be used to
achieve them
Never use metrics to threaten individuals or teams
Metrics data that indicate a problem area should not be considered negative. These
data are merely an indicator for process improvement
Don’t obsess on a single metric to the exclusion of other important metrics
• Size-Oriented:
errors per KLOC (thousand lines of code), defects per KLOC, cost per LOC, pages of
documentation per KLOC, errors per person-month, LOC per person-month, cost per page of
documentation
• Function-Oriented:
errors per FP, defects per FP, cost per FP, pages of documentation per FP, FP per
person-month
Why Opt for FP Measures? : [RGPV/Jun 2014(7)]
• Independent of programming language. Some programming languages are more
compact, e.g. C++ vs. Assembler
• Use readily countable characteristics of the information domain of the problem
• Does not penalize inventive implementations that require fewer LOC than others
• Makes it easier to accommodate reuse and object-oriented approaches
• Original FP approach good for typical Information Systems applications (interaction
complexity)
• Variants (Extended FP and 3D FP) more suitable for real-time and scientific software
(algorithm and state transition complexity)
[Figure: function point counting example (SafeHome-style security system) — user inputs such as password and sensor settings, alarm alert/response outputs, system configuration data and external interfaces are counted and weighted by complexity]
count-total = 52
complexity multiplier = 0.65 + 0.01 * sum(Fi) = 0.65 + 0.46 = 1.11
function points = 52 x 1.11 ≈ 58
Q.3 Explain the need for software measures & describe various metrics. [June 2014, 7 marks]
Q.4 Compute the function point value for a project with the following information domain
characteristics: [June 2013, 7 marks]
Number of user inputs: 32
Number of user outputs: 60
Number of user enquiries: 24
Number of files: 8
Number of external interfaces: 2
Assume that weights are average and external complexity adjustment values are not
important.
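As a hedged sketch (not an official solution), the snippet below applies the FP formula used in the example above, FP = count-total x (0.65 + 0.01 x sum(Fi)), to the Q.4 figures. The average weights (inputs 4, outputs 5, enquiries 4, files 10, external interfaces 7) are the commonly used values and are an assumption here, since the weighting table did not survive in these notes; the complexity adjustment is treated as neutral because the question says it is not important.

# Function point sketch with assumed average-complexity weights.
AVERAGE_WEIGHTS = {"inputs": 4, "outputs": 5, "enquiries": 4, "files": 10, "interfaces": 7}

def count_total(counts, weights=AVERAGE_WEIGHTS):
    return sum(counts[k] * weights[k] for k in counts)

def function_points(counts, fi_values=None):
    # FP = count-total * (0.65 + 0.01 * sum(Fi)); sum(Fi) = 35 gives a neutral multiplier of 1.0
    fi_sum = sum(fi_values) if fi_values else 35
    return count_total(counts) * (0.65 + 0.01 * fi_sum)

q4 = {"inputs": 32, "outputs": 60, "enquiries": 24, "files": 8, "interfaces": 2}
print(count_total(q4))      # 128 + 300 + 96 + 80 + 14 = 618
print(function_points(q4))  # 618.0 with the neutral complexity multiplier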
UNIT-02/LECTURE-02
OO Metrics: Distinguishing Characteristics
• The following characteristics require that special OO metrics be developed:
Encapsulation — Concentrate on classes rather than functions
Information hiding — An information hiding metric will provide an indication of
quality
Inheritance — A pivotal indication of complexity
Abstraction — Metrics need to measure a class at different levels of abstraction
and from different viewpoints
Conclusion: the class is the fundamental unit of measurement
OO Project Metrics
• Number of Scenario Scripts (Use Cases):
Number of use-cases is directly proportional to the number of classes needed to
meet requirements
A strong indicator of program size
• Number of Key Classes (Class Diagram):
A key class focuses directly on the problem domain
NOT likely to be implemented via reuse
Typically 20-40% of all classes are key, the rest support infrastructure (e.g. GUI,
communications, databases)
• Number of Subsystems (Package Diagram):
Provides insight into resource allocation, scheduling for parallel development and
overall integration effort
Number of Children (NOC): Total number of children for each class. Large NOC
may dilute abstraction and increase testing
OO Testability Metrics
Encapsulation:
Percent Public and Protected (PAP): Percentage of attributes that are public or protected.
Public attributes can be inherited and accessed externally. High PAP means more
side effects
Public Access to Data members (PAD): Number of classes that access another
class's attributes. Violates encapsulation
Inheritance:
Number of Root Classes (NRC): Count of distinct class hierarchies. Must all be
tested separately
Fan In (FIN): The number of superclasses associated with a class. FIN > 1
indicates multiple inheritance. Must be avoided
Number of Children (NOC) and Depth of Inheritance Tree (DIT): Superclasses
need to be retested for each subclass
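As a small illustration (the class names and numbers are invented), the sketch below computes three of these metrics — PAP, NOC and DIT — from a toy description of a class hierarchy:

# Toy class model: parent class and attribute-visibility counts per class (all assumed).
classes = {
    "Sensor":       {"parent": None,     "public_or_protected_attrs": 2, "total_attrs": 4},
    "DoorSensor":   {"parent": "Sensor", "public_or_protected_attrs": 1, "total_attrs": 3},
    "WindowSensor": {"parent": "Sensor", "public_or_protected_attrs": 0, "total_attrs": 2},
}

def pap(model):
    # Percent Public and Protected: share of attributes that are not private.
    exposed = sum(c["public_or_protected_attrs"] for c in model.values())
    return 100.0 * exposed / sum(c["total_attrs"] for c in model.values())

def noc(model, name):
    # Number of Children: direct subclasses of the given class.
    return sum(1 for c in model.values() if c["parent"] == name)

def dit(model, name):
    # Depth of Inheritance Tree: steps from the class up to its root.
    depth = 0
    while model[name]["parent"] is not None:
        name = model[name]["parent"]
        depth += 1
    return depth

print(round(pap(classes), 1))      # 33.3 -- lower PAP generally means fewer side effects
print(noc(classes, "Sensor"))      # 2 children -> two subclasses against which to retest
print(dit(classes, "DoorSensor"))  # depth 1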
DRE = E / (E + D)
Quality Metrics
• Measures conformance to explicit requirements, adherence to specified standards, and
satisfaction of implicit requirements
• Software quality can be difficult to measure and is often highly subjective
• Correctness:
The degree to which a program operates according to specification
Metric = Defects per FP
• Maintainability:
The degree to which a program is amenable to change
Metric = Mean Time to Change. Average time taken to analyze, design,
implement and distribute a change
(a) Stable (process is under control and metrics can be used to predict future performance)
(b) Unstable (process exhibits out-of-control changes and metrics cannot be used to
predict changes)
Control Chart
[Figure: control chart plotting Er (errors found per review hour, 0-6) across projects 1-20]
Compare sequences of metrics values against mean and standard deviation. E.g. metric is
unstable if eight consecutive values lie on one side of the mean.
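A minimal sketch of the instability test just described (the Er values are invented for illustration):

from statistics import mean

def unstable(values, run_length=8):
    # Flag the metric as unstable if run_length consecutive values fall
    # on the same side of the overall mean.
    m = mean(values)
    run, last_side = 0, 0
    for v in values:
        side = 1 if v > m else (-1 if v < m else 0)
        run = run + 1 if side != 0 and side == last_side else (1 if side != 0 else 0)
        last_side = side
        if run >= run_length:
            return True
    return False

# Er (errors found per review hour) over a series of projects -- assumed data.
er = [1.2, 1.1, 1.3, 1.4, 1.2, 1.5, 1.6, 1.7, 1.8, 3.9, 0.2, 1.3]
print(unstable(er))  # False: the longest same-side run here is 6, short of 8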
UNIT-02/LECTURE-03
Software Reliability:
Probability of failure-free operation for a specified time in a specified environment for a
given purpose
This means quite different things depending on the system and the users of that system
Informally, reliability is a measure of how well system users think it provides the services
they require.
Software reliability
Cannot be defined objectively
• Reliability measurements which are quoted out of context are not meaningful
Requires operational profile for its definition
• The operational profile defines the expected pattern of software usage
Must consider fault consequences
• Not all faults are equally serious. System is perceived as more unreliable if there
are more serious faults
Input/output mapping
[Figure: within the program's input set, a subset Ie of inputs causes erroneous outputs; the program maps Ie to the erroneous output set Oe]
Reliability improvement
Reliability is improved when software faults which occur in the most frequently used
parts of the software are removed
Removing x% of software faults will not necessarily lead to an x% reliability
improvement
In a study, removing 60% of software defects actually led to a 3% reliability
improvement
Removing faults with serious consequences is the most important objective
Reliability perception
[Figure: different users (User 1, User 2, User 3) exercise different subsets of the possible inputs; only some of these overlap the erroneous inputs, so each user perceives a different reliability]
Reliability metrics
Hardware metrics not really suitable for software as they are based on component
failures and the need to repair or replace a component once it has failed. The design
is assumed to be correct.
Software failures are always design failures. Often the system continues to be
available in spite of the fact that a failure has occurred.
Probability of failure on demand
This is a measure of the likelihood that the system will fail when a service
request is made
POFOD = 0.001 means 1 out of 1000 service requests result in failure
Relevant for safety-critical or non-stop systems
Rate of fault occurrence (ROCOF)
Frequency of occurrence of unexpected behaviour
ROCOF of 0.02 means 2 failures are likely in each 100 operational time units
Relevant for operating systems, transaction processing systems
Mean time to failure
• Measure of the time between observed failures
• MTTF of 500 means that the time between failures is 500 time units
• Relevant for systems with long transactions e.g. CAD systems
Availability
• Measure of how likely the system is available for use. Takes
repair/restart time into account
• Availability of 0.998 means software is available for 998 out of 1000 time units
• Relevant for continuously running systems e.g. telephone switching systems
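A short sketch showing how these four metrics fall out of raw observations; the figures below simply reproduce the illustrative numbers quoted above:

def pofod(failed_requests, total_requests):
    # Probability Of Failure On Demand: failures per service request.
    return failed_requests / total_requests

def rocof(failure_count, operational_time_units):
    # Rate of OCcurrence Of Failures per unit of operational time.
    return failure_count / operational_time_units

def mttf(inter_failure_times):
    # Mean Time To Failure: average time between observed failures.
    return sum(inter_failure_times) / len(inter_failure_times)

def availability(uptime, downtime):
    # Fraction of total time the system is usable (repair/restart time included).
    return uptime / (uptime + downtime)

print(pofod(1, 1000))         # 0.001 -> 1 failure in every 1000 service requests
print(rocof(2, 100))          # 0.02  -> 2 failures per 100 operational time units
print(mttf([480, 520, 500]))  # 500.0 time units between failures
print(availability(998, 2))   # 0.998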
Reliability measurement
Measure the number of system failures for a given number of system inputs
UNIT-02/LECTURE-04
COCOMO (COnstructive COst MOdel) is a static single-variable model. Barry Boehm introduced
the COCOMO models, and there is a hierarchy of these models.
• The Constructive Cost Model (COCOMO) is the most widely used software estimation
model in the world.
• The COCOMO model predicts the effort and duration of a project based on inputs
relating to the size of the resulting system and a number of cost drivers that affect
productivity.
Effort
• Effort Equation
– PM = C * (KDSI)^n (person-months)
• where PM = number of person-months (= 152 working hours),
• C = a constant,
• KDSI = thousands of delivered source instructions (DSI), and
• n = a constant.
Productivity
• Productivity equation
– (DSI) / (PM)
• where PM = number of person-months (= 152 working hours),
• DSI = delivered source instructions
Schedule
• Schedule equation
– TDEV = C * (PM)^n (months)
• where TDEV = number of months estimated for software development.
Average Staffing
• Average staffing equation
– (PM) / (TDEV) (FSP)
• where FSP = average number of full-time-equivalent software personnel.
One of the most important factors contributing to a project's duration and cost is the
development mode:
Organic Mode: The project is developed in a familiar, stable environment, and the
product is similar to previously developed products. The product is relatively small, and
requires little innovation.
Semidetached Mode: The project’s characteristics are intermediate between Organic
and Embedded
Embedded Mode: The project is characterized by tight, inflexible constraints and
interface requirements. An embedded mode project will require a great deal of
innovation.
Modes
[Table: comparison of project features across the Organic, Semidetached and Embedded modes]
Cost=SizeOfTheProject x Productivity
Model 1:
Basic COCOMO model is static single-valued model that computes software development effort
(and cost) as a function of program size expressed in estimated lines of code.
Model 2:
Intermediate COCOMO model computes software development effort as a function of program
size and a set of cost drivers that include subjective assessments of product, hardware,
personnel and project attributes.
Model 3:
Advanced COCOMO model incorporates all characteristics of the intermediate version with an
assessment of the cost driver’s impact on each step, like analysis, design, etc.
Organic mode:
These projects are very simple and have small team sizes. The team has good application
experience and works to a set of less-than-rigid requirements. A thermal analysis program developed
for a heat transfer group is an example of this.
Semi-detached mode:
These are intermediate in size and complexity. Here the team has mixed experience to meet a
mix of rigid and less than rigid requirements. A transaction processing system with fixed
requirements for terminal hardware and database software is an example of this.
Embedded mode:
Software projects that must be developed within a set of tight hardware, software, and
operational constraints. For example, flight control software for aircraft.
Basic COCOMO
E = ab (KLOC)^bb (person-months)
D = cb (E)^db (months)
where E is the effort applied in person-months, D is the development time in chronological
months, KLOC is the estimated number of delivered lines of code (in thousands), and ab, bb,
cb, db are constants for each of the three modes.
Intermediate COCOMO
The basic model is extended to consider a set of cost driver attributes. These attributes can
be grouped together into four categories.
1. Product attributes
a) Required software reliability.
b) Size of the application database.
c) Complexity of the product.
2. Hardware attributes
a) Run-time performance constraints.
b) Memory constraints.
c) Volatility of the virtual machine environment.
d) Required turnaround time.
3. Personnel attributes
a) Analyst capability.
b) Software engineer capability.
c) Virtual machine experience.
d) Application experience.
e) Programming language experience.
4. Project attributes
a) Use of software tools.
b) Application of software engineering methods.
c) Required development schedule.
Each of the 15 attributes is rated on a 6-point scale that ranges from very low to very high
in importance or value. Based on the rating, an effort multiplier is determined from the tables
given by Boehm. The product of all multipliers results in an effort adjustment factor (EAF).
Typical values of EAF range from 0.9 to 1.4.
KLOC = 10.9
E = ab (KLOC)^bb
= 2.4 (10.9)^1.05
= 29.5 person-months
D = cb (E)^db
= 2.5 (29.5)^0.38
= 9.04 months
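The same arithmetic as a small Python sketch; the coefficients are the organic-mode constants used in the example above, and the average-staffing line is simply E divided by D:

# Basic COCOMO, organic mode -- reproduces the worked example above.
def basic_cocomo(kloc, a_b=2.4, b_b=1.05, c_b=2.5, d_b=0.38):
    effort = a_b * kloc ** b_b        # person-months
    duration = c_b * effort ** d_b    # chronological months
    return effort, duration, effort / duration

e, d, n = basic_cocomo(10.9)
print(round(e, 1), round(d, 2), round(n, 1))  # ~29.5 pm, ~9.04 months, ~3.3 people on average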
E = ai (KLOC)^bi x EAF
Where,
E is the effort applied in person-months,
KLOC is the estimated number of delivered lines of code (in thousands) for the project.
The coefficient ai and the exponent bi are given in the table below.
Software project    ai     bi
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20
Today, a software cost estimation model is doing well if it can estimate software development
costs within 20% of actual costs, 70% of the time, and on its own turf (that is, within the class of
projects to which it has been calibrated ... This is not as precise as we might like, but it is
accurate enough to provide a good deal of help in software engineering economic analysis and
decision making.
To illustrate the use of the COCOMO model, we apply the basic model to the CAD software
example [described in SEPA, 5/e]. Using the LOC estimate and the coefficients noted in Table
5.1, we use the basic model to get:
E = 2.4 (KLOC)^1.05
= 2.4 (33.2)^1.05
= 95 person-months
This value is considerably higher than the estimates derived using LOC. Because the COCOMO
model assumes considerably lower LOC/pm levels than those discussed in SEPA, 5/e, the results
are not surprising. To be useful in the context of the example problem, the COCOMO model
would have to be recalibrated to the local environment.
The value for project duration enables the planner to determine a recommended number of
people, N, for the project:
N = E/D
= 95/12.3
~ 8 people
In reality, the planner may decide to use only four people and extend the project duration
accordingly.
UNIT-02/LECTURE-05
Relation between LOC and FP:
• Relationship:
– LOC = Language Factor * FP
– where
• LOC (Lines of Code)
• FP (Function Points)
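A tiny sketch of this backfiring relation; the language factors (LOC per FP) below are rough, commonly quoted industry figures and should be read as assumptions, not as values from these notes:

# Backfiring: convert function points into an LOC estimate via a language factor.
LANGUAGE_FACTOR = {"Assembler": 320, "C": 128, "C++": 64, "Java": 53}  # approximate LOC per FP

def estimated_loc(fp, language):
    return fp * LANGUAGE_FACTOR[language]

print(estimated_loc(58, "C"))    # 7424 LOC for the 58-FP example from Lecture 1
print(estimated_loc(58, "C++"))  # 3712 LOC -- a more compact language needs fewer LOC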
Effort Computation
• The Basic COCOMO model computes effort as a function of program size. The Basic
COCOMO equation is:
– E = a * (KLOC)^b
• Effort coefficients for the three modes of Basic COCOMO:
Mode            a      b
Organic         2.4    1.05
Semidetached    3.0    1.12
Embedded        3.6    1.20
• The intermediate COCOMO model computes effort as a function of program size and a
set of cost drivers. The Intermediate COCOMO equation is:
– E = a * (KLOC)^b * EAF
• Effort coefficients for the three modes of Intermediate COCOMO:
Mode            a      b
Organic         3.2    1.05
Semidetached    3.0    1.12
Embedded        2.8    1.20
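A hedged sketch of the intermediate model: the EAF is the product of the cost-driver multipliers, and the handful of multipliers used below are illustrative placeholders rather than Boehm's published table values.

import math

MODES = {"organic": (3.2, 1.05), "semidetached": (3.0, 1.12), "embedded": (2.8, 1.20)}

def intermediate_effort(kloc, mode, cost_driver_multipliers):
    a, b = MODES[mode]
    eaf = math.prod(cost_driver_multipliers)  # typical EAF values range from about 0.9 to 1.4
    return a * kloc ** b * eaf                # effort in person-months

drivers = [1.15, 0.88, 1.00, 1.07]  # assumed ratings, e.g. high reliability, capable analysts, ...
print(round(intermediate_effort(10.9, "organic", drivers), 1))  # ~42.6 person-months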
Distribution of Effort
• A development process typically consists of the following stages:
• - Requirements Analysis
• - Design (High Level + Detailed)
• - Implementation & Coding
• - Testing (Unit + Integration)
The following table gives the recommended percentage distribution of Effort (APM) and TDEV
for these stages:
• Calculate the estimated number of errors in your design, i.e. total errors found in
requirements, specifications, code, user manuals, and bad fixes:
– Adjust the Function Point calculated in step 1:
AFP = FP^1.25
– Use the following table for calculating error estimates (expected errors per AFP):
Requirements       1
Design             1.25
Implementation     1.75
Documentation      0.6
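A short sketch of the two steps above, assuming the table values are expected errors per adjusted function point (the usual reading of such tables):

# Error estimation from function points.
ERRORS_PER_AFP = {"requirements": 1.0, "design": 1.25, "implementation": 1.75, "documentation": 0.6}

def estimated_errors(fp):
    afp = fp ** 1.25  # adjusted function points
    return {phase: rate * afp for phase, rate in ERRORS_PER_AFP.items()}

for phase, errors in estimated_errors(58).items():  # 58 FP from the Lecture 1 example
    print(phase, round(errors))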
UNIT-02/LECTURE-06
COCOMOII:
Constructive Cost Model II (COCOMO® II) is a model that allows one to estimate the cost, effort,
and schedule when planning a new software development activity. COCOMO® II is the latest
major extension to the original COCOMO® (COCOMO® 81) model published in 1981. It consists
of three submodels, each one offering increased fidelity the further along one is in the project
planning and design process. Listed in increasing fidelity, these submodels are called the
Applications Composition, Early Design, and Post-architecture models.
The original COCOMO® model was first published by Dr. Barry Boehm in 1981, and reflected the
software development practices of the day. In the ensuing decade and a half, software
development techniques changed dramatically. These changes included a move away from
mainframe overnight batch processing to desktop-based real-time turnaround; a greatly
increased emphasis on reusing existing software and building new systems using off-the-shelf
software components; and spending as much effort to design and manage the software
development process as was once spent creating the software product.
These changes and others began to make applying the original COCOMO® model problematic.
The solution to the problem was to reinvent the model for the 1990s. After several years and
the combined efforts of USC-CSSE, ISR at UC Irvine, and the COCOMO® II Project Affiliate
Organizations, the result is COCOMO® II, a revised cost estimation model reflecting the changes
in professional software development practice that have come about since the 1970s. This new,
improved COCOMO® is now ready to assist professional software cost estimators for many years
to come.
If in examining a reference you are still unsure as to which model is being discussed, there are a
few obvious clues. If in the context of discussing COCOMO® these terms are used: Basic,
Intermediate, or Detailed for model names; Organic, Semidetached, or Embedded for
development mode, then the model being discussed is COCOMO® 81. However, if the model
names mentioned are Application Composition, Early Design, or Post-architecture; or if there is
mention of scale factors Precedentedness (PREC), Development Flexibility (FLEX),
Architecture/Risk Resolution (RESL), Team Cohesion (TEAM), or Process Maturity (PMAT), then
the model being discussed is COCOMO® II.
Reverse engineering :
Reverse engineering, the process of taking a software program’s binary code and recreating it
so as to trace it back to the original source code, is being widely used in computer hardware and
software to enhance product features or fix certain bugs. For example, the programmer writes
the code in a high-level language such as C or C++; as computers do not execute these
languages directly, the code written in these programming languages needs to be translated
into a format that is machine specific. In short, the code written in a high-level language
needs to be translated into low-level or machine language.
The process of analyzing this low-level or machine code and recovering a higher-level
representation of the program, without changing the original program, is known as reverse
engineering. It is similar to disassembling the parts of a vehicle to understand the basic
functioning of the machine and its internal parts, and thereafter making appropriate
adjustments to produce a better-performing or superior vehicle.
If we have a look at the subject of reverse engineering in the context of software engineering,
we will find that it is the practice of analyzing the software system to extract the actual design
and implementation information. A typical reverse engineering scenario would involve a
software module that has been worked on for years and carries the line of business in its code,
but whose original source code has been lost, leaving the developers only with the binary code.
In such a case, software engineers would use reverse engineering skills to detect probable
viruses and malware and, eventually, to protect the intellectual property of the company.
At the turn of the century, when the software world was hit by the technology crisis Y2K,
programmers weren’t equipped with reverse engineering skills. Since then, research has been
carried out to analyse what kind of development activities can be brought under the category of
reverse engineering so that they can be taught to the programmers. Researchers have revealed
that reverse engineering basically comes under two categories-software development and
software testing. A number of reverse engineering exercises have been developed since then in
this regard to provide baseline education in reversing the machine code.
Reverse Engineering
Reverse engineering can be applied to several aspects of the software and hardware
In the world of reverse engineering, we often hear about black box testing. Even though the
tester has an API, their ultimate goal is to find the bugs by hitting the product hard from
outside.
Apart from this, the main purpose of reverse engineering is to audit the security, remove the
copy protection, customize the embedded systems, and include additional features without
spending much and other similar activities.
In software design, reverse engineering enables the developer or programmer to add new
features to the existing software with or without knowing the source code. Different techniques
are used to incorporate new features into the existing software.
Reverse engineering is also very beneficial in software testing, as most of the virus
programmers don’t leave behind instructions on how they wrote the code, what they
have set out to accomplish etc. Reverse engineering helps the testers to study the virus
and other malware code. The field of software testing, while very extensive, is also
interesting, and studying and analyzing virus code requires vast experience.
The third category where reverse engineering is widely used is in software security.
Reverse engineering techniques are used to make sure that the system does not have
any major vulnerabilities and security flaws. The main purpose of reverse engineering is
to make the system robust so as to protect it from spyware and hackers. In fact, this can
be taken a step further to ethical hacking, whereby you try to hack your own system to
identify vulnerabilities.
While one needs a vast amount of knowledge to become a successful reverse engineer,
he or she can definitely have a lucrative career in this field by starting off with the basics.
It is highly suggested that you first become familiar with assembly-level language and
gain a significant amount of practical knowledge in the field of software designing and
testing to become a successful software engineer.
Disassemblers – A disassembler is used to convert binary code into assembly code and is also
used to extract strings, imported and exported functions, libraries, etc. The disassemblers
convert the machine language into a user-friendly format. There are different disassemblers
that specialize in certain things.
Debuggers – This tool expands the functionality of a disassembler by supporting the CPU
registers, the hex dumping of the program, a view of the stack, etc. Using debuggers, the
programmers can set breakpoints and edit the assembly code at run time. Debuggers analyse the
binary in a similar way as the disassemblers and allow the reverser to step through the code by
running one line at a time to investigate the results.
Hex Editors – These editors allow the binary to be viewed in the editor and changed as per the
requirements of the software. There are different types of hex editors available that are used
for different functions.
PE and Resource Viewer – The binary code is designed to run on a Windows-based machine and
has very specific data which tells how to set up and initialize a program. All the programs that
run on Windows have a portable executable (PE) format that describes the DLLs the program
needs to borrow from.
Reverse engineering has developed significantly and taken a positive approach to creating
descriptive data set of the original object. Today, there are numerous legitimate applications of
reverse engineering. Due to the development of numerous digitizing devices, reverse
engineering software enables programmers to manipulate the data into a useful form. The kind
of applications in which reverse engineering is used ranges from mechanical to digital, each with
its own advantages and applications. Reverse engineering is also beneficial for business owners
as they can incorporate advanced features into their software to meet the demands of the
growing markets.
S.NO  RGPV QUESTION                              YEAR       MARKS
Q.1   Write short note on Reverse engineering?   June 2013  5
UNIT-02/LECTURE-07
What is reverse engineering (RE)? : [RGPV/JUNE 2(5)]
Through reverse engineering, a researcher can gather the technical data necessary for the
documentation of the operation of a technology or component of a system. When reverse
engineering software, researchers are able to examine the strength of systems and identify their
weaknesses in terms of performance, security, and interoperability. The reverse engineering
process allows researchers to understand both how a program works and also what aspects of
the program contribute to its not working. Independent manufacturers can participate in a
competitive market that rewards the improvements made on dominant products. For example,
security audits, which allow users of software to better protect their systems and networks by
revealing security flaws, require reverse engineering. The creation of better designs and the
interoperability of existing products often begin with reverse engineering.
7. What are some legal cases and ethical issues involving reverse engineering?
New court cases reveal that reverse engineering practices which are used to achieve
interoperability with an independently created computer program, are legal and ethical. In
December, 2002, Lexmark filed suit against SCC, accusing it of violating copyright law as well as
the DMCA. SCC reverse engineered the code contained in Lexmark printer cartridges so that it
could manufacture compatible cartridges. According to Computerworld, Lexmark alleged that
SCC's Smartek chips include Lexmark software that is protected by copyright. The software
handles communication between Lexmark printers and toner cartridges; without it,
refurbished toner cartridges won't work with Lexmark's printers. The court ruled that
copyright law shouldn’t be used to inhibit interoperability between one vendor’s products and
those of its rivals. In a ruling from the U.S. Copyright Office in October 2003, the Copyright
Office said the DMCA doesn’t block software developers from using reverse engineering to
access digitally protected copyright material if they do so to achieve interoperability with an
independently created computer program.
There are also benefits to reverse engineering. Reverse engineering might be used as a way to
allow products to interoperate. Also reverse engineering can be used as a check so that
computer software isn’t performing harmful, unethical, or illegal activities.
UNIT-02/LECTURE-08
Project Scheduling
Overview
The chapter describes the process of building and monitoring schedules for software
development projects. To build complex software systems, many engineering tasks need to
occur in parallel with one another to complete the project on time. The output from one task
often determines when another may begin. Software engineers need to build activity networks
that take these task interdependencies into account. Managers find that it is difficult to ensure
that a team is working on the most appropriate tasks without building a detailed schedule and
sticking to it. This requires that tasks are assigned to people, milestones are created, resources
are allocated for each task, and progress is tracked.
Adding people to a project after it is behind schedule often causes the schedule to slip
further
The relationship between the number of people on a project and overall productivity is not
linear (e.g. 3 people do not produce 3 times the work of 1 person, if the people have to
work in cooperation with one another)
The main reasons for using more than 1 person on a project are to get the job done more
rapidly and to improve software quality.
Scheduling
Task networks (activity networks) are graphic representations of the task
interdependencies and can help define a rough schedule for a particular project
Scheduling tools should be used to schedule any non-trivial project.
Program evaluation and review technique (PERT) and critical path method (CPM) are
quantitative techniques that allow software planners to identify the chain of dependent
tasks in the project work breakdown structure (WBS) that determine the project duration
time.
Timeline (Gantt) charts enable software planners to determine what tasks will need to be
conducted at a given point in time (based on estimates for effort, start time, and duration
for each task).
The best indicator of progress is the completion and successful review of a defined software
work product.
Time-boxing is the practice of deciding a priori the fixed amount of time that can be spent
on each task. When the task’s time limit is exceeded, development moves on to the next
task (with the hope that a majority of the critical work was completed before time ran out).
Periodic project status meetings with each team member reporting progress and problems
Evaluation of results of all work product reviews
Comparing actual milestone completion dates to scheduled dates
Comparing actual project task start-dates to scheduled start-dates
Informal meetings with practitioners to have them subjectively assess progress to date and
future problems
Use earned value analysis to assess progress quantitatively
UNIT-02/LECTURE-09
Project Scheduling (Basic Principles): [RGPV/JUNE 20113(5)]
Software project scheduling is an activity that distributes estimated effort across the planned
project duration by allocating the effort to specific software engineering tasks.
First, a macroscopic schedule is developed.
---> a detailed schedule is then refined for each entry in the macroscopic schedule.
A schedule evolves over time.
Basic principles guide software project scheduling:
- Compartmentalization
- Interdependency
- Time allocation
- Effort allocation
- Effort validation
- Defined responsibilities
- Defined outcomes
- Defined milestones
Defining A Task Set For The Software Project
There is no single set of tasks that is appropriate for all projects.
An effective software process should define a collection of task sets, each
designed to meet the needs of different types of projects.
A task set is a collection of software engineering work
-> tasks, milestones, and deliverables.
Tasks sets are designed to accommodate different types of projects and different degrees of
rigor.
Typical project types:
- Concept Development Projects
- New Application Development Projects
- Application Enhancement Projects
- Application Maintenance Projects
- Reengineering Projects
Obtaining Information
Degree of Rigor:
- Casual
- Structured
- Strict
- Quick Reaction
Defining Adaptation Criteria:
-- This is used to determine the recommended degree of rigor.
Eleven criteria are defined for software projects:
- Size of the project
- Number of potential users
- Mission criticality
- Application longevity
- Stability of requirements
- Ease of customer/developer communication
- Maturity of applicable technology
- Performance constraints
- Embedded/non-embedded characteristics
- Project staffing
- Reengineering factors
Defining A Task Network
Critical path:
-- the tasks on a critical path must be completed on schedule to keep the whole
project on schedule.
[Figure: example task network with tasks T1-T8 connected by dependency arrows]
Scheduling of a software project does not differ greatly from scheduling of any multitask
engineering effort.
Two project scheduling methods:
- Program Evaluation and Review Technique (PERT)
- Critical Path Method (CPM)
Both methods are driven by information developed in earlier project planning activities:
- Estimates of effort
- A decomposition of product function
- The selection of the appropriate process model
- The selection of project type and task set
Both methods allow a planner to:
- determine the critical path
- establish time estimates for individual tasks
- calculate boundary times for each task
Boundary times:
- the earliest time and latest time to begin a task
- the earliest time and latest time to complete a task
- the total float.
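A compact sketch of how these boundary times and the critical path can be derived from a task network; the tasks, durations and dependencies below are invented for illustration:

# CPM forward/backward pass on a tiny, assumed task network.
# Each entry: task -> (duration in days, list of predecessor tasks).
tasks = {
    "T1": (3, []), "T2": (5, ["T1"]), "T3": (2, ["T1"]),
    "T4": (4, ["T2", "T3"]), "T5": (1, ["T4"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
# (Relies on predecessors being listed before their dependents, as above.)
es, ef = {}, {}
for t, (dur, deps) in tasks.items():
    es[t] = max((ef[d] for d in deps), default=0)
    ef[t] = es[t] + dur
project_end = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS).
lf = {t: project_end for t in tasks}
for t in reversed(list(tasks)):
    dur, deps = tasks[t]
    for d in deps:
        lf[d] = min(lf[d], lf[t] - dur)
ls = {t: lf[t] - tasks[t][0] for t in tasks}

for t in tasks:
    total_float = ls[t] - es[t]  # zero float -> the task is on the critical path
    print(t, "ES", es[t], "LS", ls[t], "EF", ef[t], "LF", lf[t], "float", total_float)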
Task 1
Sub-task 1.1
Sub-task 1.2
Sub-task 1.3
Task 2
Sub-task 2.1
Sub-task 2.2
Task 3
Sub-task 3.1
Sub-task 3.2
Task 4
Sub-task 4.1
Sub-task 4.2
Sub-task 4.3
Task 5
Sub-task 5.1
Sub-task 5.2
- project problems
- project resources
- reviews
- project budget
REFERENCE