Data Analysis
Methodology
Suppose you inherited the database in Table 1.1 and needed to find out what
could be learned from it, fast. Say your boss entered your office and said, "Here's
some software project data your predecessor collected. Is there anything interesting about it? I'd like you to present the results at next week's
management meeting." Given that it usually takes a number of years to collect enough software project data to analyze, plus the software industry's
high job turnover rate, more often than not you will probably be analyzing data that was collected by others.
What is this data? What do the abbreviations mean? What statistical
methods should you use? What should you do first? Calm down and read
on. After eight years of collecting, validating, analyzing, and benchmarking
software projects, I've written the book that I wish had been available the
day I was told to find something interesting about the European Space
Agency software project database.
In this chapter, I will share with you my data analysis methodology. Each
step is demonstrated using the software project data in Table 1.1. You do not
need to understand statistics to follow the recipe in Sidebar 1.1. I simply
explain what to do, why we do it, how to interpret the statistical output
results, and what to watch out for at each stage.
TABLE 1.1
Software Project Data

id    effort    size    app         telonuse    t13    t14
 2      7871     647    TransPro    No            4      4
 3       845     130    TransPro    No            4      4
 5     21272    1056    CustServ    No            3      2
 6      4224     383    TransPro    No            5      4
 8      7320     209    TransPro    No            4      2
 9      9125     366    TransPro    No            3      2
15      2565     249    InfServ     No            2      4
16      4047     371    TransPro    No            3      3
17      1520     211    TransPro    No            3      3
18     25910    1849    TransPro    Yes           3      3
19     37286    2482    TransPro    Yes           3      1
21     11039     292    TransPro    No            4      2
25     10447     567    TransPro    Yes           2      2
26      5100     467    TransPro    Yes           2      3
27     63694    3368    TransPro    No            4      2
30      1745     185    InfServ     No            4      5
31      1798     387    CustServ    No            3      3
32      2957     430    MIS         No            3      4
33       963     204    TransPro    No            3      3
34      1233      71    TransPro    No            2      4
38      3850     548    CustServ    No            4      3
40      5787     302    MIS         No            2      4
43      5578     227    TransPro    No            2      3
44      1060      59    TransPro    No            3      3
45      5279     299    InfServ     Yes           3      2
46      8117     422    CustServ    No            3      2
50      1755     193    TransPro    No            2      4
51      5931    1526    InfServ     Yes           4      3
53      3600     509    TransPro    No            4      2
54      4557     583    MIS         No            5      3
55      8752     315    CustServ    No            3      3
56      3440     138    CustServ    No            4      3
58     13700     423    TransPro    No            4      2
61      4620     204    InfServ     Yes           3      2
Data Validation
The most important step is data validation. I spend much more time validating data than I do analyzing it. Often, data is not neatly presented to you
in one table as it is in this book, but it is in several files that need to be
merged and which may include information you do not need or understand.
The data may also exist on different pieces of paper.
What do I mean by data validation? In general terms, I mean finding out
if you have the right data for your purpose. It is not enough to write a questionnaire and get people to fill it out; you need to have a vision. It is like getting
the requirement specifications right before starting to develop the software.
Specifically, you need to determine if the values for each variable make sense.
Why Do It? You can waste months trying to make sense out of data that
was collected without a clear purpose, and without statistical analysis expertise.
TABLE 1.2
Variable Definitions

Variable    Full Name                       Definition
id          identification number
effort      effort
size        application size
app         application type
telonuse    Telon use
t13         staff application knowledge
t14         staff tool skills               2 = Low; tools experience less than average; some
                                            members have experience with some tools;
                                            6-12 months on average
                                            3 = Nominal; tools experience good in about half
                                            the team; some members know development and
                                            documentation tools well; 1-3 years on average
                                            4 = High; most team members know tools well; some
                                            members can help others; 3-6 years on average
                                            5 = Very high; team knows all tools well; support
                                            available for specific needs of project; >6 years
                                            average experience
A number of observations (Obs) lower than the total number of projects
means that data is missing. This may be normal, as all variables may not have
been collected for each project, or it may point to a problem. See if you can
find these missing values and add them to the database before you go any
further. Also, check to see if the maximum and minimum values make sense.
In this case, they do (see Example 1.1). But if t13 or t14 had 7 as a maximum value, we would
immediately know there was a problem because, by definition, 5 is the highest value possible.
This is also a useful exercise to undertake when someone transfers a very
large database to you via the Internet. When it is impossible to check each
value individually, check the summary values with the person who sent you
the data. I also recommend checking all the variables one-by-one for the first
project, the last project, and a few random projects from the middle of the
database to make sure nothing got altered during the transfer.
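If you want to carry out these spot checks yourself, the commands below are a minimal sketch of what they look like in Stata; they assume the data in Table 1.1 is already loaded, and project 27 is just an arbitrary choice from the middle of the database.

* how many projects have a missing effort value
. count if effort == .
* look at the first project, the last project, and one from the middle in full
. list in 1
. list in 34
. list if id == 27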
Example 1.1
. summarize

  Variable        Obs        Mean    Std. Dev.       Min        Max
  id               34        31.5     17.9059          2         61
  effort           34    8734.912    12355.46        845      63694
  size             34    578.5882    711.7584         59       3368
  t13              34    3.235294    .8548905          2          5
  t14              34    2.911765    .9000891          1          5
Next, I tabulate each variable that has words or letters as values. Besides
providing valuable information about how many projects are in each category, it is also an easy way to check for spelling mistakes. For example, if
there was one observation for CustSer and five observations for CustServ,
you should check if there are really two different categories.
In Examples 1.2 and 1.3, Freq. is the number of observations in each category, Percent is the percentage of observations in each category, and Cum. is
the cumulative percentage. We can see that the majority of the applications
(about 59%) are transaction processing (TransPro) applications. Seven applications used Telon in addition to COBOL. For business presentations, this
type of information would look good displayed in a pie chart.
Example 1.2
. tabulate app

  Application Type       Freq.     Percent        Cum.
  CustServ                   6       17.65       17.65
  MIS                        3        8.82       26.47
  TransPro                  20       58.82       85.29
  InfServ                    5       14.71      100.00
  Total                     34      100.00
Example 1.3
. tabulate telonuse

  Telon Use              Freq.     Percent        Cum.
  No                        27       79.41       79.41
  Yes                        7       20.59      100.00
  Total                     34      100.00
Variable and Model Selection

Why Do It? The data may have been collected for a clearly stated purpose. Even so, there might be other interesting relationships to study that
occur to you while you are analyzing the data, and which you might be
tempted to investigate. However, it is important to decide in advance what
you are going to do first and then to complete that task in a meticulous,
organized manner. Otherwise, you will find yourself going in lots of different directions, generating lots of computer output, and becoming confused
about what you have tried and what you have not tried; in short, you will
drown yourself in the data. It is also at this stage that you may decide to create new variables or to reduce the number of variables in your analysis.
Variables of questionable validity, variables not meaningfully related to
what you want to study, and categorical variable values that do not have a
sufficient number of observations should be dropped from the analysis. (See
the case studies in Chapters 2 through 5 for examples of variable reduction/modification.) In the following example, we will use all the variables
provided in Table 1.1.
Example The smallest number of observations for a categorical variable
is 3 for the MIS (management information systems) category of the application type (app) variable (see Example 1.2). Given that our data set contains
34 observations, I feel comfortable letting MIS be represented by three projects. No matter how many observations the database contains, I don't
believe it is wise to make a judgment about something represented by fewer
than three projects. This is my personal opinion. Ask yourself this: If the MIS
category contained only one project and you found in your statistical analysis that the MIS category had a significantly higher productivity, would you
then conclude that all MIS projects in the bank have a high productivity? I
would not. If there were two projects, would you believe it? I would not. If
there were three projects, would you believe it? Yes, I would in this case.
However, if there were 3000 projects in the database, I would prefer for MIS
to be represented by more than three projects. Feel free to use your own
judgment.
Even with this small sample of software project data, we could investigate
a number of relationships. We could investigate if any of the factors collected
influenced software development effort. Or we could find out which factors
influenced software development productivity (i.e., size/effort). We could also
look at the relationship between application size (size) and Telon use
(telonuse), between size and application type (app), or between application
type (app) and staff application knowledge (t13), just to name a few more possibilities. In this example, we will focus on determining which factors affect
effort. That is, do size, application type (app), Telon use (telonuse), staff application knowledge (t13), staff tool skills (t14), or a combination of these factors
have an impact on effort? Is effort a function of these variables? Mathematically speaking, does:
effort = f (size, app, telonuse, t13, t14)?
In this equation, effort is on the left-hand side (LHS) and the other variables are on the right-hand side (RHS). We refer to the LHS variable as the
dependent variable and the RHS variables as independent variables.
Preliminary Analyses
Before running blind statistical tests, I check that the assumptions underlying them are true. In addition, I like to get some first impressions of the data.
My objective is not a complete understanding of all possible relationships
among all the variables. For example, in Step 2, variable and model selection,
I decided that my first goal was to determine which of the variables collected
had an influence on effort. To achieve that goal, I follow the steps described
in this section before building the multi-variable model (Step 4).
Graphs
Histograms To start, I look at a graph of each numerical variable individually to see how many small values, large values, and medium values
there are, that is, the distribution of each variable. These are also called
histograms.
Why Do It? I want to see if the variables are normally distributed. Many
statistical techniques assume that the underlying data is normally distributed, so you should check if it is. A normal distribution is also known as a
bell-shaped curve. Many of us were graded on such curves at large competitive universities. In a bell-shaped curve, most values fall in the middle, with
few very high and very low values. For example, if an exam is graded and
the results are fit to a normal distribution (Figure 1.1), most students will get
a C. Fewer students will get a B or a D. And even fewer students will receive
an A or an F. The average test score will be the midpoint of the C grade,
whether the score is 50, or 90, out of 100. That does not always seem very fair,
does it? You can learn more about normal distributions and why they are
important in Chapter 6.
How to Do It To create a histogram for the variable t13 manually, you
would count how many 1s there are, how many 2s, etc. Then, you would
make a bar chart with either the number of observations or the percentage of
observations on the y-axis for each value of t13. However, you don't need to
waste your time doing this by hand.
FIGURE 1.1
Example of a normal distribution (number of students by test score)

Let a statistical analysis tool do it for you. You will need to learn how to
use a statistical analysis tool to analyze data. I have used SAS, Excel, and
Stata in my career. My opinions regarding each are: SAS was fine when
I worked for large organizations, but far too expensive when I had to
pay for it myself. Excel is not powerful or straightforward enough for my
purposes. Stata is relatively inexpensive (no yearly licensing fee), does
everything I need, and is very easy to use (see www.stata.com). However, no
matter which statistical software you use, the output should always look
the same, and it is the interpretation of that output on which this book
focuses.
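For reference, the histograms in this chapter can be produced with commands along the following lines. This is a sketch: histogram is the command in current versions of Stata, and older releases used graph with a histogram option, so check the syntax of your own version.

* distribution of a continuous variable
. histogram effort, frequency
* distribution of staff tool skills, which only takes the values 1 to 5
. histogram t14, discrete frequency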
Example The distributions of effort and size show that they are not normally distributed (Figures 1.2 and 1.3). The database contains few projects
with a very high effort, or a very big size. It also contains many low effort,
and small size projects. This is typical in a software development project
database. Not only are the efforts and sizes not normally distributed in this
sample, but we would not expect them to be normally distributed in
the population of all software development projects.
To approximate a normal distribution, we must transform these variables.
A common transformation is to take their natural log (ln). Taking the natural
log makes large values smaller and brings the data closer together. For example, take two project sizes of 100 and 3000 function points. 3000 is much bigger than 100. If I take the ln of these numbers, I find that ln(100) = 4.6 and
ln(3000) = 8.0. These transformed sizes are much closer together. As you can
see, taking the natural log of effort and size more closely approximates a normal distribution (Figures 1.4 and 1.5).

FIGURE 1.2
Distribution of effort

FIGURE 1.3
Distribution of size

FIGURE 1.4
Distribution of ln(effort)
FIGURE 1.5
Distribution of ln(size)
FIGURE 1.6
Distribution of t13
Graphs of staff application knowledge (t13) and staff tool skills (t14) look
more normally distributed (Figures 1.6 and 1.7). Most projects have an average value of 3. Additionally, in the larger multi-company database from
which this subset was taken, the distributions of these factors are approximately normal. In fact, the definitions of the factors were chosen especially
so that most projects would be average. These variables do not need any
transformation.

FIGURE 1.7
Distribution of t14
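The transformation itself is one line per variable in Stata; ln() is Stata's natural log function, and the names leffort and lsize below are the same ones that appear in the statistical output later in this chapter. A sketch, assuming the data is loaded:

* create the log-transformed variables
. generate leffort = ln(effort)
. generate lsize = ln(size)
* their distributions now look much closer to normal
. histogram leffort, frequency
. histogram lsize, frequency
* the arithmetic from the text
. display ln(100)
. display ln(3000)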
Two-Dimensional Graphs I also make graphs of the dependent variable against each independent numerical variable. In this example, I am
interested in the relationships between effort and size, effort and staff application knowledge (t13), and effort and staff tool skills (t14).
Example I plot these graphs using the transformed data. We can see in
Figure 1.8 that there appears to be a linear relationship between ln(effort) and
ln(size). As project size increases, the amount of effort needed increases.
Figure 1.9 gives the impression that there is no relationship between effort
and staff application knowledge (t13). Conversely, Figure 1.10 seems to suggest that less effort is required for projects with higher levels of staff tool skills
(t14). These are first impressions that will be verified through statistical tests.
Another good reason to use a log transformation is to make a non-linear
relationship more linear. Figure 1.11 shows the relationship between the variables effort and size before the log transformation. As you can see, the relationship in Figure 1.8 is much more linear than the relationship in Figure 1.11.
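The two-dimensional graphs can be drawn with commands along these lines (again a sketch; scatter is the modern Stata command, and older releases used graph y x):

* dependent variable on the y-axis, independent variable on the x-axis
. scatter leffort lsize
. scatter leffort t13
. scatter leffort t14
* the untransformed relationship shown in Figure 1.11
. scatter effort size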
FIGURE 1.8
ln(effort) vs. ln(size)
FIGURE 1.9
ln(effort) vs. t13
What to Watch Out For A single outlier (one project with an extreme value) can cause all the other
projects to look as if they are grouped together in a little cloud. All the
straight lines fit to the data will try to go through the outlier, and will
treat the cloud of data (that is, all the other projects) with less importance. Remove the outlier(s) and re-plot the data to see if there is any
relationship hidden in the cloud. See Chapter 2 for an example where
an outlier is detected and removed.

FIGURE 1.10
ln(effort) vs. t14

FIGURE 1.11
effort vs. size
Tables
I make tables of the average value of the dependent variable and the number of observations it is based on for each value of each categorical variable.
In this example, the tables will show the average value of effort for each
application type, and for Telon use.
Example From Example 1.4, we learn that on average, transaction processing (TransPro) applications require the highest effort, then customer service (CustServ) applications, then MIS applications, and finally, information
service (InfServ) applications. Why is this? Answering this question will be
important for the interpretation phase of the analysis. Example 1.5 tells us
that, on average, projects that used Telon required almost twice as much
effort as projects that did not. Is this because they were bigger in size, or
could there be another explanation?
Example 1.4
. table app, c(n effort mean effort)

  Application Type    N(effort)    mean(effort)
  CustServ                    6            7872
  MIS                         3            4434
  TransPro                   20           10816
  InfServ                     5            4028
Example 1.5
. table telonuse, c(n effort mean effort)

  Telon Use    N(effort)    mean(effort)
  No                  27            7497
  Yes                  7           13510
What to Watch Out For Remember that we still need to check the relationships in Examples 1.4 and 1.5 to see if they are statistically significant.
Even if there appears to be a big difference in the average values, it may not
really be true because one project with a high effort could have influenced
a category's average.
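A quick way to see whether one extreme project is driving a category's average, although it is not a step the chapter itself prescribes, is to compare the mean with the median, which is much less sensitive to extreme values. The table command's contents() option accepts median as well as mean:

* compare mean and median effort for each application type
. table app, c(n effort mean effort median effort)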
Correlation Analysis
Another assumption of the statistical procedure I use to build a multivariable model is that independent variables are independent; that is, they
are not related to each other. In our example, there should be no strong relationships among the variables: size, t13, t14, app, and telonuse. There is a very
quick way to check if the numerical variables, size, t13, and t14, are independent: correlation analysis. If some of the numerical variables were collected using an ordinal or quasi-interval Likert scale (like t13 and t14), I use
Spearman's rank correlation coefficient because it tests the relationships of
orders rather than actual values. (See Chapter 6 for scale definitions.)
Another important feature of Spearman's rank correlation coefficient is that
it is less sensitive to extreme values than the standard Pearson correlation
coefficient.
Two variables will be highly positively correlated if low ranked values of
one are nearly always associated with low ranked values of the other, and
high ranked values of one are nearly always associated with high ranked
values of the other. For example, do projects with very low staff tool skills
always have very low staff application knowledge, too; are average tool
skills associated with average application knowledge, high tool skills with
high application knowledge, etc.? If such a relationship is nearly always
true, the correlation coefficient will be close to 1.
Two variables will be highly negatively correlated if low ranked values of
one are nearly always associated with high ranked values of the other, and
vice-versa. For example, do the smallest projects (smallest in size) always
have the highest staff application knowledge, and do the biggest projects
always have the lowest staff application knowledge? If such a situation is
nearly always true, the correlation coefficient will be close to -1. Variables
that are not correlated at all will have a correlation coefficient close to zero.
You will learn more about correlation analysis in Chapter 6.
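As a reminder of the standard definition, for n observations with no tied ranks, Spearman's rank correlation coefficient is

rho = 1 - 6*(sum of di^2) / (n*(n^2 - 1))

where di is the difference between the two ranks of observation i. Identical orderings make every di zero and give rho = 1; completely reversed orderings give rho = -1. The formula shows that only the ranks, not the actual values, enter the calculation.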
Example Example 1.6 shows the statistical output for the Spearman's
rank correlation coefficient test between the variables size and t13. The number of observations equals 34. The correlation coefficient is Spearman's rho,
which is 0.1952. Already it is clear that these two variables are not very correlated as this number is closer to 0 than 1. The Test of Ho tests if size and
t13 are independent (i.e., not correlated). If Pr > |t| = a number greater than
0.05, then size and t13 are independent. Because 0.2686 > 0.05, we conclude
that this is indeed the case. (Pr is an abbreviation for probability; t means
that the t distribution was used to determine the probability. You will learn
more about this in Chapter 6.)
Example 1.6
. spearman size t13

 Number of obs =      34
Spearman's rho =  0.1952

Test of Ho: size and t13 independent
    Pr > |t| =  0.2686
From Example 1.7, we learn that the variables size and t14 have a
Spearman's correlation coefficient of -0.3599. We cannot accept that size and
t14 are independent because 0.0365 is less than 0.05. Thus, we conclude that
size and t14 are negatively correlated.
Example 1.7
. spearman size t14
Number of obs = 34
Spearman's rho = -0.3599
Test of Ho: size and t14 independent
Pr > |t| = 0.0365
We conclude from the results in Example 1.8 that t13 and t14 are not correlated.
Example 1.8
. spearman t13 t14
Number of obs = 34
Spearman's rho = 0.0898
Test of Ho: t13 and t14 independent
Pr > |t| = 0.6134
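In more recent versions of Stata, spearman also accepts a list of variables and reports all the pairwise coefficients at once, which is a convenient way to reproduce Examples 1.6 through 1.8 in a single step (check the syntax of your own version):

* all pairwise Spearman rank correlations with their p-values
. spearman size t13 t14, stats(rho p)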
Stepwise Regression Analysis

Example 1.9 shows the output of a forward stepwise regression procedure run on our data set. Forward stepwise regression means that the model starts empty and then the variables most related to leffort are added one by one in order of importance until no further variable can be added to improve the model. You can see that first, lsize (the abbreviation of ln(size) in the statistical output) is
added, then t14 is added. No further variation in leffort is explained by t13,
so it is left out of the model. In Chapter 6, you will learn how to interpret
every part of this output; for now, I will just concentrate on the values in
bold. These are the values that I look at to determine the performance and
significance of the model. I look at the number of observations (Number of
obs) to see if the model was built using all the projects. The model was built
using all 34 observations. I look at Prob > F to determine if the model is significant, in other words, can I believe this model? (Prob is an abbreviation for
probability; F means that the F distribution was used to determine the probability. You will learn more about this in Chapter 6.) If Prob > F is a number
less than or equal to 0.05, then I accept the model. Here it is 0, so the model
is significant. I look at the adjusted R-squared value (Adj R-squared) to determine the performance of the model. The closer it is to 1, the better. The Adj
R-squared of 0.7288 means that this model explains nearly 73% (72.88%) of
the variation in leffort. This is a very good result. This means that even without the categorical variables, I am sure to come up with a model that
explains 73% of the variation in effort. I am very interested in finding out
more about which variables explain this variation.
I can see from the output that lsize and t14 are the RHS explanatory variables. I also check the significance of each explanatory variable and the constant (_cons) in the column P > |t|. If P > |t| is a number less than or equal
to 0.05, then the individual variable is significant; that is, it is not in the
model by chance. (P is yet another abbreviation for probability; t means that
the t distribution was used to determine the probability.)
Example 1.9
. sw regress leffort lsize t13 t14, pe(.05)
                        begin with empty model
p = 0.0000 < 0.0500     adding  lsize
p = 0.0019 < 0.0500     adding  t14

  Source        SS           df        MS            Number of obs =      34
  Model         25.9802069    2   12.9901035         F(2,31)       =   45.35
  Residual      8.88042769   31   .286465409         Prob > F      =  0.0000
  Total         34.8606346   33   1.05638287         R-squared     =  0.7453
                                                      Adj R-squared =  0.7288
                                                      Root MSE      =  .53522

  leffort          Coef.   Std. Err.       t    P>|t|
  lsize         .7678266    .1148813   6.684    0.000
  t14          -.3856721    .1138331  -3.388    0.002
  _cons         5.088876    .8764331   5.806    0.000
The output in Example 1.10 shows the results of running a backward stepwise regression procedure on our data set. Backward stepwise regression
means that the model starts full (with all the variables) and then the variables least related to effort are removed one by one in order of unimportance
until no further variable can be removed to improve the model. You can see
here that t13 was removed from the model. In this case, the results are the
same for forward and backward stepwise regression; however, this is not
always the case. Things get more complicated when some variables have
missing observations.
Example 1.10
. sw regress leffort lsize t13 t14, pr(.05)
                        begin with full model
p = 0.6280 >= 0.0500    removing  t13

  Source        SS           df        MS            Number of obs =      34
  Model         25.9802069    2   12.9901035         F(2,31)       =   45.35
  Residual      8.88042769   31   .286465409         Prob > F      =  0.0000
  Total         34.8606346   33   1.05638287         R-squared     =  0.7453
                                                      Adj R-squared =  0.7288
                                                      Root MSE      =  .53522

  leffort          Coef.   Std. Err.       t    P>|t|
  lsize         .7678266    .1148813   6.684    0.000
  t14          -.3856721    .1138331  -3.388    0.002
  _cons         5.088876    .8764331   5.806    0.000
What to Watch Out For Watch out for variables with lots of missing
values. The stepwise regression procedure only takes into account observations with non-missing values for all variables specified. For example, if t13
is missing for half the projects, then half the projects will not be used. Check
the number of observations used in the model. You may keep coming up
with models that explain a large amount of the variation for a small amount
of the data. If this happens, run the stepwise procedures using only the variables available for nearly every project.
Building the Multi-Variable Model

To build the multi-variable model, I use a manual stepwise procedure based on analysis of variance (ANOVA), which also works with categorical variables. You will learn more about analysis of variance in
Chapter 6. For the moment, you just need to know that this procedure
allows us to determine the influence of numerical and categorical variables
on the dependent variable, leffort. The model starts empty and then the
variables most related to leffort are added one by one in order of importance
until no other variable can be added to improve the model. The procedure
is very labor-intensive because I make the decisions at each step myself; it
is not automatically done by the computer. Although I am sure this could
be automated, there are some advantages to doing it yourself. As you carry
out the steps, you will develop a better understanding of the data. In addition, in the real world, a database often contains many missing values and
it is not always clear which variable should be added at each step.
Sometimes you need to follow more than one path to find the best model.
In the following example, I will show you the simplest case using our
34-project, 6-variable database with no missing values. My goal for this
chapter is that you understand the methodology. The four case studies in
Chapters 2 through 5 present more complicated analyses, and will focus on
interpreting the output.
Example
Determine Best One-Variable Model First, we want to find the best
one-variable model. Which variable, lsize, t13, t14, app, or telonuse, explains the
most variation in leffort? I run regression procedures for the numerical variables and ANOVA procedures for the categorical variables to determine this.
In practice, I do not print all the output. I save it in output listing files and
record by hand the key information in a summary sheet. Sidebar 1.2 shows a
typical summary sheet. I note the date that I carried out the analysis, the directory where I saved the files, and the names of the data file, the procedure
file(s), and the output file(s). I may want to look at them again in the future,
and if I don't note their names now, I may never find them again! We are going
to be creating lots of procedures and generating lots of output, so it is important to be organized. I also note the name of the dependent variable.
Now I am ready to look at the output file and record the performance of
the models. In the summary sheet, I record data only for significant variables.
For the regression models, a variable is highly significant if its P > |t| value
is 0.05 or less. In this case, I do not record the actual value; I just note the
number of observations, the variable's effect on effort, and the adjusted
R-squared value. If the significance is borderline, that is, if P > |t| is a number between 0.05 and 0.10, I note its value. If the constant is not significant, I
note it in the Comments column. If you are analyzing a very small database,
you might like to record these values for every variable, significant or not.
Personally, I have found that it is not worth the effort for databases with
many variables. If I need this information later, I can easily go back and look
at the output file.
For the ANOVA models, I do the same except I look at a variable's Prob > F
value to determine if the variable is significant. The effect of a categorical
variable depends on the different types. For example, using Telon (telonuse =
Yes) will have one effect on leffort and not using Telon (telonuse = No) will have
a different effect on leffort. You cannot determine the effect from the ANOVA
table.
In Example 1.11, I have highlighted the key numbers in bold. I see that
there is a very significant relationship between leffort and lsize (P >|t| =
0.000): lsize explains 64% of the variation in leffort. The coefficient of lsize
(Coef.) is a positive number (0.9298). This means that leffort increases with
increasing lsize. The model was fit using data from 34 projects. I add this
information to the summary sheet (Sidebar 1.2).
Example 1.11
. regress leffort lsize

  Source        SS           df        MS            Number of obs =      34
  Model         22.6919055    1   22.6919055         F(1,32)       =   59.67
  Residual      12.1687291   32   .380272786         Prob > F      =  0.0000
  Total         34.8606346   33   1.05638287         R-squared     =  0.6509
                                                      Adj R-squared =  0.6400
                                                      Root MSE      =  .61666

  leffort          Coef.   Std. Err.       t    P>|t|
  lsize         .9297666    .1203611   7.725    0.000
  _cons         3.007431    .7201766   4.176    0.000
Example 1.12
. regress leffort t13

  Source        SS           df        MS            Number of obs =      34
  Model         .421933391    1   .421933391         F(1,32)       =    0.39
  Residual      34.4387012   32   1.07620941         Prob > F      =  0.5357
  Total         34.8606346   33   1.05638287         R-squared     =  0.0121
                                                      Adj R-squared = -0.0188
                                                      Root MSE      =  1.0374

  leffort          Coef.   Std. Err.       t    P>|t|
  t13           .1322679    .2112423   0.626    0.536
  _cons         8.082423     .706209  11.445    0.000
Example 1.13
. regress leffort t14

  Source        SS           df        MS            Number of obs =      34
  Model         13.1834553    1   13.1834553         F(1,32)       =   19.46
  Residual      21.6771793   32   .677411853         Prob > F      =  0.0001
  Total         34.8606346   33   1.05638287         R-squared     =  0.3782
                                                      Adj R-squared =  0.3587
                                                      Root MSE      =  .82305

  leffort          Coef.   Std. Err.       t    P>|t|
  t14          -.7022183    .1591783  -4.412    0.000
  _cons         10.55504    .4845066  21.785    0.000
Example 1.14
. anova leffort app

  Number of obs =      34        R-squared     =  0.0210
  Root MSE      = 1.06659        Adj R-squared = -0.0769

  Source        Partial SS    df        MS           F     Prob > F
  Model         .732134098     3   .244044699      0.21      0.8855
  app           .732134098     3   .244044699      0.21      0.8855
  Residual      34.1285005    30   1.13761668
  Total         34.8606346    33   1.05638287
Example 1.15
. anova leffort telonuse

  Number of obs =      34        R-squared     =  0.1094
  Root MSE      = .984978        Adj R-squared =  0.0816

  Source        Partial SS    df        MS           F     Prob > F
  Model         3.81479355     1   3.81479355      3.93      0.0560
  telonuse      3.81479355     1   3.81479355      3.93      0.0560
  Residual      31.0458411    32   .970182533
  Total         34.8606346    33   1.05638287
SIDEBAR 1.2

Variables                            Num Obs    Effect    Adj R2    Comments
1-variable models
  lsize                              34         +         0.64
  t14                                34         -         0.36
  telonuse                           34                   0.08      sign. = .056
2-variable models with lsize
  t14                                34         -         0.73      best model, sign. = 0.0000
3-variable models with lsize, t14
  none significant                                                   no further improvement possible
Once I have recorded all of the output in the summary sheet, I select the
variable that explains the most variation in leffort. In this step, it is obviously lsize. There is no doubt about it. Then I ask myself: Does the relationship between leffort and lsize make sense? Does it correspond to the graph of
leffort as a function of lsize (Figure 1.8)? Yes, it does, so I add lsize to the model
and continue with the next step.
Determine Best Two-Variable Model Next, we want to find the best two-variable model: which variable, in addition to lsize, explains the most of the remaining variation in leffort? I add each remaining variable in turn to lsize (Examples 1.16 through 1.19).

Example 1.16
. regress leffort lsize t13

  Source        SS           df        MS            Number of obs =      34
  Model         22.8042808    2   11.4021404         F(2,31)       =   29.32
  Residual      12.0563538   31   .388914638         Prob > F      =  0.0000
  Total         34.8606346   33   1.05638287         R-squared     =  0.6542
                                                      Adj R-squared =  0.6318
                                                      Root MSE      =  .62363

  leffort          Coef.   Std. Err.       t    P>|t|
  lsize          .943487    .1243685   7.586    0.000
  t13          -.0697449    .1297491  -0.538    0.595
  _cons         3.151871    .7763016   4.060    0.000
In Example 1.17, I learn that t14 is significant (0.002): lsize and t14
together explain 73% of the variation in leffort. The coefficient of t14 is a
negative number. This means that leffort decreases with increasing t14. This
is the same effect that we found in the one-variable model. If the effect was
different in this model, that could signal something strange going on
between lsize and t14, and I would look into their relationship more closely.
lsize and the constant (_cons) are still significant. If they were not, I would
note this in the Comments column. Again, this model was built using data
from 34 projects.
Example 1.17
. regress leffort lsize t14

  Source        SS           df        MS            Number of obs =      34
  Model         25.9802069    2   12.9901035         F(2,31)       =   45.35
  Residual      8.88042769   31   .286465409         Prob > F      =  0.0000
  Total         34.8606346   33   1.05638287         R-squared     =  0.7453
                                                      Adj R-squared =  0.7288
                                                      Root MSE      =  .53522

  leffort          Coef.   Std. Err.       t    P>|t|
  lsize         .7678266    .1148813   6.684    0.000
  t14          -.3856721    .1138331  -3.388    0.002
  _cons         5.088876    .8764331   5.806    0.000
In Examples 1.18 and 1.19, I see that app and telonuse are not significant
(0.6938 and 0.8876).
Example 1.18
. anova leffort lsize app, category (app)

  Number of obs =      34        R-squared     =  0.6677
  Root MSE      =  .63204        Adj R-squared =  0.6218

  Source        Partial SS    df        MS           F     Prob > F
  Model         23.2758606     4   5.81896516     14.57      0.0000
  lsize         22.5437265     1   22.5437265     56.43      0.0000
  app           .583955179     3   .194651726      0.49      0.6938
  Residual       11.584774    29   .399474964
  Total         34.8606346    33   1.05638287
Example 1.19
. anova leffort lsize telonuse, category (telonuse)

  Number of obs =      34        R-squared     =  0.6512
  Root MSE      = .626325        Adj R-squared =  0.6287

  Source        Partial SS    df        MS           F     Prob > F
  Model         22.6998727     2   11.3499363     28.93      0.0000
  lsize         18.8850791     1   18.8850791     48.14      0.0000
  telonuse      .007967193     1   .007967193      0.02      0.8876
  Residual      12.1607619    31   .392282644
  Total         34.8606346    33   1.05638287
Determine Best Three-Variable Model Next, I look for the best three-variable model, adding each remaining variable in turn to lsize and t14. If an additional variable is
not significant, I record nothing and move on to the next model. Let's look
at the models (Examples 1.20, 1.21, and 1.22).
Example 1.20
. regress leffort lsize t14 t13

  Source        SS           df        MS            Number of obs =      34
  Model         26.0505804    3   8.68352679         F(3,30)       =   29.57
  Residual      8.81005423   30   .293668474         Prob > F      =  0.0000
  Total         34.8606346   33   1.05638287         R-squared     =  0.7473
                                                      Adj R-squared =  0.7220
                                                      Root MSE      =  .54191

  leffort          Coef.   Std. Err.       t    P>|t|
  lsize         .7796095     .118781   6.563    0.000
  t14           -.383488    .1153417  -3.325    0.002
  t13           -.055234    .1128317  -0.490    0.628
  _cons         5.191477    .9117996   5.694    0.000
Example 1.21
. anova leffort lsize t14 app, category (app)

  Number of obs =      34        R-squared     =  0.7478
  Root MSE      = .560325        Adj R-squared =  0.7028

  Source        Partial SS    df        MS           F     Prob > F
  Model         26.0696499     5   5.21392998     16.61      0.0000
  lsize         12.3571403     1   12.3571403     39.36      0.0000
  t14           2.79378926     1   2.79378926      8.90      0.0059
  app           .089442988     3   .029814329      0.09      0.9622
  Residual       8.7909847    28   .313963739
  Total         34.8606346    33   1.05638287
Example 1.22
. anova leffort lsize t14 telonuse, category(telonuse)

  Number of obs =      34        R-squared     =  0.7487
  Root MSE      = .540403        Adj R-squared =  0.7236

  Source        Partial SS    df        MS           F     Prob > F
  Model          26.099584     3   8.69986134     29.79      0.0000
  lsize          12.434034     1    12.434034     42.58      0.0000
  t14           3.39971135     1   3.39971135     11.64      0.0019
  telonuse      .119377093     1   .119377093      0.41      0.5274
  Residual       8.7610506    30    .29203502
  Total         34.8606346    33   1.05638287
None of the additional variables in the three models (Examples 1.20, 1.21,
and 1.22) are significant, so no further improvement to the model is possible. The final model is the two-variable model of lsize and t14 (Example 1.23).
Example 1.23
. regress leffort lsize t14

  Source        SS           df        MS            Number of obs =      34
  Model         25.9802069    2   12.9901035         F(2, 31)      =   45.35
  Residual      8.88042769   31   .286465409         Prob > F      =  0.0000
  Total         34.8606346   33   1.05638287         R-squared     =  0.7453
                                                      Adj R-squared =  0.7288
                                                      Root MSE      =  .53522

  leffort          Coef.   Std. Err.       t    P>|t|
  lsize         .7678266    .1148813   6.684    0.000
  t14          -.3856721    .1138331  -3.388    0.002
  _cons         5.088876    .8764331   5.806    0.000
On the summary sheet, I note the significance of the final model. This is
the Prob > F value at the top of the output. The model is significant at the
0.0000 level. This is Stata's way of indicating a number smaller than 0.00005.
This means that there is less than a 0.005% chance that all the variables in the
model (lsize and t14) are not related to leffort. (More information about how
to interpret regression output can be found in Chapter 6.)
Because the final model contains more than one independent variable, I
need to check their correlation with each other. To avoid multicollinearity problems, I do not allow any two variables with an absolute value of Spearman's
rho greater than or equal to 0.75 in the final model together. From our preliminary correlation analysis, we learned that size and t14 are slightly negatively
correlated; they have a significant Spearman's correlation coefficient of -0.3599.
Thus, there are no multicollinearity problems with this model.
You should also be aware that there is always the possibility that a variable
outside the analysis is really influencing the results. For example, let's say I
have two variables, my weight and the outdoor temperature. I find that my
weight increases when it is hot and decreases when it is cold. I develop a
model that shows my weight as a function of outdoor temperature. If I did
not use my common sense, I could even conclude that the high outdoor temperature causes my weight gain. However, there is an important variable that
I did not collect which is the real cause of any weight gain or loss: my ice
cream consumption. When it is hot outside, I eat more ice cream, and when it
is cold, I eat much less. My ice cream consumption and the outdoor temperature are therefore highly correlated. The model should really be my weight
as a function of my ice cream consumption. This model is also more useful
because my ice cream consumption is within my control, whereas the outdoor temperature is not. In this case, the outdoor temperature is confounded
with my ice cream consumption and the only way to detect this is to think
about the results. Always ask yourself if your results make sense and if there
could be any other explanation for them. Unfortunately, we are less likely to
ask questions and more likely to believe a result when it proves our point.
Example 1.24
. anova t14 app

  Number of obs =      34        R-squared     =  0.1023
  Root MSE      = .894427        Adj R-squared =  0.0125

  Source        Partial SS    df        MS           F     Prob > F
  Model         2.73529412     3   .911764706      1.14      0.3489
  app           2.73529412     3   .911764706      1.14      0.3489
  Residual           24.00    30          .80
  Total         26.7352941    33   .810160428
Example 1.25
. anova lsize telonuse

  Number of obs =      34        R-squared     =  0.1543
  Root MSE      = .832914        Adj R-squared =  0.1279

  Source        Partial SS    df        MS           F     Prob > F
  Model         4.04976176     1   4.04976176      5.84      0.0216
  telonuse      4.04976176     1   4.04976176      5.84      0.0216
  Residual      22.1998613    32   .693745665
  Total         26.2496231    33   .795443123
Yes, there is a significant relationship between lsize and telonuse. The use
of Telon explains about 13% of the variance in lsize. Example 1.26 shows
that applications that used Telon were much bigger than applications that
did not. So, the larger effort required by applications that used Telon
(Example 1.5) may not be due to Telon use per se, but because the applications were bigger. Once size has been added to the effort model, Telon use
is no longer important; size is a much more important driver of effort.
I learn as I analyze. Had this all been done automatically, I may not have
noticed this relationship.
Example 1.26
. table telonuse, c(mean size)

  Telon Use    mean(size)
  No                  455
  Yes                1056
Example 1.27
. tabulate app telonuse, chi2

                          Telon Use
  Application Type        No       Yes     Total
  CustServ                 6         0         6
  MIS                      3         0         3
  TransPro                16         4        20
  InfServ                  2         3         5
  Total                   27         7        34

  Pr = 0.069
If there is a significant relationship, I need to look closely at the two variables and judge for myself if they are so strongly related that there could be
a problem. For example, if application type (app) and Telon use (telonuse) had
been significantly related, I would first look closely at Example 1.27. There
I would learn that no customer service (CustServ) or MIS application used
Telon. Of the seven projects that used Telon, there is a split between transaction processing (TransPro) applications (a high-effort category; see Example
1.4) and information service (InfServ) applications (a low-effort category).
Thus, the high effort for Telon use (see Example 1.5) is not due to an overrepresentation of high-effort transaction processing applications. In fact, the
majority of projects that did not use Telon are transaction processing applications. I conclude that any relationship between Telon use and effort cannot
be explained by the relationship between application type and Telon use; i.e.
application type and Telon use are not confounded.
If I find any problems in the final model, I return to the step where I
added the correlated/confounded variable to the variables already present
in the model, take the second best choice, and rebuild the model from there.
I do not carry out any further checks. The model is not valid, so there is no
point. We have to start again. (See Chapter 5 for an example of confounded
categorical variables.)
FIGURE 1.12
Residuals vs. fitted values
Projects with large predicted errors (residuals) and/or projects very different from
other projects' values for at least one of the independent variables in the
model can exert undue influence on the model (leverage).
Cook's distance summarizes information about residuals and leverage
into a single statistic. Cook's distance can be calculated for each project by
dropping that project and re-estimating the model without it. My statistical analysis tool does this automatically. Projects with values of Cook's distance, D, greater than 4/n should be examined closely (n is the number of
observations). In our example, n = 34, so we are interested in projects for
which D > 0.118. I find that one project, 51, has a Cook's distance of 0.147
(Example 1.28).
Example 1.28
. list id size effort t14 cooksd if cooksd>4/34

         id    size   effort   t14     cooksd
  28.    51    1526     5931     3   .1465599
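The cooksd variable listed in Example 1.28 has to be created first. A sketch of how this might be done after fitting the final model (predict's cooksd option computes Cook's distance; in very old Stata releases the same diagnostics were reached through the fit command mentioned in the footnote below):

* refit the final model and compute Cook's distance for each project
. regress leffort lsize t14
. predict cooksd, cooksd
* residuals versus fitted values, as in Figure 1.12
. rvfplot
* flag projects above the 4/n cut-off (_N is the number of observations)
. list id size effort t14 cooksd if cooksd > 4/_N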
FIGURE 1.13
Distribution of residuals
Other influence statistics, such as DFITS and Welsch distance, also combine residual and leverage
information in a single value. Of course, the cut-off values are different for
DFITS and Welsch distance. Do not complicate your life; use the influence statistic that your statistical analysis tool provides.3
Referring back to Figure 1.8, I see that the influence of Project 51 is due to
its effort being slightly low for its size compared to other projects, so it must
be pulling down the regression line slightly (leverage problem). After looking closely at this project, I see no reason to drop it from the analysis. The
data is valid, and given the small number of large projects we have, we
cannot say that it is an atypical project. If we had more data, we could, in all
likelihood, find more projects like it. In addition, 0.15 is not that far from
the 0.12 cut-off value.
If a project was exerting a very high influence, I would first try to understand why. Is the project special in any way? I would look closely at the data
and discuss the project with anyone who remembered it. Even if the project
is not special, if the Cook's distance is more than three times larger than the
cut-off value, I would drop the project and develop an alternative model
using the reduced data set. Then I would compare the two models to better
understand the impact of the project.
3. If you use Stata, see the fit procedure for these and other diagnostics.
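If a project did have to be dropped, re-estimating the model without it is a one-line check. Project 51 is used here only because it is the project flagged above, not because it should actually be removed:

* alternative model excluding the most influential project
. regress leffort lsize t14 if id != 51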
ln(x^a) = a ln(x), and ln(e) = 1
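These identities are what you need to turn the additive model in ln(effort) back into an equation for effort itself. As a sketch of the conversion, using the coefficients of the final model in Example 1.23 (the rounding is mine):

ln(effort) = 5.088876 + 0.7678266*ln(size) - 0.3856721*t14

Taking e to the power of each side, and applying ln(x^a) = a ln(x) and ln(e) = 1:

effort = e^5.088876 * size^0.7678266 * e^(-0.3856721*t14)
       = (approximately) 162 * size^0.77 * e^(-0.39*t14)

So, all else being equal, effort increases with application size and decreases as staff tool skills (t14) increase, which is consistent with what we found in the preliminary analyses.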
In Chapters 3, 4, and 5, you will see how to extract the equation from
models that include categorical variables. The impact of categorical variables
in an equation is simply to modify the constant term (a).
Final Comments
Now that you've learned the basics of my methodology for analyzing software project data, you are ready to attack some more complicated databases.
In the following four case studies (Chapters 2-5), you will learn how to deal
with common problems that occur when analyzing software project data
(see Table 1.3). You will also learn how to interpret data analysis results and
turn them into management implications.
TABLE 1.3
Common Problems and Where to Learn How to Deal with Them

Case study chapters: Chapter 2 (Productivity), Chapter 3 (Time to Market), Chapter 4 (Development Cost), Chapter 5 (Maintenance Cost)

Common problems: detecting invalid data; transforming data before use; outliers; confounded categorical variables; choosing baseline categorical variables; influential observations