DM Lab Record
Task Resources:
Mentor lecture on Decision Trees
Andrew Moore's Data Mining Tutorials (see the tutorials on Decision Trees and cross-validation)
Decision Trees (source: Tan, MSU)
Tom Mitchell's book slides (see the slides on concept learning and Decision Trees)
Weka resources:
Introduction to Weka
Download Weka
Weka Tutorial
ARFF format
Using Weka from command line
Introduction
I. Explore the WEKA Data Mining / Machine Learning Toolkit
EXPLORER:
It is a user interface which contains a group of tabs just below the title bar. The tabs
are as follows:
1. Preprocess
2. Classify
3. Cluster
4. Associate
5. Select Attributes
6. Visualize
The bottom of the window contains the status box, the log button, and the WEKA bird.
Experimenter:
The Weka Experiment Environment enables the user to create, run, modify, and analyze
experiments in a more convenient manner than is possible when processing the schemes
individually. For example, the user can create an experiment that runs several schemes against a series
of datasets and then analyze the results to determine if one of the schemes is (statistically) better than
the other schemes.
The Experiment Environment can be run from the command line using the Simple
CLI.
You can choose between two configuration modes with the Experiment Configuration Mode radio buttons:
• Simple
• Advanced
Both modes allow you to set up standard experiments that run locally on a single machine, or
remote experiments that are distributed across several hosts.
Knowledge Flow
The Knowledge Flow provides an alternative to the Explorer as a graphical front end to
WEKA's core algorithms. The Knowledge Flow presents a data-flow-inspired interface to
WEKA. The user can select WEKA components from a palette, place them on a layout canvas
and connect them together in order to form a knowledge flow for processing and analyzing
data. At present, all of WEKA’s classifiers, filters, clusterers, associators, loaders and savers
are available in the Knowledge Flow along with some extra tools.
Simple CLI
The Simple CLI provides full access to all Weka classes, i.e., classifiers, filters, clusterers, etc.,
but without the hassle of the CLASSPATH (it uses the CLASSPATH with which WEKA was
started). It offers a simple Weka shell with separate command-line input and output areas.
II. Navigate the options available in WEKA (e.g., the Select Attributes panel, Preprocess
panel, Classify panel, Cluster panel, Associate panel, and Visualize panel), and explore
the available data sets in WEKA. Load each dataset and observe the
following:
PREPROCESSING:
It is the process of identifying and removing unwanted data (data cleaning) before the data
is loaded from the database.
Now open the WEKA application as shown in figure-1 below.
Now click on Explorer as shown in figure-2.
Now open a file by choosing the "Open file" button as shown in figure-3.
Now choose the data folder in the open dialog box as shown in figure-4.
Now choose the "house.arff" file as shown in figure-5.
Figure-1
Figure-2
Figure-3
Relation specifies the name of the dataset used, Instances specifies the number of objects (records)
involved, and Attributes specifies the number of attributes used in the dataset or relation.
Figure-4
Figure-5
Figure-6
Figure-7
III. Study the ARFF file format
An ARFF (Attribute-Relation File Format) file is an ASCII text file that describes a list of
instances sharing a set of attributes. ARFF files are not the only format one can load; any
file format handled by Weka's "core converters" (for example, CSV) can also be loaded.
Now create the ARFF file: open Notepad, type the following content, and save the file with
a .arff extension.
@RELATION Student
@ATTRIBUTE customerid NUMERIC
@ATTRIBUTE age {youth,middle,senior}
@ATTRIBUTE income {low,medium,high}
@ATTRIBUTE student {yes,no}
@ATTRIBUTE credit_rating {fair,excellent}
@ATTRIBUTE buy_computer {yes,no}
@data
%
1,youth,high,no,fair,no
2,youth,high,no,excellent,no
3,middle,high,no,fair,yes
4,senior,medium,no,fair,yes
5,senior,low,yes,fair,yes
6,senior,low,yes,excellent,no
7,middle,low,yes,excellent,yes
8,youth,medium,no,fair,no
9,youth,low,yes,fair,yes
10,senior,medium,yes,fair,yes
11,youth,medium,yes,excellent,yes
12,middle,medium,no,excellent,yes
13,middle,high,yes,fair,yes
14,senior,medium,no,excellent,no
%
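The saved file can also be inspected outside the Explorer. Below is a minimal Java sketch using WEKA's API (assumptions: weka.jar is on the classpath and the file above was saved as student.arff in the current directory); it prints the same Relation/Instances/Attributes summary that the Preprocess panel shows.

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LoadArff {
    public static void main(String[] args) throws Exception {
        // Load the ARFF file created above (file name is an assumption).
        Instances data = DataSource.read("student.arff");
        // Treat the last attribute (buy_computer) as the class attribute.
        data.setClassIndex(data.numAttributes() - 1);
        // Mirror the summary shown in the Explorer's Preprocess panel.
        System.out.println("Relation:   " + data.relationName());
        System.out.println("Instances:  " + data.numInstances());
        System.out.println("Attributes: " + data.numAttributes());
    }
}

Roughly the same summary can be obtained from the Simple CLI with: java weka.core.Instances student.arff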
Description: The business of banks is making loans. Assessing the creditworthiness of an
applicant is of crucial importance. You have to develop a system to help a loan officer decide
whether the credit of a customer is good or bad. A bank's business rules regarding loans must
consider two opposing factors. On the one hand, a bank wants to make as many loans as possible;
interest on these loans is the bank's profit source. On the other hand, a bank cannot afford to make
too many bad loans; too many bad loans could lead to the collapse of the bank. The bank's loan
policy must involve a compromise: not too strict and not too lenient.
Actual historical credit data is not always easy to come by because of confidentiality rules.
Now select the credit-g.arff file from the Weka data folder and load the file as shown in figure-1.
Tasks:
1. List all the categorical (or nominal) attributes and the real-valued attributes
separately.
Ans) Categorical (nominal) attributes:
1. Checking_status
2. Credit_history
3. Purpose
4. Savings_status
5. Employment
6. Personal_status
7. Other_parties
8. Property_magnitude
9. Other_payment_plans
10. Housing
11. Job
12. Own_telephone
13. Foreign_worker
Real-valued (numeric) attributes:
1. Duration
2. Credit_amount
3. Installment_commitment
4. Residence_since
5. Age
6. Existing_credits
7. Num_dependents
2. What attributes do you think might be crucial in making the credit assessment? Come up
with some simple rules in plain English using your selected attributes.
Ans) The following attributes may be crucial in making the credit assessment:
1. Credit_amount
2. Age
3. Job
4. Savings_status
5. Existing_credits
6. Installment_commitment
7. Property_magnitude
For example, simple plain-English rules based on these attributes might be: if Savings_status is
high and Credit_amount is low, then credit is good; if Credit_amount is very high and
Savings_status is low, then credit is bad.
3. One type of model that you can create is a Decision tree. Train a Decision tree using the
complete data set as the training data. Report the model obtained after training.
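The same experiment can be reproduced in code. Below is a minimal Java sketch (assumptions: weka.jar on the classpath and the dataset at data/credit-g.arff) that trains J48, WEKA's C4.5 decision tree learner, on the complete dataset and prints the resulting model, as the Classify tab does:

import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TrainJ48 {
    public static void main(String[] args) throws Exception {
        // Load the German credit dataset (path is an assumption).
        Instances data = DataSource.read("data/credit-g.arff");
        data.setClassIndex(data.numAttributes() - 1); // class = last attribute
        // J48 with the Explorer's default options (-C 0.25 -M 2).
        J48 tree = new J48();
        tree.setOptions(new String[] {"-C", "0.25", "-M", "2"});
        tree.buildClassifier(data); // train on the complete dataset
        System.out.println(tree);   // print the decision tree model
    }
}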
4. Suppose you use your above model trained on the complete dataset, and classify credit
good/bad for each of the examples in the dataset. What % of examples can you classify
correctly? (This is also called testing on the training set.) Why do you think you cannot get 100%
training accuracy?
Ans) If we use our above model trained on the complete dataset and classify credit as
good/bad for each of the examples in that dataset, we can classify only 85.5% of the examples
correctly. We cannot get 100% training accuracy because the tree is pruned and because some
examples with the same attribute values have different class labels (noise), so no tree can fit them all.
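Testing on the training set can be reproduced with the following sketch (same path assumption as above); the model is evaluated on the very instances it was trained on:

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TrainingSetAccuracy {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/credit-g.arff"); // path assumed
        data.setClassIndex(data.numAttributes() - 1);
        J48 tree = new J48();
        tree.buildClassifier(data);
        // Evaluate the model on the same data it was trained on
        // ("testing on the training set").
        Evaluation eval = new Evaluation(data);
        eval.evaluateModel(tree, data);
        System.out.printf("Training-set accuracy: %.1f%%%n", eval.pctCorrect());
    }
}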
5. Is testing on the training set as you did above a good idea? Why or why not?
Ans) No, it is not a good idea: testing on the training set gives an optimistic estimate, since the
model has already seen those examples; it tells us little about how the model will perform on
unseen data.
6. One approach for solving the problem encountered in the previous question is cross-
validation. Describe briefly what cross-validation is. Train a decision tree again using cross-
validation and report your results. Does accuracy increase/decrease? Why?
Ans) Cross-validation: the dataset is split into k folds of roughly equal size; the classifier is
trained on k-1 folds and tested on the remaining fold, repeating k times so that every fold is
used once for testing, and the k results are averaged. In the Explorer, the classifier is evaluated
by cross-validation using the number of folds entered in the Folds text field.
In the Classify tab, select the Cross-validation option with a folds size of 2 and press Start;
then change the folds size to 5 and press Start; then change the folds size to 10 and press Start.
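The three runs can also be scripted; the sketch below (same path assumption) cross-validates J48 with 2, 5, and 10 folds and prints the accuracy of each run:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidateJ48 {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/credit-g.arff"); // path assumed
        data.setClassIndex(data.numAttributes() - 1);
        // Repeat the Explorer experiment for fold sizes 2, 5 and 10.
        for (int folds : new int[] {2, 5, 10}) {
            Evaluation eval = new Evaluation(data);
            // Each instance is used exactly once for testing.
            eval.crossValidateModel(new J48(), data, folds, new Random(1));
            System.out.printf("%2d-fold CV accuracy: %.1f%%%n",
                              folds, eval.pctCorrect());
        }
    }
}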
i) Fold Size-10
Note: From this observation, we have seen that the accuracy increased when the folds size was 5
and decreased when the folds size was 10.
7. Check to see if the data shows a bias against “foreign workers” or “personal-status”. One
way to do this is to remove these attributes from the data set and see if the decision tree
created in those cases is significantly different from the full dataset case which you have
already done. Did removing these attributes have any significant effect? Discuss.
Ans) We use the Preprocess tab in the WEKA Explorer to remove the attributes "Foreign_worker"
and "Personal_status" one at a time. In the Classify tab, select the Use training set option and
press Start. With each attribute removed from the dataset, we can compare the resulting
accuracy against that of the full dataset.
i) If Foreign_worker is removed
Analysis:
From this observation, we have seen that when the "Foreign_worker" attribute is removed from
the dataset, the accuracy decreases. So this attribute is important for classification.
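The remove-and-retest procedure can also be scripted with WEKA's Remove filter; below is a sketch under the same assumptions as before (attribute names foreign_worker and personal_status as they appear in credit-g.arff):

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class RemoveAttribute {
    // Train J48 on a copy of the data with one attribute removed and
    // report the training-set accuracy.
    static void dropAndEvaluate(Instances data, String name) throws Exception {
        Remove remove = new Remove();
        // Remove expects 1-based attribute indices.
        remove.setAttributeIndices(String.valueOf(data.attribute(name).index() + 1));
        remove.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, remove);
        reduced.setClassIndex(reduced.numAttributes() - 1);
        J48 tree = new J48();
        tree.buildClassifier(reduced);
        Evaluation eval = new Evaluation(reduced);
        eval.evaluateModel(tree, reduced);
        System.out.printf("Without %s: accuracy %.1f%%%n", name, eval.pctCorrect());
    }

    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/credit-g.arff"); // path assumed
        data.setClassIndex(data.numAttributes() - 1);
        dropAndEvaluate(data, "foreign_worker");
        dropAndEvaluate(data, "personal_status");
    }
}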
8. Another question might be: do you really need to input so many attributes to get good
results? Maybe only a few would do. For example, you could try just having attributes
2, 3, 5, 7, 10, 17, and 21. Try out some combinations. (You had removed two attributes in problem
7. Remember to reload the ARFF data file to get all the attributes back before you start
selecting the ones you want.)
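One way to try such a combination programmatically is the Remove filter with invertSelection enabled, which keeps only the listed attribute indices; a sketch under the same assumptions as before:

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class KeepSubset {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/credit-g.arff"); // path assumed
        data.setClassIndex(data.numAttributes() - 1);
        Remove remove = new Remove();
        // invertSelection turns "remove these" into "keep only these";
        // attribute 21 is the class and must be kept.
        remove.setAttributeIndices("2,3,5,7,10,17,21");
        remove.setInvertSelection(true);
        remove.setInputFormat(data);
        Instances subset = Filter.useFilter(data, remove);
        subset.setClassIndex(subset.numAttributes() - 1);
        J48 tree = new J48();
        tree.buildClassifier(subset);
        Evaluation eval = new Evaluation(subset);
        eval.evaluateModel(tree, subset);
        System.out.printf("Training-set accuracy on the subset: %.1f%%%n",
                          eval.pctCorrect());
    }
}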
Procedure:
Figure-1
2. Remove the 3rd attribute (Credit_history):
To restore the previously removed attribute, press the Undo button in the Preprocess tab. We
use the Preprocess tab in the WEKA Explorer to remove the 3rd attribute (Credit_history). In
the Classify tab, select the Use training set option and press Start. With this attribute removed,
we can compare the accuracy against that of the full dataset. The output is shown in figure-2.
Figure-2
3. Remove the 5th attribute (Credit_amount):
Press Undo in the Preprocess tab to restore the previously removed attribute, then remove the
5th attribute (Credit_amount). In the Classify tab, select the Use training set option and press
Start, and compare the accuracy against that of the full dataset. The output is shown in figure-3.
Figure-3
4. Remove the 7th attribute (Employment):
Press Undo in the Preprocess tab to restore the previously removed attribute, then remove the
7th attribute (Employment). In the Classify tab, select the Use training set option and press
Start, and compare the accuracy against that of the full dataset. The output is shown in figure-4.
Figure-4
5. Remove the 10th attribute (Other_parties):
Press Undo in the Preprocess tab to restore the previously removed attribute, then remove the
10th attribute (Other_parties). In the Classify tab, select the Use training set option and press
Start, and compare the accuracy against that of the full dataset. The output is shown in figure-5.
Figure-5
6. Remove the 17th attribute (Job):
Press Undo in the Preprocess tab to restore the previously removed attribute, then remove the
17th attribute (Job). In the Classify tab, select the Use training set option and press Start, and
compare the accuracy against that of the full dataset. The output is shown in figure-6.
Figure-6
7. Remove the 21st attribute (Class):
Press Undo in the Preprocess tab to restore the previously removed attribute, then remove the
21st attribute (Class). In the Classify tab, select the Use training set option and press Start, and
compare the accuracy against that of the full dataset. The output is shown in figure-7.
Figure-7
ANALYSIS:
From these observations we have seen that when the 3rd attribute is removed from the dataset,
the accuracy (83%) decreases, so this attribute is important for classification. When the 2nd and
10th attributes are removed from the dataset, the accuracy (84%) stays the same, so we can
remove either one of them. When the 7th and 17th attributes are removed from the dataset, the
accuracy (85%) stays the same, so we can remove either one of them. If we remove the 5th and
21st attributes, the accuracy increases, so these attributes may not be needed for the classification.
9. Sometimes, the cost of rejecting an applicant who actually has good credit might be
higher than accepting an applicant who has bad credit. Instead of counting the
misclassifications equally in both cases, give a higher cost to the first case (say, cost 5)
and a lower cost to the second case by using a cost matrix in WEKA. Train your decision
tree and report the decision tree and cross-validation results. Are they significantly
different from the results obtained in problem 6?
Procedure:
Open the WEKA Explorer, select the Classify tab, and select the Use training set option.
In the Classify tab, press the Choose button and select J48 as the decision tree technique.
In the Classify tab, press the More options button to open the classifier evaluation options
window.
Select Cost-sensitive evaluation and press the Set button to open the Cost Matrix Editor.
Change the number of classes to 2 and press the Resize button to get a 2x2 cost matrix.
In the cost matrix, change the value at location (0,1) to 5; the modified cost matrix is
shown in figure-8.
Figure-8
Then close the Cost Matrix Editor and press the OK button. Then press the Start button. The
output is shown below in figure-9.
Figure-9
Analysis:
With this observation we have seen that, of the 700 good customers in total, 669 are classified as
good and 31 are misclassified as bad. Of the 300 bad customers in total, 186 are classified as bad
and 114 are misclassified as good.
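The cost-sensitive evaluation can also be run from code. The sketch below (path assumed as before; setCell is the cost-matrix mutator in recent WEKA versions) builds the same 2x2 cost matrix with cost 5 in cell (0,1) and cross-validates J48 with it:

import java.util.Random;
import weka.classifiers.CostMatrix;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CostSensitiveEval {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/credit-g.arff"); // path assumed
        data.setClassIndex(data.numAttributes() - 1);
        // Cell (0,1) = 5: misclassifying a good applicant as bad costs
        // five times as much as the opposite error.
        CostMatrix costs = new CostMatrix(2);
        costs.setCell(0, 1, 5.0);
        // Cost-sensitive evaluation with 10-fold cross-validation.
        Evaluation eval = new Evaluation(data, costs);
        eval.crossValidateModel(new J48(), data, 10, new Random(1));
        System.out.printf("Accuracy:   %.1f%%%n", eval.pctCorrect());
        System.out.printf("Total cost: %.0f%n", eval.totalCost());
        System.out.println(eval.toMatrixString()); // confusion matrix
    }
}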
10. Do you think it is a good idea to prefer simple decision trees instead of having long, complex
decision trees? How does the complexity of a decision tree relate to the bias of the model?
Analysis:
It is a good idea to prefer simple decision trees to long, complex ones: simple trees generalize
better and are easier to interpret. A more complex tree has lower bias (it can fit the training
data more closely) but higher variance, so it is more prone to overfitting; a simpler tree has
higher bias but is less likely to overfit.
11. You can make your decision trees simpler by pruning the nodes. One approach is to use
Reduced Error Pruning. Explain this idea briefly. Try reduced-error pruning for training
your decision trees using cross-validation and report the decision trees you obtain. Also
report your accuracy using the pruned model. Does your accuracy increase?
Ans) Reduced-error pruning holds out part of the training data as a pruning (validation) set;
starting from the leaves, each internal node is considered for replacement by a leaf labelled with
its most frequent class, and the replacement is kept if it does not decrease accuracy on the
pruning set. We can make our decision tree simpler by pruning the nodes in this way.
In the WEKA Explorer, select the Classify tab and select the Use training set option.
In the Classify tab, press the Choose button and select J48 as the decision tree technique.
Beside the Choose button, click on the "J48 -C 0.25 -M 2" text to open the Generic Object Editor.
Set the reducedErrorPruning property to True and press OK.
Then press the Start button.
Figure-10
Analysis:
With the pruned model, the accuracy decreased. However, by pruning the nodes we obtain a
simpler decision tree.
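The same experiment in code: a minimal sketch (path assumed as before) that enables J48's reducedErrorPruning option, prints the pruned tree, and reports its 10-fold cross-validation accuracy:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ReducedErrorPruningDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/credit-g.arff"); // path assumed
        data.setClassIndex(data.numAttributes() - 1);
        // Same effect as setting reducedErrorPruning to True in the
        // Generic Object Editor (command-line option -R).
        J48 tree = new J48();
        tree.setReducedErrorPruning(true);
        tree.buildClassifier(data);
        System.out.println(tree); // the pruned (usually smaller) tree
        // 10-fold cross-validation of the pruned configuration.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.printf("10-fold CV accuracy with REP: %.1f%%%n",
                          eval.pctCorrect());
    }
}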
12. How can you convert a decision tree into "if-then-else" rules? Make up your own small
decision tree consisting of 2-3 levels and convert it into a set of rules. There also exist different
classifiers that output the model in the form of rules. One such classifier in WEKA is
rules.PART; train this model and report the set of rules obtained. Sometimes just one attribute
can be good enough in making the decision, yes, just one!
Can you predict what attribute that might be in this dataset? The OneR classifier uses a single
attribute to make decisions (it chooses the attribute based on minimum error). Report the rule
obtained by training a OneR classifier. Rank the performance of J48, PART, and OneR.
Procedure:
A sample decision tree for the weather dataset, with 2-3 levels, is shown in figure-11. Each
path from the root to a leaf becomes one rule: the attribute tests along the path form the "if"
conditions and the class at the leaf is the "then" conclusion.
Figure-11
Then go to Choose and select Rules; in that, select PART and press the Start button. The
result is shown in figure-12.
Figure-12
Then go to Choose and select Rules; in that, select OneR and press the Start button. The
result is shown in figure-13.
Figure-13
Then go to Choose and select Trees; in that, select J48 and press the Start button. The result
is shown in figure-14.
Figure-14.
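The three classifiers can be ranked in one pass with the sketch below (path assumed as before); it prints each model (tree or rule set) followed by its 10-fold cross-validation accuracy:

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.rules.OneR;
import weka.classifiers.rules.PART;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RankClassifiers {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/credit-g.arff"); // path assumed
        data.setClassIndex(data.numAttributes() - 1);
        Classifier[] models = {new J48(), new PART(), new OneR()};
        for (Classifier model : models) {
            // Train on all data once to print the model (tree or rules)...
            model.buildClassifier(data);
            System.out.println(model);
            // ...and use 10-fold cross-validation to rank performance.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(model, data, 10, new Random(1));
            System.out.printf("%s accuracy: %.1f%%%n",
                              model.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}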
Analysis:
From this observation, we have seen the performance of the classifiers; the ranking is as follows:
1. PART
2. J48
3. OneR