The goals of this week’s lab: implement an ID3-like decision tree, handle ARFF-formatted data with both discrete and continuous features, and analyze how maximum depth affects training and test accuracy.
For this lab, pair programming is optional (see Piazza for more information).
Accept your repo on GitHub Classroom. You should have the following folders/files:

- run_dtree.py - your main program executable
- DecisionTree.py - a class file to define your decision tree data structure
- util.py - file parsing and featurization helper functions
- Partition.py - a class file to define your partition data structure
- README.md - for analysis questions and lab feedback

You will implement an ID3-like decision tree in this part of the lab. A few steps have been provided in the starter code. First investigate these steps and make sure the code is clear to you:
The next steps you will implement:
Your program should take in 2-3 command-line arguments:

- -r <train file>: the training set in ARFF format (required)
- -e <test file>: the test set in ARFF format (required)
- -d <max depth>: the maximum depth of the tree (optional; if none, unlimited depth)

For example:
python3 run_dtree.py -r data/movies_train.arff -e data/movies_test.arff -d 1
Investigate the function parse_args in util.py, where the arguments are parsed. This function makes use of the optparse library, which we will be using in future labs to manage command-line arguments.
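The starter code’s version will differ in its details, but a minimal optparse-based parse_args might look like the sketch below (the option flags come from the usage above; everything else is illustrative). Note that it also enforces the non-negative depth requirement described later.

```python
from optparse import OptionParser
import sys

def parse_args():
    """Parse command-line arguments; return (train_file, test_file, max_depth)."""
    parser = OptionParser()
    parser.add_option("-r", dest="train", help="training data (ARFF format)")
    parser.add_option("-e", dest="test", help="test data (ARFF format)")
    parser.add_option("-d", dest="depth", type="int", default=None,
                      help="maximum tree depth (default: unlimited)")
    opts, _ = parser.parse_args()

    if opts.train is None or opts.test is None:
        parser.print_help()
        sys.exit(1)
    if opts.depth is not None and opts.depth < 0:
        print("error: maximum depth must be a non-negative integer")
        sys.exit(1)
    return opts.train, opts.test, opts.depth
```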
Your program should follow good design practices: effective top-down design, modularity, and commenting for functions (triple-quoted docstrings) and code (# comments).
You should break your program up into multiple files with run_dtree.py as the main executable (you shouldn’t end up with a lot of code in run_dtree.py). You should build classes in separate files (I have provided Partition.py to hold datasets and DecisionTree.py to hold the tree structure, but you could include others).
All decision tree algorithm details should be encapsulated in your decision tree class. The main file should only directly handle data parsing and the main control flow for calling decision tree steps (e.g., train, test, output).
Clean up any debugging print statements before your final submission. They may be useful during development, but they need to be removed before we grade your code.
All functions should include a top-level comment (triple quotes) describing purpose, parameters, and return values.
Avoid using existing libraries to solve major components of the program. You are more than welcome to use numpy for array management. Please ask me if there are other libraries you are interested in using.
Your program is expected to read files that are in the ARFF format.
The header of the file describes the features and class label. You can assume that the class label is the last line in the feature list and will be named class, to easily demarcate the end of the header. You can also assume the headers in the train and test files are the same.
Subsequent lines after the header contain one example per line, represented as comma-separated feature values. The feature values occur in the same order as the feature descriptions in the header (with the class label at the end of the line).
Your program should handle both discrete and continuous features. You can assume the class label is discrete (and binary).
Lines starting with % are comments and should be ignored.
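Putting these rules together, a hypothetical toy file in this format might look like:

```
% comment lines start with % and are ignored
@relation movies_toy
@attribute Type {Animated,Comedy,Drama}
@attribute Length real
@attribute class {-1,1}
@data
Animated,90,-1
Comedy,95,1
```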
I have provided data sets for testing purposes in the data directory including a heart disease prediction task (heart_{train,test}.arff) and a diabetes prediction task (diabetes_{train,test}.arff). You are encouraged to add your own toy examples for the purposes of debugging. Be sure to follow the format required in the usage.
The other type of input is the optional maximum depth of the tree. This should be a non-negative integer; all other values should be rejected and lead to a clean exit with an error message. If no value is given, trees may have unlimited depth but should obey the other stopping conditions.
Your program should process inputs, parse and index training and test examples, and induce a decision tree. There are very few restrictions on how you do this, but be sure your tree fits the behavior from class:
Never use information from the test set in any part of learning the tree, including when you decide thresholds for continuous features.
You may assume that the prediction task is binary, with the first value as -1 (negative) and the second value as +1 (positive). You can implement multi-valued prediction as an extension.
Continuous features are converted into multiple binary features using thresholding at class label changes (provided in util.py). The feature name uses the less-than-or-equal relation (i.e., the feature value is “True” if it is <= the threshold). We will choose the midpoint between two adjacent feature values; e.g., if there is a label change between X=20 and X=30, we will add a feature X<=25. (A sketch of this conversion appears below.)
There should be a branch for each value of a chosen feature. This is described in the header of the arff file, and is limited to “True” and “False” for the converted continuous features. It is probably simplest to store these as string type.
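This conversion is already provided in util.py, but conceptually it amounts to something like the following sketch (the function and variable names here are assumptions, not the starter code’s actual interface):

```python
def candidate_thresholds(values, labels):
    """Return midpoints between adjacent sorted feature values whose class
    labels differ; each midpoint t becomes a binary feature "X<=t"."""
    pairs = sorted(zip(values, labels))
    thresholds = []
    for (v1, y1), (v2, y2) in zip(pairs, pairs[1:]):
        if y1 != y2 and v1 != v2:      # label change at distinct values
            thresholds.append((v1 + v2) / 2)
    return thresholds
```

For the X=20/X=30 example above, candidate_thresholds([20, 30], [-1, 1]) returns [25.0], yielding the binary feature X<=25.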
Use information gain as your selection criterion. Among features with the same information gain, choose the feature that comes first in the ordered dictionary. Here are the information gains for the tennis and movies datasets (at the root). Make sure you’re able to get these same results using the provided starter code.
TENNIS
Info Gain:
Outlook, 0.246750
Temperature, 0.029223
Humidity, 0.151836
Wind, 0.048127
MOVIES
Info Gain:
Type, 0.306099
Length, 0.306099
Director, 0.557728
Famous_actors, 0.072780
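As a sanity check, the Outlook gain can be reproduced by hand from the per-branch label counts (shown as [negative, positive] in the example output further down):

```python
from math import log2

def entropy(counts):
    """Entropy (in bits) of a list of class counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

root = [5, 9]                              # [negative, positive] at the root
branches = [[0, 4], [2, 3], [3, 2]]        # Overcast, Rain, Sunny
n = sum(root)
conditional = sum(sum(b) / n * entropy(b) for b in branches)
print(round(entropy(root) - conditional, 6))   # prints 0.24675
```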
The stopping criteria for a leaf are: all examples at the node have the same label, there are no features remaining to split on, or the maximum depth has been reached.
The labels at leaves should be the majority label of the examples at the leaf (in other words, we will not use probabilities for predictions). Ties should be broken in favor of the negative class (-1).
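In code, the tie-break amounts to a strict comparison; e.g., if label counts are stored as [negative, positive]:

```python
def majority_label(counts):
    """Return +1 only if positives strictly outnumber negatives;
    ties go to the negative class."""
    neg, pos = counts
    return 1 if pos > neg else -1
```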
When run, your program should print out the learned tree (with the [negative, positive] label counts at each node, as shown below) followed by the test-set accuracy.
For example, here is the result of the tennis dataset from Handout 3 (sort the child branch labels so that your ordering is consistent). At the root, there are 9 days tennis is played and 5 days it is not.
Here are a few examples:
TENNIS, no max depth
$ python3 run_dtree.py -r data/tennis_train.arff -e data/tennis_test.arff
[5, 9]
Outlook=Overcast [0, 4]: 1
Outlook=Rain [2, 3]
| Wind=Strong [2, 0]: -1
| Wind=Weak [0, 3]: 1
Outlook=Sunny [3, 2]
| Humidity=High [3, 0]: -1
| Humidity=Normal [0, 2]: 1
14 out of 14 correct
accuracy = 1.0000
TENNIS, max depth = 1
$ python3 run_dtree.py -r data/tennis_train.arff -e data/tennis_test.arff -d 1
[5, 9]
Outlook=Overcast [0, 4]: 1
Outlook=Rain [2, 3]: 1
Outlook=Sunny [3, 2]: -1
10 out of 14 correct
accuracy = 0.7143
MOVIES, no max depth
$ python3 run_dtree.py -r data/movies_train.arff -e data/movies_test.arff
[3, 6]
Director=Adamson [0, 3]: 1
Director=Lasseter [3, 1]
| Type=Animated [2, 0]: -1
| Type=Comedy [1, 0]: -1
| Type=Drama [0, 1]: 1
Director=Singer [0, 2]: 1
9 out of 9 correct
accuracy = 1.0000
MOVIES, max depth = 1
$ python3 run_dtree.py -r data/movies_train.arff -e data/movies_test.arff -d 1
[3, 6]
Director=Adamson [0, 3]: 1
Director=Lasseter [3, 1]: -1
Director=Singer [0, 2]: 1
8 out of 9 correct
accuracy = 0.8889
Heart disease data set:
python3 run_dtree.py -r data/heart_train.arff -e data/heart_test.arff -d <depth>
Diabetes data set:
python3 run_dtree.py -r data/diabetes_train.arff -e data/diabetes_test.arff -d <depth>
Entropy and information gain calculations have been provided in Partition.py. Utilize this starter code to write a best_feature method in the Partition class.
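For instance, best_feature might be as short as the sketch below, assuming the starter code exposes a per-feature gain helper (info_gain and self.features are assumed names; check Partition.py for the actual interface):

```python
def best_feature(self):
    """Return the feature with the highest information gain, breaking ties
    in favor of the feature that comes first in the ordered dictionary."""
    best_name, best_gain = None, -1.0
    for name in self.features:        # dicts preserve insertion order
        gain = self.info_gain(name)   # assumed starter-code helper
        if gain > best_gain:          # strict > keeps the earlier feature on ties
            best_name, best_gain = name, gain
    return best_name
```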
Think about what the DecisionTree constructor should take as arguments. One suggestion is to do the entire ID3 algorithm within the DecisionTree constructor (implicitly returning the root node).
Your algorithm should be recursive! Make sure you are making an instance of the Partition class when you divide the data based on feature values, then call your tree-building function on this partition.
You can have a Node class and distinguish between internal nodes and leaves, but this is not a requirement. Another option is to have a node name attribute: for internal nodes this is the feature name, and for leaf nodes this is the label of the majority class.
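Putting the last few points together, one possible shape for the recursion is sketched below. The Node class follows the name-attribute option just described; is_pure, majority_label, features, and split are assumed Partition helpers, so adapt the names to your own design:

```python
class Node:
    """Internal node: name is a feature and children maps value -> Node.
    Leaf: name is the predicted label and children is empty."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or {}

def build_tree(partition, depth, max_depth):
    """Recursive ID3 sketch (max_depth of None means unlimited depth)."""
    if (partition.is_pure() or not partition.features
            or depth == max_depth):            # None never equals depth
        return Node(partition.majority_label())
    feature = partition.best_feature()
    children = {}
    for value in partition.features[feature]:  # one branch per feature value
        sub = partition.split(feature, value)  # a new Partition instance
        children[value] = build_tree(sub, depth + 1, max_depth)
    return Node(feature, children)
```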
Notice that in the starter code, the training data features are converted from continuous to binary, but the testing data features are not. This is deliberate. (Why?) When you classify test examples, think about how to make their feature values work with the decision tree you have created using the training data.
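One way to handle this at prediction time: since a test example still stores the raw continuous value, split each thresholded feature name on “<=” and evaluate the comparison on the fly. A sketch, assuming examples are stored as dicts of raw values:

```python
def feature_value(example, feature_name):
    """Evaluate a (possibly thresholded) feature on a raw test example."""
    if "<=" in feature_name:                   # e.g. "X<=25" from training
        raw_name, threshold = feature_name.split("<=")
        return "True" if float(example[raw_name]) <= float(threshold) else "False"
    return example[feature_name]               # discrete feature: use as-is
```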
Answer the following questions in your README.md file.
What depth did you find “best” for the heart disease dataset? For the diabetes dataset?
How did you determine that these depth values were the “best”? (i.e. how did training and testing accuracy change as the depth changed?)
If doctors were to use these types of decision trees when diagnosing patients, what issues might arise? Advantages? Disadvantages?
Generate two learning curve graphs (i.e., line graphs) with an accompanying analysis of the graphs. On the x-axis, show the depth of a tree. On the y-axis, show the accuracy of the tree on both the training set and test set. You should have one graph for the diabetes data set and one graph for the heart disease data set. Describe what you can conclude from each graph. Be sure that your graphs are of scientific quality - including captions, axis labels, distinguishable curves, and legible font sizes.
Both data sets I have provided have two class labels. Decision trees easily extend to multi-class prediction. Find a relevant data set (e.g., at Kaggle or the UC Irvine Machine Learning Repository) and evaluate your algorithm with multiple discrete values allowed for labels.
Another approach to prevent overfitting is setting a minimum number of examples in a leaf. If a split would result in children below the threshold, the recursion is stopped at the current node and a leaf is created with the plurality label. This is often more flexible than maximum depth: it allows variable-depth branches while still trying to prevent overly detailed hypotheses that fit only a few examples. Add minimum leaf size as an optional command-line argument (a sketch of the stopping test follows below).
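Reusing the assumed Partition/Node names from the earlier recursion sketch (with min_size as the new option and num_examples as another assumed helper), the extra stopping test slots in right after choosing the feature to split on:

```python
feature = partition.best_feature()
children_parts = [partition.split(feature, v)
                  for v in partition.features[feature]]
# New stopping test: if any child would fall below the minimum leaf size,
# stop here and create a leaf with the plurality label instead.
if any(p.num_examples() < min_size for p in children_parts):
    return Node(partition.majority_label())
```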
Machine learning algorithms are often sensitive to the amount of training data: some algorithms thrive when there are large amounts of data but struggle when data is sparse (e.g., deep neural networks), while others plateau in performance even if more data is available. Evaluate your decision tree on the given training data by randomly subsampling the data to create smaller training sets (e.g., use 10% of the training data). Choose at least 5 training set sizes and plot them on the x-axis, with the y-axis describing the accuracy. You’ll probably want to run each training set size 3-5 times and average the results (see the sketch below). Describe what you see when you compare training and test accuracy as the training set grows.
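A possible experiment loop, again using the assumed DecisionTree/Partition interfaces from the sketches above (seeding and sizes are illustrative):

```python
import random

def subsample_accuracy(train_examples, test_examples, fraction, trials=5):
    """Average test accuracy over several random subsamples of the
    training data."""
    accuracies = []
    for seed in range(trials):
        rng = random.Random(seed)                  # new subsample each trial
        k = max(1, int(fraction * len(train_examples)))
        subset = rng.sample(train_examples, k)
        tree = DecisionTree(Partition(subset))     # assumed constructor
        accuracies.append(tree.accuracy(test_examples))  # assumed method
    return sum(accuracies) / trials
```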
For the programming portion, be sure to commit your work often to prevent lost data. Only your final pushed solution will be graded. Only files in the main directory will be graded. Please double check all requirements; common errors include:
Modified from a lab by Ameet Soni.
Both data sets were made available by the UC Irvine Machine Learning Repository. The heart disease data set comes from a Cleveland Clinic study of coronary heart disease; the patients were identified as having or not having heart disease, and the resulting feature set is a subset of the original 76 factors that were studied. The diabetes data set is the result of a study of females at least 21 years old of Pima Indian heritage to understand risk factors of diabetes. Both data sets were converted to ARFF format by Mark Craven at the University of Wisconsin.