CS360 Lab 3: Decision Trees

Due: Thursday, February 15 at 11:59pm


Overview

The goal of this week's lab is to implement an ID3-like decision tree learner and to analyze its behavior on real data sets.

For this lab, pair programming is optional (see Piazza for more information).


Getting Started

Accept your repo on GitHub Classroom and check that you received all of the starter folders and files.



Implement a Decision Tree

You will implement an ID3-like decision tree in this part of the lab. A few steps have been provided in the starter code; first investigate these steps and make sure the code is clear to you. You will then implement the remaining steps yourself.

Usage

Your program should take two to three command-line arguments:

  1. -r: path to the training data set
  2. -e: path to the testing data set
  3. -d: (optional) maximum depth of the tree (if omitted, the depth is unlimited)

For example:

python3 run_dtree.py -r data/movies_train.arff -e data/movies_test.arff -d 1

Investigate the function parse_args in util.py, where the arguments are parsed. This function makes use of the optparse library, which we will be using in future labs to manage command-line arguments.
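For reference, a parse_args built on optparse might look roughly like the sketch below. This is an illustration of the pattern, not the actual contents of util.py; the option strings and variable names in the starter code may differ.

from optparse import OptionParser

def parse_args():
    parser = OptionParser()
    parser.add_option("-r", "--train", dest="train_filename",
                      help="path to the training ARFF file")
    parser.add_option("-e", "--test", dest="test_filename",
                      help="path to the testing ARFF file")
    parser.add_option("-d", "--depth", dest="max_depth", type="int",
                      default=-1, help="maximum tree depth (unlimited if omitted)")
    opts, args = parser.parse_args()

    # The training and testing paths are required.
    if opts.train_filename is None or opts.test_filename is None:
        parser.error("must provide -r <train file> and -e <test file>")
    return opts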

Program Style Requirements

Program Inputs

Program Main Behavior

Your program should process inputs, parse and index training and test examples, and induce a decision tree. There are very few restrictions on how you do this, but be sure your tree matches the behavior from class. In particular, your information gain calculations at the root of each data set should produce the following values:

TENNIS

Info Gain:
     Outlook, 0.246750
 Temperature, 0.029223
    Humidity, 0.151836
        Wind, 0.048127

MOVIES

Info Gain:
         Type, 0.306099
       Length, 0.306099
     Director, 0.557728
Famous_actors, 0.072780
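These values follow from the standard entropy-based definition of information gain. Below is a minimal sketch of the computation; it assumes examples are (feature_dict, label) pairs, which is an illustrative assumption rather than the starter code's actual representation (the starter code already provides these calculations in Partition).

import math
from collections import Counter

def entropy(labels):
    # Entropy (in bits) of a list of class labels.
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def info_gain(examples, feature):
    # Information gain of splitting `examples` on `feature`, where each
    # example is assumed to be a (feature_dict, label) pair.
    labels = [label for _, label in examples]
    remainder = 0.0
    for value in set(f[feature] for f, _ in examples):
        subset = [label for f, label in examples if f[feature] == value]
        remainder += (len(subset) / len(examples)) * entropy(subset)
    return entropy(labels) - remainder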

Program output

When run, your program should print the learned decision tree (with class counts at each node), followed by the number of correctly classified test examples and the test accuracy.

For example, here is the result for the tennis dataset from Handout 3 (sort the child branch labels so that your ordering is consistent; a sketch of one way to produce this format follows the examples). At the root, there are 9 days on which tennis is played and 5 days on which it is not.

Here are a few examples:

TENNIS, no max depth

$ python3 run_dtree.py -r data/tennis_train.arff -e data/tennis_test.arff
[5, 9]
Outlook=Overcast [0, 4]: 1
Outlook=Rain [2, 3]
|   Wind=Strong [2, 0]: -1
|   Wind=Weak [0, 3]: 1
Outlook=Sunny [3, 2]
|   Humidity=High [3, 0]: -1
|   Humidity=Normal [0, 2]: 1

14 out of 14 correct
accuracy = 1.0000

TENNIS, max depth = 1

$ python3 run_dtree.py -r data/tennis_train.arff -e data/tennis_test.arff -d 1
[5, 9]
Outlook=Overcast [0, 4]: 1
Outlook=Rain [2, 3]: 1
Outlook=Sunny [3, 2]: -1

10 out of 14 correct
accuracy = 0.7143

MOVIES, no max depth

$ python3 run_dtree.py -r data/movies_train.arff -e data/movies_test.arff
[3, 6]
Director=Adamson [0, 3]: 1
Director=Lasseter [3, 1]
|   Type=Animated [2, 0]: -1
|   Type=Comedy [1, 0]: -1
|   Type=Drama [0, 1]: 1
Director=Singer [0, 2]: 1

9 out of 9 correct
accuracy = 1.0000

MOVIES, max depth = 1

$ python3 run_dtree.py -r data/movies_train.arff -e data/movies_test.arff -d 1
[3, 6]
Director=Adamson [0, 3]: 1
Director=Lasseter [3, 1]: -1
Director=Singer [0, 2]: 1

8 out of 9 correct
accuracy = 0.8889
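One way to produce the indented output above is a short recursive print. The node fields used here (feature, counts, children, label) are hypothetical; adapt them to whatever representation you choose:

def print_tree(node, depth=0):
    # node.children is assumed to map feature values to child nodes;
    # sorting the keys keeps the branch ordering consistent.
    for value in sorted(node.children):
        child = node.children[value]
        line = "|   " * depth + "%s=%s %s" % (node.feature, value, child.counts)
        if child.label is not None:
            print(line + ": %s" % child.label)   # leaf: append the prediction
        else:
            print(line)
            print_tree(child, depth + 1)         # internal node: recurse

The root's class counts (e.g., [5, 9]) would be printed separately before the first call.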

Example runs

Heart disease data set:

python3 run_dtree.py -r data/heart_train.arff -e data/heart_test.arff -d <depth>

Diabetes data set:

python3 run_dtree.py -r data/diabetes_train.arff -e data/diabetes_test.arff -d <depth>

Implementation suggestions

  1. Entropy and information gain calculations have been provided in the Partition class (see Partition.py). Use this starter code to write a best_feature method in the Partition class.

  2. Think about what the DecisionTree constructor should take as arguments. One approach is to perform the entire ID3 algorithm within the DecisionTree constructor (implicitly returning the root node); see the sketch after this list.

  3. Your algorithm should be recursive! Make sure you create a new instance of the Partition class whenever you divide the data based on feature values, then call your tree-building function on this partition.

  4. You can have a Node class that distinguishes between internal nodes and leaves, but this is not a requirement. Another option is to give each node a name attribute: for internal nodes this is the feature name, and for leaf nodes this is the label of the majority class.

  5. Notice that in the starter code, the training data features are converted from continuous to binary, but the testing data features are not. This is deliberate (why?). When you classify test examples, think about how to make their feature values work with the decision tree you created from the training data.
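To make the suggestions above concrete, here is a minimal sketch of recursive ID3 run inside the DecisionTree constructor. The Node class and the Partition methods beyond the provided entropy/information gain code (is_pure, majority_label, best_feature, feature_values, split) are assumptions about one possible design, not the starter code's actual API.

class Node:
    def __init__(self, feature=None, label=None):
        self.feature = feature    # set on internal nodes
        self.label = label        # set on leaves (majority-class label)
        self.children = {}        # maps feature value -> child Node

class DecisionTree:

    def __init__(self, partition, max_depth=-1):
        # Build the entire tree up front; self.root is the finished tree.
        self.root = self._build(partition, depth=0, max_depth=max_depth)

    def _build(self, partition, depth, max_depth):
        # Base cases: all labels agree, or the depth limit is reached
        # (with max_depth=-1, the depth check never triggers).
        if partition.is_pure() or depth == max_depth:
            return Node(label=partition.majority_label())

        feature = partition.best_feature()        # highest information gain
        node = Node(feature=feature)
        for value in partition.feature_values(feature):
            # Make a new Partition from the matching examples, then recurse.
            sub = partition.split(feature, value)
            node.children[value] = self._build(sub, depth + 1, max_depth)
        return node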


Analysis

Answer the following questions in your README.md file.

  1. What depth did you find “best” for the heart disease dataset? For the diabetes dataset?

  2. How did you determine that these depth values were the “best”? (i.e., how did training and testing accuracy change as the depth changed?)

  3. If doctors were to use these types of decision trees when diagnosing patients, what issues might arise? What would the advantages and disadvantages be?


Extensions (Optional)

Learning curves for depth

Generate two learning curve graphs (i.e., line graphs) with an accompanying analysis of the graphs. On the x-axis, show the depth of a tree. On the y-axis, show the accuracy of the tree on both the training set and test set. You should have one graph for the diabetes data set and one for the heart disease data set. Describe what you can conclude from each graph. Be sure that your graphs are of scientific quality, including captions, axis labels, distinguishable curves, and legible font sizes.

Multi-class prediction

Both of the provided data sets have two class labels. Decision trees easily extend to multi-class prediction. Find a relevant data set (e.g., at Kaggle or the UC Irvine Machine Learning Repository) and evaluate your algorithm with multiple discrete values allowed for labels.

Min Leaf Size

Another approach to preventing overfitting is to set a minimum number of examples per leaf. If a split would result in children below this threshold, the recursion stops at the current node and a leaf is created with the plurality label. This is often more flexible than a maximum depth: it allows branches of variable depth while still discouraging overly detailed hypotheses that fit only a few examples. Add minimum leaf size as an optional command-line argument (one way to wire this in is sketched below).
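If your argument parsing follows the optparse pattern sketched earlier, the new option could look like this (the -m flag and the min_leaf_size name are suggestions, not requirements):

parser.add_option("-m", "--min-leaf", dest="min_leaf_size", type="int",
                  default=1, help="minimum number of examples per leaf")

Inside the recursion, stop and create a plurality-label leaf whenever a candidate split would produce a child with fewer than min_leaf_size examples.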

Learning curves for training set size

Machine learning algorithms are often sensitive to the amount of training data: some algorithms thrive when there are large amounts of data but struggle when data is sparse (e.g., deep neural networks), while others plateau in performance even when more data is available. Evaluate your decision tree on the given training data by randomly subsampling the data to create smaller training sets (e.g., use 10% of the training data). Choose at least 5 training set sizes and plot them on the x-axis, with the y-axis showing accuracy. You'll probably want to run each training set size 3-5 times and average the results (a subsampling sketch is given below). Describe what you see when you compare training and test accuracy as the training set grows.
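A subsampling helper along these lines would work (train_and_score is a hypothetical stand-in for training a tree on the subset and reporting its test accuracy):

import random

def subsample(examples, fraction, seed=None):
    # Random subset of the training examples, drawn without replacement.
    rng = random.Random(seed)
    k = max(1, int(len(examples) * fraction))
    return rng.sample(examples, k)

# Example: average over 5 random 10% samples.
# accuracies = [train_and_score(subsample(train_examples, 0.10, seed=i))
#               for i in range(5)]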


Submitting your work

For the programming portion, be sure to commit your work often so that nothing is lost. Only your final pushed solution will be graded, and only files in the main directory will be graded. Please double-check all requirements; common errors include:

Acknowledgements

Modified from a lab by Ameet Soni.

Both data sets were made available by the UC Irvine Machine Learning Repository. The heart disease data set is the result of a Cleveland Clinic study of coronary heart disease: patients were identified as having or not having heart disease, and the resulting feature set is a subset of the original 76 factors studied. The diabetes data set is the result of a study of females at least 21 years old, of Pima Indian heritage, aimed at understanding risk factors for diabetes. Both data sets were converted to ARFF format by Mark Craven at the University of Wisconsin.