An Example Machine Learning Notebook
Table of contents
1. Introduction
2. License
3. Required libraries
4.
5.
6.
7.
8.
9. Step 5: Classification
   o Cross-validation
   o Parameter tuning
License
[ go back to the top ]
Please see the repository README file for the licenses and usage terms for the instructional material and code in this
notebook. In general, I have licensed this material so that it is as widely usable and shareable as possible.
Required libraries
[ go back to the top ]
If you don't have Python on your computer, you can use the Anaconda Python distribution to install most of the Python
packages you need. Anaconda provides a simple double-click installer for your convenience.
This notebook uses several Python packages that come standard with the Anaconda Python distribution. The primary libraries that we'll be using are:
NumPy: Provides fast numerical array structures and helper functions to work with them.
pandas: Provides a DataFrame structure to store data in memory and work with it easily and efficiently.
scikit-learn: The essential Machine Learning package in Python.
matplotlib: Basic plotting library in Python; most other Python plotting libraries are built on top of it.
Seaborn: Advanced statistical plotting library built on top of matplotlib.
To make sure you have all of the packages you need, install them with conda:
conda install numpy pandas scikit-learn matplotlib seaborn
conda may ask you to update some of them if you don't have the most recent version. Allow it to do so.
Note: I will not be providing support for people trying to run this notebook outside of the Anaconda Python distribution.
We've been given a data set from our field researchers to develop the demo, which only includes measurements for three
types of Iris flowers:
Iris setosa
Iris versicolor
Iris virginica
The four measurements we're using currently come from hand-measurements by the field researchers, but they will be
automatically measured by an image processing model in the future.
Note: The data set we're working with is the famous Iris data set, included with this notebook, which I have modified slightly for demonstration purposes.
Notice that we've spent a fair amount of time working on the problem without writing a line of code or even looking at the
data.
Thinking about and documenting the problem we're working on is an important step in performing effective data analysis, and one that often goes overlooked. Don't skip it.
iris_data = pd.read_csv('iris-data.csv')
iris_data.head()
Out[1]:
   sepal_length_cm  sepal_width_cm  petal_length_cm  petal_width_cm        class
0              5.1             3.5              1.4             0.2  Iris-setosa
1              4.9             3.0              1.4             0.2  Iris-setosa
2              4.7             3.2              1.3             0.2  Iris-setosa
3              4.6             3.1              1.5             0.2  Iris-setosa
4              5.0             3.6              1.4             0.2  Iris-setosa
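The re-read that tells pandas how to recognize missing values isn't shown above; here's a minimal sketch of how it might look (na_values is the standard pandas argument for declaring sentinel strings, but the exact call used in this notebook is an assumption):

import pandas as pd

# Hypothetical re-read: treat the literal string 'NA' as a missing value
iris_data = pd.read_csv('iris-data.csv', na_values=['NA'])
iris_data.head()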
Voilà! Now pandas knows to treat rows with 'NA' as missing values.
Next, it's always a good idea to look at the distribution of our data, especially the outliers.
Let's start by printing out some summary statistics about the data set.
In [3]:
iris_data.describe()
Out[3]:
       sepal_length_cm  sepal_width_cm  petal_length_cm  petal_width_cm
count       150.000000      150.000000       150.000000      145.000000
mean          5.644627        3.054667         3.758667        1.236552
std           1.312781        0.433123         1.764420        0.755058
min           0.055000        2.000000         1.000000        0.100000
25%           5.100000        2.800000         1.600000        0.400000
50%           5.700000        3.000000         4.350000        1.300000
75%           6.400000        3.300000         5.100000        1.800000
max           7.900000        4.400000         6.900000        2.500000
We can see several useful values from this table. For example, we see that five petal_width_cm entries are missing.
If you ask me, though, tables like this are rarely useful unless we know that our data should fall in a particular range. It's
usually better to visualize the data in some way. Visualization makes outliers and errors immediately stand out, whereas they
might go unnoticed in a large table of numbers.
Since we know we're going to be plotting in this section, let's set up the notebook so we can plot inside of it.
In [4]:
# This line tells the notebook to show plots inside of the notebook
%matplotlib inline
Next, let's create a scatterplot matrix. Scatterplot matrices plot the distribution of each variable along the diagonal and a scatterplot for every pair of variables in the remaining cells. They make for an efficient tool to look for errors in our data.
We can even have the plotting package color each entry by its class to look for trends within the classes.
In [5]:
import seaborn as sb

# We have to temporarily drop the rows with 'NA' values
# because the Seaborn plotting function does not know
# what to do with them
sb.pairplot(iris_data.dropna(), hue='class')
Out[5]:
<seaborn.axisgrid.PairGrid at 0x109668cf8>
From the scatterplot matrix, we can already see some issues with the data set:
1. There are five classes when there should only be three, meaning there were some coding errors.
2. There are some clear outliers in the measurements that may be erroneous: one sepal_width_cm entry for Iris-setosa falls well outside its normal range, and several sepal_length_cm entries for Iris-versicolor are near-zero for some reason.
3. We had to drop the rows with missing values to make the plot.
In all of these cases, we need to figure out what to do with the erroneous data. Which takes us to the next step...
Now that we've identified several errors in the data set, we need to fix them before we proceed with the analysis.
Let's walk through the issues one-by-one.
There are five classes when there should only be three, meaning there were some coding errors.
After talking with the field researchers, it sounds like one of them forgot to add Iris- before their Iris-versicolor entries.
The other extraneous class, Iris-setossa, was simply a typo that they forgot to fix.
Let's use the DataFrame to fix these errors.
In [6]:
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data.loc[iris_data['class'] == 'Iris-setossa', 'class'] = 'Iris-setosa'
iris_data['class'].unique()
Out[6]:
array(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'], dtype=object)
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used
the wrong classes.
There are some clear outliers in the measurements that may be erroneous: one sepal_width_cm entry for Iris-setosa falls
well outside its normal range, and several sepal_length_cm entries for Iris-versicolor are near-zero for some reason.
Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the
data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers:
if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for
excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)
In the case of the one anomalous entry for Iris-setosa, let's say our field researchers know that it's impossible for Iris-setosa to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened.
In [7]:
# This line drops any 'Iris-setosa' rows with a sepal width less than 2.5 cm
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
Out[7]:
<matplotlib.axes._subplots.AxesSubplot at 0x10dac0ef0>
Excellent! Now all of our Iris-setosa rows have a sepal width of at least 2.5 cm.
The next data issue to address is the several near-zero sepal lengths for the Iris-versicolor rows. Let's take a look at
those rows.
In [8]:
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
              (iris_data['sepal_length_cm'] < 1.0)]
Out[8]:
    sepal_length_cm  sepal_width_cm  petal_length_cm  petal_width_cm            class
77            0.067             3.0              5.0             1.7  Iris-versicolor
78            0.060             2.9              4.5             1.5  Iris-versicolor
79            0.057             2.6              3.5             1.0  Iris-versicolor
80            0.055             2.4              3.8             1.1  Iris-versicolor
81            0.055             2.4              3.7             1.0  Iris-versicolor
How about that? All of these near-zero sepal_length_cm entries seem to be off by two orders of magnitude, as if they had
been recorded in meters instead of centimeters.
After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements
to centimeters. Let's do that for them.
In [9]:
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
              (iris_data['sepal_length_cm'] < 1.0),
              'sepal_length_cm'] *= 100.0

iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
Out[9]:
<matplotlib.axes._subplots.AxesSubplot at 0x10d36c320>
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.
We had to drop those rows with missing values.
Let's take a look at the rows with missing values:
In [10]:
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
Out[10]:
    sepal_length_cm  sepal_width_cm  petal_length_cm  petal_width_cm        class
                5.0             3.4              1.5             NaN  Iris-setosa
                4.4             2.9              1.4             NaN  Iris-setosa
                4.9             3.1              1.5             NaN  Iris-setosa
10              5.4             3.7              1.5             NaN  Iris-setosa
11              4.8             3.4              1.6             NaN  Iris-setosa
It's not ideal that we had to drop those rows, especially considering they're all Iris-setosa entries. Since it seems like the missing data is systematic (all of the missing values are in the same column for the same Iris type), this error could potentially bias our analysis.
One way to deal with missing data is mean imputation: If we know that the values for a measurement fall in a certain range,
we can fill in empty values with the average of that measurement.
Let's see if we can do that here.
In [11]:
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
Out[11]:
<matplotlib.axes._subplots.AxesSubplot at 0x10cf69cf8>
Most of the petal widths for Iris-setosa fall within the 0.2-0.3 range, so let's fill in these entries with the average measured
petal width.
In [12]:
average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
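Computing the class average is only half the job; we still need to write it into the empty entries. Here's a minimal sketch of that imputation step (the trailing selection, which re-displays the affected rows, is an assumption based on the output below):

iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
              (iris_data['petal_width_cm'].isnull()),
              'petal_width_cm'] = average_petal_width

# Re-display the rows we just filled in
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
              (iris_data['petal_width_cm'] == average_petal_width)]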
Out[12]:
    sepal_length_cm  sepal_width_cm  petal_length_cm  petal_width_cm        class
                5.0             3.4              1.5            0.25  Iris-setosa
                4.4             2.9              1.4            0.25  Iris-setosa
                4.9             3.1              1.5            0.25  Iris-setosa
10              5.4             3.7              1.5            0.25  Iris-setosa
11              4.8             3.4              1.6            0.25  Iris-setosa
In [13]:
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
Out[13]:
    sepal_length_cm  sepal_width_cm  petal_length_cm  petal_width_cm  class
Great! Now we've recovered those rows and no longer have missing data in our data set.
Note: If you don't feel comfortable imputing your data, you can drop all rows with missing data with the dropna() call:
iris_data.dropna(inplace=True)
After all this hard work, we don't want to repeat this process every time we work with the data set. Let's save the tidied data
file as a separate file and work directly with that data file from now on.
In [14]:
iris_data.to_csv('iris-data-clean.csv', index=False)
iris_data_clean = pd.read_csv('iris-data-clean.csv')
Let's take another look at the scatterplot matrix now that we've tidied the data.
In [15]:
sb.pairplot(iris_data_clean, hue='class')
Out[15]:
<seaborn.axisgrid.PairGrid at 0x10ea45630>
Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you
may face while tidying your data.
The general takeaways here should be:
Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that
expected range
Deal with missing data in one way or another: replace it if you can or drop it
Never tidy your data manually because that is not easily reproducible
Plot everything you can about the data at this stage of the analysis so you can visually confirm everything looks
correct
We can quickly test our data using assert statements: We assert that something must be true, and if it is, then nothing
happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and
brings it to our attention. For example:
In [16]:
assert 1 == 2
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-16-a810b3a4aded> in <module>()
----> 1 assert 1 == 2

AssertionError:
Let's test a few things that we know about our data set now.
In [18]:
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
In [19]:
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor',
'sepal_length_cm'].min() >= 2.5
In [20]:
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying
stage.
This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty; these charts are for internal use.
Let's return to that scatterplot matrix that we used earlier.
In [21]:
sb.pairplot(iris_data_clean)
Out[21]:
<seaborn.axisgrid.PairGrid at 0x10fddfcc0>
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that
assume the data is normally distributed.
There's something strange going on with the petal measurements. Maybe it's something to do with the different Iris types.
Let's color code the data by the class again to see if that clears things up.
In [22]:
sb.pairplot(iris_data_clean, hue='class')
Out[22]:
<seaborn.axisgrid.PairGrid at 0x11132b588>
Sure enough, the strange distribution of the petal measurements exists because of the different species. This is actually great news for our classification task since it means that the petal measurements will make it easy to distinguish between Iris-setosa and the other Iris types.
Distinguishing Iris-versicolor and Iris-virginica will prove more difficult given how much their measurements overlap.
There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists
assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.
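A quick way to check those correlations numerically is pandas' built-in correlation matrix (an optional check, not one of the notebook's numbered cells):

# Pairwise Pearson correlations between the numeric measurement columns
iris_data_clean.corr()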
We can also make violin plots of the data to compare the measurement distributions of the classes. Violin plots contain the same information as box plots, but also scale the box according to the density of the data.
In [23]:
plt.figure(figsize=(10, 10))
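# The remainder of this cell is a sketch rather than the original code: one way to
# draw a violin plot for each measurement with Seaborn (the 2x2 subplot layout is an
# assumption)
import matplotlib.pyplot as plt
import seaborn as sb

for column_index, column in enumerate(iris_data_clean.columns[:-1]):
    plt.subplot(2, 2, column_index + 1)
    sb.violinplot(x='class', y=column, data=iris_data_clean)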
Step 5: Classification
Assured that our data is now as clean as we can make it, and armed with some cursory knowledge of the distributions and relationships in our data set, it's time to make the next big step in our analysis: splitting the data into training and testing sets.
A training set is a random subset of the data that we use to train our models.
A testing set is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforeseen data.
Especially in sparse data sets like ours, it's easy for models to overfit the data: The model will learn the training set so well
that it won't be able to handle most of the cases it's never seen before. This is why it's important for us to build the model
with the training set, but score it with the testing set.
Note that once we split the data into a training and testing set, we should treat the testing set like it no longer exists: We
cannot use any information from the testing set to build our model or else we're cheating.
Let's set up our data first.
In [24]:
iris_data_clean = pd.read_csv('iris-data-clean.csv')

# We can extract the data in this format from pandas like this:
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
                              'petal_length_cm', 'petal_width_cm']].values

all_classes = iris_data_clean['class'].values

# Make sure that you don't mix up the order of the entries
# all_inputs[5] inputs should correspond to the class in all_classes[5]
all_inputs[:5]
Out[24]:
array([[ 5.1,  3.5,  1.4,  0.2],
       [ 4.9,  3. ,  1.4,  0.2],
       [ 4.7,  3.2,  1.3,  0.2],
       [ 4.6,  3.1,  1.5,  0.2],
       [ 5. ,  3.6,  1.4,  0.2]])
In [25]:
from sklearn.cross_validation import train_test_split

(training_inputs,
 testing_inputs,
 training_classes,
 testing_classes) = train_test_split(all_inputs, all_classes, train_size=0.75, random_state=1)
With our data split, we can start fitting models to our data. Our head of data is all about decision tree classifiers, so let's start
with one of those.
Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data, each time getting closer to finding out the class of each entry, until they either classify the data set perfectly or simply can't differentiate a set of entries. Think of it like a game of Twenty Questions, except the computer is much, much better at it.
Here's an example decision tree classifier:
Notice how the classifier asks Yes/No questions about the data (whether a certain feature is <= 1.75, for example) so it can differentiate the records. This is the essence of every decision tree.
The nice part about decision tree classifiers is that they are scale-invariant, i.e., the scale of the features does not affect
their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1
or 0 to 1,000; decision tree classifiers will work with them just the same.
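To convince yourself of that claim, here's a quick check you can run (an illustrative aside, not one of the notebook's numbered cells): rescale the features and confirm the accuracy doesn't change.

from sklearn.tree import DecisionTreeClassifier

# Train one tree on the original features and one on a rescaled copy
original_tree = DecisionTreeClassifier(random_state=1)
original_tree.fit(training_inputs, training_classes)

scaled_tree = DecisionTreeClassifier(random_state=1)
scaled_tree.fit(training_inputs * 1000.0, training_classes)

# The split thresholds simply scale with the data, so the two accuracies match
print(original_tree.score(testing_inputs, testing_classes))
print(scaled_tree.score(testing_inputs * 1000.0, testing_classes))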
There are several parameters that we can tune for decision tree classifiers, but for now let's use a basic decision tree
classifier.
In [26]:
from sklearn.tree import DecisionTreeClassifier

decision_tree_classifier = DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs, training_classes)

decision_tree_classifier.score(testing_inputs, testing_classes)
Out[26]:
0.97368421052631582
Heck yeah! Our model achieves 97% classification accuracy without much effort.
However, there's a catch: Depending on how our training and testing set was sampled, our model can achieve anywhere
from 80% to 100% accuracy:
In [27]:
model_accuracies = []

for repetition in range(1000):
    (training_inputs,
     testing_inputs,
     training_classes,
     testing_classes) = train_test_split(all_inputs, all_classes, train_size=0.75)

    decision_tree_classifier = DecisionTreeClassifier()
    decision_tree_classifier.fit(training_inputs, training_classes)
    classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
    model_accuracies.append(classifier_accuracy)

sb.distplot(model_accuracies)
Out[27]:
<matplotlib.axes._subplots.AxesSubplot at 0x11164c128>
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This
phenomenon is known as overfitting: The model is learning to classify the training set so well that it doesn't generalize and
perform well on data it hasn't seen before.
Cross-validation
[ go back to the top ]
This problem is the main reason that most data scientists perform k-fold cross-validation on their models: Split the original
data set into k subsets, use one of the subsets as the testing set, and the rest of the subsets are used as the training set.
This process is then repeated k times such that each subset is used as the testing set exactly once.
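To make that procedure concrete, here is a minimal sketch of the loop written out by hand, using the same old-style sklearn.cross_validation API as the rest of this notebook (scikit-learn's cross_val_score, used below, does all of this for us):

import numpy as np
from sklearn.cross_validation import KFold
from sklearn.tree import DecisionTreeClassifier

fold_accuracies = []

# Each fold takes one turn as the testing set; the other k-1 folds form the training set
for train_indices, test_indices in KFold(len(all_classes), n_folds=10):
    classifier = DecisionTreeClassifier()
    classifier.fit(all_inputs[train_indices], all_classes[train_indices])
    fold_accuracies.append(classifier.score(all_inputs[test_indices], all_classes[test_indices]))

print(np.mean(fold_accuracies))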
10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data
set looks something like this:
(each square is an entry in our data set)
In [28]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cross_validation import StratifiedKFold

# One boolean row per fold, marking which entries land in that fold's testing set
masks = [np.in1d(np.arange(len(all_classes)), test_indices)
         for _, test_indices in StratifiedKFold(all_classes, n_folds=10)]

plt.figure(figsize=(15, 15))
plt.imshow(masks, interpolation='none')
plt.ylabel('Fold')
plt.xlabel('Row #')
You'll notice that we used Stratified k-fold cross-validation in the code above. Stratified k-fold keeps the class proportions
the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have
100% Iris setosa entries in one of the folds.)
We can perform 10-fold cross-validation on our model with the following code:
In [29]:
from sklearn.cross_validation import cross_val_score

decision_tree_classifier = DecisionTreeClassifier()

cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_classes, cv=10)
sb.distplot(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
Out[29]:
<matplotlib.text.Text at 0x1138e2278>
Now we have a much more consistent rating of our classifier's general classification accuracy.
Parameter tuning
[ go back to the top ]
Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to
the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:
In [30]:
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)

cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_classes, cv=10)
sb.distplot(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
Out[30]:
<matplotlib.text.Text at 0x113a02b70>
To search the space of parameter combinations systematically, we can run a grid search with cross-validation:
In [31]:
from sklearn.grid_search import GridSearchCV

decision_tree_classifier = DecisionTreeClassifier()

# Candidate values for the two parameters we want to tune
parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
                  'max_features': [1, 2, 3, 4]}

cross_validation = StratifiedKFold(all_classes, n_folds=10)

grid_search = GridSearchCV(decision_tree_classifier,
                           param_grid=parameter_grid,
                           cv=cross_validation)
grid_search.fit(all_inputs, all_classes)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
Best score: 0.959731543624161
Best parameters: {'max_features': 4, 'max_depth': 3}
Now let's visualize the grid search to see how the parameters interact.
In [32]:
grid_visualization = []

# Collect the mean cross-validation score for each parameter combination tried
for grid_pair in grid_search.grid_scores_:
    grid_visualization.append(grid_pair.mean_validation_score)

grid_visualization = np.array(grid_visualization)
grid_visualization.shape = (5, 4)
sb.heatmap(grid_visualization, cmap='Blues')
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(5) + 0.5, grid_search.param_grid['max_depth'][::-1])
plt.xlabel('max_features')
plt.ylabel('max_depth')
Out[32]:
<matplotlib.text.Text at 0x113a80dd8>
Now we have a better sense of the parameter space: We know that we need a max_depth of at least 2 to allow the decision
tree to make more than a one-off decision.
max_features doesn't really seem to make a big difference here as long as we have 2 of them, which makes sense since
our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily
separable from the rest based on a single feature.)
Let's go ahead and use a broad grid search to find the best settings for a handful of parameters.
In [33]:
parameter_grid = {'criterion': ['gini', 'entropy'],
                  'splitter': ['best', 'random'],
                  'max_depth': [1, 2, 3, 4, 5],
                  'max_features': [1, 2, 3, 4]}

decision_tree_classifier = DecisionTreeClassifier()

grid_search = GridSearchCV(decision_tree_classifier,
                           param_grid=parameter_grid,
                           cv=cross_validation)
grid_search.fit(all_inputs, all_classes)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
Best score: 0.959731543624161
Best parameters: {'max_features': 4, 'max_depth': 3, 'splitter': 'best', 'criterion': 'gini'}
Now we can take the best classifier from the Grid Search and use that:
In [34]:
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier
Out[34]:
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=3,
max_features=4, max_leaf_nodes=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
random_state=None, splitter='best')
We can even visualize the decision tree with GraphViz to see how it's making the classifications:
In [35]:
import sklearn.tree as tree
from sklearn.externals.six import StringIO

# Export the trained tree's structure to a GraphViz .dot file
with open('iris_dtc.dot', 'w') as out_file:
    tree.export_graphviz(decision_tree_classifier, out_file=out_file)
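If you have GraphViz installed, the exported .dot file can then be rendered to an image from the command line (the output filename here is just an example):

dot -Tpng iris_dtc.dot -o iris_dtc.png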
In [37]:
# Cross-validated accuracy scores for our tuned decision tree classifier
rf_scores = cross_val_score(decision_tree_classifier, all_inputs, all_classes, cv=10)

sb.boxplot(rf_scores)
sb.stripplot(rf_scores, jitter=True, color='white')
Out[37]:
<matplotlib.axes._subplots.AxesSubplot at 0x113cd4b38>
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?
We already know from previous projects that Random Forest classifiers usually work better than individual decision trees. A
common problem that decision trees face is that they're prone to overfitting: They complexify to the point that they classify
the training set near-perfectly, but fail to generalize to data they have not seen before.
Random Forest classifiers work around that limitation by creating a whole bunch of decision trees (hence "forest"), each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement), and having the decision trees work together to make a more accurate classification.
Let that be a lesson for us: Even in Machine Learning, we get better results when we work together!
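Before we tune it, here's a minimal sketch of fitting a random forest directly, with the two knobs that implement the idea described above: the number of trees and the number of features each split may consider (the parameter values here are illustrative, not tuned):

from sklearn.ensemble import RandomForestClassifier

# n_estimators: how many decision trees to grow, each on a bootstrapped sample of the training set
# max_features: how many randomly-chosen features each split is allowed to consider
random_forest_classifier = RandomForestClassifier(n_estimators=10, max_features=2)
random_forest_classifier.fit(training_inputs, training_classes)
print(random_forest_classifier.score(testing_inputs, testing_classes))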
In [40]:
from sklearn.ensemble import RandomForestClassifier

random_forest_classifier = RandomForestClassifier()

parameter_grid = {'n_estimators': [5, 10, 25, 50],
                  'criterion': ['gini', 'entropy'],
                  'max_features': [1, 2, 3, 4],
                  'warm_start': [True, False]}

grid_search = GridSearchCV(random_forest_classifier,
                           param_grid=parameter_grid,
                           cv=cross_validation)
grid_search.fit(all_inputs, all_classes)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_
Best score: 0.9731543624161074
Best parameters: {'n_estimators': 5, 'max_features': 3, 'warm_start': True, 'criterion': 'gini'}
Out[40]:
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features=3, max_leaf_nodes=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=5, n_jobs=1,
oob_score=False, random_state=None, verbose=0, warm_start=True)
random_forest_classifier = grid_search.best_estimator_
Out[42]:
<matplotlib.axes._subplots.AxesSubplot at 0x1141bff28>
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there are hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set.
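The comparison plot referenced above isn't reproduced in full here; one way to generate that kind of side-by-side view is to collect cross-validated scores for both tuned classifiers and plot them together (the DataFrame layout and plot calls below are assumptions):

import pandas as pd
import seaborn as sb
from sklearn.cross_validation import cross_val_score

rf_scores = cross_val_score(random_forest_classifier, all_inputs, all_classes, cv=10)
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_classes, cv=10)

# One row per fold per classifier, so Seaborn can group the boxes by classifier
score_comparison = pd.DataFrame({'accuracy': list(rf_scores) + list(dt_scores),
                                 'classifier': ['Random Forest'] * 10 + ['Decision Tree'] * 10})

sb.boxplot(x='classifier', y='accuracy', data=score_comparison)
sb.stripplot(x='classifier', y='accuracy', data=score_comparison, jitter=True, color='white')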
Step 6: Reproducibility
[ go back to the top ]
Ensuring that our work is reproducible is the last and arguably most important step in any analysis. As a rule, we
shouldn't place much weight on a discovery that can't be reproduced. As such, if our analysis isn't reproducible, we
might as well not have done it.
Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it, both in text and code.
Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This
typically goes at the top of our notebooks so our readers know what tools to use.
Sebastian Raschka created a handy notebook tool for this:
In [43]:
%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Installed watermark.py. To use it, type:
%load_ext watermark
In [44]:
%load_ext watermark
In [45]:
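The exact watermark call isn't shown here, but a typical invocation looks like this (the author name and package list are placeholders):

%watermark -a 'Author Name' -d -v -m -p numpy,pandas,scikit-learn,matplotlib,seaborn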
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline.
In [46]:
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score

# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('iris-data-clean.csv')

# Testing our data: Our analysis will stop here if any of these assertions are wrong

# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor',
                           'sepal_length_cm'].min() >= 2.5

# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
                               (iris_data_clean['sepal_width_cm'].isnull()) |
                               (iris_data_clean['petal_length_cm'].isnull()) |
                               (iris_data_clean['petal_width_cm'].isnull())]) == 0

all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
                              'petal_length_cm', 'petal_width_cm']].values
all_classes = iris_data_clean['class'].values

(training_inputs,
 testing_inputs,
 training_classes,
 testing_classes) = train_test_split(all_inputs, all_classes, train_size=0.75)

random_forest_classifier = RandomForestClassifier()
random_forest_classifier.fit(training_inputs, training_classes)

# Show the classifier's predictions for ten testing entries alongside the actual classes
for input_features, prediction, actual in zip(testing_inputs[:10],
                                              random_forest_classifier.predict(testing_inputs[:10]),
                                              testing_classes[:10]):
    print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
[ 4.6  3.6  1.   0.2]    -->    Iris-setosa        (Actual: Iris-setosa)
[ 5.2  2.7  3.9  1.4]    -->    Iris-versicolor    (Actual: Iris-versicolor)
[ 7.1  3.   5.9  2.1]    -->    Iris-virginica     (Actual: Iris-virginica)
[ 6.3  3.3  4.7  1.6]    -->    Iris-versicolor    (Actual: Iris-versicolor)
[ 6.7  3.3  5.7  2.5]    -->    Iris-virginica     (Actual: Iris-virginica)
[ 6.9  3.1  5.4  2.1]    -->    Iris-virginica     (Actual: Iris-virginica)
[ 5.1  3.3  1.7  0.5]    -->    Iris-setosa        (Actual: Iris-setosa)
[ 6.3  2.8  5.1  1.5]    -->    Iris-versicolor    (Actual: Iris-virginica)
[ 5.2  3.4  1.4  0.2]    -->    Iris-setosa        (Actual: Iris-setosa)
[ 6.1  2.6  5.6  1.4]    -->    Iris-virginica     (Actual: Iris-virginica)
There we have it: We have a complete and reproducible Machine Learning pipeline to demo to our head of data. We've met
the success criteria that we set from the beginning (>90% accuracy), and our pipeline is flexible enough to handle new
inputs or flowers when that data set is ready. Not bad for our first week on the job!
Conclusions
[ go back to the top ]
I hope you found this example notebook useful for your own work and learned at least one new trick by reading through it.
If you've spotted any errors or would like to contribute to this notebook, please don't hesitate to get in touch. I can be reached in the following ways:
Email me
Tweet at me
Fork the notebook repository, make the fix/addition yourself, then send over a pull request
Further reading
[ go back to the top ]
This notebook covers a broad variety of topics but skips over many of the specifics. If you're looking to dive deeper into a
particular topic, here's some recommended reading.
Data Science: William Chen compiled a list of free books for newcomers to Data Science, ranging from the basics of R &
Python to Machine Learning to interviews and advice from prominent data scientists.
Machine Learning: /r/MachineLearning has a useful Wiki page containing links to online courses, books, data sets, etc. for
Machine Learning. There's also a curated list of Machine Learning frameworks, libraries, and software sorted by language.
Unit testing: Dive Into Python 3 has a great walkthrough of unit testing in Python, how it works, and how it should be used.
pandas has several tutorials covering its myriad features.
scikit-learn has a bunch of tutorials for those looking to learn Machine Learning in Python. Andreas Mueller's scikit-learn
workshop materials are top-notch and freely available.
matplotlib has many books, videos, and tutorials to teach plotting in Python.
Seaborn has a basic tutorial covering most of the statistical plotting features.