5.1 Naïve Bayes

Before diving into Naive Bayes, let's talk a little bit about the two main groups of problems in Machine Learning.

Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.

Supervised learning problems are categorized into “regression” and “classification” problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.

Example 1:

Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.

We could turn this example into a classification problem by instead making our output about whether the house “sells for more or less than the asking price.” Here we are classifying the houses based on price into two discrete categories.

Example 2:

  • Regression - Given a picture of a person, we have to predict their age on the basis of the given picture.

  • Classification - Given a patient with a tumor, we have to predict whether the tumor is malignant or benign. — Coursera Machine Learning Notebook

Unsupervised Learning

Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don’t necessarily know the effect of the variables.

We can derive this structure by clustering the data based on relationships among the variables in the data.

With unsupervised learning there is no feedback based on the prediction results.

Example:

Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.

Non-clustering: The “Cocktail Party Algorithm”, allows you to find structure in a chaotic environment. (i.e. identifying individual voices and music from a mesh of sounds at a cocktail party). — Coursera Machine Learning Notebook

5.1.1 Gaussian - Naive Bayes

In machine learning, naive Bayes classifiers are a family of simple “probabilistic classifiers” based on applying Bayes’ theorem with strong (naive) independence assumptions between the features. — Wikipedia

Naive Bayes was one of the topics covered in Advanced Statistics.
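As a refresher, the formula behind this definition is Bayes' theorem combined with the naive assumption that the features \(x_1, \dots, x_n\) are conditionally independent given the class \(y\):

\[P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y)\]

In the Gaussian variant, each \(P(x_i \mid y)\) is modeled as a normal distribution whose mean and variance are estimated from the training data.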

5.1.2 Scikit Learn

In this lesson we are going to use the Scikit-Learn package to perform Gaussian Naive Bayes classification.

# Importing the Gaussian Naive Bayes class.
from sklearn.naive_bayes import GaussianNB

Creating the classifier.

# Creating the Classifier.
clf = GaussianNB()

.fit()

Bear in mind that the terms fitting and training will be used interchangeably throughout the course.

The .fit() method will be used to fit/train the classifier based on two inputs:

  • X: Coordinates/features of the observations to be classified;
  • Y: Labels of the already classified outputs.

Recall that this is a supervised algorithm, which means we already have some labeled results (whatever their origin), and what we are aiming for is a generalization of this classification: the algorithm calculates its own coefficients based on the training data.

An example of X and Y from the Scikit-Learn website:

### Inputs.
import numpy as np

# These are the coordinates (features) of each point.
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])

# This is the label (class) of each point.
Y = np.array([1, 1, 1, 2, 2, 2])

Plotting the X array using the Y vector as a categorical variable to label each point of X.

c5_l2_01.png

Figure 1 - X and Y plotted.
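A minimal sketch of how a plot like Figure 1 could be produced (assumption: a matplotlib scatter plot coloring each point of X by its label in Y; the axis labels are my own):

# Possible way to reproduce Figure 1 (assumed, not from the original lesson).
import numpy as np
import matplotlib.pyplot as plt

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
Y = np.array([1, 1, 1, 2, 2, 2])

plt.scatter(X[:, 0], X[:, 1], c=Y)  # color each point by its class label
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()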

Now, I want to train my clf (classifier) on X and Y so that it can classify any new point.

# fit the classifier on the training features and labels
clf.fit(X, Y)

What is the classification for (-0.8, -1)?

Let's take a look at where this point is.

c5_l2_02.png

Figure 2 - New point in green.

Using the .predict() method it is possible to predict the classification of this point.

clf.predict([[-0.8, -1]])

The result is class 1, as we expected it to be.
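For reference, .predict() returns an array with the predicted class. The .predict_proba() call below is not part of the original lesson, but it is a standard GaussianNB method and shows the estimated class probabilities for the same point:

# Predicted class for the new point; expected output: array([1]).
print(clf.predict([[-0.8, -1]]))

# Estimated probability of each class for the same point (illustration only).
print(clf.predict_proba([[-0.8, -1]]))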

Bear in mind that we generally use two datasets:

  • Training dataset: Used to train the model;
  • Test dataset: Used to test/evaluate the model.

The training and test datasets are completely different; this separation is necessary to avoid overfitting. Recall that you will use the test dataset to calculate the accuracy of your model.
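A minimal sketch of how such a split could be done with Scikit-Learn's train_test_split (the 70/30 proportion and the random_state value are assumptions, not part of the lesson):

# Splitting the toy data into training and test sets (assumed 70/30 split).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
Y = np.array([1, 1, 1, 2, 2, 2])

X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.3, random_state=42)

# Train on the training set only; keep the test set for evaluation.
clf = GaussianNB()
clf.fit(X_train, y_train)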

5.1.2.1 accuracy_score()

This method is used to calculate the model accuracy.

accuracy_score(y_true, y_pred)

Where:

  • y_true: True values;
  • y_pred: Values predicted by the model, to be checked against y_true.
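A minimal usage sketch, continuing the split from the previous section (accuracy_score is imported from sklearn.metrics):

# accuracy_score lives in sklearn.metrics.
from sklearn.metrics import accuracy_score

# Predict on the test set and compare against the true labels.
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))  # fraction of correct predictions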

5.1.3 Text Learning

This is an application of the same Scikit-Learn package, but this time the problem is a bit more complicated.

Figure 3 shows, in a very simplified way, the probability of each word given the email writer.

c5_l2_03.png

Figure 3 - Probability diagram of Chris and Sara.

Suppose an email like this one:

\[\text{Love life!} \tag{1}\]

Which person is more likely to have written this email?

Assume the probability for Chris and for Sara is equal to 50%:

  • \(P(Chris) = P(Sara) = 0.5\)

These are the a priori probabilities.

Based on the probabilities in Figure 3, let’s calculate the joint probabilities.

  • \(P_{chris,\text{Love life}} = 0.1 \cdot 0.1 \cdot 0.5 = 0.005\)
  • \(P_{sara,\text{Love life}} = 0.5 \cdot 0.3 \cdot 0.5 = 0.075\)

A new email like this:

\[\text{Life deal} \tag{2}\]

  • \(P_{chris,\text{Life deal}} = 0.1 \cdot 0.8 \cdot 0.5 = 0.04\)
  • \(P_{sara,\text{Life deal}} = 0.3 \cdot 0.2 \cdot 0.5 = 0.03\)

Based on the joint probabilities, we can calculate the normalizing constant (the total probability of the evidence).

  • For \(\text{Love life!}\);

\[P(\text{Love life}) = P_{chris,\text{Love life}} + P_{sara,\text{Love life}} = 0.005 + 0.075 = 0.080\]

  • For \(\text{Life deal}\);

\[P(\text{Life deal}) = P_{chris,\text{Life deal}} + P_{sara,\text{Life deal}} = 0.04 + 0.03 = 0.07\]

Finally, using the normalizing constant and the joint probabilities, we can calculate the a posteriori probabilities.

  • For \(\text{Love life!}\);

Using \(P(\text{Love life})\) to normalize \(P_{chris,\text{Love life}}\) and \(P_{sara,\text{Love life}}\):

\[P(\text{Chris} \mid \text{"Love life"}) = \frac{P_{chris,\text{Love life}}}{P(\text{Love life})} = \frac{0.005}{0.080} = 0.0625\]

\[P(\text{Sara} \mid \text{"Love life"}) = \frac{P_{sara,\text{Love life}}}{P(\text{Love life})} = \frac{0.075}{0.080} = 0.9375\]

  • For \(\text{Life deal}\);

Using \(P(\text{Life deal})\) to normalize \(P_{chris,\text{Life deal}}\) and \(P_{sara,\text{Life deal}}\):

\[P(\text{Chris} \mid \text{"Life deal"}) = \frac{P_{chris,\text{Life deal}}}{P(\text{Life deal})} = \frac{0.04}{0.07} \approx 0.5714\]

\[P(\text{Sara} \mid \text{"Life deal"}) = \frac{P_{sara,\text{Life deal}}}{P(\text{Life deal})} = \frac{0.03}{0.07} \approx 0.4286\]
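A small Python sketch that reproduces these calculations (the word probabilities are read off Figure 3; the dictionary and function names are my own):

# Word probabilities per writer, read off Figure 3
# (assumption: the emails contain only these words).
from math import prod

p_word = {
    "chris": {"love": 0.1, "deal": 0.8, "life": 0.1},
    "sara":  {"love": 0.5, "deal": 0.2, "life": 0.3},
}
prior = {"chris": 0.5, "sara": 0.5}

def posteriors(words):
    """Return P(writer | words) for each writer using Naive Bayes."""
    joint = {w: prior[w] * prod(p_word[w][word] for word in words)
             for w in p_word}
    evidence = sum(joint.values())  # the normalizing constant
    return {w: joint[w] / evidence for w in joint}

print(posteriors(["love", "life"]))  # {'chris': 0.0625, 'sara': 0.9375}
print(posteriors(["life", "deal"]))  # {'chris': 0.5714..., 'sara': 0.4285...}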

 
