Analyzing Machine Learning Model Performance Strategies


Evaluating prediction quality is important in any machine learning workflow because it ensures that the model we built works properly and can be trusted. The confusion matrix, precision, recall, and F1 score usually give better intuition about prediction results than accuracy alone.

In this article, we will discuss the confusion matrix, precision, recall, specificity, and F1 score. The goal is to learn different ways to analyze the performance of our machine learning models.

Confusion Matrix to evaluate model performance

A confusion matrix displays a classifier's predictions in a matrix format. It allows us to visualize true and false positives, as well as true and false negatives.

To get the overall accuracy, we subtract the total false positives and false negatives from the total number of tests and divide that by the total number of tests; equivalently, accuracy is the number of true positives plus true negatives divided by the total. We can build a confusion matrix by importing it from the sklearn library. Scikit-learn (sklearn) is a Python library that contains tools for machine learning and statistical modeling, ranging from classification to dimensionality reduction.

A confusion matrix in machine learning is used for evaluating a classification model.

The following lines of code import and apply a confusion matrix, assuming y_test and y_pred have been initialized previously: y_test holds the ground-truth labels from the test data, and y_pred holds the values predicted by the machine learning model.

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
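As a minimal sketch of the accuracy calculation described above (the y_test and y_pred labels here are invented for illustration, not output from a real model), the matrix can be unpacked into its four counts:

```python
from sklearn.metrics import confusion_matrix

# Illustrative ground-truth and predicted labels (placeholders, not real model output).
y_test = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

# For binary labels, ravel() flattens the 2x2 matrix in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

# Overall accuracy: correct predictions (TP + TN) over all predictions.
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.7
```

The same number comes out of sklearn's accuracy_score, but unpacking the matrix makes the subtraction described above explicit.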

Precision of a Machine Learning Model

Precision is the percentage of positive instances out of the total predicted positive instances. The denominator is the total number of positives the model generated, true or not (the sum of true positives and false positives), while the numerator is the number of true positives only. This equation lets us know how often the model is correct when it predicts a positive.

Precision = TP / (TP + FP)

Calculate the precision of an ML classification model using a confusion matrix.
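As a sketch with the same kind of invented labels (placeholders, not real model output), precision can be computed by hand from the confusion matrix or directly with sklearn's precision_score:

```python
from sklearn.metrics import confusion_matrix, precision_score

# Illustrative labels (placeholders).
y_test = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

# Precision = TP / (TP + FP): how often a predicted positive is actually positive.
manual_precision = tp / (tp + fp)
sk_precision = precision_score(y_test, y_pred)
print(manual_precision, sk_precision)  # 0.75 0.75
```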

Recall or True Positive Rate

Recall is the percentage of positive instances captured out of the total actual positive instances. The "total real positive instances" are the ones verified by the test data, so the denominator is the number of actual positives present in the dataset (true positives plus false negatives). Instead of using the total number of positives generated by the model, as in precision, here we use the number of positives known from the verified data. This equation lets us know how many of the actual positives the model captured, and therefore how many it missed as false negatives.

Recall = TP / (TP + FN)

How to calculate the recall of an ML classification model using a confusion matrix.
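A matching sketch for recall, again with invented placeholder labels, using either the unpacked confusion matrix or sklearn's recall_score:

```python
from sklearn.metrics import confusion_matrix, recall_score

# Illustrative labels (placeholders).
y_test = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

# Recall = TP / (TP + FN): the share of actual positives the model captured.
manual_recall = tp / (tp + fn)
sk_recall = recall_score(y_test, y_pred)
print(manual_recall, sk_recall)  # 0.6 0.6
```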

Specificity of Machine Learning Models

Specificity is the percentage of negative instances out of the total actual negative instances. It is the counterpart of the true positive rate above, applied to the negative class. Here the denominator is the number of actual negatives in the dataset (true negatives plus false positives), as verified by the data, and the numerator is the number of true negatives. This equation lets us know how often the model is correct when the true answer is negative: Specificity = TN / (TN + FP).
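scikit-learn has no dedicated specificity function, but for binary labels it can be computed from the confusion matrix, or equivalently as the recall of the negative class. A sketch with the same invented placeholder labels:

```python
from sklearn.metrics import confusion_matrix, recall_score

# Illustrative labels (placeholders).
y_test = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

# Specificity = TN / (TN + FP): how often the model is correct on actual negatives.
specificity = tn / (tn + fp)

# Equivalent view: recall computed with the negative class as the "positive" label.
specificity_alt = recall_score(y_test, y_pred, pos_label=0)
print(specificity)  # 0.8
```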

F1 Score to analyze Machine Learning Models

The F1 score is the harmonic mean of precision and recall. It takes the contribution of both, which means that the higher the F1 score, the better the model. Because the product of precision and recall appears in the numerator, if either one dips too low, the final F1 score decreases dramatically.

A model with a good F1 score has a lopsided ratio of true to false positives as well as a lopsided ratio of true to false negatives. For example, a ratio of 100:1 true positives to false positives plays a role in producing a good F1 score, while a close ratio, say 50:51 true to false positives, will produce a low F1 score. The equation for the F1 score is below.

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

The F1 score is the harmonic mean of precision and recall.
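Putting the pieces together with the same invented placeholder labels, the harmonic-mean formula can be checked against sklearn's f1_score:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Illustrative labels (placeholders).
y_test = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

precision = precision_score(y_test, y_pred)  # 0.75
recall = recall_score(y_test, y_pred)        # 0.6

# F1 = 2 * (Precision * Recall) / (Precision + Recall): the harmonic mean.
manual_f1 = 2 * (precision * recall) / (precision + recall)
sk_f1 = f1_score(y_test, y_pred)
print(round(sk_f1, 4))  # 0.6667
```

Note how a recall of 0.6 drags the F1 score below the arithmetic mean of 0.675, which is exactly the penalizing behavior described above.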

An F1 score is considered perfect when it’s 1, while the model is a total failure when it’s 0. A low F1 score is an indication of both poor precision and poor recall.

What’s Next?

These are just the most important techniques for analyzing machine learning model performance. There are many more ways to analyze model efficiency and accuracy, including ROC (Receiver Operating Characteristic) and PR (Precision-Recall) curves, which are implemented programmatically.
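As a brief sketch of the ROC idea (both the labels and the probability scores below are invented for illustration), sklearn.metrics provides roc_curve and roc_auc_score:

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative ground-truth labels and predicted probabilities (invented for this sketch).
y_test = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_score = [0.9, 0.1, 0.4, 0.8, 0.2, 0.7, 0.95, 0.3, 0.35, 0.15]

# ROC curve: false positive rate vs. true positive rate across score thresholds.
fpr, tpr, thresholds = roc_curve(y_test, y_score)

# Area under the curve summarizes the trade-off in one number (1.0 is perfect).
auc = roc_auc_score(y_test, y_score)
print(round(auc, 2))  # 0.92
```

Unlike the metrics above, the ROC curve works on probability scores rather than hard 0/1 predictions, which is why it can characterize the model across every possible decision threshold at once.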
