How To Analyze the Performance of Machine Learning Models


Evaluating prediction accuracy is important in any machine learning process because it tells us whether the model we created works properly and can be trusted. The confusion matrix, precision, recall, and F1 score usually give better intuition about prediction results than accuracy alone.

This article discusses the terms Confusion Matrix, Precision, Recall, Specificity, and F1 Score. The goal is to learn different ways to understand and analyze machine learning performance with Python tools.

Confusion Matrix to evaluate model performance

A confusion matrix is used to display parameters in a matrix format. It allows us to visualize true and false positives, as well as true and false negatives.

To get the overall accuracy, we subtract the total false positives and false negatives from the total number of tests and divide that by the total number of tests. We can use a confusion matrix by importing it from the sklearn library. Scikit-learn (sklearn) is a Python library with tools for machine learning and statistical modeling, from classification to dimensionality reduction.

A confusion matrix in machine learning is used for evaluating a classification model.

The following lines of code import and implement a confusion matrix, assuming that y_pred and y_test have been initialized previously. In this Python example, y_test holds the true labels from the test set, while y_pred holds the values predicted by the machine learning model.

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
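As a self-contained sketch of the accuracy calculation described above (the labels below are invented stand-ins for real y_test and y_pred values, not data from this article), the overall accuracy can be read directly off the matrix:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels for illustration only.
y_test = [0, 1, 1, 0, 1, 0, 1, 1]  # true labels from the test set
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]  # model predictions

cm = confusion_matrix(y_test, y_pred)
# For binary labels, sklearn orders the flattened cells as TN, FP, FN, TP.
tn, fp, fn, tp = cm.ravel()

# Accuracy: total tests minus the errors, divided by total tests.
total = tn + fp + fn + tp
accuracy = (total - fp - fn) / total
print(accuracy)  # 0.75
```

Unpacking with cm.ravel() is a common idiom; note that the TN/FP/FN/TP ordering only holds for the binary case with labels sorted as [0, 1].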

Precision of a Machine Learning Model

Precision is the percentage of correctly predicted positives out of all predicted positive instances. The denominator is the total number of positives the model generated (true or not), and the numerator is only the true positives. In the equation below, the denominator is therefore the sum of true positives and false positives. This equation lets us know how often the model is correct when it generates positive values.

Precision = True Positives / (True Positives + False Positives)

Calculate the Precision of an ML classification model using a confusion matrix.
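As a quick illustration with made-up labels (the same hypothetical values used throughout, not data from the article), scikit-learn's precision_score implements this equation directly:

```python
from sklearn.metrics import precision_score

# Hypothetical binary labels for illustration only.
y_test = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# Precision = TP / (TP + FP): here 4 true positives and 1 false positive.
precision = precision_score(y_test, y_pred)
print(precision)  # 0.8
```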

Recall or True Positive Rate

Recall is the percentage of actual positive instances that the model correctly identifies. Here, the denominator is the number of real positive instances present in the dataset, i.e. the sum of true positives and false negatives, as verified by the test data. Instead of using the total number of positives generated by the model, as precision does (above), we use the number of positives known from the verified data. This equation tells us how many of the real positives the model found, and therefore how many it missed by labeling them negative.

Recall = True Positives / (True Positives + False Negatives)

How to calculate the Recall of an ML classification model using a confusion matrix.
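Using the same hypothetical labels as before (invented for illustration), scikit-learn's recall_score computes this directly:

```python
from sklearn.metrics import recall_score

# Hypothetical binary labels for illustration only.
y_test = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# Recall = TP / (TP + FN): 4 true positives, 1 missed positive.
recall = recall_score(y_test, y_pred)
print(recall)  # 0.8
```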

Specificity of Machine Learning Models

Specificity is the percentage of actual negative instances that the model correctly identifies as negative. It mirrors the true positive rate (recall) above, but for the negative class. Here, the denominator is the total number of real negatives verified by the data, i.e. the sum of true negatives and false positives, and the numerator is the number of true negatives. This equation lets us see how often the model is correct when the true answer is negative.
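Scikit-learn has no dedicated specificity function, but the metric can be computed from the confusion matrix; the labels below are the same hypothetical values used in the earlier examples:

```python
from sklearn.metrics import confusion_matrix, recall_score

# Hypothetical binary labels for illustration only.
y_test = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

# Specificity = TN / (TN + FP): 2 true negatives, 1 false positive.
specificity = tn / (tn + fp)
print(specificity)  # 0.666...

# Equivalently, specificity is the recall of the negative class.
assert abs(specificity - recall_score(y_test, y_pred, pos_label=0)) < 1e-12
```

The pos_label=0 trick works because "recall of the negative class" and "specificity of the positive class" are the same quantity.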

F1 Score to analyze Machine Learning Models

F1 score is the harmonic mean of precision and recall. It takes the contribution of both, which means that the higher the F1 score, the better the model balances the two. Conversely, because the product of precision and recall sits in the numerator, if either metric dips too low, the final F1 score decreases dramatically.

A model with a good F1 score has a high ratio of true to false positives as well as a high ratio of true to false negatives. For example, if the ratio of true positives to false positives is 100:1, that helps produce a good F1 score. Meanwhile, a close ratio, say 50:51 true to false positives, will produce a low F1 score. The equation for the F1 score is below.

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

F1 Score is the harmonic mean of Precision and Recall.
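Sticking with the same hypothetical labels as in the earlier examples, scikit-learn's f1_score gives the harmonic mean of the precision and recall computed above:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical binary labels for illustration only.
y_test = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
print(f1)  # 0.8

# f1_score matches the harmonic-mean formula 2PR / (P + R).
assert abs(f1 - 2 * p * r / (p + r)) < 1e-12
```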

An F1 score of 1 is considered perfect, while a score of 0 means the model is a total failure. Because the harmonic mean is dragged down by its smaller term, a low F1 score indicates poor precision, poor recall, or both.

What’s Next?

These are just the most important techniques for analyzing machine learning model performance. There are many more ways to analyze model efficiency and accuracy, including ROC (Receiver Operating Characteristic) and PR (Precision-Recall) curves, which are also implemented programmatically.
