A.I, Data and Software Engineering

Common Loss functions and their uses – quick note


Machines learn by means of a loss function, which reflects how well a specific model performs on the given data. If predictions deviate too much from actual results, the loss function yields a very large value. Gradually, with the help of an optimization function, the parameters are adjusted to reduce the prediction error. In this article, we will quickly review some common loss functions and their usage in the domain of machine/deep learning.

Unfortunately, there’s no one-size-fits-all loss function for all machine learning algorithms. Various factors are involved in choosing a loss function for a specific problem, such as the type of machine learning algorithm chosen, the ease of calculating the derivatives and, to some degree, the percentage of outliers in the data set.

Since the two most common problems are classification and regression, loss functions can likewise be sorted into two major categories — classification losses and regression losses.

Classification Losses

In classification, we try to predict the output from a set of finite categorical values, e.g. given a large data set of images of handwritten digits, categorize each into one of the digits 0–9.

[Figure: some classification losses]

Zero-one loss

In statistics and decision theory, a frequently used loss function is the 0-1 loss function:

$$L({\hat {y}},y)=I({\hat {y}}\neq y),\,$$

where I is the indicator function. The function is non-continuous and thus impractical to optimize.
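Despite being impractical to optimize, the 0-1 loss is trivial to evaluate. A minimal sketch in Python (the labels are purely illustrative):

```python
def zero_one_loss(y_hat, y):
    """0-1 loss: the indicator function I(y_hat != y)."""
    return int(y_hat != y)

print(zero_one_loss("cat", "cat"))  # 0 -- correct prediction, no loss
print(zero_one_loss("dog", "cat"))  # 1 -- wrong prediction, unit loss
```

Averaged over a data set, this is simply the misclassification rate.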

Hinge Loss/Multi-class SVM Loss

In simple terms, the score of the correct category should be greater than the score of every incorrect category by some safety margin (usually one); the loss sums the margin violations. Hence hinge loss is used for maximum-margin classification, most notably for support vector machines. Although not differentiable everywhere, it is a convex function, which makes it easy to work with the usual convex optimizers used in the machine learning domain.

Mathematical formulation:

$$SVMloss = \sum\limits_{j \neq y_i} \max(0, s_j - s_{y_i} + 1)$$

Consider an example where we have three training examples and three classes to predict — Dog, cat and horse. Below are the values predicted by our algorithm for each of the classes:

        Img#1   Img#2   Img#3
Dog     -0.39   -4.61    1.03
Cat      1.49    3.28   -2.37
Horse    4.21    1.46   -2.27

Computing hinge losses for all 3 training examples:
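The per-example hinge losses for the table above can be sketched in a few lines of Python. Note that the original example does not state the ground-truth labels, so we assume here that Img#1, Img#2 and Img#3 are labelled Dog, Cat and Horse respectively.

```python
def hinge_loss(scores, true_idx, margin=1.0):
    """Multi-class SVM loss: sum of max(0, s_j - s_true + margin)
    over all incorrect classes j."""
    s_true = scores[true_idx]
    return sum(max(0.0, s - s_true + margin)
               for j, s in enumerate(scores) if j != true_idx)

# Per-image scores in class order [Dog, Cat, Horse] (columns of the table).
scores = [
    [-0.39, 1.49, 4.21],   # Img#1
    [-4.61, 3.28, 1.46],   # Img#2
    [1.03, -2.37, -2.27],  # Img#3
]
labels = [0, 1, 2]  # assumed ground truths: Dog, Cat, Horse

losses = [hinge_loss(s, y) for s, y in zip(scores, labels)]
print(losses)  # approximately [8.48, 0.0, 5.2]
```

Image 2 incurs zero loss because the correct class score (3.28) beats every other score by more than the margin, while image 1 is penalized most because its correct class (Dog) has the lowest score.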

Cross-Entropy Loss/Negative Log-Likelihood

This is the most common setting for classification problems. Cross-entropy loss increases as the predicted probability diverges from the actual label.

Mathematical formulation:

$$CrossEntropyLoss = -(y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i))$$

Notice that when the actual label is 1 (\(y_i = 1\)), the second half of the function disappears, whereas when the actual label is 0 (\(y_i = 0\)) the first half is dropped. In short, we are just taking the negative log of the predicted probability for the ground-truth class. An important consequence is that cross-entropy loss heavily penalizes predictions that are confident but wrong.
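This asymmetric penalty is easy to see numerically. A minimal sketch of binary cross-entropy following the formula above (the probability clipping via `eps` is a standard numerical-stability assumption, not part of the formula):

```python
import math

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """Binary cross-entropy for a single example."""
    y_hat = min(max(y_hat, eps), 1 - eps)  # clip to avoid log(0)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# A confident correct prediction costs little...
print(binary_cross_entropy(1, 0.95))  # ~0.051
# ...while a confident wrong prediction is penalized heavily.
print(binary_cross_entropy(1, 0.05))  # ~2.996
```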

Regression Losses

Regression, on the other hand, deals with predicting a continuous value: for example, given the floor area and the number of rooms, predict the price of a house, which can be any positive real number.

[Figure: some regression losses]

Mean Square Error/Quadratic Loss/L2 Loss

Mathematical formulation:

$$MSE = \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n}$$

As the name suggests, Mean square error is measured as the average of the squared difference between predictions and actual observations. It’s only concerned with the average magnitude of error irrespective of their direction. However, due to squaring, predictions which are far away from actual values are penalized heavily in comparison to less deviated predictions. Plus MSE has nice mathematical properties which make it easier to calculate gradients.
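A direct translation of the formula, using illustrative numbers only:

```python
def mse(y_true, y_pred):
    """Mean square error: average of squared prediction errors."""
    n = len(y_true)
    return sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / n

# Errors of 0.5, 0 and -1 give (0.25 + 0 + 1) / 3 ~= 0.4167;
# note how the largest error dominates after squaring.
print(mse([3.0, 5.0, 2.5], [2.5, 5.0, 3.5]))
```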

Mean Absolute Error/L1 Loss

Mean absolute error, on the other hand, is measured as the average of the absolute differences between predictions and actual observations. Like MSE, it measures the magnitude of error without considering direction. Unlike MSE, the MAE is not differentiable at zero, so optimizing it needs more complicated tools such as linear programming or subgradient methods. On the plus side, MAE is more robust to outliers since it does not square the errors.

Mathematical formulation:

$$MAE = \frac{\sum_{i=1}^{n} |y_i -\hat{y}_i|}{n}$$

Mean Bias Error

This is much less common in the machine learning domain than its counterparts. It is similar to MAE, with the only difference that we don't take absolute values. Clearly there's a need for caution, as positive and negative errors can cancel each other out. Although less accurate in practice, it can reveal whether the model has a positive or a negative bias.

Mathematical formulation:

$$MBE = \frac{\sum_{i=1}^n (y_i - \hat{y}_i)}{n}$$
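MAE and MBE differ only in the absolute value, which a small sketch makes concrete (the data is illustrative only):

```python
def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of errors."""
    n = len(y_true)
    return sum(abs(y - yh) for y, yh in zip(y_true, y_pred)) / n

def mbe(y_true, y_pred):
    """Mean bias error: signed errors, so they may cancel out."""
    n = len(y_true)
    return sum(y - yh for y, yh in zip(y_true, y_pred)) / n

y_true = [3.0, 5.0, 2.5]
y_pred = [2.5, 5.0, 3.5]
print(mae(y_true, y_pred))  # (0.5 + 0 + 1) / 3 = 0.5
print(mbe(y_true, y_pred))  # (0.5 + 0 - 1) / 3 ~= -0.167
```

The negative MBE here indicates the model over-predicts on average, even though the MAE shows a sizeable average error magnitude.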

Wrapping up

As noted earlier, choosing a loss function for a specific problem depends on factors such as the type of machine learning algorithm, the ease of calculating the derivatives and, to some degree, the percentage of outliers in the data set. Nevertheless, you should at least know which loss functions are suitable for a particular problem.
