A.I, Data and Software Engineering

Latest articles

Sampling: Bagging vs Pasting

One way to get a diverse set of classifiers is to use very different training algorithms, as just discussed. Another approach is to use the same training algorithm for every predictor but to train them on different random subsets of the training set. When sampling is performed with replacement, this method is called bagging (short for bootstrap aggregating). When sampling is performed without...
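As a minimal sketch of the idea (assuming scikit-learn's BaggingClassifier, which the excerpt does not name), the same ensemble can be switched between bagging and pasting with the bootstrap flag:

# Bagging vs pasting with scikit-learn: bootstrap=True samples with
# replacement (bagging); bootstrap=False samples without (pasting).
from sklearn.datasets import make_moons
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each of the 500 trees sees a random subset of 100 training instances.
bag_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=500,
                            max_samples=100, bootstrap=True, random_state=42)
paste_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=500,
                              max_samples=100, bootstrap=False, random_state=42)

for name, clf in [("bagging", bag_clf), ("pasting", paste_clf)]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))

The only difference between the two ensembles is how the random subsets are drawn; everything else (base predictor, number of estimators, subset size) is identical.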

Squared Hinge Loss


The squared hinge loss is a loss function used for “maximum margin” binary classification problems. Mathematically it is defined as ℓ(y, ŷ) = max(0, 1 − y·ŷ)², where ŷ is the predicted value and y is either 1 or -1. Thus, the squared hinge loss is:
* 0 when the true and predicted labels are the same and when ŷ ≥ 1 (which is an indication that the classifier is sure that it’s the correct label);
* quadratically increasing with the...
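As a quick illustration (a NumPy sketch, not tied to any particular library), the loss can be computed directly from labels in {-1, +1} and raw predicted scores:

import numpy as np

def squared_hinge(y_true, y_pred):
    # Squared hinge loss: mean of max(0, 1 - y*ŷ)^2, with y in {-1, +1}
    # and ŷ the raw (unthresholded) prediction.
    return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred) ** 2)

y_true = np.array([1.0, -1.0, 1.0, -1.0])
y_pred = np.array([1.3, -0.8, 0.2, 0.9])   # confident, unsure, weak, wrong
print(squared_hinge(y_true, y_pred))       # only the last three terms contribute

Deep-learning frameworks typically ship this loss built in (e.g. Keras exposes it as squared_hinge), so the hand-rolled version above is only meant to make the formula concrete.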

PetaMinds focuses on the coolest topics in data science, A.I., and programming, making them digestible so that everyone can learn them and create amazing applications in a short time.
