1. Linear regression

Linear regression uses the sum-of-squares (squared) loss. This form is generally derived by maximizing the likelihood of the model's conditional probability: under a Gaussian noise assumption, maximizing the likelihood function is equivalent to minimizing the sum of squared errors.
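As a sketch of that derivation (assuming i.i.d. Gaussian noise with variance σ², which is the standard assumption rather than something stated explicitly in this text):

```latex
y_i = w^\top x_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)

L(w) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}
       \exp\!\left(-\frac{(y_i - w^\top x_i)^2}{2\sigma^2}\right)

\log L(w) = \text{const} - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - w^\top x_i)^2
```

Maximizing log L(w) over w is therefore the same as minimizing the sum of squares Σ(yᵢ – wᵀxᵢ)².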

In statistics, the following loss functions are commonly used:

1) 0-1 loss function

L(Y, f(X)) = 1 if Y ≠ f(X), 0 if Y = f(X)

2) Squared loss function

L(Y, f(X)) = (Y – f(X))²

3) Absolute loss function

L(Y, f(X)) = |Y – f(X)|

4) Logarithmic loss function

L(Y, P(Y|X)) = –log P(Y|X)
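The four loss functions above can be written directly in code. This is a minimal sketch (the function names are my own, not from the original text); for the logarithmic loss, the argument is the model's predicted probability of the true label:

```python
import math

def zero_one_loss(y, y_pred):
    # 0-1 loss: 1 when the prediction is wrong, 0 when it is right
    return 1 if y != y_pred else 0

def squared_loss(y, y_pred):
    # Squared loss: (Y - f(X))^2
    return (y - y_pred) ** 2

def absolute_loss(y, y_pred):
    # Absolute loss: |Y - f(X)|
    return abs(y - y_pred)

def log_loss(p_true_label):
    # Logarithmic loss: -log P(Y|X), where p_true_label is the
    # probability the model assigns to the correct label
    return -math.log(p_true_label)
```

Note that the log loss is 0 when the model assigns probability 1 to the true label, and grows without bound as that probability approaches 0.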

The smaller the loss function, the better the model fits the data. The loss function should also be as convex as possible, to make the optimization converge easily.

Linear regression uses the squared loss function; logistic regression uses the logarithmic loss function.
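For linear regression with the squared loss, the one-dimensional case has a well-known closed-form solution (the normal equations). A minimal sketch, with hypothetical names of my choosing:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b, i.e. the (a, b)
    # minimizing the squared loss sum((y - (a*x + b))^2)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b
```

For example, fitting points that lie exactly on y = 2x + 1 recovers a = 2 and b = 1.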

2. Logistic regression

The logistic regression model is a nonlinear model that performs classification through the sigmoid function. But it is essentially built on a linear regression model: apart from the sigmoid mapping, every other step is linear regression. One can say that logistic regression is supported by linear regression theory. However, a linear model alone cannot produce the nonlinear form of the sigmoid, and it is precisely this form that makes 0/1 classification problems easy to handle.
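The structure described above (a linear regression core followed by a sigmoid mapping) can be sketched as follows; the weight w, bias b, and the 0.5 threshold are illustrative assumptions, not values from the text:

```python
import math

def sigmoid(z):
    # Maps any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x, w, b):
    # The "linear regression" part: z = w*x + b,
    # then the sigmoid turns z into a probability
    return sigmoid(w * x + b)

def predict_label(x, w, b, threshold=0.5):
    # 0/1 classification by thresholding the probability
    return 1 if predict_proba(x, w, b) >= threshold else 0
```

Note that sigmoid(0) = 0.5, so the decision boundary is exactly the set of points where the linear part w*x + b equals zero.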