
Notes of Andrew Ng’s Machine Learning —— (7) Overfitting

The Problem of Overfitting

Consider the problem of predicting $y$ from $x \in \mathbb{R}$. The leftmost figure below shows the result of fitting $y=\theta_0+\theta_1x$ to a dataset. We can see that the data doesn't really lie on a straight line, and so the fit is not very good.

Instead, if we had added an extra feature $x^2$, and fit $y=\theta_0+\theta_1x+\theta_2x^2$, then we obtain a slightly better fit to the data (see the middle figure).

Naively, it might seem that the more features we add, the better. However, there is also a danger in adding too many features: the rightmost figure is the result of fitting a $5^{th}$ order polynomial $y = \sum_{j=0}^{5} \theta_j x^j$. We see that even though the fitted curve passes through the data perfectly, we would not expect this to be a very good predictor for unseen examples.

The figure on the left shows an instance of underfitting, in which the data clearly shows structure not captured by the model.

Underfitting, or high bias, is when the form of our hypothesis function h maps poorly to the trend of the data. It is usually caused by a function that is too simple or uses too few features.

The figure on the right is an example of overfitting.

Overfitting, or high variance, is caused by a hypothesis function that makes accurate predictions for examples in the training set, but it does not generalize well to make accurate predictions on new, previously unseen examples. It is usually caused by a complicated function that creates a lot of unnecessary curves and angles unrelated to the data.
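To make this concrete, here is a minimal NumPy sketch (the noisy dataset and the chosen degrees are illustrative assumptions, not the course's actual figures) that fits polynomials of degree 1, 2, and 5 to six points and compares their training errors:

```python
import numpy as np

# Hypothetical noisy data that roughly follows a quadratic trend (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0, 3, 6)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(scale=0.2, size=x.shape)

for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)                      # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)  # error on the training points
    print(f"degree {degree}: training MSE = {train_mse:.6f}")

# The degree-5 polynomial drives the training error to (nearly) zero by passing through
# all six points, yet it is the hypothesis we would trust least on unseen values of x.
```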

This terminology applies to both linear and logistic regression. There are two main options to address the issue of overfitting:

  1. Reduce the number of features:
  • Manually select which features to keep.
  • Use a model selection algorithm (studied later in the course).
  2. Regularization:
  • Keep all the features, but reduce the magnitude of the parameters $\theta_j$.
  • Regularization works well when we have a lot of slightly useful features.

Cost Function

If we have overfitting from our hypothesis function, we can reduce the weight that some of the terms in our function carry by increasing their cost.

Say we wanted to make the following function more quadratic:


$$\theta_0 + \theta_1x + \theta_2x^2 + \theta_3x^3 + \theta_4x^4$$

We’ll want to eliminate the influence of $\theta_3x^3$ and $\theta_4x^4$. Without actually getting rid of these features or changing the form of our hypothesis, we can instead modify our cost function:


$$\min_\theta\ \frac{1}{2m}\sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)^2 + 1000\cdot\theta_3^2 + 1000\cdot\theta_4^2$$

We’ve added two extra terms at the end to inflate the cost of $\theta_3$ and $\theta_4$. Now, in order for the cost function to get close to zero, we will have to reduce the values of $\theta_3$ and $\theta_4$ to near zero. This will in turn greatly reduce the values of $\theta_3x^3$ and $\theta_4x^4$ in our hypothesis function. As a result, we see that the new hypothesis (depicted by the pink curve) looks like a quadratic function but fits the data better due to the extra small terms $\theta_3x^3$ and $\theta_4x^4$.
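As a quick numeric illustration (the data points and parameter values below are made up for this sketch, not taken from the lecture), we can evaluate this modified cost for two candidate parameter vectors and see how heavily the $1000\cdot\theta_3^2$ and $1000\cdot\theta_4^2$ terms punish any appreciable cubic or quartic component:

```python
import numpy as np

def penalized_cost(theta, X, y):
    """Squared-error cost plus the hand-picked 1000*theta_3^2 + 1000*theta_4^2 penalty."""
    m = len(y)
    residual = X @ theta - y
    return (residual @ residual) / (2 * m) + 1000 * theta[3] ** 2 + 1000 * theta[4] ** 2

# Hypothetical design matrix with columns [1, x, x^2, x^3, x^4] for a few sample points.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x, x**2, x**3, x**4])
y = np.array([1.0, 2.5, 3.0, 2.5])

theta_a = np.array([1.0, 2.0, -0.5, 0.3, 0.1])   # small but non-zero theta_3, theta_4
theta_b = np.array([1.0, 2.0, -0.5, 0.0, 0.0])   # theta_3 = theta_4 = 0

# theta_a pays 1000 * (0.3**2 + 0.1**2) = 100 in penalty alone, so a minimizer of this
# cost is pushed toward theta_3 ~ theta_4 ~ 0, i.e. an (almost) purely quadratic fit.
print(penalized_cost(theta_a, X, y), penalized_cost(theta_b, X, y))
```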

We could also regularize all of our theta parameters in a single summation as:


$$\min_\theta\ \frac{1}{2m}\sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)^2 + \lambda\sum_{j=1}^n \theta_j^2$$

The $\lambda$, or lambda, is the regularization parameter. It determines how much the costs of our theta parameters are inflated.

Using the above cost function with the extra summation, we can smooth the output of our hypothesis function to reduce overfitting.

Note: if $\lambda$ is chosen to be too large, it may smooth out the function too much and cause underfitting.
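As a rough Python sketch (not the course's Octave code), the regularized linear regression cost can be computed as below. Note that the penalty here is scaled as $\frac{\lambda}{2m}\sum_{j=1}^n\theta_j^2$ so that its derivative yields the $\frac{\lambda}{m}\theta_j$ term used in the gradient descent update of the next section, and $\theta_0$ is excluded from the penalty:

```python
import numpy as np

def regularized_cost(theta, X, y, lambda_):
    """Mean squared error cost with an L2 penalty on theta[1:]; theta_0 is not penalized."""
    m = len(y)
    residual = X @ theta - y                      # h_theta(x^(i)) - y^(i) for every example
    data_cost = (residual @ residual) / (2 * m)   # (1/2m) * sum of squared errors
    penalty = (lambda_ / (2 * m)) * np.sum(theta[1:] ** 2)
    return data_cost + penalty
```

The larger `lambda_` is, the more any sizable parameter values inflate the cost, which is exactly the smoothing effect described above.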

Regularized Linear Regression

We can apply regularization to both linear regression and logistic regression.

Gradient Descent

We will modify our gradient descent function to separate out $\theta_0$ from the rest of the parameters, because we do not want to penalize $\theta_0$.


$$\begin{array}{l} Repeat\quad\{\\ \qquad \theta_0:=\theta_0-\alpha\frac{1}{m}\sum_{i=1}^m\big(h_\theta(x^{(i)})-y^{(i)}\big)x_0^{(i)}\\ \qquad \theta_j:=\theta_j-\alpha\Big[\Big(\frac{1}{m}\sum_{i=1}^m\big(h_\theta(x^{(i)})-y^{(i)}\big)x_j^{(i)}\Big)+\frac{\lambda}{m}\theta_j\Big]\qquad j\in\{1,2,\cdots,n\}\\ \} \end{array}$$

The term $\frac{\lambda}{m}\theta_j$ performs our regularization. With some manipulation, our update rule can also be represented as:


$$\theta_j:=\theta_j\left(1-\alpha\frac{\lambda}{m}\right)-\alpha\frac{1}{m}\sum_{i=1}^{m}\Big(h_\theta(x^{(i)})-y^{(i)}\Big)x_j^{(i)}$$

The first term in the above equation, $\left(1-\alpha\frac{\lambda}{m}\right)$, will always be less than $1$. Intuitively, we can see it as reducing the value of $\theta_j$ by some amount on every update. Notice that the second term is now exactly the same as it was before.
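A minimal vectorized sketch of these updates, assuming a design matrix `X` whose first column is all ones so that column $0$ corresponds to $\theta_0$ (the function name and signature are my own):

```python
import numpy as np

def gradient_descent_step(theta, X, y, alpha, lambda_):
    """One regularized gradient descent step; theta_0 (index 0) is not shrunk."""
    m = len(y)
    error = X @ theta - y                       # h_theta(x^(i)) - y^(i) for all examples
    gradient = (X.T @ error) / m                # (1/m) * sum_i error_i * x_j^(i)
    gradient[1:] += (lambda_ / m) * theta[1:]   # add (lambda/m) * theta_j for j >= 1
    return theta - alpha * gradient

# Equivalently, theta[1:] is first scaled by (1 - alpha * lambda_ / m) and then moved
# along the unregularized gradient, matching the rearranged update rule shown above.
```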

Normal Equation

Now let’s approach regularization using the alternate method of the non-iterative normal equation.

To add in regularization, the equation is the same as our original, except that we add another term inside the parentheses:


$$\begin{array}{l} \theta=\left(X^TX+\lambda \cdot L\right)^{-1}X^Ty\\ \textrm{where}\quad L=\left[\begin{array}{ccccc} 0\\ &1\\ &&1\\ &&&\ddots\\ &&&&1 \end{array}\right] \end{array}$$

$L$ is a matrix with $0$ at the top left and $1$'s down the diagonal, with $0$'s everywhere else. It should have dimension $(n+1)\times(n+1)$. Intuitively, this is the identity matrix (though we are not including $x_0$) multiplied by a single real number $\lambda$.

Recall that if $m < n$, then $X^TX$ is non-invertible. However, when we add the term $\lambda\cdot L$, then $X^TX + \lambda\cdot L$ becomes invertible.
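A small NumPy sketch of this closed-form solution (the function and variable names are illustrative):

```python
import numpy as np

def normal_equation_regularized(X, y, lambda_):
    """Closed-form theta = (X^T X + lambda * L)^(-1) X^T y with L = diag(0, 1, ..., 1)."""
    n_plus_1 = X.shape[1]        # number of parameters, including theta_0
    L = np.eye(n_plus_1)
    L[0, 0] = 0.0                # do not regularize theta_0
    return np.linalg.solve(X.T @ X + lambda_ * L, X.T @ y)
```

Here `np.linalg.solve` is used instead of explicitly forming the matrix inverse; it computes the same $\theta$ but is the more numerically stable choice.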

Regularized Logistic Regression

We can regularize logistic regression in a similar way that we regularize linear regression. As a result, we can avoid overfitting. The following image shows how the regularized function, displayed by the pink line, is less likely to overfit than the non-regularized function represented by the blue line:

Recall that our cost function for logistic regression was:


$$J(\theta) = -\frac{1}{m} \sum_{i=1}^m \Big[ y^{(i)}\log\big(h_\theta(x^{(i)})\big) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big) \Big]$$

We can regularize this equation by adding a term to the end:


$$J(\theta) = -\frac{1}{m} \sum_{i=1}^m \Big[ y^{(i)}\log\big(h_\theta(x^{(i)})\big) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big) \Big] + \frac{\lambda}{2m}\sum_{j=1}^n \theta_j^2$$

The second sum, $\sum_{j=1}^n \theta_j^2$, explicitly excludes the bias term $\theta_0$. That is, the $\theta$ vector is indexed from $0$ to $n$ (holding $n+1$ values, $\theta_0$ through $\theta_n$), and this sum skips $\theta_0$ by running from $1$ to $n$. Thus, when computing the updates, we should continuously apply the two following equations:

$$\theta_0:=\theta_0-\alpha\frac{1}{m}\sum_{i=1}^m\big(h_\theta(x^{(i)})-y^{(i)}\big)x_0^{(i)}$$

$$\theta_j:=\theta_j-\alpha\Big[\Big(\frac{1}{m}\sum_{i=1}^m\big(h_\theta(x^{(i)})-y^{(i)}\big)x_j^{(i)}\Big)+\frac{\lambda}{m}\theta_j\Big]\qquad j\in\{1,2,\cdots,n\}$$

These have the same form as the regularized linear regression updates above, except that $h_\theta(x)$ is now the sigmoid hypothesis $\frac{1}{1+e^{-\theta^Tx}}$.
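Finally, a minimal Python sketch of the regularized logistic regression cost and gradient (illustrative code using the sigmoid hypothesis, not the course's Octave implementation):

```python
import numpy as np

def sigmoid(z):
    """Sigmoid hypothesis g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost_and_gradient(theta, X, y, lambda_):
    """Regularized cross-entropy cost and its gradient; theta_0 is left unpenalized."""
    m = len(y)
    h = sigmoid(X @ theta)                                  # h_theta(x^(i)) for all examples
    cost = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m   # unregularized cost
    cost += (lambda_ / (2 * m)) * np.sum(theta[1:] ** 2)    # L2 penalty, skipping theta_0
    gradient = (X.T @ (h - y)) / m
    gradient[1:] += (lambda_ / m) * theta[1:]
    return cost, gradient
```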