# 4.2. Loss Functions

Recall our assumptions thus far: we assume that there is a single population tip percentage \( \theta^* \). Our model estimates this parameter; we use the variable \( \theta \) to denote our estimate. We would like to use the collected data on tips to determine the value that \( \theta \) should have.

To precisely decide which value of \( \theta \) is best, we define a **loss function**. A loss function is a mathematical function that takes in an estimate \( \theta \) and the points in our dataset \(y_1, y_2, \ldots, y_n\). It outputs a single number, the **loss**, that measures how well \( \theta \) fits our data. In mathematical notation, we want to create the function:

$$ L(\theta, y_1, y_2, \ldots, y_n) $$

By convention, the loss function outputs lower values for preferable values of \( \theta \) and larger values for worse values of \( \theta \). To fit our model, we select the value of \( \theta \) that produces a lower loss than all other choices of \( \theta \)—the \( \theta \) that **minimizes the loss**. We use the notation \( \hat{\theta} \) to denote the value of \( \theta \) that minimizes a specified loss function.
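
In symbols, \( \hat{\theta} \) is the minimizer of the loss. Written with the standard argmin shorthand (a notational aside; the rest of this section does not rely on it):

$$ \hat{\theta} = \underset{\theta}{\operatorname{arg\,min}}\; L(\theta, y_1, y_2, \ldots, y_n) $$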

Consider once again two possible values of \( \theta \): \( \theta = 10 \) and \( \theta = 15 \).

```
sns.distplot(tips['pcttip'], bins=np.arange(30), rug=True)
plt.axvline(x=10, c='darkblue', linestyle='--', label=r'$ \theta = 10$')
plt.axvline(x=15, c='darkgreen', linestyle='--', label=r'$ \theta = 15$')
plt.legend()
plt.xlim(0, 30)
plt.xlabel('Percent Tip Amount')
plt.ylabel('Proportion per Percent');
```

Since \( \theta = 15 \) falls closer to most of the points, our loss function should output a small value for \( \theta = 15 \) and a larger value for \( \theta = 10 \).

Let’s use this intuition to create a loss function.

## 4.2.1. Our First Loss Function: Mean Squared Error

We would like our choice of \( \theta \) to fall close to the points in our dataset. Thus, we can define a loss function that outputs a larger value as \( \theta \) gets further away from the points in the dataset. We start with a simple loss function called the *mean squared error*. Here’s the idea:

1. We select a value of \( \theta \).

2. For each value in our dataset, take the squared difference between the value and \( \theta \): \( (y_i - \theta)^2 \). Squaring the difference is a simple way to convert negative differences into positive ones. We want to do this because if our point is \( y_i = 14 \), then \( \theta = 10 \) and \( \theta = 18 \) are equally far away from the point and are thus equally “bad”.

3. To compute the final loss, take the average of the individual squared differences.

This gives us a final loss function of:

$$ L(\theta, y_1, y_2, \ldots, y_n) = \frac{1}{n} \sum_{i = 1}^{n} (y_i - \theta)^2 $$

We can write a simple Python function to compute the loss:

```
def mse_loss(theta, y_vals):
    return np.mean((y_vals - theta) ** 2)
```

Let’s see how this loss function behaves. Suppose we have a dataset only containing one point, \( y_1 = 14 \). We can try different values of \( \theta \) and see what the loss function outputs for each value.

```
def try_thetas(thetas, y_vals, xlims, loss_fn=mse_loss, figsize=(10, 7), cols=3):
    if not isinstance(y_vals, np.ndarray):
        y_vals = np.array(y_vals)
    rows = int(np.ceil(len(thetas) / cols))
    plt.figure(figsize=figsize)
    for i, theta in enumerate(thetas):
        ax = plt.subplot(rows, cols, i + 1)
        sns.rugplot(y_vals, height=0.1, ax=ax)
        plt.axvline(theta, linestyle='--',
                    label=rf'$ \theta = {theta} $')
        plt.title(f'Loss = {loss_fn(theta, y_vals):.2f}')
        plt.xlim(*xlims)
        plt.yticks([])
        plt.legend()
    plt.tight_layout()

try_thetas(thetas=[11, 12, 13, 14, 15, 16],
           y_vals=[14], xlims=(10, 17))
```

As we hoped, the loss is larger when \( \theta \) is further from our data and smallest when \( \theta \) falls exactly on our data point. Let’s now see how our mean squared error behaves when we have five points instead of one. Our data this time are: \( [11, 12, 15, 17, 18 ] \).

```
try_thetas(thetas=[12, 13, 14, 15, 16, 17],
           y_vals=[11, 12, 15, 17, 18],
           xlims=(10.5, 18.5))
```

Of the values of \( \theta \) we tried, \( \theta = 15 \) has the lowest loss. However, a value of \( \theta \) between 14 and 15 might have an even lower loss than \( \theta = 15 \).
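
We can check this directly with the `mse_loss` function defined above; the value 14.6 below is simply an illustrative guess between 14 and 15.

```
# Compare the loss at theta = 15 with an intermediate guess between 14 and 15.
import numpy as np

y_vals = np.array([11, 12, 15, 17, 18])
mse_loss(15, y_vals), mse_loss(14.6, y_vals)
```

The intermediate value indeed gives a slightly smaller loss (7.44 versus 7.60).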

The mean squared error seems to be doing its job by penalizing values of \( \theta \) that are far away from the center of the data. Let’s now see what the loss function outputs on the original dataset of tip percents. For reference, the original distribution of tip percents is plotted below:

```
sns.distplot(tips['pcttip'], bins=np.arange(30), rug=True)
plt.xlim(0, 30)
plt.xlabel('Percent Tip Amount')
plt.ylabel('Proportion per Percent');
```

Let’s try some values of \( \theta \).

```
try_thetas(thetas=np.arange(14.5, 17.1, 0.5),
           y_vals=tips['pcttip'],
           xlims=(0, 30))
```

It looks like the best value of \( \theta \) that we’ve tried so far is 16.00, slightly above our original guess of 15% tip.

## 4.2.2. A Shorthand

We have defined our first loss function, the mean squared error (MSE). It computes high loss for values of \( \theta \) that are further away from the center of the data. Mathematically, this loss function is defined as:

$$ L(\theta, y_1, y_2, \ldots, y_n) = \frac{1}{n} \sum_{i = 1}^{n} (y_i - \theta)^2 $$

The loss function will compute different losses whenever we change either \( \theta \) or \( y_1, y_2, \ldots, y_n \). We’ve seen this happen when we tried different values of \( \theta \) and when we added new data points (changing \( y_1, y_2, \ldots, y_n \)).

As a shorthand, we can define the vector \( \textbf{y} = [ y_1, y_2, \ldots, y_n ] \). Then, we can write the MSE as:

$$ L(\theta, \textbf{y}) = \frac{1}{n} \sum_{i = 1}^{n} (y_i - \theta)^2 $$
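
In code, the `mse_loss` function we wrote earlier already follows this shorthand: its second argument is the entire vector of data values. For example, using the five-point dataset from above:

```
# The whole dataset is passed as one array, mirroring L(theta, y).
import numpy as np

y = np.array([11, 12, 15, 17, 18])
mse_loss(16, y)
```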

### 4.2.2.1. Minimizing the Loss

So far, we have found the best value of \( \theta \) by simply trying out a bunch of values and then picking the one with the least loss. Although this method works decently well, we can find a better method by using the properties of our loss function.

For the following example, we use a dataset containing five points: \( \textbf{y} = [ 11, 12, 15, 17, 18 ] \).

```
try_thetas(thetas=[12, 13, 14, 15, 16, 17],
           y_vals=[11, 12, 15, 17, 18],
           xlims=(10.5, 18.5))
```

In the plots above, we’ve used integer \( \theta \) values between 12 and 17. When we change \( \theta \), the loss seems to start high (at 14.20), decrease until \( \theta = 15 \), then increase again. We can see that the loss changes as \( \theta \) changes, so let’s make a plot comparing the loss to \( \theta \) for each of the six \( \theta \)s we’ve tried.

```
thetas = np.array([12, 13, 14, 15, 16, 17])
y_vals = np.array([11, 12, 15, 17, 18])
losses = [mse_loss(theta, y_vals) for theta in thetas]
plt.scatter(thetas, losses)
plt.title(r'Loss vs. $ \theta $ when $\bf{y}$$ = [11, 12, 15, 17, 18] $')
plt.xlabel(r'$ \theta $ Values')
plt.ylabel('Loss');
```

The scatter plot shows the downward, then upward trend that we noticed before. We can try more values of \( \theta \) to see a complete curve that shows how the loss changes as \( \theta \) changes.

```
thetas = np.arange(12, 17.1, 0.05)
y_vals = np.array([11, 12, 15, 17, 18])
losses = [mse_loss(theta, y_vals) for theta in thetas]
plt.plot(thetas, losses)
plt.title(r'Loss vs. $ \theta $ when $\bf{y}$$ = [11, 12, 15, 17, 18] $')
plt.xlabel(r'$ \theta $ Values')
plt.ylabel('Loss');
```

The plot above shows that, in fact, \( \theta = 15\) was not the best choice; a \( \theta \) between 14 and 15 would have achieved a lower loss. We can use calculus to find the minimizing value of \( \theta \) exactly. At the minimum loss, the derivative of the loss function with respect to \( \theta \) is 0.

First, we start with our loss function:

$$ L(\theta, \textbf{y}) = \frac{1}{n} \sum_{i = 1}^{n} (y_i - \theta)^2 $$

Next, we plug in our points \( \textbf{y} = [11, 12, 15, 17, 18] \):

$$ L(\theta, \textbf{y}) = \frac{1}{5} \left[ (11 - \theta)^2 + (12 - \theta)^2 + (15 - \theta)^2 + (17 - \theta)^2 + (18 - \theta)^2 \right] $$

To find the value of \( \theta \) that minimizes this function, we compute the derivative with respect to \( \theta \):

$$
\begin{aligned}
\frac{\partial}{\partial \theta} L(\theta, \textbf{y})
&= \frac{1}{5} \left[ -2(11 - \theta) - 2(12 - \theta) - 2(15 - \theta) - 2(17 - \theta) - 2(18 - \theta) \right] \\
&= -\frac{2}{5} \left[ 73 - 5\theta \right]
\end{aligned}
$$

Then, we find the value of \( \theta \) where the derivative is zero:

$$
\begin{aligned}
-\frac{2}{5} \left[ 73 - 5\theta \right] &= 0 \\
73 - 5\theta &= 0 \\
\theta &= \frac{73}{5} = 14.6
\end{aligned}
$$

We’ve found the minimizing \( \theta \), and as expected, it is between 14 and 15. We denote the \( \theta \) that minimizes the loss \( \hat{\theta} \). Thus, for the dataset \( \textbf{y} = [11, 12, 15, 17, 18] \) and the MSE loss function:

$$ \hat{\theta} = 14.6 $$

If we happen to compute the mean of the data values, we notice a curious equivalence:

$$ \text{mean}(\textbf{y}) = \frac{11 + 12 + 15 + 17 + 18}{5} = 14.6 = \hat{\theta} $$
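
As a quick numerical check (a sketch that reuses the `mse_loss` function from above), the mean matches the value we found with calculus, and its loss is smaller than the losses at the neighboring integer values we plotted:

```
# The mean equals the minimizing theta found by calculus (14.6), and its
# loss is lower than the losses at the neighboring integers 14 and 15.
import numpy as np

y_vals = np.array([11, 12, 15, 17, 18])
theta_hat = np.mean(y_vals)
theta_hat, mse_loss(theta_hat, y_vals), mse_loss(14, y_vals), mse_loss(15, y_vals)
```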

## 4.2.3. The Minimizing Value of the Mean Squared Error

As it turns out, the equivalence above is no mere coincidence; the average of the data values *always* produces \( \hat{\theta} \), the \( \theta \) that minimizes the MSE loss.

To show this, we take the derivative of our loss function once more. Instead of plugging in points, we leave the \( y_i \) terms intact to generalize to other datasets.

$$ \frac{\partial}{\partial \theta} L(\theta, \textbf{y}) = \frac{1}{n} \sum_{i = 1}^{n} -2(y_i - \theta) $$

Since we did not substitute in specific values for \( y_i \), this equation can be used with any dataset with any number of points.

Now, we set the derivative equal to zero and solve for \( \theta \) to find the minimizing value of \( \theta \) as before:

$$
\begin{aligned}
\frac{1}{n} \sum_{i = 1}^{n} -2(y_i - \theta) &= 0 \\
\sum_{i = 1}^{n} (y_i - \theta) &= 0 \\
\sum_{i = 1}^{n} y_i - n\theta &= 0 \\
\theta &= \frac{1}{n} \sum_{i = 1}^{n} y_i = \text{mean}(\textbf{y})
\end{aligned}
$$

Lo and behold, we see that no matter what the dataset is, the MSE is minimized by the same quantity: for the mean squared error, \( \hat{\theta} \) is the mean of the dataset values.
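
As a sanity check of this general result (a sketch using a randomly generated dataset, not part of the derivation), we can verify that the MSE at the mean is no larger than the MSE slightly to either side of the mean:

```
# For an arbitrary dataset, the loss at the mean is no larger than the loss
# at the mean shifted slightly left or right.
import numpy as np

rng = np.random.default_rng(42)
y_vals = rng.normal(loc=15, scale=3, size=100)
theta_hat = np.mean(y_vals)
(mse_loss(theta_hat, y_vals) <= mse_loss(theta_hat - 0.1, y_vals),
 mse_loss(theta_hat, y_vals) <= mse_loss(theta_hat + 0.1, y_vals))
```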

## 4.2.4. Back to the Original Dataset

We no longer have to test out different values of \( \theta \) as we did before. We can compute the mean tip percentage in one go:

```
np.mean(tips['pcttip'])
```

```
16.080258172250463
```

```
sns.distplot(tips['pcttip'], bins=np.arange(30), rug=True)
plt.axvline(x=16.08, c='darkblue', linestyle='--', label=r'$ \hat \theta = 16.08$')
plt.legend()
plt.xlim(0, 30)
plt.title('Distribution of tip percent')
plt.xlabel('Percent Tip Amount')
plt.ylabel('Proportion per Percent');
```

## 4.2.5. Summary

We have introduced a **constant model**, a model that outputs the same number for all entries in the dataset.

A **loss function** \( L(\theta, \textbf{y}) \) measures how well a given value of \( \theta \) fits the data. In this section, we introduced the mean squared error loss function and showed that \( \hat{\theta} = \text{mean}(\textbf{y}) \) for the constant model.

The steps we took in this section apply to many modeling scenarios:

1. Select a model.

2. Select a loss function.

3. Fit the model by minimizing the loss.

In this book, all of our modeling techniques expand upon one or more of these steps. We introduce new models (1), new loss functions (2), and new techniques for minimizing loss (3).