
Chapter 14. Simple Linear Regression

Art, like morality, consists in drawing the line somewhere.

G. K. Chesterton

In Chapter 5, we used the correlation function to measure the strength of the linear relationship between two variables. For most applications, knowing that such a linear relationship exists isn’t enough. We’ll want to be able to understand the nature of the relationship. This is where we’ll use simple linear regression.

The Model

Recall that we were investigating the relationship between a DataSciencester user’s number of friends and the amount of time he spent on the site each day. Let’s assume that you’ve convinced yourself that having more friends causes people to spend more time on the site, rather than one of the alternative explanations we discussed.

The VP of Engagement asks you to build a model describing this relationship. Since you found a pretty strong linear relationship, a natural place to start is a linear model.

In particular, you hypothesize that there are constants $\alpha$ (alpha) and $\beta$ (beta) such that:

$$y_i = \beta x_i + \alpha + \varepsilon_i$$

where $y_i$ is the number of minutes user $i$ spends on the site daily, $x_i$ is the number of friends user $i$ has, and $\varepsilon_i$ is a (hopefully small) error term representing the fact that there are other factors not accounted for by this simple model.

Assuming we’ve determined such an alpha and beta, then we make predictions simply with:

def predict(alpha, beta, x_i):
    return beta * x_i + alpha

How do we choose alpha and beta? Well, any choice of alpha and beta gives us a predicted output for each input x_i. Since we know the actual output y_i we can compute the error for each pair:

def error(alpha, beta, x_i, y_i):
    """the error from predicting beta * x_i + alpha
    when the actual value is y_i"""
    return y_i - predict(alpha, beta, x_i)

What we’d really like to know is the total error over the entire data set. But we don’t want to just add the errors — if the prediction for x_1 is too high and the prediction for x_2 is too low, the errors may just cancel out.

So instead we add up the squared errors:

def sum_of_squared_errors(alpha, beta, x, y):
    return sum(error(alpha, beta, x_i, y_i) ** 2
               for x_i, y_i in zip(x, y))
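
To see these pieces working together, here's a quick sanity check on a tiny made-up dataset (the numbers below are purely illustrative, not the DataSciencester data):

# purely illustrative toy data -- exactly y = 2 * x + 1
x = [1, 2, 3]
y = [3, 5, 7]

sum_of_squared_errors(1, 2, x, y)     # 0, a perfect fit
sum_of_squared_errors(0, 2, x, y)     # 3, a worse choice of alpha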

The least squares solution is to choose the alpha and beta that make sum_of_squared_errors as small as possible.

Using calculus (or tedious algebra), the error-minimizing alpha and beta are given by:

def least_squares_fit(x, y):
    """given training values for x and y,
    find the least-squares values of alpha and beta"""
    beta = correlation(x, y) * standard_deviation(y) / standard_deviation(x)
    alpha = mean(y) - beta * mean(x)
    return alpha, beta
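
Note that least_squares_fit leans on the statistics functions from Chapter 5 (correlation, standard_deviation, and mean). If you don't have that code handy, a minimal sketch with the same names and behavior (a stand-in consistent with how they're used here, not necessarily the Chapter 5 code verbatim) is the following; de_mean is also what total_sum_of_squares uses later in this chapter:

from __future__ import division   # only needed on Python 2
import math

def mean(xs):
    return sum(xs) / len(xs)

def de_mean(xs):
    """translate xs by subtracting its mean (so the result has mean 0)"""
    x_bar = mean(xs)
    return [x - x_bar for x in xs]

def variance(xs):
    """assumes xs has at least two elements"""
    deviations = de_mean(xs)
    return sum(d ** 2 for d in deviations) / (len(xs) - 1)

def standard_deviation(xs):
    return math.sqrt(variance(xs))

def covariance(xs, ys):
    return sum(x * y for x, y in zip(de_mean(xs), de_mean(ys))) / (len(xs) - 1)

def correlation(xs, ys):
    stdev_x = standard_deviation(xs)
    stdev_y = standard_deviation(ys)
    if stdev_x > 0 and stdev_y > 0:
        return covariance(xs, ys) / stdev_x / stdev_y
    else:
        return 0    # if there's no variation, correlation is zero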

Without going through the exact mathematics, let’s think about why this might be a reasonable solution. The choice of alpha simply says that when we see the average value of the independent variable x, we predict the average value of the dependent variable y.

The choice of beta means that when the input value increases by standard_deviation(x), the prediction increases by correlation(x, y) * standard_deviation(y). In the case when x and y are perfectly correlated, a one-standard-deviation increase in x results in a one-standard-deviation-of-y increase in the prediction. When they're perfectly anticorrelated, the increase in x results in a decrease in the prediction. And when the correlation is zero, beta is zero, which means that changes in x don't affect the prediction at all.
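
Equivalently, since correlation is covariance divided by the product of the two standard deviations, this beta is the familiar covariance-over-variance slope:

$$\beta = \mathrm{corr}(x, y)\,\frac{\sigma_y}{\sigma_x} = \frac{\mathrm{cov}(x, y)}{\sigma_x \sigma_y}\cdot\frac{\sigma_y}{\sigma_x} = \frac{\mathrm{cov}(x, y)}{\sigma_x^2}, \qquad \alpha = \bar{y} - \beta\,\bar{x}$$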

It’s easy to apply this to the outlier-free data (num_friends_good and daily_minutes_good) from Chapter 5:

alpha, beta = least_squares_fit(num_friends_good, daily_minutes_good)

This gives values of alpha = 22.95 and beta = 0.903. So our model says that we expect a user with n friends to spend 22.95 + n * 0.903 minutes on the site each day. That is, we predict that a user with no friends on DataSciencester would still spend about 23 minutes a day on the site. And for each additional friend, we expect a user to spend almost a minute more on the site each day.
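
For example, plugging the fitted values back into predict (numbers rounded):

predict(alpha, beta, 0)     # about 22.95 minutes for a user with no friends
predict(alpha, beta, 20)    # about 41 minutes for a user with 20 friends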

In Figure 14-1, we plot the prediction line to get a sense of how well the model fits the observed data.

Figure 14-1. Our simple linear model

Of course, we need a better way to figure out how well we’ve fit the data than staring at the graph. A common measure is the coefficient of determination (or R-squared), which measures the fraction of the total variation in the dependent variable that is captured by the model:

def total_sum_of_squares(y):
    """the total squared variation of y_i's from their mean"""
    return sum(v ** 2 for v in de_mean(y))
 
def r_squared(alpha, beta, x, y):
    """the fraction of variation in y captured by the model, which equals
    1 - the fraction of variation in y not captured by the model"""
 
    return 1.0 - (sum_of_squared_errors(alpha, beta, x, y) /
                  total_sum_of_squares(y))
 
r_squared(alpha, beta, num_friends_good, daily_minutes_good)      # 0.329

Now, we chose the alpha and beta that minimized the sum of the squared prediction errors. One linear model we could have chosen is “always predict mean(y)” (corresponding to alpha = mean(y) and beta = 0), whose sum of squared errors exactly equals its total sum of squares. This means an R-squared of zero, which indicates a model that (obviously, in this case) performs no better than just predicting the mean.

Clearly, the least squares model must be at least as good as that one, which means that the sum of the squared errors is at most the total sum of squares, which means that the R-squared must be at least zero. And the sum of squared errors must be at least 0, which means that the R-squared can be at most 1.
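
Written out, that argument is just:

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad 0 \le \sum_i (y_i - \hat{y}_i)^2 \le \sum_i (y_i - \bar{y})^2 \;\Longrightarrow\; 0 \le R^2 \le 1$$

where $\hat{y}_i = \alpha + \beta x_i$ is the model's prediction and $\bar{y}$ is the mean of the $y_i$.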

The higher the number, the better our model fits the data. Here we calculate an R-squared of 0.329, which tells us that our model is only sort of okay at fitting the data, and that clearly there are other factors at play.

Using Gradient Descent

If we write theta = [alpha, beta], then we can also solve this using gradient descent:

def squared_error(x_i, y_i, theta):
    alpha, beta = theta
    return error(alpha, beta, x_i, y_i) ** 2
 
def squared_error_gradient(x_i, y_i, theta):
    alpha, beta = theta
    return [-2 * error(alpha, beta, x_i, y_i),       # alpha partial derivative
            -2 * error(alpha, beta, x_i, y_i) * x_i] # beta partial derivative
 
import random

# choose random values to start
random.seed(0)
theta = [random.random(), random.random()]

# minimize_stochastic is the stochastic gradient descent routine from Chapter 8
alpha, beta = minimize_stochastic(squared_error,
                                  squared_error_gradient,
                                  num_friends_good,
                                  daily_minutes_good,
                                  theta,
                                  0.0001)   # step size
print(alpha, beta)

Using the same data we get alpha = 22.93, beta = 0.905, which are very close to the exact answers.
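
If you don't have the Chapter 8 code handy, a minimal self-contained sketch of a stochastic gradient minimizer with the same calling convention might look something like this (a simplified stand-in, not the Chapter 8 code verbatim):

import random

def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
    """stochastic gradient descent: take a gradient step for one point at
    a time, shrinking the step size whenever a full pass stops improving"""
    data = list(zip(x, y))
    theta = theta_0                        # current guess
    step_size = alpha_0                    # current step size
    min_theta, min_value = None, float("inf")
    iterations_with_no_improvement = 0

    while iterations_with_no_improvement < 100:
        value = sum(target_fn(x_i, y_i, theta) for x_i, y_i in data)

        if value < min_value:
            # new best -- remember it and reset the step size
            min_theta, min_value = theta, value
            iterations_with_no_improvement = 0
            step_size = alpha_0
        else:
            # no improvement -- shrink the step size and keep going
            iterations_with_no_improvement += 1
            step_size *= 0.9

        # one stochastic pass: step against each point's gradient in random order
        random.shuffle(data)
        for x_i, y_i in data:
            gradient_i = gradient_fn(x_i, y_i, theta)
            theta = [t - step_size * g for t, g in zip(theta, gradient_i)]

    return min_theta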

Maximum Likelihood Estimation

Why choose least squares? One justification involves maximum likelihood estimation.

Imagine that we have a sample of data $v_1, \ldots, v_n$ that comes from a distribution that depends on some unknown parameter $\theta$:

$$p(v_1, \ldots, v_n \mid \theta)$$

If we didn't know $\theta$, we could turn around and think of this quantity as the likelihood of $\theta$ given the sample:

$$L(\theta \mid v_1, \ldots, v_n)$$

Under this approach, the most likely $\theta$ is the value that maximizes this likelihood function; that is, the value that makes the observed data the most probable. In the case of a continuous distribution, in which we have a probability distribution function rather than a probability mass function, we can do the same thing.

Back to regression. One assumption that's often made about the simple regression model is that the regression errors are normally distributed with mean 0 and some (known) standard deviation $\sigma$. If that's the case, then the likelihood based on seeing a pair $(x_i, y_i)$ is:

$$L(\alpha, \beta \mid x_i, y_i, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2}\right)$$

The likelihood based on the entire data set is the product of the individual likelihoods, which is largest precisely when alpha and beta are chosen to minimize the sum of squared errors. That is, in this case (and with these assumptions), minimizing the sum of squared errors is equivalent to maximizing the likelihood of the observed data.
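
To spell out that last step: taking logs (which doesn't change where the maximum is), the log-likelihood of the whole dataset is

$$\log L(\alpha, \beta) = \sum_{i} \left[ -\log\!\left(\sqrt{2\pi}\,\sigma\right) - \frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2} \right] = \text{const} - \frac{1}{2\sigma^2} \sum_{i} (y_i - \alpha - \beta x_i)^2$$

and since $\sigma$ is fixed, making this as large as possible over $\alpha$ and $\beta$ is exactly the same as making $\sum_i (y_i - \alpha - \beta x_i)^2$ as small as possible.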

For Further Exploration

Continue reading about multiple regression in Chapter 15!