Greetings, my blog readers!
In this post I will tackle the link between the cost function and the maximum likelihood hypothesis. The idea for this post came to me when I was reading Python Machine Learning by Sebastian Raschka, specifically the chapter on learning the weights of a logistic regression classifier. I like this book and strongly recommend it to anyone interested in the field. It has solid Python coding examples, many practical insights, and good coverage of what scikit-learn has to offer.
So, in chapter 3, in the section on Learning the weights of the logistic cost function, the author seamlessly moves from the need to minimize the sum-squared-error cost function to maximizing the (log) likelihood. I think that it is very important to have a clear, intuitive understanding of why the two are equivalent in the case of regression. So, let's dive in!
Sum of Squared Errors
Using notation from [2], if $x^{(i)}$ is a training input variable, then $h(x^{(i)})$ is the target class predicted by the learning hypothesis $h$. It is clear that we want to minimize the distance between the predicted and the actual class value $y^{(i)}$, thus the sum-squared-error is defined as

$J(w) = \frac{1}{2}\sum_{i}\left(y^{(i)} - h(x^{(i)})\right)^{2}$

The 1/2 in front of the sum is added to simplify the first derivative, which is taken to find the coefficients of the regression when performing gradient descent. But one could easily define the cost function without the 1/2 term.
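To make the formula concrete, here is a minimal NumPy sketch (my own toy illustration, not code from [2]; the data and variable names are made up) that evaluates the sum-squared-error cost of a candidate weight vector for a linear hypothesis:

```python
import numpy as np

def sse_cost(w, X, y):
    """Sum-squared-error cost: J(w) = 1/2 * sum((y - X @ w) ** 2)."""
    errors = y - X @ w           # actual minus predicted values
    return 0.5 * np.sum(errors ** 2)

# Toy data: 3 observations, a bias column of ones plus one feature
X = np.array([[1.0, 0.5],
              [1.0, 1.5],
              [1.0, 2.5]])
y = np.array([1.0, 2.0, 3.0])

w = np.array([0.0, 1.0])         # one candidate hypothesis
print(sse_cost(w, X, y))         # 0.375 for this particular w
```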
The sum-squared error is a cost function. We want to find a classifier (alternatively, a learning hypothesis) that minimizes this cost. Let's call it $h_{SSE}$. This is a very intuitive optimization problem describing what we are after. It is like asking TomTom to find the shortest route from London to Manchester to save money on fuel. No problem! Done. A gradient descent (batch or stochastic) will produce a vector of weights $w$, which, when multiplied into the matrix of training input variables $X$, gives the optimal solution. So, the meaning of the equation below should now be, hopefully, clear:

(1)   $h_{SSE} \equiv \underset{h \in H}{\arg\min}\; \frac{1}{2}\sum_{i}\left(y^{(i)} - h(x^{(i)})\right)^{2}$
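To ground the TomTom analogy, below is a minimal batch gradient descent sketch (again my own, assuming a linear hypothesis and a hand-picked learning rate) that iteratively walks the weights towards the minimizer of the cost above:

```python
import numpy as np

def batch_gradient_descent(X, y, eta=0.01, n_iter=1000):
    """Minimize J(w) = 1/2 * sum((y - X @ w) ** 2) by batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        errors = y - X @ w         # residuals under the current weights
        gradient = -X.T @ errors   # dJ/dw; the 1/2 cancels the exponent 2
        w -= eta * gradient        # step against the gradient
    return w

X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]])
y = np.array([1.0, 2.0, 3.0])
print(batch_gradient_descent(X, y))  # approaches [0.5, 1.0] on this toy data
```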
The MAP Hypothesis
The MAP hypothesis is the maximum a posteriori hypothesis. It is the most probable hypothesis given the observed data (note, I am using material from chapter 6.2 of [1] for this section). Out of all the candidate hypotheses in $H$, we would like to find the most probable hypothesis $h_{MAP}$ that fits the training data $D$ well:

(2)   $h_{MAP} \equiv \underset{h \in H}{\arg\max}\; P(h \mid D)$
It should become intuitively obvious that, in the case of a regression model, the most probable hypothesis is also the one that minimizes the sum-squared-error cost. In (2) we have a conditional probability. Using Bayes' theorem, we can further develop it as:

(3)   $h_{MAP} = \underset{h \in H}{\arg\max}\; \dfrac{P(D \mid h)\, P(h)}{P(D)}$

$P(D)$ is the probability of observing the training data. We drop this term because it is a constant and is independent of $h$. $P(h)$ is the prior probability of a hypothesis. If we assume that every hypothesis in $H$ is equally probable, then we can drop the $P(h)$ term from (3) as well (as it also becomes a constant). We are left with the maximum likelihood hypothesis, which is the hypothesis under which the likelihood of observing the training data is maximized:

(4)   $h_{ML} \equiv \underset{h \in H}{\arg\max}\; P(D \mid h)$
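Written out as one chain, the simplification from (2) to (4) (under the equal-prior assumption above) is just:

$h_{MAP} = \underset{h \in H}{\arg\max}\; \dfrac{P(D \mid h)\, P(h)}{P(D)} = \underset{h \in H}{\arg\max}\; P(D \mid h)\, P(h) = \underset{h \in H}{\arg\max}\; P(D \mid h) = h_{ML}$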
How did we go from looking for the most probable hypothesis given the data to the most probable data given the hypothesis? A hypothesis that assigns the highest probability to the observed training data is also the one that is the most accurate about it. Being the most accurate implies having the best fit (if we were to generate new data under the MAP hypothesis, it would resemble the training data more closely than data generated under any other candidate hypothesis). Thus, given the equal-prior assumption, MAP and ML are equivalent here.
Bringing in the Binary Nature of Logistic Regression
Under the assumption of $n$ independent training data points, we can rewrite (4) as a product over all observations:

(5)   $h_{ML} = \underset{h \in H}{\arg\max}\; \prod_{i=1}^{n} P\left(y^{(i)} \mid x^{(i)}; h\right)$
Because logistic regression is a binary classifier, each data point can come either from the positive or from the negative target class. We can use the Bernoulli distribution to model this probability:

(6)   $P\left(y^{(i)} \mid x^{(i)}; h\right) = \left(\phi(z^{(i)})\right)^{y^{(i)}} \left(1 - \phi(z^{(i)})\right)^{1 - y^{(i)}}$

where $\phi(z^{(i)}) = h(x^{(i)})$ is the predicted probability that the $i$-th point belongs to the positive class.
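As a quick numerical illustration (my own sketch, not from [2]): the Bernoulli term in (6) picks out $\phi(z)$ when $y = 1$ and $1 - \phi(z)$ when $y = 0$, and the likelihood in (5) is simply the product of these terms over the training set.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation phi(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def bernoulli_term(y, phi):
    """P(y | x; h) = phi^y * (1 - phi)^(1 - y), element-wise."""
    return phi ** y * (1.0 - phi) ** (1 - y)

phi = sigmoid(np.array([2.0, -1.0, 0.5]))  # predicted class-1 probabilities
y = np.array([1, 0, 1])                    # true binary labels

terms = bernoulli_term(y, phi)
print(terms)           # phi where y == 1, (1 - phi) where y == 0
print(np.prod(terms))  # the likelihood of the whole training set, as in (5)
```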
Let's recollect that in logistic regression the hypothesis is the logistic (sigmoid) function applied to a weighted sum of the predictors and their coefficients: $\phi(z^{(i)})$, where $z^{(i)} = w^{T}x^{(i)}$. A single hypothesis is thus fully described by its set of coefficients $w$, and the objective of learning is to find the coefficients that minimize the cost function. The problem is that plugging the sigmoid into the sum-squared-error cost makes that cost non-convex, with many local minima. In order to be able to use an optimization procedure like gradient descent, we need the cost function to be convex. This is achieved by taking the logarithm, which also conveniently turns the product in (5) into a sum. Taking the log of the likelihood in (5)-(6) and negating it gives the familiar formula for the cost function J, which can also be found in chapter 3 of [2]:

(7)   $J(w) = -\sum_{i=1}^{n}\left[\, y^{(i)} \log\left(\phi(z^{(i)})\right) + \left(1 - y^{(i)}\right) \log\left(1 - \phi(z^{(i)})\right)\right]$
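To close the loop, here is a small sketch (again my own, with made-up data) that evaluates (7) directly and checks that it equals the negative log of the product-of-Bernoullis likelihood from (5)-(6):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(w, X, y):
    """J(w) = -sum[y * log(phi(z)) + (1 - y) * log(1 - phi(z))], z = X @ w."""
    phi = sigmoid(X @ w)
    return -np.sum(y * np.log(phi) + (1 - y) * np.log(1 - phi))

def neg_log_likelihood(w, X, y):
    """Negative log of the product of Bernoulli terms from (5)-(6)."""
    phi = sigmoid(X @ w)
    likelihood = np.prod(phi ** y * (1 - phi) ** (1 - y))
    return -np.log(likelihood)

X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]])  # bias column + one feature
y = np.array([1, 0, 1])
w = np.array([-0.5, 0.4])

print(logistic_cost(w, X, y))       # the cost J(w) from (7)
print(neg_log_likelihood(w, X, y))  # the same value via the product form
```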
Summary
In this post I made an attempt to show you the connection between the commonly used sum-of-squared-errors cost function (which is minimized) and the maximum likelihood hypothesis (which is maximized).
References:
[1] Tom Mitchell, Machine Learning, McGraw-Hill, 1997.*
[2] Sebastian Raschka, Python Machine Learning, Packt Publishing, 2015.
* If there is one book on machine learning that you should read, it should be this one.