Data Analytics Models in Quantitative Finance and Risk Management

Sharing a KDnuggets article with some examples of how PCA, Monte Carlo simulation and linear regression are used in quantitative finance and risk management:

Source: Data Analytics Models in Quantitative Finance and Risk Management


Going from Sum of Squared Errors to the Maximum Likelihood

Greetings, my blog readers!

In this post I will tackle the link between the cost function and the maximum likelihood hypothesis. The idea for this post came to me when I was reading Python Machine Learning by Sebastian Raschka, specifically the chapter on learning the weights of the logistic regression classifier. I like this book and strongly recommend it to anyone interested in the field. It has solid Python coding examples, many practical insights and good coverage of what scikit-learn has to offer.

So, in chapter 3, in the section on learning the weights of the logistic cost function, the author seamlessly moves from the need to minimize the sum-squared-error cost function to maximizing the (log) likelihood. I think it is very important to have a clear intuitive understanding of why the two are equivalent in the case of regression. So, let’s dive in!

Sum of Squared Errors

Using notation from [2], if z^{(i)} is the net input for the i-th training example, then \phi (z^{(i)}) is the class value predicted by the learning hypothesis \phi. Clearly we want to minimize the distance between the predicted value and the actual class value y^{(i)}, so the sum-squared-error is defined as \frac{1}{2}\sum_{i}(y^{(i)}-\phi (z^{(i)}))^{2}. The 1/2 in front of the sum is added to simplify the first derivative, which is taken to find the regression coefficients when performing gradient descent; one could just as easily define the cost function without the 1/2 term.

The sum-squared error is a cost function. We want to find a classifier (alternatively, a learning hypothesis) that minimizes this cost; let’s call the cost J. This is a very intuitive optimization problem describing what we are after. It is like asking TomTom to find the shortest route from London to Manchester to save money on fuel. No problem! Done. Gradient descent (batch or stochastic) will produce a vector of weights \mathbf{w} which, when multiplied into the matrix of training input variables \mathbf{x}, gives the optimal solution. So the meaning of the equation below should now, hopefully, be clear:

J(w)= \frac{1}{2}\sum_{i}(y^{(i)}-\phi (z^{(i)}))^{2}   (1)
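To make the idea concrete, here is a minimal sketch of (1) and a batch gradient-descent loop for a plain linear hypothesis \phi(z^{(i)}) = z^{(i)} = \mathbf{w}^{T}\mathbf{x}^{(i)}. This is my own illustration, not code from [2]; the names sse_cost, batch_gradient_descent, X, y and eta are mine.

```python
import numpy as np

def sse_cost(w, X, y):
    """Sum-of-squared-errors cost J(w) from (1) for a linear hypothesis phi(z) = z = X @ w."""
    errors = y - X.dot(w)
    return 0.5 * np.sum(errors ** 2)

def batch_gradient_descent(X, y, eta=0.05, n_iter=500):
    """Minimize J(w) by repeatedly stepping against its gradient dJ/dw = -X^T (y - X w)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        errors = y - X.dot(w)
        w += eta * X.T.dot(errors)  # the 1/2 in J cancels the 2 from differentiation
    return w

# Toy usage: recover the weights [2, -1] from noiseless data
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X.dot(np.array([2.0, -1.0]))
w_hat = batch_gradient_descent(X, y)
print(w_hat, sse_cost(w_hat, X, y))  # ~[2., -1.] and a cost near zero
```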

The MAP Hypothesis

The MAP hypothesis is the maximum a posteriori hypothesis: the most probable hypothesis given the observed data (note, I am using material from chapter 6.2 of [1] for this section). Out of all the candidate hypotheses in \Phi, we would like to find the most probable hypothesis \phi that fits the training data Y well.

h_{MAP} \equiv argmax_{\phi \in \Phi} P(\phi | Y)    (2)

It should become intuitively obvious that, in the case of a regression model, the most probable hypothesis is also the one that minimizes J (this holds under the standard assumption that the targets are corrupted by independent, normally distributed noise; see chapter 6 of [1]). In (2) we have a conditional probability. Using Bayes’ theorem, we can develop it further:

h_{MAP} \equiv argmax_{\phi \in \Phi} P(\phi | Y)

= argmax_{\phi \in \Phi} \frac{P(Y|\phi)P(\phi)}{P(Y)}

= argmax_{\phi \in \Phi} P(Y|\phi)P(\phi)   (3)

P(Y) is the probability of observing the training data. We drop this term because it is a constant, independent of \phi. P(\phi) is the prior probability of a hypothesis. If we assume that every hypothesis in \Phi is equally probable, then we can drop the P(\phi) term from (3) as well (it also becomes a constant). We are left with the maximum likelihood hypothesis: the hypothesis under which the likelihood of observing the training data is maximized:

h_{ML} \equiv argmax_{\phi \in \Phi} P(Y|\phi)   (4)

How did we go from looking for a hypothesis given the data to the data given the hypothesis? A hypothesis that makes the observed training data most probable is also the one that is most accurate about it. Being the most accurate implies having the best fit: if we were to generate new data under the MAP hypothesis, it would fit the training data best among all candidate hypotheses. Thus MAP and ML are equivalent here.
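This equivalence is easy to check numerically for regression. The toy sketch below is my own, assuming the targets are a true signal plus independent Gaussian noise (the setting in which maximum likelihood reduces to least squares in [1]); it evaluates both the SSE cost and the Gaussian log-likelihood over a grid of candidate weights and confirms that the same weight minimizes the former and maximizes the latter.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
true_w, sigma = 3.0, 0.2
y = true_w * x + rng.normal(0.0, sigma, size=x.shape)  # targets = signal + Gaussian noise

candidates = np.linspace(2.0, 4.0, 201)  # candidate hypotheses phi_w(x) = w * x
sse = np.array([0.5 * np.sum((y - w * x) ** 2) for w in candidates])
log_lik = np.array([
    np.sum(-0.5 * np.log(2.0 * np.pi * sigma ** 2) - (y - w * x) ** 2 / (2.0 * sigma ** 2))
    for w in candidates
])

# The same candidate weight minimizes the SSE and maximizes the log-likelihood
print(candidates[np.argmin(sse)], candidates[np.argmax(log_lik)])
```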

Bringing in the Binary Nature of Logistic Regression

Under the assumption of n independent training data points, we can rewrite (4) as a product over all observations:

h_{ML} \equiv argmax_{\phi \in \Phi} \prod_{i=1}^{n} P(y^{(i)}|\phi)   (5)

Because logistic regression is a binary classifier, each data point belongs either to the positive or to the negative target class. We can use the Bernoulli distribution to model this probability:

 h_{ML} \equiv argmax_{\phi \in \Phi} \prod_{i=1}^{n}  \left( \phi(z^{(i)}) \right) ^{y^{(i)}} \left( 1-\phi(z^{(i)}) \right) ^{1-y^{(i)}}     (6)
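As a quick illustration (my own, with made-up numbers: phi_z holds hypothetical predicted class-1 probabilities), the likelihood in (6) is just a product of Bernoulli terms, one per training point:

```python
import numpy as np

def bernoulli_likelihood(phi_z, y):
    """Likelihood (6): the product of phi(z)^y * (1 - phi(z))^(1 - y) over all points."""
    return np.prod(phi_z ** y * (1.0 - phi_z) ** (1 - y))

phi_z = np.array([0.9, 0.2, 0.8, 0.6])  # hypothetical predicted probabilities of class 1
y = np.array([1, 0, 1, 1])              # actual class labels
print(bernoulli_likelihood(phi_z, y))   # 0.9 * 0.8 * 0.8 * 0.6 = 0.3456
```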

Let’s recall that in logistic regression the hypothesis is the sigmoid (logistic) function applied to a weighted sum of the predictors and coefficients. A single hypothesis \phi is thus defined by a set of coefficients \mathbf{w}, and the objective of h_{ML} is to find the coefficients that maximize (6), or equivalently minimize the corresponding cost function. The problem is that the sum-squared-error cost applied to the sigmoid output is no longer convex and has many local minima. In order to use a search procedure like gradient descent reliably, we want a convex cost function. Taking the logarithm achieves this, and it also turns the product into a sum, which is much easier to differentiate. Taking the log and negating (6) gives the familiar formula for the cost function J, which can also be found in chapter 3 of [2]:

 J(w) = \sum_{i=1}^{n}  \left[  -y^{(i)} \log(\phi (z^{(i)})) - (1-y^{(i)}) \log(1- \phi(z^{(i)})) \right]     (7)
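Here is a minimal sketch of (7) in code (again my own, not from [2]): logistic_cost evaluates the negated log-likelihood for given weights, and the check at the bottom confirms that it equals minus the log of the product in (6). The data and weights are hypothetical, chosen only for the numerical check.

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) activation phi(z)."""
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(w, X, y):
    """Cost J(w) from (7): the negated Bernoulli log-likelihood."""
    phi_z = sigmoid(X.dot(w))
    return np.sum(-y * np.log(phi_z) - (1 - y) * np.log(1.0 - phi_z))

# Hypothetical data and weights, purely for the numerical check
X = np.array([[0.5, 1.0], [-1.0, 0.2], [1.5, -0.3], [0.1, 0.9]])
y = np.array([1, 0, 1, 1])
w = np.array([0.4, -0.2])

phi_z = sigmoid(X.dot(w))
likelihood = np.prod(phi_z ** y * (1.0 - phi_z) ** (1 - y))
print(logistic_cost(w, X, y), -np.log(likelihood))  # the two numbers agree
```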

Summary

In this post I made an attempt to show the connection between the commonly seen sum-of-squared-errors cost function (which is minimized) and the maximum likelihood hypothesis (which is maximized).

References:

[1] Tom Mitchell, Machine Learning, McGraw-Hill, 1997.*

[2] Sebastian Raschka, Python Machine Learning, Packt Publishing, 2015.

* If there is one book on machine learning that you should read, it should be this one.


How To Make Your Mark As A Woman In Big Data

A great post from KDnuggets.com:

Source: How To Make Your Mark As A Woman In Big Data


It is all in the Optimization Function

At one of the data science meetups I recently attended, the question was posed of when AI would reach the level of human thinking. I was surprised to see people raising their hands for 2030, 2045, 2065, … I did not raise my hand at all, because I don’t believe it will happen. Ever.

Am I naive? Ill-informed? No, I would like to think not. I fully accept that the human brain’s wiring is simple and boils down to fat, water and electricity. We think we are smart, but we really aren’t. A computer program can be written to mimic us, and several successful examples already exist. We are easily fooled by such examples and attribute more intelligence and feeling to them than we ought to. I myself briefly thought that the two robots programmed by a Tufts University research team to “look out” for each other really do care about one another. Their cute voices had something to do with it. The robots are driven by some optimal reward function, and they are programmed to optimize it. The robots don’t care about each other. True universal care is too broad to be “coded up”.

Cat Pictures Please is a great short science fiction story by Naomi Kritzer, written as the inner monologue of an AI system developed to help people out. I like this story for two reasons. Firstly, it is a good example of the most basic difference between humans and AI: we are lazy, irrational and slow, while the AI system is logical, methodical and fast. Secondly, it highlights what is at the core of all AI: a reward or payoff that must be optimized. In Cat Pictures Please, part of the AI’s reward somehow becomes pictures of cats… If a robot works out that charging itself generates the greatest long-term reward, it will end up charging itself most of the time. Note, I am writing ‘works out’, but I really mean converges on. If humans mostly come down to fat, water and electricity, AI comes down to search and an optimization function.

Searching is what evolutionary computing, reinforcement learning, gradient descent/ascent (and thus pretty much all types of regression) and unsupervised learning are about. Minimizing some cost function, or alternatively optimizing a reward function, is what can make an AI system “happy”. Both must be accurately programmed to work. Undeniably, many functions can be automated and perfected by searching for the best reward, so many things are within AI’s reach. I am not saying that if autonomous weapons are unleashed upon us we are going to be just fine. But what I do believe is that AI will never be able to truly think like us. It will never be able to act on luck or a gut feeling. It will never be able to demonstrate such a degree of delusion that its bluffing achieves results, as humans can. AI will never become superstitious, doctrinal or lazy. Even if AI one day becomes a genius, it won’t reach humans’ heights of stupidity. As Albert Einstein once said, the difference between stupidity and genius is that genius has its limits.

 


I am a data scientist

Three years ago this week, I wrote a blog post, “Data science is statistics”. I was fiercely against the term at that time, as I felt that we already had a data science, and it was called Statistic…

Source: I am a data scientist
