Introduction to Correspondence Analysis

In this blog post I will introduce Correspondence Analysis – a visualisation technique for categorical data. All the code has been compiled in my github repository.

Correspondence Analysis (CA) has been around for a very long time. It was first developed in the 1930s and made popular by M. Greenacre in the 1980s. It is an established statistical analysis technique with dedicated symposiums and a substantial body of literature covering theory and applications. In spite of its popularity, I have only recently discovered it, and thought it worthwhile to document the fundamentals on my blog.

What Exactly is Correspondence Analysis?

CA is a visualisation technique that can be applied to categorical data for data exploration. Unlike numerical data, categorical features are harder to analyse and visualise. CA uses a matrix decomposition method, namely the singular value decomposition (SVD), and thus you may see CA being likened to Principal Component Analysis (PCA). However, CA is not, strictly speaking, a PCA for categorical data, mostly because the primary objective of CA is to visualise associations among categorical features.

How does one visualise categorical data? CA is based on the simple concept of a contingency table: a tabulation of frequencies showing how categorical values are distributed across variables. This post will use examples from P. Yelland’s article on CA published in The Mathematica Journal [1]. I will translate his Mathematica code to Python (because Python is awesome). In [1], CA is applied to textual analysis: passages by a few authors are compared by the frequency of the letters they contain. The five authors, their text-sample labels and the letters are defined below:

authors = ["Charles Darwin", "Rene Descartes","Thomas Hobbes", "Mary Shelley", "Mark Twain"]
initials=['CD1','CD2','CD3','RD1','RD2','RD3','TB1','TB2','TB3','MS1','MS2','MS3','MT1','MT2','MT3']
chars=["B", "C", "D", "F", "G", "H", "I", "L", "M", "N","P", "R", "S", "U", "W", "Y"]

The contingency table, built from how often these letters appear in three passages per author, is:

sampleCrosstab = [[34, 37, 44, 27, 19, 39, 74, 44, 27, 61, 12, 65, 69, 22, 14, 21],
                  [18, 33, 47, 24, 14, 38, 66, 41, 36, 72, 15, 62, 63, 31, 12, 18],
                  [32, 43, 36, 12, 21, 51, 75, 33, 23, 60, 24, 68, 85, 18, 13, 14],
                  [13, 31, 55, 29, 15, 62, 74, 43, 28, 73,  8, 59, 54, 32, 19, 20],
                  [ 8, 28, 34, 24, 17, 68, 75, 34, 25, 70, 16, 56, 72, 31, 14, 11],
                  [ 9, 34, 43, 25, 18, 68, 84, 25, 32, 76, 14, 69, 64, 27, 11, 18],
                  [15, 20, 28, 18, 19, 65, 82, 34, 29, 89, 11, 47, 74, 18, 22, 17],
                  [18, 14, 40, 25, 21, 60, 70, 15, 37, 80, 15, 65, 68, 21, 25,  9],
                  [19, 18, 41, 26, 19, 58, 64, 18, 38, 78, 15, 65, 72, 20, 20, 11],
                  [13, 29, 49, 31, 16, 61, 73, 36, 29, 69, 13, 63, 58, 18, 20, 25],
                  [17, 34, 43, 29, 14, 62, 64, 26, 26, 71, 26, 78, 64, 21, 18, 12],
                  [13, 22, 43, 16, 11, 70, 68, 46, 35, 57, 30, 71, 57, 19, 22, 20],
                  [16, 18, 56, 13, 27, 67, 61, 43, 20, 63, 14, 43, 67, 34, 41, 23],
                  [15, 21, 66, 21, 19, 50, 62, 50, 24, 68, 14, 40, 58, 31, 36, 26],
                  [19, 17, 70, 12, 28, 53, 72, 39, 22, 71, 11, 40, 67, 25, 41, 17]]
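Purely as an optional convenience (this DataFrame is not used in the calculations that follow, and the variable name dfCrosstab is my own), the raw counts can be wrapped in a labelled pandas DataFrame with the sample initials as the row index and the letters as columns:

# Optional: a labelled view of the contingency table (samples as rows, letters as columns)
dfCrosstab = pd.DataFrame(data=sampleCrosstab, index=initials, columns=chars)
print(dfCrosstab.head())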

Can you spot any differences in the use of letters between authors in sampleCrosstab? It is almost impossible to do so just by looking at it. Instead, CA relies on the \chi^2 statistic.

Chi-Squared Statistic and Chi-Squared Distances

Pearson’s \chi^2 test of independence can be used to say with reasonable certainty if the distribution of letters differs from one author to another. \chi^2 is defined as:

\chi^2 = \sum_{I}\sum_{J}\frac{(n_{ij}-(\frac{n_{i.}n_{.j}}{n}))^2}{\frac{n_{i.}n_{.j}}{n}} (1)

where n is the grand total of all frequencies, n_{ij} is the letter frequency in row i and column j , and n_{i.} and n_{.j} are the total frequencies in row i and column j respectively. The product of n_{i.} and n_{.j} normalised by n is the expected frequency for cell (i, j) under the independence assumption; let’s call this matrix of expected values independenceModel. The larger \chi^2 is, the more confident we can be that the use of these letters differs by author. We can calculate this statistic in Python as follows:

grandTotal = np.sum(sampleCrosstab)
correspondenceMatrix = np.divide(sampleCrosstab, grandTotal)
rowTotals = np.sum(correspondenceMatrix, axis=1)
columnTotals = np.sum(correspondenceMatrix, axis=0)

# Expected relative frequencies under the independence assumption
independenceModel = np.outer(rowTotals, columnTotals)

# Calculate the chi-squared statistic manually
chiSquaredStatistic = grandTotal*np.sum(np.square(correspondenceMatrix-independenceModel)/independenceModel)
print(chiSquaredStatistic)

# Quick check - compare to the scipy chi-squared test
statistic, prob, dof, ex = chi2_contingency(sampleCrosstab)
print(statistic)
print(np.round(prob, decimals=2))

In the above code correspondenceMatrix holds the frequencies normalised by the grand total. The \chi^2 statistic is 448.50, a value that is very unlikely to be observed under the null hypothesis that the letter frequencies follow the same distribution for every author. Having established this, we can continue with CA, knowing that it should be able to show us some meaningful associations.

For the purposes of CA, the differences between the distributions of letters in the text samples are measured by \chi^2 -distances, which are weighted Euclidean distances between normalised rows (rows whose entries have been divided by their respective row totals). The weights are inversely proportional to the square roots of the column totals. The \chi^2 -distance between row i and row k is defined as:

\chi^2_{distance_{ik}} = \sqrt{\sum_{J}\frac{(p_{ij}/p_{i.} - p_{kj}/p_{k.})^2}{p_{.j}}} (2)

# Pre-calculate rows normalised by their row totals
norm_correspondenceMatrix = np.divide(correspondenceMatrix, rowTotals[:, None])

chiSquaredDistances = np.zeros((correspondenceMatrix.shape[0], correspondenceMatrix.shape[0]))

for row in range(correspondenceMatrix.shape[0]):
    chiSquaredDistances[row] = np.sqrt(np.sum(np.square(norm_correspondenceMatrix
                                                        - norm_correspondenceMatrix[row])/columnTotals, axis=1))

# Save distances to a DataFrame, labelled by text sample
dfchiSquaredDistances = pd.DataFrame(data=np.round(chiSquaredDistances*100).astype(int),
                                     index=initials, columns=initials)

print(dfchiSquaredDistances)

In (2) I switched to the notation p_{ij} , which is simply an entry of correspondenceMatrix (i.e. a letter frequency normalised by the grand total), with p_{i.} and p_{.j} the corresponding row and column totals. dfchiSquaredDistances holds the resulting matrix of pairwise distances between the fifteen text samples.

Chi-Squared Distances In Graphical Form

CA provides a means of representing a table of \chi^2 -distances in graphical form. This is where the similarity with PCA comes in. To calculate such a representation we need to transform the distances into points in a Cartesian coordinate system. This is achieved via a singular value decomposition (SVD) of the matrix of standardised residuals:

\Omega_{ij} = \frac{p_{ij}-\mu_{ij}}{\sqrt{\mu_{ij}}} (3)

where \mu_{ij} = p_{i.}p_{.j} is the corresponding entry of independenceModel.

# Standardised residuals (3) and their singular value decomposition
standardizedResiduals = np.divide((correspondenceMatrix - independenceModel), np.sqrt(independenceModel))

u, s, vh = np.linalg.svd(standardizedResiduals, full_matrices=False)
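As a quick sanity check (not part of the original article): the SVD preserves the sum of squared entries of standardizedResiduals, so the squared singular values multiplied by the grand total should add up to the \chi^2 statistic we computed earlier.

# The squared singular values times the grand total recover the chi-squared statistic
print(grandTotal*np.sum(np.square(s)))
print(chiSquaredStatistic)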

We are after the row scores, which are coordinates of points in a high-dimensional space (14 dimensions in this case, one fewer than the number of text samples). These points are arranged so that the Euclidean distance between two points is equal to the \chi^2 -distance between the two rows to which they correspond. The row scores are defined as:

R = \delta_{r}\cdot U\cdot S (4)

where U is the matrix of left singular vectors and S is the diagonal matrix of singular values from the SVD. \delta_{r} is a diagonal matrix made of the reciprocals of the square roots of the row totals.

deltaR = np.diag(np.divide(1.0, np.sqrt(rowTotals)))

# Row scores as defined in (4); keep only the first two components for plotting
rowScores = np.dot(np.dot(deltaR, u), np.diag(s))

dfFirstTwoComponents = pd.DataFrame(data=rowScores[:, :2], columns=['X', 'Y'], index=initials)

print(dfFirstTwoComponents)

Extracting the first two components gives us the X and Y coordinates for each text sample.

Plotting these as points produces the CA map.
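The original article displays the resulting scatter plot. A minimal matplotlib sketch that reproduces it (matplotlib is an assumed extra dependency; dfFirstTwoComponents comes from the code above) could look like this:

import matplotlib.pyplot as plt

# Scatter plot of the first two row-score components, labelled by text sample
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(dfFirstTwoComponents['X'], dfFirstTwoComponents['Y'])
for label, x, y in zip(dfFirstTwoComponents.index, dfFirstTwoComponents['X'], dfFirstTwoComponents['Y']):
    ax.annotate(label, (x, y), textcoords='offset points', xytext=(4, 4))
ax.axhline(0, color='grey', linewidth=0.5)
ax.axvline(0, color='grey', linewidth=0.5)
ax.set_xlabel('Component 1')
ax.set_ylabel('Component 2')
plt.show()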

The plot clearly shows the letter associations by author. Mark Twain’s and Charles Darwin’s samples stand out as significantly different from the rest.

Source and Reference: [1] P. Yelland, An Introduction to Correspondence Analysis, The Mathematica Journal 12, 2010, Wolfram Media, Inc.

Hidden Technical Debt of Machine Learning – Play Now Pay Later

Last week I was lucky enough to attend the Strata Conference London 2017 for one day. The venue and the event were impressive in scale, attendance and content. The quality of the tutorials and talks was, in general, very good, and I walked away with a few new ideas that I want to share on my blog.

One of the most important lessons from the conference for me came from a reference to the NIPS 2015 paper titled Hidden Technical Debt in Machine Learning Systems, written by Google researchers. The paper is about the long-term maintenance costs introduced by building machine learning (ML) models and systems. The argument is that this cost is hidden because it is not immediately apparent at the point of putting an ML model into production. It is important for data scientists to be aware of the complexity of the models they develop, the impact these models will have on their organisation, and how much they will cost to maintain.

According to the authors, there are three levels of technical complexity that contribute to technical debt in ML: the model itself can be complex and respond non-linearly to a given set of parameters; the model can take input from otherwise disparate systems; and the model’s output or behaviour can be complex and difficult to predict before it is released.

ML Model Complexity

ML models entangle input signals from different systems, making it difficult to avoid the CACE principle: Changing Anything Changes Everything. This principle applies to all aspects of ML, from parameters (think xgboost!) to input data, convergence thresholds and sampling methods. Isolation and servicing of modelling components is one of the proposed solutions.

The Cost of Data Dependencies

Large ML systems have large and complex data dependencies, where data quality and any data assumptions can significantly affect the ML system output. ML system input data can be unstable, meaning it changes qualitatively and quantitatively over time. In some cases, the degree of dependency on one set of data vs. another may change. ML systems are unique in that their data dependencies are usually finer-grained (e.g. the input should not just be an integer, but an integer in a certain range). A lot of thought, and possibly investment, should go into understanding and controlling such dependencies. Check out kensu.io – a start-up I came across at the conference and the creators of Adalog, a product designed purely for this task.

The Feedback Loop and Dealing with Changes

Live ML systems learn in real time and influence their own behaviour. Sometimes it is necessary to choose static parameters, like prediction thresholds, for a model that is trained or parameterised on data that is dynamic in nature, which can leave the previous set of thresholds no longer valid on updated data. The authors highlight that comprehensive monitoring of ML system behaviour is critical for long-term system reliability.

In summary, maintainable ML systems are costly and require a high level of technical competence and foresight among their developers. ML model testing, validation and monitoring should be considered an absolute must in organisations that are eager to reap their full benefits.

(Jan-17) Did You Know That?

A brand new idea for my blog in 2017 is a monthly Did You Know That digest, where I am going to share with you m things (where m<=3) that I have recently learnt and found useful. I am going to keep these digests short and simple, so as not to overwhelm you with verbiage and unnecessary detail. This month’s top 3 Did You Know That? items are:

  • scikit-learn SGDClassifier – one learner, many tricks up its sleeve;
  • GraphViz is integrated in scikit-learn – no need to import it separately!
  • Zeppelin notebook from Apache – worth a look if you are into Python notebooks;

scikit-learn SGDClassifier

This is a linear classifier that is trained with stochastic gradient descent and supports multi-class classification. The loss parameter controls which model is trained: for example, loss='hinge' gives a linear SVM, and loss='log' gives logistic regression. When should you use it? When your training data set does not fit into memory. Note that SGDClassifier also allows mini-batch (out-of-core) learning via its partial_fit method, as sketched below.
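Here is a minimal out-of-core sketch with partial_fit; the data is synthetic and purely illustrative (and note that recent scikit-learn versions spell the logistic loss 'log_loss' rather than 'log'):

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
clf = SGDClassifier(loss='hinge')   # linear SVM; loss='log' ('log_loss' in newer releases) gives logistic regression
classes = np.array([0, 1])          # partial_fit needs the full set of classes up front

for batch in range(10):             # pretend each batch is streamed from disk
    X = rng.randn(100, 5)
    y = (X[:, 0] + 0.5*rng.randn(100) > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)   # mini-batch update

X_test = rng.randn(200, 5)
y_test = (X_test[:, 0] > 0).astype(int)
print(clf.score(X_test, y_test))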

GraphViz is Integrated in scikit-learn Decision Trees

If you read all my blog posts, you may have come across this one where I put together some code to train a binary decision tree to recognize a hand in poker. The code is available on my github space. If you read the code, you will see that I defined a graph_decision_tree method with all the hoops needed to graph and save the images. But did you know that you don’t need to do all this work, since sklearn.tree has an export_graphviz function? If dsTree is an instance of DecisionTreeClassifier, then one can simply do:

from sklearn.tree import export_graphviz

export_graphviz(dsTree, out_file='dsTree.dot',
                feature_names=['feature1', 'feature2'])

The .dot file can be converted to a .png file (if you have installed GraphViz) like this:

dot -Tpng dsTree.dot -o dsTree.png

Zeppelin Notebook from Apache

If you are using Apache Spark, you may be glad to learn that Apache has a notebook to go along with it. The Zeppelin notebook offers similar functionality to Jupyter in terms of data visualization, paragraph writing and notebook sharing. I recommend that you check it out.

Going from Sum of Squared Errors to the Maximum Likelihood

Greetings, my blog readers!

In this post I will tackle the link between the cost function and the maximum likelihood hypothesis. The idea for this post came to me when I was reading Python Machine Learning by Sebastian Raschka, specifically the chapter on learning the weights of the logistic regression classifier. I like this book and strongly recommend it to anyone interested in the field. It has solid Python coding examples, many practical insights and good coverage of what scikit-learn has to offer.

So, in chapter 3, in the section on learning the weights of the logistic cost function, the author seamlessly moves from the need to minimize the sum-squared-error cost function to maximizing the (log) likelihood. I think it is very important to have a clear intuitive understanding of why the two are equivalent in the case of regression. So, let’s dive in!

Sum of Squared Errors

Using notation from [2], if z^{(i)} is the net input for training sample i (a weighted sum of its features), then \phi (z^{(i)}) is the class value predicted by the learning hypothesis \phi. It is clear that we want to minimize the distance between the predicted and the actual class value y^{(i)}; thus the sum-squared-error is defined as \frac{1}{2}\sum_{i}(y^{(i)}-\phi (z^{(i)}))^{2}. The 1/2 in front of the sum is added to simplify the first derivative, which is taken to find the regression coefficients when performing gradient descent, but one could easily define the cost function without the 1/2 term.

The sum-squared error is a cost function. We want to find a classifier (alternatively, a learning hypothesis) that minimizes this cost; let’s call the cost J. This is a very intuitive optimal value problem describing what we are after. It is like asking TomTom to find the shortest route from London to Manchester to save money on fuel. No problem! Done. Gradient descent (batch or stochastic) will produce a vector of weights \mathbf{w}, which, when multiplied into the matrix of training input variables \mathbf{x}, gives the optimal solution. So the meaning of the equation below should now, hopefully, be clear:

J(w)= \frac{1}{2}\sum_{i}(y^{(i)}-\phi (z^{(i)}))^{2}   (1)
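As a purely illustrative sketch (the toy numbers and variable names are mine, not from [2]), equation (1) is only a couple of lines of numpy:

import numpy as np

def sse_cost(y, phi_z):
    # Sum-of-squared-errors cost J(w) from equation (1)
    return 0.5*np.sum((y - phi_z)**2)

y = np.array([1, 0, 1, 1])              # actual class values
phi_z = np.array([0.9, 0.2, 0.7, 0.4])  # predicted values
print(sse_cost(y, phi_z))               # 0.5*(0.01 + 0.04 + 0.09 + 0.36) = 0.25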

The MAP Hypothesis

The MAP hypothesis is the maximum a posteriori hypothesis: the most probable hypothesis given the observed data (note, I am using material from chapter 6.2 of [1] for this section). Out of all the candidate hypotheses in \Phi , we would like to find the most probable hypothesis \phi that fits the training data Y well.

h_{MAP} \equiv argmax_{\phi \in \Phi} P(\phi | Y)    (2)

It should become intuitively obvious that, in the case of a regression model, the most probable hypothesis is also the one that minimizes J . In (2) we have a conditional probability. Using Bayes’ theorem, we can develop it further:

h_{MAP} \equiv argmax_{\phi \in \Phi} P(\phi | Y)

= argmax_{\phi \in \Phi} \frac{P(Y|\phi)P(\phi)}{P(Y)}

= argmax_{\phi \in \Phi} P(Y|\phi)P(\phi)   (3)

P(Y) is the probability of observing the training data. We drop this term because it is a constant, independent of \phi . P(\phi) is the prior probability of a hypothesis. If we assume that every hypothesis in \Phi is equally probable, then we can drop the P(\phi) term from (3) as well (as it also becomes a constant). We are left with the maximum likelihood hypothesis: the hypothesis under which the likelihood of observing the training data is maximized:

h_{ML} \equiv argmax_{\phi \in \Phi} P(Y|\phi)   (4)

How did we go from looking for the most probable hypothesis given the data to the probability of the data given the hypothesis? A hypothesis that assigns the highest probability to the observed training data is also the one that is most consistent with it. Being the most consistent implies having the best fit (if we were to generate new data under the MAP hypothesis, it would fit the training data best among all candidate hypotheses). Thus, MAP and ML are equivalent here (under a uniform prior).

Bringing in the Binary Nature of Logistic Regression

Under the assumption of n independent training data points, we can rewrite (4) as a product over all observations:

h_{ML} \equiv argmax_{\phi \in \Phi} \prod_{i=1}^{n} P(y_{i}|\phi)   (5)

Because logistic regression is a binary classifier, each data point comes from either the positive or the negative target class. We can use the Bernoulli distribution to model this probability:

 h_{ML} \equiv argmax_{\phi \in \Phi} \prod_{i=1}^{n}  \left( \phi(z^{(i)}) \right) ^{y^{(i)}} \left( 1-\phi(z^{(i)}) \right) ^{1-y^{(i)}}     (6)

Let’s recall that in logistic regression the hypothesis is the logistic (sigmoid) function applied to a weighted sum of the predictors. A single hypothesis \phi is therefore determined by a set of coefficients \mathbf{w} , and the objective of h_{ML} is to find the coefficients that maximize the likelihood or, equivalently, minimize a cost function. Working with the product in (6) directly is awkward: it is prone to numerical underflow and is hard to differentiate. Taking the logarithm turns the product into a sum, and negating it turns maximization into minimization; the resulting negative log-likelihood is convex in \mathbf{w} , so a search procedure like gradient descent can find the global minimum. Taking the log of (6) and negating it gives the familiar formula for the cost function J, which can also be found in chapter 3 of [2]:

 J(w) = \sum_{i=1}^{n}  \left[  -y^{(i)} \log(\phi (z^{(i)})) - (1-y^{(i)}) \log(1- \phi(z^{(i)})) \right]     (7)
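To make (7) concrete, here is a small numpy sketch (reusing the same toy numbers as before, which are my own and purely illustrative) that evaluates the logistic cost; it is exactly the negated log of the likelihood in (6), so minimizing it maximizes the likelihood:

import numpy as np

def logistic_cost(y, phi_z):
    # Logistic regression cost J(w) from equation (7)
    return np.sum(-y*np.log(phi_z) - (1 - y)*np.log(1 - phi_z))

y = np.array([1, 0, 1, 1])
phi_z = np.array([0.9, 0.2, 0.7, 0.4])
print(logistic_cost(y, phi_z))   # perfect predictions would give a cost of 0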

Summary

In this post I made an attempt to show you the connection between the usually seen sum-of-squared errors cost function (which is minimized) and the maximum likelihood hypothesis (which is maximized).

References:

[1] Tom Mitchell, Machine Learning, McGraw-Hill, 1997.*

[2] Sebastian Raschka, Python Machine Learning, Packt Publishing, 2015.

* If there is one book on machine learning that you should read, it should be this one.