Calling Python Functions from Excel – Overview of Addins

In this blog I will compare three utilities that allow connecting Python code with Excel, exposing Python functions as Excel functions, and opening a data exchange channel between Excel and Python. Why Excel you may ask? Excel is a great interface choice when you need:

  1. A simple Windows interface for storing and manipulating data
  2. An easy-to-put-together proof of concept for your machine learning model, where you let the end user work with batch or stochastic model output
  3. A low maintenance interface for a permanent solution in an environment where server-run APIs are too costly to build and maintain
  4. Linking legacy Excel-based analytics with Python and its many data and modelling libraries (pandas, numpy, scipy, scikit-learn, etc.)

TL;DR – see the Summary table with main features compared.

PyXLL

PyXLL is an established product that has been developed and maintained by Tony Roberts since 2010, and it is currently on its 5th major version.

Ease of Use

It is straightforward to download and install the PyXLL addin, with minimal changes to its main config file to tell it where Python is, as well as where the Python modules are.

Functions can be defined in Python as usual, with a few additional changes to make them PyXLL compatible by adding a decorator with explicit types in the signature. 

PyXLL supports all standard Python data types, as well as pandas dataframes and numpy arrays. For the latter two you do need to know what the types are called in PyXLL (‘dataframe’ and ‘numpy_array’), a small hurdle, because those names are required to define the decorators correctly.

Portability 

Can you integrate existing Python code with Excel using PyXLL? Yes. Can you do it without changing your code? No. To expose existing Python code you need to import the pyxll library and decorate each function with the @xl_func decorator. Additional specification is needed to declare input and output data types for arrays and dataframes, which can look a bit non-Pythonic (e.g. a parameter that accepts and returns a 2-d array of strings is declared as string[][] in the decorator’s signature string).
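To make this concrete, here is a minimal sketch of what decorating an existing function might look like; the function name is hypothetical, and the signature string uses PyXLL’s own type names as described above:

    from pyxll import xl_func

    @xl_func("string[][] x: string[][]")
    def transpose_range(x):
        """Transpose a 2-d range of strings passed in from Excel."""
        return [list(row) for row in zip(*x)]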

Object Caching

PyXLL supports object caching in Excel, which is handy for passing objects such as class instances or large pandas dataframes to Excel. Through its config it is possible to control how cached objects are recalculated when Excel re-opens. Cached objects can also be serialised and saved as part of the Excel workbook’s metadata. I am not sure this is a hugely useful feature, since very large objects can always be stored in an in-memory database like SQLite.

Support for Python Async and Real Time Data

PyXLL supports asynchronous functions and Real Time Data streaming. There is an example of an async RTD class being used on the PyXLL documentation site. Note that you need to import PyXLL’s RTD class and inherit from it to ‘switch on’ this functionality. The rest looks like a standard Excel RTD interface, i.e. the need to define connect() and disconnect() methods.

RTD is useful when developing stochastic machine learning models, and async can be useful when working within the reinforcement learning framework. Note that PyXLL also supports Excel async functions.

Logging and Debugging

While testing PyXLL I checked the contents of its logs and found them easy to read. PyXLL allows you to customise the log formatting, verbosity and location via its config file, as well as the maximum size the logs can grow to, all of which is straightforward. I noticed that my own code logs and the addin logs were going to the same log file.

Support for Jupyter Notebooks and Plotting

Since 2020 PyXLL has supported two notable features: embedding Jupyter notebooks in Excel and inserting Python-generated plots into worksheets. Both are useful if you are looking for a seamless merge between a flexible coding environment like Jupyter and Excel. However, if your end user is not a Python developer, they will not add much value.

Local Environment, Download and Installation

You need to download and install the version of the addin that matches your local Python version. And yes, you do need a local Python installation, as PyXLL does not come with one. After downloading the addin, installation is quite simple with pip and the command line tool. Once installed, you edit a config file to tell PyXLL where the Python executable is and where the modules to be exposed to Excel are. Most PyXLL configuration is done via this single config file.

Documentation and Support

PyXLL is a mature addin and there is a great deal of documentation and examples on its website. There is also a YouTube channel with a few interesting PyXLL usage videos (a chat bot example). There is 24/7 support via email for all users.

Licencing and Pricing

At the time of writing, a single user licence is $299 a year (before tax), with a discounted option for an annual plan or multi-user plans. A single user licence can be used on multiple PCs. One can download a trial version of the addin and test it free for 30 days.

xlSlim

xlSlim is a recent addition to the set, having been released in June 2022. It is developed and maintained by a veteran quant developer, Russel Webber. The non-premium version is free to download and use; accessing premium features requires a licence.

Ease of Use

It is very easy to download and install xlSlim. It comes with an installer for Windows, which is the only thing I had to run. Unlike PyXLL, this addin does not require installing any Python libraries or editing and maintaining config files. Reading through the quickstart in the xlSlim docs gives the impression that this is a ‘Python-first’ addin with minimal scaffolding around Excel.

Python modules are registered directly in the Excel workbook using the addin’s RegisterPyModule() function. Multiple modules need to be registered separately. Optional arguments let one specify the location for your own Python executable (3.7 or later) and environment, which is a premium feature. 

xlSlim supports all standard Python data types; the premium version adds pandas dataframes, series and numpy arrays. No decorators are required to tell Excel what types your methods accept and return. All examples on the documentation site use type hints, which make it easier for xlSlim to figure out the types.
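For illustration, a module like the sketch below needs no xlSlim-specific code at all; the type hints alone describe the inputs and outputs. The function and file names are hypothetical, and the module would be registered from Excel with the RegisterPyModule() function mentioned above:

    # pricing.py - plain Python, no xlSlim imports or decorators required
    from typing import List

    def discounted_price(price: float, discount_pct: float) -> float:
        """Apply a percentage discount to a price."""
        return price * (1.0 - discount_pct / 100.0)

    def total_value(quantities: List[float], prices: List[float]) -> float:
        """Sum of quantity * price over two equal-length ranges."""
        return sum(q * p for q, p in zip(quantities, prices))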

Portability 

Not having to install additional libraries and not needing decorators makes xlSlim extremely portable. However, not all Python coders use type hints since they are entirely optional. So, I can see that importing the typing library and adding type hints might be required if not already present in the code. In my view, type hints add a lot of value to large projects or code maintained by several developers.

At the moment, xlSlim will work with Python 3.7 or later.

Object Caching

xlSlim supports object caching: objects like dictionaries, dataframes and arrays are automatically returned as handles to object caches. The addin provides the ViewPyObject() function to display the value behind a cache handle. One could keep module registration and handles on a hidden Excel sheet to hide these complexities from the end user.

Support for Python Async and Real Time Data

xlSlim supports asynchronous Python functions, offloading method execution to a background thread, and RTD. There is a nice example of how to stream data to Excel from a Kafka broker on the addin’s documentation page. If you are not using Kafka, xlSlim lets you implement a basic data streamer as a generator using yield (see the sketch below). Check out the most unusual example of turning streamed data into a video directly in Excel on xlSlim’s YouTube channel.
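As a rough illustration of the generator pattern only (the function and its parameters are made up; see the xlSlim docs for the exact registration details), a minimal streamer could look like this:

    import random
    import time
    from typing import Iterator

    def random_walk_stream(start: float = 100.0) -> Iterator[float]:
        """Yield a new pseudo price roughly once a second, forever."""
        price = start
        while True:
            price += random.gauss(0.0, 0.5)
            yield round(price, 2)
            time.sleep(1.0)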

Logging and Debugging

xlSlim’s LogLocation() function tells you where the addin logs are. User code logs and the addin logs are kept separate. The log format, level and file location can be configured in a config file (PyLogging-EXCEL.conf). The default logs are easy to find and navigate.

Support for Jupyter Notebooks and Plotting

xlSlim does not have a Jupyter notebook feature. In fact, the addin’s Excel integration is minimal: it does not add ribbons or buttons to workbooks. Sending plots to Excel from Python is not supported either.

Local Environment, Download and Installation

xlSlim comes with Python, so you do not need to install it if you don’t have it. If you want to use your own Python with xlSlim, it must be 3.7 or later, as of writing. The addin can be added to Excel manually, or by launching Excel through xlSlim’s desktop shortcut. I did the latter, which was very easy.

Documentation and Support

xlSlim is quite new, but it already has a good set of examples and easy to read documentation. A blog, a YouTube channel and a Google Groups forum are also available. Support is via email or the Google Groups forum.

Licencing and Pricing

At the time of writing, a single user licence is £74.99 ($91) a year (inclusive of tax). A single user licence can be used on multiple PCs. One can download a trial version of the addin and test it free for 14 days.

xlwings

xlwings has been around since 2014 and is part of a whole ecosystem of tools built around Excel, developed and maintained by two ex-investment bankers, Felix Zumstein and Björn Stiel. The main library is part of the Anaconda Python distribution and can also be installed with pip. Advanced features of xlwings PRO are available on a paid subscription basis if used for commercial purposes. This review focuses on using the main library to write Excel UDFs in Python.

Ease of Use

Since xlwings is part of the Anaconda distribution, I did not have to install additional software or libraries. Installing the addin is easy on the command line.

Once that was done, testing a few simple UDFs in Excel took a surprisingly long time. First, configuring xlwings was not straightforward, and the documentation does not cover all possible pitfalls. I use a virtual environment, and xlwings could not find numpy or pandas after I set the interpreter path to the virtual env (as suggested in the xlwings Troubleshooting section). Then my anti-virus software kept shutting down Excel when I tried to register the UDFs, which I resolved by manually telling it to ignore Excel activity. Finally, after working with a third version of the recovered Excel file, in which the VBA reference to xlwings had been lost (having been set at the start, as helpfully mentioned in the installation steps), every tested UDF returned only ‘Object required’. This was resolved by re-adding the VBA reference. So the start-up time was considerably longer than with PyXLL or xlSlim; however, once I got it to work, it worked smoothly. xlwings supports all standard Python data types, as well as pandas dataframes, series and numpy arrays.

Portability 

At the time of writing, xlwings requires at least Python 3.7. You do need to import the xlwings library and add multiple decorators to existing methods to turn them into Excel UDFs. The decorators are not tricky in themselves, but they do affect the speed at which one can port an existing codebase. So, as with PyXLL, one cannot turn existing modules into Excel UDFs without any code changes.
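For a sense of the changes involved, here is a hedged sketch of an existing pandas function decorated for xlwings; the function itself and the converter options are illustrative, not taken from the post:

    import pandas as pd
    import xlwings as xw

    @xw.func
    @xw.arg("df", pd.DataFrame, index=False, header=True)
    @xw.ret(index=False)
    def column_means(df):
        """Return the mean of each numeric column as a one-row table."""
        return df.mean(numeric_only=True).to_frame().T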

Object Caching

The tested xlwings version (0.27.14) does not support object caching.

Support for Python Async and Real Time Data

xlwings provides support for offloading Python execution to an async thread, which is useful for long-running processes. However, there is no support for RTD in the tested version.

Logging and Debugging

xlwings’ default behaviour is to send standard output and errors to a console. This means that as you interact with your UDFs in Excel, a console window periodically pops up to inform you about what is happening (e.g. the xlwings server is running on an event loop, etc.). COM and other internal errors also appear in the console, while Python errors from user code are shown in a pop-up message window.

Adding logging to your Python code will send user code logs and xlwings logs to one log file.

Support for Jupyter Notebooks and Plotting

xlwings lets users interact with Excel from Jupyter notebooks by providing view and load functionality. This is useful since it can speed up exploratory data analysis and simplify code.

Local Environment, Download and Installation

xlwings comes with the Anaconda distribution or can be installed via pip for Python 3.7+. Adding the addin to Excel is done by running an installation command. However, users may still run into configuration or security problems, as I did. I found that the xlwings GitHub issues are a great source of information on how to resolve these.

Excel workbook settings can be controlled through xlwings.conf, directly through the Excel ribbon, or by adding a sheet with the same config name.

Documentation and Support

xlwings is mature and goes back to 2014. Over the years it has built up a loyal user base, and being an open-source library, has seen contributions from other developers. It comes with a great set of examples on its main website, fully documented API reference, a YouTube channel and even an O’Reilly published book where several chapters are dedicated to the addin.

Its professional version gets dedicated support as well as access to a video training course.

Licencing and Pricing

The main library is free and open-source. The PRO version, which among other things provides Excel-embedded code and a custom installer, starts at $590 per user per year when used commercially.

Summary

| Feature | PyXLL | xlSlim | xlwings |
| --- | --- | --- | --- |
| Ease of Use | Fairly easy, but requires knowing PyXLL names for data types | Very easy, with minimal start-up time | Teething issues on start-up, fairly easy after that |
| Portability | Need to modify existing code | No need to modify existing code | Need to modify existing code |
| Object Caching | Available | Available | Not available |
| Support for Python async and Real Time Data | Supports async Python functions, async Excel functions and RTD | Supports async Python functions, offloading execution to a background process, and RTD | Supports offloading execution to an asynchronous thread |
| Logging and debugging | Can change log format and level via the config | Can change log format and level via the config | No out-of-the-box config setting for logs |
| Embedded Jupyter notebooks and Python plotting | Supports Excel-embedded Jupyter notebooks and Python plotting | No support for Excel-embedded notebooks or plots from Python | Supports passing data to/from Jupyter notebooks and Python plotting |
| Local Environment, Download and Installation | Maintain a config file; addin should match the local Python version; pip for the main library; manually add the addin to Excel | Comes with standard Python or can use a local one (3.7+); no config; no pip; automatically added to Excel | Part of Anaconda or via pip; CLI installation automatically adds the addin to Excel |
| Documentation and Support | Lots of great examples and documentation on YouTube and the addin’s site; support via email | Lots of great examples and documentation on YouTube and the addin’s site; support via email | Lots of examples and documentation on YouTube and the addin’s site; support for the PRO version or via GitHub issues |
| Licencing and Pricing | Single-user or multi-user pricing plans, starting at $299 per user per year (before tax) | Free for standard Python data types and libraries; $91 per user per year (inclusive of tax) for advanced features (pandas, numpy, RTD, etc.) | Main library is free and open-sourced; PRO for commercial use starts at $590 per user per year; other plans are available |
Addins Features Summary

Disclaimer: I have helped to test some of xlSlim’s functionality and have made small contributions to its documentation and examples.

Build Your Business a Product Tool with Word2Vec and xlSlim

Introduction

The idea behind word2vec is based on a linguistic hypothesis called the distributional hypothesis, which states that words that occur in the same or similar contexts tend to have similar meanings. The hypothesis works in both directions, from context to words and from words to context: similar words should occur in similar contexts, and similar contexts should contain similar words.

In NLP, the meaning of words can be represented by their embeddings – numerical vector representations in the multi-dimensional vector space. Embeddings can be ‘learnt’ from text using linear algebra techniques like Singular Value Decomposition (SVD), Non-Negative Matrix Factorization or other decompositions.

The word2vec algorithm is an example of learning word embeddings from text in a way that also allows arithmetic on the learned embeddings. Here, embeddings are learnt through gradient descent by training a shallow neural network. There is a ready implementation of word2vec in Python in the gensim library. Apart from word2vec, one can use the Embedding layer from TensorFlow Keras. Yet another approach is FastText, an extension of word2vec that leverages word morphology and sub-words.

In this blog we pretend that we are data scientists at a small online retailer and develop an easy to use Excel-based tool using word2vec to help our company to do the following:

  • Find products that are similar to a given product based on on-line browsing behaviour of our customers. This can be used to offer similar products for customers who call us to place an order for an item that is out of stock.
  • Find products to recommend as next purchase based on on-line browsing behaviour of our customers.

The above functionality will be offered via simple functions in Excel, exposed via the xlSlim addin. Let’s get started!

The Data

The data for this blog comes from Kaggle. There are two data files available to download, and we will use the one from Oct-2019.  The data contains anonymized records for the items customers viewed, added to their cart and purchased. We only use records for items that customers viewed.

Since word2vec is an NLP model, let’s make the connection between viewing items and words in sentences explicit:

In each online session of a customer, viewed items’ product ids are treated as words and the session is treated as a sentence/context.

The required data pre-processing transforms the viewed-item records into a list of lists, each inner list being a unique customer session. It is quite simple and includes the following steps (a sketch in pandas follows the list):

  • Keeping records for viewed items only (40,779,399 records)
  • Removing records with N/A category codes (reduced to 27,542,941 records)
  • Removing duplicate views of the same item on the same date by the same customer (reduced to 17,309,221 records)
  • Reducing the data to sessions with at least two viewed items in a given day (reduced to 15,260,646 records)
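As promised, here is a rough sketch of those steps in pandas. The column names (event_time, event_type, category_code, user_id, product_id) are the usual ones in the Kaggle file, and the file path is hypothetical; adjust both as needed.

    import pandas as pd

    df = pd.read_csv("2019-Oct.csv")                 # hypothetical local path
    df = df[df["event_type"] == "view"]              # 1. keep viewed items only
    df = df.dropna(subset=["category_code"])         # 2. drop N/A category codes
    df["date"] = pd.to_datetime(df["event_time"]).dt.date
    df = df.drop_duplicates(["user_id", "product_id", "date"])   # 3. one view per item/user/day
    session_sizes = df.groupby(["user_id", "date"])["product_id"].transform("size")
    df = df[session_sizes >= 2]                      # 4. keep sessions with 2+ viewed items

    # One 'sentence' per user per day: the viewed product ids as strings
    sessions = (df.groupby(["user_id", "date"])["product_id"]
                  .apply(lambda ids: [str(i) for i in ids])
                  .tolist())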

This list of lists of products viewed per session is the input to the gensim Word2Vec model.

The Model

We use the gensim implementation of Word2Vec. The model is initialised with the following parameters:

  • min_count = 2, meaning a product id must appear at least twice in the corpus (i.e. in the list of lists) to be used
  • vector_size = the maximum length of an online session in number of viewed items, which happens to be 1,013. This parameter determines the length of each word vector, i.e. the dimensionality of each embedding.

Note that by default, we train a CBOW architecture.
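As a minimal sketch (assuming sessions is the list of lists produced by the preprocessing step), training and saving the model with gensim looks roughly like this:

    from gensim.models import Word2Vec

    # sg=0 (the default) selects the CBOW architecture
    model = Word2Vec(sentences=sessions, min_count=2, vector_size=1013)
    model.save("word2vec_products.model")  # hypothetical file name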

The model is trained using all the data and saved as a .model object using gensim’s built-in utilities. All model code is available in my github repository. The code block below shows the implementation of the methods that find similar products and recommend the next purchase:

from gensim.models import Word2Vec
import pandas as pd


def get_similar_product(model: Word2Vec, df: pd.DataFrame, product_id: int) -> pd.DataFrame:
    """
    Parameters
    ----------
    model : instance of Word2Vec
    df : dataframe with preprocessed data
    product_id : int
        unique product id for which we need to find similar product ids

    Returns
    -------
    dataframe with similar products
    """
    try:
        sim_product = model.wv.most_similar(positive=[str(product_id)])
        return df.loc[df['product_id'].isin(
            [int(word[0]) for word in sim_product]
        )][['category_code', 'brand', 'product_id']].drop_duplicates()
    except KeyError:
        # Returned to Excel as an error message
        return f"Cannot find the specified product with id {product_id}"


def recommend_next_purchase(model: Word2Vec, df: pd.DataFrame, user_id: int) -> pd.DataFrame:
    """
    Parameters
    ----------
    model : instance of Word2Vec
    df : dataframe with preprocessed data
    user_id : int
        unique user id for whom we make recommendations

    Returns
    -------
    dataframe with recommended products
    """
    try:
        # Find the products the user browsed
        viewed_products = df.loc[df['user_id'] == user_id]['product_id'].unique()
        # Get recommendations for next purchase
        output_words = model.predict_output_word(
            [str(product) for product in viewed_products]
        )
        return df.loc[df['product_id'].isin(
            [int(word[0]) for word in output_words]
        )][['category_code', 'brand', 'product_id']].drop_duplicates()
    except KeyError:
        # Returned to Excel as an error message
        return f"Cannot find the specified user with id {user_id}"

It takes approximately 5 minutes to train and save the model. The idea is that we, the data scientists, pre-train our word2vec model and provide the end users with an Excel file that exposes a few functions. One function lets the user get a product or category recommendation for a user id; another returns similar products for a product id. Note that a config file controls the model and data parameters that would otherwise be hard-coded in the Python code. The path to the config file is the only hard-coded global parameter.

User Application

For the user interface we implement two functions, one to find similar products, and another to recommend either the category or products to buy next.

A simple model interface where both functions are exposed in Excel can be built using the xlSlim addin. xlSlim is very easy to use, and one can set up an Excel-based tool from Python methods in minutes. Note that we can update the model with new data without the end user knowing or caring, as long as the interface code has access to the model object.

My github page has the Excel file, with the formulas as shown below.

The formulas register the Python module that implements our two functions and load the model and the data objects. The functions get_similar_product_from_handle() and recommend_next_purchase_from_handle() are made available to Excel by xlSlim. We pass these functions a handle to the model object and a handle to the dataframe; it is the need to work with a dataframe handle that requires the .._from_handle() versions of the exposed methods.

Note that the file needs to be opened from the xlSlim application, which seamlessly loads and activates the addin.

Entering a known product_id (a child’s carriage) for a similar product gives the following output:

Entering a known user_id (a user who has viewed computer memory and video cards) for product recommendations gives the following output:

Conclusion

Word2Vec is a powerful model that goes beyond human language applications. Using Excel and an addin that can expose Python functionality as Excel functions (e.g. xlSlim) is a quick and easy way to build user interfaces either to be used as a proof of concept or as a permanent flexible solution. Not all companies have the IT and data engineering resources to run servers and support web-based APIs, so, Excel is definitely a viable alternative.

Don’t Get in a Pickle with a Python namedtuple

In this blog I will show you what happens when you want to pickle an object that contains a Python namedtuple.

Python’s namedtuple is a high-performance data type that lets us define a custom type which behaves like a tuple. For example, the following piece of code defines a new type Viewer, then creates and initialises an instance of it:

    from collections import namedtuple

    Viewer = namedtuple('Viewer', 'gender age points')
    viewer = Viewer('X', 25, 356)

In the above, line 3 defines a new type Viewer, and line 4 defines and initialises a new variable viewer of type Viewer. viewer behaves like a tuple in a sense that it has built-in methods count() and index() and allows access to attributes via indexing or named arguments. For example:

    print(viewer[2])         # prints 356
    print(viewer.age)        # prints 25
    print(viewer.count('X')) # prints 1

Note that, unlike with a list or a dict, to work with namedtuples we perform two operations: (1) define the new type, and (2) create a new instance of it. The same two steps are followed when we work with classes, and a namedtuple is just a dynamically named class type. But how exactly does this dynamic part work? When we define the new type (line 3 in the first code snippet), we are actually calling a factory function, namedtuple, that does the dynamic work for us: it returns a subclass of tuple named as we specify in the function call.
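We can check this directly:

    print(issubclass(Viewer, tuple))  # True - Viewer is a dynamically created tuple subclass
    print(Viewer.__name__)            # 'Viewer'
    print(Viewer._fields)             # ('gender', 'age', 'points')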

Let’s see what happens when we create a class with a namedtuple member.

import pickle
from collections import namedtuple
import datetime as dt


class ViewerClass(object):

    # class-level type definition
    vt = namedtuple(
        'vt', 'start_date mon_views mon_streams name dob'
    )

    def __init__(
        self, start_date, mon_views, mon_streams, name, dob
    ):
        self._my_vt = ViewerClass.vt(
            start_date, mon_views, mon_streams, name, dob
        )

    def get_start_date(self):
        return self._my_vt.start_date

    def get_monthly_views(self):
        return self._my_vt.mon_views

    def get_monthly_streams(self):
        return self._my_vt.mon_streams

    def get_registration_details(self):
        return (
            'Name:'
            + self._my_vt.name
            + ' DOB:'
            + str(self._my_vt.dob)
        )

    def update_monthly_stream(self, new_mon_streams):
        # namedtuple fields are read-only, so build an updated copy with _replace()
        self._my_vt = self._my_vt._replace(mon_streams=new_mon_streams)

    def update_monthly_views(self, new_mon_views):
        self._my_vt = self._my_vt._replace(mon_views=new_mon_views)


if __name__ == '__main__':

    viewer1 = ViewerClass(
        dt.date(2019, 1, 1),
        5,
        6234.80,
        'John',
        dt.date(1989, 12, 3),
    )
    print(
        "Viewer {} has streamed for {} seconds this month.".format(
            viewer1.get_registration_details(),
            viewer1.get_monthly_streams(),
        )
    )

    viewer2 = ViewerClass(
        dt.date(2019, 2, 1),
        5,
        5234.80,
        'Mary',
        dt.date(1989, 11, 11),
    )
    print(
        "Viewer {} has streamed for {} seconds this month.".format(
            viewer2.get_registration_details(),
            viewer2.get_monthly_streams(),
        )
    )

    print(type(viewer1))
    print(type(viewer1._my_vt))

The output of the print statements points to a potential problem that can occur if we try to pickle the viewer objects:
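The original screenshot of the output is not reproduced here, but the two type() calls at the end print the following:

    <class '__main__.ViewerClass'>
    <class '__main__.vt'>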

It turns out that the protected variable is of type ‘__main__.vt’ but not ‘__main__.ViewerClass.vt’. And if we try to pickle viewer1 we are going to get this error:

_pickle.PicklingError: Can’t pickle <class ‘__main__.vt’>: attribute lookup vt on __main__ failed

This error makes sense: vt is not defined at the top level of __main__ but inside __main__.ViewerClass, so when pickle tries to look the class up by name on the __main__ module, the lookup fails.

There are several ways to fix this.

First, we can move the definition of vt outside of ViewerClass to the __main__. This will let pickle find vt at the level it is looking for it:

# module-level type definition
vt = namedtuple(
    'vt', 'start_date mon_views mon_streams name dob'
)


class ViewerClass(object):
    def __init__(
        self, start_date, mon_views, mon_streams, name, dob
    ):
        self._my_vt = vt(
            start_date, mon_views, mon_streams, name, dob
        )

    ...

The second solution involves setting the namedtuple class’s __qualname__ attribute to the qualified name that includes the enclosing class:

import pickle
from collections import namedtuple
import datetime as dt


class ViewerClass(object):

    # class-level definition
    vt = namedtuple(
        'vt', 'start_date mon_views mon_streams name dob'
    )
    vt.__qualname__ = 'ViewerClass.vt'

    def __init__(
        self, start_date, mon_views, mon_streams, name, dob
    ):
        self._my_vt = ViewerClass.vt(
            start_date, mon_views, mon_streams, name, dob
        )

    ...

This fixes the issue and makes viewer1._my_vt of type ‘__main__.ViewerClass.vt’, under which pickle can look it up.
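With either fix in place, a quick round-trip confirms that pickling now works:

    data = pickle.dumps(viewer1)
    restored = pickle.loads(data)
    print(restored.get_registration_details())  # Name:John DOB:1989-12-03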

I must say that I prefer the first solution, since sub-classing ViewerClass may prove problematic, and we should generally avoid modifying dunder attributes like __qualname__ by hand.

Introduction to Correspondence Analysis

In this blog I will introduce Correspondence Analysis, a visualisation technique for categorical data. All the code has been compiled in my github repository.

Correspondence Analysis (CA) has been around for a very long time. It was first developed in the 1930s and made popular by M. Greenacre in the 1980s. It is an established statistical analysis technique with dedicated annual symposiums and a sufficient amount of literature covering theory and applications. In spite of its popularity, I have only recently discovered it, and thought it worthwhile to document the fundamentals on my blog.

What Exactly is Correspondence Analysis?

CA is a visualisation technique that can be applied to categorical data for data exploration. Unlike numerical data, categorical features are harder to analyse and visualise. CA uses a matrix decomposition method, namely SVD, and thus you may see CA being likened to Principal Component Analysis (PCA). However, CA is not, strictly speaking, a PCA for categorical data, mostly because the primary objective of CA is to provide a visualisation of associations among categorical features.

How does one visualise categorical data? CA is based on the simple concept of a contingency table: a tabulation of the frequencies with which categorical values are distributed across variables. This blog uses examples from P. Yelland’s article on CA published in the Mathematica Journal [1]; I will translate his Mathematica code to Python (because Python is awesome). In [1] CA is applied to textual analysis, where passages from a few authors are analysed by the frequency of letters. The five authors, the passage labels and the letters are shown below:

authors = ["Charles Darwin", "Rene Descartes","Thomas Hobbes", "Mary Shelley", "Mark Twain"]
initials=['CD1','CD2','CD3','RD1','RD2','RD3','TB1','TB2','TB3','MS1','MS2','MS3','MT1','MT2','MT3']
chars=["B", "C", "D", "F", "G", "H", "I", "L", "M", "N","P", "R", "S", "U", "W", "Y"]

The contingency table built from how often these letters appear in three passages per author is:

sampleCrosstab=[[34, 37, 44, 27, 19, 39, 74, 44, 27, 61, 12, 65, 69,22, 14, 21],
                [18, 33, 47, 24, 14, 38, 66, 41, 36,72, 15, 62, 63, 31, 12, 18],
                [32, 43, 36, 12, 21, 51, 75, 33, 23, 60, 24, 68, 85,18, 13, 14],
                [13, 31, 55, 29, 15, 62, 74, 43, 28,73, 8, 59, 54, 32, 19, 20],
                [8, 28, 34, 24, 17, 68, 75, 34, 25, 70, 16, 56, 72,31, 14, 11], 
                [9, 34, 43, 25, 18, 68, 84, 25, 32, 76,14, 69, 64, 27, 11, 18],
                [15, 20, 28, 18, 19, 65, 82, 34, 29, 89, 11, 47, 74,18, 22, 17], 
                [18, 14, 40, 25, 21, 60, 70, 15, 37,80, 15, 65, 68, 21, 25, 9],
                [19, 18, 41, 26, 19, 58, 64, 18, 38, 78, 15, 65, 72,20, 20, 11], 
                [13, 29, 49, 31, 16, 61, 73, 36, 29,69, 13, 63, 58, 18, 20, 25],
                [17, 34, 43, 29, 14, 62, 64, 26, 26, 71, 26, 78, 64, 21, 18, 12],
                [13, 22, 43, 16, 11, 70, 68, 46, 35,57, 30, 71, 57, 19, 22, 20],
                [16, 18, 56, 13, 27, 67, 61, 43, 20, 63, 14, 43, 67,34, 41, 23], 
                [15, 21, 66, 21, 19, 50, 62, 50, 24, 68, 14, 40, 58, 31, 36, 26],
                [19, 17, 70, 12, 28, 53, 72, 39, 22, 71, 11, 40, 67,25, 41, 17]]

Can you spot any differences in the use of letters by author from sampleCrosstab? It is almost impossible to do so by just looking at it. Instead, CA resorts to the \chi^2 statistic.

Chi-Squared Statistic and Chi-Squared Distances

Pearson’s \chi^2 test of independence can be used to say with reasonable certainty if the distribution of letters differs from one author to another. \chi^2 is defined as:

\chi^2 = \sum_{I}\sum_{J}\frac{(n_{ij}-(\frac{n_{i.}n_{.j}}{n}))^2}{\frac{n_{i.}n_{.j}}{n}} (1)

Where n is the total frequency count, n_{ij} is the letter frequency in row i and column j , and n_{i.} and n_{.j} are the total frequencies in row i and column j respectively. The product of n_{i.} and n_{.j} , normalised by n , is the expected frequency for n_{ij} under the independence assumption; let’s call it independenceModel. The greater \chi^2 is, the greater the certainty that the use of these letters differs by author. We can calculate this statistic in Python as follows:

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

grandTotal = np.sum(sampleCrosstab)
correspondenceMatrix = np.divide(sampleCrosstab,grandTotal)
rowTotals = np.sum(correspondenceMatrix, axis=1)
columnTotals = np.sum(correspondenceMatrix, axis=0)

independenceModel = np.outer(rowTotals, columnTotals)

#Calculate manually
chiSquaredStatistic = grandTotal*np.sum(np.square(correspondenceMatrix-independenceModel)/independenceModel)
print(chiSquaredStatistic)

# Quick check - compare to scipy Chi-Squared test
statistic, prob, dof, ex = chi2_contingency(sampleCrosstab)
print(statistic)
print(np.round(prob, decimals=2))

In the above code correspondenceMatrix holds the normalised frequencies. The \chi^2 statistic is 448.50, which is very unlikely to be observed under the null hypothesis (that the letter frequencies follow the same distribution for every author). Having established this, we can continue with the CA, as we now know it should be able to show us some meaningful associations.

For the purposes of CA, the differences between the distributions of letters in the text samples are measured by \chi^2 -distances, which are weighted Euclidean distances between normalized rows. These are calculated by dividing row entries by their respective row totals. The weights are inversely proportional to the square roots of the column totals. \chi^2 -distances between row i and row k are defined as:

\chi^2_{distance_{ik}} = \sqrt{\sum_{J}\frac{(p_{ij}/p_{i.} - p_{kj}/p_{k.})^2}{p_{.j}}} (2)

# pre-calculate normalised rows
norm_correspondenceMatrix = np.divide(correspondenceMatrix,rowTotals[:, None])

chiSquaredDistances = np.zeros((correspondenceMatrix.shape[0],correspondenceMatrix.shape[0]))

norm_columnTotals = np.sum(norm_correspondenceMatrix, axis=0)
for row in range(correspondenceMatrix.shape[0]):
    chiSquaredDistances[row]=np.sqrt(np.sum(np.square(norm_correspondenceMatrix
                                                        -norm_correspondenceMatrix[row])/columnTotals, axis=1))
# Save distances to the DataFrame
dfchiSquaredDistances = pd.DataFrame(data=np.round(chiSquaredDistances*100).astype(int), columns=initials, index=initials)

print(dfchiSquaredDistances)

In (2) I switched to the notation p_{ij}, which is simply every entry in correspondenceMatrix (i.e. letter frequencies normalised by the grand total). dfchiSquaredDistances contains:

Chi-Squared Distances In Graphical Form

CA provides a means of representing a table of \chi^2 -distances in graphical form; this is where the similarity with PCA comes in. To compute such a representation we need to transform the distances into points in a Cartesian coordinate system. This is achieved by a singular value decomposition (SVD) of the matrix of standardised residuals:

\Omega_{ij} = \frac{p_{ij}-\mu_{ij}}{\sqrt{\mu_{ij}}} (3)

where \mu_{ij} = p_{i.}p_{.j} are the entries of independenceModel.

standardizedResiduals = np.divide((correspondenceMatrix-independenceModel),np.sqrt(independenceModel))

u,s,vh = np.linalg.svd(standardizedResiduals, full_matrices=False)

We are after the row scores, which are coordinates of points in a high-dimensional space (14 dimensions in this case). These points are arranged so that the Euclidean distance between two points is equal to the \chi^2 -distance between the two rows to which they correspond. The row scores are defined as:

R = \delta_{r}\cdot U\cdot S (4)

where U is the matrix of left singular vectors and S is the diagonal matrix of singular values from the SVD. \delta_{r} is a diagonal matrix made of the reciprocals of the square roots of the row totals.

deltaR = np.diag(np.divide(1.0,np.sqrt(rowTotals)))

rowScores=np.dot(np.dot(deltaR,u),np.diag(s))

dfFirstTwoComponents = pd.DataFrame(data=[l[0:2] for l in rowScores], columns=['X', 'Y'], index=initials)

print(dfFirstTwoComponents)

Extracting the first two components gives us:

Plotting these as points:
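The original plot is not reproduced here; a minimal matplotlib sketch of it (matplotlib is an assumption, not part of the original listing) would be:

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots(figsize=(8, 6))
    ax.scatter(dfFirstTwoComponents['X'], dfFirstTwoComponents['Y'])
    for label, (x, y) in dfFirstTwoComponents.iterrows():
        # Label each point with its passage initials (CD1, RD1, ...)
        ax.annotate(label, (x, y), textcoords="offset points", xytext=(4, 4))
    ax.axhline(0, color='grey', lw=0.5)
    ax.axvline(0, color='grey', lw=0.5)
    ax.set_xlabel('Component 1')
    ax.set_ylabel('Component 2')
    plt.show()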

The plot clearly shows letters associations by author. Mark Twain and Charles Darwin’s samples stand out as significantly different from the rest.

Source and Reference: [1] P.Yelland, An Introduction to Correspondence Analysis. The Mathematica Journal 12, 2010 Wolfram Media, Inc.

Approximate Bayesian Computation

Greetings, my blog readers!

This is my first post in 2018. In this post I will share with you a very simple way of performing inference using Approximate Bayesian Computation (ABC) – not to be confused with Approximate Bootstrap Confidence interval, which is also “ABC”.

Let’s say we have observed some data and are interested in testing whether there was a change in behaviour in whatever generated the data. For example, we could be monitoring the total amount spent or transferred from some account, and we would like to see if there was a shift in how much is being spent or transferred. The figure below shows what the data could look like. After eye-balling the graph, we think that all observations after item 43 belong to the changed behaviour (cutoff = 43), and we separate the two regions by colour.

The first question that we can ask is about the means of the blue and the red regions: are they the same? In the figure above I am showing the mean and standard deviations for the two sets. We can run a basic bootstrap with replacement to check if the difference in the means is possibly accidental.

In the figure above, basic_bootstrap generates a distribution of means of randomly resampled sets. The confidence interval is first computed non-parametrically, but a quick comparison with the 95% CI based on normal standard scores shows that the simulated and non-simulated confidence intervals around the means are very close. Most importantly, the confidence intervals for the blue and red region means overlap, so we cannot reject the null hypothesis that the population means are the same and that the differences seen here are accidental.
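The original basic_bootstrap listing is not shown here; the following is a minimal sketch of the same idea, assuming the two regions are available as 1-d numpy arrays blue and red (these names are hypothetical):

    import numpy as np

    rng = np.random.default_rng(42)

    def bootstrap_mean_ci(x, n_boot=10_000, alpha=0.05):
        """Non-parametric percentile CI for the mean of x, via resampling with replacement."""
        means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                          for _ in range(n_boot)])
        lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return x.mean(), lo, hi

    # Example: compare bootstrap_mean_ci(blue) with bootstrap_mean_ci(red);
    # if the two intervals overlap, we cannot reject equality of the means.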

Note how unsatisfying this result is. If we use some other test, like one-way ANOVA from scipy.stats.f_oneway, we get a p-value that is too high to reject the null hypothesis. However, if we plot the CDFs of the blue and the red data, we can clearly see that larger values prevail in the latter:

Approximate Bayesian Computation

Approximate Bayesian Computation (ABC) relates to probabilistic programming methods and allows us to quantify uncertainty more precisely than a simple CI. A pretty good summary of ABC can be found on Wikipedia. If we are monitoring transactions occurring over time, we may be interested in generating alerts when an amount is above a threshold (for example, your bank could have a monitoring system in place to safeguard you against credit card fraud). If, instead of comparing the means of the red and blue regions, we want to answer the question of how likely we are to see more trades above the threshold in the red vs. the blue data region, we can use ABC.

To run an ABC test on the difference in the number of trades above a threshold in the blue and red data regions, we begin by choosing the threshold. Take a look at the CDF plot above: approximately half of the red data is above 20, whereas only 25% of the blue data is. Let’s set our threshold at 20. ABC is a simple simulation algorithm where we repeatedly perform sample and compare steps. What can we sample here? We sample from two Normal distributions, each with its mean set to the observed fraction of trades above the threshold in its region; I use the Normal distribution purely as a choice of convenience. And what can we compare? We compare the number of trades that would be above the threshold if the data were generated under the sampled parameter, and we store away the sampled values that are consistent with the observed data. If we repeat this many times under the two parameterisations, we build up two distributions that can be used to answer the main question: how likely are we to obtain more trades above the chosen threshold in the red vs. the blue data set. The code below does exactly that.
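The original code listing is not reproduced here; the sketch below is only one way to read the steps above, and the data arrays blue and red, the prior scale and the acceptance tolerance are all assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    threshold = 20

    def abc_fraction_above(data, threshold, n_sims=100_000, tolerance=0.02):
        """Approximate the posterior of the fraction of observations above the threshold."""
        n = len(data)
        observed = np.mean(data > threshold)           # observed summary statistic
        # Prior: Normal centred on the observed fraction (a convenience choice)
        candidates = rng.normal(loc=observed, scale=0.1, size=n_sims)
        candidates = candidates[(candidates >= 0) & (candidates <= 1)]
        accepted = []
        for p in candidates:
            simulated = rng.binomial(n, p) / n         # simulate a data set's summary
            if abs(simulated - observed) <= tolerance:  # keep parameters consistent with data
                accepted.append(p)
        return np.array(accepted)

    post_blue = abc_fraction_above(blue, threshold)
    post_red = abc_fraction_above(red, threshold)
    # Probability of a larger fraction above the threshold in the red region
    print(np.mean(rng.choice(post_red, 10_000) > rng.choice(post_blue, 10_000)))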

We obtain a very high probability of seeing more trades above the threshold in the red vs. the blue region.