Spurs right to sack Pochettino?

A few months back, I built my “football club elo by manager” visualisation. Essentially, we take the week-by-week Premier League Elo ratings from ClubElo and overlay it with managerial tenures.
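
For anyone who wants to reproduce something like it, here is a minimal sketch (my own code, not the original visualisation). It assumes ClubElo’s public CSV endpoint at api.clubelo.com/&lt;ClubName&gt; with Elo, From and To columns, and the managerial tenure list is hand-entered and illustrative.

```python
# Minimal sketch: pull a club's weekly Elo history from ClubElo's CSV API
# (endpoint and column names assumed) and shade managerial tenures on top.
import pandas as pd
import matplotlib.pyplot as plt

club = "Tottenham"
elo = pd.read_csv(f"http://api.clubelo.com/{club}", parse_dates=["From", "To"])

# Hand-entered tenures to overlay (name, start, end) -- illustrative only.
tenures = [("Pochettino", "2014-05-27", "2019-11-19")]

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(elo["From"], elo["Elo"])
for name, start, end in tenures:
    ax.axvspan(pd.Timestamp(start), pd.Timestamp(end), alpha=0.15)
    ax.text(pd.Timestamp(start), elo["Elo"].max(), name, rotation=90, va="top")
ax.set_ylabel("ClubElo rating")
ax.set_title(f"{club}: weekly Elo rating with managerial tenures overlaid")
plt.show()
```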

A clear pattern emerges – a lot of Premier League sackings have coincided with clubs falling significantly in terms of Elo rating. For example, Liverpool sacked Rafa Benitez, Kenny Dalglish (in 2012) and Brendan Rodgers at what look like the right times, and Manchester United similarly sacked Jose Mourinho when he had brought them back to below where he started.

And now the news comes in that Spurs have joined the party, sacking long-time coach Mauricio Pochettino. What I find interesting is the timing of the sacking – while international breaks are usually a popular time to change managers (the two-week gap in fixtures gives a club some time to adjust), most sackings happen in the first week of the break.

The Pochettino sacking is surprising in that it has come towards the end of the international break, giving the club only four days before their next fixture (a derby away at a struggling West Ham). However, the Guardian reports that Spurs are close to hiring Jose Mourinho, and that might explain the timing.

So were Spurs right in sacking Pochettino, barely six months after he took them to a Champions League final? Let’s look at the Spurs story under Pochettino using Elo ratings. 

[Chart: Spurs’ Elo rating through Pochettino’s tenure]

Pochettino took over in 2014 after an underwhelming 2013-14 season in which the club struggled under Andre Villas Boas and then Tim Sherwood. Initially, results weren’t too promising, as he took them from an 1800 rating down to 1700.

However, chairman Daniel Levy’s patience paid off, and the club mounted a serious challenge to Leicester in 2015-16 before falling away towards the end of the season, finishing third behind Arsenal. As the Elo chart shows, the improvement continued, and the club remained in Champions League places through the course of Pochettino’s reign.

For me, the highlight of Pochettino’s reign was Spurs’ 4-1 demolition of Liverpool at Wembley in October 2017, a game I happened to watch at the stadium. And as per the Elo ratings, the club plateaued shortly after that.

If that plateau had continued, I suppose Pochettino would have remained in his job, giving the team regular Champions League football. This season, however, has been a disaster.

Spurs are 13 points below what they scored in comparable fixtures last season, and now look unlikely to finish even in the top six. Their Elo has also dropped below 1850 for the first time since 2016-17. While that is still higher than where Pochettino started, the precipitous recent drop means the club has possibly made the right call in sacking him.

If Mourinho does replace him (it looks likely, as per the Guardian), it will present a personal problem for me – for over a decade now, Tottenham have been my “second team” in the top half of the Premier League, behind Liverpool. That cannot continue if Mourinho takes over. I’m wondering who to shift my allegiance to – it will have to be either Leicester or (horror of horrors) Chelsea!

Alchemy

Over the last 4-5 days I kinda immersed myself in finishing Rory Sutherland’s excellent book Alchemy.

It all started with a podcast – Sutherland was the guest on Russ Roberts’ EconTalk last week. I’d barely listened to half the episode when I knew that I wanted more of Sutherland, and so I immediately bought the book on Kindle. The same evening, I finished my previous book and started reading this one.

Sometimes I get a bit concerned that I’m agreeing with an author too much. What made this book “interesting” is that Sutherland is an ad-man and a marketer, who keeps talking down data and economics while playing up intuition and “feeling”. In other words, at least as far as professional career and leanings go, he is possibly as far from me as it gets. Yet, I found myself silently nodding in agreement as I went through the book.

If I have to summarise the book in one line, I would say: “most decisions are made intuitively or based on feeling. Data and logic are mainly used to rationalise decisions rather than to make them”.

And if you think about it, it’s mostly true. For example, you don’t use physics to calculate how much to press down on your car accelerator while driving – you do it essentially by trial and error and using your intuition to gauge the feedback. Similarly, a ball player doesn’t need to know any kinematics or projectile motion to know how to throw or hit or catch a ball.

The other thing that Sutherland repeatedly alludes to is that we tend to optimise the things that are easy to measure. Financials are a good example of that. This decade, with the “big data revolution” followed by the rise of “data science”, the amount of data available for decision-making has exploded, meaning that more and more decisions are being made using data.

The trouble, of course, is availability bias, or what I call the “keys-under-the-lamppost bias”. We tend to optimise and make decisions on things that are easily measurable (this set is of course much larger than it was a decade ago), and because we know we are using more “objective” inputs, we have irrational confidence in our decisions.

Sutherland talks about barbell strategies, ergodicity, why big data leads to bullshit, why it is important to look for solutions beyond the scope of the immediate domain, and the Dunning-Kruger effect. He makes statements such as “I would rather run a business with no mathematicians than with second-rate mathematicians”, which exactly mirrors my opinion of the “data science industry”.

There is absolutely no doubt about why I liked the book.

Thinking about it again, while I said that professionally Sutherland seems as far from me as possible, that’s possibly not true. While I do use a fair bit of data and economic analysis in my consulting work, I find that I ultimately make most of my decisions on intuition. Data is there to guide me, but the decision-making is always an intuitive process.

In late 2017, when I briefly worked in an ill-fated job in “data science”, I’d made a document about the benefits of combining data analysis with human insight. And if I think about my work, my least favourite assignments have been those where I’ve used data to help clients make “logical decisions” (as Sutherland puts it).

The work I’ve enjoyed the most has been where I’ve used the data and presented it in ways in which my clients and I have noticed patterns, rationalised them, and then taken an (intuitive) leap of faith towards what the right course of action might be.

And this also means that over time I’ve been moving away from work that involves building models (the output is too “precise” to interest me) and towards more “strategic” stuff where there is a fair amount of intuition riding on top of the data.

Back to the book: I’m so impressed with it that if I were still living in London, I would have pestered Sutherland to meet me, and then tried to convince him to let me work for him. Even if, at the top level, it seems like his work and mine are diametrically opposite.

I leave you with my highlights and notes from the book, and this tweet.

Here’s my book, in case you are interested.


EPL: Mid-Season Review

Going into the November international break, Liverpool are eight points ahead at the top of the Premier League. Defending champions Manchester City have slipped to fourth place following their loss to Liverpool. The question most commentators are asking is if Liverpool can hold on to this lead.

We are two-thirds of the way through the first round robin of the Premier League. The thing with evaluating league standings midway through the round robin is that it doesn’t account for the fixture list. For example, Liverpool have finished playing the rest of the “big six” (or seven, if you include Leicester), but Manchester City still have many games against the top teams to come.

So my practice over the years has been to compare team performance to corresponding fixtures in the previous season, and to look at the points difference. Then, assuming the rest of the season goes just like last year, we can project who is likely to end up where.

Now, relegation and promotion introduce a complication, but we can “solve” that by replacing last season’s relegated teams with this season’s promoted teams (18th by the Championship winners, 19th by the Championship runners-up, and 20th by the Championship playoff winners).
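
A rough sketch of how this comparison can be computed (my own illustration, not code from the post; the this_season and last_season data frames, and the home_team / away_team / home_goals / away_goals column names, are assumptions, with promoted teams already mapped onto the relegated sides they replace):

```python
# Compare each played fixture with the same fixture last season and sum the
# points difference per team.
import numpy as np
import pandas as pd

def long_points(results: pd.DataFrame) -> pd.DataFrame:
    """One row per (team, fixture) with the points that team earned."""
    hg, ag = results["home_goals"].to_numpy(), results["away_goals"].to_numpy()
    home_pts = np.where(hg > ag, 3, np.where(hg == ag, 1, 0))
    away_pts = np.where(ag > hg, 3, np.where(hg == ag, 1, 0))
    home = pd.DataFrame({"team": results["home_team"].to_numpy(),
                         "opponent": results["away_team"].to_numpy(),
                         "venue": "H", "points": home_pts})
    away = pd.DataFrame({"team": results["away_team"].to_numpy(),
                         "opponent": results["home_team"].to_numpy(),
                         "venue": "A", "points": away_pts})
    return pd.concat([home, away], ignore_index=True)

def points_vs_last_season(this_season: pd.DataFrame,
                          last_season: pd.DataFrame) -> pd.Series:
    """Total (points now - points last season) over corresponding fixtures."""
    merged = long_points(this_season).merge(
        long_points(last_season),
        on=["team", "opponent", "venue"], suffixes=("_now", "_last"))
    diff = merged["points_now"] - merged["points_last"]
    return diff.groupby(merged["team"]).sum().sort_values(ascending=False)
```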

It’s not the first time I’m doing this analysis. I’d done it once in 2013-14, and once in 2014-15. You will notice that the graphs look similar as well – that’s how lazy I am.

Anyways, this is the points differential thus far compared to corresponding fixtures of last season. 

[Chart: points difference per team versus corresponding fixtures last season]

Leicester are the most improved team from last season, having taken 8 points more than in the corresponding fixtures. Sheffield United, albeit starting from a low base, have also done extremely well. And last season’s runners-up Liverpool are on plus 6.

The team that has done worst relative to last season is Tottenham Hotspur, at minus 13. Key players entering the final years of their contracts without signing extensions, and scanty recruitment over the last 2-3 years, haven’t helped. And then there is Manchester City at minus 9!

So assuming the rest of the season’s fixtures go according to last season’s corresponding fixtures, what will the final table look like at the end of the season?

We see that if Liverpool replicate their results from last season for the rest of the fixtures, they should win the league comfortably.

What is more interesting is the gaps between 1-2, 2-3 and 3-4. Each of the top three positions is likely to be decided “comfortably”, with a fairly congested mid-table.

As mentioned earlier, this kind of analysis is unfair to the promoted teams. Based on the start they’ve had, it is highly unlikely that Sheffield United will get relegated.

We’ll repeat this analysis after a couple of months to see where the league stands!

Segmentation and machine learning

For best results, use machine learning to do customer segmentation, but then get humans with domain knowledge to validate the segments

There are two common ways in which people do customer segmentation. The “traditional” method is to manually define the axes along which the customers will be segmented, and then simply look through the data to find the characteristics and size of each segment.

Then there is the “data science” way of doing it, which is to ignore all intuition, and simply use some method such as K-means clustering and “do gymnastics” with the data and find the clusters.

A quantitative extreme of this method is to do gymnastics with your data, get segments out of it, and quantitatively “take action” on them without really bothering to figure out what each cluster represents. Loosely speaking, this is how a lot of recommendation systems work nowadays – some algorithm somewhere finds people similar to you based on your behaviour, and recommends to you what they liked.

I usually prefer a sort of middle ground. I like to let the algorithms (k-means easily being my favourite) come up with the segments based on the data, and then have a bunch of humans look at the segments and make sense of them.

Basically, whatever segments are thrown up by the algorithm need to be validated by human intuition. Getting counterintuitive clusters is also not a problem – on several occasions, the people I’ve validated the clusters with (usually clients) have used the counterintuitive clusters to discover bugs, gaps in the data, or patterns that they didn’t know of earlier.

Also, when it comes to validating clusters, it is always useful to involve people with domain knowledge. This in turn means that whatever clusters you’ve generated, you need to be able to represent them in a human-readable format. The best way of doing that is to take the cluster centres and present them in some “physical”, interpretable manner.
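
As a minimal sketch of that middle ground (my own illustration; the customer table and its columns are hypothetical): let k-means find the segments, then print the cluster centres in the original units so domain experts can sanity-check them.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer features; in practice this comes from your own data.
customers = pd.DataFrame({
    "annual_spend":    [1200, 300, 150, 5000, 4200, 80, 950, 3100],
    "visits_per_year": [24,   5,   2,   40,   35,   1,  18,  30],
    "avg_basket_size": [50,   60,  75,  125,  120,  80, 53,  103],
})

# Cluster on standardised features so no single column dominates the distance.
scaler = StandardScaler()
X = scaler.fit_transform(customers)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Convert cluster centres back to the original units -- this is the
# "human-readable" view that domain experts validate (or push back on).
centres = pd.DataFrame(
    scaler.inverse_transform(km.cluster_centers_),
    columns=customers.columns,
)
centres["segment_size"] = pd.Series(km.labels_).value_counts().sort_index().values
print(centres.round(1))
```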

I started writing this post some three days ago and am only getting to finish it now. Unfortunately, in the meantime I’ve forgotten the exact motivation for why I started writing it. If I recall it, I’ll maybe do another post.

Fishing in data pukes

When a data puke is presented periodically, consumers of the puke learn to “fish” for insights in it. 

I’ve been wondering why data pukes are so common. After all, they need significant effort on the part of the consumer to understand what is happening, and to get any sort of insight. In contrast, a well-designed dashboard presents the information crisply and concisely.

In practical life, though, most business reports and dashboards I come across can at best be described as data pukes. There is data all over the place, and little insight to help the consumer find what they’re looking for. In most cases, there is no customisation either.

The thing with data pukes is that data pukes beget data pukes. The first time you come across a report or dashboard that is a data puke, and you have no choice but to consume it, you work hard to get your useful nuggets from it. The next time you come across the same data puke (most likely in the next edition of the report, or the next time you come across the dashboard), it takes less effort for you to get your insight. Soon enough, having been exposed to the data puke multiple times, you become an expert at getting insight out of it.

Your ability to sift through this particular data puke and make sense of it becomes your competitive advantage. And so you demand that the puker continue to puke out the data in the same manner. Even if they were to figure out that they can present it in a better way, you (and people like you) will have none of that, for that will then chip away at your competitive advantage.

And so the data puke continues.


Taking Intelligence For Granted

There was a point in time when the use of artificial intelligence or machine learning or any other kind of intelligence in a product was a source of competitive advantage and differentiation. Nowadays, however, people have got so spoiled by the intelligence in the products they use that it has become more of a hygiene factor.

Take this morning’s post, for example. One way to look at it is that Spotify, with its customisation algorithms and recommendations, has spoiled me so much that I find Amazon’s pushing of Indian music irritating (Amazon’s approach can be called “naive customisation” – they push Indian music to me only because I’m based in India, without learning further from my preferences).

Had I not been exposed to the more intelligent customisation that Spotify offers, I might have found Amazon’s naive customisation interesting. However, Spotify’s degree of customisation has spoilt me so much that Amazon is simply inadequate.

This expectation of intelligence goes beyond product and service classes. When we get used to Spotify recommending music we like based on our preferences, we hold Netflix’s recommendation algorithm to a higher standard. We question why the Flipkart homepage is not customised to us based on our previous shopping. Or why Google Maps doesn’t learn that some of us don’t like driving through small roads when we can help it.

That customers take intelligence for granted nowadays means that businesses have to invest more in offering this intelligence. Easy-to-use data analysis and machine learning packages mean that at least some part of an industry uses intelligence in at least some form (even if they might do it badly when they fail to throw human intelligence into the mix!).

So if you are in the business of selling to end customers, keep in mind that they are used to seeing intelligence everywhere around them, and whether they state it or not, they expect it from you.

More on statistics and machine learning

I’m thinking about a client problem right now, and it struck me that something we need to predict can be modelled as a function of a few other things that we will know.

Initially I was thinking about it from the machine learning perspective, and my thought process went “this can be modelled as a function of X, Y and Z. Once this is modelled, then we can use X, Y and Z to predict this going forward”.

And then a minute later I context-switched into the statistical way of thinking. Now my thinking went “I think this can be modelled as a function of X, Y and Z. Let me build a quick model to check the goodness of fit, and whether a signal actually exists”.

Now this might reflect my own biases, and my own processes for learning to do statistics and machine learning, but one important difference I find is that in statistics you are concerned about the goodness of fit, and whether there is a “signal” at all.

While in machine learning we also look at predictive ability (area under the ROC curve and all that), there is a bit of a delay between the time we build the model and the time we look at the goodness of fit. What this means is that we can sometimes get a bit too certain about the models we want to build, without first asking whether they make sense and whether there’s a signal at all.

For example, in the machine learning world, R squared barely figures in regression – the only thing that matters is how well you can predict out of sample. So while you’re building the regression (machine learning) model, you don’t have immediate feedback on what to include, what to exclude, and whether there is a signal.
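
A toy illustration of the two workflows (my own sketch, not from the post), with synthetic data standing in for the hypothetical X, Y and Z:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))              # three candidate predictors
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only the first one carries any signal

# The "statistics" way: fit, then immediately ask how good the fit is and
# whether each variable shows a signal at all.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print("In-sample R-squared:", round(ols.rsquared, 3))
print("Coefficient p-values:", ols.pvalues.round(4))   # weak variables show up here

# The "machine learning" way: hold out data, fit, and judge the model purely
# by how well it predicts out of sample -- the signal question comes later.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("Out-of-sample MSE:", round(mean_squared_error(y_te, model.predict(X_te)), 3))
```

The first workflow gives the fit and signal feedback immediately; the second only tells you, after the fact, how well the model predicts.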

I must remind you that machine learning methods are typically used when we are dealing with really high dimensional data, and where the signal usually exists in the interplay between explanatory variables rather than in a single explanatory variable. Statistics, on the other hand, is used more for low dimensional problems where each variable has reasonable predictive power by itself.

It is possibly a quirk of how the two disciplines are practised that statistics people are inherently more sceptical about the existence of a signal, while machine learning guys are more certain that their model makes sense.

What do you think?

Periodicals and Dashboards

The purpose of a dashboard is to give you a live view of what is happening with the system. Take, for example, the instrument it is named after – the car dashboard. It tells you, at this moment, what the speed of the car is, along with other indicators such as which lights are on, the engine temperature, fuel levels, etc.

Not all reports, however, need to be dashboards. Some reports can be periodicals. These periodicals don’t tell you what’s happening at a moment, but give you a view of what happened in or at the end of a certain period. Think, for example, of classic periodicals such as newspapers or magazines, in contrast to online newspapers or magazines.

Periodicals tell you the state of a system at a certain point in time, and also give information about what happened to the system in the preceding period. So the financial daily, for example, tells you where the stock market closed the previous day, and how it had moved over the preceding day, month, year, etc.

Doing away with metaphors, business reporting can be classified into periodicals and dashboards. And they work exactly like their metaphorical counterparts. Periodical reports are produced periodically and tell you what happened in a certain period, or at a certain point of time, in the past. A good example is company financials – an income statement and a balance sheet respectively describe what happened over a period and where the company stood at a point in time.

Once a periodical is produced, it is frozen in time for posterity. Another edition will be produced at the end of the next period, but it is a new edition – it adds to the earlier periodical rather than replacing it. Periodicals thus have historical value, and because they are preserved, they need to be designed more carefully.

Dashboards, on the other hand, are fleeting, and not usually preserved for posterity – they get overwritten. Whether all systems are up this minute matters only this minute; if you haven’t reacted to the report by then, the information ceases to be of importance the next minute (of course, some aspects might still be important at a later date, and those will be captured in the next periodical).

When we are designing business reports and other “business intelligence systems” we need to be cognisant of whether we are producing a dashboard or a periodical. The fashion nowadays is to produce everything as a dashboard, perhaps because there are popular dashboarding tools available.

However, dashboards are expensive. For one, they need a constant connection to be maintained to the “system” (database or data warehouse or data lake or whatever other storage unit, in the business reporting sense). Also, by definition they are not stored, and if you need to store them, you have to decide upon a frequency of storage – which makes it a periodical anyway.

So companies can save significantly on resources (compute and storage) by switching from dashboards (which everyone seems to think in terms of) to periodicals. The key here is to get the frequency of the periodical right – too frequent and people will get bugged. Not frequent enough, and people will get bugged again due to lack of information. Given the tools and technologies at hand, we can even make reports “on demand” (for stuff not used by too many people).
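
To make the contrast concrete, here is a tiny sketch of the “periodical” pattern (my own illustration; the SQLite file, the sales table and its columns are assumptions): the query runs on a schedule, each edition is frozen as a dated file, and no live connection to the source system is needed in between.

```python
import sqlite3
from datetime import date
from pathlib import Path

import pandas as pd

def produce_weekly_periodical(db_path: str = "business.db",
                              out_dir: str = "periodicals") -> Path:
    """Query the source system once and freeze the result as this week's edition."""
    with sqlite3.connect(db_path) as conn:
        report = pd.read_sql_query(
            "SELECT region, SUM(revenue) AS revenue "
            "FROM sales WHERE week = (SELECT MAX(week) FROM sales) "
            "GROUP BY region",
            conn,
        )
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    edition = out / f"revenue_report_{date.today().isoformat()}.csv"
    report.to_csv(edition, index=False)  # frozen for posterity; next week is a new file
    return edition
```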

Surveying Income

For a long time now, I’ve been sceptical of the practice of finding out the average income in a country or state or city or locality by doing a random survey. The argument I’ve made is “whether you keep Mukesh Ambani in the sample or not makes a huge difference in your estimate”. So far, though, I hadn’t been able to make a proper mathematical argument.

In the course of writing a piece for Bloomberg Quint (my first for that publication), I figured out a precise mathematical argument. Basically, incomes are distributed according to a power law, and the exponent of this power law is such that the variance is not defined. And hence the Central Limit Theorem isn’t applicable.

OK let me explain that in English. The reason sample surveys work is due to a result known as the Central Limit Theorem. This states that for a distribution with finite mean and variance, the average of a random sample of data points is not very far from the average of the population, and the difference follows a normal distribution with zero mean and variance that is inversely proportional to the number of points surveyed.

So if you want to find out the average height of the population of adults in an area, you can simply take a random sample, find out their heights and you can estimate the distribution of the average height of people in that area. It is similar with voting intention – as long as the sample of people you survey is random (and without bias), the average of their voting intention can tell you with high confidence the voting intention of the population.

This, however, doesn’t work for income. Based on data from the Indian Income Tax department, I could confirm (what theory states) that income in India follows a power law distribution. As I wrote in my piece:

The basic feature of a power law distribution is that it is self-similar – where a part of the distribution looks like the entire distribution.

Based on the income tax returns data, the number of taxpayers earning more than Rs 50 lakh is 40 times the number of taxpayers earning over Rs 5 crore.
The ratio of the number of people earning more than Rs 1 crore to the number of people earning over Rs 10 crore is 38.
About 36 times as many people earn more than Rs 5 crore as do people earning more than Rs 50 crore.

In other words, if you increase the income limit by a factor of 10, the number of people who earn over that limit falls by a factor between 35 and 40. This translates to a power law exponent between 1.55 and 1.6 (log 35 to base 10 and log 40 to base 10 respectively).
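
To spell out that arithmetic (my own working, using the ratios quoted above): if the number of people earning more than $x$ follows a power law $N(>x) \propto x^{-\alpha}$, then

$$\frac{N(>x)}{N(>10x)} = 10^{\alpha} \quad\Rightarrow\quad \alpha = \log_{10}\frac{N(>x)}{N(>10x)},$$

so a drop by a factor of 35 to 40 for every tenfold increase in the limit gives $\alpha = \log_{10} 35$ to $\log_{10} 40$, i.e. roughly 1.55 to 1.6.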

Now power laws have a quirk – their mean and variance are not always defined. If the exponent of the power law is less than 1, the mean is not defined. If the exponent is less than 2, then the distribution doesn’t have a defined variance. So in this case, with an exponent around 1.6, the distribution of income in India has a well-defined mean but no well-defined variance.

To recall, the central limit theorem states that the sample mean follows a normal distribution centred at the population mean, with a variance of $\frac{\sigma^2}{n}$, where $\sigma$ is the standard deviation of the underlying distribution. And when the underlying distribution itself is a power law with an exponent less than 2 (as is the case in India), $\sigma$ itself is not defined.

Which means the distribution of the sample mean around the population mean has infinite variance. Which means the sample mean tells you absolutely nothing!
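
A quick simulation conveys how badly the normal approximation behaves here (my own sketch, not from the piece; it compares sample means from a Pareto distribution with tail exponent 1.6 against an exponential distribution with the same mean):

```python
import numpy as np

rng = np.random.default_rng(42)

alpha = 1.6      # tail exponent in the range estimated from the income tax data
n = 10_000       # people covered per "survey"
trials = 1_000   # number of independent surveys

# Classical Pareto with minimum income 1: numpy's pareto() samples the Lomax
# distribution, so add 1 to get the Pareto I form.
pareto_means = np.array([(rng.pareto(alpha, n) + 1).mean() for _ in range(trials)])

# Exponential distribution with the same (finite) mean, for comparison.
true_mean = alpha / (alpha - 1)
expo_means = np.array([rng.exponential(true_mean, n).mean() for _ in range(trials)])

print("True mean of both distributions:", round(true_mean, 3))
print("Pareto survey means, 5th-95th percentile:",
      np.percentile(pareto_means, [5, 95]).round(3))
print("Exponential survey means, 5th-95th percentile:",
      np.percentile(expo_means, [5, 95]).round(3))
# The exponential survey means cluster tightly around the true mean (the CLT at
# work); the Pareto survey means are far more spread out and skewed by the
# occasional huge income -- the "Mukesh Ambani in the sample" problem.
```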

And hence, surveying is not a good way to find the average income of a population.

Vlogging!

The first seed was sown in my head by Harish “the Psycho” J, who told me a few months back that nobody reads blogs any more, and that I should start making “analytics videos” to increase my reach and hopefully hit a new kind of audience with my work.

While the idea was great, I wasn’t sure for a long time what videos I could make. After all, I’m not the most technical guy around, and I had no patience for making videos on “how to use regression” and stuff like that. I needed a topic that would be both potentially catchy and something where I could add value. So the idea remained an idea.

For the last four or five years, my most common lunchtime activity has been to watch chess videos. I subscribe to the YouTube channels of Daniel King and Agadmator, and most days when I eat lunch alone at home are spent watching their analyses of games. Usually this routine gets disrupted on Fridays when the wife works from home (she positively hates these videos), but one Friday a couple of months back I decided to ignore her and watch the videos anyway (she was in her room working).

She had come out to get herself another serving of whatever she had made that day, saw me watching the videos, and suddenly asked me why I couldn’t make such videos as well. She has seen me work over the last seven years to build what I think is a fairly cool cricket visualisation, and said that I should use it to make little videos analysing cricket matches.

And since then my constant “background process” has been to prepare for these videos. Earlier, Stephen Rushe of Cricsheet used to unfailingly upload ball by ball data of all cricket matches as soon as they were done. However, two years back he went into “maintenance mode” and has stopped updating the data. And so I needed a method to get data as well.

Here, I must acknowledge the contributions of Joe Harris of White Ball Analytics, who not only showed me the APIs to get ball by ball data of cricket matches, but also gave very helpful inputs on how to make the visualisation more intuitive, and palatable to the normal cricket fan who hasn’t seen such a thing before. Joe has his own win probability model based on ball by ball data, which I think is possibly superior to mine in a lot of scenarios (my model does badly in high-scoring run chases), though I’ve continued to use my own model.

So finally the data is ready, I have a much improved visualisation compared to what I had during the IPL last year, and I’ve created what I think is a nice app using the Shiny package, which you can check out for yourself here. This covers all T20 international games, and you can use the app to see the “story of each game”.

And this is where the vlogging comes in – in order to explain how the model works and how to use it, I’ve created a short video. You can watch it here:

While I still have a long way to go in terms of my delivery, you can see that the video has come out rather well. There are no sync issues, and you can see my face in one corner as well. This was possible thanks to my school friend Sunil Kowlgi’s Outklip app. It’s a pretty easy-to-use Chrome app, and the videos are immediately available on the platform. There is quick YouTube integration as well, for you to upload them.

And this is not a one-time effort – going forward I’ll be making videos of limited-overs games, analysing them using my app, and posting them on my YouTube channel (or maybe I’ll make a new channel for these videos – I’ll keep you updated). I hope to become a regular vlogger!

So in the meantime, watch the above video. And give my app a spin. Soon I’ll be releasing versions covering One Day Internationals and franchise T20s as well.