Stirring the pile efficiently

Warning: this is a technical post, and involves some code.

As I’ve ranted a fair bit on this blog over the last year, a lot of “machine learning” in the industry can be described as “stirring the pile”. Regular readers of this blog will be familiar with this image from XKCD by now:

Source: https://xkcd.com/1838/

Basically, people simply take datasets and apply all the machine learning techniques they have heard of (implementation is damn easy – scikit-learn allows you to implement just about any model in three similar-looking lines of code; see my code here to see how similar the implementations are).
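
To illustrate just how similar those three lines look across techniques, here is a minimal sketch (the toy data and the choice of models are illustrative, not the exact contents of my notebook):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Toy data: two standard-normal predictors, a clean binary target
rng = np.random.RandomState(0)
X = rng.randn(500, 2)
z = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)  # "true" inside the unit circle

# The same three similar-looking lines work for just about any model:
for Model in (LogisticRegression, SVC, RandomForestClassifier):
    clf = Model()            # 1. instantiate
    clf.fit(X, z)            # 2. fit
    z_hat = clf.predict(X)   # 3. predict (in-sample)
```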

So I thought I’d help these pile-stirrers by giving some hints on which method to use for different kinds of data. I’ve over-simplified things, so assume that:

  1. There are two predictor variables, X and Y. The predicted variable Z is binary.
  2. X and Y are each drawn from a standard normal distribution.
  3. The predicted variable Z is “clean” – there is a region in the X-Y plane where Z is always “true” and another region where Z is always “false”.
  4. So the idea is to see which machine learning techniques are good at identifying which kinds of geometrical figures.
  5. Everything is done “in-sample”. Given the nature of the data, it doesn’t matter whether we do it in-sample or out-of-sample. (A quick code sketch of this setup follows the list.)

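A minimal sketch of this setup, under those assumptions (the specific shapes and names here are illustrative, not the exact notebook code):

```python
import numpy as np

rng = np.random.RandomState(42)
n = 2000

# Assumptions 1 and 2: two predictors, each drawn from a standard normal
X = rng.randn(n)
Y = rng.randn(n)

# Assumption 3: Z is "clean" -- deterministically true inside a region.
# Two example shapes (the notebook uses several):
Z_circle = X ** 2 + Y ** 2 < 1                  # a disc around the origin
Z_square = (np.abs(X) < 1) & (np.abs(Y) < 1)    # an axis-aligned square

features = np.column_stack([X, Y])  # what gets fed to each classifier
```
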
For those who understand Python (and every pile-stirrer worth his salt is excellent at Python), I’ve put my code in a nice Jupyter Notebook, which can be found here.

So this is what the output looks like. The top row shows the “true values” of Z. Then we have a row for each of the techniques we’ve used, which shows how well these techniques can identify the pattern given in the top row (click on the image for full size).
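
For the curious, each row of predictions can be produced by evaluating the fitted model on a dense grid of (X, Y) points and plotting what it predicts – roughly like this sketch (assuming `clf` is a classifier fitted as above; not the exact notebook code):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_predicted_region(clf, lim=3.0, steps=200):
    """Show the region of the X-Y plane a fitted classifier predicts as true."""
    xs = np.linspace(-lim, lim, steps)
    xx, yy = np.meshgrid(xs, xs)
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    zz = clf.predict(grid).reshape(xx.shape)      # predictions over the grid
    plt.imshow(zz, extent=(-lim, lim, -lim, lim), origin="lower")
    plt.title(type(clf).__name__)
    plt.show()
```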

As you can see, I’ve chosen some common geometrical shapes and seen which methods are good at identifying those. A few pertinent observations:

  1. Logistic regression and linear SVM are broadly similar, and both are shit for this kind of dataset. Being linear models, all they can do is draw a straight line through the X-Y plane, so they fail to deal with non-linear patterns.
  2. SVM with an RBF kernel is better, but it fails when there are multiple “true regions” in the dataset. At least it’s good at figuring out some non-linear patterns. However, it can’t figure out the triangle or the square – it draws curves around them instead.
  3. Naive Bayes (I’ve never properly understood this technique even though I’m pretty good at Bayesian statistics, but I gather it’s commonly used; and since I’ve used default parameters I’m not even sure how it’s “Bayesian”) can identify some patterns but does badly when there are disjoint regions where Z is true.
  4. Ensemble methods such as Random Forests and Gradient Boosting do rather well on all the given inputs. They do well for both polygons and curves. AdaBoost mostly does well but trips up on the hyperbola.
  5. For some reason, Lasso fails to give an output (in the true spirit of pile-stirring, I didn’t explore why). Ridge is again a linear regression method and so does badly on this non-linear dataset.
  6. Neural networks (a multi-layer perceptron, to be precise) do reasonably well, but can’t figure out the sharp edges of the polygons.
  7. Decision trees again do rather well. I’m pleasantly surprised that they pick up and classify the disjoint sets (multi-circle and hyperbola) correctly. Maybe it’s because a decision tree partitions the plane into axis-aligned rectangles, so disjoint regions come naturally to it.

Of course, the datasets one comes across in real life are never such simple geometrical figures, but I hope this set can give you some idea of which techniques to use where.

At least I hope that this makes you think about the suitability of different techniques for the data rather than simply applying all the techniques you know and then picking the one that performs best on your given training and test data.

That would count as nothing different from p-hacking, and there’s an XKCD for that as well!

Source: https://xkcd.com/882/

Duckworth Lewis Book

Yesterday at the local council library, I came across this book called “Duckworth Lewis”, written by Frank Duckworth and Tony Lewis (who “invented” the eponymous rain rule). While I’d never heard of the book, given my general interest in sports analytics I picked it up, and duly finished reading it by this morning.

The good thing about the book is that though it’s in some ways a collective autobiography of Duckworth and Lewis, they keep their personal life details to a minimum and mostly focus on what they are famous for. There are occasions when they go into too much detail describing a trip to Australia or the West Indies, but it’s easy to filter out such stuff and read the book for the rain rule.

Then again, it isn’t a great book. If you’re not interested in cricket analytics, there isn’t much in it for you. But given that it’s a quick read, it doesn’t hurt much! Anyway, here are some pertinent observations:

  1. Duckworth and Lewis didn’t get paid much for their method. They managed to get the ICC to accept it sometime in the mid-90s, but it wasn’t until the early 2000s, by which time Lewis had become a business school professor, that they managed to strike a financial deal with the ICC. Even then, they make it sound like they didn’t make much money off it.
  2. The method came about when Duckworth quickly put together something for a statistics conference he was organising, after another speaker who was supposed to talk about cricket pulled out at the last minute. Lewis later came across the paper, and got one of his undergrad students to do a project on it. The two subsequently collaborated.
  3. It’s amazing (not in a positive way) what kind of data went into the method. Until the early 2000s, the only dataset used to calibrate the method was the one put together by Lewis’s undergrad – mostly English county games, played over 40, 55 and 60 overs. Even after that, the frequency of updating with new data (which would reflect new playing styles and strategies) has been rather low.
  4. The system doesn’t seem to have been particularly well engineered – it was initially simply coded up by Duckworth, and until as late as 2007 it ran on the DOS operating system. It was only in 2008 or so, when Steven Stern joined the team (the method is now called DLS to include his name), that a Windows version was introduced.
  5. There is very little discussion of alternative methods, and though there is a chapter about them, Duckworth and Lewis are rather dismissive. For example, another popular method is by this guy called V Jayadevan from Thrissur. Here is some excellent analysis by Srinivas Bhogle where he compares the two methods. Duckworth and Lewis spend a couple of pages listing scenarios where Jayadevan’s method doesn’t work, and then spend a paragraph disparaging Bhogle for his support of the VJD method.
  6. This was the biggest takeaway from the book for me – the Duckworth Lewis method doesn’t equalise the two teams’ probabilities of victory before and after the rain interruption. Instead, it equalises the margin between the teams before and after the break. So let’s say a team was 10 runs behind the DL “par score” when it rained. When the game restarts, the target is set such that the team is still 10 runs behind the par score! They make an attempt to explain why this is superior to equalising probabilities of winning, but don’t go too far with it. (See the toy sketch after this list.)
  7. The adoption of Duckworth Lewis seems like a fairly random event. Following the 1992 World Cup debacle (when South Africa’s target went from 22 off 13 balls to 22 off 1 ball after a rain break), there was a demand for new rain rules. Duckworth and Lewis somehow managed to explain their method to the ECB secretary. And since it was superior to everything then in existence, it simply got adopted. And then it became the incumbent, and hard to dislodge!
  8. There is no mention in the book of the inherent unfairness of the DL method (in that it can be unfair to some playing styles).

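A toy illustration of that margin-preservation point in code – the par scores below are numbers I’ve made up for illustration, not actual DL resource-table values:

```python
# Toy illustration: the revised target preserves the batting team's
# runs-behind-par margin across the interruption. All numbers here
# are invented, NOT actual Duckworth-Lewis resource-table values.
par_before_rain = 120    # hypothetical DL par score when it rains
actual_score = 110       # the batting team is 10 runs behind par
margin = actual_score - par_before_rain   # -10

par_after_restart = 95   # hypothetical par score for the shortened game
# The method resets things so the team is still 10 runs behind par:
equivalent_position = par_after_restart + margin   # 85

print(f"Behind par by {-margin} runs both before and after the break")
```
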
OK, this is already turning out to be a long post, but one final takeaway is that there’s a fair amount of randomness in sports analytics, and you shouldn’t get into it if your only potential customer is a national sporting body. In that sense, developments such as the IPL are good for sports analytics!

The (missing) Desk Quants of Main Street

A long time ago, I’d written about my experience as a Quant at an investment bank, and about how banks like mine were sitting on a pile of risk that could blow up at any time.

There were two problems, as I had documented then. Firstly, most quants I interacted with seemed to be solving maths problems rather than finance problems, without bothering about whether their models would stand the test of markets. Secondly, there was an element of groupthink: quant teams were largely homogeneous, and it was hard to progress while holding contrarian views.

Six years on, there has been no blowup, and in some sense banks are actually doing well (I mean, they’ve declined compared to the time just before the 2008 financial crisis but haven’t done that badly). There have been no real quant disasters (yes I know the Gaussian Copula gained infamy during the 2008 crisis, but I’m talking about a period after that crisis).

There can be many explanations for why banks have not had any quant blow-ups despite quants solving maths problems and all thinking alike, but the one I’m partial to is the presence of a “middle layer”.

Most of the quants I interacted with were “core” in the sense that they were not attached to any sales or trading desks. Banks also typically have a large cadre of “desk quants”, who are directly associated with trading teams, and who build models and help with day-to-day risk management, pricing, etc.

Since these desk quants work closely with the business, they turn out to be much more pragmatic than the core quants – they have a good understanding of the market and use the models more as guiding principles than as rules. On the other hand, they bring the benefits of quantitative models (and work of the core quants) into day-to-day business.

Back during the financial crisis, I’d jokingly suggested that other industries hire the quants who were now surplus to Wall Street’s requirements. Around the same time, DJ Patil et al came up with the concept of the “data scientist” and called it the “sexiest job of the 21st century”.

And so other industries started getting their own share of quants, or “data scientists” as they are now called. Nowadays it’s fashionable even for small companies, for whom data is not critical to the business, to have a data science team. Being in this profession now (I loathe calling myself a “data scientist” – I prefer “quant” or “analytics”), I’ve come across quite a few of those.

The problem I see with “data science” on “Main Street” (this phrase gained currency during the financial crisis as the opposite of Wall Street, in that it referred to “normal” businesses) is that it lacks the cadre of desk quants. Most data scientists are highly technical people who don’t necessarily have an understanding of the business they operate in.

Thanks to that, what I’ve noticed is that in most cases there is a chasm between the data scientists and the business, since they are unable to talk in a common language. As I’m prone to saying, this can go two ways – the business guys can either assume that the data science guys are geniuses and take their word as gospel, or they can totally disregard the data scientists as people who do some esoteric math and don’t really understand the world. In either case, the value added is suboptimal.

It is not hard to understand why “Main Street” doesn’t have a cadre of desk quants – it’s because of the way the data science industry has evolved. Quant work at investment banks has evolved over a long period of time – the Black-Scholes equation was proposed in the early 1970s. So quants were first recruited to work directly with the traders, and core quants (at the banks that have them) were a later addition, when banks realised that some quant functions could be centralised.

On the other hand, the whole “data science” growth has been rather sudden. The volume of data, cheap, incrementally available cloud storage, easy processing and the popularity of the phrase “data science” have all grown at a rapid rate in the last decade or so, and companies have scrambled to set up data teams. There has simply been no time to train people who get both the business and the data – and so the data scientists exist like addendums that are either worshipped or ignored.

Medium stats

So Medium sends me this email:

Congratulations! You are among the top 10% of readers and writers on Medium this year. As a small thank you, we’ve put together some highlights from your 2016.

Now, I hardly use Medium. I’ve maybe written one post there (a long time ago) and read only a little bit (blogs I really like I’ve put on RSS and read on Feedly). So when Medium tells me that I, who consider myself a light user, am “in the top 10%”, they’re really giving away the fact that the quality of usage on their site is pretty bad.

Sometimes it’s bloody easy to see through flattery! People need to be more careful about what the stats they’re putting out really convey!


Restaurants, deliveries and data

Delivery aggregators are moving customer data away from the retailer, who now has less knowledge about his customer. 

Ever since data collection and analysis became cheap (with cloud-based on-demand web servers and MapReduce), there have been attempts to collect as much data as possible and use it to do better business. I must admit to being part of this racket, too, as I try to convince potential clients to hire me so that I can tell them what to do with their data and how.

And one of the more popular areas where people have been trying to use data is in getting to “know their customer”. This is not a particularly new exercise – supermarkets, for example, have been offering loyalty cards so that they can correlate purchases across visits and get to know you better (as part of a consulting assignment, I once sat with my clients looking at a few supermarket bills. It was incredible how much we humans could infer about the customers by looking at those bills).

The more recent trend (now that it has become possible to analyse large amounts of data) is to capture “loyalties” across several stores or brands, so that affinities can be tracked across them and the customer understood better. Given data privacy issues, this has typically been done by third-party agents, who then sell the insights back to the companies whose data they collect. An early example of this is Payback, which links activities on your ICICI Bank account with other products (telecom providers, retailers, etc.) to gain superior insights into what you are like.

Nowadays, with cookie farming on the web, this is more common, and you have sites that track your web cookies to figure out correlations between your activities, and thus infer your lifestyle, so that better advertisements can be targeted at you.

In the last two or three years, significant investments have been made by restaurants and retailers to install devices to get to know their customers better. Traditional retailers are being fitted with point-of-sale devices (the provision of these devices is a highly fragmented market). Restaurants are trying to introduce loyalty schemes (again a highly fragmented market). This is all an attempt to get to know the customer better. Except that middlemen are ruining it.

I’ve written a fair bit on middleman apps such as Grofers and Swiggy. They are basically delivery apps, which pick up goods for you from a store and deliver them to your place. A useful service, though, as I suggest in my posts linked above, probably overvalued. As a greater share of a restaurant’s or store’s business goes to such intermediaries, though, there is another threat to the seller – the loss of customer data.

When Grofers buys my groceries from my nearby store, it is unlikely to tell the store who it is buying for; similarly when Swiggy buys my food from a restaurant. This means the sellers’ loyalty schemes go for a toss. Of course, not offering the same loyalty programmes to the delivery companies is a no-brainer. But the sellers are also missing out on the customer data that they would otherwise have captured (had they sold directly to the customer).

A good thing about Grofers and Swiggy is that they’ve hit the market at a time when sellers are yet to fully realise the benefits of capturing customer data, so they may be able to capture such data cheaply, and maybe sell it back to their seller clients. Yet, if you are a retailer selling to such aggregators and you value your customer data, make sure you get your pound of flesh from these guys.

On Uppi2’s top rating

So it appears that my former neighbour Upendra’s new magnum opus Uppi2 is currently the top-rated movie on IMDB, with a rating of 9.7/10.0. The Times of India is so surprised that it has done an entire story about it, which I’ve screenshotted here.

The story also mentions that another Kannada movie, RangiTaranga (which I’ve reviewed here), is in third spot, with a rating of 9.4 out of 10. This might lead you to wonder why Kannada movies have suddenly turned out to be so good. The answer, however, lies in simple statistics.

The first reason is that both are relatively new movies, and hence their ratings suffer from small-sample bias. Of course, the sample isn’t that small – Uppi2 has received 1900 votes, three times as many as its 1999 prequel Upendra. Yet, it being a new movie, only a subset of the small set of people who have watched it so far will have rated it.

The second is selection bias. The people who see a movie in its first week are usually the hardcore fans – in this case, hardcore fans of Upendra’s movies. And hardcore fans usually find it hard to have their beliefs shaken (a version of what I’ve written about online opinions for Mint here), and hence they all give the movie a high rating.

As time goes by, and people who are not hardcore Upendra fans start watching and reviewing the movie, the ratings are likely to rationalise. Finally, ratings are easy to rig, especially when samples are small. For example, an Upendra fan club might have decided to play up the movie online by voting en masse on IMDB and pushing up its rating. This might explain both why the movie already has 1900 ratings within four days, and why most of them are extremely positive.

The solution is for the rating system (IMDB in this case) to give more weightage to “verified ratings” (by people who have rated many movies in the past, for instance), or to remove highly correlated ratings. Right now, the rating algorithm seems pretty naive.
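
One simple way of doing such weighting is a Bayesian average, which shrinks small-sample ratings towards the site-wide mean (a formula of this kind has reportedly been used for IMDB’s Top 250). A minimal sketch, with made-up numbers:

```python
# Bayesian-average rating: with few votes, the rating is pulled towards
# the site-wide mean, so a 9.7 from a small fan base gets discounted.
# The global mean and vote threshold below are made-up numbers.
def weighted_rating(avg_rating, num_votes, global_mean=6.8, min_votes=5000):
    w = num_votes / (num_votes + min_votes)
    return w * avg_rating + (1 - w) * global_mean

print(weighted_rating(9.7, 1900))     # ~7.6 -- heavily shrunk
print(weighted_rating(9.7, 500000))   # ~9.7 -- survives a large sample
```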

Coming back to Uppi2, from what I’ve heard from people, the movie is supposed to be really good, though perhaps not 9.7 good. I plan to watch the movie in the next few days and will write a review once I do so.

Meanwhile, read this absolutely brilliant review (in Kannada) written by this guy called “Jogi”.

The Ramayana and the Mahabharata principles

An army of monkeys can’t win you a complex war like the Mahabharata. For that you need a clever charioteer.

A business development meeting didn’t go well. The potential client indicated his preference for a different kind of organisation to solve his problem. I was about to say “why would you go for an army of monkeys to solve this problem when you can.. ” but I couldn’t think of a clever end to the sentence. So I ended up not saying it.

Later on, I was thinking of the line and of good ways to end it. My mind went back to Hindu mythology. The Ramayana war was won with an army of monkeys, of course. The Mahabharata war was won with the support of a clever and skilled consultant (Krishna didn’t actually fight the war, did he?). “Why would you go for an army of monkeys to solve this problem when you can hire a studmax charioteer?”, I finally phrased it. It still doesn’t have that ring. But it’s a useful concept anyway.

Extending the analogy, the Ramayana war was different from the Mahabharata war. In the former, the enemy was a ten-headed demon who had abducted the hero’s wife. Despite what alternate retellings say, it was all mostly black and white. A simple war made complex by the special prowess of the enemy (ten heads, special weaponry, etc.). The army of monkeys proved decisive, and the war was won.

The Mahabharata war, on the other hand, was much more complex. Even mainstream retellings talk about the “shades of grey” in the war, and both sides had their share of pluses and minuses. The enemy here was a bunch of cousins, who had snatched away the protagonists’ kingdom. Special weaponry existed on both sides. Sheer brute force, however, wouldn’t do. The Mahabharata war couldn’t be won with an army of monkeys. Its complexity meant that what it needed was skilled strategic guidance, and a bit of cunning, which is what Krishna provided when he was hired by Arjuna, ostensibly as a charioteer. Krishna’s entire army (highly trained and skilled, but mostly footsoldiers) fought on the opposite side, but couldn’t influence the outcome.

So when the problem at hand is simple, and the only complexity is in the size or the special prowess of the enemy, you will do well to hire an army of monkeys. They’ll work best for you there. But when faced with a situation whose complexity goes well beyond the enemy’s prowess, you need a charioteer. So make the choice based on the kind of problem you are facing.