Waiting for Kumaraswamy’s Tiger

Finally, last week Softbank announced that it had closed its $9.3 billion investment in Uber. Since the deal had been in the making for a long time, it is not news in itself. What is news is what Softbank’s Rajeev Misra told Uber – to “focus on its core markets in US, Europe and Latin America”.

One way of reading this message is to see it as “stay away from competing with our other investments in Didi, Grab and Ola”. If Uber takes Misra’s words seriously (they had better, since Softbank is now probably Uber’s second biggest shareholder, after Travis Kalanick), it is likely that they’ll be less aggressive in Asian markets, including India. This is not going to be good for customers (both drivers and passengers) of taxi marketplaces in India.

Until 2014, the Indian market had three vibrant cab marketplaces – Uber, Ola and TaxiForSure. Then in early 2015, TaxiForSure was unable to raise further funding and sold itself to Ola, turning the market into a duopoly. Back then I’d written about why it was a bad deal for Indian customers, and hoped that another company would take TaxiForSure’s place.

Three years later, that has not come to be, and the Indian market continues to be a duopoly. When I visited Bangalore in December, I noticed that service levels on both Uber and Ola were significantly inferior to what I’d seen a year earlier, when I was living there. Now, if Uber were to cede ground to Ola in India (as Softbank implicitly wishes), things will get even worse.

Back in 2015, when TaxiForSure was shutting down, I had assumed that another corporate entity, perhaps Meru (which runs call taxis), would take its place. And for a really long time now there have been rumours of Reliance entering the cab marketplace business. Neither has come to be.

So this time my hopes have moved from corporates to politicians. The word on the street in Bangalore when I visited in December was that former Karnataka Chief Minister HD Kumaraswamy had partnered with cab driver associations to start a new cab marketplace, supposedly called “Tygr” (sic). The point of this marketplace, I was informed during my book launch event in Bangalore, was that it was going to be a “driver-oriented app”.

This marketplace, too, has been coming for a long time now, but with the Softbank deal, it can’t come soon enough. Yes, it is likely that it will not be a great app (if it is “too driver-oriented”, it won’t get passengers, and the drivers will subsequently disappear as well), but at least it will bring a sense of competition to the market and keep Ola honest. And hopefully there will also be similar competition in other cities in India, though it is unlikely that it will be Kumaraswamy who disrupts those markets.

A lot is made of the fact that investors like Warren Buffett own stock in all the major airlines in the US. Now, Softbank seems to be occupying that space in the cab marketplace business. It can’t be good for either drivers or passengers.

Patanjali going online

Mint has a piece on Baba Ramdev-led FMCG company Patanjali going online to further its sales.

Some may have seen the irony in Patanjali Ayurved Ltd tying up with foreign-owned/funded e-commerce companies, even as it swears to end the reign of foreign-owned consumer brands in the market.

Patanjali is only being pragmatic in doing what’s good for its own business, of being available where the consumers are. Its decision is one more pointer to the growing importance of e-commerce as a distribution channel for packaged consumer goods.

I have an entire chapter in my book dedicated to this – about how the internet has revolutionised distribution and retail. In it, I talk about Dollar Shave Club, pickle sellers from Sringeri and mobile phone manufacturers such as Xiaomi, which pioneered the “flash sale” concept. In another part of the book, I’ve written about how Amazon has revolutionised bookselling, first by selling online and then by pioneering e-books.

Whenever a new consumer goods company wants to set up shop, one of the hardest tasks is establishing a distribution network. Conventional distribution networks are typically several layers deep, and in order to get to the customer, each layer of the network needs to be adequately compensated.

Apart from the monetary cost, there is also the transaction cost of convincing each layer that it is worthwhile carrying the new seller’s goods. The other factor is that distributors at various levels are, in a sense, loyal to incumbent sellers (since the incumbents account for a large portion of their current business), making it harder for a new seller to break through.

The advantage with online retailers is that they compress the supply chain, with one entity replacing a whole network of distributors. This may not necessarily be cost-effective in money terms, since online retailers will seek to capture all the value that the layers of the current distribution chain are capturing. However, in terms of transaction costs it is significantly easier, since there is only one layer to get past, and online retailers seldom have loyalty or exclusive relationships.

In fact, the size and bargaining power of online retailers (vis-a-vis offline distributors) means that if there is an exclusive relationship, it is the retailer who holds the exclusive rights and not the seller.

In Patanjali’s case, they have already established a wide offline network with exclusive stores and partnerships, but my sense is that they are hitting the limits of that distribution. Thanks to Baba Ramdev’s popularity as a yoga guru, Patanjali enjoys strong brand recall, and it appears as if their distribution is unable to keep pace with their brand.

From this perspective, going online (through Amazon/Flipkart) is a rational strategy for them since with one deal they get significantly higher distribution power. Moreover, being a new brand, they don’t have legacy distributors who might get pissed off if they go online (this is a problem that the Unilevers of the world face).

So it is indeed a pragmatic decision by Patanjali to take the online route. And in the end, sheer commerce can trump nationalist tendencies and xenophobia.

PM’s Eleven

The first time I ever heard of Davos was in 1997, when then Indian Prime Minister HD Deve Gowda attended the conference at the ski resort and gave a speech. He was heavily pilloried by the Kannada media, and given the moniker “Davos Gowda”.

Maybe because of all the attention Deve Gowda received for the trip – and not in a good way – no Indian Prime Minister ventured there for another twenty-one years. Until, of course, Narendra Modi went there earlier this week and gave a speech that was apparently widely appreciated in China.

There is another thing that connects Modi and Deve Gowda as Prime Ministers (leaving aside trivialities such as both having been chief ministers of their respective states before becoming Prime Minister).

Back in 1996, when Deve Gowda was Prime Minister, Rahul Dravid, Venkatesh Prasad and Sunil Joshi made their Test debuts (on the tour of England). Anil Kumble and Javagal Srinath had long been fixtures in the Indian cricket team. Later that year, Sujith Somasunder played a couple of one-dayers. David Johnson played two Tests. And in early 1997, Doddanarasaiah Ganesh played a few Test matches.

In case you haven’t figured it out yet, all these cricketers came from Karnataka, the same state as the Prime Minister. During that season, it was normal for at least five players in the Indian Eleven to be from Karnataka. Since Deve Gowda had become Prime Minister around the same time, it was no surprise that the Indian cricket team was called “PM’s Eleven”. Coincidentally, the chairman of selectors at that point in time was Gundappa Vishwanath, who is also from Karnataka.

The Indian team playing in the current Test match in Johannesburg has four players from Gujarat. Now, this is not as noticeable as five players from Karnataka because Gujarat is home to three Ranji Trophy teams. Cheteshwar Pujara plays for Saurashtra, Parthiv Patel and Jasprit Bumrah play for Gujarat, and Hardik Pandya plays for Baroda. And Saurashtra’s Ravindra Jadeja is also part of the squad.

It had been a long time since one state had so dominated the Indian cricket team – perhaps we hadn’t seen this kind of domination since Karnataka’s in the late 1990s. And once again, the state dominating the Indian cricket team happens to be the Prime Minister’s home state.

So after a gap of twenty-one years, we had an Indian Prime Minister addressing Davos. And after a gap of twenty-one years, we have an Indian cricket team that can be called “PM’s Eleven”!

As Baada put it the other day, “Modi is the new Deve Gowda. Just without family and sleep”.

Update: I realised after posting that I have another post called “PM’s Eleven” on this blog. It was written in the UPA years.

Duckworth Lewis Book

Yesterday at the local council library, I came across this book called “Duckworth Lewis”, written by Frank Duckworth and Tony Lewis (who “invented” the eponymous rain rule). While I’d never heard of the book, given my general interest in sports analytics I picked it up, and duly finished reading it by this morning.

The good thing about the book is that though it’s in some ways a joint autobiography of Duckworth and Lewis, they keep the usual biographical details to a minimum and mostly focus on what they are famous for. There are occasions when they go into too much detail describing a trip to Australia or the West Indies, but it’s easy to filter out such stuff and read the book for the rain rule.

Then again, it isn’t a great book. If you’re not interested in cricket analytics there isn’t much in it for you. But given that it’s a quick read, it doesn’t hurt much! Anyway, here are some pertinent observations:

  1. Duckworth and Lewis didn’t get paid much for their method. They managed to get the ICC to accept it sometime in the mid-90s, but it wasn’t until the early 2000s, by which time Lewis had become a business school professor, that they managed to strike a financial deal with the ICC. Even then, they make it sound like they didn’t make much money off it.
  2. The method came about when Duckworth quickly put together something for a statistics conference he was organising, after another speaker who was supposed to talk about cricket pulled out at the last minute. Lewis later came across the paper, and got one of his undergrad students to do a project on it. The two subsequently collaborated.
  3. It’s amazing (and not in a good way) what little data went into the method. Until the early 2000s, the only dataset used to calibrate it was the one put together by Lewis’s undergrad student – mostly English county games played over 40, 55 and 60 overs. Even after that, the method has been updated with new data (reflecting new playing styles and strategies) rather infrequently.
  4. The system doesn’t seem to have been particularly well engineered as software – it was initially just coded up by Duckworth, and until as late as 2007 it ran on DOS. It was only in 2008 or so, when Steven Stern joined the team (the method is now called DLS to include his name), that a Windows version was introduced.
  5. There is very little discussion of alternative methods, and though there is a chapter about them, Duckworth and Lewis are rather dismissive of them. For example, another popular method is by V Jayadevan from Thrissur. Here is some excellent analysis by Srinivas Bhogle where he compares the two methods. Duckworth and Lewis spend a couple of pages listing a couple of scenarios where Jayadevan’s method doesn’t work, and then spend a paragraph disparaging Bhogle for his support of the VJD method.
  6. This was the biggest takeaway from the book for me – the Duckworth-Lewis method doesn’t equalise the two teams’ probabilities of victory before and after the rain interruption. Instead, it equalises the margin between a team and the “par score” before and after the break. So let’s say a team was 10 runs behind the DL par score when it rains. When the game restarts, the target is set such that the team is still 10 runs behind the par score! They make an attempt to explain why this is superior to equalising probabilities of winning, but don’t go too far with it. (A toy sketch of this follows the list.)
  7. The adoption of the Duckworth-Lewis method seems like a fairly random event. Following the 1992 World Cup debacle (when South Africa’s target went from 22 off 13 balls to 22 off 1 ball after a rain break), there was a demand for new rain rules. Duckworth and Lewis somehow managed to explain their method to the ECB secretary, and since it was superior to everything else around at the time, it simply got adopted. And then it became the incumbent, and was hard to dislodge!
  8. There is no mention in the book of the inherent unfairness of the DL method (in that it can be unfair to some playing styles).
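
To make point 6 concrete, here is a toy sketch in Python of how a resource-based target revision preserves the runs-behind-par margin. The resource percentages and the revision rule below are made up for illustration (the real Duckworth-Lewis tables also account for wickets lost); only the margin-preservation property is the point.

```python
# Toy illustration only: made-up "resources remaining" numbers, not the
# official Duckworth-Lewis tables (which also account for wickets lost).
TOY_RESOURCES = {50: 1.00, 30: 0.75, 20: 0.55, 0: 0.00}  # overs left -> resources

target = 240          # team 2 chasing 240 off 50 overs
score_at_stop = 50    # team 2's score when rain arrives with 30 overs left

resources_used = 1.00 - TOY_RESOURCES[30]              # 0.25
par_at_stop = target * resources_used                  # 60: team 2 is 10 behind par

# Rain cuts the innings so that only 20 overs remain on resumption
resources_lost = TOY_RESOURCES[30] - TOY_RESOURCES[20]  # 0.20
revised_target = target * (1 - resources_lost)          # 192

# Par score at the restart, computed against the revised target
par_at_restart = revised_target * resources_used / (1 - resources_lost)  # 60 again

print(par_at_stop, par_at_restart)  # both 60: the 10-run deficit is carried over
```

The thing to notice is that it is the deficit in runs, not the probability of winning, that survives the interruption.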

OK, this is already turning out to be a long post, but one final takeaway is that there’s a fair amount of randomness in sports analytics, and you shouldn’t get into it if your only potential customer is a national sporting body. In that sense, developments such as the IPL are good for sports analytics!

Machine learning and degrees of freedom

For starters, machine learning is not magic. It might appear like magic when you see Google Photos automatically tagging all your family members correctly, even in pictures from the day they were born. It might appear so when Siri or Alexa give a perfect response to your request. And the way AlphaZero plays chess is almost human!

But no, machine learning is not magic. I’d made a detailed argument about that in the second edition of my newsletter (subscribe if you haven’t already!).

One way to think of it is that the output of a machine learning model (which could be anything from “does this picture contain a cat?” to “is the speaker speaking in English?”) is the result of a mathematical formula, whose parameters are unknown at the beginning of the exercise.

As the system gets “trained” (of late I’ve avoided using the word “training” in the context of machine learning, preferring “calibration” instead – but anyway), the hitherto unknown parameters of the formula get adjusted so that the formula’s output matches the given data. Once the system has “seen” enough data, we have a model, which can then be applied to unseen data (I’m oversimplifying here).
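
As an illustration of what this “calibration” looks like, here is a minimal sketch using made-up data: a two-parameter formula whose parameters are nudged until its output matches the observed input-output pairs.

```python
# A minimal sketch (toy data, plain numpy) of what "calibration" means here:
# the formula is y = a*x + b, and the two unknown parameters a and b are
# repeatedly nudged until the formula's output matches the observed data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)   # data generated with a=3, b=2

a, b = 0.0, 0.0        # parameters start off unknown
lr = 0.02              # how much to nudge the parameters at each step
for _ in range(5000):
    err = (a * x + b) - y          # how far the formula's output is from the data
    a -= lr * (err * x).mean()     # nudge each parameter to reduce the error
    b -= lr * err.mean()

print(a, b)            # ends up close to 3 and 2: the formula has been "calibrated"
```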

The genius in machine learning comes in setting up mathematical formulae in such a way that given input-output pairs of data can be used to adjust the parameters of the formulae. The genius in deep learning, which has been all the rage this decade, comes from a 30-year-old mathematical breakthrough called “back propagation”. The reason it took until a few years ago to become a “thing” has to do with data availability and compute power (check this terrific piece in the MIT Tech Review about deep learning).

Within machine learning, the degree of complexity of a model can vary significantly. In an ordinary univariate least squares regression, for example, there are only two parameters the system can play with (slope and intercept of the regression line). Even a simple “shallow” neural network, on the other hand, has thousands of parameters.

Because a regression has so few parameters, the kinds of patterns the system can detect are rather limited (whatever you do, the system can only draw a straight line – nothing more!). Thus, regression is applied only when you know that the underlying relationship is simple (and linear), or when you are deliberately force-fitting a linear model.

The upside of simple models such as regression is that because there are so few parameters to be adjusted, you need relatively few data points in order to adjust them to the required degree of accuracy.
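
A rough back-of-the-envelope comparison (the network dimensions below are arbitrary, purely to show the scale of the gap):

```python
# Back-of-the-envelope parameter counts: a univariate regression versus a
# small "shallow" neural network. The network dimensions are arbitrary,
# chosen only to show the order-of-magnitude gap.

regression_params = 2  # slope and intercept

n_in, n_hidden, n_out = 100, 64, 1      # hypothetical layer sizes
nn_params = (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

print(regression_params)   # 2
print(nn_params)           # 6529 -- thousands of knobs that data has to tie down
```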

As models get more and more complicated, the number of parameters increases, thus increasing the complexity of patterns that can be detected by the system. Close to one extreme, you have systems that see lots of current pictures of you and then identify you in your baby pictures.

Such complicated patterns can be identified because the system parameters have lots of degrees of freedom. The downside, of course, is that because the parameters start off having so much freedom, it takes that much more data to “tie them down”. The reason Google Photos can tag you in your baby pictures is partly down to the quantum of image data that Google has, which does an effective job of tying down the parameters. Google Translate similarly uses large repositories of multi-lingual text in order to “learn languages”.

Like most other things in life, machine learning involves a tradeoff. It is possible for systems to identify complex patterns, but for that you need to start off with lots of “degrees of freedom”, and then use lots of data to tie the parameters down. If your data is small, you can only afford a small number of parameters, and that limits the complexity of the patterns that can be detected.
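
Here is a quick sketch of that tradeoff with made-up data: the same handful of noisy points fitted with a two-parameter line and with a ten-parameter polynomial, then tested on points neither model has seen.

```python
# A sketch of the tradeoff, with made-up data: twelve noisy points from a
# truly linear relationship, fit once with a 2-parameter line and once with
# a 10-parameter (degree-9) polynomial.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.2, 12)   # linear signal plus noise
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test                              # the true (noiseless) relationship

simple_fit = np.polyfit(x_train, y_train, deg=1)     # 2 free parameters
complex_fit = np.polyfit(x_train, y_train, deg=9)    # 10 free parameters, 12 points

for name, coeffs in [("degree 1", simple_fit), ("degree 9", complex_fit)]:
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(name, mse)   # the 10-parameter fit chases the noise and does worse out of sample
```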

One way around this, of course, is to use your own human intelligence as a pre-processing step, setting up the parameters in a way that they can be effectively tuned by data. Gopi had a nice post recently on “neat learning versus deep learning”, which is relevant in this context.

Finally, there is the issue of spurious correlations. Because machine learning systems are basically mathematical formulae designed to learn patterns from data, spurious correlations in the input dataset can lead to the system learning random things, which can hamper its predictive power.

Data sets, especially ones that have lots of dimensions, can display correlations that appear at random, but if the input dataset shows enough of these correlations, the system will “learn” them as a pattern, and try to use them in predictions. And the more complicated your model gets, the harder it is to know what it is doing, and thus the harder it is to identify these spurious correlations!

And the thing with having too many “free parameters” (lots of degrees of freedom but without enough data to tie down the parameters) is that these free parameters are especially susceptible to learning the spurious correlations – for they have no other job.
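
A sketch of this, again with made-up data: give a linear model fifty purely random “features” and only sixty rows of a pure-noise target, and its free parameters will dutifully “learn” correlations that aren’t really there.

```python
# A sketch of spurious correlations: 50 purely random "features", 60 rows of
# training data, and a target that is itself pure noise. The 50 free
# parameters happily "explain" the training noise but have no real skill.
import numpy as np

rng = np.random.default_rng(2)
X_train, y_train = rng.normal(size=(60, 50)), rng.normal(size=60)
X_test, y_test = rng.normal(size=(200, 50)), rng.normal(size=200)

coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(X, y):
    residuals = y - X @ coef
    return 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

print(r_squared(X_train, y_train))   # high: the training noise has been "learnt"
print(r_squared(X_test, y_test))     # negative: worse than just predicting the mean
```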

Thinking about it, machine learning systems are, after all, not human!

Biases, statistics and luck

Tomorrow Liverpool plays Manchester City in the Premier League. As things stand now I don’t plan to watch this game. This entire season so far, I’ve only watched two games. First, I’d gone to a local pub to watch Liverpool’s visit to Manchester City, back in September. Liverpool got thrashed 5-0.

Then in October, I went to Wembley to watch Tottenham Hotspur play Liverpool. Spurs won 4-1. These two remain Liverpool’s only defeats of the season.

I might consider myself to be a mostly rational person but I sometimes do fall for the correlation-implies-causation bias, and think that my watching those games had something to do with Liverpool’s losses in them. Never mind that these were away games played against other top sides which attack aggressively. And so I have this irrational “fear” that if I watch tomorrow’s game (even if it’s from a pub), it might lead to a heavy Liverpool defeat.

And so I told Baada, a Manchester City fan, that I’m not planning to watch tomorrow’s game. He got back to me with some statistics he’d heard on a podcast: apparently it’s been 80 years since Manchester City did the league “double” (winning both home and away games) over Liverpool, and 15 years since they last won at Anfield. So, he suggested, there’s a good chance that tomorrow’s game won’t result in a mauling for Liverpool, even if I were to watch it.

With the easy availability of statistics, it has become a thing for football commentators to supply them during commentary. And at first hearing, things like “never done this in 80 years” or “never done that in the last 15 years” sound compelling, and you’re inclined to believe there is something to these numbers.

I don’t remember if it was Navjot Sidhu who said that statistics are like a bikini (“what they reveal is significant but what they hide is crucial” or something). That Manchester City hasn’t done a double over Liverpool in 80 years doesn’t mean a thing, nor does it say anything that they haven’t won at Anfield in 15 years.

Basically, until the mid-2000s, City were a middling team. I remember telling Baada after the 2007 season (when Stuart Pearce got fired as City manager) that they’d surely be relegated the next season. And then came the investment from Thaksin Shinawatra. And the appointment of Sven-Goran Eriksson as manager. And then the YouTube signings. And later the investment from the Abu Dhabi investment group. And in 2016 the appointment of Pep Guardiola as manager. And the significant investment in players after that.

In other words, Manchester City of today is a completely different team from what they were even 2-3 years back. And they’re surely a vastly improved team compared to a decade ago. I know Baada has been following them for over 15 years now, but they’re unrecognisable from the time he started following them!

Yes, even with City being a much improved team, Liverpool have never lost to them at home in the last few years – but then Liverpool have generally been a strong team playing at home in these years! On the other hand, City’s 18-game winning streak (which included wins at Chelsea and Manchester United) only came to an end (with a draw against Crystal Palace) rather recently.

So anyway, here are the takeaways:

  1. Whether I watch the game or not has no bearing on how well Liverpool will play. The instances from this season so far are based on (a) small samples and (b) biased samples (since I’ve chosen to watch Liverpool’s two toughest games of the season).
  2. The 80-year history of a fixture has no bearing, since the teams have evolved significantly over those 80 years. Saying a record has stood for so long has no meaning or predictive power for tomorrow’s game.
  3. City have been in tremendous form this season, and Liverpool have just lost their key player (by selling Philippe Coutinho to Barcelona), so City can fancy their chances. That said, Anfield has been a fortress this season, so Liverpool might just hold out (or even win it).

All of this points to a good game tomorrow! Maybe I should just watch it!


More issues with Slack

A long time back I’d written about how Slack was in some ways like the old DBabble messaging and discussion group platform, except for one small difference – Slack didn’t have threaded conversations, which meant it was only possible to hold one thread of thought in a channel, significantly limiting discussion.

Since then, Slack has introduced threaded conversations, but has done it in an atrocious manner. The same linear feed in each channel remains, but there’s now a way to reply to specific messages. However, even in this little implementation Slack has done worse than WhatsApp – by default, unless you check one little checkbox, your reply is only sent to the person who posted the original message, and isn’t really posted to the group.

And if you check the checkbox, the message is displayed in the feed, but in a rather ungainly manner. And threads are only one level deep (this was one reason I used to prefer LiveJournal over Blogspot back in the day – comments could be nested in the former, allowing for significantly superior discussions).

Anyway, the point of this post is not about threads. It’s about another bug/feature of Slack which makes it an extremely difficult tool to use, especially for people like me.

The problem with Slack is that it nudges you towards sending shorter messages rather than longer ones. In fact, there’s no facility at all to send a long, well-constructed argument unless you keep hitting Shift+Enter every time you need a new line. There is an “insert text snippet” feature, but it lacks richness of any kind – bullet points, for example.

What this does is force you to use Slack only for quick messages, or to share only summaries. It’s possible that this is a design feature, intended to cater to the short attention span of the “Twitter generation”, but it makes Slack an incredibly hard platform on which to have real discussions.

And when Slack is the primary mode of communication in your company (some organisations have effectively done away with email for internal communications, preferring to put everything on Slack), there is no way at all to communicate nuance.

PS: It’s possible that the metric for someone at Slack is “number of messages sent”. And nudging users towards writing shorter messages can mean more messages are sent!

PS2: DBabble allowed for plenty of nuance, with plenty of space to write your messages and arguments.