Shooting, investing and the hot hand

A couple of years back I got introduced to “Stumbling and Mumbling”, a blog written by Chris Dillow, who was described to me as a “Marxist investment banker”. I don’t agree with a lot of the stuff in his blog, but it is all very thoughtful.

He appears to be an Arsenal fan, and in his latest post, he talks about “what we can learn from football”. In that, he writes:

These might seem harmless mistakes when confined to talking about football. But they have analogues in expensive mistakes. The hot-hand fallacy leads investors to pile into unit trusts with good recent performance (pdf) – which costs them money as the performance proves unsustainable. Over-reaction leads them to buy stocks at the top of the market and sell at the bottom. Failing to see that low probabilities compound to give us a high one helps explain why so many projects run over time and budget. And so on.

The hot hand fallacy has been a hard problem in statistics for a while now. Essentially, the intuitive belief in basketball is that someone who has scored a few baskets in a row is more likely to score the next one (basically, the player is on a “hot hand”).

It all started with a seminal paper by Amos Tversky et al in the 1980s, that used (the then limited) data to show that the hot hand is a fallacy. Then, more recently, Miller and Sanjurjo took another look at the problem and, with far better data at hand, found that the hot hand is actually NOT a fallacy.
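Their key point was that the measurement itself is biased: in a finite sequence of shots, the proportion of hits that immediately follow a streak of hits is expected to be below the true hit rate, even for a completely memoryless shooter, so the original “no hot hand” conclusion was resting on a biased estimator. Here is a minimal R sketch of that selection bias (my own toy setup of 100 fair shots per game and streaks of three, not Miller and Sanjurjo’s actual data):

```r
# Selection bias in measuring the hot hand: even for a memoryless shooter,
# the within-game proportion of hits that immediately follow a streak of
# hits is expected to be below the true hit rate.
set.seed(42)

prop_after_streak <- function(shots, k = 3) {
  idx <- (k + 1):length(shots)
  follows_streak <- sapply(idx, function(i) all(shots[(i - k):(i - 1)] == 1))
  if (!any(follows_streak)) return(NA)   # no streak opportunities in this game
  mean(shots[idx][follows_streak])
}

# 10,000 simulated "games" of 100 independent shots at a 50% hit rate
sims <- replicate(10000, prop_after_streak(rbinom(100, 1, 0.5)))
mean(sims, na.rm = TRUE)                 # comes out noticeably below 0.5
```

That the raw numbers showed no elevated hit rate after streaks was, in other words, consistent with there actually being a hot hand.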

There is a nice podcast on The Art of Manliness, where Ben Cohen, who has written a book about hot hands, spoke about the research around it. In any case, there are very valid reasons as to why hot hands exist.

Yet, Dillow is right – while hot hands might exist in something like basketball shooting, they don’t in something like investing. This has to do with how much “control” the person in question has. Let me switch fields completely now and quote a paragraph from Venkatesh Guru Rao’s “The Art Of Gig” newsletter:

As an example, take conducting a workshop versus executing a trade based on some information. A significant part of the returns from a workshop depend on the workshop itself being good or bad. For a trade on the other hand, the returns are good or bad depending on how the world actually behaves. You might have set up a technically perfect trade, but lose because the world does something else. Or you might have set up a sloppy trade, but the world does something that makes it a winning move anyway.

This is from the latest edition, which is behind a paywall. Don’t worry if you aren’t a subscriber; the paragraph I’ve quoted above is sufficient for the purpose of this blogpost.

If you are in the business of offering workshops, or shooting baskets, the outcome of the next workshop or basket depends largely upon your own skill. There is randomness, yes, but this randomness is not very large, and the impact of your own effort on the result is large.

In the case of investing, however, the effect of the randomness is very large. As VGR writes, “For a trade on the other hand, the returns are good or bad depending on how the world actually behaves”.

So if you are on a hot hand when it comes to investing, it means that “the world behaved in a way that was consistent with your trade” several times in a row. And the fact that the world has behaved according to your trades several times in a row makes it no more likely that it will do so the next time.

If, on the other hand, you are on a hot hand in shooting baskets or delivering lectures, then it is likely that this hot hand is because you are performing well. And because you are performing well, the likelihood of you performing well on the next turn is also higher. And so the hot hand theory holds.

So yes, hot hands work, but only in “high R-square” contexts, where the doer’s own performance explains a large share of the outcome. In high-randomness regimes, such as gambling or trading, the hot hand doesn’t matter.
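To make that concrete, here is a toy R sketch (entirely my own construction, with arbitrary parameters): one process where outcomes are driven by a slowly-varying “form” component, and one where outcomes are independent of anything the person did recently. Streaks predict the next outcome only in the first.

```r
# Toy contrast between a "high R-square" activity (performance driven by a
# persistent form component) and a high-randomness one (outcomes independent
# of recent performance). Parameters are arbitrary.
set.seed(1)
n <- 1e5

form     <- as.numeric(arima.sim(list(ar = 0.95), n))   # slowly-varying form
shooting <- as.integer(form + rnorm(n, sd = 0.5) > 0)   # outcome mostly explained by form
trading  <- rbinom(n, 1, 0.5)                           # outcome independent of recent results

rate_after_streak <- function(x, k = 3) {
  idx   <- (k + 1):length(x)
  after <- sapply(idx, function(i) all(x[(i - k):(i - 1)] == 1))
  c(after_streak = mean(x[idx][after]), overall = mean(x))
}

rate_after_streak(shooting)  # conditional rate well above the overall rate
rate_after_streak(trading)   # conditional rate roughly equal to the overall rate
```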

Behavioural colour schemes

One of the seminal results of behavioural economics (a field I’m having less and less faith in as the days go by, especially once I learnt about ergodicity) is that by adding a choice to an existing list of choices, you can change people’s preferences.

For example, if you give people a choice between vanilla ice cream for ₹70 and vanilla ice cream with chocolate sauce for ₹110, most people will go for just the vanilla ice cream. However, when you add a third option, let’s say “vanilla ice cream with double chocolate sauce” for ₹150, you will see more people choosing the vanilla ice cream with chocolate sauce (₹110) over the plain vanilla ice cream (₹70).

That example I pulled out of thin air, but trust me, this is the kind of example you see in behavioural economics literature. In fact, a lot of behavioural economics research is about getting 24 undergrads to participate in an experiment (which undergrad doesn’t love free ice cream?) and giving them options like the above. Then, based on how their preferences change when the new option is added, a theory is concocted about how people choose.

The existence of “green jelly beans” (or p-value hunting, also called “p-hacking”) cannot be ruled out in such studies.

Anyway, enough bitching about behavioural economics, because while their methods may not be rigorous, and their results can sometimes be explained using conventional economics, some of their insights do apply in real life. Like the one where you add a choice and people start seeing the existing choices in a different way.

The other day, Nitin Pai asked me to produce a district-wise map of Karnataka colour coded by the prevalence of Covid-19 (or the “Wuhan virus”) in each district. “We can colour them green, yellow, orange and red”, he said, “based on how quickly cases are growing in each district”.

After a few backs and forths, and using data from the excellent covid19india.org, we agreed on a formula for how to classify districts by colour. And then I started drawing maps (R now has superb methods to draw maps using ggplot2).

For the first version, I took his colour recommendations at face value, and this is what came out. 
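In ggplot2 terms, that first version was roughly the following (a sketch only; I am assuming a hypothetical sf object called karnataka holding the district polygons, already joined with a category column computed from the case-growth formula, neither of which is shown here):

```r
# First version: the colour recommendations taken at face value.
# `karnataka` is assumed to be an sf data frame with one row per district
# and a `category` column taking the values Green / Yellow / Orange / Red.
library(ggplot2)
library(sf)

ggplot(karnataka) +
  geom_sf(aes(fill = category)) +
  scale_fill_manual(values = c(Green = "green", Yellow = "yellow",
                               Orange = "orange", Red = "red")) +
  theme_void()
```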

While the data comes through clearly enough, there are two problems with this chart. Firstly, as my father might have put it, “the colours hit the eyes”. There are too many bright colours here and it’s hard to stare at the graph for too long. Secondly, the yellow and the orange appear a bit too similar. Not good.

So I started playing around. As a first step, I replaced “green” with “darkgreen”. I think I got lucky. This is what I got. 

Just this one change (OK, I made one more change – I made the borders black, so that the boundaries between contiguous dark green districts can be seen more clearly) made so much of a difference.
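In code, the two changes amount to this (same assumed karnataka object as above):

```r
# Second version: "green" swapped for the more sober "darkgreen", and
# district borders drawn in black via geom_sf's colour argument.
ggplot(karnataka) +
  geom_sf(aes(fill = category), colour = "black") +
  scale_fill_manual(values = c(Green = "darkgreen", Yellow = "yellow",
                               Orange = "orange", Red = "red")) +
  theme_void()
```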

Firstly, the addition of the sober dark green (rather than the bright green) means that the graph looks so much better on the eye now. The same yellow and orange and red don’t “hit the eyes” like they used to in the bright green’s company.

And more importantly (like the behavioural economics theory), the orange and yellow look much more distinct from each other now (my apologies to readers who are colour blind). Rather than trying to change the clashing colours (the other day I’d tried changing yellow to other closer colours but nothing had worked), adding a darker shade alongside meant that the distinctions became much more visible.

Maybe there IS something to behavioural economics, at least when it comes to colour schemes.

Magnus Carlsen’s Endowment

Game 12 of the ongoing Chess World Championship match between Magnus Carlsen and Fabiano Caruana ended in a draw after only 31 moves, when Carlsen, in a clearly better position and clearly ahead on time, made an unexpected draw offer.

The match will now go into a series of tie-breaks, played with ever-shortening time controls, as the world looks for a winner. Given the players’ historical record, Carlsen is the favourite for the rapid playoffs. And he knows it: starting with game 11, he seemed to be playing to take the match into the playoffs.

Yesterday’s Game 12 was a strange one. It started off with a sharp Sicilian Pelikan like games 8 and 10, and then between moves 15 and 20, the players repeated the position twice. Now, the rules of chess state that if the same position appears three times on the board, a player can claim a draw. And there was this move where Caruana had the chance to repeat the position for the third time, thus drawing the game.

He spent nearly half an hour on the move, and at the end of it, he decided to deviate. In other words, no quick draw. My suspicion is that this unnerved Carlsen, who decided to then take a draw at the earliest opportunity available to him (the rules of the match state that a draw cannot be agreed before move 30; Carlsen made his offer on move 31).

In behavioural economics, Endowment Effect refers to the bias where you place a higher value on something you own than on something you don’t own. This has several implications, all of which can lead to potentially irrational behaviour. The best example is “throwing good money after bad” – if you have made an investment that has lost money, rather than cutting your losses, you double down on the investment in the hope that you’ll recoup your losses.

Another implication is that even when it is rational to sell something you own, you hold on because of the irrationally high value you place on it. The endowment effect also has an impact in pricing and negotiations – you don’t mind that “convenience charge” that the travel aggregator adds on just before you enter your credit card details, for you have already mentally “bought” the ticket, and this convenience charge is only a minor inconvenience. Once you are convinced that you need to do a business deal, you don’t mind if the price moves away from you in small marginal steps – once you’ve made the decision that you have to do the deal, these moves away are only minor, and well within the higher value you’ve placed on the deal.

So where does this fit into Carlsen’s draw offer yesterday? It was clear from the outset that Carlsen was playing for a draw. When the position was repeated twice, it raised Carlsen’s hope that the game would be a draw, and he assumed that he was getting the draw he wanted. When Caruana refused to repeat the position, and did so after a really long think, Carlsen suddenly realised that he wasn’t getting the draw he thought he was getting.

It was as if the draw was Carlsen’s and it had now been taken away from him, so now he needed to somehow get it. Carlsen played well after that, and Caruana played badly, and the engines clearly showed that Carlsen had an advantage when the game crossed move 30.

However, having “accepted” a draw earlier in the game (by repeating moves twice), Carlsen wanted to lock in the draw, rather than play on in an inferior mental state and risk a loss (which would also mean losing the Championship). And hence, despite the significantly superior position, he made the draw offer, which Caruana was only too happy to accept (given his worse situation).

Startup equity and the ultimatum game

The Ultimatum Game is a fairly commonly used game to study people’s behaviour, cooperation, social capital, etc. Participants are divided into pairs, and one half of the pair is given a sum of money, say Rs. 100. The objective of this player (let’s call her A) is to divide this money between herself and her partner for the game (whom we shall call B). There are no rules in terms of how A can divide the money, except that both sums need to be non-negative and add up to the total (Rs. 100 here).

After A has decided the division, B has an option to either accept or reject it. If B accepts the division, then both players get the amounts as per the division. If B rejects the division, both players get nothing.

Now, classical economics dictates that as long as B gets any amount that is strictly greater than zero, she should accept it, for she is strictly better off in such a circumstance than if she rejects it (by the amount that A has offered her). Yet, several studies have found that B often rejects the offer. This is to do with a sense of “unfairness”, that A has been unfair to her. Sociologists have found that certain societies are much more likely to accept an “unfair division” than others. And so forth.
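As a toy illustration of why this matters to the proposer (entirely my own construction, with an arbitrary 30% fairness threshold), here is how the proposer’s best offer changes depending on whether the responder is the textbook-rational kind or the fairness-minded kind:

```r
# Toy ultimatum game: A splits 100 rupees, B either accepts (both get their
# shares) or rejects (both get nothing).
total  <- 100
offers <- 1:99   # possible amounts offered to B

rational_b <- function(offer) offer > 0              # accepts anything positive
fairness_b <- function(offer) offer >= 0.3 * total   # rejects "unfair" splits; threshold is arbitrary

proposer_payoff <- function(offer, accepts) if (accepts(offer)) total - offer else 0

offers[which.max(sapply(offers, proposer_payoff, accepts = rational_b))]  # 1: offer the bare minimum
offers[which.max(sapply(offers, proposer_payoff, accepts = fairness_b))]  # 30: just enough to clear the fairness bar
```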

The analogy isn’t perfect, but the way co-founders of a startup split equity can be likened to a kind of ultimatum game. Let’s say that there are two people with complementary and reasonably unique skills (the latter condition implies that such people are not easily replaceable), who are looking to get together to start a business. Right up front, there is the issue of who gets how much equity in the venture.

The thing with equity divisions between co-founders is that there is usually not much room for negotiation – if you end up negotiating too hard, it creates unnecessary bad blood up front between the founders which can affect the performance of the company, so you would want to get done with the negotiations as soon as possible. It should also be kept in mind that if one of the two parties is unhappy about his ownership, it can affect company performance later on.

So how do the founders decide the equity split in this light? Initially there will be feelers they send to each other on how much they are expecting. After that let us say that one of the founders (call him the proposer) proposes an equity division. Now it is up to the other founder (call him the acceptor) to either accept or reject this division. Considering that too much negotiation is not ideal, and that the proposer’s offer is an indication of his approximate demand, we can assume that there will be no further negotiation. If the acceptor doesn’t accept the division that the proposer has proposed, based on the above (wholly reasonable) conditions we can assume that the deal has fallen through.

So now it is clear how this is like an ultimatum game. We have a total sum of equity (100% – this is the very founding of the company, so we can assume that equity for venture investors, ESOPs, etc. will come later), which the proposer needs to split between himself and the acceptor, and in a way that the acceptor is happy with the offer that he has got. If the acceptor accepts, the company gets formed and the respective parties get their respective equity shares (of course both parties will then have to put in significant work to make that equity share worth something – this is where this “game” differs from the ultimatum game). If the acceptor rejects, however, the company doesn’t get formed (we had assumed that neither founder is perfectly replaceable, so whatever either of them starts is something completely different).

Some pairs of founders simply decide to split equally (the “fairest” division) to avoid the deal falling through. The more replaceable a founder is, or the more commoditised his skill set, the less he can be offered (demand and supply). But there are not too many such rules in place. Finally it all boils down to a rather hard behavioural problem!

Thinking about it, can we model pre-nuptial agreements also as ultimatum games? Think about it!

Ranji Trophy and the Ultimatum Game

The Ultimatum Game is a commonly used research tool in behavioural economics. It is a “game” played between two players (say A and B) where A is given a sum of money which he has to split between himself and B. If B “accepts” the split, both of them get the money as per A’s proposal. If, however, B rejects it, both A and B get nothing.

This setup has been useful for behavioural economists to show that people are not necessarily rational. If everyone were rational, B would accept the split as long as he was given any amount greater than zero. However, real-life experiments have shown that B players frequently reject the deal when they think the split is “unfair”.

A version of this is being played out in this year’s Ranji Trophy thanks to some strange rules regarding the points split in drawn games. A win fetches five points while a loss fetches none. In the case of a drawn game, if the first innings of both sides have been completed, the team that has scored higher in the first innings gets three points, while the other team gets one. The rules, however, get interesting if the two first innings have not both been completed. If the match has been rain-affected and overs have been lost, both sides get two points. Otherwise, both sides get zero points!
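Written out as a function (my own encoding of the rule as described above, under a hypothetical name ranji_points; it returns the points for the winning or first-innings-leading team first):

```r
# Points for a Ranji Trophy group game, as per the rule described above.
# Returns c(points for the winner / first-innings leader, points for the other team).
ranji_points <- function(result = c("win", "draw"),
                         first_innings_complete = TRUE,
                         rain_affected = FALSE) {
  result <- match.arg(result)
  if (result == "win") return(c(5, 0))
  if (first_innings_complete) return(c(3, 1))  # drawn, first-innings lead decides
  if (rain_affected) return(c(2, 2))           # drawn, innings incomplete, overs lost to rain
  c(0, 0)                                      # drawn, innings incomplete, no rain
}

ranji_points("draw", first_innings_complete = FALSE)  # both teams get nothing
```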

I don’t know the rationale behind this strange points system, but I guess it is there to act as a deterrent against teams preparing featherbeds, batting for most of the four days and not even trying to win the match. In general, I haven’t been a fan at all of the Ranji Trophy’s points scoring system, and think it’s quite irrational, so I refuse to comment on this rule. What I will comment on, however, is the “ultimatum” opportunity this throws up.

In the first round of matches, Saurashtra batted first against Orissa and piled up a mammoth 545 in a little under two days. The magnitude of the score and the time left in the match meant that Orissa had been shut out of the game, and the best they could’ve done was to overtake Saurashtra on first innings score and get themselves three points. However, they batted slowly and steadily, with Natraj Behera scoring a patient double century, and with a few minutes to go in the game, they were still over 50 runs adrift of Saurashtra’s score, with three wickets in hand.

At that time, they had the chance to declare their innings, still some runs adrift of Saurashtra’s score, and collect one point, handing over three points to Saurashtra. They, however, chose to bat on and block the game, and both teams finally ended up with zero points. It may be because they also see Saurashtra as a competitor for “relegation”, but I thought this was irrational. Why would they deny themselves one point, if only to deny Saurashtra three points? It’s all puzzling.

Going forward, though, I hope the Ranji Trophy rules are changed to make each game a zero sum game (literally). Or else they could adopt the soccer scoring of 3 points for a win and 1 for a draw (something I’ve long advocated), first innings lead be damned!