Known stories and trading time

One of the most fascinating concepts I’ve ever come across is that of “trading time”. I first encountered it in Benoit Mandelbrot’s The (Mis)Behaviour of Markets, which is possibly the only non-textbook and non-children’s book that I’ve read at least four times.

The concept of “trading time” is simple – if you look at activity on a market, it is not distributed evenly over time. There are times when nothing happens, and then there are times when “everything happens”. For example, 2020 has been an incredibly eventful year when it comes to world events. Not every year is eventful like this.

A year or so after I first read this book, I took a job where I had to look at intra-day trading in American equities markets. And I saw “trading time” happening in person – the volume of trade in the market was massive in the first and last hour, and the middle part of the day, unless there was some event happening, was rather quiet.

Trading time applies in a lot of other contexts as well. In some movies, a lot of the action is packed into a few stretches, while nothing happens in the rest. When I work, I end up doing a lot of work in some small windows, and nothing most of the time. Children have “growth spurts”, both physical and mental.

I was thinking about this topic when I was reading SL Bhyrappa’s Parva. Unfortunately I find it time-consuming to read anything longer than a newspaper headline or signboard in Kannada, so I read it in translation.

However, the book is so good that I have resolved to read the original (however much time it takes) before the end of this year.

It is a sort of retelling of the Mahabharata, but it doesn’t tell the whole story in a linear manner. The book is structured largely around a set of monologues, largely set around journeys. So there is Bhima going into the forest to seek out his son Ghatotkacha to help him in the great war. Around the same time, Arjuna goes to Dwaraka. Just before the war begins, Bhishma goes out in search of Vyasa. Each of these journeys is associated with extra-long flashbacks and philosophical musings.

In other words, what Bhyrappa does is to seek out tiny stories within the great epic, and then drill down massively into those stories. Some of these journey-monologues run into nearly a hundred pages (in translation). The rest of the story is largely glossed over or given only a passing mention.

Bhyrappa basically gives “trading time treatment” to the Mahabharata. It helps that the overall story is rather well known, so readers can be expected to easily fill in any gaps. While the epic itself is great, there are parts where “a lot happens”, and parts where “nothing happens”. What is interesting about Parva is that Bhyrappa picks out unintuitive parts to explore in massive depth, and he simply glosses over the parts which most other retellings give a lot of footage to.

And this is what makes the story rather fascinating.

I can now think of retellings of books, or remakes of movies, where the story remains the same, but “trading time is inverted”. Activities that were originally given a lot of footage get glossed over, but those that were originally ignored get explored in depth.


Scrabble

I’ve forgotten during which stage of lockdown or “unlock” e-commerce for “non-essential goods” reopened, but among the first things we ordered was a Scrabble board. It was an impulse decision. We were on Amazon ordering puzzles for the daughter, and she had just about started putting together “sounds” to make words, so we thought “Scrabble tiles might be useful for her to make words with”.

The thing duly arrived two or three days later. The wife had never played Scrabble before, so on the day it arrived I taught her the rules of the game. We play with the Sowpods dictionary open, so we can check words that the opponent challenges. Our “scrabble vocabulary” has surely improved since the time we started playing (“Qi” is a lifesaver, btw).

I had insisted on ordering the “official Scrabble board” sold by Mattel. The board is excellent. The tiles are excellent. The bag in which the tiles are stored is also excellent. The only problem is that there was no “scoreboard” that arrived in the set.

On the first day we played (when I taught the wife the rules, and she ended up beating me – I’m so horrible at the game), we used a piece of paper to maintain scores. The next day, we decided to score using an Excel sheet. Since then, we’ve continued to use Excel. The scoring format looks somewhat like this.

So each worksheet contains a single day’s play. Initially after we got the board, we played pretty much every day. Sometimes multiple times a day (you might notice that we played 4 games on 3rd June). So far, we’ve played 31 games. I’ve won 19, Priyanka has won 11 and one ended in a tie.

In any case, scoring on Excel has provided an additional advantage – analytics!! I have an R script that I run after every game, which parses the Excel sheet and does some basic analytics on how we play.
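A minimal sketch of the kind of script I mean (the file name, sheet layout and column names here are my assumptions for illustration, not necessarily the actual format we use):

library(readxl)
library(dplyr)
library(tidyr)
library(purrr)

# One worksheet per day's play; assume each sheet has one column of
# per-turn scores per player, named Karthik and Priyanka
sheets <- excel_sheets("scrabble_scores.xlsx")  # hypothetical file name
scores <- map_dfr(
  sheets,
  ~ read_excel("scrabble_scores.xlsx", sheet = .x),
  .id = "Game"
)

# Average points per turn for each player
scores %>%
  pivot_longer(c(Karthik, Priyanka), names_to = "Player", values_to = "Score") %>%
  group_by(Player) %>%
  summarise(AvgPerTurn = mean(Score, na.rm = TRUE))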

For example, on each turn, I make an average of 16.8 points, while Priyanka makes 14.6. Our score distribution makes for interesting viewing. Basically, she follows a “long tail strategy”. Most of the time, she is content with making simple words, but occasionally she produces a blockbuster.

I won’t put a graph here – it’s not clear enough. Instead, this table shows how many times each of us has scored more than a particular threshold in a single turn. The figures are cumulative.

Threshold   Karthik   Priyanka
30          50        44
40          12        17
50          5         10
60          3         5
70          2         2
80          0         1
90          0         1
100         0         1

Notice that while I’ve made many more 30+ scores than her, she’s made many more 40+ scores than me. Beyond that, she has crossed every threshold at least as many times as me.
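Continuing with the hypothetical scores data frame from the sketch above, the table can be generated along these lines:

library(dplyr)
library(tidyr)
library(purrr)

# For each threshold, count the turns where each player scored at least that much
thresholds <- seq(30, 100, by = 10)

map_dfr(thresholds, function(th) {
  scores %>%
    pivot_longer(c(Karthik, Priyanka), names_to = "Player", values_to = "Score") %>%
    group_by(Player) %>%
    summarise(Count = sum(Score >= th, na.rm = TRUE), .groups = "drop") %>%
    mutate(Threshold = th)
}) %>%
  pivot_wider(names_from = Player, values_from = Count)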

Another piece of analysis is the “score multiple”. This is a measure of “how well we use our letters”. For example, if I place the word “tiger” on a double word score (with no double or triple letter score), I get 12 points. The points total on the tiles is 6, giving me a multiple of 2.
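If you want to compute this yourself, here is a small sketch (the tile values are those of the standard English Scrabble set; the function is my own illustration):

# Face values of the standard English Scrabble tiles
tile_values <- c(a=1, b=3, c=3, d=2, e=1, f=4, g=2, h=4, i=1, j=8, k=5,
                 l=1, m=3, n=1, o=1, p=3, q=10, r=1, s=1, t=1, u=1,
                 v=4, w=4, x=8, y=4, z=10)

# Score multiple: points actually scored divided by the face value of the tiles
score_multiple <- function(word, points_scored) {
  face_value <- sum(tile_values[strsplit(tolower(word), "")[[1]]])
  points_scored / face_value
}

score_multiple("tiger", 12)  # 2: "tiger" played on a double word score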

Over the games I have found that I have a multiple of 1.75, while she has a multiple of 1.70. So I “utilise” the tiles that I have (and the ones on the board) a wee bit “better” than her, though she often accuses me of “over optimising”.

It’s been fun so far. There was a period of time when we were addicted to the game, and we still turn to it when one of us is in a “work rut”. And thanks to maintaining scores on Excel, the analytics afterwards is also fun.

I’m pretty sure you’re spending the lockdown playing some board game as well. I strongly urge you to use Excel (or equivalent) to maintain scores. The analytics provides a very strong collateral benefit.


Shooting, investing and the hot hand

A couple of years back I got introduced to “Stumbling and Mumbling“, a blog written by Chris Dillow, who was described to me as a “Marxist investment banker”. I don’t agree with a lot of the stuff in his blog, but it is all very thoughtful.

He appears to be an Arsenal fan, and in his latest post, he talks about “what we can learn from football“. In that, he writes:

These might seem harmless mistakes when confined to talking about football. But they have analogues in expensive mistakes. The hot-hand fallacy leads investors to pile into unit trusts with good recent performance (pdf) – which costs them money as the performance proves unsustainable. Over-reaction leads them to buy stocks at the top of the market and sell at the bottom. Failing to see that low probabilities compound to give us a high one helps explain why so many projects run over time and budget. And so on.

The hot hand fallacy has been a hard problem in statistics for a few years now. Essentially, the intuitive belief in basketball is that someone who has scored a few baskets in a row is more likely to be successful with his next attempt (basically, the player has a “hot hand”).

It all started with a seminal paper by Amos Tversky et al in the 1980s, that used (the then limited) data to show that the hot hand is a fallacy. Then, more recently, Miller and Sanjurjo took another look at the problem and, with far better data at hand, found that the hot hand is actually NOT a fallacy.

There is a nice podcast on The Art of Manliness, where Ben Cohen, who has written a book about hot hands, spoke about the research around it. In any case, there are very valid reasons as to why hot hands exist.

Yet, Dillow is right – while the hot hand might exist in something like basketball shooting, it doesn’t in something like investing. This has to do with how much “control” the person in question has. Let me switch fields completely now and quote a paragraph from Venkatesh Guru Rao‘s “The Art Of Gig” newsletter:

As an example, take conducting a workshop versus executing a trade based on some information. A significant part of the returns from a workshop depend on the workshop itself being good or bad. For a trade on the other hand, the returns are good or bad depending on how the world actually behaves. You might have set up a technically perfect trade, but lose because the world does something else. Or you might have set up a sloppy trade, but the world does something that makes it a winning move anyway.

This is from the latest edition, which is paid. Don’t worry if you aren’t a subscriber. The above paragraph I’ve quoted is sufficient for the purpose of this blogpost.

If you are in the business of offering workshops, or shooting baskets, the outcome of the next workshop or basket depends largely upon your own skill. There is randomness, yes, but this randomness is not very large, and the impact of your own effort on the result is large.

In case of investing, however, the effect of the randomness is very large. As VGR writes, “For a trade on the other hand, the returns are good or bad depending on how the world actually behaves”.

So if you are in a hot hand when it comes to investing, it means that “the world behaved in a way that was consistent with your trade” several times in a row. And that the world has behaved according to your trade several times in a row makes it no more likely that the world will behave according to your trade next time.

If, on the other hand, you are on a hot hand in shooting baskets or delivering lectures, then it is likely that this hot hand is because you are performing well. And because you are performing well, the likelihood of you performing well on the next turn is also higher. And so the hot hand theory holds.

So yes, hot hands work, but only in contexts “with a high R square” – where the doer’s skill explains a large share of the outcome. In high-randomness regimes, such as gambling or trading, the hot hand doesn’t matter.
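To illustrate, here is a small simulation (my own sketch, nothing from Dillow or VGR): one process where success depends on a slowly drifting “form”, and one that is a pure coin flip. Conditioning on a streak of successes tells you something only in the first case.

set.seed(42)
n <- 50000

# Pure luck: every attempt is an independent coin flip
luck <- rbinom(n, 1, 0.5)

# Skill with persistent "form": success probability drifts slowly,
# so good spells genuinely persist
form <- as.numeric(arima.sim(list(ar = 0.98), n))
skill <- rbinom(n, 1, plogis(form))

# P(success | previous k attempts were all successes)
hot_hand <- function(x, k = 3) {
  idx <- (k + 1):length(x)
  streak <- sapply(idx, function(i) all(x[(i - k):(i - 1)] == 1))
  mean(x[idx][streak])
}

hot_hand(luck)   # close to 0.5: no hot hand under pure luck
hot_hand(skill)  # well above 0.5: the streak carries information about form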

Half-watching movies, and why I hate tweetstorms

It has to do with “bit rate”

I don’t like tweetstorms. Up to six tweets is fine, but beyond that I find it incredibly difficult to hold my attention. I actually find it stressful. So of late, I’ve been making a conscious effort to stop reading tweetstorms when they start stressing me out. The stress isn’t worth any value that the tweetstorms may have.

I remember making the claim on twitter that I refuse to read any more tweetstorms of more than six tweets henceforth. I’m not able to find that tweet now.

Anyways…

Why do I hate tweetstorms? It is for the same reason that I like to “half-watch” movies, something that endlessly irritates my wife. It has to do with “bit rates“.

I use the phrase “bit rate” to refer to the rate of flow of information (remember that bit is a measure of information).

The thing with movies is that some of them have very low bit rate. More importantly, movies have vastly varying bit rates through their lengths. There are some parts in a movie where pretty much nothing happens, and a lot of it is rather predictable. There are other parts where lots happens.

This means that in the course of a movie you find yourself engrossed in some periods and bored in others, and that can be rather irritating. And boredom in the parts where nothing is happening sometimes leads me to want to turn off the movie.

So I deal with this by “half watching”, essentially multi tasking while watching. Usually this means reading, or being on twitter, while watching a movie. This usually works beautifully. When the bit rate from the movie is high, I focus. When it is low, I take my mind off and indulge in the other thing that I’m doing.

It is not just movies that I “half-watch” – a lot of sport also gets the same treatment. Like right now I’m “watching” Watford-Southampton as I’m writing this.

A few years back, my wife expressed disapproval of my half-watching. By also keeping a book or computer, I wasn’t “involved enough” in the movie, she started saying, and that half-watching meant we “weren’t really watching the movie together”. And she started demanding full attention from me when we watched movies together.

The main consequence of this is that I started watching fewer movies. Given that I can rather easily second-guess movie plots, I started finding watching highly predictable stuff rather boring. In any case, I’ve recently received permission to half-watch again, and have watched two movies in the last 24 hours (neither of which I would have been able to sit through had I paid full attention – they had low bit rates).


So what’s the problem with tweetstorms? The problem is that their bit rate is rather high. With “normal paragraph writing” we have come to expect a certain degree of redundancy. This allows us to skim through stuff while still getting information from it. The redundancy means that as long as we get some key words or phrases, we can fill in the rest, and reading is rather pleasant.

The thing with a tweetstorm is that each sentence (tweet, basically) has a lot of information packed into it. So skimming is not an option. And the information hitting your head at the rate that tweetstorms generally convey can result in a lot of stress.

The other thing with tweetstorms, of course, is that each tweet is disjoint from the one before and after it. So there is no flow to the reading, and the mind has to expend extra energy to process what’s happening. Combine this with a rather high bit rate, and you know why I can’t stand them.

What is the Case Fatality Rate of Covid-19 in India?

The economist in me will give a very simple answer to that question – it depends. It depends on how long you think people will take from onset of the disease to die.

The modeller in me extended the argument that the economist in me made, and built a rather complicated model. This involved smoothing, assumptions on probability distributions, long mathematical derivations and (for good measure) regressions. And out of all that came this graph, with the assumption that the average person who dies of covid-19 dies 20 days after the infection is detected.

[Graph: estimated case fatality rate by Indian state, assuming death occurs on average 20 days after detection]

Yes, there is a wide variation across the country. Given that the disease is the same and the treatment for most patients is pretty much the same (lots of rest, lots of water, etc.), it is weird that the case fatality rate varies so much across Indian states. There is only one explanation – assuming that deaths can’t be faked or miscounted (covid deaths attributed to other causes or vice versa), the problem is in the “denominator” – the number of confirmed cases.

What the variation here tells us is that in states towards the top of this graph, we are likely not detecting most of the positive cases (serious cases will get themselves tested anyway, and get hospitalised, and perhaps die; it’s the less serious cases that can “slip”). Taking a state low down in this graph as a “good tester” (say Andhra Pradesh), we can try and estimate the extent of under-detection of cases in each state.

Based on state-wise case tallies as of now (there might be some error, since some states might have reported today’s numbers and some might not have), here are my estimates of the actual number of cases in each state, based on our calculations of the case fatality rate.

Yeah, Maharashtra alone should have crossed a million cases, based on the number of people who have died there!

Now let’s get to the maths. It’s messy. First we look at the number of confirmed cases per day and number of deaths per day per state (data from here). Then we smooth the data and take 7-day trailing moving averages. This is to get rid of any reporting pile-ups.

Now comes the probability assumption – we assume that a proportion p of all the confirmed cases will die, and an average number of days (N) to death for the people who are supposed to die (let’s call them Romeos?). They all won’t pop off exactly N days after we detect their infection. Instead, of everyone who is infected, supposed to die and not yet dead, a proportion \lambda will die each day.

My maths has become rather rusty over the years but a derivation I made shows that \lambda = \frac{1}{N}. So if people are supposed to die in an average of 20 days, \frac{1}{20} will die today, \frac{19}{20}\frac{1}{20} will die tomorrow. And so on.
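(To fill in the step my rusty maths glossed over: if a proportion \lambda of the surviving Romeos dies each day, the number of days to death follows a geometric distribution, whose mean is

E[T] = \sum_{k=1}^{\infty} k \lambda (1-\lambda)^{k-1} = \frac{1}{\lambda}

Setting this mean equal to N gives \lambda = \frac{1}{N}.)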

So people who die today could be people who were detected with the infection yesterday, or the day before, or the day before day before (isn’t it weird that English doesn’t have a word for this?) or … Now, based on how many cases were detected on each day, and our assumption of p (let’s assume a value first; we can derive it back later), we can know how many people who were found sick k days back are going to die today. Do this for all k, and you can model how many people will die today.

The equation will look something like this. Assume d_t is the number of people who die on day t and n_t is the number of cases confirmed on day t. We get

d_t = p  (\lambda n_{t-1} + (1-\lambda) \lambda n_{t-2} + (1-\lambda)^2 \lambda n_{t-3} + ... )

Now, all these ns are known. d_t is known. \lambda comes from our assumption of how long people will, on average, take to die once their infection has been detected. So in the above equation, everything except p is known.

And we have this data for multiple days. We know the left hand side. We know the value in brackets on the right hand side. All we need to do is to find p, which I did using a simple regression.

And I did this for each state – take the number of confirmed cases on each day, the number of deaths on each day and your assumption on average number of days after detection that a person dies. And you can calculate p, which is the case fatality rate. The true proportion of cases that are resulting in deaths.

This produced the first graph that I’ve presented above, for the assumption that a person, should he die, dies on an average 20 days after the infection is detected.

So what is India’s case fatality rate? While the first graph says it’s 5.8%, the variation by state suggests that this is largely a case detection issue, so the true case fatality rate is likely far lower. From doing my daily updates on Twitter, I’ve come to trust Andhra Pradesh as a state that is testing well, so if we assume they’ve found all their active cases, we can use that as a base and arrive at the second graph, showing the true number of cases in each state.

PS: It’s common to just divide the number of deaths so far by number of cases so far, but that is an inaccurate measure, since it doesn’t take into account the vintage of cases. Dividing deaths by number of cases as of a fixed point of time in the past is also inaccurate since it doesn’t take into account randomness (on when a Romeo might die).

Anyway, here is my code, for what it’s worth.

library(dplyr)
library(tidyr)

deathRate <- function(covid, avgDays) {
  # Reshape: one row per (State, Date), with Confirmed and Deceased columns
  covid %>%
    mutate(Date = as.Date(Date, '%d-%b-%y')) %>%
    gather(State, Number, -Date, -Status) %>%
    spread(Status, Number) %>%
    arrange(State, Date) ->
    cov1

  # Smooth everything with a 7-day trailing moving average,
  # to get rid of reporting pile-ups
  cov1 %>%
    arrange(State, Date) %>%
    group_by(State) %>%
    mutate(
      TotalConfirmed = cumsum(Confirmed),
      TotalDeceased = cumsum(Deceased),
      ConfirmedMA = (TotalConfirmed - lag(TotalConfirmed, 7)) / 7,
      DeceasedMA = (TotalDeceased - lag(TotalDeceased, 7)) / 7
    ) %>%
    ungroup() %>%
    filter(!is.na(ConfirmedMA)) %>%
    select(State, Date, Deceased = DeceasedMA, Confirmed = ConfirmedMA) ->
    cov2

  # For each death date, weight each preceding day's confirmed count by the
  # geometric probability of dying exactly Delay days after detection, then
  # regress deaths on this weighted sum. The coefficient of the no-intercept
  # regression is the case fatality rate p.
  cov2 %>%
    select(DeathDate = Date, State, Deceased) %>%
    inner_join(
      cov2 %>%
        select(ConfirmDate = Date, State, Confirmed) %>%
        crossing(Delay = 1:100) %>%
        mutate(DeathDate = ConfirmDate + Delay),
      by = c("DeathDate", "State")
    ) %>%
    filter(DeathDate > ConfirmDate) %>%
    arrange(State, desc(DeathDate), desc(ConfirmDate)) %>%
    mutate(
      Lambda = 1 / avgDays,
      Adjusted = Confirmed * Lambda * (1 - Lambda)^(Delay - 1)
    ) %>%
    filter(Deceased > 0) %>%
    group_by(State, DeathDate, Deceased) %>%
    summarise(Adjusted = sum(Adjusted)) %>%
    ungroup() %>%
    lm(Deceased ~ Adjusted - 1, data = .) %>%
    summary() %>%
    broom::tidy() %>%
    select(estimate) %>%
    first() %>%
    return()
}

Bad Apples

Nowadays, I keep apples in the fridge. Apart from their staying fresh longer, I like eating cold apples as well.

It wasn’t always this way. And I would frequently encounter what I call the “bad apples” problem.

You have a bunch of apples at home. They get a little overripe. You don’t want to eat them. You go to the market and see fresh apples there, but you know that you have apples at home. Because you have apples at home, you don’t want to buy new ones. But you don’t want to eat the apples at home, because they are too ripe.

And so they just sit there, getting progressively worse by a wee bit every day. Seeing them every day makes you feel bad about not having finished them, but also reminds you not to buy new apples. And so you go days together without eating any apples, until one day you gather the courage to throw them in the bin and buy new apples.

I’ve become conscious of this problem for a lot of foodstuff. Apples, as I told you, I now keep in the fridge, so they last longer. The problem doesn’t fully go away, since you can have months-old wrinkly apples sitting in your fridge that you don’t want to eat, and which prevent you from buying new ones in the market. However, it is far better than seeing apples rot on the shelf.

Bananas and oranges offer the benefit that as soon as they are overripe, they make for excellent smoothies and juices respectively. I’ve become particular about finishing them off that way. Mangoes can be juiced/milkshaked as well. And I’ve developed processes around a lot of foodstuff now so that this “bad apples” problem doesn’t happen.

However, there is no preventing this problem from occurring elsewhere. Books are a prominent example. From this excellent interview of venture capitalist Marc Andreessen that I’m reading:

The problem of having to finish every book is you’re not only spending time on books you shouldn’t be but it also causes you to stall out on reading in general. If I can’t start the next book until I finish this one, but I don’t want to read this one, I might as well go watch TV. Before you know it, you’ve stopped reading for a month and you’re asking “what have I done?!”

It happens with work. There might be a half-written blogpost that you’re loath to finish, but which prevents you from starting a new blogpost (I’ve gotten pretty ruthless at deleting drafts. I prefer to write posts “at one shot”, so this isn’t that much of a pain).

The good thing, though, is that once you start recognising the bad apples problem in some fields (such as apples), you start seeing them elsewhere as well. And you will develop policies on dealing with them.

Now I’m cursing myself for setting myself an annual target of “number of books to read” (on Goodreads). It’s leading to this:

the sunk cost fallacy means that I try harder to finish so that I can add to my annual count. Sometimes I literally flip through the pages of the book looking for interesting things, in an attempt to finish it one way or the other

Bad apples aren’t that easy to get rid of!


Games of luck and skill

My good friend Anuroop has two hobbies – poker and wildlife photography. And when we invited him to NED Talks some 5 years ago, he decided to combine these two topics into the talk, by speaking about “why wildlife photography is like poker” (or the other way round, I’ve forgotten).

I neither do wildlife photography nor play poker so I hadn’t been able to appreciate his talk in full when he delivered it. However, our trip to Jungle Lodges River Tern Resort (at Bhadra Wildlife Sanctuary) earlier this year demonstrated to me why poker and wildlife photography are similar – they are both “games of luck AND skill”.

One debate that keeps coming up in Indian legal circles is whether a particular card game (poker, rummy, etc.) is a “game of luck” or a “game of skill”. While this might sound esoteric, it is a rather important matter – games of skill don’t need any permission from any authority, while games of luck are banned to different extents by different states (they are seen as being similar to “gambling”, and the moralistic Indian states don’t want to permit that).

Many times in the recent past, courts in India have declared poker and rummy to be “games of skill“, which means “authorities” cannot disrupt any such games. Still, for different reasons, they remain effectively illegal in certain states.

In any case, what makes games like poker interesting is that they combine skill and luck. This is also what makes games like this addictive. That there is skill involved means that you get constantly better over time, and the more you play, the greater the likelihood that you will win (ok it doesn’t increase at the same rate for everyone, and there is occasional regression as well).

If it were a pure game of skill, then things would get boring, since in a game of skill the better player wins every single time. So unless you get a “sparring partner” of approximately your own level, nobody will want to play with you (this is one difficulty with games like chess).

With luck involved, however, the odds change. It is possible to beat someone much better (on average) than you, or lose to someone much worse (on average). In other words, if you are designing an Elo rating system for a game like poker, you need to change players’ ratings by very little after each game (compared to a game of pure skill such as chess).
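A minimal sketch of what I mean (this is the standard Elo update formula; the K values are purely illustrative):

# Standard Elo update: K controls how much a single result moves ratings.
# A small K suits games where each result carries a lot of luck.
elo_update <- function(r_a, r_b, score_a, K) {
  expected_a <- 1 / (1 + 10^((r_b - r_a) / 400))  # expected score of player A
  delta <- K * (score_a - expected_a)
  c(a = r_a + delta, b = r_b - delta)
}

elo_update(1500, 1500, 1, K = 32)  # chess-like: winner gains 16 points
elo_update(1500, 1500, 1, K = 4)   # poker-like: winner gains only 2 points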

Because there is luck involved, there is “greater information content” in the result of each game (remember from information theory that a perfectly fair coin has the most information content (1 bit) among all coins). And this makes the game more fun to play. And the better player is seen as better only when lots of games are played. And so people want to play more.
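(For the record, the information content of a coin with heads-probability p is given by its entropy

H(p) = -p \log_2 p - (1-p) \log_2 (1-p)

which is maximised at p = \frac{1}{2}, where H = 1 bit. The closer a game’s result is to a fair coin flip, the more information each result carries.)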

It is the same with wildlife photography. It is a game of skill because as you do more and more of it, you know where to look for the tigers and leopards (and ospreys and wild dogs). You know where and how long you should wait to maximise your chances of a “sighting”. The more you do it, the better you become at photography as well.

And it is a game of luck because despite your best laid plans, there is a huge amount of luck involved. Just on the day you set up, the tiger might decide to take another path to the river. The osprey might decide on a siesta that is a little bit longer than usual.

At the entrance of JLR River Tern Lodge, there is a board that shows what animals were “sighted” during each safari in the preceding one week. Each day, the resort organises two safaris, one each in the morning and afternoon, and some of them are by boat and some by jeep.

I remember studying the boards and trying to divine patterns to decide when we should go by boat and when by jeep (on the second day of our stay there, we were the “longest staying guests” and thus given the choice of safari). On the first evening, in our jeep safari, we saw a herd of elephants. And a herd of gaur. And lots of birds. And a dead deer.

That we had “missed out” on tigers and leopards meant that we wanted to do it again. If what we saw depended solely on the skill of the naturalist and the driver who accompanied us, we would not have been excited to go into the forest again.

However, the element of luck meant that we wanted to just keep going, and going.

Games of pure luck or pure skill can get boring after a while. However, when both luck and skill get involved, they can really really get addictive. Now I fully appreciate Anuroop’s NED Talk.


Night trains

In anticipation of tonight’s Merseyside Derby, I was thinking of previous instances of this fixture at Goodison Park. My mind first went back to the game in the 2013-14 season, which was a see-saw 3-3 draw, with the Liverpool backline being incredibly troubled by Romelu Lukaku, and Daniel Sturridge scoring with a header immediately after coming on to make it 3-3 (and Joe Allen had missed a sitter earlier when Liverpool were 2-1 up).

I remember my wife coming back home from work in the middle of that game, and I didn’t pay attention to her until it was over. She wasn’t particularly happy about that, but the intense nature of the game gave me a fever (that used to happen often in the 2013-14 and 2008-9 seasons).

Then I remember Everton winning 3-0 once, though I don’t remember when that was (googling tells me that was in the 2006-7 season, when I was already a Liverpool fan, but not watching regularly).

And then I started thinking about what happened to this game last season, and then remembered that it was a 0-0 draw. Incidentally, it was on the same day that I travelled to Liverpool – I had a ticket for an Anfield Tour the next morning.

I now see that I had written about getting to Liverpool after I got to my hotel that night. However, I haven’t written about what happened before that. My train from Euston was around 8:00 pm. I remember leaving home (which was in Ealing) at around 6 or so, and then taking two tubes (Central changing to Victoria at Oxford Circus) to get to Euston. And then buying chewing gum and a bottle of water at Marks and Spencer while waiting for my train.

I also remember that while leaving home that evening, I was scared. I was psyched out. It wasn’t supposed to be that way. This was a trip to Liverpool I had been wanting to make for the best part of 14 years. I had kept putting it off during my stay in London, until I knew that I was going to move out of London in two weeks’ time. Liverpool were having a great season (they would go on to win the Champions League, and only narrowly lose the Premier League title).

I was supposed to be excited. Instead I was nervous. My nerves possibly settled only after I was seated in the train that evening.

Thinking about it, I basically hate night trains (well, this wasn’t an overnight train, but it started late in the evening). I hate night buses as well. And this only applies to night trains and buses that take me away from my normal place of residence – starting towards “home” late in the night never worries me.

This anxiety possibly started when I was in IIT Madras. I remember clearly that I used to sleep comfortably without fail while travelling from Madras to Bangalore, but almost never slept, or slept only fitfully, when travelling in the opposite direction. While in hindsight it all appears fine, I never felt particularly settled when I was at IITM.

And consequently, anything that reminds me of travelling to IITM psyches me out. I always took the night train while travelling there, and the anxiety would start on the drive to the railway station. Even now, sometimes, I get anxious while taking that road late in the evening.

Since then, taking night trains has been indelibly linked to travelling to Madras, and is something that I’ve come to fear as well. While I haven’t taken a train in India since 2012, my experience with the trip to Liverpool last year tells me that even non-overnight night trains have that effect on me.

And then, of course, there is the city of Chennai as well. The smells of the city after the train crosses Basin Bridge trigger the first wave of anxiety. Stepping out of the railway station and the thought of finding an autorickshaw trigger the next wave (things might be different now with Uber/Ola, but I haven’t experienced that).

The last time I went to Chennai was for a close friend’s wedding in 2012. I remember waking up early on the day of the wedding and then having a massive panic attack. I spent long enough time staring at the ceiling of my hotel room that I ended up missing the muhurtham.

I’ve made up my mind that the next time I have to go to Chennai, I’ll just drive there. And for sure, I’m not going to take a train leaving Bangalore in the night.

Finite and infinite cricket games

I’ve written about James Carse’s Finite and Infinite Games here before. It is among the more influential books I’ve read, though it’s a bit of a weirdly written book, almost in a constant staccato tone.

From one of my previous posts:

One of the most influential books I’ve read is James Carse’s Finite and Infinite Games. Finite Games are artificial games where we play to “win”. There is a defined finish, and there is a set of tasks we need to achieve that constitutes “victory”. Most real-life games, on the other hand, are “infinite games”, where the objective is simply to ensure that the game goes on.

I’ve spent most of this evening watching The Test, the Amazon Prime documentary about the Australian cricket team after Sandpapergate. It’s a good half-watch. Parts of it demand a lot of attention, but overall it’s a nice “background watch” while I’m doing something else.

In any case, the reason for writing the post is this little interview of Harsha Bhogle somewhere in the middle of this documentary (he has appeared several times more after this one). In this bit, he talks about how in Test cricket, the opponent might be having a good time for a while, but it is okay to permit him that. To paraphrase Gully Boy, “apna time aayega” – the bowler or batsman in question will tire or diminish after some time, after which you can do your business.

He went on to say that this is not the case in limited overs cricket (ODIs and T20s) where both batsmen and bowlers need to constantly look to dominate, and cannot simply look to “survive” when an opponent is on the roll.

While Test cricket is strictly not an “infinite game” (it needs to end in five days), I thought this was a beautiful illustration of the concept of finite and infinite games. The objective of an infinite game, as James Carse describes in his book, is to just continue to play the game.

As a batsman in Test cricket, you look to just be there, weather out the good spells and spend time at the crease. You do this and the runs will come (it is analogous for bowlers – you need to bowl well enough to continue to be in the game, and then when the time comes you will get your rewards).

In ODIs and T20s, you cannot bide your time. Irrespective of how the opponent is playing, you need to “win every moment”, which is the premise for a finite game.

Now, I don’t know what I’m getting at here, or what the point of this post is, but I think I just liked Harsha Bhogle’s characterisation of Tests as infinite games, and wanted to share that with you.

Famous people from little-known countries

I recently finished reading Svetlana Alexievitch’s Second-hand Time, a memoir of people in the erstwhile Soviet Union as the union broke down in 1991. It’s a long and rather intense book, and maybe it wasn’t the best choice for reading on days when I wasn’t able to sleep.

I don’t, however, regret reading the book at all. It was incredibly enlightening, and taught me a lot about life in the Soviet Union and in the post-Soviet republics. This is what I wrote in my review on Goodreads:

Absolutely brilliant book. Very very informative and enlightening, especially for someone for whom “USSR” was this monolith growing up, and then finding out that it was actually 15 different countries.

Only reasons I didn’t give it 5 stars are that it’s a bit too long (though at no point did I want to give up on the book – it’s very good), and that some of the stories are a bit too similar.

Also I would have preferred more stories from the non-Russian republics.

One of the stories in the book is about migrant workers from Tajikistan in Moscow, and how they are ill-treated and racially abused. They are called “blackies”, for example, a term that puzzled me since to my knowledge Tajiks are rather fair-skinned.

I had to “see it to believe it”, and what did I do? I googled a photo of perhaps the only Tajik I’d heard of – Ahmed Shah Massoud, late leader of the Northern Alliance, who fought the Taliban in Afghanistan until his assassination in 2001.

(Now I learn that Massoud was Afghan, not from Tajikistan – he was an ethnic Tajik, which is probably why I remember him as the leader of the ethnic Tajiks in the battle against the Taliban (with General Abdul Rashid Dostum being the ethnic Uzbek leader in the Northern Alliance).)

In any case, now that it turns out that Ahmad Shah Massoud wasn’t actually from Tajikistan, I don’t know of even a single Tajik. Not one. So this got me thinking about countries that have very few people who are present in popular imagination.

And I don’t think there are too many countries like this – countries from which I can’t name a single famous person (I consider my own “general knowledge” to be pretty good, so my knowing someone “famous” from a country should be a reasonable test).

Some countries have charismatic or otherwise popular political leaders, and you are likely to know them by face. Then, there is sport – if you follow a handful of sports, you are likely to know at least a few people from most countries.

For example, my ability to guess a European’s nationality from their first name comes from my following of football, a sport that is popular all over Europe and has famous players from pretty much all its countries (I admit I don’t know anyone from Moldova or Belarus, though the latter has a rather famous and nicely named football club).

I know of people from a lot of former Soviet republics (but not any Tajiks, or Uzbeks or Kazakhs) because I follow chess. Paul Keres from Estonia, Mikhail Tal from Latvia, Levon Aronian and Tigran Petrosian from Armenia, and Teimour Radjabov and Shakhriyar Mamedyarov from Azerbaijan are among the very few (or only) people I know from their respective countries.

Think about it – which are the countries from which you can’t name a single person? How many such countries might there be? In my case there may be a maximum of 50 such countries (there are about 200 independent countries in the world, IIRC).

I recently came across this blog post in EconLog which made a pretty interesting point comparing blacks in the US to Uighurs in China, which possibly prompted my post:

In most cases, oppressed groups tend to be relatively poor and powerless, and thus are often invisible to outsiders. Can you name a single member of the Uyghur minority in China?

It seems to me that African-Americans are somewhat different. Correct me if I’m wrong, but aren’t most well informed people in other countries able to name and identify quite a few African-Americans? In politics the most obvious example is Barack Obama…

I can’t think of a single Uyghur either. Oh, and I forgot to mention the role of movies, books and other forms of popular culture in giving you exposure to people from different countries.