Credentialed and credential-less networks

Recently I tried out Instagram Reels, just to see what the big deal about it is. The first impression wasn’t great. My feed was filled with famous people (KL Rahul was there, along with some bollywood actresses), doing supposedly funny things. Compared to the little I had seen of TikTok (I had the app installed for a day last year), this was barely funny.

In fact, from my first impression it seems like Instagram Reels is a sort of bastard child of TikTok and Quibi (I took a 90-day trial of Quibi and uninstalled it after a month, having used it 2-3 times and got bored each time). There is already a “prior reputation network”, based on people’s followers on the main Instagram product, and Reels builds on top of it.

This means that for a new person coming into this supposedly new social network, the barriers to entry to getting more followers are rather high. They need to compete with people who have already built their reputations elsewhere (either on the Instagram main product, or, in the case of someone like KL Rahul, completely offline).

I was reading this blogpost yesterday that compared and contrasted social networking in the 2000s (blogs) with that of the 2010s (twitter). It’s a nice blogpost, though I should mention that it sort of confirms my biases since I sort of built my reputation using my blog in the late 2000s.

That post makes the same point – blogs created their own reputation networks, while twitter leverages people’s reputations from elsewhere.

The existence of the blue checks points to the way in which the barriers that a new blogger faced entering a community was far lower than is currently the case on twitter. The start-up costs of blogging were higher, but once somebody integrated themselves into a community and began writing, they were judged on the quality of that writing alone. Very little attention was paid to who that person was outside of the blogosphere. While some prominent and well known individuals blogged, there was nothing like the “blue checks” we see on twitter today. It is not hard to understand why this is. Twitter is an undifferentiated mass of writhing souls trying to inflict their angry opinions on the earth. Figuring out who to listen to in this twist of two-sentences is difficult. We use a tweeter’s offline affiliations to separate the wheat and the chaff.

For the longest time, I refrained from putting my real name on this blog (though it was easy enough to triangulate my identity based on all the things I’d written here). This was to create a sort of plausible deniability in case some employer somewhere got pissed off with what I was writing.

Most of the blogosphere was similarly pseudonymous (or even anonymous). A lot of the people I got to know through their blogging, I learnt about from their writing before I knew anything else about them (which came from their “offline lives”). Reputation outside the blogosphere didn’t matter – your standing as a blogger depended only on the quality of your blogposts, and your comments on other people’s blogposts.

It is similar with TikTok – its “extreme machine learning” means that people’s reputations outside the network don’t matter in terms of their following on the network, or how likely they are to appear in people’s feeds. Instead, all that matters is the quality of the content on the platform, as measured (in TikTok’s case) by user engagement on the platform.

So as we look for an alternative to replace TikTok, given that the Chinese Communist Party seems to be able to get supposedly confidential data from it, we need to remember that we need a “fresh network”, or a “credential free” network.

Instagram has done something it’s good at, which is copying. However, given that it relies on existing credentials, Reels will never have the same experience as TikTok. Neither will any other similar product created from an existing social network. What we need is something that can create its own reputation network, bottom up.

Then again, blogging was based on an open platform so it was easy for people to build their networks. With something like TikTok relying heavily on network effects and algorithmic curation, I don’t know if such a thing can happen there.

Omnichannel retail

About 10 days back I decided that the number of covid-19 positive cases in Bangalore was high enough to recalibrate my risk levels. So I decided I’m not going to go to “indoor shops” (where you have to step inside the shop) any more.

Instead, as much as possible I would buy from “over the counter” shops (where you don’t have to step inside). This way, I would avoid being indoors, and as long as I’m outdoors (and wearing a mask) when I’m out of home, I should be reasonably safe.

However, over the years we have come to need a lot of things that at least in an Indian context can be classified as “long tail”. Over the last three months I’ve been buying them from the large format Namdhari store close to home. Now, that’s a large airconditioned shop which my new risk levels don’t allow me to go to. So I decided to order from their website.

Now, Namdhari is a classic case of “omnichannel retail” (the phrase was told to me by one of the guys who helped set it up). There is no warehouse – all customer orders are fulfilled from stores. You could think of it like calling your local shop and asking for delivery.

As you can imagine, this can lead to insane inventory issues, especially for a shop like Namdhari’s that specialises in long tail stuff. It is pretty much impossible for the store to reconcile how much stock it has with what the website shows (even with perfect technology, you’ll miss out on what is in people’s (physical) carts).

There is also the issue of how customers are prioritised, something they are kept in the dark about. If the shop has limited inventory of an item (and with long tail stuff, even a small spike in demand can make inventory very limited), how does it allocate it between people who have trudged all the way to the store and those who have prepaid for it on the website?

I wasn’t that surprised, I guess, when half the items that I had ordered failed to arrive. The delivery guy told me that the rest of my money would get refunded.

I wondered why they wouldn’t try to fulfil my order the next day instead. This brings me to my next grouse – there is sometimes no real reason to provide same-day delivery. If you offer next-day delivery, you know tomorrow’s delivery volumes beforehand, and it is easy to stock up. These guys had this process, it seems, where you have to order for the same day, and if the thing runs out you don’t get it at all.

In any case, three days after my half-fulfilled order had been delivered, I got a mail saying that a refund had been initiated for the items I had ordered but that hadn’t arrived.

It was like writing a cheque. Cheques are inefficient because between the time it is written and encashed, neither the giver nor the receiver has access to the funds (online transfer such as IMPS, on the other hand, ensures that the money is in either the giver or receiver’s account at all points in time).

So my order which had been partially fulfilled was in a similar trishanku state – I didn’t know if the rest would arrive or if I should order the same items from elsewhere. If I waited, I risked getting the stuff even later (since I’d be delaying the order from elsewhere).

It was only after it failed to arrive on Wednesday (and I got the mail) that I was able to place an order from elsewhere. Hopefully this one won’t get into trishanku state as well.

WhatsApp Export Chat

There was a tiny controversy on one WhatsApp group I’m part of. This is a “sparse” WhatsApp group, which means there aren’t too many messages sent. Only around 1000 in nearly 5 years (you’ll soon know how I got that number).

And this morning I wake up to find 42 messages (many members of the group are in the US). Some of them I understood and some I didn’t. So the gossip-monger I am (hey, remember that Yuval Noah Harari thinks gossip is the basis of human civilisation?), I opened up half a dozen backchannel chats.

Like the six blind men of Indostan, these chats helped me construct a picture of what had happened. My domain knowledge had gotten enhanced. However, there was one message that had made a deep impression on me – that claimed that some people were monopolising whatever little conversation there was on that group.

I HAD to test that hypothesis.

The jobless guy that I am, I figured out how to export a chat from WhatsApp. With iOS, it’s rather easy. Go to the info page of a chat or a group, and near “delete chat/group”, you see “export chat/group”. If you say you don’t want media (like I did), you get a text file (I airdropped mine immediately into my Mac).

The formatting of the WhatsApp export file is rather clean, making it easy to parse. The date is in square brackets. The sender’s name (or number, if they’re not in your contact list) is before a colon after the square brackets. A couple of “separate” functions later you are good to go (there are a couple of other nuances. If you can read R code, check mine here).

library(tidyverse)   # provides read_lines, separate, fill, str_detect, etc.

# Read the exported chat, one element per line of the text file
chat <- read_lines('~/Downloads/_chat.txt')

tibble(txt = chat) %>%
  # The timestamp is inside square brackets; the sender precedes the first ': '
  separate(txt, c("Date", "Content"), '\\] ') %>%
  separate(Content, c("Sender", "Content"), ': ') %>%
  mutate(
    Content = coalesce(Content, Date),   # lines that didn't split are message text
    Date = str_trim(str_replace_all(Date, '\\[', '')),
    Date2 = as.POSIXct(Date, format = '%d/%m/%y, %H:%M:%S %p')
  ) %>%
  # Fill in dates and senders for continuation lines of multi-line messages
  fill(Date2, .direction = 'updown') %>%
  fill(Sender, .direction = 'downup') %>%
  # Drop WhatsApp system messages
  filter(!str_detect(Sender, "changed their phone number to a new number")) %>%
  filter(!str_detect(Sender, ' added ') & !str_detect(Sender, ' left')) %>%
  filter(!str_detect(Sender, " joined using this group's invite link")) ->
  mychat

That’s it. You are good to go. You have a nice data frame with sender’s name, message content and date/time of sending. And as one of the teachers at my JEE coaching factory used to say, you can now do “gymnastics”.

And so for the last hour or so I’ve been wasting my time doing such gymnastics. Number of posts sent on each day. Testing the hypothesis that some people talk a lot on the group (I turned out to be far more prolific than I’d imagined). People who start conversations. Whether there are any long bilateral conversations on the group. And so on and so forth (this is how I know there are ~1000 messages on this group).
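For instance, with the mychat data frame built above, the basic counts take just a couple of lines (a minimal sketch of the kind of gymnastics I mean):

# Who talks the most on the group?
mychat %>%
  count(Sender, sort = TRUE)

# How many messages were sent on each day?
mychat %>%
  mutate(Day = as.Date(Date2)) %>%
  count(Day, sort = TRUE)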

Now I want to subject all my conversations to such analysis. For bilaterals it won’t be that much fun – but in case there is some romantic or business interest involved you might find it useful to know who initiates more and who closes more conversations.

You can subject the conversations to natural language processing (with what objective, I don’t know). The possibilities are endless.

And the time wastage can be endless as well. So I’ll stop here.

Diversity and campus placements

I graduated from IIMB in 2006. As was a sort of habit around that time in all IIMs, many recruiters who were supposed to come to campus for recruitment in the third or fourth slot were asked to not turn up – everyone who was in the market for a job had been placed by then.

The situation was very different when my wife was graduating from IESE Business School in 2016. There, barring consulting firms and a handful of other firms, campus placement was nonexistent.

Given the diversity of her class (the 200-odd students came from 60 different countries, and had vastly different experiences), it didn’t make sense for a recruiter to come to campus. The ones that did turn up, like the McKinseys and Amazons of the world, were looking for “generic management talent”, or to put it less charitably, “perfectly replaceable people”.

When companies were looking for perfectly replaceable people, background and experience didn’t matter that much. What mattered was the candidate’s aptitude for the job at hand, which was tested in a series of gruelling interviews.

However, when the jobs were a tad more specialised, a highly diverse campus population didn’t help. The specialisation in the job would mean that the recruiters would have a very strong preference for certain people in the class rather than others, and the risk of not getting the most preferred candidates was high. For specialised recruiters to turn up to campus, it was all or nothing, since the people in the class were so unlike one another.

People in the class were so unlike one another for good reason, and by design – they would be able to add significantly better value to one another in class by dint of their varied experience. When it came to placements, however, it was a problem.

My IIMB class was hardly diverse. Some 130 out of 180 of us were engineers, if I remember correctly. More than 100 of us had a year or less of real work experience. About 150 out of 180 were male. Whichever dimension you looked at us from, there was little to differentiate us. We were a homogeneous block. That also meant that in class, we had little to add to each other (apart from wisecracks and “challenges”).

This, however, worked out beautifully when it came to us getting jobs. Because we were so similar to one another, for a recruiter coming in, it didn’t really matter which of us joined them. While every recruiter might have come in with a shortlist of highly preferred candidates, not getting people from this shortlist wouldn’t have hurt them as much – whoever else they got was not very dissimilar to the ones in their original shortlist.

This also meant that the arbitrarily short interviews (firms had to make a decision after two or three interviews that together lasted an hour) didn’t matter that much. Yes, it was a highly random process that I came to hate from both sides (interviewee and interviewer), but in the larger scheme of things, thanks to the lack of diversity, it didn’t matter to the interviewer.

And so with the students being more or less commoditised, the incentive for a recruiter to come and recruit was greater. And so they came in droves, and in at least my batch and the next, several of them had to be requested not to come since “everyone was already placed” (after that came the Global Financial Crisis, so I don’t know how things went).

Batch sizes at the IIMs have increased, and so has diversity on some counts (there are more women now). However, at a larger level I still think IIM classes are homogeneous enough to attract campus recruiters. I don’t know what the situation is this year with the pandemic, but I would be surprised if placements in the last few years were anything short of stellar.

So this is a tradeoff that business schools (and other schools) need to deal with – the more diverse the class, the richer the peer learning, but the lower the incentive for campus recruitment.

Of late I’ve got into this habit of throwing ideas randomly at twitter, and then expanding them into blog posts. This is one of those posts. While this post has been brewing for five years now (ever since my wife started her placement process at IESE), the immediate trigger was some discussion on twitter regarding liberal arts courses.


Unbundling Higher Education

In July 2004, I went to Madras, wore fancy clothes and collected a laminated piece of paper. The piece of paper, formally called “Bachelor of Technology”, certified three things.

First, it said that I had (very likely) got a very good rank in IIT JEE, which enabled me to enrol in the Computer Science B.Tech. program at IIT Madras.  Then, it certified that I had attended a certain number of lectures and laboratories (equivalent to “180 credits”) at IIT Madras. Finally, it certified that I had completed assignments and passed tests administered by IIT Madras to a sufficient degree that qualified me to get the piece of paper.

Note that all these three were necessary conditions to my getting my degree from IIT Madras. Not passing IIT JEE with a fancy enough rank would have precluded me from the other two steps in the first place. Either not attending lectures and labs, or not doing the assignments and exams, would  have meant that my “coursework would be incomplete”, leaving me ineligible to get the degree.

In other words, my higher education was bundled. There is no reason that should be so.

There is no reason that a single entity should have been responsible for entry testing (which is what IIT-JEE essentially is), teaching and exit testing. Each of these three could have been done by an independent entity.

For example, you could have “credentialing entities” or “entry testing entities”, whose job is to test you on things that admissions departments of colleges might test you on. This could include subject tests such as IIT-JEE, or aptitude tests such as GRE, or even evaluations of extra-curricular activities, recommendation letters and essays as practiced in American universities.

Then, you could have “teaching entities”. This is like the MOOCs we already have. The job of these teaching entities is to teach a subject or set of subjects, and make sure you understood your stuff. Testing whether you had learnt the stuff, however, is not the job of the teaching entities. It is also likely that unless there are superstar teachers, the value of these teaching entities comes down, on account of marginal cost pricing, close to zero.

To test whether you learnt your stuff, you have the testing entities. Their job is to test whether your level of knowledge is sufficient to certify that you have learnt a particular subject.  It is well possible that some testing entities might demand that you cleared a particular cutoff on entry tests before you are eligible to get their exit test certificates, but many others may not.

The only downside of this unbundling is that independent evaluation becomes difficult. What do you make of a person who has cleared entry tests  mandated by a certain set of institutions, and exit tests traditionally associated with a completely different set of institutions? Is the entry test certificate (and associated rank or percentile) enough to give you a particular credential or should it be associated with an exit test as well?

These complications are possibly why higher education hasn’t experimented with any such unbundling so far (though MOOCs have taken the teaching bit outside the traditional classroom).

However, there is an opportunity now. Covid-19 means that several universities have decided to hold online-only classes in 2020-21. Without the peer learning aspect, people are wondering if it is worth paying the traditional amount for these schools. People are also calling for top universities to expand their programs since the marginal cost is slipping further, with the backlash being that this will “dilute” the degrees.

This is where unbundling comes into play. Essentially anyone should be able to attend the Harvard lectures, and maybe even do the Harvard exams (if this can be done with a sufficiently low marginal cost). However, you get a Harvard degree if and only if you have cleared the traditional Harvard admission criteria (maybe the rest get a diploma or something?).

Some other people might decide upon clearing the traditional Harvard admission criteria that this credential itself is sufficient for them and not bother getting the full degree. The possibilities are endless.

Old-time readers of this blog might remember that I had almost experimented with something like this. Highly disillusioned during my first year at IIT, I had considered dropping out, reasoning that my JEE rank was credential enough. Finally, I turned out to be too much of a chicken to do that.

Known stories and trading time

One of the most fascinating concepts I’ve ever come across is that of “trading time”. I first came across it in Benoit Mandelbrot’s The (Mis)Behaviour of Markets, which is possibly the only non-textbook and non-children’s book that I’ve read at least four times.

The concept of “trading time” is simple – if you look at activity on a market, it is not distributed evenly over time. There are times when nothing happens, and then there are times when “everything happens”. For example, 2020 has been an incredibly eventful year when it comes to world events. Not every year is eventful like this.

A year or so after I first read this book, I took a job where I had to look at intra-day trading in American equities markets. And I saw “trading time” happening in person – the volume of trade in the market was massive in the first and last hour, and the middle part of the day, unless there was some event happening, was rather quiet.

Trading time applies in a lot of other contexts as well. In some movies, a lot of the action happens in a few parts of the movie, while nothing happens in the rest. When I work, I end up doing a lot of work in some small windows, and nothing most of the time. Children have “growth spurts”, both physical and mental.

I was thinking about this topic when I was reading SL Bhyrappa’s Parva. Unfortunately I find it time-consuming to read anything longer than a newspaper headline or a signboard in Kannada, so I read it in translation.

However, the book is so good that I have resolved to read the original (however much time it takes) before the end of this year.

It is a sort of retelling of the Mahabharata, but it doesn’t tell the whole story in a linear manner. The book is structured largely around a set of monologues, mostly set around journeys. So there is Bhima going into the forest to seek out his son Ghatotkacha to help him in the great war. Around the same time, Arjuna goes to Dwaraka. Just before the war begins, Bhishma goes out in search of Vyasa. Each of these journeys is associated with extra-long flashbacks and philosophical musings.

In other words, what Bhyrappa does is to seek out tiny stories within the great epic, and then drill down massively into those stories. Some of these journey-monologues run into nearly a hundred pages (in translation). The rest of the story is largely glossed over, or given only a passing mention.

Bhyrappa basically gives “trading time treatment” to the Mahabharata. It helps that the overall story is rather well known, so readers can be expected to easily fill in any gaps. While the epic itself is great, there are parts where “a lot happens”, and parts where “nothing happens”. What is interesting about Parva is that Bhyrappa picks out unintuitive parts to explore in massive depth, and he simply glosses over the parts which most other retellings give a lot of footage to.

And this is what makes the story rather fascinating.

I can now think of retellings of books, or remakes of movies, where the story remains the same, but “trading time is inverted”. Activities that were originally given a lot of footage get glossed over, but those that were originally ignored get explored in depth.


Scrabble

I’ve forgotten during which stage of lockdown or “unlock” e-commerce for “non-essential goods” reopened, but among the first things we ordered was a Scrabble board. It was an impulse decision. We were on Amazon ordering puzzles for the daughter, and she had just about started putting together “sounds” to make words, so we thought “scrabble tiles might be useful for her to make words with”.

The thing duly arrived two or three days later. The wife had never played Scrabble before, so on the day it arrived I taught her the rules of the game. We play with the Sowpods dictionary open, so we can check words that the opponent challenges. Our “scrabble vocabulary” has surely improved since the time we started playing (“Qi” is a lifesaver, btw).

I had insisted on ordering the “official Scrabble board” sold by Mattel. The board is excellent. The tiles are excellent. The bag in which the tiles are stored is also excellent. The only problem is that there was no “scoreboard” that arrived in the set.

On the first day we played (when I taught the wife the rules, and she ended up beating me – I’m so horrible at the game), we used a piece of paper to maintain scores. The next day, we decided to score using an Excel sheet. Since then, we’ve continued to use Excel. The scoring format looks somewhat like this.

So each worksheet contains a single day’s play. Initially after we got the board, we played pretty much every day. Sometimes multiple times a day (you might notice that we played 4 games on 3rd June). So far, we’ve played 31 games. I’ve won 19, Priyanka has won 11 and one ended in a tie.

In any case, scoring on Excel has provided an additional advantage – analytics!! I have an R script that I run after every game, that parses the Excel sheet and does some basic analytics on how we play.

For example, on each turn, I make an average of 16.8 points, while Priyanka makes 14.6. Our score distribution makes for interesting viewing. Basically, she follows a “long tail strategy”. Most of the time, she is content with making simple words, but occasionally she produces a blockbuster.

I won’t put a graph here – it’s not clear enough. This table shows how many times we’ve each made more than a particular threshold (in a single turn). The figures are cumulative.

Threshold   Karthik   Priyanka
30          50        44
40          12        17
50          5         10
60          3         5
70          2         2
80          0         1
90          0         1
100         0         1

Notice that while I’ve made many more 30+ scores than her, she’s made many more 40+ scores than me. Beyond that, she has crossed every threshold at least as many times as me.
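If you want to do something similar with your own scores, here is a minimal sketch of this kind of analysis. The file name, sheet layout and column names below are hypothetical (one worksheet per game, with one column of per-turn scores for each player); our actual sheets look a little different.

library(tidyverse)
library(readxl)

# Hypothetical file: one worksheet per game, columns "Karthik" and "Priyanka"
# holding the points scored on each turn
path <- 'scrabble_scores.xlsx'

excel_sheets(path) %>%
  map_dfr(~ read_excel(path, sheet = .x), .id = 'Game') %>%
  gather(Player, Points, Karthik, Priyanka) %>%
  filter(!is.na(Points)) ->
  turns

# Average points per turn for each player
turns %>%
  group_by(Player) %>%
  summarise(AvgPoints = mean(Points))

# Cumulative counts: how many turns scored at or above each threshold
tibble(Threshold = seq(30, 100, 10)) %>%
  mutate(
    Karthik  = map_int(Threshold, ~ sum(turns$Points[turns$Player == 'Karthik'] >= .x)),
    Priyanka = map_int(Threshold, ~ sum(turns$Points[turns$Player == 'Priyanka'] >= .x))
  )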

Another piece of analysis is the “score multiple”. This is a measure of “how well we use our letters”. For example, if I place the word “tiger” on a double word score (and no double or triple letter score), I get 12 points. The points total on the tiles is 6, giving me a multiple of 2.

Over the games I have found that I have a multiple of 1.75, while she has a multiple of 1.70. So I “utilise” the tiles that I have (and the ones on the board) a wee bit “better” than her, though she often accuses me of “over optimising”.

It’s been fun so far. There was a period of time when we were addicted to the game, and we still turn to it when one of us is in a “work rut”. And thanks to maintaining scores on Excel, the analytics after is also fun.

I’m pretty sure you’re spending the lockdown playing some board game as well. I strongly urge you to use Excel (or equivalent) to maintain scores. The analytics provides a very strong collateral benefit.


Shooting, investing and the hot hand

A couple of years back I got introduced to “Stumbling and Mumbling”, a blog written by Chris Dillow, who was described to me as a “Marxist investment banker”. I don’t agree with a lot of the stuff in his blog, but it is all very thoughtful.

He appears to be an Arsenal fan, and in his latest post, he talks about “what we can learn from football”. In that, he writes:

These might seem harmless mistakes when confined to talking about football. But they have analogues in expensive mistakes. The hot-hand fallacy leads investors to pile into unit trusts with good recent performance (pdf) – which costs them money as the performance proves unsustainable. Over-reaction leads them to buy stocks at the top of the market and sell at the bottom. Failing to see that low probabilities compound to give us a high one helps explain why so many projects run over time and budget. And so on.

Now, the hot hand fallacy has been a hard problem in statistics for a few years now. Essentially, the intuitive belief in basketball is that someone who has scored a few baskets is more likely to be successful in his next basket (basically, the player is on a “hot hand”).

It all started with a seminal 1985 paper by Gilovich, Vallone and Tversky, which used (the then limited) data to show that the hot hand is a fallacy. Then, more recently, Miller and Sanjurjo took another look at the problem and, with far better data at hand, found that the hot hand is actually NOT a fallacy.

There is a nice podcast on The Art of Manliness, where Ben Cohen, who has written a book about hot hands, spoke about the research around it. In any case, there are very valid reasons as to why hot hands exist.

Yet, Dillow is right – while the hot hand might exist in something like basketball shooting, it doesn’t in something like investing. This has to do with how much “control” the person in question has. Let me switch fields completely now and quote a paragraph from Venkatesh Guru Rao’s “The Art Of Gig” newsletter:

As an example, take conducting a workshop versus executing a trade based on some information. A significant part of the returns from a workshop depend on the workshop itself being good or bad. For a trade on the other hand, the returns are good or bad depending on how the world actually behaves. You might have set up a technically perfect trade, but lose because the world does something else. Or you might have set up a sloppy trade, but the world does something that makes it a winning move anyway.

This is from the latest edition, which is paid. Don’t worry if you aren’t a subscriber. The above paragraph I’ve quoted is sufficient for the purpose of this blogpost.

If you are in the business of offering workshops, or shooting baskets, the outcome of the next workshop or basket depends largely upon your own skill. There is randomness, yes, but this randomness is not very large, and the impact of your own effort on the result is large.

In case of investing, however, the effect of the randomness is very large. As VGR writes, “For a trade on the other hand, the returns are good or bad depending on how the world actually behaves”.

So if you are in a hot hand when it comes to investing, it means that “the world behaved in a way that was consistent with your trade” several times in a row. And that the world has behaved according to your trade several times in a row makes it no more likely that the world will behave according to your trade next time.

If, on the other hand, you are on a hot hand in shooting baskets or delivering lectures, then it is likely that this hot hand is because you are performing well. And because you are performing well, the likelihood of you performing well on the next turn is also higher. And so the hot hand theory holds.

So yes, hot hands work, but only in contexts “with a high R Square”, where the impact of the doer’s performance on the outcome is large relative to the randomness. In high-randomness regimes, such as gambling or trading, the hot hand doesn’t matter.

Half-watching movies, and why I hate tweetstorms

It has to do with “bit rate”

I don’t like tweetstorms. Up to six tweets is fine, but beyond that I find it incredibly difficult to hold my attention. I actually find it stressful. So of late, I’ve been making a conscious effort to stop reading tweetstorms when they start stressing me out. The stress isn’t worth whatever value the tweetstorms may have.

I remember making the claim on twitter that I refuse to read any more tweetstorms of more than six tweets henceforth. I’m not able to find that tweet now.

Anyways…

Why do I hate tweetstorms? It is for the same reason that I like to “half-watch” movies, something that endlessly irritates my wife. It has to do with “bit rates”.

I use the phrase “bit rate” to refer to the rate of flow of information (remember that bit is a measure of information).

The thing with movies is that some of them have very low bit rate. More importantly, movies have vastly varying bit rates through their lengths. There are some parts in a movie where pretty much nothing happens, and a lot of it is rather predictable. There are other parts where lots happens.

This means that in the course of a movie you find yourself engrossed in some periods and bored in others, and that can be rather irritating. And boredom in the parts where nothing is happening sometimes leads me to want to turn off the movie.

So I deal with this by “half watching”, essentially multi tasking while watching. Usually this means reading, or being on twitter, while watching a movie. This usually works beautifully. When the bit rate from the movie is high, I focus. When it is low, I take my mind off and indulge in the other thing that I’m doing.

It is not just movies that I “half-watch” – a lot of sport also gets the same treatment. Like right now I’m “watching” Watford-Southampton as I’m writing this.

A few years back, my wife expressed disapproval of my half-watching. By also keeping a book or computer, I wasn’t “involved enough” in the movie, she started saying, and that half-watching meant we “weren’t really watching the movie together”. And she started demanding full attention from me when we watched movies together.

The main consequence of this is that I started watching fewer movies. Given that I can rather easily second-guess movie plots, I started finding watching highly predictable stuff rather boring. In any case, I’ve recently received permission to half-watch again, and have watched two movies in the last 24 hours (neither of which I would have been able to sit through had I paid full attention – they had low bit rates).


So what’s the problem with tweetstorms? The problem is that their bit rate is rather high. With “normal paragraph writing” we have come to expect a certain degree of redundancy. This allows us to skim through stuff while getting information from them at the same time. The redundancy means that as long as we get some key words or phrases, we can fill in the rest of the stuff, and reading is rather pleasant.

The thing with a tweetstorm is that each sentence (tweet, basically) has a lot of information packed into it. So skimming is not an option. And the information hitting your head at the rate that tweetstorms generally convey can result in a lot of stress.

The other thing with tweetstorms, of course, is that each tweet is disjoint from the one before and after it. So there is no flow to the reading, and the mind has to expend extra energy to process what’s happening. Combine this with a rather high bit rate, and you know why I can’t stand them.

What is the Case Fatality Rate of Covid-19 in India?

The economist in me will give a very simple answer to that question – it depends. It depends on how long you think people will take from onset of the disease to die.

The modeller in me extended the argument that the economist in me made, and built a rather complicated model. This involved smoothing, assumptions on probability distributions, long mathematical derivations and (for good measure) regressions. And out of all that came this graph, with the assumption that the average person who dies of covid-19 dies 20 days after the infection is detected.

[Graph: estimated case fatality rate by state, assuming death occurs on average 20 days after detection]

Yes, there is a wide variation across the country. Given that the disease is the same and the treatment for most patients is pretty much the same (lots of rest, lots of water, etc.), it is weird that the case fatality rate varies so much across Indian states. There is only one explanation – assuming that deaths can’t be faked or miscounted (covid deaths attributed to other causes or vice versa), the problem is in the “denominator” – the number of confirmed cases.

What the variation here tells us is that in states towards the top of this graph, we are likely not detecting most of the positive cases (serious cases will get themselves tested anyway, get hospitalised, and perhaps die; it’s the less serious cases that can “slip”). Taking a state low down in this graph as a “good tester” (say Andhra Pradesh), we can try and estimate the extent of under-detection of cases in each state.

Based on state-wise case tallies as of now (there might be some error since some states might have reported today’s numbers and some might not have), here are my estimates of the actual number of cases in each state, based on our calculations of the case fatality rate.

Yeah, Maharashtra alone should have crossed a million cases based on the number of people who have died there!

Now let’s get to the maths. It’s messy. First we look at the number of confirmed cases per day and number of deaths per day per state (data from here). Then we smooth the data and take 7-day trailing moving averages. This is to get rid of any reporting pile-ups.

Now comes the probability assumption – we assume that a proportion p of all the confirmed cases will die. We assume an average number of days (N) to death for people who are supposed to die (let’s call them Romeos?). They all won’t pop off exactly N days after we detect their infection. Let’s say a proportion \lambda dies each day. Of everyone who is infected, supposed to die and not yet dead, a proportion \lambda will die each day.

My maths has become rather rusty over the years but a derivation I made shows that \lambda = \frac{1}{N}. So if people are supposed to die in an average of 20 days, \frac{1}{20} will die today, \frac{19}{20}\frac{1}{20} will die tomorrow. And so on.
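(One way to see this: if each surviving Romeo dies with probability \lambda on any given day, the number of days he takes to die follows a geometric distribution, whose expected value is

E[T] = \sum_{k=1}^{\infty} k \lambda (1-\lambda)^{k-1} = \frac{1}{\lambda}

Setting this expected value equal to the assumed average N gives \lambda = \frac{1}{N}.)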

So people who die today could be people who were detected with the infection yesterday, or the day before, or the day before day before (isn’t it weird that English doesn’t have a word for this?) or … Now, based on how many cases were detected on each day, and our assumption of p (let’s assume a value first; we can derive it back later), we can know how many people who were found sick k days back are going to die today. Do this for all k, and you can model how many people will die today.

The equation will look something like this. Assume d_t is the number of people who die on day t and n_t is the number of cases confirmed on day t. We get

d_t = p  (\lambda n_{t-1} + (1-\lambda) \lambda n_{t-2} + (1-\lambda)^2 \lambda n_{t-3} + ... )

Now, all these ns are known. d_t is known. \lambda comes from our assumption of how long people will, on average, take to die once their infection has been detected. So in the above equation, everything except p is known.

And we have this data for multiple days. We know the left hand side. We know the value in brackets on the right hand side. All we need to do is to find p, which I did using a simple regression.

And I did this for each state – take the number of confirmed cases on each day, the number of deaths on each day and your assumption on average number of days after detection that a person dies. And you can calculate p, which is the case fatality rate. The true proportion of cases that are resulting in deaths.

This produced the first graph that I’ve presented above, for the assumption that a person, should he die, dies on an average 20 days after the infection is detected.

So what is India’s case fatality rate? While the first graph says it’s 5.8%, the variation by state suggests that the issue is one of detecting mild cases, so the true case fatality rate is likely far lower. From doing my daily updates on Twitter, I’ve come to trust Andhra Pradesh as a state that is testing well, so if we assume they’ve found all their active cases, we can use that as a base and arrive at the second graph, in terms of the true number of cases in each state.

PS: It’s common to just divide the number of deaths so far by number of cases so far, but that is an inaccurate measure, since it doesn’t take into account the vintage of cases. Dividing deaths by number of cases as of a fixed point of time in the past is also inaccurate since it doesn’t take into account randomness (on when a Romeo might die).

Anyway, here is my code, for what it’s worth.

library(tidyverse)   # dplyr, tidyr, etc.; broom is called via broom::

deathRate <- function(covid, avgDays) {
  # Reshape raw data into one row per state per date,
  # with daily Confirmed and Deceased counts as columns
  covid %>%
    mutate(Date = as.Date(Date, '%d-%b-%y')) %>%
    gather(State, Number, -Date, -Status) %>%
    spread(Status, Number) %>%
    arrange(State, Date) ->
    cov1

  # Smooth everything with a 7-day trailing moving average,
  # to get rid of reporting pile-ups
  cov1 %>%
    arrange(State, Date) %>%
    group_by(State) %>%
    mutate(
      TotalConfirmed = cumsum(Confirmed),
      TotalDeceased = cumsum(Deceased),
      ConfirmedMA = (TotalConfirmed - lag(TotalConfirmed, 7)) / 7,
      DeceasedMA = (TotalDeceased - lag(TotalDeceased, 7)) / 7
    ) %>%
    ungroup() %>%
    filter(!is.na(ConfirmedMA)) %>%
    select(State, Date, Deceased = DeceasedMA, Confirmed = ConfirmedMA) ->
    cov2

  # For each death date, weight confirmed cases from the preceding 100 days
  # by the geometric death-delay distribution, then regress deaths on this
  # weighted sum (no intercept) to estimate p, the case fatality rate
  cov2 %>%
    select(DeathDate = Date, State, Deceased) %>%
    inner_join(
      cov2 %>%
        select(ConfirmDate = Date, State, Confirmed) %>%
        crossing(Delay = 1:100) %>%
        mutate(DeathDate = ConfirmDate + Delay),
      by = c("DeathDate", "State")
    ) %>%
    filter(DeathDate > ConfirmDate) %>%
    arrange(State, desc(DeathDate), desc(ConfirmDate)) %>%
    mutate(
      Lambda = 1 / avgDays,
      Adjusted = Confirmed * Lambda * (1 - Lambda)^(Delay - 1)
    ) %>%
    filter(Deceased > 0) %>%
    group_by(State, DeathDate, Deceased) %>%
    summarise(Adjusted = sum(Adjusted)) %>%
    ungroup() %>%
    lm(Deceased ~ Adjusted - 1, data = .) %>%
    summary() %>%
    broom::tidy() %>%
    select(estimate) %>%
    first() %>%
    return()
}