Peanut Butter

You can put this down as a “typical foreign-returned crib”. The fact, however, is that since I returned to India a little over a year ago, I’ve been unable to find good quality peanut butter here.

For two years in London, I subsisted on this “whole earth” peanut butter. Available in both crunchy and creamy versions (I preferred the former, but the daughter, who was rather young then, preferred the latter), it was made only of peanuts, palm oil and salt. It was absolutely brilliant.

The thing with Indian peanut butter makers is that so far I’ve yet to find a single brand that both contains salt and doesn’t contain sugar! The big commercial brands all have added sugar. Some of the smaller players don’t have salt, and those that do have salt include some other sweetener like honey or jaggery, which defeats the point of not having sugar.

I’d ranted about this on twitter a few months back:

I got a few suggestions there, tried the whole lot, and the result is still the same. Those that have salt also have sugar. The consistency of the product (especially for the smaller guys) is completely off. And a lot of the smaller “organic” players (who play up the organic factor rather than the nutrition factor) don’t add any preservatives or stabilising agents, which means that in Indian temperatures (30s Celsius), the oil inevitably separates from the rest of the product, leaving the rest of the thing in an intense mess.

Finally, the “sustainable solution” I settled on was to buy Haldiram’s roasted and salted peanuts, and then just grind them in the mixie (making it from first principles at home just hasn’t worked out for me). Except that now, during the lockdown, I haven’t been able to procure Haldiram’s roasted and salted peanuts.

So during another shopping trip to a supermarket earlier this week (not enough insight there to merit a full blog post), I decided to try one of the commercial brands. It was available in a big white 1.25kg box and had fitness (rather than “goodness”) messaging on the cover. It had added sugar, but at 10 grams per 100 grams of product, that was less than the other commercial brands. On a whim, I decided to just go for it.

And so far it’s been brilliant. Yes, it’s sweeter than I would have liked, but the most important thing is that, thanks to the “stabilising agent”, it has a consistent texture. I don’t need to endlessly mix it in the box each time I eat it. The taste is great (apart from the sweetness), and I must say I’m having “real peanut butter” after a long time!

It’s a pity that it took a year and a period of lockdown for me to figure this out. And I’m never trying one of those “organic” peanut butters again. They’re simply not practical to use.

Then again, why can’t anyone figure out that you can add salt without necessarily adding sugar?

Tests per positive case

I seem to be becoming a sort of “testing expert”, though the so-called “testing mafia” (ok I only called them that) may disagree. Nothing external has happened since the last time I wrote about this topic, but here is more “expertise” from my end.

As some of you might be aware, I’ve now created a script that does the daily updates that I’ve been doing on Twitter for the last few weeks. After I went off twitter last week, I tried for a couple of days to get friends to tweet my graphs. That wasn’t efficient. And I’m not yet over the twitter addiction enough to log in to twitter every day to post my daily updates.

So I’ve done what anyone who has a degree in computer science, and a reasonable degree of self-respect, should do – I now have this script (that runs on my server) that generates the graph and some mildly “intelligent” commentary and puts it out at 8am every day. Today’s update looked like this:

Sometimes I make the mistake of going to twitter and looking at the replies to these automated tweets (that can be done without logging in). Most replies seem to be from the testing mafia. “All this is fine but we’re not testing enough so can’t trust the data”, they say. And then someone goes off on “tests per million” as if that is some gold standard.

As I discussed in my last post on this topic, random testing is NOT a good thing here. There are several ethical issues with that. The error rates of the testing mean that there is a high chance of false positives, and also of false negatives. So random testing can both “unleash” infected people (false negatives), and unnecessarily clog hospital capacity with the uninfected (false positives).

So if random testing is not a good metric on how adequately we are testing, what is? One idea comes from this Yahoo report on covid management in Vietnam.

According to data published by Vietnam’s health ministry on Wednesday, Vietnam has carried out 180,067 tests and detected just 268 cases, 83% of whom it says have recovered. There have been no reported deaths.

The figures are equivalent to nearly 672 tests for every one detected case, according to the Our World in Data website. The next highest, Taiwan, has conducted 132.1 tests for every case, the data showed

Total tests per positive case. Now, that’s an interesting metric. The basic idea is that if most of the people we are testing show positive, then we simply aren’t testing enough. However, if we are testing a lot of people for every positive case, then it means that we are also testing a large number of marginal cases (there is one caveat I’ll come to).

Also, tests per positive case takes the “base rate” into account. If a region has been affected massively, then the base rate itself will be high, and the region needs to test more. A less affected region needs less testing (remember we only test those with a high prior probability of infection). And it is likely that in a region with a higher base rate, more positive cases are found (this is a deadly disease, so anyone with more than a mild occurrence of it is bound to get themselves tested).

The only caveat here is that the tests need to be “of high quality”, i.e. they should be done on people with high base rates of having the disease. Any measure that becomes a target is bound to be gamed, so if tests per positive case becomes a target, it is easy for a region to game it by testing random people (rather than those with high base rates). For now, let’s assume that nobody has made this a target yet, so there isn’t that much gaming.

So how is India faring? Based on data from covid19india.org, as of yesterday (23rd April) India had done about 520,000 tests, of which about 23,000 have come back positive. In other words, India has tested about 23 people for every positive case. Compared to Vietnam (or even Taiwan), that’s a really low number.

However, different states are testing to different extents by this metric. Again using data from covid19india.org, I created this chart that shows the cumulative “tests per positive case” in each state in India. I drew each state in a separate graph, with different scales, because they were simply not comparable.
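In case you want to replicate the chart, here is a rough sketch of what such a script might look like. This is not my actual code: the CSV file name and the column names (date, state, cumulative tests, cumulative positives) are placeholders, and you would adapt them to whatever form you have downloaded the covid19india.org data in.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed input: one row per state per day, with cumulative tests and
# cumulative positive cases. File and column names are illustrative.
df = pd.read_csv("statewise_testing.csv", parse_dates=["date"])

# Cumulative tests per positive case, as of each date
df["tests_per_positive"] = df["total_tested"] / df["total_positive"]

states = sorted(df["state"].unique())
fig, axes = plt.subplots(nrows=len(states), figsize=(8, 3 * len(states)))

# One panel per state, each with its own y-axis scale,
# since the numbers are simply not comparable across states
for ax, state in zip(axes, states):
    sub = df[df["state"] == state]
    ax.plot(sub["date"], sub["tests_per_positive"])
    ax.set_title(state)
    ax.set_ylabel("Tests per positive case")

plt.tight_layout()
plt.savefig("tests_per_positive_by_state.png")
```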

Notice that Maharashtra, our worst affected state, is only testing 14 people for every positive case, and this number is going down over time. Testing capacity in that state (which has, in absolute numbers, done the maximum number of tests) is sorely stretched, and it is imperative that testing be scaled up massively there. It seems highly likely that testing there has been backlogged, with not enough capacity to test the high base rate cases. Gujarat and Delhi, other badly affected states, are in a similar boat, testing only 16 and 13 people respectively for every infected person.

At the other end, Orissa is doing well, testing 230 people for every positive case (and this number is rising). Karnataka is not bad either, with about 70 tests per case (again increasing; the state massively stepped up testing last Thursday). Andhra Pradesh is doing nearly 60, and Haryana 65.

Now I’m waiting for the usual suspects to reply to this (either on twitter, or as a comment on my blog) saying this doesn’t matter because we are “not doing enough tests per million”.

I wonder why some people are proud to show off their innumeracy (OK fine, I understand that it’s a bit harsh to describe someone who doesn’t understand Bayes’s Theorem as “innumerate”).

 

Zoom in, zoom out

It was early on in the lockdown that the daughter participated in her first ever Zoom videoconference. It was an extended family call, with some 25 people across 9 or 10 households.

It was chaotic, to say the least. Family call meant there was no “moderation” of the sort you see in work calls (“mute yourself unless you’re speaking”, etc.). Each location had an entire family, so apart from talking on the call (which was chaotic with so many people anyways), people started talking among themselves. And that made it all the more chaotic.

Soon the daughter was shouting that it was getting too loud, and turned my computer volume down to the minimum (she’s figured out most of my computer controls in the last 2 months). After that, she lost interest and ran away.

A couple of weeks later, the wife was on a zoom call with a big group of her friends, and asked the daughter if she wanted to join. “I hate zoom, it’s too loud”, the daughter exclaimed and ran away.

Since then she has taken part in a couple of zoom calls, organised by her school. She sat with me once when I chatted with a (not very large) group of school friends. But I don’t think she particularly enjoys Zoom, or large video calls. And you need to remember that she is a “video call native”.

The early days of the lockdown were ripe times for people to turn into gurus, and make predictions with the hope that nobody would ever remember them in case they didn’t come through (I indulged in some of this as well). One that made the rounds was that group video calling would become much more popular and even replace group meetings (especially in the immediate aftermath of the pandemic).

I’m not so sure. While the rise of video calling has indeed given me an excuse to catch up “visually” with friends I haven’t seen in ages, I don’t see that much value from group video calls, after having participated in a few. The main problem is that there can, at a time, be only one channel of communication.

A few years back I’d written about the “anti two pizza rule” for organising parties, where I said that if you have a party, you should either have five or fewer guests, or ten or more (or something of the sort). The idea was that five or fewer can indeed have one coherent conversation without anyone being left out. Ten or more means the group naturally splits into multiple smaller groups, with each smaller group able to have conversations that add value to them.

In between (6-9 people) means it gets awkward – the group is too small to split, and too large to have one coherent conversation, and that makes for a bad party.

Now take that online. Because we have only one audio channel, there can only be one conversation for the entire group. This means that for a group of 10 or more, any “cross talk” necessarily needs to be broadcast, and that interferes with the main conversation of the group. So however large the online group, you can’t split it. And the anti two pizza rule becomes the “anti greater than or equal to two pizza rule”.

In other words, for an effective online conversation, you need to have four (or at most five) participants. Else you risk the group getting unwieldy, some participants feeling left out or bored, or so much cross talk that nobody gets anything out of it.

So Zoom (or any other video chat app) is not going to replace any of our regular in-person communication media. It might, to a small extent, in the immediate wake of the pandemic, when people are afraid to meet in large groups, but it will die out after that. OK, that is one more prediction from my side.

In related news, I swore off lecturing in webinars some five years ago. I found it really stressful to lecture without the ability to look into the eyes of the “students”. I wonder if teachers worldwide who are being forced to lecture online because schools are shut feel the way I do.

Gully Cricket With A Test Cricketer

Long, long ago, I’d written a post comparing gully cricket with baseball. This was based on my experience playing cricket in school, on roads next to friends’ houses, in the gap between my house and the next, and even the gap between rows of desks in my school classroom.

I hadn’t imagined all this gully cricket experience to come in useful in any manner. Until a few weeks back when Siddhartha Vaidyanathan asked me to join him in this episode of “81 all out” podcast. The “main guest” on this show was Test cricketer Vijay Bharadwaj, whose Test debut, you might remember, ended in “83 all out“.

It was a fascinating conversation, and I loved being part of it. I realised that the sort of gully cricket I played was nothing like the sort that Vijay played. As I mention in the podcast, I “never graduated from the road to the field”.

Unfortunately I wasn’t able to put my fundaes on baseball, and other theories I’ve concocted about Gully Cricket. Nevertheless, I had fun recording this, and I think you’ll have fun listening to it as well. You can listen to it here, or on any of your usual podcast tools (search for “81 all out”).

Diamonds and Rust

So this post is going to piss off the wife on at least two counts. Firstly, she thinks I’m “spending too much time on the computer” nowadays, and not enough with her. Secondly, this post refers to an old crush who my wife thinks I had “blogged too much about” (the implication is that I don’t blog enough about my wife).

Then again, I think I’ve been taking myself too seriously on this blog of late, and so need something to break out of this rut, and this post is something I’ve been intending to write for a long time. So I’m taking a chance here.

The song in question is Diamonds and Rust, originally performed by Joan Baez, and then covered by Judas Priest in their album Sin After Sin.

I was first introduced to this song by the Judas Priest version. It was that time back in college when I had a computer, and access to a LAN full of pirated music, and was sampling all the bands that I thought might be cool (it’s another matter that I ended up liking a lot of these “cool” bands, including Judas Priest).

As was my wont then whenever I “discovered” some artist, I would listen to all their works in order, album by album. I do this nowadays as well, when I “rediscover” artists. And so I got introduced to Diamonds and Rust. I remember the song immediately making an impression on me, but not too much (the other song that made an immediate impact was called “between the hammer and the anvil”, and I’d wondered if it was about the mechanics of the inner ear).

Anyway, in the middle of discovering Judas Priest for the first time, I met this girl. I mean I’d known her for a really long time but this was the first time we were “having a conversation”. We had met at this tiny cafe full of college kids (we were also college kids then) where she had made a big fuss about being a “low calorie person”. Music was playing. Soon a vaguely familiar sounding song played, in a voice that wasn’t familiar at all. Between bits of the conversation, all I caught from the song was that it was “_____ and _____ “. Surprisingly for me, I didn’t try to immediately figure out which song it was upon returning to my room that night.

The years went by. I probably ended up blogging about this girl a bit too much for my own good later on. The person who is now my wife read some of those posts and thought she had found a guy who would write loads about her as well. I started off brightly, but in the long term I don’t think I’ve lived up to that expectation.

I don’t recall the circumstances in which I rediscovered Diamonds And Rust. It happened in London, either towards the end of 2017 or the beginning of 2018. I think the rediscovery again happened through Judas Priest – I was working through their albums one by one after a 12 year gap, and chanced upon Diamonds And Rust again. Some chord (not literally) was struck. I went down a rabbit hole.

I realised this was possibly the song that had initially registered all those years ago, and that I had heard in the cafe. Googling revealed it was a cover, and the original did sound very familiar (I think this is the story. I’ve sat on this post for so long now I’ve really forgotten). I was convinced. The Joan Baez version did seem very familiar. It all started coming back to me. The next couple of days I was careful around the wife so she wouldn’t realise that I had gotten excited about something vaguely related to an old crush.

In any case, I liked the cover so much that soon I started creating a playlist of “metal covers of non-metal songs”.

I called it “Rust Covers Diamonds” (get the clever pun?). I’m listening to that playlist right now as I write this. It’s a public playlist, so feel free to listen to it. You’ll love a lot of the songs in it! Especially the first “title track”.

Update

There is one thing I don’t like about Diamonds and Rust, and I blame Joan Baez for it (Judas Priest simply copied it without checking, it seems). The song is not dimensionally consistent. Check the lyrics:

And here I sit, hand on the telephone
Hearing the voice I’d known
A couple of light years ago
Heading straight for a fall

A light year is a unit of distance, not time. So “a couple of light years ago” makes absolutely no sense. I really don’t know how the editors let that pass. Then again, you don’t expect most editors to know physics!

Blogs and tweetstorms

The “tweetstorm” is a relatively new art form. It basically consists of a “thread” of tweets that serially connect to one another, which all put together are supposed to communicate one grand idea.

It is an art form that grew organically on twitter, almost as a protest against the medium’s 140 (now raised to 280) character limit. Nobody really knows who “invented” it. It had emerged by 2014, at least, as this Buzzfeed article cautions.

In the early days, you would tweetstorm by continuously replying to your own tweet, so the entire set of tweets could be seen by readers as a “thread”. Then in 2017, Twitter itself recognised that it was being taken over by tweetstorms, and added “native functionality” to create them.

In any case, as someone from “an older generation” (I’m from the blogging generation, if I can describe myself that way), I was always fascinated by this new art form that I’d never really managed to master. Once in a while, rather than writing here (which is my natural thing to do), I would try and write a tweetstorm. Most times I didn’t succeed. Clearly, someone who is good at an older art form struggles to adapt to newer ones.

And then something clicked on Wednesday when I wrote my now famous tweetstorm on Bayes’s Theorem and covid-19 testing. I got nearly two thousand new followers, I got invited to a “debate” on the Republic news channel, and my tweetstorm is being circulated in apartment Telegram groups (though so far nobody has sent me my own tweetstorm).

In any case, I don’t like platforms where I’m not in charge of content (that’s a story for another day), and so thought I should document my thoughts here on my blog. And I did so last night. At over 1200 words, it’s twice as long as my average blogpost (it tired me so much that the initial version, which went on my RSS feed, had a massive typo in the last line!).

And while I was writing that, I realised that the tone of the blog post was very different from what I sounded like in my famous tweetstorm. In the post (at least by my own admission, though a couple of friends have agreed with me), I sound reasonable and measured. I pleasantly build up the argument and explain what I wanted to explain, with a few links and some data. I’m careful about not taking political sides, and everything. It’s what good writing should be like.

Now go read my tweetstorm:

Notice that right from the beginning I’m snide. I’m bossy. I come across as combative. And I inadvertently take sides here and there. Overall, it’s bad writing. Writing that I’m not particularly proud of, though it gave me some “rewards”.

I think that’s inherent to the art form. While you can use as many tweets as you like, you have a 280 character limit in each. Which means that each time you’re trying to build up an argument, you find yourself running out of characters, and you attempt to “finish your argument quickly”. That means that each individual tweet can come across as too curt or “to the point”. And when you take a whole collection of curt statements, it’s easy to come across as rude.

That is possibly true of most tweetstorms. However good your intention is when you sit down to write them, the form means that you will end up coming across as rude and highly opinionated. Nowadays, people seem to love that (maybe they’ve loved it all the time, and now there is an art form that provides this in plenty), and so tweetstorms can get “picked up” and amplified and you become popular. However, try reading it when you’re yourself in a pleasant and measured state, and you find that most tweetstorms are unreadable, and constitute bad writing.

Maybe I’m writing this blogpost because I’m loyal to my “native art form”. Maybe my experience with this artform means that I write better blogs than tweetstorms. Or maybe it’s simply all in my head. Or that blogs are “safe spaces” nowadays – it takes effort for people to leave comments on blogs (compared to replying to a tweet with abuse).

I’ll leave you with this superb old article from The Verge on “how to tweetstorm”.

More on covid testing

There has been a massive jump in the number of covid-19 positive cases in Karnataka over the last couple of days. Today, there were 44 new cases discovered, and yesterday there were 36. This is a big jump from the average of about 15 cases per day in the preceding 4-5 days.

The good news is that not all of this is new infection. A lot of cases that have come out today are clusters of people who have collectively tested positive. However, there is one bit from yesterday’s cases (again a bunch of clusters) that stands out.

Source: covid19india.org

I guess by now everyone knows what “travelled from Delhi” is a euphemism for. The reason these cases are interesting to me is that they are based on a “repeat test”. In other words, all these people had tested negative the first time they were tested, and then they were tested again yesterday and found positive.

Why did they need a repeat test? That’s because the sensitivity of the Covid-19 test is rather low. Out of every 100 infected people who take the test, only about 70 are found positive (on average) by the test. That also depends upon when the sample is taken. From the abstract of this paper:

Over the four days of infection prior to the typical time of symptom onset (day 5) the probability of a false negative test in an infected individual falls from 100% on day one (95% CI 69-100%) to 61% on day four (95% CI 18-98%), though there is considerable uncertainty in these numbers. On the day of symptom onset, the median false negative rate was 39% (95% CI 16-77%). This decreased to 26% (95% CI 18-34%) on day 8 (3 days after symptom onset), then began to rise again, from 27% (95% CI 20-34%) on day 9 to 61% (95% CI 54-67%) on day 21.

About one in three infected people (depending upon when you draw the sample) are found by the test to be uninfected. Maybe I should state that again: if you test a covid-19 positive person for covid-19, there is almost a one-third chance that she will be found negative.

The good news (on the face of it) is that the test has “high specificity” of about 97-98% (this is from conversations I’ve had with people in the know; I’m unable to find links to corroborate this), or a false positive rate of 2-3%. That seems rather accurate, except that when the “prior probability” of having the disease is low, even this specificity is not good enough.

Let’s assume that a million Indians are covid-19 positive (the official numbers as of today are a little more than one-hundredth of that number). With one and a third billion people, that represents 0.075% of the population.

Let’s say we were to start “random testing” (as a number of commentators are advocating), and were to pull a random person off the street to test for Covid-19. The “prior” (before testing) likelihood she has Covid-19 is 0.075% (assume we don’t know anything more about her to change this assumption).

If we were to take 20000 such people, 15 of them will have the disease and the other 19985 won’t. Let’s test all 20000 of them.

Of the 15 who have the disease, the test returns “positive” for 10.5 of them (70% sensitivity; round up to 11). Of the 19985 who don’t have the disease, the test returns “positive” for 400 (assuming a specificity of 98%, or a false positive rate of 2%, thus placing more faith in the test)! In other words, if there were a million Covid-19 positive people in India, and a random Indian were to take the test and test positive, the likelihood she actually has the disease is about 11/411, or roughly 2.6%.

If there were 10 million covid-19 positive people in India (no harm in supposing), then the “base rate” would be .75%. So out of our sample of 20000, 150 would have the disease. Again testing all 20000, 105 of the 150 who have the disease would test positive. 397 of the 19850 who don’t have the disease will test positive. In other words, if there were ten million Covid-19 positive people in India, and a random Indian were to take the test and test positive, the likelihood she actually has the disease is 105/(397+105) = 21%.

If there were ten million Covid-19 positive people in India, only one-fifth of the people who tested positive in a random test would actually have the disease.
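If you want to play with these numbers yourself, the arithmetic above is just Bayes’s Theorem, and a few lines of code reproduce it. Here is a minimal sketch, using the same assumed sensitivity (70%) and specificity (98%) as above.

```python
def posterior_prob_infected(prior, sensitivity=0.70, specificity=0.98):
    """P(infected | tested positive), by Bayes's Theorem.

    prior       : P(infected) before testing (the "base rate")
    sensitivity : P(test positive | infected), assumed ~70%
    specificity : P(test negative | not infected), assumed ~98%
    """
    true_positive = sensitivity * prior
    false_positive = (1 - specificity) * (1 - prior)
    return true_positive / (true_positive + false_positive)

# 1 million infected out of ~1.33 billion people: base rate ~0.075%
print(posterior_prob_infected(prior=0.00075))   # ~0.026, i.e. about 2.6%

# 10 million infected: base rate ~0.75%
print(posterior_prob_infected(prior=0.0075))    # ~0.21, i.e. about 21%
```

Change the prior to whatever you believe the true number of infections is, and you can see how quickly the usefulness of a positive result from random testing falls as the base rate falls.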

Take a sip of water (ok I’m reading The Ken’s Beyond The First Order too much nowadays, it seems).

This is all standard maths stuff, and any self-respecting book or course on probability and Bayes’s Theorem will have at least a reference to AIDS or cancer testing. The story goes that this was a big deal in the 1990s when some people suggested that the AIDS test be used widely. Then, once this problem of false positives and posterior probabilities was pointed out, the strategy of only testing “high risk cases” got accepted.

And with a “low incidence” disease like covid-19, effective testing means you test people with a high prior probability. In India, that has meant testing people who travelled abroad, people who have come in contact with other known infected, healthcare workers, people who attended the Tablighi Jamaat conference in Delhi, and so on.

The advantage with testing people who already have a reasonable chance of having the disease is that once the test returns positive, you can be pretty sure they actually have the disease. It is more effective and efficient. Testing people with a “high prior probability of disease” is not discriminatory, or a “sampling bias” as some commentators alleged. It is prudent statistical practice.

Again, as I found to my own detriment with my tweetstorm on this topic the other day, people are bound to see politics and ascribe political motives to everything nowadays. In that sense, a lot of the commentary is not surprising. It’s also not surprising that when “one wing” heavily retweeted my article, “the other wing” made efforts to find holes in my argument (which, again, is textbook math).

One possibly apolitical criticism of my tweetstorm was that “the purpose of random testing is not to find out who is positive. It is to find out what proportion of the population has the disease”. The costs of this (apart from the monetary cost of actually testing) are threefold. Firstly, a large number of uninfected people will get hospitalised in covid-specific hospitals, clogging hospital capacity and increasing the chances that they get infected while in hospital.

Secondly, getting a truly random sample in this case is tricky, and possibly unethical. When you have limited testing capacity, you would be inclined (possibly morally, even) to use it on people who already have a high prior probability.

Finally, when the incidence is small, we need a really large sample to pin down the true incidence within a reasonably narrow range.

Let’s say 1 in 1000 Indians have the disease (or about 1.35 million people). Using the Chi Square test of proportions, our estimate of the incidence of the disease varies significantly depending on how many people are tested.

If we test a 1000 people and find 1 positive, the true incidence of the disease (95% confidence interval) could be anywhere from 0.01% to 0.65%.

If we test 10000 people and find 10 positive, the true incidence of the disease could be anywhere between 0.05% and 0.2%.

Only if we test 100000 people (a truly massive random sample) and find 100 positive does the estimated incidence narrow to between 0.08% and 0.12%, an acceptable range.
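For completeness, here is a rough sketch of how such intervals can be computed. I’ve used the Wilson score interval for a proportion here rather than the chi-square based test behind the numbers above, so the exact bounds will differ a little, but the point about sample size remains the same.

```python
from scipy.stats import norm

def wilson_interval(positives, n, confidence=0.95):
    """Wilson score confidence interval for a proportion."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    p_hat = positives / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * (p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) ** 0.5
    return centre - half_width, centre + half_width

# The three scenarios above: a true incidence of 1 in 1000,
# estimated from samples of 1,000, 10,000 and 100,000 people
for positives, n in [(1, 1000), (10, 10000), (100, 100000)]:
    low, high = wilson_interval(positives, n)
    print(f"{positives}/{n}: {100 * low:.2f}% to {100 * high:.2f}%")
```

The width of the interval shrinks roughly as the square root of the sample size, which is why you need a random sample of a hundred thousand before the estimate becomes remotely useful.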

I admit that we may not be testing enough. A simple rule of thumb is that anyone with more than a 5% prior probability of having the disease needs to be tested. How we determine this prior probability is again dependent on some rules of thumb.

I’ll close by saying that we should NOT be doing random testing. That would be unethical on multiple counts.

Yet another pinnacle

During our IIMB days, Kodhi and I used to measure our lives in pinnacles. Pinnacles could come through various ways. Getting a hug from a sought-after person of the opposite gender usually qualified. Getting featured in newspapers also worked. Sometimes even a compliment from a professor would be enough of a “pinnacle”.

In any case, in college days pinnacles keep coming your way, and you can live your life from one pinnacle to another. Once you graduate, that suddenly stops. Positive feedback of any kind in your work life is rare. If you “manage to do well in social life”, that might work, but there is really nobody to show off your pinnacle to. It is really hard to adjust to this sudden paucity of positive feedback in life, and this usually leads to what people call a “quarter life crisis”.

Shortly after I had resolved my quarter life crisis, and “done well in social life”, I decided to change track. I quit full time employment, and as you all might know, have been pursuing a sort of “portfolio life” for the last eight odd years. And this means doing several things apart from the thing that contributes most of my income.

One upside of this kind of life (lack of steady cash flow is the big downside) is that you keep getting pinnacles. Publishing my book was a pinnacle, for example. Getting invited to write regularly for Mint was another. Becoming a bit of a social media star (nothing like yesterday) in the run up to the 2014 general elections was yet another. And there were the kicks about being invited to teach at IIMB. And all that.

It had been a while since I had one such pinnacle. Perhaps the last one was in 2018, during the Karnataka Assembly Elections, when I had my first shot at television punditry, when I appeared on News9, and waxed eloquent about sample sizes and survey techniques.

In any case, that was nothing compared to the sort of pinnacle that I’ve got following my tweetstorm from yesterday. This email came to the inbox of NED Talks (I didn’t know that email ID was public) this afternoon:

Kind Attn: Karthik Shashidhar, Founder NED TALK

Dear Sir,

Greetings from Republic TV!

I’m <redacted> , a Mumbai based News Coordinator with Republic TV. Republic TV is India’s first and only Independent News Venture headed by Mr. Arnab Goswami.

Sir,  we would like to get in touch with you for our show on Corona virus reality on Rahul Gandhi Claim that Lock down is temporary measures and not the solution to defeat virus and need more testing to be done in the country.

Sir,It will be our pleasure to have you join us on our channel at 9PM.

We, at Republic TV,  believe that your command over the issue will add depth and perspective to our discussions and help mould popular discourse.

As it happened, I was unable to accept this invitation. However, I’m documenting this here to record this absolute pinnacle of life. The next time I feel shitty about myself, or feel a sort of imposter syndrome, I can look at this invite and think that I’ve truly arrived in life.

PS: Just look at the number of times I’ve been called “Sir” in that email. That alone should constitute a pinnacle.

Yet another social media sabbatical

Those of you who know me well know that I keep taking these social media sabbaticals. Once in a while I decide that I’m spending too much time on these platforms, wasting both time and mental energy, and log off. Time has come for yet another such break.

I had a bumper day on twitter yesterday. I wrote this one tweet storm that went viral. Some 2000 plus retweets and all that. Basically I used some 15 tweets to explain Bayes’s Theorem, a concept that most people find really hard to understand.

For the last 24 hours, my twitter mentions have been a mess. I’ve tried various things – applying filters, switching from the native app to tweetdeck, etc. but I find that I keep checking my mentions for that dopamine rush that comes out of new followers (I have some 1500 new followers after the tweetstorm, including Chris Arnade of Dignity fame), new retweets and new likes.

And the dopamine rush is frequently killed by hate, as a tweetstorm like this will inevitably generate. I did another tweetstorm this morning detailing this hate – it has to do with the “two Overton Windows” post I’d written a couple of weeks ago.

People are so deranged that even a maths tweetstorm (like the one at the beginning of this post) can be made political, and you see people go on and on.

In fact, there is this other piece I had written (for Mint, back in 2015) that again uses Bayes’s Theorem to explain online flamewars. Five years down, everything I wrote is true.

It is futile to engage with most people on Twitter, especially when they take their political selves too seriously. It can be exhausting, and 27 hours after I wrote that tweetstorm I’m completely exhausted.

So yeah this is not a social media sabbatical like my previous ones where I logged off all media. As things stand I’m only off Twitter (I’ve taken mitigating steps on other platforms to protect my blood pressure and serotonin).

Then again, those of you who know me well know that when I’m off twitter I’ll be writing more here. You can continue to expect that. I hope to be more productive here, and in my work (I’m swamped with work this lockdown) as well.

I continue to be available on WhatsApp, and Telegram, and email. Those of you who have my email or number can reach me in one of those places. For everything else, there’s the “contact” tab on this blog.

See you more regularly here in the coming days!

Beckerian Disciplines

When Gary Becker was awarded the “Nobel Prize” (or whatever its official name is) for Economics, the award didn’t cite any single work of his. Instead, as Justin Wolfers wrote in his obituary,

He was motivated by the belief that economics, taken seriously, could improve the human condition. He founded so many new fields of inquiry that the Nobel committee was forced to veer from the policy of awarding the prize for a specific piece of work, lauding him instead for “having extended the domain of microeconomic analysis to a wide range of human behavior and interaction, including nonmarket behavior.”

Or as Matthew Yglesias put it in his obituary of Becker,

Becker is known not so much for one empirical finding or theoretical conjecture, as for a broad meta-insight that he applied in several areas and that is now so broadly used that many people probably don’t realize that it was invented relatively recently.

Becker’s idea, in essence, was that the basic toolkit of economic modeling could be applied to a wide range of issues beyond the narrow realm of explicitly “economic” behavior. Though many of Becker’s specific claims remain controversial or superseded by subsequent literature, the idea of exploring everyday life through a broadly economic lens has been enormously influential in the economics profession and has altered how other social sciences approach their issues

Essentially Becker sort of pioneered the idea of using economic reasoning for fields outside traditional economics. It wasn’t always popular – for example, his use of economics methods in sociology was controversial, and “traditional sociologists” didn’t like the encroachment into their field.

However, Becker’s ideas endured. It is common nowadays for economists to explore ideas traditionally considered outside the boundaries of “standard economics”.

I think this goes well beyond economics. There are several other fields that are prone to “going out of syllabus” – where the concepts are generic enough that they can be applied to areas traditionally outside the field.

One obvious candidate is mathematics – most mathematical problems come from “real life”, and only the purest of mathematicians don’t include an application from “real life” (well outside of mathematics) while writing a mathematical paper. Immediately coming to mind is the famous “Hall’s Marriage Theorem” from Graph Theory.

Speaking of Graph Theory, Computer Science is another candidate (especially the area of algorithms, which I sort of specialised in during my undergrad). I remember being thoroughly annoyed that papers and theses that would start so interestingly with a real-life problem would soon devolve into inscrutable maths by the time you got to the second section. I remember my B.Tech. project (this was taken rather seriously at IIT Madras) being about what I had described as a “Party Hall Problem” (this was in Online Algorithms).

Rather surprisingly (to me), another area whose practitioners are fond of encroaching into other subjects is physics. This old XKCD sums it up:

Complex Systems (do you know most complex systems scientists are physicists by training?) is another such field. There are more.

In any case, assuming no one else has done this already, I hereby christen all these fields (whose practitioners are prone to venturing into “out of syllabus matters”) as “Beckerian Disciplines” in honour of Gary Becker (OK I have an economics bias, but I’m pretty sure there have been scientists well before Becker who have done this).

And then you have what I now call “anti-Beckerian Disciplines” – areas that get pissed off that people from other fields are “invading their territory”. In Becker’s own case, the anti-Beckerian Discipline was Sociology.

When all university departments talk about “interdisciplinary research”, what they really need is Beckerians. People who are able and willing to step out of the comfort zones of their own disciplines to lend a fresh pair of eyes (and a fresh perspective) to other disciplines.

The problem with this is that they can encounter an anti-Beckerian response from people trying to defend their own turf from “outside invasion”. This doesn’t help the cause of science (or research of any kind) but in general (well, a LOT of exceptions exist), academics can be a prickly and insecure bunch forever playing zero-sum status games.

With the covid-19 virus crisis, one set of anti-Beckerians that has emerged is the epidemiologists. Epidemiology is a nice discipline in that it can be studied using graph theory, non-linear dynamics or (as I did earlier today) simple Bayesian maths, or so many other frameworks that don’t need a degree in biology or medicine.

And epidemiologists are not happy (I’m not talking about my tweet specifically; this is a more general comment) that their turf is being invaded. “Listen to the experts”, they are saying, with the assumption that the experts in question here are them. People are resorting to credentialism. They’re adding “, PhD” to their names on twitter (a particularly shady credentialist practice IMHO). Questioning the credentials and locus standi of people producing interesting analysis.

Enough of this rant. Since you’ve come all the way, I leave you with this particularly awesome blogpost by Tyler Cowen, a notably Beckerian economist, about epidemiologists. Sample this:

Now, to close, I have a few rude questions that nobody else seems willing to ask, and I genuinely do not know the answers to these:

a. As a class of scientists, how much are epidemiologists paid?  Is good or bad news better for their salaries?

b. How smart are they?  What are their average GRE scores?

c. Are they hired into thick, liquid academic and institutional markets?  And how meritocratic are those markets?

d. What is their overall track record on predictions, whether before or during this crisis?

e. On average, what is the political orientation of epidemiologists?  And compared to other academics?  Which social welfare function do they use when they make non-trivial recommendations?