Ticking all the boxes

Last month my Kindle gave up. It refused to take a charge, only heating up the charging cable (and possibly destroying an Android charger) in the process. This wasn’t the first time this had happened.

In 2012, my first Kindle had given up a few months after I started using it, with its home button refusing to work. Amazon had sent me a new one then (I’d been amazed at the no-questions-asked customer-centric replacement process). My second Kindle (the replacement) developed problems in 2016, which I made worse by trying to pry it open with a knife. After I had sufficiently damaged it, there was no way I could ask Amazon to do anything about it.

Over the last year, I’ve discovered that I read much faster on my Kindle than in print – possibly because it allows me to read in the dark, it’s easy to hold, I can read without distractions (unlike on a phone or iPad) and it’s easy on the eyes. I possibly take half the time to read a book on the Kindle that I would take in print. Moreover, I find the note-taking and highlighting features invaluable (I never made a habit of taking notes on physical books).

So when the Kindle stopped working I started wondering if I might have to go back to print books (there was no way I was going to invest in a new Kindle). Customer care confirmed that my Kindle was out of warranty, and after putting me on hold for a long time, gave me two options. I could either take a voucher that would give me 15% off a new Kindle, or the customer care executive could “talk to the software engineers” to see if they could send me a replacement (but there was no guarantee).

Since I had no plans of buying a new Kindle, I decided to take a chance. The customer care executive told me he would get back to me “within 24 hours”. It took barely an hour for him to call me back, and a replacement was in my hands in 2 days.

It got me wondering what “software engineers” had to do with the decision to give me a replacement (refurbished) Kindle. I soon realised that Amazon probably has an algorithm to determine whether to give a replacement Kindle to those whose devices have gone kaput out of warranty. I started trying to guess what such an algorithm might look like.

The interesting thing is that among all the factors I could list that Amazon might base this decision on, there was not one that suggested I shouldn’t be given a replacement (a sketch of what such a rule might look like follows the list). In no particular order:

  • I have been an Amazon Prime customer for three years now
  • I buy a lot of books on the Kindle store. I suspect I’ve purchased books worth more than the cost of the Kindle in the last year.
  • I read heavily on the Kindle
  • I don’t read Kindle books on other apps (phone / iPad / computer)
  • I haven’t bought too many print books from Amazon. Most of the print books I’ve bought have been gifts (I’ve got them wrapped)
  • My Goodreads activity suggests that I don’t read much outside of what I’ve bought from the Kindle store
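Nobody outside Amazon knows what the actual rule looks like, but purely as an illustration, a naive version of the kind of logic I’m imagining might look like the sketch below. Every variable name, weight and threshold here is invented by me; this is a guess, not Amazon’s algorithm.

```r
# A purely speculative sketch of a replacement-decision rule.
# All feature names, weights and thresholds are my own invention --
# this only illustrates the kind of logic I suspect is at play.

should_replace_kindle <- function(customer) {
  # Value Amazon stands to lose if this customer stops buying ebooks
  expected_future_value <-
    customer$annual_kindle_store_spend +      # ebook sales at near-zero marginal cost
    0.5 * customer$prime_subscription_value   # some credit for Prime stickiness

  # Heavy Kindle readers who don't read on other apps are the ones
  # whose ebook spend disappears if the device dies
  dependence_on_device <-
    customer$hours_read_on_kindle_per_week > 5 &&
    !customer$reads_on_other_apps

  cost_of_refurbished_device <- 50             # made-up number

  dependence_on_device &&
    expected_future_value > cost_of_refurbished_device
}

# My own profile, roughly (numbers made up), would come out as TRUE:
should_replace_kindle(list(
  annual_kindle_store_spend     = 120,
  prime_subscription_value      = 15,
  hours_read_on_kindle_per_week = 10,
  reads_on_other_apps           = FALSE
))
```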

In hindsight, I guess I made the correct decision in letting the “software engineers” determine whether I qualified for a new Kindle. I guess Amazon figured that had they not sent me one, they stood to lose a significant amount of low-marginal-cost sales!

I duly rewarded them with two book purchases on the Kindle store in the course of the following week!

Showing off

So like good Indian parents we’ve started showing off the daughter in front of guests. And today she showed us that she’s equal to the task.

A couple of weeks back, after seeing the photo of a physicist friend’s son with the book Quantum Physics for Babies, I decided to get a copy. Like with all new things the daughter gets, she “read” the book dutifully for the rest of the day it arrived. She learnt to recognise the balls in the book, but wasn’t patient enough for me to teach her about atoms.

The next day the book got put away into her shelf, never to appear again, until today that is. Some friends were visiting and we were all having lunch. As I was feeding the daughter she suddenly decided to run off towards her bookshelf, and with great difficulty pulled out a book – this one. As you might expect, our guests were mighty impressed.

Then they started looking at her bookshelf and were surprised to find a “children’s illustrated atlas” there. We told them that the daughter can identify countries as well. Soon enough, she had pulled out the atlas from the shelf (she calls it the “Australia book”) and started pointing out continents and countries in it.

To me the high point was the fact that she was looking at the maps upside down (or northside-down – the book was on the table facing the guests), and still identified all the countries and continents she knows correctly. And once again, I must point out that she hadn’t seen the atlas for at least two or three weeks now.

Promise is showing, but we need to be careful and make sure we don’t turn her into a performing monkey.

PS: Those of you who follow me on Instagram can look at this video of Berry identifying countries.

PS2: Berry can identify continents on a world map, but got damn disoriented the other day when I was showing her a map that didn’t contain Antarctica.

Carbon taxes and mental health

The beautiful thing about the mid-term elections in the USA was that apart from the “main elections” for senators, congresspersons and governors, there were also votes on “auxiliary issues” – referenda, basically, on issues such as the legalisation of marijuana.

One such issue that went to the polls was in Washington State, where there was a proposal for the imposition of carbon taxes, which sought to tax carbon dioxide emissions at $15 a tonne. The voters rejected it, with only 44% of the votes polled in favour.

The defeat meant that another attempt at pricing in environmental costs, which could have offered significant benefits to ordinary people in terms of superior mental health, went down the drain.

Chapter 11 of Jordan Peterson’s 12 Rules for Life is both the best and the worst chapter of the book. It is the best for the reasons I’ve mentioned in this blog post earlier – about its discussions of risk, and about relationships and marriage in the United States. It is the worst because Peterson unnecessarily lengthens the chapter by using it to put forward his own views on several controversial issues – such as political correctness and masculinity – issues which only have a tenuous relationship with the meat of the chapter, and which only give an opportunity for Peterson’s zillion critics to downplay the book.

Among all these unnecessary digressions in Chapter Eleven, one stood out, possibly because of the strength of the argument and my own relationship with it – Peterson dismisses climate change and environmentalism as bullshit, claiming that they only serve to worsen the mental health of ordinary people. As a clinical psychologist, he can be trusted to tell us what affects people’s mental health. However, dismissing something just because it affects people negatively is wrong.

The reason environmentalism and climate change have a negative impact on people’s mental health, in my opinion, is that there is no market-based pricing of these things. From childhood, we are told that we should “not waste water” or “not cut trees”, because such activities will have an adverse effect on the environment.

Such arguments are always moral – they tell people to think of their descendants and the impact our actions will have on them. The reason these arguments are hard to make is that they need to persuade people to act contrary to their self-interest. For example, one may ask me to forego the enjoyment of bursting fireworks in favour of better air quality (which I may not necessarily care about). Someone else might ask me to forego a long shower, because of “water shortages”.

And this imposition of moral arguments that make us undertake activities that go against our self-interest is what imposes a mental cost. We are fundamentally selfish creatures, indulging only in activities that benefit us (either immediately or much later). And when people force us to think outside this self-interest, it comes with the cost of increased mental strain – which is reason enough for Jordan Peterson to dismiss environmentalism itself.

If you think about it, the reason we need to use moral arguments and make people act against their self-interest for environmental causes is that the market system fails in these cases. If we were able to put a price on the environmental costs of activities, and make entities that indulge in such activities pay those costs, then the moral argument could be replaced by a price argument, and our natural self-interest-maximising selves would get aligned with what is good for the world.
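To get a very rough sense of what such a price argument looks like in practice: burning a litre of petrol releases roughly 2.3 kg of CO2 (a standard approximation, not a figure from the Washington proposal), so a $15-a-tonne carbon tax works out to a few cents a litre – a price signal rather than a sermon. A back-of-envelope version:

```r
# Back-of-envelope translation of a carbon tax into a fuel price signal.
# The 2.3 kg CO2 per litre of petrol is a standard approximation;
# the $15/tonne figure is from the Washington State proposal.

tax_per_tonne <- 15        # USD per tonne of CO2
co2_per_litre <- 2.3e-3    # tonnes of CO2 per litre of petrol (approx.)

tax_per_litre <- tax_per_tonne * co2_per_litre
tax_per_litre              # ~0.035 USD, i.e. about 3.5 cents a litre
```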

And while they are narrowly concerned with the issue of climate change and global warming, carbon taxes are one way to internalise the externality of the environmental damage our activities cause. Putting a price on it means we don’t need to think in moral terms about our everyday activities, which saves us a “mental cost”. And this can lead to superior overall mental health.

In that sense, the rejection of the carbon tax proposal in Washington State is a regressive move.

Book challenge update

At the beginning of this year, I took a break from Twitter (which lasted three months), and set myself a target to read at least 50 books during the calendar year. As things stand now, the number stands at 28, and it’s unlikely that I’ll hit my target, unless I count Berry’s story books in the list.

While I’m not particularly worried about my target, what I am worried about is that the target has made me see books differently. For example, I’m now less liable to abandon books midway – the sunk cost fallacy means that I try harder to finish so that I can add to my annual count. Sometimes I literally flip through the pages of the book looking for interesting things, in an attempt to finish it one way or the other (I did this for Ray Dalio’s Principles and Randall Munroe’s What If, both of which I rated lowly).

Then, the target being in terms of number of books per year means that I get annoyed with long books. It’s been nearly a month since I started Jonathan Wilson’s Angels With Dirty Faces, but I’m still barely 30% of the way through – a figure I know because I’m reading it on my Kindle.

Even worse are large books that I struggle to finish. I spent about a month on Bill Bryson’s At Home, but it’s too verbose and badly written, so I gave it up halfway through. I don’t know if I should count it in my reading challenge. It’s a similar story with Siddhartha Mukherjee’s The Emperor of All Maladies – this morning, I put it down for maybe the fourth time (I bought it when it was first published) after failing to make progress – it’s simply too dry for someone not passionate about the subject.

Oh, and this has been the big insight from this reading challenge – that I read significantly faster on the Kindle than I do on physical books. Firstly, it’s easier to carry around. Secondly, I can read in the dark, since I got myself a Kindle Paperwhite last year. One of the times I read from my Kindle is in the evening when I’m putting Berry to sleep, which means I need to read in the dark with a device that doesn’t produce too much light. Then, the ability to control font size and easy page turns mean that I progress much faster – even when I stop to highlight and make notes (a feature I miss dearly when reading physical books; searchable notes are a game changer).

I also find that when I’m reading on the Kindle, it’s easier to “put fight” to get through a book that is difficult to read but insightful. That’s how I managed to get through Diana Eck’s India: A Sacred Geography, and that’s why I made it a point to buy Jordan Peterson’s book on the Kindle – I knew it would be a tough read and I would never be able to get through it if I were reading the physical version.

Finally, the time taken to finish a book follows a bimodal distribution: I either finish the book in a day or two, or I take a month over it. For example, I went to Copenhagen for a holiday in August, and found a copy of Michael Lewis’s The Big Short in my Airbnb. I was there for three days and finished it off in that time. On the other hand, 12 Rules for Life took over a month.

Pertinent Observations Grows Up

Over the weekend, I read Ben Blatt’s Nabokov’s Favourite Word Is Mauve, a simple natural-language-processing-based analysis of hundreds of popular authors and their books. In it, Blatt uses several measures of the goodness or badness of writing, and then measures different authors by them.

So he finds, for example, that Danielle Steel opens a lot of her books by talking about the weather, or that Charles Dickens uses a lot of “anaphora” (anyone who remembers the opening of A Tale of Two Cities shouldn’t be surprised by that). He also talks about the use of simple word counts to detect authorship of unknown documents (a separate post will come on that soon).

As someone who has already written a book (albeit nonfiction), I found a lot of this rather interesting, and constantly found myself trying to evaluate my own writing on the metrics to which Blatt subjected the famous authors. And one metric that I found especially interesting was the “Flesch-Kincaid grade level”, which is a measure of the complexity of language in a work.

It is a fairly simple formula, based on a linear combination of the average number of words per sentence and the average number of syllables per word. The formula goes like this:

FK grade level = 0.39 × (total words / total sentences) + 11.8 × (total syllables / total words) - 15.59

And the result of the formula tells the approximate school grade of a reader who will be able to understand your writing. As you see, it is not a complex formula, and the shorter your sentences and shorter your words (measured in syllables), the simpler your prose is supposed to be.

The simplest works by this metric mentioned in Blatt’s book are the works of Dr. Seuss, such as The Cat in the Hat or Green Eggs and Ham, on account of the exclusive use of a small set of words in both books (Dr Seuss wrote the latter as a challenge, not unlike the challenges we would pose each other during “class participation” in business school). These books have a negative grade score, technically indicating that even a nursery kid should be able to read them, but actually meaning they’re simply easy to read.

Since the Flesch Kincaid Grade Score is based on a simple set of parameters (word count, sentence count and syllable count), it was rather simple for me to implement that on the posts from this blog.

I downloaded an XML export of all posts (I took this dump some two or three weeks ago), and then used R, with the tidytext package, to analyse the posts. Word count was the most straightforward, using the str_count function in the stringr package (part of the tidyverse). Sentence count was a bit more complicated – there was no ready algorithm, so I just searched for “sentence enders” (., ?, !, etc.; I know the use of . in abbreviations creates problems, but I can live with that).
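The counting step was roughly along these lines (a sketch, not my exact code, assuming the XML export has already been parsed into a data frame called posts with one row per post and a text column):

```r
# Rough sketch of the counting step, assuming a data frame `posts`
# with one row per post and a `text` column.
library(dplyr)
library(stringr)

posts <- posts %>%
  mutate(
    word_count     = str_count(text, "\\S+"),    # whitespace-separated words
    sentence_count = str_count(text, "[.!?]+")   # crude "sentence ender" count
  )
```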

Syllable count was the hardest. Again, there are some packages, but they’re incredibly hard to use. Finally, after much searching, I came across some code that approximates this, and used it.
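The approximations that float around all work on the same principle – count runs of vowels, with a small correction for silent trailing ‘e’s. A minimal sketch of the idea (not the exact code I borrowed):

```r
# Minimal vowel-group syllable approximation (not the exact code I used,
# but the same idea): count runs of vowels, drop a silent trailing 'e'.
library(stringr)

count_syllables <- function(word) {
  word <- tolower(word)
  word <- str_remove(word, "e$")         # drop silent trailing 'e'
  syl  <- str_count(word, "[aeiouy]+")   # runs of vowels ~ syllables
  pmax(syl, 1)                           # every word has at least one syllable
}

count_syllables(c("mauve", "pertinent", "observations"))
# roughly: 1, 3, 4
```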

Now that the technical stuff is done with, let’s get to the content. This word count, sentence count and syllable count all flow into calculating the Flesch-Kincaid (FK) score, which is the approximate class that one needs to be in to understand the text. Let’s just plot the FK score for all my blog posts (a total of 2341 of them) against time. I’ve added a regression line for good effect.
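The last mile – combining the counts into the FK score and plotting it against the posting date – is the easy bit. Again a sketch, assuming the per-post counts above (with the per-word syllable counts summed into a syllable_count column) and a post_date column:

```r
# FK grade for each post, plotted against posting date.
# Assumes `posts` has word_count, sentence_count, syllable_count and post_date.
library(dplyr)
library(ggplot2)

posts <- posts %>%
  mutate(fk_grade = 0.39 * (word_count / sentence_count) +
                    11.8 * (syllable_count / word_count) - 15.59)

ggplot(posts, aes(post_date, fk_grade)) +
  geom_point(alpha = 0.3) +
  geom_smooth(method = "lm")   # drop the method argument to get the trend curve instead
```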

The trend is pretty clear. Over time, this blog has become more complicated and harder to read. In fact, drawing this graph slightly differently gives another message. This time, instead of a regression line, I’ve drawn a curve showing the trend.

When I started writing in 2004, I was at a 5th standard level. This increased steadily over the first two years (I gained a lot of my steady readership in this time) to about 8th standard, and plateaued there for a bit. Then, around 2009-10, there was an increase, as my blog got up to the 10th standard level. It’s pretty much stayed there ever since, apart from a tiny bump at the end of 2014.

I don’t know if this increase in “complexity” of my blog is a good or a bad thing. On the one hand, it shows growing up. On the other, it’s becoming tougher to read, which has probably coincided with a plateauing (or even a drop) in the readership as well.

Let me know what you think – whether you prefer this “grown up” style, or whether you want me to go back to the simpler writing I started off with.

Tinder taming and Incels and blogging

So Takshashila has launched a new group blog called “Pragati Express”. It’s basically old-style blogging, with lots of links and short posts that don’t necessarily make a coherent argument – rather than blog posts that are basically attempts at writing OpEds (which is what blog posts in a lot of places have turned out to be).

I’m one of the contributors for this blog, and wrote my first post today. Copy-pasting it here below the fold!

And thinking about it, I’m so glad about this attempt at reviving old-style blogging – I see that the bug of making blog posts coherent and wannabe-OpEd has bitten me as well, and my posts have been getting longer and more serious.

Hopefully I can bring the joy back into my blogging.


Bond Market Liquidity and Selection Bias

I’ve long been a fan of Matt Levine’s excellent Money Stuff newsletter. I’ve mentioned this newsletter here several times in the past, and on one such occasion, I got a link back.

One of my favourite sections in Levine’s newsletter is called “people are worried about bond market liquidity”. One reason I got interested in it was that I was writing a book on Liquidity (speaking of which, there’s a formal launch function in Bangalore on the 15th). More importantly, it was rather entertainingly written, and informative as well.

I appreciated the section so much that I ended up calling one of the sections of one of the chapters of my book “people are worried about bond market liquidity”. 

In any case, Levine has outdone himself several times over in his latest instalment of worries about bond market liquidity. This one is from Friday’s newsletter. I strongly encourage you to read the section on people being worried about bond market liquidity in full.

To summarise, the basic idea is that while people are generally worried about bond market liquidity, a lot of studies about such liquidity by academics and regulators have concluded that bond market liquidity is just fine. This is based on the finding that the bid-ask spread (gap between prices at which a dealer is willing to buy or sell a security) still remains tight, and so liquidity is just fine.

But the problem is that, as Levine beautifully describes, there is a strong case of selection bias here. While the bid-ask spread has indeed narrowed, what this data point misses is that many trades that could otherwise have happened are not happening, and so the data comes from a very biased sample.

Levine does a much better job of describing this than I can, but there are two ways in which a banker can facilitate bond trading – either by taking possession of the bonds, in other words being a “market maker” (I have a chapter on this in my book), or by simply helping find a counterparty to the trade, thus acting like a broker (I have a chapter on brokers in my book as well).

A new paper by economists at the Federal Reserve Board confirms that the general finding that bond market liquidity is okay is affected by selection bias. The authors find that spreads are tighter (and sometimes negative) when bankers are playing the role of brokers than when they are playing the role of market makers.

In the very first chapter of my book (dealing with football transfer markets), I had mentioned that the bid-ask spread of a market is a good indicator of its liquidity – the higher the bid-ask spread, the less liquid the market.

Later on in the book, I’d also mentioned that the money an intermediary can make is again a function of how liquid the market is.

This story about bond market liquidity puts both these assertions into question. Bond markets see tight bid-ask spreads and bankers make little or no money (as the paper linked to above says, spreads are frequently negative). Based on my book, both of these should indicate that the market is quite liquid.

However, it turns out that both the bid-ask spread and fees made by intermediaries are biased estimates, since they don’t take into account the trades that were not done.
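A toy simulation makes the selection bias obvious: if trades only happen when the dealer can intermediate them cheaply, the spreads you observe are drawn from the easy trades, while the hard (wide-spread, or infinite-spread) ones never show up in the data at all. A sketch, with all numbers invented:

```r
# Toy illustration of selection bias in observed bid-ask spreads.
# All numbers are invented; the point is only that averaging over
# completed trades ignores the trades that never happened.
set.seed(42)

true_spread   <- rlnorm(10000, meanlog = 0, sdlog = 1)  # "true" cost of each would-be trade
trade_happens <- true_spread < 1.5                      # dealers only intermediate the easy ones

mean(true_spread)                  # average cost across all would-be trades
mean(true_spread[trade_happens])   # what the liquidity studies measure: completed trades only
mean(trade_happens)                # fraction of would-be trades that actually happen
```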

With bankers cutting down on market-making activity (see Levine’s post or the paper for more details), there are many times when a customer will not be able to trade at all, since the bankers are unable to find them a counterparty (in the pre-Volcker Rule days, bankers would’ve simply stepped in and taken the other side of the trade themselves). In such cases, the effective bid-ask spread is infinite, since the market has disappeared.

Technically, this needs to be included while calculating the overall bid-ask spread. How that can actually be achieved is yet another question!