Why Real Estate Prices are High

The world over, high housing prices seem to be a problem. They've always been an issue in India. They are an issue in the US, where millennials are unable to afford houses to live in. In the UK as well, rising housing prices mean that today's young cannot buy homes. The global phenomenon driving all this is the move towards increasingly large cities.

Going by first principles, there are two major components that determine the cost of a house (note that I said cost and not price) – the cost of the land and the cost of construction. It can be safely assumed that the latter hasn’t increased at a rate dramatically higher than inflation over the years.

Yes, there are bubbles and busts in the prices of commodities such as steel and cement. Houses nowadays are largely built to better specifications and quality than earlier homes. In places like the US, modern houses are bigger. But all this is balanced by technological innovation, which makes construction cheaper. So on average, the increase in construction costs over the years has not been dramatic.

That implies that the massive increase in the price of housing the world over is driven by the increasing cost of land. Some scaremongers will try to tell you that this is due to there being too many human beings in the world, and that we are soon headed for a Malthusian collapse. However, the land needed for housing is small compared to, say, agriculture, so a regular transfer of land from agriculture to housing should take care of this. So why are land prices increasing so much?

It has to do with the distribution of demand. During most of the 20th century, manufacturing being the base of the economy meant that a lot of smaller cities and towns flourished. These cities and towns were either located conveniently enough to tap raw materials or markets for industrial goods, or were helped by the fact that industries' land requirements meant that big cities got expensive for them very quickly, driving development to smaller cities and towns.

As the share of the population in manufacturing falls, and more people move into services, the larger cities gain at the expense of smaller cities and towns. This means the distribution of demand has changed massively over the last 30 years or so. Rather than demand being more or less uniform across cities, nowadays most of the housing demand is concentrated in a few large cities.

And these cities aren't able to keep up. Supply in some cities, such as San Francisco and Mumbai, is constrained by regulations on how much can be built. Other cities, such as Bangalore or Houston, have expanded radially, but housing in the far suburbs is much less attractive than housing closer to town (due to increased transport costs), and there is only so much supply in the "convenient areas" of towns.

This changing pattern of urbanisation is leading to a rapid increase in the price of housing in the places people want to live in. And so millennials are being priced out, unable to buy homes. The distribution of jobs across cities means they don't have the luxury of "settling down" in smaller cities and towns where housing is still affordable. And until the larger cities hit their limits of growth and businesses start moving to smaller cities (thus creating newer hubs), this housing shortage will persist.


Randomness and sample size

I have had a strange relationship with volleyball, as I’ve documented here. Unlike in most other sports I’ve played, I was a rather defensive volleyball player, excelling in backline defence, setting and blocking, rather than spiking.

The one aspect of my game which was out of line with the rest of my volleyball, but in line with my play in most other sports I’ve played competitively, was my serve. I had a big booming serve, which at school level was mostly unreturnable.

The downside of having an unreturnable serve, though, is that you are likely to miss your serve more often than most players do – it might mean hitting it too long, or into the net, or wide. And like in one of the examples I quoted in my earlier post, it might mean not getting a chance to serve at all, as the warm-up serve gets returned or goes into the net.

So I was discussing my volleyball non-career with a friend who is now heavily involved in the game, and he thought that I had possibly been extremely unlucky. My own take is that given how little I played, the sample was small enough that it's quite likely things would go spectacularly wrong.

Changing domains a little bit, there was a time when I was building strategies for algorithmic trading, in a class known as “statistical arbitrage”. The deal there is that you have a small “edge” on each trade, but if you do a large enough number of trades, you will make money. As it happened, the guy I was working for then got spooked out after the first couple of trades went bad and shut down the strategy at a heavy loss.

Changing domains a little less this time, this is also the reason why you shouldn’t check your portfolio too often if you’re investing for the long term – in the short run, when there have been “fewer plays”, the chances of having a negative return are higher even if you’re in a mostly safe strategy, as I had illustrated in this blog post in 2008 (using the Livejournal URL since the table didn’t port well to wordpress).
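
To make this concrete, here is a minimal Monte Carlo sketch in Python (the per-play edge and volatility are made-up numbers, purely for illustration) of how the chance of seeing a negative cumulative return shrinks as the number of "plays" grows:

import random

def prob_negative(n_plays, edge=0.001, vol=0.01, trials=2000):
    # Each play has a small positive expected return (edge) buried in
    # much larger noise (vol); estimate how often the cumulative
    # return is still negative after n_plays plays.
    negative = 0
    for _ in range(trials):
        total = sum(random.gauss(edge, vol) for _ in range(n_plays))
        if total < 0:
            negative += 1
    return negative / trials

for n in (10, 100, 1000):
    print(n, prob_negative(n))
# With these assumed numbers, the strategy is "underwater" roughly 38%
# of the time after 10 plays, about 16% after 100, and almost never
# after 1000 -- the edge asserts itself only over many plays.

The same logic applies to the statistical arbitrage story above: with enough trades the edge dominates, but a couple of bad early trades tell you almost nothing.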

And changing domains once again, the sheer number of "samples" is possibly one reason that the whole idea of quantification of sport, and "sabermetrics", first took hold in baseball. The Major League Baseball season is typically 162 games long (and this is before the playoffs), which means that any small edge will translate into results over the course of the league. A smaller league would mean fewer games, and thus more randomness, and a higher chance that a "better play" wouldn't work out.

This also explains why, when "Moneyball" took off with the Oakland A's in the late 1990s, they focussed mainly on league performance and not performance in the playoffs – in the latter, there are simply not enough "samples" for a marginal advantage in team strength to necessarily show up in the results.

And this is the problem with newly appointed managers of elite European football clubs "targeting the Champions League" – a knockout tournament of that format means that the best team need not always win it. Targeting the national league, played out over at least 34 games in a season, is a much better bet.
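
A quick simulation makes the contrast stark. The win probabilities below are pure assumptions, not estimates for any actual team:

import random

def wins_knockout(p=0.7, rounds=4, trials=20000):
    # The best team must win every one of `rounds` knockout ties,
    # each of which it wins with probability p.
    return sum(all(random.random() < p for _ in range(rounds))
               for _ in range(trials)) / trials

def tops_league(p_best=0.6, p_rival=0.5, games=38, trials=20000):
    # Crude league model: does the best team end a `games`-long season
    # with more wins than its closest rival? (Draws and the points
    # system are ignored for simplicity.)
    top = 0
    for _ in range(trials):
        best = sum(random.random() < p_best for _ in range(games))
        rival = sum(random.random() < p_rival for _ in range(games))
        if best > rival:
            top += 1
    return top / trials

print(wins_knockout())  # about 0.24 (= 0.7 ** 4)
print(tops_league())    # about 0.8

Even a team that wins 70% of its knockout ties lifts the cup less than a quarter of the time, while a much smaller per-game edge reliably shows up over a 38-game season.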

Finally, there is also the issue of variance. A higher variance in performance means that observing a few instances of bad performance is not sufficient to conclude that the player is a bad performer – a great performance need not be too far away. For a player with less randomness in performance – a steadier player, if you will – a few bad performances will tell you that they are unlikely to come good. High-risk-high-return players, on the other hand, need to be given a longer rope.
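
A back-of-envelope way to see this: the standard error of a player's average over n games is sd/sqrt(n), so the number of games needed to reliably tell a below-par player from the benchmark grows with the square of the player's variability. A sketch with entirely made-up numbers:

def games_needed(true_mean, benchmark, sd, z=2.0):
    # Smallest n for which the gap between the player's true level and
    # the benchmark exceeds z standard errors (sd / sqrt(n)) -- i.e.
    # roughly when the sample average becomes persuasive evidence.
    gap = benchmark - true_mean
    return (z * sd / gap) ** 2

# Two players, both truly 5 units below the benchmark on average:
print(games_needed(true_mean=35, benchmark=40, sd=10))  # steady: ~16 games
print(games_needed(true_mean=35, benchmark=40, sd=30))  # volatile: ~144 games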

I'd put this in a different way in a blog post a few years back, about Mitchell Johnson.

Human, Animal and Machine Intelligence

Earlier this week I started watching this series on Netflix called "Terrorism Close Calls". Each episode is about an instance of attempted terrorism that was foiled in the last two decades. For example, there is one episode on the plot to bomb a set of transatlantic flights from London to North America in 2006 (a consequence of which is that liquids still aren't allowed on board flights).

So the first episode of the series involves this Afghan guy who drives all the way from Colorado to New York to place a series of bombs in the city's subway (metro train system). He is under surveillance through the length of his journey, and just as he is about to enter New York, he is stopped for what seems like a "routine drug check".

As the episode explains, “a set of dogs went around his car sniffing”, but “rather than being trained to sniff drugs” (as is routine in such a stop), “these dogs had been trained to sniff explosives”.

This little snippet got me thinking about how machines are "trained" to "learn". At the most basic level, machine learning involves showing the program a large number of "positive cases" and "negative cases", based on which it "learns" the differences between the two, and thus how to identify positive cases.

So if you want to build a system to identify cats in an image, you feed the machine a large number of images with cats in them, and a large(r) number of images without cats in them, each appropriately "labelled" ("cat" or "no cat"), and based on the differences, the system learns to identify cats.

Similarly, if you want to teach a system to detect cancers based on MRIs, you show it a set of MRIs that show malignant tumours, and another set of MRIs without malignant tumours, and sure enough the machine learns to distinguish between the two sets (you might have come across claims of “AI can cure cancer”. This is how it does it).
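
For the programmatically inclined, training from labelled positive and negative cases looks something like the following sketch, which uses scikit-learn's logistic regression on a toy dataset. The "features" here are invented purely for illustration – real image or MRI classifiers learn their features from raw pixels using deep networks:

from sklearn.linear_model import LogisticRegression

# Each example is a list of made-up features, labelled 1 ("cat") or 0 ("no cat")
X = [[1, 1, 1], [1, 1, 0], [1, 0, 1],   # positive cases
     [0, 0, 0], [1, 0, 0], [0, 1, 0]]   # negative cases
y = [1, 1, 1, 0, 0, 0]

# "Training" = finding a boundary that separates the two sets of examples
model = LogisticRegression().fit(X, y)

# The fitted model can now label examples it has never seen before
print(model.predict([[1, 1, 1], [0, 0, 1]]))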

However, AI can sometimes go wrong by learning the wrong things. For example, an algorithm trained to recognise sheep started classifying grass as “sheep” (since most of the positive training samples had sheep in meadows). Another system went crazy in its labelling when an unexpected object (an elephant in a drawing room) was present in the picture.

While machines learn through lots of positive and negative examples, that is not how humans learn, as I’ve been observing as my daughter grows up. When she was very little, we got her a book with one photo each of 100 different animals. And we would sit with her every day pointing at each picture and telling her what each was.

Soon enough, she could recognise cats and dogs and elephants and tigers. All by means of being “trained on” one image of each such animal. Soon enough, she could recognise hitherto unseen pictures of cats and dogs (and elephants and tigers). And then recognise dogs (as dogs) as they passed her on the street. What absolutely astounded me was that she managed to correctly recognise a cartoon cat, when all she had seen thus far were “real cats”.

So where do animals stand, in this spectrum of human to machine learning? Do they recognise from positive examples only (like humans do)? Or do they learn from a combination of positive and negative examples (like machines)? One thing that limits the positive-only learning for animals is the limited range of their communication.

What drives my curiosity is that they get trained for specific things – that you have dogs to identify drugs and dogs to identify explosives. You don’t usually have dogs that can recognise both (specialisation is for insects, as they say – or maybe it’s for all non-human animals).

My suspicion (having never had a pet) is that the way animals learn is closer to how humans learn – based on a large number of positive examples, rather than on the difference between positive and negative examples. It's just that the animal's limited communication makes it hard to train them for more than one thing (or maybe it has something to do with their mental bandwidth as well; I don't know).

What do you think? Interestingly enough, there is a recent paper that talks about how many machine learning systems have “animal-like abilities” rather than coming close to human intelligence.

For millions of years, mankind lived, just like the animals.
And then something happened that unleashed the power of our imagination. We learned to talk
– Stephen Hawking, in the opening of a Roger Waters-less Pink Floyd's "Keep Talking"

Elegant and practical solutions

There are two ways in which you can tie a shoelace – one is the “ordinary method”, where you explicitly make the loops around both ends of the lace before tying together to form a bow. The other is the “elegant method” where you only make one loop explicitly, but tie with such great skill that the bow automatically gets formed.

I have never learnt to tie my shoelaces in the latter manner – I suspect my father didn't know it either, because of which it wasn't passed on to me. Metaphorically, however, I like to implement such solutions in other aspects of life.

Having been educated in mathematics, I'm a sucker for "elegant solutions". I look down upon brute-force solutions, which is why I might sometimes spend half an hour writing a script to accomplish a repetitive task that might have otherwise taken fifteen minutes. Over the long run, I believe, this elegance pays off, in terms of easier scaling.

And I suspect I’m not alone in this love for elegance. If the world were only about efficiency, brute force would prevail. That we appreciate things like poetry and music and art and what not means that there is some preference for elegance. And that extends to business solutions as well.

While going for elegance is a useful heuristic, sometimes it can lead to missing the woods for the trees (or missing the random forests for the decision trees, if you will). For there are situations that simply don't, or won't, scale, where elegance will send you on a wild goose chase while a little fighter work will get the job done.

I got reminded of this sometime last week when my wife asked me for some Excel help in some work she was doing. Now, there was a recent article in the WSJ which claimed that the "first rule of Microsoft Excel is that you shouldn't let people know you're good at it". However, having taught a university course on spreadsheet modelling, I have no place to hide, and people keep coming to me for Excel help (though it helps that I don't work in an office).

So the problem wasn't a simple one, and I dug around for about half an hour without a solution in sight. And then my wife happened to casually mention that this was a one-time thing. That she had to solve this problem once but didn't expect to come across it again, so "a little manual work" wouldn't hurt.

And the problem was solved in two minutes – a minor variation of the requirement was only one formula away (did you know that the latest versions of Excel for Windows offer a “count distinct” function in pivot tables?). Five minutes of fighter work by the wife after that completely solved the problem.
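
(If you prefer code to spreadsheets, the same "distinct count in a pivot" is a one-liner in pandas as well – the data and column names below are invented:)

import pandas as pd

df = pd.DataFrame({
    "city":     ["Bangalore", "Bangalore", "Mumbai", "Mumbai", "Mumbai"],
    "customer": ["A", "B", "A", "A", "C"],
})

# Pivot with a distinct count: number of unique customers per city
print(df.pivot_table(index="city", values="customer",
                     aggfunc=pd.Series.nunique))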

Most data scientists (now that I'm not one!) typically work in production environments, where the result of their analysis is expressed in code that is run on a repeated basis. This means that data scientists are typically tuned to finding elegant solutions, since any manual intervention means that the code is not production-ready and scalable.

This can mean finding complicated workarounds to "pull the bow of the shoelaces", avoiding that little bit of manual effort at the end so that the whole thing can be automated. And these habits can extend to the occasional work that doesn't need to be repeatable and scalable.

And so you have teams spending an inordinate amount of time finding elegant solutions to problems for which easy but non-scalable solutions exist.

Elegance is a hard quality to shake off, even when it only hinders you.

I'll close with a fable – a deer looks at its reflection, admires its beautiful antlers and admonishes its own ugly legs. A lion arrives; the ugly legs help the deer run fast, but the beautiful antlers get stuck in a low tree, and the lion catches up.


The Crane-Mongoose Theory of Public Policy

I have several favourite stories from the Panchatantra (which perhaps explains my lack of appreciation of modern children’s fiction). One of them involves a crane and a mongoose. And I think it is a good lesson on when and where to call for regulation, and government or legal intervention.

So the story goes like this. A snake lives at the bottom of the tree where a crane has built its nest. Each time the crane lays eggs, the snake slithers up the tree and devours them. And the crane doesn’t know what to do. Ultimately it receives some “brilliant advice”.

There is a mongoose living somewhere nearby, and the crane lays out a Hansel-and-Gretel-like trail of fish from the mongoose's house to the snake's house. The mongoose duly follows the trail of fish and finishes off the snake. The next day, the mongoose is hungry again, and it climbs up the tree and devours the crane's eggs.

It is common in political discourse nowadays to call for the government's or courts' intervention to solve what seem to be private problems. The governments and courts are of course happy to oblige – any new avenue for intervention and rent-seeking is good news for the people involved. And then you get a solution that temporarily solves the problem (slaughtering the snake). And in the long term, what you get is a bigger problem (the mongoose eating the crane's eggs). The only difference is that in real life it is not just the crane that gets negatively affected – the regulations hurt everyone.

The examples that come to my mind at this point in time are all “local”. Some residents in Indiranagar in Bangalore weren’t happy about the noise from nearby pubs. They asked the government to “do something”. And the government “did something” – it banned the playing of live music in restaurants, killing off what was then a budding industry in Bangalore.

Some other residents somewhere else in Bangalore were unhappy that their neighbours had dogs that barked. They asked the government to do something. The government did something – coming up with an elaborate document regulating the dogs that people can own.

And there are more involved (and dangerous) examples of this as well.

Don’t be like the crane.

Acceptable forms of help

I was reading this note by Kunal Bahl, CEO and co-founder of Snapdeal on the company’s turnaround after the failed acquisition by Flipkart last year. It’s a very interesting note – while I’ve never been a fan of the company (never considered buying from them), this story seems rather interesting, especially given the deep shit it was in a year ago.

What caught my eye is this little note about getting help from a small network of mentors. Bahl writes:

I was able to get the guidance and counsel from some of the most respected and leading business persons in the country. […] In our time of need, it was those who had the least to gain, and most to give, that came to our help. Not with money. But with their wisdom and encouragement. I recall sitting in the room with one of the above persons in August 2017, staring down the barrel with only months of money left in the bank. The gentleman, probably seeing how dire our situation was, picked up the phone and called six of the top business people in the country in quick succession explaining our situation to them – that we were good guys stuck in a bad situation – and requesting them to meet me to see if there were any synergies with their businesses[…]

(emphasis added)

This got me thinking about why it's considered okay to give or take help in the form of intangibles, but not in terms of money. It's rather common for people to help each other out by way of providing advice, making introductions, or sometimes just hearing them out. It's not that common, though, for people to help each other out with money.

To take a personal example, if someone asks to talk to me to get some advice, or asks for some connections, it's very likely that I'll help them out. On the other hand, if someone were to ask me for money, I'd start viewing them with suspicion.

One quick reason why intangible help is okay is that it is often "cheap". Making introductions doesn't cost you much as long as you think it's mutually beneficial for both parties (and in that, it seriously helps if you do double-consent introductions – talk to both parties independently before introducing them). Advice costs you maybe half an hour or an hour of your time, and if you feel like your time is being wasted, it's not hard to cut your losses. And the value that the recipient gets from this can far exceed the cost incurred by the "giver".

Another reason is that intangibles are intangible – they're hard to measure. And by that token, you don't rack up some sort of debt. If I take money from you, then what I owe you becomes precisely measurable. And until I repay you, things between us can be awkward. Introductions or advice, on the other hand, keep the value of the "debt" fuzzy, and in most cases it gets "written off" anyway, permitting the two parties to continue their relationship normally.

Anything else that I might have missed out?

Speaking of yellow

Last night, we needed to distract the daughter from the play-doh she was playing with so that she could have dinner. So I set up a diversionary tactic by feeding her M&Ms while her mother hurriedly put away the play-doh.

Soon we figured we needed a diversionary tactic from the diversionary tactic, for the daughter wanted to continuously eat M&Ms rather than have dinner. I tried being the "bad dad" by simply refusing to give her any more M&Ms, but that didn't work. So another diversion was set up where we put on the TV, and in that little moment of distraction, I put the yellow packet of M&Ms away behind some boxes on its shelf.

Evidently, it wasn't enough of a distraction, as the daughter quickly remembered the M&Ms and started asking for them. I told her they were "gone" (a word she uses to describe my aunt who passed away recently), but she wouldn't believe it. Soon she demanded to inspect the shelf by herself.

Her mother held her high, and she surveyed all three shelves in the cupboard. I hadn’t done a particularly great job of hiding the M&M packet, but thankfully she didn’t spot the yellow top of the packet from behind the masala box.

Instead, her eyes went up to the top shelf of the same cupboard, where the only visible yellow thing was a bright yellow packet of coffee powder (from Electric Coffee). She demanded to inspect it.

Both of us told her it was coffee powder, but she simply wouldn’t listen. I opened the packet to make her smell it, and see the brown powder inside (we get our coffee ground at the shop since we don’t have a grinder at home, else it’s likely she might have mistaken a bean for a brown M&M). She still wasn’t convinced.

She put her hand right in and pulled out a tiny fistful of coffee powder, which she proceeded to ingest. Soon enough, she was making funny faces, though to her credit she ate all the coffee. It seems the high was enough to make her forget the M&Ms. And suddenly she started running around, well, at a faster rate than usual. Fast enough to bang her head against the wall a minute later – I suspect the caffeine had begun to act.

By the time she had finished crying and recovering from the head-bang, she was ready to belt curd-rice with lime pickle.

And if you want to ask, she fell asleep an hour later. Unlike with us oldies, caffeine doesn't seem to interfere with her sleep!

PS: The title of this post is a dedication to Sanjeev Naik, for reasons that cannot be described here.