Super Deluxe

In my four years in Madras (2000-4), I learnt just about enough Tamil to watch a Tamil movie with subtitles. Watching without subtitles is still a bit of a stretch for me, but the fact that streaming sites offer all movies with subtitles means I can watch Tamil movies now.

In the end, I didn’t like Super Deluxe. I thought it was an incredibly weird movie. The last half hour was beyond bizarre. Rather, the entire movie is weird (which is good, in a way we’ll come to in a bit), but there is a point where there is a step-change in the weirdness.

The wife had watched the movie some 2-3 weeks back, and I was watching it on Friday night. Around the time she finished the movie she was watching and was about to go to bed, she peered into my laptop and said “it’s going to get super weird now”. “As if it isn’t weird enough already”, I replied. In hindsight, she was right. She had peered into my laptop right at the moment when the weirdness went up yet another level.

It’s not often that I watch movies, since most movies simply fail to hold my attention. The problem is that most plots are rather predictable, and it is rather easy to second-guess what happens in each scene. This comes down to the information theoretic concept of “surprise”.

Surprise is maximised when the least probable thing happens at every point in time. And when the least probable thing doesn’t happen, there isn’t a story, so filmmakers overindex on surprises, making sure the less probable thing happens. So if you indulge in a small bit of second order thinking, the surprises aren’t surprising any more, and the movie becomes boring.
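To put a number on that “surprise”: in information theory, the surprise (or self-information) of an outcome is -log2(p), so the less probable the outcome, the more surprising it is. A minimal sketch, with probabilities that are entirely made up for illustration:

```python
import math

def surprise_bits(p):
    """Self-information in bits: rarer outcomes carry more surprise."""
    return -math.log2(p)

# Hypothetical probabilities a viewer might assign to what happens next in a scene
outcomes = {
    "hero wins the fight": 0.70,
    "hero loses the fight": 0.25,
    "aliens interrupt the fight": 0.05,
}
for outcome, p in outcomes.items():
    print(f"{outcome}: {surprise_bits(p):.2f} bits")

# The least probable outcome is the most surprising -- but once the viewer
# starts expecting the improbable (second order thinking), the probabilities
# they assign shift, and the surprise evaporates.
```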

Super Deluxe establishes pretty early on that the plot is going to be rather weird. And when you think the scene has been set with sufficient weirdness in each story (there are four intertwined stories in the movie, as per modern fashion), the next time the movie comes back to the story, the story is shown to get weirder. And so you begin to expect weirdness. And this, in a way, makes the movie less predictable.

The reason a weird movie is less predictable is that at each scene it is simply impossible for the viewer to even think of the possibilities. And in a movie that gets progressively weirder like this one, every time you think you have listed out the possibilities and predicted what happens, what follows is something from outside your “consideration set”. And that keeps you engaged, and wanting to see what happens.

The problem with a progressively weird movie is that at some point it needs to end. And it needs to end in a coherent way. Well, it is possible sometimes to leave the viewer hanging, but some filmmakers see the need to provide a coherent ending.

And so what usually happens is that at some point in time the plot gets so remarkably simplified that everything suddenly falls in place (though nowhere as beautifully as things fall in place at the end of a Wodehouse novel). Another thing that can happen is that the weirdness is taken up a notch, so that things fall in place at a “meta level”, at which point the movie can end.

The thing with Super Deluxe is that both these things happen! On one side the weirdness is taken up several notches. And on the other the plots get so oversimplified that things just fall in place. And that makes you finish the movie with a rather bitter taste in the mouth, feeling thoroughly unsatisfied.

That the “ending” of the movie (where things get really weird AND really simplified) lasts half an hour doesn’t help matters.

Context switches and mental energy

Back in college, whenever I felt that my life needed to be “resurrected”, I used to start by cleaning up my room. Nowadays, like most other things in the world, this has moved to the virtual world as well. Since I can rely on the wife (:P) to keep my room “Pinky clean” all the time, resurrection of life nowadays begins with going off social media.

My latest resurrection started on Monday afternoon, when I logged off twitter and facebook and linkedin from all devices, and deleted the instagram app off my phone. My mind continues to wander, but one policy decision I’ve made is to both consume and contribute content only in the medium or long form.

Regular readers of this blog might notice that there’s consequently been a massive uptick of activity here – not spitting out little thoughts from time to time on twitter means that I consolidate them into more meaningful chunks and put them here. What is interesting is that consumption of larger chunks of thought has also resulted in greater mindspace.

It’s simple – when you consume content in small chunks – tweets or instagram photos, for example, you need to switch contexts very often. One thought begins and ends with one tweet, and the next tweet is something completely different, necessitating a complete mental context switch. And, in hindsight, I think that is “expensive”.

While the constant stream of diverse thoughts is especially stimulating (and that is useful for someone like me who’s been diagnosed with ADHD), it comes with a huge mental cost of context switching. And that means less energy to do other things. It’s that simple, and I can’t believe I hadn’t thought of it for so long!

I still continue to have my distractions (my ADHD mind won’t allow me to live without some). But they all happen to be longish content. There are a few blog posts (written by others) open in my browser window. My RSS feed reader is open on my browser for the first time since possibly my last twitter break. When in need of distraction, I read chunks of one of the articles that’s open (I read one article fully before moving on to the next). And then go back to my work.

While this provides me the necessary distraction, it also provides the distraction in one big chunk which doesn’t take away as much mental energy as reading twitter for the same amount of time would.

I’m thinking (though it may not be easy to implement) that once I finish this social media break, I’ll install apps on the iPad rather than having them on my phone or computer. Let’s see.

Television and interior design

One of the most under-rated developments in the world of architecture and interior design has been the rise of the flat-screen television. Its earlier avatar, the Cathode Ray Tube version, was big and bulky, and needed special arrangements to keep. One solution was to keep it in corners. Another was to have purpose-built deep “TV cabinets” into which these big screens would go.

In the house that I grew up in, there was a purpose-built corner to keep our televisions. Later on in life, we got a television cabinet to put in that place, that housed the television, music system, VCR and a host of other things.

For the last decade, which has largely coincided with the time when flat-screen LCD/LED TVs have replaced their CRT predecessors, I’ve seen various tenants struggle to find a good spot for the TVs. For a corner is too inelegant a spot for the flat screen television – it needs to be placed flat against the middle of a large wall.

When the flat screen TV replaced the CRT TV, out went the bulky “TV cabinets” and in came the “console” – a short table on which you kept the TV, and below which you kept the accompanying accessories such as the “set top box” and DVD player. We had even got a purpose-built TV console with a drawer to store DVDs in.

Four years later, we’d dispensed with our DVD player (at a time when my wife’s job involved selling DVDs and CDs, we had no device at home that could play any of these storage devices!). And now we have “cut the cord”. After we returned to India earlier this year, we decided to not get cable TV, relying on streaming through our Fire stick instead.

And this heralds the next phase in which television drives interior design.

In the early days of flat screen TVs, it became common for people to “wall mount” them. This was usually a space-saving device, though people still needed a sort of console to store input devices such as set top boxes and DVD players.

Now, with the cable having been cut and DVD players no longer common, wall mounting doesn’t make sense at all. For with WiFi-based streaming devices, the TV is now truly mobile.

In the last couple of months, the TV has nominally resided in our living room, but we’ve frequently taken it to whichever room we wanted to watch it in. All that we need to move the TV is a table to keep it on, and a pair of plug points to plug in the TV and the fire stick.

In our latest home reorganisation we’ve even dispensed with a permanent home for the TV in the living room, thus radically altering its design and creating more space (the default location of the TV now is in the study). The TV console doesn’t make any sense, and has been temporarily converted into a shoe rack. And the TV moves from room to room (it’s not that heavy, either), depending on where we want to watch it.

When the CRT TV gave way to the flat screen, architects responded by creating spaces where TVs could be put in the middle of a long wall, either mounted on the wall or kept on a console. That the TV’s position in the house changed meant that the overall architecture of houses changed as well.

Now it will be interesting to see what large-scale architectural changes get driven by cord-cutting and the realisation that the TV is essentially a mobile device.

Coordinated and uncoordinated potlucks

Some potluck meals are coordinated. One or more coordinators assume leadership and instruct each attending member what precisely to bring. It’s somewhat like central planning in that sense – the coordinators make assumptions on what each person wants and how much they will eat and what goes well with what, and make plans accordingly.

Uncoordinated potlucks can be more interesting. Here, people don’t talk about what to bring, and simply bring what they think the group might be interested in. This can result in widely varying outcomes – some great meals, occasionally a lot of wasted food, and some weird mixes of starters, main courses and desserts.

We had one such uncoordinated potluck at my daughter’s school picnic last week. All children were accompanied by their parents and were asked to bring “snacks”. Nothing was specified apart from the fact that we should bring it in steel containers, and that we should get homemade stuff.

Now, for a bit of background. For slightly older kids (my daughter doesn’t qualify yet) the school has a rotating roster for lunch, where on each day one kid brings lunch for the entire class. So parents are used to sending lunch for all the children, and children are used to eating a variety of foods. A friend who sent his daughter to the same school tells me that it can become a bit too competitive sometimes, with families seeking to outdo one another with the fanciness of the foods they send.

In that sense, I guess the families of these older kids had some information on what normally came for lunch and what got eaten and so on – a piece of information we didn’t have. The big difference between this picnic potluck and school lunch (though I’m not sure if other parents knew of this distinction) was that this was “anonymous”.

All of us kept our steel boxes and vessels on a large table set up for the purpose, so when people served themselves there was little clue of which food had come from whose house. In that sense there was no point showing off (though we tried, taking hummus with carrot and cucumber sticks). And it resulted in what I thought was a fascinating set of food, though I guess some of it couldn’t really be classified as “snack”.

The fastest to disappear was a boxful of chitranna (lemon rice). I thought it went rather well with roasted and salted peanuts that someone else had bought. There were some takers for our hummus as well, though our cut apples didn’t “do that well”. I saw a boxful of un-taken idlis towards the end of the snack session. Someone had brought boiled sweet corn on the cob. And there were many varieties of cakes that families had (presumably baked and) brought.

What I found interesting was that despite there being zero coordination between the families, they had together served up what was a pretty fascinating snack, with lots of variety. “Starters”, “Mains”, “Desserts” and “Sides” were all well represented, even if the balance wasn’t precisely right.

The number of families involved here (upwards of 30) meant that perfect coordination would’ve been nigh impossible, and I’m not sure if a command-and-control style coordinated potluck would have worked in any case (that would have also run the risk of a family bunking the picnic at the last moment, leaving an important piece of the puzzle missing).

The uncoordinated potluck meant that there were no such imbalances, and families, left to themselves and without any feedback, had managed to serve themselves a pretty good “snack”!

More power to decentralised systems!

Gamification and finite and infinite games

Ok, here I’m integrating a few concepts that I learnt via Venkatesh Guru Rao. The first is that of Finite and Infinite Games, a classic if hard-to-read book by philosopher James Carse (which I initially discovered thanks to his Breaking Smart Season 1 compilation). The second is that of “playflow”, which again I discovered through a recent edition of his newsletter.

A lot of companies try to “gamify” the experiences for their employees in order to make work more fun, and to possibly make them more efficient.

For example, sales organisations offer complicated incentives (one of my historically favourite work assignments has been to help a large client optimise these incentives). These incentives are offered at multiple “slabs”, and used to drive multiple objectives (customer acquisition, retention, cross-sell, etc.). And by offering employees incentives for achieving some combination of these objectives, the experience is being “gamified”. It’s like the employee is gaining points by achieving each of these objectives, and the points together lead to some “reward”.
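A minimal sketch of what such a scheme can look like (the objectives, slab thresholds and point values below are invented for illustration, not taken from any actual engagement):

```python
# A hypothetical slab-based incentive scheme: points per objective,
# with higher slabs paying out disproportionately more.
SLABS = {
    "customer_acquisition": [(5, 10), (10, 25), (20, 60)],   # (threshold, points)
    "retention_pct":        [(80, 10), (90, 25), (95, 60)],
    "cross_sell":           [(3, 10), (6, 25), (10, 60)],
}

def points(objective, achieved):
    """Points for the highest slab threshold the employee has crossed."""
    return max((pts for threshold, pts in SLABS[objective] if achieved >= threshold), default=0)

def reward(achievements, rate_per_point=1000):
    """Total payout: points across objectives converted to money."""
    total = sum(points(obj, val) for obj, val in achievements.items())
    return total * rate_per_point

print(reward({"customer_acquisition": 12, "retention_pct": 91, "cross_sell": 4}))  # 25+25+10 = 60 points
```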

This is just one example. There are several other ways in which organisations try to gamify the experience for their employees. All of them involve some sort of award of “points” for things that people do, and then a combination of points leading to some “reward”.

The problem with gamification is that the games organisations design are usually finite games. “Sell 10 more widgets in the next month”. “Limit your emails to a maximum of 200 words in the next fifteen days”. “Visit at least one client each day”. And so on.

Running an organisation, however, is an infinite game. At the basic level, the objective of an organisation is to remain a going concern, and keep on running. Growth and dividends and shareholder returns are secondary to that – if the organisation is not a going concern, none of that matters.

And there is the contradiction – the organisation is fundamentally playing an infinite game. The employees, thanks to the gamified experience, are playing finite games. And they aren’t always compatible.

Of course, there are situations where finite games can be designed in a way that their objectives align with the objectives of the overarching infinite game. This, however, is not always possible. Hence, gamification is not always a good strategy for organisations.

Organisations have figured out the solution to this, of course. There is a simple way to make employees play the same infinite game as the organisation – by offering employees equity in the company. Except that employees have the option of converting that to a finite game by selling the said equity.

Whoever said incentive alignment is an easy task…


Marginalised communities and success

Yesterday I was listening to this podcast where Tyler Cowen interviews Neal Stephenson, who is perhaps the only Science Fiction author whose books I’ve read. Cowen talks about the characters in Stephenson’s The Baroque Cycle, a masterful 3000-page work which I polished off in a month in 2014.

The key part of the conversation for me is this:

COWEN: Given your focus on the Puritans and the Baroque Cycle, do you think Christianity was a fundamental driver of the Industrial Revolution and the Scientific Revolution, and that’s why it occurred in northwestern Europe? Or not?

STEPHENSON: One of the things that comes up in the books you’re talking about is the existence of a certain kind of out-communities that were weirdly overrepresented among people who created new economic systems, opened up new trade routes, and so on.

I’m talking about Huguenots, who were the Protestants in France who suffered a lot of oppression. I’m talking about the Puritans in England, who were not part of the established church and so also came in for a lot of oppression. Armenians, Jews, Parsis, various other minority communities that, precisely because of their outsider minority status, were forced to form long-range networks and go about things in an unconventional, innovative way.

So when we think about communities such as Jews or Parsis, and think about their outsized contribution to business or culture, it is this point that Stephenson makes that we should keep in mind. Because Jews and Parsis and Armenians were outsiders, they were “forced to form long-range networks”.

In most cases, for most people of these communities, these long-range networks and unconventional way of doing things didn’t pay off, and they ended up being worse off compared to comparable people from the majority communities in wherever they lived.

However, in the few cases where these long-range networks and innovative ways of doing things succeeded, they succeeded spectacularly. And these instances are the cases we have in mind when we think about the spectacular success or outsized contributions of these communities.

Another way to think of this is – denied “normal life”, people from marginalised communities were forced to take on much more risk in life. The expected value of this risk might have been negative, but this higher risk meant that these communities had a much better “upper tail” than the majority communities that suppressed and oppressed them.

Given that in terms of long-term contributions and impact and public visibility it is only the tails of the distribution that matter (mediocrity doesn’t make news), we think of these communities as having been extraordinary, and wonder if they have “better genes” and so on.
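A quick simulation makes the argument concrete (the distributions and parameters are made up purely for illustration): give one group a strategy with lower expected value but higher variance, and it can still dominate the extreme upper tail, which is the only part anyone remembers.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical "life outcome" scores, log-normally distributed.
# The majority community gets the safer strategy (higher mean, lower variance);
# the marginalised community is forced into the riskier one (lower mean, higher variance).
majority     = rng.lognormal(mean=1.0, sigma=0.5, size=n)
marginalised = rng.lognormal(mean=-0.2, sigma=1.2, size=n)

print("average outcome:", majority.mean(), "vs", marginalised.mean())  # majority wins on average

cutoff = np.percentile(np.concatenate([majority, marginalised]), 99.9)
top_majority = (majority > cutoff).sum()
top_marginalised = (marginalised > cutoff).sum()
print("share of the top 0.1% from the marginalised group:",
      top_marginalised / (top_majority + top_marginalised))  # close to 1
```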

It’s a simple case of risk, and oppression. This, of course, is no justification for oppressing swathes of people and forcing them to take more risks than necessary. People need to decide on their own risk preferences.

10X Studs and Fighters

Tech twitter, for the last week, has been inundated with unending debate on this tweetstorm by a VC about “10X engineers”. The tweetstorm was engineered by Shekhar Kirani, a Partner at Accel Partners.

I have friends and twitter-followees on both sides of the debate. There isn’t much more to describe about the “paksh” (for) side of the debate. Read Shekhar’s tweetstorm I’ve put above, and you’ll know all there is to this side.

The “vipaksh” (against) side argues that this normalises “toxicity” and “bad behaviour” among engineers (about “10X engineers”’ hatred for meetings, their not adhering to processes, etc.). Someone I follow went to the extent of saying that this kind of behaviour among engineers is a sign of privilege and lack of empathy.

This is just the gist of the argument. You can just do a search of “10X engineer”, ignore the jokes (most of them are pretty bad) and read people’s actual arguments for and against “10X engineers”.

Regular readers of this blog might be familiar with the “studs and fighters” framework, which I used so often in the 2007-9 period that several people threatened to stop reading me unless I stopped using the framework. I put it on a temporary hiatus and then revived it a couple of years back because I decided it’s too useful a framework to ignore.

One of the fundamental features of the studs and fighters framework is that studs and fighters respectively think that everyone else is like themselves. And this can create problems at the organisational level. I’d spoken about this in the introductory post on the framework.

To me this debate about 10X engineers and whether they are good or bad reminds me of the conflict between studs and fighters. Studs want to work their way. They are really good at what they’re competent at, and absolutely suck at pretty much everything else. So they try to avoid things they’re bad at, can sometimes be individualistic and prefer to work alone, and hope that how good they are at the things they’re good at will compensate for all that they suck elsewhere.

Fighters, on the other hand, are process driven, methodical, patient and sticklers for rules. They believe that output is proportional to input, and that it is impossible for anyone to have a 10X impact, even 1/10th of the time (:P). They believe that everyone needs to “come together as a group and go through a process”.

I can go on but won’t.

So should your organisation employ 10X engineers or not? Do you tolerate the odd “10X engineer” who may not follow company policy and all that in return for their superior contributions? There is no easy answer to this but overall I think companies together will follow a “mixed strategy”.

Some companies will be encouraging of 10X behaviour, and you will see 10X people gravitating towards such companies. Others will dissuade such behaviour, and the 10X people there, not seeing any upside, will leave to join the 10X companies (again, I’ve written about how you can have “stud organisations” and “fighter organisations”).

Note that it’s difficult to run an organisation with solely 10X people (they’re bad at managing stuff), so organisations that engage 10X people will also employ “fighters” who are cognisant that 10X people exist and know how they should be managed. In fact, being a fighter while recognising and being able to manage 10X behaviour is, I think, an important skill.

As for myself, I don’t like one part of Shekhar Kirani’s definition – that he restricts it to “engineers”. I think the sort of behaviour he describes is present in other fields and skills as well. Some people see the point in that. Others don’t.

Life is a mixed strategy.

Ride Sharing and Goodbyes

Ride sharing apps such as Uber and Ola have destroyed the art of the goodbye. Given that we can’t be sure how long our ride takes to arrive, and that we better ‘catch’ the ride as soon as it arrives, the use of the apps means that most of the time goodbyes are either abrupt or too prolonged.

Back in the day, before we had these apps, once the guests told the hosts they were leaving, they could be reliably expected to leave in a certain amount of time. And they would leave, take out their car or scooter or walk out to get an auto, and after a nice goodbye, off they would go.

Ride sharing apps have changed the workflow here. It can work two ways. One way is that you say that you’re leaving, and then take out your phone to hail an Uber or Ola. And then you find that a cab is 20 minutes away. And so after having said all the goodbyes you sit down again. The host who was waiting to clean up and get on with life sits down with you. And then your cab arrives presently and you pack up and dash off.

And the opposite can happen as well. You might think it might take a while before the cab arrives and so you book the cab before you start the goodbye process. And then as your luck (good or bad I don’t know) would have it, there is a cab right round the corner, and it is just a minute or two away. And then you say goodbye hurriedly, maybe leave behind an item or two, and dash off.

A combination of the two happened at a party last night. A friend and I decided to leave around the same time. And we took out our phones to book our respective rides home before we informed the hosts. I made a mental note at that time that we should take a picture with the hosts before we leave.

Then as it happened, I tried Uber and it was some 20 minutes away (my friend got one that was only 5 minutes away). I first thought I’ll get another drink but then I got bugged and decided to try Ola Auto, and I found one right outside (1 minute away). And I didn’t want to miss that, and so that meant a quick goodbye. And I forgot to take that photo that I wanted to take.

So it goes.

Periodicals and Dashboards

The purpose of a dashboard is to give you a live view of what is happening with the system. Take for example the instrument it is named after – the car dashboard. It tells you at the moment what the speed of the car is, along with other indicators such as which lights are on, the engine temperature, fuel levels, etc.

Not all reports, however, need to be dashboards. Some reports can be periodicals. These periodicals don’t tell you what’s happening at a moment, but give you a view of what happened in or at the end of a certain period. Think, for example, of classic periodicals such as newspapers or magazines, in contrast to online newspapers or magazines.

Periodicals tell you the state of a system at a certain point in time, and also give information of what happened to the system in the preceding time. So the financial daily, for example, tells you what the stock market closed at the previous day, and how the market had moved in the preceding day, month, year, etc.

Doing away with metaphors, business reporting can be classified into periodicals and dashboards. And they work exactly like their metaphorical counterparts. Periodical reports are produced periodically and tell you what happened in a certain period or at a point of time in the past. A good example is company financials – an income statement and a balance sheet that respectively describe what happened over a period and at a point in time for the company.

Once a periodical is produced, it is frozen in time for posterity. Another edition will be produced at the end of the next period, but it is a new edition. It adds to the earlier periodical rather than replacing it. Periodicals thus have historical value and because they are preserved they need to be designed more carefully.

Dashboards, on the other hand, are fleeting, and not usually preserved for posterity – they are simply overwritten. Whether all systems are up this minute matters only this minute; if you haven’t reacted to the report now, it ceases to be of importance the next minute (of course, some aspects might still matter at a later date, and those will be captured in the next periodical).

When we are designing business reports and other “business intelligence systems” we need to be cognisant of whether we are producing a dashboard or a periodical. The fashion nowadays is to produce everything as a dashboard, perhaps because there are popular dashboarding tools available.

However, dashboards are expensive. For one, they need a constant connection to be maintained to the “system” (database or data warehouse or data lake or whatever other storage unit, in the business report sense). Also, by definition they are not stored, and if you need to store them you have to decide upon a frequency of storage, which makes it a periodical anyway.

So companies can save significantly on resources (compute and storage) by switching from dashboards (which everyone seems to think in terms of) to periodicals. The key here is to get the frequency of the periodical right – too frequent and people will get bugged. Not frequent enough, and people will get bugged again due to lack of information. Given the tools and technologies at hand, we can even make reports “on demand” (for stuff not used by too many people).
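As a sketch of the difference (the table, columns and file names here are made up): a dashboard re-queries the live store every time someone looks at it, while a periodical is a scheduled job that writes out a dated, frozen snapshot which accumulates into an archive.

```python
import sqlite3
from datetime import date

def dashboard_view(conn):
    """Dashboard: hits the live store on every view; the number is overwritten next time."""
    return conn.execute("SELECT COUNT(*) FROM orders WHERE status = 'open'").fetchone()[0]

def weekly_periodical(conn):
    """Periodical: run on a schedule, frozen into a dated file that adds to the archive."""
    rows = conn.execute("SELECT status, COUNT(*) FROM orders GROUP BY status").fetchall()
    with open(f"orders_report_{date.today().isoformat()}.csv", "w") as f:
        f.write("status,count\n")
        f.writelines(f"{status},{count}\n" for status, count in rows)

# Toy data to make the sketch self-contained
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?)", [("open",), ("open",), ("closed",)])

print(dashboard_view(conn))   # live number, gone the moment it changes
weekly_periodical(conn)       # snapshot preserved for posterity
```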

Good vodka and bad chicken

When I studied Artificial Intelligence, back in 2002, neural networks weren’t a thing. The limited compute capacity and storage available at that point in time meant that most artificial intelligence consisted of what is called “rule based methods”.

And as part of the course we learnt about machine translation, and the difficulty of getting the implicit meaning across. The favourite example among computer scientists at the time was the story of how some scientists translated “the spirit is willing but the flesh is weak” into Russian using an English-Russian translation software, and then converted it back into English using a Russian-English translation software.

The result was “the vodka is excellent but the chicken is not good”.

While this joke may not be valid any more thanks to the advances in machine translation, aided by big data and neural networks, the issue of translation is useful in other contexts.

Firstly, speaking in a language that is not your “technical first language” makes you eschew jargon. If you have been struggling to get rid of jargon from your professional vocabulary, one way to get around it is to speak more in your native language (which, if you’re Indian, is unlikely to be your technical first language). Devoid of the idioms and acronyms that you normally fill your official conversation with, you are forced to think, and this practice of talking technical stuff in a non-usual language will help you cut your jargon.

There is another use case for using non-standard languages – dealing with extremely verbose prose. A number of commentators, a large number of whom are rather well-reputed, have this habit of filling their columns with flowery language, GRE words, repetition and rhetoric. While there is usually some useful content in these columns, it gets lost in the language and idioms and other things that would make the columnist’s high school English teacher happy.

I suggest that these columns be given the spirit-flesh treatment. Translate them into a non-English language, get rid of redundancies in sentences and then translate them back into English. This process, if the translators are good at producing simple language, will remove the bluster and make the column much more readable.
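Mechanically, the spirit-flesh treatment is just a round trip through another language. A sketch of the workflow, using a hypothetical translate() helper as a stand-in for whatever translation service (or patient human translator) you have at hand:

```python
def translate(text: str, src: str, dst: str) -> str:
    """Hypothetical stand-in for a machine-translation API or a human translator."""
    raise NotImplementedError("plug in your translation service of choice here")

def spirit_flesh_treatment(column: str, via: str = "kn") -> str:
    """Round-trip a verbose column through another language to strip the bluster."""
    intermediate = translate(column, src="en", dst=via)
    # A good translator will drop the GRE words, idioms and rhetorical flourishes
    # that don't survive the trip; what comes back should be plainer English.
    return translate(intermediate, src=via, dst="en")
```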

Speaking in a non-standard language can also make you get out of your comfort zone and think harder. Earlier this week, I spent two hours recording a podcast in Hindi on cricket analytics. My Hindi is so bad that I usually think in Kannada or English and then translate the sentence “live” in my head. And as you can hear, I sometimes struggle for words. Anyway here is the thing. Listen to this if you can bear to hear my Hindi for over an hour.