Alchemy

Over the last four or five days I immersed myself in finishing Rory Sutherland’s excellent book Alchemy.

It all started with a podcast, with Sutherland being the guest on Russ Roberts’ EconTalk last week. I’d barely listened to half the podcast when I knew that I wanted more of Sutherland, and so immediately bought the book on Kindle. The same evening, I finished my previous book and started reading this.

Sometimes I get a bit concerned that I’m agreeing with an author too much. What made this book “interesting” is that Sutherland is an ad-man and a marketer who keeps talking down data and economics while playing up intuition and “feeling”. In other words, at least as far as professional career and leanings go, he is possibly as far from me as it gets. Yet, I found myself silently nodding in agreement as I went through the book.

If I had to summarise the book in one line, it would be: “most decisions are made intuitively or based on feeling. Data and logic are mainly used to rationalise decisions rather than to make them”.

And if you think about it, it’s mostly true. For example, you don’t use physics to calculate how much to press down on your car accelerator while driving – you do it essentially by trial and error and using your intuition to gauge the feedback. Similarly, a ball player doesn’t need to know any kinematics or projectile motion to know how to throw or hit or catch a ball.

The other thing that Sutherland repeatedly alludes to is that we tend to optimise the things that are easy to measure. Financials are a good example of that. This decade, with the “big data revolution” being followed by the rise of “data science”, the amount of data available for making decisions has exploded, meaning that more and more decisions are being made using data.

The trouble, of course, is availability bias, or what I call the “keys-under-lamppost bias”. We tend to optimise and make decisions on things that are easily measurable (this set, of course, is now much larger than it was a decade ago), and because we know we are making use of more objective stuff, we have irrational confidence in our decisions.

Sutherland talks about barbell strategies, ergodicity, why big data leads to bullshit, why it is important to look for solutions beyond the scope of the immediate domain, and the Dunning-Kruger effect. He makes statements such as “I would rather run a business with no mathematicians than with second-rate mathematicians”, which exactly mirrors my opinion of the “data science industry”.

There is no mystery, then, about why I liked the book.

Thinking again, while I said that professionally Sutherland seems as far from me as possible, that’s possibly not so true. While I do use a fair bit of data and economic analysis as part of my consulting work, I find that I ultimately make most of my decisions on intuition. Data is there to guide me, but the decision-making is always an intuitive process.

In late 2017, when I briefly worked in an ill-fated job in “data science”, I’d made a document about the benefits of combining data analysis with human insight. And when I think about my work, my least favourite projects have been the ones where I’ve used data to help clients make “logical decisions” (as Sutherland puts it).

The work I’ve enjoyed the most has been where I’ve used the data and presented it in ways in which my clients and I have noticed patterns, rationalised them, and then taken an (intuitive) leap of faith into what the right course of action may be.

And this also means that over time I’ve been moving away from work that involves building models (the output is too “precise” to interest me), and taking on more “strategic” stuff where there is a fair amount of intuition riding on top of the data.

Back to the book: I’m so impressed with it that if I were still living in London, I would have pestered Sutherland to meet me, and then tried to convince him to let me work for him. Even if, at the top level, it seems like his work and mine are diametrically opposite.

I leave you with my highlights and notes from the book, and this tweet.

Here’s my book, in case you are interested.


Independence and contribution at work

This is based on a discussion I had at work a few days ago. We were talking about people being able to do things on their own initiative, come up with their own new ideas, invent their own problems to work on (problems that would be useful for the firm as a whole), and so on.

Now, if you consider people’s abilities as a multi-dimensional vector (the number of dimensions will be large, since one’s abilities, capabilities, etc. can lie along several dimensions), what we realised is this: if someone just takes orders from other people and doesn’t work on their own ideas and intuition, then their contribution to their role is only the component of their ability vector along the vector of the person whose orders they are following.

And considering that the probability of their vector and the order-giver’s vector pointing in exactly the same direction is close to zero, simply following someone else’s orders means they contribute less than they are capable of contributing: the component of their ability orthogonal to the order-giver’s vector isn’t on display at all.
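The projection argument can be sketched numerically. This is a toy illustration with made-up numbers (the vectors and their three dimensions are entirely hypothetical): the scalar component of one vector along another is the dot product divided by the norm of the direction vector, and it is strictly smaller than the full norm whenever the two vectors don’t point the same way.

```python
import math

def dot(u, v):
    # Dot product of two equal-length vectors
    return sum(x * y for x, y in zip(u, v))

def norm(v):
    # Euclidean length of a vector
    return math.sqrt(dot(v, v))

# Hypothetical "ability" vector of an employee, and the direction
# set by the orders they follow (illustrative numbers only).
ability = [3.0, 4.0, 0.0]
orders = [1.0, 0.0, 0.0]

# Contribution while purely following orders: the scalar component
# of `ability` along the `orders` direction.
contribution = dot(ability, orders) / norm(orders)

print(norm(ability))  # full capability: 5.0
print(contribution)   # component along orders: 3.0
```

Here the employee is capable of “5.0” worth of contribution, but following orders along a direction that doesn’t match their abilities captures only “3.0” of it; the remaining orthogonal component (here, the 4.0 along the second dimension) is lost.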

Hence, it is important to have people on the team who are capable of independent thinking and intuition, since that is the only way their full possible contribution can be harnessed. On a related note, to bring out the best in its employees and allow them to contribute to their full capacity, a firm should let them take initiative and come up with their own ideas rather than simply take orders, since in the latter case only the component of their abilities along the orders gets contributed.