The utility of utility functions

That is the title of a webinar I delivered this morning on behalf of Kristal.AI, a company that I’ve been working with for a while now. I spoke about utility functions, and how they can be used in portfolio optimisation.

This is related to the work that I’ve been doing for Kristal, and lies at the boundaries between quantitative finance and behavioural finance, and in fact I spoke about utility functions (combined with Monte Carlo methods) as being a great method to unify quantitative and behavioural finance.
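To make that concrete, here is a minimal sketch of the idea (not Kristal's actual methodology – the utility function, return parameters and asset mix below are all made up for illustration): simulate many return scenarios, and pick the portfolio that maximises the investor's expected utility, rather than optimising some abstract risk-return trade-off.

```python
import numpy as np

def crra_utility(wealth, gamma):
    """CRRA (power) utility; higher gamma means a more risk-averse investor."""
    if gamma == 1.0:
        return np.log(wealth)
    return (wealth ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def expected_utility(weights, scenario_returns, gamma):
    """Average utility of terminal wealth across simulated return scenarios."""
    terminal_wealth = 1.0 + scenario_returns @ weights
    return crra_utility(terminal_wealth, gamma).mean()

rng = np.random.default_rng(42)
# One risky and one safe asset, with made-up return parameters
scenarios = rng.normal(loc=[0.08, 0.03], scale=[0.20, 0.02], size=(100_000, 2))

# Pick the risky-asset weight that maximises expected utility for gamma = 4
grid = np.linspace(0.0, 1.0, 101)
best_w = max(grid, key=lambda w: expected_utility(np.array([w, 1.0 - w]), scenarios, gamma=4.0))
print(f"risky-asset weight maximising expected utility: {best_w:.2f}")
```

The behavioural part enters through gamma – a more risk-averse investor gets a different "optimal" portfolio from exactly the same simulated scenarios, which is the sense in which this unifies the quantitative and behavioural views.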

Interactive Brokers (who organised the webinar) recorded the thing, and you can find the recording here. 

I think the webinar went well, though I can't be sure since there was no feedback. This was by design – the webinar was a speaker-only broadcast, and the audience wasn't allowed to participate except through questions sent directly to me.

In the first place, webinars are hard to do since it feels like talking to an empty room – there is no feedback, not even nods or smiles, and you don't know if people are listening. In most "normal" webinars, the audience can interject by raising their hands, and you can try to make it interactive. The format used here didn't permit such interaction, which made it seem like I was talking into thin air.

Also, the Mac app of the webinar tool didn't seem particularly well optimised. I couldn't share a particular screen from my laptop (I couldn't say "share only my PDF, nothing else", which is normal in most online meeting tools), and there were times when I inadvertently exposed my desktop to the full audience (you can see it in the recording).

Anyway, I think I've spoken about something remotely interesting, so give it a listen. My "main speech" takes only around 20-25 minutes. And if you want to know more about utility functions and behavioural economics, I recommend this piece by John Cochrane.

Bankers predicting football

So the Football World Cup season is upon us, and this means that investment banking analysts are again engaging in the pointless exercise of trying to predict who will win the World Cup. And the funny thing this time is that thanks to MiFID II regulations, which prevent banking analysts from giving out reports for free, these reports aren't in the public domain.

That means we have to rely on media reports of these reports, or on people tweeting insights from them. For example, the New York Times has summarised the banks' predictions on the winner. And this scatter plot from Goldman Sachs will go straight into my next presentation on spurious correlations:

Different banks have taken different approaches to predicting who will win the tournament. UBS has stuck with a classic Monte Carlo simulation approach, but Goldman Sachs has gone a step further and used "four different methods in artificial intelligence" to predict (for the third consecutive time) that Brazil will win the tournament.

In fact, Goldman also uses a Monte Carlo simulation, as Business Insider reports.

The firm used machine learning to run 200,000 models, mining data on team and individual player attributes, to help forecast specific match scores. Goldman then simulated 1 million possible variations of the tournament in order to calculate the probability of advancement for each squad.

But an insider at Goldman with access to the report tells me that they don't use the phrase "Monte Carlo" itself in the report. Maybe it's a sign that "data scientists" have taken over the investment research division at the expense of quants.

I'm also surprised by the reporting on Goldman's predictions. Everyone simply reports that "Goldman predicts that Brazil will win", but surely (given the kind of model they've used) that prediction comes with a certain probability? A better way of reporting would've been to say "Goldman predicts Brazil most likely to win, with X% probability" (and the bank's bets desk in the UK could have placed some money on it).
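For what it's worth, a probability of winning is exactly what a Monte Carlo tournament simulation spits out. Here is a toy sketch (the team list, "strength" ratings and win model are all invented for illustration – the banks' models are far richer): simulate a knockout bracket many times and count how often each team lifts the cup.

```python
import random
from collections import Counter

# Made-up strength ratings, with a simple win model:
# P(A beats B) = strength_A / (strength_A + strength_B)
strengths = {"Brazil": 90, "Germany": 85, "Spain": 80, "France": 78,
             "Argentina": 75, "Belgium": 72, "England": 65, "Portugal": 60}

def play(a, b):
    return a if random.random() < strengths[a] / (strengths[a] + strengths[b]) else b

def simulate_knockout(teams):
    """Play a single-elimination bracket until one team remains."""
    while len(teams) > 1:
        teams = [play(teams[i], teams[i + 1]) for i in range(0, len(teams), 2)]
    return teams[0]

random.seed(0)
bracket = list(strengths)
titles = Counter(simulate_knockout(bracket) for _ in range(100_000))
for team, wins in titles.most_common(3):
    print(f"{team}: {wins / 100_000:.1%} chance of winning")
```

The "most likely to win, with X% probability" headline falls straight out of the counter at the end – even the strongest team here wins only a minority of the simulated tournaments.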

ING went rather simple with their forecasts – they simply took players' transfer values, summed them up by team, and concluded that Spain is most likely to win because their squad is the "most valued". Now, I have two major questions about this approach. Firstly, it ignores the "correlation term" (remember the famous England conundrum of the noughties, of fitting Gerrard and Lampard into the same eleven?) and assumes that a set of strong players makes a strong team. Secondly, have they accounted for inflation? And if so, how? Player valuation (about which I have a chapter in my book) has simply gone through the roof in the last year, with Mo Salah at £35 million being considered a "bargain buy".
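ING's method, as described, boils down to a group-by-and-sum – something like the following sketch (the players, teams and valuations are all made up for illustration):

```python
from collections import defaultdict

# Hypothetical (player, national team, transfer value in £m) records
squad_values = [
    ("Forward A", "Spain", 90), ("Keeper B", "Spain", 60),
    ("Winger C", "Brazil", 80), ("Defender D", "Brazil", 55),
    ("Midfielder E", "England", 70),
]

# Sum transfer values by team; the "favourite" is simply the most valued squad
team_value = defaultdict(float)
for _, team, value in squad_values:
    team_value[team] += value

favourite = max(team_value, key=team_value.get)
print(favourite, team_value[favourite])
```

Nothing in the aggregation knows whether the expensive players actually fit together in one eleven – which is exactly the missing correlation term.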

Nomura also seems to have taken a similar approach, though they have in some ways accounted for the correlation term by including “team momentum” as a factor!

Anyway, I look forward to the football! That it is live on BBC and ITV means I get to watch the tournament from the comfort of my home (a luxury in England!). Also being in England means all matches are at a sane time, so I can watch more of this World Cup than the last one.

 

A banker’s apology

Whenever there is a massive stock market crash, like the one in 1987, or the crisis in 2008, it is common for investment banking quants to talk about how it was a “1 in zillion years” event. This is on account of their models that typically assume that stock prices are lognormal, and that stock price movement is Markovian (today’s movement is uncorrelated with tomorrow’s).

In fact, a cursory look at recent data shows that what models claim to be a one-in-a-zillion-years event actually happens every few years or decades. In other words, while quant models do pretty well in the average case, they have thin "tails" – they underestimate the likelihood of extreme events, leading to a build-up of risk in the system.
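The "one in a zillion years" arithmetic falls straight out of the normal distribution. As a sketch, under a Gaussian daily-returns model, a 5-sigma down day should essentially never happen in a human lifetime:

```python
import math

def normal_tail_prob(z):
    """P(Z < -z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p = normal_tail_prob(5.0)        # probability of a 5-sigma daily drop
years_between = 1.0 / p / 252.0  # ~252 trading days per year
print(f"one 5-sigma day every ~{years_between:,.0f} years under the normal model")
```

Actual markets have produced several such moves within a few decades – the 1987 crash, by common estimates, was a move of around 20 sigmas under this model – which is precisely the thin-tails problem.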

When I decided to end my (brief) career as an investment banking quant in 2011, I wanted to take the methods that I’d learnt into other industries. While “data science” might have become a thing in the intervening years, there is still a lot for conventional industry to learn from banking in terms of using maths for management decision-making. And this makes me believe I’m still in business.

And like my former colleagues in investment banking quant, I'm not immune to the fat tail problem either – replicating solutions from one domain in another can replicate the problems as well.

For a while now I’ve been building what I think is a fairly innovative way to represent a cricket match. Basically you look at how the balance of play shifts as the game goes along. So the representation is a line graph that shows where the balance of play was at different points of time in the game.

This way, you have a visualisation that at one shot tells you how the game “flowed”. Consider, for example, last night’s game between Mumbai Indians and Chennai Super Kings. This is what the game looks like in my representation.

What this shows is that Mumbai Indians got a small advantage midway through the innings (after a short blast by Ishan Kishan), which they held through their innings. The game was steady for about 5 overs of the CSK chase, when some tight overs created pressure that resulted in Suresh Raina getting out.

Soon, Ambati Rayudu and MS Dhoni followed him to the pavilion, and MI were in control, with CSK losing 6 wickets in the course of 10 overs. When they lost Mark Wood in the 17th over, Mumbai Indians were almost surely winners – my system reckoned that 48 to win off 21 balls was near-impossible.

And then Bravo got into the act, putting on 39 in 10 balls with Imran Tahir watching at the other end (including taking 20 off a Mitchell McClenaghan over, and 20 again off a Jasprit Bumrah over at the end of which Bravo got out). And then a one-legged Jadhav came, hobbled for 3 balls and then finished off the game.

Now, while the shape of the above curve is representative of what happened in the game, I think it went too close to the axes. 48 off 21 with 2 wickets in hand is not easy, but it's not a 1% probability event (as my graph depicts).

And looking into my model, I realise I've made the familiar banker's mistake – of assuming independence and the Markovian property. I calculate the probability of a team winning using a method called "backward induction" (which I'd learnt during my time as an investment banking quant). It's the same method that the WASP system (invented by a few Kiwi scientists) uses to evaluate the odds, and as I'd pointed out in the past, WASP has the thin tails problem as well.
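For the curious, the heart of such a backward induction is tiny. This sketch (the per-ball outcome probabilities are invented, and deliberately identical for every ball – exactly the independence assumption that produces the thin tails) computes the chasing team's win probability from the last ball backwards:

```python
from functools import lru_cache

# Per-ball scoring outcomes and made-up probabilities, assumed the same for
# every ball of the chase; the remaining 0.08 is the chance of a wicket.
OUTCOMES = [(0, 0.35), (1, 0.30), (2, 0.10), (3, 0.02), (4, 0.10), (6, 0.05)]
P_WICKET = 0.08

@lru_cache(maxsize=None)
def p_win(balls_left, runs_needed, wickets_left):
    """Win probability for the chasing side, by backward induction over states."""
    if runs_needed <= 0:
        return 1.0
    if balls_left == 0 or wickets_left == 0:
        return 0.0
    p = P_WICKET * p_win(balls_left - 1, runs_needed, wickets_left - 1)
    for runs, prob in OUTCOMES:
        p += prob * p_win(balls_left - 1, runs_needed - runs, wickets_left)
    return p

# e.g. 48 needed off 21 balls with 2 wickets in hand
print(f"{p_win(21, 48, 2):.4f}")
```

Computed this way, 48 off 21 with 2 wickets comes out very small indeed, because a model with identical, independent balls never allows for a Bravo-style burst of correlated hitting.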

As Seamus Hogan, one of the inventors of WASP, had pointed out in a comment on that post, one way of solving this thin tails issue is to control for the pitch or regime, and I've incorporated that as well (using a Bayesian system to "learn" the nature of the pitch as the game goes on). Yet, I see I struggle with fat tails.

I seriously need to find a way to take into account serial correlation into my models!

That said, I must say I’m fairly kicked about the system I’ve built. Do let me know what you think of this!

Weighting indices

One of the biggest recent developments in finance has been the rise of index investing. The basic idea of indexing is that rather than trying to beat the market, a retail investor should simply invest in a “market index”, and net of fees they are likely to perform better than they would if they were to use an active manager.

Indexing has become so popular over the years that researchers at Sanford Bernstein, an asset management firm, have likened it to being "worse than Marxism". People have written dystopian fiction about "the last active manager". And so on.

And as Matt Levine keeps writing in his excellent newsletter, the rise of indexing means that the balance of power in the financial markets is shifting from asset managers to people who build indices. The context here is that because now a lot of people simply invest “in the index”, determining which stock gets to be part of an index can determine people’s appetite for the stock, and thus its performance.

So, for example, you have indexers who want to leave stocks without voting rights (such as those of SNAP) out of indices. Some other indexers want to leave extra-large companies (such as a hypothetically public Saudi Aramco) out of the index. And then there are people who believe that the way conventional indices are built is incorrect, and instead argue in favour of an "equally weighted index".

While one can theoretically just put together a bunch of stocks, call it an "index" and sell it to investors making them believe they're "investing in the index" (since that is now a thing), not every such index is a true index.

Last week, while trying to understand what the deal with "smart beta" is (a term people in the industry throw around a fair bit, though not many are clear about what it means), I stumbled upon this excellent paper by MSCI on smart beta and factor investing.

About a decade ago, the Nifty (India’s flagship index) changed the way it was computed. Earlier, stocks in the Nifty were weighted based on their overall market capitalisation. From 2009 onwards, the weights of the stocks in the Nifty are proportional to their “free float market capitalisation” (that is, the stock price multiplied by number of shares held by the “public”, i.e. non promoters).

Back then I hadn't understood the significance of the change – apart from making the necessary changes to the algorithm I was running at a hedge fund to take the new weights into account. Reading the MSCI paper made me realise the sanctity of weighting by free float market capitalisation when building an index.

The basic idea of indexing is that you don't make any investment decisions, and instead simply "follow the herd". Essentially you allocate your capital across stocks in exactly the same proportion as the rest of the market. In other words, the index needs to hold stocks in the same proportion that the broad market owns them.

And the free float market capitalisation, which is basically the total value of the stock held by “public” (or non-promoters), represents the allocation of capital by the total market in favour of the particular stock. And by weighting stocks in the ratio of their free float market capitalisation, we are essentially mimicking the way the broad market has allocated capital across different companies.
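As a sketch (all the figures below are invented), turning free float market capitalisation into index weights is a short computation:

```python
stocks = {  # made-up figures: (share price, total shares, promoter holding fraction)
    "AAA": (100.0, 1_000_000, 0.50),
    "BBB": (250.0, 400_000, 0.25),
    "CCC": (40.0, 5_000_000, 0.60),
}

def free_float_weights(stocks):
    """Index weight of a stock = its free float market cap / total free float cap."""
    ff_cap = {s: price * shares * (1.0 - promoter)
              for s, (price, shares, promoter) in stocks.items()}
    total = sum(ff_cap.values())
    return {s: cap / total for s, cap in ff_cap.items()}

weights = free_float_weights(stocks)
print(weights)
```

Notice that a stock with a huge total market cap but a large promoter holding (like "CCC" here) gets a smaller weight than raw market cap would suggest – which is the whole point of the 2009 change.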

Thus, only a broad market index that is weighted by free float market capitalisation counts as "indexing" as far as passive investing is concerned. Investing in stocks in any other combination or ratio means the investor is expressing her own views or preferences on the relative performance of stocks, different from the market's preferences.

So if you invest in a sectoral index, you are not “indexing”. If you invest in an index that is weighted differently than by free float market cap (such as the Dow Jones Industrial Average), you are not indexing.

One final point – you might wonder why indices have a finite number of stocks (such as the S&P 500 or Nifty 50) if true indexing means reflecting the market’s capital allocation across all stocks, not just a few large ones.

The reason we cut off after a point is that beyond it, the weightage of each stock becomes so low that the investment required to track the index perfectly becomes significant. And so, for a retail investor seeking to index, following the "entire market" might mean a significant "tracking error". In other words, the 50 or 500 stocks that make up the index are a good representation of the market at large, and tracking these indices, as long as they are free float market capitalisation weighted, is the same as investing without having a view.

Bond Market Liquidity and Selection Bias

I’ve long been a fan of Matt Levine’s excellent Money Stuff newsletter. I’ve mentioned this newsletter here several times in the past, and on one such occasion, I got a link back.

One of my favourite sections in Levine’s newsletter is called “people are worried about bond market liquidity”. One reason I got interested in it was that I was writing a book on Liquidity (speaking of which, there’s a formal launch function in Bangalore on the 15th). More importantly, it was rather entertainingly written, and informative as well.

I appreciated the section so much that I ended up calling one of the sections of one of the chapters of my book “people are worried about bond market liquidity”. 

In any case, Levine has outdone himself several times over in his latest instalment of worries about bond market liquidity. This one is from Friday's newsletter. I strongly encourage you to read the section on people being worried about bond market liquidity in full.

To summarise, the basic idea is that while people are generally worried about bond market liquidity, a lot of studies about such liquidity by academics and regulators have concluded that bond market liquidity is just fine. This is based on the finding that the bid-ask spread (gap between prices at which a dealer is willing to buy or sell a security) still remains tight, and so liquidity is just fine.

But the problem is that, as Levine beautifully describes, there is a strong case of selection bias. While the bid-ask spread has indeed remained tight, what this data point misses is that many trades that could otherwise have happened are not happening, and so the data comes from a very biased sample.

Levine does a much better job of describing this than I can, but there are two ways in which a banker can facilitate bond trading – either by taking possession of the bonds, in other words being a "market maker" (I have a chapter on this in my book), or by simply helping find a counterparty to the trade, thus acting like a broker (I have a chapter on brokers in my book as well).

A new paper by economists at the Federal Reserve Board confirms that the general finding that bond market liquidity is okay is affected by selection bias. The authors find that spreads are tighter (and sometimes negative) when bankers are playing the role of brokers than when they are playing the role of market makers.

In the very first chapter of my book (dealing with football transfer markets), I had mentioned that the bid-ask spread of a market is a good indicator of its liquidity – the higher the bid-ask spread, the less liquid the market.

Later on in the book, I'd also mentioned that the money an intermediary can make is again a function of how illiquid the market is.

This story about bond market liquidity puts both these assertions into question. Bond markets see tight bid-ask spreads and bankers make little or no money (as the paper linked to above says, spreads are frequently negative). Based on my book, both of these should indicate that the market is quite liquid.

However, it turns out that both the bid-ask spread and fees made by intermediaries are biased estimates, since they don’t take into account the trades that were not done.

With bankers cutting down on market making activity (see Levine’s post or the paper for more details), there is many a time when a customer will not be able to trade at all since the bankers are unable to find them a counterparty (in the pre Volcker Rule days, bankers would’ve simply stepped in themselves and taken the other side of the trade). In such cases, the effective bid-ask spread is infinity, since the market has disappeared.

Technically, this needs to be included while calculating the overall bid-ask spread. How this can actually be achieved is yet another question!
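One crude way to see the bias (all the numbers below are invented): compute the average spread the way the studies do – over completed trades only – and then look at what that average silently leaves out.

```python
# Made-up trade attempts: spread in basis points for completed trades, None
# for attempts where no counterparty was found (effective spread unbounded).
attempts = [2.0, 1.5, None, 2.5, None, 1.0, 3.0, None, 2.0, 1.5]

completed = [s for s in attempts if s is not None]
naive_avg = sum(completed) / len(completed)       # what the studies measure
fail_rate = attempts.count(None) / len(attempts)  # what selection bias hides

print(f"average spread over completed trades: {naive_avg:.2f} bps")
print(f"share of attempted trades that never happened: {fail_rate:.0%}")
```

Since a failed trade has an effectively infinite spread, a mean over all attempts isn't even well defined – which is why reporting the failure rate alongside the completed-trade spread is arguably the more honest summary.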

The (missing) Desk Quants of Main Street

A long time ago, I’d written about my experience as a Quant at an investment bank, and about how banks like mine were sitting on a pile of risk that could blow up any time soon.

There were two problems as I had documented then. Firstly, most quants I interacted with seemed to be solving maths problems rather than finance problems, not bothering if their models would stand the test of markets. Secondly, there was an element of groupthink, as quant teams were largely homogeneous and it was hard to progress while holding contrarian views.

Six years on, there has been no blowup, and in some sense banks are actually doing well (I mean, they’ve declined compared to the time just before the 2008 financial crisis but haven’t done that badly). There have been no real quant disasters (yes I know the Gaussian Copula gained infamy during the 2008 crisis, but I’m talking about a period after that crisis).

There can be many explanations for why banks have not had any quant blow-ups despite quants solving math problems and all thinking alike, but the one I'm partial to is the presence of a "middle layer".

Most of the quants I interacted with were "core" quants, in the sense that they were not attached to any sales or trading desks. Banks also typically have a large cadre of "desk quants" who are directly associated with trading teams, and who build models and help with day-to-day risk management, pricing, etc.

Since these desk quants work closely with the business, they turn out to be much more pragmatic than the core quants – they have a good understanding of the market and use the models more as guiding principles than as rules. On the other hand, they bring the benefits of quantitative models (and work of the core quants) into day-to-day business.

Back during the financial crisis, I'd jokingly suggested that other industries should hire the quants who were then surplus to Wall Street's requirements. Around the same time, DJ Patil et al came up with the concept of the "data scientist" and called it the "sexiest job of the 21st century".

And so other industries started getting their own share of quants, or "data scientists" as they are now called. Nowadays it's fashionable even for small companies, for whom data is not critical to the business, to have a data science team. Being in this profession now (I loathe calling myself a "data scientist" – I prefer "quant" or "analytics"), I've come across quite a few of those.

The problem I see with “data science” on “Main Street” (this phrase gained currency during the financial crisis as the opposite of Wall Street, in that it referred to “normal” businesses) is that it lacks the cadre of desk quants. Most data scientists are highly technical people who don’t necessarily have an understanding of the business they operate in.

Thanks to that, what I've noticed is that in most cases there is a chasm between the data scientists and the business, since they are unable to talk in a common language. As I'm prone to saying, this can go two ways – the business guys can either assume that the data science guys are geniuses and take their word as gospel, or they can totally disregard the data scientists as people who do some esoteric math and don't really understand the world. In either case, the value added is suboptimal.

It is not hard to understand why "Main Street" doesn't have a cadre of desk quants – it's because of the way the data science industry has evolved. Quant work at investment banks evolved over a long period of time – the Black-Scholes equation was proposed in the early 1970s. So quants were first recruited to work directly with the traders, and core quants (at the banks that have them) were a later addition, once banks realised that some quant functions could be centralised.

On the other hand, the whole "data science" wave has been rather sudden. The volume of data, cheap incrementally available cloud storage, easy processing and the popularity of the phrase "data science" have all grown at a furious pace in the last decade or so, and companies have scrambled to set up data teams. There has simply been no time to train people who get both the business and the data – and so the data scientists exist as appendages that are either worshipped or ignored.

Auctions of distressed assets

Bloomberg Quint reports that several prominent steel makers are in the fray for the troubled Essar Steel’s assets. Interestingly, the list of interested parties includes the promoters of Essar Steel themselves. 

The trouble with selling troubled assets or bankrupt companies is that it is hard to put a value on them. Cash flows and liabilities are uncertain, as is the value of the residual assets that the company can keep at the end of the bankruptcy process. As a result of the uncertainty, both buyers and sellers are likely to slap on a big margin to their price expectations – so that even if they were to end up overpaying (or get underpaid), there is a reasonable margin of error.

Consequently, several auctions for assets of bankrupt companies fail (an auction is always a good mechanism to sell such assets since it brings together several buyers in a competitive process and the seller – usually a court-appointed bankruptcy manager – can extract the maximum possible value). Sellers slap on a big margin of error on their asking price and set a high reserve price. Buyers go conservative in their bids and possibly bid too low.

As we have seen with the attempted auctions of the properties of Vijay Mallya (promoter of the now bankrupt Kingfisher Airlines) and Subroto Roy Sahara (promoter of the eponymous Sahara Group), such auctions regularly fail. It is the uncertainty of the value of assets that dooms the auctions to failure.

What sets apart the Essar Steel bankruptcy process is that while the company might be bankrupt, the promoters (the Ruia brothers) are not. And having run the company (albeit to the ground), they possess valuable information on the value of assets that remain with the company. And in the bankruptcy process, where neither other buyers nor sellers have adequate information, this information can prove invaluable.

When I first saw the report on Essar’s asset sale, I was reminded of the market for footballers that I talk about in my book Between the buyer and the seller. That market, too, suffers from wide bid-ask spreads on account of difficulty in valuation.

Like distressed companies, the market for footballers also sees few buyers and sellers. And what we see there is that deals usually happen at either end of the bid-ask spectrum – if the selling club is more desperate to sell, the deal happens at an absurdly low price, and if the buying club wants the deal more badly, they pay a high price for it.

I’ve recorded a podcast on football markets with Amit Varma, for the Seen and the unseen podcast.

Coming back to distressed companies, it is well known that the seller (usually a consortium of banks or their representatives) wants to sell, and is usually the more desperate party. Consequently, we can expect the deal to happen close to the bid price. A few auctions might fail if the sellers set their expectations too high (all buyers bid low since the value is uncertain), but that will only make the seller more desperate, which will bring down the price at which the deal eventually happens.

So don’t be surprised if the Ruias do manage to buy Essar Steel, and if they manage to do that at a price that seems absurdly low! The price will be low because there are few buyers and sellers and the seller is the more desperate party. And the Ruias will win the auction, because their inside information of the company they used to run will enable them to make a much better bid.

 

Shorting private markets

This is one of those things I’ll file in the “why didn’t I think of it before?” category.

The basic idea is that if you think there is a startup bubble, and that private companies (as a class) are being overvalued by investors, there exists a rather simple way to short the market – basically start your own company and sell equity to these investors!

The basic problem with shorting a market such as those for shares of privately held startups is that the shares are owned by a small set of investors, none of whom are likely to lend you stock that you can sell and buy back later. More importantly, markets in privately held stock can be incredibly illiquid, and it may take a long time indeed before the stocks move to what you think is their “right” level.

So what do you do? I'll simply let the always excellent Matt Levine provide the answer here:

We have talked a few times in the past about the difficulty of shorting unicorns: Investors can buy shares in the big venture-backed private tech companies, but they can’t sell those shares short, which arguably leads to those shares being overvalued as enthusiasts join in but skeptics are excluded. As I once said, though, “the way to profit from a bubble is by selling into it, and that people sometimes focus too narrowly on short-selling into it”: If you think that unicorns as a category are overvalued, the way to profit from that is not so much by shorting Uber as it is by founding your own dumb startup, raising a lot of money from overenthusiastic venture capitalists, paying yourself a big salary, and walking away whistling when the bubble collapses.

Same here! If you are skeptical of the ICO trend, the right thing to do is not to short all the new tokens that are coming to market. It’s to build your own token, do an initial coin offering, and walk off with the proceeds. For the sake of your own conscience, you can just go ahead and say that that’s what you’re doing, right in the ICO white paper. No one seems to mind.

Seriously! Why didn’t I think of this?

Portfolio communication

I just got a promotional message from my broker (ICICI Direct). The intention of the email is possibly to get me to log back on to the website and do some transactions – remember that the broker makes money when I transact, and buy-and-hold investors don’t make much money for them.

So the mail, which I’m sure has been crafted after getting some “data insight”, goes like this:

Here is a quick update on what is happening in the world of investments since you last visited your ICICIdirect.com investment account.
1. Your total portfolio size is INR [xxxxxx]*
2. Sensex moved up by 8.36% during this period#
3. To know more about the top performing stocks and mutual funds, click here.

While this information might be considered useful, it simply isn't enough for me to learn anything meaningful about my portfolio, let alone take any action.

It's great to know what my portfolio value is, and what the Sensex moved by in this period ("since my last logon"). A simple additional piece of information would have been how much my portfolio went up by in the same period – so that I know how I'm performing relative to the market.

And right in my email, they could’ve suggested some mutual funds and stock portfolios that I should move my money to – and given me an easy way to click through to the website/app and trade into these new portfolios using a couple of clicks.

There’s so much that can be done in the field of personal finance, in terms of how brokers and advisors can help clients invest better. And a lot of it is simple formula-based, which means it can be automated and hence done at a fairly low cost.

But then as long as the amount of money brokers make is proportional to the amount the client trades, there will always be conflicts of interest.

Asking people out and saving for retirement

As early readers of this blog might be aware, I had several unsuccessful attempts at getting into a relationship before I eventually met the person who is now my wife. Each of those early episodes had this unfailing pattern – I'd somehow decide one day that I loved someone, get obsessed with her within a short period of time, and dream of living together happily ever after.

All this would happen without my having made the least effort to figure out how to communicate my feelings to the person in question – something I was lousy at. On a couple of occasions I took a high risk strategy, simply approaching the person in question (either in person or online) and expressing my desire to possibly get into a long-term gene-propagating relationship with her.

Most times, though, I'd go full conservative. Try to make conversation. Talk about banal things. Talk about things so banal that the person would soon find me uninteresting and not want to talk to me any more – which meant I had no chance of getting into a relationship, never mind a "long-term" and "gene-propagating" one.

So recently Pinky the ladywife (who, you might remember, is a Marriage Broker Auntie) and I were talking about strategies to chat up people you are interested in (I must mention here that we used to talk about such random stuff in our early conversations as well – Pinky's ability to indulge in "arbit conversations" was key in my wanting to get into a long-term gene-propagating relationship with her).

As it happens with such conversations, I was telling stories of how I’d approach this back in the day. And we were talking about the experiences of some other people we know who are on the lookout for long-term gene-propagating relationships.

Pinky, in one of her gyaan-spouting moods, was explaining why it’s important that you DON’T have banal conversations in your early days of hitting on someone. She said it is important that you try to make the conversation interesting, and that meant talking about potentially contentious stuff. Sometimes, this would throw off the counterparty and result in failure. But if the counterparty liked the potentially contentious stuff, there was a real chance things might go forward.

I might be paraphrasing here, but what Pinky essentially said is that in the early days, you should take a high-risk strategy, but as you progress in your relationship, you should eschew risk, and become more conservative. This way, she said, you maximise the chances of getting into and staying in a relationship.

While I broadly agree with this strategy (when she first told me this, I made a mental note of why I'd never been able to properly hit on anyone in the first place), what struck me is how similar it is to saving for retirement.

There are many common formulae that financial advisors and planners use when they help clients save for retirement. While the mechanics might vary, there is a simple principle – invest in riskier securities when you are young, and progressively decrease the risk profile of your portfolio as you grow older. This way, you get to maximise the expected portfolio value at the time of retirement. Some of these investment strategies are popularly known as “glide path” strategies.
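A minimal sketch of one such glide path (the ages and allocations below are arbitrary illustrations, not advice):

```python
def equity_share(age, start_age=25, retire_age=64, start=0.90, floor=0.20):
    """Equity allocation gliding linearly from `start` at start_age to `floor` at retirement."""
    if age <= start_age:
        return start
    if age >= retire_age:
        return floor
    progress = (age - start_age) / (retire_age - start_age)
    return start - (start - floor) * progress

for age in (25, 40, 55, 64):
    print(f"age {age}: {equity_share(age):.0%} in equities")
```

The popular "hold (100 minus your age) per cent in equities" rule of thumb is the same idea in an even simpler form – high risk while young, conservative as retirement approaches.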

Apart from gene propagation, one of the purposes of getting into a long-term relationship is that there will be “someone who’ll need you, someone who’ll feed you when you’re sixty four”. Sixty four is also the time when you’re possibly planning to retire, and want to have built up a significant retirement kitty. Isn’t it incredible that the strategies for achieving both are rather similar?