Weighting indices

One of the biggest recent developments in finance has been the rise of index investing. The basic idea of indexing is that rather than trying to beat the market, a retail investor should simply invest in a “market index”, and net of fees they are likely to perform better than they would if they were to use an active manager.

Indexing has become so popular over the years that researchers at Sanford Bernstein, an asset management firm, have called it “worse than Marxism”. People have written dystopian fiction about “the last active manager”. And so on.

And as Matt Levine keeps writing in his excellent newsletter, the rise of indexing means that the balance of power in the financial markets is shifting from asset managers to people who build indices. The context here is that because now a lot of people simply invest “in the index”, determining which stock gets to be part of an index can determine people’s appetite for the stock, and thus its performance.

So, for example, you have indexers who want to leave stocks without voting rights (such as those of SNAP) out of indices. Other indexers want to leave extra-large companies (such as a hypothetically public Saudi Aramco) out of the index. And then there are people who believe that the way conventional indices are built is incorrect, and instead argue in favour of an “equally weighted index”.

While one can theoretically just put together a bunch of stocks, call the basket an “index” and sell it to investors who believe they’re “investing in the index” (since that is now a thing), not every such basket is truly an index.

Last week, while trying to understand what “smart beta” is (a term people in the industry throw around a fair bit, but one whose meaning not too many people are clear about), I stumbled upon this excellent paper by MSCI on smart beta and factor investing.

About a decade ago, the Nifty (India’s flagship index) changed the way it was computed. Earlier, stocks in the Nifty were weighted based on their overall market capitalisation. From 2009 onwards, the weights of the stocks in the Nifty have been proportional to their “free float market capitalisation” (that is, the stock price multiplied by the number of shares held by the “public”, i.e. non-promoters).

Back then I hadn’t understood the significance of the change – apart from making the necessary changes in the algorithm I was running at a hedge fund to take the new weights into account, that is. Reading the MSCI paper made me realise the sanctity of weighting by free float market capitalisation in building an index.

The basic idea of indexing is that you don’t make any investment decisions, and instead simply “follow the herd”. Essentially you allocate your capital across stocks in exactly the same proportion as the rest of the market. In other words, the index needs to track stocks in the same proportion that the broad market owns them.

And the free float market capitalisation, which is basically the total value of the stock held by “public” (or non-promoters), represents the allocation of capital by the total market in favour of the particular stock. And by weighting stocks in the ratio of their free float market capitalisation, we are essentially mimicking the way the broad market has allocated capital across different companies.
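To make the weighting mechanics concrete, here is a minimal sketch. All three stock names, prices and free-float share counts below are made up for illustration:

```python
# A toy illustration of free-float market-cap index weighting.
# All names, prices and share counts here are made up.

def free_float_weights(stocks):
    """stocks: list of (name, price, free_float_shares) tuples.
    Returns each stock's weight: its free-float market cap as a
    fraction of the total free-float market cap."""
    caps = {name: price * shares for name, price, shares in stocks}
    total = sum(caps.values())
    return {name: cap / total for name, cap in caps.items()}

weights = free_float_weights([
    ("AAA", 500.0, 2_000_000),    # free-float cap: 1.0 bn
    ("BBB", 250.0, 8_000_000),    # free-float cap: 2.0 bn
    ("CCC", 100.0, 10_000_000),   # free-float cap: 1.0 bn
])
# weights: {"AAA": 0.25, "BBB": 0.5, "CCC": 0.25}
```

A passive investor holding these three stocks in the ratio 25:50:25 is, by construction, mimicking the market’s own allocation of public capital across them.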

Thus, only a broad market index that is weighted by free float market capitalisation counts as “indexing” as far as passive investing is concerned. Investing in stocks in any other combination or ratio means the investor is expressing views or preferences on the relative performance of stocks that differ from the market’s preferences.

So if you invest in a sectoral index, you are not “indexing”. If you invest in an index that is weighted differently than by free float market cap (such as the Dow Jones Industrial Average), you are not indexing.

One final point – you might wonder why indices contain only a limited number of stocks (such as the S&P 500 or Nifty 50) if true indexing means reflecting the market’s capital allocation across all stocks, not just a few large ones.

The reason we cut off after a point is that beyond it, each stock’s weightage becomes so low that perfectly tracking the index requires a very large investment. For a retail investor seeking to index, following the “entire market” would therefore mean a significant “tracking error”. In other words, the 50 or 500 stocks that make up the index are a good representation of the market at large, and tracking these indices, as long as they are free float market capitalisation weighted, is the same as investing without having a view.
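A toy calculation (with assumed numbers) shows why the tail matters: to hold even one whole share of a stock at its exact index weight, your corpus must satisfy corpus × weight ≥ price.

```python
# Sketch: minimum corpus needed to hold at least one whole share of a
# stock at its exact index weight. The price and weight are assumed
# numbers for illustration.

def min_corpus(price, weight):
    """A proportional portfolio allocates corpus * weight to the stock;
    to buy at least one share this must be >= price."""
    return price / weight

# A stock trading at 500 with a 0.01% weight in the broad market:
corpus = min_corpus(500.0, 0.0001)
# roughly 5,000,000 -- and that is before accounting for every other
# small stock in the tail, each with its own such constraint
```

With thousands of tiny listed companies, the corpus needed to replicate the whole market exactly quickly becomes impractical for a retail investor, which is why a representative 50- or 500-stock index is used instead.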

Bond Market Liquidity and Selection Bias

I’ve long been a fan of Matt Levine’s excellent Money Stuff newsletter. I’ve mentioned this newsletter here several times in the past, and on one such occasion, I got a link back.

One of my favourite sections in Levine’s newsletter is called “people are worried about bond market liquidity”. One reason I got interested in it was that I was writing a book on Liquidity (speaking of which, there’s a formal launch function in Bangalore on the 15th). More importantly, it was rather entertainingly written, and informative as well.

I appreciated the section so much that I ended up calling one of the sections of one of the chapters of my book “people are worried about bond market liquidity”. 

In any case, Levine has outdone himself several times over in his latest instalment of worries about bond market liquidity. This one is from Friday’s newsletter. I strongly encourage you to read the section on people being worried about bond market liquidity in full.

To summarise, the basic idea is that while people are generally worried about bond market liquidity, a lot of studies about such liquidity by academics and regulators have concluded that bond market liquidity is just fine. This is based on the finding that the bid-ask spread (gap between prices at which a dealer is willing to buy or sell a security) still remains tight, and so liquidity is just fine.

But the problem is that, as Levine beautifully describes the idea, there is a strong case of selection bias. While the bid-ask spread has indeed narrowed, what this data point misses out is that many trades that could have otherwise happened are not happening, and so the data comes from a very biased sample.

Levine does a much better job of describing this than I can, but there are two ways in which a banker can facilitate bond trading: by taking possession of the bonds, in other words acting as a “market maker” (I have a chapter on market makers in my book), or by simply helping find a counterparty to the trade, thus acting as a broker (I have a chapter on brokers in my book as well).

A new paper by economists at the Federal Reserve Board confirms that the general finding that bond market liquidity is okay is affected by selection bias. The authors find that spreads are tighter (and sometimes negative) when bankers are playing the role of brokers than when they are playing the role of market makers.

In the very first chapter of my book (dealing with football transfer markets), I had mentioned that the bid-ask spread of a market is a good indicator of its liquidity – the higher the bid-ask spread, the less liquid the market.

Later on in the book, I’d also mentioned that the money an intermediary can make is again a function of how liquid the market is.

This story about bond market liquidity puts both these assertions into question. Bond markets see tight bid-ask spreads and bankers make little or no money (as the paper linked to above says, spreads are frequently negative). Based on my book, both of these should indicate that the market is quite liquid.

However, it turns out that both the bid-ask spread and fees made by intermediaries are biased estimates, since they don’t take into account the trades that were not done.

With bankers cutting down on market making activity (see Levine’s post or the paper for more details), there are many times when a customer is not able to trade at all, since the bankers cannot find a counterparty (in the pre-Volcker Rule days, bankers would’ve simply stepped in and taken the other side of the trade themselves). In such cases, the effective bid-ask spread is infinite, since the market has disappeared.

Technically this needs to be included while calculating the overall bid-ask spread. How this can actually be achieved is yet another question!
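A toy illustration of the selection bias (with made-up data): if we average spreads only over trades that actually happened, the failed trades – whose effective spread is unbounded – never enter the statistic, and the market looks healthier than it is.

```python
# Sketch: averaging bid-ask spreads only over completed trades
# understates illiquidity, because attempted trades that found no
# counterparty (effective spread "infinite") are silently dropped.
# The data below is made up for illustration.

# Each attempted trade: the spread at which it was done,
# or None if no counterparty was found and it never happened.
attempted = [0.02, 0.01, None, 0.03, None, 0.02, None, 0.01]

done = [s for s in attempted if s is not None]
naive_avg_spread = sum(done) / len(done)
# ~0.018 -- looks tight, so "liquidity is fine"

failure_rate = attempted.count(None) / len(attempted)
# 0.375 -- but 3 of 8 attempted trades never happened at all
```

One simple way to avoid the bias is to report the failure rate alongside the spread, rather than trying to fold infinite spreads into an average.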

The (missing) Desk Quants of Main Street

A long time ago, I’d written about my experience as a Quant at an investment bank, and about how banks like mine were sitting on a pile of risk that could blow up any time soon.

There were two problems as I had documented then. Firstly, most quants I interacted with seemed to be solving maths problems rather than finance problems, not bothering if their models would stand the test of markets. Secondly, there was an element of groupthink, as quant teams were largely homogeneous and it was hard to progress while holding contrarian views.

Six years on, there has been no blowup, and in some sense banks are actually doing well (I mean, they’ve declined compared to the time just before the 2008 financial crisis but haven’t done that badly). There have been no real quant disasters (yes I know the Gaussian Copula gained infamy during the 2008 crisis, but I’m talking about a period after that crisis).

There can be many explanations for how banks have avoided quant blow-ups despite quants solving math problems and all thinking alike, but the one I’m partial to is the presence of a “middle layer”.

Most of the quants I interacted with were “core” in the sense that they were not attached to any sales or trading desks. Banks also typically had a large cadre of “desk quants” who are directly associated with trading teams, and who build models and help with day-to-day risk management, pricing, etc.

Since these desk quants work closely with the business, they turn out to be much more pragmatic than the core quants – they have a good understanding of the market and use the models more as guiding principles than as rules. On the other hand, they bring the benefits of quantitative models (and work of the core quants) into day-to-day business.

Back during the financial crisis, I’d jokingly suggested that other industries should hire the quants who were now surplus to Wall Street’s requirements. Around the same time, DJ Patil et al came up with the concept of the “data scientist” and called it the “sexiest job of the 21st century”.

And so other industries started getting their own share of quants, or “data scientists” as they were now called. Nowadays it’s fashionable even for small companies, for whom data is not critical to the business, to have a data science team. Being in this profession now (I loathe calling myself a “data scientist” – I prefer to say “quant” or “analytics”), I’ve come across quite a few of those.

The problem I see with “data science” on “Main Street” (this phrase gained currency during the financial crisis as the opposite of Wall Street, in that it referred to “normal” businesses) is that it lacks the cadre of desk quants. Most data scientists are highly technical people who don’t necessarily have an understanding of the business they operate in.

Thanks to that, what I’ve noticed is that in most cases there is a chasm between the data scientists and the business, since they are unable to talk in a common language. As I’m prone to saying, this can go two ways – the business guys can either assume that the data science guys are geniuses and take their word as gospel, or they can totally disregard the data scientists as people who do some esoteric math and don’t really understand the world. In either case, the value added is suboptimal.

It is not hard to understand why “Main Street” doesn’t have a cadre of desk quants – it’s because of the way the data science industry has evolved. Quant work at investment banks evolved over a long period of time – the Black-Scholes equation was proposed in the early 1970s. So quants were first recruited to work directly with the traders, and core quants (at the banks that have them) were a later addition, once banks realised that some quant functions could be centralised.

On the other hand, the whole “data science” wave has been rather sudden. The volume of data, cheap incrementally available cloud storage, easy processing and the popularity of the phrase “data science” have all grown at a far faster rate in the last decade or so, and companies have scrambled to set up data teams. There has simply been no time to train people who get both the business and the data – and so the data scientists exist like addendums that are either worshipped or ignored.

Direct listing

So it seems like Swedish music streaming company Spotify is going to do a “direct listing” on the markets. Here is Felix Salmon on why that’s a good move for the company. And in this newsletter, Matt Levine (a former Equity Capital Markets banker) talks about why it’s not.

In a traditional IPO, a company raises money from the “public” in exchange for fresh shares. A few existing shareholders usually cash out at the time of the IPO (offering their shares in addition to the new ones that the company is issuing), but IPOs are primarily a capital raising exercise for the company.

Now, pricing an IPO is tricky business since the company hasn’t been traded yet, and so a company has to enlist investment bankers who, using their experience and investor relations, will “price” the IPO and take care of distributing the fresh stock to new investors. Bankers also typically “underwrite” the IPO, by guaranteeing to buy at the IPO price in case investor demand is low (this almost never happens – pricing is done keeping in mind what investors are willing to pay). I’ve written several posts on this blog on IPO pricing, and here’s the latest (with links to all previous posts on the topic).

In a “direct listing”, no new shares of the company are issued; the existing stock simply gets listed on an exchange. It is up to existing shareholders (including employees) to sell stock in order to create action on the exchange. In that sense, it is not a capital raising exercise, but more of an opportunity for shareholders to cash out.

The problem with direct listing is that it can take a while for the market to price the company. When there is an IPO, and shares are allotted to investors, a large number of these allottees want to trade the stock on the day it is listed, and that creates activity in the stock, and an opportunity for the market to express its opinion on the value of the company.

In the case of a direct listing, since only a bunch of insiders have stock to sell, trading volumes in the first few days might be low, and it may take time for the real value to be discovered. There is also a chance that the stock might be highly volatile until this price is discovered (all an IPO does is compress this time rather significantly).

One reason why Spotify is doing a direct listing is because it doesn’t need new capital – only an avenue to let existing shareholders cash out. The other reason is that the company recently raised capital, and there appears to be a consensus that the valuation at which it was raised – $13 billion – is fair.

Since the company raised capital only recently, the price at which this round of capital was raised will be anchored in the minds of investors, both existing and prospective. Existing shareholders will expect to cash out their shares at a price that leads to this valuation, and new investors will use this valuation as an anchor to place their initial bids. As a result, it is unlikely that the volatility in the stock in initial days of trading will be as high as analysts expect.

In one sense, by announcing it will go public soon after raising its last round of private investment, Spotify has decoupled its capital raising process from the going-public process, while keeping the two close enough that the price anchor effects are not lost. If things go well (i.e. stock volatility is low in the initial days), the company might just be setting a trend!

People are worried about investment banker liquidity 

This was told to me by an investment banker I met a few days back, who obviously doesn’t want to be named. But just as Matt Levine writes about people being worried about bond market liquidity, there is a similar worry about the liquidity of the market for investment bankers.

And once again it has to do with regulations introduced in the aftermath of the 2008 global financial crisis – specifically, the European requirement that bankers’ bonuses not all be paid immediately, but instead be deferred and amortised over a few years.

While good in spirit, the regulation has meant that bankers no longer look to move banks. This is because each successful (and thus well paid) banker has a stock of deferred compensation that will be lost in case of a job change.

This means that any bank looking to hire one such banker will have to compensate for all the deferred compensation in terms of a really fat joining bonus. And banks are seldom willing to pay such a high price. 
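A rough back-of-envelope (under an assumed vesting convention, purely for illustration) shows how large this stock of deferred pay gets:

```python
# Sketch: steady-state stock of unvested deferred bonus. Assumption:
# each annual bonus vests in equal tranches over the next n years,
# and we measure just after the latest award. The exact convention
# varies by bank; this is illustrative only.

def unvested_stock(annual_bonus, n):
    """The bonus awarded t years ago still has (n - t)/n of its value
    unvested, for t = 0 .. n-1; sum over all outstanding awards."""
    return sum(annual_bonus * (n - t) / n for t in range(n))

# A banker paid a bonus of 100 each year, deferred over 4 years,
# walks away from this much unvested pay if she changes banks:
buyout = unvested_stock(100.0, 4)  # 250.0
```

So under these assumptions, a hiring bank must stump up roughly two and a half years’ worth of bonus as a joining award just to make the banker whole, before offering any actual incentive to move.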

And so the rather vibrant and liquid market for investment bankers in Europe has suddenly gone quiet. Interbank moves are few and far between – with deferred compensation at stake, banks look to hire internally instead.

And with fewer bankers moving out, there are fewer openings for banker jobs. Which has led to even fewer bankers looking to move. Basically it’s a vicious cycle of falling liquidity!

Which is not good news for someone like me who’s just moved into London and looking for a banking job!

PS: Speaking of liquidity, I have a book on market design and liquidity coming out next month or the month after. It’s in the publication process right now. More on that soon!

May a thousand market structures bloom

In my commentary on SEBI’s proposal to change the regulations of Indian securities markets in order to allow new kinds of market structures, I had mentioned that SEBI should simply enable exchanges to apply whatever market structures they wanted to apply, and let market participants sort out, through competition and pricing, what makes most sense for them.

This way, different stock exchanges in India can pick and choose their favoured form of regulation, and the market (and market participants) can decide which form of regulation they prefer. So you might have the Bombay Stock Exchange (BSE) going with order randomisation, while the National Stock Exchange (NSE) might use batch auctions. And individual participants might migrate to the platform of their choice.

Now, Matt Levine, who has been commenting on market structures for a long time now, makes a similar case in his essay on the Chicago Stock Exchange’s newly introduced “speed bump”:

A thousand — or at least a dozen — market structures can bloom, each subtly optimized for a different type of trader. It’s an innovative and competitive market, in which each exchange can figure out what sorts of traders it wants to favor, and then optimize its speed bumps to cater to those traders.

Maybe I should now accuse Levine of “borrowing” my ideas without credit! 😛

 

Regulating HFT in India

The Securities and Exchange Board of India (SEBI) has set a cat among the HFT (High Frequency Trading) pigeons by proposing seven measures to curb the impact of HFT and improve “real liquidity” in the stock markets.

The big problem with HFT is that algorithms tend to cancel lots of orders – there might be a signal to place an order, and even before the market has digested that order, the order might get cancelled. This results in an illusion of liquidity, while the constant placing and removal of liquidity fucks with the minds of the other algorithms and market participants.

There has been a fair amount of research worldwide, and SEBI seems to have drawn from all of it to propose as many as seven measures:

- a minimum resting time between HFT orders
- matching orders through frequent batch auctions rather than through the order book
- introducing random delays (IEX style) for orders
- randomising the order queue periodically
- capping the order-to-trade ratio
- creating separate queues for orders from co-located servers (used by HFT algorithms)
- reviewing the provision of the tick-by-tick data feed
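To make one of these measures concrete: a frequent batch auction collects orders over a short interval and clears them all at a single price that maximises matched volume, removing the speed race within the interval. A minimal sketch, with made-up orders:

```python
# Sketch: a single round of a frequent batch auction. Orders are
# accumulated over an interval, then cleared at one uniform price that
# maximises executable volume. Order data below is made up.

def batch_auction_price(bids, asks):
    """bids/asks: lists of (limit_price, quantity).
    Returns (clearing_price, matched_volume), or (None, 0) if no
    price produces any matches."""
    candidate_prices = sorted({p for p, _ in bids + asks})
    best_price, best_vol = None, 0
    for p in candidate_prices:
        demand = sum(q for bp, q in bids if bp >= p)  # buyers ok at p
        supply = sum(q for ap, q in asks if ap <= p)  # sellers ok at p
        vol = min(demand, supply)
        if vol > best_vol:
            best_price, best_vol = p, vol
    return best_price, best_vol

price, volume = batch_auction_price(
    bids=[(101, 10), (100, 20), (99, 30)],
    asks=[(98, 15), (100, 25), (102, 10)],
)
# clears at 100, with 30 units matched
```

Since everyone in the batch trades at the same price, shaving microseconds off order submission within the interval confers no advantage, which is precisely the point of the proposal.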

While the proposal seems sound and well researched (in fact, too well researched, picking up just about every proposal to regulate stock markets), the problem is that there are so many proposals, many of which are mutually incompatible.

As the inimitable Matt Levine commented,

If you run batch auctions and introduce random delays and reshuffle the queue constantly, you are basically replacing your matching engine with a randomizer. You might as well just hold a lottery for who gets which stocks, instead of a market.

My opinion on this is that SEBI shouldn’t mandate how each exchange matches its orders. Instead, SEBI should simply enable individual exchanges to regulate their markets in a way they see fit. So it is possible that all the above proposals go through (though I’m personally uncomfortable with some of them, such as queue randomisation), but rather than mandating that exchanges pick all of them, SEBI simply allows each exchange to use zero or more of them.

This way, different stock exchanges in India can pick and choose their favoured form of regulation, and the market (and market participants) can decide which form of regulation they prefer. So you might have the Bombay Stock Exchange (BSE) going with order randomisation, while the National Stock Exchange (NSE) might use batch auctions. And individual participants might migrate to the platform of their choice.

The problem with this, of course, is that there are only two stock exchanges of note in India, and it is unclear if the depth in the Indian equities market will permit too many more. This might lead to limited competition between bad methods (the worst case scenario), leading to horrible market inefficiencies and the scaremongers’ pet threat of trading shifting to exchanges in Singapore or Dubai actually coming true!

The other problem with different exchanges having different mechanisms is that large institutions and banks might find it difficult to build systems that can trade accurately on all exchanges, and arbitrage opportunities across exchanges might exist for longer than they do now, leading to market inefficiency.

Then again, it’s interesting to see how a “let exchanges do what they want” approach might work. In the United States, there is a new exchange called the Investors Exchange (IEX) that places “speed bumps” on incoming orders, thus reducing the advantage of HFTs. IEX started only recently, after major objections from incumbents who alleged it would make markets less fair.

With IEX having started, however, other exchanges are responding in their own ways to make the markets “fairer” to investors. NASDAQ, which had vehemently opposed IEX’s application, has now filed a proposal to reward orders from investors who wait for at least one second before cancelling them.

Surely, large institutions won’t like it if this proposal goes through, but this gives you a flavour of what competition can do! We’ll have to wait and see what SEBI does now.