Monetising the side bets

If you were to read Matt Levine’s excellent newsletter regularly, you might hypothesize that the market for Credit Default Swaps (CDS) is dying. Every other day, we see news of engineered defaults (companies being asked to default by CDS holders in exchange for cheap loans in the next round), transfers of liability from one legal entity to another (parent to subsidiary or vice versa), “orphaning” of CDSs (where one group company pays off debt belonging to another), and so on.

So what was once a mostly straightforward instrument (I pay you a regular stream of money, and you pay me a lump sum if the specified company defaults) has now become an intensely legalistic product. From what seemed like a clever way to hedge out the default risk of a loan (or a basket of loans), CDSs have become an over-lawyered product of careful clauses, letters and spirits, where traders try to manipulate the market they are betting on (if stuff like orphaning or engineered defaults were to happen in sports, punters would get arrested for match-fixing).

One way to think of it is that it was a product that got too clever, and now that people are figuring out a way to set that right, the market will soon disappear. If you were to follow this view, you would think that ordinary credit traders (well, most credit traders work for large banks or hedge funds, so I’m not sure this category exists) will stop trading CDSs and the market will die.

Another way to think about it is that these over-legalistic implications of CDSs are a way by the issuer of the debt to make money off all the side bets that happen on that debt. You can think about this in terms of horse racing.

Horse breeding is largely funded by revenues from bets. Every time there is a race, there is heavy betting (this is legal in most countries), and a part of the “rent” that the house collects from these bets is shared with the owners of the horses (in the form of prizes and participation fees). And this revenue stream (from side bets on which horse is better, essentially) completely funds horse rearing.

CDSs were a product invented to help holders of debt to transfer credit risk to other players who could hedge the risk better (by diversifying the risk, owning opposite exposures, etc.). However, over time they got so popular that on several debt instruments, the amount of CDSs outstanding is a large multiple of the total value of the debt itself.

This is a problem, as we saw during the 2008 financial crisis, when these side bets rapidly amplified the impact of mortgage defaults. Moreover, the market in CDSs has no impact whatsoever on the companies that issued the debt – they can see what the market thinks of their creditworthiness but have no way to profit from these side bets.

And that is where engineered defaults come in – they present a way for debt issuers to actually profit from all the side bets. By striking a deal with CDS owners, issuers are able to convert some of the benefits of their own defaults into cheaper rates in the next round of funding. Even orphaning of debt and transfers between group companies are done in consultation with CDS holders – people the company ordinarily should have nothing to do with.

The market for CDSs is very different from ordinary sports betting markets – there are no “unsophisticated players”, so it is unclear if anyone can be punished for match-fixing. All the turmoil in the CDS market can thus be looked at in the same way as horse rearing – an activity being funded by “side bets”.

Advertising Agencies: From Brokers to Dealers

The Ken, where I bought a year-long subscription today, has a brilliant piece on the ad agency business in India (paywalled). More specifically, the piece is about pricing in the industry and how it is moving from a commissions-only basis to a more mixed model.

Advertising agencies perform a dual role for their clients. Apart from advising them on advertising strategy and helping them create the campaigns, they are also in charge of execution and buying the advertising slots – either in print or television or hoardings (we’ll leave online out since the structure there is more complicated).

As far as the latter business (acquisition of slots to place the ad – commonly known as “buying”) is concerned, agencies have typically operated on a commission basis, with the fee being about 2.5% of the value of the inventory bought.

In financial markets parlance, advertising agencies have traditionally operated as brokers, buying inventory on behalf of their clients and charging a fee for it. The thrust of Ashish Mishra’s piece in The Ken is that agencies are moving away from this model – and instead becoming what is known in financial markets as “dealers”.

Dealers, also known as market makers, make their money by taking the other side of the trade from the client. So if a client wants to buy IBM stock, the dealer is always available to sell it to her.

The dealer makes money by buying low and selling high – buying from people who want to sell and selling to people who want to buy. Their income is in the spread, and it is risky business, since they bear the risk of not being able to offload inventory they have had to buy. They hedge this risk by pricing – the harder they think it is to offload inventory, the wider they set the spreads.
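To make that pricing logic concrete, here is a minimal sketch of how a dealer might widen the quoted spread as inventory becomes harder to offload. The numbers and the square-root rule below are my own illustrative assumptions, not anything from the Ken piece:

```python
def dealer_quote(mid_price, expected_days_to_offload, daily_volatility, base_margin=0.005):
    """Quote bid and ask prices around a mid price.

    The half-spread grows with how long the dealer expects to sit on the
    inventory and how much its value can move in that time -- a stylised
    version of "the harder it is to offload, the wider the spread".
    All numbers here are illustrative assumptions.
    """
    holding_risk = daily_volatility * (expected_days_to_offload ** 0.5)
    half_spread = mid_price * (base_margin + holding_risk)
    return mid_price - half_spread, mid_price + half_spread

# Fast-moving inventory gets a tight quote; slow-moving inventory gets a wide one.
print(dealer_quote(100.0, expected_days_to_offload=1, daily_volatility=0.01))
print(dealer_quote(100.0, expected_days_to_offload=30, daily_volatility=0.01))
```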

Similarly, going by the Ken story, what ad agencies are nowadays doing is buying inventory from media companies and then selling it on to their clients, making money on the spread. And clients aren’t taking too well to this new situation, subjecting these dealers (the ad agencies) to audits.

From a market design perspective, there is nothing wrong in what the ad agencies are doing. The problem is due to their transition from brokers to dealers, and their clients not coming to terms with the fact that dealers don’t normally have a fiduciary responsibility towards their clients (unlike brokers who represent their clients). There are also local monopoly issues.

The main service that a dealer performs is to take the other side of the trade. The usual mechanism is that the dealer quotes the prices (both buy and sell) and then the client has the option to trade. If the client feels the dealer is ripping her off, she has a chance to not do the deal.

And in this kind of a situation, the price at which the dealer obtained the inventory is moot – all that matters to the deal is the price that the dealer is willing to sell to the client at, and the price that competing dealers might be charging.

So when clients of ad agencies demand that they get the inventory at the same price at which the agencies got it from the media, they are effectively asking for “retail goods at wholesale rates” and refusing to respect the risk that the dealers might have taken in acquiring the inventories (remember the ad agencies run the risk of inventories going unsold if they price them too high).

The reason for the little turmoil in the ad agency industry is that it is an industry in transition – where the agencies are moving from being brokers to being dealers, and clients are in the process of coming to terms with it.

And from one quote in the article (paywalled, again), it seems like the industry might as well move completely to a dealer model from the current broker model.

Clients who are aware are now questioning the point of paying a commission to an agency. “The client’s rationale is that it is my money that is being spent. And on that you are already making money as rebate, discount, incentive and reselling inventory to me at a margin, so why do I need to pay you any agency commissions? Some clients have lost trust in their agencies owing to lack of transparency,” says Sodhani.

Finally, there is the issue of monopoly. Dealers work best when there is competition – the clients need to have an option to walk away from the dealers’ exorbitant prices. And this is a bit problematic in the advertising world since agencies act as their clients’ brokers elsewhere in the chain – planning, creating ads, etc.

However, the financial industry has dealt with this problem – most large banks function as both brokers and dealers. It’s only a matter of time before the advertising world goes down that path as well.

PS: you can read more about brokers and dealers and marketplaces and platforms in my book Between the Buyer and the Seller.

Suckers still exist

Matt Levine’s latest newsletter describes a sucker of a trade:

 

  1. You give Citigroup Inc. $1,000, when Amazon.com’s stock is at $1,339.60.
  2. At the end of each quarter for the next three years, Citi looks at Amazon’s stock price. If it’s at or below $1,339.60, Citi sends you $25 and the trade continues. If it’s above $1,339.60, Citi sends you back your $1,000 and the trade is over.
  3. At the end of the three years, Citi looks at Amazon’s stock price. If it’s above $1,004.70 (75 percent of the initial stock price), then Citi sends you $1,025 and the trade is over. But if it’s below $1,004.70, you eat the full amount of the loss: For instance, if Amazon’s stock price is $803.80 (60 percent of the initial stock price), then you lose 40 percent of your money, and get back only $600. Citi keeps the rest. (You get to keep all the premiums, though.)

Anyone with half a brain should know that this is not a great trade.

For starters, it gives the client (usually a hedge fund or a pension fund or someone who represents rich guys) a small, capped upside (of 10% per year for three years), while exposing them to a large downside – the full extent of Amazon’s fall – if the stock lost over 25% over the three years.

Then, the trade has a “knock out” (gets unwound with Citigroup paying back the client the principal) clause, with the strike price of the knockout being exactly the Amazon share price on the day the contract came into force. And given that Amazon has been on a strong bull run for a while now, it seems like a strange price at which to put a knock out clause. In other words, there is a high probability that the trade gets “knocked out” soon after it comes into existence, with the client having paid up all the transaction costs (3.5% of the principal in fees).
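For a rough illustration of the asymmetry and the knock-out mechanics, here is a sketch of the note’s cashflows following the three quoted steps. The coupon, barrier and knock-out levels come from the quote above; the function itself is my simplification, not Citi’s actual term sheet:

```python
def note_cashflows(quarterly_prices, principal=1000.0, initial=1339.60,
                   barrier=1004.70, coupon=25.0):
    """Cashflows to the investor, following the three quoted steps.

    quarterly_prices: Amazon's price at the end of each of the 12 quarters.
    A sketch of the structure as Levine describes it, not Citi's term sheet.
    """
    flows = []
    # Step 2: quarterly knock-out check for the first 11 quarters.
    for price in quarterly_prices[:-1]:
        if price > initial:
            flows.append(principal)   # knocked out: principal back, trade over
            return flows
        flows.append(coupon)          # at or below initial: collect the coupon
    # Step 3: at the end of three years.
    final = quarterly_prices[-1]
    if final >= barrier:
        flows.append(principal + coupon)           # capped upside: $1,025
    else:
        flows.append(principal * final / initial)  # investor eats the full loss
    return flows

# A 40% fall at maturity returns only ~$600 on top of the coupons collected,
# while three years of good behaviour caps out at the coupons.
print(sum(note_cashflows([1300.0] * 11 + [803.80])))   # ~875 = 11 coupons + ~600
print(sum(note_cashflows([1300.0] * 12)))              # 1300 = 11 coupons + 1025
# A rise above the initial price in the first quarter knocks the note out immediately.
print(sum(note_cashflows([1400.0] + [1300.0] * 11)))   # 1000
```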

Despite this being such a shitty deal, Levine reports that Citigroup sold $16.3 million worth of these “notes”. While that is not a large amount, it is significant that nearly ten years after the financial crisis, there are still suckers out there, whom clever salespersons in investment banks can con into buying such shitty notes. It seems institutional memory is short (or these clients are located in states in the US where marijuana is legal).

I mean, who even buys structured notes nowadays?

PS: Speaking of suckers, I recently got to know of the existence of a school in Mumbai named “Our Lady of Perpetual Succour“. Splendid.

A one in billion trillion event

It seems like capital markets quants have given up on the lognormal model for good, for nobody described Facebook’s stock price drop last Thursday as a “one in a billion trillion event”. For that is the approximate probability of it happening, if we were to assume a lognormal model of the market.

Created using Quantmod package. Data from Yahoo.

We will use 90 days of trailing data to calculate the mean and volatility of stock returns. As of last Thursday (the day of the fall), the daily mean return for FB was 0.204%, or an annualised return of 51.5% (as you can see, very impressive!). The daily volatility in the stock (using the same 90-day lookback period) was 1.98%, or an annualised volatility of 31.4%. While that is a tad on the higher side, it is okay considering the annual return of 51.5%.

Now, traditional quantitative finance models have all used a lognormal distribution to represent stock prices, which implies that the distribution of stock price returns is normal. Under such an assumption, the likelihood of an 18.9% drop in the value of Facebook (which is what we saw on Thursday) is very small indeed.

In fact, to be precise, when the stock is returning 0.204% per day with a vol of 1.98% per day, an 18.9% drop is a 9.7 sigma event. In other words, if the distribution of returns were normal, Thursday’s drop is nearly ten sigmas away from the mean. Remember that most quality control systems (admittedly in industrial settings, where faults are indeed governed by a nearly normal distribution) are set for a six sigma limit.

Another way to look at Thursday’s 9.7 sigma event is that, again under the normal distribution, the likelihood of seeing this kind of a fall in a day is of the order of $10^{-21}$. Or one in a billion trillion. In terms of the number of trading days required for such a fall to arrive at random, it is of the order of a billion billion years, which is many orders of magnitude longer than the age of the universe!
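For anyone who wants to check the arithmetic, here is a quick sketch using the numbers quoted above (the exact figures move around a little depending on whether you use simple or log returns, and on rounding):

```python
from scipy.stats import norm

mu = 0.204      # daily mean return of FB, in per cent (90-day trailing, as above)
sigma = 1.98    # daily volatility, in per cent
drop = -18.9    # Thursday's move, in per cent

z = (drop - mu) / sigma   # close to ten standard deviations below the mean
p = norm.cdf(z)           # probability of a move at least this bad, if returns were normal

print(round(abs(z), 2))   # roughly 9.6-9.7 sigmas, depending on rounding
print(p)                  # a vanishingly small tail probability, of the order of 1e-22 to 1e-21
print((1 / p) / 252)      # trading days converted to years: billions of billions of years
```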

In fact, when the 1987 stock market crash (Black Monday) happened, this was the defence the quants gave for losing their banks’ money – that it was an incredibly improbable event. Now, my reading of the papers nowadays is sketchy, and I mostly consume news via Twitter, but I haven’t heard a single such defence from quants who lost money in the Facebook crash. In fact, I haven’t come across too many stories of people who lost money in the crash.

Maybe it’s the power of diversification, and maybe indexing, because of which Facebook is now only a small portion of people’s portfolios. A 20% drop in a stock that is even 10% of your portfolio erodes your wealth by 2%, which is tolerable. What possibly caused traders to jump out of windows on Black Monday was that it was a secular drop in the US market then.

Or maybe it’s that the lessons learnt from Black Monday have been internalised, and included in models 30 years hence (remember that concepts such as volatility smiles and skews, and stochastic volatility, were introduced in the wake of the 1987 crash).

That a 20% drop in one of the five biggest stocks in the United States didn’t make for “human stories” or stories about a “one in a billion trillion event” is itself a story! Or maybe my reading of the papers is heavily biased!

PostScript

Even after the spectacular drop, the Facebook stock at the time of this update is trading at 168.25, a level last seen exactly 3 months ago – on 26th April, following Facebook’s previous quarterly results. That barely 3 months’ worth of gains have been wiped out by such a massive crash suggests that the only people to have lost from the crash are traders who wrote out-of-the-money puts.

Bankers predicting football

So the Football World Cup season is upon us, and this means that investment banking analysts are again engaging in the pointless exercise of trying to predict who will win the World Cup. And the funny thing this time is that, thanks to MiFID II regulations, which prevent banking analysts from giving out reports for free, these reports aren’t in the public domain.

That means we’ve to rely on media reports of these reports, or on people tweeting insights from them. For example, the New York Times has summarised the banks’ predictions on the winner. And this scatter plot from Goldman Sachs will go straight into my next presentation on spurious correlations:

Different banks have taken different approaches to predict who will win the tournament. UBS has stuck with a classic Monte Carlo simulation approach, but Goldman Sachs has gone one better and used “four different methods in artificial intelligence” to predict (for the third consecutive time) that Brazil will win the tournament.

In fact, Goldman also uses a Monte Carlo simulation, as Business Insider reports.

The firm used machine learning to run 200,000 models, mining data on team and individual player attributes, to help forecast specific match scores. Goldman then simulated 1 million possible variations of the tournament in order to calculate the probability of advancement for each squad.

But an insider in Goldman with access to the report tells me that they don’t use the phrase “Monte Carlo” itself in the report. Maybe it’s a suggestion that “data scientists” have taken over the investment research division at the expense of quants.

I’m also surprised by the reporting on Goldman’s predictions. Everyone simply reports that “Goldman predicts that Brazil will win”, but surely (based on the model they’ve used) that prediction has been made with a certain probability? A better way of reporting would’ve been to say “Goldman predicts Brazil most likely to win, with X% probability” (and the bank’s bets desk in the UK could have placed some money on it).
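The banks’ actual models aren’t public, but the broad shape of such an exercise is easy to sketch. Below is a toy version – the team strengths are numbers I made up, the Poisson goal model is a standard simplification, and the tournament is reduced to a four-team knockout – that shows how a statement like “Brazil most likely to win, with X% probability” drops straight out of a Monte Carlo simulation:

```python
import numpy as np

# Made-up attacking strengths (expected goals per match) -- purely illustrative,
# not anything from the banks' reports.
STRENGTH = {"Brazil": 1.9, "Germany": 1.8, "France": 1.75, "Spain": 1.7}

rng = np.random.default_rng(0)

def winner(team_a, team_b):
    """One knockout match: Poisson goals for each side, coin flip if level
    (a crude stand-in for extra time and penalties)."""
    goals_a = rng.poisson(STRENGTH[team_a])
    goals_b = rng.poisson(STRENGTH[team_b])
    if goals_a == goals_b:
        return team_a if rng.random() < 0.5 else team_b
    return team_a if goals_a > goals_b else team_b

def simulate_tournament():
    """A four-team knockout: two semi-finals and a final."""
    finalist_1 = winner("Brazil", "Spain")
    finalist_2 = winner("Germany", "France")
    return winner(finalist_1, finalist_2)

n = 100_000
titles = {team: 0 for team in STRENGTH}
for _ in range(n):
    titles[simulate_tournament()] += 1

for team, count in sorted(titles.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {count / n:.1%}")   # e.g. "Brazil most likely to win, with X% probability"
```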

ING went rather simple with their forecasts – they simply took players’ transfer values, summed them up by team, and concluded that Spain is most likely to win because their squad is the “most valued”. Now, I have two major questions about this approach – firstly, it ignores the “correlation term” (remember the famous England conundrum of the noughties of fitting Gerrard and Lampard into the same eleven?), and assumes a set of strong players is a strong team. Secondly, have they accounted for inflation? And if so, how have they accounted for it? Player valuation (about which I have a chapter in my book) has simply gone through the roof in the last year, with Mo Salah at £35 million being considered a “bargain buy”.

Nomura also seems to have taken a similar approach, though they have in some ways accounted for the correlation term by including “team momentum” as a factor!

Anyway, I look forward to the football! That it is live on BBC and ITV means I get to watch the tournament from the comfort of my home (a luxury in England!). Also being in England means all matches are at a sane time, so I can watch more of this World Cup than the last one.

 

A banker’s apology

Whenever there is a massive stock market crash, like the one in 1987, or the crisis in 2008, it is common for investment banking quants to talk about how it was a “1 in zillion years” event. This is on account of their models that typically assume that stock prices are lognormal, and that stock price movement is Markovian (today’s movement is uncorrelated with tomorrow’s).

In fact, a cursory look at recent data shows that what models show to be a one in zillion years event actually happens every few years, or decades. In other words, while quant models do pretty well in the average case, they have thin “tails” – they underestimate the likelihood of extreme events, leading to risk quietly building up.

When I decided to end my (brief) career as an investment banking quant in 2011, I wanted to take the methods that I’d learnt into other industries. While “data science” might have become a thing in the intervening years, there is still a lot for conventional industry to learn from banking in terms of using maths for management decision-making. And this makes me believe I’m still in business.

And like my former colleagues in investment banking quant, I’m not immune to the fat tail problem either – replicating solutions from one domain into another can replicate the problems as well.

For a while now I’ve been building what I think is a fairly innovative way to represent a cricket match. Basically you look at how the balance of play shifts as the game goes along. So the representation is a line graph that shows where the balance of play was at different points of time in the game.

This way, you have a visualisation that at one shot tells you how the game “flowed”. Consider, for example, last night’s game between Mumbai Indians and Chennai Super Kings. This is what the game looks like in my representation.

What this shows is that Mumbai Indians got a small advantage midway through the innings (after a short blast by Ishan Kishan), which they held through their innings. The game was steady for about 5 overs of the CSK chase, when some tight overs created pressure that resulted in Suresh Raina getting out.

Soon, Ambati Rayudu and MS Dhoni followed him to the pavilion, and MI were in control, with CSK losing 6 wickets in the course of 10 overs. When they lost Mark Wood in the 17th Over, Mumbai Indians were almost surely winners – my system reckoning that 48 to win in 21 balls was near-impossible.

And then Bravo got into the act, putting on 39 in 10 balls with Imran Tahir watching at the other end (including taking 20 off a Mitchell McClenaghan over, and 20 again off a Jasprit Bumrah over at the end of which Bravo got out). And then a one-legged Jadhav came, hobbled for 3 balls and then finished off the game.

Now, while the shape of the curve above is representative of what happened in the game, I think it went too close to the axes. 48 off 21 with 2 wickets in hand is not easy, but it’s not a 1% probability event (as my graph depicts).

And looking into my model, I realise I’ve made the familiar banker’s mistake – of assuming independence and the Markovian property. I calculate the probability of a team winning using a method called “backward induction” (which I’d learnt during my time as an investment banking quant). It’s the same method that WASP (a system to evaluate odds, invented by a few Kiwi scientists) uses, and as I’d pointed out in the past, WASP has the thin tails problem as well.
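For the curious, here is a stripped-down sketch of the backward induction itself. The per-ball outcome distribution below is made up, and each ball is treated as independent of the previous one – which is precisely the assumption that gives the thin tails:

```python
from functools import lru_cache

# Made-up per-ball outcome distribution: (runs scored, is_wicket, probability).
OUTCOMES = [(0, False, 0.35), (1, False, 0.30), (2, False, 0.10),
            (4, False, 0.12), (6, False, 0.05), (0, True, 0.08)]

@lru_cache(maxsize=None)
def p_chase(balls_left, runs_needed, wickets_left):
    """Probability that the chasing team wins, by backward induction,
    assuming every ball is drawn independently from OUTCOMES."""
    if runs_needed <= 0:
        return 1.0
    if balls_left == 0 or wickets_left == 0:
        return 0.0
    prob = 0.0
    for runs, wicket, p in OUTCOMES:
        prob += p * p_chase(balls_left - 1,
                            runs_needed - runs,
                            wickets_left - (1 if wicket else 0))
    return prob

# 48 to win off 21 balls with 2 wickets in hand -- the situation my graph called near-impossible.
print(round(p_chase(21, 48, 2), 4))
```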

As Seamus Hogan, one of the inventors of WASP, had pointed out in a comment on that post, one way of solving this thin tails issue is to control for the pitch or regime, and I’ve incorporated that as well (using a Bayesian system to “learn” the nature of the pitch as the game goes on). Yet, I see I struggle with fat tails.

I seriously need to find a way to take into account serial correlation into my models!

That said, I must say I’m fairly kicked about the system I’ve built. Do let me know what you think of this!

Weighting indices

One of the biggest recent developments in finance has been the rise of index investing. The basic idea of indexing is that rather than trying to beat the market, a retail investor should simply invest in a “market index”, and net of fees they are likely to perform better than they would if they were to use an active manager.

Indexing has become so popular over the years that researchers at Sanford Bernstein, an asset management firm, have likened it to being “worse than Marxism”. People have written dystopian fiction about “the last active manager”. And so on.

And as Matt Levine keeps writing in his excellent newsletter, the rise of indexing means that the balance of power in the financial markets is shifting from asset managers to people who build indices. The context here is that because now a lot of people simply invest “in the index”, determining which stock gets to be part of an index can determine people’s appetite for the stock, and thus its performance.

So, for example, you have indexers who want to leave stocks without voting rights (such as those of SNAP) out of indices. Some other indexers want to leave extra-large companies (such as a hypothetically public Saudi Aramco) out of the index. And then there are people who believe that the way conventional indices are built is incorrect, and instead argue in favour of an “equally weighted index”.

While one can theoretically just put together a bunch of stocks, call it an “index” and sell it to investors by making them believe that they’re “investing in the index” (since that is now a thing), the fact is that not every index is an index.

Last week, while trying to understand what the deal about “smart beta” is (a term people in the industry throw around a fair bit, but one whose meaning not too many people are clear about), I stumbled upon this excellent paper by MSCI on smart beta and factor investing.

About a decade ago, the Nifty (India’s flagship index) changed the way it was computed. Earlier, stocks in the Nifty were weighted based on their overall market capitalisation. From 2009 onwards, the weights of the stocks in the Nifty are proportional to their “free float market capitalisation” (that is, the stock price multiplied by number of shares held by the “public”, i.e. non promoters).

Back then I hadn’t understood the significance of the change – apart from making the necessary changes in the algorithm I was running at a hedge fund to take the new weights into account, that is. Reading the MSCI paper made me realise the sanctity of weighting by free float market capitalisation in building an index.

The basic idea of indexing is that you don’t make any investment decisions, and instead simply “follow the herd”. Essentially you allocate your capital across stocks in exactly the same proportion as the rest of the market. In other words, the index needs to track stocks in the same proportion that the broad market owns it.

And the free float market capitalisation, which is basically the total value of the stock held by “public” (or non-promoters), represents the allocation of capital by the total market in favour of the particular stock. And by weighting stocks in the ratio of their free float market capitalisation, we are essentially mimicking the way the broad market has allocated capital across different companies.
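A minimal sketch with made-up numbers for three hypothetical stocks shows the difference between the two weighting schemes:

```python
# Hypothetical stocks: price, total shares, and promoter holding fraction. Numbers are made up.
stocks = {
    "AAA": {"price": 500.0, "shares": 1_000_000, "promoter_fraction": 0.75},
    "BBB": {"price": 200.0, "shares": 5_000_000, "promoter_fraction": 0.20},
    "CCC": {"price": 100.0, "shares": 2_000_000, "promoter_fraction": 0.50},
}

def weights(cap_fn):
    """Normalise a per-stock capitalisation measure into index weights."""
    caps = {name: cap_fn(s) for name, s in stocks.items()}
    total = sum(caps.values())
    return {name: round(cap / total, 3) for name, cap in caps.items()}

full_cap = weights(lambda s: s["price"] * s["shares"])
free_float = weights(lambda s: s["price"] * s["shares"] * (1 - s["promoter_fraction"]))

# A heavily promoter-held stock (AAA) gets a much smaller weight under free-float weighting,
# reflecting how much of it the "public" market has actually allocated capital to.
print(full_cap)
print(free_float)
```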

Thus, only a broad market index that is weighted by free float market capitalisation counts as “indexing” as far as passive investing is concerned. Investing in stocks in any other combination or ratio means the investor is expressing her views or preferences on the relative performance of stocks that are different from the market’s preferences.

So if you invest in a sectoral index, you are not “indexing”. If you invest in an index that is weighted differently than by free float market cap (such as the Dow Jones Industrial Average), you are not indexing.

One final point – you might wonder why indices have a finite number of stocks (such as the S&P 500 or Nifty 50) if true indexing means reflecting the market’s capital allocation across all stocks, not just a few large ones.

The reason why we cut off after a point is that beyond that, the weightage of stocks becomes so low that in order to perfectly track the index, the investment required is significant. And so, for a retail investor seeking to index, following the “entire market” might mean a significant “tracking error”. In other words, the 50 or 500 stocks that make up the index are a good representation of the market at large, and tracking these indices, as long as they are free float market capitalisation weighted, is the same as investing without having a view.

Bond Market Liquidity and Selection Bias

I’ve long been a fan of Matt Levine’s excellent Money Stuff newsletter. I’ve mentioned this newsletter here several times in the past, and on one such occasion, I got a link back.

One of my favourite sections in Levine’s newsletter is called “people are worried about bond market liquidity”. One reason I got interested in it was that I was writing a book on Liquidity (speaking of which, there’s a formal launch function in Bangalore on the 15th). More importantly, it was rather entertainingly written, and informative as well.

I appreciated the section so much that I ended up calling one of the sections of one of the chapters of my book “people are worried about bond market liquidity”. 

In any case, Levine has outdone himself several times over in his latest instalment of worries about bond market liquidity. This one is from Friday’s newsletter. I strongly encourage you to read the section on people being worried about bond market liquidity in full.

To summarise, the basic idea is that while people are generally worried about bond market liquidity, a lot of studies about such liquidity by academics and regulators have concluded that bond market liquidity is just fine. This is based on the finding that the bid-ask spread (gap between prices at which a dealer is willing to buy or sell a security) still remains tight, and so liquidity is just fine.

But the problem is that, as Levine beautifully describes the idea, there is a strong case of selection bias. While the bid-ask spread has indeed narrowed, what this data point misses out is that many trades that could have otherwise happened are not happening, and so the data comes from a very biased sample.

Levine does a much better job of describing this than me, but there are two ways in which a banker can facilitate bond trading – by either taking possession of the bonds (in other words, being a “market maker”; I have a chapter on this in my book), or by simply helping find a counterparty to the trade, thus acting like a broker (I have a chapter on brokers as well in my book).

A new paper by economists at the Federal Reserve Board confirms that the general finding that bond market liquidity is okay is affected by selection bias. The authors find that spreads are tighter (and sometimes negative) when bankers are playing the role of brokers than when they are playing the role of market makers.

In the very first chapter of my book (dealing with football transfer markets), I had mentioned that the bid-ask spread of a market is a good indicator of its liquidity – the higher the bid-ask spread, the less liquid the market.

Later on in the book, I’d also mentioned that the money an intermediary can make is again a function of how liquid the market is.

This story about bond market liquidity puts both these assertions into question. Bond markets see tight bid-ask spreads and bankers make little or no money (as the paper linked to above says, spreads are frequently negative). Based on my book, both of these should indicate that the market is quite liquid.

However, it turns out that both the bid-ask spread and fees made by intermediaries are biased estimates, since they don’t take into account the trades that were not done.

With bankers cutting down on market making activity (see Levine’s post or the paper for more details), there is many a time when a customer will not be able to trade at all since the bankers are unable to find them a counterparty (in the pre Volcker Rule days, bankers would’ve simply stepped in themselves and taken the other side of the trade). In such cases, the effective bid-ask spread is infinity, since the market has disappeared.
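A toy simulation makes the selection bias concrete (all the numbers below are invented; only the mechanism matters): if a trade happens only when the quoted spread is within the client’s tolerance, the average spread on executed trades barely moves even as dealers quote wider and more and more customers fail to trade at all.

```python
import random

random.seed(42)

def observed_liquidity(mean_quoted_spread, n_requests=100_000, client_tolerance=0.5):
    """Clients request quotes; a trade happens only if the quoted spread is within
    the client's tolerance. Return the average spread on *executed* trades (what the
    studies measure) and the fraction of requests that never traded at all."""
    executed, failed = [], 0
    for _ in range(n_requests):
        quoted = random.expovariate(1 / mean_quoted_spread)  # made-up quote distribution
        if quoted <= client_tolerance:
            executed.append(quoted)
        else:
            failed += 1      # no counterparty found: the effective spread is infinite
    return sum(executed) / len(executed), failed / n_requests

# As dealers quote wider on average, the spread on trades that DID happen barely moves,
# but a growing share of customers simply could not trade -- and drops out of the data.
for mean_spread in (0.2, 0.4, 0.8):
    avg, fail_rate = observed_liquidity(mean_spread)
    print(f"mean quote {mean_spread:.1f}: executed-trade spread {avg:.2f}, "
          f"failed trades {fail_rate:.0%}")
```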

Technically, this needs to be included while calculating the overall bid-ask spread. How this can actually be achieved is yet another question!

The (missing) Desk Quants of Main Street

A long time ago, I’d written about my experience as a Quant at an investment bank, and about how banks like mine were sitting on a pile of risk that could blow up any time soon.

There were two problems as I had documented then. Firstly, most quants I interacted with seemed to be solving maths problems rather than finance problems, not bothering if their models would stand the test of markets. Secondly, there was an element of groupthink, as quant teams were largely homogeneous and it was hard to progress while holding contrarian views.

Six years on, there has been no blowup, and in some sense banks are actually doing well (I mean, they’ve declined compared to the time just before the 2008 financial crisis but haven’t done that badly). There have been no real quant disasters (yes I know the Gaussian Copula gained infamy during the 2008 crisis, but I’m talking about a period after that crisis).

There can be many explanations for why banks have not had any quant blow-ups despite quants solving for maths problems and all thinking alike, but the one I’m partial to is the presence of a “middle layer”.

Most of the quants I interacted with were “core” in the sense that they were not attached to any sales or trading desks. Banks also typically had a large cadre of “desk quants” who are directly associated with trading teams, and who build models and help with day-to-day risk management, pricing, etc.

Since these desk quants work closely with the business, they turn out to be much more pragmatic than the core quants – they have a good understanding of the market and use the models more as guiding principles than as rules. On the other hand, they bring the benefits of quantitative models (and work of the core quants) into day-to-day business.

Back during the financial crisis, I’d jokingly predicted that other industries should hire quants who were now surplus to Wall Street. Around the same time, DJ Patil et al came up with the concept of the “data scientist” and called it the “sexiest job of the 21st century”.

And so other industries started getting their own share of quants, or “data scientists” as they were now called. Nowadays it’s fashionable even for small companies, for whom data is not critical to the business, to have a data science team. Being in this profession now (I loathe calling myself a “data scientist” – I prefer to say “quant” or “analytics”), I’ve come across quite a few of those.

The problem I see with “data science” on “Main Street” (this phrase gained currency during the financial crisis as the opposite of Wall Street, in that it referred to “normal” businesses) is that it lacks the cadre of desk quants. Most data scientists are highly technical people who don’t necessarily have an understanding of the business they operate in.

Thanks to that, what I’ve noticed is that in most cases there is a chasm between the data scientists and the business, since they are unable to talk in a common language. As I’m prone to saying, this can go two ways – the business guys can either assume that the data science guys are geniuses and take their word as gospel, or they can totally disregard the data scientists as people who do some esoteric maths and don’t really understand the world. In either case, the value added is suboptimal.

It is not hard to understand why “Main Street” doesn’t have a cadre of desk quants – it’s because of the way the data science industry has evolved. Quant at investment banks has evolved over a long period of time – the Black-Scholes equation was proposed in the early 1970s. So the quants were first recruited to directly work with the traders, and core quants (at the banks that have them) were a later addition when banks realised that some quant functions could be centralised.

On the other hand, the whole “data science” growth has been rather sudden. The volume of data, cheap incrementally available cloud storage, easy processing and the popularity of the phrase “data science” have all grown at a furious pace in the last decade or so, and companies have scrambled to set up data teams. There has simply been no time to train people who get both the business and the data – and so the data scientists exist like addenda that are either worshipped or ignored.

Direct listing

So it seems like Swedish music streaming company Spotify is going to do a “direct listing” on the markets. Here is Felix Salmon on why that’s a good move for the company. And in this newsletter, Matt Levine (a former Equity Capital Markets banker) talks about why it’s not.

In a traditional IPO, a company raises money from the “public” in exchange for fresh shares. A few existing shareholders usually cash out at the time of the IPO (offering their shares in addition to the new ones that the company is issuing), but IPOs are primarily a capital raising exercise for the company.

Now, pricing an IPO is tricky business since the company hasn’t been traded yet, and so the company has to enlist investment bankers who, using their experience and relationships with investors, will “price” the IPO and take care of distributing the fresh stock to new investors. Bankers also typically “underwrite” the IPO, guaranteeing to buy at the IPO price in case investor demand is low (this almost never happens – pricing is done keeping in mind what investors are willing to pay). I’ve written several posts on this blog on IPO pricing, and here’s the latest (with links to all previous posts on the topic).

In a “direct listing”, no new shares of the company are issued; the existing stock simply gets listed on an exchange. It is up to existing shareholders (including employees) to sell stock in order to create action on the exchange. In that sense, it is not a capital raising exercise, but more of an opportunity for shareholders to cash out.

The problem with direct listing is that it can take a while for the market to price the company. When there is an IPO, and shares are allotted to investors, a large number of these allottees want to trade the stock on the day it is listed, and that creates activity in the stock, and an opportunity for the market to express its opinion on the value of the company.

In case of a direct listing, since it’s only a bunch of insiders who have stock to sell, trading volumes in the first few days might be low, and it takes time for the real value to get discovered. There is also a chance that the stock might be highly volatile until this price is discovered (all an IPO does is to compress this time rather significantly).

One reason why Spotify is doing a direct listing is that it doesn’t need new capital – only an avenue to let existing shareholders cash out. The other reason is that the company recently raised capital, and there appears to be a consensus that the valuation at which it was raised – $13 billion – is fair.

Since the company raised capital only recently, the price at which this round of capital was raised will be anchored in the minds of investors, both existing and prospective. Existing shareholders will expect to cash out their shares at a price that leads to this valuation, and new investors will use this valuation as an anchor to place their initial bids. As a result, it is unlikely that the volatility in the stock in initial days of trading will be as high as analysts expect.

In one sense, by announcing it will go public soon after raising its last round of private investment, what Spotify has done is to decouple its capital raising process from the going public process, but keeping them close enough that the price anchor effects are not lost. If things go well (stock volatility is low in initial days), the company might just be setting a trend!