Bankers predicting football

So the Football World Cup season is upon us, and this means that investment banking analysts are again engaging in the pointless exercise of trying to predict who will win the World Cup. And the funny thing this time is that thanks to MiFID II regulations, which prevent banking analysts from giving out reports for free, these reports aren’t in the public domain.

That means we have to rely on media reports of these reports, or on people tweeting insights from them. For example, the New York Times has summarised the banks’ predictions on the winner. And a scatter plot from Goldman Sachs’s report will go straight into my next presentation on spurious correlations.

Different banks have taken different approaches to predicting who will win the tournament. UBS has stuck with a classic Monte Carlo simulation approach, but Goldman Sachs has gone one better and used “four different methods in artificial intelligence” to predict (for the third consecutive time) that Brazil will win the tournament.

In fact, Goldman also uses a Monte Carlo simulation, as Business Insider reports:

The firm used machine learning to run 200,000 models, mining data on team and individual player attributes, to help forecast specific match scores. Goldman then simulated 1 million possible variations of the tournament in order to calculate the probability of advancement for each squad.
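At its core, the Monte Carlo bit is straightforward. Here’s a minimal sketch with a hypothetical two-round knockout and made-up win probabilities – the banks’ models forecast the match-level probabilities from team and player data, whereas here they are simply assumed:

    import random

    random.seed(42)

    def play(team_a, team_b, p_a_wins):
        """Simulate one knockout match; team_a wins with probability p_a_wins."""
        return team_a if random.random() < p_a_wins else team_b

    def simulate_tournament():
        # Hypothetical semi-finals and final, with assumed win probabilities.
        finalist_1 = play("Brazil", "Germany", 0.55)
        finalist_2 = play("Spain", "France", 0.50)
        p_final = 0.60 if finalist_1 == "Brazil" else 0.50
        return play(finalist_1, finalist_2, p_final)

    n = 100_000
    wins = {}
    for _ in range(n):
        champion = simulate_tournament()
        wins[champion] = wins.get(champion, 0) + 1

    # Estimated probability of winning the tournament, per team.
    print({team: round(count / n, 3) for team, count in sorted(wins.items())})

The output of such an exercise is a probability of winning for each team, not a single crowned winner – a point I’ll come back to below.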

But an insider at Goldman with access to the report tells me that the phrase “Monte Carlo” itself doesn’t appear in the report. Maybe it suggests that “data scientists” have taken over the investment research division at the expense of quants.

I’m also surprised by the reporting on Goldman’s predictions. Everyone simply reports that “Goldman predicts that Brazil will win”, but surely (based on the model they’ve used) that prediction has been made with a certain probability? A better way of reporting would’ve been to say “Goldman predicts Brazil most likely to win, with X% probability” (and the bank’s bets desk in the UK could have placed some money on it).

ING went rather simple with their forecast – they simply took players’ transfer values, summed them up by team, and concluded that Spain is most likely to win because their squad is the “most valued”. Now, I have two major questions about this approach. Firstly, it ignores the “correlation term” (remember the famous England conundrum of the noughties, of fitting Gerrard and Lampard into the same eleven?), and assumes that a set of strong players makes a strong team. Secondly, have they accounted for inflation? And if so, how? Player valuations (about which I have a chapter in my book) have simply gone through the roof in the last year, with Mo Salah at £35 million being considered a “bargain buy”.
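For what it’s worth, the mechanics of ING’s method boil down to a single sum. Here’s a minimal sketch with entirely made-up per-player valuations (in £ million); whether and how to deflate these for transfer-market inflation is exactly the open question above.

    # Made-up squads and per-player transfer valuations, in £ million.
    squad_values = {
        "Spain":   [60, 55, 50, 45, 40],
        "Brazil":  [70, 50, 45, 35, 30],
        "England": [50, 45, 40, 35, 25],
    }

    # ING-style aggregation: a team's strength is the sum of its players' values.
    team_value = {team: sum(values) for team, values in squad_values.items()}
    predicted_winner = max(team_value, key=team_value.get)
    print(team_value, predicted_winner)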

Nomura also seems to have taken a similar approach, though they have in some ways accounted for the correlation term by including “team momentum” as a factor!

Anyway, I look forward to the football! That it is live on BBC and ITV means I get to watch the tournament from the comfort of my home (a luxury in England!). Also being in England means all matches are at a sane time, so I can watch more of this World Cup than the last one.

 

A banker’s apology

Whenever there is a massive stock market crash, like the one in 1987, or the crisis in 2008, it is common for investment banking quants to talk about how it was a “1 in a zillion years” event. This is on account of their models, which typically assume that stock prices are lognormal and that stock price movements are Markovian (today’s movement is uncorrelated with tomorrow’s).

In fact, a cursory look at recent data shows that what the models consider a one in a zillion years event actually happens every few years, or decades. In other words, while quant models do pretty well in the average case, they have thin “tails” – they underestimate the likelihood of extreme events, leading to a build-up of risk in the system.
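To see where the “1 in a zillion years” language comes from, here is a minimal sketch, with hypothetical numbers rather than any bank’s actual model, of the tail probability that a Gaussian daily-returns stand-in for such a model assigns to a 1987-sized crash:

    from scipy.stats import norm

    # Hypothetical stand-in for a lognormal/Markovian model: i.i.d. Gaussian
    # daily returns with zero mean and 1% daily volatility.
    daily_vol = 0.01

    # Probability the model assigns to a single-day drop of 20% or more
    # (roughly the scale of the October 1987 crash) -- a 20-sigma event.
    p = norm.cdf(-0.20 / daily_vol)

    # Expected waiting time, in years of ~252 trading days, for such a day.
    years = 1 / (p * 252)
    print(p, years)

The number that prints is comfortably in “zillion years” territory, which is exactly the problem – the real world produces such days far more often.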

When I decided to end my (brief) career as an investment banking quant in 2011, I wanted to take the methods that I’d learnt into other industries. While “data science” might have become a thing in the intervening years, there is still a lot for conventional industry to learn from banking in terms of using maths for management decision-making. And this makes me believe I’m still in business.

And like my former colleagues in investment banking quant, I’m not immune to the fat tail problem either – replicating solutions from one domain in another can replicate the problems as well.

For a while now I’ve been building what I think is a fairly innovative way to represent a cricket match. Basically you look at how the balance of play shifts as the game goes along. So the representation is a line graph that shows where the balance of play was at different points in time in the game.

This way, you have a visualisation that tells you in one shot how the game “flowed”. Consider, for example, last night’s game between Mumbai Indians and Chennai Super Kings. This is what the game looks like in my representation.

What this shows is that Mumbai Indians got a small advantage midway through the innings (after a short blast by Ishan Kishan), which they held through their innings. The game was steady for about 5 overs of the CSK chase, when some tight overs created pressure that resulted in Suresh Raina getting out.

Soon, Ambati Rayudu and MS Dhoni followed him to the pavilion, and MI were in control, with CSK losing 6 wickets in the course of 10 overs. When they lost Mark Wood in the 17th over, Mumbai Indians were almost surely winners – my system reckoning that 48 to win off 21 balls was near-impossible.

And then Bravo got into the act, putting on 39 in 10 balls with Imran Tahir watching at the other end (including taking 20 off a Mitchell McClenaghan over, and 20 again off a Jasprit Bumrah over at the end of which Bravo got out). And then a one-legged Jadhav came, hobbled for 3 balls and then finished off the game.

Now, while the shape of the above curve is representative of what happened in the game, I think it went too close to the axes. 48 off 21 with 2 wickets in hand is not easy, but it’s not a 1% probability event (as my graph depicts).

And looking into my model, I realise I’ve made the familiar banker’s mistake – of assuming independence and the Markovian property. I calculate the probability of a team winning using a method called “backward induction” (which I’d learnt during my time as an investment banking quant). It’s the same method that the WASP system for evaluating odds (invented by a few Kiwi scientists) uses, and as I’d pointed out in the past, WASP has the thin tails problem as well.
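For concreteness, here’s a minimal sketch of what backward induction over the state (balls remaining, wickets in hand, runs required) looks like, with made-up per-ball outcome probabilities – not my actual model, but the independence assumption I’m criticising is exactly the one baked in here:

    from functools import lru_cache

    # Hypothetical per-ball outcomes for the chasing side: (runs, wicket?, probability).
    # These numbers are purely illustrative.
    OUTCOMES = [
        (0, False, 0.35),  # dot ball
        (1, False, 0.33),  # single
        (2, False, 0.10),
        (4, False, 0.12),
        (6, False, 0.05),
        (0, True,  0.05),  # wicket
    ]

    @lru_cache(maxsize=None)
    def p_win(balls_left, wickets_left, runs_needed):
        """Probability the chasing team wins, assuming every ball is independent."""
        if runs_needed <= 0:
            return 1.0
        if balls_left == 0 or wickets_left == 0:
            return 0.0
        return sum(prob * p_win(balls_left - 1,
                                wickets_left - (1 if wicket else 0),
                                runs_needed - runs)
                   for runs, wicket, prob in OUTCOMES)

    # The situation described above: 48 to win off 21 balls with 2 wickets in hand.
    print(round(p_win(21, 2, 48), 4))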

As Seamus Hogan, one of the inventors of WASP, had pointed out in a comment on that post, one way of solving this thin tails issue is to control for the pitch or regime, and I’ve incorporated that as well (using a Bayesian system to “learn” the nature of the pitch as the game goes on). Yet, I see I still struggle to capture the fat tails.
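The “learning” piece is conceptually simple; here’s one way it could be done (a sketch of my own choosing, not necessarily how my system actually does it), using a Beta-Binomial update of the pitch’s boundary rate as balls are bowled:

    class PitchBelief:
        """Beta-Binomial belief about how boundary-friendly the pitch is."""

        def __init__(self, prior_boundaries=2.0, prior_balls=12.0):
            # Prior: roughly one boundary every six balls, worth 12 balls of evidence.
            self.a = prior_boundaries
            self.b = prior_balls - prior_boundaries

        def update(self, was_boundary):
            if was_boundary:
                self.a += 1
            else:
                self.b += 1

        def boundary_rate(self):
            # Posterior mean of the boundary probability on this pitch.
            return self.a / (self.a + self.b)

    belief = PitchBelief()
    for runs in [0, 1, 4, 0, 6, 1, 0, 0, 4, 1]:   # first ten balls of an innings
        belief.update(runs >= 4)
    print(round(belief.boundary_rate(), 3))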

I seriously need to find a way to take serial correlation into account in my models!

That said, I must say I’m fairly kicked about the system I’ve built. Do let me know what you think of this!

Weighting indices

One of the biggest recent developments in finance has been the rise of index investing. The basic idea of indexing is that rather than trying to beat the market, a retail investor should simply invest in a “market index”, and net of fees they are likely to perform better than they would if they were to use an active manager.

Indexing has become so popular over the years that researchers at Sanford Bernstein, an asset management firm, have likened it to being “worse than Marxism”. People have written dystopian fiction about “the last active manager”. And so on.

And as Matt Levine keeps writing in his excellent newsletter, the rise of indexing means that the balance of power in the financial markets is shifting from asset managers to people who build indices. The context here is that because now a lot of people simply invest “in the index”, determining which stock gets to be part of an index can determine people’s appetite for the stock, and thus its performance.

So, for example, you have indexers who want to leave stocks without voting rights (such as those of SNAP) out of indices. Some other indexers want to leave extra-large companies (such as a hypothetically public Saudi Aramco) out of the index. And then there are people who believe that the way conventional indices are built is incorrect, and instead argue in favour of an “equally weighted index”.

While one can theoretically just put together a bunch of stocks, call it an “index” and sell it to investors, making them believe that they’re “investing in the index” (since that is now a thing), the point is that not every index is an index.

Last week, while trying to understand what the deal with “smart beta” is (a term people in the industry throw around a fair bit, but one whose meaning not too many people are clear about), I stumbled upon this excellent paper by MSCI on smart beta and factor investing.

About a decade ago, the Nifty (India’s flagship index) changed the way it was computed. Earlier, stocks in the Nifty were weighted based on their overall market capitalisation. From 2009 onwards, the weights of the stocks in the Nifty are proportional to their “free float market capitalisation” (that is, the stock price multiplied by the number of shares held by the “public”, i.e. non-promoters).

Back then I hadn’t understood the significance of the change – apart from making the necessary changes in the algorithm I was running at a hedge fund to take the new weights into account, that is. Reading the MSCI paper made me realise the sanctity of weighting by free float market capitalisation in building an index.

The basic idea of indexing is that you don’t make any investment decisions, and instead simply “follow the herd”. Essentially you allocate your capital across stocks in exactly the same proportion as the rest of the market. In other words, the index needs to track stocks in the same proportion that the broad market owns them.

And the free float market capitalisation, which is basically the total value of the stock held by the “public” (or non-promoters), represents the allocation of capital by the total market in favour of that particular stock. And by weighting stocks in the ratio of their free float market capitalisations, we are essentially mimicking the way the broad market has allocated capital across different companies.
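In other words (a minimal sketch with made-up tickers and numbers), the weight of each stock is simply its free float market capitalisation divided by the total across the index constituents:

    # Each tuple: (ticker, price, free-float shares, i.e. shares held by the public).
    # All numbers are made up for illustration.
    stocks = [
        ("AAA", 100.0, 5_000_000),
        ("BBB", 250.0, 1_000_000),
        ("CCC", 40.0, 20_000_000),
    ]

    free_float_cap = {ticker: price * shares for ticker, price, shares in stocks}
    total = sum(free_float_cap.values())
    weights = {ticker: cap / total for ticker, cap in free_float_cap.items()}

    # Each stock's index weight is its share of the capital the public has
    # allocated across the constituents.
    print({ticker: round(w, 3) for ticker, w in weights.items()})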

Thus, only a broad market index that is weighted by free float market capitalisation counts as “indexing” as far as passive investing is concerned. Investing in stocks in any other combination or ratio means the investor is expressing her views or preferences on the relative performance of stocks that are different from the market’s preferences.

So if you invest in a sectoral index, you are not “indexing”. If you invest in an index that is weighted differently than by free float market cap (such as the Dow Jones Industrial Average), you are not indexing.

One final point – you might wonder why indices have a finite number of stocks (such as the S&P 500 or Nifty 50) if true indexing means reflecting the market’s capital allocation across all stocks, not just a few large ones.

The reason why we cut off after a point is that beyond it, the weights of the stocks become so small that tracking them in exact proportion requires a very large investment. And so, for a retail investor seeking to index, following the “entire market” might mean a significant “tracking error”. In other words, the 50 or 500 stocks that make up the index are a good representation of the market at large, and tracking these indices, as long as they are free float market capitalisation weighted, is the same as investing without having a view.

Bond Market Liquidity and Selection Bias

I’ve long been a fan of Matt Levine’s excellent Money Stuff newsletter. I’ve mentioned this newsletter here several times in the past, and on one such occasion, I got a link back.

One of my favourite sections in Levine’s newsletter is called “people are worried about bond market liquidity”. One reason I got interested in it was that I was writing a book on Liquidity (speaking of which, there’s a formal launch function in Bangalore on the 15th). More importantly, it was rather entertainingly written, and informative as well.

I appreciated the section so much that I ended up calling one of the sections of one of the chapters of my book “people are worried about bond market liquidity”. 

In any case, Levine has outdone himself several times over in his latest instalment of worries about bond market liquidity. This one is from Friday’s newsletter. I strongly encourage you to read the section on people being worried about bond market liquidity in full.

To summarise, the basic idea is that while people are generally worried about bond market liquidity, a lot of studies of such liquidity by academics and regulators have concluded that it is just fine. This is based on the finding that the bid-ask spread (the gap between the prices at which a dealer is willing to buy and sell a security) still remains tight.

But the problem is that, as Levine beautifully describes, there is a strong case of selection bias. While the bid-ask spread has indeed narrowed, what this data point misses is that many trades that could otherwise have happened are not happening, and so the data comes from a very biased sample.

Levine does a much better job of describing this than I do, but there are two ways in which a banker can facilitate bond trading – either by taking possession of the bonds, in other words being a “market maker” (I have a chapter on this in my book), or by simply helping find a counterparty to the trade, thus acting like a broker (I have a chapter on brokers in my book as well).

A new paper by economists at the Federal Reserve Board confirms that the general finding that bond market liquidity is okay is affected by selection bias. The authors find that spreads are tighter (and sometimes negative) when bankers are playing the role of brokers than when they are playing the role of market makers.

In the very first chapter of my book (dealing with football transfer markets), I had mentioned that the bid-ask spread of a market is a good indicator of its liquidity – the higher the bid-ask spread, the less liquid the market.

Later on in the book, I’d also mentioned that the money an intermediary can make is again a function of how liquid the market is.

This story about bond market liquidity puts both these assertions into question. Bond markets see tight bid-ask spreads and bankers make little or no money (as the paper linked to above says, spreads are frequently negative). Based on my book, both of these should indicate that the market is quite liquid.

However, it turns out that both the bid-ask spread and fees made by intermediaries are biased estimates, since they don’t take into account the trades that were not done.

With bankers cutting down on market making activity (see Levine’s post or the paper for more details), there are many occasions when a customer will not be able to trade at all, since the bankers are unable to find them a counterparty (in the pre-Volcker Rule days, bankers would’ve simply stepped in themselves and taken the other side of the trade). In such cases, the effective bid-ask spread is infinite, since the market has disappeared.

Technically this needs to be included while calculating the overall bid-ask spread. How this can actually be achieved is yet another question!
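One crude way to see the bias (a toy simulation with entirely made-up numbers, not a fix for the measurement problem) is to note that spreads are only observed for trades that actually happen:

    import random

    random.seed(0)

    attempted_trades = 1000
    observed_spreads = []   # spreads on trades that went through, in percentage points
    failed = 0              # attempted trades where no counterparty was found

    for _ in range(attempted_trades):
        r = random.random()
        if r < 0.70:
            # Banker acts as a broker and finds a counterparty: tight spread.
            observed_spreads.append(random.uniform(0.00, 0.05))
        elif r < 0.85:
            # Banker makes a market and takes the bond onto its books: wider spread.
            observed_spreads.append(random.uniform(0.20, 0.50))
        else:
            # No counterparty, no market making: the trade simply doesn't happen.
            failed += 1

    print("average observed spread:", round(sum(observed_spreads) / len(observed_spreads), 3))
    print("share of attempted trades that never happened:", failed / attempted_trades)

The average observed spread looks comfortingly tight, but it says nothing about the roughly 15% of attempted trades (in this made-up example) that never happened at all.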

The (missing) Desk Quants of Main Street

A long time ago, I’d written about my experience as a quant at an investment bank, and about how banks like mine were sitting on a pile of risk that could blow up any time.

There were two problems as I had documented then. Firstly, most quants I interacted with seemed to be solving maths problems rather than finance problems, not bothering if their models would stand the test of markets. Secondly, there was an element of groupthink, as quant teams were largely homogeneous and it was hard to progress while holding contrarian views.

Six years on, there has been no blowup, and in some sense banks are actually doing well (I mean, they’ve declined compared to the time just before the 2008 financial crisis but haven’t done that badly). There have been no real quant disasters (yes I know the Gaussian Copula gained infamy during the 2008 crisis, but I’m talking about a period after that crisis).

There can be many explanations for why banks have not had any quant blow-ups despite quants solving maths problems and all thinking alike, but the one I’m partial to is the presence of a “middle layer”.

Most of the quants I interacted with were “core”, in the sense that they were not attached to any sales or trading desks. Banks also typically have a large cadre of “desk quants”, who are directly attached to trading desks and who build models and help with day-to-day risk management, pricing, and so on.

Since these desk quants work closely with the business, they turn out to be much more pragmatic than the core quants – they have a good understanding of the market and use the models more as guiding principles than as rules. At the same time, they bring the benefits of quantitative models (and the work of the core quants) into the day-to-day business.

Back during the financial crisis, I’d jokingly suggested that other industries should hire the quants who were now surplus to Wall Street’s requirements. Around the same time, DJ Patil et al came up with the concept of the “data scientist” and called it the “sexiest job of the 21st century”.

And so other industries started getting their own share of quants, or “data scientists” as they were now called. Nowadays it’s fashionable even for small companies, for whom data is not critical to the business, to have a data science team. Being in this profession now (I loathe calling myself a “data scientist” – I prefer to say “quant” or “analytics”), I’ve come across quite a few of those.

The problem I see with “data science” on “Main Street” (this phrase gained currency during the financial crisis as the opposite of Wall Street, in that it referred to “normal” businesses) is that it lacks the cadre of desk quants. Most data scientists are highly technical people who don’t necessarily have an understanding of the business they operate in.

Thanks to that, what I’ve noticed is that in most cases there is a chasm between the data scientists and the business, since they are unable to talk in a common language. As I’m prone to saying, this can go two ways – the business guys can either assume that the data science guys are geniuses and take their word as gospel, or they can totally disregard the data scientists as people who do some esoteric maths and don’t really understand the world. In either case, the value added is suboptimal.

It is not hard to understand why “Main Street” doesn’t have a cadre of desk quants – it’s because of the way the data science industry has evolved. The quant function at investment banks evolved over a long period of time – the Black-Scholes equation was proposed in the early 1970s. So quants were first recruited to work directly with traders, and core quants (at the banks that have them) were a later addition, when banks realised that some quant functions could be centralised.

On the other hand, the whole “data science” growth has been rather sudden. The volume of data, cheap incrementally available cloud storage, easy processing and the popularity of the phrase “data science” have all grown at a rapid rate in the last decade or so, and companies have scrambled to set up data teams. There has simply been no time to train people who get both the business and the data – and so data scientists exist like addendums that are either worshipped or ignored.

Direct listing

So it seems like Swedish music streaming company Spotify is going to do a “direct listing” on the markets. Here is Felix Salmon on why that’s a good move for the company. And in this newsletter, Matt Levine (a former Equity Capital Markets banker) talks about why it’s not.

In a traditional IPO, a company raises money from the “public” in exchange for fresh shares. A few existing shareholders usually cash out at the time of the IPO (offering their shares in addition to the new ones that the company is issuing), but IPOs are primarily a capital raising exercise for the company.

Now, pricing an IPO is tricky business, since the company’s stock hasn’t been traded yet, and so the company has to enlist investment bankers who, using their experience and investor relationships, will “price” the IPO and take care of distributing the fresh stock to new investors. Bankers also typically “underwrite” the IPO, by guaranteeing to buy at the IPO price in case investor demand is low (this almost never happens – pricing is done keeping in mind what investors are willing to pay). I’ve written several posts on this blog on IPO pricing, and here’s the latest (with links to all previous posts on the topic).

In a “direct listing”, no new shares of the company are issued; the existing stock simply gets listed on an exchange. It is up to existing shareholders (including employees) to sell stock in order to create action on the exchange. In that sense, it is not a capital raising exercise, but more of an opportunity for shareholders to cash out.

The problem with direct listing is that it can take a while for the market to price the company. When there is an IPO, and shares are allotted to investors, a large number of these allottees want to trade the stock on the day it is listed, and that creates activity in the stock, and an opportunity for the market to express its opinion on the value of the company.

In the case of a direct listing, since it’s only a bunch of insiders who have stock to sell, trading volumes in the first few days might be low, and it takes time for the real value to get discovered. There is also a chance that the stock might be highly volatile until this price is discovered (all an IPO does is compress this time rather significantly).

One reason why Spotify is doing a direct listing is that it doesn’t need new capital – only an avenue to let existing shareholders cash out. The other reason is that the company recently raised capital, and there appears to be a consensus that the valuation at which it was raised – $13 billion – is fair.

Since the company raised capital only recently, the price at which this round of capital was raised will be anchored in the minds of investors, both existing and prospective. Existing shareholders will expect to cash out their shares at a price that leads to this valuation, and new investors will use this valuation as an anchor to place their initial bids. As a result, it is unlikely that the volatility in the stock in the initial days of trading will be as high as analysts expect.

In one sense, by announcing it will go public soon after raising its last round of private investment, what Spotify has done is decouple its capital raising process from the going public process, while keeping them close enough that the price anchor effects are not lost. If things go well (that is, stock volatility is low in the initial days), the company might just be setting a trend!

People are worried about investment banker liquidity 

This was told to me by an investment banker I met a few days back, who obviously doesn’t want to be named. But just as Matt Levine writes about people being worried about bond market liquidity, there is a similar worry about the liquidity of the market for investment bankers.

And once again it has to do with regulations introduced in the aftermath of the 2008 global financial crisis – specifically, the European requirement that bankers’ bonuses not all be paid immediately, but instead be deferred and amortised over a few years.

While good in spirit, what the regulation has led to is that bankers don’t look to move banks any more. This is because each successful (and thus well paid) banker has a stock of deferred compensation that will be lost in case of a job change.

This means that any bank looking to hire such a banker will have to make up for all that deferred compensation with a really fat joining bonus. And banks are seldom willing to pay such a high price.

And so the rather vibrant and liquid market for investment bankers in Europe has suddenly gone quiet. Interbank moves are few and far between – with the deferred compensation meaning that banks look to hire internally instead.

And fewer bankers moving out has had an effect on the number of openings for banking jobs. Which has led to even fewer bankers looking to move. Basically it’s a vicious cycle of falling liquidity!

Which is not good news for someone like me, who has just moved to London and is looking for a banking job!

PS: Speaking of liquidity, I have a book on market design and liquidity coming out next month or the month after. It’s in the publication process right now. More on that soon!

May a thousand market structures bloom

In my commentary on SEBI’s proposal to change the regulations of Indian securities markets in order to allow new kinds of market structures, I had mentioned that SEBI should simply enable exchanges to apply whatever market structures they wanted to apply, and let market participants sort out, through competition and pricing, what makes most sense for them.

This way, different stock exchanges in India can pick and choose their favoured form of regulation, and the market (and market participants) can decide which form of regulation they prefer. So you might have the Bombay Stock Exchange (BSE) going with order randomisation, while the National Stock Exchange (NSE) might use batch auctions. And individual participants might migrate to the platform of their choice.

Now, Matt Levine, who has been commenting on market structures for a long time, makes a similar case in his essay on the Chicago Stock Exchange’s newly introduced “speed bump”:

A thousand — or at least a dozen — market structures can bloom, each subtly optimized for a different type of trader. It’s an innovative and competitive market, in which each exchange can figure out what sorts of traders it wants to favor, and then optimize its speed bumps to cater to those traders.

Maybe I should now accuse Levine of “borrowing” my ideas without credit! 😛

 

Regulating HFT in India

The Securities and Exchange Board of India (SEBI) has set a cat among the HFT (High Frequency Trading) pigeons by proposing seven measures to curb the impact of HFT and improve “real liquidity” in the stock markets.

The big problem with HFT is that algorithms tend to cancel lots of orders – there might be a signal to place an order, and even before the market has digested that order, the order might get cancelled. This results in an illusion of liquidity, while the constant placing and removal of liquidity fucks with the minds of the other algorithms and market participants.

There has been a fair amount of research worldwide, and SEBI seems to have drawn from all of it to propose as many as seven measures – a minimum resting time between HFT orders, matching orders through frequent batch auctions rather than through the order book, introducing random delays (IEX style) for orders, randomising the order queue periodically, capping the order-to-trade ratio, creating separate queues for orders from co-located servers (used by HFT algorithms) and reviewing the provision of the tick-by-tick data feed.
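To give a flavour of just one of these, here’s a minimal sketch (my illustration, not SEBI’s specification) of how a frequent batch auction differs from continuous order-book matching: orders collected over a short interval are matched together at a single clearing price (here taken as the midpoint of the marginal matched buy and sell), so shaving microseconds off order arrival confers no advantage within a batch.

    def clear_batch(buys, sells):
        """buys, sells: lists of (price, quantity). Returns (clearing_price, volume)."""
        buys = sorted(([p, q] for p, q in buys), key=lambda o: -o[0])   # best buyers first
        sells = sorted(([p, q] for p, q in sells), key=lambda o: o[0])  # best sellers first
        price, volume = None, 0
        # Match as long as the best remaining buy crosses the best remaining sell.
        while buys and sells and buys[0][0] >= sells[0][0]:
            traded = min(buys[0][1], sells[0][1])
            volume += traded
            price = (buys[0][0] + sells[0][0]) / 2
            buys[0][1] -= traded
            sells[0][1] -= traded
            if buys[0][1] == 0:
                buys.pop(0)
            if sells[0][1] == 0:
                sells.pop(0)
        return price, volume

    # All orders that arrived during one batch interval, in whatever sequence.
    print(clear_batch(buys=[(100.5, 200), (100.0, 300)],
                      sells=[(99.5, 250), (100.2, 300)]))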

While the proposal seems sound and well researched (in fact, too well researched, picking up just about any proposal to regulate stock markets), the problem is that there are so many proposals, which are mutually incompatible.

As the inimitable Matt Levine commented,

If you run batch auctions and introduce random delays and reshuffle the queue constantly, you are basically replacing your matching engine with a randomizer. You might as well just hold a lottery for who gets which stocks, instead of a market.

My opinion on this is that SEBI shouldn’t mandate how each exchange should match its orders. Instead, SEBI should simply enable individual exchanges to regulate their markets in the way they see fit. So in my opinion, it is possible that all the above proposals go through (though I’m personally uncomfortable with some of them, such as queue randomisation), but rather than mandating that exchanges pick all of them, SEBI should simply allow each exchange to use zero or more of them.

This way, different stock exchanges in India can pick and choose their favoured form of regulation, and the market (and market participants) can decide which form of regulation they prefer. So you might have the Bombay Stock Exchange (BSE) going with order randomisation, while the National Stock Exchange (NSE) might use batch auctions. And individual participants might migrate to the platform of their choice.

The problem with this, of course, is that there are only two stock exchanges of note in India, and it is unclear if the depth in the Indian equities market will permit too many more. This might lead to limited competition between bad methods (the worst case scenario), leading to horrible market inefficiencies and the scaremongers’ pet threat of trading shifting to exchanges in Singapore or Dubai actually coming true!

The other problem with different exchanges having different mechanisms is that large institutions and banks might find it difficult to build systems that can trade accurately on all exchanges, and arbitrage opportunities across exchanges might exist for longer than they do now, leading to market inefficiency.

Then again, it’s interesting to see how a “let exchanges do what they want” approach might work. In the United States, there is a new exchange called IEX (the Investors Exchange) that places a “speed bump” on incoming orders, thus reducing the advantage of HFTs. IEX started only recently, after major objections from incumbents who alleged that it would make markets less fair.

With IEX having started, however, other exchanges are responding in their own ways to make the markets “fairer” to investors. NASDAQ, which had vehemently opposed IEX’s application, has now filed a proposal to reward orders from investors who wait for at least one second before cancelling them.

Surely, large institutions won’t like it if this proposal goes through, but this gives you a flavour of what competition can do! We’ll have to wait and see what SEBI does now.

Liquidity and the Trump Trade

The United States Treasury department has floated a new idea to improve liquidity in the market for treasury bonds, which has been a concern ever since the Volcker Rule came into force.

The basic problem with liquidity in the bond market is that there are a large number of similar instruments trading, which leads to a fragmented market. This is a consequence of the issuer (the US Treasury in this case) issuing a new bond every time it wishes to borrow more money; with maturities being long, many bonds are in the market at the same time.

The proposed solution, which commentators have dubbed the “Trump Trade” (thanks to the Republican presidential candidate’s penchant for restructuring his companies’ debt), involves the Treasury buying back bonds before they have run their full course. The bonds bought back will be paid for with newly issued 10-year bonds.

The idea here is that periodic retirement of old illiquid bonds and their replacement by a new “consolidated” bond can help aggregate the market and boost liquidity. This is not all. As the FT ($) reports,

The US Treasury would then buy older, less liquid and therefore cheaper debt across the market, which could in theory then be reissued at a lower yield. In recent months, yields on older issues have risen more than those for recently sold debt, suggesting a deterioration in liquidity.

This implies that because these “off the run” treasuries are less liquid, they are necessarily cheaper, and this “Trump Trade” is thus a win. This, however, is not necessarily the case. Illiquidity need not always imply lower price – it is more likely that it leads to wider spreads.

Trading an illiquid instrument implies that you need to pay a higher transaction cost. The “illiquidity discount” that many bonds see exists because people are loath to hold them (given the transaction cost), and thus fewer people are willing to buy them.

When the Treasury wants to buy back such instruments, however, it is suddenly a seller’s market – since a large number of bonds need to be bought back to take an issue off the market, sellers can command a higher spread over the “mid price”.

Matt Levine of Bloomberg View has a nice take on the “IPO pop” which I’ve written about on this blog several times (here, here, here and here). He sees it as the “market impact cost” of trying to sell a large number of securities on the market at a particular instant.

Instead of the typical trade of selling, say, $1 million of a bond with $1 billion outstanding, and paying around 0.3 percent ($3,000) for liquidity, you want to sell, say, $1 billion worth of a bond with zero bonds outstanding. That is: You want to issue a brand-new bond, and sell all of it in one day. What sort of bid-ask spread should you pay? First principles would tell you that if selling a few bonds from a large bond issue costs 0.3 percent, then selling 100 or 1,000 times as many bonds — especially brand-new bonds — should cost … I mean, not 100 or maybe even 10 times as much, but more, anyway. No?

Taking an off-the-run bond off the market is the reverse of this trade – instead of selling, you are buying a large number of bonds at the same time. And that results in a market impact cost, and you need to pay a significant bid-ask spread. So rather than buying the illiquid bonds for cheap, the US Treasury will actually have to pay a premium to retire them.
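To put rough numbers on the intuition in Levine’s quote (using my own stylised assumption of a square-root market-impact rule, which is not from the Treasury proposal or from Levine), the percentage cost of a trade grows with its size relative to a routine trade:

    import math

    def impact_cost_pct(trade_size, typical_trade, base_cost_pct=0.3):
        """Percentage cost under an assumed square-root market-impact rule."""
        return base_cost_pct * math.sqrt(trade_size / typical_trade)

    typical = 1_000_000        # a routine $1m trade costing ~0.3% (Levine's number)
    buyback = 1_000_000_000    # buying back $1bn of an off-the-run issue

    print(impact_cost_pct(typical, typical))   # ~0.3% for the routine trade
    print(impact_cost_pct(buyback, typical))   # about 9.5% under this assumption - far costlier per bond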

In other words, the Trump Trade is unlikely to really work out too well – the transaction costs of the scheme are going to defeat it. Instead, I second John Cochrane’s idea of issuing perpetual bonds and then buying them back periodically.

These securities pay $1 coupon forever. Buy these back, not on a regular schedule, but when (!) the day of surpluses comes that the government wants to pay down the debt. Then there is one issue, with market depth in the trillions, and the whole on the run vs. off the run phenomenon disappears.

People don’t worry enough about liquidity when they are trying to solve other liquidity worries, it seems!