A banker’s apology

Whenever there is a massive stock market crash, like the one in 1987, or the crisis in 2008, it is common for investment banking quants to talk about how it was a “1 in zillion years” event. This is on account of their models, which typically assume that stock prices are lognormal and that stock price movement is Markovian (today’s movement tells you nothing about tomorrow’s).

In fact, a cursory look at recent data shows that what models deem a one in zillion years event actually happens every few years, or decades. In other words, while quant models do pretty well in the average case, they have thin “tails” – they underestimate the likelihood of extreme events, leading to risk quietly building up.
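To get a sense of how extreme these model-implied probabilities are, here’s a quick back-of-envelope sketch (the 1% daily volatility is an assumed, illustrative number):

```python
# Probability of a 1987-style one-day 20% crash if daily returns are
# normally distributed with an (assumed) 1% standard deviation -- i.e.
# a 20-sigma event.
from scipy.stats import norm

p = norm.cdf(-0.20 / 0.01)   # P(daily return <= -20%)
print(p)                     # ~2.8e-89 -- "never", according to the model
print(1 / (p * 252))         # implied "once in N years", at 252 trading days
```

An event that has actually happened within living memory is, per the thin-tailed model, a once-in-10^86-years occurrence – that’s the “1 in zillion” in action.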

When I decided to end my (brief) career as an investment banking quant in 2011, I wanted to take the methods that I’d learnt into other industries. While “data science” might have become a thing in the intervening years, there is still a lot for conventional industry to learn from banking in terms of using maths for management decision-making. And this makes me believe I’m still in business.

And like my former colleagues in investment banking quant, I’m not immune to the fat tail problem either – replicating solutions from one domain in another can replicate the problems as well.

For a while now I’ve been building what I think is a fairly innovative way to represent a cricket match. Basically you look at how the balance of play shifts as the game goes along. So the representation is a line graph that shows where the balance of play was at different points of time in the game.

This way, you have a visualisation that in one shot tells you how the game “flowed”. Consider, for example, last night’s game between Mumbai Indians and Chennai Super Kings. This is what the game looks like in my representation.

What this shows is that Mumbai Indians got a small advantage midway through the innings (after a short blast by Ishan Kishan), which they held through their innings. The game was steady for about 5 overs of the CSK chase, when some tight overs created pressure that resulted in Suresh Raina getting out.

Soon, Ambati Rayudu and MS Dhoni followed him to the pavilion, and MI were in control, with CSK losing 6 wickets in the course of 10 overs. When they lost Mark Wood in the 17th over, Mumbai Indians were almost surely winners – my system reckoned that 48 to win off 21 balls was near-impossible.

And then Bravo got into the act, putting on 39 in 10 balls with Imran Tahir watching at the other end (including taking 20 off a Mitchell McClenaghan over, and 20 again off a Jasprit Bumrah over at the end of which Bravo got out). And then a one-legged Jadhav came, hobbled for 3 balls and then finished off the game.

Now, while the shape of the curve in the above graph is representative of what happened in the game, I think it went too close to the axes. 48 off 21 with 2 wickets in hand is not easy, but it’s not a 1% probability event (as my graph depicts).

And looking into my model, I realise I’ve made the familiar banker’s mistake – of assuming independence and the Markovian property. I calculate the probability of a team winning using a method called “backward induction” (which I’d learnt during my time as an investment banking quant). It’s the same method that WASP (a system to evaluate odds, invented by a few Kiwi scientists) uses, and as I’d pointed out in the past, WASP has the thin tails problem as well.
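To make the method concrete, here’s a minimal sketch of backward induction for the chasing team’s win probability. The per-ball outcome probabilities are invented for illustration (a real model would estimate them from data), and every ball is assumed independent – which is precisely the assumption that produces the thin tails:

```python
# Win probability by backward induction over (runs needed, balls left,
# wickets left). Each ball is drawn independently from a fixed outcome
# distribution -- the numbers below are invented for illustration only.
from functools import lru_cache

# (runs scored, probability); "W" denotes a wicket
OUTCOMES = [(0, 0.35), (1, 0.35), (2, 0.10), (4, 0.08), (6, 0.04), ("W", 0.08)]

@lru_cache(maxsize=None)
def p_win(runs_needed, balls_left, wickets_left):
    if runs_needed <= 0:
        return 1.0                      # target reached
    if balls_left == 0 or wickets_left == 0:
        return 0.0                      # out of balls or batsmen
    total = 0.0
    for outcome, p in OUTCOMES:
        if outcome == "W":
            total += p * p_win(runs_needed, balls_left - 1, wickets_left - 1)
        else:
            total += p * p_win(runs_needed - outcome, balls_left - 1, wickets_left)
    return total

# Last night's situation: 48 needed off 21 balls with 2 wickets in hand
print(f"{p_win(48, 21, 2):.4f}")
```

Because each ball is independent, a “Bravo gets going and keeps going” scenario gets no extra weight – the model just multiplies small per-ball probabilities and ends up deep in the tail.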

As Seamus Hogan, one of the inventors of WASP, had pointed out in a comment on that post, one way of solving this thin tails issue is to control for the pitch, or regime, and I’ve incorporated that as well (using a Bayesian system to “learn” the nature of the pitch as the game goes on). Yet, I see I still struggle with fat tails.
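For what it’s worth, that kind of pitch-learning can be sketched with a simple conjugate update – the Gamma-Poisson toy below conveys the idea, with made-up numbers, and isn’t necessarily what my actual system does:

```python
# Toy Bayesian "pitch learning": keep a Gamma prior over the pitch's
# scoring rate (runs per ball), treat runs off each ball as roughly
# Poisson, and update ball by ball. All numbers here are made up.
alpha, beta = 7.5, 6.0            # prior mean 1.25 runs/ball: an average pitch

def observe_ball(alpha, beta, runs):
    """Conjugate Gamma-Poisson update after observing one ball."""
    return alpha + runs, beta + 1

for runs in [0, 1, 4, 0, 6, 1]:   # hypothetical opening balls
    alpha, beta = observe_ball(alpha, beta, runs)

print(f"posterior mean scoring rate: {alpha / beta:.2f} runs per ball")
```

The posterior rate then feeds the win-probability model, so a flat pitch and a minefield get evaluated differently as the evidence accumulates.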

I seriously need to find a way to incorporate serial correlation into my models!

That said, I must say I’m fairly kicked about the system I’ve built. Do let me know what you think of this!

English Premier League: Goal difference to points correlation

So I was just looking down the English Premier League Table for the season, and I found that as I went down the list, the goal difference went lower. There’s nothing counterintuitive in this, but the degree of correlation seemed eerie.

So I downloaded the data and plotted a scatter plot. And what do you know? A near-perfect fit. I even ran the regression and found an R-squared of 96%.
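If you want to replicate this, the check is tiny once you have the table – here’s a sketch assuming a CSV of the league table with goal_difference and points columns (the file and column names are hypothetical):

```python
# Correlation between goal difference and points, from a downloaded table.
import pandas as pd

table = pd.read_csv("epl_table.csv")                 # hypothetical file
r = table["goal_difference"].corr(table["points"])   # Pearson correlation
print(f"R-squared: {r ** 2:.2f}")
```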

In other words, this EPL season has simply been all about scoring lots of goals and not letting in too many goals. It’s almost like the distribution of the goals itself doesn’t matter – apart from the relegation battle, that is!

PS: Look at the extent of Manchester City’s lead at the top. And what a scrap the relegation battle is!

The Derick Parry management paradigm

Before you ask, Derick Parry was a West Indian cricketer. He finished his international playing career before I was born, partly because he bowled spin at a time when the West Indies usually played four fearsome fast bowlers, and partly because he went on rebel tours to South Africa.

That, however, doesn’t mean that I never watched him play – there was a “masters” series sometime in the mid 1990s when he played as part of the “West Indies Masters” team. I don’t even remember who they were playing, or where (such series aren’t archived well, so I can’t find the scorecard either).

All I remember is that Parry was batting along with Larry Gomes, and the West Indies Masters were chasing a modest target. Parry is relevant to our discussion because of the commentator’s (don’t remember who – it was an Indian guy) repeated descriptions of how he should play.

“Parry should not bother about runs”, the commentator kept saying. “He should simply use his long reach and smother the spin and hold one end up. It is Gomes who should do the scoring”. And incredibly, that’s how West Indies Masters got to the target.

So the Derick Parry management paradigm consists of eschewing all the “interesting” or “good” or “impactful” work (“scoring”, basically – no pun intended), and simply being focussed on holding one end up, or providing support. It wasn’t that Parry couldn’t score – he had a Test batting average of 22 – but on that day the commentator wanted him to simply hold one end up and let the more accomplished batsman do the scoring.

I’ve seen this at various levels, but it usually happens at the intra-company level. There will be one team which will explicitly not work on the more interesting part of the problem, and instead simply “provide support” to another team that works on it. In a lot of cases it is not that the “supporting team” doesn’t have the ability or skills to execute the task end-to-end. It just so happens that they are part of the organisation that is “not supposed to do the scoring”. Most often, this kind of relationship is seen in companies with offshore units – the offshore unit sticks to providing support to the onshore unit, which does the “scoring”.

In some cases, the Derick Parry school extends to inter-company deals as well, where it is usually adopted in order to win the business. Basically, if you are trying to win an outsourcing contract, you don’t want to be seen doing something that the client considers “core business”. And so even if you’re fully capable of doing that work, you suppress that part of your offering and only provide support. Sometimes the plan is to do a Mustafa’s camel, but that rarely succeeds.

I’m not offering any comment on whether the Derick Parry strategy of management is good or not. All I’m doing here is attaching this oft-used strategy to a name – one that is mostly forgotten.

PM’s Eleven

The first time I ever heard of Davos was in 1997, when then Indian Prime Minister HD Deve Gowda attended the conference in the ski resort and gave a speech. He was heavily pilloried by the Kannada media, and given the moniker “Davos Gowda”.

Maybe because of all the attention Deve Gowda received for the trip, and not in a good way, no Indian Prime Minister ventured to go there for another twenty-one years. Until, of course, Narendra Modi went there earlier this week and gave a speech that was apparently widely appreciated in China.

There is another thing that connects Modi and Deve Gowda as Prime Ministers (leaving aside trivialities such as both having been chief ministers of their respective states before becoming Prime Minister).

Back in 1996, when Deve Gowda was Prime Minister, Rahul Dravid, Venkatesh Prasad and Sunil Joshi made their Test debuts (on the tour of England). Anil Kumble and Javagal Srinath had long been fixtures in the Indian cricket team. Later that year, Sujith Somasunder played a couple of one-dayers. David Johnson played two Tests. And in early 1997, Doddanarasaiah Ganesh played a few Test matches.

In case you haven’t yet figured out, all these cricketers came from Karnataka, the same state as the Prime Minister. During that season, it was normal for at least five players in the Indian Eleven to be from Karnataka. Since Deve Gowda had become Prime Minister around the same time, there was no surprise that the Indian cricket team was called “PM’s Eleven”. Coincidentally, the chairman of selectors at that point in time was Gundappa Vishwanath, who is also from Karnataka.

The Indian team playing in the current Test match in Johannesburg has four players from Gujarat. Now, this is not as noticeable as five players from Karnataka because Gujarat is home to three Ranji Trophy teams. Cheteshwar Pujara plays for Saurashtra, Parthiv Patel and Jasprit Bumrah play for Gujarat, and Hardik Pandya plays for Baroda. And Saurashtra’s Ravindra Jadeja is also part of the squad.

It had been a long time since one state had so dominated the Indian cricket team – perhaps not since Karnataka’s domination in the late 1990s. And it so happens that once again the state dominating the Indian cricket team is the Prime Minister’s home state.

So after a gap of twenty-one years, we had an Indian Prime Minister addressing Davos. And after a gap of twenty-one years, we have an Indian cricket team that can be called “PM’s Eleven”!

As Baada put it the other day, “Modi is the new Deve Gowda. Just without family and sleep”.

Update: I realised after posting that I have another post called “PM’s Eleven” on this blog. It was written in the UPA years.

Duckworth Lewis Book

Yesterday at the local council library, I came across this book called “Duckworth Lewis”, written by Frank Duckworth and Tony Lewis (who “invented” the eponymous rain rule). While I’d never heard of the book, given my general interest in sports analytics I picked it up, and duly finished reading it by this morning.

The good thing about the book is that though it’s in some ways a joint autobiography of Duckworth and Lewis, they keep the details of their personal lives to a minimum and mostly focus on what they are famous for. There are occasions when they go into too much detail describing a trip to Australia or the West Indies, but it’s easy to filter out such stuff and read the book for the rain rule.

Then again, it isn’t a great book. If you’re not interested in cricket analytics, there isn’t much in it for you. But given that it’s a quick read, it doesn’t hurt to go through it! Anyway, here are some pertinent observations:

  1. Duckworth and Lewis didn’t get paid much for their method. They managed to get the ICC to accept it sometime in the mid 90s, but it wasn’t until the early 2000s, by which time Lewis had become a business school professor, that they managed to strike a financial deal with the ICC. Even then, they make it sound like they didn’t make much money off it.
  2. The method came about when Duckworth quickly put together something for a statistics conference he was organising, after another speaker who was supposed to talk about cricket pulled out at the last minute. Lewis later came across the paper, and got one of his undergrad students to do a project on it. The two men subsequently collaborated.
  3. It’s amazing (not in a positive way) what kind of data went into the method. Until the early 2000s, the only dataset used to calibrate the method was the one put together by Lewis’s undergrad – mostly English county games, played over 40, 55 and 60 overs. Even after that, the method has been updated with new data (which reflects new playing styles and strategies) rather infrequently.
  4. The system doesn’t seem to have been particularly well software-engineered – it was initially simply coded up by Duckworth, and until as late as 2007 it ran on the DOS operating system. It was only in 2008 or so, when Steven Stern joined the team (the method is now called DLS to include his name), that a Windows version was introduced.
  5. There is very little discussion of alternative methods, and though there is a chapter about them, Duckworth and Lewis are rather dismissive. For example, another popular method is by this guy called V Jayadevan from Thrissur. Here is some excellent analysis by Srinivas Bhogle where he compares the two methods. Duckworth and Lewis spend a couple of pages listing a few scenarios where Jayadevan’s method doesn’t work, and then spend a paragraph disparaging Bhogle for his support of the VJD method.
  6. This was the biggest takeaway from the book for me – the Duckworth Lewis method doesn’t equalise the probabilities of victory of the two teams before and after the rain interruption. Instead, the method equalises the margin of victory between the teams before and after the break. So let’s say a team was 10 runs behind the DL “par score” when it rained. When the game restarts, the target is set such that the team is still 10 runs behind the par score (see the toy sketch after this list)! They make an attempt to explain why this is superior to equalising probabilities of winning, but don’t go too far with it.
  7. The adoption of Duckworth Lewis seems like a fairly random event. Following the 1992 World Cup debacle (when South Africa’s target went from 22 off 13 balls to 22 off 1 ball after a rain break), there was a demand for new rain rules. Duckworth and Lewis somehow managed to explain their method to the ECB secretary. And since it was superior to everything then available, it simply got adopted. And then it became the incumbent, and hard to dislodge!
  8. There is no mention in the book about the inherent unfairness of the DL method (in that it can be unfair to some playing styles).
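Here’s the toy sketch promised in point 6. The “resource table” below is entirely made up (the real DL tables are much more detailed), but it shows the mechanics: when overs are lost, the target is scaled to the resources available, and the chasing team resumes exactly as many runs behind the new par score as it was behind the old one.

```python
# Invented stand-in for the DL resource table: % of scoring resources
# still available given overs left and wickets lost. NOT the real table.
def resources(overs_left, wickets_lost):
    return (overs_left / 50.0) * 100 * (10 - wickets_lost) / 10.0

S = 250                       # team 1's score
score, wkts = 110, 4          # chasing side at the interruption, 20 overs left

res_used = 100 - resources(20, wkts)     # resources consumed so far
par_before = S * res_used / 100
print(f"deficit at the break: {par_before - score:.1f} runs")

# Rain cuts the chase to 10 remaining overs, shrinking total resources.
res_total_new = res_used + resources(10, wkts)
new_target = S * res_total_new / 100     # DL-style revised target

# Par immediately after the restart, under the revised target:
par_after = new_target * res_used / res_total_new
print(f"deficit at the restart: {par_after - score:.1f} runs")  # unchanged
```

Run this and both deficits come out identical – the margin in runs carries through the break, whatever the break does to the probability of winning.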

Ok this is already turning out to be a long post, but one final takeaway is that there’s a fair amount of randomness in sports analytics, and you shouldn’t get into it if your only potential customer is a national sporting body. In that sense, developments such as the IPL are good for sports analytics!

Biases, statistics and luck

Tomorrow Liverpool plays Manchester City in the Premier League. As things stand now I don’t plan to watch this game. This entire season so far, I’ve only watched two games. First, I’d gone to a local pub to watch Liverpool’s visit to Manchester City, back in September. Liverpool got thrashed 5-0.

Then in October, I went to Wembley to watch Tottenham Hotspur play Liverpool. The Spurs won 4-1. These two remain Liverpool’s only defeats of the season.

I might consider myself to be a mostly rational person but I sometimes do fall for the correlation-implies-causation bias, and think that my watching those games had something to do with Liverpool’s losses in them. Never mind that these were away games played against other top sides which attack aggressively. And so I have this irrational “fear” that if I watch tomorrow’s game (even if it’s from a pub), it might lead to a heavy Liverpool defeat.

And so I told Baada, a Manchester City fan, that I’m not planning to watch tomorrow’s game. And he got back to me with some statistics, which he’d heard from a podcast. Apparently it’s been 80 years since Manchester City did the league “double” (winning both home and away games) over Liverpool. And that it’s been 15 years since they’ve won at Anfield. So, he suggested, there’s a good chance that tomorrow’s game won’t result in a mauling for Liverpool, even if I were to watch it.

With the easy availability of statistics, it has become a thing for football commentators to supply them during commentary. And at first hearing, things like “never done this in 80 years” or “never done that in the last 15 years” sound compelling, and you’re inclined to believe that there is something to these numbers.

I don’t remember if it was Navjot Sidhu who said that statistics are like a bikini (“what they reveal is significant but what they hide is crucial”, or something). That Manchester City haven’t done a double over Liverpool in 80 years doesn’t mean a thing, nor does the fact that they haven’t won at Anfield in 15 years.

Basically, until the mid 2000s, City were a middling team. I remember telling Baada after the 2007 season (when Stuart Pearce got fired as City manager) that they’d surely be relegated the next season. And then came the investment from Thaksin Shinawatra. And the appointment of Sven-Goran Eriksson as manager. And then the YouTube signings. And later the investment from the Abu Dhabi investment group. And in 2016 the appointment of Pep Guardiola as manager. And the significant investment in players after that.

In other words, Manchester City of today is a completely different team from what they were even 2-3 years back. And they’re surely a vastly improved team compared to a decade ago. I know Baada has been following them for over 15 years now, but they’re unrecognisable from the time he started following them!

Yes, even with City being a much improved team, Liverpool have not lost to them at home in the last few years – but then Liverpool have generally been strong at home in these years! On the other hand, City’s 18-game winning streak (which included wins at Chelsea and Manchester United) came to an end (with a draw against Crystal Palace) only rather recently.

So anyway, here are the takeaways:

  1. Whether I watch the game or not has no bearing on how well Liverpool will play. The instances from this season so far are based on (1) small samples and (2) biased samples (since I’ve chosen to watch Liverpool’s two toughest games of the season).
  2. The 80-year history of the fixture has no bearing, since the teams have evolved significantly over those 80 years. A record having stood that long has no predictive power for tomorrow’s game.
  3. City have been in tremendous form this season, and Liverpool have just lost their key player (by selling Philippe Coutinho to Barcelona), so City can fancy their chances. That said, Anfield has been a fortress this season, so Liverpool might just hold out (or even win).

All of this points to a good game tomorrow! Maybe I should just watch it!

AlphaZero defeats Stockfish: Quick thoughts

The big news of the day, as far as I’m concerned, is the victory of Google Deepmind’s AlphaZero over Stockfish, currently the highest rated chess engine. This comes barely months after Deepmind’s AlphaGo Zero had bested the earlier avatar of AlphaGo in the game of Go.

Like its Go version, the AlphaZero chess-playing machine learnt using reinforcement learning (I remember doing a term paper on the concept back in 2003, but have mostly forgotten it). Basically, it wasn’t given any “training data”; instead, the machine trained itself by continuously playing against itself, with the feedback at each stage of learning helping it play better.

After only about four hours of “training” (basically playing against itself and discovering moves), AlphaZero managed to record this victory in a 100-game match, winning 28 and losing none (the rest of the games were drawn).

There’s a sample game here on the Chess.com website, and while this might be a biased sample (it’s likely that the AlphaZero engineers included the most spectacular games in their paper, from which this is taken), the way AlphaZero plays is vastly different from the way engines such as Stockfish play.

I’m not that much of a chess expert (I “retired” from my playing career back in 1994), but the striking things for me from this game were:

  • The move 7. d5 against the Queen’s Indian
  • The piece sacrifice a few moves later that was hard to see
  • AlphaZero’s consistent attempts until late in the game to avoid trading queens
  • The move Qh1 somewhere in the middle of the game

In a way (and consistent with some of the themes of this blog), AlphaZero can be described as a “stud” chess machine, having taught itself to play based on feedback from games it has already played. (The way reinforcement learning broadly works is that actions that led to “good rewards” are incentivised in the next iteration, while those that led to “poor rewards” are penalised. The challenge in this case is to set up chess in a way that is conducive to a reinforcement learning system.)
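For the record, here’s that loop in its most stripped-down form – a toy tabular learner teaching itself Nim by self-play. This is nothing like AlphaZero’s actual machinery (which couples deep neural networks with tree search); it only illustrates the incentive structure described above:

```python
# Toy self-play reinforcement learning for Nim: take 1-3 stones from a
# pile of 15; whoever takes the last stone wins. Moves made by the winner
# get nudged up, moves made by the loser get nudged down.
import random
from collections import defaultdict

Q = defaultdict(float)       # learned value of (stones_remaining, move)
EPS, ALPHA = 0.1, 0.05       # exploration rate, learning rate

def pick_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPS:
        return random.choice(moves)                    # explore
    return max(moves, key=lambda m: Q[(stones, m)])    # exploit

for _ in range(50_000):      # self-play episodes
    stones, player, history = 15, 0, []
    while stones > 0:
        move = pick_move(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player      # the player who took the last stone
    for p, s, m in history:
        reward = 1.0 if p == winner else -1.0
        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])

# The learner discovers the optimal policy of leaving multiples of 4:
print(max((1, 2, 3), key=lambda m: Q[(15, m)]))        # typically prints 3
```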

Engines such as Stockfish, on the other hand, are absolute “fighters”. They get their “power” from brute force, searching nearly all possible paths in the game several moves deep. This is supplemented by analysis of millions of existing games of various levels, which the engine “learns” from – among other things, it learns how to prune and prioritise the paths it searches. Stockfish is also fed a database of chess openings, which it remembers and tries to play.

What is interesting is that AlphaZero “discovered” some popular chess openings through the course of its self-learning. Some popular openings, such as the King’s Indian or the French, find little favour with this engine, while others, such as the Queen’s Gambit or the Queen’s Indian, find favour. This is a very interesting development for opening theory itself.

Frequency of openings over time employed by AlphaZero in its “learning” phase. Image sourced from AlphaZero research paper.

In any case, my immediate concern from this development is how it will affect human chess. Over the last decade or two, engines such as Stockfish have played a profound role in the development of chess, with current top players such as Magnus Carlsen and Sergey Karjakin having trained extensively with these engines.

The way top grandmasters play has seen a steady change over these years, as they have ingested ideas from engines such as Stockfish. The game has become far quieter and more positional, with players seeking small advantages that steadily accumulate over the course of (long) games. This is consistent with the way the engines they learn from play.

Based on the evidence of the one game I’ve seen, AlphaZero plays very differently from the existing engines. It will be interesting to see how human players who train with AlphaZero-based engines (or their clones) change their game.

Maybe chess will turn back to being a bit more tactical than it’s been in the last decade? It’s hard to say right now!

Lessons from poker party

In the past I’ve drawn lessons from contract bridge on this blog – notably, I’d described a strategy called “queen of hearts” to maximise the chances of winning in a game that is terribly uncertain. It’s been years since I played bridge, or any card game for that matter. So when I got invited to a poker party over the weekend, I jumped at the invitation.

This was only the second time ever that I’d played poker in a room – I’ve mostly played online, where there are no monetary stakes and you see people go all in on every hand with weak cards. And it was a large table, with at least 10 players involved in each hand.

A couple of pertinent observations (a reasonable return for the £10 I lost that night).

Firstly, a windfall can make you complacent. I’m usually a conservative player, betting aggressively only when I know I have a good chance of winning. I haven’t played enough to have mugged up all the probabilities – that probably offers an edge to my opponents. But I have a reasonable idea of what constitutes a good hand, and bet accordingly.

My big drawdown happened in the hand immediately after I’d won big. After an hour or so of bleeding money, I’d suddenly more than broken even. That meant that in the next hand, I bet a bit more aggressively than my cards warranted. For a while I managed to stay rational (after the flop I knew I had a 1/6 chance of winning big, and having mugged up the Kelly Criterion on my way to the party, I bet accordingly).
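For reference, the Kelly Criterion prescribes what fraction of your stack to put in, given your probability of winning and the odds the pot is laying you. A quick sketch, with illustrative odds rather than the actual hand’s:

```python
# Kelly criterion: optimal fraction of bankroll to stake on a bet won
# with probability p at net odds of b-to-1. Negative output means fold.
def kelly_fraction(p, b):
    q = 1 - p
    return (b * p - q) / b

# e.g. a 1/6 chance of winning, with the pot laying 8-to-1 on the call:
print(f"{kelly_fraction(1/6, 8):.3f}")   # ~0.062: stake about 6% of the stack
```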

And when the turn wasn’t to my liking, I should’ve just got out – the (approximate) percentages didn’t make sense any more. But I simply kept at it, falling for the sunk cost fallacy (what I’d already put into the hand). I lost some 30 chips in that one hand, of which at least 21 went in at the turn and the river. Without the high of having won the previous hand, I would’ve played more rationally and lost only 9. After all the lectures I’ve given on logic, correlation-causation and the sunk cost fallacy, I’m sad I lost so badly because of the last one.

The second big insight is that poverty leads to suboptimal decisions. This is a well-studied topic in economics, but I got to experience it first-hand during the session. It was later in the night, when I was bleeding money (and was down to about 20 chips).

I got pocket aces (a pair of aces in hand) – something I should’ve bet aggressively with. But with the flop falling low and uncoordinated, far from the face cards, I wasn’t sure of the total strength of my hand (mugging up the probabilities would’ve helped for sure!). So when I had to put in 10 chips to stay in the hand, I baulked, and folded.

Given the play at the table thus far, it was definitely a risk worth taking, and with more in the bank, I would have taken it. But poverty and the Kelly Criterion meant that the number of chips I could invest in an arguably strong hand was limited, and that limited my opportunity to profit from the game.

It is no surprise that the rest of the night petered out for me, as my funds dwindled and my ability to play diminished. Maybe I should’ve bought in again when I was down to 20 chips – but given my ability relative to the rest of the table, that would’ve been throwing good money after bad.

Auctions of distressed assets

Bloomberg Quint reports that several prominent steel makers are in the fray for the troubled Essar Steel’s assets. Interestingly, the list of interested parties includes the promoters of Essar Steel themselves. 

The trouble with selling troubled assets or bankrupt companies is that it is hard to put a value on them. Cash flows and liabilities are uncertain, as is the value of the residual assets that the company can keep at the end of the bankruptcy process. As a result of this uncertainty, both buyers and sellers are likely to slap a big margin onto their price expectations – so that even if they end up overpaying (or being underpaid), there is a reasonable margin of error.

Consequently, several auctions of bankrupt companies’ assets fail (even though an auction is normally a good mechanism to sell such assets, since it brings together several buyers in a competitive process, and the seller – usually a court-appointed bankruptcy manager – can extract the maximum possible value). Sellers slap a big margin of error on their asking price and set a high reserve price. Buyers go conservative in their bids and possibly bid too low.

As we have seen with the attempted auctions of the properties of Vijay Mallya (promoter of the now bankrupt Kingfisher Airlines) and Subroto Roy Sahara (promoter of the eponymous Sahara Group), such auctions regularly fail. It is the uncertainty of the value of assets that dooms the auctions to failure.

What sets the Essar Steel bankruptcy process apart is that while the company might be bankrupt, the promoters (the Ruia brothers) are not. And having run the company (albeit into the ground), they possess valuable information on the value of the assets that remain with it. In a bankruptcy process where neither the other buyers nor the sellers have adequate information, this can prove invaluable.

When I first saw the report on Essar’s asset sale, I was reminded of the market for footballers that I talk about in my book Between the buyer and the seller. That market, too, suffers from wide bid-ask spreads on account of difficulty in valuation.

Like distressed companies, the market for footballers also sees few buyers and sellers. And what we see there is that deals usually happen at either end of the bid-ask spectrum – if the selling club is more desperate to sell, the deal happens at an absurdly low price, and if the buying club wants the deal more badly, they pay a high price for it.

I’ve recorded a podcast on football markets with Amit Varma, for The Seen and the Unseen podcast.

Coming back to distressed companies, it is well known that the seller (usually a consortium of banks or their representatives) wants to sell, and is usually the more desperate party. Consequently, we can expect the deal to happen close to the bid price. A few auctions might fail in case the sellers set their expectations too high (all buyers bid low since value is uncertain), but that will only make the seller more desperate, which will bring down the price at which the deal happens.

So don’t be surprised if the Ruias do manage to buy Essar Steel, and if they manage to do that at a price that seems absurdly low! The price will be low because there are few buyers and sellers, and the seller is the more desperate party. And the Ruias will win the auction because their inside information about the company they used to run will enable them to make a much better bid.

Football transfer markets

So the 2017 “summer transfer window” closes in three days’ time. It’s been an unusual market, with oddly inflated valuations – such as Neymar going from Barcelona to PSG for ~€200 million, and Manchester City paying in excess of £50 million each for a pair of full backs (Kyle Walker and Benjamin Mendy).

Meanwhile, trades are on in the NBA as well. Given that American sporting leagues have a rather socialist structure, no money is exchanged. Instead, you have complicated structures such as this one between the Cleveland Cavaliers and the Boston Celtics:

…by trading Kyrie Irving, their star point guard, to the Boston Celtics. In exchange, Mr Altman received a package of three players headlined by Isaiah Thomas, plus a pick in the 2018 entry draft

A week back, renowned blogger Amit Varma interviewed me for his podcast, The Seen and the Unseen. The topic was football transfers, something I talk about in the first chapter of my soon-to-be-published book. In it, he asked me what the football transfer market might look like in the absence of price, and I said that in such a situation PSG might have had to give up their entire team to buy Neymar.

Anyway, listen to the entire podcast episode here.

Oh, and I don’t know if I mentioned it here before, but my book is ready now and will be released on the 8th of September. It’s being published by the Takshashila Institution.

You can pre-order the book on Amazon. For some reason, the Kindle India store doesn’t have a facility to pre-order, so if you live in India and want to read the book on Kindle, you’ll have to wait until the 8th of September. Kindle stores elsewhere already allow you to pre-order. Follow the link above.