Premier League Points Efficiency

It would be tautological to say that you win in football by scoring more goals than your opponent. What is interesting is that scoring more goals and letting in fewer works across games in a season as well, as data from the English Premier League shows.

We had seen an inkling of this last year, when I showed that points in the Premier League were highly correlated with goal difference (a 96% R-squared, for those who are interested). A little past the midway point of the current season, the correlation holds – 96% again.

In other words, a team’s goal difference (number of goals scored minus goals let in) can explain 96% of the variance in the number of points gained by the team in the season so far. The point of this post is to focus on the remaining 4%.

In the above image, the blue line is the line of best fit (or regression line). This line predicts the number of points scored by a team given their goal difference. Teams located above this line have been more efficient or lucky – they have got more points than their goal difference would suggest. Teams below this line have been less efficient or unlucky – their goal difference has been distributed badly across games, leading to fewer points than the team should have got.
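For the curious, here is roughly how such a chart can be put together in R. The file name and column names are placeholders for wherever you get your league table from, not the actual data I used.

```r
library(ggplot2)

# League table with one row per team; file and column names are placeholders
epl <- read.csv("epl_table.csv", stringsAsFactors = FALSE)
epl$goal_diff <- epl$goals_for - epl$goals_against

# Points regressed on goal difference; R-squared tells us how much of the
# variance in points the goal difference explains
fit <- lm(points ~ goal_diff, data = epl)
summary(fit)$r.squared

# Positive residuals = "efficient or lucky" teams, negative = the opposite
epl$extra_points <- residuals(fit)

# Scatter plot with the line of best fit
ggplot(epl, aes(goal_diff, points, label = team)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, colour = "blue") +
  geom_text(vjust = -0.5, size = 3) +
  labs(x = "Goal difference", y = "Points")
```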

Manchester City seem to be extremely unlucky this season, in that they have got about five fewer points than their goal difference suggests. The other teams close to the top of the league are all above the line – showing they’ve been more efficient in the way their goals have been distributed (Spurs and Arsenal have been luckier than ManYoo, Chelski and Liverpool).

At the other end of the table, Huddersfield Town have been unlucky – their goal difference suggests they should have had four more points – a big difference for a relegation-threatened team. Southampton, Newcastle and Crystal Palace are also in the same boat.

Finally, the use of goal difference to break ties in league tables is an attempt to undo the luck (or lack of it) that results in teams under- or over-performing in terms of points, given the number of goals they’ve scored and let in. Some teams will have got many more (or fewer) points than they deserved by sheer dint of their goals having been distributed better (or worse) across matches – big losses and narrow wins, or the reverse. Using goal difference as a tiebreaker is a small attempt to set that right.

Football Elo Application

This morning, I discovered the Club Elo Ratings, and promptly proceeded to analyse Liverpool FC’s performance over the years based on these ratings, and then to look at that performance manager by manager.

Then, playing around with the data of different clubs, I realised that there are plenty more stories to be told using this data, and they are best told by people who are passionate about their respective clubs. So the best thing I could do was to put the data out there (in a form similar to what I did for Liverpool), so that people can analyse how their clubs have performed over the years, and under different managers.

Sitting beside me as I was doing this analysis, my wife popped in with a pertinent observation. Now, she doesn’t watch football. She hates it that I watch so much football. Nevertheless, she has a strong eye for metrics. And watching me analyse club performance by manager, she asked me if I could analyse manager performance by club!

And so I’ve added that as well to the Shiny app that I’ve built. It might look a bit clunky, with two seemingly unrelated graphs, one on top of the other, but since the two are strongly related, it makes sense to have both in the same app. The managers listed in the bottom dropdown are those who have managed at least two clubs in the Premier League.
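For those curious about the plumbing, here is a minimal sketch of how such a two-panel Shiny app can be wired up. The data frame, file name and column names are placeholders of my own, not the actual app’s internals.

```r
library(shiny)
library(ggplot2)

# Assumed input: one row per club per rating date, with the manager then in
# charge. File and column names (Club, Manager, Date, Elo) are placeholders.
elo_df <- read.csv("club_elo_with_managers.csv", stringsAsFactors = FALSE)
elo_df$Date <- as.Date(elo_df$Date)

# Managers who have managed at least two clubs, as in the post
mgr_clubs  <- unique(elo_df[, c("Manager", "Club")])
multi_club <- sort(names(which(table(mgr_clubs$Manager) >= 2)))

ui <- fluidPage(
  selectInput("club", "Club", choices = sort(unique(elo_df$Club))),
  plotOutput("club_plot"),
  selectInput("manager", "Manager", choices = multi_club),
  plotOutput("manager_plot")
)

server <- function(input, output) {
  # Club performance over time, coloured by managerial regime
  output$club_plot <- renderPlot({
    ggplot(subset(elo_df, Club == input$club), aes(Date, Elo, colour = Manager)) +
      geom_line() +
      labs(x = NULL, y = "Elo rating", title = input$club)
  })
  # Manager performance, one panel per club they have managed
  output$manager_plot <- renderPlot({
    ggplot(subset(elo_df, Manager == input$manager), aes(Date, Elo)) +
      geom_line() +
      facet_wrap(~ Club, scales = "free_x") +
      labs(x = NULL, y = "Elo rating", title = input$manager)
  })
}

shinyApp(ui, server)
```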

If you’re interested in Premier League football, you should definitely check out the app. I think there are some interesting insights to be gleaned (such as what I presented in this morning’s post).

Built by Shanks

This morning, I found this tweet by John Burn-Murdoch, a data journalist at the Financial Times, about a graphic he had made for a Simon Kuper (of Soccernomics fame) piece on Jose Mourinho.

Burn-Murdoch also helpfully shared the code he had written to produce this graphic, through which I discovered ClubElo, a website that produces chess-style Elo ratings for football clubs. They have a free and open API, through which Burn-Murdoch got the data for the above graphic, and which I used to download all-time Elo ratings for all clubs available (I can be greedy that way).

So the first order of business was to see how Liverpool’s rating has moved over time. The initial graph looked interesting, but not very informative, so I decided to overlay it with the periods of the club’s managerial regimes (data I got from Wikipedia). And this is what the all-time Elo rating of Liverpool looks like.
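If you want to reproduce this for your own club: ClubElo serves one CSV per club, and the managerial regimes can be overlaid as shaded bands. The URL pattern and column names below are what the API appears to return (verify before relying on them), and the manager dates are a small, approximate subset typed in from Wikipedia.

```r
library(ggplot2)

# ClubElo's free API serves one CSV per club; the URL pattern and the column
# names (Elo, From) used here are assumptions to check against the API itself
lfc <- read.csv("http://api.clubelo.com/Liverpool", stringsAsFactors = FALSE)
lfc$From <- as.Date(lfc$From)

# A small, approximate subset of managerial tenures, typed in from Wikipedia
managers <- data.frame(
  manager = c("Bill Shankly", "Bob Paisley", "Jurgen Klopp"),
  start   = as.Date(c("1959-12-01", "1974-07-26", "2015-10-08")),
  end     = c(as.Date(c("1974-07-12", "1983-07-01")), Sys.Date())
)

# All-time Elo rating with shaded bands for each managerial regime
ggplot() +
  geom_rect(data = managers,
            aes(xmin = start, xmax = end, ymin = -Inf, ymax = Inf, fill = manager),
            alpha = 0.2) +
  geom_line(data = lfc, aes(From, Elo)) +
  labs(x = NULL, y = "Club Elo rating", fill = "Manager")
```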

It is easy to see that the biggest improvement in the club’s performance came under the long reign of Bill Shankly (no surprises there), who took them from the Second Division to winning the old First Division. There was a brief dip when Shankly retired and his assistant Bob Paisley took over (might this be the time when Paisley, intimidated by Shankly’s frequent visits to the club, asked him not to come any more?), but Paisley consolidated on Shankly’s improvement to lead the club to its first three European Cups.

Around 2010, when the club was owned by Americans Tom Hicks and George Gillett and on a decline in terms of performance, this banner became popular at Anfield.

The Yanks were subsequently yanked following a protracted court battle, to be replaced by another Yank (John W Henry), under whose ownership the club has done much better. What is also interesting from the above graph is the timing of the managerial changes.

At the time, Kenny Dalglish’s sacking at the end of the 2011-12 season (which ended with Liverpool losing the FA Cup final to Chelsea) seemed unfair, but the Elo rating shows that the club’s rating had fallen below the level at which Dalglish had taken over (initially as caretaker). Then there was a steep ascent under Brendan Rodgers (leading to the second-place finish in 2013-14), after which Suarez bit and got sold, and the team went into deep decline.

Again, we can see that Rodgers got sacked when the team had reverted to the rating that he had started off with. That’s when Jurgen Klopp came in, and thankfully so far there has been a much longer period of ascendance (which will hopefully continue). It is interesting to see, though, that the club’s current rating is still nowhere near the peak reached under Rafa Benitez (in the 2008-9 title challenge).

Impressed by the story that Elo Ratings could tell, I got data on all Premier League managers, and decided to repeat the analysis for all clubs. Here is what the analysis for the so-called “top 6” clubs returns:

We see, for example, that Chelsea’s ascendancy started not with Mourinho’s first term as manager, but towards the end of Ranieri’s term – when Roman Abramovich had made his investment. We find that Jose Mourinho actually made up for the decline under David Moyes and Louis van Gaal, and then started losing it. In that sense, Manchester United have got their sacking timing right (though they were already in decline by the time they finished last season in second place).

Manchester City also seem to have done pretty well in terms of the timing of managerial changes. And Spurs’s belief in Mauricio Pochettino, who started off badly, seems to have paid off.

I wonder why Elo Ratings haven’t made more impact in sports other than chess!

What Ails Liverpool

So Liverpool FC has had a mixed season so far. They’re second in the Premier League with 36 points from 14 games (only points dropped being draws against ManCity, Chelsea and Arsenal), but are on the verge of going out of the Champions League, having lost all three away games.

Yesterday’s win over Everton was damn lucky, down to a 96th-minute freak goal scored by Divock Origi (I’d forgotten he’s still at the club). Last weekend’s 3-0 against Watford wasn’t as comfortable as the scoreline suggested – the scoring was only opened midway through the second half. The 2-0 against Fulham before that was similarly a close-fought game.

Of concern to most Liverpool fans has been the form of the starting front three – Mo Salah, Roberto Firmino and Sadio Mane. The trio has missed a host of chances this season, and the team has looked incredibly ineffective in the away losses in the Champions League (the only shot on target in the 2-1 loss against PSG being the penalty that was scored by Milner).

There are positives, of course. The defence has been tightened considerably compared to last season – Liverpool aren’t leaking goals the way they did, and there have been quite a few clean sheets. And there has been no repeat of last season’s situation where they went 4-1 up against ManCity, only to quickly let in two goals and set up a tense finish.

So my theory is this – each of the front three of Liverpool has an incredibly low strike rate. I don’t know if the xG stat captures this, but the number of chances each of Mane, Salah and Firmino needs before converting is rather high. If the average striker converts one in two chances, all of these guys convert one in four (these numbers are pulled out of thin air; I haven’t looked at the statistics).

And even during the “glory days” of last season, when Liverpool was scoring like crazy, this low strike rate remained. What helped then was a massive increase in the number of chances created. In the one game I watched live (against Spurs at Wembley), what struck me was the number of chances Salah kept missing. But as the chances kept getting created, he ultimately scored one (Liverpool lost 4-1).

What I suspect is that as Klopp decided to tighten things up at the back this season, the number of chances being created has dropped. And with the low strike rate of each of the front three, this lower number of chances translates into a much lower number of goals being scored. If we want last season’s scoring rate, we might also have to accept last season’s concession rate (though this season’s goalie is much, much better).
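To make the arithmetic of that trade-off explicit (with invented numbers, like everything else in this post):

```r
# Back-of-the-envelope version of the argument; every number here is invented
conversion_rate <- 0.25           # one chance in four converted, as above

chances_last_season <- 20         # chances created per game, attack-first setup
chances_this_season <- 12         # chances created per game, tightened-up setup

chances_last_season * conversion_rate  # ~5 goals' worth of chances per game
chances_this_season * conversion_rate  # ~3, with the same strikers
```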

There ain’t no such thing as a free lunch.

Bankers predicting football

So the Football World Cup season is upon us, and this means that investment banking analysts are again engaging in the pointless exercise of trying to predict who will win the World Cup. The funny thing this time is that thanks to MiFID II regulations, which prevent banking analysts from giving out reports for free, these reports aren’t in the public domain.

That means we have to rely on media reports of these reports, or on people tweeting insights from them. For example, the New York Times has summarised the banks’ predictions on the winner. And this scatter plot from Goldman Sachs will go straight into my next presentation on spurious correlations:

Different banks have taken different approaches to predict who will win the tournament. UBS has still gone for a classic Monte Carlo simulation approach, but Goldman Sachs has gone one better and used “four different methods in artificial intelligence” to predict (for the third consecutive time) that Brazil will win the tournament.

In fact, Goldman also uses a Monte Carlo simulation, as Business Insider reports.

The firm used machine learning to run 200,000 models, mining data on team and individual player attributes, to help forecast specific match scores. Goldman then simulated 1 million possible variations of the tournament in order to calculate the probability of advancement for each squad.

But an insider at Goldman with access to the report tells me that they don’t use the phrase “Monte Carlo” itself in the report. Maybe it’s a suggestion that “data scientists” have taken over the investment research division at the expense of quants.

I’m also surprised by the reporting on Goldman’s predictions. Everyone simply reports that “Goldman predicts that Brazil will win”, but surely (based on the model they’ve used) that prediction has been made with a certain probability? A better way of reporting would’ve been to say “Goldman predicts Brazil most likely to win, with X% probability” (and the bank’s bets desk in the UK could have placed some money on it).
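For what it’s worth, the shape of a Monte Carlo tournament simulation – and why it naturally spits out a probability rather than a flat “Brazil will win” – is easy to sketch. Here is a toy version: a straight knockout among eight teams (no group stage), with invented Elo-style ratings driving the win probabilities. It is nothing like the banks’ feature-rich models, just the skeleton of the method.

```r
set.seed(42)

# Toy tournament simulation: a straight knockout (no group stage) among eight
# teams. The Elo-style ratings below are invented for illustration only.
ratings <- c(Brazil = 2050, Germany = 2000, Spain = 1980, France = 1970,
             Argentina = 1950, Belgium = 1930, England = 1900, Portugal = 1890)

# Probability that team a beats team b, from the standard Elo win expectancy
p_win <- function(a, b) 1 / (1 + 10 ^ ((ratings[b] - ratings[a]) / 400))

simulate_knockout <- function(teams) {
  while (length(teams) > 1) {
    winners <- character(0)
    for (i in seq(1, length(teams), by = 2)) {
      a <- teams[i]
      b <- teams[i + 1]
      winners <- c(winners, if (runif(1) < p_win(a, b)) a else b)
    }
    teams <- sample(winners)   # re-draw the next round at random
  }
  teams
}

# Run many tournaments and see how often each team ends up as champion
champions <- replicate(10000, simulate_knockout(sample(names(ratings))))
sort(table(champions) / length(champions), decreasing = TRUE)
```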

ING went rather simpler with their forecast – they simply took players’ transfer values, summed them up by team, and concluded that Spain is most likely to win because their squad is the “most valued”. Now, I have two major questions about this approach. Firstly, it ignores the “correlation term” (remember the famous England conundrum of the noughties, of fitting Gerrard and Lampard into the same eleven?), and assumes that a set of strong players makes a strong team. Secondly, have they accounted for inflation? And if so, how? Player valuations (about which I have a chapter in my book) have simply gone through the roof in the last year, with Mo Salah at £35 million being considered a “bargain buy”.
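ING’s method is, at its heart, a one-line aggregation (sketched below with placeholder file and column names), which is exactly why the two objections above matter:

```r
# ING-style squad valuation: sum each squad's transfer values and rank.
# The file and column names are placeholders, not ING's actual data.
players <- read.csv("wc2018_player_values.csv", stringsAsFactors = FALSE)
squad_value <- aggregate(transfer_value ~ team, data = players, FUN = sum)
squad_value[order(-squad_value$transfer_value), ]
```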

Nomura also seems to have taken a similar approach, though they have in some ways accounted for the correlation term by including “team momentum” as a factor!

Anyway, I look forward to the football! That it is live on BBC and ITV means I get to watch the tournament from the comfort of my home (a luxury in England!). Also being in England means all matches are at a sane time, so I can watch more of this World Cup than the last one.

 

The science of shirt numbers

Yesterday, Michael Cox, author of the Zonal Marking blog and The Mixer, tweeted:

Now, there is some science to how football shirts are numbered. I had touched upon it in a very similar post I had written four years ago. You can also read this account on how players are numbered. And if you’re more curious about formations and their history, I recommend you read Jonathan Wilson’s Inverting the Pyramid.

To put it simply, number 1 is reserved for goalkeepers. Numbers 2 to 6 are for defenders, though some countries use either 4, 5 or 6 for midfielders. 7-11 are usually reserved for attacking midfielders and forwards, with 9 being the “centre forward” and 10 being the “second forward”.

Some of these numbers are so institutionalised that the number is sometimes enough to describe a player’s position and style. This has even led to jargon such as a “False Nine” (a midfielder playing furthest forward) or a “False Ten” (a striker playing in a withdrawn role).

There is less science to the allocation of shirt numbers 12 to 23, since these are not starting positions. One rule of thumb is to allocate these numbers to the backups for the corresponding positions. So 12 is the reserve goalie, 13 is the reserve right back, and so on (with 23 for the squad’s third goalkeeper).

So how have teams chosen to number their squads in the FIFA World Cup that starts next week? This picture summarises the distribution of position by number: 

 

There is no surprise in Number 1, which all teams have allocated to their goalkeeper, and numbers 2 and 3 are mostly allocated to defenders as well (there are some exceptions, with Iran’s Mehdi Torabi and Denmark’s Michael Krohn-Dehli wearing Number 2 even though they are midfielders, and Iceland midfielder Samúel Friðjónsson wearing 3).

That different countries use 4, 5 or 6 for midfielders is illustrated in the data, though two forwards (Australian legend Tim Cahill and Croatia’s Ivan Perisic) puzzlingly wear 4 (it’s less puzzling in Cahill’s case since he started as a central midfielder and slowly moved forward).

7 is the right winger’s number, and depending upon how that position is interpreted, its wearer can be either a midfielder or a forward. 8 is primarily a midfielder’s number, while 9 is (obviously) a striker’s. Interestingly, five midfielders will wear the Number 9 shirt (the most prominent being Russia’s Alan Dzagoev). 10 and 11 are evenly split between midfielders and forwards, though two defenders (Serbia’s Aleksandar Kolarov and Tunisia’s Dylan Bronn) also wear 11.

Beyond 11, there isn’t that much of a science, but one thing that is clear is that Cox got it wrong – for it isn’t so “textbook” to give 12 to the reserve right back. As we can see from the data, 20 teams have used that number for their reserve goalies!

It’s like England has put their squad numbers into a little bit of a Mixer!
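PS: If you want to build the number-by-position summary yourself, it is a single cross-tabulation once you have the squad lists in a flat file. The file and column names below are placeholders, not an official source.

```r
# World Cup squads, one row per player; file and column names are placeholders
squads <- read.csv("wc2018_squads.csv", stringsAsFactors = FALSE)

# Distribution of positions for each shirt number (1 to 23)
with(squads, table(shirt_number, position))
```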

English Premier League: Goal Difference to points correlation

So I was just looking down the English Premier League table for the season, and I found that as I went down the list, the goal difference kept getting lower. There’s nothing counterintuitive in this, but the degree of correlation seemed eerie.

So I downloaded the data and plotted a scatter plot. And what do you have? A near-perfect fit. I even ran the regression and found a 96% R-squared.

In other words, this EPL season has simply been all about scoring lots of goals and not letting in too many goals. It’s almost like the distribution of the goals itself doesn’t matter – apart from the relegation battle, that is!

PS: Look at the extent of Manchester City’s lead at the top. And what a scrap the relegation battle is!