Attractive graphics without chart junk

A picture is worth a thousand words, but ten pictures are worth much less than ten thousand words

One of the most common problems with visualisation, especially in the media, is “chart junk”. Graphic designers working for newspapers and television channels like to decorate their graphs to make them more visually appealing. And in most cases, this results in the information in the graphs getting obfuscated and harder to read.

The commonest form this takes is the replacement of bars in a simple bar graph with weird objects. When you want to show the number of people in something, you show little people, sometimes half shaded out. Sometimes, instead of having multiple people, the information is conveyed in the size of the people or objects.

Then, instead of using simple bar graphs, designers use more complicated structures such as three-dimensional bar graphs, cone graphs or doughnut charts (I’m sure I’ve abused some of them on my tumblr). All of them are visually appealing and can draw the attention of readers or viewers. Most of them come at the cost of not really conveying the information!

I’ve spoken to a few professional graphic designers and asked them why they make poor visualisation choices even when those choices reduce the amount of information the graphics convey. The most common answer is novelty – “a page full of bars can be boring for the reader”. So they try to spice it up by replacing bars with other items that “look different”.

Putting it another way, the challenge is two-fold – first you need to get your readers to look at your graph (here is where novelty helps). And once you’ve got them to look at it, you need to convey information to them. And the two objectives can sometimes collide, with the best looking graphs not being the ones that convey the best information. And this combination of looking good and being effective is possibly what turns visualisation into an art.

My way of dealing with this has been to play around with the non-essential bits of the visualisation. Using colours judiciously, for example. Using catchy headlines. Adding decorations outside of the graphs.

Another lesson I’ve learnt over time is to not have too many graphics in the same piece. Some of this has come due to pushback from my editors at Mint, who have frequently asked me to cut the number of graphs for space reasons. And some of this is something I’ve learnt as a reader.

The problem with visualisations is that while they can communicate a lot of information, they can break the flow in reading. So having too many visualisations in the piece means that you break the reader’s flow too many times, and maybe even risk your article looking academic. Cutting visualisations forces you to be concise in your use of pictures, and you leave in only the ones that are most important to your story.

There is one other upshot of cutting the number of visualisations – when you have one bar graph and one line graph, you can leave them as they are and not morph or “decorate” them just for the heck of it!

PS: Even experienced visualisers are not immune to having their graphics mangled by editors. Check out this tweet storm by Edward Tufte, the guru of visualisation.

The missing middle in data science

Over a year back, when I had just moved to London and was job-hunting, I was getting frustrated by the fact that potential employers didn’t recognise my combination of skills in wrangling data and analysing businesses. A few saw me purely as a business guy, and most saw me purely as a data guy, trying to slot me into machine learning roles I was thoroughly unsuited for.

Around this time, I happened to mention this lack of fit to my wife, and she remarked that the reason companies want either pure business people or pure data people is that you can’t scale a business with people who have a unique combination of skills. “There are possibly very few people with your combination of skills”, she had said, and hence companies had gotten around the problem by hiring some very good business people and some very good data people, and hoping that they would add value together.

More recently, I was talking to her about some of the problems that she was dealing with at work, and recognised one of them as being similar to what I had solved for a client a few years ago. I quickly took her through the fundamentals of K-means clustering, and showed her how to implement it in R (and in the process, taught her the basics of R). As it had with my client many years ago, clustering did its magic, and the results were literally there to see, the business problem solved. My wife, however, was unimpressed. “This requires too much analytical work on my part”, she said, adding that “if I have to do this level of analytical work, I won’t have enough time to execute my managerial duties”.
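(The original walkthrough was in R; purely as an illustration, here is a roughly equivalent sketch in Python with scikit-learn, on made-up data. The “customer metrics” and the choice of three clusters are assumptions for the example, not the actual problem.)

```python
# A minimal K-means sketch on synthetic data, assuming scikit-learn is available.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend these are customers described by two behavioural metrics.
customers = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(100, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_[:10])        # cluster assignment for the first few customers
print(kmeans.cluster_centers_)    # the three cluster centres
```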

This made me think about the (yet unanswered) question of who should be solving this kind of a problem – taking a business problem, recognising it can be solved using data, figuring out the right technique to apply to it, and then communicating the results in a way that the business can easily understand. And this was a one-time problem, not something you would need to solve repeatedly, so there was no requirement to set up a pipeline, data engineering or IT infrastructure around it.

I admit this is just one data point (my wife), but based on observations from elsewhere, managers are usually loath to get their hands dirty with data, beyond perhaps doing some basic MS Excel work. Data science specialists, on the other hand, will find it hard to quickly get intuition for a one-time problem, get data in a “dirty” manner, apply the right technique to solve it, and communicate the results in a business-friendly manner. Moreover, data scientists are highly likely to be involved in regular, repeatable activities, making it an organisational nightmare to “lease” them for such one-time efforts.

This is what I call the “missing middle problem” in data science: problems whose solutions will without doubt add value to the business, but which most businesses are unable to address because they lack the skillset to solve them, and whose one-time nature makes it difficult for businesses to dedicate permanent resources to them.

I guess so far this post has all the makings of a sales pitch, so let me turn it into one – this is precisely the kind of problem that my company Bespoke Data Insights is geared to solving. We specialise in solving problems that lie at the cusp of business and data. We provide end-to-end quantitative solutions for typically one-time business problems.

We come in, understand your business needs, and use a hypothesis-driven approach to model the problem in data terms. We select methods that in our opinion are best suited for the precise problem, not hesitating to build our own models if necessary (hence the Bespoke in the name). And finally, we synthesise the analysis in the form of recommendations that any business person can easily digest and act on.

So – if you’re facing a business problem where you think data might help, but don’t know how to proceed; or if you are curious about all this talk of AI and ML and data science, and want to bring it into your business; or if you want your business managers to figure out how to use your data teams better – hire us.

Statistics and machine learning approaches

A couple of years back, I was part of a team that delivered a workshop on machine learning. Given my background, I had been asked to do a half-day session on regression, and was told that the standard software package being used was the scikit-learn package in Python.

Both the programming language and the package were new to me, so I dug around for a few days before the workshop, trying to figure out regression. Despite my best efforts, I couldn’t figure out how to get the R^2 and the other elements of a standard regression summary. What some googling told me was surprising:

There exists no R type regression summary report in sklearn. The main reason is that sklearn is used for predictive modelling / machine learning and the evaluation criteria are based on performance on previously unseen data

As it happened, I requested the students at the workshop to install a package called statsmodels, which provides standard regression outputs. And then I proceeded to lecture them on regression as I know it, including significance scores, p-values, t-statistics, multicollinearity and the like. It was only much later that I figured out that that is not how regression (and logistic regression) is done in the machine learning world.
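For concreteness, here is a minimal sketch, on synthetic data, of the statsmodels route that produces the classical summary – R-squared, t-statistics and p-values included:

```python
# A minimal statsmodels OLS sketch on made-up data, purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))            # 200 observations, 3 predictors: a "long" dataset
y = 2.0 + X @ np.array([1.5, -0.7, 0.0]) + rng.normal(scale=0.5, size=200)

X_with_const = sm.add_constant(X)        # statsmodels does not add an intercept by default
ols_fit = sm.OLS(y, X_with_const).fit()
print(ols_fit.summary())                 # R-squared, t-statistics, p-values, confidence intervals
```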

In a statistical framework, the data sets in regression are typically “long” – you have a large number of data points, and a small number of variables. Putting it differently, we start off with a model with few degrees of freedom, and then “constrain” the variables with a large enough number of data points, so that if a signal exists, and it is in the right format (linear relationship and all that), we can pin it down effectively.

In a machine learning framework, it is common to run a regression where the number of data points is of the same order of magnitude as, or even smaller than, the number of variables. Strictly speaking, such a problem is under-determined (there are too many degrees of freedom), and so ordinary regression is not well-defined. Instead, we rely on “regularisation methods” to “tie down” the variables and (hopefully) produce a consistent solution.
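As a rough illustration, here is a minimal sketch, assuming scikit-learn, of regularised regression on a deliberately “wide” synthetic dataset:

```python
# Regularised regression on "wide" data (more variables than data points),
# assuming scikit-learn; synthetic data for illustration only.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))           # 50 data points, 200 variables: under-determined
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=50)

# Plain least squares would happily fit the noise exactly here.
# Regularisation "ties down" the coefficients and gives a more stable solution.
ridge = Ridge(alpha=1.0).fit(X, y)       # L2 penalty shrinks all coefficients
lasso = Lasso(alpha=0.1).fit(X, y)       # L1 penalty drives most coefficients to zero
print((np.abs(lasso.coef_) > 1e-6).sum(), "variables retained by the lasso")
```

The lasso’s L1 penalty is one way of removing the excess degrees of freedom: most coefficients get set to exactly zero, leaving a sparse model.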

Moreover, machine learning approaches are common in problems where individual predictor variables don’t have meaning. In this scenario, knowing whether a particular variable is significant or not is of no utility. The signal in machine learning lies in the combination of variables, which means that multicollinearity (correlation between predictor variables) is not the bad thing it is in statistics. And because the variables don’t have meanings of their own, there are no individual relationships per se to interpret, so machine learning models are harder to interpret, and more likely to harbour hidden spurious correlations.

Also, when you have a small number of variables and a large number of data points, it is easy to get an “exact solution” for regression, which is what statistical methods use. In a machine learning framework with “wide” data, though, exact solutions are computationally infeasible, and so you need to use approximate algorithms such as gradient descent – which are common across ML techniques.
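To make that contrast concrete, here is a minimal sketch on synthetic “long” data, comparing the exact least-squares solution with an iterative, gradient-descent-style fit (scikit-learn’s SGDRegressor). On data of this shape the two should recover essentially the same coefficients; the point is only to show the two routes side by side.

```python
# Exact least squares versus a stochastic-gradient-descent fit, on made-up data.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(10_000, 5))
y = X @ np.array([1.0, 2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=10_000)

# "Exact" solution via least squares (the normal-equations route).
beta_exact, *_ = np.linalg.lstsq(X, y, rcond=None)

# Approximate solution via stochastic gradient descent.
sgd = SGDRegressor(max_iter=1000, tol=1e-6, fit_intercept=False).fit(X, y)
print(np.round(beta_exact, 2))
print(np.round(sgd.coef_, 2))
```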

All in all, while statistics and machine learning might use techniques with the same name (“regression”, for example), they are, both in theory and in practice, very different ways to solve the problem. The important thing is to figure out the approach most suited for a particular problem, and use it accordingly.

Dam capacity

In Mint, Narayan Ramachandran has a nice op-ed on the issue of dam capacity and dam management in the wake of the floods in Kerala last year. In it, he writes:

For dams to do their jobs in extreme situations, they should have large unfilled capacity in their reservoirs when extreme events occur

Reading this piece reminded me of Benoit Mandelbrot’s The (Mis)Behaviour of Markets, and his description of the efforts of the colonial British government in Egypt to decide the height of the Aswan dam. The problem with the Nile was “long-range dependence” – the flow in the river in a year was positively correlated with the flow in the previous few years. This meant that runs of high-flow years would be followed by runs of low-flow years.

The problem was solved by the British hydrologist Harold Edwin Hurst, who looked at thousands of years of data on the flooding of the Nile (yes, this data was available), and there is a nice description of it in Mandelbrot’s book.

I had taken a few insights from this chapter for my own piece on long-range dependence in stock markets that I had written for Mint a few years back.

Coming back to Narayan’s piece, one problem is that in India we have an obsession with keeping dams filled up. In Karnataka, for example, every year during the monsoons, newspapers keep track of the water levels in the major reservoirs, expressing worry when they’re not full enough. In that sense, I guess our dams haven’t been planned for long-range water sharing, and that has contributed to problems such as sudden water releases.
Also not helping matters, I guess, is the fact that a lot of rivers flow across states, and dam levels are a source of negotiation between states. This further keeps dams small and ill-geared to long-term water management.

Why data scientists should be comfortable with MS Excel

Most people who call themselves “data scientists” aren’t usually fond of MS Excel. It is slow and clunky, can only handle a million rows of data (and will nearly crash your computer if you go anywhere close to that), and despite the best efforts of Visual Basic, is not very easy to program for repeatable tasks.

In fact, some data scientists may consider Excel to be “too downmarket” for them to use. At one firm I worked for, I had heard a rumour that using Excel for modelling was a fireable offence, though I’m glad to report that I flouted this rule without much adverse effect. Yet, in my years as a “data science” and analytics consultant, and having done several modelling jobs before that, I think Excel is an extremely necessary tool in a data scientist’s arsenal. There are several reasons for this.

The main one is communication. “Business types” love Excel – they use it for pretty much every official activity (I know of people who write documents in Excel). If you ask for a set of numbers, you are most likely to get them in an Excel sheet. I know of fairly large organisations that use Excel to store and transmit data (admittedly poor usage). And even non-quantitative business types understand some basic quantitative functions thanks to Excel, such as joining (VLOOKUP), pivoting, basic data cleaning (TRIM, VALUE, etc.), averaging, visualisation and sometimes even basic statistics such as correlation and regression.

One of the main problems that organisations face is lack of communication between data scientists and the business side (I mentioned this in a talk I gave last month: video here and slides here). Excel is an excellent middle ground, since it is reasonably quantitative and business people know how to use it.

In fact, in my consulting experience I’ve found that when working with clients, using Excel can make your client (usually a business person) feel more comfortable and involved in the analysis, speeding up the process and significantly improving collaboration. They’ll feel more empowered to intervene, which means they can add value, and they can feel especially happy if you occasionally let them enter some simple quantitative formulae.

The next advantage of Excel is that it puts the numbers out there. A long time back, when I was still doing full time jobs, I was asked to build a forecasting model (using a programming language) and couldn’t get it right for several months. And then on a whim I decided to use Excel, and when I saw the data in front of me, it was clear why the forecasts were so useless – because the data was so random.

Excel also allows you to quickly try things and iterate, again by putting the data and the analysis in front of you. Admittedly, the toolkit available is limited compared to what programming languages or statistical software can offer, but through clever usage (especially with Visual Basic), there is a lot you can achieve.

Then, Excel sometimes nudges you towards finding simple solutions. When you’re using a programming language, it is possible to veer towards overly complicated solutions, and use the proverbial nuclear weapon against a sparrow.

When I was working on the forecasting work a decade ago, I found that the forecasts would feed into a fairly complicated-looking model that had been developed over several years by several developers. On a whim, I decided to “do more” in Excel and managed to replicate the entire model in Excel (using VB and Solver). The people leading the product weren’t particularly happy, but using Excel was critical in ultimately moving to a simpler solution.

A similar thing occurred recently as well. I had been building a fairly complex optimisation model, which I tried replicating in Excel for communication purposes (so I could work on it together with the client). And it turned out there was a far simpler solution that I had missed all this time, and the simpler solution became apparent only because I used Excel.

I’m sure this is not an exhaustive list. So, if you’re a data scientist, you will do well to be at least conversant with Excel. It may only serve limited needs in terms of analysis, but the effort of learning it will be more than compensated for by the gains in communication, collaboration and simplicity.

Tailpiece:
A long time ago, a co-worker passed by my desk and saw me work on Excel. He saw my spreadsheet and remarked, “oh, so many numbers! it must be very complicated” and went on his way. I don’t know if he is a data scientist now.

Meaningful and meaningless variables (and correlations)

A number of data scientists I know like to go about their business in a domain-free manner. They make a conscious choice to not know anything about the domain in which they are solving the problem, and instead treat a dataset as just a set of anonymised data, and attack it with the usual methods.

I used to be like this as well a long time ago. I remember in my very first job I had pissed off some clients by claiming that “I don’t care if this is a nut or a screw. As far as I’m concerned this is just a part number”.

Over time, though, I’ve come to realise that even a little bit of domain knowledge or intuition can help build significantly superior models. To use a framework I had introduced a few months back, your domain knowledge can be used to restrict the degrees of freedom in your model, thus increasing how much the machine can learn with the available data.

Then again, some problems lend themselves better to domain-based intuition than others, and this has to do with the meaning of a data point.

Consider two fairly popular problem statements from data science – determining whether a borrower will pay back a loan, and determining whether there is a cat in a given picture. While at the surface level both are binary decisions, to be made by looking at high-dimensional data (the number of variables that can be used for credit scoring can be immense), there is an important distinction between the two problems.

In the cat picture case, a single data point is basically the colour of a single pixel in an image, and it doesn’t really mean anything. If we were to try and build a cat recognition algorithm based on a single pre-chosen pixel in an image, it is unlikely we could do better than noise. Instead, the information is encoded in groups of pixels near each other – a bunch of pixels that look like cat ears, for example. In this case, whether you are training the model to identify cats or cinnamon buns is immaterial, and the domain-free approach works well.

With the credit scoring problem, the amount of information in each explanatory variable is significant. Unless we are looking at some extremely esoteric or insignificant variables (trust me, these get used fairly often in credit scoring models), it is possible to build a decision model based on just one explanatory variable and still have significant predictive power. There is definitely information in the correlations between explanatory variables, but that pales in comparison to the information in the variables themselves.

And the amount of information captured by each explanatory variable means that it makes sense in these cases to invest some human effort in understanding the variables and the impact each of them is having. In some cases, you might decide to use a mathematical transformation of a variable (square or log or inverse) instead of the variable itself. In other cases, you might determine based on logic that some correlations are spurious and drop the variables altogether. You might see a few explanatory variables carrying largely similar information and decide to drop some of them or use dimension reduction algorithms. And you can do a much better job of this if you have some experience or intuition in the domain, and care to understand what each variable means. Because variables have meanings.
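As a hypothetical illustration of that kind of single-variable work, here is a minimal sketch using statsmodels’ logistic regression on synthetic “credit” data (the variable, its distribution and the numbers are all made up), comparing a skewed predictor with its log transform:

```python
# Single-variable inspection and transformation on made-up credit-scoring data,
# assuming statsmodels; the "income" variable and its effect are purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
income = rng.lognormal(mean=4, sigma=1, size=1000)          # heavily skewed predictor
default_prob = 1 / (1 + np.exp(-(-2 + 3 * (5 - np.log(income)))))
defaulted = rng.binomial(1, default_prob)                    # higher income, lower default rate

# Fit on the raw variable and on its log transform; compare fit quality.
raw_fit = sm.Logit(defaulted, sm.add_constant(income)).fit(disp=0)
log_fit = sm.Logit(defaulted, sm.add_constant(np.log(income))).fit(disp=0)
print("Pseudo R-squared, raw income:", round(raw_fit.prsquared, 3))
print("Pseudo R-squared, log income:", round(log_fit.prsquared, 3))
```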

This is unlike the image recognition problem, where most of the information lies in the correlations between pixels, where the individual “variables” don’t have any meaning, and where domain knowledge doesn’t matter that much (though it can – some kinds of algorithms are superior for some kinds of images; I don’t have enough experience in this domain to comment 🙂).

Again, as with all the two-by-twos that I produce (and there are many, though this is arguably the most famous one), the problem arises when you take people from one side and put them in a situation from the other side.

If you come from a background where you’ve mostly dealt with datasets where each individual variable is meaningless, but there is information in the collective, you are likely to “stir the pile” rather than using intuition to build better models.

If you are used to dealing with datasets with “meaning”, where the variables themselves hold the information, you might waste time doing your jiggery-pokery when you should be looking to apply models that extract information from the collective.

The problem is this is a rather esoteric classification, so there is plenty of chance for people to be thrown into the wrong end.

Yet another way of classifying data scientists

There are many axes along which we can classify data scientists.

We can classify based on primary specialty, in terms of “analytics”, “business intelligence” and “machine learning”. We can classify based on domain, into “financial data scientists” and “retail data scientists” and “industrial data scientists”. We can classify by the choice of primary software tool, into “R data scientists” and “Python data scientists” and “SAS data scientists”. We can also classify by expertise, such as “deep learning” and “statistics” and “stochastic calculus”. The axes are endless.

Here is my not-so-humble attempt to contribute yet another such axis based on my observations in the industry – “technology-facing” and “business-facing” data scientists.

Technology-facing data scientists put the software first. You’ll see them building pipelines, making sure their solutions can be easily integrated into the software stack, and worrying about how quickly their analysis can run. They will spend a lot of time on data engineering and infrastructure work, and their first concern when designing a solution is that it should be easy to implement. They are highly process-oriented and not so fond of hacks.

Business-facing data scientists, on the other hand, are primarily concerned with insights, and don’t care much about technological niceties. The technological feasibility and ease of implementation of a solution is an afterthought. Their data is messy, and the process is not easily repeatable (might even involve some manual processes). But they make sure that the insights they draw can be easily understood by a human, and invest time and effort in communication and visualisation. They might even build tools to help the business side of the organisation understand what is happening in the model.

This distinction is actually unsurprising if you look at who the primary clients of these respective types are. The business-facing data scientists are more likely to be employed in generating insights, and building models to try and understand what is happening. The technology-facing data scientists will have spent most of their careers building production systems, and are thus very well acquainted with the software engineering process.

It is important, however, to recognise this distinction, and employ the data scientists as per their specialisation. A technology-facing data scientist in a business-facing role might be seen as spending way too much effort in getting the technology right, and doing her own thing while being unmindful of the business clients. A business-facing data scientist in a technology-facing role will end up producing messy solutions that may be insightful, but will be a nightmare to implement.

This was first posted on LinkedIn