Simulating Covid-19 Scenarios

I must warn that this is a super long post. Also I wonder if I should put this on Medium in order to get more mileage.

Most models of disease spread use what is known as a “SIR” framework. This Numberphile video gives a good primer on the framework.

The problem with the framework is that it’s too simplistic. It depends primarily on one parameter, “R0”, which is the average number of people each infected patient goes on to infect. When R0 is high, each patient infects many others, and the disease spreads fast. With a low R0, the disease spreads slowly. It was the SIR model that was used to produce all those “flatten the curve” pictures we were bombarded with a week or two back.

There is a second parameter as well – the recovery or removal rate. Some diseases are so lethal that they have a high removal rate (eg. Ebola), and this puts a natural limit on how much the disease can spread, since infected people die before they can infect too many people.

In any case, such modelling is great for academic studies, and post-facto analyses where R0 can be estimated. As we are currently in the middle of an epidemic, this kind of simplistic modelling can’t take us far. Nobody has a clue yet on what the R0 for covid-19 is. Nobody knows what proportion of total cases are asymptomatic. Nobody knows the mortality rate.

And things are changing well-at-a-faster-rate. Governments are imposing distancing of various forms. First offices were shut down. Then shops were shut down. Now everything is shut down, and many of us have been asked to step out “only to get necessities”. And in such dynamic and fast-changing environments, a simplistic model such as the SIR can only take us so far, and uncertainty in estimating R0 means it can be pretty much useless as well.

In this context, I thought I’d simulate a few real-life situations, and try to model the spread of the disease in them. This can give us an insight into what kinds of services are more dangerous than others, and how we could potentially “get back to life” after going through an initial period of lockdown.

The basic assumption I’ve made is that the longer you spend with an infected person, the greater the chance of getting infected yourself. This is not an unreasonable assumption because the spread happens through activities such as sneezing, touching, inadvertently dropping droplets of your saliva on to the other person, and so on, each of which is more likely the longer the time you spend with someone.

Some basic modelling revealed that this can be modelled as a sort of negative exponential curve that looks like this.

p = 1 - e^{-\lambda T}

T is the number of hours you spend with the other person. \lambda is a parameter of transmission – the higher it is, the more likely the disease will transmit (holding the amount of time spent together constant).

The function looks like this: 

We have no clue what \lambda is, but I’ll make an educated guess based on some limited data I’ve seen. I’ll take a conservative estimate and say that if an uninfected person spends 24 hours with an infected person, the former has a 50% chance of getting the disease from the latter.

This gives the value of \lambda to be 0.02888 per hour. We will now use this to model various scenarios.
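As a sanity check, here is a minimal Python sketch of this calculation (the function and variable names here are mine, not from the code linked later in the post):

```python
import math

# 50% chance of transmission over 24 hours together: solve 1 - exp(-24 * lam) = 0.5
lam = math.log(2) / 24            # ≈ 0.02888 per hour

def p_infection(hours, lam=lam):
    """Probability of catching the disease after `hours` with one infected person."""
    return 1 - math.exp(-lam * hours)

print(round(lam, 5))                  # 0.02888
print(round(p_infection(2 / 60), 5))  # 0.00096, i.e. roughly 0.1% for a 2-minute contact
```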

1. Delivery

This is the simplest model I built. There is one shop, and N customers.  Customers come one at a time and spend a fixed amount of time (1 or 2 or 5 minutes) at the shop, which has one shopkeeper. Initially, a proportion p of the population is infected, and we assume that the shopkeeper is uninfected.

And then we model the transmission – based on our \lambda = 0.02888, for a two minute interaction, the probability of transmission is 1 - e^{-\lambda T} = 1 - e^{-0.02888 \times \frac{2}{60}} \approx 0.1%.

In hindsight, I realised that this kind of a set up better describes “delivery” than a shop. With a 0.1% probability the delivery person gets infected from an infected customer during a delivery. With the same probability an infected delivery person infects a customer. The only way the disease can spread through this “shop” is for the shopkeeper / delivery person to first get infected, and then pass it on.

How does it play out? I simulated 10000 paths where one guy delivers to 1000 homes (maybe over the course of a week? that doesn’t matter as long as the overall infected rate in the population otherwise is constant), and spends exactly two minutes at each delivery, which is made to a single person. Let’s take a few cases, with different base cases of incidence of the disease – 0.1%, 0.2%, 0.5%, 1%, 2%, 5%, 10%, 20% and 50%.
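The author’s actual code is linked at the end of the post; a minimal Python sketch of one version of this simulation, under the assumptions above (the 0.00096 per-delivery transmission probability, and a delivery person who meets exactly one customer per delivery), might look like this:

```python
import random

def delivery_path(n_homes=1000, base_rate=0.05, p_transmit=0.00096, rng=random):
    """One simulated path: an initially uninfected delivery person visits n_homes."""
    carrier_infected = False
    new_infections = 0
    for _ in range(n_homes):
        customer_infected = rng.random() < base_rate
        if customer_infected and not carrier_infected:
            # an infected customer may infect the delivery person
            carrier_infected = rng.random() < p_transmit
        elif carrier_infected and not customer_infected:
            # an infected delivery person may infect a susceptible customer
            if rng.random() < p_transmit:
                new_infections += 1
    return new_infections

rng = random.Random(42)
paths = [delivery_path(rng=rng) for _ in range(2000)]
print(sum(p == 0 for p in paths) / len(paths))
```

In this sketch, the overwhelming majority of paths produce zero new infections, in line with the conclusion the post draws from its graphs.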

The number of NEW people infected in each case is graphed here (we don’t care how many got the disease otherwise. We’re modelling how many got it from our “shop”). The right-side graph excludes the case of zero new infections, just to show you the scale of the problem.

Notice this – even when 50% of the population is infected, as long as the shopkeeper or delivery person is not initially infected, the chances of additional infections through 2-minute delivery are MINUSCULE. A strong case for policy-makers to enable delivery of all kinds, essential or inessential.

2. Shop

Now, let’s complicate matters a little bit. Instead of a delivery person going to each home, let’s assume a shop. Multiple people can be in the shop at the same time, and there can be more than one shopkeeper.

Let’s use the assumptions of standard queueing theory, and assume that the inter-arrival time for customers is guided by an Exponential distribution, and the time they spend in the shop is also guided by an Exponential distribution.

While customers are in the shop, any infected customer (or shopkeeper) inside can infect any other customer or shopkeeper. So if you spend 2 minutes in a shop where there is 1 infected person, our calculation above tells us that you have a 0.1% chance of being infected yourself. If there are 10 infected people in the shop and you spend 2 minutes there, this is akin to spending 20 minutes with one infected person, and you have a 1% chance of getting infected.
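That arithmetic can be checked in a line or two of Python, using the \lambda derived earlier:

```python
import math

lam = math.log(2) / 24  # transmission parameter, per hour
# 2 minutes with 10 infected people ≈ 20 person-minutes of exposure
p = 1 - math.exp(-lam * 10 * 2 / 60)
print(round(p, 4))  # 0.0096, i.e. roughly 1%
```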

Let’s consider a couple of scenarios here. First is the “normal” case where one customer arrives every 5 minutes on average, and each customer spends 10 minutes in the shop (note that the shop can “serve” multiple customers simultaneously, so the queue doesn’t blow up here). Again let’s take a total of 1000 customers (assume a 24/7 open shop), and one shopkeeper.
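A sketch of how this scenario might be simulated in Python, stepping through the day minute by minute (the discretisation, the function names and the treatment of the shopkeeper are my assumptions, not necessarily how the author’s code works):

```python
import math
import random

LAMBDA = math.log(2) / 24  # transmission parameter, per hour

def shop_sim(n_customers=1000, mean_gap=5.0, mean_stay=10.0, base_rate=0.05, seed=None):
    """Returns the number of NEW infections acquired inside the shop."""
    rng = random.Random(seed)
    t = 0.0
    customers = []  # [arrival_minute, departure_minute, infected?]
    for _ in range(n_customers):
        t += rng.expovariate(1 / mean_gap)       # exponential inter-arrival times
        stay = rng.expovariate(1 / mean_stay)    # exponential time in the shop
        customers.append([t, t + stay, rng.random() < base_rate])
    keeper_infected = False
    new_infections = 0
    for minute in range(int(max(c[1] for c in customers)) + 1):
        present = [c for c in customers if c[0] <= minute < c[1]]
        n_infected = sum(c[2] for c in present) + keeper_infected
        if n_infected == 0:
            continue
        # a minute with n infected people ≈ n person-minutes of exposure
        p = 1 - math.exp(-LAMBDA * n_infected / 60)
        for c in present:
            if not c[2] and rng.random() < p:
                c[2] = True
                new_infections += 1
        if not keeper_infected and rng.random() < p:
            keeper_infected = True
            new_infections += 1
    return new_infections

print(shop_sim(n_customers=300, seed=7))  # one path with a smaller shop, for speed
```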


Notice that there is significant transmission of infection here, even though we started with only 5% of the population being infected. On average, another 3% of the population gets infected! Open supermarkets with the usual crowds can result in significant transmission.

Does keeping the shop open with some sort of social distancing (let’s say only one-fourth as many people arrive) work? So people arrive with an average gap of 20 minutes, and still spend 10 minutes in the shop. There is still one shopkeeper. What does it look like when we start with 5% of the people being infected?

The graph is pretty much identical so I’m not bothering to put that here!

3. Office

This scenario simulates N people who are working together for a certain number of hours. We assume that exactly one person is infected at the beginning of the meeting. We also assume that once a person is infected, she can start infecting others from the very next minute (with our transmission probability).

How does the infection grow in this case? This is an easier simulation than the earlier one so we can run 10000 Monte Carlo paths. Let’s say we have a “meeting” with 40 people (could just be 40 people working in a small room) which lasts 4 hours. If we start with one infected person, this is how the number of infected grows over the 4 hours.
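One possible Python sketch of this meeting simulation, assuming each susceptible person is exposed to every currently infected person for each minute of the meeting (names and details are mine):

```python
import math
import random

LAMBDA = math.log(2) / 24
P_MINUTE = 1 - math.exp(-LAMBDA / 60)  # per infected contact, per minute

def meeting_sim(n_people=40, hours=4.0, seed=None):
    """Everyone shares the room; returns the number infected at the end."""
    rng = random.Random(seed)
    infected = 1
    for _ in range(int(hours * 60)):
        # each susceptible is exposed to `infected` people during this minute
        p = 1 - (1 - P_MINUTE) ** infected
        infected += sum(rng.random() < p for _ in range(n_people - infected))
    return infected

# Monte Carlo: a 10-person, 1-hour meeting with one initially infected person
finals = [meeting_sim(n_people=10, hours=1, seed=i) for i in range(2000)]
print(sum(finals) / len(finals) - 1)  # average number of NEW infections
```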


The spread is massive! When you have a large bunch of people in a small closed space over a significant period of time, the infection spreads rapidly among them. Even if you take a 10 person meeting over an hour, one infected person at the start can result in an average of 0.3 other people being infected by the end of the meeting.

10 persons meeting over 8 hours (a small office) with one initially infected means 3.5 others (on average) being infected by the end of the day.

Offices are dangerous places for the infection to spread. Even after the lockdown is lifted, some sort of work from home regulations need to be in place until the infection has been fully brought under control.

4. Conferences

This is another form of “meeting”, except that at each point in time, people don’t engage with the whole room, but only a handful of others. These groups form at random, changing every minute, and infection can spread only within a particular group.

Let’s take a 100 person conference with 1 initially infected person. Let’s assume it lasts 8 hours. Depending upon how many people come together at a time, the spread of the infection rapidly changes, as can be seen in the graph below.

If people talk two at a time, there’s a 63% probability that the infection doesn’t spread at all. If they talk 5 at a time, this probability is cut by half. And if people congregate 10 at a time, there’s only an 11% chance that by the end of the day the infection HASN’T propagated!
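A rough Python sketch of such a conference simulation is below. The exact grouping mechanics in the author’s code may differ, so this won’t reproduce the numbers above exactly:

```python
import math
import random

LAMBDA = math.log(2) / 24
P_MINUTE = 1 - math.exp(-LAMBDA / 60)  # per infected contact, per minute

def conference_sim(n_people=100, hours=8.0, group_size=2, seed=None):
    """People reshuffle into random groups every minute; returns final infected count."""
    rng = random.Random(seed)
    infected = [True] + [False] * (n_people - 1)
    order = list(range(n_people))
    for _ in range(int(hours * 60)):
        rng.shuffle(order)  # new random groups this minute
        for i in range(0, n_people, group_size):
            group = order[i:i + group_size]
            n_inf = sum(infected[j] for j in group)
            if n_inf == 0:
                continue
            p = 1 - (1 - P_MINUTE) ** n_inf  # infection spreads only within the group
            for j in group:
                if not infected[j] and rng.random() < p:
                    infected[j] = True
    return sum(infected)

runs = [conference_sim(group_size=2, seed=i) for i in range(100)]
print(sum(r == 1 for r in runs) / len(runs))  # share of runs where the infection never spread
```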

One takeaway from this is that even once offices start functioning, they need to impose social distancing measures (until the virus has been completely wiped out). All large-ish meetings by video conference. A certain proportion of workers working from home by rotation.

And I wonder what will happen to the conferences.

I’ve put my (unedited) code here. Feel free to use and play around.

Finally, you might wonder why I’ve made so many Monte Carlo Simulations. Well, as the great Matt Levine had himself said, that’s my secret sauce!


The future of work, and cities

Ok this is the sort of speculative predictive post that I don’t usually indulge in. However, I think my blog is at the right level of obscurity that makes it conducive to making speculative predictions. It is not popular enough that enough people will remember this prediction in case it doesn’t come through. And it’s not that obscure either – in case it does come through, I can claim credit.

So my claim is that companies whose work doesn’t involve physically making stuff hadn’t explored the possibilities of remote work enough before the current (covid-19) crisis hit. With gatherings of large numbers of people, especially in air-conditioned spaces, being strongly discouraged, companies that hadn’t given remote working enough thought are being forced to consider the opportunity now.

My prediction is that once the crisis is over and things go back to “normal”, there will be converts. Organisations, teams and individuals who had never before considered working from home will have taken enough of a liking to the concept to give it a serious try. Companies will become more open to remote working, having seen the benefits (or lack of costs) of it during the crisis. People will commute less. They will travel less (at least for work purposes). This is going to have a major impact on the economy, and on cities.

I’m still not done with cities.

For most of history, there has always been a sort of natural upper limit to urbanisation, in the form of disease. Before germ theory became a thing, and vaccinations and cures came about for a lot of common illnesses, it was routine for epidemics to rage through cities from time to time, thus decimating their population. As a consequence, people didn’t live in cities if they could help it.

Over the last hundred years or so (after the “Spanish” flu of 1918), medicine has made sufficient progress that we haven’t seen such disease or epidemics (maybe until now). And so the network effect of cities has far outweighed the problem of living in close proximity to lots of other people.

Especially in the last 30 years or so, as “knowledge work” has formed a larger part of the economies, a disproportionate part of the economic growth (and population growth) has been in large cities. Across the world – Mumbai, Bangalore, London, the Bay Area – a large part of the growth has come in large urban agglomerations.

One impact of this has been a rapid rise in property prices in such cities – it is in the same period that these cities have become virtually unaffordable for the young to buy houses in. The existing large size and rapid growth contribute to this.

Now that we have a scary epidemic around us, which is likely to spread far more in dense urban agglomerations, I imagine people at the margin will reconsider their decisions to live in large cities. If they can help it, they might try to move to smaller towns or suburbs. And the rise of remote work will aid this – if you hardly go to the office and it doesn’t really matter where you live, do you want to live in a crowded city with a high chance of being hit by a stray virus?

This won’t be a drastic movement, but I see a marginal redistribution of population in the next decade away from the largest cities, and in favour of smaller towns and cities. It won’t be large, but it will be significant enough to have an impact on property prices. The bull run we’ve seen in property prices, especially in large and fast-growing cities, is likely to see some corrections. Property holders in smaller cities that aren’t too unpleasant to live in can expect some appreciation.

Oh, and speaking of remote work, I have an article in today’s Times Of India about the joys of working from home. It’s not yet available online, so I’ve attached a clipping.

Range of possibilities

After I wrote about “love and arranged jobs” last week, an old friend got back saying he quite appreciates the concept and he’s seen it in his career as well. He’s fundamentally a researcher, with a PhD, who then made a transition to corporate jobs.

He told me that back in his research days, he had many “love work relationships”, where he would come across and meet people, and they would “flirt” (in a professional sense), and that could lead to a wide range of outcomes. Sometimes they would just have discussions without anything professional coming out of it, sometimes it would result in a paper, sometimes in a longer collaboration, and so on.

Now that he is in the corporate world, he told me that it is mostly “arranged jobs” for him now, and that meeting people for this is much less enjoyable in that sense.

The one phrase that he used in our conversation stuck with me, and has made it to the title of this post. He said that “love jobs” work when people meet with a “range of possibilities” in mind.

And that is precisely how it works in terms of romantic relationships as well. When you go out on a date, you are open to exploring a range of possibilities. It could just be an evening out. It could be a one-night stand. It could result in friendship, with or without benefits. There could be a long-term relationship that is possible. Gene propagation is yet another possible result. There is a rather wide range of possibilities and that is what I suppose makes dating fun (I suppose because I’ve hardly dated. I randomly one day met my wife after three years of blog-commenting, orkutting and GTalking, and we ended up hitting the highest part of the range).

Arranged marriages are not like that – you go into the “date” with a binary possibility in mind – you either settle into a long-term gene-propagating relationship with this person or you wish you never encounter them in life again. There is simply no range, or room for any range.

Job interviews in an arranged sense are like that. You either get the job or you don’t – there is one midpoint, though, where things don’t temporarily work out but you keep open the possibility of working together at a later date. This, however, is an incredibly rare occurrence – the outcome is usually binary.

It’s possible I’m even thinking about this “love jobs” scenario because I’ve been consulting for the last 8 odd years now. In all this time I’ve met several people, and the great part of this has been that the first meeting usually happens without any expectations – both parties are open to a range of possibilities.

Some people I’ve met have tried to hire me (for a job). Some have become friends. Some have given me gigs, some several. Some have first given me gigs and then become friends. Others have asked me to write recommendation letters. Yet others have become partners. And so on.

And this has sort of “spoilt” me into believing that a job can be found through this kind of a “love process” where a range of possibilities is open upon the first meeting itself. And when people try to propose the arranged route (“once we start this process we expect to hire you in a week”) I’ve chickened out.

Thinking about it, that’s how a lot of hiring works. Except maybe for the handful of employers who are infamous for long interview processes (I love those processes, btw), I guess most of the “industry” is all about arranged jobs.

And maybe that’s why so few people “love” their jobs!

Love and arranged jobs

When I first entered the arranged marriage market in early 2009, I had done so with the expectation that I would use it as a sort of dating agency. Remember this was well before the likes of OKCupid or Tinder or TrulyMadly were around, and for whatever reason I had assumed that I could “find chicks” in the arranged marriage market, and then date them for a while before committing.

Now that my wife is in this business, I think my idea was a patently bad one. Each market attracts a particular kind of people, who usually crowd out all other kinds of people. And sort of by definition, the arranged marriage market is filled with people looking for an arranged marriage. Maybe they just want a Common Minimum Program. But surely, what they are looking for is a quick process where, after two (or at most three) meetings, you commit to someone for life.

So if, in this kind of a market, you want to date, there is an infinitesimal chance of finding someone else who also wants to date. And so you are bound to be disappointed. In this case, you are better off operating in a dating market (such as Tinder, or whatever did its job ten years ago).

Now that this lengthy preamble is out of the way, let us talk about love and arranged jobs. This has nothing to do with jobs, or work itself. It has everything to do with the process of finding a job. Some of you might find me, having been largely out of the job market for over eight years now, supremely unqualified to write about jobs, but this outsider view is what allows me to take an objective view of the matter (just like most other things I write about on this blog).

You get a love job through a sort of lengthy courtship process, like love marriage. You either get introduced to someone, or meet them on twitter, or bump into them at a networking event. Then you have a phone chat, followed by a coffee, and maybe a drink, and maybe a few meals. You talk about work related stuff in most of these, and over time you both realise it makes sense to work together. A formality of an interview process happens, and you start working together.

From my outside view (and having never gotten a job in this manner), I would imagine that this would lead to fulfilling work relationships and satisfying work (the only risk is that the person you have “courted” moves away or up). And when you are looking for a sort of high-trust relationship in a job, this kind of an “interview process” possibly makes sense.

In some ways, you can think about getting a “love job” as following the advice Dale Carnegie dishes out in How To Win Friends and Influence People – make the counterparty like you as a person and you make the sale.

The more common approach in recruitment is “arranged jobs” (an extreme example of this is campus recruitment). This is no nonsense, no beating around the bush approach. In the first conversation, it is evident to both parties that a full time job is a desired outcome of the interaction. Conversations are brisk, and to the point. Soon enough, formal interviews get set up, and the formal process can be challenging.

And if things go well after that, there is a job offer in hand. And soon you are working together. Love, if at all, happens after marriage, as some “aunties” are prone to telling you.

The advantage of this process is that it is quick, and serves both parties well in that respect. The disadvantage is that the short courtship period means that not enough trust has been built between the parties by the time they start working together. This means “proving oneself” in the first few months of a job, which is always tricky and sets a bad precedent for the rest of the employment.

In the first five years of my career, I moved between four jobs. All of them happened through the arranged process. The one I lasted the longest in (and enjoyed the most, by a long way, though on a relative basis) was the one where the arranged process itself took a long time. I did some sixteen interviews before getting the job, and in the process the team I was going to join had sold itself very well to me.

And that makes me think that if I end up getting back to formal employment some day, it will have to happen through the love process.

Two steps back, one step forward

In his excellent piece on Everton’s failed recruitment strategy (paywalled), Oliver Kay of the Athletic makes an interesting point – that players seldom do well when they move from a bigger club to a smaller club.

During his time in charge at Arsenal, George Graham used to say that the key to building a team was to buy players who were on the way up — or, alternatively, players who were desperate to prove a point — but to avoid those who might see your club as a soft landing, a comfort zone. “Never buy a player who’s taking a step down to join you,” Graham said. “He will act as if he’s doing you a favour.”

This, I guess, is not unique to football alone – it applies to other jobs as well. When someone joins a company that they think they are “too cool for”, they look at it as a step down, and occasionally behave as if they’re doing the new employer a favour.

One corollary is that working for “the best” can be a sort of lock-in for an employee, since wherever he moves next will be a step down in some way or the other, and that will mean compromises on the part of all parties involved.

Thinking about footballers who have moved from big clubs and still not done badly, I notice one sort of pattern that I call “two steps back and one step forward”. Evidently, I’m basing this analysis on a small number of data points, which might be biased, but let me play management guru and go ahead with my theory.

Basically, if you want to take a “step down” from the best, one way of doing well in the longer term is to take “two steps down” and then later take a step up. The advantage with this approach is that when you take two steps down, you get to operate in an environment far easier than the one you left, and even if you act entitled and take time to adjust you will be able to prove yourself and make an impact in due course.

And at that point in time, when you’ve started making an impact, you are “on the way up”, and can then step up to a club at the next level where you can make an impact.

Players that come to mind that have taken this approach include Jonny Evans, who moved from Ferguson-era Manchester United to West Brom, and then when West Brom got relegated, moved “up” to Leicester. And he’s doing a pretty good job there.

And then there is Xherdan Shaqiri. He made his name as a player at Bayern Munich, and then moved to Inter where he struggled. And then he made what seemed like a shocking move for the time – to Stoke City (of the “cold Thursday night at Stoke” fame) in the Premier League. Finally, last year, after Stoke got relegated from the Premier League, he “stepped up” to Liverpool, where, injuries aside, he’s been doing rather well.

The risk with this two steps down approach, of course, is that sometimes it can fail to come off, and if you don’t make an impact soon enough, you start getting seen as a “two steps down guy”, and even “one step down” can seem well beyond you.

Ganesha Workflow

I have a problem with productivity. It’s because I follow what I call the “Ganesha Workflow”.

Basically there are times when I “get into flow”, and at those times I ideally want to just keep going, working ad infinitum, until I get really tired and lose focus. The problem, however, is that it is not so easy to “get into flow”. And this makes it really hard for me to plan life and schedule my day.

So where does Ganesha come into this? I realise that my workflow is similar to the story of how Ganesha wrote the Mahabharata.

As the story goes, Vyasa was looking for a scribe to write down the Mahabharata, which he knew was going to be a super-long epic. And he came across Ganesha, who agreed to write it all down under one condition – that if Vyasa ever stopped dictating, Ganesha would put his pen down and the rest of the epic would remain unwritten.

So Ganesha Workflow is basically the workflow where as long as you are going, you go strong, but the moment you have an interruption, it is really hard to pick up again. Putting it another way, when you are in Ganesha Workflow, context switches are really expensive.

This means the standard corporate process of drawing up a calendar and earmarking times of day for certain tasks doesn’t really work. One workaround I have made to accommodate my Ganesha Workflow is that I have “meeting days” – days that are filled with meetings and when I don’t do any other work. On other days I actively avoid meetings so that my workflow is not disturbed.

While this works a fair bit, I’m still not satisfied with how well I’m able to organise my work life. For one, having a small child means that the earlier process of hitting “Ganesha mode” at home doesn’t work any more – it’s impossible to prevent context switches on the child’s account. The other thing is that there is a lot more to coordinate with the wife in terms of daily household activities, which means things on the calendar every day. And those will provide an interruption whether I like it or not.

I’m wondering what else I can do to accommodate my “Ganesha working style” into “normal work and family life”. If you have any suggestions, please let me know!

Segmentation and machine learning

For best results, use machine learning to do customer segmentation, but then get humans with domain knowledge to validate the segments

There are two common ways in which people do customer segmentation. The “traditional” method is to manually define the axes through which the customers will get segmented, and then simply look through the data to find the characteristics and size of each segment.

Then there is the “data science” way of doing it, which is to ignore all intuition, and simply use some method such as K-means clustering and “do gymnastics” with the data and find the clusters.

A quantitative extreme of this method is to do gymnastics with your data, get segments out of it, and quantitatively “take action” on them without really bothering to figure out what each cluster represents. Loosely speaking, this is how a lot of recommendation systems nowadays work – some algorithm somewhere finds people similar to you based on your behaviour, and recommends to you what they liked.

I usually prefer a sort of middle ground. I like to let the algorithms (k-means easily being my favourite) come up with the segments based on the data, and then have a bunch of humans look at the segments and make sense of them.

Basically whatever segments are thrown up by the algorithm need to be validated by human intuition. Getting counterintuitive clusters is also not a problem – on several occasions, the people I’ve validated the clusters with (usually clients) have used the counterintuitive clusters to discover bugs, gaps in the data or patterns that they didn’t know of earlier.

Also, in terms of validation of clusters, it is always useful to get people with domain knowledge to validate the clusters. And this also means that whatever clusters you’ve generated you are able to represent them in a human-readable format. The best way of doing that is to use the cluster centres and then represent them somehow in a “physical” manner.

I started writing this post some three days ago and am only getting to finish it now. Unfortunately, in the meantime I’ve forgotten the exact motivation for why I started writing this. If I recall it, I’ll maybe do another post.

Taking Intelligence For Granted

There was a point in time when the use of artificial intelligence or machine learning or any other kind of intelligence in a product was a source of competitive advantage and differentiation. Nowadays, however, many people have got so spoiled by the use of intelligence in many products they use that it has become more of a hygiene factor.

Take this morning’s post, for example. One way to look at it is that Spotify, with its customisation algorithms and recommendations, has spoiled me so much that I find Amazon’s pushing of Indian music irritating (Amazon’s approach can be called “naive customisation”, where they push Indian music to me only because I’m based in India, without learning further based on my preferences).

Had I not been exposed to the more intelligent customisation that Spotify offers, I might have found Amazon’s naive customisation interesting. However, Spotify’s degree of customisation has spoilt me so much that Amazon is simply inadequate.

This expectation of intelligence goes beyond product and service classes. When we get used to Spotify recommending music we like based on our preferences, we hold Netflix’s recommendation algorithm to a higher standard. We question why the Flipkart homepage is not customised to us based on our previous shopping. Or why Google Maps doesn’t learn that some of us don’t like driving through small roads when we can help it.

That customers take intelligence for granted nowadays means that businesses have to invest more in offering this intelligence. Easy-to-use data analysis and machine learning packages mean that at least some part of an industry uses intelligence in at least some form (even if they might do it badly in case they fail to throw human intelligence into the mix!).

So if you are in the business of selling to end customers, keep in mind that they are used to seeing intelligence everywhere around them, and whether they state it or not, they expect it from you.

10X Studs and Fighters

Tech twitter, for the last week, has been inundated with unending debate on this tweetstorm by a VC about “10X engineers”. The tweetstorm was engineered by Shekhar Kirani, a Partner at Accel Partners.

I have friends and twitter-followees on both sides of the debate. There isn’t much more to describe about the “paksh” side of the debate. Read Shekhar’s tweetstorm I’ve put above, and you’ll know all there is to this side.

The vipaksh side argues that this normalises “toxicity” and “bad behaviour” among engineers (about “10X engineers”’ hatred for meetings, their not adhering to processes, etc.). Someone I follow went to the extent of saying that this kind of behaviour among engineers is a sign of privilege and lack of empathy.

This is just the gist of the argument. You can just do a search of “10X engineer”, ignore the jokes (most of them are pretty bad) and read people’s actual arguments for and against “10X engineers”.

Regular readers of this blog might be familiar with the “studs and fighters” framework, which I used so often in the 2007-9 period that several people threatened to stop reading me unless I stopped using the framework. I put it on a temporary hiatus and then revived it a couple of years back because I decided it’s too useful a framework to ignore.

One of the fundamental features of the studs and fighters framework is that studs and fighters respectively think that everyone else is like themselves. And this can create problems at the organisational level. I’d spoken about this in the introductory post on the framework.

To me this debate about 10X engineers and whether they are good or bad reminds me of the conflict between studs and fighters. Studs want to work their way. They are really good at what they’re competent at, and absolutely suck at pretty much everything else. So they try to avoid things they’re bad at, can sometimes be individualistic and prefer to work alone, and hope that how good they are at the things they’re good at will compensate for all that they suck elsewhere.

Fighters, on the other hand, are process driven, methodical, patient and sticklers for rules. They believe that output is proportional to input, and that it is impossible for anyone to have a 10X impact, even 1/10th of the time (:P). They believe that everyone needs to “come together as a group and go through a process”.

I can go on but won’t.

So should your organisation employ 10X engineers or not? Do you tolerate the odd “10X engineer” who may not follow company policy and all that, in return for their superior contributions? There is no easy answer to this, but overall I think companies, taken together, will follow a “mixed strategy”.

Some companies will be encouraging of 10X behaviour, and you will see 10X people gravitating towards such companies. Others will dissuade such behaviour, and the 10X people there, not seeing any upside, will leave to join the 10X companies (again, I’ve written about how you can have “stud organisations” and “fighter organisations”).

Note that it’s difficult to run an organisation with solely 10X people (they’re bad at managing stuff), so organisations that engage 10X people will also employ “fighters” who are cognisant that 10X people exist and know how they should be managed. In fact, being a fighter while recognising and being able to manage 10X behaviour is, I think, an important skill.

As for myself, I don’t like one part of Shekhar Kirani’s definition – that he restricts it to “engineers”. I think the sort of behaviour he describes is present in other fields and skills as well. Some people see the point in that. Others don’t.

Life is a mixed strategy.

Periodicals and Dashboards

The purpose of a dashboard is to give you a live view of what is happening with the system. Take for example the instrument it is named after – the car dashboard. It tells you, at any given moment, what the speed of the car is, along with other indicators such as which lights are on, the engine temperature, fuel levels, etc.

Not all reports, however, need to be dashboards. Some reports can be periodicals. These periodicals don’t tell you what’s happening at a moment, but give you a view of what happened in or at the end of a certain period. Think, for example, of classic periodicals such as newspapers or magazines, in contrast to online newspapers or magazines.

Periodicals tell you the state of a system at a certain point in time, and also give information of what happened to the system in the preceding time. So the financial daily, for example, tells you what the stock market closed at the previous day, and how the market had moved in the preceding day, month, year, etc.

Doing away with metaphors, business reporting can be classified into periodicals and dashboards, and they work exactly like their metaphorical counterparts. Periodical reports are produced periodically and tell you what happened over a certain period, or at a point of time, in the past. A good example is company financials – an income statement and a balance sheet respectively describe what happened over a period and the company’s position at a point in time.

Once a periodical is produced, it is frozen in time for posterity. Another edition will be produced at the end of the next period, but it is a new edition. It adds to the earlier periodical rather than replacing it. Periodicals thus have historical value and because they are preserved they need to be designed more carefully.

Dashboards, on the other hand, are fleeting, and not usually preserved for posterity; they are simply overwritten. So whether all systems are up this minute doesn’t matter a minute later if you haven’t reacted to the report this minute, and the information ceases to be of importance the next minute (of course, some aspects might matter at a later date, and those will be captured in the next periodical).

When we are designing business reports and other “business intelligence systems” we need to be cognisant of whether we are producing a dashboard or a periodical. The fashion nowadays is to produce everything as a dashboard, perhaps because there are popular dashboarding tools available.

However, dashboards are expensive. For one, they need a constant connection to the “system” (database or data warehouse or data lake or whatever other storage unit, in the business report sense). Also, by definition they are not stored, and if you need to store them, you have to decide upon a frequency of storage, which makes them periodicals anyway.

So companies can save significantly on resources (compute and storage) by switching from dashboards (which everyone seems to think in terms of) to periodicals. The key here is to get the frequency of the periodical right – too frequent and people will get bugged. Not frequent enough, and people will get bugged again due to lack of information. Given the tools and technologies at hand, we can even make reports “on demand” (for stuff not used by too many people).
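To make the contrast concrete, here is a minimal sketch of the two patterns in Python. The `query_metrics` function is a made-up stand-in for whatever your database or warehouse actually returns; the point is only the difference in how the output is handled: a dashboard overwrites one live view, while a periodical appends a frozen edition per period.

```python
import json
import os
from datetime import datetime, timezone

def query_metrics():
    # Stand-in for a real database/warehouse query; the numbers are made up.
    return {"orders": 1200, "revenue": 45000}

def refresh_dashboard(state_file="dashboard.json"):
    # A dashboard overwrites the same view on every refresh --
    # the previous state is lost unless someone acted on it.
    with open(state_file, "w") as f:
        json.dump(query_metrics(), f)

def publish_periodical(archive_dir="editions"):
    # A periodical writes a new, immutable edition alongside the old ones,
    # so the history is preserved for posterity.
    os.makedirs(archive_dir, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    path = os.path.join(archive_dir, f"report-{stamp}.json")
    with open(path, "w") as f:
        json.dump(query_metrics(), f)
    return path
```

The frequency question above then reduces to how often you schedule `publish_periodical` – daily, weekly, or (for reports few people use) only on demand.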