Counter staffing and service levels

I’m writing this from the international section of Bangalore International Airport, as I wait to board my flight to Barcelona. Back in October 2014 I’d made a plan to “hibernate” for a few months in Barcelona during my wife’s last term of classes there, and this is the execution of that plan.

There was a fairly long line at the passport control counters this morning, and it took me perhaps twenty minutes to get through it. When I joined the line, there were about ten passport officers manning the counters, so the line moved fairly quickly.

Presently, officers started getting up one by one and going off to one side to drink tea. I initially thought it was a tea break, but the officers drinking tea soon disappeared, leaving just four counters in operation, which meant the line moved much more slowly thereafter. Some people were pissed off, but I got out soon enough.

It is not an uncommon occurrence to suddenly see a section of “servers” being closed. For example, you might go to the supermarket on a weekday afternoon expecting quick checkouts, only to notice that a mere fraction of the checkout counters are operational, leading to lines as long as on a weekend evening.

From the system of servers’ point of view, this is quite rational. While some customers might expect some kind of moral obligation from the system of servers to keep all servers operational, it has no such obligation. Its only obligation is to maintain a certain service level.

So coming back to passport control at the Bangalore airport, maybe they have a service level of “an average of 30 minutes of waiting time per passenger”, and knowing that there are fewer international flights in the late morning than in the early morning, they know that the reduced demand can be met with a smaller number of servers.

The problem here is with the way this gets implemented – when half the servers summarily disappear and the waiting period suddenly goes up, people are bound to get pissed off. A superior strategy would be to do it in phases, with a reasonable gap between one server going off and the next. That smoothens the supply and the waiting time, and people are far less likely to notice.
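A toy fluid-model simulation illustrates the difference between the two strategies (all numbers below – arrival rates, service rates, timings – are made up purely for illustration):

```python
def simulate(servers_at, horizon=60, service_rate=1.0):
    """Toy fluid queue: each minute, some customers arrive and
    servers_at(t) * service_rate customers are processed."""
    q, waits = 0.0, []
    for t in range(horizon):
        demand = max(3.0, 8.0 - 0.1 * t)       # demand tapers off through the morning
        capacity = servers_at(t) * service_rate
        q = max(0.0, q + demand - capacity)    # queue left over after this minute
        waits.append(q / capacity)             # rough expected wait for a new arrival
    return waits

# all ten counters, then six vanish at once at t = 20
sudden = simulate(lambda t: 10 if t < 20 else 4)
# one counter closes every four minutes from t = 20 until four remain
phased = simulate(lambda t: max(4, 10 - max(0, t - 20) // 4))

def max_jump(waits):
    """Largest minute-on-minute increase in expected wait – the 'jhatka'."""
    return max(b - a for a, b in zip(waits, waits[1:]))

print(max_jump(sudden), max_jump(phased))
```

In this sketch the sudden shutdown produces a sharp jump in expected waiting time the minute the counters close, while the phased shutdown tracks the declining demand and no jump materialises at all.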

As the old Mirinda Lime advertisement went (#youremember), zor ka jhatka dheere se lage.

Bias in price signals from ask only markets

Yesterday I listened to this superb podcast where Russ Roberts of the Hoover Institution interviews Josh Luber who runs Campless, a secondary market for sneakers (listen to the podcast, it isn’t as bizarre as it sounds). The podcast is full of insights on markets and “thickness” and liquidity and signalling and secondary markets and so on.

To me, one of the most interesting takeaways of the podcast was the concept that the price information in “ask only markets” is positively biased. Let me explain.

A financial market is symmetric in that it has both bids (offers to buy stock) and asks (offers to sell). When a seller comes along who is willing to sell the stock at a bid price, he gets matched to the corresponding bid and the two trade. Similarly, if a buyer is willing to buy at an ask price, that ask gets “taken out”.

The “order book” at any time thus consists of both bids and asks – those that have been unmatched thus far – and looking at the order book gives you an idea of what the “fair price” for the stock is.
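A toy version of such a symmetric book (prices entirely made up) shows how even the unmatched orders signal a fair price:

```python
# unmatched offers in a symmetric order book, best price first
bids = [99, 98, 97]     # offers to buy
asks = [101, 102, 104]  # offers to sell

# a seller willing to accept 99 would match the best bid and trade at 99;
# a buyer willing to pay 101 would "take out" the best ask.
# meanwhile, the unmatched book itself signals fair value:
fair_estimate = (bids[0] + asks[0]) / 2
print(fair_estimate)  # midpoint of best bid and best ask
```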

However, not all markets are symmetric this way. In fact, most markets are asymmetric in that they only contain asks – offers to sell. Think of your neighbourhood shop – the shopkeeper is set up to only sell goods, at a price he determines (his “ask”). When a buyer comes along who is willing to pay the ask price of a good, a transaction happens and the good disappears.

Most online auction markets (such as eBay or OLX) also function the same way – they are ask only. People post on these platforms only when they have something to sell, accompanied by the ask price. Once a buyer who is willing to pay that price is found, the item disappears and the transaction is concluded.

What makes things complicated with platforms such as OLX or eBay (or Josh Luber’s Campless) is that most sellers are “retail”, who don’t have a clear idea of what price to ask for their wares. And this introduces an interesting bias.

Low (and more reasonable) asks are much more likely to find a match than higher asks. Thus, the former remain in the market for a much shorter time than the latter.

So if you were to poll the market at periodic intervals looking at the “best price” for a particular product, you are likely to end up with an overestimate because the unreasonable asks (which don’t get taken out that easily) are much more likely to occur in your sample than more reasonable asks. This problem can get compounded by prospective sellers who decide their ask by polling the market at regular intervals for the “best price” and use that as a benchmark.
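This bias is easy to reproduce in a toy simulation of an ask-only market (the distributions and numbers below are all made up): cheap asks get lifted quickly, so periodic polls of the “best price” see mostly the expensive leftovers.

```python
import random

random.seed(42)
FAIR = 100.0
book, polls, trades = [], [], []

for day in range(2000):
    # a retail seller posts an ask around fair value; some are wildly optimistic
    book.append(random.gauss(FAIR, 15))
    # a buyer arrives and lifts the best (lowest) ask if it is within budget
    budget = random.gauss(FAIR, 5)
    best = min(book)
    if best <= budget:
        book.remove(best)
        trades.append(best)
    # every tenth day, poll the market for its "best price"
    if day % 10 == 0 and book:
        polls.append(min(book))

avg_poll = sum(polls) / len(polls)
avg_trade = sum(trades) / len(trades)
print(f"avg polled best ask {avg_poll:.1f} vs avg traded price {avg_trade:.1f}")
```

With these made-up distributions, the average polled “best price” comes out well above the average price at which items actually changed hands – the cheap asks are simply never around when you poll.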

Absolutely fascinating stuff that you don’t normally think about. Go ahead and listen to the full podcast!

PS: Wondering how it would be if OLX/eBay were to be symmetric markets, where bids can also be placed. Like “I want a Samsung 26 inch flatscreen LCD TV for Rs. 10000”. There is a marketplace for B&Bs (not Airbnb) which functions this way. Would be interesting to study for sure!

Continuous and barrier regulation

One of the most important pieces of financial regulation in the US and Europe following the 2008 financial crisis is the designation of certain large institutions as “systemically important”, or in other words “too big to fail”. Institutions thus designated have greater regulatory and capital requirements, thus rendering them at a disadvantage compared to smaller competitors.

This is by design – one of the intentions of the “SiFi” (systemically important financial institution) designation is to provide incentives to companies to become smaller so that systemic risk is reduced. American insurer Metlife, for example, decided to hive off certain divisions so that it’s not a SiFi any more.

AIG, another major American insurer (which had to be bailed out during the 2008 financial crisis), is under pressure from its activist investors led by Carl Icahn to similarly break up so that it can avoid being a SiFi. The FT reports that there were celebrations in Italy when insurer Generali managed to get itself off the global SiFi list. Based on all this, the SiFi regulation seems to be working in spirit.

The problem, however, is with the method by which companies are designated SiFis, or rather, with the fact that SiFi is a binary definition. A company is either a SiFi or it isn’t – there is no continuum. This can lead to perverse incentives for companies to escape the SiFi tag, which might undermine the regulation.

Let’s say that the minimum market capitalisation for a company to be defined a SiFi is $10 billion (pulling this number out of thin air, and assuming that market cap is the only consideration for an entity to be classified as a SiFi). Does this mean that a company worth $10 Bn is “systemically important” but one worth $9.9 Bn is not? This invites regulatory arbitrage, which might lead to a revision of the benchmark, but it still remains a binary thing.

A better method for regulation would be for the definition of SiFi to be continuous, or fuzzy, so that as the company’s size increases, its “SiFiness” also increases proportionally, and the amount of additional regulations it has to face goes up “continuously” rather than being hit by a “barrier”. This way, the chances of regulatory arbitrage remain small, and the regulation will indeed serve its purpose.

SiFi is just one example – there are several other cases which are much better served by regulating companies (or individuals) as a continuum rather than classifying them into discrete buckets. When you regulate companies as parts of discrete buckets, there is always the temptation to change just enough to move from one bucket to the other, and that might result in gaming. Continuous regulation, on the other hand, leaves no room for such marginal gaming – marginal changes are only going to have a marginal impact.

Perhaps for something like SiFi, where the requirements of being a SiFi are binary (compliance, etc.), there may not be a choice but to keep the definition discrete (though if there are 10 different compliance measures, they can kick in at 10 different points, to simulate a continuous definition).

However, when the classification results in monetary benefits or costs (let’s say something like SiFis paying additional regulatory costs), it can be managed via non-linear funding. Let’s say that you pay 10% fees (for whatever) in category A and 12% in category B (which you get to once you cross a benchmark). A simple way to regulate would be to make the fees a superlinear function of your market cap (if that’s what the benchmark is based on).
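To sketch the difference numerically (using the made-up 10%/12% rates and $10bn threshold from above, and an equally arbitrary exponent of 1.1 for the superlinear version):

```python
def step_fee(mcap_bn):
    """Barrier regulation: 10% below the $10bn threshold, 12% at or above."""
    rate = 0.10 if mcap_bn < 10 else 0.12
    return rate * mcap_bn

def smooth_fee(mcap_bn):
    """Continuous alternative: a superlinear fee with no threshold at all."""
    return 0.10 * mcap_bn ** 1.1

# the saving from shrinking from $10.0bn to $9.9bn – the incentive to game
step_saving = step_fee(10.0) - step_fee(9.9)
smooth_saving = smooth_fee(10.0) - smooth_fee(9.9)
print(round(step_saving, 3), round(smooth_saving, 3))
```

Under the barrier scheme, shrinking from $10.0bn to $9.9bn slashes the fee bill sharply – a strong incentive to game the threshold – while under the superlinear scheme the same shrinkage saves an order of magnitude less.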

Why Delhi’s odd-even plan might work

While it is too early to look at data and come to an objective decision, there is enough reason to believe that Delhi’s “odd-even” plan (that restricts access to streets on certain days to cars of a certain parity) might work.

The program was announced sometime in December and the pilot started in January, and you have the usual (and some unusual) set of outragers outraging about it – about how it can cause chaos, make the city unsafe and so forth. An old picture of the Delhi metro was recirculated on Monday and received thousands of retweets from people who hadn’t bothered to check facts and were biased against the odd-even formula. There has been some anecdotal evidence, however, that the plan might be working.

It can be argued that the large number of exceptions (some of which are bizarre) might blunt the effect of the new policy, and that people might come up with innovative car-swap schemes (not all cars get out of their lots every morning, so a simple car-swap scheme can help people circumvent this ban), because of which only a small proportion of cars in Delhi might go off the roads thanks to the scheme.

While it might be true that the number of cars on Delhi roads might fall by far less than half (thanks to exemptions and swap schemes) due to this measure, that alone can have a significant impact on the city’s traffic, and pollution. This is primarily due to non-linearities in traffic around the capacity.

Consider a hypothetical example of a road with a capacity for carrying 100 cars per hour. As long as the number of cars that want to travel on it in an hour is less than 100, there is absolutely no problem and the cars go through. The 101st car, however, creates a problem, since the resource now needs to be allocated. The simplest way to allocate a resource such as a road is first come, first served, and so the 101st car waits its turn at the beginning of the road, causing a block on the road it is coming from.

While this might be a hypothetical and hard-to-visualise example, it illustrates the discontinuity in the problem – up to 100 cars, no problem, but the 101st causes a problem, and every additional car adds to it. More importantly, these problems cascade, since a car waiting to get on to a road clogs the road it is coming from.

Data is not available on the utilisation of Delhi roads before this new measure was implemented, but as long as the demand-supply ratio was not too much higher than 1, the new measure will be a success. In fact, if a fraction f of the earlier traffic remains on the road, the scheme will be a success as long as the earlier utilisation of the road was no more than \frac{1}{f} (of course, we are simplifying heavily here – traffic varies by region, time of day, etc.).
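That condition can be checked with a couple of lines (the 60% figure below is a made-up guess at the combined effect of exemptions and swap schemes):

```python
def utilisation_after(old_utilisation, f):
    """Demand/capacity ratio once only a fraction f of the earlier cars remains."""
    return old_utilisation * f

# say exemptions and car-swaps mean 60% of cars stay on the road (f = 0.6)
f = 0.6
# a road previously at utilisation 1.5 (< 1/f ≈ 1.67) becomes free-flowing...
print(utilisation_after(1.5, f) < 1)
# ...but one previously at 1.8 (> 1/f) stays congested even after the cut
print(utilisation_after(1.8, f) < 1)
```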

In other words, the reduction in the number of cars due to the new measure should mean significantly fewer bottlenecks and traffic jams, and ensure that the remaining cars move much faster than they did earlier. And with fewer bottlenecks and jams, cars will end up burning less fuel than they used to, and that adds a multiplier to the drop in pollution.

Given that roads are hard to price (in theory it’s simple but not so in practice), what we need is a mechanism that keeps the number of cars using a road at or below its capacity. The discontinuity around this capacity means that we need some kind of a coordination mechanism to keep demand below capacity. The tool that has currently been used (limiting road use based on number plate parity) is crude, but it will tell us whether such measures are indeed successful in cutting traffic.

More importantly, I hope that the Delhi government, traffic police, etc. have been collecting sufficient data through this trial period to determine whether the move has the intended effects. Once the trial period is over, we will know the true effect this has had (measuring pollution as some commentators have tried is crude, given lag effects, etc.).

If this measure is successful, other cities can plan to either replicate this measure (not ideal, since this is rather crude) or introduce congestion pricing in order to regulate traffic on roads.

Bayes and serial correlation in disagreements

People who have been in a long-term relationship are likely to recognise that fights between a couple are not Markovian – the likelihood of fighting today is not independent of whether you fought yesterday.

In fact, if you had fought in a particular time period, it increases the likelihood that you’ll fight in the next time period. As a consequence, what you are likely to find is that there are times when you go days, or weeks, or even months, together in a perennial state of disagreement, while you’ll also have long periods of peace and bliss.

While this serial correlation can be disconcerting at times, and make you wonder whether you are in a relationship with the right person, it is not hard to understand why this happens. Once again, our old friend Reverend Thomas Bayes comes to the rescue here.

This is an extremely simplified model, but it will serve the purpose of this post. Each half of a couple believes that the other (better?) half can exist in one of two states – “nice” and “jerk”. In fact, it’s unlikely anyone will completely exist in one of these states – they’re likely to exist in a superposition of the two.

So let’s say that the probability of your partner being a jerk is P(J), which puts the probability of him/her being “nice” at P(N) = 1 - P(J). Now when he/she does or says something (let’s call this event E), you implicitly do a Bayesian update of these probabilities.

For every word or action of your partner, you can estimate its likelihood under each of the two hypotheses – your partner being a jerk, and being nice. After every action E by the partner, you update your priors about them with the new information.

So the new probability of him being a jerk (given event E) will be given by
P(J|E) = \frac{P(J) \cdot P(E|J)}{P(J) \cdot P(E|J) + P(N) \cdot P(E|N)} (the standard Bayesian formula).

Now notice that the new probability of the partner being a jerk depends on the prior probability. So when P(J) is already high, it is highly likely that whatever action the partner takes will not move the needle significantly. And the longer P(J) stays high, the higher the probability that you’ll lapse into a fight again. Hence the prolonged periods of fighting, and the serial correlation.

This equation also explains why attempts to resolve a fight quickly can backfire. When you are fighting, the natural reaction is to try to resolve it by committing actions that indicate that you are actually nice. The problem is that the equation above has both P(E|N) and P(E|J) in it.

So, in order to resolve a fight, you should not only commit actions that you would do if you were nice, but also actions that you would NOT do if you were a jerk. In other words, the easiest way to pull P(J) down in the above equation is to commit an E with high P(E|N) and low P(E|J).
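Plugging some made-up numbers into the formula shows why this matters:

```python
def p_jerk_given_e(p_j, p_e_given_j, p_e_given_n):
    """Posterior P(J|E) via the Bayes formula above."""
    p_n = 1 - p_j
    return (p_j * p_e_given_j) / (p_j * p_e_given_j + p_n * p_e_given_n)

prior = 0.9  # mid-fight, P(J) is high

# a nice gesture that a jerk might plausibly also make: the needle barely moves
p1 = p_jerk_given_e(prior, p_e_given_j=0.6, p_e_given_n=0.8)   # ~0.87

# a gesture a jerk would almost never make: the prior collapses
p2 = p_jerk_given_e(prior, p_e_given_j=0.1, p_e_given_n=0.8)   # ~0.53
print(round(p1, 2), round(p2, 2))
```

The second gesture is the diagnostic one: it is high P(E|N) combined with low P(E|J), not niceness alone, that pulls the posterior down.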

What complicates things is that if you use one such weapon too many times, the partner will begin to see through you, and up her P(E|J) for this event. So you need to keep coming up with new tricks to defuse fights.

In short, that serial correlation exists in relationship fights is a given, and there is little you can do to prevent it. So if you go through a long period of continuous disagreement with your partner, keep in mind that such things are par for the course, and don’t do something drastic like breaking up.

Hooke’s Curve, hooking up and dressing sense

So Priyanka and I were talking about a mutual acquaintance, and the odds of her (the acquaintance) being in a relationship, or trying to get into one. I offered “evidence” that this acquaintance (who I meet much more often than Priyanka does) has been dressing progressively better over the last year, and from that evidence, it’s likely that she’s getting into a relationship.

“It can be the other way, too”, Priyanka countered. “Haven’t you seen countless examples of people who have started dressing really badly once they’re in a relationship?”. Given that I had several data points in this direction, too, there was no way I could refute it. Yet, I continued to argue that given what I know of this acquaintance, it’s more likely that she’s still getting into a relationship now.

“I can explain this using Hooke’s Law”, said Priyanka. Robert Hooke, as you know, was a polymath British scientist of the seventeenth century. He made seminal contributions to various branches of science, though to the best of my knowledge he said nothing on relationships (he was himself a lifelong bachelor). In Neal Stephenson’s The Baroque Cycle, for example, Hooke conducts a kidney stone removal operation on one of the protagonists, and given the range of his expertise, that’s not too far-fetched.

“So do you mean Hooke’s Law as in stress is proportional to strain?”, I asked. Priyanka asked if I remembered the Hooke’s Curve. I said I didn’t. “What happens when you keep increasing stress?”, she asked. “Strain grows proportionally until the material snaps”, I said. “And how does the curve go then?”, she asked. I made a royal mess of drawing this curve (it didn’t help that in my mind I had plotted stress on the X-axis and strain on the Y, while the convention is the other way round).

After making a few snide remarks about my IIT-JEE performance, Priyanka asked me to look up the curve and proceeded to explain how the Hooke’s curve (produced here) explains relationships and dressing sense.

“As you get into a relationship, you want to impress the counterparty, and so you start dressing better”, she went on. “These two feed on each other and grow together, until the point when you start getting comfortable in the relationship. Once that happens, the need to impress the other person decreases, and you start wearing more comfortable, and less fashionable, clothes. And then you find your new equilibrium.

“Different people find their equilibria at different points, but for most it’s close to their peak. Some people, though, regress all the way to where they started.

“So yes, when people are getting into a relationship they start dressing better, but you need to watch out for when their dressing sense starts regressing. That’s the point when you know they’ve hooked up”, she said.

By this point in time I was asking to touch her feet (which was not possible since she’s currently at the other end of the world). Connecting two absolutely unrelated concepts – Hooke’s Law and hooking up, and building a theory on that. This was further (strong) confirmation that I’d married right!

Mythology, writing and evolution: Exodus edition

I watched half of Exodus: Gods and Kings last night (I’d DVRd it a few days back seeing it’s by Ridley Scott). The movie started alright, and the story was well told. Of Moses’s fight with Rameses, of Moses being found out, of his exile and struggle and love story and finding god on a mountain. All very nice and well within the realms of good mythology.

And then Moses decides to hear god’s word and goes to Memphis to free his fellow Hebrews. There’s a conspiracy hatched. Sabotage begins. Standard guerrilla stuff that slaves ought to do to revolt against their masters. Up to that point in time I’d classified Exodus as a good movie.

And then things started getting bad. God told Moses that the latter wasn’t “doing enough” and that god would do things his way. And so the Nile got polluted. Plants died. Animals died. Insects attacked. Birds attacked (like in that Hitchcock movie). What had been shaping up to be a good slave-revolt story suddenly went awry. The entire movie could be described by this one scene in Raiders of the Lost Ark:

When you see the guy twirling the sword, you set yourself up for a good fight. And then Indiana just pulls out a gun and shoots him! As a subplot in that movie, it was rather funny. But if the entire plot of a movie centres around one such incident (god sending the plague to Egypt, in this case), it’s hard to continue watching.

Checking out the movie on IMDB, I realised that it has a pretty low rating and didn’t recover its investment. While this is surprising given the reputation of Scott, and how the first part of the movie is set up and made, looking at the overall plot it isn’t that surprising. The problem with the movie is that it builds on an inherently weak plot, so the failure is not unexpected.

It did not help that I was reading mythology, or a realistic mythological interpretation, earlier in the day – the English translation of SL Bhyrappa’s Parva. In that, Bhyrappa has taken an already complex epic, and added his own degrees of complexity to it by seeking to remove all divinity and humanise the characters. Each major character has a long monologue (I’m about a third into the book), which explores deep philosophical matters such as “what is Dharma”, etc.

While moving directly from humanised philosophical myth to unabashedly religious story might have prevented me from appreciating the latter, it still doesn’t absolve the rather simplistic nature of the latter myth. I admit I’m generalising based on one data point, not having read any other Christian myth, but from this one data point, Christian myth seems rather weak compared to Hindu or Greek or Roman myth.

My explanation for this is that unlike other myths, Christian myth didn’t have enough time to evolve before it was written down. While the oral tradition meant that much valuable human memory was spent in mugging up stories and songs, and that transmission was never exact, it also meant that there was room for the stories to evolve. Having been transmitted through oral tradition for several centuries, Hindu, Greek and Roman stories were able to evolve and become stronger. Ultimately, when they got written down, it was in a much-evolved “best of” form. In fact, some of these myths got written down in multiple forms, which allowed them to evolve even after writing came by.

While writing saves human memory and prevents distortions, it leaves no room for variation or improvisation. Since there is now an “original book”, and such books are deemed to be “words of God”, there is no room for improvisation or reinterpretation. So we are left with the same simplistic story that we started off with. I hope this explains why Exodus, despite a stud director, is a weak movie.