Horses, Zebras and Bayesian reasoning

David Henderson at Econlog quotes a doctor on a rather interesting and important point, regarding Bayesian priors. He writes:

 Later, when I went to see his partner, my regular doctor, to discuss something else, I mentioned that incident. He smiled and said that one of the most important lessons he learned from one of his teachers in medical school was:

When you hear hooves, think horses, not zebras.

This was after he had experienced some symptoms that are correlated with a heart attack, panicked and called his doctor, and got treated for gas trouble, after which he was absolutely fine.

Our problem is that when we have symptoms that are correlated with something bad, we immediately assume that it’s the bad thing that has happened, and panic. In the process we fail to consider alternative explanations, and to do a Bayesian analysis.

Let me illustrate with a personal example. Back when I was a schoolboy, and I wouldn’t return home from school at the right time, my mother would panic. This was the time before cellphones, remember, and she would just assume that “the worst” had happened and that I was in trouble somewhere. Calls would go to my father’s office, and he would ask her to wait, though to my credit I was never so late that they had to take any further action.

Now, coming home late from school can happen due to a variety of reasons. Let us eliminate reasons such as wanting to play basketball for a while before returning – since such activities were “usual” and had been budgeted for. So let’s assume that there are two possible reasons I’m late – the first is that I had gotten into trouble – I had either been knocked down on my way home or gotten kidnapped. The second is that the BTS (Bangalore Transport Service, as it was then called) schedule had gone completely awry, thanks to which I had missed my usual set of buses, and was thus delayed. Note that me not turning up at home until a certain point of time was a symptom of both of these.

Having noticed such a symptom, my mother would automatically jump to the “worst case” conclusion (that I had been knocked down or kidnapped), and panic. But I’m not sure that was the rational reaction. What she should have done was a Bayesian analysis, and used that to guide her panic.

Let A be the event that I’d been knocked over or kidnapped, and B the event that the bus schedule had gone awry. Let L(t) be the event that I haven’t gotten home till time t, and that such an event has been “observed”. The question is: with L(t) having been observed, what are the odds of A and B having happened? Bayes’ Theorem gives us an answer. The equation is rather simple:

P(A|L(t)) = \frac{P(A) \cdot P(L(t)|A)}{P(A) \cdot P(L(t)|A) + P(B) \cdot P(L(t)|B)}

P(B|L(t)) is just one minus the above quantity (we assume that nothing else can cause L(t)).

So now let us give values. I’m too lazy to find the data now, but let’s say we find from the national crime data that the odds of a fifteen-year-old boy being in an accident or kidnapped on a given day are one in a million. And if that happens, then L(t) obviously gets observed. So we have

P(A) = \frac{1}{1000000}
P(L(t) | A) = 1

The BTS was notorious back in the day for its delayed and messed up schedules. So let us assume that P(B) is \frac{1}{100}. Now, P(L(t)|B) is tricky, and is the reason the (t) qualifier has been added to L. The larger t is, the smaller the value of P(L(t)|B). If there is a bus schedule breakdown, there is probably a 50% probability that I’m not home an hour after “usual”, but only a 10% probability that I’m not home two hours after “usual” because of a bus breakdown. So

P(L(1)|B) = 0.5
P(L(2)|B) = 0.1

Now let’s plug in, and based on how delayed I was, find the odds that I had been knocked down or kidnapped. If I were late by an hour,
P(A|L(1)) = \frac{ \frac{1}{1000000} \cdot 1 }{ \frac{1}{1000000} \cdot 1 + \frac{1}{100} \cdot 0.5}
or P(A|L(1)) = 0.00019996. In other words, if I wasn’t home an hour after my usual time, the odds that I had been knocked down or kidnapped were just one in five thousand!

What if I hadn’t come home two hours after my normal time? Again we can plug into the formula, and here we find that P(A|L(2)) = 0.000999, or one in a thousand! So notice that the later I am, the higher the odds that I’m in trouble. Yet the numbers (admittedly based on the handwaving assumptions above) are small enough for us not to worry!
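The arithmetic above takes only a few lines of Python; the priors and likelihoods are the hand-waved values from the text.

```python
def p_bad_given_late(p_a, p_l_a, p_b, p_l_b):
    """Bayes' Theorem, assuming A (trouble) and B (bus delay)
    are the only possible causes of L(t) (not home by time t)."""
    num = p_a * p_l_a
    return num / (num + p_b * p_l_b)

# One hour late: P(L(1)|B) = 0.5
print(p_bad_given_late(1e-6, 1.0, 1e-2, 0.5))  # ~0.0002, one in five thousand

# Two hours late: P(L(2)|B) = 0.1
print(p_bad_given_late(1e-6, 1.0, 1e-2, 0.1))  # ~0.001, one in a thousand
```

Note how the posterior rises with lateness, but stays tiny either way – exactly the point of the exercise.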

Bayesian reasoning has its implications elsewhere, too. There is the medical case, as Henderson’s blogpost illustrates. Then we can use this to determine whether a wrong act was due to stupidity or due to malice. And so forth.

But what Henderson’s doctor told him is truly an immortal line:

When you hear hooves, think horses, not zebras.

Should you stop flying Malaysian?

So Malaysian Airlines faced its second tragedy in four months when its flight MH17 was shot down over Eastern Ukraine yesterday. In response to this terrorist attack, the stock price of Malaysian Airlines dropped sharply in today’s trading. Given that the airline has faced two tragedies in quick succession, the question is whether you should stop flying the airline, and whether the price crash is justified.

The basic question we need to ask ourselves before we book our next ticket is the probability of a Malaysian flight crashing vis-a-vis the probability of a flight belonging to another airline crashing. Now, one never knows what happened to MH370, but most reports (months after the disappearance) point to either sabotage or a terrorist attack. Based on analysis and reports so far, it is extremely unlikely that MH370 disappeared on account of any technical or security lapse on the part of the airline.

Coming to MH17, which was shot down over Ukraine, again it must be recognized that the aircraft went down thanks to a terrorist attack. It must also be pointed out that the attack came from the ground and not from on board, and that there is nothing to indicate any technical or security lapse on the part of the airline that led to the attack.

Moreover, given that neither Malaysia nor the Netherlands (MH17 took off from Amsterdam) has anything to do with either side of the Ukraine conflict, we can say with high confidence that the targeting of Malaysian Airlines in yesterday’s attack was purely incidental. It is more likely that the terrorists either wanted to shoot down a Russian or Ukrainian aircraft and took down the Malaysian flight by mistake, or simply wanted to show their intent by shooting down some aircraft.

Based on this analysis, it is unlikely that there is something specific about Malaysian Airlines that has led to the two accidents in the recent past. In this light, fear of flying Malaysian is irrational, and there is no reason to believe that a Malaysian flight is going to be less safe than a flight of another airline. So if you are flying on a route that is served by Malaysian, after accounting for cost and time and other “normal” factors of consideration, there is no reason why you should prefer to fly another airline rather than Malaysian.

And should you fly at all? If it’s a route that you would normally travel by flight, you most definitely should, for on a passenger-kilometre basis, traveling by flight is definitely safer than traveling by car.

Then what about the markets? The stock price of MH has tanked because the market believes that people are going to fly MH less. Considering that most people are irrational, this is a fair judgment to make, and so one can say that the stock price crash is justified. However, unless something untoward happens (which can actually be traced back to incompetence on behalf of MH), it is likely that MH traffic fall will be much lower than what the markets expect, so it might make sense to buy the stock today – if you have the opportunity to do so. And as a passenger, MH fares are likely to get more competitive in the near term, so you might want to take advantage of that also!

Review: The Theory That Would Not Die

I was introduced to Bayes’ Theorem of Conditional Probabilities in a rather innocuous manner back when I was in Standard 12. KVP Raghavan, our math teacher, talked about pulling black and white balls out of three different boxes. “If you select a box at random, draw two balls and find that both are black, what is the probability you selected box one?”, he asked, and explained to us the concept of Bayes’ Theorem. It was intuitive, and I accepted it as truth.
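That kind of problem can be worked through in a few lines of Python. The box contents here are made up (the original problem’s numbers are long forgotten); the structure is the same: prior times likelihood, normalized over all the boxes.

```python
from fractions import Fraction

# Hypothetical contents: box 1 has 4 black and 1 white ball,
# box 2 has 2 black and 3 white, box 3 has 1 black and 4 white.
boxes = {1: (4, 1), 2: (2, 3), 3: (1, 4)}
prior = Fraction(1, 3)  # box chosen uniformly at random

def p_two_black(black, white):
    """Probability of drawing two black balls without replacement."""
    total = black + white
    return Fraction(black, total) * Fraction(black - 1, total - 1)

likelihood = {k: p_two_black(b, w) for k, (b, w) in boxes.items()}
evidence = sum(prior * l for l in likelihood.values())

posterior_box1 = prior * likelihood[1] / evidence
print(posterior_box1)  # 6/7 with these (made-up) numbers
```

Seeing two black balls shifts almost all the belief onto the blackest box – which is exactly the intuition the theorem formalizes.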

I wouldn’t come across the theorem again, however, for another four years or so, until, in a course on Communication, I came across the concept of “Hidden Markov Models”. If you observe a signal, and it could have come from one of four different transmitters, what are the odds that it was generated by transmitter one? Once again, it was rather intuitive. And once again, I wouldn’t come across or use the theorem for a few years.

A couple of years back, I started following the blog of Columbia Statistics and Social Sciences Professor Andrew Gelman. Here, I came across the terms “Bayesian” and “non-Bayesian”. For a long time, the terms baffled me to no end. I just couldn’t get what the big deal about Bayes’ Theorem was – as far as I was concerned it was intuitive and “truth” and I saw no reason to disbelieve it. However, Gelman frequently alluded to this topic, and used the term “frequentists” for non-Bayesians. It was puzzling why people refused to accept such an intuitive rule.

The Theory That Would Not Die is Sharon Bertsch McGrayne’s attempt to tell the history of Bayes’ Theorem. The theorem, according to McGrayne,

survived five near-fatal blows: Bayes had shelved it; Price published it but was ignored; Laplace discovered his own version but later favored his frequency theory; frequentists virtually banned it; and the military kept it secret.

The book is about the development of the theorem and associated methods over the last two hundred and fifty years, ever since Rev. Thomas Bayes first came up with it. It talks about the controversies associated with the theorem, about people who supported, revived or opposed it; about key applications of the theorem, and about how it was frequently and for long periods virtually ostracized.

While the book is ostensibly about Bayes’ Theorem, it is also a story of how science develops, and comes to be. Bayes proposed his theorem but didn’t publish it. His friend Price put things together and published it, but without any impact. Laplace independently discovered it, but later in his life moved away from it, using frequency-based methods instead. The French army revived it and used it to determine the optimal way to fire artillery shells. But then academic statisticians shunned it and “Bayes” became a swearword in academic circles. Once again, it saw a revival during the Second World War, helping break codes and test weapons, but all this work was classified. And then it found supporters in unlikely places – biology departments, Harvard Business School and military labs – but statistics departments continued to oppose it.

The above story is pretty representative of how a theory develops – initially it finds few takers. Then popularity grows, but the establishment doesn’t like it. It then finds support from unusual places. Soon, this support comes from enough places to build momentum. The establishment continues to oppose but is then bypassed. Soon everyone accepts it, but some doubters remain.

Coming back to Bayes’ Theorem – why is it controversial and why was it ostracized for long periods of time? Fundamentally it has to do with the definition of probability. According to “frequentists”, who should more correctly be called “objectivists”, probability is objective, and based on counting. Objectivists believe that probability is based on observation and data alone, and not from subjective beliefs. If you ask an objectivist, for example, the probability of rain in Bangalore tomorrow, he will be unable to give you an answer – “rain in Bangalore tomorrow” is not a repeatable event, and cannot be observed multiple times in order to build a model.

Bayesians, who should more correctly be called “subjectivists”, on the other hand believe that probability can also come from subjective beliefs. So it is possible to infer the probability of rain in Bangalore tomorrow based on other factors – like the cloud cover in Bangalore today or today’s maximum temperature. According to subjectivists (which is the current prevailing thought), probability for one-time events is also defined, and can be inferred from other subjective factors.

Essentially, the battle between Bayesians and frequentists has more to do with the definition of probability than with whether it makes sense to define inverse probabilities as in Bayes’ Theorem. The theorem is controversial only because the prevailing statistical establishment did not agree with the “subjectivist” definition of probability.

There are some books that I call ‘blog-books’. These usually contain ideas that could be easily explained in a blog post, but are expanded to book length – possibly because it is easier to monetize a book-length manuscript than a blog-length one. When I first downloaded a sample of this book to my Kindle I was apprehensive that this book might also fall under that category – after all, how much can you talk about a theorem without getting too technical? However, McGrayne avoids falling into that trap. She peppers the book with interesting stories of the application of Bayes’ Theorem through the years, and also short biographical tidbits of some of the people who helped shape the theorem. Sometimes (especially towards the end) some of these examples (of applications) seem a bit laboured, but overall, the book sustains adequate interest from the reader through its length.

If I had one quibble with the book, it would be that even after describing the story of the theorem, the book talks about “Bayesian” and “non-Bayesian” camps, and about certain scientists “not doing enough to further the Bayesian cause”. For someone who is primarily interested in getting information out of data, and doesn’t care about the methods involved, it was a bit grating that scientists be graded on their “contribution to the Bayesian cause” rather than their “contribution to science”. Given the polarizing history of the theorem, however, it is perhaps not that surprising.

The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy
by Sharon Bertsch McGrayne
USD 12.27 (Kindle edition)
360 pages (including appendices and notes)

Religion and Probability

If only people were better at mathematics in general and probability in particular, we might not have had religion

Last month I was showing my mother-in-law the video of the meteor that fell in Russia causing much havoc, and soon the conversation drifted to why the meteor fell where it did. “It is simple mathematics that the meteor fell in Russia”, I declared, trying to show off my knowledge of geography and probability, arguing that Russia’s large landmass made it the most probable country for the meteor to fall in. My mother-in-law, however, wasn’t convinced. “It’s all god’s choice”, she said.

Recently I realized the fallacy in my argument. While it was probabilistically more likely that the meteor would fall in Russia than in any other country, there was no good scientific reason to explain why it fell at the exact place it did. It could just as likely have fallen in any other place. It was just a matter of chance that it fell where it did.
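The landmass argument itself is simple arithmetic: if a meteor is equally likely to hit any point of Earth’s land surface, the largest country is the single most likely place for it to land. A back-of-the-envelope sketch, using commonly cited approximate areas (in millions of km²):

```python
land_total = 148.9  # Earth's total land area, approx.
areas = {"Russia": 17.1, "Canada": 10.0, "USA": 9.8, "China": 9.6, "Brazil": 8.5}

# Probability of landing in each country = its share of the land surface
probs = {country: area / land_total for country, area in areas.items()}

most_likely = max(probs, key=probs.get)
print(most_likely, round(probs["Russia"], 3))  # Russia, ~0.115
```

An 11-12% chance makes Russia the best single bet, but it also means the meteor was far more likely to fall somewhere else – which is why “it was most likely to fall in Russia” explains nothing about where it actually fell.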

Falling meteors are not the only events in life that happen with a certain degree of randomness. There are way too many things that are beyond our control which happen when they happen and the way they happen for no good reason. And the kicker is that it all just doesn’t average out. Think about the meteor itself for example. A meteor falling is such a rare event that it is unlikely to happen (at least with this kind of impact) again in most people’s lifetimes. This can be quite confounding for most people.

Every time I’ve studied probability (be it in school or engineering college or business school), I’ve noticed that most people have much trouble understanding it. I might be generalizing based on my cohort but I don’t think it would be too much of a stretch to say that probability is not the easiest of subjects to grasp for most people. Which is a real tragedy given the amount of randomness that is a fixture in everyone’s lives.

Because of the randomness inherent in everyone’s lives, and because most of these random events don’t really average out in people’s lifetimes, people find the need to call upon an external entity to explain these events. And once the existence of one such entity is established, it is only natural to attribute every random event to the actions of this entity.

And then there is the oldest mistake in statistics – assuming that if two events happen simultaneously or one after another, one of the events is the cause of the other. (I’m writing this post while watching football.) Back in 2008-09, the last time Liverpool FC mounted a serious challenge for the English Premier League, I noticed a pattern over a month where Liverpool won all the games that I happened to watch live (on TV) and either drew or lost the others. Being rather superstitious, I immediately came to the conclusion that my watching a game actually led to a Liverpool victory. And every time that didn’t happen (that 2-2 draw at Hull comes to mind) I would rationalize it by attributing it to a factor I had hitherto left out of “my model” (like my being seated on the wrong chair, or my phone ringing when a goal went in).
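That kind of coincidence is easy to reproduce with a quick simulation. A sketch, assuming for simplicity six games a month, three of them watched at random, and an even chance of winning each game (all made-up numbers):

```python
import random

random.seed(42)

def perfect_pattern(n_games=6, n_watched=3, p_win=0.5):
    """One month: by pure chance, was every game I watched a win?"""
    results = [random.random() < p_win for _ in range(n_games)]
    watched = random.sample(range(n_games), n_watched)
    return all(results[g] for g in watched)

trials = 100_000
rate = sum(perfect_pattern() for _ in range(trials)) / trials
print(rate)  # ~0.125: one fan in eight sees a "perfect" month with no causation at all
```

With these numbers, roughly one superstitious fan in eight would see a month where “watching caused winning” held perfectly – with zero actual connection between the two.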

So you have a number of events which happen the way they happen randomly, and for no particular reason. Then, you have pairs of events that for random reasons happen in conjunction with one another, and the human mind that doesn’t like un-explainable events quickly draws a conclusion that one led to the other. And then when the pattern breaks, the model gets extended in random directions.

Randomness leads you to believe in an external entity who is possibly choreographing the world. When enough of you believe in one such entity, you come up with a name for the entity, for example “God”. Then people come up with their own ways of appeasing this “God”, in the hope that it will lead to “God” choreographing events in their favour. Certain ways of appeasement happen simultaneously with events favourable to the people who appeased. These ways of appeasement are then recognized as legitimate methods to appease “God”. And everyone starts following them.

Of course, the experiment is not repeatable – for the results were purely random. So people carry out activities to appease “God” and yet experience events that are unfavourable to them. This is where model extension kicks in. Over time, certain ways of model extension have proved to be more convincing than others, the most common one (at least in India) being ‘“God” is doing this to me because he/she wants to test me’. Sometimes these model extensions also fail to convince. However, the person has so much faith in the model (it has after all been handed down by his/her ancestors, and surely a wrong model could not have propagated?) that he/she is not willing to question the model, and tries instead to further extend it in another random direction.

In different parts of the world, different methods of appeasement to “God” happened in conjunction with events favourable to the appeasers, and so this led to different religions. Some people whose appeasements were correlated with favourable events had greater political power (or negotiation skills) than others, so the methods of appeasement favoured by the former grew dominant in that particular society. Over time, mostly due to political and military superiority, some of these methods of appeasement grew disproportionately, and others lost their way. And we had what are now known as “major religions”. I don’t need to continue this story.

So going back, it all once again boils down to the median man’s poor understanding of the concepts of probability and randomness, and the desire to explain all possible events. Had human understanding of probability and randomness been superior, it is possible that religion might not have existed at all!

The day I learnt to stop worrying and learnt to protect myself

For at least six years, from early 2006 to early 2012, I “suffered” from what medical practitioners term “anxiety”. It was “co-morbid” with my depression, and I think it was there from much before 2006. I would frequently think about random events, and wonder what would happen if things happened in a certain way. I would think of “negative black swan” events – events with low probability but a significant negative impact on my life.

While considering various possibilities and preparing for them is a good thing, the way I handled them was anything but good. Somewhere in my system was wired the thought that simply worrying about an event would prevent it from happening. I once got fired from one job. Every day during my next two jobs, I would worry if I would get fired. If I got an uncharitable email from my boss, I would worry if he would fire me. If my blackberry failed to sync one morning I would worry that it was because I had already been fired. Needless to say, I got fired from both these jobs also, for varying reasons.

I used to be a risk-taker. And it so happened that for a prolonged period in my life, a lot of risks paid off. And then for another rather prolonged period, none of them did (Mandelbrot beautifully calls this phenomenon the Joseph effect). The initial period of successful risk-taking probably led me to take more risk than was prudent. The latter period of failure led me to cut down on risks to an unsustainable level. I would be paranoid about any risks I had left myself exposed to. This however doesn’t mean that the risks didn’t materialize.

It was in January of last year that I started medication for my anxiety and depression. For a few days there was no effect. Then, suddenly, I seemed to hit a point of inflexion and my anxious days were far behind. While I do credit Venlafaxine Hydrochloride, I think one event in this period did more than anything else to get me out of my anxiety.

I was riding my Royal Enfield Classic 500 across the country roads of Rajasthan, as part of the Royal Enfield Tour of Rajasthan. The first five days of the tour had gone rather well. Riding across the rather well-made Pradhan Mantri Gram Sadak Yojana (PMGSY) roads set across beautiful landscapes had already helped clear out my mind a fair bit. It gave me the time and space to think without getting distracted. I would make up stories as I rode, and at the end of each day I would write a 500 word essay in my diary. All the riding gear meant that the wind never really got into my hair or my face, but the experience was stunning nevertheless. For a long time in life, I wanted to “be accelerated”. Ride at well-at-a-faster-rate, pulling no stops. And so I rode. On the way to Jaisalmer on a rather empty highway, I even hit 120 kmph, which I had never imagined I would hit on my bike. And I rode fearlessly, the acceleration meaning that my mind didn’t have much space for negative thoughts. Things were already so much better. Until I hit a cow.

Sometimes I rationalize saying I hadn’t consumed my daily quota of Venlafaxine Hydrochloride that morning. Sometimes I rationalize that I was doing three things at the same time – one more than the number of activities I can normally successfully carry out simultaneously. There are times when I replay the scene in my head and wonder how things would have been had I done things differently. And I sometimes wonder why the first time I ever suffered a fracture had to happen in the middle of nowhere, far off from home.

It had been a wonderful morning. We had left the camp at Sam early, stopping for fuel at Jaisalmer, and then at this wonderful dhaba at Devikot, where we had the most awesome samosa-bajjis (massive chilis were first coated with a layer of potato curry – the one they put in samosa – and then in batter and deep fried). For the first time that day I had the camera out of its bag, hanging around my neck. I would frequently stop to take photos, of black camels and fields and flowers and patterns in the cloud. The last photo I took was of Manjunath (from my tour group) riding past a herd of black camels.

I function best when I do two things at a time. That morning I got over confident and did three. I was riding on a road 10 feet wide at 80 kilometres per hour. I was singing – though I’ve forgotten what I was singing. And I was thinking about something. My processor went nuts. While things were steady state on the road there was no problem. There was a problem, however, when I saw a bit too late that there was a massive herd of massive cows blocking my path further down the road.

There was no time to brake. I instead decided to overtake the herd by moving to the right extreme of the road (the cows were all walking on the road in the same direction as me). To my misfortune, one of the cows decided to move right at the same time, and I hit her flush in the backside. The next thing I remember is lying sprawled on the side of the road about five metres from where my bike had fallen. There was no sign of the cow. The bike was oozing petrol but I wasn’t able to get up to lift it – presently, others in my tour group who were a few hundred metres behind reached the scene and picked up my bike. And I don’t know what state of mind I was in, but my first thought after I picked myself up was to check on my camera!

The camera wasn’t alright – it required significant repairs after I got back home, but I was! I had broken my fifth metacarpal, which I later realized was a consequence of the impact of the bike hitting the cow. There were some gashes on my bicep where the protective padding of my riding jacket had pressed against my skin. I still have a problem with a ligament in my left thumb, again a consequence of the impact. And that was it.

I had had an accident while traveling at 80 kmph. I had fallen a few metres away from the point of impact (I don’t know if I somersaulted as I fell, though). I fell flush on my shoulder, with my head hitting the ground shortly after. It was a rather hard fall on the side of the road, where the ground was uneven. And there was absolutely no injury because of the fall (all the injury was due to the impact)!

It was the protection. No amount of worry would have prevented that accident. Perhaps I was a bit more careless than I should have been, but even utmost care would not have guaranteed there being no accident. When you are riding a two wheeler at a reasonable pace on country roads, irrespective of how careful you are there is always a chance that you may fall. The probability of a fall can never go to zero.

What I had done instead was to protect myself from the consequences of the fall. Each and every piece of protective equipment I wore that day took some impact – helmet, riding jacket, riding gloves, knee guard, shoes. Without any one of these pieces, there is a chance I might have ended up with serious injury. There was a cost I paid – both monetary and the discomfort of wearing such heavy gear – but it had paid off.

Black swans exist. Worrying about them, however, will not keep them away. Those events cannot be prevented. What you need to do, instead, is to hedge against their consequences. There was always a finite possibility that I would fall. All I did was to protect myself against the consequences of that fall!

Despite contrary advice from the doctor, I decided to ride on and finish the tour, struggling to wear my riding glove over my swollen right hand – stopping midway would have had a significant adverse impact on my mental state which had just begun to improve. I’ve stopped worrying after that. Yes, there are times when I see a chance of some negative black swan event happening. I don’t worry about that any more, though. I only think of how I can hedge against its consequences.

Pot and cocaine

Methylphenidate, the drug I take to contain my ADHD, is supposed to be similar to cocaine. Overdosing on Methylphenidate, I’m told, produces the same effects on the mind that snorting cocaine would, because of which it is a tightly controlled drug. It is available only in two pharmacies in Bangalore, and they stamp your prescription with a “drugs issued” stamp before giving you the drugs.

Extrapolating, and referring to the model in my post on pot and ADHD, snorting cocaine increases the probability that two consecutive thoughts are connected, and that there is more coherence in your thought. However, going back to the same post, which was written in a pot-induced state of mind, pot actually pushes you in the other direction, and makes your thoughts less connected.

So essentially, pot and cocaine are extremely dissimilar drugs in the sense that they act in opposite directions! One increases the connectedness in your train of thought, while the other decreases it!

I’ve never imbibed cocaine, so this is not first-hand info, but I’ve noticed that alcohol, when taken in heavy doses (which I never reach since I’m the designated driver most of the time), acts in the same direction as cocaine/methylphenidate – it increases the coherence of your thoughts. Now you know why junkies in your college would claim that the kind of “high” that pot gives is very different from the kind of high that alcohol gives.

US MBA Admissions

B-schools based in the US use a unique self-selecting mechanism to filter out applicants who might be a bad fit for a management job. This they achieve by making the application process more complicated, but in a way that the kind of people they hope to attract find simple.

Let me explain. Like most other graduate programs in the US, B-schools also require applicants to get a set of letters of recommendation. Unlike other programs, though, these are not simple letters of recommendation. Rather than the recommender simply writing one essay extolling the virtues of the candidate and requesting the university to grant admission, here he/she has to answer a bunch of questions that the university asks. These questions might range from the mundane-sounding (I’m told there’s a catch, though) “How do you know the applicant?” to high-sounding stuff like “What is your opinion of the leadership qualities of the applicant? How can they be improved?”. The word limit for all questions put together comes to 1500 words.

So now, if someone comes to you asking for a recommendation, unless you are really invested in their career you will not have the enthu to put in so much effort. If you like the candidate, you might be willing to put some time into it, but you are likely to wholeheartedly produce four good essays for each school the applicant is applying to (note that no two schools ask the same questions) only if you feel really invested in the applicant’s career – the probability of which is really low.

By having such a complicated system of soliciting recommendations, the schools ensure that all candidates fall into one of two categories. Either they have done so well in one of their jobs that their boss or client feels invested enough to spend a few hours of their time writing recommendations, or they have the necessary people management skills to go to bosses and clients and professors and get them to write the recommendations. Of course, irrespective of how good your people management skills are, it is unlikely you can get someone to spend so many hours on your recommendation letters. Still, the minimum you require is to convince them that you will write the recommendation yourself and that they should rubber-stamp it. No big deal, that.

This way, all applicants to US B-schools are people who have a knack for getting things done. The age at which application happens (mid to late twenties) also minimizes parental participation in the effort. Apart from the self-selection and filtration, the amount of time and effort required for application also helps weed out frivolous candidates (remember those who “wrote CAT just as a backup”?).