Tag Archives: match

Duckworth Lewis and Sprinting a Marathon

How would you like it if you were running a marathon and someone were to set you targets for every 100 meters? “Run the first 100m in 25 seconds. The second in 24 seconds” and so on? It is very likely that you would hate the idea. You would argue that the idea of the marathon would be to finish the 42-odd km within the target time you have set for yourself and you don’t care about any internal targets. You are also likely to argue that different runners have different running patterns and imposing targets for small distances is unfair to just about everyone.

Yet, this is exactly what cricketers are asked to do in games that are likely to be affected by rain. The Duckworth-Lewis method, which has been in use to adjust targets in rain-affected matches since 1999, assumes an average “scoring curve” according to which a team scores runs during its innings. It’s basically an extension of the old thumb-rule that a team is likely to score as many runs in the last 20 overs as it does in the first 30 – but D/L also takes into account wickets lost (this is the major innovation of D/L; earlier rain-rules such as run-rate or highest-scoring-overs didn’t take wickets lost into consideration).

The basic innovation of D/L is that it is based on “resources”. With 50 overs to go and 10 wickets in hand, a team has 100% of its resources. As a team uses up overs and loses wickets, the resources are correspondingly depleted. D/L extrapolates based on the resources left at the end of the innings. Suppose, for example, that a team scores 100 in 20 overs for the loss of 1 wicket, and the match has to be curtailed right then. What would the team have scored at the end of 50 overs? According to the 2002 version of the D/L table (the first that came up when I googled), after 20 overs and the loss of 1 wicket, a team still has 71.8% of its resources left. Essentially the team has scored 100 runs using 28.2% (100 – 71.8) of its resources. So at the end of the innings the team would be expected to score 100 * 100 / 28.2 ≈ 354.6.
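The extrapolation above is simple enough to sketch in a few lines of Python. The 71.8% figure is from the 2002 D/L table as quoted above; the function name is mine.

```python
def dl_projected_score(runs_scored, resources_remaining_pct):
    """Extrapolate a curtailed innings to a notional full-innings score,
    D/L style: runs scored, scaled up by the share of resources used."""
    resources_used = 100.0 - resources_remaining_pct
    return runs_scored * 100.0 / resources_used

# 100 runs in 20 overs, 1 wicket down, innings curtailed right then.
# Per the 2002 D/L table, 71.8% of resources remain at that point.
print(dl_projected_score(100, 71.8))  # ≈ 354.6
```

Note that the whole calculation hinges on that one table entry, which is exactly where the “average scoring curve” assumption enters.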

How have D/L arrived at these values for resource depletion? By simple regression, based on historical games. To simplify, they look at all historical games where the team had lost 1 wicket at the end of 20 overs, and look at the ratio of the final score to the 20 over score in those games, and use that to arrive at the “resource score”.

To understand why this is inherently unfair, let us consider the champions of the first two World Cups that I watched. In 1992, Pakistan followed the principle of laying a solid foundation and then exploding in the latter part of the innings. A score of 100 in 30 overs was considered acceptable, as long as the team hadn’t lost too many wickets. And with hard hitters such as Inzamam-ul-Haq and Imran Khan in the lower order they would have more than doubled that score by the end of the innings. In fact, most teams followed a similar strategy in that World Cup (New Zealand was a notable exception, using Mark Greatbatch as a pinch-hitter. India also tried that approach in two games – sending Kapil Dev to open).

Four years later in the subcontinent the story was entirely different. While there were teams that followed the approach of a slow build-up and late acceleration, the winners Sri Lanka turned that formula on its head. Test opener Roshan Mahanama batted at seven, with the equally dour Hashan Tillekeratne preceding him. At the top were the explosive pair of Sanath Jayasuriya and Romesh Kaluwitharana. The idea was to exploit the field restrictions of the first 15 overs, and then bat on at a steady pace. It wasn’t unlikely in that setup that more runs would be scored in the first 25 overs than in the last 25.

Duckworth-Lewis treats both strategies alike. The D/L regression contains matches from both the 1992 and 1996 world cups. They have matches where pinch hitters have dominated, and matches with a slow build up and a late slog. And the “average scoring curve” that they have arrived at probably doesn’t represent either – since it is an average based on all games played. 100/2 after 30 overs would have been an excellent score for Pakistan in 1992, but for Sri Lanka in 1996 the same score would have represented a spectacular failure. D/L, however, treats them equally.

So now you have the situation that if you know that a match is likely to be affected by rain, you (the team) have to abandon your natural game and instead play according to the curve. D/L expects you to score 5 runs in the first over? Okay, send in batsmen who are capable of doing that. You find it tough to score off Sunil Narine, and want to simply play him out? Can’t do, for you need to score at least 4 in each of his overs to keep up with the D/L target.

The much-touted strength of the D/L is that it allows you to account for multiple rain interruptions and mid-innings breaks. At a more philosophical level, though, this is also its downfall. Because you now have a formula that micromanages and tells you what you should ideally be doing on every ball (as Kieron Pollard and the West Indies found out recently, simply going by over-by-over targets will not do), you are bound to play by the formula rather than how you want to play the game.

There are a few other shortcomings with D/L, which are a result of it being a product of regression. It doesn’t take into account who has bowled, or who has batted. Suppose you are the fielding captain and you know, given the conditions and forecasts, that there is likely to be a long rain delay after 25 overs of batting – after which the match is likely to be curtailed. You have three excellent seam bowlers who can take good advantage of the overcast conditions. Their backup is not so strong. So you play for the rain break and choose to bowl out your best bowlers before that! Similarly, D/L doesn’t take into account the impact of powerplay overs. So if you are the batting captain, you want to take the batting powerplay ASAP, before the rain comes down!

The D/L is a good system no doubt, else it would not have survived for 14 years. However, it creates a game that is unfair to both teams, and forces them to play according to a formula. We can think of alternatives that overcome some of the shortcomings (for example, I’ve developed a Monte Carlo simulation based system which can take into account powerplays and bowling out strongest bowlers). Nevertheless, as long as we have a system that can extrapolate after every ball, we will always have an unfair game, where teams have to play according to a curve. D/L encourages short-termism, at the cost of planning for the full quota of overs. This cannot be good for the game. It is like setting 100m targets for a marathon runner.

PS: The same arguments I’ve made here against the D/L also apply to its competitor, the VJD Method (pioneered by V Jayadevan of Thrissur).

What the hell was Vettori thinking?

I’m writing this post in anger. In disgust. At the sheer lack of strategic vision shown by Royal Challengers Bangalore captain Daniel Vettori. What the hell was he thinking when he threw the ball to Virat Kohli for the 19th over, with 43 required off two overs? Yes, there had been a miscalculation earlier which meant that one of the last five overs had to be bowled either by part-timer Kohli, or by Raju Bhatkal who had been torn apart in his earlier two overs. While it is hard to pardon miscalculation in a twenty over game, it is nothing compared to the strategic error of the 19th over.

When overs sixteen to eighteen were bowled by Zaheer, Vinay and Zaheer respectively, I thought it was a tactical masterstroke by Vettori to keep the one extra over to the end. Given the skyrocketing required run rate, I thought it was a great idea that he was trying to put the match beyond Chennai Super Kings by the 19th over itself. And it worked well. From 75 needed off 5 overs, the equation was brought down to 43 off the last two overs (now, it is reasonable to expect Zaheer and Vinay to go at around 10 an over in the slog overs). And then what happened?

You have two overs left, 43 runs to win. You have a reasonably experienced medium pacer who is generally good at bowling at death, but is also prone to buckling under pressure. And you know you can’t trust whoever the other bowler is going to be. What you want is to have your good bowler bowl without any pressure on him. Without any pressure, you can expect him to go for about 10-15 in the 19th, leaving the batsmen to score nearly 30 off the last over – which would tilt the odds significantly in favour of the part timer who would bowl that over, since the pressure would be on the batsmen.

Instead, what do you do? Give the part timer the 19th over. He has no answers for Morkel’s slogging and edging, and goes for 28, leaving Vinay to defend 15. Now, it is Vinay (who is vulnerable under pressure) who has to bowl under pressure, and the batsmen know that. It is a miracle that the match went down to the last ball.

Of course you might say that I wouldn’t have reacted so angrily had either RCB won or Kohli had gone for less in his over. That’s not true. The match was in RCB’s pocket, to be won. The probability of victory reduced significantly the moment the ball was thrown to Kohli (for the 19th over). The ultimate result doesn’t matter. I would have blasted Vettori even if we had won.

Now, there is another uncharitable explanation that comes to mind, and I’m not very proud that this comes to mind. Was it mere incompetence or some sense of malice on the part of Vettori to give the 19th over to Kohli? I’m not talking about bookmakers here, I respect him too much for that. But think about it. Just yesterday, both Mint and Cricinfo ran articles talking about IPL 5’s poor TV ratings so far. The BCCI Chairman N Srinivasan (who not so coincidentally owns CSK) said that the answer to increasing TRPs was to play on batting-friendly high-scoring pitches, and to have close games.

The first wish was answered, when RCB set a target of 206. I wonder if there were some kind of instructions from “big brother” that the game go into the last over, as a means of boosting flagging TRPs. If Vinay had bowled the 19th and gone for 10 (say), that would have left a near-impossible 33 off Kohli/Bhatkal’s over. Match over by over 19. One more match that is not “close”, which would do nothing to boost TRPs. But keep the contest alive till the last over, and the TRPs would be boosted?

As an RCB fan, I hereby call for the immediate sacking of Daniel Vettori as captain and his replacement at the helm by one of Kohli or AB De Villiers (maybe even Vinay Kumar or Zaheer Khan). Maybe I should create an online signature campaign for this purpose, and use my contacts to get the results through to Anil Kumble and other powers-that-are at RCB.


Sachin’s 100th

In the end it was quite appropriate. That the needlessly hyped “false statistic” of Sachin’s 100 100s came about in a match against a supposed minnow, in an inconsequential tournament, and didn’t even help India win the game. The hype surrounding this statistic had become unbearable, both for normal cricket fans and for Sachin himself, perhaps. And that could be seen in his batting over the last one year, in England and in Australia. There was a distinct feeling that every time he was just playing for his century, and not for the team cause, and the only upshot of his “100th 100” is that the monkey is finally off his back and hopefully Sachin can go back to playing normal cricket.

Unfortunately, there are a couple of other milestones round the corner. He now has 49 ODI 100s, so now people will hype up his 50th. And as someone pointed out on facebook yesterday, he has 199 international wickets! Hopefully that means he starts turning his arm over once again, with his lethal spinning leg-breaks and long hops.

The thing with Sachin is that he has always seemed to be statistically minded (irrespective of what he says in his interviews). The mind goes back to Cuttack during World Cup 1996, when he played out two maiden overs against Asif Karim while trying to get to his 100 (against Kenya). Even in recent times, including in 2007 when he got out in the 90s a large number of times, it is noticeable how he suddenly slows down the innings once he gets into the 90s. He gets nervous, starts thinking only about the score, and not about batting normally.

In that sense, it is appropriate that this meaningless statistic of a hundredth hundred came about in a game that India lost, to a supposed minnow. It was a “batting pitch”. As Raina and Dhoni showed in the latter stages of the innings, shotmaking wasn’t particularly tough. And yet, what did Sachin do? Plod at a strike rate of 75 for most of the innings, including in the crucial batting powerplay, just so that he could get to his 100. I don’t fault his batting for the first 35 overs. He did what was required to set up a solid foundation, in Kohli’s company. But in the batting powerplay, instead of going for it, the only thing on his mind was the century. Quite unfortunate. And appropriate, as I’ve said a number of times earlier.

Again, I want to emphasize that I’m NOT an anti-Sachintard. I’ve quite enjoyed his batting in the past, and there is no question that he is one of the all-time great cricketers. I’m only against meaningless stat-tardness. And it was this obsession with a meaningless stat that prevented Sachin from giving his best for the last one year.

Ranji Trophy and the Ultimatum Game

The Ultimatum Game is a commonly used research tool in behavioural economics. It is a “game” played between two players (say A and B) where A is given a sum of money which he has to split between himself and B. If B “accepts” the split, both of them get the money as per A’s proposal. If, however, B rejects it, both A and B get nothing.

This setup has been useful for behavioural economists to show that people are not always rational. If everyone were rational, B would accept the split as long as he was given any amount greater than zero. However, real-life experiments have shown that B players frequently reject the deal when they think the split is “unfair”.
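The payoff structure above can be sketched as a toy function. The “fairness threshold” below is purely an invented way of modelling the observed rejection behaviour, not a number from the literature; a perfectly rational B has a threshold of zero.

```python
def payoffs(total, offer_to_b, fairness_threshold=0.0):
    """Return (A's payoff, B's payoff) for A's proposed split.
    B accepts only if the offer exceeds his fairness threshold."""
    if offer_to_b > fairness_threshold * total:
        return (total - offer_to_b, offer_to_b)
    return (0, 0)  # B rejects: both get nothing

print(payoffs(100, 1))                          # (99, 1): rational B accepts anything positive
print(payoffs(100, 1, fairness_threshold=0.3))  # (0, 0): "unfair" offer rejected
```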

A version of this is being played out in this year’s Ranji Trophy thanks to some strange rules regarding the points split in drawn games. A win fetches five points while a loss fetches none. In case of a drawn game, if the first innings of both sides have been completed, the team that has scored higher in the first innings gets three points, while the other team gets one. The rules, however, get interesting if the two first innings have not both been completed. If the match has been rain-affected and overs have been lost, both sides get two points each. Otherwise, both sides get zero points!
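These drawn-game rules can be written down as a small function (the parameter names and structure are mine; the points values are as described above):

```python
def drawn_match_points(first_innings_decided, rain_affected, led_first_innings):
    """Points for (this team, opponent) in a drawn Ranji game."""
    if first_innings_decided:
        # Both first innings completed: 3 for the leading team, 1 for the other
        return (3, 1) if led_first_innings else (1, 3)
    if rain_affected:
        return (2, 2)  # overs lost to rain: two points each
    return (0, 0)      # a blocked game with no rain excuse: nothing for anyone

print(drawn_match_points(False, False, False))  # (0, 0): bat on and block
print(drawn_match_points(True, False, False))   # (1, 3): declare behind, concede the lead
```

The last two lines are exactly the choice a trailing side faces at the death, which is where the ultimatum shows up.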

I don’t know the rationale behind this strange points system, but I guess it is there to act as a deterrent against teams preparing featherbeds, batting for most of the four days and not even trying to win the match. In general, I haven’t been a fan at all of the Ranji Trophy’s points system – I think it’s quite irrational – so I’ll refrain from commenting on this rule itself. What I will comment on, however, is the “ultimatum” opportunity it throws up.

In the first round of matches, Saurashtra batted first against Orissa and piled up a mammoth 545 in a little under two days. The magnitude of the score and the time left in the match meant that Orissa had been shut out of the game, and the best they could’ve done was to overtake Saurashtra on first innings score and get themselves three points. However, they batted slowly and steadily, with Natraj Behera scoring a patient double century, and with a few minutes to go in the game, they were still over 50 runs adrift of Saurashtra’s score, with three wickets in hand.

At that time, they had the chance to declare their innings, still some runs adrift of Saurashtra’s score, collecting one point and handing three points to Saurashtra. They, however, chose to bat on and block the game, and both teams finally ended up with zero points. It may be because they also see Saurashtra as a competitor for “relegation”, but I thought this was irrational. Why would they deny themselves one point – if only to deny Saurashtra three points? It’s all puzzling.

Going forward, though, I hope the Ranji Trophy rules are changed to make each game a zero sum game (literally). Or else they could adopt the soccer scoring of 3 points for a win and 1 for a draw (something I’ve long advocated), first innings lead be damned!

Ranji Reform

Perhaps the best thing that the BCCI has done in recent times is to hike the match fees given to players in First Class and List A matches. If I’m not wrong, first class players now get Rs. 2 lakh per game as match fees, and Rs. 1 lakh for List A games. Thus, if a player is a regular in his state team, he is assured of at least Rs. 15 lakh per annum, ensuring he can remain professional and not have to do a “day job”.

This is excellent in terms of option value for high school students who are good at cricket but undecided whether to concentrate on a cricket career or go to college and concentrate on studies. And this in turn leads to a better quality of cricketers in the pool available for first class games.

For a fringe player, selection to the national team is a lottery. It is also a big step up from the Ranji game. And when you are an under 19 cricketer (unless you are Tendulkar of course; let’s talk about normal people here) there is little that indicates if you are going to be an international regular. However, your performances in school/college level and age group tournaments are an extremely good indicator of how well you are likely to do on the domestic circuit.

Now, the income that the domestic circuit offers means that it might be more profitable for you to concentrate on cricket and try and make it big, rather than giving up cricket and going to college. Even if you fail to make it big, you won’t end up doing too badly in life. So if you think you have a good chance of making the state team, you would rather go for it than playing safe and going to college.

And this means that several players who would have otherwise left the game (in the absence of a reasonable income from playing domestic cricket) are available in the pool, which makes it more competitive and raises the overall quality of cricket in the country, and consequently that of the national team.

At least the BCCI gets some things right.

Diminishing Value of a Red Card

Often, when we have seen players being sent off and a penalty kick awarded for illegally stopping a goal-bound ball, Baada and I have thought that the punishment is too harsh. That for stopping one goal, the team effectively gives away the goal (the conversion rate of penalties is high) and also loses a player (sometimes the goalie) for the rest of the game.

Now, after last night’s strategic hand ball by Luis Suarez, people are complaining that the punishment is not enough. Though it was a split-second instinctive decision by Suarez to handball, even if he were to replay the incident in his head and analyze the costs and benefits, I’m sure he would’ve done what he did. This clearly contradicts what I mentioned in the first paragraph.

The main issue here is with the value of a red card at various stages of a game. The red card has intrinsic value – the suspension for the next game. In addition, the red card leaves the team a man short for the rest of the game, so it is clear that the later a red card is given out, the less it disadvantages the team, since they have to play a man short for less time.

What makes Suarez’s decision more logical is the time value of a one-goal lead. The less time left in the game, the more the one-goal lead is worth, since there is less time for which it needs to be protected. And in this case, the handball occurred on what might have been the last “kick” of the game, so the value of the one-goal lead was really high.

The earlier this incident had occurred in the match, the less Suarez’s incentive to handball would have been – more time to win back the conceded goal, and more time to play a man short if red-carded. At the time it actually occurred, Suarez would’ve been a fool NOT to handball. The payoffs were heavily loaded in favour of handballing, and he did it.
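The tradeoff can be caricatured in a toy model. Every number below – the penalty conversion rate, the per-minute cost of playing a man short – is invented purely to illustrate the shape of the payoff, not estimated from data:

```python
def handball_net_benefit(minutes_left, p_penalty_scored=0.75,
                         man_short_cost_per_min=0.01):
    """Expected goals gained by handballing a certain goal off the line,
    in exchange for conceding a penalty plus a red card."""
    goal_saved = 1.0                       # the certain goal that was stopped
    penalty_conceded = p_penalty_scored    # expected goals from the penalty
    man_short_cost = man_short_cost_per_min * minutes_left
    return goal_saved - penalty_conceded - man_short_cost

print(handball_net_benefit(0) > 0)   # True: on the last kick, handballing pays
print(handball_net_benefit(80) > 0)  # False: early on, the red card costs too much
```

Whatever the exact numbers, the sign flips somewhere mid-game, which is the whole point: the same punishment bites much harder early than late.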

People on twitter are suggesting that the rules be changed – that the goal should’ve been awarded anyway instead of the penalty, and so on – but considering that the same punishment costs much more if given out earlier in the game, I think the current punishment is appropriate. The excess of this punishment in the earlier stages of the game is compensated by the punishment being too little in the later stages, so on average I think it is appropriate.

Let’s continue to keep football simple and not clutter it with Duckworth-Lewis kind of rules. And congrats to Suarez for taking the most logical decision at the moment. It is indeed as great a “sacrifice” as Ballack’s tactical yellow card against Korea in the 2002 semis.

And I feel sad for Asamoah Gyan. But then again, with Ghana being in the knockout stages solely on the merit of two Gyan penalties, it is only appropriate that they are going out now on the demerit of Gyan’s missed penalty.

Discrete and continuous jobs

Earlier today, while contributing to a spectacular discussion about ambition on a mailing list that I’m part of, I realized that my CV basically translates to spectacular performance in entrance exams and certain other competitive exams, and not much otherwise. This made me think of the concept of a “discrete job” – where you are evaluated based on work that you do at certain discrete points in time, as opposed to a continuous job where you are evaluated based on all the work that you do all the time.

A good example of a discrete job is that of a sportsman. Yes, a sportsman needs to work hard all the time and train well and all that, but the point is that his performance is essentially evaluated based on his performance on the day of the game. His performance on these “big days” matter significantly more than his performance on non-match days. So you can have people like Ledley King who are unable to train (because of weak knees) but are still highly valued because they can play a damn good game when it matters.

In fact any performing artist does a “discrete” job. If you are an actor, you need to do well on the day of your play, and off-days during non-performing days can be easily forgiven. Similarly for a musician and so forth.

Now the advantage of a “discrete” job is that you can conserve your energies for the big occasion. You can afford the occasional slip-up during non-performing days but as long as you do a good job on the performing days you are fine. On the other hand, if you are in a continuous job, off-days cost so much more, and you will need to divide your energies across each day.

If you are of the types that builds up a frenzy and thulps for short period of time and then goes back to “sleep” (I think I fall under this category), doing a continuous job is extremely difficult. The only way that it can be managed is through aggregation – never giving close deadlines so that you can compensate for the off-day by having a high-work day somewhere close to it. But not every job allows you the luxury of aggregation, so problem are there.

Now, my challenge is to find a discrete job in the kind of stuff that I want to do (broadly quant analysis). And maybe give that a shot. So far I’ve been mostly involved in continuous jobs, and am clearly not doing very well in those. And given my track record in important examinations, it is likely I’ll do better in a discrete job. Now to find one of those..

Josh

This used to be such a commonly used word back in school. To do anything you needed josh. To do anything well, you needed “full josh”. You would suddenly “get josh” to do something. And when you didn’t want to do something you’d say “josh illa” (back in school, I hardly spoke English. Used to be mostly Kannada – this was till 10th standard).

There was this friend who was hitting on a girl in the junior batch. And on every saturday, he would wait for her to come out of class to take one glimpse of her before he went home. He would say he needed to “get josh” from her. And if he didn’t see her before he left for home on saturday, he wouldn’t have enough josh to last the weekend.

There were other ways to get josh. Listening to songs from David Dhawan-Govinda movies was one way. Towards the end of school came Upendra with A – again immensely josh-giving. Everything we did, every activity we planned, had the intention of maximizing our josh inflow. Wonderful times those.

This word faded away from my lexicon when I found it not being used much in IITM. IITese, however, had the common word “enthu”, which I realized was reasonably similar to josh – similar enough that I didn’t have enough of an incentive to establish josh in IITese. And I switched. Of course there was no exact match – for example, just having a glimpse of that special someone was usually not enough to give you enthu, nor would Tan Tana Tan Tan Tan Tara help.

Again doing something with “full josh” wasn’t the same as doing that with “full enthu”. You could “put enthu” for something but you “needed josh” to do it.

Of late I’ve been trying to revive josh. I find myself instinctively using the word when I think it is appropriate. I’m trying to distinguish between josh and enthu, and use the one that is more appropriate. It is not easy of course – had it been, I’d’ve made the effort to establish josh in IITese.

And now, thinking about it, I realize that there was a good chance that this blog might have been residing on some “no josh da” or “josh illa maga” website. But again – josh is not exactly the same as enthu.

Don’t use stud processes for fighter jobs and fighter processes for stud jobs

When people crib to other people that their job is not too exciting, that it’s too process-oriented and that there’s not much scope for independent thinking, the usual response is that no job is inherently process-oriented or thinking-oriented, and that what matters is the way in which one perceives one’s job. People usually say that it doesn’t matter if a job is stud or fighter, and that you can choose to do it the way you want to. This is wrong.

So there are two kinds of jobs – stud (i.e. insight-oriented) and fighter (i.e. process oriented). And you can do the job in either a stud manner (trying to “solve a problem” and looking for insights) or in a fighter manner (logically breaking down the problem, structuring it according to known formula and then applying known processes to each sub-problem). So this gives scope for a 2 by 2. I don’t want this to look like a BCG paper so I’m not actually drawing a 2 by 2.

Two of the four quadrants are “normal” and productive – doing stud jobs in a stud manner, and fighter jobs in a fighter manner. There is usually an expectancy match here in terms of the person doing the job and the “client” (client is defined loosely here as the person for whom this job is being done; in most cases it’s the boss). Both parties have a good idea about the time it will take for the job to be done, the quality of the solution, and so on. If you are in either of these two quadrants you are good.

You can’t do a stud job (something that inherently requires insight) using a fighter process. A fighter process, by definition, looks for known kinds of solutions. When the nature of the solution is completely unknown, or the problem is completely unstructured, the fighter behaves like a headless chicken. It is only in very rare and lucky conditions that the fighter will be able to do the stud job. As for “fighterization”, about which I’ve been talking so much on this blog, the problem definition is usually tweaked slightly in order to convert the stud problem into a fighter problem. So in effect, you should not try to solve a “stud problem” using a fighter process. Also, as an employer, it is unfair to expect a mostly-fighter employee to come up with a good solution for a stud problem.

The fourth quadrant is what I started off this blog post with – studs doing fighter jobs. The point here is that there is no real harm in doing a fighter job in a stud manner, and the stud should be able to come up with a pretty good solution. The problem is with expectations, and with efficiency. Doing a fighter job in a stud manner creates inefficiency, since a large part of the “solution” involves reinventing the wheel. Yes, the stud might be able to come up with enhanced solutions – maybe solve the problem for a general case, or make the solution more scalable or sustainable – but unless the “client” understands that the problem was a stud problem, he is unlikely to care for these enhancements (unless he asked for them, of course), and is likely to get pained by the lack of efficiency.

Before doing something it is important to figure out if the client expects a stud solution or a fighter solution. And tailor your working style according to that. Else there could be serious expectation mismatch which can lead to some level of dissatisfaction.

And when you are distributing work to subordinates, it might also help to classify them on the stud and fighter scales and give them jobs that play to their stronger suits. I know you can’t do this completely – since the transaction costs of having more than one person working on a small piece of work can be high – but if you do this to the extent possible, it is likely that you will get superior results out of everyone.

It’s about getting the Cos Theta right

Earlier today I was talking to Baada and to Aadisht (independently) about jobs, and fit, and utilization of various skills and option value of skills not utilized etc. So it is like this – you possess a variety of skills, and the job that you are going to do will not involve a large number of these. For the skills that you have that match the job’s requirements, you get paid in full. For the rest of the skills you possess, you only get paid the “option value” – i.e. your employer has the option to utilize these skills of yours and need not actually utilize them.

Hence in order to maximize your productivity and your pay, you need to maximize the cos theta.

Assume your skill set to be a vector in an N-dimensional hyperspace, where N is the universe of orthogonal skills that people might possess. Now, a job requires a certain combination of skills, and can thus also be seen as a vector in the same space. So it’s about maximizing the cos theta between your vector and your job’s vector.

So it’s something like this – you take your skills vector and project it on to the job requirement vector – your total skills will get multiplied now by the value of cos theta, where theta is the angle in the hyperspace between your skills vector and the job vector. For the projection of your skills on the requirement, you get paid in full. For the skills that you have that are orthogonal to the requirement, you get paid only in option value.
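As a concrete sketch of this projection idea (the skill axes and all the numbers below are made up purely for illustration):

```python
import math

def cos_theta(skills, job):
    """Cosine of the angle between a skills vector and a job-requirement vector."""
    dot = sum(s * j for s, j in zip(skills, job))
    norms = math.sqrt(sum(s * s for s in skills)) * math.sqrt(sum(j * j for j in job))
    return dot / norms

specialist = [10, 0, 0]  # all skill concentrated along one axis
polymath = [6, 6, 6]     # comparable skill spread across three orthogonal axes
job = [1, 0, 0]          # a job that needs only the first skill

print(cos_theta(specialist, job))  # 1.0 -- paid in full for the whole vector
print(cos_theta(polymath, job))    # ~0.577 -- most skill earns only option value
```

The specialist’s vector lies along the job’s axis, so theta is 0 and cos theta is 1; the polymath’s large vector projects to a much smaller component on any single job.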

One option, of course, is to build your skill set – keep learning new tricks, and maybe even invent new skills. However, that is not a short-term plan. In the short to medium term, you need to maximize the cos theta in order to maximize the returns that your job provides. But as Baada put it, “But there is slisha too much information asymmetry to ensure that cos theta is maximised.”

There are two difficult steps, actually. First, you need to know your vector properly – most people don’t. Even if you assume that you can do a lot of “Ramnath” stuff and get to know yourself, there still lies the challenge of knowing the job’s vector. And the job’s requirement vector is typically more fluid than your skills vector. Hence you actually need to estimate the expected value of the job’s requirement vector before you take up the job.

The same applies when you are hiring. It is actually easier here since the variation in the hiree’s vector will not be as high as the variation in the job profile requirement vector, and you have a pretty good idea of the latter so it is easy to estimate the “projection”.

This perhaps explains why specialists have it easy. Typically, they have a major component of their skills vector along the axis of a fairly well-defined job profile (which is their specialization). And thus, since theta tends to 0, cos theta tends to 1, and they pretty much get full value for their skills.

At the other extreme, polymaths will find it tough to maximize their returns to skills out of a single job, since it is unlikely that any job comes close to their skills vector. So whichever job they do, the small value of the resulting cos theta will cancel out the large magnitude of the skills vector. So for a polymath to maximize his/her skills, it is necessary to do more than one “job”. Unless he/she can define a job for himself/herself which lies reasonably close to his/her skills vector.

(There is a small inaccuracy in this post. I’ve talked about the angle between two vectors, and taking the cosine of that. However, I’m not sure how it plays out in hyperspaces with a large number of dimensions. Let us assume that it’s vaguely similar. People with more math fundaes on this please to be contributing.)