Duckworth Lewis and Sprinting a Marathon

How would you like it if you were running a marathon and someone set you targets for every 100 meters? "Run the first 100m in 25 seconds. The second in 24 seconds," and so on? You would very likely hate the idea. You would argue that the point of a marathon is to finish the 42-odd km within the overall time you have set for yourself, and that you don't care about any intermediate targets. You would also likely argue that different runners have different running patterns, and that imposing targets over short distances is unfair to just about everyone.

Yet, this is exactly what cricketers are asked to do in games that are likely to be affected by rain. The Duckworth-Lewis method, which has been used to adjust targets in rain-affected matches since 1999, assumes an average "scoring curve" according to which a team scores its runs during an innings. It is basically an extension of the old thumb-rule that a team is likely to score as many runs in the last 20 overs as it does in the first 30, except that D/L also takes into account wickets lost (this is its major innovation; earlier rain-rules, such as run-rate or highest-scoring-overs, didn't consider wickets lost).

The basic innovation of D/L is that it is based on "resources". With 50 overs to go and 10 wickets in hand, a team has 100% of its resources. As a team uses up overs and loses wickets, its resources are correspondingly depleted. When an innings is curtailed, D/L extrapolates based on the resources used up to that point. Suppose, for example, that a team scores 100 in 20 overs for the loss of 1 wicket, and the match has to be curtailed right then. What would the team have scored at the end of 50 overs? According to the 2002 version of the D/L table (the first that came up when I googled), after 20 overs and the loss of 1 wicket, a team still has 71.8% of its resources left. Essentially, the team has scored 100 runs using 28.2% (100 − 71.8) of its resources. So at the end of the innings, the team would be expected to score 100 × 100 / 28.2 ≈ 355.
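In code, the extrapolation in this example is a one-liner (a minimal sketch; the 71.8% figure comes from the 2002 table referenced above, and the function name is mine):

```python
# A minimal sketch of the D/L extrapolation described above.
# The 71.8% figure (resources remaining after 20 overs, 1 wicket down)
# comes from the 2002 D/L table mentioned in the text.

def dl_projected_score(runs_scored: float, resources_remaining_pct: float) -> float:
    """Extrapolate a curtailed innings to a full 50-over score."""
    resources_used_pct = 100.0 - resources_remaining_pct
    return runs_scored * 100.0 / resources_used_pct

# 100 runs in 20 overs for the loss of 1 wicket:
print(dl_projected_score(100, 71.8))  # ≈ 354.6, i.e. roughly 355
```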

How did D/L arrive at these values for resource depletion? By simple regression on historical games. To simplify: look at all historical games where the team had lost 1 wicket at the end of 20 overs, take the ratio of the final score to the 20-over score in those games, and use that to arrive at the "resource" figure.
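A toy version of that averaging might look like the following (the data is invented, and the real D/L model fits a parametric curve rather than naively averaging like this):

```python
# A toy sketch of the simplified estimation described above: for all
# historical innings that were 1 wicket down after 20 overs, compare the
# 20-over score with the final 50-over score. The data below is invented.

historical_innings = [
    # (runs after 20 overs, wickets after 20 overs, final 50-over score)
    (95, 1, 340),
    (110, 1, 395),
    (88, 1, 310),
]

def estimate_resources_remaining(innings, wickets_down=1):
    """Estimate % of resources left after 20 overs for a given wicket count."""
    used_fractions = [
        runs_at_20 / final
        for runs_at_20, wickets, final in innings
        if wickets == wickets_down
    ]
    avg_used = sum(used_fractions) / len(used_fractions)
    return 100.0 * (1.0 - avg_used)

print(estimate_resources_remaining(historical_innings))  # ≈ 71.9%
```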

To understand why this is inherently unfair, consider the champions of the first two World Cups that I watched. In 1992, Pakistan followed the principle of laying a solid foundation and then exploding in the latter part of the innings. A score of 100 in 30 overs was considered acceptable, as long as the team hadn't lost too many wickets. And with hard-hitters such as Inzamam-ul-Haq and Imran Khan in the lower order, they would have more than doubled that score by the end of the innings. In fact, most teams followed a similar strategy in that World Cup (New Zealand was a notable exception, using Mark Greatbatch as a pinch-hitter; India also tried that approach in two games, sending Kapil Dev to open).

Four years later, in the subcontinent, the story was entirely different. While there were again teams that followed the approach of a slow build-up and late acceleration, the winners, Sri Lanka, turned that formula on its head. Test opener Roshan Mahanama batted at seven, with the equally dour Hashan Tillekeratne preceding him. At the top was the explosive pair of Sanath Jayasuriya and Romesh Kaluwitharana. The idea was to exploit the field restrictions of the first 15 overs, and then bat on at a steady pace. In that setup it was not unlikely that more runs would be scored in the first 25 overs than in the last 25.

Duckworth-Lewis treats both strategies alike. The D/L regression contains matches from both the 1992 and 1996 World Cups: matches where pinch-hitters dominated, and matches with a slow build-up and a late slog. The "average scoring curve" it arrives at probably represents neither, since it is an average over all games played. 100/2 after 30 overs would have been an excellent score for Pakistan in 1992, but for Sri Lanka in 1996 the same score would have represented a spectacular failure. D/L, however, treats them equally.

So now you have the situation that if you know that a match is likely to be affected by rain, you (the team) have to abandon your natural game and instead play according to the curve. D/L expects you to score 5 runs in the first over? Okay, send in batsmen who are capable of doing that. You find it tough to score off Sunil Narine, and want to simply play him out? Can’t do, for you need to score at least 4 in each of his overs to keep up with the D/L target.

The much-touted strength of D/L is that it can account for multiple rain interruptions and mid-innings breaks. At a more philosophical level, though, this is also its downfall. Because the formula micromanages, telling you what you should ideally be doing on every ball (as Kieron Pollard and the West Indies found out recently, simply going by over-by-over targets will not do), you are bound to play by the formula rather than the way you want to play the game.

There are a few other shortcomings with D/L, which result from its being a product of regression. It doesn't take into account who has bowled or who has batted. Suppose you are the fielding captain, and you know from the conditions and the forecast that there is likely to be a long rain delay after 25 overs of batting, after which the match is likely to be curtailed. You have three excellent seam bowlers who can take good advantage of the overcast conditions, and their backups are not as strong. So you play for the rain break and choose to bowl out your best bowlers before it! Similarly, D/L doesn't take into account the impact of powerplay overs. So if you are the batting captain, you want to take the batting powerplay as soon as possible, before the rain comes down!

D/L is no doubt a good system, else it would not have survived for 14 years. However, it creates a game that is unfair to both teams, and forces them to play according to a formula. We can think of alternatives that overcome some of the shortcomings (for example, I've developed a Monte Carlo simulation-based system which can take into account powerplays and the bowling out of one's strongest bowlers). Nevertheless, as long as we have a system that extrapolates after every ball, we will always have an unfair game, in which teams have to play according to a curve. D/L encourages short-termism, at the cost of planning for the full quota of overs. This cannot be good for the game. It is like setting 100m targets for a marathon runner.
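To give a flavour of the simulation idea (this is only an illustrative sketch, not the actual system; the per-ball outcome distribution and wicket probability below are made-up placeholders):

```python
import random

# A bare-bones illustration of a Monte Carlo projection of a curtailed
# innings. The per-ball outcome distribution and wicket probability are
# made-up placeholders; a real system would condition them on the
# batsmen, the bowlers, and the match situation.

RUN_OUTCOMES = [0, 1, 2, 3, 4, 6]
RUN_WEIGHTS = [35, 35, 10, 2, 13, 5]   # rough per-ball frequencies (%)
P_WICKET = 0.033                       # roughly 1 wicket per 5 overs

def simulate_rest_of_innings(balls_left: int, wickets_left: int) -> int:
    """Simulate the remaining balls of one innings; return runs added."""
    runs = 0
    for _ in range(balls_left):
        if wickets_left == 0:
            break
        if random.random() < P_WICKET:
            wickets_left -= 1
        else:
            runs += random.choices(RUN_OUTCOMES, weights=RUN_WEIGHTS)[0]
    return runs

def project_score(current_runs, overs_bowled, wickets_lost, n_sims=10_000):
    """Average projected 50-over total over many simulated continuations."""
    balls_left = (50 - overs_bowled) * 6
    wickets_left = 10 - wickets_lost
    total = sum(
        current_runs + simulate_rest_of_innings(balls_left, wickets_left)
        for _ in range(n_sims)
    )
    return total / n_sims

# 100/1 after 20 overs, as in the earlier example:
print(project_score(100, 20, 1))
```

Because each ball is simulated explicitly, such a model can, in principle, condition on which bowlers are left to bowl and whether a powerplay is in force, which is exactly what a table-based method cannot do.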

PS: The same arguments I've made here against D/L apply equally to its competitor, the VJD method (pioneered by V Jayadevan of Thrissur).

Doctors marrying doctors

So I've learnt that doctors prefer to marry other doctors. Well, there's nothing new in this. When I think of the doctors in my extended family, I realize that most of them are married to other doctors. The ostensible reason, I'm told, is that doctors lead a different lifestyle, and only doctors can understand the lifestyles of other doctors; hence the preference. It cannot be ruled out, however, that it is a fallout of pretty good gender ratios and long hours at medical colleges, which lead to coupling, with "understanding each other's professions" being only a fig leaf.

While people in other professions also marry within their profession (again put down to the ease of "meeting"), the tendency is especially pronounced among doctors. The problem, though, is that it doesn't make financial sense.

Now, the deal with doctors is that they don't earn good money until very late. After you've finished your bachelor's, you first need to slog it out for a few years before you get a master's seat. And once you've finished your master's, you need to slog for a few more years at a hospital that pays you a pittance, until a point comes in life when you become senior enough to start getting paid well.

Typically, most doctors (in India) don't make much at all till they are 35, after which they get flooded with money. Now, if two doctors marry, it means they are starved of cash flow during their prime years, a time when their engineer and MBA counterparts are minting money, travelling the world, having kids and buying houses. By the time the doctor couple makes money, they are probably well past their youth, and it is only their descendants who will really get to enjoy their cash flows.

If a doctor marries an engineer (or an MBA), though, the cash flows are better hedged. While it is true of all professions that salary goes up with years of experience, the curve isn't as steep elsewhere as it is for doctors. So a doctor-MBA couple (say) can live a good life on the MBA's salary until they are in their mid-to-late thirties, by which time the doctor's career will have begun to take off and the MBA will have begun to burn out. And then the doctor's enhanced cash flow starts kicking in! A great hedge, I would say!
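To make the hedge concrete, here is a toy sketch with invented salary curves (the shapes are the point; the numbers are entirely hypothetical):

```python
# A toy illustration of the hedging argument, with invented salary
# curves. Units are arbitrary; only the shapes of the curves matter.

def doctor_income(age):
    # A pittance until ~35, then a steep take-off.
    return 2 if age < 35 else 2 + 4 * (age - 35)

def mba_income(age):
    # Starts well, grows steadily, plateaus (burns out) around 40.
    return 10 + 1.5 * min(age - 25, 15)

for age in (28, 33, 38, 43, 48):
    dd = doctor_income(age) + doctor_income(age)   # doctor-doctor couple
    dm = doctor_income(age) + mba_income(age)      # doctor-MBA couple
    print(f"age {age}: doctor-doctor = {dd}, doctor-MBA = {dm}")
```

The doctor-doctor household is starved early and swamped late; the doctor-MBA household earns comfortably throughout.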

So, dear doctors, unless you have fallen in love with a classmate at medical school (which has effectively locked you into a lifetime of poor cash flow structures), reconsider. Consider marrying out of your profession. Yes, it might be harder for you to get each other's professions. But at least your finances will be taken care of!

PS: Some other professions, such as law and accountancy, also have fairly steep salary curves, starting off at a pittance and making money only later. But in those professions people get to "partner level" at around 30, which is far better than what doctors manage. Then again, such professionals don't marry within their profession as much as doctors do.

The Value of Fatwas

With random ulema here, there and everywhere (and maybe nowhere) issuing fatwas left, right and centre, I wonder if the value of the fatwa hasn't gone down.

The thing with religion is that anyone who is mildly religious will try to follow as many of the traditions and customs as possible. However, if one puts in place way too many restrictions, there is the chance that the follower might "do a ramanamurthy" * and just snap, deciding to follow none of the customs. As long as you keep things reasonable, though, there is a good chance that the follower will continue to follow.

Now that the context has been set, I reiterate my question: isn't there a law of diminishing returns for fatwas? Things, I suppose, were fine when the fatwa was a rare entity. For example, twenty years ago, when someone issued a fatwa to kill Salman Rushdie, it was a rare event (the fatwa, that is) and hence got taken seriously, and Rushdie had to go into hiding.

But look at the kind of fatwas being issued nowadays, and I would be really surprised if they are being taken seriously. For example, read this article (HT: Nitin Pai). There is a fatwa against buying insurance. There is a fatwa against working in banks. There is a fatwa against families accepting income earned by female members. And so forth.

Don't the ulema understand that there exists a law of diminishing returns, and that people are not likely to take fatwas seriously if too many of them are put in place? Okay, I suppose they don't teach economics in madrassas. Or is it that Islamic society is still in the part of the curve where the slope is significantly positive? (Imagine a curve with the total "degree of acceptance" on the y-axis and the "number of religious restrictions" on the x-axis. You would expect the curve to initially rise and then flatten out, and, if you stretch things too far, maybe even bend downwards.)

All religions, and all sects of all religions, have their share of loonies: people who come up with random fundaes, claim they are part of the teachings of that particular religion, and insist that everyone should follow them. But I suppose most other religions are decentralized enough that loonies are treated as just that, and people go on leading their lives without taking cognizance of them.

PS: Check out this hilarious essay from The Dawn about a bunch of guys who tried to take a maulvi along to Afghanistan to fight the Russians.

Booze and volatility

Another of those things I've been intending to write about for a really long time. Occasionally, when I'm not feeling too good mentally, people ask me to go have a drink, telling me that everything will be alright. Given my limited experience with this, however, I'm not confident it works. In fact, the only time I tried drowning my sorrows in alcohol (over four years ago), I ended up feeling significantly worse, bad enough that I haven't tried it since.

The thing with booze is that it increases the volatility of your state of mind: it flattens and widens the distribution of states your mind can end up in. So after you've had a drink or few, you are unlikely to remain in the state you started off in. You end up feeling either significantly better or significantly worse, and the chances of both go up tremendously when you drink.

I know that so far I've been acting on one data point that went adversely, but I don't understand the selection bias in people who have been through both sides: feeling much worse, and feeling much better, after a few drinks. Why is it that even though all of them would have felt significantly worse after drinking at some point or the other, they tend to forget about it and think only of the times when they've felt better?

Is it that whether you feel good or not is some kind of binary payoff depending on the level of your state of mind (basically, state of mind < cutoff => "bad"; state of mind >= cutoff => "good")? If this is true, then whenever you are "out of the money" (feeling bad), you don't really care if you go even further out of the money: your overall feeling doesn't change by much. So you don't really mind the cases where the alcohol makes you feel significantly worse. But the barrier is ahead of you, so by increasing volatility you give yourself a better chance of surmounting it, and drinking makes sense! By the same logic, though, it doesn't make sense to drink at all when you're already feeling good!
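A quick simulation makes the barrier logic concrete (a sketch with invented numbers: mood modelled as a random walk, drinking as a bump in its volatility):

```python
import random

# A quick simulation of the binary-payoff idea above. Mood moves as a
# random walk; drinking raises the step size (volatility). The payoff is
# binary: you feel "good" only if mood ends at or above a cutoff.
# All numbers are invented for illustration.

def p_good(start_mood, cutoff=0.0, volatility=1.0, steps=100, trials=20_000):
    """Probability that mood ends at or above the cutoff."""
    good = 0
    for _ in range(trials):
        mood = start_mood
        for _ in range(steps):
            mood += random.gauss(0, volatility / steps ** 0.5)
        if mood >= cutoff:
            good += 1
    return good / trials

# Starting "out of the money" (feeling bad): higher volatility helps.
print(p_good(-1.0, volatility=1.0))  # sober: rarely ends up "good"
print(p_good(-1.0, volatility=3.0))  # drunk: better odds of crossing

# Starting "in the money" (feeling good): higher volatility hurts.
print(p_good(+1.0, volatility=1.0))
print(p_good(+1.0, volatility=3.0))
```

The numbers bear out the intuition: higher volatility improves your odds when you start below the cutoff, and worsens them when you start above it.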

Are there any other reasons you can think of for this selection bias? Why do people give more weight to positive movements in their state of mind from drinking than to negative ones? Or is it that volatility is a non-intuitive concept, and "there's a better chance you'll feel better if you drink" is just a simpler way of communicating it? And do let me know if drinking has ever made you feel worse.