I first got this idea during an assignment submission at IIT. One guy in our class, known to be a perfectionist, is supposed to have put 250 hours of effort into a certain course project. He is known to have got 20 out of 20 in this project. I put in about 25 hours of effort into the same project and got 17. Reasonable value for effort, I thought. And that was when I realized the law of diminishing returns to effort. That was the philosophy I carried along for the rest of my academic life (the following four years).

The problem with working life, as opposed to academic life, is that the eighty-twenty formula doesn't work. The biggest problem here is that you are working for someone else, while you were essentially working for yourself when you were a student. Eighty was acceptable back then; it is not acceptable now. And even if you are working for yourself, the completion-rewards curve is completely different now.

Imagine a curve with the percentage of work done on the X axis and the "reward" on the Y axis. In an academic setting, it is usually linear: doing 80% of the work means that you are likely to get 80% of the reward. Fantastic. The problem with work is that the straight line gets replaced by a convex curve. So even to get an 80% reward, you will need to do maybe 99% of the work. The curve moves up sharply towards the end, so as to give 100% reward for 100% work (note that I'm talking about work done here, not effort; effort is irrelevant).
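To make the shape concrete, here is a minimal sketch of the two curves. The exponent is my own illustrative choice (picked so the numbers roughly match the "99% work for 80% reward" claim above), not anything precise:

```python
def academic_reward(work_done: float) -> float:
    """Academic setting: reward is roughly linear in work done."""
    return work_done

def job_reward(work_done: float, exponent: float = 20.0) -> float:
    """Work setting: a convex curve -- most of the reward arrives
    only very close to 100% completion. The exponent is an
    illustrative assumption, chosen to fit the post's numbers."""
    return work_done ** exponent

# 80% of the work still gets you ~80% in college...
print(round(academic_reward(0.80), 2))   # 0.8

# ...but almost nothing on the convex curve.
print(round(job_reward(0.80), 2))        # 0.01

# Inverting the curve: work needed for an 80% reward.
work_needed = 0.80 ** (1 / 20.0)
print(round(work_needed, 3))             # 0.989 -- roughly the "99%" above
```

Any strongly convex function gives the same qualitative picture; the power curve is just the simplest one to write down.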

Now, why did I cap reward at 100% in the previous paragraph? Why did I assume that there is a "maximum" amount of work that can be done? Note that if there is a ceiling to the amount of work to be done, and to the reward, then you are looking at a payoff like a bond's: the upside is capped at 100%, but the downside is effectively unlimited (yeah, I know it's limited at 0, but that is so far below 100% that it might as well be infinitely far away). Try hard, do your best each time, and the best you get is 100%. But slip up a bit, and you take a big deficit. It is like the issuer of the bond defaulting.

Almost thirty years back, Michael Milken noticed this skewed payoff structure for bonds, and this led him to invent "junk bonds", which are now more politely known as "high yield debt". These bonds were structured (basically with high leverage) such that a reasonably high rate of default was built in. For an ordinary bond, the expectation is that it won't default at all. For a high-yield bond, the expected rate of default is much higher than zero, so there is a definite upside if the bond doesn't default. That balances the payoffs.

So how does that translate to work situations? You basically need to get yourself a job where there is significant scope for doing "something extra", so that if you take the "something extra" into account, the "expectation" will be, say, something like 90% of the work. By doing only a bit more than your old 80-20 rule from college, you can fulfil expectations. And occasionally even beat them, resulting in a major positive payoff (in terms of money or reputation or power etc.).

The deal is that when the expectation is lower than 100%, the reward-work curve changes. It remains heavily convex up to the expectation point (so if the expectation is 90% of the work for 80% of the reward, the curve is highly convex in the region from (0,0) to (90,80)). Beyond this, it gets less convex and closer to linear, and so gives you a bit more freedom.
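The piecewise shape can be sketched the same way. The specific numbers (90% work for 80% reward) are the post's example; the exponent and the linear segment beyond the expectation point are my own illustrative assumptions:

```python
def reward(work_done: float,
           expected_work: float = 0.90,
           reward_at_expectation: float = 0.80,
           exponent: float = 20.0) -> float:
    """Expectation-adjusted curve: heavily convex up to the expectation
    point, then roughly linear beyond it. All parameter values are
    illustrative assumptions, not from the post."""
    if work_done <= expected_work:
        # Convex portion, scaled to pass through the expectation point.
        return reward_at_expectation * (work_done / expected_work) ** exponent
    # Beyond expectation: linear, so extra work pays off at a steady rate.
    slope = (1.0 - reward_at_expectation) / (1.0 - expected_work)
    return reward_at_expectation + slope * (work_done - expected_work)

print(round(reward(0.90), 2))  # 0.8 -- meeting expectations
print(round(reward(0.95), 2))  # 0.9 -- beating them now pays off linearly
```

The point of the sketch: once the expectation sits below 100%, the brutal convexity is confined to the region you were going to cover anyway, and everything past it behaves like the friendly linear curve from college.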

I’m too lazy to draw the curves so you’ll have to imagine them in your heads. And you can find some info on convex curves here: http://en.wikipedia.org/wiki/Convex_function