There is a famous story that the Soviet government promised pole vaulter Sergei Bubka some huge sum of money “every time he broke the world record”. Being rather smart, Bubka would break the world record each time by one centimeter (the least count for pole vault measurement), using the nature of the event (where you set the bar and try to clear it, and where success in each attempt is binary) to his advantage.
The thing with academia is that ‘paper count’ matters. And it appears that the quality of papers cannot be objectively measured, so the quality of the journals in which they are published is taken as a proxy. And I hear that for decisions like getting a PhD, getting tenure, or building a reputation in the community, there is some sort of informal “paper count” that one needs to clear. You don’t progress until you’ve published a certain “number of papers”.
What this does is incentivize academics to publish more. The degree of “delta improvement” a particular paper shows over its predecessor (assuming each paper can be seen as an improvement over one particular previously known result) doesn’t matter as much as the number of improvements thus shown. Hence, every time the academic notices a small epsilon improvement, he finds it significant – it gets him a paper! The actual practical utility of the improvement be damned.
This is all fine in academia, where one doesn’t need to bother with lowly trivialities such as “practical utility”. But it does start to matter when the academic migrates to industry, and there is no shortage of people making this move. Now, suddenly, what he needs to think about is practical utility. But that doesn’t come naturally to him. The academic strives for delta improvements, and each time there is a delta improvement he finds it significant – after all, that is what he has been trained to do during his long stint writing papers.
I must clarify that I’m not saying ex-academics strive only for delta improvements, just that they find each delta improvement significant, irrespective of the magnitude of the delta. In that way, they are different from Bubka.
But take that out and there is no difference. Both are incentivized by the number of delta improvements they make rather than by their magnitude. In the first case, the Soviet government ended up transferring more money to Bubka than was perhaps necessary. Similarly flawed incentives can lead to corporations losing a lot of money.
PS: I must admit I’m generalizing. Of course there exist studmax creatures like Cat, who refuse to publish unless they have something really significant (he told me of one case where he refused to add his name to a paper since he “didn’t want to be known for that work” or something like that). But the vast majority gets its doctorates and tenures by delta publishing, so I guess I’m allowed to generalize.
It also depends on whether you’re in a conference-driven or journal-driven area: journals demand a higher delta than conferences to accept something for publication.
Also, the better the conference/journal, the higher the delta they ask for.
I think it is present in every field. The vast majority of people in industry work just enough to keep the boss happy, while only a few work harder than necessary. Those who do work hard make it big. Stud-v-fighter theory apart, talent, too, comes with hard work. Even the Beatles worked hard for theirs (the 10,000-hour rule: http://en.wikipedia.org/wiki/Outliers_%28book%29). For every Beatle, there are hundreds who are happy playing in a bar. It’s just that in academia there is a “semi-tangible” quality-and-quantity assessment of work. Mediocre-quality papers before tenure can be attributed to the incentive structure, but the same excuse doesn’t apply to post-tenure work.
The h-index (http://en.wikipedia.org/wiki/H-index) is another metric used to assess the work. This should make profs work on more significant stuff.
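For context, a researcher’s h-index is the largest number h such that h of their papers have at least h citations each – so it rewards a body of well-cited work rather than sheer paper count. A minimal sketch of the computation in Python (the function name and the sample citation counts are my own, for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Sort citation counts from highest to lowest
    ranked = sorted(citations, reverse=True)
    h = 0
    # The i-th best paper (1-indexed) supports h = i
    # only if it has at least i citations
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers with citation counts 10, 8, 5, 4, 3:
# four papers have at least 4 citations, but not five with 5
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note how twenty papers with one citation each still give an h-index of 1, which is exactly why the metric discourages pure “delta publishing”.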