Over the last week or so, one of the topics of discussion in the pink papers has been the employment statistics recently published by the NSSO. Mint, which first carried the story, has now started a whole series on it, titled “The Great Jobs Debate”, in which people from both sides of the fence have been using the paper to argue why the data does or doesn’t make sense.
The story started when Mint editor and columnist Anil Padmanabhan (who, along with Aditya Sinha (now at DNA) and Aditi Phadnis (of Business Standard), ranks among my favourite political commentators in India) pointed out that the number of jobs created during the first UPA government (2004-09) was about 1 million, far fewer than the number created during the preceding NDA government (~60 million). This has led to a hue and cry from all sections: leftists say the jobless growth is the result of too many reforms, rightists say we aren’t creating jobs because we haven’t had enough reform, and still others say there is something wrong with the data. Chief Statistician TCA Anant, in his column published in the paper, used some obscurities in the sub-levels of the survey to argue why the data makes sense.
In today’s column, Niranjan Rajadhyaksha points out that the way employment is counted in India is very different from the way it is counted in developed countries. In the latter, employers periodically report their payroll numbers to the statistics collection agency. Because of India’s large unorganized sector, that is not possible here, so we resort to “surveys”, for which the NSSO is the primary organization.
In a survey, to estimate a quantity across a large population, we take a much smaller sample, one small enough for us to measure the quantity rigorously, and then extrapolate the results to the whole population. The key quantity in a survey is the “standard error”, which measures how much the “observed statistic” is likely to differ from the “true statistic”. What intrigues me is that there is absolutely no mention of the standard error in any of the communication about this NSSO survey (again, I’m relying on the papers here; I haven’t seen the primary data).
Typically, when we measure something by means of a survey, the “true value” is expressed as a “95% confidence range”: we say “with 95% probability, the true value of XXXX lies between Xmin and Xmax”. An equivalent representation is “we think the value of XXXX is centred at Xmid with a standard error of Xse”. Either way, communicating a number computed from a survey requires giving out two numbers. So what is the NSSO doing by reporting just one (most likely the mid)?
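As an aside, here is a small sketch of how the two equivalent representations come out of the same sample. The numbers are made up for illustration (they are not NSSO data):

```python
import math
import random

random.seed(42)

# Hypothetical survey: a small sample drawn from a large population.
# The population parameters here (mean 100, sd 15) are made up.
sample = [random.gauss(100, 15) for _ in range(400)]
n = len(sample)

mean = sum(sample) / n
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
std_err = math.sqrt(variance / n)  # standard error of the mean

# Representation 1: point estimate with standard error
print(f"Xmid = {mean:.1f}, Xse = {std_err:.2f}")

# Representation 2: 95% confidence range (mean +/- 1.96 standard errors)
print(f"95% range: [{mean - 1.96 * std_err:.1f}, {mean + 1.96 * std_err:.1f}]")
```

Reporting only `Xmid` and dropping `Xse` throws away exactly the information needed to judge how seriously to take the estimate.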
Samples used by the NSSO are usually very small, at least compared to the overall population, which makes the standard error very large. Could it be that the standard error is not reported because it is so large that the mean doesn’t make sense? And if the standard error is indeed so large, why should we even use this data as a basis for formulating policy?
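To see the mechanics, the textbook formula for the standard error of a mean is s/√n, where s is the (estimated) standard deviation and n is the sample size. A quick sketch, with an assumed illustrative standard deviation (not an NSSO figure):

```python
import math

# Assumed standard deviation of the quantity being surveyed (illustrative).
sigma = 50.0

# Textbook relationship: the standard error of a mean falls as 1/sqrt(n).
for n in [100, 10_000, 1_000_000]:
    se = sigma / math.sqrt(n)
    print(f"n = {n:>9,}: standard error = {se:.3f}")
```

Whether the NSSO’s actual sample sizes leave the standard error uncomfortably large is exactly the kind of question that could be settled if the number were published alongside the estimate.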
So here is my verdict: the “estimated mean” of employment as of 2009 is not very different from the “estimated mean” as of 2004. However, given that the sample sizes are small, the standard errors will be large. So it is quite possible that the true mean of employment in 2009 is actually much higher than the true mean in 2004 (and, by the same argument, it could be the other way round, which would point to something graver). I conclude that, given the data we have here (assuming standard errors aren’t available), we have insufficient evidence to conclude anything about job creation during the UPA1 government, or its policy implications.
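The logic of that verdict can be illustrated with made-up numbers (emphatically not the actual NSSO figures): if two estimates have large standard errors, their 95% confidence intervals overlap, and neither ordering of the true means can be ruled out.

```python
# Illustrative (made-up) estimates of total employment, in millions.
# These are NOT the actual NSSO figures; they only show the logic.
est_2004, se_2004 = 450.0, 10.0
est_2009, se_2009 = 451.0, 10.0

def ci95(mean, se):
    """95% confidence interval: mean +/- 1.96 standard errors."""
    return (mean - 1.96 * se, mean + 1.96 * se)

lo04, hi04 = ci95(est_2004, se_2004)
lo09, hi09 = ci95(est_2009, se_2009)

# The intervals overlap if each one's lower end sits below the other's upper end.
overlap = lo09 <= hi04 and lo04 <= hi09
print(f"2004: [{lo04:.1f}, {hi04:.1f}]")
print(f"2009: [{lo09:.1f}, {hi09:.1f}]")
print("Intervals overlap:", overlap)
```

With overlapping intervals like these, the true 2009 figure could plausibly be well above, equal to, or well below the true 2004 figure, which is precisely why the point estimates alone settle nothing.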
You are claiming that the NSSO (National Sample Survey Organisation) survey used a sample size too small to get a useful estimate of the population mean. This is unlikely (95% confidence :)) though – fundaes on sample sizes and estimates are usually taught in Statistics 101. If the NSSO doesn’t understand stats, I wonder why it exists!
You should look at the NSSO report (mean +/- std err is usually reported only in research publications).