Workforce Participation of Women

Krish Ashok and Puram Politics have been collecting data from various government sources, including the Ministry of Statistics and Programme Implementation (MOSPI), and converting it to Excel. The dataset contains a wealth of information on social indicators in India, and you can expect the next few issues of RQ to be based on it.

Today we will look at the workforce participation of women across the states of India. First, let us look at rural women. Notice that the all-India average participation is close to 60%. Himachal Pradesh ranks the highest at over 83%, while, perhaps surprisingly, developed states such as Delhi, Kerala and Punjab bring up the rear.

[Figure: Workforce participation of rural women, by state]

Next, we will look at the workforce participation of urban women. Note that the all-India average drops to an abysmal 20%! While migration to urban areas is generally associated with an increased standard of living, it is interesting to note that far fewer women work in urban areas than in rural areas. It is perhaps a reflection of the kind of jobs that are available in urban India.

[Figure: Workforce participation of urban women, by state]

Notice that, once again, Himachal Pradesh tops the list while Punjab and Delhi bring up the rear. In fact, there seems to be a correlation between the workforce participation of rural and urban women across states. Let us explore that with a scatter plot.

[Figure: Scatter plot of rural vs. urban female workforce participation, by state]

Notice that there is a strong positive correlation. Interestingly, Himachal Pradesh and Tamil Nadu (states associated with excellent education levels) show higher participation of urban women in the workforce relative to the participation of their rural women. Karnataka, Andhra Pradesh and Rajasthan also lie above the regression line. It is hard, though, to read any pattern into this data in terms of which state is more developed.
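For readers who want to reproduce this kind of chart, here is a minimal sketch using numpy and matplotlib. The state names and participation figures below are illustrative placeholders, not the actual MOSPI numbers, and the line is a simple least-squares fit; whether a given state lands above or below the fitted line in this sketch carries no meaning.

```python
# Sketch of a rural-vs-urban scatter plot with a regression line.
# All numbers below are made up for illustration only.
import numpy as np
import matplotlib.pyplot as plt

states = ["Himachal Pradesh", "Tamil Nadu", "Kerala", "Punjab", "Delhi"]
rural = np.array([83.0, 65.0, 35.0, 33.0, 10.0])   # rural participation (%)
urban = np.array([40.0, 28.0, 22.0, 14.0, 11.0])   # urban participation (%)

# Simple least-squares fit: urban ~ a * rural + b
a, b = np.polyfit(rural, urban, deg=1)

plt.scatter(rural, urban)
plt.plot(rural, a * rural + b, linestyle="--", label=f"fit: {a:.2f}x + {b:.1f}")
for s, x, y in zip(states, rural, urban):
    plt.annotate(s, (x, y), fontsize=8)
plt.xlabel("Rural female workforce participation (%)")
plt.ylabel("Urban female workforce participation (%)")
plt.legend()
plt.show()
```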

Standard Error in Survey Statistics

Over the last week or so, one of the topics of discussion in the pink papers has been the employment statistics recently published by the NSSO. Mint, which first carried the story, has now started a whole series on it, titled “The Great Jobs Debate”, in which people from both sides of the fence use the paper to argue why the data does or does not make sense.

The story started when Mint editor and columnist Anil Padmanabhan (who, along with Aditya Sinha (now at DNA) and Aditi Phadnis (of Business Standard), ranks among my favourite political commentators in India) pointed out that the number of jobs created during the first UPA government (2004-09) was about 1 million, far fewer than the number created during the preceding NDA government (~ 60 million). This has led to a hue and cry from all sections: leftists say the jobless growth is a result of too much reform, rightists say we aren’t creating jobs because we haven’t had enough reform, and others say there is something wrong with the data. Chief Statistician TCA Anant, in a column published in the paper, tried to use some obscurities in the sub-levels of the survey to argue why the data makes sense.

In today’s column, Niranjan Rajadhyaksha points out that the way employment is counted in India is very different from the way it is counted in developed countries. In the latter, employers periodically report their payroll statistics to the statistics collection agency. Due to India’s large unorganized sector, this is not possible here, so we resort to “surveys”, for which the NSSO is the primary agency.

In a survey, to estimate a quantity across a large population, we take a much smaller sample, small enough for us to measure the quantity rigorously, and then extrapolate the results to the whole population. The key concept in a survey is the “standard error”, which measures how far the “observed statistic” is likely to be from the “true statistic”. What intrigues me is that there is absolutely no mention of the standard error in any of the communication about this NSSO survey (again, I’m relying on the papers here; I haven’t seen the primary data).

Typically, when we measure something by means of a survey, the “true value” is expressed in terms of a “95% confidence range”: “with 95% probability, the true value of XXXX lies between Xmin and Xmax”. An alternative representation is “we think the value of XXXX is centred at Xmid with a standard error of Xse”. Either way, communicating a number computed from a survey requires giving out two numbers. So what is the NSSO doing by reporting just one number (most likely the mid)?
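To make the two representations concrete, here is a minimal sketch that simulates a survey of a proportion and reports both the “mid with standard error” form and the 95% confidence range. The sample is simulated, and the simple-random-sampling formula is an assumption; the actual NSSO design (stratified, multi-stage) would call for a different standard-error calculation.

```python
# Two equivalent ways of reporting a surveyed proportion.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.20                        # the unknown "true" value (known here only because we simulate)
sample = rng.random(2000) < true_rate   # a simulated survey of 2,000 respondents

p_hat = sample.mean()                                 # observed statistic (the "mid")
se = np.sqrt(p_hat * (1 - p_hat) / len(sample))       # standard error of a proportion

print(f"estimate = {p_hat:.3f}, standard error = {se:.3f}")
print(f"95% confidence range: ({p_hat - 1.96*se:.3f}, {p_hat + 1.96*se:.3f})")
```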

Samples used by the NSSO are usually very small, at least compared to the overall population, which makes the standard error very large. Could it be that the standard error is not reported because it is so large that the mean doesn’t make sense? And if the standard error is so large, why should we even use this data as a basis for formulating policy?
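As a rough illustration of how sample size drives the standard error, the sketch below applies the textbook simple-random-sampling formula se = sqrt(p(1-p)/n) to the 20% urban participation figure quoted earlier; the sample sizes are arbitrary and chosen only to show the scaling.

```python
# Standard error of a proportion shrinks roughly as 1/sqrt(n):
# small samples give wide error bands.
import math

p = 0.20
for n in (500, 5_000, 50_000, 500_000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>7,}: se = {se:.4f}  (95% range ~ +/- {1.96*se:.4f})")
```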

So here’s my verdict: the “estimated mean” of employment as of 2009 is not very different from the “estimated mean” as of 2004. However, given that the sample sizes are small, the standard errors will be large. So it is quite possible that the true mean of employment as of 2009 is actually much higher than the true mean as of 2004 (and, by the same argument, it could be the other way round, which would point to something graver). I conclude that, given the data we have (assuming standard errors aren’t available), there is insufficient evidence to conclude anything about job creation during the UPA1 government, or about its policy implications.
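The point can be made concrete with a small sketch. The job counts and standard errors below are entirely hypothetical; the idea is only that when the standard errors are large, the 95% range for the change in employment straddles zero, so the data cannot distinguish job creation from job loss.

```python
# Hypothetical employment estimates (in millions) with large standard errors.
jobs_2004, se_2004 = 400.0, 15.0
jobs_2009, se_2009 = 401.0, 15.0

diff = jobs_2009 - jobs_2004
se_diff = (se_2004**2 + se_2009**2) ** 0.5   # se of a difference of independent estimates
low, high = diff - 1.96 * se_diff, diff + 1.96 * se_diff
print(f"estimated change = {diff:.1f}m, 95% range = ({low:.1f}m, {high:.1f}m)")
# The range straddles zero (and spans both large losses and large gains),
# so this data alone cannot tell us whether jobs were created or lost.
```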