In the world of statistics and operations, people usually talk of two kinds of error – errors of omission and errors of commission. For simplicity, they are referred to as “type 1” and “type 2” errors. I can never remember which is which, but after a little bit of googling, I can tell you that a type 1 error is one where a true (null) hypothesis is rejected, while a type 2 error is one where we fail to reject a false one.
The most common example for this is that of a quality control department. Suppose you are in the business of checking the quality of widgets. There are two kinds of errors you can make – you can classify a bad widget as “good”, or you can classify a good widget as “bad”. Which of these is type 1 and which is type 2 depends upon how you frame the hypothesis. However, let’s not get into those details – they don’t matter. All that matters is that you understand the two ways you can err – which is not hard to understand at all.
Elementary statistics says that one can’t simultaneously minimize both type 1 and type 2 errors in the same process. This again – I think – is fairly intuitive. If you set your criteria for success too high, you will hardly ever classify a bad widget as good. However, the chance that you classify a good widget as bad increases. Similarly, if you loosen your criteria, you will end up rejecting fewer good widgets, but will pay for it by accepting more bad widgets. Simple, right?
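This tradeoff is easy to see with a toy simulation. In the sketch below, every number is made up purely for illustration: good and bad widgets get overlapping “quality scores”, and moving the acceptance threshold trades one error for the other.

```python
import random

random.seed(42)

# Hypothetical score distributions: good widgets tend to score high,
# bad widgets low, but the two overlap, so no threshold is perfect.
good = [random.gauss(70, 10) for _ in range(10000)]
bad = [random.gauss(50, 10) for _ in range(10000)]

def error_rates(threshold):
    """A widget is accepted ("classified good") if its score >= threshold."""
    false_reject = sum(g < threshold for g in good) / len(good)  # good called bad
    false_accept = sum(b >= threshold for b in bad) / len(bad)   # bad called good
    return false_reject, false_accept

for t in (50, 60, 70):
    fr, fa = error_rates(t)
    print(f"threshold={t}: good rejected={fr:.1%}, bad accepted={fa:.1%}")
```

Tighten the threshold and the bad-accepted rate falls while the good-rejected rate rises; loosen it and the reverse happens. You can move the line, but you can’t push both errors to zero at once.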
Ok, I suppose you might have figured out where I’m trying to lead you with regard to the fight against terrorism. The reason different groups have vastly different views on how terrorism should be countered lies in what they are trying to minimize. Let’s rephrase the widget problem. Everyone is either a terrorist (T) or a non-terrorist (NT). Now, the job of the police is to identify and put in jail all of T, while ensuring that no NT is put behind bars.
The problem arises because there is no clear test which can classify people into T and NT. There are a few tests that people apply, but they can only give some idea as to whether the person being tested is a T or an NT. The real battle between different groups lies in where to draw the line for this test – which is what brings some kind of objectivity into the classification into T and NT.
So on one hand, you have the human rights people, whose main objective is that no NT should be classified as T. To achieve this end, they advocate a “decision line” where the chances of an NT being classified as T are minimized. In fact, going by Salil Tripathi’s view that human rights people need to be unreasonable in their demands, if the human rights guys have their way, the line would be set such that no one is classified as T.
On the other hand, you have the pro-security forces, whose sole objective is to reduce the chances of a terrorist attack. Which means that they want a line where no T is classified as NT. Hence, they will advocate a line where the chances of a T getting classified as NT are minimal. The side effect of this is that a number of NTs end up getting classified as T.
Now that we have figured out the conflict between the human rights and pro-security people, there is another angle to this story – one that is far removed from statistics. The issue is that people who have been classified as T are more likely to have a face than those classified as NT. It’s common to read reports such as “Binayak Sen arrested on grounds of terrorism”, but one never gets to read reports that say “Salil Tripathi not arrested because of lack of evidence that he’s a terrorist”. I’m not able to exactly point out what kind of bias this is, but it is important to note that an NT being classified as T is visible in a way that a T being classified as NT is not. This asymmetry in footage gives the human rights people further leverage and a better position of strength.
Then there is the issue that most Ts are Muslim. This has automatically communalised the whole issue. If you are seen as a hardline pro-security guy, you automatically get labeled as “anti-Muslim” and communal. Actually, a simple application of Bayes’s Theorem shows that in India, the probability that a random Muslim is a terrorist is significantly higher than the probability that a random non-Muslim is a terrorist. What we need to note here is that though the former probability is quite low, it is still an order of magnitude higher than the latter.
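To see how the Bayes’s Theorem argument works mechanically, here is a sketch where every input number is an assumed placeholder, not real data – the point is only the shape of the arithmetic, not the figures themselves.

```python
# All numbers below are purely illustrative assumptions, not real data.
p_t = 0.0001        # assumed prior: probability a random person is a terrorist
p_m = 0.14          # assumed share of Muslims in the population
p_m_given_t = 0.8   # assumed fraction of terrorists who are Muslim

# Bayes's Theorem: P(T | M) = P(M | T) * P(T) / P(M)
p_t_given_m = p_m_given_t * p_t / p_m
p_t_given_nm = (1 - p_m_given_t) * p_t / (1 - p_m)

print(f"P(T | Muslim)     = {p_t_given_m:.6f}")   # tiny in absolute terms
print(f"P(T | non-Muslim) = {p_t_given_nm:.6f}")  # but an order of magnitude smaller
print(f"ratio             = {p_t_given_m / p_t_given_nm:.1f}x")
```

Notice that both conditional probabilities come out minuscule – the first probability stays “quite low” no matter what, even while the ratio between the two is large. Both halves of that observation matter.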
The challenge for the pro-security people is to effectively counter the unreasonable demands of the human rights people, while at the same time trying not to appear anti-Muslim. What we need to accept is that we can never have a perfect T/NT filter. And that the “confidence line” we draw to classify people as T and NT needs to be socially optimal. We will need to balance, on one hand, the loss of lives and property in case of a terrorist attack, and on the other, the inconvenience that the average citizen will face if the line is drawn too “tight”. We will need to evaluate the costs of each, estimate the probabilities, and then draw the line. Even then, we need to remember that there can never be a perfect filter. Recognizing this deficiency, I think, is also a major part of the solution to this problem.
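The “evaluate costs, estimate probabilities, draw the line” recipe above can be sketched as a tiny expected-cost minimization. Every cost, count, and error curve below is a made-up placeholder, chosen only to show the mechanics of trading a miss against a false alarm.

```python
# Hedged sketch: all costs, counts, and error curves are assumed placeholders.
cost_attack = 1000.0  # assumed social cost when a T slips through as NT
cost_harass = 1.0     # assumed social cost when an NT is wrongly classified T

def p_miss(tightness):
    """Assumed chance a T is classified NT; falls as the line tightens."""
    return (1 - tightness) ** 2

def p_false_alarm(tightness):
    """Assumed chance an NT is classified T; rises as the line tightens."""
    return tightness ** 2

def expected_cost(tightness, n_t=10, n_nt=40000):
    # total expected social cost = missed terrorists + harassed citizens
    return (n_t * p_miss(tightness) * cost_attack
            + n_nt * p_false_alarm(tightness) * cost_harass)

# Search a grid of "tightness" values between 0 (loose) and 1 (tight)
best = min((t / 100 for t in range(101)), key=expected_cost)
print(f"socially optimal tightness (under these assumptions): {best}")
```

The optimum lands strictly between the two extremes: neither the human-rights line (tightness 0) nor the maximal-security line (tightness 1) minimizes total cost. Change the assumed costs and the line moves – which is exactly why the two camps, weighting the costs differently, want different lines.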
I don’t remember which of Madness, Disease and Ugliness made this statement, but one of them once said: “most Muslims are not terrorists, but most terrorists are Muslims”.