Goodhart’s Law and getting beaten on the near post

I would have loved to do this post with data but I’m not aware of any source from where I could get data for this over a long period of time. Recent data might be available with vendors such as Opta, but to really test my hypothesis we will need data from much farther back – from the times when few football games were telecast, let alone “tagged” by a system like Opta. Hence, in this post I’ll simply stick to building my hypothesis and leave the testing for an enterprising reader who might be able to access the data.

In association football, it is more likely for an attacker to have a goalscoring opportunity from one side rather than from straight ahead. Standing between the attacker and the net is the opposing goalkeeper, and the attacker can try to score on either side of him. Because of the asymmetry in the attacker’s position, these two sides of the goalkeeper can be described as the “near side” and the “far side”. The near side is the gap between the goalkeeper and the goalpost closest to the attacker; the far side is the gap between the goalkeeper and the goalpost farther away.

Red dot is goalkeeper, blue dot is striker.

 

Ideally, the goalkeeper would position himself so as to minimize the overall chance of conceding, irrespective of which side the ball goes in on. However, my hypothesis is that this has not been the case recently. For a while now (my football history is poor, so I’m not sure since when) it has been considered shameful for a goalkeeper to be “beaten at the near post”. The argument has been that given the short distance between himself and the near post, the goalie has no business letting the ball in through that gap. Commentators and team selectors have been more forgiving of the far post, though. The gap there is large enough, they say, that the chances of scoring are high anyway, so it is okay if a goalie lets in a goal on that side.

Introductory microeconomics tells us that people respond to incentives. Goodhart’s Law states that

When a measure becomes a target, it ceases to be a good measure.

So with it becoming part of the general discourse that it is shameful for a goalkeeper to be beaten on the near side, and that selectors and commentators are more forgiving of goals scored on the far side, goalkeepers have responded to the changed incentives. My perception and hypothesis is that, over time, goalkeepers have been positioning themselves closer to their near post, thus leaving a bigger gap towards the far post. In doing so, they are no longer positioning themselves to minimize the total chance of conceding a goal.

But isn’t it the same thing? Isn’t it possible that the position that minimizes the overall chance of conceding is the same as the one that minimizes the chance of being beaten on the near side? The answer is an emphatic no.

Let us refer to the above figure once again. Let us assume that the chance of scoring when the angle available is $\theta$ is $f(\theta)$. Now, we can argue that this is a super-linear function – that is, if $\theta$ increases by 10%, the chance of scoring increases by more than 10%. Again, we could use data to prove this, but I think it is mathematically intuitive. Given that $f(\theta)$ is super-linear, this means that 1. the function is strictly increasing, and 2. the derivative $f'(\theta)$ is also strictly increasing.

So, going by the above figure, the goalkeeper needs to minimize $f(\theta_1) + f(\theta_2)$. If the total angle available is $\theta$ ($= \theta_1 + \theta_2$), then the goalkeeper needs to minimize $f(\theta_1) + f(\theta - \theta_1)$. Taking the first derivative with respect to $\theta_1$ and equating it to zero, we get

$f'(\theta_1) = f'(\theta - \theta_1)$

Because $f$ is a super-linear function, we argued earlier that its derivative is strictly increasing. The above equality therefore implies that $\theta_1 = \theta - \theta_1$, i.e. $\theta_1 = \theta_2$, and hence $f(\theta_1) = f(\theta_2)$. (And since $f'$ is strictly increasing, this stationary point is indeed a minimum.)

Essentially, if the goalkeeper positions himself right, there should be an equal chance of getting beaten at the near and far posts. However, given the stigma attached to being beaten at the near post, he is likely to position himself such that $\theta_1 < \theta_2$ (where $\theta_1$ is the near-side angle), thus increasing the overall chance of getting beaten.
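To make this concrete, here is a minimal numerical sketch (not from the original argument) that assumes one particular super-linear form, $f(\theta) = \theta^2$, and checks that splitting the available angle equally minimizes the total chance of conceding, while shading towards the near post increases it:

```python
import numpy as np

# Hypothetical super-linear (convex) scoring function: f(theta) = theta^2.
# The exact form is an assumption; any f with a strictly increasing
# derivative behaves the same way.
def f(theta):
    return theta ** 2

total_angle = 10.0  # total angle available to the attacker (arbitrary units)

# Evaluate the total scoring chance for every possible split of the angle
near = np.linspace(0, total_angle, 1001)
far = total_angle - near
total_chance = f(near) + f(far)

best_near = near[np.argmin(total_chance)]
print(f"Total chance is minimized at a near-side angle of {best_near:.2f}")  # 5.00, i.e. an equal split

# Shading towards the near post (smaller near-side angle) raises the total:
print(f"Equal split: {f(5.0) + f(5.0):.0f}, shaded towards the near post: {f(3.0) + f(7.0):.0f}")  # 50 vs 58
```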

It would be interesting to look at data (I’m sure Opta will have this) on different goalkeepers and the number of times they get beaten at the near and far posts. If a goalie is intelligent, these two numbers should be roughly equal. How good the goalkeeper is, however, is determined by the total odds of scoring a goal past him.

Review: The Theory That Would Not Die

I was introduced to Bayes’ Theorem of Conditional Probabilities in a rather innocuous manner back when I was in Standard 12. KVP Raghavan, our math teacher, talked about pulling black and white balls out of three different boxes. “If you select a box at random, draw two balls and find that both are black, what is the probability you selected box one?”, he asked, and explained to us the concept of Bayes’ Theorem. It was intuitive, and I accepted it as truth.
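As an aside, here is a minimal sketch of that kind of calculation, with made-up box compositions (the actual numbers from that class are long forgotten): each box’s posterior probability is proportional to its prior times the likelihood of drawing two black balls from it.

```python
from fractions import Fraction

# Hypothetical box compositions (black, white) -- purely illustrative
boxes = {"box 1": (4, 1), "box 2": (2, 3), "box 3": (1, 4)}
prior = Fraction(1, 3)  # each box is equally likely to be picked

def p_two_black(black, white):
    """Probability of drawing two black balls without replacement."""
    total = black + white
    return Fraction(black, total) * Fraction(black - 1, total - 1)

# Bayes' Theorem: P(box | two black) = P(two black | box) * P(box) / P(two black)
joint = {name: prior * p_two_black(b, w) for name, (b, w) in boxes.items()}
evidence = sum(joint.values())
for name, j in joint.items():
    print(name, j / evidence)  # box 1: 6/7, box 2: 1/7, box 3: 0
```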

I wouldn’t come across the theorem again, however, for another four years or so, until, in a course on Communication, I came across a concept called “Hidden Markov Models”. If you were to observe a signal, and it could have come out of four different transmitters, what are the odds that it was generated by transmitter one? Once again, it was rather intuitive. And once again, I wouldn’t come across or use this theorem for a few years.

A couple of years back, I started following the blog of Columbia Statistics and Social Sciences Professor Andrew Gelman. Here, I came across the terms “Bayesian” and “non-Bayesian”. For a long time, the terms baffled me to no end. I just couldn’t get what the big deal about Bayes’ Theorem was – as far as I was concerned it was intuitive and “truth”, and I saw no reason to disbelieve it. However, Gelman frequently alluded to this topic, and started using the term “frequentists” for non-Bayesians. It was puzzling as to why people refused to accept such an intuitive rule.

The Theory That Would Not Die is Sharon Bertsch McGrayne’s attempt to tell the history of Bayes’ Theorem. The theorem, according to McGrayne,

survived five near-fatal blows: Bayes had shelved it; Price published it but was ignored; Laplace discovered his own version but later favored his frequency theory; frequentists virtually banned it; and the military kept it secret.

The book is about the development of the theorem and associated methods over the last two hundred and fifty years, ever since Rev. Thomas Bayes first came up with it. It talks about the controversies associated with the theorem, about people who supported, revived or opposed it; about key applications of the theorem, and about how it was frequently and for long periods virtually ostracized.

While the book is ostensibly about Bayes’ Theorem, it is also a story of how science develops and comes to be. Bayes proposed his theorem but didn’t publish it. His friend Price put things together and published it, but without any impact. Laplace independently discovered it, but later in his life moved away from it, using frequency-based methods instead. The French army revived it and used it to determine the optimal way to fire artillery shells. But then academic statisticians shunned it and “Bayes” became a swearword in academic circles. Once again, it saw a revival during the Second World War, helping break codes and test weapons, but all this work was classified. And then it found supporters in unlikely places – biology departments, Harvard Business School and military labs – but statistics departments continued to oppose it.

The above story is pretty representative of how a theory develops – initially it finds few takers. Then its popularity grows, but the establishment doesn’t like it. It then finds support from unusual places. Soon, this support comes from enough places to build momentum. The establishment continues to oppose it but is then bypassed. Soon everyone accepts it, though some doubters remain.

Coming back to Bayes’ Theorem – why is it controversial and why was it ostracized for long periods of time? Fundamentally it has to do with the definition of probability. According to “frequentists”, who should more correctly be called “objectivists”, probability is objective, and based on counting. Objectivists believe that probability is based on observation and data alone, and not from subjective beliefs. If you ask an objectivist, for example, the probability of rain in Bangalore tomorrow, he will be unable to give you an answer – “rain in Bangalore tomorrow” is not a repeatable event, and cannot be observed multiple times in order to build a model.

Bayesians, who should more correctly be called “subjectivists”, on the other hand believe that probability can also come from subjective beliefs. So it is possible to infer the probability of rain in Bangalore tomorrow based on other factors – like the cloud cover in Bangalore today or today’s maximum temperature. According to subjectivists (which is the current prevailing thought), probability is defined even for one-time events, and can be inferred from other subjective factors.

Essentially, the battle between Bayesians and frequentists has more to do with the definition of probability than with whether it makes sense to define inverse probabilities as in Bayes’ Theorem. The theorem is controversial only because the prevailing statistical establishment did not agree with the “subjectivist” definition of probability.

There are some books that I call “blog-books”. These usually contain ideas that could be easily explained in a blog post, but are expanded to book length – possibly because it is easier to monetize a book-length manuscript than a blog-length one. When I first downloaded a sample of this book to my Kindle, I was apprehensive that it might also fall under that category – after all, how much can you talk about a theorem without getting too technical? However, McGrayne avoids falling into that trap. She peppers the book with interesting stories of the application of Bayes’ Theorem through the years, and also short biographical tidbits of some of the people who helped shape the theorem. Sometimes (especially towards the end) some of these examples of applications seem a bit laboured, but overall, the book sustains adequate interest from the reader through its length.

If I had one quibble with the book, it would be that even after telling the story of the theorem, the book talks about “Bayesian” and “non-Bayesian” camps, and about certain scientists “not doing enough to further the Bayesian cause”. For someone who is primarily interested in getting information out of data, and doesn’t care about the methods involved, it was a bit grating to see scientists graded on their “contribution to the Bayesian cause” rather than their “contribution to science”. Given the polarizing history of the theorem, however, it is perhaps not that surprising.

The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy
by Sharon Bertsch McGrayne
USD 12.27 (Kindle edition)
360 pages (including appendices and notes)

Revisiting MIG Colony, Kalanagar

I first landed up in MIG Colony, Kalanagar, Bandra (East), Mumbai in the summer of 2006. I had just moved to Mumbai for my first job, and had heard lots of stories about the difficulty of finding accommodation in Mumbai. When my aunt’s friend told me that she had two rooms to let out in an apartment she owned in MIG Colony, I jumped at it. I didn’t bother taking a look at the house, or what it was like, or what the facilities were. I simply moved in.

For my first week in the house I positively thought it was spooked. I would hear strange noises, suddenly smell cigarette smoke though I didn’t smoke. There were lots of dark paintings on the walls (the house came furnished, and the landlady had kept one room for her family’s use), and I would imagine them coming to life and coming after me. I even remember taking a video of the house on my point-and-shoot camera and showing it to my folks in Bangalore, just to show them how lousy life was in Mumbai. And as if all this was not bad enough, the house was on the third floor of a building without a lift.

Soon enough my flatmate Brother Louie moved in, and life became better. There were times when we would lock each other out, or leave the key on the door itself (thus enraging the landlady), but things were better. There was this Maharashtrian restaurant called Amey close to where I stayed, where I would eat most of my dinners. The tea there was especially good. Then there would be fruit and vegetable vendors at the intersection closest to the house – and they would give coriander and curry leaves for free with vegetables. Louie found a guy to iron our clothes, and some others to deliver stuff home. But on Sundays I’d take the train and decamp to South Bombay, to just walk there.

And then things quickly went south. Work started getting bad. The monsoons arrived, and every day my worry would be if I would be able to get out of my apartment. Soon a point of inflection was reached. Ours was among the few houses in the colony whose balcony wasn’t barred. I remember standing there staring down, contemplating if I should jump. Then I decided I was much better off simply quitting my job. Two days later I literally ran away, with a one way ticket to Bangalore. I came back a week later, resigned, served notice and moved to Bangalore for good.

In the last one year I’ve had several opportunities to visit and live in MIG Colony. As you know, I’ve been freelancing as a management consultant for a while now, and one of my clients usually arranges accommodation for me at a guest house in Bandra East. Each time I’m here I want to just roam around the colony to see if it’s changed, but somehow never get to do it. It was only today that I managed to find the time.

Just before I moved back to Bangalore, my landlady had told me that there were plans to redevelop the area. All buildings were only four storeys high, a function of the time before elevators were commonplace, and also thanks to regulation given the proximity to the airport. However, with lifts having become common and the building height regulation also being relaxed, there was now scope to unlock the value in the unbuilt height of these buildings. All these four-storey apartments would be torn down soon, I was told, to be replaced by high rises. The owners had all agreed on this redevelopment, and I’m sure they had been adequately compensated.

I took a rather circuitous route back to my guest house this evening, after having finished off a fish thali at Highway Gomantak (one place I never visited during my stay here in 2006, since I was vegetarian then). The Bank of Maharashtra branch is still there – I remember looking at it in 2006 as a “useless” bank, since my SBI ATM card wouldn’t work there. A little down the road, Amey is also there, though it now seems a little more jazzed up than earlier. In fact that road in MIG Colony (which also houses the MIG Cricket Club) has hardly changed in the last seven or more years.

However, that’s one of the few things about this area that have remained constant in seven years. Redevelopment has started, and is in full swing. I’m writing this from a nine-storey building in this area, while there was no building taller than four storeys back then. Near where I used to stay, an even larger complex is coming up, and it looks like it’s near completion. Other old buildings still stand, but they have asbestos sheets around their compounds, indicating impending demolition. They look occupied, though.

The building in which I used to live still stands, though there is a board outside that indicates it is up for redevelopment soon. The bhelpuri stall just outside is still there, as are the vegetable vendors in the intersection nearby (who look more organized now, though). There are more tiny roadside stalls in this area now –  I don’t remember these petty shops occupied by tailors, barbers and tea stalls.

It is interesting to visit a place you were once familiar with after a long time. It is interesting to see what still stands, and what has changed. The question is which surprises you more – that which has still stood or that which has changed.

I’ll end this post with a few pictures from 2006, which I took the day before I left for Bangalore for good. Incredibly, those pictures are there in this laptop – having traveled through several other computers I’ve owned.

My room, during my brief stay in Mumbai in 2006
The building I stayed in
Constructed in 1963. It better be redeveloped now, before it falls

Serving Bangalore’s best Butter Masale Dose

If you were to do a ranking of Masale Dose in different restaurants in Bangalore, I would say that the clear winner would be the one served at The Restaurant Formerly Known as Central Tiffin Room (TRFKACTR, now known as Shree Sagar). Soaked in ghee (clarified butter), extremely crisp on the outside and soft on the inside, and served with two awesome chutneys, it is something every visitor to Bangalore must experience (Warning: not good for your lipid profile, though). Except if you go on a Sunday morning.

The first time I visited TRFKACTR was on a Sunday morning in early 2010. While I was quite impressed by the product itself, I wasn’t so impressed by the ambiance and the operations. It was a Sunday morning and the restaurant was crowded. People were waiting around all over the place to get a seat. Waiters would do nothing to help you get a table. And once seated, service was slow and inefficient – the waiters showed no urgency given the size of the crowd at hand. It would remain my last visit to TRFKACTR for close to two years.

And then I shifted my residence, and moved to a house within two kilometres of TRFKACTR. I’ve since visited the restaurant several times (I’ve lost count), and have come away impressed each time. On none of my subsequent visits have I had any complaints about the service and operations, either. I’ve got a table immediately (though usually shared with strangers, as is the practice in such restaurants), been relieved that the waiters are actually not in a hurry, and leisurely enjoyed my Butter Masale Dose without being bothered by crowds waiting to grab my seat. In the process I’ve also understood why the waiters didn’t show any urgency on that crowded Sunday morning when I first visited.

I had breakfast at TRFKACTR this morning, and the restaurant looked like this:

 

While this is an extreme case – I went early on a drizzly morning, and the restaurant had just opened – the thing with TRFKACTR is that most of the time it runs at or just below capacity. On any given day, as long as it is not a Sunday morning, you can expect to find a seat as soon as you visit the restaurant. You get served at a leisurely pace (though not too leisurely – this restaurant relies on high table turnover), and can eat in peace.

We need to recognize that “business as usual” in TRFKACTR involves the restaurant running at or close to capacity, and the operations at the restaurant have been optimized for this. That operations are stretched on a Sunday morning is not bad planning by the restaurant – it is a conscious decision by the restaurant that the crowds are a once-in-a-week occurrence and they will not optimize for that. While it might make sense to learn and plan for a different set of procedures on Sunday morning, we need to keep in mind that kitchen and table capacity are limited (slow service at the table on my first visit was perhaps due to a constraint on kitchen capacity) and differential pricing for Sundays is unlikely to go down well with customers.

Instead, what has happened is that customers (the regulars, at least) have learnt that the restaurant is really crowded on Sunday mornings and have shifted their gratification via Butter Masale Dose to other days. It is very likely that a majority of the crowd that still comes to the restaurant on a Sunday morning consists of “tourists” – non-regulars who want to see what the restaurant is like.

PS: I’ve visited the restaurant once again on a Sunday morning after that initial visit. I had gone alone, but found a seat immediately. It is a possibility that my perception that the restaurant is really crowded on all Sunday mornings suffers from small sample bias.

Sehwag versus Tendulkar

Though he hasn’t formally retired yet, given that he is hopelessly out of form, one can probably conclude that Virender Sehwag is unlikely to play for India again, and hence it is time to pay tribute.

I have developed a little visualization where I plot the trajectories of a batsman’s innings based on his past records. There are basically two plots – in the first, I track the expected number of runs he would have scored as a function of the number of balls he has faced. In the second, I plot the probability of the batsman still batting as a function of the number of balls faced.

I’ve created an interactive visualization using the Shiny Server plugin for R, on a little Digital Ocean server that I’ve leased. In this application, you can compare the innings trajectories of different players in different formats. I have taken the raw ball-by-ball data for this application from cricsheet, and have analyzed and visualized the data using R.
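For anyone curious about the mechanics, here is a minimal Python sketch of one way the two curves could be computed from ball-by-ball data (the actual app was built in R with Shiny; the dataframe layout and column names below are assumptions, not cricsheet’s schema):

```python
import pandas as pd

def innings_trajectories(df: pd.DataFrame, max_balls: int = 150) -> pd.DataFrame:
    """df has one row per ball faced by the batsman, with (hypothetical) columns
    'innings_id', 'ball_of_innings' (1, 2, ...) and 'runs_scored'."""
    n_innings = df["innings_id"].nunique()
    innings_length = df.groupby("innings_id")["ball_of_innings"].max()
    rows = []
    for b in range(1, max_balls + 1):
        upto_b = df[df["ball_of_innings"] <= b]
        # Expected runs by ball b, averaged over all innings (innings that
        # ended before ball b simply contribute their final score)
        expected_runs = upto_b.groupby("innings_id")["runs_scored"].sum().sum() / n_innings
        # Probability the batsman is still batting: share of innings lasting at least b balls
        p_batting = (innings_length >= b).mean()
        rows.append({"balls": b, "expected_runs": expected_runs, "p_still_batting": p_batting})
    return pd.DataFrame(rows)
```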

Having built this “app”, I was playing around with random combinations of players and formats, and soon started comparing Sachin Tendulkar with Virender Sehwag. Medium-timers like me might remember that back when Sehwag started out in the early 2000s, he was called “the clone” for his batting style was extremely similar to that of Sachin Tendulkar. That they are both short and chubby also helped fuel this comparison. One thing that sets Sehwag apart, though, is his sheer pace of scoring, especially in Test matches.

So while playing around with the “app”, when I loaded Sehwag and Tendulkar together, I noticed one interesting thing – Sehwag in Test matches plays exactly like Tendulkar does in ODIs, and Sehwag in ODIs plays like Tendulkar does in T20s (the data includes IPL games). Check out the graphs for yourselves!

[Charts: innings trajectories of Tendulkar and Sehwag compared across formats]

I’m not sure how much load my small server can take so I’m not putting the link to the app here. However, if you think you’ll find this interesting and will want to play with it, write to me and I’ll send you the link.

Who else are you in touch with?

The thing with catching up with old friends/acquaintances is that you sometimes don’t know if you still connect with them. It might be a while since you last met, and having moved on in different directions, there is a very good chance that you don’t connect with each other at all. Yes, there is the environment you shared several years back that connects you, but when that becomes the only source of connection, it can get rather boring and you might be itching for the conversation to be over.

In order to determine whether you still connect with an old friend/acquaintance, I have a simple test. I must warn you that this test has no predictive power – it won’t tell you before you meet your friend whether you will connect with him/her or not. It does, however, tell you after the event how well you connected. And it can help you decide whether to meet them again if the opportunity arises.

Invariably, I’ve found that when you catch up with old friends, sooner or later, one of you will ask the other, “so who else are you in touch with?”. Between any two people, there are always these “filler lines”, what you say when you realize you have nothing to talk about. With old friends/acquaintances, it is this. Remember that your only connection is the environment you shared a while back, and the other people that inhabited that environment. So, in the absence of anything else to talk about, you end up talking about this.

The metric (I know I’ve been meandering) is this: from the time you meet your old friend/acquaintance, measure how long it takes before the conversation turns to “so who else are you in touch with?”. This gives you an indication of how well you connect with this person. The longer the gap between your meeting and this question coming up, the better you connect – it simply means you have so many other things to talk about that this doesn’t come up.

This afternoon I met a friend from school, and in the hour and a quarter we spoke, this question never came up. This indicates that I still connect with him pretty well. At the other extreme, there have been people with whom the question has been popped within five minutes of meeting – showing how far we’ve drifted apart, and that there’s absolutely nothing to connect us any more.

There are times I’ve been surprised, either way. Once I met a senior from school not knowing if I had much to talk to him about. The question was popped only forty-five minutes into the conversation. We’ve subsequently met a couple of times. Other people I’ve gone to meet thinking of a dozen things to talk about, only for them to start the conversation with “who have you been in touch with?”

I’d once visited Bishop Cotton’s Boys’ School in Bangalore (for a chess tournament) and noticed this board somewhere in the school. It said (paraphrasing):

Great minds discuss ideas,
Middling minds discuss events,
Small minds discuss people.

 

Correlation and the 1987 Stock Market Crash

Recently on this blog I had talked about the phenomenon of correlations, and how that can send financial models topsy-turvy. I had taken the example of additional cars on the road on a rainy day and had explained how in 2008 CDOs went bust as a fall in house prices led to mortgages defaulting together. Today I read this interesting post by JP Koning which attributes the stock market crash of 1987 (Black Monday) also to correlation, but of a different kind.

It basically has to do with how bubbles behave. When you know that the stock market is overheated, there are two things you can do. You can either choose to ride whatever is left of the bubble, and thus go long, or short the market and hope that the bubble is near its end. There are problems with both approaches – if you are long and the bubble bursts, you stand to lose significant money. On the other hand, if you are short and the bubble continues, you can end up getting wiped out before the bubble bursts and offers you an opportunity to profit (as Keynes supposedly said, the market can remain irrational longer than you can remain solvent).

Trading is a difficult business during a bubble. Every good trader knows that a bubble is on. Yet they are faced with the above dilemma. They want to participate in the party as long as it lasts, but leave before the house comes crashing down. But nobody knows when the house will crash. Some smart traders such as Taleb (no doubt backed by their banks’ deep pockets) simply buy put options and wait for the bubble to burst to make their money. Some get out of the market. But most remain, taking directional bets (in either direction) and not sure whether they are going to get wiped out.

Suppose you are a trader in one such bubble, and you decide to use a mixed strategy for whether you go long or short. Let us assume that on four out of five days (randomly chosen) you are long the market, and on the fifth day you go short. Let us assume every trader follows a similar strategy, but that no two traders’ strategies are correlated. So on a given day, for every trader going short there are four traders going long, and thus the bubble continues (let us assume that each trader plays with the same amount of money). You can see where this is going. What if there is a day when, for some reason, more than the usual 20% of the traders decide to go short?
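To put some (entirely made-up) numbers on this, here is a toy simulation of my own: each trader shorts independently with probability 0.2 on a normal day, and the bubble survives as long as the fraction shorting stays below some threshold. One day of correlated decisions is enough to breach it.

```python
import numpy as np

rng = np.random.default_rng(42)
n_traders = 1000
p_short = 0.2      # assumed probability of any one trader shorting on a given day
threshold = 0.30   # assumed: the bubble "bursts" if more than 30% of traders go short

# Normal days: independent (uncorrelated) decisions essentially never breach the threshold
normal_days = rng.random((250, n_traders)) < p_short
breaches = (normal_days.mean(axis=1) > threshold).sum()
print(f"Breaches in 250 uncorrelated trading days: {breaches}")

# A day with a common signal (say, everyone reads the same article):
# half the traders short together, and the threshold is comfortably breached
correlated_day = rng.random(n_traders) < 0.5
print(f"Short fraction on the correlated day: {correlated_day.mean():.2f} "
      f"-> bubble bursts: {correlated_day.mean() > threshold}")
```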

Let us briefly revisit the house party analogy. There is a party on and you want to enjoy it for as long as possible. However, the house in which the party is going on is unstable, and as soon as the number of people in the house falls below a certain number, the house will collapse, crushing anyone still in there (yes, this is a weird house, but never mind). You go near the house and you see a large number of people having a gala time. You see that the number of people in the house far exceeds the threshold, and so you join the party. And thus the party swells.

Suppose you are now in the party, and you see a large number of people leaving. Suddenly, you realize that following their exit, the number of people left in the house will be only just above the threshold. You don’t want to be one of the few left holding up the house when it comes down, you reason, and so you want to leave with the large group. The only problem is that you are not alone in thinking this. Most other guests have also seen the large group leave, and want to accompany them on their way out.

Traders were aware that the crash of 1929 had also occurred in late October, and on a Monday. On the 19th of October 1987, Koning mentions in his blog, the Wall Street Journal published a graph of the stock market in the 1980s and superimposed it on a graph of the stock market in the 1920s, leading up to the crash of 1929 (which led to the Great Depression). The two graphs looked remarkably similar.

This was all the trigger the market needed. Suddenly, you have a day when every trader reads about the bursting of the 1929 bubble in the newspaper, and about how the current market is similarly poised. Suddenly every trader is doubly conscious of the stock market bubble, and wants to get away. Instead of every trader playing a random strategy, where only 20% would want to short, on this particular day a much larger number of traders want to short. As they collectively short, the market falls significantly enough to tell everyone that the bubble is bursting. Everyone else tries to join them as they rush out of the party house. The house duly crashes.

Once again, notice that this was a random system being held up by low correlation. Traders knew there was a bubble, but didn’t know when it would burst and thus played uncorrelated mixed strategies, which kept the market afloat. All it took was one newspaper article, which every trader happened to read. The correlation suddenly jumped, and the market moved decisively.

As an exercise at the end of this blog post, think of other systems that are similarly “held up” because of low correlation in people’s behaviour. It need not only be financial – remember the road-on-a-rainy-day example I gave in my previous post. Then think of what might cause the correlations holding up these systems to jump to one, and how those systems will then respond. Please don’t, however, blame me for scaring you.

Analyzing Premier League Performance so far

After yet another round of matches this weekend, Liverpool were unable to beat a 10-man Newcastle, and have slipped to third spot, with Chelsea going ahead of them on goal difference. Arsenal thumped Norwich to go clear at the top of the table. Manchester United continued to flounder, drawing at home to Southampton, who are the most improved team this season, compared to the last.

Now, the problem is that each team has a different fixture list. Some teams (such as Manchester United) have had an insanely tough set of fixtures so far this season. Others such as Arsenal have had it quite easy (the eight games Arsenal have played this season have all been fixtures that they won last year!). How do we account for this relative ease in fixtures to see how well teams have been performing?

In chess, one of the popular tie-breaker methods used for “Swiss league” tournaments is called the “Solkoff method”. According to this method, the tie-breaker score for each player is the sum of points scored by all his opponents. In a Swiss league, each player plays against a different set of players, so a higher Solkoff score means a player has played his games against tougher opponents, and has hence done better than someone else with the same points tally who has played weaker opponents. The question is whether we can use these principles to evaluate football teams at this point in the season.

I propose what I call the “Modified Solkoff” score. Here, we not only take into account the total points of each opponent of a team, but also the result of the game against that particular opponent. This is then normalized by the total points scored by all of the team’s opponents. Take Arsenal, for example. Their opponents so far this season have a total of 69 points as of today. Of the eight games they’ve played, Arsenal have lost to Aston Villa and drawn at West Brom. So the numerator of Arsenal’s Modified Solkoff score becomes 0 * Aston Villa’s points (10) + 1 * West Brom’s points (10) + 3 * the total points of all their other opponents (49), which amounts to 157. This is then normalized by the total points tally of their opponents so far (69), and we get Arsenal’s normalized Modified Solkoff score of 157/69 = 2.28. You can see that the maximum possible Solkoff score is 3 (if a team has won all its games) and the minimum is 0 (losing all games). The higher the Solkoff score the better (better performance against better opponents).
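A minimal sketch of the calculation (the fixture list below is just the Arsenal example from the paragraph above; the split of the 49 points among the six beaten opponents is arbitrary, since only their sum matters):

```python
def modified_solkoff(results):
    """results: list of (points_taken, opponent_total_points) tuples,
    where points_taken is 3 for a win, 1 for a draw and 0 for a loss."""
    numerator = sum(taken * opp for taken, opp in results)
    denominator = sum(opp for _, opp in results)
    return numerator / denominator

# Arsenal so far: loss to Aston Villa (10 pts), draw at West Brom (10 pts),
# and wins over six opponents with 49 points between them (split assumed).
arsenal = [(0, 10), (1, 10)] + [(3, pts) for pts in (8, 8, 8, 8, 9, 8)]
print(round(modified_solkoff(arsenal), 2))  # 2.28
```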

This is what the Modified Solkoff table looks like as of today (21st October 2013). Arsenal may not have played the toughest opponents but the fact that they have won so many of their games means that they are on top. They are interestingly followed by Manchester City and then Southampton. Manchester United is buried somewhere in the bottom half of the table:

 

It is also interesting to note that Sunderland are ahead of Crystal Palace at the bottom of the table. This is due to the fact that Palace’s only points so far have come against Sunderland, while Sunderland earned their point from a draw with high-flying Southampton.

This also shows that Liverpool’s early season highs have come on the back of wins against relatively weaker teams (it doesn’t help their cause that Manchester United is classified as a “weak team” thanks to their performance so far), and thus their early season table topping is unlikely to sustain.

Let me know in the comments what you think of this method of computing a normalized score based on a team’s opponents so far.

PS: This table will be regularly updated (after each “matchday”), so if you are reading this after October, some of the notes may not match what is there in the table.

Pricing fines for ticketless travel

In large mass transit systems such as those in Mumbai (or even Chennai), ticket checking turnstiles can significantly slow the flow of human traffic. The sheer number of passengers that use these transit systems daily makes it impossible to check the ticket of each and every traveler. Hence, the Railways, rather than checking the tickets of every passenger, instead relies on random checks. During these random ticket checking efforts, people traveling without a ticket are asked to pay a fine. This, the Railways hope, will be deterrent enough for people to purchase tickets before travel.

However, rather than ensuring deterrence, what this system has resulted in is an informal “ticketless travel insurance” economy. The concept is simple – rather than buying a ticket from the official ticket counter, you instead buy protection from an “informal insurance provider”. For a nominal “premium” (believed to be in the range of Rs. 100 per month) these providers insure you against ticketless travel. In other words, in case you get caught by the ticket checkers during the course of your “insurance”, these “insurance providers” step in to pay your fine! Check out this article in The Hindu about how these insurance providers work (WARNING: The link isn’t working too well for me, and is taking me to a third party site a few seconds after loading The Hindu page, so be careful before clicking through).

The very existence of this market, however, implies that fines for ticketless travel are not being priced properly. The math is fairly simple: if the price of the ticket is $p$ and the probability of your ticket getting checked is $\frac{1}{N}$, then the fine for ticketless travel should be strictly greater than $Np$. If not, it works out cheaper for you to pay the fine each time you are caught rather than buy the ticket.

So what role is being played by these “informal insurance companies”? Risk management! People don’t like risk. While on average your ticket might be checked only once in 30 days (number pulled out of thin air), there is no reason that you will not be pulled up for ticketless travel multiple times in a month. By outsourcing the risk to a central party who pools the risks (from several commuters), you have a steady cash outflow and are hedged against getting caught multiple times (you might get caught, but your insurer pays the fine). In fact, this is how insurance works in other sectors also.

What should the Indian Railways do to drive these “informal insurance companies” out of business? Currently, if the fine is $S$, we have $S \le Np$. From this inequality, you can see that the Railways can do one of three things to get it reversed. The price of a ticket, $p$, can be reduced – but that would be equivalent to cutting off the nose to spite the face, for it would have a significant adverse impact on the Railways’ revenues. Next, $N$ can be reduced, or in other words the frequency of surveillance increased. This, too, is not easily implementable since the Railways will have to invest in additional resources to check tickets. The last option is to increase $S$, and there is nothing that prevents the Railways from doing this!

How will this work, though? By raising the cost of fines for ticketless travel while keeping the frequency of ticket checking constant, the “premium” a commuter will have to pay to these insurance companies will increase. If the fine amount is increased to a certain level, the premium a commuter will have to pay to buy ticketless travel insurance will exceed the price of buying tickets! And the insurance market will implode.
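A back-of-the-envelope sketch with made-up numbers (ticket price, checking frequency and trips per month are all assumptions) shows how raising the fine pushes the insurer's break-even premium above the cost of simply buying tickets:

```python
p = 10        # ticket price per trip (assumed, in Rs.)
N = 30        # on average, one ticket check every N trips (assumed)
trips = 50    # trips per month (assumed)

for fine in (100, 300, 500):
    # The insurer must charge at least its expected payout to stay in business
    break_even_premium = trips * fine / N
    cost_of_tickets = trips * p
    viable = break_even_premium < cost_of_tickets
    print(f"Fine Rs.{fine}: minimum premium Rs.{break_even_premium:.0f} "
          f"vs tickets Rs.{cost_of_tickets} -> insurance viable: {viable}")
```

With a fine of Rs. 100 the insurer can undercut the ticket counter; once the fine reaches $Np$ (Rs. 300 in this made-up example), the insurance can no longer undercut it.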

While this seems like a simple solution in theory, I’m not confident of it being implemented any time soon. Who knows – one might have to go to the Union cabinet to increase the level of fines in local trains. That’s how our railways is structured.

Networking eatings

Given that I’m a freelancer and do several things to earn my money, and that there is no consistency in my income flow, I need to do a lot of “networking”. Essentially, this is about catching up with someone over an informal chat, discussing what we do, and exploring if there are any synergies to exploit. I think this has great option value, for meeting people and getting their perspectives makes you think differently, and that can give you ideas which you can potentially make money out of at a later point in time.

The point of this post is the venues for such networking meetings. I don’t have an office – I work from home, and my home office is not particularly suited for meetings, so I prefer to do my meetings outside. Sometimes, when the person I’m meeting has an office, we end up doing the meeting there. I’ll leave out those meetings from this discussion, since there is nothing really to be described about the venue. On most other occasions, though, meetings happen over food and drink, more likely the latter. This post is about good and bad places for networking meetings.

Most of my “networking meetings” so far have happened at the trusty old Cafe Coffee Day. The city is littered with several of these outlets, and for the price of two cappuccinos, they offer an excellent place to sit and talk for hours together. The problem, though, is that they have now (for a couple of years or so) gone pre-paid. You need to order at the counter before you settle down at a table, and each time you want something more you need to go up and order again. There are two problems this poses.

Firstly, if you reach before the other person (chances of both reaching at the same instant are infinitesimal), you will need to wait. And in the time when you’re occupying a table and haven’t ordered you have to deal with strange glares from the cafe staff. You need to keep telling them “I’m waiting for a friend”. The next problem is with payment dynamics. It is so much easier to split the bill when you’re paying at the table. It gets complicated when you’re paying at the counter, with the effect that more often than not one of you will end up paying for both of you. That’s not exactly a problem, but starting a meeting with discussions on who will pay is not exactly the best way to go.

My initial meetings with the person who has turned out to be my biggest client so far happened in the coffee shop of a five star hotel. I must mention here that in most five star hotels in Bangalore, you get remarkably good filter coffee nowadays. Coffee shops of five star hotels are good places for these meetings, for they are usually quiet and you are served at your desk. They come at a cost, however – though you might argue that paying two hundred rupees for filter coffee at Vivanta is not so much more than paying a hundred rupees for a cappuccino at Cafe Coffee Day.

Breakfast at a five star hotel, however, isn’t that great for networking. Recently, I did a breakfast meeting at a five star hotel. As you might expect, we had the buffet. However, the problem with doing a meeting over a breakfast buffet in a five star hotel is that you simply can’t do justice to the spread! You can’t keep going for refills, and you would want to stick to things you can eat without creating much of a mess. And when you’re doing a professional meeting you don’t want to be eating too much also.

Then there are South Indian restaurants. I’ve done some meetings in those, also. The problem, however, is that such restaurants rely on quick table turnover and even if you go in off-peak times you get strange looks if you stay too long. This has to be mitigated with staggered orders through the course of your meeting. The advantage is that these places are cheap and the food is great.

I don’t usually do networking meetings over drinks. It has nothing to do with my capacity – it is just that most pubs are loud and not particularly conducive for conversation. And you don’t want to be screaming at the top of your voice in a professional meeting. That doesn’t mean I haven’t done meetings in pubs, though, but it’s usually after a certain degree of familiarity has been established.

Finally let us come to the lunch meetings. Here, it is important that you choose a cuisine that is high density. Again you don’t want to spend too much time eating, so you should prefer food that you can eat little of but will still fill you up. Also, you need to choose a cuisine that’s not messy. On both counts, North Indian is NOT ideal – it’s not very high density, and you need to eat with your hands which can become messy and that’s not something you want at a meeting. A further problem is that North Indian food in most restaurants comes in shared portions – and when you’re meeting someone professionally it can get a little uncomfortable.

These problems exist with East Asian food too. South Indian restaurants (in Bangalore) are mostly quick service and thus not great for networking lunches (and South Indian food is low density). So the ideal choice in this case is European – portions are small, the food is filling, you can eat it all with a knife and fork, and it comes in individual portions.

I’ll put out more fundaes on this as I get more experienced in the matter of networking eatings. I’m off now – need to rush to a lunch meeting!