End of month blues

One of the problems with running your blog on your own website is that you need to manage bandwidth. Basically it seems like my blog has been overrun by bots, and so by the 25th of every month the bandwidth quota for the month is exhausted, and the blog goes down for the rest of the month. I’ve been trying a lot of things to prevent this – blocking suspicious-looking IPs, installing Bad Behaviour, and the like – but I still don’t know why it keeps getting locked out.

My biggest problem with this end-of-month lockout is the volume of ideas that go down the drain during this time, rather than getting published on the blog. I wish I could remember all those blogging ideas and do at least one mega blog post with a summary of all of them, so that I could write about them at some point in the future, but it seems like I can’t remember anything now.

In other news, I’ve been getting really stressed out of late, and my mental bandwidth has been at an all-time low. I’ve felt that I’ve been going downhill since my trip to New York a few months back, but of late it’s gotten really bad, and I’m just not able to do anything. That’s yet another reason why blogging frequency has dipped in the last couple of weeks.

Doing a deep dive into my own past, I think I’ve figured out why this has been happening. Rather, I have a hypothesis about why I’ve been stressing myself out too much at work which has led to this situation. Basically it’s down to studs and fighters.

I traditionally have what I call a “stud” working style. I work in bursts, at reasonably low intensity. I look at the problem as a series of steps, and for each step, I internalize the problem and then try to de-focus. And while thinking about something else, or reading something, or writing something else, I end up having a solution to the problem; then I take a little break and move on to the next step. This is essentially how I’ve worked over the last few years, and I think I’ve (to myself at least) done a good job using this method.

There’s yet another method that I’ve frequently used in the past, one that I call the Ganesha method. It’s basically used for tasks I want to get done ASAP. I work at them at a very high intensity, shutting myself off from everything else in the world. I work continuously without a break, and then take a long break once the solution is done. I’ve used it in the past for things like competitive exams, where I think I’ve done rather well.

So the mistake I made a while back (maybe a year or so back) was to try to use this latter method over longer periods of time, for longer problems. The thing with this method is that it’s suited for short problems, which can be finished off in a burst with a little bit of stretching myself. But when applied to significantly larger problems, I’ve found that it stresses me out way too much. By trying to be steady and focused over a long period of time, which is how a fighter traditionally works, I think I’ve mentally destroyed myself.

Moral of the story is that, whatever happens, you need to be yourself and do things in your own style. Don’t try to change yourself in order to please others. It is simply not sustainable.

Vishnu and Shiva temples

This post may add to Aadisht’s contention that Shaivism is superior to Vaishnavism. Earlier this month I’d gone with family to a place called Avani, some 100 km east of Bangalore. The main attraction there was a 10th century Shiva temple built by the Gangas.

As we got off the car, I was pleased to see the signage of the Archaeological Survey of India. I’m in general not a big fan of temples. I find them overwhelmed with “devotees”, and way too noisy, and more importantly, for some reason I’m not allowed to use my camera inside temples. So I was pleased that, this being an ASI temple, there wouldn’t be any worship in there and I could take pictures peacefully.

As we entered, though, I saw a number of priestly figures standing around the entrance, and one of them shouted “no photo in temple, no photo in temple” (I was in Bermudas and a t-shirt, and wearing a backpack and camera bag, so I looked the foreign type). I just nodded and went on. And then another priest accompanied us, and performed the pooja to the idol.

The temple at Avani is that of Ramalingeshwara, a version of Shiva. Now, the studness with Shiva temples is that the idol is extremely simple. It’s just a penis. And it’s not hard to make, and more importantly it’s hard to break, since it’s monolithic, and usually without any portions that can easily break off. Contrast this with Vishnu temples, where the idols are of actual human figures, with arms and legs and ears and noses and fingers – all made of relatively thin pieces of stone, which makes it easier to break.

So think of yourself as an invader who for some reason wants to defile a temple by destroying its idols. The very nature of idols in a Vishnu temple makes your job simple. All you need is one strong hit, which will break off a nose or a toe or a finger – not much damage, but enough to defile the temple and render it useless for the purpose of worship. But get to a Shiva temple, and you see one large penis-shaped stone in there, and you realize it’s not worth your patience to try to break it down. So you just loot the vaults and go your way.

And hence, due to the nature of the idols in these temples, Shiva temples are more resilient to invasion and natural disaster compared to Vishnu temples. Aadisht, you can be happy.

Between Suits and Geeks

So you have suits and you have geeks. The problem with me is that I’m neither. I lie somewhere in between. So when I’m in the company of suits, I look like a geek, and in the company of geeks I look like a suit.

Problem is that suits don’t understand geeky stuff, or tend to get intimidated, or expect me to do magic. Geeks are usually dismissive of suity stuff, saying it’s all “globe” or “pfaff”. They think they are the masters of the universe and suits are dumb.

So. Suits to the left of me, and geeks to my right. Here I am, stuck in the middle with you.

Why You Should Not Do An Undergrad in Computer Science at IIT Madras

I did my undergrad in Computer Science and Engineering at IIT Madras. My parents wanted me to study Electrical Engineering, but I had liked programming back in school, and my JEE rank “normally” “implied” Computer Science and Engineering. So I just went with the flow and joined the course. In the short term, I liked some subjects, so I was happy with my decision. Moreover there was a certain aura associated with CS students back in IITM, and I was happy to be a part of it. In the medium term too, the computer science degree did open doors to a few jobs, and I’m happy for that. And I still didn’t regret my decision.

Now, a full seven years after I graduated with my Bachelors, I’m not so sure. I think I should’ve gone for a “lighter” course, but then no one told me. The thing with a B.Tech. in Computer Science and Engineering at IIT Madras is that it is extremely assignment-intensive. Computer Science is that kind of a subject – there is very little you can learn in the classroom. The best way to learn is by actually doing stuff, and “lab” is cheap (all you need is a bunch of computers), so most courses are filled with assignments. Probably from the fourth semester onwards, you spend most of your time doing assignments. Yes, you do end up getting good grades on average, but you will have worked for them. And there’s no choice.

The thing with an undergrad is that you are clueless. You have no clue what you’re interested in, what kind of a career you want to pursue, what excites you, and so on. Yes, you have some information from school, from talking to seniors and the like, but it’s still very difficult to KNOW when you are seventeen what you want to do in life. From this perspective, it is important for you to keep your options as open as they can be.

Unfortunately most universities in India don’t allow you to switch streams midway through your undergrad (most colleges are siloed into “arts” or “engineering” or “medicine” and the like). IIT Madras, in fact, is better in that respect, since it allows you to choose a “minor” stream of study and take courses in pure sciences and the humanities. But it is still impossible to change your core stream midway. So how do you signal to the market that you are actually interested in something else?

One way is by doing projects in areas that you think you are really interested in. Projects serve two purposes – first, they let you do real work in the chosen field and find out for yourself whether it interests you. And if it does interest you, you have an automatic resume bullet point to pursue your career on that axis. Course-related projects are fine, but since they’re forced, you have no way out, and they will be especially unpleasant if you happen to not like the course.

So why is CS@IITM a problem? Because it is so hectic, it doesn’t give you the time to pursue your other interests. It doesn’t offer you the kind of time you need to study and take on projects in other subjects (yes, it still offers you the 3 + 1 months of vacation per year, when you can do whatever you want, but in the latter stages you’re so occupied with internships and course projects that you’re better off having time during the term). So if you, like me, find out midway through the course that you would rather do something else, there is that much less time for you to explore, study, and do projects in other subjects.

And there is no downside to joining a less hectic course. How hectic a course inherently is only sets a baseline. If you were to like the course, no one stops you from doing additional projects in the same subject. That way you get to do more of what you like, and get additional bullet points. All for the good, right?

After I graduated, IIT Madras reduced its credit requirement by one-twelfth. I don’t know how effective that has been in reducing the inherent workload of students but it’s a step in the right direction. Nevertheless, if you are going to get into college now, make sure you get into a less hectic course so that the cost of making a mistake in selection is not high.

Copa Format

The ongoing Copa América is probably the worst-designed sporting event I’ve ever seen, in terms of tournament format. Yes, there have been tournaments that have come close in the past, like the Asia Cup 08, which had a funny format so as to ensure at least two India-Pakistan matches (but which made the chances of an India-Pakistan FINAL really low). Then there was Euro 2008, where teams qualifying for the knockout from the same group ended up in the same half of the draw. And then, in hindsight, there was the Cricket World Cup 2007, when two upsets threw out two of the favourites before the “real tournament” had begun.

But in the face of the current Copa América, all of those can be described as extremely well-designed tournaments. The Copa format is so bad that I seriously doubt this post is going to be exhaustive in listing all its flaws. Since there are so many of them, and I don’t want to keep saying “moreover”, “next” or “furthermore”, I’ll do it in bullet points. The points are in random order.

  • You have 12 countries in the first round which you want to reduce to 8 for the second round. What do you do? Four groups of three with the top two from each qualifying, right? Instead, they have 3 groups of 4, with the two best third-placed teams also qualifying. So you spend 18 matches (about two-thirds of the tournament – see the quick arithmetic sketch after this list) throwing out one-third of the teams! OK, but I understand (as Atul Mathew points out on Twitter) this is the standard format of the Copa, so I guess I’ll let it be.
  • The organizers seem to have clearly drawn from the experience of the 2007 CWC, when India and Pakistan went out in the first round. And given how the first two rounds of matches played out, it wouldn’t have been hard to imagine one or both of Argentina and Brazil going out, which would have killed the competition. I guess that’s the reason the Copa adopts this tamasha of third-placed teams and the like.
  • The last matches in each group are not played simultaneously, and the “seeded teams” in each group (Argentina, Uruguay, Brazil) got to play the last games, and thus figure out exactly what they needed to do (fix it even, maybe?) so that they got a favourable draw in the quarters. Actually, as I’ll explain in a subsequent point, it was more like “favourable opponent” rather than “favourable draw”. Check out Jonathan Wilson’s piece on watching Brazil-Ecuador with a bunch of Chile fans.
  • Now you have, in the second round, Brazil taking on Paraguay, whom they’ve already faced in the group stages. Again, a daft format that allows a team to play the third-placed team from its own group in the second round itself. I remember the 1994 FIFA World Cup handling third-placed teams well, making sure they didn’t meet teams they’d played before in the second round.
  • Take a look at the quarter-final fixtures, and do a sensitivity analysis of what would have happened if either Brazil had done slightly worse or Argentina had done better. You will notice that as long as Argentina and Brazil finished their respective groups as either number 1 or number 2, they would end up in different halves of the tournament! Oh, the lengths the organizers have gone to ensure they maximize the chances of getting a Brazil-Argentina final. Another offshoot is, again, teams from the same group having to meet in the semis. For example, if Venezuela beat Chile this weekend, then either Brazil or Paraguay could get to the final of the tournament without ever facing a team that started anywhere outside of group B!
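For what it’s worth, here’s a quick back-of-the-envelope sketch of the match-count arithmetic behind the first bullet. The helper function and the knockout structure (quarters, semis, third-place match, final) are my own illustrative assumptions about the standard bracket, not official fixture data.

```python
# Back-of-the-envelope check of the group-stage arithmetic in the first bullet.
# Assumes single round-robin groups and a QF/SF/3rd-place/final knockout.
from math import comb

def group_stage_matches(groups, teams_per_group):
    """Single round-robin within each group."""
    return groups * comb(teams_per_group, 2)

copa_format = group_stage_matches(3, 4)   # 3 groups of 4 -> 18 matches
alternative = group_stage_matches(4, 3)   # 4 groups of 3 -> 12 matches
knockout = 4 + 2 + 1 + 1                  # QFs + SFs + 3rd place + final

print(f"Copa format: {copa_format} of {copa_format + knockout} matches "
      f"({copa_format / (copa_format + knockout):.0%}) just to eliminate 4 of 12 teams")
print(f"4x3 alternative: {alternative} of {alternative + knockout} matches "
      f"({alternative / (alternative + knockout):.0%}) in the group stage")
```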
As I mentioned, this list is unlikely to be exhaustive. And I hope, for the sake of giving the organizers a kick in the butt, Paraguay and Uruguay will do the needful and throw out Brazil and Argentina respectively. They’re fully capable of doing that, based on tournament form.

 

In search of uncertainty

Back when I was in school, I was a math stud. At least people around me thought so. I knew I wanted to pursue a career in science, and that in part led me to taking science in class XI, and subsequently writing JEE which led to the path I ultimately took. Around the same time (when I was in high school), I started playing chess competitively. I was quite good at it, and I knew that with more effort I could make it big in the game. But then, that never happened, and given that I would fall sick after every tournament, I retired.

It was in 2002, I think, that I was introduced to contract bridge, and I took an instant liking to it. All the strategising and brainwork of chess would be involved once again, and I knew I’d get pretty good at this game, too. But there was one fundamental difference which made bridge so much more exciting – the starting position was randomized (I’m not making a case for Fischer Chess here, mind you). The randomization of starting positions meant that you could play innumerable “hands” with the same set of people without ever getting bored. I simply loved it.

It was around that time that I started losing interest in math and the other hard sciences. They had gotten to the point where they were too obscure and boring, I thought, and making an impact in them would mean going too deep in, so I wanted to move towards something less precise and less hard. That was probably what led me to do an MBA. And during the course of my MBA I discovered my interest in economics and the social sciences, but I am yet to do anything significant on that front, apart from the odd blog post here or there.

I think what drove me from what I had thought was my topic of interest to what I now think it is, is the nature of open problems. In the hard sciences, where a lot of things are “known”, it’s getting really hard to do anything of substance unless you get really deep in, into the territories of obscurity. In the “fuzzy sciences”, on the other hand, not too much is “known”, and there will always be scope for doing good, interesting work without it getting too obscure.

Similarly, finance, I thought, being a people-driven subject (the price of a stock is what a large set of people think its price is; there are no better models), would have lots of uncertainty, and scope to make assumptions, and thus scope to do good work without getting too obscure. But what I find is that, given the influx of hard science grads into Wall Street over the last three decades, most of the large organizations are filled with people who simply choose to ignore the uncertainty and “interestingness” and instead try to solve deterministic problems based on models that they think completely represent the market.

And this has resulted in having to do stuff that is really obscure and deep (like in the hard sciences) even in a non-deterministic field such as finance, simply because it’s populated by people from hard science backgrounds, and it takes way too much to go against the grain.

PS: Nice article by Tim Harford on why we can’t have any Da Vincis today. Broadly related to this, mostly on scientific research.

More on consulting partners

I’d written in an earlier post that consulting firms remain young and dynamic by periodically promoting new people to partnership, and that these new partners, in their quest to develop new markets and establish themselves, take on risks which can prove useful to underlings, who then have a better chance to make a mark for themselves.

However, as I once experienced a long time back, there can be a major downside to this. The new partners, in their quest to establish themselves, can sometimes be too eager in the commitments they make to clients. They are prone to promising way more than their team can realistically deliver, and that puts additional and unnecessary pressure on the people working for them.

Nothing earthshattering, but thought I should mention this here for the sake of completeness, so that I don’t mislead you with my conclusions.

Methods of Negotiations

There are fundamentally two ways in which you can negotiate a price: you can either bargain or set a fixed price. Bargaining induces temporary transaction costs – you might even end up fighting as you try to negotiate. But in the process, you and the counterparty are giving each other complete information about what you are thinking, and at every step, some new information goes into the price. Finally, if you do manage to strike a deal, it will turn out to be one that both of you like (ok, I guess that’s a tautology). Even when there is no deal, you know you at least tried.

In a fixed-price environment, on the other hand, you need to take into consideration what the other person thinks the price should be. There’s a fair bit of game theory involved, and you constantly need to be guessing what the other person might be thinking, and probably adjust your price accordingly. There is no information flow during the course of the deal, and that can severely affect the chances of the deal happening. The consequences in terms of mental strain can be enormous in case you are really keen that the deal goes through.
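To make the information-flow point concrete, here is a toy simulation. The setup is entirely my own stylised assumption (a uniformly distributed buyer valuation, a fixed seller cost, a guessed posted price), not a proper game-theoretic model; it just illustrates how a posted price can kill deals that bargaining would have found.

```python
# Toy model: fixed price vs. bargaining. All numbers are made up for illustration.
import random

random.seed(42)
N = 100_000
SELLER_COST = 60.0    # assumed: seller won't go below this
FIXED_PRICE = 80.0    # assumed: seller's one-shot guess at a "fair" price

fixed_deals = bargain_deals = 0
for _ in range(N):
    buyer_value = random.uniform(50, 110)   # assumed buyer valuation
    # Fixed price: no information flows; deal only if the guess happens to be acceptable.
    if buyer_value >= FIXED_PRICE:
        fixed_deals += 1
    # Bargaining: both sides reveal enough that a deal is struck whenever
    # there is any surplus at all (buyer values the item above the seller's cost).
    if buyer_value >= SELLER_COST:
        bargain_deals += 1

print(f"Fixed price: {fixed_deals / N:.0%} of encounters end in a deal")
print(f"Bargaining:  {bargain_deals / N:.0%} of encounters end in a deal")
```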

Some people find the fixed-price environment romantic. They think it’s romantic that one can think exactly on behalf of the counterparty and offer them a fair deal. What they fail to account for is the amount of thought and guessing that actually goes into the process of determining the “fair deal”. What they also discount is the disappointment that has occurred in the past when they’ve been offered an unfair deal and could do nothing about it because the price was fixed. But I guess that’s the deal about romance – you remember all the nice parts and ignore that similar conditions could lead to not-so-nice outcomes.

Bargaining, on the other hand, has none of this romance. It involves short-term costs, fights even. But that’s the best way to go about it if you are keen on striking a deal. Unfortunately the romantics think it’s too unromantic (I guess because it’s too practical) and think that if you want a high probability of a deal, you should be willing to offer a fixed price. And the fight continues… Or maybe not – it could even be a “take it or leave it” thing.

Standard Error in Survey Statistics

Over the last week or more, one of the topics of discussion in the pink papers has been the employment statistics that were recently published by the NSSO. Mint, which first carried the story, has now started a whole series on it, titled “The Great Jobs Debate” where people from both sides of the fence have been using the paper to argue their case as to why the data makes or doesn’t make sense.

The story started when Mint editor and columnist Anil Padmanabhan (who, along with Aditya Sinha (now at DNA) and Aditi Phadnis (of Business Standard), ranks among my favourite political commentators in India) pointed out that the number of jobs created during the first UPA government (2004-09) was about 1 million, which is far less than the number of jobs created during the preceding NDA government (~60 million). And this has led to a hue and cry from all sections. Arguments include leftists saying that jobless growth is because of too much reform, rightists saying we aren’t creating jobs because we haven’t had enough reform, and some other people saying there’s something wrong with the data. Chief Statistician TCA Anant, in his column published in the paper, tried to use some obscurities in the sub-levels of the survey to argue why the data makes sense.

In today’s column, Niranjan Rajadhyaksha points out that the way employment is counted in India is very different from the way it is in developed countries. In the latter, employers give statistics of their payroll to the statistics collection agency periodically. However, due to the presence of the large unorganized sector, this is not possible in India so we resort to “surveys”, for which the NSSO is the primary organization.

In a survey, to estimate a quantity across a large population, we take a much smaller sample, one that is small enough for us to measure this quantity rigorously. Then, we extrapolate the results to the larger population. The key thing in a survey is the “standard error”, which is a measure of how much the “observed statistic” may differ from the “true statistic”. What intrigues me is that there is absolutely no mention of the standard error in any of the communication about this NSSO survey (again, I’m relying on the papers here; I haven’t seen the primary data).

Typically, when we measure something by means of a survey, the “true value” is usually expressed in terms of the “95% confidence range”. What we say is “with 95% probability, the true value of XXXX lies between Xmin and Xmax”. An alternate way of representing this is “we think the value of XXXX is centred at Xmid with a standard error of Xse”. So in order to communicate numbers computed from a survey, it is necessary to give out two numbers. What, then, is the NSSO doing by reporting just one number (most likely the midpoint)?
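Just to illustrate what reporting “two numbers” looks like, here is a minimal sketch, with entirely made-up figures (not actual NSSO numbers), of how a survey proportion, its standard error and its 95% confidence range go together:

```python
# Minimal sketch of reporting a survey estimate with its standard error.
# The sample size and count below are hypothetical, not NSSO data.
import math

n = 100_000          # hypothetical number of people surveyed
employed = 39_500    # hypothetical number found to be employed

p_hat = employed / n                             # observed employment rate (the "one number")
se = math.sqrt(p_hat * (1 - p_hat) / n)          # standard error of a proportion
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se   # 95% confidence range

print(f"Estimated rate : {p_hat:.4f}")
print(f"Standard error : {se:.4f}")
print(f"95% range      : [{ci_low:.4f}, {ci_high:.4f}]")
```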

Samples used by the NSSO are usually very small, at least compared to the overall population, which makes the standard error very large. Could it be that the standard error is not reported because it’s so large that the mean doesn’t make sense? And if the standard error is so large, why should we even use this data as a basis to formulate policy?

So here’s my verdict: the “estimated mean” of employment as of 2009 is not very different from the “estimated mean” of employment as of 2004. However, given that the sample sizes are small, the standard error will be large. So it is very possible that the true mean of employment as of 2009 is actually much higher than the true mean of 2004 (by the same argument, it could be the other way round, which points at something more grave). So I conclude that, given the data we have here (assuming standard errors aren’t available), we have insufficient evidence to conclude anything about job creation during the UPA1 government, and its policy implications.
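And here is a hedged sketch of what that verdict amounts to: given two survey estimates and their standard errors (all the figures below are hypothetical placeholders, since the standard errors weren’t reported), one can check whether the 2004–2009 difference is even distinguishable from zero.

```python
# Hedged sketch: is the change between two survey estimates statistically meaningful?
# All figures are hypothetical placeholders, not NSSO data.
import math

est_2004, se_2004 = 457.0, 6.0   # employment in millions, with an assumed standard error
est_2009, se_2009 = 458.0, 6.0

diff = est_2009 - est_2004
se_diff = math.sqrt(se_2004**2 + se_2009**2)   # independent estimates: SEs add in quadrature
z = diff / se_diff

print(f"Difference: {diff:.1f} million, standard error of difference: {se_diff:.1f}")
print(f"z = {z:.2f}; |z| < 1.96 means the change is indistinguishable from zero at 95%")
```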

On Running a Consulting Firm

So most consulting firms are run as partnerships (as you might have already figured out). There was an experiment in the late 90s where a then-leading firm was bought over by an IT company, and that saw stagnation for the next few years, until the consultants did a “management buyout” to rid themselves of the IT company’s controls. By then, though, valuable time had been lost, and last I heard this company was severely lagging its peers in terms of reputation, among other things.

As I had mentioned in the earlier post, the rut sets in once partners reach “steady state”, where they have an established set of relationships that they milk to get more business. And as I mentioned, it’s hard to get out of this rut, until employees start leaving in protest at the poor quality of work and the lack of opportunities to make it big. And that starts sending the firm into a downward spiral. So what is it that firms must do to keep themselves dynamic and not get into this kind of a rut?

The answer is something that is practiced by most leading consulting firms. Every few months or every year, these firms add to the partnership pool, mostly by promoting from within their ranks. Once thus promoted, it is the new partner’s responsibility to expand and generate new business for the firm, and he is not able to piggyback on the relationships established by the existing partners. And thus, in his quest to expand and get himself established, he has an incentive to take more risks, and to take on projects whose payoffs resemble being long an out-of-the-money option.

Regular promotions to the partnership level mean that there is always a set of partners taking such risks, and that keeps the firm dynamic. I don’t know how well this works in practice, but in theory at least, it keeps firms from stagnating. That this is the model followed by most leading management consulting firms indicates that it is probably an appropriate approach.

So, if you think your consulting partnership is stagnating, get in more partners. Promote. Or make way. And keep the group dynamic and a great place to work.