Perfect and imperfect interviews

The movies give you an impression of what a “perfect” job interview looks like. It usually happens at the stage of the movie when the protagonist has started turning things around and things are looking up. The “perfect” job interview barely lasts minutes, and the protagonist walks out with an offer letter. As good as it can get. Unfortunately, this reversal of fortunes for the protagonist (who has hitherto been jobless and desperate) happens only towards the end of the movie, so we never really get to know how the job itself went.

I had one such “perfect” interview once. To be fair, I had sent my CV in advance and had done a couple of telephonic rounds (which were primarily about my would-be bosses telling me what the company did, without asking me too many questions) before I was called in for the interview. I remember going in around 10 am, first meeting the HR person in charge of recruitment and then the India head. No questions asked. I then met my would-be immediate boss. Again, few questions asked; most of the time was spent explaining what the role would be about. By lunchtime I had an offer letter. As perfect as it could get, you might think.

The perfection, unfortunately, didn’t last too long. Within a few weeks (yes, back then I counted my tenure in a job in weeks, as I had never lasted more than ten in any earlier organization), it was clear that there was a mismatch of expectations. Given my (then spectacular-looking) CV, they had expected me to pull rabbits out of hats. I soldiered on. A year later I announced that there were no rabbits in the hats. Helpfully, two months later, I suggested how the design of the hats could be improved (all this is metaphorical, of course). But they were disappointed that I wasn’t able to produce rabbits. Soon, we parted.

Come to think of it, given my disastrous career as an employee (which I don’t hope to resurrect), it’s hard to find a “happy” story. But to contrast with the one above, I must tell you about another job, one where I was significantly happier (while it lasted) and lasted longer. This time I faced sixteen rounds of interviews. Yes, you read that right. Sixteen. All extremely difficult. Each of them showed the firm a different dimension of me. And more importantly, each gave me insight into what the team I was going to join did and the kind of people I was going to work with. After the last of these interviews, the firm took ages to decide on my offer. But when I joined, I slotted in easily. They knew me and I knew them. Yes, it might have unravelled once again (eighteen months on), but till then I could only have dreamed of eighteen months of being happy in a job!

I was reminded of all this yesterday while thinking about a meeting I have this week to explore a potential business opportunity. At first I imagined myself closing the deal quickly. Then I corrected myself. Considering that this is a longish engagement I’m contemplating, it is important for them to know me and for me to know them well before we sign. It is important to know what each of us is bringing to the table, and for expectations to be clear. So yes, multiple rounds of talks could potentially delay this engagement, but they will put it on a firmer footing.

And that’s what you should be thinking of every time you want to get into a potential assignment. Unless, of course, your intention is to defraud your counterparty.

PS: Think of how campus placements work given this framework. I think of it as purely an option value investment by the employer.

Should you have an analytics team?

In an earlier post a couple of weeks back, I had talked about the importance of business people knowing numbers and numbers people knowing business, and had put in a small advertisement for my consulting services by mentioning that I know both business and numbers and work at their cusp. In this post, I take that further and analyze if it makes sense to have a dedicated analytics team.

Following the data boom, most companies have decided (rightly) that they need to do something to take advantage of all the data they have, and have created dedicated analytics teams. These teams, normally staffed with people from a quantitative or statistical background, with perhaps a few MBAs, are in charge of taking care of all the data the company has, along with doing some rudimentary analysis. The question is whether having such dedicated teams is effective, or whether it is better to have numbers-enabled people across the firm.

Having an analytics team makes sense from the point of view of economies of scale. People who are conversant with numbers are hard to come by, and when you find some, it makes sense to put them together and get them to work exclusively on numerical problems. That also enables collaboration and knowledge sharing, which can have positive externalities.

Then there is the data aspect. Anyone doing business analytics within a firm needs access to data from all over the firm, and if the firm doesn’t have a centralized data warehouse housing all its data, one task of each analytics person would be to pull together the data they need for their analysis. Here again, the economies of scale of having an integrated analytics team help. The job of putting together data from multiple parts of the firm is not repeated multiple times, and the analysts can spend more time analyzing rather than collecting data.

So far so good. However, in a piece I wrote a while back, I had explained that investment banks’ policy of having exclusive quant teams has doomed them to long-term failure. My contention there (informed by an insider view) was that an exclusive quant team whose only job is to model, and which doesn’t have a view of the market, can quickly get insular, and this can lead to groupthink. People are more likely to solve for problems as defined by their models rather than problems posed by the market. This, I had argued, can soon lead to a disconnect between the bank’s models and the markets, and ultimately to trading losses.

Extending that argument, it works the same way with non-banking firms as well. When you put together a group of numbers people, call them the analytics group, and give them only the job of building models rather than looking at actual business issues, they are likely to get similarly insular and opaque. While initially they might do well, they start getting disconnected from the actual business the firm is doing, and fall in love with their models. Like the quants at big investment banks, they too will start solving for their models rather than for the actual business, and that prevents the rest of the firm from getting the best out of them.

Then there is the jargon. If you say “I fitted a multinomial logistic regression and it gave me a p-value of 0.05, so this model is correct”, a business manager without much clue about numbers can be bulldozed into submission. By talking a language that most of the firm does not understand, you are obscuring yourself, which leads to two responses from the rest. Either they deem the analytics team incapable (since it fails to talk the language of business, in which case the purpose of the analytics team’s existence may be lost), or they assume the analytics team to be fundamentally superior (thanks to the obscurity of its language), in which case there is the risk of incorrect and possibly inappropriate models being adopted.

I can think of several solutions for this, but irrespective of what solution you ultimately adopt – whether you go completely centralized, completely distributed, or a hybrid of the two – the key step in getting the best out of your analytics is to have your senior and senior-middle management conversant with numbers. By that I don’t mean that they should all go for a course in statistics. What I mean is that your middle and senior management should know how to solve problems using numbers. When they see data, they should have the ability to ask the right kind of questions. Irrespective of how the analytics team is placed, as long as you ask it the right kind of questions, you are likely to benefit from its work (assuming basic levels of competence, of course). This way, management can remain conversant with the analytics people, and a middle ground can be established so that insights from numbers actually flow into the business.

So here is the plug for this post – shortly I’ll be launching short (one-day) workshops in analytics for middle and senior level managers. Keep watching this space 🙂

 

Duckworth Lewis and Sprinting a Marathon

How would you like it if you were running a marathon and someone were to set you targets for every 100 meters? “Run the first 100m in 25 seconds. The second in 24 seconds” and so on? It is very likely that you would hate the idea. You would argue that the idea of the marathon would be to finish the 42-odd km within the target time you have set for yourself and you don’t care about any internal targets. You are also likely to argue that different runners have different running patterns and imposing targets for small distances is unfair to just about everyone.

Yet, this is exactly what cricketers are asked to do in games that are likely to be affected by rain. The Duckworth-Lewis method, which has been in use to adjust targets in rain-affected matches since 1999, assumes an average “scoring curve”. The formula assumes a certain “curve” according to which a team scores runs during its innings. It’s basically an extension of the old thumb rule that a team is likely to score as many runs in the last 20 overs as it does in the first 30 – but D/L also takes into account wickets lost (this is the major innovation of D/L; earlier rain rules such as run-rate or highest-scoring-overs didn’t take wickets lost into consideration).

The basic innovation of D/L is that it is based on “resources”. With 50 overs to go and 10 wickets in hand, a team has 100% of its resources. As a team uses up overs and loses wickets, its resources are correspondingly depleted. D/L extrapolates based on the resources left at the point the innings is cut short. Suppose, for example, that a team scores 100 in 20 overs for the loss of 1 wicket, and the match has to be curtailed right then. What would the team have scored at the end of 50 overs? According to the 2002 version of the D/L table (the first that came up when I googled), after 20 overs and the loss of 1 wicket, a team still has 71.8% of its resources left. Essentially the team has scored 100 runs using 28.2% (100 minus 71.8) of its resources. So at the end of the innings the team would be expected to score 100 * 100 / 28.2 ≈ 354.
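
The arithmetic of that extrapolation is simple enough to sketch (a minimal illustration using the resource figure quoted above; the real method uses the full published D/L tables):

```python
# Minimal sketch of the D/L "resources used" extrapolation described above.
# The 71.8% figure is the one quoted from the 2002 table; everything else
# here is just the arithmetic.

def projected_score(runs_scored: float, resources_remaining_pct: float) -> float:
    """Extrapolate a curtailed innings to a notional full-innings score."""
    resources_used = 100.0 - resources_remaining_pct
    return runs_scored * 100.0 / resources_used

# Example from the text: 100/1 after 20 overs, with 71.8% of resources left.
print(projected_score(100, 71.8))  # ~354.6, i.e. roughly 354
```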

How have D/L arrived at these values for resource depletion? By simple regression, based on historical games. To simplify, they look at all historical games where the team had lost 1 wicket at the end of 20 overs, and look at the ratio of the final score to the 20 over score in those games, and use that to arrive at the “resource score”.
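
A stylised version of that estimation step might look like the following (entirely made-up innings data, and a simple group average standing in for the actual regression):

```python
import pandas as pd

# Hypothetical historical data: one row per uninterrupted innings, snapshotted
# at some (overs bowled, wickets lost) state. Column names are assumptions.
innings = pd.DataFrame({
    "overs_bowled":     [20, 20, 20, 30, 30],
    "wickets_lost":     [1, 1, 2, 3, 3],
    "runs_at_snapshot": [100, 90, 110, 150, 140],
    "final_score":      [280, 260, 270, 290, 300],
})

# Fraction of the eventual score already made at each state; averaging this
# across games gives a crude estimate of "resources used" at that state.
innings["fraction_scored"] = innings["runs_at_snapshot"] / innings["final_score"]
resources_used = (
    innings.groupby(["overs_bowled", "wickets_lost"])["fraction_scored"]
    .mean()
    .rename("estimated_fraction_of_resources_used")
)
print(resources_used)
```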

To understand why this is inherently unfair, consider the champions of the first two World Cups that I watched. In 1992, Pakistan followed the principle of laying a solid foundation and then exploding in the latter part of the innings. A score of 100 in 30 overs was considered acceptable, as long as the team hadn’t lost too many wickets. And with hard hitters such as Inzamam-ul-Haq and Imran Khan in the lower order, they would have more than doubled that score by the end of the innings. In fact, most teams followed a similar strategy in that World Cup (New Zealand was a notable exception, using Mark Greatbatch as a pinch-hitter; India also tried that approach in two games, sending Kapil Dev to open).

Four years later, in the subcontinent, the story was entirely different. Again, while there were teams that followed the approach of a slow build-up and late acceleration, the winners Sri Lanka turned that formula on its head. Test opener Roshan Mahanama batted at seven, with the equally dour Hashan Tillekeratne preceding him. At the top were the explosive pair of Sanath Jayasuriya and Romesh Kaluwitharana. The idea was to exploit the field restrictions of the first 15 overs, and then bat on at a steady pace. In that setup it was not unlikely that more runs would be scored in the first 25 overs than in the last 25.

Duckworth-Lewis treats both strategies alike. The D/L regression contains matches from both the 1992 and 1996 World Cups. It includes matches where pinch-hitters dominated, and matches with a slow build-up and a late slog. And the “average scoring curve” it arrives at probably represents neither, since it is an average over all games played. 100/2 after 30 overs would have been an excellent score for Pakistan in 1992, but for Sri Lanka in 1996 the same score would have represented a spectacular failure. D/L, however, treats them equally.

So now you have the situation that if you know that a match is likely to be affected by rain, you (the team) have to abandon your natural game and instead play according to the curve. D/L expects you to score 5 runs in the first over? Okay, send in batsmen who are capable of doing that. You find it tough to score off Sunil Narine, and want to simply play him out? Can’t do, for you need to score at least 4 in each of his overs to keep up with the D/L target.

The much-touted strength of D/L is that it allows you to account for multiple rain interruptions and mid-innings breaks. At a more philosophical level, though, this is also its downfall. Because you now have a formula that micromanages and tells you what you should ideally be doing on every ball (as Kieron Pollard and the West Indies found out recently, simply going by over-by-over targets will not do), you are bound to play by the formula rather than how you want to play the game.

There are a few other shortcomings with D/L, which are a result of its being a product of regression. It doesn’t take into account who has bowled, or who has batted. Suppose you are the fielding captain and you know, given the conditions and forecasts, that there is likely to be a long rain delay after 25 overs of batting – after which the match is likely to be curtailed. You have three excellent seam bowlers who can take good advantage of the overcast conditions. Their backup is not so strong. So you play for the rain break and choose to bowl out your best bowlers before it! Similarly, D/L doesn’t take into account the impact of powerplay overs. So if you are the batting captain, you want to take the batting powerplay as soon as possible, before the rain comes down!

The D/L is a good system, no doubt, else it would not have survived for 14 years. However, it creates a game that is unfair to both teams, and forces them to play according to a formula. We can think of alternatives that overcome some of the shortcomings (for example, I’ve developed a Monte Carlo simulation-based system which can take into account powerplays and the bowling out of the strongest bowlers). Nevertheless, as long as we have a system that extrapolates after every ball, we will always have an unfair game, where teams have to play according to a curve. D/L encourages short-termism, at the cost of planning for the full quota of overs. This cannot be good for the game. It is like setting 100m targets for a marathon runner.
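
For a flavour of what a simulation-based alternative could look like, here is a toy sketch with invented ball-by-ball scoring probabilities – not the system mentioned above, just the general idea of projecting a score by simulating the remainder of the innings many times:

```python
import random

# Toy Monte Carlo projection of a curtailed innings. The outcome probabilities
# are invented; a real system would estimate them from data and condition on
# batsmen, bowlers, powerplays, and so on.
OUTCOMES = [0, 1, 2, 3, 4, 6, "W"]
WEIGHTS  = [0.35, 0.30, 0.08, 0.01, 0.15, 0.06, 0.05]

def simulate_rest(runs: int, wickets: int, balls_left: int) -> int:
    """Play out the remaining balls once and return the simulated final score."""
    while balls_left > 0 and wickets < 10:
        outcome = random.choices(OUTCOMES, weights=WEIGHTS)[0]
        if outcome == "W":
            wickets += 1
        else:
            runs += outcome
        balls_left -= 1
    return runs

# Project 100/1 after 20 overs (180 balls left) over many simulated innings.
projections = [simulate_rest(100, 1, 180) for _ in range(10_000)]
print(sum(projections) / len(projections))  # mean projected final score
```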

PS: The same arguments I’ve made here against D/L apply equally to its competitor, the VJD method (pioneered by V Jayadevan of Thrissur).

Numbers and management

I learnt Operations Research thrice. The first time was when I had just finished school and was about to go to IIT. My father had just started a part-time MBA, and his method of making sure he had learnt something properly was to try and teach it to me. And so, using an old textbook he had bought some twenty years earlier, he taught me how to solve the transportation problem. I had already learnt to solve 2-variable linear programming problems in school (so yes, counting that, I learnt OR four times). And my father taught me how to solve 3-variable problems using the Simplex table.

I got quite good at it, but having not used it for the subsequent two years, I forgot it. And then I happened to take Operations Research as a minor at IIT. And so, in my fifth semester, I learnt the basics again. I was taught by the highly rated Prof. G Srinivasan, and he lived up to his rating. Again, he taught us simplex, transportation and assignment problems, among other things. He showed us how to build and operate the Simplex table. It was fun, and surprisingly (in hindsight) never once did I consider it laborious.

This time I didn’t forget. OR being my minor meant that I had OR-related courses in the following three semesters, and I liked it enough to even consider applying for a PhD in OR. Then I got cold feet and decided to do an MBA instead, and ended up at IIMB. And there I learnt OR for the fourth time.

The professor who taught us wasn’t particularly reputed, and she lived up to her not-so-particular reputation. But there was a difference here. When we got to the LP part of the course (it was part of “Quantitative Methods 2”, which included regression and OR), I thought I would easily ace it, given my knowledge of Simplex. Initially I was stunned to learn that we wouldn’t be taught the Simplex at all. “What do they teach in an OR course if they don’t teach Simplex?”, I thought. Soon I would know why. Computers!

We were all asked to install software called Lindo on our PCs, which would solve any linear programming problem you threw at it, in multiple dimensions. We also discovered that Excel had the Solver plugin. With programs like these, what use was knowing the Simplex? Simplex was probably useful back in the day when readymade algorithms were not available. Also, IIT, being a technical school, might have seen value in teaching us the algorithm (though we always solved it procedurally; I don’t remember ever writing down pseudocode for Simplex). The business school would have none of it.

It didn’t matter how the problem was actually solved, as long as we knew how to use the solver. What was more important was the art of transforming a real-life problem into one that could be solved using Solver or Lindo. In terms of formulation, the problems we got in our assignments and exams were tough – back at IIT, where we solved everything manually, such problems were out of bounds, since Simplex would have taken too long on them.
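
To give a sense of what “formulate it and let the solver do the work” looks like outside Lindo or Excel, here is a toy two-product production-planning LP solved with SciPy (the problem and all the numbers are invented for illustration):

```python
from scipy.optimize import linprog

# Toy LP: maximise profit 3x + 5y subject to resource limits.
# linprog minimises, so the objective coefficients are negated.
c = [-3, -5]                      # profit per unit of products x and y
A_ub = [[1, 2],                   # machine hours used per unit of x and y
        [3, 1]]                   # labour hours used per unit of x and y
b_ub = [14, 15]                   # machine and labour hours available
bounds = [(0, None), (0, None)]   # production quantities can't be negative

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x, -result.fun)      # optimal production plan and maximum profit
```

The formulation – deciding what the variables, objective and constraints are – is the hard part; the solver does the rest.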

I remember taking a few more quant electives at IIM. They were all the same – some theory would be taught, so we knew something about the workings of the algorithms, but the focus was on applications. How do you formulate a business problem in a way that lets you use a particular technique? How do you decide which technique to use for which problem? These were some of the questions I learnt to answer through the course of my studies at IIM.

I once interviewed with a (now large) marketing analytics firm in Bangalore. They expected me to know how to measure “feelings” and other BS so I politely declined after one round. From what I understood, they had two kinds of people. First they had experienced marketers who would do the “business end” of the problem. Then they had stats/math grads who actually solved the problem. I think that is problematic. But as I have observed in a few other places, that is the norm.

You have tech guys doing absolutely tech stuff and reporting to business guys who know very little of the tech. Because of the business guy’s disinterest in tech, he is unlikely to get his hands dirty with the data. And is likely to take what the tech guy gives him at face value. As for the tech guy doing the data work, he is unlikely to really understand the business problem that he is solving, and so he invariably ends up solving a “tech problem”, which may or may not have business implications.

There are times when people ask me if I “know big data”. When I reply in the negative, they wonder (sometimes aloud) how I can call myself a data scientist. Then there are times when people ask me about a particular statistical technique. Again, it is extremely likely I answer in the negative, and extremely likely they wonder how I call myself a data scientist.

My answer is that if I deem a problem to be solvable by a particular technique, I can then simply read up on the technique! As long as you have the basics right, you don’t need to mug up all available techniques.

Currently I’m working (for a client) on a problem that requires me to cluster data (yes, I know that much statistics – enough to know that the next step is to cluster). So this morning I decided to read up on clustering algorithms. I’m amazed at the techniques that are out there; I hadn’t even heard of most of them. I read up on each and considered how well it would fit my data. After reading up, and taking another look at the data, I made what I think is an informed choice, and selected a technique I think is appropriate. And I had had no clue of the technique’s existence two hours earlier.
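
To give a sense of how low the barrier to trying out a technique is, here is a generic clustering sketch on made-up data (k-means via scikit-learn; this is illustrative, and not necessarily the technique I ended up choosing):

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up two-dimensional data with two obvious groups, purely for illustration.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

# Fit k-means and inspect how the points get grouped.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)   # where the two cluster centres ended up
print(model.labels_[:10])       # cluster assignments for the first few points
```

Whether the clusters mean anything for the business is, of course, the part no library will do for you.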

Given that I solve business problems using data, I make sure I use techniques that are appropriate to the business problem. I know of people who don’t even look at the data at hand before implementing complex statistical techniques on it. In my last job (at a large investment bank), there was one guy who suggested five methods (supposedly popular statistical techniques – I had never heard of them; he had a PhD in statistics) to attack a particular problem, without having even seen the data! As far as he was concerned, he was solving a technical problem.

Now that this post is turning out to be an advertisement for my consulting services, let me go all the way. Yes, I call myself a “management consultant and data scientist”. I’m both a business guy and a data guy. I don’t know complicated statistical techniques, but I don’t see the need to either – since I usually have the internet at hand while working. I solve business problems using data. The data is only an intermediate step. The problem definition is business-like, as is the solution. Data is only a means.

And for this, I have to thank the not-so-highly-reputed professor who taught me Operations Research for the fourth time – who taught me that it is not necessary to know Simplex (Excel can do it), as long as you can formulate the problem properly.

The Problem with Unbundled Air Fares

Normally I would welcome a move like the recent one by the Directorate General of Civil Aviation (DGCA) that allows airlines to decrease baggage limits and to charge for seat allocation. While I’m a fan of checking in early and getting a seat towards the front of the aircraft (I usually don’t carry much luggage on my business trips), under normal circumstances I wouldn’t mind the extra charge, as I would expect it to be offset by a corresponding decrease in the base fare.

However, I have a problem. I don’t pay for most of my flights – I charge them to my client. And this is true of all business travelers, who charge them either to their own company or to some other company. And when you want to charge your air fare to someone else, one nice bundled fare makes sense. For example (especially since I charge my flights to my client), I would be embarrassed to add line items to my invoice asking for reimbursement of the Rs. 200 I paid for an aisle seat, or the Rs. 160 I paid for the sandwich. A nice bundled fare would spare me all such embarrassment.

Which probably explains why most airlines that primarily depend on business travelers don’t unbundle their fares – their baggage allowances remain high, they give you free food on board, and they don’t charge you extra for lounge access (using your loyalty tier to grant it instead). Business travelers, as I explained above, don’t like unbundled fares.

Which makes it intriguing that Jet Airways, which prides itself on being a “full service carrier”, has decided to cut baggage limits and charge for seat allocation (they continue not to charge for food, though). Perhaps they have recognized that a large number of business travelers have already migrated to the so-called low-cost Indigo (it’s impossible for Indigo to have a 30% market share if they don’t get any business travelers at all), because of which Indian business travelers may not actually mind the unbundling.

Currently, Indigo has a “corporate program”, where the price of your sandwich and drink is bundled into the price of the ticket. I normally book my tickets on Cleartrip, so I have never been eligible for this, but I can see why the program is popular – it prevents corporates from having petty line items such as sandwiches added to their invoices. On a similar note, I predict that soon all airlines will have a “corporate program” where the price of the allocated seat and a certain amount of baggage (over and above the standard 15kg) will be bundled into the base price of the ticket. Now that I charge my flights to a client, I hope this happens soon.

Time for bragging

So the Karnataka polls are done and dusted. The Congress will form the next government here, and hopefully they won’t mess up. This post, however, is not about that. It is to stake a claim to some personal bragging rights.

1. Back in March, after the results of the Urban Local Body polls came out, I had predicted a victory for the Congress in the assembly elections.

2. Then, a couple of weeks back, I used the logic that people like to vote for the winner, and argued that this winner-chasing would become a self-fulfilling prophecy leading to a comfortable Congress victory.

These two predictions were on the “Resident Quant” blog that I run for the Takshashila Institution. It was a classic prediction strategy – put out your predictions in a slightly obscure place, so that you can quickly bury them in case they don’t turn out to be right, but showcase them in case you are indeed correct! After that, however, things went slightly wrong (or right?). Having seen my election coverage, Mint asked me to start writing for them.

As it happened, I didn’t venture to make further predictions before the elections, apart from building a DIY model where people could input swings in favour of or against parties and get a seat projection. Watching the exit polls on Sunday, however, compelled me to plug the exit-poll numbers into my DIY model and come up with my own prediction. I quickly wrote up a short piece.

3. As it happened, Mint decided to publish my predictions on its front page, and now I had nowhere to hide. I had taken a more extreme position than most other pollsters. While they had taken care to include, in the range they predicted for the Congress, some numbers that didn’t mean an absolute majority (so as to shield themselves in that eventuality), I found my model compelling enough to predict an outright victory for the Congress. “A comfortable majority of at least 125 seats”, I wrote.

I had a fairly stressful day today, as the counting took place. The early going was good, as the first leads went according to my predictions. Even when the BJP had more leads than the Congress, I knew those were in seats I had anyway tipped them to win, so I felt smug. Things started going bad, however, when the wins of the independents started coming in. The model I had used was unable to handle them, so I had left them out of my analysis entirely. And now I was staring at the possibility that the Congress might not even hit the magic figure of 113 (for an absolute majority), let alone reach my prediction of 125. I prepared myself to eat humble pie.

Things started turning then, however. It turned out that counting had begun late in the Hyderabad-Karnataka seats – a region that the Congress virtually swept. As I left my seat to get myself some lunch, the Congress number tipped past 113. And soon it was at 119. And then, five minutes later, it was back at 113. And so it continued to see-saw for a while, as I sat on the edge of my office chair, which I had transplanted to a spot in front of my television.

And then it ticked up again, and stayed at 119 for a while. And soon it was ticking past 120. All results have now been declared, with the Congress clocking up 121 seats. It falls short of the majority I had predicted, but it is a comfortable majority nevertheless. I know I got the BJP number horribly wrong, but so did most other pollsters, for nobody expected them to get only 20% of the popular vote. I also admit to having missed the surge in independents and “others”.

Nevertheless, I think I’ve consistently got the results of the elections broadly right, and so I can stake claim to some bragging rights. Do you think I’m being unreasonable?

Comparing Goalies in the Premier League: Shot-stopping ability versus passing ability

How does one compare the goalies of the Premier League? Based on Opta data released by Manchester City last year, I have compared the goalkeepers of the 2011-12 season on two parameters – percentage of shots blocked and success in distribution. This shows the relative successes of the goalkeepers in defence and attack respectively.

So who are the successful goalkeepers in the league? If you look at this data, you will find that Pepe Reina, Wojciech Szczesny, Petr Cech and Joe Hart form a “convex hull”. What this means is that every other goalkeeper in the Premier League is inferior to at least one of these four. So the best goalie has to be one of these.
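
To make the dominance idea concrete, here is a small sketch with made-up numbers (the actual Opta figures aren’t reproduced here) that picks out the goalkeepers nobody else beats on both counts:

```python
# Toy dominance check: a keeper is dominated if some other keeper is at least
# as good on both metrics and strictly better on at least one. All numbers
# are invented for illustration.
keepers = {
    "Reina":    {"shots_stopped": 0.70, "distribution": 0.80},
    "Szczesny": {"shots_stopped": 0.72, "distribution": 0.74},
    "Cech":     {"shots_stopped": 0.76, "distribution": 0.68},
    "Hart":     {"shots_stopped": 0.78, "distribution": 0.65},
    "Other":    {"shots_stopped": 0.69, "distribution": 0.66},
}

def dominates(a, b):
    """True if stats `a` are at least as good as `b` on both metrics and not
    identical to `b` (i.e. strictly better on at least one)."""
    return (a["shots_stopped"] >= b["shots_stopped"]
            and a["distribution"] >= b["distribution"]
            and a != b)

frontier = [name for name, stats in keepers.items()
            if not any(dominates(other, stats) for other in keepers.values())]
print(frontier)  # the non-dominated set - the "convex hull" in the post's terms
```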

 

How do goalkeepers in the premier league stack up against each other?

 

As for who the absolute best is, that will depend on the relative weights we give to shot-stopping and distribution ability. Intuitively, a 10% improvement in distribution is likely to be worth fewer goals than a 10% improvement in shot-stopping. So by that metric, it can be argued that one of Cech and Hart, who are far superior in terms of shot-stopping ability, is the best goalkeeper.

It is also interesting to note that even in Kenny Dalglish’s time as manager, Reina had a vastly superior distribution success, suggesting that it would not have been that difficult for Liverpool to adapt to Brendan Rodgers’s style of play.

PS: Click on the image to see a larger version

PS2: I’m writing this post sitting in the office room of one of my clients, who also happens to be a frequent visitor to this blog.

Levers and Pulleys

I’m writing this based on my experience as a management consultant. Apologies in advance if I end up on a global or gyaan-spouting trip.

One of the most common ways of ending up in corporate paralysis is to split up a particular target into constituent “levers” and hand over the management of each of these “levers” to a different manager. You construct an elaborate funnel based on several filters, and put a manager in charge of maximizing the efficiency of each of these filters. In the course of time, this manager’s performance will start getting measured on the effectiveness of the filter alone. The filter will get refined, and soon you will have the most perfect filters. And then you find that there is no way forward.

Beyond a point, any filter will stop improving. This is because it hits what can be described as its “natural efficiency”. After this point, the only way to increase the output beyond this particular filter will be to increase the overall input to the filter. However, increasing the overall input will mean that this input is of lower quality than the input on which the filter was perfected. Which means there will be an (at least temporary) loss of performance in this particular filter. Which means that the manager in charge of this filter will see a temporary dip in his own performance, based on the metrics he is currently measured on. It is now easy to see why it is in his interest to block any move to increase the volume of input! Apply this process at all points of the “funnel” and you see that the entire system is at a standstill.
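
A toy calculation makes the incentive problem clear (all numbers invented): adding lower-quality input raises the filter’s absolute output, but depresses the conversion rate the manager is measured on.

```python
# One filter in the funnel ("lead qualification"), measured on conversion rate.
base_leads, base_rate = 1000, 0.30     # current input volume and conversion
extra_leads, extra_rate = 500, 0.10    # marginal, lower-quality input

old_output = base_leads * base_rate                       # 300 conversions
new_output = old_output + extra_leads * extra_rate        # 350 conversions
new_rate = new_output / (base_leads + extra_leads)        # ~0.23

print(old_output, new_output)           # absolute output goes up...
print(base_rate, round(new_rate, 3))    # ...but the measured rate goes down
```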

The problem lies in the part of the process where an artificially defined construct, such as a particular efficiency ratio becomes sacrosanct and gets institutionalized as someone’s performance metric. Let me illustrate this with an example.

Business school graduates and followers of corporate brothels will be familiar with the concept of the DuPont analysis. It is basically a combination of three ratios – leverage, asset turnover and profit margin – which together give you the return on equity, which, according to the authors of the model, is the holy grail of evaluating a company.
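
A quick worked example of the decomposition, with invented figures:

```python
# Toy DuPont decomposition. All figures are made up.
net_income = 120.0
sales = 1500.0
assets = 1000.0
equity = 400.0

profit_margin = net_income / sales     # 0.08
asset_turnover = sales / assets        # 1.5
leverage = assets / equity             # 2.5 (the equity multiplier)

roe = profit_margin * asset_turnover * leverage
print(roe)                             # 0.30
print(net_income / equity)             # the same 0.30, computed directly
```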

Suppose you believe in this analysis and so put one senior manager in charge of each component of the DuPont analysis. Your finance manager is in charge of leverage, your ops manager of asset turnover, and another manager of profit margin. The firm has evolved in such a way that each of these managers is now evaluated on his respective ratio. All of them are doing great jobs and have optimized their respective silos. Now, you decide that the firm needs to expand further. How do you go about it?

You need to invest in some capital goods. So how do you pay for it? If you take a loan, interest payments will hit your profit margin, and the manager in charge of that will object. The other option is to raise equity capital, but then leverage falls and the finance manager is not happy. Let’s say you work out some way to finance the deal. In the time it takes for this asset to start “producing”, asset turnover is going to be depressed, so the ops manager is unhappy. So as long as you measure your managers on these intermediate goals, the larger objective of business expansion will get compromised.

The Trouble with Management Consulting

While I was pumping iron (I know, I know!!) at the gym on Wednesday evening, I got a call from a client seeking to confirm our meeting yesterday afternoon. “Why don’t you put together a presentation with all the insights you’ve gathered so far?”, he suggested, adding that he was planning to call a few more stakeholders to the meeting and it would be good to give them an insight into what is happening.

Most of my morning yesterday was spent putting together the presentation, and I’m not usually one who bothers that much about the finer details of a presentation. As long as the insights are in place, I’m good, I think. I had also worked late into Wednesday night trying to refine some algorithms, the results of which were to go into the presentation. In short, the client’s request for the presentation had turned the 18 hours between the phone call and the meeting topsy-turvy.

It is amazing how many people expect you to have a PowerPoint (or Keynote) presentation every time you walk into a meeting with them. For someone like me, who doesn’t like to prepare slides unless there are specific things to show, it can get rather irritating. Some presentations are necessary, of course, like the one I made last Thursday to the CEO of another client. What gets my goat is when people start expecting slides from you even at status-update meetings.

Preparing presentations is a rather time-consuming process. You need to be careful about what you present and how you present it. You need to make sure that your visualizations are well labelled and intuitive. You sometimes need to find words to fill slides that would otherwise appear empty. And if you are not me, you will need to spend time on precise and consistent formatting, dotting the i’s and crossing the t’s (I usually don’t bother about this bit, even in presentations to the CEO; as long as the content is there and presentable, I go ahead).

So when you have to make presentations to your clients regularly, and at every status update meeting, you can only imagine how much of your time goes into just preparing the presentations rather than doing real work!

The other resource drain in the consulting business is working from the client site. While it is true that you get a massive amount of work done when you are actually there, and have a much shorter turnaround time for your requests, spending all your time there can lead to extreme inefficiency and a lack of thought.

When you spend all your time at the client site, it invariably leads to more frequent status updates, and hence more presentations, and thus more time spent making presentations rather than doing real work. The real damage, though, is significantly greater. When you spend all your time at your client’s site, it is easy to get drawn into what can be called “client-servicing mode”. Since you meet the client often, you have to update him often, and you are always looking for something to show him every time you meet.

Consequently, you end up imposing on yourself a number of short deadlines, and each day, each hour, you strive simply to meet the next short deadline you’ve set for yourself. While this might discipline you in terms of keeping your work going and make sure you deliver the entire package on time, it also results in a lack of real thinking time.

Often, when you are working on a large project, you need to take a step back, look at the big picture and see where it is all going. There will be times when you realize that some time invested in simply thinking about a problem and coming up with a “global” solution is going to pay off in the long run. You will want to take some time away from the day-to-day running so that you can work on your “global” solution.

Unfortunately, a client-servicing environment doesn’t afford you this time. Due to your constant short deadlines, you will always end up on a “greedy” path, chasing the nearest local optimum. There is little chance of any kind of pathbreaking work being done in this scenario.

In my work I have taken a conscious decision not to visit my client’s office unless it is absolutely necessary. Of course, there are times when I need to expedite something and think that being there will increase my efficiency, so I spend time there. But at other times, when I’m away, here in Bangalore, the absence of immediate deadlines means that I get the time to invest in “global” thought and in developing ideas that are optimal in the long term.

The long-term productivity that emerges from spending time working off-site never ceases to amaze me!

Job Descriptions

“The best part of my job”, declared the wife this morning, “is that everyone knows what Toyota makes”. This, she said, she realized after she had tried (mostly in vain) to explain my job (past and present) to several people. Here are some sample responses she told me about:

  • “Oh, Golden Socks! Even my neighbour works there. Software, right?” (this was back when I worked for Goldman Sachs. To put this in context, about half of Goldman’s employees in Bangalore are in IT)
  • “Oh, he works for a bank? What is he? Is he a manager there?”
  • “Investment bank? What interest rate do they offer for deposits?”
  • “Oh he is a financial consultant now? Can you ask him to recommend to me a few stocks?” (on being told that I’m a freelance consultant now, after having worked at a bank)

I realize that given what I’m doing (and what I’ve done, and what I intend to do, which is the same as what I’m doing), I’ll never be in a position where the man on the street will be able to understand what I do. For the rest of my lifetime most people will never really “get” what I’m doing. I wonder if that’s a problem.

Personally, I don’t think that’s going to be much of a problem, as long as there are enough people who understand what I do for me to get adequate work and be adequately paid. But this refusal of the man on the street to even try to understand what I do is a bit irritating, I guess.

I’m going to make up a list of obscure stocks and obscure reasons to buy or short them. The next time someone asks me to recommend stocks, I think I’m going to actually give them some. And given that they mostly tend to be people I won’t care much about, I don’t think I’ll care about the quality of my predictions, either.