When Institutions Decay

A few weeks back, I wrote about “average and peak skills”. The basic idea in that blogpost is that in most jobs, the level of skill you need on most days (the “average skill”) is far far lower than the “peak skill” occasionally required.

I didn’t think about this when I wrote that blogpost, but now I realise that a lot of institutional decay can be explained simply by this gap between average and peak skills being ignored.

I was at my niece’s wedding this morning, and was talking to my wife about the nadaswara players (and more specifically about this tweet):

“Why do you even need a jalra”, she asked. And then I pointed out that the jalra guy had now started playing the nadaswara (volga). “Why do we need this entire band”, she went on, suggesting that we could potentially use a tape instead.

This is a classic case of peak and average skill. The average skill required by the nadaswara player (whether someone sitting there or just operating a tape) is to just play, play it well and play in sync with the dhol guy. And if you want to maximise for the sheer quality of the music played, then you might as well just buy a tape and play it at the venue.

However, the “peak skill” of the nadaswara player goes beyond that. He is supposed to function without instruction. He is supposed to keep an eye on what is happening at the wedding, have an idea of the rituals (given how much the rituals vary by community, this is nontrivial) and know what kind of music to play when (or not play at all). He is supposed to gauge the sense of the audience and adjust the sort of music he is playing accordingly.

And if you consider all these peak skills required, you realise that you need a live player rather than a tape. And you realise that you need someone who is fairly experienced, since this kind of judgment is likely to come more easily to an experienced player.

The problem with professions with big gaps between average and peak skills, and where peak skills are seldom called upon, is that penny-pinching managers can ignore the peak and just hire for average skill (I had mentioned this in my previous post on the topic as well).

In the short run, there is an advantage: people with average skills for the job are far cheaper than those with peak skills (and the former are less likely to suffer motivational issues as well). Over a period of time you find that these average-skilled people do rather well, and are much cheaper and much lower maintenance than peak-skilled people.

Soon you start questioning why you need the peak-skill people at all, and start replacing them with average people. The rarer the requirement of peak skill is in the job, the longer you’ll be able to go on like this. And then one day you’ll find that the job on that day required a little more nuance and skill, and your current team is wholly incapable of handling it.

You replace your live music by tapes, and find that your music has got static and boring. You replace your bank tellers with a combination of ATMs and call centres, and find it impossible to serve that one customer with an idiosyncratic request. You replace your software engineers with people who don’t have that good an idea of algorithmic theory, and one day are saddled with inefficient code.

Ignoring peak skill required while hiring is like ignoring tail risk. Because it is so improbable, you think it’s okay to ignore it. And then when it hits you it hits you hard.
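To make the tail-risk analogy concrete, here is a quick back-of-the-envelope sketch in Python (all the numbers are made up): the cheaper, average-skill team looks like a clear saving right up until you price in the rare peak day it cannot handle.

```python
# Invented numbers: annual cost of staffing for "average" vs "peak" skill,
# plus a rare peak-skill event that the average-skill team botches.
avg_team_cost = 1_000_000       # cheaper, lower-maintenance hires
peak_team_cost = 2_500_000      # expensive skill that sits "idle" most days

p_peak_event = 0.02             # chance in a given year that peak skill is needed
blowup_cost = 100_000_000       # cost of botching that one day

expected_cost_avg = avg_team_cost + p_peak_event * blowup_cost   # 3,000,000
expected_cost_peak = peak_team_cost                              # assume they cope

print(expected_cost_avg, expected_cost_peak)
```

Ignore the 2% event and the cheaper team looks like a 60% saving; price it in and it is the more expensive option in expectation.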

Maybe that’s why risk management is usually bundled into a finance person’s job – if the same person or department in charge of cutting costs is also responsible for managing risk, they should be able to make better tradeoffs.

Average skill and peak skill

One way to describe the complexity of a job is to measure the “average level of skill” and the “peak level of skill” required to do it. The more complex the job, the larger this difference. And sometimes, the frequency at which the peak level of skill is required can determine the quality of people you can expect to attract to the job.

Let us start with one extreme – the classic case of someone turning screws in a Ford factory. The design has been done so perfectly and the assembly line so optimised that the level of skill required of this worker each day is identical. All he/she (much more likely a he) has to do is to show up at the job, stand in the assembly line, and turn the specific screw in every single car (or part thereof) that passes his way.

The delta between the complexity of the average day and the “toughest day” is likely to be very low in this kind of job, given the amount of optimisation already put in place by the engineers at the factory.

Consider a maintenance engineer (let’s say at an oil pipeline) on the other hand. On most days, the complexity required of the job is very close to zero, for there is nothing much to do. The engineer just needs to show up and potter around and make a usual round of checks and all izz well.

On a day when there is an issue, however, things are completely different – the engineer now needs to identify the source of the issue, figure out how to fix it and then actually put in the fix. Each of these is an insanely complex process requiring insane skill. This maintenance engineer needs to be prepared for this kind of occasional complexity, and despite the banality of most of his days on the job, maintain the requisite skill to do the job on these peak days.

In fact, if you think of it, a lot of “knowledge” jobs, which are supposed to be quite complex, actually don’t require a very high level of skill on most days. Yet, most of these jobs tend to employ people at a far higher skill level than what is required on most days, and this is because of the level of skill required on “peak days” (however you define “peak”).

The challenge in these cases, though, is to keep these high skilled people excited and motivated enough when the job on most days requires pretty low skill. Some industries, such as oil and gas, resolve this issue by paying well and giving good “benefits” – so even an engineer who might get bored by the lack of work on most days stays on to be able to contribute in times when there is a problem.

The other way to do this is in terms of the frequency of high skill days – if you can somehow engineer your organisation such that the high skilled people have a reasonable frequency of days when high skills are required, then they might find more motivation. For example, you might create an “internal consulting” team of some kind – they are tasked with performing a high skill task across different teams in the org. Each time this particular high skill task is required, the internal consulting team is called for. This way, this team can be kept motivated and (more importantly, perhaps) other teams can be staffed at a lower average skill level (since they can get help on high peak days).

I’m reminded of my first ever real taste of professional life – an internship at an investment bank in London in 2005. That was the classic “high variance in skills” job. Having been tested on fairly extreme maths and logic before I got hired, I found that most of my days were spent just keying numbers into an Excel sheet to call a macro someone else had written to price swaps (interest rate derivatives).

And being fairly young and immature, I decided the job was not worth it for me, and did not take up the full-time offer they made me. And off I went on a rather futile “tour” to figure out what kind of job had sufficient high-skill work to keep me interested. And then left it all to start my own consultancy (where others would ONLY call me when there was work in my specialty; else I could chill).

With the benefit of hindsight (and having worked in a somewhat similar job later in life), though, I had completely missed the “skill gap” (delta between peak and average skill days) in my internship, and thus not appreciated why I had been hired for it. Also, that I spent barely two months in the internship meant I didn’t have sufficient data to know the frequency of “interesting days”.

And this is why – most of your time might be spent in writing some fairly ordinary code, but you will still be required to know how to reverse a red-black tree.

Most of your time might be spent in writing SQL queries or pulling some averages, but on the odd day you might need to know that a chi square test is the best way to test your current hypothesis.

Most of your time might be spent in managing people and making sure the metrics are alright, but on the odd day you might have to redesign the process at the facility that you are in charge of.

In most complex jobs, the average day is NOT similar to the most complex day by any means. And thus the average day is NOT representative of the job. The next time someone I’m interviewing asks me what my “average day looks like”, I’ll maybe point that person to this post!

Stereotypes and correlations

Earlier on this blog, I’ve argued in favour of stereotypes. “In the absence of further information, stereotypes give you a strong Bayesian prior”, I had written (I’m paraphrasing myself here). I had gone on to say (paraphrasing myself yet again), “however, it is important that you treat this as a weak prior and update it as and when you get new information. So in the presence of additional information, you need to let go of the stereotypes”.
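As a toy illustration of the “strong prior that you quickly update” idea (all numbers invented): start with a stereotype as the prior, observe one piece of behaviour that does not fit it, and apply Bayes’ rule.

```python
# Invented numbers: a stereotype as a prior, updated by one observation.
prior = 0.7                    # stereotype: "people like X usually have trait T"
p_obs_if_trait = 0.2           # the observed behaviour is unlikely if T holds
p_obs_if_no_trait = 0.9        # ...and quite likely if it doesn't

posterior = (p_obs_if_trait * prior) / (
    p_obs_if_trait * prior + p_obs_if_no_trait * (1 - prior)
)
print(round(posterior, 2))     # ~0.34: one contrary data point roughly halves the belief
```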

A lot of stereotyping is due to spurious correlations, often formed from a small number of training samples. My mother, for example, strongly believed that if you drink alcohol, you must be a bad person. At some point, she explained to me why she thought so – a few of her friends had fathers or husbands who drank alcohol, and they had had to endure domestic abuse.

That is just one extreme example of a stereotype built on correlation. We keep forming these correlation-based stereotypes all the time. I’m not saying the correlation is not positive – sometimes it can be extremely positive. Just that it may not have full explainability.

For example, certain ways of dressing have come to be associated with certain attitudes (black t-shirts and heavy metal, for example). So when we see someone exhibiting one side of this correlation, our minds are naturally drawn to associating them with the other side of the correlation as well (you see someone in a black heavy metal band t-shirt, and immediately assume that they must be interested in heavy metal – to take a trivial example).

And then when their further behaviour belies the correlation that you had instinctively made, your mind gets messed up.

There was this guy in my batch at IIT Madras, who used to wear a naama (vertical religious mark on forehead commonly worn by Iyengars) on his forehead a lot of the time. Unlike most other undergrads, he also preferred to wear dhotis. So you would see him in his dhoti and naama and assume he was a religious conservative. And then you would see his hand, which would usually be held up showing a prominent middle finger, and all your mental correlations would go for a toss.

Another such example that I’ve spoken about on this blog before is that of the “puritan topper” – having seen a few topper types who otherwise led austere lives, I had assumed that kind of behaviour was correlated with being a topper (in some ways I can now argue that this blog is getting a bit meta).

I find myself doing this all the time. I observe someone’s accent and make assumptions about their abilities or the lack thereof. I see someone’s dressing sense and build a whole story in my head from that single data point. I see the way someone is walking, and that supposedly tells me about their state of mind that day.

The good thing I’ve done is to internalise my last year’s blogpost – while all these single data point correlations are fine as a prior (in the absence of other information), the moment I get more information I immediately update them, and the initial stereotypes go out of the window.

The other thing I’m thinking of is – sometimes some of these random spurious correlations are so ingrained in our heads that we let them influence us. We take a certain job and decide that it is associated with a certain way of dressing and also start dressing the same way (thus playing up the stereotypes). We know the sort of clothes most people wear to a certain kind of restaurant, and also dress that way – again playing up the stereotypes.

Without realising it, maybe because of mimetic desire or a desire to fit in, we end up furthering random correlations and stereotypes. So maybe it is time to make a conscious effort to start breaking these stereotypes? But no – you won’t see me wear a suit to work any time soon.

I’ll end with another school anecdote. For whatever reason, many of the topper types in my 11th standard class would wear the school uniform sweater to school every single day, irrespective of how hot or cold it was. And then one fine (and not cold) day, yet another guy showed up in the uniform sweater. “How come you’re wearing this sweater”, I asked. He replied, “Oh, I just wanted to look more intellectual!”


Discoverability and chaos

Last weekend (4-5 Feb) I visited Blossom Book House on Church Street (the “second branch” (above Cafe Matteo), to be precise). I bought a total of six books that day, of which four I was explicitly looking for (including two of Tufte’s books). So only two books were “discovered” in the hour or so I spent there.

This weekend (11-12 Feb) I walked a little further down Church Street (both times I had parked on Brigade Road), with wife and daughter in tow, to Bookworm. The main reason for going to Bookworm this weekend was that daughter, based on the limited data points she has about both shops, declared that “Bookworm has a much better collection of Geronimo Stilton books, so I want to go there”.

This time there were no books I had intended to buy, but I still came back with half a dozen books for myself – all “discovered”. Daughter got half a dozen Geronimos. I might have spent more time there and got more books for myself, except that the daughter had finished her binge in 10 minutes and was now desperate to go home and read; and the wife got bored after some 10-20 minutes of browsing and finding one book. “This place is too chaotic”, she said.

To be fair, I’ve been to Blossom many many more times than I’ve been to Bookworm (visits to the latter are still in single digits for me). Having been there so many times, I find the Blossom layout incredibly familiar. I know that I start with the section right in front of the billing counter that has the bestsellers. Then straight down to the publisher-wise shelves. And so on and so forth.

My pattern of browsing at Blossom has got so ritualised that I know that there are specific sections of the store where I can discover new books (being a big user of a Kindle, I don’t really fancy very old books now). And so if I discover something there, great, else my browsing very quickly comes to a halt.

At Bookworm, though, I haven’t yet figured out the patterns in terms of how they place their books. Yes, I agree with my wife that it is “more random”, but in terms of discoverability, this increased randomness is a feature for me, not a bug! Not knowing what books to expect where, I’m frequently pleasantly surprised. And that leads to more purchases.

That said, the chaos means that if I go to the bookstore with a list of things to buy, the likelihood of finding them will be very very low (that said, both shops have incredibly helpful shopkeepers who will find you any book that you want and which is in stock at the store).

Now I’m thinking about this in the context of e-commerce. If randomness is what drives discoverability, maybe one bug of e-commerce is that it is too organised. You search for something specific, and you get that. You search for something vague, and the cost of going through all the results to find something you like is very high.

As for my books, my first task is to finish most of the books I got these weekends. And I’ll continue to play it random, and patronise both these shops.

Darwin Nunez and missed chances

There is one “fact” I’m rather proud of – it is highly likely (though there is absolutely no way to verify it) that in CAT 2003-4 (scheduled for 2003; then the paper got leaked and the exam was held in Feb 2004), among all those who actually joined the IIMs that year, I had the largest number of wrong answers.

By my calculations after the exam (yeah I remember these things) I had got 20 answers wrong (in a 150 question paper). Most of my friends had their wrong counts in the single digits. That I did rather well in the exam despite getting so many answers wrong was down to one thing – I got a very large number of answers right.

Most readers of this blog will know that I can be a bit of a narcissist. So when I see or read something, I immediately relate it to my own life. Recently I was watching this video on striker Darwin Nunez, and his struggles to settle into the English Premier League.

“Nobody has missed more clear chances this season than Darwin Nunez”, begins JJ Bull in this otherwise nice analysis. Somewhere in the middle of this video, he slips in that Nunez has missed so many chances because he has created so many more of them in the first place – by being in the right place at the right time.

Long ago when I used to be a regular quizzer (nowadays I’m rather irregular), in finals I wouldn’t get stressed if our team missed a lot of questions (either with other teams answering before us, or getting something narrowly wrong). That we came so close to getting the points, I would reason, meant that we had our processes right in the first place, and sooner or later we would start getting those points.

In general I like Nunez. Maybe because he’s rather unpredictable (“Chaos”, as JJ Bull calls him in the above video), I identify with him more than with some of the more predictable characters in the team (it’s another matter that this whole season has been a disaster for Liverpool – I knew it on the opening day when Virgil van Dijk gave away a clumsy penalty to Fulham). He is clumsy, misses seemingly easy chances, but creates some impossible stuff out of nothing (in that sense, he is very similar to Mo Salah, so I don’t know how they work out together as a portfolio for Liverpool. That said, I love watching them play together).

In the world of finance, losing money is seen as a positive bullet point. If you have lost more money, it is a bigger status symbol. In most cases, that you lost so much money means that your bank had trusted you with that much money in the first place, and so there must be something right about you.

You see this in the startup world. Someone’s startup folds. Some get acquihired. And then a few months later, you find that they are back in the market and investors are showering them with funds. One thing is that investors trust that other investors had trusted these founders with much more money in the past. The other, of course, is the hope that this time they would have learnt from the mistakes.

Fundamentally, though, the connecting thread running across all this is about how to evaluate risk, and luck. Conditional on your bank trusting you with a large trading account, one bad trading loss is more likely to be bad luck than your incompetence. And so other banks quickly hire you and trust you with their money.

That you have missed 15 big chances in half a season means that you have managed to create so many more chances (as part of a struggling team). And that actually makes you a good footballer (though vanilla pundits don’t see it that way).

So trust the process. And keep at it. As long as you are in the right place at the right time often enough, you will cash in on average.

Dhoni and Japan

Back in MS Dhoni’s heyday, CSK fans would rave about a strategy of his that they called “taking it deep”. The idea was that while chasing a target, Dhoni would initially bat steadily, getting sort of close but letting the required run rate climb. And then, when it seemed to be getting out of hand, he would start belting, taking the bowlers by surprise and his team to victory.

This happened often enough to be recognised by fans as a consistent strategy. Initially it didn’t make sense to me – why would he purposely decrease the average chances of his team’s victory just so he could take them to a heroic chase?

But then, thinking about it, the strategy makes sense – he would never do this in a comfortable chase (where the chase was “in the money”). This would happen only in steep (out of the money) chases. And his idea of “taking it deep” was essentially a way of increasing volatility.

Everyone knows that when your option is out of the money, volatility is good for you. Which means an increase in volatility will increase the value of the option.

And that is exactly what Dhoni would do. Keep wickets in hand and let the required rate increase, which would basically increase volatility. And then rely on “mental strength” and “functioning under pressure” to win. It didn’t always succeed, of course (and that it didn’t always fail meant Dhoni wouldn’t come off badly when it did fail). However, it was a very good gamble.
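Here is a rough Monte Carlo sketch of the “volatility helps when you are out of the money” point, with made-up scoring distributions and wickets ignored: two batsmen score the same 1.5 runs per ball on average, but against a required rate of 2 per ball only the high-variance one has a realistic chance.

```python
import random

random.seed(0)

def chase_success_prob(per_ball_outcomes, balls=30, target=60, trials=50_000):
    """Probability of scoring at least `target` off `balls` deliveries,
    with each delivery drawn i.i.d. from `per_ball_outcomes`
    (a crude model: wickets and bowling changes are ignored)."""
    wins = 0
    for _ in range(trials):
        runs = sum(random.choice(per_ball_outcomes) for _ in range(balls))
        if runs >= target:
            wins += 1
    return wins / trials

steady  = [1, 2]          # mean 1.5 runs per ball, low variance
swinger = [0, 0, 0, 6]    # mean 1.5 runs per ball, high variance

print(chase_success_prob(steady))    # essentially zero
print(chase_success_prob(swinger))   # roughly 0.2
```

Same expected score, wildly different chances of actually getting over the line – which is the whole point of taking it deep.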

We see this kind of a gamble often in chess as well. When a player has a slightly inferior position, he/she decides to increase chances by “mixing it up a bit”. Usually that involves a piece or an exchange sacrifice, in the hope of complicating the position, or creating an imbalance. This, once again, increases volatility, which means increases the chances for the player with the slightly inferior position.

And in the ongoing World Cup, we have seen Japan follow this kind of strategy in football as well. It worked well in games against Germany and Spain, which were a priori better teams than Japan.

In both games, Japan started with a conservative lineup, hoping to keep it tight in the first half and go into half time either level or only one goal behind. And then at half time, they would bring on a couple of fast and tricky players – Ritsu Doan and Kaoru Mitoma. Basically increasing the volatility against an already tired opposition.

And then these high-volatility players would do their bit, and as it happened in both games, Japan came back from 0-1 at half time to win 2-1. Basically, having “taken the game deep”, they would go helter-skelter (I consciously avoided saying “hara kiri” here, since it wasn’t really suicidal), and hit the opposition quickly, and on the break.

Surprisingly, they didn’t follow the same strategy against Croatia, in the pre-quarterfinal, where Doan started the game, and Mitoma came on only in the 64th minute. Maybe they reasoned that Croatia weren’t that much better than them, and so the option wasn’t out of the money enough to increase volatility through the game. As it happened, the game went to penalties (basically deeper than Japan’s usual strategy) where Croatia prevailed.

The difference between Dhoni and Japan is that in Japan’s case, the players who increase the volatility and those who then take advantage are different. In Dhoni’s case, he performs both functions – he first bats steadily to increase vol, and then goes bonkers himself!

Hot hands in safaris

We entered Serengeti around 12:30 pm on Saturday, having stopped briefly at the entrance gate to have lunch packed for us by our hotel in Karatu. Around 1 pm, our guide asked us to put the roof up, so we could stand and get a 360 degree view. “This is the cheetah region”, he told us.

For the next hour or so we just kept going round and round. We went off the main path towards some rocks. Some other jeeps had done the same. None of us had any luck.

By 2 pm we had seen nothing. Absolutely nothing. For a place like Serengeti, that takes some talent, given the overall density of animals there. We hadn’t even seen a zebra, or a wildebeest. Maybe a few gazelles (I could never figure out how to tell Thomson’s and Grant’s apart, despite seeing tonnes of both through the trip). “This is not even the level of what we saw in Tarangire yesterday”, we were thinking.

And then things started to happen. First there was a herd of zebras. On Friday we had missed an opportunity to take a video of a zebra crossing the road (literally a “zebra crossing”, get it?). And now we had a whole herd of zebras crossing the road in front of us. This time we didn’t miss the opportunity (though there was no Spice Telecom).

Zebra crossing in Serengeti

And then we saw a herd of buffaloes. And then a bunch of hippos in a pool. We asked our guide to take us closer to them, and he said “oh don’t worry about hippos. Tomorrow I’ll take you to a hippo pool with over fifty hippos”. And sped off in the opposite direction. There was a pack of lions fast asleep under a tree, with the carcass of a wildebeest they had just eaten next to them (I posted that photo the other day).

This was around 3 pm. By 4 pm, we had seen a large herd of wildebeest and zebra on their great annual migration. And then seen a cheetah sitting on a termite hill, also watching the migration. And yet another pool with some 50 hippos lazing in it. It was absolutely surreal.

It was as if we had had a “hot hand” for an hour, with tremendous sightings after a rather barren first half of the afternoon. We were to have another similar “hot hand” on Monday morning, on our way out from the park. Again in the course of half an hour (when we were driving rather fast, with the roof down, trying to exit the park ASAP) we saw a massive herd of elephants, a mother and baby cheetah, a pack of lions and a single massive male lion right next to the road.

If you are the sort who sees lots of patterns, it is possibly easy to conclude that “hot hands” are a thing in wildlife. That when you have one good sighting, it is likely to be followed by a few other good sightings. However, based on a total of four days of safaris on this trip, I strongly believe that here at least hot hands are a fallacy.

But first a digression. The issue of “hot hands” has been a long-standing one in basketball. First, some statisticians found that the hot hand does not really exist – that NBA (or was it NCAA?) players who have made a few baskets in succession are no more likely to score off their next shot, and that the apparent streaks were simply a statistical oddity. And then (if I remember correctly) another group of statisticians found a subtle bias in that analysis, and showed that with more careful analysis the hot hand actually exists. This was rationalised as “when someone has scored a few consecutive baskets, their confidence is higher, which improves the chances of scoring off the next attempt”.
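A minimal simulation of the statistical subtlety involved (with a fair coin standing in for a 50% shooter): for each finite sequence of shots, take the proportion of hits among shots that immediately follow three hits, and average that proportion across sequences. Even with no hot hand at all, the average comes out below 50% – roughly the bias that the later re-analysis corrected for.

```python
import random

random.seed(1)

def prop_heads_after_streak(seq, k=3):
    """Proportion of heads among flips that immediately follow k heads.
    Returns None if the sequence has no such flips."""
    hits = total = 0
    for i in range(k, len(seq)):
        if all(seq[i - j] == 1 for j in range(1, k + 1)):
            total += 1
            hits += seq[i]
    return hits / total if total else None

props = []
for _ in range(20_000):                                 # 20,000 "shooters"
    seq = [random.randint(0, 1) for _ in range(100)]    # 100 fair "shots" each
    p = prop_heads_after_streak(seq)
    if p is not None:
        props.append(p)

print(sum(props) / len(props))   # comes out around 0.46, not 0.50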

So if a hot hand exists, it is more to do with the competence and confidence of the person who is executing the activity.

In wildlife, though, it doesn’t work that way. While it is up to us (and our guides) to spot the animals, the fact that you have spotted something doesn’t make it any more likely that you will spot something else (in fact, false positives in spotting can go up when you are feeling overconfident). Possibly the only correlation between consecutive spottings is that the guides of the various jeeps are in constant conversation on the radio, and news of spottings gets shared. So if a bunch of jeeps have independently spotted stuff close to each other, all the jeeps will get to see all these stuffs (no pun intended), getting a “hot hand”.

That apart, there is no statistical reason in a safari to have a “hot hand”. 

Rather, what is more likely is selection bias. When we see a bunch of spottings close to one another, we think it is because we have a “hot hand”. However, when we are seeing animals only sporadically (like we did on Sunday, not counting the zillions of wildebeest and zebra migrating), we don’t really register that we are “not having a hot hand”.

It is as if you are playing a game of coin tosses where you register all the heads but simply ignore the tails, and then theorise about the clumping of heads. When a low-probability event happens (multiple sightings in an hour, for example), it registers better in our heads, and we tend to overrepresent such events in our memories. The higher-probability (or “lower information content”) events we simply ignore! And so we assume that events are more impactful on average than they actually are.

Ok, now I’m off on a ramble (this took a while to write – including making that gif, among other things) – but Nassim Taleb talks about this in one of his early Incerto books (FBR or Black Swan – that if you only go by newspaper reports, you are likely to think that cities with lower average crime are more violent, since more crimes get reported there).

And going off on yet another ramble – hot hands can be a thing where the element of luck is relatively small. Wildlife spotting has a huge amount of luck involved, and so even with the best of skills there is only so much of a hot hand you can produce.

So yeah – there is no hot hand in wildlife safaris.

Random Friday night thoughts about myself

I’m flamboyant. That’s who I am. That’s my style. There’s no two ways about it. I can’t be conservative or risk-averse. That’s not who I am.

And because being flamboyant is who I am, I necessarily take risk in everything I do. This means that occasionally the risks don’t pay off – if they paid off all the time they wouldn’t be risks.

In the past I’ve taken the wrong kind of lessons from risks not paying off. That I should not have taken those risks. That I should have taken more calculated risks. That I should have hedged better.

Irrespective of how calculated your risks are, they will not pay off some of the time. The calculation is basically to put a better handle on this probability, and the impact of the risk not paying off. Hedging achieves the same thing.

For example, my motorcycle trip to Rajasthan in 2012 was a calculated risk, hedged by full body riding gear. I had a pretty bad accident – the motorcycle was travelling at 85 kmph when I hit a cow and got thrown off the bike, but the gear meant I escaped with just a hairline fracture in my last metacarpal – I rode on and finished the trip.

Back to real life – what happened was that between approximately 2006 and 2009, a number of risks didn’t pay off. Nowadays I like to think of it as a coincidence. Or maybe it was a “hot hand” of the wrong kind – after the initial set of failed risks, I became less confident and less calculating about my risks, and more of them did not pay off.

This is my view now, of course, looking back. Back then I thought I was finished. I started beating myself up over every single decision that turned out, in hindsight, to be bad. And that made me take worse decisions.

A year of medication (2012), which included the aforementioned motorcycle trip, a new career and a lot of time off, helped me get rid of some of these logical fallacies. I started accepting that risks sometimes don’t pay off. And the solution to that is NOT to take less risk.

However, that thought (that every single risk that didn’t pay off was a bad decision on my part) has been permanently seeded in my brain – whether I like it or not (I don’t like it). And so whenever something goes bad – basically a risk I consciously took not paying off – I instinctively look for a bad decision that I personally made to lay the blame on. And that, putting it simply, never makes me happy. And this is something I need to overcome.

As I said at the beginning of the post, cutting risk simply isn’t my style. And as I internalise that this is how I inherently am, I need to accept that some of my decisions will inherently turn out to have bad outcomes. And in a way, that is part of my strategy.

This blogpost is essentially a note to myself – to document this realisation on my risk profile and to make sure that I have something to refer to the next time a risky decision I take doesn’t pay off (well that happens every single day – this is for the big ones).

The next time I shoot off my mouth without thinking, it’s part of my strategy.

The next time I resist the urge to contain myself and blurt out what I’m thinking, it’s part of my strategy.

The next time I unwittingly harm myself because of a bad decision I make, it’s just part of my strategy.

To close – there was a time when Inzamam-ul-Haq took someone’s advice and lost weight and found that he just couldn’t bat. In a weird way his belly was positively correlated with his batting. Similarly the odd bad decision I take is positively correlated with how I operate naturally.

And I need to learn to live with it.

Ronald Coase, Scott Adams and Intrapersonal Vertical Integration

I have a new HR policy. I call it “intrapersonal vertical integration”. Read on.

I

Back in the 1930s, economist Ronald Coase wrote an article on “The Nature of the Firm” (the link is to Wikipedia, not to the actual paper). It was a description of why people form companies and partnerships and so on, rather than all being gig workers negotiating each piece of work.

The key concept here was one of transaction costs – if everyone were to be a freelancer, like I was between 2012 and 2020 (both included), then for every little piece of work there would need to be a piece of negotiation.

“Can you build this dashboard for me?”
“Yes. That would be $10000”
“No, I’ll only pay $2000”
“9000”
“3000 final”
“get lost”

During my long period of freelancing, I internalised this, and came up with a “minimum order value” – a reasonable amount which could account for transaction costs like the above (just as I write this, I’m changing videos on YouTube for my wife, and she’s asking me to put on 30-second videos. And I’m refusing, saying “too much transaction cost. I need my hands for something else (blogging)”).

This worked out fine for the projects that I actually got, but transaction costs meant that a lot of the smaller deals never worked out. I lost out on potential revenue from those, and my potential clients lost out on work getting done.

So, instead, if I were to be part of a company, like I am now, transaction costs are far lower. Yes, we might negotiate on exact specifications, or deadlines, but price was a single negotiation at the time I joined the firm. And so a lot more work gets done – better for me and better for the company. And this is why companies exist. It might sound obvious, but Coase put it in a nice and elegant theoretical framework.

II

I’ve written about this several times on my blog – Scott Adams’s theory that there are two ways in which you can be really successful.

1. Become the best at one specific thing.
2. Become very good (top 25%) at two or more things.

This is advice that I have taken seriously, and I’ve followed the second path. Being the best at one specific thing is too hard, and too random as well – “the best” is a sort of a zero sum game. Instead, being very good in a few things is easier to do, and as I’d said in one of my other posts on this, being very good in uncorrelated things is a clear winner.

I will leave this here and come back later on in the post, like how Dasharatha gave some part of the mango to Sumitra (second in line), and then decided to come back to her later on in the distribution.

III

I came up with this random theory the other day on the purpose of product managers. This theory is really random and ill-formed, and I haven’t bothered discussing it with any real product managers.

The need for product managers comes from software engineers’ insistence on specific “system requirement specifications”. 

I learnt software engineering in a formal course back in 2002. Back then, the default workflow for software engineering was the so-called “waterfall model”. It was a linear sequential thing where the first part of the process goes in clearly defining system requirement specifications. Then there would be an unambiguous “design document”. And only then would coding begin.

In that same decade (2000s), “agile” programming became a thing. This meant fast iterations and continuous improvements. Software would be built layer by layer. However, software engineers had traditionally worked only with precise specifications, and “ambiguous business rules” would throw them off. And so the role of the product manager was created – who would manage the software product in a way that they would interface with ambiguous business on one side, and precise software engineers on the other.

Their role was to turn ambiguity to certainty, and get work done. They would never be hands on – instead their job would be to give precise instructions to people who would be hands on.

I have never worked as either a software engineer or a product manager, but I don’t think I’d enjoy either job. On the one hand, I don’t like being given precise instructions, and instead prefer ambiguity. On the other, if I were to give precise instructions, I would rather use C++ or Python to give those instructions than English or Kannada. In other words, if I were to be precise in my communication, I would rather talk to a computer than to another human.

It possibly has to do with my work history. I spent a little over two years as a quant at a top tier investment bank. As part of the job, I was asked to write production code. I used to protest, saying writing C++ code wasn’t the best use of my time or effort. “But think about the effort involved in explaining your model to someone else”, the higher ups in the company would tell me. “Wouldn’t it be far easier to just code it yourself?”

IV

Coase reasoned that transaction costs are the reason we need a firm. If people get together in the form of a firm, they don’t need frequent negotiations (and the transaction costs that come with them), so they can coordinate much better and get a lot more work done, with more value accruing to every party involved.

However, I don’t think Coase went far enough. Just putting people in one firm only eliminates one level of transaction costs – of negotiating conditions and prices. Even when you are in the same firm, coordinating with colleagues implies communication, and unless precise, the communication links can end up being the weak links in how much the firm can achieve.

Henry Ford’s genius was to recognise the assembly line (a literal conveyor belt) as a precise form of communication. The workers in his factories were pretty much automatons, doing their precise job, in the knowledge that everyone else was doing their own. The assembly line made communication simpler, and that allowed greater specialisation to unlock value in the firm – to the extent that each worker could get at least five dollars a day and the firm would still be profitable.

It doesn’t work so neatly in what can be classified as “knowledge industries”. Like with the product manager and the software engineer, there is a communication layer which, if it fails, can bring down the entire process.

And there are other transaction costs implied in this communication – let’s say you are building stuff that I need to build on to make the final product. Every time I think you need to build something slightly different, it involves a process of communication and negotiation. It requires the product manager to write a new section in the document. And when working on complex problems, this can increase the complexity manifold.

So we are back to Scott Adams (finally). Building on what I’d said before – you need to be “very good” at two or more things, and it helps if these things are uncorrelated (in terms of being able to add unique value). However, it is EVEN MORE USEFUL if the supposedly uncorrelated skills you have can be stacked, in a form of vertical integration.

In other words, if you are good at several things that are uncorrelated, where the output of one thing can be the input into another, you are a clear winner.

Adams, for example, is good at understanding business, is funny, and can draw. The combination of the first two means that he can write funny business stories, and that he can also draw means he has created a masterpiece in the form of Dilbert.

Don’t get me wrong – you can have a genius storyteller and a genius artist come together to make great art (Goscinny and Uderzo, for example). However, it takes a lot of luck for a Goscinny to find his Uderzo, or vice versa. I haven’t read much Asterix, but what I’m told by friends is that the quality dropped after Uderzo was forced to be his own Goscinny (after the latter died).

At a completely different level – I have possibly uncorrelated skills in understanding business and getting insight out of data. One dovetails into the other and so I THINK I’m doing well in business intelligence. If I were only good at business, and needed to keep asking someone to churn the data on each iteration, my output would be far far slower and poorer.

So I extend this idea into “intrapersonal vertical integration”. If you are good at two or more things, and one can lead into another, you have a truly special set of skills and can be really successful.

Putting it another way – in knowledge jobs, communication can be so expensive that if you can vertically integrate yourself across multiple jobs, you can add significant value even if you are not the best at each of the individual skills.

Finish

In knowledge work, communication is the weakest link, so the fewer levels of communication you have, the better and faster you can do your job. Even if you get the best for every level in your chain, the strength (or lack of it) of communication between them can mean that they produce suboptimal output.

Instead, if you can get people who are just good at two or more things in the chain (rather than being the best at any one), you can add significantly more value.

Putting it another way, yes, I’m batting for bits-and-pieces players rather than genuine batsmen or bowlers. However, the difference between what I’m saying and cricket is that in cricket batting and bowling are not vertically integrated. If they were, bits and pieces players would work far far better.

The Downside

I’ve written about this before. While being good at uncorrelated things that dovetail into one another can be a great winning strategy, liquidity can be your enemy. That you are unique means that there aren’t too many like you. And so organisations may not want to bet too much on you – since you will be hard to replace. And decide to take the slack in communication and get specialists for each position instead.

PS: 

I have written a book on transaction costs and liquidity. As it happens, today it is on display at the Bangalore Literature Festival.

Cross posted on LinkedIn

Risk and data

A while back a group of <a large number of scientists> wrote an open letter to the Prime Minister demanding greater data sharing with them. I must say that the letter is written in academic language and the effort to understand it was too much, but in the interest of fairness I’ll put a screenshot that was posted on twitter here.

I don’t know about this clinical and academic data. However, the holding back of one kind of data, in my opinion, has massively (and negatively) impacted people’s mental health and risk calculations.

This is data on mortality and risk. The kinds of questions that I expected government data to answer were:

  1. If I get covid-19 (now in the second wave), what is the likelihood that I will die?
  2. If my oxygen level drops to 90 (>= 94 is “normal”), what is the likelihood that I will die?
  3. If I go to hospital, what is the likelihood I will die?
  4. If I go to ICU what is the likelihood I will die?
  5. What is the likelihood of a teenager who contracts the virus (and is otherwise in good health) dying of the virus?

And so on. Simple risk-based questions whose answers can help people calibrate their lives and take calculated enough risks to get on with it without putting themselves and their loved ones at risk.

Instead, what we find from official sources is nothing but aggregates. Total numbers of people infected, dead, recovered and so on. And it is impossible to infer answers to the “risk questions” based on that.

And who fills in the gaps? The media, of course.

I must have discussed “spectacularness bias” on this blog several times before. Basically, the idea is that for something to be news, it needs to carry information. And an event carries information if it occurs despite having a low prior probability (or does not occur despite a high prior probability). As I put it in my lectures, “‘dog bites man’ is not news. ‘man bites dog’ is news”.
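For what it’s worth, this maps directly onto the information-theoretic notion of surprisal, -log2(p): the lower the prior probability of an event, the more bits of information its occurrence carries. A toy calculation with invented probabilities:

```python
from math import log2

# Invented priors for the two "headlines"
p_dog_bites_man = 0.1      # fairly common, low information
p_man_bites_dog = 0.0001   # very rare, high information

print(log2(1 / p_dog_bites_man))   # ~3.3 bits
print(log2(1 / p_man_bites_dog))   # ~13.3 bits -- which is what makes it news
```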

So when we rely on media reports to fill in our gaps in our risk systems, we end up taking all the wrong kinds of lessons. We learn that one seventeen year old boy died of covid despite being otherwise healthy. In the absence of other information, we assume that teenagers are under grave risk from the disease.

Similarly, cases of children looking for ICU beds get forwarded far more than cases of old people looking for ICU beds. In the absence of risk information, we assume that the situation must be grave among children.

Old people dying from covid goes unreported (unless the person was famous in some way or other), since the information content in that is low. Young people dying gets amplified.

Based on all the reports that we see in the papers and other media (including social media), we get an entirely warped sense of what the risk profile of the disease is. And panic. When we panic, our health gets worse.

Oh, and I haven’t even spoken about bad risk reporting in the media. I saw a report in the Times of India this morning (unable to find a link to it) that said that “young are facing higher mortality in this wave”. Basically the story said that people under 60 account for a far higher proportion of deaths in the second wave than in the first.

Now there are two problems with that story.

  1. A large proportion of over 60s in India are vaccinated, so mortality is likely to be lower in this cohort.
  2. What we need is the likelihood of a person under 60 dying upon contracting covid, NOT the proportion of deaths accounted for by under-60s. This is the classic “averaging along the wrong axis” that they unleash upon you in the first test of any statistics course (a toy illustration follows this list).
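A toy worked example (all numbers invented) of how the share of deaths among under-60s can rise sharply even when an individual under-60’s risk has not changed at all – vaccinating the over-60s alone is enough to produce the headline:

```python
# Invented numbers: 1,000,000 infections per wave, 80% of them under 60.
# An under-60's fatality rate is identical in both waves; only the
# over-60 rate falls (vaccination) in the second wave.
infections = 1_000_000
n_under, n_over = 0.8 * infections, 0.2 * infections

ifr_under = 0.0005                           # unchanged across waves
ifr_over = {"wave 1": 0.02, "wave 2": 0.005}

for wave, ifr_o in ifr_over.items():
    deaths_under = n_under * ifr_under
    deaths_over = n_over * ifr_o
    share_under = deaths_under / (deaths_under + deaths_over)
    print(f"{wave}: under-60 share of deaths = {share_under:.0%}, "
          f"under-60 individual risk = {ifr_under:.2%}")
# wave 1: ~9% of deaths; wave 2: ~29% of deaths -- same individual risk.
```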

Anyway, so what kind of data would have helped?

  1. Age profile of people testing positive, preferably state wise (any finer will be noise)
  2. Age profile of people dying of covid-19, again state wise

I’m sure the government collects this data. Just that they’re not used to releasing this kind of data, so we’re not getting it. And so we have to rely on the media and its spectacularness bias to get our information. And so we panic.

PS: By no means am I stating that covid-19 is not a risk. All I am stating is that the information we have been given doesn’t help us make good risk decisions.