## Random Friday night thoughts about myself

I’m flamboyant. That’s who I am. That’s my style. There’s no two ways about it. I can’t be conservative or risk-averse. That’s not who I am.

And because being flamboyant is who I am, I necessarily take risk in everything I do. This means that occasionally the risks don’t pay off – if they pay off all the time they’re not a risk.

In the past I’ve taken the wrong kind of lessons from risks not paying off. That I should not have taken those risks. That I should have taken more calculated risks. That I should have hedged better.

Irrespective of how calculated your risks are, they will not pay off some of the time. The calculation is basically to put a better handle on this probability, and the impact of the risk not paying off. Hedging achieves the same thing.

For example, my motorcycle trip to Rajasthan in 2012 was a calculated risk, hedged by full body riding gear. I had a pretty bad accident – the motorcycle was travelling at 85 kmph when I hit a cow and got thrown off the bike, but the gear meant I escaped with just a hairline fracture in my last metacarpal – I rode on and finished the trip.

Back to real life – what happened was that between approx 2006-09 a number of risks didn’t pay off. Nowadays I like to think of it as a coincidence. Or maybe it was a “hot hand” of the wrong kind – after the initial set of failed risks, I became less confident and less calculating about my risks, and more of them did not pay off.

This is my view now, of course, looking back. Back then I thought I was finished. I started beating myself up over every single (what turned out to be, in hindsight) bad decision. And that made me take worse decisions.

A year of medication (2012), which included the aforementioned motorcycle trip, a new career and a lot of time off, helped me get rid of some of these logical fallacies. I started accepting that risks sometimes don’t pay off. And the solution to that is NOT to take less risk.

However, that thought (that every single risk that didn’t pay off was a bad decision on my part) has been permanently seeded in my brain – whether I like it or not (I don’t like it). And so whenever something goes bad – basically a risk I consciously took not paying off – I instinctively look for a bad decision that I personally made to lay the blame on. And that, putting it simply, never makes me happy. And this is something I need to overcome.

As I said at the beginning of the post, cutting risk simply isn’t my style. And as I internalise that this is how I inherently am, I need to accept that some of my decisions will inherently turn out to have bad outcomes. And in a way, that is part of my strategy.

This blogpost is essentially a note to myself – to document this realisation on my risk profile and to make sure that I have something to refer to the next time a risky decision I take doesn’t pay off (well that happens every single day – this is for the big ones).

The next time I shoot off my mouth without thinking it’s part of my strategy.

The next time I resist the urge to contain myself and blurt out what I’m thinking it’s part of my strategy.

The next time I unwittingly harm myself because of a bad decision I make it’s just part of my strategy.

To close – there was a time when Inzamam-ul-Haq took someone’s advice and lost weight and found that he just couldn’t bat. In a weird way his belly was positively correlated with his batting. Similarly the odd bad decision I take is positively correlated with how I operate naturally.

And I need to learn to live with it.

## Ronald Coase, Scott Adams and Intrapersonal Vertical Integration

I have a new HR policy. I call it “intrapersonal vertical integration”. Read on.

I

Back in the 1930s, economist Ronald Coase wrote an article on “the nature of the firm” (the link is to Wikipedia, not to the actual paper). It was a description of why people form companies and partnerships and so on, rather than all being gig workers negotiating each piece of work.

The key concept here was one of transaction costs – if everyone were to be a freelancer, like I was between 2012 and 2020 (both included), then for every little piece of work there would need to be a piece of negotiation.

“Can you build this dashboard for me?”
“Yes. That would be $10,000.”
“No, I’ll only pay $2,000.”
“$9,000.”
“$3,000, final.”
“Get lost.”

During my long period of freelancing, I internalised this, and came up with a “minimum order value” – a reasonable amount which could account for transaction costs like the above (just as I write this, I’m changing videos on Youtube for my wife, and she’s asking me to put on 30-second videos. And I’m refusing, saying “too much transaction cost. I need my hands for something else (blogging)”).

This worked out fine for the projects that I actually got, but transaction costs meant that a lot of the smaller deals never worked out. I lost out on potential revenue from those, and my potential clients lost out on work getting done.

So, instead, if I were to be part of a company, like I am now, transaction costs are far lower. Yes, we might negotiate on exact specifications, or deadlines, but price was a single negotiation at the time I joined the firm. And so a lot more work gets done – better for me and better for the company. And this is why companies exist. It might sound obvious, but Coase put it in a nice and elegant theoretical framework.

II

I’ve written about this several times on my blog – Scott Adams’s theory that there are two ways in which you can be really successful.

1. Become the best at one specific thing.
2. Become very good (top 25%) at two or more things.

This is advice that I have taken seriously, and I’ve followed the second path. Being the best at one specific thing is too hard, and too random as well – “the best” is a sort of a zero sum game. Instead, being very good in a few things is easier to do, and as I’d said in one of my other posts on this, being very good in uncorrelated things is a clear winner.

I will leave this here and come back later on in the post, like how Dasharatha gave some part of the mango to Sumitra (second in line), and then decided to come back to her later on in the distribution.

III

I came up with this random theory the other day on the purpose of product managers. This theory is really random and ill-formed, and I haven’t bothered discussing it with any real product managers.

The need for product managers comes from software engineers’ insistence on specific “system requirement specifications”.

I learnt software engineering in a formal course back in 2002. Back then, the default workflow for software engineering was the so-called “waterfall model”. It was a linear sequential thing where the first part of the process goes in clearly defining system requirement specifications. Then there would be an unambiguous “design document”. And only then would coding begin.

In that same decade (2000s), “agile” programming became a thing. This meant fast iterations and continuous improvements. Software would be built layer by layer. However, software engineers had traditionally worked only with precise specifications, and “ambiguous business rules” would throw them off. And so the role of the product manager was created – who would manage the software product in a way that they would interface with ambiguous business on one side, and precise software engineers on the other.

Their role was to turn ambiguity to certainty, and get work done. They would never be hands on – instead their job would be to give precise instructions to people who would be hands on.

I have never worked as either a software engineer or a product manager, but I don’t think I’d enjoy either job. On the one hand, I don’t like being given precise instructions, and instead prefer ambiguity. On the other, if I were to give precise instructions, I would rather use C++ or Python to give those instructions than English or Kannada. In other words, if I were to be precise in my communication, I would rather talk to a computer than to another human.

It possibly has to do with my work history. I spent a little over two years as a quant at a top tier investment bank. As part of the job, I was asked to write production code. I used to protest, saying writing C++ code wasn’t the best use of my time or effort. “But think about the effort involved in explaining your model to someone else”, the higher ups in the company would tell me. “Wouldn’t it be far easier to just code it yourself?”

IV

Coase reasoned that transaction costs are the reason why we need a firm. If people get together in the form of a firm, they avoid frequent negotiations and their attendant transaction costs, can coordinate much better, and get a lot more work done, with more value accruing to every party involved.

However, I don’t think Coase went far enough. Just putting people in one firm only eliminates one level of transaction costs – of negotiating conditions and prices. Even when you are in the same firm, coordinating with colleagues implies communication, and unless precise, the communication links can end up being the weak links in how much the firm can achieve.

Henry Ford’s genius was to recognise the assembly line (a literal conveyor belt) as a precise form of communication. The workers in his factories were pretty much automatons, doing their precise job, in the knowledge that everyone else was doing their own. The assembly line made communication simpler, and that allowed greater specialisation to unlock value in the firm – to the extent that each worker could get at least five dollars a day and the firm would still be profitable.

It doesn’t work so neatly in what can be classified as “knowledge industries”. Like with the product manager and the software engineer, there is a communication layer which, if it fails, can bring down the entire process.

And there are other transaction costs implied in this communication – let’s say you are building stuff that I need to build on to make the final product. Every time I think you need to build something slightly different, it involves a process of communication and negotiation. It requires the product manager to write a new section in the document. And when working on complex problems, this can increase the complexity manifold.

So we are back to Scott Adams (finally). Building on what I’d said before – you need to be “very good” at two or more things, and it helps if these things are uncorrelated (in terms of being able to add unique value). However, it is EVEN MORE USEFUL if the supposedly uncorrelated skills you have can be stacked, in a form of vertical integration.

In other words, if you are good at several things that are uncorrelated, where the output of one thing can be the input into another, you are a clear winner.

Adams, for example, is good at understanding business, he is funny, and he can draw. The combination of the first two means that he can write funny business stories, and the fact that he can also draw means he could create a masterpiece in the form of Dilbert.

Don’t get me wrong – you can have a genius storyteller and a genius artist come together to make great art (Goscinny and Uderzo, for example). However, it takes a lot of luck for a Goscinny to find his Uderzo, or vice versa. I haven’t read much Asterix but what I’m told by friends is that the quality dropped after Uderzo was forced to be his own Goscinny (after the latter died).

At a completely different level – I have possibly uncorrelated skills in understanding business and getting insight out of data. One dovetails into the other and so I THINK I’m doing well in business intelligence. If I were only good at business, and needed to keep asking someone to churn the data on each iteration, my output would be far far slower and poorer.

So I extend this idea into “intrapersonal vertical integration”. If you are good at two or more things, and one can lead into another, you have a truly special set of skills and can be really successful.

Putting it another way – in knowledge jobs, communication can be so expensive that if you can vertically integrate yourself across multiple jobs, you can add significant value even if you are not the best at each of the individual skills.

Finish

In knowledge work, communication is the weakest link, so the fewer levels of communication you have, the better and faster you can do your job. Even if you get the best for every level in your chain, the strength (or lack of it) of communication between them can mean that they produce suboptimal output.

Instead, if you can get people who are just good at two or more things in the chain (rather than being the best at any one), you can add significantly more value.
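A toy model makes this trade-off concrete. All the numbers below are invented purely for illustration: output quality is taken as the product of skill levels, discounted once for every communication handoff between people.

```python
def output_quality(skill_levels, people, handoff_fidelity=0.8):
    """Product of skill levels, discounted once per handoff between people."""
    quality = handoff_fidelity ** (people - 1)
    for skill in skill_levels:
        quality *= skill
    return quality

# Three best-in-class specialists, with two lossy handoffs between them
best_specialists = output_quality([0.95, 0.95, 0.95], people=3)

# One person, merely good at all three steps, with no handoffs at all
good_generalist = output_quality([0.85, 0.85, 0.85], people=1)

print(round(best_specialists, 3))  # 0.549
print(round(good_generalist, 3))   # 0.614
```

Under these made-up numbers the vertically integrated generalist wins, and the gap widens as the chain gets longer or the handoffs get noisier.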

Putting it another way, yes, I’m batting for bits-and-pieces players rather than genuine batsmen or bowlers. However, the difference between what I’m saying and cricket is that in cricket batting and bowling are not vertically integrated. If they were, bits and pieces players would work far far better.

The Downside

I’ve written about this before. While being good at uncorrelated things that dovetail into one another can be a great winning strategy, liquidity can be your enemy. That you are unique means that there aren’t too many like you. And so organisations may not want to bet too much on you – since you will be hard to replace. They may decide to accept the slack in communication and hire specialists for each position instead.

PS:

I have written a book on transaction costs and liquidity. As it happens, today it is on display at the Bangalore Literature Festival.

## Risk and data

A while back a group of <a large number of scientists> wrote an open letter to the Prime Minister demanding greater data sharing with them. I must say that the letter is written in academic language and the effort to understand it was too much, but in the interest of fairness I’ll put a screenshot that was posted on twitter here.

I don’t know about this clinical and academic data. However, the holding back of one kind of data, in my opinion, has massively (and negatively) impacted people’s mental health and risk calculations.

This is data on mortality and risk. The kind of questions that I expect government data to have answered were:

1. If I get covid-19 (now in the second wave), what is the likelihood that I will die?
2. If my oxygen level drops to 90 (>= 94 is “normal”), what is the likelihood that I will die?
3. If I go to hospital, what is the likelihood I will die?
4. If I go to ICU what is the likelihood I will die?
5. What is the likelihood of a teenager who contracts the virus (and is otherwise in good health) dying of the virus?

And so on. Simple risk-based questions whose answers can help people calibrate their lives and take calculated enough risks to get on with it without putting themselves and their loved ones at risk.

Instead, what we find from official sources are nothing but aggregates. Total numbers of people infected, dead, recovered and so on. And it is impossible to infer answers to the “risk questions” based on that.

And who fills in the gaps? The media, of course.

I must have discussed “spectacularness bias” on this blog several times before. Basically the idea is that for something to be news, it needs to carry information. And an event carries information if it occurs despite having a low prior probability (or not occurring despite a high prior probability). As I put it in my lectures, “‘dog bites man’ is not news. ‘man bites dog’ is news”.
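This notion of information can be made precise: an event with prior probability p carries −log₂(p) bits of “surprise”, which is why the rare event is the newsworthy one. A quick sketch, with the probabilities invented purely for illustration:

```python
import math

def surprisal_bits(p):
    """Information content, in bits, of an event with prior probability p."""
    return -math.log2(p)

p_dog_bites_man = 0.1   # common event: carries little information
p_man_bites_dog = 1e-6  # rare event: carries a lot

print(round(surprisal_bits(p_dog_bites_man), 2))  # 3.32
print(round(surprisal_bits(p_man_bites_dog), 2))  # 19.93
```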

So when we rely on media reports to fill in our gaps in our risk systems, we end up taking all the wrong kinds of lessons. We learn that one seventeen year old boy died of covid despite being otherwise healthy. In the absence of other information, we assume that teenagers are under grave risk from the disease.

Similarly, cases of children looking for ICU beds get forwarded far more than cases of old people looking for ICU beds. In the absence of risk information, we assume that the situation must be grave among children.

Old people dying from covid goes unreported (unless the person was famous in some way or the other), since the information content in that is low. Young people dying gets amplified.

Based on all the reports that we see in the papers and other media (including social media), we get an entirely warped sense of what the risk profile of the disease is. And panic. When we panic, our health gets worse.

Oh, and I haven’t even spoken about bad risk reporting in the media. I saw a report in the Times of India this morning (unable to find a link to it) that said that “young are facing higher mortality in this wave”. Basically the story said that people under 60 account for a far higher proportion of deaths in the second wave than in the first.

Now there are two problems with that story.

1. A large proportion of over 60s in India are vaccinated, so mortality is likely to be lower in this cohort.
2. What we need is the likelihood of a person under 60 dying upon contracting covid. NOT the proportion of deaths accounted for by under 60s. This is the classic “averaging along the wrong axis” that they unleash upon you in the first test of any statistics course.
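The distinction is easy to demonstrate with made-up numbers: a cohort can account for most of the deaths simply because it accounts for most of the cases, even while each individual in it faces a much lower risk.

```python
# All numbers invented purely to illustrate "averaging along the wrong axis"
cases_under_60, deaths_under_60 = 900_000, 900
cases_over_60, deaths_over_60 = 100_000, 600

# The newspaper's statistic: share of deaths accounted for by under-60s
share_of_deaths_under_60 = deaths_under_60 / (deaths_under_60 + deaths_over_60)

# The statistic a person actually needs: fatality rate upon contracting covid
fatality_rate_under_60 = deaths_under_60 / cases_under_60
fatality_rate_over_60 = deaths_over_60 / cases_over_60

print(f"{share_of_deaths_under_60:.0%}")  # 60% of deaths are under-60s...
print(f"{fatality_rate_under_60:.2%} vs {fatality_rate_over_60:.2%}")  # ...but 0.10% vs 0.60%
```

The under-60s here account for 60% of the deaths, yet an individual under-60 is six times less likely to die – the headline statistic and the useful statistic point in opposite directions.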

Anyway, so what kind of data would have helped?

1. Age profile of people testing positive, preferably state wise (any finer will be noise)
2. Age profile of people dying of covid-19, again state wise

I’m sure the government collects this data. It’s just that they’re not used to releasing this kind of data, so we’re not getting it. And so we have to rely on the media and its spectacularness bias to get our information. And so we panic.

PS: By no means am I stating that covid-19 is not a risk. All I am stating is that the information we have been given doesn’t help us make good risk decisions.

## Uncertain Rewards

A couple of months back, I read Nir Eyal’s Hooked. I didn’t particularly get hooked to the book – it’s one of those books that should have been a blogpost (or maybe a longform article). However, as part of the “Hooked model” that forms the core of the book, the author talks about the importance of “uncertain rewards”.

The basic idea is that it is easier to get addicted to something when the rewards from it are uncertain. If the rewards are certain, then irrespective of how large they are, there is a chance that you might get bored of them. Uncertainty, on the other hand, makes you curious. It provides you “information” each time you “play the game”. And in the quest for new information (remember that entropy is information?), you keep playing. And you get hooked.
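The “entropy is information” aside can be made concrete with Shannon entropy: a certain reward carries zero bits per play, while a 50-50 reward carries a full bit. A minimal sketch:

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete outcome distribution."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

certain = [1.0]          # the same reward every single time
uncertain = [0.5, 0.5]   # good trip or bad trip, equally likely

print(entropy_bits(certain))    # 0.0: nothing new to learn per "play"
print(entropy_bits(uncertain))  # 1.0: every play yields a full bit
```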

This plays out in various ways. Alcohol and drugs, for example, sometimes offer “good trips”, and sometimes “bad trips”. The memory of the good trips is the reason why you keep at it, even if you occasionally have bad trips. The uncertain rewards hook you.

It’s the same with social media. This weekend, so far, I’ve had a largely good experience on Twitter. However, last weekend on the platform was a disaster. I’d gotten quickly depressed and stopped. So why did I get back on to twitter this weekend when last weekend was bad? Because of an earlier weekend when it had provided a set of good conversations.

Even last weekend, when I started having a “bad trip” on Twitter, I kept at it, thinking the longer I play the better the chances of having a good trip. Ultimately I just ruined my weekend.

Uncertain rewards are also why, (especially) when we are young, we tolerate abusive romantic partners. Partners who treat you well all the time are boring. And there is no excitement. Abusive partners, on the other hand, treat you like a king/queen at times, and like shit at other times. The extent of the highs and lows means that you get hooked to them. It possibly takes a certain degree of abuse for you to realise that a “steady partner who treats you well” makes for a better long term partner.

Is there a solution to this? I don’t think so. As we learn in either thermodynamics or information theory, entropy or randomness is equal to information. And because we have evolved to learn and get more information, we crave entropy. And so we crave the experiences that give us a lot of entropy, even if that means the occasional bad trip.

Finally, I realise that uncertain rewards are also the reason why religion is addictive. One conversation I used to have a lot with my late mother was when I would say, “why do you keep praying when your prayers weren’t answered the last time?”. And she would quote another time when her prayers WERE answered. It is this uncertain reward of answers to prayers (which, in my opinion, is sheer randomness) that keeps religion “interesting”. And makes it addictive.

## Confusing with complications

I’m reading this awesome article by Srinivas Bhogle (with Rajeeva Karandikar) on election forecasting. To be fair, not much of the article is new to me – it’s just a far more readable version of Karandikar’s seminal presentation on the topic made at IIT Kanpur all those years back.

However, as with all good retellings, this story also has some nice tidbits. This one has to do with “index of opposition unity”. The voice here is Bhogle’s:

It is easy to understand why the IOU becomes so critical in such situations. But, and here’s the rub, the exact mathematical formula connecting IOU to the seat count prediction is not easy to find. I searched through the big and small print of The Verdict by Dorab Sopariwala and Prannoy Roy, but the formula remained elusive.

Rajeeva suggests that it was likely based on simple heuristics: something like ‘if the IOU is less than 25%, give the first-placed party 75% of the seats.’ It may also have involved intelligent tweaking based on current survey data, historical data, informal feedback, expert opinion, gut feeling, and so on.

I first came across the IOU in Prannoy Roy and Dorab Sopariwala’s book. The way they had presented it in the book, it seemed like a “major concept”. It seems that, like me, Bhogle also looked through the book trying to find a precise formula, and failed to do so.

And then Karandikar’s insight above is crucial – that the IOU may not be a precise mathematical formula, but just an intelligent set of heuristics, involving intelligent tweaking.

Sometimes putting a fancy name (or, even better, an acronym) on something can help lend credibility to the concept. For example, IOU is something that has been championed by Roy and Sopariwala for years, and they have done so to a level where it has become a self-fulfilling prophecy, and a respected scientist like Bhogle has gone searching for its formula!

Also, sometimes, telling people that you “used an intelligent heuristic” to come up with a conclusion can lead you to be taken less seriously. Put on a fancy name (even if it is something that you have yourself come up with), and the game changes. You suddenly start to be taken more seriously, like Ganesha assumed when he started sending fan mail under the name “YG Rao”.

And like they say in The Usual Suspects, sometimes the greatest trick that the devil ever pulled was to convince you that he exists. It is the same with “concepts” such as IOU – you THINK they must be sound because they come with a fancy name, when all that they appear to represent is a set of fancy heuristics.

I must say this is excellent marketing.

## ISAs and Power Laws

There are a number of professions where incomes are distributed according to a power law. The most successful people in the professions corner a very large share of the income that people in the profession make, and unless you reach that very high level of success, you might even struggle to make a living wage.

Professions of this nature include the arts (movies, music, drama, standup comedy, painting, sculpture, etc.), sports, writing and entrepreneurship. The thing with such professions is that they need some degree of “socialism” – if people are left to their own devices, the payoff one can be, say, 99% confident of achieving is so poor that few people will enter the profession, and when fewer people enter the profession, the overall quality of the profession goes down.

So what is required in this case is some sort of a safety net – people who are reasonably competent at the profession get paid a sort of regular basic income (could either be one-time, periodic or output-based) by “investors” in exchange for a cut of the upside. And this, for a talented but struggling beginner, is usually a good deal – they are assured a basic income to pursue what they love and think they are good at, and anything they have to pay in return is only probabilistic – contingent upon a heavy degree of success.

And in order for this kind of safety net to work, it is important that the investment be of the nature of “equity” rather than “debt” – the extreme power law nature of these professions is that only a small proportion of the people who get the safety net will be able to pay back, and those that are able to pay back will be able to pay disproportionately large amounts.
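A stylised example of why the instrument has to be equity. All numbers below are invented: a cohort of 100 aspirants each gets an advance, the investor takes a 10 per cent cut of future earnings, and a single breakout success repays the entire cohort.

```python
# Invented earnings for a power-law cohort: 90 earn nothing,
# 9 earn modestly, and 1 makes it very big
earnings = [0] * 90 + [50_000] * 9 + [50_000_000]

advance = 20_000  # safety-net payment to each aspirant

total_invested = advance * len(earnings)
total_returned = sum(e // 10 for e in earnings)  # 10% equity cut
share_from_top = (max(earnings) // 10) / total_returned

print(total_invested, total_returned)  # 2000000 5045000
print(round(share_from_top, 3))        # 0.991: one winner funds everyone
```

A fixed-repayment (debt) structure could never work here: 90 of the 100 would simply default, and the one success would repay only a fixed amount rather than the disproportionate share that makes the pool viable.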

Entrepreneurship and film acting have sort of done well in terms of providing these safety nets. Entrepreneurs get venture capital investment, which allows them to fund their businesses and take (nominal) salaries, while working on the thing they hope to make it big in. The venture capitalists make money even when only a small proportion of their investments succeed.

The model in acting is a little different – studios hire actors on long term contracts at negotiated salaries. These salaries give actors the safety net to continue in the profession. And in case the actors become popular, the studios cash out essentially by “encashing the option” of using the actor at the pre-negotiated rate for the duration of the contract.

There are other examples of these safety nets as well – artist studios pay their artists a basic wage, in exchange for a cut on the sale of their paintings. However, the model is not as widespread as it could be.

For sportspersons, for example, apart from things like the Ranji Trophy increasing match fees in a big way in the late noughties, this kind of a safety net has been absent. The studio model in acting hasn’t held on. Writers get advances but that doesn’t represent much of a “living wage”.

The good news is that this is changing. Investment in athletes in exchange for a cut of future earnings is gaining traction. And now we have this deal ($):

Taxes will cut into his new 14-year agreement with the Padres, of course. But Tatis also must pay off a previous obligation, a deal he made during the 2017-18 offseason, when he was turning 19 years old and preparing for his first full season at Double A.

It was then that Tatis entered into a contract with Big League Advance (BLA), a company that offers select minor leaguers upfront payments in exchange for a percentage of their future earnings in Major League Baseball. Neither Tatis nor BLA has revealed the exact percentage he owes the company.

The company’s president and CEO, former major-league pitcher Michael Schwimer, told The Athletic in April 2018 that BLA uses a proprietary algorithm to value every player in the minors. Players who receive offers can accept a base-level payout in return for 1 percent of their earnings, with the chance to receive greater incremental payouts and pay back a maximum of 10 percent. If a player never reaches the majors, he keeps the cash advance, with no obligation to pay it back.

This is an awesome thing. For a struggling potential sportsperson, a minor investment (in exchange for equity) can provide a huge boost in their chances of making it – hiring coaches, for example, or eating better food, or living more comfortably. While the media attention will go to the small proportion of investments that do pay off (like how tech media gives disproportionate coverage, and quite rightly so, to startups that do well), arrangements like this mean that more people will play the sport, and the overall standard in the sport will improve.

We need to see if such arrangements start making a mark in the rest of the arts and writing as well.

Oh, and much has been made of income sharing agreements for professional colleges and “tuition centres”. I’m not sure that is the right model there – the thing is that if you are studying to be a software engineer, your payoffs don’t follow a power law. Yes, if you are successful, you make a few orders of magnitude more money than the less successful ones, but even an average software engineer can expect to make a fairly decent income. From that perspective, selling equity in your future earnings to get paid to study engineering is not a great idea, and can lead to adverse selection on the part of the candidates (the better ones will prefer to get funding through debt, which their average salaries can help pay off). In that sense I prefer what the likes of MountBlue are doing, where the “training fees” get paid off by simply working for the company for a certain period of time.

## Monetising volatility

I’m catching up on old newsletters now – a combination of a job and taking my email off what is now my daughter’s iPad means I have a considerable backlog – and I found this gem in Matt Levine’s newsletter from two weeks back ($; Bloomberg).

“it comes from monetizing volatility, that great yet under-appreciated resource.”

He is talking about equity derivatives, and says that this is “not such a good explanation”. While it may not be such a good explanation when it comes to equity derivatives itself, I think it has tremendous potential outside of finance.

I’m reminded of the first time I was working in the logistics industry (back in 2007). I had what I had thought was a stellar idea, which was basically based on monetising volatility, but given that I was in a company full of logistics and technology and operations research people, and no other derivatives people, I had a hard time convincing anyone of that idea.

My way of “monetising volatility” was rather simple – charge people cancellation fees. In the part of the logistics industry I was working in back then, this was (surprisingly, to me) a particularly novel idea. So how do cancellation fees equate to monetising volatility?

Again it’s due to “unbundling”. Let’s say you purchase a train ticket using advance reservation. You are basically buying two things – the OPTION to travel on that particular day using that particular train, sitting on that particular seat, and the cost of the travel itself.

The genius of the airline industry following deregulation in the US (in 1978) was that these two costs could be separated. By charging separately for the travel itself and the option to travel, you can offer the travel itself at a much lower price. Think of the cancellation charge as the “option premium” for the option to travel.

And you can come up with options with different strike prices, and depending upon the strike price, the value of the option itself changes. Since it is the option to travel, it is like a call option, and so higher the strike price (the price you pay for the travel itself), the lower the price of the option.

This way, you can come up with a repertoire of strike-option combinations – the more you’re willing to pay for cancellation (option premium), the lower the price of the travel itself will be. This is why, for example, the cheapest airline tickets are those that come with close to zero refund on cancellation (though I’ve argued that bringing refunds all the way to zero is not a good idea).
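The strike-premium trade-off can be illustrated with the textbook Black-Scholes call price (this is a generic sketch with made-up parameters, not anything an airline actually computes): the higher the strike (the fare you pay if you do travel), the lower the option premium (the cancellation charge).

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, T, r, sigma):
    """Textbook Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Premium falls monotonically as the strike rises (illustrative parameters)
for strike in (80, 100, 120):
    print(strike, round(bs_call(S=100, K=strike, T=1, r=0.05, sigma=0.2), 2))
```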

Since there is uncertainty in whether you can travel at all (there are zillions of reasons why you might want to “cancel tickets”), this is basically about monetising this uncertainty or (in finance terms) “monetising volatility”. Rather than the old (regulated) world where cancellation fees were low and travel charges were high (option itself was not monetised), monetising the options (which is basically a price on volatility) meant that airlines could make more money, AND customers could travel cheaper.

It’s like money was being created out of thin air. And that was because we monetised volatility.
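The arithmetic behind this can be sketched with a toy model (my construction, not from the original post): hold the expected revenue per booking fixed, and watch the headline fare fall as the cancellation fee (the option premium) rises. The 80% travel probability and the revenue target of 100 are made-up numbers for illustration.

```python
def fare_for_target(target_revenue, p_travel, cancel_fee):
    """Headline fare such that expected revenue per booking hits the target:
    p_travel * fare + (1 - p_travel) * cancel_fee == target_revenue."""
    return (target_revenue - (1 - p_travel) * cancel_fee) / p_travel

# Suppose 80% of bookers actually travel and we need 100 per booking:
# the higher the cancellation fee, the lower the fare can be.
for fee in (0, 25, 50):
    print(fee, fare_for_target(100, 0.8, fee))
```

The seller's expected revenue is unchanged, but the traveller who is sure of their plans gets a cheaper ticket – the uncertain traveller is the one paying for the option.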

I had the same idea for another part of the business, but unfortunately we couldn’t monetise that. My idea was simple – if you charge cancellation fees, our demand will become more predictable (since people won’t book chumma – for no reason at all), and this means we will be able to offer a discount. Offering a discount would mean more people would buy, and in the immortal jargon of Silicon Valley, “a flywheel would be set in motion”.

The idea didn’t fly. Maybe I was too junior. Maybe people were suspicious of my brief background in banking. Maybe most people around me had “too much domain knowledge”. So the idea of charging for cancellation in an industry that traditionally didn’t charge for cancellation didn’t fly at all.

Anyway all of that is history.

Now that I’m back in the industry, it remains to be seen if I can come up with such “brilliant” ideas again.

## Uncertainty and Anxiety

A lot of parenting books talk about the value of consistency in parenting – when you are consistent in your approach to something, the theory goes, the child knows what to expect, and so is less anxious about what will happen.

It is not just about children – when something is more deterministic, you can “take it for granted” more. And that means less anxiety about it.

From another realm, prices of options always have “positive vega” – the higher the market volatility, the more the price of the option. Thinking about it another way, the more the uncertainty, the more people are willing to pay to hedge against it. In other words, higher uncertainty means more anxiety.
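As a minimal illustration of positive vega (my sketch, not from the original post), here is a standard Black–Scholes calculation showing that the same call option gets more expensive as volatility rises. The particular spot, strike, rate and maturity are arbitrary.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# the same at-the-money call, priced at increasing volatilities:
# the price rises with sigma, which is what "positive vega" means
prices = [bs_call(100, 100, 1.0, 0.05, sigma) for sigma in (0.1, 0.2, 0.4)]
```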

However, sometimes the equation can get flipped. Let us take the case of water supply in my apartment. We have both a tap water connection and a borewell, so historically, water supply has been fairly consistent. For the longest time, we didn’t bother thinking about the pressure of water in the taps.

And then one day at the beginning of this year the water suddenly stopped. We had an inkling of it that morning, as the water in the taps inexplicably slowed down, and so we stored a couple of buckets before the supply ground to a complete halt later that day.

It turned out that our water pump, which sits deep inside the earth (near the water table), had broken, and it took a day to fix.

Following that, we have become more cognisant of the water pressure in the pipes. If the water pressure goes down for a bit, the memory of the day when the motor conked is fresh, and we start worrying that the water will suddenly stop. I’ve panicked at least a couple of times wondering if the water will stop.

However, after this happened a few times over the last few months, I’m more comfortable. I now know that the water pressure in the taps naturally fluctuates. When I’m showering at the same time as my downstairs neighbour (I’m guessing), the pressure is lower. Sometimes the level of water in the tank is just above the level required for the pump to switch on – then again the pressure is lower. And so forth.

In other words, observing a moderate level of uncertainty has actually made me more comfortable and reduced my anxiety – within some limits, I know that some fluctuation is “normal”. This uncertainty is more than what I observed earlier, so in other words, increased (perceived) uncertainty has actually reduced anxiety.

One way I think of it is in terms of hidden risks – when you see moderate fluctuations, you know that fluctuations exist and that you don’t need to get stressed around them. So your anxiety is lower. However, if you’ve gone a very long time with no fluctuation at all, then you are concerned that there are hidden risks that you have not experienced yet.

So when the water pressure in the taps has been completely consistent, then any deviation is a very strong (Bayesian) sign that something is wrong. And that increases anxiety.

## Shooting, investing and the hot hand

A couple of years back I got introduced to “Stumbling and Mumbling”, a blog written by Chris Dillow, who was described to me as a “Marxist investment banker”. I don’t agree with a lot of the stuff in his blog, but it is all very thoughtful.

He appears to be an Arsenal fan, and in his latest post, he talks about “what we can learn from football”. In that, he writes:

> These might seem harmless mistakes when confined to talking about football. But they have analogues in expensive mistakes. The hot-hand fallacy leads investors to pile into unit trusts with good recent performance (pdf) – which costs them money as the performance proves unsustainable. Over-reaction leads them to buy stocks at the top of the market and sell at the bottom. Failing to see that low probabilities compound to give us a high one helps explain why so many projects run over time and budget. And so on.

Now, the hot hand fallacy has been a hard problem in statistics for a few years now. Essentially, the intuitive belief in basketball is that someone who has scored a few baskets is more likely to be successful in his next basket (basically, the player is on a “hot hand”).

It all started with a seminal 1985 paper by Gilovich, Vallone and Tversky, which used (the then limited) data to show that the hot hand is a fallacy. Then, more recently, Miller and Sanjurjo took another look at the problem and, after correcting a subtle selection bias in the original analysis (and with far better data at hand), found that the hot hand is actually NOT a fallacy.
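Roughly, Miller and Sanjurjo’s selection-bias point can be seen in a small simulation (my sketch, not theirs): even for a fair coin with no hot hand at all, averaging the within-sequence hit rate immediately after a hit, across many short sequences, gives a number well below 50%. Benchmarking a real shooter against 50% therefore makes a genuine hot hand look like no effect.

```python
import random

def streak_shooter_bias(n_shots=4, trials=200000, seed=42):
    """Simulate many short sequences of fair coin flips. For each sequence,
    compute the proportion of hits immediately following a hit, then average
    across sequences. Sequences with no hit in the first n-1 shots are dropped.
    For a fair coin this average is well below 0.5 - the selection bias."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(trials):
        flips = [rng.random() < 0.5 for _ in range(n_shots)]
        after_hit = [flips[i + 1] for i in range(n_shots - 1) if flips[i]]
        if after_hit:
            total += sum(after_hit) / len(after_hit)
            count += 1
    return total / count

bias = streak_shooter_bias()  # noticeably below the true hit rate of 0.5
```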

There is a nice podcast on The Art of Manliness, where Ben Cohen, who has written a book about hot hands, spoke about the research around it. In any case, there are very valid reasons as to why hot hands exist.

Yet, Dillow is right – while hot hands might exist in something like basketball shooting, they don’t in something like investing. This has to do with how much “control” the person in question has. Let me switch fields completely now and quote a paragraph from Venkatesh Guru Rao’s “The Art Of Gig” newsletter:

> As an example, take conducting a workshop versus executing a trade based on some information. A significant part of the returns from a workshop depend on the workshop itself being good or bad. For a trade on the other hand, the returns are good or bad depending on how the world actually behaves. You might have set up a technically perfect trade, but lose because the world does something else. Or you might have set up a sloppy trade, but the world does something that makes it a winning move anyway.

This is from the latest edition, which is paid. Don’t worry if you aren’t a subscriber. The above paragraph I’ve quoted is sufficient for the purpose of this blogpost.

If you are in the business of offering workshops, or shooting baskets, the outcome of the next workshop or basket depends largely upon your own skill. There is randomness, yes, but this randomness is not very large, and the impact of your own effort on the result is large.

In case of investing, however, the effect of the randomness is very large. As VGR writes, “For a trade on the other hand, the returns are good or bad depending on how the world actually behaves”.

So if you are in a hot hand when it comes to investing, it means that “the world behaved in a way that was consistent with your trade” several times in a row. And that the world has behaved according to your trade several times in a row makes it no more likely that the world will behave according to your trade next time.

If, on the other hand, you are on a hot hand in shooting baskets or delivering lectures, then it is likely that this hot hand is because you are performing well. And because you are performing well, the likelihood of you performing well on the next turn is also higher. And so the hot hand theory holds.

So yes, hot hands work, but only in contexts “with a high R squared” – where the impact of the doer’s skill on the outcome is large relative to the randomness. In high-randomness regimes, such as gambling or trading, the hot hand doesn’t matter.

## What is the Case Fatality Rate of Covid-19 in India?

The economist in me will give a very simple answer to that question – it depends. It depends on how long you think people will take from onset of the disease to die.

The modeller in me extended the argument that the economist in me made, and built a rather complicated model. This involved smoothing, assumptions on probability distributions, long mathematical derivations and (for good measure) regressions. And out of all that came this graph, with the assumption that the average person who dies of covid-19 dies 20 days after the infection is detected.

Yes, there is a wide variation across the country. Given that the disease is the same and the treatment for most patients is pretty much the same (lots of rest, lots of water, etc), it is weird that the case fatality rate varies by so much across Indian states. There is only one explanation – assuming that deaths can’t be faked or miscounted (covid deaths attributed to other causes or vice versa), the problem is in the “denominator” – the number of confirmed cases.

What the variation here tells us is that in states towards the top of this graph, we are likely not detecting most of the positive cases (serious cases will get themselves tested anyway, get hospitalised, and perhaps die; it’s the less serious cases that can “slip”). Taking a state low down in this graph as a “good tester” (say Andhra Pradesh), we can try and estimate the extent of under-detection of cases in each state.

Based on state-wise case tallies as of now (there might be some error, since some states might have reported today’s numbers and some might not have), here are my estimates of the actual number of cases per state, based on our calculations of the case fatality rate.

Yeah, Maharashtra alone should have crossed a million cases, based on the number of people who have died there!

Now let’s get to the maths. It’s messy. First we look at the number of confirmed cases per day and number of deaths per day per state (data from here). Then we smooth the data and take 7-day trailing moving averages. This is to get rid of any reporting pile-ups.

Now comes the probability assumption – we assume that a proportion $p$ of all the confirmed cases will die. We assume an average number of days ($N$) to death for people who are supposed to die (let’s call them Romeos?). They won’t all pop off exactly $N$ days after we detect their infection. Instead, of everyone who is infected, supposed to die and not yet dead, a proportion $\lambda$ will die each day.

My maths has become rather rusty over the years, but a derivation I made (the waiting time to death follows a geometric distribution) shows that $\lambda = \frac{1}{N}$. So if people are supposed to die in an average of 20 days, $\frac{1}{20}$ will die today, $\frac{19}{20} \cdot \frac{1}{20}$ will die tomorrow, and so on.
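The claim that $\lambda = \frac{1}{N}$ can be checked numerically – a quick sketch of mine, not from the post: with a constant per-day death probability $\lambda$ among the not-yet-dead, the delay to death is geometrically distributed, and its mean works out to $\frac{1}{\lambda}$.

```python
def mean_delay(lam, horizon=100000):
    """Mean of the geometric delay distribution, computed directly:
    sum over k of k * (1-lam)^(k-1) * lam, truncated at a large horizon."""
    return sum(k * (1 - lam) ** (k - 1) * lam for k in range(1, horizon + 1))

# with lam = 1/20, the average delay to death comes out to 20 days
avg = mean_delay(1 / 20)
```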

So people who die today could be people who were detected with the infection yesterday, or the day before, or the day before day before (isn’t it weird that English doesn’t have a word for this?), or earlier. Now, based on how many cases were detected on each day, and our assumption of $p$ (let’s assume a value first; we can derive it back later), we can know how many people who were found sick $k$ days back are going to die today. Do this for all $k$, and you can model how many people will die today.

The equation will look something like this. Assume $d_t$ is the number of people who die on day $t$ and $n_t$ is the number of cases confirmed on day $t$. We get

$d_t = p (\lambda n_{t-1} + (1-\lambda) \lambda n_{t-2} + (1-\lambda)^2 \lambda n_{t-3} + ... )$

Now, all these $n$s are known. $d_t$ is known. $\lambda$ comes from our assumption of how long people will, on average, take to die once their infection has been detected. So in the above equation, everything except $p$ is known.

And we have this data for multiple days. We know the left hand side. We know the value in brackets on the right hand side. All we need to do is to find $p$, which I did using a simple regression.
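A compact sketch of this estimation step (illustrative Python, not the author’s actual code; the synthetic case series and the 3% “true” fatality rate are made up for the check): build the convolution on the right-hand side of the equation, then regress observed deaths on it through the origin to recover $p$.

```python
def convolved_cases(cases, lam):
    """x_t = sum over k >= 1 of (1-lam)^(k-1) * lam * n_{t-k}:
    the deaths expected on day t per unit of case fatality rate."""
    return [sum((1 - lam) ** (k - 1) * lam * cases[t - k]
                for k in range(1, t + 1))
            for t in range(len(cases))]

def estimate_cfr(cases, deaths, lam):
    """Regression through the origin of deaths on convolved cases:
    p = sum(x*d) / sum(x*x)."""
    x = convolved_cases(cases, lam)
    return sum(xi * di for xi, di in zip(x, deaths)) / sum(xi * xi for xi in x)

# synthetic check: generate deaths at a true CFR of 3%, then recover it
lam = 1 / 20                                   # average 20 days to death
cases = [100 + 10 * t for t in range(120)]     # made-up daily case counts
deaths = [0.03 * x for x in convolved_cases(cases, lam)]
cfr = estimate_cfr(cases, deaths, lam)
```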

And I did this for each state – take the number of confirmed cases on each day, the number of deaths on each day and your assumption on average number of days after detection that a person dies. And you can calculate $p$, which is the case fatality rate. The true proportion of cases that are resulting in deaths.

This produced the first graph that I’ve presented above, for the assumption that a person, should he die, dies on an average 20 days after the infection is detected.

So what is India’s case fatality rate? While the first graph says it’s 5.8%, the wide variation by state suggests that this number is inflated by under-detection of cases, so the true case fatality rate is likely far lower. From doing my daily updates on Twitter, I’ve come to trust Andhra Pradesh as a state that is testing well, so if we assume they’ve found all their active cases, we can use that as a base and arrive at the second graph in terms of the true number of cases in each state.

PS: It’s common to just divide the number of deaths so far by number of cases so far, but that is an inaccurate measure, since it doesn’t take into account the vintage of cases. Dividing deaths by number of cases as of a fixed point of time in the past is also inaccurate since it doesn’t take into account randomness (on when a Romeo might die).

Anyway, here is my code, for what it’s worth.

```r
library(tidyverse)

deathRate <- function(covid, avgDays) {
  covid %>%
    mutate(Date = as.Date(Date, '%d-%b-%y')) %>%
    gather(State, Number, -Date, -Status) %>%
    # pivot the Status column (Confirmed / Deceased / ...) into columns
    spread(Status, Number) %>%
    arrange(State, Date) ->
    cov1

  # Smooth everything with a 7-day trailing moving average
  cov1 %>%
    group_by(State) %>%
    mutate(
      TotalConfirmed = cumsum(Confirmed),
      TotalDeceased = cumsum(Deceased),
      ConfirmedMA = (TotalConfirmed - lag(TotalConfirmed, 7)) / 7,
      DeceasedMA = (TotalDeceased - lag(TotalDeceased, 7)) / 7
    ) %>%
    ungroup() %>%
    filter(!is.na(ConfirmedMA)) %>%
    select(State, Date, Deceased = DeceasedMA, Confirmed = ConfirmedMA) ->
    cov2

  # For each death date, line up all confirmation dates up to 100 days prior
  cov2 %>%
    select(DeathDate = Date, State, Deceased) %>%
    inner_join(
      cov2 %>%
        select(ConfirmDate = Date, State, Confirmed) %>%
        crossing(Delay = 1:100) %>%
        mutate(DeathDate = ConfirmDate + Delay),
      by = c("DeathDate", "State")
    ) %>%
    filter(DeathDate > ConfirmDate) %>%
    # everything below this point is reconstructed - the original code
    # was truncated here. Geometric delay weights: lambda, (1-lambda)*lambda, ...
    mutate(
      Lambda = 1 / avgDays,
      Weight = (1 - Lambda)^(Delay - 1) * Lambda
    ) %>%
    group_by(State, DeathDate, Deceased) %>%
    summarise(Expected = sum(Confirmed * Weight), .groups = "drop") ->
    cov3

  # Case fatality rate per state: regression through the origin of
  # observed deaths on expected deaths per unit CFR
  cov3 %>%
    group_by(State) %>%
    summarise(CFR = sum(Deceased * Expected) / sum(Expected^2))
}
```