Why I never became a pundit

It’s been nearly a decade since I started writing in the mainstream media. Ahead of the Karnataka elections in 2013, I published on this blog a series of quantitative analyses of the election, which R Sukumar (then editor-in-chief of Mint) picked up, asking me if I could write for his paper on the topic – quantitative analysis of elections.

And so Election Metrics (as my pieces in Mint were called – they were analysis and not editorials, which meant it wasn’t a strict “column” per se, but I got paid well) was born. I wrote for Mint until the end of 2018, when my contract ran out and Sukumar’s successor chose not to renew it.

Having thus “cracked print”, I decided that the next frontier had to be video. I wanted to be on TV, as a pundit. That didn’t come easily. The 2014 national elections (when Modi first became PM) came and went, and I spent counting day in the Mint newsroom, far from any television camera. I tried to get my way into IPL auction analysis, but to no avail.

Finally, in 2018, on the day of the Karnataka elections, I got a guy I knew from way back to arrange a TV appearance, and went on “News9” (a Bangalore-focussed English news channel) to talk about exit polls.

“I saw the video you had put on Facebook”, my friend Ranga said when he met me a few days later, “and you were waxing all eloquent about sample sizes and standard errors”. On that day I had been given space to make my arguments clear, and I had unleashed the sort of stuff you don’t normally see on news TV. Three days later, I got invited on the day of counting, enjoyed myself far less, and that, so far, has been the end of my career in punditry.
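(If you’re wondering what that kind of stuff looks like, here is the gist of it – the standard error of a vote share estimated from an exit poll. This is a generic sketch with made-up numbers, not what actually went on air.)

```python
import math

def exit_poll_standard_error(p, n):
    """Standard error of a vote-share estimate from a simple
    random sample of n respondents with observed share p."""
    return math.sqrt(p * (1 - p) / n)

# A 1,000-respondent poll showing a 40% vote share is good only
# to about +/- 3 percentage points at 95% confidence
se = exit_poll_standard_error(0.40, 1000)
print(f"SE = {se:.3f}; 95% interval ~ +/- {1.96 * se:.3f}")
```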

A stray invitation from The Republic aside, my career in TV punditry has never come close to getting started again. Of late I haven’t bothered, but in the past it frequently rankled that I’ve never been able to “crack TV”. And today I figured out why.

On my way to work this morning I was listening to this podcast featuring noted quant / factor investors Jim O’Shaughnessy and Cliff Asness. It was this nice episode where they spoke about pretty much everything – from FTX and AMC to psychedelics. But as you might expect with two quant investors in a room, they spent a lot of time talking about quantitative investing.

And then somewhere they started talking about their respective TV appearances. O’Shaughnessy spoke about how, in the early days of his fund, he used to make a lot of appearances on Bloomberg and CNBC, but of late he has pretty much stopped going.

He said something to the effect of: “I am a quant. I cannot give soundbites. I talk in terms of stories and theories. In the 80s, the channels used to give me a minute or two to speak – that was the agreement under which I appeared on them. But on my last appearance, I barely got 10 seconds to speak. They wanted soundbites, but as a quant I cannot give soundbites”.

And then Asness agreed, saying pretty much the same thing: that it was okay to go on television when you got a reasonable amount of time to speak, build a theory and explain stuff, but now that television has come down to soundbites and one-liners, he is especially unsuited to it. And so he has stopped going.

There it was – if you are the sort who is driven by theories, and you need space to explain, doing so over voice is not efficient. You would rather write, where there is room for constructing an argument and making your point. If you were to speak, unless you had a lot of time (remember that speaking involves a fair amount of redundancy, unlike writing), it would be impossible to talk theories and arguments.

And I realise I have internalised this in life as well – at work for example, I write long emails (in a previous job, colleagues used to call them “blogposts”) and documents. I try to avoid complicated voice discussions – for with my laborious style I can never win them. Better to just write a note after it is over.

Volatility and price differentiation

In an interview with the rather fantastically named Aurangzeb Naqshbandi and Hindustan Times editor Sukumar Ranganathan, Congress president Rahul Gandhi has made a stunning statement in the context of agricultural markets:

Markets are far more volatile in terms of rapid price differentiation, than they were before.

I find this sentence rather surreal, in that I don’t really know what Gandhi is talking about. As a markets guy and a quant, I can interpret this statement in only one way.

It is about how market volatility is calculated. While it might be standard to use standard deviation as a measure of market volatility, quants prefer to use a method called “quadratic variation” (when the market price movement follows a random walk, quadratic variation equals the variance).

To calculate quadratic variation, you take market returns at a succession of very small intervals, square these returns and then sum them up. Thinking about it mathematically, calculating returns at very short time intervals is similar to taking the derivative of the price, and you could call it “price differentiation”.
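To make that concrete, here is a minimal sketch (my own illustration, nothing from the interview) of realised quadratic variation, with a sanity check on a simulated random walk:

```python
import numpy as np

def quadratic_variation(prices):
    """Sum of squared log returns over successive small intervals -
    the 'price differentiation' step, squared and summed."""
    returns = np.diff(np.log(prices))
    return float(np.sum(returns ** 2))

# On a simulated random walk the quadratic variation should come out
# close to n * sigma^2, the variance of the total move
rng = np.random.default_rng(42)
n, sigma = 10_000, 0.001
prices = 100 * np.exp(np.cumsum(rng.normal(0, sigma, n)))
print(quadratic_variation(prices))  # ~ n * sigma**2 = 0.01
```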

So when Gandhi says “markets are far more volatile in terms of rapid price differentiation”, he is basically quoting the formula for quadratic variation – when the derivative of the price time series goes up, the market volatility increases by definition.

This is what you have, ladies and gentlemen – the president of the principal opposition party in India has quoted the formula that quants use for market volatility in an interview with a popular newspaper! Yet, some people continue to call him “pappu”.

The utility of utility functions

That is the title of a webinar I delivered this morning on behalf of Kristal.AI, a company that I’ve been working with for a while now. I spoke about utility functions, and how they can be used in portfolio optimisation.

This is related to the work I’ve been doing for Kristal, and lies at the boundary between quantitative finance and behavioural finance. In fact, I spoke about utility functions (combined with Monte Carlo methods) as a great way to unify quantitative and behavioural finance.
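To give a flavour of the idea, here is a toy sketch (my own, not code from the webinar), assuming CRRA utility and a two-asset world with made-up return numbers:

```python
import numpy as np

def crra_utility(wealth, gamma=3.0):
    """Constant relative risk aversion utility; higher gamma
    penalises downside outcomes more heavily."""
    return wealth ** (1 - gamma) / (1 - gamma)

def expected_utility(weight, n_sims=100_000, seed=0):
    """Monte Carlo expected utility of a portfolio with 'weight' in a
    risky asset (8% mean, 20% vol) and the rest in cash at 3%."""
    rng = np.random.default_rng(seed)
    risky_returns = rng.normal(0.08, 0.20, n_sims)
    wealth = 1 + weight * risky_returns + (1 - weight) * 0.03
    return crra_utility(wealth).mean()

# Choose the risky-asset allocation that maximises expected utility
weights = np.linspace(0, 1, 101)
best = max(weights, key=expected_utility)
print(f"optimal risky weight: {best:.2f}")
```

The behavioural bit sits in the choice of utility function (here, the risk aversion parameter gamma); the quantitative bit is the simulation and optimisation on top of it.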

Interactive Brokers (who organised the webinar) recorded the thing, and you can find the recording here. 

I think the webinar went well, though I’m not very sure since there was no feedback. This was by design – the webinar was a speaker-only broadcast, and the audience wasn’t allowed to participate except through questions sent directly to me.

Webinars are hard to do in the first place, since it feels like talking to an empty room – there is no feedback, not even nods or smiles, and you don’t know if people are listening. In most “normal” webinars, the audience can interject by raising their hands, and you can try to make it interactive. The format used here didn’t permit such interaction, which made it seem like I was talking into thin air.

Also, the Mac app of the webinar tool didn’t seem particularly well optimised. I couldn’t share a specific window from my laptop (I couldn’t say “share only my PDF, nothing else”, which is normal in most online meeting tools), and there were times when I inadvertently exposed my desktop to the full audience (you can see it in the recording).

Anyways, I think I’ve spoken about something remotely interesting, so give it a listen. My “main speech” only takes around 20-25 minutes. And if you want to know more about utility functions and behavioural economics, I recommend this piece by John Cochrane.

The (missing) Desk Quants of Main Street

A long time ago, I’d written about my experience as a Quant at an investment bank, and about how banks like mine were sitting on a pile of risk that could blow up at any time.

There were two problems, as I had documented then. Firstly, most quants I interacted with seemed to be solving maths problems rather than finance problems, not bothering about whether their models would stand the test of markets. Secondly, there was an element of groupthink: quant teams were largely homogeneous, and it was hard to progress while holding contrarian views.

Six years on, there has been no blowup, and in some sense banks are actually doing well (I mean, they’ve declined compared to the time just before the 2008 financial crisis but haven’t done that badly). There have been no real quant disasters (yes I know the Gaussian Copula gained infamy during the 2008 crisis, but I’m talking about a period after that crisis).

There can be many explanations for how banks have avoided quant blow-ups despite their quants solving maths problems and all thinking alike, but the one I’m partial to is the presence of a “middle layer”.

Most of the quants I interacted with were “core”, in the sense that they were not attached to any sales or trading desks. Banks also typically have a large cadre of “desk quants”, who are directly associated with trading teams, and who build models and help with day-to-day risk management, pricing, etc.

Since these desk quants work closely with the business, they turn out to be much more pragmatic than the core quants – they have a good understanding of the market and use the models more as guiding principles than as rules. At the same time, they bring the benefits of quantitative models (and the work of the core quants) into day-to-day business.

Back during the financial crisis, I’d jokingly suggested that other industries should hire the quants who were then surplus to Wall Street’s requirements. Around the same time, DJ Patil et al came up with the concept of the “data scientist” and called it the “sexiest job of the 21st century”.

And so other industries started getting their own share of quants, or “data scientists” as they are now called. Nowadays it’s fashionable even for small companies, for whom data is not critical to the business, to have a data science team. Being in this profession now (I loathe calling myself a “data scientist” – I prefer “quant” or “analytics”), I’ve come across quite a few of those.

The problem I see with “data science” on “Main Street” (a phrase that gained currency during the financial crisis as the opposite of Wall Street, referring to “normal” businesses) is that it lacks this cadre of desk quants. Most data scientists are highly technical people who don’t necessarily have an understanding of the business they operate in.

Thanks to that, what I’ve noticed is that in most cases there is a chasm between the data scientists and the business, since they are unable to talk in a common language. As I’m prone to saying, this can go two ways – the business guys can either assume the data science guys are geniuses and take their word as gospel, or they can totally disregard the data scientists as people who do some esoteric math and don’t really understand the world. In either case, the value added is suboptimal.

It is not hard to understand why “Main Street” doesn’t have a cadre of desk quants – it’s because of the way the data science industry has evolved. Quant at investment banks evolved over a long period of time – the Black-Scholes equation was proposed in the early 1970s. So quants were first recruited to work directly with traders, and core quants (at the banks that have them) were a later addition, once banks realised that some quant functions could be centralised.

On the other hand, the whole “data science” growth has been rather sudden. The volume of data, cheap incrementally available cloud storage, easy processing and the popularity of the phrase “data science” have all grown rapidly in the last decade or so, and companies have scrambled to set up data teams. There has simply been no time to train people who get both the business and the data – and so the data scientists exist as addendums that are either worshipped or ignored.

Newsletter!

So after much deliberation and procrastination, I’ve finally started a newsletter. I call it “the art of data science”, and the title should be self-explanatory. It’s pure unbridled opinion (the kind that usually goes on this blog), except that I only write about one topic there.

I intend to have three sections and then a “chart of the edition” (note how cleverly I’ve named this section to avoid giving much away on the frequency of the newsletter!). This edition, though, I ended up putting in too much harikathe, so I restricted myself to two sections before the chart.

I intend to talk a bit each edition about some philosophical aspect of dealing with data (this section got a miss this time), a bit on data analysis methods (I went a bit meta on this one) and a little bit on programming languages (which I used for bitching a bit).

And the fact that I plan to put in a “chart of the edition” means I need to read newspapers a lot more, since you are much more likely to find gems (in either direction) there than elsewhere. For the first edition, I picked a good graph I’d seen on Twitter, and it’s about Hull City!

Anyway, enough of this meta-harikathe. You can read the first edition of the newsletter here. In case you want to get it in your inbox each week/fortnight/whenever I decide to write it, then subscribe here!

And send feedback (by email, not comments here) on what you think of the newsletter!

Matt Levine describes my business idea

When I was leaving the big bank I was working for (I keep forgetting whether this blog is anonymous or not, but considering that I’ve now mentioned it on my LinkedIn profile (and had people congratulate me “on the new job”), I suppose it’s not anonymous any more) in 2011, I didn’t bother looking for a new job.

I was going into business, I declared. The philosophy (that’s a word I’ve learnt to use in this context by talking to Venture Capitalists) was that while Quant in investment banking was already fairly saturated, there was virgin territory in other industries, and I’d use my bank-honed quant skills to improve the level of reasoning in these other industries.

Since then things have more or less gone well. I’ve worked in several sectors, and done a lot of interesting work. While a lot of it has been fairly challenging, very little of it has technically been of a level that would be considered challenging by an investment banking quant. And all this is by design.

I’ve long admired Matt Levine for the way he clearly explains fairly complicated finance stuff in his daily newsletter (which you can get delivered to your inbox for free), and for talking about finance in an entertaining manner. I’ve sometimes mentioned that I want to grow up to be like him, to write like him, to analyse like him and all that.

And I find that in yesterday’s newsletter he clearly encapsulates the idea with which I started off when I quit banking in 2011. He writes:

A good trick is, find an industry where the words “Monte Carlo model” make you sound brilliant and mysterious, then go to town.

This is exactly what I set out to do in 2011, and have continued to do since then. And you’d be amazed to find the number of industries where “Monte Carlo model” makes you sound brilliant and mysterious.

Considering the difficulties I’ve occasionally had in communicating to people what exactly I do, I think I should adopt Levine’s line to describe my work. I clearly can’t go wrong that way.

 

What did Brendan in? Priors? The schedule? Or the cups?

So Brendan Rodgers has been sacked as Liverpool manager, after what seems like an indifferent start to the season. The club is in tenth position with 12 points after 8 games, with commentators noting that “at the same stage last season” the club had 13 points from 8 games.

Yet the notion of “same stage last season” is wrong, as I’d explained in this post written two years back (during Liverpool’s last title chase), since the fixture list changes year on year. A better way to compare a club’s performance is to set this season’s results against the corresponding fixtures from last season.

Looking at this season from such a lens (and ignoring games against promoted teams Bournemouth and Norwich), this is what Liverpool’s season so far looks like:

Fixture                  This season  Last season  Difference
Stoke away               Win          Loss         +3
Arsenal away             Draw         Loss         +1
West Ham home            Loss         Win          -3
Manchester United away   Loss         Loss         0
Aston Villa home         Win          Loss         +3
Everton away             Draw         Draw         0

In other words, compared to similar fixtures last season, Liverpool is on a +4 (winning two games and drawing one among last season’s losses, and losing one of last season’s wins). In fact, if we look at the fixture schedule, apart from the games against promoted sides (which Liverpool didn’t do wonderfully in, scraping through with an offside goal against Bournemouth and drawing with Norwich), Liverpool have had a pretty tough start to the season in terms of fixtures.
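For completeness, the arithmetic behind that +4, with results mapped to points the usual way (win 3, draw 1, loss 0):

```python
points = {"Win": 3, "Draw": 1, "Loss": 0}

fixtures = [  # (fixture, this season, last season)
    ("Stoke away",             "Win",  "Loss"),
    ("Arsenal away",           "Draw", "Loss"),
    ("West Ham home",          "Loss", "Win"),
    ("Manchester United away", "Loss", "Loss"),
    ("Aston Villa home",       "Win",  "Loss"),
    ("Everton away",           "Draw", "Draw"),
]

# Points this season minus points in the corresponding fixture last season
swing = sum(points[this] - points[last] for _, this, last in fixtures)
print(swing)  # +4
```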

So the question is what led to Brendan Rodgers’ dismissal last night? Surely it can’t be the draw at Everton, for that has become a “standard result” of late? Maybe the fact that Liverpool didn’t win allowed the management to make the announcement last evening, but surely the decision had been made earlier?

The first possibility is that the priors had been stacked against Rodgers. Considering the indifferent performance last season in both the league (except for one brilliant spell) and the cups, and the sacking of Rodgers’ assistants, it’s likely that Rodgers didn’t have the benefit of the doubt going into the season, and only a spectacular performance could have turned it around.

The other possibility is indifferent performances in the cups, with the 1-1 home draws against FC Sion and Carlisle United being the absolute low points – fixtures one would have expected Liverpool to win easily (albeit with weakened sides). While Liverpool is yet to exit any cup, the indifferent performances so far suggest there hasn’t been much improvement in the squad since last season.

Leaving aside a “bad prior” at the beginning of the season and cup performances (no pun intended), there’s no other reason to sack Rodgers. As my analysis above shows, his performance in the league hasn’t been particularly bad in terms of results, with only the defeat to West Ham and possibly the draw with Norwich being bad. If Fenway Sports Group (the owners of Liverpool FC) have indeed sacked Rodgers based on his league performance, it simply means that they don’t fully get the “Moneyball” philosophy they supposedly follow, and could do with some quant consulting.

And if they’re reading this, they should know who to approach for such consulting services!

Teaching at IIMB: Mid-term review

IIMB has a strange policy. They are not allowed to hold classes tomorrow on account of it being a national holiday, so they shifted tomorrow’s classes to today, indicating a complete lack of appreciation of the concept of the long weekend. Anyway, since I didn’t have any other plans for the day or the weekend, I decided not to request a slot change and went anyway. This was my eleventh class out of twenty.

I expected the attendance to be rather thin today, but the class surprised me, with more than three-fourths of the registered students turning up (on par with most sessions so far). And despite the class being at 8 am, none of them slept (at least I didn’t notice anyone sleeping). That is again on par with the course so far – more than halfway through the course and I’m yet to catch a single person sleeping in class! Maybe I should take some credit for that.

The class before today’s was about ten days back (a long gap due to the mid-term exams), and that day I had a minor scare. I had formulated a case that involved solving the Newsboy Problem (now politically corrected to the “Newsvendor model”) as a sub-step in the solution. Having worked out a sketch of the case solution the previous night, I went to sleep hoping to work out the full case before class. And my brain froze.

So it was 6:30 on the morning of an 8 am class, and I couldn’t make head or tail of the newsboy problem despite having once known it fairly well. I decided to have cereal at home rather than go to SN, to give myself more time to read up and understand the model. And my brain refused to open up. Yet I made my way to class, hoping I could “wing it”.

I didn’t have to, for the class exceeded expectations and solved the case for me. One guy popped up with “newsvendor model”. Another said that we could treat a certain quantity as a “spot price”, thus eliminating the need to make any assumptions on costs. Then we started working out the model on Excel (remember that this is a “spreadsheet modelling” course). And the time came to implement the newsvendor model. And my brain froze in anticipation. “How do we do this?”, I asked, trying not to give away my brain freeze.

“We calculate the critical ratio”, came the chorus (sometimes I dispense with the politeness and order of people raising their hands and speaking in turn). “And what is that here?”, I asked. “B6/(B5+B6)”, came back the chorus. And when I asked them how to derive the ordering level from this, the chorus had figured out the exact way in which we should use NormInv to determine it. The troubling bit of the newsvendor problem thus solved, I took control and went forward with the rest of the case. And my respect for the class went up significantly that day.
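For those who weren’t in class, the logic the chorus was quoting goes roughly like this (a sketch with made-up costs and demand parameters, assuming cell B6 held the per-unit underage cost and B5 the per-unit overage cost):

```python
from scipy.stats import norm

# Newsvendor model: order up to the demand quantile given by the
# critical ratio Cu / (Cu + Co), where Cu is the per-unit cost of
# under-stocking and Co the per-unit cost of over-stocking.
Cu, Co = 4.0, 1.0        # made-up costs (the B6 and B5 of the class sheet)
mu, sigma = 1000, 200    # assumed normally distributed demand

critical_ratio = Cu / (Cu + Co)                              # = B6/(B5+B6)
order_level = norm.ppf(critical_ratio, loc=mu, scale=sigma)  # Excel's NormInv
print(critical_ratio, round(order_level))
```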

Later in the day I was relating the incident to the wife, who, I might have mentioned, is an MBA student at IESE Business School in Barcelona. “Oh my god, your class is so quant”, she exclaimed. This is a topic for another day, but perhaps due to the nature of the admissions process, students at IIMs are much more quantitatively oriented than students at B-schools elsewhere. Yet IIMs don’t seem to be doing much to harness this quant potential, which should be giving their students a global competitive advantage.

And coming back to my class, they’ll be sitting for placements starting the 9th of February. If my class is a representative sample (it most likely isn’t, since I’m teaching an elective and these people expressed interest in learning what I teach, so there is a definite bias), this seems like a great batch at IIMB. So I encourage you to go and recruit!