Conductors and CAPM

For a long time I wondered why orchestras have conductors. I probably first noticed the presence of the conductor sometime in the 1990s, when Zubin Mehta was in the news. And I always wondered why this person, who didn’t play anything but stood there waving a stick, needed to exist. Couldn’t the orchestra coordinate itself, the way rockstars or practitioners of Indian music forms do?

And then I came across this video a year or two back.

And then the computer science training I’d gone through two decades back kicked in – the job of an orchestra conductor is to reduce an O(n^2) problem to an O(n) problem.

For a group of musicians to make music, they need to coordinate with each other. Yes, they have the staff notation and all that, but they still need to know when to speed up or slow down, when to make what transitions, and so on. They may have practised together, but the professional performance needs to be flawless. And so they need to constantly take cues from each other.

When you have n musicians who need to coordinate, you have n(n-1)/2 pairs of people who need to coordinate. When n is small, this is trivial, and so you see that small ensembles or rock bands can coordinate easily. However, as n gets large, n(n-1)/2 grows quadratically – with 80 musicians, that is 3,160 pairs. And that is a problem, and a risk.

Enter the conductor. Rather than taking cues from one another, the musicians now simply need to take cues from this one person. And so there are now only n pairs that need to coordinate – each musician with the conductor. An O(n^2) problem has become an O(n) problem!
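
To make the contrast concrete, here is a minimal Python sketch (the ensemble sizes are purely illustrative) counting the coordination links in both setups:

    # Coordination links needed with and without a conductor.
    def pairwise_links(n: int) -> int:
        """Every musician cues every other musician: n(n-1)/2 pairs, O(n^2)."""
        return n * (n - 1) // 2

    def conductor_links(n: int) -> int:
        """Every musician cues only the conductor: n links, O(n)."""
        return n

    for n in (4, 20, 80):
        print(n, pairwise_links(n), conductor_links(n))
    # 4 -> 6 vs 4;  20 -> 190 vs 20;  80 -> 3160 vs 80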

For whatever reason, while I was thinking about this yesterday, I got reminded of legendary finance professor R Vaidya's class on the capital asset pricing model (CAPM), or as he put it, the "Sharpe single index model" (surprisingly, all the links I find for this are from Indian test prep sites, so not linking).

We had just learnt portfolio theory – how, using the expected returns, variances and pairwise correlations of a set of securities, we could construct an "efficient frontier" of portfolios that gave us the best risk-adjusted return. It seemed very mathematically elegant, except that to construct a portfolio of n stocks, you needed n(n-1)/2 correlations. In other words, an O(n^2) problem.

And then Vaidya introduced CAPM, which magically reduced this to an O(n) problem. By suddenly introducing the concept of an index, all that mattered for each stock now was its beta – the sensitivity of its returns to the returns of the index. You didn't need to care about how stocks moved with one another any more – all you needed was each stock's relationship with the index.
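
As a sketch of what this buys you computationally: under the single index model, each stock's beta comes out of one regression against the index, and the covariance between any two stocks can then be reconstructed from their betas alone. A minimal Python illustration follows – the return series and parameters here are entirely synthetic, made up for this example:

    import numpy as np

    # Synthetic data: an index return series and n stocks that load on it.
    rng = np.random.default_rng(0)
    T, n = 1000, 5
    index = rng.normal(0.0005, 0.01, T)               # index (market) returns
    true_betas = rng.uniform(0.5, 1.5, n)
    stocks = index[:, None] * true_betas + rng.normal(0, 0.01, (T, n))

    # One regression per stock against the index: n estimations,
    # rather than n(n-1)/2 pairwise correlations.
    var_m = index.var(ddof=1)
    betas = np.array([np.cov(stocks[:, i], index)[0, 1] / var_m
                      for i in range(n)])

    # Under the model, cov(stock_i, stock_j) = beta_i * beta_j * var(index)
    # for i != j; the diagonal would additionally need each stock's
    # idiosyncratic variance.
    model_cov = np.outer(betas, betas) * var_m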

In a sense, if you think about it, the index in CAPM is like the conductor of an orchestra. If only all O(n^2) problems could be reduced to O(n) problems this elegantly!

Should USP be part of MVP

First of all, my apologies for the jargon, but this is a way to get the attention of the corporate types I hope to sell to. The MVP in question here is the startup-wala MVP (minimum viable product) and not the sports-wala MVP (most valuable player). There is no such ambiguity about USP (unique selling proposition).

So it’s an accepted mantra in the startup world that product development should follow the "agile model" rather than the "waterfall model" (borrowing from software engineering paradigms). It is recommended that you put a "minimum viable product" (MVP) out early into the market and get continuous feedback as you continue to hone your product. This way, you don't end up wasting too much time building stuff the market doesn't want, and can pivot (change direction to another product/service) if necessary.

The question is how “minimum” the “minimum viable product” should be. Let’s say that your business isn’t something that creates a new market but something that improves upon an existing product or service. In other words, you are building a business around “a better way of doing X” (it doesn’t matter here what X is).

The temptation in this case is to copy X and release it as your minimum viable product. This is rather easy to do, since you can just reverse engineer X, and put out a product quickly. That’s the quickest way to get to the market.

The problem with this approach, however, is that your initial set of users who experience your MVP will fail to see what the big deal about your product is – while they might hear your promise that this is only a start and that you intend to do X in a "new improved way", the first version, as they see it, shows no indication of that promise.

Worse, when your product is branded as a "new improved X", it automatically gets anchored in your users' minds with respect to X. Irrespective of what your product looks or feels like, once you've branded it as a "new improved X", comparisons to X are inevitable. And when your MVP is not very different from X, people might lose interest.

On the other hand, if you build your USP into your MVP, it results in a longer product development cycle. In that case, if the market doesn't really want your "new improved X", a lot more effort will have been expended, leading to higher risk (of the market not accepting the product).

Yet, if your MVP is nothing like your "real product", then you are not really getting feedback from the market on your real product – only feedback on your MVP. So the MVP should be built such that whatever feedback you get on it can actually feed into the design of the superior product.