Models

This is my first ever handwritten post. I wrote it using a Natraj 621 pencil in a notebook while involved in an otherwise painful activity to which I thankfully didn’t have to pay much attention. I’m now typing it out verbatim from what I’d written. There might be inaccuracies because my handwriting is lousy. I begin:

People like models. People like models because models give them a feeling of being in control. When you observe a completely random phenomenon, financial or otherwise, it causes a feeling of unease. You feel uncomfortable that there is something beyond the realm of your understanding, something inherently uncontrollable. And so, in order to get a better handle on what is happening, you resort to a model.

The basic feature of models is that they need not be exact. They need not be precise. They are basically a broad representation of what is actually happening, in a form that is easily understood. As I explained above, the objective is to describe and understand something that we weren’t otherwise able to fundamentally comprehend.

All this is okay, but the problem starts when we ignore the assumptions that were made while building the model and instead treat the model as completely representative of the phenomenon it is supposed to describe. While this may allow us to build on the model using tractable and precise mathematics, it means that a lot of the information that went into the initial formulation is lost.

Mathematicians are known for their affinity for precision and rigour. They like to have things precisely defined and measurable. You are likely to find them going into a tizzy when faced with something “grey”, something not precisely measurable. Faced with a problem, the first thing a mathematician will want to do is define it precisely, eliminating as much of the greyness as possible. What they ideally like is a model.

From the point of view of mathematicians, with their fondness for precision, it makes complete sense to assume that the model is precise and complete. This allows them to bring in all their beautiful math without dealing with ugly “greyness”. Actual phenomena are now irrelevant. The model reigns supreme.

Now you can imagine what happens when you put a bunch of mathematically minded people on this kind of problem, and maybe even create an organization full of them. It is not hard to guess what happens: with a bunch of similar-thinking people, their thinking becomes the orthodoxy. Their thinking becomes fact. Models reign supreme. The actual phenomenon becomes a four-letter word. And this kind of thinking gets propagated.

Soon people fail to see beyond the models. They refuse to accept that the phenomenon need not obey their models. The model, they think, should drive the phenomenon, rather than the other way around. The tail wagging the dog, basically.

I’m not going into the specifics here, but this might give you an idea as to why the financial crisis happened. It might give you an insight into why obvious mistakes were made, even when the incentives were loaded in favour of the bankers getting it right. It might give you an insight into why internal models at Moody’s assumed that housing prices could never decrease.
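To make this concrete, here is a minimal sketch, in Python, of how a baked-in assumption like “prices never fall” stays invisible as long as you only run the scenarios the model was built around. The numbers, the default and recovery rates, and the pool_value function are all hypothetical, invented purely for illustration:

```python
# Toy illustration (hypothetical numbers): a "model" that values a pool of
# mortgages assuming house prices can only go up, versus what happens when
# that baked-in assumption is violated.

def pool_value(price_growth: float, principal: float = 100.0,
               default_rate_if_prices_fall: float = 0.30,
               recovery: float = 0.5) -> float:
    """Value of a mortgage pool under a given annual house-price growth rate.

    If prices rise, borrowers can refinance or sell, so defaults are
    negligible and the pool is worth its principal. If prices fall,
    some borrowers default and we recover only part of the principal.
    """
    if price_growth >= 0:
        return principal  # the happy case the model was built around
    defaulted = principal * default_rate_if_prices_fall
    return (principal - defaulted) + defaulted * recovery

# The modellers only ever ran scenarios where growth >= 0, so the
# assumption "prices never fall" was invisible in the output.
for growth in [0.05, 0.02, 0.0]:
    print(f"growth {growth:+.0%}: pool worth {pool_value(growth):.1f}")

# Reality, 2007: prices fell, and the case nobody modelled dominated.
print(f"growth -10%: pool worth {pool_value(-0.10):.1f}")
```

Every scenario the modellers actually ran returns the full principal; the one scenario that mattered was never in the set.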

I think there is a lot more that can be explained by this love for models and ignorance of phenomena. I’ll leave that as an exercise for the reader.

Apart from commenting on the content of this post, I also want your feedback on how I write when I write pencil-on-paper rather than on a computer.

Big Management and Big Picture

One common shortcoming that top management at a lot of companies is accused of is that they pay too much attention to detail (i.e. they sometimes micromanage) and are unable to see the big picture.

For example, if you think about the financial crisis of 2007–08, people kept making stupid bets on the mortgage market because they didn’t look at mortgages in the overall context of the economy. They looked at their models, made sure they “converged” to a zillion digits, made sure the math was perfect, and priced. And conveniently forgot some of the “big assumptions”.

I think this has to do with the typical promotion procedures in corporations, and with the assumption that people who are good at one kind of thing will continue to be good at other kinds of things.

For example, in the early part of your career, in order to move up the “corporate ladder”, it’s important to show your skill at paying attention to detail, at seeing the “little picture”, at being careful and precise, and so on. For these are the kinds of skills that make one successful in lower-level jobs.

Now, my hypothesis is that being good at details and being good at seeing the big picture are at best orthogonal, and at worst negatively correlated. I base this hypothesis on some initial reading on topics like Attention Deficit Hyperactivity Disorder and related subjects.

So, when you promote people based on their ability to be good at details (which is required at the lower levels of the job), you will end up with a top and middle management full of people who are excellent at details, and whose ability to see the big picture is at best questionable. That explains a lot, doesn’t it?

I don’t know what can be done to rectify this. Promotion is too important to take away as an incentive for good performance at junior levels. Some organizations do institute procedures where, for higher promotions, you also need to demonstrate big-picture skills. But these apply only to people who have already reached middle management, i.e. people who are good at details, which means that a large proportion of those who started at the bottom as “big picture people” will have already fallen by the wayside by then.

Does my hypothesis make sense? If it does, what do you think needs to be done to get big picture thinkers at the top?


Priors and posteriors

There is a fundamental difference between version 1.0 of anything and any subsequent version. In version 1.0, you usually don’t need to give any reasons for your choices. The focus there is on getting the version ready, and you can get away with whatever assumptions you feel like making. Nobody will question you, because first of all they want to see your product out, not delayed by “class participation”. The prior thus gets established.

Now, for any subsequent version, if you suggest a change, it will be evaluated against what is already there. You need to do a detailed, scientific analysis of the switching costs and switching benefits, and make a compelling enough case that the change should be made. Even when it is a trivial change, you can expect it to come under a lot of scrutiny, since now there is a “prior”, a “default”, which people can fall back on if they don’t like what you suggest.

People and products are resistant to change. Inertia exists. So if you want to make a mark, make sure you’re there at version 1.0. Else you’ll get caught in infinitely painful bureaucratic hassles. And given the role of version 1.0 in how a product pans out (in the sense that most of the assumptions made there never really get challenged), I think the successful products are those that got something right initially, that made better assumptions than the others.
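To push the Bayesian metaphor in this post’s title a little further, here is a minimal sketch in Python of how a strongly established prior barely moves under new evidence, while a weak one gets swamped by it. The Beta parameters and the evidence counts are made-up numbers, chosen purely for illustration:

```python
# Toy Beta-Bernoulli update (hypothetical numbers): once a strong prior is
# established, the same evidence moves the posterior far less than it would
# against a weak prior -- the "version 1.0 effect" in Bayesian terms.

def posterior_mean(prior_a: float, prior_b: float,
                   successes: int, failures: int) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    `successes` and `failures` Bernoulli outcomes (conjugate update)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

evidence = (3, 7)  # 3 successes, 7 failures arguing against the status quo

# Weak prior (no version 1.0 yet): the evidence dominates.
print(f"weak prior Beta(1, 1):     {posterior_mean(1, 1, *evidence):.2f}")

# Strong prior (version 1.0 is entrenched): the same evidence barely moves it.
print(f"strong prior Beta(80, 20): {posterior_mean(80, 20, *evidence):.2f}")
```

With a flat Beta(1, 1) prior the posterior mean follows the data down to about 0.33; against an entrenched Beta(80, 20) prior the very same data only nudges it from 0.80 to about 0.75. Version 1.0 is the Beta(80, 20).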

Addition to the Model-Makers’ Oath

Paul Wilmott and Emanuel Derman, in an article in Business Week a couple of years back (at the height of the financial crisis), came up with a model-makers’ oath. It goes:

• I will remember that I didn’t make the world and that it doesn’t satisfy my equations.

• Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.

• I will never sacrifice reality for elegance without explaining why I have done so. Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

• I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.

While I like this, and try to abide by it, I want to add another point to the oath:

As a quant, it is part of my responsibility to ensure that my fellow quants don’t misuse quantitative models in finance and bring disrepute to my profession. I will put in my best efforts to be on the lookout for deviant behaviour on the part of other quants, and try my best to ensure that they too adhere to these principles.

Go read the full article by Wilmott and Derman linked above. It’s a great read. And coming back to the additional point I’ve suggested here, I’m not sure I’ve drafted it concisely enough. Help in editing it to make it more concise and precise is welcome.