Should you have an analytics team?

In an earlier post a couple of weeks back, I talked about the importance of business people knowing numbers and numbers people knowing business, and put in a small advertisement for my consulting services by mentioning that I know both business and numbers and work at their cusp. In this post, I take that further and examine whether it makes sense to have a dedicated analytics team.

Following the data boom, most companies have decided (rightly) that they need to do something to take advantage of all the data they have, and have created dedicated analytics teams. These teams, normally staffed with people from a quantitative or statistical background, with perhaps a few MBAs, are in charge of taking care of all the data the company has, along with doing some rudimentary analysis. The question is whether having such dedicated teams is effective, or whether it is better to have numbers-enabled people across the firm.

Having an analytics team makes sense from the point of view of economies of scale. People who are conversant with numbers are hard to come by, and when you find some, it makes sense to put them together and get them to work exclusively on numerical problems. That also enables collaboration and knowledge sharing, which can have positive externalities.

Then, there is the data aspect. Anyone doing business analytics within a firm needs access to data from all over the firm, and if the firm doesn’t have a centralized data warehouse housing all its data, one task of each analytics person would be to pull together the data they need for their analysis. Here again, the economies of scale of an integrated analytics team come into play. The problem of putting together data from multiple parts of the firm is not solved multiple times over, and the analysts can spend more time analyzing rather than collecting data.

So far so good. However, in a post a while back I had explained that investment banks’ policies of having exclusive quant teams have doomed them to long-term failure. My contention there (informed by an insider view) was that an exclusive quant team whose only job is to model, and which doesn’t have a view of the market, can quickly get insular, and this can lead to groupthink. People are more likely to solve for problems as defined by their models rather than problems posed by the market. This, I had mentioned, can soon lead to a disconnect between the bank’s models and the markets, and ultimately to trading losses.

Extending that argument, it works the same way with non-banking firms as well. When you put together a group of numbers people, call them the analytics group, and give them only the job of building models rather than looking at actual business issues, they are likely to get similarly insular and opaque. While initially they might do well, they soon start getting disconnected from the actual business the firm is doing, and fall in love with their models. Like the quants at big investment banks, they too will start solving for their models rather than for the actual business, and that prevents the rest of the firm from getting the best out of them.

Then there is the jargon. If you say “I fitted a multinomial logistic regression and it gave me a p-value of 0.05 so this model is correct”, a business manager without much clue of numbers can be bulldozed into submission. By talking a language which most of the firm doesn’t understand, you are obscuring your work, which leads to one of two responses from the rest. Either they deem the analytics team to be incapable (since it fails to talk the language of business, in which case the purpose of the analytics team’s existence is lost), or they assume the analytics team to be fundamentally superior (thanks to the obscurity of the language), in which case there is the risk of incorrect and possibly inappropriate models being adopted.
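To make the p-value point concrete, here is a minimal sketch (in Python, using numpy and statsmodels; the data is simulated and the whole setup is my own illustration): the true process below is nonlinear in x, yet a logit that is linear in x still reports a tiny p-value on its coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.normal(size=1000)
# True data-generating process is nonlinear: log-odds depend on x + x^2 - 1
p_true = 1 / (1 + np.exp(-(x + x ** 2 - 1)))
y = rng.binomial(1, p_true)

X = sm.add_constant(x)            # misspecified model: linear in x only
fit = sm.Logit(y, X).fit(disp=0)
print(fit.pvalues)                # tiny p-value on x; the model is still wrong
```

The tiny p-value only says the coefficient on x is unlikely to be zero; it says nothing about whether the model captures the underlying process, let alone the business.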

I can think of several solutions for this – but irrespective of what solution you ultimately adopt – whether you go completely centralized, completely distributed, or some hybrid – the key step in getting the best out of your analytics is to have your senior and senior-middle management conversant with numbers. By that I don’t mean that they should all go for a course in statistics. What I mean is that your middle and senior management should know how to solve problems using numbers. When they see data, they should have the ability to ask the right kind of questions. Irrespective of how the analytics team is placed, as long as you ask it the right kind of questions, you are likely to benefit from its work (assuming basic levels of competence, of course). This way, managers can remain conversant with the analytics people, and a middle ground can be established so that insights from numbers actually flow into the business.

So here is the plug for this post – shortly I’ll be launching short (1-day) workshops for middle and senior level managers in analytics. Keep watching this space 🙂

Jobs and courtship

Jobs, unlike romantic relationships, don’t come with a courtship period. You basically go for a bunch of interviews and at the end of it both parties (you and the employer) have to decide whether it is going to be a good fit. Neither party has complete information – you don’t know what a typical day at the job is like, and your employer doesn’t know much about your working style. And so both of you are taking a risk. And there is a significant probability that you are actually a misfit and the “relationship” can go bad.

For the company, it doesn’t matter so much if the odd job goes bad. Their recruitment process is usually such that the probability of a misfit employee is low enough not to affect their attrition numbers. From the point of view of the employee, though, it can get tough. Every misfit job you go through has to be explained at the next interview. Have a lot of misfits, and you’re deemed to be unfaithful (like being called a “much-married man”). And that makes it so tough for you to get another job that you’re more likely to stumble into one where you’re a misfit once again!

Unfortunately, it is not practical for companies to hire mid-career interns. Internship is a successful recruitment strategy at the college-student level, but not too many people are willing to take on the uncertainty of a non-going-concern job in the middle of their careers. This risk-aversion means that a lot of people have no option but to soldier on despite being gross misfits.

And then there are those that keep “divorcing” in an attempt to fit in, until they are deemed unemployable.

PS: In this regard, recruitments are like arranged marriage. You make a decision based on a handful of interviews in simulated conditions without actually getting to know each other. And speaking of arranged marriage, I reprise this post of mine from six years ago.

Models

This is my first ever handwritten post. I wrote this using a Natraj 621 pencil in a notebook, while involved in an otherwise painful activity to which I thankfully didn’t have to pay much attention. I’m now typing it out verbatim from what I’d written. There might be inaccuracies, because I have lousy handwriting. I begin:

People like models. People like models because models give them a feeling of being in control. When you observe a completely random phenomenon, financial or otherwise, it causes a feeling of unease. You feel uncomfortable that there is something beyond the realm of your understanding, something inherently uncontrollable. And so, in order to get a better handle on what is happening, you resort to a model.

The basic feature of models is that they need not be exact. They need not be precise. They are basically a broad representation of what is actually happening, in a form that is easily understood. As I explained above, the objective is to describe and understand something that we weren’t able to fundamentally comprehend.

All this is okay, but the problem starts when we ignore the assumptions that were made while building the model, and instead treat the model as completely representative of the phenomenon it is supposed to represent. While this may allow us to build on the model using tractable and precise mathematics, it means that a lot of the information that went into the initial formulation is lost.

Mathematicians are known for their affinity towards precision and rigour. They like to have things precisely defined, and measurable. You are likely to find them going into a tizzy when faced with something “grey”, or something not precisely measurable. Faced with a problem, the first thing the mathematician will want to do is to define it precisely, and eliminate as much of the greyness as possible. What they ideally like is a model.

From the point of view of the mathematician, with his fondness for precision, it makes complete sense to assume that the model is precise and complete. This allows him to bring in all his beautiful math without dealing with ugly “greyness”. Actual phenomena are now irrelevant. The model reigns supreme.

Now you can imagine what happens when you put a bunch of mathematically minded people on this kind of a problem. And maybe even create an organization full of them. It is not hard to guess what happens: with a bunch of similar-thinking people, their thinking becomes the orthodoxy. Their thinking becomes fact. Models reign supreme. The actual phenomenon becomes a four-letter word. And this kind of thinking gets propagated.

Soon the people fail to see beyond the models. They refuse to accept that the phenomenon need not obey their models. The model, they think, should drive the phenomenon, rather than the other way around. The tail wagging the dog, basically.

I’m not going into the specifics here, but this might give you an idea as to why the financial crisis happened. This might give you an insight into why obvious mistakes were made, even when the incentives were loaded in favour of the bankers getting it right. This might give you an insight into why Moody’s internal models assumed that housing prices could never decrease.

I think there is a lot more that can be explained by this love for models and ignorance of phenomena. I’ll leave that as an exercise for the reader.

Apart from commenting about the content of this post, I also want your feedback on how I write when I write with pencil-on-paper, rather than on a computer.

IPOs Revisited

I’ve commented earlier on this blog about investment bankers shafting companies that want to raise money from the market, by pricing the IPO too low. While a large share price appreciation on the day of listing might be “successful” from the point of view of the IPO investors, it’s anything but that from the point of view of the issuing companies.

The IPO pricing issue is in the news again now, with LinkedIn listing at close to 100% appreciation of its IPO price. The IPO was sold to investors at $45 a share, and within minutes of listing it was trading at close to $90. I haven’t really followed the trajectory of the stock after that, but assume it’s still closer to $90 than to $45.

Unlike in the Makemytrip case (maybe that got ignored since it’s an Indian company and not many commentators know about it), the LinkedIn IPO has got a lot of footage among both the mainstream media and the blogosphere. There have been views on both sides – that the i-banks shafted LinkedIn, and that this appreciation is only part of the price discovery mechanism, so it’s fair.

One of my favourite financial commentators Felix Salmon has written a rather large piece on this, in which he quotes some of the other prominent commentators also. After giving a summary of all the views, Salmon says that LinkedIn investors haven’t really lost out too much due to the way the IPO has been priced (I’ve reproduced a quote here but I’d encourage you to go read Salmon’s article in full):

But the fact is that if I own 1% of LinkedIn, and I just saw the company getting valued on the stock market at a valuation of $9 billion or so, then I’m just ecstatic that my stake is worth $90 million, and that I haven’t sold any shares below that level. The main interest that I have in an IPO like this is as a price-discovery mechanism, rather than as a cash-raising mechanism. As TED says, LinkedIn has no particular need for any cash at all, let alone $300 million; if it had an extra $200 million in the bank, earning some fraction of 1% per annum, that wouldn’t increase the value of my stake by any measurable amount, because it wouldn’t affect the share price at all.

Now, let us look at this in another way. Salmon seems to be looking at it from the point of view of the client going up to the bank and saying, “I want to sell 100,000 shares in my company. Sell it at the best price you can.” Intuitively, this is not how things are supposed to work. At least, if the client is sensible, he would rather go to the bank and say, “I want to raise 5 million dollars. Raise it by diluting my current shareholders as little as possible.”

Now you can see why the existing shareholders can be shafted. Suppose I owned one share of LinkedIn, out of a total of 100 shares outstanding. Suppose the company wanted to raise $9,000. The banker valued the company at $4,500, and thus priced the IPO at $45 a share, selling 200 fresh shares and making me end up with 1/300 of the company.

However, in hindsight, we know that the broad market values the company at $90 a share, implying that before the IPO the company was worth $9000. If the banker had realized this, he would have sold only 100 fresh shares of the company, rather than 200. The balance sheet would have looked exactly the same as it does now, with the difference that I would have owned 1/200 of the company then, rather than 1/300 now!
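Here is the same arithmetic as a toy calculation (a minimal sketch in Python; the numbers are from the stylized example above, not LinkedIn’s actual cap table):

```python
# Toy dilution calculation: raising the same $9,000 at two different IPO prices.
def post_ipo_stake(existing_shares, my_shares, raise_amount, ipo_price):
    new_shares = raise_amount / ipo_price   # fresh shares issued in the IPO
    return my_shares / (existing_shares + new_shares)

print(post_ipo_stake(100, 1, 9000, 45))  # banker's price: 200 new shares, 1/300
print(post_ipo_stake(100, 1, 9000, 90))  # market's price: 100 new shares, 1/200
```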

1/200 and 1/300 seem like small numbers without much difference, but if you understand that the total value of LinkedIn is $9 billion (approx) and if you think about pre-IPO shareholders who held much larger stakes, you know who has been shafted.

I’m not passing a comment here on whether the bankers were devious or incompetent, but I guess in terms of clients wanting to give them future business, both are enough grounds for disqualification.

Floor Space Index

In an extract from his latest book Triumph of the City, Ed Glaeser argues that one way to improve urban living would be to increase the floor space index and allow taller buildings. In another recent article, Ajay Shah argues that the presence of army land in the middle of cities hampers urban growth and development, by increasing intra-city distances and reducing space for the common man inside the cities. I was thinking about these two ideas from the point of view of Bangalore.

Floor space index (FSI) is a metric that controls the total supply of built-up area within a city. It is defined as the ratio of the built-up area of a building to the area of the plot it stands on. Currently, in Bangalore, it is capped at 1.5. This means that if I own a site measuring 60′ by 40′ (2,400 sq ft), the maximum built-up area I can construct on it is 3,600 sq ft. Clearly, by capping FSI, the total supply of residential area in a city is capped (assuming cities don’t expand outwards, of course). A lot of the development currently going on is of the type where builders acquire “underutilized property” (old bungalows, say) and then “unlock the value” by building up to the permissible limit.
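A quick sanity check of that arithmetic (a throwaway Python sketch; the 2.5 in the second call is a hypothetical higher cap, not a number anyone has proposed):

```python
def max_built_up_area(plot_length_ft, plot_width_ft, fsi):
    # FSI = built-up area / plot area, so max built-up = plot area * FSI
    return plot_length_ft * plot_width_ft * fsi

print(max_built_up_area(60, 40, 1.5))  # 3600.0 sq ft, as in the example above
print(max_built_up_area(60, 40, 2.5))  # 6000.0 sq ft under a hypothetical cap
```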

So I was wondering what would happen if the government were to decide tomorrow to act on Glaeser’s recommendation and suddenly increase the FSI. For one, it would jack up the value of land – since there is more value in each piece of land that can now be “unlocked”. On the other hand, it would lead to a gradual fall in the prices of apartments – since the supply of “floor space” would go up.

Existing owners of “independent houses” (where they own both the house and the land it’s built on) would be overjoyed – for now the value of the land they own would suddenly go up. Existing owners of apartments wouldn’t – their net worth takes a sudden drop. But all this doesn’t matter since both these groups are highly fragmented and are unlikely to matter politically.

What one needs to consider is how builders and real-estate developers would react to this kind of a move, since they have the ability to influence politics. For one, it would allow them to build additional floors in properties where they already own the land, so they have reason to stay positive. On the other hand, due to the increase in land prices, new development would become much more expensive than it is today, thus making it tough for them to expand. Another thing to note is that increased supply of housing and office space in the city would definitely negatively impact the prices of such holdings on the outskirts, and I’m of the opinion that a large number of real estate companies might actually be “long” housing space on the outskirts and would thus lose out in case the FSI were to be increased.

There are other implications of increasing FSI, of course. One of my biggest nightmares is that density in cities will increase at such a high rate that the sewerage systems won’t be able to handle the extra “flow”. And then there is the issue of increased traffic – though it can be argued that increased density means that commutes might actually come down. Overall, to my mind, the picture is unclear at this point, though given the incentives of the powerful real estate community, an increase is unlikely to happen. I would definitely welcome any increase in FSI, though (this has nothing to do with my financial situation; and yes, based on my current holdings I’m “long FSI”).

As for army land, there are vast areas that once used to be on the outskirts and are now inside the city. If the army were to decide to sell them to the city, I’m sure it would make a really large amount of money. But given that the army is not a profit-oriented institution, it has no need for the money, and so it will not let go of the land. In fact, as I write this, the army in Bangalore has taken up the development of lands around the Inner Ring Road – some townships and football fields have come up. But this is not the use that Shah envisaged, for none of it actually integrates into the local economy enough to make an impact. So for the army to sell the land, the decision would have to come from the central government. And given that an increase in in-city floor space is likely to negatively impact the powerful real estate companies, don’t be surprised if they were to lobby against the sale of urban army land.

Tailpiece: A while back there was this issue of Transferable Development Rights (TDRs). When the BBMP wanted to widen roads, it announced that people losing land would be compensated in the form of tradable TDRs, which permit the holder to build beyond the normal FSI cap. For that to be effective, a necessary condition is that the cost of violating the building code is actually high – a right to build extra floor space legally is worth little if people can build it illegally at no cost.

Successful IPOs

Check out this article in the Wall Street Journal. Read the headline. Does this sound right to you?

MakeMyTrip Opens Up 57% Post-IPO; May Be Year’s Best Deal

It doesn’t, to me. How in the world is the IPO successful if it has opened 57% higher in the first hour (it ended the first day 90% higher than the IPO price)? To rephrase, from whose point of view has the IPO been the “best deal”?

What this headline tells me is that MakeMyTrip has been well and truly shafted. If the stock has nearly doubled on the first day, all it means is that MMYT raised only about half the cash from the IPO that it could have. If nothing else, the IPO has been a spectacular failure from the company’s point of view.

The US has a screwed-up system for IPOs. In India there is a 100% book-building process – effectively an auction (though within a band) to determine the IPO price; in the US, as far as I understand, it is entirely the responsibility of the bank in charge of the IPO to distribute the stock. This is why working in Equity Capital Markets groups in investment banks is so much more work there than it is here – you need to go around to potential investors hawking the stock and convincing them to invest, and so on.

Now, the bank usually gets paid a percentage of the total money raised in the IPO, so it is in its interest to set the price as high as it can (and the fact that it is underwriting the issue means it can’t get too greedy and set a price no one will buy at). Or so the system is designed.

The problem arises because the firm that is IPOing is not the only client of the bank. Potential investors in the IPO are most likely clients of other divisions of the bank (say, sales and trading). By giving these investors a “good price” on the IPO (i.e. by setting the IPO price too low), the bank hopes to make up for the commission it loses by way of the business that these investors give to other divisions of the bank. If most of the IPO buyers are clients of the bank’s sales and trading division (it’s almost always the case), then what all these clients together gain from a low IPO price far outweighs the bank’s lost commission.
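To see the sizes involved, here is a stylized calculation (entirely hypothetical round numbers; the 7% fee is a commonly cited gross spread for US IPOs, and everything else is made up for illustration):

```python
# Stylized numbers: a $45 IPO that trades at $90, with a 7% gross spread.
fee_rate = 0.07          # assumed fee rate, often cited for US IPOs
shares_sold = 5_000_000  # hypothetical deal size
ipo_price = 45
market_price = 90        # where the stock trades on day one

raised = shares_sold * ipo_price
left_on_table = shares_sold * (market_price - ipo_price)
lost_fee = fee_rate * left_on_table

print(f"Raised: ${raised:,}")                        # $225,000,000
print(f"Clients' day-one gain: ${left_on_table:,}")  # $225,000,000
print(f"Bank's forgone fee: ${lost_fee:,.0f}")       # $15,750,000
# The bank gives up ~$16m in fees; its buy-side clients pocket $225m.
```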

It is probably because of this nexus that Google decided not to raise money the conventional way, and instead went through an auction (it made big news back then, but then that’s how things always happen in India, so we have a reason to be proud). Unfortunately, Google was able to do it only because it is Google; other companies have failed to successfully raise money by that process.

The nexus between investment banks and IPO investors remains, and unless enough companies want to do a Google, IPOing in the US won’t be a profitable option. Which makes it even more intriguing that MMYT chose to raise funds in the US and not here in India.

CTR

Ok, this is a post that has been delayed by about a couple of weeks. One of those things that has been in my head for a while now, so writing it. Some two or three Sundays back (more likely two), I went to the famous CTR in Malleswaram for breakfast. For the first time ever. Yeah, I know it’s supposed to be a classic place and all that, but it’s only now that I’m getting acquainted with the north/west parts of Bangalore, so I had completely missed out on it so far.

As per what several people had told me at various points of time in life, the Masala Dosa at CTR was brilliant. Unparalleled. The difference between CTR and Vidyarthi Bhavan is that the former makes masala dosa just the way other restaurants do, only much better and tastier. The dosa at Vidyarthi Bhavan is a different animal altogether, and I am told it has a very different composition from what is made in other restaurants.

There is another important difference between CTR and Vidyarthi Bhavan, and that’s in terms of service and crowd management. Vidyarthi Bhavan does an excellent job in this regard, striving to “rotate table covers” as quickly as possible. Within moments of you taking your seat, your order gets taken, the dosa arrives, as does the bill, along with a look from the waiter asking you what the fuck you are doing there considering you have finished your tiffin. Extremely efficient from the point of view of the restaurant (in terms of maximizing capacity) and for customers looking for a quick dosa, but not so for people who want to linger for a while and chat.

Unfortunately, the one time I’ve been to CTR (two Sundays back) I was in a bit of a hurry, since I had to go attend a quiz. Maybe the intention of the restaurant is to allow customers to sit for a while and chat, but I don’t know if you can actually do that, since at any given point of time (my report might be biased, since this was a Sunday morning at 9 am) there are four people waiting for you to leave so that they can grab your seat. This large waiting crowd is also, I think, a result of slow service at the restaurant (simple queuing theory: for a given arrival rate, the slower the service rate, the longer the average queue).
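For the queuing-theory aside, here is a minimal sketch (assuming an M/M/1 queue, i.e. Poisson arrivals and exponential service times, which is of course a gross simplification of a dosa joint; the rates are made-up numbers):

```python
# M/M/1 queue: average number in the system L = rho / (1 - rho), rho = lambda/mu
def avg_number_in_system(arrival_rate, service_rate):
    rho = arrival_rate / service_rate
    assert rho < 1, "queue blows up if arrivals outpace service"
    return rho / (1 - rho)

# Same arrival rate (20 parties/hour), two different service speeds:
print(avg_number_in_system(20, 30))  # faster service: 2.0 parties in the system
print(avg_number_in_system(20, 22))  # slower service: 10.0
```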

There were some simple things that CTR didn’t do so well. For example, making a customer wait for ten minutes before taking his order is not only ten minutes wasted for him, it is also ten minutes of absolutely unproductive “table time” – something a fast-food place like this can’t really afford. The ordered items also took a long time to arrive (most people at CTR have the same order – one “masaal” – so I do hope they make dosas “to stock”), but then their kitchen capacity may not match the capacity of the seating area (which isn’t much). You pay the bill at the table itself rather than at the counter, which means you sit there even longer. And so forth.

This post is supposed to be part of a series I was writing some four years back, examining the supply chain practices and delivery models of various fast food restaurants in Bangalore. I have only one observation on CTR, and based on it I don’t give the place very high marks for supply chain and delivery efficiency. However, the dosa there is so awesome that I’m sure I’ll brave the crowds and go there more often, and might then be able to make better observations about the process.

Collateralized Death Obligations

When my mother died last Friday, the doctors at the hospital where she had been for three weeks didn’t have a diagnosis. When my father died two and a half years back, the hospital where he’d spent three months didn’t have a diagnosis. In both cases, there were several hypotheses, but none of them were even remotely confirmed. In both cases, there have been a large number of relatives who have brought up the topic of medical negligence. In my father’s case, some people wanted me to go to consumer court. This time round, I had signed several agreements with the hospital absolving them of all possible complications, etc.

The relationship between the doctor and the patient is extremely asymmetric. It has to do with the number of counterparties, and with diversification. Any given “medical case” represents only a small proportion of the doctor’s total responsibility – it is likely that at any given point of time he is seeing about a hundred patients, and each case takes only a small part of his mind space. On the other hand, the same case represents 100% for the patient and his or her family. So, say, 1% on one side and 100% on the other, and you know where the problem is.

The medical profession works on averages. Doctors usually give a treatment with “95% confidence”. I don’t know how they come up with such confidence limits, or whether they state them explicitly, but it is a fact that no disease has a 100% sure-shot cure. From the doctor’s point of view, if he is administering a 95%-confidence treatment, he will be happy as long as his success rate is above that. The people for whom the treatment was unsuccessful are just “statistics”. After all, given the large number of patients a doctor sees, there is nothing better he can do.

The problem on the patient’s side is that it’s like Schrödinger’s measurement. Once a case has been handled, from the patient’s perspective it collapses to either 1 or 0. There is no concept of probabilistic success in his case. The process has either succeeded or it has failed. If it is the latter, it is simply due to his own bad luck – of ending up on the wrong side of the doctor’s coin. On the other hand, given the laws of aggregation and large numbers, doctors can come up with a “success rate” (ok, now I don’t know why this suddenly reminds me of CDOs (collateralized debt obligations)).
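That asymmetry between the aggregate and the individual is easy to see in a toy simulation (a deliberately abstract Python sketch, not a claim about any real treatment):

```python
import random

random.seed(1)
# A procedure that "works 95% of the time", applied to 10,000 patients
outcomes = [random.random() < 0.95 for _ in range(10_000)]
print(sum(outcomes) / len(outcomes))  # ~0.95: the doctor's aggregate success rate
print(outcomes[0])                    # True or False: all any one patient sees
```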

There is a fair bit of randomness in the medical profession. Every visit to the doctor, every process, every course of treatment is like a toin coss. Probabilities vary from one process to another but nothing is risk-free. Some people might define high-confidence procedures as “risk-free” but they are essentially making the same mistakes as the people in investment banks who relied too much on VaR (value at risk). And when things go wrong, the doctor is the easiest to blame.

It is unfortunate that a number of coins have fallen wrong side up when I’ve tossed them. The consequences of this have been huge, and it is chilling to try and understand what a few toin cosses can do to you. The non-linearity of the whole situation is overwhelming, and depressing. But then this random aspect of the medical profession won’t go away too easily, and all you can hope for when someone close to you goes to the doctor is that the coin falls the right way.

The Theory of Consistent Fuckability and Ladders for Men

Ok so the popular Ladder Theory states that men have only one ladder. It states that all men want to sleep with all women, and they simply rank every woman on the scale of how badly they want to sleep with her or whatever. Women, on the other hand, have two ladders – the “good” ladder, and the “friends” ladder, which allows them to get close to men without harbouring any romantic/sexual thoughts. Since men are incapable of exhibiting such behaviour, you get the concept of Gay Best Friend.

However, this absence of dual ladders for men exists only if you look at the short term. If you are a man and you are looking for a long-term relationship with genetic propagation as a part of your plans, I argue that the female twin ladders can be suitably modified in order to separate out “friends” from potential “bladees”. In order to aid this, I present the Theory of Consistent Fuckability.

From the ladder theory, we know that every man wants to sleep with every woman. For a fruitful, long-term, gene-propagating relationship, however, this is a necessary but not sufficient condition. As I had argued in another post, given that divorce is usually messy, the biggest cost in getting married to someone is the opportunity cost of getting into long-term relationships with the rest of the population. And if you are involved in gene-propagation, it is ideal if neither of the propagators cheats on the partner – from the point of view of the child’s upbringing and all such jazz.

So if you are a man and you want to marry someone, you must be reasonably sure that you want to sleep with her on a consistent basis. You should be willing to do her every day. If not, there is a good chance that you might want to cheat on her at a later date, which is not ideal from your genes’ point of view.

A small digression here. You might ask what happens to “ugly” women (basically, women considered unattractive by a large section of men). The argument is that the market helps you find your niche. For example, for you to want to cheat on a woman, there must be other women who are superior (on your scale) to her and who want you to do them. Given that I am extremely unattractive and that not too many “attractive” women will want to do me, I should be able to set my “consistent fuckability standard” appropriately.

Returning to the point: when you are evaluating a woman for MARRIAGE (note that this doesn’t apply to shorter-term, non-gene-propagating relationships), you will need to decide if you want to have sex with her on a consistent basis. And based on the answer to this question, you can divide the universe of all women into two – those you want to do consistently and those you don’t. And they form your two ladders.

Now, reasonably independent of this consistent fuckability factor (maybe there’s a low positive correlation on one of the ladders), you can evaluate women on other factors such as emotional compatibility, strengths, weaknesses, culture fit and all that jazz. And rank them on those. Then apply the distinction on the consistency factor, and you have your two ladders. There is the “friends” ladder – which is different from women’s friends ladders, in the sense that you want to sleep with its occupants, just not on a consistent basis. And there is the “good” ladder of those you want to do consistently.

To summarize, consistent fuckability is a necessary but not sufficient condition for a fruitful, multiplicative, gene-propagating long-term relationship; and because of this, under certain circumstances, men also develop a pair of ladders.

Currently listening to: When I’m Sixty Four, The Beatles

The Perils of Notes Dictation

Thinking about my history lessons in school, one picture comes to mind readily. A dark Mallu lady (she taught us history in the formative years between 6th and 8th standard) looking down at her set of voluminous notes and dictating. And all of us furiously writing, so as to not miss a word of what she said. For forty minutes this exercise would continue, and then the bell would ring. Hands weary with all the writing, we would put our notebooks in our bags and look forward to a hopefully less strenuous next “perriod”.

The impact of this kind of “teaching” on schoolchildren’s attitude towards history, and their collective flocking to science in 11th standard, is obvious. There are so many things that are so obviously wrong with this mode of “teaching”. I suppose I’ll save that for elsewhere. Right now, I’m talking about the perils of note-making itself.

Before sixth standard and history, in almost all courses we would be dictated “questions and answers”. The questions that appeared in the exam would typically be a subset of the Q&A dictated in class. In fact, I remember that some of the more enthu teachers would write the stuff out on the board rather than just dictating. I’m still amazed at how I used to fairly consistently top the class in those days of “database query” exams.

I’m thinking about this from the point of view of its impact on language. Most people who taught me English in that school had a fairly good command over the language, and could be trusted to teach us good English. However, I’m not sure I can say the same about the quality of language of the other teachers. All of them were conversant in English, yes, and my school was fairly strict about being “English-medium”. However, the quality of English, especially in terms of grammar and pronunciation, of a fair number of teachers left a lot to be desired.

I can still remember the odd moment of thinking “this is obviously grammatically incorrect” and then proceeding to jot down what the teacher said “in my own words”. I’m sure there were other classmates who did the same. However, I’m also sure that a large number of people in the class just accepted what the teacher said to be right – in terms of language, that is.

What this process of dictation did was that teachers with horrible accents, grammar, pronunciation, or all of the above passed on their bad language skills to unsuspecting students. All the possible good work that the English teachers had done was undone. There is a chance that this bad pronunciation and grammar would have been passed on even if the teachers didn’t dictate notes, for the students would just blindly imitate what the teachers said. However, the amount by which they copied different teachers would not then be weighted by the amount of notes each teacher dictated, and I think a case can be made that the quality of a teacher is inversely proportional to the amount of notes he or she dictates.

Teachers will not change, because dictation is the way they themselves have been taught to “teach”. The onus has to be on schools to make sure that teachers don’t pass on their annoying language habits to students. A good place to start would be to stop them from dictating notes. I still don’t understand the value of writing down notes you don’t really bother to understand, when there are a number of reasonably good textbooks and guidebooks available in the market. I agree that for earlier classes some amount of note-making might be necessary (I think even that can be dispensed with), but in that case the school needs to be more careful about the language skills of the people it recruits to dictate those notes.