Round Tables

One of the “features” of being in a job is that you get invited to conferences and “industry events”. I’ve written extensively about one of them in the past – the primary purpose of these events is for people to be able to sell their companies’ products, their services and even themselves (job-hunting) to other attendees.

Now, everyone knows that this is the purpose of these events, but it is one of those things that is hard to admit. “I’m going to this hotel to get pitched to by 20 vendors” is not usually a good enough reason to bunk work. So there is always a “front” – an agenda that makes it seemingly worthy for people to attend these events.

The most common one is to have talks. This can help attract people at two levels. There are some people who won’t attend talks unless they have also been asked to talk, and so they get invited to talk. And then there are others who are happy to just attend and try to get “gyaan”, and they get invited as the audience. The other side of the market soon appears, paying generous dollars to hold the event at a nice venue, and to be able to sell to all the speakers and the audience.

Similarly, you have panel discussions. Organisers in general think this is one level better than talks – instead of the audience being bored by ONE person for half an hour, they are bored by about 4-5 people (and one moderator) for an hour. Again there is a hierarchy here – some people won’t want to attend unless they have been put on the panel. And who gets to be on the panel is a function of how desperate one or more sponsors are to sell to the potential panellists.

The one thing most of these events get right is to have sufficient lunch and tea breaks for people to talk to each other. Then again, these are brilliant times for sponsors to be able to sell their wares to the attendees. And it has the positive externality that people can meet and “network” and talk among themselves – which is the best value you can get out of an event like this one.

However, there is one kind of event that I’ve attended a few times, but I can’t understand how they work. This is the “round table”. It is basically a closed room discussion with a large number of invited “panellists”, where everyone just talks past each other.

Now, at one level I understand this – this is a good way to get a large number of people to sell to without necessarily putting a hierarchy in terms of “speakers” / “panellists” and “audience”. The problem is that what they do with these people is beyond my imagination.

I’ve attended two of these events – one online and one offline. The format is the same. There is a moderator who goes around the table (not necessarily in any particular order), with one question to each participant (the better moderators would have prepared well for this). And then the participant gives a long-winded answer to that question, and the answer is not necessarily addressed at any of the other participants.

The average length of each answer and the number of participants means that each participant gets to speak exactly once. And then it is over.

The online version of this was the most underwhelming event I ever attended – I didn’t remember anything of what anyone said, and assumed that the feeling was mutual. I didn’t even bother checking out these people on LinkedIn after the event was over.

The offline version I attended was better in that at least we got to talk to each other after the event. But the event itself was rather boring – I’m pretty sure I bored everyone with my monologue when it was my turn, and I don’t remember anything that anyone else said at this event. The funny thing was – the event wasn’t recorded, and there was hardly anyone from the organising team at the discussion. There was just no point to all of us talking for so long. It was like people who organise Satyanarayana Poojes to get an excuse to have a party at home.

I’m wondering how this kind of event can be structured better. I fully appreciate the sponsors and their need to sell to the lot of us. And I fully appreciate that it gives them more bang for the buck to have 20 people of roughly equal standing to sell to – with talks or panels, the “potential high value customers” can be fewer.

However – wouldn’t it be far more profitable to them to be able to spend more time actually talking to the lot of us and selling, rather than getting all of us to waste time talking nonsense to each other? Like – maybe just a party or a “lunch” would be better?

Then again – if you want people to travel inter-city to attend this, a party is not a good enough excuse for people to get their employers to sponsor their time and travel. And so something inane like the “round table” has to be invented.

PS: There is this school of thought that temperatures in offices and events are set at a level that is comfortable for men but not for women. After one recent conference I attended I have a theory on why this is the case. It is because of what is “acceptable formal wear” for men and women.

Western formal wear for men is mostly the suit, which means dressing up in lots of layers, and maybe even constraining your neck with a tie. And when you are wearing so many clothes, the environment better be cool else you’ll be sweating.

For women, however, formal wear need not be so constraining – it is perfectly acceptable to wear sleeveless tops, or dresses, for formal events. And the temperatures required to “air” the suit-wearers can be too cold for women.

At a recent conference I was wearing a thin cotton shirt and could thus empathise with the women.


The Law Of Comparative Advantage and Priorities

Over a decade ago I had written about two kinds of employees – those who offer “competitive advantage” and those who offer “comparative advantage”.

Quoting myself:

So in a “comparative advantage” job, you keep the job only because you make it easier for one or more colleagues to do more. You are clearly inferior to these colleagues in all the “components” of your job, but you don’t get fired only because you increase their productivity. You become the Friday to their Crusoe.

On the other hand, you can keep a job for “competitive advantage”. You are paid because there are one or more skills that the job demands in which you are better than your colleagues.

Now, one issue with “comparative advantage” jobs is that sometimes it can lead to people being played out of position. And that can reduce the overall productivity of the team, especially when priorities change.

Let’s say you have 2 employees A and B, and 2 high-priority tasks X and Y. A dominates B – she is better and faster than B in both X and Y. In fact, B cannot do X at all, and is inferior to A when it comes to Y. Given these tasks and employees, the theory of comparative advantage says that A should do X and B should do Y. And that’s how you split it.

In the real world, though, there can be a few issues – for one, A might be better at X than B, but she just doesn’t want to do X. Secondly, by putting the slower B on Y, there is a floor on how soon Y can be delivered.

And if for some reason Y becomes high priority for the team, with the current work allocation there is no option but to just wait for B to finish Y, or to get A to work on Y as well (thus leaving X in the lurch, and the otherwise good A unhappy). A sort of no-win situation.
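
To make the “floor” on Y concrete, here is a minimal sketch with made-up numbers (the durations are purely illustrative assumptions, not from any real team):

    # Hypothetical task durations in days (purely illustrative assumptions)
    a_days = {"X": 4, "Y": 3}     # A is better and faster at both tasks
    b_days = {"X": None, "Y": 9}  # B cannot do X at all, and is slower at Y

    # Comparative-advantage allocation: A takes X, B takes Y
    finish_x = a_days["X"]  # X gets done in 4 days
    finish_y = b_days["Y"]  # Y cannot land before day 9 -- the "floor"

    # If Y suddenly becomes the top priority, the only options are to
    # wait the full 9 days for B, or to move A onto Y (3 days) and
    # leave X in the lurch (and A unhappy).
    print(finish_x, finish_y)  # 4 9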

The whole team ends up depending on the otherwise weak B, a sort of version of this:

A corollary is that if you have been given what seems like a major responsibility it need not be because you are good at the task you’ve been given responsibility for. It could also be because you are “less worse” than your colleagues at this particular thing than you are at other things.



Average skill and peak skill

One way to describe the complexity of a job is to measure the “average level of skill” and the “peak level of skill” required to do it. The more complex the job is, the larger this difference is. And sometimes, the frequency at which the peak level of skill is required can determine the quality of people you can expect to attract to the job.

Let us start with one extreme – the classic case of someone turning screws in a Ford factory. The design has been done so perfectly and the assembly line so optimised that the level of skill required by this worker each day is identical. All he/she (much more likely a he) has to do is to show up at the job, stand in the assembly line, and turn the specific screw in every single car (or part thereof) that passes his way.

The delta between the complexity of the average day and the “toughest day” is likely to be very low in this kind of job, given the amount of optimisation already put in place by the engineers at the factory.

Consider a maintenance engineer (let’s say at an oil pipeline) on the other hand. On most days, the complexity required of the job is very close to zero, for there is nothing much to do. The engineer just needs to show up and potter around and make a usual round of checks and all izz well.

On a day when there is an issue, however, things are completely different – the engineer now needs to identify the source of the issue, figure out how to fix it and then actually put in the fix. Each of these is an insanely complex process requiring insane skill. This maintenance engineer needs to be prepared for this kind of occasional complexity, and despite the banality of most of his days on the job, maintain the requisite skill to do the job on these peak days.

In fact, if you think of it, a lot of “knowledge” jobs, which are supposed to be quite complex, actually don’t require a very high level of skill on most days. Yet, most of these jobs tend to employ people at a far higher skill level than what is required on most days, and this is because of the level of skill required on “peak days” (however you define “peak”).

The challenge in these cases, though, is to keep these high skilled people excited and motivated enough when the job on most days requires pretty low skill. Some industries, such as oil and gas, resolve this issue by paying well and giving good “benefits” – so even an engineer who might get bored by the lack of work on most days stays on to be able to contribute in times when there is a problem.

The other way to do this is in terms of the frequency of high skill days – if you can somehow engineer your organisation such that the high skilled people have a reasonable frequency of days when high skills are required, then they might find more motivation. For example, you might create an “internal consulting” team of some kind – they are tasked with performing a high skill task across different teams in the org. Each time this particular high skill task is required, the internal consulting team is called for. This way, this team can be kept motivated and (more importantly, perhaps) other teams can be staffed at a lower average skill level (since they can get help on high peak days).

I’m reminded of my first ever real taste of professional life – an internship in an investment bank in London in 2005. That was the classic “high variance in skills” job. Having been tested on fairly extreme maths and logic before I got hired, I found that most of my days were spent just keying numbers into an Excel sheet to call a macro someone else had written to price swaps (interest rate derivatives).

And being fairly young and immature, I decided this job was not worth it for me, and did not take up the full time offer they made me. And off I went on a rather futile “tour” to figure out what kind of job has sufficient high skill work to keep me interested. And then left it all to start my own consultancy (where others would ONLY call me when there was work of my specialty; else I could chill).

With the benefit of hindsight (and having worked in a somewhat similar job later in life), though, I had completely missed the “skill gap” (delta between peak and average skill days) in my internship, and thus not appreciated why I had been hired for it. Also, that I spent barely two months in the internship meant I didn’t have sufficient data to know the frequency of “interesting days”.

And this is why – most of your time might be spent in writing some fairly ordinary code, but you will still be required to know how to reverse a red-black tree.

Most of your time might be spent in writing SQL queries or pulling some averages, but on the odd day you might need to know that a chi square test is the best way to test your current hypothesis.

Most of your time might be spent in managing people and making sure the metrics are alright, but on the odd day you might have to redesign the process at the facility that you are in charge of.

In most complex jobs, the average day is NOT similar to the most complex day by any means. And thus the average day is NOT representative of the job. The next time someone I’m interviewing asks me what my “average day looks like”, I’ll maybe point that person to this post!

Key Person Risk and Creative Professions

I’m coming to the conclusion that creative professions inevitably come with a “key person risk”. And this is due to the way teams in such professions are usually built.

I’ll start with a tweet that I put out today.

(I had NOT planned this post at the time when I put out this tweet)

I’ll not go into defining creative professions here, but I will just say that you typically know one when you see it.

The thing with teams in such professions is that people who are good and creative are highly unlikely to get along with each other. Going into the animal kingdom for an analogy, we can think of dividing everyone in any such profession into “alphas” and “betas”. Alphas are the massively creative people who usually rise to lead their teams. Betas are the rest.

And given that any kind of creativity is due to some amount of lateral thinking, people good at creative professions are likely to hallucinate a bit (hallucination is basically lateral thinking taken to an extreme). And stretching it a bit more, you can say that people who are good at creative tasks are usually mad in one way or another.

As I had written briefly this morning, it is not usual for mad people (especially of a similar nature of madness) to get along with each other. So if you have a creative alpha leading the team, it is highly unlikely that he/she will have similar alphas in the next line of leadership. It is more likely that the next line of leadership will have people who are good complements to the alpha leader.

For example, in the ongoing World Cup, I’ve seen several tactical videos that have all said one thing – that Rodrigo De Paul’s primary role in the Argentinian team is to “cover for Messi”. Messi doesn’t track back, but De Paul will do the defending for him. Messi largely switches off, but De Paul is industrious enough to cover for Messi. When Messi goes forward, De Paul goes back. When Messi drops deep, De Paul makes a forward run.

This is the most typical creative partnership that you can get – one very obviously alpha creative supported by one or more steady performers who enable the creative person to do the creative work.

The question is – what happens when the creative head (the alpha) leaves? And the answer to this is going to be different in elite sport and in the corporate world (and I’m mostly talking about the latter in this post).

In elite sport, when Messi retires (which he is likely to do after tomorrow’s final, irrespective of the result), it is virtually inconceivable that Argentina will ask De Paul to play in his position. Instead, they will look into others who are already playing in a sort of Messi role, maybe (or likely) at an inferior level and bring them up. De Paul will continue to play his role of central midfielder and continue to support whoever comes into Messi’s role.

In corporate setups, though, when one employee leaves, the obvious thing to do is to promote that person’s second in command. Sometimes there might be a battle for succession among various seconds in command, and the losers also leave the company. For most teams, where seconds in command are usually similar in style to the leader, this kind of succession planning works.

For creative teams, however, this usually leads to a disaster. More often than not, the second in command’s skills will be very different from those of the leader. If the leader had been an alpha creative (that’s the case we’re largely discussing here), the second in command is more likely to be a steady “water carrier” (a pejorative term used to describe France’s current coach Didier Deschamps).

And if this “water carrier” (no offence meant to anyone by this, but it is a convenient description) stays in the job for a long time, it is likely that the creative team will stop being creative. The thing that made it creative in the first place was the alpha’s leadership (this is especially true of small teams), and unless the new boss has recognised this and brings in a new set of alphas (or identifies potential alphas in the org and quickly promotes them), the team will start specialising in what was the new boss’s specialisation – which is to hold things steady and do all the right things and cover for someone who doesn’t exist any more.

So teams in creative professions have a key person risk in that if a particularly successful alpha leaves, the team as it remains is likely to stagnate and stop being creative. The only potential solutions I can think of are:

  • Bring in a new creative from outside to lead the team. The second in command remains just that
  • Coach the second in command to identify diverse (and creative alpha) talents within the team and recognise that there are alphas and betas. And the second in command basically leads the team but not the creative work
  • Organise the team more as a sports team where each person has a specific role. So if the attacking midfielder leaves, replace with a new attacking midfielder (or promote a junior attacking midfielder into a senior attacking midfielder). Don’t ask your defensive midfielders to suddenly become an attacking midfielder
  • Put pressure from above for alphas to have a sufficient number of other alphas as the next line of command. Retaining this team is easier said than done, and without betas the team can collapse.

Of course, if you look at all this from the perspective of the beta, there is an obvious question mark about career prospects. Unless you suddenly change your style (easier said than done), you will never be the alpha, and this puts in place a sort of glass ceiling for your career.

Heads of departments

Recently I was talking to someone about someone else. “He got an offer to join XXXXXX as CTO”, the guy I was talking to told me, “but I told him not to take it. Problem with CTO role is that you just stop learning and growing. Better to join a bigger place as a VP”.

The discussion meandered for a couple of minutes when I added “I feel the same way about being head of analytics”. I didn’t mention it then (maybe it didn’t flash), but this was one of the reasons why I lobbied for (and got) taking on the head of data science role as well.

I sometimes feel lonely in my job. It is not something anyone in my company can do anything about. The loneliness is external – I sometimes find that I don’t have too many “peers” (across companies). Yes, I know a handful of heads of analytics / data science across companies, but it is just that – a handful. And I can’t claim to empathise with all of them (and I’m sure the feeling is mutual).

Irrespective of the career path you have chosen, there comes a point in your career where your role suddenly becomes “illiquid”. Within your company, you are the only person doing the sort of job that you are doing. Across companies, again, there are few people who do stuff similar to what you do.

The kind of problems they solve might be different. Different companies are structured differently. The same role name might mean very different things in very different places. The challenges you have to face daily to do your job may be different. And more importantly, you might simply be interested in doing different things.

And the danger when you get into this kind of a role is that you “stop growing”. Unless you get sufficient “push from below” (team members who are smarter than you, and who are better than you on some dimensions), there is no natural way for you to learn more about the kind of problems you are solving (or the techniques). You find that your current level is more than sufficient to be comfortable in your job. And you “put peace”.

And then one day you find ten years have got behind you
No one told you when to run, you missed the starting gun

(I want you to now imagine the gong sound at the beginning of “Time” playing in your ears at this point in the blogpost)

One thing I tell pretty much everyone I meet is that my networking within my own industry (analytics and data science) is shit. And this is something I need to improve upon. Apart from the “push from below” (which I get), the only way to continue to grow in my job is to network with peers and learn from them.

The other thing is to read. Over the weekend I snatched the new iPad (which my daughter had been using; now she has got my wife’s old Macbook Air) and put all my favourite apps on it. I feel like I’m back in 2007 again, subscribing to random blogs (just that most of them are on substack now, rather than on Blogspot or Livejournal or WordPress), in the hope that I will learn. Let me see where this takes me.

And maybe some people decide that all this pain is simply not worth it, and choose to grow by simply becoming more managerial, and “building an empire”.

George Mallory and Metrics

It is not really known if George Mallory actually summited the Everest in 1924 – he died on that climb, and his body was only found in 1999 or so. It wasn’t his first attempt at scaling the Everest, and at 37, some people thought he was too old to do so.

There is this popular story about Mallory that after one of his earlier attempts at scaling the Everest, someone asked him why he wanted to climb the peak. “Because it’s there”, he replied.

George Mallory (extreme left) and companions

In the sense of adventure sport, that’s a noble intention to have. That you want to do something just because it is possible to do it is awesome, and can inspire others. However, one problem with taking quotes from something like adventure sport, and then translating it to business (it’s rather common to get sportspeople to give “inspirational lectures” to business people) is that the entire context gets lost, and the concept loses relevance.

Take Mallory’s “because it’s there” for example. And think about it in the context of corporate metrics. “Because it’s there” is possibly the worst reason to have a metric in place (or should we say “because it can be measured?”). In fact, if you think about it, a lot of metrics exist simply because it is possible to measure them. And usually, unless there is some strong context to it, the metric itself is meaningless.

For example, let’s say we can measure N features of a particular entity (take N = 4, and the features as length, breadth, height and weight, for example). There are N! ways in which these metrics can be combined, and if you take all possible arithmetic operations, the number of metrics you can produce from these basic N measurements is insane. And you can keep taking differences and products and ratios ad infinitum, so with a small number of measurements, the number of metrics you can produce is infinite (both literally and figuratively). And most of them don’t make sense.
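
To get a sense of the explosion, here is a minimal sketch (the feature names are the ones from the example above; the choice of operations and the code itself are just for illustration):

    from itertools import permutations

    # Base measurements from the example above
    features = ["length", "breadth", "height", "weight"]
    operations = ["+", "-", "*", "/"]

    # One round of pairwise combinations already gives
    # 12 ordered pairs x 4 operations = 48 candidate "metrics"
    derived = [f"({a} {op} {b})"
               for a, b in permutations(features, 2)
               for op in operations]
    print(len(derived))  # 48

    # Each derived metric can be combined with the others all over again,
    # so the space of possible metrics grows without bound -- and most
    # of them mean nothing.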

That doesn’t normally dissuade our corporate “measurer”. That something can be measured, that “it’s there”, is sometimes enough reason to measure something. And soon enough, before you know it, Goodhart’s Law would have taken over, and that metric would have become a target for some poor manager somewhere (and of course, soon ceases to be a good metric). And circular logic starts from there.

That something can be measured, even if it can be measured highly accurately, doesn’t make it a good metric.

So what do we do about it? If you are in a job that requires you to construct or design or make metrics, how can you avoid the “George Mallory trap”?

Long back when I used to take lectures on logical fallacies, I would have this bit on not mistaking correlation for causation. “Abandon your numbers and look for logic”, I would say. “See if the pattern you are looking at makes intuitive sense”.

I guess it is the same for metrics. It is all well to describe a metric using arithmetic. However, can you simply explain it in natural language, and can the listener easily understand what you are saying? And more importantly, does that make intuitive sense?

It might be fashionable nowadays to come up with complicated metrics (I do that all the time), in the hope that it will offer incremental benefit over something simpler, but more often than not the difficulty in understanding it makes the additional benefit moot. It is like machine learning, actually, where sometimes adding features can improve the apparent accuracy of the model, while you’re making it worse by overfitting.

So, remember that lessons from adventure sport don’t translate well to business. “Because it’s there” / “because it can be measured” is absolutely NO REASON to define a metric.

Speed, Accuracy and Shannon’s Channel Coding Theorem

I was probably the CAT topper in my year (2004) (they don’t give out ranks, only percentiles (to two digits of precision), so this is a stochastic measure). I was also perhaps the only (or one of the very few) person to get into IIMs that year despite getting 20 questions wrong.

It had just happened that I had attempted far more questions than most other people. And so even though my accuracy was rather poor, my speed more than made up for it, and I ended up doing rather well.

I remember this time during my CAT prep, when the guy who was leading my CAT factory suggested that I was making too many errors so I should possibly slow down and make fewer mistakes. I did that in a few mock exams. I ended up attempting far fewer questions. My accuracy (measured as % of answers I got wrong) didn’t change by much. So it was an easy decision to forget about accuracy and focus on speed, and that served me well.

However, what serves you well in an entrance exam need not necessarily serve you well in life. An exam is, by definition, an artificial space. It is usually bounded by certain norms (of the format). And so, you can make blanket decisions such as “let me just go for speed”, and you can get away with it. In a way, an exam is a predictable space. It is a caricature of the world. So your learnings from there don’t extend to life.

In real life, you can’t “get away with 20 wrong answers”. If you have done something wrong, you are (most likely) expected to correct it. Which means, in real life, if you are inaccurate in your work, you will end up making further iterations.

Observing myself, and people around me (literally and figuratively at work), I sometimes wonder if there is a sort of efficient frontier in terms of speed and accuracy. For a given level of speed and accuracy, can we determine an “ideal gradient” – the direction in which a person needs to move in order to make the maximum impact?

Once in a while, I take book recommendations from academics, and end up reading (rather, trying to read) academic books. Recently, someone had recommended a book that combined information theory and machine learning, and I started reading it. Needless to say, within half a chapter, I was lost, and I had abandoned the book. Yet, the little I read performed the useful purpose of reminding me of Shannon’s channel coding theorem.

Paraphrasing, what it states is that irrespective of how noisy a channel is, using the right kind of encoding and redundancy, we will be able to predictably send across information at a certain maximum speed. The noisier the channel, the more the redundancy we will need, and the lower the speed of transmission.
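
To put it a bit more formally (this is the standard textbook statement, roughly, not anything specific to the book I abandoned): every channel has a capacity C, and for any transmission rate R < C there exist codes that make the probability of error arbitrarily small, while no such codes exist for rates above C. For the textbook binary symmetric channel, which flips each bit with probability p, the capacity works out to

    C = 1 - H(p), \quad H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)

so the noisier the channel (p closer to 1/2), the more redundancy the code needs, and the lower the rate at which you can reliably transmit.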

In my opinion (and in the opinions of several others, I’m sure), this is a rather profound observation, and has significant impact on various aspects of life. In fact, I’m prone to abusing it in inexact manners (no wonder I never tried to become an academic).

So while thinking of the tradeoff between speed and accuracy, I started thinking of the channel coding theorem. You can think of a person’s work (or “working mind”) as a communication channel. The speed is the raw speed of transmission. The accuracy (rather, the lack of it) is a measure of noise in the channel.

So the less accurate someone is, the more the redundancy they require in communication (or in work). For example, if you are especially prone to mistakes (like I am sometimes), you might need to redo your work (or at least a part of it) several times. If you are the more accurate types, you need to redo less often.

And different people have different speed-accuracy trade-offs.

I don’t have a perfect way to quantify this, but maybe we can think of “true speed of work” by dividing the actual speed at which someone does a piece of work by the number of iterations they need to get it right. OK it is not so straightforward (there might be other ways to build redundancy – like getting two independent people to do the same thing and then tally the numbers), but I suppose you get the drift.
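
If you want a crude formula (the error probability e and the independence assumption here are mine, purely to make the arithmetic work): if each attempt at a piece of work independently goes wrong with probability e and has to be redone, the expected number of iterations is 1/(1-e), so

    \text{effective speed} \approx \frac{\text{raw speed}}{E[\text{iterations}]} = \text{raw speed} \times (1 - e)

which is one way of making the “true speed of work” idea precise.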

The interesting thing here is that the speed-accuracy tradeoff depends not only on the person but also on the nature of the work itself. For me, a piece of work that on average takes 1 hour has a different speed-accuracy tradeoff compared to a piece of work that on average takes a day (usually, the more complicated and involved a piece of analysis, the more the error rate for me).

In any case, the point to be noted is that the speed-accuracy tradeoff is different for different people, and in different contexts. For some people, in some contexts, there is no point at all in expecting highly accurate work – you know they will make mistakes anyways, so you might as well get the work done quickly (to allow for more time to iterate).

And in a way, figuring out speed-accuracy tradeoffs of the people who work for you is an important step in getting the best out of them.


Recruitment and diversity

This post has potential to become controversial and is related to my work, so I need to explicitly state upfront that all opinions here are absolutely my own and do not, in any way, reflect those of my employers or colleagues or anyone else I’m associated with.

I run a rather diverse team. Until my team grew inorganically two months back (I was given more responsibility), there were eight of us in the team. Each of us has a masters degree (ok we’re not diverse in that respect). Sixteen degrees / diplomas in total. And from sixteen different colleges / universities. The team’s masters degrees are in at least four disjoint disciplines.

I have built this part of my team ground up. And have made absolutely no attempt to explicitly foster diversity in my team. Yet, I have a rather diverse team. You might think it is by accident. You might find weird axes on which the team is not diverse at all (masters degrees is one). I simply think it is because there was no other way.

I like to think that I have fairly high standards when it comes to hiring. Based on the post-interview conversations I have had with my team members, these standards have percolated to them as well. This means we have a rather tough task hiring. This means very few people even qualify to be hired by my team. Earlier this year I asked for a bigger hiring budget. “Let’s see if you can exhaust what you’ve been given, and then we can talk”, I was told. The person who told me this was not being sarcastic – he was simply aware of my demand-supply imbalance.

Essentially, in terms of hiring I face such a steep demand-supply imbalance that even if I wanted to, it would be absolutely impossible for me to discriminate while hiring, either positively or negatively.

If I want to hire less of a certain kind of profile (whatever that profile is), I would simply be letting go of qualified candidates. Given how long it takes to find each candidate in general, imagine how much longer it would take to find candidates if I were to only look at a subset of applicants (to prefer a category I want more of in my team). Any kind of discrimination (apart from things critical to the job such as knowledge of mathematics and logic and probability and statistics, and communication) would simply mean I’m shooting myself in the foot.

Not all jobs, however, are like this. In fact, a large majority of jobs in the world are of the type where you don’t need a particularly rare combination of skills. This means potential supply (assuming you are paying decently, treating employees decently, etc.) far exceeds demand.

When you’re operating in this kind of a market, cost of discrimination (either positive or negative) is rather low. If you were to rank all potential candidates, picking up number 25 instead of number 20 is not going to leave you all that worse off. And so you can start discriminating on axes that are orthogonal to what is required to do the job. And that way you can work towards a particular set of “diversity (or lack of it) targets”.

Given that a large number of jobs (not weighted by pay) belong to this category, the general discourse is that if you don’t have a diverse team it is because you are discriminating in a particular manner. What people don’t realise is that it is pretty much impossible to discriminate in some cases.

All that said, I still stand by my 2015 post on “axes on diversity“. Any externally visible axis of diversity – race / colour / gender / sex / sexuality – is likely to diminish diversity in thought. And – again this is my personal opinion – I value diversity in thought and approach much more than the visible sources of diversity.


Why calls are disruptive to work

It is well known in my company that I don’t like phone calls. I mean – they are useful at times, but they have their time and place. For most normal office communication, it is far easier to do it using chat or mail, and less disruptive to your normal work day.

Until recently, I hadn’t been able to really articulate why phone calls (this includes Meet / Zoom / Teams / whatever) are disruptive to work, but recently had an epiphany when I was either drunk or hungover (can’t remember which now) during/after a recent company party.

Earlier that day, during the said party, one colleague (let’s call him C1) had told me about another colleague (let’s call him C2) and his (C2’s) penchant for phone calls. “Sometimes we would have written a long detailed document”, C1 said, “and then C2 will say, ‘I have to make one small point here. Can you please call me?’. He’s just the opposite of you”

I don’t know why after this I started thinking about circuit switching and packet switching. And then I realised why I hate random office calls.

Currently I use a Jio connection for my phone. The thing with Jio (and 4G in general, I think) is that it uses packet switching for phone calls – it uses the same data network for calls as well. This is different from earlier 2G (and 3G as well, if I’m not wrong) networks where calls were made on a separate voice (circuit switching) network. Back then, if you got a call, your phone’s data connection would get interrupted – no packets could be sent because your phone was connected through a circuit. It was painful.

Now, with packet switching for phone calls as well, the call “packets” and the browsing “packets” can coexist and co-travel on the “pipes” connecting the phone to the tower and the wide world beyond. So you can take phone calls while still using data.

Phone calls in the middle of work disrupt work in exactly the same way.

The thing with chatting with someone while you’re working is that you can multitask. You send a message and by the time they reply you might have written a line of code, or sent another message to someone else. This means chatting doesn’t really disrupt work – it might slow down work (since you’re also doing work in smaller packets now), but your work goes on. Your other chats go on. You don’t put your life on hold because of the conversation.

A work phone call (especially if it has to be a video call) completely disrupts this network. Suddenly you have to give one person (or persons) at the end of the line your complete undivided attention. Work gets put on hold. Your other conversations get put on hold. The whole world slows down for you.

And once you hang up, you have the issue of gathering the context again on what you were doing and what you were thinking about and the context of different conversations (this is a serious problem for me). Everything gets disrupted. Sometimes it is even difficult to start working again.

I don’t know if this issue is specific to me because of my ADHD (and hence the issues in restarting work). Actually – ADHD leads to another problem. You might be hyper focussing on one thing at work, and when you get a call you are still hyper focussed on the same thing. And that means you can’t really pay attention to the call you are on, and can end up saying some shit. With chat / email, you don’t need to respond to everything immediately, so you can wait until the hyper focus is over!

In any case, I’m happy that I have the reputation I have, that I don’t like doing calls and prefer to do everything through text. The only downside of this I can think of is that you have to put everything in writing.

PSA: Google Calendar now allows you to put “focus time” on your own calendar. So far I haven’t used it too much but plan to use it more in the near future.


Junior Data Scientists

Since this is a work related post, I need to emphasise that all opinions in this are my own, and don’t reflect those of any organisation / organisations I might be affiliated with

The last-released episode of my Data Chatter podcast is with Abdul Majed Raja, a data scientist at Atlassian. We mostly spoke about R and Python, the two programming languages / packages most used for data science, and spoke about their relative merits and demerits.

While we mostly spoke about R and Python, Abdul’s most insightful comment, in my opinion, had to do with neither. While talking about online tutorials and training, he spoke about how most tutorials related to data science are aimed at the entry level, for people wanting to become data scientists, and that there was very little readymade material to help people become better data scientists.

And from my vantage point, as someone who has been heavily trying to recruit data scientists through the course of this year, this is spot on. A lot of the work I see (most candidates who apply to my team get put through an open ended assignment) seems uncorrelated with the stated years of experience on the candidates’ CVs. Essentially, a lot of them just appear “very junior”.

This “juniority”, in most cases, comes through in the way that people have done their assignments. A telltale sign, for example, is an excessive focus on necessary but nowhere near sufficient things such as data cleaning, variable transformation, etc. Another telltale sign is the simple application of methods without bothering to explain why the method was chosen in the first place.

Apart from the lack of tutorials around, one reason why the quality of data science profiles continues to remain “junior” could be the organisation of teams themselves. To become better at your job, you need to interact with people who are better than you at your job. Unfortunately, the rapid rise in demand for data scientists in the last decade has meant that this peer learning is not always there.

Yes – if you are a bunch of data scientists working together, you can pull each other up. However, if many of you have come in through the same process, it is that much more difficult – there is no benchmark for you.

The other thing is the structure of the teams (I’m saying this with very little data, so call me out if I’m bullshitting) – unlike software engineers, data scientists seldom work in large teams. Sometimes they are scattered across the organisation, largely working with tech or business teams. In any case, companies don’t need that many data scientists. So the number is low to start off with as well.

Another reason is the structure of the market – for the last decade the demand for data scientists has far exceeded the available supply. So that has meant that there is no real reason to upskill – you’ll get a job anyway.

Abdul’s solution, in the absence of tutorials, is for data scientists to look at other people’s code. The R community, for example, has a weekly Tidy Tuesday data challenge, and a lot of people who take that challenge put up their code online. I’m pretty certain similar resources exist for Python (on Kaggle, if not anywhere else).

So for someone who wants to see how other data scientists work and learn from them, there are plenty of resources around.

PS: I want to record a podcast episode on the “pile stirring” epidemic in machine learning (where people simply throw methods at a dataset without really understanding why that should work, or understanding the basic math of different methods). So far I’ve been unable to find a suitable guest. Recommendations welcome.