Channel Coding Theorem in Real Life

One of my favourite concepts in Computer Science is Shannon’s Channel Coding Theorem. This theorem is basically about the efficiency of communication over a noisy channel. And as I was thinking a few minutes back, this has interesting implications in real life as well, well away from the theory of communication.

I don’t have that much understanding of the rigorous explanation of the theorem. However, I absolutely love the central idea of it – that the noisier a channel is, the more redundancy you need in your communication, and thus the slower your communication. A corollary of this is that every channel has a “natural maximum speed”, and as long as you try to communicate within that speed, you can communicate reliably.

I won’t go into the technical details here – that involves assuming that the channel loses (or garbles) X% of bits, and then constructing a redundant code that shows that even with this loss, you can communicate effectively.
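For the curious, the simplest concrete version of this is the “binary symmetric channel”, which garbles each bit with probability p; its capacity works out to 1 − H(p), where H is the binary entropy function. A quick sketch (my own illustration, not the theorem’s formal machinery):

```python
import math

def binary_entropy(p):
    """H(p) in bits: the information content of a binary event with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits per channel use) of a binary symmetric channel
    that garbles each bit with probability p: C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)

# The noisier the channel, the lower the "natural maximum speed"
for p in (0.0, 0.05, 0.11, 0.5):
    print(f"flip probability {p:.2f} -> capacity {bsc_capacity(p):.3f} bits/use")
```

At p = 0.5 the capacity is zero – when the channel garbles half the bits, no amount of redundancy lets you communicate at any positive rate.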

Anyway, let’s leave behind the theory of communication and go on to real life.

I’ve found that I communicate badly when I’m not sure what language to talk in. If I’m talking in English with someone who I know knows good English, I communicate rather well (like my writing 😛 ) . However, if I’m not sure about the quality of language of the other person, I hesitate. I try to force myself to find simpler / more obvious words, and that disturbs my flow of thought, and I stammer.

Similarly, when I’m not sure whether to talk in Kannada or English (the two languages I’m very comfortable in), I stammer heavily. Again, because I’m not sure if the words I would naturally use will be understood by the other person (the counterparty’s comprehension being the “noise in the channel” here), I slow down, get jittery, and speak badly.

Then of course, there is the very literal interpretation of the channel coding theorem – when your internet connection (or call quality in general) is bad, you end up having to speak slower. When I was hunting for a job in 2020, I remember doing badly in a few interviews because of the quality (or lack thereof) of the internet connection (this was before I had discovered that Google Meet performs badly on Safari).

Similarly, sometime last month, I thought I had prepared well for what was going to be a key conversation at work. The internet was bad, we couldn’t hear each other and kept repeating ourselves (redundancy is how you overcome the noise in the channel), and that diminished throughput massively. Given the added difficulty in communication, I didn’t bring up the key points I had prepared for. It was a damp squib.

Related to this is when you aren’t sure if the person you are speaking to can hear clearly. This again clouds the communication channel, meaning you need to build in redundancy, and thus suffer a reduction in throughput.

When you are uncertain of yourself, or underconfident, you tend to do badly. That is because when you are uncertain, you aren’t sure if the other person will fully understand what you are going to say. Consequently, you end up talking slower, building redundancy into your speech, and so on. You are more doubtful of what you are going to say, and don’t take risks, since your lack of confidence has clouded the “communication channel”, thus depressing your throughput.

Again a lot of this might apply to me alone – I function best when I’m talking / writing at a certain minimum throughput, and operating at anywhere below that makes me jittery and underconfident and a bad communicator. It is no surprise that my writing really took off once I got a computer of my own.

That was in the beginning of July 2004, and within a month, I had started (the predecessor of) this blog. I’ve been blogging for 19 years now.

That aside aside, the channel coding theorem works in non-verbal contexts as well. Back in 2016, before my daughter was born, I remember reading somewhere that tentative mothers lead to cranky babies. The theory was that if the mum was anxious or afraid while handling her baby, the baby wouldn’t perceive the signals of touch sufficiently, and, starved of communication, would become cranky.

We had seen a few examples of this among relatives and friends (and this possibly applies to me as well – my mother had told me that I was the first newborn she ever handled, and so she was a bit tentative in handling me). This again can be explained using the Channel Coding Theorem.

When the mother’s touch is tentative, it is as if the touch channel between mother and child has some “noise”. The tentativeness of the touch means the baby is not really sure of what the mother is “saying”. With touch, unlike language or bits, redundancy is harder. And so the child grows up insufficiently connected to its mother.

Conversely, later on in life, these tentative mothers tend to bring in redundancy in their communications with their (now jittery) children, and end up holding them too hard, and not letting them go (and some of these children go to therapists, who inevitably blame it on the mothers 😛 ). Ultimately, all of this stems from the noise in the initial communication channel (thanks to the tentativeness of the source).

Ok I’ve rambled on here, so will stop now. However, now that I’ve seeded this thought in you, you too will start seeing the channel coding theorem everywhere (oh – if you think this post is badly written, then that is again like reading this over a noisy channel. And you will get irritated with the lack of throughput and pack).

Chat GPT and Information Content

For the last week or so I’ve been fooling around with Chat GPT. It’s been a lot of fun, if you can endure the verbosity of its answers. Yesterday, at work, I used it for tech support when some people I was talking to were using too much jargon.

Before I came home from work yesterday, my last conversation was with a couple of recruiters, about a role that I’m hiring for. So when I got home, on a whim, I wondered if Chat GPT could write job descriptions for me.

I asked:

write a job description for a senior data scientist at India’s largest logistics company. 4-7 years exp. Needs to have experience in ML, OR and statistics, and be able to work close to business and on ambiguous problems

It answered (pasting in full since the screenshot above is limited):

Job Title: Senior Data Scientist

Company: India’s Largest Logistics Company

Location: India

Experience: 4-7 years

We are seeking an experienced and highly skilled Senior Data Scientist to join our team at India’s largest logistics company. In this role, you will be responsible for using machine learning, operations research, and statistics to drive business insights and decision making.

As a Senior Data Scientist, you will have a deep understanding of data science techniques and technologies, and will be able to apply them to complex and ambiguous problems. You will be able to work closely with business teams to understand their needs and to develop solutions that drive value for the company.

To be successful in this role, you should have at least 4-7 years of experience in data science, and should have a strong background in machine learning, operations research, and statistics. You should be able to work independently and be comfortable working on complex and ambiguous problems.

Responsibilities:

  • Use machine learning, operations research, and statistics to drive business insights and decision making
  • Work closely with business teams to understand their needs and develop solutions
  • Develop and maintain data pipelines and systems
  • Collaborate with other data scientists and cross-functional teams to develop and implement solutions

Qualifications:

  • Bachelor’s or Master’s degree in a quantitative field, such as computer science, statistics, or engineering
  • At least 4-7 years of experience in data science
  • Strong background in machine learning, operations research, and statistics
  • Ability to work independently and on complex and ambiguous problems
  • Excellent communication and collaboration skills
  • Experience with Python, R, or other programming languages is a plus

We offer a competitive salary and benefits package, as well as opportunities for growth and advancement. If you are an experienced data scientist with a passion for driving business insights and decision making, we encourage you to apply.

In the past when I’ve been hiring, I’ve been a bit annoyed by people asking for job descriptions. I’ve largely found them to be fluff and boilerplate, and I’ve tried to get away with a sentence or two. However, due to the insistence of recruiters and candidates alike, I’ve ended up writing elaborate job descriptions. I don’t need to any more.

This post, however, is not about hiring (but yeah – I am hiring, and the above job description is reasonably accurate). It’s about information content in general.

Traditionally, information content has been measured in bits. A bit is the amount of information contained in an equi-probable binary event.

Sometimes when we find that someone is too verbose and using too many words when fewer would suffice, we say that their bit rate is low. We also use “low bit rate” to describe people such as former Prime Minister Atal Behari Vajpayee, who would speak incredibly slowly.

However, beyond the bit, which is a fairly technical concept, it has been difficult to quantify information content. Sometimes you read an article or a story and find that there is nothing much to it. But given the nature of natural language, and the context in which various words appear, it is impossible to quantify the information content.

Now, with Chat GPT, maybe it becomes a bit easier (though one would need a “reverse chat GPT algo”, to find the set of prompts required for Chat GPT to churn out a particular essay). Above, for example, I’ve shown how much fluff there generally is to the average job description – a fairly short prompt generated this longish description that is fairly accurate.

So you can define the information content of a piece or essay in terms of the number of words in the minimum set of prompts required for Chat GPT (or something like it) to come up with it. If you are a boring, stereotypical writer, the set of prompts required will be small. If you are highly idiosyncratic, you will need to give a larger set of prompts for Chat GPT to write like you. You know where I’m going.
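A cruder, computable cousin of this idea is compression: boilerplate-heavy text compresses much better than text with no repeated structure, so compressed size can serve as a rough proxy for information content. A toy sketch (the texts and the proxy itself are my own illustration, not something from the post above):

```python
import random
import string
import zlib

def compressed_size(text):
    """Bytes needed to store the text after zlib compression - a crude
    proxy for its information content."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

# Repetitive job-description fluff vs. text with no repeated structure
jd_fluff = "We are seeking an experienced and highly skilled data scientist. " * 20
random.seed(42)
noise = "".join(random.choice(string.ascii_lowercase + " ")
                for _ in range(len(jd_fluff)))

print("fluff:", compressed_size(jd_fluff), "bytes compressed, out of", len(jd_fluff))
print("noise:", compressed_size(noise), "bytes compressed, out of", len(noise))
```

The fluff compresses to a small fraction of its raw size, while the unstructured text barely compresses at all – much like a short prompt regenerating a long job description.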

This evening, in office, a colleague commented that now it will be rather easy to generate marketing material. “Even blogs might become dead, since with a few prompts you can get that content”, he said (it can be a legit service to build off the Chat GPT API to take a tweet and convert it into an essay).

I didn’t tell him then but I have decided to take it up as a challenge. I consider myself to be a fairly idiosyncratic writer, which means I THINK there is a fair bit of information content in what I write, and so this blog will stay relevant. Let’s see how it goes.

PS: I still want to train a GAN on my blog (well over a million words, at last count) and see how it goes. If you know of any tools I can use for this, let me know!

 

Podcast: All Reals

I had spoken here a few times about starting a new “data podcast”, right? The first episode is out today, and in this I speak to S Anand, cofounder and CEO of Gramener, about the interface of business with data science.

It’s a long freewheeling conversation, where we talk about data science in general, about Excel, about data visualisations, pie charts, Tufte and all that.

Do listen – it should be available on all podcast platforms, and let me know what you think. Oh, and don’t forget to subscribe to the podcast. New episodes will be out every Tuesday morning.

And if you think you want to be on the podcast, or know someone who wants to be a guest on the podcast, you can reach out. datachatterpodcast AT gmail.

Launching: Data Chatter

A few weeks back I had mentioned here that I’m starting a podcast. And it is now ready for release. Listen to the trailer here:

It is a series of conversations about all things data. First episode will be out on Tuesday, and then weekly after that. I’ve already built up an inventory of seven episodes. So far I’ve recorded episodes about big data, business intelligence, visualisations, a lot of “domain-specific” analytics, and the history of analytics in India. And many more are to come.

Subscribe to the podcast to be able to listen to it whenever it comes out. It is available on all podcasting platforms. For some reason, Apple is not listed on the anchor site, but if you search for “Data Chatter” on Apple Podcasts, you should find it (I did).

And of course, feedback is welcome (you can just comment on this post). And please share this podcast with whoever else you think might like it.

The Mint Way and the NED Way

I wrote for Mint for six and a half years. I loved writing for them. The editors were fantastic, the copy desk was understanding, and for the most part (until the editor who hired me moved on), they published most of the stuff I wrote.

What I wrote for Mint also helped open doors, as I ended up striking up many conversations based on that (though I’ve forgotten if any of them converted to revenue generation). At least in my initial year of writing for them, when I did a data-based take on elections in the run up to Modi’s first national election, I seemed to get a lot of “footage” and attention.

However, one person who was definitely unimpressed with my writing for Mint was my wife. Apart from two or three articles (I remember this and this for sure), she considers most of my writing for Mint as being rather boring. And when I go back to read some of the stuff I’ve written for them, I must agree.

The sort of flow that is there to every post I write here (or at least most posts) is completely missing there. A lot of pieces seem to simply be collections of facts, with a small dose of analysis based on those facts. I hesitated to state my opinions and “take risks” in my writing. Barring one or two pieces I even hesitated to use a personal voice, appearing rather impersonal in most of my writing.

I had started writing for them at a time when I was starting to be known as a sort of data guy (I mainly wrote data related stuff for them). And from somewhere I had picked up this notion that it is honourable to be faithful to only the data, and to describe it as it is and to the extent possible simply state facts without taking sides.

And the fact that my pieces mostly appeared in the news pages (rather than the opinion pages) made me even more hesitant to use my personal voice in the writing – if it’s going to be news, I need to be as impersonal as possible, I thought. And so I wrote. The editors seemed to like it, since they kept me for six and a half years. Social media feedback tells me that at least occasionally the readers liked it. My wife never liked it.

Nowadays, from time to time, I find myself getting into “the Mint frame of mind” when I’m writing something. This happens when I need to get something out by a deadline, and I try to become too careful about what I’m stating and not bring in a personal opinion. So I try to find links to support every piece of information I put in. I try to be careful to not appear taking political sides. In other words, I get into “Mint mode”. And when I write in Mint mode, I end up writing stuff that, in hindsight, is not very interesting to read.

I guess my blog gives me the freedom that when I’m not writing well, I simply abandon the post. In that sense, the quality of my writing that you see has some selection bias – if I’m not happy with how something is going, I simply abandon it. However, my writing elsewhere doesn’t have that luxury, and so I sometimes end up “delivering shit”.

I really don’t know what I can do to prevent this from happening on a consistent basis. Maybe I should just blog more. And try and be myself when I write elsewhere as well. Maybe I should just write like I write a blog and then edit it to take out any personal touches, rather than trying to write impersonally in the first place.

OK I know I’ve rambled here 😛

Writing and monetisation

I started writing this blog, or its predecessor, in 2004. For nine years I made zero money off it. In fact, in 2008, after I moved to this website, I started paying money to run this blog, in terms of an annual domain name and hosting fee.

And then in 2013, I became part of a “big bundle”, as Mint offered me the opportunity to write for them. I had a contract to write at least three pieces a month around a particular topic, in return for which I would be paid a reasonable sum of money.

That sum of money was “reasonable” enough that it sort of provided me “tenure” until 2017, when I moved to London (I continued to write for Mint, and get paid, but the money wasn’t enough for “tenure” in London where expenses were higher). The Mint editor changed in early 2018, and the tenure ended in late 2018. I briefly got another tenure with the same editor at his new digs in 2019, but I decided to end that after a few months.

(By tenure, I mean steady stable income out of work that doesn’t take too much of my time. So I never had to struggle for basic expenses and every business deal was a bonus. Wonderful times.)

In other words, I built my reputation as a writer by myself, writing this blog (and its predecessor), and then monetised it by joining a large bundle.

Recent trends in the media seem to be reversing the process. Recently, for example, Andrew Sullivan, a journalist with the New York magazine, quit his job and started his own newsletter. And this seems to be the part of a larger trend.

Columnist Matt Taibbi left Rolling Stone in April to write on Substack full time. Andrew Sullivan did the same last week, leaving New York Magazine to resurrect his blog the Dish. Joan Niesen, a Sports Illustrated staff writer who was laid off in October, shortly after the magazine’s sale, started a free Substack newsletter last week.

Essentially, journalists who made their names as being part of big bundles, are leaving these bundles and instead trying to monetise on their own platforms. This is exactly the opposite of the route that I, and many other bloggers of the 2000s wave, took – build reputation independently and monetise as part of a bundle.

At the outset, I’m sceptical about lifelong bundlers leaving their bundles. Essentially, once you’ve gotten used to working as part of a large professional setup, you would have started taking a large number of things for granted, and replicating those things is not going to be easy once you go indie.

As a writer, for example, who will edit your copy? Who will make and design your graphics? Who will write your headlines? Who will tell you what to write about? While you might have experience as a journalist and be very good at your core job, being part of an institution means that you will find it very difficult to do everything by yourself (I THINK I’ve written something on these lines for non-journalism jobs as well, but I can’t be bothered to find that post now. I actually searched for it and found that like an idiot I’d written it on LinkedIn, and now I can’t find it).

Moreover, people will face “subscription fatigue”, and won’t want to subscribe to too many individual writers. A case can be made for bundling again (get a bunch of writers who write about sort of complementary stuff, and then bundle their newsletters for an integrated subscription). After all, all disruption is about either bundling or unbundling.

 

Credentialed and credential-less networks

Recently I tried out Instagram Reels, just to see what the big deal about it is. The first impression wasn’t great. My feed was filled with famous people (KL Rahul was there, along with some Bollywood actresses), doing supposedly funny things. Compared to the little I had seen of TikTok (I had the app installed for a day last year), this was barely funny.

In fact, from my first impression it seems like Instagram Reels is a sort of bastard child of TikTok and Quibi (I took a 90 day trial of Quibi and uninstalled after a month, having used it 2-3 times and got bored each time). There is already a “prior reputation network”, based on people’s followers on the main Instagram product. And Reels takes off on this.

This means that for a new person coming into this supposedly new social network, the barriers to entry to getting more followers are rather high. They need to compete with people who already have built their reputations elsewhere (either on the Instagram main product, or in the case of someone like KL Rahul, completely offline).

I was reading this blogpost yesterday that compared and contrasted social networking in the 2000s (blogs) with that of the 2010s (twitter). It’s a nice blogpost, though I should mention that it sort of confirms my biases since I sort of built my reputation using my blog in the late 2000s.

That post makes the same point – blogs created their own reputation networks, while twitter leverages people’s reputations from elsewhere.

The existence of the blue checks points to the way in which the barriers that a new blogger faced entering a community was far lower than is currently the case on twitter. The start-up costs of blogging were higher, but once somebody integrated themselves into a community and began writing, they were judged on the quality of that writing alone. Very little attention was paid to who that person was outside of the blogosphere. While some prominent and well known individuals blogged, there was nothing like the “blue checks” we see on twitter today. It is not hard to understand why this is. Twitter is an undifferentiated mass of writhing souls trying to inflict their angry opinions on the earth. Figuring out who to listen to in this twist of two-sentences is difficult. We use a tweeter’s offline affiliations to separate the wheat and the chaff.

For the longest time, I refrained from putting my real name on this blog (though it was easy enough to triangulate my identity based on all the things I’d written here). This was to create a sort of plausible deniability in case some employer somewhere got pissed off with what I was writing.

Most of the blogosphere was similarly pseudonymous (or even anonymous). For a lot of people I got to know through their blogging, I learnt about them from their writing before I knew anything else about them (that came from their “offline lives”). Reputation outside the blogosphere didn’t matter – your standing as a blogger depended only on the quality of your blogposts, and your comments on other people’s blogposts.

It is similar with TikTok – its “extreme machine learning” means that people’s reputations outside the network don’t matter in terms of people’s following on the network, and how likely they are to appear in people’s feeds. Instead, all that matters is the quality of the content on the platform, based (in TikTok’s case) on user engagement on the platform.

So as we look for an alternative to replace TikTok, given that the Chinese Communist Party seems to be able to get supposedly confidential data from it, we need to remember that we need a “fresh network”, or a “credential free” network.

Instagram has done something it’s good at, which is copying. However, given that it relies on existing credentials, Reels will never have the same experience as TikTok. Neither will any other similar product created from an existing social network. What we need is something that can create its own reputation network, bottom up.

Then again, blogging was based on an open platform so it was easy for people to build their networks. With something like TikTok relying heavily on network effects and algorithmic curation, I don’t know if such a thing can happen there.

Open and closed platforms

This is a blogpost that I had planned a very long time (4-5 weeks) ago, and I’m only getting down to write it now. So my apologies if the quality is not as good as my blogposts usually are. 

Many of you would have looked at the title of this blogpost and assumed that the trigger for this was the “acquisition” of Joe Rogan’s podcast by Spotify. For a large sum of money, Spotify is “taking his podcast private”, making it exclusive to Spotify subscribers.

However, this is only an “immediate trigger” for writing this post. I’d planned this post way back in April when I’d written one of my Covid-19 related blogposts – maybe it was this one.

I had joked the post needed to be on Medium for it to be taken seriously (a lot of covid related analysis was appearing on Medium around that time). Someone suggested I actually put it on Medium. I copied and pasted it there. Medium promptly took down my post.

I got pissed off and swore to never post on Medium again. I got reminded of the time last year when Youtube randomly pulled down one of my cricket videos when someone (an IP troll, I later learnt) wrongly claimed that I’d used copyrighted sounds in my video (the only sound in that video was my own voice). I had lodged a complaint with Youtube, and my video was resurrected, but it was off air for a month (I think).

Medium and Youtube are both examples of closed platforms. All content posted on these platforms is “native to the platform”. These platforms provide a means of distributing (and sometimes even marketing) the content, and all content posted there essentially belongs to the platform. Yes, you get paid a cut of the ad fee (in case your Youtube channel becomes super powerful, for example), but Youtube decides whether your video deserves to be there at all, and whose homepages to put it on.

The main feature of a closed platform is that any content created on the platform needs to be consumed on the same platform. A video I’ve uploaded on Youtube is only accessible on Youtube. A medium post can only be read on medium. A tweet can only be read on twitter. A Facebook post only on Facebook.

The advantage with closed platforms is that by submitting your content to the platform, you are hoping to leverage some benefits the platform might offer, like additional marketing and distribution, and discovery.

This blog doesn’t work that way. Blogposts work through this technology called “RSS”, and to read what I’m writing here you don’t need to necessarily visit noenthuda.com. You can read it on the feed reader of your choice (Feedly is what I use). Of course there is the danger that one feed reader can gain overwhelming market share, and the destruction of that feed reader can kill the ecosystem itself (like it happened with Google Reader in 2013). Yet, RSS being an open platform means that this blog still exists, and you can continue to receive it on the RSS reader of your choice. If Medium were to shut down tomorrow, all Medium posts might be lost.
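To illustrate what “open” means here: an RSS feed is just XML with a documented structure, so any client in any language can read it, with no platform in between. A minimal sketch using only Python’s standard library (the feed content and URLs below are made up):

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed of the kind any blog engine emits
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>Channel Coding Theorem in Real Life</title>
      <link>https://example.com/channel-coding</link>
    </item>
    <item>
      <title>Open and closed platforms</title>
      <link>https://example.com/open-closed</link>
    </item>
  </channel>
</rss>"""

# Any consumer can parse the feed and list the items
root = ET.fromstring(feed)
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

This is essentially all a feed reader like Feedly does: fetch the XML, pull out the items, and show them to you.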

Another example of an open platform is email – it doesn’t matter what email service or app you use, my email and yours is interoperable. India’s Universal Payment Interface (UPI) is another open platform – the sender and receiver can use apps of their choice and still transact.

And yet another open platform (which a lot of people didn’t really realise is an open platform) is podcasting. Podcasts run on the RSS protocol. So when you subscribe to a podcast using Apple Podcasts, it is similar to adding a blog to your Feedly. This thread by Ben Thompson of Stratechery (that I just stumbled upon when I started writing this post) sums it up well:

What Spotify is trying to do (with the Joe Rogan and Ringer deals) is to take this content off open platforms and put it on its own closed platform. Some people (like Rogan) will take the bait since they’re getting paid for it. However, this comes at the cost of control – like I’m not sure if we’ll have another episode of Rogan’s podcast where host and guest light up a joint.

Following my experiences with Medium and Youtube, when my content was yanked off for no reason (or for flimsy reasons), I’m not sure I like closed platforms any more. Rather, someone needs to pay me a lot of money to take my content to a closed platform (speaking of which, do you know that all my writing for Mint (written in 2013-18) is behind their newly erected paywall now?).

In closing I must mention that platforms being “open” and platforms being “free” are orthogonal. A paid podcast or newsletter is still on an open platform (see Ben Thompson tweetstorm above), since it can be consumed on a medium independent of the one where it was produced – essentially a different feed is generated depending on what the customer has paid for.

Now that I’ve written this post, I don’t know what the point of this is. Maybe it’s just for collecting and crystallising my own thoughts, which is the point behind most of my blogposts anyway.

PS: We have RSS feeds for text and podcasts for audio. I wonder why we don’t have a popular and open protocol for video.

Blogs and tweetstorms

The “tweetstorm” is a relatively new art form. It basically consists of a “thread” of tweets that serially connect to one another, which all put together are supposed to communicate one grand idea.

It is an art form that grew organically on twitter, almost as a protest against the medium’s 140 (now raised to 280) character limit. Nobody really knows who “invented” it. It had emerged by 2014, at least, as this Buzzfeed article cautions.

In the early days, you would tweetstorm by continuously replying to your own tweet, so the entire set of tweets could be seen by readers as a “thread”. Then in 2017, Twitter itself recognised that it was being taken over by tweetstorms, and added “native functionality” to create them.

In any case, as someone from “an older generation” (I’m from the blogging generation, if I can describe myself so), I was always fascinated by this new art form that I’d never really managed to master. Once in a while, rather than writing here (which is my natural thing to do), I would try and write a tweetstorm. Most times I didn’t succeed. Clearly, someone who is good at an older art form struggles to adapt to newer ones.

And then something clicked on Wednesday when I wrote my now famous tweetstorm on Bayes Theorem and covid-19 testing. I got nearly two thousand new followers, I got invited to a “debate” on The Republic news channel, and my tweetstorm is circulated in apartment Telegram groups (though so far nobody has yet sent me my own tweetstorm).

In any case, I don’t like platforms where I’m not in charge of content (that’s a story for another day), and so thought I should document my thoughts here on my blog. And I did so last night. At over 1200 words, it’s twice as long as my average blogpost (it tired me so much that the initial version, which went on my RSS feed, had a massive typo in the last line!).

And while I was writing that, I realised that the tone in the blog post was very different from what I sounded like in my famous tweetstorm. In my post (at least by my own admission, though a couple of friends have agreed with me), I sound reasonable and measured. I pleasantly build up the argument and explain what I wanted to explain with a few links and some data. I’m careful about not taking political sides, and everything. It’s what good writing should be like.

Now go read my tweetstorm:

Notice that right from the beginning I’m snide. I’m bossy. I come across as combative. And I inadvertently take sides here and there. Overall, it’s bad writing. Writing that I’m not particularly proud of, though it gave me some “rewards”.

I think that’s inherent to the art form. While you can use as many tweets as you like, you have a 280 character limit in each. Which means that each time you’re trying to build up an argument, you find yourself running out of characters, and you attempt to “finish your argument quickly”. That means that each individual tweet can come across as too curt or “to the point”. And when you take a whole collection of curt statements, it’s easy to come across as rude.

That is possibly true of most tweetstorms. However good your intention is when you sit down to write them, the form means that you will end up coming across as rude and highly opinionated. Nowadays, people seem to love that (maybe they’ve loved it all the time, and now there is an art form that provides this in plenty), and so tweetstorms can get “picked up” and amplified and you become popular. However, try reading it when you’re yourself in a pleasant and measured state, and you find that most tweetstorms are unreadable, and constitute bad writing.

Maybe I’m writing this blogpost because I’m loyal to my “native art form”. Maybe my experience with this artform means that I write better blogs than tweetstorms. Or maybe it’s simply all in my head. Or that blogs are “safe spaces” nowadays – it takes effort for people to leave comments on blogs (compared to replying to a tweet with abuse).

I’ll leave you with this superb old article from The Verge on “how to tweetstorm”.

Yet another social media sabbatical

Those of you who know me well know that I keep taking these social media sabbaticals. Once in a while I decide that I’m spending too much time on these platforms, wasting both time and mental energy, and log off. Time has come for yet another such break.

I had a bumper day on twitter yesterday. I wrote this one tweetstorm that went viral. Some 2000 plus retweets and all that. Basically I used some 15 tweets to explain Bayes’s Theorem, a concept that most people find really hard to understand.
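The tweetstorm itself isn’t reproduced here, but the crux of applying Bayes’s Theorem to testing is a short calculation (the numbers below are illustrative, not taken from the tweetstorm):

```python
def posterior(prior, sensitivity, specificity):
    """P(infected | positive test), via Bayes's Theorem:
    P(A|B) = P(B|A) * P(A) / P(B)."""
    p_pos_given_infected = sensitivity          # true positive rate
    p_pos_given_healthy = 1.0 - specificity     # false positive rate
    # Total probability of testing positive
    p_pos = prior * p_pos_given_infected + (1.0 - prior) * p_pos_given_healthy
    return prior * p_pos_given_infected / p_pos

# With 1% prevalence, even a test that is 95% sensitive and 99% specific
# leaves roughly even odds that a positive result means infection
print(f"{posterior(0.01, 0.95, 0.99):.2%}")
```

The counterintuitive bit – that a positive test for a low-prevalence disease is far from conclusive – is exactly the sort of thing that takes fifteen tweets to explain.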

For the last 24 hours, my twitter mentions have been a mess. I’ve tried various things – applying filters, switching from the native app to tweetdeck, etc. but I find that I keep checking my mentions for that dopamine rush that comes out of new followers (I have some 1500 new followers after the tweetstorm, including Chris Arnade of Dignity fame), new retweets and new likes.

And the dopamine rush is frequently killed by hate, as a tweetstorm like this will inevitably generate. I did another tweetstorm this morning detailing this hate – it has to do with the “two Overton Windows” post I’d written a couple of weeks ago.

People are so deranged that even a maths tweetstorm (like the one at the beginning of this post) can be made political, and you see people go on and on.

In fact, there is this other piece I had written (for Mint, back in 2015) that again uses Bayes’s Theorem to explain online flamewars. Five years down, everything I wrote is true.

It is futile to engage with most people on Twitter, especially when they take their political selves too seriously. It can be exhausting, and 27 hours after I wrote that tweetstorm I’m completely exhausted.

So yeah this is not a social media sabbatical like my previous ones where I logged off all media. As things stand I’m only off Twitter (I’ve taken mitigating steps on other platforms to protect my blood pressure and serotonin).

Then again, those of you who know me well know that when I’m off twitter I’ll be writing more here. You can continue to expect that. I hope to be more productive here, and in my work (I’m swamped with work this lockdown) as well.

I continue to be available on WhatsApp, and Telegram, and email. Those of you who have my email or number can reach me in one of those places. For everything else, there’s the “contact” tab on this blog.

See you more regularly here in the coming days!