Does alma matter?

I just spent the holiday afternoon massively triggering myself by watching the just-released Netflix documentary Alma Matters, about life in IIT Kharagpur. Based on the trailer itself, I thought I could relate to it, thanks to my four years at IIT Madras. And so my wife and I sat, and spent three hours on the documentary. Our daughter was with us for the first half hour, and then disappeared for reasons mentioned below.

I have too many random thoughts in my head right now, so let me do this post in bullet points.

  • I have always had mixed feelings about my time at IIT Madras. On the one hand, I found it incredibly depressing. Even now, the very thought of going to Chennai depresses me. On the other, I have a lot of great memories from there, and built a strong network.

    Now that I think of it, having watched the documentary, a lot of those “great memories” were simply about me making the best use of a bad situation I was in. I don’t think I want to put myself, or my daughter, through that kind of an experience again.

  • My basic problem at IIT was that I just couldn’t connect with most people there. I sometimes joke that I couldn’t connect with 80% of the people there, but remain in touch with the remaining 20%. And that is possibly right.

    The problem is that most people there were either “too fighter”, always worried about and doing academics, or “too given up”, not caring about anything at all in life. I couldn’t empathise with either and ended up having a not so great time.

  • My wife intently watched the show with me, even though she got bored by the end of the first episode. “It’s all so depressing”, she kept saying. “Yes, this is how life was”, I kept countering.

    And then I think she caught the point. “Take out the cigarettes and alcohol, and this is just like school. Not like college at all”, she said. And I think that quite sums up IIT for me. We were adults (most of us for most of the time – I turned eighteen a few months after I joined), but were treated like children for the most part. And led our lives like children in some ways, either being too regimented, or massively rebelling.

    “Now I can see why people don’t grow up when they go to IIT”, my wife said. After I had agreed, she went on, “this applies to you as well. You also haven’t grown up”. I couldn’t counter.

  • The “maleness” of the place wasn’t easy to miss. After one scene, my wife mentioned that we spend such a long time in the prime years of our lives dealing only with other men, that it is impossible to have normal relationships later on. It’s only the few who come from more liberal backgrounds, or who manage to unlearn the IIT stuff, who end up having reasonably normal long-term relationships.
  • The maleness of IITs was also given a sort of ironic treatment by the show. There is a segment in the first episode, about elections, featuring a female candidate, on how girls have a really bad time at IIT due to the massively warped sex ratio (16:1 in my time), and how difficult it is for girls to get respect.

    And then that turned out to be the “token female segment” in the show, as girls were all but absent in the rest of the three hours. That girls hardly made it to the show sort of reinforced the very point that girls aren’t treated well at IITs.

  • After intently watching for half an hour, my daughter asked, “if this is a movie about IIT, why aren’t you in it?”. I told her that it’s about a different IIT. “OK fine. I’ll watch it when they make a movie about your IIT”, she said and disappeared.
  • While the second and third episodes of the show were too long-drawn and sort of boring, I did manage to finish the show end-to-end in one sitting, which has to say something about it being gripping (no doubt to someone like me who could empathise with parts of it).
  • Finally, watch the trailer of the show. Watch what the guy says about people with different CGPA ranges.
  • He talks about how people “respect 8 pointers and don’t like 9 pointers”. That sort of made me happy, since I finished (I THINK) with a CGPA of 8.9.

If you’re from an IIT, you are likely to empathise with the show. If you are close to someone from IIT, you might appreciate them better when you watch it. Overall, though, three episodes is too long-drawn-out. The first episode is a good enough gist of life at IIT.

And yeah, trigger warnings apply.

Changing game

Yesterday we reconnected Netflix after having gone off the platform for a month – we had thought we were wasting too much time on it, and so pulled the plug, until the paucity of quality non-sport content on our other streaming platforms forced us to return.

The first thing I did upon reconnecting Netflix was to watch Gamechangers, a documentary about the benefits of vegan food, which had been recommended to me by a couple of business associates a few weeks back.

The documentary basically picks a bunch of research that talks about the benefits of plant-based food and staying away from animal-based food. The key idea is that animals are “just middlemen of protein”, and by eating plants we might be going straight to the source.

And it is filled with examples of elite athletes and strongmen who have turned vegan, and how going vegan has helped them build more stamina and have better health indicators, including the length and hardness of erections.

The documentary did end up making me feel uncomfortable – I grew up vegetarian, but for the last 7-8 years I’ve been eating pretty much everything. And I’ve come to a point in life where I’m not sure if I’ll get my required nutrient mix from plant-based foods alone.

And then comes this documentary, presenting evidence upon evidence that plant-based foods are good, and that you should avoid animal-based food if you want to keep your arteries unclogged, your stamina high, and so on. There were points during the documentary where I seriously considered turning vegetarian once again.

Having given it a day to sink in, the basic point of the documentary, as I see it, is that, ceteris paribus, a plant-based diet is likely to keep you healthier and fitter than an animal-based diet. But then, ceteris is not paribus.

The nutrient mix that you get from the sort of vegetarian diet I grew up on is very different from the nutrient mix you get from a meat-based diet. Some of the vegan diets shown in the documentary, for example, rely heavily on mock meats (made with soybean), which have a nutritional profile similar to the meats they are meant to mock. And that is very different from the carb-fests that South Indian vegetarian food has turned into.

So for me to get influenced by the documentary and turn vegetarian again (or even vegan, which I imagine will be very hard for me to do), I would need to supplement my diet with seemingly unnatural foods such as “mock meat” if I am to get the same nutritional balance that I’ve gotten used to of late. Simply eliminating all meat and animal-based products from my diet is not going to make me any healthier, notwithstanding what the documentary states, or what Virat Kohli does.

In other words, it seems to me that getting the right balance of nutrients is a tradeoff between eating animal-based food and eating highly processed, unnatural food (mock meat). And I’m not yet willing to make that switch.

Human, Animal and Machine Intelligence

Earlier this week I started watching this series on Netflix called “Terrorism Close Calls”. Each episode is about an instance of attempted terrorism that was foiled in the last two decades. For example, there is one episode on the plot to bomb a set of transatlantic flights from London to North America in 2006 (a consequence of which is that liquids still aren’t allowed on board flights).

So the first episode of the series involves this Afghan guy who drives all the way from Colorado to New York to place a series of bombs in the latter’s subways (metro train system). He is under surveillance through the length of his journey, and just as he is about to enter New York, he is stopped for what seems like a “routine drug check”.

As the episode explains, “a set of dogs went around his car sniffing”, but “rather than being trained to sniff drugs” (as is routine in such a stop), “these dogs had been trained to sniff explosives”.

This little snippet got me thinking about how machines are “trained” to “learn”. At the most basic level, machine learning involves showing a program a large number of “positive cases” and “negative cases”, based on which it “learns” the differences between the two, and thus how to identify the positive cases.

So if you want to build a system to identify cats in an image, you feed the machine a large number of images with cats in them, and a large(r) number of images without cats in them, each appropriately “labelled” (“cat” or “no cat”), and based on the differences, the system learns to identify cats.

Similarly, if you want to teach a system to detect cancers based on MRIs, you show it a set of MRIs that show malignant tumours, and another set of MRIs without malignant tumours, and sure enough the machine learns to distinguish between the two sets (you might have come across claims that “AI can cure cancer” – this is how it does it).
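If you want to see what that looks like in code, here is a minimal sketch (in Python, using scikit-learn, with made-up feature vectors standing in for whatever a real image pipeline would extract) of learning from labelled positive and negative examples and then classifying a hitherto unseen one:

```python
# A minimal sketch of "learning from positive and negative examples":
# a binary classifier trained on labelled feature vectors. The features
# here are invented stand-ins; the point is only the workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "image features": 200 positive examples ("cat") clustered
# around one point, 400 negative examples ("no cat") around another.
cat_features = rng.normal(loc=1.0, scale=0.5, size=(200, 8))
no_cat_features = rng.normal(loc=-1.0, scale=0.5, size=(400, 8))

X = np.vstack([cat_features, no_cat_features])
y = np.array([1] * 200 + [0] * 400)   # 1 = "cat", 0 = "no cat"

# The model "learns" the difference between the two labelled sets...
model = LogisticRegression().fit(X, y)

# ...and can then identify positives in hitherto unseen examples.
new_image = rng.normal(loc=1.0, scale=0.5, size=(1, 8))
print("cat" if model.predict(new_image)[0] == 1 else "no cat")
```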

However, AI can sometimes go wrong by learning the wrong things. For example, an algorithm trained to recognise sheep started classifying grass as “sheep” (since most of the positive training samples had sheep in meadows). Another system went crazy in its labelling when an unexpected object (an elephant in a drawing room) was present in the picture.

While machines learn through lots of positive and negative examples, that is not how humans learn, as I’ve been observing as my daughter grows up. When she was very little, we got her a book with one photo each of 100 different animals. And we would sit with her every day pointing at each picture and telling her what each was.

Soon enough, she could recognise cats and dogs and elephants and tigers. All by means of being “trained on” one image of each such animal. Soon enough, she could recognise hitherto unseen pictures of cats and dogs (and elephants and tigers). And then recognise dogs (as dogs) as they passed her on the street. What absolutely astounded me was that she managed to correctly recognise a cartoon cat, when all she had seen thus far were “real cats”.
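For contrast, here is a rough sketch of what “one positive example per class” learning might look like computationally: store one reference per animal (like the book of 100 animals) and label a new input by whichever stored example it is most similar to. The embed() function and the feature vectors are made-up stand-ins, not any real system:

```python
# A rough sketch of one-positive-example-per-class ("prototype") learning:
# one reference vector per animal, and a new picture gets the label of the
# most similar stored prototype. embed() is a hypothetical stand-in for
# whatever representation a real system would use.
import numpy as np

def embed(picture: dict) -> np.ndarray:
    # Stand-in: pretend each "picture" already comes with a feature vector.
    return np.asarray(picture["features"], dtype=float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Training": one labelled picture per animal.
book = {
    "cat":      {"features": [0.9, 0.1, 0.2]},
    "dog":      {"features": [0.2, 0.9, 0.1]},
    "elephant": {"features": [0.1, 0.2, 0.9]},
}
prototypes = {name: embed(pic) for name, pic in book.items()}

def recognise(picture: dict) -> str:
    v = embed(picture)
    return max(prototypes, key=lambda name: cosine(v, prototypes[name]))

# A hitherto unseen picture that happens to be "close" to the stored cat.
print(recognise({"features": [0.8, 0.2, 0.25]}))   # -> "cat"
```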

So where do animals stand, in this spectrum of human to machine learning? Do they recognise from positive examples only (like humans do)? Or do they learn from a combination of positive and negative examples (like machines)? One thing that limits the positive-only learning for animals is the limited range of their communication.

What drives my curiosity is that they get trained for specific things – that you have dogs to identify drugs and dogs to identify explosives. You don’t usually have dogs that can recognise both (specialisation is for insects, as they say – or maybe it’s for all non-human animals).

    My suspicion (having never had a pet) is that the way animals learn is closer to how humans learn – based on a large number of positive examples, rather than on the difference between positive and negative examples. It’s just that the animal’s limited range of communication makes it hard to train it for more than one thing (or maybe there’s something to do with their mental bandwidth as well. I don’t know).

What do you think? Interestingly enough, there is a recent paper that talks about how many machine learning systems have “animal-like abilities” rather than coming close to human intelligence.

For millions of years, mankind lived, just like the animals.
And then something happened that unleashed the power of our imagination. We learned to talk
– Stephen Hawking, in the opening of a Roger Waters-less Pink Floyd’s Keep Talking

Beer and diapers: Netflix edition

When we started using Netflix last May, we created three personas for the three of us in the family – “Karthik”, “Priyanka” and “Berry”. At that time we didn’t realise that there was already a pre-created “kids” (subsequently renamed “children” – don’t know why that happened) persona there.

So while Priyanka and I mostly use our respective personas to consume Netflix (our interests in terms of video content hardly intersect), Berry uses both her profile and the kids profile for her stuff (of course, she’s too young to put it on herself. We do it for her). So over the year, the “Berry” profile has been mostly used to play Peppa Pig, and the occasional wildlife documentary.

Which is why we were shocked the other day to find that “Real life wife swap” had been recommended on her account. Yes, you read that right. We muttered a word of abuse about Netflix’s machine learning algorithms and since then have only used the “kids” profile to play Berry’s stuff.

Since then I’ve been wondering what made Netflix recommend “real life wife swap” to Berry. Surely it would have been clear to Netflix that, even though it wasn’t officially classified as one, the Berry persona was a kid’s account? And even if it wasn’t, shouldn’t the fact that the account was used for watching kids’ stuff have led the collaborative filtering algorithms at Netflix to recommend more kids’ stuff? I’ve come up with various hypotheses.

Since I’m not Netflix, and I don’t have their data, I can’t test them, but my favourite hypothesis so far involves what is possibly the most commonly cited example in retail analytics – “beer and diapers”. In this most-likely-apocryphal story, a supermarket chain discovered that beer and diapers were highly likely to appear together in shopping baskets. Correlation led to causation, and a hypothesis was made that this was the result of tired fathers buying beer on their diaper shopping trips.
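For what it’s worth, the basket analysis behind the beer-and-diapers story can be sketched in a few lines: count how often two items appear together and compare that against what independence would predict (the “lift”). The baskets and the threshold below are invented purely for illustration:

```python
# Toy market-basket analysis: support, confidence and lift for item pairs.
from itertools import combinations
from collections import Counter

baskets = [
    {"diapers", "beer", "milk"},
    {"diapers", "beer"},
    {"bread", "milk"},
    {"diapers", "beer", "bread"},
    {"milk", "bread"},
    {"diapers", "milk"},
]

n = len(baskets)
item_counts = Counter(item for basket in baskets for item in basket)
pair_counts = Counter(pair for basket in baskets
                      for pair in combinations(sorted(basket), 2))

for (a, b), count in pair_counts.most_common():
    support = count / n                       # P(a and b)
    confidence = count / item_counts[a]       # P(b | a)
    lift = support / ((item_counts[a] / n) * (item_counts[b] / n))
    if lift > 1.2:                            # arbitrary "interesting" cutoff
        print(f"{a} & {b}: support={support:.2f}, "
              f"confidence={confidence:.2f}, lift={lift:.2f}")
# Only "beer & diapers" clears the cutoff in this toy data (lift = 1.5).
```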

So the Netflix version of beer-and-diapers, which is my hypothesis, goes like this. Harrowed parents are pestered by their kids to play Peppa Pig and other kiddie stuff. The parents are so stressed that they don’t switch to the kid’s persona, and instead play Peppa Pig or whatever from their own accounts. The kid is happy and soon goes to bed. And then the parent decides to unwind by watching some raunchy stuff like “real life wife swap”.

Repeat this story in enough families, and you have a strong enough pattern that accounts not explicitly classified as “kids/children” show strong activity of both kiddie stuff and adult content. And when you use an account not explicitly marked as “kids” to watch kiddie stuff, it gets matched to these accounts that have created the pattern – Netflix effectively assumes that watching kid stuff on an adult account indicates that the same account is used to watch adult content as well. And so serves it to Berry!
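To illustrate (and only to illustrate – this is my hypothesis, not Netflix’s actual algorithm), here is a toy user-user collaborative filter: the kid-only profile ends up most similar to “mixed” profiles that watched both kiddie stuff and adult stuff from one account, and their adult titles get pulled in as recommendations. The titles and the viewing matrix are made up:

```python
# A hedged sketch of the beer-and-diapers-at-Netflix hypothesis using plain
# user-user collaborative filtering on an invented watch matrix.
import numpy as np

titles = ["Peppa Pig", "Masha and the Bear", "Wildlife Doc",
          "Real Life Wife Swap", "Crime Drama"]

# Rows = profiles, columns = titles, 1 = watched.
profiles = {
    "mixed_parent_1": np.array([1, 1, 0, 1, 1]),
    "mixed_parent_2": np.array([1, 0, 1, 1, 0]),
    "pure_adult":     np.array([0, 0, 0, 1, 1]),
    "berry":          np.array([1, 1, 1, 0, 0]),   # kids' stuff only
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = profiles["berry"]
neighbours = sorted(
    ((cosine(target, vec), name) for name, vec in profiles.items()
     if name != "berry"),
    reverse=True,
)

# Score unwatched titles by the similarity-weighted votes of the neighbours.
scores = sum(sim * profiles[name] for sim, name in neighbours)
recommendations = [t for t, s, seen in zip(titles, scores, target)
                   if not seen and s > 0]
print(neighbours[:2])     # the "mixed" profiles are the closest matches
print(recommendations)    # their adult titles get recommended to "berry"
```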

Machine learning algorithms basically work by identifying patterns in data, and then fitting these patterns on hitherto unseen data. Sometimes the patterns make sense – like Google Photos identifying you even in your kiddie pics. Other times, the patterns are offensive – like the time Google Photos classified a black woman as a “gorilla”.

Thus what is necessary is some level of human oversight, to make sure that the patterns the machine has identified make some sort of sense (machine learning purists say this is against the spirit of machine learning, since one of its purposes is to discover patterns not perceptible to humans).

That kind of oversight at Netflix would have suggested that you can’t tag a profile to a “kiddie content AND adult content” category if the profile has been used to watch ONLY kiddie content (or ONLY adult content). And that kind of oversight would have also led Netflix to investigate the issue of users using a “general” account for their kids, and to come up with an algorithm to classify such accounts as kids’ accounts and serve only kids’ content there.
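That kind of oversight rule is simple enough to sketch: a hypothetical guardrail that overrides the recommender whenever a profile’s entire history is children’s content (the tags and function names here are invented for illustration):

```python
# A sketch of a human-specified guardrail on top of the recommender:
# if a profile's whole history is children's content, treat it as a
# children's profile regardless of what the collaborative filter says.
KIDS_TAGS = {"kids", "children", "preschool"}

def filter_recommendations(watch_history_tags, recommendations):
    """watch_history_tags: content tags of everything the profile has watched.
    recommendations: list of (title, tags) pairs proposed by the recommender."""
    history_is_kids_only = bool(watch_history_tags) and all(
        tag in KIDS_TAGS for tag in watch_history_tags
    )
    if history_is_kids_only:
        # Override the model: only children's titles get through.
        return [(title, tags) for title, tags in recommendations
                if tags & KIDS_TAGS]
    return recommendations

history = ["kids", "kids", "preschool"]           # Peppa Pig and the like
proposed = [("Real Life Wife Swap", {"adult"}),
            ("Masha and the Bear", {"kids"})]
print(filter_recommendations(history, proposed))  # only the kids' title survives
```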

It seems, though, that algorithms reign supreme at Netflix, and so my baby daughter gets served “real life wife swap”. Again, this is all a hypothesis (real life wife swap being recommended is a fact, of course)!