I can retire

I was playing with the “custom GPT” feature of ChatGPT. I asked it to build a bot to answer questions based on content in this blog. And then I started trying it out. Here is a sample:

You
explain the studs and fighters theory
NED Bot

The “Studs and Fighters” theory is a framework used to describe two distinct types of competitive individuals or entities, often in the context of sports, business, or any field involving competition. Here’s a casual explanation:

  • Studs: These are the naturally talented. They’ve got the skills, often inherent, that make them stand out with seemingly less effort. In sports, think of the athlete who executes plays with grace and makes it all look easy. In business, it could be that entrepreneur whose ventures just always seem to bloom.
  • Fighters: These folks might not have the natural flair of the studs, but boy do they have grit. They’re the ones who compensate for any lack of natural talent with sheer hard work and perseverance. They fight for every point, work relentlessly, and often overcome obstacles through tenacity. Think of the athlete who may not be the most skilled on the field but is always the last to leave practice.

The theory goes that in the long run, the Fighters tend to outperform the Studs because their success is not reliant on inherent talent, which can plateau, but on continuous improvement and adaptation. However, when a Stud also has the mindset of a Fighter, that’s when you get those once-in-a-generation superstars. It’s not just about what you’re born with or what you’re willing to do; it’s about combining the best of both worlds if you can.

You
who came up with this theory?
NED Bot

The “Studs and Fighters” theory was popularized by the blog “noenthuda.com,” which is run by Karthik Shashidhar. He formulated this concept based on observations in various competitive fields, and it has since been referenced in discussions about performance, talent, and success. The theory encapsulates the dichotomy between natural talent and hard-earned skill, offering a lens through which to view competition and development in individuals.

Not bad, I must say! This took about five minutes to program.

I’ve published the bot. You can find it here. According to ChatGPT itself, you don’t need a premium ChatGPT account to USE this. Have fun, and stress test!

Notes from a wedding reception

One of the impacts of the Covid-19 pandemic was to reduce the size of weddings. For a brief period of two or three years, the so-called “big fat Indian wedding” got significantly slimmer.

It had started with the lockdowns and some insane government-imposed regulations on the size of weddings. I remember attending even some close relatives’ weddings over Zoom during 2020 and 2021.

And then there was the bandwagon. Because people had got used to not being invited to the weddings of people they knew (a few years back, my wife’s French flatmate had been shocked to learn that we had invited my wife’s aunt’s friends to our wedding. And this was before we told him that we’d also invited the priest of the temple across the road, and the guy who ran a chaat stall down the road), some people continued to have small weddings.

As a consequence, it had been a good four years since we had attended a “random” wedding – the wedding of someone we didn’t know too well. And as we were getting ready to go to my wife’s school friend’s brother’s wedding reception, she remarked that “somehow these receptions of people you don’t know too well are more fun than those of close friends or relatives”. Having gone to the wedding and come back, I can attest to that statement.

A few pertinent observations, in no particular order:

  • The “density of a queue” is a function of the level of trust in society. In a high-trust society, where you expect everyone to follow the queue, people can have personal space in the queue. In a low-trust society, where you are worried about someone overtaking you in the queue, you stand close to the person in front of you. By recursion, this leads to a rather dense queue.
  • Unfortunately, by the time of my own wedding in 2010, I hadn’t figured out why the lines at wedding receptions were so long (apart from the fact that we had invited the priest across the road and the guy who supplied coffee powder to my father-in-law). I only later found that the culprit was the “panning shot” – a video in which the videographer pans across the set of people posing with the couple for the photo.

    It is 2023, and the panning shot still causes hold-ups. Now, I expect generative AI to solve this problem for good. All you need is a bunch of still photographers at a few strategic angles, and the AI can then fill in the panning shot, saving the time of everyone at the reception.

  • For a while I stood alone in the queue, as my wife and daughter had gone somewhere with my wife’s close friend (whose brother was getting married today). I had a bouquet in hand, and the density of the queue meant that I had to be conscious of it getting squished. And the uncle in front of me in the line kept walking backwards randomly. Soon I decided to let the thorns on the roses in the bouquet do the work.
  • Of late we’ve had so many bad experiences with food at functions (and remember that we’ve largely gone to close relatives’ and friends’ events, so we haven’t been able to crib loudly either) that we recently took a policy decision to have our meals at home before going to events. As Murphy’s Law would dictate, the food today looked rather good (and my wife, who had the chaat there as an after-dinner snack, confirmed that it was).
  • At my own reception in 2010, I remember my (then new) wife and me feeling happy when large groups came to greet us – that meant the queue would dissolve that much quicker. From today’s experience, I’m not sure that’s the case. The advantage is one panning shot for the entire group. The disadvantage is the amount of time it takes to get the group organised into a coherent formation for the photo.
  • Reception queues, if anything, have become slower thanks to people being too impatient to wait for the official pictures. Inevitably, in every largish group, there is someone who hands their phone to the official photographer asking for a photo using that. In some seemingly low-trust groups, multiple people hand over their phones to the official photographer, asking for the picture to be taken with THAT.
  • Wedding receptions are good places for people-watching, especially when you are in the queue.

    And not knowing too many people at the wedding means there are more new people to watch.

  • One downside of not knowing too many people at the wedding is that you can’t be sure the bride or groom will recognise you (especially if you are the invitee of one of their close relatives). You will be hoping the parent or sibling who invited you is around to do the intro. I’ve had a few awkward moments.

OK, that is one wedding reception I’ve attended in almost four years, and I’ve written a lot. I’ll stop.

Stable Diffusion and ChatGPT and Logistic Regression

For a long time I have had this shibboleth for telling whether someone is a “statistics person” or a “machine learning person”. It is based on what they call a regression where the dependent variable is binary. Statisticians simply call it “logit” (there is also a “probit”), while machine learning people call it “logistic regression”.

Now, in terms of implementation as well, there is one big difference between the way “logit” is modelled and the way “logistic regression” is. For a logit model (if you are using Python, you need the “statsmodels” package for this, not scikit-learn), the number of observations needs to far exceed the number of independent variables.

Otherwise, a matrix that needs to be inverted as part of the solution will turn out to be singular, and there will be no solution. I guess I betrayed my greater background in statistics than in machine learning when, in 2018, I wrote this blogpost on machine learning being a “process to tie down coefficients in maths models”.

“Logistic regression” (as opposed to “logit”), on the other hand, puts no such constraint on the matrix being invertible. Instead of actually inverting the matrix, machine learning approaches simply learn the coefficients iteratively using gradient descent (basically the opposite of hill climbing), so mathematical inconveniences such as matrices that cannot be inverted are moot there.
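To make the contrast concrete, here is a minimal sketch with simulated data (the dataset and numbers are made up purely for illustration): statsmodels’ Logit, which fits by Newton’s method and needs that matrix inversion, fails when the variables outnumber the observations, while scikit-learn’s LogisticRegression, which fits by regularised gradient-based optimisation, goes through without complaint.

    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n_obs, n_vars = 50, 200  # far more variables than observations
    X = rng.normal(size=(n_obs, n_vars))
    y = (X[:, 0] + rng.normal(size=n_obs) > 0).astype(int)

    # The statistician's "logit": maximum likelihood via Newton's method,
    # which needs to invert a Hessian matrix that is singular here.
    try:
        sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    except Exception as err:
        print("statsmodels Logit failed:", type(err).__name__)

    # The ML "logistic regression": regularised, gradient-based fitting,
    # so no explicit matrix inversion is needed and the fit goes through.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("sklearn fit OK; training accuracy:", clf.score(X, y))

Note that the scikit-learn fit “succeeding” here is not magic: its default L2 penalty is what pins down an otherwise underdetermined problem.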

And so you have logistic regression models with thousands of variables, often calibrated with fewer data points than there are variables. To be honest, I can’t understand this fully – without sufficient information (data points) to calibrate the coefficients, there will always be a sense of randomness in the output. The model has too many degrees of freedom, and so there is additional information the model is supplying (apart from what was supplied in the training data!).

Of late I have been playing a fair bit with generative AI (primarily ChatGPT and Stable Diffusion). The other day, my daughter and I were alone in my in-laws’ house, and I told her, “look, I’ve brought my personal laptop along; if you want, we can play with it”. And she had demanded that she “play with Stable Diffusion”. This is the image she got for “tiger chasing deer”.

I have written earlier here about how the likes of ChatGPT and Stable Diffusion in a way redefine “information content”.

And if you think about it, almost by definition, “generative AI” creates information (and hallucinates, like in the above pic). Traditionally speaking, a “picture is worth a thousand words”, but if you can generate a picture with just a few words of prompt, the information content in it is far less than a thousand words.

In some sense, this reminds me of “logistic regression” once again. Because it is generative, there is insufficient “tying down of coefficients”, because of which the AI inevitably ends up “adding value of its own”, which is by definition random.

So, you will end up getting arbitrary results. ChatGPT often gives you wrong answers to questions. Dall-E and Midjourney and Stable Diffusion will return nonsense images such as the above. Because a “generative AI” needs to create information, by definition not all the coefficients of the model can be well calibrated.

And the consequence of this is that however good these AIs get, however much data is used to train them, there will always be an element of randomness to them. There will always be test cases where they give funny results.

No, AGI is not here yet.

Why AI will always be biased

Out on Marginal Revolution, Alex Tabarrok has an excellent post on why “sexism and racism will never diminish”, even when people on the whole become less sexist and racist. The basic idea is that there is always a frontier – even when we all become less sexist or racist, there will be people who will be more sexist or racist than the others, and they will get called out as extremists.

To quote a paper that Tabarrok has quoted (I would’ve used a double block-quote for this if WordPress allowed it):

…When blue dots became rare, purple dots began to look blue; when threatening faces became rare, neutral faces began to appear threatening; and when unethical research proposals became rare, ambiguous research proposals began to seem unethical. This happened even when the change in the prevalence of instances was abrupt, even when participants were explicitly told that the prevalence of instances would change, and even when participants were instructed and paid to ignore these changes.

Elsewhere, Kaiser Fung has a nice post on some of his learnings from a recent conference on Artificial Intelligence. The entire post is good, and I’ll probably comment on it in detail in my next newsletter, but there is one part that reminded me of Tabarrok’s post – the part about bias in AI.

Quoting Fung (no, this is not a two-level quote; it’s from his blog post):

Another moment of the day is when one speaker turned to the conference organizer and said “It’s become obvious that we need to have a bias seminar. Have a single day focused on talking about bias in AI.” That was his reaction to yet another question from the audience about “how to eliminate bias from AI”.

As a statistician, I was curious to hear of the earnest belief that bias can be eliminated from AI. Food for thought: let’s say an algorithm is found to use race as a predictor and therefore it is racially biased. On discovering this bias, you remove the race data from the equation. But if you look at the differential impact on racial groups, it will still exhibit bias. That’s because most useful variables – like income, education, occupation, religion, what you do, who you know – are correlated with race.

This is exactly like what Tabarrok mentioned about humans being extremist in whatever way: you take out the most obvious biases, and the next level of biases will stand out. And so on ad infinitum.
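To see Fung’s point in action, here is a toy simulation (the data and variable names are entirely made up; this is not anything from Fung’s talk): we leave the protected attribute out of the model altogether, but because income is correlated with group membership, the model’s predictions still differ sharply across groups.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, size=n)         # protected attribute (0/1)
    income = rng.normal(loc=group, scale=1.0)  # proxy correlated with group
    outcome = (income + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

    # Train WITHOUT the protected attribute: only income goes in.
    clf = LogisticRegression().fit(income.reshape(-1, 1), outcome)
    preds = clf.predict(income.reshape(-1, 1))

    # Predicted positive rates still differ sharply by group.
    for g in (0, 1):
        print(f"group {g}: predicted positive rate = {preds[group == g].mean():.2f}")

Dropping the group column removes the explicit bias, but the differential impact survives through the proxy – which is exactly the “next level of biases” standing out.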