The (missing) Desk Quants of Main Street

A long time ago, I’d written about my experience as a Quant at an investment bank, and about how banks like mine were sitting on a pile of risk that could blow up at any time.

There were two problems, as I had documented then. Firstly, most quants I interacted with seemed to be solving maths problems rather than finance problems, not bothering about whether their models would stand the test of markets. Secondly, there was an element of groupthink, as quant teams were largely homogeneous and it was hard to progress while holding contrarian views.

Six years on, there has been no blowup, and in some sense banks are actually doing well (I mean, they’ve declined compared to the time just before the 2008 financial crisis but haven’t done that badly). There have been no real quant disasters (yes I know the Gaussian Copula gained infamy during the 2008 crisis, but I’m talking about a period after that crisis).

There can be many explanations for how banks have avoided quant blow-ups despite quants solving maths problems and all thinking alike, but the one I’m partial to is the presence of a “middle layer”.

Most of the quants I interacted with were “core”, in the sense that they were not attached to any sales or trading desks. Banks also typically had a large cadre of “desk quants”, who were directly associated with trading teams, and who built models and helped with day-to-day risk management, pricing and the like.

Since these desk quants work closely with the business, they turn out to be much more pragmatic than the core quants – they have a good understanding of the market and use the models more as guiding principles than as rules. On the other hand, they bring the benefits of quantitative models (and work of the core quants) into day-to-day business.

Back during the financial crisis, I’d jokingly suggested that other industries hire the quants who were now surplus to Wall Street’s requirements. Around the same time, DJ Patil et al came up with the concept of the “data scientist”, calling it the “sexiest job of the 21st century”.

And so other industries started getting their own share of quants, or “data scientists” as they are now called. Nowadays it’s fashionable even for small companies, for whom data is not critical to the business, to have a data science team. Being in this profession now (I loathe calling myself a “data scientist” – I prefer “quant” or “analytics”), I’ve come across quite a few of those.

The problem I see with “data science” on “Main Street” (this phrase gained currency during the financial crisis as the opposite of Wall Street, in that it referred to “normal” businesses) is that it lacks the cadre of desk quants. Most data scientists are highly technical people who don’t necessarily have an understanding of the business they operate in.

Thanks to that, what I’ve noticed is that in most cases there is a chasm between the data scientists and the business, since they are unable to talk in a common language. As I’m prone to saying, this can go two ways – the business guys can either assume that the data science guys are geniuses and take their word as gospel, or they can totally disregard the data scientists as people who do some esoteric math and don’t really understand the world. In either case, the value added is suboptimal.

It is not hard to understand why “Main Street” doesn’t have a cadre of desk quants – it’s because of the way the data science industry has evolved. Quant work at investment banks evolved over a long period of time – the Black-Scholes equation was proposed in the early 1970s. So quants were first recruited to work directly with the traders, and core quants (at the banks that have them) were a later addition, when banks realised that some quant functions could be centralised.

On the other hand, the whole “data science” growth has been rather sudden. The volume of data, cheap and incrementally available cloud storage, easy processing and the popularity of the phrase “data science” have all grown at a furious rate in the last decade or so, and companies have scrambled to set up data teams. There has simply been no time to train people who get both the business and the data – and so the data scientists exist as appendages that are either worshipped or ignored.

Auctions of distressed assets

Bloomberg Quint reports that several prominent steel makers are in the fray for the troubled Essar Steel’s assets. Interestingly, the list of interested parties includes the promoters of Essar Steel themselves. 

The trouble with selling troubled assets or bankrupt companies is that it is hard to put a value on them. Cash flows and liabilities are uncertain, as is the value of the residual assets that the company can keep at the end of the bankruptcy process. As a result of this uncertainty, both buyers and sellers are likely to slap a big margin of error on to their price expectations – so that even if they end up overpaying (or getting underpaid), the damage is contained.

Consequently, several auctions for assets of bankrupt companies fail (an auction is always a good mechanism to sell such assets, since it brings together several buyers in a competitive process and the seller – usually a court-appointed bankruptcy manager – can extract the maximum possible value). Sellers slap a big margin of error on their asking price and set a high reserve price. Buyers go conservative in their bids and possibly bid too low.
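To make this concrete, here is a minimal Monte Carlo sketch, with all numbers invented for illustration: the asset is “truly” worth 100, everyone estimates that value with noise, the seller pads the reserve price upwards by a margin of error, and each buyer pads their bid downwards by the same margin. As the margins grow, the fraction of failed auctions shoots up.

```python
import random

def auction_fails(margin, sigma=10.0, n_buyers=5):
    """One simulated auction: does the best bid fall short of the reserve?"""
    true_value = 100.0
    # Seller's noisy estimate, padded upwards into a reserve price
    reserve = random.gauss(true_value, sigma) * (1 + margin)
    # Each buyer's noisy estimate, padded downwards into a conservative bid
    bids = [random.gauss(true_value, sigma) * (1 - margin)
            for _ in range(n_buyers)]
    return max(bids) < reserve

def failure_rate(margin, trials=20000):
    return sum(auction_fails(margin) for _ in range(trials)) / trials

for margin in (0.05, 0.15, 0.30):
    print(f"margin of error {margin:.0%}: ~{failure_rate(margin):.0%} of auctions fail")
```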

As we have seen with the attempted auctions of the properties of Vijay Mallya (promoter of the now bankrupt Kingfisher Airlines) and Subroto Roy Sahara (promoter of the eponymous Sahara Group), such auctions regularly fail. It is the uncertainty of the value of assets that dooms the auctions to failure.

What sets apart the Essar Steel bankruptcy process is that while the company might be bankrupt, the promoters (the Ruia brothers) are not. And having run the company (albeit into the ground), they possess valuable information on the value of the assets that remain with it. In a bankruptcy process where neither the other buyers nor the sellers have adequate information, this information can prove invaluable.

When I first saw the report on Essar’s asset sale, I was reminded of the market for footballers that I talk about in my book Between the buyer and the seller. That market, too, suffers from wide bid-ask spreads on account of difficulty in valuation.

Like the market for distressed companies, the market for footballers sees few buyers and sellers. And what we see there is that deals usually happen at either end of the bid-ask spectrum – if the selling club is more desperate to sell, the deal happens at an absurdly low price, and if the buying club wants the deal more badly, it pays a high price.

I’ve recorded an episode on football markets with Amit Varma, for the Seen and the Unseen podcast.

Coming back to distressed companies, it is well known that the seller (usually a consortium of banks or their representatives) wants to sell, and is usually the more desperate party. Consequently, we can expect the deal to happen close to the bid price. A few auctions might fail if the sellers set their expectations too high (all buyers bid low since value is uncertain), but that will only make the seller more desperate, which will bring down the price at which the deal happens.

So don’t be surprised if the Ruias do manage to buy Essar Steel, and if they manage to do that at a price that seems absurdly low! The price will be low because there are few buyers and sellers and the seller is the more desperate party. And the Ruias will win the auction, because their inside information of the company they used to run will enable them to make a much better bid.


Thaler and Uber and surge pricing

I’m writing about Uber after a really long time on this blog. Basically I’d gotten tired of writing about the company and its ideas, and once I wrote a chapter about dynamic pricing in cabs in my book, there was simply nothing more to say.

Now, the Nobel Prize to Richard Thaler, and his comments some time back about Uber’s surge pricing, have given me reason to revisit this topic, though I’ll keep it short.

Basically Thaler makes the point that when businesses are greedy, and are seen to be gouging customers in times of high demand, they might lose future demand from those same customers. In his 2015 book Misbehaving (which I borrowed from the local library a few months ago but never got around to reading), he talks specifically about Uber, and about how price gouging isn’t a great idea.

This has been reported across both mainstream and social media over the last couple of days as if Thaler is completely against the concept of surge pricing itself. For example, in this piece about Thaler, Pramit Bhattacharya of Mint introduces the concept of surge pricing and says:

Thaler was an early critic of this model. In his 2015 book Misbehaving: The Making of Behavioral Economics, Thaler argues that temporary spikes in demand, “from blizzards to rock star deaths, are an especially bad time for any business to appear greedy”. He argues that to build long-term relationships with customers, firms must be seen as “fair” and not just efficient, and that this often involves giving up on short-term profits even if customers may be willing to pay more at that point to avail themselves of its product or service.

At first sight, it is puzzling that an economist would be against the principle of dynamic pricing, since it helps the marketplace allocate resources more effectively and, more importantly, use price as an information mechanism to massively improve liquidity in the system. But Thaler’s views on the topic are more nuanced. To continue to quote from Pramit’s piece:

“I love Uber as a service,” writes Thaler. “But if I were their consultant, or a shareholder, I would suggest that they simply cap surges to something like a multiple of three times the usual fare. You might wonder where the number three came from. That is my vague impression of the range of prices that one normally sees for products such as hotel rooms and plane tickets that have prices dependent on supply and demand. Furthermore, these services sell out at the most popular times, meaning that the owners are intentionally setting the prices too low during the peak season.

Thaler is NOT suggesting that Uber not use dynamic pricing – the information and liquidity benefits of that are too massive to be outweighed by the cost of occasionally pissing off passengers. What he suggests, however, is that the surge be CAPPED, perhaps at a multiple of three.

There is a point beyond which dynamic pricing ceases to add value in terms of information and liquidity, and its sole purpose becomes the efficient allocation of resources at that particular instant in time. At such levels, though, the cost of pissing off customers is also rather high. And Thaler suggests that 3 is the multiple at which the benefits of allocation start being outweighed by the costs of pissing off passengers.

This is exactly what I’ve been proposing in terms of cab regulation for a couple of years now, though I don’t think I’ve put it down in writing anywhere: rather than banning these services from using dynamic pricing at all, a second-best solution for a regulator who wants to prevent “price gouging” is to impose a fare cap, and to set the cap high enough that there is room for the marketplaces to manoeuvre and use price as a mechanism to exchange information and boost liquidity.

Also, the price cap should be set in a way that gives marketplaces flexibility in how they arrive at the final price, as long as it stays within the cap. Regulators might say that the total fare may not exceed a certain multiple of the distance and time, or whatever, but they should not dictate how the marketplace precisely arrives at the price – pricing a taxi ride has historically been a hard problem, and one of the main ways in which marketplaces such as Uber bring efficiency is by solving this problem in an innovative manner using technology.
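Here is a minimal sketch of what such cap-based regulation might look like, with all the rate constants invented for illustration: the regulator fixes only a ceiling (a multiple of a notional distance-and-time metered fare), and the marketplace computes its dynamic price however it likes under that ceiling.

```python
CAP_MULTIPLE = 3.0   # Thaler's suggested cap on the surge
BASE_FARE = 50.0     # hypothetical flag-down fare, in rupees
PER_KM = 12.0        # hypothetical metered rate per kilometre
PER_MIN = 1.5        # hypothetical metered rate per minute

def fare_ceiling(distance_km, duration_min):
    """The most the regulator permits for this trip."""
    metered = BASE_FARE + PER_KM * distance_km + PER_MIN * duration_min
    return CAP_MULTIPLE * metered

def final_fare(marketplace_price, distance_km, duration_min):
    """The marketplace prices however it likes; only the ceiling binds."""
    return min(marketplace_price, fare_ceiling(distance_km, duration_min))

# A 10km, 30min trip has a metered fare of 215: a 2.5x surge passes
# through untouched, while a 5x surge gets clipped to the 3x ceiling.
metered = BASE_FARE + PER_KM * 10 + PER_MIN * 30
print(final_fare(2.5 * metered, 10, 30))   # 537.5
print(final_fare(5.0 * metered, 10, 30))   # 645.0
```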

For more on this topic, listen to my podcast with Amit Varma about how taxi marketplaces such as Uber use surge pricing to improve liquidity.

For even more on the topic, read my book Between the buyer and the seller, which has a long chapter dedicated to it.

The nature of the professional services firm

This is yet another rejected section from my soon-to-be-published book Between the buyer and the seller.


In 2006, having just graduated from business school, I started my career working for a leading management consulting firm. The firm had been one of the most sought-after employers for students at my school, and the salary it offered me was among the highest for India-based jobs in my year of graduation.

The elation of being paid better than my peers didn’t last too long, though. In what was my second or third week at the firm, I was asked to help a partner prepare a “pitch deck” – a document trying to convince a potential client to hire my firm for a piece of work. A standard feature in any pitch deck is costing, and the cost sheet of the document I was working on told me that the rate my firm was planning to bill its client for my services was a healthy multiple of what I was being paid.

While I left the job a few months later (for reasons that had nothing to do with my pay), I would return to the management consulting industry in 2012. This time, however, I didn’t join a firm – I chose to freelance instead. Once again I had to prepare pitch decks to win business, and to quote a professional fee as part of them. This time, though, the entire billing went straight to my personal top line, barring some odd administrative expenses.

The idea that firms exist in order to take advantage of savings in transaction costs was first proposed by Ronald Coase in what has come to be a seminal paper, published in 1937. In “The Nature of the Firm”, Coase writes:

The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of ‘organising’ production through the price mechanism is that of discovering what the relevant prices are.

In other words, if an employer and employee, or two divisions of a firm, were to negotiate the price of the goods or services being exchanged each time, the cost of such negotiations (the transaction cost) would far outstrip the benefit of using the price mechanism. Coase’s paper goes on to develop a framework to explain why firms aren’t larger than they are. He says:

Naturally, a point must be reached where the costs of organising an extra transaction within the firm are equal to the costs involved in carrying out the transaction in the open market.

While Coase’s theories have since been widely studied and quoted, and apply to all kinds of firms, it is still worth asking why professional services firms, such as the management consulting firm I used to work for, are as ubiquitous as they are. It is also worth asking why such firms manage to charge their clients fees far in excess of what they pay their own employees, thus making a fat spread.

The defining feature of professional services firms is that they are mostly formed by the coming together of a large number of employees, all of whom do similar work for external clients. While some of these employees might sometimes work in teams, there is seldom any service in such firms (barring administrative tasks) that is delivered to someone within the firm – most services are delivered to an external client. Examples of such firms include law firms, accounting firms and management consulting firms such as the one I used to work for (it is tempting to include information technology services firms under this banner, but they tend to work in larger teams, implying a higher contribution from teamwork).

One of my main challenges as a freelance consultant is to manage my so-called “pipeline”. Given that I’m a lone consultant, there is a limit on the amount of work I can take on at any point in time, and this constrains my marketing. I have had to, on multiple occasions, respectfully decline assignments because I was already tied up delivering another assignment at the same time. On the other hand, there have been periods (sometimes lasting months together) when I’ve had little billable work, resulting in low revenues.

If I were to form a partnership or join a larger professional services firm (with other professionals similar to me), both my work and my cash flows would be structured quite differently. Given that the firm would have a reasonable number of professionals working together, it would be easier to manage the pipeline – the chances of all professionals being occupied at any point in time are low, and incoming work could be assigned to one of the free professionals. The same process would also mean fewer gaps in the workflow – if my marketing is going badly, the marketing of one of my busy colleagues might result in work that I end up doing.
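A toy simulation can illustrate this pooling effect; all the numbers are invented. Suppose each professional’s marketing lands zero, one or two assignments a month, and each professional can deliver at most one assignment a month. Alone, a second lead must be declined and a dry month stays dry; in a ten-person firm, one colleague’s overflow can fill another’s empty month.

```python
import random

LEADS = [0, 1, 2]
WEIGHTS = [0.5, 0.3, 0.2]   # mean of 0.7 leads per professional per month

def simulate(months=1200, n_pros=10):
    solo_billed = firm_billed = 0
    for _ in range(months):
        leads = [random.choices(LEADS, WEIGHTS)[0] for _ in range(n_pros)]
        solo_billed += min(leads[0], 1)          # freelancer: own leads only
        firm_billed += min(sum(leads), n_pros)   # firm: leads are pooled
    return solo_billed / months, firm_billed / (months * n_pros)

random.seed(0)
solo, firm = simulate()
print(f"freelancer utilisation: {solo:.0%}, firm utilisation: {firm:.0%}")
```

Under these assumptions the firm bills a visibly larger fraction of the available professional-months than the lone freelancer does, and it is this extra billing that funds the regular salaries.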

What is more interesting is the way in which cash flows would change. I would no longer have to wait for the periods when I was doing billable work in order to get paid – my firm would instead pay me a regular salary. On the other hand, when I did win business and get paid, the proceeds would go entirely to my firm. The fees that my firm would charge its clients would be significantly higher than what the firm paid me, as was the case with my employer in 2006.

There would be multiple reasons for this discrepancy in fees, the most straightforward being administrative costs (though that is unlikely to account for too much of the fee gap). There would be a further discount on account of the firm paying me a regular salary while I only worked intermittently. That, too, would be insufficient to explain the difference. Most of the difference would be explained by the economic value that the firm would add by means of its structure.

The problem with being a freelance professional is that times when potential clients might demand your services need not coincide with the times when you are willing to provide such services. Looking at it another way, the amount of services you supply at any point in time might not match the amount of services demanded at that point in time, with deviations going either way (sometimes you might be willing to supply much more than what is demanded, and vice versa).

Freelance professionals have another problem finding clients – as individual professionals, it is hard for them to advertise and let all possible potential clients know about their existence and the kind of services they may provide. Potential clients have the same problem too – when they want a piece of work done by a freelance professional, it is hard for them to identify and contact all possible professionals who might be able and willing to carry out that piece of work. In other words, the market for services of freelance professionals is highly illiquid.

Professional services firms help solve this illiquidity problem through a series of measures. Firstly, they acquire the time of professionals by promising to pay them a regular income. Secondly, as a firm, they are able to advertise and market the services of these professionals to potential clients. When these potential clients respond in the affirmative, the professional services firms sell them the time of professionals that they had earlier acquired.

These activities suggest that professional services firms can be considered market makers in the market for professional services. Firstly, they satisfy the conditions for market making – they actually buy, and take on to their books, the time of the professionals they hire, giving them a virtual “inventory” that they then try to sell. Secondly, they match demand and supply that occur at different points in time – recruitment of employees happens asynchronously with the sale of business to clients. In other words, they take both sides of the market – buying employees’ time from employees and selling it to clients! Apart from this, firms also use the marketing and promotional activities that their size affords them to attract both employees and clients, thus improving liquidity in the market.

And like good market makers, firms make their money on the spread between what clients pay them and what they pay their employees. Earlier on in this chapter, we had mentioned that market making is a risky business, thanks to its inventory-led model. It is easy to see that professional services firms are also risky operations, given that they may either not be able to find professionals to execute the contracts won from clients, or not be able to find enough clients to provide sufficient work for all their employees.

In other words, when a professional joins a professional services firm, the spread they are letting go of (between what the firm’s clients pay the firm, and what professionals draw as salaries) can be largely explained in terms of market making fees. It is the same for a client who pays a firm much more than they could have paid had the professional been engaged directly – the extra fee is for the market making service that the firm provides.

From the point of view of a professional, joining a firm might result in lower average long-term income than staying freelance, but that sacrifice more than pays for itself by removing the volatility of finding business in an otherwise illiquid market. For a potential client of such services, too, the premium paid to the firm is a monetisation of the risk of being unable to find a professional in an illiquid market.

You might wonder, then, as to why I continue to be a freelance professional rather than taking a discount for my risks and joining a firm. For the answer, we have to turn back to Coase – I consider the costs of transacting in the open market, including the risk and uncertainty of transactions, far lower than the cost of entering into a long-term transaction with a firm!

Shorting private markets

This is one of those things I’ll file in the “why didn’t I think of it before?” category.

The basic idea is that if you think there is a startup bubble, and that private companies (as a class) are being overvalued by investors, there exists a rather simple way to short the market – basically start your own company and sell equity to these investors!

The basic problem with shorting a market such as that for shares of privately held startups is that the shares are owned by a small set of investors, none of whom are likely to lend you stock that you can sell and buy back later. More importantly, markets in privately held stock can be incredibly illiquid, and it may take a long time indeed before the stocks move to what you think is their “right” level.

So what do you do? I’ll simply let the always excellent Matt Levine provide the answer here:

We have talked a few times in the past about the difficulty of shorting unicorns: Investors can buy shares in the big venture-backed private tech companies, but they can’t sell those shares short, which arguably leads to those shares being overvalued as enthusiasts join in but skeptics are excluded. As I once said, though, “the way to profit from a bubble is by selling into it, and that people sometimes focus too narrowly on short-selling into it”: If you think that unicorns as a category are overvalued, the way to profit from that is not so much by shorting Uber as it is by founding your own dumb startup, raising a lot of money from overenthusiastic venture capitalists, paying yourself a big salary, and walking away whistling when the bubble collapses.

Same here! If you are skeptical of the ICO trend, the right thing to do is not to short all the new tokens that are coming to market. It’s to build your own token, do an initial coin offering, and walk off with the proceeds. For the sake of your own conscience, you can just go ahead and say that that’s what you’re doing, right in the ICO white paper. No one seems to mind.

Seriously! Why didn’t I think of this?

Tiered equity structure and investor conflict

About this time last year, I’d written this article for Mint about optionality in startup valuations. The basic idea there was that any venture capital investment into startups usually comes with “dirty terms” that seek to protect the investor’s capital.

So you have liquidation preferences, which demand that the external investors get paid out first (according to a pre-decided formula) in case of a “liquidity event” (such as an IPO or an acquisition). You also have “ratchets”, which seek to protect an investor’s share in the company in case the company raises a subsequent round at a lower valuation.

These “dirty terms” are nothing but put options written by existing investors in a firm in favour of the new investors. And these options telescope. So the Series A round has options written by founders, employees and seed investors, in favour of Series A investors. At the time of Series B, Series A investors move to the short (writing) side of the options, which are written in favour of Series B investors. And so forth.
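A stylised payoff sketch, with invented numbers, shows why such a clause is a put option. Suppose an investor puts in 100 for a 20% stake with a simple 1x non-participating liquidation preference, so that at a liquidity event she takes the greater of her money back or her pro-rata share (capped by what the company fetches).

```python
def investor_payoff(exit_value, invested=100.0, stake=0.20):
    plain_equity = stake * exit_value
    # The preference tops up the payoff whenever the pro-rata share falls
    # below the amount invested -- which is exactly a put option's payoff.
    embedded_put = max(invested - plain_equity, 0.0)
    # The investor can never take out more than the company fetched.
    return min(plain_equity + embedded_put, exit_value)

for exit_value in (80, 300, 1000):
    print(exit_value, investor_payoff(exit_value))
# At an exit of 80 the preference takes everything (80); at 300 the
# investor takes 100 rather than her plain 60; at 1000 the put expires
# worthless and she takes her plain 200.
```

Everyone below her in the stack gets whatever remains, which is why the earlier shareholders are said to be on the short (writing) side of these options.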

There are many reasons such clauses exist. One venture capitalist told me that his investors have similar optionality on their investments in his funds, and it is only fair that he passes it on. Another told me that “good entrepreneurs” believe in their idea so much that they don’t even want to consider the thought that their company may not do well – which is when these options pay out – and so they are happy to write them. And then, of course, an embedded option can improve the optics of a company’s “headline valuation”, which is something some founders want.

In any case, in my piece for Mint I’d written about such optionality leading to potential conflicts among investors in different classes of stock, which might sometimes be a hindrance to further capital raises. Quoting from there,

The latest round of investors usually don’t mind a “down round” (an investment round that values the company lower than the preceding round) since their ratchets protect them, but earlier investors are short such ratchets, and don’t want to see their stakes diluted. Thus, when a company is unable to find investors who are willing to meet its current round of valuation, it can lead to conflict between different sets of investors in the company itself.

And now Mint reports that such conflicts are a major reason for Indian e-commerce biggie Snapdeal’s recent struggles, which have led to massive layoffs and a delay in funding. The story has played out exactly as I’d written in the paper last year.

Softbank, which invested last in Snapdeal and is long put options on the company’s value, is pushing the company to raise more funds at a lower valuation. However, Nexus and Kalaari, who had invested earlier and stand to lose significantly thanks to these options, are resisting such moves. And the company continues to stall.

I hope this story provides entrepreneurs and venture capitalists sufficient evidence that dirty terms can affect everyone up and down the chain, and can actually harm the business’s day-to-day operations. The cleaner a company keeps the liabilities side of the balance sheet (in having a small number of classes of equity), the better it is in the long run.

But then, with Snap having IPO’d by offering only non-voting shares to the public, I’m not too hopeful of equity truly being equitable any more!

British retail strategy

Right under where I currently live, there’s a Waitrose. Next door, there’s a Tesco Express. And a little down the road, there’s a Sainsbury’s Local. The day I got here, a week ago, I drove myself nuts trying to figure out which of these stores is the cheapest.

And after one week of random primary research, I think I have the classic economist’s answer – it depends. On what I’m looking to buy, that is.

Each of these chains has built a reputation for sourcing excellent products and selling them to customers at a cheap price. The only thing is that each of them does it on a different set of products. So there is a set of products that Tesco is easily the cheapest at, but the chain compensates for this by selling other products at a higher price. It is similar with the other chains.

Some research I read a year or two back showed that while Amazon was easily the cheapest retailer in the US for big-ticket purchases, its prices for other, less price-sensitive items were not as competitive. In other words, Amazon let go of the margin on high-publicity goods, and made up for it on goods where customers didn’t notice as much.

It’s the same with British retailers – each of their claims of being the cheapest is true, but that applies only to a section of the products. And by sacrificing the margin on these products, they manage to attract a sufficient number of customers to their stores, who also buy other stuff that is not as competitively priced!

Now, it is possible for an intelligent customer to conduct deep research and figure out the cheapest shop for each stock-keeping unit. The lack of quick patterns in who is cheap for what, however, means that the cost of such research, and of visiting multiple shops, usually far exceeds the benefit of buying everything from the cheapest source.
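A made-up example shows why the answer is “it depends”: with each chain cheapest on a different slice of the shelf, the cheapest shop is entirely a function of the basket (all prices below are invented).

```python
PRICES = {
    "Waitrose":  {"milk": 1.10, "bread": 0.85, "cheese": 2.20, "wine": 6.50},
    "Tesco":     {"milk": 0.90, "bread": 1.00, "cheese": 2.60, "wine": 7.00},
    "Sainsbury": {"milk": 1.00, "bread": 0.95, "cheese": 2.00, "wine": 7.50},
}

def cheapest_shop(basket):
    """Total each shop's price for the basket and pick the cheapest."""
    totals = {shop: sum(shelf[item] for item in basket)
              for shop, shelf in PRICES.items()}
    return min(totals, key=totals.get), totals

print(cheapest_shop(["milk", "bread"]))    # Tesco wins this basket
print(cheapest_shop(["cheese", "wine"]))   # Waitrose wins this one
```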

I must mention that this approach may not apply in online retail, where at the point of browsing a customer is not “stuck” to any particular shop (unlike offline, where the customer is physically at a store while browsing).

Variable pricing need not be boring at all!

Pizza from Dominos – good and bad

Last night we decided we wanted pizza from Dominos for dinner. Having been used to Swiggy, I instinctively googled for Dominos and tried to place the order online.

There is one major fuckup with the Dominos website – it asks you to pick the retail outlet closest to you, rather than taking your location and picking the outlet itself. And so it happened that we picked an outlet that wasn’t the closest to us.

I quickly got a call from the guy at the outlet where my order had gone, expressing his inability to deliver it, and saying he’ll cancel my order. I gave him a mouthful – it’s 2016, and why couldn’t he have simply transferred the order to the outlet that is supposed to service me?

I was considering cancelling the order and not ordering again (a self-injurious move, since we wanted Dominos pizza, not just pizza), when the guy from the outlet in whose coverage area I fell called. He explained the situation once again, saying my original order was to be cancelled, and he would have to take a new order.

Again – the fuckup wasn’t just in the payments part of the Dominos system, in which case they could’ve simply transferred my order to this new guy. So I had to repeat my entire order to him (not so much of a problem, since I was only getting one pizza), and my address as well (it’s a long address, which I prefer filling in online).

Then there was the small matter of payment – one reason I’d ordered online was that I could pay electronically (I used PayTM). When I asked him if I could pay online for the new order, he said I would have to repeat the entire online ordering process – there was no order ID against which I could simply log on and pay.

I played my trump card at this point – I asked him to make sure the delivery guy had change for Rs. 2000 (I’d lined up at a bank two weeks back and withdrawn a month’s worth of cash, except that it was all in Rs. 2000 notes). He instantly agreed. Half an hour later, the pizza, along with change for Rs. 2000, was at my door.

The good thing about the experience was that the delivery process was smooth, and more importantly, the outlet where my order first landed had taken the initiative to communicate it to the outlet under whose coverage my house fell – the salespersons weren’t willing to risk missing a sale that had fallen at their door.

The bad thing is that Jubilant Foodworks’ technology sucks, big time. Thanks to the heavily funded and highly unprofitable startups we usually order from, we’re used to a high level of technology from the food delivery kind of businesses. Given that Jubilant is a highly profitable company it shouldn’t be too hard for them to license the software of one of these new so-called “foodtech” companies to further enhance the experience.

No clue why they haven’t done it yet!

PS: I realise I’ve written this blogpost in the style I used to write in over a decade ago. Some habits die hard.

Why PayTM is winning the payments “battle” in India

For the last one year or so, ever since I started using IMPS at scale and read up on the UPI protocol, I’ve been bullish about Indian banks winning the so-called “payments battle”. I expected that if and when the adoption of electronic payments in India took off, banks would cash in ahead of the “prepaid payment instruments” operators.

The events of the last one week, however, have made me revise this prediction. While the disruption of the cash economy by the withdrawal of 85% of all notes in circulation has no doubt given a major boost to the electronic payments industry, only some players are in a position to do anything about it.

The major problem for banks in the last one week has been that they’ve been saddled with the unenviable task of exchanging the now invalid currency, taking deposits and issuing new currency. With stringent know-your-customer (KYC) norms, the process hasn’t been an easy one, and banks have been working overtime (along with customers working overtime standing in line) to make sure hard currency is in the market again.

While by all accounts banks have been undertaking this task rather well, the problem has been that they’ve had little bandwidth to do anything else. This was a wonderful opportunity for banks, for example, to acquire small merchants to accept payments using UPI. It was an opportune time to push the adoption of credit card payment terminals to merchants who so far didn’t possess them. Banks could’ve also used the opportunity to open savings accounts for the hitherto unbanked, so they had a place to park their cash.

As it stands, the demands of cash management have been so overwhelming that the above are literally the last priorities for the banks. Far from expanding their networks, banks are unable even to service the existing point-of-sale machines on their network, as one distraught shopkeeper mentioned to me on Saturday.

This is where the opportunity for the likes of PayTM lies. Freed of the responsibilities of branch banking and currency exchange, they’ve been far better placed to acquire customers and merchants and improve their volume of sales. Of course, their big problem is that they’re not interoperable – I can’t use a Mobikwik wallet to pay a merchant who accepts PayTM. Nevertheless, they’ve had the sales and operational bandwidth to press on with their network expansion, and by the time the banks can get back to focussing on this, it might be too late.

And among the Prepaid Payment Instrument (PPI) operators again, PayTM is better poised to exploit the opportunity than its peers, mainly thanks to recall. Thanks to the Uber deal, they have a foothold in the premium market unlike the likes of Freecharge which are only in the low-end mobile recharge market. And PayTM has also had cash to burn to create recall – with deals such as sponsorship of Indian cricket matches.

It’s no surprise that soon after the withdrawal of large-denomination notes was announced, PayTM took out full-page ads in all major newspapers. They correctly guessed that this was an opportunity they could not afford to miss.

PS: PayTM has a payments bank license, so once they start those operations, they’ll become interoperable with the banking system, with IMPS and UPI and all that.

Buying, Trying and Sizing

The traditional paradigm of apparel purchase has been to try and then buy. You visit a retail store, pick what you like, try them out in the store’s dressing rooms and then buy a subset. In this paradigm, it is okay for sizing to not be standardised, since how the garment actually fits on you plays a larger part in your decision making than how it is supposed to fit on you.

With the coming of online retail, however, this paradigm is being reversed, since here you first buy, and then try, and then return the garment if it doesn’t fit properly. This time, the transaction cost of returning a garment is much higher than in the offline retail case.

So I hope that with online retail gaining currency in apparel purchase, manufacturers will start paying more attention to standardised sizing, and make sure that a garment’s dimensions are exactly what is mentioned on the online retailer’s site.

The question is who should take the lead in enforcing this. It cannot be the manufacturers, for had they been concerned about standardised sizing, they would’ve implemented it already. And so far the retailer has only been an intermediary (a “pipe”, as Sangeet would put it).

However, with the transaction cost of failed transactions being borne by the retailers, and with these costs being rather high in online retail, I expect the likes of Amazon and Myntra to take the lead in ensuring that sizing is standardised – perhaps by making garments from manufacturers who already practise such sizing easier to find (these retailers have enough data to measure this easily).

It will be interesting to see how this plays out. Given history, I don’t expect retailers to collaborate on coming up with such a standard. So assuming each major online retailer comes up with its own standard, the question is whether it will start off being uniform, or whether it will converge to a common standard over time.

I also wonder if the lead in standardising sizes will be taken by the private brands of the online retailers, since they have the most skin in the game in terms of costs, with other manufacturers then following suit.

In any case, I trust that soon (how “soon” that soon is is questionable) I’ll be able to just look at the stated sizing on a garment and buy it (if it’s of my liking) without wondering how well it’ll fit me.