Agile programming and Parliamentary procedures

Supposedly the latest (ok not that latest – it’s been around for a decade) thing in software engineering is this thing called “Agile programming“. It’s basically about breaking away from the old “waterfall model” (which we had learnt in a software engineering course back in 2003), and moving to a more iterative model with quick releases, updates, etc.

I’ve never worked as a software engineer, but it seems to me that Agile programming is the way to go – basically get something out and keep iterating till you have a good product, rather than thinking endlessly about incorporating all design specifications before writing a single line of code. Requirements change rapidly nowadays, and unless you are “agile” (pun intended) in responding to them, you will not produce good software.

Agile methodologies, however, don’t work in parliamentary procedures, since transaction costs there are very high. Take, for example, the proposed Goods and Services Tax (GST). The bill in its current form is incredibly flawed, with taxes on the movement of goods across states and certain products excluded from its ambit altogether. Mihir Sharma at Business Standard has a great takedown of the current bill (there is no one quotable paragraph. Read the whole thing. I’m mostly in agreement).

So it seems to me that the government is passing the GST in its current half-baked form because it wants some version (like a Minimum Viable Product) off the ground, and then it hopes to rectify the shortcomings in a later iteration. In other words, the government is trying some sort of an agile methodology when it comes to passing the GST.

The problem with parliamentary procedures, however, is that transaction costs are great. Once a law has been passed, it is set in stone and the effort required for any single amendment is equal to the effort originally required for passing the law itself, since you have to go through the whole procedure again. In other words, our laws need to be developed using the waterfall model, and hence have full system requirement specifications in place before they are passed.

It’s not surprising, since the procedure for passing laws was laid down at a time when hardly any programming existed, let alone agile programming. Yet it raises the question of what can be done to make our laws more agile (pun intended).

PS: I understand that Agile software development has several features and this iterative nature is just one of them. But that is the only one I want to focus on here.

Pipes, Platforms, the Internet and Zero Rating

My friend Sangeet Paul Choudary, who runs Platform Thinking Labs, likes to describe the world in terms of “pipes” and “platforms”. One of the themes of his work is that we are moving away from a situation of “dumb pipes”, which simply connect things without intelligence, to one of “smart platforms”. Read the entire Wired piece (linked above) to appreciate it fully.

So I was reading this excellent paper on Two-Sided Markets by Jean-Charles Rochet and Jean Tirole (both associated with the Toulouse School of Economics) earlier today, and I found their definition of two-sided markets (the same as platform business) striking. This is something I’d struggled with in the past (I admit to saying things like “every market is two-sided. There’s a buyer and a seller”), especially given the buzzword status accorded to the phrase, but it is unlikely I’ll struggle again. The paper says:

A necessary condition for a market to be two-sided is that the Coase theorem does not apply to the relation between the two sides of the markets: The gain from trade between the two parties generated by the interaction depends only on the total charge levied by the platform, and so in a Coase (1960) world the price structure is neutral.

This is an absolutely brilliant way to define two-sided markets. The paper elaborates:

Definition 1: Consider a platform charging per-interaction charges a^B and a^S to the buyer and seller sides. The market for interactions between the two sides is one-sided if the volume V of transactions realized on the platform depends only on the aggregate price level

a = a^B + a^S

i.e., is insensitive to reallocations of this total price a between the buyer and the seller. If by contrast V varies with a^B while a is kept constant, the market is said to be two-sided.

So for a market to be two-sided, i.e. for it to be intermediated by an “intelligent platform” rather than a “dumb pipe”, the volume of transactions should depend not only on the sum of prices paid by the buyer and seller, but on each price independently.
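To make the definition concrete, here is a toy numerical check (the demand functions below are hypothetical, purely for illustration – they are not from the paper): hold the total price fixed, reallocate it between buyer and seller, and see whether volume moves.

```python
# Toy check of the Rochet-Tirole two-sidedness test.
# The demand functions are made up for illustration.

def volume_two_sided(a_B, a_S):
    # Participation on each side falls in that side's own price,
    # so volume depends on the price *structure*, not just the total.
    buyers = max(0.0, 1.0 - 0.5 * a_B)
    sellers = max(0.0, 1.0 - 0.2 * a_S)
    return buyers * sellers

def volume_one_sided(a_B, a_S):
    # Volume depends only on the aggregate price a = a_B + a_S.
    return max(0.0, 1.0 - 0.3 * (a_B + a_S))

# Same total price a = 2.0, allocated two different ways:
print(volume_two_sided(0.5, 1.5), volume_two_sided(1.5, 0.5))  # differ
print(volume_one_sided(0.5, 1.5), volume_one_sided(1.5, 0.5))  # equal
```

In the two-sided case, shifting the charge from buyers to sellers changes transaction volume even though the platform’s total take is unchanged – which is exactly the test in Definition 1.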

The “traditional” neutral internet, by this definition, is a platform. The amount of content I consume on Youtube, for example, is a function of my internet plan – the agreement between my internet service provider and me on how much I get charged as a function of what I consume. It doesn’t depend on the total cost of transmitting that content from Youtube to me. In other words, I don’t care what Youtube pays its internet service provider for the content it streams. Transaction costs (large number of small transactions) also mean that it is not practically possible for Youtube to subsidise my use of their service in this model.

Note that if buyers and sellers on a platform can make deals “on the side”, it ceases to be a platform, for now only the total price charged to the two matters (side deals can take care of any “adjustments”). The reason this can’t take place for a Youtube like scenario is that you have a large number of small transactions, accounting for which imposes massive transaction costs.

The example that Rochet and Tirole take while explaining this concept in their paper is very interesting (note that the paper was written in 2004):

…As the variable charge for outgoing traffic increases, websites would like to pass this cost increase through to the users who request content downloads…

…an increase in their cost of Internet traffic could induce websites that post content for the convenience of other users or that are cash-strapped, to not produce or else reduce the amount of content posted on the web, as they are unable to pass the cost increase onto the other side.

Note how nicely this argument mirrors what Indian telecom companies are saying on the Zero Rating issue: that a general increase in the cost of internet access can cause small “poor” consumers to stop consuming the internet altogether, as they are unable to pass on the cost to the other side!

Fascinating stuff!

How 2ab explains net neutrality

I’ve temporarily resurrected my blog on the Indian National Interest, and this post is mirrored from there. This is a serious argument, btw. After a prolonged discussion at Takshashila this morning, I convinced myself that net neutrality is a good idea.

So Prime Minister Narendra Modi has set off this little storm on Twitter by talking about the relationship between India and Canada being similar to the “2ab term” in the expansion of (a+b)^2 .

Essentially, Modi was trying to communicate that the whole of the relationship between India and Canada is greater than the sum of its parts, and it can be argued that the lack of a “cos \theta” term there implies that he thinks India’s and Canada’s interests are perfectly aligned (assuming a vector sum).

But that is for another day, for this post is about net neutrality. So how does 2ab explain net neutrality? The fundamental principle of the utility of the Internet is Metcalfe’s law which states that the value of a telecommunications network is proportional to the square of the number of entities in the network. In other words, if a network has n entities, the value of these n entities being connected is given by the formula k n^2 . We can choose the unit in which we express utility such that we can set k = 1, which means that the value of the network is n^2.

Now, the problem with not having net neutrality is that it can divide the internet into a set of “walled gardens”. If your internet service provider charges you differentially to access different sites, then you are likely to use more of the sites that are cheaper and less of the more expensive ones. And if different internet service providers charge different websites and apps differently, then it is reasonable to assume that the sites that customers of different providers access will be different.

Let us take this to an extreme, hypothetical case where there are two internet service providers whose networks are completely disjoint – the set of sites you can access through one provider is entirely separate from the set you can access through the other (this is purely a thought experiment). Effectively, we can think of them as two “separate internets”, since they don’t “talk to” each other at all.

Now, let us assume that there are a users on the first internet, and b users on the second (this is bad nomenclature by mathematical convention, where a and b are not usually used for integer variables, but there is a specific purpose here, as we will see). What is the total value of the internet(s)?

Based on the formula described earlier in the post, given that these two internets are independent, the total value is a^2 + b^2. Now, if we were to tear down the walls, and combine the two internets into one, what will be the total value? Now that we have one network of (a+b) users, the value of the network is (a+b)^2 or a^2 + 2 ab + b^2 . So what is the additional benefit that we can get by imposing net neutrality, which means that we will have one internet? 2 ab, of course!
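The arithmetic above can be checked in a few lines (the user counts are made up for illustration):

```python
# Metcalfe-style valuation: a network of n users is worth n^2 (with k = 1).

def network_value(n):
    return n ** 2

a, b = 300, 700  # hypothetical user counts on the two walled "internets"

walled = network_value(a) + network_value(b)  # a^2 + b^2
merged = network_value(a + b)                 # (a+b)^2 = a^2 + 2ab + b^2

print(merged - walled)               # 420000
print(merged - walled == 2 * a * b)  # True: the gain is exactly 2ab
```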

In other words, while allowing internet service providers to charge users based on specific services might lead to additional private benefits to both the providers (higher fees) and users (higher quality of service), it results in turning the internet into some kind of a walled garden, where the aggregate value of the internet itself is diminished, as explained above. Hence, while differential pricing (based on service) might be locally optimal (at the level of the individual user or internet service provider), it is suboptimal at the aggregate level, and has significant negative externalities.

#thatswhy we need net neutrality.

Amending the snooze function in alarm clocks

This is an idea that appeared to me in my dreams. Really. I’m not joking. Or maybe I thought of it as soon as I woke up this morning – on the cusp of dreams and reality – and then presently fell back asleep. Either way, it doesn’t matter. The idea is surely mine, and not knowing how to profit from it, I’m making it public.

The basic idea is that the inter-snooze interval between consecutive alarms should decrease geometrically. Currently, alarm clock apps on mobile phones have a fixed snooze duration. For example, my Moto G has a fixed snooze duration of 6 minutes (which I think I can change through settings, but it will then remain fixed at the new level). The wife’s iPhone has a fixed snooze duration of 5 minutes (again customisable, I believe).

However, I believe that this is illogical and makes you wake up over a longer time interval than necessary. The reasoning is that the degree of wakefulness at each alarm ring is different. When you wake up at the second ring (after you’ve snoozed once), you’re more wakeful than you were when the alarm rang for the first time. And after you’ve snoozed for the second time, you are unlikely to fall into sleep as deep as you did after the first snooze – which in turn was shallower than the sleep you were in before the alarm first rang.

By keeping the inter-snooze duration constant, what the alarm clock does is give you an opportunity to go back into the same kind of deep sleep (the longer you sleep between alarm rings, the greater the possibility that you will go back into deep sleep), which further impedes your complete waking up.

What is ideal is that the first time you get woken up from deep sleep, you struggle, snuggle and snooze, and go back to sleep. The next time you should be woken up before you’ve hit the deep sleep phase. You wake up again, struggle, snuggle and snooze, and go into shallower sleep. The next alarm ring should catch you at this shallower stage, and rouse you up. And so on.

So what I’m proposing is that the inter-snooze interval in alarm clocks should decrease geometrically. So if the first inter-snooze interval lasted five minutes, the next one should last less than that, and the one after that even less than that. Each time this interval should come down by a pre-defined fraction (let’s say half, without loss of generality). That way, even if you snooze multiple times, it ensures that you finally wake up in a time-bound fashion (beyond a point, the snooze duration becomes so small that it rings continuously until you switch off and wake up, and by then you have attained full consciousness).

So the way I want my alarm clock designed is that I define how much time I want to wake up in (let’s say default is 20 minutes), and a (harder to change) multiplicative factor by which inter-snooze times come down (default is half), and the inter-snooze interval decreases accordingly geometrically so that you wake up in exactly the time that you’ve initially specified!

So with the defaults of 20 and 1/2, the inter-snooze periods will be 10 mins, 5 mins, 2 min 30 secs, 1 min 15 secs, 37.5 secs, 18.75 secs, … by which time you should be annoyed enough to have woken up but yet wakeful enough having drifted back only just enough!
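A minimal sketch of such a schedule (the function name and parameters are my own invention, not from any existing alarm app):

```python
# Geometric snooze schedule: intervals shrink by `ratio` each time and,
# being a geometric series, sum to at most `total` minutes.

def snooze_intervals(total=20.0, ratio=0.5, min_interval=0.25):
    # First term chosen so the infinite series sums to exactly `total`:
    # a1 / (1 - ratio) = total  =>  a1 = total * (1 - ratio)
    interval = total * (1 - ratio)
    out = []
    while interval >= min_interval:
        out.append(interval)
        interval *= ratio
    return out

print(snooze_intervals())
# [10.0, 5.0, 2.5, 1.25, 0.625, 0.3125]
# i.e. 10 min, 5 min, 2 min 30 s, 1 min 15 s, 37.5 s, 18.75 s
```

Once the interval drops below `min_interval`, the app would ring continuously until switched off, as described above.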

I think this is a world-changing idea, but I mention again that I don’t know how to commercialise it so putting it out in the open. If you think this works for you, thank me!

And perhaps this is a good assignment to start my career in programming mobile phone apps. Should I start with iOS or Android? (I have an android phone and an iPad).

Barcelona Harbour and Montjuic

Last evening I decided to trek up Montjuic, a hill that is in the middle of Barcelona. I remember reading a long time back (probably on my last visit here) that there was a nice hiking path up Montjuic, and decided to go, without any plan. I conveniently forgot to look up the hiking path, and instead consulted google maps on the phone.

After a while the route got boring (this was after I had passed Placa Espanya). At around the same time I had started climbing the hill, and the combination of the elevation and lack of interesting things around (there were no shops or people or anything of interest on that road) made me want to turn back. I had almost turned back when I hit a bus stop, and bus number 55 came there. And off I climbed and went.

The bus dropped me at the bottom of the Montjuic Funicular, and I thought I’ll take that. But the steep price (EUR 11 for both ways) put me off, and a helpful tourist office nearby told me that the peak was 20 minutes walk away. I did the walk in 10, only to be confronted by another queue – for tickets to go into the castle. I decided to have a look around before I went in.

Going around the castle towards the side that faced the sea, this is what I saw:

[Image: the Barcelona container port, seen from Montjuic]

And I sat there, stunned. There were other people sitting or standing in the same area, most of them couples. And most of them seemed like they were looking out at the sea as they sat there. The sea held no interest to me, however, though my object of interest had something to do with the sea. It was the Barcelona harbour!

I had never before seen a container terminal in operation, and here was one, right under where I was standing, in full flow. There were three ships docked, each of a different size. Containers had been stacked up all over the terminal, as if they were Lego blocks. There were machines roaming all over the place, which would pick up containers and place them elsewhere. And then there were these stackers with orange claws which would load and unload containers to/from the ships.

Just to stand there and watch this operation was mindblowing, and so I stood there for about half an hour. I noticed some nooks in the Montjuic castle where couples were cuddled up. These nooks gave a great view of the container terminal. So I harboured visions of cosying up in one of these nooks with the wife, watching the operations of the Barcelona container terminal, analysing the operational effectiveness of the place and the algorithms involved. But the wife was at school, and so I moved on.

On my way back I “got lost” again, as I wandered on some hiking paths past some of the infrastructure that I understand had been built for the 1992 Olympic games. Once again I got “bailed out” by a bus stop, and a bus that dropped me at a point in town that I had been to earlier. “Problem reduced to known problem”, I exclaimed and walked home from there.

For visitors to Barcelona I would highly recommend going up Montjuic. I have no clue what the castle is like, for I didn’t go in. The hiking paths are supposed to be good but I didn’t explore much of that. Yet, it is a fantastic place to go to and watch global commerce in action, as trucks roll in and out of the container terminal, only to be divested of their containers by these machines that place them aside and then transport them on to the ships. It has to be seen to be believed!

Making coding cool again

I learnt to code back in 1998. My aunt taught me the basics of C++, and I was fascinated by all that I could make my bad old 386 computer do. Soon enough I was solving complex math problems, and using special ASCII characters to create interesting patterns on screen. It wasn’t long before I wrote the code for two players sitting at the same machine to play Pong. And that made me a star.

I was in a rather stud class back then (the school I went to in class XI had a reputation for attracting toppers), and after a while I think I had used my coding skills to build a reasonable reputation. In other words, coding was cool. And all the other kids also looked up to coding as a special skill.

Somewhere down the line, though I don’t remember when it was, coding became uncool. Despite graduating with a degree in Computer Science from IIT Madras, I didn’t want a “coding job”. I ended up with one, but didn’t want to take it, and so I wrote some MBA entrance exams, and made my escape that way.

By the time I graduated from my MBA, coding had become even more uncool. If you were in a job that required you to code, it was an indication that you were in the lowest rung, and thus not in a “management job”. Perhaps even worse, if your job required you to code, you were probably in an “IT job”, something that was back then considered a “dead end” and thus not a preferred job. Thus, even if you coded in your job, you tended to downplay it. You didn’t want your peers to think you were either in a “bottom rung” job or in an “IT job”. So I wrote fairly studmax code (mostly using VB on Excel) but didn’t particularly talk about it when I met my MBA friends. As I moved jobs (they became progressively studder), my coding actually increased, but I continued to downplay the coding bit.

And I don’t think it’s just me. Thanks to the reasons explained above, coding is considered uncool among most MBA graduates. Even most engineering graduates from good colleges don’t find coding cool, for that is the job that their peers in big-name, big-size, dead-end-job software services companies do. And if people consider coding uncool, it has a dampening impact on the quality of talent that goes into jobs that involve coding. And that means code becomes less smart. And so forth.

So the question is how we can make coding cool again. I’m not saying it’s totally uncool – there are plenty of talented people who want to code, and who think it’s cool. The problem, though, is that the marginal potential coder is not taking to coding because he thinks coding is not cool enough. Making coding cool again would draw more of these marginal coders into the vocation!

Any ideas?

LinkedIn, WhatsApp and Freaky Contact Lists

So one of the things I do when I’m bored is to open the “new conversation” (plus sign) thing on my WhatsApp and check which of my contacts are there in my WhatsApp social network. I do this periodically, without any particular reason. On the upside, I see people who I haven’t spoken to for a long time, and this results in a conversation. On the downside, this is freaky.

The problem with WhatsApp is that it automatically assumes that everyone in your phone book is someone you want to keep in touch with. And more likely than not, people make their WhatsApp profile pictures visible to all. And sometimes these profile pictures are something personal, rather than a simple mugshot. Some people have pictures of their homes, of their kids, and of their better halves. And suddenly, everyone who has their number in their phone book gets a peek into the part of their lives they’ve chosen to make public by way of their WhatsApp profile pictures!

Some examples of people in my phone book into whose lives I’ve thus got a peek include a guy who repairs suitcases, a guy who once repaired my refrigerator, a real estate broker whose services I’d engaged five years back to rent out my house, and so forth. And then there are business clients – purely professional contacts who have chosen to expose, through their WhatsApp profile pictures, aspects of their personal lives! Thus, through the picture function (of course you can choose to not make your picture public), you end up knowing much more about random contacts in your phone book than you need to!

The next level of freakiness comes from people who have moved on from the numbers that they once shared with you. So the photo associated with an old friend shows someone who looks very, very different, and who is definitely not that friend! And thanks to their having put pictures on WhatsApp, you now get an insight into their personal lives (again, I tell you, people put intensely personal pictures as their WhatsApp profile pictures). I haven’t yet messaged one of these numbers assuming it still belongs to the friend who once owned it!

Then there are friends who live abroad, who gave you the numbers of close relatives when they were in town so that you could get in touch with them. These numbers have now duly passed back to the said relatives (usually a parent or a sibling) of your overseas friends, and thanks to the pictures that they put on WhatsApp, you now get an insight into their lives! Then you start wondering why you still have these contacts in your phonebook, but it’s so unintuitive to delete contacts that you just let them be.

The thing with Android is that it collects your contacts from all social media and puts them into your phone book – especially Facebook and LinkedIn. On Facebook people are unlikely to give out their phone numbers, and since everyone on my Facebook friends list is my friend anyway (today I began a purge to weed out unknown people from the list), it’s not freaky to see them on WhatsApp. But thanks to the Android integration, you have your LinkedIn contacts popping up in your address book, and consequently on WhatsApp!

Again, LinkedIn has a lot of people who are known to you, though you have no reason to get to know their personal lives via the photos they put on WhatsApp. But on LinkedIn you also tend to accept connection requests from people you don’t really know, but think you might benefit from associating with at a later date. And thanks to the integration with WhatsApp, and profile pictures, you now get an insight into the lives of your headhunters! It’s all bizarre.

So yes, you can conclude that I might be jobless enough to go through my full WhatsApp contacts list periodically. Guilty as charged. The problem, though, is that people don’t realise that their WhatsApp profile pictures are seen by just about anyone who has their number, irrespective of the kind of relationship. And thus people continue to put deeply personal pictures as their WhatsApp profile pictures, and thus bit by bit give themselves away to the world!

The solution is simple – put a mugshot or a “neutral” photo as your WhatsApp profile picture. You don’t know how many people can see that!

GMail and Unsolicited Emails

About a year and a half back, GMail moved to this tabbed inbox format, where “promotional” and “social” mails were filtered out and delivered to separate tabs. This meant that most of the promotional mail and mail from social networks never hit the main inbox, which meant that your phone wouldn’t buzz for those, and that you need not read them all to keep that “inbox zero” count (I know a lot of people apart from me who are obsessed about that).

What this meant was that we didn’t really bother about all that unsolicited mail – it would sit somewhere in the inbox, away from view, and all you did occasionally was click on the “social” and “promotions” tabs so that nothing would be seen in the tab headers (the OCD extends to making sure those headers are empty).

In fact, now that all this promotional mail was hidden away, you didn’t mind getting more of it. And when more social networks and advertisers started approaching you, you didn’t mind. It was easy to ignore them. And once in a while you would click through, resulting in a payment somewhere, which made it all worthwhile for the advertisers.

The new Inbox app that Google has rolled out over the last month, though, has changed all that. While there are now several more tags that are automatically applied to mails so that they don’t hit your inbox directly (“updates”, “finance” and “forums” are examples), these tags are treated no differently from the “social” or “promotions” tags.

Also, the way the mails under these tags are shown is interesting. Every time there is at least one unread mail under a tag, the tag shows up near the top of your inbox. And when you click on it, all “undone” mails under that tag are shown. So if there was a promo which I simply ignored and clicked “done” on the tag (rather than on the promo mail itself) it would show up again the next time something landed in the tag. And that is an irritant.

To put it differently, when a promo or social mail lands in my inbox, I now have this compulsion to open it and mark it as “done”. And over the last few days I’ve found myself doing this way too many times.

As a consequence I’m now making a conscious effort to track down and unsubscribe from any unsolicited mails I was getting. LinkedIn sends me a daily digest of some groups. I’ve unsubscribed from all of them. Amazon and Flipkart used to hit me often with promotions. That has stopped. Livejournal birthday reminders are gone, too. Over the last few days, I’ve been hunting down the “unsubscribe” button on all promotional mail and actively unsubscribing from unsolicited mail.

I’m now going to extend from my one data point and assume that others are behaving similarly. Based on this, I think GMail’s tabbed inbox format was great for promoters – by keeping the promos away in one tab, it meant people didn’t mind getting those, and they would click through once in a while.

In the Inbox, though, since promos are almost treated similar to “normal” mail, the annoyance factor has increased, and thus people are unsubscribing. And it is not good news for advertisers.

R, Windows, Mac, and Bangalore and Chennai Auto Rickshaws

R on Windows is like a Bangalore auto rickshaw, R on Mac is a Chennai auto rickshaw. Let me explain.

For a long time now I’ve been using R for all my data management and manipulation and analysis and what not. Till two months back I did so on a Windows laptop and a desktop. The laptop had 8 GB of RAM and the desktop had 16 GB. I would handle large datasets, and sometimes, when I tried to do something complicated that required more memory than the computer had, the process would simply fail with a “cannot allocate vector of size…” error. On Windows, R would not creep into the hard disk, into virtual memory territory.

In other words it was like a Bangalore auto rickshaw, which plies mostly on meter but refuses to come to areas that are outside the driver’s “zone”. A binary decision. A yes or a no. No concept of price discrimination.

The Mac, which I’ve been using for the last two months, behaves differently. This one has only 8 GB of RAM, but I’m able to handle large datasets without ever running out of memory. How is this achieved? By using the system’s virtual memory. The system doesn’t run out of memory – I haven’t received the “can’t allocate memory” error even once on this Mac.

The catch, though, is that virtual memory (despite the SSD) is painfully slow: it takes much longer for the program to read from and write to virtual memory than main memory. This means that processes that need more than 8 GB of RAM (I frequently end up running such queries) do execute, but take a really long time to do so.

This is like Chennai auto rickshaws, who never say “no” but make sure they charge a price that will well compensate them for the distance and time and trouble and effort, and a bit more.

Curation mechanisms

The one thing that is making my stay away from twitter (Flipboard is also gone now, since the iPad has been returned to its rightful owner – the wife) hard is the fact that I’m unable to find a reliable alternate means of curating content. Let me explain.

Basically, how do you find interesting stuff to read? I’m talking about article-length pieces here (500-5000 words), not books – the latter are “easy” in terms of how they’re packaged, etc. Fifteen years back it was straightforward, if limiting – in order to find a good piece of writing you needed to be subscribed to the periodical in which it was published.

So you would subscribe to periodicals as long as they published good pieces once in a while – at least for the option value of finding such pieces. This meant that sales of periodicals were inflated – a handful of good pieces here and there would support significant subscription numbers, and periodicals did rather well. Then the internet changed all that.

The beauty of the internet is unbundling – you can read one piece from a periodical without reading the fluff. Even periodicals that have a subscription paywall usually offer a certain number of articles (not a certain number of editions, note) free before you pay up. This has turned the magazine business topsy-turvy – if the odd good piece is all that appears in your magazine, people are going to find it somehow, and are not going to bother subscribing to your magazine just so that they can find it!

The question, thus, arises as to how you can find good pieces that are of interest to you without subscribing to whole magazines themselves (and considering the number of sources from which I’ve consumed content even in the last two weeks it’s impossible to subscribe to all of them).

Close to ten years back you got it by way of an RSS reader – you essentially subscribed to entire periodicals or well-defined subsets of them. You didn’t pay for the subscription and there was no paper – the pieces would come and fall in your “RSS feed”. Feed readers such as Bloglines and Google Reader became big in the mid noughties (I remember switching from the former to the latter in 2006 or something).

You used these readers to subscribe to blogs of interesting people (back then a lot of interesting people blogged), and these blogs would link out to other interesting content, and you would consume it all. Then Google Reader began this thing called “shared items” – where you could share items from your RSS feeds with your Google Talk friend list. This improved curation – for example, I knew that there was this friend who would share all interesting posts from a particular blog, so I didn’t need to subscribe to that blog’s RSS feed any more. Soon you could share items apart from those on your RSS feed – any interesting website you came across, you could share. It was beautiful.

And then in its infinite wisdom, Google decided to kill Google Reader! Like that. Gone.

Thankfully by then we had twitter, where among other things people would share interesting stuff. And there would be enough of those posted through the day every day to keep you busy! All the buried content in the world now started getting dug up thanks to twitter. There was always tonnes of interesting stuff.

But then it comes with a remarkably high degree of outrage – no one can simply share a link any more; there has to be outraged commentary about something or the other. The question, thus, is how we can consume content from twitter without the outrage. That leads to apps such as Flipboard, which present the content in an interesting format. There was a similar app I tried to write but gave up on.

Now that I don’t have access to Flipboard any more (while Flipboard for Android is nice, it’s nothing like Flipboard for iPad), how do I curate content? How do I get interesting stuff recommended to me without having to trawl infinite websites?

The app that I think is best placed for such curation is Pocket, where you can store articles for reading later. But its native sharing feature isn’t too good – in fact, it encourages you to share via Twitter and email! If only Pocket could improve its native sharing, and thus build a social network around shared content, we could have something like Google Reader shared items once again!

But with everyone on twitter is there a market for this?