From the above-linked article: "The Holy Grail is a filter which only serves up information which is relevant based on who you are, your social graph, what you or your friends are doing now, what you or friends have done before, and in context of other information you are consuming. It needs to be delivered wherever you are and on whatever device or display can deliver the ambient stream: mobile phone, laptop computer, TV, heads-up display in vehicle or inside your glasses."
Segal seems to envision such filters being created by various startups and/or corporate Internet players. But I've been hypothesizing that some simple algorithm could emerge at any time, even from the brain of some lone individual, that would fulfill Segal's requirements for The Holy Grail. Once the algorithm is elaborated in a mathematically complete way, its logic and usefulness may be obvious to nearly anyone, and it could quickly be incorporated into Google, Facebook, Twitter, and other common Internet gateways.
The Internet's physical architecture allows for real-time conversations with millions of simultaneous participants. Such conversations are not actually happening yet, but all the hardware necessary for them to begin happening has already been deployed. Hundreds of millions of people are interacting with the network at any given time. In order to convert this network activity into a planetary conversation, two steps appear critical:
1) Widespread adoption of some data structure (data format, file format, encoding method, serialization scheme, etc.) capable of representing any information (data, meaning, intent, values, etc.) that anyone might want to convey through the network. In other words, any given online person will be constantly generating new data expressing "what's up" with that person in a format that can easily be understood by anyone else. Let's use the term "plex" for a given instance of a user's "what's up" data structure.
2) Widespread adoption of automated methods of constantly updating a user's plex. These methods (filters, algorithms, etc.) will look at a user's current plex and at the user's current activity (selection(s) within the interface) and will rewrite the plex based primarily on those two inputs (but also on information from throughout the Internet).
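Neither the plex format nor the update method has actually been specified, so here is only a minimal sketch of how the two steps might fit together. Everything in it is an assumption for illustration: the `Plex` class, its `interests` field, and the decay-and-boost rule in `update_plex` stand in for whatever data structure and filter eventually emerge.

```python
from dataclasses import dataclass, field

@dataclass
class Plex:
    """Step 1 (hypothetical): a user's "what's up" data structure.

    Here it is just a mapping from topics to interest weights;
    a real plex would presumably carry a far richer social graph.
    """
    user_id: str
    interests: dict = field(default_factory=dict)  # topic -> weight

def update_plex(plex: Plex, selection: str, decay: float = 0.9, boost: float = 1.0) -> Plex:
    """Step 2 (hypothetical): rewrite the plex from its current state
    plus the user's latest interface selection (a topic string here)."""
    # Decay every existing weight so stale interests fade over time.
    for topic in plex.interests:
        plex.interests[topic] *= decay
    # Boost the topic the user just selected.
    plex.interests[selection] = plex.interests.get(selection, 0.0) + boost
    return plex

p = Plex(user_id="alice")
update_plex(p, "solar power")
update_plex(p, "solar power")
update_plex(p, "batteries")
# Repeated selections outweigh one-off ones:
# "solar power" -> 1.71, "batteries" -> 1.0
```

The point of the sketch is only the feedback loop: each selection is read against the current plex and the plex is rewritten in response, so the structure is always a fresh summary of "what's up" with the user.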
When these steps are taken, I hypothesize they could lead very quickly to interfaces of moving, seamless, fractally nested images, which will transcend and include the functionalities of alphanumeric text -- they will allow us to express, simply by navigating through these virtual worlds, anything we can currently express through text and much more.
Speech and manual text editing appear too cumbersome, too slow, to serve as the predominant media for interfacing with this new Internet where implications of every input action will ripple intelligently through the entire network. The global conversation seems much more likely to happen primarily through interfaces of flowing images, from which we will select using mice and touchscreens.
As Mark Pesce said, "We’re as Borg-ed up as we need to be. Probably we’re more Borg-ed up than we can handle." Sufficient hardware has already been deployed for the Internet to begin functioning as a kind of benevolent hive mind into which we'll all be plugged, allowing Netizens to act together with such vastly greater efficiency (and empathy, understanding, cohesiveness, etc.) that we'll make vastly accelerated progress on solving social problems and on developing new technologies. Only a software upgrade -- perhaps hinging on just a few lines of code -- stands between us and this future.
In May I wrote some mathematical expressions I called Algorithm Alpha -- an early, crude attempt at creating The Filter ("Step 2" above).
In June Mark Pesce announced the Plexus Social Networking Stack, which seems to make significant progress toward "Step 1" above, with its "plex" data structure (a social graph, accessed by Plexus components called the Listener and the Sharer).