mobile, semantic, social, graphical, intelligent Internet

The current fragmented state of the Internet -- the profusion of apps, sites, feeds, etc. -- seems to cry out for what Edo Segal calls "The Filter (Holy Grail)": a top-level filter standing between the user and the Internet, one that would make navigation vastly more efficient by constantly sifting through the entire Internet and intelligently selecting the most relevant information to appear on your interface.

From the above-linked article: "The Holy Grail is a filter which only serves up information which is relevant based on who you are, your social graph, what you or your friends are doing now, what you or friends have done before, and in context of other information you are consuming. It needs to be delivered wherever you are and on whatever device or display can deliver the ambient stream: mobile phone, laptop computer, TV, heads-up display in vehicle or inside your glasses."

Segal seems to envision such filters being created by various startups and/or corporate Internet players. But I've been hypothesizing that some simple algorithm could emerge at any time, even from the brain of some lone individual, that would fulfill Segal's requirements for The Holy Grail. Once the algorithm is elaborated in a mathematically complete way, its logic and usefulness may be obvious to nearly anyone, and it could quickly become incorporated into Google, Facebook, Twitter, and other common Internet gateways.

The Internet's physical architecture allows for real-time conversations with millions of simultaneous participants. Such conversations are not actually happening yet, but all the hardware necessary for them to begin happening has already been deployed. Hundreds of millions of people are interacting with the network at any given time. In order to convert this network activity into a planetary conversation, two steps appear critical:

1) Widespread adoption of some data structure (data format, file format, encoding method, serialization scheme, etc.) capable of representing any information (data, meaning, intent, values, etc.) that anyone might want to convey through the network. In other words, any given online person will be constantly generating new data expressing "what's up" with that person in a format that can easily be understood by anyone else. Let's use the term "plex" for a given instance of a user's "what's up" data structure.

2) Widespread adoption of automated methods of constantly updating a user's plex. These methods (filters, algorithms, etc.) will look at a user's current plex and at the user's current activity (selection(s) within the interface) and will rewrite the plex based primarily on those two inputs (but also based on information from throughout the Internet). (A rough sketch of both steps, in code, follows below.)
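A minimal sketch of both steps, assuming (purely hypothetically) that a "plex" is nothing more than a dictionary of Internet addresses, each with a magnitude, the magnitudes summing to 1; the blending weights here are arbitrary assumptions, not any established standard:

    # Hypothetical sketch of Steps 1 and 2.  A "plex" is assumed to be just
    # {address: magnitude} with the magnitudes summing to 1.

    def normalize(plex):
        """Rescale magnitudes so they sum to 1."""
        total = sum(plex.values())
        return {addr: mag / total for addr, mag in plex.items()} if total else plex

    def update_plex(current_plex, selected_address, network_plexes, selection_weight=0.5):
        """Step 2: rewrite the plex from (a) the user's current selection and
        (b) plexes fetched from elsewhere on the network."""
        updated = {addr: mag * (1 - selection_weight) for addr, mag in current_plex.items()}
        # (a) the user's current activity: boost whatever was just selected
        updated[selected_address] = updated.get(selected_address, 0.0) + selection_weight
        # (b) information from the rest of the network: blend in other plexes
        for weight, other_plex in network_plexes:      # e.g. [(0.1, {...}), ...]
            for addr, mag in other_plex.items():
                updated[addr] = updated.get(addr, 0.0) + weight * mag
        return normalize(updated)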

When these steps are taken, I hypothesize that they could lead very quickly to interfaces of moving, seamless, fractally nested images, which will transcend and include the functionalities of alphanumeric text -- they will allow us to express, simply by navigating through these virtual worlds, anything we can currently express through text and much more.

Speech and manual text editing appear too cumbersome, too slow, to serve as the predominant media for interfacing with this new Internet where implications of every input action will ripple intelligently through the entire network. The global conversation seems much more likely to happen primarily through interfaces of flowing images, from which we will select using mice and touchscreens.

As Mark Pesce said, "We’re as Borg-ed up as we need to be. Probably we’re more Borg-ed up than we can handle." Sufficient hardware has already been deployed for the Internet to begin functioning as a kind of benevolent hive mind into which we'll all be plugged, allowing Netizens to act together with such vastly greater efficiency (and empathy, understanding, cohesiveness, etc.) that we'll make vastly accelerated progress on solving social problems and on developing new technologies. Only a software upgrade -- perhaps hinging on just a few lines of code -- stands between us and this future.

In May I wrote some mathematical expressions I called Algorithm Alpha -- an early, crude attempt at creating The Filter ("Step 2" above).

In June Mark Pesce announced the Plexus Social Networking Stack, which seems to make significant progress toward "Step 1" above, with its "plex" data structure (a social graph, accessed by Plexus components called the Listener and the Sharer).

latest algorithm notes

These speculations are aimed at formulating an algorithm for controlling online experiences. For several months I've been speculating on my blog along similar lines; I have been predicting the imminent emergence of niches in the online ecosystem for these types of algorithms. I am attempting to write an algorithm, and to describe a scenario of its implementation with enough detail to induce others to actually implement it, as at least an optional feature of some kind of online social network.

Hypothesis: The massive, even overwhelming, amounts of feedback potentially available through the Internet provide a potential avenue for breaking through our linguistic/emotional/political impasses, breaking through the remaining barriers to the kind of super-efficient logistical coordination that can solve our big problems and initiate a Utopia.

Let's try to envision our next steps down this avenue. I wrote "Algorithm Alpha," a very simple formula which continuously transforms a social graph by looking at
1) the user's current friends and friends-of-friends (Process 1), and at
2) which friend a user is connecting with at the moment (Process 2). (A schematic skeleton of this two-process loop is sketched just below.)
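A skeleton of that loop, purely schematic: the two processes are passed in as functions here and are sketched concretely under "Algorithm Alpha" further down, and the 0.2 magnitude is an arbitrary placeholder:

    # Hypothetical skeleton of the Algorithm Alpha loop.  Concrete sketches of
    # process_1 and process_2 appear in the "Algorithm Alpha" section below.

    def run_interface(graph, self_addr, fetch_graph, get_user_selection, render,
                      process_1, process_2):
        while True:
            # Process 1: recalculate the graph from friends' graphs
            graph = process_1(graph, self_addr, fetch_graph)
            # Process 2: fold in whichever friend the user is selecting right now
            selection = get_user_selection()       # None when there is no input
            if selection is not None:
                graph = process_2(graph, selection, 0.2)   # 0.2 is a placeholder magnitude
            render(graph)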

Process 1, if left to run repeatedly without any Process 2 input, will transform the graph based solely on the activity going on at other nodes, eventually resulting in a kind of "Internet average" graph and interface, a window into what's happening on the Internet as a whole.

Process 2 could be understood as functioning to slow down or counteract Process 1 by indicating a preference for, a bias toward, a selection of, one of the user's friends as opposed to the others. We can expect that each such action will, first, result in a reorientation of the perspective that the graphical interface presents to the user -- probably in most cases a kind of zooming into the selected area. Second, the implications of this input will reverberate through the Internet, affecting other social graphs via instances of processes similar to "Process 1" running on other interfaces. Process 1 of the interface where the input was entered will then ensure that these reverberations are continuously represented as appropriate changes to the images displayed on the graphical interface.

[Potentially interesting question here: are the images on the graphical interface somehow recalculated from scratch each time the social graph is edited? Or does the interface remember its previous "frames" and make piecemeal adjustments for each new selection? Or, could either of these two ways of computing the re-rendering result in somehow equivalent transformations (or in two nonequivalent, but viable, alternatives?)]

A function mathematically relating the frequency/speed with which Processes 1 and 2 execute might provide an important piece of the puzzle at this stage.
----------------------------
One possibility: a "continuous input" scheme: The user is considered to be "always" selecting one or another of the members of the social graph. Or possibly even to be selecting more than one at a time, in different proportions.

[The magnitude of a 1st-order connection could somehow determine how often Process 1 re-checks that node.[?]]

Assuming the "continuous input" scheme, we could re-conceive of Process 2 as taking for its input, instead of a single value, a set of values in a format identical to the format we've been assuming for the social graph (a list of addresses with a magnitude number attached to each, with magnitudes adding to 1). So the interface continuously generates new "input graphs," rather than just single input values, and feeds them to the algorithm.

Also of interest may be a table showing the magnitude of the connectedness between every possible pair of friends in a social graph. Showing, in other words, the degree of closeness, of correlation, between each pair of first-order friends. I don't really have a clear idea yet of how this could be used in the calculations. We could adjust the input graphs based on it, but wouldn't the distributed effects of the interactions of the many instances of these algorithms already adjust for these correlations?
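Computing such a table is straightforward if each friend's own graph can be retrieved; the overlap measure used below (the sum of elementwise minima of the two graphs) is just one arbitrary choice among many:

    # Sketch of the pairwise-connectedness table; the overlap measure is arbitrary.

    def connectedness_table(graph, fetch_graph):
        """Score, for every pair of 1st-order friends, how similar their graphs are."""
        friend_graphs = {addr: fetch_graph(addr) for addr in graph}
        table = {}
        for a in friend_graphs:
            for b in friend_graphs:
                if a < b:   # count each unordered pair once
                    ga, gb = friend_graphs[a], friend_graphs[b]
                    table[(a, b)] = sum(min(ga.get(k, 0.0), gb.get(k, 0.0))
                                        for k in set(ga) | set(gb))
        return table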

When a given image appears on the screen, whatever it represents, we want the interface to automatically render, nearby the image, multiple likely "connotations" -- other images, from anywhere on the Internet, with _meanings_ closely connected to the given image -- with the appropriateness of these connotations calculated based both on the given image (which probably corresponds to a node somewhere that is linked to a social graph) and on the current social graph of the user's own interface.

ok. let's create an algorithm for manifesting global online direct democracy

Let's try to come at this algorithm business from the perspective of the physical constraints around the media that humans will typically be using to interact with their interfaces -- screens, mice, etc.

We assume those physical constraints, and we try to define characteristics of a bit of code that we would want to run on all interfaces, code which could represent the reality of each user-interface hybrid as transparently, faithfully, truly as possible for communication with the rest of the network.

We'll remember that everyone will have an environment outside of the interface.

The code for a given node will make the assumption that similar code is operating on many other nodes. So in writing the code, we can think about how the code will function on scales larger than the locality of the individual user.

What domain of values can the code's variables take? We're trying to write a single string of code, identical copies of which can run on every node in the network. Its variables must take different values so that it will do different things in different instances.

Different things will be going on at different nodes, but they will all have in common the action of communicating with each other. So the addresses that point to other nodes seem like a natural set of variables for the code to operate on [bringing us back to the social graph as a kind of fundamental concept here]; and we'll assume that some node will exist to represent any other thing, concept, etc. that the code might want to represent, internally, as a variable's value. [similar to the philosophy of the Semantic Web protocols]

We may want to assume that in addition to node addresses, the code's variables may also take numerical values, or for simplicity we may want to assume that all possible numerical values will, like everything else, be represented by node addresses. [RDF seems to take the former approach.][If we do the latter, wouldn't we need to somehow specify within the algorithm when to treat the variable as numeric, which would probably defeat the purpose of increased simplicity?]
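To make the two options concrete, here is a purely hypothetical illustration (these types are not a proposal, just a way of stating the alternatives):

    # Hypothetical illustration of the two variable-domain options.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class NodeAddress:
        uri: str                        # points to another node's graph

    # Option 1 (roughly the RDF-style approach): a variable holds either an
    # address or a literal number.
    Value = Union[NodeAddress, float]

    # Option 2: every value is an address, so a number would have to live at a
    # node of its own, e.g. NodeAddress("num://0.25") -- but then the algorithm
    # still needs some rule for deciding when to treat that node numerically.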

Then the questions seem to become: how to transform these variables based on the interconnections of the nodes that they represent, and based on the user's input? Algorithm Alpha gives some precise but possibly arbitrary/suboptimal/incomplete mathematical formulas for answering these questions.

toward Algorithm Beta

OK, I may have gotten a little over-enthusiastic about Algorithm Alpha :-)

But doesn't it seem like something along these lines (Algorithm Beta? :-) -- some algorithm that automates the evolution of the magnitude of your connections (or subscriptions) -- in other words, an algorithm that chooses your friends for you (as Matt has been putting it) -- is going to emerge soon, with serious implications for AI, etc.?

I have used the Global Brain metaphor. Maybe this short string of code could begin connecting online nodes in a new way, possibly somewhat analogous to how a short string of ape DNA began connecting neurons in a new way and helped abstract thought, language, etc. begin to emerge.

We can imagine many such algorithms -- many criteria for automatic friend selection -- and we can imagine that many different ones will probably coexist in this new emerging ecosystem. But the key right now seems to me to be the specification of an algorithm that "proves the rule" -- a basic, general social graph evolution method that provides such manifest benefits that it will induce some new or existing social networks (or social network clients -- it could be any program that can edit the social graph) to actually implement it as a feature.

When the social graph automatically fluctuates at micro timescales (compared to the current processes of manually adding/removing connections, using 3rd party services for finding and bulk-following Twitter users, etc.) this could become a powerful method of representing any kind of information. Why try to adapt social graphs for this purpose? By definition, social graphs are meant to represent the position of a node relative to the rest of the network. The "rest of the network" consists of other nodes, each of which has its own social graph. So social graphs seem like the only medium that we can assume will be used at every node. And they seem easily adaptable for expressing just about any kind of meaning; any complex of associations, or meanings, or preferences that become attached to node A (via the composition of A's social graph) can then be pointed to by node B via the inclusion of node A in node B's graph.

I have a few thoughts about "Algorithm Beta". Maybe it will have a more unified structure than Algorithm Alpha's two distinct Processes. Maybe, rather than associating each input action with a 1st-order connection and enlarging the magnitude of that connection in the user's social graph, it will instead associate each action with any node -- not necessarily a node in the user's social graph, just whichever node seems most representative of the user's intent/desire/context at that time -- and then recalculate the user's social graph by simply copying that node's graph, or by somehow amalgamating (expressing a relationship between) that node's graph and the user's current graph.
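A rough sketch of that alternative, assuming some separate mechanism has already identified the node most representative of the current input action; the blend parameter is an assumption:

    # Hypothetical sketch of the "Algorithm Beta" idea: copy, or blend in, the
    # graph of whichever node best represents the user's current intent.

    def beta_step(user_graph, representative_addr, fetch_graph, blend=0.5):
        rep_graph = fetch_graph(representative_addr)   # need not be a current friend
        if blend >= 1.0:
            return dict(rep_graph)                     # simply copy that node's graph
        merged = {addr: mag * (1 - blend) for addr, mag in user_graph.items()}
        for addr, mag in rep_graph.items():
            merged[addr] = merged.get(addr, 0.0) + blend * mag
        return merged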

There seem to be some key insights to be had regarding how shifting patterns of node interconnections will ultimately be visualized on our screens, insights that will help in writing this algorithm. We're trying to fluidize our interfaces, to go beyond the rectangular boxes and strings of text; mentally visualizing this goal could help us define the logic that will fluidize the making and breaking of connections at the most abstract level possible (the social graph level), which will help accelerate the evolution of our social networks and the efficiency of our online communications, which will then expedite the collaborative process of manifesting even more fluid interfaces.

response to Mike Dougherty

On Sun, May 16, 2010 at 10:03 AM, Mike Dougherty <msd001@gmail.com> wrote:
On Sun, May 16, 2010 at 11:22 AM, Joshua Maurice <josh.maurice@gmail.com> wrote:
If I'm right about that, then the algorithm will almost inevitably "go viral" and become a standard feature for Internet interfaces. It doesn't really seem like something that anyone could retain Zuckerbergian control of. I hypothesize that the more people start using interfaces equipped with such algorithms, the more image-based and less text-based our interfaces will become.
 
However, I will happily accept $50,000 OBO for it :-) Seriously though folks, might someone toss me a grant, a few thousand bucks maybe, for continued research on this?

don't expect it to go viral at $50k

It takes a lot of work building the critical mass required for something new to be ubiquitously embraced.


But do you not agree as to the desirability of having an algorithm like this running on your personal interface, even if no one else is using it?

So many sites now provide different ways to find people on Twitter, and Twitter itself is just starting to add geolocation features allowing you to find people based on location. People devote large portions of the time they spend on social networking sites to following/unfollowing decisions. And then there are other features for categorizing the people you follow, like the Lists feature that Twitter rolled out a few months ago, and the similar features that Tweetdeck and other Twitter interfaces have had for a bit longer. The job of managing our online social graphs is becoming so complex that the collapse of this profusion of partially overlapping methods (not to mention the still-largely-separate networking sites themselves) into a vastly simplified system seems inevitable.

This algorithm attempts to express some kind of universal logic that we all use all the time anyway, in navigating through the social network that each of us is. We can only pay attention to a limited set of things at a time, so why not express this as a list of addresses with a magnitude number assigned to each? Then we're going to want to transform the list regularly. Process 1 automates continuous information sharing with the rest of the network, and Process 2 allows for continuous input that will percolate through and evoke feedback from the network.

Really, I suppose somebody could just clone an existing open-source Twitter interface and plug the algorithm into the code, if they wanted to create a working prototype that people could use.

Mark Pesce recently wrote about the significance of the social graph in the coming age of hyperconnectivity in "Calculated Risks" [http://blog.futurestreetconsulting.com/?p=342]

...and also in "Social networks and the end of privacy" [http://www.abc.net.au/unleashed/stories/s2899438.htm]:

"That social network, in itself, is of enormous value. It's a record of your passage through the human universe: who you've met, who you've liked, who's impressed you. These connections are a series of impressions, as well as a biography and a chronology. Your connections tell you who you are. They also tell the rest of us who you are.
"I can have a look at your 'social graph' - the newfangled name for this - and determine, with varying degrees of accuracy, your age, your sex, your field of employment, and so forth. I can do better. Some very sophisticated software - developed by the NSA, but now widespread - can illuminate your political affiliations, your sexual preference (and pecadiloes), your health, and so forth.
"If I can look at your social graph and determine that you're a terrorist, or a lesbian, or a cancer survivor, or lactose-intolerant, that might be information you want to keep to yourself, information you don't want shared widely. Yet it's plainly visible from your social graph."

So to the extent that Mark is right about the importance of this newfangled "social graph" idea, to the extent that it does loom large in our future, aren't we well advised to look into basic logical methods of maintaining/evolving our social graphs?

Mark illuminates how expressive our social graphs already are of all those different aspects of our lives. I expect Algorithm Alpha to help vastly accelerate this expressiveness -- and thereby allow us, as I said earlier, to encode all the content of all the messages going through the network as patterns of network connections, expressed in our social graphs as different values attached to different connections at different times.

I haven't described in very coherent detail how such an algorithm, implemented on a global scale, will help produce nearly all-graphical interfaces of fluid, moving, interconnected images. [Forthcoming [?]] If I can do that soon, I suppose it might help in conveying the algorithm's utopian attractiveness, as some kind of default universal standard for social graph transformation.

But I'm claiming that a lot of tweeps, etc. will want to run this algorithm on their interfaces whether or not it attains any of the critical masses at which more "fantastic" and "world changing" phenomena begin to emerge.

Zuckerberg, Diaspora, Algorithm Alpha, money

On Sun, May 16, 2010 at 4:25 AM, Mike Tintner wrote:
See the latest Fortune for the story of how 20 year old Zuckerberg resisted one offer after another,  from first many millions, to later many billions, to keep control of Facebook - all because he knew from the beginning that he had a "fantastic idea" re networking that would "change the world."
 
How would you compare your own ideas here?


Yes, after obsessing about this Algorithm for many months (and finally putting it into mathematically precise terms two days ago) it sure seems like a great candidate for the Next Big Thing of the Internet to me.

If I'm right about that, then the algorithm will almost inevitably "go viral" and become a standard feature for Internet interfaces. It doesn't really seem like something that anyone could retain Zuckerbergian control of. I hypothesize that the more people start using interfaces equipped with such algorithms, the more image-based and less text-based our interfaces will become. But it can be implemented easily by existing social networks, as soon as Twitter and/or Facebook and/or Google and/or Diaspora, etc., decide to begin offering the feature of automatic, rather than manual, social graph transformation -- automatic adjustment of the arrays of people/places/things to which our social networks connect us, arrays such as Following lists in Twitter, Friends lists in Facebook, etc. So, to reemphasize this point, it can provide immediate, significant benefits if implemented now as an option for users of Twitter, Facebook, etc., but it will also keep manifesting "fantastic" new emergent properties that will "change the world" as higher and higher proportions of netizens begin using it.

Perhaps some new social network can be created specifically in order to implement the algorithm. Or, since Diaspora is taking off so quickly as an open-source project, it should be easy enough for anybody to insert Algorithm Alpha into at least a fork of the project.

However, I will happily accept $50,000 OBO for it :-) Seriously though folks, might someone toss me a grant, a few thousand bucks maybe, for continued research on this? Pretty broke over here. I'm very confident that breakthroughs in communication technology, like this algorithm, are going to help us move past our money-based scarcity economy within a very few years.

My email address: josh.maurice@gmail.com

chat with Matt continued

On Sat, May 15, 2010 at 1:35 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:

>I guess I don't understand your algorithm or its purpose, but I don't see why the user should be concerned with what his social network looks like beyond his immediate neighbors. This information is only important in social networks like facebook where you have to choose your sources of information (friends) rather than letting a search function do that automatically.

Of course you're concerned with your social network outside your immediate neighbors. You're concerned with your neighbors' neighbors (2nd-order connections) somewhat less than with your immediate neighbors (1st-order connections), and with your neighbors' neighbors' neighbors (3rd-order connections) somewhat less than with your neighbors' neighbors (2nd-order connections), etc. It's all important, it all influences you, although the lower-order connections influence you more, by definition.
My algorithm only adjusts a node's first-order connections, taking as input both the 1st- and 2nd-order connections.

I think the next generation of internet interfaces obviously needs to integrate social networking and searching. What we search for will affect our social network, and vice versa.

How would a "search function" operate on an interface of the kind I'm talking about?

In a sense, every node becomes a continuously operating search function, by amalgamating identical 2nd-order connections. In other words, if several of my friends (1st-order connections) suddenly begin expressing an interest in a new node (2nd-order connection from my perspective), or if just one or a few of them begin expressing extremely strong interest in it, then the algorithm will put that node into my social graph as a new 1st-order connection.
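In code, this amounts to the same weighted sum that Process 1 performs: a node not currently in my graph gets "promoted" once its accumulated weight clears some threshold. The threshold below is an arbitrary assumption:

    # Sketch: how a 2nd-order connection gets promoted into my 1st-order graph.
    # The threshold value is arbitrary.

    def promoted_nodes(my_graph, fetch_graph, threshold=0.05):
        scores = {}
        for friend_addr, friend_weight in my_graph.items():
            for addr, mag in fetch_graph(friend_addr).items():
                scores[addr] = scores.get(addr, 0.0) + friend_weight * mag
        # nodes that were only 2nd-order but now clear the threshold
        return {addr: s for addr, s in scores.items()
                if addr not in my_graph and s >= threshold}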


You will still be able to keep Google as one of your first-order connections, i.e., "google.com" as one of the addresses in your social graph (whether implemented as an APML file, or whatever -- these social graphs as I've been describing them consist of just a list of addresses and a number between 0 and 1 for each address, so I suppose we might just as well use a plain text file).
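For example, such a plain-text file might contain nothing more than lines like these (the addresses and numbers are made up):

    google.com 0.15
    twitter.com/some_followee 0.10
    facebook.com/some.friend 0.25
    markpesce.com 0.20
    diasporafoundation.org 0.30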


But you will not send queries directly to google.com. Your query will become a node in your social graph with a temporarily extremely high magnitude, causing that query to suddenly become very "interesting" for all your friends as well, including Google. The social graph of each of your friends will then tend to become more inclusive of things more related to your query.
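A sketch of that query injection, assuming the query text gets published at some address of its own; the 0.6 magnitude is an arbitrary assumption, and subsequent runs of Process 1 would let it decay as attention moves on:

    # Hypothetical sketch: a query enters the social graph as a temporary,
    # very-high-magnitude node; later Process 1 runs let it fade.

    def inject_query(graph, query_addr, query_magnitude=0.6):
        """Give the query node a dominant share and shrink everything else."""
        scale = 1.0 - query_magnitude
        updated = {addr: mag * scale for addr, mag in graph.items()}
        updated[query_addr] = updated.get(query_addr, 0.0) + query_magnitude
        return updated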

how Algorithm Alpha facilitates graphicality

On Fri, May 14, 2010 at 6:02 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
I'm not sure what your algorithm is supposed to do. Are you trying to unify existing social networks, or solve some other problem with the way it is currently done? Perhaps you could write a document that includes its goals, inputs, outputs, costs, assumptions, and risks.

Matt, I've been studying your design.

I think if all the social networks just adopted some single system or file format for representing social graphs, such as APML (as I said in "Social Networks and the Beginning of the Global Brain"), this would provide a significant gain in efficiency over the current system of APIs, etc. and would go a long way toward unifying the existing social networks and toward making them accessible through indefinitely many new sites, like Diaspora. That in itself would significantly improve our social connectedness, providing many benefits economically, psychologically, etc.

Then I've been trying to imagine and encode the next step beyond such a unification, i.e., the automatic evolution of the social graphs, which you seem to have hit upon too ("Your social network is determined by your interests. You don't have to choose your friends. The network does it for you automatically. Once we have AI, your friends don't even need to be human.")

But I'm also imagining that all the content of all the messages going through the network will be encoded as patterns of network connections, expressed in our social graphs as different values attached to different connections at different times. In a sense every node will be doing only one job, as far as the rest of the network is concerned: adjusting its social graph.

Users will provide all their input by making selections within the moving graphics on their screens. As a user, I will have many choices as to how my interface graphically represents my social graph (how it arranges/organizes/presents areas representing my 1st-order connections), and each area representing a given 1st-order connection will be subdivided (into areas representing my 2nd-order connections) based on the choices made at that node about how to represent its 1st-order connections (which have become my 2nd-order connections). Some nodes will not have a human driving them and will arrange their 1st-order connections according to some algorithm.
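One toy way to picture the nesting: give each 1st-order connection an amount of screen area proportional to its magnitude, then subdivide that area according to that node's own graph. The flat dictionary below is only a stand-in for a real graphical arrangement:

    # Toy sketch of the nested subdivision; a real interface would arrange these
    # areas graphically (and each node could choose its own arrangement).

    def nested_areas(my_graph, fetch_graph, total_area=1.0):
        layout = {}
        for friend_addr, friend_mag in my_graph.items():
            friend_area = total_area * friend_mag
            friend_graph = fetch_graph(friend_addr)     # that node's 1st-order graph
            layout[friend_addr] = {
                "area": friend_area,
                "subareas": {addr: friend_area * mag
                             for addr, mag in friend_graph.items()},
            }
        return layout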

With Algorithm Alpha (your 1st-order network is continuously recalculated as the weighted combination of all your 2nd-order networks in "Process 1", and your 1st-order network is adjusted to reflect increased attention to any 1st-order connection you select through input devices in "Process 2") I am attempting to define a general and powerful method for routing information appropriately through social networks. The algorithm only looks at 1st- and 2nd-order connections, not any higher orders, but if it runs on millions of nodes, information would quickly make its way anywhere in the world where similar interests have been expressed. The algorithm doesn't even necessarily require a critical mass of adoption to be useful -- Twitter or Facebook could potentially begin offering such a network evolution option at any time, and many of us would probably turn it on and benefit from having our followees/friends list evolve based on our 2nd-order connections and our browsing behavior.

I have not yet really specified when or how often processes 1 and 2 would run, and in process 2, I haven't specified how to calculate the increase in connection magnitude when the user makes a selection, but these seem like relatively simple, interconnected questions.

Algorithm Alpha :-)

Here's what I've come up with so far, in trying to create some basic, "common-sense" standards for transforming social graphs. I'm trying to specify some logical maintenance processes that practically everybody can agree make sense to perform regularly on our social graphs.

The two recalculation processes described below will operate on graph X, which contains, as its values, a set of Internet addresses, and, associated with each address, a number between 0 and 1, with the sum of these numbers at a given time approximating 1. Assume that each of these addresses points to another graph, which can be retrieved, and which I'll call a "2nd-order graph" from the perspective of graph X.

Process 1:
Graph X will take new values using the values of all of its 2nd-order graphs. The number in graph X associated (before the recalculation) with a given address will determine the weight of the contribution of the values of the 2nd-order graph retrieved at that address to the values of the recalculated graph X. So each address-number pair in each 2nd-order graph will contribute to the number for that same address in the recalculated graph X an amount equal to the number in that pair (in the 2nd-order graph), multiplied by the number associated in graph X (before the recalculation) with the 2nd-order graph containing the pair. We can expect that the address representing graph X itself will appear, not self-referentially in graph X, but frequently in second-order graphs. When this happens, it contributes to the number for each address that appeared in graph X before the recalculation, an amount equal to the address's initial number, multiplied by the number in graph X for the 2nd-order graph that is pointing back to graph X, multiplied by the number in that 2nd-order graph for the address pointing to graph X.
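Read literally, that prose translates into something like the following sketch (this is my own rendering of it into code, including the handling of 2nd-order graphs that point back at graph X):

    # Sketch of Process 1.  graph_x is {address: magnitude}; self_addr is the
    # address at which graph X itself is published; fetch_graph retrieves a
    # 2nd-order graph by address.

    def process_1(graph_x, self_addr, fetch_graph):
        new_x = {}
        for friend_addr, friend_weight in graph_x.items():
            second_order = fetch_graph(friend_addr)
            for addr, mag in second_order.items():
                if addr == self_addr:
                    # A back-reference to graph X redistributes that weight over
                    # graph X's own pre-recalculation values.
                    for own_addr, own_mag in graph_x.items():
                        new_x[own_addr] = (new_x.get(own_addr, 0.0)
                                           + own_mag * friend_weight * mag)
                else:
                    new_x[addr] = new_x.get(addr, 0.0) + friend_weight * mag
        return new_x

Note that if every graph's magnitudes sum to 1 going in, the recalculated graph's magnitudes still sum to 1.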

Process 2:
Graph X will be recalculated based on the user's input. This process will not look at the contents (values) of the 2nd-order graphs. Assume that each pixel on the user's screen at any time will be associated with one particular address in graph X. Then, a selection of any pixel via an input action such as a mouse click will increase the number in graph X for the corresponding address, and will accommodate this larger number by decreasing the numbers for the rest of the addresses. If you click on a pixel and the number for the corresponding address goes from 0.1 to 0.2, then the number for each of the rest of the addresses will become eight-ninths as big as it was.
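Again as a sketch; how large the increase should be is exactly the question left open elsewhere in these notes, so the new magnitude is left as a parameter:

    # Sketch of Process 2: the selected address gets a larger magnitude and the
    # remaining magnitudes are scaled down so the total stays at 1.

    def process_2(graph_x, selected_addr, new_magnitude):
        old = graph_x.get(selected_addr, 0.0)
        rest_total = 1.0 - old
        scale = (1.0 - new_magnitude) / rest_total if rest_total else 0.0
        new_x = {addr: mag * scale
                 for addr, mag in graph_x.items() if addr != selected_addr}
        new_x[selected_addr] = new_magnitude
        return new_x

With old = 0.1 and new_magnitude = 0.2, the scaling factor is (1 - 0.2)/(1 - 0.1) = 8/9, reproducing the example above.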

Social Networks and the Beginning of the Global Brain

In "Social networks and the end of privacy", Mark Pesce seems to make a good case for creating an open-source social network, like Diaspora. I can easily imagine this providing a more efficient, lightweight alternative to the previously dominant corporate creations, as happened with Linux, Firefox, etc. This, as Mark suggests, will help maximize our freedom and flexibility in managing our social graphs.

The creation of universal hardware and software standards (hardware connection types like USB, character sets like ASCII, file formats, etc.) crucially facilitated the development of modern open- and closed-source operating systems. The creation of Internet protocols like HTTP and HTML crucially facilitated the development of modern open- and closed-source web browsers. Might we not then expect that the creation of increasingly sophisticated standards for representing and transforming our social graphs could aid in the transformation of online social networks into something of really spectacular usefulness?

The Data Portability Project has worked in this direction with the creation of Attention Profiling Markup Language (APML), which I've speculated could be used to represent our entire social graphs, though I'm not sure whether it was intended by its creators to serve that function. But along with the universal adoption of APML or something else for representing our social graphs, let's also think about creating standards for transforming those graphs -- for automatically adjusting the filters used to select which streams of information from the immense Internet ocean reach our eyeballs.

I've speculated that online social networks, once equipped with such information distribution algorithms, could replace the functions currently performed by most written legal standards of interaction, such as national constitutions and other legal codes, Robert's Rules of Order for legislative assemblies, etc. If/when such standards do emerge, perhaps the existing infrastructure of sites like Twitter, Facebook, Google, etc. could be adapted for the new functions and could coexist for some time with more open alternatives, with the information flowing freely and optimally among them all.