Chris Sacca on the implied user contract

Chris Sacca nicely summarized today’s FB vs Google vs Twitter controversy:

It comes down to what each company has promised its users. Facebook promised its users their stuff would be private, which is why users rightfully get pissed when that line blurs. Twitter has promised users, well, that it will stay up, and that is why users rightfully get pissed when the whale is back.

Google has promised its users and the entire tech community, again and again, that it would put their interests first, and that is why Google users rightfully get pissed when their results are deprecated to try to promote a lesser Google product instead.

It’s all about expectations.

The interoperability of social networks

Google recently added a caustic warning message when users attempt to export their Google Contacts to Facebook:

Hold on a second. Are you super sure you want to import your contact information for your friends into a service that won’t let you get it out?

Facebook allows users to download their personal information (photos, profile info, etc.) but has been fiercely protective of the social graph (you can’t download your friends list, etc.). The downloaded data arrives in a .zip file – hardly a serious attempt to interoperate using modern APIs (update: a Facebook employee corrects me/clarifies in the comments here). In contrast, Google has taken an aggressively open posture with respect to the social graph, calling Facebook’s policy “data protectionism.”

The economic logic behind these positions is a straightforward application of Metcalfe’s law, which states that the value of a network is proportional to the square of the number of nodes in the network*. A corollary to Metcalfe’s law is that when two networks connect or interoperate, the smaller network benefits more than the larger network does. If network A has 10 users, then according to Metcalfe’s law its “value” is 100 (10*10). If network B has 20 users, then its value is 400 (20*20). If they interoperate, network A’s users gain access to B’s 400 of value, while network B’s users gain access to only A’s 100 of value. Interoperating is generally good for end users, but assuming the two networks are directly competitive – one’s gain is the other’s loss – the larger network loses.
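To make that arithmetic concrete, here is a minimal sketch in Python using the illustrative 10- and 20-user networks from the example above (the numbers are from the example; everything else is just for illustration):

```python
# Metcalfe's-law sketch: a network's "value" is the square of its user count,
# and interoperating gives each side access to the other side's value.

def metcalfe_value(users: int) -> int:
    """Value of a network under Metcalfe's law: users squared."""
    return users * users

network_a = 10   # the smaller network
network_b = 20   # the larger network

value_a = metcalfe_value(network_a)   # 100
value_b = metcalfe_value(network_b)   # 400

# Interoperating gives A's users access to B's network and vice versa,
# so the smaller network gains more than the larger one.
gain_for_a = value_b   # 400
gain_for_b = value_a   # 100

print(f"A gains {gain_for_a}, B gains only {gain_for_b}")
```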

A similar network interoperability battle happened last decade among Instant Messaging networks. AIM was the dominant network for many years and refused to interoperate with other networks. Google Chat adopted open standards (Jabber) and MSN and Yahoo were much more open to interoperating. Eventually this battle ended in a whimper — AIM never generated much revenue, and capitulated to aggregators and openness.  (Capitulating was probably a big mistake – they had the opportunity to be as financially successful as Skype or Tencent).

Google might very well genuinely believe in openness. But it is also strategically wise for them to be open in layers that are not strategic (mobile OS, social graph, Google docs) while remaining closed in layers that are strategic (search ranking algorithm, virtually all of their advertising services).

When Google releases their long-awaited new social network, Google Me, expect an emphasis on openness. This could create a rich ecosystem around their social platform that could put pressure on Facebook to interoperate. True interoperability would be great for startups, innovation, and – most importantly – end users.

* Metcalfe’s law assumes that every node is connected to every other node and that each connection is equally valuable. Real-world networks are normally not like this. In particular, social networks are much more clustered and therefore have somewhere between linear and quadratic utility growth with each additional user.
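For a rough sense of how much those assumptions matter, here is a quick sketch (my own illustration, not a claim from the post) comparing linear, clustered (n log n), and quadratic value growth as users are added:

```python
import math

# Rough illustration of how network "value" scales under different assumptions.
def linear(n):    return n                  # every user adds constant value
def clustered(n): return n * math.log(n)    # one common middle-ground assumption
def quadratic(n): return n * n              # Metcalfe's law: all-to-all connections

for n in (10, 100, 1000):
    print(n, linear(n), round(clustered(n)), quadratic(n))
```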

Instrumenting the offline world

In the last decade there have been major advances in storing, analyzing, and acting upon extremely large data sets.  Data sets that were previously left dormant are now being put to (mostly) constructive use. But the vast majority of information in the world isn’t available for analysis because it isn’t being electronically collected.

This is changing rapidly as new data collection mechanisms are implemented – what engineers refer to as instrumentation. Common examples of instrumentation include thermometers, public safety cameras, and heart rate monitors.

Smartphones are one obvious new source of potential instrumentation. A person’s location, activities, and audio and visual environment – and probably many more things that haven’t been thought of yet – can now be monitored. This of course raises privacy issues. Hopefully these privacy issues will be solved by requiring explicit user opt-in. If so, this will require creating incentives for people to opt in.
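As a purely hypothetical sketch of what explicit opt-in might look like in practice, here is a small Python example where readings are only recorded for sensors the user has consented to (the names and structures are illustrative, not any real mobile API):

```python
from dataclasses import dataclass, field

@dataclass
class ConsentSettings:
    # Each flag represents an explicit, per-sensor opt-in from the user.
    location: bool = False
    audio: bool = False
    activity: bool = False

@dataclass
class Recorder:
    consent: ConsentSettings
    events: list = field(default_factory=list)

    def record(self, kind: str, payload: dict) -> bool:
        # Drop the reading entirely if the user has not opted in to this sensor.
        if not getattr(self.consent, kind, False):
            return False
        self.events.append({"kind": kind, **payload})
        return True

recorder = Recorder(ConsentSettings(location=True))
recorder.record("location", {"lat": 40.73, "lng": -73.99})  # stored
recorder.record("audio", {"db": 62})                        # ignored: no opt-in
```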

Foursquare instruments location in an opt-in way through the check-in. The incentives are social and game-like, but the data produced could be useful for many more “serious” purposes.  Fitbit instruments a person’s health-related activity. The immediate incentive is to measure and improve your own health, but the aggregate data could be analyzed by medical researchers to benefit others.

In manufacturing, there has been a lot of interesting innovation around monitoring machinery, for example by using loosely joined, inexpensive mesh networks.  In homes, protocols like ZigBee let devices communicate, which enables, for example, automation of tedious tasks and improved energy efficiency.
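As a toy illustration of the kind of home instrumentation described above (not based on any real ZigBee stack or product), here is a sketch in which device power readings are aggregated to flag energy-saving opportunities:

```python
from collections import defaultdict

# Hypothetical readings reported by instrumented home devices.
readings = [
    {"device": "thermostat", "watts": 5},
    {"device": "water_heater", "watts": 4500},
    {"device": "thermostat", "watts": 5},
    {"device": "water_heater", "watts": 4400},
]

totals = defaultdict(list)
for r in readings:
    totals[r["device"]].append(r["watts"])

# Flag devices whose average draw exceeds a made-up threshold.
for device, watts in totals.items():
    avg = sum(watts) / len(watts)
    if avg > 1000:
        print(f"{device}: avg {avg:.0f} W - candidate for off-peak scheduling")
```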

In the next decade, there will be a massive amount of innovation and opportunity around the big data stack. Instrumentation will be the foundational layer of that stack.