The next twenty years are going to make this last twenty years just pale

If we were sent back with a time machine, even 20 years, and reported to people what we have right now and described what we were going to get in this device in our pocket—we’d have this free encyclopedia, and we’d have street maps to most of the cities of the world, and we’d have box scores in real time and stock quotes and weather reports, PDFs for every manual in the world—we’d make this very, very, very long list of things that we would have and get on this device in our pocket, and then we would tell them that most of this content was free. You would simply be declared insane. They would say there is no economic model to make this. What is the economics of this? It doesn’t make any sense, and it seems far-fetched and nearly impossible.

But the next twenty years are going to make this last twenty years just pale. We’re just at the beginning of the beginning of all these kinds of changes. There’s a sense that all the big things have happened, but relatively speaking, nothing big has happened yet.

The Technium: An Interview with Kevin Kelly

The computing deployment phase

Technological revolutions happen in two main phases: the installation phase and the deployment phase. Here’s a chart (from this excellent book by Carlota Perez via Fred Wilson) showing the four previous technological revolutions and the first part of the current one:

[Chart: the four previous technological revolutions and the first part of the current one, each split into an installation phase and a deployment phase]

Each revolution begins with a financial bubble that propels the (irrationally) rapid “installation” of the new technology.  Then there’s a crash, followed by a recovery and then a long period of productive growth as the new technology is “deployed” throughout other industries as well as society more broadly. Eventually the revolution runs its course and a new technological revolution begins.

In the transition from installation to deployment, the bulk of the entrepreneurial activity moves “up the stack”. For example, in the installation phase of the automobile revolution, the action was in building cars. In the deployment phase, the action shifted to the app layer: the highway system, shipping, suburbanization, big box retail, etc.

This pattern is repeating itself in the computing/internet revolution. Most of the successful startups in the 90s built core infrastructure (e.g. optical switching) whereas most of the successful startups since then built applications on top of that infrastructure (e.g. search). The next phase should see startups higher in the stack. According to historical patterns, these would be ones that require deeper cultural change or deeper integration into existing industries.

Some questions to consider:

– What industries are the best candidates for the next phase of deployment? The likely candidates are the information-intensive mega-industries that have been only superficially affected by the internet thus far: education, healthcare, and finance. Note that deployment doesn’t just mean creating, say, a healthcare or education app. It means refactoring an industry into its “optimal structure” – what the industry would look like if rebuilt from scratch using the new technology.

– How long will this deployment period last? Most people – at least in the tech industry – think it’s just getting started. From the inside, it looks like one big revolution with lots of smaller, internal revolutions (PC, internet, mobile, etc). Each smaller revolution extends the duration and impact of the core revolution.

– Where will this innovation take place? The historical pattern suggests it will become more geographically diffuse over time. Detroit was the main beneficiary of the first part of the automobile revolution. Lots of other places benefited from the second part. This is the main reason to be bullish on “application layer” cities like New York and LA. It also suggests that entrepreneurs will increasingly have multi-disciplinary expertise.

Techies and normals

There are techies (if you are reading this blog you are almost certainly one of them) and there are mainstream users – some people call them “normals” (@caterina suggested “muggles”). A lot of people call techies “early adopters” but I think this is a mistake: techies are only occasionally good predictors of which tech products normals will like.

Techies are enthusiastic evangelists and can therefore give you lots of free marketing. Normals, on the other hand, are what you need to create a large company. There are three main ways that techies and normals can combine to embrace (or ignore) a startup.

1. If you are loved first by techies and then by normals you get free marketing and also scale.  Google, Skype and YouTube all followed this chronology.  It is startup nirvana.

2. The next best scenario is to be loved by normals but not by the techies. The vast majority of successful consumer businesses fall into this category. Usually the first time they get a lot of attention from the tech community is when they announce revenues or close a big financing. Some recent companies that fall in this category are Groupon, Zynga, and Gilt Groupe. Since these companies don’t start out with lots of free techie evangelizing they often acquire customers through paid marketing.

(My last company – SiteAdvisor – was a product tech bloggers mostly dismissed even as normals embraced it.  When I left the company we had over 150 million downloads, yet the first time the word “SiteAdvisor” appeared on TechCrunch was a year after we were acquired when they referred to another product as “SiteAdvisor 2.0”.)

3. There are lots of products that are loved just by techies but not by normals. When something is getting hyped by techies, one of the hardest things to figure out is whether it will cross over to normals. The normals I know don’t want to vote on news, tag bookmarks, or annotate web pages.  I have no idea whether they want to “check in” to locations.  A year ago, I would have said they didn’t want to Twitter but obviously I was wrong. Knowing when something is techie-only versus techie-plus-normals is one of the hardest things to predict.

The next big thing will start out looking like a toy

One of the amazing things about the internet economy is how different the list of top internet properties today looks from the list ten years ago.  It wasn’t as if those former top companies were complacent – most of them acquired and built products like crazy to avoid being displaced.

The reason big new things sneak by incumbents is that the next big thing always starts out being dismissed as a “toy.”  This is one of the main insights of Clay Christensen’s “disruptive technology” theory. This theory starts with the observation that technologies tend to get better at a faster rate than users’ needs increase. From this simple insight follows all kinds of interesting conclusions about how markets and products change over time.

Disruptive technologies are dismissed as toys because when they are first launched they “undershoot” user needs. The first telephone could only carry voices a mile or two. The leading telegraph company of the time, Western Union, passed on acquiring the phone because they didn’t see how it could possibly be useful to businesses and railroads – their primary customers. What they failed to anticipate was how rapidly telephone technology and infrastructure would improve (technology adoption is usually non-linear due to so-called complementary network effects). The same was true of how mainframe companies viewed the PC (microcomputer), and how modern telecom companies viewed Skype. (Christensen has many more examples in his books).
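
The undershoot-then-overshoot dynamic can be sketched with a toy calculation. Every number below – starting performance, growth rates – is an illustrative assumption, not data about any real technology:

```python
# Toy model of Christensen's observation: a technology that "undershoots"
# user needs at launch but improves at a faster rate eventually crosses over.
# All starting values and growth rates are illustrative assumptions.

def crossover_year(tech, tech_growth, need, need_growth, horizon=50):
    """First year the technology's performance meets user needs,
    or None if it never catches up within the horizon."""
    for year in range(horizon):
        if tech >= need:
            return year
        tech *= 1 + tech_growth   # technology improves quickly
        need *= 1 + need_growth   # user needs grow slowly
    return None

# A product launching at 10% of what users need, but improving 40%/year
# while needs grow only 5%/year, catches up in under a decade.
year = crossover_year(tech=10, tech_growth=0.40, need=100, need_growth=0.05)
```

A static snapshot at year zero – performance at 10 against a need of 100 – is exactly the “toy” that incumbents dismiss; the crossover only shows up if you model the rates rather than the levels.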

This does not mean every product that looks like a toy will turn out to be the next big thing. To distinguish toys that are disruptive from toys that will remain just toys, you need to look at products as processes. Obviously, products get better inasmuch as the designer adds features, but this is a relatively weak force. Much more powerful are external forces: microchips getting cheaper, bandwidth becoming ubiquitous, mobile devices getting smarter, etc. For a product to be disruptive it needs to be designed to ride these changes up the utility curve.

Social software is an interesting special case where the strongest forces of improvement are users’ actions. As Clay Shirky explains in his latest book, Wikipedia is literally a process – every day it is edited by spammers, vandals, wackos, etc., yet every day the good guys make it better at a faster rate. If you had gone back to 2001 and analyzed Wikipedia as a static product it would have looked very much like a toy. The reason Wikipedia works so brilliantly is a set of subtle design features that sculpt the torrent of user edits so that it yields a net improvement over time. Since users’ needs for encyclopedic information remain relatively steady, as long as Wikipedia got steadily better, it would eventually meet and surpass user needs.

A product doesn’t have to be disruptive to be valuable. There are plenty of products that are useful from day one and continue being useful long term. These are what Christensen calls sustaining technologies. When startups build useful sustaining technologies, they are often quickly acquired or copied by incumbents. If your timing and execution are right, you can create a very successful business on the back of a sustaining technology.

But startups with sustaining technologies are very unlikely to be the new ones we see on top lists in 2020. Those will be disruptive technologies – the ones that sneak by because people dismiss them as toys.

Non-linearity of technology adoption

When I was in business school I remember a class where a partner from a big consulting firm was talking about how they had done extensive research and concluded that broadband would never gain significant traction in the US without government subsidies.  His primary evidence was a survey of consumers they had done asking them if they were willing to pay for broadband access at various price points.

Of course the flaw in this reasoning is that, at the time, there weren’t many websites or apps that made good use of broadband.   This was 2002 – before YouTube, Skype, Ajax-enabled web apps and so on.  In the language of economics, broadband and broadband apps are complementary goods – the existence of one makes the other more valuable.  Broadband didn’t have complements yet so it wasn’t that valuable.

Complement effects are one of the main reasons that technology adoption is non-linear. There are other reasons, including network effects, viral product features, and plain old faddishness.
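
One way to see why straight-line extrapolation fails here is a toy simulation in which a product’s value – and hence its adoption rate – grows with its user base. The model and every parameter in it are made up for illustration, not fitted to any real product:

```python
# Toy logistic adoption model: growth each period is proportional to
# users * non-users, a crude stand-in for network/complement effects
# (more users -> more value -> faster adoption). Numbers are illustrative.

def adopters(t, population=1000, seed=10, r=0.2):
    """Number of adopters after t periods of logistic growth."""
    users = seed
    for _ in range(t):
        users += r * users * (1 - users / population)
    return users

# A straight line fitted to the first five periods wildly underestimates
# where adoption ends up once the network effect kicks in.
early_slope = (adopters(5) - adopters(0)) / 5
linear_forecast = adopters(0) + 40 * early_slope   # consultant's straight line
actual = adopters(40)                              # toy-model outcome
```

The early data really is nearly flat – which is what a survey or a trend line sees – but the curve it belongs to is an S-curve, not a line.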

Twitter has network effects – it is more valuable to me when more people use it.  By opening up the API they also gained complement effects – there are tons of interesting Twitter-related products that make it more useful.  Facebook also has network effects and with its app program and Facebook Connect gets complement effects.

You can understand a large portion of technology business strategy by understanding strategies around complements.  One major point:  companies generally try to reduce the price of their products’ complements (Joel Spolsky has an excellent discussion of the topic here).   If you think of the consumer as having a willingness to pay a fixed N for product A plus complementary product B, then each side is fighting for a bigger piece of the pie. This is why, for example, cable companies and content companies are constantly battling.  It is also why Google wants open source operating systems to win, and for broadband to be cheap and ubiquitous.
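
The fixed-pie framing reduces to a line of arithmetic. Suppose, purely for illustration, that consumers will pay at most N = 100 for product A together with its complement B; then every dollar shaved off B’s price is a dollar A’s maker can potentially capture:

```python
# Hypothetical numbers: consumers will pay at most N for A plus its
# complement B, so A's achievable price is whatever B's price leaves
# on the table. Illustrative only.

N = 100  # fixed total willingness to pay for A + B

def max_price_for_a(price_of_b):
    """The most A's maker can charge once the complement costs price_of_b."""
    return N - price_of_b

# With the complement at 60, A can charge at most 40; commoditize the
# complement down to 10 and A can capture 90 of the same fixed pie.
```

This is the arithmetic behind Google subsidizing open source operating systems and cheap broadband: the cheaper the complements, the more of the fixed pie is left for the ads.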

Clay Christensen has a really interesting theory about how technology “value chains” evolve over time.  They typically start out with a single company creating the whole thing, or most of it.  (Think of mobile phones or the PC).  This is because early products require tight integration to squeeze out maximum performance and usability.  Over time, standard “APIs” start to develop between layers, and the whole product gains performance/usability to spare.   Thus the chain begins to stratify and adjacent sections start fighting to commoditize one another.   In the early days it’s not at all obvious which segments of the chain will win.  That is why, for example, IBM let Microsoft own DOS.  They bet on the hardware.   One of Christensen’s interesting observations is that, in the steady state, you usually end up with alternating commoditized and non-commoditized segments of the chain.

Microsoft Windows & Office was the big non-commoditized winner of the PC value chain. Dell did very well precisely because they saw early on that hardware was becoming commoditized.  In a commoditized market you can still make money but your strategy should be based on lowering costs.

Be wary of analysts and consultants who draw lines to extrapolate technology trends.  You are much better off thinking about complements, network effects, and studying how technology markets have evolved in the past.