Incumbents die due to irrelevance or ineptitude

Judging from the tech press, you’d think the biggest risk to successful companies is competition. But when you examine the history of technology, incumbents usually decline because the world changes and they lose relevance, or because they lose visionary founders and the organization decays. Some examples:

– Dell thrived when PCs dominated the computer market and Dell was the low cost provider of commodity hardware products. The shift to mobile and tablet computing meant that hardware quality (not price) was once again the primary basis of competition. As a result, Dell’s laser-like focus on cost reduction became a liability.

– The New York Times was, for many decades, one of the few premium channels through which brand and classified advertisers could reach mass consumers. Thus car companies and real estate brokers subsidized foreign reporting and investigative business journalism. The internet provided a vast alternative channel, and the Times became far less relevant. At the same time, the internet provided many new sources for breaking news, editorials etc, hurting the Times on the subscriber side.

– Yahoo didn’t lose because Google out-competed them on search. They lost because they didn’t really care about search – indeed, they outsourced algorithmic search to AltaVista, Inktomi and then Google itself. The leading portals circa 2000 (Yahoo, Excite, Lycos etc) desperately wanted to keep users on their sites – the buzzword was “stickiness” – but Google knew better and focused on getting users off of Google to other places on the web. Yahoo became just another place to read celebrity gossip and use generic web services.

– Netflix thrived when they could simply ignore the movie companies and rely on the first-sale doctrine to get DVDs. The market shift to streaming video created a new and brutal dependency. They had to go make deals with content companies. Now they are even trying to create their own content to lessen this dependency. They have a brilliant and visionary management team but this is a tough transition to make.

– Sony relied on its Steve-Jobs-like founder, Akio Morita, to repeatedly develop incredibly innovative products (among them: the first transistor radio, the first transistor television, the Walkman, the first video cassette recorder, the compact disc) that seemed to come out of nowhere and create massive new markets. Since he left, the company has floundered and the stock has fallen dramatically.

– Google’s biggest risk isn’t a direct competitor. Startups and incumbents who’ve tried to create better search engines have barely cut into Google’s market share. Google’s primary risk – and they seem to know this – is that they become irrelevant as people find content through social sites and an ever-increasing portion of the web becomes uncrawlable.

Google released their “Dropbox-killer” a few days ago. I don’t know if Dropbox has yet achieved incumbent status, but they certainly seem to be the market leader. They also seem to have a very competent management team. So if history is a guide, Dropbox’s biggest risk isn’t a competitor but irrelevance – if, for example, files become less and less important in a web services world and Dropbox doesn’t adapt accordingly.

Chris Sacca on the implied user contract

Chris Sacca nicely summarized today’s FB vs Google vs Twitter controversy:

It comes down to what each company has promised its users. Facebook promised its users their stuff would be private, which is why users rightfully get pissed when that line blurs. Twitter has promised users, well, that it will stay up, and that is why users rightfully get pissed when the whale is back.

Google has promised its users and the entire tech community, again and again, that it would put their interests first, and that is why Google users rightfully get pissed when their results are deprecated to try to promote a lesser Google product instead.

It’s all about expectations.

What’s not evil: ranking content fairly *and* letting public content get indexed

Please see update at bottom

Most websites spend massive amounts of time and money to get any of their pages indexed and ranked by Google’s search engine. Indeed, there is an entire billion-dollar industry (SEO) devoted to helping companies get their content indexed and ranked.

Twitter and Facebook have decided to disallow Google from indexing 99.9% of their content. Twitter won’t let Google index tweets and Facebook won’t let Google index status updates and most other user and brand generated content. In Facebook’s case this makes sense for content that users have designated as non-public. In Twitter’s case, the vast majority of the blocked content is designated by users as public. Furthermore, Twitter’s own search function rarely works for tweets older than a week (from Twitter’s search documentation, they return “6-9 days of Tweets”).

There is a debate going on today in the tech world: Facebook and Twitter are upset that Google won’t highly rank the 0.1% of their content they make indexable. Facebook and Twitter even created something they call the “Don’t be evil” toolbar that reranks Google search results the way they’d like them to be ranked. The clear implication is that Google is violating its famous credo and acting “evil”.

The vast majority of websites would dream of having the problem of being able to block Google from 99.9% of their content and have the remaining 0.1% rank at the top of results. What would be best for users – and least “evil” – would be to let all public content get indexed and have Google rank that content “fairly” without favoring their own content. Facebook and Twitter are right about Google’s rankings, but Google is right about Facebook and Twitter blocking public content from being indexed.

Update: after posting this I got a bunch of emails, tweets and comments telling me that Twitter does in fact allow Google to index all their tweets, and that any missing tweets are the fault of Google, not Twitter. A few people suggested that without firehose access Google can’t be expected to index all tweets. At any rate, I think the “Why aren’t all tweets indexed?” issue is more nuanced than I argued above.

Accurate contrarian theories

When Google released its search engine in 1998, its search results were significantly better than its competitors’. Many people attribute Google’s success to this breakthrough technology. But there was another key reason:  a stubborn refusal to accept the orthodox view at the time that “stickiness” was crucial to a website’s success. Here’s what happened when they tried to sell their technology to Excite (a leading portal/search engine in the late 90s):

[Google] was too good. If Excite were to host a search engine that instantly gave people information they sought, [Excite’s CEO] explained, the users would leave the site instantly. Since his ad revenue came from people staying on the site—“stickiness” was the most desired metric in websites at the time—using Google’s technology would be counterproductive. “He told us he wanted Excite’s search engine to be 80 percent as good as the other search engines,” … and we were like, “Wow, these guys don’t know what they’re talking about.” – Steven Levy, In The Plex (p. 30)

Famed investor/entrepreneur Reid Hoffman says world-changing startups need to be premised on “accurate contrarian theories.”  In Google’s case, it was true but non-contrarian to think users would prefer a better search engine. What was true and contrarian was to think it made business sense to get users off their site as quickly as possible. The business model to support this contrarian theory wouldn’t emerge until years later, and by then Google would already have become the world’s most popular search engine.

Inferring intent on mobile devices

[Google CEO Eric] Schmidt said that while the Google Instant predictive search technology helps shave an average of 2 seconds off users’ queries, the next step is “autonomous search.” This means Google will conduct searches for users without them having to manually conduct searches. As an example, Schmidt said he could be walking down the streets of San Francisco and receive information about the places around him on his mobile phone without having to click any buttons. “Think of it as a serendipity engine,” Schmidt said. “Think of it as a new way of thinking about traditional text search where you don’t even have to type.”  – eWeek

When users type phrases into Google, they are searching, but also expressing intent. To create the “serendipity engine” that Eric Schmidt envisions would require a system that infers users’ intentions.

Here are some of the input signals a mobile device could use to infer intent.

Context

Location: It is helpful to break location down into layers, from the most concrete to the most abstract:

1) lat / long – raw GPS coordinates

2) venue – mapping of lat / long coordinates to a venue.

3) venue relationship to user – is the user at home, at a friend’s house, at work, in her home city etc.

4) user movement – locations the user has visited recently.

5) inferred user activity – if the user is at work during a weekday, she is more likely in the midst of work. If she is walking around a shopping district on a Sunday away from her home city, she is more likely to want to buy something. If she is outside, close to home, and going to multiple locations, she is more likely to be running errands.

Weather: during inclement weather the user is less likely to want to move far and more likely to prefer indoor activities.

Time of day & date: around mealtimes the user is more likely to be considering what to eat. On weekends the user is more likely to be doing non-work activities. Outside at night, the user is more likely to be looking for bar/club/movie etc.  Time of days also lets you know what venues are open & closed.

News events near the user: the user is at a pro sporting event, an accident happened nearby, etc.

Things around the user: knowing not just venues, but activities (soccer game), inventories (Madden 2011 is in stock at BestBuy across the street), events (concert you might like is nearby), etc.

These are just a few of the contextual signals that could be included as input signals.
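To make this concrete, here is a minimal sketch of how these contextual signals might be bundled into a single structure that an intent-inference system could consume. All class and field names below are illustrative assumptions on my part, not any actual Google or Hunch API:

```python
# Hypothetical grouping of the contextual signals described above.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class LocationContext:
    lat: float                                          # 1) raw GPS coordinates
    lng: float
    venue: Optional[str] = None                         # 2) lat/long mapped to a venue
    relationship: Optional[str] = None                  # 3) "home", "work", "friend's house", ...
    recent_venues: list = field(default_factory=list)   # 4) locations visited recently
    inferred_activity: Optional[str] = None             # 5) "working", "shopping", "errands", ...

@dataclass
class Context:
    location: LocationContext
    weather: Optional[str] = None                       # e.g. "rain", "clear"
    timestamp: datetime = field(default_factory=datetime.now)
    nearby_events: list = field(default_factory=list)   # news/sports events near the user
    nearby_things: list = field(default_factory=list)   # venues, activities, inventories, concerts
```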

Taste

The more you know about users’ tastes, the better you can infer their intent. It is silly to suggest a great sushi restaurant to someone who dislikes sushi. At Hunch we model taste with a giant matrix. One axis is every known user (the system is agnostic about which ID system – it could be Facebook, Twitter, a mobile device, etc), the other axis is things, defined very broadly: product, person, place, activity, tag etc. In each cell of the matrix is the known or predicted affinity between the person and thing. (Hunch’s matrix currently has about 500M people, 700M items, and 50B known affinity points).
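Hunch’s actual implementation isn’t public; the sketch below only illustrates the shape of such a sparse user × item affinity structure, with a deliberately simplistic placeholder for the prediction step:

```python
# Illustrative sparse user x item affinity matrix. Known affinities are stored
# explicitly; missing cells are predicted (here via a trivial item-average
# fallback, standing in for a real collaborative-filtering model).
from collections import defaultdict

class TasteMatrix:
    def __init__(self):
        # affinities[user_id][item_id] = known affinity score, e.g. in [-1.0, 1.0]
        self.affinities = defaultdict(dict)

    def set_affinity(self, user_id, item_id, score):
        self.affinities[user_id][item_id] = score

    def get_affinity(self, user_id, item_id):
        # Return the known affinity if we have one, otherwise predict it.
        known = self.affinities[user_id].get(item_id)
        return known if known is not None else self._predict(user_id, item_id)

    def _predict(self, user_id, item_id):
        # Placeholder prediction: average of all known scores for this item.
        scores = [row[item_id] for row in self.affinities.values() if item_id in row]
        return sum(scores) / len(scores) if scores else 0.0
```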

Past expressed intent

– App actions: e.g. the user just opened Yelp, so she is probably looking for a place to go.

– Past search actions: user’s recent (desktop & mobile) web searches could be indications of later intent.

– Past “saved for later” actions:  user explicitly saved something for later e.g. using Foursquare’s “to do” functionality.

Behavior of other people

– Friends:  The fact that a user’s friends are all gathered nearby might make her want to join them.

– Tastemates: if someone with similar tastes just performed an action, the user is more likely to want to perform the same action.

– Crowds: The user might prefer to go toward or avoid crowds, depending on mood and taste.

How should an algorithm weight all these signals? It is difficult to imagine this being done effectively any way except empirically, through a feedback loop. So the system suggests some intent, the user gives feedback, and then the system learns by adjusting signal weightings and gets smarter. With a machine learning system like this it is usually impossible to get to 100% accuracy, so the system would need a “fault tolerant” UI. For example, pushing suggestions through modal dialogs could get very annoying without 100% accuracy, whereas making suggestions when the user opens an application or through subtle push alerts could be non-annoying and useful.
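Here is a minimal sketch of that feedback loop, assuming a simple linear scoring model and a perceptron-style online update. The signal names, weights, and learning rate are all illustrative assumptions, not any particular company’s method:

```python
# Hypothetical feedback loop: score candidate intents as a weighted sum of
# signals, surface the top suggestion, then nudge weights up or down depending
# on whether the user accepted it.

def score(signals, weights):
    """signals: dict of signal_name -> value in [0, 1]; weights: dict of signal_name -> float."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

def update_weights(weights, signals, accepted, learning_rate=0.1):
    """Raise the weights of signals that contributed to an accepted suggestion, lower them otherwise."""
    direction = 1.0 if accepted else -1.0
    for name, value in signals.items():
        weights[name] = weights.get(name, 0.0) + learning_rate * direction * value
    return weights

# Example: the user is near a restaurant at lunchtime and accepts the suggestion,
# so the mealtime and tastemate signals get reinforced.
weights = {"near_mealtime": 0.5, "friend_nearby": 0.3, "tastemate_action": 0.2}
signals = {"near_mealtime": 1.0, "friend_nearby": 0.0, "tastemate_action": 0.4}
weights = update_weights(weights, signals, accepted=True)
```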