An old wound reopened while reading the latest post on Nick Carr’s excellent blog: the distinction between the different types of network externalities is not very well known. The idea is not well known enough to be recognized by search engines, so why bother with such subtleties? Because they can help us understand whether a company simply has a large market share, or whether it is blatantly cornering a market, if not several.
The post is a reply to Tim O’Reilly’s latest on Web 2.0: are there network effects for such services? Certainly, provided we distinguish between networks, clubs and neighbourhoods, and between adoption costs and technological learning-by-doing.
The first cases considered for these issues were nuclear power and solar panels, by W. Brian Arthur, and typewriters, by Paul A. David, who both warned against possible lock-in. Nuclear and solar technologies are typical of learning-by-doing: the more we develop one technological path, the harder it is to switch to another. That is why the choice between heavy- vs. light-water nuclear power plants, and between crystalline vs. amorphous silicon solar panels, appeared as a damning alternative. There is little social interaction between the two options, or mimetism: simply an industrial complex learning how to do one rather than the other. Keyboard layouts are a different story: Qwerty was learnt by typing, but the influence of more experienced typists on less experienced ones, and the influence of language (keyboards still differ depending on local diacritics), cannot be neglected.
Then came social interactions: by assuming that what you prefer is not the technology you are experienced in developing, but the standards used by the people you work with, David and H. Peyton Young suggested considering local interactions. With this simple, realistic constraint, several local monopolies become possible, provided the relationship lattice is dense enough. Although the basic idea is simple, heavy-duty theoretical physicists are developing fascinating models and simulations based on ‘real’ complex networks.
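The flavour of these local-interaction models can be sketched in a few lines of Python (a toy of my own making, not any specific published model): agents on a ring repeatedly adopt the standard used by the majority of their immediate neighbourhood, and contiguous patches of incompatible standards, i.e. local monopolies, tend to survive.

```python
import random

random.seed(0)

N = 60                          # agents arranged on a ring
state = [random.choice("AB") for _ in range(N)]

def step(state):
    """Each agent adopts the standard used by the majority of its
    neighbourhood (itself plus two neighbours on each side)."""
    new = []
    for i in range(N):
        window = [state[(i + d) % N] for d in (-2, -1, 0, 1, 2)]
        new.append("A" if window.count("A") >= 3 else "B")
    return new

for _ in range(50):
    nxt = step(state)
    if nxt == state:            # configuration has stabilized
        break
    state = nxt

# Typically prints contiguous runs of A and B: local monopolies,
# rather than one standard sweeping the whole ring.
print("".join(state))
```

With a dense enough neighbourhood the same dynamics tip the whole ring to one standard; with sparse, local ties, pockets of the minority standard persist.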
The big change, and the best way to understand Google’s business model, came with two-sided markets; the theory can lead to rather impressive mathematical contraptions, but the idea is simple: you have two types of users, and each type wants the other to be present. Job markets or match-makers are obvious cases; credit-card companies need to convince both holders and merchants to carry their tech; journals and search engines have advertisers and readers/users to bring together. In this setting, a company can ‘corner’ a market by offering one side the service as cheaply as possible, thereby forcing the other side to use its particular service.
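A toy numerical illustration of that cross-side pull (the linear participation model and all its coefficients are made up for the sketch): cutting the price on one side raises participation on both sides, which is exactly what makes the subsidized-side strategy work.

```python
def clamp(x):
    """Keep a participation share in [0, 1]."""
    return max(0.0, min(1.0, x))

def participation(p1, p2, iters=200):
    """Fixed point of a linear cross-side participation model:
    each side's share falls with its own price and rises with the
    other side's share (illustrative coefficients, not estimates)."""
    n1 = n2 = 0.5
    for _ in range(iters):
        n1 = clamp(0.4 - 0.5 * p1 + 0.6 * n2)
        n2 = clamp(0.4 - 0.5 * p2 + 0.6 * n1)
    return n1, n2

# Charge both sides, vs. give side 1 (readers, cardholders...) the
# service for free while side 2 (advertisers, merchants...) still pays.
both = participation(p1=0.4, p2=0.4)
subsidised = participation(p1=0.0, p2=0.4)

print("both sides pay:   ", both)
print("side 1 subsidised:", subsidised)
```

Only side 1’s price changed, yet side 2’s participation rises too, through the cross-side externality; that is the lever behind ‘cornering’ one side to capture the other.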
So, to summarize: with uniform interaction between users, a monopoly is likely; with local interactions, local monopolies can happen (although ‘local’ can be quite wide); with several sides, the cost of carrying several systems matters. But as always, a general monopoly can pretty much turn into a lock-in. Compatibility is tricky because, first, the largest player won’t implement it, and second, it can ease the transition to a monopoly anyway (see the counter-intuitive results of that oddly realistic case about IMs).
A combination of local interactions and two-sidedness can be found in eBay, for instance, where both what you are looking for and which service your relatives and friends use influence your decision to take part; and the interplay between the two types of users is decisive: buyers encourage sellers, and vice versa.
Finally, the most complex interactions come from real businesses: the framework most likely to be helpful for Web 2.0 was suggested by [full disclosure] my advisor Éric Brousseau and Thierry Pénard. The Economics of Digital Business Models details how companies have to balance three activities: describing sides and matching them (Monster.com); assembling modular elements into a functioning service (Windows includes drivers; a car has tires and windshields not made by the company); and collecting information to help offer a better service (Amazon’s “Other people who bought this book also liked. . .”). This demands combining network externalities, transaction costs, differentiation, economies of scale, incentives and quality management.
All the aspects mentioned in the comments of Nick Carr’s post are relevant: having users’ click patterns helps Google harvest more information about which sites are great; word-of-mouth builds up anticipation in the same company’s favor; emulation among good coders can lock great architecture into its services. . . It is neither obvious nor inevitable that the Mountain View monster will eat us all; still, many mechanisms are at play, and even the most attentive open-source contributor should be careful not to crush emerging ideas.
What matters is that there is room for improvement complementing all the company’s services, and the possibility of leveraging that initial foothold. Do you want an example? I’m interested in complex graph clustering: many coders are trying to work out how to group people into teams based, e.g., on their e-mails, and recent progress is impressive. One could offer small companies that use Google Mail (Pro), Yahoo! Mail or another OpenStack supporter the possibility to share their relations through OAuth and see how their interactions are structured (inside and outside the company, as there seems to be a difference). The same is of course possible with other Google data sources, like Scholar or Blogger. Based on that expertise, the same company could offer more relevant project-management software, and leverage Google’s work.
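As a minimal sketch of the grouping idea, with a made-up e-mail log and connected components standing in for the real community-detection algorithms that serious work would use:

```python
from collections import defaultdict, deque

# Toy "who e-mails whom" log; the names are of course invented.
emails = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),  # one clique
    ("dave", "erin"), ("erin", "dave"),                      # another pair
]

# Build an undirected interaction graph from the log.
graph = defaultdict(set)
for sender, recipient in emails:
    graph[sender].add(recipient)
    graph[recipient].add(sender)

def teams(graph):
    """Group people into teams = connected components of the e-mail
    graph (a crude stand-in for modularity-based clustering)."""
    seen, groups = set(), []
    for person in graph:
        if person in seen:
            continue
        queue, group = deque([person]), set()
        while queue:                      # breadth-first traversal
            p = queue.popleft()
            if p in seen:
                continue
            seen.add(p)
            group.add(p)
            queue.extend(graph[p] - seen)
        groups.append(frozenset(group))
    return groups

print(teams(graph))
```

On a real corpus one would weight edges by message volume and run proper community detection, but even this toy version shows how team structure falls out of interaction data alone.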
PS: I should comment on Huberman’s paper, which I blogged about recently, on Wednesday morning at 9:30 during the W2S workgroup at La Cantine in Paris.