Several themes were prevalent this past week:
- iPhone/Android location logging,
- cloud computing (and a big cloud collapse at Amazon),
- the tech valuation bubble, stoked by Groupon et al.,
- profits, most notably at Apple, AT&T vs. VZ, and Google,
- and who wins in social media and what is next.
In my opinion they are all related, and the Cloud plays the central role, both metaphorically and physically. Horowitz recently wrote about the new computing paradigm in defense of the supposed technology valuation bubble. I agree wholeheartedly with his assessment; I got my first taste of this historical computing cycle over 30 years ago, when I had to cycle 10 miles to a high school in another district that had a dedicated line to the county mainframe. A year or two later I was simulating virus growth on an Apple personal computer. So when Windows came in 1987, I was already ahead of the curve with respect to distributed computing. Moreover, as a communications analyst in the early 1990s I realized what competition in the WAN post-1984 had begotten, namely Web 1.0 (aka the Internet) and the most advanced and cheapest digital paging/messaging services in the world. Both of these trends would have a significant impact on me personally and professionally, and I will write about those evolutions and collapses in future Spectral issues.
The problem, the solution, the problem, the solution, etc.
The problem back in the 1970s and early 1980s was the telephone monopoly. Moore's Law bypassed the analog access bottleneck with cheap processing and local transport. Consumers, then enterprises and institutions, began to buy PCs and link them together to communicate and share files and resources. Things got exciting when we began to multitask in 1987, and by 1994 any PC provided access to information pretty much anywhere. During the 1990s and well into the next decade, Web 1.0 was just a 1.2-way, store-and-forward database lookup platform. It was early cloud computing, sort of, but no one had high-speed access. It was so bad that in 1998, when I went independent, I had 25x more dedicated bandwidth than my former colleagues at bulge-bracket Wall Street firms. That's why we had the bust. Web 1.0 was narrowband, not broadband, and certainly not 2-way. Wireless was just beginning to wake up to data, even though Jeff Bezos had everyone believing they would be ordering books through their phones in 2000.
Two things happened in the 2000s. First, high-speed bandwidth became ubiquitous. I remember raising capital for The Feedroom, a leading video ASP, in 2003, when we were still watching high-speed access penetration reach the 40% "tipping point." Second, the IP stack grew from a 4-layer model into something more robust. We built CDNs. We built border controllers that enabled Skype VoIP traffic to transit foreign networks "for free." We built security. HTML, browsers, and web frontends grew to support multimedia. By the second half of the decade, Web 2.0 became 1.7-way and true "cloud" services began to develop. Web 2.0 is still not fully developed, as a lot of the technical and pricing controls and "lubricants" needed for true 2-way synchronous high-definition communications are still missing; more about that in future Spectrals.
The New “Hidden Problem”
Unfortunately, over that time the underlying market of 5-6 competitive service providers (wired, wireless, cable) consolidated down to an oligopoly in most markets. Wherever competition dropped to 3 or fewer providers, bandwidth pricing stopped falling the 40-70% per annum it should have and fell only 5-15%. Yet technology prices at the edge and core (Moore's Law) kept falling 50%+ every 12-18 months. Today, the differential between the "retail" and the "underlying economic" cost per bit is the widest it has been since 1984.
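To see how fast that differential compounds, here is a minimal sketch, assuming illustrative mid-range figures from the ranges above (retail pricing falling 10% per year, underlying cost halving every 18 months); the specific rates are my assumptions, not measured data:

```python
# Illustrative sketch: how the retail vs. underlying cost-per-bit gap
# compounds. Rates are assumptions drawn from the ranges cited above.

RETAIL_DECLINE_PER_YEAR = 0.10   # competitive stagnation: ~5-15% p.a.
COST_HALVING_MONTHS = 18         # Moore's Law: 50%+ every 12-18 months

retail = 1.0   # normalized retail price per bit
cost = 1.0     # normalized underlying economic cost per bit

for year in range(1, 11):
    retail *= 1 - RETAIL_DECLINE_PER_YEAR
    cost *= 0.5 ** (12 / COST_HALVING_MONTHS)
    print(f"year {year:2d}: retail/cost differential = {retail / cost:5.1f}x")
```

Under those assumptions the gap widens by roughly 43% a year, compounding to 30x+ over a decade, which is how you get the widest retail-to-cost spread since 1984.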
That wouldn’t be a problem except for two recent developments: the advent of the smartphone and the attendant application ecosystems. So what does this have to do with cloud computing, especially when that was “an enterprise phenomenon” begun by Salesforce.com with its Force.com and Amazon Web Services? A lot of the new consumer wireless applications run on the cloud, and entire developer ecosystems are building new companies on it. IDC estimates that the total amount of accessible information is going to grow 44x by 2020, to 35 zettabytes, and that the number of unique files is going to grow 65x. That means that while a lot of the applications and information will be high-bandwidth (video and multimedia), there will also be many smaller files and transactions (bits of information), i.e., telemetry, personal information, or sensory inputs. And this information will be constantly accessed by 3-5 billion wireless smartphones and devices. The math of networks is N*(N-1)/2. That’s an awful lot of IP session pathways, as the arithmetic below shows.
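The arithmetic is worth spelling out. A quick sketch (the device counts and the pairwise formula come from the text above; the 10-year window for IDC's 44x figure is my assumption):

```python
# Quick arithmetic behind the claims above. The device counts and the
# pairwise-pathway formula come from the text; the 10-year window for
# IDC's 44x figure is an assumption.

years = 10
cagr = 44 ** (1 / years) - 1
print(f"44x over {years} years implies ~{cagr:.0%} growth per year")

# Potential IP session pathways among N endpoints: N*(N-1)/2
for n in (3e9, 5e9):
    pathways = n * (n - 1) / 2
    print(f"{n:.0e} devices -> {pathways:.2e} potential pathways")
```

At 5 billion endpoints that is on the order of 10^19 potential pathways; even if only a sliver is ever active at once, signaling, addressing, and session state all scale with it.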
Why Is That a Problem?
The problem is that the current wireless networks can’t handle this onslaught. Carriers have already been announcing data caps over the past 2 years. While they are falling over themselves to announce 4G networks, the reality is that those networks are only designed to be 2-3x faster, and are far from ubiquitous, either geographically (wide-area) or in-building. That’s a problem if the new applications and information sets require networks that are 20-50x faster and many factors more reliable and ubiquitous. The smartphones and their wireless tether are becoming single points of access. Add to that the fact that carriers derive ever less direct benefit from these application ecosystems, so they have less and less incentive to upgrade and reprice their network services along true technology-driven marginal cost. Neustar is already warning carriers that they are being bypassed in the process.
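To put the 2-3x versus 20-50x gap in perspective, here is a back-of-the-envelope sketch, assuming demand compounds at the ~46% per year implied by IDC's 44x-in-a-decade figure (again, the decade window is my assumption):

```python
import math

# Back-of-the-envelope: how many years of compounding demand growth a
# one-time capacity multiple absorbs. The ~46%/yr rate is implied by
# IDC's 44x-over-a-decade estimate (assumed window).

demand_growth = 0.46
for speedup in (2, 3, 20, 50):
    years = math.log(speedup) / math.log(1 + demand_growth)
    print(f"a {speedup:2d}x capacity bump lasts ~{years:4.1f} years")
```

Under that assumption, a 2-3x bump is absorbed in two to three years of demand growth, while 20-50x covers roughly the decade; that is the gap between what 4G delivers and what the demand curve requires.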
Does the Bubble Have to Burst?
Just as in the late 1990s, the upper- and middle-layer players really don’t know what is going on at the lower layers. And if they don’t, the current bubble will surely burst as expectations get ahead of reality. That may take another 2-3 years, but it will likely happen. In the meantime, alternative access players are beginning to rise up. Even the carriers themselves are talking about offloading traffic onto femtocells and WiFi. WiFi alliances are springing up again, and middle-layer software/application controls are developing to make it easier for end users to offload traffic themselves. Having lived through and analyzed the advent of competitive wired and wireless networks in the 1990s, I sense that nothing, even LightSquared or Clearwire in their current forms, will be significant enough to precipitate the dramatic restructuring necessary to service this coming tidal wave of demand.
What we need is something I call centralized hierarchical networking (CHN)™. Essentially, we will see three major layers, with the bottom access/transport layer controlled by 3-4 hybrid networks. The growth and dynamism from edge to core, and vice versa, will wax and wane in rather rapid fashion. Until then, while I totally get and support the cloud and believe most applications are going that route, let the Cloud Players be forewarned of coming turbulence unless something is done to (re)solve the bandwidth bottleneck!