SpectralShifts Blog 
Sunday, March 30 2014

Why App Coverage Will Drive Everything

Given the smartphone's ubiquity and our dependence on it, "App Coverage" (AC) is something confronting us every day, yet we know little about it. At the CCA Global Expo this week in San Antonio, Glenn Laxdal of Ericsson spoke about "app coverage," a concept the vendor first surfaced in 2013. AC is defined as "the proportion of a network's coverage that has sufficient performance to run a particular app at an acceptable quality level." In other words, the variety of end-user demand for voice, data and video applications is outpacing carriers' ability to keep up. According to Ericsson, monitoring and ensuring the performance of app coverage is the next wave in LTE networks. Here's a good video explaining AC in simple, visual terms.
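To make the metric concrete, here's a rough sketch in Python of how one might estimate app coverage from field measurements; the sample points, thresholds and the assumption that throughput and latency are the binding constraints are mine, not Ericsson's:

```python
# A minimal sketch (not Ericsson's methodology) of estimating "app coverage"
# from drive-test or crowd-sourced samples: the share of measured locations
# where throughput and latency meet a given app's requirements.
samples = [
    {"mbps": 4.2, "latency_ms": 60},   # hypothetical measurement points
    {"mbps": 0.8, "latency_ms": 210},
    {"mbps": 2.5, "latency_ms": 95},
    {"mbps": 6.1, "latency_ms": 45},
]

def app_coverage(samples, min_mbps, max_latency_ms):
    """Fraction of sample points with sufficient performance for the app."""
    ok = [s for s in samples
          if s["mbps"] >= min_mbps and s["latency_ms"] <= max_latency_ms]
    return len(ok) / len(samples)

# e.g. an HD video app assumed to need ~2.5 Mbps and <150 ms latency
print(f"HD video app coverage: {app_coverage(samples, 2.5, 150):.0%}")  # -> 75%
```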

Years, nay, decades ago I used to say coverage should be measured in 3 important ways:

  • Geographic (national vs. regional vs. local)
  • Indoors/outdoors (50%+ signal loss indoors)
  • Frequency (capex roughly doubles at 1900 MHz vs. 700 MHz)

Each of these had specific supply/demand clearing implications across dozens of issues impacting balance sheets and P&L statements, ultimately determining winners and losers.  They are principally why AT&T and Verizon today have 70% of subscribers (80% of enterprise customers), up from 55% just 5 years ago, 84% of total profit, and over 100% of industry free cash flow.  Now we can add "applications" to that list.  And it will only make it more challenging for competitors to wrest share from the "duopoly".

Cassidy Shield of Alcatel-Lucent further stated that fast-follower strategies against the duopoly would likely fail, implying that radical rethinking was necessary.  Some of that came quickly in the form of Masayoshi Son's announcement of a broad partnership with NetAmerica and members of CCA covering preferred roaming, concerted network buildout, sharing of facilities, and device purchase agreements. This announcement came two weeks after Son visited Washington DC and laid out Sprint's vision for a new, more competitive wireless future in America.

The conference concluded with a panel of CEOs hailing Sprint's approach, which Son outlined here, as one of benevolent dictator (perhaps not the best choice of words) and repeating the mantra partner, partner, partner; something Terry Addington of MobileNation has said has taken way too long.  Even then the panel agreed that pulling off partnerships will be challenging.

The Good & Bad of Wireless

Wireless is great because it is all things to all people, and that is what makes it bad too.  Planning for and accounting for how users will access the network is very challenging across a wide user base.  There are fundamentally different "zones" and contexts in which different apps can be used, and they often conflict with network capacity and performance.  I used to say that one could walk, or even hang upside down from a tree, and talk, but you couldn't "process data" doing those things.  Of course the smartphone changed all that, and people are now accessing their music apps, location services, searches and purchases, and watching video from anywhere; even hanging upside down in trees.

Today voice, music and video consume 12, 160 and 760 kbps of bandwidth, respectively, on average.  Tomorrow those numbers might be 40, 500 and 1,500, and that's not even taking into account "upstream" bandwidth, which will be even more of a challenge for service providers to provision as consumers expect more 2-way collaboration everywhere.  The law of wireless gravity, which states that bits will seek out fiber/wire as quickly and cheaply as possible, will apply, necessitating sharing of facilities (wireless and wired), heterogeneous networks (HetNets), and aggressive wifi offload approaches; even consumers will be shared, in the form of managed services across communities of users (known today as OTT).  The show agenda included numerous presentations on distributed antenna networks and wifi offload applied to the rural coverage challenge.
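For a feel of what those per-app numbers imply for capacity planning, here's a back-of-the-envelope sketch; the mix of simultaneous users per cell is my own assumption, purely for illustration:

```python
# A rough, assumption-laden sketch of what the per-app rates quoted above imply
# for a cell's busy-hour load. The usage mix is hypothetical, not from the post.
today = {"voice": 12, "music": 160, "video": 760}      # kbps per active stream
tomorrow = {"voice": 40, "music": 500, "video": 1500}  # kbps per active stream

# assumed number of simultaneously active users per cell, by app type
active_users = {"voice": 20, "music": 15, "video": 10}

def cell_load_mbps(rates_kbps, users):
    """Aggregate downstream load in Mbps for one cell under the assumed mix."""
    return sum(rates_kbps[app] * users[app] for app in rates_kbps) / 1000.0

print(f"today:    {cell_load_mbps(today, active_users):.1f} Mbps per cell")
print(f"tomorrow: {cell_load_mbps(tomorrow, active_users):.1f} Mbps per cell")
# roughly 10.2 Mbps today vs. 23.3 Mbps tomorrow, before any upstream traffic
```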

Developing approaches ex ante to anticipate demand is even more critical if carriers want to play major roles in the internet of things, unified (video) communications and the connected car.  As Ericsson states in its whitepaper,

"App coverage integrates all aspects of network performance – including radio network throughput and latency, capacity, as well as the performance of the backhaul, packet core and the content-delivery networks. Ultimately, managing app coverage and performance demands a true end-to-end approach to designing, building and running mobile networks."

Posted by: Michael Elling AT 10:37 am   |  Permalink   |  0 Comments  |  Email
Monday, March 17 2014

A New Visionary In Our Midst?

The US has lacked a telecom network visionary for nearly 2 decades.  There have certainly been strong and capable leaders, such as John Malone, who not only predicted but brought about the 500-channel LinearTV model.  But there hasn't been someone like Bill McGowan, who broke up AT&T, or Craig McCaw, who first had the vision to build a national, seamless wireless network, countering decades of provincial, balkanized thinking.  Both of them fundamentally changed the thinking around public service provider networks.

But with a strong message to the markets in Washington DC on March 11 from Masayoshi Son, Sprint's Chairman, the 20-year wait may finally be over.  Son did what few have been capable of doing over the past 15-20 years since McGowan exited stage left and McCaw sold out to Ma Bell: telling it like it is.  The fact is that today's bandwidth prices are 20-150x higher than they should be with current technology.

This is no one’s fault in particular and in fact to most people (even informed ones) all measures of performance-to-price compared to 10 or 20 years ago look great.  But, as Son illustrated, things could be much, much better.  And he’s willing to make a bet on getting the US, the most advanced and heterogeneous society, back to a leadership role with respect to the ubiquity and cost of bandwidth.  To get there he needs more scale and one avenue is to merge with T-Mobile.

There have been a lot of naysayers as to the possibility of a Sprint-T-Mo hookup, including leaders at the FCC.  But don’t count me as one; it needs to happen.  Initially skeptical when the rumors first surfaced in December, I quickly reasoned that a merger would be the best outcome for the incentive auctions.  A merger would eliminate spectrum caps as a deterrent to active bidding and maximize total proceeds.  It would also have a better chance of developing a credible third competitor with equal geographic reach. Then in January the FCC and DoJ came out in opposition to the merger.

In February, though, Comcast announced the much-rumored merger with TW and Son jumped on the opportunity to take his case for merging to a broader stage.  He did so in front of a packed room of 300 communications pundits, press and politicos at the US Chamber of Commerce's prestigious Hall of Flags; a poignant backdrop for his own rags-to-riches story.  Son's frank honesty about the state of broadband for the American public vs the rest of the world, as well as Sprint's own miserable current performance, was impressive.  It's a story that resonates with my America's Bandwidth Deficit presentation.

Here are some reasons the merger will likely pass:
  • The FCC can’t approve one horizontal merger (Comcast/TW) that brings much greater media concentration and control over content distribution, while disallowing a merger of two small players (really irritants as far as AT&T and Verizon are concerned).
  • Son has a solid track record of disruption and doing what he says.
  • The technology and economics are in his favor.
  • The vertically integrated service provider model will get disrupted faster and sooner as Sprint will have to think outside the box, partner, and develop ecosystems that few in the telecom industry have thought about before; or if they have, they've been constrained by institutional inertia and hidebound by legacy regulatory and industry silos.

Here are some reasons why it might not go through:

  • The system is fundamentally corrupt.  But the new FCC Chairman is cast from a different mold than his predecessors and is looking to make his mark on history.
  • The FCC shoots itself in the foot over the auctions.  Given all the issues and sensitivities around incentive auctions the FCC wants this first one to succeed as it will serve as a model for all future spectrum refarming issues. 
  • The FCC and/or DoJ find in the public interest that the merger reduces competition.  But any analyst can see that T-Mo and Sprint do not have sustainable models at present on their own; especially when all the talk recently in Barcelona was already about 5G.

Personally I want Son's vision to succeed because it's the vision I had in 1997, when I originally brought the 2.5-2.6 GHz (MMDS) spectrum to Sprint, and later in 2001 and 2005, when I introduced Telcordia's 8x8 MIMO solutions to their engineers.  Unfortunately, past management regimes at Sprint were incapable of understanding the strategies and future vision that went along with those investment and technology pitches.  Son has a different perspective (see in particular minute 10 of this interview with Walt Mossberg), with his enormous range of investments and clear understanding of price elasticity and the marginal cost of minutes and bits.

To be successful Sprint’s strategy will need to be focused, but at the same time open and sharing in order to simultaneously scale solutions across the three major layers of the informational stack (aka the InfoStack):

  • upper (application and content)
  • middle (control)
  • lower (access and transport)

This is the challenge for any company that attempts to disrupt the vertically integrated telecom or LinearTV markets; the antiquated and overpriced ones Son says he is going after in his presentation.  But the US market is much larger and more robust than the rest of the world, not just geographically, but also from a 360-degree competitive perspective where supply and demand are constantly changing and shifting.

Ultimate success may well rest in the control layer, where Apple and Google have already built up formidable operating systems which control vastly profitable settlement systems across multiple networks.  What few realize is that the current IP stack does not provide price signals and settlement systems that clear supply and demand between upper and lower layers (north-south) or between networks (east-west) in the newly converged "informational" stack of 1- and 2-way content and communications.

If Sprint’s Chairman realizes this and succeeds in disrupting those two markets with his strategy then he certainly will be seen as a visionary on par with McGowan and McCaw.

Posted by: Michael Elling AT 09:58 am   |  Permalink   |  0 Comments  |  Email
Wednesday, August 07 2013

Debunking The Debunkers

The current debate over the state of America's broadband services and the future of the internet is like a 3-ring circus, or 3 different monarchists debating democracy.  In other words, an ironic and tragically humorous debate among monopolists, be they ultra-conservative capitalists, free-market libertarians, or statist liberals.  Their conclusions do not provide a cogent path to solving the single biggest socio-political-economic issue of our time due to pre-existing biases, incorrect information, or incomplete/wanting analysis.  Last week I wrote about Google's conflicts and paradoxes on this issue.  Over the next few weeks I'll expand on this perspective, but today I'd like to respond to a Q&A, Debunking Broadband's Biggest Myths, posted on Commercial Observer, a NYC publication that mostly deals with real estate issues and has recently begun a section called Wired City, dealing with a wide array of issues confronting "a city's" #1 infrastructure challenge.  Here's my debunking of the debunker.

To put this exchange into context, the US led the digitization revolutions of voice (long-distance, touchtone, 800, etc.), data (the internet, frame relay, ATM, etc.) and wireless (10-cent pricing, digital messaging, etc.) because of pro-competitive, open access policies in long-distance, data over dial-up, and wireless interconnect/roaming.  If Roslyn Layton had not conveniently forgotten these facts, or if she understood both their relative and absolute impacts on price and infrastructure investment, then she would answer the following questions differently:

Real Reason/Answer:  Our bandwidth is 20-150x overpriced on a per-bit basis because we disconnected from Moore's and Metcalfe's laws 10 years ago, due to the Telecom Act, then special access "de"regulation, then Brand-X (shutting down equal access for broadband).  This rate differential shows up in the discrepancy between the rates we pay in NYC and what Google charges in KC, as well as the difference in performance/price of 4G and wifi.  It is great that Roslyn can pay $3-5 a day at Starbucks.  Most people can't (and shouldn't have to) just for a cup of Joe that you can make at home for 10-30 cents.

Real Reason/Answer:  Because of their vertical business models, carriers are not well positioned to generate high ROI on rapidly depreciating technology and inefficient operating expense at every layer of the "stack", across demand that is constrained geographically or by market segment.  This is the real legacy of inefficient monopoly regulation.  Doing away with regulation, or deregulating the vertical monopoly, doesn't work.  Both the policy and the business model need to be approached differently.  Blueprints exist from the 80s-90s that can help us restructure our inefficient service providers.  Basically, any carrier that is granted a public ROW (right of way) or frequency should be held to an open access standard in layer 1.  The quid pro quo is that end-points/end-users should also have equal or unhindered access to that network within (economic and aesthetic) reason.  This simple regulatory fix solves 80% of the problem, as network investments scale very rapidly, become pervasive, and can be depreciated quickly.

Real Reason/Answer:  Quasi-monopolies exist in video for the cable companies and in coverage/standards in frequencies for the wireless companies.  These scale economies derived from pre-existing monopolies or duopolies granted by, and maintained to a great degree by, the government.  The only open or equal access we have left from the 1980s-90s (the drivers that got us here) is wifi (802.11), which is a shared and reusable medium with the lowest cost/bit of any technology on the planet as a result.  But other generative and scalable standards developed in the US or with US companies at the same time, just like the internet protocol stack: mobile OSs, 4G LTE (based on CDMA/OFDM technology), and OpenStack/OpenFlow now rule the world.  It's very important to distinguish which of these are truly open and which are not.

Real Reason/Answer:  The third of the population who don't have/use broadband are held back as much by context and usability, whether community/ethnicity, age or income levels, as by cost and awareness.  If we had balanced settlements in the middle layers, based on transaction fees and pricing that reflect competitive marginal cost, we could have corporate and centralized buyers subsidizing access and making it freely available everywhere for everyone.  Putting aside the ineffective debate between bill-and-keep and 2-sided pricing models and instead implementing balanced settlement exchange models will solve the problem of universal HD tele-work, education, health, government, etc.  We learned in the 1980s-90s from 800 and internet advertising that competition can lead to free, universal access to digital "economies".  This is the other 20% of the regulatory solution.

Real Reason/Answer:  The real issue here is that America led the digital information revolution prior to 1913 because it was a relatively open and competitive democracy, then took the world into 70 years of monopoly dark ages, finally broke the shackles of monopoly in 1983, and then led the modern information revolution through the 80s-90s.  The US has now fallen behind in relative and absolute terms in the lower layers due to consolidation and remonopolization.  Only the vestiges of pure competition from the 80s-90s, the horizontally scaled "data" and "content" companies like Apple, Google, Twitter and Netflix (and many, many more), are pulling us along.  The vertical monopolies stifle innovation and the generative economic activity we saw in those 2 decades.  The economic growth numbers and the fiscal deficit do not lie.

Posted by: Michael Elling AT 08:02 am   |  Permalink   |  0 Comments  |  Email
Wednesday, July 31 2013

Is Google Trying to Block Web 4.0?

Back in 1998 I wrote, “if you want to break up the Microsoft software monopoly then break up the Baby Bell last-mile access monopoly.”  Market driven broadband competition and higher-capacity digital wireless networks gave rise to the iOS and Android operating systems over the following decade which undid the Windows monopoly.  The 2013 redux to that perspective is, once again, “if you want to break up the Google search monopoly then break up the cable/telco last mile monopolies.” 

Google is an amazing company, promoting the digital pricing and horizontal service provider spirit more than anyone.  But Google is motivated by profit and will seek to grow that profit as best it can, even if that runs contrary to the founding principles and market conditions that fueled its success (aka net neutrality or equal access).  Now that Google is getting into the lower layers in the last mile, it is running into paradoxes and conflicts over net neutrality/equal access and is in danger of becoming just another vertical monopoly.  (Milo Medin provides an explanation in the 50th minute of this video, but it is self-serving, disingenuous and avoids confronting the critical issue for networks going forward.)

Contrary to many people's beliefs, the upper and lower layers have always been inextricably interdependent, and nowhere was this more evident than in the birth of the internet out of the flat-rate dial-up networks of the mid to late 1980s (a result of dial-1 equal access).  The nascent ISPs that scaled in the 1980s on layer 1-2 data bypass networks were likewise protected by Computer II/III (aka net neutrality) and benefited from competitive (WAN) transport markets.

Few realize or accept that the genesis of Web 1.0 (W1.0) was the break-up of AT&T in 1983.  Officially birthed in 1990, it was an open, 1-way store-and-forward database lookup platform on which 3 major applications/ecosystems scaled beginning in late 1994 with the advent of the browser: communications (email and messaging), commerce, and text and visual content.  Even though everything was narrowband, W1.0 began the inexorable computing collapse back to the core, aka the cloud (4 posts on the computing cycle and relationship to networks).  The fact that it was narrowband didn't prevent folks like Mark Cuban and Jeff Bezos from envisioning and selling a broadband future 10 years hence.  Regardless, W1.0 started collapsing in 1999 as it ran smack into an analog dial-up brick wall.  Google hit the big time that year and scaled into the early 2000s by following KISS and freemium business model principles.  Ironically, Google's chief virtue was taking advantage of W1.0's primary weakness.

Web 2.0 grew out of the ashes of W1.0 in 2002-2003.  W2.0 both resulted from and fueled the broadband (BB) wars starting in the late 1990s between the cable (offensive) and telephone (defensive) companies.  BB penetration reached 40% in 2005, a critical tipping point for the network effect, exactly when YouTube burst on the scene.  Importantly, BB (which doesn't have equal access, under the guise of "deregulation") wouldn’t have occurred without W1.0 and the above two forms of equal access in voice and data during the 1980s-90s.  W2.0 and BB were mutually dependent, much like the hardware/software Wintel model.   BB enabled the web to become rich-media and mostly 2-way and interactive.  Rich-media driven blogging, commenting, user generated content and social media started during the W1.0 collapse and began to scale after 2005.

"The Cloud" also first entered people's lingo during this transition.  Google simultaneously acquired YouTube in the upper layers to scale its upper and lower layer presence and traffic, and vertically integrated and consolidated the ad exchange market in the middle layers during 2006-2008.  Prior to that, perhaps anticipating a lack of competitive markets due to "deregulation" of special access, or perhaps sensing its own potential WAN-side scale, the company secured low-cost fiber rights nationwide in the early 2000s following the CLEC/IXC bust, and continued throughout the decade as it built its own layer 2-3 transport, storage, switching and processing platform.  Note, the 2000s was THE decade of both vertical integration and horizontal consolidation across the board, aided by these "deregulatory" political forces.  (Second note, "deregulatory" should be interpreted in the most sarcastic and insidious manner.)

Web 3.0 began officially with the iPhone in 2007.  The smartphone enabled 7x24, real-time access and content generation, but it would not have scaled without wifi's speed, as 3G wireless networks were at best late-1990s-era BB speeds and didn't become geographically ubiquitous until the late 2000s.  The combination of wifi (high speeds when stationary) and 3G (connectivity when mobile) was enough, though, to offset any degradation to the user experience.  Again, few appreciate or realize that W3.0 resulted from two additional forms of equal access, namely cellular A/B interconnect from the early 1980s (extended to new digital PCS entrants in the mid 1990s) and wifi's shared spectrum.  One can argue that Steve Jobs single-handedly resurrected equal access with his AT&T agreement ensuring agnostic access for applications.  Surprisingly, this latter point was not highlighted in Isaacson's excellent biography.  Importantly, we would not have had the smartphone revolution were it not for Jobs' equal access efforts.

W3.0 proved that real-time, all-the-time "semi-narrowband" (given the contexts and constraints around the smartphone interface) trumped store-and-forward "broadband" on the fixed PC for 80% of people's "web" experience (connectivity and interaction were more important than speed), as PC makers only realized by the late 2000s.  Hence the death of the Wintel monopoly, not by government decree, but by market forces 10 years after the first anti-trust attempts.  Simultaneously, the cloud became the accepted processing model, coming full circle back to the centralized mainframe model circa 1980, before the PC and the slow-speed telephone network led to its relative demise.  This circularity further underscores not only the interplay between upper and lower layers but between edge and core in the InfoStack.  Importantly, Google acquired Android in 2005, well before W3.0 began, as it correctly foresaw that small screens and mobile data networks would foster the development of applications and attendant ecosystems that would intrude on browser usage and its advertising (near) monopoly.

Web 4.0 is developing as we speak, and no one is driving it and attempting to influence it more than Google, with its WAN-side scale.  W4.0 will be a full-duplex, 2-way, all-the-time, high-definition, application-driven platform that knows no geographic or market segment boundaries.  It will be engaging and interactive on every sensory front; not just those in our immediate presence, but everywhere (aka the internet of things).  With Glass, Google is already well on its way to developing and dominating this future ecosystem.  With KC Fiber, Google is illustrating how it should be priced and what speeds will be necessary.  As W4.0 develops, the cloud will extend to the edge.  Processing will be both centralized and distributed depending on the application and the context.  There will be a constant state of flux between layers 1 and 3 (transport and switching), between upper and lower layers, between software and hardware at every boundary point, and between core and edge processing and storage.  It will dramatically empower the end-user and change our society more fundamentally than what we've witnessed over the past 30 years.  Unfortunately, regulators have no game plan for how to model or develop policy around W4.0.

The missing pieces for W4.0 are fiber-based and super-high-capacity wireless access networks in the lower layers, settlement exchanges in the middle layers, and cross-silo ecosystems in the upper layers.  Many of these elements are developing in the market naturally: big data, HetNets, SDN, OpenFlow, open OSs like Android and Mozilla, etc.  Google's strategy appears consistent and well-coordinated to tackle these issues, if not far ahead of others.  But its vertically integrated service provider model and stance on net neutrality in KC are in conflict with the principles that so far have led to its success.

Google is buying into the vertical monopoly mindset to preserve its profit base instead of teaching regulators and the markets about the virtues of open or equal access across every layer and boundary point (something clearly missing from Tim Wu's and Bob Atkinson's definitions of net neutrality).  In the process it is impeding the development of W4.0.  Governments could solve this problem by simply conditioning any service provider with access to a public right of way or frequency on equal access in layers 1 and 2, along with a quid pro quo that every user has a right to access unhindered by landlords and local governments within economic and aesthetic reason.  (The latter is a bone we can toss to all the lawyers who will be looking for new work in the process of simpler regulations.)  Google and the entire market would benefit tremendously from this approach.  Who will get there first?  The market (Google, or MSFT/AAPL if the latter are truly hungry, visionary and/or desperate) or the FCC?  Originally hopeful, I've become less sure of the former over the past 12 months.  So we may be reliant on the latter.

Related Reading:

Free and Open Depend on Where You Are in Google's InfoStack

InfoWorld defends Google based on its interpretation of NN; of which there are 4

DSL Reports thinks Google is within its rights because it expects to offer enterprise service.  Only it is not, and heretofore had not mentioned it.

A good article on Gizmodo about the state of the web and what "we" are giving up to Google

The datacenter as an "open access" boundary.  What happens today in the DC will happen tomorrow elsewhere.

Posted by: Michael Elling AT 10:57 am   |  Permalink   |  4 Comments  |  Email
Sunday, March 11 2012

I first started using clouds in my presentations in 1990 to illustrate Metcalfe's Law and how data would scale and supersede voice.  John McQuillan and his Next Gen Networks (NGN) conferences were my inspiration and source.  In the mid-2000s I used them to illustrate the potential for a world of unlimited demand ecosystems: commercial, consumer, social, financial, etc.  Cloud computing has now become part of the everyday vernacular.  The problem is that for cloud computing to expand, the world of networks needs to go flat, or horizontal, as in this complex-looking illustration to the left.

This is a static view.  Add some temporality and rapidly shifting supply/demand dynamics, and the debate begins as to whether the system should be centralized or decentralized.  Yes and no.  There are 3 main network types:  hierarchical, centralized and fully distributed (aka peer-to-peer).  None fully accommodates Metcalfe's, Moore's and Zipf's laws.  Network theory needs to capture the dynamic of a new service/technology introduction that initially is used by a small group, but then rapidly scales to many.  Processing/intelligence initially must be centralized, but then traffic and signaling volumes dictate pushing the intelligence to the edge.  The illustration to the right begins to convey that lateral motion in a flat, layered architecture, driven by the 2-way, synchronous nature of traffic; albeit with the signaling and transactions moving vertically up and down.
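One way to see the tension, purely as an illustration and not a formal treatment: Metcalfe values a network of n nodes in proportion to the number of possible connections (roughly n squared), while a Zipf-weighted view (each additional connection worth progressively less) grows closer to n times log(n). The sketch below, with my own toy numbers, shows how far apart the two curves drift as a network scales:

```python
# Illustrative only: compare Metcalfe-style value (all pairwise connections)
# with a Zipf-weighted value (the k-th most valuable connection is worth 1/k).
def metcalfe_value(n):
    return n * (n - 1) / 2

def zipf_weighted_value(n):
    # each node values its n-1 potential connections as 1 + 1/2 + ... + 1/(n-1)
    return n * sum(1.0 / k for k in range(1, n))

for n in (10, 100, 1000):
    print(f"n={n:5d}  Metcalfe ~ {metcalfe_value(n):>9.0f}   "
          f"Zipf-weighted ~ {zipf_weighted_value(n):>8.0f}")
```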

But just as solutions begin to scale, a new service is born, superseding the original.  From the outside this chaotic view looks like an organism in a constant state of expansion then collapse, expansion then collapse, and so on.

A new network theory that controls and accounts for this constant state of creative destruction* is Centralized Hierarchical Networks (CHNs).   A search on Google and DuckDuckGo reveals no known prior attribution, so Information Velocity Partners, LLC (aka IVP Capital, LLC) both lays claim to the term and offers it up under creative commons (CC).  I actually coined the CHN term in 2004 at a symposium held by Telcordia, now an Ericsson subsidiary.

CHN theory fully explains the movement from mainframe to PC to cloud.  It explains the growth of switches, routers and data centers in networks over time.  And it should be used as a model to explain how optical computing/storage in the core, and fiber, MIMO transmission and cognitive radios at the edge, get introduced and scaled.  Mobile broadband and 7x24 access/syncing by smartphones are already beginning to reveal the pressures on a vertically integrated world and the need to evolve business models and strategies toward centralized hierarchical networking.

*--It is interesting to note that creative destruction was originally used in far-left Marxist doctrine in the 1840s but was subsumed into, and became associated with, far-right Austrian School economic theory in the 1950s.  This underscores my view that often little difference lies between far-left and far-right in a continuous, circular political/economic spectrum.

Related Reading:
Decentralizing the Cloud.  Not exactly IMO.

Network resources will always be heterogeneous.

Everything gets pushed to the edge in this perspective

Posted by: Michael Elling AT 08:42 am   |  Permalink   |  0 Comments  |  Email
Sunday, February 26 2012

Wireless service providers (WSPs) like AT&T and Verizon are battleships, not carriers.  Indefatigable...and steaming their way to disaster even as the nature of combat around them changes.  If over the top (OTT) missiles from voice and messaging application providers started fires on their superstructures and WiFi offload torpedoes from alternative carriers and enterprises opened cracks in their hulls, then Dropbox bombs are about to score direct hits near their water lines.  The WSPs may well sink from new combatants coming out of nowhere with excellent synching and other novel end-user enablement solutions even as pundits like Tomi Ahonen and others trumpet their glorious future.  Full steam ahead. 

Instead, WSP captains should shout "all engines stop" and rethink their vertical integration strategies to save their ships.  A good start might be to look at where smart VC money is focusing and figure out how they are outfitted at each layer to defend against, or offensively incorporate, these rapidly developing new weapons.  More broadly, WSPs should revisit the WinTel wars, which are eerily identical to the smartphone ecosystem battles, and see what steps IBM took to save its sinking ship in the early 1990s.  One unfortunate condition might be that the fleet of battleships is now so widely disconnected that none has a chance to survive.

The bulls on Dropbox (see the pros and cons behind the story) suggest that increased reliance on cloud storage and synching will diminish reliance on any one device, operating system or network.  This is the type of horizontalization we believe will continue to scale and undermine the (perceived) strength of vertical integration at every layer (upper, middle and lower).  Extending the sea battle analogy, horizontalization broadens the theatre of opportunity and threat away from the ship itself; exactly what aircraft carriers did for naval warfare.

Synching will allow everyone to manage and tailor their "states", developing greater demand opportunity; something I pointed out a couple of months ago.  People's states could be defined a couple of ways, beginning with work, family, and leisure/social across time and distance, and extending to specific communities of (economic) interest.   I first started talking about the "value of state" as Chief Strategist at Multex, just as it was being sold to Reuters.

Back then I defined state as the information (open applications, communication threads, etc.) resident on a decision maker's desktop at any point in time that could be retrieved later.  Say I cover multiple industries and am researching biotech in the morning, and I make a call to someone with a question.  Hours later, after lunch meetings, I am working on chemicals when I get a call back with the answer.  What's the value of bringing me back automatically to the prior biotech state so I can better and more immediately incorporate and act on the answer?  Quite large.
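A minimal sketch of what such a "state" service might look like in code; the class names and fields are purely illustrative, not a description of anything Multex or anyone else actually built:

```python
# Sketch of the "value of state" idea: snapshot what is open on the desktop
# under a named context (e.g. "biotech") and restore it when the call-back
# arrives hours later. All names and fields here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WorkState:
    context: str                      # e.g. "biotech", "chemicals"
    open_documents: list = field(default_factory=list)
    open_threads: list = field(default_factory=list)
    captured_at: datetime = field(default_factory=datetime.now)

class StateStore:
    def __init__(self):
        self._states = {}

    def snapshot(self, state: WorkState):
        self._states[state.context] = state

    def restore(self, context: str) -> WorkState:
        return self._states[context]   # bring the user back to the prior state

store = StateStore()
store.snapshot(WorkState("biotech", ["XYZ-pipeline.xls"], ["question to analyst"]))
# ...afternoon: working on chemicals when the biotech answer comes back...
prior = store.restore("biotech")
print(f"Restoring '{prior.context}' state captured at {prior.captured_at:%H:%M}")
```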

Fast forward nearly 10 years and people are connected 7x24 and checking their wireless devices on average 150x/day.  How many different states are they in during the day?  5, 10, 15, 20?  The application world is just beginning to figure this out.  Google, Facebook, Pinterest and others are developing data engines that facilitate “free access” to content and information paid for by centralized procurement; aka advertising.  Synching across “states” will provide even greater opportunity to tailor messages and products to consumers.

Inevitably those producers (advertisers) will begin to require guaranteed QoS and availability levels to ensure a good consumer experience.  Moreover, because of social media and BYOD, companies today are looking at their employees the same way they look at their consumers.  The overall battlefield begins to resemble the 800 and VPN wars of the 1990s, when we had a vibrant, competitive service provider market before its death at the hands of the 1996 Telecom Act (read this critique and another that questions the Bells' unnatural monopoly).  Selling open, low-cost, widely available connectivity bandwidth into this advertising battlefield can give WSPs profit on every transaction/bullet/bit across their networks.  That is the new "ship of state" and taking the battle elsewhere.  Some call this dumb pipes; I call it a smart strategy to survive being sunk.

Related Reading:

John Mahoney presents state as representing content and context

Smartphone users' complaints about speed rise 50% over voice problems

Posted by: Michael Elling AT 09:54 am   |  Permalink   |  0 Comments  |  Email
Sunday, December 11 2011

Look up the definition of information and you’ll see a lot of terminology circularity.  It’s all-encompassing and tough to define.  It’s intangible, yet it drives everything we do.  But information is pretty useless without people; in fact it doesn’t really exist.  Think about the tree that fell, unseen, in the forest.  Did it really fall?  I am interested in the velocity of information, its impact on economies, societies, institutions and as a result in the development of communication networks and exchange of ideas.

Over the past several years I have increasingly looked at the relationship between electricity and communications.  The former is the number one ingredient for the latter.  Ask anybody in the data-center or server farm world.  The relationship is circular.  One wonders why the NTIA under its BTOP program didn’t figure that out; or at least talk to the DOE.  Both spent billions separately, instead of jointly.  Gee, why didn’t we add a 70 kV line when we trenched fiber down that remote valley?

Cars, in moving people (information) around, are a communications network, too; only powered by gasoline.  Until now.  The advent of electric vehicles (EVs) is truly exciting.  Perhaps more so than the introduction of digital cell phones nearly 20 years ago.  But to realize that future, both the utility and auto industries should take a page from the competitive wireless playbook.

What got me thinking about all this was a  NYT article this week about Dan Akerson, a former MCI CFO  and Nextel CEO, who has been running (and shaking up) GM over the past 15 months.  It dealt specifically with Dan’s handling of the Chevy Volt fires.  Knowing Dan personally, I can say he is up to the task.  He is applying lessons learned from the competitive communications markets to the competitive automotive industry.  And he will win.

But will he and the automotive industry lose because of the utility industry?  You see, the auto industry, the economy and the environment have a lot to gain from the development of electric vehicles.  Unfortunately the utility industry, which is 30 years behind the communications and IT revolution in "digitizing" its business model, is not prepared for an EV eventuality.  Ironically, utilities stand in the way of their own long-term success, as EVs would boost demand dramatically.

A lot has been spent on a "smart grid" with few meaningful results.  Primarily this is because most of the efforts and decisions are being driven by insiders who do not want to change the status quo.  That status quo includes little knowledge of the consumer, a 1-way mentality, and a focus on average peak production and consumption.  Utilities and their vendors loathe risk, consider "real time" to be 15 minutes (going down to 5 minutes), and view the production and consumption of electricity as paramount.  Smart grid typically means the opposite, or a reduction in revenues.

So it's no surprise that they are building a smart grid which gives the consumer neither choice, flexibility and control, nor the ability to contribute to electricity production and be rewarded for being efficient and socially responsible.  Nor do they want a lot of big data to analyze and make the process even more efficient.  Funny, those are all byproducts of the competitive communications and IT industries we've become accustomed to.

So maybe once Dan has solved GM's problems and recognizes the problems facing an electric vehicle future, he will focus his interests, and those of his private equity brethren, on developing a market-driven smart grid; not one your grandmother's utility would build.

By the way, here's a "short", and by no means exhaustive, list of alliances and organizations, and the members involved, developing standards and approaches to the smart grid.  Note: they are dominated by incumbents, and they are all composed differently!

 

  • Electricity Advisory Committee
  • Gridwise Alliance
  • Gridwise Architecture Council
  • NIST SmartGrid Architecture Council
  • NIST SmartGrid Advisory Committee
  • NIST SmartGrid Interoperability Panel
  • North American Energy Standards Board (NAESB)
  • SmartGrid Task Force Members (second list under Smartgrid.gov)
  • Global SmartGrid Federation
  • NRECA SmartGrid Demonstration
  • IEEE SmartGrid Standards
  • SmartGrid Information Clearinghouse


 

 

Posted by: Michael Elling AT 10:52 am   |  Permalink   |  0 Comments  |  Email
Sunday, November 13 2011

A humble networking protocol 10 years ago, packet-based Ethernet (invented at Xerox in 1973) has now ascended to the top of the carrier networking pyramid over traditional voice circuit (time-based) protocols, due to the growth in data networks (storage and application connectivity) and 3G wireless.  According to AboveNet, the top 3 CIO priorities are cloud computing, virtualization and mobile, up from spots 16, 3 and 12, respectively, just 2 years ago!   Ethernet now accounts for 36% of all access, larger than any other single legacy technology, up from nothing 10 years ago when the Metro Ethernet Forum was established.  With Gigabit and Terabit speeds, Ethernet is the only protocol for the future.

The recent Ethernet Expo 2011 in NYC underscored the trends and importance of what is going on in the market.  Just like fiber and high-capacity wireless (MIMO) in the physical layer (aka layer 1), Ethernet has significant price/performance advantages in transport networks (aka layer 2).  This graphic illustrates why it has spread through the landscape so rapidly from LAN to MAN to WAN.   With 75% of US business buildings lacking access to fiber, EoC (Ethernet over Copper) will be the preferred access solution.  As bandwidth demand increases, Ethernet has a 5-10x price/performance advantage over legacy equipment.

Ethernet is also getting smarter via what has been pejoratively termed SPIT (Service Provider Information Technology).  The graphic below shows how the growing horizontalization is supported by vertical integration of information (i.e. exchanges) that will make Ethernet truly "on-demand".  This model is critical because of both the variability and the dispersion of traffic brought on by mobility and cloud computing.  Already, the underlying layers are being "re"-developed by companies like AlliedFiber, who are building new WAN fiber with interconnection points every 60 miles.  It will all be Ethernet.  Ultimately, app providers may centralize intelligence at these points, just as Akamai pushed content storage toward the edge of the network for Web 1.0.  At the core and at key boundary points, Ethernet exchanges will begin to develop.  Right now network connections are mostly private and there is significant debate as to whether there will be carrier exchanges.  The reality is that there will be exchanges in the future; and not just horizontal but vertical as well, to facilitate new service creation and a far larger range of on-demand bandwidth solutions.

By the way, I found this "old" (circa 2005) chart from the MEF illustrating what and where Ethernet is in the network stack.  It is consistent with my own definition of Web 1.0 as a 4-layer stack.  Replace layer 4 with clouds and mobile and you get a sense for how much greater the complexity is today.  When you compare it to the above charts you see how far Ethernet has evolved in a very short time, and why companies like Telx, Equinix (8.6x cash flow) and Neutral Tandem (3.5x cash flow) will be interesting to watch, as well as larger carriers like Megapath and AboveNet (8.2x cash flow).   Certainly the next 3-5 years will see significant growth in Ethernet and the obsolescence of the PSTN and legacy voice (time-based) technologies.

Related Reading:
CoreSite and other data centers connect directly to Amazon AWS

Equinix and Neutral Tandem provide seamless service

 

Posted by: Michael Elling AT 12:46 pm   |  Permalink   |  0 Comments  |  Email
Sunday, October 30 2011

Without access does the cloud exist?  Not really.

In 2006, cloud computing entered the collective consciousness in the form of Amazon Web Services.  By 2007, over 330,000 developers were registered on the platform.  This rapid uptake was an outgrowth of Web 1.0 applications (scale) and the growth in high-speed, broadband access from 1998-2005 (ubiquity).  It became apparent to all that new solutions could be developed and efficiencies improved by collapsing back to the core a portion of the processing and storage that had developed at the edge during the WinTel revolution.  The latter had fundamentally changed the IT landscape between the late 1980s and early 2000s from a mainframe to a client-server paradigm.

In 2007 the iPhone was born, just as 3G digital services were being introduced by a competitive US wireless industry.  In 2009 "smartphone" penetration was 18% of the market.  By the 3rd quarter of 2011 that number had reached 44%.  The way people communicate and consume information is changing dramatically in a very short time.

The smartphone is driving cloud (aka back to the mainframe) adoption for 3 reasons: 1) it introduces a new computing device that complements, rather than replaces, existing computing devices at home and work; 2) the small screen limits what information can be shown and processed; 3) it increases the sociability, velocity and value of information.   Information knows no bounds at the edge or core.  And we are at the very, very early stages of this dramatic new revolution.

Ice Cream Sandwich (just like Windows 2.0 multitasking in 1987) heralds a radical new world of information generation and consumption.  Growth in processing and computation at the edge will drive the core and vice versa, just as chip advances from Intel fed software bloat on desktops, further necessitating faster chips.

But the process can only expand if the networks are there (see page 2) to support that.  Unfortunately carriers have responded with data caps and bemoan the lack of new spectrum.  Fortunately, a hidden back door exists in the form of WiFi access.  And if carriers like AT&T and Verizon don’t watch out, it will become the preferred form of access.

As a recent adopter of Google Music I have become very attuned to that.  First, it is truly amazing how seamless content storage and playback have become.  Second, I learned how to program my phone to always hunt for a wifi connection.  Third, when I do not have access to either the 3G wireless network or wifi and I want something that is stored online, a strange feeling of being disconnected overtakes me; akin to leaving one's cellphone at home in the morning.

With the smartphone we are getting used to choice and instant gratification.  The problem with wifi is its variability and unreliability.  Capital and technology are being applied to solve that problem, and it will be interesting to see how service providers react to the potential threat (and/or opportunity).  Where carriers once imagined walled application gardens, there are now fertile iOS and Android fields watered by clouds over which carriers exert little control.  Storm clouds loom over their control of, and ROI from, access networks.

Posted by: Michael Elling AT 09:10 am   |  Permalink   |  0 Comments  |  Email
Sunday, April 24 2011

A couple of themes were prevalent this past week:

  • iPhone/Android location logging,
  • cloud computing (and a big cloud collapse at Amazon),
  • the tech valuation bubble because of Groupon et al,
  • profits at Apple, AT&T vs VZ, Google, most notably,
  • and who wins in social media and what is next.

In my opinion they are all related and the Cloud plays the central role, metaphorically and physically.  Horowitz recently wrote about the new computing paradigm in defense of the supposed technology valuation bubble.  I agree wholeheartedly with his assessment, as I got my first taste of this historical computing cycle over 30 years ago when I had to cycle 10 miles to a high school in another district that had a dedicated line to the county mainframe.  A year or two later I was simulating virus growth on an Apple PC.  So when Windows came along in 1987 I was already ahead of the curve with respect to distributed computing.  Moreover, as a communications analyst in the early 1990s I also realized what competition in the WAN post-1984 had begotten, namely Web 1.0 (aka the Internet) and the most advanced and cheapest digital paging/messaging services in the world.  Both of these trends would have a significant impact on me personally and professionally, and I will write about those evolutions and collapses in future Spectral issues.

The problem, the solution, the problem, the solution, etc….

The problem back in the 1970s and early 1980s was the telephone monopoly.  Moore's law bypassed the analog access bottleneck with cheap processing and local transport.  Consumers, and then enterprises and institutions, began to buy PCs and link them together to communicate and share files and resources.   Things got exciting when we began to multitask in 1987, and by 1994 any PC provided access to information pretty much anywhere.  During the 1990s and well into the next decade, Web 1.0 was just a 1.2-way store-and-forward database lookup platform.  It was early cloud computing, sort of, but no one had high-speed access.  It was so bad in 1998, when I went independent, that I had 25x more dedicated bandwidth than my former colleagues at bulge-bracket Wall Street firms.  That's why we had the bust.  Web 1.0 was narrowband, not broadband, and certainly not 2-way.  Wireless was just beginning to wake up to data, even though Jeff Bezos had everyone believing they would be ordering books through their phones in 2000.

Two things happened in the 2000s.  First, high-speed bandwidth became ubiquitous.  I remember raising capital for The Feedroom, a leading video ASP, in 2003, when we were still watching high-speed access penetration reach the 40% "tipping point."  Second, the IP stack grew from being a 4-layer model to something more robust.  We built CDNs.  We built border controllers that enabled Skype VoIP traffic to transit foreign networks "for free."  We built security.  HTML, browsers and web frontends grew to support multimedia.  By the second half of the decade, Web 2.0 became 1.7-way and true "cloud" services began to develop.  Web 2.0 is still not fully developed, as there are still a lot of technical and pricing controls and "lubricants" missing for true 2-way, synchronous, high-definition communications; more about that in future Spectrals.

The New “Hidden Problem”

Unfortunately, over that time the underlying service provider market of 5-6 competitive service providers (wired, wireless, cable) consolidated down to an oligopoly in most markets.  Wherever competition dropped to 3 or fewer providers, bandwidth pricing stopped falling 40-70% per annum like it should have and fell only 5-15%.  Yet technology prices at the edge and core (Moore's Law) kept on falling 50%+ every 12-18 months.  Today, the price differential between "retail" and "underlying economic" cost per bit is the widest it has been since 1984.
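To see how quickly that differential compounds, here's a simple back-of-the-envelope calculation; the 40% and 10% annual decline rates are assumed points within the ranges above, not measured figures:

```python
# Back-of-the-envelope sketch: compare a cost per bit that falls 40% per year
# (the competitive, technology-driven path) with a price that falls only 10%
# per year (a 3-or-fewer-provider market), over a decade. Rates are assumptions.
def decline(start, annual_drop, years):
    price = start
    for _ in range(years):
        price *= (1 - annual_drop)
    return price

start_price = 100.0   # indexed cost per bit
years = 10

competitive = decline(start_price, 0.40, years)   # within the 40-70% range
consolidated = decline(start_price, 0.10, years)  # within the 5-15% range

print(f"competitive path:  {competitive:.2f}")
print(f"consolidated path: {consolidated:.2f}")
print(f"retail vs. underlying cost gap: ~{consolidated / competitive:.0f}x")
# ~58x with these assumed rates, comfortably inside the 20-150x range cited in these posts
```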

That wouldn't be a problem except for two recent developments:  the advent of the smartphone and the attendant application ecosystems.  So what does this have to do with cloud computing, especially when that was "an enterprise phenomenon" begun by Salesforce.com with its Force.com and Amazon Web Services?  A lot of the new consumer wireless applications run on the cloud.  There are entire developer ecosystems building new companies.  IDC estimates that the total amount of accessible information is going to grow 44x by 2020, to 35 zettabytes.  And the average number of unique files is going to grow 65x.  That means that while a lot of the applications and information will be high-bandwidth (video and multimedia), there will also be many smaller files and transactions (bits of information), i.e. telemetry, personal information or sensory inputs.  And this information will be constantly accessed by 3-5 billion wireless smartphones and devices.  The math of networks is (N*(N-1))/2.  That's an awful lot of IP session pathways.
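Here's that pathway arithmetic applied to the device counts above; simple combinatorics, nothing more:

```python
# The "(N*(N-1))/2" arithmetic above: the number of potential pairwise IP
# session pathways among N endpoints.
def pathways(n: int) -> int:
    return n * (n - 1) // 2

for n in (3_000_000_000, 5_000_000_000):   # 3-5 billion smartphones and devices
    print(f"{n:,} devices -> {pathways(n):.3e} potential pathways")
# 3 billion devices alone imply roughly 4.5e18 possible pairwise sessions
```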

Why is That A Problem?

The problem is that the current wireless networks can't handle this onslaught.  Carriers have already been announcing data caps over the past 2 years.  While they are falling over themselves to announce 4G networks, the reality is that these are only designed to be 2-3x faster, and are far from ubiquitous, either geographically (wide-area) or in-building.  That's a problem if the new applications and information sets require networks that are 20-50x faster and many factors more reliable and ubiquitous.  The smartphones and their wireless tether are becoming single points of access.  Add to that the fact that carriers derive increasingly less direct benefit from these application ecosystems, so they'll have less and less incentive to upgrade and reprice their network services along true technology-driven marginal cost.  Neustar is already warning carriers they are being bypassed in the process.

Does The Bubble Have to Burst?

Just as in the late 1990s, the upper and middle layer guys really don't know what is going on at the lower layers.  And if they don't, then surely the current bubble will burst as expectations get ahead of reality.  That may take another 2-3 years, but it will likely happen.  In the meantime, alternative access players are beginning to rise up.  Even the carriers themselves are talking about offloading traffic onto femto and wifi cells.  Wifi alliances are springing up again, and middle layer software/application controls are developing to make it easier for end-users to offload traffic themselves.  Having lived through and analyzed the advent of competitive wired and wireless networks in the 1990s, my sense is that nothing, not even LightSquared or Clearwire in their current forms, will be significant enough to precipitate the dramatic restructuring that is necessary to service this coming tidal wave of demand.

What we need is something that I call centralized hierarchical networking (CHN)™.  Essentially we will see three major layers with the bottom access/transport layer being controlled by 3-4 hybrid networks.  The growth and dynamic from edge to core and vice versa will wax and wane in rather rapid fashion.  Until then, while I totally get and support the cloud and believe most applications are going that route, let the Cloud Players be forewarned of coming turbulence unless something is done to (re)solve the bandwidth bottleneck!

Posted by: Michael Elling AT 09:34 am   |  Permalink   |  0 Comments  |  Email
Tuesday, April 19 2011

5 Areas of Focus

1) How does information flow through our economic, social and political fabric?  I believe all of history can be modeled on the pathways and velocity of information.  To my knowledge there is no economic science regarding the velocity of information, but many write about it.  Davidow (OVERconnected) speaks to networks of people (information) being in 3 states of connectivity.  Tom Wheeler, someone whom I admire a great deal, often relates what is happening today to historical events and vice versa.  His book on Lincoln's use of the telegraph makes for a fascinating read.  Because of its current business emphasis and its potential to change many aspects of our economy and lives, social media will be worth modeling along the lines of information velocity.

2) Mapping the rapidly evolving infomedia landscape to explain both the chaos of convergence and the divergence of demand has interested me for 20 years.  This represents a taxonomy of things in the communications, technology and internet worlds.  The latest iteration, called the InfoStack, puts everything into a 3-dimensional framework with geographic, technological/operational, and network/application dispersions.  I've taken that a step further and, from 3-dimensional macro/micro models, developed 3-dimensional organizational matrices for companies.  3 coordinates capture 99% of everything that is relevant about a technology, product, company, industry or topic.

3) Mobile payments and ecommerce have been an area of focus over the past 3 years.  I will comment quite a bit on this topic.  There are hundreds of players, with everyone jockeying for dominance or their piece of the pie.  The area is also at the nexus of 3 very large groupings of companies:  financial services, communications services and transaction/information processors.  The latter includes Google and Facebook, which is why they are constantly being talked about.  That said, players in all 3 camps are constrained by vestigial business and pricing models.   Whoever ties/relates the communications event/transaction to the underlying economic transaction will win.  New pricing will reflect digitization and true marginal cost.  Successful models/blueprints are 800, VPN, and advertising.  We believe 70-80% of all revenue in the future will derive from corporate users and less than 30% will be subscription-based.

4) Exchange models and products/solutions that facilitate the flow of information across upper and lower layers and from end to end represent exciting and rewarding opportunities.  In a competitive world of infinite revenue clouds of demand, mechanisms must exist that drive cost down between participants as traffic volumes explode.  This holds for one-way and two-way traffic, and for narrowband and broadband applications.  The opposing sides of bill-and-keep (called party pays) and network neutrality are missing the point.  New services can only develop if there is a bilateral, balanced payment system.  It is easy to see why incumbent service and application models embrace bill-and-keep, as it stifles new entrants.  But long term it also stifles innovation and retards growth.

5) What will the new network and access topologies look like?  Clearly the current industry structure cannot withstand the dual onslaught of rapid technological change and obsolescence and enormously growing and diverging demand.  It’s great if everyone embraces the cloud, but what if we don’t have access to it?  Something I call “centralized hierarchical networking” will develop.  A significant amount of hybridization will exist.  No “one solution” will result.  Scale and ubiquity will be critical elements to commercial success.  As will anticipation and incorporation of developments in the middle and upper layers.  Policy must ensure that providers are not allowed to hide behind a mantra of “natural bottlenecks” and universal service requirements.  In fact, the open and competitive models ensure the latter as we saw from our pro-competitive and wireless policies of the 1980s and 1990s.

In conclusion, these are the 5 areas I focus on:

1) Information Velocity

2) Mapping the InfoStack

3) Applications and, in particular, payment systems

4) Exchange models

5) Networks

The analysis will tend to focus on pricing (driven by marginal, not average costs) and arbitrages, the “directory value” of something, which some refer to as the network effect, and key supply and demand drivers.

Posted by: Michael Elling AT 09:43 am   |  Permalink   |  0 Comments  |  Email
Monday, April 18 2011

Today, April 18, 2011 marks my first official blog post.  It is about making money and having fun.  Actually, I started blogging about telecommunications 20 years ago on Wall Street with my TelNotes daily and SpectralShifts weekly.  Looking back, I am happy to report that a lot of what I said about the space actually took place: consolidation, wireless usurpation of wireline access, IP growing into something more robust than a 4-layer stack, etc.  Over the past decade I've watched the advent of social media and application ecosystems, and the collapse of the competitive communications sector; the good, the bad, and the ugly, respectively.

Along the way I’ve participated in or been impacted by these trends as I helped startups and small companies raise money and improve their strategy, tactics and operations.  Overall, an entirely different perspective from my ivory tower Wall Street research perch of the 1980s-90s.  Hopefully what I have to say is of use to a broad audience and helps people cut through contradictory themes of chaotic convergence and diverging demand to take advantage of the rapidly shifting landscape.

I like examples of reality imitating art.  One of my favorites is Pink Floyd's The Wall, which preceded the destruction of the Berlin Wall by a decade.  Another is the devastating satire and 1976 classic Network, predating by 30 years what media has become in the age of reality TV, Twitter and the internet moment.  I feel like a lot has changed and it's time for me to start talking again.  So in the words of Howard Beale (Peter Finch), "I'm as mad as hell, and I'm not going to take it anymore."

Most of the time you'll see me take a stance opposite to consensus, or approach a topic or problem from a 90-degree angle.  That's my intrinsic value; don't look for consensus opinion here.  The ability to do this lies in my analytical framework, called the InfoStack.  It is a three-dimensional framework that maps information, topics and problems along geographic, network and application dispersions.  By geographic I mean WAN, MAN, LAN, PAN.  By network, I mean the 7-layer OSI stack.  And by applications, I mean clouds of intersecting demand.  You will see me talk about horizontal layering and scale, vertically complete solutions, and unlimited "cloud-like" revenue opportunity.  Anything I analyze is in the context of what is going on in adjacent spaces of the matrix.  And I look for cause and effect amongst the layers.
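For readers who think in code, here's a minimal sketch of the InfoStack as a classification scheme; the axis values echo the description above, while the example placements are mine and purely illustrative:

```python
# Sketch of the InfoStack as a 3-coordinate taxonomy: geographic dispersion,
# OSI network layer, and application "cloud" of demand. Placements are examples.
GEOGRAPHIC = ["PAN", "LAN", "MAN", "WAN"]
NETWORK_LAYERS = list(range(1, 8))          # OSI layers 1-7
APP_CLOUDS = ["consumer", "commercial", "social", "financial"]

def classify(topic, geo, layer, cloud):
    """Place a topic at one coordinate in the 3-dimensional framework."""
    assert geo in GEOGRAPHIC and layer in NETWORK_LAYERS and cloud in APP_CLOUDS
    return {"topic": topic, "geo": geo, "layer": layer, "cloud": cloud}

# hypothetical placements, for illustration only
print(classify("wifi offload", "LAN", 2, "consumer"))
print(classify("Ethernet exchange", "MAN", 2, "commercial"))
print(classify("mobile payments", "WAN", 7, "financial"))
```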

I see us at the beginning of something very big; bigger than in 1987 at the dawn of the Wintel revolution.  The best way to enjoy the great literary authors is to start with their earliest works and read sequentially, growing and developing with them.  Grow with me as we sit at the dawn of the Infomedia revolution that is remaking and will remake the world around us.  In the process, let's make some money and build things that are substantial.

Posted by: Michael Elling AT 01:00 pm   |  Permalink   |  0 Comments  |  Email

Information Velocity Partners, LLC
88 East Main Street, Suite 209
Mendham, NJ 07930
Phone: 973-222-0759
Email:
contact@ivpcapital.com

