SpectralShifts Blog 
Thursday, August 09 2018

In light of the Deloitte white paper about the poor state of 5G affairs in the US and the likelihood of falling farther behind the Chinese, it is worthwhile to revisit something we wrote 14 years ago, especially given what transpired with the smartphone, wifi and competition over that period.  In our opinion, the US is not so much falling behind as there is no "real 5G future" that will become reality under the current industry structure.  Something will have to change.

In 2004 we wrote an article in Telephony Online about the "4th and Final Wave of Digitization."  Of course this was one year before the final Supreme Court ruling on Brand-X, which was the death knell of equal access and of 20 years of competition in long distance, thereby ushering in the final consolidation into 4 major edge-access providers in the US by 2006 (10 years after the ill-fated and farcical Telecom Act of 1996, which was supposed to have resolved all the conflicting regulation across 4 different media--voice, video, data, wireless--but didn't).  The article also appeared 3 years before the introduction of the iPhone and Steve Jobs' subsequent single-handed resurrection of equal access, which itself led to "OTT" (over the top) entering the internet and telecom vernacular in 2010.  And it contained this prescient line: "the growing wave will develop from the core and crash against the vertically integrated service providers and undo monopoly bandwidth bottlenecks."

Oddly, the core-driven wave did not crash over the incumbent monopolies (even if many believe the mobile operators and cable companies have effectively been neutered above layer 2 by the iOS and Android application ecosystems); rather it was a rising tide that lifted all boats.  The vertically integrated monopolies at the edge, instead of being subsumed and restructured in a cataclysm like AT&T in 1984, tranquilly and profitably built out 4G networks on the back of enormous demand pull-through from the end users of those OTT applications on the wireless side, and played along with the Google-inspired marketing hype game of "gigabit" access on the wired side.  (This is where one claims to offer "gigabit" service, even if it is really only 100-500 megabits per second, and gets away with it while the FCC does nothing.) 

To the eyes of many inside and outside the industry the current state of affairs has become accepted as evidence that "competition works!"  And who really is to complain when the reality is that 99% of applications still use less than 1 Mbps, while video streaming to a handheld smartphone rarely needs more than 2 Mbps.  That's a starting point that, in our opinion, keeps the Deloitte white paper from considering genuinely different outcomes around 5G.

Belying this seeming tranquility is the fact that both Moore's and Metcalfe's laws have kept working unabated.  Today the retail price per megabit per second, or per gigabyte consumed, is further removed from the economic cost of the underlying technology, in both absolute and relative terms, than at any point since competition began in 1984 with the break-up of AT&T as a result of Bill McGowan's microwave bypass.  The gap is somewhere between 90 and 99%.  It's as if the edge-access monopolies still operate in "analog mode."  They have not digitized themselves (see our definition of being digital) in their actions and business models, and they have not restructured into the horizontally scaled platforms that define "the internet."  The real problem is that they are all isolated at the edge and have not figured out how to confederate to deal with the OTT threat.  Something Deloitte misses entirely.

And another fact is that video is about to eat the world.  If we could say that over the past 20-30 years software and wireless have eaten the world, then it is a safe bet that in 10 years we will look back and say the networks were eaten by 2-way HD and 4K video for tele-medicine, tele-education, tele-work, security and a whole host of video-driven applications.  It will not be autonomous cars and connected vehicles, as the WP claims.  The aforementioned applications will mostly be paid for by centralized buyers who can justify the expense through dramatic real-world cost savings and the digitization and virtualization of their own charters and institutional organizations.  Just consider that a single provider, Netflix, today accounts for almost 40% of internet traffic.  And that's just 1-way video.  The simple fact is that the current edge is about 1/100th of the way, in terms of capacity, performance and pricing, from supporting ubiquitous 2-way HD video conferencing, be it from an equipment or a price perspective.

But no one hyping 5G, including Deloitte, is really pointing that out.

So the tranquil state of affairs for the edge access providers is about to change, and the final monopoly bottlenecks will be undone by a final wave coming in the form of price reductions of 90-99% and a complete rethink of last mile access and who pays.  We refer to this as the Full Stack Access (FSA) model.  In the coming weeks we will be looking at this model in more detail.

So enjoy some perspective from 14 years ago taking into account what has happened and ponder what may occur by 2032:

First printed in Telephony Online, September 20, 2004

Three enormous digital waves generated by long-distance, datacomms and mobility competition crashed over the info-media landscape in the past 20 years.  “Digitization” was a mechanism by which service providers balanced cost, coverage, capacity and clarity/QoS (the 4Cs of supply) with ubiquity, usability and unit cost (the 3Us of marginal demand). Even as per unit costs and prices dropped 99% over the respective decades, demand elasticity caused revenue per user to grow.  At the same time competitive service providers and the capital markets rudely learned that getting, keeping and stimulating demand on rapidly obsoleting capital bases were the key drivers for income statements and balance sheets.

In the aftermath of these market-driven waves, the 100 year-old, highly regulated, 2-way PSTN and 80 year-old, moderately regulated, 1-way media and broadcast segments were joined by the entirely new, and relatively unregulated, 2-way datacomm and wireless segments.  The result was growing chaos, which the Telecom and Cable Acts were meant, but failed, to resolve.

To rationalize and counter this chaos the markets embraced the concept of convergence, epitomizing the “all in one” approach in vertically integrated CLECs (PSTN) and horizontally oriented ASPs (Internet).  Unfortunately, vertically integrated service providers could not scale all layers of their operations and investment effectively across a demand environment where everyone and every organization wanted its converged bundle put together differently.  And while ASPs appeared to scale more effectively along those lines, IP, as a 4-layer protocol, was prone to poor QoS and costs that actually snowballed in a world of distributed processing.  In the end $250+ billion of promise and hype got washed out to sea.

Since then, all four segments have continued to follow the immutable laws of Moore and Metcalfe in their supply evolution, while demand has continued to evolve at a rapid, and varied, pace.  At the same time IP has grown up as a ready-for-prime-time, scalable, 7-layer protocol stack, and represents the foundation for the 4th and final digital wave.

We see the tidal wave, as many increasingly do, developing rapidly, but perceive it as coming from a different direction.  Most expect the wave to start in the migration from TDM to IP at the customer premise and in the MAN Class 5 to softswitch conversion process.  In reality, IP and VoIP have scaled in entirely opposite areas over the last 5 years; namely in the WAN and across horizontal layers of the stack.  We refer to this as the core-driven, as opposed to edge-driven, VoIP model.

From this perspective, the growing wave will develop from the core and crash against the vertically integrated service providers and undo monopoly bandwidth bottlenecks.  The latter are best understood when looking at the level and substance of access charges in the last mile.  Today, 1 megabit per second of synchronous (high-QoS) MAN bandwidth for commercial applications costs about $200/month in developed countries and $500/month in lesser developed, and less competitive, markets.  When contrasted with actual hardware, software and provisioning/operating costs in the LAN and WAN today, those numbers should be closer to $10 and $20, respectively.  As well, the bulk of the monopoly cash flow actually derives from the terminating, not originating, side of calls or sessions.

Over the next few weeks and months we will break down the sequence of events and likely developments that will lead to a final and precipitous collapse in access pricing, a la the previous 3 digital booms.  As well, we will develop the revenue and demand upside, also consistent with those booms.  Our crystal ball also says it’s a pretty good outcome, notwithstanding a lot of wrenching change!  Gee, didn’t that happen 3 times previously?

Related Items:

Mark Zuckerberg says to Kara Swisher: "You can just see this trajectory from early internet, when the technology and connections were slow, most of the internet was text. Text is great, but it can be sometimes hard to capture what’s going on. Then, we all got phones with cameras on them and the internet got good enough to be primarily images. Now the networks are getting good enough that it’s primarily video."

Posted by: Michael Elling AT 12:15 pm   |  Permalink   |  0 Comments  |  Email
Thursday, November 23 2017

tl;dr: both sides are wrong in the net neutrality debate.  We need to look at networks and internetworking differently; otherwise digital and wealth divides will continue, and worse.  A new way of understanding networks and network theory is called equilibrism.

The term “net neutrality” is a contrivance at best and a farcical notion at worst. That’s why both sides can be seen as right and wrong in the debate. Net neutrality was invented in the early 2000s to account for the loss of equal access (which was the basis for how the internet really started) and the failure to address critical interconnection issues in the Telecom Act of 1996 across converging ecosystems of voice, video and data on wired and wireless networks.

The reality is that networks are multi-layered (horizontal) and multi-boundaried (vertical) systems. Supply and demand issues in this framework need to be cleared across all of these demarcation points. Sounds complex. It is complex (see here for illustration). Furthermore, imbalance in one area exerts pressure in another. Now add to that concept of a single network an element of “inter-networking” and the complexity grows exponentially. The inability to apply net neutrality consistently across the framework(s) is its biggest weakness.

That's the technology and economic perspective.

Now let’s look at the socio-economic and political perspective. Networks are fundamental to everything around and within us; both physical and mental models. They explain markets, theory of the firm, all of our laws, social interaction, even the biology and chemistry of our bodies and the physical laws that govern the universe. Networks reduce risk for the individual actors/elements of the network. And networks exhibit the same tendencies, be they one-way or two-way; real-time or store and forward.

These tendencies include value that grows geometrically with the number and nature of transactions/participants and gets captured at the core and top of this framework that is the network, while costs grow more or less linearly (albeit with marginal differences) and are mostly borne at the bottom and edge. The costs can be physical (as in a telecom or cable network) or virtual (as in a social media network, where the cost is higher anxiety, loss of privacy, etc.). To be sustainable and generative*, networks need some conveyance of value from the core and top to the costs at the bottom and edge. I refer to this as equilibrism. Others call it universal service. There is a difference.

(*-If we don’t have some type of equilibrism the tendency in all networks is towards monopoly or oligopoly; which is basically what we see under neo-liberalism and early forms of capitalism before trust-busters and Keynesian policies.)

To understand how universal service and equilibrism differ within this “natural law of networks” we have to throw in two other immutable, natural outcomes evident everywhere, namely Pareto and standard distributions. The former easily shows the geometric (or outsized) value capture referred to above. Standard distributions (bell curves) reflect extreme differences in supply and demand at the margin. Once we factor both of these in, we find that networks can never completely tend toward full centralization or full decentralization and be sustainable. So the result is a constant push/pull of tradeoffs horizontally in the framework (between core and edge) facilitated by tradeoffs vertically (between upper and lower layers).  

For example, a switch in layer 3 offsets the cost and complexity of layers 1 and 2 (total mesh vs star). This applies to distance and density and to how the upper layers of the stack affect the lower layers. For a given set of demand, supply can either be centralized or distributed (i.e. cloud vs OpenFog or MEC; or centralized payment systems like Visa vs blockchain). A lot of people making the case for fully distributed or for fully centralized systems seemingly do not understand these horizontal and vertical tradeoffs.
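To make that layer 3 vs layers 1-2 tradeoff concrete, here is a minimal sketch (our own illustration, not from the original post) of the link counts for a full mesh versus a star built around a switch; the arithmetic is why switching exists at all.

```python
# Illustrative tradeoff: a full mesh needs a physical path between every pair
# of nodes, while a star pushes that complexity into a single layer 3 switch.
def full_mesh_links(n: int) -> int:
    """Links needed to connect n nodes directly to one another."""
    return n * (n - 1) // 2

def star_links(n: int) -> int:
    """Links needed when every node homes onto one central switch."""
    return n

for n in (10, 100, 1000):
    print(f"{n:>5} nodes: mesh needs {full_mesh_links(n):>7} links, star needs {star_links(n):>5}")

# The mesh grows quadratically and the star linearly:
# 10 nodes -> 45 vs 10; 100 nodes -> 4,950 vs 100; 1,000 nodes -> 499,500 vs 1,000.
```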

The bottom-line: a network (or series of internetworks) that is fully centralized or decentralized is unsustainable and a network (internetworks) where there is no value/cost (re)balancing is also unsustainable. Much of what we are seeing in today’s centralized or monopolistic “internet” and the resulting call for decentralization (blockchain/crypto) is evidence of these conclusions. The confusion or misunderstanding lies in the fact that the network is nothing without its components or actors, and the individual actors are nothing without the network. Which is more important? Both.

Now add back in the “inter-networking” piece and it seems that no simple answer, like net neutrality, solves how we make networks and ecosystems sustainable; especially when supply depreciates rapidly, costs are in a near constant state of (relative or absolute) decline, and demand is infinite and infinitely diverse. These parameters have always been around (we call them technology and market differentiation/specialization), but they’ve become very apparent in the last 50 years with the advent of computers and the digital networked ecosystems that dominate all aspects of our lives. Early signs have abounded since the first digital network (the telegraph) was invented**, but we just haven’t recognized them until now, with a monopolized internet at global scale; albeit one that was intended to be free and open and is anything but. So there exists NO accepted economic theory explaining or providing the answer to the problems of monopoly information networks, problems that have been debated for well over 100 years, since Theodore Vail promised “One Policy, One System, Universal Service” and the US government officially blessed information monopolies.

(** — digital impacts arguably began with the telegraph 170 years ago and their impact on goods and people, e.g. railroads, and stock markets, e.g. the ticker tape. The velocity of information increased geometrically. Wealth and information access divides became enormous by the late 1800s.)

"Equilibrism" may be THE answer that provides a means towards insuring universal access in competitive digital networked ecosystem. Equilibrism holds that settlements across and between the boundaries and layers are critical and that greater network effects occur the more interconnected the networks are down towards the bottom and edges of the ecosytems. Settlements serve two primary functions. First they are price signals. As such they provide incentives and disincentives between actors (remember the standard distribution between all the marginal actors referred to above?). Second they provide a mechanism for value conveyance between those who capture the value and those who bear the costs (remember the pareto distribution above?). In the informational stackas we’ve illustrated it, settlements exist north-south (between app and infrastructure layers) and east-west (between actors/networks). But a lack of these settlements has resulted in extreme misshaping of both the pareto optimum and normal distribution.

We find very little academic work around settlements*** and, in particular, the proper level for achieving sustainability and generativity. The internet is a “settlement free” model; it therefore lacks incentives/disincentives and in the process makes risk one-sided. Also, without settlements a receiving party cannot subsidize the sender (say goodbye to 800-like services, which scaled the competitive voice WAN in the 1980s and 90s and paved the way for the internet to scale). Lastly, and much more importantly than the recent concerns over security, privacy and demagoguery, the internet does not facilitate universal service.

(*** — academic work around “network effects”, on the other hand, has seen a surge over the last 40 years since the concept was derived at Bell Labs in 1974 by an economist studying the failure of the famous Picturephone from 1963. Of course a lot of this academic work is flawed (and limited) without an understanding of the role of settlements.)

Unlike the winner-takes-all model (the monopoly outcomes referred to above), equilibrism points to a new win/win model where supply and demand are cleared much more efficiently and the ecosystems are more sustainable and generative. So where universal service is seen as a “taking”, or a tax on those who have in order to give to those who don’t, addressing only portions of the above 2 curves, equilibrism is fundamentally about “giving” to everyone, albeit slightly less to those who have than to those who don’t. Simply put, equilibrism is at work when the larger actor pays a slightly higher settlement than the smaller actor, but in return still gets a relatively larger benefit due to network effects. Think of it in gravity terms between 2 masses. 
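To make the gravity analogy concrete, here is a toy sketch of equilibrism between two interconnecting networks. Everything in it is an assumption made purely for illustration: a Metcalfe-style n-squared value function, pro rata value capture, a 5% settlement rate and an equal redistribution of the settlement pool.

```python
# Toy model of equilibrism between two interconnecting networks.
# All functions and numbers here are illustrative assumptions, not data.
def value(n: float) -> float:
    return n ** 2                           # assumed Metcalfe-style network value

def interconnect(n_big: float, n_small: float, rate: float = 0.05) -> None:
    total = n_big + n_small
    pool = rate * value(total)              # settlement pool, redistributed equally
    for n in (n_big, n_small):
        captured = value(total) * n / total # value captured pro rata to size
        paid = rate * captured              # larger actor pays the larger settlement
        final = captured - paid + pool / 2
        gain = final - value(n)             # gain vs staying stand-alone
        print(f"size {n:>5.0f}: captures {captured:>8.0f}, pays {paid:>6.1f}, "
              f"receives {pool/2:>6.1f}, nets {final:>8.1f} (gain {gain:+.1f})")

interconnect(100, 10)
# Under these assumptions both actors end up better off than stand-alone; the
# larger one pays the larger settlement yet keeps the larger absolute value --
# a win/win rather than winner-takes-all outcome.
```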

We are somehow brainwashed into thinking winner takes all is a natural outcome. In fact it is not. Nature is about balance and evolution; a constant state of disequilibria striving for equilibrium. It’s not survival of the fittest; it’s about survival of the most adaptable. Almost invariably adaptation comes from the smaller actor or new entrant into the ecosystem. And that ultimately is what drives sustainability and advancement; not unfettered winner takes all competition. Change should be embraced, not rejected, because it is constantly occurring in nature. That’s how we need to begin to think about all our socio-economic and political institutions. This will take time, since it cuts against what we believe to be true since the dawn of mankind. If you don’t think so, take a refresher course in Plato’s Republic.

If we don’t change our thinking and approach, at best digital and wealth divides will continue and we’ll convulse from within. At worst, outside forces, like technology (AI), other living organisms (contagions) and matter (climate change) will work against us in favor of balance. That’s just natural.

Related Reading:

Why Tech is Evil, NYT

Silicon Valley's erasing our individuality

Neoliberalism's ideology problem

Posted by: Michael Elling AT 05:30 am   |  Permalink   |  0 Comments  |  Email
Sunday, March 30 2014

Why App Coverage Will Drive Everything

Given the smartphone’s ubiquity and our dependence on it, “App Coverage” (AC) is something confronting us every day, yet we know little about it. At the CCA Global Expo this week in San Antonio Glenn Laxdal of Ericsson spoke about “app coverage”, which the vendor first surfaced in 2013.  AC is defined as, “the proportion of a network’s coverage that has sufficient performance to run a particular app at an acceptable quality level.”  In other words the variety of demand from end-users for voice, data and video applications is outpacing the ability of carriers to keep up.  According to Ericsson, monitoring and ensuring performance of app coverage is the next wave in LTE networks.  Here’s a good video explaining AC in simple, visual terms.

Years, nay, decades ago I used to say coverage should be measured in 3 important ways:

  • Geographic (national vs regional vs local)
  • In/Outdoors (50+% loss indoors)
  • Frequency (capex roughly doubles at 1900 MHz vs 700 MHz)

Each of these had specific supply/demand clearing implications across dozens of issues impacting balance sheets and P&L statements; ultimately determining winners and losers.  They are principally why AT&T and Verizon today have 70% of subscribers (80% of enterprise customers) up from 55% just 5 years ago, 84% of total profit, and over 100% of industry free cash flow.  Now we can add “applications” to that list.  And it will only make it more challenging for competitors to wrestle share from the “duopoly”.

Cassidy Shield of Alcatel-Lucent further stated that fast-follower strategies to the duopoly would likely fail, implying that radical rethinking was necessary.  Some of that came quickly in the form of Masayoshi Son’s announcement of a broad partnership with NetAmerica and members of CCA for preferred roaming, concerted network buildout, sharing of facilities and device purchase agreements. This announcement came two weeks after Son visited Washington DC and laid out Sprint’s vision for a new, more competitive wireless future in America.

The conference concluded with a panel of CEOs hailing Sprint’s approach, which Son outlined here, as one of benevolent dictator (perhaps not the best choice of words) and extolling the mantra partner, partner, partner; something that Terry Addington of MobileNation has said has taken way too long.  Even then the panel agreed that pulling off partnerships will be challenging.

The Good & Bad of Wireless

Wireless is great because it is all things to all people, and that is what makes it bad too.  Planning for and accounting how users will access the network is very challenging across a wide user base.  There are fundamentally different “zones” and contexts in which different apps can be used and they often conflict with network capacity and performance.  I used to say that one could walk and also hang upside down from a tree and talk, but you couldn’t “process data” doing those things.  Of course the smartphone changed all that and people are accessing their music apps, location services, searches, purchases, and watching video from anywhere; even hanging upside down in trees.

Today voice, music and video consume 12, 160 and 760 kbps of bandwidth, respectively, on average.  Tomorrow those numbers might be 40, 500 and 1500, and that’s not even taking into account “upstream” bandwidth, which will be even more of a challenge for service providers to provision when consumers expect more 2-way collaboration everywhere.  The law of wireless gravity, which states that bits will seek out fiber/wire as quickly and cheaply as possible, will apply, necessitating sharing of facilities (wireless and wired), heterogeneous networks (HetNets), and aggressive wifi offload approaches; even consumers will be shared in the form of managed services across communities of users (known today as OTT).  The show agenda included numerous presentations on distributed antenna networks and wifi offload applied to the rural coverage challenge.

Developing approaches ex ante to anticipate demand is even more critical if carriers want to play major roles in the internet of things, unified (video) communications and the connected car.  As Ericsson states in its whitepaper,

“App coverage integrates all aspects of network performance – including radio network throughput and latency, capacity, as well as the performance of the backhaul, packet core and the content-delivery networks. Ultimately, managing app coverage and performance demands a true end-to-end approach to designing, building and running mobile networks.”

Posted by: Michael Elling AT 10:37 am   |  Permalink   |  0 Comments  |  Email
Monday, March 17 2014

A New Visionary In Our Midst?

The US has lacked a telecom network visionary for nearly 2 decades.  There have certainly been strong and capable leaders, such as John Malone who not only predicted but brought about the 500 channel LinearTV model.  But there hasn’t been someone like Bill McGowan who broke up AT&T or Craig McCaw who first had the vision to build a national, seamless wireless network, countering decades of provincial, balkanized thinking.  Both of them fundamentally changed the thinking around public service provider networks.

But with a strong message to the markets in Washington DC on March 11 from Masayoshi Son, Sprint’s Chairman, the 20-year wait may finally be over.  Son did what few have been capable of doing over the past 15-20 years since McGowan exited stage left and McCaw sold out to Ma Bell: telling it like it is.  The fact is that today’s bandwidth prices are 20-150x higher than they should be with current technology.

This is no one’s fault in particular and in fact to most people (even informed ones) all measures of performance-to-price compared to 10 or 20 years ago look great.  But, as Son illustrated, things could be much, much better.  And he’s willing to make a bet on getting the US, the most advanced and heterogeneous society, back to a leadership role with respect to the ubiquity and cost of bandwidth.  To get there he needs more scale and one avenue is to merge with T-Mobile.

There have been a lot of naysayers as to the possibility of a Sprint-T-Mo hookup, including leaders at the FCC.  But don’t count me as one; it needs to happen.  Initially skeptical when the rumors first surfaced in December, I quickly reasoned that a merger would be the best outcome for the incentive auctions.  A merger would eliminate spectrum caps as a deterrent to active bidding and maximize total proceeds.  It would also have a better chance of developing a credible third competitor with equal geographic reach. Then in January the FCC and DoJ came out in opposition to the merger.

In February, though, Comcast announced the much rumored merger with TW and Son jumped on the opportunity to take his case for merging to a broader stage.  He did so in front of a packed room of 300 communications pundits, press and politicos at the US Chamber of Commerce’s prestigious Hall of Flags; a poignant backdrop for his own rags-to-riches story.  Son’s frank honesty about the state of broadband for the American public vs the rest of the world, as well as about Sprint’s own miserable current performance, was impressive.  It’s a story that resonates with my America’s Bandwidth Deficit presentation.

Here are some reasons the merger will likely pass:
  • The FCC can’t approve one horizontal merger (Comcast/TW) that brings much greater media concentration and control over content distribution, while disallowing a merger of two small players (really irritants as far as AT&T and Verizon are concerned).
  • Son has a solid track record of disruption and doing what he says.
  • The technology and economics are in his favor.
  • The vertically integrated service provider model will get disrupted faster and sooner as Sprint will have to think outside the box, partner, and develop ecosystems that few in the telecom industry have thought about before; or if they have, they’ve been constrained by institutional inertia and hidebound by legacy regulatory and industry siloes.

Here are some reasons why it might not go through:

  • The system is fundamentally corrupt.  But the new FCC Chairman is cast from a different mold than his predecessors and is looking to make his mark on history.
  • The FCC shoots itself in the foot over the auctions.  Given all the issues and sensitivities around incentive auctions the FCC wants this first one to succeed as it will serve as a model for all future spectrum refarming issues. 
  • The FCC and/or DoJ find in the public interest that the merger reduces competition.  But any analyst can see that T-Mo and Sprint do not have sustainable models at present on their own; especially when all the talk recently in Barcelona was already about 5G.

Personally I want Son’s vision to succeed because it’s the vision I had in 1997 when I originally brought the 2.5-2.6 GHz (MMDS) spectrum to Sprint, and later in 2001 and 2005 when I introduced Telcordia’s 8x8 MIMO solutions to their engineers.  Unfortunately, past management regimes at Sprint were incapable of understanding the strategies and future vision that went along with those investment and technology pitches.  Son has a different perspective (see in particular minute 10 of this interview with Walt Mossberg), with his enormous range of investments and clear understanding of price elasticity and the marginal cost of minutes and bits.

To be successful Sprint’s strategy will need to be focused, but at the same time open and sharing in order to simultaneously scale solutions across the three major layers of the informational stack (aka the InfoStack):

  • upper (application and content)
  • middle (control)
  • lower (access and transport)

This is the challenge for any company that attempts to disrupt the vertically integrated telecom or LinearTV markets; the antiquated and overpriced ones Son says he is going after in his presentation.    But the US market is much larger and more robust than the rest of the world, not just geographically, but also from a 360 degree competitive perspective where supply and demand are constantly changing and shifting.

Ultimate success may well rest in the control layer, where Apple and Google have already built up formidable operating systems which control vastly profitable settlement systems across multiple networks.  What few realize is that the current IP stack does not provide price signals and settlement systems that clear supply and demand between upper and lower layers (north-south) or between networks (east-west) in the newly converged “informational” stack of 1- and 2-way content and communications.

If Sprint’s Chairman realizes this and succeeds in disrupting those two markets with his strategy then he certainly will be seen as a visionary on par with McGowan and McCaw.

Posted by: Michael Elling AT 09:58 am   |  Permalink   |  0 Comments  |  Email
Wednesday, August 07 2013

Debunking The Debunkers

The current debate over the state of America's broadband services and over the future of the internet is like a 3-ring circus or 3 different monarchists debating democracy.  In other words an ironic and tragically humorous debate between monopolists, be they ultra-conservative capitalists, free-market libertarians, or statist liberals.  Their conclusions do not provide a cogent path to solving the single biggest socio-political-economic issue of our time due to pre-existing biases, incorrect information, or incomplete/wanting analysis.  Last week I wrote about Google's conflicts and paradoxes on this issue.  Over the next few weeks I'll expand on this perspective, but today I'd like to respond to a Q&A, Debunking Broadband's Biggest Myths, posted on Commercial Observer, a NYC publication that deals with real estate issues mostly and has recently begun a section called Wired City, dealing with a wide array of issues confronting "a city's" #1 infrastructure challenge.  Here's my debunking of the debunker.

To put this exchange into context, the US led the digitization revolutions of voice (long-distance, touchtone, 800, etc.), data (the internet, frame relay, ATM, etc.) and wireless (10 cents, digital messaging, etc.) because of pro-competitive, open access policies in long-distance, data over dial-up, and wireless interconnect/roaming.  If Roslyn Layton had not conveniently forgotten these facts, or if she understood both the relative and absolute impacts on price and infrastructure investment, she would answer the following questions differently:

Real Reason/Answer:  our bandwidth is 20-150x overpriced on a per-bit basis because we disconnected from Moore's and Metcalfe's laws 10 years ago, due to the Telecom Act, then special access "de"regulation, then Brand-X, which shut down equal access for broadband.  This rate differential is shown in the discrepancy between rates we pay in NYC and what Google charges in KC, as well as the difference in performance/price of 4G and wifi.  It is great that Roslyn can pay $3-5 a day for Starbucks.  Most people can't (and shouldn't have to) just for a cup of joe that you can make at home for 10-30 cents.

Real Reason/Answer:  Because of their vertical business models, carriers are not well positioned to generate high ROI on rapidly depreciating technology and inefficient operating expense at every layer of the "stack" across geographically or market segment constrained demand.  This is the real legacy of inefficient monopoly regulation.  Doing away with regulation, or deregulating the vertical monopoly, doesn’t work.  Both the policy and the business model need to be approached differently.  Blueprints exist from the 80s-90s that can help us restructure our inefficient service providers.  Basically, any carrier that is granted a public ROW (right of way) or frequency should be held to an open access standard in layer 1.  The quid pro quo is that end-points/end-users should also have equal or unhindered access to that network within (economic and aesthetic) reason.  This simple regulatory fix solves 80% of the problem as network investments scale very rapidly, become pervasive, and can be depreciated quickly.

Real Reason/Answer:  Quasi-monopolies exist in video for the cable companies and in coverage/standards in frequencies for the wireless companies.  These scale economies derived from pre-existing monopolies or duopolies granted by, and maintained to a great degree by, the government.  The only open or equal access we have left from the 1980s-90s (the drivers that got us here) is wifi (802.11), which is a shared and reusable medium with the lowest cost/bit of any technology on the planet as a result.  But other generative and scalable standards were developed in the US or with US companies at the same time, just like the internet protocol stack: mobile OSs, 4G LTE (based on CDMA/OFDM technology), and OpenStack/OpenFlow, which now rule the world.  It's very important to distinguish which of these are truly open or not.

Real Reason/Answer: The third of the population that doesn't have/use broadband is as much a result of context and usability, whether community/ethnicity, age or income levels, as of cost and awareness.  If we had balanced settlements in the middle layers, based on transaction fees and pricing which reflect competitive marginal cost, we could have corporate and centralized buyers subsidizing the access and making it freely available everywhere for everyone.  Putting aside the ineffective debate between bill-and-keep and 2-sided pricing models, and instead implementing balanced settlement exchange models, will solve the problem of universal HD tele-work, education, health, government, etc…  We learned in the 1980s-90s from 800 and internet advertising that competition can lead to free, universal access to digital "economies".  This is the other 20% solution to the regulatory problem.

Real Reason/Answer:  The real issue here is that America led the digital information revolution prior to 1913 because it was a relatively open and competitive democracy, then took the world into 70 years of monopoly dark ages, finally broke the shackles of monopoly in 1983, and then led the modern information revolution through the 80s-90s.  The US has now fallen behind in relative and absolute terms in the lower layers due to consolidation and remonopolization.  Only the vestiges of pure competition from the 80s-90s, the horizontally scaled "data" and "content" companies like Apple, Google, Twitter and Netflix (and many, many more), are pulling us along.  The vertical monopolies stifle innovation and the generative economic activity we saw in those 2 decades.  The economic growth numbers and fiscal deficit do not lie.

Posted by: Michael Elling AT 08:02 am   |  Permalink   |  0 Comments  |  Email
Wednesday, July 31 2013

Is Google Trying to Block Web 4.0?

Back in 1998 I wrote, “if you want to break up the Microsoft software monopoly then break up the Baby Bell last-mile access monopoly.”  Market driven broadband competition and higher-capacity digital wireless networks gave rise to the iOS and Android operating systems over the following decade which undid the Windows monopoly.  The 2013 redux to that perspective is, once again, “if you want to break up the Google search monopoly then break up the cable/telco last mile monopolies.” 

Google is an amazing company, promoting the digital pricing and horizontal service provider spirit more than anyone.  But Google is motivated by profit and will seek to grow that profit as best it can, even if contrary to founding principles and market conditions that fueled its success (aka net neutrality or equal access).  Now that Google is getting into the lower layers in the last mile they are running into paradoxes and conflicts over net neutrality/equal access and in danger of becoming just another vertical monopoly.  (Milo Medin provides an explanation in the 50th minute in this video, but it is self-serving, disingenuous and avoids confronting the critical issue for networks going forward.)

Contrary to many people’s beliefs, the upper and lower layers have always been inextricably interdependent, and nowhere was this more evident than with the birth of the internet out of the flat-rate dial-up networks of the mid to late 1980s (a result of dial-1 equal access).  The nascent ISPs that scaled in the 1980s on layer 1-2 data bypass networks were likewise protected by Computer II-III (aka net neutrality) and benefited from competitive (WAN) transport markets.

Few realize or accept that the genesis of Web 1.0 (W1.0) was the break-up of AT&T in 1983.   Officially birthed in 1990, it was an open, 1-way store-and-forward database lookup platform on which 3 major applications/ecosystems scaled beginning in late 1994 with the advent of the browser: communications (email and messaging), commerce, and text and visual content.  Even though everything was narrowband, W1.0 began the inexorable computing collapse back to the core, aka the cloud (4 posts on the computing cycle and relationship to networks).  The fact that it was narrowband didn't prevent folks like Mark Cuban and Jeff Bezos from envisioning and selling a broadband future 10 years hence.   Regardless, W1.0 started collapsing in 1999 as it ran smack into an analog dial-up brick wall.  Google hit the big time that year and scaled into the early 2000s by following KISS and freemium business model principles.  Ironically, Google’s chief virtue was taking advantage of W1.0’s primary weakness.

Web 2.0 grew out of the ashes of W1.0 in 2002-2003.  W2.0 both resulted from and fueled the broadband (BB) wars starting in the late 1990s between the cable (offensive) and telephone (defensive) companies.  BB penetration reached 40% in 2005, a critical tipping point for the network effect, exactly when YouTube burst on the scene.  Importantly, BB (which doesn't have equal access, under the guise of "deregulation") wouldn’t have occurred without W1.0 and the above two forms of equal access in voice and data during the 1980s-90s.  W2.0 and BB were mutually dependent, much like the hardware/software Wintel model.   BB enabled the web to become rich-media and mostly 2-way and interactive.  Rich-media driven blogging, commenting, user generated content and social media started during the W1.0 collapse and began to scale after 2005.

“The Cloud” also first entered people’s lingo during this transition.  Google simultaneously acquired YouTube in the upper layers to scale its upper and lower layer presence and traffic and vertically integrated and consolidated the ad exchange market in the middle layers during 2006-2008.  Prior to that, and perhaps anticipating lack of competitive markets due to "deregulation" of special access, or perhaps sensing its own potential WAN-side scale, the company secured low-cost fiber rights nationwide in the early 2000s following the CLEC/IXC bust and continued throughout the decade as it built its own layer 2-3 transport, storage, switching and processing platform.  Note, the 2000s was THE decade of both vertical integration and horizontal consolidation across the board aided by these “deregulatory” political forces.  (Second note, "deregulatory" should be interpreted in the most sarcastic and insidious manner.)

Web 3.0 began officially with the iPhone in 2007.  The smartphone enabled 7x24 and real-time access and content generation, but it would not have scaled without wifi’s speed, as 3G wireless networks were at best late 1990s era BB speeds and didn’t become geographically ubiquitous until the late 2000s.  The combination of wifi (high speeds when stationary) and 3G (connectivity when mobile) was enough though to offset any degradation to user experience.  Again, few appreciate or realize that  W3.0 resulted from two additional forms of equal access, namely cellular A/B interconnect from the early 1980s (extended to new digital PCS entrants in the mid 1990s) and wifi’s shared spectrum.  One can argue that Steve Jobs single-handedly resurrected equal access with his AT&T agreement ensuring agnostic access for applications.  Surprisingly, this latter point was not highlighted in Isaacson's excellent biography.  Importantly, we would not have had the smartphone revolution were it not for Jobs' equal access efforts.

W3.0 proved that real-time, all-the-time "semi-narrowband" (given the contexts and constraints around the smartphone interface) trumped store-and-forward "broadband" on the fixed PC for 80% of people’s “web” experience (connectivity and interaction were more important than speed), as PC makers only realized by the late 2000s.  Hence the death of the Wintel monopoly, not by government decree, but by market forces 10 years after the first anti-trust attempts.  Simultaneously, the cloud became the accepted processing model, coming full circle back to the centralized mainframe model circa 1980, before the PC and slow-speed telephone network led to its relative demise.  This circularity further underscores not only the interplay between upper and lower layers but between edge and core in the InfoStack.  Importantly, Google acquired Android in 2005, well before W3.0 began, as they correctly foresaw that small screens and mobile data networks would foster the development of applications whose attendant ecosystems would intrude on browser usage and Google's advertising (near) monopoly. 

Web 4.0 is developing as we speak and no one is driving it and attempting to influence it more with its WAN-side scale than Google.  W4.0 will be a full-duplex, 2-way, all-the time, high-definition application driven platform that knows no geographic or market segment boundaries.  It will be engaging and interactive on every sensory front; not just those in our immediate presence, but everywhere (aka the internet of things).  With Glass, Google is already well on its way to developing and dominating this future ecosystem.  With KC Fiber Google is illustrating how it should be priced and what speeds will be necessary.  As W4.0 develops the cloud will extend to the edge.  Processing will be both centralized and distributed depending on the application and the context.  There will be a constant state of flux between layers 1 and 3 (transport and switching), between upper and lower layers, between software and hardware at every boundary point, and between core and edge processing and storage.  It will dramatically empower the end-user and change our society more fundamentally than what we’ve witnessed over the past 30 years.  Unfortunately, regulators have no gameplan on how to model or develop policy around W4.0.

The missing pieces for W4.0 are fiber-based and super-high-capacity wireless access networks in the lower layers, settlement exchanges in the middle layers, and cross-silo ecosystems in the upper layers.   Many of these elements are developing in the market naturally: big data, hetnets, SDN, OpenFlow, open OSs like Android and Mozilla, etc… Google’s strategy appears consistent and well coordinated to tackle these issues, if not far ahead of others. But its vertically integrated service provider model and stance on net neutrality in KC are in conflict with the principles that so far have led to its success.

Google is buying into the vertical monopoly mindset to preserve its profit base instead of teaching regulators and the markets about the virtues of open or equal access across every layer and boundary point (something clearly missing from Tim Wu's and Bob Atkinson's definitions of net neutrality).  In the process it is impeding the development of W4.0.  Governments could solve this problem by simply conditioning any service provider with access to a public right of way or frequency to equal access in layers 1 and 2; along with a quid pro quo that every user has a right to access unhindered by landlords and local governments within economic and aesthetic reason.  (The latter is a bone we can toss all the lawyers who will be looking for new work in the process of simpler regulations.)  Google and the entire market will benefit tremendously by this approach.  Who will get there first?  The market (Google or MSFT/AAPL if the latter are truly hungry, visionary and/or desperate) or the FCC?  Originally hopeful, I’ve become less sure of the former over the past 12 months.  So we may be reliant on the latter.

Related Reading:

Free and Open Depend on Where You Are in Google's InfoStack

InfoWorld defends Google based on its interpretation of NN; of which there are 4

DSL reports thinks Google is within its rights because it expects to offer enterprise service.  Only they are not and heretofore had not mentioned it.

A good article on Gizmodo about state of the web what "we" are giving up to Google

The datacenter as an "open access" boundary.  What happens today in the DC will happen tomorrow elsewhere.

Posted by: Michael Elling AT 10:57 am   |  Permalink   |  4 Comments  |  Email
Thursday, May 02 2013

What Exactly Is Intermodal Competition?

Intermodal competition is defined as: “provision of the same service by different technologies (i.e., a cable television company competing with a telephone company in the provision of video services).”

Intramodal competition is defined as: “competition among identical technologies in the provision of the same service (e.g., a cable television company competing with another cable television company in the offering of video services).”

Focus on 4 words: same, different, identical, same.  Same is repeated twice.

The Free State Foundation (FSF) is out with a paper regarding the existence of intermodal competition between wireless and wired.  The reason is that they take exception with the FCC’s recent reports on Wireless and Video competition.

Saying wireless represents intermodal competition to wired (fiber/coax) is like saying that books compete with magazines or radio competes with TV.  Sure, the former both deliver the printed word.  And the latter both pass for entertainment broadcast to us.  Right?

Yet these are fundamentally different applications and business models, even if they share common network layers and components; or, in English, similarities exist across production, distribution and consumption, but the business models are all different.

Wireless Is Just Access to Wireline

So are wireless and wired really the SAME?  For voice they certainly aren’t.  Wireless is still best efforts.  It has the advantage of being mobile and with us all the time, which is a value-add, while wired offers much, much better quality.  For data the differences are more subtle.  With wireless I can only consume stuff in bite sizes (email, Twitter, perusing content, etc.) because of throughput and device limitations (screen, processor, memory).  I certainly can’t multi-task and produce content the way I can on a PC linked to a high-speed broadband connection.

That said, increasingly people are using their smartphones as hotspots or repeaters to which they connect their notebooks and tablets and can then multi-task.  I do this a fair bit and it is good while I'm on the road and mobile, but certainly no substitute for a fixed wired connection/hotspot in terms of speed and latency.  Furthermore, wireless carriers, by virtue of their inefficient, vertically integrated, siloed business models and the fact that wireless is both shared and reused, have implemented onerous usage caps that limit total (stock) consumption even as they increase speed (flow).  The latter creates a crowding-out effect, and throughput is degraded as more people access the same radio, which I run into a lot.  I know this because my speed decreases or the 4G bars mysteriously disappear on my handset and indicate 3G instead.  

Lastly, one thing I can do with the phone that I can’t do with the PC is take pictures and video.  So they really ARE different.  And when it comes to video, there is about as much comparison between the two as between a tractor trailer and a motorcycle.  Both will get us there, but really everything else is different.  

At the end of the day, where the two are similar or related is when I say wireless is just a preferred access modality and extension of wired (both fixed and mobile) leading to the law of wireless gravity: a wireless bit will seek out fiber as quickly and cheaply as possible.  And this will happen once we move to horizontal business models and service providers are incented to figure out the cheapest way to get a bit anywhere and everywhere.

Lack of Understanding Drives Bad Policy

By saying that intermodal competition exists between wireless and wired, FSF is selectively taking aspects of the production, distribution and consumption of content, information and communications and conjuring up similarities that exist.  But they are really separate pieces of the bigger-picture puzzle.  I can almost cobble together a solution that is similar vis-a-vis the other, but it is still NOT the SAME for final demand!

This claiming to be one thing while being another has led to product bundling and on-net pricing--huge issues that policymakers and academics have ignored--that have promoted monopolies and limited competition.  In the process of both, consumers have been left with overpriced, over-stuffed, unwieldy and poorly performing solutions.

In the words of Blevins, FSF is once again providing a “vague, conflicting, and even incoherent definition of intermodal competition.”  10 years ago the US seriously jumped off the competitive bandwagon after believing in the nonsense that FSF continues to espouse.  As a result, bandwidth pricing in the middle and last mile disconnected from Moore’s and Metcalfe’s laws and is now overpriced 20-150x, impeding generative ecosystems and overall economic growth.
 

Related Reading:
DrPeering Really Thinks Cellphones Are, Well, Awful!

Apparently the only way to convince regulators is to lie or distort the "wireless only" stats

 

Posted by: Michael Elling AT 11:01 am   |  Permalink   |  0 Comments  |  Email
Thursday, April 25 2013

The Law of Wireless Gravity

I've written about the impacts of and interplay between Moore’s, Metcalfe’s and Zipf’s laws on the supply and demand of communication services and networks.  Moore’s and Metcalfe’s laws can combine to drive bandwidth costs down 50% annually.  Others have pointed out Butters’ law, coming from a Bell Labs wizard, Gerry Butters, which arrives at a more aggressive outcome: a 50% drop every 9 months!  Anyway those are the big laws that are immutable, washing against and over vertically integrated monopolies like giant unseen tsunamis.
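For a sense of what those halving rates imply, here is a quick back-of-the-envelope sketch (our arithmetic, not the post's) of the cumulative cost decline over a decade under each rate.

```python
# Cumulative cost decline implied by a 50% drop per year (Moore + Metcalfe,
# as argued above) versus a 50% drop every 9 months (Butters' law), over 10 years.
years = 10

annual_halving = 0.5 ** years                   # halves once per year
nine_month_halving = 0.5 ** (years * 12 / 9)    # halves every 9 months

print(f"50%/year     -> cost falls to {annual_halving:.5f} of the start (~1/{1/annual_halving:,.0f})")
print(f"50%/9 months -> cost falls to {nine_month_halving:.6f} of the start (~1/{1/nine_month_halving:,.0f})")

# Roughly 1/1,000 of the starting cost per bit at 50%/year,
# and roughly 1/10,000 at 50% every 9 months.
```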

Then there are the smaller laws, like that of my friend Russ McGuire at Sprint, who penned, “The value of any product or service increases with its mobility.”  Wow, that’s very Metcalfian and almost infinite in value because the devices and associated pathways can move in 3 planes.  I like that and have always believed in McGuire’s Law (even before he invented it!).

Since the early 1990s, when I was one of the few analysts on the Street, if not the only one, to cover both wired and wireless telecoms, I’ve been maintaining that wireless is merely access to wireline applications.  While that has finally been validated with “the cloud,” and business models and networks have been merging (at least at the corporate level), the majority of people still believe them to be fundamentally distinct.  It shows in simple things like interfaces and lack of interoperability across 4 screens.  Thankfully all that is steadily eroding due to cloud ecosystems and the enormous fight happening in the data world between the edge and the core and open vs closed:  GOOG vs AAPL vs MSFT (and let’s not forget Mozilla, the OS to rule all OSs?).

Anyone who works in or with the carriers knows wireless and wired networks are indelibly linked and always have been in terms of backhaul transport to the cell-tower.  But over the past 6 years the symbiosis has become much greater because of the smartphone.  1G and 2G digital networks were all capable of providing “data” connections from 1998-2006, but it really wasn’t until the iPhone happened on the scene in 2007 along with the advent of 3G networks that things really started taking off.

The key was Steve Jobs’ demand to AT&T that smartphone applications purchased through the App Store have unfettered access to the internet, be it through:

  • 2G, which was relatively pervasive, but slow at 50-300kbps,
  • 3G, which was not pervasive, but faster at 500-1500 kbps, or
  • Wifi (802.11g), which was pervasive in a lot of “fixed” areas like home, work or school.

The latter made a ton of sense in particular, because data apps, unlike voice, will more likely be used when one is relatively stationary, for obvious visual and coordination and safety reasons; the exception being music.  In 2007 802.11g Wifi was already 54 mbps, or 30-50x faster than 3G, even though the Wifi radios on smartphones could only handle 30 mbps.  It didn’t matter, since most apps rarely need more than 2 mbps to perform ok.  Unfortunately, below 2 mbps they provided a dismal experience and that’s why 3G had such a short shelf-life and the carriers immediately began to roll out 4G.

Had Jobs not gotten his way, I think the world would be a much different place as the platforms would not have been so generative and scaled so quickly without unconstrained (or nearly ubiquitous) access.  This is an example of what I call Metcalfian “suck” (network effect pull-through) of the application ecosystem for the carriers and nothing exemplified it better than the iPhone and App Store for the first few years as AT&T outpaced its rivals and the Android app ecosystem.  And it also upset the normal order of business first and consumer second through the bring your own device (BYOD) trend, blurring the lines between the two traditionally separate market segments.

Few people to this day realize or appreciate the real impact that Steve Jobs had, namely reviving equal access.  The latter was something the carriers and federal government conspired to kill, successfully, in the early 2000s.  Equal access was the horse that brought us competitive voice in the early 1980s and competitive data in the early 1990s, and helped scale digital wireless networks nationwide in the late 1990s.  All things we’re thankful for, yet have forgotten or never entirely appreciated, even down to how they came about.

Simply put, 70% of all mobile data access is over Wifi and we saw 4G networks develop 5 years faster than anyone thought possible.  Importantly, not only is Wifi cheaper and faster access, it is almost always tied to a broadband pipe that is either fiber or becomes fiber very quickly.

Because of this “smart,” market-driven form of equal access, and in appreciation of Steve Jobs’ brilliance, I am going to introduce a new law.  The Law of Wireless Gravity holds that "a wireless bit will seek out fiber as quickly and cheaply as possible.”  I looked it up on Google and it doesn’t exist.  So now I am introducing it into the public domain under creative commons.  Of course there will be plenty of metaphors about clouds and attraction and lightning to go along with the law.  As well, there will be numerous corollaries.

I hope people abide by this law in all their thinking about and planning for broadband, fiber, gigabit networks, application ecosystems, devices, control layers, residential and commercial demand, etc…because it holds across all of those instances.  Oh, yeah, it might actually counter the confusion over and disinformation about spectrum scarcity at the same time.  And it might solve the digital divide problem, and the USF problem, and the bandwidth deficit….and even the budget deficit.  Ok, one step at a time.

Related Reading:

Not exactly reading, but comic Bill Burr's ode to Steve Jobs

Looking back to 2001 at the number of predictions Kurzweil got right and wrong (sometimes just a matter of timing).

Posted by: Michael Elling AT 09:49 am   |  Permalink   |  0 Comments  |  Email
Saturday, March 02 2013

Mommy, Why Is Our Internet SOOOO Expensive?


I've talked elsewhere about the impact of immutable laws on networks and pricing.  
The best way to understand where we are is to look at bandwidth prices in the "middle mile."  Private lines are one of the essential costs in any service provider's model and can represent anywhere between 20% and 80% of its cost of revenue, depending on market and scale (for startups and rural carriers it is closer to 80%).  Ten years ago a 100 megabit per second (mbps) private line circuit cost, on average, $5,000 per month.

Let's assume we have 5 different states of competition: monopoly, duopoly, oligopoly, competitive (private) and competitive (public).  The last two differ in that a private user can take advantage of Moore's law, while public service providers also benefit from the scale economies of the network effect (Metcalfe's law).  The distinction between private and public is easily recognized in the changes in both the voice markets of the 1970s to 1990s (PBXs going to VPNs) and the data markets of the 1990s to 2000s (LANs/client-server going to the internet/cloud).  The increase in addressable demand and/or range of applications results in significant scale economies, driving enormous cost savings.

Now let's put our model comparing the 5 states in motion.  Assume the monopoly is quasi-regulated and society as a whole is aware of decreasing technology costs.  As a result, let's say the monopolist is generous (or is forced to be by the regulator) and drops pricing 6% per annum.  Over 10 years our 100 mbps circuit falls to $2,693 a month, a reduction of 46%!  Wow, looks good.

Now let's say we have a duopoly, which the article states is the case with multi-modal competition.  In that case 10% annual price drops have been the norm, and we end up with pricing 65% below where we started, or $1,743!  Wow, wow!  Everything looks fantastic!!  Just like the article says.

But wait, there is a catch.  That's not really how it happened, because bandwidth is not bandwidth is not bandwidth (with apologies to Gertrude Stein).  Bandwidth is made up of layer 1 costs (the physical medium, say coax, fiber or wireless) and layer 2 costs (transport protocols and electronics), and it can also be impacted by layer 3 switching (a tradeoff against both the physical and transport layer costs, depending on volumes, type of traffic and market segments).  In many instances there is a monopoly in one geographic area or layer, and it can create monopoly-like price/cost constraints everywhere, depending on the nature of the application and market segment in question.  But I digress.  Let's keep it simple.
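
As a quick aside, here is an illustrative decomposition in Python of a circuit's monthly cost by layer.  Every number is made up; the point is simply that a monopoly in just one layer (or one geography) drags the whole circuit toward monopoly pricing no matter how competitive the other layers are:

  # Illustrative only: hypothetical layer costs for one middle-mile circuit.
  layer_costs = {
      "layer1_physical": 600.0,    # fiber/coax/wireless medium
      "layer2_transport": 250.0,   # transport protocols and electronics
      "layer3_switching": 150.0,   # switching tradeoff
  }
  # Assume a single fiber owner on this route marks up layer 1 by 3x.
  monopoly_markup = {"layer1_physical": 3.0}

  total = sum(cost * monopoly_markup.get(layer, 1.0)
              for layer, cost in layer_costs.items())
  print(f"all-in monthly circuit cost: ${total:,.0f}")  # $2,200 vs $1,000 at competitive rates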

So let's just say we have an oligopoly (we really don't on a national basis, even in wireless, which is a duopoly as far as the top 50% of users is concerned); then we could reasonably expect to see 15% annual declines.  That same circuit would cost $984 today, 80% below prices 10 years ago, and more importantly that leaves 63% more gross margin for a competitor, value-added reseller or application developer to put into innovation, leasing or building out additional capacity to serve more demand.  Uh oh!  Now the conclusions in the article don't look so rosy.

But wait.  It gets worse.  Very large users still get to take advantage of Moore's law because they have buying power and, well, because they can; 95% of us can't.  If you peek under the covers of our telecom infrastructure, private networks are alive and well and getting cheaper and cheaper and cheaper (and in the process creating diseconomies for the rest of us left on public networks).  In urban markets, where most of these large customers are, price declines (but only for these customers) have followed Moore's law, on the order of 35% per year.  So that $5,000 circuit is in fact approaching or at $67.  A whopping 98.65% cost savings!  Absolutely f—king insane!  Most people don't realize or understand this, but it's the primary reason Google (a huge "private" user) can experiment with and "market" a 1 gig connection for $70/month to the end user (the other 95%) in Kansas City.

Finally, let's not forget about Metcalfe, our 5th and final state of competition.  (By the way, he's the generally accepted originator of Ethernet, which powers all our local area networks (LANs), increasingly our metro-area (MANs) and wide area networks (WANs), and, most importantly, all the WiFi connections in our mobile devices.)  Metcalfe's law (or virtue, as I call it) is that the cost of a network goes up linearly while the potential pathways, or traffic possibilities, or revenue opportunities go up geometrically.  In other words, costs grow minimally while value grows exponentially the more people use the network.  The internet and all its applications and application ecosystems (Facebook, Google, Amazon, etc.) are one big example of Metcalfe's law.  And the implication of Metcalfe's law is to scale Moore's law by a factor of 2.  Trust me on this; it's slightly more complicated math, but it's true.  So let's be conservative and say the price drops per bit should be around 60% per year (10 points less than 2 times 35%).  That means the circuit that cost $5,000 10 years ago should cost....$0.52 today.  Huh???? 52 cents?!?!  Say it ain't so!
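
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the compounding math behind all five states.  The $5,000 starting price and the annual decline rates are simply the assumptions laid out above, nothing more:

  START_PRICE = 5000.0  # monthly price of a 100 mbps private line, 10 years ago
  YEARS = 10

  # Assumed annual price declines for each state of competition (from the text above).
  annual_declines = {
      "monopoly (quasi-regulated)":    0.06,
      "duopoly":                       0.10,
      "oligopoly":                     0.15,
      "competitive/private (Moore)":   0.35,
      "competitive/public (Metcalfe)": 0.60,
  }

  for state, rate in annual_declines.items():
      price = START_PRICE * (1 - rate) ** YEARS
      savings = 1 - price / START_PRICE
      print(f"{state:31s} ${price:8.2f}/month  ({savings:.2%} below the starting price)")
  # Prints roughly $2,693, $1,743, $984, $67 and $0.52 respectively.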



That, my friends, is what could be.  What it should be.  And unfortunately, it's why the application world looks like it's 1999 all over again, why our economic growth has declined, and why countries that don't have our information economy scale have raced ahead of us.  It's not about density or regulatory edict/policy.  It's simply about the lack of competition; actually, the lack of the right type of competition, which requires another article.  The competition of the 1980s and 1990s brought about digitization and price declines of 99% in voice, data and wireless.  But revenues grew anywhere from 6%-25% annually in all those segments because 3 forms of demand elasticity kicked in: 1) normal demand/price elasticity; 2) a shift from private back to public; and 3) application-driven elasticities.  Basically there was a lot of known/perceived, latent/stored and potential demand that was unleashed simply by price declines.  And that is precisely the point we are at today, a point few if any appreciate, and one contrary to the assumptions and conclusions of the article.


Posted by: AT 08:00 am   |  Permalink   |  Email
Wednesday, February 06 2013

Is IP Growing UP? Is TCPOSIP the New Protocol Stack? Will Sessions Pay For Networks?

Oracle's purchase of Acme Packet, the leading session border controller (SBC) vendor, is a tiny seismic event in the information and communications technology (ICT) landscape.  Few notice the potential for much broader upheaval ahead.

SBCs, which have been around since 2000, facilitate traffic flow between different networks: IP to PSTN to IP, and IP to IP.  Historically that traffic has been mostly voice, where minutes and costs count because that world has been mostly rate-based.  Increasingly SBCs are being used to manage and facilitate "sessions" of any type of traffic across an array of public and private networks, be it voice, data or video.  The reasons are manifold, including security, quality of service, cost and new service creation; all things TCP/IP doesn't account for.

Session control is layer 5 to TCP/IP's 4-layer stack.  A couple of weeks ago I pointed out that most internet wonks and bigots deride the OSI framework and feel that the 4-layer TCP/IP protocol stack won the "war."  But here is proof that, as with all wars, the victors typically subsume the best elements and qualities of the vanquished.
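
To make "layer 5" a little less abstract, here is a hypothetical sketch in Python (my own construction, not Oracle's or Acme's actual data model) of the session-level state an SBC keeps that the 4-layer TCP/IP stack has no place for: identity, QoS class and billing fields on top of the raw flow.

  from dataclasses import dataclass, field
  import time, uuid

  @dataclass
  class SessionRecord:
      # Hypothetical fields only, for illustration of session-layer state.
      calling_party: str
      called_party: str
      media_type: str = "voice"           # voice, video or data
      qos_class: str = "best_effort"      # e.g. "guaranteed" vs "best_effort"
      origin_network: str = "IP"          # IP, PSTN or private peer
      terminating_network: str = "PSTN"
      billed_party: str = "calling"       # who pays under the settlement model
      session_id: str = field(default_factory=lambda: uuid.uuid4().hex)
      start_time: float = field(default_factory=time.time)

  rec = SessionRecord(calling_party="+12125551212", called_party="+19735551212")
  print(rec.session_id, rec.qos_class)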

The single biggest hole in the internet and IP worldview is bill and keep.  Bill and keep's origins derive from the fact that most of the overhead in data networks was fixed in the 1970s and 1980s.  The component costs were relatively cheap compared with the mainframe costs being shared, and the recurring transport/network costs were being arbitraged and shared by those protocols.  All the players, or nodes, were known, and users connected via their mainframes.  The PC and ethernet (a private networking/transmission protocol) came along and scaled much later.  So why bother with expensive and unnecessary QoS, billing, mediation and security in layers 5 and 6?

Then along came the break-up of AT&T, and due to dial-1 equal access the Baby Bells responded in the mid to late 1980s with flat-rate, expanded area (LATA) pricing plans to build a bigger moat around their Class 5 monopoly castles (just as AT&T had built 50-mile interconnect exclusion zones in the 1913 Kingsbury Commitment because of the threat of wireless bypass even back then, and just as incumbent broadband monopolies are battling OTT providers like Netflix today).  The nascent commercial ISPs took advantage of these flat-rate zones, invested in channel banks, got local DIDs, and the rest, as they say, is history.  Staying connected all day on a single flat rate was perceived back then as "free."  So the "internet" scaled from this pricing loophole (even as the ISPs received much-needed shelter from vertical integration by the monopoly Bells in the Computer II/III inquiries) and further benefited from WAN competition and the commoditization of transport, which connected all the distributed router networks into seamless regional and national layer 1-2 low-cost footprints even before www, http/html and the browser hit in the early to mid 1990s.  The marginal cost of "interconnecting" these layer 1-2 networks was infinitesimal at best, and therefore bill and keep, or settlement-free peering, made a lot of sense.

But Bill and Keep (B&K) has three problems:

  • It supports incumbents and precludes new entrants
  • It stifles new service creation
  • It precludes centralized procurement and subsidization

With Acme, Oracle can provide solutions to problems two and three, with the smartphone driving the process.  Oracle has Java on 3 billion phones around the globe.  Now imagine a session controller client on each device that can help with application and access management, preferential routing, billing and so on, along with guaranteed QoS and real-time performance metrics and auditing, regardless of what network the device is currently on.  The same holds in reverse in terms of managing "session state" across multiple devices/screens across wired and wireless networks.
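
A toy version of that on-device session controller might look something like the following sketch; the function, field names and numbers are all illustrative assumptions, not any vendor's API:

  def pick_network(networks, min_mbps, max_latency_ms):
      """Return the cheapest visible network that meets the session's QoS floor."""
      eligible = [n for n in networks
                  if n["mbps"] >= min_mbps and n["latency_ms"] <= max_latency_ms]
      return min(eligible, key=lambda n: n["cost_per_gb"]) if eligible else None

  # Hypothetical networks the device can currently see.
  visible = [
      {"name": "home_wifi", "mbps": 30, "latency_ms": 20, "cost_per_gb": 0.00},
      {"name": "lte",       "mbps": 12, "latency_ms": 60, "cost_per_gb": 8.00},
  ]
  print(pick_network(visible, min_mbps=2, max_latency_ms=100))  # -> the wifi entry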

The alternative to B&K is what I refer to as balanced settlements.  In traditional telecom parlance, instead of just calling-party-pays, settlements can be both called- and calling-party-pays, and they are far from the regulated monopoly origination/termination tariffs of old.  Their pricing (transaction fees) will reflect marginal costs and therefore stimulate and serve marginal demand.  As a result, balanced settlements provide a way for rapid, coordinated rollout of new services and infrastructure investment across all layers and boundary points.  They provide the price signals that IP does not.

Balanced settlements clear supply and demand north-south between the upper (application) and lower (switching, transport and access) layers, as well as east-west from one network, application or service provider to another.  Major technological shifts in the network layers, like OpenFlow, software-defined networking (SDN) and network function virtualization (NFV), can then develop rapidly.  Balanced settlements will reside in competitive exchanges evolving out of today's telecom tandem networks, confederations of service provider APIs, and the IP world's peering fabric, driven by big data analytical engines and advertising exchanges.

Perhaps most importantly, balanced settlements enable subsidization, or procurement of edge access from the core.  Large companies and institutions can centrally drive and pay for high-definition telework, telemedicine, tele-education and similar solutions across a variety of access networks (fixed and wireless).  The telcos refer to this as guaranteed quality of service leading to "internet fast lanes."  Enterprises will do it to further digitize and economize their own operations and distribution reach (HD collaboration and the internet of things), just as 800 numbers, prepaid calling cards, VPNs and the internet itself did in the 1980s-90s.  I call this process marrying the communications event to the commercial/economic transaction, and it results in more revenue per line or subscriber than today's edge subscription model.  As well, as more companies and institutions come to rely on the networks, they will demand backups, insurance and redundancy, ensuring continuous investment in multiple layer 1 access networks.
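
For the sake of illustration, here is one hypothetical way a balanced settlement could be computed: both ends share a session's transaction fee in proportion to each network's marginal cost, and a core sponsor (say, an enterprise paying for telemedicine sessions) can pick up the edge user's share.  Again, this is my construct, not an existing standard:

  def settle(fee, orig_marginal_cost, term_marginal_cost, sponsor=None):
      """Split a per-session fee between calling and called sides by marginal cost."""
      calling_share = fee * orig_marginal_cost / (orig_marginal_cost + term_marginal_cost)
      called_share = fee - calling_share
      if sponsor:
          # Core subsidization: the sponsor pays the edge (called) party's share.
          return {"calling_party": calling_share, sponsor: called_share, "called_party": 0.0}
      return {"calling_party": calling_share, "called_party": called_share}

  print(settle(0.02, orig_marginal_cost=0.004, term_marginal_cost=0.012,
               sponsor="enterprise_sponsor"))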

Along with open or shared access in layer 1 (something we should have agreed to in principle back in 1913 and again in 1934, since governments provide service providers a public right of way or frequency), balanced settlements can also be an answer to inefficient universal service subsidies.  Three trends will drive this.  First, efficient loading of networks and demand for ubiquitous high-definition services by mobile users will require inexpensive, uniform access everywhere, with concurrent investment in high-capacity fiber and wireless end-points.  Second, urban demand will naturally pay for rural demand in the process, due to societal mobility.  Third, the high-volume, low-marginal-cost user (the enterprise or institution) will amortize and pay for the low-volume, high-marginal-cost user to be part of its "economic ecosystem," thereby reducing the digital divide.

Related Reading:

TechZone 360 Analyzes the Deal

Acme Enables Skype Bandwidth On Demand
 

Posted by: Michael Elling AT 10:05 am   |  Permalink   |  0 Comments  |  Email
 
