SpectralShifts Blog 
Thursday, November 23 2017

tl;dr: Both sides are wrong in the net neutrality debate. We need to look at networks and internetworking differently; otherwise the digital and wealth divides will continue and worsen. A new way of understanding networks and network theory is called equilibrism.

The term "net neutrality" is a contrivance at best and a farcical notion at worst. That's why both sides can be seen as right and wrong in the debate. Net neutrality was invented in the early 2000s to account for the loss of equal access (the basis for how the internet really started) and for the Telecom Act of 1996's failure to address critical interconnection issues across the converging ecosystems of voice, video and data on wired and wireless networks.

The reality is that networks are multi-layered (horizontal) and multi-boundaried (vertical) systems. Supply and demand must clear across all of these demarcation points. Sounds complex? It is (see here for an illustration). Furthermore, imbalance in one area exerts pressure in another. Now add to that single-network concept an element of "inter-networking" and the complexity grows exponentially. The inability to apply net neutrality consistently across these frameworks is its biggest weakness.

That's the technology and economic perspective.

Now let's look at the socio-economic and political perspective. Networks are fundamental to everything around and within us, both as physical and as mental models. They explain markets, the theory of the firm, all of our laws, social interaction, even the biology and chemistry of our bodies and the physical laws that govern the universe. Networks reduce risk for the individual actors/elements of the network. And networks exhibit the same tendencies, be they one-way or two-way, real-time or store-and-forward.

These tendencies include value that grows geometrically with the number and nature of transactions/participants and gets captured at the core and top of the network, while costs grow more or less linearly (albeit with marginal differences) and are mostly borne at the bottom and edge. The costs can be physical (as in a telecom or cable network) or virtual (as in a social media network, where the cost is higher anxiety, loss of privacy, etc.). To be sustainable and generative*, networks need some conveyance of value from the core and top to the costs borne at the bottom and edge. I refer to this as equilibrism. Others call it universal service. There is a difference.

(* — If we don't have some type of equilibrism, the tendency in all networks is toward monopoly or oligopoly; which is basically what we see under neo-liberalism and in early forms of capitalism before the trust-busters and Keynesian policies.)

To understand the difference between universal service and equilibrism within this "natural law of networks" we have to throw in two other immutable, natural outcomes evident everywhere: pareto and standard (normal) distributions. The former easily shows the geometric (or outsized) value capture referred to above. Standard distributions (bell curves) reflect extreme differences in supply and demand at the margin. Once we factor both of these in, we find that networks can never tend completely toward full centralization or full decentralization and remain sustainable. So the result is a constant push/pull of tradeoffs horizontally in the framework (between core and edge), facilitated by tradeoffs vertically (between upper and lower layers).
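As a rough illustration (a minimal sketch with assumed parameters, not data from any study), sampling the two distributions side by side shows the outsized capture at one end and the clustering of marginal actors around the mean at the other:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pareto-distributed "value capture" across 10,000 network participants
# (shape a=1.16 roughly corresponds to the classic 80/20 rule).
value = np.sort(rng.pareto(a=1.16, size=10_000))
top_20pct_share = value[-2_000:].sum() / value.sum()

# Normally distributed supply/demand differences among marginal actors.
marginal = rng.normal(loc=0.0, scale=1.0, size=10_000)

print(f"Value captured by the top 20%: {top_20pct_share:.0%}")
print(f"Marginal actors within +/-1 sigma of the mean: {(abs(marginal) < 1).mean():.0%}")
```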

For example, a switch in layer 3 offsets the cost and complexity of layers 1 and 2 (a star topology versus a full mesh; quantified below). This applies to distance and density, and to how the upper layers of the stack affect the lower layers. For a given set of demand, supply can be either centralized or distributed (i.e. cloud vs OpenFog or MEC; or centralized payment systems like Visa vs blockchain). Many people making the case for fully distributed or fully centralized systems seemingly do not understand these horizontal and vertical tradeoffs.
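The mesh-versus-star tradeoff is easy to quantify (a back-of-the-envelope sketch; the node counts are arbitrary): a full mesh of n nodes needs n(n-1)/2 links, while a star through a central switch needs only n-1.

```python
def full_mesh_links(n: int) -> int:
    # Every node connects directly to every other node.
    return n * (n - 1) // 2

def star_links(n: int) -> int:
    # Every node connects once to a central switch.
    return n - 1

for n in (10, 100, 1000):
    print(f"{n:>5} nodes: mesh={full_mesh_links(n):>7} links, "
          f"star={star_links(n):>4} links")
```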

The bottom line: a network (or series of internetworks) that is fully centralized or decentralized is unsustainable, and a network (or internetworks) where there is no value/cost (re)balancing is also unsustainable. Much of what we are seeing in today's centralized or monopolistic "internet", and the resulting call for decentralization (blockchain/crypto), is evidence of these conclusions. The confusion or misunderstanding lies in the fact that the network is nothing without its components or actors, and the individual actors are nothing without the network. Which is more important? Both.

Now add back in the "inter-networking" piece and it becomes clear that no simple answer, like net neutrality, solves how we make networks and ecosystems sustainable; especially when supply depreciates rapidly, costs are in a near constant state of (relative or absolute) decline, and demand is infinite and infinitely diverse. These parameters have always been around (we call them technology and market differentiation/specialization), but they've become very apparent in the last 50 years with the advent of computers and the digital networked ecosystems that dominate all aspects of our lives. Early signs abound ever since the first digital network (the telegraph) was invented**, but we just haven't recognized them until now, with a monopolized internet at global scale; one that was intended to be free and open and is anything but. So there exists NO accepted economic theory explaining or solving the problems of monopoly information networks, which have been debated for well over 100 years, since Theodore Vail promised "One Policy, One System, Universal Service" and the US government officially blessed information monopolies.

(** — Digital impacts arguably began with the telegraph 170 years ago, through its effect on the movement of goods and people (e.g. railroads) and on stock markets (e.g. the ticker tape). The velocity of information increased geometrically. Wealth and information access divides became enormous by the late 1800s.)

"Equilibrism" may be THE answer that provides a means towards insuring universal access in competitive digital networked ecosystem. Equilibrism holds that settlements across and between the boundaries and layers are critical and that greater network effects occur the more interconnected the networks are down towards the bottom and edges of the ecosytems. Settlements serve two primary functions. First they are price signals. As such they provide incentives and disincentives between actors (remember the standard distribution between all the marginal actors referred to above?). Second they provide a mechanism for value conveyance between those who capture the value and those who bear the costs (remember the pareto distribution above?). In the informational stackas we’ve illustrated it, settlements exist north-south (between app and infrastructure layers) and east-west (between actors/networks). But a lack of these settlements has resulted in extreme misshaping of both the pareto optimum and normal distribution.

We find very little academic work on settlements*** and, in particular, on the proper level for achieving sustainability and generativity. The internet is a "settlement free" model; it therefore lacks incentives/disincentives and in the process makes risk one-sided. Also, without settlements a receiving party cannot subsidize the sender (say goodbye to 800-like services, which scaled the competitive voice WAN in the 1980s and 90s and paved the way for the internet to scale). Lastly, and much more importantly than the recent concerns over security, privacy and demagoguery, the internet does not facilitate universal service.

(*** — Academic work on "network effects", on the other hand, has seen a surge over the last 40 years, since the concept was derived at Bell Labs in 1974 by an economist studying the failure of the famous Picturephone of 1963. Of course a lot of this academic work is flawed (and limited) without an understanding of the role of settlements.)

Unlike the winner-takes-all model (the monopoly outcome referred to above), equilibrism points to a new win/win model where supply and demand are cleared much more efficiently and ecosystems are more sustainable and generative. So where universal service is seen as a "taking", a tax on those who have in favor of those who don't, addressing only portions of the above two curves, equilibrism is fundamentally about "giving" to those who have, albeit slightly less than to those who don't. Simply put, equilibrism is at work when the larger actor pays a slightly higher settlement than the smaller actor, but in return still gets a relatively larger benefit due to network effects. Think of it in gravity terms between two masses.
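A toy model makes the asymmetry concrete (this is my own sketch; the Metcalfe-style value function, the proportional value capture, and the fee level are all assumptions, not settled theory): the larger actor pays the larger settlement, yet both come out ahead of going it alone.

```python
def network_value(users: int) -> float:
    # Metcalfe-style approximation: network value grows with users^2.
    return float(users ** 2)

def net_gain(own: int, other: int, fee_per_user: float) -> float:
    """Value an actor captures from interconnecting (proportional to its
    share of the combined network), net of the settlement flow. The
    larger actor pays; the smaller actor receives."""
    captured = network_value(own + other) * own / (own + other)
    settlement = fee_per_user * (own - other)  # positive = pays, negative = receives
    return captured - network_value(own) - settlement

big, small = 1000, 100
print("Large actor nets:", net_gain(big, small, fee_per_user=10.0))  # 91000.0
print("Small actor nets:", net_gain(small, big, fee_per_user=10.0))  # 109000.0
# Both are positive: win/win, even though the larger actor funds the
# transfer that keeps value from concentrating entirely at the core.
```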

We are somehow brainwashed into thinking winner-takes-all is a natural outcome. In fact it is not. Nature is about balance and evolution; a constant state of disequilibria striving for equilibrium. It's not survival of the fittest; it's survival of the most adaptable. Almost invariably, adaptation comes from the smaller actor or the new entrant into the ecosystem. And that, ultimately, is what drives sustainability and advancement; not unfettered winner-takes-all competition. Change should be embraced, not rejected, because it is constantly occurring in nature. That's how we need to begin to think about all of our socio-economic and political institutions. This will take time, since it cuts against what we have believed to be true since the dawn of mankind. If you don't think so, take a refresher course in Plato's Republic.

If we don’t change our thinking and approach, at best digital and wealth divides will continue and we’ll convulse from within. At worst, outside forces, like technology (AI), other living organisms (contagions) and matter (climate change) will work against us in favor of balance. That’s just natural.

Related Reading:

Why Tech is Evil, NYT

Silicon Valley's erasing our individuality

Neoliberalism's ideology problem

Posted by: Michael Elling AT 05:30 am   |  Permalink   |  0 Comments  |  Email
Wednesday, August 07 2013

Debunking The Debunkers

The current debate over the state of America's broadband services and the future of the internet is like a 3-ring circus, or 3 different monarchists debating democracy. In other words, an ironic and tragically humorous debate among monopolists, be they ultra-conservative capitalists, free-market libertarians, or statist liberals. Their conclusions do not provide a cogent path to solving the single biggest socio-political-economic issue of our time, due to pre-existing biases, incorrect information, or incomplete analysis. Last week I wrote about Google's conflicts and paradoxes on this issue. Over the next few weeks I'll expand on this perspective, but today I'd like to respond to a Q&A, Debunking Broadband's Biggest Myths, posted on Commercial Observer, a NYC publication that deals mostly with real estate issues and recently began a section called Wired City dealing with a wide array of issues confronting "a city's" #1 infrastructure challenge. Here's my debunking of the debunker.

To put this exchange into context, the US led the digitization revolutions of voice (long-distance, touchtone, 800, etc.), data (the internet, frame relay, ATM, etc.) and wireless (10-cent pricing, digital messaging, etc.) because of pro-competitive, open access policies in long-distance, data over dial-up, and wireless interconnect/roaming. If Roslyn Layton did not conveniently forget these facts, or does not understand both the relative and absolute impacts on price and infrastructure investment, she would answer the following questions differently:

Real Reason/Answer: Our bandwidth is 20-150x overpriced on a per-bit basis because we disconnected from Moore's and Metcalfe's laws 10 years ago: first the Telecom Act, then special access "de"regulation, then Brand X shutting down equal access for broadband. This rate differential shows up in the discrepancy between the rates we pay in NYC and what Google charges in KC, as well as in the difference in performance/price between 4G and wifi. It is great that Roslyn can pay $3-5 a day at Starbucks. Most people can't (and shouldn't have to) just for a cup of Joe that you can make at home for 10-30 cents.
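The rough magnitude of that claim follows from compounding alone (a back-of-the-envelope sketch; the 18-24 month halving periods are the usual Moore's-law assumptions, not figures from the Q&A): if cost per bit halves on that schedule while retail prices stay flat, a decade of disconnect produces a 30-100x gap.

```python
# If underlying cost per bit halves every `halving_months`, a flat retail
# price diverges from cost by 2^(months / halving_months) over `years`.
years = 10
for halving_months in (18, 24):
    gap = 2 ** (years * 12 / halving_months)
    print(f"Halving every {halving_months} months -> ~{gap:.0f}x gap after {years} years")
```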

Real Reason/Answer: Because of their vertical business models, carriers are not well positioned to generate high ROI on rapidly depreciating technology and inefficient operating expense at every layer of the "stack", across geographically or market-segment constrained demand. This is the real legacy of inefficient monopoly regulation. Doing away with regulation, or deregulating the vertical monopoly, doesn't work. Both the policy and the business model need to be approached differently. Blueprints exist from the 80s-90s that can help us restructure our inefficient service providers. Basically, any carrier that is granted a public ROW (right of way) or frequency should be held to an open access standard in layer 1. The quid pro quo is that end-points/end-users should also have equal or unhindered access to that network within (economic and aesthetic) reason. This simple regulatory fix solves 80% of the problem, as network investments scale very rapidly, become pervasive, and can be depreciated quickly.

Real Reason/Answer: Quasi-monopolies exist in video for the cable companies and in coverage/standards in frequencies for the wireless companies. These scale economies derived from pre-existing monopolies or duopolies granted by, and maintained to a great degree by, the government. The only open or equal access we have left from the 1980s-90s (the drivers that got us here) is wifi (802.11), a shared and reusable medium with the lowest cost/bit of any technology on the planet as a result. But other generative and scalable standards were developed in the US or with US companies at the same time, just like the internet protocol stacks: mobile OSs, 4G LTE (based on CDMA/OFDM technology), OpenStack/OpenFlow; standards that now rule the world. It's very important to distinguish which of these are truly open and which are not.

Real Reason/Answer: The third of the population who don't have or use broadband lack it as much because of context and usability (community/ethnicity, age, income level) as because of cost and awareness. If we had balanced settlements in the middle layers, based on transaction fees and pricing that reflect competitive marginal cost, corporate and centralized buyers could subsidize access and make it freely available everywhere for everyone. Putting aside the ineffective debate between bill-and-keep and 2-sided pricing models, implementing balanced settlement exchange models will solve the problem of universal HD tele-work, education, health, government, etc. We learned in the 1980s-90s from 800 services and internet advertising that competition can lead to free, universal access to digital "economies". This is the other 20% of the solution to the regulatory problem.
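In 800-style terms (a hypothetical sketch; the function, sponsor name and per-GB rate are mine, purely illustrative): the receiving/sponsoring party picks up the charge, so access is free to the end user and funded by the centralized buyer.

```python
def rate_session(bytes_used: int, sponsor=None, price_per_gb: float = 0.02) -> dict:
    """800-style reverse charging: if a sponsor (enterprise, school,
    clinic) claims the session, the end user pays nothing."""
    charge = bytes_used / 1e9 * price_per_gb
    return {"payer": sponsor or "end_user", "charge_usd": round(charge, 4)}

print(rate_session(2_500_000_000, sponsor="acme_health"))  # sponsored telemedicine visit
print(rate_session(2_500_000_000))                         # unsponsored: user pays
```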

Real Reason/Answer: The real issue here is that America led the digital information revolution prior to 1913 because it was a relatively open and competitive democracy, then took the world into 70 years of monopoly dark ages, finally breaking the shackles of monopoly in 1983 and leading the modern information revolution through the 80s-90s. The US has now fallen behind in relative and absolute terms in the lower layers due to consolidation and remonopolization. Only the vestiges of pure competition from the 80s-90s, the horizontally scaled "data" and "content" companies like Apple, Google, Twitter and Netflix (and many, many more), are pulling us along. The vertical monopolies stifle innovation and the generative economic activity we saw in those two decades. The economic growth numbers and fiscal deficit do not lie.

Posted by: Michael Elling AT 08:02 am   |  Permalink   |  0 Comments  |  Email
Wednesday, July 31 2013

Is Google Trying to Block Web 4.0?

Back in 1998 I wrote, “if you want to break up the Microsoft software monopoly then break up the Baby Bell last-mile access monopoly.”  Market driven broadband competition and higher-capacity digital wireless networks gave rise to the iOS and Android operating systems over the following decade which undid the Windows monopoly.  The 2013 redux to that perspective is, once again, “if you want to break up the Google search monopoly then break up the cable/telco last mile monopolies.” 

Google is an amazing company, promoting the digital pricing and horizontal service provider spirit more than anyone. But Google is motivated by profit and will seek to grow that profit as best it can, even if doing so runs contrary to the founding principles and market conditions that fueled its success (aka net neutrality or equal access). Now that Google is getting into the lower layers in the last mile, it is running into paradoxes and conflicts over net neutrality/equal access and is in danger of becoming just another vertical monopoly. (Milo Medin provides an explanation in the 50th minute of this video, but it is self-serving, disingenuous and avoids confronting the critical issue for networks going forward.)

Contrary to many people's beliefs, the upper and lower layers have always been inextricably interdependent, and nowhere was this more evident than with the birth of the internet out of the flat-rate dial-up networks of the mid to late 1980s (a result of dial-1 equal access). The nascent ISPs that scaled in the 1980s on layer 1-2 data bypass networks were likewise protected by Computers II-III (aka net neutrality) and benefited from competitive (WAN) transport markets.

Few realize or accept that the genesis of Web 1.0 (W1.0) was the break-up of AT&T in 1983. Officially birthed in 1990, it was an open, 1-way, store-and-forward database lookup platform on which 3 major applications/ecosystems scaled beginning in late 1994 with the advent of the browser: communications (email and messaging), commerce, and text and visual content. Even though everything was narrowband, W1.0 began the inexorable computing collapse back to the core, aka the cloud (4 posts on the computing cycle and its relationship to networks). The fact that it was narrowband didn't prevent folks like Mark Cuban and Jeff Bezos from envisioning and selling a broadband future 10 years hence. Regardless, W1.0 started collapsing in 1999 as it ran smack into an analog dial-up brick wall. Google hit the big time that year and scaled into the early 2000s by following KISS and freemium business model principles. Ironically, Google's chief virtue was taking advantage of W1.0's primary weakness.

Web 2.0 grew out of the ashes of W1.0 in 2002-2003. W2.0 both resulted from and fueled the broadband (BB) wars starting in the late 1990s between the cable (offense) and telephone (defense) companies. BB penetration reached 40% in 2005, a critical tipping point for the network effect, exactly when YouTube burst on the scene. Importantly, BB (which doesn't have equal access, under the guise of "deregulation") wouldn't have occurred without W1.0 and the above two forms of equal access in voice and data during the 1980s-90s. W2.0 and BB were mutually dependent, much like the hardware/software Wintel model. BB enabled the web to become rich-media, mostly 2-way and interactive. Rich-media blogging, commenting, user-generated content and social media started during the W1.0 collapse and began to scale after 2005.

"The Cloud" also first entered people's lingo during this transition. Google simultaneously acquired YouTube in the upper layers to scale its upper- and lower-layer presence and traffic, and vertically integrated and consolidated the ad exchange market in the middle layers during 2006-2008. Prior to that, perhaps anticipating a lack of competitive markets due to "deregulation" of special access, or perhaps sensing its own potential WAN-side scale, the company secured low-cost fiber rights nationwide in the early 2000s following the CLEC/IXC bust, and continued throughout the decade as it built its own layer 2-3 transport, storage, switching and processing platform. Note, the 2000s was THE decade of both vertical integration and horizontal consolidation across the board, aided by these "deregulatory" political forces. (Second note: "deregulatory" should be interpreted in the most sarcastic and insidious manner.)

Web 3.0 began officially with the iPhone in 2007. The smartphone enabled 7x24, real-time access and content generation, but it would not have scaled without wifi's speed, as 3G wireless networks were at best late-1990s-era BB speeds and didn't become geographically ubiquitous until the late 2000s. The combination of wifi (high speeds when stationary) and 3G (connectivity when mobile) was enough, though, to offset any degradation of the user experience. Again, few appreciate or realize that W3.0 resulted from two additional forms of equal access: cellular A/B interconnect from the early 1980s (extended to new digital PCS entrants in the mid 1990s) and wifi's shared spectrum. One can argue that Steve Jobs single-handedly resurrected equal access with his AT&T agreement ensuring agnostic access for applications. Surprisingly, this latter point was not highlighted in Isaacson's excellent biography. Importantly, we would not have had the smartphone revolution were it not for Jobs' equal access efforts.

W3.0 proved that real-time, all-the-time "semi-narrowband" (given the contexts and constraints of the smartphone interface) trumped store-and-forward "broadband" on the fixed PC for 80% of people's "web" experience (connectivity and interaction mattered more than speed), as PC makers only realized by the late 2000s. Hence the death of the Wintel monopoly, not by government decree, but by market forces 10 years after the first anti-trust attempts. Simultaneously, the cloud became the accepted processing model, coming full circle back to the centralized mainframe model of circa 1980, before the PC and the slow-speed telephone network led to its relative demise. This circularity further underscores not only the interplay between upper and lower layers but also that between edge and core in the InfoStack. Importantly, Google acquired Android in 2005, well before W3.0 began, correctly foreseeing that small screens and mobile data networks would foster applications and attendant ecosystems that would intrude on browser usage and its advertising (near) monopoly.

Web 4.0 is developing as we speak, and no one is driving it, and attempting to influence it with WAN-side scale, more than Google. W4.0 will be a full-duplex, 2-way, all-the-time, high-definition, application-driven platform that knows no geographic or market-segment boundaries. It will be engaging and interactive on every sensory front; not just with what is in our immediate presence, but everywhere (aka the internet of things). With Glass, Google is already well on its way to developing and dominating this future ecosystem. With KC Fiber, Google is illustrating how it should be priced and what speeds will be necessary. As W4.0 develops, the cloud will extend to the edge. Processing will be both centralized and distributed depending on the application and the context. There will be a constant state of flux between layers 1 and 3 (transport and switching), between upper and lower layers, between software and hardware at every boundary point, and between core and edge processing and storage. It will dramatically empower the end-user and change our society more fundamentally than what we've witnessed over the past 30 years. Unfortunately, regulators have no gameplan for how to model or develop policy around W4.0.

The missing pieces for W4.0 are fiber-based and super-high-capacity wireless access networks in the lower layers, settlement exchanges in the middle layers, and cross-silo ecosystems in the upper layers. Many of these elements are developing naturally in the market: big data, hetnets, SDN, OpenFlow, open OS's like Android and Mozilla, etc. Google's strategy appears consistent and well coordinated to tackle these issues; if anything, far ahead of others. But its vertically integrated service provider model and its stance on net neutrality in KC conflict with the principles that have led to its success so far.

Google is buying into the vertical monopoly mindset to preserve its profit base instead of teaching regulators and the markets about the virtues of open or equal access across every layer and boundary point (something clearly missing from Tim Wu's and Bob Atkinson's definitions of net neutrality). In the process it is impeding the development of W4.0. Governments could solve this problem by simply conditioning any service provider's access to a public right of way or frequency on equal access in layers 1 and 2, along with a quid pro quo that every user has a right to access unhindered by landlords and local governments, within economic and aesthetic reason. (The latter is a bone we can toss to all the lawyers who will be looking for new work once regulations get simpler.) Google and the entire market would benefit tremendously from this approach. Who will get there first? The market (Google, or MSFT/AAPL if the latter are truly hungry, visionary and/or desperate) or the FCC? Originally hopeful, I've become less sure of the former over the past 12 months. So we may be reliant on the latter.

Related Reading:

Free and Open Depend on Where You Are in Google's InfoStack

InfoWorld defends Google based on its interpretation of NN; of which there are 4

DSL Reports thinks Google is within its rights because it expects to offer enterprise service. Only it is not, and heretofore had not mentioned it.

A good article on Gizmodo about the state of the web and what "we" are giving up to Google

The datacenter as an "open access" boundary.  What happens today in the DC will happen tomorrow elsewhere.

Posted by: Michael Elling AT 10:57 am   |  Permalink   |  4 Comments  |  Email
Wednesday, February 06 2013

Is IP Growing UP? Is TCPOSIP the New Protocol Stack? Will Sessions Pay For Networks?

Oracle's purchase of leading session border controller (SBC) vendor Acme Packet is a tiny seismic event in the technology and communications (ICT) landscape. Few notice the potential for much broader upheaval ahead.

SBCs, which have been around since 2000, facilitate traffic flow between different networks: IP to PSTN to IP, and IP to IP. Historically that traffic has been mostly voice, where minutes and costs count because that world has been mostly rate-based. Increasingly SBCs are being used to manage and facilitate any type of traffic "session" across an array of public and private networks, be it voice, data, or video. The reasons are many-fold: security, quality of service, cost, and new service creation; all things TCP-IP doesn't account for.

Session control is layer 5 to TCP-IP's 4-layer stack. A couple of weeks ago I pointed out that most internet wonks and bigots deride the OSI framework and feel that the 4-layer TCP-IP protocol stack won the "war". But here is proof that, as with all wars, the victors typically subsume the best elements and qualities of the vanquished.

The single biggest hole in the internet and IP world view is bill and keep. Bill and keep's origins derive from the fact that most of the overhead in data networks was fixed in the 1970s and 1980s. The component costs were relatively cheap compared with the mainframe costs being shared, and the recurring transport/network costs were arbitraged and shared by those protocols. All the players, or nodes, were known, and users connected via their mainframes. The PC and ethernet (a private networking/transmission protocol) came along and scaled much later. So why bother with expensive and unnecessary QoS, billing, mediation and security in layers 5 and 6?

Then along came the break-up of AT&T. Due to dial-1 equal access, the Baby Bells responded in the mid to late 1980s with flat-rate, expanded area (LATA) pricing plans to build a bigger moat around their Class 5 monopoly castles (just as AT&T had built 50-mile interconnect exclusion zones into the 1913 Kingsbury Commitment due to the threat of wireless bypass even back then, and just like the battles OTT providers like Netflix are having with incumbent broadband monopolies today). The nascent commercial ISPs took advantage of these flat-rate zones, invested in channel banks, got local DIDs, and the rest, as they say, is history. Staying connected all day on a single flat rate back then was perceived as "free". So the "internet" scaled from this pricing loophole (even as the ISPs received much-needed shelter from vertical integration by the monopoly Bells in Computers 2/3) and further benefited from WAN competition and the commoditization of transport, which connected all the distributed router networks into seamless regional and national layer 1-2 low-cost footprints even before www and http/html and the browser hit in the early to mid 1990s. The marginal cost of "interconnecting" these layer 1-2 networks was infinitesimal at best, and therefore bill and keep, or settlement-free peering, made a lot of sense.

But Bill and Keep (B&K) has three problems:

  • It supports incumbents and precludes new entrants
  • It stifles new service creation
  • It precludes centralized procurement and subsidization

With Acme, Oracle can provide solutions to problems two and three, with the smartphone driving the process. Oracle has Java on 3 billion phones around the globe. Now imagine a session controller client on each device that can help with application and access management, preferential routing, billing, guaranteed QoS, and real-time performance metrics and auditing; regardless of what network the device is currently on. The same holds in reverse for managing "session state" across multiple devices/screens and across wired and wireless networks.
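As a thought experiment (my own sketch; the class and its fields are hypothetical, not Oracle's or Acme's actual API), such a client might track each session's network, QoS target and accounting record regardless of the underlying access network:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Session:
    app: str                 # application generating the traffic
    network: str             # current access network (e.g. "lte", "wifi")
    qos_target_ms: int       # latency budget negotiated for this session
    started: float = field(default_factory=time.time)
    bytes_sent: int = 0

class SessionControllerClient:
    """Hypothetical on-device session controller: tracks sessions across
    networks and emits accounting records for settlement/billing."""
    def __init__(self) -> None:
        self.sessions: dict[str, Session] = {}

    def open(self, sid: str, app: str, network: str, qos_target_ms: int) -> None:
        self.sessions[sid] = Session(app, network, qos_target_ms)

    def handoff(self, sid: str, new_network: str) -> None:
        # Session state survives a move between networks (e.g. lte -> wifi).
        self.sessions[sid].network = new_network

    def close(self, sid: str) -> dict:
        s = self.sessions.pop(sid)
        # Accounting record a settlement exchange could rate and bill.
        return {"app": s.app, "network": s.network,
                "duration_s": time.time() - s.started, "bytes": s.bytes_sent}
```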

The alternative to B&K is what I refer to as balanced settlements. In traditional telecom parlance, instead of just calling-party-pays, both called and calling parties pay; and these settlements are far from the regulated monopoly origination/termination tariffs. Their pricing (transaction fees) will reflect marginal costs and therefore stimulate and serve marginal demand. As a result, balanced settlements provide a way for rapid, coordinated rollout of new services and infrastructure investment across all layers and boundary points. They provide the price signals that IP does not.
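A minimal sketch of the mechanics (my own toy model; the fee split and the marginal-cost figure are assumptions): both parties to a session contribute to a settlement priced at marginal cost, so the price signal exists but stays small.

```python
def balanced_settlement(bits: int, marginal_cost_per_gb: float,
                        caller_share: float = 0.6) -> dict:
    """Both called and calling parties pay; the split can tilt toward
    whichever side derives more value (e.g. an advertiser or enterprise)."""
    gb = bits / 8e9
    total = gb * marginal_cost_per_gb
    return {"caller_pays": round(total * caller_share, 6),
            "called_pays": round(total * (1 - caller_share), 6)}

# A 1 GB HD video session at an assumed $0.02/GB marginal cost:
print(balanced_settlement(8_000_000_000, marginal_cost_per_gb=0.02))
# -> {'caller_pays': 0.012, 'called_pays': 0.008}
```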

Balanced settlements clear supply and demand north-south between the upper (application) and lower (switching, transport and access) layers, as well as east-west from one network, application or service provider to another. Major technological shifts in the network layers, like OpenFlow, software defined networks (SDN) and network function virtualization (NFV), can then develop rapidly. Balanced settlements will reside in competitive exchanges evolving out of today's telecom tandem networks, confederations of service provider APIs, and the IP world's peering fabric, driven by big data analytical engines and advertising exchanges.

Perhaps most importantly, balanced settlements enable subsidization or procurement of edge access from the core. Large companies and institutions can centrally drive and pay for high-definition telework, telemedicine and tele-education solutions across a variety of access networks (fixed and wireless). The telcos refer to this as guaranteed quality of service leading to "internet fast lanes." Enterprises will do this to further digitize and economize their own operations and distribution reach (HD collaboration and the internet of things), just like 800, prepaid calling cards, VPNs and the internet itself did in the 1980s-90s. I call this process marrying the communications event to the commercial/economic transaction, and it results in more revenue per line or subscriber than today's edge subscription model. As well, as more companies and institutions increasingly rely on the networks, they will demand backups, insurance and redundancy, ensuring continuous investment in multiple layer 1 access networks.

Along with open or shared access in layer 1 (something we should have agreed to in principle back in 1913 and again in 1934, given that governments grant service providers a public right of way or frequency), balanced settlements can also be an answer to inefficient universal service subsidies. Three trends will drive this. Efficient loading of networks and demand for ubiquitous high-definition services by mobile users will require inexpensive, uniform access everywhere, with concurrent investment in high-capacity fiber and wireless end-points. Urban demand will naturally pay for rural demand in the process, due to societal mobility. And finally, the high-volume, low-marginal-cost user (enterprise or institution) will amortize and pay for the low-volume, high-marginal-cost user to be part of its "economic ecosystem", thereby reducing the digital divide.

Related Reading:

TechZone 360 Analyzes the Deal

Acme Enables Skype Bandwidth On Demand
 

Posted by: Michael Elling AT 10:05 am   |  Permalink   |  0 Comments  |  Email
Wednesday, February 29 2012

Is reality mimicking art? Is Android following the script from Genesis' epochal hit Land of Confusion? Is it a bad dream on this day that happens once every four years? Yes, yes, and unfortunately no. Before I go into a litany of ills besetting the Android market and keeping Apple shareholders very happy, two points: a) I have an HTC Incredible and am a Droid fan, and b) the 1986 hit parodied superpower conflict and inept decisions by global leaders, but presaged the fall of the Berlin Wall and 20+ years of incredible growth, albeit with a good deal of third-world upheaval in the Balkans, Mid-east, and Africa. So maybe there is hope that out of the current state a new world order will arise as the old monopolies are dismantled.

Apple clearly has the digital formula right at present: simplicity, ease of use, performance, and yet, at the same time, unlimited choice and customization. Contrast that with this SNL parody of Verizon and 4G/3G/2G/noG and the Samsung Superbowl ad portraying a wild party. The result is a disturbing trend if you are an Android phone lover. The ecosystem's rate of new technology adoption is slowing even as better technology is being made, because consumers are clearly confused. In the tablet market there is even greater disparity, with Android tablets hardly making a dent in Apple's share.

Yesterday at MWC, Eric Schmidt prognosticated a world where more is better and cheaper; which may be good for Google but not necessarily the best thing for anyone else in the Droid ecosystem, including consumers. Yet at the same time Apple managed to steal the show with its iPad3 announcement. Contrast this with HTC rolling out some awesome phones that will not be available in the US this year because their chip doesn't support 4G.

The answer is not better technology, but better ecosystems. The Droid device vendors should realize this and build a layer of software and standards above Google/ICS to facilitate interoperability across silos (at the individual, device and network level), instead of just depreciating their hardware value by competing on price and features many people do not want. They can then collectively win in residual transaction streams (like collectively synching back to a dropbox), as Apple does.

Examples include standardization and interoperability of free or subsidized wifi offload; over-the-top messaging, voice and other solutions; and the holy grail, mobile payments. Companies like CloudFoundry allow for cross-cloud application infrastructure support, and AppFog and Iron Foundry are pursuing these approaches individually. But just think what would happen if Samsung, HTC, LG and Motorola were to band together, coordinate these approaches, and develop very low-cost balanced payment systems within the Droid ecosystem to promote interoperability and cooperation, counteract Google, and restore some sanity to the market. Carriers (um, battleships?) will not be able to stop this effort and may even welcome it, just as the music industry opened its arms to Apple.

Apple hasn't been an innovator so much as a great design company that understands big market opportunities and what the customer wants. The result is an established order that other industries and their customers clearly prefer. Millennials are too young to know Land of Confusion, but the current decision makers in the Droid ecosystem do, and they should take a lesson from history on this Leap Day. Hopefully we'll wake up in 4 years to a wonderful new world order. Oh, and a Happy 4 Birthdays to everyone present and past who was born on this day.

Related Reading:

Good assessment of and comments on the fragmentation of Android

Is it the people Apple and Google hire?  Maybe, maybe not.

Posted by: Michael Elling AT 09:17 am   |  Permalink   |  0 Comments  |  Email
Sunday, February 26 2012

Wireless service providers (WSPs) like AT&T and Verizon are battleships, not carriers. Indefatigable...and steaming their way to disaster even as the nature of combat around them changes. If over-the-top (OTT) missiles from voice and messaging application providers have started fires on their superstructures, and wifi offload torpedoes from alternative carriers and enterprises have opened cracks in their hulls, then Dropbox bombs are about to score direct hits near their waterlines. The WSPs may well be sunk by new combatants coming out of nowhere with excellent synching and other novel end-user enablement solutions, even as pundits like Tomi Ahonen and others trumpet their glorious future. Full steam ahead.

Instead, WSP captains should shout "all engines stop" and rethink their vertical integration strategies to save their ships. A good start might be to look at where smart VC money is focusing and figure out how they are outfitted at each level to defend against, or incorporate offensively, these rapidly developing new weapons. More broadly, WSPs should revisit the WinTel wars, which are eerily identical to the smartphone ecosystem battles, and see what steps IBM took to save its sinking ship in the early 1990s. One unfortunate condition might be that the fleet of battleships is now so widely disconnected that none has a chance to survive.

The bulls on Dropbox (see the pros and cons behind the story) suggest that increased reliance on cloud storage and synching will diminish reliance on any one device, operating system or network. This is the type of horizontalization we believe will continue to scale and undermine the (perceived) strength of vertical integration at every layer (upper, middle and lower). Extending the sea battle analogy, horizontalization broadens the theatre of opportunity and threat away from the ship itself; exactly what aircraft carriers did for naval warfare.

Synching will allow everyone to manage and tailor their "states", developing greater demand opportunity; something I pointed out a couple of months ago. People's states could be defined a couple of ways, beginning with work, family, and leisure/social across time and distance, and extending to specific communities of (economic) interest. I first started talking about the "value of state" as Chief Strategist at Multex, just as it was being sold to Reuters.

Back then I defined state as the information (open applications, communication threads, etc.) resident on a decision maker's desktop at any point in time that could be retrieved later. Say I cover multiple industries: I am researching biotech in the morning and call someone with a question. Hours later, after lunch meetings, I am working on chemicals when I get the call back with the answer. What's the value of bringing me back automatically to the prior biotech state so I can better and more immediately incorporate and act on the answer? Quite large.
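A minimal sketch of that mechanism (the names and fields are mine, purely illustrative): snapshot the working context when the user switches tasks, then restore it when the relevant callback arrives.

```python
# Hypothetical "value of state" mechanism: snapshot and restore a
# decision maker's working context (open apps, threads, documents).
workspaces: dict[str, dict] = {}

def snapshot(topic: str, open_apps: list[str], notes: str) -> None:
    workspaces[topic] = {"open_apps": open_apps, "notes": notes}

def restore(topic: str) -> dict:
    # Bring the user straight back to the prior context.
    return workspaces[topic]

snapshot("biotech", ["model.xls", "note_draft.doc"], "Called JP re: trial data")
# ...hours later, the biotech callback arrives while working on chemicals:
print(restore("biotech"))
```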

Fast forward nearly 10 years: people are connected 7x24 and check their wireless devices 150x/day on average. How many different states are they in during the day? 5, 10, 15, 20? The application world is just beginning to figure this out. Google, Facebook, Pinterest and others are developing data engines that facilitate "free access" to content and information paid for by centralized procurement; aka advertising. Synching across "states" will provide even greater opportunity to tailor messages and products to consumers.

Inevitably those producers (advertisers) will begin to require guaranteed QoS and availability levels to ensure a good consumer experience. Moreover, because of social media and BYOD, companies today are looking at their employees the same way they look at their consumers. The overall battlefield begins to resemble the 800 and VPN wars of the 1990s, when we had a vibrant competitive service provider market before its death at the hands of the 1996 Telecom Act (read this critique and another that questions the Bells' unnatural monopoly). Selling open, low-cost, widely available connectivity bandwidth into this advertising battlefield can give WSPs profit on every transaction/bullet/bit across their networks. That is the new "ship of state", and it takes the battle elsewhere. Some call this dumb pipes; I call it a smart strategy to survive being sunk.

Related Reading:

John Mahoney presents state as representing content and context

Smartphone users' complaints about speed rise 50% over voice problems

Posted by: Michael Elling AT 09:54 am   |  Permalink   |  0 Comments  |  Email
Sunday, December 18 2011

 

(The web is dead, long live the apps)

 

Is the web dead? According to George Colony, CEO of Forrester, at LeWeb (Paris, Dec 7-9) it is; and on top of that, social is running out of time, and social is where the enterprise is headed. A lot to digest at once, particularly when Google's Schmidt makes a compelling case for a revolutionary smartphone future that is still in its very, very early stages; courtesy of an ice cream sandwich.

Ok, so let's break all this down. The web, dead? Yes, Web 1.0 is officially dead, replaced by a mobile, app-driven future. Social is saturated? Yes; call it Social 1.0, and Social 2.0 will be utilitarian. Time is money; knowledge is power. Social is really knowledge, and that's where enterprises will take the real-time, always-connected aspect of the smartphone: ice cream sandwich applications that harness internal and external knowledge bases for rapid product development and customer support. Utilitarian. VIVA LA REVOLUTION!

Web 1.0 was a direct outgrowth of the breakup of AT&T, the US's second revolution, 30 years ago, coinciding ironically with the bicentennial end of the first. The bandwidth bottleneck of the 1960s and 1970s (the telephone monopoly tyranny) that gave rise to Microsoft and Intel processing at the edge vs the core began to reverse course in the late 1980s and early 1990s as a result of flat-rate data access and an unlimited universe of things to easily look for (aka Web 1.0). This flat-rate pricing was a direct competitive response by the RBOCs to the competitive WAN (low-cost, metered) threat.

As silicon scaled via Moore's law (the WinTel sub-revolution), digital mobile became a low-cost, ubiquitous reality. The same pricing concepts that laid the foundation for Web 1.0 took hold in the US wireless markets in the late 1990s, courtesy of the software-defined, high-capacity CDMA competitive approach (see pages 34 and 36) developed in the US.

The US is the MOST important market in wireless today and THE reason for its leadership in applications and the smart cloud. (Incidentally, it appears that most of the LeWeb speakers were either American or from US companies.) In the process, the relationship between storage, processing and network has come full circle (as best described by Ben Horowitz). The real question is: will the network keep up? Or are we doomed to repeat the cycle of promise and dashed hopes we witnessed between 1998 and 2003?

The answer is "maybe"; maybe the communications oligopolies will liken themselves to IBM facing the approaching WinTel tsunami in 1987. Will Verizon be the service provider that recognizes the importance of, and embraces, openness and horizontalization? The 700 MHz auctions and recent spectrum acquisitions and agreements with the major cable companies might be a sign that it will.

But a bigger question is whether Verizon will adopt what I call a "balanced payment (or settlement) system" and move away from IP/ethernet's "bill and keep" approach. A balanced payment or settlement system for network interconnection simultaneously solves the issue of new service creation AND paves the way for applications to directly drive, and pay for, network investment. So unlike Web 1.0, where communication networks were reluctantly pulled into a broadband present, maybe they can actually make money directly off the applications, instead of the bulk of the value accruing to Apple and Google.

Think of this as an "800" future on steroids, or super-advertising, where the majority of access is paid for by centralized buyers. It's a future where advertising, product marketing, technology, communications and corporate strategy converge. This is the essence of what Colony and Schmidt are talking about. Will Verizon CEO Seidenberg, or his rivals, recognize this? That would indeed be revolutionary!

Related Reading:
February 2011 Prediction by Tellabs of Wireless Business Models Going Upside Down by 2013

InfoWeek Article on Looming Carrier Bandwidth Shortages


Posted by: Michael Elling AT 09:56 am   |  Permalink   |  0 Comments  |  Email
