SpectralShifts Blog 
Monday, March 17 2014

A New Visionary In Our Midst?

The US has lacked a telecom network visionary for nearly two decades. There have certainly been strong and capable leaders, such as John Malone, who not only predicted but brought about the 500-channel LinearTV model. But there hasn't been someone like Bill McGowan, who broke up AT&T, or Craig McCaw, who first had the vision to build a national, seamless wireless network, countering decades of provincial, balkanized thinking. Both of them fundamentally changed the thinking around public service provider networks.

But with the strong message Masayoshi Son, Sprint's Chairman, delivered to the markets in Washington DC on March 11, the 20-year wait may finally be over. Son did what few have been capable of doing in the 15-20 years since McGowan exited stage left and McCaw sold out to Ma Bell: telling it like it is. The fact is that today's bandwidth prices are 20-150x higher than they should be with current technology.

This is no one's fault in particular, and in fact to most people (even informed ones) all measures of performance-to-price look great compared to 10 or 20 years ago. But, as Son illustrated, things could be much, much better. And he's willing to make a bet on getting the US, the most advanced and heterogeneous society, back to a leadership role with respect to the ubiquity and cost of bandwidth. To get there he needs more scale, and one avenue is to merge with T-Mobile.

There have been a lot of naysayers as to the possibility of a Sprint-T-Mo hookup, including leaders at the FCC.  But don’t count me as one; it needs to happen.  Initially skeptical when the rumors first surfaced in December, I quickly reasoned that a merger would be the best outcome for the incentive auctions.  A merger would eliminate spectrum caps as a deterrent to active bidding and maximize total proceeds.  It would also have a better chance of developing a credible third competitor with equal geographic reach. Then in January the FCC and DoJ came out in opposition to the merger.

In February, though, Comcast announced its much-rumored merger with TW, and Son jumped on the opportunity to take his case for merging to a broader stage. He did so in front of a packed room of 300 communications pundits, press and politicos at the US Chamber of Commerce's prestigious Hall of Flags, a poignant backdrop for his own rags-to-riches story. Son's frank honesty about the state of broadband for the American public versus the rest of the world, as well as about Sprint's own miserable current performance, was impressive. It's a story that resonates with my America's Bandwidth Deficit presentation.

Here are some reasons the merger will likely pass:
  • The FCC can’t approve one horizontal merger (Comcast/TW) that brings much greater media concentration and control over content distribution, while disallowing a merger of two small players (really irritants as far as AT&T and Verizon are concerned).
  • Son has a solid track record of disruption and doing what he says.
  • The technology and economics are in his favor.
  • The vertically integrated service provider model will get disrupted faster, as Sprint will have to think outside the box, partner, and develop ecosystems that few in the telecom industry have thought about before; or if they have, they've been constrained by institutional inertia and hidebound by legacy regulatory and industry silos.

Here are some reasons why it might not go through:

  • The system is fundamentally corrupt.  But the new FCC Chairman is cast from a different mold than his predecessors and is looking to make his mark on history.
  • The FCC shoots itself in the foot over the auctions. Given all the issues and sensitivities around incentive auctions, the FCC wants this first one to succeed, as it will serve as a model for all future spectrum refarming.
  • The FCC and/or DoJ find in the public interest that the merger reduces competition.  But any analyst can see that T-Mo and Sprint do not have sustainable models at present on their own; especially when all the talk recently in Barcelona was already about 5G.

Personally I want Son's vision to succeed because it's the vision I had in 1997 when I originally brought the 2.5-2.6 GHz (MMDS) spectrum to Sprint, and later in 2001 and 2005 when I introduced Telcordia's 8x8 MIMO solutions to their engineers. Unfortunately, past management regimes at Sprint were incapable of understanding the strategies and future vision that went along with those investment and technology pitches. Son has a different perspective (see in particular minute 10 of this interview with Walt Mossberg), informed by his enormous range of investments and a clear understanding of price elasticity and the marginal cost of minutes and bits.

To be successful Sprint’s strategy will need to be focused, but at the same time open and sharing in order to simultaneously scale solutions across the three major layers of the informational stack (aka the InfoStack):

  • upper (application and content)
  • middle (control)
  • lower (access and transport)

This is the challenge for any company that attempts to disrupt the vertically integrated telecom or LinearTV markets; the antiquated and overpriced ones Son says he is going after in his presentation. But the US market is much larger and more robust than the rest of the world, not just geographically, but also from a 360-degree competitive perspective where supply and demand are constantly changing and shifting.

Ultimate success may well rest in the control layer, where Apple and Google have already built up formidable operating systems which control vastly profitable settlement systems across multiple networks. What few realize is that the current IP stack does not provide price signals and settlement systems that clear supply and demand between upper and lower layers (north-south) or between networks (east-west) in the newly converged "informational" stack of 1- and 2-way content and communications.

If Sprint’s Chairman realizes this and succeeds in disrupting those two markets with his strategy then he certainly will be seen as a visionary on par with McGowan and McCaw.

Posted by: Michael Elling AT 09:58 am   |  Permalink   |  0 Comments  |  Email
Wednesday, August 07 2013

Debunking The Debunkers

The current debate over the state of America's broadband services and over the future of the internet is like a 3-ring circus, or 3 different monarchists debating democracy. In other words, it is an ironic and tragically humorous debate between monopolists, be they ultra-conservative capitalists, free-market libertarians, or statist liberals. Their conclusions do not provide a cogent path to solving the single biggest socio-political-economic issue of our time, due to pre-existing biases, incorrect information, or incomplete or wanting analysis. Last week I wrote about Google's conflicts and paradoxes on this issue. Over the next few weeks I'll expand on this perspective, but today I'd like to respond to a Q&A, Debunking Broadband's Biggest Myths, posted on Commercial Observer, a NYC publication that deals mostly with real estate issues and has recently begun a section called Wired City, dealing with a wide array of issues confronting "a city's" #1 infrastructure challenge. Here's my debunking of the debunker.

To put this exchange into context, the US led the digitization revolutions of voice (long-distance, touchtone, 800, etc.), data (the internet, frame relay, ATM, etc.) and wireless (10-cent minutes, digital messaging, etc.) because of pro-competitive, open access policies in long-distance, data over dial-up, and wireless interconnect/roaming. If Roslyn Layton had not conveniently forgotten these facts, or if she understood both the relative and absolute impacts on price and infrastructure investment, then she would answer the following questions differently:

Real Reason/Answer: Our bandwidth is 20-150x overpriced on a per-bit basis because we disconnected from Moore's and Metcalfe's laws 10 years ago: first the Telecom Act, then special access "de"regulation, then Brand X, which shut down equal access for broadband. The rate differential shows up in the discrepancy between the rates we pay in NYC and what Google charges in KC, as well as in the difference in performance/price between 4G and WiFi. It is great that Roslyn can pay $3-5 a day at Starbucks. Most people can't (and shouldn't have to) just for a cup of joe that you can make at home for 10-30 cents.
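
To make the overpricing point concrete, here is a rough back-of-the-envelope comparison in Python. Google Fiber's Kansas City offer ($70/month for 1 Gbps) was widely publicized; the NYC cable figure ($60/month for 25 Mbps) is an illustrative assumption of mine, not a number from the Q&A.

    # Back-of-the-envelope price-per-Mbps comparison (illustrative assumptions).
    def price_per_mbps(monthly_price_usd, downstream_mbps):
        """Monthly price divided by advertised downstream throughput."""
        return monthly_price_usd / downstream_mbps

    offers = {
        "Google Fiber, KC (1 Gbps for $70)":   price_per_mbps(70, 1000),
        "Typical NYC cable (25 Mbps for $60)": price_per_mbps(60, 25),  # assumed figure
    }
    for name, ppm in offers.items():
        print(f"{name}: ${ppm:.2f} per Mbps per month")

    ratio = offers["Typical NYC cable (25 Mbps for $60)"] / offers["Google Fiber, KC (1 Gbps for $70)"]
    print(f"NYC pays roughly {ratio:.0f}x more per bit")  # ~34x, inside the 20-150x range cited above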

Real Reason/Answer: Because of their vertical business models, carriers are not well positioned to generate high ROI on rapidly depreciating technology and inefficient operating expense at every layer of the "stack," across demand constrained by geography and market segment. This is the real legacy of inefficient monopoly regulation. Doing away with regulation, or deregulating the vertical monopoly, doesn't work. Both the policy and the business model need to be approached differently. Blueprints exist from the 80s-90s that can help us restructure our inefficient service providers. Basically, any carrier that is granted a public ROW (right of way) or frequency should be held to an open access standard in layer 1. The quid pro quo is that end-points/end-users should also have equal or unhindered access to that network within (economic and aesthetic) reason. This simple regulatory fix solves 80% of the problem, as network investments scale very rapidly, become pervasive, and can be depreciated quickly.

Real Reason/Answer: Quasi-monopolies exist in video for the cable companies and in coverage and frequency standards for the wireless companies. These scale economies derive from pre-existing monopolies or duopolies granted by, and maintained to a great degree by, the government. The only open or equal access we have left from the 1980s-90s (the drivers that got us here) is WiFi (802.11), which is a shared and reusable medium with the lowest cost/bit of any technology on the planet as a result. But other generative and scalable standards were developed in the US or by US companies at the same time, just like the internet protocol stack, including mobile OSs, 4G LTE (based on CDMA/OFDM technology), and OpenStack/OpenFlow, which now rule the world. It's very important to distinguish which of these are truly open and which are not.

Real Reason/Answer: The third of the population that doesn't have or use broadband abstains as much because of context and usability (community/ethnicity, age or income level) as because of cost and awareness. If we had balanced settlements in the middle layers, based on transaction fees and pricing which reflect competitive marginal cost, we could have corporate and centralized buyers subsidizing the access and making it freely available everywhere for everyone. Putting aside the ineffective debate between bill-and-keep and 2-sided pricing models, and instead implementing balanced settlement exchange models, will solve the problem of universal HD tele-work, education, health, government, etc. We learned in the 1980s-90s from 800 numbers and internet advertising that competition can lead to free, universal access to digital "economies". This is the other 20% of the regulatory solution.

Real Reason/Answer: The real issue here is that America led the digital information revolution prior to 1913 because it was a relatively open and competitive democracy, then took the world into 70 years of monopoly dark ages, finally broke the shackles of monopoly in 1983, and then led the modern information revolution through the 80s-90s. The US has now fallen behind in relative and absolute terms in the lower layers due to consolidation and remonopolization. Only the vestiges of pure competition from the 80s-90s, the horizontally scaled "data" and "content" companies like Apple, Google, Twitter and Netflix (and many, many more), are pulling us along. The vertical monopolies stifle innovation and the generative economic activity we saw in those 2 decades. The economic growth numbers and fiscal deficit do not lie.

Posted by: Michael Elling AT 08:02 am   |  Permalink   |  0 Comments  |  Email
Wednesday, February 06 2013

Is IP Growing Up? Is TCPOSIP the New Protocol Stack? Will Sessions Pay For Networks?

Oracle's purchase of Acme Packet, the leading SBC (session border controller) vendor, is a tiny seismic event in the information and communications technology (ICT) landscape. Few notice the potential for much broader upheaval ahead.

SBCs, which have been around since 2000, facilitate traffic flow between different networks: IP to PSTN to IP, and IP to IP. Historically that traffic has been mostly voice, where minutes and costs count because that world has been mostly rate-based. Increasingly SBCs are being used to manage and facilitate any type of traffic "session" across an array of public and private networks, be it voice, data, or video. The reasons are manifold, including security, quality of service, cost, and new service creation; all things TCP/IP doesn't account for.

Session control is layer 5 to TCP/IP's 4-layer stack. A couple of weeks ago I pointed out that most internet wonks and bigots deride the OSI framework and feel that the 4-layer TCP/IP protocol stack won the "war". But here is proof that, as with all wars, the victors typically subsume the best elements and qualities of the vanquished.

The single biggest hole in the internet and IP worldview is bill and keep. Bill and keep's origins derive from the fact that most of the overhead in data networks was fixed in the 1970s and 1980s. The component costs were relatively cheap compared with the mainframe costs that were being shared, and the recurring transport/network costs were being arbitraged and shared by those protocols. All the players, or nodes, were known, and users connected via their mainframes. The PC and Ethernet (a private networking/transmission protocol) came along and scaled much later. So why bother with expensive and unnecessary QoS, billing, mediation and security in layers 5 and 6?

Then along came the break-up of AT&T. In the mid to late 1980s, in response to dial-1 equal access, the Baby Bells rolled out flat-rate, expanded-area (LATA) pricing plans to build a bigger moat around their Class 5 monopoly castles (just as AT&T had built 50-mile interconnect exclusion zones into the 1913 Kingsbury Commitment because of the threat of wireless bypass even back then, and just as incumbent broadband monopolies are battling OTT providers like Netflix today). The nascent commercial ISPs took advantage of these flat-rate zones, invested in channel banks, got local DIDs, and the rest, as they say, is history. Staying connected all day on a single flat rate back then was perceived as "free". So the "internet" scaled from this pricing loophole (even as the ISPs received much-needed shelter from vertical integration by the monopoly Bells in the Computer Inquiries II/III), and it further benefited from WAN competition and the commoditization of transport, which connected all the distributed router networks into seamless regional and national layer 1-2 low-cost footprints even before the web, HTTP/HTML and the browser hit in the early to mid 1990s. The marginal cost of "interconnecting" these layer 1-2 networks was infinitesimal at best, and therefore bill and keep, or settlement-free peering, made a lot of sense.

But Bill and Keep (B&K) has three problems:

  • It supports incumbents and precludes new entrants
  • It stifles new service creation
  • It precludes centralized procurement and subsidization

With Acme, Oracle can provide solutions to problems two and three, with the smartphone driving the process. Oracle has Java on 3 billion phones around the globe. Now imagine a session controller client on each device that can help with application and access management, preferential routing, billing, etc., along with guaranteed QoS and real-time performance metrics and auditing, regardless of what network the device is currently on. The same holds in reverse in terms of managing "session state" across multiple devices/screens and across wired and wireless networks.

The alternative to B&K is what I refer to as balanced settlements. In traditional telecom parlance, instead of just being calling-party-pays, they can be both called- and calling-party-pays, and they are far from the regulated monopoly origination/termination tariffs. Their pricing (transaction fees) will reflect marginal costs and therefore stimulate and serve marginal demand. As a result, balanced settlements provide a way for rapid, coordinated rollout of new services and infrastructure investment across all layers and boundary points. They provide the price signals that IP does not.

Balanced settlements clear supply and demand north-south between upper (application) and lower (switching, transport and access) layers, as well as east-west from one network, application or service provider to another. Major technological shifts in the network layers like OpenFlow, software-defined networking (SDN) and network functions virtualization (NFV) can then develop rapidly. Balanced settlements will reside in competitive exchanges evolving out of today's telecom tandem networks, confederations of service provider APIs, and the IP world's peering fabric, driven by big data analytical engines and advertising exchanges.

Perhaps most importantly, balanced settlements enable subsidization or procurement of edge access from the core. Large companies and institutions can centrally drive and pay for high-definition telework, telemedicine, tele-education, etc. across a variety of access networks (fixed and wireless). The telcos refer to this as guaranteed quality of service leading to "internet fast lanes." Enterprises will do this to further digitize and economize their own operations and distribution reach (HD collaboration and the internet of things), just like 800 numbers, prepaid calling cards, VPNs and the internet itself did in the 1980s-90s. I call this process marrying the communications event to the commercial/economic transaction, and it results in more revenue per line or subscriber than today's edge subscription model. As well, as more companies and institutions increasingly rely on the networks, they will demand backups, insurance and redundancy, ensuring continuous investment in multiple layer 1 access networks.
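
To make balanced settlements a bit more tangible, here is a minimal sketch, in Python, of a single sponsored, QoS-assured session being cleared between an enterprise buyer in the core and an access network at the edge. The class, the fee split and the rates are hypothetical illustrations of the concept, not any existing exchange's API.

    from dataclasses import dataclass

    @dataclass
    class SessionSettlement:
        """One cleared 'session' in a hypothetical balanced-settlement exchange.

        Instead of bill-and-keep (each network keeps what it bills and owes the
        other side nothing), both the called and calling sides can pay, and a
        sponsor in the core can pick up the edge-access charge.
        """
        session_id: str
        minutes: float
        sponsor_rate: float   # $/min the enterprise sponsor pays for guaranteed QoS
        access_share: float   # fraction of that fee settled to the edge access network
        transit_share: float  # fraction settled to middle-mile/transit providers

        def clear(self) -> dict:
            total = self.minutes * self.sponsor_rate
            return {
                "sponsor_pays": round(total, 4),
                "access_network_receives": round(total * self.access_share, 4),
                "transit_receives": round(total * self.transit_share, 4),
                "exchange_fee": round(total * (1 - self.access_share - self.transit_share), 4),
            }

    # A 30-minute HD telemedicine session, paid for centrally by the clinic (hypothetical rates).
    print(SessionSettlement("hd-visit-001", minutes=30, sponsor_rate=0.05,
                            access_share=0.6, transit_share=0.3).clear())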

Along with open or shared access in layer 1 (something we should have agreed to in principle back in 1913 and again in 1934, given that governments provide service providers a public right of way or frequency), balanced settlements can also be an answer to inefficient universal service subsidies. Three trends will drive this. First, efficient loading of networks and demand for ubiquitous high-definition services by mobile users will require inexpensive, uniform access everywhere, with concurrent investment in high-capacity fiber and wireless end-points. Second, urban demand will naturally pay for rural demand in the process, due to societal mobility. Finally, the high-volume, low-marginal-cost user (enterprise or institution) will amortize and pay for the low-volume, high-marginal-cost user to be part of its "economic ecosystem," thereby reducing the digital divide.

Related Reading:

TechZone 360 Analyzes the Deal

Acme Enables Skype Bandwidth On Demand
 

Posted by: Michael Elling AT 10:05 am   |  Permalink   |  0 Comments  |  Email
Sunday, July 08 2012

Thursday, December 19, 2013 will mark the 100-year anniversary of the Kingsbury Commitment. There are 528 days remaining. Let's plan something special to observe this tragic moment.

In return for universal service, AT&T was granted a "natural monopoly".  The democratic government in the US, one of the few at the time, recognized the virtue of open communications for all and foolishly agreed to Ted Vail's deceptions.  Arguably, this one day changed the course of mankind for 50-70 years.  Who knows what might have been if we had fostered low-cost communications in the first half of the century?

Anyway, when universal service didn't happen (no sh-t, Sherlock), the government stepped in to ensure it in 1934. So on top of an overpriced monopoly, the American public was taxed to ensure 100% of the population got the benefit of being connected. Today, that tax amounts to $15 billion annually to support overpriced service to less than 5% of the population. (Competitive networks have shown how this number gets driven to zero!)

Finally, in the early 1980s, after nearly 30 years of trying (the final case started in 1974 and took nearly 9 years), the Department of Justice got a judge to break up the monopoly into smaller monopolies and provide "equal access" to competitors across the long-distance piece, starting and ending at the Class 5 (local switch and calling) boundary. The AT&T monopoly was dead; long live the Baby Bell monopolies! But the divestiture began a competitive long-distance (WAN) digitization "wave" in the 1980s that resulted in, amongst other things:

  • 99% drop in pricing over 10 years
  • 90% touchtone penetration by 1990 vs 20% ROW
  • Return of large volume corporate traffic via VPN services and growth of switched data intranets
  • Explosion of free, 800 access (nearly 50% of traffic by 1996)
  • Over 4 (upwards of 7 in some regions/routes) WAN fiber buildouts
  • Bell regulatory relief on intralata tolls via expanding calling areas (LATAs)
  • Introduction of flat-rate local pricing by the Bells

The latter begat the Internet, the second wave of digitization, in the early 1990s. The scaling of Wintel, driven by the Internet, paved the way for low-cost digital cellphones, the third wave of digitization, in the late 1990s. (Note that both the data and wireless waves were supported by forms of equal access.) By 1999 our economy had come back to the forefront on the global scene, our budget was balanced, and we were in a position to pay down our national debt. I expected the 4th and final wave of last-mile (broadband) digitization to start sometime in the mid to late 2000s. It never came. In fact the opposite happened, because of 3 discrete regulatory actions:

  • 1996 Telecom Act
  • 2002 Special Access Deregulation
  • 2004 Rescission of Equal Access and Bell entry into Long Distance (WAN)

Look at the following 6 charts and try not to blink or cry. In all cases, there is no reason why prices in the US are not 50-70% lower, if not more. We have the scale. We have the usage. We have the industries. We have the technology. We started all 3 prior waves and should have oriented our vertically integrated service providers horizontally, a la the data processing industry, to deal effectively with rapid technological change. Finally, we have Moore's and Metcalfe's laws, which argue for a near 60% reduction in bandwidth pricing and/or improved performance annually!
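
A quick compounding check (my own arithmetic, not part of the original post) shows how annual decline rates relate to the decade-long declines cited above:

    # How annual price declines compound over a decade.
    for annual_decline in (0.37, 0.50, 0.60):
        remaining = (1 - annual_decline) ** 10
        print(f"{annual_decline:.0%}/yr for 10 years -> "
              f"{remaining:.4%} of the original price ({1 - remaining:.2%} total decline)")

    # ~37%/yr compounds to roughly a 99% drop over 10 years (the WAN wave above);
    # ~60%/yr, the Moore/Metcalfe pace, compounds to better than 99.99%.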

But the government abetted a remonopolization of the sector over the past 15 years.

It's almost a tragedy to be American on this July 4 week.  The FCC and the government killed competition brought about by Bill McGowan.  But in 2007 Steve Jobs resurrected equal access and competition.  So I guess it's great to be American after all!  Many thanks to Wall and the Canadian government for these stats.

[The six price/performance comparison charts referenced above appeared here.]

Related Reading:

New America Foundation Global Cost of Connectivity (It's bad in the US!)

Posted by: Michael Elling AT 07:17 am   |  Permalink   |  0 Comments  |  Email
Sunday, March 18 2012

Previously we have written about “being digital” in the context of shifting business models and approaches as we move from an analog world to a digital world.  Underlying this change have been 3 significant tsunami waves of digitization in the communications arena over the past 30 years, underappreciated and unnoticed by almost all until after they had crashed onto the landscape:

  • The WAN wave between 1983-1990 in the competitive long-distance market, continuing through the 1990s;
  • The Data wave, itself a direct outgrowth of the first wave, began in the late 1980s with flat-rate local dial-up connections to ISPs and databases anywhere in the world (aka the Web);
  • The Wireless wave, beginning in the early 1990s, was a direct outgrowth of the prior two waves. Digital cellphones were based on the same technology as the PCs that were exploding with internet usage. Likewise, super-low-cost WAN pricing paved the way for one-rate, national pricing plans. Prices dropped from $0.50-$1.00 per minute to less than $0.10. Back in 1996 we correctly modeled this trend before it happened.

Each wave may have looked different, but they followed the same patterns, building on each other. As unit prices dropped 99%+ over a 10-year period, unit demand exploded, resulting in 5-25% total market growth. In other words, as ARPu (average revenue per unit) dropped, ARPU (average revenue per user) rose; u vs U, units vs Users. Elasticity.
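
Here is a small worked example of the ARPu/ARPU distinction, using the wireless-wave numbers that appear elsewhere on this blog (roughly $0.50/minute and 80 minutes/month before, $0.10/minute and 800 minutes/month after); the code is just illustrative arithmetic.

    # Unit price (ARPu) falls, usage explodes, revenue per user (ARPU) rises.
    before = {"price_per_min": 0.50, "minutes_per_month": 80}
    after  = {"price_per_min": 0.10, "minutes_per_month": 800}

    arpu_before = before["price_per_min"] * before["minutes_per_month"]  # $40
    arpu_after  = after["price_per_min"] * after["minutes_per_month"]    # $80

    print(f"ARPu fell {1 - after['price_per_min']/before['price_per_min']:.0%}")  # 80% cheaper per unit
    print(f"Usage grew {after['minutes_per_month']/before['minutes_per_month']:.0f}x")
    print(f"ARPU went from ${arpu_before:.0f} to ${arpu_after:.0f}")              # elasticity in action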

Yet with each new wave, people remained unconvinced about demand elasticity.  They were just incapable of pivoting from the current view and extrapolating to a whole new demand paradigm.  Without fail demand exploded each time coming from 3 broad areas: private to public shift, normal price elasticity, and application elasticity.

  • Private to Public Demand Capture. Monopolies are all about average costs and consumption, with little regard for the margin. As a result, they lose the high-volume customer who can develop their own private solution. This loss diminishes the scale economies of those who remain on the public, shared network, raising average costs; the network effect in reverse. Introducing digitization and competition drops prices and brings back not all, but a significant number, of these private users. Examples we can point to are private data and voice networks, private radio networks, private computer systems, etc., that all came back onto the public networks in the 1980s and 1990s. Incumbents can't think marginally.
  • Normal Price Elasticity.  As prices drop, people will use more.  It gets to the point where they forget how much it costs, since the relative value is so great.  One thing to keep in mind is that lazy companies can rely too much on price and “all-you-can-eat” plans without regard for the real marginal price to marginal cost spread.  The correct approach requires the right mix of pricing, packaging and marketing so that all customers at the margin feel they are deriving much more value than what they are paying for; thus generating the highest margins.  Apple is a perfect example of this.  Sprint’s famous “Dime” program was an example of this.  The failure of AYCE wireless data plans has led wireless carriers to implement arbitrary pricing caps, leading to new problems.  Incumbents are lazy.
  • Application Elasticity. The largest and least definable component of demand is the new ways of using the lower-cost product that 3rd parties drive into the ecosystem. They are the ones that drive true usage via ease of use and better user interfaces. Arguably they ultimately account for 50% of the new demand, with the other two sources at 25% each. With each wave there has always been a large crowd of value-added resellers and application developers that one can point to that more effectively ferret out new areas of demand. Incumbents move slowly.

Demand generated via these 3 mechanisms soaked up excess supply from the digital tsunamis. In each case competitive pricing was arrived at ex ante by new entrants developing new marginal cost models and iterating future supply/demand scenarios. It is this ex ante competitive guess that so confounds the rest of the market both before and after the event. That's why few people recognize that these 3 historical waves are early warning signs for the final big one. The 4th and final wave of digitization will occur in the mid-to-last mile broadband markets. But many remain skeptical of what the "demand drivers" will be. These last-mile broadband markets are monopoly/duopoly controlled and have not yet seen the per-unit price declines we've seen in the prior waves. Jim Crowe of Level 3 recently penned a piece in Forbes that speaks to this market failure. In coming posts we will illustrate where we think bandwidth pricing is headed, as people remain unconvinced about elasticity, just as before. But hopefully the market has learned from the prior 3 waves and will understand or believe the demand forecasts if someone comes along and says last-mile unit bandwidth pricing is dropping 99%. Because it will.

 

Posted by: Michael Elling AT 10:47 am   |  Permalink   |  0 Comments  |  Email
Sunday, February 12 2012

Last week we revisited our seminal analysis from 1996 of the 10-cent wireless minute plan (400 minutes for C$40) introduced by Microcell of Canada, and came up with the investment theme titled "The 4Cs of Wireless". To generate sufficient ROI, wireless needed to replace wireline as a preferred access method/device (PAD). Wireless would have to satisfy minimal cost, coverage, capacity and clarity requirements to disrupt the voice market. We found:

  • marginal cost of a wireless minute (all-in) was 1.5-3 cents
  • dual-mode devices (coverage) would lead to far greater penetration
  • software-driven and wideband protocols would win the capacity and price wars
  • CDMA had the best voice clarity (QoS); pre-dating Verizon’s “Can you hear me now” campaign by 6 years

In our model we concluded (and mathematically proved) that demand elasticity would drive consumption to 800 MOUs/month and average ARPUs to north of $70, from the low $40s. It all happened within 2 short years; at least as perceived by the market when wireless stocks were booming in 1998. But in 1996, the pricing was viewed by our competitors on the Street as the kiss of death for the wireless industry. BTW, Microcell, the innovator, was at a disadvantage based on our analysis: the very thing that drove them to the aggressive pricing model to fill the digital pipes, namely lack of coverage due to a single-mode GSM phone, ended up being their downfall. Coverage for a "mobility" product trumped price, as we see below.
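
As a sanity check on those numbers (my arithmetic, using the figures in this post), the sketch below shows why 10-cent pricing still worked at an all-in marginal cost of 1.5-3 cents per minute:

    # Gross margin on a 10-cent minute at the all-in marginal costs cited above.
    price = 0.10
    for marginal_cost in (0.015, 0.03):
        margin = (price - marginal_cost) / price
        monthly_gross = (price - marginal_cost) * 800  # at 800 MOUs/month
        print(f"cost {marginal_cost*100:.1f}c/min -> {margin:.0%} gross margin, "
              f"~${monthly_gross:.0f}/month gross profit per subscriber")
    # 800 minutes x $0.10 also reproduces the ~$80 ARPU ("north of $70") in the post.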

What we didn’t realize at the time was that the 4Cs approach was broadly applicable to supply of communication services and applications in general.  In the following decade, we further realized the need for a similar checklist on the demand side to understand how the supply would be soaked up and developed the 4Us of Demand in the process.  We found that solutions and services progressed rapidly if they were:

  • easy to use interface
  • usable across an array of contexts
  • ubiquitous in terms of their access
  • universal in terms of appeal

Typically, most people refer only to the user interface (UI) or user experience (UX), but those aren't granular enough to accommodate the enormous range of demand at the margin. Look at any successful product or service introduction over the past 30 years and it scored high on all 4 demand elements. The most profitable and self-sustaining products and solutions have been those that maximized perceived utility versus marginal cost. Apple is the most recent example of this.

Putting the 4Cs and 4Us together in an iterative fashion is the best way to understand clearing of marginal supply and demand ex ante.  With rapid depreciation of supply (now in seconds, minutes and days) and infinitely diverse demand in digital networked ecosystems getting this process right is critical.

Back in the 1990s I used to say the difference between wireless and wired networks was like turning on a lightswitch in a dark room filled with people.  Reaction and interaction (demand) could be instantaneous for the wireless network.  So it was important to build out rapidly and load the systems quickly.  That made them generative and emergent, resulting in exponential demand growth.  (Importantly, this ubiquity resulted from interconnection mandated by regulations from the early 1980s and extended to new digital entrants (dual-mode) in the mid 1990s).  Conversely a wired network was like walking around with a flashlight and lighting discrete access points providing linear growth.

The growth in adoption we are witnessing today from applications like Pinterest, Facebook and Instagram (underscored in this blogpost from Fred Wilson) is like stadium lights compared with the candlelight of the 1990s.  What took 2 years is taking 2 months.  You’ll find the successful applications and technologies score high on the 4Cs and 4Us checklists before they turn the lights on and join the iOS and Android parties.

Related Reading:
Fred Wilson's 10 Golden Principles

Posted by: Michael Elling AT 11:37 am   |  Permalink   |  0 Comments  |  Email
Sunday, February 05 2012

You all know Messieurs (MM.) Moore et Metcalfe. But do you know Monsieur (M.) Zipf? I made his acquaintance whilst researching infinite long tails. Why does he matter, you inquire? Because M. Zipf brings some respectability to Moore et Metcalfe, who can get a little out of control from time to time.

Monsieur Moore is an aggressive chap who doubles his strength every 18 months or so and isn’t shy about it.  Monsieur Metcalfe has an insatiable appetite, and every bit he consumes increases his girth substantially.  Many people have made lots of money from MM Moore’s et Metcalfe’s antics over the past 30 years.  The first we refer to generally as the silicon or processing effect, the latter as the network effect.  Putting the two together should lead to declines of 50-60% in cost for like performance or throughput.  Heady, rather piggish stuff!

Monsieur Zipf, on the other hand, isn't one for excess. He follows a rather strict regimen, one that applies universally to almost everything around us, be it man-made or natural. M. Zipf isn't popular because he is rather unsocial. He ensures that what one person has, the next chap can have only half as much of, and the next chap half that, and so on. It's a decreasing, undemocratic principle. Or is it?

Despite his unpopularity, lack of obvious charm and people’s general ignorance of him, M. Zipf’s stature is about to rise.  Why?  Because of the smartphone and everyone’s desire to always be on and connected; things MM Moore and Metcalfe wholeheartedly support.

M. Zipf is related to the family of power law distributions.  Over the past 20 years, technologists have applied his law to understanding network traffic.  In a time of plenty, like the past 20 years, M. Zipf’s not been that important.  But after 15 years of consolidation and relative underinvestment we are seeing demand outstrip supply and scarcity is looming.  M. Zipf can help deal with that scarcity.

The capacity crunch will only get worse as LTE (4G) devices explode onto the scene in 2012; not only because of improved coverage, better handsets and improved Android (ICS), but mostly because of the iconic iPhone 5 coming this summer! Here's the thing with 4G phones: they have bigger screens and they load stuff 5-10x faster. So what took 30 seconds now takes 3-10 seconds to load, and stuff will look 2-3x better! People will get more and want more; much to MM Moore's et Metcalfe's great pleasure.

"Un moment!" cries M. Zipf. "My users already skew access quite a bit and this will just make matters worse! Today, 50% of capacity is used by 1% of my users. The next 9% use 40%, and the remaining 90% of users use just 10% of capacity. With 4G the inequality can only get worse. Indignez-vous!" (the latest French outcry for equality). It turns out Zipf's law is actually democratic, in that each person consumes at their marginal, not average, rate. The latter is socialism.
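
For readers who want to play with M. Zipf themselves, here is a small Python toy that computes how much traffic the top 1%, next 9% and bottom 90% of users carry under a Zipf-like (power-law) usage distribution. The exponent is a free parameter; the 50/40/10 split quoted above is an empirical observation that no single exponent reproduces exactly, but the concentration pattern is the same.

    # Traffic concentration under a Zipf-like usage distribution: usage of the
    # k-th heaviest user is proportional to 1/k**s.
    def bucket_shares(n_users=10_000, s=1.0):
        weights = [1 / k**s for k in range(1, n_users + 1)]
        total = sum(weights)
        top1 = sum(weights[: n_users // 100]) / total           # heaviest 1%
        next9 = sum(weights[n_users // 100 : n_users // 10]) / total
        rest = 1 - top1 - next9                                  # the remaining 90%
        return top1, next9, rest

    for s in (0.8, 1.0, 1.2):
        top1, next9, rest = bucket_shares(s=s)
        print(f"s={s}: top 1% = {top1:.0%}, next 9% = {next9:.0%}, bottom 90% = {rest:.0%}")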

Few of us will see this distribution of usage as a problem short-term, except when we're on overloaded cell sites and out of reach of a friendly WiFi hotspot. The carriers will throw more capex at the problem and continue to price inefficiently and ineffectively. The larger problem will become apparent within 2 years, when the 90% become the 10% and the carriers tell Wall Street they need to invest another $50B after 2015, just after spending $53B between 2010 and 2014.

Most people aware of this problem say there is a solution.  More Spectrum = more Bandwidth to satisfy MM Moore et Metcalfe.  But they’ve never heard of M. Zipf nor understood fully how networks are used.  Our solution, extended as a courtesy by M. Zipf, is to “understand the customer” and work on “traffic offloading” at the margin.  Pricing strategies, some clever code, and marketing are the tools to implement a strategy that can minimize the capital outlays, and rapidly amortize investment and generate positive ROI.

We've been thinking about this since 1996, when we first introduced our 4Cs of Wireless (cost, coverage, capacity and clarity) analyzing, understanding and embracing 10-cent wireless pricing (introduced by French Canada's revolutionary MicroCell). As a result we were 2-3 years ahead of everybody with respect to penetration, consumption and wireline-substitution thinking and forecasts. Back in 1995 the best wireless prices were 50 cents per minute, and that was just for buying a large block of local access; long-distance and roaming charges applied. So a corporate executive who travelled a lot would regularly rack up $2,000-3,000 monthly phone bills. The result was less than 10% penetration, 80 minutes of use per month, and ARPUs declining from $45 to $40 to $35 in analysts' models, because the marginal customers being added to the network were using the devices infrequently and, more often than not, putting them into the glove compartment in case of emergencies. Fewer than 3% of the population actually used the devices more than once a day.

We used to poll taxi drivers continuously about wireless and found that their average perceived price of $0.75 per minute was simply too high to justify not having to pull over and use a payphone for $0.25. So that was the magical inflection point in the elasticity curves. When MicroCell introduced $0.10 pricing late in the spring of 1996 and we polled the same set of users, they invariably got so excited we were barely able to avoid an accident. So we reasoned and modeled that more than just taxi drivers would use wireless as a primary access device. And use it a lot. This wireless/wireline substitution would result in consumption of 700-800 minutes of use per month, penetration hitting 100% quickly, and ARPUs, rather than declining, actually increasing to $70. The forecast was unbelievably bullish. And of course no one believed it in 1996, even though all those numbers were mostly reached within 5 years.

But we also recognized that wireless was a two-edged sword with respect to localized capacity and throughput, taking into account the above 3 laws. So we also created an optimal zone, or location-based, pricing and selling plan that increased ARPUs and effective yield and was vastly superior to all-you-can-eat (AYCE) and eat-what-you-want (EWYW) plans. Unfortunately, carriers didn't understand or appreciate M. Zipf, and within 2 years they were giving away night and weekend minutes for free, where they could have monetized them for 3-6 cents each. Then some carriers responded by giving away long-distance (whose marginal cost, excluding access, was low but still 2-3 cents a minute). Then AT&T responded with the One Rate plan, which destroyed roaming surcharges and led to one-rate everywhere; even if demand was different everywhere.

Here's a snapshot of that analysis, which is quite simple, consistent with Zipf's law and highly applicable today. Unfortunately, where my approach would have kept effective yield at 8 cents or higher, the competitive carriers responded by going to all-you-can-eat (AYCE) plans and the effective yield dropped to 4 cents by 2004. Had intercarrier SMS not occurred in the 2003-04 timeframe, they would have all been sunk by those pricing models, as they were in the middle of massive 2G investment programs for the coming "wireless data explosion", which actually didn't happen until 3G and smartphones in the 2008-2009 timeframe. It was still a voice and BlackBerry (texting and email) world in 2007 when the iPhone hit. With ubiquitous SMS and people's preference to text instead of leaving voicemail, minutes dropped from 700 to 500, lowering carriers' costs, and they were able to generate incremental revenues on SMS pricing plans (called data) in the 2004-2007 timeframe.

All that said, the analysis and approach are even more useful today, since extreme consumption of data will tend to occur disproportionately in the fixed mode (what many refer to as offload). I'll let you come up with your own solutions. À bientôt! Oh, and look up Free in France to get an idea of where things are headed. What is it about these French? Must be something about Liberté, égalité, fraternité.

Related Reading:
It's All in Your Head by Robert Metcalfe

JD Power Surveys on Wireless Network Performance

The Law of the Few (the 20/80 rule) from the Tipping Point

Briscoe, Odlyzko get it wrong because of one dimensional thinking and mixing apples and oranges

See Larry Downes' Law of Disruption

Law of Bandwidth (but only in a period of monopoly)

See our own home cooked Law of Wireless Gravity

Keck's law of fiber capacity (may be coming to an end?)

Kurzweil's 20 Laws of Telecosm (how many right and wrong also depends on timing)

Posted by: Michael Elling AT 11:38 am   |  Permalink   |  0 Comments  |  Email
Sunday, January 29 2012

Every institution, every industry, every company has undergone or is undergoing the transformation from analog to digital. Many are failing, superseded by new entrants. Nowhere more so than in the content and media industries: music, retail, radio, newspapers and publishing. But why, especially as they've invested in the tools and systems to go digital? Their failure can be summed up by this simple quote: "Our retail stores are all about customer service, and (so and so) shares that commitment like no one else we've met," said Apple's CEO. "We are thrilled to have him join our team and bring his incredible retail experience to Apple."

Think about what Apple's CEO emphasized: "customer service." Not selling; and yet the stores accounted for $15B of product sold in 2011! When you walk into an Apple store it is like no other retailing experience, precisely because Apple stood the retail model on its head. Apple thought digital as it sold not just 4 or 5 products (yes, that's it) but rather 4-5 ecosystems that let the individual easily tailor their unique experience from beginning to end.

Analog does not scale.  Digital does.  Analog is manual.  Digital is automated.  Analog cannot easily be software defined and repurposed.  Digital can.  Analog is expensively two-way.  With Digital 2-way becomes ubiquitous and synchronous.  Analog is highly centralized.  Digital can be easily distributed.  All of this drives marginal cost down at all layers and boundary points meaning performance/price is constantly improving even as operator/vendor margins rise.

With Digital the long tail doesn’t just become infinite, but gives way to endless new tails.  The (analog) incumbent sees digital as disruptive, with per unit price declines and same-store revenues eroding.  They fail to see and benefit from relative cost declines and increased demand.  The latter invariably occurs due to a shift from "private" to public consumption, normal price elasticity, and "application" elasticity as the range of producers and consumers increases.  The result is overall revenue growth and margin expansion for every industry/market that has gone digital.

Digital also makes it easy for something that worked in one industry to be translated to another. Bill Taylor of Fast Company recently wrote in HBR that keeping pace with rapid change in a digital world requires having the widest scope of vision and implementing successful ideas from other fields.

The film and media industries are a case in point.  As this infographic illustrates Hollywood studios have resisted “thinking digital” for 80 years.  But there is much they could learn from the transformation of other information/content monopolies over the past 30 years.  This blog from Fred Wilson sums up the issues between the incumbents and new entrants well.  Hollywood would do well to listen and see what Apple did to the music industry and how it changed it fundamentally; because it is about to do the same to publishing and video.  If not Apple then others.

Another aspect of digital is the potential for user innovation.   Digital companies should constantly be looking for innovation at the edge.  This implies a focus on the “marginal” not average consumer.  Social media is developing tremendous tools and data architectures for this.  If companies don’t utilize these advances, those same users will develop new products and markets, as can be seen from the comments field of this blog on the financial services industry.

Digital is synonymous with flat, which drives greater scale efficiency into markets. Flat (horizontal) systems tend toward vertical completeness via ecosystems (the Apple or Android or WinTel approach). Apple IS NOT vertically integrated. It has pieced together, and very effectively controls, vertically complete solutions. In contrast, vertically integrated monopolies ultimately fail because they don't scale efficiently at every layer. Thinking flat (horizontal) is the first step to digitization.

Related Reading:
Apple Answers the Question "Why?" Before "How?" And "What?"
US Govt to Textbook Publishers: You Will Go Digital!
This article confused vertical integration with vertical completeness
Walmart and Retailers Need to Rethink Strategy
Comparison of 3 top Fashion Retailers Web and App Strategies

Software Defined Networking translated to the real world

Posted by: Michael Elling AT 10:54 am   |  Permalink   |  0 Comments  |  Email
Sunday, January 22 2012

Data is just going nuts! Big data, little data, smartphones, clouds, application ecosystems. So why are Apple and Equinix two of only a few large-cap companies in this area with stocks up over 35% over the past 12 months, while AT&T, Verizon, Google and Sprint are market performers or worse? It has to do with pricing, revenues, margins and capex, all of which impact ROI. The former pair's ROI is going up while the latter group's is flat to declining. And this is all due to the wildness of mobile data.

Data services have been revealing flaws and weaknesses in the carriers' pricing models and networks for some time, but now the ante is being upped. Smartphones now account for almost all new phones sold, and soon they will represent over 50% of every carrier's base, likely ending this year over 66%. That might look good, except when we look at these statistics and facts:

  • 1% of wireless users consume 50% of the bandwidth, while the top 10% account for 90%. That means 9 out of 10 users account for only 10% of network consumption; they are clearly overpaying for what they get.
  • 4G smartphone displays (720x1280 pixels) allow video viewing that uses 300x more capacity than voice.
  • Streaming just 2 hours of music daily off a cloud service soaks up about 3.5GB per month (see the quick arithmetic after this list).
  • Carriers still derive more than 2/3 of their revenues from voice.
  • Cellular wireless (just like WiFi) is shared. 
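
A quick arithmetic check of the streaming bullet above (the 128 kbps stream rate is my assumption; typical music streams run roughly 96-320 kbps):

    # Rough monthly data use for 2 hours/day of cloud music streaming.
    stream_kbps = 128                  # assumed audio bitrate
    hours_per_day, days = 2, 30

    mb_per_hour = stream_kbps * 3600 / 8 / 1000       # kilobits/s -> megabytes/hour
    gb_per_month = mb_per_hour * hours_per_day * days / 1000
    print(f"{mb_per_hour:.0f} MB/hour -> {gb_per_month:.1f} GB/month")  # ~58 MB/hr, ~3.5 GB/month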

Putting this together, you can see that a very small percentage of users consume the bulk of the network, and that voice pricing and revenues are way out of sync with the corresponding data pricing and revenues; especially as OTT IP voice and other applications become pervasive. 

 

Furthermore, video, which is growing in popularity, will end up using 90% of the capacity, crowding out everything else, unless carriers change pricing to reflect differences in both marginal users and marginal applications. Marginal here = high-volume/leading-edge.

So how are carriers responding? By raising data prices. This started over a year ago as they began capping those "unlimited" data plans. Now they are raising prices and doing so in wild and wacky ways; ways we think will come back to haunt them just like wild party photos on FB. Here are just two of many examples:

  • This past week AT&T simplified its pricing and scored a marketing coup by offering more for more and lowering prices even as the media reported AT&T as “raising prices.”  They sell you a bigger block of data at a higher initial price and then charge the same rate for additional blocks which may or may not be used.  Got that?
  • On the other hand, that might be better than Walmart's new unlimited data plan, which requires PhD-level math skills to understand. Let me try to explain as simply as possible. Via T-Mobile, they offer 5GB/month at 3G speed; thereafter (the unlimited part) they throttle to 2G speed. But after March 16 the numbers change to 250MB at 3G initially, then unlimited 2G speed after that (a toy sketch of this plan logic follows this list). Beware the Ides of March's consumer backlash!
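
A toy sketch of the plan logic as I read it (the thresholds are from the bullet above; the function and speed labels are mine, purely for illustration):

    # Walmart/T-Mobile "unlimited" plan as described above: full speed up to a
    # monthly cap, then throttled to 2G for the rest of the month.
    def connection_speed(gb_used_this_month: float, after_march_16: bool) -> str:
        cap_gb = 0.25 if after_march_16 else 5.0   # 250MB vs 5GB at 3G speed
        return "3G" if gb_used_this_month < cap_gb else "2G (throttled)"

    print(connection_speed(1.0, after_march_16=False))  # 3G
    print(connection_speed(1.0, after_march_16=True))   # 2G (throttled)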

Unless the carriers and their channels start coming up with realistic offload solutions, like France's Free, and pricing that better matches underlying consumption, they will continue to generate lower or negative ROI. They need to get control of wild data. Furthermore, if they do not, the markets and customers will. With smartphones (like Apple's, which, by the way, drove WiFi as a feature knowing that AT&T's network was subpar) and cloud-based solutions (hosted by Equinix), it is becoming easier for companies like Republic Wireless to virtually bypass the expensive carrier plans using the carriers' very own networks. AT&T, VZ and Sprint will continue to be market performers at best.

Related Reading/Viewing:
AT&T pricing relies heavily on breakage
Useful stats on data growth from MobileFuture
FoxNews report on AT&T Data Throttling
This article actually suggests "dissuading usage" as 1 of 4 solutions
Consumer reports article whereby data equivalent voice pricing = 18 cents, OTT on lowest plan = 6.6 cents, OTT on highest plan = 1 cent
New shared data plans set a new high standard

Posted by: Michael Elling AT 09:13 am   |  Permalink   |  0 Comments  |  Email
Thursday, December 29 2011

67 million Americans live in rural areas. The FCC says the benchmark broadband speed is at least 4 Mbps downstream and 1 Mbps upstream. Based on that definition 65% of Americans actually have broadband, but only 50% of those who live in rural markets do, or about 35 million. The 50% figure is due largely to the fact that 19 million Americans (28%) who live in rural markets do not even have access to these speeds. Another way of looking at the numbers shows that 97% of non-rural Americans have access to these speeds versus 72% living in rural areas. Rural Americans are at a significant disadvantage to other Americans when it comes to working from home, e-commerce or distance education. Clearly, around 70% buy broadband if they have access to it.
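
The arithmetic behind those rural numbers, worked out explicitly (the percentages are from the paragraph above; the rounding is mine):

    # Reconciling the rural broadband figures cited above.
    rural_pop_m = 67       # million rural Americans
    with_access = 0.72     # share of rural Americans with 4/1 Mbps available
    subscribing = 0.50     # share of all rural Americans who actually have broadband

    print(f"No access: {rural_pop_m * (1 - with_access):.0f}M (~19M, i.e. 28%)")
    print(f"Subscribers: {rural_pop_m * subscribing:.0f}M (~35M)")
    print(f"Take-up where available: {subscribing / with_access:.0%}")  # ~70% buy it when they can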

Furthermore, we would argue the FCC standard is no longer acceptable when it comes to basic or high-definition multimedia, video and file downloads. These applications require 10+ Mbps downstream and 3+ Mbps upstream to be user-friendly. Without those speeds you get what we call the "world-wide-wait" in rural markets for most of today's high-bandwidth applications. In the accompanying 2 figures we see a clear gap between the blue lines (urban) and green lines (rural) for both download and upload speeds. The result is that only 7% of rural Americans use broadband service with 6+/1.5+ Mbps, versus 22% nationwide today.

The problem in rural markets is lack of alternative and affordable service providers. In fact the NTIA estimates that 4% of Americans have no broadband provider to begin with, 12% only 1 service provider and 44% just 2 providers. Almost all rural subscribers fall into 1 of these 3 categories. Rural utilities, municipalities, businesses and consumers would benefit dramatically from alternative access providers as economic growth is directly tied to broadband penetration.

The accompanying chart shows how vital broadband is to regional economic growth.  If alternative access drives rural broadband adoption to levels similar to urban markets, then local economies will grow an additional 3% annually.  That's because new wireless technology and applications such as home energy management, video on demand, video conferencing and distance learning provide the economic justification for alternative, lower-cost, higher bandwidth solutions.

Related Reading

FCC Broadband Map

US 3G Wireless Coverage Map

The UK is Far Ahead of the US; Their "deficient" is our average

Rural Telcos Against FCC USF Reform

GA Tries to Reduce Subsidies, Again

 

Posted by: Michael Elling AT 08:09 am   |  Permalink   |  0 Comments  |  Email
Sunday, November 06 2011

Would gamification work in the smart grid?  Possibly.  Others have asked the same question.  But some would ask, why do you need to incent people to save money?  Because people’s self-interest might not be aligned with the smart-grid as currently envisioned by vendors and utilities. 

Gamification's value lies in getting people to do something against their self-interest without realizing it. At the same time, people play games to accomplish something aspirational. How can these two somewhat contradictory precepts be applied to the smart-grid?

People resist the smart grid because of its perceived complexity, expense and intrusiveness. They are acting in their self-interest. Second, the smart-grid is supposedly about giving the end-user control over their own consumption. Unfortunately, utilities are scared by this future, since it runs counter to revenue growth.

Enter gamification, where everyone might win. If introduced into the design of smart-grid solutions from the get-go, it could have a fundamental impact on penetration, acceptance and, ultimately, revenue and profit growth for the utility industry. Why? Because the demand for electricity is potentially unlimited, and the easier and more efficient the industry makes consumption, the greater the growth potential.

So what might gamification of the smart grid look like?  It would need to satisfy the following conditions: personal growth, societal improvement and marketing engagement.   Right now solutions I’ve read about focus on individual rewards (see Welectricity and Lowfoot), but there is a growing body of evidence that people respond better when their use is compared to their neighbors.  So why not turn efficiency and production into a contest?  Research is already underway in Hawaii and Chicago.  Small, innovative app-driven solutions are entering the market; even supported by former US Vice Presidents.

To maximize participation and ensure widespread rewards, smart-grid gamification contests should be held at the home, neighborhood, city, county and state levels, all the way up to national ones. They should reward both relative and absolute changes, to give ALL users an incentive to win, not just the largest users. And not just individuals, but groups as well. Contests could also get down to the appliance level and ultimately should include contribution/cogeneration (here's another example).

Utilities have done a poor job of getting customers to look at their info online; less than 10% on average.   Playing games with customers and following recipes like this might be a way to change all that.  Win, win, win.

Related Reading:

Gaming across all industries

 

Posted by: Michael Elling AT 11:45 am   |  Permalink   |  0 Comments  |  Email
Sunday, October 30 2011

Without access does the cloud exist?  Not really.

In 2006, cloud computing entered the collective intelligence in the form of Amazon Web Services. By 2007, over 330,000 developers were registered on the platform. This rapid uptake was an outgrowth of web 1.0 applications (scale) and the growth in high-speed broadband access from 1998-2005 (ubiquity). It became apparent to all that new solutions could be developed and efficiencies improved by collapsing to the core a portion of the processing and storage that had developed at the edge during the WinTel revolution. The latter had fundamentally changed the IT landscape between the late 1980s and early 2000s, from a mainframe to a client-server paradigm.

In mid-2007 the iPhone was born, just as 3G digital services were being introduced by a competitive US wireless industry. In 2009 "smartphone" penetration was 18% of the market. By the 3rd quarter of 2011 that number had reached 44%. The way people communicate and consume information is changing dramatically in a very short time.

The smartphone is driving cloud (aka back to the mainframe) adoption for 3 reasons: 1) it is introducing a new computing device to complement, not replace, existing computing devices at home and work; 2) the small screen limits what information can be shown and processed; 3) it is increasing the sociability, velocity and value of information.   Information knows no bounds at the edge or core.  And we are at the very very early stages of this dramatic new revolution.

Ice Cream Sandwich (much like Windows 2.0 multitasking in 1987) heralds a radical new world of information generation and consumption.  Growth in processing and computation at the edge will drive the core and vice versa, just as chip advances from Intel fed software bloat on desktops, further necessitating faster chips.

But the process can only expand if the networks are there (see page 2) to support it.  Unfortunately, carriers have responded with data caps and bemoaned the lack of new spectrum.  Fortunately, a hidden back door exists in the form of WiFi access.  And if carriers like AT&T and Verizon don't watch out, it will become the preferred form of access.

As a recent adopter of Google Music I have become very attuned to this.  First, it is truly amazing how seamless content storage and playback have become.  Second, I learned how to program my phone to always hunt for a WiFi connection.  Third, when I do not have access to either the 3G network or WiFi and I want something that is stored online, a strange feeling of being disconnected overtakes me, akin to leaving one's cellphone at home in the morning.

With the smartphone we are getting used to choice and instant gratification.  The problem with WiFi is its variability and unreliability.  Capital and technology are being applied to solve that problem, and it will be interesting to see how service providers react to the potential threat (and/or opportunity).  Where carriers once imagined walled application gardens, there are now fertile iOS and Android fields watered by clouds over which carriers exert little control.  Storm clouds loom over their control of, and ROI from, access networks.

Posted by: Michael Elling AT 09:10 am   |  Permalink   |  0 Comments  |  Email
Sunday, October 23 2011

Even though the US has the most reliable electric system in the world, utility companies are not schooled in real-time or two-way concepts when it comes to gathering and reporting data, nor when it comes to customer service. All of that changes with a "smart-grid," which may be the best explanation for why so many smart-grid solutions stop at the meter and do not extend fully into the customer premise. Unfortunately, utilities are not prepared to "get" that much information, let alone "give" much to the customer.

Over 20 million smart meters, representing 15% penetration of residential markets, had been deployed as of June 2011, according to IEE, which forecasts 65 million (50%) by 2015 at an average cost of $150-250 per household.  While these numbers are significant, it will have taken 15 years to get there, and even then only 6 million premises, less than 5% of the market, are expected to have energy management devices by 2015.  So while the utilities will have a slightly better view of things and greater controls and operating efficiencies, the consumer will not be engaged fully, if at all.  This is the challenge of the smart-grid today.
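
As a quick sanity check on the IEE figures above, the implied residential base and cumulative spend can be worked out directly; the snippet below only performs that arithmetic and is not taken from the IEE report itself.

    # Back-of-the-envelope check of the smart-meter figures cited above (IEE).
    meters_2011 = 20_000_000          # deployed as of June 2011
    penetration_2011 = 0.15           # 15% of residential market

    meters_2015 = 65_000_000          # forecast
    penetration_2015 = 0.50           # 50% of residential market

    cost_low, cost_high = 150, 250    # dollars per household

    # Implied residential base is consistent at roughly 130 million households.
    print(meters_2011 / penetration_2011)   # ~133 million
    print(meters_2015 / penetration_2015)   # 130 million

    # Cumulative deployment cost at the 2015 forecast: roughly $10-16 billion.
    print(meters_2015 * cost_low, meters_2015 * cost_high)

Both penetration figures imply a residential base of roughly 130 million households, and the 2015 forecast implies a cumulative spend on the order of $10-16 billion, which is why the less-than-5% figure for energy management devices is so striking.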

Part of the issue is incumbent organizations (regulatory bodies, large utilities and vendors) and their desire to stick with proven approaches, even while they disagree on what those approaches are. According to NIST, there are no fewer than 75 key standards and 11 different standards bodies and associations involved in smart-grid research and trials. The result is numerous different approaches, many of which are proprietary and expensive.  As well, the industry breaks energy management within the smart-grid into 2 broad categories, Demand Response Management (DRM, the side the utility controls) and Demand Side Management (DSM, the side the customer arguably controls), instead of just calling it "end-to-end energy management," which is how we refer to it.

Another challenge, specifically for rural utilities, is that over 60% have PLC meters, which don't work with most of the "standard" DRM solutions in the market, necessitating an upgrade. This could actually present an opportunity for a well-designed end-to-end solution that leapfrogs the current industry debate and offers a new approach: one that works around an expensive meter upgrade AND allows DSM at the same time. After working with utilities for over 10 years, we've found that rural utilities are the most receptive to this new way of thinking, not least because they are owned by their customers and, given their widely dispersed customer base, can achieve greater operating efficiencies from end-to-end "smart" technology investment.

Ultimately the market will need low-cost, flexible end-to-end solutions to make the smart-grid pervasive and generate the expected ROI for utility and customer alike.

Posted by: Michael Elling AT 08:13 am   |  Permalink   |  Email
Tuesday, April 19 2011

5 Areas of Focus

1) How does information flow through our economic, social and political fabric?  I believe all of history can be modeled on the pathways and velocity of information.  To my knowledge there is no economic science regarding the velocity of information, but many write about it.  Davidow (OVERconnected) speaks of networks of people (information) being in 3 states of connectivity.  Tom Wheeler, someone whom I admire a great deal, often relates what is happening today to historical events and vice versa; his book on Lincoln's use of the telegraph makes for a fascinating read.  Because of its current business emphasis and its potential to change many aspects of our economy and lives, social media will be worth modeling along the lines of information velocity.

2) Mapping the rapidly evolving infomedia landscape to explain both the chaos of convergence and the divergence of demand has interested me for 20 years.  This amounts to a taxonomy of things in the communications, technology and internet worlds.  The latest iteration, called the InfoStack, puts everything into a 3-dimensional framework with geographic, technological/operational, and network/application dimensions.  I've taken that a step further and, from 3-dimensional macro/micro models, developed 3-dimensional organizational matrices for companies.  Three coordinates capture 99% of everything that is relevant about a technology, product, company, industry or topic.
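
As an illustration of how three coordinates can pin down almost anything in the landscape, here is a minimal, hypothetical sketch of the InfoStack idea as a data structure; the axis names follow the framework above, but the specific values and entries are invented for illustration and are not the actual taxonomy.

    # Hypothetical sketch of a 3-coordinate "InfoStack" classification.
    # Axis values below are illustrative placeholders, not the real taxonomy.
    from collections import namedtuple

    InfoStackCoord = namedtuple("InfoStackCoord", ["geography", "layer", "application"])

    # Example classifications along geographic, technological/operational,
    # and network/application dimensions.
    catalog = {
        "municipal WiFi operator": InfoStackCoord("metro", "access/transport", "broadband data"),
        "mobile payments startup": InfoStackCoord("national", "upper-layer services", "transactions"),
        "smart-grid DSM vendor":   InfoStackCoord("premise", "edge devices", "energy management"),
    }

    def same_axis(catalog, axis, value):
        """Return everything in the catalog sharing one coordinate value."""
        return [name for name, c in catalog.items() if getattr(c, axis) == value]

    print(same_axis(catalog, "geography", "metro"))

Queries along a single axis then become trivial, which is what makes the framework useful as an organizational matrix as well as a market map.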

3) Mobile payments and ecommerce have been an area of focus over the past 3 years, and I will comment quite a bit on this topic.  There are hundreds of players, everyone jockeying for dominance or their piece of the pie.  The area sits at the nexus of 3 very large groupings of companies: financial services, communications services, and transaction/information processors.  The latter includes Google and Facebook, which is why they are constantly being talked about.  That said, players in all 3 camps are constrained by vestigial business and pricing models.   Whoever ties the communications event/transaction to the underlying economic transaction will win.  New pricing will reflect digitization and true marginal cost; successful blueprints are 800, VPN, and advertising.  We believe 70-80% of all revenue in the future will derive from corporate users and less than 30% will be subscription based.

4) Exchange models and products/solutions that facilitate the flow of information across upper and lower layers, and from end to end, represent exciting and rewarding opportunities.  In a competitive world of infinite revenue clouds of demand, mechanisms must exist that drive costs down between participants as traffic volumes explode.  This holds for one-way and two-way traffic, and for narrowband and broadband applications.  The opposing sides of bill and keep (called party pays) and network neutrality are missing the point: new services can only develop if there is a bilateral, balanced payment system.  It is easy to see why incumbent service and application models embrace bill and keep, as it stifles new entrants; but over the long term it also stifles innovation and retards growth.
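
A toy settlement calculation makes the contrast concrete.  Under bill and keep no money changes hands between networks regardless of how lopsided the traffic is, while under a bilateral, balanced model the network that terminates more traffic is compensated for it.  The minutes and rate below are purely illustrative assumptions.

    # Toy comparison of inter-network settlement regimes (illustrative numbers only).

    def bilateral_net_payment(a_to_b_minutes, b_to_a_minutes, termination_rate):
        """Net payment from network A to network B when each side pays
        the other to terminate its traffic (a balanced, bilateral model)."""
        return (a_to_b_minutes - b_to_a_minutes) * termination_rate

    # Network A hands off far more traffic to B than it receives back.
    a_to_b, b_to_a = 1_000_000, 200_000    # minutes per month, assumed
    rate = 0.005                           # $ per terminated minute, assumed

    print(bilateral_net_payment(a_to_b, b_to_a, rate))   # A pays B $4,000 net

    # Under bill and keep the inter-network payment is simply zero: B terminates
    # 800,000 more minutes than it originates but recovers those costs only
    # from its own subscribers.

The point of the sketch is only the mechanics: a balanced exchange puts a price on the traffic imbalance, whereas bill and keep leaves it unpriced and pushes the cost onto the terminating network's own subscribers.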

5) What will the new network and access topologies look like?  Clearly the current industry structure cannot withstand the dual onslaught of rapid technological change and obsolescence on one side and enormously growing, diverging demand on the other.  It's great if everyone embraces the cloud, but what if we don't have access to it?  Something I call "centralized hierarchical networking" will develop.  A significant amount of hybridization will exist, and no single solution will result.  Scale and ubiquity will be critical elements of commercial success, as will anticipation and incorporation of developments in the middle and upper layers.  Policy must ensure that providers are not allowed to hide behind a mantra of "natural bottlenecks" and universal service requirements; in fact, open and competitive models ensure the latter, as we saw from our pro-competitive and wireless policies of the 1980s and 1990s.

In conclusion, these are the 5 areas I focus on:

1) Information Velocity

2) Mapping the InfoStack

3) Applications, and in particular payment systems

4) Exchange models

5) Networks

The analysis will tend to focus on pricing (driven by marginal, not average, costs) and arbitrages; the "directory value" of something, which some refer to as the network effect; and key supply and demand drivers.

Posted by: Michael Elling AT 09:43 am   |  Permalink   |  0 Comments  |  Email

Information Velocity Partners, LLC
88 East Main Street, Suite 209
Mendham, NJ 07930
Phone: 973-222-0759
Email:
contact@ivpcapital.com

