SpectralShifts Blog 
Monday, March 17 2014

A New Visionary In Our Midst?

The US has lacked a telecom network visionary for nearly 2 decades.  There have certainly been strong and capable leaders, such as John Malone, who not only predicted but brought about the 500-channel LinearTV model.  But there hasn’t been someone like Bill McGowan, who broke up AT&T, or Craig McCaw, who first had the vision to build a national, seamless wireless network, countering decades of provincial, balkanized thinking.  Both of them fundamentally changed the thinking around public service provider networks.

But with a strong message to the markets in Washington DC on March 11 from Masayoshi Son, Sprint’s Chairman, the 20-year wait may finally be over.  Son did what few have been capable of doing in the 15-20 years since McGowan exited stage left and McCaw sold out to MaBell: telling it like it is.  The fact is that today’s bandwidth prices are 20-150x higher than they should be given current technology.

This is no one’s fault in particular; in fact, to most people (even informed ones) all measures of performance-to-price look great compared to 10 or 20 years ago.  But, as Son illustrated, things could be much, much better.  And he’s willing to make a bet on getting the US, the most advanced and heterogeneous society, back to a leadership role with respect to the ubiquity and cost of bandwidth.  To get there he needs more scale, and one avenue is to merge with T-Mobile.

There have been a lot of naysayers as to the possibility of a Sprint-T-Mo hookup, including leaders at the FCC.  But don’t count me as one; it needs to happen.  Initially skeptical when the rumors first surfaced in December, I quickly reasoned that a merger would be the best outcome for the incentive auctions.  A merger would eliminate spectrum caps as a deterrent to active bidding and maximize total proceeds.  It would also have a better chance of developing a credible third competitor with equal geographic reach. Then in January the FCC and DoJ came out in opposition to the merger.

In February, though, Comcast announced its much-rumored merger with TW, and Son jumped on the opportunity to take his case for merging to a broader stage.  He did so in front of a packed room of 300 communications pundits, press and politicos at the US Chamber of Commerce’s prestigious Hall of Flags; a poignant backdrop for his own rags-to-riches story.  Son’s frank honesty about the state of broadband for the American public versus the rest of the world, as well as about Sprint’s own miserable current performance, was impressive.  It’s a story that resonates with my America’s Bandwidth Deficit presentation.

Here are some reasons the merger will likely pass:
  • The FCC can’t approve one horizontal merger (Comcast/TW) that brings much greater media concentration and control over content distribution, while disallowing a merger of two small players (really irritants as far as AT&T and Verizon are concerned).
  • Son has a solid track record of disruption and doing what he says.
  • The technology and economics are in his favor.
  • The vertically integrated service provider model will get disrupted faster and sooner, as Sprint will have to think outside the box, partner, and develop ecosystems that few in the telecom industry have thought about before; or, if they have, they’ve been constrained by institutional inertia and hidebound by legacy regulatory and industry siloes.

Here are some reasons why it might not go through:

  • The system is fundamentally corrupt.  But the new FCC Chairman is cast from a different mold than his predecessors and is looking to make his mark on history.
  • The FCC shoots itself in the foot over the auctions.  Given all the issues and sensitivities around the incentive auctions, the FCC wants this first one to succeed, since it will serve as a model for all future spectrum refarming.
  • The FCC and/or DoJ find that the merger reduces competition and is against the public interest.  But any analyst can see that T-Mo and Sprint do not have sustainable models on their own at present, especially when all the talk recently in Barcelona was already about 5G.

Personally, I want Son’s vision to succeed because it’s the vision I had in 1997, when I originally brought the 2.5-2.6 GHz (MMDS) spectrum to Sprint, and later in 2001 and 2005, when I introduced Telcordia’s 8x8 MIMO solutions to their engineers.  Unfortunately, past management regimes at Sprint were incapable of understanding the strategies and future vision that went along with those investment and technology pitches.  Son has a different perspective (see in particular minute 10 of this interview with Walt Mossberg), with his enormous range of investments and clear understanding of price elasticity and the marginal cost of minutes and bits.

To be successful, Sprint’s strategy will need to be focused, but at the same time open and sharing, in order to simultaneously scale solutions across the three major layers of the informational stack (aka the InfoStack):

  • upper (application and content)
  • middle (control)
  • lower (access and transport)

This is the challenge for any company that attempts to disrupt the vertically integrated telecom or LinearTV markets; the antiquated and overpriced ones Son says he is going after in his presentation.  But the US market is much larger and more robust than the rest of the world, not just geographically but also from a 360-degree competitive perspective, where supply and demand are constantly changing and shifting.

Ultimate success may well rest in the control layer, where Apple and Google have already built up formidable operating systems which control vastly profitable settlement systems across multiple networks.  What few realize is that the current IP stack does not provide price signals and settlement systems that clear supply and demand between upper and lower layers (north-south) or between networks (east-west) in the newly converged “informational” stack of 1- and 2-way content and communications.

If Sprint’s Chairman realizes this and succeeds in disrupting those two markets with his strategy then he certainly will be seen as a visionary on par with McGowan and McCaw.

Posted by: Michael Elling AT 09:58 am   |  Permalink   |  0 Comments  |  Email
Thursday, April 25 2013

The Law of Wireless Gravity

I've written about the impacts of and interplay between Moore’s, Metcalfe’s and Zipf’s laws on the supply and demand of communication services and networks.  Moore’s and Metcalfe’s laws can combine to drive bandwidth costs down 50% annually.  Others have pointed out Butters’ law, coined by a Bell Labs wizard, Gerry Butters, which arrives at a more aggressive outcome: a 50% drop every 9 months!  Anyway, those are the big laws that are immutable, washing against and over vertically integrated monopolies like giant unseen tsunamis.
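How different are those two rates, really?  Here’s a minimal compounding sketch in Python (the $100-per-Mbps starting cost is purely illustrative) showing how far apart the two curves drift over a decade:

```python
# Compare two bandwidth-cost decline curves over a decade:
# Moore's + Metcalfe's combined (-50% per year) vs Butters' law (-50% per 9 months).
# The $100-per-Mbps starting cost is illustrative, not a market figure.

def cost_after(years, halving_period_years, start_cost=100.0):
    """Unit cost after `years`, halving every `halving_period_years`."""
    return start_cost * 0.5 ** (years / halving_period_years)

for year in range(0, 11, 2):
    annual = cost_after(year, 1.0)    # 50% drop every 12 months
    butters = cost_after(year, 0.75)  # 50% drop every 9 months
    print(f"year {year:2d}: annual ${annual:9.4f}/Mbps   "
          f"butters ${butters:9.4f}/Mbps   gap {annual / butters:5.1f}x")
```

By year 10 the Butters curve sits roughly 10x below the annual-halving curve; small differences in decline rates compound into order-of-magnitude gaps.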

Then there are the smaller laws, like that of my friend Russ McGuire at Sprint, who penned, “The value of any product or service increases with its mobility.”  Wow, that’s very Metcalfian and almost infinite in value, because the devices and associated pathways can move in 3 planes.  I like that and have always believed in McGuire’s Law (even before he invented it!).

Since the early 1990s, when I was one of the few analysts on the Street, if not the only one, to cover both wired and wireless telecoms, I’ve been maintaining that wireless is merely access to wireline applications.  While that has finally been validated with “the cloud,” and business models and networks have been merging (at least at the corporate level), the majority of people still believe the two to be fundamentally distinct.  It shows in simple things like interfaces and the lack of interoperability across 4 screens.  Thankfully all that is steadily eroding due to cloud ecosystems and the enormous fight happening in the data world between the edge and the core, and open vs closed:  GOOG vs AAPL vs MSFT (and let’s not forget Mozilla, the OS to rule all OSs?).

Anyone who works in or with the carriers knows wireless and wired networks are inextricably linked and always have been, in terms of backhaul transport to the cell tower.  But over the past 6 years the symbiosis has become much greater because of the smartphone.  2G digital networks were capable of providing “data” connections from 1998 to 2006, but it really wasn’t until the iPhone happened on the scene in 2007, along with the advent of 3G networks, that things really started taking off.

The key was Steve Jobs’ demand to AT&T that smartphone applications purchased through the App Store have unfettered access to the internet, be it through:

  • 2G, which was relatively pervasive, but slow at 50-300 kbps,
  • 3G, which was not pervasive, but faster at 500-1500 kbps, or
  • WiFi (802.11g), which was pervasive in a lot of “fixed” areas like home, work or school.

The latter made a ton of sense in particular because data apps, unlike voice, are more likely to be used when one is relatively stationary, for obvious visual, coordination and safety reasons; the exception being music.  In 2007, 802.11g WiFi was already 54 Mbps, or 30-50x faster than 3G, even though the WiFi radios on smartphones could only handle 30 Mbps.  It didn’t matter, since most apps rarely need more than 2 Mbps to perform OK.  Unfortunately, below 2 Mbps they provided a dismal experience, and that’s why 3G had such a short shelf-life and the carriers immediately began to roll out 4G.

Had Jobs not gotten his way, I think the world would be a much different place, as the platforms would not have been so generative or scaled so quickly without unconstrained (or nearly ubiquitous) access.  This is an example of what I call Metcalfian “suck” (network-effect pull-through) of the application ecosystem for the carriers, and nothing exemplified it better than the iPhone and App Store in the first few years, as AT&T outpaced its rivals and the Android app ecosystem.  It also upset the normal order of business first, consumer second, through the bring-your-own-device (BYOD) trend, blurring the lines between the two traditionally separate market segments.

Few people to this day realize or appreciate the real impact Steve Jobs had: reviving equal access.  Equal access was something the carriers and the federal government conspired to kill, successfully, in the early 2000s.  It was the horse that brought us competitive voice in the early 1980s and competitive data in the early 1990s, and it helped scale digital wireless networks nationwide in the late 1990s.  All things we’re thankful for, yet have forgotten, never entirely appreciated, or never understood how they came about.

Simply put, 70% of all mobile data access is over WiFi, and we saw 4G networks develop 5 years faster than anyone thought possible.  Importantly, not only is WiFi cheaper and faster access, it is almost always tied to a broadband pipe that is either fiber or becomes fiber very quickly.

Because of this “smart,” market-driven form of equal access, and in appreciation of Steve Jobs’ brilliance, I am going to introduce a new law.  The Law of Wireless Gravity holds that “a wireless bit will seek out fiber as quickly and cheaply as possible.”  I looked it up on Google and it doesn’t exist.  So now I am introducing it into the public domain under creative commons.  Of course there will be plenty of metaphors about clouds and attraction and lightning to go along with the law, as well as numerous corollaries.
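As a toy illustration of the law, the sketch below picks the egress that gets a bit onto fiber most cheaply and quickly; the path names, per-GB costs and hop counts are invented for illustration, not measured:

```python
# Toy model of the Law of Wireless Gravity: a wireless bit picks the
# access path that reaches fiber most cheaply and quickly.
# All figures below are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class AccessPath:
    name: str
    cents_per_gb: float   # marginal cost to move 1 GB over this path
    hops_to_fiber: int    # how quickly the traffic lands on fiber

paths = [
    AccessPath("4G macro cell", cents_per_gb=50.0, hops_to_fiber=3),
    AccessPath("WiFi + cable broadband", cents_per_gb=5.0, hops_to_fiber=2),
    AccessPath("WiFi + fiber-to-the-home", cents_per_gb=4.0, hops_to_fiber=1),
]

# "As quickly and cheaply as possible": minimize cost first, then hops to fiber.
best = min(paths, key=lambda p: (p.cents_per_gb, p.hops_to_fiber))
print(f"the bit gravitates to: {best.name}")
```

With any plausible numbers, WiFi backed by a fiber pipe wins, which is exactly the 70%-offload dynamic described above.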

I hope people abide by this law in all their thinking about and planning for broadband, fiber, gigabit networks, application ecosystems, devices, control layers, residential and commercial demand, etc…, because it holds across all of those instances.  Oh yeah, it might also counter the confusion over and disinformation about spectrum scarcity.  And it might solve the digital divide problem, and the USF problem, and the bandwidth deficit… and even the budget deficit.  OK, one step at a time.

Related Reading:

Not exactly reading, but comic Bill Burr's ode to Steve Jobs

A look back at the number of predictions Kurzweil got right and wrong (sometimes a matter of timing), going back to 2001.

Posted by: Michael Elling AT 09:49 am   |  Permalink   |  0 Comments  |  Email
Friday, August 17 2012

How To Develop A Blue Ocean Strategy In A Digital Ecosystem

Back in 2002 I developed a 3-dimensional macro/micro framework-based strategy for Multex, one of the earliest and leading online providers of financial information services.  The result was to sell themselves to Reuters in a transaction that benefited both companies.  1+8 indeed equaled 12.  What I proposed to the CEO was simple: do “this” to grow to a $500m company, or sell yourself.  After 3-4 weeks of mulling it over, he took a plane to London and sold his company rather than undertake the “this”.

What I didn’t know at the time was that the “this” was a Blue Ocean Strategy (BOS): creating new demand by connecting previously unconnected qualitative and quantitative information sets around the “state” of the user.  For example, a portfolio manager might be focused on biotech stocks in the morning and make outbound calls to analysts to answer certain questions.  Then the PM goes to a chemicals lunch and returns to focus on industrial products in the afternoon, at which point one of the biotech analysts gets back to him.  Problem: the PM’s mental and physical “state,” or context, is gone.  Multex had the ability to build a tool that could bring the PM back to his morning “state” in his electronic workplace.  The result: faster and better decisions, greater productivity, possible performance gains, definite value.

Sounds like a great story, except there was no BOS in 2002; the concept was coined in 2005.  But the second slide of my 60-slide strategy deck to the CEO had this quote from the authors of BOS, W. Chan Kim and Renée Mauborgne of INSEAD, the Harvard Business School of Europe:

“Strategic planning based on drawing a picture…produces strategies that instantly illustrate if they will: stand out in the marketplace, are easy to understand and communicate, and ensure that every employee shares a single visual reference point.”

So you could argue that I anticipated the BOS concept to justify my use of 3D frameworks, which were meant to illustrate this entirely new playing field for Multex.

But this piece is less about the InfoStack’s use in business and sports and more about the use of the 4Cs and 4Us of supply and demand as tools within the frameworks to navigate rapidly changing and evolving ecosystems.  And we use the BOS graphs postulated by Kim/Mauborgne.  The 4Cs and 4Us let someone introducing a new product, horizontal layer (exchange) or vertical market solution (service integration) figure out optimal product, marketing and pricing strategies and tactics a priori.  A good example of this is a BOS I created for a project I am working on in the area of WiFi offload and HetNets (heterogeneous access networks that can be self-organizing) called HotTowns (HOT).  Here’s a picture of it comparing 8 key supply and demand elements across fiber, 4G macro cellular and super-saturation offload in a rural community.  Note that the "blue area" representing the results of the model can be enhanced on the capacity front by fiber and on the coverage front by 4G.
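For readers who want to reproduce this kind of chart, here is a minimal sketch of a BOS-style strategy canvas in Python.  Since the 4Cs and 4Us aren’t enumerated in this post, the factor names and 0-10 scores below are hypothetical stand-ins, not the actual HOT model data:

```python
# Sketch of a Blue Ocean "strategy canvas": each curve shows how strongly an
# access option delivers on each supply/demand factor. Factor names and
# scores are hypothetical placeholders for the post's 4Cs/4Us.

import matplotlib.pyplot as plt

factors = ["capacity", "coverage", "cost", "control",        # supply-side
           "ubiquity", "usability", "upgradability", "utility"]  # demand-side

scores = {
    "Fiber":              [10, 2, 3, 8, 2, 6, 9, 7],
    "4G macro cellular":  [4, 9, 2, 7, 9, 7, 5, 6],
    "WiFi offload (HOT)": [8, 6, 9, 5, 7, 8, 8, 9],
}

for name, vals in scores.items():
    plt.plot(factors, vals, marker="o", label=name)

plt.xticks(rotation=30)
plt.ylabel("relative offering level (0-10)")
plt.title("Illustrative strategy canvas: rural broadband access")
plt.legend()
plt.tight_layout()
plt.show()
```

The “blue ocean” reads directly off the chart: wherever the offload curve sits above both incumbents while they sit low, that is uncontested space.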

The same approach can be used to rate mobile operating systems and any other product at a boundary of the InfoStack, or any horizontal or vertical solution in the market.  We'll do some of that in upcoming pieces.

Posted by: Michael Elling AT 09:49 am   |  Permalink   |  0 Comments  |  Email
Sunday, June 03 2012

Since I began covering the sector in 1990, I’ve been waiting for Big Bang II.  An adult flick?  No, the sequel to Big Bang (aka the breakup of MaBell and the introduction of equal access) was supposed to be the breakup of the local monopoly.  Well, thanks to the Telecom Act of 1996 and the well-intentioned farce that it was, that didn’t happen, and equal access officially died (equal access RIP) in 2005 with the Supreme Court's Brand X decision upholding the FCC.  If it died, then we saw a resurrection that few noticed.

I am announcing that equal access is alive and well, albeit in a totally unexpected way.  Thanks to Steve Jobs’ epochal demands placed on AT&T to counter its terrible 2G/3G network coverage and throughput, every smartphone has an 802.11 (WiFi) backdoor built in.  Together with the Apple and Google operating systems being firmly out of carriers’ hands and scaling across other devices (tablets, etc…), a large ecosystem of over-the-top (OTT), unified communications and traffic-offloading applications is developing to attack the wireless hegemony.

First, a little history.  Around the time of AT&T's breakup the government implemented 2 forms of equal access.  Dial-1 equal access in long distance made marketing- and application-driven voice resellers out of the long-distance competitors.  The FCC also mandated A/B cellular interconnect to ensure nationwide buildout of both cellular networks; this was extended to nascent PCS providers in the early-to-mid 1990s, leading to dramatic price declines and enormous demand elasticities.  Earlier, the competitive WAN/IXC markets of the 1980s led to rapid price reductions and to monopoly (Baby Bell or ILEC) pricing responses that created the economic foundations of the internet in layers 1 and 2: aka flat-rate or "unlimited" local dial-up.  The FCC protected the nascent ISPs by preventing the Bells from interfering at layer 2 or above.  Of course this distinction of MAN/LAN "net neutrality" went away with the advent of broadband, and today it is really just about WAN/MAN fights between the new (converged) ISPs or broadband service providers like Comcast, Verizon, etc... and the OTT or content providers like Google, Facebook, Netflix, etc...

(Incidentally, the FCC ironically refers to access providers at the network’s edge, who have subsumed the term ISPs, or "internet service providers", as "core" providers, while the over-the-top (OTT) messaging, communications, e-commerce and video streaming providers, who reside at the real core, or WAN, are referred to as "edge" providers.  There are way, way too many inconsistencies for truly intelligent people to a) come up with and b) continue to promulgate!)

But a third form of equal access, this one totally unintended, happened with 802.11 (WiFi) in the mid 1990s.  WiFi became "nano-cellular" in that regulated power output limited hot-spot or cell size to ~300 feet, which had the effect of making the frequency band nearly infinitely divisible.  The combination was electric, and the market, unencumbered by monopoly standards and scaling along with related horizontal layer 2 data technologies (Ethernet), quickly seeded itself.  It really took off when Intel built WiFi capability directly into its Centrino chips in the early 2000s.  Before then, computers could only access WiFi with USB dongles or cables tethered to 2G phones.

Cisco just forecast that 50% of all internet traffic will be generated from 802.11 (WiFi) connected devices.  Given that 802.11’s costs are 1/10th those of 4G, something HAS to give for the communications carriers.  We’ve talked about their need to better address the pricing paradox of voice and data, as well as the potential for real obviation at the hands of the application and control layer worlds.  While they might think they have a near monopoly on the lower layers, Steve Jobs’ ghost may well come back to haunt them if alternative access networks/topologies get developed that take advantage of this equal access.  For these networks to happen, their builders will need to think digital; understand, project and foster vertically complete systems; and be able to turn the "lightswitch on" for their addressable markets.

Posted by: Michael Elling AT 10:21 am   |  Permalink   |  2 Comments  |  Email
Sunday, March 11 2012

I first started using clouds in my presentations in 1990 to illustrate Metcalfe’s Law and how data would scale and supersede voice.  John McQuillan and his Next Generation Networks (NGN) conferences were my inspiration and source.  In the mid-2000s I used them to illustrate the potential for a world of unlimited demand ecosystems: commercial, consumer, social, financial, etc…  Cloud computing has now become part of the everyday vernacular.  The problem is that for cloud computing to expand, the world of networks needs to go flat, or horizontal, as in the complex-looking illustration to the left.

This is a static view.  Add some temporality and rapidly shifting supply/demand dynamics, and the debate begins as to whether the system should be centralized or decentralized.  Yes and no.  There are 3 main network types: hierarchical, centralized and fully distributed (aka peer-to-peer).  None fully accommodates Metcalfe’s, Moore’s and Zipf’s laws.  Network theory needs to capture the dynamic of a new service or technology introduction that initially is used by a small group but then rapidly scales to many.  Processing/intelligence initially must be centralized, but then traffic and signaling volumes dictate pushing the intelligence to the edge.  The illustration to the right begins to convey that lateral motion in a flat, layered architecture, driven by the 2-way, synchronous nature of traffic, albeit with the signaling and transactions moving vertically up and down.
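To make that tension concrete, compare Metcalfe’s n² value growth against the Zipf-weighted alternative (roughly n·ln n, per the well-known Briscoe/Odlyzko critique of Metcalfe); a minimal sketch:

```python
# Contrast two network-value models as the user base n grows:
#   Metcalfe's law:        value ~ n^2      (every pair equally valuable)
#   Zipf-weighted variant: value ~ n*ln(n)  (the k-th most valuable connection
#                          is worth 1/k, per the Briscoe/Odlyzko critique)
import math

for n in [10, 100, 1_000, 10_000, 100_000]:
    metcalfe = n * n
    zipf = n * math.log(n)
    print(f"n={n:>7,}: metcalfe {metcalfe:>14,.0f}  zipf {zipf:>12,.0f}  "
          f"gap {metcalfe / zipf:>8,.0f}x")
```

The widening gap is the point: how much a marginal connection is worth, and therefore where intelligence should sit, changes with scale, so no single static topology stays optimal.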

But just as solutions begin to scale, a new service is born, superseding the original.  From the outside this chaotic view looks like an organism in a constant state of expansion, then collapse, expansion, then collapse, etc…

A new network theory that controls and accounts for this constant state of creative destruction* is Centralized Hierarchical Networks (CHNs).   A search on Google and DuckDuckGo reveals no known prior attribution, so Information Velocity Partners, LLC (aka IVP Capital, LLC) both lays claim to the term and offers it up under creative commons (CC).  I actually coined the CHN term in 2004 at a symposium held by Telcordia (now an Ericsson subsidiary).

CHN theory fully explains the movement from mainframe to PC to cloud.  It explains the growth of switches, routers and data centers in networks over time.  And it should be used as a model to explain how optical computing and storage in the core, and fiber, MIMO transmission and cognitive radios at the edge, get introduced and scaled.  Mobile broadband and 7x24 access/syncing by smartphones are already beginning to reveal the pressures on a vertically integrated world and the need to evolve business models and strategies toward centralized hierarchical networking.

* Interesting to note that "creative destruction" was originally used in far-left Marxist doctrine in the 1840s but was subsumed into, and became associated with, far-right Austrian School economic theory in the 1950s.  This underscores my view that often little difference lies between far-left and far-right on a continuous, circular political/economic spectrum.

Related Reading:
Decentralizing the Cloud.  Not exactly IMO.

Network resources will always be heterogeneous.

Everything gets pushed to the edge in this perspective.

Posted by: Michael Elling AT 08:42 am   |  Permalink   |  0 Comments  |  Email
Sunday, January 22 2012

Data is just going nuts!  Big data, little data, smartphones, clouds, application ecosystems.  So why are Apple and Equinix two of only a few large-cap companies in this area with stocks up over 35% over the past 12 months, while AT&T, Verizon, Google and Sprint are market performers or worse?   It has to do with pricing, revenues, margins and capex, all of which impact ROI.  The former pair’s ROI is going up while the latter group’s is flat to declining.  And this is all due to the wildness of mobile data.

Data services have been revealing flaws and weaknesses in the carriers’ pricing models and networks for some time, but now the ante is being upped.  Smartphones now account for almost all new phones sold, and soon they will represent over 50% of every carrier’s base, likely ending this year at over 66%.  That might look good, except when we look at these statistics and facts:

  • 1% of wireless users use 50% of the bandwidth, while the top 10% account for 90%.  That means 9 out of 10 users account for only 10% of network consumption; they are clearly overpaying for what they get.
  • 4G smartphone displays (720x1280 pixels) allow video viewing that uses 300x more capacity than voice.
  • Streaming just 2 hours of music daily off a cloud service soaks up 3.5 GB per month (see the sanity-check sketch after this list).
  • Carriers still derive more than 2/3 of their revenues from voice.
  • Cellular wireless (just like WiFi) is shared. 
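That music figure is easy to sanity-check; a minimal sketch, assuming a typical ~128 kbps stream (the post doesn’t state a bitrate):

```python
# Sanity-check the "2 hours of music daily ~= 3.5 GB/month" figure.
# Assumption: a typical 128 kbps audio stream (bitrate not given in the post).

bitrate_kbps = 128
hours_per_day = 2
days_per_month = 30

bytes_per_day = bitrate_kbps * 1000 / 8 * hours_per_day * 3600
gb_per_month = bytes_per_day * days_per_month / 1e9
print(f"{gb_per_month:.2f} GB/month")   # ~3.46 GB: matches the ~3.5 GB claim
```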

Putting this together, you can see that, on the one hand, a very small percentage of users use the bulk of the network.  On the other, voice pricing and revenues are way out of sync with corresponding data pricing and revenues, especially as OTT IP voice and other applications become pervasive.

Furthermore, video, which is growing in popularity, will end up using 90% of capacity, crowding out everything else, unless carriers change pricing to reflect differences in both marginal users and marginal applications.  Marginal here = high-volume/leading-edge.

So how are carriers responding?  By raising data prices.  This started over a year ago when they began capping those “unlimited” data plans.  Now they are raising prices and doing so in wild and wacky ways; ways we think will come back to haunt them, just like wild party photos on FB.  Here are just two of many examples:

  • This past week AT&T simplified its pricing and scored a marketing coup by offering more for more while lowering unit prices, even as the media reported AT&T as “raising prices.”  They sell you a bigger block of data at a higher initial price and then charge the same rate for additional blocks, which may or may not be used.  Got that?
  • On the other hand, that might be better than Walmart’s new unlimited data plan, which requires PhD-level math skills to understand.  Let me try to explain as simply as possible.  Via T-Mobile, they offer 5GB/month at 3G speed; thereafter (the unlimited part) they throttle to 2G speed.  But after March 16 the numbers will change to 250MB initially at 3G, then unlimited 2G speeds after that (see the sketch after this list).  Beware the Ides of March’s consumer backlash!
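To untangle the Walmart plan, the sketch below computes the theoretical monthly ceiling under each regime; the allotments come from the post, while the assumed throttled 2G rate (~0.1 Mbps) is a hypothetical round number:

```python
# Work through the Walmart/T-Mobile plan: a full-speed allotment, then
# unlimited-but-throttled 2G. The GB/MB thresholds come from the post;
# the 2G throttle rate (~0.1 Mbps) is an illustrative assumption.

def max_monthly_gb(full_speed_gb, throttled_mbps, seconds_in_month=30*24*3600):
    """Full-speed allotment plus the most you could pull at 2G all month."""
    throttled_gb = throttled_mbps / 8 * seconds_in_month / 1000  # Mbps -> GB
    return full_speed_gb + throttled_gb

print(f"before March 16: up to {max_monthly_gb(5.0, 0.1):.0f} GB/month")
print(f"after March 16:  up to {max_monthly_gb(0.25, 0.1):.0f} GB/month")
```

The "unlimited" tail dominates either way, which is exactly why the headline allotment change is so hard for a consumer to price.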

Unless the carriers and their channels start coming up with realistic offload solutions, like France’s Free, and pricing that better matches underlying consumption, they will continue to generate lower or negative ROI.  They need to get control of wild data.  If they do not, the markets and customers will.  With smartphones (like Apple's, who by the way drove WiFi as a feature knowing that AT&T's network was subpar) and cloud-based solutions (hosted by Equinix) it is becoming easier for companies like Republic Wireless to virtually bypass the expensive carrier plans using the carriers' very own networks.  AT&T, VZ and Sprint will continue to be market performers at best.

Related Reading/Viewing:
AT&T pricing relies heavily on breakage
Useful stats on data growth from MobileFuture
FoxNews report on AT&T Data Throttling
This article actually suggests "dissuading usage" as 1 of 4 solutions
Consumer Reports article in which data-equivalent voice pricing = 18 cents, OTT on the lowest plan = 6.6 cents, and OTT on the highest plan = 1 cent
New shared data plans set a new high standard

Posted by: Michael Elling AT 09:13 am   |  Permalink   |  0 Comments  |  Email
Sunday, November 13 2011

A humble networking protocol 10 years ago, packet-based Ethernet (invented at Xerox in 1973) has now ascended to the top of the carrier networking pyramid, over traditional voice circuit (time-based) protocols, due to the growth in data networks (storage and application connectivity) and 3G wireless.  According to AboveNet, the top 3 CIO priorities are cloud computing, virtualization and mobile, up from spots 16, 3 and 12, respectively, just 2 years ago!   Ethernet now accounts for 36% of all access, larger than any other single legacy technology, up from nothing 10 years ago when the Metro Ethernet Forum was established.  With gigabit and terabit speeds, Ethernet is the only protocol for the future.

The recent Ethernet Expo 2011 in NYC underscored the trends and the importance of what is going on in the market.  Just like fiber and high-capacity wireless (MIMO) in the physical layer (aka layer 1), Ethernet has significant price/performance advantages in transport networks (aka layer 2).  This graphic illustrates why it has spread through the landscape so rapidly from LAN to MAN to WAN.   With 75% of US business buildings lacking access to fiber, Ethernet over Copper (EoC) will be the preferred access solution.  As bandwidth demand increases, Ethernet has a 5-10x price/performance advantage over legacy equipment.

Ethernet is also getting smarter, via the pejoratively coined SPIT (Service Provider Information Technology).  The graphic below shows how the growing horizontalization is supported by vertical integration of information (i.e., exchanges) that will make Ethernet truly “on-demand”.  This model is critical because of both the variability and the dispersion of traffic brought on by mobility and cloud computing.  Already, the underlying layers are being “re”-developed by companies like AlliedFiber, who are building new WAN fiber with interconnection points every 60 miles.  It will all be Ethernet.  Ultimately, app providers may centralize intelligence at these points, just as Akamai pushed content storage toward the edge of the network for Web 1.0.  At the core and at key boundary points, Ethernet exchanges will begin to develop.  Right now network connections are mostly private, and there is significant debate as to whether there will be carrier exchanges.  The reality is that there will be exchanges in the future; and not just horizontal but vertical as well, to facilitate new service creation and a far larger range of on-demand bandwidth solutions.

By the way, I found this “old” (circa 2005) chart from the MEF illustrating what and where Ethernet is in the network stack.  It is consistent with my own definition of Web 1.0 as a 4-layer stack.  Replace layer 4 with clouds and mobile and you get a sense of how much greater the complexity is today.  When you compare it to the above charts you see how far Ethernet has evolved in a very short time, and why companies like Telx, Equinix (8.6x cash flow) and Neutral Tandem (3.5x cash flow) will be interesting to watch, as well as larger carriers like Megapath and AboveNet (8.2x cash flow).   Certainly the next 3-5 years will see significant growth in Ethernet and the obsolescence of the PSTN and legacy voice (time-based) technologies.

Related Reading:
CoreSite and other data centers connect directly to Amazon AWS

Equinix and Neutral Tandem provide seamless service

Posted by: Michael Elling AT 12:46 pm   |  Permalink   |  0 Comments  |  Email

