SpectralShifts Blog 
Wednesday, July 31 2013

Is Google Trying to Block Web 4.0?

Back in 1998 I wrote, “if you want to break up the Microsoft software monopoly then break up the Baby Bell last-mile access monopoly.”  Market-driven broadband competition and higher-capacity digital wireless networks gave rise to the iOS and Android operating systems over the following decade, which undid the Windows monopoly.  The 2013 redux to that perspective is, once again, “if you want to break up the Google search monopoly then break up the cable/telco last-mile monopolies.”

Google is an amazing company, promoting the digital pricing and horizontal service provider spirit more than anyone.  But Google is motivated by profit and will seek to grow that profit as best it can, even if contrary to founding principles and market conditions that fueled its success (aka net neutrality or equal access).  Now that Google is getting into the lower layers in the last mile they are running into paradoxes and conflicts over net neutrality/equal access and in danger of becoming just another vertical monopoly.  (Milo Medin provides an explanation in the 50th minute in this video, but it is self-serving, disingenuous and avoids confronting the critical issue for networks going forward.)

Contrary to many people’s beliefs, the upper and lower layers have always been inextricably interdependent and nowhere was this more evident than with the birth of the internet out of the flat-rate dial-up networks of the mid to late 1980s (a result of dial-1 equal access).  The nascent ISPs that scaled in the 1980s on layer 1-2 data bypass networks were likewise protected by Computers II-III (aka net neutrality) and benefited from competitive (WAN) transport markets.

Few realize or accept that the genesis of Web 1.0 (W1.0) was the break-up of AT&T in 1983.   Officially birthed in 1990, it was an open, 1-way store-and-forward database lookup platform on which 3 major applications/ecosystems scaled beginning in late 1994 with the advent of the browser: communications (email and messaging), commerce, and text and visual content.  Even though everything was narrowband, W1.0 began the inexorable computing collapse back to the core, aka the cloud (4 posts on the computing cycle and relationship to networks).  The fact that it was narrowband didn't prevent folks like Mark Cuban and Jeff Bezos from envisioning and selling a broadband future 10 years hence.   Regardless, W1.0 started collapsing in 1999 as it ran smack into an analog dial-up brick wall.  Google hit the bigtime that year and scaled into the early 2000s by following KISS and freemium business-model principles.  Ironically, Google’s chief virtue was taking advantage of W1.0’s primary weakness.

Web 2.0 grew out of the ashes of W1.0 in 2002-2003.  W2.0 both resulted from and fueled the broadband (BB) wars starting in the late 1990s between the cable (offensive) and telephone (defensive) companies.  BB penetration reached 40% in 2005, a critical tipping point for the network effect, exactly when YouTube burst on the scene.  Importantly, BB (which doesn't have equal access, under the guise of "deregulation") wouldn’t have occurred without W1.0 and the above two forms of equal access in voice and data during the 1980s-90s.  W2.0 and BB were mutually dependent, much like the hardware/software Wintel model.   BB enabled the web to become rich-media and mostly 2-way and interactive.  Rich-media driven blogging, commenting, user generated content and social media started during the W1.0 collapse and began to scale after 2005.

“The Cloud” also first entered people’s lingo during this transition.  Google simultaneously acquired YouTube in the upper layers to scale its upper and lower layer presence and traffic and vertically integrated and consolidated the ad exchange market in the middle layers during 2006-2008.  Prior to that, and perhaps anticipating lack of competitive markets due to "deregulation" of special access, or perhaps sensing its own potential WAN-side scale, the company secured low-cost fiber rights nationwide in the early 2000s following the CLEC/IXC bust and continued throughout the decade as it built its own layer 2-3 transport, storage, switching and processing platform.  Note, the 2000s was THE decade of both vertical integration and horizontal consolidation across the board aided by these “deregulatory” political forces.  (Second note, "deregulatory" should be interpreted in the most sarcastic and insidious manner.)

Web 3.0 began officially with the iPhone in 2007.  The smartphone enabled 7x24 and real-time access and content generation, but it would not have scaled without wifi’s speed, as 3G wireless networks were at best late 1990s era BB speeds and didn’t become geographically ubiquitous until the late 2000s.  The combination of wifi (high speeds when stationary) and 3G (connectivity when mobile) was enough though to offset any degradation to user experience.  Again, few appreciate or realize that  W3.0 resulted from two additional forms of equal access, namely cellular A/B interconnect from the early 1980s (extended to new digital PCS entrants in the mid 1990s) and wifi’s shared spectrum.  One can argue that Steve Jobs single-handedly resurrected equal access with his AT&T agreement ensuring agnostic access for applications.  Surprisingly, this latter point was not highlighted in Isaacson's excellent biography.  Importantly, we would not have had the smartphone revolution were it not for Jobs' equal access efforts.

W3.0 proved that real-time, all the time "semi-narrowband" (given the contexts and constraints around the smartphone interface) trumped store and forward "broadband" on the fixed PC for 80% of people’s “web” experience (connectivity and interaction was more important than speed), as PC makers only realized by the late 2000s.  Hence the death of the Wintel monopoly, not by government decree, but by market forces 10 years after the first anti-trust attempts.  Simultaneously, the cloud became the accepted processing model, coming full circle back to the centralized mainframe model circa 1980 before the PC and slow-speed telephone network led to its relative demise.  This circularity further underscores not only the interplay between upper and lower layers but between edge and core in the InfoStack.  Importantly, Google acquired Android in 2005, well before W3.0 began as they correctly foresaw that small-screens and mobile data networks would foster the development of applications and attendant ecosystems would intrude on browser usage and its advertising (near) monopoly. 

Web 4.0 is developing as we speak and no one is driving it and attempting to influence it more with its WAN-side scale than Google.  W4.0 will be a full-duplex, 2-way, all-the-time, high-definition, application-driven platform that knows no geographic or market segment boundaries.  It will be engaging and interactive on every sensory front; not just those in our immediate presence, but everywhere (aka the internet of things).  With Glass, Google is already well on its way to developing and dominating this future ecosystem.  With KC Fiber, Google is illustrating how it should be priced and what speeds will be necessary.  As W4.0 develops the cloud will extend to the edge.  Processing will be both centralized and distributed depending on the application and the context.  There will be a constant state of flux between layers 1 and 3 (transport and switching), between upper and lower layers, between software and hardware at every boundary point, and between core and edge processing and storage.  It will dramatically empower the end-user and change our society more fundamentally than what we’ve witnessed over the past 30 years.  Unfortunately, regulators have no gameplan on how to model or develop policy around W4.0.

The missing pieces for W4.0 are fiber-based and super-high-capacity wireless access networks in the lower layers, settlement exchanges in the middle layers, and cross-silo ecosystems in the upper layers.   Many of these elements are developing in the market naturally: big data, hetnets, SDN, OpenFlow, open OSs like Android and Mozilla, etc…  Google’s strategy appears consistent and well coordinated to tackle these issues, if not far ahead of others.  But its vertically integrated service provider model and stance on net neutrality in KC are in conflict with the principles that have so far led to its success.

Google is buying into the vertical monopoly mindset to preserve its profit base instead of teaching regulators and the markets about the virtues of open or equal access across every layer and boundary point (something clearly missing from Tim Wu's and Bob Atkinson's definitions of net neutrality).  In the process it is impeding the development of W4.0.  Governments could solve this problem by simply conditioning any service provider with access to a public right of way or frequency to equal access in layers 1 and 2; along with a quid pro quo that every user has a right to access unhindered by landlords and local governments within economic and aesthetic reason.  (The latter is a bone we can toss all the lawyers who will be looking for new work in the process of simpler regulations.)  Google and the entire market will benefit tremendously by this approach.  Who will get there first?  The market (Google or MSFT/AAPL if the latter are truly hungry, visionary and/or desperate) or the FCC?  Originally hopeful, I’ve become less sure of the former over the past 12 months.  So we may be reliant on the latter.

Related Reading:

Free and Open Depend on Where You Are in Google's InfoStack

InfoWorld defends Google based on its interpretation of NN; of which there are 4

DSL Reports thinks Google is within its rights because it expects to offer enterprise service.  Only Google is not offering it and heretofore had not mentioned it.

A good article on Gizmodo about the state of the web and what "we" are giving up to Google

The datacenter as an "open access" boundary.  What happens today in the DC will happen tomorrow elsewhere.

Posted by: Michael Elling AT 10:57 am   |  Permalink   |  4 Comments  |  Email
Wednesday, February 06 2013

Is IP Growing Up? Is TCPOSIP the New Protocol Stack? Will Sessions Pay For Networks?

Oracle’s purchase of Acme Packet, the leading session border controller (SBC) vendor, is a tiny seismic event in the technology and communications (ICT) landscape.  Few notice the potential for much broader upheaval ahead.

SBCs, which have been around since 2000, facilitate traffic flow between different networks: IP to PSTN to IP, and IP to IP.  Historically that traffic has been mostly voice, where minutes and costs count because that world has been mostly rate-based.  Increasingly SBCs are being used to manage and facilitate any type of traffic “session” across an array of public and private networks, be it voice, data, or video.  The reasons are many-fold, including security, quality of service, cost, and new service creation; all things TCP/IP doesn't account for.

Session control is layer 5 atop TCP/IP’s 4-layer stack.  A couple of weeks ago I pointed out that most internet wonks and bigots deride the OSI framework and feel that the 4-layer TCP/IP protocol stack won the “war”.  But here is proof that, as with all wars, the victors typically subsume the best elements and qualities of the vanquished.
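The merged stack the title alludes to can be sketched in a few lines of Python.  The layer names below are the conventional ones; "TCPOSIP" is of course this post's coinage, not any standard's term:

```python
# A toy illustration (not any standard's definition) of "TCPOSIP":
# the 4-layer TCP/IP stack absorbing OSI's session layer (layer 5).
TCP_IP = ["link", "internet", "transport", "application"]
OSI = ["physical", "data link", "network", "transport",
       "session", "presentation", "application"]

# The hybrid stack: TCP/IP plus an explicit session layer beneath the
# application, where SBC-style policy (QoS, billing, security) would live.
TCPOSIP = ["link", "internet", "transport", "session", "application"]

assert "session" not in TCP_IP               # TCP/IP has no session layer...
assert "session" in OSI and "session" in TCPOSIP   # ...OSI and the hybrid do
```

The point of the sketch is simply that session control is an insertion between transport and application, not a replacement of either.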

The single biggest hole in the internet and IP world view is bill and keep.  Bill and keep’s origins derive from the fact that most of the overhead in data networks was fixed in the 1970s and 1980s.  The component costs were relatively cheap compared with the mainframe costs that were being shared and the recurring transport/network costs were being arbitraged and shared by those protocols.  All the players, or nodes, were known and users connected via their mainframes.  The PC and ethernet (a private networking/transmission protocol) came along and scaled much later.  So why bother with expensive and unnecessary QoS, billing, mediation and security in layers 5 and 6?

Then along came the break-up of AT&T.  Due to dial-1 equal access, the Baby Bells responded in the mid to late 1980s with flat-rate, expanded area (LATA) pricing plans to build a bigger moat around their Class 5 monopoly castles; just as AT&T had built 50-mile interconnect exclusion zones in the 1913 Kingsbury Commitment due to the threat of wireless bypass even back then, and just as OTT providers like Netflix are battling incumbent broadband monopolies today.  The nascent commercial ISPs took advantage of these flat-rate zones, invested in channel banks, got local DIDs, and the rest, as they say, is history.  Staying connected all day on a single flat rate back then was perceived of as "free".  So the "internet" scaled from this pricing loophole (even as the ISPs received much-needed shelter from vertical integration by the monopoly Bells in Computers 2/3) and further benefited from WAN competition and commoditization of transport, which connected all the distributed router networks into seamless regional and national layer 1-2 low-cost footprints even before www, http/html and the browser hit in the early to mid 1990s.  The marginal cost of "interconnecting" these layer 1-2 networks was infinitesimal at best, and therefore bill and keep, or settlement-free peering, made a lot of sense.

But Bill and Keep (B&K) has three problems:

  • It supports incumbents and precludes new entrants
  • It stifles new service creation
  • It precludes centralized procurement and subsidization

With Acme, Oracle can provide solutions to problems two and three, with the smartphone driving the process.  Oracle has Java on 3 billion phones around the globe.  Now imagine a session controller client on each device that can help with application and access management, preferential routing, billing, etc., along with guaranteed QoS and real-time performance metrics and auditing; regardless of what network the device is currently on.  The same holds in reverse in terms of managing "session state" across multiple devices/screens across wired and wireless networks.
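As a minimal sketch of what such a device-side session controller might do, consider the routing decision alone.  Everything here is invented for illustration: the route attributes, the policy, and the names are assumptions, not any vendor's API:

```python
# Hypothetical device-side session controller: pick a route per session
# based on whether the session needs QoS guarantees, else minimize cost.
from dataclasses import dataclass

@dataclass
class Route:
    network: str        # e.g. "wifi", "lte", "wired"
    cost_per_mb: float  # illustrative per-MB price
    supports_qos: bool  # can this path honor a QoS guarantee?

def pick_route(routes, needs_qos: bool):
    """Prefer QoS-capable routes when the session needs guarantees,
    otherwise minimize cost -- the 'preferential routing' role of an SBC."""
    candidates = [r for r in routes if r.supports_qos] if needs_qos else routes
    if not candidates:
        return None  # no compliant route: refuse or degrade the session
    return min(candidates, key=lambda r: r.cost_per_mb)

routes = [Route("wifi", 0.001, False), Route("lte", 0.01, True)]
assert pick_route(routes, needs_qos=True).network == "lte"
assert pick_route(routes, needs_qos=False).network == "wifi"
```

The same decision logic, run in reverse by the network side, is what would let session state follow the user across screens and access networks.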

The alternative to B&K is what I refer to as balanced settlements.  In traditional telecom parlance, instead of just being calling-party-pays, they can be both called- and calling-party-pays, and they are far from the regulated monopoly origination/termination tariffs.  Their pricing (transaction fees) will reflect marginal costs and therefore stimulate and serve marginal demand.  As a result, balanced settlements provide a way for rapid, coordinated rollout of new services and infrastructure investment across all layers and boundary points.  They provide the price signals that IP does not.

Balanced settlements clear supply and demand north-south between upper (application) and lower (switching, transport and access) layers, as well as east-west from one network or application or service provider to another.  Major technological shifts in the network layers like OpenFlow, software defined networks (SDN) and network function virtualization (NFV) can develop rapidly.   Balanced settlements will reside in competitive exchanges evolving out of today's telecom tandem networks, confederations of service provider APIs, and the IP world's peering fabric, driven by big data analytical engines and advertising exchanges.
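A toy clearing calculation may make the north-south/east-west idea concrete.  This is a hedged sketch, not any exchange's actual mechanics; the fee, the calling/called split, and the layer weights are all invented numbers:

```python
# Toy "balanced settlement": both called and calling parties fund a session
# fee (east-west), which an exchange splits among the providers in each
# layer that carried the session (north-south). All figures illustrative.
def settle(session_fee: float, calling_share: float, layer_weights: dict):
    """Split a session fee across layers; calling_share sets the
    east-west split between calling and called parties."""
    total_weight = sum(layer_weights.values())
    payouts = {layer: session_fee * w / total_weight
               for layer, w in layer_weights.items()}
    funding = {"calling_party": session_fee * calling_share,
               "called_party": session_fee * (1 - calling_share)}
    return funding, payouts

funding, payouts = settle(0.10, 0.6,
                          {"access": 3, "transport": 1, "application": 1})
assert abs(sum(payouts.values()) - 0.10) < 1e-9   # fee fully distributed
assert abs(sum(funding.values()) - 0.10) < 1e-9   # fee fully funded
```

The design point is that the fee is a price signal: weighting access most heavily, as above, is one way an exchange could steer investment toward the scarcest layer.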

Perhaps most importantly, balanced settlements enable subsidization or procurement of edge access from the core.  Large companies and institutions can centrally drive and pay for high-definition telework, telemedicine, tele-education, etc... solutions across a variety of access networks (fixed and wireless).  The telcos refer to this as guaranteed quality of service leading to "internet fast lanes."  Enterprises will do this to further digitize and economize their own operations and distribution reach (HD collaboration and the internet of things), just like 800, prepaid calling cards, VPNs and the internet itself did in the 1980s-90s.  I call this process marrying the communications event to the commercial/economic transaction, and it results in more revenue per line or subscriber than today's edge subscription model.  As well, as more companies and institutions increasingly rely on the networks, they will demand backups, insurance and redundancy, ensuring that there will be continuous investment in multiple layer 1 access networks.

Along with open or shared access in layer 1 (something we should have agreed to in principle back in 1913 and again in 1934, given that governments provide service providers a public right of way or frequency), balanced settlements can also be an answer to inefficient universal service subsidies.  Three trends will drive this.  First, efficient loading of networks and demand for ubiquitous high-definition services by mobile users will require inexpensive, uniform access everywhere, with concurrent investment in high-capacity fiber and wireless end-points.  Second, urban demand will naturally pay for rural demand in the process due to societal mobility.  Finally, the high-volume, low-marginal-cost user (enterprise or institution) will amortize and pay for the low-volume, high-marginal-cost user to be part of its "economic ecosystem", thereby reducing the digital divide.

Related Reading:

TechZone 360 Analyzes the Deal

Acme Enables Skype Bandwidth On Demand
 

Posted by: Michael Elling AT 10:05 am   |  Permalink   |  0 Comments  |  Email
Monday, January 28 2013

TCP/IP Won, OSI Lost.  Or Did It?  Clue: Both Are Horizontal

George Santayana said, “Those who cannot remember the past are condemned to repeat it.”  What he didn’t add, as it might have undermined his point, is that “history gets created in one moment and gets revised the next.”  That’s what I like to say.  And nothing could be more true when it comes to current telecom and infomedia policy and structure.  How can anyone in government, academia, capital markets or the trade learn from history and make good long-term decisions if they don’t have the facts straight?

I finished a book about the origins of the internet (ARPAnet, CSnet, NSFnet) called “Where Wizards Stay Up Late: The Origins of the Internet” by Katie Hafner and Matthew Lyon, written back in 1996, before the bubble and crash of web 1.0.  It’s been a major read for computer geeks and has some lessons for people interested in information industry structures and business models.  I cross both boundaries and was equally fascinated by the “anti-establishment” approach of the group of scientists and business developers at BBN, the DoD and academia, as well as the haphazard and evolutionary approach to development that resulted in an ecosystem very similar to what the original founders envisioned in the 1950s.

The book has become something of a bible for internet, and what I refer to as upper layer (application), fashionistas who, unfortunately, have (and are provided in the book with) very little understanding of the middle and lower layers of the service provider “stack”.  In fact the middle layers all but disappear as far as they are concerned.  While those upper layer fashionistas would like to simplify things and say, “so and so was a founder or chief contributor of the internet,” or “TCP/IP won and OSI lost,” actual history and reality suggest otherwise.

Ironically, the best way to look at the evolution of the internet is via the oft-maligned 7-layer OSI reference model.  It happens to be the basis for one dimension of the InfoStack analytical engine.  The InfoStack relates the horizontal layers (what we call the service provisioning checklist for a complete solution) to geographic dispersion of traffic and demand on a 2nd axis, and to a 3rd axis which historically covered 4 disparate networks and business models but now maps to applications and market segments.  Looking at how products, solutions and business models unfold along these axes provides a much better understanding of what really happens, as 3 coordinates or vectors provide better than 90% accuracy around any given datapoint.
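Assuming the three axes can be discretized, the InfoStack's coordinate idea might be sketched as follows.  The axis values below are illustrative placeholders, not the author's actual taxonomy:

```python
# Hedged sketch of the InfoStack: locate an offering at a point in a
# 3-axis space (layer, geography, market segment). Axis values invented.
LAYERS = ["access", "transport", "switching", "session", "application"]
GEOGRAPHIES = ["premises", "metro", "regional", "national", "global"]
SEGMENTS = ["consumer", "enterprise", "government", "wholesale"]

def locate(layer: str, geography: str, segment: str):
    """Return an (x, y, z) coordinate for a product in the InfoStack."""
    return (LAYERS.index(layer), GEOGRAPHIES.index(geography),
            SEGMENTS.index(segment))

# e.g. a consumer fiber-to-the-home offering: lower layer, local, consumer
assert locate("access", "premises", "consumer") == (0, 0, 0)
# e.g. a global wholesale application exchange sits at the opposite corner
assert locate("application", "global", "wholesale") == (4, 4, 3)
```

Plotting competing products as points in this space is one way to see which layers and geographies a given business model actually occupies.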

The book spans the time between the late 1950s and the early 1990s, but focuses principally on the late 1960s and early 1970s.  Computers were enormously expensive and shared by users, but mostly on a local basis because of high cost and slow connections.  No mention is made of the struggle modems and hardware vendors had to get level access to the telephone system, and PCs had yet to burst on the scene.  The issues around the high-cost monopoly communications network run by AT&T are only briefly mentioned; their impact and import lost to the reader.

The book makes no mention that by the 1980s development of what became the internet ecosystem was really picking up steam.  After struggling since the 1950s to get a foothold on the “closed” MaBell system, smartmodems burst on the scene in 1981.  Modems accompanied technology developments that had been occurring with fax machines, answering machines and touchtone phones; all generative aspects of a nascent competitive voice/telecom market.

Then, in 1983, AT&T was broken up; an explosion in WAN (long-distance) competition drove pricing down, and advanced intelligent networks increased the possibility of dial-around bypass.  (Incidentally, by the 1990s touchtone penetration in the US was over 90% vs less than 20% in the rest of the world, driving not only explosive growth in 800 calling, but VPN and card calling, and last but not least the simple "touchtone" numeric pager; one of the precursors to our digital cellphone revolution.)  The Bells responded to this potential long-distance bypass threat by seeking regulatory relief with expanded calling areas and flat-rate calling to preserve their Class 5 switch monopoly.

All the while second line growth exploded, primarily as people connected fax machines and modems for their PCs to connect to commercial ISPs (Compuserve, Prodigy, AOL, etc...).  These ISPs benefited from low WAN costs (competitive transit in layer 2), inexpensive routing (compared with voice switches) in layer 3, and low-cost channel banks and DIDs in those expanded LATAs to which people could dial up flat-rate (read "free") and remain connected all day long.  The US was the only country in the world that had that type of pricing model in the 1980s and early 1990s. 

Another foundation of the internet ecosystem, the PC, burst from the lab (Xerox PARC) run by one of the founders of the ARPAnet, Bob Taylor, who could deserve equal or more credit than Bob Kahn or Vint Cerf (inventors of TCP) for development of the internet.  As well, two more technological underpinnings that scaled the internet, Ethernet and the graphical user interface that Windows later popularized, were developed at Xerox PARC.  These technology threads should have been better developed in the book for their role in the demand for and growth of the internet from the edge.

In the end, what really laid the foundation for the internet were numerous efforts in parallel that developed outside the monopoly network and highly regulated information markets.  These were all 'generative' to quote Zittrain.  (And as I said a few weeks ago, they were accidental.)  These parallel streams evolved into an ecosystem onto which www, http, html and Mosaic were laid--the middle and upper layers--of the 1.5-way, store-and-forward, database lookup “internet” in the early to mid 1990s.  Ironically and paradoxically this ecosystem came together just as the Telecom Act of 1996 was being formed and passed; underscored by the fact that the term “internet” is mentioned once in the entire Act, one of the reasons I labeled the Act “farcical” back in 1996.

But the biggest error of the book, in my opinion, is not the omission of all these parallel efforts and their due weight in the internet ecosystem, but rather its concluding with the notion that TCP/IP won and the OSI reference model lost.  This was disappointing and has had a huge, negative impact on perception and policy.  What the authors should have said is that a horizontally oriented, low-cost, open protocol, as part of a broader similarly oriented horizontal ecosystem, beat out a vertically integrated, expensive, closed and siloed solution from monopoly service providers and vendors.

With a distorted view of history, it is no wonder that the list of ironic and unfortunate paradoxes in policy and market outcomes goes on and on; people don’t fully understand what happened between TCP/IP and OSI and how the two are inextricably linked.  Until history is viewed and understood properly, we will be doomed, in the words of Santayana, to repeat it.  Or, as Karl Marx said, history repeats itself, “first as tragedy, then as farce.”

Posted by: Michael Elling AT 10:50 am   |  Permalink   |  0 Comments  |  Email
Friday, January 18 2013

Last summer I attended a Bingham event at the Discovery Theatre in NYC’s Times Square to celebrate the Terracotta Warriors of China’s first emperor, Qin Shi Huang.  What struck me was how far our Asian ancestors had advanced technically, socially and intellectually beyond our western forefathers by 200 BC.  Huang's reign, which included the building of major transportation and information networks, was followed by a period of nearly 1,500 years of relative peace (and stagnation) in China.  It would take another 1,000 years for the westerners to catch up during periods of war, plague and socio-political upheaval.  But once they passed their Asian brethren by the 15th and 16th centuries they never looked back.  Having just finished The Art of War, by Sun Tzu, I asked myself: is war and strife necessary for mankind to advance?

This question was reinforced over the holidays upon visiting the Loire Valley in France, which most people associate with beautiful Louis XIV chateaus, a rich fairy-tale medieval history, and good wines.  What most people don’t realize is that the Loire was a war-torn area for the better part of 400 years as the French (Counts of Blois) and English (Counts of Anjou; precursors to the Plantagenet dynasty of England) vied for domination of a then emerging Europe.  The parallels between China and France 1,000 years later couldn’t have been more poignant.

After the French finally kicked the English out in the 1400s, this once war-torn region became the center of the European renaissance and later the birthplace of the age of enlightenment.  François I brought Leonardo from Italy for the last 3 years of Leonardo's life, and the French seized upon his way of thinking; to be followed a few centuries later by Voltaire and Rousseau.  The French aristocracy, without wars to fight, invited them to stay in their chateaus, built on the fortifications of the medieval castles, and develop their enduring principles of liberty, equality and fraternity.  These in turn would become the foundations upon which America broadly based its constitution and structure of government; all of which in theory supports and leads to competitive markets and network neutrality, the basis of the internet.

And before I left on my trip, I bought a kindle version of Sex, Bombs and Burgers by Peter Nowak on the recommendation of an acquaintance at Bloomberg.  Nowak’s premise is to base much of America’s advancement and success over the past 50 years on our warrior instincts and need to procreate and sustain life.  I liked the book and recommend it to anyone, especially as I used to quip, “Web 1.0 of the 1990s was scaled by the 4 (application) horsemen: Content, Commerce, Communication and hard/soft-Core porn.”  But the book also provides great insights beyond the growth of porn on the internet into our food industry and where our current military investments might be taking us physically and biologically.

While the book meanders on occasion, my take-away and answer to my above question is that war (and the struggle to survive by procreating and eating) increases the rate of technological innovations, which often then result in new products; themselves often mistakes or unintended commercial consequences from their original military intent.  War increases the pace of innovation out of necessity, intensity and focus.  After all, our state of fear is unnaturally heightened when someone is trying to kill us, underscoring the notion that fear and greed are man’s primary psychological and commercial motivators; not love and happiness.

Most people generally believe the internet is an example of a technological innovation hatched from the militarily driven space race; which is the premise for another book I am just starting, Where Wizards Stay Up Late by Hafner and Lyon.  What most people fail to realize, including Nowak, is that the internet was an unintended consequence of the breakup of AT&T in 1983; another type of conflict or economic war, one that had been waged from the 1950s through the 1970s.  In that war we had General William McGowan of MCI (microwave, the M in MCI, was a technology principally scaled during WW II) battling MaBell along with his ally the DOJ.  At the same time, a group of civilian scientists in the Pentagon had been developing the ARPAnet, a weapon/tool developed to get around MaBell’s monopoly long-distance fortifications and enable low-cost computer communications across the US and globally.

The two conflicts aligned in the late 1980s as the remnants of MaBell, the Baby Bells, sought regulatory relief through state and federal regulators from a viciously competitive WAN/long-distance sector to preserve two arcane, piggishly profitable monopoly revenue streams; namely intrastate tolls and terminating access.  The regulatory relief provided was to expand local calling areas (LATAs) and go to flat rate (all you can eat) pricing models.  By then modems and routers, outgrowths of ARPA related initiatives, had gotten cheap enough that the earliest ISPs could cost effectively build and market their own layer 1-2 nationwide "data bypass" networks across 5,000 local calling areas.

These networks allowed people to dial up a free or low-cost local number and stay connected with a computer or database or server anywhere, all day long.  The notions of “free” and “cheap” and the collapse of distance were born.  The internet started and scaled in the US because of partially competitive communications networks, which no one else had in 1990.  It would be 10 years before the rest of the world had an unlimited flat-rate access topology like the US.

Only after these foundational (pricing and infrastructure) elements were in place did the government allow commercial nets to interconnect via the ARPAnet in 1988.  This was followed by Tim Berners-Lee's WWW in 1989 (a layer 3 address simplification standard) and http and html in subsequent years, providing the basis for a simple-to-use, mass-market browser, Mosaic, the precursor to Netscape, in 1993.  The result was the Internet, or Web 1.0: a 4- or 5-layer asynchronous communications stack mostly used as a store-and-forward database lookup tool.

The internet was the result of two wars being fought against the monopolies of the Soviet communists and American bellheads; both of which, ironically, share(d) common principles.  Participants and commentators in the current network neutrality, access/USF reform and ITU debates, including Nowak, should be aware of these conflict-driven beginnings of the internet, in particular the power and impact of price, as it would modify their positions significantly with respect to these debates.  Issues like horizontal scaling, vertical disintermediation and completeness, balanced settlement systems and open/equal access need to be better analyzed and addressed.  What we find in almost every instance on the part of every participant in these debates is hypocritical and paradoxical positions, since people do not fully appreciate history and how they arrived at their relative and absolute positions.

Posted by: Michael Elling AT 12:10 pm   |  Permalink   |  0 Comments  |  Email
Sunday, January 29 2012

Every institution, every industry, every company has undergone or is undergoing the transformation from analog to digital.  Many are failing, superseded by new entrants; nowhere more so than in the content and media industries: music, retail, radio, newspapers and publishing.  But why, especially when they've invested in the tools and systems to go digital?  Their failure can be summed up by this simple quote: "Our retail stores are all about customer service, and (so and so) shares that commitment like no one else we've met," said Apple's CEO. "We are thrilled to have him join our team and bring his incredible retail experience to Apple."

Think about what Apple's CEO emphasized.  "Customer service."  Not selling; and yet the stores accounted for $15B of product sold in 2011!  When you walk into an Apple store it is like no other retailing experience, precisely because Apple stood the retail model on its head.  Apple thought digital: it sold not just 4 or 5 products (yes, that's it) but rather 4-5 ecosystems that let individuals easily tailor their unique experience from beginning to end.

Analog does not scale.  Digital does.  Analog is manual.  Digital is automated.  Analog cannot easily be software-defined and repurposed.  Digital can.  Analog is expensively two-way.  With digital, two-way becomes ubiquitous and synchronous.  Analog is highly centralized.  Digital can be easily distributed.  All of this drives marginal cost down at every layer and boundary point, meaning performance/price constantly improves even as operator/vendor margins rise.

With digital, the long tail doesn't just become infinite; it gives way to endless new tails.  The (analog) incumbent sees digital as disruptive, with per-unit price declines and same-store revenues eroding.  They fail to see and benefit from relative cost declines and increased demand.  The latter invariably occurs due to a shift from "private" to public consumption, normal price elasticity, and "application" elasticity as the range of producers and consumers increases.  The result is overall revenue growth and margin expansion for every industry/market that has gone digital.

Digital also makes it easy for something that worked in one industry to be translated to another.  Bill Taylor of Fast Company recently wrote in HBR that keeping pace with rapid change in a digital world requires having the widest scope of vision and implementing successful ideas from other fields.

The film and media industries are a case in point.  As this infographic illustrates, Hollywood studios have resisted "thinking digital" for 80 years.  But there is much they could learn from the transformation of other information/content monopolies over the past 30 years.  This blog post from Fred Wilson sums up the issues between incumbents and new entrants well.  Hollywood would do well to listen and see what Apple did to the music industry and how fundamentally it changed it; because Apple is about to do the same to publishing and video.  If not Apple, then others.

Another aspect of digital is the potential for user innovation.  Digital companies should constantly look for innovation at the edge.  This implies a focus on the "marginal," not the average, consumer.  Social media is developing tremendous tools and data architectures for this.  If companies don't utilize these advances, those same users will develop new products and markets, as can be seen from the comments field of this blog on the financial services industry.

Digital is synonymous with flat, which drives greater scale efficiency into markets.  Flat (horizontal) systems tend toward vertical completeness via ecosystems (the Apple, Android or WinTel approach).  Apple IS NOT vertically integrated.  It has pieced together, and controls very effectively, vertically complete solutions.  In contrast, vertically integrated monopolies ultimately fail because they don't scale efficiently at every layer.  Thinking flat (horizontal) is the first step to digitization.

Related Reading:
Apple Answers the Question "Why?" Before "How?" And "What?"
US Govt to Textbook Publishers: You Will Go Digital!
This article confused vertical integration with vertical completeness
Walmart and Retailers Need to Rethink Strategy
Comparison of 3 top Fashion Retailers Web and App Strategies

Software Defined Networking translated to the real world

Posted by: Michael Elling AT 10:54 am   |  Permalink   |  0 Comments  |  Email
Thursday, January 05 2012

Counter-intuitive thinking often leads to success.  That's why we practice and practice, so that at a critical moment we are not governed by intuition (chance) or emotion (fear).  There is no better example of this than skiing, an apt metaphor this time of year.  Few self-propelled sports provide such high risk-reward, requiring mental, physical and emotional control.  To master skiing one has to master a) the fear of staying square to (looking/pointing down) the hill, b) keeping one's center over (or forward on) the skis, and c) keeping a majority of pressure on the downhill (or danger-zone) ski/edge.  Master these 3 things and you will become a marvelous skier.  Unfortunately, all 3 run counter to our intuitions, which are driven by fear and by the perceived safety of the woods at the side of the trail, of leaning back, and of climbing back uphill.  Overcoming any one is tough.

What got me thinking about all this was an Op-Ed in this morning's NYT by Vint Cerf (one of the godfathers of the Internet), which a) references major internet access policy reports and decisions, b) mildly supports the notion of the Internet as a civil, not human, right, and c) trumpets the need for engineers to put in place controls that protect people's civil (information) rights.  He talks about policy and regulation from two perspectives, business/regulatory and technology/engineering, which is confusing.  In the process he weighs in, at a high level, on current debates over net neutrality, SOPA, universal service and access reform from his positions at Google and the IEEE, and addresses rights and governance in an emotional and intuitive sense.

Just as with skiing, let’s look at the issues critically, unemotionally and counter-intuitively.  We can’t do it all in this piece, so I will establish an outline and framework (just like the 3 main ways to master skiing) and we’ll use that as a basis in future pieces to expound on the above debates and understand corporate investment and strategy as 2012 unfolds.

First, everyone should agree that the value of networks goes up geometrically with each new participant.  It's called Metcalfe's law, or Metcalfe's virtue.  Unfortunately people tend to focus on scale economies and the cost of networks, rarely the value.  That value is hard to quantify because most have a hard time understanding elasticity and projecting unknown demand.  Further, few distinguish marginal from average cost.  The intuitive thing for most is to focus on supply, because people fear the unknown (demand).
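The arithmetic behind this is easy to sketch.  Here is a minimal illustration (a toy model, not a valuation method) of why the value side compounds while the cost side does not: each new participant adds roughly one access line of cost, but n-1 new potential connections of value.

```python
def potential_connections(n):
    """Metcalfe-style count of possible pairings among n participants."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    # One more participant adds roughly one access line of cost,
    # but can reach everyone already on the network, so the number
    # of potential connections jumps by n.
    added = potential_connections(n + 1) - potential_connections(n)
    print(f"n={n}: {potential_connections(n)} pairs; one newcomer adds {added}")
```

Cost scales linearly with participants while potential value scales quadratically, which is why focusing only on supply-side cost understates a network's worth.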

Second, everyone needs to realize that there is a fundamental problem with policy making: (social) democrats tend to support and be supported by free-market competitors, just as (conservative) republicans have a similar relationship with socialist monopolies.  Call it the telecom regulatory paradox.  This paradox is a function of small business vs big business, not either side's political dogma; so it is counter-intuitive and likely to remain that way.

Third, the internet was never open and free.  Web 1.0 resulted principally from a judicial action and a series of regulatory access frameworks/decisions in the mid-to-late 1980s, which produced significant unintended consequences in people's pricing perception.  Markets and technology adapted to and worked around inefficient regulations.  Policy makers did not create or herald the internet, wireless and broadband explosions of the past 25 years.  But in trying to adjust or adapt past regulation they are creating more, not less, inefficiency, no matter how well intentioned their precepts.  Accept it as the law of unintended consequences.  People feel more comfortable attributing results to intended actions than to something unintended or unexplainable.

So, just as in skiing, we've identified 3 principles of telecom and information networks that are counter-intuitive or run contrary to accepted notions and beliefs.  When we discuss policy debates such as net neutrality or SOPA, and corporate activity such as AT&T's aborted merger with T-Mobile or Verizon's spectrum and programming agreement with the cable companies, we will approach and explain them in the context of Metcalfe's Virtue (demand vs supply), the Regulatory Paradox (vertical vs horizontal orientation, not big vs small), and the law of unintended consequences (particularly which payment systems stimulate network investment).  Hopefully the various parties involved can use this approach to better understand all sides of the issues and come to more informed, balanced and productive decisions.

Vint supports the notion of a civil right (akin to universal service) to internet access.  This is misguided and unachievable via regulatory edict/taxation.  He also argues that there should be greater control over the network.  This is disingenuous in that he wants to throttle the openness that drove his godchild's growth.  But consider his positions at Google and the IEEE.  A "counter-intuitive" combination of competition, horizontal orientation and balanced payments is the best approach for an enjoyable and rewarding experience on the slopes of the internet and, who knows, may ultimately and counterintuitively offer free access to all.  The regulators should be like the ski patrol, ensuring the safety of all.  Ski school is now open.

Related reading:
A Perspective from Center for New American Security

Network Neutrality Squad (NNsquad) of which Cerf is a member

Sad State of Cyber-Politics from the Cato Institute

Bike racing also has a lot of counter-intuitive moments, like when your wheel locks with the rider in front.  Here's what to do!

Posted by: Michael Elling AT 01:23 pm   |  Permalink   |  0 Comments  |  Email
Sunday, December 18 2011

(The web is dead, long live the apps)

Is the web dead?  According to George Colony, CEO of Forrester, speaking at LeWeb (Paris, Dec 7-9), it is; on top of that, social is running out of time, and social is where the enterprise is headed.  A lot to digest at once, particularly when Google's Schmidt makes a compelling case for a revolutionary smartphone future that is still in its very, very early stages; courtesy of an ice cream sandwich.

Ok, so let's break all this down.  The Web, dead?  Yes, Web 1.0 is officially dead, replaced by a mobile, app-driven future.  Social is saturated?  Yes; call it Social 1.0, and Social 2.0 will be utilitarian.  Time is money; knowledge is power.  Social is really knowledge, and that's where enterprises will take the real-time, always-connected aspect of the smartphone: ice-cream-sandwich applications that harness internal and external knowledge bases for rapid product development and customer support.  Utilitarian.  VIVA LA REVOLUTION!

Web 1.0 was a direct outgrowth of the breakup of AT&T, the US' second revolution, 30 years ago, coinciding ironically with the bicentennial of the end of the 1st.  The bandwidth bottleneck of the 1960s and 1970s (the telephone monopoly tyranny), which pushed Microsoft and Intel processing to the edge rather than the core, began to reverse course in the late 1980s and early 1990s as a result of flat-rate data access and an unlimited universe of things to easily look for (aka Web 1.0).  That flat-rate pricing was a direct competitive response by the RBOCs to the competitive WAN (low-cost metered) threat.

As silicon scaled via Moore's law (the WinTel sub-revolution), digital mobile became a low-cost, ubiquitous reality.  The same pricing concepts that laid the foundation for Web 1.0 took hold in the US wireless markets in the late 1990s, courtesy of the software-defined, high-capacity CDMA competitive approach (see pages 34 and 36) developed in the US.

The US is the MOST important market in wireless today and THE reason for its leadership in applications and the smart cloud.  (Incidentally, it appears that most LeWeb speakers were either American or from US companies.)  In the process, the relationship between storage, processing and network has come full circle (as best described by Ben Horowitz).  The real question is, "will the network keep up?"  Or are we doomed to repeat the cycle of promise and dashed hopes we witnessed between 1998-2003?

The answer is, "maybe": maybe the communications oligopolies will see themselves as IBM facing the approaching WinTel tsunami in 1987.  Will Verizon be the service provider that recognizes the importance of, and embraces, openness and horizontalization?  The 700 MHz auctions and recent spectrum acquisitions and agreements with the major cable companies might be a sign that it will.

But a bigger question is whether Verizon will adopt what I call a "balanced payment (or settlement) system" and move away from IP/ethernet's "bill and keep" approach.  A balanced payment or settlement system for network interconnection simultaneously solves the issue of new service creation AND paves the way for applications to directly drive, and pay for, network investment.  So unlike Web 1.0, where communication networks were reluctantly pulled into a broadband present, maybe they can actually make money directly off the applications, instead of the bulk of the value accruing to Apple and Google.
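To make the distinction concrete, here is a stylized sketch of how a balanced settlement differs from bill and keep when two networks interconnect.  The rate and traffic figures are invented, purely for illustration:

```python
RATE = 0.002  # assumed settlement rate, $ per minute terminated (illustrative)

def settlement(a_to_b_minutes, b_to_a_minutes, bill_and_keep=False):
    """Net payment from network A to network B (negative means B pays A)."""
    if bill_and_keep:
        # Each network bills its own customers and keeps the revenue;
        # no money crosses the interconnect, regardless of imbalance.
        return 0.0
    # Balanced settlement: the net sender of traffic compensates the
    # net terminator, so heavy application traffic funds network investment.
    return round((a_to_b_minutes - b_to_a_minutes) * RATE, 2)

# A sends B 5M minutes, B sends A 2M: A pays B for the 3M-minute imbalance.
print(settlement(5_000_000, 2_000_000))                      # 6000.0
print(settlement(5_000_000, 2_000_000, bill_and_keep=True))  # 0.0
```

Under bill and keep the terminating network sees no revenue from a traffic surge, which is the point above about carriers having no direct stake in application growth.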

Think of this as an "800" future on steroids, or super-advertising, where the majority of access is paid for by centralized buyers.  It's a future where advertising, product marketing, technology, communications and corporate strategy converge.  This is the essence of what Colony and Schmidt are talking about.  Will Verizon CEO Seidenberg, or his rivals, recognize this?  That would indeed be revolutionary!

Related Reading:
February 2011 Prediction by Tellabs of Wireless Business Models Going Upside Down by 2013

InfoWeek Article on Looming Carrier Bandwidth Shortages


Posted by: Michael Elling AT 09:56 am   |  Permalink   |  0 Comments  |  Email
Sunday, December 11 2011

Look up the definition of information and you'll find a lot of terminological circularity.  It's all-encompassing and tough to define.  It's intangible, yet it drives everything we do.  But information is pretty useless without people; in fact, without them it doesn't really exist.  Think about the tree that fell, unseen, in the forest.  Did it really fall?  I am interested in the velocity of information and its impact on economies, societies and institutions, and as a result in the development of communication networks and the exchange of ideas.

Over the past several years I have increasingly looked at the relationship between electricity and communications.  The former is the number one ingredient of the latter; ask anybody in the data-center or server-farm world.  The relationship is circular.  One wonders why the NTIA under its BTOP program didn't figure that out, or at least talk to the DOE.  Both spent billions separately, instead of jointly.  Gee, why didn't we add a 70 kV line when we trenched fiber down that remote valley?

Cars, in moving people (information) around, are a communications network too; only powered by gasoline.  Until now.  The advent of electric vehicles (EVs) is truly exciting; perhaps more so than the introduction of digital cell phones nearly 20 years ago.  But to realize that future, both the utility and auto industries should take a page from the competitive wireless playbook.

What got me thinking about all this was a NYT article this week about Dan Akerson, a former MCI CFO and Nextel CEO, who has been running (and shaking up) GM over the past 15 months.  It dealt specifically with Dan's handling of the Chevy Volt fires.  Knowing Dan personally, I can say he is up to the task.  He is applying lessons learned in the competitive communications markets to the competitive automotive industry.  And he will win.

But will he and the automotive industry lose because of the utility industry?  You see, the auto industry, the economy and the environment have a lot to gain from the development of electric vehicles.  Unfortunately the utility industry, which is 30 years behind the communications and IT revolution in "digitizing" its business model, is not prepared for an EV eventuality.  Ironically, utilities stand in the way of their own long-term success, as EVs would boost demand dramatically.

A lot has been spent on a "smart grid" with few meaningful results, primarily because most of the efforts and decisions are driven by insiders who do not want to change the status quo.  That status quo includes little knowledge of the consumer, a 1-way mentality, and a focus on average peak production and consumption.  Utilities and their vendors loathe risk, consider "real time" to mean 15 minutes (going down to 5), and view the production and consumption of electricity as paramount.  Smart grid typically means the opposite, namely a reduction in revenues.

So it's no surprise that they are building a smart grid which gives the consumer neither choice, flexibility and control, nor the ability to contribute to electricity production and be rewarded for being efficient and socially responsible.  Nor do they want a lot of big data to analyze to make the process even more efficient.  Funny, those are all byproducts of the competitive communications and IT industries we've become accustomed to.

So maybe once Dan has solved GM's problems and confronts the obstacles facing an electric vehicle future, he will focus his interests, and those of his private equity brethren, on developing a market-driven smart grid; not one your grandmother's utility would build.

By the way, here's a "short", and by no means exhaustive, list of alliances and organizations involved in developing standards and approaches to the smart grid.  Note: they are dominated by incumbents, and every one of them is constituted differently!


Electricity Advisory Committee
Gridwise Alliance
Gridwise Architecture Council
NIST SmartGrid Architecture Council
NIST SmartGrid Advisory Committee
NIST SmartGrid Interoperability Panel
North American Energy Standards Board (NAESB)
SmartGrid Task Force Members (Second list under Smartgrid.gov)
Global SmartGrid Federation
NRECA SmartGrid Demonstration
IEEE SmartGrid Standards
SmartGrid Information Clearinghouse


Posted by: Michael Elling AT 10:52 am   |  Permalink   |  0 Comments  |  Email
Sunday, December 04 2011

Be careful what you wish for this holiday season.  After looking at Saks Fifth Avenue's "Snowflake & Bubbles" holiday window and sound-and-light display, I couldn't help but think of a darker subtext.  I had to ask the question answered infamously by Rolling Stone back in 2009: "who are the bubble makers?"  The fact that this year's theme was the grownup redux of last year's child fantasy, focusing on the "makers," was also striking.  An extensive Google search reveals that NO ONE has tied either year's bubble theme to manias in the broader economy or to the 1%.  In fact, the New York Times called them "new symbols of joy and hope."  Only one article referenced the recession and hardship for many people as a stark backdrop for such a dramatic display.  Ominously, one critic likened it to the "Nutcracker with bubbles," and we all know what happened to Tsarist Russia soon thereafter.

The light show created by Iris is spectacular and portends what I believe will be a big trend in the coming decade, namely using the smartphone to interact with signs and displays in the real world.  It is not unimaginable that every device will soon have a wifi connection and be controllable via an app from a smartphone.  Using the screen to type a message or draw an illustration that appears on a sign is already happening.  CNBC showcased the windows as significant commercial and technical successes, which they were.  Ironically, the 1% appear to be doing just fine, as Saks reported record sales in November.

Perhaps the lack of critical commentary has something to do with how quickly Occupy Wall Street rose and fell.  Are we really living in a Twitter world, fascinated and overwhelmed by trivia and endless information?  At least the displays were sponsored by FIAT, which is trying to revive two brands in the US market simultaneously, focusing on the very real-world pursuit of car manufacturing.  The same, unfortunately, cannot be said of MasterCard, (credit) bubble maker extraordinaire.  Manias and speculative bubbles are not new and they will not go away.  I've seen two build first hand and know that little could have been done to prevent them.  So it will be in the future.

One was the crash in 1987 of what I like to call the "bull-sheet market" of the 1980s.  More than anything, the 1980s were marked by the ascendance of the spreadsheet as a forecasting tool.  Give a green kid out of business school a tool to easily extrapolate logarithmic growth and you've created the ultimate risk-deferral process; at least until the music stops in the form of one down year in the trend.  Who gave these tools out and blessed their use?  The bubble makers (aka my bosses).  But the market recovered and went on to significant new highs (and speculative manias).

Similarly, a new communications paradigm (aka the internet) sprang to life in the early-to-mid 1990s as a relatively simple store-and-forward, database lookup solution.  By the end of the 1990s there was nothing the internet could not do, especially if communications markets remained competitive.  I remember the day in 1999 when Jeff Bezos said, in good bubble-maker fashion, that "everyone would be buying goods from their cellphones" as justification for Amazon's then-astronomical value of $30bn.  I was (unfortunately) smart enough to know that scenario was a good 5-10 years in the future.  10 years later it was happening, and AMZN recently exceeded $100bn; but not before dropping below $5bn in 2001, along with $5 trillion of wealth evaporating in the market.

If the spreadsheet and the internet were the tools of the bubble makers in the 1980s and 1990s, then wireless was their primary tool in the 2000s.  Social media went into hyperdrive with texting, tweeting and 7x24 access from 3G phone apps.  Arguably, wireless mobility increased people's transiency and ability to move around, aiding the housing bubble.  So what is the primary tool of the bubble makers in the 2010s?  Arguably it is, and will be, the application ecosystems of iOS and Android.  And what could make for an ugly bubble/burst cycle?  Lack of bandwidth and lack of efficient clearinghouse systems (payments) for connecting networks.

Posted by: Michael Elling AT 08:51 am   |  Permalink   |  0 Comments  |  Email
Sunday, April 24 2011

A couple of themes were prevalent this past week:

  • iPhone/Android location logging,
  • cloud computing (and a big cloud collapse at Amazon),
  • the tech valuation bubble because of Groupon et al,
  • profits at Apple, AT&T vs VZ, Google, most notably,
  • and who wins in social media and what is next.

In my opinion they are all related, and the Cloud plays the central role, metaphorically and physically.  Horowitz recently wrote about the new computing paradigm in defense of the supposed technology valuation bubble.  I agree wholeheartedly with his assessment, as I got my first taste of this historical computing cycle over 30 years ago, when I had to cycle 10 miles to a high school in another district that had a dedicated line to the county mainframe.  A year or two later I was simulating virus growth on an Apple PC.  So when Windows came in 1987 I was already ahead of the curve with respect to distributed computing.  Moreover, as a communications analyst in the early 1990s I also realized what competition in the WAN post-1984 had begat: Web 1.0 (aka the Internet) and the most advanced and cheapest digital paging/messaging services in the world.  Both of these trends would have a significant impact on me personally and professionally, and I will write about those evolutions and collapses in future Spectral issues.

The problem, the solution, the problem, the solution, etc….

The problem back in the 1970s and early 1980s was the telephone monopoly.  Moore's law bypassed the analog access bottleneck with cheap processing and local transport.  Consumers, and then enterprises and institutions, began to buy PCs and link them together to communicate and share files and resources.  Things got exciting when we began to multitask in 1987, and by 1994 any PC provided access to information pretty much anywhere.  During the 1990s and well into the next decade, Web 1.0 was just a 1.2-way store-and-forward database lookup platform.  It was early cloud computing, sort of, but no one had high-speed access.  It was so bad in 1998, when I went independent, that I had 25x more dedicated bandwidth than my former colleagues at bulge-bracket Wall Street firms.  That's why we had the bust.  Web 1.0 was narrowband, not broadband, and certainly not 2-way.  Wireless was just beginning to wake up to data, even though Jeff Bezos had everyone believing they would be ordering books through their phones in 2000.

Two things happened in the 2000s.  First, high-speed bandwidth became ubiquitous.  I remember raising capital for The Feedroom, a leading video ASP, in 2003, when we were still watching high-speed access penetration approach the 40% "tipping point."  Second, the IP stack grew from a 4-layer model into something more robust.  We built CDNs.  We built border controllers that enabled Skype VoIP traffic to transit foreign networks "for free."  We built security.  HTML, browsers and web frontends grew to support multimedia.  By the second half of the decade, Web 2.0 had become 1.7-way and true "cloud" services began to develop.  Web 2.0 is still not fully developed, as a lot of technical and pricing controls and "lubricants" are still missing for true 2-way synchronous high-definition communications; more about that in future Spectrals.

The New “Hidden Problem”

Unfortunately, over that time the underlying service provider market of 5-6 competitive providers (wired, wireless, cable) consolidated down to an oligopoly in most markets.  Wherever competition dropped to 3 or fewer providers, bandwidth pricing stopped falling 40-70% per annum, as it should have, and fell only 5-15%.  Yet technology prices at the edge and core (Moore's law) kept falling 50%+ every 12-18 months.  Today, the differential between "retail" price and "underlying economic" cost per bit is the widest it has been since 1984.
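A rough compounding exercise shows how quickly that differential opens up.  The sketch below simply compounds the stylized rates above (cost per bit halving every ~18 months, retail prices falling ~10% a year in uncompetitive markets); the figures are illustrative, not measurements.

```python
years = 10
tech_cost = 1.0      # normalized underlying cost per bit
retail_price = 1.0   # normalized retail price per bit

for _ in range(years):
    tech_cost *= 0.5 ** (12 / 18)  # halving every 18 months, ~37%/yr decline
    retail_price *= 0.90           # ~10% annual retail price decline

print(f"after {years} years the retail/cost gap is "
      f"{retail_price / tech_cost:.0f}x")   # roughly 35x
```

In other words, a decade of modest retail declines against Moore's-law cost declines leaves retail prices an order of magnitude or more above underlying cost.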

That wouldn't be a problem except for two recent developments: the advent of the smartphone and the attendant application ecosystems.  So what does this have to do with cloud computing, especially when cloud was "an enterprise phenomenon" begun by Salesforce.com with Force.com and by Amazon Web Services?  A lot of the new consumer wireless applications run on the cloud.  Entire developer ecosystems are building new companies.  IDC estimates that the total amount of accessible information will grow 44x by 2020, to 35 zettabytes, and that the average number of unique files will grow 65x.  That means that while a lot of applications and information will be high-bandwidth (video and multimedia), there will also be many smaller files and transactions (bits of information), i.e. telemetry, personal information or sensory inputs.  And this information will be constantly accessed by 3-5 billion wireless smartphones and devices.  The math of networks is (N*(N-1))/2.  That's an awful lot of IP session pathways.
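Plugging those device counts into the (N*(N-1))/2 formula gives a sense of the scale (simple arithmetic, nothing more):

```python
def pathways(n):
    """Potential IP session pathways among n endpoints: N*(N-1)/2."""
    return n * (n - 1) // 2

for devices in (3_000_000_000, 5_000_000_000):
    print(f"{devices:,} devices -> {pathways(devices):.3e} potential pathways")
```

On the order of 10^18 to 10^19 potential pathways; even if only a vanishing fraction are ever active, the addressing, signaling and settlement load dwarfs anything the voice era contemplated.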

Why is That A Problem?

The problem is that current wireless networks can't handle this onslaught.  Carriers have been announcing data caps for the past 2 years.  While they are falling over themselves to announce 4G networks, the reality is that these are designed to be only 2-3x faster, and they are far from ubiquitous, either geographically (wide-area) or in-building.  That's a problem if the new applications and information sets require networks that are 20-50x faster and many factors more reliable and ubiquitous.  Smartphones and their wireless tether are becoming single points of access.  Add to that the fact that carriers derive increasingly less direct benefit from these application ecosystems, so they'll have less and less incentive to upgrade and reprice their network services along true technology-driven marginal cost.  Neustar is already warning carriers that they are being bypassed in the process.

Does The Bubble Have to Burst?

Just as in the late 1990s, the upper- and middle-layer players really don't know what is going on at the lower layers.  And if they don't, surely the current bubble will burst as expectations get ahead of reality.  That may take another 2-3 years, but it will likely happen.  In the meantime, alternative access players are beginning to rise up.  Even the carriers themselves are talking about offloading traffic onto femto and wifi cells.  Wifi alliances are springing up again, and middle-layer software/application controls are developing to make it easier for end-users to offload traffic themselves.  Having lived through and analyzed the advent of competitive wired and wireless networks in the 1990s, my sense is that nothing, not even LightSquared or Clearwire in their current forms, will be significant enough to precipitate the dramatic restructuring necessary to service this coming tidal wave of demand.

What we need is something I call centralized hierarchical networking (CHN)™.  Essentially we will see three major layers, with the bottom access/transport layer controlled by 3-4 hybrid networks.  The growth and dynamic from edge to core and vice versa will wax and wane in rather rapid fashion.  Until then, while I totally get and support the cloud and believe most applications are going that route, let the Cloud Players be forewarned of coming turbulence unless something is done to (re)solve the bandwidth bottleneck!

Posted by: Michael Elling AT 09:34 am   |  Permalink   |  0 Comments  |  Email
Tuesday, April 19 2011

5 Areas of Focus

1) How does information flow through our economic, social and political fabric?  I believe all of history can be modeled on the pathways and velocity of information.  To my knowledge there is no economic science regarding the velocity of information, though many write about it.  Davidow (OVERconnected) speaks of networks of people (information) being in 3 states of connectivity.  Tom Wheeler, someone whom I admire a great deal, often relates what is happening today to historical events and vice versa; his book on Lincoln's use of the telegraph makes for a fascinating read.  Because of its current business emphasis and its potential to change many aspects of our economy and lives, social media will be worth modeling along the lines of information velocity.

2) Mapping the rapidly evolving infomedia landscape to explain both the chaos of convergence and the divergence of demand has interested me for 20 years.  This amounts to a taxonomy of things in the communications, technology and internet worlds.  The latest iteration, called the InfoStack, puts everything into a 3-dimensional framework with geographic, technological/operational, and network/application dispersions.  I've taken that a step further and, from 3-dimensional macro/micro models, developed 3-dimensional organizational matrices for companies.  3 coordinates capture 99% of everything that is relevant about a technology, product, company, industry or topic.

3) Mobile payments and ecommerce have been an area of focus over the past 3 years, and I will comment quite a bit on this topic.  There are hundreds of players, everyone jockeying for dominance or their piece of the pie.  The area is also at the nexus of 3 very large groupings of companies: financial services, communications services and transaction/information processors.  The latter includes Google and Facebook, which is why they are constantly being talked about.  That said, players in all 3 camps are constrained by vestigial business and pricing models.  Whoever ties the communications event/transaction to the underlying economic transaction will win.  New pricing will reflect digitization and true marginal cost.  Successful models/blueprints are 800, VPN, and advertising.  We believe 70-80% of all future revenue will derive from corporate users and less than 30% will be subscription based.

4) Exchange models and products/solutions that facilitate the flow of information across upper and lower layers, and from end to end, represent exciting and rewarding opportunities.  In a competitive world of infinite revenue (clouds of demand), mechanisms must exist that drive costs down between participants as traffic volumes explode.  This holds for one-way and two-way traffic, and for narrowband and broadband applications.  The opposing sides of bill and keep (called party pays) and network neutrality are missing the point.  New services can only develop if there is a bilateral, balanced payment system.  It is easy to see why incumbent service and application models embrace bill and keep, as it stifles new entrants.  But long term it also stifles innovation and retards growth.

5) What will the new network and access topologies look like?  Clearly the current industry structure cannot withstand the dual onslaught of rapid technological change and obsolescence and enormously growing and diverging demand.  It’s great if everyone embraces the cloud, but what if we don’t have access to it?  Something I call “centralized hierarchical networking” will develop.  A significant amount of hybridization will exist; no single solution will result.  Scale and ubiquity will be critical to commercial success, as will anticipation and incorporation of developments in the middle and upper layers.  Policy must ensure that providers are not allowed to hide behind a mantra of “natural bottlenecks” and universal service requirements.  In fact, open and competitive models ensure the latter, as we saw from the pro-competitive and wireless policies of the 1980s and 1990s.

In conclusion, these are the 5 areas I focus on:

1) Information Velocity

2) Mapping the InfoStack

3) Applications and, in particular, payment systems

4) Exchange models

5) Networks

The analysis will tend to focus on pricing (driven by marginal, not average, costs) and arbitrages; the “directory value” of something, which some refer to as the network effect; and key supply and demand drivers.

Posted by: Michael Elling AT 09:43 am   |  Permalink   |  0 Comments  |  Email
Monday, April 18 2011

Today, April 18, 2011, marks my first official blog post.  It is about making money and having fun.  Actually I started blogging about telecommunications 20 years ago on Wall Street with my TelNotes daily and SpectralShifts weekly.  Looking back, I am happy to report that a lot of what I said about the space actually took place: consolidation, wireless usurpation of wireline access, IP growing into something more robust than a 4-layer stack, etc.  Over the past decade I’ve watched the advent of social media and application ecosystems, and the collapse of the competitive communications sector: the good, the bad, and the ugly, respectively.

Along the way I’ve participated in or been impacted by these trends as I helped startups and small companies raise money and improve their strategy, tactics and operations.  Overall, an entirely different perspective from my ivory tower Wall Street research perch of the 1980s-90s.  Hopefully what I have to say is of use to a broad audience and helps people cut through contradictory themes of chaotic convergence and diverging demand to take advantage of the rapidly shifting landscape.

I like examples of reality imitating art.  One of my favorites is Pink Floyd’s The Wall, which preceded the destruction of the Berlin Wall by a decade.  Another is the devastating satire and 1976 classic Network, predating by 30 years what media has become in the age of reality TV, Twitter and the internet moment.  I feel like a lot has changed and it’s time for me to start talking again.  So in the words of Howard Beale (Peter Finch): “I’m as mad as hell, and I’m not going to take it anymore.”

Most of the time you’ll see me take a stance opposite to consensus, or approach a topic or problem from a 90-degree angle.  That’s my intrinsic value; don’t look for consensus opinion here.  The ability to do this lies in my analytical framework, the InfoStack.  It is a three-dimensional framework that maps information, topics and problems along geographic, network and application dispersions.  By geographic I mean WAN, MAN, LAN and PAN.  By network, I mean the 7-layer OSI stack.  And by applications, I mean clouds of intersecting demand.  You will see that I talk about horizontal layering and scale, vertically complete solutions, and unlimited “cloud-like” revenue opportunity.  Anything I analyze is in the context of what is going on in adjacent spaces of the matrix, and I look for cause and effect amongst the layers.
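To make the framework concrete, the InfoStack’s three dispersions can be sketched as a coordinate system in code.  This is a minimal illustrative sketch only; the class name, labels and validation rules are my assumptions for the example, not anything formalized in the framework itself.

```python
# A minimal sketch of the InfoStack as a 3-D coordinate system.
# Dimension values are illustrative: geographic dispersion (PAN/LAN/MAN/WAN),
# the 7-layer OSI stack, and a free-form "cloud of demand" label.
from dataclasses import dataclass

GEOGRAPHIES = ("PAN", "LAN", "MAN", "WAN")

@dataclass(frozen=True)
class InfoStackCoord:
    geography: str      # where: PAN, LAN, MAN, or WAN
    osi_layer: int      # how: OSI layer 1-7
    demand_cloud: str   # why: the intersecting application demand

    def __post_init__(self):
        if self.geography not in GEOGRAPHIES:
            raise ValueError(f"unknown geography: {self.geography}")
        if not 1 <= self.osi_layer <= 7:
            raise ValueError("OSI layer must be between 1 and 7")

# Example: placing last-mile fiber access in the matrix -- a layer 1
# issue in the metro area, driven by broadband access demand.
last_mile = InfoStackCoord("MAN", 1, "broadband access")
print(last_mile)
```

Any topic can then be compared with its neighbors by varying one coordinate at a time, which is the “adjacent spaces of the matrix” idea in practice.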

I see us at the beginning of something very big; bigger than in 1987 at the dawn of the Wintel revolution.  The best way to enjoy the great literary authors is to start with their earliest works and read sequentially, growing and developing with them.  Grow with me as we sit at the dawn of the Infomedia revolution that is remaking, and will continue to remake, the world around us.  In the process, let’s make some money and build things that are substantial.

Posted by: Michael Elling AT 01:00 pm   |  Permalink   |  0 Comments  |  Email

 

Information Velocity Partners, LLC
88 East Main Street, Suite 209
Mendham, NJ 07930
Phone: 973-222-0759
Email: contact@ivpcapital.com
