2012/06/18

NBN: Will Apple's Next Big Thing "Break the Internet" as we know it?

Will Apple, in 2013, release its next Game Changer for Television following on from the iPod, iPhone, and iPad?
If they do, will that break the Internet as we know it, with 50-250 million people trying to stream a World Cup final?

Nobody can supply terabit server links, let alone afford them. To reinvent watching TV, Apple has to reinvent its distribution over the Internet.

The surprising thing is that we were first on the cusp of wide-scale "Video-on-Demand" in 1993.
Can we, twenty years later, get there this time?

Walter Isaacson in his HBR piece, "The Real Leadership Lessons of Steve Jobs" says:
In looking for industries or categories ripe for disruption, Jobs always asked who was making products more complicated than they should be. In 2001 portable music players ... , leading to the iPod and the iTunes Store. Mobile phones were next. ... At the end of his career he was setting his sights on the television industry, which had made it almost impossible for people to click on a simple device to watch what they wanted when they wanted.
Even when he was dying, Jobs set his sights on disrupting more industries. He had a vision for turning textbooks into artistic creations that anyone with a Mac could fashion and craft—something that Apple announced in January 2012. He also dreamed of producing magical tools for digital photography and ways to make television simple and personal. Those, no doubt, will come as well.
This isn't just a problem that can be solved by running fibre to every home, or a question of who can afford the plan; it's much bigger:
  • On-demand, or interactive, TV delivered over the general Internet cannot be done from One Big Datacentre; it just doesn't scale.
  • Streaming TV over IP to 3G/4G mobile devices, each with an individual connection, does not scale at the radio link, the backhaul/distribution links or the head-end.
The network demands of these simple-minded models will drown both the NBN and Turnbull's opportunistic pseudo-NBN.

In their "How will the Internet Scale?" whitepaper, Content Delivery Network (CDN) provider Akamai, begins with:
Consider a viewing audience of 50 million simultaneous viewers around the world for an event such as a World Cup playoff game. An encoding rate of 2 Mbps is required to provide TV-like quality for the delivery of the game over IP. Thus, the bandwidth requirements for this single event are 100 Tbps. If there were more viewers or if DVD (at ~5 Mbps) or high definition (HD) (at ~10 Mbps) quality were required, then the bandwidth requirements would be even larger.
Is there any hope that such traffic levels be supported by the Internet?
And adds:
Because of the centralized CDN’s limited deployment footprint, servers are often far from end users. As such, distance-induced latency will ultimately limit throughput, meaning that overall quality will suffer. In addition, network congestion and capacity problems further impact throughput, and these problems, coupled with the greater distance between server and end user, create additional opportunities for packet loss to occur, further reducing quality. For a live stream, this will result in a poor quality stream, and for on-demand content, such as a movie download, it essentially removes the on-demand nature of the content, as the download will take longer than the time required to view the content. Ultimately, “quality” will be defined by end users using two simple criteria—does it look good, and is it on-demand/immediate?
Concluding, unsurprisingly, that their "hammer" can crack this "nut":
This could be done by deploying 20 servers (each capable of delivering 1 Gbps) in each of 5,000 locations within edge networks. Additional capacity can be added by deploying into PCs and set-top boxes. Ultimately, a distributed server deployment into thousands of locations means that Akamai can achieve the 100 Tbps goal, whereas the centralized model, with dozens of locations, cannot.
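Akamai's arithmetic is easy to check. A few lines of Python (the viewer count and per-stream rates are Akamai's; the script is mine) reproduce both the demand figure and the proposed distributed capacity:

    viewers = 50_000_000        # simultaneous World Cup viewers (Akamai's figure)
    stream_mbps = 2             # "TV-like" quality over IP
    print(viewers * stream_mbps / 1_000_000)              # 100 Tbps of aggregate demand

    servers_per_location = 20   # each delivering ~1 Gbps
    locations = 5_000           # edge-network deployments
    print(servers_per_location * 1 * locations / 1_000)   # 100 Tbps of distributed capacity
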
Akamai notes that VeriSign acquired the Peer-to-Peer (P2P) software company Kontiki in 2006 to address this problem. If the Internet's distribution channels were 'flat', P2P might work, but they are hierarchical and asymmetrical: in reality they are small networks in ISP server-rooms with long point-to-point links back to the premises. Akamai's view is that P2P networks require a CDN-style Control Layer to work well enough.

In their "State of the Internet" [use archives] report for Q1, 2011, Akamai cites these speeds:
... research has shown that the term broadband has varying definitions across the globe – Canadian regulators are targeting 5 Mbps download speeds, whereas the European Commission believes citizens need download rates of 30 Mbps, while peak speeds of at least 12 Mbps are the goal of Australia's National Broadband Network. As such, we believe that redefining the definition of broadband within the report to 4 Mbps would be too United States-centric, and we will not be doing so at this time.
As the quantity of HD-quality media increases over time, and the consumption of that media increases, end users are likely to require ever-increasing amounts of bandwidth. A connection speed of 2 Mbps is arguably sufficient for standard-definition TV-quality content, and 5 Mbps for standard-definition DVD quality video content, while Blu-ray (1080p) video content has a maximum video bit rate of 40 Mbps, according to the Blu-ray FAQ.
There are multiple challenges inherent in wide-scale Television delivery over the Internet:
  • Will the notional customer line-access rate even support the streaming rate?
  • Can the customer achieve sufficient sustained download rates from their ISP for either streaming or load-and-play use?
    • Will the service work when they want it - Busy Hour?
  • Multiple technical factors influence the sustained download rates:
    • Links need to be characterised by the triplet {speed, latency, error-rate}, not just 'speed' (see the sketch after this list).
    • local loop congestion
    • ISP backhaul congestion
    • backbone capacity
    • End-End latency from player to head-end
    • Link Quality and total packet loss
  • Can the backbone, backhaul and distribution networks support full Busy Hour demand?
    • Telcos already know that "surprises" like the Japanese Earthquake/Tsunami, which are not unlike a co-ordinated Distributed Denial of Service attack, can bring an under-dimensioned network down in minutes...
    • With hundreds of millions of native Video devices spread through the Internet, these "surprise" events will trigger storms, the like of which we haven't seen before.
  • Can ISP networks and servers sustain full Busy Hour demand?
  • Can ISPs and the various lower-level networks support multiple topologies and technical solutions?
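On the {speed, latency, error-rate} point: the well-known Mathis approximation for sustained TCP throughput (throughput ≈ MSS / (RTT × √loss)) shows why a distant, slightly lossy path can't sustain even a 2 Mbps stream, whatever the line-access rate. A rough sketch, with illustrative (not measured) RTT and loss figures:

    import math

    def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
        # Mathis et al. approximation: throughput ~ MSS / (RTT * sqrt(loss))
        bytes_per_sec = mss_bytes / ((rtt_ms / 1000.0) * math.sqrt(loss_rate))
        return bytes_per_sec * 8 / 1_000_000

    # A nearby CDN/ISP server vs. a distant, congested central server:
    print(tcp_throughput_mbps(1460, rtt_ms=20, loss_rate=0.0001))   # ~58 Mbps
    print(tcp_throughput_mbps(1460, rtt_ms=200, loss_rate=0.001))   # ~1.8 Mbps - below SD rate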

CISCO, in their "Visual Networking Index 2011-2016" (VNI) report, have a more nuanced and detailed model with exponential growth (Compound Annual Growth Rate, or CAGR). They also flag distribution of video as a major growth challenge for ISPs and backbone providers.

CISCO writes these headlines in its Executive Summary:
Global IP traffic has increased eightfold over the past 5 years, and will increase nearly fourfold over the next 5 years. Overall, IP traffic will grow at a compound annual growth rate (CAGR) of 29 percent from 2011 to 2016.
In 2016, the gigabyte equivalent of all movies ever made will cross the global Internet every 3 minutes.
The number of devices connected to IP networks will be nearly three times as high as the global population in 2016. There will be nearly three networked devices per capita in 2016, up from one networked device per capita in 2011. Driven in part by the increase in devices and the capabilities of those devices, IP traffic per capita will reach 15 gigabytes per capita in 2016, up from 4 gigabytes per capita in 2011.
A growing amount of Internet traffic is originating with non-PC devices. In 2011, only 6 percent of consumer Internet traffic originated with non-PC devices, but by 2016 the non-PC share of consumer Internet traffic will grow to 19 percent. PC-originated traffic will grow at a CAGR of 26 percent, while TVs, tablets, smartphones, and machine-to-machine (M2M) modules will have traffic growth rates of 77 percent, 129 percent, 119 percent, and 86 percent, respectively.
Busy-hour traffic is growing more rapidly than average traffic. Busy-hour traffic will increase nearly fivefold by 2016, while average traffic will increase nearly fourfold. Busy-hour Internet traffic will reach 720 Tbps in 2016, the equivalent of 600 million people streaming a high-definition video continuously. 
Global Internet Video Highlights
It would take over 6 million years to watch the amount of video that will cross global IP networks each month in 2016. Every second, 1.2 million minutes of video content will cross the network in 2016.
Globally, Internet video traffic will be 54 percent of all consumer Internet traffic in 2016, up from 51 percent in 2011. This does not include the amount of video exchanged through P2P file sharing. The sum of all forms of video (TV, video on demand [VoD], Internet, and P2P) will continue to be approximately 86 percent of global consumer traffic by 2016. [emphasis added]
Internet video to TV doubled in 2011. Internet video to TV will continue to grow at a rapid pace, increasing sixfold by 2016. Internet video to TV will be 11 percent of consumer Internet video traffic in 2016, up from 8 percent in 2011.
Video-on-demand traffic will triple by 2016. The amount of VoD traffic in 2016 will be equivalent to 4 billion DVDs per month.
High-definition video-on-demand surpassed standard-definition VoD by the end of 2011. By 2016, high-definition Internet video will comprise 79 percent of VoD.
In their modelling, CISCO use considerably lower video bitrates than Akamai (~1 Mbps), with an expectation of a 7%/year reduction in required bandwidth (roughly halving bandwidth every 10 years). But I didn't notice a concomitant allowance for increased definition and frame-rate - which will drive video bandwidth demand upwards much faster than encoding improvements drive it down.
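Two quick checks on those CISCO numbers (my arithmetic, their figures):

    # CISCO's Busy Hour headline implies its per-stream HD rate:
    print(720e12 / 600e6 / 1e6)     # 720 Tbps / 600 million HD streams = 1.2 Mbps each

    # A 7%/year encoding improvement, compounded over a decade, roughly halves
    # the bandwidth per stream - while a step from SD (~2 Mbps, Akamai) to
    # HD (~10 Mbps, Akamai) is a 5x jump in one go.
    print((1 - 0.07) ** 10)         # ~0.48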

Perhaps we'll stay at around 4 Mbps...

Neither CISCO nor Akamai model for a "Disruptive Event", like Apple rolling out a Video iPod...

History shows previous attempts at wide-scale "Video on Demand" have foundered.
In 1993 Oracle, as documented by Fortune Magazine, tried to build a centralised video service (4 Mbps) based around the nCube massively parallel processor. An SGI system was estimated at $2,000/user, ten times cheaper than an IBM mainframe. A longer, more financially focussed history corroborates the story.

What isn't said in the stories is that the processing model had the remote-control command the server, so the database needed to pause, rewind, and slow/fast-forward the stream of every TV. There was no local buffering device to reduce the server problem to "mere streaming", probably because consumer hard-disks of the time were ~100 MB (200 seconds @ 4 Mbps) and probably unable to stream at full rate. A local device with ~4 GB of storage would've been an uneconomic $5-10,000.
 He (Larry Ellison, CEO) says the nCube2 computer, made up of as many as 8,192 microprocessors, will be able to deliver video on demand to 30,000 users simultaneously by early 1994, at a capital cost of $600 per viewer. The next-generation nCube3, due in early 1995, will pack 65,000 microprocessors into a box the size of a walk-in closet and will handle 150,000 concurrent users at $300 apiece. 
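The buffering arithmetic of the day is easy to reconstruct (my figures, from the ~100 MB consumer disks of 1993):

    disk_mb = 100
    stream_mbps = 4
    print(disk_mb * 8 / stream_mbps)                # 200 seconds of buffer - barely 3.5 minutes

    movie_seconds = 2 * 3600                        # a two-hour feature
    print(movie_seconds * stream_mbps / 8 / 1000)   # ~3.6 GB needed to hold it locally
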
Why did these attempts fail, when large, highly-motivated, well-funded, technically-savvy companies with a track-record of success had very large pots of gold waiting for the first to crack the problem?

I surmise it was the aggregate head-end bandwidth demand (30,000 users at 4 Mbps is 120 Gbps) and the per-premises cost of the network installation.

Even with current technologies, building a reliable, replicated head-end with that capacity is a stretch, albeit not that hard with 10 Gbps Ethernet now available. Using the then-current, and well-known, 100-channel cable TV systems, distributing via coax or fibre to 100,000 premises was possible. But as we know from the NBN roll-out, "premises passed" is not nearly the same as "premises connected". Consumers take time to enrol in new services, as is well explained by Rogers' "Diffusion of Innovations" theory.

The business model would've assumed an over-subscription rate, i.e. at Busy Hour only a fraction of subscribers would be accessing Video-on-Demand content. Thus a single central facility could've supported a town of 1 million people (roughly 300,000 households), with one-in-four houses connected [75,000] and a Busy-Hour viewing rate of 40%, as reconstructed below.
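Spelled out (the household size is my assumption, chosen to match the figures in the text):

    population = 1_000_000
    households = population / 3.3        # ~300,000 households (assumed ~3.3 people each)
    connected = households / 4           # one-in-four take-up: ~75,000 premises
    concurrent = connected * 0.40        # 40% viewing at Busy Hour: ~30,000 streams
    print(round(concurrent), round(concurrent * 4 / 1000))   # ~30,000 streams, ~120 Gbps head-end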

If Apple trots out a "Game-Changer for Television", with on-demand delivery over the Internet, the current growth projections of CISCO and Akamai will turn out to be wild underestimates.

New networks like the NBN will be radically under-dimensioned by 2015, or at least the ISPs, their Interconnects and backhauls will be...

The GPON fabric of the NBN may handle 2.488 Gbps aggregate downstream, and 5-10 Mbps per household is well within the access speed of even the slowest offered service, 12/1 Mbps.
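A rough split of that GPON capacity (the 32-way split is the usual GPON assumption, not an NBN-specific figure; per-stream rates are the Akamai numbers above):

    gpon_downstream_mbps = 2488
    split = 32                              # premises typically sharing one GPON fibre
    print(gpon_downstream_mbps / split)     # ~78 Mbps per premises if shared evenly
    print(split * 10)                       # 320 Mbps: one ~10 Mbps HD unicast stream each
    # The access fabric isn't the bottleneck; the ISP backhaul and head-end are.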

But how are the VLANs organised on the NBN Layer-2 delivery system? VLAN IDs are 12-bit, limited to 4,096.
I haven't read the detail of how many distinct services can be streamed simultaneously per fibre and per Fibre Distribution Area. A 50% household take-up, with each premises pulling its own unicast stream, could push hard against limits like that.

When I've talked to Network Engineers about the problem of streaming video over the Internet, they've agreed with my initial reaction:
  • Dimensioning the head-end or server-room of any sizeable network for a central distribution model is expensive and technically challenging,
  • Designing a complete network for live-streaming/download to every end-point of 4-8Mbps sustained (in Busy Hour) is very expensive.
  • Isn't this exactly the problem that multicast was designed for?
The NBN's Layer-2 VLAN-in-VLAN solution should be trivially capable of dedicating one VLAN, with its 4,096 sub-'channels', to video multicast, able to be split out by the Fibre Network Termination Unit (NTU) - not unlike the system TransACT built in the ACT.
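For concreteness: a receiver only has to join a multicast group; the network (IGMP on the access fabric, the NTU/VLAN plumbing in the NBN's case) does the replication. A minimal sketch in Python - the group address and port are made-up examples, not anything the NBN specifies:

    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5004     # hypothetical 'channel' (administratively-scoped address)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Ask the network to start delivering this group's traffic to us.
    # Everything upstream of the last replication point carries ONE copy of the stream.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        packet, sender = sock.recvfrom(2048)    # video payload (e.g. RTP) arrives here
        # ... hand the packet to the decoder / player buffer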

Users' behaviour, their use of Video services, can be controlled via pricing:
  • The equivalent of "Free-to-Air" channels can be multicast and included in the cost of all packages, and
  • Video-on-Demand can be priced at normal per GB pricing, plus the Service Provider subscription fee.
As now with Free-to-Air, viewers can program PVRs to timeshift programs very affordably.

In answer to the implied Akamai question at the start:
  • What server and network resources/bandwidth do you need to stream a live event (in SD, HD and 3-D) to anyone and everyone that wants to watch it?
With multicast, under 20 Mbps at the source, because you let the network multiply the traffic at the last possible point.
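That figure is just the sum of one copy of each format leaving the head-end (the SD and HD rates are Akamai's; the 3-D rate is my guess), contrasted with the unicast demand from the start of this piece:

    viewers = 50_000_000
    formats_mbps = {"SD": 2, "HD": 10, "3D": 8}     # 3-D rate is a guess

    print(viewers * formats_mbps["SD"] / 1e6)       # unicast: 100 Tbps even at SD quality
    print(sum(formats_mbps.values()))               # multicast: ~20 Mbps, independent of audience size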

Otherwise, it sure looks like a Data Tsunami that will drown even the NBN.
