## 2014/03/30

### The New Disruption in Computing

Since 2000, we've been progressively bumping into limits, or end-points, in all areas of silicon technologies. Some I've mentioned previously.

The thesis of this piece is that the Next Technology Disruption is No Disruption, No Revolution:
instead of exponential growth of technologies, decreasing unit prices and increasing volume sales, we're now seeing zero or slow growth, steady or increasing unit prices (especially if the supply-chain is disrupted) and, in all but a few market segments, declining sales and stressed vendor profits. I believe these are linked.
The list of Technology Roadblocks since 2000 is long and deep:
• around 2002, Moore's Law for single CPU's hit the Heat/Power Wall, forcing Intel & AMD into multi-core designs.
• Laptop sales, by units, overtook Desktop sales in 2008.
• Since the 2007 introduction of the iPhone and 2010 introduction of the iPad (and the 2008 Android phone, then tablet), sales of PC's have flattened and then declined.
• In 2013, sales of Tablets, by units, had outstripped all PC sales.
• Sales of Intel and AMD CPU's have followed the PC market into decline.
• In 2005, DRAM was dropped as the reference Technology for new process steps.
• Flash memory kept doubling in capacity and halving price-per-bit at historic rates.
• This has seen the death of both Floppy Drives and Optical Disks (CD, DVD's) in consumer products.
• In 2010, Hard Disk Drives (HDD's) had missed their evolution targets: 4TB had been forecast while 2TB shipped.
• Small HDD's and Floppy disks have been killed by USB Flash drives and SSD. At around $1/GB for Solid State Disk (SSD), drives under $50-$75 are only SSD's.
• 2013 saw sales of HDD's, by unit, decline 5%, the second consecutive year of contraction.
• I have no information on the evolution and pricing of GPU's, Graphics Processors and Video Cards, nor Monitors. We seem to have arrived at 30 inch displays with 2560-pixel resolution 5-7 years ago.
• General use of Optical Drives plateaued at DVD single-sided capacity, 4.7GB.
• Bluray disks, 30GB (?), are sold in relatively limited volumes and media is comparatively rare and expensive.
• Commodity consumer-grade SD-cards, used in laptops, video cameras, still cameras and phones & tablets, are available everywhere and come in sizes from 4GB to 32GB. At around $1/GB, they've relegated write-once Bluray to the technology off-ramp.
• Ethernet via Twisted Pair erupted in the early 1990's with 10Mbps and by 1999 was up to 1Gbps. It's stagnated since.
• The 'ethernet' market is estimated to be a $16B/year business.
• Today every PC (Desktop and Laptop) with an ethernet port supports 10/100/1000Mbps speeds.
• In 2002, a 10Gbps standard was released, but it took until 2007 to ship 1M ports. Although a twisted-pair 10Gbps standard exists, it is very rarely seen and hasn't appeared in common consumer devices.
• In 2010, fibre-only updates to Ethernet were released, 40Gbps and 100Gbps. These are especially important for server rooms and long-distance links.
• DWDM, Dense Wave Division Multiplexing (or 'Many Colours'), allows 64 or 96 signals to be carried on one fibre. The cost of the electronics, and even the fibre, is tiny compared to the cost of the civil works installing fibre over 10 kilometres.

The one bright point in the Technology landscape is in the Wide Area Network (WAN). Fibre to the Premises (FTTP) is starting to provide affordable, guaranteed 100Mbps and 1Gbps services to all customers. This will drive new applications in the business market: branch offices, SOHO and SME's.

Economics of Volume Production

Since the IBM PC was released in 1981, the economics of Volume Production has driven the Hardware and Software world. When the cost of R&D and marketing is amortised across millions, or tens of millions, of units sold, the per-piece cost rapidly reduces to the Variable Cost of manufacture.

The "Learning Curve Effect", or Economies of Scale, also kicks in: for every doubling of production scale, unit costs reduce 20%-30%. Not only does the R&D capital risk get returned quickly, the cost of parts continues to drop, stimulating further demand if the market is elastic. Vendors can maintain, or increase, margins while dropping prices. It's a Golden Circle: the more you sell, the cheaper they are, and the more you sell.

What hasn't been widely appreciated is Moore's Second Law, the Vendor corollary: every four years, the price of a Fabrication plant doubles. This makes sense as well.
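The Learning Curve Effect and Moore's Second Law can both be sketched in a few lines. This is an illustration only: the 20%-per-doubling figure is the low end of the range above, and the $1B base fab cost is an assumed starting point, not a figure from this piece.

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate=0.80):
    """Learning Curve Effect: each doubling of cumulative production
    multiplies unit cost by learning_rate (0.80 = a 20% reduction)."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * learning_rate ** doublings

def fab_cost(base_cost, years_elapsed, doubling_period=4):
    """Moore's Second Law: fabrication-plant cost doubles every ~4 years."""
    return base_cost * 2 ** (years_elapsed / doubling_period)

# By the millionth unit, at 20% per doubling, unit cost falls to ~1.2%
# of the first unit's cost:
print(round(unit_cost(100.0, 1_000_000), 2))  # ~1.17

# An assumed $1B fab implies roughly $4B eight years later:
print(fab_cost(1.0, 8))  # 4.0
```

The two curves pulling in opposite directions, unit costs falling while plant costs double, is exactly the tension the rest of this piece explores.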
Every new technology step is beyond current known limits and stretches existing technology - and that always comes at an increasing marginal cost. Each extra performance increase costs more than the last, because you always harvest the "lowest hanging fruit" first: the cheapest and easiest gains are taken before all others.

This virtuous circle of faster/higher-capacity products, increased demand, cheaper prices and higher profits fuelling new R&D, new plant and higher volumes, based on PC sales, worked exceedingly well until 2008 and the Global Financial Crisis (GFC). The GFC's effect, a global downturn in PC Desktop & laptop demand, was reinforced by the challenge from the smartphone and tablet markets. In Economics, they are substitutes for at least part of what consumers & businesses used PC's for. Since 2010/2011, PC sales have been in decline.

What hasn't washed through yet is the effect on market segments that have been dependent on the PC market for their R&D and manufacture. The same plants and technologies that produce cheap CPU's for laptops also manufacture the fastest, most expensive server CPU's. High-end CPU's might comprise 1% of the market, but they share the same plants and fabrication R&D as the cheapest, high-volume CPU's. It "only" costs $20M-$30M to design a CPU for a new "process", and nothing needs to change in the (now) fully automatic manufacturing plants. This low additional cost allows vendors like Intel to leverage their high-volume commodity production lines and profitably create and sell low-volume, high-performance products. But only while the commodity market can pay for the whole new investment, R&D and new plant.

Software

Microsoft rode the PC revolution from 1981 to 2009, increasing sales and profits every year. They traded on the underlying volume economics and Silicon Revolution to sustain performance.
Microsoft software inspired the term "bloatware", meaning software that consumed radically more of every resource (CPU, RAM, Disk), or 'footprint', at every release, cancelling or negating any hardware performance gains and forcing domestic and business users to upgrade well before the hardware physically needed replacement. Since 2005, upgrade cycles have slowed.

Microsoft failed as well in basic Software Engineering processes, causing the 2005 "Longhorn Reset", where around 25,000 man-years of effort was discarded, a massive financial write-off. In the marketplace, this competency & quality problem showed up in poor Security and a radically inferior User Interface.

People mostly don't realise that Microsoft makes $15 in license fees on every smartphone sold, Android and iPhone, because they hold the basic patents. For nearly 10 years, starting with Windows-CE, Microsoft had the smartphone market to itself.

The revolution of the Steve Jobs iPhone was not the code and technology, but the Software, Security and User Interface done right. The market spoke and iPhone sales soared, whilst a succession of Windows Phone releases has languished in obscurity, even with the #1 mobile phone vendor, Nokia, insanely dropping all other phone operating systems, causing its business to collapse and eventually be acquired - by Microsoft.

Microsoft tied its fortunes to the global PC market. With that market now collapsing due to longer "refresh" cycles and substitute products, their business model is under threat.

Technical Ratios and Economic Ratios have both changed in 30 years

In 1981, the CPU in the IBM PC, at 4.77MHz, didn't need a "cache", because DRAM was faster than the CPU.

In 1991, with the advent of the first full-CMOS CPU on a chip, the Intel 486 at 25MHz (then 50 & 100MHz), DRAM was now slower than the CPU. Cache memory, small at first and once the preserve of supercomputers, became standard in consumer PC's.

Ever since, the gap between CPU and DRAM speeds has widened. Even with multi-core CPU's and their relatively static 2.5GHz-3GHz clocks, the gap widens as more cores demand more reads/writes per second from DRAM.

In the earliest valve computers, some used rotating "drum" storage as main memory. This forerunner to the modern hard disk was faster than any other technology.

Quickly, other faster, and more expensive per bit, technologies were developed.
DRAM has continued to get faster, albeit not keeping up with CPU's, while Hard Disk Drives, HDD's, have improved quite slowly in both average access time and streaming throughput - the number of bytes/second they can transfer.

As disks have had more bits stuffed onto them, two factors have changed: the number of bits-per-inch stored along a track, and the number of tracks-per-inch across the disk. The combination of the two is the "areal density", the number of bits packed into a single square inch. This has risen from a few thousand to now almost a trillion (a million million).
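Areal density is just the product of the two linear densities. The BPI/TPI figures below are illustrative values I've chosen to show the arithmetic, not figures from this piece:

```python
def areal_density(bits_per_inch, tracks_per_inch):
    """Bits per square inch: linear density along the track (BPI)
    multiplied by track density across the platter (TPI)."""
    return bits_per_inch * tracks_per_inch

# Illustrative modern figures: ~1.8M BPI x ~450k TPI
print(areal_density(1_800_000, 450_000))  # 810000000000, ~0.81 Tbit/in^2
```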

For a given increase in areal density, roughly the square root of the gain comes from bits-per-inch along the track, and if the disk rotates at the same rate, the transfer rate increases by that factor. Which is a great outcome: for free, your disk reads off your data that much faster.

But it comes at a cost. Your 4GB disk transfers at twice the rate of your 1GB disk, but because there's now four times as much data, it takes twice as long to read the whole disk.

The first 3.5 inch drives from Shugart, now Seagate, were 100MB and transferred at 0.6-1MB/s. In 2-3 minutes, you could read the whole drive. Current 4TB drives are 40,000 times larger, but only transfer 200 times faster, ~120MB/sec. To read your whole 4TB drive now, say for a backup, takes 10 hours - more if you're using it for other things! This blowout in "full drive scan time" hasn't garnered much attention, either.
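Those full-drive scan times can be checked in a few lines, using the capacities and transfer rates quoted above (taking 1GB as 1,000MB):

```python
def scan_hours(capacity_gb, transfer_mb_per_s):
    """Time to read an entire drive end-to-end at its streaming rate."""
    return capacity_gb * 1000 / transfer_mb_per_s / 3600

# 100MB Shugart drive at ~0.8MB/s: a couple of minutes.
print(round(scan_hours(0.1, 0.8) * 60, 1))  # ~2.1 minutes

# 4TB drive at ~120MB/s: most of a working day.
print(round(scan_hours(4000, 120), 1))  # ~9.3 hours
```

The ratio of the two scan times is the capacity ratio divided by the transfer-rate ratio: 40,000 / 200 = 200 times longer.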

So we've had this three way gap opening up between CPU speeds, DRAM speeds and Hard Disks, together with this blowout in full drive scan time. The fundamental technical ratios of all systems have changed radically, yet we persist with many of the same system designs.

Economic Ratios

Not only have the technical ratios changed, but also relative costs.

In the mid-1980s, Businesses paid $100,000-$500,000 (double that in today's dollars) for a minimum disk setup. The IBM 3380 stored 2.5GB, took twenty square feet of floor space and used 10kW-20kW of power. Compare this to sub-$200 for a 4,000GB 3.5 inch drive in a PC that uses 8W-10W. That's a phenomenal change in cost-per-bit and Watts/bit, as well as in figures I haven't quoted for transfer rate and average 'latency' (time to read data).
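The cost-per-bit and Watts-per-bit changes can be made concrete with the figures above, taking mid-range values where a range was given:

```python
drives = {
    # name: (capacity_GB, price_USD, power_W), figures from this piece,
    # with mid-range values where a range was quoted
    "IBM 3380 (mid-1980s)": (2.5, 300_000, 15_000),
    "4TB 3.5in PC drive":   (4_000, 200, 9),
}

for name, (gb, usd, watts) in drives.items():
    print(f"{name}: ${usd / gb:,.2f}/GB, {watts / gb:.4f} W/GB")

# $120,000.00/GB vs $0.05/GB: roughly a 2.4-million-fold change in cost-per-bit.
```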

Around 1,700 3.5 inch drives can be put into a single rack; surprisingly, that's a similar cost and power usage, for 3 million times the capacity (2.75PB), accessed around 2,000 times faster.

While in 1988 'average' IBM customers paid $750,000 for storage (equivalent to $1.5M now), there are still very few businesses that store even 1PB. Storage Arrays are still expensive, but that's because they do a lot more and squeeze out more performance than a simple "Box of Disks".

In terms of average wages, Storage has come down from costing 35+ years of average wages to a month's average wage. A full installation, CPU, DRAM, Disks, Tapes, Printers and Controllers, would've cost 5-8 times the price of Storage ($3M-$6M). Maintenance costs are typically 20%/year of the purchase price. A staff of 15-20 were necessary to operate and administer a computer this size. A team of 10-20 Analysts and Programmers were also needed.

The mainframe cost around 250 people's wages, was replaced in 2-3 years, cost 50-100 wages/year to operate and needed another 20-50 people to run it. On a yearly basis, an average mainframe cost 200-250 wage-equivalents per year to own and operate.

For $10,000, or 15% of average annual earnings, a high-performance server, including "enough" storage, can be purchased. On average, administrators look after 5-10 servers. Today, we also have Storage and Database Administrators, as well as System Admins and Operators, but many fewer. Programmers still abound. For Desktop PC's and Workgroup Fileservers, a single admin supports 100-500 PC's and 5-10 file servers.

While IT Departments have grown considerably in head count and budget, so has the work they perform, what they support and the "hardware fleet" under their control.

For the same proportional compute capacity, a rather small server kept for 3-5 years, the full yearly cost is 25%-50% wage equivalent: a 500-fold reduction to businesses.
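That 500-fold figure follows from the wage-equivalent numbers above; the arithmetic brackets it:

```python
# Yearly cost in wage-equivalents, from the figures above
mainframe_low, mainframe_high = 200, 250   # 1980s mainframe, per year
server_low, server_high = 0.25, 0.50       # small modern server, per year

# Conservative end: cheapest mainframe vs most expensive server
print(mainframe_low / server_high)   # 400.0
# Generous end: dearest mainframe vs cheapest server
print(mainframe_high / server_low)   # 1000.0
```

The ratios span roughly 400x to 1,000x, consistent with "a 500-fold reduction".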

In 1985, it was worth it for Admins to invest months in saving even 5% of disk space (two full years' wages in CapEx, and the same again in on-costs).

Now, when a new drive costs one day's pay, it doesn't make sense for a business to invest much time in saving space.

Thirty years ago, labour was under half your computing cost and The Computer cost 100 times more than a person.

Today, PC's cost a couple of days wages, servers less than a month's wages and staffing costs are 60% or more of IT budgets. Additional costs, like Help Desk software, software licensing, networking and specialist training, consume a large fraction of IT budgets. Server room hardware and operations are now a relatively small fraction of Enterprise computing budgets.

This is without taking into account the reason we use computers in the first place: to increase the productivity of the people in the organisation. Computers and I.T. produce a benefit to the business, or they don't get used (as opposed to bought; Management often confuses "having" a system with it being used and being useful. Another topic entirely.)

When you take the hourly turnover of Organisations supported by computers & I.T., a mid-sized business of 500 people is betting $50,000-$100,000/hour on everything working. System failures cost twice: once in lost work hours paid, and again in opportunity cost - revenue/turnover lost during the outage.
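A back-of-envelope sketch of that double cost. The staff count and hourly turnover are the article's figures; the $40/hour average wage is an assumption added purely for illustration:

```python
def outage_cost(staff, turnover_per_hour, avg_wage_per_hour, hours_down):
    """An outage costs twice: wages paid for lost work hours,
    plus the turnover forgone while systems are down."""
    lost_wages = staff * avg_wage_per_hour * hours_down
    lost_turnover = turnover_per_hour * hours_down
    return lost_wages + lost_turnover

# 500 staff, $75k/hour turnover, assumed $40/hour wage, 2-hour outage:
print(outage_cost(500, 75_000, 40, 2))  # 190000
```

Even a short outage costs far more than the hardware that would have prevented it.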

In thirty years, we've gone from labour being 100 times cheaper than a computer to 100 times as expensive. We can now afford to trade computer time and resources for staff time and productivity.

The "efficiency" of computers is now subsidiary to the productivity of staff and the efficiency of the organisation, yet Computing & I.T. still seldom make the calculation, rarely collect the necessary data and almost never justify I.T. Reliability and Performance upgrades in terms of direct Organisational savings and as Insurance against outages. I.T. and business practices both have to change to embrace the new reality.

A direct consequence of 3 decades of PC's in businesses and a decade of ubiquitous Internet on the Desktop is that Enterprise I.T. services are maturing. Less new hardware is bought and fewer new systems are licensed and brought into service, though conversions are more common, and both PC's (desktops and laptops) and servers have had their average service life extended. We keep hardware longer because it still works well enough.

In all this, more storage space is sold each year. It's hard to get good figures, but a 30%-60% increase per year is estimated. Which makes you wonder what it all gets used for: cat pictures or real business uses?

Summary

I.T. Hardware and Software vendors are facing a major disruption in their business model: the express train of steadily increasing demand, underpinned by PC technologies, has stalled.

In Storage, Enterprises and consumers continue to buy more capacity, but it doesn't show up in Hard Disk sales nor vendor profits. (There are only two major manufacturers left, Seagate and Western Digital.)

Because the PC market has paid for the development of capacity/speed increases in CPUs, DRAM and Hard Disks (Flash is also funded by consumer electronics), as that market collapses (and it seems to be doing so at an increasing rate), cash-flow dries up and financial support for new products dwindles.

Vendors producing for the "mobile" market, smartphones and tablets, are still experiencing increasing demand, volumes and profits.

The virtuous circle no longer operates to fund exponentially more expensive R&D and manufacturing plant based on PC technology. With lower Return on Investment, longer payback periods and more uncertain sales volumes, vendors of all PC-based technologies will either slow R&D and new products, or cancel programs entirely.

By 2015, we'll know how this goes. That could be the year that PC technology freezes, after which we'll get only incremental improvement or the very occasional "breakthrough" product.

Steve Jenkin said...

[Comments from an old friend, Part I of III]

I see much and hopefully gain a little insight.

I think progress will continue, but we will not progress in the same directions hence the measures we have used will show an unprecedented slowdown.

I work in an enterprise that is still on a 32-bit OS and 32-bit programs. Probably six years ago it became difficult to purchase single-core CPUs. While their multi-core counterparts could do more from a computing perspective, from a user perspective the performance of a single program that was not multi-core-aware dropped.

Storage may have missed targets but it has kept growing. In my opinion storage is not actually understood by the majority of people including many storage professionals.

Storage is an increasing risk for anyone who has not changed modes to adapt to the changed realities of storage.

Optical drives are now almost a footnote in computing if not entertainment media.

The prevalence of USB storage is a heinous circumstance we have arrived at, but it meets the lowest common denominator.

Networking has not done so much because it has not been an economic bottle-neck.

9 years ago I saw a 10Mbps hub (not switch) linking 45-55 staff to the switch that had the rest of the LAN. I had never used anything under 100Mbps before that.

Today I look after a branch office (5 staff) that has stepped down from all services on a 100Mbps LAN to a 2Mbps link serving everything (telephony, email, internet, file, DHCP, NTP, LDAP). WAN optimisation has meant that the users haven’t complained! We have moved forward 9 years and dropped network capacity to 1/50 without bothering the user – it doesn’t sound right but it works.

In consumerland gigabit has been over and above needs for so long it is only now with 3D HDTV that gigabit is truly looking like it needs an upgrade.

Prevalence of networks goes beyond fixed networks. The prevalence of WiFi devices has been a major impact, and GSM/HSPA data has changed the game entirely.

9 years ago I had the privilege of using a mobile data card that could deliver a maximum of 2Mbps and cost hundreds of dollars per month. Now every mobile in our fleet can hit 40Mbps, every laptop has an inbuilt HSPA modem, and all this is delivered at ~$60 per unit per month.

Connectivity has revolutionised IT services and has a lot to do with the 24h service expectation because there is “always” connectivity so there always needs to be a service running behind that.

FTTP could change things, but only if people are given the tools to make the change. For many, many professionals a solid 10Mbps connection from a home office would more than adequately replace a desk in their office (see WAN acceleration above, and contemplate what is possible). But this still ties people to fixed lines or WiFi networks they can utilise. Where is the WiFi mesh we should have?

In the enterprise it has been harder and harder to keep ahead of consumer technology, particularly as cloud based systems become more mature.

· We offer a 50Mbps internet connection shared to 60 staff, compared to a home user with 24 Mbps to themselves it’s not impressive.
· Gmail/Outlook.com offer services we are hard pushed to beat.
· DropBox etc sync to HDD, which beats accessing files over gigabit
· Backblaze, and CrashPlan offer backup solutions that are much the same as our enterprise solutions only they keep deeper history and are far more accessible.
· We won’t even pretend to offer something as useful as LastPass.
("controversial"? it’s just single sign on via internet).

Rhys Ambler
rhys@ambler.id.au

Steve Jenkin said...

[Comments from an old friend, Part II of III]

The old maxim “Do one thing well.” is exactly what cloud can offer.

Processors have traditionally been the driver of better computing speeds, but perhaps what we are going to see is the better application of computing as our next step, while compute power slows in growth.

The demand for mobile “sub-powered” devices provides a whole new environment ripe for cloud offerings. In the 90’s everything had a clock added to it, in the 00’s everything had an LED added, in the 10’s everything had WiFi added.

Perhaps the next step is sending information rather than data by adding compute power to everything. Intel were developing SD-card-sized computers, which I was told have this week been downgraded to CF-sized because of problems with the newer chips, so Intel are reverting to a more powerful Atom but increasing the size. Small, cheap, ubiquitous low-power computers paired with accelerometers, GPS, WiFi, and GSM/HSPA have made devices spatially aware (no gyroscopes in common production yet?). Doing useful things with them is still an ongoing problem but the market will take care of that in its own time.

I'd dispute the Microsoft 10-year monopoly on smartphones; Palm were in there with handheld computing and they did have a phone, which they failed to capitalise on. RIM developed the BlackBerry and grew an empire which they too have frittered away. In their latest upgrade they have sounded the death knell by moving away from their own global infrastructure, which differentiated them from every other solution. They are now as easily replaceable as any other part of the market, but they used to be the device of choice for most business people 10 years ago.

The prevalence of smartphones has led many consumers to segment their experience into many apps that each have a purpose. They bring this attitude to their work. The days of Mozilla (browser, mail client, PIM all in one) are gone – although I'm not quite sure this will effectively remain the case.

As an iPhone user I would have to say the quality of the product is in decline. More crashes, less stability of interface, little incremental change and a closed shop approach (Safari is the only browser that can be called by other apps – reminiscent of the successful anti-trust case against MS for their bundling of IE). This just means consumers suffer – there isn’t a rising star because Android is still full of problems that show it as an immature OS despite being more prevalent than iOS (Win vs Linux shows this can stand for many years).

Tech ratios have changed but so has the preferred consumer platform. Consumers may not know it but with their purchases they are recognising the changed environment, and the feedback loop means they are reinforcing these changes.

It is up to enterprise to catch up, capitalise the new environment and lead positive change for the consumer. Enterprise is where the “experts” are. However I don’t have faith there are many IT departments that are up to the task.

As far as end users go I still work with people who can't see the benefit of search over browsing/filing. Even when it comes to filing they can't step away from a physical analog: Outlook has been pushing categories for 13 years; when Gmail started 10 years ago (plus a few days) they saw the future was tags, which are functionally the same. Gmail wasn't tied to history so they never implemented folders. Yet I still see the vast majority of users spending a large portion of their day operating Outlook, blind to the power they are given. Many, if not most, are actively rejecting the different modus operandi, and the only reason I ever get is that they know how to work the less efficient system.

Rhys Ambler
rhys@ambler.id.au

Steve Jenkin said...

[Comments from an old friend, Part III of III]

Perhaps one unmentioned bottleneck is human adaptability! Maybe all the advances have moved capability beyond the limits of most humans’ adaptability. If so then the challenge may be to make the advances accessible to those with rigid minds? Perhaps the challenge is to fork everything so the old can coexist with the advances.

The failure of Google Wave is a perfect demonstration of this lack of adaptability:

· Why would you send a file out to 5 users to edit when you could get 5 users to look at the same file?
· Why continue using unsecured protocol that enables spam and leaves itself open to intercept?
· Why deal with reply all conversations that fork and exclude pieces of information in each thread?
· Why does our mail server attach a disclaimer asking people not to own the file they were just given when we could retain control of our files letting others come look at our content?
· Why do I have to create distribution lists and access control lists on behalf of my users?
· If users want storage in the same space as communication (evidenced by their desire to store everything on the mail server), then why are we propagating file servers, SharePoint servers, and mail servers which the user has to use a minimum of three different programs to access?

To all of this I have only one answer: users could not adapt to the new capabilities because they weren't led/herded into this new space, and so they all felt lost because it was too big a change for them to understand.

Storage will keep growing because the culture of search is partially understood and “There is no point throwing anything away at these costs.” (I find that idea highly contentious). There is an increasing capability to gather data (often duplicating it) so the temptation to store data is yielded to.

To an extent the capability to generate data (like photos videos and memes of cats) has grown in an unprecedented jump with smartphones because smartphones are so prevalent and so well suited to the first two tasks of content generation. Apps for creating derivative content are so low cost they spread quickly.

Lastly data isn’t all tied to computing. The vast bulk of data I have has nothing to do with me, it is media someone else has produced.

The landscape has changed and I think there is a gap between what is around and what people/enterprise recognise. It will all change but I wouldn’t dare to project how it will change.

Rhys Ambler
rhys@ambler.id.au