The thesis of this piece is the Next Technology Disruption is No Disruption, No Revolution:
instead of exponential growth in technologies, decreasing unit prices and increasing volume sales, we're now seeing zero or slow growth, steady or increasing unit prices (especially where supply chains are disrupted) and, in all but a few market segments, declining sales and stressed profits across many vendors. I believe these are linked.
The list of Technology Roadblocks since 2000 is long and deep:
- Around 2002, Moore's Law for single CPUs hit the Heat/Power Wall, forcing Intel and AMD into multi-core designs.
- Laptop sales, by units, overtook Desktop sales in 2008.
- Since the 2007 introduction of the iPhone and the 2010 introduction of the iPad (and the 2008 Android phone, then tablets), sales of PCs have been falling.
- By 2013, sales of tablets, by units, had outstripped all PC sales.
- Sales of Intel and AMD CPUs have declined with the PC market.
- In 2005, DRAM was dropped as the semiconductor industry's reference technology step.
- Flash memory kept doubling in capacity and halving price-per-bit at historic rates.
- This has seen the death of both floppy drives and optical disks (CDs, DVDs) in consumer products.
- By 2010, Hard Disk Drives (HDDs) had missed their evolution targets: 4TB had been forecast while 2TB shipped.
- Small HDDs and floppy disks have been killed by USB Flash drives and SSDs. At around $1/GB for Solid State Disks (SSDs), drives under $50-$75 are now only SSDs.
- 2013 saw sales of HDDs, by unit, decline 5%, the second consecutive year of contraction.
- I have no good information on the evolution and pricing of GPUs (Graphics Processors and Video Cards), nor monitors. We seem to have arrived at 30-inch displays with 2560-pixel resolution 5-7 years ago and stalled there.
- General use of Optical Drives plateaued at DVD single-sided capacity, 4.7GB.
- Blu-ray disks (25GB single-layer, 50GB dual-layer) are sold in relatively limited volumes, and media is comparatively rare and expensive.
- Commodity consumer-grade SD cards, used in laptops, video cameras, still cameras, phones and tablets, are available everywhere and come in sizes from 4GB to 32GB. At around $1/GB, they've relegated write-once Blu-ray to the technology off-ramp.
- Ethernet via Twisted Pair erupted in the early 1990s at 10Mbps and by 1999 was up to 1Gbps. It has stagnated since.
- The 'ethernet' market is estimated to be a $16B/year business.
- Today every PC (Desktop and Laptop) with an ethernet port supports 10/100/1000Mbps speeds.
- In 2002, a 10Gbps standard was released, but took until 2007 to ship 1M ports. Although a twisted-pair 10Gbps standard exists, it is very rarely seen and hasn't appeared in common consumer devices.
- In 2010, fibre-only updates to Ethernet were released: 40Gbps and 100Gbps. These are especially important for server rooms and long-distance links.
- DWDM, Dense Wave Division Multiplexing (or 'Many Colours'), allows 64 or 96 signals to be carried on one fibre. The cost of electronics, and even fibre, is tiny compared to the cost of civil works installing fibre over 10 kilometres.
The one bright point in the Technology landscape is in the Wide Area Network (WAN). Fibre to the Premises (FTTP) is starting to provide affordable, guaranteed 100Mbps and 1Gbps services to all customers. This will drive new applications in the business market: branch offices, SOHOs and SMEs.
Economics of Volume Production
Since the IBM PC was released in 1981, the economics of Volume Production has driven the Hardware and Software world. When the cost of R&D and marketing is amortised across millions, or tens of millions of units sold, the per-piece cost rapidly reduces to the Variable Cost of manufacture. The "Learning Curve Effect", or Economies of Scale, also kicks in: for every doubling of production scale, unit costs reduce 20%-30%.
Not only does the R&D capital risk get returned quickly, the cost of parts continues to drop, stimulating further demand if the market is elastic. Vendors can maintain, or increase, margins while dropping prices. It's a Golden Circle, the more you sell, the cheaper they are and the more you sell.
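The Learning Curve arithmetic can be sketched in a few lines (a 25% reduction per doubling, the midpoint of the 20%-30% range, is assumed for illustration):

```python
import math

# Learning Curve Effect: each doubling of cumulative production
# cuts the unit cost by a fixed fraction (assumed 25% here).
def unit_cost(base_cost, units, learning_rate=0.25):
    """Unit cost after production scales from 1 to `units` pieces."""
    doublings = math.log2(units)
    return base_cost * (1 - learning_rate) ** doublings

# A $100 part, after production scales roughly a thousand-fold
# (10 doublings), falls to 100 * 0.75^10, about $5.63.
print(round(unit_cost(100, 1024), 2))
```

At a 30% learning rate the same scaling lands near $2.82; the exact rate matters less than the compounding.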
What hasn't been widely appreciated is Moore's Second Law, the Vendor corollary:
Every four years, the price of a Fabrication plant doubles.
This makes sense as well. Every new technology step is beyond current known limits and stretches existing technology - and that always comes at increasing marginal cost. Each extra performance increase costs more than the last, because you always harvest the "lowest hanging fruit" first: the cheapest and easiest gains are taken before all others.
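Moore's Second Law compounds just as relentlessly as the first. A sketch (the $100M starting figure for a mid-1980s fab is an assumption for illustration only):

```python
# Moore's Second Law: the price of a fabrication plant
# doubles every four years.
def fab_cost(start_cost, start_year, year):
    """Projected fab cost in `year`, doubling every 4 years."""
    return start_cost * 2 ** ((year - start_year) / 4)

# An assumed $100M plant in 1985 projects to 100M * 2^7,
# i.e. $12.8B, by 2013 - the right order of magnitude for
# a leading-edge fab today.
print(fab_cost(100e6, 1985, 2013))
```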
This virtuous circle of faster/higher-capacity products, increased demand, cheaper prices, higher profits fuelling new R&D, new plant and higher volumes, based on PC sales, worked exceedingly well until 2008 and the Global Financial Crisis (GFC).
The GFC's effect, a global downturn in PC Desktop & Laptop demand, was reinforced by the challenge from the smartphone and tablet markets. In economic terms, they are substitutes for at least part of what consumers & businesses used PCs for.
Since 2010/2011 PC sales have been in decline.
What hasn't washed through yet is the effect on market segments that have been dependent on the PC market for their R&D and manufacture. The same plants and technologies that produce cheap CPU's for laptops also manufacture the fastest, most expensive server CPU's.
High-end CPUs might comprise 1% of the market, but they share the same plants and fabrication R&D as the cheapest, high-volume CPUs. It "only" costs $20M-$30M to design a CPU for a new "process", and nothing needs to change in the (now) fully automatic manufacturing plants.
This low additional cost allows vendors like Intel to leverage their high-volume commodity production lines and profitably create and sell low-volume, high performance products. But only while the commodity market can pay for the whole new investment, R&D and new plant.
Microsoft rode the PC revolution from 1981 to 2009, increasing sales and profits every year. They traded on the underlying volume economics and Silicon Revolution to sustain performance.
Microsoft software gave rise to the term "bloatware": it consumed radically more of every resource (CPU, RAM, Disk) - its 'footprint' - at every release, cancelling or negating any hardware performance gains and forcing domestic and business users to upgrade well before the hardware physically needed replacement. Since 2005, upgrade cycles have slowed.
Microsoft failed as well in basic Software Engineering processes, causing the 2005 "Longhorn Reset", where around 25,000 man-years of effort was discarded, a massive financial write-off. In the marketplace, this competency & quality problem showed up in poor Security and a radically inferior User Interface.
People mostly don't realise that Microsoft collects licence fees, reportedly up to $15 per handset, on most Android smartphones sold, because it holds basic patents. For nearly 10 years, starting with Windows CE, Microsoft had the smartphone market to itself.
The revolution of the Steve Jobs iPhone was not the code and technology, but the Software, Security and User Interface done right. The market spoke and iPhone sales soared, whilst a succession of Windows Phone releases languished in obscurity, even after the #1 mobile phone vendor, Nokia, insanely dropped all its other phone operating systems, causing its business to collapse and eventually be acquired - by Microsoft.
Microsoft tied its fortunes to the global PC market. With that market now collapsing due to longer "refresh" cycles and substitute products, its business model is under threat.
Technical Ratios and Economic Ratios have both changed in 30 years
In 1981, the 4.77MHz CPU in the IBM PC didn't need a "cache", because DRAM was faster than the CPU.
By 1991 and the advent of the first full-CMOS CPU on a chip, the Intel 486 at 25MHz (then 50 & 100MHz), DRAM was slower than the CPU. Cache memory, small at first and once the preserve of supercomputers, became standard in consumer PCs.
Ever since, the gap between CPU and DRAM speeds has widened. Even with multi-core CPUs and their relatively static 2.5GHz-3GHz clocks, the gap widens as more cores demand more reads/writes per second from DRAM.
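The gap can be put in rough numbers. A sketch, assuming a ~60ns uncached DRAM access time (a typical figure, not one from the text):

```python
# Clock cycles a core wastes waiting on one uncached DRAM access.
def stall_cycles(clock_hz, dram_latency_s):
    return clock_hz * dram_latency_s

# 1981: at 4.77MHz, one DRAM access cost well under a cycle -
# no cache needed. Today, a 3GHz core waiting ~60ns stalls for
# roughly 180 cycles per miss, which is why caches are essential.
print(round(stall_cycles(4.77e6, 60e-9), 2))  # ~0.29 cycles
print(round(stall_cycles(3e9, 60e-9)))        # ~180 cycles
```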
In the earliest valve computers, some designs used rotating "drum" storage as main memory. This forerunner of the modern hard disk was faster than any other storage technology of the day.
Quickly, other faster - and more expensive per bit - technologies were developed.
DRAM has continued to get faster, albeit not keeping up with CPUs, while Hard Disk Drives (HDDs) have improved quite slowly in both average access time and streaming throughput - the number of bytes/second they can transfer.
As disks have had more bits stuffed onto them, two factors have changed: the number of bits-per-inch stored along a track, and the number of tracks-per-inch across the disk. The combination of the two is the "areal density", the number of bits packed into a single square inch. This has risen from a few thousand to now almost a trillion (a million million).
For a given increase in areal density, only the square root of that factor comes from bits-per-inch along the track, and if the disk rotates at the same rate, the transfer rate rises by that same square-root factor. Which is a great outcome: for free, your disk reads off your data that much faster.
But it comes at a cost. Your 4GB disk transfers at twice the rate of your 1GB disk, but because there's now four times as much data, it takes twice as long to read the whole disk.
The first 3.5 inch drives from Shugart, now Seagate, were 100MB and transferred at 0.6-1MB/s. In 2-3 minutes, you could read the whole drive. Current 4TB drives are 40,000 times larger, but only transfer 200 times faster, ~120MB/sec. To read your whole 4TB drive now, say for a backup, takes 10 hours - more if you're using it for other things! This blow out in "full drive scan time" hasn't garnered much attention, either.
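The scan-time blow-out above can be checked with back-of-envelope arithmetic:

```python
# Full-drive scan time: capacity divided by sustained transfer rate.
def scan_time_hours(capacity_gb, rate_mb_per_s):
    return capacity_gb * 1000 / rate_mb_per_s / 3600

# Early 3.5 inch drive: 100MB at ~0.8MB/s reads fully in 2-3 minutes.
print(round(scan_time_hours(0.1, 0.8) * 60, 1))  # ~2.1 minutes
# Current 4TB drive at ~120MB/s: ~9.3 hours, before any other load.
print(round(scan_time_hours(4000, 120), 1))
```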
So we've had this three way gap opening up between CPU speeds, DRAM speeds and Hard Disks, together with this blowout in full drive scan time. The fundamental technical ratios of all systems have changed radically, yet we persist with many of the same system designs.
Not only have the technical ratios changed, but also relative costs.
In the mid-1980s, businesses paid $100,000-$500,000 (double that in today's dollars) for a minimum disk setup. The IBM 3380 stored 2.5GB, took twenty square feet of floor space and used 10kW-20kW of power.
Compare this to sub-$200 for a 4,000GB 3.5 inch drive in a PC that uses 8W-10W. That's a phenomenal change in cost-per-bit and Watts-per-bit, as well as in figures I haven't quoted for transfer rate and average 'latency' (time to read data).
Around 1,700 3.5 inch drives can be put into a single rack. Surprisingly, that's a similar cost and power usage for around 2.7 million times the capacity (6.8PB), accessed around 2,000 times faster.
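Those ratios can be computed directly from the figures above (midpoints of the quoted ranges are used):

```python
# Cost-per-GB and Watts-per-GB: IBM 3380 (mid-1980s) vs a 4TB drive.
# Midpoints of the quoted ranges: $300,000 and 15kW for 2.5GB;
# $200 and 9W for 4,000GB.
old_cost_per_gb = 300_000 / 2.5   # $120,000/GB
new_cost_per_gb = 200 / 4000      # $0.05/GB
old_watts_per_gb = 15_000 / 2.5   # 6,000W/GB
new_watts_per_gb = 9 / 4000       # 0.00225W/GB

print(old_cost_per_gb / new_cost_per_gb)    # ~2.4 million-fold cheaper
print(old_watts_per_gb / new_watts_per_gb)  # ~2.7 million-fold less power
```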
While in 1988 'average' IBM customers paid $750,000 for storage (equivalent to $1.5M now), today there are very few businesses that store even 1PB. Storage Arrays are still expensive, but only because they do a lot more and squeeze out more performance than a simple "Box of Disks".
In terms of average wages, Storage has come down from costing 35+ years of average wages to a month's average wage. A full installation - CPU, DRAM, Disks, Tapes, Printers and Controllers - would've cost 5-8 times the price of Storage ($3M-$6M). Maintenance costs were typically 20%/year of purchase price. A staff of 15-20 was necessary to operate and administer a computer this size. A team of 10-20 Analysts and Programmers was also needed.
The mainframe cost around 250 people's wages, was replaced in 2-3 years, cost 50-100 wages/year to operate and needed another 20-50 people to run it. On a yearly basis, an average mainframe cost 200-250 wage-equivalents per year to own and operate.
For $10,000, or 15% of average annual earnings, a high-performance server, including "enough" storage, can be purchased. On average, administrators look after 5-10 servers. Today, we also have Storage and Database Administrators, as well as System Admins and Operators, but many fewer. Programmers still abound. For Desktop PCs and Workgroup Fileservers, a single admin supports 100-500 PCs and 5-10 file servers.
While IT Departments have grown considerably in head count and budget, so has the work they perform, what they support and the "hardware fleet" under their control.
For the same proportional compute capacity, a rather small server kept for 3-5 years, the full yearly cost is 25%-50% wage equivalent: a 500-fold reduction to businesses.
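The 500-fold figure can be reconstructed from the numbers above (midpoints used; the modern admin share of 0.3 wages/year - one admin spread across several servers, plus on-costs - is an assumption):

```python
# Yearly cost of computing in average-wage equivalents:
# amortised purchase price plus yearly running costs.
def wages_per_year(capex_wages, life_years, opex_wages_per_year):
    return capex_wages / life_years + opex_wages_per_year

# Mainframe: ~250 wages to buy, replaced every 2-3 years,
# 50-100 wages/year to own and operate (midpoints used).
mainframe = wages_per_year(250, 2.5, 75)
# Small server: ~0.15 wages to buy, kept ~4 years, plus an
# assumed 0.3 wages/year share of admin and support costs.
server = wages_per_year(0.15, 4, 0.3)
print(round(mainframe / server))  # roughly a 500-fold reduction
```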
In 1985, it was worth it for Admins to invest months in saving even 5% of disk space (two full years' wages in CapEx, and the same again in on-costs).
Now, when a new drive costs one day's pay, it doesn't make sense for a business to invest much staff time in saving space.
Thirty years ago, labour was under half your computing cost and The Computer cost 100 times more than a person.
Today, PC's cost a couple of days wages, servers less than a month's wages and staffing costs are 60% or more of IT budgets. Additional costs, like Help Desk software, software licensing, networking and specialist training, consume a large fraction of IT budgets. Server room hardware and operations are now a relatively small fraction of Enterprise computing budgets.
This is without taking into account the reason we use computers in the first place: to increase the productivity of the people in the organisation. Computers and I.T. must produce a benefit to the business, or they don't get used (as opposed to merely bought: Management often confuses "having" a system with it being used and being useful. Another topic entirely.)
When you take the hourly turnover of organisations supported by computers & I.T., a mid-sized business of 500 people is betting $50,000-$100,000/hour on everything working. System failures cost twice: once in lost work hours paid, and again in opportunity cost - revenue/turnover lost during the outage.
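That double cost can be sketched directly (the $50/hour loaded wage is an assumption; the turnover figure is the midpoint of the range above):

```python
# An outage costs twice: wages paid for lost hours,
# plus revenue/turnover forgone while systems are down.
def outage_cost(hours, staff, hourly_wage, hourly_turnover):
    return hours * (staff * hourly_wage + hourly_turnover)

# 500-person business, assumed $50/hour loaded wage,
# $75,000/hour turnover (midpoint of $50k-$100k):
print(outage_cost(2, 500, 50, 75_000))  # a 2-hour outage costs $200,000
```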
In thirty years, we've gone from labour being 100 times cheaper than a computer to 100 times as expensive. We can now afford to trade computer time and resources for staff time and productivity.
The "efficiency" of computers is now subsidiary to the productivity of staff and the efficiency of the organisation, yet Computing & I.T. still seldom make the calculation, rarely collect the necessary data and almost never justify I.T. Reliability and Performance upgrades in terms of direct Organisational savings and as Insurance against outages. I.T. and business practices both have to change to embrace the new reality.
A direct consequence of 3 decades of PC's in businesses and a decade of ubiquitous Internet on the Desktop is that Enterprise I.T. services are maturing. Less new hardware is bought, fewer new systems licensed and brought into service, though conversions are more common, and both PC (desktop and laptops) and servers have had their average service life extended. We keep hardware longer because it still works well enough.
In all this, more storage space is sold each year. It's hard to get good figures, but from 30%-60% increase per year is estimated. Which makes you wonder what it all gets used for. Cat pictures or real business uses?
I.T. Hardware and Software vendors are facing a major disruption in their business model: the express train of steadily increasing demand, underpinned by PC technologies, has stalled.
In Storage, Enterprises and consumers continue to buy more capacity, but it doesn't show up in Hard Disk sales nor vendor profits. (Only two major manufacturers are left, Seagate and Western Digital, with Toshiba a smaller third.)
Because the PC market has paid for the development of capacity/speed increases in CPUs, DRAM and Hard Disks (Flash is also paid for by consumer electronics), as that market collapses - and it seems to be doing so at an increasing rate - cash-flow dries up and financial support for new products dwindles.
Vendors producing for the "mobile" market, smartphones and tablets, are still experiencing increasing demand, volumes and profits.
The virtuous circle no longer operates to fund exponentially more expensive R&D and manufacturing plant based on PC technology. With lower Return on Investment, longer payback periods and more uncertain sales volumes, vendors of all PC-based technologies will either slow R&D and new products, or cancel programs entirely.
By 2015, we'll know how this goes. That could be the year that PC technology freezes, after which we'll get only incremental improvement or the very occasional "breakthrough" product.