Why Fibre Channel SANs will be dead in 5 years.

I won't be buying shares in any Fibre Channel-based tech stocks as I think the technology will be dead within 5 years for two reasons:
  • Enterprise Storage Arrays will stop being used for high-intensity random I/O, instead being used for "seek and stream", and
  • PCI Flash or SCM Storage will become the low-latency "Tier 0" Storage of choice because of speed (latency), cost and simplicity.
Update 1-Feb-2012:
Another article by Fusion-io, Getting the most out of flash storage, provides extra links.

Elsewhere I've written that Jim Gray's observation, "Disk is the new Tape", should be "Disk is the new CD".
That is, Enterprise Storage is best suited for "Seek and Stream", not random I/O.
In this future, Enterprise Storage Arrays will need to provide:
  • reliable persistent storage and archives,
  • high capacity and best $ per MB, and
  • high bandwidth streaming I/O, and
  • if we're being honest: vendor-neutral management and protocols, in-place upgrades, any-time snapshots/backups, and flexible zero-downtime expansion and reconfiguration.
    The current need for "fork-lift upgrades", and vendor incompatibility, disadvantage customers.
How do you create a Storage Network that is fast, cheap, simple, robust/reliable, secure and scalable, once "super low-latency, zero jitter and non-blocking I/O" is taken out of the mix?

Ethernet and nothing else.

Whether Layer 2 protocols, like Coraid's ATA-over-Ethernet, or Layer 3 protocols, e.g. the slower, higher overhead but routable iSCSI, dominate is still an open question.
Both have strengths and weaknesses, and they can be used together effectively, without conflict, to maximise ROI and minimise Enterprise Storage costs, both CapEx and OpEx.

Ethernet is around 10-100 times cheaper than Fibre Channel to install and configure, and requires only a fraction of the administration and support, because Enterprises already have well-resourced, competent Networking teams. Network Engineers are in much better supply than SAN specialists, so wages are more reasonable and availability is much, much higher.
Their competence is also far easier for technical managers to assess, both when hiring and when firing.

As well, Ethernet has a clear growth path to 40Gbps and 100Gbps, with 10Gbps widely available now for servers.
Fibre Channel may improve to 16Gbps sometime in the future, but that's an uncertain roadmap, and with a global market in the tens of thousands of units versus millions for Ethernet, the cost differential will only grow.

Fibre Channel has become a very poor choice when bandwidth is the primary "figure of merit".

In access latency, PCI-based Flash Memory, such as Fusion-io's, will always beat SAN-based Storage Arrays by a rather large margin.

It's there in the physics and unbeatable...

All the interfaces, line delays, buffering and switching - out and back across a SAN - mean that even if the Storage Array SSDs had zero latency, the round trip would still be many times slower.
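A back-of-the-envelope sketch makes the point concrete. The per-hop figures below are purely illustrative assumptions (not measurements from any vendor): even if the array's storage medium cost zero time, the SAN path itself would dominate a local PCIe flash access.

```python
# Illustrative only: assumed per-hop latencies for one SAN round trip, in microseconds.
# None of these figures are measured; they are round numbers to show the shape of the argument.
san_path_us = {
    "host HBA / driver":             10,
    "FC switch hops (out and back)":  4,
    "array front-end controller":    20,
    "array cache / back-end logic":  15,
    "storage medium (assume zero!)":  0,
}
pcie_flash_us = 15  # order-of-magnitude PCIe flash access time, per the Fusion-io figures below

san_total = sum(san_path_us.values())
print(f"SAN round trip with zero-latency media: {san_total} us")
print(f"Local PCIe flash access:                {pcie_flash_us} us")
print(f"The SAN path alone is {san_total / pcie_flash_us:.1f}x a PCIe flash access")
```

Change any individual assumption and the totals move, but the conclusion doesn't: the wire, switches and controllers put a floor under SAN latency that direct-attached flash simply doesn't pay.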

This Q&A with Matt Young of Fusion-io on "Making Flash Fast" says it well:
Q: How does your ioMemory technology differ from Solid State Disks?  And how does it compare performance wise?

A: Solid State Disks or SSDs are used to store data with the intention of constant use – similar to that of a hard drive. These SSDs generally use disk-based protocols that introduce unnecessary latency into the system. Fusion’s ioMemory technology differs in that it doesn’t act as a hard drive. It performs as an extension of the memory hierarchy for servers. This means that they provide a tighter integration with host systems and applications, helping you to work more productively.

Fusion-io products offer the industry’s lowest latencies, which maximise performance and scalability, while delivering enterprise reliability.
Q: Can you provide some typical I/O performance figures for ioMemory compared to DRAM, and solid state disk?

A: With some generalisation, the order of memory, fastest first is as follows,
  • DRAM with 100-300 nanosecond access,
  • ioMemory with 15 microsecond access,
  • NAND appliances with around 500 microsecond access and
  • then SSDs with around 1ms [1,000 microseconds] access. [66 times slower than ioMemory...]
There are, of course, a number of factors to consider in these figures, such as payload size and load. However, in simple terms, with all things equal and a well-designed product, latency is ultimately governed by the distance data must travel to get to where it is useful.
So, the closer your technology resides in relation to the CPU the better the response time.
That's why, even though two products may use the same NAND chips and both connect to the PCI Express bus, you see markedly different latency characteristics.
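The hierarchy quoted above can be laid out in a few lines of Python. The figures are those given in the answer; the 200 ns DRAM value is my midpoint of the quoted 100-300 ns range.

```python
# Access latencies from the figures quoted above, in microseconds.
latency_us = {
    "DRAM":           0.2,    # 100-300 ns quoted; 200 ns midpoint assumed here
    "ioMemory":      15.0,    # PCIe-attached flash
    "NAND appliance": 500.0,
    "SSD":           1000.0,  # ~1 ms, via disk-based protocols
}

baseline = latency_us["ioMemory"]
for tier, lat in latency_us.items():
    print(f"{tier:15s} {lat:8.1f} us  ({lat / baseline:7.2f}x ioMemory)")
# SSD vs ioMemory: 1000 / 15 = 66.7 -- the "66 times slower" in the quote
```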
Q: As well as I/O performance, what other attributes of ioMemory are finding favour among customers?

A: One of the things that our customers tell us provides a major benefit in addition to performance is the reliability of ioMemory and the cost savings generated from implementing Fusion-io solutions. Fusion-io products are uniquely reliable enough to be offered by all major OEM manufacturers, including Dell, HP and IBM.

Finally, many customers tell us that they save a lot of money on CapEx and OpEx, since ioMemory takes so much less power, cooling and real estate than traditional, scaled-out storage infrastructures.

Declaration of Interest:
I have no shares or other financial interest in Fusion-io or any companies or their competitors mentioned in this piece.
I am not employed now, nor have I ever been, by Fusion-io or any of its related/associated entities.
I receive no remuneration for writing these opinions/analyses.
(signed) Steve Jenkin, 31-Jan-2012.
