For the delivery of general-purpose, wide-scale Compute/Internet Services there now seems to be a definitive hardware organisation for servers, typified by the eBay "pod" contract.
For decades there have been well-documented "Design Rules" for producing silicon devices with specific technologies/fabrication techniques. This is an attempt to capture some rules for current server farms. [Update 06-Nov-11: "Design Rules" are important: Patterson, in a September 1995 Scientific American article, notes that the adoption of a quantitative design approach in the 1980s lifted the annual rate of microprocessor performance improvement from 35% to 55%. After a decade, processors were 3 times faster than forecast.]
Commodity Servers have exactly three possible CPU configurations, based on "scale-up" factors (a rough classification sketch follows the list):
- single CPU, with no coupling/coherency between App instances. e.g. pure static web-server.
- dual CPU, with moderate coupling/coherency. e.g. web-servers with dynamic content from local databases. [LAMP-style].
- multi-CPU, with high coupling/coherency. e.g. "Enterprise" databases with complex queries.
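As a rough illustration only, the three-way split can be written as a simple lookup in Python. The names, thresholds and mapping below are my own assumptions for the sketch, not any vendor's standard:

from enum import Enum

class Coupling(Enum):
    NONE = "none"          # independent App instances, e.g. a pure static web-server
    MODERATE = "moderate"  # dynamic content from local databases (LAMP-style)
    HIGH = "high"          # complex queries against an "Enterprise" database

def cpu_config(coupling: Coupling) -> str:
    # Map a workload's coupling/coherency needs onto the three commodity CPU configs.
    return {
        Coupling.NONE: "single CPU",
        Coupling.MODERATE: "dual CPU",
        Coupling.HIGH: "multi-CPU",
    }[coupling]

print(cpu_config(Coupling.MODERATE))   # -> dual CPU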
[Update 06-Nov-11: Sometimes raw hardware is unavoidable: Oracle, for instance, insists some feature sets must run on physical servers, and vendors won't always support your (preferred) VM solution.]
VM products are close to free and offer incontestable Admin and Management advantages, such as 'teleportation' (live-migration) of running instances and their local storage.
There is a special non-VM case: cloned physical servers. This is how I'd run a mid-sized or large web-farm.
This requires careful design, a substantial toolset, competent Admins and a resilient network architecture. Layer 4-7 switches are mandatory in this environment.
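For illustration, this is roughly the health-check logic a Layer 4-7 switch (or a software load-balancer standing in for one) applies to a pool of cloned servers. The hostnames and the /healthz path are assumptions made up for the sketch, not part of any particular product:

import urllib.request

CLONES = ["web01.example.net", "web02.example.net", "web03.example.net"]

def healthy(host: str, timeout: float = 2.0) -> bool:
    # Probe the clone's health-check URL; treat any network or HTTP error as 'down'.
    try:
        with urllib.request.urlopen(f"http://{host}/healthz", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def active_pool() -> list[str]:
    # Only clones currently passing their health-check should receive traffic.
    return [h for h in CLONES if healthy(h)]

print(active_pool())

The point is the operational model, not the code: failed clones drop out of the pool automatically, and new clones start taking traffic as soon as they pass the check.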
There are three system components of interest:
- The base Platform: CPU, RAM, motherboard, interfaces, etc.
- Local high-speed persistent storage, i.e. SSDs in a RAID configuration.
- Large-scale common storage: network-attached storage with filesystem-level, not block-level, access.
Consequently, "Fibre Channel over Ethernet", with its inherent contradictions and problems, is unnecessary.
Designing individual service configurations can be broken down into steps (a back-of-envelope sizing sketch follows the list):
- select the appropriate CPU config per service component
- specify the size/performance of local SSD per CPU-type.
- architect the supporting network(s)
- specify common network storage elements and rate of storage consumption/growth.
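As a worked example of those steps, here is a minimal sizing sketch. Every number and threshold below is a placeholder assumption; substitute measured figures from your own workload before believing the output:

def size_component(peak_requests_per_sec: float,
                   requests_per_core_per_sec: float,
                   working_set_gb: float,
                   data_growth_gb_per_day: float,
                   planning_horizon_days: int = 365) -> dict:
    # Step 1: pick a CPU config from the estimated core count (thresholds are illustrative).
    cores_needed = peak_requests_per_sec / requests_per_core_per_sec
    if cores_needed <= 4:
        cpu_config = "single CPU"
    elif cores_needed <= 12:
        cpu_config = "dual CPU"
    else:
        cpu_config = "multi-CPU"
    # Step 2: size local SSD to hold the working set, with a 2x margin for
    # RAID overhead and near-term growth (the margin is arbitrary here).
    local_ssd_gb = 2 * working_set_gb
    # Step 3 (network architecture) doesn't reduce to a single number, so it's omitted.
    # Step 4: common network storage consumed over the planning horizon.
    network_storage_gb = data_growth_gb_per_day * planning_horizon_days
    return {"cpu_config": cpu_config,
            "local_ssd_gb": local_ssd_gb,
            "network_storage_gb": network_storage_gb}

# e.g. a LAMP-style component: 2,000 req/s peak, ~300 req/s per core,
# 150 GB working set, growing 20 GB/day:
print(size_component(2000, 300, 150, 20))
# -> {'cpu_config': 'dual CPU', 'local_ssd_gb': 300, 'network_storage_gb': 7300}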
As a professional, you're looking to provide "bang-for-buck" for someone else who's writing the cheques. Over-dimensioning is as much a 'sin' as running out of capacity. Nobody ever got fired for spending just enough, and hence maximising profits.
Getting it right as often as possible is the central professional engineering problem, followed by limiting the impact of Faults, Failures and Errors, including under-capacity.
The quintessential advantage, for professionals, of developing standard, reproducible designs is flexibility: the ability to respond to unanticipated loads/demands, the speed with which new equipment can be brought on-line and, conversely, retired and removed.
Security architectures and the choice of O/S + Cloud management software are outside the scope of this piece.
There are many multi-processing architectures, each best suited to particular workloads.
They are outside the scope of this piece, but locally attached GPUs are about to become standard options. Most servers will acquire what were once known as vector processors, and applications using this capacity will become common. This trend may need its own Design Rule(s).
Different, though potentially similar, design rules apply to small and mid-size Beowulf clusters, depending on their workload and cost constraints.
Large-scale or high-performance compute clusters and storage farms, such as the IBM 120 Petabyte system, need careful design by experienced specialists. With any technology, "pushing the envelope" requires special attention from the best people you have to even have a chance of success.
Unsurprisingly, this organisation looks a lot like the current fad, "Cloud Computing", and the last fad, "Services Oriented Architecture".
Google and Amazon dominated their industry segments partly because they figured out the technical side of their business early on. They understood how to design and deploy datacentres suitable for their workload, how to manage Performance and balance Capacity and Cost.
Their "workloads", and hence server designs, are very different:
- Google serves pure web-pages, with almost no coupling/communication between servers.
- Amazon has front-end web-servers backed by complex database systems.