[p134] Alderman blames the failure on their overreliance on Carver Mead's publications... Carver Mead (Caltech) and Lynn Conway (Xerox PARC) revolutionised VLSI design and production around 1980, publishing "Introduction to VLSI Systems" and providing access to fabrication lines for students and academics. This has been widely written about:
e.g. in "The Power of Modularity", a short piece on the birth of the microchip from Longview Institute, and a 2007 Computerworld piece on the importance of Mead and Conway's work.
David A. Patterson wrote of a further, related effect in Scientific American, September 1995, p. 63, "Microprocessors in 2020":
Every 18 months microprocessors double in speed. Within 25 years, one computer will be as powerful as all those in Silicon Valley today
Most recently, microprocessors have become more powerful, thanks to a change in the design approach.
Following the lead of researchers at universities and laboratories across the U.S., commercial chip designers now take a quantitative approach to computer architecture.
Careful experiments precede hardware development, and engineers use sensible metrics to judge their success.
Computer companies acted in concert to adopt this design strategy during the 1980s, and as a result, the rate of improvement in microprocessor technology has risen from 35 percent a year only a decade ago to its current high of approximately 55 percent a year, or almost 4 percent each month.
Processors are now three times faster than had been predicted in the early 1980s;
it is as if our wish was granted, and we now have machines from the year 2000.
Copyright 1995 Scientific American, Inc.
The important points are:
- These acts, capturing expert knowledge in formal Design Rules, were intentional and deliberate.
- These rules weren't an arbitrary collection thrown together; they were a three-part approach: 1) the dimensionless, scalable design rules, 2) the partitioning of tasks, and 3) the system integration and testing activities.
- The impact, through a compounding rate effect, has been immense: e.g. via the Moore's Law doubling time, bringing CPU improvements forward by 20 years (see the sketch after this list).
- The Design Rules have become embedded in software design and simulation tools, allowing new silicon devices to be designed much faster, with more complexity and with orders of magnitude fewer errors and faults.
- It's a very successful model that's been replicated in other areas of I.T.
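As a rough check on that compounding effect, here is a minimal sketch in plain Python using only the rates quoted from Patterson above (35%/year versus 55%/year); the 10-year comparison horizon is an assumption for illustration.

```python
# Rough check of the compounding rates quoted from Patterson above.
# The 10-year comparison horizon is an assumption for illustration only.

def speedup(rate_per_year: float, years: int) -> float:
    """Total improvement factor after compounding an annual rate over `years`."""
    return (1.0 + rate_per_year) ** years

decade_at_35 = speedup(0.35, 10)    # ~20x over a decade
decade_at_55 = speedup(0.55, 10)    # ~80x over a decade

print(f"10 years at 35%/yr: {decade_at_35:6.1f}x")
print(f"10 years at 55%/yr: {decade_at_55:6.1f}x")
print(f"ratio of the two:   {decade_at_55 / decade_at_35:6.1f}x")
# Patterson's 'almost 4 percent each month' is the monthly equivalent of 55%/yr:
print(f"55%/yr as a monthly rate: {1.55 ** (1 / 12) - 1:.1%}")   # ~3.7%
```

The absolute numbers matter less than how quickly the gap between the two curves compounds.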
So why is this approach not used more widely? Does it not work, does it not scale, or is it not considered 'useful' or 'necessary'?
There are some tools that contain embedded expert knowledge, e.g. for server storage configuration. But they are tightly tied to particular vendors and product families.
Update 13-Nov-2011: What makes/defines a Design Rule (DR)?
Design Rules fall in the middle ground between the "Rules-of-Thumb" used in the Art/Craft of Practice and the authoritative, abstract models and equations of Science.
They define the middle ground of Engineering:
more formal than Rules-of-Thumb, yet more generally and directly applicable than the theories, models and equations of pure Science, and suitable for creating and costing Engineering designs.
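To make that distinction concrete, here is a hypothetical sketch (the scenario, names and every figure below are invented for illustration, not drawn from any published DR) contrasting a rule-of-thumb with a design-rule style calculation for sizing persistent storage:

```python
# Hypothetical illustration: a rule-of-thumb versus a design-rule style
# calculation for sizing persistent storage. All figures are invented.

def rule_of_thumb(current_tb: float) -> float:
    """Craft practice: 'double it and add a bit', basis unstated."""
    return current_tb * 2.5

def design_rule_sizing(current_tb: float, annual_growth: float, years: int,
                       raid_overhead: float, headroom: float) -> float:
    """Engineering style: every factor named, quantified and scalable."""
    grown = current_tb * (1.0 + annual_growth) ** years
    return grown * (1.0 + raid_overhead) * (1.0 + headroom)

sized = design_rule_sizing(100, annual_growth=0.40, years=3,
                           raid_overhead=0.25, headroom=0.20)
print(f"Rule-of-thumb: {rule_of_thumb(100):.0f} TB")   # 250 TB, basis unstated
print(f"Design rule:   {sized:.0f} TB")                # ~412 TB, every input auditable
```

Both may land on a workable answer; only the second can be costed, scaled and argued over.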
This "The Design Rule for I.T./Computing" approach is modelled after the VLSI technique used for many decades, but is not a slavish derivation of it.
Every well-understood field of Engineering has one definitive, authoritative "XXX Engineering Handbook": a publication that covers all the sub-fields and specialities, collects the formal Knowledge, Equations, Models, Relationships and Techniques, and provides Case Studies, Tutorials, the necessary Tables/Charts and worked examples, plus basic material from ancillary, related or supporting fields.
The object of these "Engineering Handbooks" is that any capable, competent, certified Engineer in the field can rely on their material to solve the problems, projects or designs that come their way: a single reference they can trust for their field.
Quantifying specific costs, materials and constraints comes from vendor/product specifications, contracts or price lists. These numbers feed the detailed calculations and pricing done with the techniques, models and equations given in the Engineering Handbook.
A collection of "Design Rules for I.T. and Computing" may serve the same need.
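A minimal sketch of that split, with invented vendor figures standing in for a real price list; the costing technique shown (CapEx up front plus discounted annual OpEx) is a standard whole-of-life calculation, not quoted from any particular handbook:

```python
# Sketch: whole-of-life (CapEx + OpEx) costing in the handbook style.
# The vendor figures below are invented placeholders, not real prices.

def whole_of_life_cost(capex: float, opex_per_year: float,
                       years: int, discount_rate: float) -> float:
    """Net present cost: CapEx up front plus each year's OpEx discounted back."""
    npv_opex = sum(opex_per_year / (1.0 + discount_rate) ** year
                   for year in range(1, years + 1))
    return capex + npv_opex

# Two hypothetical storage offerings compared over a 5-year service life.
option_a = whole_of_life_cost(capex=120_000, opex_per_year=18_000,
                              years=5, discount_rate=0.07)
option_b = whole_of_life_cost(capex=80_000, opex_per_year=32_000,
                              years=5, discount_rate=0.07)
print(f"Option A: ${option_a:,.0f}")   # higher CapEx, lower OpEx (~$194k)
print(f"Option B: ${option_b:,.0f}")   # lower CapEx, higher OpEx (~$211k)
```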
What are the requirements of a DR? (A sketch of capturing one as a structured record follows this list.)
- Explicitly list the aspects covered and not covered by the DR:
  e.g. Persistent Data Storage vs Permanent Archival Storage.
- Constraints and Limits of the DR:
  What is the largest, smallest or most complex system it applies to.
- Complete: all Engineering factors named and quantified.
- Inputs and Outputs: Power, Heat, Air/Water, ...
- Scalable: How to scale the DR up and down.
- Accounting costs: Whole of Life, CapEx and Opex models.
- Environmental Requirements:
- Availability and Serviceability:
- Contamination/Pollution: Production, Supply and Operation.
- Waste generation and disposal.
- Consumables, Maintenance, Operation and Administration.
- Training, Staffing, User education.
- Deployment, Installation/Cutover, Removal/Replacement.
- Compatibility with systems, components and people.
- Optimisable in multiple dimensions. Covers all the aspects traded off in Engineering decisions:
- Cost: per unit and as a 'specific metric' (e.g. $/GB).
- Speed/Performance: how it's defined, measured, reported and compared.
- 'Space' (Speed and 'Space' in the sense of the Algorithm time/space trade-off).
- Size, Weight, and other Physical characteristics
- 'Quality' (of design and execution, not the simplistic "fault/error rate")
- Product compliance with specification and repeatability of 'performance' (manufacturing defects, variance, problems, ...).
- Usability
- Safety/Security
- Reliability/Recovery
- Other factors will be needed to achieve a model/rule that is:
  {Correct, Consistent, Complete, Canonical (i.e. of minimum size)}.
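One way to make the list above operational is to capture a DR as a structured record that can be checked for completeness and fed into trade-off comparisons. The sketch below is illustrative only; the field names and the example entry are assumptions, not a proposed standard.

```python
# Sketch: a Design Rule as a structured, checkable record.
# Field names and the example values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class DesignRule:
    name: str
    covers: list[str]                 # aspects explicitly in scope
    excludes: list[str]               # aspects explicitly out of scope
    limits: dict[str, str]            # smallest/largest/most complex applicable system
    inputs_outputs: dict[str, str]    # power, heat, air/water, ...
    scaling: str                      # how to scale the DR up and down
    costs: dict[str, float]           # whole-of-life, CapEx, OpEx, specific metrics
    tradeoffs: dict[str, str] = field(default_factory=dict)  # speed, space, quality, ...

    def is_complete(self) -> bool:
        """Crude completeness check: every engineering factor named and non-empty."""
        return all([self.covers, self.limits, self.inputs_outputs,
                    self.scaling, self.costs])

storage_dr = DesignRule(
    name="Persistent Data Storage (illustrative)",
    covers=["online persistent storage"],
    excludes=["permanent archival storage"],
    limits={"smallest": "1 TB", "largest": "1 PB"},
    inputs_outputs={"power": "kW per rack", "heat": "kW to be removed"},
    scaling="linear in capacity up to the stated limits",
    costs={"capex_per_TB": 250.0, "opex_per_TB_year": 60.0},
    tradeoffs={"speed_vs_space": "IOPS falls as $/TB falls"},
)
print(storage_dr.is_complete())   # True for this toy entry
```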