Flash Memory, Disk and A New Storage Organisation

The raw data for this table is not definitive, but is meant to be close to reality. I haven't included tape media because I have no reliable price data, and tape is not relevant to the domestic market. CDs and DVDs are among the cheapest, best and most portable/future-proof technologies for enterprise and domestic archives and backups.
A previous post quotes Robin Harris of ZDnet (StorageMojo).

Edit (04-Jun-07): George Santayanda, on his storage sanity blog, writes on the Flashdance. He cites an 80%/pa reduction in flash prices and break-even with HDD in 2010/11. He's been in storage for years and is a senior manager. And doesn't take things too seriously.

And he points to a PowerPoint by Jim Gray on Flash/Solid State disks. Worth the read.

Storage Price Trends

The Yr/Yr ratios are used for forward projections.

The 'Est Flash' column uses the current 'best price' (MSY) for flash memory and a Yr/Yr ratio of 3.25.

Flash memory is now very much cheaper than RAM - forward projections for RAM were not done.

Year | RAM $/Gb | Flash $/Gb | Est Flash $/Gb | Disk $/Gb | DVD $/Gb | Max Flash | Max Disk
Yr/Yr ratio

Depending on the"Year-on-Year" ratio you choose for the reduction in $/Gb of Flash memory, and if you think both flash and disk drives will continue their plunge down the price curve, solid state memory (flash) may be the cheapest form of storage in under 5 years.

New Storage Organisation

Backups and Archives

With the price of large, commodity disk drives driving down near DVDs', and probably overtaking them within 5 years - and that's ignoring the cost of optical drives and the problems of loading the data you want - why would you not store backups and archives on disk?

For safe backups, the disks cannot be in the same machine, nor actually spinning. If the disk is in a USB enclosure, this means being able to spin it down by command.

Small businesses can effect a safe, effective off-site backup/archive solution by pairing with a friend and using 'rsync' or similar over the Internet (we all have DSL now, don't we?) to a NAS appliance. The NAS does need to store the data encrypted - which could be done at source by 'rsync' (yet another option) or by using an encrypted file system and rsync'ing the raw (encrypted) file. Best solution would be to have the disks spin down when not being accessed.
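The rsync arrangement above can be sketched as a small wrapper. The host name and paths are hypothetical placeholders, and the flags shown (archive mode, compression, deletion mirroring, SSH transport) are one reasonable choice, not the only one.

```python
# Sketch: assemble the rsync-over-SSH command for the friend-pairing backup
# described above. Host and paths are hypothetical placeholders.
def build_backup_cmd(src, remote_host, remote_dir):
    """Build an rsync command that mirrors src to a remote NAS over SSH."""
    return [
        "rsync",
        "-az",            # archive mode, compress in transit
        "--delete",       # mirror deletions so the copy matches the source
        "-e", "ssh",      # tunnel over SSH for transport security
        src,
        f"{remote_host}:{remote_dir}",
    ]

cmd = build_backup_cmd("/home/office/", "friend-nas.example.net", "/backup/office/")
print(" ".join(cmd))
# To actually run it:  import subprocess; subprocess.run(cmd, check=True)
```

Run from cron nightly, this gives the "removable tape" behaviour described above - the NAS end handles spin-down and (ideally) holds only encrypted data.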

This technique scales up to medium and large businesses, but probably using dedicated file-servers.

And the same technique - treat disks like removable tapes - applies. If drives are normally kept powered down, the usual wear-out issues just won't arise. Rusting out might prove to be an issue - as might little effects like thermal shock when powered up very cold.

Speed, Space and Transfer Rate

Robin Harris of ZDnet etc writes on Storage. He's flagged that as commodity disks become larger, a few effects arise:

  • single-parity RAID is no longer viable, especially as disk-drive failure is correlated with age - older drives fail more often. The problem is the time to copy/recreate the data (MTTR), and the chance of another failure in that window while unprotected.

  • Sudden drive failure is the normal mode. Electronics or power supply dies - game over.

  • Common faults in a single batch/model are likely to cause drives to fail early and together, and
  • RAID performance is severely impacted (halved) when rebuilding a failed drive.
Harris is fond of quoting a single metric for drives: (I/Os per second) per Gb - a great way to characterise the effective speed of drives. The size of drives has been doubling every couple of years, but the speed (rotational and seek) has been increasing much more slowly... Big drives, even lashed together in RAID arrays, can't deliver the same effective performance as a bunch of smaller, older drives.

This single figure of merit is half the equation: The other side is "time to copy".

That scales with transfer time, size, on-board cache and sustained read/write speeds.
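A back-of-envelope calculation shows why "time to copy" matters. The drive size, sustained rate and annual failure rate below are illustrative assumptions, not measured figures.

```python
# Sketch: rebuild time is roughly capacity / sustained transfer rate (best
# case); the array is exposed to a second failure for that whole window.
def rebuild_hours(capacity_gb, sustained_mb_s):
    """Hours to stream a whole drive at its sustained rate (best case)."""
    return capacity_gb * 1024 / sustained_mb_s / 3600

def window_failure_prob(rebuild_h, annual_failure_rate, drives_remaining):
    """Rough chance another surviving drive dies during the rebuild window."""
    per_drive = annual_failure_rate * rebuild_h / (365 * 24)
    return 1 - (1 - per_drive) ** drives_remaining

h = rebuild_hours(750, 70)            # assumed: 750 GB drive, 70 MB/s sustained
p = window_failure_prob(h, 0.05, 13)  # assumed: 5% AFR, 13 other drives
print(f"{h:.1f} h rebuild, {p:.4%} window risk")
```

In practice rebuilds run far slower than the sustained rate because the array is still serving I/O, so the real window - and the real risk - is a multiple of this best case. That is the argument for multiple parity drives.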

What the World Needs Now - a new filesystem or storage appliance

Just as disk drives are bashing up against some fundamental limits - bigger is only bigger, not faster - Flash memory is driving down in price - into the same region as disks. [And nobody knows when the limits of the magnetic recording technology will be reached - just like the 'heat death' of Moore's Law for CPU speed in early 2003.]

Flash suffers from some strong limitations:

  • Not that fast - in terms of transfer rate
  • Asymmetric read and write speeds (5-10:1)
  • Bits wear out - not indefinite life
  • Potentially affected by radiation (including cosmic rays)
But it's persistent without power, physically small, very fast at 'seek', relatively cheap per unit, simply interfaced, very portable, (seemingly) reliable, and uses very little power. Cheap flash memory only transfers around 5Mb/sec. Sandisk "Extreme III" Compact Flash (CF) cards, targeted at professional photographers, write at 20Mb/sec (and "Extreme IV" doubles that).

"Plan 9", the next operating system invented by the group who designed Unix (and hence Linux), approached just this problem. Files were stored on dedicated "File Servers" - not that remarkable.

Their implementation used 2 levels of cache in front of the ultimate storage (magneto-optical disk). The two levels of cache were memory and disk.

The same approach can be used today to integrate flash memory into appliances - or filesystems:

  • large RAM for high-performance read caching.

  • large, parallel flash memories for read buffering and write caching
  • ultimate storage on disk, and
  • archives/snapshots to off-line disk drives.
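The tiering above can be sketched as a simple read/write path. This is a minimal illustration of the Plan 9-style hierarchy, not a real cache design - tier sizes and the eviction policy are arbitrary.

```python
# Sketch: RAM cache -> flash tier -> disk, as in the list above.
class TieredStore:
    def __init__(self, ram_slots=2):
        self.ram = {}          # small, fastest tier (read cache)
        self.flash = {}        # larger read buffer / write cache
        self.disk = {}         # ultimate storage
        self.ram_slots = ram_slots

    def write(self, key, value):
        # Writes land in flash first (write caching), then destage to disk.
        self.flash[key] = value
        self.disk[key] = value   # destage shown synchronously for simplicity

    def read(self, key):
        # Look up each tier in turn; promote hits into RAM.
        for tier in (self.ram, self.flash, self.disk):
            if key in tier:
                value = tier[key]
                if len(self.ram) >= self.ram_slots and key not in self.ram:
                    self.ram.pop(next(iter(self.ram)))  # crude FIFO eviction
                self.ram[key] = value
                return value
        raise KeyError(key)

store = TieredStore()
store.write("a", b"block-a")
print(store.read("a"))   # → b'block-a'
```

A real appliance would destage asynchronously and track dirty blocks; the point here is only the lookup order and the promotion of hot data toward RAM.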

The disk drives in the RAID still have to have multiple parity drives, and hot spares.

The Flash memory has to be treated as a set of parallel drives - and probably with parity drive(s) as well.
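Treating the flash as a set of parallel devices with parity can be illustrated with plain XOR parity (as in RAID-4/5): any single lost device is recoverable by XOR-ing the survivors.

```python
# Sketch: stripe data across parallel flash devices plus one XOR parity device.
from functools import reduce

def stripe_with_parity(chunks):
    """Return chunks plus an XOR parity chunk over equal-length byte strings."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return list(chunks) + [parity]

def rebuild(stripes, lost_index):
    """Recompute the missing chunk by XOR-ing all surviving chunks."""
    survivors = [c for i, c in enumerate(stripes) if i != lost_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

stripes = stripe_with_parity([b"\x01\x02", b"\x04\x08", b"\x10\x20"])
assert rebuild(stripes, 1) == b"\x04\x08"   # lost device recovered
```

The same XOR also catches a worn-out or bit-flipped block, provided the failure is confined to one device per stripe - which is why the text argues for parity device(s) over the parallel flash.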

This arrangement addresses the write performance issues, leverages the
faster read speed (when pushing cache to disk) and mitigates the effect
of chip failure, bits wearing out and random 'bit-flips' not detected
and corrected internally.

The only deep question is: what sort of flash drives to use?

  • Compact Flash cards are IDE (ATA) devices, with the same pin-out (less the 2 outside pins) as 2.5" drives
  • SD cards are small, cheap and simple - but bulk connections to computers aren't readily available

  • USB flash is more expensive (more interfaces), but scales up well and interfaces are readily available.
  • Or some new format - directly inserted onto PCI host cards...

USB or CF is a great place to start.

CF may cause IDE/ATA interfaces to re-emerge on motherboards - or PCI IDE-card sales to pick up.


Why Ideas are 'Cheap' or Execution is everything

"Genius is 1% Inspiration and 99% Perspiration": Thomas Alva Edison


Ideas cost very little to generate and, without substantial additional effort, come to nothing.
But new ideas are the only starting point for new things - so which is more important: coming up with the idea, or making it concrete?
Both are necessary and equally important - without one, the other leads nowhere.

Criticism of others' ideas without substantial evidence, proof, counter-example or working demonstration is churlish.

"Put up or Shut up" is a reasonable maxim for critiquing ideas.
Ideas only take real form and viability if they are the subject of robust and probing debate and defence. It is better to fail early, amongst friends, than publicly and spectacularly.
It's easy to confuse a profusion of ideas with "invention".
The marker of "usefulness" is the follow-through from ideation to implementation.


There have been a number of world class research institutions where the notion "Ideas are cheap" has been an espoused mantra - Xerox PARC and Bell Labs are personally known to me.

But what does that mean? That "Ideas" are worthless?

No - far from it. "Ideas" are the starting point of everything new and improved. "Ideas" are a necessary, but not sufficient, condition for the full-fledged innovation of new, useful 'things'.

Why Ideas are 'Cheap'

From the Net:

Thomas Edison is credited with two similar quotes, Einstein is sometimes given credit for the genius quote:
"Genius is 1% inspiration and 99% perspiration."
"Success is 10% inspiration and 90% perspiration."

and also:
"There is no substitute for hard work."

Canadian Dave Pollard, ex-Global Director of Knowledge Innovation (or Chief Knowledge Officer) at Ernst & Young, says:

We are all by nature inventive, and ideas are cheap. The real challenge is innovation, bringing a great invention or idea to commercial fruition. It is the application of the idea that takes true genius, hard work, patience, timing, and often good luck and good connections. It is what separates the millionaire entrepreneur from the pauper inventor.

If you've ever had one idea, you can have more... It's not hard dreaming up
variations on a theme or combining existing ideas in novel ways...

Trusting that you will come up with another 'good idea' can take a while to learn and be harrowing in the seemingly endless 'dry' periods.

Trust in yourself and your abilities, sits alongside "Determination" and "Persistence" as the requirements of any good innovator. It is a learnt skill. And like any skill, the more practice you have, the better you are at it - and the more variations of it you have available.

Our brains are excellent at solving problems - it is one of the things Homo sapiens excels at. And what distinguishes Homo sapiens from previous species is our rate of innovation - we've come up with novel, useful advances at a stunning and increasing rate for 30,000 years or more.

The hard part - putting in the hours.

Look at the ratio of work between levels in the sequence:


research - lookup
research - investigate or create
write paper, publish
pilot production
full production and marketing
evolution/updating product

Each of these steps is about 10 times more effort, work or costly than the previous one.
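The ~10x-per-step ratio compounds quickly. A tiny sketch - the step labels are abbreviated, and only the ratio comes from the text:

```python
# Sketch: relative effort of each step in the idea-to-product ladder,
# each step ~10x the previous. Units are arbitrary.
def effort_ladder(steps, ratio=10, base=1):
    """Map each step to its relative effort, each ~ratio x the previous."""
    return {step: base * ratio**i for i, step in enumerate(steps)}

ladder = effort_ladder(["idea", "research", "publish", "pilot production",
                        "full production", "product evolution"])
print(ladder["full production"])   # → 10000
```

Four steps past the idea, the effort is four orders of magnitude larger - which is the whole point of "ideas are cheap".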

Ideas are "cheap" in the sense that they cost very little to produce and very little effort to espouse. Taking them further requires real effort, determination and persistence...  The critical difference between "invention" and "innovation" (thinking and doing).

More than one notable academic has had just one "good idea" in their lifetime - and sometimes that was even given/suggested to them by someone else. But they deserve credit and recognition for doing the hard yards in bringing an idea into concrete usefulness.

Why you shouldn't be too 'Wed' to your ideas

If you've had one good idea, why won't you have more?

There is no reason to believe your Brain/Mind won't deliver you more Good Ideas than you can use - if you have the sense to ask it, the courage to cultivate it, and are looking for them.

Ideas are uncomfortable, challenging and a nuisance.

Look at Babbage and his "Analytical Engine" - he kept thinking of "better, cheaper, faster" ways to do the same thing - and in the end he failed to produce any working thing [but was shown to be right this century when one of his designs was built and worked as predicted, modulo a few minor design issues.]

It's an Art generating lots of ideas - but a greater art in selecting the Good Ones and converting them into useful research, programs or prototypes.

And perhaps also it's a personality or psychological trait that predisposes some people to ideation, innovation, refining or finishing.

The Margerison-McCann Team Management Wheel offers statistically proven evidence of population capability differences.

Robert E. Kelley's 1998 book "How to be a Star at Work" is based on data reported by Bell Labs showing that there are people who radically outperform others (i.e. their output is measurably higher). How much? Kelley is a bit coy on this - perhaps 4 times, perhaps 10 times... But that presumes you have benchmark outputs for 'normal' performance.


"Ideas are Cheap" is a shorthand way of saying:

It's far easier to come up with an idea than it is to translate it into something concrete and useful - and getting it produced and marketed is a bigger step again.

Yes, it is a genuine skill generating ideas - but just like in a journey, starting out is usually the easiest part.

Lauding people for "great ideas" is counter-productive - it encourages them to stop there... Probably blocking them from arriving at much better ideas as they arise during the development phase.

Our society and culture are based on a huge number of "good ideas" that have been refined and developed over millennia.

"Standing on the shoulders" of earlier, great minds is a given. Truly
new ideas are exceedingly scarce.


Microsoft, AntiTrust (Monopolies) and Patents

MSFT is threatening to Sue the Free World.
Patents are a state-granted Monopoly in return for full disclosure.
  • Patents are useless unless defended.
  • Patents are granted in a single jurisdiction at a time - there are no 'global' patents.
  • Patents are uncertain until tested in court - by the full panoply of judges, counsel and mountains of paper.
  • Patent 'trolls' and 'submarining' exist (and are legal tactics) - people who play the system for Fun and Profit. They hide out until someone is successful, then don't try to license their patents - but sue (for large amounts).
Microsoft may claim that code infringes its patents - but that's just a posture. If they were for real, they'd be launching court cases to decide the matter.


Microsoft Troubles - III

Microsoft threatens to Sue The Free World.

Groklaw comments on MSFT threatening to sue "Patent Violations".

CNN/Fortune Original article (probably)

ZDnet (Mary Jo Foley)


Driving Disks into the future

Robin Harris of ZDnet's "StorageMojo" has written a series of posts on factors affecting the future of disk storage. These are my reactions to his quotes and the trends, especially in flash memory.

Flash getting "70% cheaper every year" - hence more attractive:

"Every storage form factor migration has occurred when the smaller size reached a capacity point that enabled the application, even though it cost more per megabyte."

"With flash prices dropping 70% a year and disks 45%, the trend is inexorable: flash will just get more attractive every year."

The problems with RAID and Big Drives:

"There are three general problems with RAID: Economic, Managerial, Architectural"

  • RAID costs too much
  • Management is based on a broken concept [LUN's]
  • Parity RAID is architecturally doomed
"The big problem with parity RAID is that I/O rates are flat as capacity rises. 20 years ago a 500 MB drive could do 50 I/O per second (IOPS), or 1 IOPS for every 10 megabytes of capacity. Today, a 150 GB, 15k drive, the ne plus ultra of disk technology, is at 1 IOPS for every 750 MB of capacity. Big SATA drives are at 1 IOPS per several gigabytes. And the trend is down."

What a "Web Business" wants from Storage Vendors:

What a Web Business wants [Don MacAskill of 'smugmug']:

  • External DAS for the database servers .. and dual-controller arrays [simplified recovery after server death]
  • Spindle love. Typical array has 14.
  • No parity RAID. RAID 1+0.
  • 15k drive love. Speed is good.
  • Love drive enclosures with odd numbers of drives. Makes keeping one hot spare easy.
  • Love big battery-backed up write caches in write-back mode. Because super-fast writes are “. . . easily the hardest thing in a DB to scale.”
  • Disable array read caching: array caches are small compared to the 32 GB of RAM in the servers. reserve all array cache for writes.
  • Disable array pre-fetching: the database knows better than the array.
  • Love configurable stripe and chunk sizes. 1 MB+ is good.

"Don should be the ideal array customer: fanatical about protection; lots of data; heavy workload, not afraid to spend money. Yet he isn’t completely satisfied, let alone delighted, by what’s out there. A lot of the engineering that goes into arrays is wasted on him, so he’s paying for a lot of stuff he’ll never use, like parity RAID, pre-fetch and read caching."

And 'the future of Storage'

The future of storage:

"The dominant storage workload of the 21st century. Large file sizes, bandwidth intensive, sequential reads and writes."

"(OLTP) Not going away. The industry is well supplied with kit for OLTP. It will simply be a steadily shrinking piece of the entire storage industry. OLTP will keep growing, just not as fast as big file apps."

"Disk drives: rapidly growing capacity; slowly growing IOPS. Small I/0s are costly. Big sequential I/0s are cheap. Databases have long used techniques to turn small I/Os into larger ones. With big files, you don’t have to."

"The combination of pervasive high-resolution media, consumer-driven storage needs, expensive random I/0s and cheap bandwidth point to a new style of I/O and storage. The late Jim Gray noted that everything in storage today will be in main memory in ten years. A likely corollary is that everything analog that is stored today will be digital in 10 years."


Response to Cognitive Work Load - Huh?

Here's a related question whose correct name I've been trying to discover for the last 10 years.
I can't believe something so fundamental to I.T. and the knowledge economy could go unstudied.

I frame it as "Human cognitive response to workload".

There is a whole bunch of data on "human physiological response to workload" - like the US Navy and how long stokers can work at various temperatures (and humidity?).

This goes to the heart of computing/programming - being able to solve difficult problems, and managing/reducing defects/errors. In my career, I got very tired of bosses attempting to get more work done by "forced marches". 80-hour weeks aren't more productive - they just ensure a very high defect rate and amazing amounts of rework.

The best I have been able to find is Dr Lisanne Bainbridge and her work on "mental load".

What I wanted to discover is:
  • that for each individual there is an optimal number of 'brain work' hours per week
  • the effect of physical & mental fatigue and sleep deprivation on 'brain work' output, degree of difficulty tasks and error rate.
  • the recovery time for strenuous (mental) effort - working 50, 75 and 100 hours / week requires recovery, but how much?

If you, gentle reader, have any leads/pointers on this, I'd really appreciate it :-)

Even if you only know what the field is called, or can refer me to the people that do know.

Teams - Where's the proof?

Addition 24-May-2007
Johanna Rothman, author and consultant, answered an e-mail from me.
Johanna is involved in the Jerry Weinberg and Friends AYE - Amplifying Your Effectiveness - conference. Johanna's book "Behind Closed Doors" is on this blog and highly recommended for I.T. Technical Managers. Another interest of Johanna's: Hiring the Best People.

Johanna's thoughtful response:
Part of the problem is I can't do two of the same project where one is set up as an integrated team and the other is a bunch of people who don't have integrated deliverables. I can tell you that the projects where the people are set up with committed handoffs to each other (Lewis' idea that one person can't work without the rest of them), have better project throughput (more projects per time period) than the groups of people who do not have committed handoffs to each other. But that's empirical evidence, not academic research.

Here's a real request I sent to a company expert in the psychological aspects of work and teams - Team Management Systems. They are one of the few companies that bring intellectual rigour and validated research to the masses.

I'm trying to find any books, or even journal articles, that show *quantitative* results of team work... Especially anything that proves a) high-performing teams do exist and b) shows they do perform better.

Anyone who's ever worked in a well-functioning team knows it produces a lot more, and the quality is way up.

Robert E Kelley, of Carnegie Mellon, in "How to be a star at work", describes the '9 strategies of star performers' developed through an extensive study of Bell Labs Switching Systems software group.

In Appendix 1, the research story behind the book, he describes the assumption of the book, star performers, as "doing the work of 10 average coworkers"... That's all very hard to measure accurately, and the book doesn't, though order-of-magnitude differences do stand out to management.

David H Maister in "Practice What you Preach" reports an extensive, multi-country study where he comes up with a multi-factorial (quantitative) model relating company financial results to inputs - and staff attitude is a huge factor.

As an aside, he mentions that staff will happily stay in a place they like while being paid 20% less than 'outside'. It's not an invitation to pay people less, but an encouragement if you get into financial straits.

The Gallup (Poll) Organisation's "12: The Elements of Great Managing" is again based on strong, validated research. They quote the quantitative impact of their 12 elements on multiple other factors like accident rates [and productivity?]. Lots of stories as well :-)

I've any number of books on my shelves on Teams - their management, formation, workings, benefits...
But none that says the simple fact: Teams are more productive than groups of individuals.

WHAT I've been searching for is anything that has a strong quantitative study of this obvious fact...

I'd like to be able to refute fools like the one who stated "Teams don't give any benefits - they are an American propaganda/fad". He was incapable of working with others, so for him this was a factual observation. He was sacked from that leadership position.


Defining I.T. Service Management

Objectives (The What)

Having begun around 1950, the world of Commercial I.T. is now mature in many ways. "Fields of Work" and professional taxonomies are starting to become standardised. Professional "Best Practices" are being documented and international standards agreed in some areas.

For the first time, audits of one of the most pragmatic I.T. disciplines, "Service Management", are possible with ISO 20,000. Business managers can now get an independent, objective opinion on the state of their I.T. operations - or of their outsourcers.

Being "documented common sense", ITIL and the related ISO 20,000 are good professional guides, but not underpinned by theory. Are there any gaps in the standard? How does Service Management interface with other IT Fields of Work? and What changes in those other disciplines are necessary to support the new audited practice?

Analysis of the full impact of I.T. Service Management, creation of a full taxonomy and definitions of "I.T. Maturity" are beyond the scope of a small "single researcher" project.

Approach (The How)

ITIL Version 2 and 3 and ISO 20,000, as published documents, form the basis of the project.
Prior work in the field has yet to be identified. Secondary research will be the first step.

Each of the models will be codified and uniformly described, then a 3-way comparison performed. A gap analysis will be done across the 3 models, a formal model built describing "I.T. Service Management" and its interfaces, and each of the existing approaches mapped to it.
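The comparison step can be sketched as set operations over codified process names. The process identifiers below are hypothetical placeholders, not actual ITIL or ISO 20,000 clause names:

```python
# Sketch: 3-way comparison and gap analysis over codified process names.
# Process identifiers are hypothetical placeholders for illustration only.
itil_v2 = {"incident", "problem", "change", "release", "capacity"}
itil_v3 = {"incident", "problem", "change", "release", "capacity",
           "service strategy", "service transition"}
iso_20000 = {"incident", "problem", "change", "release",
             "budgeting", "supplier management"}

common = itil_v2 & itil_v3 & iso_20000          # core shared by all three
gaps = {
    "only ITIL v3": itil_v3 - itil_v2 - iso_20000,
    "only ISO 20000": iso_20000 - itil_v2 - itil_v3,
}
print(sorted(common))
print(gaps)
```

Once each framework is codified to a uniform vocabulary, the shared core becomes the candidate backbone of the formal model, and the per-framework remainders are exactly the gap analysis.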

Importance/Value (The Why)

The global economy, especially businesses in the "Western Industrialised World" are increasingly dependent on I.S./I.T. and their continued efficient operation. Corporate failures partially due to I.S./I.T. failure have occurred. Improving delivery of I.T. Services and the business management and use of them is important to reduce those failures in the future.

The advent of ubiquitous and universal computing requires concomitant development of business management.

These assertions are considered axioms in this context:
  • Organisations these days are dependent on their I.T. Operations.
  • I.T. cuts across all segments of current organisations.
  • I.T. defines the business processes and hence productivity of the whole organisation.
  • What you don't measure you can't manage and improve.
  • Improving the effectiveness of I.T. Operations requires auditable processes.
  • Common I.T. Audit and Reporting Standards, like the Accounting Standards, are necessary to contrast and compare the efficiency and effectiveness of I.T. Operations across different organisations or different units within a single organisation.
I.T. is a cognitive amplifier, it delivers "cheaper, better, faster, more, all-the-same", through the embedding of finely detailed business processes into electronic (computing) systems.

For simple, repetitive cognitive tasks, computers are 1-5,000 times cheaper than people in western countries.

From this amplification effect, computers still provide the greatest single point of leverage for organisations. They underpin the requirement to "do more with the same", improving productivity and increasing profitability.

The few studies of "IT Efficiency" that are available show that IT effectiveness is highly variable and unrelated to expenditure.

The value-add to business of a complete I.T. Service Management model is two-fold:
  • manage down the input costs of the I.T. infrastructure and Operations and,
  • audit assurance for the board and management of the continued good performance of I.T. Operations.

[A 1990 HBS or MIT study into "White Collar Productivity" - reported a decrease in the first decade of PC's]

Previous Work (What else)

There is much opinion in the area without substantive evidence: e.g. Nick Carr and "Does IT Matter?". The McKinsey report/book on European manufacturers and their I.T. expenditure versus financial performance shows there is no correlation between effort (expenditure) and effect (financial performance).

"Commonsense" IT Practitioner approaches, SOX, ITIL and COBIT and others, do not address the measuring and managing of I.T. outputs and interfaces and their business effects, utiliation and effectiveness.

Jerry Landsbaum's 1992 work included examples of his regular business reports - quantifiable and repeatable metrics of I.T. Operations phrased in business terms.

Hope to find (The Wherefore)

  • Create a formal model for I.T. Operations and its performance within and across similar organisations.
  • From the model, generate a standard set of I.T. performance metrics.
  • Generate a set of useful I.T. Operations Business Impact metrics.

Report Outline

  • Coded process models of ITIL version 2, 3 and ISO 20,000.
  • 3-way comparison of ITIL version 2, 3 and ISO 20,000.
  • Gap Analysis of ITIL version 2, 3 and ISO 20,000 models.
  • Formal I.T. Service Management model.
  • Common I.T. Service Management internal metrics and Business Impact
    metrics flowing from the model.
  • Interfaces to other I.T. and business areas and changes necessary to support audits of I.T. Service Management.
  • Further Work and Research Questions

Execution Phases

  • Learn ITIL Version 2 - Service Managers Certificate course [complete]
  • Learn ISO 20,000 - IT Consultants training [in process]
  • Acquire and learn ITIL Version 3 [depends on OGC availability. mid/late 2007]
  • Create/identify process codification.
  • Codify ITIL version 2, 3 and ISO 20,000
  • Compare and contrast coded descriptions. Report.
  • Create/adapt process description calculus for formal model.
  • Create formal I.T. Service Management model.
  • Derive interfaces to business and other I.T. processes
  • Derive internal metrics, role KPI's and business impact metrics
  • Finalise report.


Bookshelf I

These are books on my bookshelf I'd recommend. Notes on them later.
Pick and choose as you need.

Personal Organisation

Personal Efficiency Program - Kerry Gleeson [older]
Getting Things Done - David Allen [newer]

Teams, People, Performance

Practice What you Preach - David H Maister [Numerical model relating Profitability to Staff Morale/Treatment]
How to be a Star at Work - Robert E Kelley
Team Management Systems - Margerison & McCann

Maximum Success: Breaking the 12 Bad Business Habits Before They Break You - Waldroop & Butler
[re-released as] The 12 Bad Habits That Hold Good People Back

No Asshole Rule - Robert Sutton

Execution - the art of Getting things done (in big business)

Who says Elephants can't Dance - Louis V Gerstner [on Execution and 'Management is Hard']
Execution - Bossidy & Charan
Confronting Reality - Bossidy & Charan

Gallup Research

First, Break all the Rules - Buckingham & ??
Now, Discover your Strengths - Buckingham & Clifton [old]
StrengthsFinder 2.0 - Tom Rath [current]
12: The Elements of Great Managing - Wagner & Harter
The One thing you need to know - Buckingham

Off the Wall - Different ideas on Management and Leadership

Contrarian's Guide to Leadership - Steven B Sample
Simplicity - Jensen

Intelligent Leadership - Alistair Mant [old]
Maverick - Ricardo Semler. [old]
The 7-day Weekend - Ricardo Semler [new]

Wear Clean Underwear - Rhonda Abrams
Management of the Absurd - Richard Farson
Charisma Effect - Guilfoyle

Computing Management

Measuring and Motivating Maintenance Programmers - Jerry Landsbaum
any of the 50 books by Robert L. (Bob) Glass

Why Information Systems Fail - Chris Sauer
Software Failure : Management Failure - Flowers

Jerry Weinberg Prolific author - Quality, People, Teams, Inspections & Reviews, technical, ...
Quality Software Management - 4 book series
Becoming a Technical Leader
Weinberg on Writing - the Fieldstone Method
Secrets of Consulting
Psychology of Computer Programming
-- and another 40 or so --

Peopleware - DeMarco & Lister from Dorset House Publishing specialising in why people matter.

Project Retrospectives - Norm Kerth
Programming on Purpose - PJ Plauger

Mythical Man Month - Frederick Brooks [I don't have a copy]

"IT Doesn't Matter" - Nicholas G Carr [read a synopsis, don't buy]