2007/03/23

Future Forecasting for I.T. - how close to 'mature' is the market?

Jonathan Schwartz of SUN Microsystems posted an article on the SUN and Intel alliance. SUN may be coming back from the brink - with the 'opening' of Solaris, they may have realised again that they're a hardware company (and they do make great servers).

There was a line that gave me pause:
To be clear, this isn't about displacing one another's competitors, it's about getting as big a piece of the future as possible. The market's not shrinking, after all.


I was struck by "The market's not shrinking, after all."

In 2000, personal-use PCs were 'desktops' - now laptop sales are at least equal, if not higher...
The world is changing - the I.T. market is very close to maturation - near 'topping out' perhaps.

Take for instance the Gartner predictions for desktop/laptop sales in the next 12 months (can't remember the link).
They forecast 10.6% growth in sales volume (to 255+M units) but only a 4.x% increase in sales dollars.
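A back-of-the-envelope sketch (assuming the "4.x%" is roughly 4.0% - the exact figure doesn't change the story) shows what those two growth rates imply for average selling price:

```python
# What Gartner's forecast implies for average selling price (ASP).
# The 4.0% dollar-growth figure is an assumption standing in for "4.x%".
unit_growth = 0.106       # forecast growth in units shipped
dollar_growth = 0.040     # forecast growth in sales dollars (assumed)

asp_change = (1 + dollar_growth) / (1 + unit_growth) - 1
print(f"Implied change in average selling price: {asp_change:.1%}")
# => about -6%: more boxes shipped, each earning noticeably less
```

More units at lower prices is exactly the pattern you'd expect of a maturing market.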

SUN have announced their "DataCentre in a Container". Think that through - these are effectively very nicely packaged *mainframes*, of MIMD (non-homogeneous) design versus the classic MIMD (SMP) design. You get a 'volume discount' by buying excess capacity - and it comes prebuilt. Your techs should never be opening the doors. It really will be "everything in software". And the box could be anywhere within a few milliseconds down the network.

Some organisations will resell capacity - not like the old Processing Bureaus or, more recently, web hosting, but fractional amounts of a 'box'. Just like leasing office, storage or warehouse space.

The big change will be corporations adopting the same scale-up/scale-out architectures as the large internet companies - the Internet Data Centre rather than the usual Enterprise Data Centre...

Moore's Law on CPU speed broke in Q1-2003 - but those pesky engineers are still building smaller devices and putting more transistors on a chip - that means more bang for your CPU buck for maybe another 10 years [definitely 2010, but why not 2015].

Scenario:
Organisation buys a DC box. Keeps it for its economic life (dominated probably by disk size/failures), then replaces it.
The new box *will* have more CPU power, or cost less per processing 'unit', modulo disk pricing.

All of a sudden servers (the things that SUN sells) will be bought in large quanta, kept and replaced in the same large quanta.
And each quantum will feature better "bang per buck". What we've seen in desktops and servers is that unit price can't be maintained - the price of the low-end units will keep drifting down.

The West's economy is getting close to being saturated with corporate compute power...
Real growth might occur in the developing world - that's a complex equation that includes social and cultural variables.

So will the market for server CPUs keep expanding? I think we are close to maturation of the I.T. industry - within 30-50% of the maximum CPU demand... which means very close to maximum total sales dollars.

Modulo brand new applications of course :-)
Artificial Intelligence, Knowledge Management or Data Mining/Business Intelligence could actually deliver something useful one day.

2007/03/22

I.T. in context

Here are Questions, not Answers...
Things that I'd like to explore and have better answers on.

Most of these questions probably don't have permanent 'answers' - I suspect each generation, each culture, each industry has to define and redefine them for its own mix of technology, political structure and workplace organisation.

2007/03/20

Quantifying the Business Benefits of I.T. Operations

Objectives (The What)


That "I.T. is done for a Business Benefit" seems axiomatic.

But where's the evidence after 50-60 years of computing? It's not coming out of our ears - quite the reverse.

Businesses understand the importance of hard data and its thorough analysis for marketing, but don't apply the same techniques or management principles to their I.T. Operations.

I'd like to model and quantify the Business Benefits of I.T. Operations across multiple organisations to provide baselines, benchmarks and trend analysis. Modelling the impact of all aspects of I.T. is beyond the scope of a single-researcher project.

Approach (The How)



Data is the fundamental input for analyses. Leveraging what's already available means the outputs can be commercially reproduced and are within the project budget (zero cost).

Three separate data streams will be mined:
  • Historic "ITSM" tool data from multiple organisations.
  • Detailed I.T. accounting information from selected organisations.
  • Primary research in one organisation to collect and report "FTE equivalents provided" by I.T.


[FTE = Full Time Employee. Otherwise, "virtual employees": what head count and cost would be needed to provide similar services with 1965 technology.]

Importance/Value (The Why)



These propositions are to be tested:
  • I.T. is done for a Business Benefit.
  • Business Benefits, tangible or intangible, should be measurable.
  • Organisations these days are dependent on their I.T. Operations.
  • I.T. cuts across all segments of current organisations.
  • I.T. defines the business processes and hence productivity of the whole organisation.
  • What you don't measure you can't manage and improve.
  • Improving the effectiveness of I.T. Operations requires reliable metrics.
  • Common I.T. Reporting Standards, like the Accounting Standards, are necessary to contrast and compare the efficiency and effectiveness of I.T. Operations across different organisations or different units within a single organisation.


I.T. is a cognitive amplifier: it delivers "cheaper, better, faster, more, all-the-same" through the embedding of finely detailed business processes into electronic (computing) systems.

For simple, repetitive cognitive tasks, computers are 1-5,000 times cheaper than people in western countries.

From this amplification effect, computers still provide the greatest single point of leverage for organisations. They underpin the requirement to "do more with the same", improving productivity and increasing profitability.

Subtle shifts in this whole-organisation amplification ratio (e.g. from 100:1 to 95:1 or 105:1) are impossible for isolated individuals to detect unaided. But they make very large differences to the 'global' organisation output and productivity.

In retail businesses, the gross margin is often around 2.5%. Reducing whole-company productivity by 5% will destroy its profitability, and without any metrics it will be impossible for any management team to identify and resolve.
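A toy calculation shows how thin that buffer is - the 2.5% margin is from above, the rest of the numbers are made up for illustration:

```python
# Toy model: retailer on a 2.5% gross margin (illustrative numbers only).
revenue = 100.0
costs = 97.5                       # 2.5% gross margin
profit = revenue - costs           # 2.50

# A 5% fall in whole-company productivity: roughly 5% more cost
# (more hours, more errors, more rework) for the same revenue.
degraded_costs = costs * 1.05
degraded_profit = revenue - degraded_costs

print(f"Profit before: {profit:.2f}, after: {degraded_profit:.2f}")
# => 2.50 before, about -2.38 after: the business flips into loss,
#    and without metrics nobody can see where the profit went.
```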

The few studies of "IT Efficiency" that are available show that IT effectiveness is highly variable and unrelated to expenditure.
My proposition is that "intuitive management" of IT is stretched well beyond its useful limits and needs to be replaced by evidence-based management.

The value-add to business is two-fold:
  • manage down the input costs of the I.T. infrastructure, and
  • quantify the "cognitive amplifier" effects across the whole organisation to make informed decisions on optimum 'global' investment/expenditure on I.T. Operations.


[There's the 1990 HBS or MIT study into "White Collar Productivity" - reporting a decrease in the first decade of PCs.]

Previous Work (What else)


There is a dearth of published material/research in this area.
The "State-of-Practice" is "NEVER DONE".
There is much opinion in the area, without substantive evidence: e.g. Nick Carr and "Does IT Matter?"

"Commonsense" IT Practitioner approaches, ITIL and COBIT (others?), do not address the measuring and managing of I.T. outputs and their business effects, ultilisation and effectiveness.

The McKinsey report/book on European manufacturers and their I.T. expenditure versus financial performance shows there is no correlation between effort (expenditure) and effect (financial performance).

Jerry Landsbaum's 1992 work included examples of their regular business reports - quantifiable and repeatable metrics of I.T. Operations phrased in business terms. This work seems entirely disregarded.


Hope to find (The Wherefore)


  • Model I.T. Operations performance within and across similar organisations.
  • Generate tools usable within organisations to collect/report their own metrics.
  • Define a set of useful I.T. Operations performance and Business Impact metrics.
  • Model inputs to Business and Business Utilisation/Outcomes.


Report Outline


  • Analyse ITSM tool data. Derive KPIs, Internal Baselines/Trends, Cross-section Benchmarks
  • Annual I.T. Operations Report
  • FTE Employee equivalents - Count and Cost
  • Why IT Matters to the Business.
  • Gaps in Service Management models - ITIL and COBIT
  • Adding I.T. Operations to Management Theories.
  • Advancing I.T. as a Profession
  • Further Work and Research Questions


Execution Phases

Force Multipliers - Tools as Physical and Cognitive Amplifiers

The industrial revolution was about using Tools as Physical Amplifiers.

Prior to the steam engine, the ox/bullock/horse/donkey/elephant was the dominant non-human power source.

Humans can work at about 125-250W continuously (1/8 to 1/4 of a kilowatt, or 1/6 to 1/3 Horsepower). Elite athletes can produce 500W or more for short periods.

All biological systems have a short and medium term "duty-cycle" - our muscles get tired, non-linearly, and need short-term recovery and longer-term rest and recuperation. Sleep and recreation are about the nervous system/mind/brain.

Chapters of Taylor's book Scientific Management offer proof that you get more out of people by making them rest! He improved average output from 12 tons/day to 47 tons/day through careful (psychological) selection and enforced rest periods. Counter-intuitive, but well-researched.
For example, when pig iron is being handled (each pig weighing 92 pounds), a first-class workman can only be under load 43 per cent of the day.
He must be entirely free from load during 57 per cent of the day.
And as the load becomes lighter, the percentage of the day under which the man can remain under load increases.
So that, if the workman is handling a half-pig, weighing 46 pounds, he can then be under load 58 per cent of the day, and only has to rest during 42 per cent.
As the weight grows lighter the man can remain under load during a larger and larger percentage of the day, until finally a load is reached which he can carry in his hands all day long without being tired out.


For an 8-hour day at 125W, a total of 1 kWh (kilowatt-hour) of useful work is done, ignoring rest breaks.
Animals are heat engines as well. We 'burn' fuel with oxygen, releasing carbon dioxide and some useful work. For that 8-hour day, the energy input is probably 10,000 kJ (kilojoules) [or 2,500 kilocalories].

Electricity sells for ~20c/kWh in the western world. The minimum wage in Australia is ~$100/day now.
The raw "Physical Amplification" on a cost basis is ~500:1.

A 500 HP bulldozer is controlled by one operator. That's a 3000:1 amplification. But machines are under 50% as effective at converting their output to 'work done' compared to people.
On a cost basis, the bulldozer might cost $150/hour to operate (40-50L of fuel, wages, maintenance, depreciation).
Or about 10c/hour per person-equivalent. Probably a 400:1 ratio based on the operator's wages.

Cognitive Amplifiers


The same comparison calculations as with Physical Tools and Machines can be done for humans and computers.

The human brain runs at ~50 watts - with around a 4MJ total energy input (1000 'calories') per day.
Like muscles, it has a "duty cycle" and requires rest and recuperation, as well as sleep and longer-term "recreation" and holidays. There appear to be no studies of "Human Response to Cognitive Workload". [Looking for the wrong thing?]

Directly comparing the human brain's I/O bandwidth, processing and storage capacity with electronic computers is difficult because they are organised so differently - and are probably complementary. They are best at different tasks.

45 years of "Artificial Intelligence" research tells us that we don't understand our brain processes and capabilities in fine detail, nor do we fully appreciate the complexity of "ordinary tasks". To recognise, not understand, human speech takes around a 1.5GHz CPU and 256MB of RAM - plus a very large, complex training dataset. Recognising speech without understanding it is the equivalent of talking gibberish.

For repetitive cognitive processes that humans do poorly - computers with their methodical exactness, excel.

A $5,000 computer system that costs $10,000 over a 5-year life (excluding software) can run an accounting system that processes 25,000 transactions/hour and is able to store, summarise and report on perhaps a decade's worth of data.

The equivalent human processing using mechanical 'tabulators', themselves 5-10 times faster than pen-and-journal, would take 250 operators just for data entry. Consolidating and reporting the accounts requires another largish group (25?).

The yearly wages bill for the operators would be ~$4M. On-costs, leave, recruitment and training - say $25M for 5 years.
A 2500:1 amplification on a cost basis.
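The arithmetic behind that ratio, as a sketch - the dollar figures are the estimates given above:

```python
# Cost-basis "Cognitive Amplification": one accounting system vs
# tabulator operators. All inputs are the estimates from the text.
computer_cost_5yr = 10_000        # $5,000 system, $10,000 over 5 years

operators = 250 + 25              # data entry plus consolidation/reporting
loaded_cost_5yr = 25_000_000      # ~$4M/yr wages plus on-costs, leave,
                                  # recruitment and training, over 5 years

print(f"Amplification: {loaded_cost_5yr / computer_cost_5yr:,.0f}:1")
# => 2,500:1 on a cost basis over the 5-year life
```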

Cognitive Amplifiers


I.T. Systems benefits are "cheaper, better, faster, more, all-the-same".

I.T. systems embed the business processes used, and their interfaces, performance and reliability define the productivity possible across the whole organisation.

Perfect I.T. Project Management - They're Research Projects!

My last post took me most of a day to produce. Not a great words/minute rate.
I'd expected to spend no more than an hour - it's work that I first did around 2000, so I'm familiar with it.

A friend jibed that "You should've used your I.T. project management methodology".

I do have strong views on managing I.T. projects, especially large ones, and they are backed up by solid research data from the Standish Group. They do apply to exactly this task of writing. Unfortunately they offer no useful guidance.

  • All new programs are research projects
  • If you haven't got working code, you don't know how long it will take. If it's a complex task, then beforehand you cannot know where the 'beartraps' are.
  • At any point in a project, you can only see in detail a couple of weeks ahead.
  • 'Scale' is everything [Alan Kay's argument]. Don't take on any project more than 30% larger than one you've completed successfully.
  • Everyone is an efficient, effective Project Manager - it's just the Domain and Scale that change.
  • Production of new Software is Pure Research. It relies on Creativity, it will take unanticipated twists and turns, you can't order "breakthroughs" to schedule, seemingly simple things can be 'too hard' and it's only Done when it's Done (The Golden Rule of Open Source). And when you're done, you probably want or need to redo it - completely - and several times.


The Standish Group's rule is: Maximum of six people for six months.
That's a summary of 50,000 detailed case studies. I think it's worth taking on board.

So I feel happy about my little project taking as long as it did.

2007/03/19

The Triple Whammy - the true cost of I.T. Waste

Background

There's a report around at the moment that says spending on I.T. is 3-4 times more effective than anything else. [Link to come]

In the first couple of decades of commercial computing, all the "low hanging fruit" - the best returns - for I.T. were exploited. That's when 'the books', financial information and large internal databases (assets, employees, stockholders, vendors, customers, ...) were computerised.

1991 - the first IT recession - marked the end of this era. For the first time, IT staff were laid off in an economic downturn. Previously other staff could be displaced by automating their jobs with I.T.

The 2000 I.T. recession - which we are only just starting to recover from - was an industry backlash (and rightfully so) to "Y2K" and the "Dotcom Bust". The general I.T. justification before then was: "We need this, trust us". After 2000, business needed to be convinced...

Controlling Waste in Government I.T. - An Immodest Proposal

The Standish Group has researched and released the CHAOS report since 1994. What's special about Yet Another Expensive Industry Report?

The fact that nobody else does it, that they have 50,000 detailed case studies of I.T. projects, and that their results are consistent year to year (but they would make it that way, wouldn't they?).

Do we believe their claims that the US spends $250Bn/year on IT applications development? That $81Bn of that is on cancelled projects and another $59Bn on over-runs? Or that only 16.2% of projects finish on time and within 130% of budget? That "For every 100 projects that start, there are 94 restarts"?

To scale that back to Australia, about one fifteenth the size, there'd be A$21Bn/year on just applications development. Which doesn't gel with estimates from the ABS that the I.T. sector here is about A$20Bn in total. (The ABS only reports accurately the ICT sector - grossly inflated by 'Communications' i.e. phone et al.) If the Australian I.T. sector is 5% of GDP, it would be around $50Bn and employ 500,000 people. Not unbelievable.

Either the US does a lot more AppDev than us, they pay a lot more, the survey is wrong - or the ABS survey figures are out.
To cut through the questions, all that's needed is a 'scale factor' - to convert the numbers from Standish into believable figures for Australia. Taking the ABS survey figure as a lower bound and guessing that half of I.T. budgets go on AppsDev, or $10Bn, then that's a scale factor of 25:1.

So the Waste in Australia on cancelled AppDev projects is at least $3.25Bn/yr. The ABS also state that 40% of I.T. expenditure is by Government - half by the Federal Govt. The Government is wasting $1.5Bn - $3Bn of public monies yearly.
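The scaling arithmetic, as a sketch - the inputs are the ABS and Standish figures and guesses above; the $1.5Bn-$3Bn range presumably also folds in a share of the over-runs:

```python
# Scaling the Standish CHAOS figures to Australia, per the method above.
us_appdev = 250e9                # US$/yr on applications development
us_cancelled = 81e9              # US$/yr wasted on cancelled projects

au_appdev = 20e9 * 0.5           # A$10Bn/yr: ABS lower bound, half on AppsDev
scale = us_appdev / au_appdev    # 25:1 scale factor

au_cancelled = us_cancelled / scale
au_govt = au_cancelled * 0.40    # ABS: 40% of I.T. expenditure is government

print(f"Scale factor: {scale:.0f}:1")
print(f"Cancelled-project waste: A${au_cancelled / 1e9:.2f}Bn/yr")
print(f"Government share: A${au_govt / 1e9:.2f}Bn/yr")
# => 25:1, ~A$3.24Bn/yr wasted, ~A$1.3Bn of it public money -
#    before adding the much larger over-run figure
```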

The only reliable figure for 'waste' is cancelled projects. Standish do say 52.7% of projects will cost 189% of their original estimates. But that could just be deliberate low estimates, optimism or ineptitude of the IT areas - which, after 50+ years of commercial I.T., you'd have thought management might have recognised and addressed.

It's over 10 years since Standish started their CHAOS reports - so why hasn't any section of the Australian Government looked at the problem here? Some possibilities:
  • There is no problem here. [Nope, glorious failures like ADCNET abound]

  • We don't have figures, so nothing could be wrong.

  • It's too trivial a figure

  • Nobody here knows the Standish work. [That's either negligence or incompetence.]

  • It's nobody's job? How about:

    • Australian Audit Office?

    • Senate Estimates Committee and Expenditure Review Board?

    • AGIMO, NOIE, GOI, ...

    • FMA Act & Finance - "Efficient, Effective, Ethical expenditure of public monies"

    • Department Heads [see FMAA]

    • I.T. Heads

There is a tried, proven model for controlling 'waste' - and the government knows it well:
Aviation.

Two independent bodies are needed: An investigator and an enforcement/compliance agency.
In Aviation, they are "BASI (Bureau of Air Safety Investigation)" and "CASA (Civil Aviation Safety Authority)".
CASA creates real 'consequences' for people and organisations - negligence and incompetence are cause for temporary or permanent disbarment from the industry.

BASI looks to find the causes of 'incidents' and how to avoid them in future, and promulgates the information to everyone who should know.

For about $30M/year, roughly the budget of the ANAO, the Federal Government could start to define and address the problem of I.T. waste. This is an area where the Government can lead the Private Sector - the same companies and people contract for the public and private sectors. The Government can be seen to be impartial and transparent, and there is no legal impediment to a government "right to practice" list.

Spending $30M to save $3,250M - that sounds like a good deal to me. Why not to the Government?

Going Backwards - losing what we know

The people who worked with the computer pioneer John von Neumann all practiced and valued 'code reviews'. This definitely was passed on - Jerry Weinberg, the Software Quality supremo, is proof. In the 1970s, as an academic, he even proved that reviews and a focus on 'quality' were the cheapest, most effective way to produce good programs, quickly.

There must be hundreds of other Good Practices that have fallen by the wayside - practices that were once 'standard practice' somewhere and exceedingly useful.

So why aren't all or some of the Good Practices taught routinely - both at University and in the workplace? After all, we're talking about things that work, that address the software fundamentals - cheaper, better, faster, more - that push up the trade-off point for "pick two of 'fast, good, cheap'", and make the production of software more reliable, more predictable - and ultimately cheaper.

Theodore Dalrymple (Anthony Daniels), a British doctor and psychiatrist, says in Life At The Bottom:

"When a man tells me, in explanation of his anti-social behaviour, that he is easily led, I ask him whether he was ever easily led to study mathematics or the subjunctives of French verbs."..


That's what we seem to have in I.T.: people are Easily Led Astray. Somehow we know what will and won't lead to better work - and systematically choose against "Good Practices".

I've never seen more than one person at a site spontaneously improve their practices. But I have, more than once, been on the receiving end of management directives to "remove the gold plating" - to give away Good Practices.