Unsolicited Advice to a classmate

I've never been a politician, managed a large budget or run even a moderate-sized project, so why do I have the hubris to offer some unsolicited advice to a newly minted I.T. Minister to whom, by accident, I was once briefly a classmate?

Rejecting these thoughts "because nobody else is doing them" is an option, though not a great reason.

Rejecting them "because they're too expensive" is a judgement call, but has to be measured against "Compared to What?".
Doing nothing will cost you a whole bunch; you already have the report on that.

The core arguments in support of my observations are:
  • How important is I.T. to the current and future BackOffice and FrontOffice operations of your Government? I suggest that the machinery of Government cannot operate without its I.T. systems: not simply ineffectively but, like airlines, not at all.
  • There is an internal consistency in what I propose, derived from one of the toughest businesses around. The challenge is not "will this work", but "how can it be made to work for us".

Why do I offer these "untried, untested" observations?

After working 40 years at the I.T. coal-face, I know how to deliver projects and make systems "just work", not only technically, but within bureaucracies. I was into my fifth or so "turn-around" when I realised that I had a process: simple and effective, but requiring managerial understanding and co-operation, personal commitment and the will to pursue it. Multiple times I've dredged high-profile, business-critical systems from certain failure to modest or outstanding success - for companies, consulting firms and Government.

I can't offer you advice in the realms I've never practiced, but I do know what's needed to permanently address professional & technical issues. Solving these is not the whole story, as you'll be aware, but without solving them, no solution can be found to your I.T. problems.

What drives this is my formulation of the Professional mandate:
It's "unprofessional" to repeat, or allow, Known Faults, Failures and Errors.
It's simple and easily stated, but hard to practice. The power of this succinct formulation is that there is nowhere to hide for non-compliant managers, IT practitioners and suppliers when called to account for their actions.

Aye, and there's the rub.
Suppliers, Professionals, Managers and Heads of Agency must be held to account for their actions or nothing can change, let alone will change. The bureaucracy and those driving it, the business owners or politicians, must be fully behind both sides of the process: investigation and compliance.

And it works, spectacularly. The proof is the improvement in Aviation from the 1980s, when the work on "Systems Accidents" by Charles Perrow and James T. Reason was introduced.

The process of detailed, systematic Air Crash Investigation was formalised in the UK, with the AAIB, by Sir Arnold Hall and the associated Cohen Court of Inquiry into the Comet crashes of the early 1950s.

By then, it was long established practice to have separate bodies: crash investigators, responsible for root-cause analysis and recommendations to prevent repeat incidents; and regulatory/compliance bodies and Courts of Inquiry, like the US FAA, responsible for mandating what was required, testing and certifying all aviation professionals, suppliers and operators, and judging, then handing out "consequences" to, those deemed responsible or culpable.

Without doubt, Investigation and Compliance bodies worked in preventing disasters, up to a point.
Perrow and Reason took Aviation Safety to a new level by creating the theoretical framework to analyse "Systems Accidents" - where the system as a whole, not individuals, produces "the accident".

This single insight has improved Aviation Safety tenfold or more, and significantly improved the related aspects of Performance and Profitability.

All three rely on the same process: collect data, analyse it, improve the system, rinse and repeat.

What people outside the industry don't fully appreciate is that the same data and techniques that drive Safety/Quality Improvement also drive Process Improvement and Business/Economic Improvement.
Do one and you can do all; doing just one, without the others, diminishes the results even in that area.
The three approaches complement and strengthen each other.
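The shared loop behind all three - collect data, analyse it, improve the system, rinse and repeat - can be sketched in a few lines. This is a minimal illustration in plain Python, with made-up fault categories and field names, not a proposed schema: it scans incident records for repetitions of catalogued "Known Faults", which is exactly what the professional mandate above says must be caught.

```python
# Sketch of the collect -> analyse -> improve loop.
# Fault categories and field names here are purely illustrative.
from collections import Counter

# Collect: each incident record notes the fault category observed.
incidents = [
    {"project": "Payroll",   "fault": "untested-rollback"},
    {"project": "Licensing", "fault": "scope-creep"},
    {"project": "Grants",    "fault": "untested-rollback"},
]

# The running catalogue of Known Faults, Failures and Errors.
known_faults = {"untested-rollback", "scope-creep"}

counts = Counter(rec["fault"] for rec in incidents)

# Analyse: a Known Fault seen more than once is a repetition -
# the thing the professional mandate says must not be allowed.
repeated = sorted(f for f in known_faults if counts[f] > 1)
print(repeated)  # -> ['untested-rollback']

# Improve: any new fault types join the catalogue for the next cycle.
known_faults |= set(counts)
```

The point of the sketch is how little machinery the loop needs: the leverage is in collecting the data honestly and acting on the repetitions, not in the tooling.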

Specifics to implement an integrated Quality, Process and Business Improvement program for Government:
  • First, identify all Professionals within and without the Government, and the Organisations they work for.
    • The AIIA could be tasked with keeping this Register.
    • We know from AMPCO's work with Australian medicos that this is possible, and what level of effort and expense it requires.
    • You have the advantage of both a smaller industry and operating in a single state.
    • This is not a certification register,
      • but it does provide a single database to check for prior poor performance, and to refuse inappropriate people entry to projects or sites.
  • A taxonomy of Skills, Tasks, Services and Specialities is required for the register and the next step.
  • Next, a yearly survey of those purchasing the I.T. services listed in the taxonomy, rating each Professional and delivering Organisation.
    • This isn't just "possible" but has been running for decades in the most litigious field possible: The Chambers Guide to the UK Legal Profession.
    • Being able to answer "who are the best mainframe programmers?" or "are these folks the good Oracle database specialists they claim to be?" is important both in Commerce and Government.
    • We know there is commercial and personal interest in the Chambers Guide because it has continued for a few decades.
    • Treasury or Finance has the natural responsibility to create and maintain the taxonomy and The Guide. You'd hope that, like Chambers, it can be run profitably.
  • Either the Audit Office or Finance/Treasury needs to maintain a register of all 'large' I.T. Projects.
    • To determine if I.T. Projects are succeeding or not, the first requirement is a definitive list of them.
    • Then you have to track timelines, budgets and real expenditures as the projects proceed.
    • Then assess the final outcomes.
      • Not just a "delivery date",
      • but an independent expert assessment for all deliverables of "actual" vs "planned",
      • and an assessment of the delivered system by both direct users and "business owners".
    • A good place to start is by contacting the only group that has been tracking I.T. Project outcomes for over 15 years: the Standish Group, publishers of the CHAOS Report.
      • The amazing thing to me isn't that someone thought to ask in 1996 "Why do I.T. Projects Fail?", but that, after they came up with terrifying results, nobody else created their own yearly program to confirm or dispute the results and their imputed causes/solutions.
  • Finally, two bodies need to be set up within Government to fulfil the independent Investigation and Regulation/Compliance functions.
    • The Audit Office needs to complete full, detailed reviews of all completed large I.T. projects or unfinished projects "deemed in trouble", specifically looking for:
      • root causes of defects, omissions or problems.
      • new "Faults, Failures and Errors" to add to the list, or modify existing items.
      • the repetition of "Known Faults, Failures and Errors" to be referred to the Compliance body, and
      • Sources of excellence so they can be replicated and not lost. [See Barry Boehm below]
    • Treasury or Finance should take the Compliance role, in their existing role of overseeing the local equivalent of s44 of the FMAA, which requires the Government resources controlled by Heads of Agencies to be used in an "efficient, effective, economical and ethical" manner.
      • I.T. Projects today do not fail because of technical or professional problems.
      • Whilst some Professionals will be deemed negligent or incompetent etc,
      • all failures stem from the management of the Project: management controls the selection, training, allocation and removal of Professionals, as well as defining the Scope of Works, monitoring progress, and testing outcomes and Project performance.
      • The actions of management right up the line to the Head of Agency must be examined for their contribution to failures. The "Systems Accident" model at the heart of Aviation informs us that failures are never the result of just one person, but the whole system.
All these systems can be up and running quickly. None are hugely expensive. Most are within the current remit and scope of existing bodies, needing little new legislation, but perhaps some additional resources.
Do these things need massive Projects, Designs, and Specifications to start?
No: the first couple of iterations can be fully managed with spreadsheets and some basic desktop databases. Not committing too early to intensive design and complex systems is perhaps the first lesson.
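To make that last point concrete, here is one way the Projects register could start life as a "basic desktop database", using nothing beyond Python's standard-library sqlite3. The table layout, project names and the 20% "in trouble" threshold are all illustrative assumptions of mine, not part of the proposal.

```python
# A minimal sketch of the 'large I.T. Projects' register as a
# desktop database. Column names, sample rows and the 20% overrun
# threshold are illustrative assumptions only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE project_register (
        name            TEXT PRIMARY KEY,
        planned_budget  REAL,     -- $M at approval
        actual_spend    REAL,     -- $M to date
        planned_months  INTEGER,
        elapsed_months  INTEGER
    )""")
db.executemany(
    "INSERT INTO project_register VALUES (?, ?, ?, ?, ?)",
    [("Payroll Replacement", 10.0, 14.5, 18, 24),
     ("Licensing Portal",     4.0,  3.8, 12, 11)])

# 'Deemed in trouble': spend or schedule more than 20% over plan,
# flagging candidates for a full Audit Office review.
troubled = db.execute("""
    SELECT name FROM project_register
    WHERE actual_spend   > 1.2 * planned_budget
       OR elapsed_months > 1.2 * planned_months
    """).fetchall()
print([name for (name,) in troubled])  # -> ['Payroll Replacement']
```

A spreadsheet with the same five columns would do the first iteration just as well; the point is that the definitive list and the overrun test come first, and the "intensive design and complex systems" can wait.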

Undoubtedly, there will be major push-back, of at least two forms:
  • We are already "Best Practice", so this doesn't apply to us, and
  • "Why are you picking on us? It's unfair."
The response is simple and consistent:
We're only looking for people and organisations repeating Known Faults, Failures and Errors. If you're Best Practice, you won't be repeating them; if you've invented a new way to fail, then there are no consequences for you to worry about, because, as Professionals, we know you're all dedicated to not repeating mistakes.
But the real push-back will come from the non-I.T. bureaucrats: they won't be at all happy being held to account for their actions, or more likely their inaction...

The real challenge with my suggestions is the Organisational Politics.
Setting some rules for everybody, then enforcing them, equally for everybody, including Heads of Agencies and their Senior Management, won't go down well.

Can "Can Do" and his team hold that line under extreme pressure? Interesting question.

Barry Boehm neatly summarises the importance of the Historical Perspective as:

Santayana's half-truth:
“Those who cannot remember the past are condemned to repeat it”
  • Don’t remember failures? Likely to repeat them.
  • Don’t remember successes? Not likely to repeat them.
The critical insight here is that there are two sides to improvement:
  • What not to do,
  • What to do.
