Human Response to Cognitive Workload

Context: This piece started as a question to a researcher in Psychology.

There's a topic I've been trying to find research results on for some time.
I call it "(Human) Cognitive Response to Workload".

There is a good deal of quantitative data on "Physiological Response to Workload" - e.g. US Navy studies of "stokers" working at different levels of heat.

I found Prof. Lisanne Bainbridge in the UK. She's retired now. Her field is 'mental load', but she couldn't point me at research in the area or help me properly phrase my question.
She pointed me at Penny Sanderson, now a Prof. at the University of Queensland.

What I'm interested in is any information to apply to Software Developers and other Knowledge workers:
  • In the short, medium & longer term (day, week, year) how do you maximise cognitive output?
  • What roles do sleep, recreation & holidays play in 'recharging' cognitive abilities?
  • For different levels (degrees of difficulty) of cognitive task (or skilled manual task) what are the optimum work rates and duty cycles? (ratio of work/rest)
Related areas are rates of error and the effect of 'tiredness' on maximum cognitive performance.
[James T. Reason has very good work on "Human Error" and "Organisational Error". His work is used extensively in Aviation and Nuclear safety. He originated the "Swiss-cheese" model of accidents.]

From my own work experience and observations of people in the workplace, I know that 80-hour weeks produce significantly less output than normal work weeks, that "pulling an all-nighter" is at best a "single-shot" action needing significant recovery time, and that different people have very different work-cycles (e.g. morning & evening people).

'Rework' arising from errors is the dominant use of programmer time in software projects. Error rates rise dramatically with 'pressure' (as in 'crunch time') and extended work hours. The ability to find and resolve errors, especially the more subtle and 'difficult' ones, also falls with pressure. The programmer time to fix & test an error increases with 'cognitive distance' from it and with pressure/fatigue. The time to find, fix & test an error is much greater than the original creation time (by at least 10-times). Some typical figures:
  • Normal programmer output on commercial projects is around 10-100 LOC (Lines of Code) per day (~2,000 LOC/year/programmer).
  • Usual error rates are quoted as around 40-100 errors/kLOC (1,000 LOC).
  • Average time to fix errors (or 'defects') ranges from 8-35 hours/defect. (8 hours is a guesstimate.)
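A back-of-envelope sketch makes the rework claim concrete. The inputs below are simply mid-range and low-end picks from the figures quoted above (the essay's own estimates, not measured data):

```python
# Back-of-envelope rework arithmetic from the figures quoted above.
# All inputs are the essay's own estimates, not measured data.

loc_per_day = 50          # mid-range of 10-100 LOC/day
errors_per_kloc = 70      # mid-range of 40-100 errors/kLOC
hours_per_fix = 8         # the low-end 'guesstimate' of 8-35 hours/defect
hours_per_day = 8

defects_per_day = loc_per_day * errors_per_kloc / 1000   # defects created per day
rework_hours_per_day = defects_per_day * hours_per_fix   # future rework generated

print(f"{defects_per_day:.1f} defects/day -> "
      f"{rework_hours_per_day:.0f} h rework per {hours_per_day} h coding")
```

Even with the cheapest fix time, a day of coding generates several days of future rework - which is the sense in which rework dominates programmer time.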

Very quickly programmer output becomes negative, i.e. every hour of work leads to more time in rework.
With this description, it is trivially obvious that software projects in this state cannot complete. Every day they continue they get further from delivering.
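The "negative output" point can be put as a break-even error rate. This is a deliberately minimal linear model of my own (not from any study): a project stops making net progress once each hour of coding creates more than one hour of rework.

```python
# Minimal break-even sketch (assumed linear model, not from any study):
# net progress stops once a day of coding creates more than a day of rework.

def breakeven_error_rate(loc_per_day, hours_per_fix, hours_per_day=8.0):
    """Errors/kLOC above which each hour of coding creates >1 hour of rework."""
    return hours_per_day * 1000.0 / (loc_per_day * hours_per_fix)

print(breakeven_error_rate(50, 8))    # cheap fixes
print(breakeven_error_rate(50, 35))   # expensive fixes
```

At 50 LOC/day the break-even rate is 20 errors/kLOC with cheap (8-hour) fixes and under 5 errors/kLOC with expensive (35-hour) ones - both well below the quoted 40-100 errors/kLOC, consistent with such projects never completing.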

Mihaly Csikszentmihalyi's work on "Flow" indicates that for intense concentration (high degree-of-difficulty cognitive tasks), programmers should be left uninterrupted (no noise/phones/email/drop-ins) for extended periods.
In what I've read, he doesn't describe the optimum length of work-periods - fatigue must set in at some point.

Programming tasks are obviously not all as hard as one another - but we have no language for "degree-of-difficulty" and therefore no way to describe or manage it. Nor can it be included in Project Management plans. This is another important and related problem.

Programmers and other I.T. workers are the original "Knowledge Workers". What they do and the problems they address, are followed by the general workforce 10-15 years later. [Think Internet and email].
Describing/understanding them can be generalised to the entire white-collar workforce in this Age of Computing.

And then there is measuring "Programmer Output". The one thing we know is that it isn't "Lines of Code" (LOC), nor is it the input effort (hours).
There are no methods/metrics to consistently measure "programmer output", i.e. how much useful work programmers produce:
  • which makes current Estimation and Project Tracking/Management techniques not just irrelevant, but misleading and dangerous.
Separately, there is judging the 'Performance Quality' of output - in the sense of 'performance quality' in Art or Music. A lot of commercial code is very poorly constructed - the equivalent of stick-figures and hand-painting. Live code that I've had to maintain, I would have failed as a student project - 2/10 or worse.

This is different to 'Software Quality' - a very broad and ill-defined term. It can be errors/defects or fitness-for-purpose or Usability or internal 'Design Elegance'.

There are 3 other related issues - for 2 of them there is at least one good piece of work:
  • Creating 'Star Performers' over time. Robert E. Kelley
  • Organisational factors affecting performance. David H. Maister, "Practice what you preach"
  • Teams vs Groups. Supposedly gives 2-10 fold increase in 'output'

Many IT authors [Jerry Weinberg, Tom De Marco] assert the synergy of teams leads to "the whole being much more than the sum of the parts".
I've never been able to track down any work with 'hard data', despite asking many people over many years.

I.T. is a cognitive amplifier in the same way that machines are physical/force amplifiers.
Just how that can be measured ('gain', as in electronics) and how many dimensions are needed to describe it are fundamental questions not yet articulated within the field.
The most obvious computer function, book-keeping, probably has a gain of around 100-1000.

Characterising, describing & measuring 'cognitive amplification' is squarely in the realm of psychology...


This was part of my interest in understanding "What we do" in I.T.
Programs are embodied cognitive processes (small-t thinking) - their construction is a purely cognitive task.
But I cannot find research to help my understanding in these areas.

The economic imperative to optimise the costs of software construction is strong - but the Practitioner literature and "State of Practice" ignores the most basic element in the chain - individual human performance & capability.
  • 20-25 years ago, Western economies/businesses became dependent on computing.
  • Around 2000, they became dependent on 'the Internet' (IT + electronic communications).
  • Now they are approaching "I.T. is the business" - i.e. their computer systems mediate most external as well as internal interactions & processes, including sales.
The first economy to understand what I.T. is and how it works, will have an enormous, and difficult to close, advantage.
The ability of computing systems to act as Cognitive Amplifiers means raw population size is not the primary determinant, rather the effective aggregate 'gain'.

With a 'cognitive gain factor' as low as 100, a country as small as Australia could become a global economic power.
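The aggregate-gain claim is just multiplication. As an illustration only - the workforce figure is a rough, assumed order of magnitude, not a statistic:

```python
# Illustrative arithmetic only; the workforce figure is a rough assumption.
workforce = 10_000_000   # order-of-magnitude figure for Australia's workforce
gain = 100               # the essay's low-end 'cognitive gain factor'

effective = workforce * gain
print(f"{effective:,} effective 'cognitive workers'")
```

Ten million workers at a gain of 100 behave, on this crude model, like a billion unamplified ones - which is the sense in which raw population size stops being the primary determinant.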


Solving 'Spam'

It never ceases to amaze me, the attitude of politicians to Porn and to 'Spam' & its friend, malware.

Porn is "bad, bad, bad" and Pollies show very high interest - including policy & legislation.

Lots of angst & thrashing around about eradicating something that 2,000+ years of writing/publishing shows can't be controlled or legislated away. The physical publishing world & (cable) TV show that the only effective means of control is to allow-but-license.

Same as tobacco. Never going to eradicate it, only control it.

'Restricted Content' access can only be controlled iff:
  • every page is 'classified' at source (meta-tags),
  • an unforgeable Internet 'proof-of-age' card/system is created,
  • there are criminal penalties for subverting the system, forging identities or misclassifying pages,
  • there are no legal jurisdictions outside 'the system' [e.g. on the high-seas],
  • all browsers enforce 'the rules',
  • and browsers can't be built/written to ignore 'the rules'.
i.e. It is impossible to eliminate 'restricted content', and possibly provably so...