
Human Error – does it exist?

Recently, at a course, there was a very interesting debate that challenged the basic premise of human performance: is there any such thing as “Human Error”? The traditional view is, of course, that we can predict the failure rates of technical systems if we know the component reliabilities. And since the classic HRA methodologies predict that human components are much less reliable (they deviate from ideal performance more often?), it follows that they are much more likely to cause system failures through their errors! (I’m only human.)

Similarly extreme views are held by those who argue that, on the contrary, it is the increasing complexity of modern systems that is to blame: the human component has no chance of fully understanding how the system operates and always acts with the best of intentions in trying to make the d--- thing work. If it fails, then it must be the system’s fault, not the human’s, mustn’t it!?

This article attempts to show that both views are legitimate. To do this, perhaps we ought to step back and look at how these components actually work, to see where the unreliability, or error, arises. Consider two cases: a programmable logic controller, or something with an AI (Artificial Intelligence) chip, and a human operator. The AI system works by using pre-programmed algorithms in control loops to regulate the process and react to system changes, automatically and instantly. Hence it will always take the correct action, provided the system behaves as designed. If the circumstances are not as imagined, the prescribed action can be non-optimal (resulting in a system error?).
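To make that contrast concrete, here is a minimal sketch of the deterministic, feedback mode of control, written in Python purely for illustration. The setpoint, gain and simulated process are my own assumed numbers, not any real PLC’s programming; the point is only that the same measurement always produces the same corrective action.

```python
# Minimal sketch of a deterministic feedback (proportional) controller.
# SETPOINT, GAIN and the simulated process are illustrative assumptions,
# not a real PLC program.

SETPOINT = 100.0   # the designed target for the process value
GAIN = 0.5         # proportional gain: how hard to push back on a deviation

def control_step(measured: float) -> float:
    """Given the same measurement, this always returns the same correction."""
    error = SETPOINT - measured      # deviation from the design intent
    return GAIN * error              # prescribed, pre-programmed response

# Simulated process: the controller winds the deviation out, step by step.
value = 80.0
for _ in range(10):
    value += control_step(value)
print(round(value, 2))               # converges towards the setpoint, 100.0
```

Fed a deviation it can see, it will predictably correct it; fed circumstances its designers never imagined, it will just as predictably apply the wrong prescription.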

HI (Human Intelligence) works in a fundamentally different way. Our brains are fed a picture of reality by our senses. This personal perception takes about 0.1 seconds to assemble, and we then have to act (which takes still more time) on data that already lag behind reality. To compensate, the human brain has an ability to predict ahead (guess?) what it expects to happen. If this matches the next “picture”, then all is well. But because this is a prediction of the probability that the future “as imagined” will match the actuality, it is very rarely exactly right. We then unconsciously register the difference, by comparison, and update our “prior” belief (yes, it’s Bayesian!) to get a more accurate match for the next time slice. So the HI (operator) is functioning in a feedforward, Bayesian, probabilistic mode, while the AI (controller) is operating in a deterministic, feedback mode.
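By way of contrast with the controller sketch above, here is an equally minimal, equally illustrative sketch of the feedforward, predict-and-update style, in the same language. The blending weight and the stream of “observations” are assumed numbers, not a model of real cognition: the belief is compared with what actually arrives, and a fraction of the surprise is folded into the next prediction.

```python
# Minimal sketch of a predict-then-update ("Bayesian-style") loop.
# BLEND and the observation sequence are illustrative assumptions only.

BLEND = 0.3   # how much weight new evidence gets against the prior belief

def update(belief: float, observed: float) -> float:
    """Register the difference by comparison and update the 'prior' belief."""
    prediction_error = observed - belief        # the guess is rarely exactly right
    return belief + BLEND * prediction_error    # corrected belief for the next time slice

belief = 20.0                                   # initial expectation (the prior)
for observed in [22.0, 25.0, 24.0, 30.0]:       # what the senses actually report
    belief = update(belief, observed)
print(round(belief, 2))                         # the belief has drifted towards reality
```

The point is not the arithmetic but the shape of the loop: guess, compare, correct. The “error” is not a failure mode bolted on afterwards; it is the working fluid of the mechanism.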

So if the system behaves perfectly, as designed, the AI (barring mechanical failures) will be 100% accurate and able to correct any deviations, or errors, within the design spec.

But with the human operator, the HI is designed to guess and correct. It will probably be near enough, but it is liable to inaccuracies in its guesses, especially if there is an overload, or a distraction, in the processing of competing signals. So error, and error correction, is fundamental to its mode of operation. In humans this is further complicated, as we have the ability to mix emotions and memories, as well as primal instincts like fight or flight, into the composite fusion “picture”. At this level, then, Human Error patently does exist?

So the first argument is right, isn’t it? HI works by guessing and minimising the difference between prediction and actuality – i.e., put mathematically, by correcting the error. AI, on the other hand, is inherently more accurate and reliable?
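Put in the simplest possible shorthand (my own notation, not a formal model): if x_t is the latest “picture” from the senses and x̂_t is the brain’s prediction of it, then

$$
e_t = x_t - \hat{x}_t, \qquad \hat{x}_{t+1} = \hat{x}_t + k\, e_t
$$

where e_t is the error being corrected and k is how heavily the new evidence is weighted. HI spends its whole life keeping e_t small; the PLC, by contrast, measures its error against a fixed setpoint rather than against its own guess.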

Humans in “normal” situations constantly and naturally make judgements and guesses. Mostly this is an automatic and unconscious process, but sometimes we get it not quite right, or even wrong. And because most of this happens unconsciously, we can sometimes miss the prompts and the requisite corrections until a major divergence occurs – to which we may, or may not, react correctly, or in time. These, then, are the familiar slips and lapses of SAFETY I and BBS approaches.

(I have deliberately not addressed “deliberate” interventions: since they are intended, they are clearly not errors?)

Human error therefore does exist, but is it necessarily a bad thing and to “blame” for all our failures?

So far we have compared the performance of HI with a programmable logic controller AI. But HI is capable of so much more. The brain is not a simple organ; it is the result of millions of years of evolution. Those of us of a certain age can recognise that modern computer operating systems still carry coding that dates back to 1970s chips (the Intel 8080). Similarly, our brains contain coding and sophistication levels ranging from the primitive automaton to the highly developed, “conscious”, decision-making intellectual.

These levels are distinct Darwinian adaptations and can be thought of as ranging from the lowest, Inbuilt (sea slugs), through Instinct (reptiles) and Intuition (The Chimp Paradox), to the distinctly human Intelligent (Kahneman System 1) and Intellect (Kahneman System 2). Automatic lawnmowers, carpet sweepers and even driverless cars are currently still at the sea-slug stage. So where most animals act on Instinct (Fight or Flight), operators, and risk and incident analysts, need to use their Intellect! (THINK!!!)

Unfortunately, this ability to think comes with other potential sources of “error”. Kahneman and Tversky have shown the existence and effects of inbuilt heuristics and biases, which can distort the prediction mechanisms, so that we end up attempting to control an alternative (wrong?) perception of reality.

So we make errors; but fortunately this survival adaptation also allows us to often detect, correct and recover. As we can also learn, we can continuously improve. This then allows us to compensate, anticipate, and avoid making the same mistakes, even when the situations are abnormal and unexpected.

So when we understand how the human mind operates (by trial and error), Human Performance deviations from the ideal are natural and to be expected, as a characteristic of the system.

(From the SAFETY I perspective, humans are therefore, by definition, less reliable and make errors – QED?)

But the distinguishing property of the Human Intelligence controller is that it is constantly adapting and correcting, which, by SAFETY II definitions, makes the system more “Resilient”. So we are not necessarily making errors, but adjusting to the natural variabilities, which may exceed, or fall short of, the designers’ intentions.

So from the SAFETY II perspective, they are not errors but adjustments made to adapt to system behaviour and system properties. The more complex the system, the more difficult it becomes to make the necessary adjustments. And, even worse, we can make our predictions of system behaviour, and the subsequent “adjustments” (not errors?), on a totally erroneous but well-intentioned “understanding” of what is going on (Three Mile Island?).

In summary, AI at the level of a programmable logic controller will be more reliable than HI, which works by minimising errors and so shows more variability in performance (in risk terms, it is not as “reliable”). But HI is currently capable of so much more in terms of responding and adapting to unexpected situations. It is more resilient.

So you’re both right! Now let’s get on with ensuring successful and safe operations as well as avoiding failures.
