CRJ Series 4 - Understanding the Performance Modes
Previous blogs mentioned a study by the World Association of Nuclear Operators (WANO), which discovered that humans, on average, make five errors every hour. Just take a second to picture what that might look like in your organisation - every person making an average of five errors every hour...
Fortunately, the majority of these errors are inconsequential. We either don't realise we've made an error at all, or it is of such insignificance that we pass it off as absent-mindedness and move on. Just think about the last time you had to call your own phone and listen for the ringtone because you couldn't find it anywhere (or maybe that's just me?).
Unfortunately, as we are all too well aware, some of these errors can result in, or lead to, catastrophic consequences. Think of a recent event that has hit the news headlines or search for 'human error' online. On that list will be a number of unwanted outcomes experienced by organisations, where human fallibility played a role in an event that had the power to derail them completely or, at the very least, knock them bandy for a while.
So, what can we do to STOP people from getting it wrong?
This is where it becomes challenging and when the going gets tough, some organisations start to chase their own tails. The temptation is to 'fix the worker'.
What does this mean? Send them on a 'Stop Being a Doofus' course, give them the 'Pay More Attention' toolbox talk, rewrite the task procedure in words of less than ten letters and three syllables and use BOLD CAPITALS so that they can clearly see the really important bits, use surveys to gauge their personal appetite for risk, punish them when they get it wrong or reward them when they don't?
Most people would agree that these are all relatively inexpensive, quick and easy fixes - i.e. reading investigation reports and confirming that the corrective actions have been seen...
One definition of Human Error is: 'An unintended deviation from a preferred behaviour'. Yet, whose preferred behaviour does this refer to? It is entirely subjective: I would have preferred to do something one way, but I unintentionally did it another way.
Let's just test this theory - grab a pen and a piece of paper and write down one thing that you are going to do tomorrow that you have no intention of doing. Have you done it? No, I didn't think so.
The very act of writing it down suddenly means that you now do intend to do it. Our simplified definition of a human error is: 'Something I didn't intend to do.' So, if a person comes to work for you tomorrow and unwittingly does something they did not intend to do, will punishing them stop them from doing it again? Will rewarding them prevent them from doing it again? The answer is a categorical no: because they did not intend to do it in the first place.
Human error is here to stay. To quote Trevor Kletz: "People say that accidents are due to human error, which is like saying falls are due to gravity." In the dichotomy of negotiables vs non-negotiables, human error falls into the latter, therefore you have to accept it, put down that pen, stop writing that disciplinary letter, take a deep breath, count to ten and suck it up. Human error is here to stay, and we have to learn to work with it. In the words of Shane Bush of Bushco HPI: "Two things will happen to you if you try to eliminate human error - 1. You will go broke and 2. You will go insane."
The good news is that human error is predictable, you are predictable, we are all predictable. Human error that has gone before has left a trail of breadcrumbs that can be followed and learnt from.
In my last blog I mentioned fail-safing, i.e. designing systems, hardware, activities, processes and procedures with human error in mind. In a previous blog I wrote about USB drives - their design acknowledges that people will undoubtedly try to plug them in upside down nine times out of ten, so they are designed in a way that does not reduce this error rate, but reduces the consequences of that error.
Can you think of other places where this thinking is being put to good use? Perhaps it's a part of your role as a designer to develop these resilience measures? If we know when people are likely to make an error, we can build in levels of resilience so that they can fail safely. A specifically designed suite of error prevention tools and techniques can be introduced to help avoid or eliminate the conditions that increase the propensity for human error (more on this in a future blog).
Types of error
There are primarily two types of error: Active errors, which result in an immediate consequence and it is usually known who made the mistake; and latent errors, which were made in the past and lie undetected in systems, processes and procedures, waiting to trap an organisation or individual at some time in the future.
One example of an active error is cutting the wrong wire when defusing an explosive device - the bomb explodes (the immediate consequence) and the bomb disposal expert is injured (we know who did it).
Latent errors are not always so obvious. An example is the design of a new building that does not consider the safe cleaning of windows (because they do not open) or replacing light bulbs in high ceilings (which necessitates working at height). There is no immediate consequence and, when someone falls from a ladder years later, it is common for nobody to know who designed that element of the building (though of course, in many countries, there are design and construction regulations that are meant to address these issues).
Interestingly, during the work that my company undertakes for clients, we have discovered that behind every active error, there is usually a significant number of latent errors that led the individual down the error path; on average the ratio is 15 to one.
Propensity for human error
If you're anything like me, you won't be satisfied with being simply average, you will undoubtedly want to know how to be above average. In this case we're talking about the conditions that have the potential to increase our average error rate to above five per hour.
This leads me to the performance modes.
People operate at work in one of three cognitive performance modes at any one time. The specific performance mode employees are in can increase the likelihood of them making an error. Understanding this, and introducing a series of metacognitive tools and techniques for them to use, can significantly reduce the propensity for error.
We can also use what we know about the performance modes to anticipate the types of errors that might be made and to put some defences in place to reduce likelihood and risk.
The performance modes
As you can see from the diagram above, the first performance mode is skill-based. A good way to explain this is to use the analogy of driving a car. Imagine driving to work on a bright, dry, clear morning, taking your usual route to work. There are no traffic jams or road works, it's way before the school run time, your favourite radio show is on. Perhaps your mind has wandered ahead and started working through your itinerary for the day, or you're thinking about your plans for what will surely be a beautiful summer evening.
You snap back to reality and realise you have just pulled into the car park at work. How did you get here? You can't remember half of the journey; it was totally effortless, in fact you must have been on autopilot.
If you are familiar with this, you have just experienced skill base. If you have made any errors, you are unlikely to have noticed them; you are probably so familiar with your journey that you have become complacent.
When working in skill-based mode, defences are needed to provide a jolt out of the complacent, unfocused mindset and to warn of danger - for example rumble strips, auto shut-offs, alarms, etc. When are your employees in skill-based mode? What defences do you have and how effective are they? This mode requires alertness to violations such as overriding alarms, taping up dead-man switches and so on. It should be noted, however, that the error rate here is very low: just one in 10,000.
Turning to rule-based working, let's continue the car analogy, but taking a different journey right through the centre of a busy capital city, via a few of the huge housing developments that form the suburbs. The roads are very busy with back-to-back traffic. Cyclists swerve in and out between vehicles, buses pull out without warning and taxis rule the roads. There are lots of traffic lights, junctions and roundabouts; the lanes change from two to four and back again without warning and every inch of road is taken up by vehicles. And the driver has never driven in this city before.
Welcome to 'rule' base. The best way to envision this is as a filing cabinet in an office, which contains all of an organisation's policies and procedures. These can be dipped into when needed; they aren't carried around and they certainly are not consulted at every turn - but they are there and people are familiar enough with them to get by; they know which ones they can bend a little to get them to where they need to be.
The driver knows they are supposed to give way to the right on a roundabout, but everyone else just keeps creeping out, so they do the same. They know the speed limit is 30 but they need to keep up with the traffic when it moves, so 34 won't hurt just this once.
The things that go on in rule base are:
Misapplication of good rules;
Application of some bad rules;
Or a failure to follow a rule that a person knows he or she should be following.
This is when the use of good, accurate procedures should be reinforced, using checklists or cross-checks; some way-markers also help. The error rate has increased to one in 1,000.
Finally, the driver jumps in the space shuttle and heads off for their first ever vacation on Mars. They have never been before, they don't speak the language and can't read 'Martian'. Cars don't have wheels, they hover off the ground and they don't make much noise. The road signs are very different and there are four lights at each junction and they are horizontal, blue, pink, purple and white.
Confused? Any driver should be - this is 'knowledge' base, or what should be called 'lack of knowledge base'.
In this performance mode the driver has no knowledge or reference point to call upon; he or she is in the dark, with no idea what to do. Defences here might include things like initial training, demonstrations, prevention through design and mentoring. A lot of supervision is required and people cannot spend too long working at this high level of mental intensity. Here, the error rate can be as high as one in two.
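To see how these three rates interact, here is a minimal illustrative sketch (not from the blog itself): the per-mode error rates are the ones quoted above, while the counts of actions taken in each mode during a typical hour are purely hypothetical assumptions, chosen to show how even a small amount of knowledge-based work dominates the total.

```python
# Error rates per action, as quoted for each performance mode.
ERROR_RATES = {
    "skill": 1 / 10_000,    # practised, autopilot actions
    "rule": 1 / 1_000,      # familiar rules applied in context
    "knowledge": 1 / 2,     # novel situations, no reference point
}

def expected_errors(actions_per_hour):
    """Expected number of errors in an hour, given how many actions
    were taken in each performance mode (a dict of mode -> count)."""
    return sum(count * ERROR_RATES[mode]
               for mode, count in actions_per_hour.items())

# A hypothetical hour: mostly skill-based work, some rule-based work,
# and just a handful of genuinely novel, knowledge-based decisions.
hour = {"skill": 2_000, "rule": 300, "knowledge": 9}
print(expected_errors(hour))  # 0.2 + 0.3 + 4.5 = 5.0 errors
```

Under these made-up numbers, nine novel decisions contribute 4.5 of the 5.0 expected errors - which is why pushing people into knowledge-based mode without support is so costly.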
Clearly, the overall goal is to try to ensure that our employees are working in either rule- or skill-based mode for the majority of the time, keeping their mental strain at a level they can cope with and moving between skill and rule base when the need arises - for example, when about to take a critical step, or check readings or measurements.
At Paradigm Human Performance, we introduce tools and techniques to help organisations determine which performance modes their people are in for any given task (remembering, of course, that the experienced person's skill base will be the new trainee's rule base).
Understanding the performance modes will help put the correct risk controls and error prevention tools in place. And as we'll discover in a later blog, it will also help us to identify root causes and causal factors of unwanted outcomes.
As ever, I'd love to hear your thoughts. Please contact me at firstname.lastname@example.org if you would like some more information about the principles or any other aspects of my work.
More insights coming soon.