Category: Human Performance
Do you like quick, simple tips that add value to the way you work? Do you like articles that increase your happiness? How about a joke or something to brighten your day? Of course you do! Or you wouldn’t be reading this post. But the real question is, do you want MORE than all of the useful information we provide on this blog? That’s okay – we’ll allow you to be greedy!
A lot of people don’t know we have a company page on LinkedIn that also shares all those things and more. Follow us by clicking the image below that directs to our company page, and then clicking “Follow.”
We also have a training page where we share tips about career/personal development as well as course photos and information about upcoming courses. If you are planning to attend a TapRooT® course or are looking for candidates with root cause analysis skills, click the image below that directs to our training page and then click “Follow.”
Thank you for being part of the global TapRooT® community!
When is a safety incident a crime? Would making it a corporate crime improve corporate and management behavior?
July 29th, 2015 by Mark Paradies
I think we all agree that a fatality is a very unfortunate event. But it may not be a criminal act.
When one asks after an accident if a crime has been committed, the answer depends on the country where the accident occurred. A crime in China may not be a crime in the UK. A crime in the UK may not be a crime in the USA. And a crime in the USA may not be a crime in China.
Even experts may disagree on what constitutes a crime. For example, University of Maryland Law Professor Rena Steinzor wrote an article on her blog titled: “Kill a Worker? You’re Not a Criminal. Steal a Worker’s Pay? You Are One.” Her belief is that Du Pont and Du Pont’s managers should have faced criminal prosecution after an accident at their LaPorte, Texas, facility. She cited behavior by Du Pont’s management as “extraordinarily reckless.”
OSHA Chief David Michaels disagrees with Professor Steinzor. He is quoted in a different article as saying during a press conference that Professor Steinzor’s conclusions and article are, “… simply wrong.”
The debate should raise a significant question: Is making an accident – especially a fatal accident – a corporate crime a good way to change corporate/management behavior and improve worker safety?
Having worked for Du Pont back in the late 1980s, I know that management was very concerned about safety. They really took safety to heart. I don’t know if that attitude changed as Du Pont transformed itself to increase return on equity … Perhaps they lost their way. But would making poor management decisions a crime make Du Pont a safer place to work?
Making accidents a crime would definitely make performing an accident investigation more difficult. Would employees and managers cooperate with ANY investigation (internal, OSHA, or criminal) IF the outcome could be a jail sentence? I can picture every interviewee consulting with their attorney prior to answering an investigator’s question.
I believe the lack of cooperation would make finding and fixing root causes much more difficult. And finding and fixing the root causes of accidents is extremely important when trying to improve safety. Thus, I believe increased criminalization of accidents would actually work against improving safety.
I believe that Du Pont will take action to turn around safety performance after a series of serious and sometimes fatal accidents. I think they will do this out of concern for their employees. I don’t think the potential for managers going to jail would improve the odds that this improvement will occur.
What do you think? Do you agree or disagree? Or better yet, do you have evidence of criminal proceedings improving or hindering safety improvement?
Let me know by leaving a comment below.
I overheard a senior executive talking about the problems his company was facing:
- Prices for their commodity were down, yet costs for production were up.
- Cost overruns and schedule slippages were too common.
- HSE performance was stagnant despite improvement goals.
- They had several recent quality issues that had caused customer complaints.
- They were cutting “unnecessary” spending like training and travel to make up for revenue shortfalls.
I thought to myself …
“How many times have I heard this story?”
I felt like interrupting him and explaining how he could stop at least some of his PAIN.
I can’t do anything about low commodity prices. The price of oil, copper, gold, coal, or iron ore is beyond my control. And he can’t control these either.
But he was doing things that were making his problems (pain) worse.
For example, if you want to stop cost overruns, you need to analyze and fix the root causes of cost overruns.
How do you do that? With TapRooT®.
And how would people learn about TapRooT®? By going to training.
And what had he eliminated? The training budget!
How about the stagnant HSE performance?
To improve performance, his company needs to do something different. They need to learn best practices from leaders in their own industry AND from other industries.
Where could his folks learn this stuff? At the TapRooT® Summit.
His folks didn’t attend because they didn’t have a training or travel budget!
And the quality issues? He could have his people use the same advanced root cause analysis tools (TapRooT®) to attack them that they were already using for cost, schedule, and HSE incidents. Oh, wait. His people don’t know about TapRooT®. They didn’t attend training.
This reminds me of a VP whose company suffered a major accident that cost big $$$$ and could have caused multiple fatalities (they were lucky that day). The accident had causes that were directly linked to a cost cutting/downsizing initiative that the VP had initiated for his division. The cost cutting initiative had been suggested by consultants to make the company more competitive in a down economy with low commodity prices. At the end of a presentation about the accident, he said:
“If anybody would have told me the impacts of these cuts, I wouldn’t have made them!”
Yup. Imagine that. Those bad people didn’t tell him he was causing bad performance by cutting the people and budget they needed to make the place work.
That accident and quote occurred almost 20 years ago.
Yes, this isn’t the first time we have faced a poor economy, dropping commodity prices, or performance issues. The more things change, the more they stay the same!
But what can you do?
Share this story!
And let your management know how TapRooT® Root Cause Analysis can help them alleviate their PAIN!
Once they understand how TapRooT®’s systematic problem solving can help them improve performance even in a down economy, they will realize that the small investment required is well worth it compared to the headaches they will avoid and the performance improvement they can achieve.
Because in bad times it is especially true that:
“You can stop spending bad money
or start spending good money.”
The 22-year-old man died in hospital after the accident at a plant in Baunatal, 100km north of Frankfurt. He was working as part of a team of contractors installing the robot when it grabbed him, according to the German car manufacturer. Volkswagen’s Heiko Hillwig said it seemed that human error was to blame.
A worker grabs the wrong thing and often gets asked, “what were you thinking?” A robot picks up the wrong thing and we start looking for root causes.
Read the article below to learn more about the fatality, and ask: why wouldn’t we always look for root causes once we identify the actions that occurred?
For the drug and medical device manufacturing industries, the US Code of Federal Regulations 211.22, Good Manufacturing Practices, states that if manufacturing errors occur, quality control should make sure that the errors “… are fully investigated.”
From past FDA actions it is clear that stopping the investigation at a “human error” cause is NOT fully investigating the error.
Here are three reasons people fail when they are investigating human error issues and when they develop fixes:
- They did not use a systematic process. 5-Whys is not a systematic process.
- The system they used did not guide them to the real, fixable causes of human errors (most quality professionals are not trained in human factors).
- The system did not suggest ways to fix human errors once the causes had been identified (the FDA expects effective corrective actions).
What tool provides a systematic process with guidance to find and fix the causes of human error? The TapRooT® System!
If you would like to read more about how TapRooT® can help you find the root causes of human error, see:
To learn how to use TapRooT® to improve your investigations of human error, we suggest attending the 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training. See the list of public courses held around the world at:
With your hard work and effort and a system that will find and fix the root causes of human error you can succeed in fixing human error issues and in meeting the FDA’s expectations.
“Doctor… how do you know that the medicine you prescribed him fixed the problem?” the peer asked. “The patient did not come back,” said the doctor.
No matter the industry, and even if the root causes found for an issue were accurate, the cure can be worse than the disease. Some companies have a formal Management of Change Process or a Design of Experiments Method that they use when adding new actions. At the other extreme, some use the Trial and Error Method… with a little bit of… this is good enough, and they will tell us if it doesn’t work.
You can use the formal methods listed above, or for some risks it can be as simple as a review with the right people present before an action is implemented. In our 7 Step TapRooT® Root Cause Analysis Process, we teach reviewing for unintended consequences both during the creation of and after the implementation of corrective or preventative actions. This task starts with four basic rules:
1. Remove the risk/hazard, or the persons from the risk/hazard, first if possible. After all, one does not need to train somebody to work more safely or provide better tools for the task if the task and hazard are removed completely. (We teach Safeguard Analysis to help with this step.)
2. Have the right people involved throughout the creation of, implementation of and during the review of the corrective or preventative action. Identify any person who has impact on the action, owns the action or will be impacted by the change, to include process experts. (Hint, it is okay to use outside sources too.)
3. Never forget or lose sight of why you are implementing a corrective or preventative action. In our analysis process you must identify the action or inaction (behavior of a person, equipment, or process) and each behavior’s root causes. It is these root causes that must be fixed or mitigated in order for the behaviors to go away or be changed. Focus is key here!
4. Plan an immediate observation of the change once it is implemented and a long-term audit to ensure the change is sustained.
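As a rough illustration, the four rules above could be captured as a simple checklist attached to each corrective action. This is only a sketch of one possible record structure; the field names and example data are our own invention, not part of the TapRooT® process:

```python
from dataclasses import dataclass, field

@dataclass
class CorrectiveAction:
    """Minimal record for tracking a corrective/preventative action
    against the four review rules (hypothetical structure)."""
    description: str
    hazard_removed: bool = False                       # Rule 1: remove hazard if possible
    reviewers: list = field(default_factory=list)      # Rule 2: right people involved
    root_causes_addressed: list = field(default_factory=list)  # Rule 3: stay focused on root causes
    immediate_observation_done: bool = False           # Rule 4: observe right after implementation
    long_term_audit_done: bool = False                 # Rule 4: audit that the change sustained

    def ready_to_close(self):
        # An action closes only when reviewed, tied to root causes,
        # observed, and audited for sustainment.
        return (self.reviewers
                and self.root_causes_addressed
                and self.immediate_observation_done
                and self.long_term_audit_done)

# Hypothetical example
action = CorrectiveAction(description="Install machine guard")
action.reviewers = ["operator", "process expert", "EHS"]
action.root_causes_addressed = ["Guard missing"]
action.immediate_observation_done = True
print(bool(action.ready_to_close()))  # long-term audit still pending
```

A checklist like this makes Rule 4 hard to skip: the action cannot be closed out until both the immediate observation and the long-term audit are recorded.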
Simple… yes? Maybe? Feel free to post your examples and thoughts.
We can all remember some type of major product recall that affected us in the past (tires, brakes, medicine…) or recalls that may be impacting us today (air bags). These recalls all share a major theme: a company made something and somebody got hurt, or worse. This feeds a “them versus us” perception.
Now stop and ask, when is the last time quality and safety was discussed as one topic in your current company’s operations?
- You received a defective tool or product….
- You issued a defective tool or product….
- A customer complained….
- A customer was hurt….
Each of the occurrences above often triggers an owner for each type of problem:
- The supplier…
- The vendor…
- The contractor…
- The manufacturer….
- The end user….
Now stop and ask, who would investigate each type of problem? What tools would each group use to investigate? What are their expertise and experiences in investigation, evidence collection, root cause analysis, corrective action development or corrective action implementation?
This is where we create our own internal silos for problem solving; each problem often has its own department, as listed in the company’s organizational chart:
- Customer Service (Quality)
- Manufacturing (Quality or Engineering)
- Supplier Management (Supply or Quality)
- EHS (Safety)
- Risk (Quality)
- Compliance (?)
The investigations then take the shape of each department’s tools, training, and experience.
Does anyone besides me see a problem or an opportunity here?
Getting comfortable with just getting by, accepting failure, accepting broken rules … these attitudes all contribute to an unhealthy workplace culture that allows major and minor accidents. How do we get back on track?
We livestreamed Mark Paradies’ empowering talk, “How to Stop Normalization of Deviation,” at the 2015 Global TapRooT® Summit in Las Vegas. View the recorded session below and learn how to improve the work culture at your facility.
STOP is a strong word.
We can’t guarantee that you will be able to stop all human errors. But you can stop some types of errors. You can reduce the likelihood of other types of errors. And you can use Safeguard Analysis and defense in depth to keep errors from becoming major accidents.
Want to learn these valuable lessons? Are you ready to start a human performance revolution at your facility? Then attend this new course developed by Mark Paradies and Joel Haight:
You will learn why people make mistakes.
You will learn that human errors can be predicted and prevented.
You will learn the latest techniques being used in the nuclear industry (and which ones work and which ones don’t).
And much more.
But don’t delay. This course is on June 1-2. It is the only course open to the public this year, and the course is almost full.
If your company has a major accident, the board of directors will know the cost of human error.
But human error happens every day. And the small incidents and mid-size accidents can add up to billions of dollars in waste, injuries, and warranty costs.
How much of those billions in waste comes from your company? Do you keep track?
How can you know how much you should invest in stopping human error if you don’t know how much human error costs?
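One low-effort way to start answering that question is simply to log an estimated cost against every human-error incident and total them up. Here is a minimal sketch; the incident types and dollar figures are invented for illustration:

```python
# Minimal sketch: tally estimated costs of human-error incidents.
# All incident data below is invented for illustration.
incidents = [
    {"type": "rework", "estimated_cost": 12_000},
    {"type": "warranty claim", "estimated_cost": 45_000},
    {"type": "near-miss", "estimated_cost": 0},
    {"type": "equipment damage", "estimated_cost": 8_500},
]

# Total across all incidents
total = sum(i["estimated_cost"] for i in incidents)

# Break the total down by incident type to see where the money goes
by_type = {}
for i in incidents:
    by_type[i["type"]] = by_type.get(i["type"], 0) + i["estimated_cost"]

print(f"Total estimated cost of human error: ${total:,}")
for kind, cost in sorted(by_type.items(), key=lambda kv: -kv[1]):
    print(f"  {kind}: ${cost:,}")
```

Even rough estimates like these give you a baseline: once you know human error is costing, say, tens of thousands of dollars a quarter, you can compare that number against the cost of training and prevention.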
One thing that is for sure, if you are worried about human error, you should attend the 2-Day TapRooT® Understanding and Stopping Human Error Course taught by Joel Haight and Mark Paradies.
The course will help you understand why human error occurs and the best practices you can implement to make human action much more reliable.
The course is being held on June 1-2.
Just after this course is the 2015 Global TapRooT® Summit.
The Summit includes a track titled “Human Error Reduction and Behavior Change.” To see all the topics covered, see this link:
And click on the appropriate track button.
Register for both the 2-Day TapRooT® Understanding and Stopping Human Error Course and the TapRooT® Summit for five days of great learning, networking, and best practice sharing. Just CLICK HERE to get started.
Here’s a description of a car/train accident:
How could things go from a minor error and fender bender to a multi-fatality accident?
It happens when someone makes a bad decision under pressure.
Don’t think it couldn’t happen to you. Even with good training and good human factors design, under high stress, people do things that seem stupid when investigating an accident (looking at what happened in the calm light of the post accident investigation).
Often, the people reacting in a stressful situation aren’t well trained and may have poor displays, poor visibility, or other distractions. Their chance of choosing the right action? About 50/50. That’s right, they could flip a coin and it would be just as effective as their brain in deciding what to do in a high-stress situation.
FIRST: Avoid decisions under high stress. In this case, KEEP OFF THE TRACKS!
Never stop on a railroad track even when no trains are coming.
That’s true for all hazards.
Stay out from under loads. Stay away from moving heavy equipment.
You get the idea.
Don’t put yourself in a position where you have to make a split-second decision.
SECOND: NEVER TRY TO BEAT A TRAIN or PULL IN FRONT OF A TRAIN.
Always back off the tracks if possible. This is true even if you hit the gate and dent your car.
FINALLY: Think about how this train accident could apply to hazards at your facility.
Are people at risk of having to make split-second decisions under stress?
If they do, or if it is possible, a serious accident could be just around the corner.
Try to remove the hazard if possible.
How could the hazard have been removed in this case?
An overpass or underpass for cars is one way.
Other ideas? Leave them below as comments.
Caution: Watching this video can and will make you laugh… then you realize you might be laughing at…
… your own actions.
… your understanding of other people’s actions.
… your past corrective or preventative actions.
Whether your role or passion is in safety, operations, quality, or finance… “quality is about people and not product.” Interestingly enough, many people have not heard Dr. Deming’s concepts or listened to Dr. Deming speak. Yet his thoughts may help you understand the difference between people not doing their best and the best that the process and management will allow to be produced.
To learn more about quality process thoughts and how TapRooT® can integrate with your frontline activities to sustain company performance excellence, join a panel of Best Practice Presenters in our TapRooT® Summit Track 2015 this June in Las Vegas. A Summit Week that reminds you that learning and people are your most vital variables to success and safety.
To learn more about our Summit Track please go to this link. https://www.taproot.com/taproot-summit
If you have trouble getting access to the video, you can also use this link http://youtu.be/mCkTy-RUNbw
A frequent question that I see in various on-line chat forums is: “Is human error a root cause?” For TapRooT® Users, the answer is obvious. NO! But the amount of discussion that I see and the people who even try suggesting corrective actions for human error with no further analysis is amazing. Therefore, I thought I’d provide those who are NOT TapRooT® Users with some information about how TapRooT® can be used to find and fix the root causes of human error.
First, we define a root cause as:
“the absence of a best practice or the failure to apply knowledge that would have prevented a problem.”
But we went beyond this simple definition. We created a tool called the Root Cause Tree® to help investigators go beyond their current knowledge to discover human factors best practices/knowledge to improve human performance and stop/reduce human errors.
How does the Root Cause Tree® work?
First, if there is a human error, the Root Cause Tree® has the investigator answer 15 questions that guide the investigator to the appropriate ones of seven potential Basic Cause Categories to investigate further to find root causes.
The seven Basic Cause Categories are:
- Quality Control,
- Human Engineering,
- Work Direction, and
- Management Systems.
If a category is indicated by one of the 15 questions, the investigator uses evidence in a process of elimination and selection guided by the questions in the Root Cause Tree® Dictionary.
The investigator uses evidence to work their way down the tree until root causes are discovered under the indicated categories or until that category is eliminated. Here’s the Human Engineering Basic Cause Category with one root cause (Lights NI).
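The evidence-driven elimination and selection described above can be sketched in code. To be clear, this is NOT the actual Root Cause Tree® logic; the category names, guiding questions, and evidence flags below are invented purely to illustrate the idea of keeping categories that the evidence supports and eliminating the rest:

```python
# Illustrative sketch of evidence-driven elimination and selection.
# The categories, questions, and evidence here are hypothetical,
# not the actual Root Cause Tree(R) contents.

def supported_questions(questions, evidence):
    """Return the guiding questions the evidence supports.
    An empty result means the category can be eliminated."""
    return [q for q in questions if evidence.get(q, False)]

# Hypothetical categories, each with hypothetical guiding questions
tree = {
    "Human Engineering": ["poor_lighting", "confusing_display"],
    "Work Direction": ["no_pre_job_brief", "wrong_worker_selected"],
}

# Hypothetical evidence gathered during the investigation
evidence = {"poor_lighting": True, "confusing_display": False}

for category, questions in tree.items():
    hits = supported_questions(questions, evidence)
    if hits:
        print(f"{category}: investigate further -> {hits}")
    else:
        print(f"{category}: eliminated (no supporting evidence)")
```

The point of the sketch is the discipline it mirrors: a category survives only when the collected evidence answers at least one of its guiding questions, so investigators are pushed past their first hunch toward causes the facts actually support.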
The process of using the Root Cause Tree® was tested by users in several different industries including a refinery, an oil exploration division of a major oil company, the Nuclear Regulatory Commission, and an airline. In each case, the tests proved that the Tree helped investigators find root causes that they previously would have overlooked and improved the company’s development of more effective corrective actions. You can see examples of the results of performance improvement by using the TapRooT® System by clicking here.
If you would like to learn to use TapRooT® and the Root Cause Tree® to find the real root causes of human error and to improve human performance, I suggest that you attend our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course and bring an incident that you are familiar with to the course to use as a final exercise.
Note that we stand behind our training with an ironclad guarantee. Attend the course. Go back to work and apply what you have learned. If you and your management don’t agree that you are finding root causes that you previously would have overlooked and that your management doesn’t find that the corrective actions you recommend are much more effective, just return your course materials and software and we will refund the entire course fee. No questions asked. It’s just that simple.
How can we make such a risk-free guarantee?
Because we’ve proven that TapRooT® works over and over again in industries around the world. We have no fear that you will see that TapRooT® improves your analysis of human errors, helps you develop more effective corrective actions, and helps your company achieve the next level of performance.
Want to see more tips like these? Subscribe to our Tuesday eNewsletter. It delivers a root cause tip column, career development tips, accidents & lessons learned and more! Just send an email to Barb at firstname.lastname@example.org with “subscribe” in the subject line.
Monday Accident & Lessons Learned: Human Error Leads to Near-Miss at Railroad Crossing in UK – Can We Learn Lessons From This?
June 23rd, 2014 by Mark Paradies
Here’s the summary from the UK RAIB report:
At around 05:56 hrs on Thursday 6 June 2013, train 2M43, the 04:34 hrs passenger service from Swansea to Shrewsbury, was driven over Llandovery level crossing in the town of Llandovery in Carmarthenshire, Wales, while the crossing was open to road traffic. As the train approached the level crossing, a van drove over immediately in front of it. A witness working in a garage next to the level crossing saw what had happened and reported the incident to the police.
The level crossing is operated by the train’s conductor using a control panel located on the station platform. The level crossing was still open to road traffic because the conductor of train 2M43 had not operated the level crossing controls. The conductor did not operate the level crossing because he may have had a lapse in concentration, and may have become distracted by other events at Llandovery station.
The train driver did not notice that the level crossing had not been operated because he may have been distracted by events before and during the train’s stop at Llandovery, and the positioning of equipment provided at Llandovery station relating to the operation of trains over the level crossing was sub-optimal.
The RAIB identified that an opportunity to integrate the operation of Llandovery level crossing into the signalling arrangements (which would have prevented this incident) was missed when signalling works were planned and commissioned at Llandovery between 2007 and 2010. The RAIB also identified that there was no formalised method of work for train operations at Llandovery.
The RAIB has made six recommendations. Four are to the train operator, Arriva Trains Wales, and focus on improving the position of platform equipment, identifying locations where traincrew carry out operational tasks and issuing methods of work for those locations, improvements to its operational risk management arrangements and improving the guidance given to its duty control managers on handling serious operational irregularities such as the one that occurred at Llandovery.
Two recommendations are made to Network Rail. These relate to improvements to its processes for signalling projects, to require the wider consideration of reasonable opportunities to make improvements when defining the scope of these projects, and consideration of the practicability of providing a clear indication to train crew when Llandovery level crossing, and other crossings of a similar design, are still open to road traffic.
The full report has very interesting information about the possibility of fatigue playing a part in this near miss. See the whole report HERE.
This report is an excellent example of how much can be learned from a near-miss. People are more willing to talk after a potentially fatal near-miss than after a fatality. And all of this started because a bystander reported the near-miss (not the train crew or the driver).
How can you improve the reporting and investigation of potentially fatal near-miss accidents? Could your improvements in this area help stop fatalities?
What would you do if you made a mistake at work that caused an accident, and your co-workers gathered around you to physically beat you as punishment? A PowerSource article, “Energy Companies Study the Role of Human Behavior in Safety” mentions this true-life scenario.
There’s still much work to be done in removing blame from an accident investigation to find the true root cause. In the PowerSource article, TapRooT® Friend & Expert Joel Haight introduces a new human factors engineering program at the University of Pittsburgh that he hopes will attract industry professionals and assist them in solving this issue.
Click here to read the article.
It’s easy to make a math error. But when that error means your new, expensive submarine won’t float … that’s a disaster!
Read about the problem at:
I like the way the stories talk about “someone” putting the decimal point in the wrong place. I bet if you are that someone, you are lying low…
Now, what can you learn? What do you do to catch simple math errors (we all make them) in critical calculations? Leave your comments here.
Could scheduling be a root cause of fatigue related errors? Navy OKs new watch schedule to reduce fatigue on submarines.
April 22nd, 2014 by Mark Paradies
Finally an attempt to reduce fatigue on submarines. See the story here:
Monday Accident & Lessons Learned: Incident Report from the UK Rail Accident Investigation Branch: Tram running with doors open on London Tramlink, Croydon
April 7th, 2014 by Mark Paradies
There were eight recommendations made by the UK RAIB. Here’s a summary of the investigation:
On Saturday 13 April 2013 between 17:33 and 17:38 hrs, a tram travelling from West Croydon to Beckenham Junction, on the London Tramlink system, departed from Lebanon Road and Sandilands tram stops with all of its doors open on the left-hand side. Some of the doors closed automatically during the journey, but one set of doors remained open throughout the incident. The incident ended when a controller monitoring the tram on CCTV noticed that it had departed from Sandilands with its doors open, and arranged for the tram to be stopped. Although there were no casualties, there was potential for serious injury.
The tram was able to move with its doors open because a fault override switch, which disables safety systems such as the door-traction interlock, had been inadvertently operated by the driver while trying to resolve a fault with the tram. The driver did not close and check the doors before departing from Lebanon Road and Sandilands partly because he was distracted from dealing with the fault, and partly because he did not believe that the tram could be moved with any of its doors open. The design of controls and displays in the driving cab contributed to the driver’s inadvertent operation of the fault override switch. Furthermore, breakdowns in communication between the driver and the passengers, and between the driver and the controller, meant that neither the driver nor the controller were aware of the problem until after the tram left Sandilands.
The RAIB has made eight recommendations. Four of these are to Tram Operations Ltd, aimed at improving the design of tram controls and displays, as well as training of staff on, and processes for, fault handling and communications. Two recommendations have been made to London Tramlink, one (in consultation with Tram Operations Ltd) relating to improving cab displays and labelling and one on enhancing the quality of the radio system on the network. One recommendation is made to all UK tram operators concerning the accidental operation of safety override switches. The remaining recommendation is to the Office of Rail Regulation regarding the provision of guidance on ergonomics principles for cab interface design.
For the complete report, see:
Five days of panic. 140,000 residents voluntarily evacuate. Fourteen years of clean-up.
The 35th anniversary of the Three Mile Island Nuclear Disaster.
On the midnight shift on March 28, 1979, things started to go wrong at TMI. A simple instrument problem started a chain of events that led to a core meltdown.
I can still remember that morning.
I was learning to operate a nuclear plant (S1W near Idaho Falls, ID) at the time. I was in the front seat of the bus riding out to the site. The bus driver had a transistor radio on and the news reported that there had been a nuclear accident at TMI. They switched to a live report from a farmer across the river. He said he could smell the radiation in the air. Also, his cows weren’t giving as much milk.
Years later, I attended the University of Illinois while also serving as an Assistant Professor (teaching midshipmen naval weapons and naval history). I was the first in a new program, a cooperative effort between the Nuclear Engineering and Psychology Departments, to research human factors and nuclear power plants. My advisor and mentor was Dr. Charles O. Hopkins, a human factors expert. In 1981-1982, he headed a group of human factors professionals who wrote a report for the NRC on what they should do to more fully consider human factors in nuclear reactor regulation.
As part of my studies I developed a course on the accident at TMI and published my thesis on function allocation and automation for the next generation of nuclear power plants.
So, each year when the anniversary of the accident comes around I think back to those days and how little we have learned (or should I say applied) about using good human factors to prevent industrial accidents.
WHAT ARE HUMAN PERFORMANCE TOOLS?
Over the past decade, best practices and techniques have been developed to “stop” or manage human error. They were developed mainly in the US nuclear industry and vary in content/name by the consultant/organization that offers them. Common tools include:
- Procedure Use*
- Place Keeping*
- Pre-Job Brief*
- Post-Job Brief
- Peer Checking*
- Time Out
- Rule of Three
- 3-Way Communication*
- Observation & Coaching*
- Questioning Attitude
- Attention to Detail
- Error Traps/Precursors
Here are some links to learn more about the tools above:
Also, if you plan on attending the 2014 Global TapRooT® Summit, attend Mark Paradies’ talk on human performance tools to learn more about these tools.
The asterisked (*) techniques above have always been included on the Root Cause Tree® (part of the TapRooT® System) because they are supported by established human factors research. Post-Job Briefs are also a well-established best practice that isn’t included on the Root Cause Tree® because it would occur after an incident or as part of the normal performance improvement program.
WHAT’S WRONG WITH HUMAN PERFORMANCE TOOLS?
Some of the techniques seem like excellent best practices (paying attention, having a questioning attitude, STAR, and Time Out), but I haven’t been able to find scientific human factors research that supports their use. For example, the “Rule of Three” is supposedly supported by aviation industry research showing that three yellow lights (conditions that are worrisome but not enough to prevent a flight) are equal to one red light (a flight no-go indicator; for example, weather that doesn’t meet the flight minimums).
Because they seem like good ideas, you may decide to adopt them, but they may not work as intended in all cases. After all, research hasn’t tested their limits.
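To make the idea concrete, here is a minimal sketch of how the Rule of Three’s go/no-go logic could work. This is purely illustrative: the function name, the three-yellows-equal-one-red weighting, and the threshold are assumptions drawn from the aviation analogy above, not a validated standard.

```python
# Hypothetical sketch of "Rule of Three" go/no-go logic.
# The weighting (three yellows count as one red) is an assumption
# based on the aviation analogy, not an established standard.

def go_no_go(red_flags: int, yellow_flags: int) -> str:
    """Return 'NO-GO' if any red flag is present, or if yellow
    flags accumulate to the equivalent of a red (three yellows)."""
    effective_reds = red_flags + yellow_flags // 3
    return "NO-GO" if effective_reds >= 1 else "GO"

print(go_no_go(red_flags=0, yellow_flags=2))  # GO
print(go_no_go(red_flags=0, yellow_flags=3))  # NO-GO
print(go_no_go(red_flags=1, yellow_flags=0))  # NO-GO
```

Even a toy model like this makes the untested limits obvious: is the third yellow condition really as serious as a red, and do two yellows really deserve a “GO”?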
The final technique, Error Traps/Precursors, seems to violate a couple of human factors principles and therefore should be used only with caution.
ERROR TRAPS / PRECURSORS
The concept behind Error Traps/Precursors is that certain human conditions are indicators of impending human error. If a person can self-monitor to detect the “error likely” human condition, he or she can then apply an appropriate human performance tool to avoid (stop) the impending error. For example, if you notice that you are rushing, you could apply STAR.
What are these human conditions? The selection varies depending on the consultant that presents the technique, but they commonly include:
- High Workload
- New Tasks
- First Time
- New Technique
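The self-monitoring concept can be sketched as a simple checklist evaluation, for instance one run during a pre-job brief. The precursor names and the trigger logic below are illustrative assumptions, not part of any established tool.

```python
# Hypothetical sketch of an error-precursor checklist. The
# precursor list and trigger logic are illustrative assumptions.

PRECURSORS = {
    "high_workload": "High Workload",
    "new_task": "New Task",
    "first_time": "First Time",
    "new_technique": "New Technique",
}

def flag_precursors(conditions: set) -> list:
    """Return the names of any error precursors present."""
    return [name for key, name in PRECURSORS.items() if key in conditions]

flags = flag_precursors({"new_task", "high_workload"})
if flags:
    print("Error-likely situation; consider a Time Out:", ", ".join(flags))
```

Note that the checklist only works if someone (or something) reliably supplies the conditions, which is exactly the self-monitoring burden discussed below.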
A problem with this technique is that the person performing the work must self-monitor to detect the human condition and then self-trigger action. I’ve never seen research showing that people are particularly good at self-monitoring to detect any human condition. And even if they were, the list seems to indicate that people would be constantly self-triggering. By this list, people are always just about to make a mistake. (To err is human?)
Constant self-monitoring points to another human factors limitation. The human brain automatically apportions a very limited resource: attention. Your brain continuously and subconsciously decides what to pay attention to and what to ignore. It decides which sounds are important and which are noise. It may decide that motion in the visual field deserves more attention than a stationary object, or that a sharp pain is more important than a faint touch.
In times of crisis, or simply when you are busy, your ability to pay attention is stressed. Imagine yourself driving on ice. You are so focused on the feel of the road and on preventing a slide that you don’t have enough attention left over for even a casual conversation.
Even when you are not stressed, if you self-monitor your state, you are stealing attention from some other task. What faint signal might you miss?
All of the Human Performance Tools share a common limitation: they are weak corrective actions. They are 5’s or 6’s on the TapRooT® hierarchy of controls. Rules, procedures, and training are all attempts at improving human performance, and the human may be your weakest safeguard. If your human performance improvement program is based on the weakest safeguards, what should you expect?
This doesn’t mean that you shouldn’t try proven human performance tools. It means that you should try to adopt stronger safeguards, understand the limitations of human performance tools, and, at a minimum, implement defense in depth to ensure adequate performance.
On August 14, 2013, UPS Airlines Flight 1354 crashed and burst into flames short of the runway on approach to Birmingham–Shuttlesworth International Airport. Both pilots of the cargo plane were pronounced dead at the scene of the crash.
Two years ago, the Federal Aviation Administration issued new rules aimed at ensuring airline pilots get sufficient rest. The draft regulations would have covered cargo airlines, but the FAA exempted them from the final rules, citing cost.
Read the rest of the story on The Washington Post.
Learn more about tell-tale signs of fatigue-related mistakes at the 2014 Global TapRooT® Summit. Summit speaker Bill Sirois, Senior Vice President and Chief Operating Officer for Circadian Technologies, will be speaking about fatigue and human performance.
Fatigue in the workplace is difficult to measure, and it is even more difficult to identify as a causal factor in accidents and injuries. However, fatigue does contribute to human error: it leads to errors in judgment, risk-taking behaviors, and clouded decision-making, and it degrades the ability to handle stress and slows reaction time.
Join us for the Human Error Reduction and Behavior Change track, April 9 – 11, 2014 in Horseshoe Bay, Texas, to hear this talk.
LEARN MORE on the Summit website.
REGISTER NOW for the Human Error Reduction and Behavior Change track.
The following is the text of a speech delivered in 1982 by Admiral Hyman G. Rickover, the father of the Nuclear Navy, at Columbia University. Rickover’s accomplishments as head of the Nuclear Navy are legendary: he took the first power-producing, submarine-based nuclear reactor from scratch to operation in just three years, and he created a program that has guaranteed process safety (nuclear safety) for over 60 years, with zero nuclear accidents.
I am reprinting this speech here because I believe that many do not understand the management concepts needed to guarantee process safety. We teach these concepts in our “Reducing Serious Injuries and Fatalities Using TapRooT®” pre-Summit course. Since many won’t be able to attend this training, I wanted to give all an opportunity to learn these valuable lessons by posting this speech.
– – –
Human experience shows that people, not organizations or management systems, get things done. For this reason, subordinates must be given authority and responsibility early in their careers. In this way they develop quickly and can help the manager do his work. The manager, of course, remains ultimately responsible and must accept the blame if subordinates make mistakes.
As subordinates develop, work should be constantly added so that no one can finish his job. This serves as a prod and a challenge. It brings out their capabilities and frees the manager to assume added responsibilities. As members of the organization become capable of assuming new and more difficult duties, they develop pride in doing the job well. This attitude soon permeates the entire organization.
One must permit his people the freedom to seek added work and greater responsibility. In my organization, there are no formal job descriptions or organizational charts. Responsibilities are defined in a general way, so that people are not circumscribed. All are permitted to do as they think best and to go to anyone and anywhere for help. Each person then is limited only by his own ability.
Complex jobs cannot be accomplished effectively with transients. Therefore, a manager must make the work challenging and rewarding so that his people will remain with the organization for many years. This allows it to benefit fully from their knowledge, experience, and corporate memory.
The Defense Department does not recognize the need for continuity in important jobs. It rotates officers every few years, both at headquarters and in the field. The same applies to their civilian superiors.
This system virtually ensures inexperience and nonaccountability. By the time an officer has begun to learn a job, it is time for him to rotate. Under this system, incumbents can blame their problems on predecessors. They are assigned to another job before the results of their work become evident. Subordinates cannot be expected to remain committed to a job and perform effectively when they are continuously adapting to a new job or to a new boss.
When doing a job—any job—one must feel that he owns it, and act as though he will remain in the job forever. He must look after his work just as conscientiously as though it were his own business and his own money. If he feels he is only a temporary custodian, or that the job is just a stepping stone to a higher position, his actions will not take into account the long-term interests of the organization. His lack of commitment to the present job will be perceived by those who work for him, and they, likewise, will tend not to care. Too many spend their entire working lives looking for their next job. When one feels he owns his present job and acts that way, he need have no concern about his next job.
In accepting responsibility for a job, a person must get directly involved. Every manager has a personal responsibility not only to find problems but to correct them. This responsibility comes before all other obligations, before personal ambition or comfort.
A major flaw in our system of government, and even in industry, is the latitude allowed to do less than is necessary. Too often officials are willing to accept and adapt to situations they know to be wrong. The tendency is to downplay problems instead of actively trying to correct them. Recognizing this, many subordinates give up, contain their views within themselves, and wait for others to take action. When this happens, the manager is deprived of the experience and ideas of subordinates who generally are more knowledgeable than he in their particular areas.
A manager must instill in his people an attitude of personal responsibility for seeing a job properly accomplished. Unfortunately, this seems to be declining, particularly in large organizations where responsibility is broadly distributed. To complaints of a job poorly done, one often hears the excuse, “I am not responsible.” I believe that is literally correct. The man who takes such a stand in fact is not responsible; he is irresponsible. While he may not be legally liable, or the work may not have been specifically assigned to him, no one involved in a job can divest himself of responsibility for its successful completion.
Unless the individual truly responsible can be identified when something goes wrong, no one has really been responsible. With the advent of modern management theories it is becoming common for organizations to deal with problems in a collective manner, by dividing programs into subprograms, with no one left responsible for the entire effort. There is also the tendency to establish more and more levels of management, on the theory that this gives better control. These are but different forms of shared responsibility, which easily lead to no one being responsible—a problem that often inheres in large corporations as well as in the Defense Department.
When I came to Washington before World War II to head the electrical section of the Bureau of Ships, I found that one man was in charge of design, another of production, a third handled maintenance, while a fourth dealt with fiscal matters. The entire bureau operated that way. It didn’t make sense to me. Design problems showed up in production, production errors showed up in maintenance, and financial matters reached into all areas. I changed the system. I made one man responsible for his entire area of equipment—for design, production, maintenance, and contracting. If anything went wrong, I knew exactly at whom to point. I run my present organization on the same principle.
A good manager must have unshakeable determination and tenacity. Deciding what needs to be done is easy; getting it done is more difficult. Good ideas are not adopted automatically. They must be driven into practice with courageous impatience. Once implemented, they can be easily overturned or subverted through apathy or lack of follow-up, so a continuous effort is required. Too often, important problems are recognized, but no one is willing to sustain the effort needed to solve them.
Nothing worthwhile can be accomplished without determination. In the early days of nuclear power, for example, getting approval to build the first nuclear submarine—the Nautilus—was almost as difficult as designing and building it. Many in the Navy opposed building a nuclear submarine.
In the same way, the Navy once viewed nuclear-powered aircraft carriers and cruisers as too expensive, despite their obvious advantages of unlimited cruising range and ability to remain at sea without vulnerable support ships. Yet today our nuclear submarine fleet is widely recognized as our nation’s most effective deterrent to nuclear war. Our nuclear-powered aircraft carriers and cruisers have proven their worth by defending our interests all over the world—even in remote trouble spots such as the Indian Ocean, where the capability of oil-fired ships would be severely limited by their dependence on fuel supplies.
The man in charge must concern himself with details. If he does not consider them important, neither will his subordinates. Yet “the devil is in the details.” It is hard and monotonous to pay attention to seemingly minor matters. In my work, I probably spend about ninety-nine percent of my time on what others may call petty details. Most managers would rather focus on lofty policy matters. But when the details are ignored, the project fails. No infusion of policy or lofty ideals can then correct the situation.
To maintain proper control one must have simple and direct means to find out what is going on. There are many ways of doing this; all involve constant drudgery. For this reason those in charge often create “management information systems” designed to extract from the operation the details a busy executive needs to know. Often the process is carried too far. The top official then loses touch with his people and with the work that is actually going on.
Attention to detail does not require a manager to do everything himself. No one can work more than twenty-four hours each day. Therefore to multiply his efforts, he must create an environment where his subordinates can work to their maximum ability. Some management experts advocate strict limits to the number of people reporting to a common superior—generally five to seven. But if one has capable people who require but a few moments of his time during the day, there is no reason to set such arbitrary constraints. Some forty key people report frequently and directly to me. This enables me to keep up with what is going on and makes it possible for them to get fast action. The latter aspect is particularly important. Capable people will not work for long where they cannot get prompt decisions and actions from their superior.
I require frequent reports, both oral and written, from many key people in the nuclear program. These include the commanding officers of our nuclear ships, those in charge of our schools and laboratories, and representatives at manufacturers’ plants and commercial shipyards. I insist they report the problems they have found directly to me—and in plain English. This provides them unlimited flexibility in subject matter—something that often is not accommodated in highly structured management systems—and a way to communicate their problems and recommendations to me without having them filtered through others. The Defense Department, with its excessive layers of management, suffers because those at the top who make decisions are generally isolated from their subordinates, who have the first-hand knowledge.
To do a job effectively, one must set priorities. Too many people let their “in” basket set the priorities. On any given day, unimportant but interesting trivia pass through an office; one must not permit these to monopolize his time. The human tendency is to while away time with unimportant matters that do not require mental effort or energy. Since they can be easily resolved, they give a false sense of accomplishment. The manager must exert self-discipline to ensure that his energy is focused where it is truly needed.
All work should be checked through an independent and impartial review. In engineering and manufacturing, industry spends large sums on quality control. But the concept of impartial reviews and oversight is important in other areas also. Even the most dedicated individual makes mistakes—and many workers are less than dedicated. I have seen much poor work and sheer nonsense generated in government and in industry because it was not checked properly.
One must create the ability in his staff to generate clear, forceful arguments for opposing viewpoints as well as for their own. Open discussions and disagreements must be encouraged, so that all sides of an issue will be fully explored. Further, important issues should be presented in writing. Nothing so sharpens the thought process as writing down one’s arguments. Weaknesses overlooked in oral discussion become painfully obvious on the written page.
When important decisions are not documented, one becomes dependent on individual memory, which is quickly lost as people leave or move to other jobs. In my work, it is important to be able to go back a number of years to determine the facts that were considered in arriving at a decision. This makes it easier to resolve new problems by putting them into proper perspective. It also minimizes the risk of repeating past mistakes. Moreover if important communications and actions are not documented clearly, one can never be sure they were understood or even executed.
It is a human inclination to hope things will work out, despite evidence or doubt to the contrary. A successful manager must resist this temptation. This is particularly hard if one has invested much time and energy on a project and thus has come to feel possessive about it. Although it is not easy to admit what a person once thought correct now appears to be wrong, one must discipline himself to face the facts objectively and make the necessary changes—regardless of the consequences to himself. The man in charge must personally set the example in this respect. He must be able, in effect, to “kill his own child” if necessary and must require his subordinates to do likewise. I have had to go to Congress and, because of technical problems, recommend terminating a project that had been funded largely on my say-so. It is not a pleasant task, but one must be brutally objective in his work.
No management system can substitute for hard work. A manager who does not work hard or devote extra effort cannot expect his people to do so. He must set the example. The manager may not be the smartest or the most knowledgeable person, but if he dedicates himself to the job and devotes the required effort, his people will follow his lead.
The ideas I have mentioned are not new—previous generations recognized the value of hard work, attention to detail, personal responsibility, and determination. And these, rather than the highly-touted modern management techniques, are still the most important in doing a job. Together they embody a common-sense approach to management, one that cannot be taught by professors of management in a classroom.
I am not against business education. A knowledge of accounting, finance, business law, and the like can be of value in a business environment. What I do believe is harmful is the impression often created by those who teach management that one will be able to manage any job by applying certain management techniques together with some simple academic rules of how to manage people and situations.
Why Are the Major, Steady Declines in Minor and Recordable Injuries Not Seen to the Same Extent in Major Accident (Fatality) Statistics? December 26th, 2013 by Barb Phillips
Why are the major, steady declines in minor and recordable injuries not seen to the same extent in major accident (fatality) statistics? Mark Paradies has new insight into the phenomenon and has used it to develop systematic methods to stop major accidents by using TapRooT® both reactively and proactively.
Register for Reducing Serious Injuries & Fatalities Using TapRooT®, a 2-Day Pre-Summit Course scheduled for April 7-8, 2014 in Horseshoe Bay, Texas.
The course highlights three major sources of major accidents:
* industrial hazards
* process safety and
* driving safety.
Learn new ideas to revolutionize your fatality/major accident prevention programs and start you down the road to eliminating major accidents.
Learn more about the Summit: http://www.taproot.com/taproot-summit
Register for this 2-day course and the Summit and save $200!