Category: Human Performance

Have You Saved a Life Today?

October 3rd, 2012

When was the last time you saved a life?

I remember the first time I saw my daughter, who had a summer job as a lifeguard, save a man and a boy from drowning in a lake. Afterwards I talked to her. She didn’t see it as a big deal. She said she did rescues every week. She didn’t consider it heroic. It was just part of her job.

Several years ago at the TapRooT® Summit, a TapRooT® User approached me to thank me. He said that they had stopped fatalities at their site after learning to apply TapRooT®. He told me that improved performance meant that, over a period of several years, they had saved about five lives at their refinery. He then said …

“Imagine how many TapRooT® Users there are applying TapRooT® around the world …
Easily there are hundreds of lives saved every year!”

Have you found and fixed the root causes of potentially fatal accidents at your site? Then you too have saved a life – or maybe more than one life.

Try not to become complacent about the lives you are saving. Celebrate your success. Tell others about the good job they are doing. Make sure that management knows about the lives saved.

And never stop improving. 

As my boss in the Navy, Captain Willian J. Rodriguez, told me:

“If you’re not pedaling, you’re going downhill.”


Don’t become complacent about saving lives.

Keep up the good work.

And when you can, enlist others in this great cause.

Monday Accident & Lessons Learned: Learning from Fatal Car Wrecks

September 17th, 2012

Consumer Reports published an article about fatality rates for teenage and older drivers. It included this graph…

(Consumer Reports graph: driver fatality rates by age.)

Here is something interesting that you can learn …

Did you know that the human brain doesn’t fully develop the ability to assess risk until about age 25? Now look at the graph above and note the reduction in fatalities after age 25.

Now think … what are you doing to keep your under-25 workers safe in situations where they may make poor risk decisions?

In our family we did what the article recommended – a graduated driving program.

First, kids weren’t allowed to obtain their full licenses until they were 17.

Next, we talked to them extensively about the risks of driving before we let them start driving independently, and we kept that independence as limited as possible (no friends in the car – only family for the first year). We did a lot of “critiqued” driving with them even after they had their full licenses. We also had them take advanced classes (beyond high school driver’s ed) put on by our local police.

And we didn’t get them high powered vehicles.

How could you take the same approach with young employees?

Monday Accident & Lessons Learned: UK RAIB Report on a Track Worker Struck by a Train at Stoats Nest Junction, 12 June 2011

August 27th, 2012

Here’s a link to a report by the UK Rail Accident Investigation Branch about a train that struck a worker near the tracks:

http://www.raib.gov.uk/cms_resources.cfm?file=/120806_R162012_Stoats_Nest.pdf

How do you keep your workers safe from moving vehicles?

One interesting point in this report was that the train’s horn probably could not be heard by the track workers because of the noise generated by the equipment they were using.

What’s the Ideal Temperature for 24 Hour Operations?

July 5th, 2012

In a recent article, Circadian Technologies suggested that the ideal temperature to maintain alertness in 24-hour (shift work) environments is between 66°F and 68°F (about 19–20°C).

Wow! That seems cool! See the whole article here …

http://www.circadian.com/solutions-services/publications-a-reports/newsletters/managing-247-enewsletter/managing-247-qaa-0612.html

Nine Switches of Human Alertness (from Circadian Technologies)

June 27th, 2012

How do you keep people alert? What influences your alertness? Find out about the nine switches that Dr. Martin Moore-Ede has described in his book, The 24 Hour Society. Here’s the link …

http://www.circadian.com/solutions-services/publications-a-reports/newsletters/managing-247-enewsletter/managing-247-article-2-0612.html

How To Test the Accuracy of Your “Why” Root Cause Analysis

June 1st, 2012

I recently received an inquiry asking me, “What is the test to assure the true root cause is found? We use ‘Why-Why’ Analysis.”

Using one of the “why” methods (“Why-Why,” 5-Whys, etc.) unfortunately leads many investigators to question their methodology. That’s because, after using it once or twice, it becomes pretty obvious that these methods are 100% dependent on the experience and biases of the investigator. If you’re a training person, amazingly enough, your “why” analysis leads you to training problems. If you’re a quality person, you end up with quality-related issues.

That’s because these methods do not give you any expert guidance to get beyond the investigator’s current level of knowledge. For example, if you don’t know anything about “human engineering” (What color should an alarm light be? What shape valve handwheel should be used?), you will never look for these problems. You will only find problems you’re already familiar with, and therefore you will only put in place corrective actions that you have probably already tried in the past.

The question posed is exactly the right one: “How do I know I’ve found the real root cause?” The bad news is that, without some type of expert guidance, only a highly trained human performance expert can answer it. The good news is that TapRooT® was designed to give you exactly that expert guidance.

TapRooT® was designed by human performance experts to guide the normal investigator toward the true root causes that a highly trained expert would find. It does this by supplying a series of simple yes/no questions that you answer in the course of your investigation. The answers to those questions quickly narrow you down to the true root causes of the human performance or equipment failures that actually led to the accident. Once you have these root causes, you can apply effective corrective actions to eliminate them, preventing similar human performance mistakes in the future.

Now, instead of ending up with corrective actions like “Counseled the operator on the importance of opening the correct valve” (like he doesn’t already know that!), you can find out why he opened the wrong valve. You can be confident that the root causes you have found are real, proven root causes… the real reasons good people make mistakes.

To directly answer the question: “why” methodologies will not consistently get you to true root causes. There is no test built into those methods to verify that root causes are found. There’s no electronic “magic bullet” that can work around the weaknesses in those systems. You’ll have to go outside those methodologies to get there. Give TapRooT® a try! We guarantee you’ll be satisfied with the results.
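
To make the mechanism concrete, here is a minimal sketch (in Python) of what guided yes/no questioning looks like in general. The questions and cause categories below are hypothetical illustrations (they are NOT the actual TapRooT® Root Cause Tree®), but they show the point: the expert knowledge lives in the tree, not in the investigator’s head.

    # A hypothetical guided question tree; illustrative only, NOT the actual
    # TapRooT(R) Root Cause Tree(R). Questions and cause categories are invented.
    QUESTION_TREE = {
        "question": "Did the error involve a written procedure?",
        "yes": {
            "question": "Was the procedure followed as written?",
            "yes": "candidate cause: procedure wrong or confusing",
            "no": "candidate cause: procedure not used or not enforced",
        },
        "no": {
            "question": "Did displays/labels make the correct action obvious?",
            "yes": "candidate cause: training / knowledge gap",
            "no": "candidate cause: human engineering (displays, labels, layout)",
        },
    }

    def narrow_down(node, answers):
        """Follow yes/no answers through the tree until a cause is reached."""
        for answer in answers:
            node = node["yes" if answer else "no"]
            if isinstance(node, str):  # reached a leaf: a candidate cause
                return node
        raise ValueError("Ran out of answers before reaching a cause.")

    # No procedure involved; the display did not make the right action obvious.
    print(narrow_down(QUESTION_TREE, [False, False]))
    # -> candidate cause: human engineering (displays, labels, layout)

A training person and a quality person walking the same tree with the same honest answers land on the same cause, which is exactly what the unguided “why” methods cannot promise.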

Mark Paradies Spoke at the IIE Conference About the “7 Secrets of Root Cause Analysis” this Week

May 25th, 2012


Mark Paradies spoke at the IIE Conference about the “7 Secrets of Root Cause Analysis” this week. The Industrial Engineers present were very interested in going beyond common problem solving tools like 5-Whys and Cause and Effect and asked some great questions.

To see the paper the talk is based on, CLICK HERE.

Monday Accident & Lessons Learned: If You Make a Hole in the Deck, Someone Will Fall Through It!

April 30th, 2012


Here is an accident report from the BSEE (Bureau of Safety & Environmental Enforcement of the US Department of the Interior):

http://bsee.gov/uploadedFiles/BSEE/Enforcement/Accidents_and_Incidents/Panel_Investigation_Reports/BSEE%202012-01.pdf

How many times have you seen similar accidents with unprotected holes on construction sites, oil platforms, or other locations where work creates “temporary” openings?

It would seem that anyone supervising work should know better.

Yet the report says that the company blamed the roustabout who fell to his death through the hole because he was “…distracted by concern for a family issue at home.”

The report says:

This same story that the accident was caused by a lack of concentration by a distracted Roustabout, was repeated in the initial report to BOEMRE, in interviews by Supervisor, Company Man, and by management of Alliance, and was written into the accident investigation report by Contractor and Operator. The only reason given in statements for this conclusion was that the Roustabout had spoken of it at breakfast and had tried to rearrange his shift to accommodate the family issue.

OK TapRooT® Users, what do you think? Is “lack of concentration” a root cause? Did the company do a thorough investigation? Could they tell everyone to “be more careful” and resume work as usual? Was the BSEE right to question the adequacy of the contractor and the operator?

Read the report and let me know what you think.

What is a “Freak Accident”?

April 24th, 2012

Nic Zoricic died in a World Cup skicross race in Switzerland. The organizers say it was a “terrible, tragic accident.” Others called it a “freak accident.”

Here’s a video of the accident and press coverage…


Here’s an AP article about the family’s statement…

http://www.google.com/hostednews/ap/article/ALeqM5hGhkDBtvc0Sn3rZRH6fBmyjzzTaQ?docId=d2663cf7693146f39613c04722195402

Freak accident or predictable outcome?

US Chemical Safety Board Announces That They Plan to Release an Interim Report on the Deepwater Horizon Accident This Year – CSB Press Release Attached

April 20th, 2012

How much time does it take to investigate an accident?

The US CSB has announced that they are continuing to investigate the Deepwater Horizon (Macondo Well) Accident and will release a preliminary report in July of 2012 and a final report in 2013.

Today is the second anniversary of the blowout and explosion that killed 11 aboard the Deepwater Horizon. Of course, the resulting oil spill continued for months.

An investigation that concludes more than two years after an accident seems too slow to me … but perhaps the results will be worth the wait?

From the information in the CSB press release, the CSB seems to believe that additional regulation – especially developing a safety case – would have prevented the accident. In their press release (see below), they said:

Investigation findings to date indicate a need for companies and regulators to institute more rigorous accident prevention programs similar to those in use overseas.

They also say:

In December 2010, a CSB public hearing in Washington featured international regulators, companies, trade associations and union representatives discussing the “safety case” regulatory approach for offshore safety, a concept widely used in the North Sea and Australia and supported by a number of the participants.

To me this seems strange. Transocean had a similar near-miss incident in the North Sea and there have been major spills in exploration covered by Australian regulations (those “overseas” regulations). But people in Washington seem to always believe that more regulation – rather than better management – will keep everyone safe.

I’ve never really believed that regulations are the answer to safe performance. In the press release, the CSB says:

Process safety regulations and standards utilized by oil companies in refineries and process plants in the continental U.S. have a stronger major accident prevention focus, CSB investigators have determined. Unlike the U.S. offshore regulatory system, the “onshore” process safety requirements are more rigorous and apply both to operators and key contractors.

But refineries and plants covered by the process safety regulations continue to have deadly fires and explosions, showing that adherence to regulations is not enough to prevent accidents. (A subject of my talk at the 2012 TapRooT® Summit – CLICK HERE to watch all three parts of the talk.)

The press release also contains some comments that I believe are right on target from my review of previous Deepwater Horizon accident reports.

First, they say they are looking into the human factors of well control. The release says:

The issue of human factors in offshore drilling and well completion is particularly important as offshore well control programs currently rely to a large extent on manual control, procedures and human intervention to control hazards….”

When reviewing the displays available to the operators monitoring the control of the well, I thought: poor human engineering! (See the charts below for what the operators were supposed to catch.)

(Charts: flow-in vs. flow-out, drill pipe pressure, and normal vs. abnormal flow.)

I haven’t seen pictures of the actual instrumentation that was used (all now at the bottom of the sea), but I imagine similar equipment is used throughout the industry. If the drawings of what was supposed to be observed suggest that real-time detection would be error prone, I can only guess that the actual equipment is as hard or harder to use.

The next “human factors” issue that the CSB is investigating is fatigue. The press release says:

…we are investigating whether fatigue was a factor in this accident. Transocean’s rig workers, originally working 14-day shifts, had been required to go to 21-day shifts on board. CSB is examining whether this decision was assessed for its impact on safe operations.

Of course, fatigue could be an issue on a 14-day or a 21-day shift. I hope they are using a systematic process like FACTS to evaluate the possibility that fatigue was a potential cause of operator errors.

Here is the entire CSB press release …


CSB Investigation into Macondo Blowout and Explosion in Gulf of Mexico Continues; Two Public Hearings and Interim Reports Scheduled for this Year

April 19, 2012

Washington, DC, April 19, 2012 — The U.S. Chemical Safety Board (CSB) announced today that its investigation into the April 20, 2010 Macondo well blowout, explosion and fire in the Gulf of Mexico is progressing, with two interim reports with findings and recommendations to be released this year. A final report is expected to be completed in early 2013.

Investigation findings to date indicate a need for companies and regulators to institute more rigorous accident prevention programs similar to those in use overseas. The CSB announcement was made approaching the second anniversary of the tragedy, which took eleven lives and caused the worst environmental disaster in U.S. history.

Process safety regulations and standards utilized by oil companies in refineries and process plants in the continental U.S. have a stronger major accident prevention focus, CSB investigators have determined. Unlike the U.S. offshore regulatory system, the “onshore” process safety requirements are more rigorous and apply both to operators and key contractors.

To date, the CSB has conducted numerous interviews, examined tens of thousands of documents from over 15 companies and parties, gathered data from two phases of blowout preventer (BOP) testing, and conducted a public hearing on international regulatory approaches.  Recommendations targeting specific reforms are contemplated for release as early as August of this year.

CSB Chairman Rafael Moure-Eraso said, “Our final report on the tragic accident that occurred two years ago, will, I believe, represent an opportunity to make fundamental safety improvements to offshore oil and gas exploration to prevent future catastrophic accidents.”

Dr. Moure-Eraso continued, “The CSB is not a regulatory agency; our job is to convey information and recommendations to improve safety in the oil and chemical industries and protect workers and the environment. In our view, while previous investigations of the Macondo blowout have produced useful information and recommendations, important opportunities for change have not been fully addressed.  And these are critically important for major accident prevention.” 

Don Holmstrom, manager of the CSB Western Regional Office in Denver, whose team is conducting the investigation, announced a timeline calling for the final report release in early 2013, with the first of several public meetings to be held by the CSB likely in July 2012 in Houston, addressing use of leading and lagging indicators by companies and regulators to improve safety performance.

The CSB anticipates releasing preliminary findings and safety recommendations at the meeting, and to hear experts testify on the need for the offshore drilling industry to utilize safety performance indicators like hydrocarbon leaks and maintenance of safety critical equipment to drive safety improvements and to prevent major accidents.

Dr. Moure-Eraso said the CSB’s investigation is taking a broad look at causal issues of the Macondo blowout and the subsequent massive release of flammable hydrocarbons which resulted in an explosion.  “These issues include the manner in which the industry and the regulating agencies learn or did not learn from previous incidents. They also include a lack of human factors guidance, and organizational issues that impaired effective engineering decisions,” he said.

The issue of human factors in offshore drilling and well completion is particularly important as offshore well control programs currently rely to a large extent on manual control, procedures and human intervention to control hazards, said CSB Investigator Cheryl MacKenzie.  She observed “There are no human factors standards or regulations in U.S. offshore drilling that focus on major accident prevention. As an example, we are investigating whether fatigue was a factor in this accident. Transocean’s rig workers, originally working 14-day shifts, had been required to go to 21-day shifts on board. CSB is examining whether this decision was assessed for its impact on safe operations.” 

The CSB investigation team participated in the examination of the blowout preventer (BOP) in Louisiana last year.  As has been reported, the BOP – a massive device designed to shear off the well pipe and stop the flow of volatile hydrocarbons — failed.  The CSB is currently evaluating BOP deficiencies as reflecting larger needed improvements in offshore risk management.  These include lack of safety barrier reliability/ requirements, inadequate hazard analysis requirements for evaluating BOP equipment design, and insufficient management of change requirements for controlling hazards.

The CSB is conducting additional computer modeling of the BOP and assessing the capability of the BOP to close and which functions specifically led to its failure. The agency is also exploring new issues and “near miss” deficiencies that did or could have compromised the ability of the BOP to function properly, including the failure of the annular preventer to seal the well, the impact of drill pipe size, and the performance of the BOP hydraulic accumulators.

Finally, the CSB is carefully examining the physical causes of the drill pipe buckling that other investigations previously concluded may have prevented the BOP’s blind shear rams from functioning correctly.  The CSB is evaluating different mechanisms that could have led to the drill pipe buckling.

CSB Chairman Moure-Eraso said the CSB is examining whether further changes to offshore safety regulations and industry standards are needed.  “While important regulatory changes have occurred, we are examining whether these changes that have been made are sufficient for preventing major accidents.  In December 2010, a CSB public hearing in Washington featured international regulators, companies, trade associations and union representatives discussing the ‘safety case’ regulatory approach for offshore safety, a concept widely used in the North Sea and Australia and supported by a number of the participants.”

In addition, Chairman Moure-Eraso noted that the CSB investigation is also examining the implementation of effective corporate governance and sustainability standards to address safety and environmental risk, organizational issues that impaired effective engineering decisions, and the consideration of past safety performance in lease allocation decisions and contractor selection.

The CSB investigation into the accident has been delayed on occasion as the Board sought to work out mutually acceptable access and investigation agreements with other investigative groups that had different missions.  The Department of Justice has filed an action against Transocean in federal court and has requested that the Court order Transocean to comply with the CSB subpoenas. Following a hearing last week, the CSB anticipates a decision from the Court in the near future.

Chairman Moure-Eraso said, “The CSB investigation of this tragedy will, we believe, offer unique findings and recommendations that, if adopted, would provide significantly safer operations during vital offshore drilling and production activities.”

The CSB is an independent federal agency charged with investigating chemical accidents. The agency’s board members are appointed by the president and confirmed by the Senate. CSB investigations look into all aspects of chemical accidents, including physical causes such as equipment failure as well as inadequacies in regulations, industry standards, and safety management systems.

The Board does not issue citations or fines but does make safety recommendations to plants, industry organizations, labor groups, and regulatory agencies such as OSHA and EPA. Visit our website, www.csb.gov.

For more information, contact Communications Manager Hillary Cohen at 202.446.8094 or Sandy Gilmour at 202.251.5496.

Was Jet Blue Pilot Insane While Flying? His Attorney Claims He Was!

April 18th, 2012

I previously wrote about this serious incident that needs to be investigated as a serious aviation near-miss. (See previous posts here and here.)

The pilot was arrested after the plane made an emergency landing and charges were filed against him for “knowingly interfering … with the performance of the duties of a flight crew member …”.

Now his attorney says he was legally insane, and therefore not responsible for his actions. See the story in The Wall Street Journal by clicking here.

Legally insane pilots? Seems pretty scary to me.

Magic Safety Powers

April 18th, 2012


While I was in Manchester, UK, I started noticing devices with magical safety powers.

The first of these devices was the yellow coat/vest with reflective tape on it.

They were everywhere – on every construction site. Workers wore them walking down the street. We were required to wear them to visit our booth at a conference exhibit (if the stands were “being built”).

Surely, they had magical powers to keep us safe because the links between the hazards present and the protection provided seemed tenuous at best. After all, I had worked safely at Du Pont for five years without a yellow vest and had avoided being crushed or run over. So had 20,000 other workers at our site for 20 years. So, I concluded the yellow vests must have some magical powers that make them worthwhile.

A week later, while in an airport in the US, I noticed another magical device – the hard hat.

This time a worker had the hardhat and a yellow vest on while working on top of a building. There was no overhead hazard and he was not wearing any fall protection. Thus, the combination of a hardhat and vest must have magical powers to protect the worker from the obvious hazard – falling off the building. Perhaps while wearing them he could fly? Otherwise, he wouldn’t be wearing them because there was no other hazard present.

Do you have magical safety equipment where you work?

Equipment that must be worn even when there is no obvious hazard?

And do people stop and think about the obvious hazards and what they should be wearing to prevent accidents?

Or do they just count on the magical powers of the yellow vest and hardhat?

Friday Joke: And The Root Cause Is?

April 13th, 2012

You might not think this is a joke … just stupidity.

New Military Standard for Human Engineering Design Criteria

April 4th, 2012

Here’s the new (January 2012) MIL-STD-1472G:

(Embedded copy of MIL-STD-1472G; click to open.)

I really like the labeling guidance that starts on page 132 (Section 5.4). But don’t stop there … every section has excellent material.


Monday Accident & Lessons Learned: Is Fatigue an Issue?

April 2nd, 2012


I was reviewing an industry study on the causes of accidents and noticed that fatigue was nowhere on the list. Since other studies, in which people actually observed performance, show that fatigue is a major issue in real-world accidents, I wondered why it did not show up on the industry-sponsored list.

The easy answer is … If you don’t ASK about fatigue and look into fatigue as a potential cause, you will never find it.

That reminded me of an investigation into a barge crash. The operator couldn’t find a reason why the First Mate had gone “brain dead” and made a totally inappropriate approach to a bend in the river. It was very important to be lined up correctly because the river was running near flood stage and there was little room for error. But once he was lined up wrong, he had little choice. He tried to “power through” the turn and ended up crashing the barges into a bridge after the turn.

One of the questions I asked the investigator was, “Did you consider fatigue?” (The accident happened at about 5 AM and the tug and barges were on the second day of the trip.)

The reply was interesting … the investigator said:

“He was working a standard schedule.”

That seemed to be enough for him to dismiss fatigue as a cause.

I asked, “What is a standard schedule?” The answer: “6 on and 6 off.”

So the first mate would normally work from midnight to 6 AM, have six hours “off” to rest or work, then be back piloting from noon to 6 PM, get off, eat dinner, and go to bed and get back up to work from midnight to 6 AM again.

I asked if he knew if the First Mate had been well rested before starting the journey. The answer? “No, I didn’t ask about that.”

Even after this questioning, the investigator just couldn’t see that fatigue could be a potential cause that should be looked into. After all, the schedule was a standard industry practice.
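
To see why “standard schedule” settles nothing, consider the arithmetic of 6-on/6-off. Here is a minimal sketch; the overhead numbers (meals, hygiene, watch handover) are assumptions for illustration, not data from this investigation:

    # Illustrative arithmetic only; NOT the FACTS tool. The overhead numbers
    # (meals, hygiene, watch handover) are assumptions, not investigation data.
    work_blocks = [(0, 6), (12, 18)]   # on-watch hours in a 24-hour day (6-on/6-off)
    overhead_per_break = 1.5           # assumed hours lost per off-watch block

    hours_on_watch = sum(end - start for start, end in work_blocks)  # 12
    off_watch_total = 24 - hours_on_watch                            # 12, in two blocks
    sleep_opportunity = off_watch_total - 2 * overhead_per_break     # 9
    longest_sleep = 6 - overhead_per_break                           # 4.5

    print(f"Sleep opportunity: {sleep_opportunity} hours/day,")
    print(f"never more than {longest_sleep} hours at a stretch,")
    print("with one watch (midnight to 6 AM) spanning the circadian low.")

Nine fragmented hours of opportunity can easily become five or six real hours of sleep, so the schedule alone can never rule fatigue out. You have to ask.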

That’s one of the reasons that I started adding sessions about fatigue to the TapRooT® Summit.

It’s also one of the reasons that we collaborated with Circadian Technologies to give investigators a proven diagnostic tool for assessing fatigue: FACTS (Fatigue Accident/Incident Causation Testing System). (Click on the link to subscribe to the online system for free.)

It’s also why I recommend Circadian Technologies seminars on fatigue risk management and shift work scheduling.

If you are interested in learning more about fatigue, there are two seminars coming up that you should consider. The first, “Designing and Implementing an Effective Fatigue Risk Management System,” will be held in Salt Lake City on May 23-24. For more information, see:

http://www.circadianstore.com/catalog/frms-seminar.html

The second, “Successfully Expanding from 5- to 7-Day Continuous Operation,” will be held in Chicago, IL on June 13-14. For more information, see:

http://www.circadianstore.com/catalog/5-to-7-shiftwork-expansion-seminar.html

We should not overlook fatigue as a potential cause. TapRooT® includes a question about fatigue as one of the 15 questions in the Human Performance Troubleshooting Guide on the front of the Root Cause Tree®. So you should consider fatigue for every human error. Ask about fatigue and perform an assessment using FACTS if there seems to be a potential for a fatigue issue. Don’t accept “standard industry practice” as ruling out fatigue as an issue.

Monday Accident & Lessons Learned: What Should You Investigate?

March 19th, 2012

Let’s say this was your facility and you happened to walk up on this job (or is it a circus act?) …

(Photo of the precarious ladder setup in question.)

Yes … You would stop the work (or pause the work … a new term I heard at the BST Conference).

But then what?

a) Discipline?

b) Correct them informally (no need to get them in trouble)?

c) Report as a near-miss?

d) Do a TapRooT® Investigation?

I would argue for a complete root cause analysis using TapRooT®.

Why?

Because this insanity has the potential to become a high-consequence incident.

Even though no one has been hurt yet, this is a near-miss with the potential for a serious injury or fatality. Therefore, it should be treated just as seriously as if someone had fallen off the ladder and broken their wrist (or worse).

Learning from this “near-miss” is an important step in preventing more serious accidents.

So, today’s lesson learned is to learn from the small stuff to make sure the big accidents never happen.

Stopping Human Error

February 23rd, 2012

Recently, I read an article by a human factors expert that said human error can’t be eliminated but that errors could be managed. The article covered the common information about Skill-Rule-Knowledge based behaviors and preventing slips and rule-based errors.

Just a couple of days later, I heard a talk about a method to “self-trigger” and recognize when a mistake was just about to be made so that you could stop yourself in the nick of time and not make a mistake.

The theory was that people are more error prone when they are rushing, fatigued, frustrated, or complacent. And when this is true, they are more likely to take their eyes and mind off a task, put themselves in the line of fire, or lose balance, traction, or grip.

All you need to do is constantly observe your own state of mind; if you become complacent, frustrated, rushed, or fatigued, you alert yourself to be careful and watch/think about what you are doing. Take a break to reduce fatigue. Stop rushing and realize that your frustration is counter-productive.

Simple, right?

You can also work on habits to self-check for errors when you might be in an error likely situation (like being distracted).

What’s wrong with this?

It requires people to exhibit behaviors that aren’t “human.”

People are really bad at self-monitoring.

It’s unlikely that if you are hurried, fatigued, frustrated, or complacent that you will notice it “just in the nick of time.”

However, if you admit afterwards that you were hurried, fatigued, frustrated, or complacent, then the condition seems like an obvious precondition that you failed to notice. Thus, your failure becomes the “cause” of the accident.

What’s a better idea? Use human factors tools to improve the human reliability of the task, and use mistake proofing to trap or prevent errors that can’t be tolerated.

Whenever your performance improvement initiative requires people to be like machines, my bet is that the program will fail.

Instead, use humans where their skills are needed and use automation where unvarying performance is needed.
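
As a deliberately simple illustration of mistake proofing, here is a minimal sketch in Python. The valve names, modes, and rule are hypothetical; the point is that the system traps the wrong-valve command instead of counting on a human to notice the error in the nick of time.

    # Hypothetical mistake-proofing (poka-yoke) sketch; the valves, modes,
    # and rule below are invented for illustration.
    VALVE_FUNCTION = {"V-101": "feed", "V-102": "drain"}

    def open_valve(valve_id: str, operating_mode: str) -> None:
        """Open a valve only if its function matches the current operating mode."""
        if valve_id not in VALVE_FUNCTION:
            raise ValueError(f"Unknown valve: {valve_id}")
        if VALVE_FUNCTION[valve_id] != operating_mode:
            # The error is trapped here, before anything moves; no vigilance needed.
            raise PermissionError(
                f"{valve_id} is a {VALVE_FUNCTION[valve_id]} valve; "
                f"current mode is '{operating_mode}'. Command rejected."
            )
        print(f"{valve_id} opened.")

    open_valve("V-101", "feed")        # consistent with the mode: allowed
    try:
        open_valve("V-102", "feed")    # the classic wrong-valve error: rejected
    except PermissionError as err:
        print(f"Blocked: {err}")

The design choice is the whole point: the check lives in the system, where it never gets tired, rushed, or frustrated.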

(Written by Mark Paradies in the February Root Cause Network™ Newsletter, Copyright © February, 2012 by System Improvements. Reprinted by permission.)

Why Are We Failing To Prevent Sentinel Events? By Mark Paradies

February 16th, 2012


DEATH TOLL

What kills more people in the US than industrial accidents, highway accidents, and airline accidents combined?

Mistakes in hospitals.

The technical term for these mistakes is “Sentinel Events.”

Estimates of the deaths caused vary. We use estimates because there are no accurate statistics on the total number of deaths caused by mistakes in hospitals. There is no national reporting requirement.

Even though there is no national reporting requirement, studies show that despite over a decade of effort to stop sentinel events, no progress is being made. Some studies actually show the problem getting worse. And this problem isn’t unique to the US.

WHY NO IMPROVEMENT

Why can’t we improve? There are a number of factors that make improvement difficult:

1. Healthcare Complexity

2. Poor Root Cause Analysis (RCA)

3. Inadequate Corrective Actions

4. Not Enough Management Attention

We will review all of these factors and what we can do about them in the following sections.

HEALTHCARE COMPLEXITY

Medical practice keeps getting more complex. More complex technology. More drugs with more interactions. More pressure to work faster and be more efficient. The result? More chances to make errors with catastrophic consequences. At the same time, downsizing means fewer staff to catch errors.

Healthcare complexity calls for increased, proactive application of system reliability and human factors solutions to improve healthcare delivery. Intelligent, resilient design can make complex systems reliable. Plus, staffing needs to be assessed to ensure adequate coverage to apply error-catching activities.

POOR ROOT CAUSE ANALYSIS

After a decade of using RCA to analyze sentinel events, the lack of progress indicates a failure of healthcare root cause analysis.

What’s wrong? A majority of healthcare facilities use inadequate RCA systems, including fishbone diagrams, 5-Whys, and healthcare-derived root cause checklists. These “simple” techniques are inadequate to analyze complex healthcare sentinel events.

Not only are the RCA systems inadequate, the RCA training is also inadequate. People are assigned to investigate healthcare sentinel events with little or no training. They are lucky to attend a free one- to eight-hour session provided at a professional society meeting or sponsored by an insurance provider.

But healthcare investigators face another factor that makes root cause analysis even more difficult: BLAME. More than your everyday blame that comes with every accident. Medical malpractice seems designed to make people less open – less willing to cooperate wholeheartedly with investigators.

Furthermore, doctors who are independent contractors are naturally suspicious of investigators who seem to question their judgment and put their credentials at risk. Is it any wonder that we haven’t made progress?

Despite some of the factors that are difficult to address, picking an advanced root cause analysis system and getting people trained shouldn’t be hard. After all, there is TapRooT®!

The TapRooT® System was designed to be used for simple and complex investigations. It has been applied successfully in healthcare settings and has improved performance of complex systems. The 2-Day and 5-Day TapRooT® Courses have been customized for on-site training of healthcare investigators to help them with demanding investigations. Problems solved!

POOR CORRECTIVE ACTIONS

Inadequate root cause analysis is just the start. Typically, we see the weakest corrective actions applied to prevent repeat sentinel events.

Those familiar with the terminology “hierarchy of controls” applied in industrial and process safety may know what I am pointing out. Healthcare corrective actions often include the application of new standards that depend on human reliability. When these fail, investigators recommend some of the “re” corrective actions, including: re-train, re-mind, and re-emphasize (discipline).

But these are the weakest possible corrective actions (see pages 127-129 in your 2008 TapRooT® Book). More effective corrective actions include another kind of “re” action: removing the hazard or the target, or re-engineering the process to improve system reliability and decrease human error without adding additional tasks for people to cope with.
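
For readers who haven’t seen a hierarchy of controls, here is a rough sketch of the usual ordering, strongest first. This is the common industrial-safety convention, not TapRooT®’s exact corrective action guidance:

    # A common hierarchy-of-controls ordering (strongest first); a general
    # industrial-safety convention, not TapRooT(R)'s exact guidance.
    HIERARCHY = [
        "remove the hazard or the target",
        "re-engineer the process (mistake proofing, interlocks)",
        "guards and barriers between hazard and target",
        "warnings, labels, and alarms",
        "procedures and administrative controls",
        "re-train / re-mind / re-emphasize",   # the weak "re" actions
    ]

    def strength_rank(action: str) -> int:
        """Lower rank means a stronger corrective action."""
        return HIERARCHY.index(action)

    # The typical sentinel-event corrective action sits at the weak end.
    assert strength_rank("re-train / re-mind / re-emphasize") > strength_rank(
        "remove the hazard or the target"
    )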

These types of corrective actions and more are the result of a TapRooT® investigation when investigators apply the suggestions in the Corrective Action Helper® and apply Safeguards Analysis as part of the development of their solutions.

MANAGEMENT ATTENTION

One might say that the cause of all the previous problems is inadequate management attention to performance improvement at healthcare facilities. Part of this inattention can probably be attributed to the fact that most healthcare administrators aren’t trained in advanced performance improvement techniques. Even the few who have had Six Sigma training don’t know about advanced root cause analysis and, therefore, don’t know about the action they could take to make performance improvement happen.

Plus, hospital administrators need to become more involved in the analysis, review, and approval of sentinel event investigations. Involvement can bring them face-to-face with the challenges people are experiencing in the field. Trained managers reviewing a SnapCharT® can see beyond blame to real action to improve performance. They can see their contribution to errors that come from understaffing and fatigue. They can become a knowledgeable part of the team fighting sentinel events.

SIMPLE PLAN TO IMPROVE

Each day, hundreds of lives are lost because we haven’t won the battle to defeat sentinel events. Don’t wait for the entire healthcare industry to wake up to the problems and solutions. Don’t wait for regulatory requirements to force your facility into action. Start today with the tools that are at hand.

1. Bring the message to management. Get them involved. They should feel that EVERY sentinel event at their facility is a personal failure to address the causes!

2. Adopt an advanced root cause analysis system – TapRooT® – including the latest root cause analysis software and database to help you learn from small incidents to prevent major sentinel events.

3. Get the training that your facility needs in root cause analysis. This includes training for hospital administrators, staff, and your performance improvement experts.

Start with a customized 2-Day TapRooT® Course for senior management. Follow that with a 2-Day TapRooT® Course for those who are frequently involved in sentinel event investigations and a 5-Day TapRooT® Course for those who facilitate sentinel event investigations.

4. Once you complete steps 1-3, you are ready to start continuous improvement efforts. Start by attending the TapRooT® Summit healthcare track to find out what other leaders in the field of healthcare are doing to continue improvement efforts.

Don’t wait. People are dying waiting for improvement to occur. Start today!

(Reprinted by permission from the February Root Cause Network™ Newsletter, Copyright © February, 2012)

Great Human Factors: When a Hand Control is Called a "Suicide Shifter"

February 16th, 2012

(Photo: 1948 Indian Chief motorcycle.)

I am a sucker for a 1948 Indian Chief motorcycle. So I thought … what a great opportunity to talk about human factors design and show off a little nostalgia. Today’s topic is the Suicide Shifter.

The Suicide Shifter was located on the left side of the fuel tank and was used to shift gears while riding. It was called a Suicide Shifter because you had to take your left hand off the handlebar grip to shift.

So the question for you today is: how many of the equipment controls used at your work area are not placed in the safest position to use while operating?
