Category: Investigations

Scientific Method and Root Cause Analysis

April 4th, 2018 by

[Image: an example scientific method flowchart]

I had someone tell me that the ONLY way to do root cause analysis was to use the scientific method. After all, this is the way that all real science is performed.

Being an engineer (rather than a scientist), I had a problem with this statement. After all, I had done or reviewed hundreds (maybe thousands?) of root cause analyses and I had never used the scientific method. Was I wrong? Is the scientific method really the only or best answer?

First, to answer this question, you have to define the scientific method. And that’s the first problem. Some say the scientific method was invented in the 17th century and was the reason that we progressed beyond the dark ages. Others claim that the terminology “scientific method” is a 20th-century invention. But, no matter when you think the scientific method was invented, there are a great variety of methods that call themselves “the scientific method.” (Google “scientific method” and see how many different models you can find. The one presented above is an example.)

So let’s just say the scientific method that the person was insisting was the ONLY way to perform a root cause analysis required the investigator to develop a hypothesis and then gather evidence to either prove or disprove the hypothesis. That’s commonly part of most methods that call themselves the scientific method.

What’s the problem with this hypothesis-testing model? People don’t do it very well. There’s even a scientific term for the trouble people have disproving their own hypotheses. It’s called CONFIRMATION BIAS. You can Google the term and read for hours. But the short description of the problem is that when people develop a hypothesis they believe in, they tend to gather evidence that proves what they believe and disregard evidence that is contrary to their hypothesis. This is a natural human tendency – think of it like breathing. You can tell someone not to breathe, but they will breathe anyway.

What did my friend say about this problem with the scientific method? That it could be overcome by teaching people that they had to disprove all other theories and also look for evidence that disproves their own theory.

The second part of this answer is like telling people not to breathe. But what about the first part of the solution? Could people develop competing theories and then disprove them to prove that there was only one way the accident could have occurred? Probably not.

The problem with developing all possible theories is that your knowledge is limited. And, of course, even if you did have unlimited knowledge, how long would it take to develop all possible theories and prove or disprove them?

The biggest problem that accident investigators face is limited knowledge.

We used to take a poll at the start of each root cause analysis class that we taught. We asked:

“How many of you have had any type of formal training
in human factors or why people make human errors?”

The answer was always less than 5%.

Then we asked:

“How many of you have been asked to investigate
incidents that included human errors?”

The answer was always close to 100%.

So how many of these investigators could hypothesize all the potential causes for a human error and how would they prove or disprove them?

That’s one simple reason why the scientific method is not the only way, or even a good way, to investigate incidents and accidents.

Need more persuading? Read these articles on the problems with the scientific method:

The End of Theory: The Data Deluge Makes The Scientific Method Obsolete

The Scientific Method is a Myth

What Flaws Exist Within the Scientific Method?

Is the Scientific Method Seriously Flawed?

What’s Wrong with the Scientific Method?

Problems with “The Scientific Method”

That’s just a small handful of the articles out there.

Let me assume that you didn’t read any of the articles. Therefore, I will provide one convincing example of what’s wrong with the scientific method.

Isaac Newton, one of the world’s greatest mathematicians, developed the universal law of gravity. Supposedly he did this using the scientific method. And it worked on apples and planets. The problem is, when atomic and subatomic matter was discovered, the “law” of gravity didn’t work. There were other forces that governed subatomic interactions.

Enter Albert Einstein and modern physics: relativity and, later, quantum mechanics. A whole new set of laws (or maybe you’d call them “theories”) that ruled the universe. These theories were supported by the scientific method. But what are we discovering now? Those theories aren’t “right” either. There are things in the universe that don’t behave the way relativity and quantum mechanics predict, and the two theories famously don’t agree with each other. Even Einstein was wrong!

So, if two of the smartest people around – Newton and Einstein – used the scientific method to develop answers that were wrong but that most everyone believed … what chance do you and I have to develop the right answer during our next incident investigation?

Now for the good news.

Being an engineer, I didn’t start with the scientific method when developing the TapRooT® Root Cause Analysis System. Instead, I took an engineering approach. But you don’t have to be an engineer (or a human factors expert) to use it to understand what caused an accident and what you can do to stop a future similar accident from happening.

Being an engineer, I had my fair share of classes in science. Physics, math, and chemistry are all part of an engineer’s basic training. But engineers learn to go beyond science to solve problems (and design things) using models that have known limitations. A useful model, properly applied, lets an engineer design a building, an electrical transmission network, a smartphone, or a 747 without wrestling with the limitations of quantum mechanics.

Also, being an engineer, I found that the best college course I ever had for understanding accidents wasn’t an engineering course. It was a course on basic human factors – a course that very few engineers take.

By combining the knowledge of high reliability systems that I gained in the Nuclear Navy with my knowledge of engineering and human factors, I developed a model that could be used by people without engineering and human factors training to understand what happened during an incident, how it happened, why it happened, and how it could be prevented from happening again. We have been refining this model (the TapRooT® System) for about thirty years – making it better and more usable – using the feedback from tens of thousands of users around the world. We have seen it applied in a wide variety of industries to effectively solve equipment and human performance issues to improve safety, quality, production, and equipment reliability. These are real world tests with real world success (see the Success Stories at this link).

So, the next time someone tells you that the ONLY way to investigate an incident is the scientific method, just smile and know that they may have been right in the 17th century, but there is a better way to do it today.

If you don’t know how to use the TapRooT® System to solve problems, perhaps you should attend one of our courses. There is a basic 2-Day Course and an advanced 5-Day Course. See the schedule for public courses HERE. Or CONTACT US about having a course at your site.

How Safe Must Autonomous Vehicles Be?

April 3rd, 2018 by

Tesla is under fire for the recent crash of their Model X SUV, and the subsequent fatality of the driver. It’s been confirmed that the vehicle was in Autopilot mode when the accident occurred. Both Tesla and the NTSB are investigating the particulars of this crash.

Photo credit: KTVU Fox 2/Reuters.

I’ve read many of the comments about this crash, in addition to previous crash reports. It’s amazing how much emotion is poured into these comments. I’ve been trying to understand the human performance issues related to these crashes, and I find I must take special note of the human emotions that are attached to these discussions.

As an example, let’s say that I develop a “Safety Widget™” that is attached to all of your power tools. This widget raises the cost of your power tools by 15%, and it can be shown that this option reduces tool-related accidents on construction sites by 40%.  That means, on your construction site, if you have 100 incidents each year, you would now only have 60 incidents if you purchase my Safety Widget™.  Would you consider this to be a successful purchase?  I think most people would be pretty happy to see their accident rates reduced by 40%!

Now, what happens when you have an incident while using the Safety Widget™? Would you stop using it the first time it did NOT prevent an injury? I don’t think so. We’d still be pretty happy that we were preventing 40 incidents at our site each year. Would we still be trying to reduce the remaining 60 incidents? Of course. We’d keep right on using the Safety Widget™, continue looking for additional safeguards to put in place, and keep trying to improve the design of the original Safety Widget™.

This line of thinking does NOT seem to hold for autonomous vehicles. For some reason, many people seem to expect that these systems must be perfect before we are allowed to deploy them. Independent reviews (NOT by Tesla) have shown that, on a per-driver-mile basis, Autopilot systems reduce accidents by 40% compared with normal driver accident rates. In the U.S., we experience about 30,000 fatalities each year due to driver error. Shouldn’t we be happy that, if everyone had an autonomous vehicle, we would be saving 12,000 lives every year? The answer to that, you would think, would be a resounding “YES!” But the answers carry far more emotional content than the straight data would suggest.
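
To keep the numbers straight, here is a minimal sketch of the arithmetic behind both examples, using only the figures quoted above (the 40% reduction, the 100 incidents per year, and the roughly 30,000 annual driver-error fatalities). It is an illustration of the claims, not new data:

    # Sanity check of the risk-reduction arithmetic quoted above.
    # All figures come from the text: a 40% reduction, 100 incidents/year
    # at the hypothetical site, ~30,000 annual U.S. driver-error fatalities.

    def incidents_after(baseline: float, reduction: float) -> float:
        """Expected yearly incidents after applying a safeguard."""
        return baseline * (1.0 - reduction)

    REDUCTION = 0.40

    # Safety Widget(TM): 100 incidents/year becomes 60.
    print(incidents_after(100, REDUCTION))              # 60.0

    # Autopilot: 30,000 fatalities/year becomes 18,000, i.e., 12,000 saved.
    print(30_000 - incidents_after(30_000, REDUCTION))  # 12000.0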

I think there may be several human factors in play as people respond to this question:

  1. Over- and under-trust in technology: I was talking to one of our human factors experts, and he mentioned this phenomenon. Some people under-trust technology in general and, therefore, will find reasons not to use it, even when it is proven to work. Others will over-trust the technology, as evidenced by the Tesla drivers who watch movies, or who don’t respond to system warnings to take back manual control of the vehicle.
  2. “I’m better than other drivers. Everyone else is a bad driver; while they may need assistance, I drive better than any autonomous gadget.” I’ve heard this a lot. I’m a great driver; everyone else is terrible. It’s well documented that most people have an inflated opinion of their own capabilities compared to the “average” person. If you were to believe most people, each individual (when asked) is better than average. That would make it REALLY difficult to calculate an average, wouldn’t it?
  3. It’s difficult to calculate the unseen successes. How many incidents were avoided by the system? It’s hard to see the positives, but VERY easy to see the negatives.
  4. Money. Obviously, some people will be put out of work as autonomous vehicles become more prevalent. Long-haul truckers will be replaced by autopilot systems. Cab drivers, delivery drivers, Uber drivers, and train engineers are all worried about their jobs, so they are more likely to latch onto any negative that helps them maintain their relevance. Sometimes this is done subconsciously, and sometimes it is a conscious decision.

Of course, we DO have to monitor and control how these systems are rolled out. We can’t have companies roll out inferior systems that can cause harm due to negligence and improper testing. That is one of the main purposes of regulation and oversight.

However, how safe is “safe enough?” Can we use a system that isn’t perfect, but still better than the status quo? Seat belts don’t save everyone, and in some (rare) cases, they can make a crash worse (think of Dale Earnhardt, or a crash into a lake with a stuck seat belt). Yet, we still use seat belts. Numerous lives are saved every year by restraint systems, even though they aren’t perfect. How “safe” must an autonomous system be in order to be accepted as a viable safety device? Are we there yet? What do you think?

Effective Listening Skills Inventory for Investigative Interviews

March 29th, 2018 by

Do you ever interrupt someone because you fear “losing” what you want to say? Do you become momentarily engrossed in your thoughts, then return to reality to find someone awaiting your answer to a question you didn’t hear? Most of us are at fault for interrupting or being distracted from time to time. Particularly, though, in an interview environment where focus is key, distractions or interruptions can be detrimental to the interview.

Watch, listen, learn from this week’s conversation between Barb Carr and Benna Dortch:

Effective Listening Skills Inventory For Investigation Interviews from TapRooT® Root Cause Analysis on Vimeo.

Now, learn how to take inventory of your listening skills. Internalizing these suggestions can recalibrate your thought and communication processes, and your work and your communication style will reflect the changes you’ve made.

Feel free to comment or ask questions on Facebook. We will respond!

Bring your lunch next Wednesday and join TapRooT®’s Facebook Live session. You’ll pick up valuable, workplace-relevant takeaways from an in-depth discussion between TapRooT® professionals. We’ll be delighted to have your company.

Here’s the scoop for tuning in next week:

Where? https://www.facebook.com/RCATapRooT/

When? Wednesday, April 4, 2018

What Time? Noon Eastern | 11:00 a.m. Central | 10:00 a.m. Mountain | 9:00 a.m. Pacific

Thank you for joining us!

Monday Accidents & Lessons Learned: Does What You See Match What Is Happening?

March 26th, 2018 by

An incident report from NASA’s Aviation Safety Reporting System (ASRS) gives insight into a pilot’s recurring, puzzling observation. A Bonanza pilot perceived the runway edge and centerline lights as cycling off and on, a distracting and confusing effect. Air Traffic Control (ATC) let him know that the centerline lights were steady, not blinking.

The pilot summarized his experience, “I was transiting the final approach path of . . . Runway 16R and observed the runway edge and centerline lights cycle on and off . . . at a rate of approximately 1 per second. It was very similar to the rate of a blinking traffic light at a 4-way vehicle stop. The [3-blade] propeller speed was 2,400 RPM. This was observed through the entire front windscreen and at least part of the pilot side window. I queried ATC about the reason for the runway lights blinking and was told that they were not blinking. It was not immediately obvious what was causing this, but I did later speculate that it may have been caused by looking through the propeller arc.

“The next day [during] IFR training while on the VOR/DME Runway 16R approach, we observed the runway edge and centerline lights cycle on and off . . . at a rate slightly faster than 1 per second. The propeller speed was 2,500 RPM. I then varied the propeller speed and found that, at 2,700 RPM, the lights were observed strobing at a fairly high rate and, at 2,000 RPM, the blinking rate slowed to less than once per second. This was observed through the entire approach that terminated at the Missed Approach Point (MAP). The flight instructor was also surprised and mentioned that he had not seen this before, but also he doesn’t spend much time behind a 3-blade propeller arc.

“I would speculate that the Pulse Width Modulation (PWM) dimming system of the LED runway lights was phasing with my propeller, causing the observed effect. I would also speculate that the effect would . . . significantly differ at other LED dimming settings . . . and behind a 2-blade propeller.

“I found the effect to be entirely confusing and distracting and would not want to make a landing in such conditions.”
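
The pilot’s speculation is easy to put rough numbers to. A three-blade propeller at 2,400 RPM sweeps a blade across the line of sight 2,400 / 60 × 3 = 120 times per second, and if the LED dimming PWM runs near that frequency, the apparent blink rate is the small difference (beat) between the two. Here is a minimal sketch of that arithmetic; the 119 Hz PWM value is an assumed number for illustration only, since the report does not give the actual dimming frequency (matching all of the pilot’s observations would require the real frequency and its harmonics):

    # Blade-pass ("shutter") frequency for the RPMs reported above.
    BLADES = 3  # the report notes a 3-blade propeller

    def blade_pass_hz(rpm: float, blades: int = BLADES) -> float:
        """Blade crossings of the line of sight per second."""
        return rpm / 60.0 * blades

    for rpm in (2000, 2400, 2500, 2700):
        print(f"{rpm} RPM -> {blade_pass_hz(rpm):.0f} blade passes per second")

    # If the LED PWM frequency sits near the blade-pass frequency, the
    # apparent blink rate is the beat between them. 119 Hz is ASSUMED:
    pwm_hz = 119.0
    print(abs(pwm_hz - blade_pass_hz(2400)))  # ~1 Hz: about one blink per second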

The TapRooT® System, Training, and Software are built on a dedicated history of R&D in human performance and improvement. Learn with our best incident investigation and root cause analysis systems.

Root Cause Analysis Audit Idea

March 22nd, 2018 by

In the past couple of years, has your company had a major accident?

If it did, did you check to see whether there were previous smaller incidents that should have been learned from, and whether the corrective actions from those incidents should have prevented the major accident?

I don’t think I have ever seen a major accident that didn’t have precursors that could have been learned from to improve performance. The failure to learn and improve is a problem that needs a solution.

In the TapRooT® root cause analysis of a major accident, the failure to fix previous precursor incidents should get you to the root cause of “Corrective Action NI” (Needs Improvement) if effective corrective actions were never implemented after the previous investigations.

If this seems like a new idea at your facility, here is something you might try. Go back to your last major accident. Review your database to look for similar precursor incidents. If there aren’t any, you have identified a problem: you aren’t getting good reporting of minor incidents with potentially serious consequences.

If you find previous incidents, it’s time for an audit. Review the investigations to determine why the previous corrective actions weren’t effective. This should produce improvements to your root cause analysis processes, training, reviews, …

Don’t wait for the next big accident to improve your processes. You have all the data that you need to start improvements today!

Are you a Proficient TapRooT® Investigator?

March 19th, 2018 by

I teach a lot of TapRooT® courses all over the world, to many different industries and departments.  I often get the same questions from students during these courses.  One of the common questions is, “How do I maintain my proficiency as a TapRooT® investigator?”

This is a terrific question, and one that you should think carefully about.  To get a good answer, let’s look at a different example:

Let’s say you’ve been tasked with putting together an Excel spreadsheet for your boss.  It doesn’t have to be anything too fancy, but she did ask that you include pivot tables in order to easily sort the data in multiple ways.  You decide to do a quick on-line course on Excel to brush up on the newest techniques, and you put together a great spreadsheet.

Now, if your boss asked you to produce another spreadsheet 8 months from now, what would happen?  You’d probably remember that you can use pivot tables, but you’d probably have forgotten exactly how they work.  You’ll most likely have to relearn the technique, looking back over your last spreadsheet, or maybe hitting YouTube for a refresher.  It would have been nice if you had worked on a few spreadsheets in the meantime to maintain the skills you learned from your first Excel course.  And what happens if Microsoft comes out with a new version of Excel?

Performing TapRooT® investigations is very similar.  The techniques are not difficult; they can be used by pretty much anyone, once they’ve been trained.  However, you have to practice these skills to get good at them and maintain your proficiency.  When you leave your TapRooT® course, you are ready to conduct your first investigation, and those techniques are still fresh.  If you wait 8 months before you actually use TapRooT®, you’ll probably need to refresh your skills.

In order to remain proficient, we recommend the following:

  • Obviously, you need to attend an initial TapRooT® training session.  We would not recommend trying to learn a technique by reading a book.  You need practice and guidance to properly use any advanced technique.
  • After your class, we recommend you IMMEDIATELY go perform an investigation, probably within the next 2 weeks or so.  You need to quickly use TapRooT® in your own work environment.  You need to practice it in your own conference room, know where your materials will be kept, know who you’re going to contact, etc.  Get the techniques ingrained into your normal office routine right away.
  • We then recommend that you use TapRooT® at least every month.  That doesn’t necessarily mean that you must perform a full incident investigation monthly; maybe just use a few of the techniques.  For example, you could perform an audit and run those results through the Root Cause Tree®.  Anything to stay proficient with the techniques.
  • Refresher training is also a wonderful idea.  We would recommend attending a refresher course every 2 years to make sure you are up to speed on the latest software and techniques.  If you’ve attended a 2-Day TapRooT® course, maybe a 5-Day Advanced Team Leader Course would be a good choice.
  • Finally, attending the Annual Global TapRooT® Summit is a great way to keep up to speed on your TapRooT® techniques.  You can attend a specialized Pre-Summit course (Advanced Trending Techniques, or Equifactor® Equipment Troubleshooting, or maybe an Evidence Collection course), and then attend a Summit track of your choosing.

There is no magic here.  The saying, “Use it, or Lose it” definitely applies!

Miami Bridge Collapse – Is Blame Part of Your Investigation Policy?

March 16th, 2018 by


I was listening to a news report on the radio this morning about the pedestrian bridge collapse in Miami. At one point, they were interviewing Florida Governor Rick Scott.  Here is what he said:

“There will clearly be an investigation to find out exactly what happened and why this happened…”

My ears perked up, and I thought, “That sounds like a good start to a root cause investigation!”

And then he continued:

“… and we will hold anybody accountable if anybody has done anything wrong,”

Bummer.  His statement had started out so well, and then went directly to blame in the same breath.  He had just arrived on the scene.  Before anyone had a good feel for the actual circumstances, the assumption was already that the corrective actions would pinpoint blame and dish out the required discipline.

This is pretty standard for government and public figures, so I wasn’t too surprised.  However, it got me thinking about our own investigations at our companies.  Do we start out our investigations with the same expectations?  Do we begin with the good intentions of understanding what happened and finding true root causes, but then have this expectation that we need to find someone to blame?

We as companies owe it to ourselves and our employees to do solid, unbiased incident investigations.  Once we get to reliable root causes, our next step should be to put fixes in place that answer the question, “How do we prevent these root causes from occurring in the future?  Will these corrective actions be effective in preventing the mistakes from happening again?”  In my experience, firing the employee / supervisor / official in charge rarely leads to changes that will prevent the tragedy from happening again.

 

ASSE Safety 2018 Flash Session on Investigative Interviewing

March 15th, 2018 by


I’m excited to be selected as a flash session speaker for ASSE Safety 2018. My talk “Top 3 Tips for Improving your Investigative Interviewing Skills” is planned for Monday, June 4, 2018 at 2:45 p.m. in the Exhibit Hall, Booth #2165. I hope to see you there! Please stop by the TapRooT® Booth and talk to Dave Janney and me while you’re at the conference.

Protection Against Hydrogen Sulfide

March 6th, 2018 by

On January 16, 2017, a private construction company sent four utility workers to handle complaints about sewage backup in Key Largo, Florida. Three of the four workers descended into the 15-foot-deep drainage hole, and within seconds all voice communication with them was lost.

The Key Largo Fire Department was the first to respond to the scene. Leonardo Moreno, a volunteer firefighter, tried to enter the hole with his air tank but failed. So he descended without his air tank and lost consciousness within seconds of entering the drainage hole. Eventually, another firefighter was able to enter the hole with an air tank and pull Moreno out. Unfortunately, the three construction workers weren’t so lucky: all of them died from hydrogen sulfide poisoning, and Moreno was left in critical condition.

Unfortunate events like this are completely avoidable. Comment below on how this incident could have been prevented by using TapRooT® proactively.

To learn more about this tragic incident, click here.

Monday Accident & Lessons Learned: Runaway Trailer Investigation Provides Context for Action Sequence

February 26th, 2018 by

The Rail Accident Investigation Branch (RAIB) investigated and released its report on a runaway trailer incident that occurred May 28, 2017, in Hope, Derbyshire, England. The incident occurred when a trailer being propelled by a small rail tractor became detached and traveled approximately one mile before coming to a stop. RAIB found that a linchpin had been erroneously inserted.

Chief Inspector of Rail Accidents Simon French remarked, “The whole episode, as our report shows, was a saga arising from lack of training, care, and caution.” Read the RAIB report, which delineates the circumstances and recommendations, here. Enroll in a TapRooT® course to gain the training necessary to investigate and, further, to prevent incidents.

‘Equipment Failure’ is the cause?

February 22nd, 2018 by

Drone view of tank farm fire. Photo: West Fargo Fire Department

On Sunday, there was a diesel fuel oil fire at a tank farm in West Fargo, ND. About 1,200 barrels of diesel leaked from the tank.  The fire appears to have burned for about 9 hours.  They had help from fire departments from the local airport and the local railway company, and drone support from the National Guard.  Nearby residents were evacuated.  Soil remediation is in progress, and operations at the facility have resumed.  Read more about the story here.

The fire chief said it looks like there was a failure of the piping and pumping system for the tank. He said that the owners of the tank are investigating. However, one item caught my attention. He said, “In the world of petroleum fires, it wasn’t very big at all. It might not get a full investigation.”

This is a troublesome statement.  Since it wasn’t a big, major fire, and no one was seriously hurt, it doesn’t warrant an investigation.  However, just think of all the terrific lessons that could be discovered and learned from.  How major must a fire be in order to get a “full investigation”?

I often see people minimize issues that were “just equipment failures.”  There isn’t anyone to blame, no bad people to fire; it was just bad equipment.  We’ll just chalk this one up to “equipment failure” and move on.  That mindset can cause people to ignore the entire accident, treating “it was equipment failure” as if it were as deep as we need to go.

Don’t get caught in this trap.  While I’m sure the tank owner is going to go deeper, I encourage the response teams to do their own root cause analyses to determine whether their response was adequate, whether notifications were correct, whether they had reliable lines of communication with external agencies, etc.  It’s a great opportunity to improve, even if it was only “equipment failure,” and even if you are “only” the response team.

Monday Accident & Lesson Learned: Quick action by mother prevents toddler from falling through hole in moving train

February 19th, 2018 by


A child was rescued when his mom grabbed him before he could fall through a hole in the floor of a moving train.  In a 39-page report published by the Rail Accident Investigation Branch, it was revealed that “The child entered the toilet, and as the door opened and the child stepped through it, he fell forward because the floor was missing in the compartment he had entered.” Read the report here.

Top 3 Reasons Corrective Actions Fail & What to Do About It

February 15th, 2018 by

Ken Reed and Benna Dortch discuss the top three reasons corrective actions fail and how to overcome them. Don’t miss this informative video! It is a 15-minute investment of time that will change the way you think about implementing fixes and improving performance at your facility.

Support your Investigation Results with Solid Evidence Collection

February 13th, 2018 by

We have a couple of seats left in the “TapRooT® Evidence Collection and Interviewing Techniques to Sharpen Investigation Skills” 2-day Pre-Summit course, February 26-27 in Knoxville, Tennessee. My co-instructor, Reb Brickey, and I are excited to share methods of collection that will help you stop assumptions in their tracks!

Evidence collection techniques help investigators focus on the facts of an investigation. We will talk about pre-planning and the different types of evidence to keep in mind, and we will spend a good part of the second day learning about and practicing interviewing techniques. The course begins with an interesting case study; attendees break into investigation teams and work toward solving the evidence collection puzzle for the accident presented.

Class size is limited to ensure an ideal learning environment, so register now for the 2-day course only or for the 2-day course plus the 3-day Global TapRooT® Summit.

 

Stop Assumptions in Their Tracks!

February 13th, 2018 by

Assumptions can cause investigators to reach unproven conclusions.

But investigators often make assumptions without even knowing that they are assuming.

So how do you stop assumptions in their tracks?

When you are drawing your SnapCharT®, you need to ask yourself …

How do I know that?

If you have two ways to verify an Event or a Condition, you probably have a FACT.

But if you have no ways to prove something … you have an assumption.

What if you only have one source of information? You have to evaluate the quality of the source.

What if one eyewitness told you the information? You should probably still consider it an assumption. Can you find physical evidence that provides a second source?

What if you just have one piece of physical evidence? You need to ask how certain you are that this piece of physical evidence can only have one meaning or one cause.

Dashed Boxes

Everything that can’t be proven to be a fact should be in a dashed box or dashed oval on your SnapCharT®. And on the boxes or ovals that you are certain about? List your evidence that proves they are facts.
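
If it helps to see the rule spelled out mechanically, here is a minimal sketch of that decision logic in code. The function and its arguments are illustrative only; they are not part of the TapRooT® Software:

    # Illustrative encoding of the "How do I know that?" rule above.
    # A hypothetical helper, not a TapRooT(R) Software feature.

    def snapchart_style(independent_sources: int,
                        single_source_strong: bool = False) -> str:
        """How an Event or Condition should be drawn on a SnapCharT(R)."""
        if independent_sources >= 2:
            return "fact: solid box, with the supporting evidence listed"
        if independent_sources == 1 and single_source_strong:
            # e.g., physical evidence that can only have one meaning
            return "fact: solid box, but note the single source"
        # no proof, or one source of unverified quality (a lone eyewitness)
        return "assumption: dashed box or oval"

    print(snapchart_style(2))  # two ways to verify -> probably a fact
    print(snapchart_style(1))  # one eyewitness -> still an assumption
    print(snapchart_style(0))  # no proof -> an assumption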

Now you have stopped assumptions in their tracks!

Monday Accident & Lessons Learned: The Lac-Mégantic rail disaster

February 12th, 2018 by

The Lac-Mégantic rail disaster occurred when an unattended 74-car freight train carrying Bakken Formation crude oil rolled down a 1.2% grade from Nantes and derailed, resulting in the fire and explosion of multiple tank cars. Forty-two people were confirmed dead, with five more missing and presumed dead. More than 30 buildings were destroyed. The death toll of 47 makes it the fourth-deadliest rail accident in Canadian history.

 

Click the image to view or download the report (.pdf).

 

Why You Should Use the TapRooT® Process for Smaller Investigations

February 7th, 2018 by

“If the hammer is your only tool, all of your problems will start looking like nails.”

Per Ohstrom demonstrates the methodology and shares how TapRooT® is used to investigate smaller incidents. Are you using the 5-Whys to investigate these types of incidents? The 5-Whys won’t take you beyond your own knowledge. Find out how TapRooT® will!

Monday Accidents & Lessons Learned: Three Killed, Dozens Injured on Italian Trenord-Operated Train

February 5th, 2018 by

Packed with 250 commuters and heading to Milan’s Porta Garibaldi station, the Italian Trenord-operated train derailed January 25, 2018, killing three people and seriously injuring dozens. The train was said to have been traveling at normal speed but was described by witnesses as “trembling for a few minutes before the accident.” A collapse of the track is under investigation.

Why is early information-gathering important?

How to Make Incident Investigations Easier

January 31st, 2018 by

Ken Reed talks about the differences between an investigation for a low-to-moderate incident and a major incident. Find out how TapRooT® makes both types of investigation easier to manage.

Want to learn how to investigate a major/minor incident with all of the advanced tools? Sign up for an upcoming 5-day training!

Want to start with just the essential skills for performing a root cause analysis on a minor or major incident? It’s a great place to start, with a minor investment of time. Sign up for an upcoming 2-day training!

Root Cause Analysis Tip: Do you perform an incident investigation like you watch the news?

January 31st, 2018 by

If you are like me, you flip channels to see how each news station or news website reports the same issue of interest. Heck, I even look at how different countries discuss the same issue. Take the “Deepwater Horizon Spill of 2010” … or was it the “BP Oil Spill of 2010”? Or the “Gulf of Mexico Oil Spill of 2010”? It depends on where you were and what you watched when it was reported. At the end of the day, we all often develop Bias Criteria of Trust … often without any true ability to determine which perspective is closer to the truth.

Now, there are fancier terms for bias, from confirmation bias to hindsight bias, but let’s take a look at some of our news-source Bias Criteria of Trust.


So here is the question to stop and ask … do you do the same thing when you start an investigation, perform a root cause analysis, or troubleshoot equipment? It is very easy to say YES! We tend to trust interviews and reports using the same criteria above before we actually have the evidence. We also tend to distrust interviews and reports purely because of who they came from and where, again without evidence!

Knowing this …

Stop the urge to not trust or to overly trust. Go Out And Look (GOAL) and collect the evidence.

Got your interest? Want to learn more? Feel free to contact me or any of our TapRooT® Instructors at info@taproot.com or call 865.539.2139.

4 Signs You Need to Improve Your Investigations

January 29th, 2018 by

If you want to improve your root cause analysis beyond simple techniques that yield incomplete results and don’t stop problems, you are probably ready for step one … implementing the TapRooT® Root Cause Analysis System.

But many find that after they implement the TapRooT® System, they still have room to improve their investigations. Here are four signs that you’re ready for step two:

  1. Investigator Bad Habits – Before your investigators were trained to use TapRooT®, they probably had some other method they used to find “the root cause.” The bad habits they learned probably aren’t completely corrected in a single 2-Day or 5-Day TapRooT® Root Cause Analysis Course. They may have previously been trained that there was only one root cause. They might not know how to interview or collect information (facts). They may need practice drawing complete SnapCharT®s or identifying all the Causal Factors. Therefore, they may need more training or some coaching to complete the development of their skills.
  2. Insufficient Time & Resources – Even if you are a great investigator, you need time to collect evidence and complete your investigation. If you have too little time and if you don’t have adequate resources, the TapRooT® Training alone can’t make your investigations excellent.
  3. Inadequate Investigation Review – Investigators need feedback to improve their skills. Where do they get expert feedback? It could come from management if they are experts in root cause analysis. If management doesn’t understand root cause analysis, the feedback they get may not improve future results. Therefore, you should probably implement a “peer review” before management review occurs. The “peer review” will be done by one or more root cause analysis experts to identify areas for improvement BEFORE the investigation is presented to management. The best peer reviews are conducted while the investigation is being performed. Think of this as just-in-time coaching.
  4. Insufficient Practice – Even with great training to start with, people become “rusty” if they don’t practice their skills. Of course, you don’t want more serious incidents just to give your investigators more experience. What can you do? Three things … a) Use the TapRooT® System to investigate less serious, but potentially serious, incidents. The new book, Using the Essential TapRooT® Techniques to Investigate Low-to-Medium Risk Incidents, can show you how to do this without wasting time and effort. b) Use the TapRooT® System to prepare for, perform, and analyze the results of audits. Learn how to do this in the upcoming pre-Summit course, TapRooT® for Audits. Or get the book, TapRooT® Root Cause Analysis for Audits and Proactive Performance Improvement. c) Have a refresher course for your investigators (contact us for info by CLICKING HERE) or have them attend a pre-Summit Course and the Global TapRooT® Summit to refresh their skills.

Are you ready for step two? Would you like to learn more about improving your implementation of TapRooT® and changing the culture of your company’s investigations and root cause analysis? Then get registered for the 2018 Global TapRooT® Summit.

FIRST, Mark Paradies, President of System Improvements and TapRooT® author, will be giving a keynote address titled “How Good is Your TapRooT® Implementation?” Learn how to apply best practices from around the world to improve your use of TapRooT® Root Cause Analysis.

SECOND, Jack Frost, Vice President of HSE at Matrix Service Company, will be giving a Best Practice Track talk titled “Improving Safety Culture Through Measuring and Grading Investigations.” In this session, he will discuss using an evaluation matrix to grade your investigations and coach your investigators toward better root cause analysis.

You can download the matrix that Jack uses here: http://www.taproot.com/content/wp-content/uploads/2015/04/RateRootCauseAnalysis11414.xlsx.

Don’t be satisfied. Continually improve your root cause analysis!

Monday Accident & Lessons Learned: How Long Should a Root Cause Analysis Take?

January 29th, 2018 by

On January 25th, The Atlanta Journal-Constitution reported that Georgia Power had not yet identified the cause of the December 17th electrical fire that shut down power to large portions of Atlanta’s Hartsfield-Jackson Airport. The article reports that the outage caused massive passenger disruptions and that the damage will cost $230,000 to repair. Delta says that the disruption from the fire and an early-December snowstorm will cost the airline $60 million.

Obviously this incident is worth preventing and needs an effective root cause analysis. It has been over a month since the fire. The question is … how long should a root cause analysis take? A month, three months, a year, three years?

Of course, the answer varies depending on the type of the incident but what do you think is reasonable?

Leave your comments by clicking on the Comment link below.

CSB to Investigate Fatal Well Explosion in Oklahoma

January 27th, 2018 by

I don’t know when the CSB became the drilling investigator but here is their press release announcing the investigation…

CSB Will Investigate Fatal Well Explosion in Oklahoma

Washington D.C. January 25, 2018 – The U.S. Chemical Safety Board announced today that it will be moving forward with a full investigation into the fatal gas well explosion near Quinton, Oklahoma. The explosion fatally injured five workers.

Upon notification of the incident, the CSB deployed two investigators to gather additional facts  to assist the Board in making  a decision regarding the scope of the investigation. Investigators arrived on site Wednesday morning and met with the lease holder for the well and the drilling operator.  CSB investigators will continue to meet with well service providers and the well site consultant company that had employees on site at the time of the incident. Evidence preservation and collection is the initial focus of the investigation.

The CSB is an independent, non-regulatory federal agency whose mission is to drive chemical safety change through independent investigations to protect people and the environment. The agency’s board members are appointed by the President and confirmed by the Senate.

CSB investigations examine all aspects of chemical incidents, including physical causes such as equipment failure as well as inadequacies in regulations, industry standards, and safety management systems. For more information, contact public@csb.gov.

Is this a good idea? … Navy to have “Article 32” hearings for COs involved in collisions at sea.

January 17th, 2018 by

Didn’t I just read (see this LINK) a Navy investigation that implied there were Management System causes of the two collisions in the Pacific? Didn’t the report suggest that the Navy needed to change its culture?

An article in USNI News says that both Commander Alfredo J. Sanchez and Commander Bryce Benson will face Article 32 hearings (the prelude to a court martial) over their role in the ships’ collisions in the Pacific.

Will punishment make the Navy better? Will it make it easier for ship’s commanding officers to admit mistakes? And what about the crew members who are facing disciplinary hearings? Will that make the culture of the Navy change from a reactive-punitive culture to a culture where mistakes are shared and learned from BEFORE major accidents happen?

What do you think…

Here is the press release from the Navy’s Consolidated Disposition Authority (Director of Naval Reactors Adm. James F. Caldwell):

On 30 October 2017, Admiral William Moran, Vice Chief of Naval Operations, designated Admiral Frank Caldwell as the Consolidated Disposition Authority to review the accountability actions taken to date in relation to USS Fitzgerald (DDG 62) and USS John S. McCain (DDG 56) collisions and to take additional administrative or disciplinary actions as appropriate.

After careful deliberation, today Admiral Frank Caldwell announced that Uniform Code of Military Justice (UCMJ) charges are being preferred against individual service members in relation to the collisions.

USS Fitzgerald: Courts-martial proceedings/Article 32 hearings are being convened to review evidence supporting possible criminal charges against Fitzgerald members. The members’ ranks include one Commander (the Commanding Officer), two Lieutenants, and one Lieutenant Junior Grade. The charges include dereliction of duty, hazarding a vessel, and negligent homicide.

USS John S. McCain: Additionally, for John S. McCain, one court-martial proceeding/Article 32 hearing is being convened to review evidence supporting possible criminal charges against one Commander (the Commanding Officer). The charges include dereliction of duty, hazarding a vessel, and negligent homicide. Also, one charge of dereliction of duty was preferred and is pending referral to a forum for a Chief Petty Officer.

The announcement of an Article 32 hearing and referral to a court-martial is not intended to and does not reflect a determination of guilt or innocence related to any offenses. All individuals alleged to have committed misconduct are entitled to a presumption of innocence.

Additional administrative actions are being conducted for members of both crews including non-judicial punishment for four Fitzgerald and four John S. McCain crewmembers.

Information regarding further actions, if warranted, will be discussed at the appropriate time.

Equipment Troubleshooting in the Future

January 5th, 2018 by

Equipment Troubleshooting in the Future
By Natalie Tabler and Ken Reed

If you haven’t read the article by Udo Gollub on the Fourth Industrial Revolution, take some time to open this link. This article can actually be found at many links on the internet, so attribution is not 100% certain, but Mr. Gollub appears to be the probable author.

The article is interesting. It discusses a viewpoint that, in the current stage of our technological development, disruptive technologies are able to very quickly change our everyday technological expectations into “yesterday’s news.” What we consider normal today can be quickly overtaken and supplanted by new technology and paradigms. While this is an interesting viewpoint, one of the things I don’t see discussed is one of the most common problems with automating our society: equipment failure. If our world will largely depend on software controlling machinery, then we need to take a long hard look at avoiding failure not only in the manufacturing process, but also in the software development process.

The industrial revolution that brought us from an agricultural society to an industrial one also brought numerous problems along with the benefits. Changing how the work is done (computerization vs. manual labor) does not change human nature. The rush to be first to come out with a product (whether it be new software or a physical product) will remain inherent in the business equation, and with it the danger of not adequately testing, or overly optimistic expectations of benefit and refusal to admit weaknesses.

If we are talking about gaming software – no big deal. So, getting to the next level of The Legend of Zelda: Breath of the Wild had some glitches; those can be fixed with the next update. But what if we are talking about self-driving cars or medical diagnostic equipment? With no human interaction with the machine (or the software running it), the results could be catastrophic. And what about companies tempted to cut corners in order to bolster profits (remember the Ford Pinto, Takata airbags, and the thousands of other recalls that cost lives)? Even ethical companies can produce defective products because of a lack of knowledge or foresight. Imagine if there were little or no controls in production or end use.

Additionally, as the systems get more complex, the probability of unexpected or unrecognized error modes will also increase at a rapid rate. The Air France Flight 447 crash is a great example of this.

So what can be done to minimize these errors that will undoubtedly occur? There are really 2 options:

1. Preventative, proactive analysis. Safety and equipment failure prevention training will be essential as these new technologies evolve. This must also be extended to software development, since software will be the driving force in producing new technologies. If you wonder how much failure prevention is built into this industry today, just count the number of updates your computer and phone software receive each year. And yes, failure prevention should include vigilance against security breaches. A firm understanding of human error, especially in the software and equipment design phase, is essential to understanding why an error might be introduced and what systems we will need in place to catch or mitigate the consequences of these errors. This obviously requires effective root cause analysis early in the process.

2. The second option is to fully analyze the results of any errors after they crop up. Since failures are harder to detect, as stated in #1, it becomes even more critical that, when an error does cause a problem, we dig deep enough to fix the root cause of the failure. It will not be enough to say, “Yes, that line of code caused this issue. Corrective action: update the line of code.” We must look more deeply into how we allowed the errant line of code to exist, and then do a rigorous generic cause analysis to see if we have this same issue elsewhere in our system.
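
As a concrete illustration of that generic cause step: once an investigation identifies a defective code pattern, the team can sweep the rest of the codebase for other instances of it before they cause problems. Here is a minimal sketch; the regex and paths are hypothetical placeholders, not from any real incident:

    # Generic cause sweep: after a root cause analysis identifies a defective
    # code pattern, search the whole codebase for the same pattern elsewhere.
    # The regex below is a HYPOTHETICAL stand-in for whatever was found.
    import re
    from pathlib import Path

    BAD_PATTERN = re.compile(r"duration\s*=\s*distance\s*//\s*speed")  # placeholder

    def find_other_instances(repo_root: str) -> list[tuple[str, int]]:
        """Return (file, line number) for every match in the repository."""
        hits = []
        for path in Path(repo_root).rglob("*.py"):
            text = path.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), start=1):
                if BAD_PATTERN.search(line):
                    hits.append((str(path), lineno))
        return hits

    for file, lineno in find_other_instances("."):
        print(f"{file}:{lineno}: same defect pattern, review needed")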

With the potential for rapidly-evolving hardware and software systems causing errors, it will be incumbent on companies to have rigorous, effective failure analysis to prevent or minimize the effects of these errors.

Want to learn more about equipment troubleshooting? Attend our Special 2-Day Equifactor® Equipment Troubleshooting and Root Cause Analysis training February 26 and 27, 2018 in Knoxville, Tennessee and plan to stay for the 2018 Global TapRooT® Summit, February 28 to March 2, 2018.
