Category: Investigations

Root Cause Tip: Repeat-Back Strengthens Positive Communication

May 17th, 2018

Misunderstood verbal communication can lead to a serious incident.

Risk Engineer and HSE expert, Jim Whiting, shared this report with us recently highlighting four incidents where breakdowns in positive communications were factors. In each circumstance, an operator proceeded into shared areas without making positive communication with another operator.

Read: Positive communication failures result in collisions.

Repeat-back (sometimes referred to as 3-way communication) can strengthen positive communication. The technique may be required by policy or procedure and reinforced during task training for better compliance.

Repeat-back is used to ensure that the information shared during a work process is clear and complete. In the repeat-back process, the sender initiates the communication using the receiver’s name, the receiver repeats the information back, and the sender acknowledges the accuracy of the repeat-back or repeats the communication if it is not accurate.
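The three-step exchange can be sketched as a tiny simulation (purely illustrative; the names, messages, and the `repeat_back` helper are hypothetical, not part of any TapRooT® product):

```python
def repeat_back(receiver_name, message, channel):
    """Simulate 3-way (repeat-back) communication.

    `channel` is an iterable of what the receiver hears on each attempt,
    modeling a noisy radio. All names here are hypothetical examples.
    """
    sent = f"{receiver_name}, {message}"  # Step 1: sender uses the receiver's name
    for heard in channel:
        repeat = heard                    # Step 2: receiver repeats back what was heard
        if repeat == sent:                # Step 3: sender checks the repeat-back...
            return "Correct"              # ...and acknowledges its accuracy
        # ...or repeats the communication (next iteration) if it is not accurate
    return "Not confirmed"

# Example: the first transmission is garbled; the repeat is heard correctly.
attempts = ["Jim, move to bay 3", "Jim, move to bay 2"]
print(repeat_back("Jim", "move to bay 2", attempts))  # prints "Correct"
```

The loop makes the key design point visible: the exchange is not complete until the sender has positively confirmed what the receiver heard.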

There are many reasons why communications are misunderstood. Workers make assumptions about an unclear message based on their experiences or expectations. A sender may choose poor words for communication or deliver messages that are too long to remember. The message may not be delivered by the sender in the receiver’s primary language. A message delivered in the same language but by a worker from a different geographical region may be confusing because the words do not sound the same across regions.

Can you think of other reasons a repeat-back technique can be helpful? Please comment below.

“It was such a simple mistake!”

May 14th, 2018


When you have a major incident (fire, environmental release, etc.), your investigation will most likely identify several causal factors (CFs) that, had they not occurred, would probably have prevented the incident. They are often relatively straightforward, and TapRooT® does a great job of identifying those CFs and their subsequent root causes.

Sometimes, the simplest problems can be the most frustrating to analyze and fix.  We think to ourselves, “How could the employee have made such a simple mistake?  He just needs to be more careful!”  Luckily, TapRooT® can help even with these “simple” mistakes.

Let’s look at an example. Say you are out on a ship at sea. The vessel takes a bit of a roll, and a door swings shut on one of your employees, catching his finger and causing an injury. Simple problem, right? Maybe the employee should just be more aware of where he puts his hands! If we really want to prevent this in the future, though, we will probably need more effective fixes.

How can we use TapRooT® to figure this out? First of all, it is important to fully document the accident using a SnapCharT®. Don’t skip this just because you think the problem is simple. The SnapCharT® forces you to ask good questions and makes sure you aren’t missing anything. The simple problem may have aspects that you would have missed without fully using this technique. In this example, maybe you find that this door is different from other doors, which have latches to hold them open or handles that make them easier to open. Imagine that this door was a bathroom stall door. It would probably be set up differently than the doors and hatches in other parts of the ship.

So, what are your Causal Factors?  First, I probably would not consider the sudden movement of the ship as a CF.  Remember, the definition of a CF states that it is a mistake or an error that directly leads to the incident. In this case, I think that it is expected that a ship will pitch or roll while underway; therefore, this would not be a CF. It is just a fact. This would be similar to the case where, in Alaska, someone slipped on a snow-covered sidewalk. I would not list that “it was snowing” as a CF.  This is an expected event in Alaska. It would not be under Natural Disaster / Sabotage, either, since snow is something I should be able to reasonably protect against by design.

In this case, I would consider the pitch / roll of the vessel as a normal occurrence.  There is really nothing wrong with the vessel rolling. The only time this would be a problem is if we made some mistake that caused an excessive roll of the vessel, causing the door to unexpectedly slam shut in spite of our normal precautions. If that were the case, I might consider the rolling of the ship to be a CF.  That isn’t the case in this example.

You would probably want to look at 2 other items that come to mind:

1.  Why did the door go shut, in spite of the vessel operating normally?
If we are on a vessel that is expected to move, our doors should probably not be allowed to swing open and shut on their own. There should be latches / shock absorbers / catches that hold the door in position when not being operated. Also, while the door is actually being operated, there should be a mechanism that does not depend on the operator to hold it steady while using the door. I remember on my Navy vessel all of our large hatches had catches and mechanisms that held the doors in place, EXCEPT FOR ONE HEAVY HATCH. We used to tell everyone to “be careful with that hatch, because it could crush you if we take a roll.” We had several injuries to people going through that hatch in rough seas. Looking back on that, telling people to “be careful” was probably not a very strong safeguard.

Depending on what you find here, the root causes could possibly be found under Human Engineering, perhaps “arrangement/placement”, “tools/instruments NI”, “excessive lifting/force”, “controls NI”, etc.

2. Why did the employee have his hand in a place that could cause the door to catch his hand?
We should also take a look to understand why the employee had his hand on the door frame, allowing the door to catch his finger.  I am not advocating, “Tell the employee to be careful and do not put your hand in possible pinch points.” That will not work too well. However, you should take a look and see if we have sufficient ways of holding the door (does it have a conventional door knob? Is it like a conventional toilet stall, with no handle or method of holding the door, except on the edge?). We might also want to check to see if we had a slippery floor, causing the employee to hold on to the edge of the door / frame for support. Lots of possibilities here.

Another suggestion: Whenever I have what I consider a “simple” mistake that I just can’t seem to understand (“How did the worker just fall down the stairs!?”), I find that performing a Critical Human Action Profile (CHAP) can be helpful.  This tool helps me fully understand EXACTLY what was going on when the employee made a very simple yet significant mistake.

TapRooT® works really well when you are trying to understand “simple” mistakes.  It gets you beyond telling the employee to be more careful next time, and allows you to focus on more human performance-based root causes and corrective actions that are much more likely to prevent problems in the future.

Evidence Collection: Two things every investigator should know about scene management

April 17th, 2018

You may not be part of scene management when an incident occurs at your facility but there are two things every investigator should know:

  1. Hazards that are present in the work area and how to handle them. It’s impossible to anticipate every accident that could happen, but we can evaluate the hazards present at our facilities that could affect employees and the community at large, and use that evaluation to structure a scene management plan.
  2. Priorities for evidence collection. The opportunity to collect evidence decreases over time. Here are a few things to keep in mind during, and immediately following, scene management.
    • Fragile evidence goes away.
    • Witnesses forget what they saw.
    • Environmental conditions change, making it harder to understand why an incident occurred.
    • Clean-up and restart begin, changing the scene from its original state.

Learn more by holding our 1-Day Effective Interviewing & Evidence Collection Training at your facility. It is a standalone course but also fits well with our 2-Day TapRooT® Root Cause Analysis Training. Contact me for details: carr@taproot.com.

 

You’re invited to Facebook Live for Wednesday lunch

April 16th, 2018

We invite you to tune into TapRooT®’s Facebook Live every Wednesday. You’ll be joining TapRooT® professionals as we bring you a contemporary, workplace-relevant topic. Put a reminder on your calendar, in your phone, or stick a post-it on your forehead to watch TapRooT®’s Facebook Live this week for another terrific discussion and for news you can use. We look forward to being with you on Wednesdays!

Here’s how to connect with us for Wednesday’s Facebook Live:

Where? https://www.facebook.com/RCATapRooT/

When? Wednesday, April 18, 2018

What Time? Noon Eastern | 11:00 a.m. Central | 10:00 a.m. Mountain | 9:00 a.m. Pacific

If you missed last week’s Facebook Live session with TapRooT® co-founder Mark Paradies and Barb Carr, editorial director at TapRooT®, as they discussed methodologies for root cause analysis in incident investigation, you can catch up on the discussion via the Vimeo below. You may want to peruse Mark’s article, Scientific Method and Root Cause Analysis, to supplement this significant learning experience. Feel free to comment or ask questions on our Facebook page.

The Scientific Method In Relation To Root Cause Analysis from TapRooT® Root Cause Analysis on Vimeo

NOTE: Be sure to save the date for the 2019 Global TapRooT® Summit: March 11-15, in the Houston, TX area (La Torretta Lake Resort)!

How many investigations are enough?

April 16th, 2018

 

I’d like you to think about this scenario at work.  You’ve just sent your team to Defensive Driving School, and made sure they were trained and practiced on good driving skills.  They were trained on how to respond when the vehicle is sliding, safe following distances, how to respond to inclement weather conditions, etc.

Now that they’re back at work, how many managers would tell their recently-trained employees, “I’m glad we’ve provided you with additional skills to keep yourself safe on those dangerous roads.  Now, I only want you to apply that training when you’re in bad weather conditions.  On sunny days, please don’t worry about it.”  Would you expect them to ONLY use those skills when the roads are snow-covered?  Or ONLY at rush hour?  I think we would all agree that this would be a pretty odd thing to tell your team!

Yet, that’s what I often hear!

I teach TapRooT® courses all over the world. We normally start off the class by asking the students why they’re at the course and what they are expecting to get from the class. I often hear something that goes like this:

“I’m here to get a more structured and accurate root cause analysis process that is easy for my staff to use and gets repeatable results.  I don’t expect to use TapRooT® very often because we don’t have that many incidents,  but when we do, we want to be using a great process.”

Now, don’t get me wrong, I appreciate the sentiment (we don’t expect to have many serious incidents at our company), and we can definitely meet all of the other criteria.  However, it does get a little frustrating to hear that some companies are going to reserve using this fantastic product to only a few incidents each year.  Doesn’t that seem to be a waste of terrific training?  Why would we only want our employees to use their training on the big stuff, but not worry about using that same great training on the smaller stuff?

There are a few reasons I can think of for this misconception about when to use TapRooT®:

  • Some managers honestly believe that they don’t have many incidents.  Trust me, they are not looking very hard! Our people (including ourselves) are making mistakes every day.  Wouldn’t it be nice if we went out there, found those small mistakes, and applied TapRooT® to find solid root causes and corrective actions to fix those small issues before they became large incidents?
  • Some people think that it takes too long to do a good RCA.  Instead, they spend time on smaller problems using an inferior investigation technique that doesn’t fix anything anyway.  If you’re going to take time to perform some type of RCA, why waste any time at all on a system that gives you poor results?
  • Some people don’t realize that all training is perishable.  Remember those defensive driving skills?  If you never practice them, do you ever get good at them?

I recognize that you can’t do an RCA on every paper cut that occurs at your facility.  Nobody has the resources for that.  So there must be some level of “incident” at which it makes sense to perform a good analysis.  So, how do we figure out this trip point?

Here are some guidelines and tips you can follow to help you figure out what level of problem should be investigated using TapRooT®:

  • First of all, we highly recommend that your investigators perform one TapRooT® investigation at least every month.  Any longer than that, and your investigation skills start becoming dull.  Think about any skill you’ve learned.  “Use it, or lose it.”
    • Keep in mind that this guideline is for each investigator.  If you have 10 investigators, each one should be involved in using TapRooT® at least monthly.  This doesn’t have to be a full investigation, but they should use some of the tools or be involved in an investigation at least every month.
  • Once you figure out how many investigations you should perform each year to keep your team proficient, you can then figure out what level of problem requires a TapRooT® investigation.  Here is an example.
    • Let’s say you have 3 investigators at your company.  You would want them to perform at least one investigation each month.  That would be about 36 investigations each year.  If you have about 20 first aid cases each year, that sounds like a good level to initiate a TapRooT® investigation.  You would update your company policy to say that any first aid case (or more serious) would require a TapRooT® investigation.
    • You should also do the same with other issues at the company.  You might find that your trigger points would be:

      • Any first aid report or above
      • Any reportable environmental release
      • Any equipment damage worth more than $100,000
    • When you add them all up, they might be about 36 investigations each year.  You would adjust these levels to match your minimum number to maintain proficiency.
  • At the end of each year, you should do an evaluation of your investigations.  Did we meet our goals?  Did each investigator only do 4 investigations this year?  Then we wasted some opportunities.  Maybe we need to lower our trip points a bit.  Or maybe we need to do more audits and observations, with a quick root cause analysis of those audit results.  Remember, your goal is to have each investigator use TapRooT® in some capacity at least once each month.
  • Note that all of this should be specified in your company’s investigation policy.  Write it down so that it doesn’t get lost.
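The cadence arithmetic in the example above can be sketched in a few lines. The investigator count and first-aid figure come from the example; the other trigger-level estimates are hypothetical placeholders you would replace with your own incident history:

```python
# A sketch of the proficiency-cadence arithmetic described above.

INVESTIGATORS = 3
MIN_PER_INVESTIGATOR_PER_YEAR = 12   # at least one investigation per month

min_for_proficiency = INVESTIGATORS * MIN_PER_INVESTIGATOR_PER_YEAR   # 36

# Estimated annual incident counts at each candidate trigger level:
trigger_estimates = {
    "first aid case or above": 20,          # from the article's example
    "reportable environmental release": 6,  # hypothetical
    "equipment damage > $100,000": 10,      # hypothetical
}

expected = sum(trigger_estimates.values())
print(f"Minimum to stay proficient: {min_for_proficiency}")
print(f"Expected at current trip points: {expected}")
if expected < min_for_proficiency:
    print("Consider lowering the trip points or adding audit/observation RCAs.")
```

The year-end evaluation described above is just this comparison: if expected investigations fall short of the proficiency minimum, lower the trip points.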

Performing TapRooT® investigations only on large problems will give you great results.  However, you are missing the opportunity to fix smaller problems early, before they become major issues.

TapRooT®: It’s not just for major issues anymore!

The Scientific Method In Relation To Root Cause Analysis

April 13th, 2018

Did you miss last week’s Facebook Live session with TapRooT® co-founder Mark Paradies and Barb Carr, editorial director at TapRooT®, as they discussed methodologies for root cause analysis in incident investigation? Here’s an opportunity to catch up on the discussion, as Mark and Barb distill the disciplines and factors that historically have been involved in solving complex problems. Also, peruse Mark’s article, Scientific Method and Root Cause Analysis, to supplement this significant learning experience. Feel free to comment or ask questions on our Facebook page.

The Scientific Method In Relation To Root Cause Analysis from TapRooT® Root Cause Analysis on Vimeo

Tune into TapRooT®’s Facebook Live every Wednesday. You’ll be joining TapRooT® professionals as we bring you a workplace-relevant topic. Put a reminder on your calendar or in your phone to watch TapRooT®’s Facebook Live this week for another terrific discussion and for news you can use. We look forward to being with you on Wednesdays!

Here’s the info you need to connect with us for our next Facebook Live:

Where? https://www.facebook.com/RCATapRooT/

When? Wednesday, April 18, 2018

What Time? Noon Eastern | 11:00 a.m. Central | 10:00 a.m. Mountain | 9:00 a.m. Pacific

NOTE: Save the date for 2019 Global TapRooT® Summit: March 11-15, in the Houston, TX area (La Torretta Lake Resort)!

Monday Accidents & Lessons Learned: When retrofitting does not evaluate risks

April 9th, 2018

Bound for London Waterloo, the 2G44 train was about to depart platform 2 at Guildford station. Suddenly, at 2:37 pm, July 7, 2017, an explosion occurred in the train’s underframe equipment case, ejecting debris onto station platforms and into a nearby parking lot. Fortunately, there were no injuries to passengers or staff; damage was contained to the train and station furnishings. It could have been much worse.

The cause of the explosion was an accumulation of flammable gases within the traction equipment case underneath one of the train’s coaches. The gases were generated after the failure of a large electrical capacitor inside the equipment case; the capacitor failure was due to a manufacturing defect.

The train had recently been retrofitted with a modern version of the traction equipment, and the replacement equipment included the failed capacitor. The project team overseeing the design and installation of the new equipment did not consider the risk of an explosion due to a manufacturing defect within the capacitor. As a result, there were no preventive engineering safeguards.

The Rail Accident Investigation Branch (RAIB) has recommended a review of the design of UK trains’ electric traction systems to ensure adequate safeguards are in place to offset any identified anomalies and to prevent similar explosions. Learn about the six learning points recommended by the RAIB for this investigation.

Use the TapRooT® System to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Scientific Method and Root Cause Analysis

April 4th, 2018


I had someone tell me that the ONLY way to do root cause analysis was to use the scientific method. After all, this is the way that all real science is performed.

Being an engineer (rather than a scientist), I had a problem with this statement. After all, I had done or reviewed hundreds (maybe thousands?) of root cause analyses and I had never used the scientific method. Was I wrong? Is the scientific method really the only or best answer?

First, to answer this question, you have to define the scientific method. And that’s the first problem. Some say the scientific method was invented in the 17th century and was the reason that we progressed beyond the dark ages. Others claim that the terminology “scientific method” is a 20th-century invention. But, no matter when you think the scientific method was invented, there are a great variety of methods that call themselves “the scientific method.” (Google “scientific method” and see how many different models you can find. The one presented above is an example.)

So let’s just say the scientific method that the person was insisting was the ONLY way to perform a root cause analysis required the investigator to develop a hypothesis and then gather evidence to either prove or disprove the hypothesis. That’s commonly part of most methods that call themselves the scientific method.

What’s the problem with this hypothesis-testing model? People don’t do it very well. There’s even a scientific term for the trouble people have disproving their own hypotheses. It’s called CONFIRMATION BIAS. You can Google the term and read for hours. But the short description of the problem is that when people develop a hypothesis they believe in, they tend to gather evidence that proves what they believe and disregard evidence that is contrary to their hypothesis. This is a natural human tendency – think of it like breathing. You can tell someone not to breathe, but they will breathe anyway.

What did my friend say about this problem with the scientific method? That it could be overcome by teaching people that they must disprove all other theories and also look for evidence that disproves their own theory.

The second part of this answer is like telling people not to breathe. But what about the first part of the solution? Could people develop competing theories and then disprove them to prove that there was only one way the accident could have occurred? Probably not.

The problem with developing all possible theories is that your knowledge is limited. And, of course, how long would it take if you did have unlimited knowledge to develop all possible theories and prove or disprove them?

The biggest problem that accident investigators face is limited knowledge.

We used to take a poll at the start of each root cause analysis class that we taught. We asked:

“How many of you have had any type of formal training
in human factors or why people make human errors?”

The answer was always less than 5%.

Then we asked:

“How many of you have been asked to investigate
incidents that included human errors?”

The answer was always close to 100%.

So how many of these investigators could hypothesize all the potential causes for a human error and how would they prove or disprove them?

That’s one simple reason why the scientific method is not the only way, or even a good way, to investigate incidents and accidents.

Need more persuading? Read these articles on the problems with the scientific method:

The End of Theory: The Data Deluge Makes The Scientific Method Obsolete

The Scientific Method is a Myth

What Flaws Exist Within the Scientific Method?

Is the Scientific Method Seriously Flawed?

What’s Wrong with the Scientific Method?

Problems with “The Scientific Method”

That’s just a small handful of the articles out there.

Let me assume that you didn’t read any of the articles. Therefore, I will provide one convincing example of what’s wrong with the scientific method.

Isaac Newton, one of the world’s greatest mathematicians, developed the universal law of gravity. Supposedly he did this using the scientific method. And it worked on apples and planets. The problem is, when atomic and subatomic matter was discovered, the “law” of gravity didn’t work. There were other forces that governed subatomic interactions.

Enter Albert Einstein, relativity, and quantum physics. A whole new set of laws (or maybe you call them “theories”) that ruled the universe. These theories were supported by the scientific method. But what are we discovering now? Those theories aren’t “right” either. There are things in the universe that don’t behave the way current theory predicts. Even Einstein was wrong!

So, if two of the smartest people around – Newton and Einstein – used the scientific method to develop answers that were wrong but that most everyone believed … what chance do you and I have to develop the right answer during our next incident investigation?

Now for the good news.

Being an engineer, I didn’t start with the scientific method when developing the TapRooT® Root Cause Analysis System. Instead, I took an engineering approach. But you don’t have to be an engineer (or a human factors expert) to use it to understand what caused an accident and what you can do to stop a future similar accident from happening.

Being an engineer, I had my fair share of classes in science. Physics, math, and chemistry are all part of an engineer’s basic training. But engineers learn to go beyond science to solve problems (and design things) using models that have limitations. A useful model can be properly applied by an engineer to design a building, an electrical transmission network, a smartphone, or a 747 without understanding the limitations of quantum mechanics.

Also, being an engineer, I found that the best college course I ever took for understanding accidents wasn’t an engineering course. It was a course on basic human factors. A course that very few engineers take.

By combining the knowledge of high reliability systems that I gained in the Nuclear Navy with my knowledge of engineering and human factors, I developed a model that could be used by people without engineering and human factors training to understand what happened during an incident, how it happened, why it happened, and how it could be prevented from happening again. We have been refining this model (the TapRooT® System) for about thirty years – making it better and more usable – using the feedback from tens of thousands of users around the world. We have seen it applied in a wide variety of industries to effectively solve equipment and human performance issues to improve safety, quality, production, and equipment reliability. These are real world tests with real world success (see the Success Stories at this link).

So, the next time someone tells you that the ONLY way to investigate an incident is the scientific method, just smile and know that they may have been right in the 17th century, but there is a better way to do it today.

If you don’t know how to use the TapRooT® System to solve problems, perhaps you should attend one of our courses. There is a basic 2-Day Course and an advanced 5-Day Course. See the schedule for public courses HERE. Or CONTACT US about having a course at your site.

How Safe Must Autonomous Vehicles Be?

April 3rd, 2018

Tesla is under fire for the recent crash of their Model X SUV, and the subsequent fatality of the driver. It’s been confirmed that the vehicle was in Autopilot mode when the accident occurred. Both Tesla and the NTSB are investigating the particulars of this crash.

PHOTO: PUBLISHED CREDIT: KTVU FOX 2/REUTERS.

I’ve read many of the comments about this crash, in addition to previous crash reports. It’s amazing how much emotion is poured into these comments. I’ve been trying to understand the human performance issues related to these crashes, and I find I must take special note of the human emotions that are attached to these discussions.

As an example, let’s say that I develop a “Safety Widget™” that is attached to all of your power tools. This widget raises the cost of your power tools by 15%, and it can be shown that this option reduces tool-related accidents on construction sites by 40%.  That means, on your construction site, if you have 100 incidents each year, you would now only have 60 incidents if you purchase my Safety Widget™.  Would you consider this to be a successful purchase?  I think most people would be pretty happy to see their accident rates reduced by 40%!

Now, what happens when you have an incident while using the Safety Widget™? Would you stop using the Safety Widget™ the first time it did NOT stop an injury? I think we’d still be pretty happy that we would prevent 40 incidents at our site each year. Would you still be trying to reduce the other 60 incidents each year? Of course. However, I think we’d keep right on using the Safety Widget™, and continue looking for additional safeguards to put in place, while trying to improve the design of the original Safety Widget™.

This line of thinking does NOT seem to be true for autonomous vehicles. For some reason, many people seem to be expecting that these systems must be perfect before we are allowed to deploy them. Independent reviews (NOT by Tesla) have shown that, on a per driver-mile basis, Autopilot systems reduce accidents by 40% over normal driver accident rates. In the U.S., we experience about 30,000 fatalities each year due to driver error. Shouldn’t we be happy that, if everyone had an autonomous vehicle, we would be saving 12,000 lives every year? The answer to that, you would think, would be a resounding “YES!” But there seems to be a much more emotional content to the answer than straight scientific data would suggest.
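The arithmetic behind the Safety Widget™ and fatality figures above is simple to check (illustrative only, using the article’s numbers):

```python
# Checking the figures quoted above (illustrative arithmetic only).

# Safety Widget example: 100 incidents per year, 40% reduction.
incidents_per_year = 100
remaining = round(incidents_per_year * (1 - 0.40))   # 60 incidents still occur

# Autonomous-vehicle example: ~30,000 U.S. fatalities per year from driver error.
fatalities_per_year = 30_000
lives_saved = round(fatalities_per_year * 0.40)      # 12,000 lives per year

print(remaining, lives_saved)  # prints "60 12000"
```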

I think there may be several human factors in play as people respond to this question:

  1. Over- and under-trust in technology: I was talking to one of our human factors experts, and he mentioned this phenomenon. Some people under-trust technology in general and, therefore, will find reasons not to use it, even when it is proven to work. Others will over-trust the technology, as evidenced by the Tesla drivers who watch movies or do not respond to system warnings to maintain manual control of the vehicle.
  2. “I’m better than other drivers. Everyone else is a bad driver; while they may need assistance, I drive better than any autonomous gadget.” I’ve heard this a lot. I’m a great driver; everyone else is terrible. It’s a proven fact that most people have an inflated opinion of their own capabilities compared to the “average” person. If you were to believe most people, each individual (when asked) is better than average. This would make it REALLY difficult to calculate an average, wouldn’t it?
  3. It’s difficult to calculate the unseen successes. How many incidents were avoided by the system? It’s hard to see the positives, but VERY easy to see the negatives.
  4. Money. Obviously, there will be some people put out of work as autonomous vehicles become more prevalent. Long-haul truckers will be replaced by autopilot systems. Cab drivers, delivery vehicle drivers, Uber drivers, and train engineers are all worried about their jobs, so they are more likely to latch onto any negative that would help them maintain their relevancy. Sometimes this is done subconsciously, and sometimes it is a conscious decision.

Of course, we DO have to monitor and control how these systems are rolled out. We can’t have companies roll out inferior systems that can cause harm due to negligence and improper testing. That is one of the main purposes of regulation and oversight.

However, how safe is “safe enough?” Can we use a system that isn’t perfect, but still better than the status quo? Seat belts don’t save everyone, and in some (rare) cases, they can make a crash worse (think of Dale Earnhardt, or a crash into a lake with a stuck seat belt). Yet, we still use seat belts. Numerous lives are saved every year by restraint systems, even though they aren’t perfect. How “safe” must an autonomous system be in order to be accepted as a viable safety device? Are we there yet? What do you think?

Effective Listening Skills Inventory for Investigative Interviews

March 29th, 2018

Do you ever interrupt someone because you fear “losing” what you want to say? Do you become momentarily engrossed in your thoughts, then return to reality to find someone awaiting your answer to a question you didn’t hear? Most of us are at fault for interrupting or being distracted from time to time. Particularly, though, in an interview environment where focus is key, distractions or interruptions can be detrimental to the interview.

Watch, listen, learn from this week’s conversation between Barb Carr and Benna Dortch:

Effective Listening Skills Inventory For Investigation Interviews from TapRooT® Root Cause Analysis on Vimeo.

Now, learn how to take an inventory of your listening skills. Internalizing these suggestions can recalibrate your thought and communication processes. Your work and your communication style will reflect the changes you’ve made.

Feel free to comment or ask questions on Facebook. We will respond!

Bring your lunch next Wednesday and join TapRooT®’s Facebook Live session. You’ll pick up valuable, workplace-relevant takeaways from an in-depth discussion between TapRooT® professionals. We’ll be delighted to have your company.

Here’s the scoop for tuning in next week:

Where? https://www.facebook.com/RCATapRooT/

When? Wednesday, April 4, 2018

What Time? Noon Eastern | 11:00 a.m. Central | 10:00 a.m. Mountain | 9:00 a.m. Pacific

Thank you for joining us!

Monday accidents & lessons learned: Does what you see match what is happening?

March 26th, 2018 by


An incident report from NASA’s Aviation Safety Reporting System (ASRS) gives insight into a pilot’s recurring, problematic observation. Through distraction and confusion, a Bonanza pilot misperceived the runway edge and centerline lights as they cycled off and on. Air Traffic Control (ATC) let him know that the centerline lights were constant, not blinking.

The pilot summarized his experience, “I was transiting the final approach path of . . . Runway 16R and observed the runway edge and centerline lights cycle on and off . . . at a rate of approximately 1 per second. It was very similar to the rate of a blinking traffic light at a 4-way vehicle stop. The [3-blade] propeller speed was 2,400 RPM. This was observed through the entire front windscreen and at least part of the pilot side window. I queried ATC about the reason for the runway lights blinking and was told that they were not blinking. It was not immediately obvious what was causing this, but I did later speculate that it may have been caused by looking through the propeller arc.

“The next day [during] IFR training while on the VOR/DME Runway 16R approach, we observed the runway edge and centerline lights cycle on and off . . . at a rate slightly faster than 1 per second. The propeller speed was 2,500 RPM. I then varied the propeller speed and found that, at 2,700 RPM, the lights were observed strobing at a fairly high rate and, at 2,000 RPM, the blinking rate slowed to less than once per second. This was observed through the entire approach that terminated at the Missed Approach Point (MAP). The flight instructor was also surprised and mentioned that he had not seen this before, but also he doesn’t spend much time behind a 3-blade propeller arc.

“I would speculate that the Pulse Width Modulation (PWM) dimming system of the LED runway lights was phasing with my propeller, causing the observed effect. I would also speculate that the effect would . . . significantly differ at other LED dimming settings . . . and behind a 2-blade propeller.

“I found the effect to be entirely confusing and distracting and would not want to make a landing in such conditions.”
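The pilot’s speculation can be sanity-checked with simple arithmetic: a propeller’s blade-pass frequency is RPM/60 × blade count, and if the LED dimming PWM runs near that frequency, the perceived flicker is roughly the difference (beat) between the two. The sketch below uses a hypothetical 121 Hz PWM frequency purely for illustration (the report gives no actual value), and real-world aliasing also depends on harmonics and duty cycle:

```python
def blade_pass_hz(rpm: float, blades: int) -> float:
    """How many times per second a blade crosses a fixed line of sight."""
    return rpm / 60.0 * blades

def apparent_beat_hz(rpm: float, blades: int, pwm_hz: float) -> float:
    """First-order beat between blade-pass frequency and LED PWM frequency."""
    return abs(blade_pass_hz(rpm, blades) - pwm_hz)

# Hypothetical 121 Hz PWM dimming frequency (an assumption, not from the report).
for rpm in (2000, 2400, 2500, 2700):
    bpf = blade_pass_hz(rpm, 3)          # 2,400 RPM x 3 blades -> 120 Hz
    beat = apparent_beat_hz(rpm, 3, 121.0)
    print(f"{rpm} RPM: blade-pass {bpf:.0f} Hz, apparent flicker ~{beat:.1f} Hz")
```

With that assumed PWM rate, 2,400 RPM yields a roughly 1 Hz flicker, consistent with the pilot’s “approximately 1 per second” observation; other RPM/PWM combinations would produce different apparent rates.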

The TapRooT® System, Training, and Software have a dedicated history of R&D, human performance, and improvement. Learn with our best incident investigation and root cause analysis systems.

Root Cause Analysis Audit Idea

March 22nd, 2018 by


In the past couple of years, has your company had a major accident?

If it did, did you check whether there were previous smaller incidents that should have been learned from, and whether effective corrective actions could have prevented the major accident?

I don’t think I have ever seen a major accident that didn’t have precursors that could have been learned from to improve performance. The failure to learn and improve is a problem that needs a solution.

In the TapRooT® root cause analysis of a major accident, the failure to fix previous precursor incidents should get you to the root cause of “corrective action NI” if you failed to implement effective corrective actions from the previous investigations.

If this seems like a new idea at your facility, here is something you might try. Go back to your last major accident. Review your database to look for similar precursor incidents. If there aren’t any, you have identified a problem: you aren’t getting good reporting of minor incidents with potentially serious consequences.

If you find previous incidents, it’s time for an audit. Review the investigations to determine why the previous corrective actions weren’t effective. This should produce improvements to your root cause analysis processes, training, reviews, …

Don’t wait for the next big accident to improve your processes. You have all the data that you need to start improvements today!

Are you a Proficient TapRooT® Investigator?

March 19th, 2018 by


I teach a lot of TapRooT® courses all over the world, to many different industries and departments.  I often get the same questions from students during these courses.  One of the common questions is, “How do I maintain my proficiency as a TapRooT® investigator?”

This is a terrific question, and one that you should think carefully about.  To get a good answer, let’s look at a different example:

Let’s say you’ve been tasked with putting together an Excel spreadsheet for your boss.  It doesn’t have to be anything too fancy, but she did ask that you include pivot tables in order to easily sort the data in multiple ways.  You decide to do a quick online course on Excel to brush up on the newest techniques, and you put together a great spreadsheet.

Now, if your boss asked you to produce another spreadsheet 8 months from now, what would happen?  You’d probably remember that you can use pivot tables, but you’ve probably forgotten exactly how they work.  You’ll most likely have to relearn the technique, looking back over your last spreadsheet, or maybe hitting YouTube for a refresher.  It would have been nice if you had worked on a few spreadsheets in the meantime to maintain the skills you learned from your first Excel course.  And what happens if Microsoft comes out with a new version of Excel?

Performing TapRooT® investigations is very similar.  The techniques are not difficult; they can be used by pretty much anyone, once they’ve been trained.  However, you have to practice these skills to get good at them and maintain your proficiency.  When you leave your TapRooT® course, you are ready to conduct your first investigation, and those techniques are still fresh.  If you wait 8 months before you actually use TapRooT®, you’ll probably need to refresh your skills.

In order to remain proficient, we recommend the following:

  • Obviously, you need to attend an initial TapRooT® training session.  We would not recommend trying to learn a technique by reading a book.  You need practice and guidance to properly use any advanced technique.
  • After your class, we recommend you IMMEDIATELY go perform an investigation, probably within the next 2 weeks or so.  You need to quickly use TapRooT® in your own work environment.  You need to practice it in your own conference room, know where your materials will be kept, know who you’re going to contact, etc.  Get the techniques ingrained into your normal office routine right away.
  • We then recommend that you use TapRooT® at least every month.  That doesn’t necessarily mean that you must perform a full incident investigation monthly, but maybe just use a few of the techniques.  For example, you could perform an audit and run those results through the Root Cause Tree®.  Anything to keep you proficient in using the techniques.
  • Refresher training is also a wonderful idea.  We would recommend attending a refresher course every 2 years to make sure you are up to speed on the latest software and techniques.  If you’ve attended a 2-Day TapRooT® course, maybe a 5-Day Advanced Team Leader Course would be a good choice.
  • Finally, attending the Annual Global TapRooT® Summit is a great way to keep up to speed on your TapRooT® techniques.  You can attend a specialized Pre-Summit course (Advanced Trending Techniques, or Equifactor® Equipment Troubleshooting, or maybe an Evidence Collection course), and then attend a Summit track of your choosing.

There is no magic here.  The saying, “Use it, or Lose it” definitely applies!

Miami Bridge Collapse – Is Blame Part of Your Investigation Policy?

March 16th, 2018 by



I was listening to a news report on the radio this morning about the pedestrian bridge collapse in Miami. At one point, they were interviewing Florida Governor Rick Scott.  Here is what he said:

“There will clearly be an investigation to find out exactly what happened and why this happened…”

My ears perked up, and I thought, “That sounds like a good start to a root cause investigation!”

And then he continued:

“… and we will hold anybody accountable if anybody has done anything wrong,”

Bummer.  His statement had started out so well, and then it went directly to blame in the same breath.  He had just arrived on the scene.  Before anyone had a good feel for the actual circumstances, it was already assumed that the corrective actions would pinpoint blame and dish out the required discipline.

This is pretty standard for government and public figures, so I wasn’t too surprised.  However, it got me thinking about our own investigations at our companies.  Do we start out our investigations with the same expectations?  Do we begin with the good intentions of understanding what happened and finding true root causes, but then have this expectation that we need to find someone to blame?

We as companies owe it to ourselves and our employees to do solid, unbiased incident investigations.  Once we get to reliable root causes, our next step should be to put fixes in place that answer the question, “How do we prevent these root causes from occurring in the future?  Will these corrective actions be effective in preventing the mistakes from happening again?”  In my experience, firing the employee / supervisor / official in charge rarely leads to changes that will prevent the tragedy from happening again.


ASSE Safety 2018 Flash Session on Investigative Interviewing

March 15th, 2018 by


I’m excited to be selected as a flash session speaker for ASSE Safety 2018. My talk “Top 3 Tips for Improving your Investigative Interviewing Skills” is planned for Monday, June 4, 2018 at 2:45 p.m. in the Exhibit Hall, Booth #2165. I hope to see you there! Please stop by the TapRooT® Booth and talk to Dave Janney and me while you’re at the conference.

Protection Against Hydrogen Sulfide

March 6th, 2018 by

On January 16, 2017, a private construction company sent four utility workers to handle complaints about sewage backup in Key Largo, Florida. Three of the four workers descended into the 15-foot-deep drainage hole, and within seconds all voice communication with them was lost.

The Key Largo Fire Department was the first to respond to the scene. Leonardo Moreno, a volunteer firefighter, tried to enter the hole with his air tank but failed. So he descended without his air tank and lost consciousness within seconds of entering the drainage hole. Eventually, another firefighter was able to enter the hole with an air tank and pull Moreno out. The three construction workers weren’t so lucky: all of them died from hydrogen sulfide poisoning, and Moreno was left in critical condition.

Unfortunate events like this are completely avoidable. Comment below how this could have been avoided/prevented by using TapRooT® proactively.

To learn more about this tragic incident, click here.

Monday Accident & Lessons Learned: Runaway trailer investigation provides context for action sequence

February 26th, 2018 by

The Rail Accident Investigation Branch (RAIB) investigated and released its report on a runaway trailer that occurred May 28, 2017, in England’s Hope, Derbyshire. The incident occurred when a trailer propelled by a small rail tractor became detached and traveled approximately one mile before coming to a stop. RAIB found that a linchpin had been erroneously inserted.

Chief Inspector of Rail Accidents, Simon French, remarked, “The whole episode, as our report shows, was a saga arising from lack of training, care, and caution.” Read the RAIB report, which delineates the circumstances and recommendations, here. Enroll in a TapRooT® course to gain the training necessary to investigate and, further, to prevent incidents.

‘Equipment Failure’ is the cause?

February 22nd, 2018 by

Drone view of tank farm fire Photo: West Fargo Fire Department


On Sunday, there was a diesel fuel oil fire at a tank farm in West Fargo, ND. About 1,200 barrels of diesel leaked from the tank.  The fire appears to have burned for about 9 hours.  Fire departments from the local airport and the local railway company assisted, along with drone support from the National Guard.  Nearby residents were evacuated.  Soil remediation is in progress, and operations at the facility have resumed.  Read more about the story here.

The fire chief said it looks like there was a failure of the piping and pumping system for the tank. He said that the owners of the tank are investigating. However, one item caught my attention. He said, “In the world of petroleum fires, it wasn’t very big at all. It might not get a full investigation.”

This is a troublesome statement.  Since it wasn’t a big, major fire, and no one was seriously hurt, it doesn’t warrant an investigation.  However, just think of all the terrific lessons that could be discovered and learned from.  How major must a fire be in order to get a “full investigation?”

I often see people minimize issues that were just “equipment failures.”  There isn’t anyone to blame, no bad people to fire; it was just bad equipment.  We’ll just chalk this one up to “equipment failure” and move on.  In this case, that mindset can cause people to ignore the entire accident, assuming that determining it was an equipment failure is as deep as we need to go.

Don’t get caught in this trap.  While I’m sure the tank owner is going to go deeper, I encourage the response teams to do their own root cause analyses to determine whether their response was adequate, whether notifications were correct, whether they had reliable lines of communication with external agencies, etc.  It’s a great opportunity to improve, even if it was only “equipment failure,” and even if you are “only” the response team.

Monday Accident & Lesson Learned: Quick action by mother prevents toddler from falling through hole in moving train

February 19th, 2018 by


A child was saved from death when his mother grabbed him before he fell through a hole in a moving train.  A 39-page report published by the Rail Accident Investigation Branch revealed that “The child entered the toilet, and as the door opened and the child stepped through it, he fell forward because the floor was missing in the compartment he had entered.” Read the report here.

Top 3 Reasons Corrective Actions Fail & What to Do About It

February 15th, 2018 by

Ken Reed and Benna Dortch discuss the three top reasons corrective actions fail and how to overcome them. Don’t miss this informative video! It is a 15-minute investment of time that will change the way you think about implementing fixes and improving performance at your facility.

Support your Investigation Results with Solid Evidence Collection

February 13th, 2018 by

We have a couple of seats left in the “TapRooT® Evidence Collection and Interviewing Techniques to Sharpen Investigation Skills” 2-day Pre-Summit course, February 26-27 in Knoxville, Tennessee. My co-instructor, Reb Brickey, and I are excited to share methods of collection that will help you stop assumptions in their tracks!

Evidence collection techniques help investigators focus on the facts of an investigation. We will talk about pre-planning and the different types of evidence to keep in mind, and we will spend a good part of the second day learning about and practicing interviewing techniques. The course begins with an interesting case study; attendees break into investigation teams and work toward solving the evidence collection puzzle for the accident presented.

Class size is limited to ensure an ideal learning environment, so register now for the 2-day course only or for the 2-day course plus the 3-day Global TapRooT® Summit.


Stop Assumptions in Their Tracks!

February 13th, 2018 by

Assumptions can cause investigators to reach unproven conclusions.

But investigators often make assumptions without even knowing that they were assuming.

So how do you stop assumptions in their tracks?

When you are drawing your SnapCharT®, you need to ask yourself …

How do I know that?

If you have two ways to verify an Event or a Condition, you probably have a FACT.

But if you have no ways to prove something … you have an assumption.

What if you only have one source of information? You have to evaluate the quality of the source.

What if one eye witness told you the information? Probably you should still consider it an assumption. Can you find physical evidence that provides a second source?

What if you just have one piece of physical evidence? You need to ask how certain you are that this piece of physical evidence can only have one meaning or one cause.

Dashed Boxes

Everything that can’t be proven to be a fact should be in a dashed box or dashed oval on your SnapCharT®. And on the boxes or ovals that you are certain about? List your evidence that proves they are facts.

Now you have stopped assumptions in their tracks!
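The verification rule above boils down to counting independent sources: two or more and you likely have a fact (solid box); one and you must evaluate the source; none and it stays an assumption (dashed box). A minimal sketch of that rule, using function and label names of my own invention rather than anything from TapRooT® itself:

```python
def classify_evidence(item: str, sources: list[str]) -> str:
    """Apply the two-source rule of thumb to a SnapCharT(R) Event or Condition.

    Two or more independent sources -> probably a fact (solid box).
    Exactly one source -> evaluate its quality; keep the box dashed for now.
    No sources -> an assumption (dashed box).
    """
    if len(sources) >= 2:
        return f"fact (solid box): supported by {', '.join(sources)}"
    if len(sources) == 1:
        return f"tentative (dashed box): evaluate quality of '{sources[0]}'"
    return "assumption (dashed box): no supporting evidence yet"

print(classify_evidence("Valve was open", ["operator interview", "DCS log"]))
print(classify_evidence("Alarm was silenced", []))
```

The point of the sketch is simply that the fact/assumption call is mechanical once you have honestly listed your sources; the hard investigative work is gathering that second, independent source.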

Monday Accident & Lessons Learned: The Lac-Mégantic rail disaster

February 12th, 2018 by

The Lac-Mégantic rail disaster occurred when an unattended 74-car freight train carrying Bakken Formation crude oil rolled down a 1.2% grade from Nantes and derailed, resulting in the fire and explosion of multiple tank cars. Forty-two people were confirmed dead, with five more missing and presumed dead. More than 30 buildings were destroyed. The death toll of 47 makes it the fourth-deadliest rail accident in Canadian history.


Why You Should Use the TapRooT® Process for Smaller Investigations

February 7th, 2018 by

“If the hammer is your only tool, all of your problems will start looking like nails.”

Per Ohstrom shares how TapRooT® is used to investigate smaller incidents by demonstrating the methodology. Are you using the 5-Whys to investigate these types of incidents? The 5-Whys won’t take you beyond your own knowledge. Find out how TapRooT® will!

How to Make Incident Investigations Easier

January 31st, 2018 by

Ken Reed talks about the differences between an investigation for a low-to-moderate incident and a major incident. Find out how TapRooT® makes both types of investigation easier to manage.

Want to learn how to investigate a major/minor incident with all of the advanced tools? Sign up for an upcoming 5-day training!

Want to start with just the essential skills for performing a root cause analysis on a minor or major investigation? It’s a great place to start with a minor investment of time. Sign up for an upcoming 2-day training!
