Monday Accidents & Lessons Learned: We’re Not Off the Runway Yet

April 16th, 2018

NASA’s Aviation Safety Reporting System (ASRS) periodically shares contemporary experiences to build aviation wisdom, capture lessons learned, and encourage a freer flow of incident reporting. ASRS receives, processes, and analyzes voluntarily submitted reports from pilots, air traffic controllers, flight attendants, maintenance personnel, dispatchers, ground personnel, and others regarding actual or potential hazards to safe aviation operations.

We acknowledge that the element of surprise, or the unexpected, can upend even the best flight plan. But sometimes what is perceived as an anomaly pales in comparison to a subsequent occurrence. This was the case when an Air Taxi Captain went the extra mile to clear his wingtips while taxiing for takeoff. Just as he thought any threat was mitigated, boom! Let’s listen in to his account:

“Taxiing out for the first flight out of ZZZ, weed whacking was taking place on the south side of the taxiway. Watching to make sure my wing cleared two men mowing [around] a taxi light, I looked forward to continue the taxi. An instant later I heard a ‘thump.’ I then pulled off the taxiway onto the inner ramp area and shut down, assuming I’d hit one of the dogs that run around the airport grounds on a regular basis. I was shocked to find a man, face down, on the side of the taxiway. His coworkers surrounded him and helped him to his feet. He was standing erect and steady. He knew his name and the date. Apparently [he was] not injured badly. I attended to my two revenue passengers and returned the aircraft to the main ramp. I secured the aircraft and called [the Operations Center]. An ambulance was summoned for the injured worker. Our ramp agent was a non-revenue passenger on the flight and took pictures of the scene. He stated that none of the workers was wearing a high visibility vest, which I also observed. They seldom have in the past.

“This has been a recurring problem at ZZZ since I first came here. The operation is never [published in the] NOTAMs [for] an uncontrolled airfield. The pilots just have to see and avoid people and animals at all times. I don’t think the person that collided with my wingtip was one of the men I was watching. I think he must have been stooped down in the grass. The only option to [improve the] safety of the situation would be to stop completely until, hopefully, the workers moved well clear of the taxiway. This is one of…many operational deficiencies that we, the pilots, have to deal with at ZZZ on a daily basis.”

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When retrofitting does not evaluate risks

April 9th, 2018

Bound for London Waterloo, the 2G44 train was about to depart platform 2 at Guildford station when, at 2:37 pm on July 7, 2017, an explosion occurred in the train’s underframe equipment case, ejecting debris onto station platforms and into a nearby parking lot. Fortunately, there were no injuries to passengers or staff, and damage was contained to the train and station furnishings. It could have been much worse.

The cause of the explosion was an accumulation of flammable gases within the traction equipment case underneath one of the train’s coaches. The gases were generated after the failure of a large electrical capacitor inside the equipment case; the capacitor failure was due to a manufacturing defect.

The train had recently been retrofitted with a modern version of its traction equipment, and the replacement equipment included the capacitor that failed. The project team overseeing the design and installation of the new equipment did not consider the risk of an explosion due to a manufacturing defect within the capacitor. As a result, there were no preventive engineering safeguards.

The Rail Accident Investigation Branch (RAIB) has recommended a review of the design of UK trains’ electric traction systems to ensure adequate safeguards are in place to address identified anomalies and prevent similar explosions. Learn about the six learning points identified by the RAIB in this investigation.

Use the TapRooT® System to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

How Safe Must Autonomous Vehicles Be?

April 3rd, 2018

Tesla is under fire for the recent crash of its Model X SUV and the resulting death of the driver. It has been confirmed that the vehicle was in Autopilot mode when the accident occurred. Both Tesla and the NTSB are investigating the particulars of this crash.

Photo: KTVU Fox 2/Reuters.

I’ve read many of the comments about this crash, in addition to previous crash reports. It’s amazing how much emotion is poured into these comments. I’ve been trying to understand the human performance issues related to these crashes, and I find I must take special note of the human emotions that are attached to these discussions.

As an example, let’s say that I develop a “Safety Widget™” that is attached to all of your power tools. This widget raises the cost of your power tools by 15%, and it can be shown that it reduces tool-related accidents on construction sites by 40%. That means, if you have 100 incidents each year on your construction site, you would now have only 60 incidents after purchasing my Safety Widget™. Would you consider this a successful purchase? I think most people would be pretty happy to see their accident rates reduced by 40%!

Now, what happens when you have an incident while using the Safety Widget™? Would you stop using it the first time it did NOT stop an injury? I doubt it. We’d still be pretty happy to be preventing 40 incidents at our site each year. Would we still try to reduce the other 60 incidents? Of course. We’d keep right on using the Safety Widget™ while looking for additional safeguards to put in place and trying to improve the design of the original Safety Widget™.

This line of thinking does NOT seem to hold for autonomous vehicles. For some reason, many people seem to expect that these systems must be perfect before we are allowed to deploy them. Independent reviews (NOT by Tesla) have shown that, on a per driver-mile basis, Autopilot systems reduce accidents by 40% compared to normal driver accident rates. In the U.S., we experience about 30,000 fatalities each year due to driver error. Shouldn’t we be happy that, if everyone had an autonomous vehicle, we would be saving 12,000 lives every year? The answer, you would think, would be a resounding “YES!” But there seems to be much more emotion in the answers than straight scientific data would suggest.
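To make the arithmetic above concrete, here is a minimal sketch, in Python, of the expected-savings calculation behind both the Safety Widget™ example and the fatality estimate. The 40% reduction, the 100 incidents, and the 30,000 annual fatalities come from the text; the function name is mine.

    # Back-of-the-envelope arithmetic for the two examples above.
    # The rates and totals come from the text; the code is illustrative only.

    def incidents_prevented(baseline: float, reduction_rate: float) -> float:
        """Incidents (or fatalities) avoided given a fractional reduction."""
        return baseline * reduction_rate

    # Safety Widget (TM) example: 100 incidents/year, 40% reduction -> 60 remain.
    print(100 - incidents_prevented(100, 0.40))   # 60.0

    # U.S. driver-error fatalities: ~30,000/year, 40% reduction.
    print(incidents_prevented(30_000, 0.40))      # 12000.0 lives saved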

I think there may be several human factors in play as people respond to this question:

  1. Over- and under-trust in technology: I was talking to one of our human factors experts, and he mentioned this phenomenon. Some people under-trust technology in general and, therefore, will find reasons not to use it, even when it is proven to work. Others over-trust the technology, as evidenced by the Tesla drivers who watch movies or ignore system warnings to maintain manual control of the vehicle.
  2. “I’m better than other drivers. Everyone else is a bad driver; while they may need assistance, I drive better than any autonomous gadget.” I’ve heard this a lot. I’m a great driver; everyone else is terrible. It’s a proven fact that most people have an inflated opinion of their own capabilities compared to the “average” person. If you were to believe most people, each individual (when asked) is better than average. This would make it REALLY difficult to calculate an average, wouldn’t it?
  3. It’s difficult to calculate the unseen successes. How many incidents were avoided by the system? It’s hard to see the positives, but VERY easy to see the negatives.
  4. Money. Obviously, there will be some people put out of work as autonomous vehicles become more prevalent. Long-haul truckers will be replaced by autopilot systems. Cab drivers, delivery vehicle drivers, Uber drivers, and train engineers are all worried about their jobs, so they are more likely to latch onto any negative that would help them maintain their relevancy. Sometimes this is done subconsciously, and sometimes it is a conscious decision.

Of course, we DO have to monitor and control how these systems are rolled out. We can’t have companies roll out inferior systems that can cause harm due to negligence and improper testing. That is one of the main purposes of regulation and oversight.

However, how safe is “safe enough?” Can we use a system that isn’t perfect, but still better than the status quo? Seat belts don’t save everyone, and in some (rare) cases, they can make a crash worse (think of Dale Earnhardt, or a crash into a lake with a stuck seat belt). Yet, we still use seat belts. Numerous lives are saved every year by restraint systems, even though they aren’t perfect. How “safe” must an autonomous system be in order to be accepted as a viable safety device? Are we there yet? What do you think?

Monday Accidents & Lessons Learned: When a snake leads you down a rabbit hole

April 2nd, 2018

While Lewis Carroll did not invent the rabbit hole, he did turn it into a literary abyss down which people could fall. Today, “rabbit hole” has become a metaphor for extreme diversion, redirection, or distraction. Industries spiral down them all the time, resulting in a tailspin from which, sometimes, there is no recovery.

A Captain experienced a unique problem during the pre-departure phase of a flight. Within earshot of passengers, the Gate Agent briefed the Captain, “I am required to inform you that while cleaning the cockpit, the cleaning crew saw a snake under the Captain’s pedals. The snake got away, and they have not been able to find it.”

The incident report from NASA’s Aviation Safety Reporting System (ASRS) details the Captain’s response and reaction: “At this time, the [international pre-departure] inspection was complete, and I was allowed on the aircraft. I found two mechanics in the flight deck. I was informed that they had not been able to find the snake, and they were not able to say with certainty what species of snake it was. The logbook had not been annotated with a write-up, so I placed a write-up in the logbook. I was also getting a line check on this flight. The Check Airman told me that his father was deathly afraid of snakes and suggested that some passengers on the flight may suffer with the same condition.

“I contacted Dispatch and discussed with them that I was uncomfortable taking the aircraft with an unknown reptile condition. . . . The possibility [existed] that a snake could expose itself in flight or, worse on the approach, come out from under the rudder pedals. Dispatch agreed with my position. The Gate Agent then asked to board the aircraft. I said, “No,” as we might be changing aircraft. I then contacted the Chief Pilot. I explained the situation and told him I was uncomfortable flying the aircraft without determining what the condition of the snake was. I had specifically asked if the cleaning crew had really seen a snake. I was informed, yes, that they had tried to vacuum it up and it had slithered away. The Chief Pilot agreed with me and told me he would have a new aircraft for us in five minutes. We were assigned the aircraft at the gate next door.

“. . . When I returned [to the airport], I asked a Gate Agent what had happened to the “snake airplane.” I was told that the aircraft was left in service, and the next Captain had been asked to sign some type of form stating he was informed that the snake had not been found.”

Don’t wait for a snake-in-the-cockpit experience to improve your processes. Reach out to TapRooT® to curtail rabbit holes and leave nothing to chance.

McD’s in UK Fined £200k for Employee Injured While Directing Traffic

March 27th, 2018

An angry motorist hits a 17-year-old employee who is directing traffic and breaks his knee. Normally, you would think the road-rage driver would be at fault. But a UK court fined McDonald’s £200,000.

Why? It was a repeat incident. Two previous employees had been hurt while directing traffic. And McDonald’s hadn’t trained its employees how to direct traffic.

What do you think? Would a good root cause analysis of the previous injuries and effective corrective actions have prevented this accident?

Monday Accidents & Lessons Learned: Does what you see match what is happening?

March 26th, 2018

An incident report from NASA’s Aviation Safety Reporting System (ASRS) gives insight into a pilot’s recurring, problematic observation. Distracted and confused by what he saw, a Bonanza pilot perceived the runway edge and centerline lights cycling off and on. Air Traffic Control (ATC) let him know that the centerline lights were steady, not blinking.

The pilot summarized his experience, “I was transiting the final approach path of . . . Runway 16R and observed the runway edge and centerline lights cycle on and off . . . at a rate of approximately 1 per second. It was very similar to the rate of a blinking traffic light at a 4-way vehicle stop. The [3-blade] propeller speed was 2,400 RPM. This was observed through the entire front windscreen and at least part of the pilot side window. I queried ATC about the reason for the runway lights blinking and was told that they were not blinking. It was not immediately obvious what was causing this, but I did later speculate that it may have been caused by looking through the propeller arc.

“The next day [during] IFR training while on the VOR/DME Runway 16R approach, we observed the runway edge and centerline lights cycle on and off . . . at a rate slightly faster than 1 per second. The propeller speed was 2,500 RPM. I then varied the propeller speed and found that, at 2,700 RPM, the lights were observed strobing at a fairly high rate and, at 2,000 RPM, the blinking rate slowed to less than once per second. This was observed through the entire approach that terminated at the Missed Approach Point (MAP). The flight instructor was also surprised and mentioned that he had not seen this before, but also he doesn’t spend much time behind a 3-blade propeller arc.

“I would speculate that the Pulse Width Modulation (PWM) dimming system of the LED runway lights was phasing with my propeller, causing the observed effect. I would also speculate that the effect would . . . significantly differ at other LED dimming settings . . . and behind a 2-blade propeller.

“I found the effect to be entirely confusing and distracting and would not want to make a landing in such conditions.”
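The aliasing the pilot speculates about can be sketched with a little arithmetic. Here is a minimal model, in Python, assuming a hypothetical PWM dimming frequency (the report does not give one): the propeller effectively “samples” the light at the blade-passing rate, and the perceived blink rate is roughly the beat between the PWM frequency and the nearest multiple of that rate.

    # A minimal sketch of the stroboscopic effect the pilot describes.
    # PWM_HZ is an assumed dimming frequency for illustration only; the
    # ASRS report does not specify the runway lights' actual PWM rate.

    def blade_passing_hz(rpm: float, blades: int = 3) -> float:
        """Blade-passing frequency: revolutions per second times blade count."""
        return rpm / 60.0 * blades

    def perceived_blink_hz(pwm_hz: float, rpm: float, blades: int = 3) -> float:
        """Beat between the PWM rate and the nearest harmonic of the blade rate."""
        f_blade = blade_passing_hz(rpm, blades)
        n = max(1, round(pwm_hz / f_blade))  # nearest blade-rate harmonic
        return abs(pwm_hz - n * f_blade)

    PWM_HZ = 121.0  # hypothetical LED dimming frequency
    for rpm in (2400, 2500, 2700):
        print(rpm, blade_passing_hz(rpm), perceived_blink_hz(PWM_HZ, rpm))
    # A 3-blade propeller at 2,400 RPM passes 120 blades per second, so a
    # ~121 Hz PWM signal would beat at about 1 Hz -- roughly the "1 per
    # second" blinking the pilot reports.

Under this simple model the apparent blink rate shifts as propeller RPM changes, which is consistent with the pilot’s observation that varying the propeller speed changed the flashing.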

The TapRooT® System, Training, and Software have a dedicated history of R&D, human performance, and improvement. Learn with our best incident investigation and root cause analysis systems.

Construction’s Fatal Four – A Better Approach to Prevention

March 26th, 2018

In 2016, 21% of fatal injuries in the private sector were in the Construction industry as classified by the Department of Labor. That was 991 people killed in this industry (almost 3 people every day). Among these were the following types of fatality:

Falls – 384 (38.7%)
Struck by Object – 93 (9.4%)
Electrocutions – 82 (8.3%)
Caught-in/between – 72 (7.3%)

Imagine that. Eliminating just these 4 categories of fatalities would have saved over 630 workers in 2016.
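As a quick sanity check on those figures, here is a minimal sketch, in Python, summing the four categories against the 991 total; all numbers come straight from the list above.

    # Quick check of the 2016 "Fatal Four" figures quoted above.

    fatal_four = {
        "Falls": 384,
        "Struck by Object": 93,
        "Electrocutions": 82,
        "Caught-in/between": 72,
    }

    total = sum(fatal_four.values())
    print(total)                   # 631 -- "over 630 workers"
    print(f"{total / 991:.1%}")    # 63.7% of the 991 construction fatalities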

Now, I’m not naive enough to think we can suddenly eliminate an entire category of injury or fatality in the U.S. However, I am ABSOLUTELY CERTAIN that, at each of our companies, we can take a close look at these types of issues and make a serious reduction in these rates. Simply telling our workers to “Be careful out there!” or “Follow the procedures and policies we give you” just won’t cut it.

NOTE: In the following discussion, when I’m talking about our workers and teammates, I am talking about ALL of us! We ALL violate policies and procedures every day. Don’t believe me? Take a look at the speedometer on your car on the way home from work tonight and honestly tell me you followed the speed limit all the way home.

As an example, take a look at your last few incident investigations. When there is an incident, one of the questions always asked is, “Did you know that you weren’t supposed to do that?” The answer is almost always, “Yes.” Yet, our teammates did it anyway.

Unfortunately, too many companies stop here. “Worker knew he should not have put his hand into a pinch point. Corrective action: counseled the employee on the importance of following policy and remaining clear of pinch points.” What a completely useless corrective action! I’m pretty sure that the worker who just lost the end of his finger knows he should not have put his hand into that pinch point. Telling him to pay attention and be more careful next time will probably NOT be very effective.

If we really want to get a handle on these types of injuries, we must adopt a more structured, scientific strategy. I’d propose the following as a simple start:

1. Get out there and look! Almost every accident investigation finds that this has happened before, or that the workers often make this same mistake. If that is true, we should be getting out there and finding these daily mistakes.

2. To correct these mistakes, you must do a solid root cause analysis. Just yelling at our employees will probably not be effective. Remember, they are not bad people; they are just people. This is what people do. They try to do the best job they can, in the most efficient manner, and try to meet management’s expectations. We need to understand what, at the human performance level, allowed these great employees to do things wrong. THAT is what a good root cause analysis can do for you.

3. As in #2, when something bad DOES happen, you must do a solid RCA on those incidents, too. If your corrective actions are always:

  • Write a new policy or procedure
  • Discipline the employee
  • Conduct even MORE training

then your RCA methodology is not digging deep enough.

There is really no reason that we can’t get these types of injuries and fatalities under control. Start by doing a good root cause analysis to understand what really happened, and recognize and acknowledge why your team made mistakes. Only then can we apply effective corrective actions to eliminate those root causes. Let’s work together to keep our team safe.

1947 Centralia Mine Disaster

March 25th, 2018

On March 25, 1947, the Centralia No. 5 coal mine exploded in Illinois, taking the lives of 111 mine workers. At the time of the explosion, 142 men were in the mine: 65 were killed by burns and the violence of the explosion, and 45 were killed by afterdamp. Only 8 men were rescued, and unfortunately one of them later died from the effects of afterdamp. The remaining 24 men escaped the mine unaided.

So, what happened? The coal mine was extremely dry and dusty, with large deposits of coal dust throughout. Very little effort had been made to clean or load out the excess dust, and water had not been used to allay the dust at its source. Then an unfortunate blowout ignited the coal dust. Because of the dust buildup throughout the mine, the explosion intensified as it spread. Of the mine’s six working sections, four were affected by flames and explosion violence; the other two were affected only by afterdamp.

The explosion traveled through all the active mining rooms and some abandoned rooms that had not been treated with rock dust, but it was contained when it reached the rock-dusted zones. It also failed to propagate through areas that were partly caved in or, in some places, filled with incombustible roof rash.

Disasters with loss of life are often wake-up calls for major industries, reminding us how important it is to ensure they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid major accidents like this. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to proactively find and fix root causes and the significant issues that could lead to major problems.

Monday Accidents & Lessons Learned: When exposure to contaminants is part of a job

March 19th, 2018

Recently, a review by Western Australia’s Department of Mines, Industry Regulation and Safety (DMIRS) revealed that workers in some gold rooms have experienced sustained exposure to elevated levels of heavy metals, including arsenic, lead, and mercury. Gold room work is specifically identified as occupational exposure work that requires ongoing health surveillance for gold room employees.

The review identified omissions in the biological and atmospheric monitoring programs, which left out some heavy metal contaminants associated with ore mineralization; a failure by sites to consider the mineralogy of their specific ore deposits when assessing the heavy metals often present in Western Australian gold deposits; inadequate, ineffective ventilation systems within gold rooms; and a lack of ventilation system performance testing and monitoring. Along with these shortcomings, when equipment is modified or installed in gold rooms, maintenance programs often fall short of the manufacturer’s recommendations.

Read the Mines Safety Bulletin, Minimizing exposure to hazardous contaminants in gold rooms. Then, learn why professional training in effective investigations and competency in root cause analysis are key to solving workplace problems.

Miami Bridge Collapse – Is Blame Part of Your Investigation Policy?

March 16th, 2018

I was listening to a news report on the radio this morning about the pedestrian bridge collapse in Miami. At one point, they were interviewing Florida Governor Rick Scott.  Here is what he said:

“There will clearly be an investigation to find out exactly what happened and why this happened…”

My ears perked up, and I thought, “That sounds like a good start to a root cause investigation!”

And then he continued:

“… and we will hold anybody accountable if anybody has done anything wrong,”

Bummer. His statement had started out so well, then went directly to blame in the same breath. He had just arrived on the scene. Before anyone had a good feel for the actual circumstances, the assumption was already that the corrective actions would pinpoint blame and dish out the required discipline.

This is pretty standard for government and public figures, so I wasn’t too surprised.  However, it got me thinking about our own investigations at our companies.  Do we start out our investigations with the same expectations?  Do we begin with the good intentions of understanding what happened and finding true root causes, but then have this expectation that we need to find someone to blame?

We as companies owe it to ourselves and our employees to do solid, unbiased incident investigations.  Once we get to reliable root causes, our next step should be to put fixes in place that answer the question, “How do we prevent these root causes from occurring in the future?  Will these corrective actions be effective in preventing the mistakes from happening again?”  In my experience, firing the employee / supervisor / official in charge rarely leads to changes that will prevent the tragedy from happening again.

Root Cause Tip: Luck Versus Being Consistent, Success and Failure Can Come From Both

March 14th, 2018

Every best practice can be a strength or a weakness. Even one phrase like “I will ____” can be self-defeating or uplifting. “I will succeed” versus “I will fail”: both phrases set your compass for success or failure. Okay, so what does philosophy have to do with root cause analysis? Simple…

Practice safe behaviors, build and sustain safe processes around good practices, and success is measured by fewer injuries, fewer near-misses, and more efficient processes.

Practice unsafe behaviors, build unsafe but sustainable processes around poor practices, and the result is more injuries, more near-misses, and wasteful business processes. Safety then happens only by luck!

Guess what? In many cases, you can be in compliance during audits yet still meet the criteria of “unsafe but sustainable processes . . . more injuries, more near-misses, and wasteful business processes.”

This is why Question Number 14 on the TapRooT® Root Cause Tree® is so important.

Not every Causal Factor/Significant Issue that occurred during an incident or was found during an audit is due to a person breaking a rule or taking shortcuts. In many cases, the employee was following the rules to a “T” when the action he or she performed got them, or someone else, hurt.

Take time to use the TapRooT® Root Cause Tree®, Root Cause Tree® Dictionary, and Corrective Action Helper® as designed to perform consistently with a successful purpose.

Want to learn more? Attend one of our public TapRooT® Courses or contact us to schedule an onsite course.

Hire a Professional

March 12th, 2018

I know every company is trying to do the best they can with the resources that are available. We ask a lot of our employees and managers, trying to be as efficient as we can.

However, sometimes we need to recognize when additional expertise is required to solve a particular problem. Alternatively, we need to ensure that our people have the tools they need to properly perform their job functions. Companies do this for many job descriptions:

  • Oil analyst
  • Design engineer
  • Nurse
  • Aircraft mechanic

I don’t think we would ask our Safety Manager to repair a jet engine.  THAT would be silly!

However, for some reason, many companies think that it is OK to ask their aircraft mechanics to perform a root cause analysis without giving them any additional training.  “Looks like we had a problem with that 737 yesterday.  Joe, go investigate that and let me know what you find.”  Why would we expect Joe, who is an excellent mechanic, to be able to perform a professional root cause analysis without being properly trained?  Would we send our Safety Manager out to repair a jet engine?

It might be tempting to assume that performing an RCA is “easy” and therefore does not require professional training. This is somewhat true: it is easy to perform bad RCAs without professional training. While performing effective investigations does not require years of training, there is a certain minimum competency you should expect from your team, and it is not fair to throw them into a situation they are not trained to handle.

Ensure you are giving your team the support they need by giving them the training required to perform excellent investigations.  A 2-Day TapRooT® Essential Techniques Course is probably all your people will need to perform investigations with terrific results.

Monday Accidents & Lessons Learned: When a disruption potentially saves lives

March 12th, 2018

Early news of an incident often does not convey the complexity behind it; granted, many facts are not initially available. On Tuesday, January 24, 2017, just before 6:00 am, a freight train derailed on Network Rail track in south-east London between Lewisham and Hither Green, leaving the rear two wagons of the one-kilometer-long train off the rails. Soon after, the Southeastern network sent a tweet to report the accident, alerting passengers that “All services through the area will be disrupted, with some services suspended.” Then came the advice, “Disruption is expected to last all day. Please make sure you check before travelling.” While Southeastern passengers were venting their frustrations on Twitter, a team of engineers was at the site by 6:15 am, according to Network Rail. At the scene, the engineers confirmed that no passengers were aboard and no one was injured. They also noted damaged track and a spilled payload of sand.

The newly laid track at Courthill Loop South Junction was constructed of separate panels of switch and crossing track, with most of the panels arriving on site preassembled. Bearer ties, or mechanical connectors, joined the rail supports. The February 2018 report from the Rail Accident Investigation Branch (RAIB), which included five recommendations, noted that follow-up engineering work took place the weekend after the new track was laid, and the derailment occurred the next day. Further inspection found the incident was caused by a significant track twist, along with other contributing factors. Repairs disrupted commuters for days as round-the-clock engineering crews completely rebuilt a 50-meter stretch of railway and used cranes to lift the overturned wagons. Now factor in the time, business, and resources saved, in addition to the lives often spared, when TapRooT® advanced root cause analysis is used to reach solutions proactively.

What does bad root cause analysis cost?

March 7th, 2018

Have you ever thought about this question?

An obvious answer is $$$BILLIONS.

Let’s look at one example.

The BP Texas City refinery explosion was extensively investigated, and BP’s root cause analysis was found wanting. But BP didn’t learn. They didn’t implement advanced root cause analysis and apply it across all their business units. They didn’t learn from smaller incidents in their offshore exploration organization. They didn’t prevent the Deepwater Horizon accident. What did Deepwater Horizon cost BP? The last estimate I saw was $22 billion, and the costs have probably grown since then.

I would argue that ALL major accidents are at least partially caused by bad root cause analysis and not learning from past experience.

EVERY industrial fatality could be prevented if we learned from smaller precursor incidents.

EVERY hospital sentinel event could be prevented (and that’s estimated at 200,000 fatalities per year in the US alone) if hospitals applied advanced root cause analysis and learned from patient safety incidents.

Why don’t companies and managers do better root cause analysis and develop effective fixes? A false sense of saving time and effort. They don’t want to invest in improvement until something really bad happens. They kid themselves that really bad things won’t happen because they haven’t happened yet. They can’t see that investing in the best root cause analysis training is something that leads to excellent performance and saving money.

Yet that is what we’ve proven time and again when clients have adopted advanced root cause analysis and paid attention to their performance improvement efforts.

The cost of the best root cause analysis training and performance improvement efforts is a drop in the bucket compared to any major accident. It is even cheap compared to repeat minor and medium-risk incidents.

I’m not promising something for nothing. Excellent performance isn’t free. It takes work to learn from incidents, implement effective fixes, and stop major accidents. Then, when you stop having major accidents, you can be lulled into a false sense of security that causes you to cut back your efforts to achieve excellence.

If you want to learn advanced root cause analysis with guaranteed training, attend one of our upcoming public TapRooT® Root Cause Analysis Training courses.

Here is the course guarantee:

Attend the course. Go back to work and use what you have learned to analyze accidents, incidents, near-misses, equipment failures, operating issues, or quality problems. If you don’t find root causes that you previously would have overlooked, and if you and your management don’t agree that the corrective actions you recommend are much more effective, just return your course materials/software and we will refund the entire course fee.

Don’t be “penny wise and pound foolish.” Learn about advanced root cause analysis and apply it to save lives, prevent environmental damage, improve equipment reliability, and achieve operating excellence.

Protection Against Hydrogen Sulfide

March 6th, 2018

On January 16, 2017, a private construction company sent four utility workers to handle complaints about a sewage backup in Key Largo, Florida. Three of the four workers descended into the 15-foot-deep drainage hole, and within seconds all voice communication with them was lost.

The Key Largo Fire Department was the first to respond to the scene. Leonardo Moreno, a volunteer firefighter, tried to enter the hole with his air tank but failed, so he descended without it and lost consciousness within seconds of entering the drainage hole. Eventually, another firefighter was able to enter the hole with an air tank and pull Moreno out. The three construction workers weren’t so lucky: all of them died from hydrogen sulfide poisoning, and Moreno was left in critical condition.

Tragedies like this are completely avoidable. Comment below on how this could have been prevented by using TapRooT® proactively.

To learn more about this tragic incident, click here.

Is Having the Highest Number of Serious Incidents Good or Bad?

March 6th, 2018

I read an interesting article about two hospitals in the UK with the highest number of serious incidents.

On the good side, you want people to report serious incidents. Healthcare has a long history of under-reporting serious incidents (sentinel events).

On the good side, administrators say they do a root cause analysis on these incidents.

On the bad side, the hospitals continue to have these incidents. Shouldn’t root cause analysis FIX the problems, so that serious incidents steadily become fewer and less severe?

Maybe they should be applying advanced root cause analysis?

Monday Accidents & Lessons Learned: When a critical team meets the unexpected

March 5th, 2018

Teamwork can break down or go awry in difficult circumstances. During normal operations, team members adhere to the policies for their roles, but a single incident can challenge or splinter even the most prepared team. Passengers can create a variety of circumstances that require quick, exceptional thinking and action, and many of these circumstances are not delineated in the Quick Reference Handbook (QRH) or addressed by company policy.

This happened to an air carrier crew in an aircraft on the runway awaiting takeoff. The crew was suddenly caught up in a passenger’s panic-stricken, emotionally charged request to deplane. CALLBACK, from NASA’s Aviation Safety Reporting System, gives us six crew debriefing perspectives on this incident. From the First Officer’s report to both Flight Attendants’ summaries, each vantage point reveals complications that can be examined using TapRooT® Techniques.
