Category: Accidents

Monday Accidents & Lessons Learned: Who’s in Charge?

May 28th, 2018

An ERJ-145 crew failed to detect a change in its vertical navigation mode during descent. When it was eventually discovered, corrective action was taken, but large deviations from the desired flight path may have already compromised safety.

“This event occurred while being vectored for a visual approach. The First Officer (FO) was the Pilot Flying and I was Pilot Monitoring. ATC had given us a heading to fly and a clearance to descend to 3,000 feet. 3,000 was entered into the altitude preselect, was confirmed by both pilots, and a descent was initiated. At about this time, we were also instructed to maintain 180 knots. Sometime later, I noticed that our speed had begun to bleed off considerably, approximately 20 knots, and was still decaying. I immediately grabbed the thrust levers and increased power, attempting to regain our airspeed. At about this time, it was noticed that the preselected altitude had never captured and that the Flight Mode Annunciator (FMA) had entered into PITCH MODE at some point. It became apparent that after the aircraft had started its descent, the altitude preselect (ASEL) mode had changed to pitch and was never noticed by either pilot. Instead of descending, the aircraft had entered a climb at some point, and this was not noticed until an appreciable amount of airspeed decay had occurred. At the time that this event was noticed, the aircraft was approximately 900 feet above its assigned altitude. Shortly after corrective action was begun, ATC queried us about our climbing instead of descending. We replied that we were reversing the climb. The aircraft returned to its assigned altitude, and a visual approach was completed without any further issues.

“[We experienced a] large decrease in indicated airspeed. The event occurred because neither pilot noticed the Flight Mode Annunciator (FMA) entering PITCH MODE. Thrust was added, and then the climb was reversed in order to descend back to our assigned altitude. Both pilots need to reaffirm that their primary duty is to fly and monitor the aircraft at all times, starting with the basics of heading, altitude, airspeed, and performance.”

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

“People are SO Stupid”: Horrible Comments on LinkedIn

May 23rd, 2018

How many people have seen those videos on LinkedIn and Facebook that show people doing really dumb things at work? Lately, LinkedIn seems to be full of those types of videos. I’m sure it has something to do with search algorithms targeting those types of safety posts toward me. Still, there are a lot of them.

The videos themselves don’t bother me. They show real people doing unsafe things, or real accidents, the kind that happen every day. What REALLY bothers me are the comments that people post under each video. Again concentrating on LinkedIn: people comment on how dumb people are, or how they wouldn’t put up with that, or “stupid is as stupid does!”

Here are a couple of examples I pulled up in about five minutes of scrolling through my LinkedIn feed. Click on the pictures to see the comments that were made with the entries:

[LinkedIn screenshots: click on a picture to watch the video]

These comments tend to fall into a few categories. Let’s take a look at them as groups:

“Those people are not following safety guideline xxxx. I blame Operator ‘A’ for this issue!”

Obviously, someone is not following a good practice.  If they were, we wouldn’t have had the issue, right?  It isn’t particularly helpful to just point out the obvious problem.  We should be asking ourselves, “Why did this person decide that it was OK to do this?”  Humans perform split-second risk assessments all the time, in every task they perform.  What we need to understand is the basis of a person’s risk assessment.  Just pointing out that they performed a poor assessment is too easy.  Getting to the root cause is much more important and useful when developing corrective actions.

“Operators were not paying attention / being careful.”

No kidding. Humans are NEVER careful for extended periods of time. People are only careful when reminded, until they’re not. Watch your partner drive the car. They are careful much of the time, and then they need to change the radio station, or the cell phone buzzes, etc.

Instead of just noting that people in the video are not being careful, we should note what safeguards were in place (or should have been in place) to account for the human not paying attention.  We should ask what else we could have done in order to help the human do a better job.  Finding the answers to these questions is much more helpful than just blaming the person.

These videos are showing up more and more frequently, and the comments on them show how easy it is to blame people instead of doing a human performance-based root cause analysis of the issue. In almost all cases, the video doesn’t even contain enough information for a sound analysis. I challenge you to watch these videos and avoid blaming the individual by making the following assumptions:

  1.  The people in the video are not trying to get hurt / break the equipment / make a mistake
  2.  They are NOT stupid.  They are human.
  3.  There are systems that we could put in place that make it harder for the human to make a mistake (or at least make it easier to do it right).

When viewing these videos in this light, it is much more likely that we can learn something constructive from these mistakes, instead of just assigning blame.

Two Incidents in the Same Year Cost UK Auto Parts Manufacturer £1.6m in Fines

May 22nd, 2018


Faltec Europe manufactures car parts in the UK. They had two incidents in 2015 related to health and safety.

The first was an outbreak of Legionnaires’ Disease due to a cooling water system that wasn’t being properly treated.

The second was an explosion and fire in the manufacturing facility.

For more details see:

http://press.hse.gov.uk/2018/double-investigation-leads-to-fine-for-north-east-car-parts-manufacturer-faltec-europe-limited/

The company was prosecuted by the UK HSE and was fined £800,000 for each incident plus £75,159.73 in costs and a victim surcharge of £120.

The machine that exploded had had precursor incidents, but the company had not taken adequate corrective actions.

Are you investigating your precursor incidents and learning from them to prevent major injuries/health issues, fires, and explosions?

Perhaps you should be applying advanced root cause analysis to find and fix the real root causes of equipment and human error related incidents? Learn more at one of our courses:

2-Day TapRooT® Root Cause Analysis Course

5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training

Want to see our courses in Europe? CLICK HERE.

You can attend our training at our public courses anywhere around the world. See the list by CLICKING HERE.

Would you like to sponsor a course at your site? Contact us for a quote by CLICKING HERE.

Monday Accidents & Lessons Learned: The Worst U.S. Maritime Accident in Three Decades

May 21st, 2018

The U.S.-flagged cargo ship El Faro and its crew of 33 men and women sank after sailing into Hurricane Joaquin. What went wrong, and why did an experienced sea captain sail his crew and ship directly into the eye of a hurricane? The investigation lasted two years.

One of two ships owned by TOTE Maritime Inc., the El Faro constantly rotated between Jacksonville, Florida, and San Juan, Puerto Rico, transporting everything from frozen chickens to milk to Mercedes-Benzes to the island. The combination roll-on/roll-off and lift-on/lift-off cargo freighter was crewed by U.S. merchant mariners. Should the El Faro miss a trip, TOTE would lose money, store shelves would go bare, and the Puerto Rican economy would suffer.

The El Faro, a 790-foot, 1970s steamship, set sail at 8:15 p.m. on September 29, 2015, with full knowledge of the National Hurricane Center warning that Tropical Storm Joaquin would likely strengthen to a hurricane within 24 hours.

Despite its modern navigation and weather technology, the aging ship, with two boilers in need of service and no life vests or immersion suits, was equipped with open lifeboats that could not be launched once the captain gave the order to abandon ship in the midst of a savage hurricane.

As the Category 4 storm bore down on the Bahamas, winds peaking at 140 miles an hour, people and vessels headed for safety. All but one ship. On October 1, 2015, the SS El Faro steamed into the furious storm. Black skies. Thirty- to forty-foot waves. The Bermuda Triangle. Near San Salvador, the freighter found itself in the strongest October storm to hit these waters since 1866. Around 7:30 a.m. on October 1, the ship was taking on water and listing 15 degrees, although the last report from the captain indicated that the crew had managed to contain the flooding. Soon after, the freighter ceased all communications. All aboard perished in the worst U.S. maritime disaster in three decades. Investigators from the National Transportation Safety Board (NTSB) were left to wonder why.

When the NTSB launched one of the most thorough investigations in its long history, it spoke with dozens of experts, colleagues, friends, and family of the crew. The U.S. Coast Guard, with help from the Air Force, the Air National Guard, and the Navy, searched a 70,000-square-mile area off Crooked Island in the Bahamas, spotting debris, a damaged lifeboat, containers, and traces of oil. On October 31, 2015, the USNS Apache found the El Faro using CURV-21, a remotely operated deep-ocean vehicle.

Thirty days after the El Faro sank, the ship was found 15,000 feet below the surface. Images of the sunken ship showed a breach in the hull, and the main navigation tower was missing.

Finally came the crucial discovery: the ship’s voyage data recorder (VDR), found on Tuesday, April 26, 2016, at a depth of 4,600 meters and retrieved by a submersible robot. This black box held everything uttered on the ship’s bridge, up to its final moments.

The big challenge had been locating the VDR, a device only about a foot by eight inches. No commercial recorder had ever been recovered from such a depth, where the pressure is nearly 7,000 pounds per square inch.
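
That pressure figure checks out with simple hydrostatics, P = ρgh. Here is a quick back-of-the-envelope sketch (our own arithmetic with an assumed average seawater density, not a figure from the NTSB report):

```python
# Rough hydrostatic-pressure check at the VDR's resting depth: P = rho * g * h.
# Assumed values: average seawater density ~1,025 kg/m^3, g = 9.81 m/s^2.
RHO_SEAWATER = 1025.0  # kg/m^3 (varies with salinity and temperature)
G = 9.81               # m/s^2
PA_PER_PSI = 6894.76   # pascals per pound-per-square-inch

depth_m = 4600.0       # depth at which the VDR was found
pressure_pa = RHO_SEAWATER * G * depth_m
print(f"{pressure_pa / 1e6:.1f} MPa ≈ {pressure_pa / PA_PER_PSI:,.0f} psi")
# -> 46.3 MPa ≈ 6,709 psi, consistent with "nearly 7,000 pounds per square inch"
```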

The 26-hour recording became the longest transcript ever produced by the NTSB: 510 pages. The recorder revealed that at the outset, there was absolute certainty among the crew and captain that going was the right thing to do. As the situation evolved and conditions deteriorated, the transcript reveals, the captain dismissed a crew member’s suggestion that they return to shore in the face of the storm. “No, no, no. We’re not gonna turn around,” he said. Captain Michael Davidson then said, “What I would like to do is get away from this. Let this do what it does. It certainly warrants a plan of action.” Davidson went below just after 7:57 p.m. and was neither heard from again nor present on the bridge until 4:10 a.m. The El Faro and its crew had but three more hours after Davidson reappeared on the bridge; the recording ends at 7:39 a.m., ten minutes after Captain Davidson ordered the crew to abandon ship.

This NTSB graphic shows El Faro’s track line in green as the ship sailed from Jacksonville to Puerto Rico on October 1, 2015. Color-enhanced satellite imagery from close to the time the ship sank illustrates Hurricane Joaquin in red, with the storm’s eye immediately to the south of the accident site.

The NTSB determined that the probable cause of the sinking of El Faro and the subsequent loss of life was the captain’s insufficient action to avoid Hurricane Joaquin, his failure to use the most current weather information, and his late decision to muster the crew. Contributing to the sinking was ineffective bridge resource management on board El Faro, which included the captain’s failure to adequately consider officers’ suggestions. Also contributing to the sinking was the inadequacy of both TOTE’s oversight and its safety management system.

The NTSB’s investigation into the El Faro sinking identified the following safety issues:

  • Captain’s actions
  • Use of noncurrent weather information
  • Late decision to muster the crew
  • Ineffective bridge resource management
  • Company’s safety management system
  • Inadequate company oversight
  • Need for damage control plan
  • Flooding in cargo holds
  • Loss of propulsion
  • Downflooding through ventilation closures
  • Lack of appropriate survival craft

The report also addressed other issues, such as the automatic identification system and the U.S. Coast Guard’s Alternate Compliance Program. On October 1, 2017, the U.S. Coast Guard released findings from its investigation, conducted with the full cooperation of the NTSB. The 199-page report identified causal factors in the loss of the El Faro and its 33 crew members and proposed 31 safety recommendations and four administrative recommendations for future action to the Commandant of the Coast Guard.

Captain Jason Neubauer, Chairman, El Faro Marine Board of Investigation, U.S. Coast Guard, made the statement, “The most important thing to remember is that 33 people lost their lives in this tragedy. If adopted, we believe the safety recommendations in our report will improve safety of life at sea.”

Avoid Big Problems By Paying Attention to the Small Stuff

May 16th, 2018

Almost every manager has been told not to micro-manage their direct reports. So the advice above:

Avoid Big Problems By Paying Attention to the Small Stuff

may sound counter-intuitive.

Perhaps this quote from Admiral Rickover, leader of the most successful organization to implement process safety and organizational excellence, might make the concept clearer:

The Devil is in the details, but so is salvation.

When you talk to senior managers who lived through a major accident (the type that gets bad national press and results in a management shakeup), they will tell you they never saw it coming.

A Senior VP at a utility told me:

It was like I was walking along on a bright sunny day and
the next thing I knew, I was at the bottom of a deep dark hole.

They never saw the accident coming. But they should have. And they should have prevented it. But HOW?

I have never seen a major accident that wasn’t preceded by precursor incidents.

What is a precursor incident?

A precursor incident is an incident that has low to moderate consequences but could have been much worse if …

  • One or more Safeguards had failed
  • It was a bad day (you were unlucky)
  • You decided to cut costs just one more time and eliminated the hero that kept things from getting worse
  • The sequence had changed just a little (the problem occurred on night shift or other timing changed)

These types of incidents happen more often than people like to admit. Thus, they give management the opportunity to learn.

What is the response of most managers? Do they learn? NO. Why? Because the consequences of the little incidents are insignificant. Why waste valuable time, money, and resources investigating small-consequence incidents? As one Plant Manager said:

If we investigated every incident, we would do nothing but investigate incidents.

Therefore, a quick and dirty root cause analysis is performed (think 5-Whys), and some easy corrective actions that really don’t change things are implemented.

The result? It looks like the problem goes away. Why? Because big accidents usually have multiple Safeguards and they seldom fail all at once. It’s sort of like James Reason’s Swiss Cheese Model…


The holes move around and change size, but they don’t line up all the time. So, if you are lucky, you won’t be there when the accident happens. Maybe the small incidents repeat, but a big accident hasn’t happened (yet).
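
To make that intuition concrete, here is a minimal simulation sketch (our illustration, not part of Reason’s model or the TapRooT® toolset). It treats each Safeguard as a hole that randomly “lines up” on a given day and counts how often all of the holes line up at once:

```python
import random

def all_safeguards_fail_rate(p_hole: float, n_safeguards: int,
                             days: int = 1_000_000) -> float:
    """Estimate how often every Safeguard's 'hole' lines up on the same day.

    p_hole:        chance that any one Safeguard fails on a given day
    n_safeguards:  number of independent Safeguards between Hazard and Target
    """
    accidents = sum(
        all(random.random() < p_hole for _ in range(n_safeguards))
        for _ in range(days)
    )
    return accidents / days

# One Safeguard failing 1 day in 20 -> precursor incidents are common:
print(all_safeguards_fail_rate(0.05, 1))  # ~0.05
# Four independent Safeguards all failing at once -> rare (~0.05**4 = 6.25e-6),
# so the big accident is deferred, not prevented, until the holes line up.
print(all_safeguards_fail_rate(0.05, 4))
```

With four independent Safeguards in this toy model, single-Safeguard failures (the precursor incidents) are about 8,000 times more common than all four failing together, which is exactly why precursors are where the learning opportunity lives.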

To prevent the accident, you need to learn from the small precursor incidents and fix the holes in the cheese or add additional Safeguards to prevent the major accidents. The way you do this is by applying advanced root cause analysis to precursor incidents. Learn from the small stuff to avoid the big stuff. To avoid:

  • Fatalities
  • Serious injuries
  • Major environmental releases
  • Serious customer quality complaints
  • Major process upsets and equipment failures
  • Major project cost overruns

Admiral Rickover’s seventh rule (of seven) was:

The organization and members thereof must have the ability
and willingness to learn from mistakes of the past.

And the mistakes he referred to were both major accidents (which didn’t occur in the Nuclear Navy when it came to reactor safety) and precursor incidents.

Are you ready to learn from precursor incidents to avoid major accidents? Then stop trying to take shortcuts to save time and effort when investigating minor incidents (low actual consequences) that could have been worse. Start applying advanced root cause analysis to precursor incidents.

The first thing you will learn is that identifying the correct answer once is a whole lot easier than finding the wrong answer many times.

The second thing you will learn is that when people start finding the real root causes of problems and do real root cause analysis frequently, they get much better at problem solving and performance improves quickly. The effort required is less than doing many poor investigations.

Overall, you will learn that the process pays for itself when advanced root cause analysis is applied consistently. Why? Because the “little stuff” that isn’t being fixed is much more costly than you think.

How do you get started?

The fastest way is by sending some folks to the 2-Day TapRooT® Root Cause Analysis Course to learn to investigate precursor incidents.

The 2-Day Course is a great start. But some of your best problem solvers need to learn more. They need the skills necessary to coach others and to investigate significant incidents and major accidents. They need to attend the 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training.

Once you have the process started, you can develop a plan to continually improve your improvement efforts. Your organization will become willing to learn. You will prove how valuable these tools are and be on your way to becoming best in class.

Rome wasn’t built in a day, but you have to get started to see the progress you need to achieve. Start now and build on success.

Would you like to talk to one of our TapRooT® Experts to get even more ideas for improving your root cause analysis? Contact us by CLICKING HERE.

Monday Accidents & Lessons Learned: Airplane Mode

May 14th, 2018

When you hear the words “mode” and “aviation” together, many of us who are frequent flyers may quickly intuit that the discussion is heading toward the digital disconnection of a device’s cellular voice and data connection: airplane mode. Webster defines “mode” as “a particular functioning arrangement or condition,” and an aircraft system’s operating mode is characterized by a particular list of active functions for a named condition. Most aircraft systems employ multiple modes of operation, each with distinct functions, to accommodate the broad range of needs in the current operating environment.

With ever-increasing mode complexity, pilots must be thoroughly familiar with scores of operating modes and functions. Whenever a pilot operates automation that controls an aircraft, mode awareness, mode selection, and mode expectation can all present hazards that require know-how and management. Some of these hazards are obvious, but many are complex and difficult to grasp.

NASA’s Aviation Safety Reporting System (ASRS) receives reports suggesting that pilots are sometimes uninformed or unaware of the current operating mode, or of the functions available in a specific mode. That is when pilots experience the “What is it doing now?” syndrome. Often, the aircraft is transitioning to, or already in, a mode the pilot didn’t select, and the pilot may not recognize that a transition has occurred. The aircraft then does something autonomous and unanticipated, typically causing confusion and increasing the potential for hazard.

The following report gives us insight into the problems involving aircraft automation that pilots experience with mode awareness, mode selection, and mode expectation.

“On departure, an Air Carrier Captain selected the required navigation mode, but it did not engage. He immediately attempted to correct the condition and subsequently experienced how fast a situation can deteriorate when navigating in the wrong mode.

“I was the Captain of the flight from Ronald Reagan Washington National Airport (DCA). During our departure briefing at the gate, we specifically noted that the winds were 170 at 6, and traffic was departing Runway 1. Although the winds favored Runway 19, we acknowledged that they were within our limits for a tailwind takeoff on Runway 1. We also noted that windshear advisories were in effect, and we followed required procedure using a no–flex, maximum thrust takeoff. We also briefed the special single engine procedure and the location of [prohibited airspace] P-56. Given the visual [meteorological] conditions of 10 miles visibility, few clouds at 2,000 feet, and scattered clouds at 16,000 feet, our method of compliance was visual reference, and we briefed, “to stay over the river, and at no time cross east of the river.”

“Taxi out was normal, and we were issued a takeoff clearance [that included the JDUBB One Departure] from Runway 1. At 400 feet AGL, the FO was the Pilot Flying and incorrectly called for HEADING MODE. I was the Pilot Monitoring and responded correctly with “NAV MODE” and selected NAV MODE on the Flight Control Panel. The two lights adjacent to the NAV MODE button illuminated. I referenced my PFD and noticed that the airplane was still in HEADING MODE and that NAV MODE was not armed. Our ground speed was higher than normal due to the tailwind, and we were rapidly approaching the departure course. Again, I reached up and selected NAV MODE, with the same result. I referenced our location on the Multi-Function Display (MFD), and we were exactly over the intended departure course; however, we were still following the flight director incorrectly on runway heading. I said, “Turn left,” and shouted, “IMMEDIATELY!” The FO banked into a left turn. I observed the river from the Captain’s side window, and we were directly over the river and clear of P-56. I spun the heading bug directly to the first fix, ADAXE, and we proceeded toward ADAXE.

“Upon reaching ADAXE, we incorrectly overflew it, and I insisted the FO turn right to rejoin the departure. He turned right, and I said, “You have to follow the white needle,” specifically referencing our FMS/GPS navigation. He responded, “I don’t have a white needle.” He then reached down and turned the Navigation Selector Knob to FMS 2, which gave him proper FMS/GPS navigation. We were able to engage the autopilot at this point and complete the remainder of the JDUBB One Departure. I missed the hand–off to Departure Control, and Tower asked me again to call them, which I did. Before the hand–off to Center, the Departure Controller gave me a phone number to call because of a possible entry into P-56.”

We thank ASRS for this report, and for helping to underscore TapRooT®’s raison d’être.

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Remembering An Accident: Enschede Fireworks Disaster

May 13th, 2018

On May 13, 2000, in the eastern Dutch city of Enschede, a fireworks warehouse caught fire, leading to an enormous explosion. The explosion caused 22 deaths, with 4 firefighters among the casualties; another 974 people were injured; and some 500 homes and businesses were severely damaged or destroyed in the blast. After the dust had settled, a crater 13 meters in diameter and 1.3 meters deep could be seen where concrete storage cells C9 and C11-C15 once stood. Creating a crater that size would take a TNT equivalent of between 4 and 5 tonnes. The largest blast was felt up to 30 kilometers (19 miles) away.
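
That TNT figure can be reproduced with a rough cube-root crater-scaling law. The sketch below is our illustration only: the scaling constant k is an assumption (about 0.8 m per cube-root kilogram, a value consistent with the numbers in the report), and real craters depend heavily on soil type and charge confinement.

```python
# Back-of-the-envelope TNT equivalence from crater size, via cube-root scaling:
#   D ≈ k * W**(1/3)   =>   W ≈ (D / k)**3
# k is an ASSUMED constant; treat the result as illustrative, not forensic.
K = 0.8            # m / kg^(1/3), assumed scaling constant
diameter_m = 13.0  # crater diameter reported at Enschede

w_kg = (diameter_m / K) ** 3
print(f"TNT equivalent ≈ {w_kg / 1000:.1f} tonnes")
# -> ≈ 4.3 tonnes, within the 4-5 tonne estimate above
```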

  

What makes this incident so interesting is that whatever started the fire was never conclusively determined. Two possibilities seem likely. One is arson: the Dutch police made several arrests, but no one arrested was ever convicted of arson for the Enschede fireworks disaster. The other theory comes from the fire department: accidental ignition via an electrical short circuit could also have started the fire.

As a result of the incident and the investigation’s findings, the fireworks disaster led to stronger safety regulations in the Netherlands concerning the sale, storage, and distribution of fireworks. Since the catastrophe, three illegal fireworks warehouses have been closed down, and the Roombeek area destroyed by the explosion has been rebuilt.

  

To read the full detailed report, click here.

Major disasters are often wake-up calls for how important it is to ensure that they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid them. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to find and fix root causes reactively and to identify precursors that could lead to major problems.

To learn more about our courses and their locations, click on the links below.
5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training
2-Day TapRooT® Root Cause Analysis Essentials Training

 

Hazards and Targets

May 7th, 2018

Most of us probably would not think of this as an on-the-job Hazard … a giraffe.


But filmmaker Carlos Carvalho was killed by one while making a film in Africa.


Do you have unexpected Hazards at work? Giant Asian hornets? Grizzly bears?

Or are your Hazards much more common? Heat stroke. Slips and falls (gravity). Traffic.

Performing a thorough Safeguard Analysis before starting work and then trying to mitigate any Hazards is a good way to improve safety and reduce injuries. Do your supervisors know how to do a Safeguard Analysis using TapRooT®?

Monday Accidents & Lessons Learned: Failing the Mind-Check of Reality

May 7th, 2018

 

When an RV-7 pilot studied the weather prior to departure, the weather was not the only factor at work: distractions and personal stress also shaped his situational awareness and decision-making, as you can see in his experience:

“I was cleared to depart on Runway 27L from [midfield at] intersection C. However, I lined up and departed from Runway 9R. No traffic control conflict occurred. I turned on course and coordinated with ATC immediately while airborne.

“I had delayed my departure due to weather [that was] 5 miles east…and just north of the airport on my route. Information Juliet was: “340/04 10SM 9,500 OVC 23/22 29.99, Departing Runway 27L, Runways 9L/27R closed, Runways 5/23 closed.” My mind clued in on [Runway] 09 for departure. In fact, I even set my heading bug to 090. Somehow while worried mostly about the weather, I mentally pictured departing Runway 9R at [taxiway] C. I am not sure how I made that mistake, as the only 9 listed was the closed runway. My focus was not on the runway as it should have been, but mostly on the weather.

“Contributing factors were:

1. Weather

2. No other airport traffic before my departure. (I was looking as I arrived at the airport and completed my preflight and final weather checks)

3. Airport construction. For a Runway 27 departure, typical taxi routing would alleviate any confusion

4. ATIS listing the closed runway with 9 listed first

5. Quicker than expected takeoff clearance

“I do fly for a living. I will be incorporating the runway verification procedure we use on the jet aircraft at my company into my GA flying from now on. Sadly, I didn’t make that procedural change in my GA flying.”

Thanks to NASA’s Aviation Safety Reporting System (ASRS) for sharing experiences that offer valuable insight, contributing to the growth of aviation wisdom, lessons learned, and an uninhibited accounting of reported incidents. ASRS receives, processes, and analyzes voluntarily submitted reports describing actual or potential hazards from pilots, air traffic controllers, flight attendants, maintenance personnel, dispatchers, ground personnel, and others.

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When There Is No Right Side of the Tracks

April 30th, 2018

On Tuesday, February 28, 2017, a wall section at the top of a cutting above a four-track railway line between the Liverpool Lime Street and Edge Hill stations in Liverpool, England, began to collapse. From approximately 5:30 pm until 6:02 pm, more than 188 tons of debris rained down from the embankment wall, collapsing across all four tracks. Liverpool Lime Street is the city’s main station and one of the busiest in the north of England.

With the rubble downing overhead power lines and damaging infrastructure, all mainline services to and from the station were suspended. The collapse brought trains to a standstill for three hours and forced the evacuation of three trains, two of which were halted in tunnels. Police, fire, and ambulance crews helped evacuate passengers down the tracks. Other passengers were stranded on trains at Lime Street station by the power outage resulting from the collapse. A passenger en route to Liverpool from Manchester Oxford Road reported chaos at Warrington station as travelers tried to find their way home.

A representative from Network Rail spoke about the incident, “No trains are running in or out of Liverpool Lime station after a section of trackside wall, loaded with concrete and cabins by a third party, collapsed sending rubble across all four lines and taking overhead wires with it. Early indications suggest train service will not resume for several days while extensive clear-up and repairs take place to make the location safe. More precise forecasts on how long the repairs will take will be made after daybreak tomorrow.”

Read more about the incident here.

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Press Release: CSB to Investigate Husky Refinery Fire

April 26th, 2018


Washington, DC, April 26, 2018 –  A four-person investigative team from the U.S. Chemical Safety Board (CSB) is deploying to the scene of an incident that reportedly injured multiple workers this morning at the Husky Energy oil refinery in Superior, Wisconsin. The refinery was shutting down in preparation for a five-week turnaround when an explosion was reported around 10 am CDT.

According to initial reports, several people were transported to area hospitals with injuries. There have been no reports of fatalities. Residents and area schools near the refinery were asked to evacuate due to heavy smoke.

The CSB is an independent, non-regulatory federal agency charged with investigating serious chemical incidents. The agency’s board members are appointed by the president and confirmed by the Senate. CSB investigations look into all aspects of chemical accidents, including physical causes such as equipment failure as well as inadequacies in regulations, industry standards, and safety management systems.

The Board does not issue citations or fines but does make safety recommendations to plants, industry organizations, labor groups, and regulatory agencies such as OSHA and EPA. Visit the CSB website: www.csb.gov.

Here is additional coverage of the fire …


http://www.kbjr6.com/story/38049655/explosion-injuries-reported-at-husky-energy-superior-refinery?autostart=true

Remembering An Accident: Savar Building Collapse

April 25th, 2018

On April 24, 2013, in Savar Upazila of Dhaka District, Bangladesh, a five-story commercial building called Rana Plaza collapsed, killing 1,134 people. By May 13, 2013, when rescue efforts were halted, approximately 2,500 people had been rescued from the collapsed building, many of them injured. The incident is considered the deadliest garment-factory accident in recent history. So why did an accident like this happen in the modern day? Keep reading to find out.

The 400-page report exposed multiple causes of the collapse. For one, the mayor wrongfully granted the building’s owners construction permits for additional floors. To make the situation even worse, substandard materials were used and building code violations were ignored while the new floors were constructed.

To keep the factory producing through blackouts, the owners had large generators installed on the upper floors. These added significant weight and strain to the already poorly built upper levels; the report notes that every time the generators started up, the building shook.

On April 23, cracks began to form in the foundations and walls. An engineer was called in to examine the building and declared it unsafe, but the owners demanded that their workers return despite the unsafe conditions. Then, on April 24, 2013, during the morning rush hour, the building collapsed.

Major disasters are often wake-up calls for how important it is to ensure that they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid them. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to find and fix root causes reactively and to proactively identify significant issues that could lead to major problems.

To learn more about our courses and their locations, click on the links below.

5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training
2-Day TapRooT® Root Cause Analysis Essentials Training

 

How many precursor incidents did your site investigate last month? How many accidents did you prevent?

April 25th, 2018

A precursor incident is an incident that could have been worse. If another Safeguard had failed, if the sequence had been slightly different, or if your luck had been worse, the incident could have been a major accident, a fatality, or a significant injury. These incidents are sometimes called “hipos” (High Potential Incidents) or “potential SIFs” (Significant Injury or Fatality).

I’ve never talked to a senior manager who thought a major accident was acceptable. Most claim they are doing EVERYTHING possible to prevent them. But many senior managers don’t require advanced root cause analysis for precursor incidents. Incidents that didn’t have major consequences get classified as low-consequence events. People ask “Why?” five times and implement ineffective corrective actions. Sometimes these minor-consequence (but high-potential) incidents don’t even get reported. Management is letting precursor incidents continue to occur until a major accident happens.

Perhaps this is why I have never seen a major accident that didn’t have precursor incidents. That’s right! There were multiple chances to identify what was wrong and fix it BEFORE a major accident.

That’s why I ask the question …

“How many precursor incidents did your site investigate last month?”

If you are doing a good job identifying, investigating, and fixing precursor incidents, you should prevent major accidents.

Sometimes it is hard to tell how many major accidents you prevented. But the lack of major accidents will keep your management out of jail, off the hot seat, and sleeping well at night.

Keep Your Managers Out of These Pictures

That’s why it’s important to make sure that senior management knows about the importance of advanced root cause analysis (TapRooT®) and how it should be applied to precursor incidents to save lives, improve quality, and keep management out of trouble. You will find that the effort required to do a great investigation with effective corrective actions isn’t all that much more work than the poor investigation that doesn’t stop a future major accident.

Want to learn more about using TapRooT® to investigate precursor incidents? Attend one of our 2-Day TapRooT® Root Cause Analysis Courses. Or attend a 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course and learn to investigate precursor incidents and major accidents. Also consider training a group of people to investigate precursor incidents at a course at your site. Call us at 865-539-2139 or CLICK HERE to send us a message.

Is April just a bad month?

April 24th, 2018

I was reading a history of industrial/process safety accidents and noticed that all of the following happened in April:

Texas City nitrate explosion

April 16, 1947 – 2,200 tons of ammonium nitrate detonates in Texas City, Texas, destroying multiple facilities and killing 581 people.

Deepwater Horizon

April 20, 2010 – A blowout, explosions, and fire destroy the Deepwater Horizon, killing 11. The resulting oil spill was the worst in U.S. history.

West, Texas

April 17, 2013 – 10 tons of ammonium nitrate detonates in West, Texas, destroying most of the town and killing 15 people.

Maybe this is just my selective vision making a trend out of nothing, or maybe spring is a bad time for process safety. I’m sure it is a coincidence, but it sure seems strange.

Do you ever notice “trends” that make you wonder … “Is this really a trend?”

The best way to know is to apply our advanced trending techniques. Watch for our new book coming out this Summer and then plan to attend the course next March prior to the 2019 Global TapRooT® Summit.

Monday Accidents & Lessons Learned: Putting Yourself on the Right Side of Survival

April 23rd, 2018

While building an embankment to divert material away from a water supply, a front end loader operator experienced a close call. On March 13, 2018, the operator backed his front end loader over the top of a roadway berm; the loader slid down the embankment and landed upside down on its roof. Fortunately, the operator was wearing his seat belt. He unfastened it and escaped the overturned machine through the broken right-side window of the loader door.

Front end loaders are often involved in accidents due to a shift in the machine’s center of gravity. The U.S. Department of Labor Mine Safety and Health Administration (MSHA) documented this incident and issued the statement and best practices below for operating front end loaders.

The size and weight of front end loaders, combined with the limited visibility from the cab, make backing a front end loader potentially hazardous. To prevent a mishap when operating a front end loader:
• Load the bucket evenly and avoid overloading (refer to the load limits in the operating manual). Keep the bucket low when operating on hills.
• Construct berms or other restraints of adequate height and strength to prevent overtravel and warn operators of hazardous areas.
• Ensure that objects inside of the cab are secured so they don’t become airborne during an accident.
• ALWAYS wear your seatbelt.
• Maintain control of mobile equipment by traveling at safe speeds and not overloading equipment.

We would add the following best practices for loaders:
• Check the manufacturer’s recommendations and add appropriate wheel ballast or counterweight.
• Employ maximum stabilizing factors, such as moving the wheels to the widest setting.
• Ensure everyone within range of the loader location is a safe distance away.
• Operate the loader with its load as close to the ground as possible. Should the rear of the machine start to tip, the bucket will hit the ground before the loader overturns.

Use the TapRooT® System to put safety first and to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

How far away is death?

April 19th, 2018

Lockout-tagout fail: Notepaper sign – “Don’t start”

Are you ready for quality root cause analysis of a precursor incident?

April 17th, 2018

Many companies use TapRooT® to investigate major accidents. But investigating a major accident is like closing the barn door after the horse has bolted.

What should you be doing? Quality investigations of incidents that could have been major accidents. We call these precursor incidents. They could have been major accidents if something else had gone wrong, another safeguard had failed, or you were “unlucky” that day.

How do you do a quality investigation of a precursor incident? TapRooT® of course! See the Using the Essential TapRooT® Techniques to Investigate Low-to-Medium Risk Incidents book.


Or attend one of our TapRooT® Root Cause Analysis Courses.
