Category: Accidents

Strange Aviation Incident

August 16th, 2018

Imagine your corporate safety investigation of this…

Monday Accidents & Lessons Learned: An Assumption Can Lead You to Being All Wet

August 13th, 2018

IOGP Well Control Incident Lesson Sharing

The International Association of Oil & Gas Producers (IOGP) is the voice of the global upstream oil and gas industry. The oil and gas industry provides a significant proportion of the world’s energy to meet growing demands for heat, light, and transport. IOGP members produce 40 percent of the world’s oil and gas, operating in the Americas, Africa, Europe, the Middle East, the Caspian, Asia, and Australia.

IOGP shares a Well Control Incident Lesson Sharing report recounting a breakdown in communication, preparation and monitoring, and process control. Importantly, the findings show that the overarching project plan was based on the erroneous assumption that the reservoir was depleted. Let’s track this incident:

What happened?
In a field subjected to water flooding, gas readings suddenly increased while drilling through shales toward what was expected to be a depleted reservoir. The mud weight was increased, the well was shut in, and the drill string became stuck when the hole collapsed during kill operations. Water-flood breakthrough risks had not been communicated to the drill crew, and the crew did not adequately monitor the well during connections. The loss of well control, the hole, and the drill string was due to poor communication and poor well monitoring.

  • Drilling an 8½″ x 9½″ hole with 1.30 SG mud weight (MW) at 2,248 m; this mud density is used to drill the top-section shales for borehole stability purposes
  • Crossed a previously identified sand layer that was expected to be sub-hydrostatic (0.5 SG)
  • Observed connection gas readings up to 60%, plus a pack-off tendency
  • Increased the mud weight in steps to 1.35 SG, but gas readings were still high
  • Decided to shut the well in and observed pressure in the well: SIDP 400 psi, SICP 510 psi
  • A gain of roughly 10 m³ was estimated later (by post-mortem analysis of the previous pipe connection and pump-off logs)
  • Performed the Driller’s Method and killed the well by displacing 1.51 SG kill mud (see the kill-weight check after this list)
  • The open hole collapsed during circulation; the string became stuck and the kick zone was isolated
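The report does not show the kill-weight arithmetic, but the figures above line up with the standard static kill mud weight relation. The check below is only an illustrative sketch: it assumes the reported 2,248 m is true vertical depth, takes the 1.35 SG mud weight at shut-in as the starting point, and treats the difference from the reported 1.51 SG as a safety margin.

```latex
% Illustrative check (assumptions noted above), in SI units:
%   KMW (SG) = OMW (SG) + SIDPP (kPa) / (9.81 * TVD (m))
% SIDPP = 400 psi ~ 2758 kPa, TVD ~ 2248 m, OMW = 1.35 SG
\[
\mathrm{KMW} = 1.35 + \frac{2758}{9.81 \times 2248} \approx 1.35 + 0.13 \approx 1.48\ \mathrm{SG}
\]
```

The reported 1.51 SG kill mud is consistent with this estimate plus a small margin, and either figure is far above the sub-hydrostatic (0.5 SG) pressure the plan assumed, which is exactly the point of the lesson.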

What went wrong? 
The reservoir was expected to be depleted, but this part of the field had been artificially over-pressurized by a water injector well. This was not identified during the well preparation phase, and the risk was not transmitted to the drilling teams. There was a lack of crew vigilance and poor well monitoring during drill pipe (DP) connections. The high connection gas observed at surface was the result of crude contamination in the mud system. Significant gain volumes were taken during the previous pipe connections without being detected.

Corrective actions and recommendations 
  • The incident was shared with drilling personnel and used for training purposes.
  • Shared the experience and reinforced the well preparation process with rigorous risk identification; the hazard of continuous injection in a mature field is to be emphasized.
  • Reinforce well monitoring, specifically during pipe connections.
  • Review the mapping of injection on the field.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Dumping the Electronic Flight Bag En Route

August 6th, 2018

The electronic flight bag (EFB) has demonstrated improved capability to display aviation information such as airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance. This portable electronic hardware helps flight crews perform flight management tasks more efficiently. While the EFB provides many advantages and extensive improvements for the aviation community in general and for pilots specifically, some unexpected operational threats have surfaced.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. Today’s instance relates to EFB operation in a particular phase of flight:

An ERJ175 pilot attempted to expand the EFB display during light turbulence. Difficulties stemming from the turbulence and the marginal EFB mounting location rendered the EFB unusable, so the pilot chose to disregard the EFB entirely.

“We were on short final, perhaps 2,000 feet above field elevation. [It had been a] short and busy flight. I attempted to zoom in to the Jepp Chart, currently displayed on my EFB, to reference some information. The EFB would not respond to my zooming gestures. After multiple attempts, the device swapped pages to a different chart. I was able to get back to the approach page but could not read it without zooming. I attempted to zoom again but, with the light turbulence, I could not hold my arm steady enough to zoom. [There is] no place to rest your arm to steady your hand because of the poor mounting location on the ERJ175.

“After several seconds of getting distracted by…this EFB device, I realized that I was … heads-down for way too long and not paying enough attention to the more important things (e.g., acting as PM). I did not have the information I needed from the EFB. I had inadvertently gotten the EFB onto a company information page, which is bright white rather than the dark nighttime pages, so I turned off my EFB and continued the landing in VMC without the use of my EFB. I asked the PF to go extra slowly clearing the runway to allow me some time to get the taxi chart up after landing.

“… I understand that the EFB is new and there are bugs. This goes way beyond the growing pains. The basic usability is unreliable and distracting. In the cockpit, the device is nearly three feet away from the pilot’s face, mounted almost vertically, at a height level with your knees. All [EFB] gestures in the airplane must be made from the shoulder, not the wrist. Add some turbulence to that, and you have a significant heads-down distraction in the cockpit.”

The award-winning publication and monthly safety newsletter, CALLBACK, from NASA’s Aviation Safety Reporting System, shares reports, such as the one above, that reveal current issues, incidents, and episodes of some common problems that pilots have experienced. In this issue, we learned about precursor events that have occurred during the EFB’s adolescence.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to understand situations and find and fix problems. Attend one of our courses. Among our offerings are a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Remembering An Accident: Sayano-Shushenskaya Hydroelectric Dam

August 2nd, 2018

One of the world’s largest hydroelectric plants, the Sayano-Shushenskaya Hydroelectric Dam, suffered a catastrophic failure on August 17, 2009, that led to the deaths of 75 people and spilled 40 tons of oil into the Yenisei River. So how did a major incident like this happen?

On the day of the accident, the dam was undergoing major repairs and upgrades. Nine of the ten turbines were operating at full capacity, including the troublesome Turbine #2. That turbine had previously been offline because of persistent vibrations and maintenance issues, but it had been brought back online the previous night: a fire at the Bratsk Power Station had caused a drop in electricity production, and the decision was made to run Turbine #2 to help cover the shortfall.

Just before 8:13 am, large vibrations were felt by a technician on the roof; according to his account of the incident, the vibrations gradually grew into a loud roar. Shortly after, two massive explosions occurred and Turbine #2 shot through the floor, 50 feet into the air, before crashing back down. The water that had been spinning the turbine was now gushing out at a rate of 67,600 gallons per second, producing enormous pressure that ripped the room apart and led to the roof’s collapse.

Eventually the gushing water flooded the lower levels and submerged the other turbines. Unfortunately, the plant’s automatic safety system failed to shut down Turbines #7 and #9, which were operating at full capacity. This triggered short circuits that left the plant in total darkness, adding to the confusion and mayhem.

Several employees struggled to manually close the penstock intake gates and finally succeeded at 9:30 am, putting an end to the disastrous incident. Because of communication and system failures, 75 people lost their lives, many were injured, and 40 tons of oil polluted the Yenisei River. Repairing the damage caused by the explosion took years and cost US$89.3 million.

(Before & After Photo)

To learn more about the Sayano-Shushenskaya Hydroelectric Dam incident click here.

 

Major disasters are often wake-up calls, reminding us how important it is to ensure that they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid them. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to find and fix root causes reactively and to help identify precursors that could lead to major problems.

To learn more about our courses and their locations click on the links below.
5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training
2-Day TapRooT® Root Cause Analysis Essentials Training

Monday Accidents & Lessons Learned: Zooming to “Too Low Terrain”

July 30th, 2018

When the Electronic Flight Bag (EFB) platform—frequently a tablet device—was introduced into the cockpit as a new human-machine interface, it brought improvements for both pilots and the broader aviation community. But that interface has also encountered operational threats in the early years of EFB use.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. One routine problem occurs when a pilot “zooms,” or expands the screen to enlarge a detail, and unknowingly “slides” important information off the screen, making it no longer visible. A second type of problem manifests itself in difficulty operating the EFB in specific flight or lighting conditions. Yet a third wrinkle relates to EFB operation in a particular flight phase.

Let’s look at what happened in an A319 when “zoom” went awry:

Prior to departure, an A319 crew had to manage multiple distractions. An oversight, a technique, and a subtle EFB characteristic all subsequently combined to produce an unrecognized controlled flight toward terrain.

“We received clearance from Billings Ground, ‘Cleared … via the Billings 4 Departure, climb via the SID.’ During takeoff on Runway 10L from Billings, we entered IMC. The Pilot Flying (PF) leveled off at approximately 4,600 feet MSL, heading 098 [degrees]. We received clearance for a turn to the southeast … to join J136. We initiated the turn and then requested a climb from ATC. ATC cleared us up to 15,000 feet. As I was inputting the altitude, we received the GPWS alert, ‘TOO LOW TERRAIN.’ Immediately, the PF went to Take Off/Go Around (TO/GA) Thrust and pitched the nose up. The Pilot Monitoring (PM) confirmed TO/GA Thrust and hit the Speed Brake handle … to ensure the Speed Brakes were stowed. Passing 7,000 feet MSL, the PM announced that the Minimum Sector Altitude (MSA) was 6,500 feet within 10 nautical miles of the Billings VOR. The PF reduced the pitch, then the power, and we began an open climb up to 15,000 feet MSL. The rest of the flight was uneventful.

“On the inbound leg [to Billings], the aircraft had experienced three APU auto shutdowns. This drove the Captain to start working with Maintenance Control. During the turn, after completion of the walkaround, I started referencing multiple checklists … to prepare for the non-normal, first deicing of the year. I then started looking at the standard items. It was during this time that I looked at the BILLINGS 4 Departure, [pages] 10-3 and 10-3-1. There are no altitudes on … page [10-3], so I referenced [page] 10-3-1. On [page] 10-3-1 for the BILLINGS 4 Departure at the bottom, I saw RWY 10L, so I zoomed in to read this line. When I did the zoom, it cut off the bottom of the page, which is the ROUTING. Here it clearly states, ‘Maintain 15,000 or assigned lower.’ I never saw this line. When we briefed prior to push, the departure was briefed as, ‘Heading 098, climb to 4,600 feet MSL’; so neither the PF nor the PM saw the number 15,000 feet MSL. The 45-minute turn was busy with multiple nonstandard events. The weather was not great. However, that is no excuse for missing the 15,000-foot altitude on the SID.”

The award-winning publication and monthly safety newsletter, CALLBACK, from NASA’s Aviation Safety Reporting System, shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Why Does Blame “Make Sense”?

July 25th, 2018

Think about a recent accident …

  • a ship runs aground
  • a refinery has a major fire
  • an oil well has a blowout and explosion
  • a pharmaceutical plant makes a bad batch of drugs and it gets by the QA process and customers are harmed

One thing that you can be sure of in ALL of the accidents above is that:

someone screwed up!

You never have a major accident if all the Safeguards function as designed. And guess what … we depend on human actions, in many cases, as a significant or sometimes as the ONLY Safeguard.

Therefore, when an accident happens, there is usually at least one human action Safeguard that failed.

If you are in a blame-oriented organization, the obvious answer is to BLAME the individual (or team) that failed to prevent the accident. If you can find who is to blame and punish them, you can get back to work.

It MAKES SENSE because “if only they had done their job …” the accident would not have happened. Punishing the individual will set an example for everyone else and they will try harder not to make mistakes.

Sure enough, when the same accident doesn’t happen again right away, management believes they fixed the problem with blame and punishment.

I was thinking of this the other day when someone was talking to me about an investigation they had done using TapRooT®. They had recently adopted TapRooT® and, in the past, had frequently blamed people for accidents.

In this case, a worker had made a mistake when starting up a process. The mistake cost the facility over $200,000. The operator thought that she probably was going to be fired. Her apprehension wasn’t reduced when someone told her she was going to be “taprooted.”

She participated in the investigation and was pleasantly surprised. The investigation identified a number of Causal Factors including her “screw up.” But, to her surprise, they didn’t just stop there and blame her. They looked at the reasons for her mistake. They found there were three “root causes” that could be fixed (improvements that could be made) that would stop the mistake from being made in the future.

She came away realizing that anybody doing the same job could have made the same mistake. She saw how the investigation had improved the process to prevent future similar mistakes. She became a true believer in the TapRooT® System.

When you discover the real, fixable root causes of human performance-related Causal Factors, BLAME DOES NOT MAKE SENSE. In fact, blame is counterproductive.

If people see that the outcome of an investigation is usually blame and discipline, it won’t take long until most incidents, if at all possible, become mystery incidents.

What is a mystery incident?

A refinery plant manager told me this story:

Back early in his career, he had been an engineer involved in the construction and startup of a major facility. One day when they were doing testing, the electrical power to some vital equipment was lost and then came back on “by itself.” This caused damage to some of the equipment and a delay in the startup of the plant. An investigation was performed and no reason for the power failure or the reason for the power coming back on could be found. No one admitted to being in the vicinity of the breaker and the breaker was closed when it was checked after the incident.

Thirty years later they held an unofficial reunion of people who had worked on the project. At dinner, people shared funny stories about others and events that had happened. An electrician shared his story about accidentally opening the wrong breaker (they weren’t labeled) and then, when he heard alarms going off, re-shutting the breaker and leaving the area. He said “Well, I’m retired and they can’t punish me for it now.”

That electrician’s actions had been the cause of the incident. The refinery manager telling the story added that the electrician probably would have been fired if he had admitted what he had done at the time. The refinery manager then added that, “It is a good thing that we use TapRooT® and know better than to react to incidents that way. Now we look for and find root causes that improve our processes.”

Are you looking for the root causes of incidents and improving processes?

Or are you still back in the “bad old days” blaming people when a mistake happens?

If you haven’t been to a TapRooT® Course, maybe you should go now and see how to go beyond blame to find the real, fixable root causes of human error.

See our upcoming TapRooT® Courses by clicking on THIS LINK.

Or contact us to get a quote for a course at your site by CLICKING HERE.

And if your management still thinks that blame and punish is a good idea, maybe you should find a way to pass this article along (without being identified and blamed).

Monday Accidents & Lessons Learned: A Taxiway by Any Other Name

July 23rd, 2018

An EFB, or electronic flight bag, is portable electronic hardware, increasingly used on the flight deck or in the cabin to help flight crews perform flight management tasks more easily and efficiently. At a basic level, EFBs can perform flight-planning calculations and offer a variety of digital documentation, such as navigational charts, operations manuals, and aircraft checklists.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. This report illustrates how complications between EFBs and their human operators develop into precursor events:

“A B737 Captain encountered frustration while using his moving map. Although the specific incident is not cited, the Captain clearly identifies an EFB operational problem and offers a practical solution for the threat.

‘In [our] new version of [our EFB chart manager App], … a setting under Airport Moving Map (AMM) … says, “Set as default on landing,” [and I cannot] … turn it off. If [I] turn it off, it turns itself back on. This is bad.… It should be the pilot’s choice whether or not to display it at certain times—particularly after landing. Here’s the problem with the AMM: When you zoom out, the taxiway names disappear.

‘Consider this scenario: As you turn off the runway at a large airport, you look down at the map (which is the AMM, not the standard taxi chart, because the AMM comes on automatically, and [I] cannot turn that feature off). You get some complicated taxi instructions and then zoom out the AMM [to] get a general, big-picture idea of where you’re supposed to go. But when [I] zoom out the AMM, taxiway names disappear.… [I] have to switch back to the standard taxi chart and zoom and position that chart to get the needed information. That’s a lot of heads-down [tablet] manipulation immediately after exiting the runway, and it’s not safe.

‘[Pilots should have] control over whether or not to automatically display the AMM after landing. The AMM may work fine at a small airport, but at a large airport when given taxi instructions that are multiple miles long, the AMM is useless for big-picture situational awareness.'”

The award-winning publication and monthly safety newsletter, CALLBACK, from NASA’s Aviation Safety Reporting System, shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to understand situations and find and fix problems. Attend one of our courses. Among our offerings are a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When One Good Turn Definitely Doesn’t Deserve Another

July 16th, 2018

The electronic flight bag (EFB) is rapidly replacing pilots’ conventional papers in the cockpit. While the EFB has demonstrated improved capability to display aviation information—airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance—NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies, such as this one:

“This B757 Captain received holding instructions during heavy traffic. While manipulating his EFB for clarification, he inadvertently contributed to an incorrect holding entry.

‘[We were] asked to hold at SHAFF intersection due to unexpected traffic saturation. While setting up the FMC and consulting the arrival chart, I expanded the view on my [tablet] to find any depicted hold along the airway at SHAFF intersection. In doing so, I inadvertently moved the actual hold depiction…out of view and [off] the screen.

‘The First Officer and I only recall holding instructions that said to hold northeast of SHAFF, 10-mile legs. I asked the First Officer if he saw any depicted hold, and he said, “No.” We don’t recall instructions to hold as depicted, so not seeing a depicted hold along the airway at SHAFF, we entered a right-hand turn. I had intended to clarify the holding side with ATC, however there was extreme radio congestion and we were very close to SHAFF, so the hold was entered in a right-hand turn.

‘After completing our first 180-degree turn, the controller informed us that the hold at SHAFF was left turns. We said that we would correct our holding side on the next turn. Before we got back to SHAFF for the next turn, we were cleared to [the airport].'”

Volpe National Transportation Systems Center, U.S. Department of Transportation, weighs in on EFBs: “While the promise of EFBs is great, government regulators, potential customers, and industry developers all agree that EFBs raise many human factors considerations that must be handled appropriately in order to realize this promise without adverse effects.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accident & Lessons Learned: Fatal Accident While Unloading a Truck

July 9th, 2018

WKBW TV reported that two employees were killed when a stack of Corian countertops weighing 800 pounds per slab (11 slabs total) fell on them. The accident occurred at 1:30 a.m., and the employees were pronounced dead at the scene.

The company issued a statement that said:

“We’re saddened to report that a tragic accident occurred this morning at our facility in Lockport, NY that resulted in the death of two of our colleagues. We’re working with our safety team and local law enforcement to understand the circumstances under which this tragedy occurred. Our deepest sympathies are with their families and our Lockport team members.”

OSHA will be conducting an investigation of the deaths. OSHA has investigated two other serious injuries at XPO facilities elsewhere in New York state.

What was the Hazard in this accident?

The heavy, high-piled load.

Who were the targets?

The two employees.

What were the Safeguards?

From the newspaper articles, we don’t know.

We also don’t know any of the reasons for the Safeguard’s failure.

The root cause analysis will have to determine the Safeguards, why they failed, and if they were sufficient. (Do we need additional Safeguards?)

In the TapRooT® System, a SnapCharT® would be used to collect and organize the information about what happened.

Then the failed Safeguards would be identified.

Next, the failed Safeguards (Causal Factors) would be analyzed to find their root causes using the Root Cause Tree® Diagram.

Once the root causes for all the Safeguards were found, the team would start developing corrective actions.

The Safeguards would be reviewed to see whether, after they were strengthened, they would be adequate. If not, either additional Safeguards would be developed or the process could be modified to reduce or remove the hazard. For example, stack the countertops no more than 16 inches high.

To improve the Safeguards that failed, you would address each of the root causes by developing SMARTER corrective actions using the Corrective Action Helper® Module of TapRooT® Software.

What is a SMARTER corrective action?

Specific

Measurable

Accountable

Reasonable

Timely

Effective

Reviewed

To learn more about the TapRooT® System, SnapCharT®, Safeguard Analysis, Causal Factors, the Root Cause Tree® Diagram, the Corrective Action Helper® Module, the TapRooT® Software, and SMARTER, attend one of our 2-Day or 5-Day TapRooT® Courses. Here is a list of the dates and locations of the courses being held around the world:

http://www.taproot.com/store/Courses

One Safeguard Eliminated = Death

July 3rd, 2018

Watch the video and see what could have been done to avoid a fatality…

I think there was an obvious Safeguard missing. What was it?

Monday Accidents & Lessons Learned: Where Did We Put the Departure Course?

July 2nd, 2018

Have you ever encountered a new methodology or product that you deemed the best thing ever, only to discover in a too-close-for-comfort circumstance that what seemed a game changer had a real downside?

In aviation, the Electronic Flight Bag (EFB) is the electronic equivalent to the pilot’s traditional flight bag. It contains electronic data and hosts EFB applications, and it is generally replacing the pilots’ conventional papers in the cockpit. The EFB has demonstrated improved capability to display aviation information such as airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance.

The EFB platform, frequently a tablet device, introduces a relatively new human-machine interface into the cockpit. While the EFB provides many advantages and extensive improvements for the aviation community in general and for pilots specifically, some unexpected operational threats have surfaced during its early years.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. One typical problem occurs when a pilot “zooms,” or expands the screen to enlarge a detail, thereby unknowingly “sliding” important information off the screen, making it no longer visible.

An Airbus A320 crew was given a vector to intercept course and resume the departure procedure, but the advantage that the EFB provided in one area generated a threat in another.

From the Captain’s Report:

“Air Traffic Control (ATC) cleared us to fly a 030 heading to join the GABRE1 [Departure]. I had never flown this Standard Instrument Departure (SID). I had my [tablet] zoomed in on the Runway 6L/R departure side so I wouldn’t miss the charted headings. This put Seal Beach [VOR] out of view on the [tablet]. I mistakenly asked the First Officer to sequence the Flight Management Guidance Computer (FMGC) between GABRE and FOGEX.”

From the First Officer’s Report:

“During our departure off Runway 6R at LAX [while flying the] GABRE1 Departure, ATC issued, ‘Turn left 030 and join the GABRE1 Departure.’ This was the first time for both pilots performing this SID and the first time departing this runway for the FO. Once instructed to join the departure on the 030 heading, I extended the inbound radial to FOGEX and inserted it into the FMGC. With concurrence from the Captain, I executed it. ATC queried our course and advised us that we were supposed to intercept the Seal Beach VOR 346 radial northbound. Upon review, both pilots had the departure zoomed in on [our tablets] and did not have the Seal Beach [VOR] displayed.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Remembering An Accident: Ixtoc I Oil Spill

June 27th, 2018

On June 3, 1979, the Ixtoc I, located in the Bay of Campeche in the Gulf of Mexico, exploded and caught fire at 3:30 AM. The Ixtoc oil spill became one of the largest oil spills in history.

So what happened?

On June 2nd, the day before the blowout, the drill hit a spot of soft sedimentary soil that caused a bit weight reduction, also known as a break. This break, unfortunately, caused a fracture in the well’s piping that resulted in a complete loss of drilling mud circulation. As Pemex and Sedco argued over the best course of action, oil began to build up in the well column. By the time the decision was made to remove the drill, it was a little too late: pressure had built up to extremely high, dangerous, and unstable levels. This caused a surge of mud to race up the drill pipe and spill onto the drilling platform. After the surge of mud, multiple safety failures occurred, creating a chain of events that resulted in the disastrous blowout.

 

After the blowout, the escaping oil ignited when it came into contact with gas fumes from a motor that powered the derrick aboard the platform, and the Ixtoc exploded at 3:30 am. The fire burned until about 10:00 am the next day and caused the Sedco 135 drilling tower to collapse, leading to the total loss of the drilling rig. Oil continued spilling out of the destroyed well for 290 days, until the well was finally capped on March 23, 1980.

 

To learn more about the Ixtoc I Oil Spill click here.

 

Major disasters are often wake-up calls, reminding us how important it is to ensure that they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid them. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to find and fix root causes reactively and to help identify precursors that could lead to major problems.

To learn more about our courses and their locations click on the links below.
5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training
2-Day TapRooT® Root Cause Analysis Essentials Training

 

What does a bad day look like?

June 26th, 2018

“I told the new construction crew to up their game after they blocked the door with bricks. Wonder what they thought I meant?”

Monday Accident & Lessons Learned: What Does a Human Error Cost? A £566,670 Fine in the UK!

June 25th, 2018

Dump truck (not the actual truck; for illustration only)

The UK HSE fined a construction company £566,670 after a dump truck touched (or came near) a power line causing a short.

No one was hurt and the truck suffered only minor damage.

The driver tried to pull forward to finish dumping his load and caused a short.

Why did the company get fined?

“A suitable and sufficient assessment would have identified the need to contact the Distribution Network Operator, Western Power, to request the OPL’s were diverted underground prior to the commencement of construction. If this was not reasonably practicable, Mick George Ltd should have erected goalposts either side of the OPL’s to warn drivers about the OPL’s. “

That was the statement from the UK HSE Inspector as quoted in a Hazardex article.

What Safeguards do you need to keep a simple human error from becoming an accident (or a large fine)?

Performing a Safeguard Analysis before starting work is always a good idea. Learn more about using Safeguard Analysis proactively at our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course. See the upcoming public course dates around the world at:

http://www.taproot.com/store/5-Day-Courses/

Monday Accident & Lessons Learned: Why is Right of Way Maintenance Important?

June 18th, 2018

Here is another example of why right of way maintenance is important for utility transmission and distribution departments …

Wildfires

An article on Hazardex reported that the California Department of Forestry and Fire Protection (Cal Fire) said in a press release that 12 of the wildfires that raged across California’s wine country were due to tree branches touching PG&E power lines.

Eight of the 12 fires have been referred to county District Attorney’s offices for potential criminal prosecution for alleged violations of California laws.

The fires last October killed 44 people, burned more than 245,000 acres, and caused at least $9.4 billion in insured losses. PG&E has informed its shareholders that it could be liable for costs in excess of the $800 million in insurance coverage that it has for wildfires.

PG&E is lobbying state legislators for relief; it attributes the fires to climate change and says it should not be held liable for the damage.

What lessons can you learn from this?

Sometimes the cost of delayed maintenance is much higher than the cost of performing the maintenance.

Can you tell which maintenance is safety critical?

Do you know the risks associated with your deferred maintenance?

Things to think about.

Monday Accidents & Lessons Learned: Missing a Mode Change

June 11th, 2018

A B737-800 Captain became distracted while searching for traffic during his approach. Both he and the First Officer missed the FMA mode change indication, which resulted in an altitude deviation in a terminal environment.

From the Captain’s Report:
“Arrival into JFK, weather was CAVU. Captain was Pilot Flying, First Officer was Pilot Monitoring. Planned and briefed the visual Runway13L with the RNAV (RNP) Rwy 13L approach as backup. Approach cleared us direct to ASALT, cross ASALT at 3,000, cleared approach. During the descent, we received several calls for a VFR target at our 10 to 12 o’clock position. We never acquired the traffic visually, but we had him on TCAS. Eventually Approach advised, “Traffic no factor, contact Tower.” On contact with Tower, we were cleared to land. Approaching ASALT, I noticed we were approximately 500 feet below the 3,000 foot crossing altitude. Somewhere during the descent while our attention was on the VFR traffic, the plane dropped out of VNAV PATH, and I didn’t catch it. I disconnected the autopilot and returned to 3,000 feet. Once level, I reengaged VNAV and completed the approach with no further problems.”

From the First Officer’s Report:
“FMA mode changes are insidious. In clear weather, with your head out of the cockpit clearing for traffic in a high density environment, especially at your home field on a familiar approach, it is easy to miss a mode change. This is a good reminder to keep instruments in your cross check on those relatively few great weather days.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes. At times, the reports involve mode awareness, mode selection, and mode expectation problems involving aircraft automation that are frequently experienced by the Mode Monitors and Managers in today’s aviation environment.


We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

New Study Suggests Poor Officer Seamanship Training Across the Navy – Is This a Generic Cause of 2017 Fatal Navy Ship Collisions?

June 7th, 2018

BLAME IS NOT A ROOT CAUSE

It is hard to do a root cause analysis from afar with only newspaper stories as your source of facts … but a recent article in The Washington Times shed some light on a potential generic cause of last year’s fatal collisions.

The Navy conducted an assessment of the seamanship skills of 164 first-tour junior officers. The results were as follows:

  • 16% (27 of 164) – no concerns
  • 66% (108 of 164) – some concerns
  • 18% (29 of 164) – significant concerns

With almost 1 out of 5 having significant concerns, and two-thirds having some concerns, it made me wonder about the blame being placed on the ships’ Commanding Officers and crews. Were they set up for failure by a training program that sent officers to sea who didn’t have the skills needed to perform their jobs as Officer of the Deck and Junior Officer of the Deck?

The blame-heavy initial investigations certainly didn’t highlight this generic training problem, which the Navy now seems to be addressing.

Navy officers who cooperated with the Navy’s investigations faced courts-martial despite their cooperation.


According to an article in The Maritime Executive, Lt. j.g. Sarah Coppock, Officer of the Deck during the USS Fitzgerald collision, pleaded guilty to charges to avoid facing a court-martial. Was she properly trained? Would the Navy’s evaluators have had “concerns” about her abilities if she had been evaluated BEFORE the collision? Was this accident due to the abbreviated training that the Navy instituted to save money?

Note that the press release included information that hadn’t previously been made public: the Fitzgerald’s main navigation radar was known to be malfunctioning, and Lt. j.g. Coppock thought she had done calculations showing that the merchant ship would pass safely astern.


In other blame-related news, the Chief Boatswain’s Mate on the USS McCain pleaded guilty to dereliction of duty for the training of personnel to use the Integrated Bridge Navigation System, which had been newly installed on the McCain four months before he arrived. His total training on the system was 30 minutes of instruction by a “master helmsman.” He had never used the system on a previous ship and had requested additional training and documentation on the system, but had not received any help prior to the collision.

He thought that the three sailors on duty from the USS Antietam, a cruiser, were familiar with the steering system. However, after the crash he discovered that the USS McCain was the only ship in the 7th Fleet with this system and that the transferred sailors were not familiar with it.

On his previous ship, Chief Butler took action to avoid a collision at sea when a steering system failed during an underway replenishment, and he won the 2014 Sailor of the Year award. Yet the Navy would have us believe that he was a “bad sailor” (derelict in his duties) aboard the USS McCain.


Also blamed was the CO of the USS McCain, Commander Alfredo J. Sanchez. He pleaded guilty to dereliction of duty in a pretrial agreement. Commander Sanchez was originally also charged with negligent homicide and hazarding a vessel, but both of those charges were dropped as part of the pretrial agreement.

Maybe I’m seeing a pattern here: pretrial agreements and guilty pleas to reduced charges to avoid putting the Navy on trial for systemic deficiencies (perhaps the real root causes of the collisions).

Would your root cause analysis system tend to place blame or would it find the true root and generic causes of your most significant safety, quality, and equipment reliability problems?

The TapRooT® Root Cause Analysis System is designed to look for the real root and generic causes of issues without placing unnecessary blame. Find out more at one of our courses:

http://www.taproot.com/courses

Monday Accidents & Lessons Learned: Watch It Like It’s Hot

June 4th, 2018

A B737 crew was caught off-guard during descent. The threat was real and had been previously known. The crew did not realize that the aircraft’s vertical navigation had reverted to a mode less capable than VNAV PATH.

From the Captain’s Report:
“While descending on the DANDD arrival into Denver, we were told to descend via. We re-cruised the current altitude while setting the bottom altitude in the altitude window. Somewhere close to DANDD intersection, the aircraft dropped out of its vertical mode and, before we realized it, we descended below the 17,000 foot assigned altitude at DANDD intersection to an altitude of nearly 16,000 feet. At once, I kicked off the autopilot and began to climb back to 17,000 feet, which we did before crossing the DANDD intersection. Reviewing the incident, we still don’t know what happened. We had it dialed in, and the vertical mode reverted to CWS PITCH (CWS P).

“Since our software is not the best and we have no aural warnings of VNAV SPD or CWS P, alas, we must watch it ever more closely—like a hawk.”

From the First Officer’s Report:
“It would be nice to have better software—the aircraft constantly goes out of VNAV PATH and into VNAV SPEED for no reason, and sometimes the VNAV disconnects for no reason, like it did to us today.”

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Why do we still have major process safety accidents?

May 30th, 2018

I had an interesting argument about root cause analysis and process safety. The person I was arguing with thought that 5-Whys was a good technique to use for process safety incidents that had low consequences.

Let me start by saying that MOST process safety incidents have low actual consequences. The reason they need to be prevented is that they are major accident precursors. If one or more additional Safeguards had failed, they would become a major accident. Thus, their potential consequences are high.

From my previous writings (a sample of links below), you know that I consider 5-Whys to be an inferior root cause analysis tool.

If you don’t have time to read the links above, then consider the results you have observed when people use 5-Whys. The results are:

  • Inconsistent (different people get different results when analyzing the same problem)
  • Prone to bias (you get what you look for)
  • Don’t find the root causes of human errors
  • Don’t consistently find management system root causes

And that’s just a start of the list of performance problems.

So why do people say that 5-Whys is a good technique (or a good enough technique)? It usually comes down to their confidence. They are confident in their ability to find the causes of problems without a systematic approach to root cause analysis. They believe they already know the answers to these simple problems and that it is a waste of time to use a more rigorous approach. Thus, their knowledge and a simple (inferior) technique is enough.

Because they have so much confidence in their ability, it is difficult to show them the weaknesses in 5-Whys because their answer is always:

“Of course, any technique can be misused,
but a good 5-Whys wouldn’t have that problem.”

And a good 5-Whys is the one THEY would do.

If you point out problems with one of their root cause analyses using 5-Whys, they say you are nitpicking and stop the conversation because you are “overly critical and no technique is perfect.”

Of course, I agree. No technique is perfect. But some are much better than others. And the results show when the techniques are applied.

And that got me thinking …

How many major accidents had precursor incidents
that were investigated using 5-Whys and the corrective
actions were ineffective (didn’t prevent the major accident)?

Next time you have a major accident, look for precursors and check why their root cause analysis and corrective actions didn’t prevent the major accident. Maybe that will convince you that you need to improve your root cause analysis.

If you want to sample advanced root cause analysis, attend a 2-Day or a 5-Day TapRooT® Course.

The 2-Day TapRooT® Root Cause Analysis Course is for people who investigate precursor incidents (low-to-moderate consequences).

The 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course is for people who investigate precursor incidents (low-to-moderate consequences) AND perform major investigations (fatalities, fires, explosions, large environmental releases, or other costly events).

See the schedule for upcoming public courses that are held around the world HERE. Just click on your continent to see courses near you.

Monday Accidents & Lessons Learned: Who’s in Charge?

May 28th, 2018

An ERJ-145 crew failed to detect a change in its vertical navigation mode during descent. When it was eventually discovered, corrective action was taken, but large deviations from the desired flight path may have already compromised safety.

“This event occurred while being vectored for a visual approach. The First Officer (FO) was the Pilot Flying and I was Pilot Monitoring. ATC had given us a heading to fly and a clearance to descend to 3,000 feet. 3,000 was entered into the altitude preselect, was confirmed by both pilots, and a descent was initiated. At about this time, we were also instructed to maintain 180 knots. Sometime later, I noticed that our speed had begun to bleed off considerably, approximately 20 knots, and was still decaying. I immediately grabbed the thrust levers and increased power, attempting to regain our airspeed. At about this time, it was noticed that the preselected altitude had never captured and that the Flight Mode Annunciator (FMA) had entered into PITCH MODE at some point. It became apparent that after the aircraft had started its descent, the altitude preselect (ASEL) mode had changed to pitch and was never noticed by either pilot. Instead of descending, the aircraft had entered a climb at some point, and this was not noticed until an appreciable amount of airspeed decay had occurred. At the time that this event was noticed, the aircraft was approximately 900 feet above its assigned altitude. Shortly after corrective action was begun, ATC queried us about our climbing instead of descending. We replied that we were reversing the climb. The aircraft returned to its assigned altitude, and a visual approach was completed without any further issues.

“[We experienced a] large decrease in indicated airspeed. The event occurred because neither pilot noticed the Flight Mode Annunciator (FMA) entering PITCH MODE. Thrust was added, and then the climb was reversed in order to descend back to our assigned altitude. Both pilots need to reaffirm that their primary duty is to fly and monitor the aircraft at all times, starting with the basics of heading, altitude, airspeed, and performance.”

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.
