Category: Accidents

Monday Accidents & Lessons Learned: Missing a Mode Change

June 11th, 2018 by

A B737-800 Captain became distracted while searching for traffic during his approach. Both he and the First Officer missed the FMA mode change indication, which resulted in an altitude deviation in a terminal environment.

From the Captain’s Report:
“Arrival into JFK, weather was CAVU. Captain was Pilot Flying, First Officer was Pilot Monitoring. Planned and briefed the visual Runway 13L with the RNAV (RNP) Rwy 13L approach as backup. Approach cleared us direct to ASALT, cross ASALT at 3,000, cleared approach. During the descent, we received several calls for a VFR target at our 10 to 12 o’clock position. We never acquired the traffic visually, but we had him on TCAS. Eventually Approach advised, “Traffic no factor, contact Tower.” On contact with Tower, we were cleared to land. Approaching ASALT, I noticed we were approximately 500 feet below the 3,000 foot crossing altitude. Somewhere during the descent while our attention was on the VFR traffic, the plane dropped out of VNAV PATH, and I didn’t catch it. I disconnected the autopilot and returned to 3,000 feet. Once level, I reengaged VNAV and completed the approach with no further problems.”

From the First Officer’s Report:
“FMA mode changes are insidious. In clear weather, with your head out of the cockpit clearing for traffic in a high density environment, especially at your home field on a familiar approach, it is easy to miss a mode change. This is a good reminder to keep instruments in your cross check on those relatively few great weather days.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes. At times, the reports involve mode awareness, mode selection, and mode expectation problems with aircraft automation that are frequently experienced by the Mode Monitors and Managers in today’s aviation environment.


We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

New Study Suggests Poor Officer Seamanship Training Across the Navy – Is This a Generic Cause of 2017 Fatal Navy Ship Collisions?

June 7th, 2018 by

BLAME IS NOT A ROOT CAUSE

It is hard to do a root cause analysis from afar with only newspaper stories as your source of facts … but a recent article in The Washington Times shed some light on a potential generic cause of last year’s fatal collisions.

The Navy conducted an assessment of the seamanship skills of 164 first-tour junior officers. The results were as follows:

  • 16% (27 of 164) – no concerns
  • 66% (108 of 164) – some concerns
  • 18% (29 of 164) – significant concerns

With almost 1 out of 5 rated as having significant concerns, and two-thirds having some concerns, I began to wonder about the blame being placed on the ships’ Commanding Officers and crews. Were they set up for failure by a training program that sent officers to sea who didn’t have the skills needed to perform their jobs as Officer of the Deck and Junior Officer of the Deck?

The blame-heavy initial investigations certainly didn’t highlight this generic training problem, which the Navy now seems to be addressing.

Navy officers who cooperated with the Navy’s investigations later faced courts-martial.


According to an article in The Maritime Executive, Lt. j.g. Sarah Coppock, Officer of the Deck during the USS Fitzgerald collision, pleaded guilty to charges to avoid facing a court-martial. Was she properly trained, or would the Navy’s evaluators have had “concerns” about her abilities if she had been evaluated BEFORE the collision? Was this accident due to the abbreviated training that the Navy instituted to save money?

Note that in the press release, information came out that hadn’t previously been made public: the Fitzgerald’s main navigation radar was known to be malfunctioning, and Lt. j.g. Coppock thought she had done calculations showing that the merchant ship would pass safely astern.


In other blame-related news, the Chief Boatswain’s Mate on the USS McCain pleaded guilty to dereliction of duty related to the training of personnel to use the Integrated Bridge Navigation System, which had been newly installed on the McCain four months before he arrived. His total training on the system was 30 minutes of instruction by a “master helmsman.” He had never used the system on a previous ship, and he requested additional training and documentation on the system but had not received any help prior to the collision.

He thought that the three sailors on duty, transferred from the cruiser USS Antietam, were familiar with the steering system. However, after the crash he discovered that the USS McCain was the only ship in the 7th Fleet with this system and that the transferred sailors were not familiar with it.

On his previous ship, Chief Butler took action to avoid a collision at sea when a steering system failed during an underway replenishment, and he won the 2014 Sailor of the Year award. Yet the Navy would have us believe that he was a “bad sailor” (derelict in his duties) aboard the USS McCain.


Also blamed was the CO of the USS McCain, Commander Alfredo J. Sanchez. He pleaded guilty to dereliction of duty in a pretrial agreement. Commander Sanchez was originally charged with negligent homicide and hazarding a vessel, but those charges were dropped as part of the pretrial agreement.

Maybe I’m seeing a pattern here: pretrial agreements and guilty pleas to reduced charges that avoid putting the Navy on trial for systemic deficiencies (perhaps the real root causes of the collisions).

Would your root cause analysis system tend to place blame or would it find the true root and generic causes of your most significant safety, quality, and equipment reliability problems?

The TapRooT® Root Cause Analysis System is designed to look for the real root and generic causes of issues without placing unnecessary blame. Find out more at one of our courses:

http://www.taproot.com/courses

Monday Accidents & Lessons Learned: Watch It Like It’s Hot

June 4th, 2018 by

A B737 crew was caught off-guard during descent. The threat was real and had been previously known. The crew did not realize that the aircraft’s vertical navigation had reverted to a mode less capable than VNAV PATH.

From the Captain’s Report:
“While descending on the DANDD arrival into Denver, we were told to descend via. We re-cruised the current altitude while setting the bottom altitude in the altitude window. Somewhere close to DANDD intersection, the aircraft dropped out of its vertical mode and, before we realized it, we descended below the 17,000 foot assigned altitude at DANDD intersection to an altitude of nearly 16,000 feet. At once, I kicked off the autopilot and began to climb back to 17,000 feet, which we did before crossing the DANDD intersection. Reviewing the incident, we still don’t know what happened. We had it dialed in, and the vertical mode reverted to CWS PITCH (CWS P).

“Since our software is not the best and we have no aural warnings of VNAV SPD or CWS P, alas, we must watch it ever more closely—like a hawk.”

From the First Officer’s Report:
“It would be nice to have better software—the aircraft constantly goes out of VNAV PATH and into VNAV SPEED for no reason, and sometimes the VNAV disconnects for no reason, like it did to us today.”

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Why do we still have major process safety accidents?

May 30th, 2018 by

I had an interesting argument about root cause analysis and process safety. The person I was arguing with thought that 5-Whys was a good technique to use for process safety incidents that had low consequences.

Let me start by saying that MOST process safety incidents have low actual consequences. The reason they need to be prevented is that they are major accident precursors. If one or more additional Safeguards had failed, they would become a major accident. Thus, their potential consequences are high.

From my previous writings, you know that I consider 5-Whys to be an inferior root cause analysis tool.

If you don’t have time to read those articles, then consider the results you have observed when people use 5-Whys. The results are:

  • Inconsistent (different people get different results when analyzing the same problem)
  • Prone to bias (you get what you look for)
  • Don’t find the root causes of human errors
  • Don’t consistently find management system root causes

And that’s just a start of the list of performance problems.

So why do people say that 5-Whys is a good technique (or a good enough technique)? It usually comes down to their confidence. They are confident in their ability to find the causes of problems without a systematic approach to root cause analysis. They believe they already know the answers to these simple problems and that it is a waste of time to use a more rigorous approach. Thus, their knowledge and a simple (inferior) technique is enough.

Because they have so much confidence in their ability, it is difficult to show them the weaknesses in 5-Whys because their answer is always:

“Of course, any technique can be misused,
but a good 5-Whys wouldn’t have that problem.”

And a good 5-Whys is the one THEY would do.

If you point out problems with one of their root cause analyses using 5-Whys, they say you are nitpicking and stop the conversation because you are “overly critical and no technique is perfect.”

Of course, I agree. No technique is perfect. But some are much better than others. And the results show when the techniques are applied.

And that got me thinking …

How many major accidents had precursor incidents
that were investigated using 5-Whys and the corrective
actions were ineffective (didn’t prevent the major accident)?

Next time you have a major accident, look for precursors and check why their root cause analysis and corrective actions didn’t prevent the major accident. Maybe that will convince you that you need to improve your root cause analysis.

If you want to sample advanced root cause analysis, attend a 2-Day or a 5-Day TapRooT® Course.

The 2-Day TapRooT® Root Cause Analysis Course is for people who investigate precursor incidents (low-to-moderate consequences).

The 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course is for people who investigate precursor incidents (low-to-moderate consequences) AND perform major investigations (fatalities, fires, explosions, large environmental releases, or other costly events).

See the schedule for upcoming public courses that are held around the world HERE. Just click on your continent to see courses near you.

Monday Accidents & Lessons Learned: Who’s in Charge?

May 28th, 2018 by

An ERJ-145 crew failed to detect a change in its vertical navigation mode during descent. When it was eventually discovered, corrective action was taken, but large deviations from the desired flight path may have already compromised safety.

“This event occurred while being vectored for a visual approach. The First Officer (FO) was the Pilot Flying and I was Pilot Monitoring. ATC had given us a heading to fly and a clearance to descend to 3,000 feet. 3,000 was entered into the altitude preselect, was confirmed by both pilots, and a descent was initiated. At about this time, we were also instructed to maintain 180 knots. Sometime later, I noticed that our speed had begun to bleed off considerably, approximately 20 knots, and was still decaying. I immediately grabbed the thrust levers and increased power, attempting to regain our airspeed. At about this time, it was noticed that the preselected altitude had never captured and that the Flight Mode Annunciator (FMA) had entered into PITCH MODE at some point. It became apparent that after the aircraft had started its descent, the altitude preselect (ASEL) mode had changed to pitch and was never noticed by either pilot. Instead of descending, the aircraft had entered a climb at some point, and this was not noticed until an appreciable amount of airspeed decay had occurred. At the time that this event was noticed, the aircraft was approximately 900 feet above its assigned altitude. Shortly after corrective action was begun, ATC queried us about our climbing instead of descending. We replied that we were reversing the climb. The aircraft returned to its assigned altitude, and a visual approach was completed without any further issues.

“[We experienced a] large decrease in indicated airspeed. The event occurred because neither pilot noticed the Flight Mode Annunciator (FMA) entering PITCH MODE. Thrust was added, and then the climb was reversed in order to descend back to our assigned altitude. Both pilots need to reaffirm that their primary duty is to fly and monitor the aircraft at all times, starting with the basics of heading, altitude, airspeed, and performance.”

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

“People are SO Stupid”: Horrible Comments on LinkedIn

May 23rd, 2018 by

 

 

How many people have seen those videos on LinkedIn and Facebook that show people doing really dumb things at work? Recently, LinkedIn seems to be full of those types of videos. I’m sure it has something to do with their search algorithms targeting those types of safety posts toward me. Still, there are a lot of them.

The videos themselves don’t bother me. They show real people doing unsafe things, or actual accidents, which happen every day in real life. What REALLY bothers me are the comments that people post under each video. Again concentrating on LinkedIn, people comment on how dumb people are, or how they wouldn’t put up with that, or “stupid is as stupid does!”

Here are a couple of examples I pulled up in about 5 minutes of scrolling through my LinkedIn feed; click on the pictures to see the comments that were made with the entries.

These comments tend to fall into several categories. Let’s take a look at them as groups.

“Those people are not following safety guideline xxxx. I blame operator “A” for this issue!”

Obviously, someone is not following a good practice.  If they were, we wouldn’t have had the issue, right?  It isn’t particularly helpful to just point out the obvious problem.  We should be asking ourselves, “Why did this person decide that it was OK to do this?”  Humans perform split-second risk assessments all the time, in every task they perform.  What we need to understand is the basis of a person’s risk assessment.  Just pointing out that they performed a poor assessment is too easy.  Getting to the root cause is much more important and useful when developing corrective actions.

“Operators were not paying attention / being careful.”

No kidding. Humans are NEVER careful for extended periods of time. People are only careful when reminded, until they’re not. Watch your partner drive the car. They are careful much of the time, and then they need to change the radio station, or the cell phone buzzes, etc.

Instead of just noting that people in the video are not being careful, we should note what safeguards were in place (or should have been in place) to account for the human not paying attention.  We should ask what else we could have done in order to help the human do a better job.  Finding the answers to these questions is much more helpful than just blaming the person.

These videos are showing up more and more frequently, and the comments on the videos are showing how easy it is to just blame people instead of doing a human performance-based root cause analysis of the issue.  In almost all cases, we don’t even have enough information in the video to make a sound analysis.  I challenge you to watch these videos and avoid blaming the individual, making the following assumptions:

  1.  The people in the video are not trying to get hurt / break the equipment / make a mistake
  2.  They are NOT stupid.  They are human.
  3.  There are systems that we could put in place that make it harder for the human to make a mistake (or at least make it easier to do it right).

When viewing these videos in this light, it is much more likely that we can learn something constructive from these mistakes, instead of just assigning blame.

Two Incidents in the Same Year Cost UK Auto Parts Manufacturer £1.6m in Fines

May 22nd, 2018 by


Faltec Europe manufactures car parts in the UK. They had two incidents in 2015 related to health and safety.

The first was an outbreak of Legionnaires’ Disease due to a cooling water system that wasn’t being properly treated.

The second was an explosion and fire in the manufacturing facility.

For more details see:

http://press.hse.gov.uk/2018/double-investigation-leads-to-fine-for-north-east-car-parts-manufacturer-faltec-europe-limited/

The company was prosecuted by the UK HSE and was fined £800,000 for each incident plus £75,159.73 in costs and a victim surcharge of £120.

The machine that exploded had had precursor incidents, but the company had not taken adequate corrective actions.

Are you investigating your precursor incidents and learning from them to prevent major injuries/health issues, fires, and explosions?

Perhaps you should be applying advanced root cause analysis to find and fix the real root causes of equipment and human error related incidents. Learn more at one of our courses:

2-Day TapRooT® Root Cause Analysis Course

5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training

Want to see our courses in Europe? CLICK HERE.

You can attend our training at our public courses anywhere around the world. See the list by CLICKING HERE.

Would you like to sponsor a course at your site? Contact us for a quote by CLICKING HERE.

Monday Accidents & Lessons Learned: The Worst U.S. Maritime Accident in Three Decades

May 21st, 2018 by

The U.S.-flagged cargo ship, El Faro, and its crew of 33 men and women sank after sailing into Hurricane Joaquin. What went wrong and why did an experienced sea captain sail his crew and ship directly into the eye of a hurricane? The investigation lasted two years. 

One of two ships owned by TOTE Maritime Inc., the El Faro constantly rotated between Jacksonville, Florida, and San Juan, Puerto Rico, transporting everything from frozen chickens to milk to Mercedes Benzes to the island. The combination roll-on/roll-off and lift-on/lift-off cargo freighter was crewed by U.S. Merchant Marines. Should the El Faro miss a trip, TOTE would lose money, store shelves would be bare, and the Puerto Rican economy would suffer.

The El Faro, a 790-foot, 1970s steamship, set sail at 8:15 p.m. on September 29, 2015, with full knowledge of the National Hurricane Center warning that Tropical Storm Joaquin would likely strengthen to a hurricane within 24 hours.

The aging ship, albeit equipped with modern navigation and weather technology, had two boilers in need of service, no life vests or immersion suits, and open lifeboats that would not be launched once the captain gave the order to abandon ship in the midst of a savage hurricane.

As the Category 4 storm bore down on the Bahamas, winds peaking at 140 miles an hour, people and vessels headed for safety. All but one ship. On October 1, 2015, the SS El Faro steamed into the furious storm. Black skies. Thirty- to forty-foot waves. The Bermuda Triangle. Near San Salvador, the freighter found itself in the strongest October storm to hit these waters since 1866. Around 7:30 a.m. on October 1, the ship was taking on water and listing 15 degrees. The last report from the captain, though, indicated that the crew had managed to contain the flooding. Soon after, the freighter ceased all communications. All aboard perished in the worst U.S. maritime disaster in three decades. Investigators from the National Transportation Safety Board (NTSB) were left to wonder why.

When the NTSB launched one of the most thorough investigations in its long history, it spoke with dozens of experts, colleagues, friends, and family of the crew. The U.S. Coast Guard, with help from the Air Force, the Air National Guard, and the Navy, searched a 70,000-square-mile area off Crooked Island in the Bahamas, spotting debris, a damaged lifeboat, containers, and traces of oil. On October 31, 2015, the USNS Apache located the El Faro using the CURV 21, a remotely operated deep-ocean vehicle.

Thirty days after the El Faro sank, the ship was found 15,000 feet below sea level. The images of the sunken ship showed a breach in the hull and its main navigation tower missing. 

Finally came the crucial discovery: the ship’s voyage data recorder (VDR) was found on Tuesday, April 26, 2016, at a depth of about 4,600 meters and later retrieved by a submersible robot. This black box held everything uttered on the ship’s bridge, up to its final moments.

The big challenge was locating the VDR, a capsule only about a foot by eight inches. No commercial recorder had ever been recovered from that depth, where the pressure is nearly 7,000 pounds per square inch.
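As a rough back-of-the-envelope check (ours, not the NTSB’s), that pressure figure follows from simple hydrostatics: pressure equals water density times gravity times depth. Here is a minimal Python sketch, assuming a nominal seawater density of 1,025 kg/m³:

    # Rough check of the "nearly 7,000 psi" figure at the VDR's resting depth.
    # Assumes a nominal seawater density; illustrative only, not NTSB data.
    rho = 1025            # kg/m^3, typical seawater density (assumed)
    g = 9.81              # m/s^2, gravitational acceleration
    depth_m = 4600        # meters, depth at which the VDR was found (per the article)

    pressure_pa = rho * g * depth_m        # hydrostatic pressure in pascals
    pressure_psi = pressure_pa / 6894.76   # 1 psi = 6,894.76 Pa

    print(f"{pressure_psi:,.0f} psi")      # prints roughly 6,700 psi

That works out to about 6,700 psi, consistent with the “nearly 7,000 pounds per square inch” quoted above.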

The 26-hour recording was converted into the longest transcript ever produced by the NTSB: 510 pages. The recorder revealed that, at the outset, there was absolute certainty among the crew and the captain that going was the right thing to do. As the situation evolved and conditions deteriorated, the transcript reveals, the captain dismissed a crew member’s suggestion that they return to shore in the face of the storm. “No, no, no. We’re not gonna turn around,” he said. Captain Michael Davidson then said, “What I would like to do is get away from this. Let this do what it does. It certainly warrants a plan of action.” Davidson went below just after 7:57 p.m. and was neither heard from nor present on the bridge until 4:10 a.m. The El Faro and its crew had but three more hours after Davidson reappeared on the bridge; the recording ends at 7:39 a.m., ten minutes after Captain Davidson ordered the crew to abandon ship.

This NTSB graphic shows El Faro’s track line in green as the ship sailed from Jacksonville to Puerto Rico on October 1, 2015. Color-enhanced satellite imagery from close to the time the ship sank illustrates Hurricane Joaquin in red, with the storm’s eye immediately to the south of the accident site.

The NTSB determined that the probable cause of the sinking of El Faro and the subsequent loss of life was the captain’s insufficient action to avoid Hurricane Joaquin, his failure to use the most current weather information, and his late decision to muster the crew. Contributing to the sinking was ineffective bridge resource management on board El Faro, which included the captain’s failure to adequately consider officers’ suggestions. Also contributing to the sinking was the inadequacy of both TOTE’s oversight and its safety management system.

The NTSB’s investigation into the El Faro sinking identified the following safety issues:

  • Captain’s actions
  • Use of noncurrent weather information
  • Late decision to muster the crew
  • Ineffective bridge resource management
  • Company’s safety management system
  • Inadequate company oversight
  • Need for damage control plan
  • Flooding in cargo holds
  • Loss of propulsion
  • Downflooding through ventilation closures
  • Lack of appropriate survival craft

The report also addressed other issues, such as the automatic identification system and the U.S. Coast Guard’s Alternate Compliance Program. On October 1, 2017, the U.S. Coast Guard released findings from its investigation, conducted with the full cooperation of the NTSB. The 199-page report identified causal factors of the loss of 33 crew members and the El Faro, and proposed 31 safety recommendations and four administrative recommendations for future actions to the Commandant of the Coast Guard.

Captain Jason Neubauer, Chairman, El Faro Marine Board of Investigation, U.S. Coast Guard, made the statement, “The most important thing to remember is that 33 people lost their lives in this tragedy. If adopted, we believe the safety recommendations in our report will improve safety of life at sea.”

Avoid Big Problems By Paying Attention to the Small Stuff

May 16th, 2018 by

Almost every manager has been told not to micro-manage their direct reports. So the advice above:

Avoid Big Problems By Paying Attention to the Small Stuff

may sound counter-intuitive.

Perhaps this quote from Admiral Rickover, leader of the most successful organization to implement process safety and organizational excellence, might make the concept clearer:

The Devil is in the details, but so is salvation.

When you talk to senior managers who lived through a major accident (the type that gets bad national press and results in a management shakeup), they never saw it coming.

A Senior VP at a utility told me:

It was like I was walking along on a bright sunny day and
the next thing I knew, I was at the bottom of a deep dark hole.

They never saw the accident coming. But they should have. And they should have prevented it. But HOW?

I have never seen a major accident that wasn’t preceded by precursor incidents.

What is a precursor incident?

A precursor incident is an incident that has low to moderate consequences but could have been much worse if …

  • One or more Safeguards had failed
  • It was a bad day (you were unlucky)
  • You decided to cut costs just one more time and eliminated the hero that kept things from getting worse
  • The sequence had changed just a little (the problem occurred on night shift or other timing changed)

These types of incidents happen more often than people like to admit. Thus, they give management the opportunity to learn.

What is the response by most managers? Do they learn? NO. Why? Because the consequences of the little incidents are insignificant. Why waste valuable time, money, and resources investigating small-consequence incidents? As one Plant Manager said:

If we investigated  every incident, we would do nothing but investigate incidents.

Therefore, a quick-and-dirty root cause analysis is performed (think 5-Whys), and some easy corrective actions that really don’t change things are implemented.

The result? It looks like the problem goes away. Why? Because big accidents usually have multiple Safeguards and they seldom fail all at once. It’s sort of like James Reason’s Swiss Cheese Model…


The holes move around and change size, but they don’t line up all the time. So, if you are lucky, you won’t be there when the accident happens. Maybe the small incidents repeat, but a big accident hasn’t happened (yet).
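To see why precursor incidents vastly outnumber major accidents, here is a minimal sketch (our own simplification, not a TapRooT® technique) that assumes each Safeguard fails independently with the same small probability. Real Safeguards are rarely independent, but the arithmetic still makes the point:

    # Illustrative only: hypothetical, independent Safeguards with identical failure chances.
    p_fail = 0.05       # assumed chance any one Safeguard is "holed" on a given demand
    n_safeguards = 3    # assumed number of Safeguards between the Hazard and the Target

    p_accident = p_fail ** n_safeguards                 # all the holes line up at once
    p_any_failure = 1 - (1 - p_fail) ** n_safeguards    # at least one Safeguard fails
    p_precursor = p_any_failure - p_accident            # some fail, but not all of them

    print(f"Precursor incident (some Safeguards fail): {p_precursor:.4%}")  # about 14.25%
    print(f"Major accident (all Safeguards fail):      {p_accident:.4%}")   # about 0.0125%

With these made-up numbers you would expect on the order of a thousand precursor incidents for every major accident, and each one is a chance to find and fix the holes before they line up.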

To prevent the accident, you need to learn from the small precursor incidents and fix the holes in the cheese or add additional Safeguards to prevent the major accidents. The way you do this is by applying advanced root cause analysis to precursor incidents. Learn from the small stuff to avoid the big stuff. To avoid:

  • Fatalities
  • Serious injuries
  • Major environmental releases
  • Serious customer quality complaints
  • Major process upsets and equipment failures
  • Major project cost overruns

Admiral Rickover’s seventh rule (of seven) was:

The organization and members thereof must have the ability
and willingness to learn from mistakes of the past.

And the mistakes he referred to were both major accidents (which didn’t occur in the Nuclear Navy when it came to reactor safety) and precursor incidents.

Are you ready to learn from precursor incidents to avoid major accidents? Then stop trying to take shortcuts to save time and effort when investigating minor incidents (low actual consequences) that could have been worse. Start applying advanced root cause analysis to precursor incidents.

The first thing you will learn is that identifying the correct answer once is a whole lot easier than finding the wrong answer many times.

The second thing you will learn is that when people start finding the real root causes of problems and do real root cause analysis frequently, they get much better at problem solving and performance improves quickly. The effort required is less than doing many poor investigations.

Overall, you will learn that the process pays for itself when advanced root cause analysis is applied consistently. Why? Because the “little stuff” that isn’t being fixed is much more costly than you think.

How do you get started?

The fastest way is by sending some folks to the 2-Day TapRooT® Root Cause Analysis Course to learn to investigate precursor incidents.

The 2-Day Course is a great start. But some of your best problem solvers need to learn more. They need the skills necessary to coach others and to investigate significant incidents and major accidents. They need to attend the 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training.

Once you have the process started, you can develop a plan to continually improve your improvement efforts. Your organization will become willing to learn. You will prove how valuable these tools are and be willing to become best in class.

Rome wasn’t built in a day, but you have to get started to see the progress you need to achieve. Start now and build on success.

Would you like to talk to one of our TapRooT® Experts to get even more ideas for improving your root cause analysis? Contact us by CLICKING HERE.

Monday Accidents & Lessons Learned: Airplane Mode

May 14th, 2018 by

When we hear the words “mode” and “aviation,” many of us who are frequent flyers may quickly intuit that the discussion is heading toward the digital disconnection of our cellular voice and data connection in a device, or airplane mode. Webster defines “mode” as “a particular functioning arrangement or condition,” and an aircraft system’s operating mode is characterized by a particular list of active functions for a named condition, or “mode.” Multiple modes of operation are employed by most aircraft systems—each with distinct functions—to accommodate the broad range of needs that exist in the current operating environment.

With ever-increasing aviation mode complexity, pilots must be thoroughly familiar with scores of operating modes and functions. No matter which aircraft system is being operated, when a pilot is operating automation that controls an aircraft, mode awareness, mode selection, and mode expectation can all present hazards that require know-how and management. These hazards may sometimes be obvious, but they are often complex and difficult to grasp.

NASA’s Aviation Safety Reporting System (ASRS) receives reports that suggest pilots are uninformed or unaware of a current operating mode, or what functions are available in a specific mode. At this juncture, the pilots experience the “What is it doing now?” syndrome. Often, the aircraft is transitioning to, or in, a mode the pilot didn’t select. Further, the pilot may not recognize that a transition has occurred. The aircraft then does something autonomously and unanticipated by the pilot, typically causing confusion and increasing the potential for hazard.

The following report gives us insight into the problems involving aircraft automation that pilots experience with mode awareness, mode selection, and mode expectation.

“On departure, an Air Carrier Captain selected the required navigation mode, but it did not engage. He immediately attempted to correct the condition and subsequently experienced how fast a situation can deteriorate when navigating in the wrong mode.

“I was the Captain of the flight from Ronald Reagan Washington National Airport (DCA). During our departure briefing at the gate, we specifically noted that the winds were 170 at 6, and traffic was departing Runway 1. Although the winds favored Runway 19, we acknowledged that they were within our limits for a tailwind takeoff on Runway 1. We also noted that windshear advisories were in effect, and we followed required procedure using a no–flex, maximum thrust takeoff. We also briefed the special single engine procedure and the location of [prohibited airspace] P-56. Given the visual [meteorological] conditions of 10 miles visibility, few clouds at 2,000 feet, and scattered clouds at 16,000 feet, our method of compliance was visual reference, and we briefed, “to stay over the river, and at no time cross east of the river.”

“Taxi out was normal, and we were issued a takeoff clearance [that included the JDUBB One Departure] from Runway 1. At 400 feet AGL, the FO was the Pilot Flying and incorrectly called for HEADING MODE. I was the Pilot Monitoring and responded correctly with “NAV MODE” and selected NAV MODE on the Flight Control Panel. The two lights adjacent to the NAV MODE button illuminated. I referenced my PFD and noticed that the airplane was still in HEADING MODE and that NAV MODE was not armed. Our ground speed was higher than normal due to the tailwind, and we were rapidly approaching the departure course. Again, I reached up and selected NAV MODE, with the same result. I referenced our location on the Multi-Function Display (MFD), and we were exactly over the intended departure course; however, we were still following the flight director incorrectly on runway heading. I said, “Turn left,” and shouted, “IMMEDIATELY!” The FO banked into a left turn. I observed the river from the Captain’s side window, and we were directly over the river and clear of P-56. I spun the heading bug directly to the first fix, ADAXE, and we proceeded toward ADAXE.

“Upon reaching ADAXE, we incorrectly overflew it, and I insisted the FO turn right to rejoin the departure. He turned right, and I said, “You have to follow the white needle,” specifically referencing our FMS/GPS navigation. He responded, “I don’t have a white needle.” He then reached down and turned the Navigation Selector Knob to FMS 2, which gave him proper FMS/GPS navigation. We were able to engage the autopilot at this point and complete the remainder of the JDUBB One Departure. I missed the hand–off to Departure Control, and Tower asked me again to call them, which I did. Before the hand–off to Center, the Departure Controller gave me a phone number to call because of a possible entry into P-56.”

We thank ASRS for this report, and for helping to underscore TapRooT®’s raison d’être.

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Remembering An Accident: Enschede Fireworks Disaster

May 13th, 2018 by

On May 13, 2000, in the eastern Dutch city of Enschede, a fireworks warehouse caught fire, leading to an enormous explosion. The explosion caused 22 deaths, with 4 firefighters among the casualties; another 974 individuals were injured, and 500 homes and businesses were severely damaged or destroyed in the blast. After the dust had settled, a 13-meter-diameter, 1.3-meter-deep crater could be observed where concrete round cells C9 and C11–C15 once stood. Creating a crater that size would take a TNT equivalent of between 4 and 5 tonnes. The largest blast was felt up to 30 kilometers (19 miles) away.

What makes this incident so interesting is that whatever started the fire was never really determined. Two possibilities seem likely. One possibility discussed was arson. The Dutch police made several arrests, but none of those arrested were convicted of arson for the Enschede Fireworks Disaster. The other theory, from the fire department, is that accidental ignition via an electrical short circuit could also have caused the fire.

Because of the incident and the investigation’s results, the fireworks disaster led to stronger safety regulations in the Netherlands concerning the sale, storage, and distribution of fireworks. Since the catastrophe, three illegal fireworks warehouses have been closed down, and the Roombeek area that was destroyed by the explosion has been rebuilt.

To read the full detailed report, click here.

Major disasters are often wake-up calls for how important it is to ensure that they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid them. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to find and fix root causes reactively and to help identify precursors that could lead to major problems.

To learn more about our courses and their locations click on the links below.
5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training
2-Day TapRooT® Root Cause Analysis Essentials Training

 

Hazards and Targets

May 7th, 2018 by

Most of us probably would not think of this as an on-the-job Hazard … a giraffe.


But filmmaker Carlos Carvalho was killed by one while working on a film in Africa.


 Do you have unexpected Hazards at work? Giant Asian hornets? Grizzly bears? 

Or are your Hazards much more common? Heat stroke. Slips and falls (gravity). Traffic.

Performing a thorough Safeguard Analysis before starting work and then trying to mitigate any Hazards is a good way to improve safety and reduce injuries. Do your supervisors know how to do a Safeguard Analysis using TapRooT®?

Monday Accidents & Lessons Learned: Failing the Mind-Check of Reality

May 7th, 2018 by

 

An RV-7 pilot studied the weather carefully prior to departure, but the weather was not the only factor in play; distractions and personal stress also influenced his situational awareness and decision-making, as you can see in his experience:

“I was cleared to depart on Runway 27L from [midfield at] intersection C. However, I lined up and departed from Runway 9R. No traffic control conflict occurred. I turned on course and coordinated with ATC immediately while airborne.

“I had delayed my departure due to weather [that was] 5 miles east…and just north of the airport on my route. Information Juliet was: “340/04 10SM 9,500 OVC 23/22 29.99, Departing Runway 27L, Runways 9L/27R closed, Runways 5/23 closed.” My mind clued in on [Runway] 09 for departure. In fact, I even set my heading bug to 090. Somehow while worried mostly about the weather, I mentally pictured departing Runway 9R at [taxiway] C. I am not sure how I made that mistake, as the only 9 listed was the closed runway. My focus was not on the runway as it should have been, but mostly on the weather.

“Contributing factors were:

1. Weather

2. No other airport traffic before my departure. (I was looking as I arrived at the airport and completed my preflight and final weather checks)

3. Airport construction. For a Runway 27 departure, typical taxi routing would alleviate any confusion

4. ATIS listing the closed runway with 9 listed first

5. Quicker than expected takeoff clearance

“I do fly for a living. I will be incorporating the runway verification procedure we use on the jet aircraft at my company into my GA flying from now on. Sadly, I didn’t make that procedural change in my GA flying.”

Thanks to NASA’s Aviation Safety Reporting System (ASRS) for sharing experiences that offer valuable insight, contributing to the growth of aviation wisdom, lessons learned, and an uninhibited accounting of reported incidents. ASRS receives, processes, and analyzes these voluntarily submitted reports from pilots, air traffic controllers, flight attendants, maintenance personnel, dispatchers, ground personnel, and others describing actual or potential hazards.

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When There Is No Right Side of the Tracks

April 30th, 2018 by

On Tuesday, February 28, 2017, a wall section at the top of a cutting above a four-track railway line between the Liverpool Lime Street and Edge Hill stations in Liverpool, England, began to collapse. From approximately 5:30 pm until 6:02 pm, more than 188 tons of debris rained down from the embankment wall, spreading across all four tracks. Liverpool Lime Street is the city’s main station, one of the busiest in the north of England.

With the rubble downing overhead power lines and damaging infrastructure, all mainline services to and from the station were suspended. The collapse brought trains to a standstill for three hours, and three trains had to be evacuated. Police, fire, and ambulance crews helped evacuate passengers down the tracks. Two of the trains were halted in tunnels. Passengers were stranded on trains at Lime Street station due to the power outage resulting from the collapse. A passenger en route to Liverpool from Manchester Oxford Road reported chaos at Warrington station as passengers tried to find their way home.

A representative from Network Rail spoke about the incident, “No trains are running in or out of Liverpool Lime station after a section of trackside wall, loaded with concrete and cabins by a third party, collapsed sending rubble across all four lines and taking overhead wires with it. Early indications suggest train service will not resume for several days while extensive clear-up and repairs take place to make the location safe. More precise forecasts on how long the repairs will take will be made after daybreak tomorrow.”

Read more about the incident here.

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Press Release: CSB to Investigate Husky Refinery Fire

April 26th, 2018 by


Washington, DC, April 26, 2018 –  A four-person investigative team from the U.S. Chemical Safety Board (CSB) is deploying to the scene of an incident that reportedly injured multiple workers this morning at the Husky Energy oil refinery in Superior, Wisconsin. The refinery was shutting down in preparation for a five-week turnaround when an explosion was reported around 10 am CDT.

According to initial reports, several people were transported to area hospitals with injuries. There have been no reports of fatalities. Residents and area schools near the refinery were asked to evacuate due to heavy smoke.

The CSB is an independent, non-regulatory federal agency charged with investigating serious chemical incidents. The agency’s board members are appointed by the president and confirmed by the Senate. CSB investigations look into all aspects of chemical accidents, including physical causes such as equipment failure as well as inadequacies in regulations, industry standards, and safety management systems.

The Board does not issue citations or fines but does make safety recommendations to plants, industry organizations, labor groups, and regulatory agencies such as OSHA and EPA. Visit the CSB website at www.csb.gov.

Here is additional coverage of the fire …


http://www.kbjr6.com/story/38049655/explosion-injuries-reported-at-husky-energy-superior-refinery?autostart=true
