Category: Human Performance

Monday Accidents & Lessons Learned: When One Good Turn Definitely Doesn’t Deserve Another

July 16th, 2018 by

The electronic flight bag (EFB) is rapidly replacing pilots’ conventional papers in the cockpit. While the EFB has demonstrated improved capability to display aviation information—airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance—NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies, such as this one:

“This B757 Captain received holding instructions during heavy traffic. While manipulating his EFB for clarification, he inadvertently contributed to an incorrect holding entry.

‘[We were] asked to hold at SHAFF intersection due to unexpected traffic saturation. While setting up the FMC and consulting the arrival chart, I expanded the view on my [tablet] to find any depicted hold along the airway at SHAFF intersection. In doing so, I inadvertently moved the actual hold depiction…out of view and [off] the screen.

‘The First Officer and I only recall holding instructions that said to hold northeast of SHAFF, 10-mile legs. I asked the First Officer if he saw any depicted hold, and he said, “No.” We don’t recall instructions to hold as depicted, so not seeing a depicted hold along the airway at SHAFF, we entered a right-hand turn. I had intended to clarify the holding side with ATC, however there was extreme radio congestion and we were very close to SHAFF, so the hold was entered in a right-hand turn.

‘After completing our first 180-degree turn, the controller informed us that the hold at SHAFF was left turns. We said that we would correct our holding side on the next turn. Before we got back to SHAFF for the next turn, we were cleared to [the airport].'”

The Volpe National Transportation Systems Center, part of the U.S. Department of Transportation, weighs in on EFBs: “While the promise of EFBs is great, government regulators, potential customers, and industry developers all agree that EFBs raise many human factors considerations that must be handled appropriately in order to realize this promise without adverse effects.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Problems can crop up anywhere, at any time, if proper sequences and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Where Did We Put the Departure Course?

July 2nd, 2018 by

Have you ever encountered a new methodology or product that you deemed the best thing ever, only to discover in a too-close-for-comfort circumstance that what seemed a game changer had a real downside?

In aviation, the Electronic Flight Bag (EFB) is the electronic equivalent to the pilot’s traditional flight bag. It contains electronic data and hosts EFB applications, and it is generally replacing the pilots’ conventional papers in the cockpit. The EFB has demonstrated improved capability to display aviation information such as airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance.

The EFB platform, frequently a tablet device, introduces a relatively new human-machine interface into the cockpit. While the EFB provides many advantages and extensive improvements for the aviation community in general and for pilots specifically, some unexpected operational threats have surfaced during its early years.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. One typical problem occurs when a pilot “zooms,” or expands, the screen to enlarge a detail and thereby unknowingly slides important information off the screen, where it is no longer visible.

An Airbus A320 crew was given a vector to intercept a course and resume the departure procedure, but the advantage that the EFB provided in one area generated a threat in another.

From the Captain’s Report:

“Air Traffic Control (ATC) cleared us to fly a 030 heading to join the GABRE1 [Departure]. I had never flown this Standard Instrument Departure (SID). I had my [tablet] zoomed in on the Runway 6L/R departure side so I wouldn’t miss the charted headings. This put Seal Beach [VOR] out of view on the [tablet]. I mistakenly asked the First Officer to sequence the Flight Management Guidance Computer (FMGC) between GABRE and FOGEX.”

From the First Officer’s Report:

“During our departure off Runway 6R at LAX [while flying the] GABRE1 Departure, ATC issued, ‘Turn left 030 and join the GABRE1 Departure.’ This was the first time for both pilots performing this SID and the first time departing this runway for the FO. Once instructed to join the departure on the 030 heading, I extended the inbound radial to FOGEX and inserted it into the FMGC. With concurrence from the Captain, I executed it. ATC queried our course and advised us that we were supposed to intercept the Seal Beach VOR 346 radial northbound. Upon review, both pilots had the departure zoomed in on [our tablets] and did not have the Seal Beach [VOR] displayed.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Problems can crop up anywhere, at any time, if proper sequences and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accident & Lessons Learned: What Does a Human Error Cost? A £566,670 Fine in the UK!

June 25th, 2018 by

Dump truck (not the actual truck; for illustration only)

The UK HSE fined a construction company £566,670 after a dump truck touched (or came near) a power line, causing a short.

No one was hurt, and the truck suffered only minor damage.

The driver had tried to pull forward to finish dumping his load and caused the short.

Why did the company get fined?

“A suitable and sufficient assessment would have identified the need to contact the Distribution Network Operator, Western Power, to request the OPL’s were diverted underground prior to the commencement of construction. If this was not reasonably practicable, Mick George Ltd should have erected goalposts either side of the OPL’s to warn drivers about the OPL’s.”

That was the statement from the UK HSE Inspector, as quoted in a news article about the incident.

What Safeguards do you need to keep a simple human error from becoming an accident (or a large fine)?

Performing a Safeguard Analysis before starting work is always a good idea. Learn more about using Safeguard Analysis proactively at our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course. See the upcoming public course dates around the world at:

http://www.taproot.com/store/5-Day-Courses/

We have a sneak peek for you on today’s Facebook Live!

June 13th, 2018 by

TapRooT® professional Barb Carr will be featured on today’s Facebook Live session. To get a sense of the subject, look at Barb’s recent article.

As always, please feel free to chime in on the discussion in real time. Or leave a comment and we’ll get back to you.

We look forward to being with you on Wednesdays! Here’s how to join us today:

Where? https://www.facebook.com/RCATapRooT/

When? Today, Wednesday, June 13

What time? Noon Eastern | 11:00 a.m. Central | 10:00 a.m. Mountain | 9:00 a.m. Pacific

If you missed last week’s Facebook Live discussion with Mark Paradies and Benna Dortch, catch it below on Vimeo.

Video: “Why do we still have major process safety accidents?” from TapRooT® Root Cause Analysis on Vimeo.

Do your own investigation into our courses and discover what TapRooT® can do for you; contact us or call us: 865.539.2139.

Save the date for our upcoming 2019 Global TapRooT® Summit, March 11-15, 2019, in the Houston, Texas, area at La Torretta Lake Resort.

Monday Accidents & Lessons Learned: Missing a Mode Change

June 11th, 2018 by

A B737-800 Captain became distracted while searching for traffic during his approach. Both he and the First Officer missed the FMA mode change indication, which resulted in an altitude deviation in a terminal environment.

From the Captain’s Report:
“Arrival into JFK, weather was CAVU. Captain was Pilot Flying, First Officer was Pilot Monitoring. Planned and briefed the visual Runway 13L with the RNAV (RNP) Rwy 13L approach as backup. Approach cleared us direct to ASALT, cross ASALT at 3,000, cleared approach. During the descent, we received several calls for a VFR target at our 10 to 12 o’clock position. We never acquired the traffic visually, but we had him on TCAS. Eventually Approach advised, ‘Traffic no factor, contact Tower.’ On contact with Tower, we were cleared to land. Approaching ASALT, I noticed we were approximately 500 feet below the 3,000 foot crossing altitude. Somewhere during the descent while our attention was on the VFR traffic, the plane dropped out of VNAV PATH, and I didn’t catch it. I disconnected the autopilot and returned to 3,000 feet. Once level, I reengaged VNAV and completed the approach with no further problems.”

From the First Officer’s Report:
“FMA mode changes are insidious. In clear weather, with your head out of the cockpit clearing for traffic in a high density environment, especially at your home field on a familiar approach, it is easy to miss a mode change. This is a good reminder to keep instruments in your cross check on those relatively few great weather days.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes. At times, the reports involve mode awareness, mode selection, and mode expectation problems involving aircraft automation that are frequently experienced by the Mode Monitors and Managers in today’s aviation environment.


We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

New Study Suggests Poor Officer Seamanship Training Across the Navy – Is This a Generic Cause of 2017 Fatal Navy Ship Collisions?

June 7th, 2018 by

BLAME IS NOT A ROOT CAUSE

It is hard to do a root cause analysis from afar with only newspaper stories as your source of facts … but a recent Washington Times article shed some light on a potential generic cause for the fatal collisions last year.

The Navy conducted an assessment of the seamanship skills of 164 first-tour junior officers. The results were as follows:

  • 16% (27 of 164) – no concerns
  • 66% (108 of 164) – some concerns
  • 18% (29 of 164) – significant concerns

With almost 1 out of 5 officers raising significant concerns and two-thirds raising some concerns, it made me wonder about the blame being placed on the ships’ Commanding Officers and crews. Were they set up for failure by a training program that sent officers to sea who didn’t have the skills needed to perform their jobs as Officer of the Deck and Junior Officer of the Deck?

The blame-heavy initial investigations certainly didn’t highlight this generic training problem, which the Navy now seems to be addressing.

Navy officers who cooperated with the Navy’s investigations nonetheless faced courts-martial.


According to an article in The Maritime Executive, Lt. j.g. Sarah Coppock, Officer of the Deck during the USS Fitzgerald collision, pleaded guilty to charges to avoid facing a court-martial. Was she properly trained, or would the Navy’s evaluators have had “concerns” about her abilities if she had been evaluated BEFORE the collision? Was this accident due to the abbreviated training that the Navy instituted to save money?

Note that the press release included information that hadn’t previously been made public: the Fitzgerald’s main navigation radar was known to be malfunctioning, and Lt. j.g. Coppock thought she had done calculations showing that the merchant ship would pass safely astern.


In other blame-related news, the Chief Boatswain’s Mate on the USS McCain pleaded guilty to dereliction of duty related to the training of personnel to use the Integrated Bridge Navigation System, which had been newly installed on the McCain four months before he arrived. His total training on the system was 30 minutes of instruction by a “master helmsman.” He had never used the system on a previous ship, and he had requested additional training and documentation on the system but had not received any help prior to the collision.

He thought that the three sailors on duty, who had transferred from the USS Antietam, were familiar with the steering system. However, after the crash he discovered that the USS McCain was the only ship in the 7th Fleet with this system and that the transferred sailors were not familiar with it.

On his previous ship, Chief Butler took action to avoid a collision at sea when a steering system failed during an underway replenishment, and he won the 2014 Sailor of the Year award. Yet the Navy would have us believe that he was a “bad sailor” (derelict in his duties) aboard the USS McCain.


Also blamed was the CO of the USS McCain, Commander Alfredo J. Sanchez. He pleaded guilty to dereliction of duty in a pretrial agreement. Commander Sanchez was originally charged with negligent homicide and hazarding a vessel, but those charges were dropped as part of the pretrial agreement.

Maybe I’m seeing a pattern here: pretrial agreements and guilty pleas to reduced charges that avoid putting the Navy on trial for systemic deficiencies (perhaps the real root causes of the collisions).

Would your root cause analysis system tend to place blame or would it find the true root and generic causes of your most significant safety, quality, and equipment reliability problems?

The TapRooT® Root Cause Analysis System is designed to look for the real root and generic causes of issues without placing unnecessary blame. Find out more at one of our courses:

http://www.taproot.com/courses

Is Blame Built Into Your Root Cause System?

June 6th, 2018 by


If you want to stop good root cause analysis, introduce blame into the process.

In recent years, good analysts have fought to eliminate blame from root cause analysis. But there are still some root cause systems that promote blame. They actually build blame into the system.

How can this be? Maybe they just don’t understand how to make a world-class root cause analysis system.

When TapRooT® Root Cause Analysis was new, I often had people ask:

“Where is the place you put ‘the operator was stupid?'”

Today, this question might make you laugh. Back in the day, I spent quite a bit of time explaining that stupidity is not a root cause. If you hire stupid people, send them through your training program, and qualify them, then that is YOUR problem with your training program.

The “stupid people” root cause is a blame-oriented cause. It is not a root cause.


What is a root cause? Here is the TapRooT® System definition:

Root Cause
The absence of best practices
or the failure to apply knowledge
that would have prevented the problem. 

Are there systems with “stupid people” root causes? YES! Try these blame categories:
    • Attitude
    • Attention less than adequate
    • Step was omitted due to mental lapse
    • Individual’s capabilities to perform work less than adequate
    • Improper body positioning
    • Incorrect performance due to a mental lapse
    • Less than adequate motor skills
    • Inadequate size or strength
    • Poor judgment/lack of judgment/misjudgment
    • Reasoning capabilities less than adequate
    • Poor coordination
    • Poor reaction time
    • Emotional overload
    • Lower learning aptitude
    • Memory failure/memory lapse
    • Behavior inadequate
    • Violation by individual
    • Inability to comprehend training
    • Insufficient mental capabilities
    • Poor language ability
    • In the line of fire
    • Inattention to detail
    • Unawareness
    • Mindset

You might laugh at these root causes but they are included in real systems that people are required to use. The “operator is stupid” root cause might fit in the “reasoning capabilities less than adequate,” the “incorrect performance due to mental lapse,” the “poor judgment/lack of judgment,” or the “insufficient mental capabilities” categories.

You may ask:

“Couldn’t a mental lapse be a cause?”

Of course, the answer is yes. Someone could have a mental lapse. But it isn’t a root cause. Why? It doesn’t fit the definition. It isn’t the absence of a best practice or a failure to apply knowledge. We are supposed to develop systems that account for human capabilities and limitations. At best, a memory lapse would be part of a Causal Factor.

To deal with human frailties, we implement best practices to stop simple memory lapses from becoming incidents. In other words, that’s why we have checklists, good human engineering, second checks when needed, and supervision. The root causes listed on the back side of the TapRooT® Root Cause Tree® are linked to human performance best practices that make human performance more reliable so that a simple memory lapse doesn’t become an accident.

What happens when you make a pick list with blame categories like those in the bulleted list above? The categories get overused. It is much easier to blame the operator (they had less than adequate motor skills) than to find out why they moved the controls the wrong way. It’s easy to say there was a “behavior issue.” It is difficult to understand why someone behaved the way they did. TapRooT® looks beyond behavior and simple motor skill errors to find real root causes.

We have actually tested the use of “blame categories” in a system and shown that including blame categories in an otherwise good system causes investigators to jump to conclusions and select these “easy to pick” blame categories rather than applying the investigative effort required to find real root causes.

You may think that if you don’t have categories, you have sidestepped the problem of blame. WRONG! Blame is built into our psyche. Most cause-and-effect examples I see have some blame built into the analysis.

If you want to successfully find the real, fixable root causes of accidents, precursor incidents, quality issues, equipment failures, cost overruns, or operational failures, don’t start by placing blame or use a root cause system with built-in blame categories. Instead, use advanced root cause analysis – TapRooT®.

The best way to learn about advanced root cause analysis is in a 2-Day TapRooT® Root Cause Analysis Course or a 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course. See the list of upcoming public courses here: http://www.taproot.com/store/Courses/.

Monday Accidents & Lessons Learned: Watch It Like It’s Hot

June 4th, 2018 by

A B737 crew was caught off-guard during descent. The threat was real and had been previously known. The crew did not realize that the aircraft’s vertical navigation had reverted to a mode less capable than VNAV PATH.

From the Captain’s Report:
“While descending on the DANDD arrival into Denver, we were told to descend via. We re-cruised the current altitude while setting the bottom altitude in the altitude window. Somewhere close to DANDD intersection, the aircraft dropped out of its vertical mode and, before we realized it, we descended below the 17,000 foot assigned altitude at DANDD intersection to an altitude of nearly 16,000 feet. At once, I kicked off the autopilot and began to climb back to 17,000 feet, which we did before crossing the DANDD intersection. Reviewing the incident, we still don’t know what happened. We had it dialed in, and the vertical mode reverted to CWS PITCH (CWS P).

“Since our software is not the best and we have no aural warnings of VNAV SPD or CWS P, alas, we must watch it ever more closely—like a hawk.”

From the First Officer’s Report:
“It would be nice to have better software—the aircraft constantly goes out of VNAV PATH and into VNAV SPEED for no reason, and sometimes the VNAV disconnects for no reason, like it did to us today.”

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Who’s in Charge?

May 28th, 2018 by

An ERJ-145 crew failed to detect a change in its vertical navigation mode during descent. When it was eventually discovered, corrective action was taken, but large deviations from the desired flight path may have already compromised safety.

“This event occurred while being vectored for a visual approach. The First Officer (FO) was the Pilot Flying and I was Pilot Monitoring. ATC had given us a heading to fly and a clearance to descend to 3,000 feet. 3,000 was entered into the altitude preselect, was confirmed by both pilots, and a descent was initiated. At about this time, we were also instructed to maintain 180 knots. Sometime later, I noticed that our speed had begun to bleed off considerably, approximately 20 knots, and was still decaying. I immediately grabbed the thrust levers and increased power, attempting to regain our airspeed. At about this time, it was noticed that the preselected altitude had never captured and that the Flight Mode Annunciator (FMA) had entered into PITCH MODE at some point. It became apparent that after the aircraft had started its descent, the altitude preselect (ASEL) mode had changed to pitch and was never noticed by either pilot. Instead of descending, the aircraft had entered a climb at some point, and this was not noticed until an appreciable amount of airspeed decay had occurred. At the time that this event was noticed, the aircraft was approximately 900 feet above its assigned altitude. Shortly after corrective action was begun, ATC queried us about our climbing instead of descending. We replied that we were reversing the climb. The aircraft returned to its assigned altitude, and a visual approach was completed without any further issues.

“[We experienced a] large decrease in indicated airspeed. The event occurred because neither pilot noticed the Flight Mode Annunciator (FMA) entering PITCH MODE. Thrust was added, and then the climb was reversed in order to descend back to our assigned altitude. Both pilots need to reaffirm that their primary duty is to fly and monitor the aircraft at all times, starting with the basics of heading, altitude, airspeed, and performance.”

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

“People are SO Stupid”: Horrible Comments on LinkedIn

May 23rd, 2018 by

How many people have seen those videos on LinkedIn and Facebook that show people doing really dumb things at work? It seems recently LinkedIn is just full of those types of videos. I’m sure it has something to do with their search algorithms that target those types of safety posts toward me. Still, there are a lot of them.

The videos themselves don’t bother me. They show real people doing unsafe things, or actual accidents, which happen every day in real life. What REALLY bothers me are the comments that people post under each video. Again concentrating on LinkedIn, people are commenting on how dumb people are, or how they wouldn’t put up with that, or “stupid is as stupid does!”

Here are a couple of examples I pulled up in about five minutes of scrolling through my LinkedIn feed.

These comments often fall into several categories. Let’s take a look at them as groups:

“Those people are not following safety guideline xxxx. I blame Operator ‘A’ for this issue!”

Obviously, someone is not following a good practice.  If they were, we wouldn’t have had the issue, right?  It isn’t particularly helpful to just point out the obvious problem.  We should be asking ourselves, “Why did this person decide that it was OK to do this?”  Humans perform split-second risk assessments all the time, in every task they perform.  What we need to understand is the basis of a person’s risk assessment.  Just pointing out that they performed a poor assessment is too easy.  Getting to the root cause is much more important and useful when developing corrective actions.

“Operators were not paying attention / being careful.”

No kidding.  Humans are NEVER careful for extended periods of time.  People are only careful when reminded, until they’re not.  Watch your partner drive the car.  They are careful much of the time, and then we need to change the radio station, or the cell phone buzzes, etc.

Instead of just noting that people in the video are not being careful, we should note what safeguards were in place (or should have been in place) to account for the human not paying attention.  We should ask what else we could have done in order to help the human do a better job.  Finding the answers to these questions is much more helpful than just blaming the person.

These videos are showing up more and more frequently, and the comments on the videos show how easy it is to just blame people instead of doing a human performance-based root cause analysis of the issue.  In almost all cases, we don’t even have enough information in the video to make a sound analysis.  I challenge you to watch these videos and avoid blaming the individual by making the following assumptions:

  1.  The people in the video are not trying to get hurt / break the equipment / make a mistake
  2.  They are NOT stupid.  They are human.
  3.  There are systems that we could put in place that make it harder for the human to make a mistake (or at least make it easier to do it right).

When viewing these videos in this light, it is much more likely that we can learn something constructive from these mistakes, instead of just assigning blame.

Monday Accidents & Lessons Learned: Airplane Mode

May 14th, 2018 by

When you hear the words “mode” and “aviation” together, many of us who are frequent flyers may quickly intuit that the discussion is heading toward the digital disconnection of a device’s cellular voice and data connection, or airplane mode. Webster defines “mode” as “a particular functioning arrangement or condition,” and an aircraft system’s operating mode is characterized by a particular list of active functions for a named condition, or “mode.” Multiple modes of operation are employed by most aircraft systems—each with distinct functions—to accommodate the broad range of needs that exist in the current operating environment.

With ever-increasing aviation mode complexities, pilots must be thoroughly familiar with scores of operating modes and functions. No matter which aircraft system is being operated, when a pilot is operating automation that controls an aircraft, mode awareness, mode selection, and mode expectation can all present hazards that require know-how and management. These hazards may sometimes be obvious, but they are often complex and difficult to grasp.

NASA’s Aviation Safety Reporting System (ASRS) receives reports that suggest pilots are uninformed or unaware of a current operating mode, or of what functions are available in a specific mode. At this juncture, the pilots experience the “What is it doing now?” syndrome. Often, the aircraft is transitioning to, or is already in, a mode the pilot didn’t select. Further, the pilot may not recognize that a transition has occurred. The aircraft then does something autonomous and unanticipated by the pilot, typically causing confusion and increasing the potential for a hazard.

The following report gives us insight into the problems involving aircraft automation that pilots experience with mode awareness, mode selection, and mode expectation.

“On departure, an Air Carrier Captain selected the required navigation mode, but it did not engage. He immediately attempted to correct the condition and subsequently experienced how fast a situation can deteriorate when navigating in the wrong mode.

“I was the Captain of the flight from Ronald Reagan Washington National Airport (DCA). During our departure briefing at the gate, we specifically noted that the winds were 170 at 6, and traffic was departing Runway 1. Although the winds favored Runway 19, we acknowledged that they were within our limits for a tailwind takeoff on Runway 1. We also noted that windshear advisories were in effect, and we followed required procedure using a no–flex, maximum thrust takeoff. We also briefed the special single engine procedure and the location of [prohibited airspace] P-56. Given the visual [meteorological] conditions of 10 miles visibility, few clouds at 2,000 feet, and scattered clouds at 16,000 feet, our method of compliance was visual reference, and we briefed, “to stay over the river, and at no time cross east of the river.

“Taxi out was normal, and we were issued a takeoff clearance [that included the JDUBB One Departure] from Runway 1. At 400 feet AGL, the FO was the Pilot Flying and incorrectly called for HEADING MODE. I was the Pilot Monitoring and responded correctly with “NAV MODE” and selected NAV MODE on the Flight Control Panel. The two lights adjacent to the NAV MODE button illuminated. I referenced my PFD and noticed that the airplane was still in HEADING MODE and that NAV MODE was not armed. Our ground speed was higher than normal due to the tailwind, and we were rapidly approaching the departure course. Again, I reached up and selected NAV MODE, with the same result. I referenced our location on the Multi-Function Display (MFD), and we were exactly over the intended departure course; however, we were still following the flight director incorrectly on runway heading. I said, “Turn left,” and shouted, “IMMEDIATELY!” The FO banked into a left turn. I observed the river from the Captain’s side window, and we were directly over the river and clear of P-56. I spun the heading bug directly to the first fix, ADAXE, and we proceeded toward ADAXE.

“Upon reaching ADAXE, we incorrectly overflew it, and I insisted the FO turn right to rejoin the departure. He turned right, and I said, “You have to follow the white needle,” specifically referencing our FMS/GPS navigation. He responded, “I don’t have a white needle.” He then reached down and turned the Navigation Selector Knob to FMS 2, which gave him proper FMS/GPS navigation. We were able to engage the autopilot at this point and complete the remainder of the JDUBB One Departure. I missed the hand–off to Departure Control, and Tower asked me again to call them, which I did. Before the hand–off to Center, the Departure Controller gave me a phone number to call because of a possible entry into P-56.”

We thank ASRS for this report, and for helping to underscore TapRooT®’s raison d’être.

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Hazards and Targets

May 7th, 2018 by

Most of us probably would not think of this as an on-the-job Hazard … a giraffe.


But African filmmaker Carlos Carvalho was killed by one while making a film in Africa.


Do you have unexpected Hazards at work? Giant Asian hornets? Grizzly bears?

Or are your Hazards much more common? Heat stroke. Slips and falls (gravity). Traffic.

Performing a thorough Safeguard Analysis before starting work and then trying to mitigate any Hazards is a good way to improve safety and reduce injuries. Do your supervisors know how to do a Safeguard Analysis using TapRooT®?

Monday Accidents & Lessons Learned: Failing the Mind-Check of Reality

May 7th, 2018 by

When an RV-7 pilot studied the weather prior to departure, he considered not only the weather but also distractions and personal stress. His situational awareness and decision-making were influenced by these considerations, as you can see in his experience:

“I was cleared to depart on Runway 27L from [midfield at] intersection C. However, I lined up and departed from Runway 9R. No traffic control conflict occurred. I turned on course and coordinated with ATC immediately while airborne.

“I had delayed my departure due to weather [that was] 5 miles east…and just north of the airport on my route. Information Juliet was: “340/04 10SM 9,500 OVC 23/22 29.99, Departing Runway 27L, Runways 9L/27R closed, Runways 5/23 closed.” My mind clued in on [Runway] 09 for departure. In fact, I even set my heading bug to 090. Somehow while worried mostly about the weather, I mentally pictured departing Runway 9R at [taxiway] C. I am not sure how I made that mistake, as the only 9 listed was the closed runway. My focus was not on the runway as it should have been, but mostly on the weather.

“Contributing factors were:

1. Weather

2. No other airport traffic before my departure. (I was looking as I arrived at the airport and completed my preflight and final weather checks)

3. Airport construction. For a Runway 27 departure, typical taxi routing would alleviate any confusion

4. ATIS listing the closed runway with 9 listed first

5. Quicker than expected takeoff clearance

“I do fly for a living. I will be incorporating the runway verification procedure we use on the jet aircraft at my company into my GA flying from now on. Sadly, I didn’t make that procedural change in my GA flying.”

Thanks to NASA’s Aviation Safety Reporting System (ASRS) for sharing experiences that offer valuable insight and contribute to the growth of aviation wisdom and lessons learned. ASRS receives, processes, and analyzes voluntarily submitted reports from pilots, air traffic controllers, flight attendants, maintenance personnel, dispatchers, ground personnel, and others describing actual or potential hazards.

We encourage you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Putting Yourself on the Right Side of Survival

April 23rd, 2018 by

While building an embankment to keep any material out of a water supply, a front end loader operator experienced a close call. On March 13, 2018, the operator backed his front end loader over the top of a roadway berm; the loader and operator slid down the embankment; and the loader overturned onto its roof. Fortunately, the operator was wearing his seat belt. He unfastened the seat belt and escaped the upside-down machine through the broken right-side window of the loader door.

Front end loaders are often involved in accidents due to a shift in the machine’s center of gravity. The U.S. Department of Labor Mine Safety and Health Administration (MSHA) documented this incident and issued the statement and best practices below for operating front end loaders.

The size and weight of front end loaders, combined with the limited visibility from the cab, make the job of backing a front end loader potentially hazardous. To prevent a mishap when operating a front end loader:
• Load the bucket evenly and avoid overloading (refer to the load limits in the operating manual). Keep the bucket low when operating on hills.
• Construct berms or other restraints of adequate height and strength to prevent overtravel and warn operators of hazardous areas.
• Ensure that objects inside of the cab are secured so they don’t become airborne during an accident.
• ALWAYS wear your seatbelt.
• Maintain control of mobile equipment by traveling at safe speeds and not overloading equipment.

We would add the following best practices for loaders:
• Check the manufacturer’s recommendations and add the appropriate wheel ballast or counterweight.
• Employ maximum stabilizing factors, such as moving the wheels to the widest setting.
• Ensure everyone within range of the loader location is a safe distance away.
• Operate the loader with its load as close to the ground as possible. Should the rear of the tractor start to tip, the bucket will hit the ground before the machine overturns.

Use the TapRooT® System to put safety first and to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

How far away is death?

April 19th, 2018 by

Lockout-tagout fail: Notepaper sign – “Don’t start”

Scientific Method and Root Cause Analysis

April 4th, 2018 by


I had someone tell me that the ONLY way to do root cause analysis was to use the scientific method. After all, this is the way that all real science is performed.

Being an engineer (rather than a scientist), I had a problem with this statement. After all, I had done or reviewed hundreds (maybe thousands?) of root cause analyses and I had never used the scientific method. Was I wrong? Is the scientific method really the only or best answer?

First, to answer this question, you have to define the scientific method. And that’s the first problem. Some say the scientific method was invented in the 17th century and was the reason that we progressed beyond the dark ages. Others claim that the terminology “scientific method” is a 20th-century invention. But, no matter when you think the scientific method was invented, there are a great variety of methods that call themselves “the scientific method.” (Google “scientific method” and see how many different models you can find.)

So let’s just say the scientific method that the person was insisting was the ONLY way to perform a root cause analysis required the investigator to develop a hypothesis and then gather evidence to either prove or disprove the hypothesis. That’s commonly part of most methods that call themselves the scientific method.

What’s the problem with this hypothesis testing model? People don’t do it very well. There’s even a scientific term for the problem that people have disproving their own hypotheses. It’s called CONFIRMATION BIAS. You can Google the term and read for hours. But the short description of the problem is that when people develop a hypothesis that they believe in, they tend to gather evidence to prove what they believe and disregard evidence that is contrary to their hypothesis. This is a natural human tendency – think of it like breathing. You can tell someone not to breathe, but they will breathe anyway.
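To see how strong this effect can be, here is a minimal, hypothetical Python sketch (not part of TapRooT® or any investigation tool; the probabilities are made-up illustration values). It compares an investigator who weighs every piece of evidence against one who quietly discards most of the evidence that contradicts a favored hypothesis:

    import random

    def update_belief(prior, evidence, p_support_if_true=0.7, p_support_if_false=0.3):
        # Simple Bayesian update: each piece of evidence either appears to
        # support the hypothesis (True) or contradict it (False).
        belief = prior
        for supports in evidence:
            like_true = p_support_if_true if supports else 1 - p_support_if_true
            like_false = p_support_if_false if supports else 1 - p_support_if_false
            belief = belief * like_true / (belief * like_true + (1 - belief) * like_false)
        return belief

    # Assume the favored hypothesis is actually wrong, so only about 30% of
    # the evidence appears to support it.
    evidence = [random.random() < 0.3 for _ in range(20)]

    unbiased = update_belief(0.5, evidence)                      # weighs everything
    kept = [e for e in evidence if e or random.random() < 0.2]   # drops ~80% of the contrary evidence
    biased = update_belief(0.5, kept)

    print(f"Belief after weighing all evidence:       {unbiased:.2f}")  # typically collapses toward 0
    print(f"Belief after filtering contrary evidence: {biased:.2f}")    # typically stays high

Telling investigators to also hunt for disconfirming evidence is, in effect, asking them not to apply that filter, which, as the next paragraphs argue, is easier said than done.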

What did my friend say about this problem with the scientific method? That it could be overcome by teaching people that they had to disprove all other theories and also look for evidence that disproves their own theory.

The second part of this answer is like telling people not to breathe. But what about the first part of the solution? Could people develop competing theories and then disprove them to prove that there was only one way the accident could have occurred? Probably not.

The problem with developing all possible theories is that your knowledge is limited. And, of course, even if you did have unlimited knowledge, how long would it take to develop all possible theories and prove or disprove them?

The biggest problem that accident investigators face is limited knowledge.

We used to take a poll at the start of each root cause analysis class that we taught. We asked:

“How many of you have had any type of formal training
in human factors or why people make human errors?”

The answer was always less than 5%.

Then we asked:

“How many of you have been asked to investigate
incidents that included human errors?”

The answer was always close to 100%.

So how many of these investigators could hypothesize all the potential causes for a human error and how would they prove or disprove them?

That’s one simple reason why the scientific method is not the only way, or even a good way, to investigate incidents and accidents.

Need more persuading? Read these articles on the problems with the scientific method:

The End of Theory: The Data Deluge Makes The Scientific Method Obsolete

The Scientific Method is a Myth

What Flaws Exist Within the Scientific Method?

Is the Scientific Method Seriously Flawed?

What’s Wrong with the Scientific Method?

Problems with “The Scientific Method”

That’s just a small handful of the articles out there.

Let me assume that you didn’t read any of the articles. Therefore, I will provide one convincing example of what’s wrong with the scientific method.

Isaac Newton, one of the world’s greatest mathematicians, developed the universal law of gravity. Supposedly he did this using the scientific method. And it worked on apples and planets. The problem is, when atomic and subatomic matter was discovered, the “law” of gravity didn’t work. There were other forces that governed subatomic interactions.

Enter Albert Einstein and quantum physics. A whole new set of laws (or maybe you called them “theories”) that ruled the universe. These theories were proven by the scientific method. But what are we discovering now? Those theories aren’t “right” either. There are things in the universe that don’t behave the way that quantum physics would predict. Einstein was wrong!

So, if two of the smartest people around – Newton and Einstein – used the scientific method to develop answers that were wrong but that most everyone believed … what chance do you and I have to develop the right answer during our next incident investigation?

Now for the good news.

Being an engineer, I didn’t start with the scientific method when developing the TapRooT® Root Cause Analysis System. Instead, I took an engineering approach. But you don’t have to be an engineer (or a human factors expert) to use it to understand what caused an accident and what you can do to stop a future similar accident from happening.

Being an engineer, I had my fair share of classes in science. Physics, math, and chemistry are all part of an engineer’s basic training. But engineers learn to go beyond science to solve problems (and design things) using models that have limitations. A useful model can be properly applied by an engineer to design a building, an electrical transmission network, a smartphone, or a 747 without understanding the limitations of quantum mechanics.

Also, being an engineer I found that the best college course I ever had that helped me understand accidents wasn’t an engineering course. It was a course on basic human factors. A course that very few engineers take.

By combining the knowledge of high reliability systems that I gained in the Nuclear Navy with my knowledge of engineering and human factors, I developed a model that could be used by people without engineering and human factors training to understand what happened during an incident, how it happened, why it happened, and how it could be prevented from happening again. We have been refining this model (the TapRooT® System) for about thirty years – making it better and more usable – using the feedback from tens of thousands of users around the world. We have seen it applied in a wide variety of industries to effectively solve equipment and human performance issues to improve safety, quality, production, and equipment reliability. These are real world tests with real world success (see the Success Stories at this link).

So, the next time someone tells you that the ONLY way to investigate an incident is the scientific method, just smile and know that they may have been right in the 17th century, but there is a better way to do it today.

If you don’t know how to use the TapRooT® System to solve problems, perhaps you should attend one of our courses. There is a basic 2-Day Course and an advanced 5-Day Course. See the schedule for public courses HERE. Or CONTACT US about having a course at your site.

How Safe Must Autonomous Vehicles Be?

April 3rd, 2018 by

Tesla is under fire for the recent crash of their Model X SUV, and the subsequent fatality of the driver. It’s been confirmed that the vehicle was in Autopilot mode when the accident occurred. Both Tesla and the NTSB are investigating the particulars of this crash.


I’ve read many of the comments about this crash, in addition to previous crash reports. It’s amazing how much emotion is poured into these comments. I’ve been trying to understand the human performance issues related to these crashes, and I find I must take special note of the human emotions that are attached to these discussions.

As an example, let’s say that I develop a “Safety Widget™” that is attached to all of your power tools. This widget raises the cost of your power tools by 15%, and it can be shown that this option reduces tool-related accidents on construction sites by 40%.  That means, on your construction site, if you have 100 incidents each year, you would now only have 60 incidents if you purchase my Safety Widget™.  Would you consider this to be a successful purchase?  I think most people would be pretty happy to see their accident rates reduced by 40%!

Now, what happens when you have an incident while using the Safety Widget™? Would you stop using the Safety Widget™ the first time it did NOT stop an injury? I think we’d still be pretty happy that we would prevent 40 incidents at our site each year. Would you still be trying to reduce the other 60 incidents each year? Of course. However, I think we’d keep right on using the Safety Widget™, and continue looking for additional safeguards to put in place, while trying to improve the design of the original Safety Widget™.

This line of thinking does NOT seem to be true for autonomous vehicles. For some reason, many people seem to be expecting that these systems must be perfect before we are allowed to deploy them. Independent reviews (NOT by Tesla) have shown that, on a per driver-mile basis, Autopilot systems reduce accidents by 40% over normal driver accident rates. In the U.S., we experience about 30,000 fatalities each year due to driver error. Shouldn’t we be happy that, if everyone had an autonomous vehicle, we would be saving 12,000 lives every year? The answer to that, you would think, would be a resounding “YES!” But there seems to be a much more emotional content to the answer than straight scientific data would suggest.
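Before getting to the emotional side of the question, it is worth pinning down the arithmetic behind both examples above. Here is a quick sketch; the 40% reduction, the 100-incident construction site, and the roughly 30,000 annual driver-error fatalities are the figures quoted above, and nothing else is new data:

    def avoided_and_remaining(baseline_per_year, fraction_reduced):
        # Expected events avoided and still occurring each year for a
        # safeguard that removes a fixed fraction of the baseline.
        avoided = baseline_per_year * fraction_reduced
        return avoided, baseline_per_year - avoided

    # Hypothetical Safety Widget(TM): 100 tool incidents per year, 40% reduction.
    print(avoided_and_remaining(100, 0.40))     # (40.0, 60.0) -> 60 incidents still happen

    # Autopilot figures quoted above: ~30,000 fatalities per year, 40% reduction.
    print(avoided_and_remaining(30_000, 0.40))  # (12000.0, 18000.0) -> ~12,000 lives saved

The same safeguard that still “fails” 60 times a year is also quietly preventing 40 incidents that nobody ever sees, which is exactly the asymmetry described in point 3 of the list below.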

I think there may be several human factors in play as people respond to this question:

  1. Over- and under-trust in technology: I was talking to one of our human factors experts, and he mentioned this phenomenon. Some people under-trust technology in general and, therefore, will find reasons not to use it, even when it is proven to work. Others will over-trust the technology, as evidenced by the Tesla drivers who are watching movies, or not responding to system warnings to maintain manual control of the vehicle.
  2. “I’m better than other drivers. Everyone else is a bad driver; while they may need assistance, I drive better than any autonomous gadget.” I’ve heard this a lot. I’m a great driver; everyone else is terrible. It’s a proven fact that most people have an inflated opinion of their own capabilities compared to the “average” person. If you were to believe most people, each individual (when asked) is better than average. This would make it REALLY difficult to calculate an average, wouldn’t it?
  3. It’s difficult to calculate the unseen successes. How many incidents were avoided by the system? It’s hard to see the positives, but VERY easy to see the negatives.
  4. Money. Obviously, there will be some people put out of work as autonomous vehicles become more prevalent. Long-haul truckers will be replaced by autopilot systems. Cab drivers, delivery vehicle drivers, Uber drivers, and train engineers are all worried about their jobs, so they are more likely to latch onto any negative that would help them maintain their relevancy. Sometimes this is done subconsciously, and sometimes it is a conscious decision.

Of course, we DO have to monitor and control how these systems are rolled out. We can’t have companies roll out inferior systems that can cause harm due to negligence and improper testing. That is one of the main purposes of regulation and oversight.

However, how safe is “safe enough?” Can we use a system that isn’t perfect, but still better than the status quo? Seat belts don’t save everyone, and in some (rare) cases, they can make a crash worse (think of Dale Earnhardt, or a crash into a lake with a stuck seat belt). Yet, we still use seat belts. Numerous lives are saved every year by restraint systems, even though they aren’t perfect. How “safe” must an autonomous system be in order to be accepted as a viable safety device? Are we there yet? What do you think?

Effective Listening Skills Inventory for Investigative Interviews

March 29th, 2018 by

Do you ever interrupt someone because you fear “losing” what you want to say? Do you become momentarily engrossed in your thoughts, then return to reality to find someone awaiting your answer to a question you didn’t hear? Most of us are at fault for interrupting or being distracted from time to time. Particularly, though, in an interview environment where focus is key, distractions or interruptions can be detrimental to the interview.

Watch, listen, and learn from this week’s conversation between Barb Carr and Benna Dortch:

Video: Effective Listening Skills Inventory for Investigation Interviews, from TapRooT® Root Cause Analysis on Vimeo.

Now, learn how to inventory your listening skills. Internalizing suggestions can recalibrate your thought and communication processes. Your work and your communication style will reflect the changes you’ve made.

Feel free to comment or ask questions on Facebook. We will respond!

Bring your lunch next Wednesday and join TapRooT®’s Facebook Live session. You’ll pick up valuable, workplace-relevant takeaways from an in-depth discussion between TapRooT® professionals. We’ll be delighted to have your company.

Here’s the scoop for tuning in next week:

Where? https://www.facebook.com/RCATapRooT/

When? Wednesday, April 4, 2018

What Time? Noon Eastern | 11:00 a.m. Central | 10:00 a.m. Mountain | 9:00 a.m. Pacific

Thank you for joining us!

Monday Accidents & Lessons Learned: Does What You See Match What Is Happening?

March 26th, 2018 by

An incident report from NASA’s Aviation Safety Reporting System (ASRS) gives insight into a pilot’s recurring, problematic observation. A Bonanza pilot perceived the runway edge and centerline lights to be cycling off and on, an effect he found distracting and confusing. Air Traffic Control (ATC) let him know that the centerline lights were steady, not blinking.

The pilot summarized his experience, “I was transiting the final approach path of . . . Runway 16R and observed the runway edge and centerline lights cycle on and off . . . at a rate of approximately 1 per second. It was very similar to the rate of a blinking traffic light at a 4-way vehicle stop. The [3-blade] propeller speed was 2,400 RPM. This was observed through the entire front windscreen and at least part of the pilot side window. I queried ATC about the reason for the runway lights blinking and was told that they were not blinking. It was not immediately obvious what was causing this, but I did later speculate that it may have been caused by looking through the propeller arc.

“The next day [during] IFR training while on the VOR/DME Runway 16R approach, we observed the runway edge and centerline lights cycle on and off . . . at a rate slightly faster than 1 per second. The propeller speed was 2,500 RPM. I then varied the propeller speed and found that, at 2,700 RPM, the lights were observed strobing at a fairly high rate and, at 2,000 RPM, the blinking rate slowed to less than once per second. This was observed through the entire approach that terminated at the Missed Approach Point (MAP). The flight instructor was also surprised and mentioned that he had not seen this before, but also he doesn’t spend much time behind a 3-blade propeller arc.

“I would speculate that the Pulse Width Modulation (PWM) dimming system of the LED runway lights was phasing with my propeller, causing the observed effect. I would also speculate that the effect would . . . significantly differ at other LED dimming settings . . . and behind a 2-blade propeller.

“I found the effect to be entirely confusing and distracting and would not want to make a landing in such conditions.”
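The pilot’s aliasing theory is easy to sanity-check with a rough calculation. A 3-blade propeller at 2,400 RPM sweeps a blade through the line of sight about 120 times per second; if the LED dimming PWM runs at a nearby frequency (or a near harmonic of one), the perceived brightness pulses at the small difference between the two. The sketch below only illustrates that idea; the 121 Hz PWM value is an assumed number, not a figure from the report or from any airfield lighting specification.

    def blade_pass_hz(rpm, blades=3):
        # How many times per second a propeller blade crosses the line of sight.
        return rpm / 60.0 * blades

    PWM_HZ = 121.0  # assumed LED dimming frequency, for illustration only

    for rpm in (2400, 2500, 2700):
        f_blade = blade_pass_hz(rpm)
        beat = abs(f_blade - PWM_HZ)
        print(f"{rpm} RPM: blade-pass {f_blade:.0f} Hz, apparent blink ~{beat:.0f} Hz")

    # 2,400 RPM gives a ~1 Hz beat, roughly the once-per-second blinking reported;
    # at higher RPM the blade-pass rate moves away from the assumed PWM frequency
    # and the apparent rate climbs, consistent with the faster "strobing" at 2,700 RPM.

A different dimming setting or a 2-blade propeller would shift where these near-coincidences occur, which lines up with the pilot’s own speculation.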

The TapRooT® System, Training, and Software have a dedicated history of R&D, human performance, and improvement. Learn with our best incident investigation and root cause analysis systems.

Monday Accidents & Lessons Learned: When a Disruption Potentially Saves Lives

March 12th, 2018 by

Early news of an incident often does not convey the complexity behind the incident. Granted, many facts are not initially available. On Tuesday, January 24, 2017, a Network Rail freight train derailed in southeast London between Lewisham and Hither Green just before 6:00 am, with the rear two wagons of the one-kilometer-long train off the tracks. Soon after, the Southeastern network sent a tweet to report the accident, alerting passengers that, “All services through the area will be disrupted, with some services suspended.” Then came the advice, “Disruption is expected to last all day. Please make sure you check before travelling.” While southeastern passengers were venting their frustrations on Twitter, a team of engineers was at the site by 6:15 am, according to Network Rail. At the scene, the engineers observed that no passengers were aboard and that no one was injured. They also noted a damaged track and the spillage of a payload of sand.

The newly laid track at Courthill Loop South Junction was constructed of separate panels of switch and crossing track, with most of the panels arriving at the site preassembled. Bearer ties, or mechanical connectors, joined the rail supports. The February 2018 report from the Rail Accident Investigation Branch (RAIB), which includes five recommendations, noted that follow-up engineering work took place the weekend after the new track was laid, and the derailment occurred the next day. Further inspection found the incident to have been caused by a significant track twist and other contributing factors. The repair disrupted commuters for days as engineers working around the clock completely rebuilt a 50-meter stretch of railway and used cranes to lift the overturned wagons. Now factor in the time, business, and resources saved—in addition to the lives often spared—when TapRooT® advanced root cause analysis is used proactively to reach solutions.

Nuclear Plant Fined $145,000 for “Gun-Decked Logs”

February 21st, 2018 by

When I was in the Navy, people called it “gun-decking the logs.”

In the Navy this means that you falsify your record keeping … usually by just copying the numbers from the previous hour (maybe with slight changes) without making the rounds and taking the actual measurements. And if you were caught, you were probably going to Captain’s Mast (disciplinary hearing).

The term “gun-decking” has something to do with the “false” gun deck that was built into British sailing ships of war to make them look like they had more guns. Sometimes midshipmen would falsify their navigation training calculations by using dead reckoning to calculate their position rather than using the Sun and the stars. This might have been called “gun-decking” because the gun deck is where they turned their homework over to the ship’s navigator to be reviewed.


What happened at the Nuke Plant? A Nuclear Regulatory Commission inspector found that 13 operators had gun-decked their logs. Here’s a quote from the article describing the incident:

“An NRC investigation, completed August 2017, found that on multiple occasions during the three-month period, at least 13 system operators failed to complete their rounds as required by plant procedures, but entered data into an electronic log indicating they had completed equipment status checks and area inspections,” the NRC said in a statement.

What was the corrective action? The article says:

“The plant operator has already undertaken several corrective actions, the NRC said, including training for employees, changes in the inspection procedures and disciplinary measures for some staff.”

Hmmm … training, procedures, and discipline. That’s the standard three corrective actions. (“Round up the usual suspects!”) Even problems that seem to be HR issues can benefit from advanced root cause analysis. Is this a Standards, Policy, and Administrative Controls Not Used issue? Is there a root cause under that Near Root Cause that needs to be fixed (for example, Enforcement Needs Improvement)? Or is discipline the right answer? It would be interesting to know all the facts.

Want to learn to apply advanced root cause analysis to solve these kinds of problems? Attend one of our 5-Day TapRooT® Advanced Root Cause Analysis Courses. See the upcoming public courses by CLICKING HERE. Or CLICK HERE to contact us about having a course at your site.

Monday Accidents & Lessons Learned: Three Killed, Dozens Injured on Italian Trenord-Operated Train

February 5th, 2018 by

Packed with 250 commuters and heading to Milan’s Porta Garibaldi station, the Italian Trenord-operated train derailed January 25, 2018, killing three people and seriously injuring dozens. The train was said to have been traveling at normal speed but was described by witnesses as “trembling for a few minutes before the accident.” A collapse of the track is under investigation.

Why is early information-gathering important?

Interesting Article in “Stars & Stripes” About Navy Courts-Martial for COs Involved in Collisions

January 22nd, 2018 by


The article starts with:

“The Navy’s decision to pursue charges of negligent homicide against the former commanders of the USS Fitzgerald and USS John McCain has little precedent, according to a Navy scholar who has extensively scrutinized cases of command failure.”

See the whole article at:

https://www.stripes.com/news/navy/few-navy-commanders-face-court-martial-for-operational-failures-1.507226

The article implies that blame and shame is the normal process for COs whose ships are involved in accidents.

Isn’t it time for the US Navy to learn real advanced root cause analysis that can teach them to find and fix the causes of the problems that cause collisions at sea?
