Category: Great Human Factors

Monday Accidents & Lessons Learned: An Assumption Can Lead You to Being All Wet

August 13th, 2018

IOGP Well Control Incident Lesson Sharing

The International Association of Oil & Gas Producers (IOGP) is the voice of the global upstream oil and gas industry. The oil and gas industry provides a significant proportion of the world’s energy to meet growing demands for heat, light, and transport. IOGP members produce 40 percent of the world’s oil and gas, operating in the Americas, Africa, Europe, the Middle East, the Caspian, Asia, and Australia.

IOGP shares a Well Control Incident Lesson Sharing report recounting a breakdown in communication, preparation and monitoring, and process control. Importantly, the findings identify that the overarching project plan was erroneously based on the assumption that the reservoir was depleted. Let’s track this incident:

What happened?
In a field subjected to water flooding, while drilling through shales and expecting to enter a depleted reservoir, gas readings suddenly increased. Subsequently, the mud weight was increased, the well was shut in, and the drill string became stuck when the hole collapsed during kill operations. Water-flood breakthrough risks were not communicated to the drill crew, and the drill crew failed to adequately monitor the well during connections. The loss of well control, the hole, and the drill string was due to poor communication and poor well monitoring.

  • Drilling an 8 1/2″ x 9 1/2″ hole with 1.30 SG mud weight (MW) at 2248 m; this mud density is used to drill the top-section shales for borehole stability purposes
  • Crossed an identified sand layer which was expected to be sub-hydrostatic (0.5 SG)
  • Observed a connection gas reading up to 60%, plus a pack-off tendency
  • Increased mud weight in steps to 1.35 SG, but gas readings were still high
  • Decided to shut the well in and observed pressure in the well: SIDP 400 psi, SICP 510 psi
  • A gain of +/- 10 m3 was estimated later (by post-mortem analysis of the previous pipe connection and pump-off logs)
  • Performed the Driller’s Method and killed the well by displacing 1.51 SG kill mud
  • Open hole collapsed during circulation, with the consequence that the string got stuck and the kick zone was isolated
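
As a sanity check on the numbers above, the textbook kill-mud-weight arithmetic (current mud weight plus the density equivalent of the shut-in drill pipe pressure) can be sketched as follows. This is a generic formula applied to the reported figures, not the operator’s actual calculation:

```python
# Back-of-envelope kill mud weight from the reported figures
# (1.35 SG mud, SIDP 400 psi, ~2248 m TVD). Illustrative only.
PSI_TO_PA = 6894.757  # pascals per psi
G = 9.81              # m/s^2

def kill_mud_weight_sg(current_sg: float, sidpp_psi: float, tvd_m: float) -> float:
    """Current mud weight plus the SG equivalent of the shut-in drill pipe pressure."""
    # 1 SG corresponds to 1000 kg/m^3, so the SG increment is P / (g * TVD * 1000)
    delta_sg = (sidpp_psi * PSI_TO_PA) / (G * tvd_m * 1000.0)
    return current_sg + delta_sg

kmw = kill_mud_weight_sg(1.35, 400.0, 2248.0)
print(round(kmw, 2))  # ~1.48 SG
```

This lands at about 1.48 SG; the reported 1.51 SG kill mud is consistent with that figure plus a modest safety overbalance.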

What went wrong? 
The reservoir was expected to be depleted, but this part of the field had been artificially over-pressurized by a water injector well. This was not identified during the well preparation phase, and the risk was not transmitted to the drilling teams. Other factors included a lack of crew vigilance and poor well monitoring during drill pipe (DP) connections. The high connection gas observed at surface was the result of crude contamination in the mud system, and significant gain volumes were taken during the previous pipe connections without being detected.

Corrective actions and recommendations 
-The incident was shared with drilling personnel and used for training purposes.

-Shared the experience and emphasized reinforcing the well preparation process with rigorous risk identification, particularly the hazard posed by continuous injection in a mature field.

-Reinforce well monitoring, specifically during pipe connections.

-Review the mapping of injection across the field.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Dumping the Electronic Flight Bag En Route

August 6th, 2018

The electronic flight bag (EFB) has demonstrated improved capability to display aviation information such as airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance. This portable hardware has helped flight crews perform flight management tasks more efficiently. While the EFB provides many advantages and extensive improvements for the aviation community in general and for pilots specifically, some unexpected operational threats have surfaced.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. Today’s instance relates to EFB operation in a particular phase of flight:

An ERJ175 pilot attempted to expand the EFB display during light turbulence. Difficulties stemming from the turbulence and marginal EFB location rendered the EFB unusable, so the pilot chose to disregard the EFB entirely.

“We were on short final, perhaps 2,000 feet above field elevation. [It had been a] short and busy flight. I attempted to zoom in to the Jepp Chart, currently displayed on my EFB, to reference some information. The EFB would not respond to my zooming gestures. After multiple attempts, the device swapped pages to a different chart. I was able to get back to the approach page but could not read it without zooming. I attempted to zoom again but, with the light turbulence, I could not hold my arm steady enough to zoom. [There is] no place to rest your arm to steady your hand because of the poor mounting location on the ERJ175.

“After several seconds of getting distracted by…this EFB device, I realized that I was … heads-down for way too long and not paying enough attention to the more important things (e.g., acting as PM). I did not have the information I needed from the EFB. I had inadvertently gotten the EFB onto a company information page, which is bright white rather than the dark nighttime pages, so I turned off my EFB and continued the landing in VMC without the use of my EFB. I asked the PF to go extra slowly clearing the runway to allow me some time to get the taxi chart up after landing.

“… I understand that the EFB is new and there are bugs. This goes way beyond the growing pains. The basic usability is unreliable and distracting. In the cockpit, the device is nearly three feet away from the pilot’s face, mounted almost vertically, at a height level with your knees. All [EFB] gestures in the airplane must be made from the shoulder, not the wrist. Add some turbulence to that, and you have a significant heads-down distraction in the cockpit.”

The award-winning publication and monthly safety newsletter, CALLBACK, from NASA’s Aviation Safety Reporting System, shares reports, such as the one above, that reveal current issues, incidents, and common problems that pilots have experienced. In this issue, we learned about precursor events that have occurred during the EFB’s adolescence.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to understand situations and find and fix problems. Attend one of our courses. Among our offerings are a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Zooming to “Too Low Terrain”

July 30th, 2018

When the Electronic Flight Bag (EFB) platform—frequently a tablet device—was introduced into the aviation industry and the cockpit as a human-machine interface, it facilitated improvements for both pilots and the broader aviation community. But the interface has also encountered operational threats in the early years of EFB utilization.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. One routine problem occurs when a pilot “zooms,” or expands the screen to enlarge a detail, and unknowingly “slides” important information off the screen, making it no longer visible. A second type of problem manifests itself in difficulty operating the EFB in specific flight or lighting conditions. Yet a third wrinkle relates to EFB operation in a particular flight phase.

Let’s look at what happened in an A319 when “zoom” went awry:

Prior to departure, an A319 crew had to manage multiple distractions. An oversight, a technique, and a subtle EFB characteristic all subsequently combined to produce an unrecognized controlled flight toward terrain.

“We received clearance from Billings Ground, ‘Cleared … via the Billings 4 Departure, climb via the SID.’ During takeoff on Runway 10L from Billings, we entered IMC. The Pilot Flying (PF) leveled off at approximately 4,600 feet MSL, heading 098 [degrees]. We received clearance for a turn to the southeast … to join J136. We initiated the turn and then requested a climb from ATC. ATC cleared us up to 15,000 feet. As I was inputting the altitude, we received the GPWS alert, ‘TOO LOW TERRAIN.’ Immediately, the PF went to Take Off/Go Around (TO/GA) Thrust and pitched the nose up. The Pilot Monitoring (PM) confirmed TO/GA Thrust and hit the Speed Brake handle … to ensure the Speed Brakes were stowed. Passing 7,000 feet MSL, the PM announced that the Minimum Sector Altitude (MSA) was 6,500 feet within 10 nautical miles of the Billings VOR. The PF reduced the pitch, then the power, and we began an open climb up to 15,000 feet MSL. The rest of the flight was uneventful.

“On the inbound leg [to Billings], the aircraft had experienced three APU auto shutdowns. This drove the Captain to start working with Maintenance Control. During the turn, after completion of the walkaround, I started referencing multiple checklists … to prepare for the non-normal, first deicing of the year. I then started looking at the standard items. It was during this time that I looked at the BILLINGS 4 Departure, [pages] 10-3 and 10-3-1. There are no altitudes on … page [10-3], so I referenced [page] 10-3-1. On [page] 10-3-1 for the BILLINGS 4 Departure at the bottom, I saw RWY 10L, so I zoomed in to read this line. When I did the zoom, it cut off the bottom of the page, which is the ROUTING. Here it clearly states, ‘Maintain 15,000 or assigned lower.’ I never saw this line. When we briefed prior to push, the departure was briefed as, ‘Heading 098, climb to 4,600 feet MSL’; so neither the PF nor the PM saw the number 15,000 feet MSL. The 45-minute turn was busy with multiple nonstandard events. The weather was not great. However, that is no excuse for missing the 15,000-foot altitude on the SID.”

The award-winning publication and monthly safety newsletter, CALLBACK, from NASA’s Aviation Safety Reporting System, shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When One Good Turn Definitely Doesn’t Deserve Another

July 16th, 2018

The electronic flight bag (EFB) is rapidly replacing pilots’ conventional papers in the cockpit. While the EFB has demonstrated improved capability to display aviation information—airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance—NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies, such as this one:

“This B757 Captain received holding instructions during heavy traffic. While manipulating his EFB for clarification, he inadvertently contributed to an incorrect holding entry.

‘[We were] asked to hold at SHAFF intersection due to unexpected traffic saturation. While setting up the FMC and consulting the arrival chart, I expanded the view on my [tablet] to find any depicted hold along the airway at SHAFF intersection. In doing so, I inadvertently moved the actual hold depiction…out of view and [off] the screen.

‘The First Officer and I only recall holding instructions that said to hold northeast of SHAFF, 10-mile legs. I asked the First Officer if he saw any depicted hold, and he said, “No.” We don’t recall instructions to hold as depicted, so not seeing a depicted hold along the airway at SHAFF, we entered a right-hand turn. I had intended to clarify the holding side with ATC, however there was extreme radio congestion and we were very close to SHAFF, so the hold was entered in a right-hand turn.

‘After completing our first 180-degree turn, the controller informed us that the hold at SHAFF was left turns. We said that we would correct our holding side on the next turn. Before we got back to SHAFF for the next turn, we were cleared to [the airport].'”

Volpe National Transportation Systems Center, U.S. Department of Transportation, weighs in on EFBs: “While the promise of EFBs is great, government regulators, potential customers, and industry developers all agree that EFBs raise many human factors considerations that must be handled appropriately in order to realize this promise without adverse effects.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Where Did We Put the Departure Course?

July 2nd, 2018

Have you ever encountered a new methodology or product that you deemed the best thing ever, only to discover in a too-close-for-comfort circumstance that what seemed a game changer had a real downside?

In aviation, the Electronic Flight Bag (EFB) is the electronic equivalent to the pilot’s traditional flight bag. It contains electronic data and hosts EFB applications, and it is generally replacing the pilots’ conventional papers in the cockpit. The EFB has demonstrated improved capability to display aviation information such as airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance.

The EFB platform, frequently a tablet device, introduces a relatively new human-machine interface into the cockpit. While the EFB provides many advantages and extensive improvements for the aviation community in general and for pilots specifically, some unexpected operational threats have surfaced during its early years.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. One typical problem occurs when a pilot “zooms,” or expands the screen to enlarge a detail, and unknowingly “slides” important information off the screen, making it no longer visible.

An Airbus A320 crew was given a vector to intercept the course and resume the departure procedure, but the advantage that the EFB provided in one area generated a threat in another.

From the Captain’s Report:

“Air Traffic Control (ATC) cleared us to fly a 030 heading to join the GABRE1 [Departure]. I had never flown this Standard Instrument Departure (SID). I had my [tablet] zoomed in on the Runway 6L/R departure side so I wouldn’t miss the charted headings. This put Seal Beach [VOR] out of view on the [tablet]. I mistakenly asked the First Officer to sequence the Flight Management Guidance Computer (FMGC) between GABRE and FOGEX.”

From the First Officer’s Report:

“During our departure off Runway 6R at LAX [while flying the] GABRE1 Departure, ATC issued, ‘Turn left 030 and join the GABRE1 Departure.’ This was the first time for both pilots performing this SID and the first time departing this runway for the FO. Once instructed to join the departure on the 030 heading, I extended the inbound radial to FOGEX and inserted it into the FMGC. With concurrence from the Captain, I executed it. ATC queried our course and advised us that we were supposed to intercept the Seal Beach VOR 346 radial northbound. Upon review, both pilots had the departure zoomed in on [our tablets] and did not have the Seal Beach [VOR] displayed.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: The Worst U.S. Maritime Accident in Three Decades

May 21st, 2018

The U.S.-flagged cargo ship El Faro and its crew of 33 men and women sank after sailing into Hurricane Joaquin. What went wrong, and why did an experienced sea captain sail his crew and ship directly into the eye of a hurricane? The investigation lasted two years.

One of two ships owned by TOTE Maritime Inc., the El Faro constantly rotated between Jacksonville, Florida, and San Juan, Puerto Rico, transporting everything from frozen chickens to milk to Mercedes Benzes to the island. The combination roll-on/roll-off and lift-on/lift-off cargo freighter was crewed by U.S. Merchant Marines. Should the El Faro miss a trip, TOTE would lose money, store shelves would be bare, and the Puerto Rican economy would suffer.

The El Faro, a 790-foot, 1970s steamship, set sail at 8:15 p.m. on September 29, 2015, with full knowledge of the National Hurricane Center warning that Tropical Storm Joaquin would likely strengthen to a hurricane within 24 hours.

Despite modern navigation and weather technology, the aging ship, with two boilers in need of service and no life vests or immersion suits, was equipped with open lifeboats that could not be launched once the captain gave the order to abandon ship in the midst of a savage hurricane.

As the Category 4 storm bore down on the Bahamas, winds peaking at 140 miles an hour, people and vessels headed for safety. All but one ship. On October 1, 2015, the SS El Faro steamed into the furious storm. Black skies. Thirty- to forty-foot waves. The Bermuda Triangle. Near San Salvador, the freighter found itself in the strongest October storm to hit these waters since 1866. Around 7:30 a.m. on October 1, the ship was taking on water and listing 15 degrees, although the last report from the captain indicated that the crew had managed to contain the flooding. Soon after, the freighter ceased all communications. All aboard perished in the worst U.S. maritime disaster in three decades. Investigators from the National Transportation Safety Board (NTSB) were left to wonder why.

When the NTSB launched one of the most thorough investigations in its long history, investigators spoke with dozens of experts, colleagues, friends, and family of the crew. The U.S. Coast Guard, with help from the Air Force, the Air National Guard, and the Navy, searched a 70,000-square-mile area off Crooked Island in the Bahamas, spotting debris, a damaged lifeboat, containers, and traces of oil. On October 31, 2015, the USNS Apache located the El Faro using the CURV 21, a remotely operated deep-ocean vehicle.

Thirty days after the El Faro sank, the ship was found 15,000 feet below sea level. The images of the sunken ship showed a breach in the hull and its main navigation tower missing. 

Then came the crucial discovery: on Tuesday, April 26, 2016, a submersible robot located the ship’s voyage data recorder (VDR) on the bottom at a depth of 4,600 meters. This black box held everything uttered on the ship’s bridge, up to its final moments.

The big challenge had been locating the VDR, a device only about a foot by eight inches. No commercial recorder had ever been recovered from such a depth, where the pressure is nearly 7,000 pounds per square inch.
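
That pressure figure checks out with simple hydrostatics. A quick sketch, assuming a typical seawater density (the exact value varies with temperature and salinity):

```python
# Hydrostatic pressure at the VDR's resting depth (~4,600 m).
RHO_SEAWATER = 1025.0  # kg/m^3, approximate seawater density
G = 9.81               # m/s^2
DEPTH_M = 4600.0       # depth reported for the VDR

pressure_pa = RHO_SEAWATER * G * DEPTH_M   # P = rho * g * h
pressure_psi = pressure_pa / 6894.757      # pascals per psi
print(round(pressure_psi))  # on the order of 6,700 psi, i.e., "nearly 7,000"
```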

The 26-hour recording yielded the longest transcript ever produced by the NTSB: 510 pages. The recording revealed that, at the outset, the captain and crew were certain that going was the right thing to do. As the situation evolved and conditions deteriorated, the transcript reveals, the captain dismissed a crew member’s suggestion that they return to shore in the face of the storm. “No, no, no. We’re not gonna turn around,” he said. Captain Michael Davidson then said, “What I would like to do is get away from this. Let this do what it does. It certainly warrants a plan of action.” Davidson went below just after 7:57 p.m. and was not heard from, nor present on the bridge, until 4:10 a.m. The El Faro and its crew had but three more hours after Davidson reappeared on the bridge; the recording ends at 7:39 a.m., ten minutes after Captain Davidson ordered the crew to abandon ship.

This NTSB graphic shows El Faro’s track line in green as the ship sailed from Jacksonville to Puerto Rico on October 1, 2015. Color-enhanced satellite imagery from close to the time the ship sank illustrates Hurricane Joaquin in red, with the storm’s eye immediately to the south of the accident site.

The NTSB determined that the probable cause of the sinking of El Faro and the subsequent loss of life was the captain’s insufficient action to avoid Hurricane Joaquin, his failure to use the most current weather information, and his late decision to muster the crew. Contributing to the sinking was ineffective bridge resource management on board El Faro, which included the captain’s failure to adequately consider officers’ suggestions. Also contributing to the sinking was the inadequacy of both TOTE’s oversight and its safety management system.

The NTSB’s investigation into the El Faro sinking identified the following safety issues:

  • Captain’s actions
  • Use of noncurrent weather information
  • Late decision to muster the crew
  • Ineffective bridge resource management
  • Company’s safety management system
  • Inadequate company oversight
  • Need for damage control plan
  • Flooding in cargo holds
  • Loss of propulsion
  • Downflooding through ventilation closures
  • Lack of appropriate survival craft

The report also addressed other issues, such as the automatic identification system and the U.S. Coast Guard’s Alternate Compliance Program. On October 1, 2017, the U.S. Coast Guard released findings from its investigation, conducted with the full cooperation of the NTSB. The 199-page report identified causal factors in the loss of the 33 crew members and the El Faro, and proposed 31 safety recommendations and four administrative recommendations for future action to the Commandant of the Coast Guard.

Captain Jason Neubauer, Chairman, El Faro Marine Board of Investigation, U.S. Coast Guard, stated, “The most important thing to remember is that 33 people lost their lives in this tragedy. If adopted, we believe the safety recommendations in our report will improve safety of life at sea.”

Monday Accidents & Lessons Learned: When There Is No Right Side of the Tracks

April 30th, 2018

On Tuesday, February 28, 2017, a wall section at the top of a cutting above a four-track railway line between the Liverpool Lime Street and Edge Hill stations in Liverpool, England, began to collapse. From approximately 5:30 pm until 6:02 pm, more than 188 tons of debris from the collapsing wall rained down across all four tracks. Liverpool Lime Street is the city’s main station and one of the busiest in the north of England.

With the rubble downing overhead power lines and damaging infrastructure, all mainline services to and from the station were suspended. The collapse brought trains to a standstill for three hours and forced the evacuation of three trains, two of which were halted in tunnels. Police, fire, and ambulance crews helped evacuate passengers down the tracks. Other passengers were stranded on trains at Lime Street station by the power outage resulting from the collapse, and a passenger en route to Liverpool from Manchester Oxford Road reported chaos at Warrington station as passengers tried to find their way home.

A representative from Network Rail spoke about the incident: “No trains are running in or out of Liverpool Lime Street station after a section of trackside wall, loaded with concrete and cabins by a third party, collapsed sending rubble across all four lines and taking overhead wires with it. Early indications suggest train service will not resume for several days while extensive clear-up and repairs take place to make the location safe. More precise forecasts on how long the repairs will take will be made after daybreak tomorrow.”

Read more about the incident here.

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: We’re Not Off the Runway Yet

April 16th, 2018

NASA’s Aviation Safety Reporting System (ASRS) periodically shares contemporary experiences to add to the store of aviation wisdom and lessons learned, and to spur a freer flow of incident reporting. ASRS receives, processes, and analyzes these voluntarily submitted reports from pilots, air traffic controllers, flight attendants, maintenance personnel, dispatchers, ground personnel, and others regarding actual or potential hazards to safe aviation operations.

We acknowledge that the element of surprise, or the unexpected, can upend even the best flight plan. But sometimes what is perceived as an anomaly pales in comparison to a subsequent occurrence. This was the case when an Air Taxi Captain went the extra mile to clear his wingtips while taxiing for takeoff. Just as he thought any threat was mitigated, boom! Let’s listen in to his account:

“Taxiing out for the first flight out of ZZZ, weed whacking was taking place on the south side of the taxiway. Watching to make sure my wing cleared two men mowing [around] a taxi light, I looked forward to continue the taxi. An instant later I heard a ‘thump.’ I then pulled off the taxiway onto the inner ramp area and shut down, assuming I’d hit one of the dogs that run around the airport grounds on a regular basis. I was shocked to find a man, face down, on the side of the taxiway. His coworkers surrounded him and helped him to his feet. He was standing erect and steady. He knew his name and the date. Apparently [he was] not injured badly. I attended to my two revenue passengers and returned the aircraft to the main ramp. I secured the aircraft and called [the Operations Center]. An ambulance was summoned for the injured worker. Our ramp agent was a non-revenue passenger on the flight and took pictures of the scene. He stated that none of the workers was wearing a high visibility vest, which I also observed. They seldom have in the past.

“This has been a recurring problem at ZZZ since I first came here. The operation is never [published in the] NOTAMs [for] an uncontrolled airfield. The pilots just have to see and avoid people and animals at all times. I don’t think the person that collided with my wingtip was one of the men I was watching. I think he must have been stooped down in the grass. The only option to [improve the] safety of the situation would be to stop completely until, hopefully, the workers moved well clear of the taxiway. This is one of…many operational deficiencies that we, the pilots, have to deal with at ZZZ on a daily basis.”

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When Retrofitting Does Not Evaluate Risks

April 9th, 2018

Bound for London Waterloo, the 2G44 train was about to depart platform 2 at Guildford station. Suddenly, at 2:37 pm on July 7, 2017, an explosion in the train’s underframe equipment case ejected debris onto station platforms and into a nearby parking lot. Fortunately, there were no injuries to passengers or staff, and damage was contained to the train and station furnishings. It could have been much worse.

The cause of the explosion was an accumulation of flammable gases within the traction equipment case underneath one of the train’s coaches. The gases were generated after the failure of a large electrical capacitor inside the equipment case; the capacitor failure was due to a manufacturing defect.

The train had recently been retrofitted with a modern version of the traction equipment, and the replacement equipment included the failed capacitor. The project team overseeing the design and installation of the new equipment did not consider the risk of an explosion due to a manufacturing defect within the capacitor. As a result, there were no preventive engineering safeguards.

The Rail Accident Investigation Branch (RAIB) has recommended a review of the design of UK trains’ electric traction systems to ensure adequate safeguards are in place to offset any identified anomalies and to prevent similar explosions. Learn about the six learning points recommended by the RAIB for this investigation.

Use the TapRooT® System to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Does What You See Match What Is Happening?

March 26th, 2018

An incident report from NASA’s Aviation Safety Reporting System (ASRS) gives insight into a pilot’s recurring, problematic observation. Through distraction and confusion, a Bonanza pilot misperceived the runway edge and centerline lights as they cycled off and on. Air Traffic Control (ATC) let him know that the centerline lights were constant, not blinking.

The pilot summarized his experience, “I was transiting the final approach path of . . . Runway 16R and observed the runway edge and centerline lights cycle on and off . . . at a rate of approximately 1 per second. It was very similar to the rate of a blinking traffic light at a 4-way vehicle stop. The [3-blade] propeller speed was 2,400 RPM. This was observed through the entire front windscreen and at least part of the pilot side window. I queried ATC about the reason for the runway lights blinking and was told that they were not blinking. It was not immediately obvious what was causing this, but I did later speculate that it may have been caused by looking through the propeller arc.

“The next day [during] IFR training while on the VOR/DME Runway 16R approach, we observed the runway edge and centerline lights cycle on and off . . . at a rate slightly faster than 1 per second. The propeller speed was 2,500 RPM. I then varied the propeller speed and found that, at 2,700 RPM, the lights were observed strobing at a fairly high rate and, at 2,000 RPM, the blinking rate slowed to less than once per second. This was observed through the entire approach that terminated at the Missed Approach Point (MAP). The flight instructor was also surprised and mentioned that he had not seen this before, but also he doesn’t spend much time behind a 3-blade propeller arc.

“I would speculate that the Pulse Width Modulation (PWM) dimming system of the LED runway lights was phasing with my propeller, causing the observed effect. I would also speculate that the effect would . . . significantly differ at other LED dimming settings . . . and behind a 2-blade propeller.

“I found the effect to be entirely confusing and distracting and would not want to make a landing in such conditions.”
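
The pilot’s stroboscopic explanation is easy to sanity-check with quick arithmetic: a 3-blade propeller at 2,400 RPM crosses the line of sight 120 times per second, so an LED dimming (PWM) rate near a blade-pass frequency would beat against it at a low, visible rate. A minimal sketch; the PWM frequency below is a made-up placeholder, since the report does not give the actual airfield lighting specification:

```python
# Sketch of the stroboscopic "beat" the pilot describes. The PWM
# frequency is a hypothetical placeholder, not a real airfield value.

def blade_pass_hz(rpm: float, blades: int = 3) -> float:
    """How often a propeller blade crosses a fixed line of sight, per second."""
    return rpm / 60.0 * blades

def beat_hz(pwm_hz: float, rpm: float, blades: int = 3) -> float:
    """Apparent flicker rate when the PWM rate nearly matches the blade-pass rate."""
    return abs(pwm_hz - blade_pass_hz(rpm, blades))

PWM_HZ = 121.0  # assumed LED dimming frequency (illustrative only)
for rpm in (2000, 2400, 2500, 2700):
    print(f"{rpm} RPM: blade-pass {blade_pass_hz(rpm):.0f} Hz, "
          f"beat vs. {PWM_HZ:.0f} Hz PWM: {beat_hz(PWM_HZ, rpm):.0f} Hz")
```

At 2,400 RPM the blade-pass rate is 120 Hz, so a PWM rate near 121 Hz would beat at roughly 1 Hz, the once-per-second blink the pilot reports; the real interaction also involves harmonics of both rates, so this is only a first-order illustration.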

The TapRooT® System, Training, and Software have a dedicated history of R&D, human performance, and improvement. Learn with our best incident investigation and root cause analysis systems.

Monday Accidents & Lessons Learned: When a Disruption Potentially Saves Lives

March 12th, 2018 by

Early news of an incident often does not convey the complexity behind the incident. Granted, many facts are not initially available. On Tuesday, January 24, 2017, a Network Rail freight train derailed in southeast London between Lewisham and Hither Green just before 6:00 am, with the rear two wagons of the one-kilometer-long train off the tracks. Soon after, the Southeastern network sent a tweet to report the accident, alerting passengers that “All services through the area will be disrupted, with some services suspended.” Then came the advice, “Disruption is expected to last all day. Please make sure you check before travelling.” While Southeastern passengers were venting their frustrations on Twitter, a team of engineers was at the site by 6:15 am, according to Network Rail. At the scene, the engineers observed that no passengers were aboard and that no one was injured. They also noted a damaged track and the spillage of a payload of sand.

The newly laid track at Courthill Loop South Junction was constructed of separate panels of switch and crossing track, with most of the panels arriving at the site preassembled. Bearer ties, or mechanical connectors, joined the rail supports. The February 2018 report from the Rail Accident Investigation Branch (RAIB), including five recommendations, noted that follow-up engineering work took place the weekend after the new track was laid, and the derailment occurred the next day. Further inspection found the incident to be caused by a significant track twist and other contributing factors. Repair disrupted commuters for days as engineers worked around the clock to rebuild a 50-meter stretch of railway and used cranes to lift the overturned wagons. Now factor in the time, business, and resources saved (in addition to the lives often spared) when TapRooT® advanced root cause analysis is used to proactively reach solutions.

Monday Accidents & Lessons Learned: Three Killed, Dozens Injured on Italian Trenord-Operated Train

February 5th, 2018 by

Packed with 250 commuters and heading to Milan’s Porta Garibaldi station, the Italian Trenord-operated train derailed on January 25, 2018, killing three people and seriously injuring dozens. The train was said to have been traveling at normal speed, but witnesses described it as “trembling for a few minutes before the accident.” A collapse of the track is under investigation.

Why is early information-gathering important?

Monday Accidents & Lessons Learned: Sandwiched in a Singapore Chain Collision

December 25th, 2017 by

In Singapore, a car was crushed between two trailers after a passenger bus hit the trailer behind it, causing a chain collision that left 26 people injured. Read more here.

Are you interested in improving human performance? Try this four-step plan!

December 19th, 2017 by


Is discipline the main way you “fix” human error problems?

Are you frustrated because people make the same kind of mistakes over and over again?

Have you tried “standard” techniques for improving human performance and found that they just don’t get the job done long term (they have an impact short term but not long term)?

Is management grasping for solutions to human error issues?

Would you like to learn best practices from industry human performance experts?

Try this four-step plan:


1. Attend a 5-Day TapRooT® Advanced Root Cause Analysis Course.

The TapRooT® System is made to reactively and proactively help you solve human performance issues. It has built-in human factors expert systems that guide you to the root causes of human errors and help you develop effective fixes. The 5-Day TapRooT® Course is the best way to learn the system and get started fixing human performance issues.

See the upcoming course schedule here: http://www.taproot.com/store/5-Day-Courses/


2. Attend the Understanding and Stopping Human Error Course

At this two-day class, Dr. Joel Haight, a human factors and safety improvement expert and industrial engineering professor at the University of Pittsburgh (where he is the Director of the Safety Engineering Program), shares the reasons why people make mistakes and what you can do to understand the problems and fix them.

Joel is an expert TapRooT® User with extensive experience applying TapRooT® to fix human factors problems at a Chevron refinery and in the oil fields of Kazakhstan. He is also an expert in applying other human performance analysis and improvement techniques. He brings this knowledge to the 2-Day Understanding and Stopping Human Error Course.

It is best if you have already attended at least a 2-Day TapRooT® Course prior to attending this course. See the course description here: http://www.taproot.com/taproot-summit/pre-summit-courses#HumanError


3. Attend the Human Factors Track at the 2018 Global TapRooT® Summit

Once a year we put together a track at the Global TapRooT® Summit designed to share best practices and the latest state-of-the-art techniques for improving human performance. That’s what you get in the Human Factors Track at the Summit. What are the sessions at the 2018 Global TapRooT® Summit?

  • TapRooT® Users Share Best Practices – This is a workshop designed to promote the sharing of investigation, root cause analysis, and human performance best practices from TapRooT® Users from around the world. Every year I attend this session and get new ideas that I share with others to help improve performance. Many say this is the best session at the Summit because they get such great ideas and develop new, helpful contacts from many different industries.
  • Top 5 Reasons for Human Error and How to Stop Them – Mark Paradies, President of System Improvements and a human factors expert, shares his deep knowledge of the top five reasons he sees for people making “human errors.” For each of these he shares his best ideas to stop the problems in their tracks.
  • Stop Daily Goofs for Good – Kevin McManus, a TapRooT® Instructor and performance improvement expert, shares systematic improvement ideas to prevent human error and improve cognitive ergonomics on the job.
  • Using Wearables to Minimize Daily Human Errors – Using “wearables” is a technological approach to error prevention. Find out more about how it is being used and may be applied even more effectively in the future.
  • Alarm Management, Signal Detection Theory, and Decision Making – Are people at your facility overwhelmed by alarms? Do they become complacent because of nuisance alarms? Dr. Joel Haight, Director of the University of Pittsburgh Safety Engineering Program, will discuss control system decisions, decision execution, alarm management, signal detection theory, and decision-making theory, and how they can be critical in an emergency situation.
  • The Psychology of Failing Fixes – Why do your fixes fail to prevent human error? That’s what this session is all about!
  • What is a Trend and How Can You Find Trends in the TapRooT® Data? – Looking for trends in human error data is an important activity for identifying generic human factors problems and taking the first step toward major human performance improvements. Now for the bad news: most people really don’t understand trending. Find out what you need to know and how to put trending to work in your improvement program.
  • Performance Improvement Gap Analysis – This is the session where you put everything together. Where does your program have holes? How can you apply what you have learned to fill those holes? What are others doing to solve similar problems? Put your plan together so you are ready to hit the ground running and make improvements happen when you get back to work.

And the Best Practice Sessions outlined above are only a start. You will also see five great Keynote Speakers:


Mike Williams will share his experience surviving the Deepwater Horizon explosion.


Dr. Carol Gunn will share the story of her sister’s unnecessary death in a hospital and discuss patient safety improvement.


Inky Johnson will share his experience with a debilitating football injury and how it changed his life and helps him inspire excellence in others.


Mark Paradies will help you get the most out of your application of TapRooT®.


Vincent Ivan Phipps will teach you to amplify your leadership skills and communication ability.

We know that the Summit will provide you with new ideas and the inspiration to implement them.


4. Get started! Analyze your human performance issues and make improvements happen!

Just do it! Get back to work and implement what you have learned. Need more help? We can provide training at your site to get more people trained in using TapRooT® so that you have help making change happen.

Don’t wait! Get your four-step plan started! Register for the courses and Summit today!

My 20+ Year Relationship with 5-Why’s

December 11th, 2017 by

I first heard of 5-Why’s over 20 years ago when I got my first job in Quality. I had no experience of any kind; I got the job because I worked with the Quality Manager’s wife in another department, and she told him I was a good guy. True story… but that’s how things worked back then!

When I was first exposed to the 5-Why concept, it did not really make any sense to me; I could not understand how it actually could work, as it seemed like the only thing it revealed was the obvious. So, if it is obvious, why do I need it? That is a pretty good question from someone who did not know much at the time.

I dived into Quality and got all the certifications, went to all the classes and conferences, and helped my company build an industry-leading program from the ground up. A recurring concept in the studies and materials I was exposed to was 5-Why. I learned the “correct” way to do it. Now I understood it, but I still never thought it was a good way to find root causes.

I transferred to another division of the company to run their safety program. I did not know how to run a safety program – I did know all the rules, as I had been auditing them for years, but I really did not know how to run the program. But I did know quality, and those concepts helped me instill an improvement mindset in the leaders which we successfully applied to safety.

The first thing I did when I took the job was to look at the safety policies and procedures, and there it was: when you have an incident, “ask Why 5 times” to get your root cause! That was the extent of the guidance. So whatever random thought was your fifth Why became the root cause on the report! The people using it had absolutely no idea how the concept worked or how to do it, and my review of old reports validated this. Since then I have realized this is a common theme with 5-Why’s: there is very wide variation in the way it is used. I don’t believe it works particularly well even when used correctly, but in my experience it usually isn’t.

Since retiring from my career and coming to work with TapRooT®, I’ve had literally hundreds of conversations with colleagues, clients, and potential clients about 5-Why’s. I used to be somewhat soft when criticizing 5-Why’s and just try to help people understand why TapRooT® gets better results. Recently, I’ve started to take a more militant approach. Why? Because most of the people I talk to already know that 5-Why’s does not work well, but they still use it anyway (easier/cheaper/quicker)!

So it is time to take the gloves off; let’s not dance around this any longer. To quote Mark Paradies:
“5-Why’s is Root Cause Malpractice!”

To those who are still dug in and take offense, I do apologize! I can only share my experience.

For more information, here are some previous blog articles:

What’s Wrong With Cause-and-Effect, 5-Why’s, & Fault Trees

Comparing TapRooT® to Other Root Cause Tools

What’s Fundamentally Wrong with 5-Whys?

The 7 Secrets of Root Cause Analysis – Video

December 12th, 2016 by

Hello everyone,

Here is a video that discusses some root cause tips, common problems with root cause analysis, and how TapRooT® can help. I hope you enjoy!

Like what you see? Why not join us at the next course? You can see the schedule and enroll HERE

Would you know if your corrective action resulted in an accident?

June 30th, 2015 by

“Doctor… how do you know that the medicine you prescribed him fixed the problem?” the peer asked. “The patient did not come back,” said the doctor.

No matter the industry, and even if the root causes found for an issue were accurate, the cure can be worse than the bite. Some companies have a formal Management of Change process or a Design of Experiments method that they use when adding new actions. At the other extreme, some use the trial-and-error method… with a little bit of “this is good enough, and they will tell us if it doesn’t work.”

You can use the formal methods listed above, or for some risks it can be as simple as a review with the right people present before an action is implemented. In our 7-Step TapRooT® Root Cause Analysis Process, we teach reviewing for unintended consequences during the creation of and after the implementation of corrective or preventative actions. This task comes with four basic rules:

1. Remove the risk/hazard, or the persons from the risk/hazard, first if possible. After all, one does not need to train somebody to work more safely or provide better tools for the task if the task and its hazard are removed completely. (We teach Safeguard Analysis to help with this step.)

2. Have the right people involved throughout the creation, implementation, and review of the corrective or preventative action. Identify anyone who has an impact on the action, owns the action, or will be impacted by the change, including process experts. (Hint: it is okay to use outside sources too.)

3. Never forget or lose sight of why you are implementing a corrective or preventative action. In our analysis process you must identify the action or inaction (the behavior of a person, equipment, or process) and each behavior’s root causes. It is these root causes that must be fixed or mitigated in order for the behaviors to go away or be changed. Focus is key here!

4. Plan an immediate observation of the change once it is implemented and a long-term audit to ensure the change is sustained.

Simple… yes? Maybe? Feel free to post your examples and thoughts.

FASTEST WAY TO GET FIRED

May 14th, 2015 by


When a major accident happens, look out. The tradition is for “heads to roll.”

That’s right, people get fired.

Who gets fired? Those who are seen as “part of the problem.”

You need to be part of the solution.

How?

Investigate the incident using the TapRooT® Root Cause Analysis System, find the real, fixable root causes, suggest corrective actions that will prevent the problem from happening again, and be ready to help implement the solutions.

Then you are part of the answer … Not part of the problem.

Or you could just sit around and wait to get fired.

The choice is yours.

Get trained to use TapRooT® root cause analysis to solve problems. See:

http://www.taproot.com/courses

Another successful TapRooT® course!

February 28th, 2013 by

I just finished our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course in Dubai.  What a great group we had.  Here is the class picture:


Our next courses in the region will be in April: the 5-Day in Doha and the 2-Day in Dubai. Both courses are already half full, so if you want to attend, you should register right away. DOHA OR DUBAI REGISTRATION CLICK HERE

Great Human Factors: When a Hand Control is Called a "Suicide Shifter"

February 16th, 2012 by


I am a sucker for a 1948 Indian Chief motorcycle. So I thought… what a great opportunity to talk about Human Factors Design and show off a little nostalgia. The topic of today is the Suicide Shifter.

The Suicide Shifter was located on the left side of the fuel tank and was used to change gears while riding. It earned the name because you had to take your left hand off the handlebar grip to shift it.

So the question for you today: how many equipment controls in use at your workplace are not placed in the safest position for the operator to use?

Great Human Factors: The New Windows 8

February 9th, 2012 by

In the human factors world there is an acronym, HCI, which stands for Human-Computer Interaction. A subset of the human factors field, HCI is where software programmers meet the computer user’s needs by design BEFORE they sell the product. So… have you seen the marketing and pre-beta download for Windows 8?

  • Will the new version frustrate new or experienced Windows users? Or both?
  • Will Microsoft help experienced users transition?
  • How will Microsoft help experienced users transition (if they do) to the new version?
  • Will software developers whose products run on Windows help their existing customers transition?

Windows 8 Developer Preview is available for you to try now: http://msdn.microsoft.com/en-us/windows/apps/br229516

Great Human Factors: Can Intuitive Tool Design Override Previous Training?

February 2nd, 2012 by

Watch the chimpanzee vs. human child in a learning experiment.

Here is the video link: http://youtu.be/nHuagL7x5Wc

We are all trained, or learn by trial and error, how to use equipment “properly.” What happens when you get a better understanding of how the equipment works? Here are some of the choices we could make:

1. Ignore the previous training and just get the prize (work done faster, like the chimpanzee).

2. Continue following the rules you learned or were trained to do (at least in front of the bosses, like the children).

3. Stop and ask what’s up?

4. Stop using the tool altogether and do not tell anyone.

Often previous training and experience override the new operating steps needed… ever been totally frustrated every time someone changes your computer’s Microsoft Windows version? And no, training by itself does not override experience; practice and repetition do!

I had a discussion not too long ago about OSHA forklift training requirements being met when people were retrained after changing forklifts. Unfortunately, the controls worked exactly opposite on the new forklift, and the quick review did nothing to override past knowledge and muscle memory.

Just something to think about when you think “Great Human Factors.”
