
What does a bad day look like?

September 11th, 2018

Because who doesn’t want to take a trip in their hot tub?

Monday Accidents & Lessons Learned: Ensure Safety Behind the Wheel

September 10th, 2018

In June 2018, a Queensland owner/operator truck driver was reversing his single-deck truck up to a ramp to load rodeo cattle. It appears he placed the truck in reverse and began to idle backwards; the truck's reverse gearing was low enough that he did not need his foot on the accelerator. He then opened the door and stood on the running board, holding the steering wheel to maneuver the truck while looking backward toward where he was going. He fell from the running board and was fatally crushed under the front wheel as the truck continued to reverse.

Also, in June 2018, a courier van driver sustained serious fractures when he was dragged under his vehicle. He had returned to the parked van to retrieve an item through the front window when it rolled backward. It appears he was dragged under the vehicle while trying to stop it.

Both investigations are continuing.

Contributing factors
Some contributing factors to these incidents include:

  • Workers being under a heavy vehicle or trailer, or in its path
  • Unsafe systems of work being applied, such as poor separation of traffic from pedestrian areas
  • Failing to immobilize:
    - the handbrake of the vehicle not applied
    - the wheels of the heavy vehicle or trailer not immobilized
    - components of the heavy vehicle or trailer not restrained or adequately supported
    - brakes malfunctioning
  • Not conducting a risk assessment before working on the vehicle

Action required in immobilizing heavy vehicles
If an employee needs to work near a heavy vehicle, or between a heavy vehicle and another object, first make sure the vehicle is immobilized by:

  • Switching off the motor and removing the key from the ignition to render it inoperable
  • Applying the handbrake
  • Using wheel chocks, if warranted

Establish a safe operating procedure and ensure workers follow it to eliminate the risk of anyone failing to immobilize their vehicle.

Consider installing a handbrake warning system to alert drivers when the handbrake has not been applied (these can be easily retrofitted).

Working under heavy vehicles and trailers
For work under heavy vehicles and trailers, ensure an appropriate load support is used (e.g. stands or lifting devices).

Risk assessments before commencing work
Before commencing work, identify hazards and assess risks associated with working under and around heavy vehicles or trailers. Where appropriate:

  1. Establish an exclusion zone that is clearly marked and enforced.
  2. Use safe work procedures for maintenance and repair tasks, and ensure that workers are trained in these procedures.
  3. Ensure worker training, experience, and competency is consistent with the nature and complexity of the task.

Similar risks exist for light and smaller vehicles, and a risk assessment should be conducted before commencing work.

Preventing similar incidents
There have been incidents where vehicle drivers and others have been killed or seriously injured after being hit, pinned, or crushed by the uncontrolled movement of vehicles. The risk of a vehicle moving in an uncontrolled or unexpected manner must be managed by ensuring appropriate control measures are in place. Controls may include, but are not limited to, the following:

  • Before leaving a vehicle, ensure it is stationary and out of gear with the emergency brake applied.
  • Do not climb into a moving vehicle.
  • Do not allow any movement of the truck or vehicle unless there is someone in the driver’s seat who is able to receive oral or visual warnings and can immediately act to prevent harm (e.g. apply brakes or steer the truck).
  • When reversing, ensure the area around the vehicle is clear.
  • Always reverse with the aid of mirrors or a spotter.

The person conducting a business or undertaking should conduct a risk assessment of work practices, develop appropriate safe work systems, conduct appropriate training, and ensure the system is enforced at the workplace.

Statistics
Since 2012, there have been 47 incidents involving workers or others being crushed, struck, or run over by a truck moving in an uncontrolled manner. Eleven were fatal, and 27 involved a serious injury. In the same period, 49 improvement notices and 25 prohibition notices were issued for uncontrolled movement or rolling of trucks, semitrailers, and similar vehicles.

Since 2012, there have been 10 work-related deaths involving a person being run over by a vehicle or some other type of machinery. In the same period, 10 prohibition notices and eight improvement notices have been issued in relation to a person being run over by a vehicle or other type of machinery.

Each year, there are around 130 accepted workers' compensation claims involving a worker being struck or crushed by a truck. Of these claims, more than a third involve a serious injury, and two are fatal.

Annually, there are around 600 accepted workers' compensation claims involving a worker injured by mobile plant*. Of these claims, about 40 percent involve a serious injury requiring five or more days off work, and two are fatal.

Prosecutions and compliance
In May 2017, a company was fined $60,000 and an individual $3,000 following the death of a worker who was run over by a truck and trailer. The worker was lying under the back of the trailer to check on bouncing that had occurred while driving. Moments later, the truck and trailer began moving backward. The trailer wheels rolled over the worker, followed by the truck wheels.

In February 2017, a regional council was fined $170,000 following the death of a worker. The worker was killed after he was struck and run over by a reversing truck on a civil construction site.

In December 2016, a road freight transport company was fined $60,000 and a court ordered undertaking for two years with recognizance of $60,000 following the death of a worker who was run over by a trailer. The prime mover and trailer appeared to have trouble releasing its trailer brakes. The worker went to the rear of the trailer and attempted to release a trailer brake. When the vehicle began rolling backward, he tried to reengage the spring brake but was struck by the trailer wheels.

In June 2016, a company was fined $120,000 after a worker was killed operating a six-ton mobile yard crane to shift a load of steel product. The worker was seen running alongside the crane, which was traveling down a slope, uncontrolled, with no one in the operator’s seat. He was either struck by the crane or run over when it tipped, and was killed. The driver was not licensed to operate this type of crane.

*Powered mobile plant is defined by the Work Health and Safety Regulation 2011 (WHS Regulation) to mean any plant that is provided with some form of self-propulsion that is ordinarily under the direct control of an operator, and includes earthmoving machinery (e.g. rollers, graders, scrapers, bobcats) and excavators.

Thanks to WorkCover Queensland for this information highlighting the risks associated with workers being crushed or hit by heavy vehicles or trailers.

Circumstances can crop up anywhere at any time if proper and safe sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: “Brute Force” Compromises Assets

September 3rd, 2018

Fighting today’s cybercrime has become a scenario in which businesses continually strive to stay ahead of the most recent evolution. Technology has forever changed the way we work, and the company culture that stays cybersecurity-alert is less likely to spend its working life looking over its collective shoulder.

The very real situation that follows is a Lesson Learned, the Risk of Internet Accessible Cyber Assets, from the Western Electricity Coordinating Council (WECC) and NERC (the North American Electric Reliability Corporation).

The Problem 
An electronic access point connected to the internet from a low-impact facility for remotely accessing a capacitor bank was compromised by unauthorized internet users for seven months prior to discovery.

Details
A registered entity discovered a compromised electronic access point connected to the internet from a low-impact facility. The access point was originally intended to be temporary and was installed by a SCADA (supervisory control and data acquisition) Manager who subsequently left the entity without providing adequate documentation and turnover to the next SCADA Manager. The access point was misidentified as a remote terminal unit (RTU) with an end-of-life (EOL) operating system and left in place. Unauthorized personnel accessed the cyber asset for seven months before the registered entity became aware of the compromise. Because the device was identified as an EOL system, the compromised system was not maintained (patched, monitored, etc.) by the registered entity and was thus more susceptible to exploitable vulnerabilities.

The initial compromise resulted from an unauthorized internet user guessing, via a “brute force”1 method, the weak password for the administrator account, which permitted remote access. The compromised cyber asset was used over a seven-month period as a mail relaying (SMTP) and remote desktop (RDP) scanner.2 Additionally, the IP address and credentials for the cyber asset were posted on a Russian-based media site, and the cyber asset was subsequently infected with ransomware. The compromise was discovered after support staff could not remotely access the cyber asset. The purpose of the internet-connected access point was to remotely access and operate the capacitor banks to ensure the reliability of the system. Upon looking into the matter further, personnel discovered that the cyber asset was compromised with ransomware, so the registered entity immediately powered off the cyber asset.

Forensic analysis on the compromised system identified several different scanning tools designed to locate remotely accessible RDP or SMTP servers, along with text files containing IP addresses for the scanners to target. Although the attackers likely conducted reconnaissance on the local network to identify other vulnerable devices, the primary focus of their activity appears to have been identifying other remote systems to target for attacks.

Corrective Actions
The registered entity removed the compromised device from service and performed forensic analysis to identify all malware on the affected device, determine the agent(s) and timeline of the compromise, and reveal (to the greatest extent possible) the underlying activities and motives behind it. A virus scan was performed on all devices at the same site, along with a review of logs on all of the devices to look for anomalous activity. Other locations were also scanned to determine whether they had similar installations or issues.

Lesson Learned
Cyber assets at low-impact facilities capable of remote internet connectivity are susceptible to unauthorized access from the internet or unsecured networks if not properly secured. These remote access points are typically used to provide communication paths for monitoring and control purposes to maintain BES (Bulk Electric System) reliability. Remote connectivity that can provide unauthorized and potentially malicious access to systems that supply auxiliary power, power quality, voltage support, fault monitoring, and breaker control is of particular concern.

Failure to develop and follow appropriate policies and procedures to control the installation and maintenance of cyber assets may create exploitable vulnerabilities that could negatively impact BES reliability. In this case, the installation of, inaccurate identification of, and inadequate security protections for a device connected to a registered entity’s network led to the compromise of the device. Several practical lessons can be derived from this event that apply to low-impact cyber assets and constitute good cybersecurity practice in general.

Policy and Procedures

  1. Train employees and contractors on cybersecurity awareness, policy, and practices
  2. Catalog cyber assets at low-impact facilities to determine use and facilitate accurate records
  3. Consult with and obtain authorization from responsible IT departments as well as compliance and risk management groups to evaluate potential risks and impacts of internet-facing and internetworked cyber assets at low-impact facilities
  4. Have personnel (e.g., operations, maintenance) who perform periodic onsite visits conduct cyber-device inventory checks as part of routine safety and maintenance inspections
  5. Consider using a checklist
  6. Periodically reevaluate risks and potential impacts of the inventoried cyber assets as new threats and vulnerabilities are revealed or vendor support is discontinued
  7. An entity’s IT department could use tools such as Shodan3 and nmap4 on the entity’s own public IP space on a regular basis to verify only authorized ports are open to the internet
  8. When an employee or contractor leaves the company or is terminated, ensure appropriate turnover and knowledge transfer processes occur
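
Item 7 above can be approximated even without dedicated tools. The sketch below is a minimal, illustrative TCP port check using only the Python standard library; it is not a substitute for nmap, and the host and the authorized-port baseline are hypothetical:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Compare what is actually reachable against an authorized baseline
authorized = {443}                                   # hypothetical policy: HTTPS only
found = scan_ports("127.0.0.1", [22, 80, 443, 3389])
unexpected = [p for p in found if p not in authorized]
```

Run regularly against an entity's own public IP space, a comparison like `unexpected` would flag forgotten internet-facing devices such as the "temporary" access point in this incident.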

Cybersecurity practices to consider for low-impact facilities

  • Identify and secure cyber assets at low-impact facilities capable of remote connectivity
  • Where possible, implement network access controls within the system to prevent the installation of unauthorized hardware
  • Implement network segmentation into trust zones
  • Replace default passwords with strong passwords on user and administrative accounts, and restrict operational use of administrative accounts
  • Implement MFA (multi-factor authentication) for all internet-facing resources that support these technologies
  • Provide for a patch management plan for evaluating security patching for cyber assets at low-impact facilities
  • Whenever practical, monitor the network for anomalous behavior
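
The password guidance above can be made concrete with rough keyspace arithmetic. This sketch, assuming a purely hypothetical attacker rate of one billion guesses per second, shows why a short, simple password like the one in this incident falls quickly to brute force while a longer, mixed-character password does not:

```python
# Rough keyspace arithmetic: why short, simple passwords fall to brute force.
def keyspace(alphabet_size, length):
    """Number of possible passwords of the given length over the alphabet."""
    return alphabet_size ** length

def years_to_exhaust(space, guesses_per_second=1e9):
    # 1e9 guesses/s is an assumed attacker rate, for illustration only
    return space / guesses_per_second / (3600 * 24 * 365)

weak = keyspace(26, 8)       # 8 lowercase letters
strong = keyspace(94, 12)    # 12 printable-ASCII characters

print(f"8 lowercase letters: {years_to_exhaust(weak):.6f} years to exhaust")
print(f"12 mixed characters: {years_to_exhaust(strong):.1e} years to exhaust")
```

Under these assumptions the 8-lowercase space is exhausted in minutes, while the 12-character mixed space takes millions of years, which is why length and character variety (plus MFA) matter more than clever substitutions.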

1“Brute forcing” is an automated method of attempting authentication with many different passwords until the attacker is able to successfully log in to the system.

2A network scanner performs a scan on a network and collects an electronic inventory of the systems and the services for each device on the network. In this case, the server was used to scan for open SMTP (Simple Mail Transfer Protocol) servers and RDP (Remote Desktop Protocol) servers for potential compromise.

3Shodan is an internet site used to discover devices that are connected to the internet, where they are located, and who is using them.

4Nmap (“Network Mapper”) is a free and open-source utility for network discovery and security auditing.

TapRooT® recommends the following modifications to your online behavior to reduce the possibility of cybercrime:

  • Change passwords regularly; be the sole owner of your passwords; avoid using personal information in passwords; create passwords with random keyboard patterns, numbers, and special characters.
  • Don’t respond to emails or messages requesting personal or financial information.
  • Never send your password in an email.
  • Never give unauthorized persons access to business computers—at the workplace or at home.
  • Don’t act on money-transfer instructions received by email.
  • Always call clients and vendors to verify any financial/billing changes.
  • Choose automatic software updates.
  • Back up data to reduce the likelihood of ransomware attacks, and ensure that your backup management is secure. (Often, a company’s most valuable asset is its intellectual property, so a loss in this area can be disastrous.)
  • Install/maintain antivirus and anti-spyware software and a firewall on all business computers.
  • Secure all WiFi networks and passwords.
  • Educate all employees on what constitutes sensitive business information and the risks of sharing it with anyone.
  • Grant administrative privileges only to trusted staff and limit employee access to data systems that are workload-critical.
  • Require administrative approval and assistance in any and all downloads by employees.

Does Social Media Encourage Poor Root Cause Analysis?

August 29th, 2018

Who doesn’t love a good online video? Videos can encourage interaction and make you think, but are they leading us down poor thought paths or compelling us to jump to conclusions? That question led us to ponder: Does social media encourage poor root cause analysis?

Listen as TapRooT® professionals Benna Dortch and Ken Reed explore this topic. You will want to glean further insights from Ken’s article, Do LinkedIn Posts Encourage Poor Investigations? (For the Vimeo version of this video, click here.)

TapRooT® Root Cause Analysis training can transform your investigations, to clearly isolate systemic problems that can be fixed, and prevent (or greatly reduce) repeat accidents. Attend a TapRooT® Root Cause Analysis Course and find out how you can use TapRooT® to help you change your workplace into a culture of performance improvement.

If you would like for us to teach a course at your workplace, please reach out here to discuss what we can do for you, or call us at 865.539.2139.

Monday Accidents & Lessons Learned: Simple Ship Repair Results in Fatal Fall

August 27th, 2018

The accident
A crew member was making repairs to the surrounding handrails of the lowest of three intermediate platforms built into a cargo hold access ladder. The platform was designed as a landing to hold a single person while moving from one section of the cargo hold access ladder to the next. The ship was at sea, and the cargo hatch covers were closed. The handrails had been removed for repair, and the crew member was preparing to refit them to the platform. The lower platform was five meters above the tank top. There were no eyewitnesses to the accident. It was concluded that the crew member tripped or slipped from the platform and, as he was not wearing a safety harness, he fell to the tank top below. He died from multiple injuries.

Contributing factors
What caused the crew member to slip from the platform?

  1. The platform was cluttered with equipment that the crew member was using to make the repairs, and it was not guarded by handrails, making it a congested and dangerous place to work.
  2. A single halogen light had been rigged about one meter above the platform. The light was another obstacle that the crew member had to work around.
  3. Although shipboard procedures required the crew member to use a safety harness for the task, he was not wearing one at the time. Wearing a safety harness and connecting it to a secure point would have arrested his fall.

Lessons learned
Working at any height without the protection of handrails creates a hazardous situation. It is crucial for seafarers to follow industry best practices—such as wearing a safety harness and connecting it to a secure point—whenever working at height. Equally important, lighting should be sufficient to illuminate both the immediate task and the general working area without obstructing workers. Finally, task areas should be kept clutter-free and prepared in advance for free, unobstructed access.

This accident was reported by the Australian Transport Safety Bureau.

Does Blame Make Sense in Incident Investigations?

August 23rd, 2018

When we consider a recent accident—such as a pharmaceutical plant producing a bad batch of drugs and those drugs making their way past the QA process and, once distributed, ending up harming customers—one fact is indisputable: Human action broke down somewhere along the line. And the least productive next step is to begin a round of the blame game.

In this video, TapRooT® professionals Benna Dortch and Ken Reed highlight why blame is counterproductive—as well as an exercise in futility—in the workplace. While it may be human nature to default to finger-pointing, a blame-oriented organization can move beyond old habits. An organization that employs investigative methodology to accomplish a thorough review of the facts creates a culture where workers are not fearful to self-report missteps and mistakes. (For the Vimeo version of this video, click here.)

TapRooT® Root Cause Analysis training can transform your investigations, moving beyond blame to clearly isolate systemic problems that can be fixed, and prevent (or greatly reduce) repeat accidents. Attend a TapRooT® Root Cause Analysis Course and find out how you can use TapRooT® to help you change a blame culture into a culture of performance improvement.

If you would like for us to teach a course at your workplace, please reach out here to discuss what we can do for you, or call us at 865.539.2139.

Monday Accidents & Lessons Learned: One Second Away from Major Tragedy

August 20th, 2018

Have you ever felt that you couldn’t challenge a company practice for fear of losing face or your position? It happens more often than you may imagine. Concerning recent findings from a 2017 Nottinghamshire incident investigation by the Rail Accident Investigation Branch (RAIB), Chief Inspector of Rail Accidents Simon French commented, “When the person in charge of a team is both a strong personality and an employee of the client, it can be particularly hard for contract workers to challenge unsafe behavior.” Inspector French further observed, “We have seen this sort of unsafe behavior before, where the wish to get the work done quickly overrides common sense and self-preservation. When we see narrowly avoided tragedies of this type, it is almost always the result of the adoption of an unsafe method of work and the absence of a challenge from others in the group.”

The incident
Around 11:22 am on October 5, 2017, a group of track workers narrowly avoided being struck by a train close to Egmanton level crossing, between Newark North Gate and Retford on the East Coast Main Line. A high-speed passenger train was approaching the level crossing on the Down Main line at the maximum permitted line speed of 125 mph (201 km/h) when the driver saw a group of track workers in the distance. He sounded the train’s warning horn but saw no response from the group. A few seconds later, the driver gave a series of short blasts on the train horn as it approached and passed the track workers.

The track workers became aware of the train about three seconds before it reached them. One of the group shouted a warning to three others who were between the running rails of the Down Main line. These three workers cleared the track about one second before the train passed them. During this time, thinking his train might strike one or more of them, the driver continued to sound the horn and made an emergency brake application before the train passed the point where the group had been working. The train subsequently came to a stop around 0.75 miles (1.2 km) after passing the site of work.
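
The margins described above can be sanity-checked with simple unit conversion. This is a rough back-of-envelope calculation, not part of the RAIB findings:

```python
# Rough check of the distances implied by the report's timings.
speed_mph = 125
speed_ms = speed_mph * 1609.344 / 3600   # exact mile-to-metre conversion, ~55.9 m/s

awareness_distance = speed_ms * 3   # workers noticed the train ~3 s before it reached them
clearance_distance = speed_ms * 1   # distance the train covered in the final second

print(f"{speed_ms:.1f} m/s: noticed at ~{awareness_distance:.0f} m, "
      f"cleared the track ~{clearance_distance:.0f} m ahead of the train")
```

At line speed the train covers more than 50 metres every second, which is why a one-second margin leaves essentially no room for a stumble or a second of hesitation.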

The immediate cause of the near-miss was that the track workers did not move to a position of safety as the train approached. The group had been working under an unsafe and unofficial system of work, set up by the Person in Charge (PiC). Instead of adhering to the correct method of using the Train Operated Warning System (TOWS) by moving his team to, and remaining in, a position of safety while TOWS was warning of an approaching train, the PiC used the audible warning as a cue for the lookout to start watching for approaching trains in order to maximize the working time of the group on the track. This unsafe system of work broke down when both the lookout and the PiC became distracted and forgot about the TOWS warning them of the approaching train.

Although the PiC was qualified, experienced, and deemed competent by his employer, neither his training nor reassessments had instilled in him an adequate regard for safety along with the importance of following the rules and procedures. Additionally, none of the team involved challenged the unsafe system of work that was in place at the time. Even though some were uncomfortable with it, they feared they might lose the work as contractors if they challenged the PiC.

Recommendations
As a result of its investigation, the RAIB has made three recommendations. These relate to:

    1. Strengthening safety leadership behavior on site and reducing the occurrences of potentially dangerous rule breaking by those responsible for setting up and maintaining safe systems of work;
    2. Mitigating the potentially adverse effect that client-contractor relationships can have on the integrity of the Worksafe procedure such that contractors’ staff feel unable to challenge unsafe systems of work for fear of losing work;
    3. Clarifying to staff how the Train Operated Warning System (TOWS) should be used.

Lessons learned
The findings of this investigation also reinforced the importance of railway staff understanding their safety briefings and challenging any system of work that they believe to be unsafe.

Inspector French added this comment to the findings, “We are therefore recommending that Network Rail looks again at how it monitors and manages the safety leadership exercised by its staff, and how they interact with contractors. There have been too many near-misses in recent years.”

Monday Accidents & Lessons Learned: An Assumption Can Lead You to Being All Wet

August 13th, 2018

IOGP Well Control Incident Lesson Sharing

The International Association of Oil & Gas Producers (IOGP) is the voice of the global upstream oil and gas industry. The oil and gas industry provides a significant proportion of the world’s energy to meet growing demand for heat, light, and transport. IOGP members produce 40 percent of the world’s oil and gas, operating in the Americas, Africa, Europe, the Middle East, the Caspian, Asia, and Australia.

IOGP shares a Well Control Incident Lesson Sharing report recounting a breakdown in communication, preparation and monitoring, and process control. Importantly, the findings show that the overarching project plan was erroneously based on the assumption that the reservoir was depleted. Let’s track this incident:

What happened?
In a field subjected to water flooding, while drilling through shales and expecting to enter a depleted reservoir, gas readings suddenly increased. Subsequently, the mud weight was increased, the well was shut in, and the drill string became stuck when the hole collapsed during kill operations. Water-flood breakthrough risks were not communicated to the drill crew, and the crew failed to adequately monitor the well during connections. The loss of well control, the hole, and the drill string was due to poor communication and poor well monitoring.

  • Drilling an 8-1/2″ x 9-1/2″ hole with 1.30 SG mud weight (MW) at 2248 m – this mud density is used to drill the top-section shales for borehole stability purposes
  • Crossed an identified sand layer that was expected to be sub-hydrostatic (0.5 SG)
  • Observed connection gas readings of up to 60% plus a pack-off tendency
  • Increased the mud weight in steps to 1.35 SG, but gas readings remained high
  • Decided to shut the well in and observed pressure in the well: SIDP 400 psi – SICP 510 psi
  • A gain of +/- 10 m³ was estimated later (by postmortem analysis of the previous pipe-connection and pump-off logs)
  • Performed the Driller’s Method and killed the well by displacing 1.51 SG kill mud
  • The open hole collapsed during circulation, with the consequence that the string became stuck and the kick zone was isolated

What went wrong? 
The reservoir was expected to be depleted, but this part of the field had been artificially over-pressurized by a water-injector well. This was not identified during the well preparation phase, and the risk was not communicated to the drilling teams. Crew vigilance was lacking, and well monitoring during drill pipe (DP) connections was poor. The high connection gas observed at surface was the result of crude contamination in the mud system. Significant gain volumes were taken during the previous pipe connections without being detected.
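
The mismatch between expectation and reality can be quantified by converting the report's mud weights into hydrostatic pressures at the sand depth. This is a rough back-of-envelope sketch (the SIDP-based formation-pressure estimate ignores any gas in the annulus):

```python
# Hydrostatic pressures implied by the report's mud weights at the sand depth.
G = 9.81                     # m/s^2
DEPTH = 2248.0               # m, depth given in the report
PSI_PER_PA = 1 / 6894.757

def hydrostatic_psi(sg, depth_m=DEPTH):
    """Pressure at depth of a static fluid column of the given specific gravity."""
    return sg * 1000 * G * depth_m * PSI_PER_PA

p_mud = hydrostatic_psi(1.30)       # mud actually in the hole
p_expected = hydrostatic_psi(0.5)   # the assumed depleted (sub-hydrostatic) sand
p_kill = hydrostatic_psi(1.51)      # kill mud displaced during the Driller's Method

# SIDP ~400 psi implies a formation pressure of roughly mud column + SIDP,
# i.e. an equivalent mud weight far above the 0.5 SG that was expected.
p_formation = p_mud + 400
equivalent_sg = 1.30 * p_formation / p_mud
```

On these numbers the formation behaves like roughly 1.43 SG rather than 0.5 SG, which is consistent with the 1.51 SG kill mud the crew ultimately needed.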

Corrective actions and recommendations 
  • The incident was shared with drilling personnel and used for training purposes.
  • Shared the experience and reinforced the well preparation process with rigorous risk identification; the hazard related to continuous injection in a mature field is to be emphasized.
  • Reinforce well monitoring, specifically during pipe connections.
  • Review the mapping of injection on the field.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Dumping the Electronic Flight Bag En Route

August 6th, 2018

The electronic flight bag (EFB) has demonstrated improved capability to display aviation information such as airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance. This portable electronic hardware has proven helpful to flight crews in efficiently performing flight-management tasks. While the EFB provides many advantages and extensive improvements for the aviation community in general and for pilots specifically, some unexpected operational threats have surfaced.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. Today’s particular instance relates to EFB operation in a particular phase of flight:

An ERJ175 pilot attempted to expand the EFB display during light turbulence. Difficulties stemming from the turbulence and marginal EFB location rendered the EFB unusable, so the pilot chose to disregard the EFB entirely.

“We were on short final, perhaps 2,000 feet above field elevation. [It had been a] short and busy flight. I attempted to zoom in to the Jepp Chart, currently displayed on my EFB, to reference some information. The EFB would not respond to my zooming gestures. After multiple attempts, the device swapped pages to a different chart. I was able to get back to the approach page but could not read it without zooming. I attempted to zoom again but, with the light turbulence, I could not hold my arm steady enough to zoom. [There is] no place to rest your arm to steady your hand because of the poor mounting location on the ERJ175.

“After several seconds of getting distracted by…this EFB device, I realized that I was … heads-down for way too long and not paying enough attention to the more important things (e.g., acting as PM). I did not have the information I needed from the EFB. I had inadvertently gotten the EFB onto a company information page, which is bright white rather than the dark nighttime pages, so I turned off my EFB and continued the landing in VMC without the use of my EFB. I asked the PF to go extra slowly clearing the runway to allow me some time to get the taxi chart up after landing.

“… I understand that the EFB is new and there are bugs. This goes way beyond the growing pains. The basic usability is unreliable and distracting. In the cockpit, the device is nearly three feet away from the pilot’s face, mounted almost vertically, at a height level with your knees. All [EFB] gestures in the airplane must be made from the shoulder, not the wrist. Add some turbulence to that, and you have a significant heads-down distraction in the cockpit.”

The award-winning publication and monthly safety newsletter, CALLBACK, from NASA’s Aviation Safety Reporting System, shares reports, such as the one above, that reveal current issues, incidents, and episodes of some common problems that pilots have experienced. In this issue, we learned about precursor events that have occurred during the EFB’s adolescence.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Zooming to “Too Low Terrain”

July 30th, 2018 by

When the Electronic Flight Bag (EFB) platform—frequently a tablet device—was introduced into the aviation industry and the cockpit as a human-machine interface, it facilitated improvements for both pilots and the broader aviation community. Even so, the interface has encountered operational threats in the early years of EFB utilization.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. One routine problem occurs when a pilot “zooms,” or expands the screen to enlarge a detail, and unknowingly “slides” important information off the screen, making it no longer visible. A second type of problem manifests itself in difficulty operating the EFB in specific flight or lighting conditions. Yet a third wrinkle relates to EFB operation in a particular flight phase.

Let’s look at what happened in an A319 when “zoom” went awry:

Prior to departure, an A319 crew had to manage multiple distractions. An oversight, a technique, and a subtle EFB characteristic all subsequently combined to produce an unrecognized controlled flight toward terrain.

“We received clearance from Billings Ground, ‘Cleared … via the Billings 4 Departure, climb via the SID.’ During takeoff on Runway 10L from Billings, we entered IMC. The Pilot Flying (PF) leveled off at approximately 4,600 feet MSL, heading 098 [degrees]. We received clearance for a turn to the southeast … to join J136. We initiated the turn and then requested a climb from ATC. ATC cleared us up to 15,000 feet. As I was inputting the altitude, we received the GPWS alert, ‘TOO LOW TERRAIN.’ Immediately, the PF went to Take Off/Go Around (TO/GA) Thrust and pitched the nose up. The Pilot Monitoring (PM) confirmed TO/GA Thrust and hit the Speed Brake handle … to ensure the Speed Brakes were stowed. Passing 7,000 feet MSL, the PM announced that the Minimum Sector Altitude (MSA) was 6,500 feet within 10 nautical miles of the Billings VOR. The PF reduced the pitch, then the power, and we began an open climb up to 15,000 feet MSL. The rest of the flight was uneventful.

“On the inbound leg [to Billings], the aircraft had experienced three APU auto shutdowns. This drove the Captain to start working with Maintenance Control. During the turn, after completion of the walkaround, I started referencing multiple checklists … to prepare for the non-normal, first deicing of the year. I then started looking at the standard items. It was during this time that I looked at the BILLINGS 4 Departure, [pages] 10-3 and 10-3-1. There are no altitudes on … page [10-3], so I referenced [page] 10-3-1. On [page] 10-3-1 for the BILLINGS 4 Departure at the bottom, I saw RWY 10L, so I zoomed in to read this line. When I did the zoom, it cut off the bottom of the page, which is the ROUTING. Here it clearly states, ‘Maintain 15,000 or assigned lower.’ I never saw this line. When we briefed prior to push, the departure was briefed as, ‘Heading 098, climb to 4,600 feet MSL’; so neither the PF nor the PM saw the number 15,000 feet MSL. The 45-minute turn was busy with multiple nonstandard events. The weather was not great. However, that is no excuse for missing the 15,000-foot altitude on the SID.”

The award-winning publication and monthly safety newsletter, CALLBACK, from NASA’s Aviation Safety Reporting System, shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When One Good Turn Definitely Doesn’t Deserve Another

July 16th, 2018 by

The electronic flight bag (EFB) is rapidly replacing pilots’ conventional papers in the cockpit. While the EFB has demonstrated improved capability to display aviation information—airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance—NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies, such as this one:

“This B757 Captain received holding instructions during heavy traffic. While manipulating his EFB for clarification, he inadvertently contributed to an incorrect holding entry.

‘[We were] asked to hold at SHAFF intersection due to unexpected traffic saturation. While setting up the FMC and consulting the arrival chart, I expanded the view on my [tablet] to find any depicted hold along the airway at SHAFF intersection. In doing so, I inadvertently moved the actual hold depiction…out of view and [off] the screen.

‘The First Officer and I only recall holding instructions that said to hold northeast of SHAFF, 10-mile legs. I asked the First Officer if he saw any depicted hold, and he said, “No.” We don’t recall instructions to hold as depicted, so not seeing a depicted hold along the airway at SHAFF, we entered a right-hand turn. I had intended to clarify the holding side with ATC, however there was extreme radio congestion and we were very close to SHAFF, so the hold was entered in a right-hand turn.

‘After completing our first 180-degree turn, the controller informed us that the hold at SHAFF was left turns. We said that we would correct our holding side on the next turn. Before we got back to SHAFF for the next turn, we were cleared to [the airport].'”

Volpe National Transportation Systems Center, U.S. Department of Transportation, weighs in on EFBs: “While the promise of EFBs is great, government regulators, potential customers, and industry developers all agree that EFBs raise many human factors considerations that must be handled appropriately in order to realize this promise without adverse effects.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Where Did We Put the Departure Course?

July 2nd, 2018 by

Have you ever encountered a new methodology or product that you deemed the best thing ever, only to discover in a too-close-for-comfort circumstance that what seemed a game changer had a real downside?

In aviation, the Electronic Flight Bag (EFB) is the electronic equivalent to the pilot’s traditional flight bag. It contains electronic data and hosts EFB applications, and it is generally replacing the pilots’ conventional papers in the cockpit. The EFB has demonstrated improved capability to display aviation information such as airport charts, weather, NOTAMs, performance data, flight releases, and weight and balance.

The EFB platform, frequently a tablet device, introduces a relatively new human-machine interface into the cockpit. While the EFB provides many advantages and extensive improvements for the aviation community in general and for pilots specifically, some unexpected operational threats have surfaced during its early years.

NASA’s Aviation Safety Reporting System (ASRS) has received reports that describe various kinds of EFB anomalies. One typical problem occurs when a pilot “zooms,” or expands the screen to enlarge a detail, and unknowingly “slides” important information off the screen, making it no longer visible.

An Airbus A320 crew was given a vector to intercept the course and resume the departure procedure, but the advantage that the EFB provided in one area generated a threat in another.

From the Captain’s Report:

“Air Traffic Control (ATC) cleared us to fly a 030 heading to join the GABRE1 [Departure]. I had never flown this Standard Instrument Departure (SID). I had my [tablet] zoomed in on the Runway 6L/R departure side so I wouldn’t miss the charted headings. This put Seal Beach [VOR] out of view on the [tablet]. I mistakenly asked the First Officer to sequence the Flight Management Guidance Computer (FMGC) between GABRE and FOGEX.”

From the First Officer’s Report:

“During our departure off Runway 6R at LAX [while flying the] GABRE1 Departure, ATC issued, ‘Turn left 030 and join the GABRE1 Departure.’ This was the first time for both pilots performing this SID and the first time departing this runway for the FO. Once instructed to join the departure on the 030 heading, I extended the inbound radial to FOGEX and inserted it into the FMGC. With concurrence from the Captain, I executed it. ATC queried our course and advised us that we were supposed to intercept the Seal Beach VOR 346 radial northbound. Upon review, both pilots had the departure zoomed in on [our tablets] and did not have the Seal Beach [VOR] displayed.”

CALLBACK is the award-winning publication and monthly safety newsletter from NASA’s Aviation Safety Reporting System (ASRS). CALLBACK shares reports, such as the one above, that reveal current issues, incidents, and episodes.

Circumstances can crop up anywhere at any time if proper sequence and procedures are not planned and followed. We encourage you to learn and use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: The Worst U.S. Maritime Accident in Three Decades

May 21st, 2018 by

The U.S.-flagged cargo ship, El Faro, and its crew of 33 men and women sank after sailing into Hurricane Joaquin. What went wrong and why did an experienced sea captain sail his crew and ship directly into the eye of a hurricane? The investigation lasted two years. 

One of two ships owned by TOTE Maritime Inc., the El Faro constantly rotated between Jacksonville, Florida, and San Juan, Puerto Rico, transporting everything from frozen chickens to milk to Mercedes Benzes to the island. The combination roll-on/roll-off and lift-on/lift-off cargo freighter was crewed by U.S. Merchant Marines. Should the El Faro miss a trip, TOTE would lose money, store shelves would be bare, and the Puerto Rican economy would suffer.

The El Faro, a 790-foot, 1970s steamship, set sail at 8:15 p.m. on September 29, 2015, with full knowledge of the National Hurricane Center warning that Tropical Storm Joaquin would likely strengthen to a hurricane within 24 hours.

Despite its modern navigation and weather technology, the aging ship sailed with two boilers in need of service, no life vests or immersion suits, and open lifeboats that could not be launched once the captain gave the order to abandon ship in the midst of a savage hurricane.

As the Category 4 storm bore down on the Bahamas, winds peaking at 140 miles an hour, people and vessels headed for safety. All but one ship. On October 1, 2015, the SS El Faro steamed into the furious storm. Black skies. Thirty- to forty-foot waves. The Bermuda Triangle. Near San Salvador, the sea freighter found itself in the strongest October storm to hit these waters since 1866. Around 7:30 a.m. on October 1, the ship was taking on water and listing 15 degrees, though the last report from the captain indicated that the crew had managed to contain the flooding. Soon after, the freighter ceased all communications. All aboard perished in the worst U.S. maritime disaster in three decades. Investigators from the National Transportation Safety Board (NTSB) were left to wonder why.

When the NTSB launched one of the most thorough investigations in its long history, investigators spoke with dozens of experts, colleagues, friends, and family of the crew. The U.S. Coast Guard, with help from the Air Force, the Air National Guard, and the Navy, searched a 70,000-square-mile area off Crooked Island in the Bahamas, spotting debris, a damaged lifeboat, containers, and traces of oil. On October 31, 2015, the USNS Apache located the El Faro using the CURV 21, a remotely operated deep-ocean vehicle.

Thirty days after the El Faro sank, the ship was found 15,000 feet below sea level. The images of the sunken ship showed a breach in the hull and its main navigation tower missing. 

Then came the crucial discovery: on Tuesday, April 26, 2016, a submersible robot retrieved the ship’s voyage data recorder (VDR) from the bottom, 4,600 meters down. This black box held everything uttered on the ship’s bridge, up to its final moments.

The big challenge was locating the VDR, only about a foot by eight inches. No commercial recorder had ever been recovered this deep where the pressure is nearly 7,000 pounds per square inch.

The 26-hour recording was converted into the longest transcript—510 pages—ever produced by the NTSB. The recorder revealed that at the outset there was absolute certainty among the crew and captain that sailing was the right thing to do. As the situation evolved and conditions deteriorated, the transcript reveals, the captain dismissed a crew member’s suggestion that they return to shore in the face of the storm. “No, no, no. We’re not gonna turn around,” he said. Captain Michael Davidson then said, “What I would like to do is get away from this. Let this do what it does. It certainly warrants a plan of action.” Davidson went below just after 7:57 p.m. and did not return to the bridge until 4:10 a.m. The El Faro and its crew had but three more hours after Davidson reappeared on the bridge; the recording ends at 7:39 a.m., ten minutes after Captain Davidson ordered the crew to abandon ship.

This NTSB graphic shows El Faro’s track line in green as the ship sailed from Jacksonville to Puerto Rico on October 1, 2015. Color-enhanced satellite imagery from close to the time the ship sank illustrates Hurricane Joaquin in red, with the storm’s eye immediately to the south of the accident site.

The NTSB determined that the probable cause of the sinking of El Faro and the subsequent loss of life was the captain’s insufficient action to avoid Hurricane Joaquin, his failure to use the most current weather information, and his late decision to muster the crew. Contributing to the sinking was ineffective bridge resource management on board El Faro, which included the captain’s failure to adequately consider officers’ suggestions. Also contributing to the sinking was the inadequacy of both TOTE’s oversight and its safety management system.

The NTSB’s investigation into the El Faro sinking identified the following safety issues:

  • Captain’s actions
  • Use of noncurrent weather information
  • Late decision to muster the crew
  • Ineffective bridge resource management
  • Company’s safety management system
  • Inadequate company oversight
  • Need for damage control plan
  • Flooding in cargo holds
  • Loss of propulsion
  • Downflooding through ventilation closures
  • Lack of appropriate survival craft

The report also addressed other issues, such as the automatic identification system and the U.S. Coast Guard’s Alternate Compliance Program. On October 1, 2017, the U.S. Coast Guard released findings from its investigation, conducted with the full cooperation of the NTSB. The 199-page report identified causal factors in the loss of the El Faro and its 33 crew members and proposed 31 safety recommendations and four administrative recommendations for future action to the Commandant of the Coast Guard.

Captain Jason Neubauer, Chairman of the El Faro Marine Board of Investigation, U.S. Coast Guard, stated, “The most important thing to remember is that 33 people lost their lives in this tragedy. If adopted, we believe the safety recommendations in our report will improve safety of life at sea.”

Monday Accidents & Lessons Learned: When There Is No Right Side of the Tracks

April 30th, 2018 by

On Tuesday, February 28, 2017, a wall section at the top of a cutting began to collapse above a four-track railway line between the Liverpool Lime Street and Edge Hill stations in Liverpool, England. From approximately 5:30 pm until 6:02 pm, more than 188 tons of debris rained down from the collapsing wall, spilling across all four tracks. Liverpool Lime Street is the city’s main station and one of the busiest in the north of England.

With the rubble downing overhead power lines and damaging infrastructure, all mainline services to and from the station were suspended. The collapse brought trains to a standstill for three hours and forced the evacuation of three trains, two of which were halted in tunnels. Police, fire, and ambulance crews helped evacuate passengers down the tracks. Passengers were also stranded on trains at Lime Street station because of the power outage resulting from the collapse. A passenger en route to Liverpool from Manchester Oxford Road reported chaos at Warrington station as passengers tried to find their way home.

A representative from Network Rail spoke about the incident, “No trains are running in or out of Liverpool Lime station after a section of trackside wall, loaded with concrete and cabins by a third party, collapsed sending rubble across all four lines and taking overhead wires with it. Early indications suggest train service will not resume for several days while extensive clear-up and repairs take place to make the location safe. More precise forecasts on how long the repairs will take will be made after daybreak tomorrow.”

Read more about the incident here.

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: We’re Not Off the Runway Yet

April 16th, 2018 by

NASA’s Aviation Safety Reporting System (ASRS) periodically shares contemporary experiences to build aviation wisdom, spread lessons learned, and encourage a freer flow of incident reporting. ASRS receives, processes, and analyzes voluntarily submitted reports from pilots, air traffic controllers, flight attendants, maintenance personnel, dispatchers, ground personnel, and others regarding actual or potential hazards to safe aviation operations.

We acknowledge that the element of surprise, or the unexpected, can upend even the best flight plan. But sometimes what is perceived as an anomaly pales in comparison to a subsequent occurrence. This was the case when an Air Taxi Captain went the extra mile to clear his wingtips while taxiing for takeoff. Just as he thought any threat was mitigated, boom! Let’s listen in on his account:

“Taxiing out for the first flight out of ZZZ, weed whacking was taking place on the south side of the taxiway. Watching to make sure my wing cleared two men mowing [around] a taxi light, I looked forward to continue the taxi. An instant later I heard a ‘thump.’ I then pulled off the taxiway onto the inner ramp area and shut down, assuming I’d hit one of the dogs that run around the airport grounds on a regular basis. I was shocked to find a man, face down, on the side of the taxiway. His coworkers surrounded him and helped him to his feet. He was standing erect and steady. He knew his name and the date. Apparently [he was] not injured badly. I attended to my two revenue passengers and returned the aircraft to the main ramp. I secured the aircraft and called [the Operations Center]. An ambulance was summoned for the injured worker. Our ramp agent was a non-revenue passenger on the flight and took pictures of the scene. He stated that none of the workers was wearing a high visibility vest, which I also observed. They seldom have in the past.

“This has been a recurring problem at ZZZ since I first came here. The operation is never [published in the] NOTAMs [for] an uncontrolled airfield. The pilots just have to see and avoid people and animals at all times. I don’t think the person that collided with my wingtip was one of the men I was watching. I think he must have been stooped down in the grass. The only option to [improve the] safety of the situation would be to stop completely until, hopefully, the workers moved well clear of the taxiway. This is one of…many operational deficiencies that we, the pilots, have to deal with at ZZZ on a daily basis.”

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When Retrofitting Does Not Evaluate Risks

April 9th, 2018 by

Bound for London Waterloo, the 2G44 train was about to depart platform 2 at Guildford station. Suddenly, at 2:37 pm, July 7, 2017, an explosion occurred in the train’s underframe equipment case, ejecting debris onto station platforms and into a nearby parking lot. Fortunately, there were no injuries to passengers or staff; damage was contained to the train and station furnishings. It could have been much worse.

The cause of the explosion was an accumulation of flammable gases within the traction equipment case underneath one of the train’s coaches. The gases were generated after the failure of a large electrical capacitor inside the equipment case; the capacitor failure was due to a manufacturing defect.

The train had recently been retrofitted with a modern version of the traction equipment, and the replacement equipment included the failed capacitor. The project team overseeing the design and installation of the new equipment did not consider the risk of an explosion caused by a manufacturing defect within the capacitor. As a result, there were no preventive engineering safeguards.

The Rail Accident Investigation Branch (RAIB) has recommended a review of the design of UK trains’ electric traction systems to ensure adequate safeguards are in place to offset any identified anomalies and to prevent similar explosions. Learn about the six learning points recommended by the RAIB for this investigation.

Use the TapRooT® System to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: Does What You See Match What Is Happening?

March 26th, 2018 by

An incident report from NASA’s Aviation Safety Reporting System (ASRS) gives insight into a pilot’s recurring, puzzling observation. Distracted and confused, a Bonanza pilot perceived the runway edge and centerline lights to be cycling off and on. Air Traffic Control (ATC) let him know that the centerline lights were steady, not blinking.

The pilot summarized his experience, “I was transiting the final approach path of . . . Runway 16R and observed the runway edge and centerline lights cycle on and off . . . at a rate of approximately 1 per second. It was very similar to the rate of a blinking traffic light at a 4-way vehicle stop. The [3-blade] propeller speed was 2,400 RPM. This was observed through the entire front windscreen and at least part of the pilot side window. I queried ATC about the reason for the runway lights blinking and was told that they were not blinking. It was not immediately obvious what was causing this, but I did later speculate that it may have been caused by looking through the propeller arc.

“The next day [during] IFR training while on the VOR/DME Runway 16R approach, we observed the runway edge and centerline lights cycle on and off . . . at a rate slightly faster than 1 per second. The propeller speed was 2,500 RPM. I then varied the propeller speed and found that, at 2,700 RPM, the lights were observed strobing at a fairly high rate and, at 2,000 RPM, the blinking rate slowed to less than once per second. This was observed through the entire approach that terminated at the Missed Approach Point (MAP). The flight instructor was also surprised and mentioned that he had not seen this before, but also he doesn’t spend much time behind a 3-blade propeller arc.

“I would speculate that the Pulse Width Modulation (PWM) dimming system of the LED runway lights was phasing with my propeller, causing the observed effect. I would also speculate that the effect would . . . significantly differ at other LED dimming settings . . . and behind a 2-blade propeller.

“I found the effect to be entirely confusing and distracting and would not want to make a landing in such conditions.”
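The pilot’s speculation describes a classic aliasing effect: the spinning blades briefly “sample” the light, so a PWM-dimmed LED can appear to blink at the difference between its PWM frequency and the nearest multiple of the blade-pass frequency. A minimal first-order sketch follows; the 119 Hz PWM rate is a hypothetical value invented for illustration (the report does not give the fixtures’ actual settings), and the model ignores duty cycle, harmonics, and blade geometry.

```python
def blade_pass_hz(rpm: float, blades: int = 3) -> float:
    """Rate at which propeller blades cross the line of sight."""
    return blades * rpm / 60.0

def apparent_blink_hz(rpm: float, pwm_hz: float, blades: int = 3) -> float:
    """Alias frequency of a PWM-dimmed light viewed through the prop arc.

    First-order model: the perceived flicker is the distance from the PWM
    frequency to the nearest multiple of the blade-pass frequency.
    """
    f_blade = blade_pass_hz(rpm, blades)
    nearest_multiple = max(1, round(pwm_hz / f_blade)) * f_blade
    return abs(pwm_hz - nearest_multiple)

# A 3-blade prop at 2,400 RPM gives 120 blade passes per second, so a
# hypothetical 119 Hz PWM rate would alias to a ~1 Hz apparent blink.
print(apparent_blink_hz(2400, 119.0))  # -> 1.0
```

Varying the RPM in this toy model changes the alias rate, which is consistent with the pilot’s observation that the blink rate tracked propeller speed; the actual frequencies would depend on the fixture’s LED dimming settings.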

The TapRooT® System, Training, and Software have a dedicated history of R&D, human performance, and improvement. Learn with our best incident investigation and root cause analysis systems.

Monday Accidents & Lessons Learned: When a Disruption Potentially Saves Lives

March 12th, 2018 by

Early news of an incident often does not convey the complexity behind it. Granted, many facts are not initially available. On Tuesday, January 24, 2017, a Network Rail freight train derailed in southeast London between Lewisham and Hither Green just before 6:00 am, with the rear two wagons of the one-kilometer-long train off the tracks. Soon after, the Southeastern network sent a tweet to report the accident, alerting passengers that, “All services through the area will be disrupted, with some services suspended.” Then came the advice, “Disruption is expected to last all day. Please make sure you check before travelling.” While Southeastern passengers were venting their frustrations on Twitter, a team of engineers was at the site by 6:15 am, according to Network Rail. At the scene, the engineers observed that no passengers were aboard and that no one was injured. They also noted a damaged track and the spillage of a payload of sand.

The newly laid track at Courthill Loop South Junction was constructed of separate panels of switch and crossing track, with most of the panels arriving at the site preassembled. Bearer ties, or mechanical connectors, joined the rail supports. The February 2018 report from the Rail Accident Investigation Branch (RAIB), including five recommendations, noted that follow-up engineering work took place the weekend after the new track was laid, and the derailment occurred the next day. Further inspection found the incident to be caused by a significant track twist, along with other contributing factors. Repairs disrupted commuters for days as engineers worked round the clock to rebuild a 50-meter stretch of railway and used cranes to lift the overturned wagons. Now factor in the time, business, and resources saved—in addition to lives that are often spared—when TapRooT® advanced root cause analysis is used to proactively reach solutions.

Monday Accidents & Lessons Learned: Three Killed, Dozens Injured on Italian Trenord-Operated Train

February 5th, 2018 by

Packed with 250 commuters and heading to Milan’s Porta Garibaldi station, an Italian Trenord-operated train derailed on January 25, 2018, killing three people and seriously injuring dozens. The train was said to have been traveling at normal speed but was described by witnesses as “trembling for a few minutes before the accident.” A collapse of the track is under investigation.

Why is early information-gathering important?

Monday Accidents & Lessons Learned: Sandwiched in a Singapore Chain Collision

December 25th, 2017 by

In Singapore, a car was crushed between two trailers after a passenger bus hit the trailer behind it, causing a chain collision that left 26 people injured. Read more here.

Are you interested in improving human performance? Try this four step plan!

December 19th, 2017 by

Is discipline the main way you “fix” human error problems?

Are you frustrated because people make the same kind of mistakes over and over again?

Have you tried “standard” techniques for improving human performance and found that they just don’t get the job done long term (they have an impact short term but not long term)?

Is management grasping for solutions to human error issues?

Would you like to learn best practices from industry human performance experts?

Try this four step plan:

1. Attend a 5-Day TapRooT® Advanced Root Cause Analysis Course.

The TapRooT® System is designed to help you solve human performance issues both reactively and proactively. It has built-in human factors expert systems that guide you to the root causes of human errors and help you develop effective fixes. The 5-Day TapRooT® Course is the best way to learn the system and get started fixing human performance issues.

See the upcoming course schedule here: http://www.taproot.com/store/5-Day-Courses/

2. Attend the Understanding and Stopping Human Error Course

At this two-day class, Dr. Joel Haight, a human factors and safety improvement expert and industrial engineering professor at the University of Pittsburgh (where he is the Director of the Safety Engineering Program), shares the reasons why people make mistakes and what you can do to understand the problems and fix them.

Joel is an expert TapRooT® User with extensive experience applying TapRooT® to fix human factors problems at a Chevron refinery and in the oil fields of Kazakhstan. He is also an expert in applying other human performance analysis and improvement techniques. He brings this knowledge to the 2-Day Understanding and Stopping Human Error Course.

It is best if you have already attended at least a 2-Day TapRooT® Course prior to attending this course. See the course description here: http://www.taproot.com/taproot-summit/pre-summit-courses#HumanError

3. Attend the Human Factors Track at the 2018 Global TapRooT® Summit

Once a year we put together a track at the Global TapRooT® Summit designed to share best practices and the latest state-of-the-art techniques to improve human performance. That’s what you get in the Human Factors Track at the Summit. What are the sessions at the 2018 Global TapRooT® Summit?

  • TapRooT® Users Share Best Practices – This is a workshop designed to promote the sharing of investigation, root cause analysis, and human performance best practices among TapRooT® Users from around the world. Every year I attend this session and get new ideas that I share with others to help improve performance. Many say this is the best session at the Summit because they get such great ideas and develop new, helpful contacts from many different industries.
  • Top 5 Reasons for Human Error and How to Stop Them – Mark Paradies, President of System Improvements and a human factors expert, shares his deep knowledge of the top five reasons that he sees for people making “human errors.” For each of these he shares his best ideas to stop the problems in their tracks.
  • Stop Daily Goofs for Good – Kevin McManus, a TapRooT® Instructor and performance improvement expert, shares systematic improvement ideas to prevent human error and improve cognitive ergonomics on the job.
  • Using Wearables to Minimize Daily Human Errors – Using “wearables” is a technological approach to error prevention. Find out more about how it is being used and may be applied even more effectively in the future.
  • Alarm Management, Signal Detection Theory, and Decision Making – Are people at your facility overwhelmed by alarms? Do they become complacent because of nuisance alarms? Dr. Joel Haight, Director of the University of Pittsburgh Safety Engineering Program, will discuss control system decisions, decision execution, alarm management, signal detection theory, and decision making theory, and how each could be critical in an emergency situation.
  • The Psychology of Failing Fixes – Why do your fixes fail to prevent human error? That’s what this session is all about!
  • What is a Trend and How Can You Find Trends in the TapRooT® Data? – Looking for trends in human error data is an important activity to identify generic human factors problems and take the first step toward major human performance improvements. Now for the bad news: most people really don’t understand trending. Find out what you need to know and how to put trending to work in your improvement program.
  • Performance Improvement Gap Analysis – This is the session where you put everything together. Where does your program have holes? How can you apply what you have learned to fill those holes? What are others doing to solve similar problems? Put your plan together so you are ready to hit the ground running and make improvements happen when you get back to work.

And the Best Practice Sessions outlined above are only a start. You will also see five great Keynote Speakers:


Mike Williams will share his experience surviving the Deepwater Horizon explosion.


Dr. Carol Gunn will share the story of her sister’s unnecessary death in a hospital and her work on patient safety improvement.


Inky Johnson will share his experience with a debilitating football injury and how it changed his life and helps him inspire excellence in others.


Mark Paradies will help you get the most out of your application of TapRooT®.


Vincent Ivan Phipps will teach you to amplify your leadership skills and communication ability.

We know that the Summit will provide you with new ideas and the inspiration to implement them.


4. Get started! Analyze your human performance issues and make improvements happen!

Just Do It! Get back to work and implement what you have learned. Need more help? We can provide training at your site to get more people trained in using TapRooT® so that you have help making change happen.

Don’t wait! Get your four step plan started! Register for the courses and Summit today!

My 20+ Year Relationship with 5-Why’s

December 11th, 2017 by

I first heard of 5-Why’s over 20 years ago when I got my first job in Quality. I had no experience of any kind; I got the job because I worked with the Quality Manager’s wife in another department and she told him I was a good guy. True story…but that’s how things worked back then!

When I was first exposed to the 5-Why concept, it did not really make any sense to me; I could not understand how it actually could work, as it seemed like the only thing it revealed was the obvious. So, if it is obvious, why do I need it? That is a pretty good question from someone who did not know much at the time.

I dived into Quality and got all the certifications, went to all the classes and conferences, and helped my company build an industry leading program from the ground up. A recurring concept in the study and materials I was exposed to was 5-Why. I learned the “correct” way to do it. Now I understood it, but I still never thought it was a good way to find root causes.

I transferred to another division of the company to run their safety program. I did not know how to run a safety program – I did know all the rules, as I had been auditing them for years, but I really did not know how to run the program. But I did know quality, and those concepts helped me instill an improvement mindset in the leaders which we successfully applied to safety.

The first thing I did when I took the job was to look at the safety policies and procedures, and there it was: when you have an incident, “ask Why 5 times” to get your root cause! That was the extent of the guidance. So whatever random thought was your fifth Why would be the root cause on the report! The people using it had absolutely no idea how the concept worked or how to do it, and my review of old reports validated this. Since then I have realized this is a common theme with 5-Why’s: there is very wide variation in the way it is used. I don’t believe it works particularly well even when used correctly, but in my experience it usually isn’t.

Since retiring from my career and coming to work with TapRooT®, I’ve had literally hundreds of conversations with colleagues, clients, and potential clients about 5-Why’s. I used to be somewhat soft when criticizing 5-Why’s and just try to help people understand why TapRooT® gets better results. Recently, I’ve started to take a more militant approach. Why? Because most of the people I talk to already know that 5-Why’s does not work well, but they still use it anyway (easier/cheaper/quicker)!

So it is time to take the gloves off; let’s not dance around this any longer. To quote Mark Paradies:
“5-Why’s is Root Cause Malpractice!”

To those who are still dug in and take offense, I do apologize! I can only share my experience.

For more information, here are some previous blog articles:

What’s Wrong With Cause-and-Effect, 5-Why’s, & Fault Trees

Comparing TapRooT® to Other Root Cause Tools

What’s Fundamentally Wrong with 5-Whys?

The 7 Secrets of Root Cause Analysis – Video

December 12th, 2016 by

Hello everyone,

Here is a video that discusses some root cause tips, common problems with root cause analysis, and how TapRooT® can help. I hope you enjoy!

Like what you see? Why not join us at the next course? You can see the schedule and enroll HERE
