
Monday Accident & Lessons Learned: Five Items To Remember When Using a Checklist

August 15th, 2016

Another great article from NASA’s Aviation Safety Reporting System that applies to a much broader audience than just aviation folks. If you use checklists, there’s something to be learned from this article. Click on the picture of the article below to read more…

[Screenshot of the ASRS CALLBACK article]

Monday Accident & Lessons Learned: Simple Human Error – Safeguards – Weight and Balance Issues

August 8th, 2016

The errors reported in this Aviation Safety Reporting System “Call Back” article are simple but serious. If you load a plane wrong, it could crash on takeoff. Click on the picture of the article below to read the whole report.

[Screenshot of the ASRS CALLBACK article]

These simple errors seem like they are just an aviation problem. But are there simple errors that your people could make that could cause serious safety, quality, or production issues? Maybe a Safeguard Analysis is in order to see if the only Safeguard you are relying on could fail due to a simple human error.

Monday Accident & Lessons Learned: Is the Information Collected as Part of an Accident Investigation “Privileged”? – Canadian Court Rules

August 1st, 2016

The Occupational Health and Safety Act (“OHS Act”) in Canada requires an employer to conduct an investigation and prepare a report following an accident in the workplace. But the Alberta Court of Queen’s Bench ruled that the obligation does not “foreclose or preclude” the employer’s ability to claim privilege over information collected during an internal investigation into the incident.

Want to learn more? See the article about the Alberta v Suncor Energy case at:

http://www.mondaq.com/canada/x/495040/Health+Safety/Court+Of+Queens+Bench+Confirms+That+Privilege+May+Apply+To+Workplace+Accident+Investigation

Spreadsheet for the “Grade Your Investigation” Session

July 28th, 2016

Download this Excel® Spreadsheet to your laptop or other device to participate in the exercise in the “Grade Your Investigation” Best Practice Session at the 2016 Global TapRooT® Summit:

RateRootCauseAnalysisSummit2016.xlsx

Monday Accident & Lessons Learned: Human Error That Should Not Occur

July 25th, 2016

This post shares a “Call Back” report from the Aviation Safety Reporting System that is applicable far beyond aviation.

In this case, the pilot was fatigued and just wanted to “get home.” He had a “finish the mission” focus that could have cost him his life. Here’s an excerpt:

I saw nothing of the runway environment…. I had made no mental accommodation to do a missed approach as I just knew that my skills would allow me to land as they had so many times in past years. The only conscious control input that I can recall is leveling at the MDA [Rather than continuing to the DA? –Ed.] while continuing to focus outside the cockpit for the runway environment. It just had to be there! I do not consciously remember looking at the flight instruments as I began…an uncontrolled, unconscious 90-degree turn to the left, still looking for the runway environment.

To read about this near-miss and the lessons learned, see:

http://asrs.arc.nasa.gov/docs/cb/cb_436.pdf

[Screenshot of the ASRS CALLBACK report]

Monday Accident & Lessons Learned: Collision between a tram and a pedestrian, Manchester

June 20th, 2016

[Screenshot of the RAIB report]

The UK Rail Accident Investigation Branch published a report about a tram hitting a pedestrian in Manchester, UK.

A summary of the report says:

At about 11:13 hrs on Tuesday 12 May 2015, a tram collided with and seriously injured a pedestrian, shortly after leaving Market Street tram stop in central Manchester. The pedestrian had just alighted from the tram and was walking along the track towards Piccadilly.

The accident occurred because the pedestrian did not move out of the path of the tram and because the driver did not apply the tram’s brakes in time to avoid striking the pedestrian.

As a result of this accident, RAIB has made three recommendations. One is made to Metrolink RATP Dev Ltd in conjunction with Transport for Greater Manchester, to review the assessment of risk from tram operations throughout the pedestrianised area in the vicinity of Piccadilly Gardens.

A second is made to UK Tram, to make explicit provision for the assessment of risk, in areas where trams and pedestrians/cyclists share the same space, in its guidance for the design and operation of urban tramways.

A further recommendation is made to Metrolink RATP Dev Ltd, to improve its care of staff involved in an accident.

For the complete report, see:

https://assets.digital.cabinet-office.gov.uk/media/5705107640f0b6038500004d/R062016_160412_Market_Street.pdf

 

COMPLETE SERIES – Admiral Rickover: Stopping the Normalization of Deviation with the Normalization of Excellence

April 14th, 2016


You may have dropped in on this series of articles somewhere in the middle. Here are links to each article with a quick summary…

1. There is No Such Thing as the Normalization of Deviation

The point of this article is that deviation IS NORMAL. Management must do something SPECIAL to make deviation abnormal.

2. Stop Normalization of Deviation with Normalization of Excellence

A brief history of how Admiral Rickover created the first high-performance organization. The Nuclear Navy has a history of over 50 years of operating hundreds of reactors with ZERO process safety (nuclear safety) accidents. He stopped the normalization of deviation with the NORMALIZATION OF EXCELLENCE. Excellence was the only standard that he would tolerate.

3. Normalization of Excellence – The Rickover Legacy – Technical Competency

This article describes the first of Rickover’s three keys to process safety: TECHNICAL COMPETENCE. The big difference here is this isn’t just competence for operators or supervisors. Rickover required technical competence all the way to the CEO.

4. Normalization of Excellence – The Rickover Legacy – Responsibility

The second key to process safety excellence (the normalization of excellence) – RESPONSIBILITY.

Do you think you know what responsibility means? See what Rickover expected from himself, his staff, and everyone responsible for nuclear safety.

5. Normalization of Excellence – The Rickover Legacy – Facing the Facts

FACING THE FACTS is probably the most important of Rickover’s keys to achieving excellence. 

Read examples from the Nuclear Navy and think about what your management does when there is a difficult decision to make.

6. Normalization of Excellence – The Rickover Legacy – 18 Other Elements of Rickover’s Approach to Process Safety

Here are the other 18 elements that Rickover said were essential (in addition to the first three keys).

That’s right, the keys are the start but you must do all of these 18 well.

7. Statement of Admiral Rickover in front of the Subcommittee on Energy Research and Production of the Committee on Science and Technology of the US House of Representatives – May 24, 1979

Here is Rickover’s own writing on what makes the Nuclear Navy special and what, to this day (over 35 years after Rickover retired), keeps the reactor safety record spotless.

That’s it. The whole series. I’m thinking about writing about some recent process safety related accidents and showing how management failed to follow Rickover’s guidance and how this led to poor process safety performance. Would you be interested in reading about bad examples?

Normalization of Excellence – The Rickover Legacy – 18 Other Elements of Rickover’s Approach to Process Safety

March 31st, 2016


The previous three articles discussed Rickover’s “key elements” to achieving safety in the Navy’s nuclear program. They are:

  1. Technical Competence
  2. Total Responsibility
  3. Facing the Facts

In addition to these three keys that Rickover testified to Congress about, he had 18 other elements that he said were also indispensable. I won’t describe them in detail, but I will list them here:

  1. Conservatism of Design
  2. Robust Systems (design to avoid accidents and emergency system activation)
  3. Redundancy of Equipment (to avoid shutdowns and emergency actions)
  4. Inherently Stable Plant
  5. Full Testing of Plant (prior to operation)
  6. Detailed Prevent/Predictive Maintenance Schedules Strictly Adhered To
  7. Detailed Operating Procedures Developed by Operators, Improved with Experience, and Approved by Technical Experts
  8. Formal Design Documentation and Management of Change
  9. Strict Control of Vendor Provided Equipment (QA Inspections)
  10. Formal Reporting of Incidents and Sharing of Operational Experience
  11. Frequent Detailed Audits/Inspections by Independent, Highly Trained/Experienced Personnel that Report to Top Management
  12. Independent Safety Review by Government Authorities
  13. Personal Selection of Leaders (looking for exceptional technical knowledge and good judgment)
  14. One Year of Specialized Technical Training/Hands-On Experience Prior to 1st Assignment
  15. Advanced Training for Higher Leadership Positions
  16. Extensive Continuing Training and Requalification for All Personnel
  17. Strict Enforcement of Standards & Disqualification for Violations
  18. Frequent Internal Self-Assessments

Would you like to review what Rickover had to say about them? See his testimony here:

Rickover Testimony

Now, after this description of the excellence of Rickover’s program, you might think there was nothing left to improve. However, I think the program had three key weaknesses. They are:

  1. Blame Orientation (Lack of Praise)
  2. Fatigue
  3. Need for Advanced Root Cause Analysis

Let me talk about each briefly.

BLAME ORIENTATION

The dark side of a high degree of responsibility was a tendency to blame the individual when something went wrong. Also, success wasn’t celebrated; it was expected. The result was burnout and attitude problems. This led to a fairly high turnover rate among the junior leaders and enlisted sailors.

FATIGUE

Want to work long hours? Join the Nuclear Navy! Eighteen-hour days, seven days a week, were normal when at sea. In port, three-section duty (a 24-hour day every third day) was normal. This meant that you NEVER got a full weekend. Many errors were made due to fatigue. I remember a sailor who was almost killed performing electrical work because of actions that just didn’t make sense. He had no explanation for his errors (there were multiple), and he knew better because he was the person who trained everyone else. But he had been working over 45 days straight with a minimum of 12 hours per day. Was he fatigued? It never showed up in the incident investigation.

ADVANCED ROOT CAUSE ANALYSIS

Root Cause Analysis in the Nuclear Navy is basic. Assign smart people and they will find good “permanent fixes” to problems. And this works … sometimes. The problem? The Nuke Navy doesn’t train sailors and officers how to investigate human errors. That’s where advanced root cause analysis comes in. TapRooT® has an expert system that helps people find the root causes of human error and produce fixes that stop the problems. Whenever I hire a Navy Nuke to work at System Improvements, they always tell me they already know about root cause analysis because they did that “on the boat.” But when they take one of our courses, they realize how much they still had to learn.

Read Part 7:  Statement of Admiral Rickover in front of the Subcommittee on Energy Research and Production of the Committee on Science and Technology of the US House of Representatives – May 24, 1979

If you would like to learn more about advanced root cause analysis, see our course offerings:

COURSES

And sign up for our weekly newsletter:

NEWSLETTER

The EPA’s Revision to the Risk Management Plan Regulation is Open for Comments

March 30th, 2016

The modifications have been published in the Federal Register. See:

https://www.federalregister.gov/articles/2016/03/14/2016-05191/accidental-release-prevention-requirements-risk-management-programs-under-the-clean-air-act

To see the previous article about the modifications and their impact on root cause analysis, see:

http://www.taproot.com/archives/53634

Hurry if you want to submit comments. The register says:

“Comments: Comments and additional material must be received on or before May 13, 2016. Under the Paperwork Reduction Act (PRA), comments on the information collection provisions are best assured of consideration if the Office of Management and Budget (OMB) receives a copy of your comments on or before April 13, 2016. Public Hearing. The EPA will hold a public hearing on this proposed rule on March 29, 2016 in Washington, DC.”

April 13, 2016, isn’t far away!

For comment information, see:

https://www.regulations.gov/#!documentDetail;D=EPA-HQ-OEM-2015-0725-0001

To add your comment, see:

https://www.regulations.gov/#!submitComment;D=EPA-HQ-OEM-2015-0725-0001

Normalization of Excellence – The Rickover Legacy – Facing the Facts

March 24th, 2016

In the past two weeks we’ve discussed two of the “essential” (Rickover’s word) elements for process safety excellence …

Technical Competence

Responsibility

This week we will discuss the third, and perhaps most important, essential element – FACING THE FACTS.

What is facing the facts? Here are some excerpts of how Rickover described it:

“… To resist the human inclination to hope that things will work out, despite evidence or suspicions to the contrary. If conditions require it, you must face the facts and brutally make needed changes despite significant costs and schedule delays. … The person in charge must personally set the example in this area and require his subordinates to do likewise.”

Let me give two examples from Rickover’s days of leading the Navy Nuclear Power Program that illustrate what he meant (and how he lived out this essential element).

Many people reading this probably do not remember the Cold War or the Space Race with the USSR. But there was a heated competition of national importance in the area of technology during the Cold War. This technology race extended to the development of nuclear power to power ships and submarines.

In 1947, Rickover proposed to the Chief of Naval Operations that he would develop nuclear power for submarine propulsion. The technical hurdles were impressive. Developing the first nuclear-powered ship was probably more difficult than the moon shot that happened two decades later. Remember, there were no computers. Slide rules were used for calculations. New metals had to be created. New physics had to be worked out, and new radiation protection practices had to be developed. Nonetheless, Rickover decided that before he built the first nuclear-powered submarine, he would build a working prototype exactly like the plant proposed for the actual submarine. It would be built inside a hull and surrounded by a water tank to absorb radiation from the reactor.

NewImage

This first submarine reactor was built near Idaho Falls, Idaho, and went critical for the first time in March of 1953. Just imagine trying to do something like that today. From concept to critical operations in just 6 years!

The prototype was then operated to get experience with the new technology and to train the initial crew of the first submarine, the USS Nautilus. The construction of the ship started in 1952 before the prototype was completed. Therefore, much of the construction of the Nautilus was complete before appreciable experience could be gained with the prototype. Part of the reason for this was that the lessons learned from the construction of the prototype allowed the Nautilus construction to progress much faster than was possible for the prototype.

However, during the operation of the prototype it was found that some of the piping used for the non-reactor part of the steam plant was improper. It was eroding much faster than expected. This could eventually lead to a hazardous release of non-radioactive steam into the engineering space – a serious personnel hazard.

This news was bad enough, but the problem also had an impact on the Nautilus. There was no non-destructive test that could be performed to determine if the right quality piping had been used in the construction of the submarine. Some said, “Go ahead with construction. We can change out the piping of the Nautilus after the first period underway and still beat the Russians to sea on nuclear power.”

Rickover wouldn’t hear of it. He insisted that the right way to do it was to replace the piping even though it meant a significant schedule delay. Accepting the possibility of poor quality steel would be sending the wrong message … the message that taking shortcuts with safety was OK. Therefore, he insisted that all the steam piping be replaced with steel of known quality before the initial criticality of the reactor. He set the standard for facing the facts. (By the way, they still beat the Russians to sea and won the race.)

NewImage

The second example is perhaps even more astounding. Since no civilian or Navy power plants existed, there were no standards for how much occupational exposure a nuclear technician could receive. In addition, submarines were made for war and some proposed that any civilian radiation allowance should be relaxed for military men because they were sailors who must take additional risks. (Remember, we were doing above ground nuclear weapons testing in the US and marching troops to ground zero after the blast during this period.)

Rickover’s staff argued that a standard slightly higher than the one being developed for civilian workers would be OK for sailors. This higher standard would save considerable shielding weight and would result in a faster, more capable submarine. Rickover would hear nothing of it. He insisted that the shielding be built so that the projected radiation dose received by any sailor from the reactor during operation be no higher than that experienced by the general public. (Everyone receives a certain amount of exposure from solar radiation, dental and medical X-rays, and background radiation from naturally occurring radionuclides.) That was the design standard he set.

Many years later, it was noticed that Russian submarine crews were given time off after their deployments to relax in Black Sea resorts. Some thought this was just a reward for the highly skilled sailors. However, it was later discovered that this time off was required to allow the sailors time for their bone marrow to regenerate after damage due to high levels of radiation. The Russians had not used extra shielding, and hence their sailors got significant radiation doses. Perhaps that’s why Russian nuclear submarine duty got the nickname of “babyless duty.”

There was no similar problem for US Nuclear Power Program personnel. Rickover made sure that the facts were faced early in the design process and no adverse health effects were experienced by US submarine sailors.

Let’s compare Rickover’s facing the facts to industrial practices. What happens when a refinery experiences problems and faces a shutdown? Does management “face the facts” and accept the downtime to make sure that everything is safe? Or do they try to apply bandages while the process is kept running to avoid losing valuable production? What would be the right answer to “face the facts” and achieve process safety excellence? Have we seen major accidents caused by just this kind of failure to face the facts? You betcha!

That’s it, the first three (and most important) of Rickover’s essential elements for process safety excellence. But that isn’t all. In his testimony to Congress he outlined additional specific elements that completed his reactor safety management system.

Read Part 6: Normalization of Excellence – The Rickover Legacy – 18 Other Elements of Rickover’s Approach to Process Safety

Monday Accident & Lessons Learned: Dust Explosion Prevention

January 4th, 2016

[Screenshot of the Significant Incident Report]

See the link below to the pdf of the Dangerous Goods Safety Significant Incident Report Number 01-15 from the Government of Western Australia Department of Mines and Petroleum.

DGS_SIR_0115.pdf

Stop the Sacrifices

July 30th, 2015


Over a decade ago, I wrote this article to make a point about stopping construction fatalities. I’ve reposted it because it is missing from the archives. Does it still apply today? Perhaps it applies in many other industries as well. Let me know by leaving a comment.

StopSacrifices.pdf (click to open the pdf)

Monday Accident & Lessons Learned: Lessons Learned from 5 Chemical Accidents

July 6th, 2015

Lessons learned from five accidents reported by EU and OECD countries. See:

http://ec.europa.eu/environment/seveso/pdf/mahb_bulletin2.pdf

Read insights on lessons learned from accidents reported in the European Major Accident Reporting System (eMARS) and other accident sources.

47 accidents in eMARS involving contractor safety issues in the chemical or petrochemical industries were examined. Five accidents were chosen on the basis that a contract worker was killed or injured or was involved in the accident.

What do you think? Leave your comments below.

Root Cause Tip: Audit Your Investigation System (A Best of The Root Cause Network™ Newsletter Reprint)

November 26th, 2014

AUDIT YOUR INVESTIGATION SYSTEM

AUDIT TO IMPROVE

We have all heard the saying:

“If it ain’t broke, don’t fix it.”

Tom Peters changed that saying to:

“If it ain’t broke, you aren’t looking hard enough.”

We can’t improve if we don’t do something different. In the “Just Do It” society of the 1990s, if you weren’t improving, you were falling behind. And the pace of improvement has continued to leap forward in the new millennium.

Sometimes we overlook the need to improve in the places where we need to improve the most. One example is our improvement systems. When was the last time you made a comprehensive effort to improve your incident investigations and root cause analysis?

Improvement usually starts by having a clear understanding of where you are. That means you must assess (inspect) your current implementation of your incident investigation system. The audit needs to establish where you are and what areas are in need of improvement.

AREAS TO AUDIT

If we agree that auditing is important to establish where we are before we start to improve, the question then is:

What should we audit?

To answer that question, you need to know what makes an incident investigation system work and then decide how you will audit the important factors. 

The first research I would suggest is Chapter 6 of the TapRooT® Book (© 2008). This will give you plenty of ideas of what makes an incident investigation system successful.


Next, I would suggest reviewing Appendix A of the TapRooT® Book. Pay special attention to the sample investigation policy and use it as a reference to compare to your company’s policy.

Next, review Appendix C. It provides 16 topics (33 suggestions) to improve your incident investigation and root cause analysis system. The final suggestion is The Good, The Bad, and The Ugly rating sheet to rate your investigation and root cause analysis system. You can download a copy of an Excel spreadsheet of this rating system at:

http://www.taproot.com/archives/46359

Next, review the requirements of your regulator in your country. These will often be “minimum” requirements (for example, the requirements of OSHA’s Process Safety Management regulation). But you obviously should be meeting the government-required minimums.

Also, you may have access to your regulator’s audit guidance. For example, OSHA provides the following guidance for Process Safety Management incident investigations:

12. Investigation of Incidents. Incident investigation is the process of identifying the underlying causes of incidents and implementing steps to prevent similar events from occurring. The intent of an incident investigation is for employers to learn from past experiences and thus avoid repeating past mistakes. The incidents for which OSHA expects employers to become aware and to investigate are the types of events which result in or could reasonably have resulted in a catastrophic release. Some of the events are sometimes referred to as “near misses,” meaning that a serious consequence did not occur, but could have.

Employers need to develop in-house capability to investigate incidents that occur in their facilities. A team needs to be assembled by the employer and trained in the techniques of investigation including how to conduct interviews of witnesses, needed documentation and report writing. A multi-disciplinary team is better able to gather the facts of the event and to analyze them and develop plausible scenarios as to what happened, and why. Team members should be selected on the basis of their training, knowledge and ability to contribute to a team effort to fully investigate the incident. Employees in the process area where the incident occurred should be consulted, interviewed or made a member of the team. Their knowledge of the events form a significant set of facts about the incident which occurred. The report, its findings and recommendations are to be shared with those who can benefit from the information. The cooperation of employees is essential to an effective incident investigation. The focus of the investigation should be to obtain facts, and not to place blame. The team and the investigation process should clearly deal with all involved individuals in a fair, open and consistent manner.

Also, OSHA provides more minimum guidance on page 23 of this document:

https://www.osha.gov/Publications/osha3132.pdf

Finally, another place to network and learn best practices to benchmark against your investigation practices is the TapRooT® Summit. Participants praise the new ideas they pick up by networking with some of the “best and brightest” TapRooT® Users from around the world.

Those sources should provide a pretty good checklist for developing your audit protocol.


AUDIT TECHNIQUES (PROTOCOL)

How do you audit the factors that are important to making your incident investigation system work? For each factor, you need to develop an audit strategy and audit protocol.

For example, you might decide that sharing of lessons learned with employees and contractors is a vital part of the investigation process. The first step in developing an audit strategy/protocol would be to answer these questions:

  1. Are there any regulatory requirements for sharing information?
  2. What is required by our company policy?
  3. What good practices should we be considering?

Next, you would have to develop a protocol to verify what is actually happening right now at your company. For example, you might:

  • Do a paper audit of the practices to see if they meet the requirements.
  • Go to the field to verify workers’ knowledge of past best practices that were shared.

Each factor may have different techniques as part of the audit protocol (one way to organize factors and techniques into a working checklist is sketched after the list below). These techniques include:

  • paperwork reviews
  • field observations
  • field interviews
  • worker tests
  • management/supervision interviews
  • training and training records reviews
  • statistical reviews of investigation results
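
Here is a minimal sketch, in Python, of one way to organize audit factors, requirements, and verification techniques into a working checklist. It is purely illustrative: the factor names, requirements, and techniques below are assumptions made up for the example, not an official TapRooT® protocol.

# Illustrative only: each audit factor maps to the requirements it must
# satisfy and the techniques that will be used to verify it in the field.
audit_protocol = {
    "Sharing lessons learned with employees and contractors": {
        "requirements": [
            "regulatory requirements for sharing information",
            "company policy requirements",
            "good practices worth considering",
        ],
        "techniques": [
            "paperwork review of sharing practices",
            "field interviews to verify workers' knowledge",
        ],
    },
    "Investigator training": {
        "requirements": ["company policy on investigator qualification"],
        "techniques": ["training records review", "worker tests"],
    },
}

# Turn the protocol into a simple printed checklist for the auditor.
for factor, plan in audit_protocol.items():
    print(f"FACTOR: {factor}")
    for requirement in plan["requirements"]:
        print(f"  [ ] requirement: {requirement}")
    for technique in plan["techniques"]:
        print(f"  [ ] technique:   {technique}")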

To have a thorough audit, the auditor needs to go beyond paperwork reviews. For example, reading incident investigation reports and trying to judge their quality can only go so far in assessing the real effectiveness of the incident investigation system. This type of assessment is a part of a broader audit, but should not provide the only basis by which the quality of the system is judged.

For example, a statistical review was performed on the root cause data from over 200 incident investigations at a facility. The reviewer found that there were only two Communication Basic Cause Category root causes in all 200 investigations. This seemed too low. In further review it was found that investigators at this facility were not allowed to interview employees. Instead, they provided their questions to the employee’s supervisor who would then provide the answers at a later date. Is it any surprise that the supervisor never reported a miscommunication between the supervisor and the employee? This problem could not be discovered by an investigation paperwork review.
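
A statistical review like this can start with a simple tally. The sketch below (Python, with made-up counts and an arbitrary 25%-of-average threshold, not data from the facility described above) flags cause categories that show up far less often than average and therefore deserve a closer look:

from collections import Counter

# Hypothetical root cause category tallies from about 200 investigations.
# The counts and the threshold are illustration values only.
root_cause_records = (
    ["Procedures"] * 85
    + ["Training"] * 60
    + ["Management System"] * 40
    + ["Human Engineering"] * 25
    + ["Communications"] * 2
)

counts = Counter(root_cause_records)
average = sum(counts.values()) / len(counts)

for category, n in counts.most_common():
    flag = "  <-- suspiciously low; ask why" if n < 0.25 * average else ""
    print(f"{category:20s}{n:5d}{flag}")

A low count by itself proves nothing; finding out why it is low (here, investigators never interviewing employees) is the real audit work.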

Don’t forget, you can use TapRooT® to help develop your audit protocol and find the root causes of audit findings. For example, you can flow chart your investigation process as a Spring SnapCharT® to start developing your audit protocol (see Chapter 5 of the 2008 TapRooT® Book for more ideas).


WHO SHOULD AUDIT & WHEN?

We recommend yearly audits of your improvement system. You shouldn’t expect dramatic improvements every year. Rather, if you have been working on improvement for quite some time, you should expect gradual changes that are more obvious after two or three years. This is more like measuring a glacier’s movement than a dragster’s.

Who should perform these audits?

First, the system’s owner should be doing annual self-assessments. Of course, auditing your own work is difficult. But self-assessments are the foundation of most improvement programs.

Next, at least every three years you should get an outside set of eyes to review your program. This could be a corporate auditor, someone from another site, or an independent (hired) auditor.

System Improvements (the TapRooT® Folks) provides this type of hired audit service (contact us by calling 865-539-2139 or by CLICKING HERE). We bring expertise in TapRooT® and an independent set of eyes. We’ve seen incident investigation systems from around the world in all sorts of industries and have access to the TapRooT® Advisory Board (a committee of industry expert users) that can provide us with unparalleled benchmarking of practices.  

GET STARTED NOW

Audits should be an important part of your continuous improvement program. If you aren’t already doing annual audits, the best time to start is NOW! Don’t wait for poor results (when compared to your peers) that make your efforts look bad. Those who are the best are already auditing their system and making improvements. You will have to run hard just to keep up!

(This post is based on the October 1994 Root Cause Network™ Newsletter, Copyright © 1994. Reprinted/adapted by permission. Some modifications have been made to update the article.)

Root Cause Analysis Tip: Rate Your Root Cause Analysis / Incident Investigation System – The Good, The Bad, and The Ugly

September 3rd, 2014


Over a decade ago, I developed a rating sheet for root cause analysis implementation. We had several sessions at the TapRooT® Summit about it and it was posted on our web site (and then our blog). But in the last web site crash, it was lost. Therefore, I’m reposting it here for those who would like to download it. (Just click on the link below.)

GoodBadUgly.xls

Instructions for using the sheet are on the sheet.

I’m working on a new rating system for evaluation of individual incident investigations and corrective actions. Anyone have any ideas they would like to share?

Press Release from the US CSB: CSB Draft Report Finds Deepwater Horizon Blowout Preventer Failed Due to Unrecognized Pipe Buckling Phenomenon During Emergency Well-Control Efforts on April 20, 2010, Leading to Environmental Disaster in Gulf of Mexico

June 5th, 2014

 

CSB Draft Report Finds Deepwater Horizon Blowout Preventer Failed Due to Unrecognized Pipe Buckling Phenomenon During Emergency Well-Control Efforts on April 20, 2010, Leading to Environmental Disaster in Gulf of Mexico

Report Says Similar Accident Could Still Occur, Calls for Better Management of Safety-Critical Elements by Offshore Industry, Regulators

 Houston, Texas, June 5, 2014— The blowout preventer (BOP) that was intended to shut off the flow of high-pressure oil and gas from the Macondo well in the Gulf of Mexico during the disaster on the Deepwater Horizon drilling rig on April 20, 2010, failed to seal the well because drill pipe buckled for reasons the offshore drilling industry remains largely unaware of, according to a new two-volume draft investigation report released today by the U.S. Chemical Safety Board (CSB).

CLICK HERE to access Overview
CLICK HERE to access Volume 1
CLICK HERE to access Volume 2

The blowout caused explosions and a fire on the Deepwater Horizon rig, leading to the deaths of 11 personnel onboard and serious injuries to 17 others.  Nearly 100 others escaped from the burning rig, which sank two days later, leaving the Macondo well spewing oil and gas into Gulf waters for a total of 87 days. By that time the resulting oil spill was the largest in offshore history.  The failure of the BOP directly led to the oil spill and contributed to the severity of the incident on the rig.

The draft report will be considered for approval by the Board at a public meeting scheduled for 4 p.m. CDT at the Hilton Americas Hotel, 1600 Lamar St., Houston, TX 77010.  The meeting will include a detailed staff presentation, Board questions, and public comments, and will be webcast at:

http://www.csb.gov/investigations/webcast/.

The CSB report concluded that the pipe buckling likely occurred during the first minutes of the blowout, as crews desperately sought to regain control of oil and gas surging up from the Macondo well.  Although other investigations had previously noted that the Macondo drill pipe was found in a bent or buckled state, this was assumed to have occurred days later, after the blowout was well underway.

After testing individual components of the blowout preventer (BOP) and analyzing all the data from post-accident examinations, the CSB draft report concluded that the BOP’s blind shear ram – an emergency hydraulic device with two sharp cutting blades, intended to seal an out-of-control well – likely did activate on the night of the accident, days earlier than other investigations found.  However, the pipe buckling that likely occurred on the night of April 20 prevented the blind shear ram from functioning properly.  Instead of cleanly cutting and sealing the well’s drill pipe, the shear ram actually punctured the buckled, off-center pipe, sending huge additional volumes of oil and gas surging toward the surface and initiating the 87-day-long oil and gas release into the Gulf that defied multiple efforts to bring it under control.

The identification of the new buckling mechanism for the drill pipe ­– called “effective compression” – was a central technical finding of the draft report.  The report concludes that under certain conditions, the “effective compression” phenomenon could compromise the proper functioning of other blowout preventers still deployed around the world at offshore wells.  The complete BOP failure scenario is detailed in a new 11-minute computer video animation the CSB developed and released along with the draft report.

The CSB draft report also revealed for the first time that there were two instances of mis-wiring and two backup battery failures affecting the electronic and hydraulic controls for the BOP’s blind shear ram.  One mis-wiring, which led to a battery failure, disabled the BOP’s “blue pod” – a control system designed to activate the blind shear ram in an emergency.  The BOP’s “yellow pod” – an identical, redundant system that could also activate the blind shear ram – had a different miswiring and a different battery failure.  In the case of the yellow pod, however, the two failures fortuitously cancelled each other out, and the pod was likely able to operate the blind shear ram on the night of April 20.

“Although both regulators and the industry itself have made significant progress since the 2010 calamity, more must be done to ensure the correct functioning of blowout preventers and other safety-critical elements that protect workers and the environment from major offshore accidents,” said Dr. Rafael Moure-Eraso, the CSB chairperson. “The two-volume report we are releasing today makes clear why the current offshore safety framework needs to be further strengthened.”

“Our investigation has produced several important findings that were not identified in earlier examinations of the blowout preventer failure,” said CSB Investigator Cheryl MacKenzie, who led the investigative team.  “The CSB team performed a comprehensive examination of the full set of BOP testing data, which were not available to other investigative organizations when their various reports were completed.  From this analysis, we were able to draw new conclusions about how the drill pipe buckled and moved off-center within the BOP, preventing the well from being sealed in an emergency.”

The April 2010 blowout in the Gulf of Mexico occurred during operations to “temporarily abandon” the Macondo oil well, located in approximately 5,000-foot-deep waters some 50 miles off the coast of Louisiana.  Mineral rights to the area were leased to oil major BP, which contracted with Transocean and other companies to drill the exploratory Macondo well under BP’s oversight, using Transocean’s football-field-size Deepwater Horizon drilling rig.

The blowout followed a failure of the cementing job to temporarily seal the well, while a series of pressure tests were misinterpreted to indicate that the well was in fact properly sealed.  The final set of failures on April 20 involved the Deepwater Horizon’s blowout preventer (BOP), a large and complex device on the sea floor that was connected to the rig nearly a mile above on the sea surface.

Effective compression, as described in the draft report, occurs when there is a large pressure difference between the inside and outside of a pipe.  That condition likely occurred during emergency response actions by the Deepwater Horizon crew to the blowout occurring on the night of April 20, when operators closed BOP pipe rams at the wellhead, temporarily sealing the well.  This unfortunately established a large pressure differential that buckled the steel drill pipe inside the BOP, bending it outside the effective reach of the BOP’s last-resort safety device, the blind shear ram.
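
To make the mechanism concrete, here is a rough numerical sketch using one common drilling-engineering form of the effective tension relation, F_eff = F_true − p_i·A_i + p_o·A_o, where a negative result means effective compression. The tension, pressures, and pipe dimensions below are illustrative assumptions, not values from the CSB report:

import math

def effective_tension(true_tension_n, p_int_pa, p_ext_pa, id_m, od_m):
    # One common form of the effective tension relation:
    #   F_eff = F_true - p_i * A_i + p_o * A_o
    # A negative F_eff means effective compression, which can buckle
    # the pipe even while it hangs in true (axial) tension.
    a_int = math.pi * id_m ** 2 / 4.0   # bore cross-sectional area
    a_ext = math.pi * od_m ** 2 / 4.0   # outside cross-sectional area
    return true_tension_n - p_int_pa * a_int + p_ext_pa * a_ext

# Illustrative numbers only (NOT from the CSB report): a pipe hanging in
# 200 kN of true tension, high shut-in pressure inside the bore, lower
# pressure in the annulus outside.
f_eff = effective_tension(
    true_tension_n=200e3,   # N
    p_int_pa=70e6,          # ~10,000 psi inside the pipe
    p_ext_pa=15e6,          # ~2,200 psi in the annulus
    id_m=0.146,             # ~5.75 in bore
    od_m=0.168,             # ~6-5/8 in outside diameter
)
print(f"Effective tension: {f_eff / 1e3:,.0f} kN")
if f_eff < 0:
    print("Effective compression: buckling is possible despite true tension.")

With these assumed numbers the result is strongly negative (about −640 kN), showing how shutting in a well can put the pipe into effective compression even though it never goes slack.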

“The CSB’s model differs from other buckling theories that have been presented over the years but for which insufficient supporting evidence has been produced,” according to CSB Investigator Dr. Mary Beth Mulcahy, who oversaw the technical analysis.  “The CSB’s conclusions are based on real-time pressure data from the Deepwater Horizon and calculations about the behavior of the drill pipe under extreme conditions.  The findings reveal that pipe buckling could occur even when a well is shut-in and apparently in a safe and stable condition.  The pipe buckling – unlikely to be detected by the drilling crew – could render the BOP inoperable in an emergency.  This hazard could impact even the best offshore companies, those who are maintaining their blowout preventers and other equipment to a high standard.  However, there are straightforward methods to avoid pipe buckling if you recognize it as a hazard.”

The CSB investigation found that while Deepwater Horizon personnel performed regular tests and inspections of those BOP components that were necessary for day-to-day drilling operations, neither Transocean nor BP had performed regular inspections or testing to identify latent failures of the BOP’s emergency systems. As a result, the safety-critical BOP systems responsible for shearing drill pipe in emergency situations – and safely sealing an out-of-control well – were compromised before the BOP was even deployed to the Macondo wellhead.  The CSB report pointed to the multiple miswirings and battery failures within the BOP’s subsea control equipment as evidence of the need for more rigorous identification, testing, and management of critical safety devices.  The report also noted that the BOP lacked the capacity to reliably cut and seal the 6-5/8 inch drill pipe that was used during most of the drilling at the Macondo well prior to April 20 – even if the pipe had been properly centered in the blind shear ram’s blades.

Despite the multiple maintenance problems found in the Deepwater Horizon BOP, which could have been detected prior to the accident, CSB investigators ultimately concluded the blind shear ram likely did close on the night of April 20, and the drill pipe could have been successfully sealed but for the buckling of the pipe. 

“Although there have been regulatory improvements since the accident, the effective management of safety critical elements has yet to be established,” Investigator MacKenzie said.  “This results in potential safety gaps in U.S. offshore operations and leaves open the possibility of another similar catastrophic accident.”

The draft report, subject to Board approval, makes a number of recommendations to the U.S. Department of Interior’s Bureau of Safety and Environmental Enforcement (BSEE), the federal organization established following the Macondo accident to oversee U.S. offshore safety. These recommendations call on BSEE to require drilling operators to effectively manage technical, operational, and organizational safety-critical elements in order to reduce major accident risk to an acceptably low level, known as “as low as reasonably practicable.”

“Although blowout preventers are just one of the important barriers for avoiding a major offshore accident, the specific findings from the investigation about this BOP’s unreliability illustrate how the current system of regulations and standards can be improved to make offshore operations safer,” Investigator MacKenzie said.  “Ultimately the barriers against a blowout or other offshore disaster include not only equipment like the BOP, but also operational and organizational factors.  And all of these need to be rigorously defined, actively monitored, and verified through an effective management system if safety is to be assured.”  Companies should be required to identify these safety-critical elements in advance, define their performance requirements, and prove to the regulator and outside auditors that these elements will perform reliably when called upon, according to the draft report.

The report also proposes recommendations to the American Petroleum Institute (API), the U.S. trade association for both the upstream and downstream petroleum industry. The first recommendation is to revise API Standard 53, Blowout Prevention Equipment Systems for Drilling Wells, to call for critical testing of the redundant control systems within BOPs; the second calls for new guidance on the effective management of safety-critical elements in general.

CSB Chairperson Rafael Moure-Eraso said, “Drilling continues to extend to new depths, and operations in increasingly challenging environments, such as the Arctic, are being planned.  The CSB report and its key findings and recommendations are intended to put the United States in a leading role for improving well-control procedures and practices.  To maintain a leadership position, the U.S. should adopt rigorous management methods that go beyond current industry best practices.”

Two forthcoming volumes of the CSB’s Macondo investigation report are planned to address additional regulatory matters as well as organizational and human factors safety issues raised by the accident.

Do You Want A World-Class Improvement Program? Root Cause Network™ Newsletter, May 2014, Issue 121

May 25th, 2014

Do you want a World-Class Improvement Program? Then read “Tide and Time Wait for No Man” on page 1 of this month’s Root Cause Network™ Newsletter. Download your copy of the newsletter by clicking on this link:

May14NL121.pdf.

 

What else is in this month’s Root Cause Network™ Newsletter? Here’s a list…
 
  • 5 Ways to Improve Your Interviews (Page 2)
  • Best Practice from the 2014 Global TapRooT® Summit: The TapRooT® Expert Help Desk (Page 2)
  • How things naturally go from “Excellence to Complacency” (Page 2)
  • A new idea … “Budget for Your Next Accident” (Page 3)
  • Dilbert Joke (Page 3)
  • An answer to “Is Human Error a Root Cause?” (Page 3)
  • A list of upcoming public TapRooT® Courses – Is one near you? (Page 4)

These articles are quick reads with interesting information. If you are interested in improvement, print the pdf at the link below and get reading:

May14NL121.pdf

Preliminary (Spring) SnapCharT® of the Lost Flight MH 370

March 31st, 2014


(Photo of remains from cockpit fire of an Egypt Air 777 while parked at a gate in Cairo)

 

One of our TapRooT® Users sent the attached PDF of a SnapCharT® for the loss of Malaysia Air MH 370. 

Have a look. See what you think. Then leave comments here…

MH370SnapCharT.pdf

Is Your Root Cause Analysis Adequate? Root Cause Network™ Newsletter, March 2014, Issue 120

March 3rd, 2014

How do you know if your root cause analysis is adequate? Read the article on page 3 of the March Root Cause Network™ Newsletter and find out! Download your copy of the newsletter by clicking on this link: Mar14NL120.pdf


What else can you learn in this edition?

  • What’s Right and What’s Wrong with Human Performance Tools (Page 1)
  • Why Do Supervisors Produce Bad Investigations? (Page 2)
  • How Should You Target Your Investigations? (Page 2)
  • What’s Wrong with Your Trending? (Page 2)
  • Admiral Rickover’s Face-the-Facts Philosophy (Page 2)
  • Proactive Use of TapRooT® (Page 3)
  • Stop Slips, Trips, and Falls (Page 3)
  • Risk Management Best Practices (Page 3)
  • Upcoming TapRooT® Courses Around the World (Page 4)
  • What Can You Learn at the 2014 Global TapRooT® Summit? (Page 5, 6, & 7)

Plus there’s more! An article you really should read and act upon. See the article on Page 3: “Are You Missing an Important Meeting?”

Why should you read that article among all the others? Here’s the first paragraph …

What if you missed a meeting and it caused someone to die. Or maybe you lost your job if you weren’t there? Or your company lost millions of dollars because you simply didn’t attend a three-day meeting. Would you make sure that you were there?

If those questions don’t grab your attention, what will?

Go to this link:

Mar14NL120.pdf

Print the March Root Cause Network™ Newsletter and read it from cover to cover!

You’ll be glad you did. (And you’ll find that there are several actions you will be compelled to take.)

Remembering an Accident: On February 1, 2003, the Columbia Breaks Up During Re-entry

February 1st, 2014

ShuttleTemp.ppt
Click on the link above for a PowerPoint of the temperature alarms during reentry.

What Admiral Rickover Had to Say About Management

January 9th, 2014

Rickover
(Picture of Captain Rickover taken after WWII while he was learning about nuclear technology
at the Manhattan Project before he became head of the Navy nuclear power program.)

The following is the text of a speech delivered in 1982 by Admiral Hyman G. Rickover – the father of the Nuclear Navy – at Columbia University. Rickover’s accomplishments as the head of the Nuclear Navy are legendary, from developing the first power-producing, submarine-based nuclear reactor (from scratch to operations in just three years) to creating a program that has guaranteed process safety (nuclear safety) for over 60 years with zero nuclear accidents.

I am reprinting this speech here because I believe that many do not understand the management concepts needed to guarantee process safety. We teach these concepts in our “Reducing Serious Injuries and Fatalities Using TapRooT®” pre-Summit course. Since many won’t be able to attend this training, I wanted to give all an opportunity to learn these valuable lessons by posting this speech.

– – –

Human experience shows that people, not organizations or management systems, get things done. For this reason, subordinates must be given authority and responsibility early in their careers. In this way they develop quickly and can help the manager do his work. The manager, of course, remains ultimately responsible and must accept the blame if subordinates make mistakes.

As subordinates develop, work should be constantly added so that no one can finish his job. This serves as a prod and a challenge. It brings out their capabilities and frees the manager to assume added responsibilities. As members of the organization become capable of assuming new and more difficult duties, they develop pride in doing the job well. This attitude soon permeates the entire organization.

One must permit his people the freedom to seek added work and greater responsibility. In my organization, there are no formal job descriptions or organizational charts. Responsibilities are defined in a general way, so that people are not circumscribed. All are permitted to do as they think best and to go to anyone and anywhere for help. Each person then is limited only by his own ability.

Complex jobs cannot be accomplished effectively with transients. Therefore, a manager must make the work challenging and rewarding so that his people will remain with the organization for many years. This allows it to benefit fully from their knowledge, experience, and corporate memory.

The Defense Department does not recognize the need for continuity in important jobs. It rotates officers every few years both at headquarters and in the field. The same applies to their civilian superiors.

This system virtually ensures inexperience and nonaccountability. By the time an officer has begun to learn a job, it is time for him to rotate. Under this system, incumbents can blame their problems on predecessors. They are assigned to another job before the results of their work become evident. Subordinates cannot be expected to remain committed to a job and perform effectively when they are continuously adapting to a new job or to a new boss.

When doing a job—any job—one must feel that he owns it, and act as though he will remain in the job forever. He must look after his work just as conscientiously, as though it were his own business and his own money. If he feels he is only a temporary custodian, or that the job is just a stepping stone to a higher position, his actions will not take into account the long-term interests of the organization. His lack of commitment to the present job will be perceived by those who work for him, and they, likewise, will tend not to care. Too many spend their entire working lives looking for their next job. When one feels he owns his present job and acts that way, he need have no concern about his next job.

In accepting responsibility for a job, a person must get directly involved. Every manager has a personal responsibility not only to find problems but to correct them. This responsibility comes before all other obligations, before personal ambition or comfort.

A major flaw in our system of government, and even in industry, is the latitude allowed to do less than is necessary. Too often officials are willing to accept and adapt to situations they know to be wrong. The tendency is to downplay problems instead of actively trying to correct them. Recognizing this, many subordinates give up, contain their views within themselves, and wait for others to take action. When this happens, the manager is deprived of the experience and ideas of subordinates who generally are more knowledgeable than he in their particular areas.

A manager must instill in his people an attitude of personal responsibility for seeing a job properly accomplished. Unfortunately, this seems to be declining, particularly in large organizations where responsibility is broadly distributed. To complaints of a job poorly done, one often hears the excuse, “I am not responsible.” I believe that is literally correct. The man who takes such a stand in fact is not responsible; he is irresponsible. While he may not be legally liable, or the work may not have been specifically assigned to him, no one involved in a job can divest himself of responsibility for its successful completion.

Unless the individual truly responsible can be identified when something goes wrong, no one has really been responsible. With the advent of modern management theories it is becoming common for organizations to deal with problems in a collective manner, by dividing programs into subprograms, with no one left responsible for the entire effort. There is also the tendency to establish more and more levels of management, on the theory that this gives better control. These are but different forms of shared responsibility, which easily lead to no one being responsible—a problem that often inheres in large corporations as well as in the Defense Department.

When I came to Washington before World War II to head the electrical section of the Bureau of Ships, I found that one man was in charge of design, another of production, a third handled maintenance, while a fourth dealt with fiscal matters. The entire bureau operated that way. It didn’t make sense to me. Design problems showed up in production, production errors showed up in maintenance, and financial matters reached into all areas. I changed the system. I made one man responsible for his entire area of equipment—for design, production, maintenance, and contracting. If anything went wrong, I knew exactly at whom to point. I run my present organization on the same principle.

A good manager must have unshakeable determination and tenacity. Deciding what needs to be done is easy, getting it done is more difficult. Good ideas are not adopted automatically. They must be driven into practice with courageous impatience. Once implemented they can be easily overturned or subverted through apathy or lack of follow-up, so a continuous effort is required. Too often, important problems are recognized but no one is willing to sustain the effort needed to solve them.

Nothing worthwhile can be accomplished without determination. In the early days of nuclear power, for example, getting approval to build the first nuclear submarine—the Nautilus—was almost as difficult as designing and building it. Many in the Navy opposed building a nuclear submarine.

In the same way, the Navy once viewed nuclear-powered aircraft carriers and cruisers as too expensive, despite their obvious advantages of unlimited cruising range and ability to remain at sea without vulnerable support ships. Yet today our nuclear submarine fleet is widely recognized as our nation’s most effective deterrent to nuclear war. Our nuclear-powered aircraft carriers and cruisers have proven their worth by defending our interests all over the world—even in remote trouble spots such as the Indian Ocean, where the capability of oil-fired ships would be severely limited by their dependence on fuel supplies.

The man in charge must concern himself with details. If he does not consider them important, neither will his subordinates. Yet “the devil is in the details.” It is hard and monotonous to pay attention to seemingly minor matters. In my work, I probably spend about ninety-nine percent of my time on what others may call petty details. Most managers would rather focus on lofty policy matters. But when the details are ignored, the project fails. No infusion of policy or lofty ideals can then correct the situation.

To maintain proper control one must have simple and direct means to find out what is going on. There are many ways of doing this; all involve constant drudgery. For this reason those in charge often create “management information systems” designed to extract from the operation the details a busy executive needs to know. Often the process is carried too far. The top official then loses touch with his people and with the work that is actually going on.

Attention to detail does not require a manager to do everything himself. No one can work more than twenty-four hours each day. Therefore to multiply his efforts, he must create an environment where his subordinates can work to their maximum ability. Some management experts advocate strict limits to the number of people reporting to a common superior—generally five to seven. But if one has capable people who require but a few moments of his time during the day, there is no reason to set such arbitrary constraints. Some forty key people report frequently and directly to me. This enables me to keep up with what is going on and makes it possible for them to get fast action. The latter aspect is particularly important. Capable people will not work for long where they cannot get prompt decisions and actions from their superior.

I require frequent reports, both oral and written, from many key people in the nuclear program. These include the commanding officers of our nuclear ships, those in charge of our schools and laboratories, and representatives at manufacturers’ plants and commercial shipyards. I insist they report the problems they have found directly to me—and in plain English. This provides them unlimited flexibility in subject matter—something that often is not accommodated in highly structured management systems—and a way to communicate their problems and recommendations to me without having them filtered through others. The Defense Department, with its excessive layers of management, suffers because those at the top who make decisions are generally isolated from their subordinates, who have the first-hand knowledge.

To do a job effectively, one must set priorities. Too many people let their “in” basket set the priorities. On any given day, unimportant but interesting trivia pass through an office; one must not permit these to monopolize his time. The human tendency is to while away time with unimportant matters that do not require mental effort or energy. Since they can be easily resolved, they give a false sense of accomplishment. The manager must exert self-discipline to ensure that his energy is focused where it is truly needed.

All work should be checked through an independent and impartial review. In engineering and manufacturing, industry spends large sums on quality control. But the concept of impartial reviews and oversight is important in other areas also. Even the most dedicated individual makes mistakes—and many workers are less than dedicated. I have seen much poor work and sheer nonsense generated in government and in industry because it was not checked properly.

One must create the ability in his staff to generate clear, forceful arguments for opposing viewpoints as well as for their own. Open discussions and disagreements must be encouraged, so that all sides of an issue will be fully explored. Further, important issues should be presented in writing. Nothing so sharpens the thought process as writing down one’s arguments. Weaknesses overlooked in oral discussion become painfully obvious on the written page.

When important decisions are not documented, one becomes dependent on individual memory, which is quickly lost as people leave or move to other jobs. In my work, it is important to be able to go back a number of years to determine the facts that were considered in arriving at a decision. This makes it easier to resolve new problems by putting them into proper perspective. It also minimizes the risk of repeating past mistakes. Moreover, if important communications and actions are not documented clearly, one can never be sure they were understood or even executed.

It is a human inclination to hope things will work out, despite evidence or doubt to the contrary. A successful manager must resist this temptation. This is particularly hard if one has invested much time and energy on a project and thus has come to feel possessive about it. Although it is not easy to admit that what a person once thought correct now appears to be wrong, one must discipline himself to face the facts objectively and make the necessary changes—regardless of the consequences to himself. The man in charge must personally set the example in this respect. He must be able, in effect, to “kill his own child” if necessary and must require his subordinates to do likewise. I have had to go to Congress and, because of technical problems, recommend terminating a project that had been funded largely on my say-so. It is not a pleasant task, but one must be brutally objective in his work.

No management system can substitute for hard work. A manager who does not work hard or devote extra effort cannot expect his people to do so. He must set the example. The manager may not be the smartest or the most knowledgeable person, but if he dedicates himself to the job and devotes the required effort, his people will follow his lead.

The ideas I have mentioned are not new—previous generations recognized the value of hard work, attention to detail, personal responsibility, and determination. And these, rather than the highly touted modern management techniques, are still the most important in doing a job. Together they embody a common-sense approach to management, one that cannot be taught by professors of management in a classroom.

I am not against business education. A knowledge of accounting, finance, business law, and the like can be of value in a business environment. What I do believe is harmful is the impression often created by those who teach management that one will be able to manage any job by applying certain management techniques together with some simple academic rules of how to manage people and situations.

New Military Standard for Human Engineering Design Criteria

April 4th, 2012 by

Here’s new (January 2012) Mil-STD-1472G:

Mil Std 1472G
(Click on document to open)

I really like the labeling guidance that starts on page 132 (Section 5.4). But don’t stop there … every section has excellent material.

FBI Complaint Against JetBlue Captain

March 31st, 2012 by

Here is a PDF of the FBI criminal complaint … click to see the full document.

Osboncomplaint

CSB Asks for Comments on Their 2012-2016 Strategic Plan

March 29th, 2012 by

The CSB has asked for comments on their 2012-2016 Strategic Plan by April 12, 2012. You can see the plan at:

http://www.csb.gov/assets/news/document/CSB_Strat_Plan_Web_Posting_(3-26-2012).pdf

You can send your comments by email to strategicplan@csb.gov.

My thought was that the strategic plan needed more specifics about the strategies to be adopted. For example, there were five sub-goals under “Goal 3: Preserve the public trust by maintaining and improving organizational excellence.” One of these was “Maintain effective human capital management by promoting development in leadership, technical, and analytical competencies.”

The document referred to several other plans (the “CSB Human Capital Plan for Fiscal Years 2011 – 2015,” the “Office of Personnel Management (OPM) Workforce Planning Model,” and the “OPM Strategic Leadership Management Model”) that were not readily available on the CSB web site or linked in the Strategic Plan. Thus, the specifics of the plan are largely unknown and unknowable.

For example, I would like to see an easy link to the qualification requirements and training program for CSB investigators so that others could see what the CSB considers an adequately qualified investigator.

It would also be interesting for the CSB to detail what they are doing to learn industry best practices for root cause analysis, interviewing, corrective action development, and accident prevention. But these details are not easily available.

Also, I would think that many TapRooT® Users would suggest that the CSB have a core of investigators familiar with the TapRooT® System, which is used extensively across the chemical, petrochemical, and oil industries. This would help them interface with industry personnel and give them the knowledge they need to evaluate industry incident reports produced using TapRooT®.

Finally, I would also think that TapRooT® Users would like to see continued participation of the CSB in ongoing TapRooT® Summits where industry best practices about root cause analysis and accident prevention are shared. Participation in the TapRooT® Summit and other industry conferences should be spelled out as part of the strategy to keep up with the state of the art in the chemical industry.

One other item that deserves comment is the timeliness of CSB accident reports. Frequently, these reports are released more than a year after the accident. Important investigations (for example, BP Texas City, and the still unreleased BP Deepwater Horizon investigation) take more than two years. By the time the reports are released, the industry has already implemented corrective actions and moved on. I would like to see a specific strategy/plan to improve the timeliness of investigations so that reports are published while they can still influence industry practice.

One final thought … because an effective, independent evaluation of major accidents is so important, I highly recommend that readers in the chemical industry take the time to read the CSB Strategic Plan and provide comments before April 12. You can’t complain about the outcome if you don’t comment when asked.
