Category: Current Events
The UK Rail Accident Investigation Branch published a report about a tram hitting a pedestrian in Manchester, UK.
A summary of the report says:
At about 11:13 hrs on Tuesday 12 May 2015, a tram collided with and seriously injured a pedestrian, shortly after leaving Market Street tram stop in central Manchester. The pedestrian had just alighted from the tram and was walking along the track towards Piccadilly.
The accident occurred because the pedestrian did not move out of the path of the tram and because the driver did not apply the tram’s brakes in time to avoid striking the pedestrian.
As a result of this accident, RAIB has made three recommendations. One is made to Metrolink RATP Dev Ltd in conjunction with Transport for Greater Manchester, to review the assessment of risk from tram operations throughout the pedestrianised area in the vicinity of Piccadilly Gardens.
A second is made to UK Tram, to make explicit provision for the assessment of risk, in areas where trams and pedestrians/cyclists share the same space, in its guidance for the design and operation of urban tramways.
A further recommendation is made to Metrolink RATP Dev Ltd, to improve its care of staff involved in an accident.
For the complete report, see:
For one mom, it looked like her vacation cruise ship pulling away from the dock without her but with her children on it.
Lesson learned: Don’t miss the boat.
The Wall Street Journal announced that BP incurred $56 billion in expenses from the Deepwater Horizon explosion and spill. And the end is still not in sight.
BP’s CFO said “It’s impossible to come up with an estimate [of future costs].”
Of course, those costs don’t include the lives lost and the negative PR that the company has received.
How much is a best in class process safety program worth? As BP’s CFO says …
It’s impossible to come up with an estimate.
If you would like to learn best practices to improve your safety performance and make your programs “best in class,” attend the 2016 Global TapRooT® Summit in San Antonio, Texas, on August 1-5.
What? You say YOUR COMPANY CAN’T AFFORD IT? Can it afford $56 billion? The investment in your safety program is a pittance compared with the costs of a major accident. Your company should put spending on safety improvement BEFORE other investments … especially in difficult times.
If you are a senior manager, don’t wait for your safety folks to ask to attend the Summit. Send them an e-mail. Tell them you are putting a team together to attend the Summit with you to learn best practices to prevent major accidents. Ask them who would be the best people to include on this team. Then get them all registered for the Summit.
Remember, the Summit is GUARANTEED.
Attend the Summit and go back to work and use what you’ve learned.
If you don’t get at least 10 times the return on your investment,
simply return the Summit materials and we’ll refund the entire Summit fee.
Wow! A guaranteed ROI. How can we be so sure that you will return to work with valuable ideas to implement? Because we’ve been hosting these Summits for over 20 years, we know the “best of the best” attend the Summit, and we know the value of the ideas they share each year. We’ve heard about the improvements that Summit attendees have implemented. Being proactive is the key to avoiding $56 billion mistakes.
So don’t wait. Get your folks registered today at:
This is old news to most (or should be) but OSHA finally published the long awaited rule on injury reporting:
So now that this information will become more public, will companies improve their records to stay out of view? Some things to think about:
*If they did not care about workers’ safety before, why would they care now?
*Will anyone even pay attention?
*Will management put more pressure on the operation to reduce rates?
*Will management give the operation additional resources to accomplish it?
*Will the media misuse the information? Will it be used politically?
*Did you just become your PR Department’s best bud or worst enemy?
*Will it actually help companies choose better business partners? (many companies have been requiring rates during the RFP process anyway)
*Is everyone else in the organization now throwing in their 2 cents on how you run your business?
I look at this a few ways:
*If you already have a good program and record, this should be of little concern to you from the public information standpoint.
*Assuming that is the case, as a former corporate safety manager, I see this as a HUGE cost for companies to comply. But there has been (and still is) plenty of time to get things in place.
At the end of the day, you cannot control regulations. But can you control your injuries? You bet.
Two of the best ways to lower your injury rates? Do better investigations and audits. Why not join us for a future course? You can see the schedule and enroll HERE
The explosion at the West Fertilizer Plant was thought to have been a tragic accident. However, the Associated Press has reported that the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has said that the fire that caused the explosion was “intentionally set.”
Here is a TV report link:
Harrison Ford was hit by a heavy, hydraulically operated door while filming the new Star Wars movie. He suffered a broken leg. The UK Health & Safety Executive charged Foodles Production (UK) Ltd. with four criminal violations, and the company will have its first court hearing on May 12th.
Now the question – or lesson learned …
- Will criminal charges make movie actors safer?
- Do studios already have incentives to keep their actors safe?
What do you think? Leave your comments here…
This video has a few four letter words so turn your sound off if you would be offended … but the footage is spectacular.
The Nuclear Energy Institute published a white paper titled:
To summarize what is said, the nuclear industry went overboard putting everything including the kitchen sink into their Corrective Action Program, made things too complex, and tried to fix things that should never have been investigated.
How far overboard did they go? Well, in some cases if you were late to training, a condition report was filed.
For many years we’ve been preaching to our nuclear industry clients to TARGET root cause analysis to actual incidents that could cause real safety or process safety consequences worth stopping. We actually recommend expanding the number of real root cause analyses performed while simplifying the way that root cause analyses are conducted.
Also, we recommended STOPPING wasting time performing worthless apparent cause analyses and generating time-wasting corrective actions for problems that really didn’t deserve a fix. Those problems should just be categorized and trended (see our Trending Course if you need to learn more about real trending).
We also wrote a whole new book to help simplify the root cause analysis of low-to-medium risk incidents. It is titled:
Those who have read the book say that it makes TapRooT® MUCH EASIER for simple investigations. It keeps the advantages of the complete TapRooT® System without the complexity needed for major investigations.
What’s in the new book? Here’s the Table of Contents:
Chapter 1: When is a Basic Investigation Good Enough?
Chapter 2: How to Investigate a Fairly Simple Problem Using the Basic Tools of the TapRooT® Root Cause Analysis System
- Find Out What Happened & Draw a SnapCharT®
- Decision: Stop or More to Learn?
- Find Causal Factors Using Safeguard Analysis
- Find Root Causes Using the Root Cause Tree® Diagram
- Develop Fixes Using the Corrective Action Helper Module
- Optional Step: Find and Fix Generic Causes
- What is Left Out of a Basic Investigation to Make it Easy?
Chapter 3: Comparing the Results of a 5-Why Investigation to a Basic TapRooT® Investigation
Appendix A: Quick Reference: How to Perform a Basic TapRooT® Investigation
The TapRooT® Process for simple incidents is just 5 steps and is covered in 50 pages in the book.
If you are looking for a robust technique that is usable on your simple incidents and for major investigations, LOOK NO FURTHER. The TapRooT® System is the answer.
If you are in the nuclear industry, use TapRooT® to simplify the investigations of low-to-moderate risk incidents.
If you are in some other industry, TapRooT® will help you achieve great results investigating both minor incidents and major accidents with techniques that will help you no matter what level of complexity your investigation requires.
One more question that you might have for us …
How does TapRooT® stay one (or more) steps ahead of the industry?
- We work across almost every industry in every continent around the world.
- We spend time thinking about all the problems (opportunities for improvement) that we see.
- We work with some really smart TapRooT® Users around the world that are part of our TapRooT® Advisory Board.
- We organize and attend the annual Global TapRooT® Summit and collect best practices from around the world.
We then put all this knowledge to work to find ways to keep TapRooT® and our clients at the leading edge of root cause analysis and performance improvement excellence. We work hard, think hard, and each year keep making the TapRooT® Root Cause Analysis System better and easier to use.
If you want to reduce the cumulative impact of your corrective action program, get the latest TapRooT® Book and attend our new 2-Day TapRooT® Root Cause Analysis Course. You will be glad to get great results while saving time and effort.
Wow. Quite an eye-opening Washington Post article describing a report published in the BMJ. A comprehensive study by researchers at Johns Hopkins University has found that medical mistakes are now responsible for more deaths in the US each year than accidents, respiratory disease, or strokes. They estimate that over a quarter million people die each year in the US due to mistakes made during medical procedures. And this does NOT include other sentinel events that do not result in death. The researchers include in this category “everything from bad doctors to more systemic issues such as communication breakdowns when patients are handed off from one department to another.” Other tidbits from this study:
- Over 700 deaths each day are due to medical errors
- This is nearly 10% of all deaths in the US each year
What’s particularly alarming is that a study conducted in 1999 showed similar results. That study called medical errors “an epidemic.” And yet, very little has changed since that report was issued. While a few categories have gotten better (hospital-acquired infections, for example), there has been almost no change in the overall numbers.
I’m sure there are many “causes” for these issues. This report focused on the reporting systems in the US (and many other countries) that make it almost impossible to identify medical error cases. And many other problems are endemic to the entire medical system:
- Insurance liabilities
- Inadequate reporting requirements
- Poor training at many levels
- Ineffective accountability systems
- Conflicts between patient care and running a business
However, individual health care facilities have the most control over their own outcomes. They truly believe in providing the very best medical care to their patients. They don’t necessarily need to wait for national regulations to force change. They often just need a way to recognize the issues, minimize the local blame culture, identify problems, recognize systemic issues at their facilities, and apply effective corrective actions to those issues.
I have found that one of the major hurdles to correcting these issues is a lack of proper sentinel event analysis. Hospitals are staffed with extremely smart people, but they just don’t have the training or expertise to perform comprehensive root cause analysis and incident investigation. Many feel that, because they have smart people, they can perform these analyses without further training. Unfortunately, incident investigation is a skill, just like other skills learned by doctors, nurses, and patient quality staff, and this skill requires specialized training and methodology. When a facility is presented with this training (yes, I’m talking about TapRooT®!), I’ve found that they embrace the training and perform excellent investigations. Hospital staff just need this bit of training to move to the next level of finding scientifically-derived root causes and applying effective corrective actions, all without playing the blame game. It is gratifying to see doctors and nurses working together to correct these issues on their own, without needing some expensive guru to come in and do it for them.
Hospitals have the means to start fixing these issues. I’m hoping the smart people at these facilities take this to heart and begin putting processes in place to make a positive difference in their patient outcomes.
“We are going to find out who is to blame because that is the frustrating part about health and safety accidents such as this. When we go back, when we read the report, we find out each and every time that it was preventable. That’s why we need to learn from this,” Kevin Flynn, Ontario’s labour minister, told reporters Tuesday afternoon.
That’s a quote from CP 24, Toronto’s Breaking News. See the story and watch the video interview about the accident here:
Is there a lesson to be learned here?
Interestingly, the “contractor” performing the work in this accident was a branch of the Ontario government.
Everything was going great for Michael Daugherty, owner of LabMD, a company that tested blood, urine, and tissue samples for urologists. He was living the dream. That is, until one of his managers, who had been using LimeWire file-sharing software to download music, inadvertently exposed patient medical records through it. Having the software on her computer was a violation of company policy.
The story goes from bad to worse. Read “A leak wounded this company. Fighting the Feds finished it off” on Bloomberg.
In one day, your whole life could change. Wouldn’t it be great if you never got that phone call that disaster has struck your company?
We have several exclusive Pre-Summit Courses coming up in August that can help you keep your company from facing a crisis such as this. TapRooT® for Audits, Understanding and Stopping Human Error, Risk Assessment & Management and more.
View them here.
We also offer a Medical track immediately following the special 2-day courses at the 3-day Global TapRooT® Summit. Learn more here.
We hope to meet you in San Antonio, Texas during Global TapRooT® Summit week to help you solve your business-critical issues.
The following sequence is from the Clarence Bee …
First, an air conditioning unit for a power supply room failed.
No big deal … There’s an automatic backup and a system to notify the engineer.
Oops … It failed too.
Well, at least there is a local temperature alarm. The local maintenance guy will do the right thing … Right?
Sorry. In the “heat” of the moment, he pushed the “kill” button.
Unfortunately, this was for fire emergencies and it cut off all the power to the 911 system. And nobody knew how to reset it.
Finally, the tech rep from Reliance Electric arrived and the system was restored – 3.5 hours after the kill switch was pushed.
What can you learn from this incident?
- Do your people know what to do when things go wrong?
- Do you do drills?
- Are things clearly labeled?
- Are there response procedures?
- How long has it been since people were trained?
The CSB press release starts with:
“Washington, DC, April 13, 2016 – Offshore regulatory changes made thus far do not do enough to place the onus on industry to reduce risk, nor do they sufficiently empower the regulator to proactively oversee industry’s efforts to prevent another disaster like the Deepwater Horizon rig explosion and oil spill at the Macondo well in the Gulf of Mexico, an independent investigation by the U.S. Chemical Safety Board (CSB) warns.”
For the whole report, see:
Press reports say that the ex-CEO of Massey Coal faces a year in prison as a result of the Upper Big Branch Mine explosion. For a CEO, putting the safety of your workers at risk to improve profits can be costly.
The following is the summary of a report from the UK Rail Accident Investigation Branch.
Serious accident involving a passenger trapped in train doors and
dragged at Clapham South station, 12 March 2015
At around 08:00 hrs on Thursday 12 March 2015, a passenger fell beneath a train after being dragged along the northbound platform of Clapham South station, in south London. She was dragged because her coat had become trapped between the closing doors of a London Underground Northern line train.
The train had stopped and passengers had alighted and boarded normally, before the driver confirmed that the door closure sequence could begin. The train operator, in the driving cab, started the door closure sequence but, before the doors had fully closed, one set encountered an obstruction and the doors were reopened. A passenger who had just boarded, and found that the available standing space was uncomfortable, stepped back off the train and onto the platform, in order to catch the following train. The edge of this passenger’s coat was then trapped when the doors closed again and she was unable to free it.
The trapped coat was not large enough to be detected by the door control system and the train operator, who was unaware of the situation, started the train moving. While checking the platform camera views displayed in his cab, the train operator saw unusual movements on the platform and applied the train brakes. Before the train came to a stop, the trapped passenger fell to the ground and then, having become separated from her coat, fell into the gap between the platform and the train. The train stopped after travelling about 60 metres. The passenger suffered injuries to her arm, head and shoulder, and was taken to hospital.
As a result of this accident, RAIB has made one recommendation, addressed to London Underground, seeking further improvements in the processes used to manage risks at the platform-train interface.
RAIB has also identified one learning point for the railway industry, relating to the provision of under platform recesses as a measure to mitigate the consequences of accidents where passengers fall from the platform.
For the complete report, see:
IOGP SAFETY ALERT – DROPPED OBJECT: 1.3 POUND LINK PIN FELL 40 FEET
A drilling contractor was tripping pipe out of the hole and a link pin came loose from the hook, falling 40 feet (12.2 metres) to the deck below. The pin bounced and struck a glancing blow to the left jaw/neck area of a worker. The link pin is 1 inch by 5 inches (2.5 cm x 12.7 cm) and weighs 1.3 pounds (0.6 kg).
What Went Wrong?
The type of keeper pin used on the dropped object did not adequately secure the pin. The link pin is threaded and uses a cotter pin to prevent the pin body from backing out. The pin was secured with a coil “diaper pin” instead of a cotter pin.
Corrective Actions and Recommendations:
Safety pins that can be knocked out must not be used for lifting operations or securing equipment overhead.
Follow cotter pin installation guidelines:
- Both points on a cotter pin must be bent around the shaft.
- Cotter pins are a single-use instrument and should never be re-used.
safety alert number: 271
IOGP Safety Alerts http://safetyzone.iogp.org/
Whilst every effort has been made to ensure the accuracy of the information contained in this publication, neither the IOGP nor any of its members past present or future warrants its accuracy or will, regardless of its or their negligence, assume liability for any foreseeable or unforeseeable use made thereof, which liability is hereby excluded. Consequently, such use is at the recipient’s own risk on the basis that any use by the recipient constitutes agreement to the terms of this disclaimer. The recipient is obliged to inform any subsequent recipient of such terms.
This document may provide guidance supplemental to the requirements of local legislation. Nothing herein, however, is intended to replace, amend, supersede or otherwise depart from such requirements. In the event of any conflict or contradiction between the provisions of this document and local legislation, applicable laws shall prevail.
What’s worse than a fatal accident? A fatal accident followed by fatalities to first responders or rescuers.
Six rescuers were recently killed while trying to save 26 miners after a coal mine explosion in Russia. The rescuers were killed when the methane exploded again during their rescue attempt. See:
Can you learn something about your emergency response and rescue efforts from this example?
Monday Accident & Lessons Learned: Report by UK RAIB – Serious accident as a passenger left a train and became trapped in the train doors at West Wickham station last April
March 14th, 2016 by Mark Paradies
At around 11:35 hrs on 10 April 2015, a passenger was dragged along the platform at West Wickham station, south London, when the 11:00 hrs Southeastern service from London Cannon Street to Hayes (Kent) departed while her backpack strap was trapped in the doors of the train.
As it moved off, she fell onto the platform and then through the gap between the platform and train, suffering life-changing injuries.
The backpack strap became trapped when the train doors closed unexpectedly and quickly while she was alighting.
Testing showed that this potentially unsafe situation could only occur when a passenger pressed a door-open button, illuminated to show it was available for use, within a period of less than one second beginning shortly after the train driver initiated the door closure sequence.
RAIB identified this door behaviour, which was not known to the owner or operator, and issued urgent safety advice. In response to this, the railway industry undertook a review which identified 21 other types of train that permit passenger doors to be opened for a short period after door closure is initiated by train crew. The industry is now seeking ways to deal with this risk.
The train was being driven by a trainee driver under the supervision of an instructor. The service was driver only operation, which meant that before leaving West Wickham station, and after all train doors were closed, drivers were required to check that it was safe to depart by viewing CCTV monitors located on the platform. Two of these monitor images showed that a passenger appeared to be trapped but, although visible from the driving cab, neither the trainee driver nor the instructor was aware of this. Although the RAIB has not been able to establish why the trapped passenger was not seen before the train departed, a number of possible explanations have been identified.
As a result of this accident, RAIB has made two recommendations. The first, addressed to operators and owners of trains with power operated doors, is intended to identify and correct all train door control systems exhibiting the unsafe characteristics found during this investigation. The second, addressed to RSSB, seeks changes to guidance documents so that, where practicable, staff dispatching trains watch the train doors while they are closing, in addition to checking the doors after they are closed.
RAIB has also identified five learning points relating to: releasing train doors long enough to allow passengers to get on and off trains safely; effective checking of train doors before trains depart (and not relying on the door interlock light); design of door controls; and use of train driving simulators to raise drivers’ awareness of circumstances when it is not safe to depart from a station.
For the entire report, see:
If it is written down, it must be followed. This means it must be correct… right?
Discussion triggers for lack of compliance that I often see are:
- Defective products or services
- Audit findings
- Rework and scrap
So the next questions that I often ask when compliance is “apparent” are:
- Do these defects happen when standards, policies, and administrative controls are in place and followed?
- What were the root causes for the audit findings?
- What were the root causes for the rework and scrap?
In a purely compliance-driven company, I often hear these answers:
- It was a complacency issue
- The employees were transferred … sometimes right out the door
- The employee was retrained, and the other employees were reminded why it is important to do the job as required.
So is compliance in itself a bad thing? No, but compliance with poor processes always means poor output.
Should employees be able to question current standards, policies, and administrative controls? Yes, at the proper time and in the right manner. Please note that in cases of emergencies and process work stop requests, the time is most likely now.
What are some options for removing the blinders of pure compliance?
GOAL (Go Out And Look)
- Evaluate your training and make sure it matches the workers’ and the task’s needs at hand. Many compliance issues start with forcing policies downward without GOAL from the bottom up.
- Don’t just check off the audit checklist for compliance’s sake – GOAL.
- Immerse yourself with people who share your belief to do the right thing, not just the written thing.
- Learn how to evaluate your own process without the pure Compliance Glasses on.
If you see yourself acting on the suggestions above, this would be a perfect Compliance Awareness Trigger to join us at our 2016 TapRooT® Summit week, August 1-5 in San Antonio, Texas.
Here’s the press report about an incident at a west coast refinery …
They think that someone working in the area accidentally hit a button that shut down fuel to a boiler. That caused a major portion of the refinery to shut down.
At least one Causal Factor for this incident would be “Worker accidentally hits button with elbow.”
If you were analyzing this Causal Factor using the Root Cause Tree®, where would you go?
Of course, it would be a Human Performance Difficulty.
When you reviewed The Human Performance Troubleshooting Guide, you would answer “Yes” to question 5:
“Were displays, alarms, controls, tools, or equipment identified or operated improperly?”
That would lead you to evaluate the equipment’s Human Engineering.
Under the Human-Machine Interface Basic Cause Category, you would identify the “controls need improvement” root cause because you would answer “Yes” to the Root Cause Tree® Dictionary question:
“Did controls need mistake-proofing to prevent unintentional or incorrect actuation?”
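For readers who like to think of this walkthrough as an algorithm, the path can be sketched in a few lines of Python. This is a hypothetical, highly simplified sketch: the question keys and structure below are my own illustration, not the actual TapRooT® tool, which has far more branches and questions.

```python
# A deliberately tiny, hypothetical sketch of the yes/no decision path
# described above. The real Root Cause Tree(R) is far larger and more
# nuanced; the keys and wording here are illustrative assumptions only.

def walk_tree(answers):
    """Follow the yes/no answers down a miniature decision path
    and return the branches traversed."""
    path = []
    if answers.get("human_performance_difficulty"):
        path.append("Human Performance Difficulty")
        # Troubleshooting-guide question 5, paraphrased from the text above
        if answers.get("displays_or_controls_operated_improperly"):
            path.append("Human Engineering")
            # Root Cause Tree(R) Dictionary question, paraphrased
            if answers.get("controls_need_mistake_proofing"):
                path.append("controls need improvement")
    return path

# Causal Factor: "Worker accidentally hits button with elbow"
result = walk_tree({
    "human_performance_difficulty": True,
    "displays_or_controls_operated_improperly": True,
    "controls_need_mistake_proofing": True,
})
print(result)
# → ['Human Performance Difficulty', 'Human Engineering', 'controls need improvement']
```

The point of the sketch is simply that each “Yes” answer narrows the analysis one level deeper, from the broad difficulty category down to a specific, correctable root cause.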
That’s just one root cause for one Causal Factor. How many other Causal Factors were there? It’s hard to tell with the level of detail provided by the article. I would guess there was at least one more, and maybe several (there usually should be for an incident of this magnitude).
At least one corrective action by refinery management was to put a guard on the button. Later, the button was removed to eliminate the chance for human error.
Are there more human-machine interface problems at this refinery? Are they checking for them to look for Generic Causes? You can’t tell from the article.
Would you like to learn more about understanding human errors and advanced root cause analysis? Then you should attend the 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training. See public course dates at:
And click on the link for the continent where you would like to attend the training.
President Obama issued Executive Order 13650, which directed agencies to improve chemical safety performance. In response, the EPA is proposing changes to the RMP (Risk Management Plan) regulation. A preliminary copy of the changes has been published HERE (they have not yet been published in the Federal Register).
For readers interested in root cause analysis, the main changes start on page 28 in the Incident Investigation and Accident History Requirements section.
The revision to the regulation actually mentions “causal factors” and “root causes” that were not mentioned in the previous regulation. On page 33 the revision states:
Thus EPA is proposing to require a root cause analysis to ensure that facilities determine
the underlying causes of an incident to reduce or eliminate the potential for additional accidents
resulting from deficiencies of the same process safety management system.
The EPA document uses the following definition of a root cause:
Root cause means a fundamental, underlying, system-related reason
why an incident occurred that identifies a correctable failure(s) in management systems.
The revision document gives examples of poor investigations of near-miss accidents that did not get to root causes, so that a future accident including a fatality or severe injuries occurred. These examples include an explosion and fire at a Tosco refinery, an explosion at a Georgia-Pacific Resins facility, an explosion and fire at a Shell olefins plant, and a runaway reaction at a Morton International chemical plant. In each case, root causes of issues were not identified and fixed, and this allowed a more serious accident to eventually occur.
Of course, I have said many times that I’ve never seen a major accident that didn’t have precursor incidents (call them near-misses if you must). Performing adequate root cause analysis of smaller incidents has always been one of the goals that we have suggested to TapRooT® Users and now even more fully support with the new Using the Essential TapRooT® Techniques to Investigate Low-to-Medium Risk Incidents book.
The document asks for comments on the proposed revision to the regulation (page 41):
- EPA seeks comment on whether a root cause analysis is appropriate for every RMP reportable accident and near miss.
- Should EPA eliminate the root cause analysis, or revise to limit or increase the scope or applicability of the root cause analysis requirement?
- If so, how should EPA revise the scope or applicability of this proposed requirement?
- EPA also seeks comment on proposed amendments to require consideration of incident investigation findings, in the hazard review (§ 68.50) and PHA (§ 68.67) requirements.
- Finally, EPA seeks comment on the proposed additional requirement in § 68.60 to require personnel with appropriate knowledge of the facility process and knowledge and experience in incident investigation techniques to participate on an incident investigation team.
In the document, there is extensive discussion about defining and investigating near-misses. The section ends with …
- EPA seeks comment on the guidance and examples provided of a near miss.
- Is further clarification needed in this instance?
- Should EPA consider limiting root cause analyses only for incidents that resulted in a catastrophic release?
The document also discusses time frames for completing investigations. Should it be 30 days, 60 days, or six months? It’s interesting to note that many investigations of process safety incidents by the US Chemical Safety Board take years. The EPA is suggesting a one-year time limit (with the possibility of a written extension granted by the EPA).
The EPA is asking for feedback on this time limit:
- EPA seeks comment on whether to add this condition to the incident investigation requirements or whether there are other options to ensure that unsafe conditions that led to the incident are addressed before a process is re-started.
- EPA also seeks comment on whether the different root cause analysis timeframes specified under the MACT and NSPS and proposed herein will cause any difficulties for sources covered under both rules, and if so, what approach EPA should take to resolve this issue.
The document also discusses reporting of root cause information to the EPA and suggests that common “categories” of root causes be reported to the EPA. The document even references an old (1996) version of the TapRooT® Root Cause Tree® and a potential list of root cause categories. They then request comments:
- EPA seeks comment on the appropriateness of requiring root cause reporting as part of the accident history requirements of § 68.42, as well as the categories that should be considered and the timeframe within which the root cause information must be submitted.
Although I am flattered to be the “father” of this idea that root causes should be reported so that they may be learned from, I’m also concerned that people may think that simply selecting from a list of root causes is root cause analysis. Also, I’ve seen many lists of root causes with bad categorization. The main problem is what I would call “blame” categorization. I’m not sure the EPA would recognize the importance of the structure and limits that need to be enforced to have a good categorization system. (Many consultants don’t understand this; why should the EPA?)
As everyone who reads the Root Cause Analysis Blog knows, I am always preaching the enhanced use of root cause analysis to improve safety, process safety, patient safety, quality, equipment reliability, and operations. But I am hesitant to jump aboard a bandwagon to write federal regulations that require good management. Yes, I understand that lives are at stake. But every time a government regulation is written, it seems to cement a certain protocol in place and discourage progress. Imagine all the improvements we have made to TapRooT® since 1996. Would that progress be halted because the EPA cements the “categorization” of root causes as it stood in 1996? Or even worse… what if the EPA’s categories include “blame” categories, and managers all over the chemical industry start telling investigators to stop looking for other system causes and find blame-related root causes? It could happen.
I would suggest that readers watch for the publication of EPA’s revision of the RMP in the Federal Register and get their comments in on the topics listed above. You can’t blame the EPA for making bad regulations if you don’t take the opportunity to comment when the comments are requested.
When the press covers an accident, they like instant answers. The BBC is reporting that “human error” is to “blame” for two trains crashing head-on in Germany. Here’s the article:
Of course, prosecutors are pressing charges against the area controller who “… opened the track to the two trains and tried to warn the drivers.”
What do you think … is “human error” THE cause?
The last of the federal prosecutions are finally concluding. Robert Kaluza was found not guilty of violating the federal Clean Water Act for missing indications of the blowout of the Macondo well. The accident killed 11 workers and both Kaluza and Donald Vidrine were initially charged with manslaughter, but those charges were later dropped.
Vidrine and Kaluza were not the only people charged as a result of the spill. BP employee Kurt Mix was prosecuted for obstruction of justice after he deleted text messages on his phone. Mix wasn’t involved in the accident, but was involved in trying to find ways to stop the spill. His ordeal ended last November when, after his initial conviction was overturned on appeal, he accepted a plea bargain to a misdemeanor charge for deleting the text messages without company permission.
Note that these engineers were the highest-level company personnel prosecuted after the spill. No senior executives faced charges.
Kurt Mix will be speaking at the 2016 Global TapRooT® Summit about his experience and the effect that it might have on other first responders and people being asked questions after an accident. If you don’t think that federal prosecutions could impact your incident investigations, come hear Kurt’s story and then decide.
The 2016 Global TapRooT® Summit is being held in San Antonio, Texas, on August 1-5. For more information about the keynote speakers, see:
Monday Accident & Lessons Learned: Sure Looks Like an Equipment Failure … But What is the Root Cause?
February 25th, 2016 by Mark Paradies
When you look up in the air and this is what you see … it sure looks like an equipment failure. But what is the root cause?
That’s what DTE Energy will be looking into when they investigate this failure.
How do you go beyond “It broke!” and find how and why an equipment failure occurred? We recommend using techniques developed by equipment expert Heinz Bloch and embedded in the Equifactor® Module of the TapRooT® Software.
For more information about the software and training, see: