The Wall Street Journal reported that BP has incurred $56 billion in expenses from the Deepwater Horizon explosion and spill. And the end is still not in sight.
BP’s CFO said “It’s impossible to come up with an estimate [of future costs].”
Of course, those costs don’t include the lives lost and the negative PR that the company has received.
How much is a best in class process safety program worth? As BP’s CFO says …
It’s impossible to come up with an estimate.
If you would like to learn best practices to improve your safety performance and make your programs “best in class,” then attend the 2016 Global TapRooT® Summit in San Antonio, Texas, on August 1-5.
What? You say YOUR COMPANY CAN’T AFFORD IT? Can it afford $56 billion? The investment in your safety program is a pittance compared with the costs of a major accident. Your company should put spending on safety improvement BEFORE other investments … especially in difficult times.
If you are a senior manager, don’t wait for your safety folks to ask to attend the Summit. Send them an e-mail. Tell them you are putting a team together to attend the Summit with you to learn best practices to prevent major accidents. Ask them who would be the best people to include on this team. Then get them all registered for the Summit.
Remember, the Summit is GUARANTEED.
Attend the Summit and go back to work and use what you’ve learned.
If you don’t get at least 10 times the return on your investment,
simply return the Summit materials and we’ll refund the entire Summit fee.
Wow! A guaranteed ROI. How can we be so sure that you will return to work with valuable ideas to implement? Because we’ve been hosting these Summits for over 20 years, we know the “best of the best” attend, and we know the value of the ideas they share each year. We’ve heard about the improvements that Summit attendees have implemented. Being proactive is the key to avoiding $56 billion mistakes.
So don’t wait. Get your folks registered today at:
A lot of bad days start with bad decisions. For example, when you decide to take a selfie with a 4-foot rattlesnake… (Read story.)
The following is a video of a fatal accident. The driver went around a tow truck sent to block the underpass and past a worker waving his arms to stop her. She drove into water about 17 feet deep. DON’T watch the video if it will upset you. For others, hopefully you can use this to teach people to avoid standing water during flooding.
The explosion at the West Fertilizer Plant was thought to have been a tragic accident. However, the Associated Press has reported that the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has said that the fire that caused the explosion was “intentionally set.”
Here is a TV report link:
Harrison Ford was hit by a heavy, hydraulically operated door while filming the new Star Wars movie. He suffered a broken leg. The UK Health & Safety Executive charged Foodles Productions (UK) Ltd. with four criminal violations, and the company will have its first court hearing on May 12th.
Now the question – or lesson learned …
- Will criminal charges make movie actors safer?
- Do studios already have incentives to keep their actors safe?
What do you think? Leave your comments here…
Wow. Quite an eye-opening Washington Post article describing a report published in the BMJ. A comprehensive study by researchers at Johns Hopkins University has found that medical mistakes are now responsible for more deaths in the US each year than accidents, respiratory disease, or stroke. They estimate that over a quarter million people die each year in the US due to mistakes made during medical procedures. And this does NOT include other sentinel events that do not result in death. The researchers include in this category “everything from bad doctors to more systemic issues such as communication breakdowns when patients are handed off from one department to another.” Other tidbits from this study:
- Over 700 deaths each day are due to medical errors
- This is nearly 10% of all deaths in the US each year
What’s particularly alarming is that a study conducted in 1999 showed similar results. That study called medical errors “an epidemic.” And yet, very little has changed since that report was issued. While a few categories have gotten better (hospital-acquired infections, for example), there has been almost no change in the overall numbers.
I’m sure there are many “causes” for these issues. This report focused on the reporting systems in the US (and many other countries) that make it almost impossible to identify medical error cases. And many other problems are endemic to the entire medical system:
- Insurance liabilities
- Inadequate reporting requirements
- Poor training at many levels
- Ineffective accountability systems
- Conflicts between patient care and running a business
However, individual health care facilities have the most control over their own outcomes. They truly believe in providing the very best medical care to their patients. They don’t necessarily need to wait for national regulations to force change. They often just need a way to recognize the issues, minimize the local blame culture, identify problems, recognize systemic issues at their facilities, and apply effective corrective actions to those issues.
I have found that one of the major hurdles to correcting these issues is a lack of proper sentinel event analysis. Hospitals are staffed with extremely smart people, but they just don’t have the training or expertise to perform comprehensive root cause analysis and incident investigation. Many feel that, because they have smart people, they can perform these analyses without further training. Unfortunately, incident investigation is a skill, just like other skills learned by doctors, nurses, and patient quality staff, and this skill requires specialized training and methodology. When a facility is presented with this training (yes, I’m talking about TapRooT®!), I’ve found that they embrace the training and perform excellent investigations. Hospital staff just need this bit of training to move to the next level of finding scientifically-derived root causes and applying effective corrective actions, all without playing the blame game. It is gratifying to see doctors and nurses working together to correct these issues on their own, without needing some expensive guru to come in and do it for them.
Hospitals have the means to start fixing these issues. I’m hoping the smart people at these facilities take this to heart and begin putting processes in place to make a positive difference in their patient outcomes.
“We are going to find out who is to blame because that is the frustrating part about health and safety accidents such as this. When we go back, when we read the report, we find out each and every time that it was preventable. That’s why we need to learn from this,” Kevin Flynn, Ontario’s labour minister, told reporters Tuesday afternoon.
That’s a quote from CP 24, Toronto’s Breaking News. See the story and watch the video interview about the accident here:
Is there a lesson to be learned here?
Interestingly, the “contractor” performing the work in this accident was a branch of the Ontario government.
On April 3rd, an Amtrak passenger train collided with a backhoe that was being used by railroad employees for maintenance. Two maintenance workers were killed, and about 20 passengers on the train were injured. For those who are not familiar with the railroad industry, I wanted to discuss a system that was in place and designed to help prevent these types of incidents.
Many trains are being back-fitted with equipment and software collectively known as positive train control (PTC). These systems include sensors, software, and procedures designed to help the engineer operate the train safely. They are designed to allow for:
- Train separation and collision avoidance
- Speed enforcement
- Rail worker safety
For example, as the train approaches a curve that has a lower speed limit, a train with PTC would first alert the engineer that he must reduce speed and then, if this doesn’t happen, automatically reduce the speed or stop the train as necessary to keep it within the allowable limit. Another example: if maintenance is known to be occurring on a particular section of track, the train “knows” it is not allowed to be on that section and will slow or stop to avoid entering the restricted area. The system can be pretty sophisticated, but this is the general idea.
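To make that idea concrete, here is a minimal, hypothetical sketch (in Python) of the alert-then-enforce logic. The function name, the 5 mph margin, and the action labels are my own illustrative assumptions, not any railroad’s or vendor’s actual implementation.

```python
# Hypothetical sketch of PTC-style speed enforcement, not any railroad's actual
# implementation. The function name, the 5 mph margin, and the action labels
# are illustrative assumptions.

ENFORCEMENT_MARGIN_MPH = 5  # assumed overspeed margin before the system intervenes

def ptc_speed_action(current_speed_mph, restriction_mph, engineer_alerted):
    """Decide what a PTC-like system might do approaching a speed restriction."""
    if current_speed_mph <= restriction_mph:
        return "no action"                 # already within the restriction
    if not engineer_alerted:
        return "alert engineer"            # first step: warn the engineer to slow down
    if current_speed_mph > restriction_mph + ENFORCEMENT_MARGIN_MPH:
        return "apply penalty brake"       # warned, but still well over the limit
    return "automatic speed reduction"     # trim the speed back to the restriction

# Example: a 79 mph train approaching a 45 mph curve, engineer already alerted
print(ptc_speed_action(79, 45, engineer_alerted=True))  # -> "apply penalty brake"
```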
Notice that I described the system as a series of sensors, software, and procedures that make up PTC. While we can put all kinds of sensors and software in place, there are still procedures that people must follow for the system to operate properly. For example, in order for the train to know about worker safety restrictions on a particular piece of track, several things must happen (a simple sketch after this list shows how every link has to hold):
- The workers must tell the dispatcher they are on a specific section of track (there are very detailed procedures that cover this).
- The dispatcher must correctly tell the system that the workers are present.
- The software must correctly identify the section of track.
- The communications hardware must properly communicate with the train.
- The train must know where it is and where it is going.
- The workers must be on the correct section of track.
- The workers must be doing the correct maintenance (for example, not also working on an additional siding).
- If used, local temporary warning systems deployed by the workers must be operating properly. For example, there are devices that can be worn on the workers’ bodies that signal the train and receive a signal from it.
- Proper maintenance must be performed on all of the PTC hardware and software.
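Here is a minimal sketch of that chain, using made-up labels for each precondition. The point it illustrates is that the work-zone restriction protects the workers only if every single link holds; any one failure defeats the barrier.

```python
# Illustrative sketch only: the labels below are assumptions, not actual PTC
# terminology. A work-zone restriction protects the workers only when every
# link in the chain holds; one broken link defeats the whole barrier.

work_zone_chain = {
    "workers reported their track section to the dispatcher": True,
    "dispatcher entered the restriction into the system": True,
    "software mapped the correct section of track": True,
    "communications hardware is reaching the train": True,
    "train knows its own position and route": True,
    "workers are actually on the reported section": True,
    "local warning devices (if used) are working": True,
    "PTC hardware and software are maintained": True,
}

def work_zone_protected(chain):
    """The restriction is effective only if every precondition is true."""
    return all(chain.values())

# A single weak link (e.g., the restriction never gets entered) removes the protection:
work_zone_chain["dispatcher entered the restriction into the system"] = False
print(work_zone_protected(work_zone_chain))  # -> False
```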
As you can see, putting a great PTC system in place involves more than just installing a bunch of equipment. Workers must understand the equipment, its interrelation with the train and dispatcher, how the system is properly initialized and secured, the limitations of the PTC system, etc. People are still involved.
For this Amtrak crash, we know that there was a PTC system in place. However, I don’t know how it was being employed, whether it was working properly, or whether all the procedures were being followed. I am definitely not trying to apportion any blame, since I’m not involved in the investigation. However, I did want to point out that, while implementation of PTC systems is long overdue, it is important to realize that these systems have many weak points that must be recognized and understood in order to have them operating properly.
Humans will almost always end up being the weak link, and it is critical that the entire system, including the human interactions with the system, be fully accounted for when designing and operating the system. Proper audits will often catch these weak barriers, and proper investigations can help identify the human performance issues that are almost certainly in play when an accident occurs. By finding the human performance issues, we can target more effective corrective actions than just blaming the individual. Our investigations and audits have to take the entire system into account when looking for improvements.
For the 25th year, the AFL-CIO has produced a report about the state of safety and health for American workers. The report states that in 2014, 4,821 workers were killed on the job in the U.S., and approximately 50,000 died from occupational diseases. This indicates a loss of roughly 150 workers each day from hazardous conditions.
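For context, the “150 workers each day” figure appears to combine both annual totals over a 365-day year:

(4,821 + 50,000) ÷ 365 ≈ 150 deaths per day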
READ the full report.
Read story here.
The following sequence is from the Clarence Bee …
First, an air conditioning unit for a power supply room failed.
No big deal … There’s an automatic backup and a system to notify the engineer.
Oops … It failed too.
Well, at least there is a local temperature alarm. The local maintenance guy will do the right thing … Right?
Sorry. In the “heat” of the moment, he pushed the “kill” button.
Unfortunately, this was for fire emergencies and it cut off all the power to the 911 system. And nobody knew how to reset it.
Finally, the tech rep from Reliance Electric arrived and the system was restored – 3.5 hours after the kill switch was pushed.
What can you learn from this incident?
- Do your people know what to do when things go wrong?
- Do you do drills?
- Are things clearly labeled?
- Are there response procedures?
- How long has it been since people were trained?
The CSB press release starts with:
“Washington, DC, April 13, 2016 – Offshore regulatory changes made thus far do not do enough to place the onus on industry to reduce risk, nor do they sufficiently empower the regulator to proactively oversee industry’s efforts to prevent another disaster like the Deepwater Horizon rig explosion and oil spill at the Macondo well in the Gulf of Mexico, an independent investigation by the U.S. Chemical Safety Board (CSB) warns.”
For the whole report, see:
On April 16, 1947, the Texas City Disaster occurred in the Port of Texas City. It is considered one of the deadliest industrial accidents in U.S. history. The incident killed a minimum of 581 people, and there were 8,485 victims in all. It even claimed the lives of some of the rescue workers. Because of this disaster, the first class action lawsuit was filed against the United States government under the Federal Tort Claims Act (FTCA).
The French-registered vessel SS Grandcamp, docked in the port, caught fire. It was carrying 2,300 tons of ammonium nitrate, which is extremely explosive. When the SS Grandcamp exploded, it set off a chain reaction of fires and explosions throughout the port and nearby oil-storage facilities.
To read more about this disaster please click on the link below.
Did you know that our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training is for those who want to learn the essential and advanced TapRooT® Techniques and how to use the TapRooT® Software (by using it in training)? TapRooT® can be used proactively before a major incident happens! Learn more about the 5-day course at:
The press reports that the ex-CEO of Massey Energy faces a year in prison as a result of the Upper Big Branch Mine explosion. For a CEO, putting the safety of your workers at risk to improve profits can be costly.
Here’s the article …
They have already fired the Commanding Officer … so don’t worry … they won’t start up gears without lube oil again. More video below.
If you are in the Navy … it looks like this!
Notice how happy sailors look when they aren’t to blame! (Bet they don’t look that happy on the bridge.)
Just needs some duct tape for repairs!
The following is the summary of a report from the UK Rail Accident Investigation Branch.
Serious accident involving a passenger trapped in train doors and
dragged at Clapham South station, 12 March 2015
At around 08:00 hrs on Thursday 12 March 2015, a passenger fell beneath a train after being dragged along the northbound platform of Clapham South station, in south London. She was dragged because her coat had become trapped between the closing doors of a London Underground Northern line train.
The train had stopped and passengers had alighted and boarded normally, before the driver confirmed that the door closure sequence could begin. The train operator, in the driving cab, started the door closure sequence but, before the doors had fully closed, one set encountered an obstruction and the doors were reopened. A passenger who had just boarded, and found that the available standing space was uncomfortable, stepped back off the train and onto the platform, in order to catch the following train. The edge of this passenger’s coat was then trapped when the doors closed again and she was unable to free it.
The trapped coat was not large enough to be detected by the door control system and the train operator, who was unaware of the situation, started the train moving. While checking the platform camera views displayed in his cab, the train operator saw unusual movements on the platform and applied the train brakes. Before the train came to a stop, the trapped passenger fell to the ground and then, having become separated from her coat, fell into the gap between the platform and the train. The train stopped after travelling about 60 metres. The passenger suffered injuries to her arm, head and shoulder, and was taken to hospital.
As a result of this accident, RAIB has made one recommendation, addressed to London Underground, seeking further improvements in the processes used to manage risks at the platform-train interface.
RAIB has also identified one learning point for the railway industry, relating to the provision of under platform recesses as a measure to mitigate the consequences of accidents where passengers fall from the platform.
For the complete report, see:
IOGP SAFETY ALERT
DROPPED OBJECT: 1.3 POUND LINK PIN FELL 40 FEET
A drilling contractor was tripping pipe out of the hole and a link pin came loose from the hook, falling 40 feet (12.2 metres) to the deck below. The pin bounced and struck a glancing blow to the left jaw/neck area of a worker. The link pin is 1 inch by 5 inches (2.5cm x 12.7cm) and weighs 1.3 pounds (0.6 kg).
What Went Wrong?
The type of keeper pin used on the dropped object did not adequately secure the pin. The link pin is threaded and uses a cotter pin to prevent the pin body from backing out. The pin was secured with a coil “diaper pin” instead of a cotter pin.
Corrective Actions and Recommendations:
Safety pins that can be knocked out must not be used for lifting operations or securing equipment overhead.
Follow cotter pin installation guidelines:
- Both points on a cotter pin must be bent around the shaft.
- Cotter pins are a single-use instrument and should never be re-used.
Safety Alert Number: 271
IOGP Safety Alerts http://safetyzone.iogp.org/
What’s worse than a fatal accident? A fatal accident followed by fatalities to first responders or rescuers.
Six rescuers were recently killed while trying to save 26 miners after a coal mine explosion in Russia. The rescuers were killed when the methane exploded again during their rescue attempt. See:
Can you learn something about your emergency response and rescue efforts from this example?