Category: Accidents

What does a bad day look like?

April 25th, 2017

Well, it could start like this…

Remembering an Accident: Oppau Explosion in Germany

April 21st, 2017

The explosion occurred on September 21, 1921, when a silo storing 4,500 tonnes of an ammonium sulfate and ammonium nitrate fertilizer mixture exploded at the Oppau plant in Germany. It killed between 500 and 600 people and injured about 2,000 more. The blast was felt for miles, damaging the factory and the surrounding community.

What Happened? 

The plant began producing ammonium sulfate in 1911, but during WWI, when Germany was unable to obtain the necessary sulfur, it began producing ammonium nitrate as well. Under the pressure of its own weight, the mixture of the two compacted into a plaster-like substance.

The workers had to use pickaxes to remove the plaster-like substance from inside the silos. To make their work easier, the workers used small charges of dynamite to loosen the mixture. It is estimated that there were as many as 20,000 such firings before that fatal day. Due in part to this tragic incident, it is now well known that ammonium nitrate is highly explosive even in a mixture.

To read more about this tragic accident, please click on the link below.

http://en.wikipedia.org/wiki/Oppau_explosion

To find out how to find and fix root causes at your facility to avoid disasters large and small, visit:

http://www.taproot.com/products-services/about-taproot

Remembering an Accident: Western Airlines Flight 470

March 31st, 2017


On March 31, 1975, a short domestic Western Airlines flight ended in a horrible accident. The Boeing 737 overran the runway, sustaining major damage. Of the 96 passengers and 6 crew members, only 4 were injured and no one was killed. But what happened?

According to the investigation, completed in October 1975 (7 months later), the root cause was “poor judgement” by the pilot-in-command. The crew recounted that there was poor weather and visibility, which led to misleading callouts to the pilot. Was it someone’s fault? Should there be better processes for landing aircraft in poor weather? Should there be a better way to determine whether the weather is even safe to fly in? Should there be improvements to runway lighting and guidance? These are the questions that should be asked to develop more effective corrective actions and avoid future, potentially fatal accidents.

What’s Wrong with this Data?

March 20th, 2017

Below are sentinel event types from 2014 – 2016 as reported to the Joint Commission (taken from the 1/13/2017 report at https://www.jointcommission.org/assets/1/18/Summary_4Q_2016.pdf):

[Table: Summary Event Data]

 Reviewing this data, one might ask … 

What can we learn?

I’m not trying to be critical of the Joint Commission’s efforts to collect and report sentinel event data. In fact, it is refreshing to see that some hospitals are willing to admit that there is room for improvement. Plus, the Joint Commission is pushing for greater reporting and improved root cause analysis. But, here are some questions to consider…

  • Does a tick up or down in a particular category mean something? 
  • Why are suicides so high and infections so low? 
  • Why is there no category for misdiagnosis while being treated?

Perhaps the biggest question one might ask is why there are only 824 sentinel events in the database when estimates put the number of sentinel events in the USA at over 100,000 per year.

Of course, not all hospitals are part of the Joint Commission review process but a large fraction are.  

If we are conservative and estimate that there should be 50,000 sentinel events reported to the Joint Commission each year, we can conclude that only 1.6% of the sentinel events are being reported.
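As a quick back-of-the-envelope check, here is a minimal Python sketch of that calculation (the 824 and 50,000 figures are the rough estimates discussed above, not authoritative data):

reported = 824        # sentinel events in the Joint Commission database
estimated = 50_000    # conservative estimate of reportable events per year

rate = reported / estimated
print(f"Estimated reporting rate: {rate:.1%}")  # prints: Estimated reporting rate: 1.6%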

That makes me ask some serious questions.

1. Are the other events being hidden? Ignored? Or investigated and not reported?

Perhaps one of the reasons that the healthcare industry is not improving performance at a faster rate is that they are only learning from a tiny fraction of their operating experience. After all, if you only learned from 1.6% of your experience, how long would it take to improve your performance?

2. If a category like “Unintended Retention of a Foreign Body” stays at over 100 incidents per year, why aren’t we learning to prevent these events? Are the root cause analyses inadequate? Are the corrective actions inadequate or not being implemented? Or is there a failure to share best practices for preventing these incidents across the healthcare industry (so each facility must learn from one or more of its own errors)? If we don’t have 98% of the data, how can we measure whether we are getting better or worse? Since our 50,000 number is a gross approximation, is it possible to learn anything at all from this data?

To me, it seems like the FIRST challenge when improving performance is to develop a good measurement system. Each hospital should have HUNDREDS or at least DOZENS of sentinel events to learn from each year. Thus, the Joint Commission should have TENS or HUNDREDS of THOUSANDS of sentinel events in their database. 

If the investigation, root cause analysis, and corrective actions were effective and being shared, there should be great progress in eliminating whole classes of sentinel events and this should be apparent in the Joint Commission data. 

This improved performance would be extremely important to the patients that avoided harm and we should see an overall decrease in the cost of medical care as mistakes are reduced.

This isn’t happening.

What can you do to get things started?

1. Push for full reporting of sentinel events AND near-misses at your hospital.

2. Implement advanced root cause analysis to find the real root causes of sentinel events and to develop effective fixes that STOP repeat incidents.

3. Share what your hospital learns about preventing sentinel events across the industry so that others will have the opportunity to improve.

That’s a start. After twelve years of reporting, shouldn’t every hospital get started?

If you are at a healthcare facility that is

  • reporting ALL sentinel events,
  • investigating most of your near-misses, 
  • doing good root cause analysis, 
  • implementing effective corrective actions that stop repeat sentinel events, 

I’d like to hear from you. We are holding a Summit in 2018 and I would like to document your success story.

If you would like to be at a hospital with a success story, but you need to improve your reporting, root cause analysis and corrective actions, contact us for assistance. We would be glad to help.

Carnival Pride NTSB Allision Report – Causal Factor Challenge

March 7th, 2017


The NTSB released their report on the allision of the Carnival Pride cruise ship with the pier in Baltimore last May. It caused over $2 million in damage to the pier and the ship, and crushed several vehicles when the passenger access gangway collapsed onto them. Luckily, no one was under or on the walkway when it fell. You can read the report here.


The report found that the second in command was conning the ship at the time.  He was approaching the pier with too much speed and at the wrong angle.  The report states that the accident occurred because the captain misjudged the power available when shifting to an alternate method of control to stop the ship.  It states there may have been a problem with the controls, or it may have been simple human error.  It also concluded that the passenger gangway was extended into the path of the ship, and that it did not need to be extended until passengers were ready to debark.


Gangway collapse after allision

While I’m sure these findings are true, I wonder what the actual root causes would be.  If the findings are read as written, we are really only looking at Causal Factors, and only a few of those to boot.  Based on only this information, I’m not sure what corrective actions could be implemented that would really prevent this in the future.  As I’m reading through the report, I actually see quite a few additional potential Causal Factors that would need to be researched and analyzed in order to find real root causes.

YOUR CHALLENGES:

  1. Identify the Causal Factors you see in this report.  I know you only have this limited information, but try to find the mistakes, errors, or equipment failures that led directly to this incident (assuming no other information is available).
  2. What additional information would you need to find root causes for the Causal Factors you have identified?
  3. What additional information would you like in order to identify additional Causal Factors?

Reading through this incident, it is apparent to me that there is a lot of missing information.  The problems identified are not framed as human performance-based root causes; only a few Causal Factors are identified.  Unfortunately, I’m also pretty sure that the corrective actions will probably be pretty basic (train the officer, update a procedure, etc.).

BONUS QUESTION:

For those that think I spelled “collision” wrong, what is the meaning of the word “allision”?  How many knew that without using Google?

Avoid the Danger of New Hires

March 1st, 2017

 

Is your safety program ready?

There is a feeling of cautious optimism in the oil sector, as the price of oil seems to have stabilized above $50/barrel. Rig count in the Permian has more than doubled since last spring. US EIA and JPMorgan are forecasting US production at near record levels of over 9.5 million barrels per day by the end of next year. US exports are up, with China ramping up oil purchases from the US, while OPEC production cuts are holding.

This all sounds good for the US oil sector. It is expected that hiring will start picking up, and in fact Jeff Bush, president of oil and gas recruiting firm CSI Recruiting, has said, “When things come back online, there’s going to be an enormous talent shortage of epic proportions.”

So, once you start hiring, who will you hire? Unfortunately, many of the 170,000 oil workers laid off over the past couple of years are no longer available. That experience gap is going to be keenly felt as you try to bring on new people. In fact, you’re probably going to be hiring many people with little to no experience in safe operation of your systems.

Are you prepared for this? How will you ensure your HSE, Quality, and Equipment Reliability programs are set up to handle this young, eager, inexperienced workforce? What you certainly do NOT want to see are your new hires getting hurt, breaking equipment, or causing environmental releases. Here are some things you should think about:

– Review old incidents and look for recurring mistakes (Causal Factors). Analyze for generic root causes. Conduct a TapRooT® analysis of any recurring issues to help eliminate those root causes.
– Update on-boarding processes to ensure your new hires are receiving the proper training.
– Ensure your HSE staff are prepared to perform more frequent audits and subsequent root cause analysis.
– Ensure your HSE staff are fully trained to investigate problems as they arise.
– Train your supervisors to conduct audits and detailed RCA.
– Conduct human factors audits of your processes. You can use the TapRooT® Root Cause Tree® to help you look for potential issues.
– Take a look at your corrective action program. Are you closing out actions? Are you satisfied with the types of actions that are in there?
– Your HSE team may also be new. Make sure they’ve attended a recent TapRooT® course so they are proficient in using TapRooT®.

Don’t wait until you have these new hires on board before you start thinking about these items. Your team is going to be excited and enthusiastic, trying to do their best to meet your goals. You need to be ready to give them the support and tools they need to be successful for themselves and for your company.

TapRooT® training may be part of your preparation.  You can see a list of upcoming courses HERE.

Remembering an Accident: Montana Coal and Iron Company

February 27th, 2017

Two small communities in Montana were tragically touched by a mining accident this day in 1943.


The Montana Coal and Iron Company employed most of the men living in Washoe and Bearcreek, Montana. There had never been a major accident like the one that took place on February 27, 1943. That morning, a massive explosion occurred in mine #3. It was so powerful that families in both local communities heard and felt it. As the supervisors tried to find the cause of the explosion, they couldn’t find anything. No exact root cause. No evidence to tie together to ensure it wouldn’t happen again. Sadly, all they could do was inform the families of their losses and shut down for good. The final fatality count was 74 of the 77 miners. All but 3. It was the largest accident they had ever had.

It’s stories like these that we can learn from. How could they have investigated better to find the root cause? What kind of corrective actions could have been implemented to keep these sorts of explosions from happening again?

What does a bad day look like?

February 20th, 2017


Rattlesnake turns up in toilet bowl in snake-infested U.S. house. Read article on Wingate Wire

 

Remembering an Accident: Baia Mare Cyanide Spill

January 30th, 2017

On January 30, 2000, a dam holding contaminated water burst due to excessive amounts of snowfall. This allowed 100,000 cubic meters of cyanide-contaminated water to spill into farmlands and the Somes River. In addition to the cyanide, heavy metals also poured into the rivers, leaving a long-lasting negative effect on the environment and contaminating the drinking water of 2.5 million Hungarians.

The spill had extreme consequences for the wildlife. On the Tisza stretch of the river, virtually all living animals were killed by the contaminated water. On the Serbian section of the river, 80% of all aquatic life was killed. Together the contaminated waters killed approximately 200 tons of fish and affected 62 species of fish, of which 20 are protected. The contamination was so bad that volunteers participated in removing the dead fish to prevent other wildlife from eating them and spreading the contamination throughout the food chain.

To read more about the wildlife disaster please click on the link below.

http://archive.rec.org/REC/Publications/CyanideSpill/ENGCyanide.pdf

Major disasters are often wake-up calls that remind us how important it is to ensure they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid them. Check out our global schedule at:

http://www.taproot.com/courses

Remembering an Accident: The Ashtabula Train Disaster of 1876

December 29th, 2016

December 29, 1876, this day 140 years ago, a massive train accident occurred in Ashtabula, Ohio. A Pacific Express train traveling through Ohio with approximately 160 passengers plus the train crew fell, along with the bridge it was crossing, killing over half of the passengers. The only reason some survived was that one part of the train managed to make it across the bridge before it completely collapsed.

Investigations at that time did not have nearly the research and development needed to come up with accurate root causes. Therefore, those in charge of specific projects (the engineers who constructed this bridge) were often blamed. The Lake Shore and Michigan Southern Railway, along with Charles Collins, engineer, and Amasa Stone, architect, were all at fault in the eyes of the investigative jury.

The results of these allegations? Collins committed suicide, Stone was publicly scorned, and the actual root cause of the bridge collapse was never discovered. Although it was a tragic event, I don’t think we can consider this a successful investigation.

Want to learn how the TapRooT® process works and how it can help your company perform better investigations than the one described here? Take a course.

Monday Accident & Lessons Learned: Railroad Bridge Structural Failure

December 12th, 2016


A Report from the UK Rail Accident Investigation Branch:

Structural failure caused by scour at Lamington viaduct, South Lanarkshire, 31 December 2015

At 08:40 hrs on Thursday 31 December 2015, subsidence of Lamington viaduct resulted in serious deformation of the track as the 05:57 hrs Crewe to Glasgow passenger service passed over at a speed of about 110 mph (177 km/h). The viaduct spans the River Clyde between Lockerbie and Carstairs. Subsequent investigation showed that the viaduct’s central river pier had been partially undermined by scour following high river flow velocity the previous day. The line was closed for over seven weeks until Monday 22 February 2016 while emergency stabilisation works were completed.

The driver of an earlier train had reported a track defect on the viaduct at 07:28 hrs on the same morning, and following trains crossed the viaduct at low speed while a Network Rail track maintenance team was deployed to the site. The team found no significant track defects and normal running was resumed with the 05:57 hrs service being the first train to pass on the down line. Immediately after this occurred at 08:40 hrs, large track movements were noticed by the team, who immediately imposed an emergency speed restriction before closing the line after finding that the central pier was damaged.

The viaduct spans a river bend which causes water to wash against the sides of the piers. It was also known to have shallow foundations. These were among the factors that resulted in it being identified as being at high risk of scour in 2005. A scheme to provide permanent scour protection to the piers and abutments was due to be constructed during 2015, but this project was deferred until mid-2016 because a necessary environmental approval had not been obtained.

To mitigate the risk of scour, the viaduct was included on a list of vulnerable bridges for which special precautions were required during flood conditions. These precautions included monitoring of river levels and closing the line if a predetermined water level was exceeded. However, this process was no longer in use and there was no effective scour risk mitigation for over 100 of the most vulnerable structures across Scotland. This had occurred, in part, because organisational changes within Network Rail had led to the loss of knowledge and ownership of some structures issues.

Although unrelated to the incident, the RAIB found that defects in the central river pier had not been fully addressed by planned maintenance work. There was also no datum level marked on the structure which meant that survey information from different sources could not easily be compared to identify change.

As a result of this investigation, RAIB has made three recommendations to Network Rail relating to:

  • the management of scour risk
  • the response to defect reports affecting structures over water
  • the management of control centre procedures.

Five learning points are also noted relating to effective management of scour risk.

For more information, see:

R222016_161114_Lamington_viaduct

Monday Accident & Lessons Learned: Ammonia leak kills 1 at Carlsberg brewery in UK

December 5th, 2016

SHP reported that a worker at the Carlsberg brewery died and 22 others were injured by a cooling system ammonia leak.

Are you using advanced root cause analysis to investigate near-misses and stop major accidents? Major accidents can be avoided.  That’s a lesson that all facilities with hazards should learn. For current advanced root cause analysis public courses being held around the world, see:

Upcoming TapRooT® Public Courses

TapRooT® can be used for both low to medium risk incidents (including near-misses) and major accidents. For people who will normally be investigating low risk incidents, the 2-Day TapRooT® Root Cause Analysis Course is recommended.

For people who will investigate all types of incidents including near-misses and incidents with major consequences (or a potential for major consequences), we recommend the 5-Day Advanced Team Leader Training.

Don’t wait! If you haven’t attended TapRooT® Training, get signed up today!

Monday Accident & Lessons Learned: Collision at Yafforth, UK, level crossing, 3 August 2016

November 28th, 2016


For a report from the UK Rail Accident Investigation Branch, see:

www.gov.uk

Remembering an Accident: MESIT Factory Collapse

November 23rd, 2016


This day in history… a disaster occurred that remains a mystery to this day.

On November 23, 1984, in Uherské Hradiště, Czechoslovakia, part of the MESIT factory collapsed. The exact root cause is unknown, but this manufacturing plant disaster killed 18 workers and injured 43. It being the mid-1980s, the communist regime decided to keep the accident a secret from the public. People speculate it was to hide confidential work from the rest of the world. However, the media caught wind of it from an anonymous source, and the story then spread across the sea to Western media. Although not much is known, it’s no secret that proactive measures need to be enforced and investigations must always be done to make improvements.

 

The Blame Culture Hurts Hospital Root Cause Analysis

November 22nd, 2016

If you don’t understand what happened, you will never understand why it happened.

You would think this is just common sense. But if it is, why would an industry allow a culture to exist that promotes blame and makes finding and fixing the root causes of accidents/incidents almost impossible?

I see the blame culture in many industries around the world. Here is an example from a hospital in the UK. This is an extreme example but I’ve seen the blame culture make root cause analysis difficult in many hospitals in many countries.

Dr. David Sellu (let’s just call him Dr. Death, as the UK tabloids did) was prosecuted for errors and delays that killed a patient. He ended up serving 16 months in high-security prisons because the prosecution alleged that his “laid back attitude” had caused delays in treatment that led to the patient’s death. However, the hospital had done a “secret” root cause analysis that showed that systemic problems (not the doctor) had led to the delays. A press investigation by the Daily Mail eventually unearthed the report that had been kept hidden. This press coverage eventually led to the doctor’s release, but not until he had served prison time and had his reputation completely trashed.


If you were a doctor or a nurse in England, would you freely cooperate with an investigation of a patient death? When you know that any perceived mistake might lead to jail? When problems that are identified with the system might be hidden (to avoid blame to the institution)? When your whole life and career is in jeopardy? When your freedom is on the line because you may be under criminal investigation?

This is an extreme example. But there are other examples of nurses, doctors, and pharmacists being prosecuted for simple errors that were caused by systemic problems that were beyond their control and were not thoroughly investigated. I know of some in the USA.

The blame culture causes performance improvement to grind to a halt when people don’t fully cooperate with initiatives to learn from mistakes.

TapRooT® Root Cause Analysis can help investigations move beyond blame by clearly showing the systemic problems that can be fixed to prevent (or at least greatly reduce) future repeat accidents. Attend a TapRooT® Root Cause Analysis Course and find out how you can use TapRooT® to help you change a blame culture into a culture of performance improvement.

For course information and course dates, see:

http://www.taproot.com/courses

Monday Accident & Lessons Learned: Remembering The Concorde Crash

November 21st, 2016

Found this TV show about the crash and thought it was interesting … What can you learn?

 

Monday Accident & Lessons Learned: Pilot Error is Root Cause

November 14th, 2016

The Navy still likes to blame folks as a root cause. At least that’s what I see in this report about “pilot error” keeping an F/A-18 Hornet from making it back to the carrier USS Theodore Roosevelt.

Seems there were lots of Causal Factors contributing to the loss of the $86 million aircraft, as described in this article on Military.com:

http://www.military.com/daily-news/2016/10/27/debris-pilot-error-caused-2015-jet-crash-persian-gulf-navy.html

I haven’t found the report or the video online.

What do you think of the report of the investigation?

 

Navy Root Cause Analysis Focused on Blame Vision, Crisis Vision, or Opportunity to Improve Vision?

November 3rd, 2016


In a short but interesting article in SEAPOWER, Vice Admiral Thomas J. Moore stated that Washington Navy Yard had just about completed the root cause analysis of the failure of the main turbine generators on the USS Ford (CVN 78). He said:

The issues you see on Ford are unique to those particular machines
and are not systemic to the power plant or to the Navy as a whole.

Additionally, he said:

“…it is absolutely imperative that, from an accountability standpoint, we work with Newport News
to find out where the responsibility lies. They are already working with their sub-vendors
who developed these components to go find where the responsibility and accountability lie.
When we figure that out, contractually we will take the necessary steps to make sure
the government is not paying for something we shouldn’t be paying for.”

That seems like a “Blame Vision” statement.

That Blame Vision statement was followed up by a statement straight from the Crisis Management Vision playbook. Admiral Moore emphasized that the Navy would set a date for commissioning the ship, which is behind schedule, by saying:

“Right now, we want to get back into the test program and you’ll see us do that here shortly.
As the test program proceeds, and we start to develop momentum, we’ll give you a date.
We decided, ‘Let’s fix this, let’s get to the root cause, let’s get back in the test program,’ and
when we do that, we’ll be sure to get a date out. I expect that before the end of the year
we will be able to set a date for delivery.”

Press statements are hard to interpret. Perhaps the Blame and Crisis Visions were just the way the reporters heard the statements or the way I interpreted them. An Opportunity to Improve Vision statement would have been more along the lines of:

We are working hard to discover the root causes of the failures of the main turbine generators
and we will be working with our suppliers to fix the problems discovered and apply the
lessons learned to improve the reliability of the USS Ford and subsequent carriers of this class,
as well as improving our contracting, design, and construction practices to reduce the
likelihood of future failures in the construction of new, cutting edge classes of warships.

Would you like to learn more about the Blame Vision, the Crisis Management Vision, and the Opportunity to Improve Vision and how they can shape your company’s performance improvement programs? Then watch for the release of our new book:

The TapRooT® Root Cause Analysis Philosophy – Changing the Way the World Solves Problems

It should be published early next year, and we will make sure all e-Newsletter readers are notified when the book is released.

To subscribe to the newsletter, provide your contact information at:

http://www.taproot.com/contact-us#newsletter

Fatal Theme Park Ride in Australia

November 2nd, 2016

Here’s the press report.

A similar ride in the US has been closed while the investigation is ongoing. 

Monday Accident & Lessons Learned: CSB Report on the Williams Olefins Plant Explosion and Fire

October 31st, 2016

Press release from the US Chemical Safety Board…


CSB Releases Final Case Study into 2013 Explosion and Fire at Williams Olefins Plant
in Geismar, Louisiana; Case Study Concludes that Process Safety Management Deficiencies
During 12 Years Prior to the Incident Led to the Explosion

October 19, 2016, Baton Rouge, LA — Today the CSB released its final report into the June 13, 2013, explosion and fire at the Williams Olefins Plant in Geismar, Louisiana, which killed two employees. The report concludes that process safety management program deficiencies at the Williams Geismar facility during the 12 years leading to the incident allowed a type of heat exchanger called a “reboiler” to be unprotected from overpressure, and ultimately rupture, causing the explosion.

The Williams Geismar facility produces ethylene and propylene for the petrochemical industry and employs approximately 110 people. At the time of the incident, approximately 800 contractors worked at the plant on an expansion project aimed at increasing the production of ethylene.

The incident occurred during non-routine operational activities that introduced heat to the reboiler, which was offline and isolated from its pressure relief device. The heat increased the temperature of a liquid propane mixture confined within the reboiler, resulting in a dramatic pressure rise within the vessel. The reboiler shell catastrophically ruptured, causing a boiling liquid expanding vapor explosion (BLEVE) and fire, which killed two workers; 167 others reported injuries, the majority of whom were contractors.

The CSB investigation revealed a poor process safety culture at the Williams Geismar facility, resulting in a number of process safety management program weaknesses. These include deficiencies in implementing the Management of Change (MOC), Pre-Startup Safety Review (PSSR), Process Hazard Analysis (PHA), and procedure programs that were causal to the incident:

  • Failure to appropriately manage or effectively review two significant changes that introduced new hazards involving the reboiler that ruptured—(1) the installation of block valves that could isolate the reboiler from its protective pressure relief device and (2) the administrative controls Williams relied on to control the position (open or closed) of these block valves. 
  • Failure to effectively complete a key hazard analysis recommendation intended to protect the reboiler that ultimately ruptured.
  • Failure to perform a hazard analysis and develop a procedure for the operations activities conducted on the day of the incident that could have addressed overpressure protection. 

CSB Chairperson Vanessa Allen Sutherland said, “The tragic accident at Williams was preventable and therefore unacceptable. This report provides important safety lessons that we urge other companies to review and incorporate within their own facilities.”

The CSB case study on the accident at Williams notes the importance of:

  • Using a risk-reduction strategy known as the “hierarchy of controls” to effectively evaluate and select safeguards to control process hazards.  This strategy could have resulted in Williams choosing to install a pressure relief valve on the reboiler that ultimately ruptured instead of relying on a locked open block valve to provide an open path to pressure relief, which is less reliable due to the possibility of human implementation errors;
  • Establishing a strong organizational process safety culture.  A weak process safety culture contributed to the performance and approval of a delayed MOC that did not identify a major overpressure hazard and an incomplete PSSR;
  • Developing robust process safety management programs, which could have helped to ensure PHA action items were implemented effectively; and
  • Ensuring continual vigilance in implementing process safety management programs to prevent major process safety incidents. 

Following the incident, Williams implemented improvements in managing process safety at the Geismar facility. These include, among others, redesigning the reboilers to prevent isolation from their pressure relief valves, improving its management of change process to be more collaborative, and updating its process hazard analysis procedure.

Investigator Lauren Grim said, “Williams made positive safety management changes at the Geismar facility following the accident, but more should be done to improve process safety and strengthen the plant’s process safety culture. Our report details important safety recommendations to protect workers at the Williams Geismar facility.”

To prevent future incidents and further improve process safety at the Geismar plant, the CSB recommended that Williams strengthen existing safety management systems and adopt additional safety programs. These strategies include conducting safety culture assessments, developing a robust safety indicators tracking program, and conducting detailed process safety program assessments.

The CSB also identified gaps in a key industry standard by the American Petroleum Institute (API) and issued recommendations to API to strengthen its “Pressure-relieving and Depressuring Systems” requirements to help prevent future similar incidents industry-wide.

Chairperson Sutherland said, “Most of the accidents the CSB investigates could have been prevented had process safety culture been a top priority at the facility where the incident occurred. These changes must be encouraged from the top with managers implementing effective process safety management programs.” 

The CSB is an independent, non-regulatory federal agency charged with investigating serious chemical accidents. The agency’s board members are appointed by the president and confirmed by the Senate. CSB investigations look into all aspects of chemical accidents, including physical causes such as equipment failure as well as inadequacies in regulations, industry standards, and safety management systems.

The Board does not issue citations or fines but does make safety recommendations to plants, industry organizations, labor groups, and regulatory agencies such as OSHA and EPA. Visit our website, www.csb.gov.  For more information, contact Communications Manager Hillary Cohen, cell 202-446-8094 or email public@csb.gov.

Defense Argues Jail Time is Wrong

October 28th, 2016

The Associated Press reported that attorneys for Don Blankenship, the imprisoned former CEO of Massey Energy, argue that he should not have been sentenced to jail for the 2010 coal mine explosion that killed 29 people.

Read more here:

http://www.pennenergy.com/articles/pennenergy/2016/10/coal-news-ex-coal-ceo-argues-he-s-wrongly-imprisoned-after-29-deaths.html?cmpid=EnlDailyPowerOctober272016&eid=294706529&bid=1569999

Note that I found this “wanted poster” online at http://mountainkeeper.blogspot.com.


Monday Accident & Lessons Learned: How Can Automation Get You Into Trouble?

October 24th, 2016


Automation dependency is an interesting topic. Here’s what a recent CALLBACK from the Aviation Safety Reporting System had to say about the topic…

http://asrs.arc.nasa.gov/docs/cb/cb_440.pdf
