Category: Investigations

Monday Accident & Lessons Learned: Fatality Near-Miss Because of Corrective Actions NI or Corrective Action NYI

November 24th, 2014


A recent rail accident report by the UK Rail Accident Investigation Branch described a facility maintenance failure that could have caused a fatality. Here’s a brief excerpt from the report:

 “At about 16:00 hours on Thursday 1 August 2013, concrete cladding fell from the bridge spanning Denmark Hill station, London, and most of the debris landed on platform 1. … The concrete cladding had been added to the bridge structure in about 1910 and fell because of gradual deterioration of the fixing arrangements. Deterioration of the cladding fixing arrangements had been reported to Network Rail over a period of at least four years but the resulting actions taken by Network Rail and its works contractor were inadequate.”

Under the Management System portion of the TapRooT® Root Cause Tree®, you will find the Corrective Actions Need Improvement and Corrective Actions Not Yet Implemented root causes under the Corrective Action near root cause. In the old days, we abbreviated these CANYI and CALTA (Corrective Action Not Yet Implemented and Corrective Actions Less Than Adequate).

The TapRooT® theory of management requires that management implement effective corrective actions once they are aware of a problem. The corrective action must not only be effective but also implemented in a timely manner (commensurate with the risk the problem presents).

In this case, I would probably lean toward the Corrective Action Not Yet Implemented root cause, although the Corrective Action Needs Improvement root cause might apply to the previous inadequate temporary fixes.

What can you learn from this?

Does your management support effective, timely corrective actions? Or do you have a large backlog of ineffective fixes? Maybe you need corrective action improvements!

Monday Accident & Lessons Learned: OGP Safety Alert #261 – WELL CONTROL COMPLICATIONS ON FIRST WELL FOR NEW DRILLSHIP

November 17th, 2014

WELL CONTROL COMPLICATIONS ON FIRST WELL FOR NEW DRILLSHIP

This incident occurred whilst drilling the first well following new rig commissioning and start-up. While drilling into suspected sand, the rig experienced a kick. The well was shut in with 180 psi Shut In Drill Pipe Pressure (SIDPP), 14 BBLS gained, 270 psi Shut In Casing Pressure (SICP), 12.3 PPG MW (surface) in the hole. Several attempts were made to circulate; the pipe was stuck and packed off. A riser mud cap of 13.4 PPG was installed and the well monitored through the choke line (static). The well was opened and monitored to be static. The stuck pipe was freed, circulation was re-established, and the well was again shut in. The Driller’s Method was then used to displace the influx from the well.

During the first circulation, a high gas alarm from the shaker exhaust sensor initiated a rig muster. The well was shut in and monitored. The shaker gas detectors and ventilation were checked and found operable. As the well kill was re-started, mud vented from the Mud Gas Separator (MGS) siphon breaker line, and all the shaker gas sensors alarmed. The rig was called to muster a second time. The well was shut in (indications were that gas had blown through the degasser liquid seal) and monitored. The liquid seal was lost, and the well was immediately shut in. The liquid seal was flushed again and the well kill started up, but the liquid seal was again lost and the well was shut in. Further investigation of the MGS identified a blind skillet plate in the spool piece between the MGS and the main gas vent line, which blocked the normal path for gas flow and misdirected the gas to the shaker room. The skillet plate had been installed during construction to prevent rainwater from entering the MGS.

The blind skillet plate was removed and the well kill re-started without further incident. No injuries were reported.

Figure 1: Blind flange located on top of vessel near deck ceiling. Not easily detected.


Figure 2: Removed blind flange from the 12” vent line of the mud gas separator.

What Went Wrong?

  1. Uncertainty about the pore pressure below base of salt resulted in the mud weight being too low to prevent an influx.
  2. Malfunction of the mudlogger gas sampling system during drilling operations led to unrepresentative gas unit data.
  3. A 12-in blind skillet plate installed in the MGS main gas vent line during rig construction was not removed before operations began.
  4. Personnel on the rig did not fully understand the operation of the MGS well enough to prevent the subsequent gas releases into the shaker room.
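
To put some numbers on the mud weight issue in item 1, the standard kill-weight-mud relationship is KMW = current mud weight + SIDPP / (0.052 × TVD), where 0.052 converts ppg to psi per foot. Below is a minimal Python sketch (an illustration, not part of the OGP alert) using the 180 psi SIDPP and 12.3 ppg surface mud weight reported above; the alert does not state the true vertical depth, so the depth used here is a hypothetical value only.

def kill_mud_weight_ppg(current_mw_ppg, sidpp_psi, tvd_ft):
    """Mud weight needed to balance formation pressure: KMW = MW + SIDPP / (0.052 * TVD)."""
    return current_mw_ppg + sidpp_psi / (0.052 * tvd_ft)

# Figures from the alert: SIDPP = 180 psi, surface mud weight = 12.3 ppg.
# The alert does not give the true vertical depth, so 10,000 ft is a HYPOTHETICAL value.
sidpp_psi = 180.0
surface_mw_ppg = 12.3
assumed_tvd_ft = 10_000.0

kmw = kill_mud_weight_ppg(surface_mw_ppg, sidpp_psi, assumed_tvd_ft)
print(f"Kill mud weight for the assumed depth: {kmw:.2f} ppg")  # about 12.65 ppg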

Corrective Actions and Recommendations

  1. Include flange management procedures in rig contractors’ rig acceptance procedures to ensure that temporary blanking flanges or skillets installed during construction or commissioning are removed prior to hand-over to operations. Verification of the rig contractor’s procedures should be included in the operator’s practices.
  2. Develop detailed instructions and procedures for preventative maintenance and calibration of the surface mud logging gas detection equipment that includes daily visual inspection of the gas trap impeller. Documentation for inspection and maintenance is to be maintained on the rig.
  3. Include critical items provided by Third Parties in the Safety Critical Equipment list and its associated controls.
  4. Implement awareness training for rig crews on the MGS Operating Procedure, LEL readings, mudlog gas detection, and significance and consequence of gas releases.

Source Contact

Safety alert number: 261
OGP Safety Alerts http://info.ogp.org.uk/safety

Disclaimer

Whilst every effort has been made to ensure the accuracy of the information contained in this publication, neither the OGP nor any of its members past present or future warrants its accuracy or will, regardless of its or their negligence, assume liability for any foreseeable or unforeseeable use made thereof, which liability is hereby excluded. Consequently, such use is at the recipient’s own risk on the basis that any use by the recipient constitutes agreement to the terms of this disclaimer. The recipient is obliged to inform any subsequent recipient of such terms.

This document may provide guidance supplemental to the requirements of local legislation. Nothing herein, however, is intended to replace, amend, supersede or otherwise depart from such requirements. In the event of any conflict or contradiction between the provisions of this document and local legislation, applicable laws shall prevail.

 

Root Cause Analysis Tip: Top 10 Investigation Mistakes (in 1994)

November 12th, 2014


At the first TapRooT® Summit in Gatlinburg, Tennessee, in 1994, attendees voted on the top investigation mistakes that they had observed. The list was published in the August 1994 Root Cause Network™ newsletter (© 1994). Here’s the top 10:

  1. Management revises the facts. (Or management says “You can’t say that.”)
  2. Assumptions become facts.
  3. Untrained team of investigators. (We assign good people/engineers to find causes.)
  4. Started investigation too late.
  5. Stopped investigation too soon.
  6. No systematic investigation process.
  7. Management can’t be the root cause.
  8. Supervisor performs investigation in their spare time.
  9. Fit the facts to the scenario. (Management tells the investigation team what to find.)
  10. Hidden agendas.

What do you think? Have things changed much since 1994? If your management supports using TapRooT®, you should have eliminated these top 10 investigation mistakes.

What do you think is the biggest investigation mistake being made today? Is it on the list above? Leave your ideas as a comment.

Monday Accident & Lessons Learned: UK RAIB Report – Freight train derailment near Gloucester

November 10th, 2014


Here’s the summary of the report:

At about 20:15 hrs on 15 October 2013, a freight train operated by Direct Rail Services, which was carrying containers, derailed about 4 miles (6.4 km) south west of Gloucester station on the railway line from Newport via Lydney. It was travelling at 69 mph (111 km/h) when the rear wheelset of the last wagon in the train derailed on track with regularly spaced dips in both rails, a phenomenon known as cyclic top. The train continued to Gloucester station where it was stopped by the signaller, who had become aware of a possible problem with the train through damage to the signalling system. By the time the train stopped, the rear wagon was severely damaged, the empty container it was carrying had fallen off, and there was damage to four miles of track, signalling cables, four level crossings and two bridges.


The immediate cause of the accident was a cyclic top track defect which caused a wagon that was susceptible to this type of track defect to derail. The dips in the track had formed due to water flowing underneath the track and although the local Network Rail track maintenance team had identified the cyclic top track defect, the repairs it carried out were ineffective. The severity of the dips required immediate action by Network Rail, including the imposition of a speed restriction for the trains passing over it, but no such restriction had been put in place. Speed restrictions had repeatedly been imposed since December 2011 but were removed each time repair work was completed; on each occasion, such work subsequently proved to be ineffective.

The type of wagon that derailed was found to be susceptible to wheel unloading when responding to these dips in the track, especially when loaded with the type of empty container it was carrying. This susceptibility was not identified when the wagon was tested or approved for use on Network Rail’s infrastructure.

The RAIB also observes: the local Network Rail track maintenance team had a shortfall in its manpower resources; and design guidance for the distance between the wheelsets on two-axle wagons could also be applied to the distance between the centres of the bogies on bogie wagons.

The RAIB has made seven recommendations. Four are directed to Network Rail and cover reviewing the drainage in the area where the train derailed, revising processes for managing emergency speed restrictions for cyclic top track defects, providing track maintenance staff with a way of measuring cyclic top after completing repairs, and investigating how cyclic top on steel sleeper track can be effectively repaired. Two are directed to RSSB and cover reviewing how a vehicle’s response to cyclic top is assessed and amending guidance on the design of freight wagons. One is directed to Direct Rail Services and covers mitigating the susceptibility of this type of wagon to cyclic top.

For the complete report, see:

http://www.raib.gov.uk/cms_resources.cfm?file=/141009_R202014_Gloucester.pdf

Tulsa Public 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training

November 4th, 2014

Final case studies being presented in our Tulsa, Oklahoma course.


For more information on our public courses click here or to book your own onsite course click here.

San Antonio 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training

November 4th, 2014

Students presenting their final case studies on day 5 of the course. Students always learn something new in the case that they brought to be reviewed.


For more information on our public courses click here or to book your own onsite course click here.

Did Retiring Warthogs to “Save Money” Lead to The Recent Friendly Fire Accident In Afghanistan?

October 30th, 2014


An interesting article in the Washington Post suggests that using a B-1B for night-time close air support and insufficient training led to the death of 7 Americans and 3 allies in a friendly fire accident.

See the story at THIS LINK and see what you think.

Monday Accident & Lessons Learned: UK RAIB Accident Report on a Passenger Becoming Trapped in a Train Door and Dragged a Short Distance at Newcastle Central Station

October 27th, 2014


Here is a summary of the report:

At 17:02 hrs on Wednesday 5 June 2013, a passenger was dragged by a train departing from platform 10 at Newcastle Central station. Her wrist was trapped by an external door of the train and she was forced to move beside it to avoid being pulled off her feet. The train reached a maximum speed of around 5 mph (8 km/h) and travelled around 20 metres before coming to a stop. The train’s brakes were applied either by automatic application following a passenger operating the emergency door release handle, or by the driver responding to an emergency signal from the conductor. The conductor, who was in the rear cab, reported that he responded to someone on the platform shouting at him to stop the train. The passenger suffered severe bruising to her wrist.

This accident occurred because the conductor did not carry out a safety check before signalling to the driver that the train could depart. Platform 10 at Newcastle Central is a curved platform and safe dispatch is particularly reliant upon following the correct dispatch procedure including undertaking the pre-dispatch safety checks.

The investigation found that although the doors complied with the applicable train door standard, they were, in certain circumstances, able to trap a wrist and lock without the door obstruction sensing system detecting it. Once the doors were detected as locked, the train was able to move.

In 2004, although the parties involved in the train’s design and its approval for service were aware of this hazard, the risk associated with it was not formally documented or assessed. The train operator undertook a risk assessment in 2010 following reports of passengers becoming trapped. Although they rated the risk as tolerable, the hazard was not recorded in such a way that it could be monitored and reassessed, either on their own fleet or by operators of similar trains.

As a consequence of this incident, RAIB has made six recommendations. One of these is for operators of trains with this door design to assess the risk of injuries and fatalities due to trapping and dragging incidents and take the appropriate action to mitigate the risk.

Two recommendations have been made to the train’s manufacturer. One of these is to reduce the risk of trapping on future door designs, and the other to review its design processes with respect to hazard identification and recording.
One recommendation has been made to the operator of the train involved in this particular accident. This is related to the management of hazards associated with the design of its trains and assessment of the risks of its train dispatch operations.

Two recommendations have been made to RSSB. One is to add guidance to the standard on passenger train doors to raise awareness that it may be possible to overcome door obstruction detection even though doors satisfy the tests specified within the standard. The other recommendation is the consideration of additional data which should be recorded within its national safety management information system to provide more complete data relating to the risk of trapping and dragging incidents.

See the complete report here:

http://www.raib.gov.uk/cms_resources.cfm?file=/140918_R192014_Newcastle.pdf

Root Cause Tip: Making Team Investigations Work (A Best of Article from the Root Cause Network™ Newsletter)

October 9th, 2014

Reprinted from the June 1994 Root Cause Network™ Newsletter, Copyright © 1994. Reprinted by permission. Some modifications have been made to update the article.


 

MAKING TEAM INVESTIGATIONS WORK

WHY USE A TEAM?

First, team investigations are now required for process safety related incidents at facilities covered by OSHA’s Process Safety Management regulation (1910.119, section m). But why require team investigations?

Quite simply because two heads are better than one! Why? Several reasons:

  • A team’s resources can more quickly investigate an incident before the trail goes cold.
  • For complex systems, more than one person is usually needed to understand the problem. 
  • Several organizations that were involved in the incident need to participate in the investigation.
  • A properly selected team is more likely to consider all aspects of a problem rather than focusing on a single aspect that a single investigator may understand and therefore choose to investigate. (The favorite cause syndrome.)


MAKING THE TEAM WORK

Investigating an incident using a team is different than performing an individual investigation. To make the team work, you need to consider several factors:

  • Who to include on the team.
  • The training required for team members.
  • Division of work between team members and coordinating the team’s activities.
  • Record keeping of the team’s meetings.
  • Software to facilitate the team’s work.
  • Keeping team members updated on the progress of the investigation (especially interview results) and maintaining a team consensus on what happened, the causal factors, and the root causes.


WHO’S ON THE TEAM?

The OSHA 1910.119 regulation requires that the team include a member knowledgeable of the process and a contractor representative if contractor employees were involved in the incident. Others you may want on the team include:

  • Engineering/technical assistance for hardware expertise.
  • Human engineering/ergonomics experts for human performance analysis.
  • Operations/maintenance personnel who understand the work practices.
  • An investigation coach/facilitator who is experienced in performing investigations.
  • A recorder to help keep up with meeting minutes, evidence documentation, and report writing/editing.
  • A union rep.
  • A safety professional.

TRAINING THE TEAM


A common belief is that “good people” naturally know how to investigate incidents. All they need to do is ask some questions and use their judgement to decide what caused the incident. Then they can use their creative thinking (brainstorming) to develop corrective actions. However, we’ve seen dramatic improvements in the ability of a team to effectively investigate an incident, find its root causes, and propose effective corrective actions when they are appropriately trained BEFORE they perform an investigation.

What kind of training do they need? Of course, more is better but here is a suggestion for the minimum training required…

  • Team Leaders / Coaches – A course covering advanced root cause analysis, interviewing, and presentation skills. We suggest the 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course. Also, the Team Leaders should be well versed in report writing and the company’s investigation policies. Coaches/facilitators should be familiar with facilitation skills/practices. Also, Team Leaders and Facilitators should continually upgrade their skills by attending the TapRooT® Summit.
  • Team Members – A course covering advanced root cause analysis skills. We suggest the 2-Day TapRooT® Incident Investigation and Root Cause Analysis Course.
  • People Involved in the Incident – It may seem strange to some that people involved in an incident need training to make the investigation more effective. However, we have observed that people are more cooperative if they understand the workings of the investigation (process and techniques) and that a TapRooT® investigation is not blame oriented. Therefore, we recommend that all line employees take a 4-hour TapRooT® Basics course. We have developed and provided this training for many licensed clients who have found that it helps their investigation effectiveness. 


 KEEPING ON TRACK

One real challenge for a team investigation is keeping a team consensus. Different team members will start the investigation with different points of view and different experiences. Turf wars or finger pointing can develop when these differences surface. This can be exacerbated when different team members perform different interviews and each gets just a few pieces of the puzzle. Therefore, the Team Leader must have a plan to keep all the team members informed of the information collected and to build a team consensus as the investigation progresses. Frequent team meetings using the SnapCharT® to help build consensus can be helpful. Using the Root Cause Tree® Dictionary to guide the root cause analysis process and requiring the recording of evidence that causes the team to select a root cause is an excellent practice.

MORE TO LEARN

This article is just a start. There is much more to learn. Experienced Team Leaders have many stories to tell about the knowledge they have learned “the hard way” in performing team incident investigations. But you can avoid having to learn many of these lessons the hard way if you attend the TapRooT® 5-Day Advanced Root Cause Analysis Team Leader Course. See the upcoming public courses by CLICKING HERE. Or contact us to schedule a course at your site.

Monday Accident & Lessons Learned: OGP Safety Alert #260 – Planning & Preparation … Key Elements for Prevention of MPD Well Control Accidents

October 6th, 2014

OGP Safety Alert #260

PLANNING AND PREPARATION – KEY ELEMENTS FOR PREVENTION OF MPD WELL CONTROL INCIDENTS

Summary

While drilling the 6″ reservoir section in an unconventional well, a kick-loss situation occurred. After opening the circulation port in a drillstring sub-assembly, LCM was pumped to combat losses. When the LCM subsequently returned to surface it plugged the choke. Circulation was stopped, the upper auto-Internal BOP (IBOP) was activated, and the choke manifold was lined up for flushing using a mud pump. During the course of this operation, mud backflow was observed at the Shaker Box. The Stand Pipe Manifold and mud pumps were isolated to investigate. After a period of monitoring the stand pipe pressure, the upper IBOP, located at the top of the drillpipe, was opened to attempt to bullhead mud into the drillstring. Upon opening, a pressure above 6,500 psi, exceeding the surface system safe working pressure, was observed. The upper IBOP was closed immediately and the surface system bled down. An attempt to close the lower manual IBOP as a second barrier was not successful. Due to the presence of high pressure, the Stand Pipe Manifold could not be used as the second barrier, nor could it be used for circulation. Well control experts were mobilised to perform hot tapping and freeze operations, which were successfully executed and allowed a high-pressure drillpipe tree to be installed in order to re-instate two barriers on the drillpipe.

What Went Wrong?

  1. With the down-hole circulation sub-assembly open in the drillstring, the upper IBOP was either leaking or remained open due to activation malfunction (this could not be substantiated), and a flow path developed up the drill pipe.
  2. The line-up for flushing the Choke Manifold with the mud pumps did not allow for adequate well monitoring. The set-up as used allowed unexpected flow up the drillstring to go undetected.
  3. It was incorrectly assumed that monitored volume gains were due only to mud transfer.
  4. Assessment of flow, volume and pressure risks did not consider in sufficient detail the concurrent operations involving pumping mud off line and a pressurized drill string.
  5. Operational focus was on choke manifold flushing whereas supervision should have maintained oversight of the broader situation including well monitoring.


Corrective Actions and Recommendations

  1. Develop a barrier plan for all operational steps; always update the plan as a result of operational changes prior to continuing (i.e., ensure a robust Management of Change process).
  2. Take the time required to verify that intended barriers are in place as per the Barrier Plan and, when activated, have operated properly (e.g., IBOPs).
  3. Install a landing nipple above the down hole circulation sub-assembly to allow a sealing drop dart to be run if required.
  4. Always close-in, or line-up, in such a way that allows for monitoring of all the closed-in pressures at all times.
  5. “Walk the lines” prior to commencing (concurrent) operations involving pressure and flow.
  6. Develop procedures in advance for flushing of the Well Control system, especially for recognisable potential cases of concurrent operations.
  7. Develop clear procedures covering all aspects of unconventional operations, including reasonably expected scenarios, and ensure effective communication of these to all relevant staff.

Disclaimer

Whilst every effort has been made to ensure the accuracy of the information contained in this publication, neither the OGP nor any of its members past present or future warrants its accuracy or will, regardless of its or their negligence, assume liability for any foreseeable or unforeseeable use made thereof, which liability is hereby excluded. Consequently, such use is at the recipient’s own risk on the basis that any use by the recipient constitutes agreement to the terms of this disclaimer. The recipient is obliged to inform any subsequent recipient of such terms.

This document may provide guidance supplemental to the requirements of local legislation. Nothing herein, however, is intended to replace, amend, supersede or otherwise depart from such requirements. In the event of any conflict or contradiction between the provisions of this document and local legislation, applicable laws shall prevail.

Monday Accident & Lessons Learned: Hot Work on Tanks Containing Biological or Organic Material

September 29th, 2014

This week’s accident information is from the US Chemical Safety Board …


CSB Chairperson Moure-Eraso Warns About Danger of Hot Work
on Tanks Containing Biological or Organic Material

 Begin Statement

Earlier this month a team of CSB investigators deployed to the Omega Protein facility in Moss Point, Mississippi, where a tank explosion on July 28, 2014, killed a contract worker and severely injured another. Our team, working alongside federal OSHA inspectors, found that the incident occurred during hot work on or near a tank containing eight inches of a slurry of water and fish matter known as “stickwater.”


 The explosion blew the lid off the 30-foot-high tank, fatally injuring a contract worker who was on top of the tank. A second contract worker on the tank was severely injured. CSB investigators commissioned laboratory testing of the stickwater and found telltale signs of microbial activity in the samples, such as the presence of volatile fatty acids in the liquid samples and offgassing of flammable methane and hydrogen sulfide.

The stickwater inside of the storage tank had been thought to be nonhazardous. No combustible gas testing was done on the contents of the tank before the hot work commenced.

This tragedy underscores the extreme importance of careful hot work planning, hazard evaluation, and procedures for all storage tanks, whether or not flammable material is expected to be present. Hot work dangers are not limited to the oil, gas, and chemical sectors where flammability hazards are commonplace.

The CSB has now examined three serious hot work incidents—all with fatalities—involving hot work on tanks of biological or organic matter. At the Packaging Corporation of America (PCA), three workers were killed on July 29, 2008, as they were performing hot work on a catwalk above an 80-foot-tall tank of “white water,” a slurry of pulp fiber waste and water.  CSB laboratory testing identified anaerobic, hydrogen-producing bacteria in the tank.  The hydrogen gas ignited, ripping open the tank lid and sending workers tumbling to their deaths.

On February 16, 2009, a welding contractor was killed while repairing a water clarifier tank at the ConAgra Foods facility in Boardman, Oregon. The tank held water and waste from potato washing; the CSB investigation found that water and organic material had built up beneath the base of the tank and decayed through microbial action, producing flammable gas that exploded.

Mixtures of water with fish, potatoes, or cardboard waste could understandably be assumed to be benign and pose little safety risk to workers. It is vital that companies, contract firms, and maintenance personnel recognize that in the confines of a storage tank, seemingly non-hazardous organic substances can release flammable gases at levels that cause the vapor space to exceed the lower flammability limit. Under those conditions, a simple spark or even conducted heat from hot work can prove disastrous.

I urge all companies to follow the positive example set by the DuPont Corporation, after a fatal hot work tragedy occurred at a DuPont chemical site near Buffalo, New York. Following CSB recommendations from 2012, DuPont instituted a series of reforms to hot work safety practices on a global basis, including requirements for combustible gas monitoring when planning for welding or other hot work on or near storage tanks or adjacent spaces.

Combustible gas testing is simple, safe, and affordable. It is a recommended practice of the National Fire Protection Association, The American Petroleum Institute, FM Global, and other safety organizations that produce hot work guidance. Combustible gas testing is important on tanks that hold or have held flammables, but it is equally important—if not more so—for tanks where flammables are not understood to be present. It will save lives.

END STATEMENT
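
As a rough illustration of what the combustible gas testing mentioned in the statement involves, portable detectors typically report readings as a percentage of the lower explosive limit (LEL). The short Python sketch below is our illustration, not part of the CSB statement; the LEL values are rounded published figures, and the 10% LEL acceptance threshold is a commonly used industry practice included here only as an assumption.

# Approximate lower explosive limits (volume % in air); published values, rounded.
APPROX_LEL_VOL_PCT = {"methane": 5.0, "hydrogen": 4.0}

def percent_of_lel(gas, measured_vol_pct):
    """Express a measured gas concentration (volume %) as a percentage of its LEL."""
    return 100.0 * measured_vol_pct / APPROX_LEL_VOL_PCT[gas]

def hot_work_gas_test_ok(gas, measured_vol_pct, limit_pct_lel=10.0):
    """True only if the reading is below the assumed 10% LEL acceptance threshold."""
    return percent_of_lel(gas, measured_vol_pct) < limit_pct_lel

reading_vol_pct = 0.25  # hypothetical methane reading in a tank's vapor space
print(f"{percent_of_lel('methane', reading_vol_pct):.0f}% LEL")          # 5% LEL
print("Proceed with hot work?", hot_work_gas_test_ok("methane", reading_vol_pct))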

More resources:

http://www.csb.gov/e-i-dupont-de-nemours-co-fatal-hotwork-explosion/

http://www.csb.gov/packaging-corporation-storage-tank-explosion/

http://www.csb.gov/seven-key-lessons-to-prevent-worker-deaths-during-hot-work-in-and-around-tanks/

http://www.csb.gov/motiva-enterprises-sulfuric-acid-tank-explosion/

http://www.csb.gov/partridge-raleigh-oilfield-explosion-and-fire/

 

Monday Accident & Lesson Learned: Fatal accident at Barratt’s Lane No.2 footpath crossing, Attenborough, Nottingham, 26 October 2013

September 22nd, 2014


The UK Rail Accident Investigation Branch issued a report about the fatal accident of a train striking a pedestrian at a footpath crossing near Nottingham, UK. See the entire report and the one lesson learned at:

http://www.raib.gov.uk/cms_resources.cfm?file=/140821_R182014_Barratts_Lane.pdf

Best of The Root Cause Network™ Newsletter – Beat ‘Em or Lead ‘Em … A Tale of Two Plants

September 18th, 2014

Note: We have decided to republish articles from the Root Cause Network™ Newsletter that we find particularly interesting and still applicable today. These are used with the permission of the original publisher. In some cases, we have updated some parts of the text to keep them “current” but we have tried to present them in their original form as much as possible. If you enjoy these reprints, let us know. You should expect about two per month.


BEAT ‘EM OR LEAD ‘EM
A TALE OF TWO PLANTS

You’re the VP of a 1000 MW nuclear power plant. A senior reactor operator in the control room actuates the wrong valve.

The turbine trips.

The plant trips.

If the plant had just 30 more days of uninterrupted operation, your utility would have been eligible for a better rate structure based on the Public Utility Commission’s (PUC) policy that rewards availability. Now you can kiss that hefty bonus check (that is tied to plant performance goals) good-bye.

To make matters worse, during the recovery, a technician takes a “shortcut” while performing a procedure and disables several redundant safety circuits. An inspector catches the mistake and now the Nuclear Regulatory Commission (the plant’s nuclear safety regulator – the NRC) is sending a special inspection team to look at the plant’s culture. That could mean days, weeks or even months of down time due to regulatory startup delays.

What do you do???

PLANT 1 – RAPID ACTION

He who hesitates is lost!

Corporate expects heads to roll!

You don’t want to be the first, so you:

  1. Give the operator a couple of days off without pay. Tell him to think about his mistake. He should have used STAR! If he isn’t more careful next time, he had better start looking for another job.
  2. Fire the technician. Make him an example. There is NO excuse for taking a shortcut and not following procedures. Put out another memo telling everyone that following procedure is a “condition of employment.”
  3. Expedite the root cause analysis. Get it done BEFORE the NRC shows up. There is no time for detailed analysis. Besides, everyone knows what’s wrong – the operator and technician just goofed up! (Human error is the cause.) Get the witch-hunt over fast to help morale.
  4. Write a quick report. Rapid action will look good to the regulator. We have a culture that does not accept deviation from strict rules and firing the technician proves that. Tell them that we are emphasizing the human performance technology of STAR. Maybe they won’t bother us any more.
  5. Get the startup preparation done. We want to be ready to go back on-line as soon as we can to get the NRC off our backs and a quick start-up will keep the PUC happy.

PLANT 2 – ALTERNATIVE ACTION

No one likes these types of situations, but you are prepared, so you:

  1. Start a detailed root cause analysis. You have highly trained operations and maintenance personnel, system and safety engineers, and human factors professionals to find correctable root causes. And your folks don’t just fly by the seat of their pants. They are trained in a formal investigation process that has been proven to work throughout a variety of industries – TapRooT®! It helps them be efficient in their root cause analysis efforts. And they have experts to help them if they have problems getting to the root causes of any causal factors they identify.
  2. Keep the NRC Regional Office updated on what your team is finding. You have nothing to hide. Your past efforts sharing your root cause analyses mean that they have confidence that you will do a thorough investigation.
  3. “Keep the hounds at bay.” Finding the real root causes of problems takes the time needed to perform a thorough investigation. Resist the urge (based on real or perceived pressure) to give in to knee-jerk reactions. You don’t automatically punish those involved. You believe your people consistently try to do their best. You have avoided the negative progression that starts with a senseless witch-hunt, progresses to fault finding, and results in future lies and cover-ups.
  4. Check to see that the pre-staged corrective maintenance has started. Plant down time – even unscheduled forced outages – is too valuable to waste. You use every chance to fix small problems  to avoid the big ones.
  5. Keep up to date on the root cause analysis team’s progress. Make sure you do everything in your power to remove any roadblocks that they face.
  6. Get ready to reward those involved in the investigation and in developing and implementing effective corrective actions. This is a rare opportunity to show off your team’s capabilities while in the heat of battle. Reward them while the sweat is still on their brow.
  7. Be critical of the investigation that is presented to you. Check that all possible root causes were looked into. Publicly ask: “What could I have done to prevent this incident?” Because of your past efforts, the team will be ready for good questions and will have answers.

DIFFERENCES

Which culture is more common in your industry?

Which plant would you rather manage?

Where would you rather work?

What makes Plant 1 and Plant 2 so different? It is really quite simple…

  • Management Attitude: A belief in your people means that you know they are trying to do their best. There is no higher management purpose than to help them succeed by giving them the tools they need to get the job done right.
  • Trust: Everyone trusts everyone on this team. This starts with good face to face communications. It includes a fair application of praise and punishment after a thorough root cause analysis.
  • Systematic Approach and Preparation: Preparation is the key to success and the cause of serendipity. Preparation requires planning and training. A systematic approach allows outstanding performance to be taught and repeated. That’s why a prepared plant uses TapRooT®.

Which plant exhibited these characteristics?

HOW TO CHANGE

Can you change from Plant 1 to Plant 2? YES! But how???

The first step has to be made by senior managers. The right attitude must be adopted before trust can be developed and a systematic approach can succeed.

Part of exhibiting the belief in your people is making sure that they have the tools they need. This includes:

  • Choosing an advanced, systematic root cause analysis tool (TapRooT®).
  • Adopting a written accident/incident investigation policy that shows management’s commitment to thorough investigations and detailed root cause analysis.
  • Creating a database to trend incident causes and track corrective actions to completion.
  • Training people to use the root cause analysis tool and the databases that go with them.
  • Making sure that people have time to do proper root cause analysis, help if things get difficult, and the budget to implement effective corrective actions.
  • Providing a staff to assist with and review important incidents, to trend investigation results, and to track the implementation of corrective actions and report back to management on how the performance improvement system is performing.

Once the proper root cause analysis methods (that look for correctable root causes rather than placing blame) are implemented and experienced by folks in the field, trust in management will become a foregone conclusion.

YOU CAN CHANGE

Have faith that your plant can change. If you are senior management, take the first step: Trust your people.

Next, implement TapRooT® to get to the real, fixable causes of accidents, incidents, and near-misses. See Chapter 6 of the © 2008 TapRooT® Book to get great ideas that will make your TapRooT® implementation world class.

_ _ _

Copyright 2014 by System Improvements, Inc. Adapted from an article in the March 1992 Root Cause Network™ Newsletter (© 1992 by System Improvements – used by permission) that was based on a talk given by Mark Paradies at the 1990 Winter American Nuclear Society Meeting.

Monday Accident & Lesson Learned: Wheelchair / Baby Stroller Rolls onto the Tracks

September 15th, 2014


The UK Rail Accident Investigation Branch has published a report about two accidents where things (a wheelchair and a baby stroller) rolled onto the tracks.

To see the report and the one lesson learned, CLICK HERE.

Monday Accident & Lessons Learned: NTSB Investigation – Grounding and Sinking of Towing Vessel Stephen L. Colby

September 8th, 2014


Below is the NTSB investigation PDF. Read it and see what you think of the “probable cause” of the accident … “The National Transportation Safety Board determines that the probable cause of the grounding and sinking of the Stephen L. Colby was the failure of the master and mate to ensure sufficient underkeel clearance for the intended transit through the accident area.”

See the whole report here:

Colby.pdf

 

 

Monday Accident & Lessons Learned: RAIB Investigation Report – Road Rail Vehicle Runs Away, Strikes Scaffold

September 1st, 2014


Here is the summary of the report from the UK Rail Accident Investigation Branch:

At about 03:00 hrs on Sunday 21 April 2013, a road rail vehicle (RRV) ran away as it was being on-tracked north of Glasgow Queen Street High Level Tunnel on a section of railway sloping towards the tunnel. The RRV ran through the tunnel and struck two scaffolds that were being used for maintenance work on the tunnel walls. A person working on one of the scaffolds was thrown to the ground and suffered severe injuries to his shoulder. The track levelled out as the RRV ran into Glasgow Queen Street station and, after travelling a total distance of about 1.1 miles (1.8 kilometres), it stopped in platform 5, about 20 metres short of the buffer stop.

The RRV was a mobile elevating work platform that was manufactured for use on road wheels and then converted by Rexquote Ltd to permit use on the railway. The RRV’s road wheels were intended to provide braking in both road and rail modes. This was achieved in rail mode by holding the road wheels against a hub extending from the rail wheels. The design of the RRV meant that during a transition phase in the on-tracking procedure, the road wheel brakes were ineffective because the RRV was supported on the rail wheels but the road wheels were not yet touching the hubs. Although instructed to follow a procedure which prevented this occurring simultaneously at both ends of the RRV, the machine operator unintentionally put the RRV into this condition. He was (correctly) standing beside the RRV when it started to move, and the control equipment was pulled from his hand before he could stop the vehicle.

The RRV was fitted with holding brakes acting directly on both rail wheels at one end of the vehicle. These were intended to prevent a runaway if non-compliance with the operating instructions meant that all road wheel brakes were ineffective. The holding brake was insufficient to prevent the runaway due to shortcomings in Rexquote’s design, factory testing and specification of maintenance activities. The lack of an effective quality assurance system at Rexquote was an underlying factor. The design of the holding brake was not reviewed when the RRV was subject to the rail industry vehicle approval process because provision of such a brake was not required by Railway Industry Standards.

The RAIB has identified one learning point which reminds the rail industry that the rail vehicle approval process does not cover all aspects of rail vehicle performance. The RAIB has made four recommendations. One requires Rexquote to implement an effective quality assurance system and another, supporting an activity already proposed by Network Rail, seeks to widen the scope of safety-related audits applied by Network Rail to organisations supplying rail plant for use on its infrastructure. A third recommendation seeks improvements to the testing process for parking brakes provided on RRVs. The final recommendation, based on an observation, relates to the provision of lighting on RRVs.

To read the whole report, see:

http://www.raib.gov.uk/cms_resources.cfm?file=/140717_R152014_Glasgow_Queen_Street.pdf

UK Rail Accident Investigation Branch investigates electrical arcing and fire on a Metro train and parting of the overhead line at Walkergate station, Newcastle upon Tyne, on 11 August 2014

August 29th, 2014

Here’s the press release …

Electrical arcing and fire on a Metro train and parting of the overhead line
at Walkergate station, Newcastle upon Tyne, on 11 August 2014

RAIB is investigating an accident which occurred on the Tyne and Wear Metro system at Walkergate station on Monday 11 August 2014.

At 18:56 hrs a two-car Metro train, travelling from South Shields to St James, arrived at Walkergate station. While standing in the station an electrical fault occurred to a line breaker mounted on the underside of the train, which produced some smoke. It also caused the circuit breakers at the sub-stations supplying the train with electricity, via the overhead line, to trip (open). About one minute later power was restored to the train. There followed a brief fire in the area of the initial electrical fault and further smoke. Shortly afterwards, the overhead line above the train parted and the flailing ends of the wire fell on the train roof and one then fell on to the platform, producing significant arcing and sparks for around 14 seconds. Fortunately, there was no-one on the platform at the time. However, there were at least 30 passengers on the train who self-evacuated on to the platform using the train doors’ emergency release handles. The fire service attended but the fire was no longer burning. No-one was reported to be injured in the accident and there was no significant damage to the interior of the train.

Image courtesy of Tyne and Wear Metro

RAIB’s investigation will consider the sequence of events and factors that led to the accident, and identify any safety lessons. In particular, it will examine:

  • the reasons for the electrical fault;
  • the response of the staff involved, including the driver and controllers;
  • the adequacy of the electrical protection arrangements; and
  • actions taken since a previous accident of a similar type that occurred at South Gosforth in January 2013 (RAIB report 18/2013).

RAIB’s investigation is independent of any investigations by the safety authority. RAIB will publish its findings at the conclusion of the investigation. The report will be available on the RAIB’s website. 

You can subscribe to automated emails notifying you when the RAIB publishes its report and bulletins.

RAIB would like to hear from any passengers who were on the train. Any information provided to assist our safety investigation will be treated in strict confidence. If you are able to help the RAIB please contact us by email on enquiries@raib.gov.uk or by telephoning 01332 253300

Monday Accident & Lessons Learned: OGP Safety Alert #259 – FATALITY DURING CONFINED SPACE ENTRY

August 25th, 2014

 

FATALITY DURING CONFINED SPACE ENTRY

  • Two cylindrical foam sponge pads had been inserted in a riser guide tube to form a plug. Argon gas had been pumped into the 60 cm space between the two sponges as shielding gas for welding on the exterior of the riser guide tube.
  • After completion of the welding, a worker descended into the riser guide tube by rope access to remove the upper sponge. While inside, communication with the worker ceased.
  • A confined space attendant entered the riser guide tube to investigate. Finding his colleague unconscious, he called for rescue and then he too lost consciousness.
  • On being brought to the surface, the first worker received CPR and was taken to hospital, but he died of suspected cardio-respiratory failure about 2 hours after descending into the space. The co-worker recovered.

 


 

What Went Wrong?

  • Exposure to an oxygen-deficient atmosphere: The rope access team members (victim and co-worker) were unaware of the asphyxiation risk from the argon gas shielding.
  • Gas test: There was no gas test done immediately prior to the confined space entry. The act of removing the upper foam sponge itself could have released (additional) argon, so any prior test would not be meaningful.
  • Gas detectors: Portable gas detectors were carried, but inside a canvas bag. The co-worker did not hear any audible alarm from the gas detector when he descended into the space.
  • Evacuation time: It took 20 minutes to bring the victim to the deck after communication failed.

Corrective Actions and Recommendations

Lessons:

  • As a first step: assess whether the nature of the work absolutely justifies personnel entering the confined space.
  • Before confined space entry:
    - identify and communicate the risks to personnel carrying out the work
    - define requirements, roles and responsibilities to control, monitor and supervise the work
    - check gas presence; understand how the work itself may change the atmospheric conditions
    - ensure adequate ventilation, lighting, means of communication and escape
  • Ensure step by step work permits are issued and displayed for each work phase, together with specific job safety analyses
  • During confined space entry:
    - station a trained confined space attendant at the entrance to the space at all times
    - ensure that communication and rescue equipment and resources are readily available
    - carry and use portable/personal gas detectors throughout the activity 

ACTION

Review your yard confined space entry practice, keeping in mind the lessons learned from this incident.
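
As part of that review, it may help to illustrate why an argon purge is so dangerous in a confined space. The Python sketch below is our illustration and is not part of the OGP alert; normal air is taken as about 20.9% oxygen by volume, and the 19.5% oxygen-deficiency threshold is the figure commonly used in confined space entry criteria (for example, US OSHA 1910.146) and is used here only as an assumption. Your own procedures govern actual entry limits.

NORMAL_O2_VOL_PCT = 20.9          # oxygen in normal air, volume %
O2_DEFICIENT_BELOW_PCT = 19.5     # common oxygen-deficiency threshold (e.g., US OSHA 1910.146)

def o2_after_inert_purge(inert_fraction):
    """Oxygen (volume %) left when a fraction of the space atmosphere is displaced by argon."""
    return NORMAL_O2_VOL_PCT * (1.0 - inert_fraction)

for argon_fraction in (0.05, 0.10, 0.25, 0.50):
    o2 = o2_after_inert_purge(argon_fraction)
    verdict = "oxygen deficient - no entry" if o2 < O2_DEFICIENT_BELOW_PCT else "above deficiency threshold"
    print(f"{argon_fraction:.0%} argon -> {o2:.1f}% O2 ({verdict})")

Even a 10% argon fraction drops the oxygen level below the deficiency threshold, which is why a gas test immediately before entry, and continuous personal monitoring during the work, matter so much.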

Safety alert number: 259

OGP Safety Alerts http://info.ogp.org.uk/safety/

Disclaimer
 
Whilst every effort has been made to ensure the accuracy of the information contained in this publication, neither the OGP nor any of its members past present or future warrants its accuracy or will, regardless of its or their negligence, assume liability for any foreseeable or unforeseeable use made thereof, which liability is hereby excluded. Consequently, such use is at the recipient’s own risk on the basis that any use by the recipient constitutes agreement to the terms of this disclaimer. The recipient is obliged to inform any subsequent recipient of such terms.

This document may provide guidance supplemental to the requirements of local legislation. Nothing herein, however, is intended to replace, amend, supersede or otherwise depart from such requirements. In the event of any conflict or contradiction between the provisions of this document and local legislation, applicable laws shall prevail.

Monday Accident & Lessons Learned: OGP Safety Alert – Well Control Incident – Managing Gas Breakout in SOBM

August 18th, 2014

Safety Alert Number: 258 

OGP Safety Alerts http://info.ogp.org.uk/safety

While drilling at a depth of 4747m, the well was shut-in due to an increase in returns with a total gain of 17bbls recorded. The well kill needed an increase in density from 1.40sg to 1.61sg to achieve a stable situation. With the well open the BHA was pumped out to the shoe and tripped 400m to pick up a BOP test tool to perform the post-kill BOP test.

The BOP and choke manifold test were performed as well as some rig maintenance. The BHA was then tripped into the hole and the last 2 stands were washed to bottom. Total pumps-off time without circulation was 44 hours.

Gas levels during the bottoms-up initially peaked at around 14% and then dropped steadily to around 5%. HPHT procedures were being followed and this operation required circulation through the choke for the last 1/3 of the bottoms up. This corresponds to taking returns through the choke after 162m3 is circulated.

After 124 cubic metres of the bottoms-up had been pumped, the gas detector at the bell nipple was triggered. Simultaneously, mud started to be pushed up out of the hole, reaching a height of around 1 joint above the drill floor. The flow continued for around 30 seconds, corresponding to a bubble of gas exiting the riser. The pumps and rotation were shut down, followed by closure of the diverter, annular and upper pipe rams. Approximately 2 bbls of SBM were lost overboard through the diverter line. The flow stopped by itself after just a few seconds and casing pressure was recorded as zero. No-one was on the drill floor at the time and no movement, damage or displacement of equipment occurred.

After verifying that there was no flow (monitored on the stripping tank) the diverter was opened and 10 cubic metres of mud used to refill the riser, equal to a drop in height of 56m.

The riser was circulated to fresh mud with maximum gas levels recorded at 54%. This was followed by a full bottoms up through the choke.

A full muster of POB was conducted due to the gas alarms being triggered.

What Went Wrong?

Conclusion – An undetected influx was swabbed into the well during the BOP test, which was then circulated up inadvertently through a non-closed system, breaking out in the riser.

  1. Stroke counter was reset to zero after washing 3 stands to bottom (this resulted in 136 cubic metres of circulation not being accounted for in the bottoms up monitoring).
  2. Review of Monitoring While Drilling Annular Pressure memory logs identified several swabbing events – the main event was when the BOP test tool was POOH from the wellhead – ESD as measured by APWD dropped to 1.59sg on 10 or 11 occasions.
  3. Swabbing was exacerbated by Kill Weight Mud not having sufficient margin above PP.
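
To make item 1 above concrete, here is a minimal Python sketch of bottoms-up volume tracking (an illustration only, not part of the OGP alert). The 136 m3 of unaccounted circulation and the 162 m3 choke switch-over point come from the alert text; since 162 m3 corresponds to two-thirds of the bottoms-up, the implied bottoms-up volume is roughly 243 m3. The pump output and stroke-counter reading below are hypothetical values chosen for illustration.

PUMP_OUTPUT_M3_PER_STROKE = 0.016   # hypothetical pump output (not in the alert)
SWITCH_TO_CHOKE_AT_M3 = 162.0       # from the alert: take returns through the choke after 162 m3
UNACCOUNTED_M3 = 136.0              # from the alert: volume pumped before the counter was reset

def circulated_m3(strokes_on_counter, missing_m3=0.0):
    """True circulated volume = counted strokes * pump output + anything pumped before a reset."""
    return strokes_on_counter * PUMP_OUTPUT_M3_PER_STROKE + missing_m3

strokes = 7750  # hypothetical counter reading, equivalent to about 124 m3 counted
believed = circulated_m3(strokes)                  # what the reset counter suggested
actual = circulated_m3(strokes, UNACCOUNTED_M3)    # what had really been circulated
print(f"Counter: {believed:.0f} m3 -> switch to choke? {believed >= SWITCH_TO_CHOKE_AT_M3}")
print(f"Actual:  {actual:.0f} m3 -> switch to choke? {actual >= SWITCH_TO_CHOKE_AT_M3}")

With the reset counter, the crew believed the switch-over point had not yet been reached, while the true circulated volume was already past it, so the gas broke out above a non-closed system.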

Corrective Actions and Recommendations:

  • Take into account all washing to bottom for any circulation where bottoms up is to be via choke.
  • Tool Pushers shall cross-check the bottoms-up calculation and jointly agree on any reset of the stroke counter.
  • All BHA tripping speeds to be modeled so that potential swabbing operations are identified and so that tripping speed limits can be specified.
  • Verify, when possible, actual swabbing magnitude using PWD memory logs (i.e., after a trip out of the hole).
  • Pumping out (even inside liner/casing) shall be considered in tight tolerance liner/drilling BHA. Modeling shall be used to underpin the decision.

Disclaimer

Whilst every effort has been made to ensure the accuracy of the information contained in this publication, neither the OGP nor any of its members past present or future warrants its accuracy or will, regardless of its or their negligence, assume liability for any foreseeable or unforeseeable use made thereof, which liability is hereby excluded. Consequently, such use is at the recipient’s own risk on the basis that any use by the recipient constitutes agreement to the terms of this disclaimer. The recipient is obliged to inform any subsequent recipient of such terms.

This document may provide guidance supplemental to the requirements of local legislation. Nothing herein, however, is intended to replace, amend, supersede or otherwise depart from such requirements. In the event of any conflict or contradiction between the provisions of this document and local legislation, applicable laws shall prevail.

Food Industry Related OSHA General Duty Clause Citations: Did you make the list? Now what?

August 13th, 2014

OSHA General Duty Clause Citations: 2009-2012: Food Industry Related Activities


A quick search of the OSHA database for Food Industry-related citations suggests that Dust & Fumes, along with Burns, are the top driving hazard potentials.

Each citation fell under OSH Act of 1970 Section 5(a)(1): The employer did not furnish employment and a place of employment which were free from recognized hazards that were causing or likely to cause death or serious physical harm to employees in that employees were exposed……

Each company had to correct the potential hazard and respond using an Abatement Letter that includes words such as:

The hazard referenced in Inspection Number [insert 9-digit #] for violation identified as Citation [insert #] and Item [insert #] was corrected on [insert date] by:

 

Okay, so you have a regulatory finding, and listed above is one of the OSHA processes for correcting it. Sounds easy, right? Not so fast…

….are the findings correct?

….if a correct finding, are you correcting the finding or fixing the problems that allowed the issue?

….is the finding a generic/systemic issue?

As many of our TapRooT® clients have learned, if you want a finding to go away, you must perform a proper root cause analysis first. They use tools such as:

 

  • SnapCharT®: a simple, visual technique for collecting and organizing information quickly and efficiently.
  • Root Cause Tree®: an easy-to-use resource to determine root causes of problems.
  • Corrective Action Helper®: helps people develop corrective actions by seeing outside the box.

First, you must define the Incident or Scope of the analysis. When analyzing a finding, it is critical that the scope of your investigation is not simply that you received a finding. The scope of the investigation should be that you have a potential uncontrolled hazard or access to a potential hazard.

Thinking this way should also trigger the need to perform a Safeguard Analysis during evidence collection and during corrective action development. Here are a few blog articles that discuss this tool we teach in our TapRooT® Courses.

Monday Accident & Lesson NOT Learned: Why Do We Use the Weakest Corrective Actions From the Hierarchy of Safeguards?
http://www.taproot.com/archives/28919#comments

Root Cause Analysis Tip: Analyze Things That Go Right … The After-Action Review

http://www.taproot.com/archives/43841

If you have not been taking OSHA findings to the right level of action, you may want to benchmark your current action plan and root cause analysis process; see below:

BENCHMARKING ROOT CAUSE ANALYSIS

http://www.taproot.com/archives/45408

 

Monday Accident & Lessons Learned: CDC Report on the Potential Exposure to Anthrax

August 11th, 2014 by

Here’s the Executive Summary from the CDC Report:

Executive Summary

The Centers for Disease Control and Prevention (CDC) conducted an internal review of an incident that involved an unintentional release of potentially viable anthrax within its Roybal Campus, in Atlanta, Georgia. On June 5, 2014, a laboratory scientist in the Bioterrorism Rapid Response and Advanced Technology (BRRAT) laboratory prepared extracts from a panel of eight bacterial select agents, including Bacillus anthracis (B. anthracis), under biosafety level (BSL) 3 containment conditions. These samples were being prepared for analysis using matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, a technology that can be used for rapid bacterial species identification.

What Happened

This protein extraction procedure was being evaluated as part of a preliminary assessment of whether MALDI-TOF mass spectrometry could provide a faster way to detect anthrax compared to conventional methods and could be utilized by emergency response laboratories. After chemical treatment for 10 minutes and extraction, the samples were checked for sterility by plating portions of them on bacterial growth media. When no growth was observed on sterility plates after 24 hours, the remaining samples, which had been held in the chemical solution for 24 hours, were moved to CDC BSL-2 laboratories. On June 13, 2014, a laboratory scientist in the BRRAT laboratory BSL-3 lab observed unexpected growth on the anthrax sterility plate. While the specimens plated on this plate had only been treated for 10 minutes as opposed to the 24 hours of treatment of specimens sent outside of the BSL-3 lab, this nonetheless indicated that the B. anthracis sample extract may not have been sterile when transferred to BSL-2 laboratories.

Why the Incident Happened

The overriding factor contributing to this incident was the lack of an approved, written study plan reviewed by senior staff or scientific leadership to ensure that the research design was appropriate and met all laboratory safety requirements. Several additional factors contributed to the incident:

  • Use of unapproved sterilization techniques

  • Transfer of material not confirmed to be inactive

  • Use of pathogenic B. anthracis when non-pathogenic strains would have been appropriate for

    this experiment

  • Inadequate knowledge of the peer-reviewed literature

  • Lack of a standard operating procedure or process on inactivation and transfer to cover all procedures done with select agents in the BRRAT laboratory.

What Has CDC Done Since the Incident Occurred

CDC’s initial response to the incident focused on ensuring that any potentially exposed staff were assessed and, if appropriate, provided preventive treatment to reduce the risk of illness if exposure had occurred. CDC also ceased operations of the BRRAT laboratory pending investigation, decontaminated potentially affected laboratory spaces, undertook research to refine understanding of potential exposures and optimize preventive treatment, and conducted a review of the event to identify key recommendations.

To evaluate potential risk, research studies were conducted at a CDC laboratory and at an external laboratory to determine the extent to which the chemical treatment used by the BRRAT laboratory inactivated B. anthracis. Two preparations were evaluated: vegetative cells and a high concentration of B. anthracis spores. Results indicated that this treatment was effective at inactivating vegetative cells of B. anthracis under the conditions tested. The treatment was also effective at inactivating a high percentage of, but not all, B. anthracis spores from the concentrated spore preparation.

A moratorium is being put into effect on July 11, 2014, on any biological material leaving any CDC BSL-3 or BSL-4 laboratory in order to allow sufficient time to put adequate improvement measures in place.

What’s Next

Since the incident, CDC has put in place multiple steps to reduce the risk of a similar event happening in the future. Key recommendations will address the root causes of this incident and provide redundant safeguards across the agency. These include:

  • The BRRAT laboratory has been closed since June 16, 2014, and will remain closed as it relates to work with any select agent until certain specific actions are taken

  • Appropriate personnel action will be taken with respect to individuals who contributed to or were in a position to prevent this incident

  • Protocols for inactivation and transfer of virulent pathogens throughout CDC laboratories will be reviewed

  • CDC will establish a CDC-wide single point of accountability for laboratory safety

  • CDC will establish an external advisory committee to provide ongoing advice and direction for laboratory safety

  • CDC response to future internal incidents will be improved by rapid establishment of an incident command structure

  • Broader implications for the use of select agents across the United States will be examined.

This was a serious event that should not have happened. Though it now appears that the risk to any individual was either non-existent or very small, the issues raised by this event are important. CDC has concrete actions underway now to change the processes that allowed this to happen, and we will do everything possible to prevent a future occurrence such as this in any CDC laboratory, and to apply the lessons learned to other laboratories across the United States.

Hydrocarbon Processing Reports: “Propylene leak blamed for fatal Taiwan gas blasts”

August 5th, 2014 by

The fatal gas blasts in Kaohsiung, Taiwan’s biggest port city, resulted in 24 fatalities and 271 injuries; four of the injured were police officers and firefighters. Some nearby, uninjured residents assisted the injured by assembling makeshift stretchers, while the remaining 1,212 residents were relocated to safer ground.

What was the root cause of this massive explosion? Local officials are still investigating. As of right now, their assessment is that there was a gas leak in a sewage pipeline that contained propylene, a gas used to make plastics and fabrics. The incident has been described as an “earthquake-like explosion” that knocked out thousands of local residents’ power and gas supplies.

Two main propylene producers in the area, as well as two large oil refineries, are under investigation. All the sewage pipes in the city are being checked for further evidence and to determine which company the pipeline that exploded is linked to. Until then, each of these companies has seen its share price drop and is taking as many precautionary measures as possible to prevent a second explosion.

See the story at:

http://www.hydrocarbonprocessing.com/Article/3367621/Latest-News/Propylene-leak-blamed-for-fatal-Taiwan-gas-blast.html

Monday Accident & Lessons Learned: RAIB Investigation of Uncontrolled evacuation of a London Underground train at Holland Park station 25 August 2013

August 4th, 2014 by


Here’s the summary of the report from the UK RAIB:

At around 18:35 hrs on Sunday 25 August 2013, a London Underground train departing Holland Park station was brought to a halt by the first of many passenger emergency alarm activations, after smoke and a smell of burning entered the train. During the following four minutes, until the train doors still in the platform were opened by the train operator (driver), around 13 passengers, including some children, climbed out of the train via the doors at the ends of carriages.

The investigation found that rising fear spread through the train when passengers perceived little or no response from the train operator to the activation of the passenger emergency alarms, the train side-doors remained locked and they were unable to open them, and they could not see any staff on the platform to deal with the situation. Believing they were in danger, a number of people in different parts of the train identified that they could climb over the top of safety barriers in the gaps between carriages to reach the platform.

A burning smell from the train had been reported when the train was at the previous station, Notting Hill Gate, and although a request had been made for staff at Holland Park station to investigate the report, the train was not held in the platform for staff to respond. A traction motor on the train was later found to have suffered an electrical fault, known as a ‘flash-over’, which was the main cause of the smoke and smell.

A factor underlying the passengers’ response was the train operator’s lack of training and experience to deal with incidents involving the activation of multiple passenger emergency alarms.

The report observes that London Underground Limited (LUL) commenced an internal investigation of the incident after details appeared in the media.

RAIB has made six recommendations to LUL. These seek to achieve a better ergonomic design of the interface between the train operator and passenger emergency alarm equipment, to improve the ability of train operators to respond appropriately to incidents of this type, and to ensure that train operators carry radios when leaving the cab to go back into the train so that they can maintain communications with line controllers. LUL is also recommended to review the procedures for line controllers to enable a timely response to safety critical conditions on trains and to ensure continuity at shift changeover when dealing with incidents. In addition, LUL is recommended to review the training and competencies of its staff to provide a joined-up response to incidents involving trains in platforms and to reinforce its procedures on the prompt and accurate reporting of incidents so that they may be properly investigated.

Monday Accident & Lessons Learned: UK RAIB Accident Report – Near-miss at Butterswood level crossing, North Lincolnshire, 25 June 2013

July 28th, 2014 by


The UK Rail Accident Investigation Branch issued a report about a train/car near miss at a crossing. Here is a summary of the report:

At around 07:35 hrs on Tuesday 25 June 2013 a passenger train was involved in a near-miss with a car on a level crossing near Butterswood in North Lincolnshire. The train passed over the level crossing with the barriers in the raised position and the road traffic signals extinguished. No injuries or damage were caused as a result of the incident.


Normally, the approach of the train would have automatically initiated the closure of the crossing. However, the crossing was not working normally because the power supply to the crossing equipment had been interrupted. The crossing was of a type where train drivers are required to check that it is not obstructed as they approach and that it has operated correctly. A flashing light is provided for this purpose, just before the crossing, with a flashing white light displayed if the crossing has correctly closed against road users, and a flashing red light displayed at all other times (including those occasions when the crossing has failed to close on the approach of a train). The driver of the train involved in the near-miss did not notice until it was too late to stop that the flashing light was indicating that the crossing was not working normally, and was still open for road traffic.

The RAIB’s investigation found that the train driver had the expectation that the crossing would operate normally as the train approached and that he had not focused his attention on the flashing light at the point where he needed to confirm that the crossing had operated correctly for the passage of his train. Although the level crossing had probably failed around nine hours before the incident, the fact of its failure was not known to any railway staff.

The investigation also found that the crossing was not protected with automatic warning system equipment and that the maintenance arrangements at the crossing were not effective in ensuring reliable performance of the equipment. In addition, the train operator’s briefing material did not clearly explain to drivers their role in respect of failures at this type of level crossing.

The RAIB has identified four key learning points relating to non-provision of the automatic warning system at locations where it is mandated by standards, recording of the condition of assets during inspection, storage of batteries, and involving people with relevant technical expertise in industry investigations into incidents and accidents.

The RAIB has made four recommendations. Three recommendations have been made to Network Rail addressing the indications given to train drivers approaching crossings where they are required to monitor the crossing’s status, improvements to the reliability of power supplies to crossings such as Butterswood, and considering remote monitoring of the power supply at similar crossings. One recommendation has been made to First TransPennine Express regarding the briefing that it gives its drivers on this type of level crossing.

For the complete report, see:

http://www.raib.gov.uk/cms_resources.cfm?file=/140616_R122014_Butterswood.pdf
