Category: Accidents

Remembering An Accident: Savar Building Collapse

April 25th, 2018 by

On April 24, 2013, in Savar Upazila of Dhaka District, Bangladesh, an eight-story commercial building called Rana Plaza collapsed, killing 1,134 people. By the time rescue efforts were halted on May 13, 2013, approximately 2,500 people had been rescued from the collapsed building, many of them injured. This incident is considered the deadliest garment-factory accident in recent history. So why did an accident like this happen in the modern day? Keep reading to find out.

The 400-page report identified multiple causes of the collapse. For one, the mayor and the building's owners wrongfully arranged construction permits so additional floors could be built. To make the situation even worse, they used substandard materials and ignored building code violations while constructing the new floors.

To keep the factory running efficiently, the owners had large generators installed on the upper floors so production could continue during blackouts. This added significant weight and strain to the already poorly built upper levels. The report notes that the building shook every time the generators started.

On April 23, cracks began to form in the foundations and walls. An engineer was called in to examine the building and declared it unsafe, but the owners demanded that their workers return despite the dangerous conditions. Then, on April 24, 2013, during the morning rush hour, the building collapsed.

Major disasters are often wake-up calls, reminding us how important it is to ensure they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid them. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to reactively find and fix root causes and to proactively address significant issues before they lead to major problems.

To learn more about our courses and their locations click on the links below.

5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training
2-Day TapRooT® Root Cause Analysis Essentials Training

 

How many precursor incidents did your site investigate last month? How many accidents did you prevent?

April 25th, 2018 by

A precursor incident is an incident that could have been worse. If another Safeguard had failed, if the sequence had been slightly different, or if your luck had been worse, the incident could have been a major accident, a fatality, or a significant injury. These incidents are sometimes called “hipos” (High Potential Incidents) or “potential SIFs” (Significant Injury or Fatality).

I’ve never talked to a senior manager who thought a major accident was acceptable. Most claim they are doing EVERYTHING possible to prevent them. But many senior managers don’t require advanced root cause analysis for precursor incidents. Incidents that didn’t have major consequences get classified as low-consequence events. People ask “Why?” five times and implement ineffective corrective actions. Sometimes these minor-consequence (but high-potential-consequence) incidents don’t even get reported. Management is letting precursor incidents continue to occur until a major accident happens.

Perhaps this is why I have never seen a major accident that didn’t have precursor incidents. That’s right! There were multiple chances to identify what was wrong and fix it BEFORE a major accident.

That’s why I ask the question …

“How many precursor incidents did your site investigate last month?”

If you are doing a good job identifying, investigating, and fixing precursor incidents, you should prevent major accidents.

Sometimes it is hard to tell how many major accidents you prevented. But the lack of major accidents will keep your management out of jail, off the hot seat, and sleeping well at night.

Keep Your Managers Out of These Pictures

That’s why it’s important to make sure that senior management knows about the importance of advanced root cause analysis (TapRooT®) and how it should be applied to precursor incidents to save lives, improve quality, and keep management out of trouble. You will find that the effort required to do a great investigation with effective corrective actions isn’t all that much more work than the poor investigation that doesn’t stop a future major accident.

Want to learn more about using TapRooT® to investigate precursor incidents? Attend one of our 2-Day TapRooT® Root Cause Analysis Courses. Or attend a 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course and learn to investigate precursor incidents and major accidents. Also consider training a group of people to investigate precursor incidents at a course at your site. Call us at 865-539-2139 or CLICK HERE to send us a message.

Is April just a bad month?

April 24th, 2018 by

I was reading a history of industrial/process safety accidents and noticed that all the following happened in April:

Texas City nitrate explosion

April 16, 1947 – 2,200 tons of ammonium nitrate detonates in Texas City, Texas, destroying multiple facilities and killing 581 people.

Deepwater Horizon

April 20, 2010 – A blowout, explosions, and fire destroy the Deepwater Horizon, killing 11. This was the worst oil spill in US history.

West, Texas

April 17, 2013 – 10 tons of ammonium nitrate detonates in West, Texas, destroying most of the town and killing 15 people.

Maybe this is just my selective vision making a trend out of nothing, or maybe Spring is a bad time for process safety? I’m sure it’s a coincidence, but it sure seems strange.

Do you ever notice “trends” that make you wonder … “Is this really a trend?”

The best way to know is to apply our advanced trending techniques. Watch for our new book coming out this Summer and then plan to attend the course next March prior to the 2019 Global TapRooT® Summit.

Monday Accidents & Lessons Learned: Putting Yourself on the Right Side of Survival

April 23rd, 2018 by

While building an embankment to keep material out of a water supply, a front end loader operator experienced a close call. On March 13, 2018, the operator backed his front end loader over the top of a roadway berm; the loader and operator slipped down the embankment, and the loader overturned, landing on its roof. Fortunately, the operator was wearing his seat belt. He unfastened it and escaped the upside-down machine through the broken right-side window of the loader door.

Front end loaders are often involved in accidents due to a shift in the machine’s center of gravity. The U.S. Department of Labor Mine Safety and Health Administration (MSHA) documented this incident and issued the statement and best practices below for operating front end loaders.

The size and weight of front end loaders, combined with the limited visibility from the cab, make the job of backing a front end loader potentially hazardous. To prevent a mishap when operating a front end loader:
• Load the bucket evenly and avoid overloading (refer to the load limits in the operating manual). Keep the bucket low when operating on hills.
• Construct berms or other restraints of adequate height and strength to prevent overtravel and warn operators of hazardous areas.
• Ensure that objects inside of the cab are secured so they don’t become airborne during an accident.
• ALWAYS wear your seatbelt.
• Maintain control of mobile equipment by traveling at safe speeds and not overloading equipment.

We would add the following best practices for loaders:
• Check the manufacturer’s recommendations and add appropriate wheel ballast or counterweight.
• Employ maximum stabilizing factors, such as moving the wheels to the widest setting.
• Ensure everyone within range of the loader location is a safe distance away.
• Operate the loader with its load as close to the ground as possible. Should the rear of the tractor begin to tip, the bucket will hit the ground before the tractor overturns.

Use the TapRooT® System to put safety first and to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

How far away is death?

April 19th, 2018 by

Lockout-tagout fail: Notepaper sign – “Don’t start”

Are you ready for quality root cause analysis of a precursor incident?

April 17th, 2018 by

Many companies use TapRooT® to investigate major accidents. But investigating a major accident is like closing the barn door after the horse has bolted.

What should you be doing? Quality investigations of incidents that could have been major accidents. We call these precursor incidents. They could have been major accidents if something else had gone wrong, another safeguard had failed, or you were “unlucky” that day.

How do you do a quality investigation of a precursor incident? TapRooT® of course! See the Using the Essential TapRooT® Techniques to Investigate Low-to-Medium Risk Incidents book.


Or attend one of our TapRooT® Root Cause Analysis Courses.

Evidence Collection: Two things every investigator should know about scene management

April 17th, 2018 by

You may not be part of scene management when an incident occurs at your facility but there are two things every investigator should know:

  1. Hazards that are present in the work area and how to handle them. It’s impossible to anticipate every accident that could happen, but we can evaluate the hazards present at our facilities that could affect employees and the community at large, and use that evaluation to structure a scene management plan.
  2. Priorities for evidence collection. The opportunity to collect evidence decreases over time. Here are a few things to keep in mind during, and immediately following, scene management.
    • Fragile evidence goes away.
    • Witnesses forget what they saw.
    • Environmental conditions change, making it hard to understand why an incident occurred.
    • Clean-up and restart begin, changing the scene from its original state.

Learn more by holding our 1-Day Effective Interviewing & Evidence Collection Training at your facility. It is a standalone course but also fits well with our 2-Day TapRooT® Root Cause Analysis Training. Contact me for details: carr@taproot.com.

 

Monday Accidents & Lessons Learned: We’re Not Off the Runway Yet

April 16th, 2018 by

NASA’s Aviation Safety Reporting System (ASRS) from time to time shares contemporary experiences to add value to the growth of aviation wisdom, lessons learned, and to spur a freer flow of reported incidents. ASRS receives, processes, and analyzes these voluntarily submitted reports from pilots, air traffic controllers, flight attendants, maintenance personnel, dispatchers, ground personnel, and others regarding actual or potential hazards to safe aviation operations.

We acknowledge that the element of surprise, or the unexpected, can upend even the best flight plan. But, sometimes, what is perceived as an anomaly pales in comparison to a subsequent occurrence. This was the case when an Air Taxi Captain went the extra mile to clear his wingtips while taxiing for takeoff. Just as he thought any threat was mitigated, boom! Let’s listen in to his account:

“Taxiing out for the first flight out of ZZZ, weed whacking was taking place on the south side of the taxiway. Watching to make sure my wing cleared two men mowing [around] a taxi light, I looked forward to continue the taxi. An instant later I heard a ‘thump.’ I then pulled off the taxiway onto the inner ramp area and shut down, assuming I’d hit one of the dogs that run around the airport grounds on a regular basis. I was shocked to find a man, face down, on the side of the taxiway. His coworkers surrounded him and helped him to his feet. He was standing erect and steady. He knew his name and the date. Apparently [he was] not injured badly. I attended to my two revenue passengers and returned the aircraft to the main ramp. I secured the aircraft and called [the Operations Center]. An ambulance was summoned for the injured worker. Our ramp agent was a non-revenue passenger on the flight and took pictures of the scene. He stated that none of the workers was wearing a high visibility vest, which I also observed. They seldom have in the past.

“This has been a recurring problem at ZZZ since I first came here. The operation is never [published in the] NOTAMs [for] an uncontrolled airfield. The pilots just have to see and avoid people and animals at all times. I don’t think the person that collided with my wingtip was one of the men I was watching. I think he must have been stooped down in the grass. The only option to [improve the] safety of the situation would be to stop completely until, hopefully, the workers moved well clear of the taxiway. This is one of…many operational deficiencies that we, the pilots, have to deal with at ZZZ on a daily basis.”

We invite you to use the TapRooT® System to find and fix problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

Monday Accidents & Lessons Learned: When retrofitting does not evaluate risks

April 9th, 2018 by

Bound for London Waterloo, the 2G44 train was about to depart platform 2 at Guildford station. Suddenly, at 2:37 pm, July 7, 2017, an explosion occurred in the train’s underframe equipment case, ejecting debris onto station platforms and into a nearby parking lot. Fortunately, there were no injuries to passengers or staff; damage was contained to the train and station furnishings. It could have been much worse.

The cause of the explosion was an accumulation of flammable gases within the traction equipment case underneath one of the train’s coaches. The gases were generated after the failure of a large electrical capacitor inside the equipment case; the capacitor failure was due to a manufacturing defect.

The train had recently been retrofitted with a modern version of the traction equipment, and the replacement equipment included the failed capacitor. The project team overseeing the design and installation of the new equipment did not consider the risk of an explosion caused by a manufacturing defect within the capacitor. As a result, there were no preventative engineering safeguards.

The Rail Accident Investigation Branch (RAIB) has recommended a review of the design of UK trains’ electric traction systems to ensure adequate safeguards are in place to offset any identified anomalies and to prevent similar explosions. Learn about the six learning points recommended by the RAIB for this investigation.

Use the TapRooT® System to solve problems. Attend one of our courses. We offer a basic 2-Day Course and an advanced 5-Day Course. You may also contact us about having a course at your site.

How Safe Must Autonomous Vehicles Be?

April 3rd, 2018 by

Tesla is under fire for the recent crash of their Model X SUV, and the subsequent fatality of the driver. It’s been confirmed that the vehicle was in Autopilot mode when the accident occurred. Both Tesla and the NTSB are investigating the particulars of this crash.

Photo credit: KTVU Fox 2/Reuters

I’ve read many of the comments about this crash, in addition to previous crash reports. It’s amazing how much emotion is poured into these comments. I’ve been trying to understand the human performance issues related to these crashes, and I find I must take special note of the human emotions that are attached to these discussions.

As an example, let’s say that I develop a “Safety Widget™” that is attached to all of your power tools. This widget raises the cost of your power tools by 15%, and it can be shown that this option reduces tool-related accidents on construction sites by 40%.  That means, on your construction site, if you have 100 incidents each year, you would now only have 60 incidents if you purchase my Safety Widget™.  Would you consider this to be a successful purchase?  I think most people would be pretty happy to see their accident rates reduced by 40%!

Now, what happens when you have an incident while using the Safety Widget™? Would you stop using the Safety Widget™ the first time it did NOT stop an injury? I think we’d still be pretty happy that we would prevent 40 incidents at our site each year. Would you still be trying to reduce the other 60 incidents each year? Of course. However, I think we’d keep right on using the Safety Widget™, and continue looking for additional safeguards to put in place, while trying to improve the design of the original Safety Widget™.

This line of thinking does NOT seem to be true for autonomous vehicles. For some reason, many people seem to be expecting that these systems must be perfect before we are allowed to deploy them. Independent reviews (NOT by Tesla) have shown that, on a per driver-mile basis, Autopilot systems reduce accidents by 40% over normal driver accident rates. In the U.S., we experience about 30,000 fatalities each year due to driver error. Shouldn’t we be happy that, if everyone had an autonomous vehicle, we would be saving 12,000 lives every year? The answer to that, you would think, would be a resounding “YES!” But there seems to be a much more emotional content to the answer than straight scientific data would suggest.
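The arithmetic behind that claim can be checked with a minimal sketch. It uses only the round figures quoted above (about 30,000 annual U.S. driver-error fatalities and a 40% reduction); both are approximations from the text, not an official analysis.

```python
# Rough figures quoted in the post above (approximations, not official statistics)
us_driver_error_fatalities = 30_000  # approximate annual U.S. fatalities from driver error
autopilot_reduction = 0.40           # ~40% accident reduction per driver-mile, per the cited reviews

# If every vehicle achieved the same reduction, the lives saved per year would be:
lives_saved = us_driver_error_fatalities * autopilot_reduction
remaining_fatalities = us_driver_error_fatalities - lives_saved

print(int(lives_saved))           # 12000
print(int(remaining_fatalities))  # 18000
```

The point of the Safety Widget™ analogy is the same: a 40% reduction still leaves 60% of incidents, and each of those remaining incidents is visible in a way the prevented ones never are.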

I think there may be several human factors in play as people respond to this question:

  1. Over- and under-trust in technology: I was talking to one of our human factors experts, and he mentioned this phenomenon. Some people under-trust technology in general and, therefore, will find reasons not to use it, even when it is proven to work. Others will over-trust the technology, as evidenced by the Tesla drivers who are watching movies, or not responding to system warnings to maintain manual control of the vehicle.
  2. “I’m better than other drivers. Everyone else is a bad driver; while they may need assistance, I drive better than any autonomous gadget.” I’ve heard this a lot. I’m a great driver; everyone else is terrible. It’s a proven fact that most people have an inflated opinion of their own capabilities compared to the “average” person. If you were to believe most people, each individual (when asked) is better than average. This would make it REALLY difficult to calculate an average, wouldn’t it?
  3. It’s difficult to calculate the unseen successes. How many incidents were avoided by the system? It’s hard to see the positives, but VERY easy to see the negatives.
  4. Money. Obviously, there will be some people put out of work as autonomous vehicles become more prevalent. Long-haul truckers will be replaced by autopilot systems. Cab drivers, delivery vehicle drivers, Uber drivers, and train engineers are all worried about their jobs, so they are more likely to latch onto any negative that would help them maintain their relevancy. Sometimes this is done subconsciously, and sometimes it is a conscious decision.

Of course, we DO have to monitor and control how these systems are rolled out. We can’t have companies roll out inferior systems that can cause harm due to negligence and improper testing. That is one of the main purposes of regulation and oversight.

However, how safe is “safe enough?” Can we use a system that isn’t perfect, but still better than the status quo? Seat belts don’t save everyone, and in some (rare) cases, they can make a crash worse (think of Dale Earnhardt, or a crash into a lake with a stuck seat belt). Yet, we still use seat belts. Numerous lives are saved every year by restraint systems, even though they aren’t perfect. How “safe” must an autonomous system be in order to be accepted as a viable safety device? Are we there yet? What do you think?

Monday Accidents & Lessons Learned: When a snake leads you down a rabbit hole

April 2nd, 2018 by

While Lewis Carroll did not create the rabbit hole, he did turn those holes into a literal abyss down which people could fall. Today, “rabbit hole” has become a metaphor for extreme diversion, redirection, or distraction. Industries spiral down them all the time, resulting in a tailspin that sometimes cannot be corrected.

A Captain experienced a unique problem during the pre-departure phase of a flight. Within earshot of passengers, the Gate Agent briefed the Captain, “I am required to inform you that while cleaning the cockpit, the cleaning crew saw a snake under the Captain’s pedals. The snake got away, and they have not been able to find it.”

The incident report from NASA’s Aviation Safety Reporting System (ASRS) details the Captain’s response and reaction: “At this time, the [international pre-departure] inspection was complete, and I was allowed on the aircraft. I found two mechanics in the flight deck. I was informed that they had not been able to find the snake, and they were not able to say with certainty what species of snake it was. The logbook had not been annotated with a write-up, so I placed a write-up in the logbook. I was also getting a line check on this flight. The Check Airman told me that his father was deathly afraid of snakes and suggested that some passengers on the flight may suffer with the same condition.

“I contacted Dispatch and discussed with them that I was uncomfortable taking the aircraft with an unknown reptile condition. . . . The possibility [existed] that a snake could expose itself in flight or, worse on the approach, come out from under the rudder pedals. Dispatch agreed with my position. The Gate Agent then asked to board the aircraft. I said, “No,” as we might be changing aircraft. I then contacted the Chief Pilot. I explained the situation and told him I was uncomfortable flying the aircraft without determining what the condition of the snake was. I had specifically asked if the cleaning crew had really seen a snake. I was informed, yes, that they had tried to vacuum it up and it had slithered away. The Chief Pilot agreed with me and told me he would have a new aircraft for us in five minutes. We were assigned the aircraft at the gate next door.

“. . . When I returned [to the airport], I asked a Gate Agent what had happened to the “snake airplane.” I was told that the aircraft was left in service, and the next Captain had been asked to sign some type of form stating he was informed that the snake had not been found.”

Don’t wait for a snake-in-the-cockpit experience to improve your processes. Reach out to TapRooT® to curtail rabbit holes and leave nothing to chance.

McD’s in UK Fined £200k for Employee Injured While Directing Traffic

March 27th, 2018 by


An angry motorist hits a 17-year-old employee who is directing traffic and breaks his knee. Normally, you would think the road-rage driver would be at fault. But a UK court fined McDonald’s £200,000.

Why? It was a repeat incident. Two previous employees had been hurt while directing traffic. And McDonald’s didn’t train the employees how to direct traffic.

What do you think? Would a good root cause analysis of the previous injuries and effective corrective actions have prevented this accident?

Monday Accidents & Lessons Learned: Does what you see match what is happening?

March 26th, 2018 by


An incident report from NASA’s Aviation Safety Reporting System (ASRS) gives insight into a pilot’s recurring, problematic observation. Through distraction and confusion, a Bonanza pilot misperceived the runway edge and centerline lights as they cycled off and on. Air Traffic Control (ATC) let him know that the centerline lights were constant, not blinking.

The pilot summarized his experience, “I was transiting the final approach path of . . . Runway 16R and observed the runway edge and centerline lights cycle on and off . . . at a rate of approximately 1 per second. It was very similar to the rate of a blinking traffic light at a 4-way vehicle stop. The [3-blade] propeller speed was 2,400 RPM. This was observed through the entire front windscreen and at least part of the pilot side window. I queried ATC about the reason for the runway lights blinking and was told that they were not blinking. It was not immediately obvious what was causing this, but I did later speculate that it may have been caused by looking through the propeller arc.

“The next day [during] IFR training while on the VOR/DME Runway 16R approach, we observed the runway edge and centerline lights cycle on and off . . . at a rate slightly faster than 1 per second. The propeller speed was 2,500 RPM. I then varied the propeller speed and found that, at 2,700 RPM, the lights were observed strobing at a fairly high rate and, at 2,000 RPM, the blinking rate slowed to less than once per second. This was observed through the entire approach that terminated at the Missed Approach Point (MAP). The flight instructor was also surprised and mentioned that he had not seen this before, but also he doesn’t spend much time behind a 3-blade propeller arc.

“I would speculate that the Pulse Width Modulation (PWM) dimming system of the LED runway lights was phasing with my propeller, causing the observed effect. I would also speculate that the effect would . . . significantly differ at other LED dimming settings . . . and behind a 2-blade propeller.

“I found the effect to be entirely confusing and distracting and would not want to make a landing in such conditions.”
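The pilot's PWM hypothesis can be sketched numerically. The blade-pass frequency of a 3-blade propeller at 2,400 RPM is 120 Hz, which is in the same range as typical LED dimming frequencies, so a beat of roughly 1 Hz is plausible. Note the 121 Hz PWM frequency below is purely hypothetical (real airfield LED dimming frequencies vary, and harmonics complicate the picture); the sketch only shows that the orders of magnitude line up with the report.

```python
def blade_pass_hz(rpm: float, blades: int = 3) -> float:
    """Frequency at which propeller blades cross the pilot's line of sight."""
    return rpm / 60.0 * blades

def beat_hz(pwm_hz: float, rpm: float, blades: int = 3) -> float:
    """Apparent flicker rate when PWM dimming beats against the blade-pass frequency."""
    return abs(pwm_hz - blade_pass_hz(rpm, blades))

ASSUMED_PWM_HZ = 121.0  # hypothetical LED dimming frequency, chosen for illustration only

for rpm in (2000, 2400, 2500, 2700):
    print(rpm, blade_pass_hz(rpm), beat_hz(ASSUMED_PWM_HZ, rpm))
```

At 2,400 RPM the assumed beat works out to about 1 Hz, matching the "approximately 1 per second" blinking the pilot reported; changing the RPM shifts the blade-pass frequency and therefore the apparent flicker rate, consistent with what the pilot observed while varying propeller speed.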

The TapRooT® System, Training, and Software have a dedicated history of R&D, human performance, and improvement. Learn with our best incident investigation and root cause analysis systems.

Construction’s Fatal Four – A Better Approach to Prevention

March 26th, 2018 by

In 2016, 21% of fatal injuries in the private sector were in the Construction industry as classified by the Department of Labor. That was 991 people killed in this industry (almost 3 people every day). Among these were the following types of fatality:

Falls – 384 (38.7%)
Struck by Object – 93 (9.4%)
Electrocutions – 82 (8.3%)
Caught-in/between – 72 (7.3%)

Imagine that. Eliminating just these 4 categories of fatalities would have saved over 630 workers in 2016.
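The "over 630" figure above follows directly from the four category counts. A quick sketch using the numbers in the list:

```python
# Fatal Four counts from the 2016 Department of Labor figures quoted above
fatal_four = {
    "Falls": 384,
    "Struck by Object": 93,
    "Electrocutions": 82,
    "Caught-in/between": 72,
}
total_construction_deaths = 991

fatal_four_total = sum(fatal_four.values())
fatal_four_share = fatal_four_total / total_construction_deaths

print(fatal_four_total)                      # 631
print(round(fatal_four_share * 100, 1))      # 63.7
```

So the Fatal Four account for 631 of the 991 construction fatalities, or about 63.7% of the industry's deaths that year.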

Now, I’m not naive enough to think we can suddenly eliminate an entire category of injury or fatality in the U.S. However, I am ABSOLUTELY CERTAIN that, at each of our companies, we can take a close look at these types of issues and make a serious reduction in these rates. Simply telling our workers to “Be careful out there!” or “Follow the procedures and policies we give you” just won’t cut it.

NOTE: In the following discussion, when I’m talking about our workers and teammates, I am talking about ALL of us! We ALL violate policies and procedures every day. Don’t believe me? Take a look at the speedometer on your car on the way home from work tonight and honestly tell me you followed the speed limit all the way home.

As an example, take a look at your last few incident investigations. When there is an incident, one of the questions always asked is, “Did you know that you weren’t supposed to do that?” The answer is almost always, “Yes.” Yet, our teammates did it anyway.

Unfortunately, too many companies stop here. “Worker knew he should not have put his hand into a pinch point. Corrective action: Counseled the employee on the importance of following policy and remaining clear of pinch points.” What a completely useless corrective action! I’m pretty sure that the worker who just lost the end of his finger knows he should not have put his hand into that pinch point. Telling him to pay attention and be more careful next time will probably NOT be very effective.

If we really want to get a handle on these types of injuries, we must adopt a more structured, scientific strategy. I’d propose the following as a simple start:

1. Get out there and look! Almost every accident investigation finds that this has happened before, or that the workers often make this same mistake. If that is true, we should be getting out there and finding these daily mistakes.

2. To correct these mistakes, you must do a solid root cause analysis. Just yelling at our employees will probably not be effective. Remember, they are not bad people; they are just people. This is what people do. They try to do the best job they can, in the most efficient manner, and try to meet management’s expectations. We need to understand what, at the human performance level, allowed these great employees to do things wrong. THAT is what a good root cause analysis can do for you.

3. As in #2, when something bad DOES happen, you must do a solid RCA on those incidents, too. If your corrective actions are always:

  • Write a new policy or procedure
  • Discipline the employee
  • Conduct even MORE training

then your RCA methodology is not digging deep enough.

There is really no reason that we can’t get these types of injuries and fatalities under control. Start by doing a good root cause analysis to understand what really happened, and recognize and acknowledge why your team made mistakes. Only then can we apply effective corrective actions to eliminate those root causes. Let’s work together to keep our team safe.

1947 Centralia Mine Disaster

March 25th, 2018 by

On March 25, 1947, the Centralia No. 5 coal mine exploded in Illinois. The explosions took the lives of 111 mine workers. At the time of the explosion, 142 men were in the mine. 65 of these men were killed by burns and the violence of the explosion, and 45 of the men were killed by afterdamp. Only 8 men were rescued, but unfortunately one of the rescued men died due to the effects of afterdamp. The other 24 men were able to escape the mine unaided.
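The casualty figures above are internally consistent, which a short accounting sketch (using only the numbers in the paragraph) confirms:

```python
# Centralia No. 5 figures as stated in the paragraph above
men_in_mine = 142
killed_by_explosion = 65    # burns and the violence of the explosion
killed_by_afterdamp = 45
rescued = 8
rescued_who_later_died = 1
escaped_unaided = 24

killed_in_mine = killed_by_explosion + killed_by_afterdamp
total_deaths = killed_in_mine + rescued_who_later_died
accounted_for = killed_in_mine + rescued + escaped_unaided

print(total_deaths)    # 111
print(accounted_for)   # 142
```

The 110 men killed in the mine, plus the one rescued man who later died of afterdamp, give the 111 total deaths; adding the 8 rescued and 24 who escaped unaided accounts for all 142 men underground.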

So, what happened? The coal mine was extremely dry and dusty, and there were large deposits of coal dust throughout the mine. Very little effort had been made to clean up or load out the excessive dust, and water had not been used to allay the dust at its source. Then an explosion occurred when coal dust ignited. Because of the coal dust buildup throughout the mine, the explosion spread and worsened. Of the mine’s six working sections, four were affected by flames and explosion violence. The other two sections were affected only by afterdamp.

The explosion was contained when it reached the rock-dusted zones. It traveled through all the active mining rooms and through some abandoned rooms that had not been treated with rock dust. The explosion failed to move through areas that were partly caved in or, in some places, filled with incombustible roof rash.

Disasters with a loss of life are often wake-up calls for major industries, reminding us how important it is to ensure they never happen again.

TapRooT® Root Cause Analysis is taught globally to help industries avoid major accidents like this. Our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training offers advanced tools and techniques to reactively find and fix root causes and to proactively address significant issues that may lead to major problems.

Monday Accidents & Lessons Learned: When exposure to contaminants is part of a job

March 19th, 2018 by

Recently, a review by Western Australia’s Department of Mines, Industry Regulation and Safety (DMIRS) revealed that workers in some gold rooms have experienced sustained exposure to elevated heavy metal levels, including arsenic, lead, and mercury. Work done in a gold room is specifically identified as occupational exposure work that requires ongoing health surveillance for gold room employees.

The review identified several shortcomings: omissions in the biological and atmospheric monitoring programs for some heavy-metal contaminants associated with ore mineralization; a failure by sites to consider the mineralogy of their specific ore deposits when assessing the heavy metals often present in Western Australian gold deposits; inadequate, ineffective ventilation systems within gold rooms; and a lack of ventilation system performance testing and monitoring. Along with these inconsistencies, when equipment is modified or installed in gold rooms, maintenance programs fall short of manufacturers’ recommendations.

Read the Mines Safety Bulletin, Minimizing exposure to hazardous contaminants in gold rooms. Then, learn why professional training in effective investigations and competency in root cause analysis are key to solving workplace problems.

Miami Bridge Collapse – Is Blame Part of Your Investigation Policy?

March 16th, 2018 by


I was listening to a news report on the radio this morning about the pedestrian bridge collapse in Miami. At one point, they were interviewing Florida Governor Rick Scott.  Here is what he said:

“There will clearly be an investigation to find out exactly what happened and why this happened…”

My ears perked up, and I thought, “That sounds like a good start to a root cause investigation!”

And then he continued:

“… and we will hold anybody accountable if anybody has done anything wrong,”

Bummer.  His statement had started out so well, and then it went directly to blame in the same breath.  He had just arrived on the scene.  Before we even had a good feel for the actual circumstances, we were already assuming our corrective actions would pinpoint blame and dish out the required discipline.

This is pretty standard for government and public figures, so I wasn’t too surprised.  However, it got me thinking about our own investigations at our companies.  Do we start out our investigations with the same expectations?  Do we begin with the good intentions of understanding what happened and finding true root causes, but then have this expectation that we need to find someone to blame?

We as companies owe it to ourselves and our employees to do solid, unbiased incident investigations.  Once we get to reliable root causes, our next step should be to put fixes in place that answer the question, “How do we prevent these root causes from occurring in the future?  Will these corrective actions be effective in preventing the mistakes from happening again?”  In my experience, firing the employee / supervisor / official in charge rarely leads to changes that will prevent the tragedy from happening again.

 
