Category: Human Performance

Stop Normalization of Deviation with Normalization of Excellence

March 3rd, 2016 by

There is no Normalization of Deviation. Deviation IS NORMAL!

If you don’t think that is true, read this previous article:

There is no such thing as “Normalization of Deviation”

In 1946, Admiral Rickover was one of a small group of naval officers who visited the Manhattan Project in Oak Ridge, Tennessee, to learn about nuclear power and to see if there were ways to apply it in the US Navy. He had the foresight to see that it could be applied as propulsion for submarines, freeing subs from the risky proposition of having to surface to recharge their batteries.

[Photo: Admiral Hyman G. Rickover]

But even more amazing than his ability to see how nuclear power could be used, to form a team with exceptional technical skills, and to research and develop the complex technologies that made this possible … he saw that the normal ways that the Navy and industrial contractors did things (their management systems) were not robust enough to handle the risk of nuclear technology.

Rickover set out to develop the technology to power a ship with the atom and to develop the management systems that would assure excellence. In PhD research circles these new ways of managing are often called a “high performance organization.”

Rickover’s pursuit of excellence was not without cost. It made him a pariah in naval leadership. Despite his accomplishments, Rickover would have been forced out of the Navy if it had not been for strident support from key members of Congress.

Why was Rickover an outcast? Because he would not compromise over nuclear safety and his management philosophies were directly opposed to the standard techniques used throughout the Navy (and most industrial companies).

What is the proof that his high performance management systems work? Over 60 years of operating hundreds of naval nuclear reactors ashore and at sea without a single process safety accident (reactor meltdown). And his legacy continues even after he left as head of the Nuclear Navy. The culture he established is so strong that it has endured for 30 years!

Compare that record to the civilian nuclear power industry, refinery process safety incidents, or offshore drilling major accidents. You will see that Rickover developed a truly different high performance organization that many with PhDs still don’t understand.

In his organization, deviation truly was abnormal.

What are the secrets that Rickover applied to achieve excellence? They aren’t secret. He testified to his methods in front of Congress and his testimony is available at this link:

http://www.taproot.com/content/wp-content/uploads/2010/09/RickoverCongressionalTestimony.pdf

What keeps other industries from adopting Rickover’s management systems to achieve equally outstanding performance? The systems Rickover used to achieve excellence are outside the experience of most senior executives, and applying them REQUIRES focused persistence from the highest levels of management.

To STOP the normalization of deviation, the CEOs and presidents of major corporations would have to insist on and promote the Normalization of Excellence that is outlined in Rickover’s testimony to Congress.

Parts of Rickover’s testimony to Congress may not be clear to someone who has not experienced life in the Nuclear Navy. Therefore, I will explain (translate from Nuclear Navy terminology) what Rickover meant and provide readers with examples from my Nuclear Navy career and from industry.

Read Part 3:  Normalization of Excellence – The Rickover Legacy – Technical Competency

What Temp for the Best Night’s Sleep

February 24th, 2016 by

Or click on this link if the video does not play …

http://on.wsj.com/21luzyO

There is no such thing as “Normalization of Deviation”

February 24th, 2016 by

Yes, I have written and spoken about normalization of deviation before. But today I hope to convince you that normalization of deviation DOES NOT EXIST.

What is “Normalization of Deviation”? (It is sometimes referred to as normalization of deviance.) In an interview, Diane Vaughan, a sociology professor from Ohio State University, said:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety.

Let’s think about this for a moment. What defines deviation or deviance?

To deviate means to depart from a set path.

In a country, the government’s rules or laws set the path.

In a company, management usually sets the path. Also, management usually conforms to the regulations set by the government.

To say that deviation has been “normalized” implies that the normal course of doing business is to follow the rules, regulations, and laws, and that deviation is the exception.

Let’s look at a simple example: the speed limit.

During the 1970s oil crisis, Congress passed a national speed limit of 55 miles per hour (signed into law by President Nixon in 1974). President Carter later declared the conservation of oil to be the “moral equivalent of war.”

What happened?

Violating the speed limit became a national pastime. Can you remember the popular songs? Sammy Hagar sang “I Can’t Drive 55.” There was C.W. McCall singing “Convoy.”

And the most famous of all? The movie “Smokey and the Bandit” with Burt Reynolds and Sally Field …

How many laws did they break?

One might think that this is just a rare example. The rule (55 mph speed limit) was just too strict and people rebelled.

Take a minute to consider what you observe … who breaks the rules?

  • Drivers
  • Teenagers
  • Politicians
  • Executives
  • Pilots
  • Nurses
  • Police
  • Teachers
  • Judges
  • Doctors
  • Clergy

You can probably remember a famous example for each of these classes of people above where rule breaking was found to be common (or at least not so uncommon).

You might ask yourself … Why do people break the rules? If you ask people why they break the rules you will hear the following words used in their replies:

  • Unnecessary
  • Burdensome
  • Just for the inexperienced
  • Just once
  • They were just guidelines
  • Everybody does it

My belief is that rule breaking is part of human nature. We often work the easiest way, the quickest way, the way with the least effort, to get things done.

Thus deviation from strict standards is NOT unusual. Deviation is NORMAL!

Thus there is no “Normalization of Deviation” … the abnormal state is getting everyone to follow strict rules:

  • To follow the procedure as written
  • To always wear PPE
  • To follow the speed limit
  • To pay every tax
  • To never sleep on the job (to stop nodding off on the back shift or at a boring meeting)

Thus, instead of wondering why “Normalization of Deviation” exists and treating it like an abnormal case, we should recognize that we have to do something special to create the abnormal state of a high performance organization.

What we want is the abnormal state of an extremely high performance organization, established through strict codes of behavior that are outside our normal experience (and human nature).

How do you establish this high performance organization with high compliance with strict standards? That’s a great question and the topic for another article that I will write in the future.

Read Part 2: Stop Normalization of Deviation with Normalization of Excellence

Monday Accident & Lessons Learned: Was “Paying Attention” the Root Cause?

February 8th, 2016 by

Watch this video and see what you think …

Get More from TapRooT®: Follow our Pages on LinkedIn

August 13th, 2015 by

Do you like quick, simple tips that add value to the way you work? Do you like articles that increase your happiness?  How about a joke or something to brighten your day? Of course you do! Or you wouldn’t be reading this post.  But the real question is, do you want MORE than all of the useful information we provide on this blog?  That’s okay – we’ll allow you to be greedy!

A lot of people don’t know we have a company page on LinkedIn that also shares all those things and more.  Follow us by clicking the image below that directs to our company page, and then clicking “Follow.”

[Image: TapRooT® LinkedIn company page]

We also have a training page where we share tips about career/personal development as well as course photos and information about upcoming courses. If you are planning to attend a TapRooT® course or want to recruit candidates with root cause analysis skills, click the image below that directs to our training page and then click “Follow.”

[Image: TapRooT® training page]

Thank you for being part of the global TapRooT® community!

When is a safety incident a crime? Would making it a corporate crime improve corporate and management behavior?

July 29th, 2015 by


I think we all agree that a fatality is a very unfortunate event. But it may not be a criminal act.

When one asks after an accident if a crime has been committed, the answer depends on the country where the accident occurred. A crime in China may not be a crime in the UK. A crime in the UK may not be a crime in the USA. And a crime in the USA may not be a crime in China.

Even experts may disagree on what constitutes a crime. For example, University of Maryland Law Professor Rena Steinzor wrote an article on her blog titled: “Kill a Worker? You’re Not a Criminal. Steal a Worker’s Pay? You Are One.” Her belief is that Du Pont and Du Pont’s managers should have faced criminal prosecution after an accident at their LaPorte, Texas, facility. She cited behavior by Du Pont’s management as “extraordinarily reckless.”

OSHA Chief David Michaels disagrees with Professor Steinzor. He is quoted in a different article as saying during a press conference that Professor Steinzor’s conclusions and article are, “… simply wrong.”

The debate should raise a significant question: Is making an accident – especially a fatal accident – a corporate crime a good way to change corporate/management behavior and improve worker safety?

Having worked for Du Pont back in the late 1980s, I know that management was very concerned about safety. They really took safety to heart. I don’t know if that attitude changed as Du Pont transformed itself to increase return on equity … perhaps they lost their way. But would making poor management decisions a crime make Du Pont a safer place to work?

Making accidents a crime would definitely make performing an accident investigation more difficult. Would employees and managers cooperate with ANY investigation (internal, OSHA, or criminal) IF the outcome could be a jail sentence? I can picture every interviewee consulting with their attorney prior to answering an investigator’s question.

I believe the lack of cooperation would make finding and fixing root causes much more difficult. And finding and fixing the root causes of accidents is extremely important when trying to improve safety. Thus, I believe increased criminalization of accidents would actually work against improving safety.

I believe that Du Pont will take action to turn around safety performance after a series of serious and sometimes fatal accidents. I think they will do this out of concern for their employees. I don’t think the potential for managers going to jail would improve the odds that this improvement will occur.

What do you think? Do you agree or disagree? Or better yet, do you have evidence of criminal proceedings improving or hindering safety improvement?

Let me know by leaving a comment below.

 

The more things change the more they stay the same…

July 21st, 2015 by

I overheard a senior executive talking about the problems his company was facing:

  • Prices for their commodity were down, yet costs for production were up.
  • Cost overruns and schedule slippages were too common.
  • HSE performance was stagnant despite improvement goals.
  • They had several recent quality issues that had caused customer complaints.
  • They were cutting “unnecessary” spending like training and travel to make up for revenue shortfalls. 

I thought to myself … 

“How many times have I heard this story?”

I felt like interrupting him and explaining how he could stop at least some of his PAIN. 

I can’t do anything about low commodity prices. The price of oil, copper, gold, coal, or iron ore is beyond my control. And he can’t control these either.

 But he was doing things that were making his problems (pain) worse. 

For example, if you want to stop cost overruns, you need to analyze and fix the root causes of cost overruns.

How do you do that? With TapRooT®.

And how would people learn about TapRooT®? By going to training.

And what had he eliminated? The training budget!

How about the stagnant HSE performance?

To improve performance his company needs to do something different. They need to learn best practices from leaders in their industry AND from other industries.

Where could his folks learn this stuff? At the TapRooT® Summit.

His folks didn’t attend because they didn’t have a training or travel budget!

And the quality issues? He could have his people use the same advanced root cause analysis tools (TapRooT®) to attack them that they were already using for cost, schedule, and HSE incidents. Oh, wait. His people don’t know about TapRooT®. They didn’t attend training.

This reminds me of a VP whose division had a major accident that cost his company big $$$$ and could have caused multiple fatalities (they were lucky that day). The accident had causes that were directly linked to a cost cutting/downsizing initiative that the VP had initiated for his division. The cost cutting initiative had been suggested by consultants to make the company more competitive in a down economy with low commodity prices. At the end of a presentation about the accident, he said:

“If anybody would have told me the impacts of these cuts, I wouldn’t have made them!”

Yup. Imagine that. Those bad people didn’t tell him he was causing bad performance by cutting the people and budget they needed to make the place work.

That accident and quote occurred almost 20 years ago.

Yes, this isn’t the first time we have faced a poor economy, dropping commodity prices, or performance issues. The more things change, the more they stay the same!

But what can you do?

Share this story!

And let your management know how TapRooT® Root Cause Analysis can help them alleviate their PAIN!

Once they understand how TapRooT®’s systematic problem solving can help them improve performance even in a down economy, they will realize that the small investment required is well worth it compared to the headaches they will avoid and the performance improvement they can achieve.

Because in bad times it is especially true that:

“You can’t stop spending bad money
or start spending good money
fast enough!”


Robot Made a Human Error and a Worker was Killed

July 8th, 2015 by


[Image: industrial robot (Getty)]

The 22-year-old man died in hospital after the accident at a plant in Baunatal, 100km north of Frankfurt. He was working as part of a team of contractors installing the robot when it grabbed him, according to the German car manufacturer. Volkswagen’s Heiko Hillwig said it seemed that human error was to blame.

A worker grabs the wrong thing and often gets asked, “what were you thinking?” A robot picks up the wrong thing and we start looking for root causes.

Read the article below to learn more about the fatality, and ask yourself: why would we not always look for root causes once we identify the actions that occurred?

http://www.bbc.co.uk/newsbeat/article/33359005/man-crushed-to-death-by-robot-at-car-factory

 

Investigating the Human Error Causes of Drug and Device Manufacturing Problems

July 8th, 2015 by

For the drug and medical device manufacturing industries, the US Code of Federal Regulations 211.22, Good Manufacturing Practices, states that if manufacturing errors occur, quality control should make sure that the errors “… are fully investigated.”

From past FDA actions it is clear that stopping the investigation at a “human error” cause is NOT fully investigating the error.

Here are three reasons people fail when they are investigating human error issues and when they develop fixes:

  1. They did not use a systematic process. (5-Whys is not a systematic process.)
  2. The system they used did not guide them to the real, fixable causes of human errors (most quality professionals are not trained in human factors).
  3. The system did not suggest ways to fix human errors once the causes had been identified (the FDA expects effective corrective actions).

What tool provides a systematic process with guidance to find and fix the causes of human error? The TapRooT® System!

If you would like to read more about how TapRooT® can help you find the root causes of human error, see:

http://www.taproot.com/archives/44542

or

http://www.taproot.com/products-services/about-taproot

To learn how to use TapRooT® to improve your investigations of human error, we suggest attending the 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Training. See the list of public courses held around the world at:

http://www.taproot.com/store/5-Day-Courses/?coursefilter=Team+Leader+Training

With your hard work and effort, and a system that will find and fix the root causes of human error, you can succeed in fixing human error issues and in meeting the FDA’s expectations.

Would you know if your corrective action resulted in an accident?

June 30th, 2015 by

“Doctor … how do you know that the medicine you prescribed him fixed the problem?” the peer asked. “The patient did not come back,” said the doctor.

No matter the industry, and even if the root causes found for an issue were accurate, the cure can be worse than the disease. Some companies have a formal Management of Change process or a Design of Experiments method that they use when adding new actions. At the other extreme, some use the trial-and-error method … with a little bit of “this is good enough, and they will tell us if it doesn’t work.”

You can use the formal methods listed above, or, for some risks, it can be as simple as a review with the right people present before an action is implemented. In our 7-Step TapRooT® Root Cause Analysis Process, we teach to review for unintended consequences during the creation of, and after the implementation of, corrective or preventative actions. This task comes with four basic rules:

1. Remove the risk/hazard, or persons from the risk/hazard, first if possible. After all, one does not need to train somebody to work safer or provide better tools for the task if the task and hazard are removed completely. (We teach Safeguard Analysis to help with this step.)

2. Have the right people involved throughout the creation, implementation, and review of the corrective or preventative action. Identify any person who has an impact on the action, owns the action, or will be impacted by the change, including process experts. (Hint: it is okay to use outside sources, too.)

3. Never forget or lose sight of why you are implementing a corrective or preventative action. In our analysis process you must identify the action or inaction (behavior of a person, equipment, or process) and each behavior’s root causes. It is these root causes that must be fixed or mitigated in order for the behaviors to go away or be changed. Focus is key here!

4. Plan an immediate observation of the change once it is implemented, and a long-term audit to ensure the change is sustained.

Simple… yes? Maybe? Feel free to post your examples and thoughts.

Product Safety Recall … One of the Few Times That I See Quality and Safety Merge

June 22nd, 2015 by

We can all remember some type of major product recall that affected us in the past (tires, brakes, medicine …) or recalls that may be impacting us today (air bags). These recalls all share a major theme: a company made something, and somebody got hurt or worse. This creates an “us versus them” perception.

Now stop and ask, when is the last time quality and safety was discussed as one topic in your current company’s operations?

  1. You received a defective tool or product ….
  2. You issued a defective tool or product ….
  3. A customer complained ….
  4. A customer was hurt ….
  5. ??? ….

Each of the occurrences above often triggers an owner for each type of problem:

  1. The supplier…
  2. The vendor…
  3. The contractor…
  4. The manufacturer….
  5. The end user….

Now stop and ask, who would investigate each type of problem? What tools would each group use to investigate? What are their expertise and experiences in investigation, evidence collection, root cause analysis, corrective action development or corrective action implementation?

This is where we create our own internal silos for problem solving; each problem often has its own department, as listed in the company’s organizational chart:

  1. Customer Service (Quality)
  2. Manufacturing (Quality or Engineering)
  3. Supplier Management (Supply or Quality)
  4. EHS (Safety)
  5. Risk (Quality)
  6. Compliance (?)

The investigations then take the shape of those departments’ tools, training, and experience.

Does anyone besides me see a problem or an opportunity here?

Are You Comfortable With Just Getting By?

June 15th, 2015 by

Getting comfortable with just getting by, accepting failure, accepting broken rules … these attitudes all contribute to an unhealthy workplace culture that allows major and minor accidents. How do we get back on track?

We livestreamed Mark Paradies’ empowering talk, “How to Stop Normalization of Deviation,” at the 2015 Global TapRooT® Summit in Las Vegas. View the recorded session below and learn how to improve the work culture at your facility.

[Video: “How to Stop Normalization of Deviation,” broadcast via Ustream]

How Much Does Human Error Cost Your Company?

March 4th, 2015 by

If your company has a major accident, the board of directors will know the cost of human error.

But human error happens every day. And the small incidents and mid-size accidents can add up to billions of dollars in waste, injuries, and warranty costs. 

How much of the billion dollars in waste comes from your company? Do you keep track?

How can you know how much you should invest in stopping human error if you don’t know how much human error costs?

One thing is for sure: if you are worried about human error, you should attend the 2-Day TapRooT® Understanding and Stopping Human Error Course taught by Joel Haight and Mark Paradies.

The course will help you understand why human error occurs and the best practices you can implement to make human action much more reliable.

The course is being held on June 1-2.

Just after this course is the 2015 Global TapRooT® Summit. 

The Summit includes a track titled “Human Error Reduction and Behavior Change.” To see all the topics covered, see this link:

http://www.taproot.com/taproot-summit/summit-schedule

And click on the appropriate track button.

Register for both the 2-Day TapRooT® Understanding and Stopping Human Error Course and the TapRooT® Summit for five days of great learning, networking, and best practice sharing. Just CLICK HERE to get started.

Monday Accident & Lessons Learned: Errors Under Pressure – What are the Odds?

February 9th, 2015 by


Here’s a description of a car/train accident:

http://www.nytimes.com/2015/02/05/nyregion/on-metro-north-train-in-crash-a-jolt-then-chaos.html?emc=edit_th_20150205&nl=todaysheadlines&nlid=60802201&_r=0

How could things go from a minor error and fender bender to a multi-fatality accident?

It happens when someone makes a bad decision under pressure.

Don’t think it couldn’t happen to you. Even with good training and good human factors design, under high stress people do things that seem stupid in hindsight (looking at what happened in the calm light of the post-accident investigation).

Often, the people reacting in a stressful situation aren’t well trained and may have poor displays, poor visibility, or other distractions. Their chance of choosing the right action? About 50/50. That’s right, they could flip a coin and it would be just as effective as their brain in deciding what to do in a high-stress situation.

LESSONS LEARNED

FIRST: Avoid decisions under high stress. In this case, KEEP OFF THE TRACKS!

Never stop on a railroad track even when no trains are coming.

That’s true for all hazards.

Stay out from under loads. Stay away from moving heavy equipment.

You get the idea.

Don’t put yourself in a position where you have to make a split-second decision.

SECOND:  NEVER TRY TO BEAT A TRAIN or PULL IN FRONT OF A TRAIN.

Always back off the tracks if possible. This is true even if you hit the gate and dent your car.

FINALLY: Think about how this train accident could apply to hazards at your facility.

Are people at risk of having to make split-second decisions under stress?

If they do, or if it is possible, a serious accident could be just around the corner.

Try to remove the hazard if possible.

How could the hazard have been removed in this case?

An overpass or underpass for cars is one way.

Other ideas? Leave them below as comments.

Even in Humor, You can learn about Root Cause and People from Dr. Deming

January 22nd, 2015 by

Caution: Watching this video can and will make you laugh … then you realize you might be laughing at …

… your own actions.

… your understanding of other people’s actions.

… your past corrective or preventative actions.

Whether your role or passion is in safety, operations, quality, or finance … “quality is about people and not product.” Interestingly enough, many people have not heard Dr. Deming’s concepts or listened to Dr. Deming talk. Yet his thoughts may help you understand the difference between people not doing their best and the best that the process and management will allow to be produced.

To learn more about quality process thinking and how TapRooT® can integrate with your frontline activities to sustain company performance excellence, join a panel of Best Practice Presenters in our 2015 TapRooT® Summit track this June in Las Vegas: a Summit week that reminds you that learning and people are your most vital variables to success and safety.

To learn more about our Summit Track please go to this link. https://www.taproot.com/taproot-summit

If you have trouble getting access to the video, you can also use this link http://youtu.be/mCkTy-RUNbw

Root Cause Analysis Tip: Is Human Error a Root Cause?

July 17th, 2014 by

A frequent question that I see in various on-line chat forums is: “Is human error a root cause?” For TapRooT® Users, the answer is obvious. NO! But the amount of discussion that I see and the people who even try suggesting corrective actions for human error with no further analysis is amazing. Therefore, I thought I’d provide those who are NOT TapRooT® Users with some information about how TapRooT® can be used to find and fix the root causes of human error.

First, we define a root cause as:

“the absence of a best practice or the failure to apply knowledge that would have prevented a problem.”

But we went beyond this simple definition. We created a tool called the Root Cause Tree® to help investigators go beyond their current knowledge to discover human factors best practices/knowledge to improve human performance and stop/reduce human errors.

How does the Root Cause Tree® work?

First, if there is a human error, the Tree gets the investigator to answer 15 questions that guide them to the appropriate Basic Cause Categories (out of seven potential categories) to investigate further to find root causes.

The seven Basic Cause Categories are:

  • Procedures,
  • Training,
  • Quality Control,
  • Communications,
  • Human Engineering,
  • Work Direction, and
  • Management Systems.

If a category is indicated by one of the 15 questions, the investigator uses evidence in a process of elimination and selection guided by the questions in the Root Cause Tree® Dictionary.

The investigator uses evidence to work their way down the tree until root causes are discovered under the indicated categories or until that category is eliminated. Here’s the Human Engineering Basic Cause Category with one root cause (Lights NI).

[Image: Human Engineering Basic Cause Category from the Root Cause Tree®]
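For readers who think in code, the elimination-and-selection flow described above can be sketched as a simple question-driven triage. This is only an illustrative sketch: the actual 15 questions, category mappings, and dictionary guidance are part of the TapRooT® System, so the questions and mappings below are hypothetical stand-ins (only the seven Basic Cause Category names come from the text above).

```python
# Hypothetical sketch of a question-driven triage, loosely modeled on the
# process described above. Real Root Cause Tree(R) content is proprietary;
# the screening questions below are invented for illustration.

BASIC_CAUSE_CATEGORIES = [
    "Procedures", "Training", "Quality Control", "Communications",
    "Human Engineering", "Work Direction", "Management Systems",
]

# Each (question, categories) pair maps a "yes" answer to the categories
# the investigator would examine further (hypothetical examples).
SCREENING_QUESTIONS = [
    ("Was a written procedure in use at the time of the error?",
     ["Procedures"]),
    ("Did the task require skills or knowledge the worker lacked?",
     ["Training"]),
    ("Could better labeling, displays, or lighting have prevented it?",
     ["Human Engineering"]),
]

def indicated_categories(answers):
    """Given yes/no answers keyed by question index, return the categories
    that remain for deeper, evidence-based elimination and selection."""
    indicated = []
    for idx, (_question, cats) in enumerate(SCREENING_QUESTIONS):
        if answers.get(idx):  # "yes" answers indicate categories
            for cat in cats:
                if cat not in indicated:
                    indicated.append(cat)
    return indicated

# Example: "yes" to the first and third screening questions.
print(indicated_categories({0: True, 1: False, 2: True}))
# -> ['Procedures', 'Human Engineering']
```

In the real process, of course, indicating a category is only the start: the investigator then uses evidence and the dictionary definitions to eliminate or select individual root causes under each indicated category.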

The process of using the Root Cause Tree® was tested by users in several different industries including a refinery, an oil exploration division of a major oil company, the Nuclear Regulatory Commission, and an airline. In each case, the tests proved that the Tree helped investigators find root causes that they previously would have overlooked and improved the company’s development of more effective corrective actions. You can see examples of the results of performance improvement by using the TapRooT® System by clicking here.

If you would like to learn to use TapRooT® and the Root Cause Tree® to find the real root causes of human error and to improve human performance, I suggest that you attend our 5-Day TapRooT® Advanced Root Cause Analysis Team Leader Course and bring an incident that you are familiar with to the course to use as a final exercise.

Note that we stand behind our training with an ironclad guarantee. Attend the course. Go back to work and apply what you have learned. If you and your management don’t agree that you are finding root causes that you previously would have overlooked and that your management doesn’t find that the corrective actions you recommend are much more effective, just return your course materials and software and we will refund the entire course fee. No questions asked. It’s just that simple.

How can we make such a risk-free guarantee?

Because we’ve proven that TapRooT® works over and over again in industries around the world. We have no fear: you will see that TapRooT® improves your analysis of human errors, helps you develop more effective corrective actions, and helps your company achieve the next, better level of performance.

Want to see more tips like these? Subscribe to our Tuesday eNewsletter.  It delivers a root cause tip column, career development tips, accidents & lessons learned and more!  Just send an email to Barb at editor@taproot.com with “subscribe” in the subject line.

Monday Accident & Lessons Learned: Human Error Leads to Near-Miss at Railroad Crossing in UK – Can We Learn Lessons From This?

June 23rd, 2014 by

Here’s the summary from the UK RAIB report:

 

At around 05:56 hrs on Thursday 6 June 2013, train 2M43, the 04:34 hrs passenger service from Swansea to Shrewsbury, was driven over Llandovery level crossing in the town of Llandovery in Carmarthenshire, Wales, while the crossing was open to road traffic. As the train approached the level crossing, a van drove over immediately in front of it. A witness working in a garage next to the level crossing saw what had happened and reported the incident to the police.

The level crossing is operated by the train’s conductor using a control panel located on the station platform. The level crossing was still open to road traffic because the conductor of train 2M43 had not operated the level crossing controls. The conductor did not operate the level crossing because he may have had a lapse in concentration, and may have become distracted by other events at Llandovery station.

The train driver did not notice that the level crossing had not been operated because he may have been distracted by events before and during the train’s stop at Llandovery, and the positioning of equipment provided at Llandovery station relating to the operation of trains over the level crossing was sub-optimal.

The RAIB identified that an opportunity to integrate the operation of Llandovery level crossing into the signalling arrangements (which would have prevented this incident) was missed when signalling works were planned and commissioned at Llandovery between 2007 and 2010. The RAIB also identified that there was no formalised method of work for train operations at Llandovery.

The RAIB has made six recommendations. Four are to the train operator, Arriva Trains Wales, and focus on improving the position of platform equipment, identifying locations where traincrew carry out operational tasks and issuing methods of work for those locations, improvements to its operational risk management arrangements and improving the guidance given to its duty control managers on handling serious operational irregularities such as the one that occurred at Llandovery.

Two recommendations are made to Network Rail. These relate to improvements to its processes for signalling projects, to require the wider consideration of reasonable opportunities to make improvements when defining the scope of these projects, and consideration of the practicability of providing a clear indication to train crew when Llandovery level crossing, and other crossings of a similar design, are still open to road traffic.


The full report has very interesting information about the possibility of fatigue playing a part in this near miss. See the whole report HERE.

This report is an excellent example of how much can be learned from a near-miss. People are more willing to talk when a potentially fatal accident is narrowly avoided than when a fatality actually occurs. And all of this started because a bystander reported the near-miss (not the train crew or the van driver).

How can you improve the reporting and investigation of potentially fatal near-miss accidents? Could your improvements in this area help stop fatalities?

 

 

Removing Blame from an Accident Investigation

May 7th, 2014 by

What would you do if you made a mistake at work that caused an accident, and your co-workers gathered around you to physically beat you as punishment? A PowerSource article, “Energy Companies Study the Role of Human Behavior in Safety” mentions this true-life scenario.

There’s still much work to be done in removing blame from an accident investigation to find the true root cause.  In the PowerSource article, TapRooT® Friend & Expert Joel Haight introduces a new human factors engineering program at the University of Pittsburgh that he hopes will attract industry professionals and assist them in solving this issue.

Click here to read the article.

Monday Accident & Lessons Learned: Decimal Points Matter

May 5th, 2014 by

It’s easy to make a math error. But when that error means your new, expensive submarine won’t float … that’s a disaster!

Read about the problem at:

http://thechronicleherald.ca/business/1133956-sub-might-be-too-heavy-to-resurface

and

http://www.huffingtonpost.com/2013/05/24/spain-submarine-s-81-isaac-peral-cant-float_n_3328683.html

I like the way the stories talk about how “someone” put the decimal point in the wrong place. I bet if you are that someone, you are lying low…

Now, what can you learn? What do you do to catch simple math errors (we all make them) in critical calculations? Leave your comments here.
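One common defense against this kind of error is an independent check: two people (or two methods) compute the critical figure separately, and the result is only accepted if the values agree within a tolerance. A misplaced decimal point produces an order-of-magnitude discrepancy that such a check catches immediately. Here is a minimal sketch of the idea in Python; the function name and the submarine weight figures are hypothetical, invented purely for illustration:

```python
# Sketch of an independent-check gate for a critical calculation.
# A misplaced decimal point shows up as a ~10x discrepancy, far
# outside any reasonable agreement tolerance.

def check_independent(value_a: float, value_b: float, rel_tol: float = 0.05) -> bool:
    """Return True if two independently computed values agree within rel_tol."""
    if value_a == 0 and value_b == 0:
        return True
    return abs(value_a - value_b) / max(abs(value_a), abs(value_b)) <= rel_tol

# Illustrative (made-up) submarine weight figures, in tonnes:
design_weight = 2200.0       # designer's calculation
audit_weight = 22000.0       # independent check with a misplaced decimal

if not check_independent(design_weight, audit_weight):
    print("Discrepancy: recheck both calculations before proceeding")
```

The point isn't the code itself but the process: the check only works if the second calculation is genuinely independent (a different person, method, or tool), so that both are unlikely to make the same mistake.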

How Far Away is Death?

April 8th, 2014 by

Not too far on this day in 1983…

Monday Accident & Lessons Learned: Incident Report from the UK Rail Accident Investigation Branch: Tram running with doors open on London Tramlink, Croydon

April 7th, 2014 by

There were eight recommendations made by the UK RAIB. Here's a summary of the investigation:

On Saturday 13 April 2013 between 17:33 and 17:38 hrs, a tram travelling from West Croydon to Beckenham Junction, on the London Tramlink system, departed from Lebanon Road and Sandilands tram stops with all of its doors open on the left-hand side. Some of the doors closed automatically during the journey, but one set of doors remained open throughout the incident. The incident ended when a controller monitoring the tram on CCTV noticed that it had departed from Sandilands with its doors open, and arranged for the tram to be stopped. Although there were no casualties, there was potential for serious injury.

The tram was able to move with its doors open because a fault override switch, which disables safety systems such as the door-traction interlock, had been inadvertently operated by the driver while trying to resolve a fault with the tram. The driver did not close and check the doors before departing from Lebanon Road and Sandilands partly because he was distracted from dealing with the fault, and partly because he did not believe that the tram could be moved with any of its doors open. The design of controls and displays in the driving cab contributed to the driver’s inadvertent operation of the fault override switch. Furthermore, breakdowns in communication between the driver and the passengers, and between the driver and the controller, meant that neither the driver nor the controller were aware of the problem until after the tram left Sandilands.

The RAIB has made eight recommendations. Four of these are to Tram Operations Ltd, aimed at improving the design of tram controls and displays, as well as training of staff on, and processes for, fault handling and communications. Two recommendations have been made to London Tramlink, one (in consultation with Tram Operations Ltd) relating to improving cab displays and labelling and one on enhancing the quality of the radio system on the network. One recommendation is made to all UK tram operators concerning the accidental operation of safety override switches. The remaining recommendation is to the Office of Rail Regulation regarding the provision of guidance on ergonomics principles for cab interface design.

 

For the complete report, see:

http://www.raib.gov.uk/cms_resources.cfm?file=/140306_R052014_Lebanon_Road.pdf

Three Mile Island Accident Anniversary: The Nuclear Industry Discovers Human Factors

March 28th, 2014 by

Five days of panic. 140,000 residents voluntarily evacuate. Fourteen years of clean-up.

The 35th anniversary of the Three Mile Island Nuclear Disaster.

On the midnight shift on March 28, 1979, things started to go wrong at TMI. A simple instrument problem started a chain of events that led to a core meltdown.

I can still remember that morning.

I was learning to operate a nuclear plant (S1W near Idaho Falls, ID) at the time. I was in the front seat of the bus riding out to the site. The bus driver had a transistor radio on and the news reported that there had been a nuclear accident at TMI. They switched to a live report from a farmer across the river. He said he could smell the radiation in the air. Also, his cows weren’t giving as much milk.

Years later, I attended the University of Illinois while also serving as an Assistant Professor (teaching midshipmen naval weapons and naval history). I was the first student in a new program, a cooperative effort between the Nuclear Engineering and Psychology Departments, to research human factors in nuclear power plants. My advisor and mentor was Dr. Charles O. Hopkins, a human factors expert. In 1981-1982, he headed a group of human factors professionals who wrote a report for the NRC on what it should do to more fully consider human factors in nuclear reactor regulation.

As part of my studies I developed a course on the accident at TMI and published my thesis on function allocation and automation for the next generation of nuclear power plants.

So, each year when the anniversary of the accident comes around I think back to those days and how little we have learned (or should I say applied) about using good human factors to prevent industrial accidents.
