On this day 140 years ago, December 29, 1876, a massive train accident occurred in Ashtabula, Ohio. A Pacific Express train carrying approximately 160 passengers plus crew fell along with the bridge it was crossing, killing over half of the passengers. The only reason some survived was that one part of the train managed to make it across before the bridge completely collapsed.
Investigations in that era did not have nearly the research and methods needed to determine accurate root causes. As a result, those in charge of specific projects (here, the engineers who constructed the bridge) were often blamed. In the eyes of the investigative jury, the Lake Shore and Michigan Railroad, along with Charles Collins, engineer, and Amasa Stone, architect, were all at fault.
The results of these allegations? Collins committed suicide, Stone was publicly scorned, and the actual root cause of the bridge collapse was never discovered. Tragic as the event was, I don’t think we can consider this a successful investigation.
Want to learn how the TapRooT® process works and how it can help your company perform better investigations than the one described here? Take a course.
A Report from the UK Rail Accident Investigation Branch:
Structural failure caused by scour at Lamington viaduct, South Lanarkshire, 31 December 2015
At 08:40 hrs on Thursday 31 December 2015, subsidence of Lamington viaduct resulted in serious deformation of the track as the 05:57 hrs Crewe to Glasgow passenger service passed over at a speed of about 110 mph (177 km/h). The viaduct spans the River Clyde between Lockerbie and Carstairs. Subsequent investigation showed that the viaduct’s central river pier had been partially undermined by scour following high river flow velocity the previous day. The line was closed for over seven weeks until Monday 22 February 2016 while emergency stabilisation works were completed.
The driver of an earlier train had reported a track defect on the viaduct at 07:28 hrs on the same morning, and following trains crossed the viaduct at low speed while a Network Rail track maintenance team was deployed to the site. The team found no significant track defects and normal running was resumed with the 05:57 hrs service being the first train to pass on the down line. Immediately after this occurred at 08:40 hrs, large track movements were noticed by the team, who immediately imposed an emergency speed restriction before closing the line after finding that the central pier was damaged.
The viaduct spans a river bend which causes water to wash against the sides of the piers. It was also known to have shallow foundations. These were among the factors that resulted in it being identified as being at high risk of scour in 2005. A scheme to provide permanent scour protection to the piers and abutments was due to be constructed during 2015, but this project was deferred until mid-2016 because a necessary environmental approval had not been obtained.
To mitigate the risk of scour, the viaduct was included on a list of vulnerable bridges for which special precautions were required during flood conditions. These precautions included monitoring of river levels and closing the line if a predetermined water level was exceeded. However, this process was no longer in use and there was no effective scour risk mitigation for over 100 of the most vulnerable structures across Scotland. This had occurred, in part, because organisational changes within Network Rail had led to the loss of knowledge and ownership of some structures issues.
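The precaution RAIB describes is essentially a threshold check: compare a measured river level against a predetermined trigger and close the line when it is exceeded. Here is a minimal sketch of that logic; the structure names, trigger levels, and action tiers are hypothetical, not taken from the RAIB report:

```python
# Hypothetical sketch of a scour-precaution threshold check.
# Structure names and trigger levels are illustrative only.

SCOUR_WATCH_LIST = {
    # structure: predetermined trigger level in metres (invented values)
    "Lamington Viaduct": 2.5,
    "Example Bridge A": 1.8,
}

def action_for_reading(structure: str, river_level_m: float) -> str:
    """Return the operational action for a monitored structure."""
    trigger = SCOUR_WATCH_LIST.get(structure)
    if trigger is None:
        return "not monitored"          # structure not on the vulnerable list
    if river_level_m >= trigger:
        return "close line"             # predetermined level exceeded
    if river_level_m >= 0.8 * trigger:
        return "increase monitoring"    # approaching the trigger level
    return "normal running"

print(action_for_reading("Lamington Viaduct", 2.7))  # close line
print(action_for_reading("Lamington Viaduct", 2.1))  # increase monitoring
```

The point of the RAIB finding is not that this logic is hard; it is that the organisation stopped running it, so even a simple control like this fails if nobody owns it.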
Although unrelated to the incident, the RAIB found that defects in the central river pier had not been fully addressed by planned maintenance work. There was also no datum level marked on the structure which meant that survey information from different sources could not easily be compared to identify change.
As a result of this investigation, RAIB has made three recommendations to Network Rail relating to:
- the management of scour risk
- the response to defect reports affecting structures over water
- the management of control centre procedures.
Five learning points are also noted relating to effective management of scour risk.
For more information, see:
SHP reported that a worker at the Carlsberg brewery died and 22 others were injured by a cooling system ammonia leak.
Are you using advanced root cause analysis to investigate near-misses and stop major accidents? Major accidents can be avoided. That’s a lesson that all facilities with hazards should learn. For current advanced root cause analysis public courses being held around the world, see:
TapRooT® can be used for both low-to-medium-risk incidents (including near-misses) and major accidents. For people who will normally be investigating low-risk incidents, the 2-Day TapRooT® Root Cause Analysis Course is recommended.
For people who will investigate all types of incidents including near-misses and incidents with major consequences (or a potential for major consequences), we recommend the 5-Day Advanced Team Leader Training.
Don’t wait! If you haven’t attended TapRooT® Training, get signed up today!
For a report from the UK Rail Accident Investigation Branch, see:
This day in history…an explosion occurred that remains a mystery to this day.
On November 23, 1984, in Uherské Hradiště, Czechoslovakia, part of the MESIT factory collapsed. The exact root cause is unknown, but this manufacturing plant disaster killed 18 workers and injured 43. Because this was the mid-1980s, the communist regime decided to keep the accident a secret from the public. People speculate it was to hide confidential work from the rest of the world. However, the media caught wind of it from an anonymous source, and the story then spread across the sea to Western media. Although not much is known, it’s no secret that proactive measures need to be enforced and investigations must always be done to make improvements.
If you don’t understand what happened, you will never understand why it happened.
You would think this is just common sense. But if it is, why would an industry allow a culture to exist that promotes blame and makes finding and fixing the root causes of accidents/incidents almost impossible?
I see the blame culture in many industries around the world. Here is an example from a hospital in the UK. This is an extreme example but I’ve seen the blame culture make root cause analysis difficult in many hospitals in many countries.
Dr. David Sellu (let’s just call him Dr. Death, as the UK tabloids did) was prosecuted for errors and delays that killed a patient. He ended up serving 16 months in high-security prisons because the prosecution alleged that his “laid back attitude” had caused delays in treatment that led to the patient’s death. However, the hospital had done a “secret” root cause analysis that showed that systemic problems (not the doctor) had led to the delays. A press investigation by the Daily Mail eventually unearthed the report that had been kept hidden. These press reports eventually led to the doctor’s release, but not until he had served prison time and had his reputation completely trashed.
If you were a doctor or a nurse in England, would you freely cooperate with an investigation of a patient death? When you know that any perceived mistake might lead to jail? When problems that are identified with the system might be hidden (to avoid blame to the institution)? When your whole life and career are in jeopardy? When your freedom is on the line because you may be under criminal investigation?
This is an extreme example. But there are other examples of nurses, doctors, and pharmacists being prosecuted for simple errors that were caused by systemic problems that were beyond their control and were not thoroughly investigated. I know of some in the USA.
The blame culture causes performance improvement to grind to a halt when people don’t fully cooperate with initiatives to learn from mistakes.
TapRooT® Root Cause Analysis can help investigations move beyond blame by clearly showing the systemic problems that can be fixed to prevent (or at least greatly reduce) future repeat accidents. Attend a TapRooT® Root Cause Analysis Course and find out how you can use TapRooT® to help you change a blame culture into a culture of performance improvement.
For course information and course dates, see:
Found this TV show about the crash and thought it was interesting … What can you learn?
The Navy still likes to blame folks as a root cause. At least that’s what I see in this report about a “pilot error” that kept an F/A-18 Hornet from making it back to the carrier USS Theodore Roosevelt.
Seems there were lots of Causal Factors that contributed to the loss of an $86 million aircraft, as described in this article on Military.com:
I haven’t found the report or the video online.
What do you think of the report of the investigation?
In a short but interesting article in SEAPOWER, Vice Admiral Thomas J. Moore stated that the Washington Navy Yard had just about completed the root cause analysis of the failure of the main turbine generators on the USS Ford (CVN 78). He said:
“The issues you see on Ford are unique to those particular machines
and are not systemic to the power plant or to the Navy as a whole.”
Additionally, he said:
“…it is absolutely imperative that, from an accountability standpoint, we work with Newport News
to find out where the responsibility lies. They are already working with their sub-vendors
who developed these components to go find where the responsibility and accountability lie.
When we figure that out, contractually we will take the necessary steps to make sure
the government is not paying for something we shouldn’t be paying for.”
That seems like a “Blame Vision” statement.
That Blame Vision statement was followed by a statement straight from the Crisis Management Vision playbook. Admiral Moore emphasized that the Navy would set a date for commissioning the ship, which is behind schedule, by saying:
“Right now, we want to get back into the test program and you’ll see us do that here shortly.
As the test program proceeds, and we start to develop momentum, we’ll give you a date.
We decided, ‘Let’s fix this, let’s get to the root cause, let’s get back in the test program,’ and
when we do that, we’ll be sure to get a date out. I expect that before the end of the year
we will be able to set a date for delivery.”
Press statements are hard to interpret. Perhaps the Blame and Crisis Visions were just the way the reporters heard the statements or the way I interpreted them. An Opportunity to Improve Vision statement would have been more along the lines of:
We are working hard to discover the root causes of the failures of the main turbine generators
and we will be working with our suppliers to fix the problems discovered and apply the
lessons learned to improve the reliability of the USS Ford and subsequent carriers of this class,
as well as improving our contracting, design, and construction practices to reduce the
likelihood of future failures in the construction of new, cutting edge classes of warships.
Would you like to learn more about the Blame Vision, the Crisis Management Vision, and the Opportunity to Improve Vision and how they can shape your company’s performance improvement programs? Then watch for the release of our new book:
The TapRooT® Root Cause Analysis Philosophy – Changing the Way the World Solves Problems
It should be published early next year, and we will make sure all e-Newsletter readers are notified when the book is released.
To subscribe to the newsletter, provide your contact information at:
Here’s the press report.
A similar ride in the US has been closed while the investigation is ongoing.
Press release from the US Chemical Safety Board…
CSB Releases Final Case Study into 2013 Explosion and Fire at Williams Olefins Plant
in Geismar, Louisiana; Case Study Concludes that Process Safety Management Deficiencies
During 12 Years Prior to the Incident Led to the Explosion
October 19, 2016, Baton Rouge, LA — Today the CSB released its final report into the June 13, 2013, explosion and fire at the Williams Olefins Plant in Geismar, Louisiana, which killed two employees. The report concludes that process safety management program deficiencies at the Williams Geismar facility during the 12 years leading to the incident allowed a type of heat exchanger called a “reboiler” to be unprotected from overpressure, and ultimately rupture, causing the explosion.
The Williams Geismar facility produces ethylene and propylene for the petrochemical industry and employs approximately 110 people. At the time of the incident, approximately 800 contractors worked at the plant on an expansion project aimed at increasing the production of ethylene.
The incident occurred during non-routine operational activities that introduced heat to the reboiler, which was offline and isolated from its pressure relief device. The heat increased the temperature of a liquid propane mixture confined within the reboiler, resulting in a dramatic pressure rise within the vessel. The reboiler shell catastrophically ruptured, causing a boiling liquid expanding vapor explosion (BLEVE) and fire, which killed two workers; 167 others reported injuries, the majority of which were contractors.
The CSB investigation revealed a poor process safety culture at the Williams Geismar facility, resulting in a number of process safety management program weaknesses. These include deficiencies in implementing Management of Change (MOC), Pre-Startup Safety Review (PSSR), Process Hazard Analysis (PHA) programs, and procedure programs causal to the incident:
- Failure to appropriately manage or effectively review two significant changes that introduced new hazards involving the reboiler that ruptured—(1) the installation of block valves that could isolate the reboiler from its protective pressure relief device and (2) the administrative controls Williams relied on to control the position (open or closed) of these block valves.
- Failure to effectively complete a key hazard analysis recommendation intended to protect the reboiler that ultimately ruptured.
- Failure to perform a hazard analysis and develop a procedure for the operations activities conducted on the day of the incident that could have addressed overpressure protection.
CSB Chairperson Vanessa Allen Sutherland said, “The tragic accident at Williams was preventable and therefore unacceptable. This report provides important safety lessons that we urge other companies to review and incorporate within their own facilities.”
The CSB case study on the accident at Williams notes the importance of:
- Using a risk-reduction strategy known as the “hierarchy of controls” to effectively evaluate and select safeguards to control process hazards. This strategy could have resulted in Williams choosing to install a pressure relief valve on the reboiler that ultimately ruptured instead of relying on a locked open block valve to provide an open path to pressure relief, which is less reliable due to the possibility of human implementation errors;
- Establishing a strong organizational process safety culture. A weak process safety culture contributed to the performance and approval of a delayed MOC that did not identify a major overpressure hazard and an incomplete PSSR;
- Developing robust process safety management programs, which could have helped to ensure PHA action items were implemented effectively; and
- Ensuring continual vigilance in implementing process safety management programs to prevent major process safety incidents.
Following the incident, Williams implemented improvements in managing process safety at the Geismar facility. These include, among others, redesigning the reboilers to prevent isolation from their pressure relief valves, improving its management of change process to be more collaborative, and updating its process hazard analysis procedure.
Investigator Lauren Grim said, “Williams made positive safety management changes at the Geismar facility following the accident, but more should be done to improve process safety and strengthen the plant’s process safety culture. Our report details important safety recommendations to protect workers at the Williams Geismar facility.”
To prevent future incidents and further improve process safety at the Geismar plant, the CSB recommended that Williams strengthen existing safety management systems and adopt additional safety programs. These strategies include conducting safety culture assessments, developing a robust safety indicators tracking program, and conducting detailed process safety program assessments.
The CSB also identified gaps in a key industry standard by the American Petroleum Institute (API) and issued recommendations to API to strengthen its “Pressure-relieving and Depressuring Systems” requirements to help prevent future similar incidents industry-wide.
Chairperson Sutherland said, “Most of the accidents the CSB investigates could have been prevented had process safety culture been a top priority at the facility where the incident occurred. These changes must be encouraged from the top with managers implementing effective process safety management programs.”
The CSB is an independent, non-regulatory federal agency charged with investigating serious chemical accidents. The agency’s board members are appointed by the president and confirmed by the Senate. CSB investigations look into all aspects of chemical accidents, including physical causes such as equipment failure as well as inadequacies in regulations, industry standards, and safety management systems.
The Board does not issue citations or fines but does make safety recommendations to plants, industry organizations, labor groups, and regulatory agencies such as OSHA and EPA. Visit our website, www.csb.gov. For more information, contact Communications Manager Hillary Cohen, cell 202-446-8094 or email email@example.com.
The Associated Press reported that attorneys for Don Blankenship, the imprisoned former CEO of Massey Energy, argued that he should not have been sentenced to jail for the 2010 coal mine explosion that killed 29 people.
Read more here:
Note that I found this “wanted poster” online at http://mountainkeeper.blogspot.com.
Automation dependency is an interesting topic. Here’s what a recent CALLBACK from the Aviation Safety Reporting System had to say about the topic…
On October 24, 1960, at the Baikonur Test Range, a massive explosion occurred during testing of the Soviet R-16 ICBM. The second-stage engine ignited, detonating the first-stage fuel tanks (which had not been drained) directly beneath it and causing the catastrophic explosion. The military men and test-range employees working on the missile were incinerated instantly. Unfortunately, the fire and fumes spread so quickly that other men across the range were killed soon after. However, the missile was a secret project, so the explosion had to remain a secret as well. It wasn’t exposed until a magazine article in April 1989.
Is taking a shortcut like not draining the fuel tanks resourceful? No. It’s dangerous, and it cost the Soviet army much more than it bargained for.
Monday Accident & Lessons Learned: Aviation Safety Reporting System CALLBACK Notice About Ramp Safety
October 17th, 2016 by Mark Paradies
Here’s the start of the report …
This month CALLBACK features reports taken from a cross-section of ramp experiences. These excerpts illustrate a variety of ramp hazards that can be present. They describe the incidents that resulted and applaud the “saves” made by the Flight Crews and Ground Personnel involved.
For the complete report, see:
Summary from the UK Rail Accident Investigation Branch …
At 18:12 hrs on Thursday 16 June 2016, a two-car diesel multiple unit train, operated by Great Western Railway (GWR), was driven through open trap points immediately outside Paddington station and derailed. It struck an overhead line equipment (OLE) mast, damaging it severely and causing part of the structure supported by the mast to drop to a position where it was blocking the lines. There were no passengers on the train, and the driver was unhurt. All the lines at Paddington were closed for the rest of that evening, with some services affected until Sunday 19 June.
For causes and lessons learned, see: https://www.gov.uk/government/publications/paddington-safety-digest/derailment-at-paddington-16-june-2016
Monday Accident & Lessons Learned: US CSB Report on 2014 Freedom Industries Contamination of Charleston, West Virginia Drinking Water
October 3rd, 2016 by Mark Paradies
Here is the press release from the US Chemical Safety Board …
CSB Releases Final Report into 2014 Freedom Industries Mass Contamination of Charleston, West Virginia Drinking Water; Final Report Notes Shortcomings in Communicating Risks to Public and Lack of Chemical Tank Maintenance Requirements; Report Includes Lessons Learned and Safety Recommendations to Prevent a Similar Incident from Occurring
September 28, 2016, Charleston, WV, — The CSB’s final report into the massive release of chemicals into this valley’s primary source of drinking water in 2014 concludes Freedom Industries failed to inspect or repair corroding tanks, and that as hazardous chemicals flowed into the Elk River, the water company and local authorities were unable to effectively communicate the looming risks to hundreds of thousands of affected residents, who were left without clean water for drinking, cooking and bathing.
On the morning of January 9, 2014, an estimated 10,000 gallons of Crude Methylcyclohexanemethanol (MCHM) mixed with propylene glycol phenyl ethers (PPH Stripped) were released into the Elk River when a 46,000-gallon storage tank located at the Freedom Industries site in Charleston, WV, failed. As the chemical entered the river it flowed towards West Virginia American Water’s intake, which was located approximately 1.5 miles downstream from the Freedom site.
The CSB’s investigation found that Freedom’s inability to immediately provide information about the chemical characteristics and quantity of spilled chemicals resulted in significant delays in the issuance of the “Do Not Use Order” and informing the public about the drinking water contamination. For example, Freedom’s initially reported release quantity was 1,000 gallons of Crude MCHM. Over the following days and weeks, the release quantity increased to 10,000 gallons. Also, the presence of PPH in the released chemical was not made public until 13 days after the initial leak was discovered.
The CSB’s investigation found that no comprehensive aboveground storage tank law existed in West Virginia at the time of the release, and while there were regulations covering industrial facilities that required Freedom to have secondary containment, Freedom ultimately failed to maintain adequate pollution controls and secondary containment as required.
CSB Chairperson Vanessa Allen Sutherland said, “Future incidents can be prevented with proper communication and coordination. Business owners, state regulators and other government officials and public utilities must work together in order to ensure the safety of their residents. The CSB’s investigation found fundamental flaws in the maintenance of the tanks involved, and deficiencies in how the nearby population was told about the risks associated with the chemical release.”
An extensive technical analysis conducted by the CSB found that the MCHM tanks were not internally inspected for at least 10 years before the January 2014 incident. However, the CSB report notes, since the incident there have been a number of reforms, including passage of the state’s Aboveground Storage Tank Act. Among other requirements, the new regulations would have required the tanks at Freedom to be surrounded by an adequate secondary containment structure and to receive proper maintenance and corrosion prevention, including internal inspections and a certification process.
The CSB’s investigation determined that nationwide water providers have likely not developed programs to determine the location of potential chemical contamination sources, nor plans to respond to incidents such as the one in Charleston, WV.
Supervisory Investigator Johnnie Banks said, “The public deserves and must demand clean, safe drinking water. We want water systems throughout the country to study the valuable lessons learned from our report and act accordingly. We make specific recommendations to a national association to communicate these findings and lessons.”
During a lift with a tower crane, a gearbox failure caused about 1,000 lbs of hook and rigging gear to fall to the ground, narrowly missing workers in the area. Here is the report.
The investigation revealed several issues, most relating to proper inspections of the gearboxes to identify defective gears. While again this appears to be a straight equipment failure, we would also want to know:
- How did the deficient gear end up in the gearbox (it was the wrong material)?
- Are we looking for repeat failures (this had happened before)?
- How close were the workers in the vicinity?
- What was the preventative maintenance plan for this gearbox? Was it required by the vendor?
Lots of other directions a good investigation will lead you.
Here is a link (click the picture below) to a Callback publication about accidents and fatigue …
Here is a quote:
“The NTSB 2016 ‘Most Wanted List’ of Transportation Safety Recommendations leads with ‘Reduce Fatigue-Related Accidents.’ It states, ‘Human fatigue is a serious issue affecting the safety of the traveling public in all modes of transportation.’”
This incident notice is from the UK Rail Accident Investigation Branch about an overspeed incident at Fletton Junction, Peterborough, on 11 September 2015.
At around 17:11 hrs on 11 September 2015, the 14:25 hrs Virgin Trains East Coast passenger service from Newcastle to London King’s Cross passed through Fletton Junction, near Peterborough, at 51 mph (82 km/h), around twice the permitted speed of 25 mph (40 km/h). This caused the carriages to lurch sideways, resulting in minor injuries to three members of staff and one passenger.
It is likely that the train driver had forgotten about the presence of the speed restriction because he was distracted and fatigued due to issues related to his family. Lineside signs and in-cab warnings may have contributed to him not responding appropriately as he approached the speed restriction and engineering controls did not prevent the overspeeding. Neither Virgin Trains East Coast, nor the driver, had realised that family-related distraction and fatigue were likely to be affecting the safety of his driving. Virgin Trains East Coast route risk assessment had not recognised the overspeeding risks particular to Fletton Junction and Network Rail had not identified that a speed limit sign at the start of the speed restriction was smaller than required by its standards.
The incident could have had more serious consequences if the train had derailed or overturned. The risk of this was present because the track layout was designed for a maximum speed of 27 mph (43 km/h).
As a consequence of this investigation, RAIB has made five recommendations. Two addressed to Virgin Trains East Coast relate to enhancing the management of safety critical staff with problems related to their home life, and considering such issues during the investigation of unsafe events.
A recommendation addressed to Virgin Trains East Coast and an associated recommendation addressed to Network Rail relate to assessing and mitigating risks at speed restrictions.
A further recommendation to Network Rail relates to replacement of operational signage when this is non-compliant with relevant standards.
The RAIB report also includes learning points relating to managing personal problems that could affect the safety performance of drivers. A further learning point, arising because of a delay in reporting the incident, stresses the importance of drivers promptly reporting incidents which could have caused track damage. A final learning point encourages a full understanding of the effectiveness of safety mitigation provided by infrastructure and signalling equipment.
For more information see:
Here’s a link to the story: http://www.abc.net.au/news/2016-07-25/baby-dies-at-bankstown-lidcombe-hospital-after-oxygen-mix-up/7659552
An oxygen line had been improperly installed in 2015. It fed nitrous oxide to a neonatal resuscitation unit rather than oxygen.
The Ministry of Health representative said that all lines in all hospitals in New South Wales installed since the Liberal government took over in 2011 will be checked for correct function.
What can you learn from this?
Think about your installation and testing of new systems. How many Safeguards are in place to protect the targets?
Last month, Delta Air Lines experienced an equipment failure that caused their reservation system to shut down. Media reports indicate close to 2,000 flights were canceled. This came only a few weeks after Southwest Airlines experienced a similar computer failure, causing numerous flight delays and cancellations.
Reports continue to indicate that this was an equipment failure, due to a small fire in a power supply in their server room. Here is their description:
“Monday morning (August 8) an uninterrupted power source switch experienced a small fire which resulted in a massive failure at Delta’s Technology Command Center. This caused the power control module to malfunction, sending a surge to a transformer outside of Delta, resulting in the loss of power. The power was stabilized and power was restored quickly. But when this happened, critical systems and network equipment didn’t switch over to backups. Around 300 of about 7,000 data center components were discovered to not have been configured appropriately to avail backup power. In addition to restoring Delta’s systems to normal operations, Delta teams this week have been working to ensure reliable redundancies of electrical power as well as network connectivity and applications are in place.”
Keep in mind that the “uninterrupted power supply switch” is actually known as an “uninterruptible” power supply (UPS). This normally swaps you over to another power source if your primary source fails. You may have a simple UPS on your computer systems at the office, providing battery backup while power is restored. In Delta’s case, their UPS system attempted to switch over, but configuration issues prevented a significant number of their devices from actually shifting over.
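Delta’s description suggests that whether each component failed over depended on per-device configuration: about 300 of roughly 7,000 data center components were not set up to take backup power. Here is a toy sketch of the kind of configuration audit that would catch such devices before an outage; the device records and field names are invented for illustration, not Delta’s actual inventory:

```python
# Illustrative sketch: audit data center components for backup-power configuration.
# Device names and the "backup_power_configured" field are hypothetical.

components = [
    {"name": "reservation-db-01", "backup_power_configured": True},
    {"name": "gateway-router-07", "backup_power_configured": False},
    {"name": "booking-app-12",    "backup_power_configured": True},
]

def audit_backup_power(inventory):
    """Return the names of components that would NOT switch to backup power."""
    return [c["name"] for c in inventory if not c["backup_power_configured"]]

at_risk = audit_backup_power(components)
print(at_risk)  # ['gateway-router-07']
```

The lesson is that a failover design is only as good as the configuration of every device behind it, and that configuration can be verified continuously rather than discovered during a power event.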
Additionally, other reports indicate that the reservation system is extremely antiquated and is linked into other airlines’ (also extremely antiquated) systems. They have all patched together and upgraded their individual systems to the point that further upgrades are almost impossible; the system really requires a complete replacement, which would be EXTREMELY difficult and expensive while it is still being used for current reservations.
So while this is discussed by the airlines as an equipment failure, I think there are more than likely multiple causal factors, of which only one (the initiating problem) was a burned up component. Without knowing the details, we can see several Causal Factors:
- A UPS caught fire
- This small fire caused a large surge and widespread power loss
- Other equipment was not properly configured to shift to backup power
- There is no backup in the event of a loss of the primary reservation system
- The reservation computer system has not been upgraded to modern standards
I always question when a failure is classed as “equipment failure.” Unless the equipment failure is an allowed event (Tolerable Failure), it is much more likely that humans were much more involved in the failure, with the broken equipment as only a result.
From the UK Rail Accident Investigation Branch…
On 1 August 2015 at about 11:11 hrs, a freight train travelling within a work site collided with the rear of a stationary freight train at 28 mph (45 km/h).
Engineering staff had authorised the driver of the moving freight train to enter the work site at New Cumnock station, travel about 3 miles (4.8 km) to the start of a track renewal site, and bring the train to a stand behind the stationary train.
There were no injuries but the locomotive and seven wagons from the moving train and eleven wagons from the stationary train were derailed; the locomotive and derailed wagons were damaged. One wagon came to rest across a minor road. There was also substantial damage to the track on both railway lines.
The immediate cause was that the moving train was travelling too fast to stop short of the rear of the stationary train when its driver first sighted the train ahead. This was due to a combination of the train movement in the work site not taking place at the default speed of 5 mph (8 km/h) or at caution, as required by railway rules, and the driver of the moving train believing that the stationary train was further away than it actually was.
An underlying cause was that drivers often do not comply with the rules that require movements within a work site to be made at a speed of no greater than 5 mph (8 km/h) or at caution.
As a consequence of this investigation, RAIB has made four recommendations addressed to freight operating companies.
One relates to the monitoring of drivers when they are driving trains within possessions and work sites.
Two recommendations relate to implementing a method of formally recording information briefed to drivers about making train movements in possessions and work sites.
A further recommendation relates to investigating the practicalities of driving freight trains in possessions and work sites for long distances at a speed of 5 mph (8 km/h) or at other slow speeds, and taking action to address any identified issues.
RAIB has also identified three learning points including:
- the importance of providing drivers with all of the information they need to carry out movements in possessions and work sites safely
- a reminder to provide drivers (before they start a driving duty) with information about how and when they will be relieved
- the importance of engineering staff giving instructions to drivers through a face to face conversation when it is safe and practicable to do so.