System accident
A system accident, or normal accident, is an "unanticipated interaction of multiple failures" in a complex system. This complexity can be either technological or organizational, and often has elements of both. A system accident can be very easy to see in hindsight, but very difficult to see in foresight: ahead of time, there are simply too many possible action pathways.
These accidents often resemble Rube Goldberg devices in the way that small errors of judgment, flaws in technology, and seemingly insignificant damage combine to form an emergent disaster. System accidents were described in 1984 by Charles Perrow, who termed them "normal accidents", as having such characteristics as interactive complexity, tight coupling, cascading failures, and opaqueness. James T. Reason extended this approach with human reliability and the Swiss cheese model, now widely accepted in aviation safety and healthcare.
Once an enterprise passes a certain point in size, with many employees, specialization, backup systems, double-checking, detailed manuals, and formal communication, employees can all too easily fall back on protocol, habit, and "being right." Rather like attempting to watch a complicated movie in an unfamiliar language, the narrative thread of what is going on can be lost. Other phenomena, such as groupthink, can be occurring at the same time, for real-world accidents almost always have multiple causes. In particular, it is a mark of a dysfunctional organization to simply blame the last person who touched something.