The report from the International Nuclear Safety Advisory Group (INSAG) of the International Atomic Energy Agency (IAEA) on the causes of the accident that happened at the Chernobyl Nuclear Power Plant on April 26, 1986 (INSAG-4, 1991) not only opened a Pandora's box of cultures; it also issued a clear warning about the main problem, in the following important passage, which has received very little attention:
“The definition relates Safety Culture to personal attitudes and habits of thought and to the style of organizations. A second proposition then follows, namely that such matters are generally intangible; that nevertheless such qualities lead to tangible manifestations; and that a principal requirement is the development of means to use the tangible manifestations to test what is underlying.” (INSAG-4, 1991, p. 1, emphasis added).
The central, and so far unresolved, problem with safety culture is that we do not know how to deal with the intangible (safety culture) in order to regulate the tangible (human performance, and specifically the manifestations of the absence of safety). A further characterization of the intangible-tangible dilemma is given here. It also includes a possible and practical solution to the dilemma.
It is widely accepted as true that “the cure of a given troublesome condition depends primarily upon knowledge of its cause and the …”
A counterfactual condition is something, a factor or a condition (call it X), whose presence presumably might have prevented an accident from happening. (It is counterfactual because the accident that it hypothetically could have prevented has already happened.) The reasoning goes like this: “if only X had been present, or if only there had been more of X, or if only X had been better, then the accident would not have happened”. Commonly used examples other than safety culture are communication, trust, leadership, and situation awareness. Counterfactual conditions are the favourites of self-anointed experts who, in the aftermath of major calamities, find it irresistible to share their wisdom and insights with others.
Hollnagel, E. (2025). From safety to safely: Principles and practice of Systemic Potentials Management. Abingdon, Oxon, UK: Routledge.
Hollnagel, E. (2026). Decremental and incremental safety cultures: Safety-I and Safety-II revisited. Boca Raton, FL: CRC Press.
Hollnagel, E. & Slater, D. (2025). Safety culture maturity: A problem disguised as a solution. Fontainebleau, France: International Institute for Leadership and Safety Culture (IILSC Insights, 2025/1).
Hollnagel, E. & Slater, D. (Eds.) (2026). Incremental Safety Practices. Abingdon, Oxon, UK: Routledge.
Khattabi, N. et al. (2026). Improving cross-learning in clinical teams using daily on-site reflective meetings. Frontiers in Health Services (in press).
Martins, J. B. et al. (2022). Integrating Safety-I and Safety-II: Learning from failure and success in construction sites. Safety Science, 148, 105672.
Reason, J. T. (2000). Safety paradoxes and safety culture. Injury Control & Safety Promotion, 7(1), 3-14.
Schein, E. H. & Schein, P. A. (2025). Humble Inquiry: The gentle art of asking instead of telling (3rd ed.). Oakland, CA: Berrett-Koehler Publishers.
Wahl, K., Stenmarker, M., & Ros, A. (2022). Experience of learning from everyday work in daily safety huddles—a multi-method study. BMC Health Services Research, 22(1), 1101.

© Erik Hollnagel (2025)
The text below is also available as a separate .pdf file, which you can find here (but this unfortunately does not include the other .pdf files referred to in the text). Initially posted September 27, 2025 (latest update October 7, 2025).
This website will be updated from time to time, whenever I get a new idea or realize I have overlooked something, the latter being more likely.
Is incremental safety a new type of safety? It is not, of course, for there can be only one (as the Highlander wisely said). The only possible definition of safety is “a state where there are as few unexpected and unacceptable outcomes as possible”. The word safety is unsurprisingly derived from Latin, where the word salvus means uninjured, unharmed, and in good health (Raheemy, Sherratt & Hallowell, 2025). To be safe therefore means to be free from or protected against harm and injury, either actual (occurring here and now) or potential (possibly occurring at some time in the foreseeable future). Every sentient organism either tries to avoid getting into situations where it can suffer injury or be harmed, e.g., by making the threat disappear (which is why dogs bark at strangers), or else tries to get away from such situations if they happen nevertheless, typically by hiding, by seeking cover, by crawling away, or by fleeing (as all birds and winged insects do when we approach them). The "fight or flight" response (Cannon, 1915) is universal. Humans, as animals with cognitive abilities, especially the use of symbolic representation and memories of the past (history), the ability to select some memories rather than all (corresponding to consciousness), and the ability to imagine what may happen in the future, are naturally motivated to ensure a state of safety for whatever they do, individually and collectively. This explains the importance both of feeling safe and of being safe, which, although often confused, are quite different: we sometimes feel safe even though we are not, and vice versa.
In hindsight, the terms Safety-I and Safety-II regrettably nourished the confusion about whether there is more than one type of safety. There is not, as there can be only one! The confusion even tempted some (Leveson, 2020; Aven, 2022; Cooper, 2022) wrongly to infer that there could also be a Safety-III, and others foolishly latched onto this (Flinterman, 2024; Samost-Williams & Brook, 2025).
As a psychologist I had anticipated that this false inference might be made, and therefore explicitly warned against it in my 2014 book:
“Since Safety-II represents a logical extension of Safety-I, it may well be asked whether there will not some day be a Safety-III? In order to answer that, it is necessary to keep in mind that Safety-I and Safety-II differ in their focus and therefore ultimately in their ontology. The focus of Safety-I is on work that goes wrong or fails, and the corresponding efforts are to reduce their number. The focus of Safety-II is on work that goes well, and the corresponding efforts are to make sure that more work goes well.” (Hollnagel, 2014, p. 178).
A more detailed argument why Safety-III is impossible can be found here.
In hindsight, I neglected to point out that Safety-I and Safety-II represent different ways to achieve a state of safety, rather than different definitions of safety. (But this was not clear to me, nor to others, at the time.) Safety-I and Safety-II are, by intention, actually akin to safety differently (Dekker, 2015), where the adverbial form of different makes clear that it does not refer to a different type of safety (as there, indeed, can be only one), but to a different way of achieving and/or managing safety, understood as the state of being safe.
According to Safety-I and the safety legacy (Hollnagel, 2025), a state of safety can be achieved by reducing the number of acts and/or events that result in unacceptable outcomes, hence a decremental approach (Hollnagel, 2026), with the popular but completely unrealistic Zero Accident Vision (ZAV) as its ideal (Zwetsloot et al., 2013; Sharman, 2016; Björnberg et al., 2019). According to Safety-II, a state of safety can be achieved by increasing the number of acts and/or events that result in acceptable outcomes, hence an incremental approach, corresponding to an incremental safety culture. Incremental safety implies the equally unattainable ideal that everything goes well (Visio centum).
If safety is defined as a state where there are as few unexpected and unacceptable outcomes as possible, then the focus is clearly on those, and efforts are consequently aimed at reducing that number to be as small as possible, ideally zero, hence a decremental approach. If safety instead is defined as a state where there are as many expected and acceptable outcomes as possible, then the focus is on those, and efforts are in consequence aimed at increasing that number to be as large as possible, ideally 100%, hence an incremental approach. Visio centum is, of course, complementary to the Zero Accident Vision. If Visio centum is ever achieved, meaning that every act leads to predictable and acceptable outcomes, then there will logically be no acts that lead to unpredictable and unacceptable outcomes.
Incremental and decremental safety refer to two different approaches to becoming safe, rather than to two different types of safety. The difference can be illustrated by flipping a coin. Unless the person who flips the coin is incredibly skilled, any physical coin will land either heads up or tails up (only a quantum coin could, like Schrödinger's cat, remain in a superposition). But it would be terrible for safety, and for daily life, if it was not possible to determine whether the outcomes of an act were acceptable or unacceptable. Yet even though the outcomes of flipping a coin are themselves binary, the values need not be. If the coin was flipped to make a decision, the value of the outcome will be positive for one party and at the same time negative for the other (except for: heads, I win; tails, you lose), but such complementarity of value is unlikely to be the case where the issue is industrial safety. There are few, if any, cases in which an accident resulting in harm, loss or injury will be seen as having a positive value; the only exception is perhaps the actions of a terrorist, an assassin or an arsonist. Decremental and incremental safety also (I sincerely hope) exclude the possibility that someone concludes that there is a third way to achieve a state of safety, such as Safety-III. (Although human folly seems to have no upper limit.) If you neither try to reduce or decrement the number of cases where work goes wrong, nor try to increase or increment the number of cases where work goes well, the only option left is to do nothing, or laissez-faire. For few companies is that an attractive safety management strategy, since it means that nothing will change with regard to safety, and few companies dare publicly declare that they are financially or ethically satisfied with their current occupational accident rate and want to keep it as their status quo. Return, then, to the analogy of coin flipping.
The coin will land either tails up (corresponding to an unacceptable outcome) or heads up (corresponding to an acceptable outcome). So the two types of outcomes are clearly mutually exclusive: if there are more of one type, there will be fewer of the other, and vice versa. The intention of this analogy is not to suggest that the outcomes of work activities are random. On the contrary, we do all we can to bias the coin so that it does not land tails up, corresponding to unacceptable outcomes and a decremental safety culture. The analogy rather suggests that we might as well bias the coin in the opposite direction, so that it is more likely to land heads up, corresponding to acceptable outcomes and an incremental safety culture, because that is, after all, what we would like, and need, to happen in everyday work.
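The coin analogy can be made concrete with a small, purely illustrative simulation (the function name, probabilities and counts below are assumptions made up for this sketch, not anything from the safety literature): because each act either goes well or goes wrong, biasing the "coin" further towards acceptable outcomes automatically leaves fewer unacceptable ones.

```python
import random

def simulate_outcomes(p_acceptable: float, n_acts: int = 10_000, seed: int = 42):
    """Simulate n_acts binary outcomes: each act goes well (heads) with
    probability p_acceptable and goes wrong (tails) otherwise."""
    rng = random.Random(seed)
    acceptable = sum(rng.random() < p_acceptable for _ in range(n_acts))
    return acceptable, n_acts - acceptable

# Biasing the coin further towards heads (acceptable outcomes)...
for p in (0.90, 0.95, 0.99):
    good, bad = simulate_outcomes(p)
    # ...necessarily leaves fewer tails (unacceptable outcomes).
    print(f"p(acceptable)={p:.2f}: {good} acts went well, {bad} went wrong")
```

Running the sketch with a stronger bias (0.99 instead of 0.90) shows the heads count rising and the tails count falling by exactly the same amount, which is the incremental point of the analogy.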
The only third option to decremental and incremental safety that I can imagine would be sacramental safety, which has a phonetic resemblance to both, but semantically makes as little sense as Safety-III did. Time will tell if my imagination is too limited (I secretly fear so, but dread ever to see the evidence for it).
We are fooled by the language we use when we think and talk about safety. As professor James Reason (RIP) pointed out, the first paradox of safety is that “Safety is defined and measured more by its absence than its presence” (Reason, 2000). Yet we commonly and habitually use the comparative form safer of the adjective safe, wrongly taking it to mean more than safe, even though practically all formal definitions of safety emphasize the absence of something.
According to the conventional interpretation of safety, now often called Safety-I (Hollnagel, 2014b), safety refers to a condition where as little as possible goes wrong. The American Society of Safety Engineers and the American National Standards Institute, for instance, define safety as “the freedom from unacceptable risk” (ASSE, 2011, p. 13). The Agency for Healthcare Research and Quality (AHRQ) likewise defines safety as “freedom from accidental injury” (Emanuel et al., 2009), which can be achieved by “avoiding injuries or harm to patients from care that is intended to help them”. Finally, the International Civil Aviation Organization (ICAO) defines safety as “the state in which the risk of harm to persons or of property damage is reduced to, and maintained at or below, an acceptable level through a continuing process of hazard identification and risk management” (ICAO, 2013, pp. 1–2). When we say that something is safer, as in One Percent Safer (Sharman, 2020), we actually, and paradoxically, mean that it is one percent less unsafe, so that there are fewer (by 1%) unexpected and unacceptable outcomes. It therefore represents a decremental approach. The opposite is a true incremental approach, formerly called Safety-II, where the meaning of safer is that there is an increased number of expected and acceptable outcomes, even if it is just by one percent. Returning to language, safer actually means less unsafe, hence less of something rather than more of something. The only thing there will be more of is cost, since it is far from inexpensive to become safer by reducing the number of unacceptable outcomes. But there is little to show as a result of the investment, except hopefully fewer accidents and unwanted outcomes, which means that something is saved or avoided. In contrast, incremental safety means that something is gained, because more work will go well, leading to increased productivity and revenue.
And since we live in a macroscopic (non-quantum) world, where an act cannot go well and fail at the same time, an increase in the number of acceptable outcomes also means a decrease in the number of unacceptable outcomes. An incremental safety approach therefore fulfills the aim of a decremental safety approach at the same time.
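Provided the total number of acts is fixed, this complementarity can be stated as a one-line identity (the symbols N, A and U are introduced here purely for illustration, with N the total number of acts, A the acceptable and U the unacceptable outcomes):

```latex
A + U = N \quad\Longrightarrow\quad \Delta A = -\,\Delta U \quad \text{(for fixed } N\text{)}
```

so any increase in the number of acceptable outcomes A is, by necessity, matched by an equal decrease in the number of unacceptable outcomes U.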
The fundamental dilemma is that safety culture is an intangible that purports to account for the tangible: for human performance in general and organizational accidents in particular. The following sections take a closer look at this dilemma. The aim is briefly to chart the chequered history of safety culture, to focus on the fundamental intellectual dilemma, and to suggest a modest solution.
So much has been written about safety culture that even an AI-endowed search engine does not know the number of published books, but a conservative (intelligent, human) estimate is at least several hundred, to say nothing of tens of thousands of journal papers and conference proceedings, with no end in sight. Few of these efforts have, however, addressed the fundamental dilemma. Throughout most of the history of safety, the ages of technology and of human factors (Hale & Hovden, 1998; cf. Table 1), safety practitioners were satisfied with using a single cause (either a technical failure or malfunction, or a quirk of the human factor) to account for why and how accidents happened (technically called proximal monocausal attribution). But the 1970s and 1980s saw several large accidents, such as the explosion at Flixborough (1974), the nuclear reactor accident at Three Mile Island (1979), the explosion of the space shuttle Challenger (1986), the accident at the Chernobyl nuclear power plant (also 1986), the capsizing of the Herald of Free Enterprise (1987), and the King’s Cross Underground fire (also 1987), where there were no obvious proximate active failures, but instead a combination of active failures and long-standing (latent) distal systemic pathogens that suddenly resulted in an accident. This ushered in the age of safety management (Table 1), in which explanations required a combination of active failures and latent system conditions or pathogens, hence a transition to distal multicausal attribution. This challenged established linear cause-effect reasoning, threatened conventional monocausal thinking, and called into question traditional Newtonian methods of safety analysis and management (Dekker, Cilliers & Hofmeyr, 2011). Out of this turmoil arose the idea that safety culture, as a single cause or factor, perhaps could be used to explain what happened. This salvaged the convenience of monocausal thinking, but at the cost of reduced veracity.
The concept of safety culture maturity, using the analogy of human psychological development, soon became so popular as a way to account for third-age accidents that it itself required further elaboration. The primary contribution of safety culture was to meet the psychological and socio-cultural need for simple explanations of accidents in cases where proximal monocausal attribution was inadequate. Safety culture has never demonstrated any practical value or ability to improve safety by reducing the number of unexpected and unacceptable outcomes. Safety culture is a social construct (Searle, 1995), and also a counterfactual conditional. There are no recognised psychological, sociological, or organizational theories that support safety culture, and no known methods to effectively establish, manage, maintain, or improve it. Safety culture came out of nowhere to relieve the anxiety in the wake of organizational accidents. It achieved that, but has otherwise done nothing to reduce or prevent such accidents from occurring, much work and many serious efforts notwithstanding.
Table 1: Three ages of safety (Hale & Hovden, 1998).
Safety culture was first invoked as a counterfactual cause or explanation when the International Nuclear Safety Advisory Group (INSAG) of the International Atomic Energy Agency (IAEA) released its report on the causes of the accident that happened at the Chernobyl Nuclear Power Plant on April 26, 1986 (INSAG-4, 1991), although Reason (1988) had already attributed the accident to a combination of an inadequate test plan and groupthink. Safety culture thus from the beginning denoted something that was absent but which might have prevented an accident if only it had been present, or in other words, a counterfactual conditional.
The INSAG report, which introduced the concept of safety culture to the common safety vocabulary, defined it as:
“that assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, (nuclear plant) safety issues receive the attention warranted by their significance” (INSAG-4, 1991, p. 5). This is, however, what is known in philosophy as an analytic proposition, that is, a statement true solely by virtue of its meaning.
Safety culture was therefore, from the very start, effectively left undefined. INSAG is not a permanent group of a fixed size, so it is difficult to say with any certainty who contributed the idea of safety culture. Educated guesses, however, suggest that the term originated in the post-Chernobyl analytical work of the British Central Electricity Generating Board (CEGB), likely authored by Mr. Brian Edmondson, that it was carried into IAEA discourse through professional collaboration, and that it was eventually formalised anonymously in the INSAG-4 report. We know for certain that when Jim Reason wrote his paper on the Chernobyl errors (Reason, 1987, pp. 203-204), he pointed to the problems of coping with complexity and to elements of 'groupthink'.
Reason concluded that “we would be wise to extend the causal analysis beyond the limited scope of individual information processing to take account of the social aspects of the situation” (ibid., p. 204). This is in many ways the essence of safety culture.
The tangible-intangible dilemma
The INSAG-4 report also contained the following important passage, which has received very little attention:
“The definition relates Safety Culture to personal attitudes and habits of thought and to the style of organizations. A second proposition then follows, namely that such matters are generally intangible; that nevertheless such qualities lead to tangible manifestations; and that a principal requirement is the development of means to use the tangible manifestations to test what is underlying.” (INSAG-4, 1991, p. 1, emphasis added).
Much has been written about safety culture since then, but few authors seem to have noticed the above passage with the second proposition, which was cleverly tucked away on page 1 of the report. This passage nevertheless precisely identifies the fundamental problem with safety culture, namely how the intangible (safety culture) determines something tangible (what people do at work, particularly when it does not go well), or tangible = F(intangible), and whether this can actually be regulated. Without a clear and practical solution to the tangible-intangible dilemma, it is a fool’s errand to rely on safety culture.
Throughout the three ages of safety (Table 1, above) it has every now and then, but rarely, been necessary to find a cause for a hitherto unknown type of accident. The best known case is the world's first commercial jet airliner, the de Havilland Comet. During its first years of commercial operation, which began in 1952, three Comet aircraft were lost in highly publicised accidents after suffering catastrophic mishaps mid-flight. Following these accidents the Comet was withdrawn from service, but a redesigned version flew again in 1958, and the aircraft was finally retired in 1997. To provide passengers with a better experience, the Comets had large rectangular windows, which eventually induced metal fatigue, the weakening of a metal part due to repeated cyclical movement such as bending or twisting. Metal fatigue was obviously not considered when the Comet aircraft were designed, even though the phenomenon had been known, and named, since 1839. Metal fatigue was, however, seen as just another failure of technology and hence did not introduce a new age of safety.
In other sciences it has also from time to time been necessary to introduce something new, because it was required by the underlying theories. The best known example is that the Standard Model of particle physics required specific elementary particles, most famously the Higgs boson, predicted by Peter Higgs, François Englert, and others in 1964, and confirmed to exist by experiments using the Large Hadron Collider (LHC) in 2012. (Higgs and Englert shared the Nobel prize in physics in 2013.) A similar inference in cosmology is the existence of dark matter, a mysterious form of matter that supposedly makes up about 85% of the matter in the universe, although it is yet to be found, perhaps because it by definition is invisible. Safety culture differs from both of these. There is first of all no compelling physical evidence, such as crashed airliners, that requires safety culture as a cause; neither are there any strong theories about human or organizational behaviour that predict its existence. So, like dark matter, safety culture remains invisible. The reasons why safety culture became necessary are instead purely psychological: whenever something unexpected and unacceptable happens, there is an insatiable psychological need for explanations.
The need for safety culture
New causes can become necessary for two different reasons, as explained above: (1) for technical reasons, when a new type of accident happens, as illustrated by the case of the de Havilland Comet aircraft, or (2) for theoretical reasons, as illustrated by the Higgs boson. Safety culture, however, was due to neither of these, but had its own and equally compelling reason: (3) psychological necessity, as pointed out by many philosophers.
The first of these was the Andalusian Muslim polymath Ibn Hazm of Cordoba (994–1064), who observed that: “The chief motive of all human actions is the desire to avoid anxiety.” (Hazm, 1990, p. 114).
Half a millennium later, the English philosopher Thomas Hobbes (1588-1679), in his great book Leviathan, wrote that:
“Ignorance of remote causes, disposeth men to attribute all events, to the causes immediate, and Instrumentall for these are all the causes they perceive”. (Hobbes, 1651, Chapter XI).
This, by the way, elegantly explains the problem of organizational accidents, namely that we cannot perceive the causes. Latent conditions, by definition, stay latent because they are not noticed.
In the late 19th century, the eminent German philosopher Friedrich Wilhelm Nietzsche (1844-1900) was even more direct, when he wrote:
“to trace something unfamiliar back to something familiar is at once a relief, a comfort and a satisfaction, while it also produces a feeling of power. The unfamiliar involves danger, anxiety and care – the fundamental instinct is to get rid of these painful circumstances. First principle: any explanation is better than none at all.” (Nietzsche, 1997, Chapter 5, emphasis added). I will refer to this as the Nietzschean anxiety.
But most eloquent was Count Leo Tolstoy (1828-1910), who in his great book War And Peace discussed how the Muscovites tried to make sense of the conflict when Napoleon was marching toward Moscow. Tolstoy astutely noted that:
“Man’s mind cannot grasp the causes of events in their completeness, but the desire to find those causes is implanted in man’s soul. And without considering the multiplicity and complexity of the conditions any one of which taken separately may seem to be the cause, he snatches at the first approximation to a cause that seems to him intelligible and says: This is the cause.” (Tolstoy, 2007, Book 13, Chapter 1)
The famous US psychoanalyst Erich Fromm (1900-1980) was more blunt when he wrote: “The quest for certainty blocks the search for meaning.” (Fromm, 2002, p. 45).
The numerous unexpected and unacceptable outcomes that happened in the third age of safety, and which still happen, created an urgent need for a new type of explanation, which the counterfactual social construct safety culture appeared to meet. Safety culture literally came as a deus ex machina.
Given that safety culture, for the reasons mentioned above, was necessary to restore peace of mind where simpler explanations had failed, the obvious next question is how a safety culture can be established and maintained. This is the essential problem INSAG identified when it wrote of “the development of means to use the tangible manifestations to test what is underlying.” This is where an articulated theory of organizational culture would have come in handy; such a theory is unfortunately nowhere to be found. A good approximation is Edgar Schein’s proposal that an organizational culture has three main parts. According to Schein (1992), an organizational culture comprises artefacts [the visible, tangible elements of an organization's culture], which are easy to perceive; shared basic assumptions [the deeply embedded, taken-for-granted behaviours, which are usually unconscious], which constitute the essence of culture, hence essentially Reason’s groupthink; and the espoused values [the organization's stated values and rules of behaviour]. The artefacts represent and complement the union of espoused values and basic assumptions. There are several renderings of this model, often as three layers of a pyramid with artefacts at the top and underlying assumptions as the bottom layer. Cultures are, more generally, “systems of (socially transmitted behaviour patterns) that serve to relate human communities to their ecological settings” (Keesing, 1974, p. 75), presumably also when these settings are intentionally created, such as the places of work that are of concern for industrial safety and which today have almost totally replaced natural work environments, with the possible exception of those of fishermen, farmers, loggers, and miners.
Schein’s model is, however, not a proper scientific model according to the definition by Coombs, Dawes & Tversky (1970), who define a model as:
“... the representation of some aspects of the world by a more abstract system. In applying a model, the investigator identifies objects and relations in the world with some elements and relations in the formal system.” (Coombs, Dawes & Tversky,1970, p. 2).
Schein’s model does describe three essential parts of an organizational culture (artefacts, basic assumptions, and espoused values), but does not describe their mutual relations. The intangibles are obviously the espoused values and the shared basic assumptions, but Schein's model contains nothing that corresponds to the tangibles, to human performance. It is therefore of little help in clarifying the intangibles and in using safety culture actively to improve system performance. How, for instance, does a change in artefacts affect the espoused values? And what should be done effectively to change the basic assumptions, if and when needed?
The tangible-intangible dilemma is the essential issue that the INSAG report identified. There are two well-known candidate frameworks that address it: the HSE Safety Culture Ladder and the Bradley Curve.
As soon as safety culture and safety culture maturity became important issues, proposals were made about how to manage the intangible in order to ensure the desired tangibles, to wit, the absence of organisational accidents. But all such attempts immediately ran into problems with Reason's first paradox.
Twenty-five years ago Jim Reason published a paper on “Safety Paradoxes and Safety Culture”, which received surprisingly little attention. In this paper Professor Reason described four paradoxes of safety. (A paradox is a seemingly absurd or contradictory statement or proposition which, when investigated, may prove to be well founded or true.) The first paradox was that safety is defined and measured more by its absence than by its presence. That means that what was supposed to be the tangible outcome of the intangible safety culture actually is not there, which, of course, is a major problem for the intangible-tangible dilemma. While there are many well-known manifestations of the absence of safety, there are practically none for the presence of safety that can serve as evidence for the presence of the intangible safety culture. It is therefore essential to define which tangible manifestations the intangibles are supposed to produce. Another consequence of the first paradox is that while there are numerous models of how accidents occur, there are practically no models of performance when there are few or no unexpected or unacceptable outcomes, even though such performance happens nearly all the time and is the rule where accidents are the exception, and even though it is the kind of performance that is highly desired and necessary for companies to sustain their existence. There is actually a fifth paradox as well, which is a consequence of the first. The fifth paradox is explained below.
Examples
Examples of the tangible
Examples of the tangible are relatively easy to find. The simplest is probably the adage “Safety First”, which in a slightly expanded form becomes S.T.O.P.: Stop, Think, Observe, Plan, in some cases supplemented by Communicate.
An even better example from the world of healthcare is Jeff’s rule, which encourages practitioners to ‘reflect, review, rethink’ if a patient presents three times with the same or escalating symptoms (Spencer et al., 2025).
Examples of the Intangible
Examples of the intangible are in the nature of things harder to come by, because they require an available definition of culture, or at least of organizational culture.
What is culture?
Culture can be defined as the complex whole of a group's shared beliefs, customs, values, and ways of life, including both tangible elements and intangible aspects like language and traditions. The concept of an organizational culture can be found as far back as the esprit de corps of the Napoleonic Grande Armée (1805-1815), possibly even the agoge of the Spartan army more than two millennia ago. Another expression is provided by the individual or collective level of aspiration for performance, the Anspruchsniveau (Chapman & Volkman, 1939; Lewin et al., 1944), and, of course, more directly by the model of organizational culture described by Schein (1992) and summarised above: the artefacts that are easy to perceive, the shared basic assumptions, and the espoused values. Cultures are, more generally, “systems of (socially transmitted behaviour patterns) that serve to relate human communities to their ecological settings” (Keesing, 1974, p. 75), presumably also when these settings are artificially created, such as the places of work that are of concern for industrial safety.
The problems in defining the intangible safety culture were clearly laid out by Borys (2019), who concluded:
“The literature review revealed that there is no agreed definition of the term ‘safety culture,’ and no definitive model of safety culture. In short, the body of literature is large, diverse, fragmented, confusing and ambiguous. There is little evidence supporting a relationship between safety culture and safety performance. In a practical sense, it is fruitless to continue to attempt to define safety culture. Rather than trying to change something as nebulous as ‘safety culture,’ the focus should shift to changing the organisational and management practices that have an immediate and direct impact on workplace safety. Organisational culture, however, is a useful concept if understood as a metaphor rather than a variable. Organisational culture informs changes in organisational practices that focus on safety, and may evolve as the culture learns and grows over time. While they do not inform culture change, safety climate surveys may be a useful measure of the perceived effectiveness of changes in organisational practices focused on safety. This view of organisational culture has implications for practice, both for OHS professionals and management.” (Borys, 2019, p. 29).
The HSE Safety Culture Ladder
One of the best known frameworks for managing and improving safety culture is Patrick Hudson’s HSE Safety Culture Ladder, even though it does not directly address or solve the tangible-intangible dilemma. The HSE Safety Culture Ladder represents the intangible by five levels or steps (Figure 1). An obvious question is, of course, where the five named levels come from and what their theoretical or empirical basis is.
The origin of the HSE Culture Ladder is not in research on safety culture, but rather in research on safety climate (Westrum, 2004). Whereas the HSE Culture Ladder pretends to be a model of safety culture maturity, accounts of safety climate (Westrum, 2004) are less pretentious. Climate represents patterns in information handling, and therefore at best represents espoused management values. It is hence completely different from the safety culture of the organization as such, since management is only a subset of the organization. To make matters worse, neither the HSE Safety Culture Ladder (Figure 1) nor the Bradley curve (Figure 2) are actually models in a scientific sense (Coombs, Dawes & Tversky, 1970). They are both seductive graphical renderings of the popular idea that safety culture can gradually mature or improve through a number of distinct phases, either five levels or four stages, but neither rendering provides the three types of knowledge needed to control a change, described below [The Knowledge Needed To Control A Change].
The analogy with a ladder offers the affordance that safety culture is improved by climbing up the ladder, just as we can physically reach higher if we use a ladder. The critical question is how a simple and familiar movement in the tangible world can be represented as a similar movement in an intangible world. Hudson, of course, understood that a movement in the intangible world must take place in the minds of the people who do the work, because their actions constitute the tangible manifestations that express the safety culture. Hudson “solved” the problem in the following way:
“But it was clear that simply pointing out which direction to go would not be enough to actually induce progress up that ladder, for that some way of creating lasting change was necessary. The next stage in the project therefore required the development of a process model that defined how people could be brought to change and, it was felt, it would be best if this change was what people wanted, not one that they felt had to take place because they are told to. There are a wide variety of change models available, but most of these (e.g. Kotter, 1996; Kotter and Heskett, 1992) tend to concentrate on the details of project management in a change environment; we selected one model from psychology, the transtheoretical model of (Prochaska and DiClemente, 1983) and (Prochaska et al., 1998), because it covered the elements felt essential to cover the requirement that people should want to change. In particular this model laid much more emphasis on getting those involved to have an active personal desire to change rather than a passive requirement to meet the goals of management. (Hudson, 2007, p. 704) presents the five main stages of the change model, with its special emphasis on acquiring awareness and, once behaviour change has been effected, of maintaining that change.”
Hudson chose the so-called transtheoretical model of Prochaska and DiClemente (1983), which became marketed and widely known as the hearts and minds method. (It really ought to be called the minds and hearts method, because convincing people to change is done by working on their mindsets and not their feelings. Putting hearts before minds is an Aristotelian mistake.)
This solution is, however, replete with problems. The transtheoretical model of change was so named because it is not based on a specific articulated theory of how changes can be made, but rather is a hodgepodge of different theories from psychotherapy. It may well be useful for understanding how a personal choice, such as the choice to stop smoking, takes place. Yet we know practically nothing about how the decisions we make determine the actions we take. An organisation is in any case not a single sentient being and does not as such choose to change or improve its safety culture; responsible managers perhaps do, but they do not by themselves constitute the safety culture of a company. While the transtheoretical model may possibly be appropriate for understanding how a management decision is made, it is singularly unsuitable for understanding how to implement a change in the workforce once the management’s choice has been made. The hearts and minds method therefore does little to resolve the tangible-intangible dilemma.
Another peculiarity of the Safety Culture Ladder is that Pathological is the name for the third step, half way to optimal maturity. In common parlance pathological is the worst possible condition, not the third best. The worst step here is Calculative, which most people would rank as better than Pathological, since it at least suggests that there is some kind of underlying principle.
The above long quotation from Hudson (2007, p. 704) exposes the main problem with the hearts and minds method for all to see. Regardless of whether safety culture is defined as Group Think (Reason, 1988), as ancestry and social environment (Heinrich, 1959), as an esprit de corps, as a collective level of aspiration for performance in a task, as a socially transmitted behaviour pattern, or as the shared basic assumptions or espoused values of Schein (1992), it is obviously not something a single individual can decide to change. Neither is it something that can be changed from the outside by a single factor, leadership for instance. A person may choose to adopt an artefact, such as a MAGA cap, but cannot in the same way choose to change the shared basic assumptions or the espoused values of the organization where that person works, if they are even aware of them. Such a change may happen after a while of socializing with others who also wear a MAGA cap, but then as a result of social pressure, as described by social judgement theory (Sherif & Sherif, 1968), rather than as a personal choice. It actually contradicts the very idea of culture to assume that adherence to the culture that influences what we do is a discrete individual choice that can be changed in the same way as breaking a bad habit. To suggest that reveals a fatal misunderstanding of whatever culture or safety culture is. Performing in accordance with the safety culture, or with the local organizational culture in general, is not an individual choice or an isolated act, like lighting a cigarette. This is also evident from the popular definition of safety culture as “the way we do things around here”. The analogy that is the justification for the hearts and minds method is therefore misleading, irrelevant and inappropriate, and the supporting reasoning is illogical and seriously flawed.
Seen from this perspective the choice of the hearts and minds approach was a rather desperate efficiency-thoroughness trade-off (Hollnagel, 2009), based on Hudson’s urgent need to come forward with some sort of method that could be used to change people’s mindset (Hudson, as cited above).
It must be remembered that the transtheoretical model, in both the original version and Hudson’s interpretation thereof, represents hypothetical stages of an individual’s mindset rather than of a safety culture. It can therefore not be claimed to be a model of safety culture maturity in the way that Schein (1992) is a model of organizational culture. Developing or maturing a safety culture must address the shared basic assumptions, the espoused values, and the artefacts, if indeed an artefact can represent safety culture maturity at all. This cannot be replaced by addressing individual characteristics and attitudes. Group Think is neither the sum nor the average of how a set of individuals think or of their readiness to behave safely, nor are there many artefacts that naturally represent basic assumptions and espoused values.
The Bradley curve
Another popular suggestion for solving the tangible-intangible dilemma is the Bradley curve. It differs from the HSE Culture Ladder (Figure 1) by showing a continuous curve divided into four sections of apparently equal size instead of five discrete stages or steps.
Figure 2: The Bradley curve, from Jasiulewicz-Kaczmarek, Szwedzka & Szczuka (2015).
The Bradley curve illustrates imagined steady improvement, measured by a diminishing number of occupational accidents, which is precisely what management wants. The smooth curve is a manager’s wet dream, with neither data nor theory to support it. In the 1990s Ed Woolard, the CEO of DuPont, chartered a so-called DuPont Discovery Team to develop a system that would allow sustainable and lasting improvement in organisational safety (unsurprisingly defined in terms of the Zero Accident Vision; Woolard had apparently also adopted the myth of safety culture). The best, and actually also the only, known outcome of that team effort was the Bradley curve, proposed in 1994 and named after a team member, Vernon Bradley (sometimes wrongly named Berlin Bradley), who was the manager of a DuPont plant in Canada. Empirical support is out of the question because continuous measurements of occupational accident rates are unaffordable and therefore practically impossible. In the best cases an occupational accident rate may be calculated weekly, monthly, or quarterly, but most likely only as part of an annual report. A proper rendering of the Bradley curve would therefore be a staggered graph or a histogram rather than a smooth curve. The Bradley curve in any case does not describe a system that will bring sustainable and lasting improvement in organisational safety, but only the imagined outcomes should such a system ever be specified and realised. The Bradley curve appears in many promotional materials from companies that either claim they use it or (worse) offer to teach others how to do so, but there are few if any scientific papers that either describe or support it, although several make opaque reference to it. (The names of the four stages were apparently borrowed by Mr. Bradley from Covey (1989), who himself seems to have taken them from the famous Swiss developmental psychologist Jean Piaget; they are hence not related to safety culture at all.)
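The point about reporting periods can be illustrated with a small calculation. The sketch below uses synthetic figures for a hypothetical plant; the 200,000-hour normalisation is the common OSHA convention for incidence rates (incidents per 100 full-time workers per year). Since a rate can only be computed once per reporting period, the result is a step function, not a smooth curve:

```python
# Illustrative sketch (synthetic data): an occupational accident rate exists
# only per reporting period, so a plot of it over time is a staggered graph
# or histogram, never a smooth curve.

def accident_rate(recordable_incidents, hours_worked):
    """OSHA-style incidence rate: incidents per 100 full-time workers
    (100 workers x 40 h x 50 weeks = 200,000 hours)."""
    return recordable_incidents * 200_000 / hours_worked

# Synthetic quarterly figures for a hypothetical plant.
quarters = ["Q1", "Q2", "Q3", "Q4"]
incidents = [6, 5, 5, 4]
hours = [480_000, 470_000, 490_000, 500_000]

rates = [accident_rate(i, h) for i, h in zip(incidents, hours)]
for q, r in zip(quarters, rates):
    print(f"{q}: {r:.2f}")  # one value per quarter - four steps, not a curve
```

Even a company that improves every single quarter would thus see at most four discrete values a year, which is why the smooth Bradley curve cannot correspond to any measurable quantity.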
The curve is only possible if it is supported by a method to bring about a steady reduction of occupational accidents. Such a method did not exist in the 1990s and does not exist today. If wishes were horses, beggars would ride, and occupational accident rates would be under steady control.
The Bradley curve in no way explains how safety culture maturity can be controlled, CEO Ed Woolard’s ambitions notwithstanding. Unlike a ladder, a curve or a slope has no natural affordance, except, perhaps, to slide down along it. Yet the further you move along the slope as it is rendered in Figure 2, the further you get away from the best position, which in the natural rank order usually is the leftmost (cf. the common expression “first in line”). The common renderings of the Bradley curve, though not the version seen in Figure 2, do include an ordinate scale named Safety Culture Self Assessment, but it is not actually defined, and it would in any case not be reliable if it was based on self-assessment. Neither is it intuitive that moving further to the right represents improved safety culture maturity. The leftmost position is usually considered to be better than any of the following ones, and the further you move to the right, the lower you also are on the ordinate. The Bradley curve therefore represents a counter-intuitive relation in every way and seems to be based on wishful graphics rather than systematic thinking. The curve is divided into four regions or segments of apparently equal size, named reactive, dependent, independent, and interdependent; unlike the names of the five steps of the Safety Culture Ladder, these names have no obvious connotations. And it is not reasonable that an interdependent safety culture is more mature than an independent one. From a simple reliability point of view the opposite is actually more reasonable, since a system where functions are independent is more reliable, and therefore preferable to a system where functions are interdependent or coupled, as the latter is more likely to experience the normal accidents that Perrow (1984) brought into the safety debate. The order of the two stages independent and interdependent therefore ought to be reversed.
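The reliability argument can be made concrete with a toy calculation. The model below is an illustrative assumption, not taken from Perrow: two functions each fail intrinsically with probability p, and in the coupled system a failure additionally propagates to the other function with probability c.

```python
# Toy model (assumed, for illustration): probability that BOTH of two
# functions fail, comparing independent functions with coupled ones
# where a failure propagates with probability c.

def p_total_breakdown(p, c=0.0):
    """Probability that both functions fail.
    Independent (c=0): both must fail on their own (p * p).
    Coupled: either both fail intrinsically, or exactly one fails
    and drags the other down with probability c."""
    p_indep = p * p
    p_cascade = 2 * p * (1 - p) * c   # one intrinsic failure, then propagation
    return p_indep + p_cascade

p = 0.01
print(f"{p_total_breakdown(p):.6f}")         # independent: 0.000100
print(f"{p_total_breakdown(p, c=0.5):.6f}")  # coupled:     0.010000
```

With these (arbitrary) numbers the coupled system is two orders of magnitude more likely to suffer a total breakdown, which is the intuition behind preferring independence over interdependence.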
Neither does the Bradley curve in any way explain how to determine the current position. Since the curve also represents the occupational accident rate, it would in principle have been possible to define threshold values for each of the four stages, at least as percentages. The affordance of a ladder is that it brings you higher for each step you take; the Bradley curve, in contrast, does not offer any knowledge of how the movement along the slope, the so-called safety voyage, can take place (the means). The Bradley curve (Figure 2) does show a rough correlation with the occupational accident rate, but in this case it seems more reasonable to assume that the decreasing rate is a function of the postulated maturity of the safety culture rather than the other way around, since the latter raises the fundamental question of how the occupational accident rate can effectively be reduced, something the safety legacy so far has been unable to answer.
Similar to the HSE Safety Culture Ladder, the Bradley curve indicates a default goal, namely the fourth and rightmost stage (interdependent), but with no definition of what interdependent means in practice, nor of how it can be determined, except by the above-mentioned subjective safety culture self-assessment.
Altogether the Bradley curve may appear appealing to a manager or a CEO, but it is woefully underspecified and therefore impossible and inadvisable to use in practice.
The Reason decomposition
That Jim Reason was aware of the tangible-intangible dilemma is clear from the following passage:
“By what means can we set about transforming an average safety culture into an excellent one? The answer, I believe, lies in recognising that a safe culture is the product of a number of inter-dependent sub-cultures, each of which – to some degree – can be socially engineered. An informed culture can only be built on the foundations of a reporting culture. And this, in turn, depends upon establishing a just culture. In this concluding section, we will look at how to build these two sub-cultures. The other elements of a safe culture – a flexible culture and a learning culture – hinge largely upon the establishment of the previous two. They have been discussed at length elsewhere (Weick, 1987; Reason, 1997) and will not be considered further here.” (Reason, 2000, p. 12).
This decomposition defines a pyramid of subcultures as shown in Figure 3.
Figure 3: The decomposition of safety culture (Reason, 2000).
Reason’s decomposition proposal triggers three comments. First, how is it possible to decompose an intangible safety culture into four subcultures? Second, each of the four proposed subcultures will presumably be intangible itself. Third, if two of the proposed subcultures, an informed culture and a reporting culture, can be managed independently, as the quote suggests, then why can the same operations not be applied to the larger safety culture? The relative order of the flexible culture and the learning culture is not specified. Neither is it described how it can be determined that any of the four safety culture components is present. Reason’s suggested decomposition is thus even less of a model than Schein’s. Additionally, neither Weick (1987) nor Reason (1997) describe how a culture can be managed. Reason’s decomposition is therefore not a viable solution to the tangible-intangible dilemma, nor does it clarify what safety culture is.
Resolving the tangible-intangible dilemma
The tangible-intangible dilemma is a serious impediment for the safety legacy and for established approaches to safety management. Despite the several attempts described above, the tangible-intangible dilemma has not found a solution. The stages of this legacy are summarised in Table 1.

Table 1: The ages of safety
Age of safety | Typical (default) cause | Typical (default) response | Assumed mode of causality | Began year |
I: Technology | Failure of technology | Replace, repair and improve | Active failure, monocausal | 1769 (steam engine) |
II: Person (human factors) | Human factors, human error | Blame, train, design, automate | Active failure, monocausal | 1979 (TMI) |
III: Safety Management | Organisational failure, lack of safety culture | Audits, standardisation, regulation | Latent conditions, multicausal | 1986 (Challenger) |

A solution is actually fairly simple, namely to replace the intangible with something tangible. The tangible replacement is in this case resilience, not in the sense of a wholesale substitution of safety culture by a resilience culture, but in the sense that resilience is tangible because it denotes what a system does rather than what it has. The question therefore becomes what characterises resilient performance. An early definition was:
“a resilient system is defined by its ability effectively to adjust its functioning prior to or following changes and disturbances so that it can continue its functioning after a disruption or a major mishap, and in the presence of continuous stresses.” (Hollnagel, 2008, p. x).
This was later defined more explicitly as follows:
“Resilience is defined as the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions. Since resilience refers to something a system does, to a capability or a process, rather than to something the system has, to a product, the measurement cannot be based on counting specific outcomes such as accidents or incidents.” (Hollnagel, 2011, p. 275).
Lawrie, Parker & Hudson (2006) proposed a framework for understanding the development of organisational safety culture, where the five levels were introduced and explained (Lawrie, Parker & Hudson, 2006, p. 267). The three authors in turn listed two sources for their proposed levels of safety culture, namely Westrum (1996) and Reason (1997), where Westrum (1996) provided the idea of distinct levels, although in relation to safety climate rather than safety culture:

Table 2: Westrum's (1996) types of organizational climate
Pathological | Bureaucratic | Generative |
Information is hidden | Information may be ignored; messengers are tolerated | Information is actively sought |
The intangible is made tangible by expressing it in terms of the four potentials for resilient or systemic performance: the potential to respond, the potential to monitor, the potential to learn, and the potential to anticipate. The rather abstract problem of how to improve safety culture is in this way transformed into the problem of how to establish and maintain the four potentials, for which a practical method is available, described in (Hollnagel, 2026) on the www.incrementalsafety website and by (Hollnagel, Licu & Leonhardt, 2021). Since the latter source is freely available, the concrete details of how the four potentials, and thereby also resilience, can be established will not be repeated here. The obvious way to get rid of the tangible-intangible dilemma is, of course, to replace the intangible with a description of the tangible: not of the accidents, because they represent the absence of what we seek, as the first paradox of Reason makes clear, but of the actual behaviour, of what people actually do when they carry out their work, especially when it goes well, as it does nearly all the time. The challenge is not to describe work as it actually will be done, since we are constrained by the inevitable difference between Work-as-Imagined and Work-as-Done (refs). But in its place we can describe the abilities or potentials that are necessary for resilient performance, for work to go well.
The four systemic potentials
The potential to respond. While it is essential to be able to respond, it is a decided advantage to be ready to respond whenever something happens. In our societies only a few services maintain a constant readiness to respond, the fire fighters and the ambulance services, for instance. Constant readiness is possible, in principle, if future events are so regular that they are predictable, yet in the Nordic countries people are still surprised when the first ice and snow appear.
The potential to monitor. The ability to predict something that is less regular than the sunrise every morning depends on the ability to monitor, to interpret early signs correctly, and to identify trends.
The potential to learn. Neither responding nor monitoring can or should always happen in the same way. In most systems prepared or built-in responses are based on Work-as-Imagined embedded in a World-as-Imagined. But actual demands and conditions will always be different from what was imagined or experienced previously, because internal and external conditions never are perfectly stable; it is therefore inadequate always to respond in the same way. The 2008 financial crisis demonstrated clearly that no company is too big to fail, so it is unwise to disregard what happens externally and to fail to adjust both the intangible and the tangible to that. It is a serious risk for any kind of dynamic system always to respond in the same way and to rely on the same signals, signs, and symbols. To do so may provide a short-term gain in efficiency, but it is paid for by a loss of thoroughness and attention to detail. The only way to avoid this is by learning. Learning enables a system (or a company) to change how it responds to whatever happens, by strengthening effective responses, by suppressing or eliminating ineffective responses, and by learning new responses. The purpose of learning is not the simple acquisition or accumulation of knowledge (or data or information), but the effects on performance, on the potential to respond, as well as on the potential to monitor, and ultimately also on the potential to learn itself. Learning must be at least as fast as the rate of change of the environment. The dinosaurs were limited to learning at an evolutionary pace, and that was clearly insufficient (Fremlin, 1979, p. 250-251). To achieve full efficiency, learning cannot be limited to actual experiences, but must also make use of anticipation, just as anticipation must be flexible and improve with experience.
Without the potential to learn, responses will be limited to a fixed and pre-defined set, as in a standard procedure or set of instructions, based on whatever system developers and designers were able to imagine once upon a time. Even learning itself must change and develop. It is imprudent always to respond, to monitor, to learn, and to anticipate in the same way; that is only feasible if conditions never change, if the world is fixed and perfectly stable, which entropy makes impossible. Things will never happen in the future the same way as they happened in the past, as we have known since the teachings of the famous Greek philosopher Heraclitus (c. 500 BCE): panta rhei, “everything flows”. The same argument, of course, goes for monitoring; it is unwise always to rely on the same signs, indicators and measurements. Learning can furthermore not be limited to what not to do (avoidance learning). It is equally important, if not more so, to learn from what works well (reinforcement learning), in order to strengthen effective responses. Learning is necessary to make performance more efficient, as in the progression from knowledge via rules to skills (Anderson, 1980; Vicente & Rasmussen, 1988). The gradual automation of responses, the development of patterns, habits, and routines, and the recognition of important weak signals (Hollnagel, 2025) are essential parts of how performance becomes attuned to the prevailing conditions, with all the risks that such increased efficiency and reduced thoroughness involve. Learning is a sine qua non for both monitoring and responding, and ultimately also for learning itself. A strategy for learning cannot be stable; it must be heuristic rather than algorithmic.
Life is understood backwards, but must be lived forwards, as the famous Danish philosopher Søren Kierkegaard pointed out.
The potential to anticipate. Since the environments in which work takes place and companies exist are not perfectly stable, it is necessary both to acknowledge that they can change and to imagine or think about how they may change, which requires the potential to predict and anticipate. (Prediction and anticipation are not synonymous. Prediction is focused on a specific future event, whereas anticipation involves preparing for potential scenarios or even just taking something into consideration. Prediction can be used to prepare the readiness of a specific response, while anticipation can become the basis for a class of similar responses; consider the difference between a prediction of rain and the anticipation of inclement weather.) Where monitoring focuses on the current situation and near-term developments, anticipation looks beyond the near term to the more distant future. It is, of course, difficult to make predictions, especially about the future, partly because it is nearly impossible to know how other people think and to predict what they may do, and partly because the rate of change, even of physical phenomena, may be too rapid to follow, and sometimes, paradoxically, so slow that it is difficult for us to notice, as climate change illustrates well. It is also necessary to consider which resources, skills, and competences a company will need in the future, in order to ensure that responding remains possible. A simple example is that fires in Electric Vehicles (EVs) cannot be extinguished in the same ways as fires in fossil fuel vehicles, since both water and foam conduct electricity, which may put the fire fighters at risk; new extinguishing equipment and new working procedures are therefore required. Likewise, roof fires under solar panels do not dissipate heat as “normal” roof fires do.
The levels are defined by the relative fulfilment of each of the four potentials, and this also defines the practical way in which a resilience culture can be established. A resilience culture thereby ceases to be intangible, and the intangible-tangible dilemma is transformed into a tangible-tangible relationship, where (tangible performance) = F(tangible systemic potentials).
Systemic Potentials Management (SPM)
Since the details of Systemic Potentials Management are readily available elsewhere, as already mentioned, only a brief summary is required here. SPM was developed as a means to operationalise resilience rather than as a framework to resolve the tangible-intangible dilemma. The basic idea is illustrated by Figure 4.
Figure 4 coincidentally also has five levels, but in this case each level is defined by the relative fulfilment of the four potentials. There is therefore a logical progression through the twenty steps, from a regressive or decremental safety culture to a fully developed resilient or incremental safety culture. The four systemic potentials thus provide a clear and practical way to determine the current position and to define the goal. The relative fulfilment of each potential is determined by assessing it using context-specific sets of questions that address the facets of each potential.
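The relation (tangible performance) = F(tangible systemic potentials) and the facet-based assessment described above can be sketched in code. This is a hypothetical illustration only: the facet names, the 0-4 rating scale, and the simple averaging rule are assumptions made for the sake of the example, not part of the published SPM method.

```python
# Hypothetical sketch: scoring the relative fulfilment of the four
# systemic potentials (respond, monitor, learn, anticipate) from
# facet-level question replies. Facet names, the 0-4 rating scale,
# and the averaging rule are illustrative assumptions.
from statistics import mean

POTENTIALS = ["respond", "monitor", "learn", "anticipate"]
MAX_RATING = 4  # assumed top of the rating scale for each question

def fulfilment(facets):
    """Relative fulfilment of one potential: mean facet score / max rating.

    `facets` maps a facet name to the list of replies (0-4) given to the
    context-specific questions that address that facet.
    """
    facet_scores = [mean(replies) for replies in facets.values()]
    return mean(facet_scores) / MAX_RATING

def profile(assessment):
    """Map each potential to its relative fulfilment, from 0.0 to 1.0."""
    return {p: round(fulfilment(assessment[p]), 2) for p in POTENTIALS}

# Example assessment with made-up facets and replies.
example = {
    "respond":    {"readiness": [3, 4, 3], "resources": [2, 3]},
    "monitor":    {"indicators": [3, 3], "frequency": [4, 3, 4]},
    "learn":      {"from_success": [2, 2], "from_failure": [4, 4]},
    "anticipate": {"future_needs": [1, 2, 3], "threat_models": [2, 2]},
}
print(profile(example))
```

The point of the sketch is that the resulting profile is entirely tangible: a level boundary becomes a set of thresholds on the four fulfilment values, so the position is determined from the tangible potentials rather than from an intangible "culture".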
Assessing the systemic potentials
The main practical issue with the systemic potentials is how they can be assessed in order to determine the current position. This is not done directly, by assessing each potential as a single quality; that would be as mindless as assessing resilience or safety eo ipso. Each potential can, however, be described as comprising a number of more specific facets or functions that are common to many types of activity and domains. So instead of assessing each potential as a whole, the potentials are better characterised in terms of the several facets each represents. This can be used as a concerted approach at both the blunt end and the sharp end to ensure that resilient performance is possible here and now as well as in the foreseeable future. The same cannot be done for safety, because there is no theory of potentials for safety other than safety culture, which, being a counterfactual condition, does not live up to the requirements for a theory. The questions used in the assessment must fulfil three criteria.
If the questions fully meet these criteria, the replies can be used to provide the three types of knowledge needed to control a change, as defined by MacKay (1951).
The three types of knowledge are necessary in order to be able to change the tangible (the systemic potentials) with predictable consequences for the intangible, that is, for how people choose what to do in response to arising situations and unexpected changes in conditions.
CONCLUSIONS
The inevitable conclusion from this analysis is that safety culture is a social construct that serendipitously and conveniently met an urgent psychological need and thereby removed the distressing uncertainty that follows the novel, the unexpected, and the unknown. Safety culture does not, however, represent or refer to an articulated theory about human performance, either individual or collective, or even about safety. Indeed, there are no articulated theories about safety, and hardly any about human performance either. The only known tangible consequences of safety culture are due to its absence rather than its presence. Safety culture has never been proven to have any practical use, or to be in any way controllable or manageable; it is wishful thinking that safety culture is able to increase the number of expected and acceptable outcomes. Safety culture has practical value mostly for those who innocently promote, and thereby perpetuate, the social construct as if it were a practical solution to an acute problem. To make matters worse, the popular approach of changing safety culture through the hearts and minds method is based on a fundamental misunderstanding of what culture is and of what the method can do.

The arguments presented here have, hopefully, made it clear that safety culture and the safety culture journey, the attractiveness of these concepts notwithstanding, have little, if any, practical value. They do fulfil an obvious psychological need and provide the illusion of practicality. Here the following quote may be appropriate:
“The anticipated utility of the safety culture concept can possibly be judged by its apparent popularity and its relatively rapid adoption by managers, regulators and academics. This popularity, however, may be more a reflection of effective marketing than of scientific development.” (Cox & Flin, 1998, p. 197, emphasis added).
There is undoubtedly a need for practical ways to achieve a state of safety, in public and private enterprises alike. The assumption that safety culture as a monolithic solution can do that is so far unproven, although not for lack of trying. Ample evidence is provided in the special issue of the journal Safety Science about safety culture (Hale, 2000, pp. 1-14). The popularity of safety culture may, as Cox and Flin (1998) concluded, be due more to effective marketing and a strong psychological need for explanations than to proven effectiveness. Leo Tolstoy wrote about that need in his great novel War and Peace:
“Man’s mind cannot grasp the causes of events in their completeness, but the desire to find those causes is implanted in man’s soul. And without considering the multiplicity and complexity of the conditions any one of which taken separately may seem to be the cause, he snatches at the first approximation to a cause that seems to him intelligible and says: ‘This is the cause!’” (Tolstoy, 1912, Book 13, Chapter 1)
REFERENCES
American Society of Safety Engineers (ASSE). (2011). Prevention through design: Guidelines for addressing occupational hazards and risks in design and redesign processes (ANSI/ASSE Z590.3). ASSE. https://webstore.ansi.org/standards/asse/ansiassez5902011
Anderson, B. F. (1975). Cognitive psychology. New York: Academic Press.
Aven, T. (2022). A risk science perspective on the discussion concerning Safety I, Safety II and Safety III. Reliability Engineering & System Safety, 217, 108077.
Besnard, D., & Hollnagel, E. (2014). I want to believe: some myths about the management of industrial safety. Cognition, Technology & Work, 16(1), 13-23.
Björnberg, K. E., et al. (2019). The vision zero handbook. Cham, Switzerland: Springer International Publishing.
Bloch, B. K. & Vaughen, P. E. (2024). The Bhopal Gas Tragedy — Part I: Process Safety Culture. https://www.aiche.org/resources/publications/cep/2024/september/bhopal-gas-tragedy-part-i-process-safety-culture (accessed December 11, 2024).
Busch, C. (2017). A Ladder on Quick Sand - A Critical View at the Safety Culture Ladder. https://www.linkedin.com/pulse/ladder-quick-sand-critical-view-safety-culture-carsten-busch/ (accessed September 27, 2025).
Cannon, W. B. (1915). Bodily changes in pain, hunger, fear, and rage. New York: Appleton-Century-Crofts.
Coombs, C. H., Dawes, R. M., & Tversky, A. (1970). Mathematical psychology. Englewood Cliffs, NJ: Prentice-Hall.
Cooper, M. D. (2000). Towards a model of safety culture. Safety Science, 36(2), 111-136.
Cox, S. & Flin, R. (1998). Safety culture: Philosopher's stone or man of straw? Work & Stress, 12(3), 189-201.
Emanuel, L. et al. (2009). What exactly is patient safety? Journal of Medical Regulation, 95(1), 13-24.
Hale, A. (2000). Editorial: Culture's confusion. Safety Science, 34, 1-14.
Heinrich, H.W. (1931). Industrial accident prevention. New York: McGraw-Hill Insurance Series
Heinrich, H. W. (1959). Industrial accident prevention: A scientific approach (4th ed.). New York: McGraw-Hill.
Hollnagel, E. (2009). The Four Cornerstones of Resilience Engineering. In E. Hollnagel & S. W. A. Dekker (Eds.), Resilience Engineering Perspectives, Vol 2 - Preparation and Restoration (pp. 117-133). Farnham, UK: Ashgate
Hollnagel, E. (2020). Synesis: The unification of productivity, quality, safety and reliability. Abingdon, Oxon, UK: Routledge.
Hollnagel, E. (2025). From safety to safely: Principles and practice of Systemic Potentials Management. Abingdon, Oxon, UK: Routledge.
Hollnagel, E. (2026). Decremental and incremental safety cultures: Safety-I and Safety-II revisited. Boca Raton, FL: CRC Press.
Hollnagel, E. & Slater, D. (2025). Safety culture maturity: A problem disguised as a solution. Fontainebleau, France: International Institute for Leadership and safety culture (IILSC Insights (2025/1)).
Hollnagel, E. & Slater, D. (Eds.), (2026) Incremental Safety Practices. Abingdon, Oxon, UK: Routledge.
Hollnagel, E., (2008). Preface. Resilience Engineering in a Nutshell In: E. Hollnagel, C. P. Nemeth & S. W .A. Dekker, (Eds.). Resilience engineering perspectives, Volume 1: Remaining sensitive to the possibility of failure. Aldershot, UK: Ashgate.
Hollnagel, E., Licu, A., & Leonhardt, J. (2021). The systemic potentials management: Building a basis for resilient performance (A White Paper). Brussels: Eurocontrol. https://skybrary.aero/bookshelf/systemic-potentials-management-building-basis-resilient-performance (accessed September 25, 2025).
Hollnagel, E. (2009). The ETTO principle: Why things that go right sometimes go wrong. Farnham, UK: Ashgate.
International Civil Aviation Organization (ICAO). (2013). Safety Management Manual (SMM)(Doc 9859 AN/474). Montreal: ICAO.
Khattabi, N., et al. (2025). Improving cross-learning in clinical teams using daily on-site reflective meetings. Frontiers in Health Services, 5, 1630722.
Leveson, N. G. (1992). High-pressure steam engines and computer software. In Proceedings of the 14th International Conference on Software Engineering (pp. 2-14). New York: Association for Computing Machinery.
Leveson, N. G. (2020). Safety III: A systems approach to safety and resilience. MIT Engineering Systems Lab. Sunnyday.mit.edu/safety-3.Pdf (accessed April 13, 2024).
Martins, J. B. et al. (2022). Integrating Safety-I and Safety-II: Learning from failure and success in construction sites. Safety Science, 148, 105672.
Raheemy, Y., Sherratt, F., & Hallowell, M. R. (2025). What is safety? Contemporary definitions and interpretations across North America. Safety Science, 185, 106798.
Rasmussen, J. (1983). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3), 257-266.
Reason, J. T. (2000). Safety paradoxes and safety culture. Injury Control & Safety Promotion, 7(1), 3-14.
Reason, J. T. (1987). The Chernobyl errors. Bulletin of the British Psychological Society, 40, 201-206.
Schein, E. (1992). Organizational culture and leadership. San Francisco: Jossey-Bass.
Schein, E. H. & Schein, P. A. (2025). Humble inquiry: The gentle art of asking instead of telling (3rd ed.). Oakland, CA: Berrett-Koehler Publishers.
Searle, J. R. (1995). The construction of social reality. New York: Simon and Schuster.
Sharman, A. (2016). From accidents to zero: A practical guide to improving your workplace safety culture. Routledge.
Vicente, K. J., & Rasmussen, J. (1992). Ecological interface design: Theoretical foundations. IEEE Transactions on systems, man, and cybernetics, 22(4), 589-606.
Wahl, K., Stenmarker, M., & Ros, A. (2022). Experience of learning from everyday work in daily safety huddles—a multi-method study. BMC Health Services Research, 22(1), 1101.
Zwetsloot, G. I. J. M., et al. (2013). The case for research into the zero accident vision. Safety Science, 58, 41-48.
The first-ever international workshop on incremental safety practices is planned to take place in 2026; preliminary details here. This is your unique chance to become part of this exciting development from the start. Below is a photo of Sønderborg, where the meeting will take place (but unfortunately not in the castle).
You may be wondering who is behind this. Who are the people who are so unhappy with the current safety legacy and safety culture that they feel a need to muddy the waters even further?




Erik is Scientific Director at the Institute of Resilient Systems Plus, Seoul, South Korea, Honorary Professor at Macquarie University (Sydney, Australia), and Visiting Fellow at the Institute for Advanced Study, Technische Universität München (Germany). He is also Professor Emeritus from Linköping University (Sweden), Mines ParisTech, formerly École nationale supérieure des mines de Paris (ENSMP) (France), and the University of Southern Denmark. Erik Hollnagel has throughout his career worked at universities, research centres, and with industries in many countries, on problems from a variety of domains. He has published widely and is the author/editor of 30 books, including seven on resilience engineering, as well as a large number of papers and book chapters. Erik has been President of the European Association of Cognitive Ergonomics (1994-2000) and co-founder and past chairperson of both the Resilience Engineering Association and the Resilient Health Care Society (and is an honorary member of both organisations). His new book, "From Safety Management to Managing Safely", was published by Routledge in 2025. Erik is the creator of the Functional Resonance Analysis Method (FRAM) and, together with David Woods, the progenitor of cognitive systems engineering and resilience engineering, as well as of Safety-I, Safety-II, and incremental safety. Further details can be found at https://erikhollnagel.com/
David Slater is a director of the engineering consultancy Cambrensis Ltd. and an Honorary Professor in the School of Engineering, Cardiff University, UK. His current research interests center on trying to understand how complex socio-technical systems behave in practice, having developed predictive system behavior models for risk analysis and regulatory purposes in theory (Imperial College) and in practice (HMIP, Environment Agency, and DG Environment (EC)). These models were applied to real-life incidents (from Flixborough to Grenfell Tower and COVID-19). Developing these methodologies to include the human factor is his current focus. He is currently working with Cardiff University Hospital and the Manchester Children's Hospital on systems to improve safety and resilience in healthcare. He is also working with the cybersecurity group in the school on the challenges of introducing complex AI and software systems. In a prior role he founded the risk consultancy Technica, which introduced formal hazard and risk approaches worldwide, especially in the North Sea oil and gas developments and the US petrochemical industries.
Tom McDaniel has spent over 40 years supporting commercial, industrial, and energy facilities and organizations throughout the world, working in over 30 countries. In addition to his work in advanced environmental and safety management, he spent numerous years working as a yacht captain, with over 300 charters and yacht deliveries, and as a private captain. Many of his organizational inventions are based on his experience dealing with the complexity of operating safe vessels, and many of his developments continue to be used throughout industry.
The vast majority of his EHS roles have involved applying new methods to change organizational narratives and distinctions, allowing movement away from the outdated thinking that holds back the advancement of professionals, organizations, and regulatory agencies. He is widely recognized as a world leader in putting theories into practical application. He has held various titles, including principal engineer and global manager of human performance, and has received numerous awards from various nuclear organizations, Westinghouse, and Siemens.
Tom holds degrees in environmental engineering and advanced business cognition management. He has extensive knowledge and experience in both electrical and mechanical engineering applications. Tom has designed and hosted over 200 advanced EHS conferences and workshops, along with over 300 EHS regulatory, leadership, and management workshops, and has developed new, more effective investigation and observation methodologies that allow deeper-level system learning regardless of outcome. Much of his current work focuses on how to move away from our industry's obsession with outcomes and on learning without waiting for failures.
Tom has consistently taken the most challenged, poorest-performing organizations and guided them to become the best in all areas, including safety, quality, efficiency, profitability, and customer satisfaction. He is responsible for over 100 million dollars in increased profitability in the organizations he has supported, while reducing all types of harm to customers and the environment. Tom continues to design and host international non-profit conferences. Additionally, he sits on the board of a non-profit designed to help those dealing with life issues.
Tom lives in St. Petersburg, Florida with his wife of 45 years, Dr. Lauren McDaniel. His entire family, including two children, are heavily involved with the sciences including marine microbiology, environmental virology, cyber security, chemical engineering, renewable energy, and advanced mathematics. Tom and Lauren run a small consulting firm, McDaniel Scientific Group LLC. Tom continues to offer free desk consulting and mentoring as a method to give back to industry.
Dr. Nippin Anand is the Founder and CEO of Novellus Solutions, a consultancy specializing in human-centered approaches to learning, risk, and social decision making. Nippin is a former master mariner with an MSc in Economics and a PhD in Social Sciences and Anthropology. More than this, Nippin brings a transdisciplinary lens to the challenges of work and organising, spanning the humanities, social psychology, mythology, First Nations thinking, anthropology, semiotics, and philosophy.
After experiencing a near-collision at sea, Nippin developed a lifelong passion for event investigation, focusing on how failure can be turned into opportunities for learning and change. He has a particular focus on the psychology of goal setting, motivation, perception, and unconscious decisions under pressure.