The False Myth of Scientific Management

[Wikimedia; 2013]

Management theories and practices increasingly focus on the systematic measurement of “objective” indicators of performance, inspired by the axiomatic tenet “you cannot manage what you cannot measure”. Enterprises measure the performance of processes, they measure service availability, they even measure performance indicators of individual employees. Measures are everywhere. In the literature, such measures are typically referred to as Key Performance Indicators (KPIs). There are handbooks of KPIs and collections of KPIs, and all this, according to their holy fathers, will eventually free the discipline of management from the “plague” of subjectivity and its allegedly worrisome weaknesses. Whenever practical results fail to meet expectations, despite religious adherence to the theory, true believers amend the implementation by adding more and more KPIs, requiring and enforcing even stricter allegiance to their dogma.

But however much individuals and organisations bend to the rule of the wily KPI priests, the likelihood of achieving the promised results keeps looking more like utopia than scientific determinism. In some cases there is a correlation between the adoption of KPI-based management practices and results, in the sense that such results materialise in a time interval overlapping with the measurement of KPIs. But this is not scientifically acceptable evidence of a cause-effect relation between the two.

So the question arises: why should we stick to a theory of management which is so often contradicted by the facts, and which is continuously confronted with exceptions requiring constant defensive explaining from its own priests? First, the theory is not actually disproved, because it is not a scientific theory: only scientific theories can be disproved[1]. And scientific management is not a scientific theory, as we will see later in more detail. But the point remains: why are we so incredibly fascinated by measures and objectivity? The excerpt below gives us the insight we need:

 “We in the western world in the 21st century are children of science: our lives are dominated by the products of science and by the powers that they place in our hands. We are still biological creatures, but unlike any other species we have stepped out of the environment of nature, and into a new environment of our own making – one created by science and technology. All the primitive physical threats to life – hunger, cold, disease, darkness, distance – have been beaten back, leaving us free to redesign our social lives in ways that would have seemed inconceivable two centuries ago. The visible signs of this mastery are the machines with which we have harnessed the forces of nature, and which have now become for us indispensable tools for living. But these machines are only symbols of something much deeper – of our understanding of the laws of nature, an understanding which has been gained slowly and painfully over thousands of years. Science has been one of the great intellectual quests of human history, but unlike the philosophical quest or the religious quest, it has had consequences that are intensely practical, indeed it has reshaped our lives.”

[Whitfield, Peter; 2012; Kindle Locations 54-61]

We are fascinated by science, which is the dominant Western religion of the 21st century. The reasons for this fascination abound. As the excerpt above makes clear, science has had practical consequences for our lives, mostly for the better. And we certainly cannot blame physicists and mathematicians for madmen’s (or politicians’) use of their discoveries.

I need not go through each and every endeavour of science to substantiate my contention that we in the West are fascinated by it. I will mention just a few examples among thousands. The discovery of antibiotics has saved, and is still saving, millions of lives. And the advances of medical science have also dramatically reduced the consequences of aggression and wounds:

  • “advances in medical technology since 1970 have prevented approximately three out of four murders”.

[Christensen et al.; 2012; Kindle Locations 2851-2852]

  • “a wound that would have killed a soldier nine out of ten times in World War II, would have been survived nine out of ten times by United States soldiers in Vietnam. This is due to the great leaps in battlefield evacuation and medical care technology between 1940 and 1970.”

[Christensen et al.; 2012; Kindle Locations 2854-2860]

  • “c. 1840: Introduction in Hungary of washing hands and instruments in chlorinated lime solution reduces mortality due to ‘childbed fever’ from 9.9 percent to .85 percent; c. 1860: Introduction by Lister of carbolic acid as germicide reduced mortality rate after major operations from 45 percent to 15 percent”

[Christensen et al.; 2012; Kindle Locations 2879-2884]

And it was not by chance that Americans chose to exhibit their superiority by landing on the Moon in 1969. It was a safe bet for them to assume that a public exhibition of the magical powers of American science would impress and subdue the rest of the world for years to come. So we have plenty of evidence of our cultural bias. Now, how can this bias influence our management practices to the point of transforming otherwise brilliant people into dull measurement agents of improbable pseudo-scientific observables? We can find a convincing answer in the use of mathematics, the universally recognised language of science. But beware: not Mathematics with a capital ‘M’; only its bare approximation, the arithmetic of the bean counter. The excerpt below, referring to Newton’s Principia Mathematica, again gives invaluable insight:

“Newton’s title is important in another sense too, in that it recognises the role of mathematics in natural philosophy, the sense that nature must be an ordered system, that number, regularity and proportion are built into the fabric of the universe, and that mathematics can provide the key to our understanding of it.”

[Whitfield, Peter; 2012; Kindle Locations 134-136]

So everything is clear by now: the universal use of KPIs, true believers think, will transform enterprises into “an ordered system” benefiting from the very same “regularity” which is “built into the fabric of the universe”. This is a truly impressive utopia, isn’t it? It is only a pity that the whole theory is based on false metaphysical assumptions. I will now go through the misconceptions which underlie this false myth of scientific management.

Misconception 1: Using figures and measuring KPIs is the same as using the Scientific Method

The Scientific Method can be summarised with reasonable approximation using this sequence of steps:

  1. Define a question
  2. Gather information and resources (observe)
  3. Form an explanatory hypothesis
  4. Test the hypothesis by performing an experiment and collecting data in a reproducible manner
  5. Analyze the data
  6. Interpret the data and draw conclusions that serve as a starting point for new hypotheses
  7. Publish results
  8. Retest (frequently done by other scientists)

[Wikipedia; 2013]

With the scientific method, observation of phenomena leads scientists to formulate hypotheses. These hypotheses include posits about the existence of entities (ontological hypotheses) and about the laws governing them. Why did Newton care about the observables called acceleration, mass and force? Because he posited, and then proved, the existence of a mathematical law relating them to one another: his well-known second law of motion states that F = m·a. Newton’s theory is a typical example of a scientific theory, in that observation of the reproducible phenomenon it describes can confirm (or reject) its validity. If the law were false, a number of documented experimental exceptions would disprove it, determining its withdrawal from the body of recognised scientific knowledge.
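The point can be made concrete with a toy calculation (the values are illustrative, not measurements): precisely because a law relates the observables, measuring any two of them predicts the third.

```python
# Newton's second law relates three observables: F = m * a.
# Given any two measured quantities, the law predicts the third.
def force(mass_kg: float, acceleration_ms2: float) -> float:
    """Predicted net force in newtons, per F = m * a."""
    return mass_kg * acceleration_ms2

# A 0.2 kg apple in free fall near Earth's surface (a ~ 9.81 m/s^2):
print(force(0.2, 9.81))  # 1.962 (newtons)
```

Without such a law, the individual measurements would carry no predictive power at all, which is exactly the situation of a stand-alone KPI.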

The first difference between KPIs and scientific observables comes immediately to mind. KPIs are typically not related to one another by laws like the laws of physics. They are measured with the implicit assumption that they are immediately meaningful on their own. For example, let us assume a manager would like to measure the efficiency of a resolver group. She might measure such KPIs as “number of incidents resolved in a day” or “number of incidents whose estimated closing date has been exceeded by more than 20%”. The implicit assumption here is that there exists such an entity as “the efficiency of the resolver group”, and that measuring the KPIs above alone is the same as measuring that efficiency. But wait a moment: we do not have any scientific law here. We have, at most, a very basic intuition. Without a law relating force, mass and acceleration, Newton could just as well have been measuring the colour of the sky, or counting the ants killed by the falling apple, to determine its mass. A manager who measures KPIs without an experimental law relating them to the entity she seeks to assess is doing exactly that: counting dead ants to determine the mass of an apple.

Let us go further. Scientists measure observables knowing the error function in advance. In other words, the acceleration of a falling body on planet Earth is 9.81 m/s² +/- a given epsilon: the error function. A measure is only relevant when the error range is known in advance. If I know that I am driving at 85 mph +/- 2%, I can safely assume that I am within a 90 mph limit. But if I did not know the error function of the speedometer in my car, how could I possibly determine whether I am within the speed limit or not? It is thanks to the awareness of error ranges that we can trust the cockpits of our cars; and it is thanks to it that car makers generally tune their speedometers to mark speed slightly in excess, to keep drivers on the safe side. Conversely, KPIs are measured without knowledge of their associated error function. Consequently, even if KPIs were actually useful on their own (and, as we have seen above, they need not be), measuring them without knowing their associated error function would still make them useless. Saying that one’s team is solving 99% of incidents within the estimated date, without knowing the error function, is like saying I am driving at 85 mph +/- an unknown amount. For example, there may be a percentage of incidents which are not tracked strictly following the process (not so unlikely). If one does not know this percentage, the error function is unknown, and the KPI above has the same precision as my children’s toy speedometer.
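The speedometer argument can be sketched in a few lines of code (the function name and figures are my own illustration, not any real instrument’s specification): a reading supports a decision only when its error bound is known.

```python
from typing import Optional

def within_limit(measured_mph: float, limit_mph: float,
                 error_fraction: Optional[float]) -> Optional[bool]:
    """Decide whether a speed reading is certainly within the limit.
    Returns None when the error function is unknown: the bare figure
    cannot support any decision."""
    if error_fraction is None:
        return None  # unknown error function: the number is not actionable
    worst_case = measured_mph * (1 + error_fraction)
    return worst_case <= limit_mph

# Known error function: 85 mph +/- 2% -> worst case 86.7 mph, within a 90 mph limit.
print(within_limit(85, 90, 0.02))  # True
# Unknown error function: the very same reading supports no conclusion.
print(within_limit(85, 90, None))  # None
```

A KPI reported without its error function is the `None` case: a figure that looks precise but decides nothing.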

So long as the practice remains for managers to make decisions based on indicators measured in this way, it is no surprise that our companies, our economy and, ultimately, our lives are unnecessarily endangered.

Misconception 2: The Performance is the KPI

Let us assume that the entity to be measured is “the quality of a service desk”. Since the definition is admittedly vague, the poor manager of this service desk will have a hard time when she is assessed (and possibly rewarded) based on her objectively measured improvements. To this end, her boss will likely define a bunch of KPIs, which will be regarded as objective evidence. Why objective? Because they are expressed in figures, and this, the big boss will maintain, gives them the special sacrality of absolute mathematical truths. For the simple fact that KPIs are expressed in figures, we are asked to accord them the same respect as if they were, say, Maxwell’s equations or Peano’s axioms. Now, defenders of this use of KPIs would probably admit that the KPI in itself is obviously not the same as “the quality of the service desk”, just as my shoe size and my health insurance ID are not the same entity as the person I look at in the mirror every morning when I shave. However, the argument goes, if the number of open incidents decreases, we know that quality increases; if the average time spent fixing an incident decreases, quality increases. So even if the measured KPIs are not exactly the same as “the quality of the service desk”, defenders will say, they provide clear and actionable information to help the manager make the right decisions to improve it. Fair enough. However, I still counter this argument, because it implicitly assumes that there is a linear law relating such KPIs to the quality that needs to be optimised. But we cannot say: such a law need not be linear at all. It might very well be polynomial, logarithmic, or what have you. What does this mean?

It means that a manager trying to optimise quality Q by optimising single KPIs related to Q may be improving that quality by only a very small percentage, depending on the mathematical law governing the interdependence of Q and the KPI at hand. To conclude, saying that by improving a given KPI k1 we are improving quality Q1 is a very imprecise way to measure the effect of k1 on Q1, because the achieved improvement is governed by a mathematical law which is unknown both to the manager whose performance is assessed and to her assessor. The defender of KPIs could object that measuring something is better than measuring nothing. I reject this stance, because every rational person will agree that, rather than collecting arbitrary numbers and basing decisions on them, it is far better not to collect them at all.
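A minimal sketch of this point, using two made-up laws relating a KPI k to quality Q (both laws are hypothetical; the real one is, by assumption, unknown): the same KPI improvement yields wildly different quality gains depending on which law happens to hold.

```python
import math

# Two hypothetical laws relating a KPI k to quality Q. In practice the
# true law is unknown, which is precisely the problem.
def q_linear(k: float) -> float:
    return 2.0 * k

def q_log(k: float) -> float:
    return math.log(1.0 + k)

# The same KPI improvement, from k = 10 to k = 20:
linear_gain = q_linear(20) - q_linear(10)  # gain of 20.0
log_gain = q_log(20) - q_log(10)           # gain of only ~0.647
print(round(linear_gain, 3), round(log_gain, 3))
```

A manager who doubled k would report the same “objective” success in both worlds, while the actual effect on Q differs by a factor of about thirty.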

Misconception 3: What you can Measure you can Manage

This is a typical logical fallacy. From the principle “you cannot manage what you cannot measure” it does not follow “what you can measure you can manage”. Let us see why.

Let’s define the following propositions:

Measure(x)=”I can measure x”

Manage(x)=”I can manage x”

Using definitions above, “you cannot manage what you cannot measure” becomes:

not Measure(x) -> not Manage(x)

By the same token, “what you can measure you can manage”. becomes:

Measure(x) -> Manage(x)

But from not A(x) -> not B(x) it does not follow that A(x) -> B(x). Rather than use truth tables, I will give simple evidence of this fallacy.

Let us define A(x)=”person x has water to drink” and B(x)=”person x can stay healthy”

The plain truth “a person without water cannot stay healthy” would be expressed as:

not A(x) -> not B(x)

If A(x) -> B(x) held, we would have “a person with water to drink will stay healthy”. But this is false (one may very well have plenty of water and no food whatsoever). Therefore the implication is mistaken. The correct implication would have been:

(not A(x) -> not B(x) ) -> (B(x)->A(x))

Coming back to our case, we obtain: “what you can manage you can measure”, which looks like a far more reasonable statement.
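For readers who would like the truth tables after all, the whole argument can be checked mechanically by enumerating every truth assignment (a small sketch; `implies` is the standard material conditional):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p -> q."""
    return (not p) or q

# Over all truth assignments, test which inferences from (not A -> not B) are valid.
converse_holds = all(
    implies(implies(not a, not b), implies(a, b))   # (not A -> not B) entails (A -> B)?
    for a, b in product([False, True], repeat=2)
)
contrapositive_holds = all(
    implies(implies(not a, not b), implies(b, a))   # (not A -> not B) entails (B -> A)?
    for a, b in product([False, True], repeat=2)
)
print(converse_holds)        # False: "what you can measure you can manage" is a fallacy
print(contrapositive_holds)  # True: only "what you can manage you can measure" follows
```

The counterexample found by the enumeration is exactly the water-and-food case above: A true, B false.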

Conclusions

This essay is an attack on the use of the adjective “scientific” as a qualifier for the discipline of management. Admittedly, my almost exclusive focus on KPIs may have led the reader into believing that it is an attack on KPIs, which it is not. I would really like to clarify that this essay is not against KPIs; it is against the pretence that their use can earn the adjective “scientific” for a non-scientific discipline like management. Again, not being scientific need not be a disadvantage, or a defect. It is only that the Western children of science cannot admit to being subjective sometimes, to acting on gut feelings and other intangible elements, let alone horoscopes and the like. But they do. They are humans like you and me. To be precise, the expression Scientific Management, or Scientific Business Management, is only an oxymoron and should rather be re-defined as Technical Business Management. KPIs are (sometimes useful) techniques upon which one can indeed build a framework for managing a business. And, possibly, a successful one. So why all this fuss about an apparently negligible imprecision like using the adjective “scientific” rather than “technical”? Because by using “scientific”, managers demand a degree of respect and allegiance which is due to Science with a capital “S”. But they do not deserve it. They don’t, because:

  • They are not scientists, and their acts are not supervised and controlled by their peers in the way scientists’ work is in structures like universities, which have hundreds of years of experience in the verification of research work.
  • They base decisions on KPIs, which can impact jobs, revenue and other critical aspects, and they cannot hide behind the alleged scientific nature of their data. A decision is an act of free will and, as such, implies full accountability, for the good or the bad. No assumed determinism can diminish personal accountability and moral responsibility.
  • Scientific truths are neither good nor bad; their uses are. KPIs, too, are neither good nor bad, but their use is. So using KPIs does not convey the moral agnosticism of a mathematical truth to a manager’s decision. No way.

To conclude, there are very useful techniques which may help achieve better results in business management. These techniques may be good or bad, depending on how they are used. But please, do not call this Science. It isn’t, I’m sorry.

Bibliography

Christensen et al.; 2012. Christensen, Loren; Grossman, Lt. Col. Dave, “Evolution of Weaponry: A brief look at man’s ingenious methods of overcoming his physical limitations to kill”, Kindle Edition, 2012-08-21.

Thomas S. Kuhn; 1996. The Structure of Scientific Revolutions, Third Edition, The University of Chicago Press, ISBN 0-226-45808-3.

Whitfield, Peter; 2012. “The History of Science”, Naxos Audiobooks. Kindle Edition, 2012-07-26.

Wikimedia; 2013, https://commons.wikimedia.org/wiki/File:Mad_scientist_caricature.png, accessed 5 October 2013; this file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported

Wikipedia; 2013, http://en.wikipedia.org/wiki/Scientific_method, accessed 2 October 2013


[1] Actually, the fact that scientific theories can be disproved does not imply that a single exception will cause their dismissal. Rather, it is a slow process, especially for “mainstream” scientific theories:

“once it has achieved the status of paradigm, a scientific theory is declared invalid only if an alternate candidate is available to take its place. […] That remark does not mean that scientists do not reject scientific theories, or that experience and experiment are not essential to the process in which they do so. But it does mean - what will ultimately be a central point - that the act of judgement that leads scientists to reject a previously accepted theory is always based upon more than a comparison of that theory with the world.”

[Thomas S. Kuhn; 1996; pp. 77-91]

Clearly, this does not diminish the import of my contention. It only says that, in order to succeed in dismissing the religious use of KPIs, there must be sufficient documented counterexamples, and an alternative theory. This may take time, but it is not an impossible challenge.