Corporate Mind Control

Introduction

What all structures of power have in common is the problem of enabling a minority to rule the majority. This is a very old problem, a classical one. Different cultures have addressed it in different ways: some using techniques of brute-force domination, others adopting more elaborate methods of mind control. The realm of modern corporations has focussed on the latter approach, taking the most effective ideas from each and every culture. In this respect, modern structures of power are pretty similar across geographies. What are these techniques? It would be difficult to give an exhaustive account, because there are so many, and they keep changing. In this essay I will introduce some evergreens, which will give an overview of the methods used by powerful groups to exert control over their subjects.

The first category of mind control techniques is Language Techniques. Examples of these techniques are: Manipulative Communication, Definition and Enforcement of Official Vocabulary, and Misuse of Metaphors.

The second category of mind control techniques is Cognitive Techniques, such as the use of Why/What/How questions, Partial Truths / Half-Truths, Misuse of KPIs, Information Overload, Misuse of Variable Compensation, Misuse of Gantt Charts, and Misuse of Time Management.

The third category is Emotional Techniques, such as Foment Perpetual Fear of Losing Job, The Superman CEO, Forbid Errors, Disconnection of Personal Contribution from the Outcome, Prevent Formation of Consent, Get Creative Contribution from Consultants, not Employees, Reference Letter/Work Certificate, and Misuse of Lifelong Certifications.

We will now start this inquiry into the world of scientific mind control with Language Techniques.

Language Techniques

There is a view in the literature according to which the use of language affects the formation of ideas and influences behaviour:

“The principle of linguistic relativity holds that the structure of a language affects its speakers’ world view or cognition. Popularly known as the Sapir–Whorf hypothesis, or Whorfianism, the principle is often defined to include two versions. The strong version says that language determines thought, and that linguistic categories limit and determine cognitive categories, whereas the weak version says that linguistic categories and usage only influence thought and decisions.”

Source: “Linguistic relativity”. Retrieved January 4th, 2017, from https://en.wikipedia.org/wiki/Linguistic_relativity

Based on this principle, a number of manipulative techniques have been developed.

Manipulative Communication

Manipulative Communication is aimed at obtaining something from someone through insidious techniques of control. Manipulative Communication is done by using manipulative language. A good description of manipulative language is available at: http://www.clairenewton.co.za/my-articles/the-five-communication-styles.html

I reproduce an excerpt here, for convenience’s sake [accessed on 21 December 2016]
———————————————
The Manipulative Style
This style is scheming, calculating and shrewd. Manipulative communicators are skilled at influencing or controlling others to their own advantage. Their spoken words hide an underlying message, of which the other person may be totally unaware.

Behavioural Characteristics

  • Cunning
  • Controlling of others in an insidious way – for example, by sulking
  • Asking indirectly for needs to be met
  • Making others feel obliged or sorry for them.
  • Uses ‘artificial’ tears
Non-Verbal Behaviour

  • Voice – patronising, envious, ingratiating, often high pitch
  • Facial expression – Can put on the ‘hang dog’ expression
Language

  • “You are so lucky to have those chocolates, I wish I had some. I can’t afford such expensive chocolates.”
  • “I didn’t have time to buy anything, so I had to wear this dress. I just hope I don’t look too awful in it.” (‘Fishing’ for a compliment).
People on the Receiving end Feel

  • Guilty
  • Frustrated
  • Angry, irritated or annoyed
  • Resentful
  • Others feel they never know where they stand with a manipulative person and are annoyed at constantly having to try to work out what is going on.

Source: The Anxiety and Phobia Workbook. 2nd edition. Edmund J Bourne. New Harbinger Publications, Inc. 1995.
———————————————
Manipulative communication may be used to obtain something from other people through cunning manipulation of their thought-formation processes. How can one detect manipulative language? Luckily, there are some indicators, some words which are usually reliable markers of manipulative communication:

  • all, every, none, everyone, no one, always, never, best, worst, etc.

Examples:

“You always make silly mistakes”
“You never get it right at the first attempt”
“Are you sure this is the best solution?”
“This is the worst presentation I have ever seen”

If one wants to develop resilience against manipulative communication, the first thing to do is to learn to spot these manipulative words.
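The first step of spotting these marker words can even be sketched in a few lines of code. The word list below is illustrative, not exhaustive, and the tokenisation is deliberately naive:

```python
import re

# Illustrative, non-exhaustive list of absolutist marker words
# that often signal manipulative phrasing (see examples above).
MARKERS = {"all", "every", "none", "everyone", "no one",
           "always", "never", "best", "worst"}

def flag_markers(sentence: str) -> list[str]:
    """Return the absolutist marker words found in a sentence."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    # Check single tokens plus two-word sequences (to catch "no one").
    candidates = tokens + [" ".join(p) for p in zip(tokens, tokens[1:])]
    return [c for c in candidates if c in MARKERS]

for s in ["You always make silly mistakes",
          "You never get it right at the first attempt",
          "Could you check this figure again?"]:
    print(s, "->", flag_markers(s))
```

A real detector would of course need context (the same words appear in honest sentences too), but even this crude filter flags the examples above while leaving the neutral request untouched.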

Definition and Enforcement of Official Vocabulary

Imposing the use of an acceptable vocabulary on others is an effective way of influencing how things are perceived. It is one thing to say: “I’ll give you this problem to solve, and you’ll have to complete it and resolve all impediments proactively. If the task is not done, you will pay the consequences.”; quite another to say: “I think you have the right skills for owning this challenge”. The apparent meaning is exactly the same, but the second form is malicious, because it hides the negative aspects behind a layer of shiny fresh paint. The first manipulation is done by using “challenge” instead of “problem”. A problem exists independently of the observer. For example, finding a computationally efficient algorithm which determines whether a given integer is a prime number or not. But if we call this a challenge, the fact that it can be solved is implicitly attributed to the ability of the solver, not to the objective complexity of the problem. Calling problems “challenges” is a subtle way to put pressure on the person to whom the task is given. The second manipulation is done with the expression “to own a challenge”. The notion of ownership implies that the owner of an object can, among other things, do the following:

  • refuse to receive the object
  • sell the object
  • donate the object
  • destroy the object
  • dispose of the object
  • exchange the object with another one

Think about it: this is certainly true of a car, a house, a book, a pair of shoes, etc. Now the point is, can one really “exchange a task or assignment” with something else (e.g. another task one likes better)? Can one decide to reject it? The answer is no. When a manager gives an assignment, the assignee is clearly not free to reject it, except perhaps once or twice, before being shown the door. Saying that someone who has been given a task is “owning a challenge” is a gross misrepresentation of what is going on.

Misuse of Metaphors

Metaphors are a very powerful type of figurative language. According to Merriam Webster (https://www.merriam-webster.com/dictionary/metaphor, accessed on 21 Dec 2016) a metaphor is:

a figure of speech in which a word or phrase literally denoting one kind of object or idea is used in place of another to suggest a likeness or analogy between them (as in drowning in money); broadly : figurative language

How are metaphors used to control people’s minds and behaviour? The method is based on implicitly attributing to an object or notion qualities which it does not possess. This is achieved by replacing the object or notion with a metaphor. The metaphor shares some qualities with the original object, but not all. The manipulation is done by tricking the audience into believing that the properties of the surrogate object (the metaphor) also apply to the object it replaces. A few examples will clarify. The examples below are taken from the article “Twenty business metaphors and what they mean” by Tom Albrighton (http://www.abccopywriting.com/2013/03/18/20-business-metaphors-and-what-they-mean, accessed on 21 Dec 2016).

(…)

Organisations = machines
This is another product of the Industrial Revolution, when businesses were characterised like the machines around which they were built. Applying the concept of mechanisation to human work led to things like organisational diagrams (=blueprints), departments (=components), job specifications (=functions) and so on.
All these concepts were productive in their way, but they obscured the reality of an organisation as a group of people. Unlike uniform components, people have very different abilities and aptitudes. And unlike machines, they can’t be turned up, dismantled or tinkered with at will.

(…)

Products = organisms
This metaphor is expressed in phrases like ‘product lifecycle’, ‘product families’, ‘next generation’ and so on. It draws a parallel between the evolution of living things and the way products are developed. This nicely captures the idea of gradual improvement through iteration, as well as the way in which products are ‘born’ (=introduced) and eventually die (=withdrawn) – always assuming they’re not killed off by the forces of ‘natural selection’ in the market.
One drawback of this metaphor might be the way it downplays human agency. Products and services don’t actually have an independent life, nor will they evolve independently. They are the result of our own ideas and decisions – reflections of ourselves, for better or worse.
Progress = elevation
This idea encompasses such well-worn phrases as ‘taking it to the next level’ and ‘onwards and upwards’. If we visualise this metaphor consciously at all, we might think of a lift going to the next floor, or perhaps someone ascending a flight of stairs. While ‘up’ is usually associated with ‘more’ and ‘better’ (think of growing up as a child, or line graphs), not everyone likes heights.
Careers = ladders
Thinking of careers as ladders embodies the same ‘up is good’ idea as ‘progress = elevation’. The higher up the ladder you go, the more you can see – and the more you can ‘look down on’ those below you.
Since ladders are one-person tools, there’s also an implication that you’re making this climb alone. So what will happen if others do try and join you? Will the ladder break, or fall?
More to the point, what if you reach the top and discover that your ladder was leaning against the wrong wall? Jumping across to another ladder is dangerous, so you might have to go all the way back down and start again.
A more useful metaphor in this context might be ‘careers = paths’. Paths fork, implying choice. Overtaking, or being overtaken, is no big deal. You can step off the path for a while, if you like – there might even be a bench to sit on. And while some might feel the need to ‘get ahead’, it’s clear that the most important thing is not how far you go, but whether you’ve chosen the right path for your destination.

(…)

To conclude,

Metaphors are meant to create an impact in the minds of readers. The aim of this literary tool is to convey a thought more forcefully than a plain statement would.

They are exaggerated expressions no doubt, but they are exaggerated because they are supposed to paint a vivid picture, or become a profound statement or saying.

source: http://examples.yourdictionary.com/metaphor-examples.html, accessed on 21 Dec 2016

Cognitive Techniques

Why/What/How

Most people are used to defining the things which are important to them. Some want to pursue higher education, others want to become rich, some want to do something good for the poor or the sick. Everyone has the chance to define her life strategy. I will call questions of this kind the “why” questions. When there is a vision, and objectives are defined, one has to identify what needs to be done. For example, if one wants to pursue higher education she has to study hard, make sacrifices, define an area of specialisation based on her interests and abilities, and so on and so forth. I will refer to questions like these as the “what” questions. When it is clear what needs to be done, one has to define how each activity can be accomplished. Here it is necessary to define the details: the available budget, the universities to target, their admission criteria, etc. I will refer to questions like these as the “how” questions.

When one works for someone else, as employees do, depending on her role she has access to some kinds of questions but, generally, not all. Those who define the “why” questions are the CEO and the Board of Directors. In so doing, they have to interpret the intentions and priorities given by the most important shareholders. One level down, directors and general managers usually address the “what” questions, and define what needs to be done to achieve the set objectives. Everyone else in the organisation is actually dealing with “how” questions. Ordinary employees are overwhelmed by details and specialist work. They need to implement and execute “what” their directors have defined.

Dealing with people in a way that carefully filters out the more interesting questions and focusses on the details of the things to do is yet another way of controlling their behaviour and ensuring they will not contribute creatively to the definition of the strategy. Their action will be constrained within well-defined “tasks” which someone else has created for them. If they fail, the outcome will easily be attributed to their action, not to a flawed plan, because they do not even have visibility of anything other than their micro-tasks. Ordinary employees usually do not have the elements to understand the value of what they are doing, because the ultimate objective is oftentimes shared only with a few people higher up in the organisational hierarchy.

There is a lot of rhetoric about being creative, proactive, and so on and so forth. But a lot of organisations do not really expect people to engage in “what” questions (let alone “why” questions). All they want from their employees is that they get things done independent of the inefficiencies, politics, and oddities of their working conditions. This is what is really meant, oftentimes, by “being proactive”. The skeptics are invited to try to propose a change which makes a process more efficient, or simplifies a procedure, or increases employee satisfaction, and then share the outcome of their proactive and creative behaviour.

Partial Truths / Half-Truths

Salami-technique

I learnt this expression from a manager some jobs ago. He consciously used it to obtain things from employees, keeping them in a constant condition of disempowerment. The “salami-technique” consists in splitting the information necessary to understand a request into small items, and sharing only what is strictly required to obtain the execution of tasks by individuals and teams, without giving them the context. The receivers of the “information slices” will know enough to accomplish their duties in a monkey-like fashion, without gaining any insight into what they are doing. Managers practicing the “salami-technique” pride themselves on being the only ones who see the full picture, making themselves indispensable.

A variant of this technique is a misused form of the “need to do/need to know” principle. This principle is applied in security in order to reduce exposure to confidential data and the risk of disclosure. However, the same principle is sometimes misused to justify the practice of keeping people unnecessarily in the dark, giving them only the minimum information required to execute a task.

Information Funnel

This technique has similarities with the salami-technique illustrated above, but is systematically applied in a hierarchical fashion, reducing the information shared at every level down the organisation. It is one way in which the why/what/how principle explained above is enforced.

Misuse of KPIs

Key Performance Indicators (KPIs) may be useful in defining and measuring quantitative objectives. Several good books explain how to use them wisely. In reality, many actual uses of KPIs are misguided and aimed at manipulating teams into behaving in desired ways. This is done through the exploitation of a cultural bias and an equivocation. I have explained this in detail in my post The False Myth of Scientific Management, to which the interested reader can refer. Here I will give a brief summary. KPIs are expressed in figures. Figures are associated with mathematics, which is the language of science. This association is intended to implicitly claim and evoke rigour and credibility. However, KPIs are not Science, and not even science. They are a technique. Unlike the observables of scientific laws, KPIs are not bound by mathematical laws which describe and predict phenomena in a way that makes it possible, for example, to know the precision of an estimate: the error function, the epsilon. Not being experimental laws, they cannot be disproved if wrong. You have to stay content with the straight faces of the holy priests of this technique, its true believers. Nothing else can substantiate the truth or significance of what is measured by defined KPIs. This is not to say they cannot be used wisely; there are many good ways of doing so described in the literature. The point I want to make here is that a misguided use of KPIs (not KPIs per se) can be used to manipulate people.
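The difference between a scientific estimate and a bare figure can be made concrete. In the sketch below the sample data is invented for illustration; the point is that an estimate carries its epsilon, while a KPI is typically reported as a naked number:

```python
import statistics

# Invented repeated measurements of the same quantity (say, a response
# time in milliseconds); any real data set would do.
samples = [102.0, 98.5, 101.2, 99.8, 100.6, 97.9, 103.1, 100.4]

mean = statistics.mean(samples)
# Standard error of the mean: the 'epsilon' that qualifies the estimate.
sem = statistics.stdev(samples) / len(samples) ** 0.5

print(f"estimate: {mean:.1f} ± {sem:.1f}")  # a figure with its precision
print(f"KPI:      {mean:.1f}")              # the same figure, naked
```

The first line tells you how much to trust the number; the second, the typical KPI form, does not, yet it borrows the aura of rigour of the first.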

Information Overload

The influential book “The Net Delusion: The Dark Side of Internet Freedom” by E. Morozov explains how totalitarian regimes which initially invested their energies in censorship soon learnt to apply a cheaper and more effective technique: information overload. The traditional technique of censorship is based on the creation and maintenance of a catalogue of forbidden content and resources. This approach was effective in the pre-Internet era, when content was produced and disseminated at lower rates, and distribution happened by physical means (e.g. pamphlets, books) whose access could more easily be controlled. Nowadays, maintaining a blacklist of undesired content is practically impossible. In order to prevent people from accessing content which could inspire them to bring about change, regimes soon found that a cheaper and more effective way was already available: flooding the Internet with garbage content like entertainment sites, porn, etc. When one is overwhelmed by this content, one is less prone to engage in discussions and debates on how to change the world.

The question arises: given the focus of this essay on corporate mind control, how does this method developed by regimes relate to the corporate agenda? It relates indirectly. Think about a motivated employee who thinks she can promote her ideas and bring about change in her organisation. What will a change-resistant organisation do? As we have seen above, it will no longer try to formally forbid this. Quite the contrary: the organisation will pay lip service to innovation and invite proactive employees to check on the intranet for the processes made available to submit proposals. This information will be buried in a sea of content with primitive search functionality, and the proactive employee will have to check hundreds of hits one by one, manually, in a never-ending endeavour. Very soon more mundane tasks will take priority and divert this employee’s effort to tasks closely related to “how” questions (see above), and the time for proactivity will soon be over.

Misuse of Variable Compensation

There are very good reasons why a percentage of total compensation can be variable. As above, I will not dwell here on the good uses of this practice; my focus is on how a malicious use of variable compensation is part of the toolset of the professional mind manipulator. When the criteria for awarding variable compensation are set in a way which leaves room for interpretation, a manager can easily put the necessary pressure on her employees to obtain what she wants. Although employees are often advised to base their spending on fixed compensation and use variable compensation only for non-critical things or services, most people do otherwise. If the variable part of their compensation is reduced, they may face very practical issues, like paying instalments or a lease, or having to give up a holiday. Giving a manager or a handful of individuals the power to decrease the salary of employees clearly gives them an extraordinary power to obtain exceptional performance. What is the problem with this technique of manipulation? Well, there are many. For example, some people focus on the objectives bound to the bonus, neglecting the rest. They become less collaborative and aim at achieving objectives in a formal way, not necessarily generating the expected value. Secondly, when people are treated like greedy individuals who only do something because otherwise they will get less salary, they will behave like greedy individuals. Appealing to first-order desires, like greed, is a sure way to make a good person behave like a fool.

Misuse of Gantt Charts

Gantt charts are a very popular representation of project plans. Despite the emergence and hype of agile methods, which are based on entirely different concepts, many enterprises still use Gantt charts because internal processes have been shaped over decades by traditional project management practices. According to Wikipedia,

A Gantt chart is a type of bar chart, devised by Henry Gantt in the 1910s, that illustrates a project schedule. (…) One of the first major applications of Gantt charts was by the United States during World War I, at the instigation of General William Crozier.

source: “Gantt chart”. Retrieved January 14th, 2017, from https://en.wikipedia.org/wiki/Gantt_chart

Gantt charts can be very effective and are a powerful tool when it comes to planning activities like the ones for which they were first introduced. Clearly, planning the activities of an army and planning knowledge work are not exactly the same thing. Part of the equivocation is innocent in nature: thinking that the advancement of a task is proportional to the effort invested in it, and that the time needed to complete it is inversely proportional to the resources allocated. This is probably true if one builds a wooden table, at least to an extent. But if one is trying to find an efficient algorithm to solve a problem, this is usually not true. The professional manipulator transforms a problem to solve into a plan, and then describes the solution to the problem in terms of that plan. Having transformed a problem into its representation, the manipulator feels legitimised to drag tasks here and there, and to think (or want others to think) that the corresponding activities can also be completed sooner or later, correspondingly. However, this is blatantly false, as the nine women paradox explains very well:

Nine Women Can’t Make a Baby in a Month

Gantt chart true believers will counter this argument by saying that “if all the dependencies are correctly modelled, and the tasks correctly classified (fixed work, fixed duration, fixed units, etc.), then the representation is very close to reality indeed”. However, real-life experience proves that this is seldom the case, and the reason is simple to understand. Modelling all such dependencies correctly is very difficult: one would require complete knowledge of the problem at the time the plan is written. But knowledge is not complete; it is partial. And with a model based, necessarily, on partial information, one cannot expect total reliability. The good planner knows this and uses Gantt charts cum grano salis. The manipulator or the imbecile firmly believes that what they see is what they will get, and wants others to believe the same. Sadly, these people contribute to making the statistics of successful projects what they are.
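The paradox can be written down as a two-line model. The naive function below is the assumption implicit in dragging a Gantt bar; the second, loosely inspired by Amdahl’s law, separates out the serial (non-parallelisable) part of the work, which no amount of headcount can shrink:

```python
def naive_duration(total_work_months: float, people: int) -> float:
    """The Gantt-dragging assumption: duration shrinks linearly with headcount."""
    return total_work_months / people

def honest_duration(serial_months: float, parallel_months: float,
                    people: int) -> float:
    """Only the parallelisable part of the work shrinks with headcount."""
    return serial_months + parallel_months / people

# Nine women, one baby: the work is entirely serial.
print(naive_duration(9, 9))       # 1.0 month, says the dragged bar
print(honest_duration(9, 0, 9))   # 9.0 months, says biology
```

The manipulator plans with the first function; reality runs on the second, and the gap between the two is exactly the serial fraction of the work that the chart silently assumes away.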

Misuse of Time Management

Let us start by defining Time Management:

Time management is the act or process of planning and exercising conscious control over the amount of time spent on specific activities, especially to increase effectiveness, efficiency or productivity. (…) The major themes arising from the literature on time management include the following:

  • Creating an environment conducive to effectiveness
  • Setting of priorities
  • Carrying out activity around prioritization.
  • The related process of reduction of time spent on non-priorities
  • Incentives to modify behavior to ensure compliance with time-related deadlines.

It is a meta-activity with the goal to maximize the overall benefit of a set of other activities within the boundary condition of a limited amount of time, as time itself cannot be managed because it is fixed.

source: “Time management”. Retrieved January 14th, 2017, from https://en.wikipedia.org/wiki/Time_management

While it is certainly true that, time being a finite resource, only so many tasks can be done in any given period, that does not imply that people can be made to achieve more by switching tasks continuously based on a set of activities with ever-changing priorities. The human brain is simply not a CPU: context switching proves extraordinarily expensive as a mental process. When knowledge workers are focussed on their tasks and at maximum efficiency, they are in a so-called state of flow.

In positive psychology, flow, also known as the zone, is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by complete absorption in what one does.

source: “Flow (psychology)”. Retrieved January 14th, 2017, from https://en.wikipedia.org/wiki/Flow_(psychology)

Whenever a knowledge worker is interrupted, this state of flow ceases, and it takes a certain amount of time to re-establish it. The time required to get back into the state of flow following an interruption is one of the reasons why misuse of time management actually reduces efficiency and efficacy rather than increasing them. And one can only undergo so many such cycles before her efficiency is completely spoiled for the whole day.
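A back-of-the-envelope calculation makes the cost visible. The 20-minute refocus cost and the 8-hour day below are illustrative assumptions, not measured figures:

```python
# Toy model of flow time lost to interruptions.
# Both constants are assumptions chosen for illustration.
WORKDAY_MIN = 8 * 60   # an 8-hour day, in minutes
REFOCUS_MIN = 20       # assumed time to re-enter flow after an interruption

def productive_minutes(interruptions: int,
                       workday: int = WORKDAY_MIN,
                       refocus: int = REFOCUS_MIN) -> int:
    """Minutes left for focussed work after paying the refocus cost."""
    return max(0, workday - interruptions * refocus)

for n in (0, 6, 12, 24):
    left = productive_minutes(n)
    print(f"{n:2d} interruptions -> {left:3d} focussed minutes "
          f"({100 * left // WORKDAY_MIN}% of the day)")
```

Under these assumptions a dozen "quick questions" halve the day, and two dozen erase it entirely, without a single minute of the interruptions themselves being counted.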

Professional manipulators use Time Management as an excuse to abuse their victims, interrupting them continuously whenever they need or want to have something done immediately at the expense of others. They will use a changed priority as a justification, forgetting or neglecting the fact that urgency and importance are two distinct things.

Emotional Techniques

Foment Perpetual Fear of Losing Job

In a job market shaped by the tenets of neo-liberalism, people are constantly compared with professionals from all around the world. Nobody is good enough at her profession to be safe when compared with professionals on a global scale. There is always someone who is better, cheaper, younger, healthier, and more ambitious and driven to take anyone’s job. Companies restructure not only when they are doing badly but, increasingly, when results are good. People are constantly confronted with cost cutting, digitization (see my essay The Dark Side of Digitization), increasing use of Artificial Intelligence, etc. The effect is that every employee feels replaceable. If one says no to a request, however irrational, someone else will immediately be ready to take her job. When forces like these are used to intentionally foment fear of losing one’s job, this is a mind control technique.

The Superman CEO

Years ago I was impressed to see a CEO call employees by their first name at a business party. He would address people as if he truly knew them and would ask questions as if he cared about what they were doing. That was an amazing thing. It had a great effect on me, and I thought it was genuine. Years later, I met another CEO who prided himself on having learnt Italian remarkably fast and amazingly well. It was a remarkable achievement, and employees went around saying that they might not have been able to do the same. This CEO had the image of a superman. Employees started thinking, “there’s a reason why he is the CEO and I’m not; if he can learn a language in no time, God only knows what else he can do. He is cleverer than us, he is gifted.”

Over the years I have met many more CEOs, and I have noticed that this is a pattern. While it may certainly be the case that most CEOs are indeed very gifted and smart individuals, probably well above the average, it is also true that some seem to have a clear urge to prove this alleged superiority with spectacular demonstrations like the above. When abilities are showcased in such a spectacular way, this is a form of manipulation and mind control. It is intended to create the myth that the company is guided by extraordinary individuals whose decisions must simply be trusted, even when not understood. The subliminal message is that there is a reason why such decisions may sometimes seem irrational: ordinary people like you and I just can’t understand them. The argument goes: “in the same way as we can’t remember the first names of all our colleagues (neither can they, by the way) or learn another language so fast, we just can’t grasp the depth and full meaning of their decisions, because we are not as smart as they are.”

Forbid Errors

Some organisations pursue practices which aim at forbidding errors. I remember a colleague managing a technical team shouting at them, “this quarter you have to achieve the zero-ticket objective, understand!!!??”. Management practices like this have the objective of keeping people focussed on the “how” questions (see above), and away from the “what” or “why” questions. When people cannot make a mistake, they cannot innovate, cannot learn, cannot be truly proactive. All they can do is work like monkeys, day in and day out.

Disconnection of Personal Contribution from Outcome

Human beings need feedback on their work. When a job is well done, normal people like this to be recognised. When negative feedback is received, one likes to be able to understand what went wrong, in order to improve and do better next time. A mind control technique is based on the idea of making the connection between individual contribution and result opaque, so that people can no longer claim recognition for a job well done and can, at the same time, be blamed if something goes wrong, even if they had nothing to do with it. Tasks are defined in such a way that no individual contribution can easily or directly be linked to the overall outcome. Everything becomes “a team exercise”, with the implication that anyone in the team could have been replaced and the same result still achieved. The only way one can try to defend oneself from this is to work on tasks whose deliverable has a clear value even when evaluated individually.

Prevent Formation of Consent (aka Divide and Conquer)

A family of manipulation techniques has the objective of preventing the formation of consent. The reason is obvious: if a minority is to rule the majority, the most dangerous thing is for people to gain awareness of having shared interests and stakes in the situation. Because, if they do, they can join forces and bring about change. The ruling minority does not want this to happen. Therefore, it engages in a series of manipulation techniques which I will refer to as divide and conquer.

Competition

An example is fostering competition between individuals, teams, locations, etc. The alleged purpose is to stimulate healthy energies and let the best ideas emerge. The actual effect is oftentimes that people reduce or stop collaboration, pursue personal gratification instead of team objectives, try to claim ownership of achievements, cast less talented individuals in a bad light in order to shine brighter, and engage in a multitude of variations of rogue behaviour.

Forced Bell curves

This technique is cunning and sophisticated in the way it achieves deception. The idea is sometimes articulated along these lines: “Since there is statistical evidence that the performance of individuals in teams and organisations is distributed like a Bell curve, there is no reason (true believers say) why the performance of your particular team should violate this scientific truth. Science itself tells us that performance has a Bell distribution: who are you, poor manager, to challenge this dogma?!”

This idea is plagued by conceptual and pragmatic issues. The conceptual issue is that statistics is about large numbers. It may well be true that in an organisation with 150’000 employees, overall performance is distributed like a Bell curve. But no mathematical law, and not even common sense, mandates that this be the case for any arbitrarily small subset of the overall sample space. In other words, if a manager has a team of six people, it is usually not the case that there has to be a complete idiot, two average individuals, two slightly smarter guys, and a genius. Enforcing Bell curves on arbitrarily small teams is like asking a player to use six loaded dice, one showing only ones, the second only twos, and so on up to the sixth showing only sixes, to make sure that, rolling them one by one, the results will abide by the holy tenets of the uniform distribution. Only a madman would do that. Yet this is still current management practice in a lot of companies nowadays. And people are tricked into believing it is sound, because it is allegedly based on statistics, which is assumed infallible like all branches of mathematics. But nobody should be allowed to claim mathematical truths without grasping the basics. And those who do grasp them and still claim this technique is good are acting maliciously, knowing that they are tricking people.
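The small-sample point can be made concrete. If ratings really were drawn from the company-wide distribution, a short Python check (with illustrative numbers) shows how often a randomly drawn team of six would contain even one bottom-decile performer:

```python
import random

# Probability that a random team of 6 contains at least one
# bottom-decile (worst 10%) performer, if team members were
# drawn at random from the whole company's population.
p_none = 0.9 ** 6                 # every member is outside the bottom decile
p_at_least_one = 1 - p_none

print(round(p_at_least_one, 3))   # 0.469

# Monte Carlo confirmation: sample many 6-person teams.
random.seed(42)
trials = 100_000
hits = sum(
    any(random.random() < 0.1 for _ in range(6))
    for _ in range(trials)
)
print(hits / trials)              # close to the exact value above
```

So a majority of six-person teams would contain no bottom-decile member at all; forcing every small team to nominate a low performer manufactures failures that the statistics never predicted.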

Get Creative Contribution from Consultants, not Employees

An effective way to control individuals is to keep them constantly in a state of dis-empowerment. An instance of this technique is to give all creative work to consultants and treat employees like people who cannot possibly come up with innovative ideas. When people are treated like children, they will behave like children.

Reference Letter/Work Certificate

A very common mind control technique is the practice which requires new employees to provide a reference letter from their previous employer. Knowing that your current employer will one day write a reference letter for you when you leave (whether you want it or not) gives the employer an extraordinary power to limit the range of acceptable behaviour from you. If they ask you to work on a weekend and you would rather celebrate your father’s 80th birthday, they may write that you are not “flexible” or do not understand client priorities. And so on and so forth. In countries like Switzerland, the work certificate is particularly sophisticated because employers do not usually write negative statements. Instead, they omit positive ones. There exists a code, known to HR departments, which allows them to read between the lines in ways that no other reader can understand. For example, if they write an otherwise positive certificate but omit the sentence “It is with regret that we received her resignation”, what they really mean is that the person may indeed be good and talented, but did not match their preferred behavioural cliché (maybe she was too innovative or proactive), and they are not so displeased that she is leaving. On the receiving side, this apparently positive certificate will be interpreted as, “she is smart, but she is going to create headaches for us. Better hire a less brilliant individual who stays quiet in her corner. If we really need skills, we will take a consultant instead.”

Misuse of Lifelong Certifications

Another effective way to keep people quiet and focussed on the “how” questions is to keep them busy learning new technical tricks all the time. Getting certifications is a very good way of acquiring new skills, and it is particularly useful in the current competitive job market. Certifications are usually required of technical specialists, less commonly of managers, especially senior managers.

When certifications are bound to promotions and professional models, there can be at least two effects. The first is more transparency in the promotion criteria, which is clearly positive. The second is that a manipulative manager can keep raising the bar, so that technical specialists are kept in a condition of artificial deficiency. They will always lack a certification or two to be promoted, or even to keep their current band. To make things worse, modern certifications expire, and individuals carry the burden of renewing them all the time. In addition to their day jobs, people are constantly required to refresh their quickly obsolescent technical skills and re-certify. These certifications are demanding and take a lot of time, usually in the evenings or at weekends. When people are constantly busy certifying and proving themselves, they will not have the mindset, or simply the time, to seek new ways to change the world.

Conclusions

In this essay I have introduced the topic of corporate mind control. This is an evolving set of techniques used by a minority to control the majority. These techniques are aimed at limiting the set of acceptable behaviours, neutralising genuinely innovative ideas, disincentivising proactivity while claiming the contrary, and optimising profitability or other objectives through domination. These techniques are used by some corporations in various degrees, ranging from the almost innocuous to the very unethical. There is no such thing as a corporate ranking in this regard. Or, at least, not one that I am aware of. The reason why I have researched this topic is that I argue that these techniques are unethical, because they violate the Kantian principle:

“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.”
— Immanuel Kant, Grounding for the Metaphysics of Morals

What this principle really means is subject to philosophical inquiry (see, for example, http://philosophy.fas.nyu.edu/docs/IO/1881/scanlon.pdf). However, for the sake of my argument, we need not pursue sophisticated philosophical research. We only need to understand that human beings have priorities, intentions, ambitions, values, emotions, potential, limits, a need for social interactions, a sense of justice, self-esteem, etc. Therefore, they are very different from the other “resources” utilised by corporations to produce goods or offer services. Mind control techniques aim at limiting the expression of people’s natural endowment, so that they pursue exclusively what is required of them to achieve goals and accomplish tasks defined by someone else. Extreme uses of these techniques deprive people of their dignity, because they are not treated also as ends, but only as means to an end.

The techniques described in this essay have been developed over time and are utilised by all sorts of organisations, ranging from private enterprises to political parties and regimes. The latter kind of organisation excels in the use of media control, a subject not covered in this essay because it has less relevance to private corporations. It is a separate topic, very well covered, for example, in:

  • Media Control, Second Edition: The Spectacular Achievements of Propaganda (Open Media Series), Sep 3, 2002, by Noam Chomsky

I hope to have stimulated some constructive meditations on this important topic, with this admittedly peculiar essay, and I will be keen on receiving feedback. Thank you for reading.

References

T. Albrighton, “Twenty business metaphors and what they mean”. Retrieved December 21st, 2016, from http://www.abccopywriting.com/2013/03/18/20-business-metaphors-and-what-they-mean

N. Chomsky, 2002, “Media Control, Second Edition: The Spectacular Achievements of Propaganda” (Open Media Series)

I. Kant, “Grounding for the Metaphysics of Morals”
E. Morozov, 2012, “The Net Delusion: The Dark Side of Internet Freedom”
C. Newton, “The Five Communication Styles”. Retrieved December 21st, 2016, from http://www.clairenewton.co.za/my-articles/the-five-communication-styles.html

(n.d). “Flow (psychology)”. Retrieved January 14th, 2017, from https://en.wikipedia.org/wiki/Flow_(psychology)

(n.d). “Gantt chart”. Retrieved January 14th, 2017, from https://en.wikipedia.org/wiki/Gantt_chart

(n.d). “Linguistic Relativity”. Retrieved January 4th, 2017, from https://en.wikipedia.org/wiki/Linguistic_relativity

(n.d). “Metaphor”. Retrieved December 21st, 2016, from https://www.merriam-webster.com/dictionary/metaphor

(n.d). “Metaphor Examples”. Retrieved December 21st, 2016, from http://examples.yourdictionary.com/metaphor-examples.html

(n.d). “Time management”. Retrieved January 14th, 2017, from https://en.wikipedia.org/wiki/Time_management

Type-Parametric Segment Trees In Scala


In my post Functional Solution to Range Minimum Query (RMQ) using Segment Trees in Scala I explained how to efficiently solve the problem of finding the minimum element of an arbitrary sub-vector of integers. In that article I used a Segment Tree in which each node and leaf holds an integer value: the minimum of the sub-vector defined by the corresponding range. In this article, I will explain how to use Segment Trees to solve a more general class of problems. Before introducing the solution, we need to answer a question: what are the functions, other than min: Int+ -> Int, which can be computed efficiently using Segment Trees? Intuitively, some other candidates are max(), sum(), avg(), probability(), etc. What do these functions have in common? The fact that they may be computed over each element in a given range or, more efficiently, from intermediate results, namely their values computed over defined sub-ranges. In my article about RMQ we saw that an efficient solution to the range minimum problem is based on a Segment Tree which stores the minimum of certain sub-ranges of the given integer vector. Using this technique, instead of iterating over the whole given sub-vector of n elements, it is possible to execute only O(log2(n)) computation steps. The same applies to max(), sum(), avg(), probability() computed over a range, and other similar functions, when their computation is optimised using Segment Trees.

In general, the functions we are targeting with Type-Parametric Segment Trees can be defined like this.

Definition
Let v be a Vector of n elements of type X, say, val v: Vector[X], and f a function f: X+ -> V. Function f can be computed efficiently using a Type-Parametric Segment Tree if and only if there exists a function
g: (V x V) -> V, such that
∀n >= 2, ∀i ∈ {0, …, n-2}    f(x0, x1, …, xn-1) = g(f(x0, x1, …, xi), f(xi+1, …, xn-1))                 (1)

Example: min()
Function min() clearly satisfies the definition above because the minimum of a set of elements can also be determined as the minimum of the minimum values computed over any two partitions of the same set.

Example: avg()
Let:
1) n ∈ Integer, such that n > 0;
2) val a: Vector[Integer], with a.length == n;
3) i, j ∈ Integer such that i, j >= 1 and i+j == n
4) avgn() the average computed over all n elements of vector a
5) avgi() the average computed over the first i elements of vector a
6) avgj() the average computed over the last j elements of vector a

We define our function g as follows:
7) g(avgi(), avgj()) = (i/n)(avgi()) + (j/n)(avgj())

Optional proof (only for the interested reader)
Proposition 1: avgn() = (i/n)avgi()+(j/n)avgj()
By definition of avg we have:
8) avgn() = (1/n)∑xn
Decomposing the sum:
9) (1/n)∑xn = (1/n)(∑xi+∑xj)
Using the distributive property:
10) (1/n)(∑xi+∑xj) = (1/n)∑xi + (1/n) ∑xj
But 1 = (i/i), therefore:
11) (1/n)∑xi = (i/n) (1/i) ∑xi
By definition of avg we have:
12) (1/i) ∑xi = avgi()
Combining 11) and 12) we now have:
13) (1/n)∑xi = (i/n)avgi()
In the same exact way it can easily be proved that:
14) (1/n)∑xj = (j/n)avgj()
Combining 10), 13) and 14) we have:
15) avgn() = (i/n)avgi()+(j/n)avgj()
This ends the proof.
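The decomposition in 7) can also be checked numerically; here is a tiny illustrative check in Python (the values are chosen arbitrarily):

```python
# Numeric sanity check of avg_n == (i/n)*avg_i + (j/n)*avg_j
a = [10, 20, 30, 40, 11, 22]
i = 2                      # first split: a[0:2]
j = len(a) - i             # second split: a[2:6]
n = i + j

avg_i = sum(a[:i]) / i     # average of the first i elements
avg_j = sum(a[i:]) / j     # average of the last j elements
avg_n = sum(a) / n         # average of the whole vector

recombined = (i / n) * avg_i + (j / n) * avg_j
print(abs(recombined - avg_n) < 1e-12)   # True
```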

We see from definition 7) that the avg of n elements can be expressed in terms of its value on any two splits of the vector. To make the concept clearer, we will now see an example in Scala. First, we will see how the SegmentTree data structure can be redesigned in order to become type-parametric. For convenience’s sake, I will use images in this article, but the interested reader may find the source code on GitHub:
Type-parametric Segment Tree
Example with function AVG

The first thing to notice is the creation of a dedicated class Range. A Range class is already available in Scala, but its intended use is different from what we need for our SegmentTree. The one below is a very lightweight implementation which allows us to perform useful operations on ranges, such as intersection.

[Image: the Range class]
This is the implementation of the Segment Tree. The value of each node or leaf of a segment tree is parametric (type parameter V).

It is worth noticing that the function f: (V, V) => V, which computes the value of a node based on the partial results contained in the left and right sub-trees, is also parametric. As we will see, this is required to achieve the desired degree of flexibility.
[Image: the SegmentTree class]
The build method of the singleton object SegmentTree now takes two functions. Function xToV: (X) => V takes a value from the original vector and builds the value of its corresponding leaf in the associated SegmentTree. Function fun: (V, V) => V computes the value of a node based on the partial results contained in its left and right sub-trees.
[Image: the SegmentTree companion object]
In order to utilise the parametric segment tree to efficiently compute the average AVG over any given sub-vector of a given vector of integers, the following definitions are required:

  1. A class Value which contains the average over a given sub-vector, and the length of that sub-vector. We have seen above that this length is used by function fun: (V, V) => V.
  2. A method intToValue which, given an integer, builds a Value whose avg is the integer converted to Float, and whose length is equal to one.
  3. A method fun: (V, V) => V, as in the proposition above.

[Image: the STAverage definitions]
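The article’s reference implementation is the Scala code linked above; as a language-neutral illustration of the same design, here is a minimal Python sketch (the names are mine, not those of the repository). The tree is parametric in both the leaf conversion x_to_v and the combiner fun:

```python
from typing import Callable, Generic, List, Optional, TypeVar

X = TypeVar("X")
V = TypeVar("V")

class SegmentTree(Generic[X, V]):
    """Type-parametric segment tree: leaves are built by x_to_v,
    internal nodes by the (commutative) combiner fun."""

    def __init__(self, xs: List[X],
                 x_to_v: Callable[[X], V],
                 fun: Callable[[V, V], V]):
        self.n = len(xs)
        self.fun = fun
        self.tree: List[Optional[V]] = [None] * (2 * self.n)
        for k, x in enumerate(xs):
            self.tree[self.n + k] = x_to_v(x)          # leaves
        for k in range(self.n - 1, 0, -1):             # internal nodes
            self.tree[k] = fun(self.tree[2 * k], self.tree[2 * k + 1])

    def query(self, lo: int, hi: int) -> V:
        """Combine the values over xs[lo..hi] (inclusive) in O(log n)."""
        res: Optional[V] = None
        lo += self.n
        hi += self.n + 1
        while lo < hi:
            if lo & 1:
                res = self.tree[lo] if res is None else self.fun(res, self.tree[lo])
                lo += 1
            if hi & 1:
                hi -= 1
                res = self.tree[hi] if res is None else self.fun(res, self.tree[hi])
            lo //= 2
            hi //= 2
        return res

# Value for avg: a (mean, length) pair; fun is the weighted mean of 7).
def avg_fun(a, b):
    (m1, n1), (m2, n2) = a, b
    n = n1 + n2
    return ((n1 / n) * m1 + (n2 / n) * m2, n)

v = [10, 20, 30, 40, 11, 22, 33, 44, 15, 5]
st = SegmentTree(v, lambda x: (float(x), 1), avg_fun)
for lo, hi in [(0, 5), (1, 2), (8, 9), (0, 9), (4, 6)]:
    print(f"{st.query(lo, hi)[0]:.2f}")   # 22.17, 25.00, 10.00, 23.00, 22.00
```

Running the queries from the article’s test input reproduces its expected output. Note that this iterative layout relies on the combiner being commutative, which holds for avg, min, max and sum.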

If we now use the test application ATAvg with the following input:

10 5
10 20 30 40 11 22 33 44 15 5
0 5
1 2
8 9
0 9
4 6

We obtain the following output:

22.17
25.00
10.00
23.00
22.00

It is easy to verify that it is correct.

Remark
The first row of the input contains the size of the vector, and the number of queries. The second row is the integer vector. Each subsequent row is a query (start and end index of the sub-vector). The output contains the avg value computed over each sub-vector in the query list.

This ends the explanation of the parametric segment tree. The interested reader may find the mathematics of segment trees in my previous article:
Functional Solution to Range Minimum Query (RMQ) using Segment Trees in Scala

If you find other interesting uses of this data structure to solve computation problems, I will be happy to receive your feedback. Thank you for the read, and Up Scala!!!

The Dark Side of Digitization

[https://commons.wikimedia.org/wiki/File:Eclipse20070304-2.JPG]

If we were to base our judgement of digitization on certain accounts, the future would look bright. The long-promised efficiency which is expected from technology will eventually be achieved. No more tedious manual work will be necessary. People will be able to “focus on more interesting problems” and, as the story goes, the error rate of diagnoses, fraud prevention/detection and defense systems will soon become negligible. To those who are inclined to believe in fairy tales, this perfect picture of a super-efficient, technology-driven future society will look great, won’t it? Yet, I think, there is a much-overlooked dark side to the coin: the long-term social effects of replacing human work with automatons. Generally, technical solutions are desirable when they tackle problems in a way which is beneficial to the highest number of people. Is digitization one such case? It depends on how it is approached. I contend that the current hype may in some cases betray a number of dangerous misconceptions:

Misconception 1.      Reducing manual work is always beneficial

Work is, among other things, vocation, occupation and profession. At work people socialize, build relations, nurture friendships. Right or wrong, western culture has elevated professional activity to the rank of a personality-defining aspect. When people work less (or not at all), something else will have to fill this gap. Personally, I see this as beneficial: I have always regarded with suspicion the atomistic myth of professional success at the expense of a miserable social life. But changing this will require time and, if not properly managed, can lead to unrest. Are we prepared to manage the transition? Are our omniscient Artificial Intelligence robots giving us any insight into how to approach this problem? I don’t think so. And then, how will the future consumeristic economy be fuelled with an increasing number of unemployed people? Who will buy the goods which will be produced so efficiently? Who will be able to pay for these services? A new society needs to be shaped before the much-wanted digitization can be made sense of. The rest is only a lot of talk, solutionism and technology-centrism: the myth that technology is the answer to any problem.

Misconception 2.      Artificial Intelligence will replace Human Intelligence to an ever increasing extent

Artificial Intelligence will enable new computing scenarios which will complement human intellect in many ways. This is particularly true in the context of cognitive intelligence, and it is indeed an exciting scenario. But mentation is a lot more than cognition. Only someone with a very limited understanding of what a human being is can think that a computable function can replace the wealth of psychological traits and social behaviours associated with the spontaneous action of people. The idea that mentation is a computable function has further socially destabilising consequences. Let us assume, for the sake of argument, that the Artificial Intelligence fanatics are right, and human mentation is a computable function. Functions are deterministic by design: given an input, they return an output. But most social contracts have been built, over thousands of years, on the assumption that there is such a thing as personal accountability for our actions. And this can only exist if one has free will. How can one make a free decision if mentation is nothing more than a computable function? A serial killer would be just someone whose mentation implements the wrong computable function. How could he be accountable for producing outputs which could not have been otherwise? If we are ready to accept the weird (yet, admittedly, theoretically possible) idea that mentation is a computable function, then we must also be prepared to renounce the idea of personal accountability, and delete the concept of free will from our dictionaries.

Misconception 3.      The efficiency of digitization is universally beneficial

This efficiency is mostly beneficial to a few people, who can use digitization to optimise their businesses and increase profitability. But these people are only a tiny minority of those affected by it. The vast majority will be affected for the worse. Some say that more interesting jobs will be created. But this is a mystification. Let’s get real: in the past, the most relevant corporations, like General Motors, employed hundreds of thousands of people. Nowadays, corporations like Google and Apple make more money and employ far fewer people. And not everyone is “up to” the level required to be employed by such companies. What this will amount to, I need not tell you. You do the math.

I advance that these misconceptions, and possibly more, are plaguing the current over-hyped idea of digitization and, as such, will prevent ordinary people from reaping benefits they can make sense of. One may object that it is easy to criticise ideas, but it is more difficult to devise alternatives. I accept the objection. My answer to this is articulated in my essay “Post-Anthropomorphic Sensorial Computing” which the interested reader will find at this address:

https://alaraph.com/2015/03/10/post-anthropomorphic-sensorial-computing/

Should people then resist digitization altogether? Simply put, no. But oversimplifications may kill the idea and make it difficult to get digitization right. Good digitization is about empowering individuals, not getting rid of them. Claiming that a computer can “outperform” a lawyer or a doctor is sensationalist. Ask its bold defenders what they would do when seriously in trouble or sick: would they really put their lives in the cold hands of a robot? A more balanced way to put it is to contend that professionals equipped with state-of-the-art fast-data advisors would do a better job. In the end, however greedy a single human being may be, Homo sapiens is, overall, an ethical animal. Should we give up all this in favour of a merciless computer program which will always pursue the interests of its owners? The decision is yours.

Lessons Learned from Medical Testing

[Image: Wikimedia, 2015]

Application Testing is about determining whether or not a given piece of software “behaves” as expected by its sponsors and end users. These stakeholders have a legitimate right to their expectations, because the system should have been engineered with the objective of satisfying their requirements. The diagram below represents the testing process as a black box:

[Image: the testing process as a black box]

In consideration of the above, it is easy to see that Application Testing is a particular instance of a more general mathematical problem: Hypothesis Testing. As such, it can be tuned to maximise its reliability either in terms of false positives or in terms of false negatives, but not both.

  • False Negative: the error one incurs when a buggy application wrongly passes a test.
  • False Positive: the error one incurs when a correct application wrongly fails a test.

This is a very well-known problem, and it finds application in other domains, including statistics and medicine.

[Image: adapted from Wikimedia, 2015]

Let us consider, for instance, the case of medical tests. How did the scientific community manage to strike the right balance between sensitivity and specificity? In order to answer this question, one has to answer a preliminary one: what is worse, wrongly detecting an illness or failing to detect it? Clearly, the latter is worse. Failing to detect contagion can result in the rapid spread of infectious diseases, loss of lives, and high healthcare costs. For this reason, medical tests are designed to minimise false negatives. What is the downside? It is the cost of managing false positives. Whenever a critical infection is detected, medical tests are repeated in order to minimise the chance of an error.
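Why does repeating a positive test work? A quick Bayesian back-of-the-envelope calculation in Python (the prevalence, sensitivity and specificity figures are illustrative, not taken from any specific test) shows how a second positive result drives down the false-positive risk:

```python
def ppv(prior: float, sensitivity: float, specificity: float) -> float:
    """Positive predictive value: P(ill | test positive), by Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

prevalence = 0.01          # assume 1% of the population is ill
sens, spec = 0.99, 0.95    # illustrative test characteristics

p1 = ppv(prevalence, sens, spec)   # after one positive result
p2 = ppv(p1, sens, spec)           # after a second, independent positive

print(round(p1, 3), round(p2, 3))  # 0.167 0.798
```

With these (assumed) numbers, a single positive result is still wrong five times out of six, which is exactly why the protocol of repeating critical tests keeps the cost of false positives under control.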

Now, let us come back to our domain, Application Testing. What is the worse scenario, false positives or false negatives? If an application fails a test but is actually fine, costs originate because the application needs to be investigated by software development and, sometimes, business users. But that is the end of it. Conversely, if an application has a severe problem which remains undetected until deployment to production, that is an entirely different story, and can result in a full range of critical consequences, including compliance breaches, financial loss, reputational damage, and so on and so forth. So, in the financial sector, just like in medicine, testing must be optimised to minimise the chance of false negatives.

A question naturally arises: how do we fare in this regard? Luckily, I believe, in the financial sector we are doing alright. Application Testing in this sector is clearly aligned with this risk management strategy. However, there is still room for improvement. The fact is that, as we have seen above, false positives also have a cost. Do we have to be prepared to pay it in its entirety, or can we do something to reduce it? To continue the parallel with medicine, I shall argue that false positives, like cholesterol, come in two flavours: the good ones and the bad ones. The good ones originate from unavoidable causes, like technical glitches, human error in test case design or execution, or errors further up the software life cycle, like wrong requirements or a wrong understanding of user requirements. These errors are part of the game, and can only be mitigated to an extent.

But there is another category of false positives which I think is bad, and can be reduced: those generated by lack of domain expertise. Techies will base their judgement on the evidence collected during test execution. But this evidence is oftentimes not self-explanatory: interpretation is required. And this interpretation can only go as far as the business domain experience of the tester. Enterprise-wide application landscapes implement very sophisticated workflows and support complex business scenarios. The behaviour of these applications changes depending on user rights, user role, client type, and many more criteria. What is actually the intended behaviour of an application can easily be mistaken for an error.

So, what is my pragmatic recipe to cut down bad cholesterol in application testing, that is, to reduce the cost of false positives? I believe the answer is closely related to the evolution of the testing profession. Nowadays, a good test engineer is someone with both technical skills and business domain knowledge. This unique blend of skills is certainly precious, and makes application testing a science and an art at the same time. But new trends are changing this. Automation is increasingly being pursued because of cost pressure and the need for increased agility and reliability. But automation comes with skills challenges of its own, to the effect that, as is generally recognised, more software-engineering-savvy personnel will be required in testing. And this is good. But, at the same time, once the amount of manual activity is reduced, a new opportunity will exist to inject more business-savvy personnel into testing. The transition, the way I see it, can be represented like this:

[Image: the transition of the testing profession]

To sum up, more automation will pursue optimisation in terms of minimisation of false negatives, whereas business domain expertise will reduce the costs of false positives. This evolution of the testing profession in specialist roles is what is required to apply the lessons learned from medical testing to application testing in the financial sector. I will be happy to receive your feedback on this admittedly unconventional view on the future of the testing profession.

References

Wikimedia, 2015, https://commons.wikimedia.org/wiki/File:Infant_Body_Composition_Assessment.jpg, accessed on 24.07.2015

Morelli M, 2013, https://alaraph.com/2013/09/26/the-not-so-simple-world-of-user-requirements/, accessed on 07.08.2015

Test Automation and the Data Bottleneck

[Image: Wikimedia]

Introduction

The topic of automation has been revamped in the financial industry following the recent hype on industrialization of IT and lean banking. The rationale of the idea is that there are tasks which are better taken care of by automatons than by humans. Tasks of this kind are composed of actions which are executed iteratively, and the quality of their outcomes can be negatively affected by even the smallest operational mistake. An interesting proportion of tests belongs to this category. Not all of them, of course, because human intellect can still excel in more creative testing endeavours, exploratory testing being just one example. But other kinds of tests, like regression tests, make very good candidates for automation. Practitioners and managers know all too well that a well-rounded battery of regression tests can indeed prevent defects from being introduced in a new release, with demonstrable positive effects on quality. But they also know that manual regression testing is inefficient, error-prone, and expensive. Therefore, awareness is rising of the necessity to pursue an increased degree of test automation. In this essay I will argue that, before thinking about tooling, solutions, and blueprints, there is a key success factor that must be addressed: avoidance of the data bottleneck. I will first explain what it is, and why it can jeopardize even the most promising automation exercise. After that, I will introduce an architecture which can tackle this issue, and I will show that this approach brings additional advantages along the way. I will now start the exposition by introducing the data bottleneck.

The Data Bottleneck
As an abstraction exercise, we can see testing as a finite state automaton: we start from a state {s1} and, after executing a test case TC1, we leave the system in state {s2}. A test case is a transition in our conceptual finite state diagram.

[Image: a test case as a state transition]

In order to be able to execute a test case, the initial state {s1} must satisfy some pre-conditions. When the test case is executed, the ending state {s2} may or may not satisfy the post-conditions. In the former case we say that the test case has succeeded; in the latter, that it has failed. Now, what does this have to do with automation? An example will clarify it. Let us consider the case of a credit request submitted by a client of a certain kind (e.g. a private client, female). The pre-conditions of the test case require that no open credit request exist for a given client when a new request is submitted. From the diagram above we see that there is no transition between states {s2} and {s1}. What does this mean? It means that business workflows are not engineered to be reversible. If the test case creates a credit request and fails, there is no way to execute it again, because no new business case can be created in the application for this client until the open request is cancelled. Now, there are cases in which the application can actually execute actions which recover a previous state. But in the majority of cases, this is not possible. In banking, logical data deletion is used instead of physical deletion. Actions are saved in history tables, recording a timestamp and the identity of the user, for future reference by auditors. In cases like this, the initial state of a test case cannot be automatically recovered. Sometimes, not even manually. What one would need is a full database recovery to an initial state from which all test cases can be re-executed. This is the only way; other approaches to data recovery are not viable because of the way applications are designed and because of applicable legislation.
Above we have seen that data recovery is a key pre-condition for automation. Now we will see why legacy environments are oftentimes an impediment to tackling this issue efficiently. Oftentimes, the business data of a financial institution is stored in a mainframe database. And when it is not in a mainframe, the odds are it is in an enterprise-class database, such as Oracle. What do an Oracle database on Unix/Linux and DB2 on a mainframe have in common? Technology-wise, very little. Cost-wise, a lot: neither comes cheap. The practical implication is that only a few database environments are made available for testing, and they must be shared among testing teams. This makes automatic database recovery procedures impracticable, because of the synchronization and coordination required. What happens in reality is that test engineers have to carefully prepare their test data, hoping that no interference from their colleagues will affect their test plan. And what is worse, after they are finished with their tests, the data is no longer in a condition suitable for re-execution of the same battery of test cases. Another round of manual data preparation is required.
One may wonder if it is indeed impossible to reduce the degree of manual activity involved. The point is that so long as access to databases is mediated by applications, and applications obey the business workflow rules (and applicable legislation), recoverability of data is not an option. Are we indeed stuck? Isn’t it possible to achieve automatic data recovery without breaking the secure data access architecture? My contention is that there is indeed a viable solution to this problem. The solution is outlined in the following section.

Proposed Solution: On-Demand Synthetic Test Environments
Automated tests take place in synthetic environments, that is, environments where no client data is available in the clear. Therefore, the focus of this solution will be on these environments, which are the relevant ones when it comes to issues of efficiency, cost-optimisation, regressions and, ultimately, quality.
The safest way to recover a database to a desired consistent state is using snapshots. A full snapshot of a database in a consistent state is taken and this “golden image” is kept as the desired initial state of a battery of automated tests. Using the finite state representation, we can describe this concept in the following way:
img-03a

The diagram shows that any time a battery of automated test cases terminates, it can be executed again and again, just by recovering the initial desired state. To be more precise, the recovery procedure can take place not only at the end of the test battery, but at any desired intermediate state. This is particularly useful when a test case fails, and the battery must be re-executed after a fix is released. The diagram can be amended like this:

img-04b
So, we have solved the problem of recovering the database to a desired consistent state, which enables automatic (and manual) re-execution of test cases. Is this all? What if other test engineers are also working on the same database environment? What would be the effect on their test case executions if someone else, inadvertently, swept their data away through the execution of an automatic database recovery procedure? It would be catastrophic, bringing about major disruption. How can this problem be fixed? What is needed is a kind of “sandboxing”: environments should be allocated so that only authorised personnel can run their test cases against the database, and no one else. Only the owner of such an environment should be in a position to order the execution of an automatic database recovery procedure. How can this be achieved? An effective way is to offer on-demand test environments which can be allocated temporarily to a requestor. This sounds very much like a private cloud. Below are the key attributes of an ideal solution to the on-demand test environment problem:

  • Test environments shall be self-contained.
    Applications, data and interfaces shall be deployed as a seamlessly working unit
  • Allocation of test environments shall be done using a standard IT request workflow.
    For example, opening a ticket in ServiceNow or what have you
  • Test environments are allocated for a limited period of time.
    After the allocation expires, servers are de-allocated. After a configurable interval, data is destroyed.
  • During the whole duration of the environment reservation, an automatic database recovery procedure shall be offered.
    This procedure may be executed by IT support whenever a request is submitted using a standard ticket. An internal Operational Level Agreement shall be defined. For example, full-database recovery is executed within a business day of the request.
  • The TCO of the solution shall be predictable and flat with respect to the number of environments allocated.
    Traditional virtual environments are allocated indefinitely and can easily become expensive IT zombies. Zombie environments still consume licenses, storage and other computing resources. Conversely, the solution proposed prevents these zombie environments from originating in the first place.
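The reservation-plus-recovery lifecycle listed above can be sketched as a simple model. This is purely illustrative (the types and names are hypothetical, not a real provisioning API); it captures the two invariants that matter: recovery is available only during the reservation, and only to the owner:

```scala
import java.time.Instant

// Illustrative model of an on-demand, sand-boxed test environment with a
// golden snapshot that can be restored on request during the reservation.
case class Snapshot(id: String) // the "golden image" of a consistent DB state

case class Reservation(
    owner: String,     // only the owner may order a recovery
    golden: Snapshot,  // initial desired state of the test battery
    expiresAt: Instant // after expiry the environment is de-allocated
) {
  def active(now: Instant): Boolean = now.isBefore(expiresAt)

  // A recovery request succeeds only for the owner of an active reservation:
  // this is the "sandboxing" property discussed above.
  def recover(requestor: String, now: Instant): Either[String, Snapshot] =
    if (!active(now)) Left("reservation expired: environment de-allocated")
    else if (requestor != owner) Left("not authorised: not the owner")
    else Right(golden)
}
```

In a real setup the `recover` call would correspond to the standard IT ticket mentioned above, and the expiry to the automatic de-allocation that prevents zombie environments.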

The logical representation of the proposed solution infrastructure is the following.

img-05b

To sum up, these are the advantages of the proposed approach:

  • It enables full test automation, making it truly possible to re-execute batteries of test cases using an automatic database recovery procedure in sand-boxed database instances.
  • It gives 100% control of TCO and keeps testing spend on IT within defined limits.
  • It makes it possible to attribute testing spend to projects with high precision.
  • It increases the overall quality of testing results.
  • It eliminates cases of interference among independent test runs.
  • It allows test teams to be involved earlier in the software life cycle.
  • It saves infrastructure costs because computing clouds allow for transparent workload distribution, with the effect of running more (virtual) servers on the same physical infrastructure.

Conclusions

In this essay I have articulated the data bottleneck problem relating to test automation. First, I have given a general introduction to the topic. Second, I have explained why this problem may put test automation initiatives in jeopardy. Last, I have proposed a solution based on the concept of on-demand test automation environments and I have shown why I believe this is the way forward. The interested reader can contact me to share feedback or to delve deeper into the discussion.


Post-Anthropomorphic Sensorial Computing

I sense, therefore I am

sensorial-2

 Wikimedia[1]

 Evolution of Computing

For decades, the evolution of computing has been framed in terms of increases in processor performance, random access memory, and storage capacity. Millions of instructions per second has been a primary criterion for comparison. Moore’s Law successfully predicted the doubling of this performance indicator every eighteen months and, when processor vendors approached the physical limits of this growth, multi-core architectures were introduced, so that what could no longer be achieved with vertical scalability could theoretically be obtained through horizontal scalability. The reason for this obsession with computing power was that many interesting problems which admit of an algorithmic solution could not be efficiently treated with then-current technology. Nowadays, we have reached a point where further advances can only be pursued if software architecture allows for a high degree of parallelism. Further increases in the number of cores per CPU will not automatically translate into improved performance, unless computer programs are designed to run intensive computations in separate threads which can be executed on different cores. But even software architecture has its own limits. Such limits are imposed by mathematics: parallelisation cannot be pushed beyond defined thresholds[2]. The good news is that with current technology, we can probably already solve most of what mathematical complexity allows. Computer programs can interpret the human voice more than decently. Human face recognition is a reality. 3D virtual reality is there to see. And the list could go on indefinitely. So what is the next evolutionary step on the horizon? There has been a lot of talk around mobile, big data, the internet of things, wearables, and social. Is this all?
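The mathematical threshold alluded to above can be stated precisely. If p is the fraction of a program that can be parallelised, Amdahl’s Law[2] bounds the speedup S(n) achievable on n cores:

```latex
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}, \qquad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p}
```

Even with p = 0.95, the speedup can never exceed 1/(1 − 0.95) = 20, no matter how many cores are added.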

Emergence of a new Trend?

Social networks would not be so successful if mobile devices were not a reality. Watch people on a commuter train any day: they are hooked on their social networks, accessed from a smartphone. If online access were limited to desktop internet surfing in the evening, interaction would drop dramatically. Big data became a hot topic with the explosion of computing devices feeding databases and analytical systems with an unprecedented amount of data, structured and unstructured. If social and IoT were not that ubiquitous, big data would probably not be the same hot topic that we all know. The point is that these trends are closely intertwined, and dependencies exist among them which can reinforce or inhibit certain evolutionary paths. The central contention of this essay is that IoT, mobile and wearables are giving rise to a new shift, which cannot be reduced to the joint use of these trends alone. I will call this new trend “Post-Anthropomorphic Sensorial Computing” (PASC).

Post-Anthropomorphic Sensorial Computing

First attempts to augment computing machinery with sensorial capabilities were inspired by the human body. Computers were made to “see”, “hear”, and “speak”. Then came robots, which can also move in 3D. Some even believed (or believe) that computers could one day be made to think. This of course reflects a very poor idea of mentation, because everyone who has had a normal interaction with other human beings knows all too well that rational thinking is only a small, yet critical, part of the psychological processes of a healthy human being. I contend that the debate whether or not machines can be made to think like a human being is moot. The interesting point is that machines are increasingly endowed with a number of sensors which allow them to read the environment in ways which complement human senses. This is made possible by IoT, wearables, and mobile, but not only that. Think of when, after using a maps application on a tablet, one tries using the same application on a laptop, only to find out that the laptop does not know its current position, which has to be patiently typed in the old way. Tablets, once seen as the poor replacement for more powerful computing devices, can now do things which those very devices cannot do, no matter how powerful their processors are. These wonders are made possible by sensors. A GPS sensor only makes sense on a mobile device. A camera can very well be put on a desktop/laptop, but all it will shoot is the bodily features of the person in front of it. Still something, but nothing compared to what I can picture or film with my iPhone. Try using a password management application on a tablet. The master password can be as user-friendly as a fingerprint: touch the fingerprint sensor and you can access the secure vault without the pain of entering improbable combinations of characters. Thank God. I would not like to type passwords again and again when I am in my seventies. It would make for an insurmountable barrier.
Now try doing the same on a desktop. It feels like stepping one hundred years back in a time machine. It is clear that it is sensors which will create added value in ubiquitous computing. But now that the utopian dream of replicating humans with automatons has shown its pathetic metaphysical outlook, it is time to think about Post-Anthropomorphic Sensorial Computing. This new wave of computing artefacts will be enriched with thermometers, infra-red cameras, multi-axis motion sensors, radioactivity sensors, earthquake detection sensors, accelerometers, and so on and so forth. They will complement human senses in ways that would not have been possible if we had kept focusing on artificial intelligence alone. Imagine having a wearable ultrasonic sound location system, the equivalent of a pocketable bat, which can help the blind move safely in an unknown environment. Imagine having radioactivity sensors and pH sensors in the oceans, in order to measure the effects of climate change and natural disasters in real time. One day, closer than one may think, we will be able to “feel” that a loved relative or friend is having a hard time, and prevent the worst from happening thousands of kilometres away, maybe only with a good old reassuring phone call. Because PASC is not about creating fancy gadgets; it is about extending human faculties smoothly, with a human-centric viewpoint. For PASC to succeed, existing disciplines will have to advance further. For example, the emergence of PASC will pose new challenges for big data. IoT will become a lot more than a community of refrigerators with an Ethernet card. The boundary between IoT and mobile will blur. PASC is about renouncing the mad scientist dream of replicating mentation on a silicon chip, and finding ways to expand and complement humankind‘s sensorial power with a new generation of ubiquitous, energy-efficient, sensorially enriched computing artefacts.
The challenge of PASC will be to find ways to translate this wealth of sensorial data into useful information for a human being. In their current evolutionary stage, humans are still endowed with five senses, plus other interesting abilities, like balance. Extending the reach of their biological sensorial endowment will necessarily require the ability to make this extension relevant to them, and usable. Let us consider the interesting experiment of Google Glass. It has not been very successful so far, but I think this is mostly because it came too early. Secondly, it fell victim to Google’s religious allegiance to the myth of technology. Adding tweets sent by somebody located nearby to the visual perception of a human being is not extending her sensorial capability in the PASC sense. It is only creating a scarcely relevant, probably useless, distraction to vision proper. It is about adding noise, not signal. PASC, instead, is about adding signal. Let us imagine risk evaluation in the PASC age. The risk of buying shares will not only be assessed against the market, the portfolio of the buyer, and other classical criteria. It will also be assessed based on the psychological condition of the buyer. With wearables like the Apple Watch, or a future evolution of it, it will be possible to determine whether a purchase is made lucidly, or as a surrogate compensation for a loss or a feeling of unfulfilment or, maybe worse, fuelled by sudden euphoria.

As a recap, the diagram below sketches some relations between PASC and other existing trends, trying to highlight the dependencies and provide grounds for defending the worth of labelling this new computing trend. I am convinced that, independently of the success of this PASC concept, useful extension of the human sensorial endowment, rather than imitation of human mental processes, is the way forward. There is a bright future ahead, made of achievable advances, which can make the life of people, healthy or otherwise, a lot more fulfilling.

synoptic-pasc

Footnotes

[1] https://commons.wikimedia.org/wiki/File:Sobrecarga_sensorial_-_8_%2810862394325%29.jpg

[2] See, for example, Amdahl’s Law: https://en.wikipedia.org/wiki/Amdahl%27s_law

Functional Solution to Range Minimum Query (RMQ) using Segment Trees in Scala

Segment TreeRMQ

Given an array of N integers, computing the minimum over the whole array is an easy operation requiring N steps. For example, one can take the first value as a tentative minimum, and iterate through the array comparing the tentative minimum with the current value. Any time the current value is smaller than the tentative minimum, it becomes the new tentative minimum, until the end of the array.
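This linear scan can be written in a few lines (a minimal sketch; I use Scala’s Vector here, consistent with the functional style adopted later in this article):

```scala
// Linear-scan minimum: take the first value as the tentative minimum and
// compare it with each subsequent value, in N steps overall.
def linearMin(data: Vector[Int]): Int =
  data.tail.foldLeft(data.head) { (tentative, current) =>
    if (current < tentative) current else tentative
  }
```

For example, `linearMin(Vector(2, 5, 3, 0, 8, 9))` returns 0.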
Let us now consider the case where it is not the minimum of the whole array that matters, but the minimum of an arbitrary sub-array of the original one. By sub-array I mean an array formed by M consecutive elements taken from the original array, with 1 <= M <= N.
The brute-force approach would be to compute the minimum element for any given range on the fly. The Range Minimum Query (RMQ) problem is about finding a more efficient solution. How can one do better than O(N)? One obvious way would be to pre-compute the minimum of all sub-arrays, sacrificing space for performance. But, wait a moment, how many possible sub-arrays can be defined given an array of size N?
Given an array of size N there are N sub-arrays of size 1, N-1 sub-arrays of size 2, N-2 sub-arrays of size 3 (…), 2 sub-arrays of size N-1 and 1 sub-array of size N.
The total number of sub-arrays is therefore:
N + (N-1) + (N-2) + … + 2 + 1 = N(N+1)/2
which is bad, because it means space complexity O(N^2). Is there a smarter way to tackle the problem? The answer is, of course, positive. Actually, there are a number of ways documented in the literature. This article will illustrate the technique known as “Segment Trees”. The idea is that it is not necessary to pre-compute the minima of all the possible sub-arrays, but only of some of them; the others can then be computed quickly starting from the available partial results. Let us consider an example to clarify the idea. Before I go on, I need to do some explaining regarding the choice of Functional Programming (FP) data structures. Arrays are an imperative data structure which is not suitable for FP proper. The point is that FP is about immutable values allowing for side-effect-free computer programs. Arrays, instead, mimic the way a computer memory is structured, i.e. as a sequence of modifiable cells. The functional equivalent of arrays, in Scala, is the Vector. From now on, I will abide by the FP rules and use Vectors. Let us now get back to our example. Given the vector:

val data = Vector(2,5,3,0,8,9,5,3,7,5,9,3)

if we knew already:

val min1 = Vector(2,5,3,0,8,9).min // min[0:5]
val min2 = Vector(5,3,7,5,9,3).min // min[6:11]

we would not need to scan the sub-arrays, because we could simply compute:

val min = math.min(min1, min2)

Segment Trees are balanced binary trees composed of nodes consisting of a range and a value, which is the pre-computed minimum of the sub-array corresponding to that range. To see what this tree looks like, let us consider the following example:

val data = Vector(4,5,2,8,9,0,1,2,5,1,8,6,3) // N=13

Our Segment Tree would look like this:

Example Segment Tree

The first thing to notice is that there are 25 ranges (nodes or leaves) in this tree, which is a lot less than 13(13+1)/2 = 91.
With higher values for N the difference would be even more noticeable. This number can be approximated in excess with this formula:

2^0 + 2^1 + 2^2 + … + 2^⌈log₂ N⌉

In our case it would give:

2^0 + 2^1 + 2^2 + 2^3 + 2^4 = 1 + 2 + 4 + 8 + 16 = 31

The approximation is in excess because the formula assumes that each node forks into two nodes, which is not always the case. To see how much better this idea is than the brute-force approach, let us consider a Vector with N=1000 elements. We have seen above that all the possible sub-ranges number:

1000*(1000+1)/2 = 500'500

However, our Segment Tree will contain no more than:

1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 + 256 + 512 + 1024 = 2’047

Remarkable achievement, isn’t it? We can now see how to query a Segment Tree in order to get the minimum value of any sub-range of the original vector. The query algorithm is very simple when expressed recursively, which is a perfect fit for a functional language like Scala. The idea is this: if the query range is the same as the root range, get the minimum value from there. Otherwise, query the sub-trees recursively, each with the intersection of the query range and the respective left and right sub-ranges. The Scala code will speak for itself. Try it out and have fun! I will copy it here for convenience’s sake, but it is also available at this address:

https://github.com/maumorelli/alaraph/blob/master/hackerrank/src/com/alaraph/hackerrank/rmq/Solution.scala

RMQ-002
RMQ-003
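As a readable complement to the code above, here is a minimal sketch of the build-and-query idea just described (my own illustration; the names and structure are not necessarily those of the linked solution):

```scala
// A minimal, immutable Segment Tree for Range Minimum Query.
// Node ranges are inclusive [lo, hi].
sealed trait SegTree { def lo: Int; def hi: Int; def min: Int }
case class Leaf(lo: Int, hi: Int, min: Int) extends SegTree
case class Branch(lo: Int, hi: Int, min: Int, left: SegTree, right: SegTree) extends SegTree

object RMQ {
  // Build the tree: split the range in half and recurse; each node stores
  // the pre-computed minimum of its range.
  def build(data: Vector[Int], lo: Int, hi: Int): SegTree =
    if (lo == hi) Leaf(lo, hi, data(lo))
    else {
      val mid = (lo + hi) / 2
      val l = build(data, lo, mid)
      val r = build(data, mid + 1, hi)
      Branch(lo, hi, math.min(l.min, r.min), l, r)
    }

  def build(data: Vector[Int]): SegTree = build(data, 0, data.size - 1)

  // Query the minimum over [qlo, qhi]: if the node's range is fully covered
  // by the query range, use the pre-computed minimum; otherwise recurse into
  // the children that intersect the query range.
  def query(t: SegTree, qlo: Int, qhi: Int): Int = t match {
    case _ if qlo <= t.lo && t.hi <= qhi => t.min
    case Branch(_, _, _, l, r) =>
      Seq(l, r)
        .filter(c => c.lo <= qhi && qlo <= c.hi) // keep intersecting children
        .map(query(_, qlo, qhi))
        .min
    case leaf: Leaf => leaf.min
  }
}
```

For example, with `val t = RMQ.build(Vector(10, 20, 30, 40, 11, 22, 33, 44, 15, 5))`, the call `RMQ.query(t, 0, 5)` returns 10, matching the first query of the sample input below.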

The explanation of how to use the program, the input format and expected output can be found at:
https://www.hackerrank.com/challenges/range-minimum-query
Here is a brief excerpt:

Sample Input

10 5
10 20 30 40 11 22 33 44 15 5
0 5
1 2
8 9
0 9
4 6

Sample Output

10
20
5
5
11

We are now approaching the end of this article. I hope you had a good read. To conclude, I would like to draw your attention to the fact that the depth of the Segment Tree is only ⌈log₂ N⌉ + 1. This implies that a Segment Tree can be visited very efficiently, with a moderate number of recursive calls.

References

Hackerrank, (2014), https://www.hackerrank.com/challenges/range-minimum-query, accessed 01.08.2014

Topcoder, (2014), http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=lowestCommonAncestor, accessed 01.08.2014

Wikipedia, (2014), https://en.wikipedia.org/wiki/Segment_tree, accessed 01.08.2014