The danger of naive optimization in uncertainty

Optimization is one of the essential goals of management. It means doing more with less. It is one of the factors behind the incredible performance of the global economy over the last two hundred years. It explains, for example, why the price of a refrigerator has fallen by 98% in constant euros over the last century. But it is sometimes done in a naive way, i.e. without taking uncertainty into account, and this naivete is a source of great fragility.

Optimization has been particularly shaped by the development of “just in time”, the idea of delivering the components of a product only when they are needed. By delivering just in time, rather than in advance, we avoid unnecessary inventory and therefore reduce costs. When this technique is applied throughout the supply chain, the impact on efficiency can be significant. Over the last thirty years, with the rise of China as the workshop of the world and the development of international logistics, this technique was extended on the assumption that the geographical origin of a product was irrelevant: all that mattered was price and lead time. The gain in efficiency was considerable: a product could be sourced with a few clicks and delivered a few days later.

But this optimization reflected a kind of naivete, unaware of the mental models on which it was based. Just-in-time delivery, for example, assumes that there will never be a strike. This is true in Japan, which invented the concept, but not in France, which was quick to adopt it. In a country where strikes are frequent, it is a good idea to hold stock: the extra cost is largely recouped in the event of a strike, because production can continue anyway. Similarly, global logistics assumed that the worldwide supply system would always function smoothly. This assumption held for years, then was abruptly disproved by the Covid-19 pandemic, and more recently by the slowdown in globalization and by security issues such as the Houthi threat to shipping in the Red Sea.

For a system to withstand the inevitable shocks of an uncertain world, there are several conditions that run counter to naive optimization:

First, there must be excess capacity. This is what we call “slack”: a little too much of everything, at every level. The less slack you have, the less shock you can absorb, so you have to keep reserves you do not immediately need. The model here is the fire department: firefighters are inactive part of the time, and then suddenly they are indispensable. We are thus funding a structure that is idle for a significant part of its time. If firefighters are inactive 30% of the time, it would be foolish to cut the workforce by 30% to optimize operations. Slack is inefficient on a day-to-day basis, but very effective in the long run.

Second, you need redundant systems. That’s why nature gave us two lungs. Again, naive optimization wants to avoid redundancy, which is a source of direct and indirect costs (maintenance, etc.). The U.S. Navy has resumed training sailors to use maps and compasses. Maybe one day GPS will be inaccessible (due to malfunction or enemy sabotage) and we’ll have to find other ways of coping.

Third, the system must be pluralistic, not monolithic. A plural system means that shocks can be absorbed locally. One part of the system can be affected without affecting the others. Therefore, the standardization and uniformity of an organization, which are important factors in optimization, are also factors in fragility. Concentrating all your production in a single plant increases economic performance, but makes you vulnerable if that plant has a problem (strike, natural disaster, political upheaval, etc.) or if it can no longer deliver to the other side of the world (geopolitical crisis, tariff war, interruption of maritime traffic, etc.). The challenge for the organization is therefore to create a global unity while leaving a certain amount of local diversity. This goes directly against the principles of naive optimization.

Fourth, we must develop resources that are a priori useless. This is a total absence of optimization, since we expect zero return on the investment. A priori useless resources are the hardest to sell to a management obsessed with naive optimization. At the beginning of the last century, the French army allowed local initiatives to experiment with new technologies: the radio, the airplane, the bicycle, and so on. These resources were useless in terms of the official models of the time, but they proved very useful when those models collapsed at the start of the war and new ones had to be built.

These four principles (and there are others) cannot be developed in the abstract. They can only be implemented if a culture of intelligent optimization is developed within the organization, one that emphasizes the importance of the human dimension. It is, at bottom, a general attitude to be adopted.

Two points must be made here. First, the degree of optimization cannot be defined in absolute terms. We must not over-optimize, of course, but what does “over-optimize” mean? It is impossible to say in the abstract; the only way is to decide case by case, always keeping in mind the dangers of over-optimization. Second, there is no ideal organizational structure. Managing a complex system involves compromises; it can never safely aim for an ideal. If we optimize too much, we weaken the system by making it too sensitive to shocks. If we don’t optimize enough, we weaken the system by reducing economic performance, which also threatens the existence of the organization.

Managing an organization is very much about the art of compromise, not naively optimizing one or two factors at the expense of others that are equally important, even if they rarely come into play. Naïve optimization is, above all, optimization that does not understand the nature of the environment in which it takes place. It is therefore essential that the optimization of the functioning of an organization, which is essential for its survival, is developed with a clear awareness of the mental models on which it is based, so that they can be discussed and questioned.


🇫🇷 A version in French of this article is available here.
