Finding Clues In The Counter-Intuitive

I was training a client recently and overheard a conversation in which one analyst was helping a new team member learn to use a recently deployed strategy simulator.  The new analyst had been experimenting with the simulator and the more experienced one was attempting to verify with her that the outputs were “as expected” and that “nothing counter-intuitive happened.”

I suggest taking a different perspective. Counter-intuitive results are precisely what we should be looking for, because they indicate one of two things: error or opportunity.

First, counter-intuitive results can be a sign that some critical piece of logic or data was left out of the model, that a formula uses an incorrect equation, or even something as simple as an output displaying the wrong variable. Here the counter-intuitive result indicates a bug that needs resolution.

More importantly, counter-intuitive results can flag an error in our own thinking (literally “counter to intuition”).  If we expect a certain result and something surprising happens, that sets us on the path to learning.  Why does that result differ from what we expected?  Which factors contributed?  Are non-linearities pushing us into a new behavioral realm?  Are interactions occurring between variables that have reached values we didn’t expect to appear together in reality?  Understanding the causal logic behind the counter-intuitive opens the door to better solutions.

Once we uncover the reasons for counter-intuitive results (assuming model error isn’t the culprit), we can take advantage of our new understanding by capitalizing on some effect that runs counter to the conventional wisdom. Perhaps there’s a potentially harmful blind spot in our strategy that can be repaired before competitors take notice? Better yet, maybe there’s a flaw in a competitor’s strategy that we can seize upon, or a market opportunity we can exploit? This is one of the big advantages of using a simulator, by the way: you can freely experiment and explore the virtual universe in search of counter-intuitive outcomes that might be beneficial.

The key is to recognize the counter-intuitive as a harbinger of new revelations about the system we’re investigating.

Test-Fly Your Strategy

The ability to learn faster than your competitors may be the only sustainable competitive advantage.

— Arie de Geus

Many years ago I learned to fly. 

I love aviation, and after receiving my private pilot certificate I continued to train, eventually earning a commercial multi-engine certificate and an instrument rating. Perhaps the most valuable training aid I made use of was the Frasca 142 flight simulator. The simulator allowed my instructors to throw problems of ever-increasing difficulty at me (faulty instruments, system failures, engine fires, etc.) and review my responses with me during post-flight debriefings. I could repeat and rehearse until I had an innate understanding of, and reflexive response to, the most dire situations. Moreover, I was able to train with a degree of safety, speed, and cost-effectiveness that could never be approached in an actual aircraft.

Imagine if you could do this with your business… How should you respond to an aggressive new entrant in your market? When should you invest in new infrastructure? How quickly should you expand into new territories? What if you could “test fly” your strategic decisions before you bet your company on the outcome?

You can, using a “management flight simulator” (MFS). An MFS is a simulation expressly designed to help business leaders understand and prepare for specific scenarios they may face. The simulation codifies the relevant structure of the business, the market, the competition — whatever is considered germane to the problem(s) at hand — and presents “the world” to the user via an interface, typically a dashboard of the reports, graphs, and metrics that decision maker would normally use. After evaluating the situation, the user enters her strategic and/or operational decisions (e.g. investment amounts, pricing decisions, new factory construction); the simulation then advances one time period, usually one fiscal quarter, and updates the dashboard to show her the new view of “the world” in light of those decisions. In a matter of minutes, a leader can simulate many years into the future and see the far-reaching results of her strategies.
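To make that loop concrete, here is a minimal sketch of the decide-advance-report cycle an MFS runs, written in Python. The model, the names, and the toy market rules below are illustrative assumptions for this sketch only, not the structure of any particular simulator.

```python
# Hypothetical sketch of a management-flight-simulator loop: the player sets
# decisions, the world advances one fiscal quarter, the dashboard updates.
from dataclasses import dataclass

@dataclass
class Decisions:
    price: float            # unit price set by the player
    capacity_invest: float  # dollars invested in new capacity this quarter

@dataclass
class WorldState:
    quarter: int = 0
    capacity: float = 1_000.0      # units producible per quarter
    competitor_price: float = 10.0
    cash: float = 50_000.0

def advance_one_quarter(state: WorldState, d: Decisions) -> WorldState:
    """Advance the simulated world one fiscal quarter given the player's decisions."""
    # Toy rule: demand falls as our price rises relative to the competitor's.
    demand = 1_200.0 * (state.competitor_price / d.price) ** 1.5
    sales = min(demand, state.capacity)
    profit = sales * (d.price - 6.0) - d.capacity_invest
    # Investment converts into capacity at a fixed rate (construction delays ignored for brevity).
    new_capacity = state.capacity + d.capacity_invest / 100.0
    # The competitor adjusts its price slowly toward ours.
    new_comp_price = 0.8 * state.competitor_price + 0.2 * d.price
    return WorldState(state.quarter + 1, new_capacity, new_comp_price,
                      state.cash + profit)

def dashboard(state: WorldState) -> None:
    """Present 'the world' to the user, as the reports and metrics would."""
    print(f"Q{state.quarter}: cash=${state.cash:,.0f}  "
          f"capacity={state.capacity:,.0f}  competitor price=${state.competitor_price:.2f}")

# Simulate five years (20 quarters) of a fixed strategy in seconds.
state = WorldState()
for _ in range(20):
    state = advance_one_quarter(state, Decisions(price=12.0, capacity_invest=5_000.0))
    dashboard(state)
```

A real MFS replaces the toy rules with a validated model of the business, but the cycle is the same: decisions in, one period forward, dashboard out.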

By being able to “replay the tape” and examine the long-term outcome of a series of decisions, leaders are able to learn the subtleties of the interconnections among business entities (e.g. internal departments, the competitive landscape, distribution channels, the capital markets). They can study and understand the secondary and tertiary impacts of the policies they set forth. They can learn to spot the leading indicators that warn of impending danger and signal the need for change. They can test theories, review the results, and rehearse until they’re satisfied with the outcomes.

In short, an MFS enables in-depth learning with amazing speed and efficiency, and that learning leads to better decisions.

When aeronautical engineers design a new aircraft, they don’t just build it, climb in, and hope it flies. They simulate every imaginable aspect of it in order to ensure the safest, smoothest, most economical venture possible. Business leaders can take advantage of the same technology to “test fly” their strategies before risking their companies.

For Better Decisions Use Better Models

Given our cognitive biases, the path to better decisions is to base them on better models.

When people make decisions, they typically use their “mental models,” sometimes augmented by other types of models like spreadsheets, to try to predict the likely outcomes of their choices and then select the optimal one. Think of this as “mental simulation.” This works fine for simple problems, but as complexity increases, our mental models and cognitive processes have not evolved to deal with issues like too much data (leading to bounded rationality), long time delays (we almost always overweight near-term phenomena), non-linear behavior (where cause and effect are not proportional), and feedback processes (those that lead to self-reinforcing or self-modulating behaviors).
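To illustrate why delays and feedback trip up mental simulation, here is a toy stock-management loop in Python (a hypothetical example, not drawn from any client model): orders respond to an inventory gap, but deliveries arrive after a lag. Intuition says inventory should settle smoothly at the target; the delay instead produces overshoot and oscillation.

```python
# Toy inventory-control loop with a delivery delay (illustrative assumption,
# not from the article): order to close the gap, receive deliveries late.
TARGET = 100.0     # desired inventory level
DELAY = 4          # delivery lag, in periods
ADJUST = 0.5       # fraction of the gap ordered each period

inventory = 50.0
pipeline = [0.0] * DELAY   # orders placed but not yet delivered

for t in range(30):
    delivered = pipeline.pop(0)        # the oldest order finally arrives
    inventory += delivered - 10.0      # constant demand of 10 units per period
    order = max(0.0, ADJUST * (TARGET - inventory) + 10.0)
    pipeline.append(order)             # the new order enters the delivery pipeline
    print(f"t={t:2d}  inventory={inventory:7.1f}  order={order:6.1f}")
```

Lengthening the delay or ordering more aggressively makes the swings worse, exactly the kind of behavior that is hard to predict in one’s head.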

Given our cognitive biases and limited “wet-ware” processing ability, the path to better decisions is to base them on better models: models that are rigorous in their logic, models that don’t suffer from human mental foibles, models that integrate the combined knowledge and mental models of more than a single expert. My colleagues and I construct computational models for just this purpose.

Computational models come in a variety of forms, and the technology behind them can vary from statistics-based econometric models to causal system dynamics models, with much in between. The right choice of model depends on the question being asked, but a key benefit of using any type of model is that it doesn’t suffer from the same types of cognitive biases previously mentioned.

I stress that a model shouldn’t be used as a substitute for thought (“The black box said I should choose Option A”), but rather as a tool to develop a better understanding of the environment one is dealing with and to accelerate one’s own learning of the connections between actions and outcomes. One of the most important uses of such a simulation model is as a learning tool. The simulation becomes a transitional object that externalizes one’s mental model, providing a platform for experimentation. This experimentation enables fast feedback leading to “double loop” learning (more on this in a later post).

Thus a computational model, properly encoded, integrates the various theories and beliefs in our mental models and ensures that they are internally consistent. If we construct a simulation and the outcome differs from what we expected, we have a learning opportunity. Perhaps we’ve left something out of the model because we didn’t consider it important? Perhaps the causal connections between events and outcomes in parts of the model don’t operate quite as we believed? Whatever the reason, we can adjust both our mental model and our computational representation of it (something I like to call “learning”).

Your decisions are only as good as the models you base them on, so the key to better decisions is to use better models.