For Better Decisions, Use Better Models

Given our cognitive biases, the path to better decisions is to base them on better models.

When people make decisions, they typically use their “mental models,” sometimes augmented by other types of models such as spreadsheets, to try to predict the likely outcomes of their choices and then select the best one. Think of this as “mental simulation.” This works fine for simple problems, but as complexity increases, our mental models and our cognitive processes are not evolved enough to deal with issues like too much data (which overwhelms our bounded rationality), long time delays (we almost always overweight near-term phenomena), non-linear behavior (where cause and effect are not proportional), and feedback processes (those that lead to self-reinforcing or self-modulating behaviors).
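To make the point about delays and feedback concrete, here is a minimal sketch in Python, with purely hypothetical numbers rather than anything drawn from a real model: a manager tries to close a staffing gap, but new hires take months to arrive. Reacting only to the visible gap, and ignoring the hires already in the pipeline, is exactly what mental simulation tends to do.

```python
# A minimal sketch with illustrative numbers: closing a staffing gap when
# new hires take several months to arrive. Decisions react only to the
# visible gap and ignore hires already in the pipeline -- the kind of
# delayed feedback that mental simulation handles poorly.

TARGET = 100            # desired headcount
HIRING_DELAY = 4        # months between requesting a hire and the hire starting
ADJUSTMENT_TIME = 2     # months over which the manager tries to close the gap

staff = 60.0
pipeline = [0.0] * HIRING_DELAY   # hires requested but not yet started
                                  # (negative entries stand in for delayed reductions)

for month in range(1, 25):
    staff += pipeline.pop(0)                 # requests made HIRING_DELAY months ago take effect
    gap = TARGET - staff
    pipeline.append(gap / ADJUSTMENT_TIME)   # naive response: react to the visible gap only
    print(f"month {month:2d}: staff = {staff:6.1f}")
```

Run it and the headcount overshoots the target and then oscillates. The fix, accounting for the hires already in the pipeline, is obvious once the structure is written down, but rarely obvious in the moment.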

Given our cognitive biases and limited “wetware” processing ability, the path to better decisions is to base them on better models: models that are rigorous in their logic, models that don’t suffer from human mental foibles, and models that integrate the combined knowledge and mental models of more than a single expert. My colleagues and I construct computational models for just this purpose.

Computational models come in a variety of forms, and the technology behind them ranges from statistics-based econometric models to causally based system dynamics models, with much in between. The right choice of model depends on the question being asked, but a key benefit of using any type of model is that it doesn’t suffer from the cognitive biases mentioned above.
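As one illustration of the causal end of that spectrum, here is a toy system-dynamics-style model, again with made-up numbers: word-of-mouth adoption of a product, where a reinforcing loop (adopters recruit more adopters) is limited by a balancing loop (the pool of potential adopters shrinks).

```python
# A toy causal (system-dynamics style) model with made-up numbers:
# word-of-mouth product adoption. A reinforcing loop (adopters recruit
# more adopters) is limited by a balancing loop (the pool of potential
# adopters shrinks), producing an S-shaped adoption curve.

MARKET = 10_000           # total potential adopters
CONTACT_RATE = 10         # contacts per adopter per month
ADOPTION_FRACTION = 0.05  # chance that a contact converts a potential adopter

adopters = 10.0
for month in range(1, 25):
    potential = MARKET - adopters
    # the flow depends on both stocks, so growth is non-linear
    new_adopters = adopters * CONTACT_RATE * ADOPTION_FRACTION * potential / MARKET
    adopters += new_adopters
    print(f"month {month:2d}: adopters = {adopters:8,.0f}")
```

Together the two loops produce the familiar S-shaped adoption curve; neither loop on its own would.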

I stress that a model shouldn’t be used as a substitute for thought (“The black box said I should choose Option A”), but rather as a tool to develop a better understanding of the environment one is dealing with and to accelerate one’s own learning of the connections between actions and outcomes. One of the most important uses of such a simulation model is as a learning tool. The simulation becomes a transitional object that externalizes one’s mental model, providing a platform for experimentation. That experimentation enables fast feedback, leading to “double-loop” learning (more on this in a later post).

Thus a computational model, properly encoded, integrates the various theories and beliefs in our mental models and ensures that they are internally consistent. If we construct a simulation and the outcome differs from what we expected, we have a learning opportunity. Perhaps we left something out of the model because we didn’t think it was important or didn’t consider it at all. Perhaps the causal connections between actions and outcomes in parts of the model don’t operate quite as we believed. Whatever the reason, we can adjust both our mental model and our computational representation of it (something I like to call “learning”).

Your decisions are only as good as the models you base them on, so the key to better decisions is to use better models.