Many people reach out to me because they want to learn about the science of complexity. They ask what it is, how it works, and — most importantly — how they can apply it to global challenges confronting humanity. In those conversations I share what I have learned as a complexity researcher who specializes in the evolution of human systems. Realizing how useful this is for them, I thought I’d share some of it with all of you here.
What is Complexity Science?
To begin we’ll need a common definition of complexity science. Let’s start with a general one that makes intuitive sense and then dig into the details from there:
“Complexity science is the study of things that surprise us. When a system rapidly changes in ways we don’t expect, there is something going on that cannot be reduced to a simple explanation based on one of its parts. This system is behaving with complexity.”
One of my favorite examples is the popping of popcorn. Take a corn kernel and slowly heat it. For a while nothing changes — until something surprising happens — in an explosion of sound it POPS and is forever transformed into a fluffy white blob with very different properties than the original kernel of corn.
Complexity is the study of how the corn changes from one state to another. Emphasis is given to its internal dynamics as an object that is being heated. This gives us one of the important properties of a complex system — that it is constantly driven far from equilibrium until something unexpected happens.
This is the way life works for all living creatures. Being alive is a dynamic process that only exists while the organism is far from equilibrium. The only way it can be unchanging is to be dead. The constant flow of nutrients and energy keeps the organism dynamic and alive. Any study of its internal dynamics will be in the realm of complexity science.
How do People “Do” Complexity Research?
This is one of those questions that often takes the form of “Joe, how do you actually do your work? What are the steps? How do you know you are right when you come to a conclusion?” I LOVE this question. It opens up the conversation into a web of personal reflections and formal methodologies — where the magic really happens.
The way complexity researchers operate is to build a model for the system they are studying and then test to see if it captures something important about the real-world phenomenon they want to understand. This often takes the form of a simple mathematical relationship that gets at the essence of what the real system is about.
For example, a person trained in physics will be familiar with the mathematics that describes how a massive object attached to a spring will behave if the spring is stretched and released. The object will begin to move up and down (or back and forth) in periodic waves that get smaller as friction sucks energy away and releases it as heat. They might want to study something much more complex like the collective behavior of the Chicago Stock Exchange — where they lack a simple model that captures its essence — and so they look at the behavior of individual traders to see if any part of the mass-and-spring model might apply.
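The mass-and-spring behavior described above can be sketched in a few lines of code: a damped harmonic oscillator stepped forward in time with semi-implicit Euler integration. All parameter values here are illustrative assumptions, not taken from any real system:

```python
# A damped mass-on-spring: the textbook model mentioned above.
# Parameter values are illustrative assumptions.

def simulate(m=1.0, k=4.0, c=0.2, x0=1.0, dt=0.01, steps=5000):
    """Return the positions of a damped oscillator over time."""
    x, v = x0, 0.0
    positions = []
    for _ in range(steps):
        a = (-k * x - c * v) / m   # spring restoring force plus friction
        v += a * dt                # semi-implicit Euler: update velocity first
        x += v * dt
        positions.append(x)
    return positions

xs = simulate()
```

Plotting `xs` would show the familiar shrinking waves: the spring term drives the oscillation back and forth while the friction term bleeds energy away on every step.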
In this way, the researcher builds a “bag of tools” for studying different systems. They familiarize themselves with different mathematical models and start to tinker with data from a real system to figure out which model (or combination of models) best explains what they see. The models are tested against empirical data from the real-world system to help the researcher learn whether their assumptions about the applicability of the model are correct or whether they were thinking about it in the wrong way.
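That test-against-data loop can be shown in miniature: score a few candidate models against observations and keep the one that fits best. Everything below, including the synthetic “observations,” the candidate formulas, and the mean-squared-error score, is a hypothetical stand-in for a real research pipeline:

```python
import math
import random

random.seed(0)

# Synthetic "observations": a decaying oscillation plus measurement noise.
ts = [0.1 * i for i in range(100)]
data = [math.exp(-0.2 * t) * math.cos(2.0 * t) + random.gauss(0.0, 0.05)
        for t in ts]

def mse(model):
    """Mean squared error of a candidate model against the observations."""
    return sum((model(t) - y) ** 2 for t, y in zip(ts, data)) / len(ts)

# Two candidate models from the researcher's "bag of tools".
candidates = {
    "undamped oscillation": lambda t: math.cos(2.0 * t),
    "damped oscillation": lambda t: math.exp(-0.2 * t) * math.cos(2.0 * t),
}

scores = {name: mse(f) for name, f in candidates.items()}
best = min(scores, key=scores.get)   # the model that best explains the data
```

Here the damped model wins because its residual error is close to the noise floor, which is exactly the signal a researcher looks for when deciding which tool applies.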
At this point in the conversation, I usually explain the most valuable skill I learned while studying physics in college. It is not the knowledge of physics, say the way electric charge works in atomic physics or how massive objects behave in a strong gravitational field, but rather the way a person sets up the problem so that their solution actually works!
This is the real bread-and-butter of the physicist. It is why most students of physics go through a period of time when their grades are really low. I experienced this in my first year as an undergraduate when most grades in my general physics class were curved because the highest score on a test was only 40%. It wasn’t until after a year of coursework that student scores would improve dramatically — not because advanced physics students are “smarter” — but because they have gone through a learning process where they figure out how to correctly set up a problem.
And now — no matter what kind of problem I am dealing with — I know how to check my assumptions and figure out which analytic tool is right for the job. That is the true sign of a well-trained physicist. It is also what enables a complexity researcher to figure out which models are “good contenders” for a particular real-world situation.
The researcher uses the models to explore how the system behaves under different conditions. They may do this in order to predict future outcomes of the real system, as someone who studies arbitrage in financial markets might do in order to hedge bets on prospective future prices. But prediction is often elusive in complex systems (just think of the weather in your home town); there are limits to how far into the future we can trust our predictions. In these situations, it is more important to characterize the dynamics of the system, so that we can explain what is going on, than it is to predict it outright.
A great deal of confusion comes from the misunderstanding that science is first and foremost about predicting the future. Most complex systems cannot be predicted — their internal dynamics are just too sensitive to unknown factors that cannot be measured in practice — and so what is really needed is an understanding of why the system can’t be predicted. The weather is a good example. Sometimes it is just as important to know why we can’t predict how much cloud cover there will be in Central Park two weeks from next Saturday as it is to gauge the likelihood of freezing rain late tonight.
In the early days of complexity research, a discovery was made about the fundamental unpredictability of systems that is now known as Chaos Theory. It was first demonstrated with a numerical model for weather forecasting based on the physics of fluid flows (the Navier-Stokes equations), which is notoriously sensitive to uncertainty. Even when the mathematics is completely deterministic, meaning that perfect knowledge of the system at any point in time would allow a person to calculate exactly what it will do at any future or past point in time, the system will still be unpredictable in practice if there is any uncertainty at all in the starting measurements.
A measure of how unpredictable a system is can be gained by calculating future states of the system for one set of starting inputs, then changing them slightly and running the calculation again. After doing this for a large number of starting values, it becomes possible to measure the statistical spread of future trajectories relative to how close together they started. A “chaotic” system is one whose future states diverge so rapidly from small changes in starting inputs that the errors quickly swamp the accuracy of any measurement, making the future state effectively unknowable shortly after beginning the computation.
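This divergence test is easy to demonstrate with the logistic map, a standard toy model of chaos (my choice of example here; the original weather model is far more elaborate). Two runs that start a hair's breadth apart end up in completely different states:

```python
# The divergence test described above, run on the logistic map
# x -> r * x * (1 - x), a standard chaotic toy model (an illustrative
# choice, not the weather model itself).

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

base = trajectory(0.2, 50)
shifted = trajectory(0.2 + 1e-10, 50)   # a tiny change in the starting input

# Separation between the two runs: it grows roughly exponentially until
# it saturates at the full size of the system's range of behavior.
gaps = [abs(a - b) for a, b in zip(base, shifted)]
```

A starting difference of one part in ten billion grows, within a few dozen steps, to the full size of the interval the system lives on. That growth rate is the quantity researchers use to sort systems into classes.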
The way a particular system distributes ignorance in this manner is a key element for explaining how the system behaves. It is positive knowledge to know how quickly our confidence in future predictions goes away. Entire classes of systems can be lumped together based on the rate at which ignorance spreads.
Why Aren’t More Social Systems Studied Using Complexity?
By now I hope you are beginning to see how complexity research is done in practice. The researcher builds and tests models to figure out how they behave under different conditions. The knowledge they gain allows them to explain how real-world systems work and what is knowable (and unknowable) about them in principle. This kind of knowledge can be extremely valuable.
Every example I’ve given so far is for physical systems. This is not by accident. The typical way that models are tested for fitness is to gather measurements from the real world that empirically ground us and constrain our theories to what is really happening. A weather model will make use of temperature and pressure observations to ensure that numerical relationships behave like real physical processes. This is how the researcher discovers flawed assumptions and makes improvements to their model.
One way they do this is called Scale Analysis.
Scale analysis is the practice of taking all known factors that contribute to making changes in a system and estimating how much of the overall behavior each one can explain. When studying the weather, for example, it is helpful to know how much the Coriolis Force (from the Earth’s rotation) is contributing to the rotation of a supercell thunderstorm. Heat introduced when water vapor condenses into liquid drops will also play a role — but how much? Scale analysis tells us which factors must be included to paint an adequate picture of the system and how it changes in time. Some factors can be excluded without making much of a difference. Knowing which to keep and which to throw away is all part of a strong research methodology.
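A back-of-the-envelope version of this for the supercell example is the Rossby number, the ratio of inertial to Coriolis accelerations. The characteristic speed and length scales below are rough values I am assuming for illustration:

```python
import math

# Back-of-the-envelope scale analysis for storm rotation.
# U and L are rough characteristic scales assumed for a supercell.
omega = 7.292e-5                             # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(40))   # Coriolis parameter at 40N, 1/s

U = 20.0     # characteristic wind speed, m/s (assumed)
L = 1.0e4    # characteristic storm width, ~10 km (assumed)

Ro = U / (f * L)   # Rossby number: inertial vs. Coriolis accelerations
print(f"f = {f:.2e} 1/s, Ro = {Ro:.1f}")
```

A Rossby number much greater than one says the storm's rotation is dominated by its local dynamics, so the Coriolis term can be dropped at this scale without much loss. That is exactly the kind of keep-or-throw-away judgment scale analysis provides.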
For physical systems, we can simply go out and make measurements: Coriolis Force = X. Heat Exchange = Y. But for social systems there is an additional layer of interpretation that makes it difficult to know if what we measure is real or not. This is the main reason why complexity research has not been done as much for social systems — our implicit beliefs and value judgments tend to bias us without our knowing it.
Take for example the Rational Actor Model, which assumes human behavior is dictated by self-interest and greed. It is built on several ideological assumptions that could be tested to see if they are correct. (When behavioral scientists tested these assumptions, they showed them to be incorrect, a body of work that earned psychologist Daniel Kahneman a Nobel Prize in economics for discoveries made with his longtime collaborator Amos Tversky!) But for decades the self-interest assumptions were held as sacrosanct. They biased interpretation without ever going through the scientific process of falsification through testing.
And so we must take special care that our philosophical assumptions are empirically responsible — grounded in the scientific method of theory-testing and falsification — if we want to know how reliable our measures of human behavior actually are. It is in this domain of empirically responsible philosophy that I have worked for many years. I have found it vital to keep myself up-to-date on new research in psychology, linguistics, anthropology, neuroscience, and a host of other related fields if I want my models for social systems to be robust against the data one can readily bring forth to challenge my conclusions.
Where Do We Go From Here?
The applications of complexity research to social systems are just waiting to be developed. Recent advances in user-centered design have helped a great deal by bringing the higher standards of empirical methods to bear on the study of human interaction and experience. What we need to do is formalize our understanding of what a complex system is, how models can be built and tested to explain one, and where our philosophical assumptions enter the model-building process, so that we can advance in leaps and bounds.
Two years ago I proposed that we create a new field of science focusing on Human Interface Design for Global Change that expresses where I intend to go with my scholarly work and design practice. This work is evolving rapidly through collaborative partnerships with DarwinSF, the International Centre for Earth Simulation, Cognitive Policy Works, the Global Leadership Lab, and several others who strive to effect global change.
Let’s have a conversation about this. Please share your thoughts below or send me an email and let’s see where this goes…