Jeffrey Goldberg
Computer Centre
Cranfield University
Cranfield, Bedford
MK43 0AL
J.Goldberg@cranfield.ac.uk

Lívia Markóczy^{1}
School of Management
Cranfield University
Cranfield, Bedford
MK43 0AL
+44 (0)1234 751122 (x3757)
+44 (0)1234 750 070
L.Markoczy@cranfield.ac.uk
Keywords: Complexity Theory, Game Theory, Chaos
Disorder, unintended consequences of actions, and turbulence followed by calmer periods are part of the everyday experience of individuals in organizations, a consequence of the many small interactions among individuals and organizations. Organizational scholars have long been fascinated by this dynamism and unpredictability and have sought theories capable of capturing them.
Complexity theory and chaos theory now seem to fill the role. They have been presented in scholarly and practitioner oriented journals as comprising a revolutionary new paradigm (e.g., Brown & Eisenhardt, 1997; Johnston, 1996; McKergow, 1996) which is not only capable of modeling dynamism and unpredictability, but does so while eliminating the perceived evils in social sciences: reductionism, predictability, and the assumption of rational individuals (e.g., McKergow, 1996; Stacey, 1995). In that discussion the distinction between complexity theory and chaos theory is often blurred. As we comment on the literature that fails to make the necessary distinction we will use the term ``complexity/chaos theory'' as a cover term. We will reserve ``complexity theory'' or ``the study of complex systems'' and ``chaos'' or ``chaotic systems'' or ``nonlinear dynamic systems'' for things that more closely resemble the notions as they have come from mathematical physics and modeling.
Complexity/chaos theorists pride themselves in drawing from recent scientific developments in physics, biology and mathematics. Complexity/chaos theory, however, has also accumulated a rich rhetoric which distorts the picture of what it can do for us. Before we can evaluate complexity/chaos, we need to strip away the rhetoric that surrounds it. Only then can we see how it really contributes.
When we separate chaos from complexity, we will see that most of the actual work in chaos/complexity in management has been with complexity theory (although muddled by some of the rhetoric of chaos) and so we will focus mainly on complexity, but will have something to say about chaos as well.
Below we argue that complexity theory, once stripped of the rhetoric, is not a radical new paradigm. In section 1 we very briefly mention some of the alleged properties of complexity and chaos that some people have been attracted to. Those properties will be discussed more in section 3. But before we can do that, we need to very briefly indicate what we mean by ``chaos'', ``complexity'', and ``game theory''. That is done in section 2, where we discuss some of the ``new'' properties of complexity against the background of game theory. Section 3 is the bulk of this paper in that it attempts to systematically show that what has been proposed as new, profound, or revolutionary is often misunderstood. In our concluding remarks (section 4), we speculate as to why so many management scholars have been attracted to complexity and chaos, despite the fact that when examined closely it does little of what they actually desire.
In the management literature complexity/chaos theory is presented as a theory which unlike traditional theories is able to demonstrate how the interaction of agents following simple rules can lead to complicated macroscopic effects in the long run (Levy, 1994; McKergow, 1996).^{2} The interaction of the agents is said to follow a nonlinear dynamic, and differences in the initial state of a system lead to different interaction patterns among the agents which lead to unpredictable, often unintended, consequences on the system level (McKergow, 1996; Stacey, 1995). Unforeseen consequences are assumed to be the result of the existence of both negative and positive feedback loops in the system (Cheng & Van de Ven, 1996; Ginsberg et al., 1996). Negative feedback loops by themselves lead to the stabilization of the system but the positive feedback loops make the system unpredictable and unbalanced as they amplify the effects of certain interactions (Van de Ven & Poole, 1995). Some management scholars consider organizations to be good candidates for the nonlinear dynamic feedback system described by complexity/chaos theory (Brown & Eisenhardt, 1997; Parker & Stacey, 1994; Stacey, 1991). Stacey (1995, pp. 480-481), for example, writes:
Organizations are clearly feedback systems because every time two humans interact with each other the actions of one person have consequences for the other, leading that other to react in ways that have consequences for the first, requiring in turn a response from the first and so on through time. In this way an action taken by a person in one period of time feeds back to determine, in part at least, the next action of that person... Furthermore, the feedback loops that people set up when they interact with each other, when they form networks, are nonlinear. This is because: the choices of agents in human systems are based on perceptions which lead to non-proportional over- and under-reaction... and without doubt small changes often escalate into major outcomes. These are all defining features of nonlinear as opposed to linear systems and, therefore, all human systems are nonlinear feedback networks.
Complexity/chaos theory is often presented as superior to existing theories that are concerned with equilibria (Brown & Eisenhardt, 1997; McKergow, 1996). Interest in equilibrium is often equated with ``stability, regularity and predictability'' (Stacey, 1995, p. 477) while complexity/chaos theory is claimed to be able to model systems that ``operate far from equilibrium'' and are in the ``paradoxical states of stability and instability, predictability and unpredictability'' (Stacey, 1995, p. 478). Given the occurrence of both positive and negative feedback, a complex system might never reach equilibrium.
Complexity/chaos allegedly has a number of properties which are claimed to characterize human systems and interactions. Some of these are listed below. They are merely mentioned here, and discussion of them is reserved for section 3.
As we will see later, some of the properties are not desirable, others have been available in game theory, and still others are only available in game theory.
We do not aim to provide even an overview of the theories or approaches in question, and most of what we do have to say about them is said in later sections (particularly section 3), where we discuss some of the specific properties of the theories in comparison. However, we wish to say some things in advance of that discussion, but we will keep these brief in order to avoid repetition of later sections.
Chaos and complexity are often discussed together, but they are quite different things. There are many characterizations of the differences. Cohen & Stewart (1994, p. 2), for example, characterize the difference as follows: complexity is about how simple things arise from complex systems, and chaos is about how complex things arise from simple systems.^{3} It is generally true to say that the study of chaos involves the study of extremely simple nonlinear systems which lead to extremely complicated behavior, while complexity is generally about the (simple) interactions of many things (often repeated) leading to higher-level patterns.
To give an example of a nonlinear dynamical system (which we will come back to in later discussion), we'll look at one famous and simple system. The discussion here is based on Sigmund's (1993, ch. 3) description of May (1976). This work is also well described by Gleick (1996, pp. 70-73).
Parable 1. Imagine a simple species whose population in one generation depends only on its population in the previous generation in two ways. If there are more potential parents there will be more offspring in the next generation, but if there are too many in one generation they each may not get enough nourishment to reproduce. Also, to make things simple, let's set the units that we use for talking about the population so that 1 is the absolute maximum the particular environment can hold. So, in the ith generation the population x_{i} depends on the population of the preceding generation, x_{i-1}, according to some equation. Probably the simplest function that fits the description is an inverted parabola.
x_{i} = kx_{i-1}(1 - x_{i-1})
where k is some constant. That equation is very simple, but it is nonlinear (when multiplied out it is x_{i} = kx_{i-1} - kx_{i-1}^{2}). For some values of k, most starting values for the population, x_{0}, will eventually lead to a single point (depending on k and not on x_{0}). For other values of k most starting values for the population will lead to oscillating or cyclical values for the population (and the cycles can be quite long). But for other values of k, starting values for x_{0} don't necessarily converge on any repeating cycle and the population fluctuates in a way that is neither cyclical nor random. When this happens, no difference in starting values is so small that it might not make a big difference. When a system behaves that way it is chaotic.
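These regimes are easy to reproduce numerically. Below is a minimal sketch of the population equation; the particular values of k, the starting population, and the iteration counts are our own arbitrary illustrative choices:

```python
def trajectory(k, x0, n):
    """Iterate x_i = k * x_{i-1} * (1 - x_{i-1}) and return all n values."""
    xs, x = [], x0
    for _ in range(n):
        x = k * x * (1 - x)
        xs.append(x)
    return xs

# k = 2.5: the population converges to a single point, 1 - 1/k = 0.6.
print(round(trajectory(2.5, 0.3, 500)[-1], 6))   # 0.6

# k = 3.2: the population ends up oscillating between two values.
print([round(v, 3) for v in trajectory(3.2, 0.3, 500)[-2:]])

# k = 4.0: no convergence and no cycle; the sequence looks random
# but is fully determined by the equation.
print([round(v, 3) for v in trajectory(4.0, 0.3, 500)[-3:]])
```

Plotting such trajectories for a sweep of k values reproduces the familiar bifurcation diagram of May's model.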
Chaos theory as used in biology, physics and mathematics is about how to recognize, describe and make meaningful predictions from systems that exhibit that property.
Complexity theory (or the study of complex systems) is really about how a system which is complicated (usually by having many interactions) can lead to surprising patterns when the system is looked at as a whole. For example, each of the billions of water molecules does its own thing as it joins up with others while freezing; yet, given some constraints on what each of them can do, something recognizably snowflake shaped can emerge. Complexity theory is about how the interaction of billions of individual entities can lead to something that appears designed or displays an overall system-level pattern.^{4}
Our characterization of complexity and chaos theories was minimal. Our characterization of game theory will be even sketchier. We only mention it because we find that some of the desirable properties attributed to complexity are in fact better developed in game theory. Also, by making a comparison free of the rhetoric and slogans which sometimes substitute for understanding, we can come to see what is new and not new.
There are some excellent introductions to game theory suitable for students of management (e.g., Dixit & Nalebuff, 1991; Gibbons, 1992; McMillan, 1992), as well as others that are better suited to those with some undergraduate training in economics (e.g., Binmore, 1982; Kreps, 1990); there are also shorter introductions, designed for economists, which can help provide an introduction (e.g., Gibbons, 1997). Those are all excellent sources for developing an understanding of game theory.
One difficulty we face here is overcoming some management scholars' preconceptions of game theory. We have seen more than a couple of (unpublished) manuscripts by management scholars which equated all of game theory with one very particular game, the Prisoner's Dilemma. Game theory is far broader. Basically there are two kinds of decision (or action) situations involving several agents or decision makers. A situation is parametric if the decisions of the agents are independent of each other (although the outcomes may be an effect of interaction); while a situation is strategic if the actions or decisions of the agents depend on each other. In a perfect market, setting the price for a product is a parametric decision because no single individual decision can affect the overall market. In a duopoly, price setting is a strategic decision. Game theory is about strategic decision making in this sense. Dixit & Nalebuff (1991) provide a series of cases where game theoretic, or strategic thinking, is important.
We provide a rough typology of games that game theorists talk about.
Static games: These are games where all of the decisions to be made by all of the players are made simultaneously. However, because players can think about what the other players think about what they will do, these do, despite the name, involve a certain amount of feedback and self-reflection.
Dynamic games: In these games, players take turns.
Games of incomplete information: These are just like the static (or dynamic) games except that not all players have full knowledge of the parameters of the game, or they have limited (bounded) rationality.
For all of the above there are both cooperative and noncooperative games, leading to a typology of eight types of games. Additionally, all of these types can include 2-player games, 3-player games, or games involving any finite number of players. When we talk about game theory in general we mean to include the theory that describes all of these types of games, and not just the two-person static games with complete information that are so often used in examples for simplicity.
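To make the simplest cell of this typology concrete, here is a minimal sketch (our own illustration, using standard Prisoner's Dilemma payoffs) that finds the pure-strategy equilibria of a two-player static game with complete information by checking best responses:

```python
# Illustrative Prisoner's Dilemma payoffs: action 0 = cooperate, 1 = defect.
# row_payoff[r][c] is the row player's payoff when she plays r and the
# column player plays c; col_payoff[r][c] is the column player's payoff.
row_payoff = [[3, 0],
              [5, 1]]
col_payoff = [[3, 5],
              [0, 1]]

def pure_nash(row_payoff, col_payoff):
    """Return all (row, col) action pairs at which neither player can gain
    by unilaterally deviating -- the pure-strategy Nash equilibria."""
    n_rows, n_cols = len(row_payoff), len(row_payoff[0])
    equilibria = []
    for r in range(n_rows):
        for c in range(n_cols):
            row_best = all(row_payoff[r][c] >= row_payoff[r2][c]
                           for r2 in range(n_rows))
            col_best = all(col_payoff[r][c] >= col_payoff[r][c2]
                           for c2 in range(n_cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

print(pure_nash(row_payoff, col_payoff))  # [(1, 1)]: mutual defection
```

The same best-response check applied to a coordination game with two equilibria returns both of them, which anticipates the discussion of multiple equilibria in section 3.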
There is a special kind of game theory, evolutionary game theory, which, we will argue, is largely indistinguishable from much of the better work done under the name of complexity. Sigmund (1993) provides a very accessible introduction to some of the concepts of evolutionary game theory (as well as discussing complexity and chaos). Schelling (1978) provides an enjoyable and accessible discussion of some game theoretic problems and solutions which have a very strong ``emergent properties'' feel to them.
More interestingly, there is also what has become known as behavioral game theory, which is described in an outstanding review of the topic by Camerer (1997). Behavioral game theory takes as its agents real humans with their sense of fairness, cognitive limitations, and decision biases. Some of our recent work on understanding cooperation has been in this area (e.g. Goldberg & Markóczy, 1997).
We wish to discuss the particular properties attributed to chaos/complexity, and by doing so we wish to clarify and demystify them; we also wish to evaluate their desirability and novelty. We find that we can do this most effectively if we illustrate many of these properties by making a comparison to game theory. If by doing so we persuade a few readers to take a second (or even first) look at game theory, that will be a desirable side-effect, but it is not the principal aim.
It is no serious criticism of complexity to observe that one or two of its attractive points are not new. What would be a harsher criticism (which space and time do not allow us to investigate) would be if the complexity literature sought to mislead people about the uniqueness of these points. To the extent that complexity theorists or those supporting them invite the implication of uniqueness of various properties, they should be chastised.
This section is divided into two major subsections, the first describing similarities and the second describing differences. The division is somewhat arbitrary in that the things listed as similarities do also involve differences, and those that are listed as differences involve substantial similarities.
Probably one of the most attractive features of complexity/chaos theory is that it uses a system of dynamic feedback (e.g., Cheng & Van de Ven, 1996; Ginsberg et al., 1996; Van de Ven & Poole, 1995). The value of some variables at any given time is (partially) a function of the values of the same variables at an earlier time. How an organization works today is a function of (among other things) how it worked yesterday.
Game theory may at first appear to lack this dynamism because static games don't even involve time. Yet even in static games, game theory, through its recursive awareness, incorporates dynamics. A typical game might first involve reasoning of the form ``I know that she knows that he knows that I know that she knows...'' Game theory explicitly provides the tools for managing such a loop and determining (for many cases) what decisions the infinite expansion of such a loop would yield. The self-reflection of even static games gives them a dynamism and a feedback all their own.^{5}
The dynamism and feedback of chaos/complexity require iterations over time, and they are often based on trial and error. Sometimes it is the dynamism of the game theoretic type that matters, where trial and error is just too slow or ruled out for other reasons. Let us provide a somewhat extreme (and grossly simplified) example. See Kavka (1987, ch. 8) and especially Schelling (1980, part IV) for discussions that are not so grossly simplified.
Parable 2. Roughly speaking, the strategic policy during much of the Cold War between the US and the USSR was based on Mutually Assured Destruction (MAD). If a war were to start both participants would be devastated. Although there would be an advantage to whoever started first, neither side would have ``first strike capability''. This made it in the interest of both parties to avoid a war. Suppose, however, that one side started to develop technology that might make it able to survive such a nuclear exchange (e.g., President Reagan's ``Star Wars'' proposal). Once a working missile defense system is in place, there is no longer Mutually Assured Destruction. The US might then have first strike capability. Once a side has first strike capability it is in their interest to strike first. They need to strike before the other develops first strike capability. The side without first strike capability also has it in their interest to strike first, since by doing so they can at least reduce the damage they would suffer if the other struck first. Both know this about each other, and so both know that the other knows that they know it is best to strike first; so the first strike is bound to come soon, so push the button now!
This is not a very healthy situation. And it gets even worse. If one side is developing first strike capability, it is in the interest of the other to strike before the missile defense system is deployed. Naturally, since the first side knows this... Furthermore it doesn't even matter if the defense system isn't technologically feasible. If at least one side believes that the other believes that they believe that ... it might be feasible, then it is in the interest of both sides to strike first.
What can be done to prevent such an unstable and dangerous situation? The answer is the Anti-Ballistic Missile (ABM) Treaty of 1972 (and its predecessors). The ABM treaty paradoxically, but correctly, placed no limit on offensive missiles, but strictly limited the deployment of missile defense systems (and then only to missile bases) to ensure that neither side would have first strike capability.^{6}
The ABM treaty did not evolve out of many iterations of generations of learning what strategy works best. It had to work the first time (and thankfully it did). The paradoxical treaty that may have saved the world required thinking about feedback loops, and it required thinking about thinking. That is, it involved both feedback and self-reflection. While self-aware actors are able to reach solutions the first time just by thinking about feedback loops, most complexity models require many iterations before the shape of any equilibrium becomes clear.
Even with nominally static games, there can be a sense of dynamism. Game theory also explicitly includes dynamic games which include repetitions or turn taking. While we have illustrated a similarity (feedback and dynamics), we have also highlighted a difference (selfreflection) which we will come back to later.
There are some attributes which are associated with complex systems. Such systems are self-referential... They are nonlinear, so that a small change can lead to much larger effects in other parts of the system and at other times.
People often associate this feature of complexity/chaos theory with its reliance on nonlinear models and do not consider alternative theories that rely on linear models (e.g., Johnston, 1996; McKergow, 1996). Nonlinearity, however, is neither necessary nor sufficient for one kind of butterfly effect.
The butterfly effect is an often misunderstood and highly cited aspect of chaos/complexity. An article in the Economist (1998) on public misunderstanding of science mentions the butterfly effect:
[R]eading a book rich with subtle and unfamiliar ideas is a bit like having a custard pie thrown at you: the few bits that stick may not resemble the original very closely. James Gleick's book ``Chaos'' was clear and well-told, yet many readers came away with little more than the notion that a butterfly flapping its wings in Miami can cause a storm months later in New York. (Economist, 1998, p. 129)
The often discussed cases of standards battles^{7} provide a perfect example of perfectly linear and simple systems leading to butterfly effects.
Parable 3. Imagine a world with two kinds of people: those who produce keyboards and those who type or learn how to type. Let's suppose that a producer of keyboards can produce a ``qwerty'' keyboard arrangement or a ``dvorak'' keyboard arrangement. Let's also suppose that, all other things being equal, the dvorak arrangement is better for typing.^{8} It is in the typist's interest to learn the arrangement that most keyboards will use, and it is in the manufacturer's interest to produce the kind of keyboard that most people use.
This is a situation with two stable evolutionary equilibria. In one everyone is using or producing dvorak keyboards, and in the other everyone is using or producing qwerty keyboards. If everybody had perfect information and started from a position where there was no prior commitment to either of the two types, all would choose to use and produce dvorak keyboards. However, if there are initially a few consumers who prefer qwerty, or manufacturers who overestimate the number of people who prefer qwerty, the less optimal qwerty equilibrium may be reached instead.
In fact, very small differences in the numbers of initial consumers preferring qwerty (or just in the estimates of those numbers by some of the manufacturers) can lead to one equilibrium being reached instead of the other. Depending on the initial conditions and the amount of imperfection of knowledge in the system, something as small as a butterfly's wing could tip the balance one way or another.
The basic model has only to list people with their preferences. Those preferences can be on a linear, or even ordinal, scale. Yet still a small difference in the initial conditions can lead to large differences in the final state. So nonlinearity is not a necessary condition for the butterfly effect.
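A toy version of such a standards battle can be sketched in a few lines. The adoption rule and all of the numbers below are our own illustrative assumptions: each new adopter simply joins whichever standard currently has more users, and a one-person difference in the starting position determines which equilibrium the whole population reaches.

```python
def standards_battle(initial_qwerty, initial_dvorak, new_adopters):
    """Each new adopter picks whichever standard currently has more
    users (ties broken in favor of dvorak, the better layout)."""
    q, d = initial_qwerty, initial_dvorak
    for _ in range(new_adopters):
        if q > d:
            q += 1
        else:
            d += 1
    return q, d

# Two worlds identical except for a single early adopter:
print(standards_battle(11, 10, 1000))  # (1011, 10): qwerty locks in
print(standards_battle(10, 11, 1000))  # (10, 1011): dvorak locks in
```

Nothing in the update rule is nonlinear in the chaotic sense; the counts simply increment, yet the tiny initial difference decides the final state.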
Another example might be a somewhat simplified pool table which can be modeled with linear relations only. Yet small differences in a shot can lead to winning or losing the game.
If we return to the nonlinear dynamical population model discussed earlier, x_{i} = kx_{i-1}(1 - x_{i-1}), we will find that for some values of k the initial population, x_{0}, has no effect on the final outcome. For example, if k = 3.2 the population will end up alternating between 0.513 and 0.799 no matter what x_{0} was initially picked. This goes to show that nonlinear dynamic feedback is not a sufficient condition for the butterfly effect.
The lesson here is that nonlinear dynamics is neither necessary nor sufficient for small initial differences leading to large differences in output. However it is commonly thought to be necessary, and it is not accidental that people believe in a special relationship between chaos and the butterfly effect. That is because there is a very peculiar and fascinating type of butterfly effect which is unique to some parts of some nonlinear dynamical systems. If we return, for the final time, to that population model we can illustrate the special, or chaotic, type of butterfly effect. If we set k = 4.0 then the initial values for x_{0} matter greatly. Not only will small differences in x_{0} lead to different results, but there is no difference so small that it won't make a difference. When a system is behaving like this, there is no butterfly so small that it doesn't affect the outcome of the whole system. But remember, not all nonlinear dynamical systems behave this way, and those that do only do so for certain ranges of initial conditions.
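Both claims about the population model (insensitivity to x_{0} at k = 3.2, and extreme sensitivity at k = 4.0) are easy to check numerically. The sketch below is a minimal illustration; the starting values, iteration counts, and rounding tolerances are our own arbitrary choices:

```python
def iterate(k, x, n):
    """Apply the map x -> k * x * (1 - x) n times."""
    for _ in range(n):
        x = k * x * (1 - x)
    return x

def final_cycle(k, x0):
    """The (sorted) pair of values the population alternates between
    after a long transient, rounded to 6 decimal places."""
    x = iterate(k, x0, 1000)
    return sorted([round(x, 6), round(iterate(k, x, 1), 6)])

# k = 3.2: very different starting populations settle on the same
# 2-cycle (roughly 0.513 and 0.799); the initial condition washes out.
print(final_cycle(3.2, 0.3) == final_cycle(3.2, 0.4))  # True

# k = 4.0: populations that start only 1e-10 apart become completely
# uncorrelated within a few dozen generations.
x, y, max_gap = 0.3, 0.3 + 1e-10, 0.0
for _ in range(200):
    x, y = 4.0 * x * (1 - x), 4.0 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap > 0.5)  # True
```

The contrast is the whole point: the same nonlinear equation exhibits the chaotic butterfly effect for one value of k and none at all for another.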
The stranger kind of butterfly effect is interesting in its own right, but we do not see that it says anything about the sorts of models that management scholars should or shouldn't be exploring. Since our ability to measure initial conditions is so limited, it hardly matters which sort of butterfly effect is in place. But if we keep our models linear, we can more easily use them to examine what does occur. We will have a little more to say on this in section 3.2.1.
It appears that some people are attracted to the notion of the weird sort of butterfly effect because they think that it rules out predictions (e.g., Johnston, 1996; McKergow, 1996). Fortunately, they are wrong.
Parable 4. The Earth, the Moon and the Sun form a nonlinear dynamical system in exactly the way that leads to the weird sort of butterfly effect. No matter how precisely we measured the masses and velocities of the Earth, Moon and Sun (short of truly perfect measures, which are impossible) we could not predict their ultimate positions in the far future. We are not able to say when moonrise will be in London 1,000,000 years from today. But we can still predict quite accurately when it will be a few years from today based on today's measures.
The unpredictability that is inherent in some nonlinear dynamic models may take time to settle in. One cannot simply declare a model useless for prediction without making some calculation of how long it takes to diverge. Predictions of moonrise, tides and the weather all rely on nonlinear dynamical models, and they do get it right most of the time.^{9}
Furthermore, as will be discussed in section 3.1.6, even the behavior of a system which becomes chaotic very quickly is ``constrained'' in a way that does allow for some interesting and useful predictions. Chaos theory allows us to make predictions about systems that may at first appear random, but can, in fact, be described by simple models (Richards, 1990).
Along with unpredictability, many of those looking at complexity/chaos (and particularly chaos) claim that these systems are nondeterministic. Usually that claim is bolstered by pointing out the butterfly effect and problems of predictability. Chaos does have something interesting to say about determinism, but it is quite the opposite of how some people have taken it. Chaotic systems are deterministic. If we go back to May's example in parable 1, the equation is entirely deterministic. The state of the system at one stage is completely and entirely determined by the state at a previous time. These are deterministic systems, based on deterministic equations. What is interesting about chaos is that it shows how apparently random behavior can be described by completely deterministic systems. One of the founding papers in the chaos literature is titled ``Deterministic nonperiodic flow'' (Lorenz, 1963). Gleick's account of the work includes:
His colleagues were astonished that Lorenz had mimicked both aperiodicity and sensitive dependence on initial conditions in his toy version of the weather: twelve equations, calculated over and over again with ruthless mechanical efficiency. How could such richness, such unpredictability – such chaos – arise from a simple deterministic system? (Gleick, 1996, p. 23)
The FAQ (list of answers to ``Frequently Asked Questions'') for the Internet newsgroup sci.nonlinear also makes it clear that these systems are deterministic:
Dynamical systems are ``deterministic'' if there is a unique consequent to every state, and ``stochastic'' or ``random'' if there is more than one consequent chosen from some probability distribution (the ``perfect'' coin toss has two consequents with equal probability for each initial state). Most of nonlinear science – and everything in this FAQ – deals with deterministic systems. (Meiss, 1998, §2.9)
What attracts attention is not that these systems aren't deterministic (they are), but instead that these deterministic systems behave in ways that superficially resemble some nondeterministic systems.
If everything there is to know is known about the initial state of a system, then it is in principle possible to predict later states with perfect precision, assuming perfect computation. But it is not possible to know everything there is to know about a system, nor is it practical to compute things with perfect precision. These practical limits on determinism have been known for centuries, and are not new discoveries at all.^{10}
One of the appeals of the complexity approach is its ability to generate surprising (or at least nontrivial) macroscopic effects from the iterated interactions of many microscopic agents (Brown & Eisenhardt, 1997). Often complex structures (from which the approach derives its name) are visible at the macro level. These structures appear to emerge from the lower level interactions.
This emergent complexity is fascinating. But is it new or unique to the new paradigm of complexity? No, it is old hat. In the natural sciences, the laws of gases, black body radiation, and the shapes of galaxies are all old examples. Economists have long been looking at exactly these sorts of emergent phenomena. Game theorists have delighted in showing how some very simple games can lead to very complex looking behavior. The game theorist Schelling (1978) has a delightful book which lists many such examples, from the way that an auditorium fills up to the pattern of people switching on headlights as it gets darker.
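A threshold model in the spirit of Schelling's headlight example can make this concrete. The specific rule and numbers below are our own illustrative assumptions, not Schelling's: each driver switches on headlights once the fraction of drivers already lit reaches a personal threshold, and the macro-level outcome hinges on micro-level detail.

```python
def lights_on(thresholds):
    """Each driver switches on when the fraction of drivers already lit
    meets their personal threshold; iterate until no one else switches.
    Returns the number of drivers with lights on at the end."""
    n = len(thresholds)
    lit = [t <= 0 for t in thresholds]   # unconditional switchers
    changed = True
    while changed:
        changed = False
        frac = sum(lit) / n
        for i, t in enumerate(thresholds):
            if not lit[i] and frac >= t:
                lit[i] = True
                changed = True
    return sum(lit)

# Thresholds 0.00, 0.01, ..., 0.99: one unconditional switcher
# triggers a full cascade.
print(lights_on([i / 100 for i in range(100)]))  # 100

# Remove just the driver with threshold 0.01 and the cascade never
# starts: a tiny micro-level change, a large macro-level difference.
print(lights_on([0.0] + [i / 100 for i in range(2, 101)]))  # 1
```

The system-level pattern (everyone lit, or almost no one) is not stated anywhere in the individual rules; it emerges from their interaction, which is exactly the phenomenon the complexity literature celebrates.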
Some claim that game theory and complexity theory deal with equilibria in very different ways:
Even the most complex game theoretic models, however, are only considered useful if they predict an equilibrium outcome. By contrast, chaotic systems do not reach a stable equilibrium; indeed they can never pass through the same exact state more than once. (Levy, 1994, p. 170)
But contrary to popular belief, game theory, complexity theory and chaos theory say more or less the same things about equilibria. There are some differences, but those differences don't matter a great deal in light of the similarities. First it is crucial to clarify a few concepts.
An equilibrium can have any degree of stability. Some equilibria are very unstable (see Figure 1a), others can be very stable (Figure 1b), while yet others can be moderately stable (Figure 1c). A very small amount of noise or turbulence can take a system out of an unstable equilibrium; only a large disruption or shock will take a system out of a very stable equilibrium; and a moderate disruption can take a system out of a moderately stable equilibrium. What is important to note here is that all of the theories under consideration share this. Some games can have moderately stable equilibria; some complex systems can have moderately stable equilibria; some nonlinear dynamic systems can have moderately stable equilibria. The perspectives do not disagree on this.
Another point on which they don't disagree is that all allow for multiple equilibria. Some games will have multiple equilibria; some complex systems will have multiple equilibria; some nonlinear dynamic systems will have multiple equilibria. These multiple equilibria will each have their own degree of stability. The tender trap discussed in section 3.1.2 has three equilibria, two of which are evolutionarily stable (Figure 1d).
A third point of agreement is that each of these theories allows for systems that have no equilibrium. While it is true that the simplest kinds of games (two-player complete-information static games) are guaranteed to have at least one equilibrium, that does not always hold for other types of games (e.g., the ``Dollar auction'' has no equilibrium (Poundstone, 1992)). Moreover, even for these simplest types of games, the equilibrium might involve a ``mixed strategy'' which behaves probabilistically (e.g., with a rule like ``pick action A with 70% probability'').
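For the simplest case, a 2x2 game, a mixed-strategy equilibrium can be computed directly from the indifference conditions. The sketch below is our own illustration, using the standard matching pennies payoffs (a game with no pure-strategy equilibrium):

```python
def mixed_equilibrium_2x2(row_payoff, col_payoff):
    """Mixed-strategy equilibrium of a 2x2 game: each player's mixing
    probability is chosen to make the *other* player indifferent
    between her two actions."""
    r, c = row_payoff, col_payoff
    # probability the row player puts on action 0 (makes column indifferent)
    p = (c[1][1] - c[1][0]) / (c[0][0] - c[0][1] - c[1][0] + c[1][1])
    # probability the column player puts on action 0 (makes row indifferent)
    q = (r[1][1] - r[0][1]) / (r[0][0] - r[0][1] - r[1][0] + r[1][1])
    return p, q

# Matching pennies: the row player wins when the coins match, the
# column player wins when they differ.
row = [[1, -1], [-1, 1]]
col = [[-1, 1], [1, -1]]
print(mixed_equilibrium_2x2(row, col))  # (0.5, 0.5)
```

A player following such an equilibrium behaves unpredictably from move to move even though the strategy itself is perfectly well defined, a point worth keeping in mind when unpredictability is claimed as unique to complexity/chaos.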
A fourth similarity is that all of these views accommodate dynamic equilibria. A system is in a cyclical equilibrium if it goes from, say, state s_{i} to state s_{j} and eventually back to s_{i}. So if it ever gets into one of the states in that cycle, and the equilibrium is sufficiently stable, it will cycle around forever.
Nonlinear dynamic systems can, uniquely, have a type of equilibrium called a strange attractor, which resembles a cyclical equilibrium with the important exception that the system never actually repeats itself. As the system goes from state to state it stays (depending on how stable the attractor is) within a set of possible states. So, while a particular path or state is unpredictable, the set of states that the system can go to is not arbitrary and can be predicted.^{11} In addition to the strange attractor, which is unique to nonlinear dynamical systems, there are two differences in the ways that equilibria are dealt with. The first difference is that most of the people involved with game theory think it worthwhile to calculate the equilibria of a system and to show how stable those equilibria are, if they exist; many people involved in complexity theory think it not worthwhile to calculate the equilibria, but best instead to run computer simulations until the system arrives at a reasonably stable equilibrium. Note that this is not an actual difference in the theories, but a difference in the people who use them. One can take identical models and either calculate the equilibria or run simulations, or both.
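The logistic map is the textbook illustration of this combination of path unpredictability and set predictability. The sketch below is our own hedged illustration (parameter values chosen for convenience): two trajectories that start a billionth apart diverge completely, yet each remains inside the unit interval.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a textbook nonlinear dynamical
    system; at r = 4 its long-run behavior is chaotic."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9      # two initial states a billionth apart
max_gap = 0.0
for _ in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
# The trajectories diverge (the particular path is unpredictable), yet
# each state stays inside the unit interval (the reachable set is not).
```

After a hundred steps the two runs bear no resemblance to one another, but both are still confined to [0, 1]: the attracting set, unlike the path, is predictable.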
There are advantages to both methods. If the calculation of equilibria is done correctly, one knows that all of them have been found, whereas with a computer simulation one only knows that one reasonably stable equilibrium has been found and may miss others.^{12} Additionally, other properties of the equilibria can be made clearer through a game theoretic analysis in ways that may not be available through a simulation. The advantage of a simulation is that it is easier. Sometimes the model is so complicated that it is extremely difficult to do anything else; at other times a simulation can be an aid to the calculation, since it can often tell us what one equilibrium is.
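The contrast can be illustrated with a small simulation; the payoffs below are invented for illustration. In this stylized coordination game a calculation reveals all three equilibria at once (all-A, all-B, and an unstable interior one at p = 1/3), while any single simulation run finds only the stable equilibrium downhill from its starting point:

```python
def replicator_step(p, payoff_a, payoff_b):
    """One discrete replicator step: the share p of agents playing A
    grows in proportion to how well A pays relative to the average."""
    fa, fb = payoff_a(p), payoff_b(p)
    avg = p * fa + (1 - p) * fb
    return p * fa / avg

# A stylized coordination game (payoffs invented for illustration):
# each strategy pays more the more agents already use it.
def payoff_a(p):
    return 2 * p        # A pays twice the share of A-users

def payoff_b(p):
    return 1 - p        # B pays the share of B-users

def simulate(p, steps=500):
    for _ in range(steps):
        p = replicator_step(p, payoff_a, payoff_b)
    return p

high = simulate(0.4)    # drifts up to the all-A equilibrium (p near 1)
low = simulate(0.2)     # the same dynamics drift down to all-B (p near 0)
```

A researcher who only ran the simulation from p = 0.4 would report a single equilibrium and miss both the all-B equilibrium and the unstable boundary between the two basins.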
Robert Axelrod, an important and clear-thinking developer of complexity, describes complexity simulations (``agent-based modeling'' in his terms) extremely modestly.
Agent-based modeling is a third way of doing science. Like deduction, it starts with a set of explicit assumptions. But unlike deduction, it does not prove theorems. Instead, an agent-based model generates simulated data that can be analyzed inductively. Unlike typical induction, however, the simulated data come from a rigorously specified set of rules rather than direct measurement of the real world. Whereas the purpose of induction is to find patterns in data and that of deduction is to find consequences of assumptions, the purpose of agent-based modeling is to aid intuition. (Axelrod, 1997, p. 45)
Axelrod may be being a bit too hard on the approach he is advocating, in that if he is correct it abandons the best of deduction (theorem proving) and the best of induction (inference from the real world) and combines what remains into a technique for aiding intuition. However, the use of explicit models, which we will mention again in the concluding remarks, is what makes this approach more valuable than many other means of aiding intuition.^{13}
Returning to the one genuinely new element that complexity/chaos brings to the discussion of equilibria, the strange attractor: we have yet to see how this particular entity is useful for the study of the social sciences.
Any one of the types of systems might have several (or no) equilibria. Some equilibria might be stable, but without there being a path from some particular state to that equilibrium. This property is not unique to chaos/complexity, but arises in some of the relatively simple games discussed by game theorists. The tender trap, section 3.1.2, with partial information is one of these. Once you get stuck with one standard, it is difficult to move to a more desirable equilibrium because the intermediate steps are unavailable (see also Figure 1d). So, again, complexity/chaos offers us nothing new here, except that it may have served to introduce people unfamiliar with these concepts to them.
Below we discuss differences between game theory and complexity/chaos theory. Many of these are differences in practice as opposed to core differences in the theories themselves. Paradoxically, many differences have already been discussed in the sections reviewing the similarities. We discuss differences in this section mostly because it allows us to highlight undesirable properties of complexity and chaos for the study of management.
As we have suggested above, the attractiveness of nonlinearity seems to be the desire to produce models that exhibit the butterfly effect. We have already argued that nonlinearity is neither necessary nor sufficient to achieve the simple form of this effect. Furthermore, it is not enough to show that nonlinearity exists in the world; to add it to a model, this complication must be individually and specifically motivated. Its proponents must show that it is necessary for a useful model. Those promoting something that makes models so much harder to handle need to do two things. (1) They need to provide good theoretical reasons for the basic nonlinear equations they wish to add. Plausibility arguments for those equations are not enough if one can also provide a plausibility argument for a linear alternative. (2) They must show that after their modification from linear to nonlinear they can achieve some solid predictive result which would not be available otherwise. We do not feel that even the first of these has been done for the case of nonlinearity in the social sciences, much less the second. The situation has not changed since Elster (1989, p. 3) made this point.
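The claim that nonlinearity is neither necessary nor sufficient for sensitivity to initial conditions can be checked in a few lines. The rules below are our own illustrative sketch, not drawn from any cited model:

```python
# Nonlinearity is NOT necessary for sensitive dependence: a purely linear
# rule amplifies a tiny perturbation exponentially.
x, y = 1.0, 1.0 + 1e-9
for _ in range(40):
    x, y = 2 * x, 2 * y                 # linear: double the state each step
linear_gap = abs(x - y)                 # the billionth grows by 2**40

# Nonlinearity is NOT sufficient: this nonlinear rule damps perturbations.
u, v = 0.3, 0.3 + 1e-9
for _ in range(40):
    u, v = 0.5 * u * (1 - u), 0.5 * v * (1 - v)   # logistic map, r = 0.5
nonlinear_gap = abs(u - v)              # shrinks toward zero
```

The linear rule blows a one-billionth perturbation up by a factor of 2^40, while the nonlinear rule crushes it toward nothing; sensitive dependence is a property of particular systems, not of nonlinearity as such.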
I am not sure, however, that [nonlinearity] is the right direction in which to look for sources of unpredictability [in the social sciences]. The nonlinear difference or differential equations that generate chaos rarely have good microfoundational credentials. The fact that the analyst's model implies a chaotic regime is of little interest if there are no prior theoretical reasons to believe in the model in the first place. If, in addition, one implication of the theory is that it cannot be econometrically tested there are no posterior reasons to take it seriously either.
To our knowledge there have been only two arguments in the management literature for introducing nonlinear equations into models. One is that nonlinear dynamic systems involve both positive and negative feedback loops (e.g., Cheng & Van de Ven, 1996; McKergow, 1996). People correctly want models with feedback loops and seem to think that if there are feedback loops there must be a nonlinear dynamical system. In some unpublished manuscripts we have seen, authors have explicitly insisted on nonlinearity for the sake of feedback loops, yet have gone on to work with models that are entirely linear.
The other reason that is given to motivate nonlinearity is unpredictability (McKergow, 1996). We have argued that nonlinearity is neither necessary nor sufficient for unpredictability. Even if it were, it could only be used as a motivation for the existence, somewhere in the interactions, of nonlinearity. It cannot be used to motivate a particular nonlinear interaction which must be either theoretically or empirically motivated.
Some studies (Cheng & Van de Ven, 1996; Richards, 1990) which actually looked for and found the very specific sort of unpredictability that comes with some parts of some nonlinear dynamical systems, did so by filtering out every linear relationship available from the data. Once all linearity was filtered out, the only thing that could remain was true randomness and nonlinear variation. They found that there was a non-random nonlinear-type component to the variation. But it must be recalled that this was done after filtering out all linearity. If you filter out everything except for what you are looking for, then no matter how small the object of your search turns out to be you will find it. Furthermore, we have no reason to believe that the nonlinearity is a fact about the system under observation. Like the randomness, it could have been introduced at any stage in the process from data collection onward. These are interesting studies, but until some difficult follow-up work is done, the best that can be concluded is that some of the apparent randomness in the data that they analyzed may be the result of simple interactions.
In the standard complexity examples that have been used, agents are simple-minded entities that follow simple-minded rules. In game theory, agents can anticipate the future and the consequences of their actions and the actions of others. To ignore the ability to reflect may be to ignore exactly the sorts of things that make human systems interesting. Systems without the ability to reflect or anticipate may be extremely interesting; after all, evolution cannot look to the future, yet it can build agents which can. But when we talk about human systems, it seems reasonable to leave open the possibility, as game theory does, that the agents think about their situation and what they are doing.
Game theory, like complexity theory, works on the interactions between fairly simple and abstract agents. The core ontologies of both theories are very simple. In fact, the only real difference in the ontologies of the theories is that in game theory the agents can make conscious decisions, aware, to some extent, of their own predicament and that of others.
Is it important to consider the reasoning of self and others in interaction when trying to model a system with many interactions?
Parable 5: You and someone else (let's call her Alice) are to meet at 12 noon on a particular day on Manhattan Island, but you forgot to arrange a meeting place. Neither of you lives there or has an office on the island. Neither of you carries a mobile phone. Where would you go? Before reading what studies show the most common answer to be, you should stop and think about the options yourself. Write down an ordered list of locations. In a series of studies of questions like this (Schelling, 1980), it appears that the overwhelming first choice is Grand Central Station. While an impressive piece of architecture, it is not really many people's favorite place to wait for other people. Very few of us would actually arrange to meet someone there, but it is where we would go when the meeting place wasn't arranged. When you thought about this problem, you must have thought about what Alice would think about what you thought. There is reflection on others reflecting on your own state of mind.
Through self-reflection, humans are able to exclude some highly unlikely options from their decisions early on, substantially reducing the number of possible outcomes. Agents of the kind described by complexity/chaos theory, by contrast, would just move all around New York, and the chance that they would meet would be very small indeed. Self-reflection, and reflection on others, clearly plays the decisive role in this example.
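A crude Monte Carlo sketch (with an invented number of candidate meeting spots) shows how badly mindless agents fare at this coordination problem:

```python
import random

def meet_randomly(n_locations, trials=10_000, seed=0):
    """Estimate the chance that two agents who each pick one of
    n_locations uniformly at random end up in the same place."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(n_locations) == rng.randrange(n_locations)
               for _ in range(trials))
    return hits / trials

# With 1000 plausible meeting spots (an invented number), mindless agents
# almost never meet; agents who each reason about the other's reasoning
# both pick the focal point and meet with certainty.
p_random = meet_randomly(1000)
```

The random agents meet with probability roughly one in a thousand; the reflective agents, by converging on Grand Central Station, meet every time.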
Some might argue, however, that although in certain situations self-reflection might be necessary, most organizational activities are routine and do not require self-awareness and foresight. Some people might feel that they are ``just a cog in a wheel of a big machine,'' but even that makes them profoundly different from a cog in a wheel of a big machine. Real cogs in real wheels never think of themselves as such.
Even where an organization is designed to minimize its members' understanding of it, people will try to figure out what their place is, as the following example suggests.
Parable 6: Bletchley Park (BP) was the site of UK code breaking during the second world war. It employed at various times more than 12,000 people. At the same time, the code breaking activity (and particularly its substantial successes) had to be kept strictly secret. To a very large extent, BP was an information processing center. Some of the steps in processing the information involved people, and some of the steps involved machines. This seems the perfect setting to have people act as mindless agents performing their small part and not thinking about the whole picture or even where they fit in. While this may have been more true of BP than of any other organization, it appears that it didn't work that way. Reports from people who worked there indicate that while they were not really supposed to know what was going on outside of their own narrow activities, they did have a sense of what was going on. In fact it appears that in order to maintain commitment, people were even deliberately shown where their work fitted in. The operators of the Turing bombes performed ``souldestroying but vital work on the monster deciphering machines'' (Payne, 1993, p. 132) used in some steps of Enigma decoding. The operators were specifically taken to the British Tabulating Machine Company at Letchworth ``to watch [the machines] being made and to encourage the workers, although we thought their conditions were better than ours. It was a surprise to see the large number of machines in production'' (Payne, 1993, p. 135). Apparently it was felt that various people needed to see other bits of the operation (or at least some of the other people involved) to be encouraged. Also, for the operators to have been surprised by the number of machines being built, they must have had a sense (even if incorrect) of the scale of the whole operation.
The Bletchley Park example illustrates that even where it might appear to be good (and possibly possible through secrecy regulations) for an organization to have people unaware of the big picture and their place in it, people in organizations just aren't that way. Awareness is ever present.
There will, of course, be some models in which individual decision rules can be simple and mindless instead of complicated and mindful. Game theory, and in particular, evolutionary game theory, has exactly the ability to model simple agents where that is called for. Game theory, however, is uniquely positioned to model mindfulness and selfawareness in decision making and the systems that emerge from that in the many cases where that is appropriate.
It appears that one of the appeals of complexity/chaos is that it somehow rejects reductionism:
These results [of complexity] are rather counterintuitive to those of us brought up on the reductionist assumption that knowing all about the parts will enable us to understand the whole. In complex systems the whole shows behaviors which cannot be gleaned by examining the parts alone. (McKergow, 1996, p. 722)
One widely distributed version of the call for papers for a special issue on complexity for the journal Organization Studies stated
Complexity theorists share a dissatisfaction with the `reductionist' science of the past, and a belief in the power of mathematics and computer modeling to transcend the limits of conventional science...
Unfortunately for those who seek anti-reductionism in complexity/chaos, it just isn't there in any interesting sense.
There are three common uses of the word ``reductionism'', either as a philosophy or as a pejorative: (1) explaining wholes in terms of their simpler parts and the interactions among those parts; (2) explaining wholes in terms of the parts alone, ignoring the interactions; and (3) explaining high-level phenomena directly in terms of the lowest-level parts, skipping the intermediate levels.
In this section we ask in what senses of reductionism game theory and complexity/chaos are reductionistic, and whether reductionism in these senses is a good or a bad thing. The answers we will give are that reductionism type 1 is a good thing, type 2 is not generally a good thing, and type 3 is something we will not pass judgment on; game theory and chaos are reductionistic in the good ways, while complexity may be overly reductionistic. In much of our discussion of reductionism we follow Dennett (1995, pp. 80-83).
We agree with Dennett (1995) that reductionism type 1 is a good thing. Any theory or explanation which is not reductionistic in that sense is simply question begging or mystical. An explanation that is not in terms of simpler things or things already explained and the interactions between them fails to be an explanation.
Reductionism type 2 is simply not very interesting. While there are some systems that are reductionistic in that sense and many more that aren't, it does not present any interesting or disputed boundary between different ways of investigating the world. By this type of definition an analysis that uses linear regression would be reductionist while one that uses ANOVA would be non-reductionist. We suspect that this notion of reductionism is little more than a straw man. Neither game theory, complexity theory, nor chaos are reductionist in this sense. They all deal with interactions.
Reductionism type 3 is what Dennett (1995, p. 82) calls ``greedy reductionism''. It is the attempt to explain things without recourse to intermediate levels. A meteorologist who tried to explain the weather directly in terms of the motions of billions of molecules instead of talking about the intermediate levels of air masses, humidity and the like might be guilty of greedy reductionism. A slightly less pejorative term for this might be ``eliminative reductionism''.
In the rest of this discussion we will ignore the straw man of reductionism type 2 and will just consider the two other types. What needs to be emphasized is that the study of chaos is in no way a retreat from reductionism.
Before developments in chaos theory, certain nonlinear systems were simply not studied because they were too hard. Chaos theory has allowed researchers to make some sorts of predictions about the attractors and equilibria of these difficult systems. Chaos does not represent a retreat from the domain of Newtonian determinism, but an advance. Chaos does not say that there are fewer things that we can talk about and make predictions about; instead it gives us tools to talk about things that previously were too difficult to consider. Chaos theory expands the domain of reductionist (type 1) analysis.
When one observes collective behavior that exhibits instability over slight variations one typically assumes that an explanation must be equally complex. Traditionally, one expects simple behavioral outcomes from simple processes, and complex outcomes from complex processes. However, recent developments in chaotic dynamics show that a simple deterministic system [emphasis added] that is nonlinear can produce extremely complex and varied outcomes over time. (Richards, 1990, p. 219)
Chaos then, is about finding simple underlying models for complicated phenomena. What then about complexity? One contrast between game theory and complexity theory is that the latter usually relies on very simple agents with no selfreflection as we've discussed in section 3.2.2. Game theory allows for more sophisticated agents. Complexity is very specifically about generating macroscopic level phenomena directly in terms of the many interactions of simple parts, often with little concern for developing theories about intermediate constructs. Clearly complexity is more reductionistic in the sense of type 3.
Anyone who seeks anti-reductionism in chaos or complexity is bound to be disappointed. For us, however, their reductionism is appealing.
Our critique of complexity/chaos has been with the rhetoric and with incorrect claims about what they entail. Once the rhetoric has been removed and the real tools are seen for what they are, we see real value to the study of management. Using complexity/chaos means constructing explicit models of the systems in question. In another domain, theoretical biology, Maynard Smith (1989) describes the utility of formal models (as opposed to what Saloner (1991, p. 127) calls the ``boxes-and-arrows variety'').
There are two reasons why simple mathematical models are helpful in answering such questions. First, in constructing such a model, you are forced to make your assumptions explicit; or, at the very least, it is possible for others to discover what you assumed, even if you are not aware of it. Second, you can find out what is the simplest set of assumptions that will give rise to the phenomenon that you are trying to explain.
Saloner (1991) points out additional benefits of formal models, including that they can be built upon and that they can lead to novel insights through surprising results.
We suspect, however, that many management scholars who currently find complexity/chaos appealing will find it less appealing, and even distasteful, if we do manage to persuade them that complexity/chaos is not a challenge to traditional science, but instead are analytical tools which allow traditional science and modeling to be extended to domains which were previously too difficult. Our suspicion has been recently confirmed by the announcement for a new management journal on complexity, Emergence: Complexity Issues in Organization and Management.
The journal will be a quarterly and sponsored by the New England Complex Systems Institute... Aim is for qualitative work (for those of you who like math formulas this will not be your outlet). (Lissack, 1998)
If the explicit modeling of complexity is removed, it is disturbing to imagine what will actually remain.
It may seem puzzling that a field is willing to embrace complexity theory while making little use of its near equivalent, game theory. We have neither the data nor the space to provide a detailed argument as to why this discrepancy exists, but that won't prevent us from engaging in some brief speculation.
Many social sciences are under ``threat'' from the expansion of the economist's way of thinking and analysis into their domains. While the expansion has been going on for a while, it has been described explicitly by Hirshleifer (1985). At a recent workshop (ELSE, 1997) on the evolution of utility functions, involving economists, biologists, some cognitive psychologists and anthropologists, and three management scholars, the economist John Hey expressed some disappointment. He had expected to learn some methods and perspectives from the biologists, but instead discovered that they were just doing some (dated) economics.
Fear of this expansion can lead to management scholars trying to build walls around their domain by exaggerating the differences which ``incites a level of fear'' (Hesterley & Zenger, 1993, p. 497). This would include demonizing the encroaching forces. Markóczy & Goldberg (1997, p. 409) argue that management scholars should be doing exactly the opposite.
We will have to learn how to enter into dialogue with scholars from other social sciences. Even if we ultimately reject the assumptions and approaches of those fields, we need to understand why those approaches are attractive to other scholars instead of merely searching for ways to dismiss them quickly. This will be a difficult transition and it will meet with much internal resistance. But it is necessary. As soon as this interdisciplinary group extends their study of cooperation to organizations, they will develop theories of organizations and behavior within them which will be attractive to anthropologists, biologists, cognitive scientists, economists, philosophers, and psychologists. As they are making great gains in discovering the nature of cooperation, management scholars ought to be working with them.
We believe that a renewed interest among management scholars in modeling human systems provides a step towards that interdisciplinary integration. Those who resist the encroachment of economics (or fields which have adopted many of their methods) will be reluctant to build explicit models, or will try to call them other things when they do.
Everyone loves a selfreferential paradox: A rule or a system that turns in on itself or proves its own impossibility. If that system is thought to be cold, cruel, an authority, and a power then it is even better if it contains the seeds of its own destruction. Those who maintain this view of traditional science will naturally delight in the claims of complexity/chaos.
From a theoretical perspective, chaos theory is congruous with the postmodern paradigm, which questions deterministic positivism as it acknowledges the complexity and diversity of experience. (Levy, 1994, p. 168).
We accept neither their view of science nor those claims of complexity/chaos. Chaos and complexity do not pose a serious challenge to science and prediction; and science has always been concerned with the interactions of parts.
One objection occasionally raised against game theory is that it requires absurd assumptions of rationality. This simply isn't true. The introductory exercises and examples usually do involve very strong rationality assumptions, but once one understands how to use game theory, it is possible to relax those assumptions substantially (Camerer, 1991). Evolutionary game theory involves agents, such as bees and trees, with exceedingly limited rationality; and behavioral game theory specifically seeks to work with agents that have empirically verified types of human rationality (Camerer, 1997).
In looking at some of the literature on chaos/complexity we find misleading and incorrect statements. But we also find that many of these come not from the misinterpretations of management scholars but from the popularizations of complexity/chaos itself. When complexity proponents make statements suggesting a radical new paradigm for all sciences, including the social sciences, it is no wonder that some of those in search of a radical new paradigm will follow. Some complexity workers strongly exaggerate the difference between what they do and what evolutionary game theorists do. At a seminar of the Research Centre for Economic Learning and Social Evolution (March, 1997), John Holland argued forcefully that the model he presented could not be treated game theoretically because ``the rules changed''. However, even a superficial glance at his model showed that what he called ``rules'' maps onto what game theory calls ``strategies'', which can and do change.
In other cases, popularizers have been more careful, but have still left things open to misunderstanding. For example, most of the discussion by Gleick (1996) treats the issues of determinism and nondeterminism correctly. However, Gleick does quote people whose statements suggest that chaos overturns determinism. Gleick does not appear to notice the contradiction and takes no corrective action. People seeking a radical challenge to traditional science will pick up on those few quotes and entirely ignore most of the rest of the book's insistence that those systems are deterministic.
It is natural for any stream of research to overstate the differences between itself and others, but it is also the responsibility of the rest of the academic community to see through the rhetoric, examine the real claims, and identify what is of real value. We hope we have helped fulfill that responsibility.
To add one more paradox to this paper, we note that our challenge to complexity and chaos as reported in the management literature is partially motivated by our sympathy with chaos and some parts of complexity in general. The benefits to fields outside the social sciences of the study of nonlinear dynamical systems are too numerous to mention. Some of the best work in complexity (e.g., Axelrod, 1997; Schelling, 1978; Sigmund, 1993) eschews the worst of the rhetoric and has helped raise awareness of what can be achieved with very simple agents. It is our appreciation of the better parts of this work which drives us to discourage management scholars from using misunderstood slogans from these fields, and to encourage people to show these areas due respect by either really learning about them or remaining silent.