Measuring cognition is not exotic

Lívia Markóczy
Cranfield University
+44 1234 751122 (ext 3757)

Jeffrey Goldberg
Cranfield University
+44 1234 754200 (ext 2826)

The study of strategy making and implementation often requires an understanding of the thinking of the strategist, and this need has sparked a quest for ways of understanding managerial cognition. The apparent difficulty of understanding managerial cognition, however, has raised a barrier to pursuing work on the thinking of the individual strategist.

In this presentation, we shall argue two closely related points which have been just below the surface in much of our work over the past several years. Each point, if correct, suggests that management scholars should not be put off by attempts to measure cognition. Both points, however, run counter to tacit assumptions that have been part of the managerial cognition literature. So, while our conclusions should be considered good news, they may also meet with a certain amount of resistance.

Perceived difficulties in measuring managerial cognition have led to the development of entire research streams that rely on poor substitutes for cognition. The validity of some of that work has recently been called into question (Markóczy, 1997).

Rhetoric and reality: How deep is the cognition that we need to measure?

On occasion researchers and research streams can become misled by their own terminology. In paper after paper people have talked about the need to get at things like "cognitive base", "deep structures", "mental representations", and "cognitive maps". The presumption is that measuring these exotic and mysterious sorts of things will require exotic and mysterious sorts of measurement tools. Certainly the tools developed can be extremely cumbersome. Our own contribution to causal mapping techniques (Markóczy & Goldberg, 1995) has been described as "complex and tedious" (Hitt et al., 1998, p. 30), which frankly is putting it mildly. The specialized techniques, where they have been developed, can be off-putting for a variety of reasons.

If we want to overcome the barriers imposed by such problems, we need to do at least one of the following:

  1. Develop new practical techniques for getting at what we want to measure.
  2. Find existing techniques which do the job.
  3. Show that what we are actually trying to measure needs no special techniques at all.

We believe that the last two options are the most fruitful, and for the first point of the presentation we argue that the last option is, indeed, correct.

How is it, then, that we need no special cognitive techniques to get at things like an individual's "cognitive base"? Quite simply by recognizing that while people may use terms like "cognitive base", those terms as actually used are perfectly operationalized by collected reported beliefs. If we look at the cognitive mapping methods that have been developed or used by management scholars, we will see that at the level of elicitation they are nothing more than reported beliefs.

These particular cognitive techniques then involve iterated contortions of reported beliefs to produce some sort of visual representation of an individual's (or group's) "cognitive map". The output, while possibly pretty, is often difficult to use in studies designed to detect or test generalizations across individuals (or groups). Again, for an example, our own contribution to these techniques requires an analysis procedure which is far more "complex and tedious" than the map creation procedure. Despite all the effort, such techniques are ultimately based on reported beliefs and have all the problems that reported beliefs have. As Robert Axelrod, the inventor of causal maps, said:

The cognitive mapping approach is, of course, in no better (or worse) position in this regard than any other procedure that relies on a person's conscious and monitored linguistic behavior to make inferences about his or her beliefs. (Axelrod, 1976, p. 252)

What appears to have happened over the years is that people have forgotten Axelrod's warning and come to imagine that, since the range of techniques and the general discussion is about "cognition", something deeper than reported beliefs is being discussed. We have been misled by our own terminology.

Once we recognize the situation, the immediate temptation is to try to develop or look for techniques that are somehow cognitive in a deeper sense. But before jumping to that conclusion we should ask whether, in fact, reported beliefs are deep enough. Our answer to that question is: to the extent that work based on reported beliefs has worked so far for the problems we have been interested in, reported beliefs are good enough.

Once we see that reported beliefs have been good enough for much managerial cognition research to date, irrespective of the terminology used, we are in a position to consider using the mundane family of techniques that has long served studies based on reported beliefs: the questionnaire. A well designed questionnaire has the advantage of producing results which are easier to analyze and are generally more powerful. Many researchers who have been using questionnaires to capture the reported beliefs of managers have been doing research in managerial cognition all along!

None of this is to say that the questionnaire is preferable to cognitive mapping techniques in all cases. It is to point out that the questionnaire has no less access to "mental models" or "cognitive bases" than most cognitive techniques as usually practiced.

When reported beliefs won't do

Sometimes reported beliefs won't do. As researchers in behavioral decision making know well, the patterns of decision making and information processing that people display in field studies come much closer to the patterns they display in the laboratory than to their self-reports. There are many aspects of human cognition which are simply very poorly studied by exploring textually reported beliefs.

The techniques used by behavioral economists, cognitive psychologists, and behavioral decision making researchers all grow out of work in cognitive psychology. Most of them involve some sort of experiment, but many involve carefully designed field studies as well.

Some of the discussion of studying "managerial cognition" in the context of strategic thinking has ignored much of that research, and again, we believe that this is a consequence of being misled by our own terminology. The phrase "managerial cognition" suggests, somehow, that it should be studied entirely differently from the way that human cognition is studied.

In conclusion

The argument presented here should be taken as good news. It is possible to study the thinking and cognition of strategy makers, and it is possible to do so without having to develop whole new sets of techniques. Where reported beliefs are good enough (as they often are in many of the cases where cognitive mapping techniques might initially seem appropriate), a well designed questionnaire may get at the desired aspects of "cognition" best. In those cases where reported beliefs are suspect because we are considering cognitive processes which are not entirely available to introspection, various techniques, often experiments, can be used to understand the workings of those processes.

It is a fine thing to take our terminology seriously, but on occasion, doing so in an inconsistent manner may mislead us, and can even induce us to build artificial and unnecessary barriers to research.


References

Axelrod, Robert (ed) (1976)
Structure of Decision: The Cognitive Maps of Political Elites, Princeton University Press, Princeton, NJ

Hitt, Michael A., Javier Gimeno and Robert E. Hoskisson (1998)
Current and future research methods in strategic management, Organizational Research Methods, 1, No. 1, p. 6-44

Markóczy, Lívia (1997)
Measuring beliefs: Accept no substitutes, Academy of Management Journal, 40 No. 5, p. 1228-1242

Markóczy, Lívia and Jeff Goldberg (1995)
A Method for Eliciting and Comparing Causal Maps, Journal of Management, 21, No. 2, p. 305-333