Published Online: https://doi.org/10.1287/stsc.2019.0077

While not a new methodological approach, the use of formal models with an emphasis on credible behavioral foundations appears to have been growing recently in strategy and the broader management domains.1 The “birth” of formal behavioral models within management can arguably be traced to the publication of Cyert and March’s (1963) A Behavioral Theory of the Firm. However, the initial rate of adoption and diffusion of this approach was modest, and only recently does it seem to have gained demonstrable critical mass.

Broader adoption suggests legitimation of the “form.” However, it is worth considering the issue of form (i.e., modeling as a methodology) not just in some diffuse, abstract sense but in the spirit of Lave and March (1975): What might constitute some basic principles to which the method should adhere? As authors and individuals who have served in senior editorial roles at Management Science, Organization Science, and Strategy Science, we have a point of view. We are not claiming a unique hold on wisdom in these matters, but, given a large experience base, we feel sufficiently immodest to want to share our point of view, with the thought that this perspective, while inevitably containing some individual and collective biases, might be useful if made public and common knowledge. While our motivating context is research in strategy and management, per Lave and March (1975), the issues raised here arguably generalize more broadly to formal models, particularly those with behavioral foundations, in the social sciences.

As the title of this editorial suggests, “A model is a model.” That is, a model is not a full description of a particular context or phenomenon but an abstract rendering that is amenable to design and manipulation by the modeler. This simple observation has a number of implications.

A direct implication of the observation that “a model is a model” is that a model is inevitably a “small-world” representation of a more complex underlying reality (Savage 1954; Levinthal 2011). This inherent lack of realism can be particularly challenging for behavioral models. Indeed, the conjunction of “behavioral” and “lack of realism” seems to form an oxymoron. The critical issue is not whether the model has some tight, one-to-one mapping to the underlying context and phenomena it is addressing but whether the model, despite its limitations and distortions, captures some important fundamental properties of the underlying context. Much as one appreciates abstract art not for its verisimilitude but because reflection on the work gives us some new insight about what we had regarded as the familiar and understood, a “false” (i.e., not fully realistic) model can give us insight into the real-world problem of interest (Box 1976).

Related to this notion of a model as a small world, we are troubled by an emerging practice in which authors have chosen to precede their analysis with a set of hypotheses regarding the behavior of the model that they have specified. We suggest that a model is no more a worthy subject of hypothesizing (e.g., “H1: We predict that when X goes up…”) than the question of whether 2 + 2 equals 4. In this regard, there are not multiple possible truth statements over which one might conjecture but a single deductive logic and associated implications. One might want to precede the analysis of a model with a characterization of received wisdom in the existing literature on the phenomena in question, but that is not a hypothesis regarding the properties of the specific model under consideration. While one, including the modeler, may not fully anticipate the behavior of the model, terming one’s “test” of one’s intuition regarding this behavior a hypothesis is problematic, as it suggests that computational models generate empirical data, which they do not. Rather, results obtained via simulations simply reflect the properties and assumptions specified by the modeler.

As a small world, one can always add more features to a model, and reviewers and editors can be quite generative in that regard. However, our collective ingenuity is not terribly relevant to the question of what constitutes the requisite number of bells and whistles in a given model. Instead, the question is whether the model has sufficient features to capture the core tensions underlying the issues that one wishes to examine. Some modeling is aimed at general insight (cf. March 1991), whereas other models strive for a closer mapping to an empirical context (cf. Malerba et al. 1999). Clearly, the feature set of the former sorts of models will be relatively sparse in contrast to the latter sorts of efforts.

However, even in the context of deriving general insight from modeling, it is important that the insight is aimed at some phenomena of the world and not merely focused on the properties of the model itself (e.g., what happens to a bandit/landscape/urn model, etc., if we change x or y in the model?). A healthy self-discipline is to ask oneself what insights someone who has no interest in modeling per se would take from one’s work. In the absence of substantive answers to this question, modeling research risks speaking merely to a “cave” of fellow modelers and will fail to have an impact and engage the broader community of scholars.

A critical benefit of modeling is that it allows one to examine underlying processes and mechanisms rather than having to take a typically longer leap from verbal theorizing and observed empirics. In this regard, modeling human behavior in a formal framework has some of the same relative advantages and disadvantages over verbal theorizing that a laboratory study has over analysis of field data. What one gives up in terms of realism, one gains in terms of a window into mechanisms. Thus, although at times a modeling effort may elicit a critical response that what is being identified is some pre-existing truth now conveyed in a formal wrapper (i.e., the issue of “old wine” and “new bottles”), the modeling effort is capable of identifying boundary conditions on these pre-existing folk theorems and may illuminate the mechanisms underlying these existing beliefs and arguments. In this sense, effective modeling should be akin to a laboratory study that clearly isolates the underlying mechanisms at work.

This pursuit of process-level understanding speaks to the issue posed above as to the desired number of bells and whistles. As Bonini (1963) observed, there is an inherent tension, which he termed a paradox, between the complexity of a model and its comprehensibility to the modeler and his or her audience. We have observed many problematic manuscripts in which the authors offer (only) intuition regarding the mechanisms that may be responsible for the observed results. As editors, readers, and seminar audience members, we find that behavior enormously frustrating. Unlike naturally occurring data, there is nothing that inhibits the researcher from examining the underlying processes and mechanics generating the model output. Indeed, our view is that it is in this regard that we have seen notable advances in the quality of computational modeling efforts in recent years, as scholars are increasingly taking advantage of the possibility of detailed process tracing of the model’s behavior. Thus, returning to the question of the right number of bells and whistles, another answer is to ask whether the core tension and mechanism are embodied in the model and whether the model is sufficiently sparse in its structure that the mechanisms driving its results are clearly visible and comprehensible.

An important feature of “a model is a model” sensibility is to de-escalate the contrast that is, at times, made between closed-form analytic models and computational models. Both classes of modeling require a formal specification of the underlying context, and there is no fundamental conceptual distinction between the two sorts of exercises, only a difference in techniques in extracting answers from the model. One tends to employ a computational model as the level of interdependencies, both spatial and temporal (with the latter generally thought of as path dependence), increases, particularly when the realization of those interactions is stochastic. Early work in industrial organization economics devoted considerable attention to two-period, two-firm models, a state space amenable to fully analytical, closed-form characterization. By contrast, an industry composed of N > 2 heterogeneous firms that compete for t > 2 periods and whose competition has direct and possibly stochastic interdependencies becomes more challenging for closed-form solutions. Computational models allow one to expand the state space (number and variety of actors and outcomes) and the degree of complexity by which these states are linked relative to analytical models, but these are differences in degree or magnitude rather than a fundamental difference in form.
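To make the contrast concrete, the kind of N-firm, multi-period, stochastic setting described above can be sketched in a few lines of code. This is a hypothetical toy of our own construction, not a published model; the function name `simulate_industry` and all parameter values are purely illustrative assumptions.

```python
import random

def simulate_industry(n_firms=5, periods=50, noise=0.1, reinvest=0.05, seed=0):
    """Toy industry simulation: n_firms heterogeneous firms whose
    capabilities evolve path-dependently over `periods` rounds.
    Each period, market shares follow relative capability subject to a
    stochastic shock; profits are reinvested, so early luck compounds."""
    rng = random.Random(seed)
    caps = [1.0 + rng.random() for _ in range(n_firms)]  # heterogeneous start
    share_history = []
    for _ in range(periods):
        # stochastic interdependence: each firm's realized strength is shocked
        strengths = [max(c * (1.0 + rng.gauss(0, noise)), 1e-9) for c in caps]
        total = sum(strengths)
        shares = [s / total for s in strengths]
        share_history.append(shares)
        # path dependence: profits (proportional to share) fund capability growth
        caps = [c + reinvest * sh for c, sh in zip(caps, shares)]
    return share_history

history = simulate_industry()
final = history[-1]
```

Because each period's shares depend on the entire stochastic history of reinvestment, no closed-form expression for the period-t share distribution is readily available, which is precisely the circumstance in which simulation earns its keep.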

In this spirit, we also think it is problematic to reify the distinctions among different families of models, such as bandit models, NK models, or value-added games. Again, a model is a model. At its core, a model specifies the basis of action of a micro-entity (individual human actor, firm, etc.) and how such micro-entities are linked to their environment (which may contain other such micro-entities). The characterization of these two elements need not be strongly coupled. One common pathology that we observe is that a particular characterization of behavior becomes viewed as inherently linked to a particular model of context. For example, an NK fitness landscape need not imply that the actor only moves by local hill-climbing, a bandit problem need not imply that actors engage only in Bayesian updating, and a value-added game does not have to presume that the individual firms are making rational, fully informed choices.
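The decoupling of model structure from behavioral assumptions can be illustrated with a minimal sketch: a single bandit environment paired, interchangeably, with a Bayesian behavioral rule and a simple non-Bayesian reinforcement rule. All function names and parameter values here are our own illustrative assumptions, not part of any canonical specification.

```python
import random

def bandit_payoff(arm, probs, rng):
    """Model structure: a Bernoulli multi-armed bandit environment."""
    return 1 if rng.random() < probs[arm] else 0

def bayesian_choice(successes, failures, rng):
    """Behavioral rule 1: Thompson-style sampling from Beta posteriors."""
    samples = [rng.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return samples.index(max(samples))

def epsilon_greedy_choice(successes, failures, rng, eps=0.1):
    """Behavioral rule 2: a simple reinforcement learner, no Bayesian updating."""
    if rng.random() < eps:
        return rng.randrange(len(successes))
    means = [s / (s + f) if s + f else 0.5 for s, f in zip(successes, failures)]
    return means.index(max(means))

def run(choice_rule, probs=(0.3, 0.7), periods=500, seed=1):
    """Pair the same bandit structure with any behavioral rule."""
    rng = random.Random(seed)
    successes, failures = [0] * len(probs), [0] * len(probs)
    total = 0
    for _ in range(periods):
        arm = choice_rule(successes, failures, rng)
        reward = bandit_payoff(arm, probs, rng)
        total += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total / periods

avg_bayes = run(bayesian_choice)
avg_greedy = run(epsilon_greedy_choice)
```

The environment (`bandit_payoff`) never changes; only the behavioral rule passed to `run` does, which is the sense in which the characterization of context and the characterization of behavior are separate modeling choices.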

Modelers need to make two basic choices related to structure and dynamics: What structure captures the context of interest, and how do the modelers wish to characterize the behavior of individual actors in such a context? Per the first question, the model structure that one chooses to employ is a function of the core features of the problem context one is considering. Is the core uncertainty one of sampling, per a bandit problem, or is it the combinatorial complexity of a fitness landscape? Second, relative to the problem context, how myopic or farsighted is the behavior of actors assumed to be (Puranam et al. 2015)? These should be considered choices, and, much like the choice of a logit model or a multivariate regression, the choice should reflect one’s understanding of the underlying generating process in the context of interest.
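The second choice, how myopic or farsighted actors are assumed to be, can likewise be sketched against a fixed structure. Below, a simple random fitness landscape (a deliberately crude stand-in, not a full NK specification with interaction structure) is searched by a one-step hill climber and by a two-step lookahead rule; all names and parameters are our own illustrative assumptions.

```python
import random
from itertools import product

def make_landscape(n_bits=8, seed=2):
    """Model structure: a random fitness value for every policy string."""
    rng = random.Random(seed)
    return {bits: rng.random() for bits in product((0, 1), repeat=n_bits)}

def neighbors(state):
    """All policies reachable by flipping a single bit."""
    return [state[:i] + (1 - state[i],) + state[i + 1:] for i in range(len(state))]

def myopic_step(state, fitness):
    """Behavioral assumption 1: one-step local hill climbing."""
    return max(neighbors(state) + [state], key=fitness.get)

def farsighted_step(state, fitness):
    """Behavioral assumption 2: look two steps ahead, then commit to step one."""
    best, best_val = state, fitness[state]
    for n1 in neighbors(state):
        for n2 in neighbors(n1) + [n1]:
            if fitness[n2] > best_val:
                best, best_val = n1, fitness[n2]
    return best

def search(step_rule, fitness, start, steps=20):
    """Run either behavioral rule on the same landscape structure."""
    state = start
    for _ in range(steps):
        nxt = step_rule(state, fitness)
        if nxt == state:
            break
        state = nxt
    return fitness[state]

fitness = make_landscape()
start = (0,) * 8
f_myopic = search(myopic_step, fitness, start)
f_farsighted = search(farsighted_step, fitness, start)
```

Here again the landscape is held fixed while the behavioral assumption varies, mirroring the point that structure and dynamics are separate choices that should each reflect one's understanding of the generating process.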

Models are powerful tools with which to push our insight further. We are able to derive implications that were not a priori obvious, engage in counterfactual thinking, and identify boundary conditions that are generally even less self-evident. Models provide a certain kind of window of insight and understanding about the world. They are not a substitute for existing theory or data but complement both in the broader ecosystem of intellectual activity. As modelers, we are delighted to observe the relative growth of formal modeling efforts in the strategy and broader management canons but also feel the need for greater self-vigilance about earning the right to pull up to the academic table (and the privilege of being asked to stay). Our arguments here should be seen simply as suggestions about proper etiquette that speak to the broadest features of the form rather than imbuing particular practices as properties of a new orthodoxy.

Acknowledgments

We wish to thank Julien Clement, Helge Klapper, Marlo Raveendran, Bart Vanneste, and Maciej Workiewicz for comments on a prior draft.

Endnote

1 In contrast, formal modeling within economics has been a dominant methodology since Samuelson’s (1947) introduction of the “engine” of comparative statics.

References

  • Bonini C (1963) Simulation of Information and Decision Systems in the Firm (Prentice Hall, Englewood Cliffs, NJ).
  • Box G (1976) Science and statistics. J. Amer. Statist. Assoc. 71(356):791–799.
  • Cyert RM, March JG (1963) A Behavioral Theory of the Firm (Prentice Hall, Englewood Cliffs, NJ).
  • Lave C, March JG (1975) An Introduction to Models in the Social Sciences (Harper & Row, New York).
  • Levinthal DA (2011) A behavioral approach to strategy: What’s the alternative? Strategic Management J. 32(13):1517–1523.
  • Malerba F, Nelson R, Orsenigo L, Winter S (1999) “History-friendly” models of industry evolution: The computer industry. Indust. Corporate Change 8(1):3–40.
  • March JG (1991) Exploration and exploitation in organizational learning. Organ. Sci. 2(1):71–87.
  • Puranam P, Stieglitz N, Osman M, Pillutla MM (2015) Modelling bounded rationality in organizations: Progress and prospects. Acad. Management Ann. 9(1):337–392.
  • Samuelson P (1947) Foundations of Economic Analysis (Harvard University Press, Cambridge, MA).
  • Savage L (1954) The Foundations of Statistics (John Wiley & Sons, New York).

Thorbjørn Knudsen is professor of strategic organization design at the University of Southern Denmark, and chair of social science at the Danish Institute for Advanced Study. He has previously served as guest editor at International Game Theory Review and is currently senior editor at Organization Science. His research focuses on how organization design can shape organizational evolution and adaptation.

Daniel A. Levinthal is the Reginald H. Jones Professor of Corporate Strategy at the Wharton School, University of Pennsylvania. He currently serves as editor-in-chief of Strategy Science and has previously served as editor-in-chief of Organization Science and as the department editor for business strategy at Management Science. His research focuses on questions of organizational adaptation and industry evolution, particularly in the context of technological change.

Phanish Puranam is the Roland Berger Chair Professor of Strategy & Organization Design at INSEAD. He has previously served as senior editor at Organization Science and guest editor at Strategic Management Journal and Strategy Science. He is currently associate editor at Journal of Organization Design. His research examines organizations as systems of aggregation, using a micro-structural perspective.
