Perspective—Discovery Within Validation Logic: Deliberately Surfacing, Complementing, and Substituting Abductive Reasoning in Hypothetico-Deductive Inquiry

Published online: https://doi.org/10.1287/orsc.2017.1193

Abstract

We propose a more explicit role for abductive reasoning, or the development of initial explanation, in hypothetico-deductive (H-D) inquiry. We begin by describing the roots of abduction in pragmatism and its role in exploration and discovery. Recognizing that pragmatism treats abductive reasoning as inevitable, we argue that it can also be a deliberate form of reasoning in scientific inquiry, articulating the unique place it can have in hypothetico-deductive theorizing. We explain the opportunities from surfacing abductive reasoning in H-D where it already exists; from explicitly acknowledging abductive reasoning as a complement in building logical chains in H-D; and from using abductive reasoning as a substitute for H-D logic when a body of knowledge exhibits inconsistent, contradictory, or discrepant results. We elaborate strategies for data search and selection, data production and compilation, and analytical corroboration. Our overall argument is that the deliberate use of abductive reasoning in hypothetico-deductive projects has distinct advantages stemming from an explicitly tight connection between data and theory. We end by explaining the benefits of actively recognizing the role of abductive reasoning in organizational and management theorizing.

The article was written and prepared by U.S. government employee(s) on official time and is therefore in the public domain.

This article discusses how exploration and discovery can be more explicitly incorporated into the hypothetico-deductive (H-D) tradition, the dominant paradigm in organization science (Locke 2007). Although H-D research has a primary focus on validation, it also relies heavily on elements of reasoning that explore, discover, and produce explanation during the research process, such as formulating hypotheses, explaining results, and drawing connections between multiple studies. Strictly speaking, however, the H-D tradition separates the development of an explanation (i.e., the active reasoning that goes into formulating a hypothesis) from the empirical work necessary to test and validate it (see Locke 2007 for a historical discussion). In this way, the positivist underpinnings of H-D research make it better suited to test theory than to develop theory (Mantere and Ketokivi 2013, Timmermans and Tavory 2012).

We propose that abduction, or the process of reasoning from data to an initial hypothesis (Bailyn 1977, Dunne and Dougherty 2016, Locke 2011, Locke and Golden-Biddle 1997, Lockett et al. 2014, Shepherd and Sutcliffe 2011, Walton 2004), can integrate positivist and pragmatist approaches by connecting theory building and theory testing in our research. We argue that abductive reasoning in H-D projects is not simply useful—it is an inseparable, indispensable, and valuable approach linking the development of explanation and the testing of resulting hypotheses to advance theory. Our contention is that leaving the role of abductive reasoning implicit, ambiguously defined, or unrecognized in validation research risks removing a useful and epistemologically honest way of thinking about exploration and discovery in H-D research. Thus, our primary audience comprises researchers in the H-D tradition interested in alternative modes of reasoning for developing explanation.

We begin by presenting a very brief overview of pragmatist philosophy to provide an intellectual biography for abduction and explain its relationship to induction and deduction. We highlight the nature of knowledge claims, or “warranted assertions” (Martela 2015, p. 537), that researchers make using abductive, inductive, and deductive reasoning, focusing on two elements that differentiate them: the generality of the explanation involved and the degree of certainty that is justified in their claims. We then explicitly turn our focus to H-D work, noting the role of knowledge claims from abductive reasoning within this paradigm and showing that while abductive reasoning is already common in such work (Klag and Langley 2013), it generally goes unnoticed and is often misunderstood or misused in the form of “overassertion,” undirected searches, and potential researcher misconduct.

In the second half of this article, we detail three specific approaches for the incorporation of abductive reasoning in H-D projects within the field of organization science. We first discuss the need for the field to more deliberately surface or recognize abductive reasoning in H-D projects where it already exists. A second opportunity stems from more explicitly using and acknowledging abductive reasoning as a complement in logical chains of theory building in H-D studies. We end with a discussion of situations where abductive reasoning can substitute for H-D studies, specifically when H-D studies have produced inconsistent, contradictory, or discrepant results (Hanson 1958, Lipton 2001). We argue that the acceptance of a more explicit link between abductive reasoning and H-D inquiry supplements current perspectives on methods and theory building in management (Lockett et al. 2014, Shepherd and Sutcliffe 2011, Suddaby 2006, Sutton and Staw 1995), responds to calls for greater attention to exploration and discovery (Alvesson and Sandberg 2013, Bamberger and Ang 2016, Locke et al. 2008), and can help advance the development of contextualized midrange theories (Birkinshaw et al. 2014, Thompson 2011).

Exploring the role of abductive reasoning in H-D theorizing is likely to raise different questions for different audiences. Researchers well versed in abduction who are more comfortable with the idea of its omnipresence in reasoning may ask “what is new” in such an exploration and wonder how one can argue for abductive reasoning as “deliberate.” Such readers may also wonder how abductive reasoning, which favors exploration and discovery, can be useful in H-D reasoning, which is seen as inimical to both. For readers with strong positivist roots, questions may revolve around the nature of abductive reasoning, standards to evaluate it, and its applicability to validation and developing certainty in theorizing. For these readers, questions may focus on whether an emphasis on exploration and discovery that is closely reliant on a phenomenon risks corrupting the scientific enterprise. Alternatively, such readers may also wonder how explanations derived through abductive reasoning can be distinguished from those achieved through garden-variety research misconduct. Building answers to these and other questions is the task of this article.

Positivism, Pragmatism, and Abduction

Positivism has had an important influence on organization science through its emphasis on validation, offering standards for assessing causes, general laws, and verification of facts (Comte 1975). Positivism assumes rationality, objectivity, and usefulness in ruling out alternative explanations as a way of building theory. As Ketokivi and Mantere (2010, p. 320) suggest, H-D projects attempt to use reasoning toward validation that is “theory, context, and researcher invariant.” In contrast to positivism stands pragmatism, a philosophical perspective originating in the United States in the latter half of the 19th century. Originally conceived by Charles Peirce and developed further by others, such as William James, John Dewey, and Herbert Mead (see Burch 2014, Elkjaer and Simpson 2011), pragmatism as a philosophy assumes subjectivity, relies on revision, distrusts confidence, and is suspicious of strong claims of certainty and objectivity (Alvesson and Sandberg 2013, Elkjaer and Simpson 2011, Farjoun et al. 2015, Martela 2015). Pragmatism is different from positivism in its insistence that empirical evidence and explanation are closely intertwined and are, in fact, inseparable in informing one another. As such, pragmatism has offered an alternative to rationalist perspectives such as positivism (Elkjaer and Simpson 2011), which has been the dominant influence in (and is often equated with) the H-D tradition (Farjoun et al. 2015).1 The early influence of pragmatism in the field of organizations and management, through the work of Selznick, Cohen, and Weick, is well recognized (see Farjoun et al. 2015, Mantere and Ketokivi 2013, Martela 2015). Thus, although our field has long noted that pragmatist ideas and philosophical assumptions are well suited to represent organizational behavior in a realistic and complex manner, positivist norms have been dominant (Locke 2007).

It was in the development of pragmatism that Peirce originally coined the term abduction (or retroduction) (Burch 2014) to indicate a form of reasoning that moves from observations in a specific situation, information source, or data set to an explanation that accounts for those particular observations. The conjectures that result from abductive reasoning are, therefore, untested and, according to Peirce, are provisional, tentative, and fallible. Since Peirce’s original formulation, many scholars have expanded on abduction. For instance, abduction can be characterized as elaborating different degrees of novelty in reasoning (e.g., selecting from options or creating new ones), different motivating sensory experiences (e.g., visual or tactile) that might trigger inquiry, and different applications for observed patterns (i.e., development of hypotheses or particular explanation) (Magnani 2001). This resurgent interest, at the same time, reveals disagreement and ongoing conversation among scholars regarding pragmatism and abduction (see Elkjaer and Simpson 2011), making standard definitions of pragmatism and abduction difficult to pin down. This expanding work, however, also suggests there is broad agreement among scholars on what abductive reasoning involves—that is, the search for explanation in a particular situation. For instance, abductive reasoning is generally agreed to happen as an initial inquiry moves from inference to a “best” (Walton 2004), “plausible” (Burch 2014), or “most likely” explanation (Lockett et al. 2014) to account “for surprises and unmet expectations” (Locke 2010) using the evidence available (Ketokivi and Mantere 2010, Rozeboom 1997). 
This emphasis on explanation for a particular set of evidence helps clarify why abductive reasoning may require “intelligent guessing” or “hypothesis generation” (Popper 1959), “imaginative creation of explanatory hypotheses” (Elkjaer and Simpson 2011), a “flash of insight” (Peirce 1965), a “propositional attitude” (Rozeboom 1997), a “conceptual leap” (Klag and Langley 2013), and engaging “puzzles, hunches, speculation, imagination, guesswork” or even “permissive exploration” (Locke 2010).

Abductive reasoning is a mundane human activity (Folger and Stein 2017, Locke et al. 2008, Timmermans and Tavory 2012), a natural and inevitable part of everyday reasoning, because our senses constantly use previous experience and understanding to interpret everything we observe (Goffman 1959, Kohler 1947). However, like engaging doubt (Locke et al. 2008), abductive reasoning can also be deliberate in scientific inquiry (Dunne and Dougherty 2016, Mantere and Ketokivi 2013). Deliberate abductive reasoning, for example, is explicitly recognized in fields where the primary concern is firsthand engagement with evidence: it is explicitly documented and taught in actuarial science (e.g., setting premiums for insurance based on statistical samples), marketing (e.g., discovering common purchase pairs during shopping), artificial intelligence (e.g., machine learning), and predictive analytics from “big data” (e.g., determining a point estimate based on a population). In fact, all of us have experienced abductive reasoning when we see a doctor with a complaint: our doctor uses our medical history and current symptoms as evidence to develop a preliminary diagnosis, using knowledge about disease to rule out extremely unlikely scenarios and to gather new evidence by quizzing us further, continually evaluating new explanations (i.e., diagnoses) as more likely or less likely. Our physician may also order radiological, blood, or urine tests to generate additional evidence, or he or she may develop new questions and explanations through consultations with peers who might possess different disease information. As the abductive inquiry advances, the physician narrows the range of possible root causes for our symptoms to a few plausible diagnoses (or even one) on which confirmatory tests can be conducted so the definite cause of the symptoms becomes the target of treatment. 
Importantly, an abductively reasoned explanation, such as a preliminary diagnosis, simply has to be plausible to fulfill its role in the inquiry process (Mantere and Ketokivi 2013, Rozeboom 1997, Walton 2004).

Knowledge Claims in Abductive, Inductive, and Deductive Reasoning

Before focusing strictly on the relationship between abductive and deductive reasoning, it is useful to briefly discuss the three most common modes of reasoning in organization science: abductive, inductive, and deductive reasoning. Of these, abductive and inductive reasoning are the most closely related, to the point that in some instances they are confused with one another (Folger and Stein 2017, Locke 2010). For instance, both abductive and inductive modes of inquiry involve exploration, discovery, and the incorporation of anomalies into emerging explanation (Mantere and Ketokivi 2013, Simon 1973).

Induction, deduction, and abduction, however, can be distinguished from each other in two important regards: the generality of the explanations they propose and the certainty of the knowledge claims they produce (summarized in Table 1). Both deductive reasoning and inductive reasoning operate in the development of theory, causal explanations, and propositions with general (or even universal) applicability. Deductive reasoning, for instance, is often described as moving from general theory to a specific prediction while inductive reasoning is described as moving from specific cases to a general explanation or theory. By contrast, abductive reasoning is not concerned with developing general theory but is instead concerned with moving from specific observations to particular explanation. Without an eye toward generalization, abductive reasoning helps develop a narrow explanation that is appropriate for local observations.


Table 1: Modes of Reasoning to Generate Different Degrees of Generalizability and Certainty in Knowledge Claims


Type of reasoning: Abductive
  Initiating impetus for the inquiry: An empirical puzzle or anomaly (e.g., inconsistent, contradictory, or discrepant findings)
  Relationship between observation and theory: Moving from specific observations to particular explanation
  Outcome (strength of the assertion): Plausible knowledge claim for resolving the empirical puzzle or anomaly
  Evaluation criteria for knowledge claim: Does the particular explanation coherently account for the local observation? Does the explanation cohere into a testable hypothesis? Does the explanation identify new core variables or relationships? Does the explanation provide a map of conceptual relationships? Does the explanation carefully document the anomaly?

Type of reasoning: Inductive
  Initiating impetus for the inquiry: A working hypothesis
  Relationship between observation and theory: Moving from specific cases to general explanation
  Outcome (strength of the assertion): Probable knowledge claim that the hypothesized relationship is consequential and reliable
  Evaluation criteria for knowledge claim: Does the general observation account for the specific cases? Does the evidence demonstrate that one pattern is more probable than others? Can the relationships that occur in specific incidents generalize to other contexts or relationships?

Type of reasoning: Deductive
  Initiating impetus for the inquiry: A hypothesis
  Relationship between observation and theory: Moving from general explanation to specific prediction
  Outcome (strength of the assertion): Certain knowledge claim that the pattern is predictable or the phenomenon reliably occurs
  Evaluation criteria for knowledge claim: Was the premise validated? Can the prediction be replicated?

A second way in which deductive, inductive, and abductive reasoning vary is in the degree of certainty they assert in the knowledge claims they help generate. Abductive reasoning leads to plausible knowledge claims that are untested, held tentatively, and subject to continuous revision. Inductive studies make knowledge claims where “it is improbable that the premises be true and the conclusion false” (Hurley 2000, p. 33, emphasis added). In deductive studies, the research process sets out to establish greater certainty, and knowledge claims are stronger because, by testing an a priori hypothesis, “it is impossible for the premises to be true and the conclusion false” (Hurley 2000, p. 33, emphasis added). In essence, abductive reasoning offers a plausible explanation, inductive reasoning offers a probable explanation, and deductive reasoning offers a certain explanation. As summarized by Mantere and Ketokivi (2013, p. 72), “We predict, confirm, and disconfirm through deduction, generalize through induction, and theorize through abduction.” Of the three modes, abductive reasoning is the most exploratory. While it is the least dependable mode of developing certainty, it is the most open and fruitful mode of introducing new insights into other modes of scientific inquiry through its “innovative potential” (Timmermans and Tavory 2012, p. 171) and “ampliative and conjectural” stance (Locke et al. 2008, p. 907).

Although one could be tempted to conclude that engaging abductive reasoning as part of scientific inquiry is not worth the effort because it can only produce locally relevant and merely plausible explanations, this challenge itself speaks to the potential complementarity between positivism and pragmatism in scientific inquiry. According to Peirce’s notion of pragmatism, all explanations deserve to be treated with suspicion and care because they are always particular, local, and only plausible, perpetually subject to critique and further development. Abductive reasoning contributes to the advancement of science by allowing for discovery, serving as a mode for gathering and exploring evidence to produce a plausible explanation; it is a mode of reasoning that ends when the inquiry produces an explanation that fits the facts at hand. As such, the process of abductive reasoning does not “test” or validate an idea in the manner that deductive inquiry does (Mantere and Ketokivi 2013) but rather produces plausible explanation that can be subjected to additional scrutiny or validation through induction and deduction (Folger and Stein 2017).

Abduction in the H-D Paradigm

We next return our attention to the focus of this paper: how abductive and deductive reasoning can be supplementary parts of scientific projects, helping us relax the unhelpful divide between deriving explanations that are evidence based (i.e., abduction) and validating explanations that are justified a priori (i.e., deduction) (Lockett et al. 2014). Considering complementarity between creating and validating explanation can be generative because it clarifies the utility of pragmatism as a philosophical approach in theory building. Specifically, it emphasizes inquiry (i.e., study design and methodology choices) that seeks deep engagement with empirical phenomena (i.e., a close match between the observed world and our explanations), it refuses simplification (i.e., resists codifying or breaking down social situations), and it holds assertions or truth claims lightly (i.e., is self-reflective and constantly evolving), as it emphasizes exploration and discovery (Martela 2015).

It is in this context that the pragmatist underpinnings of abductive reasoning can provide a complementary fit to enrich validation through H-D scientific inquiry. One way of exemplifying this relationship is to note that deductive reasoning begins with the statement of a hypothesis and ends with its test. Once a test is completed, abductive reasoning assists the development of additional explanation and ends when a new plausible explanation (i.e., a new hypothesis) is generated. With this in mind, it is easy to recognize that abductive reasoning is regularly incorporated into H-D projects through, for example, literature reviews used to identify patterns of findings that raise new questions through top-down induction (Shepherd and Sutcliffe 2011) or to support or generate hypotheses that are then subject to empirical testing (Dunne and Dougherty 2016, Ketokivi and Mantere 2010, Minnameier 2017). Abductive reasoning is also present in H-D manuscripts after hypotheses are tested, in post hoc analyses of alternative patterns in data, and in discussion sections where nonsignificant or unanticipated results are speculated upon or where links to other findings are proposed, where mysteries are explored (Alvesson and Kärreman 2007). In each of these instances, researchers reverse the usual H-D logic from one focused on “effects of causes” to one of “causes of effects” (Gelman and Imbens 2013, p. 1). By searching for plausible explanations, authors seek to make initial and tentative knowledge claims that link the theoretical and empirical findings in a manuscript to other work in the field.

Theory Building Through Abductive Reasoning in H-D Projects

Having described abduction, its intellectual biography, and its relationship to induction and deduction, we now continue our development by describing three ways in which the incorporation of abductive reasoning as an explicit part of reasoning in H-D projects can enrich our understanding: surfacing abductive reasoning in our current practices, complementing H-D logic chains through abductive reasoning, and substituting with abductive reasoning to address empirical puzzles.

  1. Surfacing abductive reasoning in H-D research. If abductive reasoning is where most theory building begins (in literature reviews and hypothesis generation) and also where most research returns in the end (in discussion sections, explaining results and suggesting avenues for future research), the relative silence about the role of abductive reasoning in H-D projects is puzzling. We believe there are two reasons for this silence, which together conspire to prevent the surfacing of abductive reasoning, and the exploration and discovery it implies, in the H-D tradition. One reason may be a legitimate distaste for dishonest data practices (Bedeian et al. 2010) that stem from intended or unintended misuses of abductive reasoning. For example, when doing data analysis, researchers engaging in deductive reasoning can be tempted to eliminate outliers in ways that modify results or findings, and these exploration activities can be interpreted as “fitting” or “massaging” data. These practices can be categorized as abductive because they rely on close engagement with the data to identify patterns, clusters, groupings, or unusual cases that reveal new information about the phenomena they represent, and they help develop initial explanation (Chen and Han 1996, Fayyad et al. 1996, Schwab and Starbuck 2017, Selvin and Stuart 1966). However, “fishing,” “data mining,” “p-hacking,” and “data dredging” lead to unwarranted assertions when those patterns, products of data exploration and abductive reasoning, are presented as the product of H-D inquiry. In other words, the error occurs when researchers rely on abductive reasoning to generate initial explanations but present them as deductively derived.
The presentation of hypotheses as prethought ideas when they, in fact, emerge from patterns the researcher finds in the data (e.g., hypothesizing after results are known, or HARKing) leads to flawed knowledge claims that are labeled as certain when they can, in fact, only be plausible (Anonymous 2015, Biemann 2013, Bosco et al. 2016). A related offense occurs when reviewers ask authors to restate or reframe their original hypotheses to better match their empirical findings (Anonymous 2015). Instead, when conclusions from abductive reasoning are represented correctly, the knowledge claims are described with language that claims them as “inferential,” “presumptive,” or “plausible” (Walton 2004).

    A second (and perhaps less conscious or intentional) reason for the silence on abductive reasoning in the H-D tradition may be due to how strongly abductive reasoning brings researcher judgment forward in research, with the researcher as an active reasoner (Bailyn 1977, Ketokivi and Mantere 2010, Locke 2011). In fact, the language of abductive reasoning (i.e., flashes of insight, hunches, doubt, conundrum, conceptual leap, mysteries) clearly reveals the role of the subjective researcher that deductive inquiry seeks to minimize (Anteby 2013, Hudson and Okhuysen 2014, Ketokivi and Mantere 2010). And the lack of comfort with a subjective researcher is genuinely rooted in a concern that researcher interventions can produce significant violations of core assumptions for H-D work, leading to unwarranted knowledge claims that compromise scientific progress. Treating researchers as “solvers of mysteries” is therefore judged with suspicion (Alvesson and Kärreman 2007, Ketokivi and Mantere 2010). However, many have argued that it is naïve to think that researchers can play the role of the “disinterested scholar” (Hudson and Okhuysen 2014), as disembodied and perfectly rational minds that merely report on the world around them via data. As Mintzberg (1979, p. 584) elegantly stated, “The data do not generate theory—only researchers do that.”

    The proposition that abductive and deductive reasoning are important complements is consistent with the pragmatist belief that theory comes from the thinking process and interpretations of researchers—“[p]rior findings cannot by themselves motivate hypotheses, and the reporting of results cannot substitute for causal reasoning” (Sutton and Staw 1995, p. 374). Instead, it is important to acknowledge the role that the active reasoner plays in any mode of reasoning. As Mantere and Ketokivi (2013, p. 72, italics in original) noted, “Theories are, in a peculiar way, always partly about the people who create them.” It is necessary to explicitly recognize that researchers are situated in their role of the active reasoner in building connections, telling the “story,” motivating hypotheses, and making the logical case about how previous results suggest new relationships (Locke 2007, Shepherd and Sutcliffe 2011). No H-D project can stand independently of the active reasoner and abductive reasoning (Mantere and Ketokivi 2013). This suggests that concerns about having “interested scholars” in the research enterprise (Hudson and Okhuysen 2014) must be addressed, not by ignoring the role they play (Schwab and Starbuck 2017), but rather by clearly surfacing their role.

  2. Complementing explanation through abductive reasoning in H-D research. The second way in which abductive reasoning can be useful to H-D research is through what we label complementing, its explicit use to draw links between instances of H-D reasoning. In such cases, abductively generated conclusions or interpretations act as connections between different H-D studies, with the situated researcher building the inquiry starting with one hypothesis, its test, and the explanation of results to the postulation of new questions and plausible hypotheses. These new plausible hypotheses, in turn, become the impetus for subsequent studies. In essence, abductive reasoning helps connect a given sequence of studies as appropriate, explaining how the pieces build on each other to advance a knowledge claim.

    The use of abductive reasoning to complement H-D projects is perhaps most evident in multiexperiment manuscripts focused on microlevel phenomena. Grant (2008), for example, showcased this complementarity in presenting three different field experiments that test hypotheses focused on the relationship between task significance and job performance. As with many manuscripts that involve H-D logic, he laid out hypotheses based on extant research, pointing out challenges with our current understanding and proposing additional explanations that can be put to the test. For our purpose, what is notable is how, after presenting each study, he reexamined the broad question using abductive language to map the phenomenon and build the inquiry in the article. For instance, after reporting results for the first experiment, Grant used tentative language to interpret them, noting they provide “initial support” (p. 113). Implicitly calling on abductive reasoning, he noted that the results “raise two critical unanswered questions” (p. 113) as hypotheses. This reasoning then motivates the second study as a test of those plausible explanations. Similarly, after detailing the second study, he again returned to an explanatory stance, suggesting the results “build on” (p. 116) the first study and “provide convergent support” while still recognizing that they do not inform “the boundary conditions” (p. 116) that he then hypothesized about and that the last study tests. This alternating sequence of H-D studies and presentation of plausible explanation shows the way in which abductive reasoning can help build an H-D project, playing an indispensable and valuable role in linking explanation and testing by complementing H-D logic.

  3. Substituting explanation through abduction in H-D research. The discussion to this point should make evident that abductive reasoning already plays an important role in many stages of the hypothetico-deductive research process—through literature reviews, during initial or exploratory search, and while developing pilot studies. It should also be evident how abductive reasoning complements or links H-D projects together when interpreting findings post hoc, linking studies together, and proposing future directions. Moving on from the supporting roles that abductive reasoning can play through surfacing and complementing, we next consider how abductive reasoning can play a role in theory building by substituting for H-D inquiry when there is a need to explain inconsistencies, discrepancies, and contradictions in H-D research. In this situation, we suggest that an abductive project can become the lead or sole focus for a manuscript and/or study, acting as a substitute for traditional H-D inquiry.

Guidelines for Substituting Abductive Inquiry in H-D Projects to Resolve Anomalies

One situation that naturally calls for the use of an abductive approach occurs when the accumulated evidence within an H-D research stream indicates that theoretical coherence has broken down (Shepherd and Sutcliffe 2011) or has produced a “conundrum” (Locke et al. 2008), reflected in empirical inconsistencies, contradictions, or discrepancies across studies. These situations typically appear when there is mixed empirical support for a theoretical explanation (Locke 2007); when choices of model specification, methodologies, or context are associated with competing explanations; or when a proliferation of moderators suggests a need to revisit relationships. Such situations can leave us with “few means to adjudicate competing explanations” (Kreager et al. 2017, p. 689). In general, inconsistencies, contradictions, or discrepancies present a need to explore unexpected or surprising results and a need to refine earlier theoretical explanations, to explain “what does not work in an existing theory” (Alvesson and Sandberg 2013, p. 146, italics in original). Reliance on abductive reasoning as a primary focus of a manuscript is reasonable here because of the need to explore and discover a new and plausible explanation, one that restores theoretical coherence in light of empirical reality. Here, the researcher can drop the tools (and rules) of H-D inquiry and create a manuscript that relies on abductive inquiry to put forth a new set of plausible explanations.

To respond to these needs, we next elaborate six guidelines that can steer a researcher through the critical questions involved in developing high-quality projects that rely on abductive reasoning. We focus on three important areas that are present in any research project but that take special form when anomalies occur: (1) data search and selection, (2) data production and compilation, and (3) analytical corroboration. Although we present these guidelines sequentially, a distinctive feature of abductive inquiry is that the reasoning processes involved are iterative and impossible to separate in practice (Folger and Stein 2017, Mantere and Ketokivi 2013, Shepherd and Sutcliffe 2011). Thus, instead of thinking of “steps” in a process that begins with a hypothesis and proceeds through methods to test it, it is more helpful to think of data search and selection, data production and compilation, and analytical corroboration as three legs of a stool: processes that occur simultaneously and iteratively, where the quality of each is essential to holding up the integrity of the research process. For each guideline, we present the abductive inquiry goal it fulfills and provide a contrast to the corresponding element in H-D projects (summarized in Table 2).

Table 2: Suggested Guidelines for Projects Using Abductive Reasoning to Resolve an Anomaly Produced in H-D Research

Data search and selection (H-D equivalent: Sampling)

No. 1: Searching on speculation. Goal: to find and select data sources, contexts, populations, samples, and/or sampling techniques that can identify a suspected idiosyncrasy or pattern related to the anomaly. Violation of positivist H-D tenets: nonprobability sampling introduces systematic sampling error and/or selection bias.

No. 2: Resampling for emergent explanation. Goal: to iteratively engage data sources to pursue an explanation, comparison, or insight more deeply by adding contextual authenticity. Violation of positivist H-D tenets: any change in sampling frame or method introduces threats to internal validity and nullifies the ability to make equivalent comparisons or rule out alternative explanations.

Data production and compilation (H-D equivalent: Study design and data collection)

No. 3: Expanding through methods and measures. Goal: to compile evidence, using different methods or altering existing methods, to expand the ability to make comparisons and develop an initial explanation. Violation of positivist H-D tenets: different study designs or methods introduce incommensurability, making equivalent comparisons to test an a priori hypothesis impossible.

No. 4: Expanding through surplus data. Goal: to expand evidence to allow exploration, identification, and uncovering of previously overlooked elements, interdependencies, or relationships important to explaining the puzzle. Violation of positivist H-D tenets: only the hypothesized variables and/or data required for verification of an a priori hypothesis are appropriate to utilize.

Analytical corroboration (H-D equivalent: Data analysis)

No. 5: Assessing corroboration. Goal: to utilize and learn from iterative comparisons/analyses to explore and exhaust relationships between concepts, to assess corroboration of emerging evidence, and to evaluate (support or question) the plausibility of explanations. Violation of positivist H-D tenets: consideration of unplanned comparisons introduces confounds and voids the certainty of a knowledge claim about a priori hypotheses.

No. 6: Assembling plausible explanation. Goal: to assemble plausible explanation through a set of empirically coherent and evidence-grounded knowledge claims that identify new testable hypotheses, consequential inclusion criteria for variables or relationships, a phenomenon map, and/or a previously unknown anomaly. Violation of positivist H-D tenets: multiple comparisons to test a hypothesis confuse the notion of testing, undermining the certainty that is the product of H-D testing.

Data Search/Selection

In the H-D tradition, selection of data sources (i.e., sampling) is intended to decrease systematic bias or sampling error to achieve high degrees of certainty for a generalizable conclusion (Hurley 2014, Kalton 1983). Normative deductive sampling procedures therefore typically employ a single sampling frame and a sampling technique based on randomization (random sampling, stratified random sampling, etc.). These practices increase the probability of equivalence and independence in selection so that the distribution of errors is not systematic (Kalton 1983).

By sharp contrast, the selection of data sources in abduction to explain anomalies is intended to search for and select cases, contexts, and events that help identify the “error” in H-D projects. In other words, the goal is to search for sources that help the researcher discover unanticipated but systematic patterns that can offer explanation for an anomaly. The reasons for selecting a given population and a sampling frame therefore must reflect an initial speculation (i.e., the “flash of insight”) in the researcher’s mind regarding where explanations about the anomaly might come from (Argyle 1972, Edmondson and McManus 2007, Suddaby 2006). This means that, from early stages, abductive inquiry requires exploration and discovery via sampling procedures that will necessarily violate norms in the H-D tradition. This could include sampling on the dependent variable (Heckman 1979) or seeking cases “affirming the consequent” (Ketokivi and Mantere 2010, p. 330) to generate explanation. It could also include selecting extreme cases to better understand their circumstances (Miller and Bamberger 2016), even though such decisions introduce threats of regression to the mean and to generalizability in H-D projects. In other words, because sampling is done to explore and discover explanation rather than to eliminate systematic bias, the goal of an abductive sampling strategy is to identify idiosyncrasy rather than strive for certainty and generalizability. While there are some conditions where random sampling might be appropriate, multiple sampling frames and purposeful or theoretical sampling techniques (Eisenhardt 1989, Pratt 2009, Suddaby 2006, Trochim 2001) will usually be more useful in abductive inquiry.
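The H-D worry attached to extreme-case selection, regression to the mean, can be made concrete with a short simulation. The sketch below is purely illustrative (the variable names and distributional assumptions are ours, not drawn from any study cited here): units chosen because they scored at the top of a noisy first measurement will, on average, drift back toward the population mean when remeasured.

```python
import random

random.seed(42)

# Hypothetical setup: each unit has a stable true level plus
# independent measurement noise at each of two time points.
N = 100_000
true_level = [random.gauss(0, 1) for _ in range(N)]
time1 = [t + random.gauss(0, 1) for t in true_level]
time2 = [t + random.gauss(0, 1) for t in true_level]

# Purposefully select the "extreme cases": the top 5% at time 1.
cutoff = sorted(time1)[int(0.95 * N)]
extreme = [i for i in range(N) if time1[i] >= cutoff]

mean_t1 = sum(time1[i] for i in extreme) / len(extreme)
mean_t2 = sum(time2[i] for i in extreme) / len(extreme)

# The same units score noticeably lower on remeasurement, even though
# nothing about them changed: regression to the mean.
print(f"extreme cases at time 1: {mean_t1:.2f}")
print(f"same cases at time 2:    {mean_t2:.2f}")
assert mean_t2 < mean_t1
```

In an abductive project this drift is not a fatal flaw but a known property of the selection strategy that the researcher must reason about when interpreting comparisons drawn from extreme cases.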

Guideline No. 1: Searching on Speculation

As previously mentioned, the criterion in choosing where to seek data is based on the researcher’s hunch about where a likely explanation for an empirical puzzle resides (Miller and Bamberger 2016). For instance, when the researcher suspects that the anomaly may be due to a mismatch between theory and method (that is, low internal validity), the researcher may choose to use populations and sampling frames that are the same as those used in earlier research. In such a situation, the goal is to maintain contextual similarity to previous work to focus on a methodological artifact or “internal validity” issue. For example, if the previous H-D research that produced the anomaly used undergraduate subject pools (perhaps with random sampling), it would be appropriate to return to the same population and sampling technique. Returning to previous populations, sampling frames and techniques, or data sources is justified because it will help the researcher understand whether his or her initial speculation about internal validity should be pursued or discarded. A nicely documented example is provided by Latham et al. (1988), who described their method for resolving contradictory findings about the motivational effect of participation in goal setting. In their article, Latham and colleagues began by reviewing the body of evidence (field- and laboratory-based) showing inconsistent findings. Then, sitting together, they “brainstormed differences in the two sets of experiments that might account for the differences in their results” (p. 755). This resulted in five tentative explanations involving differences in experimental processes and participant populations that could account for mixed empirical results. Importantly, four of the tentative explanations reflected internal validity concerns. 
Given this initial hunch, it was reasonable for the researchers to collect data for their four experiments with a narrow frame (i.e., in the United States, from students in business and management courses) to limit potential problems that differences across national or industry samples could introduce. This sampling strategy, together with clear experimental designs, allowed them to resolve several discrepancies in earlier research. Had they continued with their abductive reasoning process, they might have also pursued the fifth tentative explanation (reflecting differences in participant populations) by sampling to explore the role of national and industry cultural values about participation in goal setting.

If researchers’ initial suspicion is that the empirical puzzle stems from threats to external validity rather than internal validity, by contrast, reasoning about differences in participants, contexts, timing, or other circumstances in which earlier H-D work has taken place is necessary. In this case, the researcher would deliberately incorporate different samples and sampling frames to allow for a more open exploration of new explanations (Miller and Bamberger 2016). For example, one study seeking to better understand turnover among working adults began with the hunch that part-time and full-time employees might “process” their employment experiences differently, potentially explaining inconsistent findings in the literature (Peters et al. 1981). The researchers’ sampling choices reflected this hunch: they returned to a traditionally studied high-turnover population (i.e., telephone sales) but deliberately chose new employees within the same sales force to make comparisons between part-time and full-time workers. Using multiple data waves, they explored relationships among commonly hypothesized causes of turnover. They found that traditional workplace experience variables (job satisfaction, thoughts of quitting, ease of finding another job, etc.) replicated predictions for higher turnover among full-time employees but had less relevance for part-time workers, who the authors “suggest” are more influenced in their evaluations by their commuting time.

Purposeful and deliberate search for data is justified when seeking to understand how the context of the phenomenon might change an explanation because the goal is to find data sources to discover new explanations for an empirical contradiction, discrepancy, or inconsistency. Here, although novel settings may provide particular value to explore and help develop explanation (Bamberger and Pratt 2010), relying on a common setting can also boost the robustness of the findings in a study. Thus, the guideline is to find data sources that directly respond to the initial hunch about a plausible explanation.

Guideline No. 2: Resampling for Emergent Explanation

A second guideline for deliberate abductive inquiry reflects the need to adjust both the selection of sampling frames (i.e., the source of data) and sampling methods (i.e., the process of selecting within a sampling frame) as the abductive inquiry advances. One of the best-known examples is provided by the pioneering Hawthorne studies (Roethlisberger et al. 1956), which show the value of reengaging a sample. In this project, the initial hypothesis that lighting levels affected worker productivity was not supported in an initial experimental study. In response, the researchers engaged in a more systematic exploration of their data and developed several plausible explanations for this result. They returned to multiple data sources within the plant to conduct three more experiments, an interview study, and then a final experiment (see Franke and Kaul 1978) with abductive reasoning playing a key role in the interstitial reasoning spaces. It was through this exploration and resampling that the researchers discovered the powerful effect that employee attitudes toward their work can have on productivity. From this work, the researchers were also able to suggest new insights about placebo effects in field research and proposed new ideas for different variants of participative management techniques (see Levitt and List 2011).

The example above illustrates how abductive sampling choices naturally change as explanation expands (i.e., from lighting levels to exploring the phenomenon of participation). In the deliberate abduction research process, the researcher shifts constantly from seeking data to produce explanation to producing explanation from the data already collected. Iteratively selecting populations or samples that allow for deeper engagement with the phenomenon through data, comparisons, and new insight is how abduction moves an inquiry forward. Although iterative and flexible sampling (i.e., nonprobability) violates H-D norms because the practice introduces systematic biases (Attewell et al. 2015, Bedeian et al. 2010, Chen and Han 1996, Fayyad et al. 1996), from the perspective of pragmatism (Farjoun et al. 2015, Mantere and Ketokivi 2013), this type of purposeful and iterative sampling is systematic because it reflects how the active reasoner is incorporating new information into the search for—and development of—a new explanation. As such, this guideline suggests that the primary standard to use in evaluating the appropriateness of data selection is the congruence of the choice with the initiating inquiry to establish the “contextual authenticity of reasoning” (Ketokivi and Mantere 2010, p. 323).

Data Production and Compilation

In the H-D tradition, the quality of data production is typically associated with study design—that is, the methods through which data are collected to test a hypothesis (Rea and Parker 1997). H-D projects are evaluated according to established standards (e.g., maximizing internal validity to eliminate alternative explanations and intervening influences), and they typically include prescribed methods to produce quantitative data (i.e., experiments and surveys) and methods to collect already existing data (i.e., archival) (Sproull 1988). In essence, the quality of an H-D project is judged based on its adherence to established practice (Ketokivi and Mantere 2010). Importantly, in H-D projects all decisions about measures and methods are made prior to any testing of a hypothesis and are not modified once chosen. In fact, any modification of a method or measure after any testing takes place constitutes a violation of the H-D paradigm because it introduces variability among predictors and dependent variables that cannot be estimated, and it eliminates the basis for meaningful comparison (Edwards 1990, Sproull 1988, Trochim 2001). For example, in psychological projects, altering a measure can change the psychometric properties (i.e., reliability and validity) of a scale and compromise its integrity, or an inconsistent prime in an experiment can produce nonequivalent experiences between groups, or altering the timing or context of study protocol can produce maturation threats and unanticipated intervening influences. Each of these contaminates the test of a hypothesis and raises threats for H-D research.
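As an illustration of the psychometric point about altering measures, the following sketch (a hypothetical example on simulated data, not drawn from the article) shows how an ad hoc shortening of a multi-item scale lowers its internal-consistency reliability, estimated here as Cronbach's alpha.

```python
import random

random.seed(3)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item score lists (same respondents)."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Simulated respondents: each item score = latent trait + independent noise,
# giving an inter-item correlation of about .5.
n_resp = 5000
trait = [random.gauss(0, 1) for _ in range(n_resp)]
items = [[t + random.gauss(0, 1) for t in trait] for _ in range(10)]

full = cronbach_alpha(items)       # original 10-item scale, alpha near .91
short = cronbach_alpha(items[:5])  # shortened 5-item version, alpha near .83

print(f"alpha, 10 items: {full:.2f}")
print(f"alpha, 5 items:  {short:.2f}")
assert short < full
```

The drop follows directly from the Spearman–Brown logic: fewer items at the same inter-item correlation means lower reliability, so scores from the altered measure are not equivalent to scores from the original, which is exactly the H-D objection to post hoc modification.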

The assumptions that underpin the standards for H-D research stand in contrast to those that guide deliberate abductive inquiry and that, for instance, do not emphasize strict internal validity and a priori study design. Instead, abductive reasoning emphasizes the consideration of methods and measures as “ways of knowing.” In other words, in abduction, the goal is to compile data through different methods or measures to open opportunities to explore different explanations. This means that in the context of deliberate abduction, it is appropriate to produce data that are idiographic, unique, or strongly situational to help establish different contrasts and comparisons. Ongoing adjustments, flexibility, and iteration in data production are required to explore and discover new explanations, and these strategies sit comfortably within a pragmatist paradigm even as they certainly violate standards of internal validity and verification in the H-D paradigm. Here, we offer our next two abductive reasoning guidelines, focused on the compilation of data.

Guideline No. 3: Expanding Through Methods and Measures

In the pragmatist tradition, all knowledge is considered provisional, and there is not a single way of “knowing.” Data production techniques, therefore, are treated as lenses that help capture different “views” of a phenomenon. As such, the choices for data compilation typically involve choosing among—and potentially using—multiple methods, or a process of “data expansion” beyond a single source or method. If the researcher, for example, suspects a problem with construct validity in previous H-D research or that there is a gap between how a theoretical construct is defined and measured (Cook and Campbell 1979, Trochim 2001), data collection might employ several methods to enable comparisons between different ways of understanding the construct. Behfar et al. (2011) faced this situation when they explored an anomaly related to the use of the process conflict construct in studies of groups.

Behfar et al. (2011), noting how “the few studies that have examined process conflict have produced mixed results” (p. 129), explored how group members understood process conflict by compiling data with three different methods: semistructured interviews to elicit scale item interpretations, members’ concept mapping of their own qualitative accounts describing conflict in their teams, and quantitative ratings from members and academic experts to codify those accounts of conflict. Here, the researchers specifically selected data compilation methods to enable different analytic contrasts: interview data provided idiographic interpretations of “conflict” in scale items, concept mapping provided a statistical visualization to contrast lay theory and academic classification of conflict types, and survey ratings provided a quantifiable basis to assess where layperson and academic expert ratings converged and diverged. Analysis of these three data sources helped to show how earlier operationalizations of process conflict suffered because they omitted unique aspects of the construct and simultaneously overlapped conceptually with two other intragroup conflict constructs: task and relationship conflict. According to Behfar et al. (2011), the choice of these data compilation methods was guided by the speculation that a comparison of perspectives would help discover the suspected construct explication problem related to the observed anomaly in the literature.

As an alternative to utilizing multiple methods, the researcher can use the same methodological “lens” but position it differently by altering the way instruments are administered. For instance, in addition to asking standard questions in a scale, the researcher might also ask them in a native versus second language or also ask them in different settings (e.g., at home and at work) to see how responses may vary. In a series of studies examining bicultural identity integration (BII), for example, Benet-Martinez et al. (2002) argued that “there is reason to question whether the process of cultural frame switching is uniform across all biculturals” (p. 495). They used different primes to understand the effect of having salient characteristics in the background or the foreground of a participant’s mind—and to examine whether such factors interfere with the psychometric properties of the BII scale. Their approach confirmed their speculation that “variations in BII may influence the process of cultural frame switching” (p. 496), concluding that the manner in which data elicitation techniques had been implemented in the past might be creating or hiding variability.

Guideline No. 4: Expanding Through Surplus Data

The choice to use different methods and measures as lenses, per guideline number 3, reflects a suspicion that an anomaly stems from a phenomenon being represented incorrectly. By contrast, our fourth guideline reflects the speculation that the anomaly exists because the complexity of the phenomenon has been underspecified or poorly explicated or that previous H-D research has overlooked dimensions of the phenomenon (Johns 2006). In response, a researcher may choose to compile additional data to enable different kinds of analyses, many of which are undefined a priori, and whose need may only become apparent through the analytical process. Bailyn (1977, p. 105) described this data expansion practice as the production of “surplus data,” where the researcher collects data that reflect on the initial “hunch,” may not inform it directly, but which may help uncover a different explanation for the anomaly. For instance, in her seminal paper investigating the well-being of working women and their families, Bailyn (1977) explained how she was guided by a suspicion that previous survey data demonstrating a mixed—but predominantly negative—impact of women’s employment on well-being did not fully account for the phenomenon. She noted how, “with some alarm,” she reexamined existing data sets in an effort “to try to disprove the finding” because it ran counter to her perspective on the issue (p. 98). In a set of new analyses, she included two variables she suspected could impact a working mother’s attitude toward children: her degree of choice to work (i.e., reflecting a need for money) and the quality of her marriage (i.e., reflecting a desire to work to escape an unhappy home life). 
When neither of those explanations fully accounted for the variance in well-being, her analysis identified that the orientation women had toward their careers was a slightly better indicator of well-being than a binary “working or not.” It was through this analytic persistence that she discovered a difference between the husband’s and wife’s career orientations that accounted for the biggest negative impact on family well-being. As Bailyn (1977, p. 100) explains, “Thus an analysis that started with a concern with women’s careers ended with a concept applicable to men’s relation to their own work.” By embracing the pragmatist assumption of complexity and by using data to broaden the inquiry around the interplay between constructs, she was able to offer a more nuanced and plausible explanation for consequential factors in relationships, both between husbands and wives and between employees and employers.

As Bailyn’s (1977) work shows, using unplanned variables allows for the investigation of alternative antecedents, interplay between constructs, different chains of events, or the relevance of different temporal parameters. In essence, rather than finding ways to reexamine the phenomenon as it has already been studied, expansion through the use of “surplus data” allows for the exploration of different or new aspects of a phenomenon. The central challenge in this type of expansion is that the researcher, in the role of active reasoner, must remain focused on unrepresented features of the context (Johns 2006) and choose data compilation techniques that cast a wider net. Whether about persons, groups, or organizations, the researcher needs to compile data in a way that accounts for previous inconsistencies and opens the door for broader and more comprehensive explanation.

Analytical Corroboration

Data analysis in the H-D tradition typically involves the application of a decision rule that produces a binary conclusion, either establishing support or not establishing support for the a priori hypothesis, and in this context, any analysis to develop explanation is inappropriate. Data analysis in abduction, on the other hand, actively seeks to develop a hypothesis or explanation—not to test one. As such, within abductive inquiry, explanations and analytical approaches are continually revised, reassessed, and reapplied until they converge on a plausible explanation. Abductive data analysis can be usefully thought of as a series of successive exploratory analyses that progressively inform or corroborate one another, so learning from one analysis and incorporating that knowledge into the next is desirable. The researcher may, for example, finish one set of analyses and realize that a different sample, different level of analysis, or different set of comparisons may provide a deeper or complementary perspective. Naturally, such a process involves trial and error, which means that the researcher, as the active reasoner, must remain open to the possibility that some analyses will be dead ends that do not add to the explanation. In general, analysis in abductive inquiry advances progressively and iteratively, recursively tightening the explanatory web surrounding the particular local anomaly under study (Agar 2010, Bailyn 1977, Farjoun et al. 2015). We offer two final and interrelated guidelines for the use of abductive reasoning when developing such analytical corroboration.

Guideline No. 5: Assessing Corroboration

Reflecting the pragmatist assumption that any single examination of a phenomenon is unlikely to provide a full explanation, abductive analysis must involve multiple contrasts and comparisons (Farjoun et al. 2015, Martela 2015). In other words, analytic comparisons in abductive reasoning should explore and exhaust relationships between concepts to allow the researcher to assess corroboration of multiple results with the emerging explanation. In the previously mentioned example of turnover in part- and full-time employees (Peters et al. 1981), for instance, one set of analyses identified similarities and differences between the groups while another analysis within the same sample compared patterns of results when different contextual variables were added or removed. This type of analytic comparison violates H-D standards because excessive comparisons can inflate error in estimation (Keppel and Zedeck 1989). This type of analysis, therefore, cannot rely on p-values as evidence (Gelman and Loken 2013), but researchers can consider the results of multiple comparisons to corroborate a plausible explanation and use their judgment to draw comparisons across multiple samples or studies to understand the anomaly. Returning to the Hawthorne studies, researchers learned from nonsignificant findings and used different data sources (i.e., experiments, interviews, observations) to produce new comparisons—and each one informed the next step in the research process. Similarly, Latham et al. (1988) conducted four experiments that produced data to evaluate four initial “brainstorm” suspicions. From an abductive perspective, seeking corroboration across multiple studies follows a core pragmatist assumption that multiple sources are needed to provide convergent and divergent evidence to corroborate an explanation.
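The inflation of error from unplanned comparisons can be demonstrated with a simulation (an illustrative sketch under assumed conditions, not an analysis from the studies cited): when many comparisons on pure noise are each tested at the nominal 5% level, the chance of at least one spurious “significant” result grows rapidly, which is why abductive analysis cannot lean on p-values in the usual way.

```python
import random

random.seed(7)

def spurious_hit_rate(n_comparisons, n=30, trials=2000):
    """Fraction of trials in which at least one of n_comparisons
    two-group mean differences on pure noise crosses the naive
    5% critical value (|z| > 1.96 on the standardized difference)."""
    hits = 0
    for _ in range(trials):
        for _ in range(n_comparisons):
            a = [random.gauss(0, 1) for _ in range(n)]
            b = [random.gauss(0, 1) for _ in range(n)]
            diff = sum(a) / n - sum(b) / n
            z = diff / (2 / n) ** 0.5  # se of a mean difference is sqrt(2/n)
            if abs(z) > 1.96:
                hits += 1
                break  # one spurious hit is enough for this trial
    return hits / trials

fwer_1 = spurious_hit_rate(1)    # one planned comparison: near the nominal .05
fwer_20 = spurious_hit_rate(20)  # twenty unplanned ones: near 1 - 0.95**20, about .64

print(f"familywise error, 1 comparison:   {fwer_1:.2f}")
print(f"familywise error, 20 comparisons: {fwer_20:.2f}")
```

This is the statistical substance behind the guideline: once comparisons multiply, individual significance tests lose their evidentiary meaning, and corroboration must instead come from convergence of results across samples, methods, and analyses.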

Guideline No. 6: Assembling Plausible Explanation

Our last guideline builds on the use of multiple analyses because, after engaging analytical corroboration, the researcher’s task in the role of the active reasoner is to interpret and knit together the results of the different analyses conducted into a plausible and coherent explanation. Coherence is attained when a clear link is made between the empirical data and the explanations that are built, such that evidence from multiple sources consistently converges into a plausible explanation (Shepherd and Sutcliffe 2011). Behfar et al. (2011) again provide a useful example; it was the cumulative findings from the three different studies and their respective analyses that allowed them to arrive at conclusions regarding a new conceptualization and operationalization of process conflict. In fact, as they resolved the puzzle of process conflict, Behfar and colleagues also found evidence that a similar puzzle might underlie conceptions of relationship conflict, pointing out an avenue for future research.

In presenting abductive explanation, the burden is on the researchers to explain the path as one where explanation moves from “incoherent representations” to “increasingly more coherent representations” (Shepherd and Sutcliffe 2011, p. 365), or as a “noncontradictory whole” (Locke 2007, p. 883). This might require researchers, again in the role of active reasoners, to carefully explain and interpret analyses where particular variables were important versus when they were not and to document which analyses point to alternative or divergent explanations in the same detail as convergent ones. Returning to Bailyn’s (1977, pp. 98–99) work, she allowed initially that the results of additional analysis “brought further verification of the finding” she considered suspect, leaving her with “a provisional, if reluctant acceptance” that couples with working wives were less happy. It was only with the addition of new variables, particularly a more accurate representation of the “career orientation” of women and a cross-tabulation with their husbands’ “career and family orientation,” that she could develop firmer evidence for her ultimate (and plausible) conclusion. Bailyn’s (1977) explanation highlights how the degree of convergence of the evidence (Agar 2010) and of the coherence of a given explanation (Shepherd and Sutcliffe 2011) is what provides plausibility to the knowledge claims that are the result of abductive analysis. As Shepherd and Sutcliffe (2011) noted, “The more coherent a story, the more it is accepted as a plausible explanation of the phenomenon,” such that in the case of an anomaly an “explanation can be replaced by a more coherent explanation” (p. 365).

Evaluating Abduction as Substitution

Ultimately, the process and product of the three legs of the abductive inquiry stool we outline (data search and selection, data production and compilation, and analytical corroboration) must be evaluated. In this sense, the scientific process requires a critical appraisal of the resulting knowledge claims—primarily to determine whether they are plausibly “warranted” (Martela 2015). From a pragmatist perspective, though, the resulting explanation and the reasoning process that produces it cannot be meaningfully separated. In other words, abductive inquiry is both a process and a product of the search for explanation, and so we next explain ways in which the process and product can be evaluated. These guidelines can best be understood as an initial set of suggestions; as our H-D science develops in the acknowledgement and use of abduction, norms and guidelines such as these are likely to expand.

Evaluating the Deliberate Abduction Process

Evaluating a project that seeks to explain anomalies through the use of deliberate abduction is different from evaluating an H-D project. Norms for evaluating H-D work have developed over time, establishing strong paradigmatic rules for judgments of adequacy based on how well prescriptive norms are followed in the interest of methodological rigor (Mantere and Ketokivi 2013). Such an approach is not available in judging projects based on deliberate abductive reasoning because the very notion of compliance with established norms violates key assumptions regarding abductive reasoning and discovery, which require open and unstructured approaches rooted in the specific features of a situation. Thus, the primary standard to use in evaluating the appropriateness of the abductive reasoning process is to judge its congruence with the initiating inquiry (Ketokivi and Mantere 2010)—that is, the match between the anomaly that initiates the project and the methods and data used. In other words, judgments of quality rest on assessing the integrity of, or consistency among, the three legs of the stool in abductive inquiry (data search and selection, data production and compilation, and analytical corroboration). Evaluating the plausibility of an abductively generated knowledge claim must also consider whether and how thoroughly the inquiry process expanded the search for explanation in the face of the initiating anomaly and whether the wider net accounted for potential alternative antecedents or consequences that open the door for more comprehensive explanation.

To demonstrate coherence between the inquiry process and any knowledge claim, the derivation of conclusions should be carefully documented as researchers build the case for them. The reader of a manuscript should be able to examine the presentation of findings and see a clear link between “flashes of insight,” “ampliative reasoning,” and “conceptual leaps” in the analytical process, the choices made (and perhaps the choices not made), the evidence demonstrating plausibility, and resulting explanations about the anomaly. This “road map of discovery” is the basis on which other researchers, reviewers, and editors, who are also active reasoners in their evaluations, can develop their judgment on the adequacy of the effort by providing close enough engagement with the data to draw their own conclusions (Pratt 2009). The description of these different elements should allow the reader to identify the strengths and pitfalls of the approach, providing information on what led to the different links in a “reasoning and evidence” chain. Naturally, this also highlights how, in situations where this process is not well articulated or justified, where it lacks substance, or where insufficient attention is given to expanding the explanation or considering alternatives, skepticism is warranted—the onus is on the researcher to explain the fit of the process, the inquiry, and the explanation produced (Chaffee 1991). In sum, the abductive process can be evaluated positively if (1) the selection of data sources reflects the guiding inquiry, (2) the choices for data production have developed logical links and relationships between comparisons, (3) the analysis has refused tendencies to simplify and end too quickly, and (4) resulting explanations are deemed plausible because they leverage differences among different data sources and types.

Evaluating the Explanation Produced from Abductive Reasoning

In addition to evaluation of the abductive inquiry process, it is also important to establish criteria to evaluate the contribution of the knowledge claims and explanations that are produced. Given that plausible explanations are the only ones that can result from abduction, the first and most important standard to consider is whether, given the quality of the process and the evidence, the explanation produced is plausible. In addition, though, our particular focus on resolving anomalies within the H-D tradition suggests that there are at least four types of outcomes that reflect a superior standard of plausible explanation.

The first product of good abductive reasoning, and perhaps the most straightforward to evaluate, occurs when a testable hypothesis is produced—one that can be subject to empirical analysis through deductive testing. In this situation, the researcher may use abductive reasoning to examine an anomaly and establish sufficient corroborative evidence to create an explanation that establishes coherence among mixed findings. This knowledge claim can then be tested with an H-D approach to establish certainty. In other words, a testable hypothesis is one potential and important knowledge claim (i.e., a plausible or coherent explanation) that results from abductive reasoning.

A second high-quality explanation from deliberate abduction to understand an anomaly is the identification of new constructs or relationships that are critical to understanding the phenomenon. Here, the resulting knowledge claim creates a plausible case that previous research has omitted relationships and variables that are central to a phenomenon. As Peters et al. (1981) summarized after showing the effects of employment status (i.e., full or part time) in considering turnover, “Both the results and the interpretations offered above should be viewed as suggestive of the existence of an important moderator variable and a potential direction for future research” (p. 97). Other examples can be the identification of features whose presence or absence can create different typologies of organizations and, consequently, different configurations in outcomes (e.g., Fiss 2011), or demonstrations of consistent differences or similarities as plausible reasons for associations (Eisenhardt 1989, Fawcett and Downs 1986, Yin 2018). In each of these cases, the outcome of the abductive inquiry may suggest that subsequent work examining the same phenomena must consider (and even include) specific variables, constructs, or relationships as indispensable parts of any analysis to be considered robust or complete.

A third and more complex knowledge claim from the type of abductive analysis we are proposing occurs when a phenomenon does not easily yield to examination, situations where the inquiry uncovers complexity that might have remained hidden in earlier work. In these cases, the product may not easily be summarized into a testable hypothesis or a set of indispensable variables and relationships. Instead, the complexity may only allow for the creation of a provisional map of the phenomenon, where basic contours and understanding of a phenomenon are presented but where theoretical space remains to be filled in. A valuable example is Lawrence’s (2006) work mapping the complex contours of organizational reference groups, which opens multiple avenues for additional research. In such situations, the contribution is the identification of features that give shape or “contextual certainty” (Locke 2007, p. 885) to the landscape, recognizing that it provides only an initial exploration of a complex phenomenon. The development of such a map, however, can offer multiple plausible explanations to illuminate interconnections and complex relationships, retaining the holistic nature of the phenomenon and refusing simplification (Farjoun et al. 2015, Martela 2015). We argue that such a comprehensive map, one that grapples with a complex anomaly, also meets high standards as a product of abductive inquiry.

A fourth and final outcome of abductive analysis may, in fact, focus less on providing an initial explanation and instead emphasize the presentation of an empirical puzzle. As Hambrick (2007) noted, in some instances, the careful documentation of a puzzling or novel empirical result is necessary to simply open a conversation. He argued that dressing up interesting empirical results with too much explanation (or, in his words, “theory”) can dilute their importance and interestingness by limiting the description of features that may be inconsistent with an overarching explanation. In the extreme, he points to unfortunate situations where “facts must wait for theories” before they can be revealed (Hambrick 2007, p. 1348). An example of what such a manuscript might involve is provided by Burt and Merluzzi’s (2016) work explaining network oscillation. They readily acknowledge that their data “is limited” and thus propose an explanation “[w]ith an eye to future research” (p. 369). The manuscript itself is devoted to very carefully documenting the empirical data they use and the different analyses they conducted, including a set based on stylized examples. Overall, this shows that a valuable outcome of abductive analysis can also be the careful and thorough documentation of an interesting empirical situation.

Discussion

In this article, we have discussed abduction and its roots in pragmatist philosophy, and we have argued that a more thorough and deliberate consideration of the role of abductive reasoning in theorizing can advance work in the H-D paradigm. We have discussed how abductive reasoning is already present in H-D projects as well as some of the common pitfalls centered on its improper use in analytical practices (e.g., data fishing) and in using abductive reasoning to make probable or certain knowledge claims (e.g., HARKing). We have argued for the need to deliberately surface abduction and for the need to use it explicitly as a complement, and we have also explained how abduction can substitute for H-D logic in manuscripts that focus on anomalies in extant research. We have also argued that resolving anomalies from H-D projects is one place where abductive reasoning is especially relevant, as it provides an alternative epistemological foundation to explore, discover, and develop explanation through close engagement with data. Finally, we have offered guidelines to illustrate how both the analytical choices of the “active reasoner” and the knowledge claims generated in projects reliant on abductive reasoning might be evaluated. We next discuss how our treatment of abduction contributes to broader conversations regarding theory development in management and organizations.

Abductive Reasoning in Theory Development

Responding to Concerns for Stronger Theory Development

Explicitly embracing the use of abductive reasoning within H-D projects can respond to calls for stronger and more expansive theorizing within management and organizations. For instance, Delbridge and Fiss (2013) suggest that developing understanding in our field requires a broad range of approaches. Abductive inquiry can aid this call because the development of explanation about a phenomenon will often require the researcher to cross paradigms, mix methods, and understand phenomena through different disciplinary approaches. Additionally, Luthans and Davis (1982) argue that research in organizations needs to balance the development of specificity and accuracy (i.e., by understanding single cases or events) with generalizability (i.e., by understanding populations). As they put it, research “needs to proceed both from the idiographic to the nomothetic and from the nomothetic to the idiographic” (Luthans and Davis 1982, p. 381). Such a view argues that the generalizability that H-D approaches naturally pursue cannot be fully understood in the absence of context-specific inquiry, a strength of abductive reasoning since contextual factors are always relevant to social phenomena. Tsoukas (1989) also argues that knowledge claims in organizations and management are incomplete unless they involve both forward-looking predictions (i.e., deduction) and backward-looking explanations (i.e., abduction/retroduction) that connect to epistemological assumptions. Finally, greater openness to discovery through abductive reasoning would also respond to calls for more innovative and imaginative research (Alvesson and Sandberg 2013). Overall, the point is that because abduction has different epistemological assumptions about what constitutes evidence and explanation, it is well positioned to challenge established positivist norms in our field, which is one way to develop frame-breaking advances. In particular, abductive reasoning (and pragmatism more broadly) provides alternative norms on which to judge knowledge claims. These alternative norms can also help ameliorate the straitjacket that strict adherence to H-D standards (i.e., “rigor”) can impose and that can “guillotine” frame-breaking thinking (Alvesson and Sandberg 2013).

Making the Researcher Central in Evaluating Knowledge Claims

Another opportunity from a more explicit incorporation of abductive reasoning into our theorizing is the chance to bring the researcher, as an active reasoner, into focus as a key player in theory development. To establish the limits and strengths of the theories we develop, Anteby (2013) and Hudson and Okhuysen (2014) have called for a more explicit incorporation of the researcher into our theorizing processes. In other words, they argue that a full accounting of the participation of the researcher in theorizing is likely to yield better theory. Through its pragmatic emphasis on “how we know” as well as “what we know,” abductive reasoning can help answer this call. We believe that the explicit acknowledgment of the researcher and the focus on “how we know” can be useful in clarifying the type of knowledge claim studies make (i.e., plausible, probable, or certain), in focusing on generating explanation in addition to prediction, in clarifying criteria to evaluate abductive knowledge claims, and in putting discussions about the merit of particular methods (i.e., quantitative versus qualitative) in the background while bringing the generation of understanding and the evaluation of knowledge claims to the foreground (Wicks and Freeman 1998). It is only through a focus on knowledge claims as the building blocks of theory, and on their origin in the complex mix of methods, approaches, and researcher choices, that the field can arrive at a consensus about which knowledge claims are warranted, establishing solid foundations in our science.

Enhancing the Justification for Mixed Methods

A more explicit acknowledgment of abductive reasoning can also provide an epistemological framework for working with mixed methods, for justifying their use, and for knitting together the results from different sources. As we noted above, abductive reasoning places a strong emphasis on corroboration, on the development of evidence and explanation from different sources as a way to make appropriate assertions regarding a particular situation. Kreager et al. (2017), in a study to resolve competing explanations about the “experience and social organization” (p. 686) of prison inmates, demonstrated the usefulness of abductive reasoning in mixed-methods work by corroborating qualitative insight about inmate status through quantitative network data. This approach was effective, as mixed methods are often useful in solving practical problems or anomalies (Tashakkori and Teddlie 1998). In fact, a common challenge in using mixed methods is the presence of conflicting paradigmatic norms: qualitative and quantitative data, subjective and objective, idiographic and nomothetic approaches have difficulty coexisting. Our arguments suggest that the use of abductive reasoning, with appropriate recognition of its role in inquiry and knowledge claiming, can provide a useful philosophical justification for crossing paradigmatic norms in mixed-methods research.

Enhancing the Development of Midrange Theory

Incorporating abductive reasoning in H-D work can also strengthen the development and advancement of midrange theory. Midrange theory “acknowledges the importance of abstraction, representation, and refinement of general principles” but also seeks to represent particular phenomena that are “emergent, contingent, and locally specific” (Thompson 2011, p. 754). Midrange theory has been used to explain situations where a particular context causes a phenomenon to exhibit unique characteristics. Examples include organizational politics in high-velocity environments (Eisenhardt and Bourgeois 1988) as well as human resource practices that vary across countries (Teagarden et al. 1995). The hallmark of midrange explanations is that they refuse simplification of either theoretical principles to achieve specificity or of empirical reality to achieve generality, instead seeking a balance between the two. And as Birkinshaw et al. (2014) indicated, midrange theories “provide a flexible toolkit to confront the complex and shifting realities of organizations and their management” (p. 51). From our perspective, what is important is that the exploration necessary for the development of midrange theory relies heavily on abductive reasoning, on contextually specific examinations to understand the limits to the particular or local applicability of general theory—situations that are likely revealed through anomalies. In essence, the emphasis on phenomenological precision that abductive reasoning allows can help identify and refine the scope of theories and explicitly help balance generality with specificity to account for unique contextual features, common concerns in midrange theory (Merton 1949, Thompson 2011, Weick 1979).

The Future of Abductive Research

Naturally, proposing the use of abductive reasoning to further H-D projects is likely to inspire concerns, particularly given the radically different epistemological roots of abduction and deduction (Ketokivi and Mantere 2010), and the adoption of such a different analytical approach is unlikely to be smooth. One important obstacle is the acknowledgement or admission that the researcher is, and always has been, an active reasoner in theorizing within the H-D tradition. Such an acknowledgement runs afoul of conceptions of the researcher as “disinterested” or “objective” rather than as one situated in a discipline, a set of personal experiences, and an ontology (Hudson and Okhuysen 2014, Van de Ven 2007). Acknowledging the researcher in this way is likely to remain a challenge even though, increasingly, there is recognition that objectivity and rationality are themselves socially constructed. Such an acceptance—or even embrace—of the subjectivity of the researcher (and reviewers) in the broader research enterprise can be salutary, as it forces attention to challenges in our scholarly pursuits that otherwise remain hidden (Anteby 2013, Hudson and Okhuysen 2014, Ketokivi and Mantere 2010, Schwab and Starbuck 2017).

More broadly, we judge the obstacles for incorporating abductive reasoning to be similar to earlier controversies regarding inductive and qualitative research in organization studies. Three decades ago, studies based on case analyses, qualitative data, and inductive approaches were barely present in the field of management and organizations (Luthans and Davis 1982). Their inclusion in top journals required significant justification and also involved extensive elaboration and explanation of empirical and analytical approaches, as well as significant education of the field and adaptation to its expectations by authors and reviewers. We see a similar path open for H-D research that theorizes using abductive reasoning. To begin, we believe that such research must become more familiar because abductive inquiry does not conform to traditional norms for deductive work. This means that the language of abduction, such as “backward induction,” “initiating inquiry,” and “tentative explanation,” all need to become part of our verbal repertoire so that the approach can be more readily understood. In addition, recognition is needed that the development of explanation as a goal in research is justified. Without changes such as these, there is a danger that processes of evaluation (i.e., editorial review) will push the development of a manuscript toward perspectives that are inconsistent with the goal of the research (Bansal and Corley 2012, Pratt 2009, Schein 2007), pressuring authors to more neatly conform to well-known standards.

One modest step to begin this journey is to explicitly “name” and surface abductive reasoning in H-D manuscripts and to use language that is appropriate to describe the generation of explanation within already existing traditions. For instance, “Post Hoc Analysis” sections can be renamed “Abductive Explanation” sections, opening the door to the use of more accurate and explanatory language for our research. Such steps can lead to greater understanding of—and more acceptance for—manuscripts that are solely based on abductive reasoning. Returning to the example of inductive work, three decades ago, such research was difficult to publish and infrequent. Although not every barrier to inductive research has been removed, as time has passed, the tide has turned, and more research using an inductive lens has reached our top journals. As the value of inductive work became evident and its contribution widely appreciated, so did conventions and standards for judging its empirical and theoretical adequacy and excellence. In the same manner, our field can also change to recognize the value and embrace the usefulness of abductive inquiry in our research endeavors.

Conclusion

Early in this manuscript, we asked a central question—that is, whether discovery has a place in the H-D paradigm. To the extent that H-D is seen as a methodological approach focused on validation (as suggested by Popper 1959), the answer is no (Locke 2007, Simon 1973). To the extent that the H-D paradigm relies on the idea of researcher-invariant approaches, discovery is implausible because of H-D’s strong normative approach (Mantere and Ketokivi 2013). However, as others have highlighted and our treatment suggests, close engagement with data in the context of H-D projects allows a researcher to identify new variables, relationships, and contextual features. Our own language reveals how abductive reasoning helps “explore,” “discover,” and otherwise shine light on unexpected aspects of social phenomena. In fact, our treatment suggests that abductive reasoning in H-D can play a role similar to one it plays in induction (Timmermans and Tavory 2012, pp. 176–177) by allowing a researcher stance that enables a “revisiting” of the phenomenon through “defamiliarization,” and encouraging the use of “alternative casings” to see it anew. It is in this process that discovery becomes possible in the context of H-D research.

How, then, do we reconcile perspectives indicating that H-D does not allow for discovery with our own arguments? Here, a useful guide is provided by the core assumptions of pragmatism. In his original work, one of Peirce’s potent insights is the inseparability of validation and discovery. For Peirce, reasoning processes and empirical data are always intertwined (Hanson 1958, Mantere and Ketokivi 2013, Simon 1973, Timmermans and Tavory 2012). While reasoning processes and data may be distinguishable, they are not separable. For Peirce, validation is not a method but simply one more way of knowing and understanding the world. As such, deduction is subject to the same vagaries and opportunities as induction and abduction, rendering it equally a process focused on explanation rather than validation (Hanson 1958, Lukka and Modell 2010). It is in this context that our perspective on the contribution of abduction to H-D scholarship in organization science can be understood.

Acknowledgments

The authors were equal contributors to this manuscript. They gratefully acknowledge the effort and attention of three anonymous reviewers and the senior editor, Michel Anteby. In addition, the authors would like to thank Peter Belmi, Matthew Cronin, Edward Freeman, and Bryant Hudson for their comments on the paper during its development. They also acknowledge participants in seminars and workshops at the Richard Ivey School of Business, the C. T. Bauer College of Business, the Wharton School of Business, and the Academy of Management for their engagement and comments. The views expressed are those of the authors and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government.

Endnote

1 It is helpful to note that constructivist and interpretivist approaches, which strongly emphasize people’s lived experience, carry many of the same values as pragmatism. Some authors have noted these similarities and sometimes argued that pragmatist concerns underpin such approaches (Farjoun et al. 2015, Martela 2015).

References

  • Agar M (2010) On the ethnographic part of the mix: A multi-genre tale of the field. Organ. Res. Methods 13(2):286–303.CrossrefGoogle Scholar
  • Alvesson M, Kärreman D (2007) Constructing mystery: Empirical matters in theory development. Acad. Management Rev. 32(4):1265–1281.CrossrefGoogle Scholar
  • Alvesson M, Sandberg J (2013) Has management studies lost is way? Ideas for more imaginative and innovative research. J. Management Stud. 50(1):128–152.CrossrefGoogle Scholar
  • Anonymous (2015) The case of the hypothesis that never was: Uncovering the deceptive use of post hoc hypotheses. J. Management Inquiry 24(2):214–216.CrossrefGoogle Scholar
  • Anteby M (2013) Relaxing the taboo on telling our own stories: Upholding professional distance and personal involvement. Organ. Sci. 24(4):1277–1290.LinkGoogle Scholar
  • Argyle M (1972) Non-verbal communication in human social interaction. Hinde RA, ed. Non-Verbal Communication (Cambridge University Press, Oxford, UK), 243–267.Google Scholar
  • Attewell P, Monaghan D, Kwong D (2015) Data Mining for the Social Sciences: An Introduction (University of California Press, Oakland).Google Scholar
  • Bailyn L (1977) Research as a cognitive process: Implications for data analysis. Quality Quantity 11(2):97–117.CrossrefGoogle Scholar
  • Bamberger P, Ang S (2016) The quantitative discovery: What is it and how to get it published. Acad. Management Discoveries 2(1):1–6.CrossrefGoogle Scholar
  • Bamberger PA, Pratt MG (2010) Moving forward by looking back: Reclaiming unconventional research contexts and samples in organizational scholarship. Acad. Management J. 53(4):665–671.CrossrefGoogle Scholar
  • Bansal P, Corley K (2012) Publishing in AMJ—Part 7: What’s different about qualitative research? Acad. Management J. 55(3):509–513.CrossrefGoogle Scholar
  • Bedeian AG, Taylor SG, Miller AN (2010) Management science on the credibility bubble: Cardinal sins and various misdemeanors. Acad. Management Learn. Ed. 9(4):715–725.CrossrefGoogle Scholar
  • Behfar KJ, Mannix EA, Peterson RS, Trochim WM (2011) Conflict in small groups: The meaning and consequences of process conflict. Small Group Res. 42(2):127–176.CrossrefGoogle Scholar
  • Benet-Martinez V, Leu J, Lee F, Morris MW (2002) Negotiating biculturalism: Cultural frame switching in biculturals with oppositional versus compatible cultural identities. J. Cross-Cultural Psych. 33(5):492–516.CrossrefGoogle Scholar
  • Biemann T (2013) What if we were Texas sharpshooters? Predictor reporting bias in regression analysis. Organ. Res. Methods 16(3):335–363.CrossrefGoogle Scholar
  • Birkinshaw J, Healey MP, Suddaby R, Weber K (2014) Debating the future of management research. J. Management Stud. 51(1):38–55.CrossrefGoogle Scholar
  • Bosco FA, Aguinis H, Field JG, Pierce CA, Dalton DR (2016) HARKing’s threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psych. 69(3):709–750.CrossrefGoogle Scholar
  • Burch R (2014) Charles Sanders Peirce. Stanford Encyclopedia of Philosophy (Stanford University, Stanford, CA). Article published June 22, 2001; last modified November 12, 2014. https://plato.stanford.edu/entries/peirce/.Google Scholar
  • Burt RS, Merluzzi J (2016) Network oscillation. Acad. Management Discoveries 2(4):368–391.CrossrefGoogle Scholar
  • Chaffee S (1991) Explication (Sage, Newbury Park, CA).Google Scholar
  • Chen MS, Han J (1996) Data mining: An overview from a database perspective. IEEE Trans. Knowledge Data Engrg. 8(6):866–883.CrossrefGoogle Scholar
  • Comte A (1975) Auguste Comte and Positivism: The Essential Writings (Transaction Publishers, New Brunswick, NJ).Google Scholar
  • Cook T, Campbell D (1979) Quasi-Experimentation: Design and Analysis for Field Settings (Houghton Mifflin Company, Boston).Google Scholar
  • Delbridge RD, Fiss PC (2013) Styles of theorizing and the social organization of knowledge. Acad. Management Rev. 38(3):325–331.CrossrefGoogle Scholar
  • Dunne DD, Dougherty D (2016) Abductive reasoning: How innovators navigate in the labyrinth of complex product innovation. Organ. Stud. 37(2):131–159.CrossrefGoogle Scholar
  • Edmonson A, McManus S (2007) Methodological fit in management field research. Acad. Management Rev. 32(4):1155–1179.CrossrefGoogle Scholar
  • Edwards K (1990) The interplay of affect and cognition in attitude formation and change. J. Personality Soc. Psych. 59(2):202–216.CrossrefGoogle Scholar
  • Eisenhardt KM (1989) Building theories from case study research. Acad. Management Rev. 14(4):532–550.CrossrefGoogle Scholar
  • Eisenhardt K, Bourgeois J (1988) Politics of strategic decision making in high velocity environments. Acad. Management J. 31(4):737–770.CrossrefGoogle Scholar
  • Elkjaer B, Simpson B (2011) Pragmatism: A lived and living philosophy. What can it offer to contemporary organization theory? Philos. Organ. Theory 32:55–84.CrossrefGoogle Scholar
  • Farjoun M, Ansell C, Boin A (2015) Pragmatism in organization studies: Meeting the challenges of a dynamic and complex world. Organ. Sci. 26(6):1787–1804.LinkGoogle Scholar
  • Fawcett J, Downs F (1986) The Relationship of Theory and Research (Appleton Centry Crofts, Norwalk, CT).Google Scholar
  • Fayyad U, Piatetsky-Shapiro G, Smyth P, Uthurusamy R, eds. (1996) Advances in Knowledge Discovery and Data Mining (MIT Press, Cambridge, MA).Google Scholar
  • Fiss PC (2011) Building better causal theories: A fuzzy set approach to typologies in organization research. Acad. Management J. 54(2):393–420.CrossrefGoogle Scholar
  • Folger R, Stein C (2017) Abduction 101: Reasoning processes to aid discovery. Human Resource Management Rev. 27(2):306–315.CrossrefGoogle Scholar
  • Franke RH, Kaul JD (1978) The Hawthorne experiments: First statistical interpretation. Amer. Sociol. Rev. 43(5):623–643.CrossrefGoogle Scholar
  • Gelman A, Imbens G (2013) Why ask why? Forward causal inference and reverse causal questions. NBER Working Paper 19614, National Bureau of Economic Research, Cambridge, MA.Google Scholar
  • Gelman A, Loken E (2013) The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Working paper, Columbia University, New York. http://www.stat.columbia.edu/∼gelman/research/unpublished/p_hacking.pdf.Google Scholar
  • Goffman E (1959) The Presentation of Self in Everyday Life (Doubleday, Garden City, NY).Google Scholar
  • Grant AM (2008) The significance of task significance: Job performance effects, relational mechanisms, and boundary conditions. J. Appl. Psych. 93(1):108–124.CrossrefGoogle Scholar
  • Hambrick DC (2007) The field of management’s devotion to theory: Too much of a good thing? Acad. Management J. 50(6):1346–1352.CrossrefGoogle Scholar
  • Hanson NR (1958) The logic of discovery. J. Philos. 55(25):1073–1089.CrossrefGoogle Scholar
  • Heckman JJ (1979) Sample selection bias as a specification error. Econometrica 47(1):153–161.CrossrefGoogle Scholar
  • Hudson BA, Okhuysen GA (2014) Taboo topics: Structural barriers to the study of organizational stigma. J. Management Inquiry 23(3):242–253.CrossrefGoogle Scholar
  • Hurley P (2000) A Concise Introduction to Logic, 7th ed. (Wadsworth, Belmont, CA).Google Scholar
  • Hurley P (2014) A Concise Introduction to Logic, 12th ed. (Clark Baxter, Boston).Google Scholar
  • Johns G (2006) The essential impact of context on organizational behavior. Acad. Management Rev. 31(2):386–408.CrossrefGoogle Scholar
  • Kalton G (1983) Introduction to Survey Sampling, Vol. 35 (Sage, Newbury Park, CA).CrossrefGoogle Scholar
  • Keppel G, Zedeck S (1989) Data Analysis for Research Designs: Analysis of Variance and Multiple Regression/Correlation Approaches (W. H. Freeman and Company, New York).Google Scholar
  • Ketokivi M, Mantere S (2010) Two strategies for inductive reasoning in organizational research. Acad. Management Rev. 35(2):315–333.CrossrefGoogle Scholar
  • Klag M, Langley A (2013) Approaching the conceptual leap in qualitative research. Internat. J. Management Rev. 15(2):149–166.CrossrefGoogle Scholar
  • Kohler W (1947) Gestalt Pychology: An Introduction to New Concepts in Modern Psychology (Liveright, New York).Google Scholar
  • Kreager DA, Young JTN, Haynie DL, Bouchard M, Schaefer DR, Zajac G (2017) Where “old heads” prevail: Inmate hierarchy in a men’s prison unit. Amer. Sociol. Rev. 82(4):685–718.
  • Latham G, Erez M, Locke E (1988) Resolving scientific disputes by the joint design of crucial experiments by the antagonists: Application to the Erez-Latham dispute regarding participation in goal setting. J. Appl. Psych. 73(4):753–772.
  • Lawrence BS (2006) Organizational reference groups: A missing perspective on social context. Organ. Sci. 17(1):80–100.
  • Levitt SD, List JA (2011) Was there really a Hawthorne effect at the Hawthorne plant? An analysis of the original illumination experiments. Amer. Econom. J.: Appl. Econom. 3(1):224–238.
  • Lipton P (2001) What good is an explanation? Hon G, Rakover SS, eds. Explanation: Theoretical Approaches and Applications (Springer, Dordrecht, Netherlands), 43–59.
  • Locke EA (2007) The case for inductive theory building. J. Management 33(6):867–890.
  • Locke K (2010) Abduction. Mills AJ, Durepos G, Wiebe E, eds. Encyclopedia of Case Study Research (Sage, Thousand Oaks, CA), 46–53.
  • Locke K (2011) Field research practice in management and organization studies: Reclaiming its tradition of discovery. Acad. Management Ann. 5(1):613–652.
  • Locke K, Golden-Biddle K (1997) Constructing opportunities for contribution: Structuring intertextual coherence and “problematizing” in organizational studies. Acad. Management J. 40(5):1023–1062.
  • Locke K, Golden-Biddle K, Feldman MS (2008) Making doubt generative: Rethinking the role of doubt in the research process. Organ. Sci. 19(6):907–918.
  • Lockett A, McWilliams A, Van Fleet DD (2014) Reordering our priorities by putting phenomena before design: Escaping the straitjacket of null hypothesis significance testing. British J. Management 25:863–873.
  • Lukka K, Modell S (2010) Validation in interpretive management accounting research. Accounting, Organ. Soc. 35(4):462–477.
  • Luthans F, Davis T (1982) An idiographic approach to organizational behavior research: The use of single case experimental designs and direct measures. Acad. Management Rev. 7(3):380–391.
  • Magnani L (2001) Abduction, Reason and Science: Processes of Discovery and Explanation (Kluwer Academic/Plenum Publishers, New York).
  • Mantere S, Ketokivi M (2013) Reasoning in organization science. Acad. Management Rev. 38(1):70–89.
  • Martela F (2015) Fallible inquiry with ethical ends-in-view: A pragmatist philosophy of science for organizational research. Organ. Stud. 36(4):537–563.
  • Merton R (1949) Social Theory and Social Structure (Free Press, Glencoe, IL).
  • Miller CC, Bamberger P (2016) Exploring emergent and poorly understood phenomena in the strangest of places: The footprint of discovery in replications, meta-analyses, and null findings. Acad. Management Discoveries 2(4):313–319.
  • Minnameier G (2017) Forms of abduction and an inferential taxonomy. Magnani L, Bertolotti T, eds. Springer Handbook of Model-Based Science (Springer, Berlin), 175–195.
  • Mintzberg H (1979) An emerging strategy of “direct” research. Admin. Sci. Quart. 24(4):582–589.
  • Peirce C (1965) The Collected Papers of Charles Sanders Peirce: Elements of Logic, Vol. 2, Hartshorne C, Weiss P, eds. (Harvard University Press, Cambridge, MA).
  • Peters LH, Jackofsky EF, Salter JR (1981) Predicting turnover: A comparison of part-time and full-time employees. J. Organ. Behav. 2(2):89–98.
  • Popper K (1959) The Logic of Scientific Discovery (Basic Books, New York).
  • Pratt M (2009) For the lack of a boilerplate: Tips on writing up (and reviewing) qualitative research. Acad. Management J. 52(5):856–862.
  • Rea L, Parker R (1997) Designing and Conducting Survey Research: A Comprehensive Guide (Jossey-Bass Publishers, San Francisco).
  • Roethlisberger F, Dickson W, Wright H (1956) Management and the Worker (Harvard University Press, Cambridge, MA).
  • Rozeboom W (1997) Good science is abductive, not hypothetico-deductive. Harlow L, Mulaik S, Steiger J, eds. What If There Were No Significance Tests? (Lawrence Erlbaum Associates, Mahwah, NJ), 335–392.
  • Schein VE (2007) Women in management: Reflections and projections. Women Management Rev. 22(1):6–18.
  • Schwab A, Starbuck WH (2017) A call for openness in research reporting: How to turn covert practices into helpful tools. Acad. Management Learn. Ed. 16(1):125–141.
  • Selvin HC, Stuart A (1966) Data-dredging procedures in survey analysis. Amer. Statist. 20(3):20–23.
  • Shepherd DA, Sutcliffe KM (2011) Inductive top-down theorizing: A source of new theories of organization. Acad. Management Rev. 36(2):361–380.
  • Simon HA (1973) Does scientific discovery have a logic? Philos. Sci. 40(4):471–480.
  • Sproull N (1988) Handbook of Research Methods: A Guide for Practitioners and Students in the Social Sciences, 2nd ed. (Scarecrow Press, Lanham, MD).
  • Suddaby R (2006) From the editors: What grounded theory is not. Acad. Management J. 49(4):633–642.
  • Sutton R, Staw B (1995) What a theory is not. Admin. Sci. Quart. 40(3):371–384.
  • Tashakkori A, Teddlie C (1998) Mixed Methodology: Combining Qualitative and Quantitative Approaches (Sage, Thousand Oaks, CA).
  • Teagarden MB, Von Glinow MA, Bowen DE, Frayne CA, Nason S, Huo YP, Milliman J, et al. (1995) Toward a theory of comparative management research: An idiographic case study of the best international human resources management project. Acad. Management J. 38(5):1261–1287.
  • Thompson J (2011) Organizations in Action: Social Science Bases of Administrative Theory (McGraw Hill, New Brunswick, NJ).
  • Timmermans S, Tavory I (2012) Theory construction in qualitative research: From grounded theory to abductive analysis. Sociol. Theory 30(3):167–186.
  • Trochim W (2001) The Research Methods Knowledge Base (Atomic Dog Publishing, Cincinnati).
  • Tsoukas H (1989) The validity of idiographic research explanations. Acad. Management Rev. 14(4):551–561.
  • Van de Ven A (2007) Engaged Scholarship: A Guide for Organizational and Social Research (Oxford University Press, Oxford, UK).
  • Walton D (2004) Abductive Reasoning (University of Alabama Press, Tuscaloosa).
  • Weick K (1979) The Social Psychology of Organizing, 2nd ed. (McGraw-Hill, New York).
  • Wicks AC, Freeman RE (1998) Organization studies and the new pragmatism: Positivism, anti-positivism, and the search for ethics. Organ. Sci. 9(2):123–140.
  • Yin RK (2018) Case Study Research and Applications: Design and Methods, 6th ed. (Sage, Thousand Oaks, CA).

Kristin Behfar is professor of strategic leadership and ethics at the United States Army War College. She received her Ph.D. from Cornell University in organizational behavior. Her research focuses on conflict management, cross-cultural teamwork, and research methods.

Gerardo A. Okhuysen is professor of organization and management at the Paul Merage School of Business, University of California, Irvine. He received his Ph.D. from Stanford University in industrial engineering and engineering management. He primarily examines the interactions that take place within organizations to understand purposeful collective behavior.
