How Much Do Industry, Corporation, and Business Matter, Really? A Meta-Analysis

The academic field of strategy seeks to explain differences in firm performance. A consensus exists that industry, corporate, and business effects together account for most performance differences, but there is debate over how much each factor explains. Previous studies have used three different effect size measures: sum of squares, variance, or standard deviation. These measures yield different results for a given sample, which precludes direct comparison. Using simulation analysis, I show that the sum-of-squares measure is sensitive to sample dimensions (e.g., the number of industries, the number of businesses per industry). Using 25 samples from nine studies (N = 212,112), I find that this sensitivity is strong in practice: knowing only the dimensions of a sample is sufficient to predict well the sum-of-squares measure. A meta-analysis is conducted using the variance and standard deviation measures instead (18 samples from 16 studies, N = 225,183). With the variance measure, the effect sizes are 0.08 for industry, 0.14 for corporate, and 0.36 for business; effect sizes with the standard deviation measure are 0.28 for industry, 0.36 for corporate, and 0.59 for business. Thus business effects have about twice the explanatory power of corporate effects, and corporate effects explain somewhat more than do industry effects.

The online appendix is available at

1. Introduction

The academic field of strategy seeks to explain differences in firm performance (Rumelt et al. 1994, Nag et al. 2007). Consider, for example, Procter and Gamble’s business of manufacturing diapers for babies and Boeing’s business of building military aircraft for governments. Studies have found that three factors typically account for most of the performance differences between such businesses (Schmalensee 1985, Hansen and Wernerfelt 1989, Rumelt 1991, Roquebert et al. 1996, McGahan and Porter 1997): the industry in which the business operates (here, baby care versus defense industry); the corporate parent, if any, to which the business belongs (e.g., Procter & Gamble versus Boeing); and the business itself (i.e., the specifics of each business). Although consensus exists that industry, corporate, and business effects together explain most differences in performance, there is ongoing debate about how much each of these factors explains (Hough 2006, Misangyi et al. 2006, Bou and Satorra 2010, Karniouchina et al. 2013, Stavropoulos et al. 2015, Zavosh and Dibiaggio 2015).

I argue that achieving consensus requires consistent measures to quantify the contribution of industry, corporate, and business factors to firm performance. Studies have used three different effect size measures: sum of squares, variance, or standard deviation. These various measures yield different results for a given sample, which precludes the direct comparison of samples using different measures. Whereas the variance and standard deviation measures are easy to convert from one to the other, this is not the case for the sum-of-squares and variance measures—which are the measures used most often.

I employ simulation analysis to show that the sum-of-squares measure is sensitive to sample dimensions (e.g., the number of industries, the number of businesses per industry), which limits its usefulness as effect size measure. For example, the industry sum-of-squares measure will be higher with stronger industry effects or with more industry degrees of freedom. It follows that we obtain a different sum of squares when changing the degrees of freedom (e.g., by picking a sample with different dimensions) even when the strength of industry effects is held constant. Using 25 samples from nine empirical studies (N = 212,112), I find that this sensitivity is strong in practice: knowing only the dimensions of a sample is sufficient to predict well the sum-of-squares measure.

To assess the explanatory power of industry, corporate, and business effects, I conduct a meta-analysis using not the sum-of-squares measure of effect size but instead the variance and standard deviation measures. Using 18 samples from 16 studies (N = 225,183), the variance findings are 0.08 for industry, 0.14 for corporate, and 0.36 for business effects; the standard deviation findings are 0.28 for industry, 0.36 for corporate, and 0.59 for business. Thus, business effects have about twice the explanatory power of corporate effects, and corporate effects explain somewhat more than do industry effects.

These results contribute in two ways to answering the question of how much of performance differences is explained by each of these factors. First, the effect size for business is substantially greater than for industry and corporate, but the size of that difference depends on the measure employed. For example, the average effect size for business is more than four times that of industry when measured in variances, but only twice as large when measured in standard deviations. Indeed, under the standard deviation measure, industry and corporate effects together explain at least as much as do business effects. The variance measure is based on squared distances, so differences between factors are amplified (Brush and Bromiley 1997, Hunter and Schmidt 2004); the standard deviation measure, which takes the square root, reduces that amplification. Thus, the question of explanatory power is as much a question about the strength of particular effects as it is about the effect size measure itself.

Second, corporate effects are somewhat greater than industry effects. This finding does not depend on whether the variance or the standard deviation measure is used. Two reasons may explain why this result has not previously been established in the literature. First, the corporate effect is greater than the industry effect in most samples, but not in all. It is more difficult to separate signal from noise in a single sample than in a large set of samples, as is employed here. Second, the industry factor has received more attention than the corporate factor. All studies in the empirical literature include industry effects, but several do not include corporate effects. In the theoretical literature, most debate concerns the merits of adopting an external (industry) perspective versus an internal (business) perspective rather than a corporate perspective. Hence, this finding reinforces the relevance of corporate effects for understanding performance differences.

2. Measures

2.1. Background

The empirical approach to explaining firm performance amounts to describing at which level performance differences occur. Here “performance” is usually measured as return on assets and less frequently as return on sales, return on invested capital, or market share. The levels investigated most often are industry, corporate parent, and business. An industry is the set of businesses that sell similar goods or services. Similarity is often captured via an industry classification, such as the Standard Industrial Classification (SIC) or its successor, the North American Industry Classification System (NAICS). Thus we can distinguish among the industries of rail transportation (NAICS 4821), insurance carriers (NAICS 5241), and electric power generation, transmission, and distribution (NAICS 2211). A corporate parent is a legal entity that owns and operates one or more businesses (Rumelt 1991).1 For example, Berkshire Hathaway owns and operates many businesses, including a rail transportation business, an insurance business, and an electric power generation business. The businesses of a corporate parent are demarcated such that each business operates in one and only one industry. It follows that a business is interpreted from a market perspective (i.e., as a business segment) and not from an organizational perspective (i.e., not as a business unit) (McGahan and Porter 1997). Even if the operations of rail transportation and electric power generation were organized in a single business unit, they would still count as two businesses for the purposes of performance analysis. Insurance operations that are distributed between two business units would still be considered a single business.

Figure 1 gives examples of performance differences occurring at different levels. Consider four businesses, each belonging to one of two corporate parents (ovals in the figure) and one of two industries (triangles). Our aim is to characterize the occurrences of high-performing businesses (gray squares) and low-performing businesses (white squares). Explaining “firm” performance is thus equivalent to explaining the performance of businesses, which belong to corporate parents and industries. Industry effects are such that performance differences occur between industries. In Example I, all high-performing businesses are in the industry on the figure’s left side while all low-performing businesses are in the industry on the right side. Corporate effects are such that performance differences occur between corporate parents. So in Example II, all high-performing businesses belong to the left-side corporate parent and the low-performing businesses to the right-side corporate parent. Business effects are such that performance differences are related to the specifics of the business. In Example III, the leftmost and rightmost businesses are high performing, while those in the middle are low performing. This pattern cannot be explained by either industry or corporate effects because each corporate parent and each industry has both a low-performing and a high-performing business.

Figure 1: Examples of Industry, Corporate, and Business Effects

These examples are simplifications. First, they are “pure” examples: in each case, a single factor accounts fully for the pattern of performance differences. In reality, the three effects co-occur; the result is complex performance patterns—especially when considering more than two industries, two corporate parents, and four businesses. Second, these examples ignore time. If we interpret Figure 1 as performance from a single year, then all the patterns may reflect nothing more than randomness. Only if these performance differences persist over longer periods can we meaningfully talk about industry, corporate, and business effects.

2.2. Basic Properties of Measures

Three different measures have been used to quantify the size of year, industry, corporate, and business effects on performance. The first measure is based on the sum of squares of these effects (e.g., Goddard et al. 2009, Ma et al. 2013, Fitza 2014, Zacharias et al. 2015); the second on their variances (e.g., Chan et al. 2010, Karniouchina et al. 2013); and the third on their standard deviations (e.g., Brush and Bromiley 1997, Hough 2006). The sum-of-squares and variance measures have been used most frequently and the standard deviation measure least frequently.

These measures are related but usually yield different outcomes, which precludes aggregation. In general, a sum of squares is the sum of squared differences from a mean. A sum of squares divided by the degrees of freedom gives the variance, and the square root of a variance is the standard deviation. The sum of squares measure does not account for degrees of freedom. A larger sample yields a higher sum of squares but not a higher variance or higher standard deviation. Formally, we have the following:

SS(x) = Σᵢ₌₁ⁿ (xᵢ − x̄)²;  Var(x) = SS(x) / (n − 1);  SD(x) = √Var(x).  (1)
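As a concrete check of these relations (illustrative numbers, not from any sample):

```python
x = [4.0, 7.0, 7.0, 10.0]  # a small performance vector
n = len(x)
mean = sum(x) / n          # 7.0

ss = sum((xi - mean) ** 2 for xi in x)  # sum of squared deviations: 18.0
var = ss / (n - 1)                      # divide by degrees of freedom: 6.0
sd = var ** 0.5                         # square root of the variance
print(ss, var, sd)
```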

To describe industry, corporate, and business effects, we can write the performance (pjkt) of a business in corporation k, industry j, and year t in line with Rumelt (1991) as

pjkt = m + Yt + Ij + Ck + Bjk + ejkt.  (2)
Here m is a constant (it signifies the mean performance of all businesses across all years); Yt is the year effect for year t (i.e., the performance difference in year t relative to the mean); Ij is the industry effect for industry j (i.e., the performance difference in industry j relative to the mean); Ck is the corporate effect for corporation k (i.e., the performance difference of corporation k relative to the mean); Bjk is the business effect for a business in corporation k and industry j (i.e., the performance difference of that business relative to the mean); and ejkt is an error term. For simplicity, the model does not include interaction effects (beyond the business effect, which can be viewed as an interaction between the industry and corporate effects).

For industry, corporate, and business effects, the three measures are related in the same way when transforming variance to standard deviation (taking the square root) but not when transforming the sum of squares to variance (i.e., dividing the industry sum of squares by its degrees of freedom does not yield the industry variance). The general intuition nonetheless holds: the sum-of-squares measure is sensitive to sample dimensions (e.g., the number of businesses per industry), whereas the variance and standard deviation measures are not (or are much less so).

2.2.1. Measure 1: Sum of Squares

The total sum of squares (SST) is the sum of squared differences from the performance mean. The goal with this approach is to allocate the total sum of squares among the sums of squares of the effects: year, industry, corporate, and business. For example, the sum of squares industry (SSI) is that part of the total sum of squares arising from performance differences between industries, and the sum of squares business (SSB) is the part arising from performance differences between businesses. For comparison, effect sizes are expressed as a proportion of the total sum of squares: SSI/SST and SSB/SST. These ratios correspond to the portions of a regression’s R2 that are due to (respectively) industry and business effects.

The sum-of-squares effect size measure depends not only on the strength of the effects but also on the sample dimensions (e.g., the number of industries, the number of businesses per industry). Even if the sum of squares business is greater than the sum of squares industry, we cannot then say that business is more important than industry because the difference could be due to the sample’s dimensions and not to any difference in the relative strength of their effects. This property limits the measure’s usefulness for aggregating results across samples, which is the goal here.

I illustrate the influence of sample dimensions with a simulation.2 For simplicity (but without loss of generality), I focus on industry and business effects and ignore corporate and year effects. Let the performance (pjlt) of business l in industry j and year t be written as

pjlt = m + Ij + Bjl + ejlt.  (3)
By construction, industry effects Ij and business effects Bjl are equally important in determining performance: both are normally distributed with mean 0 and standard deviation 1. The error (ejlt) has the same distribution. The overall mean (m) is 0. The range of sample dimensions matches that of the studies included in the meta-analysis: the number of industries ranges from 5 to 2,375 and the number of businesses per industry from 2 to 167. The number of years is fixed at eight, which is the median of all the studies considered.

Most researchers use ANOVA for the sum-of-squares measure; I shall do likewise. For a given combination of parameters, each industry has the same number of businesses. The result is a balanced design (i.e., an equal number of observations per business and an equal number of businesses per industry). The sum of squares can therefore be calculated directly from the data without the need for regression analysis (which would give the exact same results but takes much longer to calculate) (Searle 1971).3 For the sum-of-squares formulas, see column SS of Table 1 (or Kutner et al. 2005, pp. 1096–1099).
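The direct calculation for the balanced design can be sketched as follows. This is a simplified, single-replication illustration in pure Python (assumed setup: unit-variance normal effects, as in the simulation described above; not the paper’s code):

```python
import random

random.seed(0)

def ss_shares(J, L, T):
    """Simulate a balanced sample (J industries, L businesses per industry,
    T years) from the simulation model above, with industry, business, and
    error effects all drawn from N(0, 1); return the shares SSI/SST and SSB/SST."""
    ind_eff = [random.gauss(0, 1) for _ in range(J)]
    bus_eff = [[random.gauss(0, 1) for _ in range(L)] for _ in range(J)]
    p = [[[ind_eff[j] + bus_eff[j][l] + random.gauss(0, 1) for _ in range(T)]
          for l in range(L)] for j in range(J)]

    grand = sum(x for pj in p for pl in pj for x in pl) / (J * L * T)
    ind_mean = [sum(x for pl in pj for x in pl) / (L * T) for pj in p]
    bus_mean = [[sum(pl) / T for pl in pj] for pj in p]

    sst = sum((x - grand) ** 2 for pj in p for pl in pj for x in pl)
    ssi = L * T * sum((m - grand) ** 2 for m in ind_mean)  # industry sum of squares
    ssb = T * sum((bus_mean[j][l] - ind_mean[j]) ** 2      # business sum of squares
                  for j in range(J) for l in range(L))
    return ssi / sst, ssb / sst

# Same effect strengths, different dimensions: the shares move substantially.
share_i_few, share_b_few = ss_shares(J=69, L=2, T=8)      # 2 businesses per industry
share_i_many, share_b_many = ss_shares(J=69, L=167, T=8)  # 167 businesses per industry
print(share_i_few, share_b_few)
print(share_i_many, share_b_many)
```

With only two businesses per industry, the industry share tends to dominate; with many businesses per industry, the business share rises, even though the underlying effect strengths are identical.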


Table 1: Sum of Squares and Variances for Simulation

Source of variation | SS | df | MS | Expected MS (fixed) | Expected MS (random)

Note. Dot indicates summation over that subscript; p̄ indicates the sample mean.

Figure 2 plots the explained sum of squares for industry and for business (SSI/SST and SSB/SST). Each data point represents the average of 1,000 randomly generated samples. In the figure’s left panel, the number of businesses per industry is fixed at the studies’ median (10); in the right panel, the number of industries is fixed at the studies’ median (69). These results illustrate two points. First, the explained sum of squares for industry and business effects depends on the sample dimensions. Second, despite the equal importance (by design) of industry and business effects, their sums of squares differ for most parameter combinations.

Figure 2: The Industry and Business Explained Sum of Squares Depend on Sample Dimensions

The right panel of Figure 2 shows that, with two businesses per industry, business effects explain only about 19% of performance while industry effects explain 51%. Yet with 167 businesses per industry, the situation is reversed: business effects explain more than do industry effects (38% versus 33%). The only difference in these two cases is the number of business effects, not their size. In the left panel, if there are five industries, then industry effects explain about 29% of performance, while business effects explain 38%. Yet for a higher number of industries, industry effects explain more than business effects: when there are 100 industries, the respective percentages are 37% and 34%. The only difference is that the sample size has increased. So if the sum of squares is taken as an indicator of importance for performance, then one could mistakenly conclude that industry matters less than business if few industries are sampled, and more if many are sampled.

This problem is not resolved by using adjusted R2, which penalizes the addition of more variables, instead of R2 (cf. Brush et al. 1999). I calculate the adjusted R2 by adjusting the sum of squares with the appropriate degrees of freedom (see Figure 3). The derived values for adjusted R2 are somewhat lower than those derived for R2 (because of the penalty just described), but they lead to the same conclusion: notwithstanding equal industry and business effects, their adjusted sums of squares differ markedly for most parameter settings and are sensitive to sample dimensions.

Figure 3: The Explained Sum of Squares Depend on Sample Dimensions Even with Adjusted R2

The problem arises as follows. Changing the degrees of freedom (e.g., the number of industries) will naturally lead to a change in the sum of squares (e.g., SSI), simply because a different number of squares need to be summed (see Table 1). At the same time, the total sum of squares (SST) will change, but not by the same factor. Hence, the ratio of the two will change (SSI/SST).4 As I will show next, this problem is much diminished (or entirely absent) for ratios of variances or standard deviations.

2.2.2. Measure 2: Variance

The total variance is the variance in performance. The goal here is to allocate it among the variances of the effects: year, industry, corporate, and business. For purposes of comparison, the effect size is given as a proportion of the total variance—for example, σI2/σp2 for industry or σB2/σp2 for business. Unlike the sum-of-squares measure, the variance measure is not (or is only weakly) dependent on sample dimensions.

To illustrate this difference, I calculate the variance measure for the previously simulated data (which, by design, had equally strong industry and business effects). Using the simplified performance Equation (3), which includes industry and business effects but not year and corporate effects, we can write (see e.g., Rumelt 1991)

σp2 = σI2 + σB2 + σe2.  (4)
Assuming independent effects, this expression decomposes the total variance (σp2) into the variances of the effects (σI2 for industry and σB2 for business), and the variance of the error (σe2). Following the extant literature, I use a random-effects assumption when calculating the variances of the effects. Hence, these are expressed as population variances (σI2 and σB2) and not as sample variances (sI2 and sB2), which would apply with a fixed-effects assumption. However, the discussion that follows holds for both random- and fixed-effects.

For the variance measure, most researchers use either variance component analysis (VCA) or hierarchical linear modeling (HLM). The key difference between these methods is in how they account for multilevel data and the relationships between levels (Hough 2006, Misangyi et al. 2006). This distinction is less relevant to the simplified model here, which includes only industry and business effects. The data are nested (i.e., a business belongs to an industry) but not cross-classified (i.e., a business does not belong to an industry and a corporation). Because of ease of calculation, I use VCA to estimate the variances (Searle 1971). This procedure involves equating the calculated sum of squares with their expected values and then solving the system of equations.5
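The equate-and-solve step can be sketched for the simplified nested model. The expected mean squares below are the standard ones for a balanced two-level random-effects design (see, e.g., Searle 1971); the code is an illustration under those assumptions, not the paper’s implementation:

```python
import random

def vca_nested(p, T):
    """Method-of-moments variance component estimates for balanced nested data:
    p[j][l] holds the T yearly observations of business l in industry j.
    Returns (var_industry, var_business, var_error)."""
    J, L = len(p), len(p[0])
    grand = sum(x for pj in p for pl in pj for x in pl) / (J * L * T)
    ind_mean = [sum(x for pl in pj for x in pl) / (L * T) for pj in p]
    bus_mean = [[sum(pl) / T for pl in pj] for pj in p]

    # Mean squares from the balanced ANOVA table.
    msi = L * T * sum((m - grand) ** 2 for m in ind_mean) / (J - 1)
    msb = T * sum((bus_mean[j][l] - ind_mean[j]) ** 2
                  for j in range(J) for l in range(L)) / (J * (L - 1))
    mse = sum((x - bus_mean[j][l]) ** 2
              for j in range(J) for l in range(L)
              for x in p[j][l]) / (J * L * (T - 1))

    # Equate mean squares to their expectations and solve:
    #   E[MSE] = v_e;  E[MSB] = v_e + T*v_b;  E[MSI] = v_e + T*v_b + L*T*v_i
    v_e = mse
    v_b = (msb - mse) / T
    v_i = (msi - msb) / (L * T)
    return v_i, v_b, v_e

# Recover known components (all true variances equal 1) from simulated data.
random.seed(1)
J, L, T = 200, 10, 8
data = []
for j in range(J):
    i_eff = random.gauss(0, 1)
    row = []
    for l in range(L):
        b_eff = random.gauss(0, 1)
        row.append([i_eff + b_eff + random.gauss(0, 1) for _ in range(T)])
    data.append(row)
v_i, v_b, v_e = vca_nested(data, T)
print(v_i, v_b, v_e)  # estimates should each be near 1
```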

Figure 4 plots the average explained variance for industry and business (σ̂I2/σ̂p2 and σ̂B2/σ̂p2); each explains about a third of the variance in performance. The number of businesses has no effect (right panel), but the number of industries has some effect when there are few industries (left panel). This is because it is difficult to estimate a factor’s variance precisely when only a few of its levels are sampled (Kutner et al. 2005, Hox 2010).6 That difficulty has been referred to as the problem of “data sparseness” (for further discussion, see Stavropoulos et al. 2015). Note that this problem arises only for a small number of levels and is modest compared with the sum-of-squares measure’s sensitivity. Therefore, provided the standard errors of the effects’ estimates are not too high, the variance measure will be insensitive to sample dimensions.

Figure 4: The Industry and Business Explained Variances Are Mostly Insensitive to Sample Dimensions

To compare the sum-of-squares and variance measures, Figure 5 shows the effect sizes for industry relative to business (SSI/SSB and σ̂I2/σ̂B2). The variance measure attributes roughly equal strength to industry and business, irrespective of the sample dimensions; in contrast, the sum-of-squares measure’s ranking depends on the sample dimensions.

Figure 5: The Variance Measure Attributes Equal Importance to Industry and Business, But the Sum of Squares Measure Does Not

2.2.3. Measure 3: Standard Deviation

The standard deviation measure is the square root of the variance measure. Expressed as a proportion of the performance standard deviation, we have, for example, σI/σp for industry and σB/σp for business. Like the variance measure, the standard deviation measure is relatively insensitive to sample dimensions.

The variance measure is based on squared distances, so differences in linear distances are amplified (Brush and Bromiley 1997, Hunter and Schmidt 2004). The standard deviation measure, being the square root of the variance measure, reduces that amplification.7 To illustrate this dynamic, I intentionally make the business effects stronger than the industry effects: I simulate data in which the business effects have been multiplied by a factor that ranges from 1 to 3 while leaving the industry effects unchanged. The number of industries is fixed at the median (69), as is the number of businesses per industry (10); the other settings remain unchanged. I use VCA to obtain the variances and then take the square root to obtain the standard deviation.
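As a back-of-envelope check (illustrative arithmetic only): if business effects are f times as strong as industry effects in standard deviation units, the standard deviation measure reports a ratio of f while the variance measure reports f squared.

```python
# sd_ratio is sigma_B / sigma_I; var_ratio is sigma_B^2 / sigma_I^2.
for f in (1.0, 2.0, 3.0):
    sd_ratio = f
    var_ratio = f ** 2  # a 3x gap in sd becomes a 9x gap in variance
    print(f, sd_ratio, var_ratio)
```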

Figure 6 illustrates the effect size for business relative to industry using the variance and the standard deviation measures (σ̂B2/σ̂I2 and σ̂B/σ̂I, respectively). Making the business effect three times as strong yields an effect size—for business relative to industry—of three (with the standard deviation measure) and more than 10 (with the variance measure). Thus, the variance measure amplifies differences between industry and business, making industry seem less important than it actually is. These results reflect the variance measure’s use of squared distances. Because the sum-of-squares measure is also based on squares, it likewise displays such amplification.

Figure 6: The Variance Measure Amplifies Differences Between Business and Industry

2.3. Methods

The literature contains active discussions of the fixed- versus random-effects assumptions and of estimation methods. Here I describe how those discussions relate to the three measures.

The year, industry, corporate, and business effects can be described as random or fixed (Searle 1971). If random, then the effects are seen as coming from a larger population. If fixed, then the effects are not seen as coming from a larger population. For instance, with a random-effects assumption, we can describe industry effects in terms of the population standard deviation (σI)—that is, how much all industries differ from each other. With a fixed-effects assumption, we can describe such effects only in terms of the sample standard deviation (sI), or how much industries in the sample differ from each other.8 Because it is not always obvious whether effects are fixed or random (Searle 1971), some variance decomposition studies have used fixed effects (e.g., Mackey 2008, Goddard et al. 2009, Ma et al. 2013) and others have used random effects (e.g., Roquebert et al. 1996, Chang and Singh 2000, Chan et al. 2010). These types have been used in separate models (e.g., Schmalensee 1985, Rumelt 1991, McGahan and Porter 1997) and also in the same model, as when some effects are fixed and others are random (i.e., mixed-effects) (e.g., Hough 2006, Misangyi et al. 2006).

The three measures can be calculated for both fixed and random effects (see Table 2). The sum-of-squares measure is the same for fixed and random effects. Sum of squares is a property of a sample; hence, the presence or absence of a larger population is irrelevant. The variance and standard deviation measures differ for random versus fixed effects: for random effects, these measures refer to the population variance and standard deviation; for fixed effects, they refer to the sample variance and standard deviation.9


Table 2: Effect Size Measures for Fixed and Random Effects

Effect size measure based on | Performance | Factor (fixed) | Factor (random) | Effect size (fixed) | Effect size (random)
Standard deviation | σp | sY | σY | sY/σp | σY/σp

The fixed- versus random-effects assumption does not limit one’s choice of measure. In practice, however, nearly all studies have used either a fixed-effects assumption with the sum-of-squares measure or a random-effects (or mixed-effects) assumption with the variance and standard deviation measures (an exception is Brush et al. 1999). Table 3 illustrates this mapping between the effects assumption made and the measures used. It also shows the variety of methods employed, including analysis of variance (ANOVA), hierarchical linear modeling (HLM), two-stage least squares (2SLS), and variance components analysis (VCA). In the literature, debate has centered on the appropriateness of the fixed- versus random-effects assumption (columns in the table) and on the merits of the different methods (cells in the table). For a discussion, see, for example, Brush and Bromiley (1997), McGahan and Porter (2002, 2005), Ruefli and Wiggins (2003, 2005), Hough (2006), and Bou and Satorra (2010). Here the discussion has focused on the measures (rows in the table).


Table 3: Methods for Fixed, Mixed, and Random Effects

Effect size measure based on | Fixed effects | Mixed effects | Random effects
Sum of squares | ANOVA partial (Ma et al. 2013); ANOVA sequential (Goddard et al. 2009) | | 2SLS (Brush et al. 1999)
Variance | | HLM (Misangyi et al. 2006) | HLM (Karniouchina et al. 2013); VCA (Chan et al. 2010)
Standard deviation | | HLM (Hough 2006) |

3. Methods

3.1. Identification of Studies

Several steps were taken to identify studies for the meta-analysis. First, the Business Source Premier database and the Google Scholar database were searched using the following search terms: industry effect, corporate effect, or business effect; or variance, decomposition, and at least one of industry, corporate, and business. Second, the following journals were searched for any study containing at least two of industry, corporate, or business (anywhere in the text): Academy of Management Journal, Administrative Science Quarterly, Global Strategy Journal, Journal of Business Research, Journal of International Business Studies, Journal of Management, Journal of Management Studies, Long Range Planning, Management Science, Organization Science, Organization Studies, Strategic Management Journal, Strategic Organization, and Strategy Science. Third, all studies citing the foundational study of McGahan and Porter (1997) were searched for the word “variance.” Fourth, a request for unpublished (and published) studies was sent to strategy scholars via the Business Policy and Strategy listserv (BPS Net). Fifth, all works cited in the identified studies were screened for their possible relevance.

Most meta-analyses in the strategy literature use as inputs sample estimates that do not depend on modeling choices (e.g., pairwise rather than partial correlations). Such model-free estimates are unavailable here. To ensure comparability among studies, I selected those that employ similar models (Hunter and Schmidt 2004). A study was included in the meta-analysis only if it met the following conditions. First, it had to include industry, corporate, and business effects. Second, it had to report an effect size in standard deviations, variances, or sums of squares (two studies excluded). Third, the study had to describe a model with no more than one interaction term involving year, industry, or corporate effects (in addition to the business effect) (three studies excluded). Fourth, for VCA or HLM, the study had to report a model without covariances (two studies excluded).

3.2. Selection of Analyses

Most studies report multiple analyses. For each study, only one analysis is included per measure, chosen by sequentially applying the following criteria. First, for overlapping samples, choose the largest: aggregate samples rather than subsamples of select industries, and samples that include rather than exclude single-business corporations. The latter choice may affect the estimates, especially those of the corporate effects, per Bowman and Helfat (2001); I investigate the impact of including versus excluding single-business corporations in an additional analysis. Second, for the model, choose one without an additional interaction effect (if such a model is provided). Third, for the method, choose HLM over VCA (only one study uses both). Note that the more recent studies reporting variances or standard deviations use HLM.

3.3. Calculation of Effect Sizes

Few studies report standard deviations; moreover, in those that do report them, the standard deviations of effects are relative to each other rather than to performance. So even for studies that do report standard deviations, I rely on the reported variances to calculate the standard deviations used here (by taking the square root). Sum-of-squares measures could conceivably be converted into variances using Henderson’s (1953) “method III,” but studies do not report data sufficient for conducting these calculations. For this reason, the sum-of-squares samples are reported separately.
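The conversion applied here is simply a square root of each variance share. A sketch using the pooled variance means reported in the results as illustrative inputs (the paper converts within each sample before aggregating, so pooled standard deviation shares differ slightly from these square roots):

```python
import math

# Pooled variance shares from the results, used here purely for illustration.
variance_shares = {"industry": 0.08, "corporate": 0.14, "business": 0.36}
sd_shares = {k: math.sqrt(v) for k, v in variance_shares.items()}

ratio_var = variance_shares["business"] / variance_shares["industry"]  # 4.5
ratio_sd = sd_shares["business"] / sd_shares["industry"]               # about 2.1
print(sd_shares, ratio_var, ratio_sd)
```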

The procedures just described yielded, for the standard deviation and variance measures, a set of 16 studies reporting on 18 samples with a total of N = 225,183 business-year observations. For the sum-of-squares measure, I obtain a (partially overlapping) set of nine studies reporting on 25 samples with a total of N = 212,112 observations.

3.4. Nonindependence

For a given sample, the effect sizes are not independent. For example, the business standard deviation (as a ratio of the total standard deviation) depends on the corporate standard deviation. To account for such nonindependence, I treat the data as paired when testing for differences in effect size. For example, I analyze the distribution of the business minus the corporate standard deviation across samples.
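The paired treatment can be sketched with hypothetical per-sample effect sizes (made-up numbers, not the meta-analytic data):

```python
# Business and corporate standard deviation shares for five hypothetical samples.
business = [0.55, 0.62, 0.48, 0.70, 0.58]
corporate = [0.30, 0.41, 0.35, 0.33, 0.39]

# Analyze within-sample differences rather than comparing the two pooled means.
diffs = [b - c for b, c in zip(business, corporate)]
mean_diff = sum(diffs) / len(diffs)
print(diffs, mean_diff)  # every difference is positive; mean is about 0.23
```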

Across samples, the effect sizes are unfortunately also not independent, which violates a key assumption of meta-analysis. The problem arises because many samples are from the United States (12 of the 18 variance and standard deviation samples). Most of these U.S. samples are drawn from the Compustat database, and some cover overlapping time periods. To address this nonindependence, I provide an analysis restricted to the six non-U.S. samples, which draw from different databases and cover different regions and time periods. Because the results are broadly consistent, I report all samples as the main analysis and the non-U.S. samples as an additional analysis. Note that the reported confidence intervals for the main analysis are merely indicative and are probably too narrow.

4. Results

Table 4 lists the studies and samples. The three effect size measures are also reported; each is standardized relative to performance (e.g., industry variance divided by performance variance). The samples are sorted by industry effect size using the variance or standard deviation measure or (absent those) the sum-of-squares measure. Three summary statistics are reported in the bottom rows: the sample size–weighted mean puts more weight on larger samples, the unweighted mean gives equal weight to all samples, and the median reduces the influence of any outliers.
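
The three summary statistics can be sketched as follows (the effect sizes and sample sizes here are hypothetical):

```python
import statistics

# Hypothetical per-sample effect sizes and sample sizes (business-years).
effect_sizes = [0.42, 0.28, 0.35, 0.31]
sample_sizes = [20_000, 3_500, 16_000, 1_500]

# Sample size-weighted mean: larger samples count for more.
weighted_mean = (sum(e * n for e, n in zip(effect_sizes, sample_sizes))
                 / sum(sample_sizes))

# Unweighted mean and median: every sample counts equally; the median
# additionally dampens the influence of outlying samples.
unweighted_mean = statistics.mean(effect_sizes)
median = statistics.median(effect_sizes)
```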


Table 4: Studies, Samples, and Their Effect Sizes



ID | Authors | Source | Country | Period | Performance
1 | Becerra and Santaló (2003) | Compustat | USA | 1991–1994 | ROA
2 | Chang and Singh (2000) | Trinet | USA | 1981–1989 | Market share
3 | Brush et al. (1999) | Compustat | USA | 1986–1995 | ROA
4 | Chan et al. (2010) | METI Trend | USA | 1996–2005 | ROS
5 | Tarziján and Ramirez (2011) | Economatica | Chile | 1998–2007 | ROA
6 | Chan et al. (2010) | METI Trend | China | 1996–2005 | ROS
7 | Roquebert et al. (1996) | Compustat | USA | 1985–1991 | ROA
8 | Brush et al. (1999) | Compustat | USA | 1986–1995 | ROA
9 | Chang and Hong (2002) | KIS | Korea | 1985–1996 | ROIC
10 | Misangyi et al. (2006) | Compustat | USA | 1984–1999 | ROA
11 | Chaddad and Mondelli (2013) | Compustat | USA | 1984–2006 | ROA
12 | Makino et al. (2004) | METI Trend | Japan | 1996–2001 | ROS
13 | Iurkov and Sasson (2015) | Compustat | USA | 1990–2013 | ROA
14 | Hough (2006) | Compustat | USA | 1995–1999 | ROA
15 | Fukui and Ushijima (2011) | Nikkei NEEDS | Japan | 1998–2003 | ROA
16 | Karniouchina et al. (2013) | Compustat | USA | 1978–1994 | ROA
17 | Zavosh and Dibiaggio (2016) | Compustat | USA | 2001–2009 | ROA
18 | Lieu and Chi (2006) | TEJ | Taiwan | 1994–2000 | ROS
19 | Furman (2000) | Worldscope | Canada | 1992–1996 | ROA
20 | Khanna and Rivkin (2001) | Datastream Int. | Philippines | 1992–1997 | ROA
21 | Khanna and Rivkin (2001) | Datastream Int. | Israel | 1992–1997 | ROA
22 | Khanna and Rivkin (2001) | Datastream Int. | Argentina | 1990–1997 | ROA
23 | Furman (2000) | Worldscope | Australia | 1992–1996 | ROA
24 | Khanna and Rivkin (2001) | ICMD | Indonesia | 1993–1995 | ROA
25 | Khanna and Rivkin (2001) | SVS | Chile | 1988–1996 | ROA
26 | Khanna and Rivkin (2001) | Datastream Int. | Mexico | 1988–1997 | ROA
27 | Furman (2000) | Worldscope | USA | 1992–1996 | ROA
28 | Khanna and Rivkin (2001) | Datastream Int. | Taiwan | 1990–1997 | ROA
29 | Furman (2000) | Worldscope | UK | 1992–1996 | ROA
30 | Rumelt (1991) | FTC | USA | 1974–1977 | ROA
31 | McGahan and Porter (2002) | Compustat | USA | 1982–1994 | ROA
32 | Khanna and Rivkin (2001) | Datastream Int. | Turkey | 1988–1997 | ROA
33 | Khanna and Rivkin (2001) | Datastream Int. | Peru | 1991–1997 | ROA
34 | Khanna and Rivkin (2001) | KCH | Korea | 1991–1995 | ROA
35 | McGahan and Porter (1997) | Compustat | USA | 1982–1994 | ROA
36 | Khanna and Rivkin (2001) | Datastream Int. | Brazil | 1990–1997 | ROA
37 | Mackey (2008) | Compustat | USA | 1992–2002 | ROA
38 | Khanna and Rivkin (2001) | McGregor | South Africa | 1993–1996 | ROA
39 | Khanna and Rivkin (2001) | Datastream Int. | Thailand | 1992–1997 | ROA
40 | Adner and Helfat (2003) | FRS | USA | 1977–1997 | ROA
41 | Khanna and Rivkin (2001) | CME | India | 1989–1995 | ROA


ID | Sum of squares | Variance | Method | Table | Model | Single-business corporations | YICB only | Manufacturing only
1 | | | ANOVA | 2 | | No | Yes | No
1 | | | VCA | 3 | | No | No | No
2 | | | VCA | 3 | 4 | No | Yes | Yes
3 | | | VCA | 10 | 4 segments | No | Yes | No
4 | | | VCA | 1 | 1 | No | No | No
5 | | | HLM | 2 | | Yes | Yes | No
6 | | | VCA | 1 | 2 | No | No | No
7 | | | VCA | 3 | Average | No | No | Yes
8 | | | VCA | 10 | 3 segments | No | Yes | No
9 | | | VCA | 2 | 1 | No | No | No
10 | | | HLM | 3 | | Yes | Yes | No
11 | | | HLM | 5 | 1 | Yes | Yes | No
12 | | | VCA | 2 | 1 | No | No | No
13 | | | HLM | 3 | | Yes | No | No
14 | | | ANOVA | 2 | ANOVA uncorrected | Yes | Yes | No
14 | | | HLM | 2 | Multilevel | Yes | Yes | No
15 | | | VCA | 2 | 4 | No | Yes | No
16 | | | HLM | 1 | Sample (1978–1994) | Yes | Yes | Yes
17 | | | HLM | 3 | | No | No | No
18 | | | VCA | 3 | | Yes | No | Yes
19 | | | ANOVA | 5A | Canada | Yes | Yes | No
20 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
21 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
22 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
23 | | | ANOVA | 5A | Australia | Yes | Yes | No
24 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
25 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
26 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
28 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
30 | | | ANOVA | 2 | Bottom (Sample B) | No | Yes | Yes
31 | | | ANOVA | 3 | | Yes | Yes | No
32 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
33 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
34 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
36 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
37 | | | ANOVA | 4 | Segment ROA | Yes | No | No
38 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
39 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No
40 | | | ANOVA | 2 | Downsizing last | Yes | Yes | No
41 | | | ANOVA | 5 | Panel B (R2) | Yes | Yes | No

Sample | n | SS: Year | SS: Ind. | SS: Corp. | SS: Bus. | Var: Year | Var: Ind. | Var: Corp. | Var: Bus. | SD: Year | SD: Ind. | SD: Corp. | SD: Bus. | Notes
2 | 20,161 | | | | | 0.003 | 0.175 | 0.110 | 0.487 | 0.055 | 0.418 | 0.332 | 0.698 |
3 | 3,447 | | | | | 0.008 | 0.153 | 0.145 | 0.251 | 0.088 | 0.391 | 0.381 | 0.501 | e, l
4 | 16,277 | | | | | 0.002 | 0.136 | 0.192 | 0.175 | 0.045 | 0.369 | 0.438 | 0.418 |
5 | 1,564 | | | | | | 0.105 | 0.143 | 0.463 | | 0.324 | 0.379 | 0.680 | i
6 | 13,051 | | | | | 0.022 | 0.105 | 0.208 | 0.158 | 0.148 | 0.324 | 0.456 | 0.397 |
7 | 16,596 | | | | | 0.005 | 0.102 | 0.179 | 0.371 | 0.071 | 0.319 | 0.423 | 0.609 | f
8 | 7,994 | | | | | 0.011 | 0.097 | 0.051 | 0.480 | 0.107 | 0.311 | 0.225 | 0.693 | e, l
9 | 14,575 | | | | | 0.025 | 0.076 | 0.094 | 0.208 | 0.158 | 0.276 | 0.307 | 0.456 | f
10 | 10,633 | | | | | 0.008 | 0.076 | 0.072 | 0.366 | 0.089 | 0.276 | 0.268 | 0.605 |
11 | 10,776 | | | | | 0.005 | 0.070 | 0.180 | 0.361 | 0.071 | 0.265 | 0.424 | 0.601 |
12 | 28,809 | | | | | 0.010 | 0.069 | 0.108 | 0.314 | 0.100 | 0.263 | 0.329 | 0.560 |
13 | 7,197 | | | | | 0.010 | 0.066 | 0.109 | 0.361 | 0.100 | 0.258 | 0.330 | 0.601 |
14 | 19,405 | 0.005 | 0.139 | 0.147 | 0.438 | 0.005 | 0.053 | 0.202 | 0.401 | 0.071 | 0.230 | 0.449 | 0.633 | a, g, l
15 | 24,808 | | | | | 0.003 | 0.053 | 0.087 | 0.526 | 0.055 | 0.230 | 0.295 | 0.725 |
16 | 17,773 | | | | | | 0.042 | 0.155 | 0.385 | | 0.205 | 0.394 | 0.620 | i
17 | 6,821 | | | | | 0.002 | 0.037 | 0.302 | 0.327 | 0.045 | 0.192 | 0.550 | 0.572 | f, j
18 | 4,549 | | | | | 0.000 | 0.031 | 0.007 | 0.362 | 0.000 | 0.177 | 0.084 | 0.601 | f, h
19 | 1,142 | 0.004 | 0.303 | 0.090 | 0.168 | | | | | | | | | a
20 | 281 | 0.010 | 0.265 | 0.108 | 0.358 | | | | | | | | | a
21 | 86 | 0.124 | 0.261 | 0.145 | 0.242 | | | | | | | | | a
22 | 129 | 0.113 | 0.222 | 0.108 | 0.258 | | | | | | | | | a
23 | 690 | 0.006 | 0.191 | 0.098 | 0.488 | | | | | | | | | a
24 | 339 | 0.006 | 0.186 | 0.311 | 0.243 | | | | | | | | | a
25 | 1,780 | 0.008 | 0.160 | 0.054 | 0.457 | | | | | | | | | a
26 | 344 | 0.021 | 0.150 | 0.042 | 0.466 | | | | | | | | | a
27 | 12,390 | 0.001 | 0.145 | 0.135 | 0.400 | | | | | | | | | a
28 | 572 | 0.025 | 0.119 | 0.139 | 0.517 | | | | | | | | | a
29 | 6,096 | 0.001 | 0.114 | 0.229 | 0.245 | | | | | | | | | a
30 | 10,866 | 0.001 | 0.098 | 0.116 | 0.414 | | | | | | | | | b, k
31 | 72,742 | 0.008 | 0.096 | 0.120 | 0.377 | | | | | | | | | a
32 | 273 | 0.054 | 0.081 | 0.061 | 0.426 | | | | | | | | | a
33 | 99 | 0.110 | 0.078 | 0.073 | 0.421 | | | | | | | | | a
34 | 2,107 | 0.014 | 0.077 | 0.129 | 0.439 | | | | | | | | | a
35 | 58,132 | 0.003 | 0.068 | 0.119 | 0.349 | | | | | | | | | c, k
36 | 629 | 0.083 | 0.064 | 0.112 | 0.178 | | | | | | | | | a
37 | 8,522 | 0.000 | 0.046 | 0.078 | 0.344 | | | | | | | | | d
38 | 1,071 | 0.002 | 0.036 | 0.048 | 0.835 | | | | | | | | | a
39 | 1,329 | 0.084 | 0.023 | 0.200 | 0.331 | | | | | | | | | a
40 | 1,810 | 0.013 | 0.021 | 0.027 | 0.194 | | | | | | | | | a
41 | 10,531 | 0.006 | 0.017 | 0.100 | 0.458 | | | | | | | | | a
Mean (weighted) | | 0.006 | 0.091 | 0.122 | 0.377 | 0.008 | 0.084 | 0.138 | 0.358 | 0.082 | 0.283 | 0.363 | 0.590 |
Mean (unweighted) | | 0.028 | 0.127 | 0.121 | 0.374 | 0.008 | 0.090 | 0.136 | 0.356 | 0.082 | 0.292 | 0.354 | 0.589 |

Notes. (a) Sequential method: YICB (sum of squares). (b) Sequential method: YCIB (sum of squares). (c) YCIB (sum of squares) listed because overlapping sample 31 is YICB (sum of squares). (d) Partial method (sum of squares). (e) n not provided, so it is estimated as (# of businesses) × (# of years) × 0.623, where 0.623 is the average of n/((# of businesses) × (# of years)) across the other samples; this accounts for the fact that not all businesses are observed in all years. (f) Model with industry × year (variance). (g) Year sum of squares and variance reported as <0.010; here, the midpoint is taken. (h) Year variance estimated as −0.003; here, 0 is taken. (i) Year variance accounted for but unreported. (j) Corporate variance is the sum of the business-invariant and business-variant corporate effects. (k) Variances provided but only from a model with covariance. (l) Standard deviation provided but not relative to performance.

4.1. Sum of Squares vs. Variance

4.1.1. Sensitivity to Sample Dimensions

The sum-of-squares measure for actual samples is sensitive to sample dimensions. In Figure 7, the sum-of-squares samples (1, 14, and 19–41) are represented by dots.10 The vertical axis shows the explained sum of squares per factor; the horizontal axis shows the degrees of freedom used per factor as a proportion of the total degrees of freedom. When the degrees of freedom are not reported, they are calculated for year, industry, and corporate effects as (respectively) the number of years minus 1, the number of industries minus 1, and the number of multibusiness corporations minus 1. For business effects, missing degrees of freedom are approximated as the number of businesses minus the number of industries minus the number of multibusiness corporations plus 1. In line with the simulations, more relative degrees of freedom correspond to a higher explained sum of squares for a factor. For instance, a sample with more relative industry degrees of freedom displays (on average) a greater industry effect, whereas a sample with more relative corporate degrees of freedom displays a greater corporate effect.
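
The degrees-of-freedom approximations just described can be sketched as follows (the sample dimensions are hypothetical):

```python
# Hypothetical sample dimensions.
n_years = 10
n_industries = 120
n_multibusiness_corps = 300
n_businesses = 1_500
n_observations = 12_000  # business-year observations

# Degrees of freedom per factor, as approximated in the text.
df = {
    "year": n_years - 1,
    "industry": n_industries - 1,
    "corporate": n_multibusiness_corps - 1,
    "business": n_businesses - n_industries - n_multibusiness_corps + 1,
}

# Relative degrees of freedom: each factor's share of the total.
df_total = n_observations - 1
relative_df = {factor: d / df_total for factor, d in df.items()}
```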

Figure 7: Sample Dimensions Predict Explained Sum of Squares

The variance measure is much less sensitive to sample dimensions. The correlation between relative degrees of freedom and effect sizes is weak for the variance but is strong for the sum-of-squares measure (see the df columns in Table 5).11 The “data sparseness” problem mentioned before with respect to the variance measure may manifest itself not with relative but rather with absolute degrees of freedom (Stavropoulos et al. 2015). However, correlations between effect sizes and the number of years, industries, multibusiness corporations, and businesses (column nk) or their natural logarithms (column ln(nk)) remain weak.


Table 5: Sample Dimensions Strongly Correlate with Sum of Squares But Not with Variance





Note. Correlations for standard deviation are within 0.06 of those reported for variance.

4.1.2. Effect Size

In the two samples that provide both measures, effect sizes under the sum-of-squares measure differ noticeably from those under the variance measure (see Figure 8). In sample 1, the corporate effect as measured by the sum of squares is more than twice its size as measured by the variance; indeed, the sum of squares shows corporate effects to be greater than industry effects, whereas the opposite result obtains when variances are used. In sample 14, the ranking of effects is preserved; however, the industry effect size under the sum of squares is more than double that under the variance, whereas the corporate effect size is smaller.

Figure 8: Results for Sum of Squares and Variance Differ for the Same Sample

At the aggregate level, differences are more subtle. The weighted mean for the sum-of-squares measure is similar to that for the variance measure (see the bottom rows of Table 4). This outcome is mainly driven by samples 31 and 35, which together account for more than 60% of all sum-of-squares observations. Looking instead at the unweighted mean, we see a notable difference for industry effects (0.127 versus 0.090). These findings underscore the importance of using a common effect size measure.


Table 6: Meta-Analytic Results for Variance and Standard Deviation for k = 18 Samples


 | Variance | Standard deviation
Year | 0.01 (0.00, 0.01) | 0.08 (0.06, 0.10)
Industry | 0.08 (0.06, 0.10) | 0.28 (0.25, 0.31)
Corporate | 0.14 (0.11, 0.16) | 0.36 (0.33, 0.40)
Business | 0.36 (0.30, 0.42) | 0.59 (0.54, 0.65)
Ind.–Year | 0.08 (0.05, 0.10) | 0.21 (0.16, 0.24)
Corp.–Ind. | 0.05 (0.02, 0.09) | 0.08 (0.03, 0.13)
Bus.–Corp. | 0.22 (0.15, 0.30) | 0.23 (0.15, 0.31)

Note. Weighted mean and 95% confidence interval indicated.

4.2. Variance vs. Standard Deviation

For the variance and standard deviation samples (1–18), results are consistent across the three summary statistics (weighted mean, unweighted mean, and median). I focus on the weighted mean as an estimate of a population parameter (Hunter and Schmidt 2004).

The weighted means for the variances are 0.01 for year, 0.08 for industry, 0.14 for corporate, and 0.36 for business effects. The weighted means for the standard deviations are 0.08 for year, 0.28 for industry, 0.36 for corporate, and 0.59 for business effects.12 Figure 9 and Table 6 report meta-analytic results with bootstrapped 95% confidence intervals based on 10,000 replications. We can informally interpret these variances and standard deviations as follows. An effect is defined as a performance deviation from an overall mean, and the estimates indicate the relative size of the performance deviations associated with each factor. On average, then, the deviations associated with the corporate parent are somewhat greater than those associated with industry, and those associated with the business are substantially greater still. A formal interpretation would reference Equation (2), where performance is the sum of four factors (plus a mean and an error term). The effect sizes provide estimates of the distributions of these factors. For example, the industry effects can be seen as coming from a distribution with mean 0 and a variance of 0.08 (equivalently, a standard deviation of 0.28). The year, corporate, and business effects each have their own mean-0 distribution with variance or standard deviation as reported above.
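
A sketch of the bootstrap procedure, assuming hypothetical per-sample effect sizes and sample sizes; resampling is done at the level of whole samples so that each effect size stays paired with its weight:

```python
import random

random.seed(0)

# Hypothetical (effect size, sample size) pairs for six samples.
samples = [(0.42, 20_000), (0.28, 3_500), (0.35, 16_000),
           (0.31, 1_500), (0.25, 7_000), (0.39, 11_000)]

def weighted_mean(pairs):
    return sum(e * n for e, n in pairs) / sum(n for _, n in pairs)

# Bootstrap: resample the samples with replacement and recompute the
# weighted mean; the 2.5th and 97.5th percentiles give a 95% CI.
replications = 10_000
boot = sorted(
    weighted_mean(random.choices(samples, k=len(samples)))
    for _ in range(replications)
)
ci_low = boot[int(0.025 * replications)]
ci_high = boot[int(0.975 * replications)]
```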

Figure 9: Meta-Analytic Results with 95% CI for k = 18 Samples

Figure 10 shows the differences for variances (left panel) and standard deviations (right panel) between business and corporate effects, between corporate and industry effects, and between industry and year effects (for numbers, see Table 6). The figure plots the weighted mean differences and the 95% confidence intervals. No confidence interval overlaps with 0. Hence, business effects are the strongest, followed by corporate, then industry, and finally year effects.

Figure 10: Difference Between Factors for Variance (Left) and Standard Deviation (Right) with 95% CI

Thus, the results are qualitatively similar for variance and standard deviation. In line with the simulation, the variance measure amplifies differences between factors: the industry effect is 10.5 times the year effect (standard deviation: 3.5 times), the corporate effect is 1.6 times the industry effect (standard deviation: 1.3 times), and the business effect is 2.6 times the corporate effect (standard deviation: 1.6 times).
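
This compression follows from the square root: if one factor's variance effect size is r times another's, its standard deviation effect size is only √r times as large. A quick numeric illustration (values chosen for illustration, close to but not the meta-analytic estimates):

```python
import math

# Illustrative variance effect sizes for two factors.
var_business, var_corporate = 0.36, 0.14

variance_ratio = var_business / var_corporate                  # about 2.6
sd_ratio = math.sqrt(var_business) / math.sqrt(var_corporate)  # about 1.6

# The sd ratio equals the square root of the variance ratio, so the
# standard deviation measure compresses differences between factors.
assert abs(sd_ratio - math.sqrt(variance_ratio)) < 1e-12
```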

4.3. Additional Analyses for Standard Deviation

To determine how much of the differences between samples is due to sampling error versus differences in underlying effects, we would need the standard errors of the sample estimates. These standard errors are neither reported nor derivable from the data provided. Instead, I explore the extent to which estimates differ by the following characteristics: sample, method, and model (see Table 7 and Figure 11). I report the results for the standard deviation measure here (the results for variance are qualitatively similar).


Table 7: Meta-Analytic Results for Standard Deviation by Subgroup


Effect size

Row | Dimension | Subgroup | Year | Industry | Corporate | Business
A1 | Sample | USA | 0.07 (0.06, 0.08) | 0.30 (0.25, 0.35) | 0.39 (0.35, 0.43) | 0.60 (0.56, 0.66)
A2 | | Non-USA | 0.10 (0.06, 0.13) | 0.26 (0.23, 0.29) | 0.32 (0.26, 0.38) | 0.57 (0.46, 0.69)
B1 | Sample | Manuf. only | 0.06 (0.04, 0.11) | 0.31 (0.22, 0.42) | 0.36 (0.31, 0.45) | 0.64 (0.60, 0.68)
B2 | | Other | 0.09 (0.07, 0.11) | 0.27 (0.24, 0.30) | 0.36 (0.31, 0.41) | 0.57 (0.51, 0.64)
C1 | Method | VCA | 0.08 (0.06, 0.11) | 0.31 (0.26, 0.35) | 0.34 (0.30, 0.39) | 0.58 (0.51, 0.66)
C2 | | HLM | 0.08 (0.06, 0.09) | 0.24 (0.21, 0.26) | 0.40 (0.35, 0.47) | 0.61 (0.60, 0.63)
D1 | Model | YICB only | 0.07 (0.05, 0.08) | 0.28 (0.21, 0.32) | 0.35 (0.30, 0.40) | 0.66 (0.62, 0.70)
D2 | | Other | 0.09 (0.06, 0.12) | 0.29 (0.25, 0.32) | 0.38 (0.32, 0.43) | 0.52 (0.46, 0.58)

Difference in effect size

Row | k | N | Ind.–Year | Corp.–Ind. | Bus.–Corp.
A1 | 12 | 137,827 | 0.24 (0.19, 0.30) | 0.09 (0.02, 0.17) | 0.22 (0.14, 0.30)
A2 | 6 | 87,356 | 0.16 (0.15, 0.18) | 0.06 (0.03, 0.10) | 0.25 (0.11, 0.42)
B1 | 4 | 59,079 | 0.30 (0.23, 0.42) | 0.05 (−0.07, 0.19) | 0.28 (0.18, 0.36)
B2 | 14 | 166,104 | 0.19 (0.15, 0.21) | 0.09 (0.04, 0.13) | 0.21 (0.11, 0.31)
C1 | 11 | 151,014 | 0.22 (0.16, 0.27) | 0.04 (0.00, 0.09) | 0.24 (0.13, 0.35)
C2 | 7 | 74,169 | 0.17 (0.15, 0.18) | 0.17 (0.10, 0.25) | 0.21 (0.15, 0.27)
D1 | 9 | 116,561 | 0.22 (0.15, 0.27) | 0.07 (−0.01, 0.16) | 0.31 (0.23, 0.39)
D2 | 9 | 108,622 | 0.20 (0.14, 0.24) | 0.09 (0.03, 0.13) | 0.14 (0.06, 0.24)

Note. Weighted mean and 95% confidence interval indicated.

Figure 11: Meta-Analytic Results with 95% CI by Subgroup

4.3.1. Sample: U.S. vs. Non-U.S.

In Panel A of Figure 11 (and rows A1 and A2 of Table 7), the samples are split by region, where region refers to the corporate parent's location. Most samples include international businesses; for example, the U.S. samples contain businesses that operate beyond U.S. borders. The results are fairly similar, although industry, corporate, and business effects are all somewhat lower in non-U.S. than in U.S. samples. The lack of substantial differences reduces concerns about the possible nonindependence of the U.S. samples.

4.3.2. Sample: Manufacturers Only

Owing to data limitations, Rumelt (1991) restricted his analysis to manufacturing firms. Nowadays, most data sets also include nonmanufacturing firms. Of the 18 samples, four cover manufacturing firms only. The results of these four samples are similar to those of the other 14 (see Panel B of Figure 11 and rows B1 and B2 of Table 7).

4.3.3. Sample: Single- vs. Multibusiness

For a single-business corporation, the business and corporate effects are indistinguishable. Some studies exclude such corporations, whereas others include them. If they are included, then, under VCA, an explicit assumption is needed. The convention is to estimate a business effect and set the corporate effect to zero. This approach underestimates the corporate effect and overestimates the business effect (Bowman and Helfat 2001). Under HLM, no such explicit assumption is required because the model can still be estimated. It is difficult to state ex ante with high confidence whether the inclusion of single-business corporations biases the corporate or business effect. Our current knowledge of biases is based on simulations, not on analytical results (e.g., Baldwin et al. 2011). These simulations indicate that biases are small or nonexistent when the total number of groups (i.e., single-business plus multibusiness corporations) is high, even if the percentage of "singletons" (single-business corporations) is high; "high" here means 168 groups with 57% singletons in Clarke and Wheaton (2007) and 500 groups with 70% singletons in Bell et al. (2008, 2010). One reason for cautious optimism, then, is that the HLM samples have many corporations: even the smallest sample contains 136 corporations, and the second smallest has 998. Because of the different approaches for VCA and HLM, the comparison here is within method (i.e., either VCA or HLM); in Section 4.3.4, the comparison is between methods (i.e., VCA versus HLM).

Only one VCA sample includes single-business corporations. Its corporate effect (sample 18: 0.084) is, as anticipated, the lowest across all samples; it is also substantially below the second-lowest VCA sample (sample 8: 0.225). In contrast, only one HLM sample excludes singletons. Its corporate effect (sample 17: 0.550) is the highest across all samples and substantially above the second-highest HLM sample (sample 14: 0.449). This sample without single-business corporations differs from the others not only in sample selection but also in model specification: it views the corporate effect as consisting of a business-invariant and a business-variant component. Hence, from this single and atypical sample, we cannot infer the impact of single-business corporations under HLM.

4.3.4. Method: VCA vs. HLM

Among the 18 samples, 11 use VCA and 7 use HLM. Panel C of Figure 11 (and rows C1 and C2 of Table 7) shows that industry effects are somewhat lower, and corporate effects somewhat higher, under HLM than under VCA. As a result, the difference between industry and corporate effects is more pronounced under HLM. One distinction is that the VCA samples typically exclude single-business corporations, whereas the HLM samples include them. Yet, in light of the simulation results mentioned previously, this distinction may not actually explain the differences in effects. Furthermore, if HLM with single-business corporations overestimated the corporate effect, then we should expect it to underestimate the business effect; but the business effect is, if anything, greater under HLM than under VCA. Thus, further investigation comparing the two approaches is needed.

4.3.5. Model: Year, Industry, Corporate, and Business Effects Only

Model specifications differ across samples. In particular, half of the samples employ models with only year, industry, corporate, and business effects ("YICB only"). Studies in the other half also include terms such as country, region, and/or an interaction between industry and year. Panel D of Figure 11 (and rows D1 and D2 of Table 7) shows that the year, industry, and corporate effects differ little across alternative model specifications. The business effect becomes weaker when additional terms are included, which might be explained by the business effect picking up influences that are fixed for a business but vary across industries or corporations. For example, a business may operate in a single region even as its industry and corporation span multiple regions; in that case, omitting region from the specification leads to a higher business effect.

Thus, a consistent pattern emerges across alternative samples, methods, and models: the industry effect is about half that of the business effect, and the corporate effect is slightly greater than the industry effect.

5. Discussion

Based on Cohen’s f2 (1988) criteria yielding 0.02, 0.13, and 0.26 for (respectively) small, medium, and large explained variance, we can classify the effect sizes for industry and corporate as “medium” and for business as “large.”13 There are two striking aspects of the findings reported here. First, business effects explain the most, but their explanatory power, relative to industry and corporate effects, depends on whether the variance measure or instead the standard deviation measure is used. Second, the effect size for corporate effects is somewhat greater than for industry effects; that relation has not been well established in existing research, regardless of the measure used.

When analyzing industry, corporate, and business effects, one should bear three cautionary statements in mind. First, the size of an effect does not equal its importance. A small performance difference can be enough to spell the death (or survival) of a business, and a small difference in return on assets may represent a big difference in absolute returns. Second, the size of an effect is not the same as its influence (Bowman and Helfat 2001, McGahan and Porter 2005). Thus, an effect does not, in itself, reveal the managerial actions required to generate the performance difference. For example, if a successful corporate parent consistently picks profitable industries to enter, then this upside will be viewed as an industry effect rather than as a corporate effect. In other words, the empirical approach identifies correlates, not causes, of performance.

Third, it follows that the observed effects are not causal effects. The literature on variance decomposition defines an "effect" as a performance deviation from a mean (Rumelt 1991, McGahan and Porter 1997)—for example, the mean performance of the businesses of one corporate parent relative to an overall mean performance. Both performances are observable. In contrast, a causal effect is interpreted as the difference between factual and counterfactual performance (Rubin 1974, Morgan and Winship 2015); an example here is the performance of a business under a given corporate parent relative to the performance of the same business under different ownership. By definition, factual and counterfactual performance cannot be observed simultaneously. What, then, can be learned from this empirical approach? Most importantly, it offers a set of stylized facts (McGahan and Porter 2005). If the field of strategy seeks to explain differences in firm performance, then we need to identify those differences and the level at which they occur.

This study has the following implications for the choice of effect size measure. The variance and standard deviation should be favored over the sum of squares as an effects measure. "An effect-size measure is a standardized index and estimates a parameter that is independent of sample size" (Olejnik and Algina 2003, p. 434)—and, I would add, independent of sample dimensions. The sum-of-squares measure does not satisfy this criterion. Chang and Singh (2000) show that the level of industry aggregation (e.g., three- versus four-digit industry classification) matters. The argument here, however, is that for a given level of industry aggregation, the sum of squares is sensitive to sample dimensions. As mentioned previously, sum-of-squares measures are well predicted using only the number of years, industries, corporations, and businesses in the sample. The unfortunate consequence is that relative effect sizes can differ between samples simply because of their different dimensions. For this reason, the preference for variance and standard deviation over the sum of squares is clear.

The variance measure is (mostly) insensitive to sample dimensions, so the choice between the standard deviation and variance measures is more subjective. One downside of the variance measure is that large effects are amplified and small effects are compressed, which may reduce the latter’s perceived importance. For example, the weighted average variance for year effects is less than 0.01. Most scholars would be reluctant to claim that the year is irrelevant, but this is what the variance measure seems to suggest. Similarly, the weighted average variance for industry is only 0.08, which could create the false impression that industry does not matter. At a minimum, it makes industry appear to matter less than it actually does. For instance, one of the most popular strategy textbooks notes that “[i]t appears that industry environment is a relatively minor determinant of a firm’s profitability. Studies of the sources of interfirm differences in profitability have produced very different results […] but all acknowledge that industry factors account for a minor part (less than 20%) of variation in return on assets among firms” (Grant 2016, p. 90). Although the author then proceeds to defend industry analysis, it is unclear whether a defense is needed (recall that, under Cohen’s criteria, a small effect is around 2% and a medium effect around 13%). One upside of the variance measure is that it has a long tradition in the social sciences, which facilitates comparability.

This study suggests two opportunities for further research in this area. First, the ranking of the industry, corporate, and business effects was found to be fairly constant across samples, methods, and models (using the variance or standard deviation measure). Even so, each factor separately exhibited variability across studies. A subgroup analysis was used here to explore that variability; future studies can analyze the same question using individual samples and possibly subsamples. Second, given the robust findings on industry, corporate, and business effects, it would be interesting to identify which industries, corporations, and businesses are overperformers and which are underperformers. We could then move from a factor to an individual effect (e.g., from the corporate factor to a specific corporation).


The author thanks Caroline Koekkoek for excellent research assistance. For discussions and comments, the author thanks Karla Diaz-Ordaz, Isabel Fernandez-Mateo, Colin Fisher, Rouba Ibrahim, Noemi Kreif, Phebo Wibbens, and Miros Zohrevand, as well as participants in the INSEAD Corporate Strategy Camp and the UCL School of Management reading group. The author also thanks the editor Dan Levinthal and the reviewers for the insightful and constructive comments.


1 Business ownership can be partial, as in the case of business groups (e.g., Khanna and Rivkin 2001).

2 The R code of the simulation is provided in an online appendix.

3 When taking into account that business is nested within industry, their respective order of entry in a regression is irrelevant because the data are balanced. Thus, the sequential method (e.g., industry first and then business: SS(I) and SS(B | I)) and the partial method (i.e., industry with business already included, and next business with industry already included: SS(I | B) and SS(B | I)) yield the same sum of squares and R2 as in a regression.

4 The ANOVA literature emphasizes that the inclusion or exclusion of factors in a research design changes the denominator but not the numerator (Kennedy 1970, Cohen 1973). This problem is not unique to a ratio of the sum of squares; it occurs also for a ratio of variances or of standard deviations (Olejnik and Algina 2003, Fritz et al. 2012). However, the issue at hand here is not the inclusion or exclusion of factors (e.g., industry or business), but how many effects are included per factor (e.g., the number of industries or the number of businesses).

5 The expected value of a sum of squares is the degrees of freedom (column df of Table 1) multiplied by the expected mean square (column 𝔼[MS]—Random).

6 The precision of the estimates (SD(σ̂I²) and SD(1/σ̂p²)) matters beyond unbiasedness (𝔼[σ̂I²] = σI² and 𝔼[1/σ̂p²] = 1/σp²) because the effect size measure is a ratio of two random variables. Since Cov(X, Y) = 𝔼[XY] − 𝔼[X]𝔼[Y], we can write 𝔼[σ̂I²/σ̂p²] = Cor(σ̂I², 1/σ̂p²) SD(σ̂I²) SD(1/σ̂p²) + 𝔼[σ̂I²] 𝔼[1/σ̂p²]. When estimate precision is low, typically 𝔼[σ̂I²/σ̂p²] ≠ 𝔼[σ̂I²]/𝔼[σ̂p²]. Yet with high precision, 𝔼[σ̂I²/σ̂p²] ≈ 𝔼[σ̂I²]/𝔼[σ̂p²] because (i) any correlation (Cor(σ̂I², 1/σ̂p²)) matters less given that both SD(σ̂I²) and SD(1/σ̂p²) are low; and (ii) by Jensen's inequality, 𝔼[1/σ̂p²] is then closer to 1/𝔼[σ̂p²].

7 Formally, if 0 < a < b, then √(b/a) < b/a because b/a = √(b/a) × √(b/a) and √(b/a) > 1. Thus, if σ̂B² > σ̂I², then the ratio of standard deviations (σ̂B/σ̂I) is less than the ratio of variances (σ̂B²/σ̂I²).

8 The random- versus fixed-effects assumption is not indicative of whether the effects are constant over time. Either assumption can accommodate time-varying effects by including interactions between year effects and industry or corporate effects. Neither is the random- versus fixed-effects assumption the same as random- versus fixed-effects regressions, a distinction that indicates whether (unobserved) effects are assumed to be uncorrelated with the explanatory variables. In general, even if a random-effects assumption is made, one must still decide on the uncorrelatedness of the effects and the explanatory variables (Wooldridge 2003, p. 473). In the case of ANOVA with a random-effects assumption, the variance components can be estimated from a dummy variable regression (i.e., a fixed-effects regression); see Searle (1971, p. 443) or Method III in Henderson (1953).

9 Even in a fixed-effects model, the notation used for performance is σp—and not sp—because performance includes the error, which is always seen as coming from a population distribution (i.e., a random “effect”).

10 Due to incomplete information on sample dimensions, only 24 of 25 samples are plotted for industry and corporate effects and 23 samples for business effects.

11 These degrees of freedom are only approximations (Hodges and Sargent 2001).

12 Note that the standard deviations sum to more than 1. If σp² = σY² + σI² + σC² + σB² + σe², then σp = √(σY² + σI² + σC² + σB² + σe²), which is less than σY + σI + σC + σB + σe because the square-root function is concave. Dividing by σp, it follows that 1 < σY/σp + σI/σp + σC/σp + σB/σp + σe/σp.

13 Cohen's f2 is defined as the variance accounted for by a factor over the unaccounted variance (Cohen 1988, p. 410). The f2 effect sizes for (respectively) small, medium, and large are 0.02, 0.15, and 0.35 (pp. 413–414). Explained variance is the variance accounted for by a factor over the total variance and is equivalent to f2/(1 + f2) (p. 412), which yields the following thresholds for (respectively) small, medium, and large: 0.02, 0.13, and 0.26.


  • Adner R, Helfat CE (2003) Corporate effects and dynamic managerial capabilities. Strategic Management J. 24(10):1011–1025.
  • Baldwin SA, Bauer DJ, Stice E, Rohde P (2011) Evaluating models for partially clustered designs. Psych. Methods 16(2):149–165.
  • Becerra M, Santaló J (2003) An empirical analysis of the corporate effect: The impact of the multinational corporation on the performance of its units worldwide. Management Internat. Rev. 43(2):7–25.
  • Bell BA, Ferron JM, Kromrey JD (2008) Cluster size in multilevel models: The impact of sparse data structures on point and interval estimates in two-level models. Proc. JSM, Survey Res. Methods Section (American Statistical Association, Alexandria, VA), 1122–1129.
  • Bell BA, Morgan GB, Kromrey JD, Ferron JM (2010) The impact of small cluster size on multilevel models: A Monte Carlo examination of two-level models with binary and continuous predictors. Proc. JSM, Survey Res. Methods Section (American Statistical Association, Vancouver), 4057–4067.
  • Bou JC, Satorra A (2010) A multigroup structural equation approach: A demonstration by testing variation of firm profitability across EU samples. Organ. Res. Methods 13(4):738–766.
  • Bowman EH, Helfat CE (2001) Does corporate strategy matter? Strategic Management J. 22(1):1–23.
  • Brush TH, Bromiley P (1997) What does a small corporate effect mean? A variance components simulation of corporate and business effects. Strategic Management J. 18(10):825–835.
  • Brush TH, Bromiley P, Hendrickx M (1999) The relative influence of industry and corporation on business segment performance: An alternative estimate. Strategic Management J. 20(6):519–547.
  • Chaddad FR, Mondelli MP (2013) Sources of firm performance differences in the US food economy. J. Agricultural Econom. 64(2):382–404.
  • Chan CM, Makino S, Isobe T (2010) Does subnational region matter? Foreign affiliate performance in the United States and China. Strategic Management J. 31(11):1226–1243.
  • Chang S-J, Hong J (2002) How much does the business group matter in Korea? Strategic Management J. 23(3):265–274.
  • Chang S-J, Singh H (2000) Corporate and industry effects on business unit competitive position. Strategic Management J. 21(7):739–752.
  • Clarke P, Wheaton B (2007) Addressing data sparseness in contextual population research using cluster analysis to create synthetic neighborhoods. Sociol. Methods Res. 35(3):311–351.
  • Cohen J (1973) Eta-squared and partial eta-squared in fixed factor ANOVA designs. Educational Psych. Measurement 33(1):107–112.
  • Cohen J (1988) Statistical Power Analysis for the Behavioral Sciences, 2nd ed. (Lawrence Erlbaum Associates, Inc., Hillsdale, NJ).
  • Fitza MA (2014) The use of variance decomposition in the investigation of CEO effects: How large must the CEO effect be to rule out chance? Strategic Management J. 35(12):1839–1852.
  • Fritz CO, Morris PE, Richler JJ (2012) Effect size estimates: Current use, calculations, and interpretation. J. Experiment. Psych.: General 141(1):2–18.
  • Fukui Y, Ushijima T (2011) What drives the profitability of Japanese multi-business corporations? A variance components analysis. J. Japanese Internat. Econom. 25(2):1–11.
  • Furman J (2000) Does industry matter differently in different places? Evidence from four OECD countries. Working Paper 1–43, MIT, Cambridge, MA.
  • Goddard J, Tavakoli M, Wilson JOS (2009) Sources of variation in firm profitability and growth. J. Bus. Res. 62(4):495–508.
  • Grant RM (2016) Contemporary Strategy Analysis, 9th ed. (John Wiley & Sons, Chichester, UK).
  • Hansen GS, Wernerfelt B (1989) Determinants of firm performance: The relative importance of economic and internalization factors. Strategic Management J. 10(5):399–411.
  • Henderson CR (1953) Estimation of variance and covariance components. Biometrics 9(2):226–252.
  • Hodges JS, Sargent DJ (2001) Counting degrees of freedom in hierarchical and other richly-parameterised models. Biometrika 88(2):367–379.
  • Hough JR (2006) Business segment performance redux: A multilevel approach. Strategic Management J. 27(1):45–61.
  • Hox JJ (2010) Multilevel Analysis: Techniques and Applications (Routledge, New York).
  • Hunter JE, Schmidt FL (2004) Methods of Meta-Analysis: Correcting Error and Bias in Research Findings (Sage, Thousand Oaks, CA).
  • Iurkov V, Sasson A (2015) How much do alliance networks matter? Working Paper 1–40, BI Norwegian Business School, Oslo, Norway.
  • Karniouchina EV, Carson SJ, Short JC, Ketchen DJ (2013) Extending the firm vs. industry debate: Does industry life cycle stage matter? Strategic Management J. 34(8):1010–1018.
  • Kennedy JJ (1970) The eta coefficient in complex ANOVA designs. Educational Psych. Measurement 30(4):885–889.
  • Khanna T, Rivkin JW (2001) Estimating the performance effects of business groups in emerging markets. Strategic Management J. 22(1):45–74.
  • Kutner MH, Nachtsheim CJ, Neter J, Li W (2005) Applied Linear Statistical Models, 5th ed. (McGraw-Hill, Singapore).
  • Lieu P-T, Chi C-W (2006) How much does industry matter in Taiwan? Internat. J. Bus. 11(4):387–402.
  • Ma X, Tong TW, Fitza M (2013) How much does subnational region matter to foreign subsidiary performance? Evidence from Fortune Global 500 corporations’ investment in China. J. Internat. Bus. Stud. 44(1):66–87.
  • Mackey A (2008) The effect of CEOs on firm performance. Strategic Management J. 29(12):1357–1367.
  • Makino S, Isobe T, Chan CM (2004) Does country matter? Strategic Management J. 25(10):1027–1043.
  • McGahan AM, Porter ME (1997) How much does industry matter, really? Strategic Management J. 18(S1):15–30.
  • McGahan AM, Porter ME (2002) What do we know about variance in accounting profitability? Management Sci. 48(7):834–851.
  • McGahan AM, Porter ME (2005) Comment on “Industry, corporate and business-segment effects and business performance: A non-parametric approach” by Ruefli and Wiggins. Strategic Management J. 26(9):873–880.
  • Misangyi VF, Elms H, Greckhamer T, Lepine JA (2006) A new perspective on a fundamental debate: A multilevel approach to industry, corporate, and business unit effects. Strategic Management J. 27(6):571–590.
  • Morgan SL, Winship C (2015) Counterfactuals and Causal Inference, 2nd ed. (Cambridge University Press, New York).
  • Nag R, Hambrick DC, Chen M-J (2007) What is strategic management, really? Inductive derivation of a consensus definition of the field. Strategic Management J. 28(9):935–955.
  • Olejnik S, Algina J (2003) Generalized eta and omega squared statistics: Measures of effect size for some common research designs. Psych. Methods 8(4):434–447.
  • Roquebert JA, Phillips RL, Westfall PA (1996) Markets vs. management: What “drives” profitability? Strategic Management J. 17(8):653–664.
  • Rubin DB (1974) Estimating causal effects of treatments in randomized and nonrandomized studies. J. Educational Psych. 66(5):688–701.
  • Ruefli TW, Wiggins RR (2003) Industry, corporate, and segment effects and business performance: A non-parametric approach. Strategic Management J. 24(9):861–879.
  • Ruefli TW, Wiggins RR (2005) Response to McGahan and Porter’s commentary on “Industry, corporate and business-segment effects and business performance: A non-parametric approach.” Strategic Management J. 26(9):881–886.
  • Rumelt RP (1991) How much does industry matter? Strategic Management J. 12(3):167–185.
  • Rumelt RP, Schendel DE, Teece DJ (1994) Fundamental Issues in Strategy: A Research Agenda (Harvard Business School Press, Boston).
  • Schmalensee R (1985) Do markets differ much? Amer. Econom. Rev. 75(3):341–351.
  • Searle SR (1971) Linear Models (John Wiley & Sons, New York).
  • Stavropoulos S, Burger MJ, Skuras D (2015) Data sparseness and variance in accounting profitability. Organ. Res. Methods 18(4):656–678.
  • Tarziján J, Ramirez C (2011) Firm, industry and corporation effects revisited: A mixed multilevel analysis for Chilean companies. Appl. Econom. Lett. 18(1):95–100.
  • Wooldridge JM (2003) Introductory Econometrics: A Modern Approach, 2nd ed. (Thomson, Mason, OH).
  • Zacharias N, Six B, Schiereck D, Stock RM (2015) CEO influences on firms’ strategic actions: A comparison of CEO-, firm-, and industry-level effects. J. Bus. Res. 68(11):2338–2346.
  • Zavosh G, Dibiaggio L (2015) How much does corporate effect matter? Definition of business-variant corporate effect. Acad. Management Proc. 2015(1):13000.
  • Zavosh G, Dibiaggio L (2016) How much does corporate effect matter? Definition and estimation of business-variant corporate effect. Working Paper 1–36, SKEMA Business School, Sophia Antipolis, France.

Bart Vanneste is an associate professor in the strategy and entrepreneurship area at the UCL School of Management, University College London. He received his PhD in strategic and international management from the London Business School. His research focuses on interorganizational relationships and corporate strategy.