
Quantitative Research

Bridging Design and Behavioral Research With Variance-Based Structural Equation Modeling

Advertising research is a scientific discipline that studies artifacts (e.g., various forms of marketing communication) as well as natural phenomena (e.g., consumer behavior). Empirical advertising research therefore requires methods that can model design constructs as well as behavioral constructs, which typically require different measurement models. This article presents variance-based structural equation modeling (SEM) as a family of techniques that can handle different types of measurement models: composites, common factors, and causal–formative measurement. It explains the differences between these types of measurement models and clears up possible ambiguity regarding formative endogenous constructs. The article proposes confirmatory composite analysis to assess the nomological validity of composites, confirmatory factor analysis (CFA) and the heterotrait-monotrait ratio of correlations (HTMT) to assess the construct validity of common factors, and the multiple indicator, multiple causes (MIMIC) model to assess the external validity of causal–formative measurement.

Advertising research is a relatively young academic discipline that combines design research and behavioral research. On one hand, advertising research covers “knowledge unique to advertising as an institution and professional practice” (Reid 2014, p. 410). It investigates what is understood as advertising in the widest sense, including the whole range of marketing communication and branding. In this sense, advertising forms a class of marketing instruments that can be viewed as artifacts designed by humans. The term artifact should be understood broadly, but not as a statistical or methodological artifact despite the methodological character of this article.1 Advertising research with this focus is a “science of the artificial” (Simon 1969) and thus a design science, aiming to generate a body of knowledge on how to create, improve, orchestrate, and manage specific types of marketing instruments. On the other hand, advertising research is predominantly regarded as a behavioral science (Carlson 2015), aiming to explain advertising effects and the social aspects of advertising (Reid 2014). Advertising research with this focus sheds light on consumers and can be viewed as a particular type of consumer research and applied psychology. This dual focus poses challenges for its theories as well as the empirical methods to create and validate them.

Empirical advertising research on relationships between behavioral constructs and design constructs needs analytical tools that can cope with the different requirements of behavioral and design sciences. Behavioral constructs are often latent variables that can be understood as ontological entities, such as attributes or attitudes of consumers. This way of theoretical reasoning rests on the assumption that theoretical constructs of interest exist in nature, irrespective of scientific investigation. In contrast, constructs of design research (artifacts) can be conceived as products of theoretical thinking. Thinking about constructs as artifacts has its roots in constructivist epistemology. Constructs in this sense can be understood as constructions that are theoretically justified. The epistemological distinction between the ontological and the constructivist nature of constructs has important design implications. The correspondence rule that links the empirical indicators to the theoretical construct, as conceptually represented in what is referred to as the measurement model, depends on the nature of the construct. Whereas behavioral constructs are typically modeled as common factors, design constructs can be modeled as composites. Modeling design constructs as composites pays tribute to the fact that all artifacts or abstractions thereof consist of more elementary components (Nelson and Stolterman 2003).

Against this background, this article illustrates the use of variance-based structural equation modeling (SEM) as an analytical tool for empirical advertising research at the interface of design and behavioral research. Unlike covariance-based SEM, variance-based SEM can estimate common factors and composites, which makes it suitable for behavioral constructs as well as design constructs. The remainder of this article is organized as follows: It begins by explaining the nature of variance-based SEM and how to specify structural equation models containing composites as well as common factors. This includes the different specifications of measurement models (composite, reflective, and causal–formative) as well as structural models. Next, it describes how to conduct model tests, how to assess and report estimates of variance-based SEM, and the use of the bootstrap for inference statistics. Finally, it discusses extensions of variance-based SEM and provides suggestions for further research.

VARIANCE-BASED STRUCTURAL EQUATION MODELING

As in several other scientific disciplines, theoretical constructs are the building blocks of advertising theories. A theoretical construct is “a conceptual term used to describe a phenomenon of theoretical interest” (Edwards and Bagozzi 2000, pp. 156–57). Generally speaking, most theoretical constructs “can only be measured through observable measures or indicators that vary in their degree of observational meaningfulness and validity. No single indicator can capture the full theoretical meaning of the underlying construct and hence, multiple indicators are necessary” (Steenkamp and Baumgartner 2000, p. 196).

Common theoretical constructs in advertising research are, for instance, attitude toward the brand, ad liking, ad awareness, advertising practices, media mix, advertising budget, and advertising content. Whereas some theoretical constructs of advertising research refer to consumer attributes, others refer to human-made objects (artifacts) that are typically created by managers, staff, or other agents of firms.

To empirically investigate relationships between theoretical constructs of advertising research, researchers can apply SEM. SEM is a family of statistical techniques that have become popular in advertising and marketing research (Henseler, Ringle, and Sarstedt 2012). A key reason for the attractiveness of SEM is the possibility to (graphically) model and estimate parameters for relationships between theoretical constructs and to test complete behavioral science theories (Bollen 1989). SEM distinguishes between theoretical constructs and their empirical measurement by multiple observable variables.

SEM can be divided into two subtypes: covariance based (Jöreskog 1978; Rigdon 1998) and variance based (Reinartz, Haenlein, and Henseler 2009). These two approaches to SEM differ in their estimation objectives (Henseler, Ringle, and Sinkovics 2009). Covariance-based SEM minimizes a discrepancy between the empirical covariance matrix and the theoretical covariance matrix implied by the structural equations of the specified model. In contrast, variance-based SEM determines construct scores as linear combinations of observed variables such that a certain criterion of interrelatedness is maximized. Variance-based SEM techniques encompass extended canonical correlation analysis (Kettenring 1971), generalized structured component analysis (Hwang and Takane 2004), traditional partial least squares (PLS) path modeling (Lohmöller 1989), consistent PLS path modeling (Dijkstra and Henseler 2015a, 2015b), and regularized generalized canonical correlation analysis (Tenenhaus and Tenenhaus 2011). Regressions between sum scores or principal components can also be regarded as simple variance-based SEM techniques. The techniques differ with respect to their optimization function and abilities.

Applications of variance-based SEM in advertising research address topics such as online and mobile advertising (Jensen 2008; Naik and Raman 2003; Okazaki, Li, and Hirose 2009), advertising believability and information source value (O'Cass 2002), e-mail viral marketing (San José-Cabezudo and Camarero-Izquierdo 2012), integrated marketing communication ability (Luxton, Reid, and Mavondo 2015), and advertising and selling practices (Okazaki, Mueller, and Taylor 2010a, 2010b).

SPECIFYING AND ESTIMATING STRUCTURAL EQUATION MODELS

Structural equation models consist of two submodels: the measurement model, which specifies the relationships between constructs and their indicators, and the structural model, which contains the relationships between constructs. Three types of measurement models can be distinguished: composite, reflective, and causal–formative. The choice of measurement model should be driven mainly by the nature of the construct, in other words, whether a design construct or a behavioral construct is studied. Design constructs can be regarded as mixtures of elements. This suggests that they should be modeled as composites. In contrast, constructs of behavioral sciences are typically latent variables and are traditionally modeled using reflective measurement. If causal indicators (antecedents of the construct) are available in addition to the reflective indicators, an analyst can apply causal–formative measurement. Table 1 summarizes the differences between the three types of measurement models, and Figure 1 depicts the resulting decision tree.

TABLE 1 Summary of Differences Between Types of Measurement Models

FIG. 1. Decision tree for measurement models.

Composite Measurement

The composite measurement model, also referred to as the composite factor model (Henseler et al. 2014), the composite–formative model (Bollen and Diamantopoulos 2015), or simply the composite model, assumes a definitorial relation between a construct and its indicators. This means that the construct is made up of its indicators or elements. An example from advertising research would be brand equity as conceptualized by Aaker (1991): It is a construct made up of brand awareness, brand associations, brand quality, brand loyalty, and other proprietary assets.

In composite measurement, the relationships between the indicators and the construct are not cause-effect relationships but rather a prescription of how the ingredients should be arranged to form a new entity. Nelson and Stolterman (2003, p. 119) remind us that “[a]lthough it's true that ‘the whole is greater than the sum of its parts,’ we must also acknowledge that the whole is of these parts.”

Figure 2 depicts a composite measurement model. The arrow connections between the indicators and the composite should not be regarded as causal relationships in the common sense of the word causal. Rather, in terms of the four Aristotelian causes, composite measurement taps into the material cause instead of the efficient cause. In formal terms, the composite model regards the construct $\xi$ as a linear combination of its indicators $x_i$, each weighted by an indicator weight $w_i$:

$$\xi = \sum_i w_i x_i \qquad (1)$$

FIG. 2. Composite measurement.

Researchers who introduce a composite can be thought of as designers: They design this construct. Designers can choose whether they define the weights or let mathematical tools determine the weights to achieve some kind of optimality. A typical composite construct of advertising research is the media mix: Different media can receive equal budgets, different budgets based on the decider's experience, or different budgets based on heuristics or optimization tools (Färe et al. 2004; Reynar, Phillips, and Heumann 2010). Researchers using variance-based SEM typically let the software provide estimates for the weights. If the indicators are highly correlated, preset equal weights are also a viable option (McDonald 1996). In general, preset weights are the way to go if concrete weights form an integral part of the recipe of the modeled artifact.
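
As an illustration, the following minimal base-R sketch forms a composite score according to Equation 1; the indicator matrix and the preset weights are hypothetical, not taken from the studies cited above.

```r
# Minimal sketch: a composite score as a weighted linear combination of
# standardized indicators (Equation 1); all data and weights are hypothetical.
set.seed(123)
X <- scale(matrix(rnorm(300), ncol = 3))   # three standardized indicators
colnames(X) <- c("tv", "print", "online")  # hypothetical media-mix elements

w <- c(0.5, 0.3, 0.2)                      # weights preset by the designer
composite <- X %*% w                       # resulting construct scores

# Alternatively, variance-based SEM software estimates the weights so that
# a criterion of interrelatedness with neighboring constructs is maximized.
```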

Composite measurement models impose only a few restrictions on the overall model. The most important restriction is that all correlations between indicators of different constructs can be explained as the product of the interconstruct correlations and the respective indicator loadings. Beyond that, the composite measurement model does not require any assumptions about the correlations between its indicators; they can take any value. Consequently, the correlations between indicators are not indicative of any sort of quality, and applying internal consistency reliability coefficients to composite measurement models does not bear any meaning. Instead, composite measurement can be evaluated only in relation to its nomological net, which implies that constructs specified as composites typically require a context in which they are embedded.

Reflective Measurement

Reflective measurement models form the backbone of behavioral research. Advertising research constructs borrowed from consumer psychology are most often modeled in this way. A typical example would be consumer involvement (Andrews, Durvasula, and Akhter 1990). Reflective measurement models are essentially common factor models, which postulate that there is a latent variable underlying a set of observable variables. In turn, each observable variable or indicator is regarded as an error-prone manifestation of the latent variable's level, as expressed by the following equation, in which $\lambda_i$ denotes the indicator loading and $\varepsilon_i$ the measurement error:

$$x_i = \lambda_i \xi + \varepsilon_i \qquad (2)$$

The measurement errors are assumed to be centered around zero and uncorrelated with other variables, constructs, or errors in the model. The latent variable is not directly observable, but only the correlational pattern of its indicators provides indirect support for its existence. Figure 3 depicts a typical reflective measurement model. The strong tie between reflective measurement and the common factor model implies that covariance-based SEM typically serves as its statistical workhorse (Bollen 1989; Jöreskog and Sörbom 1982, 1993). For a more detailed description of reflective measurement, we refer to Hair, Babin, and Krey (2017) in this issue of Journal of Advertising.

FIG. 3. Reflective measurement.

In principle, variance-based SEM estimates composite models, not factor models. If a composite is created as a linear combination of error-prone indicators, the composite itself does contain measurement error. As a consequence, researchers who use composites as stand-ins for latent variables will obtain inconsistent model coefficients and risk inflated Type I and Type II errors (Henseler 2012). For most types of research—except predictive research—it is indispensable to aim for consistent estimates. The solution is the correction for attenuation. It entails that the correlation between composites divided by the geometric mean of their reliabilities is a consistent estimate of the correlation between the factors.
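
As a minimal illustration, the correction for attenuation can be computed as follows; all values are hypothetical.

```r
# Minimal sketch of the correction for attenuation: the observed correlation
# between two composites is divided by the geometric mean of their
# reliabilities (all values hypothetical).
r_observed <- 0.42  # correlation between the two composite scores
rel_a <- 0.85       # reliability of composite A
rel_b <- 0.80       # reliability of composite B

r_disattenuated <- r_observed / sqrt(rel_a * rel_b)
r_disattenuated     # consistent estimate of the factor correlation (about 0.51)
```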

There are several ways of determining the reliability of the composites. First, one can use covariance-based SEM to estimate a factor model and derive the composite's reliability from the variance that the composite and the factor share (Raykov 1997). If all the weights of a composite are equal, the reliability can be calculated based on the factor loadings (Werts et al. 1978). Second, if a factor is embedded in a nomological net, one can exploit the fact that some variance-based SEM techniques (such as PLS Mode A) provide weights that are proportional to the true yet unknown correlations between the indicators and their common factor (Dijkstra and Henseler 2015a, 2015b). Researchers do not have to conduct a separate common factor analysis to obtain consistent estimates for the loadings.

Causal–Formative Measurement

The causal–formative measurement model (often referred to as the formative measurement model) assumes a different epistemic relationship between the construct and its indicators: The indicators are considered as immediate causes of the focal construct (Fassott and Henseler 2015). In turn, the construct is seen as a linear combination of the indicators plus a measurement error. An example from advertising research would be the perceived interactivity of a website: This construct can be measured in a causal–formative way using the indicators “active control,” “synchronicity,” and “two-way communication” (Voorveld, Neijens, and Smit 2010).

The following equation represents a causal–formative measurement model, where $w_i$ indicates each indicator's contribution to $\xi$, and $\delta$ is an error term:

$$\xi = \sum_i w_i x_i + \delta \qquad (3)$$

This equation strongly resembles the one for composite measurement, yet the measurement error on the construct level makes it distinct. The measurement error on the construct level implies that the construct of interest has not been perfectly measured by its formative indicators. Except for rare cases when all causes can be measured (e.g., see Diamantopoulos 2006), it is indispensable to also have a reflective measurement model; otherwise it is not possible to capture the entire content of the construct. The reflective indicators can be observed or latent as long as there are at least two reflective indicators whose correlation is fully attributable to the construct as a common cause. Figure 4 depicts a causal–formative measurement model.

FIG. 4. Causal–formative measurement.

There is some confusion in the literature about what is meant by formative measurement. Authors referring to formative measurement sometimes discuss the characteristics of composite measurement and sometimes those of causal–formative measurement (e.g., in particular early contributions on formative measurement, such as Diamantopoulos and Winklhofer 2001; Jarvis, MacKenzie, and Podsakoff 2003). This confusion can be traced back to Edwards and Bagozzi (2000), who deliberately sought a term that characterizes both causal and definitorial relationships.

The confusion has culminated in such statements as “When an endogenous latent variable relies on formative indicators for measurement, empirical studies can say nothing about the relationship between exogenous variables and the endogenous formative latent variable” (Cadogan and Lee 2013, p. 233; for a rejoinder, see Rigdon 2014a) or variance-based SEM “is not an adequate approach to modeling scenarios where a latent variable of interest is endogenous to other latent variables in the research model in addition to its own observed formative indicators” (Aguirre-Urreta and Marakas 2013, p. 776; for a rejoinder, see Rigdon et al. 2014).

The confusion can be cleared up if one carefully distinguishes between composite measurement and causal–formative measurement. Whereas the older literature on variance-based SEM tends to equate formative measurement with composite measurement (e.g., see Chin 1998; Hwang and Takane 2004), it is only recently that scholars started recommending the multiple indicators, multiple causes (MIMIC) model specification for causal–formative measurement in variance-based SEM, as depicted in Figure 4 (Rigdon et al. 2014). For covariance-based SEM, such types of models have been the standard for decades (e.g., see Bagozzi 1980).

Particular care is required if a construct with a causal–formative measurement model is meant to be explained by other constructs in the model. Researchers should then apply the litmus test of whether these other constructs are theorized to directly or indirectly cause the construct. In the case of a direct causal relationship, the other constructs should be added as additional formative indicators. In the case of an indirect causal relationship, the extant formative indicators mediate the effect of the other constructs. Consequently, the researcher should include effects from the other constructs on the formative indicators in the model.

The Structural Model

The structural model consists of endogenous and exogenous constructs as well as the (typically linear) relationships between them. In variance-based SEM, exogenous constructs can freely correlate. The size and significance of the path relationships are typically the focal points of the scientific endeavors pursued in empirical research.

In variance-based SEM, it is helpful to estimate two models: the estimated model, as specified by the analyst, and the saturated model (Gefen, Straub, and Rigdon 2011). The latter corresponds to a model in which all constructs can freely correlate, whereas the construct measurement is exactly as specified by the analyst. The difference lies purely in the structural model. If the estimated model is a full graph, both models will be equivalent. The saturated model is useful to assess the quality of the measurement model, because potential model misfit can be entirely attributed to measurement model misspecification.

In principle, it is possible for structural models to leave the comfortable realm of linear relations. In advertising research, more is not always better, but there can be optimal numbers of advertising instruments. This notion can be modeled by an inverse U-shaped relation. Another common phenomenon in advertising research is saturation. Both phenomena can be modeled using variance-based SEM if nonlinear terms are included in the structural model. In many cases, simple polynomial extensions can help model the typical nonlinearities in advertising research (Dijkstra and Henseler 2011; Henseler et al. 2012).

A particular form of nonlinearity is moderation. One refers to a moderating effect if a focal effect is not constant but depends on the level of another construct in the model. Several approaches for modeling moderating effects using variance-based SEM have been proposed (e.g., Fassott, Henseler, and Coelho 2016; Henseler and Fassott 2010; Henseler and Chin 2010).

Model Identification

Researchers using covariance-based SEM quickly become aware of the need for identified models. The applied statistical technique can provide unambiguous estimates for the model parameters only if the model is identified. Variance-based structural equation models typically do not have identification problems, because the available software packages restrict the allowed models to those that are theoretically identified.

Nevertheless, it can happen that a variance-based structural equation model is statistically underidentified. This occurs if a construct with multiple indicators is unrelated to all other constructs in the model. In this case, any combination of indicator weights would yield the same result, namely a construct that is unrelated to the rest of the model. Analysts should avoid this situation and take care that every construct is embedded in a nomological net that consists of at least one other related variable in the model. If such a nomological net is not available, researchers should preset the weights or determine them by means of techniques that do not require a nomological net, such as principal component analysis (PCA).
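
A minimal sketch of the latter option, using base R's principal component analysis on hypothetical indicator data:

```r
# Minimal sketch: determining indicator weights without a nomological net
# via principal component analysis (hypothetical indicator data).
set.seed(42)
X <- matrix(rnorm(500), ncol = 5)            # five hypothetical indicators
w <- prcomp(X, scale. = TRUE)$rotation[, 1]  # first-component weights
scores <- scale(X) %*% w                     # resulting construct scores
```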

A special identification issue is the phenomenon of sign indeterminacy, which all SEM techniques face. Sign indeterminacy means that the statistical method can determine the weight or loading estimates of a factor or a composite only up to a common sign flip: It can happen that all indicators of a construct have a sign opposite to what would be expected. In covariance-based SEM, it has become customary to constrain one loading to one, dictating the orientation of the construct. Recently, this approach was partly transferred to variance-based SEM as the dominant indicator approach (Henseler, Hubona, and Ray 2016). For each construct, the researcher should determine one indicator—the dominant indicator—that must correlate positively with the construct. If the loading of this indicator turns out to be negative, the orientation of the construct will be switched. This is achieved by multiplying its scores by −1.
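
A minimal sketch of this reorientation step, with hypothetical construct scores and a hypothetical dominant indicator:

```r
# Minimal sketch of the dominant indicator approach: if the construct
# correlates negatively with its designated dominant indicator, the
# construct scores are reoriented by multiplying them by -1.
orient_construct <- function(scores, dominant_indicator) {
  if (cor(scores, dominant_indicator) < 0) scores <- -scores
  scores
}
```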

ASSESSING AND REPORTING THE RESULTS OF VARIANCE-BASED STRUCTURAL EQUATION MODELING

The fact that structural equation models consist of two submodels has immediate implications for the way in which the results of variance-based SEM are assessed. It makes sense to analyze the relationships among the constructs only if there is sufficient evidence of their validity and reliability. In analogy to the two-step approach for covariance-based SEM (Anderson and Gerbing 1988), a two-step approach for variance-based SEM is suggested. In a first step, the quality of construct measurement is determined. In a second step, the empirical estimates for the relationships between the constructs are examined. In the following section, we draw from new guidelines to assess and report results of variance-based SEM (Henseler, Hubona, and Ray 2016).

Assessing Composite Measurement Models

Composites can be assessed with regard to three characteristics: nomological validity, reliability, and weights (composition).

Composites can be regarded as prescriptions for dimension reduction (Dijkstra and Henseler 2011) and generally go along with a loss of information. Analysts face a trade-off: Should they form the composite and accept the loss of information, or continue the analysis simply using the indicators?

A generally accepted heuristic is Ockham's razor: A model should be preferred over a more general model in which it is nested if it does not exhibit a significantly worse goodness of fit. Composites impose proportionality constraints on the correlations between the composite's indicators and other variables in the model. If a model with these proportionality constraints does not have a significantly worse fit than a model without them, the composite can be said to have nomological validity. If a composite has nomological validity, a researcher can infer that it is the composite that acts within a nomological net rather than the individual indicators. The concept of nomological validity was developed by Cronbach and Meehl (1955) for factor models; its adaptation for the application to composite measurement is new.

The statistical technique needed to test for the nomological validity of composites is confirmatory composite analysis (Henseler et al. 2014). Confirmatory composite analysis tests whether the discrepancy between the empirical correlation matrix and the correlation matrix implied by the saturated model is so small that the possibility cannot be excluded that this discrepancy is purely attributable to sampling error. The statistical test underlying confirmatory composite analysis uses bootstrapping to generate an empirical distribution of the discrepancy if the model was true (Dijkstra and Henseler 2015a). According to Zhang and Savalei (2016), this “[m]odel-based bootstrap is appropriate for obtaining accurate estimates of the p value for the test of exact fit under the null hypothesis” (p. 395).
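
A minimal sketch of such a model-based bootstrap test is given below. It assumes that a model-implied correlation matrix Sigma_hat is available from a prior estimation run and uses the SRMR (introduced later in this article) as the discrepancy measure; the function names and data are hypothetical, not part of any particular software package.

```r
# Minimal sketch of a model-based bootstrap test of exact fit.
# Sigma_hat is assumed to come from a prior SEM estimation (hypothetical here).
srmr <- function(S, Sigma) {          # discrepancy between two correlation matrices
  res <- (S - Sigma)[lower.tri(S)]
  sqrt(mean(res^2))
}
mat_sqrt <- function(M) {             # symmetric matrix square root via eigendecomposition
  e <- eigen(M, symmetric = TRUE)
  e$vectors %*% diag(sqrt(pmax(e$values, 0))) %*% t(e$vectors)
}

test_exact_fit <- function(X, Sigma_hat, B = 1000) {
  X <- scale(X)
  S <- cor(X)
  d0 <- srmr(S, Sigma_hat)            # observed discrepancy
  # Transform the data so that its correlation equals Sigma_hat; resampling
  # from Y then mimics a world in which the model holds exactly.
  Y <- X %*% solve(mat_sqrt(S)) %*% mat_sqrt(Sigma_hat)
  d_boot <- replicate(B, {
    Yb <- Y[sample(nrow(Y), replace = TRUE), ]
    srmr(cor(Yb), Sigma_hat)
  })
  mean(d_boot >= d0)                  # bootstrap p value of the test of exact fit
}
```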

In addition to the nomological validity of the composite, it is possible to determine its reliability. If a composite is measured by means of perfectly observable variables, there is no random measurement error involved, and the resulting reliability of the composite equals 1. If the indicators contain random measurement error, the composite will have imperfect reliability. In these instances, the reliability of the composite can be determined using the following equation (Mosier 1943):

$$\rho = \frac{\mathbf{w}'\mathbf{S}^{*}\mathbf{w}}{\mathbf{w}'\mathbf{S}\mathbf{w}} \qquad (4)$$

In this equation for a composite's reliability $\rho$, $\mathbf{w}$ is the column vector of indicator weights, $\mathbf{S}$ is the correlation matrix of the composite's indicators, and $\mathbf{S}^{*}$ is the same correlation matrix with the respective indicator reliabilities in the main diagonal. Analysts facing the challenge of providing reliability estimates for each indicator could make use of respective values reported in previous studies or model second-order constructs as composites of factors (van Riel et al. forthcoming).
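
A minimal numerical sketch of Equation 4, with hypothetical weights, correlations, and indicator reliabilities:

```r
# Minimal sketch of Mosier's (1943) composite reliability (Equation 4);
# all weights, correlations, and reliabilities are hypothetical.
w <- c(0.5, 0.4, 0.3)                    # indicator weights
S <- matrix(c(1.00, 0.60, 0.50,
              0.60, 1.00, 0.55,
              0.50, 0.55, 1.00), 3, 3)   # indicator correlation matrix
S_star <- S
diag(S_star) <- c(0.80, 0.75, 0.85)      # indicator reliabilities on the diagonal

rho <- c(t(w) %*% S_star %*% w) / c(t(w) %*% S %*% w)
rho                                      # reliability of the weighted composite
```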

Finally, if the weights were not preset by the analyst but freely estimated, they should be carefully studied. What is their size? What is their sign? What are their confidence intervals? Another point of concern should be multicollinearity among indicators (Diamantopoulos and Winklhofer 2001): High levels of multicollinearity may cause indicators to exhibit unexpected signs or very wide confidence intervals.

Assessing Reflective Measurement Models

The point of departure to assess reflective measurement models should be a model test of the saturated model (Anderson and Gerbing 1988). In analogy to the confirmatory composite analysis as described in the previous subsection, reflective measurement models should be examined using confirmatory factor analysis (CFA). If a structural equation model consists only of reflectively measured constructs, covariance-based SEM is the most versatile technique for this task. If the structural equation model also contains composites, covariance-based SEM is not applicable, and it is recommended that the CFA be conducted using variance-based SEM, leading to a combined confirmatory composite/factor analysis. Technically, the CFA using variance-based SEM does not differ from the confirmatory composite analysis. The main difference can be found in the model-implied correlation matrix: For factor models, the implied correlation between two indicators of a factor is constrained to the product of their loadings, while these implied correlations are unconstrained for composite models.

Experience has shown that most empirical studies on marketing and management provide evidence against the existence of a factor model (Henseler et al. 2014). Concretely, researchers almost always find a significant discrepancy between the empirical correlation matrix and the model-implied correlation matrix, which reflects the pattern that should be observed if the world indeed functioned according to the researcher's model. For advertising research, the figures are quite similar, although not that bad: As Hair, Babin, and Krey (2017) point out in this issue of Journal of Advertising, about 12% of the CFAs reported in the journal exhibit factor models without significant misfit.

As a consequence of the poor test record of the factor model, many researchers lose interest in testing the hypothesis of exact fit (Zhang and Savalei 2016, p. 395), which is a worrying trend. They acknowledge more or less that the factor model is not (fully) correct and rely on measures of approximate model fit to quantify the degree of the model's misfit. A popular measure of approximate model fit is the standardized root mean square residual (SRMR; Hu and Bentler 1999), which has been shown to work well in combination with variance-based SEM (Henseler et al. 2014). SRMR values below 0.08 typically indicate that the degree of misfit is not substantial (Henseler, Hubona, and Ray 2016). Instead of surrendering in the light of significant misfit and referring to measures of approximate fit, it would be wiser to investigate the sources of misfit. In terms of model diagnostics, the (standardized) residual matrix is most informative to detect significant discrepancies between the empirical covariance matrix and the model-implied covariance matrix.

While the overall goodness-of-fit test and measures of approximate fit are informative about whether the data at hand favor a factor model, they hardly provide evidence of the quality of measurement. This becomes obvious if one looks at the extreme case of a factor model whose indicators exhibit very low correlations. In this case, a factor model is unlikely to be rejected, because the discrepancy between the empirical correlation matrix and the model-implied correlation matrix will most likely be small. Yet it is legitimate to ask whether one was able to measure the intended factor at all. Additional quality criteria have therefore been proposed.

Unidimensionality indicates whether a researcher succeeds in extracting a dominant factor out of a set of indicators. The most widely applied measure of unidimensionality is the average variance extracted (AVE; Fornell and Larcker 1981). It equals the average proportion of variance of a latent variable's reflective indicators that the latent variable explains. Researchers should strive for values higher than 0.5, because then there cannot be a second factor that explains as much variance as the first one. A weaker alternative is the permutation test proposed by Sahmer, Hanafi, and El Qannari (2006), which assesses whether the first extracted factor explains significantly more variance than the second.
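
For instance, a minimal sketch with hypothetical standardized loadings:

```r
# Minimal sketch: average variance extracted (AVE) from the standardized
# loadings of a reflectively measured construct (hypothetical values).
loadings <- c(0.82, 0.77, 0.74, 0.69)
ave <- mean(loadings^2)
ave   # about 0.57, above the 0.5 threshold
```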

Discriminant validity applies if two conceptually different constructs are also statistically distinct. Fornell and Larcker (1981) operationalized this requirement as a comparison between a construct's AVE and its squared correlations with other constructs in the model. The Fornell-Larcker criterion postulates that a construct's AVE should be higher than all its squared correlations with other constructs. A newer criterion for discriminant validity is the heterotrait-monotrait ratio of correlations (HTMT; Henseler, Ringle, and Sarstedt 2015). In a recent simulation study, the HTMT clearly outperformed the Fornell-Larcker criterion (Voorhees et al. 2016). An HTMT value significantly smaller than 1 or clearly below 0.85 provides sufficient evidence of the discriminant validity of a pair of constructs.
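
As an illustration, one common variant of the HTMT (using absolute correlations) can be sketched in a few lines of base R; the indexing vectors and the correlation matrix are hypothetical.

```r
# Minimal sketch of the heterotrait-monotrait ratio (HTMT) for two
# constructs, computed from an indicator correlation matrix R;
# idx1 and idx2 index the indicators of each construct (hypothetical).
htmt <- function(R, idx1, idx2) {
  hetero <- mean(abs(R[idx1, idx2]))   # heterotrait-heteromethod correlations
  mono1  <- mean(abs(R[idx1, idx1][lower.tri(diag(length(idx1)))]))
  mono2  <- mean(abs(R[idx2, idx2][lower.tri(diag(length(idx2)))]))
  hetero / sqrt(mono1 * mono2)         # compare against 0.85 or test against 1
}
```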

The internal consistency reliability quantifies the amount of random measurement error contained in the construct scores that serve as stand-ins for the latent variables. Consistent reliability coefficients for construct scores are Raykov's r (Raykov 1997) and Dijkstra-Henseler's rho (ρA; Dijkstra and Henseler 2015b). If all indicator weights of a composite are equal, these coefficients equal the composite reliability ρc proposed by Werts et al. (1978). Psychometricians recommend a minimum reliability value of 0.7 (Nunnally and Bernstein 1994). Finally, a researcher should ensure that each indicator loads sufficiently well on its own construct but less on the other constructs in the model, which can be verified by inspecting the cross-loadings. If needed, researchers can rely on additional assessment criteria developed for covariance-based SEM (e.g., see Markus and Borsboom 2013).

Assessing Causal–Formative Measurement Models

Because causal–formative measurement models require a complementary reflective measurement model, a point of departure is the assessment of this reflective measurement. Once this is accomplished, the analyst can devote attention to the causal–formative measurement. Diamantopoulos and Winklhofer (2001) propose to assess content validity, indicator validity, indicator collinearity, and external validity of causal–formative measurement models.

Content validity is about whether the set of indicators indeed captures the full meaning of the construct. Transparent reporting of the employed indicators helps create face validity. In this way, content validity can be assessed without collecting data. Indicator validity can also be assessed before data collection by letting experts conduct a sorting task (Anderson and Gerbing 1991). If experts are able to correctly assign indicators to constructs, one can refer to the expert validity of the indicators.

Other ways of assessing causal–formative measurement models require estimates and corresponding inference statistics obtained from empirical data. Concretely, indicator validity applies if an indicator contributes significantly and substantially to explaining the construct. Indicator multicollinearity can have an adverse effect on this approach to indicator validity. Analysts are therefore advised to keep an eye on the variance inflation factors of the formative indicators.
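
The variance inflation factor of a formative indicator $x_j$ follows the usual regression definition, where $R_j^2$ is the coefficient of determination from regressing $x_j$ on the remaining formative indicators:

$$\mathrm{VIF}_j = \frac{1}{1 - R_j^2}$$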

The strongest evidence of the validity of causal–formative measurement is external validity: How much variance of the construct can be explained by the formative indicators? While there are general suggestions for threshold levels (e.g., see Henseler, Ringle, and Sinkovics 2009), which threshold makes sense may depend on the construct's stage of scientific development as well as on the scientific discipline.

Assessing Structural Models

A starting point for the assessment of structural models should be the coefficients of determination (R² values) of the endogenous constructs. The coefficient of determination lies between 0 and 1 and quantifies the proportion of variance of a dependent construct that is explained by its predictors. To compare models with different numbers of independent variables or models estimated on samples of different sizes, the adjusted R² should be applied.
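
The adjustment follows the familiar formula, with $n$ denoting the sample size and $k$ the number of predictors:

$$R^2_{\text{adj}} = 1 - (1 - R^2)\,\frac{n-1}{n-k-1}$$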

Because the constructs in variance-based SEM are typically standardized, the path coefficients of the structural equation model should be interpreted like standardized regression coefficients: A coefficient for a path between an independent and a dependent variable quantifies the expected change in the dependent variable (in standard deviations) if the independent variable increases by one standard deviation and all other independent variables in the regression equation are kept constant (i.e., ceteris paribus). Apart from the size of a coefficient, its sign also matters: A negative sign implies that an increase in the independent variable is accompanied by a decrease in the dependent variable.

Inference statistics for all coefficients in a structural equation model are typically obtained using the bootstrap. Empirical bootstrap confidence intervals are the output of choice to gauge the sampling variability of a coefficient. Alternatively, the bootstrap can provide Student t and corresponding p values for one-sided and two-sided null hypothesis significance tests.
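
A minimal sketch of a percentile bootstrap confidence interval, using an ordinary regression as a stand-in for a structural equation and hypothetical data:

```r
# Minimal sketch: percentile bootstrap confidence interval for a path
# coefficient (ordinary regression stands in for the structural model).
set.seed(1)
df <- data.frame(x = rnorm(200))
df$y <- 0.4 * df$x + rnorm(200)

boot_est <- replicate(5000, {
  i <- sample(nrow(df), replace = TRUE)   # resample observations
  coef(lm(y ~ x, data = df[i, ]))["x"]    # re-estimate the coefficient
})
quantile(boot_est, c(0.025, 0.975))       # 95% empirical bootstrap CI
```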

Although the path coefficients provide a first impression of the size of an effect, they are not very helpful in comparing the sizes of effects across models, because they are influenced by the number of other explanatory variables as well as the correlations among them. As a remedy, Cohen (1988) introduced the effect size f². Values of f² above 0.35, 0.15, and 0.02 can be regarded as strong, moderate, and weak, respectively (Cohen 1988).
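
For a focal predictor, f² compares the coefficients of determination of the model with and without that predictor:

$$f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}$$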

In addition to the direct effects, variance-based SEM can derive estimates for indirect effects as the departure point for the analysis of mediation (Nitzl, Roldán, and Cepeda Carrión 2016; Zhao, Lynch, and Chen 2010). The sum of the direct effect and the indirect effect(s) between two constructs is called the total effect. It is regarded as particularly useful for success factor analysis (Albers 2010).

If a researcher aims to conduct predictive research, the results need to be assessed and reported accordingly. Additional desirable assessments of predictive validity are the use of holdout samples (e.g., Cepeda Carrión et al. 2016) or the triangulation of results using different samples (e.g., Lancelot-Miltgen et al. 2016).

EXAMPLE

To demonstrate the application of variance-based SEM in an advertising research setting, the empirical data Yoo, Donthu, and Lee (2000) reported serve as a showcase. Their empirical study of 569 individuals aims to explore the relationships between selected marketing mix elements and the creation of brand equity. Figure 5 depicts the conceptual model. The data contain indicators of relevant advertising constructs, and the correlation matrix is publicly available, which permits readers to completely replicate the reported analyses.2

FIG. 5. Example model.

The analyses entail CFA, SEM, and confirmatory composite analysis. CFA answers the questions of whether there is evidence of the existence of nine latent variables and whether it is possible to measure them validly. SEM allows one to say something about the causal relationships among these latent variables. Confirmatory composite analysis helps answer an additional research question (not asked by Yoo, Donthu, and Lee 2000), namely whether it makes sense to create a brand equity construct as a weighted sum of its elements: perceived brand quality, brand loyalty, and brand awareness or associations.

Table 2 contrasts the original values Yoo, Donthu, and Lee (2000) reported with estimates obtained with covariance-based and variance-based SEM. Yoo, Donthu, and Lee used the maximum likelihood (ML) estimator as implemented in LISREL 8. The reanalysis makes use of three established estimators of covariance-based SEM as implemented in the R (R Core Team 2014) package lavaan (Rosseel 2012): Besides ML, these are generalized least squares (GLS) and unweighted least squares (ULS). Potential deviations between the original values and the ML values are attributable to rounding errors (Yoo, Donthu, and Lee reported the correlation matrix with only two digits) and differences in the software used. The far-right column of Table 2 contains the results from consistent PLS as implemented in ADANCO 2.0 (Henseler and Dijkstra 2015), currently the only variance-based SEM technique that yields consistent estimates for factor models.
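
As an illustration of this reanalysis workflow, consider the following minimal lavaan sketch; the model syntax and the correlation matrix R are hypothetical placeholders, not the full measurement model of Yoo, Donthu, and Lee (2000).

```r
# Minimal sketch: re-estimating a CFA from a published correlation matrix
# with lavaan's ML, GLS, and ULS estimators (model and matrix R hypothetical).
library(lavaan)

model <- '
  quality =~ pq1 + pq2
  loyalty =~ bl1 + bl2 + bl3
'
fits <- lapply(c("ML", "GLS", "ULS"), function(est)
  cfa(model, sample.cov = R, sample.nobs = 569, estimator = est))
sapply(fits, fitMeasures, fit.measures = "srmr")  # compare SRMR across estimators
```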

TABLE 2 Comparison of Results for the Example Model

Table 2 reports the outcomes from two separate analyses. The SRMR and the standardized loadings stem from a CFA, whereas the standardized path coefficients stem from SEM. The structural equation model is nested in the confirmatory factor model and has nine additional degrees of freedom (500 versus 491 df).

Overall, the results obtained from variance-based SEM strongly resemble those of covariance-based SEM using the ULS estimator. Tenenhaus (2008) has already reported similar findings. The use of confirmatory composite analysis can be illustrated using the same data set, but for a model on a higher level of abstraction. Concretely, one might ask whether it makes sense to model brand equity as a second-order construct. According to Aaker's (1991) conceptualization, brand equity is composed of perceived quality, brand loyalty, and brand awareness or associations, and is therefore regarded as a composite of latent variables. Van Riel et al. (forthcoming) have shown that variance-based SEM can be used to test and consistently estimate second-order constructs in the form of composites of factors.

Figure 6 depicts two competing models, which represent two different understandings of the brand equity concept. The model on the left understands brand equity as an umbrella term that groups three individual variables: perceived quality, brand loyalty, and brand awareness or associations. In contrast, the model on the right regards brand equity as a composite of these three factors. The second model has fewer parameters than the first and, equivalently, imposes more restrictions on the implied correlation matrix.

FIG. 6. Competing models of confirmatory composite analysis.

A confirmatory composite analysis for the model on the left yields an SRMR of 0.004 and a corresponding HI95 value of 0.013. Because the SRMR falls below the 95% quantile of its bootstrap distribution (HI95), one would obtain a discrepancy higher than 0.004 in more than 5% of the cases if the model were correct. There is therefore no reason to reject this model. In contrast, the model on the right yields an SRMR of 0.073, which clearly exceeds its HI95 value of 0.019; it is thus very unlikely that the empirical data stem from a world that functions as theorized by the model. Consequently, one should reject this model. This implies that for the empirical study of Yoo, Donthu, and Lee (2000), there is no added value in regarding brand equity as a composite of perceived quality, brand loyalty, and brand awareness or associations.

DISCUSSION

Empirical advertising research—similar to probably any other type of empirical research—strives for a logical fit between research goals and statistical techniques. Rigdon (2014b) comments that “[w]e have seen a long period where our choice of statistical tools has shaped our research goals […] In future, we need to have our choice of goals shaping our tools” (p. 166).

Combining design and behavioral research, empirical advertising research poses special challenges to statistical tools. It requires an SEM technique that can handle both composites (as the dominant model for design constructs) and factors (as the dominant model for latent variables of behavioral research). Variance-based SEM is a family of techniques that fulfill this requirement.

Researchers often call for more rigor when applying variance-based SEM techniques (e.g., Rigdon et al. 2014). For this purpose, methodological research has presented a wide range of extensions that enable researchers and practitioners to adequately use variance-based SEM for the purpose of their study. These advances include consistent estimates for factor models (Dijkstra 2014; Dijkstra and Henseler 2015a, 2015b), the confirmatory tetrad analysis to test the kind of measurement model and construct (Gudergan et al. 2008), the heterotrait-monotrait ratio of correlations (HTMT) to assess discriminant validity (Henseler, Ringle and Sarstedt 2015), different multigroup analysis approaches (Chin and Dibbern 2010; Sarstedt, Henseler, and Ringle 2011), testing measurement invariance of composites (Henseler, Ringle, and Sarstedt 2016), as well as bootstrap-based tests of overall model fit (Dijkstra and Henseler 2015a). All these changes culminate in revised guidelines for a confirmatory research use of variance-based SEM (Henseler, Hubona, and Ray 2016).

Although empirical advertising research is focused on developing and testing theories, it is sometimes also about prediction (Gardner 1984). Orientation toward prediction has been one of the key building blocks of variance-based SEM and its most emphasized characteristic since its creation (Jöreskog and Wold 1982; Wold 1985). Recent conceptual (Chin 2010; Sarstedt et al. 2014) and empirical studies (Becker, Rai, and Rigdon 2013; Evermann and Tate 2012) substantiate the suitability of variance-based SEM for predictive purposes.

Future research should equip researchers with the tools and criteria they need to exploit variance-based SEM's capabilities for predictive modeling (Shmueli 2010). The first advances in this direction have recently been presented by Cepeda Carrión et al. (2016), Evermann and Tate (2016), and Shmueli et al. (2016). Building on these advances, one can expect further methodological development that exploits variance-based SEM's predictive capabilities, as well as a more intensive use of predictive modeling in advertising research and other business and social science disciplines.

ACKNOWLEDGMENTS

The author thanks the editor, three anonymous reviewers, Gabriel Cepeda Carrión, Marko Sarstedt, Christian M. Ringle, and José Luis Roldán for helpful comments. The author acknowledges a financial interest in ADANCO and its distributor, Composite Modeling.

NOTES

1. I thank an anonymous reviewer for pointing out this possible source of confounding.

2. I thank Boonghee Yoo, Naveen Donthu, and Sungho Lee for permission to use their data in the example.

REFERENCES

  • Aaker, David A. (1991), Managing Brand Equity: Capitalizing on the Value of a Brand Name, New York: Free Press. [Google Scholar]
  • Aguirre-Urreta, Miguel I., and George M. Marakas (2013), “Research Note: Partial Least Squares and Models with Formatively Specified Endogenous Constructs: A Cautionary Note,” Information Systems Research, 25 (4), 76178. [Crossref], [Web of Science ®][Google Scholar]
  • Albers, Sönke (2010), “PLS and Success Factor Studies in Marketing,” in Handbook of Partial Least Squares, Vincenzo Esposito Vinzi, Wynne W. Chin, Jörg Henseler, and Huiwen Wang, eds., Berlin: Springer, 40925. [Crossref][Google Scholar]
  • Anderson, James C., and David W. Gerbing (1988), “Structural Equation Modeling in Practice: A Review and Recommended Two-Step Approach,” Psychological Bulletin, 103 (3), 41123. [Crossref], [Web of Science ®][Google Scholar]
  • ———, and ——— (1991), “Predicting the Performance of Measures in a Confirmatory Factor Analysis with a Pretest Assessment of Their Substantive Validities,” Journal of Applied Psychology, 76 (5), 73240. [Crossref], [Web of Science ®][Google Scholar]
  • Andrews, J. Craig, Srinivas Durvasula, and Syed H. Akhter (1990), “A Framework for Conceptualizing and Measuring the Involvement Construct in Advertising Research,” Journal of Advertising, 19 (4), 2740. [Taylor & Francis Online], [Web of Science ®][Google Scholar]
  • Bagozzi, Richard P. (1980), Causal Models in Marketing, New York: Wiley. [Google Scholar]
  • Becker, Jan-Michael, Arun Rai, and Edward E. Rigdon (2013), “Predictive Validity and Formative Measurement in Structural Equation Modeling: Embracing Practical Relevance,” paper presented at the 2013 International Conference on Information Systems, Milan, Italy, December. [Google Scholar]
  • Bollen, Kenneth A. (1989), Structural Equations with Latent Variables, New York: Wiley. [Crossref][Google Scholar]
  • ———, and Adamantios Diamantopoulos (2015), “In Defense of Causal–Formative Indicators: A Minority Report,” Psychological Methods, published electronically September 21, doi:10.1037/met0000056 [Crossref][Google Scholar]
  • Cadogan, John W., and Nicholas Lee (2013), “Improper Use of Formative Endogenous Variables,” Journal of Business Research, 66 (2), 23341. [Crossref], [Web of Science ®][Google Scholar]
  • Carlson, Les (2015), “The Journal of Advertising: Historical, Structural, and Brand Equity Considerations,” Journal of Advertising, 44 (1), 8488. [Taylor & Francis Online], [Web of Science ®][Google Scholar]
  • Cepeda Carrión, Gabriel, Jörg Henseler, Christian M. Ringle, and José L. Roldán (2016), “Prediction-Oriented Modeling in Business Research by Means of PLS Path Modeling,” Journal of Business Research, 69 (10), 454551. [Crossref], [Web of Science ®][Google Scholar]
  • Chin, Wynne W. (1998), “The Partial Least Squares Approach for Structural Equation Modeling,” in Modern Methods for Business Research, G.A. Marcoulides, ed., London: Erlbaum, 295336. [Google Scholar]
  • ——— (2010), “Bootstrap Cross-Validation Indices for PLS Path Model Assessment,” in Handbook of Partial Least Squares, Vincenzo Esposito Vinzi, Wynne W. Chin, Jörg Henseler, and Huiwen Wang, eds., Berlin: Springer, 8397. [Crossref][Google Scholar]
  • ———, and Jens Dibbern (2010), “A Permutation-Based Procedure for Multi-Group PLS Analysis: Results of Tests of Differences on Simulated Data and a Cross-Cultural Analysis of the Sourcing of Information System Services between Germany and the USA,” in Handbook of Partial Least Squares, Vincenzo Esposito Vinzi, Wynne W. Chin, Jörg Henseler, and Huiwen Wang, eds., Berlin: Springer, 17193. [Google Scholar]
  • Cohen, Jacob (1988), Statistical Power Analysis for the Behavioral Sciences, Mahwah, NJ: Erlbaum. [Google Scholar]
  • Cronbach, Lee J., and Paul E. Meehl (1955), “Construct Validity in Psychological Tests,” Psychological Bulletin, 52 (4), 281302. [Crossref], [PubMed], [Web of Science ®][Google Scholar]
  • Diamantopoulos, Adamantios (2006), “The Error Term in Formative Measurement Models: Interpretation and Modeling Implications,” Journal of Modelling in Management, 1 (1), 717. [Crossref][Google Scholar]
  • ———, and Heidi M. Winklhofer (2001), “Index Construction with Formative Indicators: An Alternative to Scale Development,” Journal of Marketing Research, 38 (2), 26977. [Crossref], [Web of Science ®][Google Scholar]
  • Dijkstra, Theo K. (2014), “PLS' Janus Face: Response to Professor Rigdon's ‘Rethinking Partial Least Squares Modeling: In Praise of Simple Methods,’Long Range Planning, 47 (3), 14653. [Crossref], [Web of Science ®][Google Scholar]
  • ———, and Jörg Henseler (2011), “Linear Indices in Nonlinear Structural Equation Models: Best Fitting Proper Indices and Other Composites,” Quality and Quantity, 45 (6), 150518. [Crossref], [Web of Science ®][Google Scholar]
  • ———, and ——— (2015a), “Consistent and Asymptotically Normal PLS Estimators for Linear Structural Equations,” Computational Statistics and Data Analysis, 81 (1), 1023. [Google Scholar]
  • ———, and ——— (2015b), “Consistent Partial Least Squares Path Modeling,” MIS Quarterly, 39 (2), 297316. [Crossref], [Web of Science ®][Google Scholar]
  • Edwards, Jeffrey R., and Richard P. Bagozzi (2000), “On the Nature and Direction of Relationships Between Constructs and Measures,” Psychological Methods, 5 (2), 15574. [Crossref], [PubMed], [Web of Science ®][Google Scholar]
  • Evermann, Joerg, and Mary Tate (2012), “Comparing the Predictive Ability of PLS and Covariance Analysis,” paper presented at the 2012 International Conference on Information Systems, Orlando, FL, December. [Google Scholar]
  • ———, and ——— (2016), “Assessing the Predictive Performance of Structural Equation Model Estimators,” Journal of Business Research, 69 (10), 456582. [Crossref], [Web of Science ®][Google Scholar]
  • Färe, Rolf, Shawna Grosskopf, Barry J. Seldon, and Victor J. Tremblay (2004), “Advertising Efficiency and the Choice of Media Mix: A Case of Beer,” International Journal of Industrial Organization, 22 (4), 50322. [Crossref], [Web of Science ®][Google Scholar]
  • Fassott, Georg, and Jörg Henseler (2015), “Formative (Measurement),” in Wiley Encyclopedia of Management, Vol. 9, Marketing, Cary Cooper, Nick Lee, and Andrew Farrell, eds., Chichester: Wiley, 14. [Crossref][Google Scholar]
  • ———, ———, and Pedro S. Coelho (2016), “Testing Moderating Effects in PLS Path Models with Composite Variables,” Industrial Management and Data Systems, 116 (9), 18871900. [Crossref], [Web of Science ®][Google Scholar]
  • Fornell, Claes, and David F. Larcker (1981), “Evaluating Structural Equation Models with Unobservable Variables and Measurement Error,” Journal of Marketing Research, 18 (1), 3950. [Crossref], [Web of Science ®][Google Scholar]
  • Gardner, Burleigh B. (1984), “Research, Measurement, and Prediction,” Journal of Advertising Research, 24 (4), 1618. [Web of Science ®][Google Scholar]
  • Gefen, David, Detmar W. Straub, and Edward E. Rigdon (2011), “An Update and Extension to SEM Guidelines for Administrative and Social Science Research,” MIS Quarterly, 35 (2), iiixiv. [Crossref], [Web of Science ®][Google Scholar]
  • Gudergan, Siegfried P., Christian M. Ringle, Sven Wende, and Alexander Will (2008), “Confirmatory Tetrad Analysis in PLS Path Modeling,” Journal of Business Research, 61 (12), 123849. [Crossref], [Web of Science ®][Google Scholar]
  • Hair, Joseph F., Jr., Barry J. Babin, and Nina Krey (2017), “An Overview of the Use of SEM of Covariance in The Journal of Advertising,” Journal of Advertising, 46 (1), XXX–XXX. [Google Scholar]
  • Henseler, Jörg (2012), “Why Generalized Structured Component Analysis Is Not Universally Preferable to Structural Equation Modeling,” Journal of the Academy of Marketing Science, 40 (3), 40213. [Crossref], [Web of Science ®][Google Scholar]
  • ———, and Wynne W. Chin (2010), “A Comparison of Approaches for the Analysis of Interaction Effects between Latent Variables Using Partial Least Squares Path Modeling,” Structural Equation Modeling: An Interdisciplinary Journal, 17 (1), 82109. [Taylor & Francis Online], [Web of Science ®][Google Scholar]
  • ———, and Theo K. Dijkstra (2015), ADANCO 2.0 [Software], Kleve, Germany: Composite Modeling. [Google Scholar]
  • ———, ———, Marko Sarstedt, Christian M. Ringle, Adamantios Diamantopoulos, Detmar W. Straub, David J. Ketchen, Joseph F. Hair, Jr., G. Tomas M. Hult, and Roger J. Calantone (2014), “Common Beliefs and Reality about PLS: Comments on Rönkkö and Evermann (2013),” Organizational Research Methods, 17 (2), 182209. [Crossref], [Web of Science ®][Google Scholar]
  • ———, and Georg Fassott (2010), “Testing Moderating Effects in PLS Path Models: An Illustration of Available Procedures,” in Handbook of Partial Least Squares, Vincenzo Esposito Vinzi, Wynne W. Chin, Jörg Henseler, and Huiwen Wang, eds., Berlin: Springer, 71335. [Google Scholar]
  • ———, ———, Theo K. Dijkstra, and Bradley Wilson (2012), “Analysing Quadratic Effects of Formative Constructs by Means of Variance-Based Structural Equation Modelling,” European Journal of Information Systems, 21 (1), 99112. [Crossref], [Web of Science ®][Google Scholar]
  • ———, Geoffrey Hubona, and Pauline Ash Ray (2016), “Using PLS Path Modeling in New Technology Research: Updated Guidelines,” Industrial Management and Data Systems, 116 (1), 1–19.
  • ———, Christian M. Ringle, and Marko Sarstedt (2012), “Using Partial Least Squares Path Modeling in International Advertising Research: Basic Concepts and Recent Issues,” in Handbook of Research in International Advertising, Shintaro Okazaki, ed., Cheltenham: Edward Elgar, 252–76.
  • ———, ———, and ——— (2015), “A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling,” Journal of the Academy of Marketing Science, 43 (1), 115–35.
  • ———, ———, and ——— (2016), “Testing Measurement Invariance of Composites Using Partial Least Squares,” International Marketing Review, 33 (3), 1–27.
  • ———, ———, and Rudolf R. Sinkovics (2009), “The Use of Partial Least Squares Path Modeling in International Marketing,” in Advances in International Marketing, Rudolf R. Sinkovics and Pervez N. Ghauri, eds., Bingley: Emerald, 277–320.
  • Hu, Li-Tze, and Peter M. Bentler (1999), “Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives,” Structural Equation Modeling: An Interdisciplinary Journal, 6 (1), 1–55.
  • Hwang, Heungsun, and Yoshio Takane (2004), “Generalized Structured Component Analysis,” Psychometrika, 69 (1), 81–99.
  • Jarvis, Cheryl Burke, Scott B. MacKenzie, and Philip M. Podsakoff (2003), “A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research,” Journal of Consumer Research, 30 (2), 199–218.
  • Jensen, Morten B. (2008), “Online Marketing Communication Potential: Priorities in Danish Firms and Advertising Agencies,” European Journal of Marketing, 42 (3/4), 502–25.
  • Jöreskog, Karl G. (1978), “Structural Analysis of Covariance and Correlation Matrices,” Psychometrika, 43 (4), 443–77.
  • ———, and Dag Sörbom (1982), “Recent Developments in Structural Equation Modeling,” Journal of Marketing Research, 19 (4), 404–16.
  • ———, and ——— (1993), LISREL 8: User's Guide, Chicago: Scientific Software International.
  • ———, and Herman O.A. Wold (1982), “The ML and PLS Techniques for Modeling with Latent Variables: Historical and Comparative Aspects,” in Systems under Indirect Observation, Part I, Herman O.A. Wold and Karl G. Jöreskog, eds., Amsterdam: North-Holland, 263–70.
  • Kettenring, Jon R. (1971), “Canonical Analysis of Several Sets of Variables,” Biometrika, 58 (3), 433–51.
  • Lancelot-Miltgen, Caroline, Jörg Henseler, Carsten Gelhard, and Aleš Popovič (2016), “Introducing New Products That Affect Consumer Privacy: A Mediation Model,” Journal of Business Research, 69 (10), 4659–66.
  • Lohmöller, Jan-Bernd (1989), Latent Variable Path Modeling with Partial Least Squares, Heidelberg: Physica.
  • Luxton, Sandra, Mike Reid, and Felix Mavondo (2015), “Integrated Marketing Communication Capability and Brand Performance,” Journal of Advertising, 44 (1), 37–46.
  • McDonald, Roderick P. (1996), “Path Analysis with Composite Variables,” Multivariate Behavioral Research, 31 (2), 239–70.
  • Markus, Keith A., and Denny Borsboom (2013), Frontiers of Test Validity Theory: Measurement, Causation, and Meaning, New York: Routledge.
  • Mosier, Charles I. (1943), “On the Reliability of a Weighted Composite,” Psychometrika, 8 (3), 161–68.
  • Naik, Prasad A., and Kalyan Raman (2003), “Understanding the Impact of Synergy in Multimedia Communications,” Journal of Marketing Research, 40 (4), 375–88.
  • Nelson, Harold G., and Erik Stolterman (2003), The Design Way: Intentional Change in an Unpredictable World, Englewood Cliffs, NJ: Educational Technology.
  • Nitzl, Christian, José Luis Roldán, and Gabriel Cepeda Carrión (2016), “Mediation Analysis in Partial Least Squares Modeling: Helping Researchers Discuss More Sophisticated Models,” Industrial Management and Data Systems, 116 (9), 1849–64.
  • Nunnally, Jum C., and Ira H. Bernstein (1994), Psychometric Theory, 3rd ed., New York: McGraw-Hill.
  • O'Cass, Aron (2002), “Political Advertising Believability and Information Source Value during Elections,” Journal of Advertising, 31 (1), 63–74.
  • Okazaki, Shintaro, Hairong Li, and Morikazu Hirose (2009), “Consumer Privacy Concerns and Preference for Degree of Regulatory Control: A Study of Mobile Advertising in Japan,” Journal of Advertising, 38 (4), 63–77.
  • ———, Barbara Mueller, and Charles R. Taylor (2010a), “Global Consumer Culture Positioning: Testing Perceptions of Soft-Sell and Hard-Sell Advertising Appeals between U.S. and Japanese Consumers,” Journal of International Marketing, 18 (2), 20–34.
  • ———, ———, and ——— (2010b), “Measuring Soft-Sell versus Hard-Sell Advertising Appeals,” Journal of Advertising, 39 (2), 5–20.
  • R Core Team (2014), R: A Language and Environment for Statistical Computing, Vienna, Austria: R Foundation for Statistical Computing.
  • Raykov, Tenko (1997), “Estimation of Composite Reliability for Congeneric Measures,” Applied Psychological Measurement, 21 (2), 173–84.
  • Reid, Leonard N. (2014), “Green Grass, High Cotton: Reflections on the Evolution of The Journal of Advertising,” Journal of Advertising, 43 (4), 410–16.
  • Reinartz, Werner J., Michael Haenlein, and Jörg Henseler (2009), “An Empirical Comparison of the Efficacy of Covariance-Based and Variance-Based SEM,” International Journal of Research in Marketing, 26 (4), 332–44.
  • Reynar, Angela, Jodi Phillips, and Simona Heumann (2010), “New Technologies Drive CPG Media Mix Optimization,” Journal of Advertising Research, 50 (4), 416–27.
  • Rigdon, Edward E. (1998), “Structural Equation Modeling,” in Modern Methods for Business Research, George A. Marcoulides, ed., Mahwah, NJ: Erlbaum, 251–94.
  • ——— (2014a), “Comment on ‘Improper Use of Endogenous Formative Variables,’” Journal of Business Research, 67 (1), 2800–802.
  • ——— (2014b), “Rethinking Partial Least Squares Path Modeling: Breaking Chains and Forging Ahead,” Long Range Planning, 47 (3), 161–67.
  • ———, Jan-Michael Becker, Arun Rai, Christian M. Ringle, Adamantios Diamantopoulos, Elena Karahanna, Detmar Straub, and Theo K. Dijkstra (2014), “Conflating Antecedents and Formative Indicators: A Comment on Aguirre-Urreta and Marakas,” Information Systems Research, 25 (4), 780–84.
  • Rosseel, Yves (2012), “Lavaan: An R Package for Structural Equation Modeling,” Journal of Statistical Software, 48 (2), 1–36.
  • Sahmer, Karin, Mohamed Hanafi, and Mostafa El Qannari (2006), “Assessing Unidimensionality within the PLS Path Modeling Framework,” in From Data and Information Analysis to Knowledge Engineering, M. Spiliopoulou, R. Kruse, C. Borgelt, A. Nürnberger, and W. Gaul, eds., Berlin: Springer, 222–29.
  • San José-Cabezudo, Rebeca, and Carmen Camarero-Izquierdo (2012), “Determinants of Opening-Forwarding E-Mail Messages,” Journal of Advertising, 41 (2), 97–112.
  • Sarstedt, Marko, Jörg Henseler, and Christian M. Ringle (2011), “Multi-Group Analysis in Partial Least Squares (PLS) Path Modeling: Alternative Methods and Empirical Results,” in Advances in International Marketing, Vol. 22, Marko Sarstedt, Manfred Schwaiger, and Charles R. Taylor, eds., Bingley: Emerald, 195–218.
  • ———, Christian M. Ringle, Jörg Henseler, and Joseph F. Hair (2014), “On the Emancipation of PLS-SEM: A Commentary on Rigdon (2012),” Long Range Planning, 47 (3), 154–60.
  • Shmueli, Galit (2010), “To Explain or to Predict?” Statistical Science, 25 (3), 289–310.
  • ———, Soumya Ray, Juan Manuel Velasquez Estrada, and Suneel Babu Chatla (2016), “The Elephant in the Room: Evaluating the Predictive Performance of PLS Models,” Journal of Business Research, 69 (10), 4552–64.
  • Simon, Herbert (1969), The Sciences of the Artificial, Cambridge, MA: MIT Press.
  • Steenkamp, Jan-Benedict E.M., and Hans Baumgartner (2000), “On the Use of Structural Equation Models for Marketing Modeling,” International Journal of Research in Marketing, 17 (2/3), 195–202.
  • Tenenhaus, Arthur, and Michel Tenenhaus (2011), “Regularized Generalized Canonical Correlation Analysis,” Psychometrika, 76 (2), 257–84.
  • Tenenhaus, Michel (2008), “Component-Based Structural Equation Modelling,” Total Quality Management, 19 (7–8), 871–86.
  • van Riel, Allard C.R., Jörg Henseler, Ildikó Kemény, and Zuzana Sasovova (forthcoming), “Estimating Hierarchical Constructs Using Consistent Partial Least Squares: The Case of Second-Order Composites of Common Factors,” Industrial Management and Data Systems, 117 (1).
  • Voorhees, Clay M., Michael K. Brady, Roger Calantone, and Edward Ramirez (2016), “Discriminant Validity Testing in Marketing: An Analysis, Causes for Concern, and Proposed Remedies,” Journal of the Academy of Marketing Science, 44 (1), 119–34.
  • Voorveld, Hilde A.M., Peter C. Neijens, and Edith G. Smit (2010), “The Perceived Interactivity of Top Global Brand Websites and its Determinants,” in Advances in Advertising Research, Vol. 1, Ralf Terlutter, Sandra Diehl, and Shintaro Okazaki, eds., Wiesbaden: Gabler, 217–33.
  • Werts, Charles E., Donald R. Rock, Robert L. Linn, and Karl G. Jöreskog (1978), “A General Method of Estimating the Reliability of a Construct,” Educational and Psychological Measurement, 38 (1), 933–38.
  • Wold, Herman O.A. (1985), “Partial Least Squares,” in Encyclopedia of Statistical Sciences, Samuel Kotz and Norman L. Johnson, eds., New York: Wiley, 581–91.
  • Yoo, Boonghee, Naveen Donthu, and Sungho Lee (2000), “An Examination of Selected Marketing Mix Elements and Brand Equity,” Journal of the Academy of Marketing Science, 28 (2), 195–211.
  • Zhang, Xijuan, and Victoria Savalei (2016), “Bootstrapping Confidence Intervals for Fit Indexes in Structural Equation Modeling,” Structural Equation Modeling: A Multidisciplinary Journal, 23 (3), 392–408.
  • Zhao, Xinshu, John G. Lynch, and Qimei Chen (2010), “Reconsidering Baron and Kenny: Myths and Truths about Mediation Analysis,” Journal of Consumer Research, 37 (2), 197–206.