Adaptive Multivariate Global Testing

We present a methodology for dealing with recent challenges in testing global hypotheses using multivariate observations. The proposed tests target situations, often arising in emerging applications of neuroimaging, where the sample size n is relatively small compared with the observations’ dimension K. We employ adaptive designs allowing for sequential modifications of the test statistics adapting to accumulated data. The adaptations are optimal in the sense of maximizing the predictive power of the test at each interim analysis while still controlling the Type I error. Optimality is obtained by a general result applicable to typical adaptive design settings. Further, we prove that the potentially high-dimensional design space of the tests can be reduced to a low-dimensional projection space enabling us to perform simpler power analysis studies, including comparisons to alternative tests. We illustrate the substantial improvement in efficiency that the proposed tests can make over standard tests, especially in the case of n smaller or slightly larger than K. The methods are also studied empirically using both simulated data and data from an EEG study, where the use of prior knowledge substantially increases the power of the test. Supplementary materials for this article are available online.


INTRODUCTION
In this work, we develop novel methodology for dealing with recent challenges in testing global hypotheses using multivariate observations. The classical approach for studying the problem, Hotelling's T²-test (Hotelling 1931), can efficiently detect effects in every direction of the multivariate space when the sample size n is sufficiently large. However, in settings where n approaches or becomes smaller than the observation dimension K, the T²-test becomes, respectively, inefficient and inapplicable. This cost in efficiency, paid due to the need to search in every direction of the alternative space, seems particularly wasteful (but avoidable) if prior knowledge about the direction of the effect is available. Motivated by the latter settings, often arising in the increasingly important field of neuroimaging, we develop tests which are powerful in studies with n ≫ K, but can also be efficient in situations where n is close to or smaller than K.
The proposed tests employ adaptive designs allowing for sequential modifications of the test statistic based on accumulated data. Such adaptive designs have straightforward but not exclusive application in clinical trials. A large literature on the subject (e.g., Bauer and Köhne 1994; Proschan and Hunsberger 1995; Lehmacher and Wassmer 1999; Müller and Schäfer 2001; Brannath, Posch, and Bauer 2002; Liu, Proschan, and Pledger 2002; Brannath, Gutjahr, and Bauer 2012) deals with the derivation of flexible procedures that allow for adaptations of the initial design without inflation of the Type I error rate. Some sequential designs (e.g., Denne and Jennison 2000) also permit design adaptations, but the latter need to be preplanned and independent of the interim test statistics. Adaptive designs are employed for many kinds of adaptations including sample size recalculation (Lehmacher and Wassmer 1999; Mehta and Pocock 2011), treatment or hypothesis selection (Kimani, Stallard, and Hutton 2009), and sample allocation to treatments (Zhu and Hu 2010). Despite the fact that many authors have stressed the potential for test statistic adaptation (e.g., Bauer and Köhne 1994; Bretz et al. 2009), there are only a few papers on the subject (Lang, Auterith, and Bauer 2000; Kieser, Schneider, and Friede 2002). Furthermore, various approaches for adaptive designs in multiple testing are available (see Bretz et al. 2009). These methods can efficiently detect few independently significant outcomes. However, it is well known that standard multiple testing methods (e.g., Bonferroni and Simes tests) become conservative and inefficient in settings, such as the typical neuroimaging studies, where strong dependencies and a large number of outcomes are present (D'Agostino and Russell 2005).
Similarly to the tests developed by O'Brien (1984), Läuter, Glimm, and Kropf (1998), and Minas et al. (2012), the proposed tests are based on linear combinations of the observation vectors. The crucial element in this approach is the weighting vector reducing the observation vectors to the scalar linear combinations. This defines the direction in which we decide to search for effects, and it can substantially affect both the Type I and Type II error rates of the tests. O'Brien proposed deriving the weighting vectors under the assumption of uniform mean structure, while Läuter et al. showed that if the weighting vector is derived from the observation sums of products matrix, the Type I error is controlled and high power is attained under certain factorial structures. On the other hand, the tests in Minas et al. (2012) can attain high power levels independently of the mean and covariance structure, but a part of the sample is used in a separate pilot study to learn the weighting vector.
In this work, linear combination test statistics, initially constructed using weighting vectors derived from prior information, are sequentially updated based on observed data at subsequent interim analyses in an adaptive design. Early termination of the study (due to early acceptance or rejection of the null hypothesis at an interim analysis), which is often of interest, especially in clinical trials, is also possible within our approach. Our methods provide a formal framework for optimally using prior information in constructing test statistics, as has been suggested, but not implemented, in earlier papers (Pocock, Geller, and Tsiatis 1987; Tang, Gnecco, and Geller 1989a; Läuter, Glimm, and Kropf 1996).
While our tests maintain the two prime targets of adaptive designs, namely flexibility and Type I error control (Brannath et al. 2012), we also focus on attaining power optimality. Specifically, we employ the methods proposed by Spiegelhalter, Abrams, and Myles (2002) to derive optimal tests maximizing the predictive power of the test at each interim analysis. The methods of proof can be useful in deriving optimal adaptive designs in more general settings. As we illustrate in Section 3, the results of Theorem 3.1 could be used, for example, to derive optimal designs for regression analysis.
The power performance of a multivariate test, lying in a possibly high-dimensional design space, can be hard to illustrate and interpret. Therefore, power analysis of multivariate tests is typically restricted to a limited part of the design space. We tackle this problem by reexpressing the O(K²)-dimensional design space as a lower dimensional, easily interpretable space that is still sufficient to determine power. The crucial step here is to identify a measure quantifying the angular distance between the selected weighting vector and the optimal weighting vector and to prove its sufficiency in computing power. These results provide a broad understanding of the behavior of linear combination tests and allow us to extend earlier work on power analysis of single stage (Pocock, Geller, and Tsiatis 1987; Follmann 1996; Logan and Tamhane 2004) and sequential (Tang, Gnecco, and Geller 1989b; Tang, Geller, and Pocock 1993) linear combination tests, beyond low-dimensional observations or specific mean and covariance structures.
We perform extensive simulation studies to explore and compare the proposed and alternative single stage and sequential procedures throughout the design space. We show that linear combination tests outperform Hotelling's T²-tests when the latter angular distance is below a certain value which, especially for sample sizes close to K, can be rather high. We further show that, in contrast to linear combination tests with fixed weighting vectors, such as O'Brien's OLS test, the adaptive linear combination tests can attain high power levels even in situations where the weighting vector selected at the planning stage is orthogonal to the true optimal vector (where, of course, a nonadaptive test would have zero power asymptotically). The advantages of the proposed tests are also illustrated through a real example taken from an EEG depression study (Läuter, Glimm, and Kropf 1996).
This article is organized as follows. In Section 2, we formulate the class of linear combination tests while in Section 3 we derive optimal, with respect to power, tests in this class. In Section 4, we present the results allowing us to characterize power based on low-dimensional summaries of the design parameters. In Section 5, we discuss the main results of extensive simulation studies performed using the latter results to explore power and compare the proposed tests with alternative global tests under various conditions, while in Section 6 we apply our procedures to an EEG depression study. Section 7 includes a short summary and discussion of the obtained results. Technical lemmas and proofs are provided in Supplementary Material A, while further illustrations of the simulation studies are provided in Supplementary Material B.

FORMULATION OF J-STAGE LINEAR COMBINATION TESTS
In the following, we formulate J-stage linear combination z and t-tests and define their error rate functions. We assume that the K-dimensional observation vectors Y_ij = (Y_ij1, ..., Y_ijK)^T of subjects i = 1, 2, ..., n_j, participating in stage j, j = 1, 2, ..., J, of the study, are independent and identically distributed Gaussian random vectors, (2.1), with mean μ = (μ_1, ..., μ_K)^T and positive definite covariance matrix Σ = (σ_kk′), k, k′ = 1, ..., K. In medical applications, the mean vector is often interpreted as the treatment effect. We wish to test the global null hypothesis of no treatment effect H0: μ = 0 = (0, 0, ..., 0)^T against the two-sided alternative H1: μ ≠ 0. Note that the methods which follow equally apply to the two-sample test with common covariance matrix, but we continue with the one-sample presentation to simplify notation.
The observation vectors Y_ij, i = 1, 2, ..., n_j, of the jth stage are projected onto the nonzero weighting vector w_j = (w_j1, w_j2, ..., w_jK)^T, and the projection magnitudes form the linear combinations L_ij = w_j^T Y_ij, i = 1, 2, ..., n_j, j = 1, 2, ..., J. The stagewise z and t statistics for testing H0 against H1 using the random sample of linear combinations L_ij, i = 1, ..., n_j, when Σ is either known or unknown, are given in (2.2). Here, σ_j² is the variance and L̄_j, s_j² are the sample mean and sample variance of the linear combination L_j, respectively. Under assumption (2.1), the stagewise z and t statistics Z_j, T_j, j = 1, 2, ..., J, are respectively normally and noncentrally t distributed, Z_j ∼ N(θ_j, 1) and T_j ∼ t_{ν_j}(θ_j), with location parameter θ_j as in (2.3) and ν_j = n_j − 1. Under H0, the z and t statistics are standard normal and Student's t random variables, that is, Z_j ∼ N(0, 1) and T_j ∼ t_{ν_j}. The two-sided stagewise p-values of the z and t-tests are, respectively, p_j^z = 2Φ(−|Z_j|) and p_j^t = 2Φ_{ν_j}(−|T_j|), where Φ(·) and Φ_{ν_j}(·) are the cumulative distribution functions of the standard normal and Student's t-distribution with ν_j degrees of freedom, respectively. At the jth analysis, j = 1, 2, ..., J, performed after the jth stage of the study, a combination function C(p_(j)) is used to combine the stagewise p-values p_(j) = (p_1, ..., p_j) of stages 1 to j (p_j either p_j^z or p_j^t). Rejection and acceptance critical values α_{1,j} and α_{0,j} (0 ≤ α_{1,j} ≤ α < α_{0,j} ≤ 1, j = 1, 2, ..., J) are used to decide whether to stop the study early and either reject or accept H0, respectively. Specifically, the J-stage sequential design has the form given in (2.4). Several combination functions have been proposed in the literature. Bauer and Köhne (1994) suggested the use of Fisher's product combination function, (2.5), while Lehmacher and Wassmer (1999) suggested the use of the inverse normal combination function.
These two combination functions are the most commonly used in the literature (Bretz et al. 2009). The formulation and results which follow use Fisher's product function in (2.5), but our results equally apply to other combination functions, including the inverse normal.
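As an illustration, the stagewise t statistic, its two-sided p-value, and Fisher's product combination can be sketched as follows (a minimal sketch in Python using numpy/scipy; the function names are ours, not from the paper):

```python
import numpy as np
from scipy import stats

def stage_t_pvalue(y, w):
    """Two-sided p-value of the stagewise t-test based on the linear
    combinations L_i = w^T Y_i of the n_j stage observations."""
    L = y @ w                                   # project observations onto w
    n = len(L)
    T = np.sqrt(n) * L.mean() / L.std(ddof=1)   # stagewise t statistic
    return 2 * stats.t.cdf(-abs(T), df=n - 1)   # p = 2 * Phi_nu(-|T|)

def fisher_combination(pvals):
    """Fisher's product combination function C(p) = p_1 * ... * p_J."""
    return float(np.prod(pvals))
```

Under H0, the stagewise p-values are independent uniforms, so −2 log C(p) follows a χ² distribution with 2J degrees of freedom, which is what yields the rejection boundaries of the combination test.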

OPTIMAL J-STAGE z AND t-TESTS
The crucial elements of these J-stage linear combination z and t-tests are the stagewise weighting vectors w_j. In this section, we develop a methodology for deriving these weighting vectors optimally. The next lemma is the first step toward computing the weighting vectors that maximize the power of the z and t-tests. Note that it can be straightforwardly shown that this result holds both for one-sided stagewise tests and for the inverse normal combination function. The proof of the lemma is surprisingly complex because, for some range of values of θ_j, an increase in |θ_j| decreases the probability of continuing to the next stage, and therefore the power contributed by the subsequent stages, β^(j+1) = Σ_{l=j+1}^J β_l, decreases. In Supplementary Material A, we prove that, even for this range of values of |θ_j|, the decrease (in absolute value) in β^(j+1) is bounded above by the increase in β_j.
The above result, besides being crucial for deriving Theorem 3.1, can also be useful in more general adaptive design settings. For example, Lemma 3.1 shows that if investigators wish to apply an adaptive z or t-test and are interested in maximizing its power, they only need to sequentially maximize the location parameters of the stagewise test statistics separately. For instance, suppose that one wishes to conduct an adaptive design study to explore the relationship between an observed variable Y and a set of covariates X described by Y_j = X_j b_j + e_j, e_j ∼ N_n(0, σ² I_n), j = 1, 2, ..., J, independent. Then our results show that, to maximize the power of the J-stage test whose stagewise statistics are the classical z and t statistics with respect to the experimental design, it is sufficient to maximize X_j^T X_j, j = 1, 2, ..., J, which agrees with standard practice in deriving optimal designs.
Considering the J-stage linear combination z and t-tests, Lemma 3.1 implies that to maximize the power of these tests with respect to the weighting vectors w j , it is sufficient to maximize the value of θ j , j = 1, 2, . . . , J . Using this result, we next derive the power-optimal weighting vector.
Theorem 3.1. Under (2.1), the power of the J-stage z and t-tests in (2.4) with combination function as in (2.5) is maximized with respect to the weighting vectors w_j, j = 1, 2, ..., J, if and only if the latter are proportional to ω* = Σ^{-1} μ. (3.1) The last result provides the optimal, in terms of power, weighting vector ω* for the J-stage linear combination tests. In Section 3.1, we show that ω*, which expresses the multivariate treatment effect standardized with respect to the covariance matrix Σ, is central in characterizing the power of these tests. However, this optimal vector ω* depends on the unknown parameters μ and Σ and is therefore also unknown. In the next section, we develop a methodology for selecting the weighting vectors w_j in practice. We propose using the information on μ and Σ available at each interim analysis to optimally select w_j, j = 1, 2, ..., J, where optimality is expressed here in terms of predictive power. The source of this information is the data collected from the stages completed before each interim analysis, but also prior information extracted from previous studies and expert clinical opinion. Predictive power allows the incorporation of this information into our procedures in a natural and plausible way. Note that, as we also explain in the next section, if Equation (2.7) is satisfied, the Type I error of these tests is controlled.
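Numerically, the claim of Theorem 3.1 can be checked by comparing the location parameter attained by ω* = Σ^{-1}μ with that of arbitrary weighting vectors (a hedged sketch; `location_theta` encodes θ = √n w^Tμ/(w^TΣw)^{1/2}, our reading of (2.3)):

```python
import numpy as np

def optimal_weighting(mu, Sigma):
    """Power-optimal weighting vector omega* = Sigma^{-1} mu (Theorem 3.1),
    defined only up to a scalar multiple."""
    return np.linalg.solve(Sigma, mu)

def location_theta(w, mu, Sigma, n=1):
    """Location parameter theta = sqrt(n) * w^T mu / (w^T Sigma w)^{1/2}."""
    return np.sqrt(n) * (w @ mu) / np.sqrt(w @ Sigma @ w)
```

At w = ω*, θ equals √n times the Mahalanobis distance (μ^T Σ^{-1} μ)^{1/2}; by the Cauchy–Schwarz inequality no other direction can do better.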

The Proposed z* and t*-Tests
Prior information, I_0, is used to inform standard conjugate multivariate priors for the observation mean and covariance matrix. We use the Gaussian-inverse-Wishart prior, (3.2), where m_0 represents a prior estimate of μ and n_0 corresponds to the number of observations on which this prior estimate is based, while ν_0 and S_0 respectively represent the degrees of freedom and the (positive definite) scale matrix of the inverse-Wishart prior.
Under this standard Bayesian model (see Gelman et al. 2004), the posterior distribution of μ and Σ given the information set I_j = {I_0, y^(j)}, consisting of the prior information I_0 and the data y^(j) collected up to the jth interim analysis, is given in (3.3), where ν^(j) = n_0 + n^(j) − 1, with n^(j) = n_1 + n_2 + ... + n_j and ȳ^(j) = Σ_{l=1}^{j} Σ_{i=1}^{n_l} y_il / n^(j) respectively the sample size and sample mean of y^(j). Note that, due to the positive definiteness of the prior estimate S_0, the posterior estimates S_j are also positive definite. Positive definiteness of S_0 is required for our procedures to be applicable.
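A sketch of the standard Gaussian-inverse-Wishart conjugate updates, following Gelman et al. (2004); the exact scaling conventions of the paper's (3.3) may differ, so the update of S below is an assumption:

```python
import numpy as np

def niw_update(m0, S0, n0, y):
    """Conjugate update of the prior mean m0 and inverse-Wishart scale S0
    given an (n x K) data block y; n0 is the prior 'sample size'."""
    n = y.shape[0]
    ybar = y.mean(axis=0)
    m = (n0 * m0 + n * ybar) / (n0 + n)        # shrunk posterior mean estimate
    scatter = (y - ybar).T @ (y - ybar)        # within-block scatter matrix
    shift = (n0 * n / (n0 + n)) * np.outer(ybar - m0, ybar - m0)
    S = S0 + scatter + shift                   # posterior scale matrix
    return m, S, n0 + n
```

Because S0 is positive definite and scatter + shift is positive semidefinite, the updated S stays positive definite, in line with the remark above.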
We wish to use this information to select the weighting vectors w_j optimally. Optimality here is expressed in terms of the predictive power of the test. Predictive power (Spiegelhalter, Abrams, and Myles 2002) is derived in the present context by averaging the power of the J-stage z and t-tests over the distributions of the model parameters for a given information set. The predictive power for the first stage, given the prior information set I_0, is B_1 = Pr(p_1 < α_{1,1} | I_0), and for the jth stage, j = 2, 3, ..., J, given the information set I_{j−1}, it is given in (3.4). The next result presents the weighting vectors that we suggest using for the stagewise linear combination z and t-tests.
Theorem 3.2. Under (2.1) and (3.2), the jth stage predictive power B_j^{z}, j = 1, 2, ..., J, of the J-stage z-test in (3.4) is maximized with respect to the weighting vector w_j if and only if w_j is proportional to the vector in (3.5). Similarly, as we prove in Supplementary Material A, for n^(j−1) → ∞, the jth stage predictive power B_j^{t}, j = 1, 2, ..., J, of the J-stage t-test in (3.4) is maximized with respect to the weighting vector w_j if and only if w_j is proportional to the vector in (3.6), where m_j, S_j are as in (3.3). The proposed J-stage tests, henceforth called (adaptive) z* and t*-tests, proceed as follows: for the jth analysis, j = 1, 2, ..., J, (i) obtain w_j^{z*} or w_j^{t*} using (3.5) or (3.6), respectively; (ii) set w_j equal to w_j^{z*} or w_j^{t*} and compute the stage j statistic Z_j or T_j as in (2.2); (iii) calculate the stage j p-value, p_j^z = 2Φ(−|Z_j|) or p_j^t = 2Φ_{ν_j}(−|T_j|); (iv) use all the observed p-values to perform the combination test in (2.4).
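Steps (i)-(iv) can be sketched end-to-end for a simplified two-stage t*-test with no early stopping (α_{1,1} = 0, α_{0,1} = 1). Each stage's weighting vector uses only the prior and earlier stages, so it is fixed before that stage's data are seen; the particular updates of m and S, and the weighting rule S^{-1}m, are stand-ins for (3.3) and (3.6), not the paper's exact formulas:

```python
import numpy as np
from scipy import stats

def adaptive_t_star(stages, m0, S0, n0, alpha=0.05):
    """Sketch of a two-stage adaptive t*-test: the stage-j weighting vector is
    the current estimate S^{-1} m of Sigma^{-1} mu (stand-in for (3.6));
    stagewise p-values are combined with Fisher's product rule.
    'stages' is a list of (n_j x K) data arrays."""
    m, S, n_acc, pvals = np.asarray(m0, float), np.asarray(S0, float), n0, []
    for y in stages:
        w = np.linalg.solve(S, m)               # fixed before seeing this stage
        L = y @ w
        n = len(L)
        T = np.sqrt(n) * L.mean() / L.std(ddof=1)
        pvals.append(2 * stats.t.cdf(-abs(T), df=n - 1))
        ybar = y.mean(axis=0)                   # conjugate-style update (cf. (3.3))
        m = (n_acc * m + n * ybar) / (n_acc + n)
        S = S + (y - ybar).T @ (y - ybar)
        n_acc += n
    # Fisher: under H0, -2 * sum(log p_j) ~ chi^2 with 2J degrees of freedom
    stat = -2 * np.sum(np.log(pvals))
    p_comb = stats.chi2.sf(stat, df=2 * len(pvals))
    return p_comb < alpha, pvals
```

Because each w is a function of the prior and of previously observed stages only, the stagewise p-values remain valid under H0, illustrating why the Type I error of the adaptive tests is preserved.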
Importantly, the weighting vectors w_j^{z*} and w_j^{t*} are, given the prior information and the observed (if any) data y^(j−1), fixed before collecting y_j; hence, under the standard conditions described in the following theorem, the Type I error of the z* and t*-tests is preserved.

POWER CHARACTERIZATION (POC)
To study the performance of a test, we primarily need to explore the relationship between its power function and the design parameters. The latter might be, among others, the critical values, the sample size(s), and the model parameters. The critical values and the sample size(s) are scalar and therefore it is straightforward to visualize power even across all their possible values (e.g., using simulations). Their relation to power can then be easily described and understood. In univariate settings, this is also the case for the model parameters. However, in the multivariate setting, model parameters can be high-dimensional and therefore it is not practically feasible to visualize power over the whole design space. Power analysis is then typically restricted to a limited range of different structures of the model parameters. This might be sufficient for power analysis in specific settings, but it has obvious limitations in considering the general behavior of a testing procedure.
In the following, we address this problem in the context of linear combination tests and provide a solution. We first consider the case of J-stage linear combination z and t-tests with fixed weighting vectors which, apart from providing a method for performing simple and efficient power analysis of tests such as the OLS test of O'Brien (1984) (see Pocock, Geller, and Tsiatis 1987; Tang, Geller, and Pocock 1993; Logan and Tamhane 2004 for earlier work), also provides the intuition for the results concerning the z* and t*-tests. Note that throughout Section 4, the critical values and sample sizes (including the "prior" sample sizes) are assumed to be fixed and are collected in the design vector d = (α_{0,1}, α_{0,2}, ..., α_{0,J}, α_{1,1}, α_{1,2}, ..., α_{1,J}, ν_0, n_0, n_1, ..., n_J).
To provide greater insight into the subsequent results, it is also worth noting the joint distribution of the stagewise linear combination z statistics, Z_j, j = 1, 2, ..., J; here, for J = 2,

Pr(Z_1 ≤ z_1, Z_2 ≤ z_2) = ∫ Pr(Z_1 ≤ z_1, Z_2 ≤ z_2 | y_1) dF(y_1) = ∫_{y_1: Z_1 ≤ z_1} Φ(z_2 − θ_2(y_1)) dF(y_1),

where F(y_1) is the cdf of the first-stage data y_1 and θ_2(y_1) the location parameter as in (2.3). The latter parameter is independent of y_1, that is, θ_2(y_1) = θ_2, for the linear combination tests with fixed weighting vector, while for the adaptive z* and t*-tests, θ_2(y_1) depends on y_1 through the weighting vectors in (3.5) or (3.6), respectively. The next section focuses on further characterizing the effect of the weighting vector, through the parameters θ_j, on the power function. Note that the power function can be easily derived from the joint distribution of the stagewise statistics by replacing z_j with suitable rejection or acceptance boundaries. In Supplementary Material A, we show that the above expression generalizes easily to any J > 1 and that, by replacing Φ(·) with the cdf Φ_ν(·) of the Student's t-distribution, we can derive the joint distribution of T_j, j = 1, 2, ..., J.

PoC for the J-Stage z and t-Tests With Fixed Weighting Vectors
To compute the power of the J-stage z and t-tests with fixed weighting vectors w_j = w, it is sufficient to know the design vector d as well as the stagewise location parameters θ_j in (2.3), which in this case are also fixed, that is, θ_j = θ. The latter can be reexpressed as

θ_j = √n_j (w^T μ)/(w^T Σ w)^{1/2} = √n_j (w̃^T ω̃*)/‖w̃‖ = √n_j ‖ω̃*‖ cos(ang(w̃, ω̃*)), (4.1)

where ang(w̃, ω̃*) denotes the angle, measured in radians at the origin, between the vectors w̃ and ω̃*. Here, w̃ = Σ^{1/2} w and ω̃* = Σ^{1/2} ω* = Σ^{-1/2} μ are the standardized selected and optimal weighting vectors. In particular, the latter expresses the standardized multivariate treatment effect, generalizing the univariate (K = 1) standardized treatment effect μ/σ. Considering the weighting vector selection problem, the first equation in (4.1) implies that a weighting vector that increases the mean and/or decreases the variance of the linear combination gives higher power. The ambiguity in the latter expression becomes clearer through the standardization in the second equation, which implies that the weighting vector selection can be expressed as a process of learning the standardized optimal weighting vector ω̃*. The last equation in (4.1) establishes two scalar measures which are sufficient to determine power. The first is the magnitude of ω̃*, ‖ω̃*‖ = (μ^T Σ^{-1} μ)^{1/2} = D_{μ,Σ}, which is the Mahalanobis distance between the distributions of the observation Y_ij under the null and the alternative hypotheses. The Mahalanobis distance is a generalization of the univariate signal-to-noise ratio and can be interpreted as a measure of deviation from the null hypothesis. In medical settings, it is a well-known global measure of the strength of the treatment effect. The second, cos(ang(w̃, ω̃*)), is a measure of angular distance between the selected and the optimal weighting vector. It is a measure, in other words, of the distance of our weighting vector selection from the optimal choice.
Under this representation, it becomes clear that, for fixed weighting vectors, the location parameter θ equals a measure (D_{μ,Σ}) of the strength of the treatment effect scaled down by a measure (cos(ang(w̃, ω̃*))) of the angular distance between the selected and the optimal weighting vector. These results are formally stated in the next theorem.
Theorem 4.1. The design vector d, the Mahalanobis distance D_{μ,Σ} = (μ^T Σ^{-1} μ)^{1/2}, and the angle ang(ω̃*, w̃) between the vectors ω̃* = Σ^{-1/2} μ and w̃ = Σ^{1/2} w are sufficient to determine the power function β of the J-stage linear combination z and t-tests with fixed weighting vectors w_j = w.
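The two summaries of Theorem 4.1 can be computed directly (a sketch; `sqrtm` supplies the symmetric square root Σ^{1/2} used in the standardization, and the function name is ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def poc_summaries(w, mu, Sigma):
    """Mahalanobis distance D_{mu,Sigma} and the angle (radians) between the
    standardized vectors w_tilde = Sigma^{1/2} w and omega_tilde* = Sigma^{-1/2} mu."""
    Shalf = np.real(sqrtm(Sigma))              # symmetric square root of Sigma
    w_t = Shalf @ w
    omega_t = np.linalg.solve(Shalf, mu)       # Sigma^{-1/2} mu
    D = np.linalg.norm(omega_t)                # (mu^T Sigma^{-1} mu)^{1/2}
    cosang = (w_t @ omega_t) / (np.linalg.norm(w_t) * D)
    return D, float(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

For w proportional to ω* = Σ^{-1}μ the angle is zero, and since w̃^T ω̃* = w^T μ, a selected vector orthogonal to μ gives an angle of π/2 (and hence, by (4.1), no power beyond the level).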

PoC for the z*-Test
The sequential adaptation of the weighting vector increases the complexity of the relation between the power function and the design parameters. However, following a methodology similar to the above, analogous results can be derived. For this we use two steps: the first involves standardizing the procedure, similarly to (4.1), and the second establishes a rotation-invariance property of the power function. The next lemma is a direct consequence of the standardization step, summarizing μ, Σ, and m_0 into the vectors ω̃* and w̃_1^{z*}. Lemma 4.1. The design vector d, the standardized optimal weighting vector ω̃* = Σ^{-1/2} μ, and the standardized first-stage weighting vector w̃_1^{z*} in (3.5) are sufficient to determine the power function β^{z*}.
In the above result, we use the fact that the location parameter θ_j^{z*} of the z*-test can be written as in (4.2), which implies that the adaptive selection of the weighting vectors can be reexpressed as a procedure of adaptively estimating the vector ω̃*. Under this standardization, we can proceed to the rotation-invariance step, which yields the next lemma.
Lemma 4.2. The power β^{z*} of the z*-test is invariant to rotations of the weighting vector w̃_1^{z*} around the optimal weighting vector ω̃*.
The idea behind Lemma 4.2 is that if w̃_1^{z*} is rotated around ω̃*, that is, w̃_1^{z*} is replaced by R w̃_1^{z*}, where R is a rotation matrix with rotation axis ω̃*, the rejection region of the test changes. However, the new rejection region is simply a rotation of the initial one: for each point of the initial rejection region we can find a unique point of the rotated rejection region, obtained by applying R. Because the spherically symmetric Gaussian distribution of the standardized sample mean, N_K(ω̃*, I/n^(j)), is unchanged by rotations around ω̃*, the probability of the rejection region, that is, the power of the z*-test, remains the same. The next theorem is a direct consequence of Lemmas 4.1 and 4.2.
Theorem 4.2. The design vector d, the Mahalanobis distance D_{μ,Σ}, and the angle ang(ω̃*, w̃_1^{z*}) between the vectors ω̃* and w̃_1^{z*} are sufficient to determine the power function β^{z*}. The above theorem states that the dependence of the power function on the model parameters and their prior estimates is described simply by a scalar measure of the strength of the treatment effect and a scalar measure of the distance between the parameters and their prior estimates. It provides a sufficient description of power which is based on easily interpretable summaries and is considerably lower dimensional (importantly, not depending on K; see Table 1). This allows us to perform power analysis of the adaptive J-stage z*-test in a simple way, potentially covering the whole design space.

PoC for the t*-Test
The need to estimate the unknown Σ increases substantially the dimension and the complexity of the design space. The sequential estimation of Σ, in addition to μ, to obtain the weighting vectors w_j^{t*} implies that the power analysis needs to account for both estimation procedures. For this, we write the weighting vector w̃_j^{t*}, j = 1, 2, ..., J, in (3.6) as in (4.3), with D_j the Σ-deviation matrix and w̃_j^{z*} the jth standardized weighting vector of the z*-test in (4.2). Here, the Σ-deviation matrix D_j is a measure of the deviation of the estimate S_{j−1} in (3.3) from the parameter Σ. The weighting vector w̃_j^{t*} is then written as the product of the inverse of the matrix D_j, which accounts for the estimation of Σ, and the vector w̃_j^{z*}, which accounts for the estimation of μ, taking Σ as known. We next follow the same steps as in Section 4.2 to derive the PoC of the t*-test. The standardization step results in the next lemma, summarizing μ and Σ and their prior estimates m_0 and S_0 into the vectors ω̃*, w̃_1^{z*} and the matrix D_1, all of which have a clear interpretation.
Lemma 4.3. The design vector d, the matrix D_1 in (4.3), and the vectors ω̃* and w̃_1^{z*} are sufficient to determine the power function β^{t*}.
Lemma 4.4. The power function β^{t*} is invariant to simultaneous rotations of the vector w̃_1^{z*} and the eigenvectors of the matrix D_1 around the optimal weighting vector ω̃*.
The proof of Lemma 4.4 is similar to that of Lemma 4.2, albeit rather more complex. The next theorem is a direct consequence of Lemmas 4.3 and 4.4.
Theorem 4.3. The design vector d, the vector of eigenvalues λ_1 of the matrix D_1 in (4.3), and the vectors c_1^{z*} and c* in (4.5) are sufficient to determine the power function β^{t*}.
As we can see in Table 1, the last result substantially reduces the dimension of the design space of the t*-test, allowing us to explore power across it. While the design space, due to the covariance matrix estimation, still depends on K, it is reduced from order K² to order K.
Furthermore, this reduction provides an understanding of how the selection of the weighting vector affects power. This becomes clearer if we consider that θ_j^{t*} in (4.4) can be rewritten in terms of the sample mean and sample covariance matrix of the transformed observation vectors, i = 1, 2, ..., n_j. The resulting expressions show that the distance of the prior estimates m_0, S_0 from the model parameters μ, Σ can be expressed by the distances of the vectors c_1^{z*} and λ_1^{-1} = (1/λ_11, ..., 1/λ_1K)^T from c*, the latter directly reflected in power through θ_j^{t*} (see the next section for more information).
In the special case of the first-stage Σ-deviation matrix being proportional to the identity matrix, that is, D_1 ∝ I (λ_11 = λ_12 = ... = λ_1K), the design space can be reduced further, as the next result shows.
Theorem 4.4. For D_1 = c^{-1} I, the design vector d, the constant c, the Mahalanobis distance D_{μ,Σ}, and the angle ang(w̃_1^{z*}, ω̃*) are sufficient to determine the power function β^{t*}.

The last theorem proves that, for D_1 ∝ I, we can use the fact that the prior Σ-deviation matrix D_1 does not change the directions of the w̃_j^{z*}'s to show that the relation of β^{t*} to the model parameters and their prior estimates can be described simply by the scalars D_{μ,Σ} and ang(w̃_1^{z*}, ω̃*). In the next section, we use this result and the results of Theorems 4.2 and 4.3 to perform power analysis studies.

Figure 1. The sequential χ²-test (magenta dotted) and the z* (green dashed), sequential z (cyan solid), and z+ (orange dash-dot) tests with first-stage/fixed/first-step weighting vector at 0° (×), 30° (•), 60°, and 90° angle to the optimal. The remaining design parameters are J = 2, K = 10, α = 0.05, α_{1,1} = 0.01, α_{0,1} = 1, n_T = 60, n_0 = 0.5 n_1, D_{μ,Σ} = 0.65.

EMPIRICAL STUDIES
To explore the properties of the adaptive z* and t*-tests as well as alternative global tests, and to perform comparisons, we present empirical studies making use of the results in Theorems 4.2, 4.3, and 4.4.
In addition to the z* and t*-tests, we consider linear combination z and t-tests with fixed weighting vectors, a class that includes the OLS z and t-tests in O'Brien (1984). We also consider the likelihood-ratio χ²-test and Hotelling's T²-test with statistics χ² = n Ȳ^T Σ^{-1} Ȳ and T² = n(n − K) Ȳ^T S_Y^{-1} Ȳ / (K(n − 1)), which follow the noncentral χ² and F distributions with K and (K, n − K) degrees of freedom, respectively, and noncentrality parameter D²_{μ,Σ}. We consider both single stage and sequential J-stage designs for all these tests. Finally, the two-step, single-stage linear combination z+ and t+ tests proposed in Minas et al. (2012) are also considered. Note that the latter tests can be derived as special cases of the z* and t*-tests for J = 2, (α_{1,1}, α_{0,1}) = (0, 1), and C(p_(2)) = p_2.
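The competing single-stage statistics above can be sketched as follows (hedged: the F transformation of the one-sample Hotelling T² is the standard one, and the function names are ours):

```python
import numpy as np
from scipy import stats

def chi2_test_pvalue(Y, Sigma):
    """Likelihood-ratio chi^2-test of H0: mu = 0 with known covariance Sigma."""
    n, K = Y.shape
    ybar = Y.mean(axis=0)
    chi2 = n * ybar @ np.linalg.solve(Sigma, ybar)
    return stats.chi2.sf(chi2, df=K)

def hotelling_T2_pvalue(Y):
    """One-sample Hotelling T^2-test (unknown covariance), requiring n > K."""
    n, K = Y.shape
    ybar = Y.mean(axis=0)
    S = np.cov(Y, rowvar=False)                # sample covariance S_Y
    T2 = n * ybar @ np.linalg.solve(S, ybar)
    F = (n - K) / (K * (n - 1)) * T2           # ~ F(K, n - K) under H0
    return stats.f.sf(F, K, n - K)
```

The requirement n > K for the invertibility of S_Y is exactly the inapplicability of the T²-test noted in the Introduction.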
A range of experiments is performed under different values of the design parameters. The power function of J-stage (J > 1) tests is not analytically tractable, and therefore power is approximated by the rejection rate in a large number of simulated replications of a single experiment, here R = 10,000. Furthermore, to study the reduction in sample size due to early stopping of the study, we also empirically compute the rate of sample size reduction, RSSR = (n T − E(N))/n T, where n T = n1 + n2 + · · · + nJ is the total sample size, N is the sample size used in a single replication of the study, and E(N) is its expected value. Note that single-stage tests have RSSR = 0, in contrast to sequential tests, which allow for early stopping and thus have nonzero RSSR.
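The Monte Carlo approximation of power and RSSR can be sketched as follows. The two-stage univariate z-test below is only an illustrative stopping rule whose boundaries are chosen for demonstration, not by a formal error-spending calculation, and all names are ours.

```python
import numpy as np
from scipy.stats import norm

def simulate_power_rssr(run_trial, n_T, R=10_000, seed=0):
    """Estimate power and RSSR = (n_T - E[N]) / n_T by simulation.

    run_trial(rng) returns (reject, N): the test decision and the
    sample size actually used in one replication of the study.
    """
    rng = np.random.default_rng(seed)
    rejections, total_N = 0, 0
    for _ in range(R):
        reject, N = run_trial(rng)
        rejections += int(reject)
        total_N += N
    return rejections / R, (n_T - total_N / R) / n_T

def two_stage_z(rng, mu=0.6, n1=15, n2=15, a11=0.01, a01=0.5, a2=0.04):
    # Stage 1: early rejection at level a11, early acceptance beyond a01.
    x1 = rng.normal(mu, 1.0, n1)
    p1 = norm.sf(np.sqrt(n1) * x1.mean())
    if p1 <= a11:
        return True, n1           # early rejection
    if p1 > a01:
        return False, n1          # early acceptance (futility)
    # Stage 2: z-test on the pooled sample at the remaining level a2.
    x = np.concatenate([x1, rng.normal(mu, 1.0, n2)])
    return norm.sf(np.sqrt(n1 + n2) * x.mean()) <= a2, n1 + n2
```

For instance, simulate_power_rssr(lambda rng: two_stage_z(rng), n_T=30) returns both the rejection rate and the RSSR of the illustrative design; single-stage rules, which always return N = n_T, give RSSR = 0 as noted above.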

Simulation Data Examples
We next summarize the main results of a comprehensive study of the power behavior of the above tests in relation to the design parameters (more illustrations are included in Supplementary Material B). First, larger values of D μ,Σ and/or n T result in higher power for all tests considered, except the z and t-tests with fixed weighting vectors w orthogonal to ω*, for which β = α. Considering the prior sample size, the results indicate that for n0 ∈ (0.5 n1, 0.75 n1) the prior estimates become influential but do not dominate the accumulated data when selecting the weighting vector, while larger values of n0 force z* and t* to behave more similarly to z and t-tests with fixed weighting vectors. Furthermore, simulation examples confirm that larger values of the acceptance critical values α0,j increase the power of multistage tests, especially when the potential power gain in subsequent stages is large, at the expense of a smaller chance of early acceptance. Simulation examples also confirm that more power is gained if larger rejection critical values α1,j are allocated to stages with larger potential power gain, while the RSSR increases for larger α1,j in early stages.
We also consider power behavior in relation to the allocation of sample size to stages (Figure 1). For the sequential z and χ2-tests, the results show that higher power is achieved if the sample allocation is analogous to the α-rate allocation. The z* and t*-tests generally attain higher efficiency for close-to-balanced allocations. For w z*,1 close to (far from) the optimal ω*, slightly higher power is attained by assigning more sample to early (late) stages. Small to moderate allocation ratios r are more appropriate for the z+ test, since no α rate is spent in its first stage. Further, as in the χ2-test, the z* achieves higher RSSR for r = 0.5.
Before we proceed to comparisons, it is worth considering the impact of Σ being unknown, and thus estimated, on the performance of the t*-test. First, in the case of D1 ∝ I (λ1 ∝ 1 = (1, 1, . . . , 1)T), which, as we show in Theorem 4.4, is a somewhat easier case to consider, the estimation variability is substantially reduced and thus we generally expect w t*,j to be closer to w z*,j. On the other hand, if D1 is not proportional to I (λ1 not proportional to 1), the direction of λ1 is more influential on w t*,j, with a consequence that is double-edged (see Figure 2). That is, compared to the situation of λ1 ∝ 1, the distance of the w t*,j's to the optimal can be larger (left panel) but also smaller (right panel), depending on how close the direction of λ1−1 = (1/λ11, . . . , 1/λ1K)T is to the optimal direction c*.
Finally, it is useful to note that, throughout our simulations of the t*-test, cos(ang(c*, Λ1−1 c z*,1)) is shown to be a robust, albeit not sufficient, summary of the relation of the power to the model parameters and their prior estimates. For this reason, but also to reduce complexity, in the comparisons to follow we focus on the case of λ1 ∝ 1 (particularly, as we explain later on, in cases resembling the right panel of Figure 2), for various values of the summary cos(ang(c*, Λ1−1 c z*,1)).

In terms of comparisons, first note that, for fixed design parameters, single-stage tests attain higher power than multistage tests, nevertheless at the expense of not allowing for early stopping and thus not allowing for sample size reduction (RSSR = 0). Furthermore, it is worth emphasizing that, for fixed design parameters, the linear combination test with weighting vector (either fixed or initial) set equal to the optimal weighting vector ω* attains the maximum power and provides an upper bound for all the other procedures presented, including Hotelling's T2-test, as proved in Minas et al. (2012) (Corollary 1).

Compared to the z-tests with fixed weighting vectors w, as we can see in Figure 3, the adaptive z* loses some power for w (= w z*,1) close to the optimal but gains substantial amounts of power for w far from the optimal, importantly avoiding the problem of z-tests having zero power for w orthogonal to the optimal. This result emphasizes that, even though the power of the proposed tests remains sensitive to the prior information used to select the weighting vector, they are less sensitive to the initial selection of the weighting vector than the z and t-tests, where the weighting vector is fixed. The adaptive z*-test also has substantially higher power than z+ for small angles to the optimal and slightly lower power for large angles.
Finally, the power of the single-stage and sequential χ2-tests is approximately equal to the power of the z*-test for w z*,1 having, respectively, a 60° and a 45° angle with ω*. Note that, as the results in Figure 3 confirm, all the considered tests control the Type I error at the nominal level α = 0.05.

Figure 3. Power and RSSR versus Mahalanobis distance. We plot the z*-test (green −−) with the tests z+ (orange −·) (up left), sequential z (cyan −) and χ2 (magenta · ·) (up right), single-stage z (blue −) and χ2 (red · ·) (down left), and sequential χ2 (down right). The linear combination z*/z/z+ tests are performed with first stage/fixed/first step weighting vectors having 0 (×), 30° (•), 60°, and 90° angle to the optimal. The remaining design parameters are J = 2, K = 10, α = 0.05, α1,1 = 0.01, α0,1 = 1, n T = 30, r = 0.5, n0 = 0.75 n1, ν0 = n0 − 1.

Figure 4. Power and RSSR versus the total sample size n T. We plot the t*-test (green −−) with the tests t+ (orange −·) (up left), sequential t (cyan −) and T2 (magenta · ·) (up right), single-stage t (blue −) and T2 (red · ·) (down left), and sequential T2 (down right). The linear combination t*/t/t+ tests are performed with first stage/fixed/first step weighting vectors having 0 (×), 30° (•), 60°, and 90° angle to the optimal. The remaining design parameters are K = 15, J = 2, α = 0.05, α1,1 = 0.01, α0,1 = 1, r = 0.5, n0 = 6, ν0 = n0 − 1, D μ,Σ = 0.7.
In the case of Σ unknown, we consider comparisons for the case of D1 = I which, using the results of Theorem 4.4, can be performed in a similar way to the case of known Σ. For the simulations in Figure 4, the case of D1 = I can be thought of as representative of λ1−1 being fairly distant from c* (right panel of Figure 2), since we take c* = e1, resulting in cos(ang(c*, λ1−1)) = √K/K (≈ 0.26, an angle of about 75°, for K = 15). As we would expect, the power of all tests is lower than that of their counterparts for known Σ (same design parameters), but the patterns of power difference across tests remain the same, except for Hotelling's T2-test which, in contrast to the χ2-test, is highly dependent on the sample size.
As Figure 4 illustrates, for n T ≤ K or n T slightly larger than K (here, n T = 10–30 for K = 15), T2 is, respectively, inapplicable or very inefficient, with power lower than that of t* even for angles close to orthogonal. As the sample size becomes considerably larger than K (n T > 50), the power of the T2-test increases sharply to yield power levels analogous to those of the χ2-test. For instance, for the design parameters in Figure 4, the single-stage and sequential T2-tests, like the χ2-tests, have power close to that of t* for angles of 60° and 45°, respectively, at large sample sizes.
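The sharp dependence of the T2-test's power on the sample size can be checked directly from the noncentral distributions given earlier. The sketch below assumes D denotes the per-observation Mahalanobis distance, so that the noncentrality parameter is n·D²; the function names are ours.

```python
from scipy.stats import chi2, ncx2, f, ncf

def power_chi2_test(n, K, D, alpha=0.05):
    # Single-stage chi^2 test (Sigma known): power from the
    # noncentral chi^2 with K df and noncentrality n*D^2.
    crit = chi2.ppf(1 - alpha, df=K)
    return ncx2.sf(crit, df=K, nc=n * D**2)

def power_t2_test(n, K, D, alpha=0.05):
    # Single-stage Hotelling T^2 test (Sigma estimated): power from
    # the noncentral F(K, n-K); inapplicable for n <= K.
    if n <= K:
        raise ValueError("T^2 is inapplicable for n <= K")
    crit = f.ppf(1 - alpha, K, n - K)
    return ncf.sf(crit, K, n - K, nc=n * D**2)
```

With K = 15 and D = 0.7, for example, power_t2_test is well below power_chi2_test for n just above K, and the gap narrows as n grows, in line with the pattern described above.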

APPLICATION TO AN EEG STUDY
We consider an application to an electroencephalogram (EEG) study, the results of which are provided in Läuter, Glimm, and Kropf (1996). As Läuter et al. describe, the data were collected from n T = 19 depressive patients at the beginning and at the end of a six-week therapy. For demonstration, K = 9 variables are used, representing the changes in the absolute theta power in EEG channels 3–8 and 17–19 during the therapy of each patient. In Table 2, we present the means, standard deviations, and correlation matrix of the data. Note that, although an increase is indicated in all channels, none of the channelwise p-values (min k p k = 0.04) falls below the Bonferroni-corrected threshold α/K ≈ 0.0056 at the α = 5% significance level. Hotelling's T2-test also fails to reject H0 (p T2 = 0.261). On the contrary, the SS and PC t-tests proposed by Läuter et al. reject H0 at the 5% significance level (p SS = 0.0489, p PC = 0.0487).
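Given sample quantities such as those in Table 2, the angle of a candidate weighting vector to the optimal one can be computed as below. This sketch assumes the optimal weighting vector satisfies ω* ∝ Σ−1 μ, the direction maximizing the noncentrality of the linear combination test; the function name is ours.

```python
import numpy as np

def angle_to_optimal(w, mu, Sigma):
    # Angle (in degrees) between a weighting vector w and the
    # optimal weighting vector omega* ∝ Sigma^{-1} mu.
    omega = np.linalg.solve(Sigma, mu)
    cos = w @ omega / (np.linalg.norm(w) * np.linalg.norm(omega))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

For instance, evaluating the uniform (OLS) weighting vector w ∝ 1 at the sample estimates of μ and Σ gives the kind of angle summary, ang(w OLS, ω*), used in the power analysis below.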
We perform power analysis by setting the design parameters as in the above study, that is, n T = 19, K = 9, μ = ȳ, Σ = S y, α = 0.05. For these design parameters, the power of Hotelling's T2-test is β T2 ≈ 0.68 (D μ,Σ = 1.15). This is larger than the power of the SS and PC tests, which are, respectively, β t SS ≈ 0.52 and β t PC ≈ 0.51 (the contrasting results of the tests performed using these data are due to the different shapes of the t and F distributions). The latter power values are very close to the power of the OLS t-test of O'Brien (1984), β t OLS ≈ 0.52, which uses the uniform weighting vector w OLS ∝ 1; this gives angle ang(w OLS, ω*) ≈ 71°. Taking into account that the single-stage t-test with weighting vector equal to the optimal has power β t ≈ 1, we can easily see that there is considerable scope for improvement. Since the study was performed, there has been considerable research into EEG studies of depressive patients. There is now literature (see, e.g., Davidson et al. 2002) indicating that left-frontal hypoactivation and right-frontal hyperactivation are present in such subjects, which would indicate that a nonuniform prior over these frontal regions should be used. Using prior information based on such evidence, the adaptive t*-test can attain high power levels. For example, the prior estimates given in Table 2 are in agreement with the evidence in the literature, and further, the prior correlation structure is set to be roughly coherent with the distances between the channels, that is, larger prior correlations are assigned to channels that are closer together.

SUPPLEMENTARY MATERIALS

Supplement A: Technical results. Technical details, lemmas, and proofs.

Supplement B: Extended simulation examples. Examples from the extensive simulation studies performed to study the power of the considered tests.

[Received April 2013. Revised October 2013.]