Meta-Analysis of Mid-p-Values: Some New Results based on the Convex Order

ABSTRACT The mid-p-value is a proposed improvement on the ordinary p-value for the case where the test statistic is partially or completely discrete. In this case, the ordinary p-value is conservative, meaning that its null distribution is larger than a uniform distribution on the unit interval, in the usual stochastic order. The mid-p-value is not conservative. However, its null distribution is dominated by the uniform distribution in a different stochastic order, called the convex order. This property leads us to discover some new finite-sample and asymptotic bounds on functions of mid-p-values, which can be used to combine results from different hypothesis tests conservatively, yet more powerfully, using mid-p-values rather than p-values. Our methodology is demonstrated on real data from a cyber-security application.


Introduction
Let T be a real-valued test statistic, with probability measure P_0 under the null hypothesis, denoted H_0. Let X be a uniform random variable on the unit interval that is independent of T under P_0. X is a randomization device which, in practice, is usually generated by a computer.
We consider the (one-sided) p-value, the mid-p-value (Lancaster 1952), and the randomized p-value,

P = P_0(T* ≥ T), (1)
Q = P_0(T* > T) + P_0(T* = T)/2, (2)
R = P_0(T* > T) + X P_0(T* = T), (3)

where T* is a hypothetical independent replicate of T under P_0. If T is absolutely continuous under H_0, then the three quantities are equal and distributed uniformly on the unit interval. More generally, that is, if discrete components are possible, the three are different. Two main factors, one obvious and one more subtle, make this a very common occurrence. First, T is discrete if it is a function of discrete data, for example, a contingency table, categorical data, or a presence/absence event. Second, discrete test statistics often occur as a result of conditioning, as in the permutation test or Kendall's tau test (Sheskin 2003). Partially discrete tests occur, for example, as a result of censoring. When P, Q, and R are not equal, it is a question which to choose. The ordinary p-value is often preferred in relatively strict hypothesis testing settings, for example, in clinical trials, where the probability of rejecting the null hypothesis must not exceed the nominal level (often 5%). The randomized p-value has some theoretical advantages, for example, the nominal level of the test is met exactly. However, to quote one of its earliest proponents, "most people will find repugnant the idea of adding yet another random element to a result which is already subject to the errors of random sampling" (Stevens 1950). Randomized p-values also fail Birnbaum's admissibility criterion (Birnbaum 1954). CONTACT Patrick Rubin-Delanchy patrick.rubin-delanchy@bristol.ac.uk School of Mathematics, University of Bristol, Bristol BS TH, United Kingdom.
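To fix ideas, the three quantities can be computed directly when the statistic is fully discrete. The following sketch uses a hypothetical fair-coin example (Binomial(2, 1/2)); it is illustrative only and not the article's code.

```python
import random

def pvalues(pmf, t, x=None):
    """One-sided (upper-tailed) p-value, mid-p-value, and randomized
    p-value for an observed statistic t; pmf maps support -> probability."""
    gt = sum(p for s, p in pmf.items() if s > t)   # P0(T* > t)
    eq = sum(p for s, p in pmf.items() if s == t)  # P0(T* = t)
    if x is None:
        x = random.random()                        # randomization device X
    p_val = gt + eq           # P = P0(T* >= t)
    mid_p = gt + 0.5 * eq     # Q = P0(T* > t) + P0(T* = t)/2
    rand_p = gt + x * eq      # R = P0(T* > t) + X P0(T* = t)
    return p_val, mid_p, rand_p

# Fair-coin example: T ~ Binomial(2, 1/2), observed t = 2 (two heads).
pmf = {0: 0.25, 1: 0.5, 2: 0.25}
p, q, r = pvalues(pmf, 2, x=0.5)
# p = 0.25, q = 0.125; r lies between P0(T* > 2) = 0 and p
```

Note that Q is always the midpoint of the jump of the p-value, hence the name.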
Note that we can also work with an unrealized version of the randomized p-value, known as the fuzzy or abstract p-value (Geyer and Meeden 2005), and either stop there, leaving interpretation to the decision-maker, or propagate uncertainty through to any subsequent analysis, for example, multiple testing (Kulinskaya and Lewin 2009; Habiger 2015).
Although it can allow breaches of the nominal level, the mid-p-value is often deemed to better represent evidence against the null hypothesis than the ordinary or randomized p-values. Justifications are not just heuristic as, for example, the mid-p-value can arise as a Rao-Blackwellization of the randomized p-value corresponding to the uniformly most powerful test (Wells 2010), as an optimal estimate of the H_0 versus H_1 truth indicator under squared loss (Hwang and Yang 2001), or from asymptotic Bayesian arguments (Routledge 1994). Performance has also been demonstrated in applications, for example, in the context of healthcare monitoring (Spiegelhalter et al. 2012) (an article read before the Royal Statistical Society), genetics (Graffelman and Moreno 2013), a wealth of examples involving contingency tables (Lydersen, Fagerland, and Laake 2009), and more. Our own interest stems from cyber-security applications, and a motivating example is given in Section 3. Most arguments for using the mid-p-value in hypothesis testing scenarios also work for confidence intervals. Here, using the mid-p-value over the p-value can result in a smaller interval, with a closer-to-nominal coverage probability (Berry and Armitage 1995; Fagerland, Lydersen, and Laake 2015).
In this article, we are able to make further mathematical progress on the mid-p-value by using a stochastic order known as the convex order. The problem we focus on is meta-analysis, that is, combining evidence from different hypothesis tests into one global measure of significance. In several scenarios analyzed, the use of the ordinary p-value leads to suboptimal and even spurious results. New bounds for some commonly used methods for combining ordinary p-values are derived for mid-p-values. These allow large gains in power over using ordinary p-values, while, unlike in any previous study based on mid-p-values, the false positive rate remains controlled (albeit conservatively).
The remainder of this article is structured as follows. In Section 2, we summarize our main results. Section 3 gives a cyber-security application where, using mid-p-values, we are able to detect a cyber-attack that would likely fall under the radar if only ordinary p-values were used. Section 4 elaborates on the results of Section 3, with improved (although more complicated) bounds, simulations, and discussion. Section 5 concludes. All proofs are relegated to the Appendix.

Main Results
This section summarizes the main ideas and findings of the article. Let U denote a uniform random variable on the unit interval, with expectation operator E, and let E_0 denote expectation with respect to P_0. Under the null hypothesis, it is well known, see, for example, Casella and Berger (2002), that P dominates U in the usual stochastic order, denoted P ≥_st U. One way to write this is

E_0{f(P)} ≥ E{f(U)}, (4)

for any nondecreasing function f, whenever the expectations exist (Shaked and Shanthikumar 2007). It is also well known, and in fact true by design, that R is uniformly distributed under the null hypothesis, denoted R =_st U. On the other hand, it is not widely known that, under the null hypothesis, Q is dominated by U in the convex order, denoted Q ≤_cx U. One way to write this is (Shaked and Shanthikumar 2007, chap. 3)

E_0{h(Q)} ≤ E{h(U)}, (5)

for any convex function h, whenever the expectations exist. We have used the qualifier "widely" because an effective equivalent of Equation (5) can be found in Hwang and Yang (2001). However, even there, Equation (5) is not recognized as a major stochastic order, meaning that some of its importance is missed.
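The convex order (5) is easy to check numerically in a simple discrete case. The sketch below compares moments of a hypothetical two-point mid-p-value (Q ∈ {1/4, 3/4}, each with probability 1/2 under the null) with those of a uniform variable, using the convex test function h(x) = x².

```python
# Convex order Q <=_cx U: means agree, but every convex h has
# E{h(Q)} <= E{h(U)}.  Here h(x) = x^2, and E(U^2) = 1/3.
support, probs = [0.25, 0.75], [0.5, 0.5]

mean_Q = sum(s * p for s, p in zip(support, probs))       # = 1/2 = E(U)
second_Q = sum(s**2 * p for s, p in zip(support, probs))  # E(Q^2)
second_U = 1.0 / 3.0                                      # E(U^2)

assert abs(mean_Q - 0.5) < 1e-12
assert second_Q <= second_U   # h(x) = x^2 is convex
```

Equality of means with a smaller second moment is exactly the "same location, less spread" reading of the convex order.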
In particular, we now present three concrete, new results, made possible by the literature on the convex order. Each provides a method for combining mid-p-values conservatively, the first two in finite samples and the last asymptotically. Details and improved (but more complicated) bounds are given in Section 4. In what follows, Q_1, …, Q_n denote independent (but not necessarily identically distributed) mid-p-values, with joint probability measure denoted P̃_0 under the null hypothesis.
Let Q̄_n = n^{-1} Σ_{i=1}^n Q_i denote the average mid-p-value. For t ≥ 0,

P̃_0(1/2 − Q̄_n ≥ t) ≤ exp(−6nt²). (6)

Note that, first, no knowledge of the specific individual mid-p-value distributions is required. Second, Hoeffding's inequality (Hoeffding 1963), which would be available more generally, gives the larger bound exp(−2nt²) (the cube root of (6)). Let F_n = −2 Σ_{i=1}^n log(Q_i), known as Fisher's statistic (Fisher 1934) and the most popular method for combining p-values. In the continuous case, it is well known that F_n has a chi-square distribution with 2n degrees of freedom under H_0. For t ≥ 2n,

P̃_0(F_n ≥ t) ≤ e^{n − t/2} {t/(2n)}^n. (7)

Finally, assume additionally that Q_1, …, Q_n are identically distributed. Then applying Fisher's method as usual, that is, treating the mid-p-values as if they were ordinary p-values and using the chi-square tail, is asymptotically conservative as n → ∞.
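Both finite-sample bounds can be evaluated in a few lines. In the sketch below, the exact exponential form of the Fisher bound, e^{n − t/2}(t/2n)^n for t ≥ 2n, is our reading of (7) and should be treated as an assumption; the average bound exp(−6nt²) is as stated in (6).

```python
import math

def avg_bound(n, t):
    """P0(1/2 - mean of mid-p-values >= t) <= exp(-6 n t^2), t >= 0."""
    return math.exp(-6.0 * n * t * t)

def fisher_bound(n, t):
    """Assumed form of (7): P0(-2 sum log Q_i >= t) <= e^{n - t/2} (t/2n)^n,
    valid for t >= 2n (the null mean of Fisher's statistic)."""
    return math.exp(n - t / 2.0 + n * math.log(t / (2.0 * n)))

n = 10
print(avg_bound(n, 0.2))        # tighter than Hoeffding's exp(-2 n t^2) = 0.449...
print(fisher_bound(n, 4 * n))   # bound at twice the null mean of Fisher's statistic
```

At t = 2n the Fisher bound equals one, as it must, and it decays exponentially thereafter.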

Example: Network Intrusion Detection
The perceived importance of cyber-security research has risen dramatically in recent years, particularly after several well-publicized events in 2016 and 2017. In this area, anomaly detection over very high volumes and rates of network data is a key statistical problem (Adams and Heard 2016). In our experience of the field, discrete data, whether they be presence/absence events, counts, or categorical data, are the norm rather than the exception. We will demonstrate the value of our article's contributions in a network intrusion detection problem. Figure 1 shows publicly available authentication data covering 58 days on the Los Alamos National Laboratory computer network (Kent 2016). Nodes in the graph are computers, and an edge indicates that there was at least one connection from one computer to the other, resulting in a graph with m ≈ 18,000 nodes and ∼400,000 directed edges. An exciting opportunity offered by this data resource is that it contains an actual cyber-attack or, to be precise, records of penetration-testing activity conducted by a "red team." One of the four computers used for the attack (the highest degree of the four, ID "C17693," with 296 out of 534 edges labeled as nefarious) is highlighted in red on the left, with its connections highlighted in pink on the right.

Figure .
Authentication data: Full network of connections comprising ∼18,000 nodes and ∼400,000 directed edges. Edges are colored by authentication type. On the left, nodes are shown as black points, with node ID "C" highlighted in red (and larger). On the right, the points are hidden to better see the connections made by node ID "C, " which are now highlighted in pink.
Earlier work on network intrusion has suggested that the occurrence of new edges on the network can be (weakly) indicative of nefarious behavior (Neil et al. 2013; Neil 2015). Looking at the outward connections from a given computer, those whose destination is a computer that otherwise receives relatively few new connections are of special interest. Because the first day of data has no red-team activity, we use this day to learn a rate λ_j, j = 1, …, m, at which each computer receives new connections, assuming, admittedly unrealistically, that the times are right-censored independent and identically distributed exponential random variables. For every computer on the network, the set of outward new connections made over the remainder of the observation period [1, 58] is scored according to this model. The test statistic

T_ij = 57, if no connection occurs from i to j in [1, 58],
T_ij = τ − 1, if a new connection from i to j occurs at time τ ≥ 1,

is considered for every directed pair (i, j) not occurring as an edge on the first day, so that each node i has associated with it a collection of test statistics T_i·, which are partially discrete, with a point mass at 57. For regularization purposes, the rates λ_j, j = 1, …, m, are assumed a priori to follow a Gamma distribution matching the mean and variance of the empirical rates computed for each j = 1, …, m over the full period of 58 days. The use of this prior implies that, before censoring, T_ij has a Gamma-Exponential (also called Lomax) predictive distribution, which is used to compute the collections of ordinary, mid, and randomized p-values P_i·, Q_i·, R_i· corresponding to the outward connections of each node i = 1, …, m. Mathematical details about the calculations above are in the Appendix.
Since we are interested in the ranking of computer ID "C17693" among the other ∼18,000 computers, as well as its p-value, it makes sense to extend the ranges of the bounds (6) and (7) as follows:

P̃_0(1/2 − Q̄_n ≥ t) ≤ exp(−6nt|t|), t ∈ ℝ, (8)
P̃_0(F_n ≥ t) ≤ e^{n − t/2} max{t/(2n), 1}^n, t ≥ 0, (9)

which preserves monotonicity, and remains valid because values larger than unity are returned outside the old ranges. Our options are: 1. to compute the average ordinary, mid, and randomized p-values, and obtain a significance level using bound (8); 2. to compute Fisher's statistic with ordinary, mid, and randomized p-values, and obtain a significance level using the chi-square tail; 3. as in 2., but obtaining a significance level using bound (9).
As rankings go, therefore, the mid-p-value is never beaten, with computer ID "C17693" coming in the top 10 every time and coming second once. The most obvious approach of using Fisher's method with ordinary p-values fails completely. As for the other three red-team computers: using the best performing method, that is, Fisher's statistic with mid-p-values and bound (9), where Computer ID "C17693" comes second, their ranks are 384th (ID "C18025"), 550th (ID "C19932"), and 1079th (ID "C22409").

Meta-Analysis of Mid-p-Values: Further Details
This section elaborates on the results of Section 2. We say that a random variable (and its measure and distribution function) is subuniform if it is less variable than a uniform random variable, U , in the convex order.
To see why the mid-p-value is subuniform, notice that Q = E_0(R | T). By Jensen's inequality, for any convex function h, whenever the expectations exist,

E_0{h(Q)} = E_0[h{E_0(R | T)}] ≤ E_0[E_0{h(R) | T}] = E_0{h(R)} = E{h(U)},

since R =_st U. Remember that we do not claim this result is new, see, for example, Hwang and Yang (2001), but rather the idea to exploit the literature on the convex order.
To formalize the meta-analysis framework, let T_1, …, T_n be a sequence of independent test statistics. We consider a joint null hypothesis, H̃_0, under which T_1, …, T_n have probability measures P_0^{(1)}, …, P_0^{(n)}, respectively. The p-values, P_i, mid-p-values, Q_i, and randomized p-values, R_i, are obtained by replacing P_0 with P_0^{(i)} in (1), (2), and (3), respectively. In the case of the randomized p-value, an independent uniform variable, X_i, is generated each time. P̃_0 denotes the implied joint probability measure of the statistics under H̃_0. The focus of this section is on testing the joint null hypothesis H̃_0. The probability bounds that follow often have the form P̃_0{f(Q_1, …, Q_n) ≥ t} ≤ b_n(t). If the observed mid-p-values are q_1, …, q_n and the level of the test is α (e.g., 5%), then a procedure that rejects when b_n{f(q_1, …, q_n)} ≤ α is conservative: the probability of rejecting the null hypothesis H̃_0, if it holds, does not exceed α.

Sums of Mid-p-Values
An early advocate of mid-p-values, Barnard (1989, 1990) proposed to combine test results from different contingency tables by taking the sum of standardized mid-p-values. His exposition relies on some approximations. Our results make exact inference possible. We begin with a bound on the sum of independent mid-p-values. This bound bears an interesting resemblance to Hoeffding's inequality (Hoeffding 1963). It will later be extended to be relevant to Barnard's analysis.
Instead of summing the mid-p-values directly, Barnard (1990) actually considered sums of the standardized statistics (Q_i − 1/2)/σ_i, where σ_i is the standard deviation of Q_i under H̃_0. The upper tail probability of the sum is then estimated by Gaussian approximation. In the purely discrete case, Barnard shows that

σ_i² = {1 − Σ_{s∈S_i} (p_s^{(i)})³}/12,

where p_s^{(i)} = P̃_0(Q_i = s) and S_i is the (countable) support of Q_i. Instead of appealing to the Gaussian approximation, the convex order allows us to find an exact bound.
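Barnard's variance formula, var(Q) = {1 − Σ_s p_s³}/12 for a purely discrete statistic with atoms p_s (our reading of the expression above), can be verified by direct enumeration. The helper `midp_distribution` below is hypothetical, listing the mid-p-value atoms from the atom probabilities.

```python
def midp_distribution(probs):
    """Return (values, probs) of the upper-tailed mid-p-value Q when the
    statistic has atom probabilities `probs`, listed from the most extreme
    (largest) value of the statistic downwards."""
    vals, tail = [], 0.0
    for p in probs:
        vals.append(tail + 0.5 * p)  # P0(T* > t) + p/2 at this atom
        tail += p
    return vals, probs

probs = [0.2, 0.3, 0.5]
vals, ps = midp_distribution(probs)
mean = sum(v * p for v, p in zip(vals, ps))
var = sum((v - mean) ** 2 * p for v, p in zip(vals, ps))
formula = (1.0 - sum(p ** 3 for p in probs)) / 12.0
assert abs(mean - 0.5) < 1e-9       # mid-p-values always have null mean 1/2
assert abs(var - formula) < 1e-9    # matches {1 - sum p_s^3}/12
```

With equal atoms p_s = 1/k, the formula gives (1 − 1/k²)/12, which tends to the uniform variance 1/12 as the support grows.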
In practice, the bound (14), which is an important improvement over (15), can be found numerically by minimizing over h. Of course, even if the optimum cannot be determined exactly the obtained bound still holds, because the tail area is simply over-estimated.

Products of Mid-p-Values (Fisher's Method)
Fisher's method (Fisher 1934) is the most popular way of combining p-values. As is well known, under H̃_0, the statistic −2 Σ_{i=1}^n log(P_i) has a chi-square distribution with 2n degrees of freedom if the P_i are absolutely continuous. Therefore, the p-value of the combined test is P† = S_2n{−2 Σ_{i=1}^n log(P_i)}, where S_k is the survival function of a chi-square distribution with k degrees of freedom. This results in an exact procedure when the P_i are absolutely continuous, and a conservative one otherwise, that is, P† ≥_st U under H̃_0.
Our next result allows us to use the mid-p-values Q 1 , . . . , Q n in place of P 1 , . . . , P n while retaining a conservative procedure. We were able to derive three probability bounds. None beats the other two uniformly for all n and all significance levels (see Figure 2), but the last is often the winner, hence the simpler statement of Section 2.
Theorem 2. Let X_1, …, X_n be a sequence of independent subuniform random variables. Then, for x ≥ 2n,

P{−2 Σ_{i=1}^n log(X_i) ≥ x} ≤ min[ S_2n(x − 2n log 2), 4n/{4n + (x − 2n)²}, e^{n − x/2}{x/(2n)}^n ].

The first bound uses P(X_i ≤ α) ≤ 2α for α ≥ 0, obvious in the case of a mid-p-value, but actually true of any subuniform random variable (Meng 1994). The second uses bounds on the mean and variance of −log(X_i) (given in Lemma 2, in the Appendix) and then applies the Chebyshev-Cantelli inequality. The third is based on a bound on the moment generating function of −log(X_i). Derivation details are in the Appendix.
For a given n and α ∈ (0, 1], let t_{α,n} denote the critical value of Fisher's statistic, that is, t_{α,n} satisfies S_2n(t_{α,n}) = α. Figure 2 presents the behavior of the different bounds for different n (20 on the left and one billion on the right) and α. The curves show the bound given by each formula at different α (which can be interpreted as "canonical levels"), that is, inputting x = t_{α,n} in Theorem 2, as α ranges from 10^{−5} to 0.1. For low α, the bound based on the moment generating function, marked MGF, is by far superior. Let

Q† = min[1, B_n{−2 Σ_{i=1}^n log(Q_i)}],

where B_n(x) denotes the smallest of the three bounds of Theorem 2 for x ≥ 2n, and one otherwise. Then Q† is again conservative, that is, Q† ≥_st U under H̃_0. Both P† and Q† are valid p-values. Clearly, if the underlying p-values are continuous, then the standard P† is superior (in fact, deterministically smaller). However, Q† seems to be substantially more powerful in a wide range of discrete cases. This is demonstrated by simulation in Section 4.3.
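A combined mid-p-value of this kind can be sketched as follows. The exact forms of the three tail bounds, and the definition of Q† as their minimum, are our reconstruction and should be treated as assumptions rather than verbatim formulas from the article.

```python
import math

def chi2_sf(x, k):
    """Survival function of a chi-square with k = 2n (even) degrees of
    freedom: S_2n(x) = e^{-x/2} * sum_{j<n} (x/2)^j / j!."""
    n = k // 2
    term, s = 1.0, 1.0
    for j in range(1, n):
        term *= (x / 2.0) / j
        s += term
    return math.exp(-x / 2.0) * s

def q_dagger(mid_ps):
    """Combined mid-p-value: smallest of the three (assumed) Theorem 2
    tail bounds at Fisher's statistic, capped at one."""
    n = len(mid_ps)
    x = -2.0 * sum(math.log(q) for q in mid_ps)  # Fisher's statistic
    if x <= 2.0 * n:                             # bounds only valid for x >= 2n
        return 1.0
    b1 = chi2_sf(x - 2.0 * n * math.log(2.0), 2 * n)   # shifted chi-square
    b2 = 4.0 * n / (4.0 * n + (x - 2.0 * n) ** 2)      # Chebyshev-Cantelli
    b3 = math.exp(n - x / 2.0 + n * math.log(x / (2.0 * n)))  # MGF
    return min(1.0, b1, b2, b3)

print(q_dagger([0.01] * 10))  # strong joint evidence -> very small Q†
```

Taking the minimum of valid tail bounds is itself a valid tail bound, so the resulting Q† remains conservative.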
Finally, we find this interesting asymptotic result.
Theorem 3 (Fisher's method is asymptotically conservative). Let X_1, X_2, … denote independent and identically distributed subuniform random variables. For any α ∈ (0, 1], there exists N ∈ ℕ such that, for all n ≥ N,

P{−2 Σ_{i=1}^n log(X_i) ≥ t_{α,n}} ≤ α.

Hence, we can dispense with any correction entirely if n is large enough and the Q_i are identically distributed. A formal proof is given in the Appendix. Since E{−log(X_i)} ≤ E{−log(U)}, from the definition of the convex order, a direct application of the law of large numbers gets us most of the way, except for the possibility E{−log(X_i)} = E{−log(U)}. In fact, this exception is no problem because, perhaps surprisingly, it implies that the X_i are uniform, using Shaked and Shanthikumar (2007, Theorem 3.A.43).

Simulations
To illustrate the potential improvement of employing Fisher's method with mid-p-values, using the bound (7), over the traditional approach of using ordinary p-values and the chi-square tail, we considered p-values from three types of support. In the first column of Figure 3, each p-value P_i can only take one of two values, 1/2 and 1. We therefore have Q_i = 0.25 if P_i = 1/2 and Q_i = 0.75 if P_i = 1. Under the null hypothesis, P_0^{(i)}(P_i = 1/2) = P_0^{(i)}(P_i = 1) = 1/2. In the second column, each p-value P_i is supported on the pair {p_i, 1}, where the p_i were drawn uniformly on the unit interval but are subsequently treated as fixed known values. We have Q_i = p_i/2 if P_i = p_i and Q_i = (1 + p_i)/2 otherwise. Under the null hypothesis, we have P_0^{(i)}(P_i = p_i) = 1 − P_0^{(i)}(P_i = 1) = p_i, for each i. Finally, in the third column each p-value P_i takes one of 10 values, 1/10, 2/10, …, 1, and therefore Q_i = P_i − 1/20. Under the null hypothesis, P_0^{(i)}(P_i = j/10) = 1/10, for j = 1, …, 10. The rows represent two different alternatives and sample sizes. In both cases, the P_i are generated by left-censoring a sequence of independent and identically distributed Beta variables, B_1, …, B_n, that is, P_i is the smallest supported value larger than B_i. In the first scenario, the dataset is small (n = 10), but the signal is strong (a Beta distribution with parameters 1 and 20). In the second, the dataset is larger (n = 100) but the signal is made weaker accordingly (a Beta distribution with parameters 1 and 5). Comparing just the solid and dashed lines first, we see that Q† always outperforms P† substantially, and sometimes overwhelmingly. In the bottom-left corner, for example, we have a situation where, at a false positive rate set to 5% say, the test Q† would detect the effect with probability close to one, whereas with P† the probability would be close to zero.
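The bottom-left scenario (two-point support, n = 100, Beta(1, 5) signal) can be reproduced with a short Monte Carlo run. The MGF-form bound used for the mid-p combination below is our reconstruction and should be treated as an assumption.

```python
import math, random

def chi2_sf(x, k):
    """Survival function of a chi-square with k = 2n (even) df."""
    n = k // 2
    term, s = 1.0, 1.0
    for j in range(1, n):
        term *= (x / 2.0) / j
        s += term
    return math.exp(-x / 2.0) * s

def mgf_bound(x, n):
    """Assumed MGF-form bound e^{n - x/2} (x/2n)^n for x >= 2n, else one."""
    return 1.0 if x <= 2 * n else math.exp(n - x / 2 + n * math.log(x / (2 * n)))

random.seed(1)
n, alpha, reps = 100, 0.05, 500
rej_p = rej_q = 0
for _ in range(reps):
    b = [random.betavariate(1, 5) for _ in range(n)]   # weak signal
    p = [0.5 if bi <= 0.5 else 1.0 for bi in b]        # left-censored onto {1/2, 1}
    q = [pi - 0.25 for pi in p]                        # corresponding mid-p-values
    rej_p += chi2_sf(-2 * sum(map(math.log, p)), 2 * n) <= alpha
    rej_q += mgf_bound(-2 * sum(map(math.log, q)), n) <= alpha
print(rej_p / reps, rej_q / reps)  # the mid-p combination is far more powerful
```

Note that with two-point support the Fisher statistic on ordinary p-values can never exceed 2n log 2 ≈ 138.6 here, far below the chi-square critical value, so the traditional combination cannot reject at all.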
As a final possibility, consider the randomized combination

R† = S_2n{−2 Σ_{i=1}^n log(R_i)},

which is exact under H̃_0 since the R_i are uniform. A disappointment is that this randomized version, the dotted line in Figure 3, tends to outperform even the mid-p-values, and by a substantial margin. On the other hand, as pointed out in the Introduction, the randomized p-value has some important philosophical disadvantages, and it did not perform better in our real data example.

Conclusion
The convex order provides a formal platform for the treatment and interpretation of mid-p-values. This article used mathematical results from this literature to combine mid-p-values, which are not conservative individually, into an overall significance level that is conservative. As shown in real data and simulations, the gains in power can be substantial.
Whereas the focus of this article was on meta-analysis, another canonical problem is multiple testing, where the task is to subselect from or adjust a set of p-values, for example, subject to a maximum false discovery rate (Benjamini and Hochberg 1995). The case of discrete data has been analyzed in a number of articles, including Kulinskaya and Lewin (2009), Habiger and Pena (2011), Liang (2016), and Habiger (2015). A promising (but ostensibly harder) avenue of research would be to investigate the use of the convex order in this problem.

A1. Section 3: Mathematical Details
First, we calculate the empirical rates

r_j = (number of new connections to j over [0, 58]) / {(number of nodes − 1) × 58},

for j = 1, …, m, and then set α = {mean(r_j)}²/var(r_j), β = mean(r_j)/var(r_j), so that a Gamma distribution with parameters α and β has mean and variance equal to the empirical mean and variance of the r_j, respectively. This distribution is used as the prior for each rate λ_j. After a day of observation, the posterior distribution for λ_j is also Gamma, with parameters

α_j = α + Σ_i 1{a new connection from i to j occurs in [0, 1)}, β_j = β + Σ_i τ_ij,

where τ_ij = 1 if no connection occurs from i to j in [0, 1), and τ_ij = τ if a new connection from i to j occurs at time τ < 1.
For each i, restrict j to the indices of nodes that did not receive a connection from i over [0, 1). Conditional on the observation period [0, 1), each statistic T_ij has a probability measure, denoted P_0^{(ij)}, with a point mass at 57 and an absolutely continuous distribution over [0, 57) given by

P_0^{(ij)}(T_ij > t) = {β_j/(β_j + t)}^{α_j}, t ∈ [0, 57).

Since low values of T_ij are of interest, the p-value and mid-p-value associated with an observed value t of T_ij are computed from lower-tailed versions of (1) and (2) by substituting P_0^{(ij)} in for P_0, giving

P_ij = 1 − {β_j/(β_j + t)}^{α_j} if t < 57, and P_ij = 1 if t = 57,

and

Q_ij = P_ij if t < 57, and Q_ij = 1 − {β_j/(β_j + 57)}^{α_j}/2 if t = 57,

respectively.
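The lower-tailed p-value and mid-p-value under the Lomax predictive distribution can be sketched as follows; the parameter values used are hypothetical, and the formulas follow our reading of the appendix displays above.

```python
def lomax_sf(t, alpha, beta):
    """Lomax (Gamma-Exponential) survival function {beta/(beta + t)}^alpha."""
    return (beta / (beta + t)) ** alpha

def lower_p_and_midp(t, alpha, beta, horizon=57.0):
    """Lower-tailed p-value and mid-p-value for the censored statistic T_ij,
    which has Lomax survival on [0, horizon) and a point mass at horizon."""
    if t < horizon:                          # absolutely continuous part
        p = 1.0 - lomax_sf(t, alpha, beta)   # P0(T* <= t); no atom below horizon
        return p, p                          # so the mid-p-value equals p
    atom = lomax_sf(horizon, alpha, beta)    # P0(T* = horizon)
    return 1.0, 1.0 - 0.5 * atom             # censored: halve the point mass

p, q = lower_p_and_midp(57.0, alpha=1.0, beta=10.0)
# censored observation: p = 1, while q = 1 - (10/67)/2
```

Only the censored observations create a discrete component here, which is why the mid-p correction acts only at t = 57.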

Proofs
Proof of Theorem 1. Since 1 − X is subuniform if and only if X is subuniform, it is sufficient to prove that the bounds in (11), (12), and (13) hold for P(X̄_n − 1/2 ≥ t). Since exp(xh) is a convex function in x for any h, the convex order gives us E{exp(hX_i)} ≤ E{exp(hU)} = (e^h − 1)/h. Therefore, for any h ≥ 0,

P(X̄_n − 1/2 ≥ t) = P[exp{h Σ_{i=1}^n X_i} ≥ exp{hn(1/2 + t)}] ≤ e^{−hn(1/2 + t)} [E{exp(hX_i)}]^n ≤ [e^{−h(1/2 + t)} (e^h − 1)/h]^n,

where the second line follows from Markov's inequality. The choice h = 12t (motivated by an analysis of the Taylor expansion in h at 0) leads to

P(X̄_n − 1/2 ≥ t) ≤ {e^{−12t²} sinh(6t)/(6t)}^n,

using the fact that e^{−x} sinh(x)/x = (1 − e^{−2x})/(2x) is one at x = 0 (using l'Hôpital's rule) and decreasing.
The proofs of Theorems 2 and 3 both need the following result.

Lemma 2. Let X be a subuniform random variable. Then E{−log(X)} ≤ 1 and var{−log(X)} ≤ 1. Moreover, if E{−log(X)} = 1, then X is uniform on [0, 1].

Proof. Since −log(x) is convex, the convex order gives E{−log(X)} ≤ E{−log(U)} = 1. Furthermore,

var{−log(X)} ≤ E[{log(X) + 1}²] ≤ E[{log(U) + 1}²] = 1.

In the first inequality, the fact that the expected squared distance from the mean is smaller than from any other point is used, and in the second we used the fact that {log(x) + 1}² is convex. Finally, since −log is strictly convex, E{−log(X)} = E{−log(U)} together with X ≤_cx U implies X =_st U, by Shaked and Shanthikumar (2007, Theorem 3.A.43).

Proof of Theorem 3. Let

V_i = −2 log(X_i), W_i = −2 log(U_i),

where U_1, …, U_n are independent uniform random variables on [0, 1], and let μ_V = E(V_i) and μ_W = E(W_i) = 2. If μ_V = μ_W then by Lemma 2 the X_i are uniform on [0, 1] and we are done. The statement is also true if α = 1. Therefore assume μ_V < μ_W, α ∈ (0, 1), and let t ∈ (μ_V, μ_W). By the weak law of large numbers there exists an N' ∈ ℕ such that, for n ≥ N',

P(Σ_{i=1}^n W_i ≥ nt) ≥ α,

so that t_{α,n} ≥ nt, recalling that Σ W_i has a chi-square distribution with 2n degrees of freedom and S_2n(t_{α,n}) = α. Therefore, for n ≥ N',

P{−2 Σ_{i=1}^n log(X_i) ≥ t_{α,n}} ≤ P(V̄_n ≥ t).

Again by the law of large numbers, the right-hand side tends to zero. Hence, there exists an N ≥ N' such that it is bounded by α for n ≥ N.