A selective review of statistical methods using calibration information from similar studies

In the era of big data, divide-and-conquer, parallel, and distributed inference methods have become increasingly popular. How to effectively use the calibration information from each machine in parallel computation has become a challenging task for statisticians and computer scientists. Many newly developed methods have roots in traditional statistical approaches that make use of calibration information. In this paper, we first review some classical statistical methods for using calibration information, including simple meta-analysis methods, parametric likelihood, empirical likelihood, and the generalized method of moments. We further investigate how these methods incorporate summarized or auxiliary information from previous studies, related studies, or populations. We find that the methods based on summarized data usually have little or no efficiency loss compared with the corresponding methods based on the full individual data. Finally, we review some recently developed big data analysis methods, including communication-efficient distributed approaches, renewal estimation, and incremental inference, as examples of the latest developments in methods using calibration information.


Introduction
Statistical inference with big data can be extremely challenging owing to the high volume and large variety of observed quantities. Currently, one of the most popular approaches to this problem in statistics and computer science is the divide-and-conquer paradigm. The basic idea of this method is to break down a problem recursively into two or more sub-problems of the same or related type, such that each sub-problem becomes simple enough to be solved easily. The solution to the original problem is the optimal combination of the solutions to the sub-problems. A closely related statistical method is called parallel and distributed inference. In essence, large amounts of observed data are stored on different machines in a distributed manner. The computation is often relatively inexpensive on each machine. Then, communication is essential to enable assembly of the available results from all machines. Many related references can be found in, for example, Jordan et al. (2019). Although many new statistical methods have been developed for big data analysis, most of them have roots in traditional statistical methods of combining auxiliary information.
Combining information from similar studies has been and will continue to be an extremely important strategy in statistical inference. The most popular example of such methods is meta-analysis, in which the published results of multiple similar scientific studies are pooled to produce an enhanced estimate without using the raw individual data from each study. We refer to Borenstein et al. (2009) for a comprehensive introduction to meta-analysis. For various reasons such as privacy or capacity of computer storage, in massive data inference, only summarized data rather than the original individual data may be available. This poses a very challenging problem: how to conduct efficient updated inference by making full use of the summarized data? In recent years, many methods of combining information have been developed in economic studies, machine learning, and distributed statistical inference. The goal of this paper is to selectively review a few popular methods that are able to integrate information in different disciplines.
Utilizing external summary data or auxiliary information to obtain more accurate inference is an old and effective method in survey sampling. Owing to restrictions such as cost or convenience, the variable of interest Y may be available for only a small portion of individuals. However, the explanatory variable X associated with Y may be readily available for all individuals. Cochran (1977) presented a comprehensive discussion of regression-type estimators making use of the summarized information from X. Chen and Qin (1993), Chen et al. (2002), and Wu and Sitter (2001) used empirical likelihood (EL; Owen, 1988) to incorporate such information in finite populations.
With advances in technology, many summarized statistical results have become available in public domains. For example, many aggregated demographic and socioeconomic status data are provided in the US census reports. The Surveillance, Epidemiology, and End Results (SEER) programme of the National Cancer Institute provides population-based cancer survival statistics such as covariate-specific survival probabilities. Imbens and Lancaster (1994) combined micro and macro data in economic studies through the generalized method of moments (GMM). Chaudhuri et al. (2008) showed that inclusion of population-level information could reduce bias and increase the efficiency of the parameter estimates in a generalized linear model setup. Wu and Thompson (2020) published an excellent monograph on combining auxiliary information in survey sampling.
In this paper, we consider two situations. In the first, the summarized information from different studies was derived using the same statistical model. In the second, the summarized information was derived using statistical models that are similar but not exactly the same. In general, combining information in the former case is easier. The latter case is more complex, as one has to take into consideration the heterogeneity among the studies.
The rest of this paper is organized as follows. In Section 2, we briefly review two simple and popular meta-analysis methods for combining similar results. In Section 3, we review Owen's (1988) EL method and Qin and Lawless's (1994) over-identified parameter problem as examples of general tools for synthesizing information from summarized data. In particular, we present a new way of deriving the lower information bound for the over-identified parameter problem. Section 4 discusses enhanced inference by utilizing auxiliary information. Section 5 presents results on more flexible meta-analyses where information on different covariates is available in similar studies. Calibration of information from previous studies is described in Section 6. We discuss methods of using disease prevalence information for more efficient estimation in case-control studies in Section 7. The popular communication-efficient distributed statistical inference method used in machine learning is discussed in Section 8. Renewal estimation and incremental inference are briefly presented in Section 9. Finally, some further discussion is presented in Section 10.

Random-effect meta-analysis
DerSimonian and Laird (1986) proposed a moment-based estimation method using a random-effects model for meta-analysis. Let θ̂_i be an estimator of θ_i from the i-th study, i = 1, 2, …, K. For example, θ̂_i could be the estimated mean response from the i-th study. When the sample size n_i in the i-th study is reasonably large, we may assume the hierarchical model
\[
\hat\theta_i \mid \theta_i \sim N(\theta_i, w_i^{-1}), \qquad \theta_i \sim N(\theta, \tau^2),
\]
where the w_i^{-1}s are treated as known. Although the normal models hold only approximately, we assume that they are exactly true for ease of theoretical development. The goal here is to better estimate θ by combining the results from all the studies.
Unconditionally, we have θ̂_i ∼ N(θ, w_i^{-1} + τ²). Consider the following inverse-variance weighting estimator for θ:
\[
\hat\theta = \frac{\sum_{i=1}^{K} w_i^\ast \hat\theta_i}{\sum_{i=1}^{K} w_i^\ast}, \qquad w_i^\ast = (w_i^{-1} + \tau^2)^{-1}.
\]
To estimate τ², let θ̂_F = Σ_i w_i θ̂_i / Σ_i w_i denote the fixed-effect estimator and Q = Σ_i w_i (θ̂_i − θ̂_F)² Cochran's heterogeneity statistic. We can easily check that
\[
E(Q) = (K-1) + \tau^2 \Big( \sum_{i=1}^{K} w_i - \sum_{i=1}^{K} w_i^2 \Big/ \sum_{i=1}^{K} w_i \Big),
\]
which implies that a natural moment estimator of τ² is
\[
\hat\tau^2 = \frac{Q - (K-1)}{\sum_{i=1}^{K} w_i - \sum_{i=1}^{K} w_i^2 \big/ \sum_{i=1}^{K} w_i}.
\]
For small sample sizes, there is no guarantee that this estimator is non-negative; one may replace it by max(τ̂², 0). Alternatively, we may estimate (θ, τ²) using the likelihood approach. The joint likelihood based on the θ̂_i s is
\[
L(\theta, \tau^2) = \prod_{i=1}^{K} \{2\pi (w_i^{-1} + \tau^2)\}^{-1/2} \exp\Big\{ -\frac{(\hat\theta_i - \theta)^2}{2(w_i^{-1} + \tau^2)} \Big\}.
\]
Maximizing with respect to θ and τ² gives their maximum likelihood estimators (MLEs). Lin and Zeng (2010) compared the relative efficiency of using summary statistics versus individual-level data in meta-analysis. They found that in general there is no information loss when using the summarized information compared with inference based on the original individual data, when available.
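As a concrete illustration, the moment-based procedure above can be sketched in a few lines of numpy (the function name and the truncation at zero are our own choices):

```python
import numpy as np

def dersimonian_laird(theta_hat, w):
    """Moment-based random-effects meta-analysis (DerSimonian & Laird, 1986).

    theta_hat : per-study estimates (length K)
    w         : fixed-effect precisions, w[i] = 1 / Var(theta_hat[i] | theta_i)
    Returns the combined estimate and the estimated between-study variance tau^2.
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    w = np.asarray(w, dtype=float)
    K = len(theta_hat)
    # Fixed-effect (inverse-variance) estimate and Cochran's Q statistic.
    theta_fe = np.sum(w * theta_hat) / np.sum(w)
    Q = np.sum(w * (theta_hat - theta_fe) ** 2)
    # Moment estimator of tau^2, truncated at zero for small samples.
    tau2 = max((Q - (K - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)), 0.0)
    # Random-effects weights (w_i^{-1} + tau^2)^{-1} and the combined estimate.
    w_star = 1.0 / (1.0 / w + tau2)
    theta_re = np.sum(w_star * theta_hat) / np.sum(w_star)
    return theta_re, tau2
```

With homogeneous inputs the estimator reduces to the fixed-effect inverse-variance average, since Q ≤ K − 1 forces τ̂² to be truncated at zero.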

Empirical likelihood and general estimating equations
In this section we briefly review Owen's (1988) EL and Qin and Lawless's (1994) estimating equations approaches, as those methods represent general tools for assembling information from different sources. The maximum likelihood method for regular parametric models is among the most popular methods in statistical inference, as it has many nice properties. However, model mis-specification is a major concern, as a mis-specified model may lead to biased results. For the case when the underlying distribution is multinomial, Hartley and Rao (1968) proposed a mean-constrained estimator for the population total in survey sampling problems. To mimic the parametric likelihood while discarding parametric model assumptions, Owen (1988, 1990) proposed the EL method, which is a natural generalization of the multinomial likelihood when the number of categories is equal to the sample size. The EL approach can be thought of as a bootstrap that does not resample, or as a likelihood without parametric assumptions (Owen, 2001).

Definition of empirical likelihood
Suppose that X_1, …, X_n are n independent and identically distributed observations from X, with cumulative distribution function F. For convenience, we assume there are no ties, i.e., any two observations are unequal to each other; the techniques developed below can easily be adapted to handle ties. Let p_i = dF(X_i), i = 1, 2, …, n, be the jumps of F(x) at the observed data points. The nonparametric likelihood is
\[
L(F) = \prod_{i=1}^{n} dF(X_i) = \prod_{i=1}^{n} p_i.
\]
According to the likelihood principle (that parameters with larger likelihoods are preferable), one need only consider the distribution functions F(x) with p_i > 0 and Σ_{i=1}^n p_i = 1. If we maximize the log-likelihood
\[
\ell(F) = \sum_{i=1}^{n} \log p_i \tag{1}
\]
subject to the constraints
\[
p_i > 0, \qquad \sum_{i=1}^{n} p_i = 1, \tag{2}
\]
then we obtain p_i = 1/n, i = 1, 2, …, n. Therefore, the maximizer of the nonparametric likelihood is the empirical distribution function F_n(x) = n^{-1} Σ_{i=1}^n I(X_i ≤ x). This is why the empirical distribution is called the nonparametric MLE of F(x).
Suppose we are interested in constructing a confidence interval for μ = E(X) = ∫ x dF(x), the mean of X. Since we have discretized F at each of the observed data points, the integral becomes μ = Σ_{i=1}^n p_i X_i. Next, we maximize the nonparametric log-likelihood subject to an extra constraint:
\[
\sum_{i=1}^{n} p_i X_i = \mu. \tag{3}
\]
Maximizing the log-likelihood (1) subject to constraints (2) and (3), the Lagrange multiplier method gives the profile log-likelihood of μ,
\[
\ell_n(\mu) = -\sum_{i=1}^{n} \log\{1 + \lambda (X_i - \mu)\} - n \log n,
\]
where λ = λ(μ) solves Σ_i (X_i − μ)/{1 + λ(X_i − μ)} = 0. We can treat ℓ_n(μ) as a parametric likelihood of μ. Based on this likelihood, the maximum EL estimator of μ is μ̂ = X̄ = n^{-1} Σ_{i=1}^n X_i, which is exactly the sample mean. We define the likelihood ratio function as
\[
R_n(\mu) = 2\{\ell_n(\hat\mu) - \ell_n(\mu)\} = 2 \sum_{i=1}^{n} \log\{1 + \lambda (X_i - \mu)\}.
\]
Under the regularity conditions specified in Owen (1988, 1990), as n goes to infinity, R_n(μ_0) converges to the χ² distribution with p degrees of freedom, where p is the dimension of μ, and μ_0 is the true value of μ.
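The profile calculation above is easy to carry out numerically. The following sketch (our own implementation, not code from the papers cited) computes R_n(μ) for a univariate sample by solving the Lagrange-multiplier equation with a safeguarded Newton iteration:

```python
import numpy as np

def el_log_ratio(x, mu):
    """Empirical likelihood ratio statistic R_n(mu) for the mean of a
    univariate sample (Owen, 1988): solve sum z_i / (1 + lam z_i) = 0
    for lam, with z_i = x_i - mu, then R_n = 2 sum log(1 + lam z_i).
    """
    z = np.asarray(x, dtype=float) - mu
    if not (z.min() < 0.0 < z.max()):
        return np.inf          # mu outside the convex hull: constraint infeasible
    lam = 0.0
    for _ in range(100):
        denom = 1.0 + lam * z
        score = np.sum(z / denom)               # derivative of sum log(1 + lam z)
        hess = -np.sum(z ** 2 / denom ** 2)
        step = score / hess
        # step-halving keeps every weight 1 + lam * z_i strictly positive
        while np.any(1.0 + (lam - step) * z <= 1e-10):
            step /= 2.0
        lam -= step
        if abs(step) < 1e-12:
            break
    return 2.0 * np.sum(np.log(1.0 + lam * z))
```

R_n vanishes at the sample mean, grows as μ moves away from it, and is infinite outside the convex hull of the data, reflecting that the constrained maximization in (3) becomes infeasible there.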

General estimating equations
The original EL was mainly used to make inference for linear functionals of the underlying population distribution, such as the population mean (Owen, 1988, 1990). Qin and Lawless (1994) applied this method to general estimating equations, which greatly broadened its applications. Specifically, suppose the population of interest satisfies the general estimating equation
\[
E\{g(X, \theta)\} = 0 \tag{5}
\]
for an r × 1 vector-valued function g and some θ, which is a p × 1 parameter to be estimated. We assume r ≥ p, as otherwise θ would not be identifiable.
For general estimating equations with r > p, or over-identified models, Hansen (1982) proposed the celebrated GMM, which has become one of the most popular methods in the econometric community. In essence, the GMM minimizes
\[
Q_n(\theta) = \Big\{ \frac{1}{n}\sum_{i=1}^{n} g(X_i, \theta) \Big\}^\top \Sigma^{-1} \Big\{ \frac{1}{n}\sum_{i=1}^{n} g(X_i, \theta) \Big\}
\]
with respect to θ, where Σ is the variance matrix of the estimating function g(X, θ). If Σ is unknown, we may replace it by the sample variance Σ̂ = n^{-1} Σ_{i=1}^n g(X_i, θ̃) g^⊤(X_i, θ̃), where θ̃ is an initial, consistent estimate of θ.
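A minimal sketch of Hansen's two-step procedure for a scalar parameter (a grid search stands in for a proper optimizer; all names are ours):

```python
import numpy as np

def gmm_two_step(x, g, grid):
    """Two-step GMM (Hansen, 1982) for a scalar parameter theta.

    g(x, theta) maps the data vector to an (n, r) array of moment
    functions; step 1 minimizes with the identity weight matrix, and
    step 2 re-weights by the inverse sample variance of the moments.
    """
    def objective(theta, W):
        gbar = g(x, theta).mean(axis=0)
        return gbar @ W @ gbar

    r = g(x, grid[0]).shape[1]
    theta1 = min(grid, key=lambda t: objective(t, np.eye(r)))   # step 1: W = I
    G = g(x, theta1)
    W2 = np.linalg.inv(G.T @ G / len(x))                        # step 2: W = Sigma^{-1}
    return min(grid, key=lambda t: objective(t, W2))

# Usage: over-identified moments for a population with mean theta and
# (assumed known) unit variance: g1 = x - theta, g2 = (x - theta)^2 - 1.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, 400)
g = lambda x, t: np.column_stack([x - t, (x - t) ** 2 - 1.0])
theta_hat = gmm_two_step(x, g, np.linspace(-1.0, 3.0, 801))
```

Here r = 2 moments identify the scalar mean of a population whose variance is assumed known, a textbook over-identified example.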
Instead of GMM, Qin and Lawless (1994) used the EL to make inferences for parameters defined by a general estimating equation. For discretized F(x) satisfying (2), Equation (5) becomes
\[
\sum_{i=1}^{n} p_i\, g(X_i, \theta) = 0. \tag{6}
\]
Maximizing the log-likelihood (1) subject to (2) and (6), we have the following profile log-likelihood of θ (up to a constant):
\[
\ell_n(\theta) = -\sum_{i=1}^{n} \log\{1 + \lambda^\top g(X_i, \theta)\},
\]
where λ is the Lagrange multiplier determined by
\[
\sum_{i=1}^{n} \frac{g(X_i, \theta)}{1 + \lambda^\top g(X_i, \theta)} = 0.
\]
We then estimate θ by the maximizer θ̂ = arg max_θ ℓ_n(θ). Qin and Lawless (1994) established its limiting distribution: √n(θ̂ − θ_0) converges to N(0, V) in distribution, where
\[
V = \big[ E\{\nabla_\theta g^\top(X, \theta_0)\} \{E(g g^\top)\}^{-1} E\{\nabla_\theta g(X, \theta_0)\} \big]^{-1}. \tag{7}
\]
Hereafter, we use ∇_θ to denote the differentiation operator with respect to θ.

Calculation of the information bound
Assuming that the parameter of interest satisfies the general estimating equation E{g(X, θ)} = 0, we next consider how well we can estimate θ based on this model, and whether the maximum EL estimator is optimal. To answer these questions, we consider an ideal situation, where the probability function of X has a parametric form f(x, θ), known up to θ. We define the enlarged model
\[
h(x, \eta, \theta) = f(x, \theta)\{1 + \eta^\top g(x, \theta)\},
\]
as it reduces to f(x, θ) when η = 0 and integrates to one because E{g(X, θ)} = 0. As the parametric form f(x, θ) is unknown in practice, we anticipate that any estimator based on the moment constraints E{g(X, θ)} = 0 should have a variance that is no less than that of the MLE derived from the enlarged model. We show that even if the form of f(x, θ) is available, the MLE of θ based on h(x, η, θ) has the same asymptotic variance as the maximum EL estimator. With the parametric model h, we can estimate θ by maximizing L(θ, η) = Π_{i=1}^n h(X_i, η, θ) with respect to (θ, η). We denote the resulting MLE by (θ̂, η̂). We show in Section 3.4 that under some regularity conditions on h (see, e.g., Theorems 14 and 23 of van der Vaart (2000)),
\[
\sqrt{n}(\hat\theta - \theta_0) \rightarrow N(0, V) \ \text{in distribution}, \tag{8}
\]
where V is defined in (7). In general, the parametric form f(x, θ) is unknown; hence, we expect that the best estimator of θ should have an asymptotic variance at least as large as V. As the maximum EL estimator of θ of Qin and Lawless (1994) has asymptotic variance V, we conclude that it achieves the lower information bound.
Remark 3.1: If g(x, θ) is an unbounded function of x for each θ, we may construct a new density of the same form with g replaced by a bounded transformation ψ of g; clearly, ψ is bounded. We may go through the same derivations to reach the same conclusion.
Remark 3.2: Back and Brown (1992) established a similar result by constructing an exponential family. In their construction, the tilting parameter is determined implicitly by a constraint equation, whereas in our new approach, η is an independent parameter.

A sketched proof of (8)
The log-likelihood based on the enlarged model is ℓ(θ, η) = Σ_{i=1}^n log h(X_i, η, θ). If log{h(x, η, θ)} satisfies the conditions of Theorem 14 of van der Vaart (2000) on m_θ(x), then (θ̂, η̂) is consistent for (θ_0, 0). Result (8) follows from Theorem 23 of van der Vaart (2000), after some tedious algebra to compute the information matrix of (θ, η) at (θ_0, 0). Under some mild assumptions, such as that ∫ g(x, θ) f(x, θ) dx = 0 holds for θ in a neighbourhood of θ_0, differentiating both sides with respect to θ leads to
\[
\int \nabla_\theta g(x, \theta)\, f(x, \theta)\, dx + \int g(x, \theta)\, \nabla_\theta f(x, \theta)\, dx = 0.
\]
Combining this identity with Theorem 5.23 of van der Vaart (2000), together with the asymptotic normality of the score as n goes to infinity, implies (8).

Empirical entropy family
Again we assume that the available information is given by the estimating equation E{g(X, θ)} = 0. It is often too restrictive to assume a known underlying parametric model f(x, θ) in the construction of the enlarged parametric model h(x, η, θ). We may instead replace the cumulative distribution function by its discretized version supported on the observed data, and the likelihood becomes an exponentially tilted multinomial likelihood. In fact, this is equivalent to the EL Π_{i=1}^n p_i, where the p_i s minimize the Kullback–Leibler divergence (up to a constant), or minus the exponential tilting likelihood; see (2007) for more details. We call this the empirical entropy family induced by the estimating equation E{g(X, θ)} = 0.

Enhancing efficiency using auxiliary information
In this section, we discuss methods of incorporating auxiliary information to enhance estimation efficiency. This aspect was also investigated by Qin (2000). We assume a parametric model f(y | x, β) for the conditional density function of Y given X and leave the marginal distribution G(x) of X unspecified. We wish to make inferences for β when some auxiliary information is summarized through an estimating equation E{φ(X, β)} = 0. For example, if we know the mean μ of Y, then we can construct the estimating function φ(x, β) = ∫ y f(y | x, β) dy − μ, since E{φ(X, β)} = E(Y) − μ = 0. Furthermore, we allow the response Y to have missing values. Let D be the non-missingness indicator, which takes the value 1 if Y is available, and 0 otherwise. We assume a missing-at-random model in which the missingness probability π(x) depends only on x. We denote the observed data by (d_i, d_i y_i, x_i), i = 1, 2, …, n. The hybrid likelihood multiplies the parametric conditional likelihood of the observed responses by the EL Π_i p_i of the x_i s. We can maximize this likelihood subject to the constraints Σ_i p_i = 1 and Σ_i p_i φ(x_i, β) = 0. Since π(x) is not a function of β, the profile hybrid empirical log-likelihood (up to a constant) is
\[
\ell_n(\beta) = \sum_{i=1}^{n} d_i \log f(y_i \mid x_i, \beta) - \sum_{i=1}^{n} \log\{1 + \lambda^\top \phi(x_i, \beta)\},
\]
where λ is the Lagrange multiplier determined by Σ_i φ(x_i, β)/{1 + λ^⊤ φ(x_i, β)} = 0. For the special case where data are missing completely at random, i.e., π(x) is a constant function of x, Qin (2000) established the following theorem.
Theorem 4.1: Let β_0 be the true parameter value, let β̂ be the maximum hybrid EL estimator, i.e., the maximizer of (10), and let λ̂ be the corresponding Lagrange multiplier. Under some regularity conditions, when n goes to infinity, √n(β̂ − β_0) and √n λ̂ are jointly asymptotically normally distributed with mean zero. Remark 4.1: Imbens and Lancaster (1994) studied the same problem using GMM. In particular, they directly combined the conditional score estimating equation ∇_β log f(y | x, β) and φ(x, β). Even though the first-order large-sample results are the same, the hybrid-EL-based approach is more appealing as it respects the parametric conditional likelihood and replaces only the marginal likelihood with the EL. See Qin (2000) for numerical comparisons of the two methods.
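To make the role of the auxiliary moment concrete, here is a hypothetical numpy sketch (our own construction: no missing data, a linear model, and the optimally weighted GMM route of Imbens and Lancaster (1994) rather than the hybrid EL, since all moments are then linear in the parameters):

```python
import numpy as np

def gmm_linear_aux(x, y, mu):
    """Linear-regression GMM sketch that folds in the auxiliary moment
    E(Y) = mu, assumed known from an external source.

    Moments: E[x(y - a - b x)] = 0, E[y - a - b x] = 0, E[y] - mu = 0.
    All moments are linear in beta = (a, b), so the optimally weighted
    GMM estimator has the closed form (D'WD)^{-1} D'Wc.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # Sample moments written as c - D beta.
    c = np.array([np.mean(x * y), np.mean(y), np.mean(y) - mu])
    D = np.array([[np.mean(x), np.mean(x ** 2)],
                  [1.0, np.mean(x)],
                  [0.0, 0.0]])
    # Step 1: solve the two exactly identified moments (this is OLS).
    beta0 = np.linalg.solve(D[:2], c[:2])
    # Step 2: re-weight by the inverse variance of the moment functions.
    e = y - beta0[0] - beta0[1] * x
    G = np.column_stack([x * e, e, y - mu])
    W = np.linalg.inv(G.T @ G / n)
    return np.linalg.solve(D.T @ W @ D, D.T @ W @ c)
```

The third moment, y − μ, involves no parameter, yet under the optimal weight matrix it is correlated with the residual moments and therefore sharpens the estimate of the intercept; dropping it recovers plain OLS.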

Combining summary information: a more flexible method for meta-analysis
Developing systematic methods for combining published information is one of the main goals of meta-analysis, which has become increasingly popular since it requires little extra cost. The main restriction in meta-analysis is that all studies must include the same variables in their analyses; the only difference allowed is in the sample sizes. Thus, studies must be discarded if they contain variables different from those in other studies. Summarized information is often available from publications such as census reports and results of national health studies. For reasons including confidentiality, it is typically not possible to gain access to the original data, only to the summarized reports. Suppose we are interested in conducting a new study that may contain some new variables of interest that are not available in the summarized information, for example, a genetic study involving newly discovered biomarkers or genes. Below we discuss a more flexible method that can combine published information and individual study data for enhanced inference in such cases. Chatterjee et al. (2016) discussed a related problem on the utilization of auxiliary information. As Han and Lawless (2016) pointed out, however, their methodology and theoretical results had already been developed by Imbens and Lancaster (1994) and Qin (2000) in the absence of selection bias in sampling.
We consider two cases. (I) The sample size for the summarized information is much larger than that of the new study. (II) Sample sizes from the two data sources are comparable. In Case I, we can treat the summarized information as known, i.e., the variation in the summarized data is negligible compared with the variation in the new study. In Case II, we have to take the variation in the summarized information into consideration as it is comparable to the variation in the new study. We focus on Case I in this section and study Case II in Section 6.

Setup and solution
Suppose that the summarized results were obtained from statistical analyses of a response Y and covariates X (although the original data are not available), and that the new study includes an extra covariate Z in addition to (Y, X). We are interested in fitting a parametric model f(y | x, z, β) for the conditional density function of Y given X and Z. Let (y*_1, x*_1), …, (y*_N, x*_N) be the historic data, even though they are unavailable to us. The published information can be summarized in two ways. Let (y_1, x_1, z_1), …, (y_n, x_n, z_n) be the observed data from the new study. The basic assumption is that (y_i, x_i), i = 1, 2, …, n, and (y*_i, x*_i) have the same distribution. To utilize the summarized information, we can define estimating functions in Scenario (I), and g = (g_1, g_3) in Scenario (II). We consider only the situation where n/N → 0; in other words, the variation in the auxiliary information is negligible.
The EL approach amounts to maximizing Σ_{i=1}^n log p_i subject to the constraints Σ_i p_i = 1 and Σ_i p_i g(y_i, x_i, z_i, β) = 0. According to Qin and Lawless (1994), the asymptotic variance of the maximum EL estimator β̂ based on the estimating equation g is
\[
V = \big[ E(\nabla_\beta g^\top) \{E(g g^\top)\}^{-1} E(\nabla_\beta g) \big]^{-1},
\]
where ∇_β g = ∂g(y, x, z, β)/∂β|_{β=β_0}, g = g(y, x, z, β_0), and β_0 is the true value of β. In the above approach, the estimating equation g_3 = h(y, x) − h̄, where h̄ denotes the summary value of h(y, x) reported from the historic data, does not involve the parameter β. However, there are ways to achieve higher efficiency. For example, we may set
\[
g_2(x, z, \beta) = \int h(y, x) f(y \mid x, z, \beta)\, dy - \bar h.
\]
Then E{g_2(X, Z, β)} = 0. If we combine the empirical log-likelihood based on the estimating equation g_2 and the log-likelihood Σ_{i=1}^n log f(y_i | x_i, z_i, β) as in the previous section (see Equation (12)), the resulting MLE β̂ in general has a smaller asymptotic variance; this approach can achieve better efficiency.

A comparison
Given two pairs of estimating functions, {g_1, g_3} and {g_1, g_2}, we may wonder which pair leads to a better estimator if we directly compare their asymptotic variance formulae. Alternatively, we may ask whether we should combine all three constraints g = (g_1, g_2, g_3) together. Write g_12 = g_21 = (g_1, g_2) and a = E{h(y, x) ∇_β log f(y | x, z, β)}. Using results from Qin and Lawless (1994), we find that the asymptotic variance of β̂ obtained by combining the three estimating equations is the same as that in the case where only g_1 and g_2 are combined. This indicates that taking g_3 into account leads to no efficiency gain in the estimation of β. Moreover, the method of combining g_2 and the parametric likelihood Π_{i=1}^n f(y_i | x_i, z_i, β) is better than that of combining g_1, g_3, and the parametric likelihood. To see this, denote the asymptotic variances of the MLEs of β under the two methods by V_1 and V_2, respectively; the next subsection shows that V_2 − V_1 ≥ 0.

Proof of V_2 − V_1 ≥ 0
For convenience, we assume that E(h) = 0. As E(∇_β ψ) = A_12 and ψ = E(h | X, Z), it suffices to show inequality (13). Let E* and Var* denote E(· | X, Z) and Var(· | X, Z), respectively. Since conditional variance matrices are non-negative definite and E*(g_1) = 0, we obtain a matrix inequality between the relevant variance blocks. Multiplying both sides by (−A_21 A_11^{-1}, I) from the left and by its transpose from the right, we arrive at inequality (13), which implies V_2 − V_1 ≥ 0.

Calibration of information from previous studies
We consider calibration of information using parametric likelihood, EL (Owen, 1988), and GMM (Hansen, 1982). When only summary information from previous studies is available, these three well-known methods can be used to calibrate such summary information and to make inferences about the unknown parameters of interest. We may wonder whether doing so results in efficiency loss compared with inferences based on the pooled data, were they all available. Zeng and Lin (2015) found that parametric-likelihood-based meta-analysis of summarized information retains first-order asymptotic efficiency compared with analysis based on individual data. We show here that EL and GMM also possess this property. This is extremely important, as individual data may involve privacy issues, whereas summarized information does not.

Efficiency comparison
Suppose that (Y_ij, X_ij) (j = 1, 2, …, n_i; i = 1, 2, …, K) are independent observations from the same population. We consider two scenarios according to the model assumed for the population.
(I) The conditional probability function (i.e., the probability density/mass function of a continuous/discrete random variable) of Y given X has a parametric form f (y | x, β). (II) The population satisfies E{g(Y, X, β)} = 0.
Here, β is a finite-dimensional unknown parameter with true value β*. Assume that the data arrive batch by batch, and that n_i/n = ρ_i ∈ (0, 1), where n = Σ_{i=1}^K n_i. For the i-th batch (i = 1, 2, …, K) of data: (a) under assumption (I), the parametric log-likelihood function of β is
\[
\ell_i^{PL}(\beta) = \sum_{j=1}^{n_i} \log f(Y_{ij} \mid X_{ij}, \beta);
\]
(b) under assumption (II), we define an empirical log-likelihood function
\[
\ell_i^{EL}(\beta) = -\sum_{j=1}^{n_i} \log\{1 + \lambda_i^\top g(Y_{ij}, X_{ij}; \beta)\},
\]
where λ_i = λ_i(β) solves Σ_{j=1}^{n_i} g(Y_ij, X_ij; β)/{1 + λ_i^⊤ g(Y_ij, X_ij; β)} = 0; (c) under assumption (II), we define the objective function of the GMM method (GMM log-likelihood for short) as
\[
\ell_i^{GMM}(\beta) = -\frac{n_i}{2}\, \bar g_i^\top(\beta)\, \Sigma^{-1}\, \bar g_i(\beta), \qquad \bar g_i(\beta) = \frac{1}{n_i} \sum_{j=1}^{n_i} g(Y_{ij}, X_{ij}; \beta),
\]
where Σ = Var{g(Y, X, β*)}. In practice, β* is generally replaced by a consistent estimator of β in the expression for Σ; using the true value β* does not affect the theoretical analysis presented in this section. Let ℓ_i(β) denote ℓ_i^{PL}(β), ℓ_i^{EL}(β), or ℓ_i^{GMM}(β), as appropriate, and denote the MLE of β based on the i-th batch of data by β̂_i = arg max ℓ_i(β). Under certain regularity conditions, it can be verified that, for β = β* + O_p(n^{-1/2}),
\[
\ell_i(\beta) = C - \frac{n_i}{2} (\beta - \hat\beta_i)^\top V (\beta - \hat\beta_i) + o_p(1), \tag{14}
\]
where, in Case (a), V is the Fisher information matrix of f(y | x, β) at β*, and in Cases (b) and (c), V = E(∇_β g^⊤) Σ^{-1} E(∇_β g). When the K-th batch of individual data becomes available, we no longer have access to the individual data of the previous K − 1 batches but only to the summarized information (β̂_i, Σ̂_i), i = 1, 2, …, K − 1, where β̂_i is the MLE based on the i-th batch of data and Σ̂_i = V^{-1}/n_i + o_p(n_i^{-1}) is its estimated covariance matrix. We can then define an augmented log-likelihood
\[
\ell_A(\beta) = \ell_K(\beta) - \frac{1}{2} \sum_{i=1}^{K-1} (\beta - \hat\beta_i)^\top \hat\Sigma_i^{-1} (\beta - \hat\beta_i)
\]
and the corresponding MLE β̂_A = arg max ℓ_A(β). For β = β* + O_p(n^{-1/2}), using the approximation in (14), we have
\[
\ell_A(\beta) = C - \frac{n}{2} (\beta - \hat\beta_A)^\top V (\beta - \hat\beta_A) + o_p(1),
\]
where the constant C differs in different equations.
For comparison, based on the pooled data, in Case (a) we define the parametric log-likelihood as ℓ_PL(β) = Σ_{i=1}^K Σ_{j=1}^{n_i} log f(Y_ij | X_ij, β); in Case (b) we define the empirical log-likelihood function as ℓ_EL(β) = −Σ_i Σ_j log{1 + λ^⊤ g(Y_ij, X_ij; β)}, where λ solves Σ_i Σ_j g(Y_ij, X_ij; β)/{1 + λ^⊤ g(Y_ij, X_ij; β)} = 0; and in Case (c) we define the GMM log-likelihood as ℓ_GMM(β) = −(n/2) ḡ^⊤(β) Σ^{-1} ḡ(β), with ḡ(β) the overall sample mean of the estimating function. Let the log-likelihood based on the pooled data be ℓ_pool(β) = ℓ_PL(β), ℓ_EL(β), or ℓ_GMM(β) in Cases (a), (b), and (c), respectively. Then it can be shown that, for β = β* + O_p(n^{-1/2}),
\[
\ell_{pool}(\beta) = C - \frac{n}{2} (\beta - \hat\beta_{pool})^\top V (\beta - \hat\beta_{pool}) + o_p(1)
\]
for some constant C, where β̂_pool = arg max ℓ_pool(β). By comparing ℓ_pool(β) and ℓ_A(β), we obtain β̂_A − β̂_pool = o_p(n^{-1/2}). This indicates that, compared with the methods based on all individual data (parametric likelihood, EL, and GMM), the calibration method based on the last batch of individual data and the summary results of all previous batches has no first-order efficiency loss.
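The no-loss phenomenon is exact, not merely asymptotic, in the linear model, where each batch summary (β̂_i, X_i^⊤X_i) determines that batch's contribution to the pooled normal equations. A small numpy sketch (names ours):

```python
import numpy as np

def combine_batches(summaries):
    """Combine per-batch least-squares summaries without pooling raw data.

    Each batch contributes only its MLE beta_i and its information matrix
    X_i'X_i (the inverse of its covariance up to sigma^2); the augmented
    'log-likelihood' is quadratic, so its maximizer is the precision-weighted
    average -- which, for linear regression, equals pooled OLS exactly.
    """
    H = sum(XtX for XtX, _ in summaries)                  # total information
    s = sum(XtX @ beta for XtX, beta in summaries)
    return np.linalg.solve(H, s)

# Usage sketch: three batches of a linear model y = X beta + eps.
rng = np.random.default_rng(2)
beta_true = np.array([1.0, -2.0])
Xs, ys, summaries = [], [], []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ beta_true + rng.normal(size=50)
    Xs.append(X); ys.append(y)
    XtX = X.T @ X
    summaries.append((XtX, np.linalg.solve(XtX, X.T @ y)))  # (info, batch MLE)

beta_combined = combine_batches(summaries)
X_all, y_all = np.vstack(Xs), np.concatenate(ys)
beta_pooled = np.linalg.solve(X_all.T @ X_all, X_all.T @ y_all)
```

Here beta_combined and beta_pooled agree to machine precision; for nonlinear models the agreement holds only up to o_p(n^{-1/2}), as shown above.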

When nuisance parameters are present
Suppose now that the model for the i-th batch involves parameters (β, γ_i), where β is common but γ_i is a batch-specific nuisance parameter. We define ℓ_i(β, γ_i) in the same way as ℓ_i(β). Let (β̂_i, γ̂_i) be the MLE of (β, γ_i) based on the i-th batch of data, and assume that, approximately, (β̂_i, γ̂_i) is normally distributed around (β, γ_i) with covariance matrix Σ̂_i, partitioned into blocks Σ̂_{i,11}, Σ̂_{i,12}, Σ̂_{i,21}, and Σ̂_{i,22} according to (β, γ_i). We have two ways of combining information from previous studies. If we use all the previous summary information, we can define an augmented log-likelihood ℓ_A^{(1)}(β, γ_1, …, γ_K). Alternatively, with Σ̂_{i,11·2} = Σ̂_{i,11} − Σ̂_{i,12} Σ̂_{i,22}^{-1} Σ̂_{i,21}, using only the summary information on β, we can define ℓ_A^{(2)}(β, γ_K). Below we show that the MLEs of β based on these two likelihoods are actually equal to each other. In other words, there is no efficiency loss when estimating β based on ℓ_A^{(2)}(β, γ_K) instead of ℓ_A^{(1)}(β, γ_1, …, γ_K). To see this, it suffices to show that
\[
\sup_{\gamma_1, \ldots, \gamma_{K-1}} \ell_A^{(1)}(\beta, \gamma_1, \ldots, \gamma_K) = \ell_A^{(2)}(\beta, \gamma_K) + C. \tag{16}
\]
We denote the inverse matrix of Σ̂_i by Σ̂_i^{-1}, with blocks partitioned in the same way. Setting ∂ℓ_A^{(1)}/∂γ_i = 0 for i = 1, …, K − 1 and profiling out the γ_i s, where we use the definition of Σ̂_{i,11·2} in the last step, we arrive at Equation (16) after comparing the profiled expression with the definition of ℓ_A^{(2)}(β, γ_K).

Using covariate-specific disease prevalence information
As discussed in the previous section, summarized statistics from previous studies can sometimes be utilized to enhance the estimation efficiency in a current study. This is especially important in the big data era, when many types of information can be found through the internet. More specifically, suppose the prevalence of a disease is known at various levels of a known risk factor X. In this section, we combine this type of information in a case-control biased sampling setup.

Induced estimating equations under case-control sampling
Case-control sampling is among the most popular designs in cancer epidemiological studies, mainly because it is convenient, economical, and effective. In the study of rare diseases in particular, prospective sampling would require a very large sample in order to observe a reasonable number of cases, which may not be practical. Under case-control sampling, a pre-specified number of cases (n_1) and controls (n_0) are collected retrospectively from the case and control populations, respectively. Typically, this can be accomplished by sampling cases from hospitals and controls from the general disease-free population.
For a given risk factor X, let F_i(x) = pr(X ≤ x | D = i) for i = 0, 1. Given X in a range (a, b], the disease prevalence is
\[
pr(D = 1 \mid a < X \le b) = \frac{\pi\, E_1\{I(a < X \le b)\}}{\pi\, E_1\{I(a < X \le b)\} + (1 - \pi)\, E_0\{I(a < X \le b)\}},
\]
where π = pr(D = 1) and E_0 and E_1 denote the expectation operators with respect to F_0 and F_1, respectively. We assume that given covariates X and Y, the underlying disease model is given by the conventional logistic regression
\[
pr(D = 1 \mid x, y) = \frac{\exp(\alpha^* + x\beta + y\gamma + yx\xi)}{1 + \exp(\alpha^* + x\beta + y\gamma + yx\xi)}.
\]

Empirical likelihood approach
The log-likelihood, given in (19), combines the retrospective likelihood of the case-control data with the EL term Σ_i log p_i, where p_i = dF_0(x_i), i = 1, 2, …, n. The constraints require the p_i s to sum to one and to reproduce the covariate-specific prevalence information, and the Lagrange multiplier λ is determined by the corresponding profile equation. Finally, the underlying parameters can be obtained by maximizing the profile log-likelihood.
If the overall disease prevalence probability π = pr(D = 1) is known, then η = log{(1 − π)/π} is known. On the other hand, if it is unknown but I ≥ 1, then π is identifiable. If I > 1, then we have an over-identified equation problem. This can be treated as a generalization of the EL method for estimating functions (Qin & Lawless, 1994) to biased sampling problems. Qin et al. (2015) considered the case where η is unknown and I ≥ 1.
Let ω = (η, α, β, γ, ξ, λ) and let ω̂ be its maximum EL estimator. While the first estimating function g_0 corrects the biased sampling in a case-control study, the remaining estimating functions g_1, …, g_I are used to improve efficiency. As n goes to infinity, it can be shown that the limit of λ is an (I + 1)-dimensional vector whose first component is lim_{n→∞}(n_1/n) and whose remaining components are all zero. Qin et al. (2015) showed that if ρ = n_1/n_0 remains constant as n → ∞ and ρ ∈ (0, 1), then under suitable regularity conditions √n(ω̂ − ω_0) is asymptotically normally distributed with mean zero. Moreover, the estimation of the logistic regression parameters (β, γ, ξ) improves as the number I of estimating functions increases. This means that a richer set of auxiliary information leads to better estimators. In practice, however, this consideration must be balanced against the numerical difficulty of solving a larger number of equations.
Notably, the auxiliary information is informative for estimating β and ξ but not for estimating γ. To see this, note that as the underlying distribution F_0(x, y) is unspecified, after the change of variable s = γy we can treat F_0(x, s/γ) as a new underlying distribution F*_0(x, s). With F*_0 profiled out, the auxiliary information equation does not involve γ if ξ = 0. Hence, even if ξ ≠ 0, the information for γ is minimal, as γ and ξ cannot be separated.
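As a simple illustration of how a known prevalence π enters a case-control analysis, the following sketch applies the classical prior-correction of the logistic intercept. This is not the EL method of Qin et al. (2015) — it uses only π and no covariate-specific information — but it shows the role of the offset η = log{(1 − π)/π} discussed above (all names are ours):

```python
import numpy as np

def logistic_fit(X, d, iters=50):
    """Plain Newton-Raphson logistic regression (intercept in column 0)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (d - p)
        H = (X * (p * (1.0 - p))[:, None]).T @ X
        beta += np.linalg.solve(H, grad)
    return beta

def prevalence_corrected_intercept(alpha_cc, n1, n0, pi):
    """Classical prior-correction: shift the case-control intercept so that
    it refers to a population with known prevalence pi = pr(D = 1)."""
    return alpha_cc - np.log((n1 / n0) * (1.0 - pi) / pi)
```

After the shift, the intercept refers to the population scale, while the slope is unaffected by the retrospective sampling.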

Generalizations
The simulation results of Qin et al. (2015) indicate that when covariate-specific auxiliary information is employed, the estimator of the coefficient β of X enjoys the greatest variance reduction, whereas the variance reductions for the other coefficients are small. If additional auxiliary information is available, we can combine it through further estimating equations. It would be more informative if the auxiliary information also involved the response variable.

More on the use of auxiliary information
Under a logistic regression model, the case and control densities are linked by the exponential tilting model
\[
pr(x, y \mid D = 1) = e^{\alpha + x\beta + y\gamma + \xi x y}\, pr(x, y \mid D = 0). \tag{20}
\]
Suppose that for the general population E(X) = μ_1, E(Y) = μ_2, and E(XY) = μ_3 are all known, and π = pr(D = 1) is known or can be estimated using external data. Under the exponential tilting model (20), the density pr(x, y) in the general population and the density pr(x, y | D = 0) in the control population are linked by pr(x, y) = {πe^{α+xβ+yγ+ξxy} + (1 − π)} pr(x, y | D = 0).

As a consequence,
\[
E_0\big[\{\pi e^{\alpha + x\beta + y\gamma + \xi x y} + (1 - \pi)\}\, h(x, y)\big] = 0,
\]
where E_0 is an expectation with respect to pr(x, y | D = 0) and h(x, y) = (x − μ_1, y − μ_2, xy − μ_3)^⊤ with known μ_1, μ_2, and μ_3. The log-likelihood under case-control data is still (19), where the p_i s satisfy the constraints Σ_i p_i = 1 and Σ_i p_i {πe^{α+x_iβ+y_iγ+ξx_iy_i} + (1 − π)} h(x_i, y_i) = 0. More generally, any information in the general population of the form E{ψ(Y, X)} = 0 can be converted to an equation for the control population,
\[
E_0\big[\{\pi e^{\alpha + x\beta + y\gamma + \xi x y} + (1 - \pi)\}\, \psi(y, x)\big] = 0.
\]
Therefore, the results developed by Qin et al. (2015) can be applied. The results of Chatterjee et al. (2016) for case-control data can be considered as a special case of Qin et al. (2015).

Communication-efficient distributed inference
In the era of big data, it is commonplace for data analyses to run on hundreds or thousands of machines, with the data distributed across those machines and no longer available in a single central location. Recently, parallel and distributed inference has become popular in the statistical literature in both frequentist and Bayesian settings. In essence, data-parallel procedures break the overall dataset into subsets that are processed independently. To the extent that communication-avoiding procedures have been discussed explicitly, the focus has been on one-shot or embarrassingly parallel approaches that use only one round of communication, in which estimators or posterior samples are first obtained in parallel on local machines, then communicated to a centre node, and finally combined to form a global estimator or an approximation to the posterior distribution (Lee et al., 2017; Neiswanger et al., 2015; Wang & Dunson, 2015; Zhang et al., 2013). In the frequentist setting, most one-shot approaches rely on averaging (Zhang et al., 2013), where the global estimator is the average of the local estimators. Lee et al. (2017) extend this idea to high-dimensional sparse linear regression by combining local debiased Lasso estimates (van de Geer et al., 2014). Recent work by Duchi et al. (2015) shows that under certain conditions these averaging estimators can attain the information-theoretic complexity lower bound for linear regression, and that at least of order dk bits must be communicated to attain the minimax rate of parameter estimation, where d is the dimension of the parameter and k is the number of machines. This result holds even in the sparse setting (Braverman et al., 2016).

The method of Jordan et al. (2019) proceeds as follows. Suppose the big data consist of N observations stored on k machines. For convenience of presentation, we assume that each machine has n observations, i.e., N = nk.
Denote the full-data log-likelihood by

ℓ_N(θ) = (1/k) Σ_{j=1}^{k} ℓ_j(θ),

where ℓ_j(θ) is the log-likelihood based on the data from the j-th machine. For θ near its target value θ̄, Taylor expansions give

ℓ_N(θ) = ℓ_N(θ̄) + ∇ℓ_N(θ̄)^T (θ − θ̄) + R_N(θ),
ℓ_1(θ) = ℓ_1(θ̄) + ∇ℓ_1(θ̄)^T (θ − θ̄) + R_1(θ),

where R_N(θ) and R_1(θ) are remainders. Observing that R_N ≈ R_1, define a surrogate log-likelihood by replacing R_N in the first expansion with R_1. Ignoring the constant terms, the surrogate log-likelihood is

ℓ̃(θ) = ℓ_1(θ) + {∇ℓ_N(θ̄) − ∇ℓ_1(θ̄)}^T θ.

The score equation based on the surrogate likelihood is

∇ℓ_1(θ) + ∇ℓ_N(θ̄) − ∇ℓ_1(θ̄) = 0.

Let θ̃ be the solution. Expanding this score equation at the true value θ_0 shows that θ̃ is asymptotically as efficient as the full-data maximum likelihood estimator, provided the local sample size n is not too small relative to N. If we let θ̄ be the MLE based on ℓ_1(θ), so that ∇ℓ_1(θ̄) = 0, the surrogate log-likelihood simplifies to

ℓ̄(θ) = ℓ_1(θ) + ∇ℓ_N(θ̄)^T θ.

If the dimension of θ is high, one may add a penalty to the surrogate log-likelihood and estimate θ by θ̂ = arg max_{θ∈Θ} {ℓ̄(θ) − λ‖θ‖_1}, where ‖θ‖_1 is the L_1-norm of θ. Similarly, Bayesian inference can be adapted to the surrogate likelihood as well. Duan et al. (2020) proposed distributed algorithms that account for heterogeneous distributions by allowing site-specific nuisance parameters. Their methods extend the surrogate likelihood approach (Jordan et al., 2019; Wang et al., 2017) to the heterogeneous setting by applying a novel density ratio tilting method to the efficient score function. Asymptotically, the approach described in Section 6.2 on nuisance parameters is equivalent to that of Duan et al. (2020).
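To make the one-communication-round recipe concrete, the following minimal sketch implements a surrogate-likelihood estimator for logistic regression: machine 1 computes a local MLE, every machine sends back its local gradient at that point, and machine 1 then maximizes its local log-likelihood plus the linear correction term. The Newton solver, the simulated design, and all function names are illustrative assumptions, not part of Jordan et al.'s implementation.

```python
import numpy as np

def grad_hess(theta, X, y):
    """Gradient and Hessian of the average logistic log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (y - p) / len(y)
    hess = -(X.T * (p * (1.0 - p))) @ X / len(y)
    return grad, hess

def newton(score_shift, X, y, theta, iters=25):
    """Solve grad(theta) + score_shift = 0 by Newton's method."""
    for _ in range(iters):
        g, H = grad_hess(theta, X, y)
        theta = theta - np.linalg.solve(H, g + score_shift)
    return theta

def surrogate_mle(X_parts, y_parts):
    X1, y1 = X_parts[0], y_parts[0]
    d = X1.shape[1]
    # Step 1: local MLE on machine 1 (theta_bar); no communication yet.
    theta_bar = newton(np.zeros(d), X1, y1, np.zeros(d))
    # Step 2: one communication round -- every machine sends its local
    # gradient at theta_bar; average them to get the global gradient.
    N = sum(len(y) for y in y_parts)
    grad_N = sum(grad_hess(theta_bar, X, y)[0] * len(y)
                 for X, y in zip(X_parts, y_parts)) / N
    grad_1 = grad_hess(theta_bar, X1, y1)[0]
    # Step 3: solve the surrogate score equation on machine 1 only:
    # grad_1(theta) + grad_N(theta_bar) - grad_1(theta_bar) = 0.
    return newton(grad_N - grad_1, X1, y1, theta_bar)
```

Since theta_bar is taken to be the local MLE, grad_1 is numerically zero and the correction reduces to the global gradient alone, matching the simplified surrogate; iterating the same update gives a multi-round refinement.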

Renewal estimation and incremental inference
Let U(D_1, β) = ∇_β M(D_1, β) be a score function of β based on some objective function M(D_1, β) from the first batch of data, where M can be either the log-likelihood M(D_1, β) = Σ_{i=1}^{n_1} log f(y_{1i} | x_{1i}, β) or a pseudo log-likelihood.
Let β̂_1 be the solution to U(D_1, β) = 0 when only the first batch of data D_1 is available. Let D_2 denote the second batch of data. If both batches are available, let β̂_2 be the solution to the pooled score equation U(D_1, β) + U(D_2, β) = 0. Clearly, β̂_2 is the most efficient estimator of β when D_1 and D_2 are both available.
Alternatively, we may understand this renewal estimation strategy in the manner of Zhang et al. (2020), who propose estimating β by maximizing

−(n_1/2)(β − β̂_1)^T Î (β − β̂_1) + Σ_{i=1}^{n_2} log f(y_{2i} | x_{2i}, β),    (22)

where Î is a consistent estimate of the Fisher information I = E{∇_β log f(Y | X, β) ∇_β log f(Y | X, β)^T}. When both batches are available, the score of (22) for β is

−n_1 Î (β − β̂_1) + Σ_{i=1}^{n_2} ∇_β log f(y_{2i} | x_{2i}, β).
Here, we have assumed that n_1 = O(n_2) = O(n). This indicates that estimating β by maximizing (22) results in no asymptotic efficiency loss compared with the MLE based on all individual data, where the latter is infeasible in the current situation.
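As an illustration of the renewal idea, the sketch below retains only the first-batch estimate and its observed information, then updates the estimate when a second batch arrives by maximizing a quadratic approximation to the first-batch log-likelihood plus the exact second-batch log-likelihood. The logistic model and all names are illustrative assumptions under this quadratic-approximation reading, not code from Zhang et al. (2020).

```python
import numpy as np

def score_info(beta, X, y):
    """Score and observed information of the logistic log-likelihood (sums)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (y - p), (X.T * (p * (1.0 - p))) @ X

def mle(X, y, iters=25):
    """Full maximum likelihood by Newton's method (baseline, needs raw data)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        U, J = score_info(beta, X, y)
        beta = beta + np.linalg.solve(J, U)
    return beta

def renew(beta1, J1, X2, y2, iters=25):
    """Renewal update: uses only (beta1, J1) from batch 1 plus raw batch 2.

    Solves -J1 (beta - beta1) + U_2(beta) = 0, the score of
    -(1/2)(beta - beta1)' J1 (beta - beta1) + loglik(batch 2).
    """
    beta = beta1.copy()
    for _ in range(iters):
        U2, J2 = score_info(beta, X2, y2)
        beta = beta + np.linalg.solve(J1 + J2, -J1 @ (beta - beta1) + U2)
    return beta
```

The renewed estimator never revisits the individual observations of batch 1, yet it is expected to track the pooled MLE closely, reflecting the claim of essentially no efficiency loss.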

Concluding remarks
Rapid growth in hardware technology has made data collection much easier and more effective. In many applications, data often arrive in streams and chunks, which leads to batch-by-batch data or streaming data. For example, websites served by widely distributed web servers may need to coordinate many distributed clickstream analyses, e.g., to track heavily accessed web pages as part of their real-time performance monitoring. Other examples include financial applications, network monitoring, security, telecommunications data management, manufacturing, and sensor networks (Babcock et al., 2002; Nguyen et al., 2021). The continuous arrival of such data in multiple, rapid, time-varying, possibly unpredictable and unbounded streams not only yields many fundamentally new research problems but also provides various forms of auxiliary information.
Assembling information from different data sources has become indispensable in big data and artificial intelligence research, and statistical tools play an essential part in updating information. In this paper, we have presented a selective review of several traditional statistical methods, including meta-analysis, calibration information methods in survey sampling, and EL together with over-identified estimating equations and GMM. We have also briefly reviewed some recently developed statistical methods, including communication-efficient distributed statistical inference, renewal estimation, and incremental inference, which can be regarded as the latest developments of calibration information methods in the era of big data. Although these methods were developed in different fields and in different statistical frameworks, in principle they are asymptotically equivalent to well-known methods developed for meta-analysis. These methods result in little or no information loss compared with the case when the full data are available.
Finally, we apologize to people whose work has inadvertently been left out of our reference list.