Model averaging for generalized linear models in fragmentary data prediction

ABSTRACT Fragmentary data is becoming increasingly common in many areas, which brings big challenges to researchers and data analysts. Most existing methods dealing with fragmentary data consider a continuous response, while in many applications the response variable is discrete. In this paper, we propose a model averaging method for generalized linear models in fragmentary data prediction. The candidate models are fitted based on different combinations of covariate availability and sample size. The optimal weight is selected by minimizing the Kullback–Leibler loss over the complete cases, and its asymptotic optimality is established. Empirical evidence from a simulation study and a real data analysis about Alzheimer's disease is presented.


Introduction
Our study is motivated by the study of Alzheimer's disease (AD). Its main clinical features are the decline of cognitive function, mental symptoms, behavior disorders, and the gradual decline of activities of daily living. It is the most common cause of dementia among people over the age of 65, yet no prevention methods or cures have been discovered. The Alzheimer's Disease Neuroimaging Initiative (ADNI, http://adni.loni.usc.edu) is a global research program that actively supports the investigation and development of treatments that slow or stop the progression of AD. The researchers collect multiple sources of data from voluntary subjects, including cerebrospinal fluid (CSF), positron emission tomography (PET), magnetic resonance imaging (MRI) and genetics data (GENE). In addition, the mini-mental state examination (MMSE) score is collected for each subject, which is an important diagnostic criterion for AD. Our target is to establish a model focusing on AD prediction (Alzheimer's or not). This task is relatively easy if the data are fully observed. However, in the ADNI data, not all the data sources are available for each subject. As we can see from Table 2 in Section 5, among the total of 1170 subjects, only 409 have all the covariate data available, 368 do not have the GENE data, 40 do not have the MRI data, and so on. Such "fragmentary data" is nowadays very common in medical studies, risk management, marketing research and the social sciences (Fang et al., 2019; Zhang et al., 2020b; Xue and Qu, 2021; Lin et al., 2021). But the extremely high missing rate and complicated missing patterns bring big challenges to the analysis of fragmentary data.
In this paper we discuss model averaging methods for fragmentary data prediction. Model averaging was historically proposed as an alternative to model selection. The most well-known model selection methods include AIC (Akaike, 1970), Mallows' C_p (Mallows, 1973), BIC (Schwarz, 1978), the lasso (Tibshirani, 1996), the smoothly clipped absolute deviation (Fan and Li, 2001), sure independence screening (Fan and Lv, 2008) and so on.
Model averaging, unlike most variable selection methods which focus on identifying a single "correct model", aims at prediction accuracy given several predictors (Ando and Li, 2014). Without "putting all inferential eggs in one unevenly woven basket" (Longford, 2005), model averaging takes all the candidate models into account and makes predictions by a weighted average. It can be classified into Bayesian and frequentist model averaging. In this paper, we focus on frequentist model averaging (Buckland et al., 1997; Yang, 2001, 2003; Hjort and Claeskens, 2003; Leung and Barron, 2006; Hansen, 2007, among many others) and refer readers interested in Bayesian model averaging to Hoeting et al. (1999) and the references therein. Researchers have developed many frequentist model averaging methods over the past two decades. To name just a few, the smoothed AIC and smoothed BIC (Buckland et al., 1997), Mallows model averaging (Hansen, 2007), jackknife model averaging (Hansen and Racine, 2012) and heteroskedasticity-robust C_p (Liu and Okui, 2013) mainly focus on low dimensional linear models. Ando and Li (2014) and Zhang et al. (2020a) consider least squares model averaging with high dimensional data. For more complex models, there are model averaging methods for generalized linear models (Zhang et al., 2016; Ando and Li, 2017), quantile regression (Lu and Su, 2015), semiparametric "model averaging marginal regression" for time series (Li et al., 2015; Chen et al., 2018), covariance matrix estimation (Zheng et al., 2017), varying-coefficient models (Li et al., 2018; Zhu et al., 2019), vector autoregressions (Liao et al., 2019), semiparametric model averaging for a dichotomous response (Fang et al., 2020), and so on.
All the model averaging methods mentioned above assume that the data are fully observed and cannot be applied to fragmentary data directly. Due to the extremely high missing rate and complex missing patterns, the traditional missing data techniques such as imputation and inverse propensity weighting (Little and Rubin, 2002; Kim and Shao, 2013) cannot be applied efficiently either. Recently, Zhang et al. (2020b), Xue and Qu (2021) and Lin et al. (2021) have developed methods for block-wise missing or individual-specific missing data, but they only consider a continuous response.
On the other hand, Schomaker et al. (2010) and Dardanoni et al. (2011, 2015) propose model averaging methods based on imputation without asymptotic optimality. Zhang (2013) proposes a model averaging method for linear models that imputes the missing data by zeros. Liu and Zheng (2020) extends it to generalized linear models. In the context of fragmentary data, Fang et al. (2019) proposes a model averaging method that selects weights by cross-validation on the complete cases and shows its advantage over the previous model averaging methods. Ding et al. (2021) extends it to multiple quantile regression. Asymptotic optimality is established for the last four methods, but except for Liu and Zheng (2020) they are only applicable to a continuous response.
In this paper, we propose a model averaging method for fragmentary data prediction in generalized linear models. The candidate models are fitted based on different combinations of covariate availability and sample size. The optimal weight is selected by minimizing the Kullback–Leibler loss over the complete cases, and its asymptotic optimality is established. Unlike the method in Fang et al. (2019), our method does not need to refit the candidate models in the complete cases for weight selection. Empirical results from a simulation study and a real data analysis about Alzheimer's disease show the superiority of the proposed method.
The paper is organized as follows. Section 2 discusses the proposed method in detail. Asymptotic optimality is established in Section 3. Empirical results of a simulation study and a real data analysis are presented in Section 4 and Section 5, respectively. Section 6 concludes the paper with some remarks. All the proofs are provided in the Appendix.

The Proposed Method
For illustration, we consider the fragmentary data in Fang et al. (2019) as presented in Table 1. A random sample consists of n subjects with a response variable Y and a covariate set D_i ⊆ D = {X_1, ..., X_p} available for subject i, with D_i ≠ Φ for any i, where Φ denotes the empty set. Let ∆_1, ..., ∆_K denote the K distinct covariate patterns among D_1, ..., D_n; K is at most 2^p − 1 but may be much smaller than 2^p − 1. For notational simplicity, throughout the paper we also use D_i or ∆_k to denote the set of indices of the covariates in D_i or ∆_k. Let T_k = {i : D_i = ∆_k} be the set of subjects whose covariate pattern is exactly ∆_k, and assume that the subjects have been rearranged so that max_{i∈T_k} i = min_{i∈T_{k+1}} i − 1. Denote S_k = {i : D_i ⊇ ∆_k} as the subject set with all covariates in ∆_k being available (see Table 1 for an illustration). Our target is to make prediction given the fragmentary data {(y_i, x_ij) : j ∈ D_i, i = 1, ..., n}, where y_i and x_ij are observations of Y and X_j whenever they are observed. Specifically, consider that Y given X = x follows the exponential family density

f(y | x) = exp[{y θ(x) − b(θ(x))}/φ + c(y, φ)],   (1)

for some known functions b(·), c(·, ·) and a known dispersion parameter φ. The canonical parameter θ(·) is unknown. For a new subject with available covariate data x* from pattern D*, we aim to predict θ(x*). Without loss of generality, we assume that ∆_1 = D = {1, 2, ..., p}. Then S_1 = T_1 is the CC (complete cases) sample in the missing data terminology. Similar to Fang et al. (2019), we mainly focus on prediction of θ(x*) with x* = (x*_1, ..., x*_p)^T from pattern ∆_1, i.e., D* = ∆_1 = D. Any x* from another pattern ∆_k can be handled in the same way by ignoring the covariates not in ∆_k, which will be illustrated in the real data analysis.
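The pattern bookkeeping above can be sketched in code. This is our own illustration (the paper provides no code), using a toy availability structure in place of Table 1; the function name and data are assumptions.

```python
# Sketch (ours, not from the paper): derive the pattern sets T_k and the
# nested subject sets S_k = {i : D_i ⊇ Δ_k} from per-subject availability.
# Covariates are indexed 1..p; D[i] is the covariate set available to subject i.

def pattern_sets(D):
    """D: list of frozensets (available covariate indices per subject).
    Returns (patterns, T, S): patterns[k] = Δ_k ordered from largest to
    smallest, T[k] = subjects whose availability equals Δ_k exactly, and
    S[k] = subjects for whom all covariates in Δ_k are available."""
    patterns = sorted({frozenset(d) for d in D},
                      key=lambda s: (-len(s), sorted(s)))
    T = [[i for i, d in enumerate(D) if frozenset(d) == delta]
         for delta in patterns]
    S = [[i for i, d in enumerate(D) if frozenset(d) >= delta]  # superset test
         for delta in patterns]
    return patterns, T, S

# Toy data: 5 subjects, p = 3 covariates, 3 distinct patterns.
D = [frozenset({1, 2, 3}), frozenset({1, 2, 3}),
     frozenset({1, 2}), frozenset({1}), frozenset({1})]
patterns, T, S = pattern_sets(D)
# With Δ_1 = {1, 2, 3} (complete cases), S_1 = T_1 = {subjects 0, 1},
# while the smallest pattern Δ_3 = {1} is available for every subject.
```

Note how the S_k grow as the patterns shrink: this is exactly the covariate/sample-size trade-off that generates the candidate models.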
As discussed in Fang et al. (2019), there exists a natural trade-off between the covariates included in the prediction model and the available sample size. Taking Table 1 as an example, if we want to include all the 8 covariates in the model, only subjects 1 and 2 can be used without imputation. But if we only include the first covariate in the model, all the 10 subjects can be used. This trade-off naturally prepares a sequence of candidate models for model averaging.
Specifically, we can fit a generalized linear model M_k on the data {(y_i, x_ij) : i ∈ S_k, j ∈ ∆_k}, where M_k specifies the canonical parameter as θ^(k)(x) = x_{∆_k}^T β^(k), and then combine the prediction results from all the candidate models. Denote the maximum likelihood estimator of β^(k) by β̂^(k). Note that we do not assume that the true model θ(X) in (1) is indeed a linear function of X. Thus, all the candidate models can be misspecified. For a new subject with covariate data x*, let

θ̂(w, x*) = Σ_{k=1}^K w_k x*_{∆_k}^T β̂^(k),

where w = (w_1, ..., w_K)^T belongs to the simplex H_n = {w ∈ [0, 1]^K : Σ_{k=1}^K w_k = 1}, be the model averaging estimator of θ(x*). Our weight choice criterion is motivated by the Kullback–Leibler (KL) loss in Zhang et al. (2016) and is defined as follows. Denote the true value of the canonical parameter on the CC sample by θ*_{S_1} = (θ(x_i), i ∈ S_1)^T and its mean vector by µ_{S_1} = E(y_{S_1}), which is independent of y. The KL loss of θ̂(w) is

J(w) = (2/φ) Σ_{i∈S_1} [b{θ̂(w, x_i)} − b{θ(x_i)} − µ_i {θ̂(w, x_i) − θ(x_i)}],   (2)

where µ_i = E(y_i). As Zhang et al. (2016) discussed, we could obtain a weight vector by minimizing J(w) given µ_{S_1}. However, it is infeasible in practice to do so since µ_{S_1} is unknown. Instead, we replace µ_{S_1} by y_{S_1} = (y_i, i ∈ S_1)^T, drop the terms not involving w, and add a penalty term to avoid overfitting, which gives us the following weight choice criterion:

G(w) = (2/φ) Σ_{i∈S_1} [b{θ̂(w, x_i)} − y_i θ̂(w, x_i)] + λ_n Σ_{k=1}^K w_k p_k,   (3)

where λ_n Σ_{k=1}^K w_k p_k is the penalty term, p_k is the number of covariates in M_k, and λ_n is a tuning parameter that usually takes the value 2 or log(n_1), with n_1 = |S_1|. The optimal weight vector is defined as ŵ = argmin_{w∈H_n} G(w). (4)

Remark 1: Basically, our idea is to use all available data to estimate the parameters of each candidate model and to use the CC data to construct the optimal weights. This is similar to Fang et al. (2019), which deals with linear models for fragmentary data. However, unlike Fang et al. (2019), our proposed method does not need to refit the candidate models in the CC data to decide the optimal weight. Similar to Zhang (2013), Liu and Zheng (2020) selects weights by applying the KL loss to the entire data with unavailable covariate data replaced by zeros, which does not perform quite well in the empirical studies.
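A minimal numerical sketch of the weight selection for the logistic case, where b(θ) = log(1 + e^θ) and φ = 1. This is our illustration, not the authors' code: the toy data, the simple Newton fitter, and all names are assumptions, and scipy's SLSQP handles the simplex constraint on w.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy fragmentary data: p = 2 covariates, Δ_1 = {1, 2} observed for the first
# 100 subjects (the CC sample S_1), Δ_2 = {1} observed for all n = 200.
n = 200
X = rng.normal(size=(n, 2))
theta_true = 0.8 * X[:, 0] + 0.5 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-theta_true)))
S1 = np.arange(100)   # complete cases
S2 = np.arange(n)     # Δ_2 available for everyone

def fit_logistic(Xm, ym, iters=30):
    """Plain Newton iterations for a logistic GLM (no intercept, for brevity)."""
    beta = np.zeros(Xm.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-Xm @ beta))
        W = mu * (1 - mu)
        beta += np.linalg.solve(Xm.T @ (W[:, None] * Xm) + 1e-8 * np.eye(Xm.shape[1]),
                                Xm.T @ (ym - mu))
    return beta

# Each candidate model is fitted on its own available sample S_k.
b1 = fit_logistic(X[S1], y[S1])          # uses both covariates, CC data only
b2 = fit_logistic(X[S2][:, :1], y[S2])   # uses covariate 1, all subjects
# Candidate linear predictors evaluated on the CC sample S_1.
TH = np.column_stack([X[S1] @ b1, X[S1][:, :1] @ b2])
p_k = np.array([2.0, 1.0])               # model dimensions for the penalty
lam = 2.0                                # λ_n = 2

def G(w):
    """Penalized KL criterion on the CC data; logistic b(θ) = log(1 + e^θ), φ = 1."""
    th = TH @ w
    return 2 * np.sum(np.log1p(np.exp(th)) - y[S1] * th) + lam * (w @ p_k)

res = minimize(G, x0=np.full(2, 0.5), bounds=[(0, 1)] * 2,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
w_hat = res.x
```

Since G is convex in w (b is convex and θ̂ is linear in w), the constrained minimization is well behaved even with many candidate models.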
Asymptotic Optimality

Conditions (C1)-(C3) are similar to Conditions (C.1)-(C.3) in Zhang et al. (2016). What is slightly different is the order O(n_1) rather than O(n). This is reasonable because our weight selection is based on the CC (i ∈ S_1) data with sample size n_1. Condition (C3) requires that ξ_n grows at a rate no slower than n_1^{1/2}, which is the same as the third part of Condition (A7) of Zhang et al. (2014), and is also implied by Conditions (7) and (8) of Ando and Li (2014). Condition (C3) is imposed in order to obtain the asymptotic optimality and is slightly stronger than ξ_n → ∞. Note that Theorem 3.1 holds for both λ_n = 2 and λ_n = log(n_1). These two versions of the model averaging method are both applied in Section 4 and Section 5.

Simulation
In this section, we conduct a simulation study to compare the finite sample performances of the following methods:
• CC: a generalized linear regression using only the subjects for which all the covariates are available.
• SAIC & SBIC: use the smoothed AIC and smoothed BIC in Buckland et al. (1997) to decide the model weights.
• IMP: the zero imputation method in Liu and Zheng (2020). We use IMP1 and IMP2 to denote the IMP method with λ_n = 2 and log(n_1), respectively.
• GLASSO: the method using the CC data and the group lasso of Meier et al. (2008) to select covariates, then fitting a model with the subjects that have all the selected covariates available.
• OPT: the proposed method. We use OPT1 and OPT2 to denote the OPT method with λ_n = 2 and log(n_1), respectively.
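For reference, the SAIC/SBIC competitors form weights proportional to exp(−IC_k/2) (Buckland et al., 1997). A small sketch, with the function name and toy criterion values our own:

```python
import numpy as np

def smoothed_ic_weights(ic):
    """Smoothed information-criterion weights w_k ∝ exp(-IC_k / 2),
    computed stably by subtracting the minimum IC first (Buckland et al., 1997)."""
    ic = np.asarray(ic, dtype=float)
    u = np.exp(-(ic - ic.min()) / 2)
    return u / u.sum()

# Toy AIC values for three candidate models.
w = smoothed_ic_weights([100.0, 102.0, 110.0])
# The model with the smallest criterion value receives the largest weight;
# a gap of 2 between two models gives a weight ratio of e.
```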
The data are generated as follows. A binary y_i is generated from the model Binomial(1, p_i) with p = 14 and coefficient vector β, and x_i = (x_i1, ..., x_ip)^T is generated from a multivariate normal distribution with E(x_ij) = 1, Var(x_ij) = 1, and Cov(x_ij1, x_ij2) = ρ for j_1 ≠ j_2, where ρ = 0.3, 0.6 or 0.9, and the sample size is n = 400 or 800.
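As a sketch (ours, not the authors' code), the covariate generation and the grouped availability rule described in the next paragraph can be written as follows; we generate only the covariates and availability indicators, leaving the response model aside, and the mapping of the paper's covariate indices to 0-based columns is our choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 400, 0.6
q = 13  # the 13 non-intercept covariates X_2, ..., X_14

# Equicorrelated covariates with mean 1 and variance 1.
Sigma = np.full((q, q), rho) + (1 - rho) * np.eye(q)
X = rng.multivariate_normal(np.ones(q), Sigma, size=n)

observed = np.ones((n, q), dtype=bool)
for s in range(3):
    # Group s+1 covers columns 4s..4s+3 (the paper's X_{4s+2}..X_{4s+5});
    # the whole group is observed only when its leading covariate is < 1.
    lead = 4 * s
    observed[:, lead:lead + 4] = (X[:, lead] < 1)[:, None]
observed[:, 12] = False  # the last covariate is never available

# Share of complete cases among the 12 grouped covariates.
cc_rate = np.mean(observed[:, :12].all(axis=1))
```

With E(x_ij) = 1 each leading covariate falls below 1 with probability 1/2, and the positive correlation raises the joint probability, which is consistent with the reported CC shares growing with ρ.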
To mimic the situation that all candidate models are misspecified, we pretend that the last covariate is not available for any candidate model. The remaining 12 covariates other than the intercept are divided into 3 groups, where the sth group consists of X_{4(s-1)+2} to X_{4s+1}, s = 1, 2, 3. The covariates in the sth group are available for a subject if the first covariate of the group satisfies X_{4(s-1)+2} < 1, which results in K = 8 candidate models. The percentages of CC (S_1) data are 19%, 25.5% and 38.8% for ρ = 0.3, 0.6 and 0.9, respectively. We consider the prediction when D* = D and use the KL loss (divided by n_1) defined in (5) for assessment. The number of simulation runs is 200. Figures 1 to 3 present the KL loss boxplots for each method under the different simulation settings. The main conclusions are as follows.
(1) The SAIC, SBIC and CC methods perform much worse than OPT1 and OPT2. In many situations, these three methods perform quite similarly, indicating that SAIC and SBIC tend to select the model with more covariates and smaller sample size (M_1 with the CC data).
(2) The zero imputation methods IMP1 and IMP2 generally do not perform as well as the proposed methods OPT1 and OPT2. Some exceptions occur when n and ρ are small (for example, the first panel in Figure 1), in which case replacing the unavailable covariates by zeros has a relatively small effect on the prediction.
(3) The performance of GLASSO is also worse than that of the proposed methods, which shows that model selection does not work well when the models are misspecified.
(4) The proposed method OPT1 produces the lowest KL loss in most situations.

A Real Data Example
To illustrate the application of our proposed method, we consider the ADNI data, which is available at http://adni.loni.usc.edu. The ADNI data contains three different phases: ADNI1, ADNIGO, and ADNI2. In this paper, we use ADNI2, in which some new data are added. For every subject, different visits at longitudinal time points are recorded, and here we focus on the baseline data. The response is Y = 1 if the MMSE score is no less than 28 and Y = 0 otherwise. The data is high dimensional and may contain variables with redundant information. Thus, we first use correlation screening to select the features that are most likely to be related to the response variable. All 3 variables in CSF are kept, and 10 variables each are screened for PET, MRI and GENE. We also tried other numbers of variables but found that this screening procedure gave the smallest KL loss. To compare the prediction performances of the methods considered in the simulation, we randomly select 75% of the subjects from each response pattern and combine them into a training set for model fitting, and use the rest of the subjects as the test data for performance evaluation. For each of the considered methods, we use the training data to fit the model, apply it to the test data, and compute the KL loss of the predictions on the test data. We repeat this procedure independently for 100 replications.
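The pattern-stratified 75%/25% split can be sketched as follows. This is our illustration rather than the authors' code, and the function name and toy pattern labels are assumptions.

```python
import random

def stratified_split(pattern_of, train_frac=0.75, seed=0):
    """Split subject indices into train/test, sampling train_frac of the
    subjects within each response pattern so that every pattern is
    represented in the training data.
    pattern_of: dict mapping subject index -> hashable pattern label."""
    rng = random.Random(seed)
    by_pattern = {}
    for i, pat in pattern_of.items():
        by_pattern.setdefault(pat, []).append(i)
    train, test = [], []
    for members in by_pattern.values():
        members = sorted(members)
        rng.shuffle(members)
        cut = max(1, round(train_frac * len(members)))
        train += members[:cut]
        test += members[cut:]
    return sorted(train), sorted(test)

# Toy example: 12 subjects in two availability patterns.
pattern_of = {i: ("full" if i < 8 else "no_GENE") for i in range(12)}
train, test = stratified_split(pattern_of)
```

Sampling within each pattern, rather than over the pooled subjects, prevents a rare pattern from landing entirely in the test set.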
Note that in this real data analysis, we do not only consider the prediction with D* = D. For D* being a subset of {CSF, PET, MRI, GENE}, the proposed method ignores the covariates not in D* for modeling and prediction. For example, when D* = {PET, MRI, GENE}, the covariates from CSF are ignored and only 5 candidate models are considered. More details on this kind of procedure can be found in Fang et al. (2019).
Figure 4 displays boxplots of the KL losses over 100 replications for different methods.The boxplots for IMP1 and IMP2 are not shown in the figure because their KL losses are too large.The proposed methods OPT1 and OPT2 outperform the other methods.

Concluding Remarks
Fragmentary data is becoming increasingly common in many areas and is not easy to handle. Most existing methods dealing with fragmentary data consider a continuous response, while in many applications the response variable is discrete. We propose a model averaging method to deal with fragmentary data under generalized linear models. The asymptotic optimality is established, and empirical results from a simulation study and a real data analysis about Alzheimer's disease show the superiority of the proposed method.
There are several topics for future study. First, the covariate dimension p and the number of candidate models K are assumed to be fixed. The asymptotic optimality with diverging p and K needs further investigation. Second, we do not focus on the comparison of λ_n = 2 and λ_n = log(n_1). Which tuning parameter should we use in practice? In fact, how to choose the best tuning parameter for model averaging is still a challenging problem even under linear models. Third, we assume the overall model belongs to an exponential family, which is still restrictive. The extension to more general models deserves further study.

Figure 4. The KL loss of all the methods in 100 replications for the ADNI data.

Table 2. Response patterns and sample sizes for the ADNI data.