Predicting dichotomised outcomes from high-dimensional data in biomedicine

In many biomedical applications, we are more interested in the predicted probability that a numerical outcome is above a threshold than in the predicted value of the outcome. For example, it might be known that antibody levels above a certain threshold provide immunity against a disease, or a threshold for a disease severity score might reflect conversion from the presymptomatic to the symptomatic disease stage. Accordingly, biomedical researchers often convert numerical to binary outcomes (loss of information) to conduct logistic regression (probabilistic interpretation). We address this bad statistical practice by modelling the binary outcome with logistic regression, modelling the numerical outcome with linear regression, transforming the predicted values from linear regression to predicted probabilities, and combining the predicted probabilities from logistic and linear regression. Analysing high-dimensional simulated and experimental data, namely clinical data for predicting cognitive impairment, we obtain significantly improved predictions of dichotomised outcomes. Thus, the proposed approach effectively combines binary with numerical outcomes to improve binary classification in high-dimensional settings. An implementation is available in the R package cornet on GitHub (https://github.com/rauschenberger/cornet) and CRAN (https://CRAN.R-project.org/package=cornet).


Introduction
Many diagnostic and prognostic problems in biomedicine are essentially binary classification tasks. A binary outcome splits samples into two groups of interest. Some binary outcomes are naturally binary, whereas others are artificially binary [29]. We focus on artificial binary outcomes that result from the dichotomisation of numerical outcomes with a single threshold. Such binary variables indicate whether the underlying measurements are greater than a given cut-off value.
While there are strong reservations against outcome dichotomisation in the statistical literature [3,8,14,15,19,26], it remains popular in empirical research. The main problem for prediction is that dichotomising a numerical outcome implies a loss of information equivalent to discarding a certain proportion of the data [2], although it might simplify the understanding and communication of results [6,7] or increase robustness against contamination [25]. In our experience, researchers often underestimate the disadvantages or overestimate the advantages of dichotomisation.
However, many biomedical applications require predicted probabilities rather than predicted values. Suppose there is a critical transition if y > c, where y denotes a clinical outcome, and c denotes a threshold. Then we would want to predict P(y > c) rather than y. Typically, the prediction y = c means that the probability of the critical transition is about 50%, but other predictions (y ≠ c) are more difficult to interpret, because they only tell whether the probability is below or above 50%. Even if the threshold is only an estimate of the tipping point where the critical transition occurs, we might want to predict the probability that the outcome will exceed this threshold, e.g. to make or to predict a treatment decision. (This also holds for arbitrary thresholds. Suppose a clinical protocol requires mechanical ventilation if the oxygen level falls below a certain value: even if the patient could cope with lower values, we might want to predict whether the physician will use a ventilator.) The analysis of modern biomedical data, typically including some hundred samples but many thousand features, requires new statistical methods. In this paper, we propose an approach to obtain improved predictions of dichotomised outcomes in high-dimensional settings (i.e. settings with many more features than samples).
The same problem has previously been addressed in low-dimensional settings [5,13,28] (i.e. settings with many fewer features than samples). Although the proposed approach is novel, we consider these and other related methods for possible extensions (see Section 5). There are methods that address different problems but also combine binary and numerical outcomes, such as risk estimation for dichotomised outcomes [27], bivariate regression for binary-continuous outcomes [4,9], odds ratios for linear regression [17], and ordinal logistic regression [12]. A recurrent idea is to exploit information from the numerical outcome and provide interpretation for the binary outcome.
This manuscript describes a straightforward approach to predict dichotomised outcomes from high-dimensional data. A more complex predictive method (e.g. random forests or neural networks for obtaining predicted values) together with a calibration method (e.g. Platt scaling or isotonic regression for transforming predicted values to predicted probabilities) could provide more predictive models, but these would be less interpretable ('black box'). We solve this specific prediction problem by combining two familiar methods (linear and logistic regression with lasso or ridge regularisation), leading to models that are not only predictive but also interpretable.

Overview
Our goal is to predict the (artificial) binary outcome, rather than the (natural) numerical outcome, from many features. For any sample, we either know or ignore both outcomes. Our strategy is to learn from the samples with observed outcomes how the features affect both outcomes, in order to predict the binary outcome of the samples with unobserved outcomes. A challenge in supervised learning (especially in high-dimensional settings) is to avoid overfitting, which occurs if the model fits well to the observed data but not to unobserved data. This is how we model the two outcomes based on many features:
• two outcomes: In the generalised linear model framework, a suitable approach for binary outcomes is logistic regression, and a suitable approach for numerical outcomes is linear regression. Given both types of outcomes, we can fit both regression models. In most cases, linear regression is the better choice, because the numerical outcome is normally more informative than the binary outcome (see Examples 1 and 2 in Section 3.3). In some cases, however, logistic regression is the better choice, because it is more robust against departures from linearity and normality (see Examples 3 and 4 in Section 3.3). While logistic regression returns predicted probabilities, linear regression returns predicted values.
• many features: In low-dimensional settings without strong multicollinearity, we could estimate the regression coefficients by maximising the likelihood function. But in high-dimensional settings, which include many more features than samples, we need to regularise the regression coefficients. The lasso and ridge penalties, whose weighted sum is the elastic net penalty [32], increase with the absolute or squared values of the coefficients, respectively. Both penalties shrink the coefficients towards zero (regularisation), but only the lasso penalty sets coefficients equal to zero (variable selection).
We combine logistic and linear regression, with lasso or ridge regularisation, to predict dichotomised outcomes. This leads to two estimated effects for each feature, one from logistic regression and one from linear regression, and two predictions for each sample, one from logistic regression and one from linear regression. The proposed approach transforms the predicted values from linear regression to predicted probabilities and combines these predicted probabilities with those from logistic regression. Figure 1 illustrates the workflow.

Data
We observe one numerical outcome and p features for n samples. Let i in {1, …, n} index the samples, and let j in {1, …, p} index the features. For each i and j, let y_i denote the outcome for sample i, and let x_ij denote feature j for sample i. Then the vector y = (y_1, …, y_n) represents the outcome, and the n × p matrix X represents the features. We focus on high-dimensional settings, where p ≫ n. To prepare the data for penalised regression, we standardise all features (zero mean, unit variance).
Given a predefined threshold for dichotomising the numerical outcome, samples with an outcome above this threshold are in class 1, and all other samples are in class 0. For each sample i, let z_i indicate whether the numerical outcome y_i is greater than the threshold c, or formally z_i = I[y_i > c]. Then the vector z = (z_1, …, z_n) represents the binary outcome. Since the transformation of y to z is non-invertible, y is at least as informative as z (but typically much more informative).
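The dichotomisation and standardisation steps can be sketched as follows. This is an illustrative Python stand-in (the published implementation is the R package cornet), with made-up toy values for y and the threshold c:

```python
import math

# Hypothetical toy data: numerical outcome y and predefined threshold c.
y = [24.0, 27.5, 22.0, 29.0, 26.0]
c = 25.0

# Dichotomise: z_i = I[y_i > c].
z = [1 if yi > c else 0 for yi in y]

def standardise(x):
    """Scale a feature column to zero mean and unit variance."""
    m = sum(x) / len(x)
    s = math.sqrt(sum((xi - m) ** 2 for xi in x) / len(x))
    return [(xi - m) / s for xi in x]
```

Because z is a non-invertible function of y, the sketch keeps both vectors: y for linear regression and z for logistic regression.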

Logistic regression
We relate the binary outcome to the features through logistic regression:

logit(P(z_i = 1)) = γ_0 + Σ_{j=1}^{p} γ_j x_ij,

where γ_0 is the unknown intercept, and {γ_1, …, γ_p} are the unknown slopes. The latter represent the effects of the features on the log-odds of the binary outcome. Given the estimated coefficients γ̂ = (γ̂_0, …, γ̂_p), the predicted probabilities are ẑ = (ẑ_1, …, ẑ_n), where ẑ_i = logit⁻¹(γ̂_0 + Σ_{j=1}^{p} γ̂_j x_ij). For logistic regression, we use the logistic deviance as loss function:

L_log(γ) = −(2/n) Σ_{i=1}^{n} [z_i log(ẑ_i) + (1 − z_i) log(1 − ẑ_i)],

which tends to zero if the predicted probabilities ẑ approach 1 for positives (z_i = 1) and 0 for negatives (z_i = 0).
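As a minimal sketch, the mean logistic deviance can be computed as follows. The Python function is an illustration of the loss, not the cornet implementation:

```python
import math

def logistic_deviance(z, z_hat):
    """Mean logistic deviance: -(2/n) * sum of Bernoulli log-likelihoods."""
    n = len(z)
    return -2.0 / n * sum(
        zi * math.log(pi) + (1 - zi) * math.log(1 - pi)
        for zi, pi in zip(z, z_hat)
    )
```

Uninformative predictions (all 0.5) give a deviance of 2·log(2) ≈ 1.386, while near-perfect predictions drive the deviance towards zero.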

Linear regression
We relate the numerical outcome to the features through linear regression:

y_i = β_0 + Σ_{j=1}^{p} β_j x_ij + ε_i,

where β_0 is the unknown intercept, and {β_1, …, β_p} are the unknown slopes. The latter represent the effects of the features on the numerical outcome. Given the estimated coefficients β̂ = (β̂_0, …, β̂_p), the predicted values are ŷ = (ŷ_1, …, ŷ_n), where ŷ_i = β̂_0 + Σ_{j=1}^{p} β̂_j x_ij. For linear regression, we use the mean squared error as loss function:

L_lin(β) = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²,

which tends to zero if the predicted values ŷ approach the numerical outcomes y.

Parameter regularisation
We estimate the logistic and linear regression models by penalised maximum likelihood using lasso (L1) or ridge (L2) regularisation, which are generalised by elastic net regularisation [32]. Following the notation from [10], the penalties for logistic and linear regression are equal to

λ_0 Σ_{j=1}^{p} [(1 − α)/2 γ_j² + α|γ_j|]  and  λ_1 Σ_{j=1}^{p} [(1 − α)/2 β_j² + α|β_j|],

where λ_0 and λ_1 are the regularisation parameters (λ_0 ≥ 0, λ_1 ≥ 0), and α is the elastic net mixing parameter (0 ≤ α ≤ 1). The elastic net penalty collapses to the lasso or ridge penalty if α equals 1 or 0, respectively. We use the lasso penalty to estimate sparse models and the ridge penalty to estimate dense models, but it would also be possible to select α by tuning or to combine multiple α by stacking [23].
The penalised loss functions for logistic and linear regression are the sums of the respective loss and penalty functions:

M_log(γ|λ_0, α) = L_log(γ) + λ_0 Σ_{j=1}^{p} [(1 − α)/2 γ_j² + α|γ_j|],
M_lin(β|λ_1, α) = L_lin(β) + λ_1 Σ_{j=1}^{p} [(1 − α)/2 β_j² + α|β_j|].

Given an elastic net mixing parameter α and the regularisation parameters λ_0 and λ_1, we can estimate the coefficients γ and β.
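The penalised loss for linear regression can be sketched as follows, with the mean squared error as loss and the elastic net penalty evaluated over the slopes (the intercept is unpenalised). Again, this Python is only an illustration, not the cornet or glmnet implementation:

```python
def mean_squared_error(y, y_hat):
    """L_lin(beta): mean of the squared residuals."""
    return sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat)) / len(y)

def elastic_net_penalty(coef, lam, alpha):
    """lam * sum_j [(1 - alpha)/2 * coef_j^2 + alpha * |coef_j|].

    alpha = 1 gives the lasso penalty, alpha = 0 the ridge penalty;
    pass only the slopes, since the intercept is excluded.
    """
    return lam * sum((1 - alpha) / 2 * b * b + alpha * abs(b) for b in coef)

def penalised_loss(y, y_hat, coef, lam, alpha):
    """M_lin(beta | lambda_1, alpha) = L_lin(beta) + elastic net penalty."""
    return mean_squared_error(y, y_hat) + elastic_net_penalty(coef, lam, alpha)
```

The analogous penalised loss for logistic regression replaces the mean squared error by the logistic deviance.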

Model combination
We aim to improve the predicted probabilities from penalised logistic regression by accounting for the predicted values from penalised linear regression. Since the predicted values from linear regression are unbounded real numbers, we transform them to the unit interval via the Gaussian cumulative distribution function:

Φ(ŷ_i | μ, σ²) = Φ((ŷ_i − μ)/σ),

where Φ denotes the standard Gaussian cumulative distribution function, μ is the mean (μ = c), and σ² is an optimisable variance (σ² ≥ 0). This corresponds to the probit link, one of the two most common link functions for binary regression, with a fixed mean parameter for the threshold and a free variance parameter for calibration.
The crucial difference to probit regression is that we do not model the binary outcome and transform the linear predictor to predicted probabilities; instead, we model the numerical outcome and transform the predicted values to predicted probabilities. If the predicted value ŷ_i is greater than the threshold c, the probability Φ(ŷ_i | μ = c, σ²) is greater than 0.5. The variance σ² calibrates the probabilities: these diverge to 0 and 1 as σ² decreases, and converge to 0.5 as σ² increases. Intuitively, this transformation 'confidently' assigns samples to classes if σ² is small and 'hesitantly' if σ² is large.
For each sample i, we combine the predicted probability ẑ_i from logistic regression and the predicted value ŷ_i from linear regression:

p̂_i = (1 − π) ẑ_i + π Φ(ŷ_i | μ = c, σ²),

where π is an optimisable weight (0 ≤ π ≤ 1). The weighting provides a compromise between the probabilities from penalised logistic regression and the calibrated probabilities from penalised linear regression. By construction, the combined values p̂ = (p̂_1, …, p̂_n) are interpretable as probabilities. As the weight π increases, the contribution of logistic regression decreases, and the contribution of linear regression increases. The combined predicted probability p̂_i is completely determined by logistic or linear regression if π equals 0 or 1, respectively. Again, we use the logistic deviance as loss function:

L_com(π, σ²) = −(2/n) Σ_{i=1}^{n} [z_i log(p̂_i) + (1 − z_i) log(1 − p̂_i)],

which tends to zero if the predicted probabilities p̂ approach 1 for positives (z_i = 1) and 0 for negatives (z_i = 0). In short, we combine the predicted probabilities from logistic regression (ẑ) and the predicted values from linear regression (ŷ) to the predicted probabilities p̂, and we propose to interpret these combined predicted probabilities.
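The calibration and combination steps above can be sketched as follows, evaluating the Gaussian cumulative distribution function via the error function. This is an illustrative Python stand-in for the R implementation in cornet:

```python
import math

def gaussian_cdf(x, mu, sigma):
    """Phi((x - mu) / sigma), computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def combine(z_hat, y_hat, c, sigma, pi):
    """p_i = (1 - pi) * z_hat_i + pi * Phi(y_hat_i | mu = c, sigma^2)."""
    return [(1 - pi) * zi + pi * gaussian_cdf(yi, c, sigma)
            for zi, yi in zip(z_hat, y_hat)]
```

With π = 0 the combination reduces to logistic regression; with π = 1 and ŷ_i = c it returns exactly 0.5, matching the thresholding interpretation in the text.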

Parameter optimisation
We fix the elastic net mixing parameter α, tune the regularisation parameters λ_0 and λ_1, estimate the coefficients γ and β, and then estimate the weight parameter π and the scale parameter σ²:
• tuning λ_0 and λ_1: We generate two sequences of 100 decreasing values for λ_0 and λ_1, with the largest values (→ ∞) yielding empty models, and the smallest values (→ 0) yielding full models. In k-fold cross-validation, we split the samples into k folds, repeatedly estimate the coefficients with k − 1 included folds, and predict the outcomes for the excluded fold. In each iteration, we estimate the coefficients by minimising the penalised loss functions M_log(γ|λ_0, α) and M_lin(β|λ_1, α) with respect to γ or β, respectively, via coordinate descent along the regularisation path [10]. After the last iteration, we tune the regularisation parameters λ_0 and λ_1 to minimise the loss functions L_log(γ) and L_lin(β).
• estimating γ, β, π and σ²: Given the tuned regularisation parameters λ̂_0 and λ̂_1, we re-estimate the coefficients by minimising M_log(γ|λ̂_0, α) and M_lin(β|λ̂_1, α) with respect to γ and β. With the estimated coefficients γ̂ and β̂, we calculate the fitted probabilities ẑ from logistic regression and the fitted values ŷ from linear regression. To combine them, we estimate the weight and scale parameters by numerically minimising the loss function L_com(π, σ²) with respect to π and σ².
This optimisation procedure first addresses penalised logistic (λ_0, γ) and linear (λ_1, β) regression separately, and then addresses their combination (π, σ²). Alternatively, we might use the expectation-maximisation (em) algorithm to iteratively estimate {γ, β} and {π, σ²}. In contrast to the em approach, our two-stage approach has practical advantages: the processing time is only slightly longer than for logistic and linear regression together; γ̂ and β̂ are interpretable as estimated effects of the features on the log-odds of the binary outcome or on the identity of the numerical outcome, respectively; and the local minima problem does not affect the estimation of the coefficients.
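The second stage can be illustrated by a crude grid search over π and σ; cornet minimises L_com numerically rather than over a grid, so this sketch is only a stand-in for the idea (the function name and grids are our own):

```python
import math

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def deviance(z, p):
    eps = 1e-12  # guard against log(0)
    return -2.0 / len(z) * sum(
        zi * math.log(max(pi, eps)) + (1 - zi) * math.log(max(1 - pi, eps))
        for zi, pi in zip(z, p))

def tune_pi_sigma(z, z_hat, y_hat, c, pis, sigmas):
    """Return (loss, pi, sigma) minimising L_com over a candidate grid."""
    best = None
    for pi in pis:
        for s in sigmas:
            p = [(1 - pi) * zi + pi * gaussian_cdf(yi, c, s)
                 for zi, yi in zip(z_hat, y_hat)]
            d = deviance(z, p)
            if best is None or d < best[0]:
                best = (d, pi, s)
    return best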

Motivation
In this simulation study, we empirically show that combined regression outperforms not only logistic regression but also 'calibrated linear regression' at predicting dichotomised outcomes.
Logistic regression and calibrated linear regression are special cases of the proposed combined regression (with π = 0 or π = 1, respectively). While logistic regression requires the binary outcome and returns predicted probabilities, calibrated linear regression requires the numerical outcome and returns predicted values transformed to predicted probabilities.
We illustrate in four examples why the proposed combined regression (combining predicted probabilities from logistic regression and predicted values from linear regression) is suitable for predicting dichotomised outcomes.

Data generating process
Let n denote the sample size and let p denote the number of features. This is our process for generating features, effects and outcomes:
• features: Let x_ij represent feature j for sample i, for any j in {1, …, p} and any i in {1, …, n}. Simulating all values from a standard Gaussian distribution (x_ij ∼ N(μ = 0, σ² = 1)), we obtain the n × p feature matrix X.
• effects: Let β_j represent the effect of feature j, for any j in {1, …, p}. Simulating all effects from a mixture distribution of a Bernoulli trial with success probability 5% and a standard Gaussian distribution, we obtain the p-dimensional vector β = (β_1, …, β_p). While around 95% of the features have no effects, around 5% of the features have negative or positive effects of different sizes.
• linear predictors: Let η_i represent the linear predictor for sample i, for any i in {1, …, n}. Calculating all linear predictors from the effects and the features (η_i = Σ_{j=1}^{p} β_j x_ij), we obtain the n-dimensional vector η = (η_1, …, η_n).
• error terms: Let ε_i represent the error term for sample i, for any i in {1, …, n}. Simulating all error terms from a standard Gaussian distribution (ε_i ∼ N(μ = 0, σ² = 1)), we obtain the n-dimensional vector ε = (ε_1, …, ε_n).
• outcomes: Let y_i and z_i represent the numerical or binary outcome of sample i, respectively, for any i in {1, …, n}. In each example, the numerical outcome depends on the linear predictor and the error term in a different way (see below). In all examples, the binary outcome z_i indicates whether the numerical outcome y_i is greater than the threshold zero (z_i = I[y_i > 0]). The corresponding n-dimensional vectors are y = (y_1, …, y_n) and z = (z_1, …, z_n).
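This data-generating process can be sketched for Example 1 (standard setting, y_i = η_i + ε_i) as follows. The function name `simulate` and its defaults are illustrative choices, not part of the published code:

```python
import random

def simulate(n=100, p=500, seed=1):
    """Simulate features, spike-and-slab effects, and both outcomes."""
    rng = random.Random(seed)
    # features: x_ij ~ N(0, 1)
    X = [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
    # effects: beta_j ~ N(0, 1) with probability 5%, otherwise 0
    beta = [rng.gauss(0, 1) if rng.random() < 0.05 else 0.0 for _ in range(p)]
    # linear predictors, error terms, and outcomes (Example 1: y = eta + eps)
    eta = [sum(b * x for b, x in zip(beta, row)) for row in X]
    eps = [rng.gauss(0, 1) for _ in range(n)]
    y = [e + u for e, u in zip(eta, eps)]
    z = [1 if yi > 0 else 0 for yi in y]
    return X, beta, y, z
```

Examples 2 to 4 replace the line defining y by a clustered, asymmetric, or contaminated transformation of η and ε, respectively.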

Examples
We provide one representative example where calibrated linear regression should outperform logistic regression, and three illustrative examples where logistic regression should outperform calibrated linear regression. In each example, the equation holds for any i in {1, …, n}.
(1) standard setting: The numerical outcome equals the sum of the linear predictor and the error term.
(2) latent binary variable: The numerical outcome is clustered around a negative or positive value, depending on whether the linear predictor is below or above the threshold, respectively.
(3) asymmetric relationship: The numerical outcome is not linearly related to the linear predictor: it follows a square-root transformation below the threshold and a quadratic transformation above the threshold.
(4) presence of outliers: The numerical outcome usually equals the sum of the linear predictor and the error term, but rarely there is contamination by a large negative or a large positive number.

Hold-out method
As there is no restriction on the sample size for simulated data, we simulate data for n_0 = 100 training samples but n_1 = 10,000 testing samples (n = n_0 + n_1 = 10,100) in each repetition of the hold-out method. Using p = 500 features, we obtain a high-dimensional setting because the number of features is much larger than the number of training samples (p ≫ n_0). After estimating the parameters of the three regression models with the 100 training samples, we predict the binary outcome for the 10,000 testing samples and compare the predicted probabilities (0 ≤ p̂_i ≤ 1) with the observed classes (z_i = 0 or z_i = 1).

Figure 2.
Out-of-sample logistic deviance (lower = better) from logistic regression ('binomial'), combined regression, and calibrated linear regression ('gaussian'), in four simulation settings. The black point added to the box plot represents the mean. A p-value with an asterisk indicates that the decrease in logistic deviance from logistic (left) or calibrated linear (right) to combined regression is statistically significant (one-sided Wilcoxon signed-rank test, Bonferroni-adjusted 5% significance level).

Predictive performance
For each example in Section 3.3, we performed 100 repetitions of the hold-out method (i.e. simulating 100 sets of training and testing data). Figure 2 summarises the distributions of out-of-sample logistic deviances from logistic regression, calibrated linear regression, and combined regression, each under lasso regularisation. We tested whether combined regression leads to a significantly lower logistic deviance than logistic regression and calibrated linear regression, using the one-sided Wilcoxon signed-rank test.
We find that combined regression is significantly more predictive than logistic regression in the first example and significantly more predictive than calibrated linear regression in the other examples, at the Bonferroni-adjusted 5% level (p-value ≤ 0.05/8). Thus, combined regression is highly predictive because it combines the advantages of linear regression (efficiency) and logistic regression (robustness).

Application
The Montreal Cognitive Assessment (moca) is a screening tool for mild cognitive impairment (mci) [20]. Although the total moca score is a discrete numerical variable ranging from 0 to 30, researchers often model a binary variable indicating the absence or presence of cognitive impairment. For example, Fullard et al. [11] use Cox proportional hazards regression to predict the conversion time to mci, and Caspell-Garcia et al. [1] use logistic regression to predict mci, given the commonly accepted definition of mci as moca ≤ 25. Identifying patients at risk of cognitive impairment is important to develop measures for early intervention and prevention, such as cognitive training and physical exercise programmes. Here, we predict cognitive impairment from clinical features, analysing data from a longitudinal cohort study, the Parkinson's Progression Markers Initiative (ppmi) [16].
• features: We extracted the features from the curated baseline data. While the raw data include several hundred unfiltered variables in the categories 'subject characteristics', 'biospecimen', 'digital sensor', 'enrolment', 'imaging', 'medical history', 'motor assessment', 'non-motor assessment', and 'remote data collection', the curated data include 130 relevant variables, either selected or derived from the raw data (Supplementary Table A1). The proportion of missing data is approximately 3%.
• outcomes: We extracted the outcomes from the curated follow-up data, which cover the clinical visits after approximately one, two and three years. The total moca score is available for 390, 373 and 363 patients, indicating cognitive impairment (moca ≤ 25) for 34.4%, 32.4% and 32.2% of the patients, respectively. The apparent improvement likely results from non-random missingness, measurement variation, and training effects after repeated participation in cognitive assessments.
Our objective is to predict from clinical features at baseline which patients will have cognitive impairment after one, two or three years. While logistic regression only exploits the binary outcome of interest 'total moca score ≤ 25 versus ≥ 26', combined regression also exploits the underlying numerical outcome 'total moca score' to predict this probability. We first imputed missing values in the feature matrix by chained random forests with predictive mean matching (R package missRanger) and then replaced categorical variables by dummy variables. Instead of imputing the missing values once and analysing one imputed data set ('single imputation'), we imputed the missing values ten times and analysed each imputed data set separately ('multiple imputation').
For each imputed data set, we estimated the predictive performance of logistic and combined regression by nested cross-validation, with an internal loop for training and validation and an external loop for testing. In this unbiased evaluation, we split the samples into five folds, repeatedly train and validate the models with four folds, and test the models with the other fold. To obtain comparable performance estimates, we used the same 5 external and the same 10 internal folds for logistic and combined regression.
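The fold assignment behind this nested scheme can be sketched as follows. The helper names `make_folds` and `nested_splits` are our own illustrative choices, not functions from cornet:

```python
import random

def make_folds(n, k, seed=0):
    """Randomly assign n samples to k cross-validation folds of near-equal size."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [[] for _ in range(k)]
    for pos, i in enumerate(idx):
        folds[pos % k].append(i)
    return folds

def nested_splits(n, k_outer=5, k_inner=10, seed=0):
    """Yield (test fold, inner folds) pairs for nested cross-validation.

    For each outer test fold, the remaining samples are split again into
    inner folds for training (included) and validation (excluded).
    """
    for test in make_folds(n, k_outer, seed):
        train_val = [i for i in range(n) if i not in set(test)]
        inner = make_folds(len(train_val), k_inner, seed)
        yield test, [[train_val[j] for j in f] for f in inner]
```

Reusing one seed for both models reproduces the paper's design of identical external and internal folds for logistic and combined regression.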
Algorithm 1 includes the high-level pseudocode for multiple imputation and nested cross-validation. In all comparisons, we used either lasso or ridge regularisation for both logistic and combined regression. We then examined the percentage change in cross-validated logistic deviance from logistic to combined regression (Supplementary Table A2). For both penalties (L1, L2) and all years (1, 2, 3), we observe an improvement for most imputations (8/10 or 10/10). This improvement also holds for other evaluation metrics, including the misclassification rate and the areas under the receiver operating characteristic and precision-recall curves (Supplementary Table A3).

Algorithm 1 Pseudocode
High-level pseudocode for multiple imputation, external cross-validation, parameter optimisation, and internal cross-validation. We use internal cross-validation to tune the hyperparameters and external cross-validation to estimate the predictive performance. Samples repeatedly switch between the training set (included folds in internal loop), the validation set (excluded fold in internal loop), and the test set (excluded fold in external loop). We used the multi-split approach from [30] to test the prediction error difference between logistic and combined regression. First, we randomly split the samples 50 times into 80% for training and validation, and 20% for testing. Then, for each split, we calculated the squared deviance residuals, whose mean equals the logistic deviance, and compared the paired residuals from logistic and combined regression with the one-sided Wilcoxon signed-rank test. Finally, we calculated the median p-value from the 50 splits, which maintains the type I error rate [30]. For each penalty (L1, L2), each year (1, 2, 3), and each imputation (1-10), the median p-value is significant at the 5% level (Supplementary Table A2). Therefore, combined regression leads to significantly better predictions than logistic regression. In this application, however, combined regression does not lead to significantly better predictions than calibrated linear regression (i.e. combined regression with zero weight for the logistic part). Here, two ensemble learning methods (random forest, gradient boosting) perform worse than ridge and lasso regression (Supplementary Table A4).
To examine weighting and scaling, we refitted combined regression to all folds. Depending on the penalty (L1, L2), the year (1, 2, 3), and the imputation (1-10), we estimated weights (π) between 0.20 and 1.00 and variances (σ²) between 0.16² and 1.70² (Supplementary Table A2). Together, these estimates determine the combination of the predicted probabilities from logistic and linear regression. Figure 3 shows the transformation of predicted values from linear regression to calibrated probabilities, and Figure 4 shows the mean loss (for predicting the first-year outcome under lasso regularisation) at different combinations of weights and variances, where the mean is taken over the 10 imputations.

Discussion
We have proposed an approach for predicting dichotomised outcomes from high-dimensional data. Combining predicted probabilities from penalised logistic regression and predicted values from penalised linear regression, it achieves a high predictive performance, as shown by simulation and application. The general applicability includes biomedical prediction problems with clinically relevant thresholds.
Ideally, the threshold for dichotomisation is commonly established and splits the samples into two biologically relevant groups. If there is no practical or theoretical justification for setting the threshold equal to a specific value, the need for a probabilistic interpretation is questionable. Special care is required for data-dependent thresholds, because the same criterion typically leads to different thresholds in different data sets, and searching for the 'optimal' threshold typically leads to model overfitting.
Our approach integrates numerical information into binary classification by first modelling binary and numerical outcomes separately and then combining the (calibrated) probabilities. This is related to transforming classifier scores to calibrated probabilities [18,24,31]. Given a threshold and predictions of the numerical outcome, we provide a probabilistic classification. Our aim is an interpretable combination of logistic and linear regression, but we recognise that non-parametric methods for mapping scores onto probabilities might improve the predictive performance.
Instead of applying linear regression to the numerical outcome and transforming the predicted values to probabilities, we could transform the numerical outcome to probabilities and apply logistic regression to the probabilities. Such an approach has previously been developed for low-dimensional settings [13,28]. However, due to the iteration between estimating the nuisance parameter and estimating the coefficients, an extension to high-dimensional settings would be computationally expensive. We estimate the two models separately but recognise that a simultaneous approach might provide superior performance.
Only the binomial distribution supports binary outcomes, but different distributions support quantitative outcomes. We chose the Gaussian distribution for modelling the quantitative outcome and for transforming predicted values to probabilities. This distribution is supported on the whole real line and has two parameters for thresholding and calibration. It is possible to use different distributions for modelling the observed outcomes or transforming the predicted outcomes. For the former, we could model counts with the Poisson or the negative binomial distribution. For the latter, we could increase flexibility with the three-parameter log-normal distribution [28] or the skew normal distribution.
Since the numerical outcome is normally more informative than the binary outcome, it is not surprising that modelling the numerical outcome next to the binary outcome improves the predictions of the binary outcome. A more important result is that modelling the numerical and the binary outcomes together can provide better predictions than modelling only the numerical outcome. Similarly, numerical features and binary transformations of the same features can be more predictive together than alone [21].
The proposed approach combines the predicted probabilities from logistic regression and the predicted values from linear regression, leaving their estimated coefficients untouched. If the aim were to merge the estimated coefficients from logistic and linear regression into a single set of estimated coefficients, one could use bivariate regression by stacked generalisation for the binary and the numerical outcome [22]. However, this would make the combination of predicted probabilities and predicted values less interpretable.
Although this study focuses on dichotomised outcomes, it is not our intention to advocate dichotomisation. Numerical outcomes should not be binarised unless there is a strong reason to do so. When dichotomisation is warranted, we recommend exploiting both the binary and the numerical outcome. For binary classification in high-dimensional settings, the proposed approach combines both sources of information.

Conclusion
For predicting numerical outcomes, we suggest using penalised linear regression to obtain predicted values. For natural binary outcomes, we suggest using penalised logistic regression to obtain predicted probabilities. And for artificial binary outcomes (also known as dichotomised outcomes), we propose combining penalised linear and logistic regression to obtain predicted probabilities.

Figure 1 .
Figure 1. After modelling the (artificial) binary outcome with penalised logistic regression, and the (original) numerical outcome with penalised linear regression, we use the numerical prediction to improve the binary classification.

(1) MultipleImputation
    input: incomplete data
    for i from 1 to 10
        impute missing values
        ExternalCrossValidation
    end for
    output: mean performance metric

(2) ExternalCrossValidation
    input: complete data
    split samples into 5 folds
    for j from 1 to 5
        exclude fold j (test set)
        ParameterOptimisation
        predict z for fold j
    end for
    output: performance metric (e.g. L_com)

(3) ParameterOptimisation
    input: training data
    for various λ_0 and λ_1
        InternalCrossValidation
    end for
    tune λ_0 and λ_1 (min L_lin and L_log)
    re-estimate β and γ (min M_lin and M_log)
    estimate σ² and π (min L_com)
    output: parameter estimates

(4) InternalCrossValidation
    input: training data, λ_0, λ_1
    split samples into 10 folds
    for k from 1 to 10
        exclude fold k (validation set)
        estimate β and γ (min M_lin and M_log)
        predict y and z for fold k
    end for
    output: loss (L_lin and L_log)

Figure 3 .
Figure 3. Transformation of predicted values (x-axis) to calibrated probabilities (y-axis) via the Gaussian cumulative distribution function with mean μ and variance σ². Predicted values above μ (vertical line) imply probabilities above 0.5 (horizontal line). While the mean μ equals the threshold c, we need to estimate the variance σ². The probabilities tend to 0 or 1 under small variances and to 0.5 under large variances.

Figure 4 .
Figure 4. Logistic deviance given weight π (y-axis) and standard deviation σ (x-axis). The region with the lowest mean loss (dark) contains the selected tuning parameters (white crosses). Logistic regression obtains full weight if π equals 0 (bottom), and linear regression if π equals 1 (top). The latter renders predicted probabilities around 0 and 1 if σ is small (left) and around 0.5 if σ is large (right).