Model Mixing Using Bayesian Additive Regression Trees

In modern computer experiment applications, one often encounters the situation where various models of a physical system are considered, each implemented as a simulator on a computer. An important question in such a setting is determining the best simulator, or the best combination of simulators, to use for prediction and inference. Bayesian model averaging (BMA) and stacking are two statistical approaches used to account for model uncertainty by aggregating a set of predictions through a simple linear combination or weighted average. Bayesian model mixing (BMM) extends these ideas to capture the localized behavior of each simulator by defining input-dependent weights. One possibility is to define the relationship between inputs and the weight functions using a flexible non-parametric model that learns the local strengths and weaknesses of each simulator. This paper proposes a BMM model based on Bayesian Additive Regression Trees (BART). The proposed methodology is applied to combine predictions from Effective Field Theories (EFTs) associated with a motivating nuclear physics application.


Introduction
In statistical learning problems, one often considers a set of plausible models, each designed to explain the system of interest. A common practice is to select a best performing model based on some pre-specified criteria. The ensuing inference for quantities of interest is then carried out using the selected model as if it were the true data generating mechanism. The resulting uncertainty quantification ignores any variability due to the underlying model structure (Draper, 1995). The misrepresentation of uncertainties associated with such quantities can ultimately lead to misguided interpretation or inappropriate decisions. Another shortcoming of the typical approach to modeling is that the resulting inference may strongly depend on the selection criteria. In other words, different sets of criteria could lead to noticeably different final models and inferential results. To account for such uncertainties, one may elect to combine information across the set of models in some manner.
Any model set can be classified as M-closed, M-complete, or M-open (Bernardo and Smith, 1994). These three categories differ in their underlying assumptions regarding a true model, M†, and its relation to the model set. The M-closed setting assumes a mathematical representation of M† can be formulated and it is included in the model set. In this setting, model selection is appropriate because M† can be recovered from the set of models under consideration. The M-complete setting also assumes it is possible to construct M†; however, it is not included in the model set. For example, an expression for M† may exist, yet it may be computationally intensive or intractable compared to the models under consideration. The M-open case assumes the true model may exist, however a lack of knowledge or resources prevents one from constructing its mathematical representation. Consequently, M† is excluded from the model set. This work is motivated by applications in nuclear physics, which tend to fall within the M-open class, as the underlying truth regarding the physical system may not yet be understood. In such cases, one may desire to leverage the known information about the physical system which is contained in the model set along with experimental data to further understand the nuclear phenomena.
Assume a set of K models are considered when studying a particular system of interest. One approach to account for model uncertainty is to combine the information across these K models. This may involve combining the individual point predictions or probability density functions from each model, usually in some additive manner. Traditional frequentist and Bayesian approaches utilize global weighting schemes, where each model is weighted by a value intended to reflect overall (global) model performance. For example, a classical global weighting scheme is Bayesian model averaging (BMA) (Raftery et al., 1997), which combines the individual posterior densities from each model using a convex combination. The BMA weights are given by the individual posterior model probabilities, each of which can be interpreted as the probability that the individual model is the true data generating one. Hence, BMA implicitly assumes the true model is contained within the model set, which renders this method inappropriate outside of the M-closed setting (Bernardo and Smith, 1994). More recent Bayesian global weighting schemes adopt a model stacking approach, where model weights are assigned to minimize a specified posterior expected loss. This decision theory viewpoint of global weighting can be used for combining point predictions (Le and Clarke, 2017) or probability densities (Yao et al., 2018). Under some assumptions, stacking methods have been shown to be more appropriate for both the M-open and M-closed settings (Yao et al., 2018).
Though global weighting methods are effective, they still might lead to poor approximations of the true system when the individual model performance is localized. In such a case, one may wish to select a weighting scheme that reflects the localized characteristics of the models by constructing input-dependent weights. With input-dependent weights, one would expect an individual model to receive a higher weight in input regions where it exhibits strong predictive performance, while receiving a weight close to 0 in regions of poor performance. Localized weighting schemes are more appropriate for the M-open or M-complete settings, where the true model may be better characterized as a localized mixture of the model set under consideration. This work is motivated by problems in nuclear physics modeled using a technique known as Effective Field Theory (EFT) (Burgess, 2020; Petrov and Blechman, 2016; Georgi, 1993). EFTs are designed to perform well in a particular subregion(s) of the input domain, yet diverge in the rest of the input domain. Prototypes of such models are the weak and strong coupling finite-order expansions for the partition function of the zero-dimensional φ⁴ theory presented by Honda (2014). Examples of this problem are shown in Figure 1, where the various dashed and dotted lines represent the mean predictions from a finite-order expansion and the solid line denotes the true physical system. One can see that these models, constructed from the known physics, are highly accurate descriptions of the true system only across subsets of the domain. This poses the question as to how to combine the predictions from multiple EFTs in order to obtain a globally accurate prediction. Various interpolation methods (Honda, 2014) exist; however, no data-driven approaches are currently available for EFTs.
To demonstrate why problems falling in the M-open class may not be suited for model averaging schemes, consider applying BMA to the model set involving the two expansions as shown in Figure 1(a). The posterior mean prediction from BMA results in a poor estimate of the true system, as shown in Figure 2. Essentially, BMA selects the dashed model rather than leveraging the localized strengths contained in the model set. Given the characteristics of EFTs and the M-open setting associated with these problems, a simple weighted average of the predictions from each model is insufficient for recovering the true physical system. A better approach is to use an input-dependent weighting scheme which leverages the localized behaviors of each model to ascertain appropriate mean prediction and uncertainty quantification. Such an approach falls under the general class of problems known as Bayesian model mixing (BMM) (Yao et al., 2021).
A key challenge in BMM is to define the relationship between the inputs and the weight functions. This work proposes a Bayesian treed model which specifies the weight functions as a sum-of-trees. This representation relies on tree bases which are used to learn the localized model behavior.
Additionally, this flexible and non-parametric approach allows the user to avoid specifying a more restrictive model for the weight functions, such as a generalized linear model. To maintain the traditional conjugacy properties associated with Bayesian Additive Regression Tree (BART) models, the weight functions are regularized via a multivariate Gaussian prior. The prior is calibrated so that the weight functions prefer the interval [0, 1] without imposing any further constraints.
Additionally, this framework includes a simple strategy for incorporating prior information about localized model performance when available. Altogether, this approach highlights the localized behaviors of the candidate models and yields significant improvements in prediction, interpretation, and uncertainty quantification compared to traditional model averaging methods.
In addition to proposing a novel non-parametric BMM method, this work introduces a new data-driven approach for combining predictions from various EFTs. This is not only important for prediction of the system, but also for the resulting inference. In particular, practitioners can better understand the accuracy of each EFT while also advancing their knowledge about the underlying physical system across areas which are not well explained by the EFTs under consideration.
The remainder of this paper is organized in the following manner. Section 2 highlights some relevant background material.

Background
This section provides an overview of the primary statistical methods discussed throughout this work. Section 2.1 details popular model averaging and model mixing techniques. Section 2.2 summarizes the primary features of Bayesian tree models, which play an integral role in the proposed model mixing approach described in this work.

Model Averaging and Model Mixing
Methods to address model uncertainty have been widely studied throughout the past few decades. The majority of work in this area combines competing models through either mean or density estimation. In either case, the combined result is generally computed via a linear combination of the individual predictive means or densities from the models under consideration. The weights in this linear combination may or may not depend on the inputs for each model and are learned using the set of training data D = {(x_1, y_1), . . ., (x_n, y_n)}. Many frequentist and Bayesian methods exist for estimating the model weights. Popular frequentist approaches such as stacking (Breiman, 1996) and model aggregation (Bunea et al., 2007) estimate the weights by minimizing a specified loss function. Additionally, one may elect to impose constraints such as a non-negativity or sum-to-one constraint on the weights or apply regularization techniques. Other frequentist approaches estimate the weights using evaluation metrics such as the Akaike information criterion (Burnham et al., 1998) or Mallows' Cp (Hansen, 2007). These methods generally fall under the model averaging regime, as the weights are independent of the model inputs, with the exception of Sill et al. (2009). The remainder of this section reviews popular Bayesian methods in further detail.
Bayesian Model Averaging: A classical approach for combining models M_1, . . ., M_K is Bayesian Model Averaging (Raftery et al., 1997), which forms the posterior predictive density as the convex combination p(ỹ | x̃, D) = Σ_{l=1}^K w_l p(ỹ | x̃, D, M_l), where each weight w_l = p(M_l | D) is the posterior probability of model M_l. BMA has been criticized for asymptotically assigning all of the weight to a single model, performing poorly outside of M-closed settings (e.g. Figure 2), and being sensitive to prior specification.
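To make this weighting rule concrete, the sketch below (function name and log evidence values are illustrative, not from the paper) computes posterior model probabilities from log marginal likelihoods using the log-sum-exp trick:

```python
import math

def bma_weights(log_evidence, log_prior=None):
    """Posterior model probabilities p(M_l | D) from log marginal
    likelihoods log p(D | M_l) and optional log prior model
    probabilities, normalized with the log-sum-exp trick."""
    K = len(log_evidence)
    if log_prior is None:
        log_prior = [math.log(1.0 / K)] * K  # uniform prior over models
    log_post = [le + lp for le, lp in zip(log_evidence, log_prior)]
    m = max(log_post)
    norm = m + math.log(sum(math.exp(v - m) for v in log_post))
    return [math.exp(v - norm) for v in log_post]

# A modest gap in log evidence drives almost all weight to one model,
# illustrating why BMA tends to "select" rather than blend.
w = bma_weights([-10.0, -25.0, -40.0])
```

Note how the weights concentrate on the first model even though the evidence gap is not extreme on the data scale; this is the asymptotic selection behavior criticized above.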
Bayesian Mean Stacking: Recent work has extended stacking to the Bayesian paradigm as an approach for mean estimation (Clyde and Iversen, 2013; Le and Clarke, 2017). Given K competing models, the stacked mean for a future observation ỹ at input x̃ is constructed as the linear combination Σ_{l=1}^K w_l f_l(x̃), where f_l denotes the mean prediction under model M_l. When the individual models are unknown, stacking is conducted in a two-step procedure: (i) independently fitting the individual models M_l, l = 1, . . ., K, given the set of training data D, and (ii) estimating the weights w = (w_1, . . ., w_K) for the stacked predictor given the fitted models.
In the first step, each model is fit and the corresponding mean predictions, f̂_l(x_i), are obtained at each of the training points. In practice, cross validation techniques are used to reduce the risk of overfitting the stacked predictor to the training data. In the second step, the coefficient vector w = (w_1, . . ., w_K) is defined as the minimizer of a specified posterior expected loss. Additionally, one may impose various constraints such as a simplex, non-negativity, or sum-to-m constraint on the weights (Le and Clarke, 2017). Other approaches include regularization via a penalty term or a prior (Breiman, 1996; Yang and Dunson, 2014).
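For K = 2 models under squared-error loss and a sum-to-one constraint, step (ii) has a closed form. The sketch below is a minimal illustration (the clipping step is a crude stand-in for a full simplex constraint, and the data are synthetic):

```python
def stack_two_models(y, f1, f2):
    """Closed-form stacking weight for K = 2 models: minimize
    sum_i (y_i - w*f1_i - (1-w)*f2_i)^2 over w, then clip to [0, 1]
    to respect a simplex (non-negativity) constraint."""
    d = [a - b for a, b in zip(f1, f2)]   # f1 - f2
    r = [a - b for a, b in zip(y, f2)]    # y - f2
    num = sum(di * ri for di, ri in zip(d, r))
    den = sum(di * di for di in d)
    return min(max(num / den, 0.0), 1.0)

# Toy check: if y is exactly 0.3*f1 + 0.7*f2, stacking recovers w = 0.3.
f1 = [1.0, 2.0, 3.0, 4.0]
f2 = [4.0, 3.0, 1.0, 0.0]
y = [0.3 * a + 0.7 * b for a, b in zip(f1, f2)]
w = stack_two_models(y, f1, f2)
```

In practice the f_l values here would be cross-validated predictions, per the overfitting caveat above.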
Bayesian Complete Stacking: Complete Stacking was motivated by the shortcomings of BMA (Yao et al., 2018). This Bayesian stacking model emphasizes prediction, as the weights are selected to minimize the Kullback-Leibler (KL) divergence between the true predictive density and the stacked predictive density p̃(ỹ | x̃) = Σ_{l=1}^K w_l p(ỹ | x̃, D, M_l), where ỹ is a future observation with input x̃. Similar to mean stacking, the leave-one-out (LOO) cross validated predictive density can be used in place of p(ỹ | x̃, D, M_l) when the individual models are unknown.
Given training data, the weights are constrained to a K-dimensional simplex S_K and estimated by maximizing the LOO log score, max_{w ∈ S_K} Σ_{i=1}^n log Σ_{l=1}^K w_l p(y_i | x_i, D^(−i), M_l), where D^(−i) denotes the training set excluding the pair (x_i, y_i).
Bayesian Hierarchical Stacking: Hierarchical Stacking (Yao et al., 2021) is a model mixing approach which extends Complete Stacking by defining input-dependent weights that are estimated in a fully Bayesian manner. One way to define the weight functions is through a parametric model. First, K − 1 unconstrained weight functions are defined as w*_l(x_i) = µ_l + Σ_{j=1}^J α_lj g_j(x_i), which depend on the sets of hyperparameters {α_lj} and {µ_l} along with user-specified basis functions g_j(x_i), where j = 1, . . ., J and l = 1, . . ., K − 1. The Kth function w*_K(x_i) is set to 0 to serve as a baseline. Then, a softmax transformation is applied to the unconstrained weights in order to confine each model weight to the K-dimensional simplex, namely w_l(x_i) = exp(w*_l(x_i)) / Σ_{k=1}^K exp(w*_k(x_i)).

The methods discussed above outline a number of strategies one can take to combine the information across multiple models. In the setting of EFT experiments, the localized nature of the predictions suggests an input-dependent weighting scheme like Bayesian Hierarchical Stacking is more suitable. However, specifying the required basis functions may not be trivial. Thus, the proposed method will adopt the notion of mean stacking within an additive tree basis model to achieve localized weighting in a flexible and non-parametric manner.
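A minimal sketch of this construction follows; the hyperparameter values and the single linear basis function are hypothetical, chosen only to illustrate how the softmax maps unconstrained functions onto the simplex:

```python
import math

def hs_weights(x, alpha, mu, basis):
    """Input-dependent simplex weights in the style of hierarchical
    stacking: K-1 unconstrained functions w*_l(x) = mu_l +
    sum_j alpha_{lj} g_j(x), a baseline w*_K(x) = 0, then a softmax."""
    K = len(mu) + 1
    wstar = [mu[l] + sum(a * g(x) for a, g in zip(alpha[l], basis))
             for l in range(K - 1)]
    wstar.append(0.0)  # baseline function for identifiability
    m = max(wstar)
    e = [math.exp(v - m) for v in wstar]
    s = sum(e)
    return [v / s for v in e]

# Two models, one basis function g(x) = x: the first model's weight
# decays as x grows (hypothetical parameter values).
w_lo = hs_weights(0.0, alpha=[[-4.0]], mu=[2.0], basis=[lambda x: x])
w_hi = hs_weights(2.0, alpha=[[-4.0]], mu=[2.0], basis=[lambda x: x])
```

The design choice the proposed method targets is exactly the `basis` argument: here it must be supplied by the user, whereas the tree-based approach below learns the basis from data.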

Bayesian Tree Models
Bayesian additive regression trees (BART) have become increasingly popular for modeling complex and high dimensional systems (Chipman et al., 2010). This additive approach involves summing together the predictions made from m trees and is facilitated through a Bayesian backfitting algorithm (Hastie and Tibshirani, 2000). Each tree T_j is characterized by its structure, comprised of internal and terminal nodes, along with its associated set of terminal node parameters, M_j. The internal nodes define binary partitions of the input space according to a specified splitting rule. A given node η is defined to be an internal node with probability p(η is internal) = α(1 + d_η)^(−β), where d_η is the depth of η and α and β are tuning parameters. By construction, this prior penalizes tree complexity and thus ensures each tree maintains a shallow and simple structure. Given d different predictors, x_1, . . ., x_d, splitting rules are of the form x_v < c for v ∈ {1, . . ., d} and cutpoint c from a discretized subset of R. In the simplest approach, the predictor and cutpoint associated with each splitting rule are randomly selected from discrete uniform distributions. The probabilities associated with the designation of each node along with the splitting rules for internal nodes are used to define the stochastic tree-generating prior for each tree.
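The depth penalty can be sketched directly; the values α = 0.95 and β = 2 below are illustrative defaults in the style of Chipman et al. (2010), not values prescribed by this paper:

```python
def p_internal(depth, alpha=0.95, beta=2.0):
    """Prior probability that a node at the given depth is internal
    (i.e., splits): alpha * (1 + depth)^(-beta)."""
    return alpha * (1.0 + depth) ** (-beta)

# Split probability drops quickly with depth, so prior mass
# concentrates on shallow trees.
probs = [p_internal(d) for d in range(4)]
```

With these values, a root node splits with probability 0.95, but its children split with probability 0.95/4, making deep trees a priori unlikely.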
The m trees are learned through MCMC, where a slight modification to each structure is proposed at every iteration of the simulation. Generally, such modifications to the tree include birth, death, perturb, or rotate, as described by Chipman et al. (1998) and Pratola (2016). Proposals are then accepted or rejected using a Metropolis-Hastings step. To avoid a complex reversible jump MCMC, the algorithm depends on the integrated likelihood, which is obtained by integrating over the terminal node parameters associated with the given tree. A closed form expression for this density can be obtained with conditionally conjugate priors for the terminal node parameters.
Given the tree structure, prior distributions can be assigned to each terminal node parameter.
In the BART model, the priors ensure each tree explains a small yet different source of variation in the data. For continuous data, BART assigns Gaussian priors to the terminal node parameters.
Assuming the data is mean centered, the prior assigned to terminal node parameter µ_pj in node η_pj is given by µ_pj | T_j ∼ N(0, τ²), where τ = (y_max − y_min)/(2k√m) with tuning parameter k.
Additionally, a scaled inverse chi-squared prior is assigned to the variance, i.e. σ² ∼ νλ/χ²_ν. The traditional Bayesian regression tree model can be extended to allow for a more complex structure in the terminal nodes. Existing extensions include linear regression (Chipman et al., 2002; Prado et al., 2021) and Gaussian processes (Gramacy and Lee, 2008). For the setting of model mixing, this work utilizes a multivariate Gaussian terminal node model.

Towards Model Mixing with EFTs
An EFT forms an expansion (or multiple expansions) in a ratio of an input parameter to a physically relevant scale. Computer models implement EFTs as simulators. The theoretical predictions of the physical system are approximations from each simulator plus a discrepancy term, which is designed to account for the remaining unexplained portions of the system. These two components may have specific properties which can be leveraged when working with observational data. This section summarizes these details in the context of EFTs, while a further discussion is provided in the supplementary material.

Motivating EFT Example
Consider the EFT example where the true physical system is the partition function of the zero-dimensional φ⁴ theory, defined in (1), where x denotes the coupling constant (Honda, 2014). Two types of finite-order expansions exist for this partition function and are given by (2) and (3) for n_s or n_l ≥ 1, respectively. The weak coupling expansion in (2) is an asymptotic Taylor-like series of order n_s centered about zero. Thus, h_s^(n_s)(x) will yield high-fidelity predictions for smaller coupling constants and diverge as the value increases. The reverse behavior is observed for the strong coupling expansion in (3), which is convergent. Example predictions of the physical system using these finite-order expansions can be seen in Figure 1 and are discussed in detail in Section 3.2.
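As a numerical illustration, the sketch below evaluates the zero-dimensional φ⁴ partition function by direct quadrature. The integrand Z(g) = ∫ exp(−t²/2 − g²t⁴) dt is the form commonly quoted from Honda (2014) and is an assumption here, standing in for the elided equation (1):

```python
import math

def partition_function(g, half_width=12.0, n=20001):
    """Trapezoidal-rule approximation of
    Z(g) = integral over R of exp(-t^2/2 - g^2 * t^4) dt,
    truncated to [-half_width, half_width]. The rapid decay of the
    integrand makes the truncation error negligible."""
    h = 2.0 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        t = -half_width + i * h
        v = math.exp(-0.5 * t * t - (g * g) * t ** 4)
        total += v if 0 < i < n - 1 else 0.5 * v  # trapezoid endpoints
    return total * h

# Sanity check: at g = 0 the integral reduces to the Gaussian
# integral sqrt(2*pi).
z0 = partition_function(0.0)
```

Curves like those in Figure 1 compare this quadrature value against the finite-order expansions (2) and (3) across coupling constants.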
The theoretical predictions of the physical system using the weak and strong coupling expansions are expressed using (4) and (5), respectively.
where the truncation errors δ_s^(n_s)(x) and δ_l^(n_l)(x) account for the portions of the system left unexplained by the corresponding finite-order expansions.

The features present in this example from Honda (2014) are commonly found across the landscape of EFT problems. For instance, the physical system can be expressed as an additive model involving a finite-order expansion and the induced truncation error. The finite-order expansions are designed to provide high-fidelity predictions in specific subregions of the domain. There exists a subregion of the domain where none of the finite-order expansions yield accurate theoretical predictions. Altogether, this motivating example serves as a prototype for the EFTs that may be encountered in a general experimental setting.

The Model Set for EFT Experiments
One may encounter various experimental settings when working with EFTs. Such scenarios are introduced in the context of the motivating example presented in Section 3.1. First, consider the most basic case where the model set contains a single EFT. With one EFT, the overall predictive accuracy of the true system is poor, despite the good performance in a localized region. For example, suppose the model set M contains only the 2nd order weak coupling expansion. A second scenario considers multiple weak coupling expansions of differing orders; in this case, one can leverage the available information to improve the overall prediction of the physical system. For instance, the 2nd and 6th order expansions (dashed and dashed-dotted) are concave functions while the 4th order expansion (dotted) is convex. This suggests the true physical system lies between the expansions under consideration and can be recovered by re-weighting the corresponding predictions.
A third situation is to consider EFTs centered about different areas of the domain. For example, a model set can contain a finite-order weak coupling expansion (dashed) and the 4th order strong coupling expansion (dotted), as shown in Figures 1(a) and 1(b). The addition of the strong coupling expansion allows for a high-fidelity approximation of the physical system to be considered in the rightmost subregion of the domain. The model set listed in panel (a) implies the true system lies between the two expansions. This is particularly useful in the intermediate range where neither of the EFTs is accurate. Meanwhile, the set in panel (b) presents an interesting case where the physical system lies below both EFTs in the intermediate range. In this case, the information in the observational data can be leveraged to help recover the true system.
In this example, the predictions from the weak coupling expansion degrade slowly compared to those from the strong coupling expansions. Consequently, the weak coupling expansions generally appear to have a better overall predictive performance across the entirety of the domain. When combining these two types of EFTs using global weighting schemes such as BMA, the resulting prediction will favor the weak coupling expansion due to its drastic advantage in overall model performance. The undesirability of the BMA solution is evident in Figure 2, which demonstrates that BMA effectively matches the 2nd order weak coupling expansion. Hence, a weighting scheme which captures the localized behaviors of each model is preferred in the EFT setting.
The following sections consider a general set of K different EFTs, which are denoted by f_1(x), . . ., f_K(x). In this motivating example, f_l(x) = h_l(x) + δ_l(x), where h_l(x) can denote either a weak or strong coupling expansion of order N_l, with l = 1, . . ., K. Meanwhile, δ_l(x) is the associated truncation error and is modeled by a GP as described in the supplementary material.

Predictions from EFTs
Prior to model mixing, each of the K EFTs is independently emulated. Without loss of generality, consider the lth EFT, denoted by f_l(x). It is assumed this EFT is accompanied by a set of simulator runs across a fixed set of inputs x^c_{l1}, . . ., x^c_{ln_l}. Information regarding the design of the computer experiment for each EFT can be found in Melendez et al. (2021). The simulator runs are evaluations of the finite-order expansion, h_l(·), at the specified inputs. Using these runs, one can extract the set of N_l + 1 coefficients c_0(·), . . ., c_{N_l}(·) at each of the fixed inputs. The training set for the lth EFT is then defined by D_l = {(x^c_{l1}, C(x^c_{l1})), . . ., (x^c_{ln_l}, C(x^c_{ln_l}))}, where C(·) denotes the vector of known finite-order coefficients at the specified model input. The resulting coefficients and the set of inputs can differ across the K models, thus the sets D_1, . . ., D_K will contain different information.
As described in the supplementary material, an EFT is fit using the finite-order coefficients to learn the unknown parameters which characterize the GP assigned to the truncation error. This information can be extracted from D_l, which implies the set of field observations is not required to fit each EFT. Consequently, the desired theoretical predictions across the input domain can be obtained without using any of the observational data. The resulting posterior predictive distribution is a Gaussian process, which can be characterized by the corresponding mean and covariance functions as described in Melendez et al. (2019) (see also the supplementary material).
The predictions for an EFT are then computed through the posterior mean.

Defining a Mixed Model
The proposed BMM model is trained using a set of observational data, Y_1, . . ., Y_n, which are assumed to be independently generated at fixed inputs x_1, . . ., x_n according to Y_i = f†(x_i) + ε_i, with ε_i ∼ N(0, σ²), where f†(x_i) represents the true and unknown physical system. Conditional on the theoretical predictions at a given point, f_1(x_i), . . ., f_K(x_i), the data can be modeled as Y_i = w(x_i)ᵀf(x_i) + ε_i, where f(x_i) = (f_1(x_i), . . ., f_K(x_i))ᵀ and w(x_i) is the corresponding vector of weights. This formulation is an example of Bayesian mean stacking with an input-dependent weighting scheme. In practice, the predictions from each model are unknown and must be estimated.
The proposed BMM model relies on a two-step approach for combining the predictions across K EFTs. This implies each EFT is first fit independently and the estimated predictions f̂_l(x_i) are obtained for l = 1, . . ., K and i = 1, . . ., n prior to learning the weight functions w_1(x_i), . . ., w_K(x_i).
The proposed two-step approach is tailored to EFTs by taking advantage of the sources of data described in Section 3.3 as well as the properties described in the supplementary material. Conditional on the estimated predictions, the model for the observational data becomes Y_i = w(x_i)ᵀf̂(x_i) + ε_i, where f̂(x_i) = (f̂_1(x_i), . . ., f̂_K(x_i))ᵀ. The weight functions are then learned using the set of field data. The next section outlines the proposed model mixing scheme, which defines the weight functions using Bayesian Additive Regression Trees (BART).

Model Mixing using BART
The weight functions w(x) = (w_1(x), . . ., w_K(x))ᵀ are modeled using a sum-of-trees, w(x_i) = Σ_{j=1}^m g(x_i, T_j, M_j), where g(x_i, T_j, M_j) is the K-dimensional output of the jth tree using the set of terminal node parameters, M_j, at the input, x_i. This approach defines the weight functions using tree bases which are learned from the data. The amount of flexibility in the weight functions can be controlled by changing the number of trees or tuning the hyperparameters in the prior distributions.
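A toy sketch of this representation follows (the trees, split values, and node parameters are hypothetical): each tree maps an input to a K-vector, the vectors are summed across the m trees to form w(x), and the mixed prediction is the dot product with the model predictions f̂(x):

```python
def tree_output(x, tree):
    """Walk one binary tree: internal nodes are tuples
    (var, cutpoint, left, right); terminal nodes hold a K-vector
    of node parameters mu_pj."""
    while isinstance(tree, tuple):
        var, cut, left, right = tree
        tree = left if x[var] < cut else right
    return tree

def mixed_prediction(x, f_hat, trees):
    """w(x) = sum_j g(x; T_j, M_j); prediction is w(x) . f_hat(x)."""
    w = [0.0] * len(f_hat)
    for t in trees:
        w = [a + b for a, b in zip(w, tree_output(x, t))]
    return sum(wl * fl for wl, fl in zip(w, f_hat)), w

# Hypothetical m = 2 stumps splitting on x[0], mixing K = 2 models.
t1 = (0, 0.25, [0.6, 0.0], [0.1, 0.4])
t2 = (0, 0.40, [0.3, 0.1], [0.0, 0.5])
pred, w = mixed_prediction([0.1], [1.0, 2.0], [t1, t2])
```

At x = 0.1 both stumps route left, giving w = (0.9, 0.1): the first model dominates in that subregion, exactly the localized weighting behavior described above.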
In this application of BART, each terminal node parameter is a K-dimensional vector which is assigned a multivariate Gaussian prior. The parameter is regularized so that each tree accounts for a small amount of variation in the weight functions. For the proceeding statements, let η_pj represent the pth terminal node in the jth tree and define its corresponding parameter by µ_pj = (µ_pj1, . . ., µ_pjK)ᵀ.
Now assume the observations (x_1, y_1), . . ., (x_{n_p}, y_{n_p}) lie in the hyper-rectangle defined by η_pj, where n_p is the number of observations assigned to this subregion. The model at each terminal node amounts to fitting a localized Bayesian linear regression with parameter vector µ_pj. Due to conditional independence, the likelihood in this node is defined by Π_{i=1}^{n_p} N(r_i | f̂(x_i)ᵀµ_pj, σ²), where f̂(x_i) = (f̂_1(x_i), . . ., f̂_K(x_i))ᵀ is a vector of mean predictions from each EFT and r_i is the ith residual, given by r_i = y_i − Σ_{q≠j} f̂(x_i)ᵀg(x_i, T_q, M_q).
Conditional on the tree structure, T_j, the terminal node parameter µ_pj is assigned a conjugate multivariate Gaussian prior, namely µ_pj | T_j ∼ N_K(β, τ²I_K), where β = (β_1, . . ., β_K)ᵀ is a K-dimensional mean vector and I_K is the identity matrix. This prior is non-informative in the sense that the mean is fixed regardless of how the input space is partitioned.
In model mixing, each simulator may perform strongly in one subregion of the input space but weakly in another. This belief can be reflected in the prior distribution of µ_pj by allowing the hyperparameters to depend on the partition of the input space assigned to the given terminal node.
Thus, an informative prior for µ_pj can be constructed as µ_pj | T_j ∼ N_K(β_pj, τ²I_K), where β_pj = (β_pj1, . . ., β_pjK)ᵀ. This allows the prior mean to vary depending on the tree partitions and thus reflect some sense of localized model performance. Meanwhile, the assumed covariance structure implies the K vector components µ_pj1, . . ., µ_pjK are independent a priori.
Both of the proposed priors are conjugate, which is an important choice in BART, as it allows for a closed form expression for the marginal likelihood of the vector of residuals R_pj = (r_1, . . ., r_{n_p})ᵀ. Additionally, the conjugate priors result in closed form expressions for the full conditional distributions of the terminal node parameters and the error variance. The derivations of these distributions are found in the Appendix. In particular, the full conditional distribution for the pth terminal node parameter in T_j is multivariate Gaussian, µ_pj | · ∼ N_K(V(F_pjᵀR_pj/σ² + β_pj/τ²), V) with V = (F_pjᵀF_pj/σ² + I_K/τ²)^(−1), where F_pj is the n_p × K design matrix with ith row given by the vector f̂(x_i)ᵀ. The full conditional distribution for σ² is a scaled inverse chi-squared, i.e. σ² | · ∼ ν̄λ̄/χ²_ν̄, where ν̄ = ν + n and λ̄ = (νλ + Σ_{i=1}^n (y_i − w(x_i)ᵀf̂(x_i))²)/(ν + n), with ν and λ denoting the prior shape and scale parameters, respectively.
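The Gaussian full conditional for a terminal node parameter amounts to a small K × K linear solve. The sketch below follows the standard conjugate normal update for a linear model with prior N(β, τ²I); function names are illustrative:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial
    pivoting (adequate for small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b2 for a, b2 in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def node_full_conditional(F, R, beta, tau2, sigma2):
    """Posterior mean and precision of a terminal node parameter:
    precision A = F'F/sigma^2 + I/tau^2,
    mean = A^{-1} (F'R/sigma^2 + beta/tau^2)."""
    n, K = len(F), len(beta)
    A = [[sum(F[i][a] * F[i][b] for i in range(n)) / sigma2
          + (1.0 / tau2 if a == b else 0.0) for b in range(K)]
         for a in range(K)]
    rhs = [sum(F[i][a] * R[i] for i in range(n)) / sigma2
           + beta[a] / tau2 for a in range(K)]
    return solve(A, rhs), A

# With a nearly flat prior (huge tau^2), the posterior mean tracks the
# least squares fit of the residuals on the model predictions.
mean, prec = node_full_conditional(
    [[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0], [0.0, 0.0],
    tau2=1e12, sigma2=1.0)
```

Shrinking τ² pulls the posterior mean back toward β, which is how the regularization prior keeps each tree a weak learner.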

Calibrating Priors
First consider the prior for the terminal node parameters. The calibration of the hyperparameters differs for the non-informative and informative priors; however, both approaches are designed to ensure that each model weight w_l(x) prefers the interval [0, 1] and is centered at a value within this region. Moreover, the functions w_1(x), . . ., w_K(x) are assumed to be independent a priori at a fixed input. This enables the prior for each weight to be calibrated marginally.

Non-Informative Prior
Consider a non-informative prior for the terminal node parameters. In this setting, µ_pj | T_j ∼ iid N_K(β, τ²I_K) for the pth terminal node parameter in the jth tree. First, fix l ∈ {1, . . ., K} and i ∈ {1, . . ., n} to calibrate the prior for w_l(x_i). Since the terminal node parameters are independent and identically distributed with a diagonal covariance structure, the prior induced on w_l(x_i) is the same for the remaining weight and input combinations. From (7) and (8), the induced prior on the lth model weight is w_l(x_i) ∼ N(mβ_l, mτ²). Since it is believed w_l(x_i) ∈ [0, 1] with high probability, it is plausible to set mβ_l = 0.5. Consequently, β_l = 0.5/m. Thus, each weight has an equal chance to reach the "extreme" values of 0 or 1 regardless of the input location. The prior standard deviation, τ, can be selected so that w_l(x_i) ∈ [0, 1] with high probability. To do this, a confidence interval for w_l(x_i) is constructed such that 0 = 0.5 − kτ√m and 1 = 0.5 + kτ√m. Subtracting the first equation from the second and solving for τ yields τ = 1/(2k√m). This calibration approach is very similar to the one proposed by Chipman et al. (2010). The main difference is due to the context of the problem, as it is believed the weights are predominately contained in an interval [0, 1] rather than the observed range of the data, [y_min, y_max].
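These two calibration equations can be sketched in a few lines; the values m = 10 and k = 2 below are illustrative choices, not defaults from the paper:

```python
import math

def calibrate_noninformative(m, k=2.0):
    """Hyperparameters for the non-informative terminal node prior:
    beta_l = 0.5/m so the induced weight prior mean is m*beta_l = 0.5,
    and tau = 1/(2k sqrt(m)) so [0, 1] is a +/- k sd interval for
    the induced prior w_l(x) ~ N(0.5, m*tau^2)."""
    beta_l = 0.5 / m
    tau = 1.0 / (2.0 * k * math.sqrt(m))
    return beta_l, tau

beta_l, tau = calibrate_noninformative(m=10, k=2.0)
# Standard deviation of the induced weight prior N(0.5, m*tau^2):
sd_w = math.sqrt(10) * tau
```

With k = 2, the induced weight prior has standard deviation 0.25, so 0.5 ± 2 standard deviations spans exactly [0, 1], matching the interval construction above.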

Informative Prior
In the informative setting, the prior mean directly depends on the partitions of the input space induced by the given tree, i.e. µ_pj | T_j ∼ N_K(β_pj, τ²I_K). This prior is tailored towards EFTs, where the functional variance, v_l(x_i), indicates the severity of the truncation error. A larger variance within a particular subregion of the domain indicates the presence of larger truncation error, meaning the EFT provides a poor approximation of the true system.
Given this interpretation of the truncation error variances, one strategy for combining EFTs is precision weighting (Phillips et al., 2021). For example, the precision weight for the lth EFT at x_i is given by β_l(x_i) = (1/v_l(x_i)) / Σ_{k=1}^K (1/v_k(x_i)).
The precision weight β_l(x_i) can be interpreted as an initial guess for the weight function w_l(x_i), for l = 1, . . ., K and i = 1, . . ., n.
Since the prior of the terminal node parameter changes conditional on the tree structure, each β_pj is chosen separately from the other terminal node parameters. Given the precision weights for the EFTs and n training points, each component of the prior mean vector, β_pjl, is chosen by β_pjl = (1/m) Σ_{i=1}^n β_l(x_i)1(x_i ∈ η_pj) / Σ_{i=1}^n 1(x_i ∈ η_pj), where 1(x_i ∈ η_pj) is the indicator that x_i is assigned to the terminal node η_pj. A confidence interval for each terminal node parameter can be set to have a length of 1/m in order to ensure each tree is a weak learner. This is done by setting τ = 1/(2km).
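A sketch of the precision weights and the node-level prior mean follows. The averaging form for β_pjl (mean of the pointwise precision weights over the node's training points, scaled by 1/m) is an assumption consistent with the description above, and the variance values are hypothetical:

```python
def precision_weights(variances):
    """Pointwise precision weights:
    beta_l(x) = (1/v_l(x)) / sum_k (1/v_k(x))."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    return [iv / s for iv in inv]

def node_prior_mean(node_idx, betas, m):
    """Informative prior mean for one terminal node: average the
    precision weights over the training points assigned to the node,
    scaled by 1/m so the m trees sum to the pointwise initial guess."""
    sel = [betas[i] for i in node_idx]
    K = len(sel[0])
    return [sum(b[l] for b in sel) / (m * len(sel)) for l in range(K)]

# Truncation variances for K = 2 EFTs at three inputs in one node:
betas = [precision_weights(v)
         for v in [[0.1, 0.9], [0.2, 0.8], [0.5, 0.5]]]
beta_pj = node_prior_mean([0, 1, 2], betas, m=10)
```

Note how the EFT with smaller truncation variance receives the larger initial weight in each row of `betas`, which is the localized prior information the informative prior encodes.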

Variance Prior
A conjugate scaled inverse chi-squared distribution with hyperparameters ν and λ is assigned to the error variance σ². To calibrate the prior, first select a value of ν to reflect the desired shape of the distribution. Common values of ν range from 3 to 10. Before selecting a value for λ, one needs an initial estimate of the error variance to help set the prior around a range of plausible values of σ².
Given the model set and the corresponding point predictions at each of the training points, f̂_l(x_i), one can use a lightly data-informed prior by setting

σ̂² = max_{l=1,...,K} min_{i=1,...,n} (y_i − f̂_l(x_i))².

Since a common belief is that each model yields accurate approximations of the true system over some subregion of the domain, one should expect the set of minimum squared differences across the K models to reveal reliable information about the true error variance. Given this information, one strategy is to set σ̂² equal to the mean or mode of a λν/χ²_ν distribution. The value of λ is then found by solving the resulting equation.
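Solving for λ amounts to matching σ̂² to the mode (or mean) of the scaled inverse chi-squared prior, using mode = νλ/(ν + 2) and mean = νλ/(ν − 2) for ν > 2. A minimal sketch, with an illustrative σ̂²:

```python
def calibrate_lambda(sigma2_hat, nu, use="mode"):
    """Solve for lambda so that the scaled inverse chi-squared prior on
    sigma^2 has its mode (or mean) at the initial estimate sigma2_hat.
    mode = nu*lambda/(nu+2); mean = nu*lambda/(nu-2) for nu > 2."""
    if use == "mode":
        return sigma2_hat * (nu + 2) / nu
    return sigma2_hat * (nu - 2) / nu

# illustrative calibration: sigma2_hat = 0.01, nu = 5
lam = calibrate_lambda(0.01, nu=5, use="mode")
```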

EFT Examples
This section applies the proposed model mixing methodology to three different examples. Section 5.1 demonstrates the success of the BART-based mixing approach on two univariate EFT examples, which are introduced in Section 3. A multi-dimensional example is highlighted in Section 5.2 using simulators based on Taylor series expansions of a trigonometric function.
Though this last example does not involve a true underlying physical system, the model set considers simulators which share similar qualities with EFTs that have double expansions (see Burgess (2020)).
Each example highlights specific features of the proposed BART-based mixing model, such as flexible basis functions for the weights and the associated prior regularization.

Example 1: Mixing Univariate EFTs
This section applies the BART model mixing (BART-BMM) method to various EFTs over a one-dimensional domain. For comparison, Hierarchical Stacking (HS) is also applied to the same set of EFTs. In both EFT examples, 20 observations are independently generated according to

y_i = f†(x_i) + ε_i,  ε_i ∼ N(0, σ²),

where i = 1, ..., 20, σ = 0.005, and f†(x) is defined in (1). The 20 training points are located at inputs evenly spaced over the interval 0.03 to 0.50. The error standard deviation of 0.005 was selected to mimic a controlled experiment setting. Each EFT model is fit using n_c = 4 evaluations of the corresponding finite-order expansion.
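The data-generating setup above can be sketched as follows. Since the definition of f†(x) in (1) is not reproduced in this excerpt, `f_true` below is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 20, 0.005
x = np.linspace(0.03, 0.50, n)   # evenly spaced training inputs

def f_true(x):
    # stand-in for the true system f^dagger(x) defined in (1),
    # which is not reproduced in this excerpt
    return np.sqrt(x)

# y_i = f^dagger(x_i) + eps_i, eps_i ~ N(0, sigma^2)
y = f_true(x) + rng.normal(0.0, sigma, size=n)
```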

Example 1a: Mixing Two EFTs
First consider mixing the EFTs based on the second-order weak coupling expansion, f_s^{(2)}(x), and the fourth-order strong coupling expansion, f_l^{(4)}(x), as shown in Figure 1(a). The true system f†(x) lies between both EFTs across the entire domain, hence a convex combination of the predictions from both EFTs is appropriate for recovering the true system. The BART-BMM model is fit using 10 trees and k = 5.0. Meanwhile, the HS unconstrained weight functions are defined using a linear basis in x. The results of the BART-BMM method and HS are shown in Figure 3.
In terms of the root mean squared error (RMSE) between the predicted system and the true f†(x), the BART-BMM model yields more accurate mean predictions than HS, with RMSE values of 0.0053 and 1.9460, respectively. The RMSE for the HS result is inflated by the diverging mixed prediction in the left portion of the domain; for example, the RMSE for the HS model over the interval [0.1, 0.5] drops to 0.0717. Additionally, from Figure 3 it is evident that BART-BMM results in predictions of f†(x) with lower uncertainty than those from HS.
The weight functions in Figure 3 also take similar sigmoid-like shapes; however, the HS solution displays a high degree of uncertainty. The most noticeable difference between the two methods can be seen in the weight function of f_l^{(4)}(x) (dotted). In particular, the curve in the BART-BMM result increases at a quicker rate in the sub-region [0.3, 0.4] compared to the HS result. This slower rate of increase contributes to the poor prediction from HS in this sub-region. Another difference is observed in the region [0.03, 0.15]: the weight of f_l^{(4)}(x) under the HS approach is near 0, yet it is not small enough to negate the effect of the drastically diverging mean prediction from f_l^{(4)}(x). Meanwhile, the BART-BMM weight is shrunk close to 0 with minimal uncertainty due to the prior regularization. Another advantage of BART-BMM is that the weight functions are learned throughout the MCMC via the tree models. This differs from HS, which requires specification of a basis for the unconstrained weights a priori. In this example, one may consider a different basis function, as the specified linear basis appears to be inadequate for ascertaining high-fidelity mean predictions across the entire domain.

Example 1b: Mixing Two Convex EFTs
Now, consider a second model set, shown in Figure 1(b), which replaces f_s^{(2)}(x) with f_s^{(4)}(x). Both EFTs overestimate f†(x) in the intermediate range, hence weights which are confined to a simplex are unable to recover the true system. In this case, a piecewise basis function is assigned to the unconstrained HS weight. This basis was chosen to roughly reflect the areas where the mean predictions begin to change at differing rates. Other selections of the partitions for a piecewise basis are equally valid.
The BART-BMM and HS results are shown in Figure 4. Once more, the BART-BMM approach outperforms HS in terms of mean prediction, with RMSE values of 0.0057 and 0.1141, respectively.
Most notably, the HS solution is unable to accurately predict the true system in the intermediate range of the domain due to the simplex constraint on the model weights. Meanwhile, the BART-BMM approach is able to recover the system across the entirety of the domain due to the prior regularization approach taken with the weights, which does not impose such strict constraints.
In this HS result, it appears the piecewise basis was more effective than the linear basis for predicting the true system in the left and right portions of the domain. This further poses the question of how to select the partitions induced by the piecewise basis, as different choices may lead to drastically different results. This question served as the motivation for defining a BART-based model, which adaptively learns these partitions based on the observational data and the model set.

Example 2: Multi-Dimensional Mixing
The proposed model mixing approach is also applicable to computer experiments which depend on multi-dimensional inputs. To demonstrate this, consider a 2-dimensional problem where the true underlying system is defined by

f†(x) = sin(x_1) + cos(x_2).

A set of 80 training points is generated from this true system with an observational error standard deviation of 0.1. Additionally, two candidate models are considered, each with a simulator defined in terms of truncated Taylor series expansions of s(x_1) := sin(x_1) and c(x_2) := cos(x_2), where s^{(j)}(x_1) and c^{(k)}(x_2) denote the jth and kth derivatives of sin(x_1) and cos(x_2), respectively.
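Simulators of this type can be sketched with generic truncated Taylor expansions. The truncation orders below are hypothetical, except that the sin expansion in h_2 uses 7 non-zero (odd-order) terms, i.e. order 13, as stated in the text:

```python
import math

def taylor_sin(x, center, order):
    """Taylor expansion of sin about `center`, truncated at `order`.
    The j-th derivative of sin cycles sin -> cos -> -sin -> -cos."""
    derivs = [math.sin, math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)]
    return sum(derivs[j % 4](center) * (x - center) ** j / math.factorial(j)
               for j in range(order + 1))

def taylor_cos(x, center, order):
    """Taylor expansion of cos about `center`, truncated at `order`.
    The k-th derivative of cos cycles cos -> -sin -> -cos -> sin."""
    derivs = [math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t), math.sin]
    return sum(derivs[k % 4](center) * (x - center) ** k / math.factorial(k)
               for k in range(order + 1))

# hypothetical simulators: h1 centers both expansions at pi, h2 at -pi;
# orders other than the 13th-order sin expansion in h2 are illustrative
def h1(x1, x2):
    return taylor_sin(x1, math.pi, 7) + taylor_cos(x2, math.pi, 6)

def h2(x1, x2):
    return taylor_sin(x1, -math.pi, 13) + taylor_cos(x2, -math.pi, 6)
```

Each simulator is exact at its expansion center and degrades as the input moves away from it, mirroring the localized accuracy described above.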
Note, the first simulator h_1(x) centers both Taylor series expansions about π, hence it produces relatively accurate predictions of the system in the upper right corner of the domain and diverges when moving towards the negative portion of the domain. Meanwhile, h_2(x) is composed of Taylor series expansions centered about −π, which produce accurate predictions in the negative portion of the domain. One key difference between the simulators is that h_2(x) contains a highly accurate approximation of sin(x_1) across the entire interval [−π, π], because its corresponding Taylor series expansion is composed of 7 non-zero terms. Thus, even though the expansions of sin(x_1) and cos(x_2) are centered about −π, one would expect h_2(x) to result in accurate predictions of f†(x) across the bottom half of the domain.

The theoretical predictions from each model, f_1(x) and f_2(x), can be defined using the additive form f_l(x) = h_l(x) + δ_l(x), where δ_l(x) represents the unknown higher-order corrections and l = 1, 2. Due to the nature of this example, no model is postulated for δ_l(x). Consequently, the estimated theoretical predictions at each training point x_i are obtained by f̂_l(x_i) = h_l(x_i). Note, in a multi-dimensional EFT setting, the strategy discussed in Section 3 remains applicable.

Figure 5: (Left) The mean difference between the predicted system f̂†(x) and the true system f†(x). (Center) The mean weight function for h_1(x). (Right) The mean weight function for h_2(x).
The results from a 30-tree BART-BMM model are shown in Figure 5. The second and third plots illustrate the posterior mean weight functions for each simulator.
Based on the middle plot, the first simulator has increasing utility as x_1 and x_2 both increase. This is to be expected, as h_1(x) is composed of two expansions centered about π. Note, the mean value of w_1(x) does not reach 1 in the upper right corner of the domain because the simulator slightly overestimates the peak of f†(x) in this region. Meanwhile, the posterior mean of w_2(x) indicates h_2(x) has high utility for x ∈ [−π, π] × [−π, 0], which is to be expected given the nature of the expansions included in this simulator. Moreover, the predictions from h_2(x) appear to align closely with the data, and thus the true system, as evidenced by weights approaching values near 1 in the bottom half of the domain.

Discussion
A variety of frequentist and Bayesian approaches are available for model averaging and mixing.
Each method involves estimating the overall predictive mean or density based on the individual models. The selection between these two objectives should ultimately be guided by the underlying statistical inference one wishes to ascertain. In computer experiments, a primary objective is to recover the underlying system, which is generally expressed as the mean function in an additive model for the observational data. Hence, a mean estimation approach is more desirable when working within this setting compared to a predictive density estimation approach, which models the data with the intention of predicting a future observation ỹ.
Example 5.1 compares the proposed mean estimation method with a density estimation method, Hierarchical Stacking (HS). In HS, the weight functions are learned relative to leave-one-out (LOO) predictive densities under a simplex constraint. These LOO densities incorporate information regarding the mean and variance of each EFT at a given x. In portions of the domain where a model rapidly diverges, the resulting LOO predictive density is shrunk towards 0. In turn, the corresponding weight function will approach 0; however, it may struggle to obtain a small enough value to shrink out the effect of the diverging mean. Meanwhile, shrinking the effect of a diverging prediction appears to be easier when mixing the mean predictions from each EFT. The primary objective of the weight functions is to re-scale the predictions given by each individual model so that a linear combination of these predictions can adequately recover the true system. Given the prior regularization method applied to the weight functions, exact interpretation of the resulting values can be unclear. However, from this regularization perspective, one can conclude that weight functions which fall close to 0 within a particular subregion indicate that the corresponding model is unnecessary for the overall prediction. Meanwhile, a model which is the unique local expert within a particular region should be weighted by values close to 1. Overall, a joint interpretation of the weight functions is appropriate, particularly in regions where the weights concentrate around values away from 0 or 1. These features are observed across each example.
The benefit of the proposed regularization approach can further be understood through the posterior distribution of the sum of the weight functions, w_sum(x) = Σ_{l=1}^K w_l(x), shown in Figure 6. Even though a sum-to-one property is not strictly imposed, it appears to naturally occur in this situation, where an interpolation of the competing models is appropriate. Meanwhile, the posterior of w_sum(x) from Example 1b (left, dashed) drops significantly below 1 in the intermediate range of the domain because both EFTs overestimate the true system, which renders a convex combination inappropriate. Similar features are observed in the 2-dimensional example, as the mean of w_sum(x) concentrates around 1 in areas where at least one of the simulators aligns well with the true system. Meanwhile, when neither simulator is accurate (i.e. the top left corner), the mean value of w_sum(x) is drastically below 1. From these observations, it appears the BART-BMM approach benefits by not imposing strict assumptions, such as a simplex constraint, on the weights.

In conclusion, this work proposes a Bayesian treed framework to mix predictions from a set of competing models, each of which is intended to explain the physical system across a subregion of the domain. This approach falls within the class of problems referred to as Bayesian model mixing, as input-dependent weights are defined to reflect the localized behavior of each model.
The weight functions are modeled using a sum-of-trees and are regularized via a multivariate Gaussian prior. The tree bases coupled with the regularization approach allow the weights to be learned in a flexible non-parametric manner, free of strict constraints. Using the weight functions, predictions from the individual models are mixed via a linear combination. The success of this mixing approach is demonstrated on three examples, each of which considers models with localized predictive performance. Leveraging the localized behavior of the individual models leads to significant improvements in the posterior prediction and uncertainty quantification of f†(x) and the overall interpretation of the system compared to existing global and local weighting schemes.

References

Yao, Y., Pirš, G., Vehtari, A. and Gelman, A. (2021), "Bayesian hierarchical stacking: Some models are (somewhere) useful", Bayesian Analysis 1(1), 1-29.

Yao, Y., Vehtari, A., Simpson, D. and Gelman, A. (2018), "Using stacking to average Bayesian predictive distributions", Bayesian Analysis 13(3), 917-1007.

Then, from (10) and (11), the marginal likelihood simplifies accordingly.

The Full Conditional Distribution of µ_pj

Now consider the full conditional posterior distribution of the terminal node parameter µ_pj. Using Bayes' rule,

π(µ_pj | R_pj, T_j, σ²) ∝ L(R_pj | T_j, µ_pj, σ²) π(µ_pj | T_j).

A conjugate prior is assumed for µ_pj, thus the terms in the likelihood and prior can be rearranged to obtain a Normal kernel for the posterior distribution. The resulting expression is the kernel of a multivariate Gaussian distribution with mean Ab and covariance matrix A.

Thus it follows that

µ_pj | R_pj, T_j, σ² ~ind~ N_K(Ab, A).

Replacing A and b with their respective definitions yields the closed-form expressions for the posterior mean and covariance.

The Full Conditional Distribution of σ²

Finally, consider the full conditional posterior for the error variance,

π(σ² | Y, T, M) ∝ L(Y | T, M, σ²) π(σ²),

where Y = (y_1, ..., y_n)′, T = {T_1, ..., T_m}, and M = {M_1, ..., M_m}.
The parameters in (13) are learned using a set of evaluations from the kth-order expansion, for k = 0, ..., N, at n_c design inputs x_1^c, ..., x_{n_c}^c. Define the set of evaluations of the expansions at the ith design point by H(x_i^c) = {h^{(0)}(x_i^c), ..., h^{(N)}(x_i^c)}. Given Q(x) and y_ref(x), these evaluations are used to extract the observed finite-order coefficients at each design point, C(x_i^c) = {c_0(x_i^c), ..., c_N(x_i^c)}. A likelihood is formed using C(x_1^c), ..., C(x_{n_c}^c) and (13). The unknown parameters θ are estimated through their resulting posterior distributions given these observed coefficients.
The truncation error accounts for the remaining unknown terms in the series, thus δ^{(N)}(x) is modeled using a similar factorization. Using (13) and (14) along with properties of multivariate normal distributions (Ravishanker et al., 2021), the induced prior on the truncation error term is given by (15)-(17), with corresponding mean and covariance functions. The unknown parameters in (15)-(17) originate from the coefficient model in (12). Thus, the mean and covariance functions which characterize the discrepancy model are also learned using the set of evaluations of the finite-order expansions at the n_c design points. This is a unique property of EFTs, as observational data is not required to learn the model discrepancy.
When the finite-order expansion is computationally inexpensive to evaluate, the induced prior on the theoretical predictions, f^{(N)}(x) = h^{(N)}(x) + δ^{(N)}(x), is given by a GP with mean m_th(x) = h^{(N)}(x) + m_δ(x) and covariance Σ_th(x, x′) = c̄²R_δ(x, x′). In the computationally expensive case, a GP can instead be used to emulate the finite-order expansion.

Figure 1: Three different EFT experimental settings. Each panel displays the true physical system (solid) and the mean predictions from the EFTs under consideration (non-solid).

Figure 2: The posterior mean prediction of f†(x) when applying BMA to the 2nd order weak and 4th order strong coupling expansions.
Section 2 reviews relevant work related to model averaging, model mixing, and BART. Section 3 introduces the essential features of EFTs, while Section 4 outlines the specifics of the proposed BART-based framework. Three motivating EFT examples are presented in Section 5. Finally, Section 6 provides a detailed discussion of the results presented throughout this work. Full derivations of the methodology are provided in the appendix. Additional examples and information regarding EFTs are provided in the online supplement.
which is a weighted average of the posterior densities with respect to each model. Each weight is defined in terms of its corresponding posterior model probability, i.e. w_l = π(M_l | D), where

π(M_l | D) = p(D | M_l) π(M_l) / Σ_{k=1}^K p(D | M_k) π(M_k)

and p(D | M_l) is the marginal likelihood of the data with respect to the lth model. Though BMA is useful, it has been criticized for emphasizing a fit to the training data as opposed to out-of-sample prediction and for asymptotically selecting a single model (inappropriate in the M-complete and M-open settings).

The truncation errors are modeled with Gaussian processes (GPs) (Gramacy, 2020; Santner et al., 2018). As described by Melendez et al. (2019), the parameters in both truncation error models are dependent upon the evaluations of their corresponding finite-order expansions (described in (2) and (3), respectively) over a sparse grid of points. The discrepancy model also depends on physical quantities, Q(x) and y_ref(x), which are chosen based on domain expertise. The relationship between these quantities and the discrepancy is summarized in the supplementary material. When Q(x) and y_ref(x) are unknown, one can alternatively use the error approximation described by Semposki et al. (2022).
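The posterior model probabilities above can be computed stably from log marginal likelihoods using the log-sum-exp trick. A minimal sketch, with illustrative log-marginal-likelihood values:

```python
import numpy as np

def bma_weights(log_marglik, log_prior=None):
    """Posterior model probabilities pi(M_l | D) from log marginal
    likelihoods, computed stably via the log-sum-exp trick."""
    log_marglik = np.asarray(log_marglik, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_marglik)   # uniform prior over models
    log_post = log_marglik + log_prior
    log_post -= log_post.max()                   # stabilize the exponentials
    w = np.exp(log_post)
    return w / w.sum()

# illustrative values: model 1 fits best, model 3 worst
w = bma_weights([-100.0, -102.0, -110.0])
```

Note how the weights concentrate on the best-fitting model as the gap in log marginal likelihood grows, illustrating the asymptotic single-model selection criticized above.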
Predictions constructed from (2) and (4) are shown by the dashed line in Figure 1(a). Clearly, this model is limited to strong predictive accuracy in only the left subregion of the domain. When available, one can consider different finite-order approximations of the same EFT. For example, consider the 2nd, 4th, and 6th order coupling expansions shown in Figure 1(c). The three models are very similar for lower coupling constants yet drastically differ in the remainder of the domain. Despite each expansion's poor theoretical predictions, one can still recover the true system by mixing the individual predictions.

Figure 4: The predicted mean (dark) and 95% credible intervals (shaded) when mixing f_s^{(4)}(x) and f_l^{(4)}(x).

The leftmost plot displays the absolute value of the mean residuals, |f†(x) − f̂†(x)|, where f̂†(x) denotes the mean prediction from the BMM model. Based on the residual plot, it appears f†(x) is adequately recovered across the majority of the domain, with an RMSE of 0.2575. As expected, the error in the mean prediction noticeably increases in the upper left corner of the domain, where only two training points are included and both simulators are inaccurate.

Figure 6: (Left) The posterior mean estimates and 95% credible intervals (shaded) of the sum of the weight functions, w_sum(x) = Σ_{l=1}^K w_l(x), from Examples 1a and 1b (solid and dashed). (Right) The posterior mean estimate of the sum of the weight functions in Example 2.

The posterior of w_sum(x) from Example 1a (Figure 6, left panel, solid) is centered very close to 1 with relatively small amounts of uncertainty. This results from (i) the prior regularization and (ii) the fact that f†(x) lies between the selected EFTs, which indicates a convex combination is appropriate.

Finally, the weight functions can be used to better understand the M-open assumption associated with the model set. An initial confirmation of the M-open setting can be made when the weight functions noticeably change as a function of the inputs. This observation indicates localized performance of each model, hence one can confirm the true system is not contained in the set. If the weight functions are nearly constant, one may also wish to check the posterior of w_sum(x) to see if the sum of the weights is fixed close to 1. Such a case may suggest model averaging with a simplex constraint could also be an appropriate solution. This alone is not enough to confirm or deny the M-open assumption, but it may indicate that the M-complete or M-closed assumptions are possible for the model set. A final case to consider is the situation where a single model receives a weight near 1 while the effects of the competing models are shrunk to 0 across a subregion of the domain. This situation may indicate the model set is M-closed conditional on the subregion of interest, despite falling in the M-open case when considering the entire domain.