Improved Uncertainty Quantification for Neural Networks With Bayesian Last Layer

Uncertainty quantification is an important task in machine learning - a task in which standard neural networks (NNs) have traditionally not excelled. This can be a limitation for safety-critical applications, where uncertainty-aware methods like Gaussian processes or Bayesian linear regression are often preferred. Bayesian neural networks are an approach to address this limitation. They assume probability distributions for all parameters and yield distributed predictions. However, training and inference are typically intractable and approximations must be employed. A promising approximation is NNs with Bayesian last layer (BLL). They assume distributed weights only in the linear output layer and yield a normally distributed prediction. To approximate the intractable Bayesian neural network, point estimates of the distributed weights in all but the last layer should be obtained by maximizing the marginal likelihood. This has previously been challenging, as the marginal likelihood is expensive to evaluate in this setting. We present a reformulation of the log-marginal likelihood of a NN with BLL which allows for efficient training using backpropagation. Furthermore, we address the challenge of uncertainty quantification for extrapolation points. We provide a metric to quantify the degree of extrapolation and derive a method to improve the uncertainty quantification for these points. Our methods are derived for the multivariate case and demonstrated in a simulation study. In comparison to Bayesian linear regression with fixed features, and a Bayesian neural network trained with variational inference, our proposed method achieves the highest log-predictive density on test data.

Index Terms-Bayesian last layer, Bayesian neural network, uncertainty quantification

I. INTRODUCTION
Machine learning aims to capture patterns and trends from data. Both the data and the identified patterns are subject to uncertainty [1], [2]. For many applications, especially those where machine learning is applied to safety-critical tasks, it is imperative to quantify the uncertainty of the predictions. An important example is learning-based control, where probabilistic system models are identified from data and used for safe control decisions [3]-[8].
Bayesian linear regression (BLR) [3]-[7] and Gaussian processes (GPs) [7], [8] are prominent methods for probabilistic system identification. Both assume that a nonlinear feature space can be mapped linearly to the outputs. As a main difference, BLR requires the features to be explicitly defined, while for GPs the features are implicitly defined through a kernel function [9]. This is an advantage of GPs, as the most suitable features for BLR are often challenging to determine. On the other hand, GPs scale poorly with the number of data samples [10]. For big data problems, they are typically approximated with sparse GPs [11], and can be further improved with deep kernel learning [12], [13].
Especially in recent years, neural networks (NNs) and deep learning have gained significant popularity for a vast variety of machine learning tasks [14]. NNs have also been successfully applied to control applications [15], [16], often to infer the system model from data [17]-[19]. A challenge with NNs is their tendency to overfit and their inability to express uncertainty [1]. Bayesian neural networks (BNNs), in which the weights and predictions are probability distributions, are a concept to tackle this shortcoming. In practice, BNNs can be intractable to train and query and are often approximated [1], [2]. Most approximate BNN approaches fall into one of two categories: Markov chain Monte Carlo (MCMC) and variational inference (VI) [1]. MCMC methods do not require a classical training phase and instead directly sample the posterior weight distribution of the Bayesian neural network [20]. Unfortunately, MCMC scales poorly to large models, which limits its applicability [1]. Variational inference is a popular alternative to MCMC and is based on the idea of learning the parameters of a surrogate posterior distribution of the weights [1], [2]. For neural networks, variational inference is often implemented with the Bayes by Backprop algorithm [21].
Unfortunately, even a BNN trained with VI requires sampling to approximate the predictive distribution [1]. This can be a significant disadvantage in comparison to GPs and BLR, which yield analytical results. A promising compromise between tractability and expressiveness is offered by NNs with Bayesian last layer (BLL) [10], [22]. NNs with BLL can be seen as a simplified BNN, where only the weights of the output layer follow a Gaussian distribution, and the remaining layers contain deterministic weights. At the same time, they can be interpreted as a deep kernel learning approach with a linear kernel. NNs with BLL are also strongly related to BLR in that they consider a nonlinear feature space which is mapped linearly onto the outputs. Similarly to GPs, BLR and other probabilistic models [9], [23], NNs with BLL are trained by maximizing the marginal likelihood with respect to the parameters of the probabilistic model. These parameters include the prior and noise variance and, importantly, the weights of the deterministic layers. While the log-marginal likelihood (LML) can be expressed analytically for NNs with BLL, it contains expressions such as the inverse of the precision matrix, making it unsuitable for direct gradient-based optimization. Previous works, especially for control applications, have therefore either assumed knowledge of all parameters [3], [5], [19], including prior and noise variance, or have maximized the LML after training a NN with fixed features [4], [24]. In previous works that did include the deterministic weights as parameters, maximizing the LML required sampling the surrogate posterior during training [22], or using an approximate precision matrix [12] to enable gradient-based optimization.
As a main contribution of this work, we propose an approach to maximize the exact LML of a NN with BLL that does not require sampling and is suitable for gradient-based optimization. Most importantly, we avoid the matrix inverse in the LML by reintroducing the weights of the last layer, which were marginalized, as optimization variables. We show that our reformulation of the LML satisfies the conditions of optimality for the same solution as the original formulation. In this way, we provide a simpler training procedure in comparison to training with variational inference, and can outperform BLR with fixed features obtained from a NN. Our second main contribution is an algorithm to improve the uncertainty quantification for extrapolation points. To this end, we relate the computation of the BLL covariance to an intuitive metric for extrapolation, which is inspired by the definition of extrapolation in [25]. Based on this relationship, the proposed algorithm adjusts a scalar parameter to improve the log-predictive density on additional validation data. Our proposed methods are derived for the multivariate case, estimating individual noise variances for each output. This is in contrast to GP and BLR applications, where the multivariate case is typically tackled by fitting an individual model to each output [4], [7], [8] or by assuming i.i.d. noise for all outputs [5]. The advantage of a single model for the multivariate case is apparent for the application of system identification for optimization-based control, as multiple models may significantly increase computation times.
This work is structured as follows. In Section II, we introduce NNs with BLL. The marginal likelihood, which is maximized during training, is discussed in Section III. In Section IV, we discuss interpolation and extrapolation for NNs with BLL and present an algorithm to improve the predictive distribution in the extrapolation regime. We discuss the special considerations required for the multivariate case in Section V. The Bayes by Backprop method is introduced in Section VI. A simulation example to compare the proposed method with BLR and Bayes by Backprop is presented in Section VII. The paper is concluded in Section VIII.

II. BAYESIAN LAST LAYER
We investigate a dataset D = {X, t} consisting of m data pairs of inputs x ∈ R^{n_x} and targets t ∈ R, from which the matrices X = [x_1, . . . , x_m]^⊤ ∈ R^{m×n_x} and t = [t_1, . . . , t_m]^⊤ ∈ R^{m×1} are formed. We assume a scalar output for ease of notation and address the multivariate case in Section V. Regarding the notation, lower-case bold symbols denote vectors, upper-case bold symbols denote matrices, and regular lower-case symbols are scalars. For the regression task, we introduce a feed-forward NN model with L hidden layers as
y = (h_{L+1} ∘ g_L ∘ h_L ∘ · · · ∘ g_1 ∘ h_1)(x),  (1)
where ∘ denotes function composition. At each layer, we have a linear mapping h_l(·) followed by a nonlinear activation function g_l(·), that is:
a_l = g_l(h_l(a_{l−1})) for all l ∈ I_[1,L].  (2)
The set I_[1,L+1] denotes the set of integers from 1 to L+1. We have a_0 = x, a_{L+1} = y, and a_l ∈ R^{n_{a,l}} for all l ∈ I_[1,L]. The number of neurons in layer l is denoted as n_{a,l}, and we have the weights W_l ∈ R^{(n_{a,l−1}+1)×n_{a,l}}, which include the bias term.
The set of weight matrices with cardinality L + 1 is denoted W_{L+1} = {W_1, . . . , W_{L+1}}. As a requirement for NNs with BLL, we state the following assumption.
Assumption 1. The output layer of the NN (1) is linear, that is, y = h_{L+1}(a_L).
With the linear mapping, the output of the last internal layer is of particular importance and we introduce the notation:
φ(x; W_L) = a_L,  (3)
ϕ(x; W_L) = [φ(x; W_L)^⊤, 1]^⊤,  (4)
where φ ∈ R^{n_φ} are referred to as linear features and ϕ ∈ R^{n_ϕ} are the corresponding affine features with n_ϕ = n_φ + 1.
With slight abuse of notation, we use φ = φ(x; W_L) and ϕ = ϕ(x; W_L) to denote both the values of the features and the functions, parameterized by W_L. We require two additional assumptions to state Lemma 1, which formally describes a NN with BLL.
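To make the feature notation concrete, the maps φ and ϕ can be sketched as a plain NumPy forward pass. The tanh activation, layer sizes, and random initialization below are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

def init_params(layer_sizes, seed=0):
    """Random weights W_l of shape (n_{l-1}+1, n_l); the extra row is the bias."""
    rng = np.random.default_rng(seed)
    return [rng.normal(scale=0.5, size=(n_in + 1, n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def features(x, weights):
    """Forward pass through all hidden layers; returns the linear features phi
    (output of the last hidden layer) and the affine features [phi, 1]."""
    a = np.atleast_2d(x)
    for W in weights[:-1]:  # hidden layers, here with tanh activation
        a = np.tanh(np.hstack([a, np.ones((a.shape[0], 1))]) @ W)
    phi = a
    return phi, np.hstack([phi, np.ones((phi.shape[0], 1))])

def predict(x, weights):
    """Deterministic NN output: linear last layer applied to affine features."""
    _, phi_aff = features(x, weights)
    return phi_aff @ weights[-1]
```

The last weight matrix plays the role of w from Assumption 2; `features` exposes exactly the quantities φ and ϕ that the BLL construction operates on.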
Assumption 2. The NN (1) provides a feature space ϕ(x; W_L) ∈ R^{n_ϕ} from which the targets are obtained through a linear mapping, according to Assumption 1:
t = Φw + ϵ, with Φ = ϕ(X; W_L) ∈ R^{m×n_ϕ},  (5)
where we introduce the set W_L = {W_1, . . . , W_L} for all weights until layer L and have w = W_{L+1}, the weights of the output layer. The additive noise ϵ ∈ R^m is zero-mean normally distributed, that is, ϵ ∼ N(0, Σ_E).
Assumption 3. We have a prior belief for the weights of the output layer: w ∼ N(0, Σ_W).
We introduce the parameters of the probabilistic model (5) as:
Θ = {W_L, Σ_E, Σ_W},  (6)
and state the posterior distribution of the weights w of the last layer using Bayes' law:
p(w|D, Θ) = p(t|w, X, Θ) p(w|Θ) / p(t|X, Θ).  (7)
A neural network with Bayesian last layer has deterministic weights in all hidden layers and distributed weights in the output layer. We obtain an analytical expression for the distribution of the predicted outputs, as shown in the following lemma [22].
Lemma 1. Let Assumptions 1-3 hold. The predicted outputs are normally distributed with:
y(x) ∼ N(μ_y(x), Σ_y(x)),  (8a)
μ_y(x) = ϕ^⊤ w̄,  (8b)
Σ_y(x) = ϕ^⊤ Λ_p^{−1} ϕ,  (8c)
where Φ = ϕ(X; W_L) is the feature matrix for the training data, ϕ = ϕ(x; W_L) is the feature vector for a test point x, and with the precision matrix
Λ_p = Φ^⊤ Σ_E^{−1} Φ + Σ_W^{−1}.  (9)
Proof. The posterior p(w|D, Θ) in (7) yields a normal distribution in the weights:
p(w|D, Θ) = N(w̄, Λ_p^{−1}),  (10)
with:
w̄ = Λ_p^{−1} Φ^⊤ Σ_E^{−1} t,
as shown for the case of arbitrary features in [23]. Finally, the predicted output y is a linear transformation of the random variable w, yielding the distribution in (8).
We can also obtain a posterior distribution for the targets, for which we consider (5) and obtain
p(t|x, D, Θ) = N(μ_y^NN(x), Σ_y^NN(x) + σ_e²),  (11)
where μ_y^NN(x) and Σ_y^NN(x) stem from (8) and σ_e² denotes the noise variance of a single sample. We will revisit (11) again for the definition of performance metrics.
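The quantities of Lemma 1 can be sketched in a few lines, assuming i.i.d. noise Σ_E = σ_e²I and an isotropic prior Σ_W = σ_w²I (i.e., without the flat bias prior introduced later); these are the standard Bayesian linear regression formulas applied to the learned features.

```python
import numpy as np

def bll_posterior(Phi, t, sigma_e, sigma_w):
    """Posterior N(w_bar, Lambda_p^{-1}) of the last-layer weights, cf. (9)-(10),
    with Lambda_p = Phi^T Phi / sigma_e^2 + I / sigma_w^2."""
    n = Phi.shape[1]
    Lam = Phi.T @ Phi / sigma_e**2 + np.eye(n) / sigma_w**2
    w_bar = np.linalg.solve(Lam, Phi.T @ t) / sigma_e**2
    return w_bar, Lam

def bll_predict(phi, w_bar, Lam, sigma_e):
    """Predictive mean/variance of the output (8) and of the target (11)."""
    mean = phi @ w_bar
    var_y = phi @ np.linalg.solve(Lam, phi)   # epistemic part, phi^T Lam^{-1} phi
    return mean, var_y, var_y + sigma_e**2    # target posterior adds noise variance
```

For near-noiseless data and a wide prior, the posterior mean recovers the least-squares weights, while the target variance in (11) always exceeds the purely epistemic variance in (8c).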

III. MARGINAL LIKELIHOOD MAXIMIZATION
The posterior distribution of the weights of the last layer can be obtained with (10) for given values of the parameters Θ. Determining suitable values of the parameters from prior knowledge is often challenging. Instead, the parameters can be inferred by maximizing the marginal likelihood in (7), which is also known as empirical Bayes or type-2 maximum likelihood [26]. In BLR, that is, for a fixed feature space, the log-marginal likelihood (LML) can be maximized as shown in [23]. Following this idea, the authors in [24] propose an approach where a fixed feature space is obtained through classical NN regression. After NN training, the LML is then maximized with these fixed features. However, the LML is also influenced by the weights of the deterministic layers and the resulting feature space. It is therefore reasonable to include the set of weights W_L until the last layer as parameters and train them by maximizing the LML. Unfortunately, this poses significant challenges.
We proceed by introducing the LML and discuss the challenge of maximizing this expression in the next subsection.
Result 1 (Log-marginal likelihood). The negative LML of the probabilistic model (5) is given by:
J(Θ; D) = −log p(t|X, Θ) = ½ (log det Σ_t + t^⊤ Σ_t^{−1} t + m log 2π), with Σ_t = Φ Σ_W Φ^⊤ + Σ_E.  (12)
Proof. The proof for fixed features is shown in [23].
For practical applications, the formulation in (12) is further simplified by considering the following assumption, where diag(A, B) denotes a block-diagonal matrix with A and B on the diagonal.
Assumption 4. The additive noise introduced in Assumption 2 is i.i.d. for all m samples, yielding Σ_E = σ_e² I_m. The weight prior introduced in Assumption 3 is i.i.d. with a flat prior for the bias, that is, Σ_W^{−1} = σ_w^{−2} diag(I_{n_φ}, 0).
The assumption of a flat prior for the bias term has a negligible effect in practice but is important for the theoretical results presented in Section IV. For ease of notation, we will reuse J(Θ; D) and Θ in the following results.
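Under Assumption 4, the marginal distribution of the targets is Gaussian with covariance Σ_t = σ_e²I_m + ΦΣ_WΦ^⊤. A sketch of the exact negative LML in this function-space form, simplifying the prior to Σ_W = σ_w²I (i.e., ignoring the flat bias prior):

```python
import numpy as np

def neg_lml(Phi, t, sigma_e, sigma_w):
    """Exact negative log-marginal likelihood of t ~ N(0, Sigma_t) with
    Sigma_t = sigma_e^2 I + sigma_w^2 Phi Phi^T (standard BLR, isotropic prior)."""
    m = len(t)
    Sigma_t = sigma_e**2 * np.eye(m) + sigma_w**2 * Phi @ Phi.T
    _, logdet = np.linalg.slogdet(Sigma_t)
    return 0.5 * (m * np.log(2 * np.pi) + logdet + t @ np.linalg.solve(Sigma_t, t))
```

This form makes the computational issue visible: Σ_t is m×m, so every gradient evaluation on the full dataset requires factorizing a matrix that grows with the number of samples.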

A. Augmented log-marginal likelihood maximization
To train a NN with BLL, we seek to minimize the negative LML:
min_Θ J(Θ; D),  (15)
with J(Θ; D) and Θ according to Result 1. This problem excludes the weights in the last layer of the NN, as they have been marginalized. A result of this marginalization is the expression (14b), which computes these weights explicitly.
Unfortunately, this creates a major challenge when iteratively solving problem (15) for the optimal values Θ*, as the computational graph contains unfavorable expressions such as the inverse of Λ_p, which are both numerically challenging and computationally expensive. Furthermore, (14b) requires leveraging the entire training dataset for the computation of the gradient ∇_Θ J(Θ; D). The authors in [22] circumvent these issues with variational inference, replacing the LML objective by the evidence lower bound objective (ELBO). As the variational posterior, they choose a Gaussian distribution, parameterized with mean and covariance. This is the obvious choice in the BLL setting, where the true posterior is Gaussian, as shown in (8), and, consequently, the ELBO objective is equivalent to the LML [22]. Variational inference comes at the cost, however, of introducing the parameters that describe the variational posterior as optimization variables. Furthermore, the variational inference training loop requires sampling this variational posterior.
In this work, we present an alternative approach, which simultaneously avoids parameterizing the variational posterior and yields a computational graph with lower complexity than (13). The resulting formulation is suitable for fast gradient-based optimization. To obtain this result, we reformulate (15) as:
min_{Θ,w} J(Θ; D) subject to the equality constraint (14b),  (16)
with Θ = {W_L, σ_e, σ_w}. Importantly, we introduce w as an optimization variable and add an equality constraint corresponding to (14b). The optimal solution Θ* of (16) is thus identical to the optimal solution of (15). As a main contribution of this work, we state the following theorem.
Theorem 1 (Augmented log-marginal likelihood maximization). The optimal solution (Θ*, w*) of problem (16) is identical to the optimal solution obtained from the unconstrained problem:
min_{Θ,w} J(Θ, w; D),  (17)
where, as the only difference to (15), w is now an optimization variable and, in comparison to (16), the equality constraint has been dropped.
Proof. The Lagrangian of problem (16) can be written as: We then state the first-order condition of optimality for the optimization variable w: Considering (13) and (5), that is, y = Φw, we obtain: Using (14a), we obtain: We substitute (22) into (18) and have: where in (24) we have substituted (14b). From (25), we obtain that λ = 0, and the Lagrangian of problem (16) thus simplifies to: This is exactly the Lagrangian of problem (17), and therefore problems (16) and (17) yield the same optimal solution.
Theorem 1 enables us to maximize the LML (13) without having to explicitly compute w according to Equation (14b). This means, in particular, that we are not required to compute the inverse of Λ_p to express the LML. Consequently, we can use backpropagation and gradient-based optimization to maximize the LML, which significantly simplifies training NNs with BLL.
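The idea behind Theorem 1 can be illustrated numerically. The sketch below is a simplified stand-in for the paper's objective (17): it uses the standard weight-space form of the BLR evidence with w reinstated as a free variable, assuming a full Gaussian prior and no change of variables. No linear system in Λ_p has to be solved; only a log-determinant remains, which is well suited for automatic differentiation.

```python
import numpy as np

def augmented_neg_lml(w, Phi, t, sigma_e, sigma_w):
    """Negative LML with the marginalized weights w reintroduced as free variables.
    Sketch under an isotropic Gaussian prior; the paper's (17) additionally uses a
    flat bias prior and a change of variables. At the minimizing w this equals the
    exact negative LML of the marginal t ~ N(0, sigma_e^2 I + sigma_w^2 Phi Phi^T)."""
    m, n = Phi.shape
    beta, alpha_p = 1 / sigma_e**2, 1 / sigma_w**2
    Lam = alpha_p * np.eye(n) + beta * Phi.T @ Phi
    _, logdet = np.linalg.slogdet(Lam)         # only log det, no inverse of Lam
    return (0.5 * beta * np.sum((t - Phi @ w)**2) + 0.5 * alpha_p * w @ w
            + 0.5 * logdet - 0.5 * n * np.log(alpha_p) - 0.5 * m * np.log(beta)
            + 0.5 * m * np.log(2 * np.pi))
```

Minimizing over w recovers the posterior-mean weights (14b)-style solution, and at that point the augmented objective coincides with the exact negative LML, which is precisely the mechanism Theorem 1 exploits.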

B. Change of variables and scaling
With Theorem 1, we can state the LML as a function of w, which is now included in the set of parameters Θ = {W_L, w, a, b} = {W_{L+1}, a, b}. Furthermore, we propose a change of variables and scale the objective function to improve numerical stability. In particular, we introduce:
α = σ_w² / σ_e²,  (27)
which can be interpreted as a signal-to-noise ratio, and optimize over log(α) and log(σ_e) to ensure that σ_e > 0 and σ_w > 0 without constraining the problem. These changes are formalized in the following result.
To train a NN with BLL and univariate output, we consider in the following the LML and parameters in the form of Result 2. Additionally, the newly introduced parameter α in (27) plays an important role in the following discussion of interpolation and extrapolation and ultimately helps to improve the extrapolative uncertainty.

IV. IMPROVING THE EXTRAPOLATIVE UNCERTAINTY
One of the main challenges of the BLL predictive distribution (8) is that for arbitrary extrapolation points the required assumptions for Lemma 1 will not hold. In this section, we discuss the behavior of the predictive distribution in the interpolation and extrapolation regimes and propose a method to improve the performance of NNs with BLL for extrapolation.
To formalize the notion of interpolation and extrapolation, we follow the definition of interpolation described in [25], for which we also need to define the convex hull.
Definition 1 (Convex hull). The convex hull of a set of samples X ∈ R^{m×n_x} is defined as the set:
C_X = { Σ_{i=1}^m ν_i x_i | ν_i ≥ 0 for all i, Σ_{i=1}^m ν_i = 1 }.
Definition 2 (Interpolation). A sample x is considered to be an interpolation point of a set of samples X, given a feature space Φ(X; W_L) which satisfies Assumptions 1 and 2, if:
φ(x; W_L) ∈ C_{Φ(X;W_L)}.
Interpolation is an attribute of the input space, but its definition considers the learned feature space of the neural network. Considering the feature space in Definition 2 may seem counter-intuitive but applies well to reality, where nonlinear features for regression can show arbitrary behavior between data points, even for a univariate input. In this case, interpolation points in the input domain are rightly classified as extrapolation points by considering the feature domain.
Classifying a point as interpolation or extrapolation is a binary decision. In reality, this is a shortcoming, as different degrees of extrapolation are possible. That is, a point "close" to the convex hull might still lead to a trustworthy prediction. The distance to a set, e.g. the convex hull, is defined below.
Definition 3 (Distance). For a set X ⊂ R^{n_x} and a point x ∈ R^{n_x}, we define the distance d(x, X) as:
d(x, X) = min_{x̃ ∈ X} ∥x − x̃∥₂.

A. Quantification of interpolation and extrapolation

In the following, we seek to define a metric to quantify the degree of extrapolation for a nonlinear regression model with feature space. Intuitively, such a metric could be based on the distance, according to Definition 3, to the convex hull of the features. Unfortunately, the distance to the convex hull results in an optimization problem that scales with the number of samples and the feature dimension, which can be a limitation for practical applications. Instead, we propose an approximate metric for which we introduce the affine cost. As related concepts of the affine cost, we define the well-known span and affine hull.
Definition 4 (Span). The span of a set of samples X ∈ R^{m×n_x} is defined as the set:
S_X = { Σ_{i=1}^m ν_i x_i | ν ∈ R^m }.
Definition 5 (Affine hull). The affine hull of a set of samples X ∈ R^{m×n_x} is defined as the set:
A_X = { Σ_{i=1}^m ν_i x_i | Σ_{i=1}^m ν_i = 1 }.
Definition 6 (Affine cost). The affine cost of a test point x ∈ R^{n_x}, given the data X ∈ R^{m×n_x}, is defined as the optimal cost:
c_{A_X}(x) = min_{ν, e} ∥ν∥₂² + γ∥e∥₂²,  (32)
subject to: X^⊤ν + e = x, Σ_{i=1}^m ν_i = 1,
where the spanning coefficient ν ∈ R^m and the residual variable e ∈ R^{n_x} are optimization variables, and γ ∈ R is a weighting factor for the residuals.
The affine cost naturally complements the definition of the affine hull by computing the norm of the respective spanning coefficients ν from Definition 5. Importantly, test values x ∉ A_X are also assigned a value, for which the γ-weighted norm of the residuals e is considered.
The relationship of convex hull, span, and affine hull, as well as the affine cost, can be inspected in Figure 1. In this figure, two exemplary sets of features Φ ∈ R^{m×n_φ}, both with m = 3 and n_φ = 2, are presented, and we compare the relationship of test to training samples. By comparing, for example, Figures 1 a) and d), we can see that the affine cost has a strong relationship with the convex hull, on which we have based the notion of interpolation in Definition 2. In particular, we consider the level set obtained with the affine cost and a suitable level l as a soft approximation of the convex hull. Such level sets can be seen in Figures 1 d) and h) as the contour lines of c_{A_Φ}. For a test point φ, the affine cost grows with the distance to the level set L_{A_Φ}. In this sense, we consider the distance d(φ, L_{A_Φ}) and, more directly, the affine cost c_{A_Φ}(φ) itself as the desired metric for the degree of extrapolation.
For the behavior of this metric, we distinguish two important cases. In the first case, small values of the affine cost are achieved for test points that are within the affine hull, i.e. φ ∈ A_Φ, and for small distances to the convex hull. According to Definition 2, these include all interpolation points and what we consider mild extrapolation. Both test points in Figure 1 h) are examples of this case.
In the second case, a test point is not within the affine hull, i.e. φ ∉ A_Φ. The affine cost is now influenced primarily through the parameter γ. This can be seen by inspection of (32), where the residuals e must be used if φ ∉ A_Φ. The cost may then be dominated by the term ∥e∥₂², which is weighted with γ. We argue that in the second case the test point can be considered an extrapolation point and γ can be interpreted as a penalty for extrapolation. This case can be observed in Figure 1 d) for the test point with higher affine cost.

B. Relationship of affine cost and covariance
As another main contribution of this work, we introduce Theorem 2 to establish the relationship between the BLL covariance and the previously presented affine cost.
Theorem 2 (BLL affine cost). If Assumption 4 holds, and with γ = α, the affine cost c_{A_Φ}(φ(x)), according to Definition 6, is equivalent to the scaled BLL covariance (8c):
c_{A_Φ}(φ(x)) = σ_e^{−2} Σ_y(x),  (33)
where ϕ and φ describe the features obtained at the last internal layer of the neural network as defined in (4).
Proof. We reformulate the equality constraints of c_{A_Φ}(φ) shown in (32): where 0 and 1 denote matrices filled with zeros or ones, and their respective dimensions are given in the subscript.
Considering also (4), we then introduce: which allows us to reformulate Equation (34) as: By further introducing D = Φ^⊤ M and W = diag(I_m, γ I_{n_φ}), Problem (32) can be stated as: We have a weighted least-squares problem in (36), for which the solution can be obtained as: Substituting p* from (37) into the cost function of (36) yields the affine cost: where Considering that γ = α, as stated in the theorem, and (28)-(29), we have: where Ĩ_{n_ϕ} = diag(I_{n_φ}, 0). Substituting (40) in (38) yields: which concludes the proof.
Theorem 2 establishes the theoretical relationship between the affine cost and the BLL covariance. In the next subsection, we discuss how this relationship can be used to improve the extrapolative uncertainty of NNs with BLL.
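Definition 6 and the closed form appearing in the proof of Theorem 2 can be cross-checked numerically. The sketch below solves the QP (32) in the feature space via its KKT system and compares it with the quadratic form ϕ^⊤(Φ_a^⊤Φ_a + γ^{−1}Ĩ)^{−1}ϕ built from the affine features; this closed form is our own derivation for illustration and mirrors the structure of the σ_e^{−2}-scaled BLL covariance.

```python
import numpy as np

def affine_cost_qp(Phi, phi, gamma):
    """Affine cost (Definition 6) in feature space: solve the equality-constrained QP
    min ||nu||^2 + gamma ||e||^2  s.t.  Phi^T nu + e = phi,  1^T nu = 1
    via its KKT system."""
    m, nf = Phi.shape
    A = np.zeros((nf + 1, m + nf))               # constraints over z = [nu, e]
    A[:nf, :m], A[:nf, m:], A[nf, :m] = Phi.T, np.eye(nf), 1.0
    b = np.append(phi, 1.0)
    H = np.diag(np.append(np.ones(m), gamma * np.ones(nf)))
    K = np.block([[2 * H, A.T], [A, np.zeros((nf + 1, nf + 1))]])
    z = np.linalg.solve(K, np.append(np.zeros(m + nf), b))[: m + nf]
    return z @ H @ z

def affine_cost_closed(Phi, phi, gamma):
    """Closed form c = phi_a^T (Phi_a^T Phi_a + gamma^{-1} I~)^{-1} phi_a with
    affine features phi_a = [phi, 1] and I~ = diag(I, 0) (flat bias prior)."""
    m, nf = Phi.shape
    Phi_a = np.hstack([Phi, np.ones((m, 1))])
    phi_a = np.append(phi, 1.0)
    I_t = np.diag(np.append(np.ones(nf), 0.0))
    return phi_a @ np.linalg.solve(Phi_a.T @ Phi_a + I_t / gamma, phi_a)
```

Since γ only penalizes the residual e, increasing it can never decrease the affine cost, which is the mechanism exploited in the next subsection.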

C. Improving the uncertainty quantification with the affine cost interpretation
The first practical implication of Theorem 2 is that a NN with BLL should indeed be trained by maximizing the marginal likelihood, as discussed in Section III. Using the features of a trained NN and then applying BLR, as previously shown in [4], [24], might lead to suboptimal performance. The reason for this is the log det-regularization of the precision matrix Λ_p, which arises only in the marginal likelihood cost formulation. This regularization encourages a low rank of the feature matrix Φ, so that the affine hull of the features is a proper subspace. Using log det-regularization to obtain matrices with low rank is well known [27] and is also applied in other fields, such as compressed sensing [28].
The second important implication of Theorem 2 is the interpretation of the parameter α (or γ, respectively) in the context of the affine cost from Definition 6. The parameter directly controls the affine cost and thus the variance for test points with φ(x) ∉ A_Φ. This causes a dilemma: naturally, all training samples Φ are within the affine hull of themselves. Therefore, extrapolation in the sense of φ(x) ∉ A_Φ does not occur during training, and α*, which maximizes the LML, might not yield desirable results.
To illustrate the issue, we present a simple regression problem with n_x = n_y = 1 and m = 3 samples. We investigate a NN with n_φ = 2, which allows for a graphical representation of the feature space. The NN is trained by maximizing the LML (30), yielding the optimal parameters Θ*, which include α*. The predicted mean and standard deviation for the trained model using α* can be seen in Figure 2 a). The predicted mean of the NN, in light of the sparse training data, is suitable. However, the variance shows that the prediction is overconfident. We show in Figure 2 b) and c) that this overconfidence can be tackled simply by increasing α relative to the optimal value α*.
The reason for this effect of α on the extrapolation uncertainty can be seen by inspecting Figure 2 d)-f), where the features (recall n_φ = 2) for the test points (φ) and the training points (Φ) are displayed.
As desired, we have that A_Φ ⊂ R^{n_φ}, that is, the training features lie in a proper subspace of R², which can be seen in Figure 2, where the training samples in the feature space can be connected by a straight line. This effect can be attributed to the log det-regularization in the LML. Extrapolation thus occurs for φ ∉ A_Φ, and for these points the extrapolative uncertainty grows with increasing α. Importantly, by considering Definition 2, we also have extrapolation for test points that are within the convex hull of the input space, i.e. x ∈ C_X. In this example, the only true interpolation points are the training samples, for which increasing α has no significant effect on the predicted variance.
In Figure 3, we further investigate the effect of increasing α for the same regression problem and NN architecture as displayed in Figure 2. To quantify the quality of the predictive distribution, we use the log-predictive density (LPD) [29]:
LPD = (1/m_test) Σ_{i=1}^{m_test} log p(t_i | x_i, D, Θ),  (42)
which evaluates the logarithm of the posterior distribution of the targets (11) for all test values and computes the average thereof.
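The LPD metric is straightforward to evaluate; a minimal sketch for the univariate Gaussian target posterior (11), where `var` is the full posterior target variance (model variance plus noise variance):

```python
import numpy as np

def log_predictive_density(t_test, mean, var):
    """Average log N(t_i | mean_i, var_i) over the test set, cf. (42)."""
    t_test, mean, var = map(np.asarray, (t_test, mean, var))
    return np.mean(-0.5 * (np.log(2 * np.pi * var) + (t_test - mean)**2 / var))
```

Higher values are better: the LPD rewards accurate means and penalizes both overconfident and overly flat predictive distributions.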
In Figure 3, we display the log-predictive density (42) and the negative LML (30) for the optimal parameters Θ* and as a function of α. As expected, α* minimizes the negative LML. We see, however, that the optimal value lies almost on a plateau and that increasing α has only a minor effect on the LML. We also see in the top diagram of Figure 3 that the log-predictive density for the training points remains independent of α. On the other hand, we see that the log-predictive density for test points can be increased significantly by setting α = α_max. Further increasing α has a negative effect, as the predicted posterior distribution becomes increasingly flat.
As a consequence of this investigation, we propose Algorithm 1 to obtain a NN with BLL, trained with LML maximization and with enhanced extrapolation uncertainty. The optimal value of the scalar parameter α in Algorithm 1 can be obtained with a simple bisection approach.
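Algorithm 1 itself is not reproduced here; the sketch below illustrates the underlying idea with a simple grid search over candidate values of α, keeping all other trained parameters fixed and scoring each candidate by validation LPD. The σ_e²-scaled precision `Lam_t = Phi_a.T @ Phi_a + I_t / a` is a notational assumption for illustration.

```python
import numpy as np

def tune_alpha(Phi_a, t, phi_val, t_val, sigma_e, alphas, I_t):
    """Pick alpha maximizing the log-predictive density on validation data
    (grid-search sketch of the alpha-tuning step of Algorithm 1)."""
    best_alpha, best_lpd = None, -np.inf
    for a in alphas:
        Lam_t = Phi_a.T @ Phi_a + I_t / a           # sigma_e^2-scaled precision
        w = np.linalg.solve(Lam_t, Phi_a.T @ t)     # posterior-mean weights
        mean = phi_val @ w
        var = sigma_e**2 * (1 + np.einsum('ij,jk,ik->i',
                                          phi_val, np.linalg.inv(Lam_t), phi_val))
        lpd = np.mean(-0.5 * (np.log(2 * np.pi * var) + (t_val - mean)**2 / var))
        if lpd > best_lpd:
            best_alpha, best_lpd = a, lpd
    return best_alpha, best_lpd
```

Only the scalar α is varied, so the search is cheap; the NN weights and noise variance remain those found by LML maximization.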

V. THE MULTIVARIATE CASE
In general, we seek to investigate multivariate problems where, in contrast to Section II, we now have t ∈ R^{n_y}. As before, we consider a dataset D = {X, T} consisting of m samples and introduce T = [t_1, . . . , t_m]^⊤ ∈ R^{m×n_y}.
To obtain the setting in which Lemma 1 holds, we augment (5) for the multivariate case:
T = ΦW + E,  (43)
where E ∈ R^{m×n_y} is the matrix of residuals of the regression model. We use the vec(·) operation as defined in [30, Defn. 11.5] and vectorize (43):
vec(T) = vec(ΦW) + vec(E).  (44)
Introducing ⊗ as the Kronecker product, this equation can be reformulated as [30, Prop. 11.16(b)]:
vec(T) = (I_{n_y} ⊗ Φ) vec(W) + vec(E),  (45)
and expressed as:
t̄ = Φ̄ w̄ + ϵ̄, with t̄ = vec(T), Φ̄ = I_{n_y} ⊗ Φ, w̄ = vec(W), ϵ̄ = vec(E).  (46)
In the form of (46), Lemma 1 directly applies to the multivariate case, where the prior and noise covariances from Assumptions 2 and 3 now refer to w̄ ∼ N(0, Σ_W̄) and ϵ̄ ∼ N(0, Σ_Ē), and for which we need to consider the multivariate feature matrix Φ̄.
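The vectorization step can be checked directly: vec(·) stacks columns, so column-major flattening reproduces the Kronecker identity used above.

```python
import numpy as np

# vec(.) stacks columns (column-major), so vec(Phi @ W) = (I_ny kron Phi) vec(W),
# which is the identity used to rewrite the multivariate model (43) as (46).
rng = np.random.default_rng(0)
m, n_phi, n_y = 6, 3, 2
Phi = rng.normal(size=(m, n_phi))
W = rng.normal(size=(n_phi, n_y))

vec = lambda M: M.flatten(order='F')  # column-major vectorization
lhs = vec(Phi @ W)
rhs = np.kron(np.eye(n_y), Phi) @ vec(W)
print(np.allclose(lhs, rhs))  # True
```

Note that `order='F'` matters: NumPy's default row-major flattening would correspond to a different Kronecker ordering.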

A. Simplified training
The multivariate setting adds significant complexity to the marginal likelihood maximization introduced in Section III.
To reduce the computational complexity, we present Result 3 and two required assumptions in the following.
Assumption 5. Noise and prior, as introduced in Assumptions 2 and 3, are uncorrelated for all n_y outputs and identical for all m samples and all n_ϕ features, respectively. We denote σ_e = [σ_e,1, . . . , σ_e,n_y] and σ_w = [σ_w,1, . . . , σ_w,n_y] and obtain:
Σ_Ē = diag(σ_e,1² I_m, . . . , σ_e,n_y² I_m),  Σ_W̄^{−1} = diag(σ_w,1^{−2} Ĩ_{n_ϕ}, . . . , σ_w,n_y^{−2} Ĩ_{n_ϕ}).  (48)
We introduce α_i = σ_w,i² / σ_e,i², similarly to (27), and assume the following.
Assumption 6. For all predicted outputs y_i, we have the same parameter α, i.e.:
α = α_1 = · · · = α_{n_y}.  (49)
Result 3. Let Assumptions 5 and 6 hold. The scaled negative LML in (12) for the vectorized multivariate case (46) is:
(50)
with Θ = {W_{L+1}, α, σ_e,1, . . . , σ_e,n_y} and Λ̃_p according to (29). The full precision matrix Λ̄_p for the multivariate case can be obtained as:
Λ̄_p = diag(σ_e,1^{−2}, . . . , σ_e,n_y^{−2}) ⊗ Λ̃_p.  (51)
Proof. The result follows directly from the properties of the Kronecker product [30] applied to the log-marginal likelihood in (12) with (46), as well as Assumptions 5 and 6.
Result 3 shows that the LML can be easily expressed for the multivariate case, allowing for fast and efficient NN training. This is largely due to Assumptions 5 and 6. While Assumption 5 is a natural extension of Assumption 4 to the multivariate case, Assumption 6 might be questioned. We argue again with our interpretation of α as an extrapolation penalty weight, as discussed in Section IV. Importantly, extrapolation, as defined in Definition 2, is a property of the feature space and occurs regardless of the number of outputs.
Apart from simplifying the LML in (50), Assumptions 5 and 6 also yield a simplified computation of the predictive distribution. The outputs are uncorrelated due to Assumption 5, so we obtain an independent covariance matrix for each output, with the same Λ_p for all outputs due to Assumption 6. Therefore, the complexity of evaluating the predictive distribution scales negligibly with the number of outputs. This is a major advantage of NNs with BLL in comparison to GPs, where it is common to fit an independent model for each output.
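A minimal sketch of this shared-precision evaluation follows. The scaled-precision form Λ_p = ΦᵀΦ + αI and the per-output variance σ_{e,i}²(1 + φ*ᵀ Λ_p⁻¹ φ*) are assumptions carried over from the scaled Bayesian linear regression setting, not the paper's exact equations:

```python
import numpy as np

def bll_predictive_variances(Phi, phi_star, alpha, sigma_e):
    """Per-output predictive variances sharing one precision matrix.

    Assumed form: Lambda_p = Phi^T Phi + alpha * I (identical for all
    outputs under Assumption 6); output i then has variance
    sigma_e[i]^2 * (1 + phi_star^T Lambda_p^{-1} phi_star).
    """
    Lambda_p = Phi.T @ Phi + alpha * np.eye(Phi.shape[1])
    # solve once; the scalar s is reused for every output
    s = phi_star @ np.linalg.solve(Lambda_p, phi_star)
    return np.asarray(sigma_e) ** 2 * (1.0 + s)
```

Because the linear solve is performed once and only the scalar s is reused, adding outputs costs almost nothing, which is the advantage over fitting one GP per output noted above.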

VI. BAYES BY BACKPROP
As a comparative baseline to our proposed method, we also employ variational inference with the Bayes by Backprop method [1], [21] to train a full BNN, that is, a neural network in which all weights follow a probability distribution.

A. Background
For the formulation of the NN with BLL we require the posterior distribution of the weights in the last layer in (7). As a main difference, we now state the posterior distribution for all weights of the NN. In contrast to NNs with BLL, this exact posterior distribution is intractable and we resort to variational inference. To this end, we introduce a surrogate distribution q over the weights and minimize the Kullback-Leibler (KL) divergence between the surrogate and the true posterior. Minimizing the KL divergence yields the surrogate distribution of the weights that approximates the true posterior distribution. As proposed in [21], we choose the surrogate as a fully factorized Gaussian with trainable parameters μ_{w,l} and σ_{w,l}. Consequently, each weight of the neural network is now represented with two parameters, which makes Bayes by Backprop a tractable method even for larger models.
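Two ingredients of Bayes by Backprop can be sketched compactly: reparameterized sampling of a weight (so gradients reach the variational parameters) and the closed-form Gaussian KL term. The softplus parameterization σ = log(1 + e^ρ) follows common practice in [21]; the helper names are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_weight(mu, rho):
    # reparameterization trick: w = mu + sigma * eps with eps ~ N(0, 1),
    # so the sample is differentiable w.r.t. the parameters mu and rho
    sigma = np.log1p(np.exp(rho))  # softplus keeps sigma > 0
    return mu + sigma * rng.standard_normal(np.shape(mu))

def kl_gaussians(mu_q, sig_q, mu_p, sig_p):
    # closed-form KL(N(mu_q, sig_q^2) || N(mu_p, sig_p^2)), summed elementwise
    return float(np.sum(np.log(sig_p / sig_q)
                        + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2)
                        - 0.5))
```

In training, the KL term regularizes the surrogate toward the prior while the sampled weights enter the data-fit term of the variational objective.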
For the concrete implementation of training the BNN with Bayes by Backprop, we refer to [1]. In particular, we use Bayes by Backprop in combination with empirical Bayes to obtain the optimal parameters Θ = {σ_w, σ_e}, which, similarly to the NN with BLL, include the prior variances of the weights in the last layer, that is, σ_w, and the noise covariances σ_e. Following the discussion in [21], we assume a fixed prior for the weights in the previous layers.

B. Evaluation
The predictive distribution of the trained BNN can be evaluated as in (56). In contrast to the predictive distribution of the NN with BLL in (11), expression (56) has no analytical solution. Instead, we resort to Monte Carlo sampling of the weights from the surrogate distribution and obtain the predictive distribution by averaging over the samples, that is, (57), with W^{(i)}_{L+1} ∼ q(W_{L+1}). Equation (57) yields a Gaussian mixture model and also allows for the computation of the log-predictive density, similarly to (42).
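The log-predictive density of such a Gaussian mixture is best evaluated with a log-sum-exp for numerical stability. The sketch below assumes equal mixture weights and a shared noise standard deviation for a single scalar test point; the helper is hypothetical:

```python
import numpy as np

def mixture_log_pred_density(y, means, sigma):
    # log( (1/N) * sum_i N(y; means[i], sigma^2) ), via log-sum-exp:
    # subtracting the max log-component avoids underflow in exp()
    log_comp = (-0.5 * np.log(2.0 * np.pi * sigma**2)
                - (y - np.asarray(means))**2 / (2.0 * sigma**2))
    m = np.max(log_comp)
    return float(m + np.log(np.mean(np.exp(log_comp - m))))
```

With a single component this reduces to the exact Gaussian log density, which makes the helper easy to sanity-check.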

VII. SIMULATION STUDY
In this section, we investigate our proposed method in comparison to BLR with features from a trained NN, and a full BNN trained with Bayes by Backprop. The code to reproduce the presented results is available online¹.
We investigate data from a function with n_x = 1 input and n_y = 2 outputs, which is shown in Figure 4. Both outputs exhibit non-linear behavior with different noise levels, in particular σ_1 = 0.05 and σ_2 = 0.2. Both noise variances are assumed to be unknown.
We train a NN with BLL using our proposed Algorithm 1. For this study, a suitable structure with L = 2, n_{φ,1} = n_{φ,2} = n_{φ,3} = 20 and g_1(·) = g_2(·) = g_3(·) = tanh(·) is determined by trial and error. To maximize the LML, we use the Adam [31] optimizer and employ early stopping to avoid overfitting.
The results of the simulation study are shown in Figure 4. We visualize the mean and standard deviation of the predictive distribution obtained from the NN with BLL. A focus of the investigation is the comparison of the distributions obtained with the optimal value α* and with α_max. Both distributions suitably describe the training data and capture the behavior of the unknown function in the interpolation regime. The estimated noise variances σ_1 = 0.051 and σ_2 = 0.17 are close to the true values. However, only the distribution with tuned extrapolative uncertainty, that is, with α_max, is suitable to describe the extrapolation regime. To quantify the improvement, we compute the log-predictive density of the NN with BLL for α* and α_max and present the results in Table I. For comparison, we train a NN with the same structure by minimizing the mean-squared error and then use the learned features for BLR as in previous works [4], [24]. The second step of Algorithm 1, that is, updating α subsequently with validation data, can also be applied in this setting, and the resulting performance metric is also shown in Table I.
The results in Table I show that updating α, as proposed in Algorithm 1, significantly improves the log-predictive density (42) on the test data, both for BLL and BLR. We see that the NN with BLL, which is trained according to Algorithm 1, achieves overall the best results with respect to the log-predictive density.
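The second step of Algorithm 1, re-tuning α on validation data after training, can be illustrated with a toy stand-in for the learned features. Here fixed polynomial features replace the trained NN, and the data, noise level, and grid are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy stand-in for the features of a trained NN (hypothetical)
x = np.linspace(-1.0, 1.0, 40)
Phi = np.stack([np.ones_like(x), x, x**2], axis=1)
y = 0.5 * x**2 + 0.05 * rng.standard_normal(x.size)
tr, va = slice(0, None, 2), slice(1, None, 2)  # train/validation split

def mean_lpd(alpha, sigma_e=0.05):
    # scaled-precision BLR with the given alpha (assumed form)
    Lp = Phi[tr].T @ Phi[tr] + alpha * np.eye(Phi.shape[1])
    w = np.linalg.solve(Lp, Phi[tr].T @ y[tr])
    mu = Phi[va] @ w
    # diag(Phi_va Lp^{-1} Phi_va^T) gives the per-point uncertainty term
    var = sigma_e**2 * (1.0 + np.sum(Phi[va] * np.linalg.solve(Lp, Phi[va].T).T, axis=1))
    return np.mean(-0.5 * np.log(2.0 * np.pi * var) - (y[va] - mu)**2 / (2.0 * var))

# pick alpha maximizing the validation log-predictive density on a log grid
alphas = np.logspace(-3, 2, 30)
alpha_max = alphas[int(np.argmax([mean_lpd(a) for a in alphas]))]
```

In the paper's setting the features come from the NN trained in step one, and the grid search over log(α) leaves all other parameters fixed.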

A. Comparison to Bayes by Backprop
As a final step of our evaluation, we compare the NN with BLL to a full BNN trained with Bayes by Backprop, as described in Section VI. We investigate the same neural network architecture as described in the previous subsection and, as a main difference, now have distributed weights in all layers. We determine a fixed prior distribution for the weights in all but the last layer as:

p(vec(W_{l,0})) = N(0, 0.5 I)  ∀ l ∈ I_{[1,L]}. (58)

Setting the mean of the prior distribution to zero is justified by employing batch normalization [32] after each variational layer. As for the NN with BLL, the prior variance of the weights in the last layer is also learned.
Training the BNN with Bayes by Backprop is performed as outlined in Section VI, using the Adam optimizer. The resulting predictive distribution is obtained by sampling N = 100 weights from the surrogate distribution and is displayed in Figure 5. We show the mean values ȳ^{(i)} that are obtained with the sampled weights W^{(i)}_{L+1} ∼ q(W_{L+1}). Additionally, we compute the mean and variance of the resulting Gaussian mixture model in (57) and display them in the same figure. Finally, we compute the log-predictive density of the Gaussian mixture model and report the results in Table I. We find that the BNN trained with Bayes by Backprop yields a suitable predictive distribution but is outperformed by the NN with BLL. In particular, we see that the extrapolative uncertainty of the NN with BLL is better suited to describe the data. This holds especially after tuning α with the validation data, as described in Algorithm 1. For a BNN trained with Bayes by Backprop it is not possible to modify α to calibrate the extrapolation behavior of the model. Furthermore, it is necessary to sample weights from the surrogate posterior to obtain the predictive distribution, adding significant complexity to this method. This is in contrast to the NN with BLL, where the predictive distribution can be computed analytically.

VIII. CONCLUSIONS
Neural networks with Bayesian last layer are an attractive compromise between tractability and expressiveness in the field of Bayesian neural networks. As a main contribution of this work, we propose an efficient algorithm for training neural networks with Bayesian last layer, based on a reformulation of the marginal likelihood. Importantly, we show that our reformulation satisfies the conditions of optimality for the same argument as the original formulation. In comparison to Bayesian linear regression with features from a previously trained neural network, we find that training on the log-marginal likelihood shows advantages in the presented simulation study. Our second contribution, an algorithm to tune the extrapolative uncertainty, also shows excellent results in the simulation study. To derive this algorithm, we further contribute a discussion on the relationship between the Bayesian last layer covariance and a proposed metric to quantify extrapolation. This metric intuitively relates to the definition of extrapolation based on the convex hull. Our proposed method also compares favorably to a full Bayesian neural network trained with Bayes by Backprop. Important advantages are the possibility to tune the extrapolative uncertainty, an analytical expression for the predictive distribution, and the overall ease of implementation.
This work lays the theoretical foundations for our future work: identification of probabilistic dynamic system models that can be used for safe control decisions.

Fig. 2. NN with BLL: predicted mean and standard deviation (8) in feature space with n_φ = 2 for m = 3 training samples. The effect of parameter α on the extrapolation uncertainty is shown by comparing the optimal α* (maximization of the LML (30)) with the suggested improved α_max.

Algorithm 1.
Fig. 3. Effect of log(α) on the LML for a trained NN and the mean log-predictive density (42). The same regression problem and NN as in Figure 2 is considered.

Fig. 4. Multivariate neural network with Bayesian last layer: predicted mean and standard deviation for two outputs with different and unknown noise levels. Training with the proposed Algorithm 1 maximizes the log-marginal likelihood in (50) and yields the optimal parameter α*. This value can then be adapted to improve the uncertainty quantification in the extrapolation regime by maximizing (42), yielding α_max.

TABLE I. Simulation study in Figure 4: mean-squared error (MSE), negative LML (NLML), and mean log-predictive density (LPD). Comparison between NN with BLL (trained by minimizing the NLML) and BLR with NN features (NN trained by minimizing the MSE). Comparison of α* (maximizing the LML) vs. α_max (maximizing the LPD for validation data).