A Visual Exploration of Gaussian Processes

How to turn a collection of small building blocks into a versatile tool for solving regression problems.

Regression is used to find a function (line) that represents a set of data points as closely as possible. A Gaussian process is a probabilistic method that additionally gives a confidence (shaded area) for the predicted function.


April 2, 2019



Even if you have spent some time reading about machine learning, chances are that you have never heard of Gaussian processes. And if you have, rehearsing the basics is always a good way to refresh your memory. With this blog post we want to give an introduction to Gaussian processes and make the mathematical intuition behind them more approachable.

Gaussian processes are a powerful tool in the machine learning toolbox. They allow us to make predictions about our data by incorporating prior knowledge. Their most obvious area of application is fitting a function to the data. This is called regression and is used, for example, in robotics or time series forecasting. But Gaussian processes are not limited to regression — they can also be extended to classification and clustering tasks. For a given set of training points, there are potentially infinitely many functions that fit the data. Gaussian processes offer an elegant solution to this problem by assigning a probability to each of these functions. The mean of this probability distribution then represents the most probable characterization of the data. Furthermore, using a probabilistic approach allows us to incorporate the confidence of the prediction into the regression result.

We will first explore the mathematical foundation that Gaussian processes are built on. We invite you to follow along using the interactive figures and hands-on examples; they help explain the impact of individual components and show the flexibility of Gaussian processes. We hope that, after reading this article, you will have a visual intuition for how Gaussian processes work and how you can configure them for different types of data.

Multivariate Gaussian distributions

Before we can explore Gaussian processes, we need to understand the mathematical concepts they are based on. As the name suggests, the Gaussian distribution (often also referred to as the normal distribution) is the basic building block of Gaussian processes. In particular, we are interested in the multivariate case of this distribution, where each random variable is distributed normally and their joint distribution is also Gaussian. The multivariate Gaussian distribution is defined by a mean vector $\mu$ and a covariance matrix $\Sigma$. You can see an interactive example of such distributions in the figure below.

The mean vector $\mu$ describes the expected value of the distribution. Each of its components describes the mean of the corresponding dimension. $\Sigma$ models the variance along each dimension and determines how the different random variables are correlated. The covariance matrix is always symmetric and positive semi-definite. The diagonal of $\Sigma$ consists of the variance $\sigma_i^2$ of the $i$-th random variable, and the off-diagonal elements $\sigma_{ij}$ describe the correlation between the $i$-th and $j$-th random variables.

X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix} \sim \mathcal{N}(\mu, \Sigma)

We say $X$ follows a normal distribution. The covariance matrix $\Sigma$ describes the shape of the distribution. It is defined in terms of the expected value $E$:

\Sigma = \text{Cov}(X_i, X_j) = E \left[ (X_i - \mu_i)(X_j - \mu_j)^T \right]

Visually, the distribution is centered around the mean and the covariance matrix defines its shape. The following figure shows the influence of these parameters on a two-dimensional Gaussian distribution. The variances for each random variable are on the diagonal of the covariance matrix, while the other values show the covariance between them.
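As a small sketch of these definitions, we can draw samples from a two-dimensional Gaussian and check that the empirical mean and covariance match $\mu$ and $\Sigma$. The parameter values below are illustrative choices, not taken from the figures:

```python
import numpy as np

# Illustrative parameters for a bivariate Gaussian: the diagonal of
# Sigma holds the variances, the off-diagonal entries the covariance.
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mu, Sigma, size=100_000)

# The empirical moments approach mu and Sigma as the sample grows.
empirical_mean = samples.mean(axis=0)
empirical_cov = np.cov(samples, rowvar=False)
```
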

Covariance matrix (Σ). By dragging the handles you can adjust the variance along each dimension, as well as the correlation between the two random variables. Violet values show a high probability inside the distribution.

Gaussian distributions are widely used to model the real world. For example, we can employ them to describe errors of measurements, or phenomena that fall under the assumptions of the central limit theorem. One of the implications of this theorem is that the sum of a large number of independent, identically distributed random variables with finite variance is approximately normally distributed. (A good introduction to the central limit theorem is given by this video from Khan Academy.) In the next section we will take a closer look at how to manipulate Gaussian distributions and extract useful information from them.
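As a quick numerical illustration of the central limit theorem, the sum of many independent uniform random variables is approximately normal. This is only a sketch; the choice of distribution and sample sizes is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum 50 iid Uniform(0, 1) variables, repeated 10,000 times.
sums = rng.uniform(size=(10_000, 50)).sum(axis=1)

# By the CLT, the sums are approximately N(25, 50/12):
# mean = 50 * 0.5, variance = 50 * (1/12).
empirical_mean = sums.mean()
empirical_var = sums.var()
```
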

Marginalization and Conditioning

Gaussian distributions have the nice algebraic property of being closed under conditioning and marginalization: the distributions resulting from these operations are also Gaussian, which makes many problems in statistics and machine learning tractable. In the following we will take a closer look at both operations, as they are the foundation for Gaussian processes.

Marginalization and conditioning both work on subsets of the original distribution and we will use the following notation:

P_{X,Y} = \begin{bmatrix} X \\ Y \end{bmatrix} \sim \mathcal{N}(\mu, \Sigma) = \mathcal{N} \left( \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix}, \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{bmatrix} \right)

Here, $X$ and $Y$ represent subsets of the original random variables.

Through marginalization we can extract partial information from multivariate probability distributions. In particular, given a normal probability distribution $P(X,Y)$ over vectors of random variables $X$ and $Y$, we can determine their marginalized probability distributions in the following way:

\begin{aligned} X &\sim \mathcal{N}(\mu_X, \Sigma_{XX}) \\ Y &\sim \mathcal{N}(\mu_Y, \Sigma_{YY}) \end{aligned}

The interpretation of this equation is that each partition $X$ and $Y$ only depends on its corresponding entries in $\mu$ and $\Sigma$. To marginalize out a random variable from a Gaussian distribution, we can simply drop its entries from $\mu$ and $\Sigma$.

p_X(x) = \int_y p_{X,Y}(x,y) \, dy = \int_y p_{X|Y}(x|y) \, p_Y(y) \, dy

The way to interpret this equation is that if we are interested in the probability density of $X = x$, we need to consider all possible outcomes of $Y$ that can jointly lead to this result. (The corresponding Wikipedia article has a good description of the marginal distribution, including several examples.)
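Since marginalizing a Gaussian only requires dropping entries from $\mu$ and $\Sigma$, it amounts to simple slicing in code. A minimal sketch with illustrative parameter values, checked against the empirical marginal of joint samples:

```python
import numpy as np

# Joint Gaussian over [X, Y] with illustrative parameters.
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.7],
                  [0.7, 1.0]])

# Marginalizing out Y: keep only the entries belonging to X.
mu_X = mu[0]
Sigma_XX = Sigma[0, 0]

# Discarding the Y column of joint samples marginalizes it out.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mu, Sigma, size=100_000)
x_samples = samples[:, 0]
```
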

Another important operation for Gaussian processes is conditioning. It is used to determine the probability of one set of variables given the value of another. Similar to marginalization, this operation is also closed and yields a modified Gaussian distribution. Conditioning is the cornerstone of Gaussian processes, since it enables Bayesian inference, which we will talk about in the next section. It is defined by:

\begin{aligned} X|Y &\sim \mathcal{N}(\:\mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(Y - \mu_Y),\: \Sigma_{XX}-\Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}\:) \\ Y|X &\sim \mathcal{N}(\:\mu_Y + \Sigma_{YX}\Sigma_{XX}^{-1}(X - \mu_X),\: \Sigma_{YY}-\Sigma_{YX}\Sigma_{XX}^{-1}\Sigma_{XY}\:) \end{aligned}

Note that the new mean depends on the value of the conditioned variable, while the new covariance matrix is independent of that value.
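These conditioning equations translate directly into code. The following sketch conditions a joint Gaussian over $[X; Y]$ on an observed value of $Y$; the test values below are illustrative:

```python
import numpy as np

def condition_on_y(mu, Sigma, nx, y):
    """Condition a joint Gaussian N(mu, Sigma) over [X; Y] on Y = y.

    The first nx entries of mu and Sigma belong to X, the rest to Y.
    Returns the mean and covariance of X | Y = y.
    """
    mu_x, mu_y = mu[:nx], mu[nx:]
    Sxx = Sigma[:nx, :nx]
    Sxy = Sigma[:nx, nx:]
    Syx = Sigma[nx:, :nx]
    Syy = Sigma[nx:, nx:]
    # X | Y ~ N(mu_x + Sxy Syy^-1 (y - mu_y), Sxx - Sxy Syy^-1 Syx)
    cond_mean = mu_x + Sxy @ np.linalg.solve(Syy, y - mu_y)
    cond_cov = Sxx - Sxy @ np.linalg.solve(Syy, Syx)
    return cond_mean, cond_cov

# Bivariate example: correlation 0.8, observing Y = 1.2
# pulls the conditional mean of X up to 0.8 * 1.2 = 0.96.
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
m, C = condition_on_y(mu, Sigma, 1, np.array([1.2]))
```

Using `np.linalg.solve` instead of explicitly inverting $\Sigma_{YY}$ is the usual numerically safer choice.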

Now that we have worked through the necessary equations, we will think about how we can understand the two operations visually. While marginalization and conditioning can be applied to multivariate distributions of many dimensions, it makes sense to consider the two-dimensional case as shown in the following figure. Marginalization can be seen as integrating along one of the dimensions of the Gaussian distribution, which is in line with the general definition of the marginal distribution. Conditioning also has a nice geometric interpretation — we can imagine it as making a cut through the multivariate distribution, yielding a new Gaussian distribution with fewer dimensions.

Marginalization (Y)

Conditioning (X = 1.2)
A bivariate normal distribution in the center. On the left you can see the result of marginalizing this distribution for Y, akin to integrating along the X axis. On the right you can see the distribution conditioned on a given X, which is similar to a cut through the original distribution. The Gaussian distribution and the conditioned variable can be changed by dragging the handles.

Gaussian Processes

Now that we have recalled some of the basic properties of multivariate Gaussian distributions, we will combine them to define Gaussian processes and show how they can be used to tackle regression problems.

First, we will move from the continuous view to the discrete representation of a function: rather than finding an implicit function, we are interested in predicting the function values at concrete points, which we call test points $X$. So how do we derive this functional view from the multivariate normal distributions that we have considered so far? Stochastic processes, such as Gaussian processes, are essentially sets of random variables, where each random variable has a corresponding index $i$. We will use this index to refer to the $i$-th dimension of our $n$-dimensional multivariate distribution. The following figure shows an example of this for two dimensions:

Here, we have a two-dimensional normal distribution. Each dimension $x_i$ is assigned an index $i \in \{1,2\}$. You can drag the handles to see how a particular sample (left) corresponds to function values (right). This representation also allows us to understand the connection between the covariance and the resulting values: the underlying Gaussian distribution has a positive covariance between $x_1$ and $x_2$, which means that $x_2$ will increase as $x_1$ gets larger, and vice versa. You can also drag the handles in the figure on the right and observe the probability of such a configuration in the figure on the left.

Now, the goal of Gaussian processes is to learn this underlying distribution from training data. With respect to the test data $X$, we will denote the training data as $Y$. As we have mentioned before, the key idea of Gaussian processes is to model the underlying distribution of $X$ together with $Y$ as a multivariate normal distribution. That means that the joint probability distribution $P_{X,Y}$ spans the space of possible function values for the function that we want to predict. Note that this joint distribution of test and training data has $|X| + |Y|$ dimensions.

In order to perform regression on the training data, we will treat this problem as Bayesian inference. The essential idea of Bayesian inference is to update the current hypothesis as new information becomes available. In the case of Gaussian processes, this information is the training data. Thus, we are interested in the conditional probability $P_{X|Y}$. Finally, we recall that Gaussian distributions are closed under conditioning, so $P_{X|Y}$ is also distributed normally.

Now that we have the basic framework of Gaussian processes together, there is only one thing missing: how do we set up this distribution and define the mean $\mu$ and the covariance matrix $\Sigma$? The covariance matrix $\Sigma$ is determined by its covariance function $k$, which is often also called the kernel of the Gaussian process. We will talk about this in detail in the next section. But before we come to that, let us reflect on how we can use multivariate Gaussian distributions to estimate function values. The following figure shows an example of this using ten test points at which we want to predict our function:

We are interested in predicting the function values $f(x)$ at ten different $x$ values, without knowing about any training points. The 10×10 covariance matrix is created by pairwise evaluation of the kernel function, resulting in a 10-dimensional distribution. Sampling from this distribution yields a 10-dimensional vector, where each entry represents one function value.

In Gaussian processes we treat each test point as a random variable. A multivariate Gaussian distribution has the same number of dimensions as the number of random variables. Since we want to predict the function values at $|X| = N$ test points, the corresponding multivariate Gaussian distribution is also $N$-dimensional. Making a prediction using a Gaussian process ultimately boils down to drawing samples from this distribution. We then interpret the $i$-th component of the resulting vector as the function value corresponding to the $i$-th test point.


Recall that in order to set up our distribution, we need to define $\mu$ and $\Sigma$. In Gaussian processes it is often assumed that $\mu = 0$, which simplifies the necessary equations for conditioning. We can always assume such a distribution, even if $\mu \neq 0$, and add $\mu$ back to the resulting function values after the prediction step. This process is also called centering the data. So configuring $\mu$ is straightforward; it gets more interesting when we look at the other parameter of the distribution.
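A minimal sketch of this centering step, with made-up training values. The actual prediction is stood in for by the prior mean of zero, since we have not set up the full model yet:

```python
import numpy as np

# Hypothetical training targets with a non-zero mean.
y_train = np.array([2.1, 2.9, 2.4, 3.2])

# Center the data so that the mu = 0 assumption holds.
y_mean = y_train.mean()
y_centered = y_train - y_mean

# ... fit the Gaussian process on y_centered and predict ...
# (here the prediction is simply the prior mean, 0)
prediction_centered = np.zeros(3)

# Add the mean back after the prediction step.
prediction = prediction_centered + y_mean
```
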

The clever step of Gaussian processes is how we set up the covariance matrix $\Sigma$. The covariance matrix will not only describe the shape of our distribution, but ultimately determine the characteristics of the function that we want to predict. We generate the covariance matrix by evaluating the kernel $k$, which is often also called the covariance function, pairwise on all the points. The kernel receives two points $t, t' \in \mathbb{R}^n$ as input and returns a similarity measure between those points in the form of a scalar:

k: \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}, \quad \Sigma = \text{Cov}(X, X') = k(t, t')

We evaluate this function for each pairwise combination of the test points to retrieve the covariance matrix. This step is also depicted in the figure above. In order to get a better intuition for the role of the kernel, let's think about what the entries in the covariance matrix describe. The entry $\Sigma_{ij}$ describes how much influence the $i$-th and $j$-th points have on each other. This follows from the definition of the multivariate Gaussian distribution, which states that $\Sigma_{ij}$ defines the correlation between the $i$-th and $j$-th random variables. Since the kernel describes the similarity between the values of our function, it controls the possible shapes that a fitted function can adopt. Note that when we choose a kernel, we need to make sure that the resulting matrix adheres to the properties of a covariance matrix.
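The pairwise construction can be sketched in a few lines. We use the RBF kernel discussed in the next section; the parameter values are hypothetical defaults:

```python
import numpy as np

# RBF kernel; sigma and length are hypothetical parameter choices.
def rbf_kernel(t1, t2, sigma=1.0, length=1.0):
    return sigma**2 * np.exp(-(t1 - t2)**2 / (2 * length**2))

# Ten test points at which we want to predict function values.
X_test = np.linspace(-5, 5, 10)

# Pairwise evaluation of the kernel yields the covariance matrix
# (broadcasting computes all 10 x 10 combinations at once).
Sigma = rbf_kernel(X_test[:, None], X_test[None, :])

# Drawing one sample gives a 10-dimensional vector; entry i is the
# function value at the i-th test point.
rng = np.random.default_rng(0)
f_sample = rng.multivariate_normal(np.zeros(10), Sigma)
```
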

Kernels are widely used in machine learning, for example in support vector machines. The reason for this is that they allow similarity measures that go far beyond the standard Euclidean distance ($L^2$ distance). Many of these kernels conceptually embed the input points into a higher-dimensional space in which they then measure the similarity. (If the kernel follows Mercer's theorem, it can be used to define a Hilbert space; more information on this can be found on Wikipedia.) The following figure shows examples of some common kernels for Gaussian processes. For each kernel, the covariance matrix has been created from $N = 25$ linearly-spaced values ranging over $[-5, 5]$. Each entry in the matrix shows the covariance between points, in the range $[0, 1]$.

RBF Kernel

\sigma^2 \exp \left( - \frac{||t-t'||^2}{2 l^2} \right)

Periodic Kernel

\sigma^2 \exp \left( - \frac{2 \sin^2(\pi |t-t'| / p)}{l^2} \right)

Linear Kernel

\sigma_b^2 + \sigma^2 (t - c)(t' - c)
This figure shows various kernels that can be used with Gaussian processes. Each kernel has different parameters, which can be changed by adjusting the according sliders. When grabbing a slider, information on how the current parameter influences the kernel will be shown on the right.

Kernels can be separated into stationary and non-stationary kernels. Stationary kernels, such as the RBF kernel or the periodic kernel, are invariant to translations: the covariance of two points depends only on their relative position. Non-stationary kernels, such as the linear kernel, do not have this constraint and depend on the absolute location of the points. The stationary nature of the RBF kernel can be observed in the banding around the diagonal of its covariance matrix (as shown in this figure). Increasing the length parameter widens the banding, as points further away from each other become more correlated. For the periodic kernel, we have an additional parameter $p$ that determines the periodicity, which controls the distance between each repetition of the function. In contrast, the parameter $c$ of the linear kernel allows us to change the point on which all functions hinge.
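We can check stationarity numerically. In the sketch below (kernel parameters are illustrative defaults), translating both inputs by the same amount leaves the RBF and periodic kernels unchanged, but not the linear kernel:

```python
import numpy as np

def rbf(t, tp, sigma=1.0, l=1.0):
    return sigma**2 * np.exp(-(t - tp)**2 / (2 * l**2))

def periodic(t, tp, sigma=1.0, l=1.0, p=2.0):
    return sigma**2 * np.exp(-2 * np.sin(np.pi * abs(t - tp) / p)**2 / l**2)

def linear(t, tp, sigma_b=0.5, sigma=1.0, c=0.0):
    return sigma_b**2 + sigma**2 * (t - c) * (tp - c)

t, tp, shift = 1.0, 2.5, 3.0

# Stationary kernels depend only on the relative position t - t'.
rbf_is_invariant = np.isclose(rbf(t, tp), rbf(t + shift, tp + shift))
per_is_invariant = np.isclose(periodic(t, tp), periodic(t + shift, tp + shift))

# The linear kernel depends on the absolute location of the points.
lin_is_invariant = np.isclose(linear(t, tp), linear(t + shift, tp + shift))
```
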

There are many more kernels that can describe different classes of functions, which can be used to model the desired shape of the function. A good overview of different kernels is given by Duvenaud. It is also possible to combine several kernels — but we will get to this later.

Prior Distribution

We will now shift our focus back to the original task of regression. As we have mentioned earlier, Gaussian processes define a probability distribution over possible functions. In the figure above, we show this connection: each sample from our multivariate normal distribution represents one realization of our function values. Because this distribution is a multivariate Gaussian distribution, the distribution over functions is normal. Recall that we usually assume $\mu = 0$. For now, let's consider the case where we have not yet observed any training data. In the context of Bayesian inference, this is called the prior distribution $P_X$.

If we have not yet observed any training examples, this distribution revolves around $\mu = 0$, according to our original assumption. The prior distribution will have the same dimensionality as the number of test points, $N = |X|$. We will use the kernel to set up the covariance matrix, which has the dimensions $N \times N$.

In the previous section we have looked at examples of different kernels. The kernel is used to define the entries of the covariance matrix. Consequently, the covariance matrix determines which type of functions from the space of all possible functions are more probable. As the prior distribution does not yet contain any additional information, it is perfect to visualize the influence of the kernel on the distribution of functions. The following figure shows samples of potential functions from prior distributions that were created using different kernels:

Samples of potential functions drawn from prior distributions created with different kernels, shown together with the μ + 2σ and μ − 2σ confidence bounds around the mean.