The Bayesian optimization algorithm attempts to minimize a scalar objective function f(x) for x in a bounded domain. The function can be deterministic or stochastic, meaning it can return different results when evaluated at the same point x. The components of x can be continuous reals, integers, or categorical, meaning a discrete set of names.
Throughout this discussion, D represents the number of components of x.
The key elements in the minimization are:
A Gaussian process model of f(x).
A Bayesian update procedure for modifying the Gaussian process model at each new evaluation of f(x).
An acquisition function a(x) (based on the Gaussian process model of f) that you maximize to determine the next point x for evaluation. For details, see Acquisition Function Types and Acquisition Function Maximization.
Evaluate yi = f(xi) for NumSeedPoints points xi, taken at random within the variable bounds. NumSeedPoints is a bayesopt setting. If there are evaluation errors, take more random points until there are NumSeedPoints successful evaluations. The probability distribution of each component is either uniform or log-scaled, depending on the Transform value in optimizableVariable.
Then repeat the following steps:
Update the Gaussian process model of f(x) to obtain a posterior distribution over functions Q(f|xi, yi for i = 1,...,t). (Internally, bayesopt uses fitrgp to fit a Gaussian process model to the data.)
Find the new point x that maximizes the acquisition function a(x).
The algorithm stops after reaching any of the following:

A fixed number of iterations (default 30).

A fixed time (default is no time limit).

A stopping criterion that you supply in an output function or plot function.
For the algorithmic differences in parallel, see Parallel Bayesian Algorithm.
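The outlined loop can be sketched in a few lines. The following is an illustrative Python/NumPy sketch, not bayesopt itself (which is MATLAB and fits its model with fitrgp): it seeds with random points, refits a toy squared-exponential Gaussian process, and picks each new point by maximizing a lower-confidence-bound acquisition over a grid. All names (`objective`, `gp_posterior`, the bounds) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy 1-D objective to minimize; bayesopt would call your f(x) here.
    return np.sin(3 * x) + 0.1 * x**2

def gp_posterior(X, y, Xq, length=0.5, noise=1e-6):
    # Squared-exponential GP posterior mean/std (stand-in for the fitted model).
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# 1. Seed evaluations at random points within the bounds.
lb, ub = -2.0, 2.0
X = rng.uniform(lb, ub, 4)
y = objective(X)

# 2. Repeat: refit the model, maximize the acquisition (here a lower
#    confidence bound over a dense grid), and evaluate f at the new point.
for _ in range(10):
    grid = np.linspace(lb, ub, 401)
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(2 * sigma - mu)]   # maximize 2 sigma_Q(x) - mu_Q(x)
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print(float(X[np.argmin(y)]))  # best observed point
```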
The underlying probabilistic model for the objective function f is a
Gaussian process prior with added Gaussian noise in the observations. So the prior
distribution on f(x) is a Gaussian process with mean μ(x;θ) and covariance kernel function k(x,x′;θ). Here, θ is a vector of kernel parameters. For
the particular kernel function
bayesopt uses, see Kernel Function.
In a bit more detail, denote a set of points X = {xi} with associated objective function values F = {fi}. The prior's joint distribution of the function values F is multivariate normal, with mean μ(X) and covariance matrix K(X,X), where Kij = k(xi,xj).
Without loss of generality, the prior mean is given as 0.
Also, the observations are assumed to have added Gaussian noise with variance σ². So the prior distribution has covariance K(X,X;θ) + σ²I.
Fitting a Gaussian process regression model to observations consists of finding values for the noise variance σ² and kernel parameters θ. This fitting is a computationally intensive process performed by fitrgp.
For details on fitting a Gaussian process to observations, see Gaussian Process Regression.
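For reference, the posterior of a zero-mean Gaussian process with observation covariance K(X,X;θ) + σ²I has a standard closed form. Below is a NumPy sketch of that textbook formula; it is not fitrgp, which additionally estimates θ and σ² from the data, and the example kernel is a simple squared exponential rather than the ARD Matérn 5/2 used by bayesopt.

```python
import numpy as np

def gp_posterior(X, y, Xq, kernel, noise_var):
    """Posterior mean/std of a zero-mean GP with observation noise.

    Prior covariance of observations: K(X,X;theta) + sigma^2 I, as in the text.
    `kernel(A, B)` returns the matrix of k(a, b; theta) values.
    """
    K = kernel(X, X) + noise_var * np.eye(len(X))
    Ks = kernel(X, Xq)                    # K(X, Xq)
    sol = np.linalg.solve(K, Ks)          # (K + sigma^2 I)^-1 K(X, Xq)
    mu = sol.T @ y                        # posterior mean mu_Q(Xq)
    var = kernel(Xq, Xq).diagonal() - np.sum(Ks * sol, axis=0)
    return mu, np.sqrt(np.clip(var, 0.0, None))

# Usage with a 1-D squared-exponential kernel (illustrative only):
se = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :])**2)
X = np.array([-1.0, 0.0, 1.0])
y = np.sin(X)
mu, sd = gp_posterior(X, y, np.array([0.5]), se, 1e-6)
```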
The kernel function k(x,x′;θ) can significantly affect the quality of a Gaussian process regression. bayesopt uses the ARD Matérn 5/2 kernel defined in Kernel (Covariance) Function Options. See Snoek, Larochelle, and Adams [3].
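As a concrete illustration, the ARD Matérn 5/2 kernel has the standard form k = σf²(1 + √5·r + 5r²/3)·exp(−√5·r), where r² = Σd (xd − x′d)²/λd² with a separate length scale λd per dimension. A NumPy sketch follows; the parameter names are illustrative, and bayesopt's own parameterization may differ.

```python
import numpy as np

def ard_matern52(X, Xp, length_scales, sigma_f=1.0):
    """ARD Matern 5/2: k = sigma_f^2 (1 + sqrt(5) r + 5 r^2 / 3) exp(-sqrt(5) r),
    where r^2 = sum_d (x_d - x'_d)^2 / lambda_d^2 (one length scale per dimension)."""
    X = np.atleast_2d(X)
    Xp = np.atleast_2d(Xp)
    diff = (X[:, None, :] - Xp[None, :, :]) / length_scales
    r = np.sqrt(np.sum(diff**2, axis=-1))
    return sigma_f**2 * (1 + np.sqrt(5) * r + 5 * r**2 / 3) * np.exp(-np.sqrt(5) * r)

# k(x, x) = sigma_f^2, and the kernel decays with scaled distance:
K = ard_matern52(np.array([[0.0, 0.0], [1.0, 1.0]]),
                 np.array([[0.0, 0.0]]),
                 length_scales=np.array([1.0, 2.0]))
```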
Six choices of acquisition functions are available for bayesopt. There are three basic types, with 'expected-improvement' also modified by 'per-second' or 'plus':

'expected-improvement-per-second-plus' (default)

'expected-improvement'

'expected-improvement-plus'

'expected-improvement-per-second'

'lower-confidence-bound'

'probability-of-improvement'
The acquisition functions evaluate the “goodness” of a point x based on the posterior distribution function Q. When there are coupled constraints, including the Error constraint (see Objective Function Errors), all acquisition functions modify their estimate of “goodness” following a suggestion of Gelbart, Snoek, and Adams [2]. Multiply the “goodness” by an estimate of the probability that the constraints are satisfied, to arrive at the acquisition function.
'expected-improvement' family of acquisition
functions evaluates the expected amount of improvement in the objective
function, ignoring values that cause an increase in the objective.
In other words, define

xbest as the location of the lowest posterior mean.

μQ(xbest) as the lowest value of the posterior mean.

Then the expected improvement is

EI(x,Q) = EQ[max(0, μQ(xbest) − f(x))].

The 'probability-of-improvement' acquisition function makes a similar, but simpler, calculation than 'expected-improvement'. In both cases, bayesopt first calculates xbest and μQ(xbest). Then for 'probability-of-improvement', bayesopt estimates the probability PI that a new point x leads to a better objective function value, modified by a “margin” parameter m:

PI(x,Q) = PQ(f(x) < μQ(xbest) − m).

bayesopt takes m as the estimated noise standard deviation. bayesopt evaluates this probability as

PI = Φ(νQ(x)),

where νQ(x) = (μQ(xbest) − m − μQ(x)) / σQ(x).
Here Φ(·) is the unit normal CDF, and σQ is the posterior standard deviation of the Gaussian process at x.
The 'lower-confidence-bound' acquisition function looks at the curve G two standard deviations below the posterior mean at each point:

G(x) = μQ(x) − 2σQ(x).

G(x) is the 2σQ lower confidence envelope of the objective function model. The acquisition function then maximizes the negative of G:

LCB(x) = 2σQ(x) − μQ(x).
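The closed forms above are easy to state in code. Here is a Python sketch using only the standard library; `mu` and `sigma` denote the posterior mean μQ(x) and standard deviation σQ(x), and these helpers are illustrative, not the bayesopt implementation.

```python
import math

def phi(z):
    # Standard normal pdf.
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, mu_best):
    # EI(x,Q) = E[max(0, mu_Q(x_best) - f(x))] for Gaussian posterior f(x).
    z = (mu_best - mu) / sigma
    return (mu_best - mu) * Phi(z) + sigma * phi(z)

def probability_of_improvement(mu, sigma, mu_best, margin):
    # PI(x,Q) = P(f(x) < mu_Q(x_best) - m) = Phi(nu_Q(x)).
    return Phi((mu_best - margin - mu) / sigma)

def lower_confidence_bound(mu, sigma):
    # Negative of G(x) = mu_Q(x) - 2 sigma_Q(x); this is what gets maximized.
    return 2 * sigma - mu
```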
Sometimes, the time to evaluate the objective function can depend on the region. For example,
many Support Vector Machine calculations vary in timing a good deal over certain
ranges of points. If so,
bayesopt can obtain better
improvement per second by using time-weighting in its acquisition function. The
cost-weighted acquisition functions have the phrase
per-second in their names.
These acquisition functions work as follows. During the objective function evaluations, bayesopt maintains another Bayesian model of objective function evaluation time as a function of position x. The expected improvement per second that the acquisition function uses is

EIpS(x) = EIQ(x) / μS(x),

where μS(x) is the posterior mean of the timing Gaussian process model.
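The per-second weighting itself is just a ratio of the two model outputs. A tiny illustrative sketch, with made-up numbers, shows how a point with lower expected improvement can still win when it is much cheaper to evaluate:

```python
def expected_improvement_per_second(ei_value, mu_time):
    # EIpS(x) = EI_Q(x) / mu_S(x): expected improvement divided by the
    # posterior mean of the (hypothetical) timing model at x.
    return ei_value / mu_time

# Point a: higher EI but slow; point b: slightly lower EI but fast.
a = expected_improvement_per_second(0.30, 10.0)   # improvement per second
b = expected_improvement_per_second(0.25, 2.0)
```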
To escape a local objective function minimum, the acquisition functions with plus in their names modify their
behavior when they estimate that they are overexploiting an
area. To understand overexploiting, let σF(x)
be the standard deviation of the posterior objective function at x.
Let σ be the posterior standard deviation
of the additive noise, so that
σQ²(x) = σF²(x) + σ².
Define tσ to
be the value of the
ExplorationRatio option, a
positive number. The bayesopt plus functions, after each iteration, evaluate whether the next point x satisfies
σF(x) < tσσ.
If so, the algorithm declares that x is overexploiting. Then the acquisition function modifies its Kernel Function by multiplying θ by the number of iterations, as suggested by Bull [1]. This modification raises the variance σQ for points in between observations. It then generates a new point based on the new fitted kernel function. If the new point x is again overexploiting, the acquisition function multiplies θ by an additional factor of 10 and tries again. It continues in this way up to five times, trying to generate a point x that is not overexploiting. The algorithm accepts the new x as the next point.
ExplorationRatio therefore controls a tradeoff
between exploring new points for a better global solution, versus
concentrating near points that have already been examined.
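The overexploitation loop described above can be sketched as follows. Everything here is illustrative: `propose` is a hypothetical stand-in for refitting the kernel with parameters θ and maximizing the acquisition, returning a candidate point together with σF at that point.

```python
def next_point_with_plus(propose, t_sigma, sigma_noise, theta, iteration):
    """Sketch of the 'plus' overexploitation check (names are illustrative).

    propose(theta): refit kernel with parameters theta, maximize acquisition,
    and return (x_candidate, sigma_F_at_candidate).
    """
    x, s_f = propose(theta)
    scale = iteration            # first inflation: multiply theta by iteration count
    for _ in range(5):
        if s_f >= t_sigma * sigma_noise:
            return x             # not overexploiting: accept the point
        theta = [t * scale for t in theta]
        x, s_f = propose(theta)
        scale = 10               # further inflations: additional factors of 10
    return x                     # after five tries, accept x anyway
```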
bayesopt maximizes an acquisition
function using the following general steps:
For algorithms starting with 'expected-improvement' and for 'probability-of-improvement', bayesopt estimates the smallest feasible mean of the posterior distribution μQ(xbest) by
sampling several thousand points within the variable bounds, taking
several of the best (low mean value) feasible points, and improving
them using local search, to find the ostensible best feasible point.
Feasible means that the point satisfies constraints (see Constraints in Bayesian Optimization).
For all algorithms, bayesopt samples several thousand points within the variable bounds, takes several
of the best (high acquisition function) feasible points, and improves
them using local search, to find the ostensible best feasible point.
The acquisition function value depends on the modeled posterior distribution,
not a sample of the objective function, and so it can be calculated quickly.
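The sample-then-refine strategy can be sketched as follows. This Python version uses crude shrinking random steps for the local search, whereas bayesopt uses a proper local optimizer; all names and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def maximize_acquisition(acq, lb, ub, n_samples=2000, n_best=5, n_steps=50):
    """Random multistart plus simple local refinement.

    `acq` maps an (n, D) array of points to acquisition values; it is cheap
    to call because it depends only on the model posterior, not on f itself.
    """
    # 1. Sample a few thousand points within the bounds.
    pts = rng.uniform(lb, ub, size=(n_samples, len(lb)))
    # 2. Keep several of the best (highest acquisition) points.
    starts = pts[np.argsort(acq(pts))[-n_best:]]
    # 3. Improve each start by local search (shrinking random steps here).
    best_x, best_v = None, -np.inf
    for x in starts:
        step = 0.1 * (ub - lb)
        for _ in range(n_steps):
            cand = np.clip(x + rng.normal(0, step), lb, ub)
            if acq(cand[None, :])[0] > acq(x[None, :])[0]:
                x = cand
            else:
                step *= 0.9
        v = acq(x[None, :])[0]
        if v > best_v:
            best_x, best_v = x, v
    return best_x

# Toy acquisition surface with its maximum at (0.3, -0.2):
acq = lambda P: -np.sum((P - np.array([0.3, -0.2]))**2, axis=1)
x_star = maximize_acquisition(acq, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```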
[1] Bull, A. D. Convergence rates of efficient global optimization algorithms. http://arxiv.org/abs/1101.3501v3, 2011.

[2] Gelbart, M., J. Snoek, R. P. Adams. Bayesian Optimization with Unknown Constraints. http://arxiv.org/abs/1403.5607, 2014.

[3] Snoek, J., H. Larochelle, R. P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. http://arxiv.org/abs/1206.2944, 2012.