lbfgsupdate
Syntax
[netUpdated,solverStateUpdated] = lbfgsupdate(net,lossFcn,solverState)
[parametersUpdated,solverStateUpdated] = lbfgsupdate(parameters,lossFcn,solverState)
Description
Update the network learnable parameters in a custom training loop using the limited-memory BFGS (L-BFGS) algorithm.
The L-BFGS algorithm [1] is a quasi-Newton method that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. The L-BFGS algorithm is best suited for small networks and data sets that you can process in a single batch.
Note
This function applies the L-BFGS optimization algorithm to update network parameters in custom training loops that use networks defined as dlnetwork objects or model functions. If you want to train a network defined as a Layer array or as a LayerGraph, use the following functions:
1. Create an SGDM, Adam, or RMSProp training options object using the trainingOptions function.
2. Use the options object with the trainNetwork function.
[netUpdated,solverStateUpdated] = lbfgsupdate(net,lossFcn,solverState) updates the learnable parameters of the network net using the L-BFGS algorithm with the specified loss function and solver state. Use this syntax in a training loop to iteratively update a network defined as a dlnetwork object.
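As an illustration of this syntax, a custom training loop might look like the following minimal sketch. It assumes a dlnetwork net, dlarray data X and targets T, and a user-defined modelLoss function that returns the loss and the gradients; the variable names and the iteration count are hypothetical, and the initial solver state is taken from the lbfgsState function.

```matlab
% Minimal sketch (hypothetical names), assuming a dlnetwork "net",
% dlarray data X and targets T, and a user-defined modelLoss function
% that returns [loss,gradients].
lossFcn = @(net) dlfeval(@modelLoss,net,X,T);
solverState = lbfgsState;      % initial L-BFGS solver state
maxIterations = 30;            % illustrative value
for iteration = 1:maxIterations
    [net,solverState] = lbfgsupdate(net,lossFcn,solverState);
end
```

Because L-BFGS processes the full data set each iteration, the loss function here closes over all of X and T rather than over mini-batches.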
[parametersUpdated,solverStateUpdated] = lbfgsupdate(parameters,lossFcn,solverState) updates the learnable parameters in parameters using the L-BFGS algorithm with the specified loss function and solver state. Use this syntax in a training loop to iteratively update the learnable parameters of a network defined as a function.
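For a network defined as a model function, the loop has the same shape but operates on a structure of learnable parameters instead of a dlnetwork object. The sketch below assumes a structure parameters of dlarray learnables and a user-defined modelLoss function that returns the loss and the gradients with respect to parameters; all names are hypothetical.

```matlab
% Minimal sketch (hypothetical names), assuming a structure "parameters"
% of dlarray learnables and a user-defined modelLoss function that
% returns [loss,gradients] with respect to parameters.
lossFcn = @(parameters) dlfeval(@modelLoss,parameters,X,T);
solverState = lbfgsState;      % initial L-BFGS solver state
maxIterations = 30;            % illustrative value
for iteration = 1:maxIterations
    [parameters,solverState] = lbfgsupdate(parameters,lossFcn,solverState);
end
```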
Examples
Input Arguments
Output Arguments
Algorithms
References
[1] Liu, Dong C., and Jorge Nocedal. "On the Limited Memory BFGS Method for Large Scale Optimization." Mathematical Programming 45, no. 1 (August 1989): 503–528. https://doi.org/10.1007/BF01589116.
Extended Capabilities
Version History
Introduced in R2023a