Preconditioned stochastic gradient descent

Upgrading the stochastic gradient descent method to a second-order optimization method
Updated 23 Jul 2016

This package demonstrates the method proposed in the paper http://arxiv.org/abs/1512.04202, which shows how to upgrade a stochastic gradient descent (SGD) method to a second-order optimization method by preconditioning. More materials (pseudocode, additional examples, and papers) are available at https://sites.google.com/site/lixilinx/home/psgd.

Descriptions of enclosed files
binary_pattern.m
This file generates the zebra-stripe-like binary pattern to be learned by the four tested algorithms.

plain_SGD.m
This demo shows how to use standard SGD to train a neural network by minimizing the logistic loss. As usual, SGD requires some tuning: convergence is slow with small step sizes and unstable with large ones.
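
For reference, the core of such a demo is just the plain SGD update. Below is a minimal, self-contained sketch on a toy logistic-regression problem; the data, sizes, and step size are illustrative and do not match plain_SGD.m:

% Plain SGD on a toy logistic-regression problem (illustrative sketch)
rng(0);
n = 1000; dim = 10;
X = randn(n, dim);                          % synthetic inputs
y = double(X*randn(dim, 1) > 0);            % synthetic binary labels
w = zeros(dim, 1);
eta = 0.1;                                  % step size; needs hand tuning
for iter = 1:5000
    i = randi(n);                           % draw one random sample
    p = 1/(1 + exp(-X(i,:)*w));             % sigmoid prediction
    g = (p - y(i))*X(i,:)';                 % gradient of the logistic loss
    w = w - eta*g;                          % plain SGD step
end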

preconditioned_SGD_dense.m
This demo shows how to precondition SGD with a dense preconditioner to improve convergence. The gradient must be calculated twice at each iteration, but convergence is much faster and less tuning is required. The step size is normalized, and a value in the range [0.01, 0.1] seems to work well.
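
The two-gradient structure of the iteration can be outlined as follows. This is only a sketch: next_minibatch, net_grad, and update_precond are placeholder names, not functions defined anywhere (this package's preconditioner.m plays the role of update_precond):

% Outline of a dense-preconditioner PSGD iteration (placeholder names)
theta = zeros(dim, 1);
Q = eye(dim);                               % factor of preconditioner P = Q'*Q
eta = 0.05;                                 % normalized step size, ~[0.01, 0.1]
for iter = 1:num_iter
    [x, y] = next_minibatch();              % placeholder data source
    g = net_grad(theta, x, y);              % gradient evaluation #1
    dtheta = sqrt(eps)*randn(dim, 1);       % tiny random perturbation
    dg = net_grad(theta + dtheta, x, y) - g;  % gradient evaluation #2, same batch
    Q = update_precond(Q, dtheta, dg);      % adapt Q (cf. preconditioner.m)
    theta = theta - eta*(Q'*(Q*g));         % preconditioned SGD update
end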

preconditioned_SGD_sparse.m
This demo shows how to approximate a preconditioner as direct sums and/or Kronecker products of smaller matrices. In practice, the problem scale can be so large that the preconditioner must be represented sparsely to make its estimation affordable.
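
The direct-sum part of that idea is just a block-diagonal preconditioner: the parameters are split into groups, and each group gets its own small factor. A runnable check of the equivalence (sizes are illustrative):

% Direct-sum (block-diagonal) preconditioner applied block-wise
n1 = 3; n2 = 4;
Q1 = randn(n1); Q2 = randn(n2);             % small per-group factors
g1 = randn(n1, 1); g2 = randn(n2, 1);       % gradient split into two groups
pg = [Q1'*(Q1*g1); Q2'*(Q2*g2)];            % block-wise application
P = blkdiag(Q1'*Q1, Q2'*Q2);                % the full direct-sum preconditioner
norm(pg - P*[g1; g2])                       % ~0, up to rounding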

preconditioner_kron.m
This function shows how to adaptively estimate a Kronecker product approximation of a preconditioner for parameters in matrix form.
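
While preconditioner_kron.m handles the estimation, applying such a preconditioner to a matrix-shaped gradient G is cheap thanks to the identity kron(A, B)*vec(X) = vec(B*X*A'), so the large Kronecker product never has to be formed. A runnable check (sizes are illustrative):

% Kronecker-product preconditioner applied to a matrix gradient
m = 3; n = 4;
Q1 = randn(m); Q2 = randn(n);               % small factors
G = randn(m, n);                            % gradient in matrix form
PG = (Q1'*Q1)*G*(Q2'*Q2);                   % cheap application
norm(PG(:) - kron(Q2'*Q2, Q1'*Q1)*G(:))     % ~0, up to rounding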

preconditioner.m
This function shows how to adaptively estimate a preconditioner via gradient perturbation analysis.
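
The adaptation rule has the general form shown in this package's release notes below, Q = Q - step_size*grad*Q/normalizer. Here is a hedged sketch of one estimation step for a dense, upper-triangular factor Q; the shorthand names a and b are mine, and the normalization shown matches version 1.2.0.0:

% One adaptation step for a dense preconditioner P = Q'*Q (sketch)
% dtheta: parameter perturbation; dg: resulting change in the gradient
a = Q*dg;                                   % transformed gradient change
b = Q'\dtheta;                              % transformed perturbation
grad = triu(a*a' - b*b');                   % gradient of the criterion w.r.t. Q
Q = Q - step_size*grad*Q/max(max(abs(grad)));  % normalized update (v1.2.0.0)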

RMSProp_SGD.m
This demo implements RMSProp, a popular variant of SGD for neural network training. As with standard SGD, tuning can be difficult.
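
For comparison with the sketches above, the standard RMSProp rule divides each gradient coordinate by a running estimate of its root-mean-square magnitude. A self-contained toy example (problem and constants are illustrative, not those in RMSProp_SGD.m):

% RMSProp on a noisy quadratic (standard update rule; toy problem)
rng(0);
A = randn(10); A = A'*A + eye(10);          % positive-definite quadratic
theta = randn(10, 1);
v = zeros(10, 1);
beta = 0.9; eta = 0.01; delta = 1e-8;       % constants that need tuning
for iter = 1:2000
    g = A*theta + 0.1*randn(10, 1);         % noisy gradient of 0.5*theta'*A*theta
    v = beta*v + (1 - beta)*g.^2;           % running mean of squared gradient
    theta = theta - eta*g./(sqrt(v) + delta);  % element-wise scaled step
end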

Cite As

Xilin Li (2024). Preconditioned stochastic gradient descent (https://www.mathworks.com/matlabcentral/fileexchange/54525-preconditioned-stochastic-gradient-descent), MATLAB Central File Exchange.

MATLAB Release Compatibility
Created with R2015a
Compatible with any release
Platform Compatibility
Windows macOS Linux

Version Release Notes
1.2.0.0

The step size normalization factor in preconditioner estimation is changed to max(max(abs(grad))).

1.1.0.0

Revised the preconditioner estimation method. Specifically,
Q = Q - step_size*grad*Q/(max(abs(diag(grad))) + eps);
is changed to
Q = Q - step_size*grad*Q/max(max(abs(diag(grad))), 1);

1.0.0.0