
Solve Partial Differential Equation with LBFGS Method and Deep Learning

This example shows how to train a Physics Informed Neural Network (PINN) to numerically compute the solution of the Burger's equation by using the limited-memory BFGS (LBFGS) algorithm.

The Burger's equation is a partial differential equation (PDE) that arises in different areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow.

Given the computational domain $[-1,1] \times [0,1]$, this example uses a physics informed neural network (PINN) [1] and trains a multilayer perceptron neural network that takes samples $(x,t)$ as input, where $x \in [-1,1]$ is the spatial variable and $t \in [0,1]$ is the time variable, and returns $u(x,t)$, where $u$ is the solution of the Burger's equation:

$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \frac{0.01}{\pi} \frac{\partial^2 u}{\partial x^2} = 0,$$

with $u(x,t=0) = -\sin(\pi x)$ as the initial condition, and $u(x=-1,t) = 0$ and $u(x=1,t) = 0$ as the boundary conditions.

Instead of training the network using the trainNetwork function, or using a custom training loop that updates parameters using sgdmupdate or similar functions, this example estimates the learnable parameters by using the fmincon function (requires Optimization Toolbox™). The fmincon function finds the minimum of constrained nonlinear multivariable functions.

The example trains the model by enforcing that given an input $(x,t)$, the output of the network $u(x,t)$ fulfills the Burger's equation, the boundary conditions, and the initial condition. To train the model, the example uses the limited-memory BFGS (LBFGS) algorithm, which is a quasi-Newton method that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm.
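
For reference, BFGS maintains an approximation $B_k$ of the Hessian of the loss and updates it after each step using the parameter change $s_k = \theta_{k+1} - \theta_k$ and the gradient change $y_k = \nabla L_{k+1} - \nabla L_k$:

$$B_{k+1} = B_k - \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k} + \frac{y_k y_k^\top}{y_k^\top s_k}.$$

The limited-memory variant never stores $B_k$ explicitly. Instead, it keeps only the most recent $(s_k, y_k)$ pairs and reconstructs the search direction from them, which keeps the memory cost linear in the number of learnable parameters.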

Training this model does not require collecting data in advance. You can generate data using the definition of the PDE and the constraints.

Generate Training Data

Training the model requires a data set of collocation points that enforce the boundary conditions, enforce the initial condition, and fulfill the Burger's equation.

Select 25 equally spaced time points to enforce each of the boundary conditions u(x=-1,t)=0 and u(x=1,t)=0.

numBoundaryConditionPoints = [25 25];

x0BC1 = -1*ones(1,numBoundaryConditionPoints(1));
x0BC2 = ones(1,numBoundaryConditionPoints(2));

t0BC1 = linspace(0,1,numBoundaryConditionPoints(1));
t0BC2 = linspace(0,1,numBoundaryConditionPoints(2));

u0BC1 = zeros(1,numBoundaryConditionPoints(1));
u0BC2 = zeros(1,numBoundaryConditionPoints(2));

Select 50 equally spaced spatial points to enforce the initial condition u(x,t=0)=-sin(πx).

numInitialConditionPoints  = 50;

x0IC = linspace(-1,1,numInitialConditionPoints);
t0IC = zeros(1,numInitialConditionPoints);
u0IC = -sin(pi*x0IC);

Group together the data for initial and boundary conditions.

X0 = [x0IC x0BC1 x0BC2];
T0 = [t0IC t0BC1 t0BC2];
U0 = [u0IC u0BC1 u0BC2];

Select 10,000 interior points at which to enforce that the output of the network fulfills the Burger's equation.

numInternalCollocationPoints = 10000;

% Sample quasi-random points in the unit square using a Sobol sequence.
pointSet = sobolset(2);
points = net(pointSet,numInternalCollocationPoints);

% Map the points onto the computational domain: x in [-1,1], t in [0,1].
dataX = 2*points(:,1)-1;
dataT = points(:,2);
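
Optionally, you can plot the sampled points to check that they cover the domain evenly. This visualization is not required for training:

figure
plot(dataX,dataT,'.')
xlabel('x')
ylabel('t')
title('Interior Collocation Points')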

Create an array datastore containing the training data.

ds = arrayDatastore([dataX dataT]);

Define Deep Learning Model

Define a multilayer perceptron architecture with 9 fully connect operations and 20 hidden neurons in each intermediate layer. The first fully connect operation has two input channels, corresponding to the inputs x and t. The last fully connect operation has one output channel, u(x,t).

Define and Initialize Model Parameters

Define the parameters for each of the operations and include them in a structure. Use the format parameters.OperationName_ParameterName where parameters is the structure, OperationName is the name of the operation (for example "fc1") and ParameterName is the name of the parameter (for example, "Weights").

The algorithm in this example requires learnable parameters to be in the first level of the structure, so do not use nested structures in this step. The fmincon function requires the learnable parameters to be of data type double.

Specify the number of layers and the number of neurons for each layer.

numLayers = 9;
numNeurons = 20;

Initialize the parameters for the first fully connect operation. The first fully connect operation has two input channels.

parameters = struct;

sz = [numNeurons 2];
parameters.fc1_Weights = initializeHe(sz,2,'double');
parameters.fc1_Bias = initializeZeros([numNeurons 1],'double');
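
The initializeHe and initializeZeros functions are supporting files attached to this example. As a minimal sketch of their behavior (the attached files may differ in detail), He initialization draws weights from a zero-mean normal distribution with variance 2/numIn:

function parameter = initializeHe(sz,numIn,className)
% He initialization: zero-mean normal samples with variance 2/numIn.
parameter = sqrt(2/numIn) * randn(sz,className);
parameter = dlarray(parameter);
end

function parameter = initializeZeros(sz,className)
% Zero initialization.
parameter = dlarray(zeros(sz,className));
end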

Initialize the parameters for each of the remaining intermediate fully connect operations.

for layerNumber=2:numLayers-1
    name = "fc"+layerNumber;

    sz = [numNeurons numNeurons];
    numIn = numNeurons;
    parameters.(name + "_Weights") = initializeHe(sz,numIn,'double');
    parameters.(name + "_Bias") = initializeZeros([numNeurons 1],'double');
end

Initialize the parameters for the final fully connect operation. The final fully connect operation has one output channel.

sz = [1 numNeurons];
numIn = numNeurons;
parameters.("fc" + numLayers + "_Weights") = initializeHe(sz,numIn,'double');
parameters.("fc" + numLayers + "_Bias") = initializeZeros([1 1],'double');

View the network parameters.

parameters
parameters = struct with fields:
    fc1_Weights: [20×2 dlarray]
       fc1_Bias: [20×1 dlarray]
    fc2_Weights: [20×20 dlarray]
       fc2_Bias: [20×1 dlarray]
    fc3_Weights: [20×20 dlarray]
       fc3_Bias: [20×1 dlarray]
    fc4_Weights: [20×20 dlarray]
       fc4_Bias: [20×1 dlarray]
    fc5_Weights: [20×20 dlarray]
       fc5_Bias: [20×1 dlarray]
    fc6_Weights: [20×20 dlarray]
       fc6_Bias: [20×1 dlarray]
    fc7_Weights: [20×20 dlarray]
       fc7_Bias: [20×1 dlarray]
    fc8_Weights: [20×20 dlarray]
       fc8_Bias: [20×1 dlarray]
    fc9_Weights: [1×20 dlarray]
       fc9_Bias: [1×1 dlarray]

Define Model and Model Gradients Functions

Create the function model, listed in the Model Function section at the end of the example, that computes the outputs of the deep learning model. The function model takes as input the model parameters and the network inputs, and returns the model output.

Create the function modelGradients, listed in the Model Gradients Function section at the end of the example, that takes as input the model parameters, the network inputs, and the initial and boundary conditions, and returns the gradients of the loss with respect to the learnable parameters and the corresponding loss.

Define fmincon Objective Function

Create the function objectiveFunction, listed in the fmincon Objective Function section at the end of the example, that returns the loss and gradients of the model. The function objectiveFunction takes as input a vector of learnable parameters, the network inputs, the initial and boundary conditions, and the names and sizes of the learnable parameters. The function returns the loss to be minimized by the fmincon function and the gradients of the loss with respect to the learnable parameters.

Specify Optimization Options

Specify the optimization options:

  • Optimize using the fmincon optimizer with the LBFGS algorithm for no more than 7500 iterations and function evaluations.

  • Use an optimality tolerance of 1e-5.

  • Provide the gradients to the algorithm.

options = optimoptions('fmincon', ...
    'HessianApproximation','lbfgs', ...
    'MaxIterations',7500, ...
    'MaxFunctionEvaluations',7500, ...
    'OptimalityTolerance',1e-5, ...
    'SpecifyObjectiveGradient',true);

Train Network Using fmincon

Train the network using the fmincon function.

The fmincon function requires the learnable parameters to be specified as a vector. Convert the parameters to a vector using the parameterStructToVector function (attached to this example as a supporting file). To convert back to a structure of parameters, also return the parameter names and sizes.

[parametersV,parameterNames,parameterSizes] = parameterStructToVector(parameters);
parametersV = extractdata(parametersV);
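
For reference, a minimal sketch of what parameterStructToVector could look like; the actual supporting file attached to this example may differ:

function [parametersV,parameterNames,parameterSizes] = parameterStructToVector(parameters)
% Flatten a structure of learnable parameters into a single column vector,
% recording the field names and sizes needed to invert the conversion.
parameterNames = fieldnames(parameters);
parameterSizes = cell(numel(parameterNames),1);
parametersV = [];
for i = 1:numel(parameterNames)
    parameter = parameters.(parameterNames{i});
    parameterSizes{i} = size(parameter);
    parametersV = [parametersV; parameter(:)];
end
end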

Convert the training data to dlarray objects with format 'CB' (channel, batch).

dlX = dlarray(dataX','CB');
dlT = dlarray(dataT','CB');
dlX0 = dlarray(X0,'CB');
dlT0 = dlarray(T0,'CB');
dlU0 = dlarray(U0,'CB');

Create a function handle with one input that defines the objective function.

objFun = @(parameters) objectiveFunction(parameters,dlX,dlT,dlX0,dlT0,dlU0,parameterNames,parameterSizes);

Update the learnable parameters using the fmincon function. Depending on the number of iterations, this can take a while to run. To enable a detailed verbose output, set the 'Display' optimization option to 'iter-detailed'.

parametersV = fmincon(objFun,parametersV,[],[],[],[],[],[],[],options);
Solver stopped prematurely.

fmincon stopped because it exceeded the function evaluation limit,
options.MaxFunctionEvaluations = 7.500000e+03.

For prediction, convert the vector of parameters to a structure using the parameterVectorToStruct function (attached to this example as a supporting file).

parameters = parameterVectorToStruct(parametersV,parameterNames,parameterSizes);
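
Similarly, a minimal sketch of the inverse conversion, assuming the names and sizes returned by the parameterStructToVector sketch above:

function parameters = parameterVectorToStruct(parametersV,parameterNames,parameterSizes)
% Rebuild the structure of learnable parameters from a flat vector.
parameters = struct;
offset = 0;
for i = 1:numel(parameterNames)
    sz = parameterSizes{i};
    numElements = prod(sz);
    parameter = parametersV(offset+1:offset+numElements);
    parameters.(parameterNames{i}) = reshape(parameter,sz);
    offset = offset + numElements;
end
end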

Evaluate Model Accuracy

For values of t at 0.25, 0.5, 0.75, and 1, compare the predicted values of the deep learning model with the true solutions of the Burger's equation using the relative $L^2$ error $\lVert \hat{u} - u \rVert_2 / \lVert u \rVert_2$.

Set the target times at which to test the model. For each time, calculate the solution at 1001 equally spaced points in the range [-1,1].

tTest = [0.25 0.5 0.75 1];
numPredictions = 1001;
XTest = linspace(-1,1,numPredictions);

Test the model.

figure
for i=1:numel(tTest)
    t = tTest(i);
    TTest = t*ones(1,numPredictions);

    % Make predictions.
    dlXTest = dlarray(XTest,'CB');
    dlTTest = dlarray(TTest,'CB');
    dlUPred = model(parameters,dlXTest,dlTTest);

    % Calculate true values.
    UTest = solveBurgers(XTest,t,0.01/pi);

    % Calculate error.
    err = norm(extractdata(dlUPred) - UTest) / norm(UTest);

    % Plot predictions.
    subplot(2,2,i)
    plot(XTest,extractdata(dlUPred),'-','LineWidth',2);
    ylim([-1.1, 1.1])

    % Plot true values.
    hold on
    plot(XTest, UTest, '--','LineWidth',2)
    hold off

    title("t = " + t + ", Error = " + gather(err));
end

subplot(2,2,2)
legend('Predicted','True')

The plots show how close the predictions are to the true values.

Solve Burger's Equation Function

The solveBurgers function returns the true solution of Burger's equation at times t as outlined in [2].
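
In particular, with viscosity $\nu = 0.01/\pi$, the function evaluates the analytic solution in the form

$$u(x,t) = -\frac{\int_{-\infty}^{\infty} \sin(\pi(x-\eta))\, f(x-\eta)\, g(\eta)\, d\eta}{\int_{-\infty}^{\infty} f(x-\eta)\, g(\eta)\, d\eta}, \qquad f(y) = e^{-\cos(\pi y)/(2\pi\nu)}, \quad g(\eta) = e^{-\eta^2/(4\nu t)},$$

computing the two integrals over $\eta$ with numerical quadrature.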

function U = solveBurgers(X,t,nu)

% Define functions.
f = @(y) exp(-cos(pi*y)/(2*pi*nu));
g = @(y) exp(-(y.^2)/(4*nu*t));

% Initialize solutions.
U = zeros(size(X));

% Loop over x values.
for i = 1:numel(X)
    x = X(i);

    % Calculate the solutions using the integral function. The boundary
    % values at x = -1 and x = 1 are zero, so skip them; they are already
    % set by the initialization of U.
    if abs(x) ~= 1
        fun = @(eta) sin(pi*(x-eta)) .* f(x-eta) .* g(eta);
        uxt = -integral(fun,-inf,inf);
        fun = @(eta) f(x-eta) .* g(eta);
        U(i) = uxt / integral(fun,-inf,inf);
    end
end

end

fmincon Objective Function

The objectiveFunction function defines the objective function to be used by the LBFGS algorithm.

The objectiveFunction function takes as input a vector of learnable parameters parametersV; the network inputs dlX and dlT; the initial and boundary conditions dlX0, dlT0, and dlU0; and the names and sizes of the learnable parameters parameterNames and parameterSizes, respectively. The function returns the loss to be minimized by the fmincon function, loss, and a vector containing the gradients of the loss with respect to the learnable parameters, gradientsV.

function [loss,gradientsV] = objectiveFunction(parametersV,dlX,dlT,dlX0,dlT0,dlU0,parameterNames,parameterSizes)

% Convert parameters to structure of dlarray objects.
parametersV = dlarray(parametersV);
parameters = parameterVectorToStruct(parametersV,parameterNames,parameterSizes);

% Evaluate model gradients and loss.
[gradients,loss] = dlfeval(@modelGradients,parameters,dlX,dlT,dlX0,dlT0,dlU0);

% Return loss and gradients for fmincon.
gradientsV = parameterStructToVector(gradients);
gradientsV = extractdata(gradientsV);
loss = extractdata(loss);

end

Model Gradients Function

The model is trained by enforcing that given an input $(x,t)$, the output of the network $u(x,t)$ fulfills the Burger's equation, the boundary conditions, and the initial condition. In particular, two quantities contribute to the loss to be minimized:

$$\text{loss} = MSE_f + MSE_u,$$

where

$$MSE_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f\left(x_f^i, t_f^i\right) \right|^2 \qquad \text{and} \qquad MSE_u = \frac{1}{N_u} \sum_{i=1}^{N_u} \left| u\left(x_u^i, t_u^i\right) - u^i \right|^2.$$

Here, $\{x_u^i, t_u^i\}_{i=1}^{N_u}$ are collocation points on the boundary of the computational domain and account for both the boundary and initial conditions, and $\{x_f^i, t_f^i\}_{i=1}^{N_f}$ are points in the interior of the domain. The residual $f(x,t) = \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} - \frac{0.01}{\pi}\frac{\partial^2 u}{\partial x^2}$ measures how far the network output is from satisfying the Burger's equation.

Calculating $MSE_f$ requires the derivatives $\frac{\partial u}{\partial t}$, $\frac{\partial u}{\partial x}$, and $\frac{\partial^2 u}{\partial x^2}$ of the output $u$ of the model.

The function modelGradients takes as input the model parameters parameters, the network inputs dlX and dlT, and the initial and boundary conditions dlX0, dlT0, and dlU0, and returns the gradients of the loss with respect to the learnable parameters and the corresponding loss.

function [gradients,loss] = modelGradients(parameters,dlX,dlT,dlX0,dlT0,dlU0)

% Make predictions at the interior collocation points.
U = model(parameters,dlX,dlT);

% Calculate derivatives with respect to X and T.
gradientsU = dlgradient(sum(U,'all'),{dlX,dlT},'EnableHigherDerivatives',true);
Ux = gradientsU{1};
Ut = gradientsU{2};

% Calculate second-order derivatives with respect to X.
Uxx = dlgradient(sum(Ux,'all'),dlX,'EnableHigherDerivatives',true);

% Calculate lossF. Enforce Burger's equation.
f = Ut + U.*Ux - (0.01./pi).*Uxx;
zeroTarget = zeros(size(f), 'like', f);
lossF = mse(f, zeroTarget);

% Calculate lossU. Enforce initial and boundary conditions.
dlU0Pred = model(parameters,dlX0,dlT0);
lossU = mse(dlU0Pred, dlU0);

% Combine losses.
loss = lossF + lossU;

% Calculate gradients with respect to the learnable parameters.
gradients = dlgradient(loss,parameters);

end

Model Function

The model trained in this example consists of a series of fully connect operations with a tanh operation between each one.

The model function takes as input the model parameters parameters and the network inputs dlX and dlT, and returns the model output dlU.

function dlU = model(parameters,dlX,dlT)

dlXT = [dlX;dlT];
numLayers = numel(fieldnames(parameters))/2;

% First fully connect operation.
weights = parameters.fc1_Weights;
bias = parameters.fc1_Bias;
dlU = fullyconnect(dlXT,weights,bias);

% tanh and fully connect operations for remaining layers.
for i=2:numLayers
    name = "fc" + i;

    dlU = tanh(dlU);

    weights = parameters.(name + "_Weights");
    bias = parameters.(name + "_Bias");
    dlU = fullyconnect(dlU, weights, bias);
end

end

References

  1. Raissi, Maziar, Paris Perdikaris, and George Em Karniadakis. "Physics Informed Deep Learning (Part I): Data-Driven Solutions of Nonlinear Partial Differential Equations." Preprint, arXiv:1711.10561 (2017). https://arxiv.org/abs/1711.10561

  2. Basdevant, C., M. Deville, P. Haldenwang, J. Lacroix, J. Ouazzani, R. Peyret, P. Orlandi, and A. Patera. "Spectral and Finite Difference Solutions of the Burgers Equation." Computers & Fluids 14 (1986): 23–41.
