The network is initialized only once, when you call
net = feedforwardnet(hiddenLayerSize, trainFcn)
When you then call “net = train(net, inputs, targets)” in a loop, train returns the network with updated weights and biases, and that updated network is the starting point for the next iteration, and so on.
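The loop described above can be sketched as follows (the toy data, network size, and iteration count are placeholders, not part of the original answer):

```matlab
% Minimal sketch: incremental training in a loop.
% The random data below is a placeholder for your own inputs/targets.
x = rand(3, 100);                     % 3 features, 100 samples
t = rand(1, 100);                     % 1 target per sample

net = feedforwardnet(10, 'trainlm');  % initialized only once

for k = 1:5
    % train returns the updated network; reassigning it to net means
    % the next call continues from the current weights and biases
    net = train(net, x, t);
end
```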
You can also use the pre-existing deep learning functionality. To do that, recreate your feedforward net as a simple deep learning network consisting of 1 input layer, 1 fully connected layer, 1 custom layer, and 1 output classification layer, where the custom layer implements the tansig activation function of the feedforward net. This reproduces a standard feedforward net.
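A hedged sketch of such a layer array is below (the sizes are placeholders). Note that the built-in tanhLayer computes the same function as tansig, so a hand-written custom layer is only strictly needed for transfer functions without a built-in counterpart:

```matlab
% Sketch of an equivalent deep learning network (sizes are placeholders).
numFeatures = 3; hiddenLayerSize = 10; numClasses = 2;
layers = [
    featureInputLayer(numFeatures)
    fullyConnectedLayer(hiddenLayerSize)
    tanhLayer                            % same function as tansig
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
```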
This approach trains with a stochastic gradient descent solver (e.g. 'sgdm', stochastic gradient descent with momentum), which works on mini-batches of data rather than the full dataset at once.
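For illustration, mini-batch training with trainNetwork might look like this (XTrain, YTrain, the layer sizes, and the option values are all placeholders, not values from the original answer):

```matlab
% Hedged sketch: mini-batch training with the 'sgdm' solver.
XTrain = rand(100, 3);                   % 100 samples, 3 features (toy data)
YTrain = categorical(randi(2, 100, 1));  % 2 classes (toy labels)

layers = [
    featureInputLayer(3)
    fullyConnectedLayer(10)
    tanhLayer                            % same function as tansig
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];

% Mini-batch size and epochs are set through trainingOptions
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 32, ...
    'MaxEpochs', 10, ...
    'Verbose', false);

trainedNet = trainNetwork(XTrain, YTrain, layers, options);
```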
Please refer to the following link for more information on how to create custom layers: