Get from GitHub

Download apps, toolboxes, and other File Exchange content using Add-On Explorer in MATLAB.


Highlights from
Deep Learning Toolbox

  • allcomb(varargin)
    ALLCOMB - All combinations
  • caeapplygrads(cae)
  • caebbp(cae)
  • caebp(cae, y)
  • caedown(cae)
  • caenumgradcheck(cae, x, y)
  • caesdlm(cae, opts, m)
  • caetrain(cae, x, opts)
  • caeup(cae, x)
  • cnnapplygrads(net, opts)
  • cnnbp(net, y)
  • cnnff(net, x)
  • cnnnumgradcheck(net, x, y)
  • cnnsetup(net, x, y)
  • cnntest(net, x, y)
  • cnntrain(net, x, y, opts)
  • dbnsetup(dbn, x, opts)
  • dbntrain(dbn, x, opts)
  • dbnunfoldtonn(dbn, output...
    DBNUNFOLDTONN Unfolds a DBN to a NN
  • expand(A, S)
    EXPAND Replicate and tile each element of an array, similar to repmat.
  • flicker(X,fps)
  • flipall(X)
  • fliplrf(x)
    FLIPLR Flip matrix in left/right direction.
  • flipudf(x)
    FLIPUD Flip matrix in up/down direction.
  • im2patches(im,m,n)
  • isOctave()
    detects if we're running Octave
  • makeLMfilters
Returns the LML filter bank of size 49x49x48 in F. To convolve an...
  • max3d(X, M)
  • myOctaveVersion()
    return OCTAVE_VERSION or 'undefined' as a string
  • nnapplygrads(nn)
    NNAPPLYGRADS updates weights and biases with calculated gradients
  • nnbp(nn)
    NNBP performs backpropagation
  • nnchecknumgrad(nn, x, y)
  • nneval(nn, loss, train_x,...
    NNEVAL evaluates performance of neural network
  • nnff(nn, x, y)
    NNFF performs a feedforward pass
  • nnpredict(nn, x)
  • nnsetup(architecture)
    NNSETUP creates a Feedforward Backpropagate Neural Network
  • nntest(nn, x, y)
  • nntrain(nn, train_x, trai...
    NNTRAIN trains a neural net
  • nnupdatefigures(nn,fhandl...
    NNUPDATEFIGURES updates figures during training
  • normalize(x, mu, sigma)
  • patches2im(patches,n,m)
  • randcorr(n,R)
RANDCORR Generates correlated random variables
  • randp(P,varargin)
    RANDP - pick random values with relative probability
  • rbmdown(rbm, x)
  • rbmtrain(rbm, x, opts)
  • rbmup(rbm, x)
  • rnd(x)
  • saesetup(size)
  • saetrain(sae, x, opts)
  • scaesetup(cae, x, opts)
  • scaetrain(scae, x, opts)
  • sigm(P)
  • sigmrnd(P)
  • softmax(eta)
  • tanh_opt(A)
  • test_cnn_gradients_are_nu...
  • test_example_CNN
    ex1 Train a 6c-2s-12c-2s Convolutional neural network
  • test_example_DBN
    ex1 train a 100 hidden unit RBM and visualize its weights
  • test_example_NN
  • test_example_SAE
    ex1 train a 100 hidden unit SDAE and use it to initialize a FFNN
  • test_nn_gradients_are_num...
  • visualize(X, mm, s1, s2)
  • whiten(X, fudgefactor)
  • zscore(x)
  • caeexamples.m
    mnist data
  • runalltests.m
  • View all files
4.3 | 45 ratings | 389 Downloads (last 30 days) | File Size: 16 MB | File ID: #38310 | Version: 1.2

Deep Learning Toolbox


Rasmus Berg Palm


24 Sep 2012 (Updated 13 Sep 2016)

Deep Belief Nets, Stacked Autoencoders, Convolutional Neural Nets and more. With examples.

Editor's Notes:

Popular File 2014


File Information

Deprecation notice.
This toolbox is outdated and no longer maintained.

There are much better tools available for deep learning than this toolbox, e.g. Theano, Torch, or TensorFlow.

I would suggest you use one of the tools mentioned above rather than use this toolbox.

Best, Rasmus.

A Matlab toolbox for Deep Learning.
Deep Learning is a new subfield of machine learning that focuses on learning deep hierarchical models of data. It is inspired by the human brain's apparent deep (layered, hierarchical) architecture. A good overview of the theory of Deep Learning is Learning Deep Architectures for AI.

For a more informal introduction, see the following videos by Geoffrey Hinton and Andrew Ng.

The Next Generation of Neural Networks (Hinton, 2007)
Recent Developments in Deep Learning (Hinton, 2010)
Unsupervised Feature Learning and Deep Learning (Ng, 2011)
If you use this toolbox in your research please cite:

Prediction as a candidate for learning deep hierarchical models of data (Palm, 2012)

Directories included in the toolbox
NN/ - A library for Feedforward Backpropagation Neural Networks

CNN/ - A library for Convolutional Neural Networks

DBN/ - A library for Deep Belief Networks

SAE/ - A library for Stacked Auto-Encoders

CAE/ - A library for Convolutional Auto-Encoders

util/ - Utility functions used by the libraries

data/ - Data used by the examples

tests/ - Unit tests to verify the toolbox is working
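As a quick orientation, the libraries above share a setup/train/test calling convention; the NN/ flow, for example, follows test_example_NN. A minimal sketch (shapes are MNIST-style; the opts values are illustrative, not tuned):

```matlab
% Sketch only: assumes train_x/test_x are N-by-784 doubles in [0,1] and
% train_y/test_y are N-by-10 one-hot label matrices (as in data/mnist_uint8).
nn = nnsetup([784 100 10]);       % input layer, 100 hidden units, 10 outputs

opts.numepochs = 1;               % illustrative: a single pass over the data
opts.batchsize = 100;             % N must divide evenly into batches

[nn, L]   = nntrain(nn, train_x, train_y, opts);  % L = per-batch loss
[er, bad] = nntest(nn, test_x, test_y);           % er = error rate on the test set
```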

For references on each library check

Required Products MATLAB
MATLAB release MATLAB 7.11 (R2010b)
Comments and Ratings (62)
16 Jan 2017 Bowen Liu

Zero documentation, none of the code is commented, and the library throws errors with no explanation if any assert fails.

Comment only
16 Jan 2017 Bowen Liu

14 Dec 2016 m sh

14 Sep 2016 Ashutosh Kumar Upadhyay

16 Jul 2016 zhenzhou wang

Not enough description. No clear goal!

29 Jun 2016 Charles Harrison

I also am interested in knowing whether this package can be used for regression.

24 Jun 2016 Vikrant Karale

19 Jun 2016 ahmad karim

Please assume that I have an image; how can I classify it using a CNN? I can't implement the code.

Comment only
02 Jun 2016 Delowar Hossain

Can you explain the labeling? I.e., how did you create labels for the train and test data?

Comment only
02 Jun 2016 bam

May I ask whether the NN toolbox refers to a DNN or a conventional NN?

Comment only
31 May 2016 M J

Dear Karakule,
Please describe your problems here so that we can help you.

04 May 2016 Karakule

Is there anyone who can help me? I couldn't run any of the code. I have faced some problems and couldn't fix them. Please help me.

03 May 2016 Shelvin

So, you won't update this toolbox any more, right? But it really works very well. Thank you for providing this.

28 Apr 2016 michael scheinfeild

test_example_CNN.m works very well; you just need to add the paths first.

16 Apr 2016 Suparna Biswas

I am getting an error during execution of test_example_SAE. The error is:

Attempted to access lmisys(4); index out of bounds because ...

Error in lmiunpck (line 23)
rs=lmisys(4); rv=lmisys(5); % row sizes of LMISET,LMIVAR

Error in nnsetup (line 26)

Error in saesetup (line 3)
{u-1} = size(u) size(u-1)]);

Error in test_example_SAE (line ...)
sae = saesetup([784 100]);
Comment only
29 Mar 2016 Shemmy

15 Mar 2016 Nan Ye

11 Jan 2016 Lenka Polaskova

08 Jan 2016 Eason Tseng

08 Dec 2015 su jin an

I'm getting an error saying it can't load mnist_uint8.
Where can I get this dataset?
I tried using the MNIST data from their website and it says some columns are not available due to ASCII.
Will you help me please?

01 Dec 2015 Karel Macek

Is it possible to use the Toolbox also for general regression?

Comment only
25 Nov 2015 Song weiwei

Very nice

22 May 2015 NeuralDip

Sorry, but the more I look at the code, the more I'm convinced that this is not as cool as it seems at first. True, there are no other libraries doing CAEs, but this one is left unfinished.

Beware that even the kernels don't work. Also, expect that the toolbox is strictly intended for image data.

21 May 2015 Kai Zhang

21 May 2015 NeuralDip

This is a really nice toolbox, but as some say it lacks documentation and code comments.

May I ask what the difference is between input and output kernels in the convolutional autoencoders? Why is such a distinction not made in the plain convolutional networks? And if I wanted to pre-train a CNN using a CAE, how should I proceed?

29 Apr 2015 Frozenarm

I am getting this error:

Error using nntrain (line 33)
numbatches must be a integer

which has to do with batchsize. I have tried different numbers and all give me this error. My training dataset contains 8150 datapoints; when I set nn.numbatches = 50, I get this error.
Does anyone have an idea how to deal with this? I am not sure what to do.

Comment only
23 Apr 2015 Nadith

In nnff(), does anyone know why we use the dropout fraction when testing? Isn't the testing supposed to be done with non-corrupted/manipulated data?

Ref. source:
if(nn.dropoutFraction > 0)
    if(nn.testing)
        nn.a{i} = nn.a{i}.*(1 - nn.dropoutFraction);
    else
        nn.dropOutMask{i} = (rand(size(nn.a{i}))>nn.dropoutFraction);
        nn.a{i} = nn.a{i}.*nn.dropOutMask{i};
    end
end

Comment only
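For context on the question above: the branch quoted in that comment is the classic "weight scaling" form of dropout. Units are zeroed with probability nn.dropoutFraction during training, so at test time each activation is scaled by (1 - fraction) to keep its expected value unchanged; the test data itself is not corrupted. A standalone sketch of that expectation argument (variable names are illustrative):

```matlab
% Dropout's test-time scaling matches the training-time expectation.
p = 0.5;                          % dropout fraction (illustrative)
a = rand(1, 100000);              % some layer activations
mask = rand(size(a)) > p;         % training: keep each unit with prob. (1-p)
train_mean = mean(a .* mask);     % average activation seen during training
test_mean  = mean(a .* (1 - p));  % test-time scaled activation
% train_mean and test_mean agree up to sampling noise
```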
16 Apr 2015 Gustavo Mafra

Don't you have a bug in your rbmtrain function?

When I select the number of batches to be equal to one, the expressions sum(v1 - v2) and sum(h1 - h2) sum up to a scalar when in fact they should be a vector. I don't know if this differs across MATLAB versions, but a simple fix in R2014a is to add a second argument to the sum function, like this: sum(v1 - v2, 1)

I haven't explored the library enough to say this issue is not present elsewhere.

Comment only
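The pitfall described in the comment above is MATLAB's default reduction rule: sum(X) sums along the first non-singleton dimension, so a single-row matrix (as produced when numbatches is 1) collapses to a scalar. Passing the dimension explicitly, as the commenter suggests, keeps the result's shape stable. A standalone illustration:

```matlab
v = [1 2 3];          % one observation row, as when numbatches == 1
sum(v)                % returns 6: the row collapses to a scalar
sum(v, 1)             % returns [1 2 3]: per-column sums, shape preserved

V = [1 2 3; 4 5 6];   % with several rows, both forms agree
sum(V)                % returns [5 7 9]
sum(V, 1)             % returns [5 7 9]
```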
16 Apr 2015 Java Xun

Hi, I tried to run DBN.m with my own data, but when I run this code: [er, bad] = nntest(nn, test_x, test_y);
I found er is zero and bad is null. The input size of train_x is 320*200 and the output is 320*1. Who could tell me why this happened? Thanks

01 Feb 2015 Aik Hong

Hi, I tried to run test_example_DBN.m and I get the error below:

??? Attempted to access lmisys(5); index out of bounds because numel(lmisys)=4.

Error in ==> lmiunpck at 23
rs=lmisys(4); rv=lmisys(5); % row sizes of LMISET,LMIVAR

Error in ==> nnsetup at 26

Error in ==> dbnunfoldtonn at 10
nn = nnsetup(size);

Is something not right there? I didn't change anything in the code. Or is it something related to my MATLAB version?

Comment only
30 Jan 2015 Po Sheng Wang

When I run the file "test_example_CNN", I get the error below:

assert(~isOctave() || compare_versions(OCTAVE_VERSION, '3.8.0', '>='), ['Octave 3.8.0 or greater is required for CNNs as there is a bug in convolution in previous versions. See ... Your version is ' myOctaveVersion]);

It seems my Octave is too old? So how can I update my Octave? Just download the new Octave and install it, and then everything will be fine? Thanks

Comment only
10 Jan 2015 Chris McCormick

I found the CNN library very informative for helping me learn more of the basics of Convolutional Neural Networks.

It's well written, though lacking in comments and documentation.

I wrote a post explaining the CNN example along with documented / commented versions of most of the CNN functions:

Comment only
05 Jan 2015 Hazem

I can't understand the example; what is this data, and what is the example used for?

24 Dec 2014 qu

Is there someone who would like to tell me how to install it?

19 Oct 2014 Hanan Shteingart

Poor documentation makes this hardly usable.

For example:
function scae = scaesetup(cae, x, opts)
x = x{1};

The code starts straight away without any parameters being explained. What is x? opts? cae? You can look at the example code, but it is hard to reverse engineer it.

13 Aug 2014 Umer

29 Jul 2014 Tarek El-Gaaly

I just started using this code and was puzzled by the following (please excuse my newbie questions):

How come the errors on MNIST in the examples are less than state of the art? I get ~0.07 error with the 2-layer DBN-NN in the example. State-of-the-art, as far as I am aware, is higher than this.

Also, when visualizing the dbn.rbm{2}.W' layer I see pretty much garbage. There is no structure to the weights like in dbn.rbm{1}.W'. What has to be done to enable higher-level structure learning?

20 Jul 2014 Jane Shen

How can I download this valuable toolbox here? Only from GitHub?

18 Jul 2014 BO

nntest.m has an error: it will give all 1s for expected, rather than the column index for each row of y.

function [er, bad] = nntest(nn, x, y)
labels = nnpredict(nn, x);
expected = I;
bad = find(labels ~= expected);
er = length(bad) / size(x, 1);

17 Jul 2014 Bruce Ferguson

It does not run under version R2014a. All tests crash.

Comment only
11 Jul 2014 Nadith

Nadith (view profile)

The best, cleanest code that anyone could get at the moment... :) Thanks a lot. I'm working on extending the network's capabilities for my own work.

23 Jun 2014 Junhong YE

10 Jun 2014 satya narayana

10 Jun 2014 ming

10 Jun 2014 Xiaotian Wang

03 Jun 2014 BUITEMS Quetta

How can I download this toolbox? I really need it.

17 May 2014 Yao

11 May 2014 Rania

How can I download this file, please?

Comment only
01 May 2014 Litao Shen

I have a problem using nnbp: there is nn.e in nnbp, while in nnsetup nn has no e. What should I do?

23 Apr 2014 Kazuhito Sato

Thanks for your code.
This code is very useful and exciting for me.

03 Apr 2014 ted p teng

13 Mar 2014 Yong Ho

Thanks for your code! But when I execute cnnexamples in the CNN folder after modifying cnn.layers' kernel size from "5" to "4", I get an error in cnnbp.m line 37: "Array dimensions must match for binary array op."
Please check this error message. Thanks.

15 Feb 2014 Andrew Diamond

Just tried to run the 2nd DBN example and it failed. First, the assert at line 6 of rbmtrain.m failed. From what I see, the assert should be == 0, not ~= 0 (numbatches should be an integer).

Secondly, the example failed at nnff line 14. From what I can tell, the last size in dbn.sizes should be 10, as that is the y (==> output layer) size.

Comment only
31 Jan 2014 Géraud

Thanks for your code! I would like to know if there is more documentation than the few examples that are given? Particularly for the cnn.layer in the CNN toolbox?

04 Dec 2013 Al

01 Jun 2013 random22

How do I find the probability of a label being selected with this code? I am new to neural networks and am just trying this toolbox out.

Comment only
12 Feb 2013 Johnathan

27 Nov 2012 Jeff

02 Nov 2012 Ahmed

Well-written code that saved me a lot of time.

24 Sep 2012 Sebastien PARIS

04 Aug 2016 1.2

Changed to use GitHub

13 Sep 2016 1.2

