4.8 | 46 ratings | 481 Downloads (last 30 days) | File Size: 4.57 MB | File ID: #42853 | Version: 1.19

Deep Neural Network

by Masayuki Tanaka

29 Jul 2013 (Updated )

Provides deep learning tools for deep belief networks (DBNs).


File Information
Description

Run testDNN to try it!
Each function includes a description; please check it!
The toolbox provides deep learning tools for deep belief networks (DBNs) built from stacked restricted Boltzmann machines (RBMs). It includes the Bernoulli-Bernoulli RBM, the Gaussian-Bernoulli RBM, contrastive divergence learning for unsupervised pre-training, a sparsity constraint, back projection for supervised training, and the dropout technique.
Sample code for the MNIST dataset is included in the mnist folder; please see readme.txt there.
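A minimal sketch of the typical pipeline, based on the calls used in testDNN and in the comments below (layer sizes and option values are illustrative only, and option names may differ between versions; IN and OUT are the training inputs and targets, each [number of data] x [dimension]):

% Sketch only: pre-train a DBN, attach a linear output mapping, fine-tune, infer.
nodes = [784 800 800 10];               % illustrative sizes: input, two hidden layers, output
dbn = randDBN( nodes );                 % random initialization (Bernoulli-Bernoulli by default)
opts.MaxIter = 100;                     % illustrative option values
opts.Verbose = true;
dbn = pretrainDBN( dbn, IN, opts );     % unsupervised layer-wise pre-training
dbn = SetLinearMapping( dbn, IN, OUT ); % linear initialization of the output weights
dbn = trainDBN( dbn, IN, OUT, opts );   % supervised fine-tuning
est = v2h( dbn, IN );                   % forward inference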
Hinton et al, Improving neural networks by preventing co-adaptation of feature detectors, 2012.
Lee et al, Sparse deep belief net model for visual area V2, NIPS 2008.
http://read.pudn.com/downloads103/sourcecode/math/421402/drtoolbox/techniques/train_rbm.m__.htm
The dropout implementation has been modified, and the cross-entropy objective function has been added for neural network training.
It includes the implementation of the following paper. If you use this toolbox, please cite:
Masayuki Tanaka and Masatoshi Okutomi, A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August, 2014.
Related SlideShare slides and a PDF are available:
http://like.silk.to/matlab/dnn.html

Required Products: Symbolic Math Toolbox, Simulink 3D Animation, MATLAB
MATLAB release: MATLAB 7.14 (R2012a)
MATLAB Search Path
/
/DeepNeuralNetwork
/DeepNeuralNetwork/mnist
Comments and Ratings (144)
17 Aug 2016 Masayuki Tanaka

Hi QINGJU LIU,

It depends on the application.
Simply, try both settings!

Thanks,

Comment only
16 Aug 2016 QINGJU LIU

If you set the mode as GBRBM, should you use rbmtype (GBRBM) from the second layer, or should you use BBRBM from the second layer? I think in randDBN.m it should perhaps be:

for i=2:numel(dbn.rbm) - 1
    %dbn.rbm{i} = randRBM( dims(i), dims(i+1), rbmtype );
    dbn.rbm{i} = randRBM( dims(i), dims(i+1), 'BBRBM' );
end

Comment only
05 Aug 2016 Masayuki Tanaka

Hi freecity freeman,

Thank you for your comment.
opts.Layer selects which layer to train; if you want to train all layers, set it to 0.
I have already uploaded a new version with a corrected explanation.

Thanks,

Comment only
04 Aug 2016 freecity freeman

Hi,
In the sample code, opts.Layer is used but not explained. Could you explain this parameter? Thanks

Comment only
29 Jul 2016 Masayuki Tanaka

Hi Guangyi,

It depends on the problem. But, you can try it.
Pre-whitening may help you.

Thanks,

Comment only
28 Jul 2016 Guangyi

I would like to use real value images for your deep neural networks. Will your code work for real values instead of binary values as in MNIST? Thanks!

Comment only
14 Jun 2016 Murugesan Thangavel

I wish to apply this to audio files such as WAV or MP3. How does the RBM accept the input, and which features should I use: PLP or MFCC?

11 Jun 2016 Pedro Aguiar  
10 Jun 2016 tajul miftahushudur  
16 May 2016 Masayuki Tanaka

Hi belete boru,

I think that is a very good point!
Some researchers use the BB-RBM and others use the GB-RBM.
I don't know which is better.

Thanks,

Comment only
14 May 2016 belete boru

Many thanks for your hard work! You used the MNIST dataset, which is greyscale images, but trained it with the BBRBM; is that OK? I am using real-valued data; should I use the GBRBM?

12 May 2016 Masayuki Tanaka

Hi Max,

pretrainRBM performs unsupervised learning, so it does not require the output data.
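For example, following the pretrainRBM example above, the hidden activations can be computed with v2h (a sketch; for the BBRBM, v2h applies the learned weights and the sigmoid activation, so no extra multiplication or activation is needed):

H = v2h( rbm, inputdata );  % [datanum x outputnum] hidden activations in (0,1)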

Thanks,

Comment only
11 May 2016 Max

Hello Masayuki,

First, thanks for this work!

I have a few questions regarding the implementation of the RBM. In your example in the "pretrainRBM.m" file:

%Example:
% datanum = 1024;
% outputnum = 16;
% inputnum = 4;
%
% inputdata = rand(datanum, inputnum);
% outputdata = rand(datanum, outputnum);
%
% rbm = randRBM(inputnum, outputnum);
% rbm = pretrainRBM( rbm, inputdata );

The final output of the RBM is:

rbm =

type: 'BBRBM'
W: [4x16 double]
b: [1x16 double]
c: [-0.0552 -0.0073 -0.0329 0.0276]

However, there is no output data matrix in the rbm object. Does that mean 1) I have to multiply the input data by the weight matrix to get the output of the hidden units, and 2) that I still have to put the output through an activation function?

I'm fairly new to neural networks and deep learning so I apologize if this is somewhat of a basic question. Thanks!

11 May 2016 Dorothy

Hello Masayuki,
Thanks a lot for this work!
Is it possible to use the code for an image in-painting case?
Best Regards,
Cande

11 May 2016 Dorothy  
05 May 2016 Masayuki Tanaka

Hi Sohanjyoti Mohanty,

Of course, you can use the toolbox for other data sets.

Thanks,

Comment only
04 May 2016 Sohanjyoti Mohanty

Can I use other image datasets like MNIST for this code?

Comment only
01 May 2016 Masayuki Tanaka

Hi Masashi Kaneko,

Please search the web for your OS and the file extension.

Thanks,

Comment only
01 May 2016 Masashi Kaneko

Hi, I know it's very basic, but I cannot correctly unzip the mnist files. Please tell me how you do it. Thanks.

22 Apr 2016 Ashutosh Kumar Upadhyay  
08 Apr 2016 Masayuki Tanaka

Hi Shivani Ghatge,

I provided sample code. Please check the mnist directory.

Thanks.

Comment only
07 Apr 2016 Shivani Ghatge

Hello, I am new to DNN and working on a project. Does this toolbox have some documentation? Anything to guide me on how to use this?

30 Mar 2016 Kala

09 Mar 2016 zied

I am a student and my work is on convolutional deep networks for face recognition. I would like to know if anyone could help me find some code or similar material that would make it easier for me, or help me progress in my work.

Comment only
07 Mar 2016 Romain Jacotin  
02 Mar 2016 Jose Chen

Hi, can you please suggest a reasonable set of parameter values for very sparse input - including Layer?

Comment only
25 Feb 2016 Tongwen Li

Hi moslem yousefi, have you solved your problems? I am facing the same problems, too.

Comment only
26 Jan 2016 tariq bdair

Hi Masayuki,

Thank you for this useful tool.
How can I use the trained DBN with an SVM? What is the input to the SVM; is it the W stored in bbdbn?

24 Jan 2016 Wonyong Sung

Very good presentation material. Thank you.

Comment only
21 Jan 2016 lallouani bouchakour

Hi

Thanks for this code.
Can I use this code for speech recognition, and why or why not?
I would like to use MATLAB code with a deep neural network for speech recognition.
How should I modify the code?

Comment only
18 Jan 2016 jeffin

Hi Masayuki,

Thanks for this code. If I need to use it for training on a colour dataset, how should I proceed? Kindly advise how to modify the code.

Comment only
16 Jan 2016 Valerio Giuffrida

Hi Masayuki,
I am digging into your code, trying to correlate theory with implementation. In "A Practical Guide to Training Restricted Boltzmann Machines" by Hinton (2010), he says that, to avoid having to change the learning-rate parameter according to the batch size, one should divide the learning rate by the batch size.

In the file pretrainRBM, I read you divide StepRatio by num, which is the total number of samples. Is this a mistake, or am I interpreting Hinton wrongly?

Thanks a lot

14 Jan 2016 Xianxian Zhang

Hi, Masayuki,

Thanks for the code. I tried to use it for audio classification, with MFCC features as input (21700 x 13), choosing "GBDBN". The RMSE values from pre-training are very small, but those from training are surprisingly high and do not change any more after iter = 8. Could you please suggest what might be going wrong, and how I should modify my data or MATLAB code?

Thanks a lot,
xianxian

Comment only
14 Jan 2016 Xianxian Zhang

Hi, Masayuki,

If I choose GBDBN, does that mean all the layers will use GB? If I want to use GB for the first layer and BB for the rest, how should I modify the code?

Thanks a lot,
xianxian

07 Jan 2016 Yi-Cheng LO

Hi, Masayuki
Thanks for the code.

How do I enter my own data into IN and OUT?

Entering my own data gives this error:

Error using double
Conversion to double from struct is not possible.

Comment only
21 Dec 2015 Ather Iqbal

Hi,
I am unable to run GetDroppedDBN.m; how should the arguments be passed, and what are they?

Comment only
07 Dec 2015 moslem yousefi

Hi Masayuki
Thanks for the code.
I have modified the code for a prediction task. However, the same problem as Sam reported is happening for me: the training outcome is not as good as one might expect (compared with a simple feed-forward neural network), and on my test data the output is nearly constant. Although the code is written for image processing, I believe it should work for regression as well, right? I just cannot figure out where my mistake is. Could it be the size of the dataset? My dataset is a 1000x32 input and a 1000x1 output. Thank you.

Comment only
27 Nov 2015 Lachie Vogt

Hi Sam,

Could it be that you are overfitting during learning? Try pulling back the number of layers and setting a smaller number of max iterations. Also, if possible, include more training cases. You can also increase the dropout ratio from 0.5.

Comment only
24 Nov 2015 Masayuki Tanaka

Hi Mridusmita Sharma,

The output data format is up to your application.

Thanks.

Comment only
24 Nov 2015 Masayuki Tanaka

Hi SAM,

I think the demo code just shows the usage; the inputs and outputs are random. Please try it with meaningful data like MNIST.

Thanks.

Comment only
18 Nov 2015 Mridusmita Sharma

Hi,
I just wanted to know what the inputs and outputs should look like. Can they be feature vectors with different classes? Can you give some example, please?

Comment only
11 Nov 2015 SAM

Hi, I tried your RBM and DNN code with linear regression as the top layer, but there are two problems:
1. After fine-tuning, the output does not predict anything, just a horizontal line at 0 or 0.5.
2. When it does predict the pattern at the output layer, the shape exactly matches the target waveform, but the range of output values is shrunk: if the target range is 0 to 1, it predicts the pattern well but squeezes it into 0.5555556 to 0.5555557.

Comment only
27 Aug 2015 Masayuki Tanaka

Hi Lachie Vogt,

Please use v2h for inference like

out = v2h( dbn, IN );

Thanks.

Comment only
26 Aug 2015 Lachie Vogt

Hi Masayuki,

Sorry to come back to you again.
I've added the code to initialize the dbn, but trainDBN.m comes back with the same error message. However, I was able to use my data with testDNN, so I have a trained model. Given that it is trained on known inputs and outputs, how do I ask it to estimate the outputs for new inputs? Is it v2h, v2hall, or something else?

Comment only
07 Aug 2015 Masayuki Tanaka

Hi liang ma,

My code does not support that sparse constraint.

Thanks.

Comment only
06 Aug 2015 liang ma

Thanks for sharing the code, Masayuki. My question: given the weights between layers, which could also be regarded as features, how do I get a sparse representation of the input? Thanks in advance for your help.

04 Aug 2015 Masayuki Tanaka

Hi Lachie Vogt,

One example is
nodes = [32 16 8 4];
dnn = randDBN( nodes );

Please check testDNN.m

Thanks.

Comment only
03 Aug 2015 Lachie Vogt

Thanks for the code and your time, Masayuki Tanaka. Do you know how to initialize the dbn? I've looked through a MATLAB textbook and tried googling without any success. I have a good understanding of machine learning but far less of the coding.

24 Jul 2015 Masayuki Tanaka

Hi Lachie Vogt,

Have you initialized the dbn?
Please check it.

Thanks.

Comment only
23 Jul 2015 Lachie Vogt

hello,

I'm just trying to train the DBN with sample data:
[dbn rmse] = trainDBN( dbn, IN, OUT, opts )

The error message "Undefined function or variable 'dbn'." is then displayed. I have installed the toolbox; do you know what is going wrong with the command?

Comment only
11 Jul 2015 Yijun Zhao

BTW, I am using GBDBN ....

Comment only
10 Jul 2015 Yijun Zhao

Hi, I am getting predictions (bout) that are either all 0's or all 1's. Has anyone had the same problem? Any suggestions? Thanks in advance for the help.

Yijun

Comment only
10 Jul 2015 Masayuki Tanaka

Hi Yijun Zhao,

The toolbox does not include code for cross-validation. If you need it, please implement it yourself.

Thank you

Comment only
09 Jul 2015 Yijun Zhao

Just want to confirm: given an input dataset, we need to write the validation and test wrapper around the code, i.e., the code does not provide a training/testing split. Is that correct?

Comment only
09 Jul 2015 Ramya

It seems to work now when I use the zipped files! I think when I was using the toolbox, it might have been picking up a different h2v file.

Comment only
09 Jul 2015 Ramya

Thank you Masayuki for your work!
I have an error when I try to pretrain the RBM or run testDNN, and I'm not sure how to fix it to start using your code. Please let me know!
Thank you so much!

Error in h2v (line 9)
if isempty(av) % h2v([],main_figure,varnames)

Output argument "out1" (and maybe others) not assigned during call to "h2v".

Error in pretrainRBM (line 195)
vis1 = h2v( rbm, bhid0 ); % Compute visible nodes

Comment only
09 Jul 2015 Masayuki Tanaka

Hi Yijun Zhao,

The main focus of this toolbox is classification.

Thank you.

Comment only
08 Jul 2015 Yijun Zhao

Hi Masayuki,

Could you please clarify if this code can do both classification and regression?

Thanks,

Yijun

Comment only
06 Jul 2015 moslem yousefi

Thank you for this file. Great help.

Comment only
26 Jun 2015 Masayuki Tanaka

Hi Leqi Zhu,

This toolbox performs supervised training; the unsupervised training is used in the pre-training stage.

Thanks.

Comment only
26 Jun 2015 Masayuki Tanaka

Hi SAM,

I think there are several approaches for applying it to time-series data, but it strongly depends on the application.

Thanks.

Comment only
26 Jun 2015 Masayuki Tanaka

Hi KwangHun Jeong,

This toolbox is for supervised training.
Thanks.

Comment only
25 Jun 2015 Leqi Zhu

I have another question: is this toolbox for unsupervised learning? If so, why does it need us to provide output data for training?

24 Jun 2015 Leqi Zhu

Hi, I used your toolbox for classification. I have 10 kinds of materials with 4 features, and each material contains 10 samples. When I run the demo calling v2h, the estimate is always within (0,1). What does that mean?

19 May 2015 SAM

Thank you for your code, but I have one question: can I use your toolbox for a time-series prediction problem?

I have to predict weather temperature from 5 input parameters.

Can you please help me with this?

18 May 2015 Amira bouallégue

thank you for your code

Comment only
15 May 2015 KwangHun Jeong

Ah, what is the label? I don't know the label's meaning. Is it just a name?

Comment only
15 May 2015 KwangHun Jeong

Thank you for your code.

I can't find the content of the target, so I wonder whether this toolbox is unsupervised.

Is bbdbn the trained model? Then I wonder how testing is possible.

How can I see the testing result; that is, what is the test input data classified as?

Comment only
15 May 2015 KwangHun Jeong

I sent a message to Masayuki Tanaka.
Please check the message.

Comment only
08 May 2015 Masayuki Tanaka

Hi Bill,

In my implementation, RBM.sig is treated as a given parameter; as you noted, it is not updated.
However, the update rule does differ between the GBRBM and the BBRBM.

Thanks.

Comment only
07 May 2015 bill gates

hi Masayuki,

Thanks for sharing your code.

RBM.sig is set to ones and never gets updated anywhere, so the GBRBM and BBRBM are effectively the same in your implementation. Could you justify this? Thanks.

kfinger

Comment only
04 May 2015 Lepsoy

Hello Masayuki Tanaka,
first of all, thank you for the work you have done here.

I wish to use the toolbox for regression. More specifically, my input vectors are of dimension 41, and the number of output nodes I wish to infer is 12. Since the input is not in (0,1), I use the GBRBM as the input layer and a [41 300 300 300 12] structure.

My question in essence: Do I need to change anything in the code to be able to do regression instead of classification?

Thanks in advance!

Comment only
28 Apr 2015 Masayuki Tanaka

Hi arthi,

The toolbox had some bugs for the GBRBM; I have fixed them.
Please use the latest version, DeepNeuralNetwork20150428.

Thanks.

Comment only
28 Apr 2015 Masayuki Tanaka

Hi Bipul Das,

I could not understand the issue from the size 64x4 alone, where 64 represents the number of observations and 4 the features.

The size of the input data is usually [number of data] x [dim of feature]. Please check testDNN.
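For example (a sketch with illustrative shapes only):

IN = rand(64, 4);   % 64 observations, each a 4-dimensional feature vector
OUT = rand(64, 2);  % the corresponding 64 rows of targets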

Thanks.

Comment only
25 Apr 2015 arthi

Hello Masayuki Tanaka,
Thank you for the files. I have used the GBRBM to test my own (non-image) data.

But my output has the same value in all rows, as mentioned by Nirmal.
Can you please provide some information?

30 Mar 2015 Jiaju Yang

Thanks to Morten Kolbæk, now I can run testDNN, but the resulting RMSE turns out to be 0.28, which is really too high.
Also, in pretrainRBM I cannot find the update for sigma, which stays 1 all the time. It really puzzles me, because in other implementations it changes with the iterations.
Can anyone help me?
Thanks a lot.

Comment only
29 Mar 2015 Izzy Abdul

Please, can anybody help me with how to add the toolbox?

26 Mar 2015 Morten Kolbæk

I experienced the same error as Jiaju Yang on line 281 of trainDBN in revision 20131024.
However, I found that the error is not present in revision 20131204. I have "fixed" the problem by using the line from rev. 20131204 in rev. 20131024, which seems to work. The line is
"deltaW = bsxfun( @rdivide, deltaW, trainDBN.rbm{n}.sig' );"

Comment only
26 Mar 2015 Jiaju Yang

Hello Masayuki,
thank you for your toolbox.
However, when I run testDNN, it shows:
Error in trainDBN (line 281)
deltaW = bsxfun( @rdivide, detaW, rbm{h}.sig' );
How can I solve it?

23 Feb 2015 Bipul Das

Hello Masayuki,

First of all, thank you for your toolbox on DNNs. It seems quite useful, but I am facing one problem while running the code.

1) My input data has size 64x4, where 64 represents the number of observations and 4 represents the features.

But your code returns something that is neither the input nor the output.

So can you help me understand where I am making a mistake?

Comment only
09 Dec 2014 Marco

Hi Masayuki,
I find your toolbox very interesting but I have two separate issues.

First: if I run testDNN I get:
??? Error using ==> randperm
Too many input arguments.

Error in ==> pretrainRBM at 172
p = randperm(dimV, DropOutNum);

Error in ==> pretrainDBN at 88
dbn.rbm{i} = pretrainRBM(dbn.rbm{i}, X, opts);

Error in ==> testDNN at 21
dnn = pretrainDBN(dnn, IN, opts);

Secondly, I've tried to write a script of my own following your helpful examples, but it seems not to be working correctly. It always chooses class one no matter what. Actually

pretrainDBN(dbn, train_data, opts);

returns a first layer that does not learn anything. I mean, no matter the size, the answer always looks like this (now 3 iterations just for the sake of space):

1 : 57.8774 0.7509
2 : 57.8774 0.7656
3 : 57.8774 0.7656
1 : 0.4938 0.4939
2 : 0.4846 0.5583
3 : 0.4719 0.6207

This seems strange when the number of neurons of the autoencoder is set equal to the number of inputs.

Could you please help me with this?

p.s. here is the script except for the data loading part

% Sets variables
datanum = size(train_data,1);
outputnum = size(train_target,2);
inputnum = size(train_data,2);
hiddennum = 32;

opts.Verbose = true;
opts.MaxIter = 10;

dbn = randDBN([192 192 11], type);
dbn2 = pretrainDBN(dbn, train_data, opts);
dbn3 = SetLinearMapping(dbn2, train_data, train_target);
dbn4 = trainDBN(dbn3, train_data, train_target);

train_estimate = v2h(dbn4, train_data);
[~,CM,~,~] = confusion(train_target', train_estimate')
test_estimate = v2h(dbn4, test_data);
[~,CM,~,~] = confusion(test_target', test_estimate')

Comment only
26 Nov 2014 Salem

Hi Masayuki,
Thanks for replying. I have my own dataset, so I applied this code to it, and the result was excellent, much better than convolutional neural networks. I want to go through the implementation again because the result is almost incredible, and I want to make sure I have implemented it in the correct way. Thanks again for sharing the code.

Comment only
26 Nov 2014 Masayuki Tanaka

Hi Salem,

For object detection, I think a convolutional network is better. But I hope this code also works for object detection.

Thanks.

Comment only
24 Nov 2014 Salem

24 Nov 2014 Salem

Hi Masayuki,
Thanks for sharing this work.
Is it possible to use this code for object detection?
Regards,

Comment only
17 Nov 2014 Naeem Ul Islam

Thank you so much, I fixed my problem.

Comment only
17 Nov 2014 Masayuki Tanaka

Hi Nirmal,

I won’t consult on each specific problem.
If you have any bug, please report that. I will fix it if I have time…

Thank you!

Comment only
14 Nov 2014 Nirmal

Thanks, Masayuki, for your response. I don't know where exactly the bug is. I can show you my input and output data for training and testing.

Training

The input and output I have given for training are:

IN =

0.0508 0.1028 0.2597 0.3425 0.4421 0.5450 0.6116 0.7312 0.9024 1.0000
0.0573 0.1087 0.2060 0.3443 0.4321 0.5143 0.6097 0.7396 0.8907 1.0000
0.0434 0.1066 0.2399 0.3202 0.4109 0.4947 0.6062 0.6984 0.8613 1.0000
0.0588 0.1160 0.2182 0.3529 0.4726 0.5468 0.6609 0.8125 0.9331 1.0000
0.0497 0.0869 0.2175 0.3110 0.3862 0.4690 0.5324 0.6646 0.8984 1.0000
0.0553 0.1104 0.1955 0.3246 0.4395 0.5263 0.6127 0.7208 0.8704 1.0000
0.0530 0.1022 0.2058 0.3652 0.4738 0.5564 0.6354 0.7720 0.9023 1.0000
0.0533 0.1118 0.2367 0.3263 0.4184 0.5061 0.6123 0.7301 0.8726 1.0000
0.0538 0.1179 0.2293 0.3267 0.4333 0.5230 0.6226 0.7258 0.8801 1.0000
0.0473 0.1111 0.2347 0.3560 0.4627 0.5681 0.6481 0.7799 0.9202 1.0000
0.0481 0.1091 0.1906 0.2956 0.3788 0.4684 0.5427 0.7065 0.8924 1.0000
0.0536 0.1164 0.2625 0.3452 0.4307 0.5382 0.6301 0.7400 0.8729 1.0000
0.0547 0.1109 0.2527 0.3440 0.4316 0.5300 0.6416 0.7294 0.8658 1.0000
0.0459 0.1080 0.2353 0.3129 0.4413 0.5336 0.6143 0.7321 0.8933 1.0000
0.0546 0.1018 0.2035 0.3188 0.4412 0.5370 0.6084 0.7173 0.8851 1.0000
0.0598 0.1121 0.2279 0.3452 0.4224 0.5233 0.6205 0.7327 0.8971 1.0000
0.0465 0.1062 0.2274 0.3396 0.4300 0.5292 0.6130 0.7032 0.8624 1.0000
0.0473 0.1031 0.2673 0.3637 0.4229 0.5268 0.6370 0.7149 0.8271 1.0000
0.0509 0.1024 0.1906 0.3217 0.4049 0.4952 0.5999 0.7157 0.8828 1.0000
0.0482 0.0994 0.1910 0.3368 0.4615 0.5317 0.6292 0.7473 0.8908 1.0000

OUT =

0.0619 0.1074 0.1986 0.2981 0.4495 0.5487 0.7022 0.8203 0.9526 1.0000
0.0495 0.1119 0.1972 0.3024 0.4619 0.5569 0.7223 0.8382 0.9456 1.0000
0.0474 0.1066 0.2208 0.3254 0.4762 0.5625 0.6883 0.7866 0.9373 1.0000
0.0609 0.1016 0.1936 0.2756 0.4321 0.5390 0.6710 0.7997 0.9176 1.0000
0.0646 0.1039 0.2133 0.3243 0.4175 0.5608 0.6469 0.7916 0.9045 1.0000
0.0585 0.1081 0.1846 0.2689 0.4416 0.5450 0.7111 0.8411 0.9702 1.0000
0.0494 0.0918 0.1798 0.3124 0.4554 0.5728 0.7282 0.8470 0.9533 1.0000
0.0597 0.1232 0.2334 0.3116 0.4309 0.5399 0.6387 0.7748 0.9233 1.0000
0.0554 0.1002 0.2053 0.3289 0.4744 0.5669 0.6879 0.8111 0.9494 1.0000
0.0473 0.1111 0.2347 0.3560 0.4627 0.5681 0.6481 0.7799 0.9202 1.0000
0.0481 0.1091 0.1906 0.2956 0.3788 0.4684 0.5427 0.7065 0.8924 1.0000
0.0536 0.1164 0.2625 0.3452 0.4307 0.5382 0.6301 0.7400 0.8729 1.0000
0.0547 0.1109 0.2527 0.3440 0.4316 0.5300 0.6416 0.7294 0.8658 1.0000
0.0459 0.1080 0.2353 0.3129 0.4413 0.5336 0.6143 0.7321 0.8933 1.0000
0.0546 0.1018 0.2035 0.3188 0.4412 0.5370 0.6084 0.7173 0.8851 1.0000
0.0598 0.1121 0.2279 0.3452 0.4224 0.5233 0.6205 0.7327 0.8971 1.0000
0.0465 0.1062 0.2274 0.3396 0.4300 0.5292 0.6130 0.7032 0.8624 1.0000
0.0473 0.1031 0.2673 0.3637 0.4229 0.5268 0.6370 0.7149 0.8271 1.0000
0.0509 0.1024 0.1906 0.3217 0.4049 0.4952 0.5999 0.7157 0.8828 1.0000
0.0482 0.0994 0.1910 0.3368 0.4615 0.5317 0.6292 0.7473 0.8908 1.0000

Testing

Input, expected output (OUT), and output from the system (out):

IN =

0.0571 0.1051 0.2289 0.3170 0.3904 0.4603 0.5426 0.6916 0.8513 1.0000
0.0600 0.1202 0.1827 0.3124 0.4269 0.4994 0.6124 0.7648 0.8969 1.0000
0.0500 0.1038 0.2320 0.3211 0.4101 0.5151 0.6038 0.7421 0.8818 1.0000
0.0549 0.0942 0.1917 0.3012 0.3757 0.4808 0.6032 0.6957 0.8200 1.0000
0.0519 0.0993 0.1598 0.2916 0.4260 0.4866 0.5815 0.6982 0.8647 1.0000
0.0551 0.1124 0.1759 0.2736 0.3927 0.4553 0.5314 0.7169 0.8839 1.0000
0.0602 0.1062 0.2197 0.3179 0.4412 0.5577 0.6200 0.7419 0.8830 1.0000
0.0588 0.1126 0.1782 0.2953 0.3918 0.4797 0.5877 0.7279 0.8814 1.0000

OUT =

0.0537 0.1227 0.2162 0.3215 0.4550 0.5871 0.7132 0.8521 0.9558 1.0000
0.0633 0.1196 0.1881 0.2807 0.4691 0.5499 0.6738 0.8023 0.9324 1.0000
0.0613 0.1148 0.1805 0.2892 0.4700 0.5843 0.7635 0.8470 0.9414 1.0000
0.0653 0.1402 0.2052 0.2768 0.4669 0.5362 0.7040 0.8622 0.9405 1.0000
0.0648 0.1156 0.1807 0.2804 0.4378 0.5466 0.7169 0.8337 0.9482 1.0000
0.0593 0.1175 0.1746 0.2710 0.4707 0.5343 0.7108 0.8234 0.9452 1.0000
0.0558 0.1104 0.2144 0.3186 0.4410 0.5616 0.6886 0.7815 0.9127 1.0000
0.0594 0.1193 0.1956 0.3045 0.4722 0.5430 0.7349 0.8158 0.9345 1.0000

out =

0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968

nodes info : nodes = [10,15,10];

I would appreciate it if you could help me with this.

Comment only
14 Nov 2014 Masayuki Tanaka

Hi Nirmal,

If you have any bug, please report that. I will fix it if I have time…

Thank you!

Comment only
11 Nov 2014 Nirmal

Hi Masayuki ,
Thanks for sharing this work; it is really nice indeed.

I am trying to train with an M*N input matrix and an M*N output matrix, both containing floating-point numbers, positive as well as negative.

When I test the system on some M'*N points, all the rows have the same value.

Example of output

0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002

Here every row is the same even though my inputs had different values. Can you please help me in this regard?

01 Oct 2014 Masayuki Tanaka

Hi Omrah,

That is a pre-processing issue, and I think it is beyond my scope. But one simple approach is to apply padding.
Please try it.
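For example, a minimal zero-padding sketch (illustrative only; L is whatever maximum length you choose, e.g. 1200, and v is one variable-length feature vector):

p = zeros(1, L);    % fixed-length row vector
p(1:numel(v)) = v;  % copy the variable-length vector into the front; the rest stays zero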

Thank you.

Comment only
30 Sep 2014 omrah

hi,
I am trying to construct the matrix to test with the DNN, but I have a problem:
I have vectors that vary in size from 200 to 1200 columns. Each vector represents the features of a handwritten word (5000 words). What should I do to normalize the vectors for use with the RBM and DBN?
For MNIST this poses no problem because the images all have the same size, 28*28, which gives 784 input columns.
I hope I can find the solution.
Thank you for your cooperation.

30 Sep 2014 Masayuki Tanaka

Hi Omrah,

I think that you can use my code with your data. I provide a sample for the MNIST handwritten digits.
Please check it!

Thank you.

Comment only
29 Sep 2014 omrah

Hello Masayuki,
I am trying to use your code in my research, but I get an error.
This is my problem: after extracting features of online handwritten words from a database, I obtained real-valued feature vectors for each word, but they vary in size. I also prepared label vectors representing the label of each word.
Can I use your proposed DBN-DNN for my data?
thanks

16 Sep 2014 joewale

Masayuki Tanaka, thank you!

16 Sep 2014 Masayuki Tanaka

Hi Joewale,

You can download the author's preprint from my web page:

http://www.ok.ctrl.titech.ac.jp/~mtanaka/publication2014.html

Thanks!

Comment only
13 Sep 2014 joewale

Hi Masayuki Tanaka, I can't find your paper "A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August, 2014." on Google Scholar; could you send it to my email?
In addition, I wonder whether this algorithm is also suitable for speech processing, such as speech feature extraction or audio classification. Thank you!

10 Sep 2014 Masayuki Tanaka

Hi Giacomo,

Thank you for your comment and bug report!
You are correct; that was a bug. I have already fixed it and updated the code.
Please use the latest version.
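With the fixed version, selecting the cross-entropy objective should look something like this (a sketch; the capitalized field name is an assumption based on Giacomo's report below):

opts.Object = 'CrossEntropy';         % assumed field name; the default is the squared objective
dbn = trainDBN( dbn, IN, OUT, opts );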

Thank you!

Comment only
05 Sep 2014 Giacomo

Hi Masayuki Tanaka,
first of all, thank you for your work :). I have just found a bug in the code. Please let me know (and ignore this) if I am not correct.

File: trainDBN.m
Rows: 128,129
Descr: the opts MATLAB struct has the field 'object', with the letter 'o' not capitalized. The code actually checks whether 'Object' - capitalized - is an existing field and, if true, compares the content of opts.object - not capitalized. This always leads to the default value OBJECTSQUARE even if you set the field 'object' equal to 'CrossEntropy'.

thank you again,
Giacomo

04 Sep 2014 Masayuki Tanaka

Hi Seoul,

Thank you for your comment.

I think it depends on the training data.

If you find any observations related to that phenomenon, please let me know.

Thanks,

Comment only
29 Aug 2014 seoul

I have a question.
The RBM training always looks good, but sometimes back-propagation does not work. Why is that?

For example:
RBM: 500 epochs; backpropagation: 2000 epochs
node structure: [784 1568 1568 10]
iter: 1 -> train-data MSE = 0.8
iter: 2 -> MSE = 0.8
...
iter: 1000 -> MSE = 0.8

29 Aug 2014 Liang ZOU  
25 Aug 2014 Masayuki Tanaka

Hi Seoul,

That is actually a great question.
The BBRBM is basically developed assuming binary inputs, but it can also take real values between 0 and 1. That is one of the key points of my ICPR 2014 paper. If you are interested, please check the paper:

Masayuki Tanaka and Masatoshi Okutomi, A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August, 2014.

Thanks.

Comment only
24 Aug 2014 seoul

Why does the BBRBM also do well when the visible neurons are given real values between 0 and 1?

24 Aug 2014 seoul

01 Aug 2014 Junxing  
18 Jul 2014 Masayuki Tanaka

Hi Andre Flipe,

Honestly, I could not work out what kind of network structure you want. But if you set [10, 2] when creating a DBN, it means 10 input nodes and 2 output nodes with no hidden nodes.
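For example, a sketch for 2-dimensional inputs and 2-dimensional outputs with one 10-node hidden layer (the first and last entries of nodes are the input and output dimensions, as in testDNN):

nodes = [2 10 2];        % input dim, hidden nodes, output dim
dbn = randDBN( nodes );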

Thanks.

Comment only
18 Jul 2014 Masayuki Tanaka

Hi Ari,

I have run testDNN, but I could not reproduce the error you mentioned.

Thanks.

Comment only
18 Jul 2014 Masayuki Tanaka

Hi Alena,

readme.txt is in the mnist folder. Please check it!

Thanks.

Comment only
09 Jul 2014 Alena

Could you please send me a detailed readme.txt? I cannot download the readme. Thanks very much!

23 Jun 2014 Andre Filipe

Hi,

One thing I am not understanding: let's say I have input=[rand(100,2)*0.1+0.3;rand(100,2)*0.1+0.6] and output=[zeros(100,1),ones(100,1);ones(100,1),zeros(100,1)], and I want to create a DBN with two layers (10 and 2 nodes, in that order). Shouldn't nodes then be equal to [10 2]? Currently it gives an inner matrix dimension error at v2h. Please help.

Comment only
16 Jun 2014 Ari

12 Jun 2014 Ari

Hi,
Thanks for sharing this code. I downloaded the latest version and tried to run testDNN as recommended. I got these errors:
??? Error using ==> randperm
Too many input arguments.

Error in ==> pretrainRBM at 172
p = randperm(dimV, DropOutNum);

Error in ==> pretrainDBN at 88
dbn.rbm{i} = pretrainRBM(dbn.rbm{i}, X, opts);

Error in ==> testDNN at 21
dnn = pretrainDBN(dnn, IN, opts);

Thanks,
Ari

Comment only
14 May 2014 hiba

Hello, can you send me the technical report for this program?

Comment only
03 Apr 2014 ted p teng

03 Apr 2014 siddhartha

When I run the code by Masayuki Tanaka in MATLAB, I train an RBM with real-valued input samples and binary hidden units.

Now I want to feed in a new input sample to find its classification.

Which function in the toolbox should I use; is it calRMSE?

Also, the values it gives are decimals, so how will I know which class my input sample has been classified into?

Example code:

rbm=randRBM( 3, 3, 'GBRBM' )

V=[0.5 -3 1;-0.5 2 0;-0.25 -0.44 1];

rbm=pretrainRBM(rbm, V)

Once trained, should I use
v2hall(rbm, [-0.5 -0.5 0])
on the new input vector?

Comment only
01 Apr 2014 MA

Hi Tanaka, I use this toolbox to train a GBRBM, but in your code h2v.m there is no Gaussian sampling:
h = bsxfun(@times, H * dnn.W', dnn.sig);
V = bsxfun(@plus, h, dnn.c );
I think there should be Gaussian sampling: normrnd(h+dnn.c, dnn.sig)

Comment only
01 Apr 2014 siddhartha

Hi, I just have one question. I am using a GBRBM-type RBM.

So I give it a training set, train it, and it gives an RBM with W, b, c, sig.

Now when I give it a new input, the output should be a binary vector whose length equals the number of hidden neurons, since it indicates which neurons correspond most to the input.

So how do I feed in the new input, and is my approach correct?

Comment only
19 Mar 2014 ling

Thanks for sharing this easy-to-use package; it works very well.
I have a question: besides the opts.MaxIter setting, I have not found any other stopping criterion for training. How should I decide when to stop training?

Comment only
19 Mar 2014 ling

Sorry, I am wrong. It should be nrbm-1

07 Mar 2014 TabZim

Thanks a lot for enhancing our understanding with this well-commented code. I have a query regarding the sparsity constraint imposed on the RBM, i.e., in the pretrainRBM function. When updating the hidden biases according to the sparsity constraint, why have you multiplied the gradients by 2?

dsW = dsW + SparseLambda * 2.0 * bsxfun(@times, (SparseQ-mH)', svdH)';

dsB = dsB + SparseLambda * 2.0 * (SparseQ-mH) .* sdH;

This does not match any update equation given by Lee et al. Could you please elaborate on this? Many thanks!

Comment only
04 Mar 2014 Sanjanaashree

Hello Sir, I am working on machine transliteration. Since I am a newbie to DBNs, I wish to know whether I will be able to use this code on a matrix created using a one-hot representation of the data.

Comment only
27 Feb 2014 Masayuki Tanaka

Hi Yong Ho,

Thank you for your comment.
The linear mapping is just an option; you don't need to use it. But the training requires initial parameters, and I think the linear mapping is one candidate for setting them.
If you know a better initial parameter setting, please let me know.

Thanks.

Comment only
26 Feb 2014 Yong Ho

Hello,

Your code is very helpful for me.

But while studying your code, I wondered: why do you use a linear mapping to calculate the weights between TrainLabels and the last hidden nodes?

Is there any advantage using linear mapping?

25 Jan 2014 Masayuki Tanaka

Hi Adel,

Thank you for your comment and good rating!
However, my code does not include the sparse-RBM feature.

Thanks!

Comment only
21 Jan 2014 Adel

Dear Prof.

I used the library and found it very useful and easy to use. But when I read the paper:

"Lee et al, Sparse deep belief net model for visual area V2, NIPS 2008. "

I found that it uses a sparse RBM, and I want to know how I can apply and use a sparse RBM in my application, as in the mentioned paper.

I appreciate any help or advice about this.

thanks.

13 Jan 2014 Masayuki Tanaka

Hi Usman,

I think you can use the Gaussian RBM instead of the Bernoulli RBM.
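For example (a sketch; the type string follows the GBDBN usage mentioned elsewhere in these comments):

dbn = randDBN( nodes, 'GBDBN' );  % Gaussian-Bernoulli DBN instead of the Bernoulli-Bernoulli default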

Thank you!

Comment only
10 Jan 2014 ashkan abbasi

Thanks for your generosity!

06 Jan 2014 Usman

Hi Xiaowang,

You just need to run output = v2h(bbdbn, TestImages) to get the output labels.
Match these with TestLabels to verify your output.
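For example, a sketch of that comparison assuming one-hot labels (variable names are illustrative):

output = v2h( bbdbn, TestImages );
[~, pred]  = max(output, [], 2);      % predicted class index per row
[~, truth] = max(TestLabels, [], 2);  % true class index per row
errorRate  = mean(pred ~= truth);     % fraction misclassified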

Comment only
27 Dec 2013 Usman

Hi,
I am using this toolbox with speech-recognition features as the input/visible units. However, my features are both negative and positive and extend beyond [-1, 1]. Can you please advise whether I can use negative values as visible units, or whether I have to normalize the features to between 0 and 1? Also, how do I handle zero-mean, unit-variance standardization, since standardization makes the data greater than 1, while normalizing makes it lose the zero-mean, unit-variance distribution?

Comment only
27 Dec 2013 xiaowang

I have read your MNIST code and have some questions. You use only the train images and train labels to train the DNN, but I did not see the test images used to test and calculate the error rate against the test labels. I do not know DNNs well; maybe I am wrong...

Comment only
08 Dec 2013 Xin

It is a wonderful toolbox.

But could you please upload a tutorial-like file?

That would be more helpful for us.

Many thanks!!

05 Dec 2013 Masayuki Tanaka

Hi Arjun,

Thank you for your bug report!
I fixed that bug and updated. Please use DeepNeuralNetwork20131204.zip

Thank you!

Comment only
03 Dec 2013 Arjun

There seems to be a bug on line 205 in trainDBN.m when using GBDBN.

Comment only
18 Nov 2013 Ming-Ju

It works! Awesome implementation!

10 Nov 2013 eve

Thanks a lot Masayuki!!!! :-)

Comment only
08 Nov 2013 Masayuki Tanaka

Hi Sigurd,

Thank you for your comment. I think that the random inputs and outputs in testDNN produce such training results. If you train with the MNIST dataset, for example, I believe that you will get a reasonable model.

You can get the MNIST dataset from:
http://yann.lecun.com/exdb/mnist/

Thanks!

Comment only
08 Nov 2013 Sigurd

Hi,

Thanks for providing the code.

Running testDNN, the trained net doesn't actually appear to model the data: 1) v2h(dnn,IN) yields an output layer that is (nearly) constant for all inputs; 2) the RMSE is equal to (and sometimes greater than!) the standard deviation of the data. 3)

...just a little puzzled as to what is going on...

Cheers,
Sigurd

Comment only
06 Nov 2013 altan

05 Nov 2013 Masayuki Tanaka

Hi eve,

Thank you very much for your comment. That was a bug, and I have already fixed it.
Please use DeepNeuralNetwork20131105.

Thank you again!

Comment only
04 Nov 2013 eve

Hi,

I tried running testDNN.m, but I got an error in v2h.m at line 39, as the sizes of V and dnn.W don't match. Is that a new bug?
Thanks

Comment only
23 Aug 2013 Masayuki Tanaka

Chong, thanks a lot!
I have fixed it and updated the code.

Comment only
22 Aug 2013 chong

if( Object = OBJECTSQUARE )?????
der = derSgm .* der;
end

Comment only
30 Jul 2013 Hwa-Chiang

Nice and Nice!

Updates
22 Aug 2013 1.2

Modified the dropout implementation.
Added the cross-entropy objective function for neural network training.

23 Aug 2013 1.5

Debugged. Thank you, chong!

23 Aug 2013 1.6

Some bugs are fixed.

23 Sep 2013 1.7

Bug fixed in GetDroppedDBN.

24 Sep 2013 1.8

Modified testDNN.m

05 Nov 2013 1.9

Bug fix.

08 Nov 2013 1.10

The bug is fixed.

15 Nov 2013 1.11

Sample codes of the MNIST dataset are included.

04 Dec 2013 1.12

Fixed the bug in trainDBN.m for GBDBN.

13 Dec 2013 1.13

CalcErrorRate is debugged.

13 Jan 2014 1.14

Added sample of evaluation test data of MNIST.

15 Aug 2014 1.15

I added the implementation of the ICPR 2014 algorithm.

10 Sep 2014 1.16

The bug related to the objective function is fixed.

29 Jan 2015 1.17

Added ICPR2014 implementation.

28 Apr 2015 1.18

Bug fixed for the GBDBN.

18 Aug 2015 1.18

Added link information.

05 Aug 2016 1.19

Modified explanation of option.
