Code covered by the BSD License  

4.8 | 23 ratings | 390 Downloads (last 30 days) | File Size: 4.57 MB | File ID: #42853

Deep Neural Network

by Masayuki Tanaka

 

29 Jul 2013 (Updated)

It provides deep learning tools for deep belief networks (DBNs).


File Information
Description

Run testDNN to try it!
Each function includes a description; please check it!
The toolbox provides deep learning tools for deep belief networks (DBNs) built from stacked restricted Boltzmann machines (RBMs). It includes the Bernoulli-Bernoulli RBM, the Gaussian-Bernoulli RBM, contrastive divergence learning for unsupervised pre-training, the sparsity constraint, back projection for supervised training, and the dropout technique.
Sample code for the MNIST dataset is included in the mnist folder; please see readme.txt there.
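A minimal end-to-end sketch of the workflow, assembled from the function and option names that appear on this page (randDBN, pretrainDBN, SetLinearMapping, trainDBN, v2h); the exact signatures and defaults are assumptions, not documentation:

% Sizes are illustrative (MNIST-like data); TrainImages, TrainLabels and
% TestImages are assumed variables.
nodes = [784 800 800 10];                    % input, two hidden layers, output
dnn = randDBN( nodes );                      % random initialization
opts.MaxIter = 20;
opts.Verbose = true;
dnn = pretrainDBN( dnn, TrainImages, opts );             % unsupervised layer-wise pre-training
dnn = SetLinearMapping( dnn, TrainImages, TrainLabels ); % initialize the output mapping
dnn = trainDBN( dnn, TrainImages, TrainLabels );         % supervised fine-tuning
out = v2h( dnn, TestImages );                % forward pass: visible to output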

Hinton et al., Improving neural networks by preventing co-adaptation of feature detectors, 2012.
Lee et al., Sparse deep belief net model for visual area V2, NIPS 2008.
http://read.pudn.com/downloads103/sourcecode/math/421402/drtoolbox/techniques/train_rbm.m__.htm

Modified the implementation of dropout.
Added the cross-entropy objective function for neural network training.

The toolbox includes the implementation of the following paper. If you use this toolbox, please cite:
Masayuki Tanaka and Masatoshi Okutomi, A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August, 2014.

Required Products MATLAB
MATLAB release MATLAB 7.14 (R2012a)
Comments and Ratings (64)
09 Dec 2014 Marco

Hi Masayuki,
I find your toolbox very interesting but I have two separate issues.

First: if I run testDNN I get:
??? Error using ==> randperm
Too many input arguments.

Error in ==> pretrainRBM at 172
p = randperm(dimV, DropOutNum);

Error in ==> pretrainDBN at 88
dbn.rbm{i} = pretrainRBM(dbn.rbm{i}, X, opts);

Error in ==> testDNN at 21
dnn = pretrainDBN(dnn, IN, opts);

Secondly, I've tried to write a script of my own following your helpful documentation, but it does not seem to work correctly. It always chooses class one, no matter what. Actually

pretrainDBN(dbn, train_data, opts);

returns a first layer that does not learn anything. I mean that no matter the size, the output always looks like this (only 3 iterations shown for the sake of space)

1 : 57.8774 0.7509
2 : 57.8774 0.7656
3 : 57.8774 0.7656
1 : 0.4938 0.4939
2 : 0.4846 0.5583
3 : 0.4719 0.6207

This seems strange when the number of neurons in the autoencoder is set equal to the number of inputs.

Could you please help me with this?

P.S. Here is the script, except for the data-loading part:

% Sets variables
datanum = size(train_data,1);
outputnum = size(train_target,2);
inputnum = size(train_data,2);
hiddennum = 32;

opts.Verbose = true;
opts.MaxIter = 10;

dbn = randDBN([192 192 11], type);
dbn2 = pretrainDBN(dbn, train_data, opts);
dbn3 = SetLinearMapping(dbn2, train_data, train_target);
dbn4 = trainDBN(dbn3, train_data, train_target);

train_estimate = v2h(dbn4, train_data);
[~,CM,~,~] = confusion(train_target', train_estimate')
test_estimate = v2h(dbn4, test_data);
[~,CM,~,~] = confusion(test_target', test_estimate')
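A likely cause of the randperm error above: the two-argument form randperm(n,k) was introduced in MATLAB R2011b, so the call in pretrainRBM fails on earlier releases (note that the listing requires R2012a). A backward-compatible replacement for that line might look like this (a sketch, not the author's fix):

% Instead of: p = randperm(dimV, DropOutNum);
p = randperm(dimV);      % one-argument form works on all releases
p = p(1:DropOutNum);     % keep the first DropOutNum random indices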

26 Nov 2014 Salem

Hi Masayuki,
Thanks for replying. I have my own dataset, so I applied this code to it and the result was excellent, much better than convolutional neural networks. I want to go through the implementation again because the result is something incredible, and I want to make sure I have implemented it in the correct way. Thanks again for sharing the code.

26 Nov 2014 Masayuki Tanaka

Hi Salem,

For object detection, I think that a convolutional network is better. But I hope this code also works for object detection.

Thanks.

24 Nov 2014 Salem

Hi Masayuki,
Thanks for sharing this work.
Is it possible to use this code for object detection?
Regards,

17 Nov 2014 Naeem Ul Islam

Thank you so much, I fixed my problem.

17 Nov 2014 Masayuki Tanaka

Hi Nirmal,

I won't consult on each specific problem.
If you find a bug, please report it. I will fix it if I have time…

Thank you!

14 Nov 2014 Nirmal

Thanks, Masayuki, for your response. I don't know where exactly the bug is. I can tell you my input and output data for training and testing.

Training

The input and output I have given for training are:

IN =

0.0508 0.1028 0.2597 0.3425 0.4421 0.5450 0.6116 0.7312 0.9024 1.0000
0.0573 0.1087 0.2060 0.3443 0.4321 0.5143 0.6097 0.7396 0.8907 1.0000
0.0434 0.1066 0.2399 0.3202 0.4109 0.4947 0.6062 0.6984 0.8613 1.0000
0.0588 0.1160 0.2182 0.3529 0.4726 0.5468 0.6609 0.8125 0.9331 1.0000
0.0497 0.0869 0.2175 0.3110 0.3862 0.4690 0.5324 0.6646 0.8984 1.0000
0.0553 0.1104 0.1955 0.3246 0.4395 0.5263 0.6127 0.7208 0.8704 1.0000
0.0530 0.1022 0.2058 0.3652 0.4738 0.5564 0.6354 0.7720 0.9023 1.0000
0.0533 0.1118 0.2367 0.3263 0.4184 0.5061 0.6123 0.7301 0.8726 1.0000
0.0538 0.1179 0.2293 0.3267 0.4333 0.5230 0.6226 0.7258 0.8801 1.0000
0.0473 0.1111 0.2347 0.3560 0.4627 0.5681 0.6481 0.7799 0.9202 1.0000
0.0481 0.1091 0.1906 0.2956 0.3788 0.4684 0.5427 0.7065 0.8924 1.0000
0.0536 0.1164 0.2625 0.3452 0.4307 0.5382 0.6301 0.7400 0.8729 1.0000
0.0547 0.1109 0.2527 0.3440 0.4316 0.5300 0.6416 0.7294 0.8658 1.0000
0.0459 0.1080 0.2353 0.3129 0.4413 0.5336 0.6143 0.7321 0.8933 1.0000
0.0546 0.1018 0.2035 0.3188 0.4412 0.5370 0.6084 0.7173 0.8851 1.0000
0.0598 0.1121 0.2279 0.3452 0.4224 0.5233 0.6205 0.7327 0.8971 1.0000
0.0465 0.1062 0.2274 0.3396 0.4300 0.5292 0.6130 0.7032 0.8624 1.0000
0.0473 0.1031 0.2673 0.3637 0.4229 0.5268 0.6370 0.7149 0.8271 1.0000
0.0509 0.1024 0.1906 0.3217 0.4049 0.4952 0.5999 0.7157 0.8828 1.0000
0.0482 0.0994 0.1910 0.3368 0.4615 0.5317 0.6292 0.7473 0.8908 1.0000

OUT =

0.0619 0.1074 0.1986 0.2981 0.4495 0.5487 0.7022 0.8203 0.9526 1.0000
0.0495 0.1119 0.1972 0.3024 0.4619 0.5569 0.7223 0.8382 0.9456 1.0000
0.0474 0.1066 0.2208 0.3254 0.4762 0.5625 0.6883 0.7866 0.9373 1.0000
0.0609 0.1016 0.1936 0.2756 0.4321 0.5390 0.6710 0.7997 0.9176 1.0000
0.0646 0.1039 0.2133 0.3243 0.4175 0.5608 0.6469 0.7916 0.9045 1.0000
0.0585 0.1081 0.1846 0.2689 0.4416 0.5450 0.7111 0.8411 0.9702 1.0000
0.0494 0.0918 0.1798 0.3124 0.4554 0.5728 0.7282 0.8470 0.9533 1.0000
0.0597 0.1232 0.2334 0.3116 0.4309 0.5399 0.6387 0.7748 0.9233 1.0000
0.0554 0.1002 0.2053 0.3289 0.4744 0.5669 0.6879 0.8111 0.9494 1.0000
0.0473 0.1111 0.2347 0.3560 0.4627 0.5681 0.6481 0.7799 0.9202 1.0000
0.0481 0.1091 0.1906 0.2956 0.3788 0.4684 0.5427 0.7065 0.8924 1.0000
0.0536 0.1164 0.2625 0.3452 0.4307 0.5382 0.6301 0.7400 0.8729 1.0000
0.0547 0.1109 0.2527 0.3440 0.4316 0.5300 0.6416 0.7294 0.8658 1.0000
0.0459 0.1080 0.2353 0.3129 0.4413 0.5336 0.6143 0.7321 0.8933 1.0000
0.0546 0.1018 0.2035 0.3188 0.4412 0.5370 0.6084 0.7173 0.8851 1.0000
0.0598 0.1121 0.2279 0.3452 0.4224 0.5233 0.6205 0.7327 0.8971 1.0000
0.0465 0.1062 0.2274 0.3396 0.4300 0.5292 0.6130 0.7032 0.8624 1.0000
0.0473 0.1031 0.2673 0.3637 0.4229 0.5268 0.6370 0.7149 0.8271 1.0000
0.0509 0.1024 0.1906 0.3217 0.4049 0.4952 0.5999 0.7157 0.8828 1.0000
0.0482 0.0994 0.1910 0.3368 0.4615 0.5317 0.6292 0.7473 0.8908 1.0000

Testing

Input, expected output (OUT), and output from our system (out):

IN =

0.0571 0.1051 0.2289 0.3170 0.3904 0.4603 0.5426 0.6916 0.8513 1.0000
0.0600 0.1202 0.1827 0.3124 0.4269 0.4994 0.6124 0.7648 0.8969 1.0000
0.0500 0.1038 0.2320 0.3211 0.4101 0.5151 0.6038 0.7421 0.8818 1.0000
0.0549 0.0942 0.1917 0.3012 0.3757 0.4808 0.6032 0.6957 0.8200 1.0000
0.0519 0.0993 0.1598 0.2916 0.4260 0.4866 0.5815 0.6982 0.8647 1.0000
0.0551 0.1124 0.1759 0.2736 0.3927 0.4553 0.5314 0.7169 0.8839 1.0000
0.0602 0.1062 0.2197 0.3179 0.4412 0.5577 0.6200 0.7419 0.8830 1.0000
0.0588 0.1126 0.1782 0.2953 0.3918 0.4797 0.5877 0.7279 0.8814 1.0000

OUT =

0.0537 0.1227 0.2162 0.3215 0.4550 0.5871 0.7132 0.8521 0.9558 1.0000
0.0633 0.1196 0.1881 0.2807 0.4691 0.5499 0.6738 0.8023 0.9324 1.0000
0.0613 0.1148 0.1805 0.2892 0.4700 0.5843 0.7635 0.8470 0.9414 1.0000
0.0653 0.1402 0.2052 0.2768 0.4669 0.5362 0.7040 0.8622 0.9405 1.0000
0.0648 0.1156 0.1807 0.2804 0.4378 0.5466 0.7169 0.8337 0.9482 1.0000
0.0593 0.1175 0.1746 0.2710 0.4707 0.5343 0.7108 0.8234 0.9452 1.0000
0.0558 0.1104 0.2144 0.3186 0.4410 0.5616 0.6886 0.7815 0.9127 1.0000
0.0594 0.1193 0.1956 0.3045 0.4722 0.5430 0.7349 0.8158 0.9345 1.0000

out =

0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968
0 1.0000 0 0 1.0000 1.0000 1.0000 1.0000 1.0000 0.9968

Node info: nodes = [10,15,10];

Could you please help me with this?

14 Nov 2014 Masayuki Tanaka

Hi Nirmal,

If you find a bug, please report it. I will fix it if I have time…

Thank you!

11 Nov 2014 Nirmal

Hi Masayuki,
Thanks for sharing this work, it is really nice indeed.

I am trying to train with an M*N input matrix and an M*N output matrix, i.e., input and output are both M*N matrices containing floating-point numbers, both positive and negative.

When I test the system on some M'*N points, all the rows have the same value.

Example of output

0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002
0.9942 0.0001 0.9998 0.0001 0.1693 0.0026 0.2899 0.0002

Here every row is the same even though my inputs have different values. Can you please help me in this regard?

01 Oct 2014 Masayuki Tanaka

Hi Omrah,

That is a pre-processing issue, which I think is beyond my scope. But one simple approach is to apply padding.
Please try it.

Thank you.

30 Sep 2014 omrah

Hi,
I am trying to construct the matrix to test with the DNN, but I have a problem:
I have vectors that vary in size from 200 to 1200 columns. Each vector represents the features of a handwritten word (5000 words). What should I do to normalize the vectors for use with the RBM and DBN?
For MNIST this is not a problem, because the images are all the same size, 28*28, which gives 784 columns of network input.
I hope I can find the solution.
Thank you for your cooperation.

30 Sep 2014 Masayuki Tanaka

Hi Omrah,

I think that you can use my code with your data. I provide a sample for the MNIST handwritten digits.
Please check it!

Thank you.

29 Sep 2014 omrah

Hello Masayuki,
I am trying to use your code in my research work, but I have an error.
This is my problem:
After extracting the features of online handwritten words from a database, I obtained feature vectors composed of real values for each word, but they are variable in size. I also prepared label vectors representing the label of each word.
Can I use your proposed DBN-DNN for my data?
thanks

16 Sep 2014 joewale

Masayuki Tanaka, thank you!

16 Sep 2014 Masayuki Tanaka

Hi Joewale,

You can download the author's preprint from my web page:

http://www.ok.ctrl.titech.ac.jp/~mtanaka/publication2014.html

Thanks!

13 Sep 2014 joewale

Hi Masayuki Tanaka, I can't find your paper "A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August, 2014" on Google Scholar; could you send it to my email?
In addition, I wonder whether this algorithm is also suitable for speech processing, such as speech feature extraction or audio classification. Thank you!

10 Sep 2014 Masayuki Tanaka

Hi Giacomo,

Thank you for your comment and bug report!
You are correct, that was a bug. I have already fixed it and updated the code.
Please use the latest version.

Thank you!

05 Sep 2014 Giacomo

Hi Masayuki Tanaka,
first of all, thank you for your work :). I have just found a bug in the code. Please let me know (and ignore this) if I am mistaken.

File: trainDBN.m
Lines: 128, 129
Description: the opts MATLAB struct has the field 'object', with the letter 'o' not capitalized. The code actually checks whether 'Object' (capitalized) is an existing field and, if true, compares the content of opts.object (not capitalized). This always leads to the default value OBJECTSQUARE, even if you set the field 'object' equal to 'CrossEntropy'.

thank you again,
Giacomo
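A sketch of the mismatch Giacomo describes; these lines are illustrative, not the actual contents of trainDBN.m:

% Buggy pattern: the existence check and the read use different field names.
if( isfield(opts, 'Object') )      % checks capitalized 'Object'
    obj = opts.object;             % but reads lower-case 'object'
end
% A fix uses the same field name in both places:
if( isfield(opts, 'Object') )
    obj = opts.Object;
end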

04 Sep 2014 Masayuki Tanaka

Hi Seoul,

Thank you for your comment.

I think it depends on the training data.

If you make any observations related to that phenomenon, please let me know.

Thanks,

29 Aug 2014 seoul

I have a question.
The RBM pre-training always works well, but sometimes back-propagation does not work.
Why is that?


For example:
rbm: 500 epochs; backpropagation: 2000 epochs
node structure: [784 1568 1568 10]
iter: 1 -> traindata MSE = 0.8
iter: 2 -> MSE = 0.8
.
.
.
iter: 1000 -> MSE = 0.8

29 Aug 2014 Liang ZOU  
25 Aug 2014 Masayuki Tanaka

Hi Seoul,

That is an actually great question.
The BBRBM is basically derived assuming binary inputs. But we can also feed it real values between 0 and 1; that is one of the key points of my ICPR 2014 paper. If you are interested, please check the paper:

Masayuki Tanaka and Masatoshi Okutomi, A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August, 2014.

Thanks.

24 Aug 2014 seoul

Why does the BBRBM also do well when the visible neurons are given real values between 0 and 1?

01 Aug 2014 Junxing  
18 Jul 2014 Masayuki Tanaka

Hi Andre Flipe,

Honestly, I could not work out what kind of network structure you want. But if you set [10, 2] when creating a DBN, it means 10 nodes for input and 2 nodes for output, with no hidden nodes.

Thanks.
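Following that rule, the example in Andre's comment below would need a node vector whose first entry matches his 2-column input; a minimal sketch under that assumption:

input  = [rand(100,2)*0.1+0.3; rand(100,2)*0.1+0.6];             % 200 x 2
output = [zeros(100,1), ones(100,1); ones(100,1), zeros(100,1)]; % 200 x 2
dbn = randDBN( [2 10 2] );   % first entry 2 matches the input columns
out = v2h( dbn, input );     % inner matrix dimensions now agree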

18 Jul 2014 Masayuki Tanaka

Hi Ari,

I have run testDNN but could not reproduce the error you mentioned.

Thanks.

18 Jul 2014 Masayuki Tanaka

Hi Alena,

readme.txt is in the mnist folder. Please check it!

Thanks.

09 Jul 2014 Alena

Could you please send me a detailed readme.txt? I cannot download the readme. Thanks very much!

23 Jun 2014 Andre Filipe

Hi,

There is one thing I am not understanding. Let's say I have input=[rand(100,2)*0.1+0.3;rand(100,2)*0.1+0.6] and output=[zeros(100,1),ones(100,1);ones(100,1),zeros(100,1)], and I want to create a DBN with two layers (10 and 2 nodes, in that order). Shouldn't nodes then be equal to [10 2]? Currently, it gives an inner matrix dimension error at v2h. Please help.

16 Jun 2014 Ari  
12 Jun 2014 Ari

Hi,
Thanks for sharing this code. I downloaded the latest version and tried to run testDNN as recommended. I got these errors:
??? Error using ==> randperm
Too many input arguments.

Error in ==> pretrainRBM at 172
p = randperm(dimV, DropOutNum);

Error in ==> pretrainDBN at 88
dbn.rbm{i} = pretrainRBM(dbn.rbm{i}, X, opts);

Error in ==> testDNN at 21
dnn = pretrainDBN(dnn, IN, opts);

Thanks,
Ari

14 May 2014 hiba

Hello, can you send me the technical report for this program?

03 Apr 2014 ted p teng  
03 Apr 2014 siddhartha

When I run the code by Masayuki Tanaka in MATLAB, I train an RBM with real-valued input samples and binary hidden units.

Now I want to feed in a new input sample to find out its classification.

Which function in the toolbox should I use? Is it calRMSE?

Also, the values it gives are decimals, so how will I know which class my input sample has been classified into?

Example code:

rbm = randRBM( 3, 3, 'GBRBM' )

V = [0.5 -3 1; -0.5 2 0; -0.25 -0.44 1];

rbm = pretrainRBM( rbm, V )

Now, once trained, should I use
v2hall( rbm, [-0.5 -0.5 0] )
on the new input vector?

01 Apr 2014 MA

Hi Tanaka, I am using this toolbox to train a GBRBM, but in your code h2v.m there is no Gaussian sampling:
h = bsxfun(@times, H * dnn.W', dnn.sig);
V = bsxfun(@plus, h, dnn.c );
I think there should be Gaussian sampling: normrnd(h+dnn.c, dnn.sig)
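A sketch of the sampling variant MA suggests (assuming dnn.sig holds per-visible-unit standard deviations; written with bsxfun so it runs on pre-R2016b releases):

% Mean of the Gaussian visible units, as computed in h2v.m
h  = bsxfun(@times, H * dnn.W', dnn.sig);
mu = bsxfun(@plus, h, dnn.c);
% MA's suggestion: draw V ~ N(mu, sig) instead of returning the mean
V = mu + bsxfun(@times, randn(size(mu)), dnn.sig);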

01 Apr 2014 siddhartha

Hi, I just have one question.
I am using a GBRBM-type RBM.

I give it a training set, train it, and it returns an RBM with W, b, c, sig.

Now, when I give it a new input, the output should be a binary vector with length equal to the number of hidden neurons, since it will indicate which hidden neuron corresponds most to the input.

So how do I feed in the new input, and is my approach correct?

19 Mar 2014 ling

Thanks for sharing this easy-to-use package.
It works very well.
I have a question: besides the setting opts.MaxIter, I have not found any other criterion for stopping training. How should I decide when to stop training?

19 Mar 2014 ling

Sorry, I was wrong. It should be nrbm-1.

07 Mar 2014 TabZim

Thanks a lot for enhancing our understanding with this well-commented code. I have a query regarding the sparsity constraint imposed on the RBM, i.e., in the pretrainRBM function: to update the hidden biases according to the sparsity constraint, why have you multiplied the gradients by 2?

dsW = dsW + SparseLambda * 2.0 * bsxfun(@times, (SparseQ-mH)', svdH)';

dsB = dsB + SparseLambda * 2.0 * (SparseQ-mH) .* sdH;

This does not match any update equation given by Lee et al. Could you please elaborate on this? Many thanks!
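One possible explanation, offered here as an assumption rather than the author's answer: if the sparsity penalty is the squared deviation SparseLambda * (SparseQ - mH)^2, differentiating it brings down a factor of 2,

d/dtheta [ SparseLambda * (SparseQ - mH)^2 ] = -2 * SparseLambda * (SparseQ - mH) * d(mH)/dtheta

which would match the 2.0 in the updates quoted above.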

04 Mar 2014 Sanjanaashree

Hello Sir, I am working on machine transliteration. Since I am a newbie to DBNs, I wish to know whether I will be able to use this code on a matrix created using a one-hot representation of the data.

27 Feb 2014 Masayuki Tanaka

Hi Yong Ho,

Thank you for your comment.
The linear mapping is just an option; you don't need to use it. But the training requires initial parameters, and I think the linear mapping is one candidate for setting them.
If you know better initial parameter setting, please let me know.

Thanks.

26 Feb 2014 Yong Ho

Hello,

Your code is very helpful for me.

But while studying your code, I wondered: why do you use a linear mapping to calculate the weights between TrainLabels and the last hidden nodes?

Is there any advantage using linear mapping?

25 Jan 2014 Masayuki Tanaka

Hi Adel,

Thank you for your comment and good rating!
But, my code does not include the sparse RBM feature.

Thanks!

21 Jan 2014 Adel

Dear Prof.

I used the library and it was very useful and easy to use. But when i read the paper:

"Lee et al, Sparse deep belief net model for visual area V2, NIPS 2008. "

I found it uses Sparse RBM. and i want to know how can i apply and use Sparse RBM in my application as in the mentioned paper.

I appreciate any help or advice about this.

thanks.

13 Jan 2014 Masayuki Tanaka

Hi Usman,

I think you can use the Gaussian RBM instead of the Bernoulli RBM.

Thank you!
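A minimal sketch of that switch, using the type strings that appear elsewhere on this page ('GBRBM' for a single RBM, 'GBDBN' for a DBN); the variable names and exact signatures are assumptions:

rbm = randRBM( visNum, hidNum, 'GBRBM' );  % Gaussian-Bernoulli RBM for real-valued inputs
dbn = randDBN( nodes, 'GBDBN' );           % DBN with a Gaussian-Bernoulli first layer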

10 Jan 2014 ashkan abbasi

thanks for your generosity!

06 Jan 2014 Usman

Hi Xiaowang,

You just need to run output = v2h(bbdbn, TestImages) to get the output labels.
Match these against TestLabels to verify your output.
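A sketch of that matching step, assuming one-hot TestLabels with one row per image:

out = v2h( bbdbn, TestImages );            % network outputs, one row per test image
[~, predicted] = max( out, [], 2 );        % most active output unit per row
[~, actual]    = max( TestLabels, [], 2 ); % true class per row
errorRate = mean( predicted ~= actual )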

27 Dec 2013 Usman

Hi,
I am using this toolbox with speech recognition features as the input/visible units. However, my features are both negative and positive and fall outside [-1, 1]. Can you please advise whether I can use negative values as visible units, or do I have to normalize the features to between 0 and 1? Also, how should I handle zero-mean, unit-variance standardization, since standardization makes the data greater than 1, while normalizing makes it lose the zero-mean, unit-variance distribution?

27 Dec 2013 xiaowang

I have read your MNIST code and have some questions. You use only the train images and train labels to train the DNN, but I did not see the test images being used to test and to calculate the error rate against the test labels. I do not know DNNs well, so maybe I am wrong...

08 Dec 2013 Xin

It is a wonderful toolbox.

But could you please upload a tutorial-like file?

That would be more helpful for us.

Many thanks!!

05 Dec 2013 Masayuki Tanaka

Hi Arjun,

Thank you for your bug report!
I fixed that bug and updated. Please use DeepNeuralNetwork20131204.zip

Thank you!

03 Dec 2013 Arjun

There seems to be a bug on line 205 in trainDBN.m when using GBDBN.

18 Nov 2013 Ming-Ju

It works! Awesome implementation!

10 Nov 2013 eve

Thanks a lot Masayuki!!!! :-)

08 Nov 2013 Masayuki Tanaka

Hi Sigurd,

Thank you for your comment. I think that the random inputs and outputs used in testDNN produce such training results. If you train with the MNIST dataset, for example, I believe that you will get a reasonable model.

You can get the MNIST dataset from:
http://yann.lecun.com/exdb/mnist/

Thanks!

08 Nov 2013 Sigurd

Hi,

Thanks for providing the code.

Running testDNN, the trained net doesn't actually appear to model the data: 1) v2h(dnn,IN) yields an output layer that is (nearly) constant for all inputs; 2) the RMSE is equal to (and sometimes greater than!) the standard deviation of the data. 3)

...just a little puzzled as to what is going on...

Cheers,
Sigurd

06 Nov 2013 altan  
05 Nov 2013 Masayuki Tanaka

Hi eve,

Thank you very much for your comment. That was a bug, and I have already fixed it.
Please use DeepNeuralNetwork20131105.

Thank you again!

04 Nov 2013 eve

Hi,

I tried running testDNN.m, but I got an error in v2h.m at line 39, as the sizes of V and dnn.W don't match. Is that a new bug?
Thanks

23 Aug 2013 Masayuki Tanaka

Chong, thanks a lot!
I have fixed it and updated the code.

22 Aug 2013 chong

if( Object = OBJECTSQUARE )?????
der = derSgm .* der;
end
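The likely fix, inferred from chong's report (the snippet uses assignment where a comparison was intended):

if( Object == OBJECTSQUARE )   % comparison, not assignment
    der = derSgm .* der;
end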

30 Jul 2013 Hwa-Chiang

Nice and Nice!

Updates
22 Aug 2013

Modified the implementation of dropout.
Added the cross-entropy objective function for neural network training.

23 Aug 2013

Debugged. Thank you, chong!

23 Aug 2013

Some bugs are fixed.

23 Sep 2013

Bug fixed in GetDroppedDBN.

24 Sep 2013

Modified testDNN.m

05 Nov 2013

Bug fix.

08 Nov 2013

The bug is fixed.

15 Nov 2013

Sample codes of the MNIST dataset are included.

04 Dec 2013

Fixed the bug in trainDBN.m for GBDBN.

13 Dec 2013

CalcErrorRate is debugged.

13 Jan 2014

Added sample of evaluation test data of MNIST.

15 Aug 2014

I added the implementation of the ICPR 2014 algorithm.

10 Sep 2014

The bug related to the objective function is fixed.
