
Pattern Recognition and Machine Learning Toolbox

version 1.0.0.0 (109 KB) by Mo Chen

159 Downloads

Updated 19 Apr 2018

View license on GitHub

This package is a Matlab implementation of the algorithms described in the book Pattern Recognition and Machine Learning by C. Bishop (PRML).
The repo for this package is located at: https://github.com/PRML/PRMLT
If you find a bug or have a feature request, please file an issue there. I do not usually check the comments here.
The design goals of the code are as follows:

Succinct: The code is extremely terse. Minimizing the number of lines of code is a primary goal, so the core of each algorithm is easy to spot.
Efficient: Many tricks for making Matlab scripts fast are applied (e.g. vectorization and matrix factorization). Many functions are even comparable with C implementations. Functions in this package are often orders of magnitude faster than the Matlab built-in functions that provide the same functionality (e.g. kmeans). If anyone finds a Matlab implementation that is faster than mine, I am happy to optimize further.
Robust: Many numerical-stability techniques are applied, such as computing probabilities in log scale to avoid underflow and overflow, square-root-form updates of symmetric matrices, etc.
Easy to learn: The code is heavily commented. Reference formulas in the PRML book are indicated for the corresponding code lines, and symbols are kept in sync with the book.
Practical: The package is designed not only to be easy to read, but also to be easy to use, in order to facilitate ML research. Many functions in this package are already widely used (see the Matlab File Exchange).
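The log-scale probability computation mentioned under "Robust" is usually implemented with the log-sum-exp trick. A minimal NumPy sketch of the idea (the name `logsumexp` matches a helper mentioned in the comments below, but this implementation is illustrative, not the toolbox's code):

```python
import numpy as np

def logsumexp(x, axis=None):
    """Compute log(sum(exp(x))) without overflow or underflow.

    Subtracting the maximum before exponentiating keeps every exp()
    argument <= 0, so nothing overflows; the maximum is added back
    in log space afterwards.
    """
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out.squeeze() if axis is None else np.squeeze(out, axis=axis)

# The naive computation underflows for very negative log-probabilities:
logp = np.array([-1000.0, -1000.0])
# np.log(np.sum(np.exp(logp))) would return -inf here
print(logsumexp(logp))  # -> approx -999.307, i.e. -1000 + log(2)
```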

Comments and Ratings (48)

lei wang

Chapter 4.
Do some functions lack sub-functions, such as softmax and sigmoid (missing "logsumexp" and "log1pexp", respectively)?
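The helpers named in this question are standard numerically stable primitives. A hedged sketch of `log1pexp` (the softplus, log(1 + exp(x)), which a stable sigmoid can be built on) — an illustration under the usual definition, not the toolbox's actual code:

```python
import numpy as np

def log1pexp(x):
    """Numerically stable log(1 + exp(x)) (softplus).

    For large positive x, exp(x) overflows even though the result is
    approximately x. Writing the function as max(x, 0) plus a term whose
    exp() argument is always <= 0 avoids the overflow in both branches.
    """
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

def log_sigmoid(x):
    """log(sigmoid(x)) = -log(1 + exp(-x)), computed stably."""
    return -log1pexp(-np.asarray(x, dtype=float))

print(log1pexp(0.0))     # -> approx 0.6931 (= log 2)
print(log1pexp(1000.0))  # -> 1000.0, where the naive formula overflows
```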

I'm only just diving deeper, but for someone coming from a non-coding background this is a lifesaver. The book has great explanations, and I'm already getting a better understanding of the code and how I can apply it to my research.

Mo Chen

@zjyedword @MisterTellini, the MLP function has been rewritten; it now matches the book better and includes bias terms.

I need RNN/LSTM code for any application, as long as it works OK.

Hello everyone. I don't understand the line "E = W{l}*dG;": after W{l} has been updated, why is E = W{l}*dG then computed with the updated weights? Please explain in detail, thanks.
function [model, mse] = mlp(X, Y, h)
% Multilayer perceptron
% Input:
%   X: d x n data matrix
%   Y: p x n response matrix
%   h: L x 1 vector specifying the number of hidden nodes in each layer
% Output:
%   model: model structure
%   mse: mean squared error
% Written by Mo Chen (sth4nth@gmail.com).
h = [size(X,1); h(:); size(Y,1)];
L = numel(h);
W = cell(L-1,1);
for l = 1:L-1
    W{l} = randn(h(l), h(l+1));
end
Z = cell(L,1);
Z{1} = X;
eta = 1/size(X,2);
maxiter = 20000;
mse = zeros(1, maxiter);
for iter = 1:maxiter
    % forward
    for l = 2:L
        Z{l} = sigmoid(W{l-1}'*Z{l-1});
    end
    % backward
    E = Y - Z{L};
    mse(iter) = mean(dot(E(:), E(:)));  % sum of squared errors this iteration
    for l = L-1:-1:1
        df = Z{l+1}.*(1-Z{l+1});
        dG = df.*E;
        dW = Z{l}*dG';
        W{l} = W{l} + eta*dW;
        E = W{l}*dG;                    % propagate the error to the previous layer
    end
end
mse = mse(1:iter);
model.W = W;

All the code is here:
https://www.mathworks.com/matlabcentral/fileexchange/55946-deep-multilayer-perceptron-neural-network-with-back-propagation
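For readers puzzling over the backward loop in the mlp code above (including the "E = W{l}*dG" question), here is a NumPy re-implementation of the same update. Variable names mirror the Matlab code, but this is an illustrative sketch, not the toolbox's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_train(X, Y, hidden, eta=None, maxiter=2000, seed=0):
    """Gradient-descent training of a bias-free sigmoid MLP (sketch).

    Mirrors mlp.m above: X is d x n, Y is p x n, hidden lists the
    hidden-layer sizes. Returns the weights and the per-iteration
    sum of squared errors.
    """
    rng = np.random.default_rng(seed)
    sizes = [X.shape[0]] + list(hidden) + [Y.shape[0]]
    W = [rng.standard_normal((sizes[l], sizes[l + 1]))
         for l in range(len(sizes) - 1)]
    eta = eta if eta is not None else 1.0 / X.shape[1]
    mse = np.zeros(maxiter)
    for it in range(maxiter):
        # forward pass: Z[l] = sigmoid(W[l-1]' Z[l-1])
        Z = [X]
        for Wl in W:
            Z.append(sigmoid(Wl.T @ Z[-1]))
        # backward pass
        E = Y - Z[-1]
        mse[it] = np.sum(E * E)
        for l in range(len(W) - 1, -1, -1):
            dG = Z[l + 1] * (1 - Z[l + 1]) * E   # delta at layer l+1
            W[l] = W[l] + eta * Z[l] @ dG.T      # descend the squared error
            # Propagate the error to the previous layer. Note: like the
            # Matlab line "E = W{l}*dG;", this uses the *updated* W[l];
            # textbook backprop would use the pre-update weights.
            E = W[l] @ dG
    return W, mse
```

On a small regression problem the training error drops over the iterations, which is usually all this demo code is meant to show.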

Shouldn't there be biases in the example from chapter 5?

I'm having some issues trying to implement the neural networks from chapter 5 for regression problems. More concretely, I am trying to implement those functions appearing in figure 5.3 from Bishop's book. Could anyone be so kind to lend me a hand? I would gladly appreciate it. Best regards, Aitor

Karthick PA

It is very helpful. Many thanks!

Jorge

Good job, many thanks. How about a package for the RL algorithms in the Sutton and Barto book (http://incompleteideas.net/book/bookdraft2018jan1.pdf)?

Great submission, thanks!

Many thanks

ahmed silik

Chapter 1.
This is my data, for example; I want to calculate the joint entropy but I can't. Please help me with how:
0.006304715
0.002032715
0.002948715
0.003558715
-0.000867286
0.000354715
0.005388715
0.004320715
-0.006969285
0.002948715
-0.000103286
-0.000103286
-0.009717285
-0.006665285
0.002184715
0.002490715
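Joint entropy from raw samples like the list above is typically estimated by binning paired values into a joint histogram and applying the plug-in estimator. A Python sketch of that standard recipe (not part of the toolbox; the sample values below are arbitrary stand-ins):

```python
import numpy as np

def joint_entropy(x, y, bins=4):
    """Estimate H(X, Y) in bits from paired samples via a 2-D histogram.

    Plug-in estimator: bin the pairs, normalize the counts into a joint
    probability table, then compute H = -sum p*log2(p) over non-zero cells.
    """
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example with two small paired samples (arbitrary illustrative values):
x = np.array([0.0063, 0.0020, 0.0029, 0.0036, -0.0009, 0.0004, 0.0054, 0.0043])
y = np.array([-0.0070, 0.0029, -0.0001, -0.0001, -0.0097, -0.0067, 0.0022, 0.0025])
print(joint_entropy(x, y, bins=3))
```

Note that the estimate depends on the number of bins, and that joint entropy needs two paired variables; a single column of numbers only determines a marginal entropy.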

Although I've found it quite instructive, the program hmm_demo.m from Chapter 13 does not work. It seems that the culprit is the normalization procedure.

ahmed silik

isequalf(Hx_y, Hxy-Hy) — when I try to run this, it says there is an error. Please comment.

yusen zhang

Nice work, thanks. Would you like to show us how to cite your work?

Chi-Fu

Can you please provide the PDF of the book, or just give a link for downloading "Pattern Recognition and Machine Learning"?

ramimj

Thank you for this work.
But why are the classification results of rvmBinPred reversed?

Thanks for clearing that up,
Derry

I am working with the HMM code. I understand that the emission matrix should be N x M,
where N is the number of states and M the number of observation symbols. The HmmFilter used here uses a different shape for the emission matrix: N x d, where d is the length of the observation sequence. Can someone explain why?
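A plausible explanation (an assumption about the convention, not confirmed here from the toolbox's source) is that the filter expects the emission probabilities already evaluated per time step: column t holds P(x_t | state) for every state, giving an N x d matrix. Converting from the usual N x M emission matrix is one indexing operation; a Python sketch with hypothetical names:

```python
import numpy as np

# Hypothetical shapes: E is N x M (states x symbols),
# x is a length-d sequence of symbol indices (0-based here).
N, M = 3, 4
rng = np.random.default_rng(1)
E = rng.dirichlet(np.ones(M), size=N)   # emission matrix, each row sums to 1
x = np.array([0, 2, 1, 3, 2])           # observed symbols, d = 5

# Per-step likelihood matrix: column t is P(x_t | state) for all states.
B = E[:, x]                             # shape N x d

# A forward (filtering) recursion then only needs elementwise products:
A = rng.dirichlet(np.ones(N), size=N)   # N x N transition matrix, rows sum to 1
alpha = np.full(N, 1.0 / N) * B[:, 0]   # uniform prior times first likelihood
alpha /= alpha.sum()
for t in range(1, len(x)):
    alpha = (A.T @ alpha) * B[:, t]     # predict, then weight by likelihood
    alpha /= alpha.sum()                # normalize each step to avoid underflow
```

Precomputing B also covers continuous observation models: whatever the density, the filter only ever sees an N x d table of likelihoods.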

Mo Chen

@Derry Fitzgerald. The behavior is correct; the probability is the MAP probability of the whole sequence. However, the description is not right. I should have written that p is a single value.

Hi, very nice toolbox, thanks!
I have noticed a bug in hmmViterbi_: it only outputs v as a single value instead of a vector of probabilities.


Updates

1.0.0.0

update description

MATLAB Release Compatibility
Created with R2016a
Compatible with any release
Platform Compatibility
Windows macOS Linux