Setting sample weights when training a network to control the contribution of each sample to the network error
What I need to do is train a classification network (like the Pattern Recognition Tool) where each sample has a different weight. The contribution of a sample to the network error should be proportional to its weight.
For example, given samples with higher and lower weights, after training the network would classify the higher-weighted samples more successfully, at the cost of some correct classifications of the lower-weighted samples.
Does anyone know how to do this?
Currently my only idea for achieving this is, for each iteration of a loop:
1. Randomly assemble a subset of samples, with the chance of picking each sample proportional to its weight.
2. Train for one epoch on that subset.
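The loop described above can be sketched in MATLAB as follows. This is a minimal illustration, not a tested solution: `x` (features-by-N input matrix), `t` (target matrix), and `w` (length-N nonnegative weight vector) are assumed to exist, and the hidden-layer size, subset size, and iteration count are placeholders.

```matlab
% Sketch: train on weighted random subsets, one epoch at a time.
% Assumes x (features x N), t (targets x N), w (1 x N, nonnegative).
N = size(x, 2);
p = w / sum(w);                        % sampling probabilities
net = patternnet(10);                  % hidden-layer size is illustrative
net.trainParam.epochs = 1;             % one epoch per subset
net.trainParam.showWindow = false;
for iter = 1:200                       % iteration count is illustrative
    % randsample requires the Statistics and Machine Learning Toolbox
    idx = randsample(N, N, true, p);   % draw samples proportional to weight
    net = train(net, x(:, idx), t(:, idx));
end
```

Note: depending on your toolbox release, train may also accept an error-weight argument directly, e.g. net = train(net, x, t, {}, {}, ew); check the train documentation for your version before relying on it.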
Accepted Answer
Greg Heath
on 27 Apr 2013
Notation: The term sample implies a group of data, not a single case or measurement.
Use patternnet with 'logsig' or 'softmax' as the output transfer function.
For c classes, use a target matrix whose columns are columns of the c-dimensional unit matrix eye(c).
The relationships between the target matrix, the integer (1:c) class-index row vector, the integer assigned-class row vector, the {0,1} error vector, etc. are:
target = ind2vec(classind);        % targets: columns of eye(c)
classind = vec2ind(target)         % class indices, integers 1:c
net = train(net, input, target);
output = net(input);
assigned = vec2ind(output)         % assigned class of each column
errors = (assigned ~= classind)    % {0,1} error vector
Nerr = sum(errors)                 % total number of misclassifications
Individual class performances are obtained using unique vector (NOT class) indices (1:N). If class performances are unsatisfactory, several measures can be used. For example:
1. Weight the input matrix
2. Weight the target matrix
3. Weight the output matrix
4. Add noisy duplicates of poorly classified vectors to the input matrix.
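Option 4 above can be sketched as follows. This is a minimal illustration: `hard` is a hypothetical index vector of the poorly classified columns, and the duplication count and noise scale are assumptions to be tuned.

```matlab
% Sketch: append noisy duplicates of poorly classified input vectors.
% Assumes x (features x N), t (c x N), hard = indices of hard cases.
k = 5;                            % duplicates per hard case (illustrative)
sigma = 0.05 * std(x, 0, 2);      % per-feature noise scale (illustrative)
xdup = repmat(x(:, hard), 1, k) ...
       + sigma .* randn(size(x, 1), k * numel(hard));
tdup = repmat(t(:, hard), 1, k);  % duplicated targets are noise-free
x2 = [x, xdup];                   % augmented input matrix
t2 = [t, tdup];                   % augmented target matrix
```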
I've forgotten the details. However, in Mar-May 2009 (5 threads) I posted results comparing my choice of the duplication method with others for BioID classification.
Search the Newsgroup using the search word BioID.
Hope this helps.
Thank you for formally choosing my answer
Greg
Greg Heath
on 29 Apr 2013
I don't really believe in twiddling the net to accommodate a few isolated inputs that cannot be classified correctly.
BEFORE considering a neural network
1. Plot the data
2. Check the data for
a. Errors
b. Outliers
3. a. Correct and/or remove errors
b. Modify and/or remove outliers
c. Weight inputs and/or add noisy duplicates to equalize training priors
4. Design and test a Linear (SLASH) Model
5. Design and test NN Models
6. Apply class output weighting to optimize risk based on operational priors and misclassification costs
preksha pareek
on 27 Oct 2019
If I take the mean of each feature as the weights of my neural network, how can I change the size of my weight matrix?
For example: I have 60 features, and I take the mean over the samples of each of these 60 values, so the matrix is 1*60, which I want to use as the weight matrix for initialization.
However, for a neural network the weight matrix has a size corresponding to (inputs*hidden layer), so how can I reshape my matrix to fit it as the weight matrix?
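One way to expand a 1-by-60 row of feature means to the required input-weight size can be sketched as follows. This is only an illustration: `net` is a hypothetical configured network, and replicating the same row for every hidden unit is an assumption about the intended initialization.

```matlab
% Sketch: replicate a 1 x 60 mean vector to the H x 60 input-weight size.
% Assumes x is 60 x N and net is a configured patternnet.
m = mean(x, 2)';                 % 1 x 60 row of feature means
H = net.layers{1}.size;          % number of hidden units
net.IW{1,1} = repmat(m, H, 1);   % H x 60 initial input-weight matrix
```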
Greg Heath
on 1 May 2013
I have found (not only with BioID) that the best way to approach the problem is to weight and/or duplicate so that training priors are balanced. Then you can apply the Bayesian Risk Formula to satisfy any combination of misclassification costs and operational priors.
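The risk adjustment mentioned above can be sketched as follows. This is an illustration only, assuming training priors were balanced: `output` is the c-by-N matrix of class posteriors from the net, `prior` is a hypothetical c-by-1 vector of operational priors, and `cost` is a hypothetical c-by-c matrix where cost(i,j) is the cost of assigning class i when the true class is j.

```matlab
% Sketch: minimum-risk class assignment from balanced-prior posteriors.
post = output .* prior ./ sum(output .* prior, 1);  % rescale to operational priors
risk = cost * post;                                 % expected cost of each assignment
[~, assigned] = min(risk, [], 1);                   % minimum-risk class per column
```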
Hope this helps.
Greg