From: "Greg Heath" <>
Newsgroups: comp.soft-sys.matlab
Subject: Re: Performance function with pattern recognition in neural networks
Date: Fri, 29 Mar 2013 22:09:07 +0000 (UTC)
Organization: The MathWorks, Inc.
Lines: 33
Message-ID: <kj53e3$hqt$>
References: <kivgk2$i7$>
Reply-To: "Greg Heath" <>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Trace: 1364594947 18269 (29 Mar 2013 22:09:07 GMT)
NNTP-Posting-Date: Fri, 29 Mar 2013 22:09:07 +0000 (UTC)
X-Newsreader: MATLAB Central Newsreader 2929937
Xref: comp.soft-sys.matlab:792298

"William " <> wrote in message <kivgk2$i7$>...
> Hi,
> A question for anyone who might know or have an opinion.
> I have set up a neural network to perform pattern recognition (or classification), but I have found I am getting way too many false negatives compared to what I might actually get with, say, a Support Vector Machine setup.  One possibility I am thinking of is that the SVM setup can have harsh penalties for incorrect classifications.  So, with this in mind, is there a "best" performance function for pattern recognition with neural networks?  Or am I best to use some function of the distance from the hyperplane (or similar)?
> Cheers

You have given absolutely no information that will let anyone help you.
Are you using patternnet with tansig/logsig or tansig/softmax?
Dimension of inputs? How many classes? For c classes, does your target 
contain columns of the c-dimensional unit matrix eye(c) or eye(c-1)?
How unbalanced is the data set: How large is each class? Are the ratios 
of class sizes the same as the a priori probabilities of the general population?
Are the misclassification costs specified, or the usual default values {0,1}?

My a priori advice is to standardize your inputs and remove or modify 
outliers. Then use duplicates with or without added noise so that the 
number in each class is equal. If you have c classes, the c-dimensional 
targets and class indices can be obtained from each other via ind2vec 
and vec2ind.
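A minimal sketch of that preprocessing, assuming x is a d-by-N input matrix and classidx is a 1-by-N vector of class labels (both names are placeholders); it uses the toolbox functions mapstd, ind2vec, and vec2ind:

```matlab
% Standardize each input row to zero mean, unit variance
[x, ps] = mapstd(x);

% c-by-N targets whose columns are columns of eye(c)
t = full(ind2vec(classidx));

% Balance classes by duplicating columns (with small optional jitter)
counts = full(sum(t, 2));                        % examples per class
Nmax   = max(counts);
xb = x;  tb = t;
for i = 1:numel(counts)
    cols  = find(classidx == i);
    need  = Nmax - counts(i);
    dup   = cols(randi(numel(cols), 1, need));   % resample with replacement
    noise = 0.01*randn(size(x,1), need);         % optional added noise
    xb = [xb, x(:,dup) + noise];
    tb = [tb, t(:,dup)];
end

% Class indices and targets convert back and forth:
classidx_b = vec2ind(tb);
```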

Once the net is trained to yield approximately equal errors, you can 
transform the outputs by multiplying to account for differences in 
class priors and classification costs.
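One way to sketch that output transformation, assuming the trained net's outputs approximate posteriors under the balanced training set; priorTrain, priorTrue, and cost are hypothetical c-by-1 vectors of training-set class fractions, population priors, and per-class misclassification costs:

```matlab
y = net(x);                                % c-by-N outputs of the trained net
w = (priorTrue ./ priorTrain) .* cost;     % per-class reweighting factors
yadj = bsxfun(@times, w(:), y);            % scale each class's output
[~, class] = max(yadj, [], 1);             % assign to the largest adjusted score
```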

You might find some old posts of mine in CSSM regarding priors and 
classification costs that will help.

Hope this helps.