For example, suppose you want to classify a tumor as benign or malignant, based on uniformity of cell size, clump thickness, mitosis, etc. You have 699 example cases for which you have 9 items of data and the correct classification as benign or malignant.
As with function fitting, there are two ways to solve this problem:
Use the Neural Network Pattern Recognition app, nprtool.
Use command-line functions.
It is generally best to start with the GUI, and then to use the GUI to automatically generate command-line scripts. Before using either method, the first step is to define the problem by selecting a data set. The next section describes the data format.
To define a pattern recognition problem, arrange a set of Q input vectors as columns in a matrix. Then arrange another set of Q target vectors so that they indicate the classes to which the input vectors are assigned (see “Data Structures” for a detailed description of data formatting for static and time series data).
When there are only two classes, you set each scalar target value to either 0 or 1, indicating which class the corresponding input belongs to. For instance, you can define the two-class exclusive-or classification problem as follows:
inputs = [0 1 0 1; 0 0 1 1];
targets = [1 0 0 1; 0 1 1 0];
When inputs are to be classified into N different classes, the target vectors have N elements. For each target vector, one element is 1 and the others are 0. For example, the following lines show how to define a classification problem that divides the corners of a 5-by-5-by-5 cube into three classes:
The origin (the first input vector) in one class
The corner farthest from the origin (the last input vector) in a second class
All other points in a third class
inputs = [0 0 0 0 5 5 5 5; 0 0 5 5 0 0 5 5; 0 5 0 5 0 5 0 5];
targets = [1 0 0 0 0 0 0 0; 0 1 1 1 1 1 1 0; 0 0 0 0 0 0 0 1];
Classification problems involving only two classes can be represented using either format. The targets can consist of either scalar 1/0 elements or two-element vectors, with one element being 1 and the other element being 0.
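Target matrices in the N-class format above can also be generated programmatically. As a rough illustration (in plain Python rather than MATLAB; to_one_hot is a hypothetical helper, not a toolbox function), the following sketch builds an N-by-Q target matrix from a list of Q class indices:

```python
def to_one_hot(labels, num_classes):
    """Build an N-by-Q target matrix (as a list of rows) from class indices.

    Each column corresponds to one input vector; the row matching its
    class is 1 and all other rows are 0, mirroring the toolbox format.
    """
    return [[1 if lbl == row else 0 for lbl in labels]
            for row in range(num_classes)]

# The XOR targets from above: the column classes are 0, 1, 1, 0
targets = to_one_hot([0, 1, 1, 0], 2)
```

For the XOR example this reproduces the two-row target matrix shown earlier.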
The next section shows how to train a network to recognize patterns, using the Neural Network Pattern Recognition app, nprtool. This example uses the cancer data set provided with the toolbox. The data set consists of 699 nine-element input vectors and 699 two-element target vectors. Each target vector has two elements because there are two categories (benign or malignant) associated with each input vector.
If needed, open the Neural Network Start GUI with this command:
nnstart
Click Pattern Recognition app to open the Neural Network Pattern Recognition app. (You can also use the command nprtool.)
Click Next to proceed. The Select Data window opens.
Click Load Example Data Set. The Pattern Recognition Data Set Chooser window opens.
Select Breast Cancer and click Import. You return to the Select Data window.
Click Next to continue to the Validation and Test Data window.
Validation and test data sets are each set to 15% of the original data. With these settings, the input vectors and target vectors will be randomly divided into three sets as follows:
70% are used for training.
15% are used to validate that the network is generalizing and to stop training before overfitting.
The last 15% are used as a completely independent test of network generalization.
(See “Dividing the Data” for more discussion of the data division process.)
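The 70/15/15 division can be pictured as a random shuffle of the sample indices followed by three consecutive slices. This is only a conceptual sketch in plain Python (the toolbox performs the division internally); the function name and seed are illustrative:

```python
import random

def divide_indices(q, train_ratio=0.70, val_ratio=0.15, seed=0):
    """Randomly split sample indices 0..q-1 into training, validation,
    and test index sets, analogous to the 70/15/15 division above."""
    idx = list(range(q))
    random.Random(seed).shuffle(idx)
    n_train = round(q * train_ratio)
    n_val = round(q * val_ratio)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# For the 699-sample cancer data set: 489 training, 105 validation, 105 test
train_idx, val_idx, test_idx = divide_indices(699)
```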
The standard network that is used for pattern recognition is a two-layer feedforward network, with a sigmoid transfer function in the hidden layer, and a softmax transfer function in the output layer. The default number of hidden neurons is set to 10. You might want to come back and increase this number if the network does not perform as well as you expect. The number of output neurons is set to 2, which is equal to the number of elements in the target vector (the number of categories).
Click Train to train the network. The training continues for 55 iterations.
Under the Plots pane, click Confusion in the Neural Network Pattern Recognition App.
The next figure shows the confusion matrices for training, testing, and validation, and the three kinds of data combined. The network outputs are very accurate, as you can see by the high numbers of correct responses in the green squares and the low numbers of incorrect responses in the red squares. The lower right blue squares illustrate the overall accuracies.
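The counts in a confusion matrix are straightforward to compute from class labels. Here is a minimal plain-Python sketch (the labels below are made up for illustration; the toolbox's plotconfusion produces the actual chart):

```python
def confusion_matrix(target_classes, predicted_classes, num_classes):
    """Count (target, predicted) pairs: entry [i][j] is the number of
    samples of true class i that the network assigned to class j."""
    m = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(target_classes, predicted_classes):
        m[t][p] += 1
    return m

# Hypothetical labels: 2 of the 6 samples are misclassified
cm = confusion_matrix([0, 0, 1, 1, 1, 0], [0, 1, 1, 1, 0, 0], 2)

# Overall accuracy is the diagonal total divided by the sample count
accuracy = sum(cm[i][i] for i in range(2)) / 6
```

The diagonal entries correspond to the green squares in the app's plot, the off-diagonal entries to the red squares.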
The colored lines in each axis represent the ROC curves. The ROC curve is a plot of the true positive rate (sensitivity) versus the false positive rate (1 - specificity) as the threshold is varied. A perfect test would show points in the upper-left corner, with 100% sensitivity and 100% specificity. For this problem, the network performs very well.
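To make the ROC construction concrete, the sketch below (plain Python, with toy scores that are not from the cancer network) sweeps the threshold over the observed output scores and records one (false positive rate, true positive rate) point per threshold:

```python
def roc_points(scores, labels):
    """Compute (false positive rate, true positive rate) pairs as the
    decision threshold sweeps over every observed score."""
    pos = sum(labels)            # number of positive samples
    neg = len(labels) - pos      # number of negative samples
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Toy scores from a hypothetical "malignant" output neuron; here the
# classes separate perfectly, so the curve passes through (0, 1)
pts = roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```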
In the Neural Network Pattern Recognition App, click Next to evaluate the network.
At this point, you can test the network against new data.
If you are dissatisfied with the network’s performance on the original or new data, you can train it again, increase the number of neurons, or perhaps get a larger training data set. If the performance on the training set is good but the test set performance is significantly worse (which could indicate overfitting), then reducing the number of neurons can improve the results.
When you are satisfied with the network performance, click Next.
Use this panel to generate a MATLAB function or Simulink diagram for simulating your neural network. You can use the generated code or diagram to better understand how your neural network computes outputs from inputs, or to deploy the network with MATLAB Compiler tools and other MATLAB code generation tools.
Click Next. Use the buttons on this screen to save your results.
You can click Simple Script or Advanced Script to create MATLAB® code that can be used to reproduce all of the previous steps from the command line. Creating MATLAB code can be helpful if you want to learn how to use the command-line functionality of the toolbox to customize the training process. In Using Command-Line Functions, you will investigate the generated scripts in more detail.
You can also save the network as net in the workspace. You can perform additional tests on it or put it to work on new inputs.
When you have saved your results, click Finish.
The easiest way to learn how to use the command-line functionality of the toolbox is to generate scripts from the GUIs, and then modify them to customize the network training. For example, look at the simple script that was created at step 14 of the previous section.
% Solve a Pattern Recognition Problem with a Neural Network
% Script generated by NPRTOOL
%
% This script assumes these variables are defined:
%
%   cancerInputs - input data.
%   cancerTargets - target data.

inputs = cancerInputs;
targets = cancerTargets;

% Create a Pattern Recognition Network
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);

% Set up Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Train the Network
[net,tr] = train(net,inputs,targets);

% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)

% View the Network
view(net)

% Plots
% Uncomment these lines to enable various plots.
% figure, plotperform(tr)
% figure, plottrainstate(tr)
% figure, plotconfusion(targets,outputs)
% figure, ploterrhist(errors)
You can save the script, and then run it from the command line to reproduce the results of the previous GUI session. You can also edit the script to customize the training process. In this case, follow each step in the script.
[inputs,targets] = cancer_dataset;
Create the network. The default network for pattern recognition problems, patternnet, is a feedforward network with the default tan-sigmoid transfer function in the hidden layer and a softmax transfer function in the output layer. You assigned ten neurons (a somewhat arbitrary choice) to the one hidden layer in the previous section.
The network has two output neurons, because there are two target values (categories) associated with each input vector.
Each output neuron represents a category.
When an input vector of the appropriate category is applied to the network, the corresponding neuron should produce a 1, and the other neurons should output a 0.
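In practice the softmax outputs are values between 0 and 1 rather than exact 0s and 1s, so the predicted category is taken to be the output neuron with the largest activation. A minimal plain-Python sketch of that decoding step (predicted_class is an illustrative helper, not a toolbox function):

```python
def predicted_class(output_vector):
    """Return the index of the output neuron with the largest activation;
    with softmax outputs this is the network's chosen category."""
    return max(range(len(output_vector)), key=lambda i: output_vector[i])

# Softmax outputs rarely equal exactly 0 or 1, so take the largest
assert predicted_class([0.93, 0.07]) == 0   # first category wins
```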
To create the network, enter these commands:
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
The choice of network architecture for pattern recognition problems follows similar
guidelines to function fitting problems. More neurons require more computation, and they have
a tendency to overfit the data when the number is set too high, but they allow the network to
solve more complicated problems. More layers require more computation, but their use might
result in the network solving complex problems more efficiently. To use more than one hidden layer, enter the hidden layer sizes as elements of an array in the patternnet command. For example, net = patternnet([10 10]) creates a network with two hidden layers of ten neurons each.
Set up the division of data.
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
With these settings, the input vectors and target vectors will be randomly divided, with 70% used for training, 15% for validation and 15% for testing.
(See “Dividing the Data” for more discussion of the data division process.)
Train the network. The pattern recognition network uses the default scaled conjugate gradient (trainscg) algorithm for training. To train the network, enter this command:
[net,tr] = train(net,inputs,targets);
During training, as in function fitting, the training window opens. This window displays training progress. To interrupt training at any point, click Stop Training.
This training stopped when the validation error increased for six iterations, which occurred at iteration 24.
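The validation-stop rule can be sketched as a simple patience counter. The sketch below is plain Python with made-up validation errors; it assumes a patience of six non-improving iterations, matching the behavior described above:

```python
def validation_stop_iteration(val_errors, patience=6):
    """Return the iteration at which training stops: when the validation
    error has failed to improve for `patience` consecutive iterations.
    Returns the last iteration index if no stop is triggered."""
    best = float("inf")
    since_best = 0
    for i, err in enumerate(val_errors):
        if err < best:
            best = err
            since_best = 0
        else:
            since_best += 1
            if since_best >= patience:
                return i
    return len(val_errors) - 1

# Toy error curve: improves through iteration 18, then rises for six
# iterations, so training stops at iteration 24
errs = [1.0 - 0.05 * i for i in range(19)] + [0.2, 0.21, 0.22, 0.23, 0.24, 0.25]
stop = validation_stop_iteration(errs)
```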
Test the network. After the network has been trained, you can use it to compute the network outputs. The following code calculates the network outputs, errors and overall performance.
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
performance = 0.0307
It is also possible to calculate the network performance only on the test set, by using the testing indices, which are located in the training record.
tInd = tr.testInd;
tstOutputs = net(inputs(:,tInd));
tstPerform = perform(net,targets(:,tInd),tstOutputs)
tstPerform = 0.0257
View the network diagram.
view(net)
Plot the training, validation, and test performance.
figure, plotperform(tr)
Use the plotconfusion function to plot the confusion matrix. It shows the various types of errors that occurred for the final trained network.
figure, plotconfusion(targets,outputs)
The diagonal cells show the number of cases that were correctly classified, and the off-diagonal cells show the misclassified cases. The blue cell in the bottom right shows the total percent of correctly classified cases (in green) and the total percent of misclassified cases (in red). The results show very good recognition. If you needed even more accurate results, you could try any of the following approaches:
Reset the initial network weights and biases to new values with init and train again.
Increase the number of hidden neurons.
Increase the number of training vectors.
Increase the number of input values, if more relevant information is available.
Try a different training algorithm (see “Training Algorithms”).
In this case, the network response is satisfactory, and you can now put the network to use on new inputs.
To get more experience in command-line operations, experiment with variations on the steps above. Also, see the advanced script for more options when training from the command line.
Each time a neural network is trained, it can produce a different solution due to different initial weight and bias values and different divisions of data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain several times.
There are several other techniques for improving upon initial solutions if higher accuracy is desired. For more information, see Improve Shallow Neural Network Generalization and Avoid Overfitting.