Process matrices by mapping each row's mean to 0 and standard deviation to 1
[Y,PS] = mapstd(X,ymean,ystd)
[Y,PS] = mapstd(X,FP)
Y = mapstd('apply',X,PS)
X = mapstd('reverse',Y,PS)
dx_dy = mapstd('dx_dy',X,Y,PS)
mapstd processes matrices by transforming the mean and standard deviation of each row to ymean and ystd.
[Y,PS] = mapstd(X,ymean,ystd) takes X and optional parameters,
X | N-by-Q matrix or a 1-by-TS row cell array of N-by-Q matrices |
ymean | Mean value for each row of Y (default is 0) |
ystd | Standard deviation for each row of Y (default is 1) |
and returns
Y | N-by-Q matrix, or 1-by-TS row cell array of N-by-Q matrices, with each row transformed |
PS | Process settings that allow consistent processing of values |
[Y,PS] = mapstd(X,FP) takes parameters as a struct: FP.ymean, FP.ystd.
Y = mapstd('apply',X,PS) returns Y, given X and settings PS.
X = mapstd('reverse',Y,PS) returns X, given Y and settings PS.
dx_dy = mapstd('dx_dy',X,Y,PS) returns the reverse derivative.
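The arithmetic behind these calls is a row-wise affine map: y = (x - xmean)*(ystd/xstd) + ymean, inverted by x = (y - ymean)*(xstd/ystd) + xmean. The following Python sketch illustrates that math only; it is not the MATLAB implementation, and the guard for zero-spread rows (mapping them to ymean) is an assumption made here for the illustration.

```python
from statistics import mean, stdev

def mapstd_apply(X, ymean=0.0, ystd=1.0):
    """Row-wise standardization: y = (x - xmean)*(ystd/xstd) + ymean.
    Rows with zero spread are mapped to ymean (an assumed guard)."""
    Y, stats = [], []
    for row in X:
        m = mean(row)
        s = stdev(row) if len(row) > 1 else 0.0
        stats.append((m, s))
        if s == 0:
            Y.append([ymean] * len(row))
        else:
            Y.append([(v - m) * (ystd / s) + ymean for v in row])
    # The returned settings play the role of PS: they record what is
    # needed to apply or reverse the same transform later.
    return Y, {"row_stats": stats, "ymean": ymean, "ystd": ystd}

def mapstd_reverse(Y, ps):
    """Invert the transform: x = (y - ymean)*(xstd/ystd) + xmean."""
    X = []
    for row, (m, s) in zip(Y, ps["row_stats"]):
        if s == 0:
            X.append([m] * len(row))
        else:
            X.append([(v - ps["ymean"]) * (s / ps["ystd"]) + m for v in row])
    return X
```

After `mapstd_apply`, each non-constant row has mean 0 and standard deviation 1, and `mapstd_reverse` recovers the original rows from the stored settings.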
Here you normalize a matrix so that each row has the default mean of 0 and standard deviation of 1.
x1 = [1 2 4; 1 1 1; 3 2 2; 0 0 0]
[y1,PS] = mapstd(x1)
Next, apply the same processing settings to new values.
x2 = [5 2 3; 1 1 1; 6 7 3; 0 0 0]
y2 = mapstd('apply',x2,PS)
Reverse the processing of y1 to get x1 again.
x1_again = mapstd('reverse',y1,PS)
Another approach for scaling network inputs and targets is to normalize the mean and standard deviation of the training set. The function mapstd normalizes the inputs and targets so that they will have zero mean and unity standard deviation. The following code illustrates the use of mapstd.
[pn,ps] = mapstd(p);
[tn,ts] = mapstd(t);
The original network inputs and targets are given in the matrices p and t. The normalized inputs and targets pn and tn that are returned will have zero means and unity standard deviation. The settings structures ps and ts contain the means and standard deviations of the original inputs and original targets. After the network has been trained, you should use these settings to transform any future inputs that are applied to the network. They effectively become a part of the network, just like the network weights and biases.
If mapstd is used to scale the targets, then the output of the network is trained to produce outputs with zero mean and unity standard deviation. To convert these outputs back into the same units that were used for the original targets, use ts. The following code simulates the network that was trained in the previous code, and then converts the network output back into the original units.
an = sim(net,pn);
a = mapstd('reverse',an,ts);
The network output an corresponds to the normalized targets tn. The unnormalized network output a is in the same units as the original targets t.
If mapstd is used to preprocess the training set data, then whenever the trained network is used with new inputs, you should preprocess them with the means and standard deviations that were computed for the training set using ps. The following commands apply a new set of inputs to the network already trained:
pnewn = mapstd('apply',pnew,ps);
anewn = sim(net,pnewn);
anew = mapstd('reverse',anewn,ts);
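The key point of this workflow is that new inputs are normalized with the statistics computed from the training set, never with their own mean and standard deviation, and outputs are mapped back with the target statistics. The Python sketch below illustrates that pattern with made-up data; the names `ps`, `ts`, `apply`, and `reverse` mirror the roles in the commands above but are hypothetical, and an identity function stands in for the trained network.

```python
from statistics import mean, stdev

# Training-set statistics, computed once (analogous to ps and ts).
p_train = [0.5, 1.5, 2.5, 3.5]        # one input row, for brevity
t_train = [10.0, 20.0, 30.0, 40.0]    # one target row
ps = (mean(p_train), stdev(p_train))  # stored input mean and std
ts = (mean(t_train), stdev(t_train))  # stored target mean and std

def apply(row, stats):                # analogous to mapstd('apply', ...)
    m, s = stats
    return [(v - m) / s for v in row]

def reverse(row, stats):              # analogous to mapstd('reverse', ...)
    m, s = stats
    return [v * s + m for v in row]

# New inputs are scaled with the *training-set* statistics.
pnew = [1.0, 3.0]
pnewn = apply(pnew, ps)
# The trained network would run on pnewn; identity stands in for sim(net, .).
anewn = pnewn
# Network outputs are converted back into original target units.
anew = reverse(anewn, ts)
```

Because `ps` and `ts` are fixed after training, the same two calls produce consistently scaled inputs and correctly de-scaled outputs for any future data.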
For most networks, including feedforwardnet, these steps are done automatically, so that you only need to use the sim command.