
newgrnn

Design generalized regression neural network

Syntax

net = newgrnn(P,T,spread)

Description

Generalized regression neural networks (grnns) are a kind of radial basis network that is often used for function approximation. grnns can be designed very quickly.

net = newgrnn(P,T,spread) takes three inputs,

P

R-by-Q matrix of Q input vectors

T

S-by-Q matrix of Q target vectors

spread

Spread of radial basis functions (default = 1.0)

and returns a new generalized regression neural network.

The larger the spread, the smoother the function approximation. To fit data very closely, use a spread smaller than the typical distance between input vectors. To fit the data more smoothly, use a larger spread.

Properties

newgrnn creates a two-layer network. The first layer has radbas neurons, and calculates weighted inputs with dist and net inputs with netprod. The second layer has purelin neurons, and calculates weighted inputs with normprod and net inputs with netsum. Only the first layer has biases.

newgrnn sets the first-layer weights to P' and the first-layer biases to 0.8326/spread, resulting in radial basis functions that cross 0.5 at weighted inputs of +/- spread. The second-layer weights W2 are set to T.
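The two layers described above can be sketched in NumPy. This is an illustrative translation of the computation, not the toolbox implementation; `grnn_predict` is a hypothetical helper name.

```python
import numpy as np

def grnn_predict(P, T, x, spread=1.0):
    """Sketch of a GRNN forward pass following newgrnn's design.

    P : (R, Q) array of Q training input vectors (stored as first-layer weights P').
    T : (S, Q) array of Q target vectors (second-layer weights).
    x : (R,) query input vector.
    """
    b = 0.8326 / spread                          # bias so radbas crosses 0.5 at +/- spread
    d = np.linalg.norm(P - x[:, None], axis=0)   # dist: Euclidean distance to each stored input
    a1 = np.exp(-(d * b) ** 2)                   # radbas applied to netprod(dist, bias)
    return T @ a1 / np.sum(a1)                   # normprod: a1-weighted average of the targets

P = np.array([[1.0, 2.0, 3.0]])
T = np.array([[2.0, 4.1, 5.9]])
y = grnn_predict(P, T, np.array([1.5]))
```

Because the second layer computes a normalized weighted average of the targets, shrinking spread makes the output at a training input converge to that input's target, which is why a small spread fits the data closely.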

Examples

Here you design a generalized regression network, given inputs P and targets T.

P = [1 2 3];
T = [2.0 4.1 5.9];
net = newgrnn(P,T);

The network is simulated for a new input.

P = 1.5;
Y = sim(net,P)

References

Wasserman, P.D., Advanced Methods in Neural Computing, New York: Van Nostrand Reinhold, 1993, pp. 155–161.
