
The goal here is to design a network that stores a specific set of equilibrium points such that, when an initial condition is provided, the network eventually comes to rest at one of those design points. The network is recursive in that the output is fed back as the input, once the network is in operation. Hopefully, the network output will settle on one of the original design points.

The design method presented is not perfect in that the designed network can have spurious undesired equilibrium points in addition to the desired ones. However, the number of these undesired points is made as small as possible by the design method. Further, the domain of attraction of the designed equilibrium points is as large as possible.

The design method is based on a system of first-order linear ordinary differential equations that are defined on a closed hypercube of the state space. The solutions exist on the boundary of the hypercube. These systems have the basic structure of the Hopfield model, but are easier to understand and design than the Hopfield model.

The material in this section is based on the following paper: Jian-Hua Li, Anthony N. Michel, and Wolfgang Porod, “Analysis and synthesis of a class of neural networks: linear systems operating on a closed hypercube,” *IEEE Trans. on Circuits and Systems*, Vol. 36, No. 11, November 1989, pp. 1405–1422.

For further information on Hopfield networks, see Chapter 18, “Hopfield Network,” of Hagan, Demuth, and Beale [HDB96].

The architecture of the Hopfield network follows.

As noted, the *input* **p** to
this network merely supplies the initial conditions.

The Hopfield network uses the saturated linear transfer function `satlins`. For inputs less than −1, `satlins` produces −1. For inputs in the range −1 to +1, it simply returns the input value. For inputs greater than +1, it produces +1.
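In code, this transfer rule is just a clamp to the interval [−1, 1]. Here is a minimal Python sketch of that behavior (the Python helper is illustrative only; in MATLAB you would call `satlins` directly):

```python
def satlins(x):
    """Symmetric saturating linear transfer: clamp x to [-1, 1]."""
    return max(-1.0, min(1.0, x))

# Saturation below -1, pass-through in [-1, 1], saturation above +1
print([satlins(v) for v in (-2.0, -0.5, 0.5, 2.0)])  # [-1.0, -0.5, 0.5, 1.0]
```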

This network can be tested with one or more input vectors that are presented as initial conditions to the network. After the initial conditions are given, the network produces an output that is then fed back to become the input. This process is repeated over and over until the output stabilizes. Hopefully, each output vector eventually converges to the design equilibrium point that is closest to the input that provoked it.
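This feedback process can be sketched in a few lines. The following Python toy (the single-neuron weight `w = 2` is an illustrative choice, not a `newhop` design) feeds the output back as input until it stops changing; with this weight the stored stable points are −1 and +1:

```python
def satlins(x):
    """Symmetric saturating linear transfer: clamp x to [-1, 1]."""
    return max(-1.0, min(1.0, x))

def run(w, b, a, max_steps=100):
    """Feed the output back as input until it settles on an equilibrium."""
    for _ in range(max_steps):
        a_next = satlins(w * a + b)
        if a_next == a:          # output has stabilized
            return a_next
        a = a_next
    return a

# With w = 2, b = 0, any nonzero initial condition is pushed
# outward until it saturates at the nearer stable point.
print(run(2.0, 0.0, 0.1))    # 1.0
print(run(2.0, 0.0, -0.1))   # -1.0
```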

Li et al. [LiMi89]
have studied a system that has the basic structure of the Hopfield
network but is, in Li's own words, “easier to analyze, synthesize,
and implement than the Hopfield model.” The authors are enthusiastic
about the reference article, as it has many excellent points and is
one of the most readable in the field. However, the design is mathematically
complex, and even a short justification of it would burden this guide.
Thus the Li design method is presented, with thanks to Li et al., as a recipe that is found in the function `newhop`.

Given a set of target equilibrium points represented as a matrix **T** of vectors, `newhop` returns weights and biases for a recursive network. The network is guaranteed to have stable equilibrium points at the target vectors, but it could contain other spurious equilibrium points as well. The number of these undesired points is made as small as possible by the design method.

Once the network has been designed, it can be tested with one or more input vectors. Hopefully those input vectors close to target equilibrium points will find their targets. As suggested by the network figure, an array of input vectors can be presented one at a time or in a batch. The network proceeds to give output vectors that are fed back as inputs. These output vectors can be compared to the target vectors to see how the solution is proceeding.

The ability to run batches of trial input vectors quickly allows you to check the design in a relatively short time. First you might check to see that the target equilibrium point vectors are indeed contained in the network. Then you could try other input vectors to determine the domains of attraction of the target equilibrium points and the locations of spurious equilibrium points if they are present.

Consider the following design example. Suppose that you want to design a network with two stable points in a three-dimensional space.

T = [-1 -1 1; 1 -1 1]'
T =
    -1     1
    -1    -1
     1     1

You can execute the design with

net = newhop(T);

Next, check to make sure that the designed network is at these two points, as follows. Because Hopfield networks have no inputs, the first argument to the network is an empty cell array whose columns indicate the number of time steps.

Ai = {T};
[Y,Pf,Af] = net(cell(1,2),{},Ai);
Y{2}

This gives you

    -1     1
    -1    -1
     1     1

Thus, the network has indeed been designed to be stable at its design points. Next you can try another input condition that is not a design point, such as

Ai = {[-0.9; -0.8; 0.7]};

This point is reasonably close to the first design point, so you might anticipate that the network would converge to that first point. To see if this happens, run the following code.

[Y,Pf,Af] = net(cell(1,5),{},Ai);
Y{end}

This produces

    -1
    -1
     1

Thus, an initial condition close to a design point did converge to that point.

This is, of course, the hope for all such inputs. Unfortunately, even the best known Hopfield designs occasionally include spurious undesired stable points that attract the solution.

Consider a Hopfield network with just two neurons. Each neuron has a bias and weights to accommodate two-element input vectors. The target equilibrium points are defined to be stored in the network as the two columns of the matrix **T**.

T = [1 -1; -1 1]'
T =
     1    -1
    -1     1

Here is a plot of the Hopfield state space with the two stable points labeled with * markers.

These target stable points are given to `newhop` to obtain weights and biases of a Hopfield network.

net = newhop(T);

The design returns a set of weights and a bias for each neuron. The results are obtained from

W = net.LW{1,1}

which gives

W =
    0.6925   -0.4694
   -0.4694    0.6925

and from

b = net.b{1,1}

which gives

b =
     0
     0

Next, test the design with the target vectors **T** to see if they are stored in the network. The targets are used as inputs for the simulation function `sim`.

Ai = {T};
[Y,Pf,Af] = net(cell(1,2),{},Ai);
Y = Y{end}
Y =
     1    -1
    -1     1

As hoped, the new network outputs are the target vectors. The solution stays at its initial conditions after a single update and, therefore, will stay there for any number of updates.
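The same check can be reproduced outside MATLAB. The NumPy sketch below applies a synchronous update a ← satlins(Wa + b), using the weights and zero bias reported above for this two-neuron example; this is a simplification of the toolbox's recurrent simulation, offered for illustration only. Both target vectors are fixed points of the update, and a nearby starting point is pulled into the closest one:

```python
import numpy as np

def satlins(x):
    """Symmetric saturating linear transfer: clip to [-1, 1] elementwise."""
    return np.clip(x, -1.0, 1.0)

def run(W, b, a, max_steps=100):
    """Iterate a = satlins(W @ a + b) until the state stops changing."""
    a = np.asarray(a, dtype=float)
    for _ in range(max_steps):
        a_next = satlins(W @ a + b)
        if np.array_equal(a_next, a):   # reached an equilibrium point
            return a_next
        a = a_next
    return a

# Weights and bias reported by newhop above for this example
W = np.array([[ 0.6925, -0.4694],
              [-0.4694,  0.6925]])
b = np.zeros(2)

print(run(W, b, [1.0, -1.0]))    # stays at the stored point [ 1. -1.]
print(run(W, b, [0.9, -0.8]))    # settles at the nearby target [ 1. -1.]
```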

Now you might wonder how the network performs with various random input vectors. Here is a plot showing the paths that the network took through its state space to arrive at a target point.

This plot shows the trajectories of the solution for various starting points. You can try the example `demohop1` to see more of this kind of network behavior.

Hopfield networks can be designed for an arbitrary number of dimensions. You can try `demohop3` to see a three-dimensional design.

Unfortunately, Hopfield networks can have both unstable equilibrium points and spurious stable points. You can try examples `demohop2` and `demohop4` to investigate these issues.

Hopfield networks can act as error correction or vector categorization networks. Input vectors are used as the initial conditions to the network, which recurrently updates until it reaches a stable output vector.

Hopfield networks are interesting from a theoretical standpoint, but are seldom used in practice. Even the best Hopfield designs may have spurious stable points that lead to incorrect answers. More efficient and reliable error correction techniques, such as backpropagation, are available.

This topic introduces the following functions:

Function | Description
---|---
`newhop` | Create a Hopfield recurrent network.
`satlins` | Symmetric saturating linear transfer function.
