
The basic structure of a Mamdani fuzzy inference system is a model that maps input characteristics to input membership functions, input membership functions to rules, rules to a set of output characteristics, output characteristics to output membership functions, and the output membership functions to a single-valued output or a decision associated with the output. Such a system uses fixed membership functions that are chosen arbitrarily and a rule structure that is essentially predetermined by the user's interpretation of the characteristics of the variables in the model.

`anfis` and the Neuro-Fuzzy Designer apply fuzzy inference techniques to data modeling. As you have seen from the other fuzzy inference GUIs, the shape of a membership function depends on its parameters, and changing these parameters changes the shape of the membership function. Instead of just looking at the data to choose the membership function parameters, you can choose these parameters automatically using these Fuzzy Logic Toolbox™ applications.

Suppose you want to apply fuzzy inference to a system for which you already have a collection of input/output data that you would like to use for modeling, model-following, or some similar scenario. You do not necessarily have a predetermined model structure based on characteristics of variables in your system.

In some modeling situations, you cannot discern what the membership functions should look like simply from looking at data. Rather than choosing the parameters associated with a given membership function arbitrarily, you can choose these parameters so as to tailor the membership functions to the input/output data, accounting for these types of variations in the data values. In such cases, you can use the Fuzzy Logic Toolbox *neuro-adaptive* learning techniques incorporated in the `anfis` command.

The neuro-adaptive learning method works similarly to that of neural networks. Neuro-adaptive learning techniques provide a method for the fuzzy modeling procedure to *learn* information about a data set. Fuzzy Logic Toolbox software computes the membership function parameters that best allow the associated fuzzy inference system to track the given input/output data. The Fuzzy Logic Toolbox function that accomplishes this membership function parameter adjustment is called `anfis`. The `anfis` function can be accessed either from the command line or through the **Neuro-Fuzzy Designer**. Because the functionality of the command-line function `anfis` and the **Neuro-Fuzzy Designer** is similar, they are used somewhat interchangeably in this discussion, except when specifically describing the GUI.

The acronym ANFIS derives its name from *adaptive neuro-fuzzy inference system*. Using a given input/output data set, the toolbox function `anfis` constructs a fuzzy inference system (FIS) whose membership function parameters are tuned (adjusted) using either a backpropagation algorithm alone or in combination with a least-squares type of method. This adjustment allows your fuzzy systems to learn from the data they are modeling.
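The least-squares part of this hybrid scheme can be illustrated with a small sketch. The following Python example is not the toolbox implementation; it only shows how the linear output parameters `p` and `r` of a single first-order rule, y ≈ p·x + r, can be estimated in one shot by solving the 2×2 normal equations, while the membership function parameters are held fixed.

```python
def least_squares_line(data):
    """Estimate (p, r) in y ≈ p*x + r by solving the 2x2 normal equations."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    det = n * sxx - sx * sx          # determinant of the normal-equation matrix
    p = (n * sxy - sx * sy) / det
    r = (sxx * sy - sx * sxy) / det
    return p, r

# Data sampled exactly from y = 2x + 1, so the fit recovers p = 2, r = 1
p, r = least_squares_line([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```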

The input/output map can be interpreted using a network-type structure similar to that of a neural network, which maps inputs through input membership functions and associated parameters, and then through output membership functions and associated parameters, to outputs.
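As a concrete illustration of such a network, the following Python sketch evaluates a tiny first-order Sugeno-type system with one input, two rules, and Gaussian input membership functions. The structure and all parameter values here are hypothetical, chosen only to show how an input flows through membership functions and normalized rule weights to a single output.

```python
import math

def gauss_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x, premise, consequent):
    """One forward pass through a 1-input, 2-rule first-order Sugeno network.

    premise:    [(c, sigma), ...] parameters of the input membership functions
    consequent: [(p, r), ...] linear output parameters, one (p, r) per rule
    """
    # Layers 1-2: firing strength of each rule from its membership function
    w = [gauss_mf(x, c, s) for (c, s) in premise]
    # Layer 3: normalize the firing strengths
    total = sum(w)
    w_norm = [wi / total for wi in w]
    # Layers 4-5: weighted sum of the per-rule linear outputs p*x + r
    return sum(wn * (p * x + r) for wn, (p, r) in zip(w_norm, consequent))

y = anfis_forward(0.5,
                  premise=[(0.0, 1.0), (1.0, 1.0)],
                  consequent=[(1.0, 0.0), (2.0, 1.0)])
```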

The parameters associated with the membership functions change through the learning process. The computation of these parameters (or their adjustment) is facilitated by a gradient vector. This gradient vector provides a measure of how well the fuzzy inference system is modeling the input/output data for a given set of parameters. When the gradient vector is obtained, any of several optimization routines can be applied to adjust the parameters and reduce some error measure. This error measure is usually defined by the sum of the squared differences between actual and desired outputs. `anfis` uses either backpropagation or a combination of least-squares estimation and backpropagation for membership function parameter estimation.
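To make the role of the gradient vector concrete, the following Python sketch tunes the center and width of one Gaussian membership function by plain gradient descent on a sum-of-squared-errors measure. It uses a finite-difference gradient for simplicity; this is an illustrative stand-in for the idea, not the `anfis` algorithm, and the data and step size are arbitrary.

```python
import math

def gauss_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sse(params, data):
    """Sum of squared differences between desired and actual outputs."""
    c, sigma = params
    return sum((y - gauss_mf(x, c, sigma)) ** 2 for x, y in data)

def numeric_gradient(params, data, h=1e-6):
    """Finite-difference approximation of the SSE gradient vector."""
    base = sse(params, data)
    grad = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += h
        grad.append((sse(bumped, data) - base) / h)
    return grad

data = [(0.0, 1.0), (0.5, 0.6), (1.0, 0.1)]   # desired input/output pairs
params = [0.5, 1.0]                            # initial center and width
initial_error = sse(params, data)
for _ in range(500):
    g = numeric_gradient(params, data)
    params = [p - 0.1 * gi for p, gi in zip(params, g)]
final_error = sse(params, data)
```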

The modeling approach used by `anfis` is similar to many system identification techniques. First, you hypothesize a parameterized model structure (relating inputs to membership functions, to rules, to output membership functions, to outputs, and so on). Next, you collect input/output data in a form that is usable by `anfis` for training. You can then use `anfis` to *train* the FIS model to emulate the training data presented to it by modifying the membership function parameters according to a chosen error criterion.

In general, this type of modeling works well if the training data presented to `anfis` for training (estimating) membership function parameters is fully representative of the features of the data that the trained FIS is intended to model. In some cases, however, data is collected using noisy measurements, and the training data cannot be representative of all the features of the data that will be presented to the model. In such situations, model validation is helpful.

**Model Validation Using Testing and Checking Data Sets.** *Model validation* is the process by which input vectors from input/output data sets on which the FIS was not trained are presented to the trained FIS model, to see how well the FIS model predicts the corresponding data set output values.

One problem with model validation for models constructed using adaptive techniques is selecting a data set that is both representative of the data the trained model is intended to emulate and sufficiently distinct from the training data set so as not to render the validation process trivial.

If you have collected a large amount of data, ideally this data contains all the necessary representative features, so the process of selecting a data set for checking or testing purposes is easier. However, if you expect to present noisy measurements to your model, it is possible that the training data set does not include all of the representative features you want to model.

The testing data set lets you check the generalization capability of the resulting fuzzy inference system. The idea behind using a checking data set for model validation is that after a certain point in the training, the model begins overfitting the training data set. In principle, the model error for the checking data set tends to decrease as the training takes place up to the point that overfitting begins, and then the model error for the checking data suddenly increases. Overfitting is accounted for by testing the FIS trained on the training data against the checking data, and choosing the membership function parameters to be those associated with the minimum checking error if these errors indicate model overfitting.
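This parameter-selection step can be sketched in a few lines of Python. The parameter snapshots and error values below are simulated, not produced by `anfis`; the point is only that the parameters kept are those from the epoch at which the checking error bottoms out, before overfitting drives it back up.

```python
def select_by_checking_error(snapshots, checking_errors):
    """Return the parameter snapshot at the minimum checking error.

    snapshots:       model parameters saved after each training epoch
    checking_errors: model error on the checking data at each epoch
    """
    best = min(range(len(checking_errors)), key=checking_errors.__getitem__)
    return snapshots[best], checking_errors[best]

# Simulated run: checking error falls, then rises once overfitting begins
snapshots = [f"params@epoch{i}" for i in range(8)]
errors = [0.90, 0.55, 0.38, 0.30, 0.26, 0.29, 0.37, 0.52]
best_params, best_err = select_by_checking_error(snapshots, errors)
```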

Usually, these training and checking data sets are collected based on observations of the target system and are then stored in separate files.
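If instead you start from a single collected data set, a common way to obtain the two sets is a random split. The following Python sketch is a generic illustration (not a toolbox workflow); the split fraction, seed, and toy target system are arbitrary choices.

```python
import random

def split_data(samples, train_fraction=0.7, seed=0):
    """Shuffle input/output samples and split them into training and checking sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Toy input/output observations of a target system y = x^2
samples = [(x / 10.0, (x / 10.0) ** 2) for x in range(10)]
training, checking = split_data(samples)
```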

In the first example, two similar data sets are used for checking and training, but the checking data set is corrupted by a small amount of noise. This example illustrates the use of the **Neuro-Fuzzy Designer** with checking data to reduce the effect of model overfitting. In the second example, a training data set that is presented to `anfis` is sufficiently different from the applied checking data set. By examining the checking error sequence over the training period, it is clear that the checking data set is not good for model validation purposes. This example illustrates the use of the **Neuro-Fuzzy Designer** to compare data sets.

