Importance of attributes (predictors) using ReliefF algorithm
[RANKED,WEIGHT] = relieff(X,Y,K)
[RANKED,WEIGHT] = relieff(X,Y,K,'PARAM1',val1,'PARAM2',val2,...)
[RANKED,WEIGHT] = relieff(X,Y,K) computes ranks and weights of attributes
(predictors) for input data matrix X and response vector Y using the ReliefF
algorithm for classification or the RReliefF algorithm for regression with K
nearest neighbors. For classification, relieff uses K nearest neighbors per
class. RANKED are indices of columns in X ordered by attribute importance,
meaning RANKED(1) is the index of the most important predictor. WEIGHT are
attribute weights ranging from -1 to 1, with large positive weights assigned
to important attributes.
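To make the weight computation concrete, here is a minimal pure-Python sketch of the ReliefF update for classification. It is illustrative only, not MATLAB's relieff: the function name relieff_sketch is hypothetical, predictors are assumed numeric, per-feature differences are normalized by the feature range, class priors are estimated from class frequencies, and every observation is used (no 'updates' subsampling or 'sigma' distance scaling).

```python
import math
from collections import defaultdict

def relieff_sketch(X, y, k=1):
    """Illustrative ReliefF weight update for classification.

    A simplified sketch, not MATLAB's relieff: numeric predictors only,
    range-normalized absolute difference as the diff() function, and
    class-frequency priors.
    """
    n, p = len(X), len(X[0])
    # per-feature range, used to normalize diff() into [0, 1]
    rng = [(max(row[j] for row in X) - min(row[j] for row in X)) or 1.0
           for j in range(p)]
    diff = lambda j, a, b: abs(a[j] - b[j]) / rng[j]
    dist = lambda a, b: sum(diff(j, a, b) for j in range(p))

    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    prior = {c: len(v) / n for c, v in by_class.items()}

    W = [0.0] * p
    for xi, yi in zip(X, y):
        for c in by_class:
            # k nearest neighbors of xi within class c, excluding xi itself
            neigh = sorted((v for v in by_class[c] if v is not xi),
                           key=lambda v: dist(xi, v))[:k]
            for v in neigh:
                for j in range(p):
                    if c == yi:
                        # nearest hits: differing values penalize the feature
                        W[j] -= diff(j, xi, v) / (n * k)
                    else:
                        # nearest misses: differing values reward the feature,
                        # weighted by the prior of the miss class
                        W[j] += (prior[c] / (1 - prior[yi])) \
                                * diff(j, xi, v) / (n * k)
    ranked = sorted(range(p), key=lambda j: -W[j])
    return ranked, W

# toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.0, 5.0], [0.1, 1.0], [0.2, 9.0],
     [1.0, 2.0], [0.9, 8.0], [1.1, 4.0]]
y = [0, 0, 0, 1, 1, 1]
ranked, W = relieff_sketch(X, y, k=2)
print(ranked, W)  # feature 0 ranks first with a large positive weight
```

The informative feature accumulates large miss differences and small hit differences, so its weight is pushed toward +1, while the noise feature drifts toward or below zero.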
If Y is numeric, relieff by default performs RReliefF analysis for
regression. If Y is categorical, logical, a character array, or a cell array
of character vectors, relieff by default performs ReliefF analysis for
classification.
Attribute ranks and weights computed by relieff usually depend on K. If you
set K to 1, the estimates computed by relieff can be unreliable for noisy
data. If you set K to a value comparable with the number of observations
(rows) in X, relieff can fail to find important attributes. You can start
with K = 10 and investigate the stability and reliability of relieff ranks
and weights for various values of K.
[RANKED,WEIGHT] = relieff(X,Y,K,'PARAM1',val1,'PARAM2',val2,...) specifies
optional parameter name/value pairs. Specify optional comma-separated pairs
of Name,Value arguments. Name is the argument name and Value is the
corresponding value. Name must appear inside single quotes (' '). You can
specify several name and value pair arguments in any order as
Name1,Value1,...,NameN,ValueN.

'method'  - Either 'regression' (default when Y is numeric) or
            'classification' (default when Y is not numeric).

'prior'   - Prior probabilities for each class, specified as 'empirical'
            (default), 'uniform', or a numeric vector with one element per
            class. If the input value is 'empirical', relieff derives the
            class probabilities from the class frequencies in Y.

'updates' - Number of observations to select at random for computing the
            weight of every attribute. By default all observations are used.

'sigma'   - Distance scaling factor. For observation i, the influence on the
            attribute weight from its nearest neighbor j is multiplied by
            exp(-(rank(i,j)/sigma)^2), where rank(i,j) is the position of j
            among the nearest neighbors of i sorted by distance.
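To illustrate the effect of the distance scaling factor, the sketch below assumes the scaling has the form exp(-(rank/sigma)^2), where rank is the position of a neighbor in the sorted distance order, and normalizes the K multipliers to sum to 1 (the normalization is a choice made here for readability, not a claim about relieff's internals):

```python
import math

def distance_scaling(k, sigma):
    """Relative influence of the 1st..kth nearest neighbor, assuming
    the factor exp(-(rank/sigma)^2); normalized to sum to 1 here."""
    raw = [math.exp(-(rank / sigma) ** 2) for rank in range(1, k + 1)]
    total = sum(raw)
    return [w / total for w in raw]

# a small sigma concentrates influence on the closest neighbors
print(distance_scaling(5, 2.0))
# sigma = Inf gives every one of the k neighbors equal influence
print(distance_scaling(5, float("inf")))
```

This is why a finite sigma makes the weight estimates less sensitive to the exact choice of K: distant neighbors contribute almost nothing.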
[1] Kononenko, I., Simec, E., & Robnik-Sikonja, M. (1997). Overcoming the
myopia of inductive learning algorithms with RELIEFF. Retrieved from
CiteSeerX: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.4740

[2] Robnik-Sikonja, M., & Kononenko, I. (1997). An adaptation of Relief for
attribute estimation in regression. Retrieved from CiteSeerX:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.8381

[3] Robnik-Sikonja, M., & Kononenko, I. (2003). Theoretical and empirical
analysis of ReliefF and RReliefF. Machine Learning, 53, 23–69.