Identify influential features to improve model performance
Feature selection is a dimensionality reduction technique that selects only a subset of measured features (predictor variables) that provide the best predictive power in modeling the data. It is particularly useful when dealing with very high-dimensional data or when modeling with all features is undesirable.
Feature selection can be used to:
- Improve the accuracy of a machine learning algorithm
- Boost the performance on very high-dimensional data
- Improve model interpretability
- Prevent overfitting
There are several common approaches to feature selection:
- Stepwise regression sequentially adds or removes features until prediction no longer improves; it is used with linear regression or generalized linear regression models.
- Sequential feature selection is similar to stepwise regression but can be used with any supervised learning algorithm and a custom performance measure.
- Boosted and bagged decision trees are ensemble methods that rank features by estimated importance; bagged trees can compute importance from out-of-bag estimates.
- Regularization methods (lasso and elastic net) are shrinkage estimators that remove redundant features by driving their weights (coefficients) to zero.
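The regularization approach above can be sketched with scikit-learn's Lasso in Python; this is an illustrative analogue, not the MATLAB Statistics Toolbox API discussed in this article, and the synthetic data and penalty strength are assumptions.

```python
# Sketch: lasso-based feature selection with scikit-learn (illustrative
# analogue of the regularization approach; not MATLAB Statistics Toolbox).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 10 candidate features
# Only features 0 and 1 actually influence the response.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

# The L1 penalty shrinks the weights of uninformative features
# exactly to zero; the surviving nonzero coefficients identify
# the selected features.
model = Lasso(alpha=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)
```

Larger values of `alpha` shrink more aggressively and select fewer features; in practice the penalty is usually chosen by cross-validation.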
Another dimensionality reduction approach is feature extraction (also called feature transformation), which transforms the existing features into new features (predictor variables), after which the less descriptive features can be dropped.
Approaches to feature transformation include:
- Principal component analysis (PCA)
- Factor analysis
- Nonnegative matrix factorization
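The transformation idea can be sketched with scikit-learn's PCA in Python; again this is an illustrative analogue rather than the MATLAB Statistics Toolbox workflow, and the synthetic data and 95% variance threshold are assumptions.

```python
# Sketch: feature transformation via PCA with scikit-learn
# (illustrative analogue; not MATLAB Statistics Toolbox).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Five measured features that are noisy mixtures of two latent factors.
latent = rng.normal(size=(300, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(300, 5))

# Keep only enough components to explain 95% of the variance;
# the transformed components replace the original features.
pca = PCA(n_components=0.95).fit(X)
Z = pca.transform(X)
print(Z.shape)  # far fewer columns than X
```

Unlike feature selection, the new features are combinations of the originals, so some interpretability is traded for a more compact representation.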
For more information on feature selection, regression, and feature transformation, see Statistics Toolbox™ for use with MATLAB®.
See also: Statistics Toolbox, AdaBoost, machine learning, linear model, regularization