Machine Learning and Deep Learning with Wavelet Scattering | Understanding Wavelets, Part 5
From the series: Understanding Wavelets
Wavelet scattering networks help you obtain low-variance features from signals and images for use in machine learning and deep learning applications. Scattering networks help you automatically obtain features that minimize differences within a class while preserving discriminability across classes. An important distinction between the scattering network and the deep learning framework is that the filters are defined a priori, as opposed to being learned as in the case of deep convolutional networks. Because the scattering transform is not required to learn the filters, you can often use scattering successfully in situations where there is a shortage of training data. You can also visualize and interpret the features extracted by the wavelet scattering network. Once the features are extracted, you can train and evaluate various machine learning algorithms, such as support vector machines (SVMs) and random forests, or deep learning algorithms, such as long short-term memory (LSTM) networks.
In this video, we will discuss the wavelet scattering transform and how it can be used as an automatic, robust feature extractor for classification. We will cover how the wavelet scattering technique works for signals, but the same technique can also be applied to images.
Wavelet scattering is best understood in the context of deep convolutional networks, or deep CNNs, which some of you may already be familiar with.
At a high level, deep convolutional networks filter the data, apply some nonlinearity, and pool or average the output. These steps are repeated to form the layers.
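The three operations described above can be sketched in a few lines. This is a minimal NumPy illustration, not a real CNN layer: the smoothing kernel, the ReLU nonlinearity, and the pooling width of 4 are all illustrative choices, not taken from the video.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # toy 1-D input signal

# 1. Convolution: filter the signal with a small kernel
kernel = np.array([0.25, 0.5, 0.25])  # illustrative smoothing kernel
filtered = np.convolve(x, kernel, mode="same")

# 2. Nonlinearity: apply ReLU pointwise
activated = np.maximum(filtered, 0.0)

# 3. Pooling: average over non-overlapping windows of 4 samples
pooled = activated.reshape(-1, 4).mean(axis=1)

print(pooled.shape)                   # (64,)
```

Stacking several such filter–nonlinearity–pool stages, with learned kernels instead of a fixed one, is what forms the layers of a deep CNN.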
There are a few challenges with deep CNNs:
First: These models typically require large datasets and significant computing resources for training and evaluation.
Second: Typically, you must choose many settings for your network, and because these settings do not affect performance independently, tuning them is difficult.
Lastly: It can be difficult to understand and interpret the features that are extracted.
Now that you have this background, let’s see how wavelet scattering addresses these challenges.
The motivation behind wavelet scattering networks is to start with a set of known filters, since the filters in fully trained networks often resemble wavelets.
The main difference is that the filter weights are learned in the case of convolutional neural networks, while the filter weights are fixed in the case of wavelet scattering networks.
Now, let’s dive into the details of the network:
An input signal is first averaged using wavelet lowpass filters. This is the Layer-0 scattering feature. With the averaging operation, you lose high-frequency detail in the signal.
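As a rough sketch of this Layer-0 averaging, the following NumPy snippet lowpass-filters a two-tone signal. The Gaussian window here is only a stand-in for the wavelet lowpass filter phi used in an actual scattering network, and the signal and filter widths are illustrative.

```python
import numpy as np

def gaussian_lowpass(width, sigma):
    # Stand-in for the wavelet lowpass (averaging) filter phi
    t = np.arange(width) - (width - 1) / 2
    phi = np.exp(-t**2 / (2 * sigma**2))
    return phi / phi.sum()            # normalize so averaging preserves the mean

t = np.linspace(0, 1, 512, endpoint=False)
# Slow 5 Hz component plus a fast 60 Hz detail
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

phi = gaussian_lowpass(65, sigma=8.0)
S0 = np.convolve(x, phi, mode="same")  # Layer-0 scattering coefficients

# Averaging removes the 60 Hz detail, so the output has less energy
print(np.sum(S0**2), np.sum(x**2))
```

The energy lost here is exactly the high-frequency detail that the subsequent layers are designed to recover.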
The details lost in the first step are captured at the subsequent layer by performing a continuous wavelet transform of the signal to yield a set of scalogram coefficients. A nonlinear operator (in this case a modulus) is applied on the scalogram coefficients and then the output is filtered with the wavelet lowpass filter, yielding a set of Layer-1 scattering coefficients.
The same process is repeated to obtain the Layer-2 scattering coefficients: the scalogram coefficients from the previous layer become the input to the wavelet transform in the next layer. We again apply the modulus operator and filter the output with the wavelet lowpass function to yield the Layer-2 scattering coefficients.
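The cascade described above can be sketched as follows. This is an illustrative NumPy sketch, not the calibrated filter bank a real scattering network uses: Gabor atoms stand in for the bandpass wavelets, a Gaussian stands in for the lowpass filter phi, and only three bands are used per layer.

```python
import numpy as np

def gabor(width, center_freq, sigma):
    # Stand-in complex bandpass "wavelet" psi centered at center_freq
    t = np.arange(width) - (width - 1) / 2
    return np.exp(1j * 2 * np.pi * center_freq * t) * np.exp(-t**2 / (2 * sigma**2))

def lowpass(width, sigma):
    # Stand-in lowpass (averaging) filter phi
    t = np.arange(width) - (width - 1) / 2
    phi = np.exp(-t**2 / (2 * sigma**2))
    return phi / phi.sum()

def conv(x, h):
    return np.convolve(x, h, mode="same")

rng = np.random.default_rng(1)
x = rng.standard_normal(512)
phi = lowpass(65, 8.0)
filter_bank = [gabor(65, f, 8.0) for f in (0.05, 0.1, 0.2)]  # 3 bands

# Layer 1: wavelet transform -> modulus -> lowpass average
U1 = [np.abs(conv(x, psi)) for psi in filter_bank]   # scalogram paths
S1 = [conv(u, phi) for u in U1]                      # Layer-1 coefficients

# Layer 2: each Layer-1 path is scattered again through the same cascade
U2 = [np.abs(conv(u, psi)) for u in U1 for psi in filter_bank]
S2 = [conv(u, phi) for u in U2]                      # Layer-2 coefficients

print(len(S1), len(S2))   # 3 paths at layer 1, 9 at layer 2
```

In practice, implementations also prune layer-2 paths that carry negligible energy, which is one reason the cascade stays computationally tractable.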
You can have more than three layers in the scattering network, but in practice, the energy dissipates with every iteration, so three layers are enough for most applications. The coefficients are typically downsampled to reduce the computational complexity of the network. These coefficients are collectively referred to as the scattering features. You can also visualize and interpret these features.
A wavelet scattering network is referred to as a deep network because it performs the three main tasks that make up a deep network:
Convolution, Nonlinearity, And pooling
In this case, convolution is performed by wavelets, the modulus operator serves as the nonlinearity, and filtering with wavelet lowpass filters is analogous to pooling.
This way, you can use the features obtained from the wavelet scattering network and build models that can classify your data. For more information and examples, please refer to the documentation section of Wavelet Toolbox.
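As a final, dependency-free sketch of this last step, the snippet below trains a nearest-centroid rule on stand-in feature vectors. The random Gaussian features here are placeholders for real per-signal scattering features, and the nearest-centroid classifier is just the simplest possible choice; in practice you might use an SVM, a random forest, or an LSTM as mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_class, n_features = 20, 30

# Stand-in "scattering features" for two classes with different means
X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))
X1 = rng.normal(2.0, 1.0, (n_per_class, n_features))

# Train: compute one centroid per class from the training features
centroids = np.stack([X0.mean(axis=0), X1.mean(axis=0)])

def predict(x):
    # Assign to the class whose centroid is closest in Euclidean distance
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

test_sample = rng.normal(2.0, 1.0, n_features)  # drawn from class 1
print(predict(test_sample))
```

Any classifier that accepts a fixed-length feature vector can be dropped in at this stage, which is what makes the scattering features a convenient front end for both classic machine learning and deep learning models.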