Visualize network features using deep dream
Load a pretrained AlexNet network.
net = alexnet;
Visualize the first 25 features learned by the first convolutional layer ('conv1'). Set
'PyramidLevels' to 1 so that the images are not scaled.
layer = 'conv1';
channels = 1:25;

I = deepDreamImage(net,layer,channels, ...
    'PyramidLevels',1, ...
    'Verbose',0);

figure
for i = 1:25
    subplot(5,5,i)
    imshow(I(:,:,:,i))
end
net— Trained network
Trained network, specified as a
SeriesNetwork object or a
DAGNetwork object. You can get a trained network by importing
a pretrained network or by training your own network using the
trainNetwork function. For more
information about pretrained networks, see Pretrained Deep Neural Networks.
deepDreamImage only supports networks with an image input layer.
layer— Layer index or name
Layer to visualize, specified as a positive integer, a character vector, or a string scalar. If
net is a
DAGNetwork object, specify
layer as a character vector or string scalar only. Specify
layer as the index or the name of the layer whose
activations you want to visualize. To visualize classification layer
features, select the last fully connected layer before the classification layer.
Selecting ReLU or dropout layers for visualization may not produce useful images because of the effect that these layers have on the network gradients.
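For example, with the AlexNet network from the earlier example, the last fully connected layer before the classification layer is 'fc8'. This is a sketch; layer names vary by network, so inspect net.Layers to confirm the name for your network.

```matlab
% Load the pretrained network and list its layers to find the
% last fully connected layer before the classification layer.
net = alexnet;
net.Layers

% For AlexNet, that layer is 'fc8'. Visualize the features
% corresponding to the first five class channels.
layer = 'fc8';
channels = 1:5;
I = deepDreamImage(net,layer,channels);
```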
channels— Channel index
Queried channels, specified as a scalar or a vector of channel indices. If
channels is a vector, the layer activations
for each channel are optimized independently. The possible choices for
channels depend on the selected layer. For
convolutional layers, the
NumFilters property specifies
the number of output channels. For fully connected layers, the
OutputSize property specifies the number of output channels.
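As a sketch, you can inspect these layer properties to find the valid range of channel indices. The layer indices below assume the AlexNet network from the earlier example; check net.Layers for your own network.

```matlab
net = alexnet;

% For a convolutional layer, NumFilters gives the number of
% output channels ('conv1' is layer 2 in AlexNet).
convLayer = net.Layers(2);
numChannels = convLayer.NumFilters

% For a fully connected layer, OutputSize gives the number of
% output channels ('fc8' is layer 23 in AlexNet).
fcLayer = net.Layers(23);
numClasses = fcLayer.OutputSize

% Valid channel indices for the convolutional layer:
channels = 1:numChannels;
```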
Specify optional comma-separated pairs of
Name,Value arguments.
Name is the argument name and
Value is the corresponding value.
Name must appear inside quotes. You can specify several name and value
pair arguments in any order as
Name1,Value1,...,NameN,ValueN.
Example: deepDreamImage(net,layer,channels,'NumIterations',100,'ExecutionEnvironment','gpu') generates images using 100 iterations per pyramid level and uses the GPU.
InitialImage— Image to initialize Deep Dream
Image to initialize Deep Dream. Use this syntax to see how an image is modified to maximize network layer activations. The minimum height and width of the initial image depend on all the layers up to and including the selected layer:
For layers towards the end of the network, the initial image must be at least the same height and width as the image input layer.
For layers towards the beginning of the network, the height and width of the initial image can be smaller than the image input layer. However, it must be large enough to produce a scalar output at the selected layer.
The number of channels of the initial image must match the number of channels in the image input layer of the network.
If you do not specify an initial image, the software uses a
random image with pixels drawn from a standard normal distribution.
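For example, the following sketch starts deep dream from an existing image instead of random noise. It assumes an RGB image resized to match the network's image input layer; 'peppers.png' is a standard MATLAB sample image, and 'conv5' is a late convolutional layer in AlexNet.

```matlab
net = alexnet;

% Read an image and resize it to the network's input size.
img = imread('peppers.png');
inputSize = net.Layers(1).InputSize(1:2);
img = imresize(img,inputSize);

% Amplify the activations of channel 3 of 'conv5',
% starting from the supplied image instead of random noise.
I = deepDreamImage(net,'conv5',3,'InitialImage',img);
figure
imshow(I)
```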
PyramidLevels— Number of pyramid levels
Number of multi-resolution image pyramid levels to use to generate
the output image, specified as a positive integer. Increase the number
of pyramid levels to produce larger output images at the expense of
additional computation. To produce an image of the same size as the
initial image, set the number of levels to 1.
PyramidScale— Scale between pyramid levels
Scale between each pyramid level, specified as a scalar with value > 1. Reduce the pyramid scale to incorporate fine grain details into the output image. Adjusting the pyramid scale can help generate more informative images for layers at the beginning of the network.
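As a sketch, you can combine a smaller 'PyramidScale' with several pyramid levels to pull finer detail into the output image. The layer name and parameter values below are illustrative choices, not defaults.

```matlab
net = alexnet;

% Use a smaller scale between pyramid levels to capture
% finer-grained detail in the generated image.
I = deepDreamImage(net,'conv2',10, ...
    'PyramidLevels',3, ...
    'PyramidScale',1.2);
figure
imshow(I)
```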
NumIterations— Number of iterations per pyramid level
Number of iterations per pyramid level, specified as a positive integer. Increase the number of iterations to produce more detailed images at the expense of additional computation.
OutputScaling— Type of scaling to apply to output
Type of scaling to apply to the output image, specified as the comma-separated
pair consisting of
'OutputScaling' and one of the following:
'linear' — Scale output pixel values so that they lie in the interval [0,1]. The output image corresponding to each layer index is scaled independently.
'none' — Disable output scaling.
Scaling the pixel values can cause the network to misclassify the
output image. If you want to classify the output image, set the
'OutputScaling' value to 'none'.
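For example, the following sketch generates an unscaled image and classifies it with the same network. It assumes the AlexNet network from the earlier example, where 'fc8' is the last fully connected layer.

```matlab
net = alexnet;

% Generate an image that maximizes class channel 1 of the final
% fully connected layer, without rescaling the pixel values.
I = deepDreamImage(net,'fc8',1,'OutputScaling','none');

% Because the pixel values are not rescaled, the network can
% classify the generated image directly.
label = classify(net,I)
```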
Verbose— Indicator to display progress information
Indicator to display progress information in the command window,
specified as the comma-separated pair consisting of
'Verbose' and
either 1 (
true) or 0 (false).
The displayed information includes the pyramid level, iteration, and
the activation strength.
ExecutionEnvironment— Hardware resource
Hardware resource, specified as the comma-separated pair consisting of
'ExecutionEnvironment' and one of the following:
'auto' — Use a GPU if one is available; otherwise, use the CPU.
'gpu' — Use the GPU. Using a GPU requires
Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
'cpu' — Use the CPU.
I— Output image
Output image, returned as a sequence of grayscale or truecolor
(RGB) images stored in a 4-D array. Images are concatenated
along the fourth dimension of
I such that the image
that maximizes the output of channels(k) is I(:,:,:,k).
You can display the output image using
imshow (Image Processing Toolbox).
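Because the images are concatenated along the fourth dimension, you can also display them all at once, for example with imtile (Image Processing Toolbox). This sketch assumes the 25-channel 'conv1' example from the beginning of the page.

```matlab
net = alexnet;
I = deepDreamImage(net,'conv1',1:25, ...
    'PyramidLevels',1,'Verbose',0);

% Tile the 25 images from the 4-D array into a single montage.
figure
imshow(imtile(I))
```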
This function implements a version of deep dream that uses a multi-resolution image pyramid and Laplacian Pyramid Gradient Normalization to generate high-resolution images. For more information on Laplacian Pyramid Gradient Normalization, see this blog post: DeepDreaming with TensorFlow.
When you train a network using the
trainNetwork function, or when you use prediction or validation functions with
DAGNetwork and SeriesNetwork
objects, the software performs these computations using single-precision, floating-point
arithmetic. Functions for training, prediction, and validation include
trainNetwork, predict, classify, and activations.
The software uses single-precision arithmetic when you train networks using both CPUs and GPUs.
 DeepDreaming with TensorFlow. https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/deepdream.ipynb