Visualize network features using Deep Dream


I = deepDreamImage(net,layer,channels)
I = deepDreamImage(net,layer,channels,Name,Value)



I = deepDreamImage(net,layer,channels) returns an array of images that strongly activate the channels channels of the layer with the numeric index or name given by layer within the network net. These images highlight the features learned by the network.

I = deepDreamImage(net,layer,channels,Name,Value) returns an image with additional options specified by one or more Name,Value pair arguments.



Load a pretrained AlexNet network.

net = alexnet;

Visualize the first 25 features learned by the first convolutional layer ('conv1') using deepDreamImage. Set 'PyramidLevels' to 1 so that the images are not scaled.

layer = 'conv1';
channels = 1:25;

I = deepDreamImage(net,layer,channels, ...
    'PyramidLevels',1, ...
    'Verbose',0);

figure
for i = 1:25
    subplot(5,5,i)
    imshow(I(:,:,:,i))
end

Input Arguments


Trained network, specified as a SeriesNetwork object. You can get a trained network by importing a pretrained network or by training your own network using the trainNetwork function. For more information about pretrained networks, see Pretrained Convolutional Neural Networks.

deepDreamImage only supports networks with an image input layer.

Layer to visualize, specified as a positive integer scalar or character vector. To visualize classification layer features, select the last fully connected layer before the classification layer.


Selecting ReLU or dropout layers for visualization may not produce useful images because of the effect that these layers have on the network gradients.
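For example, to visualize class features you can select 'fc8', the last fully connected layer of AlexNet before its classification layer. This is a minimal sketch; the class indices chosen here are arbitrary examples.

```matlab
% Sketch: visualize class features by selecting the last fully connected
% layer before the classification layer ('fc8' in AlexNet).
net = alexnet;
layer = 'fc8';
channels = [9 188 231];   % arbitrary example class indices
I = deepDreamImage(net,layer,channels);
```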

Queried channels, specified as a scalar or a vector of channel indices. If channels is a vector, the layer activations for each channel are optimized independently. The possible choices for channels depend on the selected layer. For convolutional layers, the NumFilters property specifies the number of output channels. For fully connected layers, the OutputSize property specifies the number of output channels.
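A sketch of how you might query these properties to find the valid channel range for a convolutional layer, assuming net is a SeriesNetwork such as the one returned by alexnet:

```matlab
% Look up the selected layer by name and read its NumFilters property
% to find how many output channels can be visualized.
net = alexnet;
layerName = 'conv1';
idx = strcmp({net.Layers.Name},layerName);
numChannels = net.Layers(idx).NumFilters;
channels = 1:numChannels;   % all output channels of conv1
```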

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: deepDreamImage(net,layer,channels,'NumIterations',100,'ExecutionEnvironment','gpu') generates images using 100 iterations per pyramid level and uses the GPU.


Image to initialize Deep Dream, specified as the comma-separated pair consisting of 'InitialImage' and an image. Use this option to see how an image is modified to maximize network layer activations. The minimum height and width of the initial image depend on all the layers up to and including the selected layer:

  • For layers towards the end of the network, the initial image must be at least the same height and width as the image input layer.

  • For layers towards the beginning of the network, the height and width of the initial image can be smaller than the image input layer. However, it must be large enough to produce a scalar output at the selected layer.

  • The number of channels of the initial image must match the number of channels in the image input layer of the network.

If you do not specify an initial image, the software uses a random image with pixels drawn from a standard normal distribution. See also 'PyramidLevels'.
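A minimal sketch of starting Deep Dream from an existing image instead of random noise; the example image and the choice of layer 'conv5' are illustrative assumptions:

```matlab
% Resize an example RGB image to the network input size so its height,
% width, and channel count match the image input layer, then use it to
% initialize Deep Dream.
net = alexnet;
initImg = imread('peppers.png');                 % example RGB image
inputSize = net.Layers(1).InputSize(1:2);
initImg = imresize(initImg,inputSize);
I = deepDreamImage(net,'conv5',1,'InitialImage',initImg);
imshow(I)
```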

Number of multi-resolution image pyramid levels to use to generate the output image, specified as a positive integer. Increase the number of pyramid levels to produce larger output images at the expense of additional computation. To produce an image of the same size as the initial image, set the number of levels to 1.

Example: 'PyramidLevels',3

Scale between each pyramid level, specified as a scalar with value > 1. Reduce the pyramid scale to incorporate fine grain details into the output image. Adjusting the pyramid scale can help generate more informative images for layers at the beginning of the network.

Example: 'PyramidScale',1.4
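As a rough guide (an assumption based on how multi-resolution pyramids work, not a documented formula), each pyramid level beyond the first scales the output height and width by approximately 'PyramidScale':

```matlab
% Rough estimate: output size grows by a factor of 'PyramidScale'
% for each pyramid level beyond the first.
inputSize = [227 227];   % AlexNet image input size
levels = 3;
scale = 1.4;
approxOutputSize = round(inputSize * scale^(levels - 1))
```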

Number of iterations per pyramid level, specified as a positive integer. Increase the number of iterations to produce more detailed images at the expense of additional computation.

Example: 'NumIterations',10

Type of scaling to apply to output image, specified as the comma-separated pair consisting of 'OutputScaling' and one of the following:

  • 'linear' — Scale output pixel values to the interval [0,1]. The output image corresponding to each layer channel, I(:,:,:,channel), is scaled independently.

  • 'none' — Disable output scaling.

Example: 'OutputScaling','linear'

Data Types: char

Indicator to display progress information in the command window, specified as the comma-separated pair consisting of 'Verbose' and either 1 (true) or 0 (false). The displayed information includes the pyramid level, iteration, and the activation strength.

Example: 'Verbose',0

Data Types: logical

Hardware resource, specified as the comma-separated pair consisting of 'ExecutionEnvironment' and one of the following:

  • 'auto' — Use a GPU if one is available; otherwise, use the CPU.

  • 'gpu' — Use the GPU. Using a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher. If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

  • 'cpu' — Use the CPU.

Example: 'ExecutionEnvironment','cpu'

Output Arguments


Output image, returned as a sequence of grayscale or truecolor (RGB) images stored in a 4-D array. Images are concatenated along the fourth dimension of I such that the image that maximizes the output of channels(k) is I(:,:,:,k). You can display the output images using imshow.
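Because the images are stacked along the fourth dimension, you can also tile them in a single figure with montage, which accepts a 4-D multiframe image array:

```matlab
% Tile all returned channel images in one figure.
% I is H-by-W-by-C-by-numel(channels).
net = alexnet;
I = deepDreamImage(net,'conv1',1:25,'PyramidLevels',1);
figure
montage(I)
```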


This function implements a version of deep dream that uses a multi-resolution image pyramid and Laplacian Pyramid Gradient Normalization to generate high-resolution images. For more information on Laplacian Pyramid Gradient Normalization, see this blog post: DeepDreaming with TensorFlow.

All functions for deep learning training, prediction, and validation in Neural Network Toolbox™ perform computations using single-precision, floating-point arithmetic. Functions for deep learning include trainNetwork, predict, classify, and activations. The software uses single-precision arithmetic when you train networks using both CPUs and GPUs.


[1] DeepDreaming with TensorFlow.

Introduced in R2017a
