Neural Network Toolbox Functions

Alphabetical List
activations - Compute convolutional neural network layer activations
adapt - Adapt neural network to data as it is simulated
adaptwb - Adapt network with weight and bias learning rules
adddelay - Add delay to neural network response
additionLayer - Addition layer
addLayers - Add layers to layer graph
alexnet - Pretrained AlexNet convolutional neural network
augmentedImageSource - Generate batches of augmented image data
Autoencoder - Autoencoder class
averagePooling2dLayer - Average pooling layer
batchNormalizationLayer - Batch normalization layer
boxdist - Distance between two position vectors
cascadeforwardnet - Cascade-forward neural network
catelements - Concatenate neural network data elements
catsamples - Concatenate neural network data samples
catsignals - Concatenate neural network data signals
cattimesteps - Concatenate neural network data timesteps
classificationLayer - Create classification output layer
ClassificationOutputLayer - Classification output layer
classify - Classify data using a trained deep learning neural network
classifyAndUpdateState - Classify data using a trained recurrent neural network and update the network state
clippedReluLayer - Clipped Rectified Linear Unit (ReLU) layer
closeloop - Convert neural network open-loop feedback to closed loop
combvec - Create all combinations of vectors
compet - Competitive transfer function
competlayer - Competitive layer
con2seq - Convert concurrent vectors to sequential vectors
configure - Configure network inputs and outputs to best match input and target data
confusion - Classification confusion matrix
connectLayers - Connect layers in layer graph
convolution2dLayer - 2-D convolutional layer
convwf - Convolution weight function
crossChannelNormalizationLayer - Channel-wise local response normalization layer
crossentropy - Neural network performance
DAGNetwork - Directed acyclic graph (DAG) network for deep learning
decode - Decode encoded data
deepDreamImage - Visualize network features using deep dream
depthConcatenationLayer - Depth concatenation layer
disconnectLayers - Disconnect layers in layer graph
disp - Neural network properties
display - Name and properties of neural network variables
dist - Euclidean distance weight function
distdelaynet - Distributed delay network
divideblock - Divide targets into three sets using blocks of indices
divideind - Divide targets into three sets using specified indices
divideint - Divide targets into three sets using interleaved indices
dividerand - Divide targets into three sets using random indices
dividetrain - Assign all targets to training set
dotprod - Dot product weight function
dropoutLayer - Dropout layer
elliot2sig - Elliot 2 symmetric sigmoid transfer function
elliotsig - Elliot symmetric sigmoid transfer function
elmannet - Elman neural network
encode - Encode input data
errsurf - Error surface of single-input neuron
extendts - Extend time series data to given number of timesteps
feedforwardnet - Feedforward neural network
fitnet - Function fitting neural network
fixunknowns - Process data by marking rows with unknown values
formwb - Form bias and weights into single vector
fromnndata - Convert data from standard neural network cell array form
fullyConnectedLayer - Fully connected layer
gadd - Generalized addition
gdivide - Generalized division
generateFunction - Generate a MATLAB function to run the autoencoder
generateSimulink - Generate a Simulink model for the autoencoder
genFunction - Generate MATLAB function for simulating neural network
gensim - Generate Simulink block for neural network simulation
getelements - Get neural network data elements
getL2Factor - Get L2 regularization factor of layer learnable parameter
getLearnRateFactor - Get learn rate factor of layer learnable parameter
getsamples - Get neural network data samples
getsignals - Get neural network data signals
getsiminit - Get Simulink neural network block initial input and layer delays states
gettimesteps - Get neural network data timesteps
getwb - Get network weight and bias values as single vector
gmultiply - Generalized multiplication
gnegate - Generalized negation
googlenet - Pretrained GoogLeNet convolutional neural network
gridtop - Grid layer topology function
gsqrt - Generalized square root
gsubtract - Generalized subtraction
hardlim - Hard-limit transfer function
hardlims - Symmetric hard-limit transfer function
hextop - Hexagonal layer topology function
imageDataAugmenter - Configure image data augmentation
imageInputLayer - Image input layer
importCaffeLayers - Import convolutional neural network layers from Caffe
importCaffeNetwork - Import pretrained convolutional neural network models from Caffe
ind2vec - Convert indices to vectors
init - Initialize neural network
initcon - Conscience bias initialization function
initlay - Layer-by-layer network initialization function
initlvq - LVQ weight initialization function
initnw - Nguyen-Widrow layer initialization function
initwb - By weight and bias layer initialization function
initzero - Zero weight and bias initialization function
isconfigured - Indicate if network inputs and outputs are configured
Layer - Network layer for deep learning
layerGraph - Graph of network layers for deep learning
layrecnet - Layer recurrent neural network
leakyReluLayer - Leaky Rectified Linear Unit (ReLU) layer
learncon - Conscience bias learning function
learngd - Gradient descent weight and bias learning function
learngdm - Gradient descent with momentum weight and bias learning function
learnh - Hebb weight learning rule
learnhd - Hebb with decay weight learning rule
learnis - Instar weight learning function
learnk - Kohonen weight learning function
learnlv1 - LVQ1 weight learning function
learnlv2 - LVQ2.1 weight learning function
learnos - Outstar weight learning function
learnp - Perceptron weight and bias learning function
learnpn - Normalized perceptron weight and bias learning function
learnsom - Self-organizing map weight learning function
learnsomb - Batch self-organizing map weight learning function
learnwh - Widrow-Hoff weight/bias learning function
linearlayer - Linear layer
linkdist - Link distance function
logsig - Log-sigmoid transfer function
LSTMLayer - Long short-term memory (LSTM) layer
lvqnet - Learning vector quantization neural network
lvqoutputs - LVQ outputs processing function
mae - Mean absolute error performance function
mandist - Manhattan distance weight function
mapminmax - Process matrices by mapping row minimum and maximum values to [-1 1]
mapstd - Process matrices by mapping each row’s means to 0 and deviations to 1
maxlinlr - Maximum learning rate for linear layer
maxPooling2dLayer - Max pooling layer
maxUnpooling2dLayer - Max unpooling layer
meanabs - Mean of absolute elements of matrix or matrices
meansqr - Mean of squared elements of matrix or matrices
midpoint - Midpoint weight initialization function
minmax - Ranges of matrix rows
mse - Mean squared normalized error performance function
narnet - Nonlinear autoregressive neural network
narxnet - Nonlinear autoregressive neural network with external input
negdist - Negative distance weight function
netinv - Inverse transfer function
netprod - Product net input function
netsum - Sum net input function
network - Convert Autoencoder object into network object
network - Create custom neural network
newgrnn - Design generalized regression neural network
newlind - Design linear layer
newpnn - Design probabilistic neural network
newrb - Design radial basis network
newrbe - Design exact radial basis network
nncell2mat - Combine neural network cell data into matrix
nncorr - Cross-correlation between neural network time series
nndata - Create neural network data
nndata2sim - Convert neural network data to Simulink time series
nnsize - Number of neural data elements, samples, timesteps, and signals
nnstart - Neural network getting started GUI
nntraintool - Neural network training tool
normc - Normalize columns of matrix
normprod - Normalized dot product weight function
normr - Normalize rows of matrix
numelements - Number of elements in neural network data
numfinite - Number of finite values in neural network data
numnan - Number of NaN values in neural network data
numsamples - Number of samples in neural network data
numsignals - Number of signals in neural network data
numtimesteps - Number of time steps in neural network data
openloop - Convert neural network closed-loop feedback to open loop
patternnet - Pattern recognition network
perceptron - Perceptron
perform - Calculate network performance
plot - Plot neural network layer graph
plotconfusion - Plot classification confusion matrix
plotep - Plot weight-bias position on error surface
ploterrcorr - Plot autocorrelation of error time series
ploterrhist - Plot error histogram
plotes - Plot error surface of single-input neuron
plotfit - Plot function fit
plotinerrcorr - Plot input to error time-series cross-correlation
plotpc - Plot classification line on perceptron vector plot
plotperform - Plot network performance
plotpv - Plot perceptron input/target vectors
plotregression - Plot linear regression
plotresponse - Plot dynamic network time series response
plotroc - Plot receiver operating characteristic
plotsomhits - Plot self-organizing map sample hits
plotsomnc - Plot self-organizing map neighbor connections
plotsomnd - Plot self-organizing map neighbor distances
plotsomplanes - Plot self-organizing map weight planes
plotsompos - Plot self-organizing map weight positions
plotsomtop - Plot self-organizing map topology
plottrainstate - Plot training state values
plotv - Plot vectors as lines from origin
plotvec - Plot vectors with different colors
plotwb - Plot Hinton diagram of weight and bias values
plotWeights - Plot a visualization of the weights for the encoder of an autoencoder
pnormc - Pseudonormalize columns of matrix
poslin - Positive linear transfer function
predict - Predict responses using a trained deep learning neural network
predict - Reconstruct the inputs using trained autoencoder
predictAndUpdateState - Predict responses using a trained recurrent neural network and update the network state
preparets - Prepare input and target time series data for network simulation or training
processpca - Process columns of matrix with principal component analysis
prune - Delete neural inputs, layers, and outputs with sizes of zero
prunedata - Prune data for consistency with pruned network
purelin - Linear transfer function
quant - Discretize values as multiples of quantity
radbas - Radial basis transfer function
radbasn - Normalized radial basis transfer function
randnc - Normalized column weight initialization function
randnr - Normalized row weight initialization function
rands - Symmetric random weight/bias initialization function
randsmall - Small random weight/bias initialization function
randtop - Random layer topology function
regression - Linear regression
regressionLayer - Create a regression output layer
RegressionOutputLayer - Regression output layer
reluLayer - Rectified Linear Unit (ReLU) layer
removeconstantrows - Process matrices by removing rows with constant values
removedelay - Remove delay from neural network’s response
removeLayers - Remove layers from layer graph
removerows - Process matrices by removing rows with specified indices
resetState - Reset the state of a recurrent neural network
roc - Receiver operating characteristic
sae - Sum absolute error performance function
satlin - Saturating linear transfer function
satlins - Symmetric saturating linear transfer function
scalprod - Scalar product weight function
selforgmap - Self-organizing map
separatewb - Separate biases and weight values from weight/bias vector
seq2con - Convert sequential vectors to concurrent vectors
sequenceInputLayer - Sequence input layer
SeriesNetwork - Series network for deep learning
setelements - Set neural network data elements
setL2Factor - Set L2 regularization factor of layer learnable parameter
setLearnRateFactor - Set learn rate factor of layer learnable parameter
setsamples - Set neural network data samples
setsignals - Set neural network data signals
setsiminit - Set neural network Simulink block initial conditions
settimesteps - Set neural network data timesteps
setwb - Set all network weight and bias values with single vector
sim - Simulate neural network
sim2nndata - Convert Simulink time series to neural network data
softmax - Soft max transfer function
softmaxLayer - Softmax layer
srchbac - 1-D minimization using backtracking
srchbre - 1-D interval location using Brent’s method
srchcha - 1-D minimization using Charalambous' method
srchgol - 1-D minimization using golden section search
srchhyb - 1-D minimization using a hybrid bisection-cubic search
sse - Sum squared error performance function
stack - Stack encoders from several autoencoders together
sumabs - Sum of absolute elements of matrix or matrices
sumsqr - Sum of squared elements of matrix or matrices
tansig - Hyperbolic tangent sigmoid transfer function
tapdelay - Shift neural network time series data for tap delay
timedelaynet - Time delay neural network
tonndata - Convert data to standard neural network cell array form
train - Train neural network
trainAutoencoder - Train an autoencoder
trainb - Batch training with weight and bias learning rules
trainbfg - BFGS quasi-Newton backpropagation
trainbr - Bayesian regularization backpropagation
trainbu - Batch unsupervised weight/bias training
trainc - Cyclical order weight/bias training
traincgb - Conjugate gradient backpropagation with Powell-Beale restarts
traincgf - Conjugate gradient backpropagation with Fletcher-Reeves updates
traincgp - Conjugate gradient backpropagation with Polak-Ribière updates
traingd - Gradient descent backpropagation
traingda - Gradient descent with adaptive learning rate backpropagation
traingdm - Gradient descent with momentum backpropagation
traingdx - Gradient descent with momentum and adaptive learning rate backpropagation
trainingOptions - Options for training neural network
TrainingOptionsSGDM - Training options for stochastic gradient descent with momentum
trainlm - Levenberg-Marquardt backpropagation
trainNetwork - Train neural network for deep learning
trainoss - One-step secant backpropagation
trainr - Random order incremental training with learning functions
trainrp - Resilient backpropagation
trainru - Unsupervised random order weight/bias training
trains - Sequential order incremental training with learning functions
trainscg - Scaled conjugate gradient backpropagation
trainSoftmaxLayer - Train a softmax layer for classification
transposedConv2dLayer - Transposed 2-D convolution layer
tribas - Triangular basis transfer function
tritop - Triangle layer topology function
unconfigure - Unconfigure network inputs and outputs
vec2ind - Convert vectors to indices
vgg16 - Pretrained VGG-16 convolutional neural network
vgg19 - Pretrained VGG-19 convolutional neural network
view - View neural network
view - View autoencoder
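
Usage Sketches

Many of the functions above compose into a few standard workflows. The following is a minimal sketch of the classic shallow-network workflow using feedforwardnet, train, sim, and perform; it uses the toolbox's simplefit_dataset sample data, so substitute your own inputs and targets as needed.

[x, t] = simplefit_dataset;   % sample inputs and targets shipped with the toolbox
net = feedforwardnet(10);     % feedforward network with one 10-neuron hidden layer
net = train(net, x, t);       % train (default algorithm is Levenberg-Marquardt, trainlm)
y = sim(net, x);              % simulate the trained network on the inputs
err = perform(net, t, y);     % evaluate with the network's performance function (mse by default)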
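
Similarly, the deep learning functions combine into a define-train-classify workflow: build a layer array, set options with trainingOptions, and call trainNetwork. This is a minimal sketch; the image datastore imds is a hypothetical placeholder for labeled 28-by-28 grayscale images in 10 classes, and the layer sizes are illustrative only.

layers = [
    imageInputLayer([28 28 1])                 % 28x28 grayscale input
    convolution2dLayer(3, 16, 'Padding', 1)    % 3x3 filters, 16 feature maps
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(10)                    % one output per class
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm', 'MaxEpochs', 4, 'InitialLearnRate', 0.01);
net = trainNetwork(imds, layers, options);     % imds: hypothetical labeled image datastore
YPred = classify(net, imds);                   % predicted class labels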
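
Finally, a minimal sketch of the stacked autoencoder workflow using trainAutoencoder, encode, trainSoftmaxLayer, and stack. It uses the toolbox's digitTrainCellArrayData sample loader; the hidden-layer sizes are illustrative assumptions.

[xTrain, tTrain] = digitTrainCellArrayData;   % sample digit images and labels
autoenc1 = trainAutoencoder(xTrain, 100);     % first autoencoder, 100 hidden units
feat1 = encode(autoenc1, xTrain);             % features learned by the first encoder
autoenc2 = trainAutoencoder(feat1, 50);       % second autoencoder on those features
feat2 = encode(autoenc2, feat1);
softnet = trainSoftmaxLayer(feat2, tTrain);   % softmax classifier on the deep features
deepnet = stack(autoenc1, autoenc2, softnet); % stack encoders and classifier into one network
view(deepnet)                                 % inspect the stacked network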