Nonlinear regression model class
An object comprising training data, model description, diagnostic information, and fitted coefficients for a nonlinear regression. Predict model responses with the predict or feval methods.
nlm = fitnlm(tbl,modelfun,beta0) or nlm = fitnlm(X,y,modelfun,beta0) create a nonlinear model of a table or dataset array tbl, or of the responses y to a data matrix X. For details, see fitnlm.
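As a hedged sketch (the data, model form, and variable names here are illustrative, not taken from this page), fitting and evaluating a nonlinear model might look like this:

```matlab
% Illustrative sketch: fit y = b1*exp(b2*x) with fitnlm (example data)
x = (1:10)';                              % single predictor column
y = 2*exp(0.3*x) + 0.5*randn(10,1);       % noisy exponential response
modelfun = @(b,X) b(1)*exp(b(2)*X(:,1));  % model: coefficients b, data X
beta0 = [1; 0.1];                         % initial coefficient guess
nlm  = fitnlm(x, y, modelfun, beta0);     % returns a NonLinearModel object
yhat = predict(nlm, x);                   % predicted responses at training points
```

The model function must accept a coefficient vector as its first argument and a data matrix as its second, and beta0 supplies one starting value per coefficient.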

CoefficientCovariance  Covariance matrix of coefficient estimates.  

CoefficientNames  Cell array of strings containing a label for each coefficient.  

Coefficients  Coefficient values stored as a table, with one row for each coefficient and these columns: Estimate, SE, tStat, and pValue. To obtain any of these columns as a vector, index into the property using dot notation. For example, beta = mdl.Coefficients.Estimate returns the vector of coefficient estimates.  

Diagnostics  Table with diagnostics helpful in finding outliers and influential observations, containing the leverage, Cook's distance, and hat matrix values for each observation.

DFE  Degrees of freedom for error (residuals), equal to the number of observations minus the number of estimated coefficients.  

Fitted  Vector of predicted values based on the training data.  

Formula  Object that represents the mathematical form of the model.  

Iterative  Structure with information about the fitting process, including the initial coefficient values and the iteration options used.

LogLikelihood  Log likelihood of the model distribution at the response values, with the mean fitted from the model and other parameters estimated as part of the model fit.  

ModelCriterion  Model criterion values such as AIC and BIC. To obtain any of these values as a scalar, index into the property using dot notation. For example, aic = mdl.ModelCriterion.AIC returns the AIC value.  

MSE  Mean squared error, a scalar that is an estimate of the variance of the error term in the model.  

NumCoefficients  Number of coefficients in the fitted model, a scalar.  

NumEstimatedCoefficients  Number of estimated coefficients in the fitted model, a scalar.  

NumPredictors  Number of predictor variables, a scalar.  

NumVariables  Number of variables in the data, a scalar.  

ObservationInfo  Table with the same number of rows as the input data (tbl or X), containing information about each observation, such as whether it was excluded from the fit or had missing values.

ObservationNames  Cell array of strings containing the names of the observations used in the fit.

PredictorNames  Cell array of strings containing the names of the predictors used in fitting the model.  

Residuals  Table of residuals, with one row for each observation and these variables: Raw, Pearson, Standardized, and Studentized. To obtain any of these columns as a vector, index into the property using dot notation. For example, r = mdl.Residuals.Raw returns the raw residuals. Rows not used in the fit because of missing values (in ObservationInfo.Missing) or because of excluded values (in ObservationInfo.Excluded) contain NaN values.

ResponseName  String giving the name of the response variable.  

RMSE  Root mean squared error, a scalar that is an estimate of the standard deviation of the error term in the model.  

Robust  Structure that is empty unless a robust fitting option is used when constructing the model.

Rsquared  Proportion of total sum of squares explained by the model, with Ordinary and Adjusted fields. The ordinary R-squared value relates to the SSE and SST properties: Rsquared.Ordinary = 1 - SSE/SST. To obtain any of these values as a scalar, index into the property using dot notation. For example, r2 = mdl.Rsquared.Adjusted returns the adjusted R-squared value.  

SSE  Sum of squared errors (residuals), a scalar. The Pythagorean theorem implies SST = SSE + SSR.

SSR  Regression sum of squares, the sum of squared deviations of the fitted values from their mean. The Pythagorean theorem implies SST = SSE + SSR.

SST  Total sum of squares, the sum of squared deviations of the observed response values from their mean. The Pythagorean theorem implies SST = SSE + SSR.

VariableInfo  Table containing metadata about the variables used in the fit, such as their class and range.

VariableNames  Cell array of strings containing the names of the variables in the fit.

Variables  Table containing the data, both observations and responses, that the fitting function used to construct the fit. If the fit is based on a table or dataset array, this table is the same as the input data.
coefCI  Confidence intervals of coefficient estimates of nonlinear regression model 
coefTest  Linear hypothesis test on nonlinear regression model coefficients 
disp  Display nonlinear regression model 
feval  Evaluate nonlinear regression model prediction 
fit  Fit nonlinear regression model 
plotDiagnostics  Plot diagnostics of nonlinear regression model 
plotResiduals  Plot residuals of nonlinear regression model 
plotSlice  Plot of slices through fitted nonlinear regression surface 
predict  Predict response of nonlinear regression model 
random  Simulate responses for nonlinear regression model 
The hat matrix H is defined in terms of the data matrix X and the Jacobian matrix J:
$${J}_{i,j}={\left.\frac{\partial f}{\partial {\beta}_{j}}\right|}_{{x}_{i},\beta}$$
Here f is the nonlinear model function, and β is the vector of model coefficients.
The hat matrix H is
H = J(J^{T}J)^{–1}J^{T}.
The diagonal elements H_{ii} satisfy
$$\begin{array}{l}0\le {h}_{ii}\le 1\\ {\displaystyle \sum _{i=1}^{n}{h}_{ii}}=p,\end{array}$$
where n is the number of observations (rows of X), and p is the number of coefficients in the regression model.
The leverage of observation i is the value of the ith diagonal term, h_{ii}, of the hat matrix H. Because the sum of the leverage values is p (the number of coefficients in the regression model), an observation i can be considered to be an outlier if its leverage substantially exceeds p/n, where n is the number of observations.
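The definitions above translate directly into a few lines of MATLAB. This is a hedged numeric sketch with an example Jacobian, not the toolbox's internal implementation:

```matlab
% Sketch: compute hat-matrix diagonal (leverages) from a Jacobian J
J = [1 0.5; 1 1.0; 1 1.5; 1 2.0];   % example n-by-p Jacobian (n = 4, p = 2)
H = J / (J'*J) * J';                % hat matrix H = J*inv(J'*J)*J'
h = diag(H);                        % leverages h_ii
% Each h_ii lies in [0,1], and sum(h) equals p (here, 2),
% so observations with h_ii well above p/n merit a closer look.
```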
The Cook's distance D_{i} of observation i is
$${D}_{i}=\frac{{\displaystyle \sum _{j=1}^{n}{\left({\widehat{y}}_{j}-{\widehat{y}}_{j(i)}\right)}^{2}}}{p\text{\hspace{0.17em}}MSE},$$
where
$${\widehat{y}}_{j}$$ is the jth fitted response value.
$${\widehat{y}}_{j(i)}$$ is the jth fitted response value, where the fit does not include observation i.
MSE is the mean squared error.
p is the number of coefficients in the regression model.
Cook's distance is algebraically equivalent to the following expression:
$${D}_{i}=\frac{{r}_{i}^{2}}{p\text{\hspace{0.17em}}MSE}\left(\frac{{h}_{ii}}{{\left(1-{h}_{ii}\right)}^{2}}\right),$$
where r_{i} is the ith residual.
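Because of this equivalence, Cook's distance can be computed from a single fit, without refitting n times. A hedged sketch with example values:

```matlab
% Sketch: Cook's distance from residuals r, leverages h, p coefficients
r   = [0.2; -0.5; 0.1; 0.4];        % example raw residuals
h   = [0.45; 0.30; 0.25; 0.50];     % example leverages (diagonal of H)
p   = 2;                            % number of coefficients
MSE = sum(r.^2) / (numel(r) - p);   % mean squared error = SSE/DFE
D   = (r.^2 ./ (p*MSE)) .* (h ./ (1 - h).^2);  % Cook's distance per observation
```

In practice these quantities are available directly from the fitted model's Residuals and Diagnostics properties.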
Value. To learn how value classes affect copy operations, see Copying Objects in the MATLAB^{®} documentation.