plotDiagnostics(mdl)
plotDiagnostics(mdl,plottype)
h = plotDiagnostics(...)
h = plotDiagnostics(mdl,plottype,Name,Value)
plotDiagnostics(mdl) plots diagnostics from the mdl nonlinear model using leverage as the plot type.

plotDiagnostics(mdl,plottype) plots diagnostics in a plot of type plottype.

h = plotDiagnostics(...) returns handles to the lines in the plot.

h = plotDiagnostics(mdl,plottype,Name,Value) plots with additional options specified by one or more Name,Value pair arguments.
For many plots, the Data Cursor tool in the figure window displays the x and y values for any data point, along with the observation name or number.
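The calling forms above can be sketched as follows. This is an illustrative example with synthetic data; it assumes the Statistics and Machine Learning Toolbox function fitnlm is available to construct the model, and the model form and starting values are arbitrary choices.

```matlab
% Fit a simple exponential-decay model to synthetic data, then
% examine its diagnostic plots.
rng default
x = linspace(0,10,50)';
y = 3*exp(-0.5*x) + 0.1*randn(50,1);
modelfun = @(b,x) b(1)*exp(-b(2)*x);   % nonlinear model f(x,beta)
mdl = fitnlm(x,y,modelfun,[1 1]);      % starting values beta0 = [1 1]

plotDiagnostics(mdl)                   % leverage plot (the default)
plotDiagnostics(mdl,'cookd')           % Cook's distance plot

% Name,Value pairs style the plot; h holds line handles.
h = plotDiagnostics(mdl,'leverage','Color','r');
```

Clicking a point with the Data Cursor tool then shows that observation's x and y values and its observation number.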

Nonlinear regression model, constructed by  

String specifying the type of plot:
Default: 
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Note:
The plot property name-value pairs apply to the first returned
handle 

Color of the line or marker, a string or 

Type of line, a string or Chart Line Properties specification. For details, see 

Width of the line or edges of filled area, in points, a positive scalar. One point is 1/72 inch. Default: 

Color of the marker or edge color for filled markers, a string or 

Color of the marker face for filled markers, a string or 

Size of the marker in points, a strictly positive scalar. One point is 1/72 inch. 

Vector of handles to lines or patches in the plot. 
The hat matrix H is defined in terms of the data matrix X and the Jacobian matrix J:
$${J}_{i,j}={\left.\frac{\partial f}{\partial {\beta}_{j}}\right|}_{{x}_{i},\beta}$$
Here f is the nonlinear model function, and β is the vector of model coefficients.
The hat matrix H is
$$H=J{\left({J}^{T}J\right)}^{-1}{J}^{T}.$$
The diagonal elements h_{ii} satisfy
$$\begin{array}{l}0\le {h}_{ii}\le 1\\ {\displaystyle \sum _{i=1}^{n}{h}_{ii}}=p,\end{array}$$
where n is the number of observations (rows of X), and p is the number of coefficients in the regression model.
The leverage of observation i is the value of the ith diagonal term, h_{ii}, of the hat matrix H. Because the sum of the leverage values is p (the number of coefficients in the regression model), an observation i can be considered to be an outlier if its leverage substantially exceeds p/n, where n is the number of observations.
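The leverage computation above can be sketched directly. This is a sketch under stated assumptions: the model and data are the synthetic example from earlier, the finite-difference step and the 2p/n flagging threshold are illustrative choices, and the Jacobian is approximated numerically rather than taken from any internal state of the model.

```matlab
% Reproduce the leverages of a fitted nonlinear model from its Jacobian.
rng default
x = linspace(0,10,50)';
y = 3*exp(-0.5*x) + 0.1*randn(50,1);
modelfun = @(b,x) b(1)*exp(-b(2)*x);
mdl = fitnlm(x,y,modelfun,[1 1]);

b = mdl.Coefficients.Estimate;
n = numel(x); p = numel(b);
J = zeros(n,p);                        % Jacobian J_ij = df/dbeta_j at (x_i, beta)
delta = 1e-6;                          % finite-difference step (illustrative)
for j = 1:p
    bp = b; bp(j) = bp(j) + delta;
    J(:,j) = (modelfun(bp,x) - modelfun(b,x))/delta;
end
H = J/(J'*J)*J';                       % hat matrix H = J (J'J)^{-1} J'
lev = diag(H);                         % leverages h_ii; sum(lev) is close to p
highLev = find(lev > 2*p/n);           % a common rule of thumb for high leverage
```

The values in lev should agree (up to finite-difference error) with the leverages that plotDiagnostics displays.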
The Cook's distance D_{i} of observation i is
$${D}_{i}=\frac{{\displaystyle \sum _{j=1}^{n}{\left({\widehat{y}}_{j}-{\widehat{y}}_{j(i)}\right)}^{2}}}{p\text{\hspace{0.17em}}MSE},$$
where
$${\widehat{y}}_{j}$$ is the jth fitted response value.
$${\widehat{y}}_{j(i)}$$ is the jth fitted response value, where the fit does not include observation i.
MSE is the mean squared error.
p is the number of coefficients in the regression model.
Cook's distance is algebraically equivalent to the following expression:
$${D}_{i}=\frac{{r}_{i}^{2}}{p\text{\hspace{0.17em}}MSE}\left(\frac{{h}_{ii}}{{\left(1-{h}_{ii}\right)}^{2}}\right),$$
where r_{i} is the ith residual.
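The algebraic form of Cook's distance can be evaluated from the residuals and leverages of a fitted model. This sketch assumes the synthetic example from earlier and uses the Residuals, Diagnostics, NumCoefficients, and MSE properties of a NonLinearModel object; the 3*mean(D) threshold is a common rule of thumb, not part of the documented interface.

```matlab
% Cook's distance from residuals and leverages of a fitted model.
rng default
x = linspace(0,10,50)';
y = 3*exp(-0.5*x) + 0.1*randn(50,1);
mdl = fitnlm(x,y,@(b,x) b(1)*exp(-b(2)*x),[1 1]);

r   = mdl.Residuals.Raw;               % raw residuals r_i
lev = mdl.Diagnostics.Leverage;        % leverages h_ii
p   = mdl.NumCoefficients;
D   = (r.^2/(p*mdl.MSE)) .* (lev./(1-lev).^2);
% D should match mdl.Diagnostics.CooksDistance; observations with
% D > 3*mean(D) are often flagged as influential.
```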
[1] Neter, J., M. H. Kutner, C. J. Nachtsheim, and W. Wasserman. Applied Linear Statistical Models, Fourth Edition. Irwin, Chicago, 1996.