
# mle

Maximum likelihood estimates

## Description

phat = mle(data) returns maximum likelihood estimates (MLEs) for the parameters of a normal distribution, using the sample data in the vector data.


phat = mle(data,'distribution',dist) returns parameter estimates for a distribution specified by dist.


phat = mle(data,'pdf',pdf,'start',start) returns parameter estimates for a custom distribution specified by the probability density function pdf. You must also specify the initial parameter values, start.


phat = mle(data,'pdf',pdf,'start',start,'cdf',cdf) returns parameter estimates for a custom distribution specified by the probability density function pdf and custom cumulative distribution function cdf.

phat = mle(data,'logpdf',logpdf,'start',start) returns parameter estimates for a custom distribution specified by the log probability density function logpdf. You must also specify the initial parameter values, start.


phat = mle(data,'logpdf',logpdf,'start',start,'logsf',logsf) returns parameter estimates for a custom distribution specified by the log probability density function logpdf and custom log survival function logsf.


phat = mle(data,'nloglf',nloglf,'start',start) returns parameter estimates for the custom distribution specified by the negative loglikelihood function nloglf. You must also specify the initial parameter values, start.

phat = mle(___,Name,Value) specifies options using name-value pair arguments in addition to any of the input arguments in previous syntaxes. For example, you can specify the censored data, frequency of observations, and confidence level.


[phat,pci] = mle(___) also returns the 95% confidence intervals for the parameters.

## Examples


Load the carsmall sample data. The variable MPG has the miles per gallon for different models of cars.

load carsmall

Draw a histogram of MPG data.

histogram(MPG)

The distribution is somewhat right-skewed. A symmetric distribution, such as the normal distribution, might not be a good fit.

Estimate the parameters of the Burr Type XII distribution for the MPG data.

phat = mle(MPG,'distribution','burr')
phat = 1×3

34.6447    3.7898    3.5722

The maximum likelihood estimate for the scale parameter $\alpha$ is 34.6447. The estimates for the two shape parameters $c$ and $k$ of the Burr Type XII distribution are 3.7898 and 3.5722, respectively.

Generate sample data of size 1000 from a noncentral chi-square distribution with degrees of freedom 8 and noncentrality parameter 3.

rng default % for reproducibility
x = ncx2rnd(8,3,1000,1);

Estimate the parameters of the noncentral chi-square distribution from the sample data. To do this, define the noncentral chi-square pdf as a custom function using the pdf input argument.

[phat,pci] = mle(x,'pdf',@(x,v,d)ncx2pdf(x,v,d),'start',[1,1])
phat = 1×2

8.1052    2.6693

pci = 2×2

7.1121    1.6025
9.0983    3.7362

The estimate for the degrees of freedom is 8.1052 and the noncentrality parameter is 2.6693. The 95% confidence interval for the degrees of freedom is (7.1121, 9.0983) and for the noncentrality parameter is (1.6025, 3.7362). The confidence intervals include the true parameter values of 8 and 3, respectively.

The data includes ReadmissionTime, which has readmission times for 100 patients. The column vector Censored has the censorship information for each patient, where 1 indicates a censored observation, and 0 indicates the exact readmission time is observed. This is simulated data.

Define a custom probability density and cumulative distribution function.

custpdf = @(data,lambda) lambda*exp(-lambda*data);
custcdf = @(data,lambda) 1-exp(-lambda*data);

Estimate the parameter, lambda, of the custom distribution for the censored sample data.

phat = mle(ReadmissionTime,'pdf',custpdf,'cdf',custcdf,'start',0.05,'Censoring',Censored)
phat = 0.1096

The data includes ReadmissionTime, which has readmission times for 100 patients. The column vector Censored has the censorship information for each patient, where 1 indicates a censored observation, and 0 indicates the exact readmission time is observed. This is simulated data.

Define a custom log probability density and survival function.

custlogpdf = @(data,lambda,k) log(k)-k*log(lambda)+(k-1)*log(data)-(data/lambda).^k;
custlogsf = @(data,lambda,k) -(data/lambda).^k;

Estimate the parameters, lambda and k, of the custom distribution for the censored sample data.

phat = mle(ReadmissionTime,'logpdf',custlogpdf,'logsf',custlogsf,...
    'start',[1,0.75],'Censoring',Censored)
phat = 1×2

9.2090    1.4223

The scale and shape parameters of the custom-defined distribution are 9.2090 and 1.4223, respectively.

The data includes ReadmissionTime, which has readmission times for 100 patients. This is simulated data.

Define a negative log likelihood function.

custnloglf = @(lambda,data,cens,freq) - length(data)*log(lambda) + nansum(lambda*data);

Estimate the parameter of the defined distribution.

phat = mle(ReadmissionTime,'nloglf',custnloglf,'start',0.05)
phat = 0.1462

Generate 100 random observations from a binomial distribution with the number of trials, $n$ = 20, and the probability of success, $p$ = 0.75.

data = binornd(20,0.75,100,1);

Estimate the probability of success and 95% confidence limits using the simulated sample data.

[phat,pci] = mle(data,'distribution','binomial','alpha',.05,'ntrials',20)
phat = 0.7615
pci = 2×1

0.7422
0.7800

The estimate of the probability of success is 0.7615, and the lower and upper limits of the 95% confidence interval are 0.7422 and 0.7800. This interval covers the true value used to simulate the data.

Generate sample data of size 1000 from a noncentral chi-square distribution with degrees of freedom 10 and noncentrality parameter 5.

rng default % for reproducibility
x = ncx2rnd(10,5,1000,1);

Suppose the noncentrality parameter is fixed at the value 5. Estimate the degrees of freedom of the noncentral chi-square distribution from the sample data. To do this, define the noncentral chi-square pdf as a custom function using the pdf input argument.

[phat,pci] = mle(x,'pdf',@(x,v)ncx2pdf(x,v,5),'start',1)
phat = 9.9307
pci = 2×1

9.5626
10.2989

The estimate for the degrees of freedom is 9.9307, with a 95% confidence interval of (9.5626, 10.2989). The confidence interval includes the true parameter value of 10.

Generate sample data of size 1000 from a Rician distribution with noncentrality parameter of 8 and scale parameter of 5. First create the Rician distribution.

r = makedist('Rician','s',8,'sigma',5);

Now, generate sample data from the distribution you created above.

rng default % For reproducibility
x = random(r,1000,1);

Suppose the scale parameter is known, and estimate the noncentrality parameter from the sample data. To do this using mle, you must define the Rician probability density function as a custom function.

[phat,pci] = mle(x,'pdf',@(x,s) pdf('rician',x,s,5),'start',10)
phat = 7.8953
pci = 2×1

7.5405
8.2501

The estimate for the noncentrality parameter is 7.8953, with a 95% confidence interval of (7.5405, 8.2501). The confidence interval includes the true parameter value of 8.

Add a scale parameter to the chi-square distribution so that it can adapt to the scale of the data, and then fit it. First, generate sample data of size 1000 from a chi-square distribution with 5 degrees of freedom, and scale it by a factor of 100.

rng default % For reproducibility
x = 100*chi2rnd(5,1000,1);

Estimate the degrees of freedom and the scaling factor. To do this, define the chi-square probability density function as a custom function using the pdf input argument. The density function requires a $1/s$ factor for data scaled by $s$.

[phat,pci] = mle(x,'pdf',@(x,v,s)chi2pdf(x/s,v)/s,'start',[1,200])
phat = 1×2

5.1079   99.1681

pci = 2×2

4.6862   90.1215
5.5297  108.2146

The estimate for the degrees of freedom is 5.1079 and the scale is 99.1681. The 95% confidence interval for the degrees of freedom is (4.6862, 5.5297) and for the scale parameter is (90.1215, 108.2146). The confidence intervals include the true parameter values of 5 and 100, respectively.

## Input Arguments


Sample data mle uses to estimate the distribution parameters, specified as a vector.

Data Types: single | double

Distribution type to estimate parameters for, specified as one of the following.

| dist | Description | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 |
|---|---|---|---|---|---|
| 'Bernoulli' | Bernoulli Distribution | p: probability of success for each trial | | | |
| 'Beta' | Beta Distribution | a: first shape parameter | b: second shape parameter | | |
| 'bino' or 'Binomial' | Binomial Distribution | n: number of trials | p: probability of success for each trial | | |
| 'BirnbaumSaunders' | Birnbaum-Saunders Distribution | β: scale parameter | γ: shape parameter | | |
| 'Burr' | Burr Type XII Distribution | α: scale parameter | c: first shape parameter | k: second shape parameter | |
| 'Discrete Uniform' or 'unid' | Uniform Distribution (Discrete) | n: maximum observable value | | | |
| 'exp' or 'Exponential' | Exponential Distribution | μ: mean | | | |
| 'ev' or 'Extreme Value' | Extreme Value Distribution | μ: location parameter | σ: scale parameter | | |
| 'gam' or 'Gamma' | Gamma Distribution | a: shape parameter | b: scale parameter | | |
| 'gev' or 'Generalized Extreme Value' | Generalized Extreme Value Distribution | k: shape parameter | σ: scale parameter | μ: location parameter | |
| 'gp' or 'Generalized Pareto' | Generalized Pareto Distribution | k: tail index (shape) parameter | σ: scale parameter | θ: threshold (location) parameter | |
| 'geo' or 'Geometric' | Geometric Distribution | p: probability parameter | | | |
| 'hn' or 'Half Normal' | Half-Normal Distribution | μ: location parameter | σ: scale parameter | | |
| 'InverseGaussian' | Inverse Gaussian Distribution | μ: scale parameter | λ: shape parameter | | |
| 'Logistic' | Logistic Distribution | μ: mean | σ: scale parameter | | |
| 'LogLogistic' | Loglogistic Distribution | μ: mean of logarithmic values | σ: scale parameter of logarithmic values | | |
| 'logn' or 'LogNormal' | Lognormal Distribution | μ: mean of logarithmic values | σ: standard deviation of logarithmic values | | |
| 'Nakagami' | Nakagami Distribution | μ: shape parameter | ω: scale parameter | | |
| 'nbin' or 'Negative Binomial' | Negative Binomial Distribution | r: number of successes | p: probability of success in a single trial | | |
| 'norm' or 'Normal' | Normal Distribution | μ: mean | σ: standard deviation | | |
| 'poiss' or 'Poisson' | Poisson Distribution | λ: mean | | | |
| 'rayl' or 'Rayleigh' | Rayleigh Distribution | b: scale parameter | | | |
| 'Rician' | Rician Distribution | s: noncentrality parameter | σ: scale parameter | | |
| 'Stable' | Stable Distribution | α: first shape parameter | β: second shape parameter | γ: scale parameter | δ: location parameter |
| 'tLocationScale' | t Location-Scale Distribution | μ: location parameter | σ: scale parameter | ν: shape parameter | |
| 'unif' or 'Uniform' | Uniform Distribution (Continuous) | a: lower endpoint (minimum) | b: upper endpoint (maximum) | | |
| 'wbl' or 'Weibull' | Weibull Distribution | a: scale parameter | b: shape parameter | | |

Example: 'rician'

Custom probability density function, specified as a function handle created using @.

This custom function accepts the vector data and one or more individual distribution parameters as input parameters, and returns a vector of probability density values.

For example, if the name of the custom probability density function is newpdf, then you can specify the function handle in mle as follows.

Example: @newpdf

Data Types: function_handle
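As an illustration (not part of the original reference), a Laplace density, which has no built-in dist name, can be supplied directly as a custom pdf; the sample data and starting values below are arbitrary:

```matlab
% Fit a Laplace (double-exponential) distribution by supplying its
% density as a custom pdf; 'start' gives initial [location, scale].
rng default                                  % for reproducibility
x = randn(1000,1);                           % illustrative sample data
lappdf = @(x,mu,b) exp(-abs(x-mu)/b)/(2*b);  % Laplace density
phat = mle(x,'pdf',lappdf,'start',[0,1])
```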

Custom cumulative distribution function, specified as a function handle created using @.

This custom function accepts the vector data and one or more individual distribution parameters as input parameters, and returns a vector of cumulative probability values.

You must define cdf with pdf if data is censored and you use the 'Censoring' name-value pair argument. If 'Censoring' is not present, you do not have to specify cdf while using pdf.

For example, if the name of the custom cumulative distribution function is newcdf, then you can specify the function handle in mle as follows.

Example: @newcdf

Data Types: function_handle

Custom log probability density function, specified as a function handle created using @.

This custom function accepts the vector data and one or more individual distribution parameters as input parameters, and returns a vector of log probability values.

For example, if the name of the custom log probability density function is customlogpdf, then you can specify the function handle in mle as follows.

Example: @customlogpdf

Data Types: function_handle

Custom log survival function, specified as a function handle created using @.

This custom function accepts the vector data and one or more individual distribution parameters as input parameters, and returns a vector of log survival probability values.

You must define logsf with logpdf if data is censored and you use the 'Censoring' name-value pair argument. If 'Censoring' is not present, you do not have to specify logsf while using logpdf.

For example, if the name of the custom log survival function is logsurvival, then you can specify the function handle in mle as follows.

Example: @logsurvival

Data Types: function_handle

Custom negative loglikelihood function, specified as a function handle created using @.

This custom function accepts the following input arguments.

| Argument | Description |
|---|---|
| params | Vector of distribution parameter values. mle detects the number of parameters from the number of elements in start. |
| data | Vector of data. |
| cens | Boolean vector of censored values. |
| freq | Vector of integer data frequencies. |

nloglf must accept all four arguments even if you do not use the 'Censoring' or 'Frequency' name-value pair arguments. In that case, you can write nloglf to ignore the cens and freq arguments.

nloglf returns a scalar negative loglikelihood value and optionally, a negative loglikelihood gradient vector (see the 'GradObj' field in 'Options').

If the name of the custom negative log likelihood function is negloglik, then you can specify the function handle in mle as follows.

Example: @negloglik

Data Types: function_handle
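For example, here is a sketch of a negative loglikelihood for the exponential distribution that also returns its gradient; the file name expnll.m is illustrative, and the function ignores cens and freq:

```matlab
function [nll,grad] = expnll(params,data,cens,freq)
% Negative loglikelihood of an exponential distribution with rate lambda.
% cens and freq are required in the signature but ignored here.
lambda = params(1);
n = numel(data);
nll = -n*log(lambda) + lambda*sum(data);   % -sum(log(lambda*exp(-lambda*x)))
if nargout > 1
    grad = -n/lambda + sum(data);          % d(nll)/d(lambda)
end
end
```

With this file on the path, a call such as mle(data,'nloglf',@expnll,'start',1,'Optimfun','fmincon','Options',statset('GradObj','on')) would let fmincon use the analytic gradient.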

Initial parameter values for the custom functions, specified as a scalar value or a vector of scalar values.

Use start when you fit custom distributions, that is, when you use pdf and cdf, logpdf and logsf, or nloglf input arguments.

Example: 0.05

Example: [100,2]

Data Types: single | double

### Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'Censoring',Cens,'Alpha',0.01,'Options',Opt specifies that mle estimates the parameters for the distribution of censored data specified by array Cens, computes the 99% confidence limits for the parameter estimates, and uses the algorithm control parameters specified by the structure Opt.

Indicator for censoring, specified as the comma-separated pair consisting of 'Censoring' and a Boolean array of the same size as data. Use 1 for observations that are right censored and 0 for observations that are fully observed. By default, all observations are fully observed.

For example, if the censored data information is in the binary array called Censored, then you can specify the censored data as follows.

Example: 'Censoring',Censored

mle supports censoring for the following distributions:

- Birnbaum-Saunders
- Burr
- Exponential
- Extreme Value
- Gamma
- Inverse Gaussian
- Kernel
- Log-Logistic
- Logistic
- Lognormal
- Nakagami
- Normal
- Rician
- t Location-Scale
- Weibull

Data Types: logical
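For instance, here is a minimal sketch (with arbitrary simulated data) that right-censors exponential lifetimes at a fixed time and passes the indicator to mle:

```matlab
% Simulate lifetimes, censor at t = 8, and fit with censoring information.
rng default
t = exprnd(10,100,1);       % true event times (mean 10)
cens = t > 8;               % 1 = right censored, 0 = fully observed
obs = min(t,8);             % what is actually recorded
muhat = mle(obs,'distribution','exp','Censoring',cens)
```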

Frequency of observations, specified as the comma-separated pair consisting of 'Frequency' and an array containing nonnegative integer counts, which is the same size as data. The default is one observation per element of data.

For example, if the observation frequencies are stored in an array named Freq, you can specify the frequencies as follows.

Example: 'Frequency',Freq

Data Types: single | double
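For instance, here is a sketch fitting a Poisson mean from tabulated counts rather than raw observations; the counts below are made up:

```matlab
% Each element of vals occurred freq times; mle weights accordingly.
vals = (0:5)';                  % distinct observed values
freq = [12 25 30 18 10 5]';     % illustrative frequencies (100 total)
lambdahat = mle(vals,'distribution','poiss','Frequency',freq)
% Equivalent to fitting the 100 raw observations these counts summarize;
% the Poisson MLE is the weighted sample mean, 2.04 here.
```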

Significance level for the confidence interval of parameter estimates, pci, specified as the comma-separated pair consisting of 'Alpha' and a scalar value in the range (0,1). The confidence level of pci is 100(1-Alpha)% . The default is 0.05 for 95% confidence.

For example, for 99% confidence limits, you can specify the confidence level as follows.

Example: 'Alpha',0.01

Data Types: single | double

Number of trials for the corresponding element of data, specified as the comma-separated pair consisting of 'Ntrials' and a scalar or a vector of the same size as data.

Applies only to binomial distribution.

Example: 'Ntrials',total

Data Types: single | double

Location parameter for the half-normal distribution, specified as the comma-separated pair consisting of 'mu' and a scalar value.

Applies only to half-normal distribution.

Example: 'mu',1

Data Types: single | double

Fitting algorithm control parameters, specified as the comma-separated pair consisting of 'Options' and a structure returned by statset.

Not applicable to all distributions.

Use the 'Options' name-value pair argument to control details of the maximum likelihood optimization when fitting a custom distribution. For parameter names and default values, type statset('mlecustom'). You can set the options under a new name and use that in the name-value pair argument. mle interprets the following statset parameters for custom distribution fitting.

| Parameter | Value |
|---|---|
| 'GradObj' | 'on' or 'off', indicating whether or not fmincon can expect the custom function provided with the nloglf input argument to return the gradient vector of the negative loglikelihood as a second output. Default is 'off'. mle ignores 'GradObj' when using fminsearch. |
| 'DerivStep' | The relative difference, specified as a scalar or a vector the same size as start, used in finite difference derivative approximations when using fmincon and 'GradObj' is 'off'. Default is eps^(1/3). mle ignores 'DerivStep' when using fminsearch. |
| 'FunValCheck' | 'on' or 'off', indicating whether or not mle should check the values returned by the custom distribution functions for validity. Default is 'on'. A poor choice of starting point can sometimes cause these functions to return NaNs, infinite values, or out-of-range values if they are written without suitable error checking. |
| 'TolBnd' | An offset for lower and upper bounds when using fmincon. Default is 1e-6. mle treats lower and upper bounds as strict inequalities, that is, open bounds. With fmincon, this is approximated by creating closed bounds inset from the specified lower and upper bounds by TolBnd. |

Example: 'Options',statset('mlecustom')

Data Types: struct
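For instance, here is a sketch that starts from the custom-fitting defaults and overrides individual fields; the data and density are illustrative:

```matlab
% Adjust selected statset fields on top of the 'mlecustom' defaults.
opt = statset('mlecustom');
opt = statset(opt,'FunValCheck','off','TolBnd',1e-8);
rng default
x = randn(500,1);
lappdf = @(x,mu,b) exp(-abs(x-mu)/b)/(2*b);   % Laplace density
phat = mle(x,'pdf',lappdf,'start',[0,1],'Options',opt)
```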

Lower bounds for distribution parameters, specified as the comma-separated pair consisting of 'Lowerbound' and a vector the same size as start.

This name-value pair argument is valid only when you use the pdf and cdf, logpdf and logsf, or nloglf input arguments.

Example: 'Lowerbound',0

Data Types: single | double

Upper bounds for distribution parameters, specified as the comma-separated pair consisting of 'Upperbound' and a vector the same size as start.

This name-value pair argument is valid only when you use the pdf and cdf, logpdf and logsf, or nloglf input arguments.

Example: 'Upperbound',1

Data Types: single | double
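For instance, here is a sketch that keeps both parameters of a custom Weibull fit strictly positive during the search; the data and starting values are illustrative:

```matlab
% Bound both parameters away from invalid (nonpositive) values.
rng default
x = wblrnd(2,1.5,500,1);
phat = mle(x,'pdf',@(x,a,b) wblpdf(x,a,b),'start',[1,1], ...
    'Lowerbound',[0 0],'Upperbound',[Inf Inf])
```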

Optimization function mle uses in maximizing the likelihood, specified as the comma-separated pair consisting of 'Optimfun' and either 'fminsearch' or 'fmincon'.

Default is 'fminsearch'.

You can specify 'fmincon' only if Optimization Toolbox™ is available.

The 'Optimfun' name-value pair argument is valid only when you fit custom distributions, that is, when you use the pdf and cdf, logpdf and logsf, or nloglf input arguments.

Example: 'Optimfun','fmincon'

## Output Arguments


Parameter estimates, returned as a scalar value or a row vector.

Confidence intervals for parameter estimates, returned as a column vector or a matrix, depending on the number of parameters and hence the size of phat.

pci is a 2-by-k matrix, where k is the number of parameters mle estimates. The first and second rows of pci show the lower and upper confidence limits, respectively.

## More About

### Survival Function

The survival function is the probability of survival as a function of time. It is also called the survivor function. It gives the probability that the survival time of an individual exceeds a certain value. Since the cumulative distribution function, F(t), is the probability that the survival time is less than or equal to a given point in time, the survival function for a continuous distribution, S(t), is the complement of the cumulative distribution function: S(t) = 1 – F(t).
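In code, this relationship can be sketched for an exponential distribution with mean 5 (the distribution choice is illustrative):

```matlab
% Survival function as the complement of the cdf: S(t) = 1 - F(t).
t = linspace(0,20,100);
S = 1 - expcdf(t,5);     % exponential with mean 5
plot(t,S), xlabel('t'), ylabel('S(t)')
```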

## Tips

When you supply distribution functions, mle computes the parameter estimates using an iterative maximization algorithm. With some models and data, a poor choice of starting point can cause mle to converge to a local optimum that is not the global maximizer, or to fail to converge entirely. Even in cases for which the log-likelihood is well-behaved near the global maximum, the choice of starting point is often crucial to convergence of the algorithm. In particular, if the initial parameter values are far from the MLEs, underflow in the distribution functions can lead to infinite log-likelihoods.
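As a sketch of this sensitivity (with arbitrary simulated gamma data), a moderate starting point converges while an extreme one can underflow or stall:

```matlab
% Starting values near plausible parameter magnitudes behave well...
rng default
x = gamrnd(3,2,1000,1);
phat = mle(x,'pdf',@(x,a,b) gampdf(x,a,b),'start',[1,1])
% ...while an extreme start, e.g. 'start',[1e8,1e8], can drive the
% density to underflow, making the loglikelihood -Inf everywhere.
```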