Right way to calculate the determinant of a variance-covariance matrix

Hi,
I'm working on a series of optimization problems in which the objective function to be minimized is the determinant of a variance-covariance matrix. If we write the determinant as a function f(q; x), then x is the vector of decision variables and q is a vector of parameters drawn from a user-supplied probability distribution.
For each value of the q vector (a Monte Carlo sample), I run the optimization to find the optimal values of x. Note that the matrices are square and symmetric.
Because of floating-point issues, there are scenarios in which the determinant of a matrix is extremely small (but not zero). Strangely, the rank function reports that such matrices are NOT full rank, so I first use rank to check for full rank and return NaN when the check fails, to avoid reporting wrong determinant values. This surprises me, because I would expect det to behave like rank and not return a misleading determinant when the matrix is not full rank.
When the rank function does indicate full rank, I use det to calculate the determinant. My expectation is that in those cases I can be confident det has reported the correct result (but see point b below).
This simple (but very costly) logic is built into the user-defined function used for the actual optimization.
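Roughly, the check inside my objective function looks like the sketch below (guardedDet and the variable names are placeholders, not my actual code):
function d = guardedDet(C)
% C is the square, symmetric variance-covariance matrix built from q and x.
% Current logic: check rank first and only trust det when the matrix is full rank.
    if rank(C) < size(C, 1)   % expensive: rank is based on the SVD
        d = NaN;              % flag rank-deficient cases instead of a bogus determinant
    else
        d = det(C);
    end
end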
My questions are:
a) Is there a way to avoid using rank/det in my context? My objective is speed (as long as the answer is not incorrect), and in the present approach I seem to be duplicating effort by calling rank followed by det.
(By the way, there seem to be a lot of posts online suggesting that det should not be used, but I guess they do NOT apply to my scenario?)
b) Even using the rank function first does NOT provide a guarantee. I see some matrices with crazy determinant values; my hypothesis is that those matrices are not actually full rank, but the rank function reports them as full rank. The optimization should have just reported NaN, yet it returns a gibberish value.
c) I'm considering using cond or rcond, but I'm not sure whether they would do what I want, and I'm worried about the cutoff values to use (the range of the user-supplied q is quite vast, so determining a single cutoff that works well is tough). I don't want to use eps; some suggest 1e-12, but I understand the choice depends on the context of the problem (what I'm doing can be used across different industries/functions, so ideally I'd have a good generic default value that the user could override if required). A sketch of what I have in mind is below.
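For illustration, this is the kind of rcond-based screen I have in mind (the 1e-12 default is only a placeholder; picking that cutoff is exactly what I'm unsure about):
function d = guardedDetRcond(C, tol)
% Possible alternative: screen with rcond (a cheap reciprocal condition estimate
% computed from the LU factorization) instead of rank, then compute det as before.
    if nargin < 2
        tol = 1e-12;          % placeholder default; meant to be overridable by the user
    end
    if rcond(C) < tol
        d = NaN;              % numerically too close to singular
    else
        d = det(C);
    end
end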
Any thoughts or suggestions would be highly appreciated. (I'm using MATLAB R2015a on Win 8 and Win 10 machines.)
Thanks
Hari
  1 Comment
Hari on 8 Sep 2015
Hi,
I haven't gotten any responses to the above question. Is it worded too vaguely?
Let me know, and I'll try to improve it.
Thanks Hari


Answers (1)

Steven Lord on 8 Sep 2015
Using the determinant to determine if a matrix is singular is a Bad Idea. This matrix A, according to DET, is exactly singular ... but it's extremely well behaved, being a scalar multiple of the identity matrix!
A = 0.1*eye(500);
det(A)    % 0.1^500 underflows to 0, so det calls A exactly singular
cond(A)   % yet the condition number is 1: A could hardly be better behaved
On the other hand, this matrix B has a determinant value very far from 0, but it is horribly conditioned. If not for the scaling factor of 1/eps, the determinant would suggest it's very close to singular (which the condition number supports), but the scaling factor inflates the determinant value.
B = [1 1; 1 1+eps]./eps;
det(B)    % about 4.5e15 (roughly 1/eps), nowhere near 0
cond(B)   % yet B is horribly ill conditioned
Why don't you describe the problem you're trying to optimize in a little more detail? Why are you choosing to use the determinant of the variance-covariance matrix as your objective function? What does a minimizer of that determinant mean for the underlying problem?
  1 Comment
Hari on 8 Sep 2015
Hey Steven,
Thanks for responding. Your examples are right on the mark; the only issue is that I don't know what approach to take for my scenario.
I'm working on developing optimal designs, and my focus is on minimizing generalized variance. Check out this very basic AMSTAT paper on minimizing the determinant of the variance-covariance matrix. At a high level: find the values of the decision variables (the design) for which the determinant is minimized, run an experiment based on that specific design, and finally fit a regression model (to the design + response); this yields parameter estimates with the minimum possible variance.
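To make the objective concrete, here is a toy illustration (not my actual model, just the shape of the computation for a simple regression design):
% Quadratic model in one factor, evaluated at three candidate design points.
xDesign = [-1; 0; 1];                            % decision variables: where to run the experiment
X = [ones(size(xDesign)), xDesign, xDesign.^2];  % model matrix for y = b0 + b1*x + b2*x^2
M = X' * X;                                      % information matrix; Cov(beta_hat) is proportional to inv(M)
dCriterion = 1 / det(M)                          % det(inv(M)) = 1/det(M); minimizing this is the D-criterion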
By the way, on a slightly related topic, Loren does mention scenarios in which the inv function could be used (the covariance of least-squares estimates); my assumption at this point is that an explicit det call is similarly a necessary (evil) for my objective function... but there is possibly a workaround for avoiding rank as the singularity check. A sketch of one idea is below.
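For example, since the matrix is a symmetric covariance matrix, maybe something like the following could replace the rank check entirely (just an idea I'm toying with, not validated):
function d = detViaChol(C)
% Possible workaround for a symmetric positive (semi)definite C: use the second
% output of chol to detect failure instead of calling rank, and recover the
% determinant from the Cholesky factor.
    [R, p] = chol(C);
    if p > 0
        d = NaN;              % chol failed: C is not numerically positive definite
    else
        % det(C) = prod(diag(R))^2; the log-determinant 2*sum(log(diag(R)))
        % avoids underflow/overflow if that is a concern
        d = prod(diag(R))^2;
    end
end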
Hope that helps
Thanks Hari

