"Roger Stafford" wrote in message <je370o$ldd$1@newscl01ah.mathworks.com>...
> "Brian " <bwgould@wisc.edu> wrote in message <je2a4c$mjk$1@newscl01ah.mathworks.com>...
> > I have a function that is defined for M observations. This function is determined by the current values of K parameters. How can I return an M x K matrix of derivatives without having to put the gradient function within an M dimension loop?
>          
> In deriving partial derivative estimates using the finite differences of the 'gradient' function, you need a grid of values with as many dimensions as there are independent variables. That is, 'gradient' needs to see your function varying with respect to each individual independent variable while the remaining variables are all held constant. How do your K parameters enter into this? Are they the independent variables?
>
> None of this seems compatible with a simple two-dimensional M by K matrix. Your description sounds as though you were thinking of pairwise variances or correlations between K variables using M statistical observations. Finding partial derivatives with 'gradient' is a very different matter. I think you need to explain your problem in far greater detail.
>
> Roger Stafford
To further explain, I teach a graduate course in econometrics. I am trying to write some code for my students that estimates the K parameters of a nonlinear regression model using the Gauss-Newton algorithm, where I have a couple of thousand observations (M) available for estimation. At each iteration I need the K-vector of partial derivatives with respect to the parameters, evaluated at each of the M observations, i.e. an M x K Jacobian matrix. How do I do this? My reading of the 'gradient' function is that it will do this for only one observation at a time.
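For a Gauss-Newton Jacobian you do not need 'gradient' (which expects gridded data) or a loop over the M observations at all: a forward finite difference loops over the K parameters only, and each function evaluation returns the fitted values for all M observations at once, filling a whole column of the Jacobian. Below is a minimal sketch under stated assumptions — the function name 'fdjac', the step-size rule, and the exponential example model are all illustrative choices, not anything fixed by the thread; the model handle is assumed to return an M-by-1 column vector.

```
function J = fdjac(f, beta)
% FDJAC  Forward finite-difference Jacobian (illustrative sketch).
%   f    - handle mapping a K-vector of parameters to an M-by-1 vector
%          of fitted values (one entry per observation)
%   beta - current K-vector of parameter values
%   J    - M-by-K matrix, J(i,k) = d f_i / d beta_k (approximately)
K  = numel(beta);
f0 = f(beta);                 % M-by-1 fitted values at the current beta
J  = zeros(numel(f0), K);
h  = sqrt(eps);               % common step-size choice for forward differences
for k = 1:K                   % loop over K parameters, NOT M observations
    bk   = beta;
    step = h * max(abs(beta(k)), 1);   % scale step to the parameter size
    bk(k) = bk(k) + step;
    J(:, k) = (f(bk) - f0) / step;     % all M derivatives for parameter k
end
end
```

Usage with a hypothetical exponential regression y = b1*exp(b2*x): given M-vectors x and y, define f = @(b) b(1)*exp(b(2)*x), compute J = fdjac(f, beta), and take the Gauss-Newton step beta = beta + (J'*J) \ (J'*(y - f(beta))). If analytic derivatives of the model are available, filling the columns of J directly with them is cheaper and more accurate than finite differences.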
