multivariate newton's method not working
Hi. I'm trying to compute the right number of option contracts per dollar owned, for call options of various strike prices, given their current prices per contract, the strike prices, and the estimated mean and standard deviation of the underlying stock price at expiration.

Newton's method is supposed to work in multiple dimensions just as it does in one. That is, to close in on a point where the derivative of a function is 0 (which will usually be a minimum or a maximum), you start from an initial guess x, and then on each iteration take Δx = -f'/f'', where f' and f'' are the first and second derivatives of the function you're trying to maximize. In multivariate form, Δx = -(∇²f)^(-1) * ∇f, where ∇f is the gradient of the function, ∇²f is the Hessian (the higher-dimensional equivalent of the second derivative, a matrix of partial derivatives), and (∇²f)^(-1) is the matrix inverse of ∇²f.

Here is the code I have created. It doesn't produce any errors, but I don't understand why it's not converging; in fact it seems to be slowly going in the wrong direction. The current settings are a stock that's currently $445 a share and expected to rise $1.50 per day for the next 9 days (cough cough Eli Lilly cough cough), but with a standard deviation of $5.30 per day. The mean grows with time and the standard deviation with the square root of time, so after 9 days the standard deviation of the price's probability distribution is sqrt(9) = 3 times as much, or $15.90, and the expected price per share is up $13.50 from the current price of $445.
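To be concrete about the update rule I mean, here is a minimal sketch of the multivariate Newton step on a made-up toy quadratic (written in Python/NumPy for brevity; this is an illustration of the formula only, not my options problem):

```python
import numpy as np

# Toy problem (made up for illustration): minimize
#   f(x) = (x0 - 1)^2 + 2*(x1 + 3)^2 + x0*x1
# whose gradient and Hessian are known in closed form.

def grad(x):
    return np.array([2*(x[0] - 1) + x[1],
                     4*(x[1] + 3) + x[0]])

def hess(x):
    # Constant for a quadratic; symmetric positive definite here.
    return np.array([[2.0, 1.0],
                     [1.0, 4.0]])

x = np.array([10.0, -10.0])  # initial guess
for _ in range(5):
    # Newton step: dx = -(Hessian)^(-1) * gradient,
    # computed with a linear solve rather than an explicit inverse.
    x = x - np.linalg.solve(hess(x), grad(x))

print(x)  # for a quadratic, Newton lands on the stationary point in one step
```

Solving a linear system instead of forming the explicit inverse (the MATLAB equivalent would be `hessian\gradient` instead of `hessian^-1`) is the usual practice, since it is cheaper and numerically better behaved.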
--------------------
clear all
close all

currentprice = 445;
optionstrikeprices = [442.5, 460, 470, 480];
calloptionprices = [8.675, 1.88, .685, .4];
expectedpriceatexpiration = currentprice + 1.5*9;
standarddeviationatexpiration = 5.3*sqrt(9);

% The cumulative probability function of a gaussian random variable
% with standard deviation 1 is .5*(1+erf(x/sqrt(2)))
quantitytoown = zeros(size(optionstrikeprices));
currentutility = 0;
utilitydifferentials = zeros(1, size(optionstrikeprices,2)-1);
utilityhessian = zeros(size(utilitydifferentials,2), size(utilitydifferentials,2));
gaussiandistributiondomain = -4:.1:4;
gaussiandistribution = exp(-.5*gaussiandistributiondomain.^2);
gaussiandistribution = gaussiandistribution/sum(gaussiandistribution);

for counter3 = 1:9
    for counter2 = 1:size(utilitydifferentials,2)
        quantitytoowntest = quantitytoown + [zeros(1,counter2-1), .00001, zeros(1,size(quantitytoown,2)-counter2-1), -.00001];
        utility = 0;
        for counter = 1:size(gaussiandistribution,2)
            endingprice = expectedpriceatexpiration + gaussiandistributiondomain(counter)*standarddeviationatexpiration;
            calloptionvaluesatend = endingprice - optionstrikeprices;
            calloptionvaluesatend = calloptionvaluesatend.*(calloptionvaluesatend>0);
            % *2/(1+endingprice/currentprice) because it is making a benchmark
            % out of 50% dollars and 50% shares at their starting price
            performance = (1 + quantitytoowntest*(calloptionvaluesatend-calloptionprices)')*2/(1+endingprice/currentprice);
            if performance < 1e-20
                performance = 1e-20;
            end
            utility = utility + gaussiandistribution(counter)*log(performance);
        end
        utilitydifferentials(counter2) = (utility-currentutility)*100000; % Now stores the gradient
        for counter4 = 1:counter2 % Now need the hessian
            quantitytoowntest2 = quantitytoowntest + [zeros(1,counter4-1), .00001, zeros(1,size(quantitytoown,2)-counter4-1), -.00001];
            utility2 = 0;
            for counter = 1:size(gaussiandistribution,2)
                endingprice = expectedpriceatexpiration + gaussiandistributiondomain(counter)*standarddeviationatexpiration;
                calloptionvaluesatend = endingprice - optionstrikeprices;
                calloptionvaluesatend = calloptionvaluesatend.*(calloptionvaluesatend>0);
                performance = (1 + quantitytoowntest2*(calloptionvaluesatend-calloptionprices)')*2/(1+endingprice/currentprice);
                if performance < 1e-20
                    performance = 1e-20;
                end
                utility2 = utility2 + gaussiandistribution(counter)*log(performance);
            end
            utilityhessian(counter2,counter4) = ((utility2-utility)*100000 - utilitydifferentials(counter2))*100000;
        end
    end
    for counter2 = 1:(size(utilitydifferentials,2)-1)
        utilityhessian(counter2, (counter2+1):size(utilitydifferentials,2)) = utilityhessian((counter2+1):size(utilitydifferentials,2), counter2)';
    end
    utilitydifferentials = utilitydifferentials*utilityhessian^-1;
    quantitytoown = quantitytoown - [utilitydifferentials, -sum(utilitydifferentials)]
    currentutility = 0;
    for counter = 1:size(gaussiandistribution,2)
        endingprice = expectedpriceatexpiration + gaussiandistributiondomain(counter)*standarddeviationatexpiration;
        calloptionvaluesatend = endingprice - optionstrikeprices;
        calloptionvaluesatend = calloptionvaluesatend.*(calloptionvaluesatend>0);
        performance = (1 + quantitytoown*(calloptionvaluesatend-calloptionprices)')*2/(1+endingprice/currentprice);
        if performance < 1e-20
            performance = 1e-20;
        end
        currentutility = currentutility + gaussiandistribution(counter)*log(performance);
    end
    currentutility
end
3 Comments
Torsten
on 14 Jun 2023
I don't see what x and f from Newton's formula are in your code, or where x is updated, so it's impossible to give advice.
Answers (1)
Matt J
on 15 Jun 2023
Edited: Matt J on 15 Jun 2023
"You seem to be under the impression that it is far more extraordinary to thoroughly understand Newton's method, which I learned in high school calculus class, than it is to understand the use of obscure matlab toolboxes."
It is not that it is extraordinary to understand plain-vanilla Newton's method. It is just that plain Newton's method is rarely used, as it is less reliable than certain refined versions of the algorithm, some of which are implemented in the Optimization Toolbox for your convenience. In particular, damped versions of Newton's method, which adjust the stepsize adaptively using a line search, have a larger radius of convergence than plain-vanilla Newton.
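For illustration, here is a minimal damped-Newton sketch with a backtracking line search, on a made-up 1-D test function (written in Python rather than MATLAB for brevity; the Toolbox implementations are considerably more refined than this):

```python
import math

# f(x) = sqrt(1 + x^2) is smooth and strictly convex with its minimum at x = 0,
# yet the plain Newton map x -> x - f'/f'' = -x^3 diverges from any |x| > 1.

def f(x):   return math.sqrt(1.0 + x*x)
def fp(x):  return x / math.sqrt(1.0 + x*x)          # f'(x)
def fpp(x): return (1.0 + x*x) ** -1.5               # f''(x)

x = 2.0  # plain Newton would jump to -8, then +512, and blow up
for _ in range(50):
    step = -fp(x) / fpp(x)   # full Newton step
    t = 1.0
    # Backtrack until the step produces sufficient decrease (simplified Armijo rule).
    while f(x + t*step) > f(x) + 1e-4 * t * fp(x) * step:
        t *= 0.5
    x = x + t*step

print(x)  # near 0: the damped version converges where the undamped one diverges
```

The only difference from plain Newton is the inner `while` loop that shrinks the step until the objective actually improves; that is what buys the larger radius of convergence.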
With that in mind, your problem could be that you are simply not starting close enough to the desired solution for convergence to occur. Like the others have said though, the code is very hard to read, and so any bugs present in your implementation are very likely to escape our notice.
2 Comments
Matt J
on 26 Jun 2023
Edited: Matt J on 26 Jun 2023
"I believe the function doesn't have any tricky saddle points in it so it should converge from pretty much anywhere"
That doesn't matter. Consider the strictly convex function f(x)=|x|^1.5, which has no saddle points. If you initialize plain vanilla Newton's method from x=1 (or x=-1), it will just oscillate forever between +1 and -1, never converging to the minimum at x=0.
syms x real
f=abs(x)^(3/2);
nextPoint(x) = simplify( x - diff(f,1)./diff(f,2) ,3) %newton update step
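The same behavior can be checked numerically; a quick sketch (in Python here, but the arithmetic is identical in MATLAB):

```python
# For f(x) = |x|^(3/2) we have f'(x) = 1.5*sign(x)*|x|^0.5 and
# f''(x) = 0.75*|x|^(-0.5), so the Newton update simplifies to
#   x - f'(x)/f''(x) = x - 2x = -x,
# i.e. the iterates just flip sign forever instead of converging to 0.
def newton_step(x):
    fp  = 1.5 * (abs(x) ** 0.5) * (1 if x > 0 else -1)  # f'(x)
    fpp = 0.75 / (abs(x) ** 0.5)                        # f''(x)
    return x - fp / fpp

xs = [1.0]
for _ in range(6):
    xs.append(newton_step(xs[-1]))
print(xs)  # [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
```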