# Problem while implementing "Gradient Descent Algorithm" in Matlab

1,528 views (last 30 days)
Atinesh S on 11 Apr 2015
Edited: Jayan Joshi on 15 Oct 2019
I'm solving a programming assignment in a machine learning course, in which I have to implement the gradient descent algorithm as shown below.
I'm using the following code
% the text file contains 2 comma-separated values in each row
X = [ones(m, 1), data(:,1)];
theta = zeros(2, 1);
iterations = 1500;
alpha = 0.01;
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    k=1:m;
    j1=(1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k))
    j2=((1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k)))*(X(k,2))
    theta(1)=theta(1)-alpha*(j1);
    theta(2)=theta(2)-alpha*(j2);
    J_history(iter) = computeCost(X, y, theta);
end
end
theta = gradientDescent(X, y, theta, alpha, iterations);
On running the above code, I get this error message:

Racz Robert on 6 Jan 2019
Check the brackets, mate; then it should work!
Cheers

Matt J on 11 Apr 2015
j2 is not a scalar, but you are trying to assign it to a scalar location, theta(2). Did you intend for this line
k=1:m;
to be a for-loop?
for k=1:m

Atinesh S on 11 Apr 2015
Why is j2 not a scalar? The expression
(1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k))
produces a scalar result, which can be multiplied by
X(k,2)
to produce a scalar result. But in MATLAB, I've seen that the result stored in j2 is a vector. Why?
Matt J on 12 Apr 2015
k is not a scalar. You defined it to be the vector 1:m. Therefore X(k,2) is also a vector.
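To see this concretely, here is a small sketch (toy values, not the assignment's data) showing that indexing with a vector returns a vector:

```matlab
m = 5;
X = [ones(m,1), (1:m)'];  % toy design matrix
k = 1:m;                  % k is a 1-by-5 vector, not a scalar
v = X(k,2);               % selecting rows k of column 2 gives a 5-by-1 vector
size(v)                   % [5 1]
% So a scalar times X(k,2) is a vector, and assigning it to theta(2) fails.
```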

Jayan Joshi on 15 Oct 2019
Edited: Jayan Joshi on 15 Oct 2019
predictions =X*theta;
theta=theta-(alpha/m*sum((predictions-y).*X))';
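Note that sum((predictions-y).*X)' equals the matrix product X'*(predictions-y), so the same update can be written without sum. A small sketch with toy data (the .*X form relies on implicit expansion, R2016b or later):

```matlab
% Toy data, purely illustrative (not the assignment's data file)
X = [1 1; 1 2; 1 3];
y = [2; 3; 5];
theta = [0; 0];
alpha = 0.01; m = length(y);

predictions = X*theta;
u1 = theta - (alpha/m*sum((predictions - y).*X))';  % Jayan's form
u2 = theta - (alpha/m)*(X'*(predictions - y));      % matrix-product form
% u1 and u2 are identical column vectors
```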

Margo Khokhlova on 19 Oct 2015
Edited: Walter Roberson on 19 Oct 2015
Well, sort of super late, but you just got the brackets wrong... This one works for me:
k=1:m;
j1=(1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k))
j2=(1/m)*sum(((theta(1)+theta(2).*X(k,2))-y(k)).*X(k,2))
theta(1)=theta(1)-alpha*(j1);
theta(2)=theta(2)-alpha*(j2);
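For context, these corrected updates slot back into the original function like this (a sketch; computeCost is the cost function from the assignment, as in the question):

```matlab
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y);                 % number of training examples
J_history = zeros(num_iters, 1);
k = 1:m;                       % index vector (constant, so it can live outside the loop)
for iter = 1:num_iters
    j1 = (1/m)*sum((theta(1)+theta(2).*X(k,2)) - y(k));
    j2 = (1/m)*sum(((theta(1)+theta(2).*X(k,2)) - y(k)).*X(k,2));
    theta(1) = theta(1) - alpha*j1;
    theta(2) = theta(2) - alpha*j2;
    J_history(iter) = computeCost(X, y, theta);
end
end
```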

#### 1 Comment

Nancy Irisarri on 13 May 2019
Calculation of k can be outside the for loop. Improves performance!

Sesha Sai Anudeep Karnam on 7 Aug 2019
Edited: Sesha Sai Anudeep Karnam on 7 Aug 2019
temp0 = theta(1)-alpha*((1/m)*(theta(1)+theta(2).*X(k,2)-y(k)));
temp1 = theta(2)- alpha*((1/m)*(theta(1)+theta(2).*X(k,2)-y(k)).*X(k,2));
theta(1) = temp0;
theta(2) = temp1;
% this code gives approximate values, but when submitting I get 0 points for it
% Theta found by gradient descent:
% -3.588389
% 1.123667
% Expected theta values (approx)
% -3.6303
% 1.1664
% How do I overcome this?

#### 1 Comment

Shekhar Raj on 19 Sep 2019
The code below gave me the exact values:
for iter = 1:num_iters
    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    % theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    % of the cost function (computeCost) and gradient here.
    %
    Prediction = X * theta;
    temp1 = alpha/m * sum((Prediction - y));
    temp2 = alpha/m * sum((Prediction - y) .* X(:,2));
    theta(1) = theta(1) - temp1;
    theta(2) = theta(2) - temp2;
    % ============================================================
end

Shekhar Raj on 19 Sep 2019
The code below works for me:
Prediction = X * theta;
temp1 = alpha/m * sum((Prediction - y));
temp2 = alpha/m * sum((Prediction - y) .* X(:,2));
theta(1) = theta(1) - temp1;
theta(2) = theta(2) - temp2;

#### 1 Comment

Jayan Joshi on 15 Oct 2019
Thank you, this really helped. I tried a more vectorized form of this and it worked.
predictions =X*theta;
theta=theta-(alpha/m*sum((predictions-y).*X))';

ICHEN WU on 8 Nov 2015
Can you tell me why my answer is not correct? I feel they should be the same.
theta(1)=theta(1)-(alpha/m)*sum( (X*theta)-y);
theta(2)=theta(2)-(alpha/m)*sum( ((X*theta)-y)'*X(:,2));

Austin Lindquist on 7 Mar 2016
By assigning theta(1) before assigning theta(2), you've introduced a side effect.
One way of writing it:
temp1 = theta(1)-(alpha/m)*sum(X*theta-y);
theta(2) = theta(2)-(alpha/m)*sum((X*theta-y)'*X(:,2));
theta(1) = temp1;
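The side effect can be seen directly with toy numbers (illustrative values only):

```matlab
X = [1 1; 1 2; 1 3]; y = [1; 2; 2];   % toy data
alpha = 0.1; m = 3; theta = [1; 1];

% Sequential update: theta(1) changes before theta(2)'s gradient is
% computed, so theta(2) is updated against a half-updated parameter vector.
theta_seq = theta;
theta_seq(1) = theta_seq(1) - (alpha/m)*sum(X*theta_seq - y);
theta_seq(2) = theta_seq(2) - (alpha/m)*sum((X*theta_seq - y)'*X(:,2));

% Simultaneous update: both components use the same old theta.
theta_sim = theta - (alpha/m)*(X'*(X*theta - y));

% theta_seq(2) and theta_sim(2) differ, which is the bug Austin describes.
```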
pavan B on 20 Feb 2017
The above one works perfectly. Try my code below too.
Earlier I used:
h = X * theta;
a0 = (1/m)*sum(h-y);
a1 = (1/m)*sum((h-y)'*x1);
Surprisingly, it didn't work.
Working code:
x1 = X(:,2);
a0 = (1/m)*sum(X * theta-y);
a1 = (1/m)*sum((X * theta-y)'*x1);
a = [a0;a1];
theta = theta - (alpha*a);
If anyone finds out what's wrong with my earlier code, it would be appreciated.
Leon Cai on 6 Apr 2017
Yeah, I tried h = X*theta and it didn't work either. I'm thinking that if h is computed once outside the loop, its value stays unchanged as we update theta.
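If that's the cause, recomputing h inside the loop on every iteration should fix it (a sketch using the same variable names as above):

```matlab
for iter = 1:num_iters
    h = X*theta;                             % recompute predictions from current theta
    theta = theta - (alpha/m)*(X'*(h - y));  % simultaneous, vectorized update
    J_history(iter) = computeCost(X, y, theta);
end
```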

Ali Dezfooli on 17 Jun 2016
In this line
X = [ones(m, 1), data(:,1)];
you add a bias column to X, but in the formula from your picture (Ng's slides), when you compute theta(2) you should leave the bias out.

Utkarsh Anand on 17 Mar 2018
Looking at the problem, I also think that you cannot initialize theta as zero.