Exam 2
Exercise 2. Consider the following optimization problem:
close all;
clear;
clc;
We need to analyze the convexity of each term and of the objective function as a whole. A function is convex if its Hessian matrix is positive semi-definite.
In this case the objective function is f(x) = 2*x1^2 + x2^2 + 3*x3^2 + 2*x4^2 + x1*x3 + 2*x2*x3 + x3*x4 - 5*x1 - 4*x3 + 3*x4, i.e. the quadratic f(x) = (1/2)*x'*Q*x + q'*x.
Since the objective is quadratic, its Hessian is constant and equal to the matrix Q, which we build directly:
Q = [4 0 1 0
0 2 2 0
1 2 6 1
0 0 1 4];
fprintf('Eigen Values Objective Function: \n');
display(eig(Q))
Output
Eigen Values Objective Function:
1.0608
3.5933
4.0000
7.3460
Since all the eigenvalues are strictly positive, the Hessian matrix is positive definite, and therefore this is a strongly convex optimization problem.
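Besides inspecting the eigenvalues, positive definiteness can also be verified with a Cholesky factorization. The following is a minimal sketch (not part of the original script) using MATLAB's chol, whose second output is zero exactly when the matrix is symmetric positive definite.
% Alternative positive-definiteness check (illustrative sketch):
% chol returns p = 0 if and only if Q is symmetric positive definite.
[~, p] = chol(Q);
if p == 0
    fprintf('Q is positive definite, so the objective is strongly convex.\n');
else
    fprintf('Q is not positive definite.\n');
end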
Starting from the point (0, 0, 0, 0), how many iterations does the gradient method with exact line search need?
The function is a quadratic of the form f(x) = (1/2)*x'*Q*x + q'*x, so both the gradient, g = Q*x + q, and the exact line-search step size have closed forms.
Pseudocode: gradient method with exact line search (steepest descent direction)
% Gradient method with exact line search
Choose x0, tolerance = 1e-6, i = 0
g = Q*x0 + q
while (norm(g) > tolerance)
    [step direction] d = -g;
    [step size]      alpha = -(g'*d)/(d'*Q*d);
    x = x + alpha*d;
    g = Q*x + q;
    i = i + 1;
end
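The step size in the pseudocode is the exact line-search minimizer of phi(alpha) = f(x + alpha*d); setting phi'(alpha) = 0 for the quadratic gives alpha = -(g'*d)/(d'*Q*d). The short sketch below (not part of the original listing; variable names are illustrative) checks this closed form against a numerical one-dimensional minimization with fminbnd at the starting point.
% Illustrative check of the exact line-search step size (not in the original script)
Q = [4 0 1 0; 0 2 2 0; 1 2 6 1; 0 0 1 4];
q = [-5; 0; -4; 3];
f = @(x) 0.5*x'*Q*x + q'*x;
x = [0; 0; 0; 0];                               % starting point
g = Q*x + q;                                    % gradient at x
d = -g;                                         % steepest descent direction
alpha_closed = -(g'*d)/(d'*Q*d);                % closed-form exact step
alpha_num    = fminbnd(@(a) f(x + a*d), 0, 10); % numerical 1-D minimization
fprintf('alpha closed form: %.6f, numerical: %.6f \n', alpha_closed, alpha_num);
The complete gradient-method script is listed next.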
q = [-5
0
-4
3];
x0 = [0
0
0
0];
tolerance = 1e-6;
% Starting Point
x = x0;
i = 0;
g = Q*x + q;
fprintf('Gradient Method with exact line search \n');
fprintf('i \t f(x) \t \t \t ||grad f(x)|| \n');
while (norm(g)>tolerance)
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
fprintf('%i \t %2.10f \t \t %2.10f \n',i,v,norm(g));
% Direction
d = -g;
% Step size
alpha = -(g'*d)/(d'*Q*d);
% New point
x = x + alpha*d;
% Next iteration
i = i+1;
end
Output
Gradient Method with exact line search
i f(x) ||grad f(x)||
0 0.0000000000 7.0710678119
1 -5.0403225806 2.4685851395
2 -5.5976694222 1.1368021920
3 -5.7883000924 0.9808455712
4 -5.8850831815 0.6005067114
5 -5.9375039954 0.5321261507
. . .
. . .
. . .
45 -6.0000000000 0.0000027326
46 -6.0000000000 0.0000016771
47 -6.0000000000 0.0000014863
48 -6.0000000000 0.0000009122
With the gradient method using exact line search, 48 iterations are needed to reach the optimal solution.
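This fairly slow convergence is consistent with the theory for steepest descent on a quadratic: with exact line search the objective gap contracts linearly at a rate governed by the condition number of Q. The snippet below (a sketch, not part of the original script) computes the classical worst-case contraction factor ((kappa-1)/(kappa+1))^2.
% Theoretical linear rate of steepest descent on a quadratic (illustrative sketch)
kappa = max(eig(Q)) / min(eig(Q));      % condition number of Q (Q is SPD)
rate  = ((kappa - 1)/(kappa + 1))^2;    % worst-case contraction of f(x)-f* per iteration
fprintf('cond(Q) = %.4f, worst-case contraction factor = %.4f \n', kappa, rate);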
Starting from the point (0, 0, 0, 0), how many iterations are needed? Write the point found by the method at each iteration. With this method, the search direction combines the gradient computed at the current iteration with the direction computed at the previous iteration.
The optimization problem is unconstrained and the objective function is quadratic with Q positive definite, as follows:
Pseudocode: conjugate gradient method for quadratic functions with exact line search
% Conjugate gradient method
Choose x0, tolerance = 1e-6, i = 0
g = Q*x0 + q
while (norm(g) > tolerance)
    if i == 0
        d = -g;
    else
        beta = norm(g)^2 / norm(g_prev)^2;
        d = -g + beta*d_prev;
    end
    alpha = norm(g)^2 / (d'*Q*d);
    x = x + alpha*d;
    d_prev = d;  g_prev = g;
    g = Q*x + q;
    i = i + 1;
end
clear x v g
x0 = [0
0
0
0];
x = x0;
i = 0;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
fprintf("\n \n Conjugate Gradient with exact line search \n");
fprintf("i \t f(x) \t \t \t ||grad f(x)|| ");
while (norm(g)>tolerance)
fprintf('%i \t %2.10f \t \t %2.10f \n', i, v, norm(g));
if i == 0
% Direction on the first iteration
d = -g;
else
beta = (norm(g)^2) / (norm(g_prev)^2);
% Direction
d = -g + beta*d_prev;
end
% Step Size
alpha = (norm(g)^2) / (d'*Q*d);
x = x + alpha*d;
d_prev = d;
g_prev = g;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
i = i + 1;
end
Output
Conjugate Gradient with exact line search
i f(x) ||grad f(x)||
0 0.0000000000 7.0710678119
1 -5.0403225806 2.4685851395
2 -5.6669616780 0.9277888981
3 -5.9998774891 0.0296673867
In this case we can clearly see that convergence is much faster than with the previous method: the conjugate gradient method with exact line search needs just 3 iterations to find the minimum.
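This fast termination reflects the key property of the conjugate gradient method on quadratics: in exact arithmetic the search directions are mutually Q-conjugate, so the method terminates in at most n = 4 iterations. The sketch below (not part of the original listing) re-runs the iteration while storing the directions and checks that the off-diagonal entries of D'*Q*D are numerically zero.
% Q-conjugacy check of the CG search directions (illustrative sketch)
Q = [4 0 1 0; 0 2 2 0; 1 2 6 1; 0 0 1 4];
q = [-5; 0; -4; 3];
x = zeros(4,1);
g = Q*x + q;
D = [];                                 % columns will hold the search directions
for k = 1:4                             % the quadratic is 4-dimensional, so at most 4 steps
    if k == 1
        d = -g;
    else
        beta = (norm(g)^2) / (norm(g_prev)^2);
        d = -g + beta*d_prev;
    end
    alpha = (norm(g)^2) / (d'*Q*d);
    x = x + alpha*d;
    d_prev = d;  g_prev = g;
    g = Q*x + q;
    D = [D d];                          % store the direction just used
end
disp('D''*Q*D (off-diagonal entries should be ~0):');
disp(D'*Q*D)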
Starting from the point (0, 0, 0, 0), how many iterations are needed? The objective function is strongly convex because its Hessian is positive definite, and we obtain global convergence because the Newton direction is a descent direction:
Pseudocode: Newton method basic version
% Newton method (basic version)
Choose x0, tolerance = 1e-6, i = 0
g = gradient of f at x0, H = Hessian of f at x0 (here H = Q)
while (norm(g) > tolerance)
    [search direction] d = -H\g;
    x = x + d;
    update g (and H, if not constant) at the new x;
    i = i + 1;
end
x0 = [0
0
0
0];
i = 0;
x = x0;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
fprintf("\n \n Newton method basic version \n");
fprintf("i \t f(x) \t \t \t ||grad f(x)|| \n");
while (norm(g) > tolerance)
fprintf("%i \t %2.10f \t \t %2.10f \n", i, v, norm(g));
d = -Q\g;
x = x + d;
i = i + 1;
v = (1/2)*x'*Q*x + q'*x;
g = Q*x + q;
end
Output
Newton method basic version
i f(x) ||grad f(x)||
0 0.0000000000 7.0710678119
Since the objective is quadratic and Q is its exact Hessian, the Newton direction d = -Q\g takes x0 straight to the stationary point of f, so the basic Newton method reaches the minimizer after a single iteration.
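As a final sanity check (an illustrative sketch, not part of the original script), the minimizer can be obtained directly from the first-order optimality condition Q*x + q = 0, which is exactly what a single Newton step computes on a quadratic; this also gives the point that all the methods above converge to.
% Direct solution of Q*x + q = 0 (illustrative sketch, not in the original script)
x_star = -Q\q;                                   % minimizer; should give (1, -1, 1, -1)
f_star = (1/2)*x_star'*Q*x_star + q'*x_star;     % optimal value; should give -6
fprintf('x* = (%g, %g, %g, %g), f(x*) = %g \n', x_star, f_star);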