Computational Project Part 1


Goal: minimize a multivariate function f(x) using a gradient-based method with backtracking line search.

  1. Use backtracking as described in class to compute step lengths (so you need to set the parameters s, α, and β). Conditions: s > 0, α ∈ (0,1), β ∈ (0,1). A sketch of this rule in code is given below.

    1. Start with the initial guess for the step length t^(k) = s > 0.

    2. If f(x^(k)) − f(x^(k) + t^(k) d^(k)) ≥ −α t^(k) ∇f(x^(k))^T d^(k), then t^(k) decreases the function sufficiently and the current t^(k) is chosen.

    3. Otherwise, repeatedly reduce t^(k) by multiplying by β ∈ (0,1) until the condition in step 2 is met: t^(k) ← β t^(k).
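A minimal Python sketch of this backtracking rule, assuming the objective f and its gradient grad are supplied as callables (the helper name and signature are my own, not part of the assignment):

```python
import numpy as np

def backtracking(f, grad, x, d, s=2.0, alpha=0.25, beta=0.5):
    """Backtracking line search: shrink the trial step t by beta until
    the sufficient-decrease condition in step 2 holds."""
    t = s  # step 1: initial guess t^(k) = s
    # steps 2/3: accept t once f(x) - f(x + t*d) >= -alpha * t * grad(x)^T d
    while f(x) - f(x + t * d) < -alpha * t * (grad(x) @ d):
        t *= beta  # reduce the trial step length
    return t
```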

Observation (take f_1(x) as an example)

f_1(x) = x_1² + x_2² + x_3², x^(0) = (1, 1, 1)^T

  • Let α = 0.25 and s = 2. The larger β is, the larger the final step length t^(k) that is chosen: a larger β reduces t^(k) by a smaller amount at each trial, so there is a higher chance that a larger t^(k) will satisfy the condition in step 2.

  • Let β = 0.6 and s = 2. The smaller α is, the larger the final step length t^(k) that is chosen: a smaller α requires less decrease in the function for a given t^(k), so larger step lengths pass the test.

  2. Print the initial point and, for each iteration, print the search direction, the step length, and the new iterate x^(k+1). If the number of iterations is more than 15, print the details of just the first 10 iterations as well as the details of the last 5 iterations before the stopping condition is met. Indicate if the iteration maximum is reached. (One way to realize this printing rule is sketched below.)
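A small sketch of that printing rule, under the assumption that each iteration's (direction, step length, new iterate) is collected in a list; the helper name and tuple layout are hypothetical:

```python
def print_history(x0, history):
    """Print x^(0) and per-iteration details; if there are more than 15
    iterations, show only the first 10 and the last 5."""
    print("initial point:", x0)
    n = len(history)
    shown = range(n) if n <= 15 else [*range(10), *range(n - 5, n)]
    for k in shown:
        d, t, x_new = history[k]
        print(f"iter {k}: direction={d}, step length={t}, x(k+1)={x_new}")
```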

The implementation of the gradient-based method with backtracking:

      1. Set the initial point x^(0).

      2. Find the next solution x^(k+1) = x^(k) + t^(k) d^(k), with backtracking step size t^(k) and gradient direction d^(k) = −∇f(x^(k)).

      3. Repeat step 2 until the termination criteria are met.
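A minimal sketch of this loop, reusing the backtracking helper above; the gradient-norm tolerance and iteration cap are my assumptions, not values specified by the assignment:

```python
def gradient_descent(f, grad, x0, s=2.0, alpha=0.25, beta=0.5,
                     tol=1e-5, max_iter=10000):
    """Gradient descent with backtracking line search (steps 1-3 above)."""
    x = np.asarray(x0, dtype=float)   # step 1: initial point x^(0)
    for k in range(max_iter):
        d = -grad(x)                  # gradient direction d^(k) = -grad f(x^(k))
        if np.linalg.norm(d) < tol:   # assumed stopping criterion
            return x, k
        t = backtracking(f, grad, x, d, s, alpha, beta)
        x = x + t * d                 # step 2: x^(k+1) = x^(k) + t^(k) d^(k)
    return x, max_iter                # iteration maximum reached
```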

  3. Test your algorithms on the following test problems (set s = 2, α = 0.25, β = 0.5 for consistency):

    1. f_1(x) = x_1² + x_2² + x_3², x^(0) = (1, 1, 1)^T

    2. f_2(x) = x_1² + 2x_2² − 2x_1x_2 − 2x_2, x^(0) = (0, 0)^T

    3. f_3(x) = 100(x_2 − x_1²)² + (1 − x_1)², x^(0) = (−1.2, 1)^T

    4. f_4(x) = (x_1 + x_2)⁴ + x_2², x^(0) = (2, −3)^T

    5. f_5(x) = (x_1 − 1)² + (x_2 − 1)² + c(x_1² + x_2² − 0.25)², x^(0) = (1, −1)^T, for c = 1, c = 10, and c = 100. Comment on how larger c affects the performance of the algorithm.
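As an illustration, the test problems can be coded as small Python functions; here is f_5 together with its gradient (a sketch assuming, as the c = 1, 10, 100 runs suggest, that c multiplies the quartic penalty term):

```python
def make_f5(c):
    """f_5(x) = (x_1 - 1)^2 + (x_2 - 1)^2 + c*(x_1^2 + x_2^2 - 0.25)^2."""
    def f(x):
        return (x[0] - 1)**2 + (x[1] - 1)**2 + c * (x[0]**2 + x[1]**2 - 0.25)**2
    def grad(x):
        p = x[0]**2 + x[1]**2 - 0.25    # shared penalty factor
        return np.array([2 * (x[0] - 1) + 4 * c * p * x[0],
                         2 * (x[1] - 1) + 4 * c * p * x[1]])
    return f, grad
```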

Results for f_5 with the three values of c:

  c     Iterations   x^(k)                        Gradient at x^(k)
  1     11           [0.56408574, 0.56408685]     [-0.00000762, -0.00000367]
  10    16           [0.40261231, 0.40260761]     [ 0.00000987, -0.00001345]
  100   226          [0.35979117, 0.35978795]     [ 0.00001533, -0.00000256]

In most cases, the larger c is, the more iterations are required to converge to the optimal solution. Using the norm of the final gradient as the performance metric (the larger the norm, the farther the final gradient is from 0), the plot on the right shows that the value of c does not affect the final accuracy much.
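The comparison behind this table can be reproduced with the helpers sketched above (make_f5 and gradient_descent are my hypothetical names, not the assignment's):

```python
for c in (1, 10, 100):
    f, g = make_f5(c)
    x, iters = gradient_descent(f, g, [1.0, -1.0])
    # report the iteration count and the final gradient norm as the metric
    print(c, iters, x, np.linalg.norm(g(x)))
```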

(e) Are your computational results consistent with the theory of gradient-based methods?

Under the assumptions that f(x) is bounded below and the gradient of f(x) is Lipschitz continuous over ℝⁿ, the computational results are consistent with the theory of gradient-based methods with backtracking. For the functions in part (d), we show that

∇f(x^(k)) → 0 as k → ∞.

  1. f_1(x) = x_1² + x_2² + x_3², x^(0) = (1, 1, 1)^T, k = 1:

∇f_1(x^(k)) = 0, x^(k) = x* = [0, 0, 0]

(With s = 2, the trial steps t = 2 and t = 1 fail the sufficient-decrease condition, and t = 0.5 maps x^(0) = (1, 1, 1) exactly onto the origin, so a single iteration suffices.)

f_1(x*) = 0 ⇒ x* is the global minimum since f_1(x) ≥ 0.

  2. f_2(x) = x_1² + 2x_2² − 2x_1x_2 − 2x_2, x^(0) = (0, 0)^T, k = 33:

∇f_2(x^(k)) = [−0.00001526, 0], x^(k) = [0.99998474, 0.99999237]

As k → ∞ (if there were no stopping criterion), x^(k) → x* = [1, 1] and ∇f_2(x^(k)) → 0.

The Hessian

∇²f_2(x) = [ 2  −2 ]
           [−2   4 ]

is positive definite, so f_2 is convex and x* is the global minimum. (A quick numerical check of this Hessian is sketched after this list.)

  3. f_3(x) = 100(x_2 − x_1²)² + (1 − x_1)², x^(0) = (−1.2, 1)^T, k = 421:

∇f_3(x^(k)) = [0.0000036, 0.00000819], x^(k) = [1.00000999, 1.00002003]

As k → ∞ (if there were no stopping criterion), x^(k) → x* = [1, 1] and ∇f_3(x^(k)) → 0.

f_3(x*) = 0 ⇒ x* is the global minimum since f_3(x) ≥ 0.

  4. f_4(x) = (x_1 + x_2)⁴ + x_2², x^(0) = (2, −3)^T, k = 617:

∇f_4(x^(k)) = [0.00000997, −0.00000019], x^(k) = [0.01356503, −0.00000508]

As k → ∞ (if there were no stopping criterion), x^(k) → x* = [0, 0] and ∇f_4(x^(k)) → 0.

f_4(x*) = 0 ⇒ x* is the global minimum since f_4(x) ≥ 0.

  5. f_5(x) = (x_1 − 1)² + (x_2 − 1)² + c(x_1² + x_2² − 0.25)², x^(0) = (1, −1)^T:

The final iterates and gradients for c = 1, 10, and 100 are those shown in the table in part (d). In all three cases the final gradient is close to 0, and ∇f_5(x^(k)) → 0 as k → ∞ (if there were no stopping criterion).
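Returning to the Hessian claim in item 2 above, a quick numerical confirmation of positive definiteness (a sketch using NumPy's symmetric eigenvalue routine):

```python
H = np.array([[2.0, -2.0],
              [-2.0, 4.0]])       # Hessian of f_2, constant in x
print(np.linalg.eigvalsh(H))      # both eigenvalues positive => H is PD
```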
