Dynamic Programming
Zhiyang Yao
Reference:
T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. The MIT Press, 1997.
What is Dynamic Programming?
Background
Divide-and-conquer algorithms partition the problem into
independent subproblems, solve the subproblems recursively, and
then combine their solutions to solve the original problem.
What happens if the subproblems are not independent, that is, if subproblems share common subsubproblems? In this case the divide-and-conquer technique does more work than necessary, repeatedly solving the shared subproblems.
Introduction
Dynamic programming is a metatechnique (not a specific algorithm), like divide-and-conquer, and it predates computer programming. Dynamic programming solves every subproblem just once and saves the subproblem answers in a table.
Dynamic Programming vs. Divide-and-Conquer
Dynamic Programming is similar to Divide & Conquer:
– Both are algorithm design methods
– Both solve problems by combining solutions to subproblems
Dynamic Programming is different from Divide & Conquer:
– Dynamic Programming solves optimization problems
  • The goal is to find the best of many possible answers
– Dynamic Programming is applied when subproblems are not independent
  • "Not independent" means the same subproblem can show up many times
  • Each subproblem is computed just once, then the answer is saved
(Diagram: top-down vs. bottom-up evaluation.)
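To make the top-down/bottom-up distinction concrete, here is a minimal Python sketch; Fibonacci is our own illustrative example (it is not in the slides), chosen because its subproblems overlap heavily.

from functools import lru_cache

# Top-down: recurse as divide-and-conquer would, but cache (memoize)
# each subproblem's answer so it is computed only once.
@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: fill the table of answers in order of increasing
# subproblem size, so every value needed is already available.
def fib_bottom_up(n: int) -> int:
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_top_down(30) == fib_bottom_up(30) == 832040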
Implementing Dynamic Programming
Steps in the development of a dynamic-programming algorithm:
• Characterize the structure of an optimal solution
• Recursively define the value of an optimal solution
• Compute the value of an optimal solution in a bottom-up fashion
• Construct an optimal solution from computed information (if only the value of an optimal solution is required, this step can be omitted)
Example: matrix-chain multiplication
If A=[p×q] and B=[q×r], then the number of scalar multiplications needed to compute AB is p × q × r.
$$
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}
\end{pmatrix}
\times
\begin{pmatrix}
b_{11} & b_{12} & b_{13}\\
b_{21} & b_{22} & b_{23}\\
b_{31} & b_{32} & b_{33}\\
b_{41} & b_{42} & b_{43}
\end{pmatrix}
=
\begin{pmatrix}
c_{11} & c_{12} & c_{13}\\
c_{21} & c_{22} & c_{23}\\
c_{31} & c_{32} & c_{33}
\end{pmatrix}
$$
A1A2A3A4 = (A1(A2(A3A4)))
         = (A1((A2A3)A4))
         = ((A1A2)(A3A4))
         = ((A1(A2A3))A4)
         = (((A1A2)A3)A4)
A product of matrices is fully parenthesized if it is either a single matrix or the product of
two fully parenthesized matrix products, surrounded by parentheses.
Different parenthesizations produce different numbers of scalar multiplications (fewer is better).
Example: matrix-chain multiplication (cont.)
For example, A1=[10×5], A2=[5×20], A3=[20×10], A4=[10×5]:
Case 1: (A1(A2(A3A4)))
(A3A4) = [20×5], perform 20×10×5 = 1000 multiplications
(A2(A3A4)) = [5×5], perform 5×20×5 = 500 multiplications
(A1(A2(A3A4))) = [10×5], perform 10×5×5 = 250 multiplications
total: 1000 + 500 + 250 = 1750
Case 2: ((A1A2)(A3A4))
(A1A2) = [10×20], perform 10×5×20 = 1000 multiplications
(A3A4) = [20×5], perform 20×10×5 = 1000 multiplications
((A1A2)(A3A4)) = [10×5], perform 10×20×5 = 1000 multiplications
total: 1000 + 1000 + 1000 = 3000
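These totals can also be checked mechanically. In the Python sketch below (the helper name cost is ours), cost(i, k, j) is the p_{i-1}·p_k·p_j charge for combining Ai…k with Ak+1…j:

# Dimensions p0..p4 for A1=[10x5], A2=[5x20], A3=[20x10], A4=[10x5]
p = [10, 5, 20, 10, 5]

def cost(i, k, j):
    # Scalar multiplications to combine A(i..k) with A(k+1..j):
    # the operands have shapes p[i-1] x p[k] and p[k] x p[j].
    return p[i - 1] * p[k] * p[j]

# Case 1: (A1 (A2 (A3 A4)))
case1 = cost(3, 3, 4) + cost(2, 2, 4) + cost(1, 1, 4)
# Case 2: ((A1 A2) (A3 A4))
case2 = cost(1, 1, 2) + cost(3, 3, 4) + cost(1, 2, 4)
assert (case1, case2) == (1750, 3000)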
Example: matrix-chain multiplication (cont.)
The matrix-chain multiplication problem can be stated as follows: given a chain 〈A1, A2, …, An〉 of n matrices, where for i = 1, 2, …, n matrix Ai has dimension p_{i-1} × p_i, fully parenthesize the product A1 A2 … An in a way that minimizes the number of scalar multiplications.
Exhaustively checking all possible parenthesizations does not yield an efficient algorithm: the number of parenthesizations grows as Ω(p^n) for some constant p > 2, so brute force takes exponential time.
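The exponential count can be seen directly: P(1) = 1 and P(n) = Σ_{k=1}^{n-1} P(k)·P(n−k), since a chain of n matrices splits at its outermost level into a chain of k and a chain of n−k. A small Python sketch (the function name is our own) shows these counts, the Catalan numbers, exploding:

from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n: int) -> int:
    # P(1) = 1; P(n) sums over the position k of the outermost split.
    if n == 1:
        return 1
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))

print([num_parens(n) for n in range(1, 11)])
# -> [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]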
Using Dynamic Programming
Step 1: The structure of an optimal parenthesization
Observation:
Suppose an optimal parenthesization of the product A1…An splits the product between Ak and Ak+1: (A1…Ak)(Ak+1…An). The cost of this optimal parenthesization is the cost of computing (A1…Ak), plus the cost of computing (Ak+1…An), plus the cost of multiplying the two together.
Then the parenthesization of (A1…Ak) must itself be optimal, and so must that of (Ak+1…An). Why?
If there were a less costly way to parenthesize (A1…Ak), substituting that parenthesization into the optimal parenthesization of (A1…An) would produce another parenthesization of (A1…An) whose cost was lower than the optimum: a contradiction.
Conclusion:
An optimal solution to an instance of the matrix-chain multiplication problem contains within it optimal solutions to subproblem instances. This optimal substructure is the hallmark of the applicability of dynamic programming.
Using Dynamic Programming
Step 2: A recursive solution
Goal: define the value of an optimal solution recursively in terms of
the optimal solutions to subproblems.
Set Ai…j = (Ai…Aj), and let m[i,j] be the minimum number of scalar multiplications needed to compute the matrix Ai…j.
If we know how to get m[i,j], then the minimum cost of computing A1…n = (A1A2…An) is m[1,n].
If the optimal parenthesization splits the product as ((Ai…Ak)(Ak+1…Aj)), then:
m[i,j] = m[i,k] + m[k+1,j] + p_{i-1} p_k p_j
For each pair (i,j), we can try every k = i, i+1, …, j-1 and keep the best one.
So the recursive definition of m[i,j] can be written as:
$$
m[i,j] =
\begin{cases}
0 & \text{if } i = j,\\[2pt]
\min\limits_{i \le k < j} \{\, m[i,k] + m[k+1,j] + p_{i-1}\, p_k\, p_j \,\} & \text{if } i < j.
\end{cases}
$$
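Putting steps 2-4 together, here is a Python sketch in the style of MATRIX-CHAIN-ORDER and PRINT-OPTIMAL-PARENS from the cited Cormen et al. text (the Python names are ours). The table s records the best split point k for each (i, j), so step 4, constructing an optimal parenthesization, is a short recursion over s.

import sys

def matrix_chain_order(p):
    # p[0..n] holds the dimensions: matrix A_i is p[i-1] x p[i].
    # m[i][j] = minimum scalar multiplications for A_i..A_j;
    # s[i][j] = split point k achieving that minimum (indices 1..n).
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length, bottom-up
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):             # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def print_parens(s, i, j):
    # Step 4: rebuild an optimal parenthesization from the s table.
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(%s%s)" % (print_parens(s, i, k), print_parens(s, k + 1, j))

p = [10, 5, 20, 10, 5]                        # A1..A4 from the example
m, s = matrix_chain_order(p)
print(m[1][4], print_parens(s, 1, 4))         # -> 1500 (A1((A2A3)A4))

On the example chain this returns m[1,4] = 1500 with parenthesization (A1((A2A3)A4)), cheaper than either of the two cases worked above.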
Conclusion
Applications:
• minimum-spanning-tree algorithms
• Dijkstra's algorithm for shortest paths from a single source
• Chvátal's greedy set-covering heuristic
Using Dijkstra's algorithm to find the best set of cutters: time O(N²).
(Diagram: cutter-selection graph from the initial part P0 through cutters C1, Ci, Cj, …, Cn and intermediate parts P1, Pi, Pj to the final part Pn.)

