Dynamic Programming Examples



  1. optimal substructure (hardest)
  1. recursive formulation
  1. dynamic programming
  1. bookkeeping (for reconstruction’s sake)


Longest Common Subsequence

Simple task: given two strings $X, Y$ of lengths $n, m$, find the longest subsequence they have in common. A subsequence is formed by removing zero or more characters from a string, without reordering the characters that remain.

Optimal substructure

Here, we rely on one key insight. Look at the last character of both strings.

  1. If they are the same, then this last character is necessarily part of a longest common subsequence
  1. If they are different, then at least one of the last characters is not part of the longest common subsequence, so you can essentially chop off the last character. But which one? Well, that you don’t know. You must find out by trying both!

Recursive formulation

Let’s use a table to keep track of our results. In top-down recursion, the table lookup would just be a recursive call. However, today we are focusing on bottom-up solution approaches. As such, we define $C[i, j]$ to be the length of the longest common subsequence of $X_i, Y_j$, where $X_i$ is the prefix of $X$ with the first $i$ characters, and same deal with $Y_j$.

If $X[i] = Y[j]$, then

$C[i, j] = 1 + C[i-1, j-1]$

What if $X[i] \neq Y[j]$? Well, you’ve got some exploring to do!

$C[i, j] = \max(C[i-1, j], C[i, j-1])$

The two components essentially try the strings with one character removed each. Implicitly, this also tries the strings with both end characters removed, one step later (just roll out what $C[i-1, j]$ is equal to).

Dynamic programming

You start with the base case, and then you build up the table row by row. At element $C[i, j]$, you apply the recursive rules that we defined above. This either means adding one to the diagonal cell, or picking the maximum of the cell to the left and the cell above. This is a bottom-up dynamic programming algorithm.

For a top-down programming algorithm, you would make a recursive call every time you wanted to find the $C$ value of a smaller problem, but you would keep track of a table and short-circuit the recursion if you find that the subproblem has already been solved.
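As a concrete sketch, the bottom-up table fill might look like this in Python (the function name and 0-based string indexing are my own; $C[i][j]$ matches the definition in the text):

```python
def lcs_table(X, Y):
    """Fill C bottom-up: C[i][j] = LCS length of the first i characters
    of X and the first j characters of Y."""
    n, m = len(X), len(Y)
    C = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:           # last characters match
                C[i][j] = 1 + C[i - 1][j - 1]  # add one to the diagonal
            else:                              # try chopping off either one
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    return C
```

The answer for the full strings then sits in `C[n][m]`.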


Now, how do we reconstruct the LCS from the table? You can use the naive approach, which is to keep track of your string as you go. However, this does have a high space complexity.

To mitigate this, you can use a clever backtracking strategy. The lower-right corner of the table is the answer you get with $i = n, j = m$ (i.e. with the whole strings). Now, you apply the following algorithm on the table

  1. Look at $X[i], Y[j]$. If they match, pick this character and move diagonally.
  1. If they don’t match, find the element that was copied to the current element. This is either above or to the left. Go there, but don’t pick any characters. If there are ties, your choice doesn’t matter.

When you reach a $0$, you have reconstructed the LCS (albeit backwards).
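The backtracking steps above can be sketched as follows (a self-contained version that rebuilds the table first; the helper name is illustrative):

```python
def reconstruct_lcs(X, Y):
    """Build the LCS table, then backtrack from the lower-right corner."""
    n, m = len(X), len(Y)
    C = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = 1 + C[i - 1][j - 1]
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    i, j, out = n, m, []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])          # pick the character, move diagonally
            i, j = i - 1, j - 1
        elif C[i - 1][j] >= C[i][j - 1]:
            i -= 1                        # value was copied from above
        else:
            j -= 1                        # value was copied from the left
    return "".join(reversed(out))         # we collected it backwards
```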

Complexity analysis

To fill the table, we need $O(nm)$ steps. If you only need the length and not the reconstruction, you can actually skip storing the full table and just keep the last two rows in each iteration of the algorithm, because the decomposition never reaches back before $i-1$ or $j-1$.
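A sketch of that two-row trick, assuming only the length is needed (function name is illustrative):

```python
def lcs_length_two_rows(X, Y):
    """Same recurrence, but only the previous and current rows are stored,
    so extra space is O(m) instead of O(nm)."""
    n, m = len(X), len(Y)
    prev = [0] * (m + 1)               # row i-1
    for i in range(1, n + 1):
        cur = [0] * (m + 1)            # row i
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                cur[j] = 1 + prev[j - 1]
            else:
                cur[j] = max(prev[j], cur[j - 1])
        prev = cur
    return prev[m]
```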

Challenge problem

$X$ is the rows and $Y$ is the columns. What can you say about $X_2$ vs $Y_4$? Well, you can use the following chain of logic

  1. We know that $X_2 = Y_3$ because that’s a one surrounded by zeros
  1. We know that $X_1 = Y_4$ because of similar reasoning
  1. However, we know that $X_1 \neq Y_3$ because it’s a zero. Therefore, there is no equality ‘bridge’ between $X_2$ and $Y_4$, meaning that $X_2 \neq Y_4$.

You can do the same reasoning pattern for the symbols to figure out the rest of the matrix.

Knapsack problem: unbounded knapsack

Given items $i = 1, ..., k$ with weights $w_i$ and values $v_i$, what’s the most value I can get with a capacity of $n$? Let’s start with a simple variant of the problem, which assumes that you are in a warehouse and can take as many copies of item $i$ as you want.

Subproblem decomposition

If you take an element away from an optimal knapsack, the smaller knapsack that remains must also be optimal for its capacity. This is because if it weren’t optimal, you could take the true optimal smaller knapsack, add that element back, and create a better packing than the original, which is a contradiction.

Therefore, if you have a capacity $W$, the optimal packing is the best you can make with capacity $W - w_i$, plus a new object worth $v_i$. You have multiple object types, so you must try them all.

Recursive Formulation

So, if you define an array $K$ such that $K[w]$ is the best possible value for a specific capacity $w$, you can define the relationship as

$K[w] = \max_i(K[w - w_i] + v_i)$, where the maximum ranges over items with $w_i \leq w$.

If there is no $w_i$ with $w - w_i \geq 0$, then $K[w] = 0$.

Dynamic programming

You can just use this recurrence relationship to build a bottom-up algorithm
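A minimal bottom-up sketch of that recurrence (function name and inputs are illustrative):

```python
def unbounded_knapsack(capacity, weights, values):
    """K[w] = best value at capacity w, taking unlimited copies of each item."""
    K = [0] * (capacity + 1)
    for w in range(1, capacity + 1):
        best = 0                       # K[w] = 0 if no item fits
        for wi, vi in zip(weights, values):
            if wi <= w:                # only items that fit are candidates
                best = max(best, K[w - wi] + vi)
        K[w] = best
    return K[capacity]
```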

Challenge problem

What can we say about this?

  1. Because $2$ is the first capacity with a non-zero value, we conclude that there is an object of weight 2 and value 1
  1. Because there are no objects of weight 1, we conclude that there must be an object of weight 3 and value 3
  1. Therefore, the value of the knapsack at weight limit $4$ is at least $3$, because you can just reuse the knapsack of weight $3$. It’s only a lower bound because we don’t know whether there is a weight-4 object.
  1. We can also say that the value of the knapsack with limit $5$ is at least 4, because we can combine the objects of weight 2 and weight 3.
  1. We can also say that the value of the knapsack with limit 6 is at least 6, because you can take two objects of weight 3.

Knapsack Problem: 0/1 Knapsack

Here, we have a small problem. Previously, we were able to do maxi\max_i and find the best thing to add. However, if we can only take one of each item, we can’t blindly optimize. We must formulate a different solution.

The insight

If we are given the use of the first $k$ items, we can either choose or not choose to use the $k$-th item. If we don’t choose the $k$-th item, then we default to the optimal bag that has the same capacity but operates on the first $k-1$ items. If we do choose the $k$-th item, then we reduce the capacity and look at the optimal bag with the reduced capacity on the first $k-1$ items.

This sounds like a job for... a 2D array!

The setup

Let $K[w][k]$ be the optimal value at capacity $w$ with the first $k$ items available.

$K[w][k] = \max(K[w][k-1], K[w - w_k][k-1] + v_k)$, where the second option is only available when $w_k \leq w$.

The code
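A minimal bottom-up sketch of this recurrence (function name and 0-based item indexing are my own; the $K[w][k]$ layout from the text is kept):

```python
def knapsack_01(capacity, weights, values):
    """K[w][k] = best value at capacity w using only the first k items."""
    n = len(weights)
    K = [[0] * (n + 1) for _ in range(capacity + 1)]
    for k in range(1, n + 1):
        wk, vk = weights[k - 1], values[k - 1]
        for w in range(capacity + 1):
            K[w][k] = K[w][k - 1]                  # skip item k
            if wk <= w:                            # take item k, if it fits
                K[w][k] = max(K[w][k], K[w - wk][k - 1] + vk)
    return K[capacity][n]
```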

and this is what it looks like in tabular form

Independent Set

An independent set in a graph is a set of vertices such that no pair has an edge between them. We want to find the independent set with the largest total weight. On a general graph, this problem is actually NP-complete. So that’s no good. Let’s try a tree!

Hmmm. Let’s use the strategies that we’ve learned so far and see where they can take us. Let’s try building things up. A tree can be split into subtrees. In each subtree:

  1. The root can be selected. Then, its children can’t be, but the children of those children are just more subtrees
  1. The root may not be selected. Then, its children are subtree problems.

The approach

Keep two arrays. Let $A[u]$ be the weight of a maximum-weight independent set in the subtree rooted at node $u$, and let $B[u]$ be the best total weight over the subtrees of $u$’s children, assuming that $u$ itself is not selected (so $B[u] = \sum_{v \in Ch(u)} A[v]$). Technically, this distinction is not necessary, but it helps with notation. As such, you have

$A[u] = \max\left(\sum_{v \in Ch(u)} A[v],\; w_u + \sum_{v \in Ch(u)} B[v]\right)$

Note how the first term is when the root is not selected, and the second term is when the root is selected.

Dynamic programming

Here, we actually use a sort of bottom-up approach, even though we use recursion to get to the leaf nodes. This is because we need the children (base case) to be filled before we make our way up. It’s still bottom up.

This is different from recursive divide and conquer because you are taking advantage of overlapping subproblems!

Because you only fill up each entry once, the complexity is $O(|V|)$. Nice!
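A sketch of this post-order fill, assuming the tree is given as a child-list dictionary (the representation and names are my own):

```python
def max_weight_independent_set(children, weights, root):
    """A[u] = best independent-set weight in the subtree at u.
    B[u] = best weight in that subtree when u itself is not selected."""
    A, B = {}, {}

    def solve(u):
        # Post-order: children are filled in before u (bottom-up on the tree).
        for v in children.get(u, []):
            solve(v)
        B[u] = sum(A[v] for v in children.get(u, []))        # u not selected
        take_u = weights[u] + sum(B[v] for v in children.get(u, []))
        A[u] = max(B[u], take_u)

    solve(root)
    return A[root]
```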

Binary numbers

What is the number of binary strings of length $k$ you can make such that there are no two adjacent $0$’s?

So, you make an array $A[i]$ where $i$ is the length of the binary string. We have

$A[i] = A[i-1] + A[i-2]$

with base cases of $A[0] = 1$ (the empty string) and $A[1] = 2$ (both single-character strings are valid).
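A sketch of this recurrence, reading the problem as forbidding two adjacent zeros (the case split below is my reasoning for why the recurrence holds):

```python
def count_no_adjacent_zeros(k):
    """A[i] = number of length-i binary strings with no two adjacent 0's.
    A valid string either ends in "1" (extending any valid string of length
    i-1) or ends in "10" (extending any valid string of length i-2); the two
    cases are disjoint, so nothing is double counted."""
    if k == 0:
        return 1                       # just the empty string
    A = [0] * (k + 1)
    A[0], A[1] = 1, 2                  # base cases: "", then "0" and "1"
    for i in range(2, k + 1):
        A[i] = A[i - 1] + A[i - 2]
    return A[k]
```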

This one is tricky because there’s a big risk of double counting. The two cases in the recurrence are safe because they are distinguished by the last character: a valid string ends either in $1$ or in $10$, never both.

More examples