# Dynamic Programming — Part 1

Collection of interesting and challenging Dynamic Programming problems and solutions (in Python).

**Problem 1:**

Given a string containing just the characters `'('` and `')'`, find the length of the longest valid (well-formed) parentheses substring. (https://leetcode.com/problems/longest-valid-parentheses/)

**Solution:**

Let dp[i] be the length of the longest valid parentheses substring ending at index i.

Note that dp[i] = 0 if s[i] = '(' because a valid substring cannot end with '('.

If s[i] = ')' and s[i-1] = '(', then dp[i] = dp[i-2] + 2, because the valid substring is of the form 'A()' where A is a valid parentheses substring of length ≥ 0.

Else if s[i] = ')' and s[i-1] = ')', then the valid substring is of the form 'A(B)' where A is a valid parentheses substring of length ≥ 0 and B is a valid parentheses substring of length ≥ 1. For this case to apply, the character at index i-dp[i-1]-1 must be '(' so that it matches s[i].

Then dp[i] = dp[i-1] + dp[i-dp[i-1]-2] + 2, where dp[i-1] accounts for B and dp[i-dp[i-1]-2] accounts for A.

i-dp[i-1]-2 is the ending index of A.

Time and space complexity is O(N), where N is the length of the string s.
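A direct Python translation of this recurrence might look as follows (a sketch; the guards handle the boundary indices, and the explicit check for '(' at index i-dp[i-1]-1 makes the 'A(B)' case precise):

```python
def longest_valid_parentheses(s: str) -> int:
    n = len(s)
    dp = [0] * n           # dp[i]: longest valid substring ending at index i
    best = 0
    for i in range(1, n):
        if s[i] != ')':
            continue       # dp[i] stays 0 when s[i] == '('
        if s[i - 1] == '(':
            # Case 'A()': extend the valid substring A ending at i-2.
            dp[i] = (dp[i - 2] if i >= 2 else 0) + 2
        elif i - dp[i - 1] - 1 >= 0 and s[i - dp[i - 1] - 1] == '(':
            # Case 'A(B)': B has length dp[i-1]; its matching '(' is at i-dp[i-1]-1.
            dp[i] = dp[i - 1] + 2
            if i - dp[i - 1] - 2 >= 0:
                dp[i] += dp[i - dp[i - 1] - 2]   # prepend A
        best = max(best, dp[i])
    return best
```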

**Problem 2:**

You are given an integer array `prices` where `prices[i]` is the price of a given stock on the `ith` day.

Design an algorithm to find the maximum profit. You may complete at most `k` transactions. You may not engage in multiple transactions simultaneously (i.e., you must sell the stock before you buy again). (https://leetcode.com/problems/best-time-to-buy-and-sell-stock-iv/)

**Solution 2:**

Let dp[h][i] be the maximum profit achievable by day i using at most h transactions.

The objective is to decide on which day to buy the stock so that selling on the i-th day maximizes the profit.

Recursively we can define dp[h][i] as follows:

dp[h][i] = max(dp[h][i-1], max over all j < i of (dp[h-1][j-1] + prices[i] - prices[j]))

The first term dp[h][i-1] is because we may not sell the stock on the i-th day.

The next term is the maximum over all possible j < i where we buy the stock on the j-th day and sell on i-th day.

Time complexity of the approach is O(K*N²) and space complexity is O(K*N).

We can improve the run-time by observing the following:

- We can pre-compute possible trading windows. For each day i, go backwards from i while the stock price keeps decreasing; denote the day where this stops by 'prev_smaller_left[i]'.
- If the overall profit from selling on the i-th day is greater when buying on prev_smaller_left[i] than when buying on the 'curr_best_start' day, then assign curr_best_start = prev_smaller_left[i]; otherwise do not change curr_best_start.
- Let's consider 3 trading windows: [b1, s1], [b2, s2], [b3, s3] where bi = prev_smaller_left[si].
- We want to find dp[h][s3]. The possible days to buy are b1, b2 and b3, assuming b1 < s3, b2 < s3 and b3 < s3 (this holds by definition).
- We have already computed dp[h][s2]. Also let's assume that b1 < b2.
- While computing dp[h][s2] we found that dp[h-1][b2-1] + price[s2] - price[b2] > dp[h-1][b1-1] + price[s2] - price[b1].
- Equivalently, dp[h-1][b2-1] - dp[h-1][b1-1] + price[b1] - price[b2] > 0.
- Suppose that for s3, apart from b3, we consider the smaller of the 2 prices, i.e. price[b1]; the profit would be dp[h-1][b1-1] + price[s3] - price[b1].
- Let x = dp[h-1][b1-1] + price[s3] - price[b1] (if we take b1 for buying and s3 for selling).
- Let y = dp[h-1][b2-1] + price[s3] - price[b2] (if we take b2 for buying and s3 for selling).
- y - x = dp[h-1][b2-1] - price[b2] - dp[h-1][b1-1] + price[b1] > 0 (see above).
- Thus although price[s3] - price[b1] > price[s3] - price[b2], we still have dp[h-1][b1-1] + price[s3] - price[b1] < dp[h-1][b2-1] + price[s3] - price[b2].
- That is why 'curr_best_start' should be the buying day with the higher overall profit, rather than the one with the higher profit within the current trading window.

Time complexity is O(K*N) and space complexity is O(K*N).
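The comparison above always involves the quantity dp[h-1][j-1] - prices[j], so one common way to realize the O(K*N) bound (equivalent to maintaining curr_best_start, though not literally the prev_smaller_left formulation) is to carry the running maximum of that quantity. A sketch:

```python
def max_profit(k: int, prices: list[int]) -> int:
    n = len(prices)
    if n == 0 or k == 0:
        return 0
    # dp[h][i]: max profit by day i using at most h transactions
    dp = [[0] * n for _ in range(k + 1)]
    for h in range(1, k + 1):
        # best = max over j <= i of (dp[h-1][j-1] - prices[j]),
        # i.e. the best "buy day" seen so far for transaction h.
        best = -prices[0]
        for i in range(1, n):
            dp[h][i] = max(dp[h][i - 1], prices[i] + best)
            best = max(best, dp[h - 1][i - 1] - prices[i])
    return dp[k][n - 1]
```

For very large k (k ≥ N/2) one would in practice cap k, since no more than N/2 transactions can ever be used.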

**Problem 3:**

There are `n`

oranges in the kitchen and you decided to eat some of these oranges every day as follows:

- Eat one orange.
- If the number of remaining oranges (
`n`

) is divisible by 2 then you can eat n/2 oranges. - If the number of remaining oranges (
`n`

) is divisible by 3 then you can eat 2*(n/3) oranges.

You can only choose one of the actions per day. Return the minimum number of days to eat `n`

oranges. (https://leetcode.com/problems/minimum-number-of-days-to-eat-n-oranges/)

**Solution 3:**

There are multiple ways to solve this problem. One way is a greedy approach plus recursion with memoization. Note that a naive recursion that also branches on eating one orange at a time (n → n-1) has far too high a time complexity.

Let F(n) be the minimum number of days required to eat n oranges. Each case below also adds 1 day for the bulk-eating action itself.

If n is divisible by 6, i.e. by both 3 and 2, then F(n) = 1 + F(n/3), because n/3 < n/2 and F is non-decreasing.

If n is divisible by only 3, then F(n) = 1 + min(F(n/3), F((n-n%2)/2) + n%2).

The 2nd term means that we first eat 1 orange per day for n%2 days, halve the remainder, and then eat the remaining oranges in F((n-n%2)/2) days.

Similarly, if n is divisible by only 2, then F(n) = 1 + min(F(n/2), F((n-n%3)/3) + n%3).

The 2nd term means that we first eat 1 orange per day for n%3 days, take a third of the remainder, and then eat the remaining oranges in F((n-n%3)/3) days.

If n is divisible by neither 2 nor 3, then F(n) = 1 + min(F((n-n%2)/2) + n%2, F((n-n%3)/3) + n%3).

Time and space complexity is approximately O(log²N), since every reachable state has the form ⌊n/(2^a·3^b)⌋ for small a and b.
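A memoized sketch of this recursion; note that each step adds 1 day for the bulk-eating action, plus the n%2 (or n%3) single-orange days that precede it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_days(n: int) -> int:
    if n <= 1:
        return n          # F(0) = 0, F(1) = 1
    # 1 day for the bulk action, plus n%2 (or n%3) single-orange days first.
    return 1 + min(n % 2 + min_days(n // 2),
                   n % 3 + min_days(n // 3))
```

Taking the minimum over both branches covers all of the cases above at once (when n is divisible by 2 or 3 the corresponding remainder term is simply 0).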

We can also solve the problem using BFS over the states instead of the recursive approach.

Plain BFS works here while a plain (unmemoized, non-greedy) DFS does not: in BFS we reach the solution much faster because most values are reached via the n/2 and n/3 branches, so the n-1 branch contributes only a few extra states.

**Problem 4:**

You are given a tree with `n` nodes numbered from `0` to `n-1` in the form of a parent array where `parent[i]` is the parent of node `i`. The root of the tree is node `0`.

Implement the function `getKthAncestor(int node, int k)` to return the `k`-th ancestor of the given `node`. If there is no such ancestor, return `-1`.

The k-th ancestor of a tree node is the `k`-th node in the path from that node to the root. (https://leetcode.com/problems/kth-ancestor-of-a-tree-node/)

**Solution 4:**

One way is to walk up from the given node using the parent array until we reach the k-th ancestor. This is straightforward and costs O(N) per query in the worst case for a tree with N nodes, so M calls cost O(MN) in total.

A better solution is to use the binary lifting technique. Let dp[i][j] denote the 2^j-th ancestor of the i-th node. Then we can define:

dp[i][j] = dp[dp[i][j-1]][j-1]

i.e. the 2^j-th ancestor of the i-th node is the 2^(j-1)-th ancestor of its 2^(j-1)-th ancestor. For example, the 8-th ancestor of a node i is the 4-th ancestor of the 4-th ancestor of i.

We compute dp[i][j] for all i and all j such that 2^j ≤ N. Pre-computing this takes O(NlogN) time and space, because for each node i there are at most O(logN) values of j.

For each call, finding the K-th ancestor of a node takes O(logN) time, because we can write K as a sum of powers of 2 (at most logK terms).

For e.g. if K=15, then K=8+4+2+1=2³+2²+2¹+2⁰=dp[dp[dp[dp[i][3]][2]][1]][0].

Thus total time complexity is O(MlogN+NlogN) for M calls.
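A sketch of the binary lifting table and query, following the class interface given in the problem statement (`up[j][i]` plays the role of dp[i][j] with the dimensions swapped):

```python
class TreeAncestor:
    def __init__(self, n: int, parent: list[int]):
        log = max(1, n.bit_length())
        # up[j][i]: the 2^j-th ancestor of node i, or -1 if it does not exist
        self.up = [parent[:]]
        for j in range(1, log):
            prev = self.up[-1]
            self.up.append([prev[prev[i]] if prev[i] != -1 else -1
                            for i in range(n)])

    def getKthAncestor(self, node: int, k: int) -> int:
        # Decompose k into powers of two and jump once per set bit.
        for j in range(len(self.up)):
            if (k >> j) & 1:
                node = self.up[j][node]
                if node == -1:
                    return -1
        return node if k < (1 << len(self.up)) else -1
```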

**Problem 5:**

Define `S = [s,n]` as the string S which consists of n connected copies of the string s. For example, `["abc", 3]` = "abcabcabc".

On the other hand, we say that string s1 can be obtained from string s2 if we can remove some characters from s2 such that it becomes s1. For example, "abc" can be obtained from "abdbec" based on our definition, but it cannot be obtained from "acbbe".

You are given two non-empty strings s1 and s2 (each at most 100 characters long) and two integers 0 ≤ n1 ≤ 10⁶ and 1 ≤ n2 ≤ 10⁶. Now consider the strings S1 and S2, where `S1=[s1,n1]` and `S2=[s2,n2]`. Find the maximum integer M such that `[S2,M]` can be obtained from `S1`. (https://leetcode.com/problems/count-the-repetitions/)

**Solution 5:**

Let's find the maximum number of times s2 occurs (as a subsequence, matched greedily left to right) in [s1, n1]; call this count H. Then M = H/n2 (integer division).

We can count the occurrences of s2 by greedily scanning one copy of s1 at a time until the pattern of occurrences starts repeating, i.e. until we find the period of the occurrences. Say that before the period appears, the ending indices of the occurrences of s2 are [i1, i2, ..., ik], where each ij is an ending index of s2 inside [s1, q] for some q ≥ 1. We detect the period when some later ending index i(k+1) satisfies i(k+1) % len(s1) = i1 % len(s1); the period is then P = i(k+1) - i1.

Once we have the period and the occurrence indices inside [s1, q], we can use an arithmetic progression to count how many times each ij repeats within n1*len(s1) characters with period P, i.e. at ij, ij+P, ij+2*P, and so on.

Time complexity is O(len(s1)*len(s2)) and space complexity is O(len(s2)). To find the period we need to scan at most len(s2)+1 copies of s1, by pigeonhole on the position within s2 at the copy boundaries.
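A sketch of the period-finding idea (names are illustrative): `seen` records the position inside s2 after each full copy of s1, and a repeat of that position marks the period, after which the remaining copies are handled arithmetically.

```python
def get_max_repetitions(s1: str, n1: int, s2: str, n2: int) -> int:
    if n1 == 0:
        return 0
    j = 0                  # current position inside s2
    matches = 0            # complete copies of s2 matched so far
    count_at = [0]         # matches after k full copies of s1
    seen = {0: (0, 0)}     # pos in s2 -> (copies of s1 consumed, matches then)
    for k in range(1, n1 + 1):
        for ch in s1:      # greedily match one more copy of s1
            if ch == s2[j]:
                j += 1
                if j == len(s2):
                    j = 0
                    matches += 1
        count_at.append(matches)
        if j in seen:      # same s2-position after a full copy: period found
            start, m_start = seen[j]
            cycle_len = k - start
            cycle_matches = matches - m_start
            remaining = n1 - k
            total = matches + (remaining // cycle_len) * cycle_matches
            leftover = remaining % cycle_len
            total += count_at[start + leftover] - count_at[start]
            return total // n2
        seen[j] = (k, matches)
    return matches // n2   # n1 exhausted before any period appeared
```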

**Problem 6:**

Given a string `S`, count the number of distinct, non-empty subsequences of `S`. (https://leetcode.com/problems/distinct-subsequences-ii/)

**Solution:**

Let pos[S[i]] be the index of the last occurrence of S[i] before index i.

Let dp[i] be the number of distinct, non-empty subsequences for S[0..i], then we can define the recurrence relation:

dp[i] = dp[i-1] + (dp[i-1] - dp[pos[S[i]]-1]) + h, where h = 1 if this is the 1st occurrence of S[i] and h = 0 otherwise; on a first occurrence the subtracted term is dropped, and dp[-1] is treated as 0.

The 1st term denotes all distinct, non-empty subsequences for S[0..i-1]. So this term does not include S[i] in the subsequences.

The 2nd term includes S[i] in the subsequences: to every subsequence found up to index i-1 we append S[i]. But if S[i] has already occurred earlier (at pos[S[i]]), then appending S[i] to the subsequences formed before pos[S[i]] would be repetition, because those were already counted when we processed position pos[S[i]].

The last term ‘h’ is for just the subsequence ‘S[i]’. So we include this if we have not seen S[i] earlier else we do not include.

Time and space complexity is O(N).
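A sketch of the recurrence; here the `last` dictionary plays the role of the pos table, and the result is reduced modulo 10⁹+7 (as LeetCode requires, since the count grows exponentially):

```python
def distinct_subseq_count(s: str) -> int:
    MOD = 10**9 + 7
    n = len(s)
    dp = [0] * n
    last = {}        # last index at which each character occurred (the pos table)
    for i, c in enumerate(s):
        prev = dp[i - 1] if i > 0 else 0
        dp[i] = 2 * prev                        # dp[i-1] + (dp[i-1] - repeats)
        if c in last:
            p = last[c]
            dp[i] -= dp[p - 1] if p > 0 else 0  # remove double-counted ones
        else:
            dp[i] += 1                          # h: the subsequence c alone
        dp[i] %= MOD
        last[c] = i
    return dp[n - 1] if n else 0
```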

**Problem 7:**

There is a strange printer with the following two special requirements:

- The printer can only print a sequence of the same character each time.
- At each turn, the printer can print new characters starting from and ending at any places, and will cover the original existing characters.

Given a string consisting of lowercase English letters only, your job is to count the minimum number of turns the printer needs in order to print it. (https://leetcode.com/problems/strange-printer/)

**Solution:**

Let dp[i][j] be the minimum turns required to print the string s[i…j]. Then we recursively define this variable as follows:

dp[i][j] = min(dp[i][j-1] + 1, min(dp[i][k1-1] + dp[k1][j], dp[i][k2-1] + dp[k2][j], ...))

where i ≤ k1, k2, ... < j and s[k1] = s[j], s[k2] = s[j], and so on.

This means that the turn which prints the letter s[j] could have started at any earlier index k1, k2, k3, ... < j holding the same character, or we could simply print s[j] on its own at index j (the dp[i][j-1] + 1 term).

Then we recursively print the remaining parts, i.e. dp[i][k1-1], dp[i][k2-1] and so on. The term dp[k1][j] already includes the turn that prints s[j] from index k1 to j, plus the other turns from indices k1+1 to j that possibly overwrite parts of it.

A crucial observation: for any 2 indices p and q with s[p] = s[q], it is optimal to print in one turn from index p to q and then possibly overwrite p+1 to q-1, instead of printing p to r and r+1 to q in 2 separate turns (where p ≤ r ≤ q and s[p] != s[r]) and then printing p+1 to r and r to q-1.

This implies that from any index q, we should first look to print from an index p (< q) such that s[p] = s[q], and then recursively print the remaining parts. The objective is then to select, among multiple candidates p1, p2, ... with s[p1] = s[q], s[p2] = s[q], and so on, the one that minimizes the number of turns.

Time complexity is O(N³) and space complexity is O(N²).
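A memoized sketch of the interval DP. It uses the equivalent form dp(k, j-1) in place of dp(k, j): when s[k] = s[j], the letter s[j] rides along with the turn that prints s[k], so dp(k, j) = dp(k, j-1). Collapsing runs of equal characters first is an optional speed-up:

```python
from functools import lru_cache

def strange_printer(s: str) -> int:
    # Runs of equal characters never need separate turns; collapse them.
    t = []
    for c in s:
        if not t or t[-1] != c:
            t.append(c)
    s = ''.join(t)
    n = len(s)

    @lru_cache(maxsize=None)
    def dp(i: int, j: int) -> int:
        if i > j:
            return 0
        # Print s[j] on its own turn...
        best = dp(i, j - 1) + 1
        # ...or merge it into a turn starting at some k with s[k] == s[j].
        for k in range(i, j):
            if s[k] == s[j]:
                best = min(best, dp(i, k - 1) + dp(k, j - 1))
        return best

    return dp(0, n - 1)
```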

**Problem 8:**

A frog is crossing a river. The river is divided into some number of units, and at each unit, there may or may not exist a stone. The frog can jump on a stone, but it must not jump into the water.

Given a list of `stones`' positions (in units) in sorted **ascending order**, determine if the frog can cross the river by landing on the last stone. Initially, the frog is on the first stone, and the first jump must be `1` unit.

If the frog's last jump was `k` units, its next jump must be either `k - 1`, `k`, or `k + 1` units. The frog can only jump in the forward direction. (https://leetcode.com/problems/frog-jump/)

**Solution:**

Let dp[i][k] be True if it is possible to reach the end from index i via a jump of length k to the next stone, i.e. the stone at position stones[i]+k, provided a stone exists at that position (dp is True at the last index for every k).

We can recursively compute dp[i][k] as follows:

dp[i][k] = dp[j][k-1] or dp[j][k] or dp[j][k+1], where j > i is the index with stones[j] = stones[i] + k

Thus for each index i, we look at all indices j > i and compute dp[i][k] as above. Instead of iterating over all possible values of k, for each index i we use a dictionary that stores only the feasible values of k.

Time and space complexity is O(N²).

This approach is somewhat backward: we evaluate dp for every index i even though, starting from index 0, we might never reach that index (not all paths from index 0 are valid).

We can improve this by recursing forward from the 0-th index. The worst-case time complexity remains the same, but overall we achieve better performance because many unreachable states are skipped.
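The forward recursion can be sketched as follows (names are illustrative): dfs(pos, k) asks whether the frog, standing at position pos after a last jump of k units, can reach the last stone.

```python
from functools import lru_cache

def can_cross(stones: list[int]) -> bool:
    positions = set(stones)    # O(1) lookup: is there a stone at this unit?
    last = stones[-1]

    @lru_cache(maxsize=None)
    def dfs(pos: int, k: int) -> bool:
        if pos == last:
            return True
        for step in (k - 1, k, k + 1):
            if step > 0 and pos + step in positions and dfs(pos + step, step):
                return True
        return False

    # Starting with k = 0 forces the first jump to be exactly 1 unit.
    return stones[0] == 0 and dfs(0, 0)
```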

**Problem 9:**

There are `N`

piles of stones arranged in a row. The `i`

-th pile has `stones[i]`

stones.

A move consists of merging exactly `K`

consecutive piles into one pile, and the cost of this move is equal to the total number of stones in these `K`

piles.

Find the minimum cost to merge all piles of stones into one pile. If it is impossible, return `-1`

. (https://leetcode.com/problems/minimum-cost-to-merge-stones/)

**Solution**:

Let dp[i][j] denote the minimum cost to merge the stones from pile i to pile j. In fact, dp[i][j] will be a tuple [a, b] where 'a' is the sum of the piles from i to j and 'b' is the minimum cost to merge the piles from i to j.

Note that in general a != b when j-i+1 != K; in particular, if j-i+1 < K then b = 0 because fewer than K piles cannot be merged at all.

Also note that if the number of piles cannot be expressed in the form K + (K-1)*p for a non-negative integer p (equivalently, (N-1) % (K-1) != 0), then at the end we will be left with a number of piles that is < K but > 1, so we cannot merge them into one; return -1 in that case.

To compute dp[i][j]:

If j-i+1 < K: dp[i][j] = [sum(stones[i:j+1]), 0]

If j-i+1 = K: dp[i][j] = [sum(stones[i:j+1]), sum(stones[i:j+1])]

If j-i+1 > K: for each i ≤ k < j, find the minimum cost of merging the partially merged ranges [i, k] and [k+1, j] using dp[i][k] and dp[k+1][j]; whenever the range i..j can itself be reduced to exactly K piles (i.e. (j-i) % (K-1) = 0), add the sum a to merge them into a single pile.

Time complexity is O(N³) and space complexity is O(N²).
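A memoized sketch along these lines. Tracking the minimum cost alone suffices if the range sums come from a prefix-sum array; the split point advances in steps of K-1 so the left part always reduces to a single pile, which is a standard way to organize the dp[i][k] / dp[k+1][j] combination:

```python
from functools import lru_cache
from itertools import accumulate

def merge_stones(stones: list[int], k: int) -> int:
    n = len(stones)
    if (n - 1) % (k - 1) != 0:
        return -1
    prefix = [0] + list(accumulate(stones))   # prefix sums for range totals

    @lru_cache(maxsize=None)
    def dp(i: int, j: int) -> int:
        # Minimum cost to reduce piles i..j to as few piles as possible.
        if j - i + 1 < k:
            return 0
        # Split so that the left part reduces to exactly one pile.
        cost = min(dp(i, m) + dp(m + 1, j) for m in range(i, j, k - 1))
        if (j - i) % (k - 1) == 0:
            # i..j reduces to exactly k piles; merge them into one.
            cost += prefix[j + 1] - prefix[i]
        return cost

    return dp(0, n - 1)
```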

**Problem 10:**

You are installing a billboard and want it to have the largest height. The billboard will have two steel supports, one on each side. Each steel support must be of equal height.

You are given a collection of `rods` that can be welded together. For example, if you have rods of lengths `1`, `2`, and `3`, you can weld them together to make a support of length `6`.

Return the largest possible height of your billboard installation. If you cannot support the billboard, return `0`. (https://leetcode.com/problems/tallest-billboard/)

**Solution:**

The idea is to find 2 disjoint subsequences whose sums are equal and whose common sum is the maximum among all such pairs of disjoint subsequences.

One possible solution is as follows:

Let dp[i][h][k] be True if there are 2 disjoint subsequences of rods[0..i] whose sums are respectively h and k, else False. Then we can fill dp[i][h][k] using dynamic programming as follows:

dp[i][h][k] = (dp[i-1][h][k]) or (dp[i-1][h-rods[i]][k]) or (dp[i-1][h][k-rods[i]])

i.e. if we consider the i-th rod then either dp[i-1][h-rods[i]][k] or dp[i-1][h][k-rods[i]] must be true or if we do not consider the i-th rod then dp[i-1][h][k] must be true.

Then we need to return the maximum value of h s.t. dp[len(rods)-1][h][h] is true.

Time and space complexity of the approach is O(N*M²) where N is the length of the rods array and M is the sum of the array elements. We can optimise by restricting the sums to half of the total, because if dp[len(rods)-1][h][h] is true then h can be at most M/2.

Another solution is as follows:

Let dp[h] be an array of all indices from which the sum ‘h’ can start. dp[h] can be found using dynamic programming as follows:

For an index 'i', if dp[h-rods[i]] has non-zero length and the last value in dp[h-rods[i]] is greater than 'i' (which implies that there is an index greater than 'i' from which the sum 'h-rods[i]' can start, and therefore the sum 'h' can start at 'i'), then add i to dp[h].

Then, for each sum 'h' such that h is divisible by 2, use recursion with the 'dp' variable to find whether the sum h/2 can be formed inside a subsequence with sum h.

Time and space complexity of the above approach is O(N*M) where M is the sum of rods array.
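The 3-dimensional table from the first solution can also be compressed: for each difference h-k between the two support sums, it suffices to remember the largest attainable taller sum h. A sketch of this common reformulation (equivalent in spirit to dp[i][h][k], though not the literal table, and not the dp[h]-of-start-indices approach above):

```python
def tallest_billboard(rods: list[int]) -> int:
    # dp maps (difference between the two supports) -> largest taller-support sum
    dp = {0: 0}
    for r in rods:
        ndp = dict(dp)                      # option: skip this rod entirely
        for diff, tall in dp.items():
            short = tall - diff
            # Option: weld the rod onto the taller support.
            ndp[diff + r] = max(ndp.get(diff + r, 0), tall + r)
            # Option: weld the rod onto the shorter support.
            new_diff = abs(diff - r)
            new_tall = max(tall, short + r)
            ndp[new_diff] = max(ndp.get(new_diff, 0), new_tall)
        dp = ndp
    return dp[0]                            # equal supports: difference 0
```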