🟣 Dynamic Programming interview questions and answers to help you prepare for your next data structures and algorithms interview in 2025.
You can also find all 35 answers here 👉 Devinterview.io - Dynamic Programming
Dynamic Programming (DP) and **recursion** both offer ways to solve computational problems, but they operate differently.
- Recursion: Solves problems by reducing them to smaller, self-similar sub-problems, shortening the input until a base case is reached.
- DP: Breaks a problem into more manageable overlapping sub-problems, but solves each sub-problem only once and then stores its solution. This reduces the problem space and improves efficiency.
- Repetition: In contrast to simple recursion, DP uses memoization (top-down) or tabulation (bottom-up) to eliminate repeated computations.
- Directionality: DP can proceed top-down (memoization) or bottom-up (tabulation) in a systematic, often iterative fashion, whereas plain recursion works strictly top-down from the original problem.
**Recursion**: Directly calculates the $n$th number based on the $(n-1)$th and $(n-2)$th numbers. This results in many redundant calculations, leading to inefficient time complexity, often $O(2^n)$.

```python
def fibonacci_recursive(n):
    if n <= 1:
        return n
    return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)
```
DP:
Top-Down (using memoization): Caches the results of sub-problems, avoiding redundant calculations.
```python
def fibonacci_memoization(n, memo={}):
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fibonacci_memoization(n - 1, memo) + fibonacci_memoization(n - 2, memo)
    return memo[n]
```
Bottom-Up (using tabulation): Builds the solution space from the ground up, gradually solving smaller sub-problems until the final result is reached. It typically uses an array to store solutions.
```python
def fibonacci_tabulation(n):
    if n <= 1:
        return n
    fib = [0] * (n + 1)
    fib[1] = 1
    for i in range(2, n + 1):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]
```
Overlapping subproblems is a core principle of **dynamic programming**. It means that the same subproblem is encountered repeatedly during the execution of an algorithm. By caching previously computed solutions, dynamic programming avoids redundant computations, improving efficiency.
Picture a grid in which each cell represents a subproblem. If the same subproblem is encountered multiple times, the same computation is carried out repeatedly, which is inefficient.

Dynamic programming, by caching each subproblem's result the first time it is computed, eliminates such redundancies and accelerates the solution process.
- **Fibonacci Series**: In $\text{Fib}(n) = \text{Fib}(n-1) + \text{Fib}(n-2)$, the recursive call structure entails repeated calculations of smaller Fibonacci values.
- **Edit Distance**: For the strings "Saturday" and "Sunday", the subproblem of finding the edit distance between "Satur" and "Sun" appears on multiple paths of the decision tree of possible edits. Without caching, that same "Satur" to "Sun" subproblem is recomputed every time it reoccurs.
Here is the Python code:
```python
# Simple recursive implementation
def fib_recursive(n):
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

# Dynamic programming using memoization
def fib_memoization(n, memo={}):
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memoization(n - 1, memo) + fib_memoization(n - 2, memo)
    return memo[n]
```
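For the edit-distance example above, here is a memoized sketch (a hypothetical helper, not one of the repository's answers) showing how a subproblem such as "Satur" to "Sun" is solved only once and then served from the cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # Caches every (a, b) subproblem
def edit_distance(a, b):
    if not a:
        return len(b)  # Insert all remaining characters of b
    if not b:
        return len(a)  # Delete all remaining characters of a
    if a[-1] == b[-1]:
        return edit_distance(a[:-1], b[:-1])
    return 1 + min(edit_distance(a[:-1], b[:-1]),  # substitute
                   edit_distance(a[:-1], b),       # delete
                   edit_distance(a, b[:-1]))       # insert

print(edit_distance("Saturday", "Sunday"))  # 3
```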
Dynamic Programming often relies on two common strategies to achieve performance improvements: **Tabulation** and **Memoization**.
Memoization involves storing calculated results of expensive function calls and reusing them when the same inputs occur again. This technique uses a top-down approach (starts with the main problem and breaks it down).
- Enhanced Speed: Reduces redundant task execution, resulting in faster computational times.
- Code Simplicity: Makes code more readable by avoiding complex if-else checks and nested loops, improving maintainability.
- Resource Efficiency: Can save memory in certain scenarios by pruning the search space.
- Tailor-Made: Allows for customization of function calls, for instance, by specifying cache expiration.
- Overhead: Maintaining a cache adds computational overhead.
- Scalability: In some highly concurrent applications or systems with heavy memory usage, excessive caching can cause memory pressure and reduced performance.
- Complexity: Implementing memoization might introduce complexities, such as handling cache invalidation.
Without memoization, a recursive Fibonacci function has an exponential time complexity of $O(2^n)$. With memoization, each value is computed only once, reducing this to $O(n)$.
Here is the Python code:
```python
from functools import lru_cache

@lru_cache(maxsize=None)  # Optional: memoization via the built-in cache decorator
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
```
When employing **dynamic programming**, you have a choice between **top-down** and **bottom-up** approaches. Both strategies aim to optimize computational resources and improve running time, but they do so in slightly different ways.
Top-down, also known as memoization, involves breaking down a problem into smaller subproblems and then solving them in a top-to-bottom manner. Results of subproblems are typically stored for reuse, avoiding redundancy in calculations and leading to better efficiency.
In the top-down approach, we calculate the $n$th Fibonacci number recursively, caching each result so that it is computed only once:
```python
def fib_memo(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 2:
        return 1
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```
The bottom-up method, also referred to as tabulation, approaches the problem from the opposite direction. It starts by solving the smallest subproblem and builds its way up to the larger one, without needing recursion.
For the Fibonacci sequence, we can compute the values iteratively, starting from the base cases $\text{fib}(1) = \text{fib}(2) = 1$.
```python
def fib_tabulate(n):
    if n <= 2:
        return 1
    fib_table = [0] * (n + 1)
    fib_table[1] = fib_table[2] = 1
    for i in range(3, n + 1):
        fib_table[i] = fib_table[i - 1] + fib_table[i - 2]
    return fib_table[n]
```
This approach doesn't incur the overhead associated with function calls and is often more direct and therefore faster. The choice between top-down and bottom-up DP, though, depends on the specifics of the problem and the programmer's preferred style.
State definition serves as a crucial foundation for Dynamic Programming (DP) solutions. It partitions the problem into **overlapping subproblems** and enables both **top-down** (memoization) and **bottom-up** (tabulation) techniques for efficiency.
Subproblem Identification: Clearly-defined states help break down the problem, exposing its recursive nature.
State Transition Clarity: Well-structured state transitions make the problem easier to understand and manage.
Memory Efficiency: Understanding the minimal information needed to compute a state avoids unnecessary memory consumption or computations.
Code Clarity: Defined states result in clean and modular code.
**Optimization problems** aim to **maximize** or **minimize** a particular value. They often follow a functional paradigm, where each state is associated with a value.
Example: Finding the longest increasing subsequence in an array.
**Counting problems** require counting the number of ways to achieve a certain state or target. They are characterized by a combinatorial paradigm.
Example: Counting the number of ways to make change for a given amount using a set of coins.
**Decision problems** ask whether a solution exists rather than quantifying it. This kind of problem often has a binary nature: either a target state is achievable, or it isn't.
Example: Determining whether a given string is an interleaving of two other strings.
Unified by their dependence on **overlapping subproblems** and **optimal substructure**, these problem types benefit from a state-based approach, laying a foundation for efficient DP solutions.
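As an illustration of the counting paradigm above, here is a minimal sketch (the function name is illustrative) that counts the ways to make change, where the state is the amount being formed:

```python
def count_change_ways(coins, amount):
    # dp[a] = number of ways to form amount a from the given coins
    dp = [0] * (amount + 1)
    dp[0] = 1  # One way to make zero: take no coins
    for coin in coins:  # Looping coins outermost counts combinations, not orderings
        for a in range(coin, amount + 1):
            dp[a] += dp[a - coin]
    return dp[amount]

print(count_change_ways([1, 2, 5], 5))  # 4: 5, 2+2+1, 2+1+1+1, 1+1+1+1+1
```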
The task is to **compute** the Fibonacci sequence using dynamic programming, in order to reduce the time complexity from exponential to linear.
Tabulation, also known as the **bottom-up** method, is an iterative approach. It stores and reuses the results of previous subproblems in a table.
- Initialize an array `fib[]` with base values.
- For $i = 2$ to $n$, compute `fib[i]` as the sum of the two previous elements in `fib[]`.
- **Time Complexity**: $O(n)$. This is an improvement over the exponential time complexity of the naive recursive method.
- **Space Complexity**: $O(n)$, primarily due to the storage needed for the `fib[]` array.
Here is the Python code:
```python
def fibonacci_tabulation(n):
    if n <= 1:
        return n
    fib = [0] * (n + 1)
    fib[1] = 1
    for i in range(2, n + 1):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]
```
The **coin change problem** is an optimization task often used as an introduction to dynamic programming. Given an array of **coins** and a **target amount**, the goal is to find the **minimum number of coins** needed to make up that amount. An unlimited supply of each coin denomination is assumed.
**Example**: With coins `[1, 2, 5]` and a target of `11`, the minimum number of coins needed is 3, via the combination `5 + 5 + 1`.
The optimal way to solve the coin change problem is through **dynamic programming**, specifically using the **bottom-up** approach.

By systematically considering **smaller sub-problems first**, this method allows you to build up the solution step by step and avoid redundant calculations.
- Create an array `dp` of size `amount + 1` and initialize it with a value larger than the target amount, for instance `amount + 1`.
- Set `dp[0] = 0`, indicating that zero coins are needed to make up an amount of zero.
- For each `i` from `1` to `amount` and each `coin`:
  - If `coin` is less than or equal to `i`, compute `dp[i]` as the minimum of its current value and `1 + dp[i - coin]`. The `1` represents using one coin of the `coin` denomination, and `dp[i - coin]` represents the minimum number of coins needed to make up the remaining amount.
- The value of `dp[amount]` reflects the minimum number of coins needed to reach the target, given the provided denominations.
- **Time Complexity**: $O(\text{amount} \times \text{len(coins)})$. This arises from the double loop structure.
- **Space Complexity**: $O(\text{amount})$, due to the `dp` array.
Here is the Python code:
```python
def coinChange(coins, amount):
    dp = [amount + 1] * (amount + 1)  # amount + 1 acts as "infinity"
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], 1 + dp[i - coin])
    return dp[amount] if dp[amount] <= amount else -1
```
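As a quick check against the example above (outputs shown in comments):

```python
print(coinChange([1, 2, 5], 11))  # 3, via 5 + 5 + 1
print(coinChange([2], 3))         # -1: the amount cannot be formed
```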
Consider a scenario where a thief is confronted with $n$ items, each having a weight and a value, and a knapsack that can carry at most a total weight $W$. Each item can either be taken whole or left behind (the 0/1 constraint), and the goal is to maximize the total value carried.
This programming challenge can be effectively handled using **dynamic programming** (DP). The standard, and most efficient, version of this method is based on an approach known as **table-filling**.
- **Creating the 2D Table**: A table of size $(n + 1) \times (W + 1)$ is established. Each cell represents the optimal value for a specific combination of items and knapsack capacities. Initially, all cells are set to 0.
- **Filling the Table**: Starting from the first row (representing 0 items), each subsequent row is determined from the previous row's status.
  - For each item, and for each possible knapsack weight, the DP algorithm decides whether adding the item would be more profitable.
  - The decision compares the value of the current item plus the best value achievable with the remaining capacity (taken from the previous row) against the best value achieved with the items so far but without the current item.
- **Deriving the Solution**: Once the entire table is filled, the optimal set of items is extracted by backtracking through the table, starting from the bottom-right cell.
- **Time Complexity**: $O(n \cdot W)$, where $n$ is the number of items and $W$ is the maximum knapsack capacity.
- **Space Complexity**: $O(n \cdot W)$, due to the table.
Here is the Python code:
```python
def knapsack(weights, values, W):
    n = len(weights)
    dp = [[0 for _ in range(W + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] > w:
                dp[i][w] = dp[i - 1][w]  # Item i doesn't fit
            else:
                dp[i][w] = max(dp[i - 1][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])

    # Backtrack to get the selected items
    selected_items = []
    i, w = n, W
    while i > 0 and w > 0:
        if dp[i][w] != dp[i - 1][w]:
            selected_items.append(i - 1)
            w -= weights[i - 1]
        i -= 1
    return dp[n][W], selected_items
```
While the above solution is straightforward, it uses $O(n \cdot W)$ space for the table. When only the optimal value is needed, rather than the list of selected items, the table can be collapsed to a single row.
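A minimal space-optimized sketch under that assumption (the function name `knapsack_1d` is illustrative):

```python
def knapsack_1d(weights, values, W):
    # dp[w] = best value achievable with capacity w using items seen so far
    dp = [0] * (W + 1)
    for i in range(len(weights)):
        # Iterate capacities downward so each item is counted at most once
        for w in range(W, weights[i] - 1, -1):
            dp[w] = max(dp[w], values[i] + dp[w - weights[i]])
    return dp[W]

print(knapsack_1d([10, 20, 30], [60, 100, 120], 50))  # 220 (items 1 and 2)
```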
The task is to find the **Longest Common Subsequence** (LCS) of two sequences, which can be any strings, including DNA strands.

For instance, given two sequences, `ABCBDAB` and `BDCAB`, the LCS would be `BCAB`.
The optimal substructure and overlapping subproblems properties make it a perfect problem for **Dynamic Programming**.
We start with two pointers at the beginning of each sequence. If the characters match, we include them in the LCS and advance both pointers. If they don't, we try advancing each pointer in turn and keep whichever choice yields the longer subsequence. We continue until reaching the end of at least one sequence.
- Initialize a matrix `L[m+1][n+1]` with all zeros, where `m` and `n` are the lengths of the two sequences.
- Traverse the sequences. If `X[i] == Y[j]`, set `L[i+1][j+1] = L[i][j] + 1`. If not, set `L[i+1][j+1]` to the maximum of `L[i][j+1]` and `L[i+1][j]`.
- Start from `L[m][n]` and backtrack, at each step moving toward whichever neighboring cell provided the maximum value, until `i` or `j` becomes 0.
- **Time Complexity**: $O(mn)$, where $m$ and $n$ are the lengths of the two sequences. This is due to the nested loop used to fill the matrix.
- **Space Complexity**: $O(mn)$, associated with the space used by the matrix.
Here is the Python code:
```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    # Initialize the matrix with zeros
    L = [[0] * (n + 1) for _ in range(m + 1)]

    # Build the matrix
    for i in range(m):
        for j in range(n):
            if X[i] == Y[j]:
                L[i + 1][j + 1] = L[i][j] + 1
            else:
                L[i + 1][j + 1] = max(L[i + 1][j], L[i][j + 1])

    # Backtrack to find the LCS
    i, j = m, n
    lcs_sequence = []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            lcs_sequence.append(X[i - 1])
            i, j = i - 1, j - 1
        elif L[i - 1][j] > L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(lcs_sequence))

# Example usage
X = "ABCBDAB"
Y = "BDCAB"
print("LCS of", X, "and", Y, "is", lcs(X, Y))
```
Given a sequence of matrices $A_1, A_2, \ldots, A_n$, where the dimensions of $A_i$ are $p_{i-1} \times p_i$, the task is to determine the most efficient order in which to multiply them.
The goal is to **minimize** the number of scalar multiplications, which depends on the order of multiplication.
The most optimized way to solve the Matrix Chain Multiplication problem uses **Dynamic Programming** (DP).
- **Subproblem Definition**: For each pair $i, j$ (where $1 \leq i \leq j \leq n$), find the **optimal break** in the subchain $A_i \ldots A_j$. Let the split be at $k$, chosen so that the cost of multiplying the two resulting chains is minimized.
- **Base Case**: The cost is 0 for a single matrix.
- **Build Up**: For each chain length $l$, iterate through all $i, j, k$ combinations and keep the one with the minimum cost.
- **Optimal Solution**: The DP table helps trace back the actual parenthesization as well as the optimal cost.
**Key Insight**: An optimal parenthesization of a chain contains optimal parenthesizations of its subchains; this optimal substructure is what makes DP applicable.
- **Time Complexity**: $O(n^3)$, from the three nested loops over chain lengths, start positions, and split points.
- **Space Complexity**: $O(n^2)$, for the DP tables.

Here is the Python code:
```python
import sys

def matrix_chain_order(p):
    n = len(p) - 1  # Number of matrices
    m = [[0 for _ in range(n)] for _ in range(n)]  # Cost table
    s = [[0 for _ in range(n)] for _ in range(n)]  # Splitting table

    for l in range(2, n + 1):  # l is the chain length
        for i in range(n - l + 1):
            j = i + l - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```
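The splitting table `s` mentioned above can reconstruct the actual parenthesization. Here is a small sketch of that trace-back (the helper name is illustrative):

```python
def print_optimal_parens(s, i, j):
    # Read the split table recursively to rebuild the parenthesization
    if i == j:
        return f"A{i + 1}"
    k = s[i][j]
    return f"({print_optimal_parens(s, i, k)} x {print_optimal_parens(s, k + 1, j)})"

# Example: four matrices with dimensions 5x4, 4x6, 6x2, 2x7
m, s = matrix_chain_order([5, 4, 6, 2, 7])
print(m[0][3])                        # 158 scalar multiplications
print(print_optimal_parens(s, 0, 3))  # ((A1 x (A2 x A3)) x A4)
```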
While both **memoization** and **tabulation** optimize dynamic programming algorithms for efficiency, they differ in their approaches and application.
**Memoization**

- **Divide and Conquer Technique**: Breaks a problem down into smaller, more manageable sub-problems.
- **Approach**: Top-down, meaning you start by solving the main problem and then compute the sub-problems as needed.
- **Key Advantage**: Eliminates redundant calculations, improving efficiency.
- **Potential Drawbacks**:
  - Can introduce overhead due to recursion.
  - Maintaining the call stack can consume significant memory, especially in problems with deep recursion.
**Tabulation**

- **Divide and Conquer Technique**: Breaks a problem down into smaller, more manageable sub-problems.
- **Approach**: Bottom-up, which means you solve all sub-problems before tackling the main problem.
- **Key Advantage**: Operates without recursion, avoiding its overhead and limitations.
- **Potential Drawbacks**:
  - Can be less intuitive to implement, especially for problems with complex dependencies.
  - May calculate more values than required for the main problem, especially if the needed portion is significantly smaller than the full problem domain.
Dynamic Programming (DP) aims to solve problems by breaking them down into simpler subproblems. However, tracking the time complexity of these approaches isn't always straightforward because **the time complexity can vary** between algorithms and implementations.
**Top-Down with Memoization**:
- **Time Complexity**: Without memoization, a naive recursion can be exponential, e.g. $O(2^n)$. With memoization, each distinct state is computed only once, so the complexity typically reduces to $O(nm)$, where $n$ and $m$ are the state space parameters.
- **Space Complexity**: $O(\text{State Space})$, plus the recursion stack.
**Bottom-Up with 1D Array**:
- **Time Complexity**: Often $O(n \cdot \text{Subproblem Complexity})$.
- **Space Complexity**: $O(n)$ for the full table, or $O(1)$ when only the most recent states are kept.
**Bottom-Up with 2D Array**:
- **Time Complexity**: Generally $O(mn \cdot \text{Subproblem Complexity})$, where $m$ and $n$ are the 2D dimensions.
- **Space Complexity**: $O(mn)$.
**State Space Reduction Techniques**:
- Techniques like **Sliding Window**, **Backward Induction**, and **Cycle Breaking** reduce the state space or the size of the table, ultimately influencing both time and space complexity metrics.
Here is the Python code:
```python
def fib_top_down(n, memo):
    if n <= 1:
        return n
    if memo[n] is None:
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    if n <= 1:
        return n
    prev, current = 0, 1
    for _ in range(2, n + 1):
        prev, current = current, prev + current
    return current
```
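As a usage sketch (the memo sizing is an assumption about how the function is meant to be called; note that the bottom-up version keeps only two values, matching the $O(1)$ space case above):

```python
n = 10
print(fib_top_down(n, [None] * (n + 1)))  # 55
print(fib_bottom_up(n))                   # 55
```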
Dynamic Programming (DP) can be memory-intensive due to its tabular approach. However, there are several methods to reduce space complexity.
- **Tabular-vs-Recursive Methods**: Use tabular methods for bottom-up DP, and minimize space by storing only the current and previous states where applicable (see the sketch after this list).
- **Divide-and-Conquer**: Optimize top-down DP with a divide-and-conquer approach.
- **Interleaved Computation**: Reduce space needs by computing rows or columns of a table alternately.
- **High-Level Dependency**: Represent relationships between subproblems to save space, as seen in tasks like **edit distance** and **0-1 Knapsack**.
- **Stateful Compression**: For state machines or techniques like **Rabin-Karp**, compact the state space using a bitset or other memory-saving structures.
- **Locational Memoization**: In mazes and similar problems, memoize decisions based on the present position rather than the full state of the system.
- **Parametric Memory**: For problems such as the **egg-dropping puzzle**, keep track of the number of remaining parameters.
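As a sketch of the "current and previous states" idea, here is an edit-distance variant that keeps only two rows instead of the full $O(mn)$ table (the function name is illustrative):

```python
def edit_distance_two_rows(a, b):
    # prev holds row i-1 of the DP table; curr is row i being filled
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        curr = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1]
            else:
                curr[j] = 1 + min(prev[j - 1],  # substitute
                                  prev[j],      # delete
                                  curr[j - 1])  # insert
        prev = curr
    return prev[len(b)]

print(edit_distance_two_rows("Saturday", "Sunday"))  # 3
```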
State transition relations describe how a problem's state progresses during computation. They are a fundamental concept in dynamic programming, used to define state transitions in the context of a mathematical model and its computational representation.
State Definition: Identify the core components that define the state of the problem. This step requires a deep understanding of the problem and the information necessary to model the state accurately. State may be defined flexibly, depending on the problem's complexity and nuances.
State Transitions: Define how the state evolves over time or through an action sequence. Transition functions or relations encapsulate the possible state changes, predicated on specific decisions or actions.
Initial State: Establish the starting point of the problem. This step is crucial for state-based algorithms, ensuring that the initial configuration aligns with practical requirements.
Terminal State Recognition: Determine the criteria for identifying when the state reaches a conclusion or a halting condition. For dynamic programming scenarios, reaching an optimal or final state can trigger termination or backtracking.
- Memory Mechanism: It allows dynamic programming to store and reuse state information, avoiding redundant calculations. This capacity for efficient information retention differentiates it from more regular iterative algorithms.
- Information Representation: State spaces and their transitions capture essential information about various problem configurations and their mutual relationships.
Dynamic programming accomplishes a balance between thoroughness and computational efficiency by leveraging the inherent structure of problems.
Here is the Python code:
```python
def knapsack01(values, weights, capacity):
    n = len(values)
    dp = [[0 for _ in range(capacity + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]],
                               dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][capacity], dp

values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50
max_value, state_table = knapsack01(values, weights, capacity)
print(f"Maximum value achievable: {max_value}")
```
Find the **longest palindromic substring** within a string `s`.
- Input: "babad"
- Output: "bab" or "aba" — The two substrings "bab" and "aba" are both valid solutions.
The problem can be solved using either an **Expand Around Center** algorithm or **Dynamic Programming**. Here, I will focus on the dynamic programming solution; the expand-around-center alternative is sketched at the end.
- Define a 2D table `dp`, where `dp[i][j]` indicates whether the substring from index `i` to `j` is a palindrome.
- Initialize the base cases: single characters are always palindromes, and any two adjacent characters form a palindrome if they are the same character.
- Traverse the table diagonally, filling it from shorter substrings to longer ones, since the state of a longer substring depends on the states of its shorter inner substrings.
- **Time Complexity**: $O(n^2)$
- **Space Complexity**: $O(n^2)$
Here is the Python code:
```python
def longestPalindrome(s: str) -> str:
    n = len(s)
    if n < 2 or s == s[::-1]:
        return s
    start, maxlen = 0, 1

    # Initialize dp table. dp[i][i] is always True (single character).
    dp = [[False] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = True

    for j in range(1, n):
        for i in range(j):
            # s[i..j] is a palindrome if its ends match and the inner part is one
            if s[i] == s[j] and (dp[i + 1][j - 1] or j - i <= 2):
                dp[i][j] = True
                # Update the longest palindrome found so far
                if j - i + 1 > maxlen:
                    start, maxlen = i, j - i + 1
    return s[start:start + maxlen]

# Test the function
print(longestPalindrome("babad"))  # Output: "bab"
```
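For comparison, here is a minimal sketch of the **Expand Around Center** alternative mentioned earlier: same $O(n^2)$ time, but only $O(1)$ extra space (the function name is illustrative):

```python
def longest_palindrome_expand(s: str) -> str:
    if len(s) < 2:
        return s
    start, maxlen = 0, 1
    for center in range(len(s)):
        # Try both an odd-length center (c, c) and an even-length center (c, c+1)
        for l, r in ((center, center), (center, center + 1)):
            while l >= 0 and r < len(s) and s[l] == s[r]:
                if r - l + 1 > maxlen:
                    start, maxlen = l, r - l + 1
                l -= 1
                r += 1
    return s[start:start + maxlen]

print(longest_palindrome_expand("babad"))  # "bab"
```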
Explore all 35 answers here 👉 Devinterview.io - Dynamic Programming