Dynamic programming is a powerful optimization technique used in computer science and mathematics to solve problems by breaking them down into simpler subproblems, solving each subproblem only once, and storing the solutions to avoid redundant work. It applies when a problem exhibits overlapping subproblems and optimal substructure, that is, when the optimal solution to the overall problem can be constructed from optimal solutions to its subproblems.

Here are the key concepts associated with dynamic programming:

Characteristics of Dynamic Programming:

  1. Overlapping Subproblems: a naive recursive solution ends up solving the same subproblems many times, so their results are worth storing and reusing.
  2. Optimal Substructure: an optimal solution to the whole problem can be built from optimal solutions to its subproblems.

Steps in Dynamic Programming:

  1. Define the Structure: decide what a subproblem (state) looks like and which quantity is being optimized.

  2. Formulate a Recursive Solution: express each state's answer in terms of smaller states, including the base cases.

  3. Memoization or Tabulation: compute every state exactly once, either top-down with a cache or bottom-up by filling a table.

  4. Construct the Optimal Solution: if the actual choices are needed (not just the optimal value), record them during the computation and trace them back; a short sketch applying all four steps follows this list.
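
To make these steps concrete, here is a minimal sketch that applies them to the classic coin-change problem (the function name min_coins and the denominations used are illustrative):

    # Step 1 (structure): dp[a] = fewest coins needed to make amount a.
    # Step 2 (recursion):  dp[a] = 1 + min(dp[a - c]) over every coin c <= a.
    # Step 3 (tabulation): fill dp from 0 up to the target amount.
    # Step 4 (solution):   remember which coin was chosen and trace back.
    def min_coins(coins, amount):
        INF = float("inf")
        dp = [0] + [INF] * amount
        choice = [None] * (amount + 1)        # coin used to reach each amount
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and dp[a - c] + 1 < dp[a]:
                    dp[a] = dp[a - c] + 1
                    choice[a] = c
        if dp[amount] == INF:
            return None, []                   # amount cannot be formed
        used, a = [], amount
        while a > 0:                          # reconstruct the actual coins
            used.append(choice[a])
            a -= choice[a]
        return dp[amount], used

    print(min_coins([1, 3, 4], 6))            # (2, [3, 3])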


    Dynamic programming (DP) encompasses several types or approaches, each with its own characteristics and use cases. Here are some common types of dynamic programming:

    1. Top-Down (Memoization):

    Example:

    def fib(n, memo=None):
        # Top-down: cache each Fibonacci number the first time it is computed.
        if memo is None:
            memo = {}
        if n <= 1:
            return n
        if n not in memo:
            memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
        return memo[n]
    
    

    2. Bottom-Up (Tabulation):

    Example:

    def fib(n):
        # Bottom-up: fill a table of Fibonacci numbers from the base cases upward.
        if n <= 1:
            return n
        table = [0] * (n + 1)      # table[i] will hold F(i); F(0) = 0 is already set
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]
    
    

    3. State Transition Matrix:

    Example: Solving linear recurrence relations using matrix exponentiation.
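
    A minimal sketch of this idea for the Fibonacci recurrence (fib_matrix and mat_mul are illustrative names): raising the 2x2 state-transition matrix [[1, 1], [1, 0]] to the n-th power yields F(n) in O(log n) matrix multiplications.

    def mat_mul(a, b):
        # Multiply two 2x2 matrices.
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

    def fib_matrix(n):
        # [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
        result = [[1, 0], [0, 1]]             # identity matrix
        base = [[1, 1], [1, 0]]               # state-transition matrix
        while n > 0:                          # exponentiation by squaring
            if n & 1:
                result = mat_mul(result, base)
            base = mat_mul(base, base)
            n >>= 1
        return result[0][1]

    print(fib_matrix(10))                     # 55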

    4. Optimal Substructure:

    Example: The shortest path problem, where the optimal path from A to C is composed of optimal paths from A to B and B to C.
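
    A minimal sketch of this property in action: the Floyd-Warshall algorithm builds all-pairs shortest paths by combining optimal subpaths through a growing set of intermediate vertices (the graph below is illustrative; INF marks a missing edge).

    INF = float("inf")

    def floyd_warshall(dist):
        n = len(dist)
        d = [row[:] for row in dist]          # don't mutate the input matrix
        for k in range(n):                    # allow vertex k as an intermediate stop
            for i in range(n):
                for j in range(n):
                    # best i -> j path via k is the best i -> k plus the best k -> j
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d

    graph = [[0, 3, INF],
             [3, 0, 1],
             [INF, 1, 0]]
    print(floyd_warshall(graph)[0][2])        # 4: A -> B -> C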

    5. State Compression:

    Example: The subset sum problem, where the state space can be compressed using bitmasking.
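
    A minimal sketch of this compression in Python, where the whole row of reachable sums is packed into the bits of a single integer (subset_sum is an illustrative name; bit i is set when sum i is achievable):

    def subset_sum(nums, target):
        reachable = 1                         # only the empty sum 0 is reachable
        for x in nums:
            reachable |= reachable << x       # every old sum s also yields s + x
        return bool((reachable >> target) & 1)

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)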

    6. Bitmask DP:

    Example: Traveling Salesman Problem with bitmask DP to represent subsets of cities visited.
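
    A minimal Held-Karp-style sketch, where each bit of mask records whether the corresponding city has been visited (the distance matrix is illustrative):

    def tsp(dist):
        # dp[mask][i]: cheapest way to start at city 0, visit exactly the
        # cities in mask, and end at city i.
        n = len(dist)
        full = 1 << n
        INF = float("inf")
        dp = [[INF] * n for _ in range(full)]
        dp[1][0] = 0                          # start at city 0, only city 0 visited
        for mask in range(full):
            for last in range(n):
                if dp[mask][last] == INF or not (mask >> last) & 1:
                    continue
                for nxt in range(n):
                    if (mask >> nxt) & 1:     # already visited
                        continue
                    new_mask = mask | (1 << nxt)
                    cost = dp[mask][last] + dist[last][nxt]
                    if cost < dp[new_mask][nxt]:
                        dp[new_mask][nxt] = cost
        # close the tour by returning to city 0
        return min(dp[full - 1][i] + dist[i][0] for i in range(1, n))

    dist = [[0, 1, 15, 6],
            [2, 0, 7, 3],
            [9, 6, 0, 12],
            [10, 4, 8, 0]]
    print(tsp(dist))                          # 21 (tour 0 -> 1 -> 3 -> 2 -> 0)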

    7. Knapsack DP:

    Example: 0/1 Knapsack Problem, where items have weights and values, and the goal is to maximize the total value within a weight constraint.
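
    A minimal 0/1 knapsack sketch using a one-dimensional table (the item weights and values are illustrative):

    def knapsack(weights, values, capacity):
        # dp[w]: best value achievable with total weight at most w.
        dp = [0] * (capacity + 1)
        for wt, val in zip(weights, values):
            for w in range(capacity, wt - 1, -1):   # reverse order: each item used at most once
                dp[w] = max(dp[w], dp[w - wt] + val)
        return dp[capacity]

    print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (items with weights 3 and 4)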

    2. Fibonacci Series

    The Fibonacci Series is a sequence of integers where the next integer in the series is the sum of the previous two.

    It’s defined by the following recursive formula: F(n) = F(n-1) + F(n-2), with base cases F(0) = 0 and F(1) = 1.

    There are many ways to calculate the n-th term of the Fibonacci series, and below we’ll look at three common approaches.

    2.1. The Recursive Approach

    To compute F(n) in the recursive approach, we first try to find the solutions to F(n-1) and F(n-2). But to find F(n-1), we need to find F(n-2) and F(n-3). This continues until we reach the base cases: F(1) and F(0).

    This expansion forms a tree of subproblems that we need to solve in order to get F(n).
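
    A direct translation of the recurrence into code might look like this plain recursive sketch (fib_recursive is an illustrative name; no caching yet):

    def fib_recursive(n):
        # Each call recomputes the same subproblems, so the running time
        # grows exponentially with n.
        if n <= 1:                 # base cases F(0) = 0, F(1) = 1
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    print(fib_recursive(10))       # 55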

    One drawback to this approach is that it requires computing the same Fibonacci numbers multiple times in order to get our solution.

    Let’s see if we can get rid of this redundant work.

    2.2. The Top-Down Approach

    The first dynamic programming approach we’ll use is the top-down approach. The idea here is similar to the recursive approach, but the difference is that we’ll save the solutions to subproblems we encounter.

    This way, if we run into the same subproblem more than once, we can use our saved solution instead of having to recalculate it. This allows us to compute each subproblem exactly one time.

    This dynamic programming technique is called memoization. With memoization, the tree of subproblems shrinks dramatically: each distinct value is computed once and afterwards simply looked up.
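
    In Python, the same effect can be sketched with the standard library's functools.lru_cache decorator, which plays the role of the saved-solutions table (fib_memo is an illustrative name):

    from functools import lru_cache

    @lru_cache(maxsize=None)       # cache every previously computed F(n)
    def fib_memo(n):
        if n <= 1:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    print(fib_memo(50))            # 12586269025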

    2.3. The Bottom-Up Approach

    In the bottom-up dynamic programming approach, we’ll reorganize the order in which we solve the subproblems.

    We’ll compute F(0), then F(1), then F(2), and so on, up to F(n).

    This will allow us to compute the solution to each problem only once, and we’ll only need to save two intermediate results at a time.

    For example, when we’re trying to find F(4), we only need to have the solutions to F(3) and F(2) available. Similarly, for F(5), we only need the solutions to F(4) and F(3).

    This will allow us to use less memory space in our code.
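
    A minimal sketch of this space-saving version, keeping just the two most recent values (fib_bottom_up is an illustrative name):

    def fib_bottom_up(n):
        # O(n) time, O(1) extra space: only the last two values are stored.
        if n <= 1:
            return n
        prev, curr = 0, 1          # F(0) and F(1)
        for _ in range(2, n + 1):
            prev, curr = curr, prev + curr
        return curr

    print(fib_bottom_up(10))       # 55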

    3. Pseudocode