Longest Common Subsequence
Identifying Problem Isomorphism
“Longest Common Subsequence” can be approximately mapped to “Edit Distance”.
Reasoning:
Both problems deal with sequences and share the idea of comparing elements between two sequences.
In “Longest Common Subsequence,” the task is to find the length of the longest common subsequence between two sequences, which does not need to occupy consecutive positions within the original sequences.
In “Edit Distance,” the task is to compute the minimum number of operations (insertions, deletions, and substitutions) required to transform one word into another.
The common subsequence between two words in the “Longest Common Subsequence” problem can be interpreted as the remaining elements after the minimum number of deletions required to transform one word into another, which is part of the “Edit Distance” problem. This means you can adapt an algorithm for the “Longest Common Subsequence” problem to solve the “Edit Distance” problem, but with additional handling for insertions and substitutions.
This is an approximate mapping because while both problems deal with the comparison of two sequences, the operations and exact objectives are not the same. The “Longest Common Subsequence” problem only considers deletion of characters to achieve a common sequence, while the “Edit Distance” problem considers deletions, insertions, and substitutions to transform one sequence into another.
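As a concrete sketch of this connection, the deletion-only variant of edit distance (problem 583, mentioned below) reduces directly to LCS: the minimum number of deletions needed to make two strings equal is len(a) + len(b) - 2 * LCS(a, b). The function names here are illustrative, not from any library:

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b (bottom-up DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def min_deletions_to_equal(a: str, b: str) -> int:
    """Deletion-only 'edit distance': every character outside the LCS
    must be deleted from one string or the other."""
    return len(a) + len(b) - 2 * lcs_length(a, b)

min_deletions_to_equal("sea", "eat")  # 2: delete 's' from "sea", 't' from "eat"
```

Full edit distance additionally allows insertions and substitutions, so it needs its own recurrence rather than this identity.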
Am I the only one who thinks DP is easier to think about than memoization? I directly came up with and coded a working DP solution in 15 minutes after I did these string-related problems: 5, 516, 300, 376.
- Longest Palindromic Substring
- Longest Palindromic Subsequence
- Longest Increasing Subsequence
- Wiggle Subsequence
10 Prerequisite LeetCode Problems
Finding the Longest Common Subsequence (LCS) is a classic problem that requires knowledge of dynamic programming. Here are some problems as preparation:
392. Is Subsequence: This problem requires you to determine if one string is a subsequence of another, which is a fundamental concept for understanding the LCS problem.
583. Delete Operation for Two Strings: This problem can be solved using a similar dynamic programming approach as the LCS problem.
72. Edit Distance: This problem requires understanding of dynamic programming on strings, which is a necessary concept for the LCS problem.
646. Maximum Length of Pair Chain: This problem is not about strings but about sequences and dynamic programming, which is a fundamental concept for the LCS problem.
70. Climbing Stairs: This is a simpler dynamic programming problem that can help understand the basic concept.
300. Longest Increasing Subsequence: This problem requires finding a longest subsequence with certain properties, a key concept for understanding the LCS problem.
53. Maximum Subarray: This problem involves finding a subarray with maximum sum, which introduces you to subproblems concept that is often used in dynamic programming.
198. House Robber: This problem also requires dynamic programming and gives you practice with thinking about how to break a problem down into subproblems.
509. Fibonacci Number: This is a basic dynamic programming problem that introduces the concept of solving overlapping subproblems, which is important for the LCS problem.
1143. Longest Common Subsequence: Yes, it’s the same problem, but you could solve it first with a brute-force approach to get a feel for the problem, and then try to optimize your solution with dynamic programming.


Subsequence
Subsequence testing is a problem where you’re given a short string (which we’ll call the “pattern”) and a longer string (which we’ll call the “text”). The question is whether the characters of the pattern appear in the text in the same order, even if other characters in the text come in between them. If they do, we say that the pattern is a “subsequence” of the text.
For example, consider the pattern “nano” and the text “nematode knowledge”. Here, you can find each letter of the pattern (“n”, “a”, “n”, “o”) in the text, in that order. They’re not all next to each other, since other letters come in between, but because they appear in the correct order, we can say that “nano” is a subsequence of “nematode knowledge”.
We could visualize this by capitalizing the letters in the text that make up the subsequence: “NemAtode kNOwledge”.
To perform this subsequence testing, we can use a finite state machine (FSM). In this FSM, each state would represent a partial match of the pattern in the text. We start at the beginning of the text and move to a new state in the FSM each time we find a character that matches the next character in the pattern. Since we only move forward in the text and in the pattern (never backwards), there’s no need for backtracking in this FSM.
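The forward-only scan described above can be written as a tiny routine in which the pattern index plays the role of the FSM state (the name is_subsequence is just illustrative):

```python
def is_subsequence(pattern: str, text: str) -> bool:
    """Return True if pattern appears in text in order, gaps allowed.

    i is the FSM state: how many pattern characters have been matched.
    We scan the text once, moving only forward, with no backtracking.
    """
    i = 0
    for ch in text:
        if i < len(pattern) and ch == pattern[i]:
            i += 1  # matched the next pattern character: advance the state
    return i == len(pattern)

is_subsequence("nano", "nematode knowledge")  # True: NemAtode kNOwledge
```

This single pass runs in O(len(text)) time, which is why subsequence testing is so much easier than the longest common subsequence problem discussed next.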
This is a simplified version of the longest common subsequence problem, where instead of just determining whether a pattern is a subsequence of a text, we’re trying to find the longest sequence of characters that two texts have in common.
The longest common subsequence problem is a bit more complex than the simple subsequence testing. Here, you are given two strings A and B of lengths m and n respectively. The task is to find the longest sequence of characters that occur in both strings in the same order. This sequence doesn’t need to be a prefix of any string.
For instance, if A = “ABCBDAB” and B = “BDCAB”, the longest common subsequence is “BCAB”.
This problem is particularly useful in several real-world scenarios. Here are a few examples:
Molecular Biology: DNA sequences can be represented as strings composed of four characters A, C, G, and T. When scientists discover a new DNA sequence, they usually want to compare it with known sequences to determine its similarity. One way to measure this similarity is by finding the longest common subsequence between the new sequence and the known sequences.
File Comparison: Programs like Unix’s “diff” find differences between two versions of a file by computing their longest common subsequence. The lines that are part of this common subsequence haven’t changed, so the program can display only the differing lines, which have been added, modified, or deleted.
Screen Redisplay: Text editors need to update their display as changes are made to a file. For efficiency, especially with slow connections, it’s preferable to send the fewest possible characters to update the display. This can be seen as a common subsequence problem, where the common subsequence represents the parts of the display that don’t need to change.
Please note that there’s also a similar problem called the “longest common substring” problem, which seeks the longest substring (a sequence of characters occurring consecutively) present in two strings. This problem can be solved using a data structure called a suffix tree, but the solution is quite complex.
Clarification Questions
Before attempting the problem, it is indeed important to clarify certain aspects of the problem statement. Here are a few clarification questions that could be asked:
Case sensitivity: The problem statement specifies lowercase English characters. We could ask if there will ever be a situation where uppercase characters could be involved. If yes, would they be considered different from their lowercase counterparts?
Character Set: The problem statement says that the text will be in English. Are there any other characters, such as punctuation or special characters, that could appear in the text?
Empty Strings: If an empty string is provided as input, should the output also be zero since an empty string doesn’t have any subsequences?
Multiple LCS: In the case where there are multiple longest common subsequences, are we only interested in the length, or do we need to return one or all of these subsequences as well?
Return Format: Should the length of the longest common subsequence be returned as a string or as an integer? The problem statement suggests an integer, but it’s often good to clarify.
Subsequence Order: Just to confirm, we are considering subsequences, not substrings, correct? The order of characters matters in subsequences, but characters do not have to be contiguous.
Please note that some of these questions may be answered by the problem constraints and definitions, but if anything is unclear, it’s better to ask and confirm.
Problem Analysis and Key Insights
From the problem statement, there are a few key insights we can draw:
Subsequence Not Substring: This problem is about the longest common subsequence (LCS), not substring. The difference is that the characters in a subsequence don’t have to be adjacent or in a continuous sequence, but they must maintain their relative order.
Multiple LCS May Exist: It’s possible that multiple LCSs exist, but we are only asked for the length of the LCS, not the LCS itself.
Boundary Constraints: The strings’ lengths can go up to 1000. This suggests that a solution with time complexity worse than O(N^2) will likely not be efficient enough.
Case and Character Limitations: The strings only contain lowercase English characters. This constraint could potentially simplify our approach since we only have to consider 26 possible characters.
Nonexistence of LCS: The problem statement clarifies that if there is no common subsequence, we should return 0. This makes sense, as the length of the LCS would be zero in this case.
In terms of solving the problem, it’s clear that we need to compare the characters of the two strings in some way to find commonalities. Given the nature of the problem (finding an optimal solution, in this case the longest length), dynamic programming is a technique that comes to mind. This problem involves making optimal decisions by breaking the problem into smaller subproblems and building up the solution.
Problem Boundary
The scope of this problem is as follows:
Primary Goal: The primary objective of the problem is to find the length of the longest common subsequence (LCS) between two given strings. The LCS is a sequence that appears in the same order within both strings, although not necessarily consecutively.
Return Type: The problem asks for an integer return type, which should represent the length of the longest common subsequence.
Input Parameters: The inputs are two strings of lowercase English characters. Each string’s length can be up to 1000 characters.
Edge Cases: The problem also requires handling edge cases such as when there is no common subsequence, in which case the function should return 0.
Performance: Given that the maximum length of the input strings can be quite large (up to 1000 characters), the solution should be efficient in terms of time and space complexity. Solutions with a time complexity worse than O(N^2) may not be efficient enough.
The problem doesn’t require us to return the actual LCS or handle cases with multiple LCSs of the same length. Thus, these considerations are outside of the scope of this problem.
The boundaries of this problem can be established as follows:
Input Constraints: The problem has explicit input constraints. Both input strings text1 and text2 will have lengths between 1 and 1000. Furthermore, these strings will consist only of lowercase English characters.
Output: The output will be an integer representing the length of the longest common subsequence (LCS) of text1 and text2. The return value could be anywhere from 0 (if no common subsequence exists) to the length of the shorter input string (if one string is entirely a subsequence of the other).
Algorithmic Complexity: Given that the input size could be up to 1000, solutions that have a time complexity worse than O(N^2) are not likely to be performant. Therefore, the problem is bounded by a requirement of reasonable efficiency.
Functionality: The solution must find the LCS of two input strings. It is not required to handle cases such as returning the actual LCS or managing scenarios where multiple LCSs of the same length exist.
Edge Cases: The problem explicitly mentions handling the case when there is no common subsequence, in which case the function should return 0.
Assumptions: We can assume that the input will always follow the mentioned constraints, and we don’t need to handle any additional corner cases, like null or extremely large strings.
These considerations establish the boundaries of this problem, defining what the problem is asking for and what constraints and requirements any solution must adhere to.
Problem Classification
The problem is a classic example of the Dynamic Programming domain, specifically focusing on the Longest Common Subsequence (LCS) problem. The objective is to find the maximum length of a subsequence that is common to both given strings.
Let’s categorize the problem based on the provided problem statement:
Domain Categorization:
- Dynamic Programming: The problem asks for an optimal solution (the longest length), and involves overlapping subproblems. The best length for a common subsequence ending at a given pair of characters in text1 and text2 depends on the best lengths for subsequences ending at previous characters. This kind of overlapping subproblem and the need to build an optimal solution suggests that dynamic programming will be a useful approach.
‘What’ Components:
- Input: Two strings, text1 and text2. Both strings consist of only lowercase English characters, and their lengths range from 1 to 1000.
- Output: An integer that represents the length of the longest common subsequence (LCS) of the two input strings. If no common subsequence exists, the output is 0.
- Constraints: The constraints primarily deal with the length of the input strings and their character sets.
Based on these details, the problem can be classified as an optimization problem (since we are looking for the ‘longest’ common subsequence) under the broader category of dynamic programming problems. Furthermore, due to the nature of the problem (comparing two strings), it can also be seen as falling under the string manipulation category.
Distilling the Problem to Its Core Elements
Fundamental Concept: This problem is primarily based on the concept of subsequences and the dynamic programming paradigm. The task is to find the longest common subsequence (LCS) between two given strings.
Simple Description: Imagine you have two strings of characters, and you can remove some characters from either string without rearranging the remaining ones. Your goal is to make the remaining characters in both strings match as much as possible. The problem asks for the maximum number of matching characters you can achieve.
Core Problem: The core problem is to find the maximum length of common subsequence between two strings.
Simplified Problem Statement: Given two strings, find the maximum length of a sequence that can be found in both strings without changing the order of characters.
Key Components:
- Understanding what a subsequence is.
- Recognizing that the problem asks for the longest common subsequence.
- Implementing a way to track and compare subsequences in the two strings.
- Identifying the need for dynamic programming to avoid recomputation of overlapping subproblems.
Minimal Set of Operations:
- Iterate over the characters in the two strings.
- Compare each character in the first string to each character in the second string.
- If two characters match, increase the count of common characters for that position, considering the maximum length found so far.
- If they don’t match, the maximum length at that position will be the maximum of the lengths found so far in the previous positions.
- Continue until all characters are compared and return the maximum length found.
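These operations translate almost line for line into a bottom-up DP table; a minimal sketch (the function name mirrors the LeetCode signature, but this is an illustrative implementation, not the reference one):

```python
def longest_common_subsequence(text1: str, text2: str) -> int:
    """dp[i][j] = length of the LCS of text1[:i] and text2[:j]."""
    m, n = len(text1), len(text2)
    # Row 0 and column 0 represent empty prefixes, so they stay 0.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i - 1] == text2[j - 1]:
                # Matching characters extend the LCS of the shorter prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise, skip a character from one string or the other.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

longest_common_subsequence("abcde", "ace")  # 3
```

The double loop compares every character pair once, giving O(m * n) time and space.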
Visual Model of the Problem
This problem is best visualized as a 2D grid, where each cell (i, j) represents the longest common subsequence (LCS) between the substrings of text1[0…i] and text2[0…j].
To construct the grid, we would start from the top-left corner (which corresponds to the comparison of two empty strings) and gradually fill the grid cell by cell according to the rules of the problem. If the characters at the current positions in both strings are the same, the value in the grid cell is one more than the value in the cell diagonally above and to the left. If they aren’t, it’s the maximum of the value in the cell directly above and the cell directly to the left.
For example, consider two strings “abcde” and “ace”:
```
    0  a  b  c  d  e   (text1)
0   0  0  0  0  0  0
a   0  1  1  1  1  1
c   0  1  1  2  2  2
e   0  1  1  2  2  3   (text2)
```
The last cell (bottom right) of the grid shows the length of the LCS, which in this case is 3 (“ace”). This 2D grid is a helpful way to visualize the problem and how to approach solving it using dynamic programming.
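Although the problem only asks for the length, the same grid also lets us recover an actual LCS by backtracking from the bottom-right cell; a sketch (the function name is illustrative):

```python
def lcs_string(text1: str, text2: str) -> str:
    """Recover one longest common subsequence via the DP grid."""
    m, n = len(text1), len(text2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i - 1] == text2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk back from the bottom-right corner, collecting matched characters.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if text1[i - 1] == text2[j - 1]:
            out.append(text1[i - 1])  # this character is part of the LCS
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1  # the value came from the cell above
        else:
            j -= 1  # the value came from the cell to the left
    return "".join(reversed(out))

lcs_string("abcde", "ace")  # "ace"
```

When several LCSs of the same length exist, the tie-breaking in the backtrack determines which one is returned.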
Problem Restatement
We have two strings, let’s call them String A and String B. Our task is to identify the longest sequence of characters that appears in both strings, in the same order, but not necessarily consecutively. This sequence of characters is known as a common subsequence. If there’s no such sequence, we should return 0.
A few important points to consider:
A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements. For example, “abc” is a subsequence of “abcd”, but “bca” is not.
The strings A and B can be of different lengths, ranging from 1 to 1000 characters.
Both A and B will only consist of lowercase English characters.
In other words, we’re looking for the longest chain of characters that we can find in both strings, without changing their relative order.
Abstract Representation of the Problem
Let’s consider two sequences, A and B. These sequences are collections of elements, and in our case, these elements are characters.
The sequences don’t have to be of the same length, and the elements they contain are drawn from a fixed set, which, in this case, is the set of all lowercase English letters.
The problem is about identifying the longest common subsequence (LCS) between the two sequences, A and B. A common subsequence is a sequence that appears in both A and B and maintains the relative order of elements but not necessarily their contiguity.
So, the abstract problem can be stated as follows:
“Given two sequences of elements, identify the longest sequence of elements that appears in both given sequences, in the same order, but not necessarily consecutively. If there is no common sequence, the length of the LCS is 0.”
This abstraction allows us to disregard the specifics related to the characters, and instead focus on the core computational problem: identifying the LCS between two sequences. This problem is a classic dynamic programming problem, which allows it to be applicable to numerous situations beyond just strings.
Terminology
Here are some specialized terms and concepts that are central to understanding this problem and its solution:
String: A sequence of characters. In the context of this problem, we are given two strings and we are to find a common subsequence.
Subsequence: A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements. For example, “abc” is a subsequence of “aebdc”.
Common Subsequence: A common subsequence is a subsequence that is common to two or more sequences. For instance, if we have two strings, “abcde” and “ace”, “ace” is a common subsequence.
Longest Common Subsequence (LCS): Of all the common subsequences between two strings, the one with the greatest length is termed the Longest Common Subsequence. For instance, if we have two strings, “abcde” and “ace”, “ace” is the LCS with a length of 3.
Dynamic Programming (DP): This is a method for solving complex problems by breaking them down into simpler subproblems, solving the subproblems, and storing their solutions to avoid solving the same subproblem multiple times. DP is the main technique used to solve the LCS problem.
DP Table: A 2D table used in DP-based solutions to store the results of subproblems. In the LCS problem, we use a DP table to store the lengths of common subsequences of all prefixes of the given strings, which we then use to determine the length of the LCS.
Each of these terms and concepts plays a critical role in defining the problem and outlining the approach towards its solution.
Problem Simplification and Explanation
Imagine you have two boxes of LEGO bricks. Each box contains bricks of different colors arranged in a specific order. Your goal is to find the longest sequence of colored bricks that appears in the same order in both boxes. However, not every brick needs to be used and you can skip over certain bricks as needed.
The boxes of LEGOs are the two input strings, text1 and text2. Each brick represents a character in the string, and the color of the brick corresponds to the specific character. The sequence of colored bricks you’re trying to find is the longest common subsequence.
The key concept here is the idea of a “subsequence.” Just like how you can skip bricks in your sequence, a subsequence of a string is a sequence that can be derived from the original string by deleting some or no characters without changing the order of the remaining characters.
To solve this problem, one common method is dynamic programming. Think of it as creating a blueprint or a grid that compares each brick (character) from one box (string) to each brick from the other box. You fill out each cell in the grid based on whether the bricks from the two boxes match or not, always keeping track of the longest sequence you’ve built so far. By the time you’ve compared every brick from both boxes, you’ll have your longest sequence!
This is a bit like constructing a building: you start from the ground up, building on what you’ve already constructed, until you’ve reached the top and completed the building. Similarly, in dynamic programming, you start by solving smaller subproblems and build on these solutions until you’ve solved the entire problem.
Constraints
Here are a few observations:
The problem is wellsuited for Dynamic Programming (DP): The longest common subsequence (LCS) problem exhibits overlapping subproblems, which means that the same subproblems are solved multiple times. This redundancy can be exploited using DP to store results of subproblems so they do not have to be recomputed each time, which greatly improves efficiency.
Character comparison: The problem involves two strings and essentially boils down to comparing the characters in the strings. This property allows us to use a 2D DP array where the cell [i][j] indicates the length of the LCS for the prefixes text1[0...i] and text2[0...j].
Length constraints: The problem constrains the lengths of text1 and text2 to be at most 1000. This tells us that an algorithm with time complexity O(n^2) will be feasible.
Lowercase English characters only: The problem only deals with lowercase English characters. There are only 26 lowercase English letters, so in problems requiring frequency counts or similar operations this constraint could be useful. In this problem, it’s simply worth noting that we only need to deal with these 26 possibilities per character in the string.
Longest common subsequence, not substring: The problem requires a subsequence, not a substring. A subsequence doesn’t require the characters to be contiguous in the strings. Therefore, characters can be skipped over in the input strings while determining the longest common subsequence. This leads to the strategy of comparing characters and including them in the LCS if they match, or excluding and moving on if they don’t, which is key to the formulation of our dynamic programming solution.
Analyzing the constraints of a problem often provides insights into potential solution approaches. Here’s what we can glean from the constraints of this problem:
Size Constraints: The constraints specify that the length of the strings will not exceed 1000 characters. This tells us that a solution with a time complexity of O(n^2) is feasible. For larger constraints, we might have had to consider more efficient algorithms or heuristics.
Character Constraints: The problem only deals with lowercase English characters. This indicates that we don’t need to account for case sensitivity or special characters in our solution. In other problems, this kind of constraint can sometimes lead to solutions involving direct character mapping, hashing, or frequency counts using an array of size 26 (the number of lowercase English letters).
Subsequence, not Substring: The problem is about subsequences, not substrings, meaning the elements don’t have to be contiguous, just in order. This indicates that our solution needs to account for non-contiguous character matches and implies that we might need to use techniques like dynamic programming to solve the problem effectively.
Zero Common Subsequence: The problem statement mentions that if there’s no common subsequence, the output should be 0. This provides a clear base case for our solution, which is especially helpful when we’re dealing with recursive solutions or dynamic programming.
These constraints and the insights we derive from them are invaluable when we start to design and implement our solution. They help us figure out the boundaries within which we need to work and guide us toward the most effective techniques to use.
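One further consequence of the size constraints: each row of the DP table depends only on the previous row, so the O(m * n) space of the full table can be reduced to O(min(m, n)) with two rolling rows. A sketch (the function name is illustrative):

```python
def lcs_length_two_rows(text1: str, text2: str) -> int:
    """LCS length using two rolling rows instead of the full DP table."""
    if len(text2) > len(text1):
        text1, text2 = text2, text1  # keep rows sized by the shorter string
    prev = [0] * (len(text2) + 1)    # row for the previous text1 prefix
    for ch in text1:
        curr = [0] * (len(text2) + 1)
        for j, ch2 in enumerate(text2, start=1):
            if ch == ch2:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[-1]
```

The time complexity is unchanged; only the memory footprint shrinks, which matters more for much longer inputs than the 1000-character limit here.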
Case Analysis
Here are some additional examples and test cases:
Case of Empty Strings (Edge case):
- Input: text1 = “”, text2 = “”
- Output: 0
- Explanation: There are no characters in either string, so there is no common subsequence. The output is therefore 0.
Case of One Empty String (Edge case):
- Input: text1 = “”, text2 = “abc”
- Output: 0
- Explanation: One of the strings is empty, so there is no common subsequence. The output is 0.
Case of Single Character Strings (Boundary case):
- Input: text1 = “a”, text2 = “a”
- Output: 1
- Explanation: Both strings contain a single character and it’s the same in both. So, the longest common subsequence is of length 1.
Case of Non-Overlapping Strings (Negative case):
- Input: text1 = “abc”, text2 = “def”
- Output: 0
- Explanation: There are no common characters in the two strings, so there is no common subsequence.
Case of Strings with Differing Case (Special case):
- Input: text1 = “AbC”, text2 = “aBc”
- Output: 0
- Explanation: This problem assumes case sensitivity, hence ‘A’ and ‘a’ are considered different characters. No characters match exactly in case, hence the longest common subsequence length is 0.
Case of Repetitive Characters (Unique case):
- Input: text1 = “aaa”, text2 = “aa”
- Output: 2
- Explanation: Even though text1 has more ‘a’s than text2, the longest common subsequence is “aa”, which has length 2.
These examples cover a variety of edge and boundary cases, and examining them can help us understand the problem better and design a more robust solution.
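These cases can be captured as simple assertions against any LCS implementation; the compact DP function below is just a stand-in for whichever solution is being tested:

```python
def lcs(text1: str, text2: str) -> int:
    """Minimal bottom-up LCS length, used here only as a test subject."""
    dp = [[0] * (len(text2) + 1) for _ in range(len(text1) + 1)]
    for i in range(1, len(text1) + 1):
        for j in range(1, len(text2) + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if text1[i - 1] == text2[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[-1][-1]

# One assertion per case discussed above.
assert lcs("", "") == 0          # both strings empty
assert lcs("", "abc") == 0       # one string empty
assert lcs("a", "a") == 1        # single matching characters
assert lcs("abc", "def") == 0    # no common characters
assert lcs("AbC", "aBc") == 0    # comparison is case-sensitive
assert lcs("aaa", "aa") == 2     # repeated characters
```

Running these checks early catches off-by-one errors in the table indexing before moving on to larger inputs.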
One way to visualize these cases would be to draw two-dimensional grids where each cell (i,j) represents the length of the longest common subsequence (LCS) for the prefixes up to the ith character of text1 and the jth character of text2.
Case of Empty Strings:
A 1x1 grid (since both lengths are 0). All cells will contain 0, as there are no common subsequences.
Case of One Empty String:
A grid with one row (or column, depending on which string is empty). All cells will contain 0, as there is no common subsequence with an empty string.
Case of Single Character Strings:
A 2x2 grid. If the characters match, the bottom-right cell (representing the LCS for the entire strings) will contain 1.
Case of Non-Overlapping Strings:
If text1 = “abc” and text2 = “def”, we would have a 4x4 grid (string lengths + 1). All cells in the bottom-right 3x3 square will contain 0, as there are no common characters.
Case of Strings with Differing Case:
Similarly, a grid of appropriate dimensions will have all 0’s in the cells corresponding to the LCS of any pair of prefixes, since no characters match exactly in case.
Case of Repetitive Characters:
If text1 = “aaa” and text2 = “aa”, we will have a 4x3 grid. The diagonal cells from top left to bottom right will progressively increase to a maximum of 2, showing that the LCS has length 2.
These visualizations can help us understand how a dynamic programming approach could be used to solve the problem, by building up the solution for the entire strings based on solutions for their prefixes.
Let’s visualize these cases using 2D grids. Note that each cell (i,j) in the grid represents the length of the longest common subsequence (LCS) for the prefixes up to the ith character of text1 and the jth character of text2. For simplicity, ‘.’ is used to represent a cell containing 0.
Case of Empty Strings:
```
.
```
An empty string vs. another empty string. The LCS is 0.
Case of One Empty String:
```
. . . .
```
An empty string vs. a non-empty string. The LCS is 0 for all prefixes of the non-empty string.
Case of Single Character Strings:
```
. a
a 1
```
Here, both strings are “a”. The LCS for the entire strings (cell 1,1) is 1, as they share a common character.
Case of Non-Overlapping Strings:
```
. d e f
a . . .
b . . .
c . . .
```
The strings are “abc” and “def”. There are no common characters, so the LCS is 0 for all pairs of prefixes.
Case of Strings with Differing Case:
```
. a b c
A . . .
B . . .
C . . .
```
The strings are “abc” and “ABC”. Even though they contain the ‘same’ letters, the differing case means that there are no common characters, and so the LCS is 0 for all pairs of prefixes.
Case of Repetitive Characters:
```
. a a
a 1 1
a 1 2
a 1 2
```
The strings are “aaa” and “aa”. The diagonal cells increase progressively as the LCS for the prefixes increases in length. The bottom-right cell (3,2) represents the LCS for the entire strings, and its value of 2 shows that the LCS is “aa”.
Analyzing the different cases, we can derive some important insights:
Length of the Longest Common Subsequence (LCS): For two strings of lengths m and n, the LCS cannot be longer than min(m, n) since a subsequence can’t be longer than the original sequences. This sets an upper limit on the possible length of the LCS.
Handling Empty Strings: When either or both strings are empty, the LCS is 0 as there are no characters to form a common subsequence. This forms our base case in the dynamic programming approach.
Same Characters at Current Position: If the characters at the current positions in both strings are the same, then the LCS at this point would be one plus the LCS of the prefixes of both strings (excluding the current character). This forms one part of our recursive relation in the dynamic programming approach.
Different Characters at Current Position: If the characters at the current positions in both strings are different, then the LCS at this point would be the maximum of the LCS calculated from two scenarios:
- Excluding the current character of the first string and including the current character of the second string.
- Including the current character of the first string and excluding the current character of the second string.
Case Sensitivity: The problem is case sensitive. The same character in different cases is treated as a different character.
Repetitive Characters: Repetitive characters in the same string do not add to the LCS unless there are equal or more repetitions in the other string as well. The sequence must occur in both strings for it to be counted.
These insights can guide us towards formulating an efficient approach (like Dynamic Programming) to solve the problem.
Identification of Applicable Theoretical Concepts
There are some fundamental concepts and methodologies in computer science that can be applied to the problem of finding the longest common subsequence (LCS) between two strings. These include:
Dynamic Programming (DP): This is a powerful algorithmic paradigm where problems are solved by breaking them down into smaller subproblems, which are then solved and their solutions stored. These stored solutions are reused to solve bigger subproblems, thereby reducing redundant computation. DP is particularly useful when the problem exhibits the properties of overlapping subproblems and optimal substructure, both of which the LCS problem does.
Overlapping Subproblems: This property is exhibited when solving a problem involves solving the same subproblem multiple times. In the LCS problem, when we’re comparing two strings, we often end up comparing the same prefixes or suffixes of the strings multiple times. By using DP, we can store the results of these comparisons and avoid redundant computations.
Optimal Substructure: This property is present when an optimal solution to a problem can be constructed from the optimal solutions of its subproblems. In the LCS problem, the LCS of two strings is composed of the LCS of their prefixes. This means we can build up the solution incrementally, reusing previously computed results.
Sequence Alignment and Comparisons: The problem of finding an LCS is a fundamental problem in the field of bioinformatics, where it is used for sequence alignment (comparing DNA, RNA, or protein sequences). Understanding the alignment process and how mismatches, gaps, and similarities are scored can provide insights into the LCS problem.
By leveraging these concepts, we can develop an efficient solution to the LCS problem that takes advantage of the structure of the problem and avoids unnecessary computation.
Simple Explanation
Let’s imagine that you have two favorite songs, and you want to make a new mix of those songs. However, you only want to include the lyrics (words) that are common to both songs and maintain their original order from each song. The mix can’t have words jumbled up!
So, you start listening to the first song and writing down the lyrics. Then you start the second song, but this time, you only write down the words if they are in your written list from the first song and they appear in the same order.
For example, if the first song goes, “I love to sing in the rain,” and the second song goes, “Rain makes me love to dance,” your mixed lyrics would be “love to.”
This is similar to finding the longest common subsequence in two strings. The songs are the strings, and the words are the characters in those strings. You’re looking for the longest ‘string of words’ (sequence) that appear in both songs (strings) in the same order.
But remember, it doesn’t have to be every word from the songs, just like our mix doesn’t include all the lyrics from either song, only the words that they have in common in the same order. And the ’longest’ mix is the one with the most number of words (or characters in our original problem).
Problem Breakdown and Solution Methodology
Let’s say we’re given two strings A and B. Here’s a step-by-step process to find the length of their longest common subsequence:
Create a 2D array (or table): The size of the array should be (length of A+1) by (length of B+1). This table will help us to keep track of the maximum length of common subsequence we’ve found so far. You can think of this table as a scorecard in a game where string A is competing against string B and each cell in the table stores the highest score (length of common subsequence) up to that point.
Initialize the table: Fill the first row and the first column of the array with zeros. This represents the base case where one of the strings has a length of zero, meaning there’s no common subsequence.
Fill in the table: Now for each character in A (let’s say at index i) and each character in B (let’s say at index j), we compare them:
 If A[i] is equal to B[j], it means we’ve found a common character, so we increment the value from the upper-left diagonal cell (i.e., table[i-1][j-1]) by 1 and put that in our current cell (table[i][j]). It’s like scoring a point in our game whenever we find a match.
 If A[i] is not equal to B[j], it means we haven’t found a match. In this case, we just take the maximum value from either the cell directly above (table[i-1][j]) or the cell directly to the left (table[i][j-1]). It’s like we’re checking the highest score we’ve had so far in the game, even if we didn’t score this round.
Find the longest common subsequence: After filling in the entire table, the bottom-right cell (table[length of A][length of B]) will contain the length of the longest common subsequence.
Let’s go through a concrete example:
Consider two strings: A = “abcde”, B = “ace”.
After creating and initializing our 2D table, we get:
      0  a  c  e
  0   0  0  0  0
  a   0
  b   0
  c   0
  d   0
  e   0
Now, we start filling the table by comparing characters. After completing this, our table looks like this:
      0  a  c  e
  0   0  0  0  0
  a   0  1  1  1
  b   0  1  1  1
  c   0  1  2  2
  d   0  1  2  2
  e   0  1  2  3
So, the length of the longest common subsequence for “abcde” and “ace” is 3.
This approach is known as Dynamic Programming because it breaks a complex problem into simpler, smaller subproblems and uses the solutions to these subproblems to construct the solution to the original problem. It’s like building a puzzle by putting together small pieces one at a time to gradually create the entire picture.
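The table-filling procedure described above can be sketched in Python as follows (a straightforward bottom-up implementation; names are illustrative):

```python
def lcs_length(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # table[i][j] holds the LCS length of a[:i] and b[:j];
    # row 0 and column 0 stay 0 (the empty-string base case).
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Match: one point on top of the upper-left diagonal cell.
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                # No match: carry the best score from above or the left.
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]
```

Running lcs_length("abcde", "ace") fills exactly the table shown above and returns 3.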
Inference of ProblemSolving Approach from the Problem Statement
This problem involves several key concepts:
String: The inputs to the problem are two strings, i.e., sequences of characters.
Subsequence: A subsequence of a string is a new string generated from the original string with some characters (can be none) deleted without changing the relative order of the remaining characters. This is crucial to our understanding of what we’re looking for.
Common Subsequence: A common subsequence of two strings is a subsequence that is common to both strings. Our goal in this problem is to find the longest such subsequence.
Longest Common Subsequence (LCS): This is the main problem we are trying to solve. The LCS is the longest subsequence that two sequences have in common. This is what our dynamic programming solution will find.
Dynamic Programming: This is a method for solving complex problems by breaking them down into simpler subproblems, solving each of those subproblems just once, and storing their solutions, ideally using a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. The technique of storing solutions to subproblems instead of recomputing them is called “memoization.”
As for recognizing these properties by drawing tables or diagrams, that’s precisely what the 2D table in the dynamic programming solution is for. Each cell in the table represents a subproblem, i.e., the LCS of a prefix of string A and a prefix of string B. The value in each cell represents the length of the LCS for those prefixes. By filling in this table, we systematically solve all subproblems and use their solutions to solve larger subproblems, until we’ve solved the original problem.
There are a few key signals in the problem statement that suggest dynamic programming could be an effective approach:
Overlapping subproblems: The problem asks for the longest common subsequence (LCS) between two strings. To solve this, we need to compare every character of one string with every character of the other. In this process, we’ll end up solving the same subproblems multiple times, such as the LCS of the substrings up to certain indices. Overlapping subproblems are a hallmark of problems that can be solved using dynamic programming.
Optimal substructure: The problem can be broken down into smaller subproblems, and the solution to the larger problem depends on the solutions to these smaller subproblems. In this case, the LCS for two strings depends on the LCS of their prefixes. This property of optimal substructure is another key indicator that dynamic programming may be a good approach.
Ordering: The problem involves finding subsequences, which have a natural ordering since they preserve the relative order of elements from the input strings. This suggests that we might be able to build up a solution incrementally, considering one character at a time, which is a common technique in dynamic programming.
Dynamic programming is a powerful technique that is applicable to a wide range of problems, especially those involving optimization over sequences or arrays, which fits our problem well. Hence, recognizing these features in the problem statement can guide us to consider dynamic programming as a possible solution strategy.
Simple Explanation of the Proof
Let’s break down the explanation of how the dynamic programming algorithm works for the Longest Common Subsequence (LCS) problem.
Key Concepts:
Dynamic Programming (DP): DP is a method for solving complex problems by breaking them down into simpler subproblems and reusing solutions to these subproblems to construct solutions to the original problem.
LCS: The LCS between two strings is the longest sequence of characters that can be found in both strings while preserving the order of characters.
Understanding the DP approach to LCS:
The DP approach to the LCS problem involves building up a solution by comparing each character in the first string to each character in the second string. For this, we use a 2D DP table, where dp[i][j] represents the length of the LCS of the prefixes of the two strings up to indices i and j.
The rules for populating the DP table are as follows:
If the characters at the current position in both strings are the same (i.e., text1[i] == text2[j]), that means we’ve found a common character, so we add 1 to the LCS of the prefixes of the strings without this character, which is dp[i-1][j-1]. So, dp[i][j] = dp[i-1][j-1] + 1.
If the characters at the current position in the strings are not the same (i.e., text1[i] != text2[j]), we can’t include both characters in the LCS. So, we take the maximum length of the LCS we could get if we either exclude the current character from the first string or from the second string. This is given by max(dp[i-1][j], dp[i][j-1]).
Why does this work?
The DP approach works because of the nature of the problem, which exhibits both overlapping subproblems and optimal substructure:
Overlapping Subproblems: While computing the LCS for larger prefixes, we end up computing the LCS for smaller prefixes multiple times. The DP approach saves these intermediate results, which can be reused, hence reducing redundant computation.
Optimal Substructure: The longest common subsequence of two strings can be built by knowing the longest common subsequences of their prefixes. Therefore, we can build the solution for the larger problem using the solutions to the smaller subproblems.
In essence, the DP solution to the LCS problem incrementally builds up the length of the LCS by solving the problem for smaller prefixes of the strings, and uses these solutions to construct the solution to the original problem.
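The same recurrence can also be rendered top-down, with memoization ensuring each subproblem is solved exactly once. A sketch (using functools.lru_cache as the cache; the helper names are illustrative):

```python
from functools import lru_cache

def lcs_memoized(text1: str, text2: str) -> int:
    @lru_cache(maxsize=None)
    def solve(i: int, j: int) -> int:
        # LCS length of the prefixes text1[:i] and text2[:j].
        if i == 0 or j == 0:
            return 0
        if text1[i - 1] == text2[j - 1]:
            return solve(i - 1, j - 1) + 1
        return max(solve(i - 1, j), solve(i, j - 1))

    return solve(len(text1), len(text2))
```

The cache is what turns the exponential recursion into an O(m*n) computation, mirroring the overlapping-subproblems argument above.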
Stepwise Refinement
Stepwise Refinement of the Approach:
Step 1: Initialize a 2D DP array/table where dp[i][j] represents the length of the LCS of the prefixes of the two strings up to indices i and j. The dimensions of the table would be (n+1) x (m+1), where n is the length of text1 and m is the length of text2.
Step 2: Iterate over the DP table starting from the top-left to the bottom-right. The outer loop corresponds to the characters in text1 and the inner loop corresponds to the characters in text2.
Step 3: If the characters at the current indices in both strings are the same, set dp[i][j] to dp[i-1][j-1] + 1. This means that we found a common character, and we add 1 to the LCS of the prefixes of the strings without this character.
Step 4: If the characters at the current indices in the strings are not the same, set dp[i][j] to max(dp[i-1][j], dp[i][j-1]). This means we take the maximum length of the LCS we could get if we either exclude the current character from the first string or from the second string.
Step 5: After filling the entire DP table, the bottom-right cell of the table will hold the length of the LCS of the two input strings.
Granular, Actionable Steps:
 Define the problem constraints and input/output formats.
 Initialize a 2D DP array of size (length of text1 + 1) x (length of text2 + 1) to store the LCS values.
 Iterate over both strings character by character. For each pair of characters, perform the checks and update the DP table as per the rules explained above.
 Return the value at the bottom-right cell of the DP table as the length of the LCS.
Independent Parts of the Problem:
Each cell in the DP table can be filled independently once the cells it depends on (to the left, top, and top-left) have been computed. However, due to these dependencies, we must ensure the correct order of computation, i.e., top to bottom, left to right.
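Because each cell depends only on the current and previous rows, this dependency order also permits a standard space optimization: keep just two rows instead of the full table. A sketch (illustrative names, same recurrence as above):

```python
def lcs_length_two_rows(a: str, b: str) -> int:
    m, n = len(a), len(b)
    prev = [0] * (n + 1)  # row i-1 of the conceptual table
    curr = [0] * (n + 1)  # row i, filled left to right
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev, curr = curr, prev  # reuse the old row's storage
    return prev[n]  # after the final swap, prev holds the last computed row
```

This drops the space requirement from O(m*n) to O(n) while computing the same lengths.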
Repeatable Patterns within our Solution:
The process of comparing each pair of characters and updating the DP table is a repeatable pattern. For every pair of indices i, j, we update dp[i][j] based on the comparison of text1[i] and text2[j], and the values at dp[i-1][j], dp[i][j-1], and dp[i-1][j-1]. This process is repeated for every pair of characters until the entire DP table is filled out.
Solution Approach and Analysis
I’ll walk you through the problem using a step-by-step approach and a simple example to illustrate. I’ll also discuss how changes in the problem’s parameters could affect the solution.
Here’s how you can think of it:
 Creating a table: You can think of this problem as creating a grid or table, where one string is laid out on the top row (each character in its own cell), and the other string is laid out in the first column. This creates a matrix where each cell can be associated with a pair of characters, one from each string. For example, if the strings are “abcde” and “ace”, our initial grid might look like this:
      0  a  b  c  d  e
  0   0  0  0  0  0  0
  a   0
  c   0
  e   0
Filling the table: Next, we’ll fill in each cell with the length of the longest common subsequence (LCS) of the substring pairs represented by that cell’s row and column. If the characters in the current row and column are the same, we add 1 to the value of the cell diagonally above and to the left of the current cell. If the characters are different, we take the maximum value from the cell above or the cell to the left. This is because we’re either excluding the current character from the first string (taking value from the cell above) or from the second string (taking value from the cell to the left).
Our table would then look like this:
      0  a  b  c  d  e
  0   0  0  0  0  0  0
  a   0  1  1  1  1  1
  c   0  1  1  2  2  2
  e   0  1  1  2  2  3
 Finding the answer: The value in the bottom right corner is the length of the LCS of the two strings. In this case, the length of the LCS of “abcde” and “ace” is 3, which corresponds to the subsequence “ace”.
How parameters affect the solution:
Length of the strings: As the length of the strings increases, the size of the dynamic programming table increases. This will increase the time and space complexity of the algorithm. So, for very long strings, the algorithm might take a considerable amount of time.
Content of the strings: If the strings are very similar or identical, the LCS will be longer. Conversely, if they have few or no common characters, the LCS will be shorter or even nonexistent. This doesn’t significantly affect the complexity of the algorithm, but it’s worth noting as it affects the LCS length result.
Examples:
Let’s take a look at how this works with a different pair of strings.
If we have “abc” and “abc”, the table looks like this:
      0  a  b  c
  0   0  0  0  0
  a   0  1  1  1
  b   0  1  2  2
  c   0  1  2  3
As you can see, the LCS of “abc” and “abc” is 3, which corresponds to the subsequence “abc”.
Identify Invariant
An invariant in the context of an algorithm is a condition that remains true before and after each iteration.
In the case of the Longest Common Subsequence (LCS) problem, the invariant is the following:
For any cell (i, j) in our Dynamic Programming table, the value in the cell represents the length of the LCS of the substrings A[0…i] and B[0…j].
This is an invariant because it holds true throughout the process of filling out the Dynamic Programming table. For each cell, we calculate the LCS length considering substrings up to the current cell’s position (i, j), either by extending the length from the diagonal cell if the characters match, or by carrying the maximum value from the left or top cells if the characters do not match. This ensures that the calculated length of the LCS remains consistent and accurate at each step of the process.
This invariant helps us to build the final solution incrementally, and it guarantees that by the time we reach the cell (m, n), where m and n are the lengths of the two input strings, we have correctly computed the length of the LCS for the full strings A and B.
Identify Loop Invariant
In the Longest Common Subsequence (LCS) problem, the loop invariant can be stated as follows:
For the outer loop, which iterates over the characters of string text1 from 1 to m (let’s assume the current index is i), the invariant is that each cell (i, j) for all j from 0 to n in the Dynamic Programming (DP) matrix represents the length of the longest common subsequence between the substrings text1[0...i] and text2[0...j].
For the inner loop, which iterates over the characters of string text2 from 1 to n (let’s assume the current index is j), the invariant is that the current cell (i, j) in the DP matrix represents the length of the longest common subsequence between the substrings text1[0...i] and text2[0...j].
Here, the loop invariants help ensure that the algorithm correctly computes the lengths of the LCS for the substrings of text1 and text2 up to indices i and j respectively. Thus, when the loops end, we have the length of the LCS for the full strings text1 and text2 in the cell (m, n).
These invariants hold true at the start of each loop iteration, and the operations within the loops maintain these invariants, ensuring that they still hold at the end of each iteration. Hence, they are crucial for the correctness of the algorithm.
In this particular problem of Longest Common Subsequence (LCS), the invariant and the loop invariant can be considered the same because the algorithm mainly consists of nested loops that fill up a twodimensional array (DP table), and the correctness of the algorithm heavily depends on the proper execution of these loops.
The invariant, which is a condition that is initially true and remains so after each iteration, is crucial for the correctness of the algorithm. In this case, it is the property that each cell in the DP table correctly represents the length of the LCS for the substrings up to the given indices.
However, it’s important to note that this might not always be the case in other problems. In general, a loop invariant is a specific type of invariant that applies to loops. An invariant can also be a broader property or condition that remains true throughout the execution of an entire algorithm, not just loops. For example, in some sorting algorithms, a certain portion of the array might remain sorted throughout the algorithm, and this would be an invariant, but not necessarily a loop invariant.
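To make the invariant concrete, the table fill can be instrumented with assertions that compare each cell against an independent recursive definition of the LCS (a sketch for small inputs only; the reference helper is illustrative):

```python
from functools import lru_cache

def lcs_with_invariant_checks(a: str, b: str) -> int:
    @lru_cache(maxsize=None)
    def ref(i: int, j: int) -> int:
        # Independent definition: LCS length of the first i and j characters.
        if i == 0 or j == 0:
            return 0
        if a[i - 1] == b[j - 1]:
            return ref(i - 1, j - 1) + 1
        return max(ref(i - 1, j), ref(i, j - 1))

    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            # Invariant: dp[i][j] is the LCS length of a[:i] and b[:j].
            assert dp[i][j] == ref(i, j)
    return dp[m][n]
```

If the invariant ever failed, the assertion would pinpoint the first cell where the table diverges from the definition.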
Thought Process
The problem is about finding the length of the longest common subsequence (LCS) between two given strings. The LCS is a sequence that appears in both of the given strings in the same order. Let’s break it down into steps:
Understanding the Problem: The problem is asking for the length of the longest common subsequence between two strings. A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements. For example, “abc”, “abg”, “bdf”, “a”, "" etc. are subsequences of “abcdefg”.
Identifying the Problem Type: This is a classical dynamic programming problem. The cues in the problem statement that lead us to this approach are the need to find an optimal solution (longest subsequence), and the presence of overlapping subproblems (the LCS problem for substrings of the input strings).
Formulating a Plan: We can solve this problem by using a 2D dynamic programming table, where dp[i][j] will represent the length of the LCS of the substrings text1[0..i] and text2[0..j]. We will start from the smallest substrings and gradually build up to the LCS for the entire strings.
Coding the Solution: Here is how we can code the solution in Python:


The if text1[i] == text2[j] condition checks if the current characters in both strings are equal. If they are, we add 1 to the LCS of the next substrings text1[i + 1..] and text2[j + 1..]. Otherwise, we take the maximum between the LCS of text1[i + 1..] and text2[j..], and the LCS of text1[i..] and text2[j + 1..]. This corresponds to ignoring the current character in either text1 or text2.
Establishing Preconditions and Postconditions
Let’s break this down for the longestCommonSubsequence function:
Parameters:
 Inputs to the method: text1 and text2.
 Types: Both are strings.
 They represent the two sequences for which we want to find the longest common subsequence.
Preconditions:
 Before this method is called, there aren’t any specific conditions about the state of the program that must be true.
 Constraints: Both text1 and text2 can be any strings, but in the context of the longest common subsequence problem, they are usually non-empty and have lengths within a reasonable range to avoid issues with time complexity. The characters in these strings are typically letters or digits, but the function should work for any characters.
 There’s no need for the program to be in a specific state before calling this function.
Method Functionality:
 The method is expected to find the longest common subsequence between text1 and text2, and return its length.
 It uses the dynamic programming technique to interact with the inputs, creating a matrix dp to store the lengths of the longest common subsequences for all prefixes of text1 and text2.
Postconditions:
 After the method has been called, the state of the program doesn’t change, since the method doesn’t modify any state or have any side effects.
 The return value represents the length of the longest common subsequence of text1 and text2.
 The method has no side effects, as it doesn’t change the state of the program or modify its inputs.
Error Handling:
 The method doesn’t include specific error handling. If the preconditions aren’t met (e.g., if text1 or text2 is not a string), Python will raise a TypeError when it tries to perform operations that aren’t defined for the given input types.
 The function doesn’t throw a custom exception or return a special value. If the inputs are invalid, it will simply fail with a standard Python exception.
Problem Decomposition
Problem Understanding:
 In simple terms, the problem is to find the longest sequence that two given sequences have in common. However, the elements in this sequence don’t have to appear consecutively in the original sequences, but they should maintain their relative order.
Initial Breakdown:
 Major parts of the problem include identifying all possible subsequences of both strings and comparing these subsequences to identify the longest one that is common to both.
Subproblem Refinement:
 Identifying all subsequences could be broken down into iterating through each string and creating subsequences. However, this approach would be inefficient.
 A better subproblem would be to analyze both sequences and compare them element by element, storing intermediate results.
 To efficiently solve this problem, we can use dynamic programming to break it down into overlapping subproblems, where each subproblem represents the longest common subsequence between the prefixes of the two sequences.
Task Identification:
 Repeated tasks include comparing characters and maintaining a record of the length of the longest common subsequence found so far.
Task Abstraction:
 The task of comparing characters and maintaining a record is abstracted in the dynamic programming approach, where a matrix is used to keep track of the longest common subsequence for each pair of prefixes.
Method Naming:
 A simple, descriptive name for this task could be findLongestCommonSubsequence.
Subproblem Interactions:
 The subproblems interact with each other through the dynamic programming matrix. The solution to a larger subproblem (longest common subsequence for longer prefixes) depends on the solution to smaller subproblems (longest common subsequence for shorter prefixes).
 They need to be solved in a specific order: from the smallest subproblem (empty prefix) to the largest (the whole sequence). The dependencies follow from this order.
From Brute Force to Optimal Solution
Let’s begin by describing a brute-force solution to the longest common subsequence problem:
Brute-Force Approach:
Generate all subsequences of both input sequences.
For each subsequence of the first sequence, check if it is also a subsequence of the second sequence.
Keep track of the longest subsequence that is common to both sequences.
Inefficiencies of the Brute-Force Approach:
Generating all subsequences is expensive. There are 2^n possible subsequences of a sequence of length n (including the empty sequence), so we need to generate 2^m + 2^n subsequences if our inputs have lengths m and n.
For each subsequence of the first sequence, we check if it is a subsequence of the second sequence. This step also has a high time complexity.
Overall, the brute-force approach has a time complexity on the order of O(n * 2^m): we enumerate the 2^m subsequences of the first string and check each against the second string in O(n) time. This is infeasible for large inputs.
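For contrast, the brute-force idea can be sketched directly (illustrative helper names; practical only for very short inputs):

```python
from itertools import combinations

def is_subsequence(s: str, t: str) -> bool:
    # Greedy scan: each character of s must appear in t, in order.
    it = iter(t)
    return all(c in it for c in s)

def lcs_brute_force(a: str, b: str) -> int:
    # Enumerate subsequences of the shorter string, longest first,
    # and return the first length at which a common subsequence exists.
    if len(a) > len(b):
        a, b = b, a
    for k in range(len(a), 0, -1):
        for combo in combinations(a, k):
            if is_subsequence("".join(combo), b):
                return k
    return 0
```

Even for strings of length 20, this examines up to about a million candidate subsequences, which motivates the dynamic programming approach below.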
Optimized Approach Using Dynamic Programming:
The key insight is that the problem exhibits both optimal substructure and overlapping subproblems  hallmarks of problems that can be solved efficiently using dynamic programming. We can solve this problem by considering smaller subproblems: the longest common subsequences of prefixes of our input sequences.
Create a 2D array (dp) with dimensions (m+1) x (n+1), where m and n are the lengths of the input sequences. dp[i][j] will represent the length of the longest common subsequence of the first i characters of the first sequence and the first j characters of the second sequence.
Initialize the first row and the first column of dp to 0, because the longest common subsequence of the empty prefix and any other prefix is the empty sequence.
For each pair of characters in the input sequences (for i from 1 to m, for j from 1 to n), do the following:
 If the characters are equal, then the length of the longest common subsequence ending at those characters is 1 plus the length of the longest common subsequence of the previous prefixes (dp[i-1][j-1]).
 If the characters are not equal, then the longest common subsequence ending at those characters is the longer of the two longest common subsequences that end at the previous character in either sequence (max(dp[i-1][j], dp[i][j-1])).
After filling in the dp array, the value in the bottom right corner (dp[m][n]) is the length of the longest common subsequence of the two input sequences.
To recover the longest common subsequence itself, we can backtrack from dp[m][n]. If the characters at position i in the first sequence and position j in the second sequence are equal, then this character is part of the longest common subsequence, and we move diagonally up to dp[i-1][j-1]. If they are not equal, we move to the left or up depending on which of dp[i-1][j] or dp[i][j-1] is larger.
The time and space complexity of the dynamic programming approach is O(m*n), which is a substantial improvement over the brute-force approach. However, it is worth noting that this problem still requires significant resources for large inputs due to its polynomial complexity.
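The backtracking step described above can be sketched as follows (building the table first, then walking back from dp[m][n]; names are illustrative):

```python
def lcs_string(a: str, b: str) -> str:
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack from the bottom-right cell to recover one LCS.
    pieces = []
    i, j = m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            pieces.append(a[i - 1])  # this character is part of the LCS
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1  # move up
        else:
            j -= 1  # move left
    return "".join(reversed(pieces))
```

Note that when dp[i-1][j] equals dp[i][j-1] there may be several distinct longest common subsequences; this sketch returns one of them.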
Code Explanation and Design Decisions
Initial Parameters: The main parameters in this problem are the two input sequences for which we’re trying to find the longest common subsequence (LCS). Each sequence can be a string or an array of characters or numbers.
Primary Loop or Iteration: The dynamic programming solution involves iterating over each character in both input sequences. The outer loop iterates through each character in the first sequence, while the nested loop iterates through each character in the second sequence. Each iteration of the inner loop corresponds to considering a pair of characters, one from each sequence.
Conditions or Branches Within The Loop: Within the nested loop, there is a conditional statement that checks if the characters at the current positions in the two sequences are equal. This condition signifies that we’ve found a common character in both sequences, and thus can extend the LCS that ends at the previous pair of characters. If the characters are not equal, then we take the longer of the LCSs that end at the previous character in either sequence.
Updates or Modifications to Parameters Within The Loop: The dp table is updated within the nested loop based on whether the current characters are equal or not. If they are, the length of the LCS is incremented by 1. If they aren’t, the length is set to the maximum length found so far. This reflects the dynamic programming approach of using previously computed solutions to smaller subproblems to solve the current subproblem.
Invariant: An important invariant in this problem is that for any pair of positions (i, j), dp[i][j] represents the length of the LCS of the first i characters of the first sequence and the first j characters of the second sequence. This invariant is crucial because it allows us to build up the solution iteratively and ensures that our final answer is correct.
Final Output: The final output is the value in the bottom-right cell of the dp table, dp[m][n], which represents the length of the LCS of the two input sequences. This satisfies the problem’s requirements because it gives us the length of the longest subsequence that is common to both sequences. If we wish to find the actual LCS, we can backtrack from this cell to reconstruct the LCS.
Coding Constructs
HighLevel Strategies: The code uses a technique called dynamic programming. This approach involves breaking down a larger problem into smaller subproblems and solving each of those, storing their results for later use. The results of smaller subproblems are then combined to solve larger subproblems, building up to the solution of the original problem. This strategy avoids redundant work and typically leads to more efficient solutions for certain types of problems, including this one.
Explaining to a NonProgrammer: Imagine you’re trying to find the longest common part of two different stories. To do that, you start by looking at smaller parts of the stories first  maybe a paragraph or a page at a time  and check if they’re the same. You keep track of the longest matching part you’ve found so far. Then, as you go through the rest of the stories, you build upon the work you’ve already done. If you find a matching part that continues the longest matching part you’ve found so far, you extend your current match. If not, you move on to the next part, but you don’t forget about the longest match you’ve found. By the time you’ve gone through both stories, you’ve found the longest common part. That’s essentially what this code is doing, but with sequences of characters instead of stories.
Logical Constructs: The solution uses nested loops to iterate over two input sequences, conditionals to check if elements from the two sequences match or not, a 2-dimensional array to keep track of intermediate results (lengths of common subsequences), and assignments to update the values in this array based on the logic of the problem.
Algorithmic Approach in Plain English: The code goes through each pair of elements, one from each sequence. If the elements match, it knows it has found a common subsequence of length one more than the longest common subsequence of the previous elements. If they don’t match, it knows that the longest common subsequence up to these elements is the same as the longest common subsequence up to the previous elements in either sequence. It keeps track of these lengths for every pair of elements, eventually building up to the length of the longest common subsequence of the entire sequences.
Key Steps or Operations: The code initializes a 2-dimensional array to keep track of the longest common subsequences. It then iterates over each sequence, comparing elements and updating the array based on whether the elements match or not. The key operation here is the comparison of elements and the updating of the array.
Algorithmic Patterns or Strategies: The main pattern is dynamic programming, which involves breaking a problem into smaller subproblems, solving these subproblems, and using the solutions to build up to the solution of the original problem. The code also uses the common strategy of iteration to go through each pair of elements in the sequences. The use of a 2dimensional array to store intermediate results is a common pattern in dynamic programming problems.
Language-Agnostic Coding Drills
Dissecting the Code into Learning Units:
a. Variables and Data Types: Understanding basic variables and data types forms the bedrock of most programming languages. This involves defining variables, understanding different types of data, and how to manipulate these data types.
b. Conditional Statements: They are used to perform different actions based on different conditions. The if statement in the code allows it to decide whether two elements match or not.
c. Loops: For iterating over sequences and running a block of code multiple times, we need loops. In this code, nested loops are used to iterate over two sequences.
d. Arrays (Specifically 2-Dimensional Arrays): This code uses a 2D array to store the lengths of the longest common subsequences for different pairs of elements. This is a slightly advanced topic as it involves understanding how to create, access, and manipulate 2D arrays.
e. Functions: Defining reusable blocks of code is a crucial aspect of programming. In the context of this problem, understanding how to define and call a function with parameters is key.
f. Dynamic Programming: This is an advanced concept that involves understanding how to break down a problem into smaller subproblems, solve them, and build up to the solution of the original problem. In the context of this problem, it involves understanding how to keep track of the lengths of common subsequences in a 2D array and use this to find the longest common subsequence.
Order of Increasing Difficulty and Descriptions:
a. Variables and Data Types: This is one of the first things anyone learns in a programming language. The difficulty level is low.
b. Conditional Statements: This is a basic concept, but it’s a step up from just understanding variables and data types. It requires understanding logical conditions and how to use them to control the flow of a program.
c. Loops: Understanding loops requires knowing how to control the flow of a program over multiple iterations. This is a medium difficulty level.
d. Arrays (Specifically 2-Dimensional Arrays): While basic use of arrays is often considered an early-to-mid-stage topic, understanding 2-dimensional arrays is more complex and requires being comfortable with nested loops and understanding how data is organized in a 2D space.
e. Functions: Creating and using functions requires a firm understanding of how to encapsulate blocks of code, pass parameters, and return values. It also requires understanding of scope and how data can be passed between different parts of a program.
f. Dynamic Programming: This is one of the more advanced topics in programming. It requires a deep understanding of problem-solving, logic, and being able to think about problems in a way that lets you break them down into smaller subproblems and build up solutions.
Problem-Solving Approach from Problem Statement to Solution:
a. Understand the Problem Statement and Constraints: Begin by understanding what is being asked and what the constraints are.
b. Identify the Subproblems: Recognize that the problem of finding the longest common subsequence between two sequences can be broken down into subproblems of finding the longest common subsequences between prefixes of the two sequences.
c. Define a Method for Solving the Subproblems: Decide to keep track of the lengths of the longest common subsequences for each pair of prefixes in a 2D array.
d. Implement the Subproblem Solving Method: Write a function that uses loops to iterate over the sequences and an if statement to decide whether to extend a common subsequence or not, updating the array accordingly.
e. Combine the Subproblem Solutions: After going through the entire sequences, use the array to find the length of the longest common subsequence of the entire sequences.
Targeted Drills in Python
 Variables and Data Types
Here’s a simple Python drill to help understand the usage of variables and data types:
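A minimal sketch of such a drill (all names and values are illustrative):

```python
# Variables hold values of different types.
name = "LCS"     # str
length = 3       # int
ratio = 0.75     # float
found = True     # bool

# Types can be inspected and converted.
print(type(name).__name__)   # str
print(int(ratio * 100))      # 75
```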


 Conditional Statements
This simple Python drill illustrates the use of if conditions:
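For instance, a sketch along these lines (the function name is illustrative):

```python
def compare(a, b):
    # An if statement chooses between two actions based on a condition,
    # just like the match/no-match check in the LCS code.
    if a == b:
        return "match"
    return "no match"

print(compare("a", "a"))  # match
print(compare("a", "b"))  # no match
```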


 Loops
Here's a Python drill that uses for loops to iterate over a list:
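One possible version, extended with the nested-loop pattern the LCS code relies on (the lists are illustrative):

```python
# A single loop visits each element of a list in order.
letters = ["a", "b", "c"]
for ch in letters:
    print(ch)

# Nested loops visit every pair of elements, one from each sequence,
# which is exactly the iteration pattern used to fill the LCS table.
pairs = []
for x in "ab":
    for y in "cd":
        pairs.append(x + y)
print(pairs)  # ['ac', 'ad', 'bc', 'bd']
```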


 2-Dimensional Arrays
This Python drill demonstrates the creation and manipulation of a 2D list (Python’s version of an array):
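A small sketch of such a drill:

```python
# Build a 3x4 grid of zeros; the comprehension makes each row
# an independent list.
rows, cols = 3, 4
grid = [[0] * cols for _ in range(rows)]

grid[1][2] = 7                  # set the cell at row 1, column 2
print(grid[1][2])               # 7
print(len(grid), len(grid[0]))  # 3 4  (dimensions)
```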


 Functions
A Python drill to demonstrate the creation and use of functions:
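For example (the function name is illustrative):

```python
def longer(a, b):
    """Return the longer of two strings (the first on a tie)."""
    return a if len(a) >= len(b) else b

# Calling the function with arguments and using its return value.
print(longer("cat", "horse"))  # horse
```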


 Dynamic Programming
This is a more advanced drill, as dynamic programming isn’t a specific coding construct, but an approach or technique used to solve complex problems. Here’s a simple drill using a common dynamic programming problem: Fibonacci series.
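A sketch of the drill, using the bottom-up (tabulation) style:

```python
def fib(n):
    # Bottom-up DP: solve the smallest subproblems first and
    # store each answer so it is computed only once.
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib(10))  # 55
```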


Now, the problem-specific concept: Longest Common Subsequence
The problem-specific concept for this problem is understanding how to calculate the longest common subsequence. The drill would involve coding a simple implementation of this without any optimization:
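A sketch of the unoptimized recursive implementation (exponential time; this is the version the later, optimized solution improves on):

```python
def lcs(a, b):
    # Base case: an empty string has no common subsequence with anything.
    if not a or not b:
        return 0
    # If the first characters match, they belong to the LCS.
    if a[0] == b[0]:
        return 1 + lcs(a[1:], b[1:])
    # Otherwise, drop the first character of one string or the other.
    return max(lcs(a[1:], b), lcs(a, b[1:]))

print(lcs("abcde", "ace"))  # 3
```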


Here we use recursion and the concept of subproblems to find the longest common subsequence.
Integrating these pieces:
After understanding each of these individual concepts and seeing how they are implemented in Python, you can start to see how they fit together. The function lcs() from the last drill can be thought of as a more complex function that builds on all the previous drills. It uses variables and data types to hold the sequences and their lengths, conditional statements to check if characters match or not, loops to iterate over the sequences, a 2D list to keep track of lengths of common subsequences, and the technique of dynamic programming to break the problem down into subproblems and build up the solution. Once you understand how all these parts work, you can integrate them together into the final, optimized solution.
Recursive Solution
To solve the longest common subsequence problem by dynamic programming, we first need a recursive solution. The dynamic programming idea doesn’t tell us how to find this, it just gives us a way of making the solution more efficient once we have a recursive solution.
Let’s start with some simple observations about the LCS problem. If we have two strings, say “nematode knowledge” and “empty bottle”, we can represent a subsequence as a way of writing the two so that certain letters line up:
n e m a t o d e _ k n o w l e d g e
(lines connect the matched letters: e, m, t, the space, o, l, e)
e m p t y _ b o t t l e
If we draw lines connecting the letters in the first string to the corresponding letters in the second, no two lines cross (the top and bottom endpoints occur in the same order, the order of the letters in the subsequence). Conversely any set of lines drawn like this, without crossings, represents a subsequence.
From this we can observe the following simple fact: if the two strings start with the same letter, it's always safe to choose that starting letter as the first character of the subsequence. This is because, if you have some other subsequence, represented as a collection of lines as drawn above, you can "push" the leftmost line to the start of the two strings, without causing any other crossings, and get a representation of an equally long subsequence that does start this way.
On the other hand, suppose that, as in the example above, the two first characters differ. Then it is not possible for both of them to be part of a common subsequence; one or the other (or maybe both) will have to be removed.
Finally, observe that once we’ve decided what to do with the first characters of the strings, the remaining subproblem is again a longest common subsequence problem, on two shorter strings. Therefore we can solve it recursively.
Rather than finding the subsequence itself, it turns out to be more efficient to find the length of the longest subsequence. Then in the case where the first characters differ, we can determine which subproblem gives the correct solution by solving both and taking the max of the resulting subsequence lengths. Once we turn this into a dynamic program we will see how to get the sequence itself.
These observations give us the following, very inefficient, recursive algorithm.
Recursive LCS:
int lcs_length(char * A, char * B)
{
    if (*A == '\0' || *B == '\0') return 0;
    else if (*A == *B) return 1 + lcs_length(A+1, B+1);
    else return max(lcs_length(A+1, B), lcs_length(A, B+1));
}
This is a correct solution but it's very time-consuming. For example, if the two strings have no matching characters, so the last line always gets executed, then the time bounds are binomial coefficients, which (if m=n) are close to 2^n.
Q&A
What if you want the subsequence itself, and not just its length? This is important for some but not all of the applications we mentioned. Once we have filled in the array L described above, we can find the sequence by working forwards through the array:

sequence S = empty;
i = 0; j = 0;
while (i < m && j < n) {
    if (A[i] == B[j]) { add A[i] to end of S; i++; j++; }
    else if (L[i+1,j] >= L[i,j+1]) i++;
    else j++;
}

Let's see an example of this. Here's the array for the earlier example:

  n e m a t o d e _ k n o w l e d g e
e 7 7 6 5 5 5 5 5 4 3 3 3 2 2 2 1 1 1 0
m 6 6 6 5 5 4 4 4 4 3 3 3 2 2 1 1 1 1 0
p 5 5 5 5 5 4 4 4 4 3 3 3 2 2 1 1 1 1 0
t 5 5 5 5 5 4 4 4 4 3 3 3 2 2 1 1 1 1 0
y 4 4 4 4 4 4 4 4 4 3 3 3 2 2 1 1 1 1 0
_ 4 4 4 4 4 4 4 4 4 3 3 3 2 2 1 1 1 1 0
b 3 3 3 3 3 3 3 3 3 3 3 3 2 2 1 1 1 1 0
o 3 3 3 3 3 3 3 3 3 3 3 3 2 2 1 1 1 1 0
t 3 3 3 3 3 2 2 2 2 2 2 2 2 2 1 1 1 1 0
t 3 3 3 3 3 2 2 2 2 2 2 2 2 2 1 1 1 1 0
l 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 0
e 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
(you can check that each entry is computed correctly from the entries below and to the right.)
To find the longest common subsequence, look at the first entry L[0,0]. This is 7, telling us that the sequence has seven characters. L[0,0] was computed as max(L[0,1],L[1,0]), corresponding to the subproblems formed by deleting either the “n” from the first string or the “e” from the second. Deleting the “n” gives a subsequence of length L[0,1]=7, but deleting the “e” only gives L[1,0]=6, so we can only delete the “n”. Now let’s look at the entry L[0,1] coming from this deletion. A[0]=B[1]=“e” so we can safely include this “e” as part of the subsequence, and move to L[1,2]=6. Similarly this entry gives us an “m” in our sequence. Continuing in this way (and breaking ties as in the algorithm above, by moving down instead of across) gives the common subsequence “emt ole”.
So we can find longest common subsequences in time O(mn). Actually, if you look at the matrix above, you can tell that it has a lot of structure – the numbers in the matrix form large blocks in which the value is constant, with only a small number of “corners” at which the value changes. It turns out that one can take advantage of these corners to speed up the computation. The current (theoretically) fastest algorithm for longest common subsequences runs in time O(n log s + c log log min(c,mn/c)) where c is the number of these corners, and s is the number of characters appearing in the two strings.
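The table fill and forward traceback described above can be sketched in Python (the function name is illustrative; L follows the suffix-based recursive definition used earlier, and ties are broken by keeping the first string's character when the two subproblems are equal):

```python
def lcs_string(A: str, B: str) -> str:
    m, n = len(A), len(B)
    # L[i][j] = length of the LCS of the suffixes A[i:] and B[j:]
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if A[i] == B[j]:
                L[i][j] = 1 + L[i + 1][j + 1]
            else:
                L[i][j] = max(L[i + 1][j], L[i][j + 1])
    # Work forwards through the array to recover one LCS.
    S, i, j = [], 0, 0
    while i < m and j < n:
        if A[i] == B[j]:
            S.append(A[i]); i += 1; j += 1
        elif L[i + 1][j] >= L[i][j + 1]:
            i += 1
        else:
            j += 1
    return "".join(S)

print(lcs_string("nematode knowledge", "empty bottle"))
```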
I would like to know what is the longest common subsequence between these two texts: text1 = “mhunuzqrkzsnidwbun” text2 = “szulspmhwpazoxijwbq”
I’m not understanding the problem. Because my code is returning 5 and the expected value is 6.
And what would be the difference between the example above and this: text1 = “pmjghexybyrgzczy” text2 = “hafcdqbgncrcbihkd”
My code passed the second input and showed me that the longest common subsequence is ‘hbrc’.
The Longest Common Subsequence (LCS) problem is about finding the longest subsequence common to two sequences. A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements.
Let’s see what’s happening with the examples you’ve provided:
text1 = "mhunuzqrkzsnidwbun" and text2 = "szulspmhwpazoxijwbq"
One longest common subsequence here is "mhziwb" (m, h, z, i, w, b appear in that order in both strings), which is of length 6. It is possible that your code is only finding subsequences up to length 5 or is having an off-by-one error.
text1 = "pmjghexybyrgzczy" and text2 = "hafcdqbgncrcbihkd"
The longest common subsequence here is indeed “hbrc”, which is of length 4. It seems that your code worked correctly in this case.
If you’re having trouble, I recommend debugging your code with smaller examples first. This way, you can step through your code more easily and understand what’s happening. If you’re using memoization, make sure to check that your cache is working as expected. If your code involves any loops or conditionals, ensure they’re executing in the way you intend.
Here is the Python code to get the Longest Common Subsequence between two strings:
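A sketch consistent with the description that follows (suffix-based table, answer in dp[0][0]; the function name matches the LeetCode signature and is otherwise an assumption):

```python
def longestCommonSubsequence(text1: str, text2: str) -> int:
    m, n = len(text1), len(text2)
    # dp[i][j] = length of the LCS of the suffixes text1[i:] and text2[j:]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if text1[i] == text2[j]:
                dp[i][j] = 1 + dp[i + 1][j + 1]
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    return dp[0][0]

print(longestCommonSubsequence("mhunuzqrkzsnidwbun", "szulspmhwpazoxijwbq"))  # 6
```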


This code builds a two-dimensional table dp with dimensions (m+1) x (n+1), where m and n are the lengths of text1 and text2, respectively. The entry dp[i][j] contains the length of the LCS of the suffixes text1[i:] and text2[j:]. The final answer is then stored in dp[0][0].
Remember, there may be multiple valid longest common subsequences for a given pair of strings. The above code will return one of them.
I think I did not understand the question: how do text1 = "oxcpqrsvwf" and text2 = "shmtulqrypy" have a subsequence of length 2? As per my understanding, only 's' is a common subsequence if we do not change the original sequence of both strings, and my code answers 1.
Let’s break this down. In the Longest Common Subsequence (LCS) problem, the subsequence doesn’t have to be consecutive characters but must maintain the order in the original strings. So, for your example:
text1 = “oxcpqrsvwf” text2 = “shmtulqrypy”
The LCS is “qr”, which is of length 2. Here is the visualization:
o x c p q r s v w f
s h m t u l q r y p y
(the matched letters are q and r, appearing in the same order in both strings)
As you can see, "q" and "r" appear in the same order in both strings, and hence they form a common subsequence. Other common subsequences like "s" can be found, but they're of length 1, which is less than the length of "qr".
Remember, the LCS problem does not require the subsequence to be contiguous (next to each other) in the original strings, just that they appear in the same order.
I can’t understand why I can’t solve it with two pointers intuitively
The Longest Common Subsequence (LCS) problem might seem solvable using a two-pointer approach because of its similarity to other problems that involve comparing elements in two sequences. However, there are a few reasons why the two-pointer method isn't sufficient for the LCS problem:
Non-contiguous nature of the subsequence: In the LCS problem, the subsequence doesn't have to be contiguous (i.e., next to each other) in the original strings, but the characters must appear in the same relative order. So, even if you were able to find a common subsequence with two pointers, it might not be the longest one. A two-pointer approach might miss out on a longer subsequence that's spread out more widely across the strings.
Multiple Choices: At each step, you face a choice: If the current characters in both strings are the same, they must be part of the LCS. However, if they’re not the same, you have to decide whether to move the pointer in the first string or the second string, and making that choice optimally is not obvious.
Overlapping subproblems: The LCS problem involves solving overlapping subproblems, i.e., you end up solving the same subproblems multiple times. A two-pointer approach doesn't efficiently handle these overlapping subproblems, which is why a more sophisticated method like dynamic programming is needed.
That’s why for this problem, a dynamic programming approach is more suitable as it systematically breaks down the problem into smaller, overlapping subproblems, solves each subproblem just once, and uses the results of these smaller subproblems to construct an optimal solution to the problem.
DP[i][j] = DP[i-1][j-1] + 1, if text1[i] == text2[j]
DP[i][j] = max(DP[i-1][j], DP[i][j-1]), otherwise
This is where many people struggle with DP, yet it's the most fun part of DP: figuring out the relationship between the current problem and its subproblems.
Such a hint basically gives away the answer and probably should be revised. Thoughts?
The dynamic programming (DP) approach to the Longest Common Subsequence (LCS) problem indeed involves deducing the optimal substructure property of the problem, which is the relationship between a problem and its subproblems.
This formulation:
DP[i][j] = DP[i-1][j-1] + 1, if text1[i] == text2[j]
DP[i][j] = max(DP[i-1][j], DP[i][j-1]), otherwise
encapsulates the intuition behind this property. If the current characters in both strings are the same, the length of the LCS up to these characters (DP[i][j]) is one more than the LCS of the strings up to the previous characters (DP[i-1][j-1]). If the characters are not the same, the LCS up to these characters is the maximum of the LCS if you ignore the current character in the first string (DP[i-1][j]) or ignore the current character in the second string (DP[i][j-1]).
In terms of the learning process, there are indeed two main viewpoints. One group believes that providing these relationships explicitly is a form of handholding that prevents the learner from truly understanding the problem and figuring out the solution themselves. The other group believes that these relationships can serve as helpful guidance for learners who are new to dynamic programming or who are struggling with a particular problem.
Both views have merit, and the optimal approach depends on the learner’s existing knowledge, problemsolving skills, and learning style. A balanced approach may involve trying to solve the problem on your own first, then looking at the relationships to verify your understanding or to help if you’re stuck.
The two differences are:
You need to find the maximum count of elements that can fit in the knapsack. And instead of checking whether the weight of an item is less than a given weight, we just check whether the sequence up to the current position is a subsequence or not (this is easy to do in O(n) time). Technically the time complexity is higher than the bottom-up approach, but it is more intuitive, and when doing top-down we only calculate the DP states that are actually useful, so the solution passes in under 100 ms, which is only 10x slower than the bottom-up solution.
Your comparison between the Longest Common Subsequence problem and the Knapsack problem is insightful. Indeed, both problems employ a similar dynamic programming approach, but they deal with different concepts.
The Knapsack problem is about optimizing the total value of items that fit within a given weight limit, while the Longest Common Subsequence problem involves finding the longest subsequence common to two sequences.
In the Knapsack problem, we consider whether or not to include an item based on its weight and value. In the Longest Common Subsequence problem, we consider whether a character matches in both sequences.
About the time complexity, the traditional LCS problem’s dynamic programming solution has a time complexity of O(nm), where n and m are the lengths of the input sequences. This is because we have to fill a 2D array of size nm, with each cell taking constant time to compute.
In terms of approach, both bottom-up and top-down strategies have their advantages. The bottom-up approach usually provides a more straightforward path to an optimized solution, while the top-down approach (also known as memoization) can potentially avoid computing some states, saving time if only a portion of the problem space needs to be explored.
While the top-down approach might be more intuitive for some people and can indeed avoid computing certain states, this doesn't usually result in a significant time complexity reduction for the LCS problem, because most of the states in the DP table are typically required to find the solution. Thus, both approaches should generally have similar performance for this problem.
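For comparison, a top-down memoized sketch in Python (the function names are illustrative):

```python
from functools import lru_cache

def lcs_top_down(text1: str, text2: str) -> int:
    @lru_cache(maxsize=None)
    def go(i: int, j: int) -> int:
        # Base case: one string is exhausted.
        if i == len(text1) or j == len(text2):
            return 0
        if text1[i] == text2[j]:
            return 1 + go(i + 1, j + 1)
        # Skip one character from either string; the cache ensures
        # each (i, j) state is computed only once.
        return max(go(i + 1, j), go(i, j + 1))
    return go(0, 0)

print(lcs_top_down("abcde", "ace"))  # 3
```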
Can anyone please tell me what is wrong in this code? If I replace m by n and n by m I get the correct answer, but why does this code not work as written?


The issue in your code seems to be related to how you've defined your 2D array, t. You are defining t as t[m+1][n+1], where m is the size of text2 and n is the size of text1. However, in your loops, you are treating i as the index for text1 and j as the index for text2, which would be correct if t were defined as t[n+1][m+1]. This mismatch could cause unexpected results.
Here is the corrected code:


In this corrected code, i and j iterate over text1 and text2 respectively, which aligns with the dimensions of t. Always remember to maintain consistency in dimensionality when working with multidimensional arrays.
In the tabulation approach, why is everyone using index shifting for the loop iteration, i.e., 1 to n instead of 0 to n-1? I tried tabulation with 0 to n-1 but it gives a wrong answer.
In the tabulation approach for dynamic programming, the indices often start from 1 instead of 0 because the 0th row and the 0th column are usually used to set up base cases.
The indices are shifted because the value at dp[i][j] often depends on the values at dp[i-1][j], dp[i][j-1], dp[i-1][j-1], etc. If we started from index 0 (i.e., if i and j started from 0), then when we tried to access dp[i-1][j], dp[i][j-1], or dp[i-1][j-1], we would be trying to access dp[-1][j], dp[i][-1], and dp[-1][-1], respectively. Accessing negative indices could lead to accessing unintended memory and result in undefined behavior.
By using 1-based indexing (i.e., starting from 1), we can easily use dp[i-1][j], dp[i][j-1], and dp[i-1][j-1] in our formulas without going out of bounds. The 0th row and the 0th column are usually filled with initial or base values, such as 0 in many cases, to ensure that the rest of the table can be filled correctly. This is particularly common in problems dealing with strings or arrays where having a 0-length prefix is a valid scenario that needs to be handled.
When you try to implement it with 0 to n-1, you need to be careful about boundary conditions and ensure you're not accessing memory that's out of bounds or hasn't been defined. Adjustments to the base case setup and the dynamic programming transitions might be needed.
class Solution {
int solve(String text1, String text2, int i, int j, int[][] dp){
if (i == text1.length() || j == text2.length()) return 0;
else if(dp[i][j] != 0) return dp[i][j];
if(text1.charAt(i) == text2.charAt(j))
dp[i][j] = 1 + solve(text1, text2, i+1, j+1, dp);
else
dp[i][j] = Math.max(solve(text1, text2, i+1, j, dp),solve(text1, text2, i, j+1, dp) );
return dp[i][j];
}
public int longestCommonSubsequence(String text1, String text2) {
int[][] dp = new int [text1.length()+1][text2.length()+1];
return solve(text1, text2, 0, 0, dp);
}
}
I don't know why my code doesn't work now… earlier it was working fine. Please help!
It appears that you’re trying to solve the longest common subsequence problem using a recursive approach with memoization, which is a good strategy.
Your code looks mostly correct, and it should work assuming there’s no input error or environmentspecific problem. Here are some things you can check to troubleshoot your issue:
Inputs: Make sure the inputs text1 and text2 are valid strings. They should not be null, and they should only contain the characters you expect. Test your code with some known inputs and outputs to ensure there's no problem with how you're receiving or handling the input data.
Bounds Checking: It seems like your bounds checks are correct. You are stopping your recursion when either i or j reaches the length of its respective string.
Memoization: Your memoization approach looks correct as well. You are storing and retrieving values from your dp array as expected.
Initialization of the DP array: Your dp array is initialized correctly. It has a size of text1.length() + 1 by text2.length() + 1, and it's filled with zeros, which is the correct base value.
Environment-Specific Issue: Sometimes the problem is not with the code itself, but with the environment in which you're running it. If your code was working earlier and now it's not, this could be the issue. Try restarting your development environment or running the code on a different system or online compiler.
If none of the above helps, it would be useful to know more specifics about what’s not working. Are you getting incorrect results? If so, what inputs are causing the problem? Are you encountering any error messages? The more details you can provide, the better we can help troubleshoot the problem.
Can anyone explain why this has lower efficiency? The editorial proves that this runs in O(M*N), just like the DP algorithm. Then why does this recursion + memoization technique beat only 14%?
While both the dynamic programming (DP) approach and the recursion with memoization approach have a time complexity of O(M*N), they differ in other performance-related aspects.
Overhead of Recursion: Recursive calls come with some overhead. Each recursive call adds a new layer to the call stack. The function call mechanism in most programming languages involves overhead for context switching, passing parameters, and sometimes even for just jumping to a new point of execution.
Cache Efficiency: DP, especially bottomup DP, usually has better locality of reference because it iterates over the data in a predictable pattern, often sequentially. This can make better use of the CPU cache, leading to faster execution times. On the other hand, recursive solutions may not access memory in a predictable manner, leading to more cache misses and potentially slower execution times.
Implementation Differences: Finally, the efficiency could be a result of implementation differences. Certain ways of implementing the same algorithm can be faster than others due to factors such as how the code interacts with the specific language interpreter or compiler, or the specific data structures used.
It’s also worth noting that the percentage you see on LeetCode reflects the running time of your solution compared to all other submitted solutions, and it’s not always an accurate measure of the efficiency of your algorithm. Some solutions might use clever tricks or languagespecific features to achieve faster running times.
In Python, dp = [[None] * (m + 1) for i in range(n + 1)] is accepted, but dp = [[None] * (m + 1)] * (n + 1) is not accepted for all answers. Can anyone tell me the reason?
In Python, using the multiplication operation for lists does not create distinct lists, but instead creates multiple references to the same list. Here’s a breakdown of what each line does:
dp = [[None] * (m + 1) for _ in range(n + 1)] uses a list comprehension to create a new list for each row. The (m + 1) elements in each row are distinct and independent from those in other rows.
dp = [[None] * (m + 1)] * (n + 1) first creates a single list of (m + 1) elements, and then creates (n + 1) references to that list. All rows in dp are actually the same list. Therefore, changing an element in one row will affect all rows.
The first one is the correct way to initialize a 2D list in Python because it ensures that each row is a unique list. The second one, on the other hand, could lead to unexpected results due to all rows referring to the same list.
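A quick demonstration of the difference (m and n are arbitrary small sizes here):

```python
n, m = 2, 3

# Multiplication copies the reference: all rows are the same list.
shared = [[None] * (m + 1)] * (n + 1)
shared[0][0] = "x"
print(shared[1][0])  # x  (every row changed)

# A list comprehension builds a fresh list for each row.
independent = [[None] * (m + 1) for _ in range(n + 1)]
independent[0][0] = "x"
print(independent[1][0])  # None  (only row 0 changed)
```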
Can anyone suggest what's wrong with this code? The same code got submitted on GFG, but here it gives TLE on a test case after passing 46/46 test cases.


Based on your code and description, it seems you are solving the problem with the right approach: using recursion and memoization.
However, the issue lies with the way you are defining your DP table size. It should be defined as vector<vector<int>> dp(n+1, vector<int>(m+1, -1)), because while comparing strings, we need to account for an extra position for the base case of an empty string.
In your recursive function calls, you are checking if i == n || j == m and returning 0, which means you are checking up to n and m. However, your DP table only has size n by m; it should actually be n+1 by m+1 in size. When i or j equals n or m, the code tries to access an out-of-bounds index of the DP table, and this might cause undefined behavior or an error.
Here’s the corrected DP table definition and function:


Why is it giving a TLE error?


A TLE error in this case typically means that your program is trying to do too much work and is not efficient enough. Here, it's likely that the inefficiency comes from the fact that you're passing the strings s1 and s2 by value into the recursive function sol.
When you pass a string by value, the entire string gets copied for each recursive call. As the size of the strings increases, this results in a lot of extra work, leading to a TLE error. This issue can be fixed by passing the strings by reference, which simply passes a pointer to the original string instead of making a full copy.
Here’s the corrected function:


This version of the function passes s1 and s2 as const references, which should improve the efficiency of your code and prevent the TLE error.
Why does my code not work when I initialize my DP matrix like this: mat = [[0] * (N2 + 1)] * (N1 + 1), but it does work when I initialize it like this: mat = [[0 for j in range(N2 + 1)] for i in range(N1 + 1)]? N1 is len(text1) and N2 is len(text2).
In Python, lists are mutable objects. That means when you do mat = [[0] * (N2 + 1)] * (N1 + 1), you're actually creating N1 + 1 references to the same list [0] * (N2 + 1). This means that if you change one row, it'll affect all the other rows, because they're all pointing to the same list.
On the other hand, when you do mat = [[0 for j in range(N2 + 1)] for i in range(N1 + 1)], you are creating a new list for each i in the range N1 + 1. So every row in mat is a separate list, and modifications to one row do not affect the others.
This is why the first way of initializing doesn’t work as expected for dynamic programming problems where you’re changing individual elements in the matrix, but the second way does.
Can anybody tell me why we declare an extra row and an extra column in the DP table? The solution without those extra rows and columns gives me a wrong answer. Thanks in advance.
Adding an extra row and column initialized with zeros in the DP matrix is a common practice in dynamic programming. This is particularly useful in problems where the DP state dp[i][j] depends on dp[i-1][j], dp[i][j-1], dp[i-1][j-1], or similar.
The extra row and column essentially act as boundary conditions or base cases that simplify the code. If you don't include them, you'd need to add extra condition checks in your DP update step to handle cases where i-1 or j-1 might be less than 0, which would make the code more complex and potentially error-prone.
For the longest common subsequence problem, the base case is that an empty string has a common subsequence of length 0 with any other string. The extra row and column represent these cases.
So, not including these extra rows and columns in your solution would mean your base case is not handled correctly, which could lead to incorrect results.
My memoization failed at the last test case, which was 1000 c's and 1000 b's.


Your approach seems correct, and it looks like a typical recursive memoization solution to the longest common subsequence problem. However, the problem that you’re facing might be due to Java’s default stack size, which can result in a StackOverflowError for large input cases in deep recursive calls.
To overcome this issue, you can refactor your recursive solution into an iterative one with tabulation (bottom-up DP). Iterative solutions don't suffer from the stack-size limitation because they don't make recursive calls.
Here’s a sample code using the bottomup dynamic programming approach:
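The sample code itself was not preserved in this transcript. Since the asker's solution was in Java, the sketch below is only a Python rendering of the same bottom-up idea; it avoids deep recursion entirely, so large inputs like 1000-character strings pose no stack-depth problem:

```python
def lcs_bottom_up(text1: str, text2: str) -> int:
    """Iterative LCS: dp[i][j] = LCS length of the suffixes text1[i:] and text2[j:]."""
    n1, n2 = len(text1), len(text2)
    dp = [[0] * (n2 + 1) for _ in range(n1 + 1)]  # extra row/column = empty-suffix base case
    for i in range(n1 - 1, -1, -1):
        for j in range(n2 - 1, -1, -1):
            if text1[i] == text2[j]:
                dp[i][j] = 1 + dp[i + 1][j + 1]
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    return dp[0][0]
```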


This approach builds up the DP table iteratively and doesn’t rely on recursion. It should pass all the test cases, including the large inputs.
Please, can anyone explain what's wrong in my solution? The test case s1 = "abc", s2 = "def" is not passing.


The issue with your solution is likely due to a typographical error in your recursive calls inside the `max()` function. It appears that you're comparing `s1` with itself in the recursive calls rather than comparing `s1` with `s2`.
Here’s the corrected function call:


So, your correct code should look like:


This should fix the problem you’re facing with your test case.
One of the test cases is "ezupkr", "ubmrapg"; as we can see the common subsequence is 'upr' and the answer should be 3, but in the test cases I get 2.
Actually, the expected answer of 2 is correct here. 'upr' is not a valid common subsequence: a subsequence must preserve the relative order of characters, and in "ubmrapg" the 'r' (index 3) appears before the 'p' (index 5), so 'upr' cannot be picked out of it in order. The longest common subsequences of "ezupkr" and "ubmrapg" have length 2, for example "up" or "ur".
So if your solution returns 2 for this case, it is behaving correctly; the misunderstanding is in the expected result, not in your code. Running a known-correct dynamic programming implementation is a good way to double-check such cases.
Here’s a Python solution using dynamic programming to find the longest common subsequence for the given strings:
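The original code block was not preserved here, but the description that follows matches the standard bottom-up implementation, which can be sketched as below (it returns 2 for "ezupkr" and "ubmrapg", confirming the judge's expected answer):

```python
def longest_common_subsequence(text1: str, text2: str) -> int:
    m, n = len(text1), len(text2)
    # dp[i][j] = LCS length of the prefixes text1[:i] and text2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i - 1] == text2[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```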


This code creates a 2D DP table where `dp[i][j]` holds the length of the LCS between `text1[0...i-1]` and `text2[0...j-1]`. The table is filled in a bottom-up manner. When the characters at index `i-1` in `text1` and `j-1` in `text2` match, `1 + dp[i-1][j-1]` is taken; otherwise the maximum of `dp[i-1][j]` and `dp[i][j-1]` is taken.
Looking at the problem, we can think of comparing the indexes of each string and repeating the process, which is intuitively recursion. Since there will be 2^n possibilities and the complexity grows exponentially, we go for a DP approach where we create a DP vector and perform the operations accordingly; but the space comes out to be O(N*M), hence to cut it down we use a space-optimized tabulation process.
That’s a great explanation of the thought process behind solving this problem!
To clarify a little bit:
Recursion: A recursive approach to the longest common subsequence problem would involve comparing characters from each string one at a time. This can lead to many repeated computations due to the overlap of subproblems, which is common in many recursive solutions. Therefore, a naive recursive solution would indeed have a time complexity of O(2^n), where n is the length of the strings.
Dynamic Programming (DP): DP is a strategy for optimizing recursion by storing the results of subproblems to avoid repeated computation. In this case, a 2D DP array can be used where DP[i][j] represents the length of the longest common subsequence of the substrings up to the ith character of the first string and the jth character of the second string. This brings down the time complexity to O(n*m), where n and m are the lengths of the two strings.
Tabulation (Bottom-Up DP): Tabulation is a specific form of dynamic programming where we start with the smallest subproblems and iteratively solve larger subproblems using the results of the smaller ones. In this problem, we start filling the DP table from the top left (DP[0][0]) and gradually fill the rest of the table using the relationships between each cell and its neighboring cells until we reach DP[n][m]. This approach maintains the time complexity of O(n*m) but provides a more systematic way to fill the DP table, which can be more intuitive for some people.


What is the error in this code?
Your code appears to be correct, but you didn't initialize your DP table `arr` with -1 values to indicate that those cells have not yet been computed. The Java `new` keyword initializes an `int` array with 0 by default, not -1. Therefore, when your `helper` method checks whether `arr[n1][n2] != -1`, the check always passes, so it returns the default value 0 instead of ever computing the real value for that cell.
Here is how you can correct this:


Now your `helper` method will correctly compute the longest common subsequence for each pair of indices `(n1, n2)`.
Can anybody guess why I am getting TLE with this code, although it gives the correct answer?


Your approach is correct. However, the problem is likely with the time complexity of your function due to multiple function calls for the same subproblems, leading to excessive time consumption.
To reduce the time complexity and avoid Time Limit Exceeded (TLE), you should avoid computing the same subproblem multiple times (redundant recursive calls). Instead, you should store the result of each subproblem so that when it is needed again, you can directly return the result from memory, which is known as memoization.
You've already initialized your memoization matrix `arr`, but you're not using it properly. Before making recursive calls, check whether the subproblem is already solved (i.e., `arr[l1][l2]` is not -1) and return the saved result. Only compute the result when it has not been computed before.
Here is a revised version of your `lcs` function:


This modification should help to reduce the time complexity of your code and help avoid the TLE issue.
Why does this solution create this error? ==31==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000000b8 at pc 0x000000344e3a bp 0x7ffd6caeb7d0 sp Shadow gap: cc ==31==ABORTING


The error you’re facing is related to accessing memory beyond the allocated range. This is known as a buffer overflow. The error message shows that your program is trying to read from an address that has not been allocated.
The problem lies in the initialization of your 2D vector (dp) which you are using as a memoization table.
In your line of code:


You are creating a 2D vector with size `n1 x (n2-1)`, but in your recursive function you are trying to access indices up to `n1-1` and `n2-1`. Since the column size of your vector is `n2-1`, its valid column indices run from `0` to `n2-2`; accessing index `n2-1` results in a buffer overflow.
To fix the issue, you should allocate the vector with sizes `n1` and `n2`, like this:


Here we are also initializing each element in the matrix to -1. This ensures that the dp matrix has a size of `n1 x n2` and all cells are initialized to -1, which means "not yet computed". Now you should be able to access indices up to `n1-1` and `n2-1` without causing a buffer overflow.


Can anybody tell me what the problem is here? It is giving a runtime error.
The issue with your code lies in the arguments of the `lcs` call in the `longestCommonSubsequence` function. You have passed `n` and `m` as the arguments for `i` and `j`.
In your `lcs` function, `i` and `j` are used as indexes into the strings `text1` and `text2`. Because string indexing is 0-based, the valid indices for a string of length `n` are from `0` to `n-1` inclusive. Therefore, when you pass `n` and `m` (the lengths of `text1` and `text2`), it results in out-of-bounds access in the first iteration of `lcs` when it tries to access `text1[i]` or `text2[j]`.
Here’s how you can fix it:


This adjustment will ensure that your `lcs` function starts comparing the characters at the last index of each string, avoiding the out-of-bounds error.


Why is this code getting "Time Limit Exceeded"? I could not understand it.
Test case: text1 = "pmjghexybyrgzczy", text2 = "hafcdqbgncrcbihkd"
The code you provided implements a recursive solution for the Longest Common Subsequence problem. While this approach is logically correct, it has a significant drawback in terms of efficiency because it includes repetitive computations. Specifically, the code will recompute the Longest Common Subsequence (LCS) for the same pair of indices (i, j) many times. This leads to an exponential time complexity, causing your code to exceed the time limit for larger inputs.
This inefficiency can be remedied by using a technique called memoization, where you store the results of expensive function calls and reuse them when the same inputs occur again. In the context of this problem, you can use a 2D array to store the LCS for every pair of indices (i, j). Here’s how you can modify your code to use memoization:
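The modified code was not preserved in this transcript; the following is only a Python sketch of the memoized pattern described below, where the `memo` array holds -1 for states that have not yet been computed (function and variable names are illustrative):

```python
def longest_common_subsequence(text1: str, text2: str) -> int:
    n, m = len(text1), len(text2)
    # memo[i][j] caches the LCS of text1[i:] and text2[j:]; -1 means "not computed yet".
    memo = [[-1] * m for _ in range(n)]

    def lcs(i: int, j: int) -> int:
        if i == n or j == m:        # ran off the end of either string
            return 0
        if memo[i][j] != -1:        # reuse a stored answer instead of recursing again
            return memo[i][j]
        if text1[i] == text2[j]:
            memo[i][j] = 1 + lcs(i + 1, j + 1)
        else:
            memo[i][j] = max(lcs(i + 1, j), lcs(i, j + 1))
        return memo[i][j]

    return lcs(0, 0)
```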


In this code, `memo` is a 2D array that stores the LCS for each pair of indices (i, j). Before we compute the LCS, we first check whether we have already computed it. If so, we simply return the stored value. This change significantly improves the efficiency of the code by avoiding repetitive computations.
Is there a way to recover the longest common subsequence without using m*n space for the DP table?
Yes, there is an approach to the Longest Common Subsequence (LCS) problem that doesn’t require m*n space, where m and n are the lengths of the two input strings. This approach uses an optimized version of the dynamic programming (DP) solution, which requires only linear space.
In the typical DP solution to the LCS problem, we use a 2D DP table with dimensions (m+1)x(n+1). However, if we look closer, to fill each cell in the DP table, we only need data from the current and previous rows. This leads to an optimization where we only keep track of two rows (or columns, depending on how you implement it) at a time: the current row and the previous row. This reduces the space complexity from O(m*n) to O(min(m, n)).
Here’s an example of how you can implement this:
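The example code is missing from this transcript; the sketch below is a Python reconstruction consistent with the description that follows, using the `dp` and `newDp` names (it also assumes we orient the rows along the shorter string to get the O(min(m, n)) bound):

```python
def lcs_two_rows(text1: str, text2: str) -> int:
    # Orient the row along the shorter string so both arrays have minimal length.
    if len(text2) > len(text1):
        text1, text2 = text2, text1
    dp = [0] * (len(text2) + 1)               # the "previous" row
    for i in range(1, len(text1) + 1):
        newDp = [0] * (len(text2) + 1)        # the row currently being filled
        for j in range(1, len(text2) + 1):
            if text1[i - 1] == text2[j - 1]:
                newDp[j] = 1 + dp[j - 1]      # diagonal value from the previous row
            else:
                newDp[j] = max(dp[j], newDp[j - 1])
        dp = newDp                            # the new row becomes the previous row
    return dp[-1]
```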


In this code, `dp` is a 1D array representing the current row in the DP table, and `newDp` represents the next row. We update `newDp` based on the values in `dp` (the current row) and the previous cell in `newDp`. Finally, we set `dp` to `newDp` to move on to the next row. Note that this approach returns the length of the LCS, not the LCS itself.
Reconstructing the LCS with this approach is more complicated than with the traditional DP approach, because we’re not storing all the DP states. If you need to recover the LCS itself, it might be better to use the traditional approach, even though it uses more space.
Sure! In a normal dynamic programming approach, you create a 2D matrix that can look something like this (m x n):
If you have two strings, each of length 4, you end up with a 4x4 matrix, giving you 16 total cells.
However, with the optimized space approach, you only need to keep track of the current row and the previous row (or column), meaning you’d only have something like this:
With the same two strings of length 4, you end up with a matrix of size 2x4, giving you 8 total cells, which is half of what you had before. This space saving can be significant for very long strings.
It’s important to note that in both cases, each cell represents a state in our dynamic programming problem. The key observation here is that for a given cell, we only need information from its immediate top and left cells, meaning we only need to store the current and previous rows (or columns) at any point in time. This is the basis of the space optimization.
What’s wrong with this code?!


It seems like there is a small issue in your code. You’re defining your solve() function above the class definition, which won’t work since C++ does not support nested functions. You should move the solve() function inside your Solution class.
Additionally, in your solve() function declaration, you have two extra parameters n and m, but they are not being used within the function. It seems like you’re using them to track the original size of the strings, but you can get the same information from the text1 and text2 strings themselves.
Here’s how your code might look after those modifications:


Please note that if you are doing the check `if (n == 0 || m == 0) return 0;` before calling the `solve()` function, then you don't need to check for `if (i <= 0 || j <= 0)` inside the `solve()` function, as `n` and `m` will always be greater than 0 when `solve()` is called.


Why does this code give me TLE?
The code you've provided is a top-down (recursive) approach to the Longest Common Subsequence problem using memoization. It should not lead to Time Limit Exceeded (TLE) for most inputs.
However, recursive solutions with a high depth of recursion can lead to TLE due to function-call overhead and other factors. In your case, it is also possible that the problem is with the specific input you're testing: if the input strings are very long, even an optimized solution might not run within the given time limit. Remember that the worst-case time complexity of this approach is O(n*m), where n and m are the lengths of the input strings.
One more potential issue in your code could be the initialization of your dp array. It is initialized with size `n1` by `n2+1`, which should instead be `n1+1` by `n2+1` if you're using 1-based indexing for your strings in the dp array. Please make sure your dp array initialization aligns with how it is used in the code.
Additionally, it is worth noting that you should always check your code with a variety of test cases, not just the longest or hardest ones. There might be a corner case or an unexpected type of input that’s causing the problem.
If all else fails, consider using the bottom-up (iterative) approach to dynamic programming, which generally runs faster in practice due to the lack of function-call overhead.
I don't think we need 2 arrays. For the space optimization we can just use a single temporary variable.
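The poster's code is missing from this transcript; the sketch below is a Python reconstruction of the idea being discussed, using a single row plus one temporary variable (`prevBest` is the name used in the reply):

```python
def lcs_single_row(text1: str, text2: str) -> int:
    # One row plus a single temporary: prevBest plays the role of dp[i-1][j-1].
    dp = [0] * (len(text2) + 1)
    for i in range(1, len(text1) + 1):
        prevBest = 0                      # dp[i-1][j-1] for j = 1
        for j in range(1, len(text2) + 1):
            cur = dp[j]                   # save dp[i-1][j] before overwriting it
            if text1[i - 1] == text2[j - 1]:
                dp[j] = 1 + prevBest
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prevBest = cur                # becomes dp[i-1][j-1] for the next j
    return dp[-1]
```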


Your solution appears to be a valid and efficient approach to solving the longest common subsequence problem.
The use of a one-dimensional array, `dp`, reduces the space complexity from O(n*m) to linear in the length of one of the strings. This is achieved by maintaining and updating only the current and previous values needed to perform the calculations, rather than keeping the entire matrix.
In each iteration, the `prevBest` variable holds the diagonal value (the top-left neighbour of the current cell in a full 2D dp table), which is needed when the characters at the current indices of both strings are the same. `prevBest` is refreshed with the cell's old value just before that cell is overwritten, thereby preserving the diagonal relationship.
Overall, this approach saves space without sacrificing the time complexity, which remains O(n*m). This is an excellent optimization for this problem, especially when dealing with large strings. Well done!
I don’t understand why we need to iterate in reverse?
When solving the longest common subsequence problem with a bottom-up dynamic programming approach and a full table, the cell `dp[i][j]` is computed from already-filled neighbours. In the prefix formulation, `dp[i][j]` depends on `dp[i-1][j]`, `dp[i][j-1]`, and `dp[i-1][j-1]`, so we iterate from low indices to high; in the suffix formulation used here, `dp[i][j]` depends on `dp[i+1][j]`, `dp[i][j+1]`, and `dp[i+1][j+1]`, so we iterate from high indices to low. Either way, the dependencies must be computed before the cell that uses them.
When optimizing the space down to one or two rows (or columns) of the DP table, the direction of iteration has to match the formulation. If we iterated in the wrong direction, we would overwrite cells in our compact DP array before we had the chance to use their original values to compute other cells.
With the suffix formulation, iterating from right to left guarantees that when computing `dp[i]`, the values we need (`dp[i+1]` from the current pass and `dp[i]` left over from the previous pass) still hold their correct, not-yet-overwritten values. This works precisely because, in this formulation, `dp[i][j]` only depends on cells that are to its right or below it in the DP table.
The most difficult part of DP is exactly identifying that a problem HAS TO be solved using DP, the implementation itself usually is not that hard though (perhaps at least true for all medium problems).
And I am usually already exhausted after exploring different possibilities before even thinking about DP.
You’re absolutely right! One of the main challenges with dynamic programming (DP) is identifying when to use it. DP is a powerful tool and can be applied to a broad range of problems, but recognizing that a problem is a good fit for DP can be challenging, especially in complex situations.
Here are a few clues that suggest a problem might be a good candidate for DP:
Overlapping subproblems: DP is all about solving each subproblem once and then using the answer for that subproblem to build up solutions to larger problems. If a problem can be broken down into smaller problems that are solved multiple times, DP might be a good approach.
Optimal substructure: If an optimal solution to the problem can be constructed efficiently from optimal solutions of its subproblems, then DP could be a good choice. This property is key for problems like the shortest path problem, where the shortest path to a point is composed of the shortest paths to some other points.
Ordering: Many DP problems involve making a series of choices, decisions, or steps. There’s often an order to these steps that allows us to solve the problem incrementally, reusing previous work. For example, the classic DP problems of Knapsack, LCS (Longest Common Subsequence), and LIS (Longest Increasing Subsequence) all involve sequences or orderings of decisions.
Tradeoffs: Some DP problems involve making tradeoffs to optimize a particular quantity. This is the case in problems like the knapsack problem, where you’re trying to maximize the total value of items you can carry, subject to a weight constraint.
So, when you’re faced with a problem, consider these properties. With practice, identifying DP problems will become easier over time. The more problems you solve using DP, the better you will get at recognizing these patterns.
Python 1d:
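The solution itself was not preserved here; the following is a reconstruction consistent with the breakdown in the reply below (`dp` has length `len(s2) + 1`, and `prev` carries the diagonal value from the traditional 2D table):

```python
def lcs_1d(s1: str, s2: str) -> int:
    dp = [0] * (len(s2) + 1)
    for i in range(1, len(s1) + 1):
        prev = 0                      # dp value of the upper-left cell in a 2D table
        for j in range(1, len(s2) + 1):
            tmp = dp[j]               # save the old dp[j] before overwriting it
            if s1[i - 1] == s2[j - 1]:
                dp[j] = 1 + prev
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prev = tmp                # becomes the diagonal value for the next j
    return dp[-1]
```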


Your Python solution is correct! It finds the longest common subsequence (LCS) between two strings using a single-dimensional dynamic programming array. This solution has a time complexity of O(m*n) and a space complexity of O(min(m, n)), where m and n are the lengths of the two input strings.
Here is a breakdown of how it works:
Create a dynamic programming (dp) array with the length of the second string plus 1. Initially, all elements are 0, which corresponds to comparing the first string with an empty string.
Iterate through the first string (s1) starting from index 1, and for each character, compare it with every character in the second string (s2).
If the characters match, then the new dp value at that index (j) is 1 plus the previous dp value (before the update in the current iteration). If the characters do not match, then the new dp value is the maximum of the dp value at the previous index (dp[j-1]) and the current dp value (the value before the update in the current iteration).
Store the current dp value before it is updated in a temporary variable (prev) for use in the next iteration. This value represents the dp value in the upper left cell in a traditional 2D dp matrix for LCS.
After the iterations, the last element in the dp array represents the length of the longest common subsequence.
Remember to be careful with the indices, because Python arrays are 0-indexed.
I don't understand why the complexity of the memoization version is M*N*N. I totally don't see it. Can anyone explain how you would figure out the complexity for this problem if you get this question in an interview? Thank you <3
If you're referring to the memoization approach to solving the Longest Common Subsequence problem, the time complexity is generally O(M*N), not O(M*N*N), where M and N are the lengths of the two strings.
Here’s why:
In this approach, you typically use a 2D array for memoization. Each cell in the 2D array corresponds to a subproblem: in this case, the subproblem of finding the LCS of a pair of prefixes of the two strings. There are on the order of M*N such subproblems in total, because the first string's prefix can have any length from 0 to M and the second string's prefix any length from 0 to N. Since you're using memoization, each of these subproblems is solved only once, so the overall time complexity is proportional to the number of subproblems, which is O(M*N).
If you find yourself needing to determine the complexity of a memoization solution in an interview, the key thing to consider is the number of unique subproblems you need to solve, multiplied by the work done per subproblem. In many DP problems, including this one, the work per subproblem is constant time, so it does not add to the overall complexity. If, however, the solution does linear work inside each subproblem (for example, scanning ahead to find the next occurrence of a matching character, as one variant of the memoized solution does), that extra O(N) factor multiplies in, which is where an O(M*N*N) bound can come from.
So, in this case, I believe there might have been a mistake or misunderstanding with the O(M*N*N) complexity. The complexity of the memoization version of this problem should be O(M*N).
Just a quick question: why do we have to go backward in the DP solution? I think going forward works as well, though.
It’s true that you can solve the Longest Common Subsequence (LCS) problem by going forward or backward in the dynamic programming (DP) solution. The direction of iteration in a dynamic programming solution often depends on the problem’s specific requirements and the dependencies between subproblems.
For LCS, if we define `dp[i][j]` as the length of the LCS for the first `i` characters of string1 and the first `j` characters of string2, we can fill in our DP array going forward. This is because `dp[i][j]` depends on `dp[i-1][j-1]`, `dp[i-1][j]`, and `dp[i][j-1]`, all of which are calculated before `dp[i][j]` if we iterate over `i` and `j` from low to high.
In some problems the situation is different: `dp[i][j]` might depend on future states (`dp[i+1][j]`, `dp[i][j+1]`, etc.). In those cases you would need to fill in the DP array backward, i.e., starting from high `i` and `j` and going down to low `i` and `j`.
In summary, both forward and backward iteration can work for the LCS problem; the choice between the two usually depends on the specific problem and how the DP states are defined.
You can also filter the strings based on common characters before you start solving.
Yes, you’re correct. Preprocessing the strings to remove characters that aren’t present in both can be a helpful optimization for the Longest Common Subsequence (LCS) problem, particularly when the strings are large and contain many distinct characters.
Here’s a rough sketch of how you might do that in Python:
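Reconstructing that sketch (the function name is illustrative; it uses counting, as the description below mentions):

```python
from collections import Counter

def filter_common_chars(s1: str, s2: str):
    # Count character occurrences, then keep only characters present in BOTH strings.
    count1, count2 = Counter(s1), Counter(s2)
    filtered1 = "".join(c for c in s1 if count2[c] > 0)
    filtered2 = "".join(c for c in s2 if count1[c] > 0)
    return filtered1, filtered2
```

For example, `filter_common_chars("ezupkr", "ubmrapg")` returns `("upr", "urp")`, shrinking a 6x7 problem down to a 3x3 one.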


This function first counts the occurrence of each character in the two strings, then creates two new strings that only contain characters present in both original strings.
This preprocessing step can reduce the size of the problem and speed up the dynamic programming algorithm for finding the LCS. However, keep in mind that the time complexity for this filtering step is O(n + m), where n and m are the lengths of the two strings. So, this optimization is most beneficial when the time saved during the dynamic programming step is greater than the time spent on filtering.
Could someone explain how they come up with the double for loop:
for (int col = text2.length() - 1; col >= 0; col--) for (int row = text1.length() - 1; row >= 0; row--)
Instead I would run:
for (int col = 0; col <= text2.length() - 1; col++) for (int row = 0; row <= text1.length() - 1; row++)
Great question! Both versions you provided will technically work and achieve the same result. The reason we often see the iteration done in a reverse manner (from the end of the string to the beginning) in dynamic programming solutions is because of the dependency direction.
In the Longest Common Subsequence (LCS) problem, we use a 2D dp array to store the results of subproblems. In the reference solution, the value of `dp[i][j]` (where `i` is an index in `text1` and `j` is an index in `text2`) depends on the values of `dp[i+1][j]`, `dp[i][j+1]`, and `dp[i+1][j+1]`.
If we iterated from `0` to `text1.length() - 1` and `0` to `text2.length() - 1` with that definition, the subproblems we need (`dp[i+1][j]`, `dp[i][j+1]`, `dp[i+1][j+1]`) would not have been solved yet, because we would be moving from the top-left to the bottom-right of the dp array. This is why the solution iterates in reverse order: it ensures that the subproblems we need have already been solved by the time we need them.
However, if you want to iterate from `0` to `text1.length() - 1` and `0` to `text2.length() - 1`, you can: redefine the table over prefixes instead of suffixes, initialize your dp array with one extra row and one extra column filled with zeroes, and tweak the update to use `dp[i-1][j-1]`-style dependencies. That version is equally correct; the reverse-order iteration is simply what matches the suffix definition used in the reference solution.
Why is the second tabulation approach better than the first tabulation approach in terms of space? While the second tabulation approach doesn't require us to make a 2D array of size MxN, it still has us making a 1D array of size N, plus a new 1D array of size N on every loop iteration. Wouldn't that add up to be the same?
The second tabulation approach you’re referring to probably involves only keeping two rows (or columns, depending on the orientation) of the 2D table at any given time, as these are the only values needed to compute the next row. Specifically, this strategy involves using an old row and a new row during computation.
The key advantage to this method is that it only requires O(N) extra space where N is the length of one of the strings (as opposed to O(M*N) space where M and N are the lengths of the two strings). This is because we’re not storing the entire 2D table in memory at once  we only keep two 1D arrays (the current row and the previous row). So even though we’re creating a new array in each iteration, we’re discarding the old array at the same time, so the space usage remains O(N).
In terms of space complexity, we consider the maximum space required by the algorithm at any point in time. Even though we create and discard many arrays over the course of the algorithm, at any given moment, the algorithm only needs space for two arrays of length N. Hence, the space complexity is O(N).
In contrast, if we were to use a 2D table, the space complexity would be O(M*N) because we're storing M*N values in memory at the same time.
There are four different approaches to solve the problem of finding the longest common subsequence of two strings. To provide a brief summary:
Approach 1 (Recursion): This is a brute-force solution where you consider two cases: when the first letter of text1 is not in the subsequence, and when it is. For each case, you solve a smaller subproblem of the same type, and then take the maximum of the two cases.
Approach 2 (Memoization): This is a more efficient solution where you use dynamic programming to avoid redundant computation. You store the results of each subproblem in a cache so that you can quickly look them up later. This technique, called memoization, can greatly speed up the solution by eliminating redundant work.
Approach 3 (Tabulation  2D array): This is another form of dynamic programming, but instead of using recursion and a cache, you iteratively compute the solutions to the subproblems and store them in a 2D table (or grid). You start by solving the smallest subproblems and gradually build up to the solution of the original problem.
Approach 4 (Tabulation - 1D array): This is a more space-efficient version of the previous solution. Instead of storing the solutions to all subproblems in a 2D table, you only keep the solutions to the subproblems of the current and previous row. This reduces the space complexity of the solution from O(m*n) to O(min(m, n)), where m and n are the lengths of the input strings.
Each of these solutions has its own tradeoffs in terms of time complexity, space complexity, and code complexity. The best solution to use depends on the specific requirements and constraints of the problem.
Could someone tell me the base case/stopping case for Approach 2? It seems to me that p1 and p2 can go up infinitely.
The base case for Approach 2 (memoization) is when either `p1` is equal to the length of `text1` or `p2` is equal to the length of `text2`. This is represented by the following line in the `memo_solve` function:
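The quoted line was not preserved here; in context it is the guard at the top of `memo_solve`, reconstructed below as a minimal, self-contained version of the memoized approach (`memo_solve`, `p1`, `p2` as in the approach being discussed):

```python
from functools import lru_cache

def longest_common_subsequence(text1: str, text2: str) -> int:
    @lru_cache(maxsize=None)
    def memo_solve(p1: int, p2: int) -> int:
        # Base case: one of the pointers has walked off the end of its string.
        if p1 == len(text1) or p2 == len(text2):
            return 0
        if text1[p1] == text2[p2]:
            return 1 + memo_solve(p1 + 1, p2 + 1)
        return max(memo_solve(p1 + 1, p2), memo_solve(p1, p2 + 1))

    return memo_solve(0, 0)
```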


The reasoning for this base case is that `p1` and `p2` represent the indices in `text1` and `text2` respectively. When `p1` equals the length of `text1`, we have exhausted all characters in `text1`; similarly, when `p2` equals the length of `text2`, we have exhausted all characters in `text2`. In both cases we cannot extend the common subsequence any further, so we return `0`.
Now, regarding your concern about `p1` and `p2` growing without bound: that won't happen, due to the nature of the problem and the function's design. `p1` and `p2` are only ever incremented (by `1`), never decremented, in the recursive calls. They start from `0` and can only increase until they reach their respective string lengths. Once they reach the end of their strings, the function hits the base case and starts returning, so no infinite recursion occurs.
Why is the second tabulation approach better than the first tabulation approach in terms of space? While the second tabulation approach doesn't require us to make a 2D array of size MxN, it still has us making a 1D array of size N, plus a new 1D array of size N on every loop iteration. Wouldn't that add up to be the same?
The second tabulation approach is considered better in terms of space complexity because it uses two 1D arrays instead of a 2D array.
In the second approach, we only keep track of the current and previous rows (or columns, depending on the orientation of your solution), because those are the only ones necessary for the calculation. After we finish computing the current row, we don’t need the previous row anymore. So we can just use two rows and keep swapping between them, thereby saving memory.
Even though it looks like you're creating a new array at each iteration of the outer loop, the total extra space does not grow with the number of iterations. Each time the loop iterates, the old "current" array is discarded (and its memory can be reclaimed by the runtime) while a new one of the same size is created, so only two arrays are alive at any moment.
The space complexity of this approach is O(N), where N is the length of the string. So this approach is more spaceefficient than the first approach, which has a space complexity of O(M * N), where M and N are the lengths of the two strings.
I wrote my own: first I wrote the memoization one and then converted it to DP. Let me know what you think!


Your code looks quite good and it is correctly applying dynamic programming for the Longest Common Subsequence problem!
The `lcssMem` function applies memoization, which uses a top-down approach. It starts from the big problem (the entire strings), breaks it down into smaller subproblems (suffixes of the strings), and uses memoization to avoid solving the same subproblem multiple times.
The `lcssDp` function applies tabulation, which uses a bottom-up approach. It starts with small problems (empty strings) and builds up solutions to larger problems (prefixes of the strings), filling up the DP table along the way.
One potential improvement to your code is the addition of input validation at the beginning of `longestCommonSubsequence`. It's always good practice to check for null or empty input strings and handle them accordingly.


Overall, your understanding and implementation of both memoization and tabulation methods for the LCS problem are spoton!
The answer for the swapping is here: we have cur[col] = max(cur[col+1], prev[col]) and, in the match case, cur[col] = 1 + prev[col+1], so cur entries can sometimes be greater than the corresponding prev entries.
Yes, your understanding is correct.
The key to understanding this swapping trick is that for the current cell in the 2D dynamic programming table, we are only interested in the “next” cell (i.e., the cell on the right) in the “current” row and the “next” cell in the “previous” row (i.e., the cell right below the current cell).
The important observation here is that the “current” cell’s value is only needed for the next cell in the “previous” row when we move to the next column in our dynamic programming table.
Therefore, after we are done with the current column and are ready to move to the next column, we can swap our “current” row with our “previous” row because the current row’s values will be needed for the next column’s “previous” row. The “current” row’s values for the current column won’t be needed anymore.
This swapping trick is very useful for optimizing space in dynamic programming problems where the state transition only depends on the “neighboring” states.
Remember to be careful when applying this trick because it’s not applicable to all dynamic programming problems. The specific state transition rule will dictate whether this optimization is feasible or not.
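A minimal Python sketch of the two-row version with this swap (illustrative names, not the poster's original code):

```python
def lcs_two_rows(text1: str, text2: str) -> int:
    # prev holds the DP row for text1[i+1:]; cur is built for text1[i:].
    # Iterating columns right-to-left matches the recurrence from the
    # discussion: cur[col] = 1 + prev[col+1] on a match, otherwise
    # cur[col] = max(cur[col+1], prev[col]).
    n, m = len(text1), len(text2)
    prev = [0] * (m + 1)
    cur = [0] * (m + 1)
    for i in range(n - 1, -1, -1):
        for col in range(m - 1, -1, -1):
            if text1[i] == text2[col]:
                cur[col] = 1 + prev[col + 1]
            else:
                cur[col] = max(cur[col + 1], prev[col])
        prev, cur = cur, prev  # swap: cur becomes the next row's prev
    return prev[0]
```

After the final swap, `prev` holds the row for `i = 0`, so `prev[0]` is the answer.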
The problem is actually not that hard once you understand the trick. Let's use the case where text1 = "abcde" and text2 = "ace".
Intuition
(1) If the two characters are equal, we process the rest of the strings. We can see the first characters equal each other in "abcde" and "ace". Hence, we only need to compare the rest of the strings, "bcde" and "ce". Therefore, we can come up with dp[i+1][j+1] = 1 + dp[i][j].
(2) If the two characters are not equal, we advance the index into one string or the other. We can see that 'b' does not equal 'c' in "bcde" and "ce". Hence, we compare either "bcde" and "e" or "cde" and "ce". Therefore, we come up with the equation dp[i+1][j+1] = Math.max(dp[i][j+1], dp[i+1][j]).


Yes, this is a well-written explanation and implementation for this problem using Dynamic Programming (DP). Here's a brief overview of what this solution is doing:
- This solution uses a 2D DP table of size `(n+1) x (m+1)`, where `n` is the length of `text1` and `m` is the length of `text2`. The reason for the "+1" is to account for the case where we compare an empty string to the other string, which has a Longest Common Subsequence (LCS) of 0.
- The outer loop iterates over `text1` and the inner loop iterates over `text2`.
- For each pair of characters, it checks if they are the same. If they are, it increments the LCS of the previous pair of characters (i.e., `dp[i][j]`, where `i` and `j` are the indices of the current characters in `text1` and `text2`, respectively) by 1 and assigns it to the LCS of the current pair of characters (i.e., `dp[i+1][j+1]`).
- If the characters are not the same, it assigns the maximum of the LCS that skips the current character of `text1` (i.e., `dp[i][j+1]`) and the LCS that skips the current character of `text2` (i.e., `dp[i+1][j]`) to the LCS of the current pair of characters.
- At the end, `dp[n][m]` contains the LCS of `text1` and `text2`.
In essence, this DP approach breaks down the problem into smaller subproblems (i.e., finding the LCS of different prefixes of `text1` and `text2`), solves each subproblem, and builds up to the solution of the original problem.
It’s a great example of how understanding the substructure of a problem can lead to an efficient DP solution.
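To make the table concrete, here is a short illustrative script (not from the original post) that builds the `(n+1) x (m+1)` table for `text1 = "abcde"` and `text2 = "ace"` using exactly the recurrence above, and prints it:

```python
def lcs_table(text1: str, text2: str):
    n, m = len(text1), len(text2)
    # dp[i][j] = LCS length of the prefixes text1[:i] and text2[:j];
    # row 0 and column 0 stay 0 (comparison against an empty string).
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if text1[i] == text2[j]:
                dp[i + 1][j + 1] = 1 + dp[i][j]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp

for row in lcs_table("abcde", "ace"):
    print(row)
# The bottom-right entry, dp[5][3] = 3, is the LCS length ("ace").
```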
For both iterative DP solutions, we don’t actually need the two DP arrays previous and current. We can use one array and a “diagonal” value to track everything we need to build the next row/column of the DP table because all calculations are local to the current index:


Yes, your approach is correct and quite clever! This approach is often referred to as “space optimization in dynamic programming”. It optimizes the space complexity from O(mn) to O(min(m, n)) where m and n are the lengths of text1 and text2 respectively.
Here’s a brief explanation of your approach:
- `text1` and `text2` are reversed. This doesn't change the answer, since the longest common subsequence of the reversed strings has the same length.
- A 1D DP array `memo` is initialized with length `(n + 1)`. It represents the previous row or column of the 2D DP table in the standard approach.
- The outer loop iterates over `text1`, and the inner loop iterates over `text2`.
- `diag` keeps track of the previous value of `memo[j]`, which represents the diagonal element in the 2D DP table in the standard approach.
- In each iteration of the inner loop, `memo[j]` is updated based on whether the current characters in `text1` and `text2` are the same, similar to the standard approach. However, instead of referring to the 2D DP table directly, `memo[j]` (the current element), `memo[j - 1]` (the left element), and `diag` (the diagonal element) are used.
- `tmp` temporarily stores the old value of `memo[j]` before it is updated. After the update, `diag` is set to `tmp`, so that `diag` can be used as the diagonal element in the next iteration.
- The final answer is `memo[n]`, the last element of `memo`.
This approach achieves the same result as the standard approach with less space, demonstrating how thoughtful usage of variables can lead to more efficient solutions.
We can further reduce space utilisation in Approach 4 (runtime 9 ms):
We are actually only interested in the previous row/column's last updated value. Instead of having two arrays of size min(M, N), we can do this with just one array of size min(M, N) and a variable storing the previous cell's non-updated value.
The modified code for Approach 4 will look something like this (runtime 9 ms):


Yes, you’re absolutely correct!
This solution optimizes the space usage even more. It still uses a 1D array of size min(M, N), but instead of maintaining another 1D array for the previous row or column, it only keeps a single variable `prev` to remember the diagonal element (the value of `current[row + 1]` before `current[row]` is updated).
Here's the core idea of this approach:
- `prev` is first set to the last element in the `current` array, which is the diagonal element for the first cell in the row or column.
- In the inner loop, `temp` temporarily stores the value of `current[row]` before it's updated.
- After `current[row]` is updated, `prev` is set to `temp`, which means `prev` now holds the value of the diagonal element for the next cell.
- The rest of the process remains the same as the previous optimized approach.
By doing this, we effectively maintain the DP state transitions with only a single array and a variable, further reducing the space complexity. Great job on your optimization!
Just to note: the time complexity is still O(MN) as we still need to traverse each element in the grid at least once. The space complexity, however, is reduced to O(min(M, N)).
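The modified code itself isn't shown above, but a Python sketch consistent with the description (one array of size min(M, N) + 1 plus a `prev` variable holding the not-yet-overwritten diagonal value; all names are illustrative) might look like:

```python
def lcs_one_array(text1: str, text2: str) -> int:
    # Keep the shorter string on the array dimension: O(min(M, N)) space.
    if len(text2) > len(text1):
        text1, text2 = text2, text1
    m = len(text2)
    current = [0] * (m + 1)  # current[j] = LCS of processed prefix vs text2[:j]
    for ch in text1:
        prev = 0  # diagonal value: the old current[j - 1] (old current[0] is 0)
        for j in range(1, m + 1):
            temp = current[j]          # save before overwriting
            if ch == text2[j - 1]:
                current[j] = 1 + prev  # match: extend the diagonal value
            else:
                current[j] = max(current[j], current[j - 1])
            prev = temp                # old current[j] is the next cell's diagonal
    return current[m]
```

The time complexity is still O(MN); only the space drops to O(min(M, N)).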
I used Solution 2 (recursion with memoization) but I am getting Runtime Error (terminate called after throwing an instance of ‘std::bad_alloc’ what(): std::bad_alloc) with 42 / 45 test cases passed.
The error message you're seeing, `std::bad_alloc`, is thrown when a program attempts to allocate memory and fails. This is typically because there's insufficient memory available to meet the request.
It is not typical to see `std::bad_alloc` when solving this problem with the second solution (recursion with memoization). However, the error can occur if you are dealing with extremely long strings or sequences, as the memory required for the recursive calls and for the memoization cache (a two-dimensional array with dimensions equal to the sizes of the two input strings) may exceed the available memory on your machine or the limits of your programming environment.
It’s important to keep in mind that recursion can use a significant amount of memory, particularly when used on large data sets. Each recursive call adds a new layer to the call stack, and this can add up quickly when dealing with large inputs.
In this scenario, you might want to consider using an iterative solution, such as tabulation (bottom-up dynamic programming). Tabulation generally uses less memory than a recursive/memoization approach because it doesn't involve a call stack, and instead fills up a DP table iteratively. This could potentially solve the issue you're facing. For example, you might use the third or fourth solutions provided above.
In case you’re dealing with inputs of enormous sizes and even the tabulation approach can’t handle them, you might have to use a different algorithmic strategy that doesn’t require storing an entire DP table in memory, or even seek solutions outside the realm of dynamic programming.
Is it possible to solve this problem using a suffix trie? I tried to build one but encountered some issues, mostly because I tried to use dictionaries instead of an actual tree.
It's not that I don't appreciate the DP solution. But I cannot stop thinking about the suffix tree's fast accessibility after construction, especially if it uses pointers into the strings instead of copying string elements during construction.
I also noticed that if the match indexes are collected in order as a list of lists, the longest common subsequence is also the longest strictly increasing subsequence of these indexes.
This question is just so rich, but also so challenging.
Using a suffix trie (or suffix tree) for solving the longest common subsequence (LCS) problem is an interesting idea, but it may not be as straightforward or efficient as the dynamic programming approach.
Firstly, let’s address the construction of a suffix trie for a single string. A suffix trie is a trie where every node represents a suffix of the input string. Constructing a suffix trie for a string of length n can take O(n^2) time in the worst case (consider a string of the same character). That alone can be quite expensive computationally.
However, the LCS problem involves two strings. One way to use a suffix trie would be to construct it for one string, and then traverse it using the other string. This could indeed give you all the common substrings of the two strings, but it wouldn’t necessarily give you the longest subsequence that is common to both. This is because a subsequence doesn’t require all elements to be contiguous, unlike substrings. This could result in additional complexity in the traversal and matching process.
Furthermore, the space complexity of a suffix trie can be quite high, as each node in the trie can potentially have as many children as there are unique characters in the alphabet of the input string.
In contrast, the dynamic programming approach can solve the LCS problem in O(nm) time where n and m are the lengths of the two strings, and with O(min(n,m)) space with some optimizations. This is generally more efficient than the trie approach.
As for your observation about the longest strictly increasing subsequence: it is indeed a fascinating point, and it can be made rigorous. There is a known reduction from LCS to the Longest Increasing Subsequence (LIS) problem: for each character of the first string, in order, list the positions where it occurs in the second string in decreasing order, concatenate all of these lists, and take the longest strictly increasing subsequence of the result. The decreasing order within each list is essential, because it prevents a single character of the first string from being matched to several positions of the second. Be aware, though, that the concatenated index list can have up to n*m entries (for example, when both strings repeat a single character), so this reduction is not automatically faster than the DP approach.
Overall, while the suffix trie can be a powerful tool for certain types of string processing tasks (like searching for substrings), it may not be the most suitable tool for the LCS problem.
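To make the index-list idea concrete, here is an illustrative sketch of the LCS-to-LIS reduction (the name `lcs_via_lis` and the patience-sorting LIS are my own choices, not from the original discussion):

```python
from bisect import bisect_left

def lcs_via_lis(text1: str, text2: str) -> int:
    # Map each character to its positions in text2, then emit those
    # positions in DECREASING order so one character of text1 can
    # contribute at most one index to a strictly increasing subsequence.
    pos = {}
    for j, ch in enumerate(text2):
        pos.setdefault(ch, []).append(j)
    seq = []
    for ch in text1:
        seq.extend(reversed(pos.get(ch, [])))
    # Patience-sorting LIS (strictly increasing) in O(k log k),
    # where k = len(seq).
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lcs_via_lis("abcde", "ace"))  # 3
```

For example, with "abc" and "bca" the emitted sequence is [2, 0, 1] and its LIS is [0, 1], matching the LCS "bc" of length 2.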
A quick question, seeking a short quick answer: why can't space be optimized in top-down recursive DP the same way it is in tabulation?
It's late and I've used up all my brain cells; it is hard to imagine, but I have a feeling that when doing top-down, the subproblem tree will sometimes penetrate the single-layer memo.
The top-down (memoization) approach to dynamic programming involves recursive calls, and these calls can pile up, creating a deep call stack. This call stack is part of the memory footprint of the algorithm. While memoization saves space in terms of the number of distinct subproblems that need to be stored, it does not save the overhead of the recursive call stack.
On the other hand, the bottom-up (tabulation) approach builds the solution iteratively. It does not involve any recursive calls, hence there is no extra memory needed for a call stack. That's why the space complexity of the bottom-up approach is typically lower than the top-down approach.
As for "penetrating the single-layer memo": your feeling is essentially right. The row-by-row space optimization works in tabulation because we control the order of evaluation and know that once a row is finished, it will never be read again. In top-down recursion, subproblems are reached in whatever order the recursion happens to visit them, so a single-row memo could be overwritten while some caller still needs the old value.
So, while you can sometimes shrink the memoization table in a top-down solution, you generally cannot do so as safely or as simply as in tabulation, and you can never eliminate the extra space used by the call stack.
DP Question: For ease of implementation, we add an extra row of zeroes at the bottom, and an extra column of zeroes to the right. Could anyone tell me the reason? Many thanks.
The reason for adding an extra row of zeroes at the bottom, and an extra column of zeroes to the right, is to handle edge cases where we would otherwise go out of bounds of the array.
In the longest common subsequence problem with this formulation, the value for the cell at (i, j) depends on the cells at (i+1, j), (i, j+1), and (i+1, j+1). When we are at the last character of either string, those cells would lie outside the table, and we would go out of bounds.
The extra row and column of zeroes at the bottom and right basically serve as dummy values. This technique is commonly used in dynamic programming to avoid out-of-bounds errors and make the implementation cleaner and easier, since you don't need to write extra code to handle the boundary cases.
In the context of the longest common subsequence problem, the zeroes make sense because a sequence of length zero will have a longest common subsequence of length zero with any other sequence.
“However, if a greedy algorithm exists, then it will almost always be better than a dynamic programming one.”
Why is this the case? Is it because of their time complexities or difficulty in implementations?
Yes, if a problem can be solved using a greedy algorithm, it is typically more efficient than a dynamic programming solution in terms of both time and space complexity. This is because a greedy algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem.
Here are a few reasons why a greedy algorithm might be preferred over a dynamic programming approach:
Time Complexity: Greedy algorithms often have lower time complexities because they do not generally require examining all possible combinations or sequences of decisions. On the other hand, dynamic programming solutions often involve solving and combining subproblems, which can take more time.
Space Complexity: Greedy algorithms also often have lower space complexity because they typically do not need to store solutions to subproblems as dynamic programming solutions do.
Implementation Difficulty: Greedy algorithms can often be simpler and more straightforward to implement, while dynamic programming solutions can be more complex and harder to conceptualize, design, and code.
However, it's important to note that greedy algorithms can only be used for problems where making locally optimal choices at each step leads to a globally optimal solution. Many problems are not "greedy-choice" friendly, and in such cases, dynamic programming or other methods might be the best approach.
In the fourth approach it’s mentioned to make sure that the shortest word is the one going down(vertical), but I’m not sure I understand why, and also why it’s not recommended in the third approach as well?
The fourth approach ensures that the shortest word goes down (or is mapped to columns, if we are thinking in terms of a 2D matrix) as part of the space optimization. By allocating the shortest word to the columns, we minimize the space used: the 1D array has length `shortest_word.length() + 1`, and this saving can be significant when the lengths of the two strings differ greatly.
In the third approach, we use a full 2D array for storing our DP values, which means we are not actively trying to minimize space. Thus, it doesn't matter whether the shorter word is used for rows or columns, and the choice does not affect the time complexity either. The only advantage of using the shorter string for one dimension would be a small amount of space, but the overall space complexity would still be O(MN).
However, it’s always good practice to consider such optimizations, especially when dealing with problems that could involve large inputs.
This is their first solution. Doesn’t work


There seems to be an error in the last return statement of your `dp` function. It should not be `return dp(option1, option2)`, as `option1` and `option2` are not indices but the results of recursive `dp` calls, which represent lengths of longest common subsequences.
Here's a correct version of the first approach from the discussion:


In this corrected version, `dp(idx1: int, idx2: int)` is a helper function which computes the length of the longest common subsequence of `text1[idx1:]` and `text2[idx2:]`. At each step we decide whether to include the current characters in the longest common subsequence (if they are equal), or to exclude one of them and move on to the next character. This decision is represented by the `if` statement and the `max` function. The decorator `@lru_cache(maxsize=None)` memoizes the function to avoid redundant computations.
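The corrected code isn't reproduced above; a version consistent with this description might look like the following sketch (assuming the standard LeetCode `Solution` signature):

```python
from functools import lru_cache

class Solution:
    def longestCommonSubsequence(self, text1: str, text2: str) -> int:
        @lru_cache(maxsize=None)
        def dp(idx1: int, idx2: int) -> int:
            # LCS length of text1[idx1:] and text2[idx2:].
            if idx1 == len(text1) or idx2 == len(text2):
                return 0
            if text1[idx1] == text2[idx2]:
                # Characters match: include them and advance both indices.
                return 1 + dp(idx1 + 1, idx2 + 1)
            # No match: skip one character from either string, take the best.
            return max(dp(idx1 + 1, idx2), dp(idx1, idx2 + 1))
        return dp(0, 0)
```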
Can someone explain this difference between the top down and bottom up approaches?
In both there are two cases: 1. The letters match, 2. The letters don’t match (or we don’t take it even if they do match).
In top down: 1. Move both char indexes forward. 2. Only move one index forward. In bottom up: 1. Move both char indexes forward. 2. Take the max of moving each index forward one.
It seems like in the other DP problems I’ve solved (going through the explore topic) the recurrence relation was more similar/identical. Why is it different in this one?
Your observation is correct, and it highlights the sometimes subtle difference between topdown and bottomup approaches to dynamic programming problems.
Let’s dissect both methods in the context of the Longest Common Subsequence (LCS) problem:
Top-down (usually implemented with memoization): It starts from the original problem and breaks it down into subproblems. The logic can often be more intuitive because it follows the problem's natural recursion. When the two characters at the current position match, we move both indexes forward. When they don't match, we have two choices: move the index of the first string or of the second string. The LCS will be the maximum length achieved from these two possibilities.
Bottom-up (usually implemented with tabulation): It starts from the simplest subproblems and uses their solutions to solve more complex subproblems iteratively. The logic can sometimes be less intuitive because it doesn't follow the natural recursion of the problem, but it often leads to more efficient code. In this case, the bottom-up approach builds a matrix `dp[i][j]`, where each cell represents the length of the LCS between `text1[0..i]` and `text2[0..j]`. When `text1[i]` and `text2[j]` match, `dp[i][j]` will be `1 + dp[i-1][j-1]`. When they don't match, we take the maximum of `dp[i-1][j]` and `dp[i][j-1]`, which represent the LCS length without including either `text1[i]` or `text2[j]`.
The core idea is the same for both methods, but the way they build the solution differs. The bottom-up method constructs the solution iteratively by solving all subproblems, while the top-down method uses recursion and might not solve all subproblems, thanks to memoization. However, the conceptual difference in the way they traverse the solution space can sometimes lead to slightly different-looking recurrence relations.
Since we aim for the Dynamic Programming approach to be bottom-up, it is not clear why this article does the iteration in reverse rather than from the beginning of the strings (especially since all of the article's context so far has focused on looking at both strings from the beginning). This can be confusing to people new to DP.
That’s a great point! The choice to iterate in reverse in this particular solution is actually an optimization specific to this problem. In the Longest Common Subsequence problem, it’s more efficient to fill up the DP table in reverse when using a single row or column to represent the current and previous states. This way, you avoid overwriting values from the “previous” state that you still need to compute the “current” state.
In a more general sense, the direction of iteration in a bottomup DP approach depends on the problem’s specific details. For many problems, you indeed start from the smallest subproblems and work your way up, which usually means iterating from the beginning of the strings or arrays.
However, the reverse iteration doesn't alter the "bottom-up" nature of the approach. It's still bottom-up because it starts from smaller subproblems (e.g., comparing substrings of shorter lengths) and uses their solutions to solve larger subproblems (comparing substrings of longer lengths).
Understanding why the iteration is done in reverse in this case requires a firm grasp of the problem and the DP approach used to solve it. It’s definitely a more advanced concept, and I agree it could be confusing for those new to DP.
I have a question: the first approach is recursion and memoization; however, it is not labeled as a dynamic programming approach. From previous explanations I thought this type of approach (memoization with recursion) was the top-down type of dynamic programming.
You're absolutely correct! The first approach is indeed a form of Dynamic Programming (DP), specifically a top-down approach often referred to as "memoization".
In Dynamic Programming, there are generally two methods:
Top-down approach (Memoization): This is when you start solving the problem by breaking it down. If you see that the problem has been solved already, then just return the saved answer. If it has not been solved, solve it and save the answer. This is usually done using recursion and a data structure like a hashtable or an array for the memoization part.
Bottom-up approach (Tabulation): In this approach, you solve all the possible smaller subproblems first, then use those solutions to solve the larger subproblems, until you solve the original problem you were trying to solve. This is usually done using iterative methods and is more efficient than the top-down approach in terms of function call overhead.
So, the distinction made in the question’s context between Approach 1 (Recursion + Memoization) and the later approaches (Tabulation) is not about whether they’re DP or not, but about the specific DP approach used. It’s a bit misleading, but both are actually forms of Dynamic Programming.
Aren't all of these approaches dynamic programming? But approach 3 is the only one explicitly called dynamic programming. Another way to phrase this question: doesn't top-down recursion with OR without memoization still qualify as a kind of dynamic programming?
Yes, you're absolutely right. All of the methods described are versions of dynamic programming.
Dynamic programming is a broad concept that includes both top-down (memoization) and bottom-up (tabulation) approaches. The key characteristic of dynamic programming is that it breaks down a complex problem into simpler subproblems, then combines the solutions to those subproblems to solve the original problem. This can be done either recursively (top-down) or iteratively (bottom-up).
In the top-down approach (also known as memoization), the algorithm starts at the top (the original problem), breaks it down into subproblems, and recursively solves those. If a subproblem has already been solved, its result is stored and reused, thus eliminating redundant work.
In the bottom-up approach (also known as tabulation), the algorithm starts with the simplest subproblems and gradually solves more complex ones, using the solutions of previous subproblems. This approach typically uses iteration instead of recursion and builds a table in a systematic order.
So, you're correct that Approach 1 (using recursion with memoization) and Approach 2 (using recursion without memoization) are also forms of dynamic programming. Approach 3 and Approach 4, which use the bottom-up strategy, are often more commonly associated with the term "dynamic programming" because they are more efficient and avoid the overhead of recursion.
However, all these approaches fall under the umbrella of dynamic programming since they all involve breaking down the problem into simpler subproblems and reusing previous results to avoid redundant work.
Alternative terms for Dynamic Programming
The technique of breaking down a problem, storing the results of subproblems, and reusing those results to efficiently find a solution is most commonly referred to as “Dynamic Programming” in computer science. However, if you are looking for a more descriptive name, you could consider terms like:
 “Memoized Problem Decomposition”
 “Subproblem Caching”
 “Optimized Divide-and-Conquer”
 “Tabulated Recursive Solution”
 “Cached Subproblem Solution”
Keep in mind, though, that “Dynamic Programming” is the standard term in the field and is widely recognized and understood by computer scientists and programmers.
Obviously the final solution is a better operation-count optimization, since you're just reassigning rather than creating/calculating, but technically there is another optimization that can be performed in the original space-optimized DP solution. You could just use prev's length each time to initialize curr's length, rather than computing it via addition. Something I noticed, so I figured I'd throw it out there :)
Absolutely! That's a good point. You can certainly initialize the `curr` array with the length of `prev` in the original optimized space solution. This would save an additional calculation at each iteration, slightly optimizing the code further.
That said, this change would not have a significant impact on the overall time complexity of the algorithm, which would remain O(M*N) where M and N are the lengths of the two input strings. However, it’s definitely an excellent example of how there are always little improvements and efficiencies that can be found in code, even in already optimized solutions! Good catch!
You could go forward too. Often in these kinds of problems, there is an advantage one way over the other. You should pick the way that seems to lead to the least edge case handling/ other hassles.
An extreme example I can think of off the top of my head is the “Triangle” problem (#120 I think). One way has a lot less edge case handling than the other.
Yes, you’re right! When solving problems using dynamic programming, you can choose to fill the DP table either in a forward manner (from top to bottom and left to right) or in a backward manner (from bottom to top and right to left).
In this problem, both ways would yield the correct result. The choice between them often depends on the specifics of the problem and which direction makes it easier to define the state transitions.
For the Longest Common Subsequence problem, either direction works because the state at dp[i][j] depends only on dp[i-1][j], dp[i][j-1], and dp[i-1][j-1], which can easily be computed in both directions.
For some other problems, such as the “Triangle” problem you mentioned, one direction might make it easier to handle edge cases and simplify the code. In general, it’s a good idea to consider both directions and choose the one that makes the most sense for the specific problem at hand.
Yes. I believe you could also go forward. An advantage of going backwards is that the indices of the 2d array will be the same as the indices of the chars in the word. Going forward, you’d have a bunch of
Yes, that’s one advantage of filling the DP table backwards. By going from the end to the start, the indices in your DP table align directly with the indices in your strings, which can simplify your code.
However, this benefit might not be as noticeable in the Longest Common Subsequence problem, because whether you iterate from the start or from the end, you'll still need an off-by-one index adjustment when comparing the characters of the strings (due to the extra row and column of zeros in the DP table).
In general, the choice between forward and backward iteration depends on the specifics of the problem and which one makes the state transition logic clearer and simpler to implement. Both methods will work as long as they are implemented correctly.
why is bottom up better than top down? or is it the same? sorry for the confusion
The choice between a top-down approach (recursion with memoization) and a bottom-up approach (iteration and dynamic programming) usually depends on the specific problem and context. However, there are some general differences between the two that might make one more favorable than the other in certain scenarios:
Memory Usage: A top-down approach uses recursion, which can lead to high memory usage due to the recursion stack, especially with deep recursion trees. In contrast, a bottom-up approach iteratively builds up the solution, which generally uses less memory.
Speed: A bottom-up approach can be faster because it doesn't have the overhead of recursive calls. However, this speed difference can be negligible in many cases, especially if memoization is used effectively in the top-down approach.
Implementation Complexity: Top-down approaches can sometimes be more intuitive and easier to code, especially for complex problems, because they closely follow the problem's natural recursive structure. However, they can also be more difficult to debug due to recursion. On the other hand, bottom-up approaches are often more straightforward and easier to debug, but the iterative logic can sometimes be harder to devise.
In general, a bottom-up approach is usually a good first choice for dynamic programming problems due to its efficiency and simplicity. But the top-down approach can sometimes be more intuitive or easier to implement, especially for more complex problems. So, neither approach is strictly better than the other; it depends on the specific problem and the coder's comfort and familiarity with the techniques.
Can we still have space optimized DP if we want to extract the common subsequence also?
Space optimization in dynamic programming (DP) often involves reducing the amount of stored state. However, if you want to recover the actual longest common subsequence (LCS), not just its length, you typically need to keep track of the full DP table (or some additional information), which would use more space. This is because extracting the LCS involves backtracking through the DP table.
Here's an example of how you might extract the actual LCS after using DP to find its length (for this example, assume we have a full 2D DP table `dp`):


In the function above, we start from the bottom-right of the DP table and work our way up to the top-left. When we find a character that's part of the LCS (i.e., `text1[i - 1] == text2[j - 1]`), we add it to our LCS list. If the characters don't match, we move in the direction of the greater value in the DP table. At the end, we reverse the LCS list to get the characters in the correct order, since we built the list in reverse.
So, in conclusion, if you also want to extract the actual LCS, you can't use the space-optimized version of DP that only keeps track of the current and previous rows/columns. You need the full DP table (or some additional information) to backtrack and extract the LCS.
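The extraction code itself isn't reproduced above; a self-contained sketch consistent with this description (build the full table, then backtrack) might be:

```python
def lcs_with_string(text1: str, text2: str) -> str:
    n, m = len(text1), len(text2)
    # Full (n+1) x (m+1) table: dp[i][j] = LCS length of text1[:i], text2[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if text1[i - 1] == text2[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack from the bottom-right corner to recover the subsequence.
    lcs = []
    i, j = n, m
    while i > 0 and j > 0:
        if text1[i - 1] == text2[j - 1]:
            lcs.append(text1[i - 1])  # this character is part of the LCS
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1  # move toward the greater (or equal) neighboring value
        else:
            j -= 1
    return "".join(reversed(lcs))  # characters were collected in reverse

print(lcs_with_string("abcde", "ace"))  # ace
```

Note that ties in the backtracking step mean the recovered string is *a* longest common subsequence; there may be others of the same length.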
My intuition on this was “two pointer” approach, but I struggled to connect that to DP/recursion. I think a good rule of thumb is that anytime a twopointer approach is helpful, but you don’t know which of the two pointers it would be best to move, those are your two recursion cases to compare results for.
That’s a great insight! Indeed, this problem can be viewed as a twopointer problem where we are uncertain about which pointer to move. The issue is that we cannot know which decision is the best without exploring both options. Hence, it leads naturally to a recursive solution, where we explore all possibilities and choose the best one.
In many problems, especially in string or array manipulation, when we find ourselves torn between multiple decisions (like moving one pointer or another, choosing one element or another), it often indicates that a dynamic programming solution may be effective.
However, it’s crucial to note that not all problems that can be approached with two pointers can be solved with dynamic programming, and vice versa. The key indicator that dynamic programming might be applicable is the existence of overlapping subproblems, which isn’t always the case in two-pointer problems.
You said “If the first character of each string is not the same, then either one or both of those characters will not be used in the final result (i.e. not have a line drawn to or from it). Therefore, the length of the longest common subsequence is max(LCS(p1 + 1, p2), LCS(p1, p2 + 1)).”
But this only deals with not using EITHER one of the chars in the final result; how come we don’t have to explicitly consider not using BOTH chars in the final result?
The reason we don’t need to explicitly consider the case of not using both characters in the final result is because that situation is implicitly handled by the two recursive calls that we make.
Let’s break down the two calls:
LCS(p1 + 1, p2): This call explores the scenario where we discard the character at position p1 from the first string, and try to find the longest common subsequence in the remaining substrings.
LCS(p1, p2 + 1): This call explores the scenario where we discard the character at position p2 from the second string, and continue with the remaining substrings.
If both characters are not part of the longest common subsequence, then eventually, the algorithm will advance both pointers p1 and p2 by making the two separate recursive calls mentioned above. So, the scenario where we discard both characters is covered by a combination of these two calls. We don’t need to make a separate call to handle it.
In simpler terms, the max() function takes care of this. If neither character is in the final LCS, both recursive calls will essentially move past these characters. It’s just that we will do it one at a time (either skip the character from text1 first or skip the character from text2 first). This is acceptable because the LCS does not depend on the order of character consideration.
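As a concrete illustration (a sketch, not anyone's exact submission), here is a memoized recursion with exactly those two calls in the mismatch branch:

```python
from functools import lru_cache

def lcs_length(text1, text2):
    @lru_cache(maxsize=None)
    def LCS(p1, p2):
        # Base case: one string is exhausted
        if p1 == len(text1) or p2 == len(text2):
            return 0
        if text1[p1] == text2[p2]:
            # Matching characters extend the subsequence
            return 1 + LCS(p1 + 1, p2 + 1)
        # Mismatch: skip one character from either string; skipping
        # BOTH is covered by the composition of these two calls
        return max(LCS(p1 + 1, p2), LCS(p1, p2 + 1))
    return LCS(0, 0)
```

With inputs like "aa" and "bb", no character ever matches, and the algorithm still advances both pointers to the end purely by composing the two skip calls.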
Can someone point out the mistake in my solution? It is essentially the same as the DP tabulation solution, but instead of using dp[][] of size (m+1)*(n+1), I am trying to use it with size m*n. The only modification that I think needs to be made is:
If text1.charAt(i) == text2.charAt(j), then check if i==0 || j==0; if so, initialize dp[i][j] = 1, otherwise set dp[i][j] = 1 + dp[i-1][j-1].
Now, if text1.charAt(i) != text2.charAt(j), then check if i==0 || j==0; if so, initialize dp[i][j] = 0, otherwise set dp[i][j] = Math.max(dp[i][j-1], dp[i-1][j]);
Here is the code:


The problem with your solution lies in the conditions where you check for i==0 or j==0. When the characters do not match at a boundary cell, the length of the longest common subsequence (LCS) is not necessarily 0.
For example, if i==0 and j>0 and the characters do not match, it doesn’t mean the length of the LCS is 0. We may already have found a match among the first j characters of text2, so we should carry forward the maximum length of LCS found so far, which is dp[i][j-1] (and symmetrically dp[i-1][j] when j==0 and i>0).
The match case at the boundary, on the other hand, is fine as you wrote it: when i==0 or j==0 and the characters match, the LCS up to that cell has length exactly 1, since only a single character of one of the strings is available.
Here’s the corrected code:
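Since the snippet itself wasn’t reproduced here, the following is a hedged Python sketch of the corrected m*n tabulation (the question used Java; the boundary handling is the point):

```python
def longest_common_subsequence(text1, text2):
    m, n = len(text1), len(text2)
    dp = [[0] * n for _ in range(m)]          # m*n table, 0-indexed
    for i in range(m):
        for j in range(n):
            if text1[i] == text2[j]:
                # A boundary match contributes exactly 1; otherwise extend the diagonal
                dp[i][j] = 1 + (dp[i - 1][j - 1] if i > 0 and j > 0 else 0)
            else:
                # Carry forward the best result so far instead of resetting to 0
                up = dp[i - 1][j] if i > 0 else 0
                left = dp[i][j - 1] if j > 0 else 0
                dp[i][j] = max(up, left)
    return dp[m - 1][n - 1] if m and n else 0
```

Note how the non-match branch reads the neighboring cells even on the first row or column, instead of hard-coding 0 there.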


This ensures that even at the boundaries, we are considering possible matches or maintaining the LCS length accurately.
In the last solution code, line 22 (“previous, current = current, previous”) is not completely necessary. It suffices to write “previous = current[:]” and just copy current into previous. We don’t need to copy previous into current. That’s because we only need to refer to one example back and not two.
Yes, you’re correct. The line previous, current = current, previous is a neat Python trick to swap the contents of two variables, but it’s indeed not necessary to copy previous back into current in this context.
As you’ve correctly pointed out, we’re only interested in the last result, i.e., the previous value. The current array gets overwritten in each iteration, so preserving its value from the last iteration is not needed.
Your suggestion of using previous = current[:] is a simpler way to copy the contents of current to previous, while discarding the old value of previous and not unnecessarily preserving the old current. This would look like:
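A sketch of the loop with that change, assuming the editorial-style layout where previous and current hold adjacent columns of the grid, indexed by positions in text1:

```python
def lcs_two_arrays(text1, text2):
    m, n = len(text1), len(text2)
    previous = [0] * (m + 1)            # the column to the right of current
    for col in reversed(range(n)):
        current = [0] * (m + 1)         # the column being filled
        for row in reversed(range(m)):
            if text1[row] == text2[col]:
                current[row] = 1 + previous[row + 1]   # diagonal neighbor
            else:
                current[row] = max(previous[row], current[row + 1])
        previous = current[:]           # copy current into previous; no swap needed
    return previous[0]
```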


This keeps the code a little simpler, with the same overall efficiency.
Great explanation. In the last approach, I saw some discussion about space optimization. Rather than using 2 arrays, a single array can be used if it’s implemented as a circular queue. That will further help reduce the space required, though the Big O notation would remain unchanged at O(min(M,N)).
Yes, that’s a great insight! Using a circular queue (or circular buffer) can indeed optimize space usage.
The idea of a circular buffer is to use an array as if it were connected end-to-end. When you reach the end of the buffer and you’re trying to add another element, you wrap around and start overwriting from the start of the array again. In the context of this problem, a circular queue can be used to keep track of the current and previous states in the dynamic programming solution.
Here’s how you can modify the last approach to use a circular queue:
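A sketch along those lines, using the dp / new_dp names discussed below (an assumption of this sketch: the two arrays stand for adjacent columns of the grid):

```python
def lcs_circular(text1, text2):
    m, n = len(text1), len(text2)
    dp = [0] * (m + 1)                  # old state: the column to the right
    for col in reversed(range(n)):
        new_dp = [0] * (m + 1)          # new state: the column being filled
        for row in reversed(range(m)):
            if text1[row] == text2[col]:
                new_dp[row] = 1 + dp[row + 1]
            else:
                new_dp[row] = max(dp[row], new_dp[row + 1])
        dp = new_dp                     # rotate the two-slot "buffer"
    return dp[0]
```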


In this solution, the new DP state new_dp is calculated based on the old DP state dp, and then the old DP state is replaced with the new one for the next iteration. This simulates a circular buffer with a size of 2.
However, as you’ve mentioned, this doesn’t change the space complexity, which remains O(min(M,N)). The circular queue optimization can be more beneficial when the buffer size is significantly larger. But it’s a great insight and an excellent example of how you can further optimize memory usage.
is there a reason approach 3’s outer loop is columns and inner loop is rows? for some reason, when I switch and have the outer loop as rows and the inner loop as the column, my answer is incorrect, but logically, I would think that both solutions would be correct?
The outer and inner loops in dynamic programming problems like this one often relate to the order of filling up the DP table.
In Approach 3, we first iterate through each column (representing characters in text2) and, within that, each row (representing characters in text1). This means we are filling up the DP table column by column.
With a full 2D grid, either order is actually valid. dp[i][j] depends on dp[i-1][j], dp[i][j-1], and dp[i-1][j-1], and whether you fill the table row by row or column by column (with both indices moving in the same direction), all three dependencies are computed before dp[i][j] itself. The same holds for the reversed orientation used in Approach 3, where dp[row][col] depends on dp[row+1][col], dp[row][col+1], and dp[row+1][col+1].
What usually goes wrong when the loops are swapped is that the loop variables keep their old roles: the bounds and the string each index addresses must be swapped too. If the outer loop now runs over rows but still uses the bound or the string that belonged to the columns, the code reads the wrong characters or indexes out of range, and the answer comes out wrong. And in the space-optimized variant, the 1D arrays represent one specific dimension of the grid (columns, in the editorial’s version), so there the loop order genuinely cannot be swapped without reworking what the arrays store.
So the dependencies themselves permit either loop order on a full grid; when switching the loops changes the answer, check that the bounds, the character indexing, and (if applicable) the role of any rolling arrays were switched consistently as well.
What is the time complexity of the brute force solution if the lengths of the strings are M and N?
The brute force solution of this problem involves trying all possible subsequences of the two strings and checking if they match. Since a string of length M has 2^M possible subsequences, and similarly a string of length N has 2^N possible subsequences, the brute force solution would require checking each subsequence of the first string against each subsequence of the second string.
Therefore, the time complexity of the brute force solution would be proportional to O(2^M * 2^N), which simplifies to O(2^(M+N)). This is an exponential time complexity, which is not practical for large strings.
That’s why we resort to dynamic programming for this problem, which has a more manageable time complexity of O(M*N).
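For illustration only, here is a brute force that enumerates the 2^M subsequences of the first string and tests each against the second in O(N), i.e. O(2^M * N); the fully naive version that enumerates subsequences of both strings is the O(2^(M+N)) one. Either way, it is only usable for tiny inputs:

```python
from itertools import combinations

def is_subsequence(s, t):
    it = iter(t)
    return all(c in it for c in s)   # greedy left-to-right scan of t

def lcs_brute_force(text1, text2):
    # Try candidate subsequences of text1, longest first, and return
    # the length of the first one that is also a subsequence of text2.
    for length in range(len(text1), -1, -1):
        for idxs in combinations(range(len(text1)), length):
            candidate = "".join(text1[i] for i in idxs)
            if is_subsequence(candidate, text2):
                return length
    return 0
```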
In Approach 3, to calculate lcs(“attag”, “gtgatcg”), why is the grid of size len(“gtgtgatcg”) * len(“actgattag”), while in the algorithm the size of the grid is (len(“attag”)+1) * (len(“gtgatcg”)+1)? Where were the extra prefixes “gt” and “actg” added to “gtgatcg” and “attag” respectively?
I believe there may be a misunderstanding here. When calculating the longest common subsequence (LCS) between two strings using dynamic programming, you indeed create a grid (or a two-dimensional matrix) that has dimensions (len(text1) + 1) x (len(text2) + 1).
So for your example, if you are finding the LCS of “attag” and “gtgatcg”, the grid would be of size (5+1) x (7+1) = 6 x 8. There are no extra prefixes added to the strings. The extra row and column are added to the grid for initialization purposes, they serve as ‘buffer’ entries to handle the cases where one of the strings is empty.
The grid or matrix in dynamic programming is not an extension or modification of the original strings. Instead, it’s a separate data structure used to store intermediate results of the computation, to avoid recomputing the same subproblems multiple times. So the strings “attag” and “gtgatcg” themselves do not change during this process.
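A quick check of the dimensions (a minimal sketch):

```python
text1, text2 = "attag", "gtgatcg"
# One extra row and column act as buffer entries for the empty-prefix cases
grid = [[0] * (len(text2) + 1) for _ in range(len(text1) + 1)]
print(len(grid), len(grid[0]))  # 6 8
```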
Why, in the last solution, does the line current[row] = max(previous[row], current[row + 1]) not use previous[row+1], the same way the solution before it uses dp_grid[row][col] = max(dp_grid[row+1][col], dp_grid[row][col+1])?
The bottom-up DP algorithm starts at the bottom right of the DP matrix and works its way up to the top left. If we look at the traditional 2D DP grid, dp_grid[row][col] = max(dp_grid[row+1][col], dp_grid[row][col+1]) is the transition function. Here, dp_grid[row+1][col] refers to the result from the cell below the current cell, and dp_grid[row][col+1] refers to the result from the cell to the right of the current cell.
In the optimized space solution, the outer loop walks the columns of the grid, so current and previous are columns rather than rows: current is the column being filled, and previous is the column just to its right (the one computed on the previous iteration). So when we write current[row] = max(previous[row], current[row + 1]), previous[row] refers to the cell to the right of the current cell (analogous to dp_grid[row][col+1]), and current[row + 1] refers to the cell below the current cell (analogous to dp_grid[row+1][col]). The diagonal neighbor dp_grid[row+1][col+1] corresponds to previous[row + 1], and it is only needed in the matching-characters branch (current[row] = 1 + previous[row + 1]), which is why it does not appear in the line you quoted.
This change in indexing is due to the space optimization, where we only keep the current and previous columns instead of the whole DP grid. The key idea is to map the transitions from the 2D grid onto these two arrays correctly.
Approach 1 seems like O(M * N) to me, not O(M * N^2). Can anyone give an input that demonstrates the more expensive runtime?
The stated O(M * N²) for Approach 1 does not come from naive branching. With memoization there are only O(M * N) distinct (p1, p2) subproblems; the extra factor of N comes from the work done inside each subproblem. In that approach, when a match is possible, a linear scan (a find/indexOf-style call) is used to locate the next occurrence of text1[p1] within text2 starting at p2, and that scan can take O(N) time. O(M * N) subproblems times O(N) work per subproblem gives O(M * N²).
To see the more expensive runtime, you need inputs where those scans are long: strings where the character being searched for occurs only far ahead in text2 (or not at all), so the scan repeatedly walks a long stretch of the string before finding a match or giving up.
(A fully naive recursion with no memoization at all, branching two ways at every step, would be O(2^(M+N)), but that is a different, even slower algorithm.)
Approach 2 removes that inner scan by comparing single characters directly, so each subproblem does O(1) work, each subproblem is solved once and its result stored, and the total drops to O(M * N).
Approach 4 still seems like O(n*m) space to me, as cur gets reinitialized col times.
In Approach 4, although the current array is reinitialized m times, the space complexity is still O(min(M, N)) rather than O(M * N). The reason is that space complexity measures the total storage that your algorithm needs at any one time, not how many times you allocate it.
In this case, you have two arrays, current and previous, each with a length proportional to the smaller of the two input string lengths (let’s say N). The size of these arrays doesn’t change based on the other input string length (M). So, even though we reinitialize the current array m times, the total amount of space in use at any given moment stays the same.
The important point here is that we are reusing the same space over and over again, rather than creating new space for each iteration. This is why the space complexity is still O(min(M, N)) instead of O(M * N). The algorithm never uses more than min(M, N) space at any one time.


This matrix solution is really clever, but I don’t think I would have ever figured this out on my own. Do you guys feel this is a simple variation on DP or is it something that most people wouldn’t get unless they’ve seen the trick before?
This solution is a clever variation of Dynamic Programming (DP) and it’s quite common to feel that way when you first encounter such optimizations. This solution is called space-optimized DP because it only keeps track of the current row and the previous row (or column, depending on the orientation) instead of the entire matrix. This reduces the space complexity from O(M * N) to O(min(M, N)).
The trick to understanding this solution is understanding the dependencies in the DP transition. Here, dp[i][j] depends only on dp[i-1][j], dp[i][j-1], and dp[i-1][j-1]. This means when calculating the current dp[i][j], you only need the current row and the previous row, hence the space optimization. This understanding comes from the DP transition equation, which is the core part of DP problems.
That said, coming up with this kind of optimization often requires some experience with dynamic programming problems and recognizing patterns in them. Some people might intuitively recognize these patterns while others might require more practice.
Even if you don’t immediately come up with this solution, understanding how and why it works is a great way to improve your problem-solving skills and will definitely help you in tackling other similar problems in the future. So don’t be discouraged if you don’t see the trick right away! Practice and exposure to a variety of problems will gradually build up your ability to spot these patterns.
The question gives TLE with memoisation, but it’s accepted with tabulation DP (bottom-up). I have used a 2D vector of m*n in both ways. Why does it give TLE with memoisation?
The time limit exceeded (TLE) issue can occur in the memoization approach due to recursion stack overhead and non-sequential memory access. Let me explain:
Recursion Stack Overhead: In the top-down approach (memoization), we start with the original problem and break it down into subproblems using recursion. This process of recursion involves function call overhead, and if the depth of recursion is very large, it could lead to a stack overflow.
Non-Sequential Memory Access: In the memoization approach, memory access isn’t sequential. You could jump from one end of the matrix to the other due to the recursive calls, which might lead to cache inefficiency. In contrast, the bottom-up approach (tabulation) iterates over the DP table sequentially, which leads to better cache utilization and can be faster.
The key point here is that while the time complexity for both methods is theoretically the same, the actual runtime can depend on various factors such as function call overhead and memory access patterns, which might vary based on the specific problem and the characteristics of the computer system (like cache size, memory bandwidth, etc.).
Remember that these are just potential reasons. It’s also possible that your top-down solution has some inefficiency that isn’t present in your bottom-up solution. You may want to review your code and ensure that you’re properly initializing your memoization table, filling it in the correct order, and correctly handling base cases.
Can we apply the same O(min(n,m)) space complexity in the top-down memoization approach?
The O(min(n, m)) space optimization is typically used in the bottom-up dynamic programming approach, which uses a tabular method to iteratively compute the solutions to subproblems.
However, in the top-down approach, also known as memoization, the recursive nature of the solution makes it challenging to reduce the space complexity in the same way. This is because the recursion implicitly uses a stack to keep track of the subproblems, and this stack can grow as large as the depth of the recursive call tree, which, for these problems, can be O(n) or O(m).
If you try to limit the memoization table to a size of O(min(n, m)), you could encounter issues since the recursive calls in the top-down approach could potentially need to reference entries outside of this reduced table size.
So in short, the O(min(n, m)) space complexity optimization is not easily applicable to the top-down dynamic programming approach due to its recursive nature and the necessity of having complete information of all subproblems.
How did you come to the conclusion that you need to use DP?
Determining whether or not to use Dynamic Programming (DP) for a problem is generally based on a few key characteristics of the problem:
Overlapping Subproblems: If a problem can be broken down into smaller subproblems, and these subproblems are repeated multiple times in the computation, then it’s a strong hint that DP can be useful. In the case of finding the longest common subsequence, the problem of finding the common subsequence between substrings is a subproblem that repeats many times.
Optimal Substructure: If an optimal solution to the problem can be constructed efficiently from the optimal solutions of its subproblems, then DP is a suitable method. For the longest common subsequence problem, the longest common subsequence of two sequences can be found by comparing the last characters of the two sequences, and solving the problem for the two sequences with the last characters removed. This indicates that the problem has an optimal substructure.
The need for Memoization: If you find yourself solving the same subproblems over and over again, storing and reusing the results of those subproblems (a technique called “memoization”) can save significant time, which is a hallmark of DP solutions.
After recognizing these properties, it becomes clear that dynamic programming is a suitable strategy for this problem. Note that using DP is not always straightforward and it often requires practice to get a feel for when to use it and how to formulate a solution.
Can you help me see where I am going wrong?
int dp[10001][10001] = {0};
int i, j;
for (i = 0; i < text1.size(); i++)
{
    for (j = 0; j < text2.size(); j++)
    {
        if (text1[i] == text2[j])
        {
            dp[i+1][j+1] = 1 + dp[i][j];
        }
        else {
            dp[i+1][j+1] = max(dp[i][j+1], dp[i+1][j]);
        }
    }
}
The logic you’ve shared is a correct implementation of the longest common subsequence recurrence. However, the declaration int dp[10001][10001] = {0}; is a likely culprit on its own: as a local variable it occupies roughly 400 MB (10001 * 10001 * 4 bytes), far more than a typical stack can hold, so the program can crash before the loops even run. Making the array static, or allocating it on the heap (for example, a vector<vector<int>> sized to the inputs), avoids this. Beyond that, if you’re still having issues, it’s likely due to how you’re using the dp array outside of this snippet.
Keep in mind that the resulting length of the longest common subsequence will be stored in dp[text1.size()][text2.size()].
If this value isn’t what you expect, there are a few things you might want to check:
Check the size of the dp array. You’ve declared it as 10001x10001; make sure the dimensions match the constraints of your problem, and be aware that an int array that large cannot live on the stack.
Are text1 and text2 being properly assigned? Ensure they contain the correct string values before this algorithm runs.
Are you properly outputting/returning the value in dp[text1.size()][text2.size()] after these loops execute? The final answer of the longest common subsequence problem is stored at dp[text1.size()][text2.size()].
Regarding your bottom-up approach:
Why “1 plus LCS till the i-1 and j-1 indexes”? Can you explain the theory behind it? Why (1 + DP[i-1][j-1]) when there’s a match?
The “1 +” is there because when you find a matching character at position ‘i’ in the first string and position ‘j’ in the second string, this character contributes a length of 1 to the longest common subsequence.
It’s 1 because a matching character between the two strings represents a common element in the subsequence. This is essentially how we ‘build’ our common subsequence: by identifying these common elements one by one.
The DP[i-1][j-1] part is there because you’re adding this matching character to the longest common subsequence found up to the previous positions (‘i-1’ and ‘j-1’). This represents the optimal solution to the smaller problem (the longest common subsequence up to the ‘i-1’ and ‘j-1’ indexes).
So, in “1 + DP[i-1][j-1]”, ‘1’ stands for the current matching character and ‘DP[i-1][j-1]’ stands for the longest common subsequence found up to the previous positions. You add these two together to get the longest common subsequence up to the current positions ‘i’ and ‘j’.
The idea behind the line “1 plus LCS till the i-1 and j-1 indexes” is based on how the longest common subsequence (LCS) problem is defined.
The LCS problem is about finding the longest common subsequence between two strings, and a subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements.
When you find a matching character at index ‘i’ in the first string and index ‘j’ in the second string, it means that this character is part of a common subsequence. It’s not just a subsequence by itself: it extends the subsequence that has been found up to the ‘i-1’ and ‘j-1’ indexes.
So, to get the length of the longest common subsequence up to the ‘i’ and ‘j’ indexes, you count this matching character (the ‘1’) and add the length of the longest common subsequence found up to the ‘i-1’ and ‘j-1’ indexes (DP[i-1][j-1]).
This is because dynamic programming is all about using the solutions of smaller subproblems to solve larger subproblems. Here, the subproblem is “What’s the longest common subsequence up to the ‘i-1’ and ‘j-1’ indexes?”, and you’re using its stored solution (DP[i-1][j-1]) to solve the larger problem “What’s the longest common subsequence up to the ‘i’ and ‘j’ indexes?”.
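As a minimal sketch of that bottom-up recurrence (using a 1-indexed table so DP[i-1][j-1] is always defined):

```python
def lcs_tabulation(text1, text2):
    m, n = len(text1), len(text2)
    DP = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i - 1] == text2[j - 1]:
                # The current match contributes 1 on top of the LCS up to i-1, j-1
                DP[i][j] = 1 + DP[i - 1][j - 1]
            else:
                DP[i][j] = max(DP[i - 1][j], DP[i][j - 1])
    return DP[m][n]
```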
Can someone explain to me why this is the preferred solution compared to solving it by utilizing a hashtable of the characters in string 1, with the key being the character and the value being the frequency of the character, and then running through string 2, subtracting the frequency when each letter is found and incrementing a total match counter each time? I believe that would be runtime O(n + m) and memory O(min(n,m)).
The approach you’ve described (creating a hashtable of the characters in string 1, then iterating through string 2 and counting the matching characters) would indeed be more time- and space-efficient than the dynamic programming solution. However, it would not solve the problem correctly.
The problem asks for the length of the longest common subsequence, not the longest common substring or the total number of matching characters. A subsequence doesn’t have to be consecutive characters, but it must preserve the order of characters.
For example, consider the strings “ABCBDAB” and “BDCAB”. Both contain the characters ‘B’, ‘C’, ‘A’, ‘B’ in that order, so “BCAB” is a common subsequence and, in fact, a longest one. A frequency-count approach, by contrast, ignores order entirely: for the simpler pair “AB” and “BA”, it would report 2 matched characters, even though the longest common subsequence has length 1, because ‘A’ and ‘B’ never appear in the same order in both strings.
The dynamic programming solution, though more complex and resourceintensive, correctly handles the order of characters and ensures that it’s finding a subsequence. It’s a classic example of a situation where a more sophisticated algorithm is needed to get the right answer.
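A tiny illustrative comparison of the two ideas (both functions here are sketches written for this example):

```python
from collections import Counter

def matched_char_count(s1, s2):
    # Order-blind counting: total characters shared as a multiset
    counts = Counter(s1)
    total = 0
    for ch in s2:
        if counts[ch] > 0:
            counts[ch] -= 1
            total += 1
    return total

def lcs_dp(s1, s2):
    # Standard order-aware DP for the LCS length
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = (1 + dp[i - 1][j - 1] if s1[i - 1] == s2[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[m][n]

print(matched_char_count("AB", "BA"))  # 2 -- counts both characters
print(lcs_dp("AB", "BA"))              # 1 -- order allows only one of them
```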
Similar Problems
Here are 10 problems that use similar underlying concepts:
Longest Palindromic Subsequence (516): This problem is similar because it requires finding a longest subsequence, except in this case it must also be a palindrome. The dynamic programming approach will be quite similar to the one used in the longest common subsequence problem.
Edit Distance (72): This problem requires you to find the minimum number of operations to transform one string into another. It uses dynamic programming in a 2D array similar to the longest common subsequence problem.
Minimum ASCII Delete Sum for Two Strings (712): This problem requires you to delete characters to make two strings equal, minimizing the sum of the ASCII values of the characters you delete. It is related because it requires a similar dynamic programming solution.
Uncrossed Lines (1035): This problem is essentially identical to the longest common subsequence problem, with the lines corresponding to elements in the sequences.
Delete Operation for Two Strings (583): This problem requires you to find the minimum number of operations to make two strings equal, which is similar to finding the longest common subsequence.
Shortest Common Supersequence (1092): This problem is about finding the shortest common supersequence of two sequences, which is related to the longest common subsequence problem. They both involve comparing two sequences and making decisions based on their elements.
Longest Increasing Subsequence (300): This problem is about finding the longest subsequence where every element is larger than the previous one. It uses similar dynamic programming techniques.
Maximum Length of Pair Chain (646): This problem is about finding the longest chain of pairs. It is related as it involves finding the longest sequence satisfying certain conditions.
Longest Arithmetic Subsequence of Given Difference (1218): This problem is about finding the longest subsequence with a given difference, which is similar to the longest common subsequence problem, both of them requiring us to find the longest subsequence under certain conditions.
Coin Change (322): This problem requires you to find the fewest number of coins needed to make up a given amount. It’s a typical dynamic programming problem where the solution to a problem depends on the solutions to smaller instances of the same problem.
These problems are chosen because they all involve working with sequences and using dynamic programming to find an optimal solution by breaking the problem down into smaller subproblems and building up the solution.
Language Agnostic Coding Drills
Understanding variables and assignments: Understanding how to assign values to variables is fundamental in programming. This includes creating variables, variable naming, and assigning values.
Understanding built-in functions: This piece of code utilizes built-in Python functions like map(), len(), and enumerate(). Understanding how these functions work is crucial.
Understanding control flow (Conditional Statements): The code uses conditional statements (if) to control the flow of execution.
Understanding loops: Loops are used in the code to iterate over the characters of the strings (for loop).
Understanding lists: The concept of lists in Python is used here (dp = [0] * (n + 1)). The developer needs to understand how to create a list, and how to access and modify list elements.
Understanding string operations: The code accesses individual characters of the strings using indexing (text1, text2).
Understanding function definitions: The code is enclosed within a function definition. Understanding how to define a function, how to accept parameters, and how to return a value is required.
Understanding problemsolving patterns: The specific pattern used in this problem is dynamic programming. This is a method for solving complex problems by breaking them down into simpler subproblems. It is used here to solve the Longest Common Subsequence problem efficiently.
Here are the coding drills, in increasing difficulty:
Variables and assignments: Create two variables with any names and assign them different numbers. Print these variables.
Built-in functions: Create a list of 5 numbers. Use the len() function to print the length of the list.
Conditional Statements: Write a simple program that prints “Even” if a number is even and “Odd” if it’s odd.
Loops: Create a list of 5 numbers. Write a loop that prints each number in the list.
Lists: Create a list of any 5 numbers. Print the first, third, and last number in the list.
String operations: Create a string variable with the value of “Hello, World!”. Print the first, seventh, and last character of the string.
Function definitions: Define a function that accepts two numbers and returns their sum. Call this function with two numbers and print the result.
Problem-solving patterns (Dynamic Programming): Start with a simple problem like finding the Fibonacci sequence up to n numbers. Use dynamic programming to solve this problem.
Targeted Drills in Python
 Variables and assignments
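One way this drill might be solved (illustrative):

```python
x = 10   # first variable
y = 25   # second variable
print(x)  # 10
print(y)  # 25
```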


 Built-in functions
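A possible solution for this drill:

```python
numbers = [3, 1, 4, 1, 5]
print(len(numbers))  # 5
```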


 Conditional Statements
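A possible solution for this drill:

```python
number = 7
if number % 2 == 0:
    result = "Even"
else:
    result = "Odd"
print(result)  # Odd
```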


 Loops
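A possible solution for this drill:

```python
numbers = [2, 4, 6, 8, 10]
for value in numbers:
    print(value)  # prints each number on its own line
```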


 Lists
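A possible solution for this drill:

```python
values = [11, 22, 33, 44, 55]
print(values[0])   # first: 11
print(values[2])   # third: 33
print(values[-1])  # last: 55
```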


 String operations
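A possible solution for this drill (note the seventh character of "Hello, World!" is the space at index 6):

```python
greeting = "Hello, World!"
print(greeting[0])   # H  (first character)
print(greeting[6])   # the space (seventh character, index 6)
print(greeting[-1])  # !  (last character)
```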


 Function definitions


 Problem-solving patterns (Dynamic Programming)
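A possible solution for this drill, building the Fibonacci sequence bottom-up so each term reuses the two previously computed terms:

```python
def fibonacci_sequence(n):
    # Bottom-up DP: fib[i] = fib[i-1] + fib[i-2]
    if n <= 0:
        return []
    fib = [0] * n
    if n > 1:
        fib[1] = 1
    for i in range(2, n):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib

print(fibonacci_sequence(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```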

