TIP
Recurrence relations will often arise for divide and conquer based algorithms.
Child: Imagine you have a big puzzle to solve. It’s really hard to solve the whole puzzle at once, right? So, what do you do? You break it into smaller parts, solve each part, and then put those parts together to complete the whole puzzle. That’s exactly what ‘Divide and Conquer’ is: solving a big problem by breaking it into smaller problems.
Teenager: When you have a big problem, it’s often easier to divide it into smaller parts, solve each of these smaller parts, and then combine those solutions to solve the original problem. This idea is known as ‘Divide and Conquer’. It’s like cleaning your room. Instead of thinking about the whole room, you might break it down: clean the desk, then the bed, then the floor. Each part is easier to handle.
Undergrad: In computer science, ‘Divide and Conquer’ is an important algorithm design paradigm based on multi-branched recursion. It works by recursively breaking down a problem into two or more subproblems of the same or related type, until these become simple enough to be solved directly. The solutions to the subproblems are then combined to give a solution to the original problem.
Grad Student: ‘Divide and Conquer’ is an effective method for the design of efficient algorithms. It involves three steps at each level of recursion:
1. Divide the problem into a number of subproblems that are smaller instances of the same problem.
2. Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.
3. Combine the solutions to the subproblems into the solution for the original problem.
Colleague: The ‘Divide and Conquer’ paradigm optimizes the time and space complexity of algorithms in many cases, especially for problems such as Sorting (Merge Sort, Quick Sort), Multiplication of Large Integers (Karatsuba Algorithm), Matrix Multiplication (Strassen’s Algorithm), Closest Pair of Points, and many others. However, the efficient implementation of this paradigm requires a deep understanding of recursion and problem-solving skills: to identify whether a problem can be broken down into similar subproblems, to solve the base cases, and to combine the results properly.
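To ground the sorting example just mentioned, here is a minimal merge sort sketch in Python. The helper structure is illustrative rather than taken from this text, but it shows the three ingredients named throughout this piece: divide (split the list), conquer (sort each half recursively), and combine (merge the sorted halves).

```python
def merge_sort(a):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    # Divide: split the list into two halves.
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```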
Have you ever had to tackle a big project or task that seemed overwhelming at first?
Yes, we all have! Now, instead of trying to do the entire project all at once, did you find it more manageable to break it down into smaller, individual tasks?
That’s right! When we break down a big problem into smaller parts, we can focus on each part individually, making the overall problem easier to solve. Can you think of how we might use a similar approach when dealing with complex problems in computer science or mathematics?
You’re on the right path! In computer science, we often encounter problems that seem overwhelming at first. However, we can break these problems down into smaller, more manageable parts. This approach can help make the problem easier to solve. Can you guess what this problem-solving strategy is called?
Exactly! We call this strategy “Divide and Conquer”. It’s a powerful method used in computer science, where we divide the problem into smaller subproblems. Then we solve those subproblems, and combine their solutions to solve the original problem. It’s a clever way of dealing with complex problems, don’t you agree?
One big problem may be hard to solve, but two problems that are half the size may be significantly easier. In these cases, divide-and-conquer algorithms fare well by doing just that: splitting the problem into smaller subproblems, solving the subproblems independently, and combining the solutions of the subproblems into a solution of the original problem.
The situation is usually more complicated than this: after splitting one problem into subproblems, a divide-and-conquer algorithm usually splits these subproblems into even smaller sub-subproblems, and so on, until it reaches a point at which it no longer needs to recurse.
A critical step in many divide-and-conquer algorithms is the recombining of solutions to subproblems into a solution for a larger problem.
Given a function to compute on n inputs, the divide and conquer strategy splits the input into k distinct subsets, yielding k subproblems. These subproblems must be solved and then a method must be found to combine subsolutions into a solution of the whole.
If the subproblems are large, then the divide and conquer strategy may possibly be reapplied. Often the subproblems resulting from a divide and conquer design are of the same type as the original problem. For those cases, the reapplication of the divide and conquer principle is naturally expressed by a recursive procedure.
Now smaller and smaller subproblems of the same kind as the original problem are generated, eventually producing subproblems that are small enough to be solved without splitting.
We can write a program template which mirrors the way an actual program based upon divide and conquer will look. By a program template we mean a procedure whose flow of control is clear, but whose primary operations are specified by other procedures whose precise meaning is left undefined.
Well, imagine that you’re at a party and you’ve been given the task of finding a particular friend named Joe, but the place is jam-packed. How would you go about it?
You could, of course, go through the crowd one by one, asking each person, “Are you Joe?” This would be incredibly time-consuming and inefficient. So, what can you do to find Joe more quickly?
Here’s where the ‘Divide and Conquer’ approach comes in, something we often use in computer science to solve complex problems. You could divide the crowd into two halves. You then ask someone in the first half if Joe is in their half. If they say ‘yes’, you focus your search only in that half and ignore the other. If they say ‘no’, you move to the other half.
You keep dividing the crowd (or problem) into smaller and smaller sections (or subproblems) until you find Joe (or solve the subproblem). By doing this, you have significantly reduced the amount of time and effort to find Joe in that large crowd.
Similarly, ‘Divide and Conquer’ algorithms in computer science work by repeatedly breaking down a problem into two or more subproblems of the same or related type, until these become simple enough to be solved directly. The solutions to the subproblems are then combined to give a solution to the original problem. It’s like dealing with a large, complex puzzle by tackling smaller pieces of it one by one.
Let the n inputs be stored in an array called input. Here is the program template for divide and conquer:
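The template listing itself does not survive in this text, so below is a minimal sketch of what the surrounding description specifies, written in Ruby (the language this piece later names for its pseudocode). The concrete choices for small, compute, divide, and combine, which here sum the elements of the array, are purely an illustrative assumption; the template leaves them abstract.

```ruby
# The n inputs, stored in an array as described in the text.
$input = [3, 1, 4, 1, 5, 9, 2, 6]

# true if input[p..q] is small enough to solve without splitting
def small(p, q)
  q - p + 1 <= 1
end

# solve a small instance directly
def compute(p, q)
  $input[p]
end

# return the index m at which to split input[p..q]
def divide(p, q)
  (p + q) / 2
end

# combine the sub-solutions x and y (here: addition, for summing)
def combine(x, y)
  x + y
end

# the divide-and-conquer program template
def solve(p, q)
  if small(p, q)
    compute(p, q)
  else
    m = divide(p, q)
    combine(solve(p, m), solve(m + 1, q))
  end
end

puts solve(0, $input.length - 1)  # prints 31, the sum of the array
```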


small(p, q) returns true if the input size q - p + 1 is small enough that the answer can be computed without splitting. If so, the function compute() is invoked. Otherwise the function divide(p, q) is called. This function returns an integer which specifies where the input is to be split.
Let m = divide(p, q). The input is split so that input[p, m] and input[m+1, q] define instances of two subproblems. The subsolutions x and y of these two subproblems are obtained by recursive application of solve().
combine(x, y) is a function which determines the solution to input[p, q] using the solutions x and y to the subproblems input[p, m] and input[m+1, q]. If the sizes of the two subproblems are approximately equal, then the computing time of solve() is naturally described by the recurrence relation:

T(n) = compute(n)          if n is small
T(n) = 2T(n/2) + f(n)      otherwise

where:
T(n): time for solve() on n inputs.
compute(n): time to compute the answer directly for small inputs.
f(n): time for divide and combine.
For divide and conquer algorithms which produce subproblems of the same type as the original problem, it is natural to first describe such an algorithm using recursion. To gain efficiency, however, it may be desirable to translate the resulting program into iterative form.
The iterative form of divide and conquer program template:
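The iterative listing is likewise missing from this text, so here is one possible sketch, again instantiated for array summation as an illustrative assumption. It replaces the recursion with an explicit stack of [p, q] ranges; folding sub-solutions into an accumulator this way assumes the combine operation is associative and commutative (true for addition, but not for every problem).

```ruby
$input = [3, 1, 4, 1, 5, 9, 2, 6]

def small(p, q); q - p + 1 <= 1; end     # small enough to solve directly?
def compute(p, q); $input[p]; end        # direct solution for a small range
def divide(p, q); (p + q) / 2; end       # split point
def combine(x, y); x + y; end            # merge sub-solutions (illustrative)

# Iterative solve: an explicit stack of [p, q] ranges replaces the
# recursion. Each small range is solved directly and folded into the
# accumulator; larger ranges are split and pushed back on the stack.
def solve_iter(p, q)
  stack = [[p, q]]
  acc = nil
  until stack.empty?
    a, b = stack.pop
    if small(a, b)
      x = compute(a, b)
      acc = acc.nil? ? x : combine(acc, x)
    else
      m = divide(a, b)
      stack.push([a, m])
      stack.push([m + 1, b])
    end
  end
  acc
end

puts solve_iter(0, $input.length - 1)  # prints 31, the sum of the array
```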


Problems with optimal substructure can be divided into similar but smaller subproblems, again and again, until the subproblems become easy. The subproblem solutions are then combined to obtain the solution to the original problem.
The divide-and-conquer strategy solves a problem by:
1. Breaking it into subproblems that are themselves smaller instances of the same type of problem
2. Recursively solving these subproblems
3. Appropriately combining their answers
The real work is done piecemeal, in three different places: in the partitioning of the problem into subproblems; at the very tail end of the recursion, when the subproblems are so small that they are solved outright; and in the gluing together of partial answers. These pieces are held together and coordinated by the algorithm’s core recursive structure.
The divide-and-conquer strategy is a common approach in computer science and mathematics for solving complex problems. It is based on the principle of breaking down a large problem into smaller, more manageable subproblems, solving each of these subproblems, and then combining their solutions to solve the original problem.
The process of dividing the original problem is often performed recursively, breaking down each subproblem into even smaller sub-subproblems until a base case is reached. The base case is the simplest possible instance of the problem, one that can be solved directly without further subdivision.
A central element in many divide-and-conquer algorithms is the combination or merging of solutions to the subproblems. Once the base cases are solved, the solutions to the smaller subproblems are combined step by step, leading up to the solution of the original problem.
In practice, a divide-and-conquer algorithm is usually implemented as a recursive function or procedure. The function takes as input the problem to be solved and outputs the solution. The input is split into several subsets, each representing a subproblem, and the function is called recursively on each subset. The base case checks whether the problem is small enough to be solved directly; otherwise, the problem is further divided.
Let’s illustrate this with the following pseudocode in Ruby:
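The Ruby listing itself is missing from this text, so what follows is a reconstruction consistent with the function descriptions given below it; the instantiation (finding the maximum element of an array) is an illustrative assumption, not from the original.

```ruby
$input = [7, 2, 9, 4, 6]

# is the range [p, q] small enough to solve directly?
def small(p, q); q - p + 1 <= 1; end

# solve the small case directly
def compute(p, q); $input[p]; end

# return the point m at which to split the input
def divide(p, q); (p + q) / 2; end

# combine the two sub-solutions (here: take the larger, for a maximum)
def combine(x, y); x > y ? x : y; end

def solve(p, q)
  if small(p, q)
    compute(p, q)
  else
    m = divide(p, q)
    combine(solve(p, m), solve(m + 1, q))
  end
end

puts solve(0, $input.length - 1)  # prints 9, the maximum element
```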


In this code:
small(p, q): This function checks if the current problem, defined by the range [p, q], is small enough to solve directly.
compute(p, q): This function is used to solve the problem directly when it is small enough.
divide(p, q): This function is used to divide the problem into two subproblems. It returns the point ‘m’ at which to split the input.
combine(solve(p, m), solve(m + 1, q)): This line represents the recursive calls to solve the subproblems, and then combines their results into a solution for the current problem.
A common way to analyze the time complexity of a divide-and-conquer algorithm is to use a recurrence relation. The time T(n) it takes to solve a problem of size n is expressed in terms of the time it takes to solve smaller instances of the problem:

T(n) = compute(n)          if n is small
T(n) = 2T(n/2) + f(n)      otherwise

Here, compute(n) is the time to compute the answer directly for small inputs, 2T(n/2) is the total time for solving the two subproblems, and f(n) is the time for dividing the problem and combining the results.
While recursive solutions are often more straightforward to implement and understand for divide-and-conquer problems, they can lead to issues such as stack overflow for large inputs. Therefore, in some cases it may be beneficial to implement an iterative version of the algorithm.
An iterative implementation of a divide-and-conquer algorithm often makes use of a data structure such as a stack or a queue to keep track of the subproblems that still need to be solved.
There are several algorithms and concepts in computer science that are similar to Divide and Conquer, either by involving a recursive approach or by breaking down problems into smaller parts. Here are some of them:
Recursive Algorithms: Recursive algorithms often break down a problem into smaller subproblems in a manner similar to divide and conquer. However, not all recursive algorithms divide the problem into equal parts or combine the results of the subproblems to get the final answer.
Dynamic Programming: Dynamic programming is a methodology that solves complex problems by breaking them down into simpler subproblems, and stores the results of these subproblems to avoid computing the same results again. This is similar to divide and conquer, but differs in that subproblems are often overlapping in dynamic programming, while they are disjoint in divide and conquer.
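The overlapping-subproblem distinction can be seen in a tiny sketch: naive recursive Fibonacci revisits the same subproblems over and over, and memoization (here via Python’s functools.lru_cache) stores each result so it is computed only once. The example is illustrative, not taken from the original text.

```python
from functools import lru_cache

# Naive recursion would recompute overlapping subproblems:
# fib(5) calls fib(3) twice, fib(2) three times, and so on.
# Caching each result turns exponential work into linear work,
# which is the key difference from divide and conquer, where the
# subproblems are disjoint and caching would not help.

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```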
Backtracking: Backtracking also breaks down the problem into smaller subproblems, but differs from divide and conquer in that it involves exploration and discarding some solutions if they are found to be unworkable.
Branch and Bound: This technique is used in optimization problems. It divides the problem into subproblems (branch) and uses bounds to eliminate unprofitable subproblems.
Merge Sort and Quick Sort: These are specific examples of algorithms that follow a divide and conquer strategy, by breaking down the problem (sorting a list) into smaller parts, sorting them independently, and then combining them.
Binary Search: Another example where a problem (searching in a sorted list) is divided into smaller parts, one of which is discarded and the search continues in the remaining part.
Master Theorem: It is a method that solves recurrence relations which often arise when analyzing the time complexity of divide and conquer algorithms.
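To make that last point concrete, here is how the master theorem resolves the recurrence discussed earlier for the equal-split case, taking merge sort’s linear-time divide-and-combine cost f(n) = cn as an illustrative assumption:

```
T(n) = 2T(n/2) + cn          (a = 2 subproblems, each of size n/b = n/2)
n^(log_b a) = n^(log_2 2) = n, and f(n) = cn = Θ(n)
⇒ master theorem, case 2:  T(n) = Θ(n log n)
```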
All these techniques share the broad idea of solving a large problem by breaking it down into smaller, more manageable subproblems. The specifics of how this is done and how the solutions to the subproblems are used to solve the original problem differ between the techniques.
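As a concrete instance of the binary search idea listed above, here is a minimal iterative sketch in Python; the function name and return convention (index of the target, or -1 if absent) are illustrative choices.

```python
def binary_search(a, target):
    """Search a sorted list, discarding half of the remaining range each step."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```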
The term “Divide and Conquer” in computer science doesn’t strictly relate to the mathematical operation of division, as in dividing numbers. Instead, it refers to a strategic approach to problem-solving.
In a “Divide and Conquer” algorithm, the problem is divided (broken down) into smaller subproblems. These subproblems are independently solved, and then their solutions are combined to solve the original problem.
This division is usually done in a recursive manner, where each subproblem is further divided until we reach a stage where the subproblem can be solved directly without any further division. This base condition typically involves a very simple problem, the solution of which is easy to compute. Once the base condition is reached, recursion stops and the solutions of the subproblems are combined up the recursive call stack to solve the bigger problem.
So while it’s not “division” in the arithmetic sense, it is a process of breaking down a complex problem into simpler parts.
Indeed, the term “divide and conquer” can sometimes be a little misleading. It doesn’t necessarily mean literal division, but rather breaking a problem down into smaller, more manageable subproblems. Sometimes this involves division (like binary search or merge sort), but not always.
A classic example where “division” is not literally used is the QuickSort algorithm. QuickSort is a divide and conquer algorithm that sorts an array or list of items.
In QuickSort, the “division” part is done by selecting a “pivot” element from the array and partitioning the other elements into two subarrays, according to whether they are less than or greater than the pivot. The pivot is then in its final position, and the same process is recursively applied to the subarrays.
Here, the array is not necessarily divided into equal halves; the sizes of the partitions depend on the pivot and the elements in the array. Yet this is still a divide and conquer strategy, because the problem (sorting the whole array) is divided into smaller problems (sorting the two partitions).
Here’s an example of quicksort in Python:
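The Python listing itself is missing from this text; a common compact variant is sketched below. Note that, for brevity, it partitions into new lists around a middle-element pivot rather than performing the in-place partition described above; this is an assumption of the sketch, not the only way to write QuickSort.

```python
def quicksort(arr):
    # Base case: arrays of 0 or 1 elements are already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    # Partition around the pivot; the partitions need not be equal halves.
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    # Recursively sort the partitions and combine.
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 8, 10, 1, 2, 1]))  # [1, 1, 2, 3, 6, 8, 10]
```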


So, while “divide and conquer” often involves splitting a problem into roughly equal halves, it doesn’t always. The key concept is not the act of division, but the act of breaking a larger problem down into smaller subproblems that are easier to solve.
QuickSort is considered a divide and conquer algorithm because it partitions (divides) the input into two subarrays and then recursively sorts each subarray independently. The dividing step in QuickSort doesn’t just make the problem slightly smaller (as in decrease and conquer); it creates two new, independent subproblems.
In contrast, a decrease and conquer algorithm typically solves one smaller instance of the problem and uses that to solve the larger instance. In other words, decrease and conquer problems solve a single subproblem, while divide and conquer strategies tackle multiple subproblems.
So, although the partition in QuickSort doesn’t necessarily split the array into two equal halves, it still generates two independent subproblems to be solved, fitting the paradigm of divide and conquer.
QuickSort is a clear example of the Divide and Conquer strategy; binary search is a borderline case that different authors classify differently.
In binary search, the problem size is halved at each step, but only one of the two halves is pursued and no combine step is needed. Because it reduces to a single subproblem, many texts classify binary search as Decrease and Conquer (by a constant factor), even though it is often presented alongside Divide and Conquer algorithms.
QuickSort, by contrast, divides the input array into two smaller arrays around a pivot element and recursively sorts both. This division is performed recursively until the base case (an array of one or zero elements) is reached, so it fits the Divide and Conquer paradigm squarely.
In contrast, a Decrease and Conquer strategy reduces the problem to a single smaller instance at each step, by a constant amount or by a constant factor, rather than generating multiple subproblems to combine. An example of a Decrease and Conquer algorithm is Insertion Sort, where in each iteration one element is taken from the input, processed, and added to the already sorted section of the data.
That’s correct. The term “divide” in “divide and conquer” is more conceptual than mathematical. It refers to breaking down a problem into smaller, more manageable subproblems. These subproblems are usually (but not always) similar or identical to the original problem. The subproblems are then solved, and their solutions are combined to create a solution to the original problem.
This division of the problem doesn’t necessarily involve the mathematical operation of division (as in dividing one number by another). For example, in the QuickSort algorithm, the array is “divided” into subarrays based on the pivot element, but this process doesn’t involve any mathematical division.
So yes, the concept of division in divide and conquer is about partitioning or breaking down the problem, rather than the mathematical operation of division.
Divide-and-conquer is a fundamental algorithmic technique for solving problems. It is applicable to a broad range of problems and is often combined with other techniques, such as recursion or iteration. This strategy provides an efficient and effective way to design algorithms.