Fair Distribution of Cookies Q&A
I can’t help but wonder why this is not a binary search problem; its language is similar to most binary search problems.
It’s a good observation, but not all problems with similar language structures map directly to a known algorithmic paradigm like binary search. The key lies in understanding the nature of the problem.
Binary search is a divide and conquer algorithm used for searching a sorted array by repeatedly dividing the search interval in half. The critical aspect is the sorted array and the ability to exclude half of the remaining elements from consideration at each step.
In this problem, you’re given an unsorted list of cookie counts and a number of children. The task is to distribute the cookies among the children to minimize the maximum number of cookies any child gets. The problem is asking for an optimal distribution of an unsorted list into several different groups, which doesn’t lend itself to a straightforward binary search solution.
The “unfairness” measure is not something that we can sort or search in a binary way. Instead, we need to evaluate different distribution strategies to find the one with the lowest maximum unfairness.
Binary search could be used if we had a sorted sequence and we were looking for a particular element (or condition) within that sequence. However, in this problem, the sequence (cookies array) is unsorted, and we’re not looking for a specific element but rather an optimal distribution.
The provided backtracking solution works because it explores all possible distributions of cookies, updating the minimum unfairness when a better distribution is found. The optimization comes from stopping the exploration of a distribution if it already exceeds the current minimum unfairness.
So, even though the language might seem to suggest a binary search type problem, the structure of the problem and the nature of the solution indicate that it is better suited to a backtracking approach.
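The backtracking approach described above can be sketched as follows. This is a minimal illustration only (the function name distribute_cookies and the sample inputs are my own, not the code from the thread):

```python
def distribute_cookies(cookies, k):
    best = float('inf')          # current minimum unfairness found
    child = [0] * k              # running total of cookies per child

    def backtrack(i):
        nonlocal best
        if i == len(cookies):
            best = min(best, max(child))
            return
        for j in range(k):
            child[j] += cookies[i]
            # prune: abandon this branch if it already exceeds the best answer
            if child[j] < best:
                backtrack(i + 1)
            child[j] -= cookies[i]
            # symmetry pruning: an empty child is interchangeable with every
            # later empty child, so trying them again is redundant
            if child[j] == 0:
                break

    backtrack(0)
    return best

print(distribute_cookies([8, 15, 10, 20, 8], 2))  # -> 31
```

The pruning line is exactly the optimization described above: a partial distribution that already exceeds the current minimum unfairness is never explored further.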
We can use binary search in this problem to do the pruning, by searching for the answer’s upper bound.


Your solution is indeed a clever approach to the problem and an example of a more advanced technique called binary search on the answer, which can be highly efficient in many optimization problems. Your code maintains the idea of backtracking but applies a binary search to limit the search space effectively.
Here’s how it works:
First, you sort the cookies array. Then, you establish a lower bound lb and an upper bound rb for the binary search. The lower bound is the largest single bag, since the child who receives that bag gets at least that many cookies; the upper bound is the total number of cookies, since that is the most any child could receive.

Next, you perform a binary search between these bounds. For each midpoint mb, you try to distribute the cookies such that no child gets more than mb cookies. This check is done by the depth-first search (dfs) function.

- If the cookies can be distributed under this restriction, the unfairness can be at most mb, so you update the answer ans and continue searching for a smaller mb in the lower half of the search space (rb = mb - 1).
- If they cannot, you search for a larger mb in the upper half of the search space (lb = mb + 1).

The depth-first search function (dfs) tries to distribute the cookies such that no child gets more than mb cookies. It does so by assigning each cookie to a child if doing so doesn’t exceed the limit. If all cookies can be distributed, it returns true; if not, it returns false.

This solution combines the backtracking algorithm’s exhaustive search with the binary search’s efficiency, providing a powerful approach to this optimization problem. The binary search limits the search space for the unfairness value, significantly improving the efficiency of the problem-solving process.
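Under the assumptions above, a hedged sketch of this binary-search-on-the-answer approach might look like the following (the names lb, rb, mb, ans, and dfs follow the description; the surrounding function is my own illustration, not the code from the thread):

```python
def min_unfairness(cookies, k):
    # sorting descending helps the dfs hit infeasible branches sooner
    cookies = sorted(cookies, reverse=True)

    def feasible(limit):
        # can every bag be handed out with no child exceeding `limit`?
        child = [0] * k

        def dfs(i):
            if i == len(cookies):
                return True
            for j in range(k):
                if child[j] + cookies[i] <= limit:
                    child[j] += cookies[i]
                    if dfs(i + 1):
                        return True
                    child[j] -= cookies[i]
                if child[j] == 0:  # empty children are interchangeable
                    break
            return False

        return dfs(0)

    lb, rb = max(cookies), sum(cookies)
    ans = rb
    while lb <= rb:
        mb = (lb + rb) // 2
        if feasible(mb):
            ans = mb        # mb is achievable; try to lower it
            rb = mb - 1
        else:
            lb = mb + 1     # mb is too tight; raise the limit
    return ans
```

The feasibility check is monotone (if a limit works, every larger limit works), which is what makes the binary search on the answer valid.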
Thank you for sharing this excellent solution. I believe it will be quite helpful for others who encounter this problem.
Because all the problems we solved with binary search mention consecutiveness in the description, but in the problem above there is no constraint on consecutive distribution, we cannot apply binary search here.
You’re correct in observing that binary search is often used when there is some sort of ordering or consecutiveness involved in the problem. However, binary search can also be applied in a variety of non-traditional ways, especially when trying to optimize a certain parameter or value.
In this problem, we’re using binary search to optimize the “unfairness” value, which is defined as the maximum number of cookies any child gets. We’re essentially using binary search to home in on the minimum possible value of the maximum number of cookies a child can get, which is why this approach works, even though there’s no explicit consecutiveness or ordering in the distribution of cookies to children.
This usage is sometimes referred to as a “binary search on the answer”, because we’re applying the binary search technique not directly on the input array, but on the range of possible answers to our problem. This is a more advanced and creative use of binary search, and it’s a very valuable tool in the problem-solving toolkit for optimization problems.
So, while your observation about binary search typically involving consecutiveness is generally correct, there are indeed cases like this where binary search can be used in non-traditional ways.
Hence, no boundary to search here.
Yes, in the classic sense of binary search where we are looking for a specific element in a sorted array or list, there is a clear boundary or range we are searching within. The search space is defined by the sorted array or list and the element we are looking for.
However, in this problem, we are not searching for a specific element within a given sorted list or array. Instead, we are using a technique often called “binary search on the answer” where we are trying to find a value (in this case, the minimum unfairness of cookie distribution) which is not directly present in our input but is a result of our computation or distribution strategy. Here, our boundaries are defined by the range of possible unfairness values, which go from the maximum value in our cookie array to the sum of all cookies. We are then binary searching within these boundaries to find the optimal distribution unfairness.
So in essence, while there might not be a direct boundary as with a traditional binary search, the problem does provide an indirect boundary (range of unfairness values) within which we apply the binary search technique to find our optimal solution. It’s a more advanced use of the binary search technique and can be a powerful tool for solving certain types of problems.
Hmm, I had a similar thought while solving, but I didn’t know how to apply binary search over this. The constraints suggested a brute force algorithm should work here, so I simply used bitmask & backtracking. Much easier than working out the binary search here.
Absolutely! The approach you chose is perfectly valid. Using bit masking and backtracking is a straightforward way to approach this problem, especially given the constraints.
Binary search on the answer, or using binary search to optimize a result rather than find an element, can be more difficult to conceptualize and implement. It’s often used when the search space is large and a brute-force approach would be too slow. However, in this case, the constraints of the problem allow for a simpler, brute-force solution.
Ultimately, choosing the appropriate algorithm depends on the problem at hand, its constraints, and your familiarity with different algorithms. It’s great that you’re considering different ways to approach the problem and understanding when one might be preferable over the other! That kind of thinking will serve you well in algorithm design and problemsolving.
because there is no contiguous allocation.
Exactly! Binary search is a powerful tool when dealing with problems that involve ordered data or contiguous allocations. However, in this particular problem, the distribution of cookies to children is not sequential or ordered, so we can’t easily apply binary search. Instead, we use backtracking to explore all possible distributions, which is a more fitting strategy in this case. Remember, understanding the characteristics and requirements of a problem is crucial to choosing the right algorithmic approach!
Backtracking means that you have to try almost every case. But there is still an alternative by using DP & bit mask! It is doable.
Dynamic Programming (DP) combined with Bitmasking can provide a more efficient approach for this problem. It’s especially effective when the problem size isn’t too large.
Here’s how we can think about it:
Bitmasking: We can use a bitmask to represent which bags of cookies have been taken. For an array of size n, there are 2^n possibilities. In this case, where n can go up to 8, it’s manageable.
Dynamic Programming (DP): Once we have our bitmask, we can use dynamic programming to store and reuse answers to subproblems. This reduces the time complexity significantly.
However, remember that while DP and bitmasking can be more efficient, it can also be a bit harder to grasp conceptually, especially if you’re not familiar with these techniques. Backtracking is a more straightforward method and might be easier to understand if you’re just getting started with problem-solving in competitive programming. As you gain more experience, you’ll find that being able to apply more advanced techniques like DP and bitmasking is a powerful tool in your toolkit.
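As a rough sketch of the DP + bitmask idea described above (all names are my own illustration; for n bags it enumerates submasks in roughly O(k · 3^n) time, which is fine for n up to 8):

```python
def fair_dp(cookies, k):
    n = len(cookies)
    full = (1 << n) - 1

    # subset_sum[m] = total cookies in the bags selected by bitmask m
    subset_sum = [0] * (1 << n)
    for m in range(1, 1 << n):
        low = m & -m                      # lowest set bit of m
        subset_sum[m] = subset_sum[m ^ low] + cookies[low.bit_length() - 1]

    # dp[m] = minimum unfairness when the bags in m are distributed among
    # the children considered so far; start with a single child taking all of m
    dp = subset_sum[:]
    for _ in range(k - 1):                # add one more child each round
        ndp = [float('inf')] * (1 << n)
        for m in range(1 << n):
            s = m
            while True:                   # enumerate submasks s of m
                ndp[m] = min(ndp[m], max(dp[m ^ s], subset_sum[s]))
                if s == 0:
                    break
                s = (s - 1) & m
        dp = ndp
    return dp[full]
```

Each round assigns one more child a submask of the remaining bags and takes the minimum over all splits, reusing the subproblem answers stored in dp, which is exactly the “store and reuse answers to subproblems” idea above.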
Whenever the question says to find the minimum of a maximum or the maximum of a minimum, it is a question of BINARY SEARCH.
Yes, sure. Taking examples of questions like maximum pages allocation, the painter’s partition problem, or aggressive cows: all of these are solved using binary search, and in each of them the question asked to find the minimum of a maximum. In these questions, if we are able to find the search space for our binary search, half of the question is already over; that is the main part to think about and evaluate. In about 50% of cases it can be done by taking the lower limit as 0 or the least element and the upper limit as the sum of all the elements. You can check my solution for further understanding; it took a lot of effort in researching and explaining.
Yes, you’re absolutely correct. The technique you’re describing is sometimes referred to as “binary search on the answer,” which is a type of binary search where we use binary search to find the optimal value from a range of possible answers.
In such problems, we usually have a predicate function that can test whether a certain value is feasible or not, and our goal is to find the smallest or largest feasible value.
For the “distribute cookies” problem, a binary search approach could look something like this:
Finding the Search Space: The lower bound can be the maximum value in the cookies array, and the upper bound can be the sum of all elements in the cookies array.
Predicate Function: Given a value X, the predicate function checks whether it is possible to distribute the cookies such that no child receives more than X cookies. Because the bags need not be assigned contiguously, this check itself requires a small backtracking search rather than a single linear pass over the array.
Binary Search: We then perform a binary search over the possible values of X using our predicate function, aiming to find the smallest X for which the predicate function returns true.
I agree that recognizing this pattern and determining the search space is often the most crucial part of solving such problems. Once we can do that, we can use a standard binary search template to find the solution.
How do you find if the current distribution is optimal or not?
In the context of this problem, a distribution is considered “optimal” if the maximum number of cookies received by any child is minimized. Essentially, we’re trying to make the distribution as fair as possible by reducing the discrepancy between the child who gets the most cookies and the rest.
When using the backtracking approach, we iterate over all possible distributions and keep track of the one that has the smallest maximum number of cookies (i.e., the smallest unfairness). This means we have to explore all possible distributions to ensure we’ve found the best one, which can be computationally intensive.
When using the binary search approach, the “optimal” distribution is evaluated a bit differently. We define a predicate function that, given a certain number of cookies (X), checks if it’s possible to distribute the cookies so that no child receives more than X cookies. The binary search algorithm then searches for the smallest X for which this is possible.
In both cases, determining whether a distribution is “optimal” relies on comparing it to all other possible distributions, either implicitly (in the case of binary search) or explicitly (in the case of backtracking). It’s worth noting that while these methods can find the optimal solution, they do not necessarily tell us if there is only one optimal solution – there could be multiple distributions that all achieve the same minimum unfairness.
Time Limit Exceeded: 27 / 38 test cases passed.
To pass all cases, use:


I see that you’re implementing a recursive backtracking solution with memoization. That’s a good approach for this problem. However, the part of your code that sorts the children for each recursive call may be causing excessive time complexity.
Sorting an array has a time complexity of O(k log k), where k is the number of elements being sorted. In your code, you’re sorting the ‘child’ array (whose size is k) on every recursive call to ‘knapSack’, and since ‘knapSack’ is called recursively for each cookie, that sorting cost is multiplied across an exponential number of calls, which could be causing your time limit exceeded error.
Instead of re-sorting ‘child’ inside every call, you can build the memoization key once per lookup, for example as the sorted tuple of the children’s totals together with the index of the current cookie. Since the children are interchangeable, this canonical key lets all equivalent states share a single cache entry. (Note that using only the maximum value in ‘child’ as the key is not sound, because many different states share the same maximum.)
Here’s a modified version of your code:
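(The modified snippet itself wasn’t preserved in this thread, so the following is only a hedged sketch of the memoization idea, using a canonical sorted child state as the cache key; all names are illustrative, not the original code.)

```python
from functools import lru_cache

def min_unfairness_memo(cookies, k):
    n = len(cookies)

    @lru_cache(maxsize=None)
    def solve(i, child):
        # `child` is a sorted tuple of the k children's totals; sorting
        # makes permuted-but-equivalent states share one cache entry
        if i == n:
            return child[-1]          # sorted, so the last entry is the max
        best = float('inf')
        tried = set()
        for j in range(k):
            if child[j] in tried:     # equal loads lead to identical subtrees
                continue
            tried.add(child[j])
            nxt = tuple(sorted(child[:j] + (child[j] + cookies[i],) + child[j + 1:]))
            best = min(best, solve(i + 1, nxt))
        return best

    return solve(0, (0,) * k)
```

Because the state is canonical, two partial distributions that differ only by relabeling children hit the same cache entry, which is where the speedup comes from.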


I hope this helps! Let me know if you have any other questions.
Sorting the cookies in decreasing order took my backtracking code from TLE to beating 96%, and I just don’t get it.
Depends on what pruning (i.e., early termination) technique you’re using. It shouldn’t change anything if you don’t do any pruning. If you are, for example, terminating a certain permutation as soon as the current maximum crosses the current best answer, sorting in descending order ensures that you cross this threshold early, saving time and space (via the call stack).
That’s correct. If you’re using a pruning technique in your backtracking solution, sorting the cookies in descending order can have a big impact on performance.
When you’re distributing the cookies, you’re essentially generating permutations of ways to distribute cookies to the children. In your pruning condition, you’re probably stopping the generation of a permutation as soon as you reach a distribution where one of the children has more cookies than the best solution found so far.
If the cookies are sorted in descending order, the children who are considered earlier in the permutations will get the larger cookies first. This increases the likelihood that a distribution will be pruned early in the process of generating it, because the children with more cookies will likely cross the threshold of the current best solution more quickly.
This leads to fewer permutations being fully generated, and fewer recursive calls, which speeds up the algorithm and reduces the amount of memory used by the call stack.
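One way to see this effect is to count the recursive calls of a best-answer-pruned backtracking run with the input sorted descending versus left in its original order (an illustrative sketch; the names and sample data are my own):

```python
def pruned_backtrack(cookies, k):
    # returns (best unfairness, number of recursive calls made)
    best = [float('inf')]
    child = [0] * k
    calls = [0]

    def backtrack(i):
        calls[0] += 1
        if i == len(cookies):
            best[0] = min(best[0], max(child))
            return
        for j in range(k):
            child[j] += cookies[i]
            if child[j] < best[0]:   # prune: already as unfair as the best
                backtrack(i + 1)
            child[j] -= cookies[i]

    backtrack(0)
    return best[0], calls[0]

data = [6, 1, 3, 2, 2, 4, 1, 2]
_, calls_sorted = pruned_backtrack(sorted(data, reverse=True), 3)
_, calls_plain = pruned_backtrack(data, 3)
# Descending order tends to cross the pruning threshold sooner,
# so it usually explores fewer nodes:
print(calls_sorted, calls_plain)
```

Both orders return the same optimal unfairness; only the amount of work differs.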
The constraints are the biggest hints to this problem
Absolutely, in many algorithmic problems, constraints provide crucial hints for the appropriate solution approach.
In this problem, we’re given that 2 <= k <= cookies.length <= 8. This constraint hints that a brute force or backtracking approach could potentially be applied, because the problem size (on the order of 2^cookies.length subsets, or k^cookies.length assignments, in a brute force scenario) is small enough to be computed within reasonable time limits.
Additionally, the upper limit on the number of cookies also tells us that an exact solution is required (as opposed to an approximate one), again indicating that a combinatorial approach such as backtracking would be applicable.
However, even with these constraints, a naive brute force solution might still be too slow. Thus, the constraints also suggest that some optimization would be required, either by using dynamic programming, memoization, or pruning. The specific optimization would depend on the nature of the problem. In this case, pruning the search space using bounding conditions proved to be a successful approach.
Finally, the fact that k <= cookies.length guarantees there are at least as many bags as children, so every child can receive at least one bag. Since the bag sizes differ, the distribution generally cannot be perfectly even, and that is the main challenge we need to address: it guides us toward solutions that minimize the maximum number of cookies any child can receive.
Can’t we do it with minimum subset problem approach?
The problem of distributing cookies among children such that the maximum number of cookies received by any child is minimized can be thought of as a variation of the subset sum problem. However, there are some key differences that make this problem more complex and prevent a direct application of the subset sum approach.
In the subset sum problem, we’re given a set of numbers and we have to determine whether there is a subset whose sum is a particular target value. It’s a binary problem: either such a subset exists or it doesn’t.
In this problem, we’re given a set of numbers (the cookies) and a number of subsets (the children), and we have to divide the numbers among the subsets such that the maximum sum in any subset is minimized. This is an optimization problem rather than a binary one.
Furthermore, in the subset sum problem, we only need to find one subset that meets the criteria. In this problem, we need to find a distribution of the entire set across multiple subsets.
While there are some similarities between the problems, they are different in key ways. In particular, this problem has additional constraints and a different objective, which make it more complex. You would likely need a more advanced algorithm, such as backtracking or binary search, to solve it effectively.
I came up with the backtracking idea within 10 minutes, but I was sure there must be a more efficient solution. So I wasted another hour looking for a more optimal one.
Your experience is quite common in competitive programming and problem-solving in general. It’s good to remember that not all problems have a “more optimal” solution than the first one you come up with. Sometimes, a brute force or backtracking solution might be the most efficient one given the problem’s constraints.
In this case, backtracking was indeed the right approach given the problem’s structure and constraints. It’s often a good idea to implement and test your initial solution if it fits within the problem’s constraints before spending too much time looking for a more optimal solution.
That said, striving for more efficient solutions is a great habit in problemsolving and will serve you well. Just remember to balance this with practical considerations. If your initial solution is good enough for the problem at hand, it might be best to go ahead with it.


How does the statement if(child[i] == 0) break; increase the efficiency of the code?
The statement if(child[i] == 0) break; is a symmetry-breaking pruning step that eliminates redundant recursive calls, thereby increasing the efficiency of the code.
At the point where the check runs, the current cookie has just been taken back from child i, so child[i] == 0 means child i had no cookies before this attempt. Every child after i that also has zero cookies is indistinguishable from child i: giving the current cookie to any of them produces exactly the same set of distributions, just with the empty children relabeled. Breaking out of the loop therefore skips those duplicate branches.
By cutting these symmetric branches, the backtracking algorithm explores each genuinely distinct distribution only once, which reduces the size of the search space it needs to explore and thereby increases its efficiency.
However, please note that this type of optimization is problemspecific and depends on the problem’s unique requirements and constraints. It may not be applicable or beneficial in all situations.
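The effect of this pruning can be made concrete by counting recursive calls with and without the break (an illustrative sketch, not code from the thread):

```python
def count_calls(cookies, k, symmetry_prune):
    # count how many times the backtracking function is invoked
    child = [0] * k
    calls = [0]

    def backtrack(i):
        calls[0] += 1
        if i == len(cookies):
            return
        for j in range(k):
            child[j] += cookies[i]
            backtrack(i + 1)
            child[j] -= cookies[i]
            if symmetry_prune and child[j] == 0:
                break                 # later empty children are equivalent

    backtrack(0)
    return calls[0]

data = [6, 1, 3, 2, 2, 4, 1, 2]
print(count_calls(data, 3, False))  # full 3-ary tree: 9841 calls
print(count_calls(data, 3, True))   # far fewer with the break
```

Without the break, the recursion tree is a complete k-ary tree over the cookies; with it, every branch that only relabels empty children disappears.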
Why is the dp solution failing here? Isn’t there a repetitive subproblem?
The problem of distributing cookies to children might initially seem like it could be solved using a dynamic programming (DP) approach, especially given that there seems to be overlapping subproblems (i.e., the distribution of remaining cookies after giving some to a child).
However, there are a few reasons why a traditional DP solution could fail or be inefficient for this problem:
State definition is complex: In a DP problem, we typically define the state in terms of a few variables, like the current index or remaining value, and we base our subproblems around these states. However, in this case, to define a state, we need to keep track of the remaining cookies, the number of children left, and also how much each child has already received to maintain the “minimum unfairness” constraint. This leads to a very high-dimensional DP table, which can be challenging to handle and reason about.
Optimal substructure is subtle: DP relies on the principle of optimal substructure, which says that the optimal solution to a problem can be constructed efficiently from the optimal solutions of its subproblems. In this problem, giving more cookies to a child in an attempt to optimize the current step does not guarantee an optimal end result, because we are minimizing the maximum number of cookies a child can get; the choice at one step depends on choices at other steps. A simple per-cookie DP state therefore does not capture the problem, though a DP over subsets of bags does.
These factors make the problem unsuitable for a straightforward DP solution. However, a more advanced technique, such as using DP with bitmasking to keep track of which cookies have been distributed, might be possible but would be far more complex than simpler approaches like backtracking or binary search.
[941,797,1475,638,191,712] 3
With this test case the given solution is 1653, but according to my algorithm the optimized distribution for the above is 1579, 1509, 1666, and with this distribution the answer should be 1666, not 1653. Can anyone explain this to me?
From the problem statement, the goal is to distribute the cookie bags among k children such that the maximum number of cookies any child receives (the unfairness) is minimized. The program should output the minimum possible value of that maximum.
For the cookies array [941,797,1475,638,191,712] and 3 children, your distribution is:
- Child 1 gets 941 and 638 (total 1579)
- Child 2 gets 797 and 712 (total 1509)
- Child 3 gets 1475 and 191 (total 1666)
The answer for a distribution is not any one child’s total but the largest total, so the unfairness of your distribution is max(1579, 1509, 1666) = 1666.
A better distribution exists:
- Child 1 gets 1475 (total 1475)
- Child 2 gets 941 and 712 (total 1653)
- Child 3 gets 797, 638, and 191 (total 1626)
Here the unfairness is max(1475, 1653, 1626) = 1653, which is lower than 1666. In fact, no distribution can do better: the bag of 1475 must go to some child, and if it is grouped with any other bag, that child already exceeds 1653 (1475 + 191 = 1666 is the smallest such total). With 1475 alone, the remaining bags sum to 3279 and must be split between the other two children, and checking the possible splits shows that one of those children always receives at least 1653 (for example 941 + 712).
So the expected output of 1653 is correct; your distribution is valid but not optimal.
Binary Search + 0/1Kp ? Will this approach work?
In this problem, a direct combination of binary search with the 0/1 knapsack would not work, although binary search on the answer (discussed earlier in this thread) does apply. Here’s why the direct forms fall short:
Binary Search: the classic form is effective when we have a sorted array and a target to find; we partition the array and compare the target with the middle element to decide which half to keep. In the current problem, there is no single target element to locate in the cookies array. What does work is binary searching over the range of possible unfairness values and testing each candidate with a feasibility check.
0/1 Knapsack: This algorithm is typically used when we need to maximize or minimize the total value or weight by choosing or not choosing certain items with given values and weights. We decide whether to take an item based on whether the optimal solution includes this item or not. The current problem does not resemble the 0/1 Knapsack because we don’t have to choose or not choose cookies. Every cookie has to be distributed, and there’s no concept of the weight of the cookies.
The crux of this problem lies in dividing the cookies in a way that the child who gets the maximum cookies gets as few as possible, i.e., we want to minimize the maximum number of cookies any child receives. This involves exploring all possibilities of distributing the cookies, making it a good fit for a backtracking approach. We distribute the cookies, calculate the unfairness (maximum cookies any child gets), backtrack, distribute them differently, and continue this process until we find the most fair distribution. This can also be optimized with techniques like pruning and memoization.
Why can’t all the cookies go to a same person??
The goal of the problem is to distribute the cookies among “k” children such that the maximum number of cookies received by any child is minimized. If all the cookies go to the same person, then the maximum number of cookies received by that child would not be minimized. It would, in fact, be the maximum possible (the total number of cookies), which is contrary to what the problem asks for.
The problem is essentially asking us to distribute the cookies as evenly as possible among the “k” children. We want to avoid a situation where one child gets a lot more cookies than the others, which is why we can’t just give all the cookies to one child.
Can it be done using a min priority queue?
A min-priority queue (min-heap) gives a natural greedy strategy for this problem, but note that it is only a heuristic: it does not guarantee the minimum possible unfairness.
The idea is to create a min-heap with size equal to the number of children (k), initialize it with 0’s, and give each cookie bag to the child who currently has the least amount of cookies. This keeps the distribution roughly even, but because the method never reconsiders earlier assignments, it can end up with a larger maximum than the true optimum.
Here’s how the Python heuristic might look:
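(The snippet itself wasn’t preserved in this thread; the following is a sketch of the described min-heap greedy. Note it is a heuristic and does not always achieve the minimum possible unfairness.)

```python
import heapq

def heap_heuristic(cookies, k):
    heap = [0] * k                       # each child's running total
    for c in cookies:
        # give the bag to the currently lightest child
        heapq.heappush(heap, heapq.heappop(heap) + c)
    return max(heap)                     # min-heap top is the MIN, so take max()

# Heuristic only: for [1, 2, 3] with k = 2 it answers 4,
# while the optimal unfairness is 3 ([1, 2] vs [3]).
print(heap_heuristic([1, 2, 3], 2))  # -> 4
```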


In this solution, the heap stores the total number of cookies each child has. Each time we get a new bag, we give it to the child who currently has the fewest cookies (the root of the min-heap). At the end, the largest total in the heap is the unfairness of this greedy distribution; note that in a min-heap the maximum is not at the top, so we take the maximum over the whole heap. The time complexity is O(n log k), where n is the number of bags and k is the number of children, but again, the result is a good distribution, not necessarily the optimal one.
Your code is buggy. Let’s move on to another question: is the distribution continuous, or may it be random?
The distribution of cookies to children is not required to be continuous. The only goal is to minimize the maximum number of cookies any one child receives, so the bags can be assigned in any combination that achieves this. Note that giving all the cookies to a single child is allowed but would maximize, rather than minimize, the unfairness (unless there is only one child).
This means that the distribution could potentially be random, as long as it meets the criteria of minimizing the maximum number of cookies any one child receives. The best approach would involve some kind of strategy, however, rather than purely random distribution.
I thought, at first glance, that it would be solved using a greedy approach with sorting and so on. But that’s not the case.
The instinct to use a greedy algorithm by sorting the array is understandable given that many problems can be solved this way. However, for this problem, a greedy algorithm won’t necessarily give the correct answer.
The reason is that a greedy algorithm would involve giving the smallest remaining cookie to the child who currently has the least. But there may be a better distribution where that smallest cookie is given to a child who currently has more, with the result that the maximum number of cookies any child has is smaller.
For example, consider the case where there are two children and the cookies have sizes [1, 2, 3]. A greedy algorithm would distribute the cookies as follows:
 Give the smallest cookie (1) to the first child.
 Give the next smallest cookie (2) to the second child, because the first child already has a cookie.
 Give the largest cookie (3) to the first child, because they have fewer cookies.
So the distribution would be [1, 3] and [2], with the maximum number of cookies being 4.
However, a better distribution would be [1, 2] and [3], with the maximum number of cookies being 3. So a greedy algorithm doesn’t work in this case, and that’s why the backtracking approach is used instead, as it examines all possible distributions.
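The walk-through above can be checked with a few lines (an illustrative sketch; the function name is my own):

```python
def smallest_first_greedy(cookies, k):
    loads = [0] * k
    for c in sorted(cookies):            # hand out the smallest bag first
        i = loads.index(min(loads))      # child with the fewest cookies
        loads[i] += c
    return max(loads)

print(smallest_first_greedy([1, 2, 3], 2))  # -> 4, but the optimum is 3
```

The greedy run reproduces the distribution [1, 3] and [2] from the example, confirming that it misses the optimal unfairness of 3.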
Can anyone please tell me, for larger N, what should we do? We can’t use backtracking, so how do we solve it?
When N is large, a backtracking approach becomes computationally infeasible due to its exponential time complexity, so more efficient strategies such as dynamic programming or greedy algorithms are needed if they are applicable. However, for this particular problem neither is straightforward: minimizing the maximum group sum is equivalent to multiway number partitioning (minimizing makespan on identical machines), which is NP-hard in general.
One potential approach for large N is an optimization of backtracking known as branch and bound. This method prunes the search tree, allowing the algorithm to ignore branches that are guaranteed not to contain the optimal solution.
In addition, if the distribution of the input data is known, it may be possible to use a heuristic to guide the search towards promising regions of the solution space. However, this would require problem-specific knowledge.
As another alternative, approximation algorithms or heuristics could be used to obtain a solution that is “good enough,” although it may not be the absolute best. These approaches are often used in real-world situations where finding an exact solution is too costly or time-consuming.
In the end, for extremely large N, it may not be feasible to find an exact solution using any method due to the inherent complexity of the problem. Instead, one might need to settle for an approximation or a probabilistic algorithm that can find a likely good solution quickly.
Can anyone provide a contradictory example, where greedy actually fails for the above problem?
A simple scenario where a greedy approach could fail in this problem would be the following:
Suppose we have cookies = [5, 10, 2]
and k = 2
.
The greedy approach would be to always give the largest available cookie to the child who currently has the least total cookies. This means we first give cookie 10 to child 1 (cookies now: child 1 = 10, child 2 = 0), then cookie 5 to child 2 (cookies now: child 1 = 10, child 2 = 5), and finally cookie 2 to child 2 (cookies now: child 1 = 10, child 2 = 7).
So the maximum number of cookies any child has (the unfairness) is 10. In fact, 10 is optimal for this input: the bag of 10 must go entirely to some child, so no distribution can have an unfairness below 10. This three-bag example therefore cannot demonstrate a failure of the greedy approach.
To actually demonstrate the failure of the greedy approach, consider cookies = [3, 3, 2, 2, 2] and k = 2.
The greedy approach (always give the largest remaining bag to the child with the fewest cookies) distributes: 3 to child 1 (3, 0), 3 to child 2 (3, 3), 2 to child 1 (5, 3), 2 to child 2 (5, 5), and 2 to child 1 (7, 5). The unfairness is 7.
However, the optimal distribution is {3, 3} for child 1 and {2, 2, 2} for child 2 (totals 6 and 6), with an unfairness of only 6. Hence, this example demonstrates a case where the greedy approach fails to find the optimal solution.
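As a sanity check, here is a small Python script (helper names are illustrative) comparing the largest-first greedy rule against a brute-force search on cookies = [3, 3, 2, 2, 2] with k = 2, a classic counterexample for this kind of greedy:

```python
from itertools import product

def greedy_lpt(cookies, k):
    # Greedy: hand out bags largest-first, always to the least-loaded child.
    loads = [0] * k
    for c in sorted(cookies, reverse=True):
        loads[loads.index(min(loads))] += c
    return max(loads)

def brute_force(cookies, k):
    # Try every assignment of bags to children; keep the smallest maximum.
    best = float("inf")
    for assign in product(range(k), repeat=len(cookies)):
        loads = [0] * k
        for bag, child in zip(cookies, assign):
            loads[child] += bag
        best = min(best, max(loads))
    return best

print(greedy_lpt([3, 3, 2, 2, 2], 2))   # 7 -- greedy's unfairness
print(brute_force([3, 3, 2, 2, 2], 2))  # 6 -- the optimum: {3,3} vs {2,2,2}
```

Greedy commits the two 3-bags to different children early and can never recover, while the optimal solution pairs them together.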
All the cookies in the same bag must go to the same child and cannot be split up. What does it mean?
The phrase “All the cookies in the same bag must go to the same child and cannot be split up” means that you cannot divide the quantity of cookies from a single bag among different children.
For instance, if you have a bag of 5 cookies, you cannot give 2 cookies from this bag to one child and the remaining 3 cookies to another. The entire bag of 5 cookies must be given to a single child.
it means arr[i] cannot be split up; should be taken as a whole
Yes, that's correct! Each element arr[i] in the array represents a "bag of cookies". This bag must be given entirely to one child; its contents cannot be divided amongst different children. This is a critical constraint in the problem and must be adhered to when distributing the cookies.
Just an addon. How do we solve it if the k was a big value.
If ‘k’ is a large value, using a brute force method like backtracking will become infeasible due to the explosion of computation time. For such scenarios, we need to look for optimized algorithms, often involving Dynamic Programming (DP) or Greedy approaches.
However, for this specific problem, both dynamic programming and greedy approaches are difficult to apply due to the nature of the problem.
The dynamic programming approach is tough because the state transition is not easy to find. We are not simply selecting or not selecting a bag for a child, but distributing these bags amongst ‘k’ children. This increases the complexity of defining the state and transition.
The greedy approach might not work here because it is not always optimal to give the child with fewer cookies more cookies or to give the largest bag of cookies to the child with fewer cookies.
When 'k' is large, a possible approach would be to combine a binary search over the answer with a feasibility check that uses a priority queue (a heap).
Here’s an overview of this approach:
The binary search range would be from the largest single bag to the sum of all cookies (a single bag is a lower bound, since it cannot be split). For each mid value in the binary search, check whether it's feasible to distribute the cookies such that the maximum number of cookies any child gets is less than or equal to mid.
The feasibility check can be implemented using a min-heap priority queue. Initialize the queue with 'k' zeroes. For each bag of cookies from the largest to the smallest, add the bag to the child who currently has the fewest cookies, and then update that child's cookie count in the priority queue.
If all the cookies can be distributed under the mid limit, then we can try to minimize the limit; otherwise, we increase the limit.
This approach reduces the time complexity to O(n log(sum(cookies))), where 'n' is the total number of bags and 'sum(cookies)' is the total number of cookies in all bags. Please note, however, that the greedy feasibility check described here is a heuristic rather than an exact decision procedure, so this approach needs further verification and may not always return the true optimum.
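A Python sketch of the approach just described (function names are illustrative; as the caveat above notes, the greedy feasibility check is a heuristic):

```python
import heapq

def feasible(cookies, k, limit):
    # Heuristic check: give each bag (largest first) to the least-loaded
    # child, tracked with a min-heap; fail if a child would exceed `limit`.
    loads = [0] * k
    heapq.heapify(loads)
    for c in sorted(cookies, reverse=True):
        smallest = heapq.heappop(loads)  # child with the fewest cookies so far
        if smallest + c > limit:
            return False
        heapq.heappush(loads, smallest + c)
    return True

def min_unfairness_estimate(cookies, k):
    # Binary search the answer between the largest bag and the total.
    lo, hi = max(cookies), sum(cookies)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(cookies, k, mid):
            hi = mid        # mid works; try to lower the limit
        else:
            lo = mid + 1    # mid is too tight; raise the limit
    return lo

print(min_unfairness_estimate([8, 15, 10, 20, 8], 2))  # prints 33
```

On cookies = [8, 15, 10, 20, 8], k = 2 this returns 33, while the true optimum (found by exhaustive search) is 31 ({20, 10} vs {15, 8, 8}). Since the greedy assignment does not depend on the limit, the binary search effectively just returns the greedy makespan, which is exactly the caveat mentioned above.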
Maybe easier to understand is the following equivalent problem: You have k computers to run a list of jobs. Each job must run entirely on a single computer. The array cookies contains the duration of each job. You must find a distribution of the jobs, where the last job finishes as early as possible.
That’s a great analogy! Indeed, this problem can be seen as a variant of the job scheduling problem, which is a wellknown problem in computer science. In this scenario, the cookies represent the job duration, the children represent the computers, and we want to minimize the time at which the last job (or distribution of the last cookie) is finished.
In a job scheduling context, this problem is often referred to as the “Makespan Minimization” problem, where the objective is to minimize the time after which all jobs have been processed.
This interpretation can help with formulating efficient algorithms, especially when the number of children (computers in this analogy) is large. The binary search solution I described in the previous message is essentially built on a heuristic for makespan minimization known as the "Longest Processing Time" (LPT) rule: sort the jobs in decreasing order of duration, then always assign the next job to the machine that finishes its current jobs the earliest, which is effectively what the sorting plus the priority queue does in our solution.
Here’s the solution reinterpreted in terms of the job scheduling analogy:


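In this job-scheduling framing, a minimal Python sketch of the LPT rule (identifier names are illustrative) could look like:

```python
import heapq

def schedule_makespan(durations, k):
    # LPT rule: take jobs longest-first, and run each one on the machine
    # that currently finishes earliest (a min-heap of finish times).
    finish = [0] * k
    heapq.heapify(finish)
    for d in sorted(durations, reverse=True):
        earliest = heapq.heappop(finish)
        heapq.heappush(finish, earliest + d)
    return max(finish)  # time at which the last job finishes

# Cookie bags reinterpreted as job durations, children as machines.
print(schedule_makespan([8, 15, 10, 20, 8], 2))  # prints 33
```

Keep in mind that LPT is only an approximation for makespan minimization; for this input the true optimum is 31.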
Remember, this Python code assumes that we are free to assign cookies (jobs) one by one.
The goal is to find the fairest distribution possible, where the difference between the number of cookies received by the child with the most cookies and the child with the least cookies is as small as possible.
Almost. In the context of this problem, unfairness is defined as the maximum total number of cookies received by any single child, not the difference between the largest and smallest totals. Our goal is to make this maximum as small as possible.
Given this objective, the problem becomes more complex than just a simple distribution of cookies, as we need to take into account the number of cookies and the number of children, and we have to find a distribution where the child who receives the most cookies gets as few cookies as possible. This is a fairly challenging computational problem, and it is where our backtracking or binary search approaches come in.
For the binary search approach, we’re essentially trying to find a limit to the number of cookies a child can receive and we iterate to adjust that limit until we find the smallest possible maximum. For backtracking, we’re exploring all possible distributions of cookies and tracking the one with the smallest possible maximum.
Both of these approaches seek to solve the problem by iteratively improving the solution until we reach the optimal distribution.
It's important to note that the goal isn't necessarily to distribute the cookies evenly. The goal is to make sure that no child gets more cookies than is absolutely necessary, given the bags we have. This may mean that some children get more cookies than others; what we minimize is the largest share, not the spread between children.
why is my code not working out please help


The code you provided seems to implement the binary search algorithm which works fine for problems where you need to divide or distribute a quantity into groups so as to minimize the maximum sum in a group (like the book allocation problem).
However, this approach does not work for the current problem because of a key difference: we can't split the cookies among different children, and the bags given to one child need not be contiguous. Each cookie bag (an element of the list) must be given to one child completely. The approach you have used allows splitting a group of cookies, which doesn't align with the problem statement.
Your code works under the assumption that you can divide the cookies in a manner that balances the distribution among the kids. This assumption fails in our problem, where a set of cookies cannot be split among children.
The problem in this case can be solved using backtracking or dynamic programming since we need to explore all possible ways of distributing the cookies to minimize the maximum number a child receives. We can’t just balance the distribution based on sum as your current code tries to do. This is why your code isn’t working as expected.
Let me know if you need further clarification or if you have more questions.
It is very much similar to the book allocation problem of binary search. The only difference that I noticed was that in book allocation we were allowed to take contiguous subarrays only, whereas in this one, subsets are allowed.
Yes, you are correct. Both the book allocation problem and this cookie distribution problem are min-max partition problems. However, the conditions of the two problems are different:
In the book allocation problem, you are supposed to allocate contiguous books to students (subarray of the array), while ensuring that the student with the maximum allocated pages is minimized. This problem is essentially a partition problem where the objective is to partition the books in a way that the maximum sum of pages assigned to a student is minimized. This is a classical binary search problem where the search space is between the maximum element and the sum of all elements in the array.
In the cookie distribution problem, however, you are distributing individual cookie packets (or elements in the list) among kids, and these don’t need to be contiguous in the original list. This problem involves finding the different ways to distribute the cookies (not necessarily contiguous) such that the maximum number of cookies any child receives is minimized. The nature of this problem is more combinatorial, and requires a different approach (like backtracking or dynamic programming).
So while both problems share a similar highlevel goal (minimize the maximum allocated quantity), the constraints and details of the problems lead to different specific strategies being optimal.
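To illustrate the contrast, here is a sketch of the classic book allocation binary search (names are illustrative). Its feasibility check is exact only because each student must take a contiguous run of books:

```python
def min_max_pages(pages, students):
    # Feasibility: how many students are needed if nobody may read more
    # than `limit` pages, taking books strictly left to right (contiguous)?
    def students_needed(limit):
        count, running = 1, 0
        for p in pages:
            if running + p > limit:
                count += 1      # start a new student's segment
                running = p
            else:
                running += p
        return count

    lo, hi = max(pages), sum(pages)
    while lo < hi:
        mid = (lo + hi) // 2
        if students_needed(mid) <= students:
            hi = mid            # mid is achievable; try lower
        else:
            lo = mid + 1        # mid is too small
    return lo

print(min_max_pages([12, 34, 67, 90], 2))  # prints 113
```

With arbitrary subsets, as in the cookie problem, there is no such simple left-to-right check: deciding whether a given limit is achievable is itself a hard partition problem, which is why this binary search does not transfer directly.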
other prune we can use is if(v[i]==0) then break the for loop after solve.


because the children is actually unlabelled. so [10,0,0,0] is the same as [0,10,0,0],[0,0,10,0] and [0,0,0,10] we only need to track one of this.
Absolutely, this is a very good point. This is a typical optimization technique used in combinatorics, to avoid duplicate calculations in situations where the order doesn’t matter.
This pruning condition “if (v[i] == 0) break;” is based on the insight that children are interchangeable (unlabeled).
Once the current child is left with zero cookies after backtracking, there's no point in trying to give the same cookie to the next (also empty) child, because the resulting distributions would be identical (since the children are unlabeled).
Therefore, by adding this break statement, we ensure that each child only receives cookies while the previous child still has cookies, which can effectively eliminate a lot of duplicated distributions and speed up the search process significantly.
This is a classic speedup technique for backtracking algorithms, where avoiding exploring duplicate states can greatly improve performance.
You can see that with this constraint, the first cookie can go only to v[0], the second cookie to {v[0], v[1]}, the third cookie to {v[0], v[1], v[2]}, and so on. Cookies after the kth position can go to any of the k positions. So the time complexity can be calculated as O((1 · 2 · 3 · … · k) · (k · k · … (n − k) times)) = O(k! · k^(n−k)). When k = n/2, this roughly becomes O(n! · n^n).
If you find anything incorrect, please feel free to point out.
Your analysis is correct, and you did a great job breaking it down!
The time complexity is indeed O(k! * k^(n−k)) as you have calculated. This is because for the first k cookies, there are k! ways to distribute them, as each new cookie can go to one more child. For the remaining (n − k) cookies, each of them can go to any of the k children.
However, your estimation O(n! * n^n) when k = n/2 might be a bit off. It would actually be O((n/2)! * (n/2)^{n/2}) in that case, which is still exponential and large, but not as large as O(n! * n^n).
In practical terms, this is still a very expensive operation for large inputs, so it’s crucial to use pruning strategies like the one you mentioned to trim the search space and make the algorithm feasible for larger inputs.
Remember, these are worst-case time complexities and the actual performance can be much better depending on the specific input and how efficient your pruning is. But you've done a great job analyzing the time complexity!
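To get a feel for these bounds, a quick back-of-the-envelope helper (assuming n ≥ k; the function name is mine):

```python
from math import factorial

def state_upper_bound(n, k):
    # Upper bound on leaf states explored with the symmetry pruning:
    # k! ways for the first k cookies, then k choices for each remaining one.
    return factorial(k) * k ** (n - k)

print(state_upper_bound(8, 2))   # 2! * 2^6 = 128
print(state_upper_bound(8, 4))   # 4! * 4^4 = 6144
```

For the constraint sizes typical of this problem (n ≤ 8), these counts stay very small, which is why backtracking with pruning is fast in practice.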
Another optimization is to not continue with a distribution that is not promising at all. So, if v[i] += nums[start]; increases v[i] more than the current ans, no need to continue with this :


Yes, that’s a valid optimization!
This is a form of a strategy called pruning, where you avoid exploring parts of the solution space that you know cannot lead to a better solution than what you’ve already found.
In this case, if adding nums[start] to v[i] would result in a total greater than the current best answer (ans), there's no point in continuing to explore this branch of the solution space, because the goal is to minimize the maximum amount of cookies any child receives. So you skip this option with continue.
Similarly, if v[i] is zero after backtracking, that means no cookies were allocated to the ith child, so there's no need to continue trying with the other children, because they would end up getting the same amounts of cookies due to the symmetry of the problem (it doesn't matter which child gets which amount of cookies, only the amounts matter). So you break out of the loop in this case.
These two pruning strategies can significantly reduce the number of branches that need to be explored and thus improve the runtime of the algorithm.
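Putting both prunings together, a complete Python sketch of the backtracking solution discussed in this thread (the function name is illustrative) could look like:

```python
def distribute_cookies(cookies, k):
    # Backtracking with two prunings: skip a child if adding the bag would
    # already reach the best answer, and stop at the first empty child
    # (later empty children are symmetric duplicates).
    cookies = sorted(cookies, reverse=True)  # large bags first helps pruning
    best = float("inf")
    loads = [0] * k

    def backtrack(i):
        nonlocal best
        if i == len(cookies):
            best = min(best, max(loads))
            return
        for j in range(k):
            if loads[j] + cookies[i] >= best:
                continue  # this branch cannot beat the current best
            loads[j] += cookies[i]
            backtrack(i + 1)
            loads[j] -= cookies[i]
            if loads[j] == 0:
                break     # symmetry pruning: later empty children are identical

    backtrack(0)
    return best

print(distribute_cookies([8, 15, 10, 20, 8], 2))  # prints 31
```

Both prunings only discard branches that are provably dominated or symmetric, so the search remains exact.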
I’m totally lost here. Can someone recommend some other problems so I can understand backtracking?
Backtracking can be a bit tricky to grasp initially, but with some practice, you’ll get the hang of it. Here are some classic backtracking problems that could help you improve your understanding:
N-Queens (LeetCode 51): You have to place N queens on an N×N chessboard so that no two queens threaten each other. This problem can help you understand how to use backtracking to explore all possible placements of queens.
Subsets (LeetCode 78): This problem asks for all possible subsets of a distinct integers set. You can use backtracking to generate all possible subsets.
Permutations (LeetCode 46): Given an array nums of distinct integers, return all the possible permutations. You can use backtracking to generate all permutations.
Combination Sum (LeetCode 39): This problem asks you to find all unique combinations in candidates where the candidate numbers sum to target. This problem is a bit more complex as it involves both backtracking and dealing with a target sum.
Sudoku Solver (LeetCode 37): You have to write a program to solve a Sudoku puzzle by filling the empty cells. This problem is a bit complex and can help you understand how to use backtracking in a more complex scenario.
Word Search (LeetCode 79): Given a 2D board and a word, find if the word exists in the grid. This problem can help you get the hang of backtracking in a 2D space.
Try to solve these problems by yourself. If you find any difficulties, you can look at the solutions and discussions on the LeetCode platform. Happy coding!
For the optimization part there is one more way you can avoid duplicates :


This makes sure you don’t produce permutations. The reason for the generation of permutation is on the first level, suppose
I see you're trying to reduce the search space by initializing v[0] with the first cookie and starting your recursion from the second cookie. This indeed avoids the generation of permutations.
For those who might not understand, the idea here is that we want to generate combinations of cookie distributions to children, not permutations. A combination does not care about the order of elements, while a permutation does. For example, [1,2,3] and [1,3,2] are considered different permutations but the same combination.
In the problem of distributing cookies, the order in which the children receive the cookies doesn’t matter, i.e., giving child1 2 cookies and child2 1 cookie is the same as giving child1 1 cookie and child2 2 cookies. So, we want to generate combinations, not permutations.
However, without any constraints, the recursive function would generate all permutations of cookie distributions. By fixing the first cookie to the first child and starting the recursion from the second cookie, you’re avoiding this problem.
This significantly reduces the search space and thus improves the efficiency of the algorithm. But note that it only works under the assumption that n (the number of cookies) is greater than or equal to 2 and k (the number of children) is greater than or equal to 1. You may want to add some corner case handling for other scenarios.
can someone please explain the update optimization ?
The “update optimization” typically refers to the process of making small, incremental changes to improve an algorithm’s efficiency. In the context of this conversation, it could refer to various modifications that were suggested to the recursive algorithm to reduce the search space and improve performance.
These optimizations include:
Sorting the cookies array: This ensures that larger numbers are encountered early, which can help the algorithm reach the current best answer faster and prune (i.e., discard) more branches in the search tree.
Pruning with if(child[i] == 0) break;: This avoids generating different permutations of the same combination by stopping the loop once we reach a child that has received no cookies. Once a child has received no cookies (indicated by child[i] == 0), all subsequent children haven't received any cookies either, so we can break the loop to avoid generating the same combination in a different order.
Pruning with if(v[i] + nums[start] > ans) continue;: This skips an assignment if it would cause the current child's cookie count to exceed the current best answer ans. If an assignment would result in a worse distribution (one with a larger maximum cookie count), it's not worth pursuing further.
Each of these optimizations contributes to reducing the search space, thereby improving the efficiency of the algorithm.
See after recursive call when you are decreasing the no of cookies from that child’s share, then if that child’s share becomes 0, then the child after that would get cookies then. See it’s quite symmetric. If in the 1st iteration your 2nd child did not have cookies and you did all that calculation, then it will be the case that your 3rd child will also get 0 cookies. Do a dry run on a test case and you will surely understand. Ok so I will make one more try😂. See it will be the case when your 1st child would have taken all the cookies and none would be left for others. You can also see it as combinatorics problem.You either give the cookie to a child or not. You are first assigning all the cookies to 1st child then when you backtrack you will decide that I won’t give it to this child, I will give to other child. So you see, you are actually applying different combinations and so it’s very much sure that 0 cookies assigned to one child would be assigned to other also and the cases would be symmetric.
Your explanation is spot on! The idea is to use backtracking to explore all possible distributions of cookies to the children. After each recursive call, we backtrack by “undoing” the assignment we just made (i.e., taking away the cookies we just gave to the current child), then try the next possibility.
The line if(child[i] == 0) break; is a pruning technique used to eliminate certain possibilities early on. Here's a little more detailed explanation:
When we're assigning cookies to children, we start from the first child and move to the next ones. Suppose we've distributed some cookies to the first few children and we're now considering child i.
There are two scenarios:
If child i has already been given some cookies, we proceed as usual: we try giving the current bag of cookies to this child, then move on to the next bag of cookies.
If child i hasn't been given any cookies yet, this means all the children after i also haven't received any cookies (since we're assigning cookies in order). We try giving the current bag of cookies to child i, then move on to the next bag. However, after we're done with this recursive branch and we "undo" the assignment (subtracting the cookies from child i), child i is left with no cookies again. The if(child[i] == 0) break; statement kicks in and stops the loop. This prevents us from trying to give the current bag of cookies to the next child (child i + 1), which would just result in the same distribution of cookies we've already tried. This is because the children are indistinguishable: it doesn't matter which specific child gets which specific bag of cookies, all that matters is how many cookies each child ends up with.
This pruning technique significantly reduces the number of possibilities we have to consider, thus making the algorithm more efficient.
This code is not working:


The code seems to be attempting a binary search with dynamic programming, which is a valid approach for some kinds of problems, but it’s a bit hard to analyze without further context.
However, I notice some possible issues. The function isvalid(a, mid, k) tries to check whether mid is a valid maximum sum for any child using k bags. The function solve seems to be doing the main work, trying to find a subset of bags for a given k, curr, mid, mask that adds up to a sum no greater than mid.
A few things might cause issues in your code:
In solve, the line if(solve(a, k-1, mid, mid, mask, dp)) return dp[temp]=true; seems wrong. It's not clear why you are recursing with k-1 and curr set to mid regardless of what the current curr is. It seems like you should only be decrementing k and resetting curr to mid when you've found a valid subset that adds up to a sum no greater than mid, which isn't necessarily the case here.
In isvalid, the map dp is recreated every time isvalid is called. This means the DP results from previous calls to isvalid aren't saved, which would defeat the purpose of using DP.
Your current bitmasking approach treats each bag of cookies as unique. However, the problem treats each child as indistinguishable, meaning that if there are multiple valid distributions of cookies, you should only count one of them. This could lead to an incorrect result, especially when you're dealing with a high number of cookies and children.
Remember that the general idea of this problem is to distribute the cookies to the children such that the maximum amount of cookies any child gets is as minimal as possible. I recommend simplifying your code and revising your approach if necessary. For this problem, it might be easier to use a simple backtracking approach with some optimizations to prune unnecessary branches.
Please provide more context or specific error messages if you need more specific advice!
Better example for why we are using if(v[i]==0) break;
[6,1,3,2], k = 3
(0,0,0)
nums[start] = 6: (6 0 0)
nums[start] = 1: (7 0 0) (6 1 0)
nums[start] = 3: (10 0 0) (7 3 0) (9 1 0) (6 4 0) (6 1 3)
nums[start] = 2: (12 0 0) (10 2 0) (9 3 0) (7 5 0) (7 3 2) (11 1 0) (9 3 0) (9 1 2) …
Notice that when we start we branch to at most one child on level 1, then at most 2 choices each on level 2, and so on up to the maximum, k. If we had not used this pruning, then from level 1 itself we would have 3 branches, which would cause unnecessary calculations.
Thank you for the detailed example!
This indeed shows why the if(v[i]==0) break; condition is useful in this context. In short, this condition is a type of optimization technique known as "pruning". It helps to avoid the unnecessary calculations caused by the symmetry of the problem and thereby significantly improves the runtime of the program.
In the given problem, every child is indistinguishable from the others, meaning a distribution of cookies like [6, 0, 0] is essentially the same as [0, 6, 0] and [0, 0, 6]. So, if we have already explored the distributions in which one particular child has 0 cookies, there is no need to explore the symmetric cases where a different child is the empty one, as that would only repeat the same calculations.
The if(v[i]==0) break; condition stops the further distribution of cookies to other children if the current child has no cookies, which means we have already covered this distribution and there's no need to repeat the same calculations again.
So with this condition, we are pruning the search space and avoiding redundant calculations which improves the efficiency of our solution.
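To quantify the effect on the [6,1,3,2], k = 3 example above, here is a quick script (hypothetical helper) that counts recursion-tree nodes with and without the symmetry pruning:

```python
def count_states(cookies, k, prune):
    # Count every node of the backtracking recursion tree.
    loads = [0] * k
    nodes = 0

    def go(i):
        nonlocal nodes
        nodes += 1
        if i == len(cookies):
            return
        for j in range(k):
            loads[j] += cookies[i]
            go(i + 1)
            loads[j] -= cookies[i]
            if prune and loads[j] == 0:
                break  # later empty children are symmetric duplicates

    go(0)
    return nodes

print(count_states([6, 1, 3, 2], 3, prune=False))  # 121 nodes
print(count_states([6, 1, 3, 2], 3, prune=True))   # 23 nodes
```

Even on this tiny input the pruned tree is about five times smaller, and the gap grows quickly with n and k.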
I just used an extra loop and made a new vector which I believe should not increase overall time complexity. But it gave TLE.


The code you’ve written is an exhaustive search approach where you are trying to distribute cookies to every child for each distribution possibility, which indeed causes a high time complexity and may result in a Time Limit Exceeded (TLE) error.
The reason why this is inefficient is that you are recalculating the sum of cookies for each child in every recursive call, which is very costly, especially when the size of cookies is large.
Here’s the problematic part:


In this part, for every distribution (dis), you are iterating over all the cookies to calculate the total cookies each child gets, and you are doing this for every possible distribution, which makes your code's time complexity approximately O(n^n * n) (where n is the size of cookies).
To optimize your code, you should consider keeping track of the maximum number of cookies any child has while you are distributing the cookies, instead of calculating it afterwards.
Also, make sure to sort the cookies in descending order, and then use a backtracking approach to try distributing cookies to each child, while keeping track of the maximum cookies any child has. When this maximum exceeds the current minimum maximum cookies (tracked by mn), you can stop further distribution (pruning the search tree), which should significantly reduce the time complexity of your code.
And another key point is to break the loop of child j when you haven’t assigned any cookies to this child, since assigning cookies to the next child is the same and would result in duplicate calculations. It’s just as discussed in previous comments about if(v[i]==0) break;
This way, you can avoid unnecessary computations and reduce the time complexity of your code.
so if I explain:
1: We start by giving a cookie to the 0th child.
2: We move to distribute the next cookie, which happens to be given to the same child.
3: If we have distributed all the cookies, see which child gets the maximum cookies.
4: Compare it to the minimum max value we have found yet.
5: Take back the cookie from child i, give it to child i + 1, and repeat the above steps.
Hope it will help you understand. This is the essence of all Backtracking algorithms.
Yes, you’ve got the essence of the backtracking approach for this problem. Here are the steps in more detail:
Start by giving the first cookie to the first child (child 0).
Move to distribute the next cookie. Try giving it to the same child (child 0). Repeat this step for all children in sequence.
Once all cookies have been distributed, check which child received the most cookies.
Compare the number of cookies received by this child (the max) with the smallest max found so far in previous iterations (the minimum max). If the current max is smaller, update the minimum max.
Now, backtrack: “undo” the assignment of the last cookie, and try assigning it to the next child. If there are no more children, backtrack further (undo the assignment of the previous cookie, and so on).
Repeat steps 2 to 5 until all possible distributions of cookies to children have been tried.
Backtracking problems often involve exploring all possible solutions (in this case, distributions of cookies) and updating the best solution found so far (in this case, the distribution where the child who receives the most cookies gets as few as possible). Pruning, or skipping over certain possibilities without exploring them, can be used to speed up the search, especially when it’s clear that those possibilities cannot lead to a better solution than the best one found so far.
Why is [33, 0] not a valid distribution?
The distribution [33, 0] might not be valid depending on the specific constraints of the problem.
If the problem requires that cookies should be distributed among the children as evenly as possible, then [33, 0] would not be an optimal distribution because all cookies are going to one child and none to the other.
If the problem requires that each child gets at least one cookie, then [33, 0] would also not be valid because one child is getting no cookies.
However, if there are no such constraints, then [33, 0] could potentially be a valid distribution.
The question of whether a distribution is valid or not always depends on the rules and constraints given in the problem statement. Please refer to the specific problem for more details.
Another optimization will be to sort the cookies vector in reverse order in the very beginning before passing it into the function; just think about how it will reduce the runtime. The runtime of my code after sorting is 0ms.


Sorting the cookies vector in reverse order before passing it into the function can indeed be a performance optimization in some cases.
The reason behind this is that by sorting the cookies in descending order, we distribute the largest bags first. Since we are trying to minimize the maximum number of cookies a single child receives, the partial sums grow quickly, so branches that cannot beat the current best answer are detected and pruned much earlier in the search.
So the algorithm becomes a bit more efficient because it finds the “worstcase scenario” (the maximum amount a child can receive) early on, and uses this information to potentially prune some branches of the search tree.
In your code, the function fu is called recursively for each child (j), each time distributing one bag of cookies (v[i]) among the children and then moving on to the next bag (i+1). If the distribution of cookies results in a child receiving more cookies than the current best solution (res), then there's no point in continuing to explore this branch of the search tree, so you can return early to cut off unnecessary computation.
As always with optimization techniques, it’s good to remember that they can make a big difference in some cases, but they might not always result in a noticeable speedup depending on the specifics of the input data and the constraints of the problem. It’s also important to ensure that any optimization doesn’t change the correctness of the solution. In this case, sorting the cookies doesn’t affect the correctness, so it’s a safe optimization to make.
For the optimization: v[i]==0 means the child i is being visited for the first time, and the following children have not been visited yet. Even if the following children were visited, the result would be the same as for the current child i, so those visits can be skipped.
Exactly, you got the point of the optimization.
This is one way to prune the search space and make the algorithm more efficient.
When you have a recursive function that's visiting each child and trying to distribute cookies, once a child i has been given a cookie for the first time (v[i]==0), you don't need to consider giving a cookie to the next child (i+1, i+2, etc.) because doing so would just be revisiting a state you've already considered before.
So, by breaking the loop as soon as v[i]==0, you avoid redundant calculations and improve the efficiency of your algorithm.
This kind of optimization is often used in backtracking algorithms to prune the search space and avoid considering duplicate states. It’s a nice insight to see that you can use it in this problem.
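The effect of this symmetry-breaking can be measured directly. Here is a small Python experiment (my own illustration, not code from the thread) that counts the recursive calls with and without the break on the first empty child:

```python
def count_calls(cookies, k, symmetry_break):
    """Count backtracking calls when distributing `cookies` among k children."""
    loads = [0] * k
    calls = 0

    def backtrack(i):
        nonlocal calls
        calls += 1
        if i == len(cookies):
            return
        for j in range(k):
            loads[j] += cookies[i]
            backtrack(i + 1)
            loads[j] -= cookies[i]
            # skip the remaining (identical) empty children
            if symmetry_break and loads[j] == 0:
                break

    backtrack(0)
    return calls
```

For cookies = [1, 2, 3] and k = 3, the plain search makes 1 + 3 + 9 + 27 = 40 calls, while the version with the break makes only 9, because all states that differ only by a permutation of empty children are visited once.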
Why does the for loop in the backtracking run only till k and not till the size of the nums array (n)? I know that since we have declared a vector of size k, if the loop runs for i greater than k, then v[i] causes a runtime error. But we could overcome this by using another variable that keeps track of the bound of the vector indices. So is there any other reason why your intuition was to run the loop till k only and not till n?
In this problem, you have k children and n cookies, where n is the size of the nums array. The backtracking function is trying to assign cookies to the children, and since there are only k children, it only needs to run for k iterations.
The idea here is that we're trying to distribute these n cookies among these k children, where k can be less than, equal to, or greater than n.
If you were to run the loop until n, it would mean that you're trying to assign a cookie to a child that doesn't exist, which doesn't align with the problem we're trying to solve.
Also, remember that in each recursive call, we are considering the distribution of one particular cookie (or bag of cookies, since each element in the nums array must be distributed as a whole). We are not iterating over all cookies in each recursive call, but rather, iterating over all possible recipients (the k children).
So, in the context of this problem, it makes more sense to iterate up to k, rather than n.
In my opinion, it's because we are giving all n cookies to k people, so we must decide which person the current cookie belongs to. That's my intuition. In each loop, you pick a person from 1 to k and give them the current cookie. The current cookie can't be used twice, but a person can be chosen more than once. You choose a person in each loop iteration.
And if you loop till n, think about what that means. It means you are trying to decide which cookie the current person gets. In each loop you would choose a cookie from 1 to n and give it to somebody, which would allow a cookie to be used more than once. Of course you could use an array like boolean[n] to record the usage of each cookie and keep it to once, but then for one specific cookie, which person should you give it to? That would need another for loop from 1 to k to try different candidates.
I think the process of backtracking itself ensures that one cookie can only be used once, but if you change the for loop to the dimension of people, the meaning gets wrong. Hope that helps.
That's a very good explanation. The central idea of the problem is that you have n cookies and you need to distribute these to k children. When using backtracking, you're essentially exploring all the possible distributions of cookies to the children.
In each recursive call, you're distributing one cookie (or bag of cookies, since a cookie count in the nums array is distributed as a whole) to one of the k children. So, for each cookie, you iterate over k children to decide who gets that cookie.
Iterating over n wouldn't make sense in this context, because you're not trying to assign each child to a cookie, but rather, you're assigning each cookie to a child.
To put it simply, the reason we loop over k is because we are making a decision for each cookie about which child should get it, and there are k possible children each cookie can go to. Looping over n would imply that we are making a decision for each child about which cookie they should get, but this isn't how the problem is structured.
And you're correct, the backtracking process itself ensures each cookie is used only once. So in this context, looping over k makes more sense and aligns better with the problem we're trying to solve.
Yes, that's exactly what I'm confused about. If we want to check each and every element of the array, then why not run the loop till the size of the array? Otherwise we won't be able to cover all elements if we run the loop only till k. Could you please elaborate more precisely? In other backtracking questions we generally loop till n elements, which is creating a lot of confusion for me. Would really appreciate it.
Certainly, I’ll try to explain more clearly.
When we are working with backtracking problems, we use a loop to iterate over all the choices that we can make at a certain step. The range of this loop (whether it's from 1 to k or 1 to n) depends on the particular problem and what kind of choices we are making.
In the cookie distribution problem, the choice that we are making at each step is "which child should receive the next cookie?". Since there are k children, we have k possible choices to make, so our loop goes from 1 to k.
In other problems, the choice might be different. For example, in a problem where we are choosing which element of an array to use next, we would have n choices (where n is the size of the array), so our loop would go from 1 to n.
So the reason that we are looping until k and not n in this problem is not that we are "checking" each element of an array, but rather that we are making a choice from k possibilities at each step. The backtracking process will ensure that we eventually consider all possible distributions of cookies, even though we are only making k choices at each step.
Remember, in backtracking, the loop iteration is over the potential choices, not necessarily the elements of an input array. In this problem, your choices aren't the cookies themselves, but the children to whom the cookies are given.
I was just trying to place each and every element of the given array in all of the k possible places.
Yes, that’s correct. The idea behind the backtracking solution is that you’re trying to distribute each of the n cookies to any of the k children. That’s why you’re iterating over the k children in your backtracking function.
Think of it this way: you’re standing at the ith cookie, and you have k different options of where to place it. That’s why you’re iterating from 0 to k in your backtracking function: these are the k different choices you have for where to place the ith cookie.
Once you’ve decided where to place the ith cookie, you then move onto the (i+1)th cookie, and again have k different choices of where to place it, and so on until you’ve placed all the cookies.
The point of the backtracking is to explore all possible ways of distributing the cookies to the children, and for each distribution, check if it is a valid distribution and if it improves on the best distribution found so far.
So, while it might initially seem like you should be iterating over the n cookies in your backtracking function, it’s actually more accurate to say that you’re iterating over the k different choices you have for each cookie. That’s why the for loop in the backtracking function goes from 0 to k.
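The "k choices per cookie" view can be spelled out directly: a distribution is just a function from cookies to children, so there are k^n of them in total. Here is a brute-force Python sketch (my own illustration, not code from the thread) that enumerates all of them; backtracking explores this same space but prunes it:

```python
from itertools import product

def min_unfairness_bruteforce(cookies, k):
    """Try every assignment of each cookie bag to one of k children."""
    best = float("inf")
    # product(range(k), repeat=n) yields all k**n cookie -> child assignments
    for assignment in product(range(k), repeat=len(cookies)):
        loads = [0] * k
        for bag, child in zip(cookies, assignment):
            loads[child] += bag
        best = min(best, max(loads))
    return best
```

This is far too slow for anything but tiny inputs, but it makes the branching structure explicit: one decision per cookie, k options per decision.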
Python solution gives TLE:


The time complexity of the current Python solution is exponential, which makes it prone to a time limit exceeded (TLE) error for large inputs.
The solution uses a backtracking algorithm, which explores all possible ways to distribute the cookies, resulting in a lot of repeated computation.
Here's an optimized version of the Python code that uses binary search to determine the minimum maximum number of cookies a child can get, together with a memoized depth-first search to check if it's possible to distribute cookies under that maximum number:
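(The code block from this answer did not survive on the page; the following is a minimal Python sketch of the approach described. It uses a pruned depth-first feasibility check rather than explicit memoization, and the names can_distribute and dfs match the explanation.)

```python
def distribute_cookies_bs(cookies, k):
    def can_distribute(cap):
        """Can every bag be assigned so that no child exceeds `cap` cookies?"""
        loads = [0] * k

        def dfs(idx):
            if idx == len(cookies):
                return True
            tried = set()
            for j in range(k):
                # skip children whose current load we've already tried here
                if loads[j] in tried or loads[j] + cookies[idx] > cap:
                    continue
                tried.add(loads[j])
                loads[j] += cookies[idx]
                if dfs(idx + 1):
                    return True
                loads[j] -= cookies[idx]
            return False

        return dfs(0)

    # binary search the answer in [max(cookies), sum(cookies)]
    lo, hi = max(cookies), sum(cookies)
    while lo < hi:
        mid = (lo + hi) // 2
        if can_distribute(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```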


In this optimized code, can_distribute(mid) checks if it's possible to distribute cookies such that no kid gets more than mid cookies. The dfs(idx, mid, kids) function is used to distribute cookies to kids in a depth-first manner while ensuring that no kid gets more than mid cookies. If such a distribution is possible, it tries to minimize the maximum cookies a kid can get by doing a binary search in the range [max(cookies), sum(cookies)]. This solution should pass the test cases without a TLE error.
I have a doubt: if we pass the vector v by VALUE, why do we still need to perform the operation v[i] -= nums[start]? The previous vector should still be present on the stack, right?
Indeed, in languages and call styles where the container is copied on each call, modifications within the function will not affect the caller's copy, and you would not need to undo the operation with v[i] -= nums[start].
However, in C++ a std::vector is only copied if you pass it by value; backtracking solutions normally pass the state vector by reference (e.g. vector<int>& v), precisely to avoid that copy. With a reference, modifying the vector within the function also modifies the original vector outside the function. So when you backtrack, you need to undo the modification v[i] += nums[start] by performing v[i] -= nums[start].
Moreover, it's important to note that passing a large vector by value causes a new copy of the entire vector to be created for each function call, which could greatly increase your memory usage and may lead to a Memory Limit Exceeded error for large inputs. So even where passing containers by value is possible, it's usually more efficient to pass them by reference if you're only using them to keep track of the current state in a backtracking or recursive algorithm, and don't need separate copies for each recursive call.
Even if you pass vector v by value, if you don't write v[i] -= nums[start], you will be adding the same element to all k subsets within the loop of a single call, which is incorrect. So you still need to perform the subtraction.
Yes, that's correct. Even if you were passing the vector by value, the operation v[i] -= nums[start] is necessary for the logic of the backtracking algorithm.
The idea behind backtracking is that you explore all possible decisions from a certain point, but after you finish exploring each decision, you want to "undo" it to go back to the original state, so that you can explore the next decision from that same point.
In this problem, the decision you're exploring is "assign the start-th cookie to the i-th child". After you finish exploring all the implications of that decision (by recursively calling solve(start+1, nums, v, k)), you want to "undo" that decision (by doing v[i] -= nums[start]) before you move on to the next decision ("assign the start-th cookie to the (i+1)-th child"). This ensures that each decision is explored independently of the others. Without this step, you would be exploring combinations of decisions, rather than each decision separately.
So the v[i] -= nums[start] operation is essential to the functioning of the backtracking algorithm, regardless of whether v is passed by value or by reference.
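The same point can be seen in a few lines of Python, where lists are shared by reference much like a C++ vector passed by reference (this toy example is mine, not from the thread):

```python
def enumerate_assignments(nums, k):
    """Record every way to hand out `nums` one element at a time to k slots."""
    v = [0] * k
    results = []

    def solve(start):
        if start == len(nums):
            results.append(tuple(v))
            return
        for i in range(k):
            v[i] += nums[start]
            solve(start + 1)
            v[i] -= nums[start]  # undo; without this, later iterations see stale state

    solve(0)
    return results
```

For nums = [5] and k = 2 this yields [(5, 0), (0, 5)]; dropping the undo line would instead produce [(5, 0), (5, 5)], i.e. the second branch would be contaminated by the first.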


dp[i][mask] denotes the optimal answer when we have i children in total and mask is the configuration of cookies already handed out. Then we simply iterate over all possible submasks of the current mask and check: if we give the current child the cookies in submask, and the remaining i - 1 children get (mask xor submask) cookies, what is the optimal answer? Finally we obtain the answer using k children and the mask (1 << cookies.size()) - 1.
For further reading about submask enumeration I would suggest going through the following article on CP-Algorithms: https://cp-algorithms.com/algebra/all-submasks.html#enumerating-all-submasks-of-a-given-mask
Time complexity: enumeration of submasks over all masks takes O(3^n), thus the time complexity is O(k * 3^n). Space complexity: O(k * 2^n), where n = cookies.size().
The provided code is a dynamic programming solution to the problem of distributing cookies. It does use bit manipulation and mask subsetting to represent the distribution of cookies, which is more advanced than the backtracking approach discussed earlier.
The code creates a 2D DP array where dp[i][mask] represents the minimum maximum number of cookies any child has received, given that we have distributed cookies among i children and mask represents which cookies we have given out (each bit in the binary representation of mask corresponds to a cookie).
The code also precomputes the total number of cookies represented by each possible mask.
The key to the solution is the nested loop structure which checks each possible number of children (from 1 to k), and for each number of children, it iterates through all possible distributions of cookies (represented as a binary mask), and for each distribution, it checks all possible sub-distributions (submasks) of cookies to the current child.
It then updates the DP array at dp[person][mask] with the minimum maximum number of cookies any child has received so far. This update takes into account the cookies the current child would receive (i.e., the sum of the cookies represented by submask), and the maximum number of cookies any of the previous children have received (i.e., dp[person - 1][mask ^ submask]).
Finally, it returns the minimum maximum number of cookies any child has received after distributing all the cookies among k children.
This solution is efficient for relatively small inputs (because of the exponential time complexity). It takes advantage of bit manipulation and dynamic programming to efficiently explore the space of all possible distributions of cookies among the children.
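Since the code block from the post above did not survive on the page, here is a minimal Python sketch of the submask-enumeration DP just described, using a rolling 1-D array instead of the full dp[i][mask] table (the function name is mine):

```python
def distribute_cookies_dp(cookies, k):
    n = len(cookies)
    full = (1 << n) - 1

    # total[mask] = number of cookies covered by the bags in `mask`
    total = [0] * (1 << n)
    for mask in range(1, full + 1):
        low = mask & -mask                       # lowest set bit of mask
        total[mask] = total[mask ^ low] + cookies[low.bit_length() - 1]

    dp = total[:]                                # one child takes the whole mask
    for _ in range(2, k + 1):                    # add one more child per round
        new = [0] + [float("inf")] * full
        for mask in range(1, full + 1):
            sub = mask                           # enumerate non-empty submasks
            while sub:
                # current child gets `sub`, previous children get mask ^ sub
                new[mask] = min(new[mask], max(total[sub], dp[mask ^ sub]))
                sub = (sub - 1) & mask
        dp = new
    return dp[full]
```

The inner `sub = (sub - 1) & mask` loop is the standard submask-enumeration idiom; across all masks it runs 3^n times in total, matching the stated complexity.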
I read the question and quickly jumped into writing a DP solution, but got TLE. This was the solution I wrote:


But then, after reading others' solutions on Discuss, I was wondering why backtracking and not DP. Can you guys help me figure it out, please?
In your solution, you have a couple of inefficiencies that are causing you to hit the time limit:
Deep copy of the array on each call: in every recursive call, you're creating a clone of the divs array, which is a time-consuming operation, resulting in a high time complexity.
Unoptimized state representation: your state is represented as a string which includes the entire divs array, which is inefficient in both time and space. In this case, the number of possible states can be quite large, resulting in a big hashmap and lots of string operations.
Here’s why a backtracking solution is generally preferred over DP for this problem:
State space reduction: in the backtracking approach, the state space is reduced, as the branching factor of the depth-first search tree depends on the number of children k, which is generally smaller than the number of cookies.
Early pruning: we can implement pruning strategies to exit early when we know that the current solution path is not promising. In this problem, we can stop further exploration if the current number of cookies exceeds the minimum maximum found so far.
No need for deep copying: As we only modify one value at a time in our recursive calls, we don’t need to create deep copies of our arrays, saving us valuable time.
No need for a memoization table: The way the backtracking algorithm is set up for this problem does not require any memoization, again, saving space and time.
In some problems, DP could provide a more efficient solution than backtracking, but in this case, backtracking with some optimization techniques (like pruning) seems to perform better.
Note that there are ways to optimize the DP solution as well, such as by using an efficient state representation. However, this might be more complicated to implement and understand compared to the backtracking solution.
How to get the intuition that this question is of backtracking?
This problem falls under a general category of problems known as “combinatorial optimization” problems. These problems often involve finding the best way to allocate resources, or to arrange items, subject to some constraints.
The intuition to use backtracking often comes from a few key characteristics of the problem:
Decision Making: If the problem asks for an arrangement, allocation, or selection where you have several choices at each step, backtracking could be a good fit.
Constraints: Backtracking is often useful when you have constraints that the solution must satisfy. In this problem, you are constrained by the number of cookies and the number of kids.
Optimization: If you are asked to find the optimal solution out of many possibilities, backtracking can be used to explore different solutions and keep track of the best one found so far.
Order does not matter: If the order of decisions or choices doesn’t change the outcome, backtracking is often a good choice. In this case, it doesn’t matter in which order you distribute the cookies to the kids.
The nature of the problem, which involves distributing resources in different possible ways and finding the optimal distribution, is a good hint that backtracking might be a good strategy to solve it. The goal is to explore all possible ways to distribute the cookies (with early pruning if possible), and keep track of the best distribution found so far.
Remember, however, that identifying the correct technique to use for a problem often requires a good deal of practice solving similar problems. The more problems you solve, the better you will become at identifying the right technique to use.
We judged backtracking first because the constraint size was very small. Second is the logic: for every element, there is a choice to go to any of the k students. Hence we explore, one by one, the possibilities of where a particular element can go. Place that bag of cookies with a student, and when you reach the end you get an answer; then you backtrack and explore another possibility. Think of it like the N-Queens problem.
Yes, that’s an excellent way of breaking down the reasoning behind using backtracking for this problem.
To further elaborate on your point, backtracking is essentially a search algorithm for finding all (or some) solutions to computational problems, notably constraint satisfaction problems. It incrementally builds candidates for the solutions, but abandons a candidate as soon as it determines that the candidate cannot possibly be extended to a valid solution.
The classic textbook example of backtracking in computer science is the N-Queens puzzle: place N queens on an NxN chessboard such that no two queens threaten each other. The backtracking algorithm tries to place a queen in the current row and then moves to the next row. If the placement is not valid, it backtracks and tries to find a different valid place for the queen in the current row, and repeats the process.
Similarly, in the distribute cookies problem, the algorithm distributes a cookie to a child, moves to the next cookie, and so on. If a distribution turns out to not be optimal, the algorithm backtracks and redistributes the cookies differently. The process is repeated until the algorithm finds the most optimal distribution.
This approach works well when the problem size is relatively small (like in this problem, where k is up to 15) and when we need to find the most optimal solution out of all possible solutions. The ability to discard non-optimal solutions early ("prune the search tree") can help to reduce the time complexity.
However, keep in mind that backtracking can still have high time complexity and may not be the best approach for problems with large inputs or where a good but not necessarily optimal solution is acceptable.
Whenever you have n <= ~15, you can always think of backtracking.
That's a good rule of thumb! Problems with a small input size, typically up to 15-20, are often well suited to backtracking approaches, especially if the problem requires searching through all possible combinations or permutations.
This is due to the fact that the time complexity of backtracking can be quite high (often exponential), so it’s best suited for smaller problem sizes. However, backtracking has the advantage of being able to find the exact solution (not just an approximation), and it can also “prune” the search space by not exploring paths that are guaranteed not to lead to a solution.
In these cases, the advantage of being able to explore the entire solution space can outweigh the disadvantage of higher time complexity, making backtracking a good choice.
Also, remember that in some cases, other techniques such as dynamic programming or greedy algorithms may be more appropriate, even for smaller problem sizes. It’s always important to carefully analyze the problem and consider all possible approaches.
I wouldn’t call this backtracking. There’s no constraint to backtrack from. It’s just a recursive search over the entire solution space.
I can see where you’re coming from, and you make a valid point. In some sense, all backtracking algorithms are recursive searches over a solution space. However, backtracking is often distinguished from plain recursion by the idea of making a choice that can be “undone” if it turns out to be incorrect.
In this problem, we make a choice of assigning a cookie to a student and continue our search. If this choice leads to a non-optimal distribution, we "undo" the assignment (by subtracting the cookie from the student's total) and try a different assignment.
So, while there's no explicit constraint like in some other backtracking problems (such as Sudoku or N-Queens, where a move is immediately invalidated if it breaks the game's rules), we are still exploring a decision space in a depth-first manner and undoing decisions that don't lead to the optimal solution. Hence, it can be seen as a form of backtracking.
That being said, terminology in computer science can sometimes be a bit fuzzy and different people might use terms slightly differently. The important thing is understanding the underlying concepts and techniques.
These questions generally follow a pattern (pick the current index or don't pick it). Though here it's a slightly different case, because you have k different children and you have to offer each of them some chocolates. So the choices change to choosing which child gets the current number of chocolates, i.e., which array index to add to, if you are storing the sum of chocolates given to each child.
You just have to take care of the choices (or choice diagram) in backtracking questions by exploring all paths.
You’re exactly right. Many backtracking problems do indeed follow a “choose or not choose” pattern, where we try both options (choosing the current item and not choosing it) and then “backtrack” if a choice doesn’t lead to a solution.
In this problem, the choices are a little more complex because we have to decide which of the k children to give each cookie to. So instead of two choices per cookie, we have k choices. But the basic idea is the same: we try each possible choice, continue exploring based on that choice, and then undo the choice (by subtracting the cookie from the child's total) to explore other choices.
This “choice diagram” exploration is a powerful approach for many problems, not just in backtracking, but in dynamic programming and other areas of algorithmic problem solving as well. Understanding and mastering this technique can greatly help with tackling a wide range of problems.
I don't know why my code is so much faster than yours.
Yours: 1200 ms. Mine: 80 ms.


Your solution appears to be faster due to the early termination in your backtracking search. In your solution, when you are iterating through each bucket in the for loop, you have a break statement that exits the loop once you’ve added a cookie to an empty bucket. This reduces the number of recursive calls made in your function significantly and results in a more efficient search.
This is effectively a form of pruning in the search tree of solutions. By only adding cookies to non-empty buckets or the first empty bucket, you are eliminating a lot of redundant searches. This is a great optimization and shows the power of clever search strategies in reducing the runtime of backtracking solutions.
That being said, keep in mind that while this optimization makes your code run faster on this particular problem, it’s not guaranteed to work for all problems. Always be sure to test your code thoroughly and understand the problem constraints well.