
Showing results for "Binary Search on Answer"


LeetCode 1011 — Capacity To Ship Packages Within D Days | Binary Search on Answer Explained


🚀 Try This Problem First!
Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.
🔗 Problem Link: https://leetcode.com/problems/capacity-to-ship-packages-within-d-days/

1. Understanding the Problem

You have a conveyor belt carrying N packages, each with a given weight. A ship must transport all of them within at most D days. Every day, you load packages in order (no rearranging allowed), and you cannot exceed the ship's weight capacity in a single day.

Goal: Find the minimum weight capacity of the ship such that all packages are delivered within D days.

Constraints:
- 1 ≤ days ≤ weights.length ≤ 5 × 10⁴
- 1 ≤ weights[i] ≤ 500

2. Two Key Observations (Before Writing a Single Line of Code)

Before jumping to code, anchor yourself with these two facts:

- Minimum possible capacity: The ship must at least be able to carry the single heaviest package. If it can't, that package can never be shipped. So: low = max(weights)
- Maximum possible capacity: If the ship can carry everything at once, it finishes in 1 day, which is always valid. So: high = sum(weights)

Our answer lies somewhere in the range [max(weights), sum(weights)]. This is the classic setup for Binary Search on the Answer.

3. Intuition — Why Binary Search?

Ask yourself: what happens as ship capacity increases? The number of days needed decreases or stays the same. This is a monotonic relationship, and monotonicity is the green flag for Binary Search.

Instead of checking every capacity from 1 to sum(weights) (which is huge), we binary search over the capacity space, and for each candidate capacity mid we ask: "Can all packages be shipped in ≤ D days with this capacity?"

This feasibility check runs in O(N) using a greedy simulation, making the whole approach O(N log(sum(weights))).

4. The Feasibility Check — Greedy Loading

Given a capacity mid, simulate loading the ship greedily:
- Keep adding packages to today's load.
- The moment adding the next package would exceed mid, start a new day and reset the current load to that package.
- Count the total days used.

If days used ≤ D, capacity mid is feasible.

5. Binary Search Strategy

- If canShip(mid) is true → mid might be the answer, but try smaller. Set ans = mid, high = mid - 1.
- If canShip(mid) is false → the capacity is too small, increase it. Set low = mid + 1.

6. Dry Run — Example 1

Input: weights = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], days = 5
low = 10 (max weight), high = 55 (sum of weights)

| Iteration | low | high | mid | Days Needed | Feasible? | ans |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 10 | 55 | 32 | 2 | ✅ Yes | 32 |
| 2 | 10 | 31 | 20 | 4 | ✅ Yes | 20 |
| 3 | 10 | 19 | 14 | 6 | ❌ No | 20 |
| 4 | 15 | 19 | 17 | 4 | ✅ Yes | 17 |
| 5 | 15 | 16 | 15 | 5 | ✅ Yes | 15 |
| 6 | 15 | 14 | — | loop ends | — | 15 |

(For example, capacity 20 needs 4 days: [1..5], [6,7], [8,9], [10]; capacity 17 also splits as [1..5], [6,7], [8,9], [10].)

Output: 15 ✅

7. The Code Implementation

```java
class Solution {
    /**
     * Feasibility Check (Helper Function)
     *
     * Given a ship capacity 'mid', this function simulates loading packages
     * greedily and returns true if all packages can be shipped within 'days' days.
     *
     * @param mid  - candidate ship capacity to test
     * @param arr  - weights array
     * @param days - allowed number of days
     * @return true if shipping is possible within 'days' days, false otherwise
     */
    public boolean canShip(int mid, int[] arr, int days) {
        int daysNeeded = 1;  // We always need at least 1 day
        int currentLoad = 0; // Weight loaded on the ship today
        for (int i = 0; i < arr.length; i++) {
            // If adding this package exceeds today's capacity,
            // start a new day and carry this package on the new day
            if (currentLoad + arr[i] > mid) {
                currentLoad = arr[i]; // This package starts the new day's load
                daysNeeded++;         // Increment day count
            } else {
                currentLoad += arr[i]; // Package fits — add it to today's load
            }
        }
        // If days needed is within the allowed limit, this capacity is feasible
        return daysNeeded <= days;
    }

    /**
     * Main Function — Binary Search on the Answer
     *
     * Search range: [max(weights), sum(weights)]
     * - low  = max(weights) → ship must carry the heaviest package at minimum
     * - high = sum(weights) → ship carries everything in one day (upper bound)
     *
     * @param weights - array of package weights
     * @param days    - maximum allowed days
     * @return minimum ship capacity to deliver all packages within 'days' days
     */
    public int shipWithinDays(int[] weights, int days) {
        int high = 0;                // Will become sum(weights)
        int low = Integer.MIN_VALUE; // Will become max(weights)
        int ans = 0;

        // Calculate the binary search bounds
        for (int a : weights) {
            high += a;              // sum of all weights → upper bound
            low = Math.max(low, a); // max single weight → lower bound
        }

        // Binary Search over the capacity space
        while (low <= high) {
            int mid = low + (high - low) / 2; // Avoids integer overflow
            if (canShip(mid, weights, days)) {
                // mid works — record it as a potential answer
                // and try to find a smaller valid capacity
                ans = mid;
                high = mid - 1;
            } else {
                // mid is too small — increase the capacity
                low = mid + 1;
            }
        }
        return ans; // Minimum feasible capacity
    }
}
```
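To sanity-check the solution above, here is a minimal standalone harness; the class name `ShipDemo` and the condensed method bodies are mine, not from the post, but the logic mirrors `canShip` and `shipWithinDays`:

```java
// Hypothetical standalone harness for the approach described above.
public class ShipDemo {
    // Greedy feasibility check: days needed at capacity 'cap'.
    static boolean canShip(int cap, int[] w, int days) {
        int used = 1, load = 0;
        for (int x : w) {
            if (load + x > cap) { used++; load = x; } // start a new day with this package
            else load += x;
        }
        return used <= days;
    }

    // Binary search over [max(weights), sum(weights)].
    static int shipWithinDays(int[] w, int days) {
        int low = 0, high = 0;
        for (int x : w) { high += x; low = Math.max(low, x); }
        int ans = high;
        while (low <= high) {
            int mid = low + (high - low) / 2; // overflow-safe midpoint
            if (canShip(mid, w, days)) { ans = mid; high = mid - 1; }
            else low = mid + 1;
        }
        return ans;
    }

    public static void main(String[] args) {
        int[] weights = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.println(shipWithinDays(weights, 5)); // 15, matching the dry run
    }
}
```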
8. Code Walkthrough — Step by Step

Step 1 — Setting bounds: We iterate through the weights array once to compute low = max(weights) and high = sum(weights). These are our binary search boundaries.

Step 2 — Binary Search loop: We pick mid = low + (high - low) / 2 (a safe, overflow-free midpoint) and check whether capacity mid can ship all packages in ≤ D days.

Step 3 — Feasibility helper (canShip): We simulate greedy day-by-day loading, starting with daysNeeded = 1 and currentLoad = 0. For each package, if it fits in today's load, we add it; if not, we start a new day. The key lines are:

```java
if (currentLoad + arr[i] > mid) {
    currentLoad = arr[i]; // new day starts with this package
    daysNeeded++;
}
```

Step 4 — Narrowing the search: If feasible → ans = mid, high = mid - 1 (try smaller). If not feasible → low = mid + 1 (try larger).

9. Common Mistakes to Avoid

Mistake 1 — Wrong lower bound: Starting with low = 1 instead of low = max(weights) is not just slower. With the greedy check above it can return an infeasible answer, because a single package heavier than mid is never rejected (try weights = [10], days = 1). Starting at max(weights) both fixes correctness and shrinks the search range.

Mistake 2 — Wrong condition in canShip: The return should be daysNeeded <= days (not < days). If the days needed equal D, it is still valid.

Mistake 3 — Off-by-one in greedy loading: When a package doesn't fit, start a new day with that package as the first item: currentLoad = arr[i]. Do NOT set currentLoad = 0, because that package must still be accounted for.

Mistake 4 — Integer overflow in midpoint: Always use mid = low + (high - low) / 2 instead of (low + high) / 2 to avoid overflow when low and high are large.

10. Complexity Analysis

Time Complexity: O(N × log(sum(weights)))
- Binary search runs over the range [max(weights), sum(weights)], which has at most sum(weights) ≈ 500 × 50,000 = 25,000,000 values → about log₂(25,000,000) ≈ 25 iterations.
- Each iteration calls canShip, which is O(N).
- Total: O(N log S), where S = sum(weights).

Space Complexity: O(1)
No extra data structures. Only a handful of integer variables are used.
11. Similar Problems (Same Pattern — Binary Search on Answer)

Once you understand this pattern, the following problems become very similar:
- LeetCode 410 — Split Array Largest Sum
- LeetCode 875 — Koko Eating Bananas [ Blog is also available on this — Read Now ]
- LeetCode 1283 — Find the Smallest Divisor Given a Threshold
- LeetCode 2064 — Minimized Maximum of Products Distributed to Any Store

All of these follow the same template: define a feasibility check, identify a monotonic answer space, and binary search over it.

12. Key Takeaways

✅ When you see "find the minimum/maximum value such that a condition holds" — think Binary Search on the Answer.
✅ The lower bound of the search space is the most constrained valid value (max weight here).
✅ The upper bound is the least constrained valid value (total weight here).
✅ The feasibility check must be O(N) or better to keep the overall complexity efficient.
✅ Greedy loading (pack as much as possible each day) is optimal here since packages must go in order.

Happy Coding! If this helped you, share it with a friend who's grinding LeetCode. 🚀
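The shared template those problems follow can be sketched generically. The names `MinFeasible`, `lowest`, and `feasible` below are illustrative choices of mine, not from the post:

```java
import java.util.function.IntPredicate;

// Generic "binary search on answer" template: given a predicate that is
// monotone on [lo, hi] (false ... false true ... true), return the smallest
// value for which it is true, or hi + 1 if it is never true.
public class MinFeasible {
    static int lowest(int lo, int hi, IntPredicate feasible) {
        int ans = hi + 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;                        // overflow-safe midpoint
            if (feasible.test(mid)) { ans = mid; hi = mid - 1; } // feasible: try smaller
            else lo = mid + 1;                                   // infeasible: go larger
        }
        return ans;
    }

    public static void main(String[] args) {
        // Toy use: smallest x in [0, 100] with x*x >= 12
        System.out.println(lowest(0, 100, x -> x * x >= 12)); // 4
    }
}
```

Each of the listed problems is this template with a different `feasible` predicate plugged in.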

LeetCode · Binary Search · Medium · Java · Binary Search on Answer · Arrays
LeetCode 1283 — Find the Smallest Divisor Given a Threshold | Binary Search on Answer Explained


🚀 Try This Problem First!
Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.
🔗 Problem Link: https://leetcode.com/problems/find-the-smallest-divisor-given-a-threshold/

Understanding the Problem

You are given an array of integers nums and an integer threshold. You must choose a positive integer divisor, divide every element of the array by it (rounding up to the nearest integer), sum all the results, and make sure that sum is ≤ threshold.

Goal: Find the smallest possible divisor that keeps the sum within the threshold.

Important detail — Ceiling Division: Every division rounds up, not down. So ⌈7 / 3⌉ = 3 (not 2), and ⌈10 / 2⌉ = 5.

Constraints:
- 1 ≤ nums.length ≤ 5 × 10⁴
- 1 ≤ nums[i] ≤ 10⁶
- nums.length ≤ threshold ≤ 10⁶

Two Key Observations (Before Writing a Single Line of Code)

- Minimum possible divisor: The divisor must be at least 1, since it has to be a positive integer. So: low = 1
- Maximum possible divisor: If divisor = max(nums), then every element divided by it gives at most 1 (due to ceiling), so the sum equals nums.length, which is always ≤ threshold (guaranteed by the constraints). So: high = max(nums)

Our answer lies in the range [1, max(nums)]. This is the search space for Binary Search.

Intuition — Why Binary Search?

Ask yourself: what happens as the divisor increases? Each divided value gets smaller (or stays the same), so the total sum decreases or stays the same. This is a monotonic relationship — the green flag for Binary Search on the Answer.

Instead of trying every divisor from 1 to max(nums), we binary search over divisor values. For each candidate mid, we ask: "Does dividing all elements by mid (ceiling) give a sum ≤ threshold?"

This feasibility check runs in O(N), making the whole approach O(N log(max(nums))).

The Feasibility Check — Ceiling Sum Simulation

Given a divisor mid, compute the sum of ⌈arr[i] / mid⌉ for all elements. If the total sum ≤ threshold, then mid is a valid divisor. In Java, ceiling division of integers can be done as: Math.ceil((double) arr[i] / mid)

Binary Search Strategy

- If canDivide(mid) is true → mid might be the answer, but try smaller. Set ans = mid, high = mid - 1.
- If canDivide(mid) is false → the divisor is too small, increase it. Set low = mid + 1.

Dry Run — Example 1 (Step by Step)

Input: nums = [1, 2, 5, 9], threshold = 6
We start with low = 1 and high = 9 (the max element in the array).

Iteration 1: mid = 1 + (9 - 1) / 2 = 5
Ceiling sum with divisor 5: ⌈1/5⌉ + ⌈2/5⌉ + ⌈5/5⌉ + ⌈9/5⌉ = 1 + 1 + 1 + 2 = 5
5 ≤ 6 → ✅ Valid. Record ans = 5, search smaller → high = 4.

Iteration 2: mid = 1 + (4 - 1) / 2 = 2
Ceiling sum with divisor 2: ⌈1/2⌉ + ⌈2/2⌉ + ⌈5/2⌉ + ⌈9/2⌉ = 1 + 1 + 3 + 5 = 10
10 > 6 → ❌ Sum too large. Increase the divisor → low = 3.

Iteration 3: mid = 3 + (4 - 3) / 2 = 3
Ceiling sum with divisor 3: ⌈1/3⌉ + ⌈2/3⌉ + ⌈5/3⌉ + ⌈9/3⌉ = 1 + 1 + 2 + 3 = 7
7 > 6 → ❌ Sum too large. Increase the divisor → low = 4.

Iteration 4: mid = 4 + (4 - 4) / 2 = 4
Ceiling sum with divisor 4: ⌈1/4⌉ + ⌈2/4⌉ + ⌈5/4⌉ + ⌈9/4⌉ = 1 + 1 + 2 + 3 = 7
7 > 6 → ❌ Sum too large. Increase the divisor → low = 5.

Loop ends: low (5) > high (4).
Binary search terminates. Output: ans = 5 ✅

The Code Implementation

```java
class Solution {
    /**
     * Feasibility Check (Helper Function)
     *
     * Given a divisor 'mid', this function computes the ceiling sum of
     * all elements divided by 'mid' and checks if it is within threshold.
     *
     * @param mid    - candidate divisor to test
     * @param arr    - input array
     * @param thresh - the allowed threshold for the sum
     * @return true if the ceiling division sum <= threshold, false otherwise
     */
    public boolean canDivide(int mid, int[] arr, int thresh) {
        int sumOfDiv = 0;
        for (int i = 0; i < arr.length; i++) {
            // Ceiling division: Math.ceil(arr[i] / mid)
            // Cast to double to avoid integer division truncation
            sumOfDiv += Math.ceil((double) arr[i] / mid);
        }
        // If the total sum is within threshold, this divisor is valid
        return sumOfDiv <= thresh;
    }

    /**
     * Main Function — Binary Search on the Answer
     *
     * Search range: [1, max(nums)]
     * - low  = 1         → smallest valid positive divisor
     * - high = max(nums) → makes every ceil(num/divisor) = 1,
     *   so sum = nums.length <= threshold (always valid)
     *
     * @param nums      - input array
     * @param threshold - maximum allowed sum after ceiling division
     * @return smallest divisor such that the ceiling division sum <= threshold
     */
    public int smallestDivisor(int[] nums, int threshold) {
        int min = 1;                 // Lower bound: divisor starts at 1
        int max = Integer.MIN_VALUE; // Will become max(nums)
        int ans = 1;

        // Find the upper bound of the binary search (max element)
        for (int a : nums) {
            max = Math.max(max, a);
        }

        // Binary Search over the divisor space
        while (min <= max) {
            int mid = min + (max - min) / 2; // Safe midpoint, avoids overflow
            if (canDivide(mid, nums, threshold)) {
                // mid is valid — record it and try a smaller divisor
                ans = mid;
                max = mid - 1;
            } else {
                // mid is too small — the sum exceeded threshold, go higher
                min = mid + 1;
            }
        }
        return ans; // Smallest valid divisor
    }
}
```

Code Walkthrough — Step by Step

Setting bounds: We iterate through nums once to find max, which becomes our upper bound. The lower bound is 1 because divisors must be positive integers.

Binary Search loop: We pick mid = min + (max - min) / 2 as the candidate divisor and check whether using mid keeps the ceiling sum ≤ threshold.

Feasibility helper (canDivide): For each element, we compute Math.ceil((double) arr[i] / mid) and accumulate the total. The cast to double is critical: without it, Java performs integer division, which truncates rather than rounds up.

Narrowing the search: If the sum is within threshold → record ans = mid and try smaller (max = mid - 1). If the sum exceeds threshold → the divisor is too small, increase it (min = mid + 1).

A Note on return min vs return ans

In this particular loop, min actually settles on the smallest valid divisor when the loop ends (in the dry run it finishes at 5, the same as ans), so return min would happen to give the correct result here. But that coincidence depends on the exact loop shape and update rules, and it is easy to get wrong when either changes. Storing each valid candidate in a dedicated ans variable and returning it is the safer, more readable habit.

Common Mistakes to Avoid

Wrong lower bound: Setting low = min(nums) instead of low = 1 seems intuitive but is wrong. A divisor smaller than the minimum element can still be the answer. For example, dividing [5, 9] by 3 gives ⌈5/3⌉ + ⌈9/3⌉ = 2 + 3 = 5, which could be within threshold.

Forgetting ceiling division: Using arr[i] / mid (integer division, which truncates) instead of Math.ceil((double) arr[i] / mid) is wrong. The problem explicitly states that results are rounded up.

Relying on min instead of ans: As noted above, min happens to land on the answer for this loop shape, but returning the explicitly stored ans documents intent and stays correct if the loop changes.

Integer overflow in midpoint: Prefer mid = min + (max - min) / 2 over (min + max) / 2. With these constraints (values up to 10⁶) the sum still fits in an int, but the subtraction form is the habit that protects you when bounds approach Integer.MAX_VALUE.

Complexity Analysis

Time Complexity: O(N × log(max(nums)))
- Binary search runs over [1, max(nums)] → at most log₂(10⁶) ≈ 20 iterations.
- Each iteration calls canDivide, which is O(N).
- Total: O(N log M), where M = max(nums).

Space Complexity: O(1)
No extra data structures — only a few integer variables are used.

How This Relates to LeetCode 1011

This problem and LeetCode 1011 (Ship Packages Within D Days) are almost identical in structure:

| | LeetCode 1011 | LeetCode 1283 |
| --- | --- | --- |
| Search space | [max(weights), sum(weights)] | [1, max(nums)] |
| Feasibility check | Can we ship in ≤ D days? | Is the ceiling sum ≤ threshold? |
| Monotonic property | More capacity → fewer days | Larger divisor → smaller sum |
| Goal | Minimize capacity | Minimize divisor |

Once you deeply understand one, the other takes minutes to solve.

Similar Problems (Same Pattern — Binary Search on Answer)

- LeetCode 875 — Koko Eating Bananas [ Blog is also available on this — Read Now ]
- LeetCode 1011 — Capacity To Ship Packages Within D Days [ Blog is also available on this — Read Now ]
- LeetCode 410 — Split Array Largest Sum
- LeetCode 2064 — Minimized Maximum of Products Distributed to Any Store

All follow the same template: identify a monotonic answer space, write an O(N) feasibility check, and binary search over it.

Key Takeaways

✅ When the problem asks "find the minimum value such that a condition holds" — think Binary Search on the Answer.
✅ The lower bound is the most constrained valid value (1 here, since divisors must be positive).
✅ The upper bound is the least constrained valid value (the max element, which guarantees sum = length ≤ threshold).
✅ Ceiling division in Java requires casting to double: Math.ceil((double) a / b).
✅ Storing the answer in a separate ans variable is the safest habit — don't rely on min or max holding the answer after the loop unless you have verified it for that exact loop shape.

Happy Coding!
Smash that upvote if this helped you crack the pattern. 🚀
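A compact standalone harness for the approach in this post; the class name `DivisorDemo` and the integer-ceiling form `(a + mid - 1) / mid` (an alternative to `Math.ceil` with a double cast) are my choices, not from the post:

```java
// Hypothetical standalone harness for the smallest-divisor solution.
public class DivisorDemo {
    // Ceiling-sum feasibility check, as in canDivide above.
    static boolean canDivide(int mid, int[] arr, int thresh) {
        int sum = 0;
        for (int a : arr) sum += (a + mid - 1) / mid; // integer ceiling division
        return sum <= thresh;
    }

    // Binary search over divisors in [1, max(nums)].
    static int smallestDivisor(int[] nums, int threshold) {
        int lo = 1, hi = 0;
        for (int a : nums) hi = Math.max(hi, a);
        int ans = hi;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (canDivide(mid, nums, threshold)) { ans = mid; hi = mid - 1; }
            else lo = mid + 1;
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(smallestDivisor(new int[]{1, 2, 5, 9}, 6)); // 5, as in the dry run
    }
}
```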

LeetCode · Binary Search · Medium · Java · Binary Search on Answer · Arrays · Ceiling Division
Mastering Binary Search – LeetCode 704 Explained


Introduction

Binary Search is one of the most fundamental and powerful algorithms in computer science. If you're preparing for coding interviews, mastering Binary Search is absolutely essential. In this blog, we'll break down LeetCode 704 – Binary Search, explain the algorithm in detail, walk through a Java implementation, analyze complexity, and recommend additional problems to strengthen your understanding.

You can try this problem here: Problem Link

📌 Problem Overview

You are given:
- A sorted array of integers nums (ascending order)
- An integer target

Your task is to return the index of target if it exists in the array. Otherwise, return -1.

Example 1
Input: nums = [-1,0,3,5,9,12], target = 9
Output: 4

Example 2
Input: nums = [-1,0,3,5,9,12], target = 2
Output: -1

Constraints
- 1 <= nums.length <= 10⁴
- All integers are unique
- The array is sorted in ascending order
- Required Time Complexity: O(log n)

🚀 Understanding the Binary Search Algorithm

Binary Search works only on sorted arrays. Instead of checking each element one by one (like Linear Search), Binary Search:
1. Finds the middle element.
2. Compares it with the target.
3. Eliminates half of the search space.
4. Repeats until the element is found or the search space is empty.

Why is it Efficient?
Every iteration cuts the search space in half. If the array size is n, the number of operations becomes log₂(n). This makes it extremely efficient compared to linear search (O(n)).

🧠 Step-by-Step Algorithm
1. Initialize two pointers: low = 0, high = nums.length - 1
2. While low <= high:
   - Calculate the middle index: mid = low + (high - low) / 2
   - If nums[mid] == target, return mid
   - If target > nums[mid], search the right half → low = mid + 1
   - Else search the left half → high = mid - 1
3. If the loop ends, return -1

💻 The Java Code Explained

Here is the implementation:

```java
class Solution {
    public int search(int[] nums, int target) {
        int high = nums.length - 1;
        int low = 0;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (target == nums[mid]) {
                return mid;
            } else if (target > nums[mid]) {
                low = mid + 1;
            } else {
                high = mid - 1;
            }
        }
        return -1;
    }
}
```

🔍 Code Breakdown

1️⃣ Initialize Boundaries: int high = nums.length - 1; int low = 0; — this defines the search space from index 0 to n-1.

2️⃣ Loop Condition: while (low <= high) — the loop continues as long as there is a valid search range.

3️⃣ Safe Mid Calculation: int mid = low + (high - low) / 2; — this is preferred over (low + high) / 2 because (low + high) may cause integer overflow for very large arrays. This form prevents that.

4️⃣ Comparison Logic: if target == nums[mid], return the index; if target > nums[mid], search the right half (low = mid + 1); otherwise search the left half (high = mid - 1).

5️⃣ Not Found Case: return -1; — the loop finished without finding the target.

⏱ Time and Space Complexity
- Time Complexity: O(log n) — each iteration halves the search space.
- Space Complexity: O(1) — no extra space used; purely iterative.

🔥 Why This Problem Is Important

LeetCode 704 is the foundation of all Binary Search problems, a template question, and frequently asked in interviews. It is also required to understand advanced problems like Search in Rotated Sorted Array, Find First and Last Position, Peak Element, and Binary Search on Answer.

📚 Recommended Binary Search Practice Problems

After solving this, practice these in order:
- 🟢 Easy: 35. Search Insert Position · 69. Sqrt(x) · 278. First Bad Version
- 🟡 Medium: 34. Find First and Last Position of Element in Sorted Array · 33. Search in Rotated Sorted Array · 74. Search a 2D Matrix · 875. Koko Eating Bananas (Binary Search on Answer)
- 🔴 Advanced Pattern Practice: 1011. Capacity To Ship Packages Within D Days · 410. Split Array Largest Sum

These will help you master lower bound / upper bound, binary search on monotonic functions, searching in rotated arrays, searching in 2D matrices, and the binary-search-on-answer pattern.

🎯 Final Thoughts

Binary Search is not just a single algorithm — it's a pattern. If you truly understand how the search space shrinks, when to move left vs right, how to calculate mid safely, and the loop conditions (low <= high vs low < high), you can solve 50+ interview problems easily. LeetCode 704 is the perfect starting point. Master this template, and you unlock an entire category of problems.
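A quick way to exercise the template on the post's two examples; the class name `BinarySearchDemo` is mine:

```java
// Hypothetical driver for the iterative binary search shown above.
public class BinarySearchDemo {
    static int search(int[] nums, int target) {
        int low = 0, high = nums.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // overflow-safe midpoint
            if (nums[mid] == target) return mid;
            else if (target > nums[mid]) low = mid + 1; // search right half
            else high = mid - 1;                        // search left half
        }
        return -1; // target not present
    }

    public static void main(String[] args) {
        int[] nums = {-1, 0, 3, 5, 9, 12};
        System.out.println(search(nums, 9)); // 4  (Example 1)
        System.out.println(search(nums, 2)); // -1 (Example 2)
    }
}
```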

Binary Search · LeetCode · Easy
LeetCode 1482: Minimum Number of Days to Make m Bouquets – Binary Search on the Earliest Day


Try the Problem
You can practice the problem here: https://leetcode.com/problems/minimum-number-of-days-to-make-m-bouquets/

Problem Description

You are given an integer array bloomDay, where bloomDay[i] represents the day on which the i-th flower blooms. You are also given two integers:
- m → number of bouquets required
- k → number of adjacent flowers needed for one bouquet

Rules
To create one bouquet, you must use k adjacent flowers. Important constraints:
- A flower can only be used once.
- Only flowers that have already bloomed can be used.
- Flowers must be adjacent in the array.

Your goal is to determine the minimum number of days required so that it becomes possible to create m bouquets. If it is impossible, return -1.

Example Walkthrough

Example 1
Input: bloomDay = [1,10,3,10,2], m = 3, k = 1
Output: 3
Explanation: We need 3 bouquets of 1 flower each. Garden progress:
- Day 1: [x, _, _, _, _] → bouquets possible = 1
- Day 2: [x, _, _, _, x] → bouquets possible = 2
- Day 3: [x, _, x, _, x] → bouquets possible = 3 ✅
Minimum day = 3

Example 2
Input: bloomDay = [1,10,3,10,2], m = 3, k = 2
Output: -1
Explanation: We need 3 bouquets × 2 flowers = 6 flowers, but we only have 5 flowers. So it is impossible.

Example 3
Input: bloomDay = [7,7,7,7,12,7,7], m = 2, k = 3
Output: 12
Explanation:
- Day 7: [x,x,x,x,_,x,x] → only 1 bouquet can be made.
- Day 12: [x,x,x,x,x,x,x] → now 2 bouquets can be formed.
Minimum day = 12

Constraints
- 1 <= n <= 10^5 (where n = bloomDay.length)
- 1 <= bloomDay[i] <= 10^9
- 1 <= m <= 10^6
- 1 <= k <= n

Important observations: bloomDay[i] can be very large, the array size can be 100,000, and a brute-force simulation of every day would be too slow.

Thinking About the Problem

At first glance, this may look like a simulation problem where we check the garden day by day. However, that approach quickly becomes inefficient because the maximum bloom day can be as large as 10^9. Instead of checking every day sequentially, we need to search intelligently for the minimum valid day.

Key Observation

If we wait more days, more flowers bloom:
Days ↑ → Flowers available ↑ → Bouquets possible ↑
This means the function is monotonic: if it is possible to make m bouquets on day X, it is also possible on any day after X. This pattern strongly suggests Binary Search on Answer.

Approach: Binary Search on Minimum Day

Instead of checking every day, we search between the minimum bloom day and the maximum bloom day. For each candidate day mid:
1. Assume we wait until day mid.
2. Count how many bouquets we can make.
3. If we can make at least m bouquets, try smaller days; otherwise, increase the day.

How We Count Bouquets

While scanning the garden:
- If the flower has bloomed (bloomDay[i] <= mid) → extend the run of adjacent bloomed flowers.
- If a flower has not bloomed yet → the run is broken: convert the collected run into bouquets (run length / k) and reset the counter.

Java Implementation (Binary Search + Greedy Counting)

```java
class Solution {
    // Check whether we can make m bouquets if we wait until day 'mid'
    public boolean binaryS(int mid, int[] arr, int m, int k) {
        int flower = 0; // length of the current run of adjacent bloomed flowers
        int bouq = 0;   // bouquets formed so far
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] <= mid) {
                flower++;           // flower has bloomed — extend the run
            } else {
                bouq += flower / k; // run is broken — convert it into bouquets
                flower = 0;         // reset the adjacent counter
            }
        }
        bouq += flower / k;         // handle the final run after the loop
        return bouq >= m;
    }

    public int minDays(int[] bloomDay, int m, int k) {
        // If the required flowers exceed the total flowers, it's impossible
        if ((long) m * k > bloomDay.length) return -1;

        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        // Find the search range: [min bloom day, max bloom day]
        for (int a : bloomDay) {
            min = Math.min(min, a);
            max = Math.max(max, a);
        }

        int ans = -1;
        // Binary search on the day
        while (min <= max) {
            int mid = min + (max - min) / 2;
            if (binaryS(mid, bloomDay, m, k)) {
                ans = mid;     // mid works — try an earlier day
                max = mid - 1;
            } else {
                min = mid + 1; // need more days
            }
        }
        return ans;
    }
}
```

Dry Run Example

bloomDay = [1,10,3,10,2], m = 3, k = 1. Search range: 1 → 10.
- mid = 5 → possible bouquets = 3 → valid, try a smaller day
- mid = 2 → bouquets = 2 → not enough, increase the day
- mid = 3 → bouquets = 3 → valid
Minimum day: 3

Time Complexity
Binary search runs in O(log(maxDay - minDay)), and each check scans the array in O(n). Total: O(n log(10^9)), which is efficient for n = 10^5.

Space Complexity
O(1) — no extra space is required.

Key Takeaway
This problem is a classic example of Binary Search on Answer. Whenever a problem asks "find the minimum value such that a condition becomes true," binary search is often the best solution.

Conclusion
The Minimum Number of Days to Make m Bouquets problem teaches an important interview technique: transforming a simulation problem into a search problem. By recognizing the monotonic nature of the problem, we can apply binary search on the answer space and efficiently determine the minimum day required to create the desired number of bouquets. Mastering problems like this will significantly improve your understanding of binary search patterns, which are extremely common in coding interviews.
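A standalone harness covering all three examples from the post; the class name `BouquetDemo` and method names `enough`/`minDays` are my choices:

```java
// Hypothetical driver for the bouquet feasibility check + binary search above.
public class BouquetDemo {
    // Can we make m bouquets of k adjacent flowers by day 'day'?
    static boolean enough(int day, int[] bloom, int m, int k) {
        int run = 0, bouquets = 0;
        for (int b : bloom) {
            if (b <= day) run++;                    // extend run of bloomed flowers
            else { bouquets += run / k; run = 0; }  // run broken by an unbloomed flower
        }
        bouquets += run / k;                        // final run
        return bouquets >= m;
    }

    static int minDays(int[] bloom, int m, int k) {
        if ((long) m * k > bloom.length) return -1; // not enough flowers overall
        int lo = Integer.MAX_VALUE, hi = Integer.MIN_VALUE;
        for (int b : bloom) { lo = Math.min(lo, b); hi = Math.max(hi, b); }
        int ans = -1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (enough(mid, bloom, m, k)) { ans = mid; hi = mid - 1; }
            else lo = mid + 1;
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(minDays(new int[]{1, 10, 3, 10, 2}, 3, 1));    // 3
        System.out.println(minDays(new int[]{1, 10, 3, 10, 2}, 3, 2));    // -1
        System.out.println(minDays(new int[]{7, 7, 7, 7, 12, 7, 7}, 2, 3)); // 12
    }
}
```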

Binary Search · Arrays · Binary Search on Answer · LeetCode · Medium
LeetCode 875: Koko Eating Bananas – Find the Minimum Eating Speed Using Binary Search


Try the Problem
You can practice the problem here: https://leetcode.com/problems/koko-eating-bananas/

Problem Description

Koko loves eating bananas 🍌. You are given n piles of bananas, where the i-th pile contains piles[i] bananas. The guards have left and will return after h hours. Koko wants to finish eating all the bananas before the guards come back.

Koko can choose her banana-eating speed of k bananas per hour. However, there are some important rules:
- Every hour, Koko chooses one pile of bananas.
- She eats k bananas from that pile.
- If the pile contains fewer than k bananas, she eats all of them and stops for that hour.
- She cannot move to another pile in the same hour.

Your task is to determine the minimum integer value of k such that Koko can finish all the bananas within h hours.

Example Walkthrough

Example 1
Input: piles = [3,6,7,11], h = 8
Output: 4
Explanation: If Koko eats 4 bananas per hour, the time taken for each pile is:
- 3 bananas → 1 hour
- 6 bananas → 2 hours
- 7 bananas → 2 hours
- 11 bananas → 3 hours
Total time required: 1 + 2 + 2 + 3 = 8 hours. Since Koko finishes all bananas within 8 hours, the minimum speed is 4 bananas per hour.

Example 2
Input: piles = [30,11,23,4,20], h = 5
Output: 30

Example 3
Input: piles = [30,11,23,4,20], h = 6
Output: 23

Constraints
- 1 <= piles.length <= 10^4
- piles.length <= h <= 10^9
- 1 <= piles[i] <= 10^9

Important observations: a pile may contain up to one billion bananas, the number of hours can also be extremely large, and a naive solution may become computationally expensive.

Intuition Behind the Problem

The key observation is the relationship between eating speed and required hours. If Koko eats slowly, she needs more hours; if she eats faster, she needs fewer hours. This creates a monotonic relationship:
Eating Speed ↑ → Hours Required ↓
Because of this property, we can apply Binary Search on the answer. Instead of testing every possible eating speed, we can efficiently search for the correct speed.

Approach 1: Brute Force

Idea: try every possible eating speed from k = 1 to max(piles). For each speed, calculate the total hours required; if the hours are less than or equal to h, return that speed.

Drawback: the maximum pile size can be 10^9, so trying all speeds would take too long.
Time Complexity: O(max(pile) × n) — not feasible for large inputs.

Approach 2: Binary Search on Answer (Optimal Solution)

Since the answer lies between 1 and the maximum pile size, we can apply binary search.

Search Space
- Minimum possible speed: 1 banana/hour
- Maximum possible speed: max(piles)

Binary Search Strategy
1. Choose a middle value mid as the candidate eating speed.
2. Calculate how many hours Koko needs at this speed.
3. If the hours are less than or equal to h, try a smaller speed.
4. If the hours are greater than h, increase the speed.
This continues until we find the minimum valid speed.

Java Implementation

```java
class Solution {
    // Total hours needed if Koko eats at the given speed.
    // Note: 'hours' is a long — at very low speeds the total can exceed
    // int range (up to 10^4 piles of 10^9 bananas each).
    public long hourCalculate(int[] piles, int speed) {
        long hours = 0;
        // Traverse through each pile
        for (int i = 0; i < piles.length; i++) {
            // Hours needed for this pile, using ceiling division
            hours += (long) Math.ceil((double) piles[i] / speed);
        }
        return hours;
    }

    public int minEatingSpeed(int[] piles, int h) {
        int low = 1;                  // Minimum possible eating speed
        int high = Integer.MIN_VALUE; // Will become max(piles)
        // Find the maximum pile
        for (int pile : piles) {
            high = Math.max(high, pile);
        }

        int answer = high;
        // Binary Search over the speed space
        while (low <= high) {
            int mid = low + (high - low) / 2;
            // Calculate required hours at speed mid
            long requiredHours = hourCalculate(piles, mid);
            if (requiredHours <= h) {
                answer = mid;   // mid is a valid answer — try a slower speed
                high = mid - 1;
            } else {
                low = mid + 1;  // speed too slow
            }
        }
        return answer;
    }
}
```

(The original posted code accumulated hours in an int; at small candidate speeds that count can overflow, so long is used here.)

Time Complexity
Binary search runs in O(log(max(pile))), and each iteration calculates hours in O(n). Overall: O(n log(max(pile))).

Space Complexity
O(1) — no extra memory is required.

Key Takeaway
This problem is a classic example of Binary Search on Answer. Whenever a problem asks "find the minimum or maximum value such that a condition is satisfied," consider applying Binary Search on the answer space.

Conclusion
Instead of testing every possible eating speed, we used Binary Search to efficiently find the minimum speed that allows Koko to finish the bananas within the given number of hours. This approach dramatically improves performance and is a common technique in coding interview problems involving optimization and search-space reduction.
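A standalone harness for the three examples above; the class name `KokoDemo` and the integer-ceiling form `(p + speed - 1) / speed` (equivalent to the `Math.ceil` cast for positive ints) are my choices:

```java
// Hypothetical driver for the minimum-eating-speed binary search.
public class KokoDemo {
    // Hours needed at a given speed; long guards against overflow at tiny speeds.
    static long hoursNeeded(int[] piles, int speed) {
        long hours = 0;
        for (int p : piles) hours += (p + (long) speed - 1) / speed; // ceiling division
        return hours;
    }

    static int minEatingSpeed(int[] piles, int h) {
        int lo = 1, hi = 0;
        for (int p : piles) hi = Math.max(hi, p); // max pile = fastest useful speed
        int ans = hi;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (hoursNeeded(piles, mid) <= h) { ans = mid; hi = mid - 1; } // valid: try slower
            else lo = mid + 1;                                            // too slow: speed up
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(minEatingSpeed(new int[]{3, 6, 7, 11}, 8));       // 4
        System.out.println(minEatingSpeed(new int[]{30, 11, 23, 4, 20}, 5)); // 30
        System.out.println(minEatingSpeed(new int[]{30, 11, 23, 4, 20}, 6)); // 23
    }
}
```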

Binary Search · Binary Search on Answer · Arrays · LeetCode · Medium · Java
Floor in a Sorted Array – Binary Search Explained with Story & Visuals | GeeksforGeeks


Problem StatementPlatform: GeeksforGeeksYou are given a sorted array arr[] and an integer x. Your task is to find the index of the largest element in the array that is less than or equal to x.Return -1 if no such element exists.If multiple elements equal the floor, return the last occurrence.Example:Input: arr = [1, 2, 8, 10, 10, 12, 19], x = 11Output: 4βœ… The largest element ≀ 11 is 10. The last occurrence is at index 4.πŸ‘‰ Try this problem here: GeeksforGeeks – Floor in a Sorted ArrayIntuition: What is β€œFloor” and Why It MattersImagine climbing stairs:You want to step as high as possible without going past a certain height.That step is your floor – the largest number ≀ x.In arrays:The floor of x is the largest number smaller than or equal to x.Because the array is sorted, we can search efficiently with binary search instead of checking every element.This is faster and helps you handle large arrays with millions of elements.Multiple Approaches1️⃣ Linear Search (Easy but Slow)Check each element from left to right. 
If it’s ≀ x, update the answer.

```java
int ans = -1;
for (int i = 0; i < arr.length; i++) {
    if (arr[i] <= x) {
        ans = i; // store last occurrence
    }
}
return ans;
```

Time Complexity: O(n) – slow for large arrays
Space Complexity: O(1) – constant memory

2️⃣ Binary Search (Fast & Efficient)
Binary search cuts the search space in half at every step. Note that the equal and less-than cases do the same thing (store mid as a candidate, move right), so they merge into one branch:

```java
int ans = -1;
int low = 0, high = arr.length - 1;
while (low <= high) {
    int mid = low + (high - low) / 2;
    if (arr[mid] <= x) {
        ans = mid;       // candidate floor
        low = mid + 1;   // move right for last occurrence
    } else {
        high = mid - 1;  // too large, move left
    }
}
return ans;
```

Time Complexity: O(log n) – very fast
Space Complexity: O(1) – no extra space

Dry Run / Step-by-Step
Input: arr = [1, 2, 8, 10, 10, 12, 19], x = 11

| Step | low | high | mid | arr[mid] | ans | Action |
|------|-----|------|-----|----------|-----|--------|
| 1 | 0 | 6 | 3 | 10 | 3 | arr[mid] < x β†’ move right |
| 2 | 4 | 6 | 5 | 12 | 3 | arr[mid] > x β†’ move left |
| 3 | 4 | 4 | 4 | 10 | 4 | arr[mid] < x β†’ move right |
| 4 | 5 | 4 | – | – | 4 | low > high β†’ stop, return 4 |

βœ… Finds floor = 10 at index 4.

Code Explanation in Simple Words
- ans = -1 β†’ stores the best candidate for the floor.
- Use low and high as binary search boundaries.
- mid = low + (high - low)/2 β†’ safe midpoint (no overflow).
- If arr[mid] <= x, it can be the floor β†’ move right to find the last occurrence.
- If arr[mid] > x, move left β†’ the floor is smaller.
- The loop ends when low > high; return ans.

Edge Cases to Remember
- x < arr[0] β†’ return -1 (floor doesn’t exist)
- x β‰₯ arr[n-1] β†’ return the last index (floor is the last element)
- Duplicates β†’ always return the last occurrence

Story-Based Visual Example: β€œAlice’s Book Shelf Adventure” πŸ“š
Scenario: Alice is a librarian. Books are arranged by height on a shelf. She has a new book and wants to place it next to the tallest book shorter than or equal to hers. Instead of checking each book, she uses a binary search approach to find the position quickly.

"Alice is scanning the bookshelf, which represents a sorted array: [1, 2, 8, 10, 10, 12, 19]. She is thinking where to place her new book labeled 11.
This step represents the initial step of the floor algorithm, understanding the array elements."

"Alice places the book labeled 11 right after the last 10 on the shelf. This demonstrates finding the floor: the largest number ≀ 11 is 10, and the book is positioned next to it, illustrating the last occurrence logic."

"From a top view, Alice is scanning all the books. This shows how binary search would conceptually divide the array: she quickly decides which section the book 11 belongs to without checking every book, demonstrating efficient search."

"Alice has successfully placed the book 11 at the correct position. The floor of 11 is 10 (index 4). This visual confirms the algorithm’s result: the new element is positioned immediately after the last element ≀ x, exactly as binary search would determine."

Why This Problem is Important
- Strengthens binary search skills
- Teaches handling of last occurrence / boundary conditions
- Makes you think algorithmically, not just about numbers
- Story-based learning improves retention and understanding

Conclusion
- Linear search: easy but slow (O(n))
- Binary search: fast, elegant (O(log n))
- Multiple dry run steps make it easy to follow
- Story-based images make abstract concepts concrete and memorable
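The edge cases listed above are easy to exercise with a self-contained sketch of the floor search. The `FloorDemo` and `findFloor` names are my own, not from the article:

```java
public class FloorDemo {
    // Largest index i with arr[i] <= x (last occurrence), or -1 if none.
    // Mirrors the article's binary search, with the equal and less-than
    // cases merged since both store mid and move right.
    static int findFloor(int[] arr, int x) {
        int ans = -1, low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] <= x) {
                ans = mid;        // candidate floor; keep searching right
                low = mid + 1;
            } else {
                high = mid - 1;   // arr[mid] too large; go left
            }
        }
        return ans;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 8, 10, 10, 12, 19};
        System.out.println(findFloor(arr, 11));  // 4  (last occurrence of 10)
        System.out.println(findFloor(arr, 0));   // -1 (x < arr[0])
        System.out.println(findFloor(arr, 19));  // 6  (x >= last element)
    }
}
```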

GeeksforGeeks Β· Binary Search Β· Easy
Ceil in a Sorted Array – Binary Search Explained with Story & Visuals | GeeksforGeeks

Try This Problem First
Platform: GeeksforGeeks
πŸ‘‰ Try this problem here: Ceil in a Sorted Array – GeeksforGeeks

Problem Statement
You are given a sorted array arr[] and an integer x. Your task is to find the index of the smallest element in the array that is greater than or equal to x. If no such element exists, return -1. If multiple elements equal the ceil, return the first occurrence.

Example:
Input: arr = [1, 2, 8, 10, 11, 12, 19], x = 5
Output: 2
Explanation: The smallest element β‰₯ 5 is 8, at index 2.

Intuition
Think of the problem as finding the first step you can reach without falling short: the ceil of x is the smallest number β‰₯ x. Since the array is sorted, we can use binary search to quickly locate the answer instead of checking each element. Linear search is simple but slow for large arrays; binary search gives an efficient O(log n) solution.

Multiple Approaches

1️⃣ Linear Search (Easy to Understand)

```java
int ans = -1;
for (int i = 0; i < arr.length; i++) {
    if (arr[i] >= x) {
        ans = i; // first occurrence
        break;
    }
}
return ans;
```

Time Complexity: O(n)
Space Complexity: O(1)
βœ… Works for small arrays
❌ Slow for large arrays

2️⃣ Binary Search (Optimized & Fast)
The equal and greater-than cases do the same thing (store mid as a candidate, move left), so they merge into one branch:

```java
int ans = -1;
int low = 0, high = arr.length - 1;
while (low <= high) {
    int mid = low + (high - low) / 2;
    if (arr[mid] >= x) {
        ans = mid;       // candidate ceil
        high = mid - 1;  // move left for first occurrence
    } else {
        low = mid + 1;   // arr[mid] < x β†’ move right
    }
}
return ans;
```

Time Complexity: O(log n)
Space Complexity: O(1)
βœ… Efficient for large arrays
βœ… Automatically returns the first occurrence

Dry Run
Input: arr = [1, 2, 8, 10, 11, 12, 19], x = 5

| Step | low | high | mid | arr[mid] | ans | Action |
|------|-----|------|-----|----------|-----|--------|
| 1 | 0 | 6 | 3 | 10 | 3 | arr[mid] > x β†’ move left |
| 2 | 0 | 2 | 1 | 2 | 3 | arr[mid] < x β†’ move right |
| 3 | 2 | 2 | 2 | 8 | 2 | arr[mid] > x β†’ move left |
| 4 | 2 | 1 | – | – | 2 | low > high β†’ stop, return 2 |

βœ… Binary search finds ceil = 8 at index 2.

Why This Problem is Important
- Teaches binary search for the first occurrence
- Strengthens understanding of ceil/floor concepts
- Visualization through story improves understanding and retention
- Prepares for coding interviews and competitive programming

Conclusion
- Linear search: simple but slow (O(n))
- Binary search: fast and efficient (O(log n))
- Story-based visualization helps you learn, not just memorize
- Using numbers on books in images makes abstract concepts concrete
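To try the ceil search end to end, here is a minimal runnable sketch; the `CeilDemo` and `findCeil` names are my own, not from the article:

```java
public class CeilDemo {
    // Smallest index i with arr[i] >= x (first occurrence), or -1 if none.
    static int findCeil(int[] arr, int x) {
        int ans = -1, low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] >= x) {
                ans = mid;        // candidate ceil; keep searching left
                high = mid - 1;
            } else {
                low = mid + 1;    // arr[mid] too small; go right
            }
        }
        return ans;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 8, 10, 11, 12, 19};
        System.out.println(findCeil(arr, 5));   // 2  (8 is the first element >= 5)
        System.out.println(findCeil(arr, 20));  // -1 (nothing >= 20)
        System.out.println(findCeil(arr, 1));   // 0
    }
}
```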

GeeksForGeeks Β· Easy Β· Binary Search Β· Sorted Array
LeetCode 3488 β€” Closest Equal Element Queries: A Complete Walkthrough from Brute Force to Optimal

If you have been grinding LeetCode lately, you have probably run into problems where your first clean-looking solution times out and forces you to rethink from scratch. LeetCode 3488 is exactly that kind of problem. This article walks through the complete thought process β€” from the naive approach that got me TLE, to the intuition shift, to the final optimized solution in Java.

You can find the original problem here: LeetCode 3488 β€” Closest Equal Element Queries at Problem Link

Understanding the Problem
You are given a circular array nums and an array of queries. For each query queries[i], you must find the minimum distance between the element at index queries[i] and any other index j such that nums[j] == nums[queries[i]]. If no such other index exists, the answer is -1.

The critical detail here is the word circular. The array wraps around, which means the distance between two indices i and j in an array of length n is not simply |i - j|. It is:

min( |i - j| , n - |i - j| )

You can travel either clockwise or counterclockwise, and you take whichever path is shorter.

Breaking Down the Examples

Example 1
nums = [1, 3, 1, 4, 1, 3, 2], queries = [0, 3, 5]
- For query index 0, the value is 1. Other indices holding 1 are 2 and 4. Circular distances are min(2, 5) = 2 and min(4, 3) = 3. The minimum is 2.
- For query index 3, the value is 4. It appears nowhere else in the array. Answer is -1.
- For query index 5, the value is 3. The other 3 sits at index 1. Circular distance is min(4, 3) = 3. Answer is 3.
Output: [2, -1, 3]

Example 2
nums = [1, 2, 3, 4], queries = [0, 1, 2, 3]
Every element is unique. Every query returns -1.
Output: [-1, -1, -1, -1]

First Attempt β€” Brute Force
My first instinct was straightforward. For each query, scan the entire array, collect every index that matches the queried value, compute the circular distance to each, and return the minimum.
Clean logic, easy to reason about, and dead simple to implement.

```java
while (i != queries.length) {
    int max = Integer.MAX_VALUE;
    for (int j = 0; j < nums.length; j++) {
        int target = nums[queries[i]];
        if (nums[j] == target && j != queries[i]) {
            // Linear distance between the two indices
            int right = Math.abs(j - queries[i]);
            // Distance going the other direction around the ring
            int left = nums.length - right;
            // True circular distance is the shorter of the two
            int dist = Math.min(right, left);
            max = Math.min(max, dist);
        }
    }
    lis.add(max == Integer.MAX_VALUE ? -1 : max);
    i++;
}
```

This got TLE immediately, and once you look at the constraints it is obvious why. Both nums.length and queries.length can be up to 10^5. For every query you are scanning every element, giving you O(n Γ— q) time β€” which in the worst case is 10 billion operations. No judge is going to wait for that.

Rethinking the Approach β€” Where Is the Waste?
After the TLE, the question I asked myself was: what work is being repeated unnecessarily? The answer was obvious in hindsight. Every time a query asks about a value like 3, the brute force scans the entire array again looking for every index that holds 3. If ten different queries all ask about value 3, you are doing that scan ten times. You are finding the same indices over and over.

The fix is to do that work exactly once, before any query is processed. You precompute a map from each value to all the indices where it appears. Then for every query you simply look up the relevant list and work within it.

That observation reduces the precomputation to O(n) β€” one pass through the array. The question then becomes: once you have that sorted list of indices for a given value, how do you find the closest one to your query index efficiently?

The Key Insight β€” You Only Need Two Neighbours
Here is the insight that makes this problem elegant. The index list for any value is sorted in ascending order because you build it by iterating left to right through the array.
If your query index sits at position mid inside that sorted list, then by definition every index to the left of mid - 1 is farther away than arr[mid - 1], and every index to the right of mid + 1 is farther away than arr[mid + 1]. This means you never need to compare against all duplicates. You only ever need to check the immediate left and right neighbours of your query index within the sorted list.

The one subtlety is the circular wrap. Because the array itself is circular, the left neighbour of the very first element in the list is actually the last element in the list, since you can wrap around the ring. This is handled cleanly with modular arithmetic: (mid - 1 + n) % n for the left neighbour and (mid + 1) % n for the right.

The Optimized Solution β€” HashMap + Binary Search

Step 1 β€” Precompute the index map
Iterate through nums once and build a HashMap mapping each value to a list of all indices where it appears. The lists are sorted by construction since you insert indices in order.

Step 2 β€” Binary search to locate the query index
For a given query at index q, look up the index list for nums[q]. Binary search the list to find the position of q within it. This runs in O(log n) rather than O(n).

Step 3 β€” Check immediate neighbours and compute circular distances
Once you have the position mid, fetch arr[(mid + 1) % n] and arr[(mid - 1 + n) % n]. For each, compute the circular distance using min(|diff|, totalLength - |diff|). Return the smaller of the two.

Full Annotated Java Solution

```java
class Solution {
    public List<Integer> solveQueries(int[] nums, int[] queries) {
        int c = 0;
        // Precompute: map each value to the sorted list of indices where it appears.
        // Since we iterate left to right, the list is sorted by construction.
        HashMap<Integer, List<Integer>> mp = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            mp.computeIfAbsent(nums[i], k -> new ArrayList<>()).add(i);
        }

        List<Integer> lis = new ArrayList<>();
        while (c != queries.length) {
            // Retrieve the sorted index list for the value at the queried position
            List<Integer> arr = mp.get(nums[queries[c]]);
            int n = arr.size();
            int i = 0;
            int j = n - 1;
            int min = -1;
            while (i <= j) {
                int mid = i + (j - i) / 2;
                if (arr.get(mid) == queries[c]) {
                    // Only one occurrence in the entire array β€” no duplicate exists
                    if (n == 1) {
                        min = -1;
                    } else {
                        // Circular neighbours within the index list
                        int right = arr.get((mid + 1) % n);
                        int left = arr.get((mid - 1 + n) % n);
                        // Compute circular distance to the right neighbour
                        int d1 = Math.abs(right - queries[c]);
                        int distRight = Math.min(d1, nums.length - d1);
                        // Compute circular distance to the left neighbour
                        int d2 = Math.abs(left - queries[c]);
                        int distLeft = Math.min(d2, nums.length - d2);
                        // The answer is the closer of the two neighbours
                        min = Math.min(distLeft, distRight);
                    }
                    break;
                } else if (arr.get(mid) > queries[c]) {
                    // Query index is smaller β€” search the left half
                    j = mid - 1;
                } else {
                    // Query index is larger β€” search the right half
                    i = mid + 1;
                }
            }
            lis.add(min);
            c++;
        }
        return lis;
    }
}
```

Complexity Analysis
Time Complexity: O(n log n). Building the HashMap takes O(n). For each of the q queries, binary search over the index list takes O(log n) in the worst case. Total: O(n + q log n), which simplifies to O(n log n) given the constraint that q ≀ n.
Space Complexity: O(n). The HashMap stores every index exactly once across all its lists, so total space used is O(n).

Compared to the brute force O(n Γ— q), this is the difference between ~1.7 million operations and ~10 billion operations at the constraint limits.

Common Pitfalls

Mixing up the two values of n.
Inside the solution, n refers to arr.size() β€” the number of occurrences of a particular value. But when computing circular distance, you need nums.length β€” the full array length. These are different numbers, and swapping them silently produces wrong answers.

Forgetting the + n in the left neighbour formula. Writing (mid - 1) % n when mid is 0 produces -1 in Java, since Java's modulo preserves the sign of the dividend. Always write (mid - 1 + n) % n.

Not handling the single-occurrence case. If a value appears only once, n == 1, and the neighbour formula wraps around to the element itself, giving a distance of zero β€” which is completely wrong. Guard against this explicitly before running the neighbour logic.

What This Problem Teaches You
The journey from brute force to optimal here follows a pattern worth internalizing. The brute force was correct but repeated work. Recognizing that repeated work and lifting it into a precomputation step is the single move that makes this problem tractable. The HashMap does that.

Once you have a sorted structure, binary search is almost always the right tool to find a position within it. And once you have a position in a sorted structure, you only ever need to look at adjacent elements to find the nearest one β€” checking anything further is redundant by definition.

These are not tricks specific to this problem. They are transferable patterns that appear across dozens of medium and hard problems on the platform. Internalizing them β€” rather than memorizing solutions β€” is what actually builds problem-solving ability over time.
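The modulo pitfall is easy to verify directly. A tiny demo of Java's sign-preserving `%`, plus `Math.floorMod` as a library alternative:

```java
public class ModDemo {
    public static void main(String[] args) {
        int n = 5;
        int mid = 0;
        // Java's % keeps the sign of the dividend, so (mid - 1) % n is -1 here,
        // which would be an invalid list index. Adding n first keeps the
        // result in [0, n).
        System.out.println((mid - 1) % n);             // -1
        System.out.println((mid - 1 + n) % n);         // 4
        System.out.println(Math.floorMod(mid - 1, n)); // 4
    }
}
```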

Arrays Β· HashMap Β· Binary Search Β· Circular Arrays Β· Medium Β· LeetCode Β· Java
Recursion in Java - Complete Guide With Examples and Practice Problems

IntroductionIf there is one topic in programming that confuses beginners more than anything else, it is recursion. Most people read the definition, nod their head, and then immediately freeze when they have to write recursive code themselves.The problem is not that recursion is genuinely hard. The problem is that most explanations start with code before building the right mental model. Once you have the right mental model, recursion clicks permanently and you start seeing it everywhere β€” in tree problems, graph problems, backtracking, dynamic programming, divide and conquer, and more.This guide covers everything from the ground up. What recursion is, how the call stack works, how to identify base cases and recursive cases, every type of recursion, common patterns, time and space complexity analysis, the most common mistakes, and the top LeetCode problems to practice.By the end of this article, recursion will not feel like magic anymore. It will feel like a natural tool you reach for confidently.What Is Recursion?Recursion is when a function calls itself to solve a smaller version of the same problem.That is the complete definition. But let us make it concrete.Imagine you want to count down from 5 to 1. One way is a loop. Another way is β€” print 5, then solve the exact same problem for counting down from 4 to 1. Then print 4, solve for 3. And so on until you reach the base β€” there is nothing left to count down.void countDown(int n) { if (n == 0) return; // stop here System.out.println(n); countDown(n - 1); // solve the smaller version}The function countDown calls itself with a smaller input each time. Eventually it reaches 0 and stops. That stopping condition is the most important part of any recursive function β€” the base case.The Two Parts Every Recursive Function Must HaveEvery correctly written recursive function has exactly two parts. 
Without both, the function either gives wrong answers or runs forever.Part 1: Base CaseThe base case is the condition under which the function stops calling itself and returns a direct answer. It is the smallest version of the problem that you can solve without any further recursion.Without a base case, recursion never stops and you get a StackOverflowError β€” Java's way of telling you the call stack ran out of memory.Part 2: Recursive CaseThe recursive case is where the function calls itself with a smaller or simpler input β€” moving closer to the base case with each call. If your recursive case does not make the problem smaller, you have an infinite loop.Think of it like a staircase. The base case is the ground floor. The recursive case is each step going down. Every step must genuinely bring you one level closer to the ground.How Recursion Works β€” The Call StackThis is the mental model that most explanations skip, and it is the reason recursion confuses people.Every time a function is called in Java, a new stack frame is created and pushed onto the call stack. This frame stores the function's local variables, parameters, and where to return to when the function finishes.When a recursive function calls itself, a new frame is pushed on top. When that call finishes, its frame is popped and execution returns to the previous frame.Let us trace countDown(3) through the call stack:countDown(3) called β†’ frame pushed prints 3 calls countDown(2) β†’ frame pushed prints 2 calls countDown(1) β†’ frame pushed prints 1 calls countDown(0) β†’ frame pushed n == 0, return β†’ frame popped back in countDown(1), return β†’ frame popped back in countDown(2), return β†’ frame popped back in countDown(3), return β†’ frame poppedOutput: 3, 2, 1The call stack grows as calls go deeper, then shrinks as calls return. 
This is why recursion uses O(n) space for n levels deep β€” each level occupies one stack frame in memory.Your First Real Recursive Function β€” FactorialFactorial is the classic first recursion example. n! = n Γ— (n-1) Γ— (n-2) Γ— ... Γ— 1Notice the pattern β€” n! = n Γ— (n-1)!. The factorial of n is n times the factorial of n-1. That recursive structure makes it perfect for recursion.public int factorial(int n) { // base case if (n == 0 || n == 1) return 1; // recursive case return n * factorial(n - 1);}Dry Run β€” factorial(4)factorial(4)= 4 * factorial(3)= 4 * 3 * factorial(2)= 4 * 3 * 2 * factorial(1)= 4 * 3 * 2 * 1= 24The call stack builds up going in, then multiplications happen coming back out. This "coming back out" phase is called the return phase or unwinding of the stack.Time Complexity: O(n) β€” n recursive calls Space Complexity: O(n) β€” n frames on the call stackThe Two Phases of RecursionEvery recursive function has two phases and understanding both is critical.Phase 1: The Call Phase (Going In)This happens as the function keeps calling itself with smaller inputs. Things you do before the recursive call happen in this phase β€” in order from the outermost call to the innermost.Phase 2: The Return Phase (Coming Back Out)This happens as each call finishes and returns to its caller. Things you do after the recursive call happen in this phase β€” in reverse order, from the innermost call back to the outermost.This distinction explains why the output order can be surprising:void printBothPhases(int n) { if (n == 0) return; System.out.println("Going in: " + n); // call phase printBothPhases(n - 1); System.out.println("Coming out: " + n); // return phase}For printBothPhases(3):Going in: 3Going in: 2Going in: 1Coming out: 1Coming out: 2Coming out: 3This two-phase understanding is what makes problems like reversing a string or printing a linked list backwards via recursion feel natural.Types of RecursionRecursion is not one-size-fits-all. 
There are several distinct types and knowing which type applies to a problem shapes how you write the solution.1. Direct RecursionThe function calls itself directly. This is the most common type β€” what we have seen so far.void direct(int n) { if (n == 0) return; direct(n - 1); // calls itself}2. Indirect RecursionFunction A calls Function B which calls Function A. They form a cycle.void funcA(int n) { if (n <= 0) return; System.out.println("A: " + n); funcB(n - 1);}void funcB(int n) { if (n <= 0) return; System.out.println("B: " + n); funcA(n - 1);}Used in: state machines, mutual recursion in parsers, certain mathematical sequences.3. Tail RecursionThe recursive call is the last operation in the function. Nothing happens after the recursive call returns β€” no multiplication, no addition, nothing.// NOT tail recursive β€” multiplication happens after returnint factorial(int n) { if (n == 1) return 1; return n * factorial(n - 1); // multiply after return β€” not tail}// Tail recursive β€” recursive call is the last thingint factorialTail(int n, int accumulator) { if (n == 1) return accumulator; return factorialTail(n - 1, n * accumulator); // last operation}Why does tail recursion matter? In languages that support tail call optimization (like Scala, Kotlin, and many functional languages), tail recursive functions can be converted to iteration internally β€” no stack frame accumulation, O(1) space. Java does NOT perform tail call optimization, but understanding tail recursion is still important for interviews and functional programming concepts.4. Head RecursionThe recursive call happens first, before any other processing. All processing happens in the return phase.void headRecursion(int n) { if (n == 0) return; headRecursion(n - 1); // call first System.out.println(n); // process after}// Output: 1 2 3 4 5 (processes in reverse order of calls)5. Tree RecursionThe function makes more than one recursive call per invocation. 
This creates a tree of calls rather than a linear chain. Fibonacci is the classic example.int fibonacci(int n) { if (n <= 1) return n; return fibonacci(n - 1) + fibonacci(n - 2); // TWO recursive calls}The call tree for fibonacci(4): fib(4) / \ fib(3) fib(2) / \ / \ fib(2) fib(1) fib(1) fib(0) / \ fib(1) fib(0)Time Complexity: O(2ⁿ) β€” exponential! Each call spawns two more. Space Complexity: O(n) β€” maximum depth of the call treeThis is why memoization (caching results) is so important for tree recursion β€” it converts O(2ⁿ) to O(n) by never recomputing the same subproblem twice.6. Mutual RecursionA specific form of indirect recursion where two functions call each other alternately to solve a problem. Different from indirect recursion in that the mutual calls are the core mechanism of the solution.// Check if a number is even or odd using mutual recursionboolean isEven(int n) { if (n == 0) return true; return isOdd(n - 1);}boolean isOdd(int n) { if (n == 0) return false; return isEven(n - 1);}Common Recursion Patterns in DSAThese are the patterns you will see over and over in interview problems. 
Recognizing them is more important than memorizing solutions.Pattern 1: Linear Recursion (Do Something, Recurse on Rest)Process the current element, then recurse on the remaining problem.// Sum of arrayint arraySum(int[] arr, int index) { if (index == arr.length) return 0; // base case return arr[index] + arraySum(arr, index + 1); // current + rest}Pattern 2: Divide and Conquer (Split Into Two Halves)Split the problem into two halves, solve each recursively, combine results.// Merge Sortvoid mergeSort(int[] arr, int left, int right) { if (left >= right) return; // base case β€” single element int mid = (left + right) / 2; mergeSort(arr, left, mid); // sort left half mergeSort(arr, mid + 1, right); // sort right half merge(arr, left, mid, right); // combine}Pattern 3: Backtracking (Try, Recurse, Undo)Try a choice, recurse to explore it, undo the choice when backtracking.// Generate all subsetsvoid subsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { if (index == nums.length) { result.add(new ArrayList<>(current)); return; } // Choice 1: include nums[index] current.add(nums[index]); subsets(nums, index + 1, current, result); current.remove(current.size() - 1); // undo // Choice 2: exclude nums[index] subsets(nums, index + 1, current, result);}Pattern 4: Tree Recursion (Left, Right, Combine)Recurse on left subtree, recurse on right subtree, combine or process results.// Height of binary treeint height(TreeNode root) { if (root == null) return 0; // base case int leftHeight = height(root.left); // solve left int rightHeight = height(root.right); // solve right return 1 + Math.max(leftHeight, rightHeight); // combine}Pattern 5: Memoization (Cache Recursive Results)Store results of recursive calls so the same subproblem is never solved twice.Map<Integer, Integer> memo = new HashMap<>();int fibonacci(int n) { if (n <= 1) return n; if (memo.containsKey(n)) return memo.get(n); // return cached int result = fibonacci(n - 1) + fibonacci(n - 2); 
memo.put(n, result); // cache before returning return result;}This converts Fibonacci from O(2ⁿ) to O(n) time with O(n) space β€” a massive improvement.Recursion vs Iteration β€” When to Use WhichThis is one of the most common interview questions about recursion. Here is a clear breakdown:Use Recursion when:The problem has a naturally recursive structure (trees, graphs, divide and conquer)The solution is significantly cleaner and easier to understand recursivelyThe problem involves exploring multiple paths or choices (backtracking)The depth of recursion is manageable (not too deep to cause stack overflow)Use Iteration when:The problem is linear and a loop is equally clearMemory is a concern (iteration uses O(1) stack space vs O(n) for recursion)Performance is critical and function call overhead mattersJava's stack size limit could be hit (default around 500-1000 frames for deep recursion)The key rule: Every recursive solution can be converted to an iterative one (usually using an explicit stack). 
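That recursion-to-iteration rule can be made concrete. Below is a sketch of my own (the `TreeNode` here is a minimal stand-in, not LeetCode's exact class) converting a recursive preorder traversal into an iterative one where an explicit stack plays the role of the call stack:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ExplicitStackDemo {
    // Minimal binary tree node for illustration.
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int v) { val = v; }
    }

    // Recursive preorder: visit node, then left subtree, then right subtree.
    static void preorderRecursive(TreeNode root, List<Integer> out) {
        if (root == null) return;          // base case
        out.add(root.val);
        preorderRecursive(root.left, out);
        preorderRecursive(root.right, out);
    }

    // Same traversal with an explicit stack replacing the call stack.
    // Push right before left so the left child is processed first.
    static List<Integer> preorderIterative(TreeNode root) {
        List<Integer> out = new ArrayList<>();
        Deque<TreeNode> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            TreeNode node = stack.pop();
            out.add(node.val);
            if (node.right != null) stack.push(node.right);
            if (node.left != null) stack.push(node.left);
        }
        return out;
    }

    public static void main(String[] args) {
        TreeNode root = new TreeNode(1);
        root.left = new TreeNode(2);
        root.right = new TreeNode(3);
        root.left.left = new TreeNode(4);

        List<Integer> rec = new ArrayList<>();
        preorderRecursive(root, rec);
        System.out.println(rec);                     // [1, 2, 4, 3]
        System.out.println(preorderIterative(root)); // [1, 2, 4, 3]
    }
}
```

Both versions produce the same order; the explicit stack simply stores the nodes the call stack would otherwise hold.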
But recursive solutions for tree and graph problems are almost always cleaner to write and understand.Time and Space Complexity of Recursive FunctionsAnalyzing complexity for recursive functions requires a specific approach.The Recurrence Relation MethodExpress the time complexity as a recurrence relation and solve it.Factorial:T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = T(1) + nΓ—O(1) = O(n)Fibonacci (naive):T(n) = T(n-1) + T(n-2) + O(1) β‰ˆ 2Γ—T(n-1) = O(2ⁿ)Binary Search:T(n) = T(n/2) + O(1) = O(log n) [by Master Theorem]Merge Sort:T(n) = 2Γ—T(n/2) + O(n) = O(n log n) [by Master Theorem]Space Complexity Rule for RecursionSpace complexity of a recursive function = maximum depth of the call stack Γ— space per frameLinear recursion (factorial, sum): O(n) spaceBinary recursion (Fibonacci naive): O(n) space (maximum depth, not number of nodes)Divide and conquer (merge sort): O(log n) space (depth of recursion tree)Memoized Fibonacci: O(n) space (memo table + call stack)Classic Recursive Problems With SolutionsProblem 1: Reverse a StringString reverse(String s) { if (s.length() <= 1) return s; // base case // last char + reverse of everything before last char return s.charAt(s.length() - 1) + reverse(s.substring(0, s.length() - 1));}Dry run for "hello":reverse("hello") = 'o' + reverse("hell")reverse("hell") = 'l' + reverse("hel")reverse("hel") = 'l' + reverse("he")reverse("he") = 'e' + reverse("h")reverse("h") = "h"Unwinding: "h" β†’ "he" β†’ "leh" β†’ "lleh" β†’ "olleh" βœ…Problem 2: Power Function (x^n)double power(double x, int n) { if (n == 0) return 1; // base case if (n < 0) return 1.0 / power(x, -n); // handle negative if (n % 2 == 0) { double half = power(x, n / 2); return half * half; // x^n = (x^(n/2))^2 } else { return x * power(x, n - 1); }}This is the fast power algorithm β€” O(log n) time instead of O(n).Problem 3: Fibonacci With Memoizationint[] memo = new int[100];Arrays.fill(memo, -1);int fib(int n) { if (n <= 1) return n; if (memo[n] != -1) return 
memo[n]; memo[n] = fib(n - 1) + fib(n - 2); return memo[n];}Time: O(n) β€” each value computed once Space: O(n) β€” memo array + call stackProblem 4: Tower of HanoiThe classic recursion teaching problem. Move n disks from source to destination using a helper rod.void hanoi(int n, char source, char destination, char helper) { if (n == 1) { System.out.println("Move disk 1 from " + source + " to " + destination); return; } // Move n-1 disks from source to helper hanoi(n - 1, source, helper, destination); // Move the largest disk from source to destination System.out.println("Move disk " + n + " from " + source + " to " + destination); // Move n-1 disks from helper to destination hanoi(n - 1, helper, destination, source);}Time Complexity: O(2ⁿ) β€” minimum moves required is 2ⁿ - 1 Space Complexity: O(n) β€” call stack depthProblem 5: Generate All Subsets (Power Set)void generateSubsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { result.add(new ArrayList<>(current)); // add current subset for (int i = index; i < nums.length; i++) { current.add(nums[i]); // include generateSubsets(nums, i + 1, current, result); // recurse current.remove(current.size() - 1); // exclude (backtrack) }}For [1, 2, 3] β€” generates all 8 subsets: [], [1], [1,2], [1,2,3], [1,3], [2], [2,3], [3]Time: O(2ⁿ) β€” 2ⁿ subsets Space: O(n) β€” recursion depthProblem 6: Binary Search Recursivelyint binarySearch(int[] arr, int target, int left, int right) { if (left > right) return -1; // base case β€” not found int mid = left + (right - left) / 2; if (arr[mid] == target) return mid; else if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right); else return binarySearch(arr, target, left, mid - 1);}Time: O(log n) β€” halving the search space each time Space: O(log n) β€” log n frames on the call stackRecursion on Trees β€” The Natural HabitatTrees are where recursion truly shines. 
Every tree problem becomes elegant with recursion because a tree is itself a recursive structure β€” each node's left and right children are trees themselves.// Maximum depth of binary treeint maxDepth(TreeNode root) { if (root == null) return 0; return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));}// Check if tree is symmetricboolean isSymmetric(TreeNode left, TreeNode right) { if (left == null && right == null) return true; if (left == null || right == null) return false; return left.val == right.val && isSymmetric(left.left, right.right) && isSymmetric(left.right, right.left);}// Path sum β€” does any root-to-leaf path sum to target?boolean hasPathSum(TreeNode root, int target) { if (root == null) return false; if (root.left == null && root.right == null) return root.val == target; return hasPathSum(root.left, target - root.val) || hasPathSum(root.right, target - root.val);}Notice the pattern in all three β€” base case handles null, recursive case handles left and right subtrees, result combines both.How to Think About Any Recursive Problem β€” Step by StepThis is the framework you should apply to every new recursive problem you encounter:Step 1 β€” Identify the base case What is the smallest input where you know the answer directly without any recursion? For arrays it is usually empty array or single element. For trees it is null node. For numbers it is 0 or 1.Step 2 β€” Trust the recursive call Assume your function already works correctly for smaller inputs. Do not trace through the entire recursion mentally β€” just trust it. This is the Leap of Faith and it is what makes recursion feel natural.Step 3 β€” Express the current problem in terms of smaller problems How does the answer for size n relate to the answer for size n-1 (or n/2, or subtrees)? This relationship is your recursive case.Step 4 β€” Make sure each call moves toward the base case The input must become strictly smaller with each call. 
If it does not, you have infinite recursion.

Step 5 — Write the base case first, then the recursive case. Always. Writing the recursive case first leads to bugs because you have not defined when to stop.

Common Mistakes and How to Avoid Them

Mistake 1: Missing or wrong base case
The most common mistake. A missing base case causes StackOverflowError; a wrong base case causes wrong answers. Always ask — what is the simplest possible input, and what should the function return for it? Write that case first.

Mistake 2: Not moving toward the base case
If you call factorial(n) inside factorial(n) without reducing n, you loop forever. Every recursive call must make the problem strictly smaller.

Mistake 3: Trusting your brain to trace deep recursion
Do not try to trace 10 levels of recursion in your head. Trust the recursive call, verify the base case, and check that each call reduces the problem. That is all you need.

Mistake 4: Forgetting to return the recursive result

```java
// WRONG — the result is never returned. (In Java this line does not even
// compile, because a bare arithmetic expression is not a valid statement;
// in languages where it does compile, the result is silently discarded.)
int sum(int n) {
    if (n == 0) return 0;
    sum(n - 1) + n; // computed but discarded!
}

// CORRECT
int sum(int n) {
    if (n == 0) return 0;
    return sum(n - 1) + n;
}
```

Mistake 5: Modifying shared state without backtracking
In backtracking problems, if you add something to a list before a recursive call, you must remove it after the call returns. Forgetting to backtrack leads to incorrect results and is one of the trickiest bugs to find.

Mistake 6: Recomputing the same subproblems
Naive Fibonacci computes fib(3) multiple times when computing fib(5). Use memoization whenever you notice overlapping subproblems in your recursion tree.

Top LeetCode Problems on Recursion

These are organized by pattern — work through them in this order for maximum learning:

Pure Recursion Basics:
- 509. Fibonacci Number — Easy — start here, implement with and without memoization
- 344. Reverse String — Easy — recursion on arrays
- 206. Reverse Linked List — Easy — recursion on a linked list
- 50. Pow(x, n) — Medium — fast power with recursion

Tree Recursion (Most Important):
- 104. Maximum Depth of Binary Tree — Easy — simplest tree recursion
- 112. Path Sum — Easy — decision recursion on a tree
- 101. Symmetric Tree — Easy — mutual recursion on a tree
- 110. Balanced Binary Tree — Easy — bottom-up recursion
- 236. Lowest Common Ancestor of a Binary Tree — Medium — classic tree recursion
- 124. Binary Tree Maximum Path Sum — Hard — advanced tree recursion

Divide and Conquer:
- 148. Sort List — Medium — merge sort on a linked list
- 240. Search a 2D Matrix II — Medium — divide and conquer

Backtracking:
- 78. Subsets — Medium — generate all subsets
- 46. Permutations — Medium — generate all permutations
- 77. Combinations — Medium — generate combinations
- 79. Word Search — Medium — backtracking on a grid
- 51. N-Queens — Hard — classic backtracking

Memoization / Dynamic Programming:
- 70. Climbing Stairs — Easy — Fibonacci variant with memoization
- 322. Coin Change — Medium — recursion with memoization to DP
- 139. Word Break — Medium — memoized recursion

Recursion Cheat Sheet

```java
// Linear recursion template
returnType solve(input) {
    if (baseCase) return directAnswer;
    // process current
    return solve(smallerInput);
}

// Tree recursion template
returnType solve(TreeNode root) {
    if (root == null) return baseValue;
    returnType left = solve(root.left);
    returnType right = solve(root.right);
    return combine(left, right, root.val);
}

// Backtracking template
void backtrack(choices, current, result) {
    if (goalReached) {
        result.add(copy of current);
        return;
    }
    for (choice : choices) {
        make(choice);   // add to current
        backtrack(...); // recurse
        undo(choice);   // remove from current
    }
}

// Memoization template
Map<Input, Output> memo = new HashMap<>();
returnType solve(input) {
    if (baseCase) return directAnswer;
    if (memo.containsKey(input)) return memo.get(input);
    returnType result = solve(smallerInput);
    memo.put(input, result);
    return result;
}
```

FAQs — People Also Ask

Q1. What is recursion in Java with a simple example?
Recursion is when a function calls itself to solve a smaller version of the same problem. A simple example is factorial — factorial(5) = 5 × factorial(4) = 5 × 4 × factorial(3), and so on until factorial(1) returns 1 directly.

Q2. What is the difference between recursion and iteration?
Iteration uses loops (for, while) and typically runs in O(1) extra space. Recursion uses function calls and consumes O(n) stack space for n levels of depth. Recursion is often cleaner for tree and graph problems; iteration is better when memory is a concern or the problem is inherently linear.

Q3. What causes StackOverflowError in Java recursion?
StackOverflowError happens when recursion goes too deep — too many frames accumulate on the call stack before any of them return. It is caused by a missing base case, a wrong base case, or input too large for Java's default stack size limit.

Q4. What is the difference between recursion and dynamic programming?
Recursion solves a problem by breaking it into subproblems. Dynamic programming is recursion plus memoization — storing the results of subproblems so they are never computed twice. DP converts exponential recursive solutions into polynomial ones by eliminating redundant computation.

Q5. What is tail recursion and does Java support tail call optimization?
Tail recursion is when the recursive call is the very last operation in the function. Java does NOT support tail call optimization — Java always creates a new stack frame for each call, even if it is tail recursive. Languages on the JVM like Scala (via the @tailrec annotation) and Kotlin (via the tailrec keyword) do support it.

Q6. How do you convert recursion to iteration?
Every recursive solution can be converted to an iterative one using an explicit stack data structure. The call stack's behavior is replicated manually — push the initial call, loop while the stack is not empty, pop, process, and push sub-calls.
Tree traversals are a common example of this conversion.

Conclusion

Recursion is not magic. It is a systematic way of solving problems by expressing them in terms of smaller versions of themselves. Once you internalize the two parts (base case and recursive case), understand the call stack mentally, and learn to trust the recursive call rather than trace it completely, everything clicks.

The learning path from here is clear — start with simple problems like Fibonacci and array sum. Move to tree problems, where recursion is most natural. Then tackle backtracking. Finally, add memoization to bridge into dynamic programming.

Every hour you spend understanding recursion deeply pays dividends across the entire rest of your DSA journey. Trees, graphs, divide and conquer, backtracking, dynamic programming — all of them build on this foundation.
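To make the recursion-to-iteration conversion from the FAQ concrete, here is a minimal sketch: a recursive preorder traversal next to a stack-based version that replicates the call stack manually. The `TreeNode` and class names are illustrative, not from any particular library.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

public class PreorderConversion {
    // Recursive preorder: visit root, then left subtree, then right subtree
    static void preorderRecursive(TreeNode root, List<Integer> out) {
        if (root == null) return; // base case
        out.add(root.val);
        preorderRecursive(root.left, out);
        preorderRecursive(root.right, out);
    }

    // Same traversal with an explicit stack replacing the call stack
    static List<Integer> preorderIterative(TreeNode root) {
        List<Integer> out = new ArrayList<>();
        Deque<TreeNode> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);   // "push the initial call"
        while (!stack.isEmpty()) {            // "loop while stack is not empty"
            TreeNode node = stack.pop();      // "pop, process"
            out.add(node.val);
            // Push right first so the left child is processed first (LIFO)
            if (node.right != null) stack.push(node.right);
            if (node.left != null) stack.push(node.left);
        }
        return out;
    }

    public static void main(String[] args) {
        TreeNode root = new TreeNode(1);
        root.left = new TreeNode(2);
        root.right = new TreeNode(3);
        root.left.left = new TreeNode(4);

        List<Integer> rec = new ArrayList<>();
        preorderRecursive(root, rec);
        System.out.println(rec);                      // [1, 2, 4, 3]
        System.out.println(preorderIterative(root));  // [1, 2, 4, 3]
    }
}
```

Both versions visit nodes in the same order; the iterative one simply makes the stack that the JVM would otherwise manage for you explicit.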

RecursionJavaBase CaseCall StackBacktrackingDynamic Programming
What Is Dynamic Programming? Origin Story, Real-Life Uses, LeetCode Problems & Complete Beginner Guide


Introduction β€” Why Dynamic Programming Feels Hard (And Why It Isn't)If you've ever stared at a LeetCode problem, read the solution, understood every single line, and still had absolutely no idea how someone arrived at it β€” welcome. You've just experienced the classic Dynamic Programming (DP) confusion.DP has a reputation. People treat it like some dark art reserved for competitive programmers or Google engineers. The truth? Dynamic Programming is one of the most logical, learnable, and satisfying techniques in all of computer science. Once it clicks, it really clicks.This guide will take you from zero to genuinely confident. We'll cover where DP came from, how it works, what patterns to learn, how to recognize DP problems, real-world places it shows up, LeetCode problems to practice, time complexity analysis, and the mistakes that trip up even experienced developers.Let's go.The Origin Story β€” Who Invented Dynamic Programming and Why?The term "Dynamic Programming" was coined by Richard Bellman in the early 1950s while working at RAND Corporation. Here's the funny part: the name was deliberately chosen to sound impressive and vague.Bellman was doing mathematical research that his employer β€” the US Secretary of Defense, Charles Wilson β€” would have found difficult to fund if described accurately. Wilson had a well-known distaste for the word "research." So Bellman invented a name that sounded suitably grand and mathematical: Dynamic Programming.In his autobiography, Bellman wrote that he picked the word "dynamic" because it had a precise technical meaning and was also impossible to use negatively. "Programming" referred to the mathematical sense β€” planning and decision-making β€” not computer programming.The underlying idea? 
Break a complex problem into overlapping subproblems, solve each subproblem once, and store the result so you never solve it twice.

Bellman's foundational contribution was the Bellman Equation, which underpins not just algorithms but also economics, operations research, and modern reinforcement learning.

So the next time DP feels frustrating, remember — even its inventor named it specifically to confuse people. You're in good company.

What Is Dynamic Programming? (Simple Definition)

Dynamic Programming is an algorithmic technique used to solve problems by:
- Breaking them down into smaller overlapping subproblems
- Solving each subproblem only once
- Storing the result (memoization or tabulation)
- Building up the final solution from those stored results

The key insight is overlapping subproblems + optimal substructure.

Overlapping subproblems means the same smaller problems come up again and again. Instead of solving them every time (like plain recursion does), DP solves them once and caches the answer.

Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems.

If a problem has both these properties — it's a DP problem.

The Two Approaches to Dynamic Programming

1. Top-Down with Memoization (Recursive + Cache)

You write a recursive solution exactly as you would naturally, but add a cache (usually a dictionary or array) to store results you've already computed.

```
fib(n):
    if n in cache: return cache[n]
    if n <= 1: return n
    cache[n] = fib(n-1) + fib(n-2)
    return cache[n]
```

This is called memoization — remember what you computed so you don't repeat yourself.

Pros: Natural to write, mirrors the recursive thinking, easy to reason about.
Cons: Stack overhead from recursion, risk of stack overflow on large inputs.

2. Bottom-Up with Tabulation (Iterative)

You figure out the order in which subproblems need to be solved, then solve them iteratively from the smallest up, filling a table.

```
fib(n):
    dp = [0, 1]
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
```

This is called tabulation — fill a table, cell by cell, bottom to top.

Pros: No recursion overhead, usually faster in practice, easier to optimize space.
Cons: Requires thinking about the order of computation upfront.

🧩 Dynamic Programming Template Code

Before diving into how to recognize DP problems, here are ready-to-use Java templates for every major DP pattern. Think of these as your reusable blueprints — every DP problem you ever solve will fit into one of these structures. Just define your state, plug in your recurrence relation, and you are good to go.

Template 1 — Top-Down (Memoization)

```java
import java.util.HashMap;
import java.util.Map;

public class TopDownDP {
    Map<Integer, Integer> memo = new HashMap<>();

    public int solve(int n) {
        // Base case
        if (n <= 1) return n;
        // Check cache
        if (memo.containsKey(n)) return memo.get(n);
        // Recurrence relation — change this part for your problem
        int result = solve(n - 1) + solve(n - 2);
        // Store in cache
        memo.put(n, result);
        return result;
    }
}
```

Template 2 — Bottom-Up (Tabulation)

```java
public class BottomUpDP {
    public int solve(int n) {
        // Guard small n (without it, dp[1] below would overflow the array for n = 0)
        if (n <= 1) return n;
        // Create DP table
        int[] dp = new int[n + 1];
        // Base cases
        dp[0] = 0;
        dp[1] = 1;
        // Fill the table bottom-up
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}
```

Template 3 — Bottom-Up with Space Optimization

```java
public class SpaceOptimizedDP {
    public int solve(int n) {
        // Guard small n so the loop's starting values are never returned unmodified
        if (n <= 1) return n;
        // Only keep the last two values instead of a full table
        int prev2 = 0;
        int prev1 = 1;
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            int curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }
}
```

Template 4 — 2D DP (Two Sequences or Grid)

```java
public class TwoDimensionalDP {
    public int solve(String s1, String s2) {
        int m = s1.length();
        int n = s2.length();
        // Create 2D DP table
        int[][] dp = new int[m + 1][n + 1];
        // Base cases — first row and column
        for (int i = 0; i <= m; i++) dp[i][0] = i;
        for (int j = 0; j <= n; j++) dp[0][j] = j;
        // Fill the table cell by cell
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                // Recurrence relation — change this part for your problem
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1];
                } else {
                    dp[i][j] = 1 + Math.min(dp[i - 1][j],
                               Math.min(dp[i][j - 1], dp[i - 1][j - 1]));
                }
            }
        }
        return dp[m][n];
    }
}
```

Template 5 — Knapsack Pattern

```java
public class KnapsackDP {
    public int solve(int[] weights, int[] values, int capacity) {
        int n = weights.length;
        // dp[i][w] = max value using the first i items with capacity w
        int[][] dp = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= capacity; w++) {
                // Don't take item i
                dp[i][w] = dp[i - 1][w];
                // Take item i if it fits
                if (weights[i - 1] <= w) {
                    dp[i][w] = Math.max(dp[i][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]]);
                }
            }
        }
        return dp[n][capacity];
    }
}
```

💡 How to use these templates:

Step 1 — Identify which pattern your problem fits into.
Step 2 — Define what dp[i] or dp[i][j] means in plain English before writing any code.
Step 3 — Write your recurrence relation on paper first.
Step 4 — Plug it into the matching template above.
Step 5 — Handle your specific base cases carefully.

🎥 Visual Learning Resource — Watch This Before Moving Forward

If you prefer learning by watching before reading, this free full-length course by freeCodeCamp is one of the best Dynamic Programming resources on the internet. Watch it alongside this guide for maximum understanding.

Credit: freeCodeCamp — a free, nonprofit coding education platform.

How to Recognize a Dynamic Programming Problem

Ask yourself these four questions:

1. Can I define the problem in terms of smaller versions of itself? If you can write a recursive formula (recurrence relation), DP might apply.

2.
Do the subproblems overlap? If a naive recursive solution would recompute the same thing many times, DP is the right tool.

3. Is there an optimal substructure? Is the best answer to the big problem made up of the best answers to smaller problems?

4. Are you looking for a count, minimum, maximum, or yes/no answer? DP problems often ask: "What is the minimum cost?", "How many ways?", "Can we achieve X?"

Red-flag words in problem statements: minimum, maximum, shortest, longest, count the number of ways, can we reach, is it possible, fewest steps.

The Core DP Patterns You Must Learn

Mastering DP is really about recognizing patterns. Here are the most important ones:

Pattern 1 — 1D DP (Linear). Problems where the state depends on previous elements in a single sequence. Examples: Fibonacci, Climbing Stairs, House Robber.

Pattern 2 — 2D DP (Grid / Two-sequence). Problems with two dimensions of state, often grids or two strings. Examples: Longest Common Subsequence, Edit Distance, Unique Paths.

Pattern 3 — Interval DP. You consider all possible intervals or subarrays and build solutions from them. Examples: Matrix Chain Multiplication, Burst Balloons, Palindrome Partitioning.

Pattern 4 — Knapsack DP (0/1 and Unbounded). You decide whether to include or exclude items under a capacity constraint. Examples: 0/1 Knapsack, Coin Change, Partition Equal Subset Sum.

Pattern 5 — DP on Trees. State is defined per node; you combine results from children. Examples: Diameter of Binary Tree, House Robber III, Maximum Path Sum.

Pattern 6 — DP on Subsets / Bitmask DP. State includes a bitmask representing which elements have been chosen. Examples: Travelling Salesman Problem, Shortest Superstring.

Pattern 7 — DP on Strings. Matching, editing, or counting arrangements within strings.
Examples: Longest Palindromic Subsequence, Regular Expression Matching, Wildcard Matching.

Top LeetCode Problems to Practice Dynamic Programming (With Links)

Here are the essential problems, organized by difficulty and pattern. Solve them in this order.

Beginner — Warm Up

| Problem | Pattern | Link |
| --- | --- | --- |
| Climbing Stairs | 1D DP | https://leetcode.com/problems/climbing-stairs/ |
| Fibonacci Number | 1D DP | https://leetcode.com/problems/fibonacci-number/ |
| House Robber | 1D DP | https://leetcode.com/problems/house-robber/ |
| Min Cost Climbing Stairs | 1D DP | https://leetcode.com/problems/min-cost-climbing-stairs/ |
| Best Time to Buy and Sell Stock | 1D DP | https://leetcode.com/problems/best-time-to-buy-and-sell-stock/ |

Intermediate — Core Patterns

| Problem | Pattern | Link |
| --- | --- | --- |
| Coin Change | Knapsack | https://leetcode.com/problems/coin-change/ |
| Longest Increasing Subsequence | 1D DP | https://leetcode.com/problems/longest-increasing-subsequence/ |
| Longest Common Subsequence | 2D DP | https://leetcode.com/problems/longest-common-subsequence/ |
| 0/1 Knapsack (via Subset Sum) | Knapsack | https://leetcode.com/problems/partition-equal-subset-sum/ |
| Unique Paths | 2D Grid DP | https://leetcode.com/problems/unique-paths/ |
| Jump Game | 1D DP / Greedy | https://leetcode.com/problems/jump-game/ |
| Word Break | String DP | https://leetcode.com/problems/word-break/ |
| Decode Ways | 1D DP | https://leetcode.com/problems/decode-ways/ |
| Edit Distance | 2D String DP | https://leetcode.com/problems/edit-distance/ |
| Triangle | 2D DP | https://leetcode.com/problems/triangle/ |

Advanced — Interview Level

| Problem | Pattern | Link |
| --- | --- | --- |
| Burst Balloons | Interval DP | https://leetcode.com/problems/burst-balloons/ |
| Regular Expression Matching | String DP | https://leetcode.com/problems/regular-expression-matching/ |
| Wildcard Matching | String DP | https://leetcode.com/problems/wildcard-matching/ |
| Palindrome Partitioning II | Interval DP | https://leetcode.com/problems/palindrome-partitioning-ii/ |
| Maximum Profit in Job Scheduling | DP + Binary Search | https://leetcode.com/problems/maximum-profit-in-job-scheduling/ |
| Distinct Subsequences | 2D DP | https://leetcode.com/problems/distinct-subsequences/ |
| Cherry Pickup | 3D DP | https://leetcode.com/problems/cherry-pickup/ |

Real-World Use Cases of Dynamic Programming

DP is not just for coding interviews. It is deeply embedded in the technology you use every day.

1. Google Maps & Navigation (Shortest Path) — The routing engines behind GPS apps use DP-based algorithms like Dijkstra and Bellman-Ford to find the shortest or fastest path between two points across millions of nodes.

2. Spell Checkers & Autocorrect (Edit Distance) — When your phone corrects "teh" to "the," it is computing Edit Distance — a classic DP problem — between what you typed and every word in the dictionary.

3. DNA Sequence Alignment (Bioinformatics) — Researchers use the Needleman-Wunsch and Smith-Waterman algorithms — both DP — to align DNA and protein sequences and find similarities between species or identify mutations.

4. Video Compression (MPEG, H.264) — Modern video codecs use DP to determine the most efficient way to encode video frames, deciding which frames to store as full images and which to store as differences from the previous frame.

5. Financial Portfolio Optimization — Investment algorithms use DP to find the optimal allocation of assets under risk constraints — essentially a variant of the knapsack problem.

6. Natural Language Processing (NLP) — The Viterbi algorithm — used in speech recognition, part-of-speech tagging, and machine translation — is a DP algorithm. Every time Siri or Google Assistant understands your sentence, DP played a role.

7. Game AI (Chess, Checkers) — Game trees and minimax algorithms with memoization use DP to evaluate board positions and find the best move without recomputing already-seen positions.

8. Compiler Optimization — Compilers use DP to decide the optimal order of operations and instruction scheduling to generate the most efficient machine code.

9. Text Justification (Word Processors) — Microsoft Word and LaTeX use DP to optimally break paragraphs into lines — minimizing raggedness and maximizing visual appeal.

10. Resource Scheduling in Cloud Computing — AWS, Google Cloud, and Azure use DP-based scheduling to assign computational tasks to servers in the most cost-efficient way possible.

Time Complexity Analysis of Common DP Problems

Understanding the time complexity of DP is critical for interviews and for building scalable systems.

| Problem | Time Complexity | Space Complexity | Notes |
| --- | --- | --- | --- |
| Fibonacci (naive recursion) | O(2ⁿ) | O(n) | Exponential — terrible |
| Fibonacci (DP) | O(n) | O(1) with optimization | Linear — excellent |
| Longest Common Subsequence | O(m × n) | O(m × n) | m, n = lengths of the two strings |
| Edit Distance | O(m × n) | O(m × n) | Can optimize space to O(n) |
| 0/1 Knapsack | O(n × W) | O(n × W) | n = items, W = capacity |
| Coin Change | O(n × amount) | O(amount) | Classic tabulation |
| Longest Increasing Subsequence | O(n²) or O(n log n) | O(n) | Binary search version is faster |
| Matrix Chain Multiplication | O(n³) | O(n²) | Interval DP |
| Travelling Salesman (bitmask) | O(2ⁿ × n²) | O(2ⁿ × n) | Still exponential, but manageable for small n |

The general rule: DP trades time for space. You use memory to avoid recomputation. The time complexity equals the number of unique states multiplied by the work done per state.

How to Learn and Master Dynamic Programming — Step by Step

Here is an honest, structured path to mastery:

Step 1 — Get recursion absolutely solid first. DP is memoized recursion at its core. If you cannot write clean recursive solutions confidently, DP will remain confusing. Practice at least 20 pure recursion problems first.

Step 2 — Start with the classics. Fibonacci → Climbing Stairs → House Robber → Coin Change. These teach you the core pattern of defining state and transition without overwhelming you.

Step 3 — Learn to define state explicitly. Before writing any code, ask: "What does dp[i] represent?" Write it in plain English. "dp[i] = the minimum cost to reach step i." This single habit separates good DP thinkers from struggling ones.

Step 4 — Write the recurrence relation before coding. On paper or in a comment.
Example: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]). If you can write the recurrence, the code writes itself.

Step 5 — Master one pattern at a time. Don't jump between knapsack and interval DP in the same week. Spend a few days on each pattern until it feels intuitive.

Step 6 — Solve the same problem both ways. Top-down and bottom-up. This builds a deep understanding of what DP is actually doing.

Step 7 — Optimize space after getting correctness. Many 2D DP solutions can use a single row instead of a full matrix. Learn this optimization after you understand the full solution.

Step 8 — Do timed practice under interview conditions. Give yourself 35 minutes per problem. Review what you got wrong. DP is a muscle — it builds with reps.

Common Mistakes in Dynamic Programming (And How to Avoid Them)

Mistake 1 — Jumping to code before defining state. The most common DP error. Always define what dp[i] or dp[i][j] means before writing a single line of code.

Mistake 2 — Wrong base cases. A single wrong base case corrupts every answer built on top of it. Trace through your base cases manually on a tiny example before running code.

Mistake 3 — Off-by-one errors in indexing. Whether your dp array is 0-indexed or 1-indexed must be 100% consistent throughout. This causes more bugs in DP than almost anything else.

Mistake 4 — Confusing top-down with bottom-up state order. In bottom-up DP, you must ensure that when you compute dp[i], all values it depends on are already filled. If you compute in the wrong order, you get garbage answers.

Mistake 5 — Memoizing in the wrong dimension. In 2D problems, some people cache only one dimension when the state actually requires two. Always identify all variables that affect the outcome.

Mistake 6 — Using global mutable state in recursion. If you use a shared array and don't clear it between test cases, you'll get wrong answers on subsequent inputs.
Always scope your cache correctly.

Mistake 7 — Not considering the full state space. In problems like Knapsack, forgetting that the state is (item index, remaining capacity) — not just the item index — leads to fundamentally wrong solutions.

Mistake 8 — Giving up after not recognizing the pattern immediately. DP problems don't announce themselves. The skill is learning to ask "are there overlapping subproblems here?" on every problem. This takes time. Don't mistake unfamiliarity for inability.

Frequently Asked Questions About Dynamic Programming

Q: Is Dynamic Programming the same as recursion?
Not exactly. Recursion is a technique for breaking problems into smaller pieces. DP is recursion plus memoization — or iterative tabulation. All DP can be written recursively, but not all recursion is DP.

Q: What is the difference between DP and Divide and Conquer?
Divide and Conquer (like Merge Sort) breaks problems into non-overlapping subproblems. DP is used when subproblems overlap — meaning the same subproblem is solved multiple times in a naive approach.

Q: How do I know when NOT to use DP?
If the subproblems don't overlap (no repeated computation), greedy or divide-and-conquer may be better. If the problem has no optimal substructure, DP won't give a correct answer.

Q: Do I need to memorize DP solutions for interviews?
No. You need to recognize patterns and be able to derive the recurrence relation. Memorizing solutions without understanding them will fail you in interviews. Focus on the thinking process.

Q: How long does it take to get good at DP?
Most people start to feel genuinely comfortable after solving 40–60 varied DP problems with deliberate practice. The first 10 feel impossible. The next 20 feel hard. After 50, patterns start feeling obvious.

Q: What programming language is best for DP?
Any language works. Python is often used for learning because its dictionaries make memoization trivial. C++ is preferred in competitive programming for its speed.
For interviews, use whatever language you're most comfortable in.

Q: What is space optimization in DP?
Many DP problems only look back one or two rows to compute the current row. In those cases, you can replace an n×m table with just two arrays (or even one), reducing space complexity from O(n×m) to O(m). This is called space optimization, or the rolling-array technique.

Q: Can DP be applied to graph problems?
Absolutely. Shortest-path algorithms like Bellman-Ford are DP. Longest path in a DAG is DP. DP on trees is a rich subfield. Anywhere you have states and transitions, DP can potentially apply.

Q: Is Greedy a type of Dynamic Programming?
Greedy is related but distinct. Greedy makes locally optimal choices without reconsidering. DP considers all choices and picks the globally optimal one. Some DP solutions reduce to greedy when the structure allows, but they are different techniques.

Q: What resources should I use to learn DP?
For structured learning: Neetcode.io (organized problem list), Striver's DP Series on YouTube, and the book "Introduction to Algorithms" (CLRS) for theoretical depth. For practice: LeetCode's Dynamic Programming study plan and Codeforces for competitive DP.

Final Thoughts — Dynamic Programming Is a Superpower

Dynamic Programming is genuinely one of the most powerful ideas in computer science. It shows up in your GPS, your autocorrect, your streaming video, your bank's risk models, and the AI assistants you talk to daily.

The path to mastering it is not memorization. It is developing the habit of asking: can I break this into smaller problems that overlap? And then learning to define state clearly, write the recurrence, and trust the process.

Start with Climbing Stairs. Write dp[i] in plain English before every problem. Solve everything twice — top-down and bottom-up. Do 50 problems with genuine reflection, not just accepted solutions.

The click moment will come. And when it does, you'll wonder why it ever felt hard.
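As one concrete rep of that advice, here is Climbing Stairs (LeetCode 70) sketched both ways: top-down with a memo, and bottom-up with the rolling-array space optimization described in the FAQ. The class and method names are illustrative, not from the original post.

```java
import java.util.HashMap;
import java.util.Map;

public class ClimbingStairs {
    private final Map<Integer, Integer> memo = new HashMap<>();

    // Top-down: dp[i] = number of distinct ways to reach step i
    public int waysTopDown(int n) {
        if (n <= 1) return 1;                  // base cases: 0 or 1 steps remaining
        if (memo.containsKey(n)) return memo.get(n);
        int result = waysTopDown(n - 1) + waysTopDown(n - 2);
        memo.put(n, result);
        return result;
    }

    // Bottom-up with the rolling-array trick: O(1) space instead of O(n)
    public int waysBottomUp(int n) {
        int prev2 = 1, prev1 = 1;              // ways to reach steps 0 and 1
        for (int i = 2; i <= n; i++) {
            int curr = prev1 + prev2;          // recurrence: dp[i] = dp[i-1] + dp[i-2]
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }

    public static void main(String[] args) {
        ClimbingStairs cs = new ClimbingStairs();
        System.out.println(cs.waysTopDown(5));   // 8
        System.out.println(cs.waysBottomUp(5));  // 8
    }
}
```

Writing both versions for the same recurrence makes it obvious that tabulation is just memoized recursion with the computation order made explicit.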

Dynamic ProgrammingMemoizationTabulationJavaOrigin StoryRichard Bellman
LeetCode 1980: Find Unique Binary String – Multiple Ways to Generate a Missing Binary Combination


Try the Problem

You can solve the problem here: https://leetcode.com/problems/find-unique-binary-string/

Problem Description

You are given an array nums containing n unique binary strings, where each string has length n. Your task is to return any binary string of length n that does not appear in the array.

Important Conditions
- Each string consists only of '0' and '1'.
- Every string in the array is unique.
- The output must be a binary string of length n.
- If multiple valid answers exist, any one of them is acceptable.

Examples

Example 1
Input: nums = ["01","10"]
Output: "11"
Explanation: The possible binary strings of length 2 are 00, 01, 10, 11. Since "01" and "10" are already present, valid answers are "00" or "11".

Example 2
Input: nums = ["00","01"]
Output: "11"
Another valid output could be "10".

Example 3
Input: nums = ["111","011","001"]
Output: "101"
Other valid answers include "000", "010", "100", "110".

Constraints
- n == nums.length
- 1 <= n <= 16
- nums[i].length == n
- nums[i] consists only of '0' and '1'
- All strings in nums are unique

Important Observation

The total number of binary strings of length n is 2^n, but the array contains only n strings. Since 2^n grows very quickly and n ≤ 16, there are many possible binary strings missing from the array.
Our goal is simply to construct one of those missing strings.Thinking About the ProblemBefore jumping into coding, it's useful to think about different strategies that could help us generate a binary string that does not appear in the array.Possible Ways to Think About the ProblemWhen approaching this problem, several ideas may come to mind:Generate all possible binary strings of length n and check which one is missing.Store all strings in a HashSet or HashMap and construct a candidate string to verify whether it exists.Manipulate existing strings by flipping bits to create new combinations.Use a mathematical trick that guarantees the new string is different from every string in the list.Each of these approaches leads to a different solution strategy.In this article, we will walk through these approaches and understand how they work.Approach 1: Brute Force – Generate All Binary StringsIdeaThe simplest idea is to generate every possible binary string of length n and check whether it exists in the given array.Since there are:2^n possible binary stringsWe can generate them one by one and return the first string that does not appear in nums.StepsConvert numbers from 0 to (2^n - 1) into binary strings.Pad the binary string with leading zeros so its length becomes n.Check if that string exists in the array.If not, return it.Time ComplexityO(2^n * n)This works because n is at most 16, but it is still not the most elegant approach.Approach 2: HashMap + Bit Flipping (My Approach)IdeaWhile solving this problem, another idea is to store all given binary strings inside a HashMap for quick lookup.Then we can try to construct a new binary string by flipping bits from the existing strings.The intuition is simple:If the current character is '0', change it to '1'.If the current character is '1', change it to '0'.By flipping bits at different positions, we attempt to build a new binary combination.Once the string is constructed, we check whether it already exists in the map.If the 
generated string does not exist, we return it as our answer.

### Java Implementation (My Solution)

```java
import java.util.HashMap;

class Solution {
    public String findDifferentBinaryString(String[] nums) {
        int len = nums[0].length();

        // HashMap to store all given binary strings for O(1) lookup
        HashMap<String, Integer> mp = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            mp.put(nums[i], i);
        }

        int cou = 0;      // index of the bit to flip next
        String ans = "";
        for (int i = 0; i < nums.length; i++) {
            if (cou < len) {
                // Flip the cou-th bit of the current string
                if (nums[i].charAt(cou) == '0') {
                    ans += '1';
                } else {
                    ans += '0';
                }
                cou++;
            } else {
                // If the generated string does not exist in the map, return it
                if (!mp.containsKey(ans)) {
                    return ans;
                }
                // Reset and try building again
                ans = "";
                cou = 0;
            }
        }
        // In this problem nums.length equals len, so the loop flips one bit
        // per string: ans differs from every nums[i] at position i and is
        // therefore guaranteed to be missing
        return ans;
    }
}
```

### Time Complexity

O(n²), because we iterate through the array and each `ans += ...` copies the string built so far.

### Space Complexity

O(n) extra for the HashMap entries (the stored strings are part of the input).

## Approach 3: Cantor's Diagonalization (Optimal Solution)

### Idea

A clever mathematical observation allows us to construct a string that must differ from every string in the array. We build a new string such that:

- The first character differs from the first string.
- The second character differs from the second string.
- The third character differs from the third string.
- And so on.

By ensuring that the generated string differs from each string in at least one position, it is guaranteed not to exist in the array. This technique is known as Cantor's Diagonalization.

### Java Implementation

```java
class Solution {
    public String findDifferentBinaryString(String[] nums) {
        int n = nums.length;
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < n; i++) {
            // Flip the diagonal bit: result differs from nums[i] at index i
            if (nums[i].charAt(i) == '0') {
                result.append('1');
            } else {
                result.append('0');
            }
        }
        return result.toString();
    }
}
```

### Time Complexity

O(n). We only traverse the array once.

### Space Complexity

O(n), for storing the resulting string.

## Comparison of Approaches

| Approach | Time Complexity | Space Complexity | Notes |
| --- | --- | --- | --- |
| Brute Force | O(2^n * n) | O(n) | Simple but inefficient |
| HashMap + Bit Flipping | O(n²) | O(n) | Constructive approach |
| Cantor Diagonalization | O(n) | O(n) | Optimal and elegant |

## Key Takeaways

This problem highlights an interesting concept in algorithm design: sometimes the best solution is not searching for the answer but constructing one directly. By understanding the structure of the input, we can generate a result that is guaranteed to be unique.

## Conclusion

The Find Unique Binary String problem can be solved with strategies ranging from brute-force enumeration to clever mathematical construction. While brute force works due to the small constraint (n ≤ 16), more elegant solutions exist: hashing and constructive approaches improve efficiency and demonstrate deeper algorithmic thinking. Among them, the Cantor Diagonalization technique provides the most efficient and mathematically guaranteed solution.

Understanding problems like this strengthens skills in string manipulation, hashing, and constructive algorithms, which are commonly tested in coding interviews.
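Approach 1 above is described in steps but not shown in code. As a minimal sketch (assuming the same `findDifferentBinaryString` signature; the class name `BruteForceSolution` is chosen here for illustration), the brute-force enumeration could look like:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class BruteForceSolution {
    public String findDifferentBinaryString(String[] nums) {
        int n = nums[0].length();
        Set<String> seen = new HashSet<>(Arrays.asList(nums));
        // Try every candidate value from 0 to 2^n - 1
        for (int mask = 0; mask < (1 << n); mask++) {
            StringBuilder sb = new StringBuilder();
            // Build the n-bit binary representation, most significant bit first
            for (int bit = n - 1; bit >= 0; bit--) {
                sb.append(((mask >> bit) & 1) == 1 ? '1' : '0');
            }
            String candidate = sb.toString();
            if (!seen.contains(candidate)) {
                return candidate; // first missing string
            }
        }
        return ""; // unreachable: only n strings are given but 2^n exist
    }
}
```

For example, with `nums = {"01", "10"}`, the candidates are tried in order "00", "01", "10", "11", and "00" is returned immediately since it is not in the set.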

Binary Strings · Hashing · Cantor Diagonalization · LeetCode · Medium
LeetCode 784 Letter Case Permutation | Recursion & Backtracking Java Solution


## Introduction

The Letter Case Permutation problem is a classic example of recursion and backtracking, often asked in coding interviews and frequently searched by learners preparing on platforms like LeetCode. This problem helps in understanding:

- Decision-making at each step
- Recursive branching
- String manipulation

In this article, we'll break down the intuition, visualize the decision process with a decision tree, and implement an efficient Java solution.

## Problem Link

LeetCode: Letter Case Permutation

## Problem Statement

Given a string s, you can transform each alphabetic character into lowercase or uppercase. Digits remain unchanged. Return all possible strings formed by these transformations.

## Examples

Example 1
Input: s = "a1b2"
Output: ["a1b2","a1B2","A1b2","A1B2"]

Example 2
Input: s = "3z4"
Output: ["3z4","3Z4"]

## Key Insight

At each character:

- If it's a digit → only one choice.
- If it's a letter → two choices: lowercase or uppercase.

So the total number of combinations is 2^(number of letters).

## Intuition (Decision Tree)

For input "a1b2", start from index 0:

```
                  ""
                /    \
             "a"      "A"
              |        |
            "a1"     "A1"
            /  \     /  \
        "a1b" "a1B" "A1b" "A1B"
          |     |     |     |
       "a1b2" "a1B2" "A1b2" "A1B2"
```

Understanding the tree:

- At 'a' → branch into 'a' and 'A'.
- '1' → no branching (digit).
- 'b' → branching again.
- '2' → no branching.

The leaf nodes are the final answers.

## Approach: Recursion + Backtracking

### Idea

Traverse the string character by character. If the current character is a digit, move forward; if it is a letter, branch into the lowercase and uppercase choices.

### Java Code

```java
import java.util.ArrayList;
import java.util.List;

class Solution {
    // List to store all results
    List<String> lis = new ArrayList<>();

    public void solve(String s, int ind, String ans) {
        // Base case: reached the end of the string
        if (ind == s.length()) {
            lis.add(ans); // store the generated string
            return;
        }
        char ch = s.charAt(ind);
        // If the character is a digit, there is only one option
        if (ch >= '0' && ch <= '9') {
            solve(s, ind + 1, ans + ch);
        } else {
            // Choice 1: convert to lowercase
            solve(s, ind + 1, ans + Character.toLowerCase(ch));
            // Choice 2: convert to uppercase
            solve(s, ind + 1, ans + Character.toUpperCase(ch));
        }
    }

    public List<String> letterCasePermutation(String s) {
        solve(s, 0, ""); // start recursion at index 0
        return lis;
    }
}
```

## Step-by-Step Execution

For "a1b2":

1. Start → ""
2. 'a' → "a", "A"
3. '1' → "a1", "A1"
4. 'b' → "a1b", "a1B", "A1b", "A1B"
5. '2' → final strings

## Complexity Analysis

- Time Complexity: O(2^n), where n is the number of letters (each letter doubles the number of partial strings).
- Space Complexity: O(2^n), for storing the results.

## Why This Approach Works

- Recursion explores all possibilities.
- Each letter creates a branching point.
- Digits pass through unchanged.
- The recursion tree guarantees every combination is generated exactly once.

## Key Takeaways

- This is a binary-decision recursion problem.
- Letters → 2 choices; digits → 1 choice.
- The decision tree maps directly to the recursion.
- The pattern is similar to Subsets and to permutations with conditions.

## When This Problem Is Asked

Common in coding interviews, recursion/backtracking rounds, and string-manipulation problem sets.

## Conclusion

The Letter Case Permutation problem is a perfect example of how recursion can explore all possible combinations efficiently. Once the decision tree is clear, the implementation becomes straightforward. This pattern appears in many advanced problems, making it essential to master.

## Frequently Asked Questions (FAQs)

1. Why don't digits create branches? Because they have only one valid form.
2. What is the main concept used? Recursion with decision-making (backtracking).
3. Can this be solved iteratively? Yes, using BFS or iterative expansion, but recursion is more intuitive.
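FAQ 3 mentions that the problem can also be solved iteratively. A minimal sketch of that idea, using iterative expansion (a BFS-style list of prefixes that doubles at each letter); the class name `IterativeSolution` is chosen here for illustration:

```java
import java.util.ArrayList;
import java.util.List;

class IterativeSolution {
    public List<String> letterCasePermutation(String s) {
        List<String> results = new ArrayList<>();
        results.add(""); // start with the empty prefix
        for (char ch : s.toCharArray()) {
            List<String> next = new ArrayList<>();
            for (String prefix : results) {
                if (Character.isDigit(ch)) {
                    next.add(prefix + ch); // digits: one choice
                } else {
                    next.add(prefix + Character.toLowerCase(ch)); // choice 1
                    next.add(prefix + Character.toUpperCase(ch)); // choice 2
                }
            }
            results = next; // every prefix has been extended by one character
        }
        return results;
    }
}
```

For "a1b2" the list grows as [""] → ["a","A"] → ["a1","A1"] → ["a1b","a1B","A1b","A1B"] → the four final strings, mirroring the levels of the decision tree.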

LeetCode · Medium · Java · Recursion