Recursion in Java - Complete Guide With Examples and Practice Problems


Introduction

If there is one topic in programming that confuses beginners more than anything else, it is recursion. Most people read the definition, nod their heads, and then immediately freeze when they have to write recursive code themselves.

The problem is not that recursion is genuinely hard. The problem is that most explanations start with code before building the right mental model. Once you have the right mental model, recursion clicks permanently and you start seeing it everywhere — in tree problems, graph problems, backtracking, dynamic programming, divide and conquer, and more.

This guide covers everything from the ground up: what recursion is, how the call stack works, how to identify base cases and recursive cases, every type of recursion, common patterns, time and space complexity analysis, the most common mistakes, and the top LeetCode problems to practice.

By the end of this article, recursion will not feel like magic anymore. It will feel like a natural tool you reach for confidently.

What Is Recursion?

Recursion is when a function calls itself to solve a smaller version of the same problem.

That is the complete definition. But let us make it concrete.

Imagine you want to count down from 5 to 1. One way is a loop. Another way is — print 5, then solve the exact same problem for counting down from 4 to 1. Then print 4, solve for 3. And so on until you reach the base — there is nothing left to count down.

void countDown(int n) {
    if (n == 0) return;   // stop here
    System.out.println(n);
    countDown(n - 1);     // solve the smaller version
}

The function countDown calls itself with a smaller input each time. Eventually it reaches 0 and stops. That stopping condition is the most important part of any recursive function — the base case.

The Two Parts Every Recursive Function Must Have

Every correctly written recursive function has exactly two parts.
Without both, the function either gives wrong answers or runs forever.

Part 1: Base Case

The base case is the condition under which the function stops calling itself and returns a direct answer. It is the smallest version of the problem that you can solve without any further recursion.

Without a base case, recursion never stops and you get a StackOverflowError — Java's way of telling you the call stack ran out of memory.

Part 2: Recursive Case

The recursive case is where the function calls itself with a smaller or simpler input — moving closer to the base case with each call. If your recursive case does not make the problem smaller, you have an infinite loop.

Think of it like a staircase. The base case is the ground floor. The recursive case is each step going down. Every step must genuinely bring you one level closer to the ground.

How Recursion Works — The Call Stack

This is the mental model that most explanations skip, and it is the reason recursion confuses people.

Every time a function is called in Java, a new stack frame is created and pushed onto the call stack. This frame stores the function's local variables, parameters, and where to return to when the function finishes.

When a recursive function calls itself, a new frame is pushed on top. When that call finishes, its frame is popped and execution returns to the previous frame.

Let us trace countDown(3) through the call stack:

countDown(3) called → frame pushed
  prints 3
  calls countDown(2) → frame pushed
    prints 2
    calls countDown(1) → frame pushed
      prints 1
      calls countDown(0) → frame pushed
        n == 0, return → frame popped
      back in countDown(1), return → frame popped
    back in countDown(2), return → frame popped
  back in countDown(3), return → frame popped

Output: 3, 2, 1

The call stack grows as calls go deeper, then shrinks as calls return.
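The grow-and-shrink behavior can be made visible by passing the current depth along and indenting the output. This is a small sketch for illustration only — the depth parameter, the class name StackFrames, and the returned maximum depth are additions that are not part of the original countDown:

```java
public class StackFrames {
    // countDown with its output indented by recursion depth, so each
    // extra level of indentation corresponds to one extra stack frame.
    // Returns the deepest frame reached so the behavior is checkable.
    static int countDown(int n, int depth) {
        String indent = "  ".repeat(depth);
        if (n == 0) {
            System.out.println(indent + "base case reached, frames start popping");
            return depth;
        }
        System.out.println(indent + "push frame: countDown(" + n + ")");
        int deepest = countDown(n - 1, depth + 1);
        System.out.println(indent + "pop frame:  countDown(" + n + ")");
        return deepest;
    }

    public static void main(String[] args) {
        countDown(3, 0); // indentation widens going in, narrows coming out
    }
}
```

Running it for countDown(3, 0) shows the indentation widen for three pushes and narrow again for three pops — exactly the trace above.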
This is why recursion uses O(n) space for n levels deep — each level occupies one stack frame in memory.

Your First Real Recursive Function — Factorial

Factorial is the classic first recursion example:

n! = n × (n-1) × (n-2) × ... × 1

Notice the pattern — n! = n × (n-1)!. The factorial of n is n times the factorial of n-1. That recursive structure makes it perfect for recursion.

public int factorial(int n) {
    // base case
    if (n == 0 || n == 1) return 1;
    // recursive case
    return n * factorial(n - 1);
}

Dry Run — factorial(4)

factorial(4)
= 4 * factorial(3)
= 4 * 3 * factorial(2)
= 4 * 3 * 2 * factorial(1)
= 4 * 3 * 2 * 1
= 24

The call stack builds up going in, then the multiplications happen coming back out. This "coming back out" phase is called the return phase or unwinding of the stack.

Time Complexity: O(n) — n recursive calls
Space Complexity: O(n) — n frames on the call stack

The Two Phases of Recursion

Every recursive function has two phases, and understanding both is critical.

Phase 1: The Call Phase (Going In)

This happens as the function keeps calling itself with smaller inputs. Things you do before the recursive call happen in this phase — in order from the outermost call to the innermost.

Phase 2: The Return Phase (Coming Back Out)

This happens as each call finishes and returns to its caller. Things you do after the recursive call happen in this phase — in reverse order, from the innermost call back to the outermost.

This distinction explains why the output order can be surprising:

void printBothPhases(int n) {
    if (n == 0) return;
    System.out.println("Going in: " + n);   // call phase
    printBothPhases(n - 1);
    System.out.println("Coming out: " + n); // return phase
}

For printBothPhases(3):

Going in: 3
Going in: 2
Going in: 1
Coming out: 1
Coming out: 2
Coming out: 3

This two-phase understanding is what makes problems like reversing a string or printing a linked list backwards via recursion feel natural.

Types of Recursion

Recursion is not one-size-fits-all.
There are several distinct types, and knowing which type applies to a problem shapes how you write the solution.

1. Direct Recursion

The function calls itself directly. This is the most common type — what we have seen so far.

void direct(int n) {
    if (n == 0) return;
    direct(n - 1); // calls itself
}

2. Indirect Recursion

Function A calls Function B, which calls Function A. They form a cycle.

void funcA(int n) {
    if (n <= 0) return;
    System.out.println("A: " + n);
    funcB(n - 1);
}

void funcB(int n) {
    if (n <= 0) return;
    System.out.println("B: " + n);
    funcA(n - 1);
}

Used in: state machines, mutual recursion in parsers, certain mathematical sequences.

3. Tail Recursion

The recursive call is the last operation in the function. Nothing happens after the recursive call returns — no multiplication, no addition, nothing.

// NOT tail recursive — multiplication happens after return
int factorial(int n) {
    if (n == 1) return 1;
    return n * factorial(n - 1); // multiply after return — not tail
}

// Tail recursive — recursive call is the last thing
int factorialTail(int n, int accumulator) {
    if (n == 1) return accumulator;
    return factorialTail(n - 1, n * accumulator); // last operation
}

Why does tail recursion matter? In languages that support tail call optimization (like Scala, Kotlin, and many functional languages), tail recursive functions can be converted to iteration internally — no stack frame accumulation, O(1) space. Java does NOT perform tail call optimization, but understanding tail recursion is still important for interviews and functional programming concepts.

4. Head Recursion

The recursive call happens first, before any other processing. All processing happens in the return phase.

void headRecursion(int n) {
    if (n == 0) return;
    headRecursion(n - 1);  // call first
    System.out.println(n); // process after
}
// Output for headRecursion(5): 1 2 3 4 5 (processes in reverse order of calls)

5. Tree Recursion

The function makes more than one recursive call per invocation.
This creates a tree of calls rather than a linear chain. Fibonacci is the classic example.

int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2); // TWO recursive calls
}

The call tree for fibonacci(4):

                fib(4)
               /      \
           fib(3)    fib(2)
           /    \    /    \
       fib(2) fib(1) fib(1) fib(0)
       /    \
   fib(1) fib(0)

Time Complexity: O(2ⁿ) — exponential! Each call spawns two more.
Space Complexity: O(n) — maximum depth of the call tree

This is why memoization (caching results) is so important for tree recursion — it converts O(2ⁿ) to O(n) by never recomputing the same subproblem twice.

6. Mutual Recursion

A specific form of indirect recursion where two functions call each other alternately to solve a problem. It differs from general indirect recursion in that the mutual calls are the core mechanism of the solution.

// Check if a number is even or odd using mutual recursion
boolean isEven(int n) {
    if (n == 0) return true;
    return isOdd(n - 1);
}

boolean isOdd(int n) {
    if (n == 0) return false;
    return isEven(n - 1);
}

Common Recursion Patterns in DSA

These are the patterns you will see over and over in interview problems.
Recognizing them is more important than memorizing solutions.

Pattern 1: Linear Recursion (Do Something, Recurse on Rest)

Process the current element, then recurse on the remaining problem.

// Sum of array
int arraySum(int[] arr, int index) {
    if (index == arr.length) return 0;            // base case
    return arr[index] + arraySum(arr, index + 1); // current + rest
}

Pattern 2: Divide and Conquer (Split Into Two Halves)

Split the problem into two halves, solve each recursively, combine results.

// Merge Sort
void mergeSort(int[] arr, int left, int right) {
    if (left >= right) return;      // base case — single element
    int mid = (left + right) / 2;
    mergeSort(arr, left, mid);      // sort left half
    mergeSort(arr, mid + 1, right); // sort right half
    merge(arr, left, mid, right);   // combine
}

Pattern 3: Backtracking (Try, Recurse, Undo)

Try a choice, recurse to explore it, undo the choice when backtracking.

// Generate all subsets
void subsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
    if (index == nums.length) {
        result.add(new ArrayList<>(current));
        return;
    }
    // Choice 1: include nums[index]
    current.add(nums[index]);
    subsets(nums, index + 1, current, result);
    current.remove(current.size() - 1); // undo
    // Choice 2: exclude nums[index]
    subsets(nums, index + 1, current, result);
}

Pattern 4: Tree Recursion (Left, Right, Combine)

Recurse on the left subtree, recurse on the right subtree, combine or process results.

// Height of binary tree
int height(TreeNode root) {
    if (root == null) return 0;                   // base case
    int leftHeight = height(root.left);           // solve left
    int rightHeight = height(root.right);         // solve right
    return 1 + Math.max(leftHeight, rightHeight); // combine
}

Pattern 5: Memoization (Cache Recursive Results)

Store results of recursive calls so the same subproblem is never solved twice.

Map<Integer, Integer> memo = new HashMap<>();

int fibonacci(int n) {
    if (n <= 1) return n;
    if (memo.containsKey(n)) return memo.get(n); // return cached
    int result = fibonacci(n - 1) + fibonacci(n - 2);
    memo.put(n, result);                         // cache before returning
    return result;
}

This converts Fibonacci from O(2ⁿ) to O(n) time with O(n) space — a massive improvement.

Recursion vs Iteration — When to Use Which

This is one of the most common interview questions about recursion. Here is a clear breakdown.

Use Recursion when:
- The problem has a naturally recursive structure (trees, graphs, divide and conquer)
- The solution is significantly cleaner and easier to understand recursively
- The problem involves exploring multiple paths or choices (backtracking)
- The depth of recursion is manageable (not deep enough to cause stack overflow)

Use Iteration when:
- The problem is linear and a loop is equally clear
- Memory is a concern (iteration uses O(1) stack space vs O(n) for recursion)
- Performance is critical and function call overhead matters
- Java's stack size limit could be hit (with the default thread stack size this is typically on the order of a few thousand frames or more, depending on frame size and JVM settings)

The key rule: every recursive solution can be converted to an iterative one (usually using an explicit stack).
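To make that conversion concrete, here is a minimal sketch (the class name IterativeFactorial is illustrative, not from the original) that computes factorial iteratively by managing the stack yourself with a Deque — the push loop plays the role of the call phase and the pop loop plays the role of the return phase:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeFactorial {
    // Iterative factorial that mimics the recursive call stack with an
    // explicit Deque: push the pending values on the way "in", then
    // multiply them on the way "out" — the same two phases as recursion.
    static long factorial(int n) {
        Deque<Integer> stack = new ArrayDeque<>();
        while (n > 1) {            // call phase: push smaller subproblems
            stack.push(n);
            n--;
        }
        long result = 1;           // base case: factorial(1) == 1
        while (!stack.isEmpty()) { // return phase: unwind and combine
            result *= stack.pop();
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```

For a simple accumulating function like factorial the explicit stack is overkill (a plain loop with a running product suffices); the point is only that the mechanical translation always exists.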
But recursive solutions for tree and graph problems are almost always cleaner to write and understand.

Time and Space Complexity of Recursive Functions

Analyzing complexity for recursive functions requires a specific approach.

The Recurrence Relation Method

Express the time complexity as a recurrence relation and solve it.

Factorial:
T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = T(1) + n×O(1) = O(n)

Fibonacci (naive):
T(n) = T(n-1) + T(n-2) + O(1) ≈ 2×T(n-1) = O(2ⁿ)

Binary Search:
T(n) = T(n/2) + O(1) = O(log n) [by the Master Theorem]

Merge Sort:
T(n) = 2×T(n/2) + O(n) = O(n log n) [by the Master Theorem]

Space Complexity Rule for Recursion

Space complexity of a recursive function = maximum depth of the call stack × space per frame

- Linear recursion (factorial, sum): O(n) space
- Binary recursion (naive Fibonacci): O(n) space (maximum depth, not number of nodes)
- Divide and conquer (merge sort): O(log n) space (depth of recursion tree)
- Memoized Fibonacci: O(n) space (memo table + call stack)

Classic Recursive Problems With Solutions

Problem 1: Reverse a String

String reverse(String s) {
    if (s.length() <= 1) return s; // base case
    // last char + reverse of everything before last char
    return s.charAt(s.length() - 1) + reverse(s.substring(0, s.length() - 1));
}

Dry run for "hello":

reverse("hello") = 'o' + reverse("hell")
reverse("hell")  = 'l' + reverse("hel")
reverse("hel")   = 'l' + reverse("he")
reverse("he")    = 'e' + reverse("h")
reverse("h")     = "h"

Unwinding: "h" → "eh" → "leh" → "lleh" → "olleh" ✅

Problem 2: Power Function (x^n)

double power(double x, int n) {
    if (n == 0) return 1;                 // base case
    if (n < 0) return 1.0 / power(x, -n); // handle negative
    if (n % 2 == 0) {
        double half = power(x, n / 2);
        return half * half;               // x^n = (x^(n/2))^2
    } else {
        return x * power(x, n - 1);
    }
}

This is the fast power algorithm — O(log n) time instead of O(n).

Problem 3: Fibonacci With Memoization

int[] memo = new int[100];
// before the first call: Arrays.fill(memo, -1);

int fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

Time: O(n) — each value computed once
Space: O(n) — memo array + call stack

Problem 4: Tower of Hanoi

The classic recursion teaching problem. Move n disks from source to destination using a helper rod.

void hanoi(int n, char source, char destination, char helper) {
    if (n == 1) {
        System.out.println("Move disk 1 from " + source + " to " + destination);
        return;
    }
    // Move n-1 disks from source to helper
    hanoi(n - 1, source, helper, destination);
    // Move the largest disk from source to destination
    System.out.println("Move disk " + n + " from " + source + " to " + destination);
    // Move n-1 disks from helper to destination
    hanoi(n - 1, helper, destination, source);
}

Time Complexity: O(2ⁿ) — the minimum number of moves required is 2ⁿ - 1
Space Complexity: O(n) — call stack depth

Problem 5: Generate All Subsets (Power Set)

void generateSubsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
    result.add(new ArrayList<>(current));              // add current subset
    for (int i = index; i < nums.length; i++) {
        current.add(nums[i]);                          // include
        generateSubsets(nums, i + 1, current, result); // recurse
        current.remove(current.size() - 1);            // exclude (backtrack)
    }
}

For [1, 2, 3] — generates all 8 subsets: [], [1], [1,2], [1,2,3], [1,3], [2], [2,3], [3]

Time: O(2ⁿ) — 2ⁿ subsets
Space: O(n) — recursion depth

Problem 6: Binary Search Recursively

int binarySearch(int[] arr, int target, int left, int right) {
    if (left > right) return -1; // base case — not found
    int mid = left + (right - left) / 2;
    if (arr[mid] == target) return mid;
    else if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right);
    else return binarySearch(arr, target, left, mid - 1);
}

Time: O(log n) — halving the search space each time
Space: O(log n) — log n frames on the call stack

Recursion on Trees — The Natural Habitat

Trees are where recursion truly shines.
Every tree problem becomes elegant with recursion because a tree is itself a recursive structure — each node's left and right children are trees themselves.

// Maximum depth of binary tree
int maxDepth(TreeNode root) {
    if (root == null) return 0;
    return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}

// Check if tree is symmetric
boolean isSymmetric(TreeNode left, TreeNode right) {
    if (left == null && right == null) return true;
    if (left == null || right == null) return false;
    return left.val == right.val
        && isSymmetric(left.left, right.right)
        && isSymmetric(left.right, right.left);
}

// Path sum — does any root-to-leaf path sum to target?
boolean hasPathSum(TreeNode root, int target) {
    if (root == null) return false;
    if (root.left == null && root.right == null) return root.val == target;
    return hasPathSum(root.left, target - root.val)
        || hasPathSum(root.right, target - root.val);
}

Notice the pattern in all three — the base case handles null, the recursive case handles the left and right subtrees, and the result combines both.

How to Think About Any Recursive Problem — Step by Step

This is the framework you should apply to every new recursive problem you encounter.

Step 1 — Identify the base case
What is the smallest input where you know the answer directly without any recursion? For arrays it is usually the empty array or a single element. For trees it is the null node. For numbers it is 0 or 1.

Step 2 — Trust the recursive call
Assume your function already works correctly for smaller inputs. Do not trace through the entire recursion mentally — just trust it. This is the Leap of Faith, and it is what makes recursion feel natural.

Step 3 — Express the current problem in terms of smaller problems
How does the answer for size n relate to the answer for size n-1 (or n/2, or subtrees)? This relationship is your recursive case.

Step 4 — Make sure each call moves toward the base case
The input must become strictly smaller with each call.
If it does not, you have infinite recursion.

Step 5 — Write the base case first, then the recursive case
Always. Writing the recursive case first leads to bugs because you have not defined when to stop.

Common Mistakes and How to Avoid Them

Mistake 1: Missing or wrong base case
The most common mistake. A missing base case causes StackOverflowError. A wrong base case causes wrong answers. Always ask — what is the simplest possible input, and what should the function return for it? Write that case first.

Mistake 2: Not moving toward the base case
If you call factorial(n) inside factorial(n) without reducing n, you loop forever. Every recursive call must make the problem strictly smaller.

Mistake 3: Trusting your brain to trace deep recursion
Do not try to trace 10 levels of recursion in your head. Trust the recursive call, verify the base case, and check that each call reduces the problem. That is all you need.

Mistake 4: Forgetting to return the recursive result

// WRONG — result is computed but not returned
// (in Java this is actually a compile error, "not a statement";
// in languages where it compiles, the result is silently discarded)
int sum(int n) {
    if (n == 0) return 0;
    sum(n - 1) + n; // computed but discarded!
}

// CORRECT
int sum(int n) {
    if (n == 0) return 0;
    return sum(n - 1) + n;
}

Mistake 5: Modifying shared state without backtracking
In backtracking problems, if you add something to a list before a recursive call, you must remove it after the call returns. Forgetting to backtrack leads to incorrect results and is one of the trickiest bugs to find.

Mistake 6: Recomputing the same subproblems
Naive Fibonacci computes fib(3) multiple times when computing fib(5). Use memoization whenever you notice overlapping subproblems in your recursion tree.

Top LeetCode Problems on Recursion

These are organized by pattern — work through them in this order for maximum learning.

Pure Recursion Basics:
- 509. Fibonacci Number — Easy — start here, implement with and without memoization
- 344. Reverse String — Easy — recursion on arrays
- 206. Reverse Linked List — Easy — recursion on linked list
- 50. Pow(x, n) — Medium — fast power with recursion

Tree Recursion (Most Important):
- 104. Maximum Depth of Binary Tree — Easy — simplest tree recursion
- 112. Path Sum — Easy — decision recursion on tree
- 101. Symmetric Tree — Easy — mutual recursion on tree
- 110. Balanced Binary Tree — Easy — bottom-up recursion
- 236. Lowest Common Ancestor of a Binary Tree — Medium — classic tree recursion
- 124. Binary Tree Maximum Path Sum — Hard — advanced tree recursion

Divide and Conquer:
- 148. Sort List — Medium — merge sort on linked list
- 240. Search a 2D Matrix II — Medium — divide and conquer

Backtracking:
- 78. Subsets — Medium — generate all subsets
- 46. Permutations — Medium — generate all permutations
- 77. Combinations — Medium — generate combinations
- 79. Word Search — Medium — backtracking on grid
- 51. N-Queens — Hard — classic backtracking

Memoization / Dynamic Programming:
- 70. Climbing Stairs — Easy — Fibonacci variant with memoization
- 322. Coin Change — Medium — recursion with memoization to DP
- 139. Word Break — Medium — memoized recursion

Recursion Cheat Sheet

// Linear recursion template
returnType solve(input) {
    if (baseCase) return directAnswer;
    // process current
    return solve(smallerInput);
}

// Tree recursion template
returnType solve(TreeNode root) {
    if (root == null) return baseValue;
    returnType left = solve(root.left);
    returnType right = solve(root.right);
    return combine(left, right, root.val);
}

// Backtracking template
void backtrack(choices, current, result) {
    if (goalReached) {
        result.add(copy of current);
        return;
    }
    for (choice : choices) {
        make(choice);    // add to current
        backtrack(...);  // recurse
        undo(choice);    // remove from current
    }
}

// Memoization template
Map<Input, Output> memo = new HashMap<>();
returnType solve(input) {
    if (baseCase) return directAnswer;
    if (memo.containsKey(input)) return memo.get(input);
    returnType result = solve(smallerInput);
    memo.put(input, result);
    return result;
}

FAQs — People Also Ask

Q1. What is recursion in Java with a simple example?
Recursion is when a function calls itself to solve a smaller version of the same problem. A simple example is factorial — factorial(5) = 5 × factorial(4) = 5 × 4 × factorial(3) and so on until factorial(1) returns 1 directly.

Q2. What is the difference between recursion and iteration?
Iteration uses loops (for, while) and runs in O(1) stack space. Recursion uses function calls and needs O(n) stack space for n levels of depth. Recursion is often cleaner for tree and graph problems. Iteration is better when memory is a concern or the problem is inherently linear.

Q3. What causes StackOverflowError in Java recursion?
StackOverflowError happens when recursion goes too deep — too many frames accumulate on the call stack before any of them return. This is caused by a missing base case, a wrong base case, or an input too large for Java's default stack size limit.

Q4. What is the difference between recursion and dynamic programming?
Recursion solves a problem by breaking it into subproblems. Dynamic programming is recursion plus memoization — storing results of subproblems so they are never computed twice. DP converts exponential recursive solutions into polynomial ones by eliminating redundant computation.

Q5. What is tail recursion and does Java support tail call optimization?
Tail recursion is when the recursive call is the absolute last operation in the function. Java does NOT support tail call optimization — Java always creates a new stack frame for each call even if it is tail recursive. On the JVM, Kotlin supports it via the tailrec modifier and Scala via the @tailrec annotation.

Q6. How do you convert recursion to iteration?
Every recursive solution can be converted to an iterative one using an explicit stack data structure. The call stack's behavior is replicated manually — push the initial call, loop while the stack is not empty, pop, process, and push sub-calls. Tree traversals are a common example of this conversion.

Conclusion

Recursion is not magic.
It is a systematic way of solving problems by expressing them in terms of smaller versions of themselves. Once you internalize the two parts (base case and recursive case), understand the call stack mentally, and learn to trust the recursive call rather than trace it completely, everything clicks.

The learning path from here is clear — start with simple problems like Fibonacci and array sum. Move to tree problems where recursion is most natural. Then tackle backtracking. Finally add memoization to bridge into dynamic programming.

Every hour you spend understanding recursion deeply pays dividends across the entire rest of your DSA journey. Trees, graphs, divide and conquer, backtracking, dynamic programming — all of them build on this foundation.

Tags: Recursion, Java, Base Case, Call Stack, Backtracking, Dynamic Programming
LeetCode 143 Reorder List - Java Solution Explained


Introduction

LeetCode 143 Reorder List is one of those problems that looks simple when you read it but immediately makes you wonder — where do I even start? There is no single trick that solves it. Instead it combines three separate linked list techniques into one clean solution. Mastering this problem means you have genuinely understood linked lists at an intermediate level.

You can find the problem here — LeetCode 143 Reorder List.

This article walks through everything — what the problem wants, the intuition behind each step, all three techniques used, a detailed dry run, complexity analysis, and common mistakes beginners make.

What Is the Problem Really Asking?

You have a linked list: L0 → L1 → L2 → ... → Ln

You need to reorder it to: L0 → Ln → L1 → Ln-1 → L2 → Ln-2 → ...

In plain English — take one node from the front, then one from the back, then one from the front, then one from the back, and keep alternating until all nodes are used.

Example:
Input: 1 → 2 → 3 → 4 → 5
Output: 1 → 5 → 2 → 4 → 3

Node 1 from front, Node 5 from back, Node 2 from front, Node 4 from back, Node 3 stays in the middle.

Real Life Analogy — Dealing Cards From Both Ends

Imagine you have a deck of cards laid out in a line face up: 1, 2, 3, 4, 5. Now you deal them by alternately picking from the left end and the right end of the line:

Pick 1 from left → place
Pick 5 from right → place after 1
Pick 2 from left → place after 5
Pick 4 from right → place after 2
Pick 3 (only one left) → place after 4

Result: 1, 5, 2, 4, 3

That is exactly what the problem wants. The challenge is doing this efficiently on a singly linked list, where you cannot just index from the back.

Why This Problem Is Hard for Beginners

In an array you can just use two pointers — one at the start and one at the end — and swap/interleave easily. But a singly linked list only goes forward. You cannot go backwards.
You cannot easily access the last element.

This is why the problem requires a three-step approach that cleverly works around the limitations of a singly linked list.

The Three Step Approach

Every experienced developer solves this problem in exactly three steps:

Step 1 — Find the middle of the linked list using the Fast & Slow Pointer technique
Step 2 — Reverse the second half of the linked list
Step 3 — Merge the two halves by alternating nodes from each

Let us understand each step deeply before looking at code.

Step 1: Finding the Middle — Fast & Slow Pointer

The Fast & Slow Pointer technique (also called Floyd's algorithm) uses two pointers moving at different speeds through the list:

slow moves one step at a time
fast moves two steps at a time

When fast reaches the end, slow is exactly at the middle. This works because fast covers twice the distance of slow in the same number of steps.

ListNode fast = head;
ListNode slow = head;
while (fast.next != null && fast.next.next != null) {
    fast = fast.next.next;
    slow = slow.next;
}
// slow is now at the middle

For 1 → 2 → 3 → 4 → 5:
Start: slow=1, fast=1
Step 1: slow=2, fast=3
Step 2: slow=3, fast=5 (fast.next is null, stop)
Middle is node 3

For 1 → 2 → 3 → 4:
Start: slow=1, fast=1
Step 1: slow=2, fast=3
Step 2: fast.next.next is null, stop
slow=2, middle is node 2

After finding the middle, we cut the list in two by setting slow.next = null. This disconnects the first half from the second half.

Step 2: Reversing the Second Half

Once we have the second half starting from slow.next, we reverse it.
After reversal, what was the last node becomes the first — giving us easy access to the back elements of the original list.

public ListNode reverse(ListNode head) {
    ListNode curr = head;
    ListNode prev = null;
    while (curr != null) {
        ListNode next = curr.next; // save next
        curr.next = prev;          // reverse the link
        prev = curr;               // move prev forward
        curr = next;               // move curr forward
    }
    return prev; // prev is the new head
}

Let us be precise about the first example. For 1 → 2 → 3 → 4 → 5, slow stops at 3, and slow.next = null cuts the list to give:

First half: 1 → 2 → 3 → null
Second half before reverse: 4 → 5
Second half after reverse: 5 → 4

Step 3: Merging Two Halves

Now we have two lists and we merge them by alternately taking one node from each:

Take from first half, take from second half, take from first half, take from second half...

ListNode orig = head;      // pointer for first half
ListNode newhead = second; // pointer for reversed second half
while (newhead != null) {
    ListNode temp1 = orig.next;    // save next of first half
    ListNode temp2 = newhead.next; // save next of second half
    orig.next = newhead;           // first → second
    newhead.next = temp1;          // second → next of first
    orig = temp1;                  // advance first half pointer
    newhead = temp2;               // advance second half pointer
}

Why do we loop on newhead != null and not orig != null? Because the second half is always equal to or shorter than the first half (we cut at the middle).
Once the second half is exhausted, the remaining first half nodes are already in the correct position.

Complete Solution

class Solution {
    public ListNode reverse(ListNode head) {
        ListNode curr = head;
        ListNode prev = null;
        while (curr != null) {
            ListNode next = curr.next;
            curr.next = prev;
            prev = curr;
            curr = next;
        }
        return prev;
    }

    public void reorderList(ListNode head) {
        // Step 1: Find middle using fast & slow pointer
        ListNode fast = head;
        ListNode slow = head;
        while (fast.next != null && fast.next.next != null) {
            fast = fast.next.next;
            slow = slow.next;
        }

        // Step 2: Reverse second half
        ListNode newhead = reverse(slow.next);
        slow.next = null; // cut the list into two halves

        // Step 3: Merge two halves alternately
        ListNode orig = head;
        while (newhead != null) {
            ListNode temp1 = orig.next;
            ListNode temp2 = newhead.next;
            orig.next = newhead;
            newhead.next = temp1;
            orig = temp1;
            newhead = temp2;
        }
    }
}

Complete Dry Run — head = [1, 2, 3, 4, 5]

Step 1: Find Middle
List: 1 → 2 → 3 → 4 → 5
Initial: slow=1, fast=1
Iteration 1: slow=2, fast=3
Iteration 2: fast.next=4, fast.next.next=5 → slow=3, fast=5
fast.next is null → stop
slow is at node 3

Step 2: Cut and Reverse
Cut: slow.next = null
First half: 1 → 2 → 3 → null
Second half: 4 → 5

Reverse second half 4 → 5:
prev=null, curr=4 → next=5, 4.next=null, prev=4, curr=5
prev=4, curr=5 → next=null, 5.next=4, prev=5, curr=null
Return prev=5
Reversed second half: 5 → 4 → null

Step 3: Merge
orig=1, newhead=5

Iteration 1:
temp1 = orig.next = 2
temp2 = newhead.next = 4
orig.next = newhead → 1.next = 5
newhead.next = temp1 → 5.next = 2
orig = temp1 = 2
newhead = temp2 = 4
List so far: 1 → 5 → 2 → 3

Iteration 2:
temp1 = orig.next = 3
temp2 = newhead.next = null
orig.next = newhead → 2.next = 4
newhead.next = temp1 → 4.next = 3
orig = temp1 = 3
newhead = temp2 = null
List so far: 1 → 5 → 2 → 4 → 3

newhead is null → loop ends

Final result: 1 → 5 → 2 → 4 → 3 ✅

Dry Run — head = [1, 2, 3, 4]

Step 1: Find Middle
Initial: slow=1, fast=1
Iteration 1: slow=2, fast=3
fast.next=4, fast.next.next=null → stop
slow is at node 2

Step 2: Cut and Reverse
First half: 1 → 2 → null
Second half: 3 → 4
Reversed: 4 → 3 → null

Step 3: Merge
orig=1, newhead=4

Iteration 1:
temp1=2, temp2=3
1.next=4, 4.next=2
orig=2, newhead=3
List: 1 → 4 → 2 → 3

Iteration 2:
temp1 = orig.next = null (after the cut at slow=2, node 2 is the tail of the first half, so 2.next is null)
temp2 = newhead.next = null
2.next = 3, 3.next = null
orig = null, newhead = null

newhead is null → stop

Final result: 1 → 4 → 2 → 3 ✅

Why slow.next = null Must Come After Saving newhead

This is a subtle but critical ordering detail in the code. Look at this sequence:

ListNode newhead = reverse(slow.next); // save reversed second half FIRST
slow.next = null;                      // THEN cut the list

If you cut first (slow.next = null) and then try to reverse, you lose the reference to the second half entirely because slow.next is already null. Always save the second half reference before cutting.

Time and Space Complexity

Time Complexity: O(n) — each of the three steps (find middle, reverse, merge) makes a single pass through the list. Total is 3 passes = O(3n) = O(n).

Space Complexity: O(1) — everything is done by rearranging pointers in place. No extra arrays, no recursion stack, no additional data structures.
Just a handful of pointer variables.This is the optimal solution — linear time and constant space.Alternative Approach — Using ArrayList (Simpler but O(n) Space)If you find the three-step approach hard to implement under interview pressure, here is a simpler approach using extra space:public void reorderList(ListNode head) { // store all nodes in ArrayList for random access List<ListNode> nodes = new ArrayList<>(); ListNode curr = head; while (curr != null) { nodes.add(curr); curr = curr.next; } int left = 0; int right = nodes.size() - 1; while (left < right) { nodes.get(left).next = nodes.get(right); left++; if (left == right) break; // odd number of nodes nodes.get(right).next = nodes.get(left); right--; } nodes.get(left).next = null; // terminate the list}This is much easier to understand and code. Store all nodes in an ArrayList, use two pointers from both ends, and wire up the next pointers.Time Complexity: O(n) Space Complexity: O(n) — ArrayList stores all nodesThis is acceptable in most interviews. Mention the O(1) space approach as the optimal solution if asked.Common Mistakes to AvoidNot cutting the list before merging If you do not set slow.next = null after finding the middle, the first half still points into the second half. During merging, this creates cycles and infinite loops. Always cut before merging.Wrong loop condition for finding the middle The condition fast.next != null && fast.next.next != null ensures fast does not go out of bounds when jumping two steps. Using just fast != null && fast.next != null moves slow one step too far for even-length lists.Looping on orig instead of newhead The merge loop should run while newhead != null, not while orig != null. The second half is always shorter or equal to the first half. 
Once the second half is done, the remaining first half is already correctly placed.Forgetting to save both temp pointers before rewiring In the merge step, you must save both orig.next and newhead.next before changing any pointers. Changing orig.next first and then trying to access orig.next to save it gives you the wrong node.How This Problem Combines Multiple PatternsThis problem is special because it does not rely on a single technique. It is a combination of three fundamental linked list operations:Fast & Slow Pointer — you saw this concept in problems like finding the middle of a list and detecting cycles (LeetCode 141, 142).Reverse a Linked List — the most fundamental linked list operation, appears in LeetCode 206 and as a subtask in dozens of problems.Merge Two Lists — similar to merging two sorted lists (LeetCode 21) but here order is not sorted, it is alternating.Solving this problem proves you are comfortable with all three patterns individually and can combine them when needed.FAQs — People Also AskQ1. What is the most efficient approach for LeetCode 143 Reorder List? The three-step approach — find middle with fast/slow pointer, reverse second half, merge alternately — runs in O(n) time and O(1) space. It is the optimal solution. The ArrayList approach is O(n) time and O(n) space but simpler to code.Q2. Why use fast and slow pointer to find the middle? Because a singly linked list has no way to access elements by index. You cannot just do list[length/2]. The fast and slow pointer technique finds the middle in a single pass without knowing the length beforehand.Q3. Why reverse the second half instead of the first half? The problem wants front-to-back alternation. If you reverse the second half, its first node is the original last node — exactly what you need to interleave with the front of the first half. Reversing the first half would give the wrong order.Q4. What is the time complexity of LeetCode 143? 
O(n) time for three linear passes (find middle, reverse, merge). O(1) space since all operations are in-place pointer manipulations with no extra data structures.Q5. Is LeetCode 143 asked in coding interviews? Yes, frequently at companies like Amazon, Google, Facebook, and Microsoft. It is considered a benchmark problem for linked list mastery because it requires combining three separate techniques cleanly under pressure.Similar LeetCode Problems to Practice Next206. Reverse Linked List — Easy — foundation for step 2 of this problem876. Middle of the Linked List — Easy — fast & slow pointer isolated21. Merge Two Sorted Lists — Easy — merging technique foundation234. Palindrome Linked List — Easy — also uses find middle + reverse second half148. Sort List — Medium — merge sort on linked list, uses same split techniqueConclusionLeetCode 143 Reorder List is one of the best Medium linked list problems because it forces you to think in multiple steps and combine techniques rather than apply a single pattern. The fast/slow pointer finds the middle efficiently without knowing the length. Reversing the second half turns the "cannot go backwards" limitation of singly linked lists into a non-issue. And the alternating merge weaves everything together cleanly.Work through the dry runs carefully — especially the pointer saving step in the merge. Once you see why each step is necessary and how they connect, this problem will always feel approachable no matter when it shows up in an interview.
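To close the loop on the dry runs above, here is a small self-contained harness that runs the three-step solution end to end. The ListNode class is defined locally (LeetCode normally supplies it); the reverse and reorderList bodies are the same as in the article, and only the scaffolding around them is new.

```java
import java.util.ArrayList;
import java.util.List;

public class ReorderListDemo {
    // Minimal local ListNode — LeetCode normally provides this class
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Step 2 helper: iterative in-place reversal
    static ListNode reverse(ListNode head) {
        ListNode prev = null, curr = head;
        while (curr != null) {
            ListNode next = curr.next;
            curr.next = prev;
            prev = curr;
            curr = next;
        }
        return prev;
    }

    // Three-step reorder: find middle, reverse second half, merge alternately
    static void reorderList(ListNode head) {
        ListNode fast = head, slow = head;
        while (fast.next != null && fast.next.next != null) {
            fast = fast.next.next;
            slow = slow.next;
        }
        ListNode newhead = reverse(slow.next); // save second half FIRST
        slow.next = null;                      // THEN cut the list
        ListNode orig = head;
        while (newhead != null) {
            ListNode temp1 = orig.next;
            ListNode temp2 = newhead.next;
            orig.next = newhead;
            newhead.next = temp1;
            orig = temp1;
            newhead = temp2;
        }
    }

    // Collect node values front to back for easy printing/checking
    static List<Integer> toList(ListNode head) {
        List<Integer> out = new ArrayList<>();
        for (ListNode n = head; n != null; n = n.next) out.add(n.val);
        return out;
    }

    public static void main(String[] args) {
        ListNode head = new ListNode(1);
        head.next = new ListNode(2);
        head.next.next = new ListNode(3);
        head.next.next.next = new ListNode(4);
        head.next.next.next.next = new ListNode(5);
        reorderList(head);
        System.out.println(toList(head)); // matches the odd-length dry run: [1, 5, 2, 4, 3]
    }
}
```

Tracing the even-length case [1, 2, 3, 4] through the same harness yields [1, 4, 2, 3], matching the second dry run.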

LeetCodeJavaLinked ListTwo PointerFast Slow PointerMedium
LeetCode 1283 — Find the Smallest Divisor Given a Threshold | Binary Search on Answer Explained


🚀 Try This Problem First!
Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.
🔗 Problem Link: https://leetcode.com/problems/find-the-smallest-divisor-given-a-threshold/

Understanding the Problem

You are given an array of integers nums and an integer threshold. You must choose a positive integer divisor, divide every element of the array by it (rounding up to the nearest integer), sum all the results, and make sure that sum is ≤ threshold.

Goal: Find the smallest possible divisor that keeps the sum within the threshold.

Important detail — Ceiling Division: Every division rounds up, not down. So 7 ÷ 3 = 3 (not 2), and 10 ÷ 2 = 5.

Constraints:
1 ≤ nums.length ≤ 5 × 10⁴
1 ≤ nums[i] ≤ 10⁶
nums.length ≤ threshold ≤ 10⁶

Two Key Observations (Before Writing a Single Line of Code)

Minimum possible divisor: The divisor must be a positive integer, and the smallest positive integer is 1. So:
low = 1

Maximum possible divisor: If divisor = max(nums), then every element divided by it gives exactly 1 (ceiling division of any positive element by the maximum yields 1), so the sum equals nums.length, which is always ≤ threshold (guaranteed by constraints). So:
high = max(nums)

Our answer lies in the range [1, max(nums)]. This is the search space for Binary Search.

Intuition — Why Binary Search?

Ask yourself: what happens as the divisor increases?

As divisor gets larger, each divided value gets smaller (or stays the same), so the total sum decreases or stays the same. This is a monotonic relationship — the green flag for Binary Search on the Answer.

Instead of trying every divisor from 1 to max(nums), we binary search over divisor values. For each candidate mid, we ask:

"Does dividing all elements by mid (ceiling) give a sum ≤ threshold?"

This feasibility check runs in O(N), making the whole approach O(N log(max(nums))).

The Feasibility Check — Ceiling Sum Simulation

Given a divisor mid, compute the sum of ⌈arr[i] / mid⌉ for all elements.
If the total sum ≤ threshold, then mid is a valid divisor.In Java, ceiling division of integers is done as:Math.ceil((double) arr[i] / mid)Binary Search StrategyIf canDivide(mid) is true → mid might be the answer, but try smaller. Set ans = mid, high = mid - 1.If canDivide(mid) is false → divisor is too small, increase it. Set low = mid + 1.Dry Run — Example 1 (Step by Step)Input: nums = [1, 2, 5, 9], threshold = 6We start with low = 1 and high = 9 (max element in array).Iteration 1: mid = 1 + (9 - 1) / 2 = 5Compute ceiling sum with divisor 5: ⌈1/5⌉ + ⌈2/5⌉ + ⌈5/5⌉ + ⌈9/5⌉ = 1 + 1 + 1 + 2 = 55 ≤ 6 → ✅ Valid. Record ans = 5, search smaller → high = 4.Iteration 2: mid = 1 + (4 - 1) / 2 = 2Compute ceiling sum with divisor 2: ⌈1/2⌉ + ⌈2/2⌉ + ⌈5/2⌉ + ⌈9/2⌉ = 1 + 1 + 3 + 5 = 1010 > 6 → ❌ Too large. Increase divisor → low = 3.Iteration 3: mid = 3 + (4 - 3) / 2 = 3Compute ceiling sum with divisor 3: ⌈1/3⌉ + ⌈2/3⌉ + ⌈5/3⌉ + ⌈9/3⌉ = 1 + 1 + 2 + 3 = 77 > 6 → ❌ Too large. Increase divisor → low = 4.Iteration 4: mid = 4 + (4 - 4) / 2 = 4Compute ceiling sum with divisor 4: ⌈1/4⌉ + ⌈2/4⌉ + ⌈5/4⌉ + ⌈9/4⌉ = 1 + 1 + 2 + 3 = 77 > 6 → ❌ Too large. Increase divisor → low = 5.Loop ends: low (5) > high (4). 
Binary search terminates.
Output: ans = 5 ✅

The Code Implementation

class Solution {
    /**
     * Feasibility Check (Helper Function)
     *
     * Given a divisor 'mid', this function computes the ceiling sum of
     * all elements divided by 'mid' and checks if it is within threshold.
     *
     * @param mid - candidate divisor to test
     * @param arr - input array
     * @param thresh - the allowed threshold for the sum
     * @return true if the ceiling division sum <= threshold, false otherwise
     */
    public boolean canDivide(int mid, int[] arr, int thresh) {
        int sumOfDiv = 0;
        for (int i = 0; i < arr.length; i++) {
            // Ceiling division: Math.ceil(arr[i] / mid)
            // Cast to double to avoid integer division truncation
            sumOfDiv += Math.ceil((double) arr[i] / mid);
        }
        // If total sum is within threshold, this divisor is valid
        return sumOfDiv <= thresh;
    }

    /**
     * Main Function — Binary Search on the Answer
     *
     * Search range: [1, max(nums)]
     * - low = 1 → smallest valid positive divisor
     * - high = max(nums) → guarantees every ceil(num/divisor) = 1,
     *   so sum = nums.length <= threshold (always valid)
     *
     * @param nums - input array
     * @param threshold - maximum allowed sum after ceiling division
     * @return smallest divisor such that the ceiling division sum <= threshold
     */
    public int smallestDivisor(int[] nums, int threshold) {
        int min = 1;                 // Lower bound: divisor starts at 1
        int max = Integer.MIN_VALUE; // Will become max(nums)
        int ans = 1;

        // Find the upper bound of binary search (max element)
        for (int a : nums) {
            max = Math.max(max, a);
        }

        // Binary Search over the divisor space
        while (min <= max) {
            int mid = min + (max - min) / 2; // Safe midpoint, avoids overflow
            if (canDivide(mid, nums, threshold)) {
                // mid is valid — record it and try a smaller divisor
                ans = mid;
                max = mid - 1;
            } else {
                // mid is too small — the sum exceeded threshold, go higher
                min = mid + 1;
            }
        }
        return ans; // Smallest valid divisor
    }
}

Code Walkthrough — Step by Step

Setting bounds: We iterate through nums once to find max — this becomes our upper bound high.
The lower bound low = 1 because divisors must be positive integers.Binary Search loop: We pick mid = min + (max - min) / 2 as the candidate divisor. We check if using mid as the divisor keeps the ceiling sum ≤ threshold.Feasibility helper (canDivide): For each element, we compute Math.ceil((double) arr[i] / mid) and accumulate the total. The cast to double is critical — without it, Java performs integer division (which truncates, not rounds up).Narrowing the search: If the sum is within threshold → record ans = mid, try smaller (max = mid - 1). If the sum exceeds threshold → divisor is too small, increase it (min = mid + 1).A Critical Bug to Watch Out For — The return min vs return ans TrapIn your original code, the final line was return min instead of return ans. This is a subtle bug. After the loop ends, min has overshot past the answer (it's now ans + 1). Always store the answer in a dedicated variable ans and return that. Using return min would return the wrong result in most cases.Common Mistakes to AvoidWrong lower bound: Setting low = min(nums) instead of low = 1 seems intuitive but is wrong. A divisor smaller than the minimum element is still valid — for example, dividing [5, 9] by 3 gives ⌈5/3⌉ + ⌈9/3⌉ = 2 + 3 = 5, which could be within threshold.Forgetting ceiling division: Using arr[i] / mid (integer division, which truncates) instead of Math.ceil((double) arr[i] / mid) is wrong. The problem explicitly states results are rounded up.Returning min instead of ans: After the binary search loop ends, min > max, meaning min has already gone past the valid answer. Always return the stored ans.Integer overflow in midpoint: Always use mid = min + (max - min) / 2 instead of (min + max) / 2. 
With this problem's bounds (values up to 10⁶) the sum still fits in an int, but when the bounds approach Integer.MAX_VALUE the naive sum overflows, so the safe midpoint formula is a habit worth keeping.

Complexity Analysis

Time Complexity: O(N × log(max(nums)))
Binary search runs over [1, max(nums)] → at most log₂(10⁶) ≈ 20 iterations.
Each iteration calls canDivide which is O(N).
Total: O(N log M) where M = max(nums).

Space Complexity: O(1)
No extra data structures — only a few integer variables are used throughout.

How This Relates to LeetCode 1011

This problem and LeetCode 1011 (Capacity To Ship Packages Within D Days) are almost identical in structure:

LeetCode 1011
Search space: [max(weights), sum(weights)]
Feasibility check: Can we ship in ≤ D days?
Monotonic property: More capacity → fewer days
Goal: Minimize capacity

LeetCode 1283
Search space: [1, max(nums)]
Feasibility check: Is ceiling sum ≤ threshold?
Monotonic property: Larger divisor → smaller sum
Goal: Minimize divisor

Once you deeply understand one, the other takes minutes to solve.

Similar Problems (Same Pattern — Binary Search on Answer)

LeetCode 875 — Koko Eating Bananas [ Blog is also available on this - Read Now ]
LeetCode 1011 — Capacity To Ship Packages Within D Days [ Blog is also available on this - Read Now ]
LeetCode 410 — Split Array Largest Sum
LeetCode 2064 — Minimized Maximum of Products Distributed to Any Store

All follow the same template: identify a monotonic answer space, write an O(N) feasibility check, and binary search over it.

Key Takeaways

✅ When the problem asks "find the minimum value such that a condition holds" — think Binary Search on the Answer.
✅ The lower bound is the most constrained valid value (1 here, since divisors must be positive).
✅ The upper bound is the least constrained valid value (max element, guarantees sum = length ≤ threshold).
✅ Ceiling division in Java requires casting to double: Math.ceil((double) a / b).
✅ Always store the answer in a separate ans variable — never return min or max directly after a binary search loop.

Happy Coding! Smash that upvote if this helped you crack the pattern. 🚀
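One small side note on the ceiling division used throughout: Math.ceil((double) a / b) works, but the same result can be obtained in pure integer arithmetic with the classic (a + b - 1) / b formula, avoiding floating point entirely. A minimal sketch (this is an alternative, not what the article's code uses):

```java
public class CeilDivDemo {
    // Integer ceiling division for positive a and b:
    // equivalent to Math.ceil((double) a / b), no floating point needed.
    // Caveat: a + b - 1 must fit in an int; with this problem's bounds
    // (values up to 10^6) that is comfortably true.
    static int ceilDiv(int a, int b) {
        return (a + b - 1) / b;
    }

    public static void main(String[] args) {
        // Same values as the dry run with divisor 5: ceil(9/5) = 2
        System.out.println(ceilDiv(9, 5));  // 2
        System.out.println(ceilDiv(5, 2));  // 3 (rounds up)
        System.out.println(ceilDiv(10, 2)); // 5 (exact division unchanged)
    }
}
```

The trick works because adding b - 1 pushes any non-zero remainder over the next multiple of b before the truncating integer division.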

LeetCodeBinary SearchMediumJavaBinary Search on AnswerArraysCeiling Division
Subsets Problem (LeetCode 78) Explained | Recursion, Iterative & Bit Manipulation


IntroductionThe Subsets problem (LeetCode 78) is one of the most fundamental and frequently asked questions in coding interviews. It introduces the concept of generating a power set, which is a core idea in recursion, backtracking, and combinatorics.Mastering this problem helps in solving a wide range of advanced problems like combinations, permutations, and decision-based recursion.In this article, we will explore:Intuition behind subsetsRecursive (backtracking) approachIterative (loop-based) approachBit manipulation approachTime and space complexity analysisProblem StatementGiven an integer array nums of unique elements, return all possible subsets (the power set).Key PointsEach element can either be included or excludedNo duplicate subsetsReturn subsets in any orderExamplesExample 1Input:nums = [1, 2, 3]Output:[[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]Example 2Input:nums = [0]Output:[[], [0]]Key InsightFor each element, there are two choices:Include it OR Exclude itSo total subsets:2^nThis makes it a binary decision tree problem, very similar to:Permutation with SpacesBinary choices recursionBacktracking problemsApproach 1: Recursion + Backtracking (Most Important)IntuitionAt each index:Skip the elementInclude the elementBuild subsets step by step and backtrack.Java Code (With Explanation)import java.util.*;class Solution { List<List<Integer>> liss = new ArrayList<>(); void solve(int[] an, int ind, List<Integer> lis) { // Base case: reached end → one subset formed if (ind == an.length) { liss.add(new ArrayList<>(lis)); // store copy return; } // Choice 1: Do NOT include current element solve(an, ind + 1, lis); // Choice 2: Include current element lis.add(an[ind]); solve(an, ind + 1, lis); // Backtrack: remove last added element lis.remove(lis.size() - 1); } public List<List<Integer>> subsets(int[] nums) { List<Integer> lis = new ArrayList<>(); solve(nums, 0, lis); return liss; }}Dry Run (nums = [1,2])Start: [] → skip 1 → [] → skip 2 → [] → take 2 → [2] → 
take 1 → [1] → skip 2 → [1] → take 2 → [1,2]Final Output:[], [2], [1], [1,2]Approach 2: Iterative (Loop-Based)IntuitionStart with an empty subset:[ [] ]For each element:Add it to all existing subsetsCodeimport java.util.*;class Solution { public List<List<Integer>> subsets(int[] nums) { List<List<Integer>> result = new ArrayList<>(); result.add(new ArrayList<>()); for (int num : nums) { int size = result.size(); for (int i = 0; i < size; i++) { List<Integer> temp = new ArrayList<>(result.get(i)); temp.add(num); result.add(temp); } } return result; }}How It WorksFor [1,2,3]:Start: [[]]Add 1 → [[], [1]]Add 2 → [[], [1], [2], [1,2]]Add 3 → [[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]Approach 3: Bit ManipulationIntuitionEach subset can be represented using a binary number:For n = 3:000 → []001 → [1]010 → [2]011 → [1,2]...Codeimport java.util.*;class Solution { public List<List<Integer>> subsets(int[] nums) { List<List<Integer>> result = new ArrayList<>(); int n = nums.length; int total = 1 << n; // 2^n for (int i = 0; i < total; i++) { List<Integer> subset = new ArrayList<>(); for (int j = 0; j < n; j++) { if ((i & (1 << j)) != 0) { subset.add(nums[j]); } } result.add(subset); } return result; }}Complexity AnalysisApproachTime ComplexitySpace ComplexityRecursionO(2^n)O(n) stackIterativeO(2^n)O(2^n)Bit ManipulationO(2^n)O(2^n)Why All Approaches Are O(2ⁿ)Because:Total subsets = 2ⁿEach subset takes up to O(n) to constructWhen to Use Which ApproachRecursion / Backtracking → Best for interviews (easy to explain)Iterative → Clean and beginner-friendlyBit Manipulation → Best for optimization & advanced understandingKey TakeawaysSubsets = power set problemEvery element → 2 choicesThink in terms of decision treesBacktracking = build + undo (add/remove)Common Interview VariationsSubsets with duplicatesCombination sumPermutationsK-sized subsetsConclusionThe Subsets problem is a foundational DSA concept that appears across many interview questions. 
Understanding all approaches—especially recursion and iterative expansion—gives a strong base for solving complex backtracking problems.If you master this pattern, you unlock a whole category of problems in recursion and combinatorics.Frequently Asked Questions (FAQs)1. Why are there 2ⁿ subsets?Because each element has 2 choices: include or exclude.2. Which approach is best for interviews?Recursion + backtracking is the most preferred.3. Is bit manipulation important?Yes, it helps in optimizing and understanding binary patterns.
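The mask-to-subset mapping from Approach 3 can be seen in isolation with a short standalone sketch (the class and helper name here are just for the demo): bit j of the mask decides whether nums[j] is included, so masks 0 through 2ⁿ - 1 enumerate the full power set.

```java
import java.util.ArrayList;
import java.util.List;

public class SubsetMaskDemo {
    // Decode one bitmask into the subset it represents:
    // bit j set means nums[j] is included
    static List<Integer> subsetFor(int[] nums, int mask) {
        List<Integer> subset = new ArrayList<>();
        for (int j = 0; j < nums.length; j++) {
            if ((mask & (1 << j)) != 0) {
                subset.add(nums[j]);
            }
        }
        return subset;
    }

    public static void main(String[] args) {
        int[] nums = {1, 2, 3};
        // 2^n masks -> 2^n subsets; e.g. binary 011 encodes [1, 2]
        for (int mask = 0; mask < (1 << nums.length); mask++) {
            System.out.println(Integer.toBinaryString(mask) + " -> " + subsetFor(nums, mask));
        }
    }
}
```

Printing the masks next to their subsets makes the "each element has 2 choices" argument concrete: flipping one bit toggles exactly one element in and out.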

LeetCodeMediumJavaRecursionBacktracking
Sort a Stack Using Recursion - Java Solution Explained


Introduction

Sort a Stack is one of those problems that feels impossible at first — you only have access to the top of a stack, you cannot index into it, and the only tools you have are push, pop, and peek. How do you sort something with such limited access?

The answer is recursion. And this problem is a perfect example of how recursion can do something elegant that feels like it should require much more complex logic.

You can find this problem here — Sort a Stack — GeeksForGeeks

This article explains the complete intuition, both recursive functions in detail, a thorough dry run, all approaches, and complexity analysis.

What Is the Problem Really Asking?

You have a stack of integers. Sort it so that the smallest element is at the bottom and the largest element is at the top.

Input: [41, 3, 32, 2, 11] (11 is at top)
Output: [2, 3, 11, 32, 41] (41 is at top, 2 at bottom)

Note — with the smallest at the bottom and the largest at the top, popping gives you the largest first. Read from bottom to top, the stack is in ascending order.

The constraints make this interesting — you can only use stack operations (push, pop, peek, isEmpty). No arrays, no sorting algorithms directly on the data, no random access.

Real Life Analogy — Sorting a Stack of Books

Imagine a stack of books of different thicknesses on your desk. You want the thinnest book on top and thickest at the bottom. But you can only take from the top.

Here is what you do — pick up books one by one from the top and set them aside. Once you have removed enough, place each book back in its correct position.
If the book you are placing is thicker than what is currently on top, slide the top books aside temporarily, place the thick book down, then put the others back.That is exactly what the recursive sort does — take elements out one by one, and insert each back into the correct position.The Core Idea — Two Recursive Functions Working TogetherThe solution uses two recursive functions that work hand in hand:sortStack(st) — removes elements one by one until the stack is empty, then inserts each back using insertinsert(st, temp) — inserts a single element into its correct sorted position in an already-sorted stackThink of it like this — sortStack is the manager that empties the stack recursively, and insert is the worker that places each element in the right position on the way back.Function 1: sortStack — The Main Recursive Driverpublic void sortStack(Stack<Integer> st) { // base case — empty or single element is already sorted if (st.empty() || st.size() == 1) return; // Step 1: remove the top element int temp = st.pop(); // Step 2: recursively sort the remaining stack sortStack(st); // Step 3: insert the removed element in correct position insert(st, temp);}How to Think About ThisThe leap of faith here — trust that sortStack correctly sorts whatever is below. After the recursive call, the remaining stack is perfectly sorted. Now your only job is to insert temp (the element you removed) into its correct position in that sorted stack.This is the classic "solve smaller, fix current" recursion pattern. 
Reduce the problem by one element, trust recursion for the rest, then fix the current element's position.Function 2: insert — Placing an Element in Sorted Positionvoid insert(Stack<Integer> st, int temp) { // base case 1: stack is empty — just push // base case 2: top element is smaller or equal — push on top if (st.empty() || st.peek() <= temp) { st.push(temp); return; } // top element is greater than temp // temp needs to go below the top element int val = st.pop(); // temporarily remove the top insert(st, temp); // try inserting temp deeper st.push(val); // restore the removed element on top}How to Think About ThisYou want to insert temp in sorted order. The stack is already sorted (largest on top). So:If the top is smaller than or equal to temp → temp belongs on top. Push it.If the top is greater than temp → temp needs to go below the top. So pop the top temporarily, try inserting temp deeper, then push the top back.This is the same "pop, recurse deeper, push back" pattern you saw in the Baseball Game problem — when you need to access elements below the top without permanent removal.Complete Solutionclass Solution { // insert temp into its correct position in sorted stack void insert(Stack<Integer> st, int temp) { if (st.empty() || st.peek() <= temp) { st.push(temp); return; } int val = st.pop(); insert(st, temp); st.push(val); } // sort the stack using recursion public void sortStack(Stack<Integer> st) { if (st.empty() || st.size() == 1) return; int temp = st.pop(); // remove top sortStack(st); // sort remaining insert(st, temp); // insert top in correct position }}Detailed Dry Run — st = [3, 2, 1] (1 at top)Let us trace every single call carefully.sortStack calls (going in):sortStack([3, 2, 1]) temp = 1, remaining = [3, 2] call sortStack([3, 2]) temp = 2, remaining = [3] call sortStack([3]) size == 1, return ← base case now insert(st=[3], temp=2) peek=3 > 2, pop val=3 insert(st=[], temp=2) st empty, push 2 ← base case push val=3 back stack is now [2, 3] (3 on 
top) sortStack([3,2]) done → stack = [2, 3] now insert(st=[2,3], temp=1) peek=3 > 1, pop val=3 insert(st=[2], temp=1) peek=2 > 1, pop val=2 insert(st=[], temp=1) st empty, push 1 ← base case push val=2 back stack = [1, 2] push val=3 back stack = [1, 2, 3] (3 on top)sortStack done → stack = [1, 2, 3]✅ Final stack (3 on top, 1 at bottom): largest at top, smallest at bottom — correctly sorted!Detailed Dry Run — st = [41, 3, 32, 2, 11] (11 at top)Let us trace at a higher level to see the pattern:sortStack calls unwinding (popping phase):sortStack([41, 3, 32, 2, 11]) pop 11, sortStack([41, 3, 32, 2]) pop 2, sortStack([41, 3, 32]) pop 32, sortStack([41, 3]) pop 3, sortStack([41]) base case — return insert([41], 3) → [3, 41] (41 on top) insert([3, 41], 32) → [3, 32, 41] (41 on top) insert([3, 32, 41], 2) → [2, 3, 32, 41] (41 on top) insert([2, 3, 32, 41], 11) → [2, 3, 11, 32, 41] (41 on top)✅ Final: [2, 3, 11, 32, 41] with 41 at top, 2 at bottom — correct!Let us verify the insert([3, 32, 41], 2) step in detail since it involves multiple pops:insert(st=[3, 32, 41], temp=2) peek=41 > 2, pop val=41 insert(st=[3, 32], temp=2) peek=32 > 2, pop val=32 insert(st=[3], temp=2) peek=3 > 2, pop val=3 insert(st=[], temp=2) st empty, push 2 push val=3 back → [2, 3] push val=32 back → [2, 3, 32] push val=41 back → [2, 3, 32, 41]Beautiful chain of pops followed by a chain of pushes — recursion handling what would be very messy iterative logic.Approach 2: Using an Additional Stack (Iterative)If recursion is not allowed or you want an iterative solution, you can use a second stack.public void sortStack(Stack<Integer> st) { Stack<Integer> tempStack = new Stack<>(); while (!st.empty()) { int curr = st.pop(); // move elements from tempStack back to st // until we find the right position for curr while (!tempStack.empty() && tempStack.peek() > curr) { st.push(tempStack.pop()); } tempStack.push(curr); } // move everything back to original stack while (!tempStack.empty()) { 
st.push(tempStack.pop()); }}How This WorkstempStack is maintained in sorted order (smallest on top). For each element popped from st, we move elements from tempStack back to st until we find the right position, then push. At the end, transfer everything back.Time Complexity: O(n²) — same as recursive approach Space Complexity: O(n) — explicit extra stackApproach 3: Using Collections.sort (Cheat — Not Recommended)public void sortStack(Stack<Integer> st) { List<Integer> list = new ArrayList<>(st); Collections.sort(list); // ascending st.clear(); for (int num : list) { st.push(num); // smallest pushed first = smallest at bottom }}This gives the right answer but completely defeats the purpose of the problem. Never use this in an interview — it shows you do not understand the problem's intent.Time Complexity: O(n log n) Space Complexity: O(n)Approach ComparisonApproachTimeSpaceAllowed in InterviewCode ComplexityRecursive (Two Functions)O(n²)O(n)✅ Best answerMediumIterative (Two Stacks)O(n²)O(n)✅ Good alternativeMediumCollections.sortO(n log n)O(n)❌ Defeats purposeEasyTime and Space Complexity AnalysisRecursive ApproachTime Complexity: O(n²)For each of the n elements popped by sortStack, the insert function may traverse the entire stack to find the correct position — O(n) per insert. Total: O(n) × O(n) = O(n²).This is similar to insertion sort — and in fact, this recursive approach IS essentially insertion sort implemented on a stack using recursion.Space Complexity: O(n)No extra data structure is used. But the call stack holds up to n frames for sortStack and up to n additional frames for insert. In the worst case, total recursion depth is O(n) + O(n) = O(n) space.The Connection to Insertion SortIf you have studied sorting algorithms, this solution will look very familiar. 
It is Insertion Sort implemented recursively on a stack:sortStack is like the outer loop — take one element at a timeinsert is like the inner loop — find the correct position by comparing and shiftingThe key difference from array-based insertion sort is that "shifting" in an array is done by moving elements right. Here, "shifting" is done by temporarily popping elements off the stack, inserting at the right depth, then pushing back. Recursion handles the "shift" naturally.Common Mistakes to AvoidWrong base case in insert The base case is st.empty() || st.peek() <= temp. Both conditions are needed. Without st.empty(), you call peek() on an empty stack and get an exception. Without st.peek() <= temp, you never stop even when you find the right position.Using < instead of <= in insert Using strictly less than means equal elements get treated as "wrong position" and cause unnecessary recursion. Using <= correctly handles duplicates — equal elements stay in their current relative order.Calling sortStack instead of insert inside insert insert should only call itself recursively. Calling sortStack inside insert re-sorts an already-sorted stack — completely wrong and causes incorrect results.Not pushing val back after the recursive insert call This is the most common bug. After insert(st, temp) places temp in the correct position, you must push val back on top. Forgetting this loses elements from the stack permanently.FAQs — People Also AskQ1. How do you sort a stack without using extra space in Java? Using recursion — the call stack itself serves as temporary storage. sortStack removes elements one by one and insert places each back in its correct sorted position. No explicit extra data structure is needed, but O(n) implicit call stack space is used.Q2. What is the time complexity of sorting a stack using recursion? O(n²) in all cases — for each of the n elements, the insert function traverses up to n elements to find the correct position. 
This is equivalent to insertion sort applied to a stack.Q3. Can you sort a stack without recursion? Yes — using a second auxiliary stack. Pop elements from the original stack one by one and insert each into the correct position in the second stack by temporarily moving elements back. Transfer everything back to the original stack at the end.Q4. Why does the insert function need to push val back after the recursive call? Because the goal of insert is to place temp in the correct position WITHOUT losing any existing elements. When we pop val temporarily to go deeper, we must restore it after temp has been placed. Not restoring it means permanently losing that element from the stack.Q5. Is sorting a stack asked in coding interviews? Yes, it appears frequently at companies like Amazon, Adobe, and in platforms like GeeksForGeeks and InterviewBit. It tests whether you understand recursion deeply enough to use it as a mechanism for reordering — not just for computation.Similar Problems to Practice NextDelete Middle Element of a Stack — use same recursive pop-recurse-push patternReverse a Stack — very similar recursive approach, great warmup84. Largest Rectangle in Histogram — advanced stack problem946. Validate Stack Sequences — stack simulation150. Evaluate Reverse Polish Notation — stack with operationsConclusionSort a Stack Using Recursion is a beautiful problem that demonstrates the true power of recursion — using the call stack itself as temporary storage to perform operations that would otherwise require extra data structures.The key insight is splitting the problem into two clean responsibilities. sortStack handles one job — peel elements off one by one until the base case, then trigger insert on the way back up. insert handles one job — place a single element into its correct position in an already-sorted stack by temporarily moving larger elements aside.Once you see those two responsibilities clearly, the code almost writes itself. 
And once this pattern clicks, it directly prepares you for more advanced recursive problems like reversing a stack, deleting the middle element, and even understanding how recursive backtracking works at a deeper level.
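As a final sanity check, the two recursive functions from the article drop straight into a tiny standalone harness (the insert and sortStack bodies are the article's; only the main scaffolding is new):

```java
import java.util.Stack;

public class SortStackDemo {
    // insert temp into its correct position in an already-sorted stack
    static void insert(Stack<Integer> st, int temp) {
        if (st.empty() || st.peek() <= temp) {
            st.push(temp);
            return;
        }
        int val = st.pop(); // move the larger top element aside
        insert(st, temp);   // place temp deeper in the stack
        st.push(val);       // restore the element we moved aside
    }

    // recursively empty the stack, then re-insert each element in order
    static void sortStack(Stack<Integer> st) {
        if (st.empty() || st.size() == 1) return;
        int temp = st.pop();
        sortStack(st);
        insert(st, temp);
    }

    public static void main(String[] args) {
        Stack<Integer> st = new Stack<>();
        for (int x : new int[]{41, 3, 32, 2, 11}) st.push(x); // 11 ends up on top
        sortStack(st);
        // java.util.Stack prints bottom-to-top, matching the dry run: [2, 3, 11, 32, 41]
        System.out.println(st);
    }
}
```

Note that java.util.Stack extends Vector, so its toString lists elements bottom to top, which is exactly the order the dry run tracks.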

Tags: Stack, Recursion, Java, GeeksForGeeks, Medium
Reverse LinkedList (LeetCode 206) – The One Trick That Changes Everything


🚀 Try the Problem
Practice here: https://leetcode.com/problems/reverse-linked-list/

🤔 Let's Start With a Simple Question…
What happens if you take this:
1 → 2 → 3 → 4 → 5
…and try to reverse it? You want:
5 → 4 → 3 → 2 → 1
Sounds easy, right? But here's the catch:
👉 You can only move forward in a linked list
👉 There is no backward pointer
So how do we reverse something that only goes one way?

🧠 The Core Problem (In Simple Words)
You are given the head of a linked list.
👉 Reverse the list
👉 Return the new head

📦 Constraints
0 <= number of nodes <= 5000
-5000 <= Node.val <= 5000

🔍 Before Jumping to Code — Think Like This
When solving this, your brain might go:
Can I store values somewhere and reverse? ❌ (extra space)
Can I rebuild the list? ❌ (unnecessary)
Can I just change links? ✅ (YES, THIS IS THE KEY)

⚡ The Breakthrough Idea
👉 Instead of moving nodes
👉 We reverse the direction of pointers

🎯 Visual Intuition (Very Important)
Let's take:
1 → 2 → 3 → 4 → null
We want:
null ← 1 ← 2 ← 3 ← 4
But how?

🧩 The 3-Pointer Magic Trick
We use 3 pointers:
prev → stores the previous node
curr → the current node
next → stores the next node

🔄 Step-by-Step Transformation
Initial state: prev = null, curr = 1 → 2 → 3 → 4
Step 1: save the next node (f = 2), reverse the link (1 → null), then move the pointers: prev = 1, curr = 2
Step 2: save f = 3, reverse the link (2 → 1), then move: prev = 2, curr = 3
Continue… eventually: 4 → 3 → 2 → 1 → null

💡 Final Insight
👉 Each step reverses one link
👉 At the end, prev becomes the new head

✅ Clean Java Code (Iterative Approach)
class Solution {
    public ListNode reverseList(ListNode head) {
        ListNode prev = null;       // Previous pointer starts as null
        ListNode curr = head;       // Current pointer starts from head
        while (curr != null) {
            ListNode f = curr.next; // Store next node
            curr.next = prev;       // Reverse the link
            prev = curr;            // Move prev forward
            curr = f;               // Move curr forward
        }
        return prev;                // New head is prev
    }
}

⏱️ Complexity
Time Complexity: O(n). We traverse the list once.
Space Complexity: O(1). No extra space used.

🔁 Another Way: Recursive Approach
Now let's think differently…
👉 What if we reverse the rest of the list first, then fix the current node?

🧠 Recursive Idea
For: 1 → 2 → 3 → 4
We:
Reverse from 2 → 3 → 4
Then fix 1

💻 Recursive Code
class Solution {
    public ListNode reverseList(ListNode head) {
        // Base case
        if (head == null || head.next == null) return head;
        // Reverse the rest of the list
        ListNode newHead = reverseList(head.next);
        // Fix the current node
        head.next.next = head;
        head.next = null;
        return newHead;
    }
}

⚖️ Iterative vs Recursive
Iterative — Pros: fast, O(1) space. Cons: slightly tricky to understand.
Recursive — Pros: elegant, clean logic. Cons: uses the call stack (O(n) space).

❌ Common Mistakes
Forgetting to store the next node
Losing the reference to the rest of the list
Not updating pointers correctly
Returning the wrong node

🔥 Real Interview Insight
This problem is not just about reversing a list. It teaches:
👉 Pointer manipulation
👉 In-place updates
👉 Thinking in steps
👉 Breaking problems into small operations

🧠 Final Thought
At first, this problem feels tricky… but once you understand this line:
curr.next = prev
👉 Everything clicks.

🚀 Conclusion
The Reverse Linked List problem is one of the most important foundational problems in DSA. Mastering it will:
Boost your confidence
Strengthen pointer logic
Help in many advanced problems
👉 Tip: Practice this until you can visualize pointer movement in your head — that's when you truly master linked lists.
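Both snippets above assume LeetCode's ListNode class. For local experimentation, a minimal definition plus a quick check might look like this (class names are illustrative):

```java
// Minimal ListNode definition (matching LeetCode's) plus a quick local check.
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

public class ReverseDemo {
    // Same iterative reversal as in the post
    static ListNode reverseList(ListNode head) {
        ListNode prev = null, curr = head;
        while (curr != null) {
            ListNode next = curr.next; // store next node
            curr.next = prev;          // reverse the link
            prev = curr;               // move prev forward
            curr = next;               // move curr forward
        }
        return prev;
    }

    public static void main(String[] args) {
        // Build 1 → 2 → 3 → 4 → 5 by prepending from 5 down to 1
        ListNode head = null;
        for (int v = 5; v >= 1; v--) {
            ListNode n = new ListNode(v);
            n.next = head;
            head = n;
        }
        // Reverse and print each value
        for (ListNode p = reverseList(head); p != null; p = p.next) {
            System.out.print(p.val + " ");
        }
        // prints: 5 4 3 2 1
    }
}
```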

Tags: Linked List, Pointer Manipulation, Iterative Approach, Recursion, LeetCode, Easy
LeetCode 36: Valid Sudoku Explained – Java Solutions, Intuition & Formula Dry Run


Introduction
Sudoku is a universally beloved puzzle, but validating a Sudoku board algorithmically is a classic technical interview question. In this post, we are going to dive deep into LeetCode 36: Valid Sudoku.
We won't just look at the code; we will explore the intuition behind the problem so you don't have to memorize anything. We'll cover an ingenious in-place validation approach, break down the math formula used to check 3×3 sub-boxes, and look at an alternative optimal solution using HashSets. Let's dive in!

Understanding the Problem
The problem asks us to determine if a partially filled 9×9 Sudoku board is valid. To be valid, the filled cells must follow three straightforward rules:
1. Each row must contain the digits 1-9 without repetition.
2. Each column must contain the digits 1-9 without repetition.
3. Each of the nine 3×3 sub-boxes must contain the digits 1-9 without repetition.
Important Note: A valid board doesn't mean the board is fully solvable! We only care about checking the numbers that are currently on the board.

Intuition: How to Think About the Problem
Before writing code, how do we, as humans, check if a Sudoku board is valid? If you place a 5 in a cell, you quickly scan horizontally (its row), vertically (its column), and within its small 3×3 square. If you see another 5, the board is invalid.
To translate this to code, we have two choices:
1. The Simulation Approach: Go cell by cell. Pick up the number, hide it, and check its row, column, and 3×3 box to see if that number exists anywhere else. (This is the approach we will look at first.)
2. The Memory Approach: Go cell by cell, but keep a "notebook" (like a Hash Table) of everything we have seen so far. If we see a number we've already written down for a specific row, column, or box, it's invalid.

Approach 1: The In-Place Validation (Space-Optimized)
Here is a brilliant solution that validates the board without using any extra data structures.
The Logic: Iterate through every cell on the board. When we find a number, we temporarily replace it with a . (empty space). Then, we iterate 9 times to check its entire row, column, and sub-box. If the number is found, we return false. Otherwise, we put the number back and move to the next cell.

The Java Code
class Solution {
    public boolean isvalid(char[][] board, int i, int j, char k) {
        for (int m = 0; m < 9; m++) {
            // Check row
            if (board[i][m] == k) return false;
            // Check column
            if (board[m][j] == k) return false;
            // Check 3x3 sub-box
            if (board[3 * (i / 3) + m / 3][3 * (j / 3) + m % 3] == k) return false;
        }
        return true;
    }

    public boolean isValidSudoku(char[][] board) {
        for (int i = 0; i < board.length; i++) {
            for (int j = 0; j < board[0].length; j++) {
                if (board[i][j] != '.') {
                    char temp = board[i][j];
                    board[i][j] = '.'; // Temporarily remove the number
                    if (!isvalid(board, i, j, temp)) {
                        return false;
                    }
                    board[i][j] = temp; // Put the number back
                }
            }
        }
        return true;
    }
}

The Math Breakdown: Demystifying the 3×3 Grid Formula
The hardest part of this code to understand is this exact line: board[3*(i/3) + m/3][3*(j/3) + m%3]
How does a single loop variable m (from 0 to 8) traverse a 3×3 grid? Let's do a dry run.
Step 1: Finding the Starting Point of the Box
The grid is 9×9, broken into nine 3×3 boxes. If we are at a random cell, say row i = 4, col j = 5, which box are we in? Because integer division in Java drops the decimal:
i / 3 = 4 / 3 = 1
j / 3 = 5 / 3 = 1
Now multiply by 3 to get the actual starting coordinates (top-left corner) of that specific sub-box:
3 * 1 = 3 (row offset)
3 * 1 = 3 (col offset)
So, the 3×3 box starts at row 3, col 3.
Step 2: Traversing the Box (Dry Run)
Now, as m goes from 0 to 8, we use m / 3 for rows and m % 3 for columns:
m = 0: row offset 0/3 = 0, col offset 0%3 = 0 → checks (3+0, 3+0) = (3, 3)
m = 1: row offset 1/3 = 0, col offset 1%3 = 1 → checks (3+0, 3+1) = (3, 4)
m = 2: row offset 2/3 = 0, col offset 2%3 = 2 → checks (3+0, 3+2) = (3, 5)
m = 3: row offset 3/3 = 1, col offset 3%3 = 0 → checks (3+1, 3+0) = (4, 3)
m = 4: row offset 4/3 = 1, col offset 4%3 = 1 → checks (3+1, 3+1) = (4, 4)
...and so on up to m = 8.
This brilliant formula maps a 1D loop (0 to 8) directly onto a 2D 3×3 grid! No nested loops are needed inside the isvalid function.

Approach 2: The HashSet Solution (Single Pass)
While the first approach is highly space-efficient, it does a bit of redundant checking. An alternative approach that interviewers love is using a HashSet.
Instead of checking rows and columns every time we see a number, we generate a unique "string signature" for every number and attempt to add it to a HashSet. If we see a 5 at row 0 and col 1, we create three strings:
1. "5 in row 0"
2. "5 in col 1"
3. "5 in block 0-0"
The HashSet.add() method returns false if the item already exists in the set. If it returns false, we instantly know the board is invalid!

HashSet Java Code:
import java.util.HashSet;

class Solution {
    public boolean isValidSudoku(char[][] board) {
        HashSet<String> seen = new HashSet<>();
        for (int i = 0; i < 9; i++) {
            for (int j = 0; j < 9; j++) {
                char number = board[i][j];
                if (number != '.') {
                    // HashSet.add() returns false if the element already exists
                    if (!seen.add(number + " in row " + i) ||
                        !seen.add(number + " in col " + j) ||
                        !seen.add(number + " in block " + i/3 + "-" + j/3)) {
                        return false;
                    }
                }
            }
        }
        return true;
    }
}
Notice how we use i/3 + "-" + j/3 to identify the blocks. The top-left is block 0-0, the bottom-right is block 2-2.

Time and Space Complexity Breakdown
Interviewers will always ask for your complexity analysis. Because a Sudoku board is strictly fixed at 9×9, the strict Big-O is actually constant. However, let's look at it conceptually as if the board were N×N.
Approach 1: In-Place Validation
Time Complexity: O(1), strictly speaking. We traverse 81 cells, and for each cell we do at most 9 iterations. 81 × 9 = 729 operations. Since 729 is a constant, it's O(1). (If the board were N×N, the time complexity would be O(N^3), because for N^2 cells we iterate N times.)
Space Complexity: O(1). We only use primitive variables (i, j, k, m, temp). No extra memory is allocated.
Approach 2: HashSet Approach
Time Complexity: O(1). We traverse the 81 cells exactly once, and generating strings and adding to a HashSet takes O(1) time per cell. (If the board were N×N, the time complexity would be O(N^2).)
Space Complexity: O(1). The HashSet will store a maximum of 81 × 3 = 243 strings. Since this upper limit is fixed, space is constant.

Conclusion
The Valid Sudoku problem is a fantastic exercise in matrix traversal and coordinate math. When solving this in an interview:
1. Use the first approach if you want to impress the interviewer with O(1) space complexity and your understanding of the coordinate math (the /3 and %3 trick).
2. Use the second approach (HashSet) if you want to show off your knowledge of data structures and write highly readable, clean code.
I hope this breakdown gives you the intuition needed so you never have to memorize the code for LeetCode 36!
Happy Coding! Keep Learning🤟
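As a sanity check on the box-formula dry run above, the mapping can be exercised in isolation. A small standalone sketch (the class name is illustrative):

```java
public class BoxFormulaCheck {
    public static void main(String[] args) {
        int i = 4, j = 5; // the same cell used in the dry run
        for (int m = 0; m < 9; m++) {
            // Same coordinate math as in the isvalid function
            int row = 3 * (i / 3) + m / 3;
            int col = 3 * (j / 3) + m % 3;
            System.out.println("m = " + m + " -> (" + row + ", " + col + ")");
        }
        // Visits every cell of the sub-box spanning rows 3-5 and cols 3-5,
        // left to right, top to bottom — nine distinct cells in total.
    }
}
```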

Tags: LeetCode, Java, Matrix, Hash Table, Recursion, Backtracking, Medium
LeetCode 1011 — Capacity To Ship Packages Within D Days | Binary Search on Answer Explained


🚀 Try This Problem First!Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.🔗 Problem Link: https://leetcode.com/problems/capacity-to-ship-packages-within-d-days/1. Understanding the ProblemYou have a conveyor belt carrying N packages, each with a given weight. A ship must transport all of them within at most D days. Every day, you load packages in order (no rearranging allowed), and you cannot exceed the ship's weight capacity in a single day.Goal: Find the minimum weight capacity of the ship such that all packages are delivered within D days.Constraints:1 ≤ days ≤ weights.length ≤ 5 × 10⁴1 ≤ weights[i] ≤ 5002. Two Key Observations (Before Writing a Single Line of Code)Before jumping to code, anchor yourself with these two facts:Minimum possible capacity: The ship must at least be able to carry the single heaviest package. If it can't, that package can never be shipped. So:low = max(weights)Maximum possible capacity: If the ship can carry everything at once, it finishes in 1 day — always valid. So:high = sum(weights)Our answer lies somewhere in the range [max(weights), sum(weights)]. This is the classic setup for Binary Search on the Answer.3. Intuition — Why Binary Search?Ask yourself: what happens as ship capacity increases?The number of days needed decreases or stays the same. This is a monotonic relationship — and monotonicity is the green flag for Binary Search.Instead of checking every capacity from 1 to sum(weights) (which is huge), we binary search over the capacity space and for each candidate capacity mid, we ask:"Can all packages be shipped in ≤ D days with this capacity?"This feasibility check runs in O(N) using a greedy simulation, making the whole approach O(N log(sum(weights))).4. 
The Feasibility Check — Greedy Loading
Given a capacity mid, simulate loading the ship greedily:
Keep adding packages to today's load.
The moment adding the next package would exceed mid, start a new day and reset the current load to that package.
Count total days used.
If days used ≤ D, capacity mid is feasible.

5. Binary Search Strategy
If canShip(mid) is true → mid might be the answer, but try smaller. Set ans = mid, high = mid - 1.
If canShip(mid) is false → capacity is too small, increase it. Set low = mid + 1.

6. Dry Run — Example 1
Input: weights = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], days = 5
low = 10 (max weight), high = 55 (sum of weights)
Iteration 1: low = 10, high = 55, mid = 32 → days needed = 2 → feasible ✅ → ans = 32
Iteration 2: low = 10, high = 31, mid = 20 → days needed = 4 → feasible ✅ → ans = 20
Iteration 3: low = 10, high = 19, mid = 14 → days needed = 6 → not feasible ❌ → ans = 20
Iteration 4: low = 15, high = 19, mid = 17 → days needed = 4 → feasible ✅ → ans = 17
Iteration 5: low = 15, high = 16, mid = 15 → days needed = 5 → feasible ✅ → ans = 15
Iteration 6: low = 15, high = 14 → loop ends → ans = 15
Output: 15 ✅

7. The Code Implementation
class Solution { /** * Feasibility Check (Helper Function) * * Given a ship capacity 'mid', this function simulates loading packages * greedily and returns true if all packages can be shipped within 'days' days.
* * @param mid - candidate ship capacity to test * @param arr - weights array * @param days - allowed number of days * @return true if shipping is possible within 'days' days, false otherwise */ public boolean canShip(int mid, int[] arr, int days) { int daysNeeded = 1; // We always need at least 1 day int currentLoad = 0; // Weight loaded on the ship today for (int i = 0; i < arr.length; i++) { // If adding this package exceeds today's capacity, // start a new day and carry this package on the new day if (currentLoad + arr[i] > mid) { currentLoad = arr[i]; // This package starts the new day's load daysNeeded++; // Increment day count } else { // Package fits — add it to today's load currentLoad += arr[i]; } } // If days needed is within the allowed limit, this capacity is feasible return daysNeeded <= days; } /** * Main Function — Binary Search on the Answer * * Search range: [max(weights), sum(weights)] * - low = max(weights) → ship must carry the heaviest package at minimum * - high = sum(weights) → ship carries everything in one day (upper bound) * * @param weights - array of package weights * @param days - maximum allowed days * @return minimum ship capacity to deliver all packages within 'days' days */ public int shipWithinDays(int[] weights, int days) { int high = 0; // Will become sum(weights) int low = Integer.MIN_VALUE; // Will become max(weights) int ans = 0; // Calculate the binary search bounds for (int a : weights) { high += a; // sum of all weights → upper bound low = Math.max(low, a); // max single weight → lower bound } // Binary Search over the capacity space while (low <= high) { int mid = low + (high - low) / 2; // Avoids integer overflow if (canShip(mid, weights, days)) { // mid works — record it as a potential answer // and try to find a smaller valid capacity ans = mid; high = mid - 1; } else { // mid is too small — increase the capacity low = mid + 1; } } return ans; // Minimum feasible capacity }}8. 
Code Walkthrough — Step by StepStep 1 — Setting bounds: We iterate through the weights array once to compute low = max(weights) and high = sum(weights). These are our binary search boundaries.Step 2 — Binary Search loop: We pick mid = low + (high - low) / 2 (safe overflow-free midpoint). We check if capacity mid can ship all packages in ≤ D days.Step 3 — Feasibility helper (canShip): We simulate a greedy day-by-day loading. We start with daysNeeded = 1 and currentLoad = 0. For each package, if it fits in today's load, we add it. If not, we start a new day. The key line is:if (currentLoad + arr[i] > mid) { currentLoad = arr[i]; // new day starts with this package daysNeeded++;}Step 4 — Narrowing the search: If feasible → ans = mid, high = mid - 1 (try smaller). If not feasible → low = mid + 1 (try larger).9. Common Mistakes to AvoidMistake 1 — Wrong lower bound: Using low = 1 instead of low = max(weights) works but is far slower since you binary search over a much larger range unnecessarily.Mistake 2 — Wrong condition in canShip: The return should be daysNeeded <= days (not < days). If days needed equals D, it's still valid.Mistake 3 — Off-by-one in greedy loading: When a package doesn't fit, you start a new day with that package as the first item: currentLoad = arr[i]. Do NOT set currentLoad = 0 — that package must still be accounted for.Mistake 4 — Integer overflow in midpoint: Always use mid = low + (high - low) / 2 instead of (low + high) / 2 to avoid overflow when low and high are large.10. Complexity AnalysisTime Complexity: O(N × log(sum(weights)))Binary search runs over the range [max(weights), sum(weights)], which has at most sum(weights) ≈ 500 × 50000 = 25,000,000 values → about log₂(25,000,000) ≈ 25 iterations.Each iteration calls canShip which is O(N).Total: O(N log S) where S = sum(weights).Space Complexity: O(1)No extra data structures. Only a handful of integer variables are used.11. 
Similar Problems (Same Pattern — Binary Search on Answer)Once you understand this pattern, the following problems become very similar:LeetCode 410 — Split Array Largest SumLeetCode 875 — Koko Eating Bananas [ Blog is also avaliable on this - Read Now ]LeetCode 1283 — Find the Smallest Divisor Given a ThresholdLeetCode 2064 — Minimized Maximum of Products Distributed to Any StoreAll of these follow the same template: define a feasibility check, identify a monotonic answer space, and binary search over it.12. Key Takeaways✅ When you see "find the minimum/maximum value such that a condition holds" — think Binary Search on the Answer.✅ The lower bound of the search space is the most constrained valid value (max weight here).✅ The upper bound is the least constrained valid value (total weight here).✅ The feasibility check must be O(N) or better to keep the overall complexity efficient.✅ Greedy loading (pack as much as possible each day) is optimal here since packages must go in order.Happy Coding! If this helped you, share it with a friend who's grinding LeetCode. 🚀
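The shared template mentioned above can be sketched generically. This is a hypothetical sketch, not the post's exact code: `feasible` stands in for any monotonic predicate, and it uses the equivalent `low < high` loop shape (converging on the boundary) rather than the post's `low <= high` with an `ans` variable:

```java
public class BinarySearchOnAnswer {
    // Monotonic predicate: once true for some capacity, it stays true
    // for every larger capacity (here: "can we ship within 'days' days?")
    static boolean feasible(int capacity, int[] weights, int days) {
        int needed = 1, load = 0;
        for (int w : weights) {
            if (load + w > capacity) { needed++; load = w; } // start a new day
            else load += w;                                   // fits today
        }
        return needed <= days;
    }

    // Smallest value in [max(weights), sum(weights)] for which feasible is true
    static int minimize(int[] weights, int days) {
        int lo = 0, hi = 0;
        for (int w : weights) { hi += w; lo = Math.max(lo, w); }
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (feasible(mid, weights, days)) hi = mid; // feasible: try smaller
            else lo = mid + 1;                          // infeasible: go larger
        }
        return lo;
    }

    public static void main(String[] args) {
        int[] w = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.println(minimize(w, 5)); // 15
    }
}
```

Swapping in a different feasibility check (hours to eat bananas, maximum subarray sum, etc.) turns this same skeleton into the other problems listed above.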

Tags: LeetCode, Binary Search, Medium, Java, Binary Search on Answer, Arrays
LeetCode 2095. Delete the Middle Node of a Linked List – Fast and Slow Pointer Approach


🔗 Try This Problem
https://leetcode.com/problems/delete-the-middle-node-of-a-linked-list/

Problem Explanation
You are given the head of a singly linked list. The task is to remove the middle node and return the updated list.
The middle node is defined using 0-based indexing: Middle Index = ⌊n / 2⌋, where n is the total number of nodes.

Example
Input: [1, 3, 4, 7, 1, 2, 6]
Index:  0  1  2  3  4  5  6
n = 7, middle index = 3, node to remove = 7
Output: [1, 3, 4, 1, 2, 6]

Approach 1: Brute Force (Two Traversals)
Idea
Traverse the list to count the total nodes
Compute the middle index
Traverse again to delete that node
Complexity
Time Complexity: O(n)
Space Complexity: O(1)
Limitation
This approach requires two passes, which is not optimal when a single-traversal solution exists.

Approach 2: Fast and Slow Pointer (Optimal)
Intuition
Use two pointers:
The slow pointer moves one step at a time
The fast pointer moves two steps at a time
When the fast pointer reaches the end of the list, the slow pointer will be at the middle node.
Important Detail
To remove the middle node, access to the previous node is required. Therefore, maintain an additional pointer:
prev → tracks the node before slow

Algorithm Steps
Initialize: slow = head, fast = head, prev = null
Traverse: move fast by 2 steps, update prev = slow, then move slow by 1 step
When the loop ends: slow points to the middle node, prev points to the node before it
Delete the node: prev.next = slow.next

Detailed Dry Run
Input: 1 → 3 → 4 → 7 → 1 → 2 → 6
Initial state: slow = 1, fast = 1, prev = null
Iteration 1: fast → 4, slow → 3, prev → 1
Iteration 2: fast → 1, slow → 4, prev → 3
Iteration 3: fast → 6 (the last node, so fast.next = null), slow → 7, prev → 4
After the loop: slow = 7 (middle node), prev = 4
Deletion: prev.next = slow.next
Result: 1 → 3 → 4 → 1 → 2 → 6

Optimized Code (Java)
class Solution {
    public ListNode deleteMiddle(ListNode head) {
        // Edge cases: empty list or single node
        if (head == null) return head;
        if (head.next == null) return null;
        ListNode slow = head;
        ListNode fast = head;
        ListNode prev = null;
        // Traverse using fast and slow pointers
        while (fast != null && fast.next != null) {
            fast = fast.next.next;
            prev = slow;
            slow = slow.next;
        }
        // Remove the middle node
        prev.next = slow.next;
        return head;
    }
}

Complexity Analysis
Time Complexity: O(n)
Space Complexity: O(1)
Only one traversal of the linked list; no extra data structures are used.

Edge Cases
Single node — Input: [1], Output: []
Two nodes — Input: [2, 1], Output: [2]
Even-length list — Input: [1, 2, 3, 4], n = 4, middle index = 2, node removed = 3

Key Takeaways
The fast and slow pointer technique reduces a two-pass solution to one pass
Tracking the previous node is necessary for deletion
Works efficiently for large linked lists
A fundamental pattern used in multiple linked list problems

Conclusion
The fast and slow pointer technique provides an elegant and efficient way to identify and remove the middle node in a linked list. By leveraging different traversal speeds, the problem can be solved in a single pass with constant space, making it optimal for both interviews and practical implementations.
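Approach 1 (the two-traversal brute force) is described above only in prose. A minimal runnable sketch, with a local ListNode definition matching LeetCode's so it runs standalone (class names are illustrative):

```java
// Minimal ListNode (matching LeetCode's) so the sketch runs standalone
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

public class TwoPassDelete {
    static ListNode deleteMiddle(ListNode head) {
        // Empty list or single node: deleting the middle leaves nothing
        if (head == null || head.next == null) return null;
        // Pass 1: count the nodes
        int n = 0;
        for (ListNode p = head; p != null; p = p.next) n++;
        // Pass 2: stop at the node just before index n / 2, then unlink
        ListNode prev = head;
        for (int i = 0; i < n / 2 - 1; i++) prev = prev.next;
        prev.next = prev.next.next;
        return head;
    }

    public static void main(String[] args) {
        // Build 1 → 3 → 4 → 7 → 1 → 2 → 6 and delete the middle (7)
        int[] vals = {1, 3, 4, 7, 1, 2, 6};
        ListNode head = null;
        for (int i = vals.length - 1; i >= 0; i--) {
            ListNode node = new ListNode(vals[i]);
            node.next = head;
            head = node;
        }
        for (ListNode p = deleteMiddle(head); p != null; p = p.next) {
            System.out.print(p.val + " ");
        }
        // prints: 1 3 4 1 2 6
    }
}
```

Both passes are O(n), so the asymptotic cost matches the fast/slow version; the difference is purely the number of traversals.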

Tags: LeetCode, Medium, Linked List, Fast and Slow Pointer
Find Peak Element – Binary Search Explained with Java Solution (LeetCode 162)


Try the QuestionBefore reading the explanation, try solving the problem yourself:👉 https://leetcode.com/problems/find-peak-element/Attempting the problem first helps develop algorithmic intuition, especially for binary search problems.Problem StatementYou are given a 0-indexed integer array nums.Your task is to find the index of a peak element.What is a Peak Element?A peak element is an element that is strictly greater than its immediate neighbors.For an element nums[i] to be a peak:nums[i] > nums[i-1]nums[i] > nums[i+1]However, the array boundaries have a special rule.Boundary RuleYou can imagine that:nums[-1] = -∞nums[n] = -∞This means:The first element only needs to be greater than the second element.The last element only needs to be greater than the second last element.Constraints1 <= nums.length <= 1000-2^31 <= nums[i] <= 2^31 - 1nums[i] != nums[i + 1] for all valid iImportant implications:Adjacent elements are never equal.The array always has at least one peak.Understanding the Problem with ExamplesExample 1Inputnums = [1,2,3,1]Visualization1 2 3 1^Here:3 > 23 > 1So 3 is a peak element.Output2(index of value 3)Example 2Inputnums = [1,2,1,3,5,6,4]Visualization1 2 1 3 5 6 4^ ^Possible peaks:26Valid outputs:1 or 5Either peak index is acceptable.Key ObservationsImportant facts about peak elements:Every array must contain at least one peak.If numbers keep increasing, the last element will be a peak.If numbers keep decreasing, the first element will be a peak.If the array has ups and downs, peaks exist in the middle.This guarantees a solution exists.Approach 1 — Linear Scan (Brute Force)The simplest approach is to check each element and see whether it satisfies the peak condition.AlgorithmTraverse the array.Check if the current element is greater than neighbors.Return the index if found.Examplenums = [1,2,3,1]Check:1 < 22 < 33 > 2 and 3 > 1 → PeakImplementationpublic int findPeakElement(int[] nums) {for(int i = 0; i < nums.length; i++){boolean left = (i == 0) || 
nums[i] > nums[i-1];
        boolean right = (i == nums.length - 1) || nums[i] > nums[i+1];
        if (left && right) {
            return i;
        }
    }
    return -1;
}

Time Complexity: O(n)
Space Complexity: O(1)
Although simple, this solution does not satisfy the required O(log n) complexity.

Approach 2 — Binary Search Using Peak Conditions
Instead of scanning the entire array, we can use binary search.
Key Idea
If we are at index mid, compare it with its neighbors. Three situations may occur.
Case 1 — Peak Found
nums[mid-1] < nums[mid] > nums[mid+1]
Then mid is the peak.
Case 2 — Increasing Slope
nums[mid] < nums[mid+1]
Example: 1 3 5 7
This means a peak must exist on the right side. Move the search to the right.
Case 3 — Decreasing Slope
nums[mid] < nums[mid-1]
Example: 7 5 3 1
A peak exists on the left side. Move the search to the left.
Implementation
// Handle boundary peaks first, then search the interior
if (nums.length == 1 || nums[0] > nums[1]) return 0;
if (nums[nums.length - 1] > nums[nums.length - 2]) return nums.length - 1;
int i = 1;
int j = nums.length - 2;
while (i <= j) {
    int mid = i + (j - i) / 2;
    if (nums[mid-1] < nums[mid] && nums[mid] > nums[mid+1]) {
        return mid;
    } else if (nums[mid] < nums[mid+1]) {
        i = mid + 1;
    } else {
        j = mid - 1;
    }
}
return -1; // unreachable: a peak always exists
Note: the two boundary checks are essential. Without them, a strictly increasing array like [1,2,3,4], whose peak is the last element, would fall out of the loop without returning an index.
Time Complexity: O(log n)
Space Complexity: O(1)

Approach 3 — Optimized Binary Search (Most Elegant Solution)
A simpler and more elegant version of binary search exists.
Core Intuition
Compare nums[mid] and nums[mid + 1]. Two possibilities exist.
Case 1 — Increasing Slope
nums[mid] < nums[mid+1]
Example: 1 2 3 4
The peak must exist on the right side. Move: left = mid + 1
Case 2 — Decreasing Slope
nums[mid] > nums[mid+1]
Example: 4 3 2 1
A peak exists on the left side, including mid. Move: right = mid
This gradually narrows down to the peak.
Final Optimized Implementation
class Solution {
    public int findPeakElement(int[] nums) {
        int i = 0;
        int j = nums.length - 1;
        while (i < j) {
            int mid = i + (j - i) / 2;
            if (nums[mid] < nums[mid + 1]) {
                i = mid + 1;
            } else {
                j = mid;
            }
        }
        return i;
    }
}
Step-by-Step Example
Array: [1,2,1,3,5,6,4]
Iteration 1: mid = 3, nums[mid] = 3, nums[mid+1] = 5 → increasing slope → move right.
Iteration 2: mid = 5, nums[mid] = 6, nums[mid+1] = 4 → decreasing slope → move left.
Eventually: peak index = 5, value = 6.
Why This Binary Search Works
The array behaves
like a mountain landscape. Example visualization:

    /\
   /  \
  /    \__

If you stand at mid:
If the right side is higher, go right.
If the left side is higher, go left.
This guarantees that you will eventually reach a peak.
Time Complexity: O(log n). Each iteration cuts the search space in half.
Space Complexity: O(1). Only a few variables are used.
Key Takeaways
✔ A peak element is greater than its neighbors
✔ The array always contains at least one peak
✔ Binary search can be applied using slope comparison
✔ The optimized solution only compares nums[mid] and nums[mid+1]
✔ The final algorithm runs in O(log n) time
Final Thoughts
This problem is an excellent example of applying binary search beyond sorted arrays. Instead of searching for a value, binary search is used here to locate a structural property of the array (a peak). Understanding this pattern is extremely valuable because similar logic appears in many interview problems involving:
Mountain arrays
Bitonic arrays
Optimization search problems
Mastering this approach will significantly improve your binary search problem-solving skills for technical interviews.
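To see the optimized solution in action locally, the method can be dropped into a small runnable sketch (the wrapper class name is illustrative):

```java
public class PeakCheck {
    // Optimized binary search from the post: compare nums[mid] with nums[mid+1]
    static int findPeakElement(int[] nums) {
        int i = 0, j = nums.length - 1;
        while (i < j) {
            int mid = i + (j - i) / 2;
            if (nums[mid] < nums[mid + 1]) i = mid + 1; // ascending: peak on the right
            else j = mid;                               // descending: peak at mid or left
        }
        return i;
    }

    public static void main(String[] args) {
        System.out.println(findPeakElement(new int[]{1, 2, 3, 1}));          // 2
        System.out.println(findPeakElement(new int[]{1, 2, 1, 3, 5, 6, 4})); // 5
        System.out.println(findPeakElement(new int[]{5}));                   // 0 (single element)
    }
}
```

Note that for arrays with several peaks, this implementation converges on one valid peak; LeetCode accepts any of them.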

Tags: LeetCode, Binary Search, Medium, Java
LeetCode 121 — Best Time to Buy and Sell Stock I | Two Pointer / Sliding Window Explained


🚀 Try This Problem First!Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.🔗 Problem Link: https://leetcode.com/problems/best-time-to-buy-and-sell-stock/Understanding the ProblemYou are given an array prices where prices[i] is the stock price on day i. You can buy on one day and sell on a later day. You want to maximize your profit.Goal: Return the maximum profit possible. If no profit can be made, return 0.Key Rules:You must buy before you sell — no going back in time.You can only make one transaction (one buy, one sell).If prices only go down, profit is impossible — return 0.Constraints:1 ≤ prices.length ≤ 10⁵0 ≤ prices[i] ≤ 10⁴Understanding the Problem With a Real-World AnalogyImagine you're watching a stock ticker for a week. You want to pick the single best day to buy and a later day to sell. You can't predict the future — you just have historical prices. The question is: looking back at all the prices, what was the best single buy-sell pair you could have chosen?Key Observation (Before Writing a Single Line of Code)The profit on any given pair of days is:profit = prices[sell_day] - prices[buy_day]To maximize profit, for every potential sell day, we want the lowest possible buy price seen so far to the left of it. This is the core insight that drives the efficient solution.If at any point the current price is lower than our tracked minimum, there is no point keeping the old minimum — the new one is strictly better for any future sell day.Intuition — The Two Pointer ApproachWe use two pointers i (buy pointer) and j (sell pointer), both starting at the beginning with i = 0 and j = 1.At every step we ask: is prices[j] greater than prices[i]?If yes → this is a profitable pair. Compute the profit and update the maximum if it's better.If no → prices[j] is cheaper than prices[i]. There's no reason to keep i where it is — any future sell day would be better served by buying at j instead. 
So we move i to j.Either way, j always moves forward by 1 until it reaches the end of the array.Why Moving i to j is CorrectThis is the most important conceptual point. When prices[j] < prices[i], you might wonder — why not just skip j and move on? Why move i?Because for any future day k > j, the profit buying at j is:prices[k] - prices[j]And the profit buying at i is:prices[k] - prices[i]Since prices[j] < prices[i], buying at j will always give a higher or equal profit for the same sell day k. So we update i = j — there is no scenario where keeping the old i is better.Dry Run — Example 1 (Step by Step)Input: prices = [7, 1, 5, 3, 6, 4]We start with i = 0, j = 1, maxP = 0.Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 17 > 1 → prices[j] is cheaper. Move i to j. i = 1, j moves to 2. maxP = 0.Step 2: i = 1, j = 2 → prices[i] = 1, prices[j] = 51 < 5 → Profitable! profit = 5 - 1 = 4. 4 > 0 → update maxP = 4. j moves to 3.Step 3: i = 1, j = 3 → prices[i] = 1, prices[j] = 31 < 3 → Profitable! profit = 3 - 1 = 2. 2 < 4 → maxP stays 4. j moves to 4.Step 4: i = 1, j = 4 → prices[i] = 1, prices[j] = 61 < 6 → Profitable! profit = 6 - 1 = 5. 5 > 4 → update maxP = 5. j moves to 5.Step 5: i = 1, j = 5 → prices[i] = 1, prices[j] = 41 < 4 → Profitable! profit = 4 - 1 = 3. 3 < 5 → maxP stays 5. j moves to 6. j = 6 = prices.length → loop ends.Output: maxP = 5 ✅ (Buy at day 2 price=1, sell at day 5 price=6)Dry Run — Example 2 (Step by Step)Input: prices = [7, 6, 4, 3, 1]We start with i = 0, j = 1, maxP = 0.Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 67 > 6 → Move i to j. i = 1, j moves to 2. maxP = 0.Step 2: i = 1, j = 2 → prices[i] = 6, prices[j] = 46 > 4 → Move i to j. i = 2, j moves to 3. maxP = 0.Step 3: i = 2, j = 3 → prices[i] = 4, prices[j] = 34 > 3 → Move i to j. i = 3, j moves to 4. maxP = 0.Step 4: i = 3, j = 4 → prices[i] = 3, prices[j] = 13 > 1 → Move i to j. i = 4, j moves to 5. 
j = 5 = prices.length → loop ends.
Output: maxP = 0 ✅ (Prices only went down, no profitable transaction exists)

The Code — Implementation
class Solution {
    public int maxProfit(int[] prices) {
        int i = 0;     // Buy pointer — points to the current minimum price day
        int j = 1;     // Sell pointer — scans forward through the array
        int maxP = 0;  // Tracks the maximum profit seen so far

        // j scans from day 1 to the end of the array
        while (i < j && j < prices.length) {
            if (prices[i] > prices[j]) {
                // prices[j] is cheaper than the current buy price.
                // Move the buy pointer to j — buying here is strictly better
                // for any future sell day.
                i = j;
            } else {
                // prices[j] >= prices[i] — this is a profitable pair.
                // Calculate the profit and update maxP if it's the best so far.
                int profit = prices[j] - prices[i];
                if (profit > maxP) {
                    maxP = profit;
                }
            }
            // Always move the sell pointer forward
            j++;
        }
        // If no profitable transaction was found, maxP remains 0
        return maxP;
    }
}

Code Walkthrough — Step by Step
Initialization: i = 0 acts as our buy day pointer (always pointing to the lowest price seen so far). j = 1 is our sell day pointer scanning forward. maxP = 0 is our answer — defaulting to 0 handles the case where no profit is possible.
The loop condition: i < j ensures we never sell before buying. j < prices.length ensures we don't go out of bounds.
When prices[i] > prices[j]: The current sell day price is cheaper than our buy day. We update i = j, because buying at j dominates buying at i for all future sell days.
When prices[j] >= prices[i]: We have a potential profit. We compute it and update maxP if it beats the current best.
j always increments: Regardless of which branch we take, j++ moves the sell pointer forward every iteration — this is what drives the loop to completion.

Common Mistakes to Avoid
Not returning 0 for no-profit cases: If prices are strictly decreasing, maxP never gets updated and stays 0.
The initialization maxP = 0 handles this correctly — never initialize it to a negative number.

Using a nested loop (brute force): a common first instinct is two nested loops checking every pair. That is O(N²) and will TLE on large inputs. The two pointer approach solves it in O(N).

Thinking you need to sort: sorting destroys the order of days, which is fundamental to the problem. Never sort the prices array here.

Moving i forward by 1 instead of to j: when prices[j] < prices[i], some people write i++ instead of i = j. This is wrong — you might miss the new minimum entirely and waste iterations.

Complexity Analysis

Time Complexity: O(N). The j pointer traverses the array exactly once from index 1 to the end. The i pointer only moves forward and never exceeds j. So the entire algorithm is a single linear pass — O(N).

Space Complexity: O(1). No extra arrays, maps, or stacks are used. Only three integer variables — i, j, and maxP.

Alternative Way to Think About It (Min Tracking)

Some people write this problem using a minPrice variable instead of two pointers.
Both approaches are equivalent — the two pointer framing is slightly more intuitive visually, but here is the min-tracking version for reference:

int minPrice = Integer.MAX_VALUE;
int maxProfit = 0;
for (int price : prices) {
    if (price < minPrice) {
        minPrice = price;
    } else if (price - minPrice > maxProfit) {
        maxProfit = price - minPrice;
    }
}
return maxProfit;

The logic is the same — always track the cheapest buy day seen so far, and for each day compute what the profit would be if you sold today.

Similar Problems (Build on This Foundation)

LeetCode 122 — Best Time to Buy and Sell Stock II (multiple transactions allowed) [Blog also available on this — Read Now]
LeetCode 123 — Best Time to Buy and Sell Stock III (at most 2 transactions) [Blog also available on this — Read Now]
LeetCode 188 — Best Time to Buy and Sell Stock IV (at most k transactions)
LeetCode 309 — Best Time to Buy and Sell Stock with Cooldown
LeetCode 714 — Best Time to Buy and Sell Stock with Transaction Fee

LeetCode 121 is the foundation. Master this one deeply before moving to the others — they all build on the same core idea.

Key Takeaways

✅ For every potential sell day, the best buy day is always the minimum price seen to its left — this drives the whole approach.
✅ When the sell pointer finds a cheaper price than the buy pointer, always move the buy pointer there — it strictly dominates the old buy day for all future sells.
✅ Initialize maxP = 0 so that no-profit scenarios (prices only going down) are handled automatically.
✅ The two pointer approach solves this in a single linear pass — O(N) time and O(1) space.
✅ j always increments every iteration — i only moves when a cheaper price is found.

Happy Coding! If this clicked for you, the entire Stock series on LeetCode will feel much more approachable.
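As a quick sanity check, the min-tracking variant can be dropped into a runnable harness and exercised on both worked examples from the dry runs. This is a minimal sketch; the class and method names are illustrative, not from the original post.

```java
// Minimal harness for the min-tracking variant described in this article.
// Class and method names here are illustrative.
public class MaxProfitCheck {
    static int maxProfit(int[] prices) {
        int minPrice = Integer.MAX_VALUE;
        int maxProfit = 0;
        for (int price : prices) {
            if (price < minPrice) {
                minPrice = price;                 // new cheapest buy day
            } else if (price - minPrice > maxProfit) {
                maxProfit = price - minPrice;     // better sell day found
            }
        }
        return maxProfit;
    }

    public static void main(String[] args) {
        System.out.println(maxProfit(new int[]{7, 1, 5, 3, 6, 4})); // 5 (buy at 1, sell at 6)
        System.out.println(maxProfit(new int[]{7, 6, 4, 3, 1}));    // 0 (prices only fall)
    }
}
```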

LeetCode · Two Pointers · Easy · Java · DSA
Mastering Binary Search – LeetCode 704 Explained


Introduction

Binary Search is one of the most fundamental and powerful algorithms in computer science. If you're preparing for coding interviews, mastering Binary Search is absolutely essential. In this blog, we'll break down LeetCode 704 – Binary Search, explain the algorithm in detail, walk through a clean Java implementation, analyze complexity, and recommend additional problems to strengthen your understanding.

You can try this problem here: Problem Link

📌 Problem Overview

You are given:
A sorted array of integers nums (ascending order)
An integer target

Your task is to return the index of target if it exists in the array. Otherwise, return -1.

Example 1
Input: nums = [-1,0,3,5,9,12], target = 9
Output: 4

Example 2
Input: nums = [-1,0,3,5,9,12], target = 2
Output: -1

Constraints
1 <= nums.length <= 10⁴
All integers are unique
The array is sorted in ascending order
Required Time Complexity: O(log n)

🚀 Understanding the Binary Search Algorithm

Binary Search works only on sorted arrays. Instead of checking each element one by one (like Linear Search), Binary Search:
Finds the middle element.
Compares it with the target.
Eliminates half of the search space.
Repeats until the element is found or the search space is empty.

Why is it Efficient?
Every iteration cuts the search space in half. If the array size is n, the number of operations becomes log₂(n). This makes it extremely efficient compared to linear search (O(n)).

🧠 Step-by-Step Algorithm
Initialize two pointers: low = 0, high = nums.length - 1
While low <= high:
Calculate the middle index: mid = low + (high - low) / 2
If nums[mid] == target, return mid
If target > nums[mid], search the right half → low = mid + 1
Else search the left half → high = mid - 1
If the loop ends, return -1

💻 The Java Code, Explained

Here is the implementation:

class Solution {
    public int search(int[] nums, int target) {
        int high = nums.length - 1;
        int low = 0;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (target == nums[mid]) {
                return mid;
            } else if (target > nums[mid]) {
                low = mid + 1;
            } else {
                high = mid - 1;
            }
        }
        return -1;
    }
}

🔍 Code Breakdown

1️⃣ Initialize Boundaries
int high = nums.length - 1;
int low = 0;
This defines the search space from index 0 to n-1.

2️⃣ Loop Condition
while (low <= high)
The loop continues as long as there is a valid search range.

3️⃣ Safe Mid Calculation
int mid = low + (high - low) / 2;
This is preferred over (low + high) / 2. Why? Because low + high may cause integer overflow in large arrays. This form prevents that.

4️⃣ Comparison Logic
if (target == nums[mid]) → found, return the index.
else if (target > nums[mid]) → low = mid + 1; search the right half.
else → high = mid - 1; search the left half.

5️⃣ Not Found Case
return -1; if the loop finishes without finding the target.

⏱ Time and Space Complexity
Time Complexity: O(log n) — each iteration halves the search space.
Space Complexity: O(1) — no extra space used, purely iterative.

🔥 Why This Problem Is Important

LeetCode 704 is:
The foundation of all Binary Search problems
A template question
Frequently asked in interviews
Required to understand advanced problems like: Search in Rotated Sorted Array, Find First and Last Position, Peak Element, Binary Search on Answer

📚 Recommended Binary Search Practice Problems

After solving this, practice these in order:
🟢 Easy
35. Search Insert Position
69. Sqrt(x)
278. First Bad Version
🟡 Medium
34. Find First and Last Position of Element in Sorted Array
33. Search in Rotated Sorted Array
74. Search a 2D Matrix
875. Koko Eating Bananas (Binary Search on Answer)
🔴 Advanced Pattern Practice
1011. Capacity To Ship Packages Within D Days
410.
Split Array Largest Sum

These will help you master:
Lower bound / upper bound
Binary search on monotonic functions
Searching in rotated arrays
Searching in 2D matrices
Binary search on answer pattern

🎯 Final Thoughts

Binary Search is not just a single algorithm — it's a pattern. If you truly understand:
How the search space shrinks
When to move left vs right
How to calculate mid safely
Loop conditions (low <= high vs low < high)
you can solve 50+ interview problems easily.

LeetCode 704 is the perfect starting point. Master this template, and you unlock an entire category of problems.
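The lower bound pattern listed above uses the low < high loop variant mentioned in Final Thoughts, so it is worth writing out once. A minimal sketch, with an illustrative class name: it returns the smallest index whose value is at least target, which is also the answer to Search Insert Position (35).

```java
// Lower bound: smallest index i such that nums[i] >= target.
// Uses the low < high loop variant; returns nums.length if every
// element is smaller than target. Class/method names are illustrative.
public class LowerBound {
    static int lowerBound(int[] nums, int target) {
        int low = 0, high = nums.length; // note: high starts at length, not length - 1
        while (low < high) {
            int mid = low + (high - low) / 2; // overflow-safe mid
            if (nums[mid] < target) {
                low = mid + 1;  // answer lies strictly to the right
            } else {
                high = mid;     // mid itself may be the answer; keep it in range
            }
        }
        return low;
    }

    public static void main(String[] args) {
        int[] nums = {-1, 0, 3, 5, 9, 12};
        System.out.println(lowerBound(nums, 9)); // 4 — exact match, same as search()
        System.out.println(lowerBound(nums, 2)); // 2 — the insert position for a missing value
    }
}
```

Note the two differences from the low <= high template: high starts one past the last index, and the loop never discards mid when nums[mid] could still be the answer.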

Binary Search · LeetCode · Easy
Peak Index in a Mountain Array – Brute Force to Binary Search (LeetCode 852)


Try the Question

Before reading the explanation, try solving the problem yourself:
👉 https://leetcode.com/problems/peak-index-in-a-mountain-array/
Solving it first helps strengthen your problem-solving intuition, especially for binary search problems.

Problem Statement

You are given an integer array arr that is guaranteed to be a mountain array. A mountain array is defined as an array where:
Elements strictly increase until a peak element.
After the peak, elements strictly decrease.

Example: [0, 2, 5, 7, 6, 3, 1] — the peak is 7, at index 3.

Your task is to return the index of the peak element.

Constraints
3 <= arr.length <= 10^5
0 <= arr[i] <= 10^6
arr is guaranteed to be a mountain array

Important implications:
The peak always exists.
There will be exactly one peak.
The array strictly increases before the peak and strictly decreases after it.

Example Walkthrough

Example 1: arr = [0,1,0] → peak value 1 at index 1. Output: 1
Example 2: arr = [0,2,1,0] → peak value 2 at index 1. Output: 1
Example 3: arr = [0,10,5,2] → peak value 10 at index 1. Output: 1

Understanding the Mountain Array Structure

A mountain array always looks like this: increasing → peak → decreasing.
Because of this structure:
Before the peak → numbers increase
After the peak → numbers decrease
This property makes the problem perfect for binary search.

Approach 1 — Brute Force (Check Every Element)

The simplest way is to check every element and determine if it is greater than its neighbors.

Algorithm
Traverse the array from index 1 to n-2.
For each element check: arr[i] > arr[i-1] AND arr[i] > arr[i+1]
If true, return i.

Implementation

public int peakIndexInMountainArray(int[] arr) {
    for (int i = 1; i < arr.length - 1; i++) {
        if (arr[i] > arr[i-1] && arr[i] > arr[i+1]) {
            return i;
        }
    }
    return -1;
}

Time Complexity: O(n) — we may need to check every element.
Space Complexity: O(1)

Approach 2 — Linear Scan Using Increasing Trend

Since the array first increases then decreases, we can detect where the increase stops.

Key Idea
Traverse the array and check when:
arr[i] >
arr[i+1]

This means we reached the peak.

Implementation

public int peakIndexInMountainArray(int[] arr) {
    for (int i = 0; i < arr.length - 1; i++) {
        if (arr[i] > arr[i+1]) {
            return i;
        }
    }
    return -1;
}

Example: [0, 2, 5, 7, 6, 3, 1]
At index 3: 7 > 6, so index 3 is the peak.

Time Complexity: O(n)
Space Complexity: O(1)

Approach 3 — Modified Binary Search (Optimal Solution)

Because the array has a mountain structure, binary search can be used.

Core Intuition
Compare arr[mid] with arr[mid+1]. Two situations are possible.

Case 1 — Increasing Slope
arr[mid] < arr[mid+1] (example: 1 3 5 7)
This means the peak is on the right side. Move the search space to the right: i = mid + 1.

Case 2 — Decreasing Slope
arr[mid] > arr[mid+1] (example: 7 5 3 1)
This means the peak lies on the left side, including mid itself: j = mid.

Optimal Java Implementation

class Solution {
    public int peakIndexInMountainArray(int[] arr) {
        int i = 0;
        int j = arr.length - 1;
        while (i < j) {
            int mid = i + (j - i) / 2;
            if (arr[mid] < arr[mid + 1]) {
                i = mid + 1;
            } else {
                j = mid;
            }
        }
        return i;
    }
}

Step-by-Step Example

Array: [0,2,5,7,6,3,1]

Iteration 1: i = 0, j = 6 → mid = 3, arr[3] = 7, arr[4] = 6. Decreasing slope → j = 3.
Iteration 2: i = 0, j = 3 → mid = 1, arr[1] = 2, arr[2] = 5. Increasing slope → i = 2.
Iteration 3: i = 2, j = 3 → mid = 2, arr[2] = 5, arr[3] = 7. Increasing slope → i = 3.
Now i == j == 3 → loop ends. Peak index = 3.

Time Complexity: O(log n) — binary search halves the search space each iteration.
Space Complexity: O(1) — no extra memory is required.

Why Binary Search Works Here

Because the array behaves like a mountain curve. If you are standing at index mid:
If the right side is higher, go right.
If the left side is higher, go left.
Eventually you will reach the top of the mountain (the peak).

Key Takeaways
✔ A mountain array increases then decreases
✔ There is exactly one peak
✔ Brute force and linear scan work in O(n) time
✔ Binary search exploits the mountain structure
✔ Optimal solution runs in O(log n) time

Final Thoughts

This problem is a classic example of binary search applied to array patterns rather than sorted values. Understanding the slope comparison technique used here will help solve other problems such as:
Find Peak Element
Mountain Array Search
Bitonic Array Problems

Mastering these patterns significantly improves binary search problem-solving skills for coding interviews.
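The slope comparison transfers almost verbatim to Find Peak Element (LeetCode 162), the first follow-up listed above. There the array is not a mountain, but any local peak is a valid answer and adjacent elements are guaranteed to differ — a minimal sketch under those assumptions (class name illustrative):

```java
// Same slope-comparison loop as the mountain solution, applied to
// LeetCode 162 (Find Peak Element), where any local peak is acceptable
// and nums[i] != nums[i+1] for all adjacent pairs.
public class FindPeak {
    static int findPeakElement(int[] nums) {
        int i = 0, j = nums.length - 1;
        while (i < j) {
            int mid = i + (j - i) / 2;
            if (nums[mid] < nums[mid + 1]) {
                i = mid + 1; // ascending slope: some peak exists to the right
            } else {
                j = mid;     // descending slope: mid itself may be the peak
            }
        }
        return i;
    }

    public static void main(String[] args) {
        System.out.println(findPeakElement(new int[]{0, 2, 5, 7, 6, 3, 1})); // 3
        System.out.println(findPeakElement(new int[]{1, 2, 3, 1}));          // 2
    }
}
```

The only conceptual change from the mountain version is the guarantee you rely on: instead of one global peak, you argue that whichever direction the slope rises must contain some peak.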

LeetCode · Binary Search · Mountain Array · Medium · Java
LeetCode 3488 — Closest Equal Element Queries: A Complete Walkthrough from Brute Force to Optimal


If you have been grinding LeetCode lately, you have probably run into problems where your first clean-looking solution times out and forces you to rethink from scratch. LeetCode 3488 is exactly that kind of problem. This article walks through the complete thought process — from the naive approach that got me TLE, to the intuition shift, to the final optimized solution in Java.You can find the original problem here: LeetCode 3488 — Closest Equal Element Queries at Problem LinkUnderstanding the ProblemYou are given a circular array nums and an array of queries. For each query queries[i], you must find the minimum distance between the element at index queries[i] and any other index j such that nums[j] == nums[queries[i]]. If no such other index exists, the answer is -1.The critical detail here is the word circular. The array wraps around, which means the distance between two indices i and j in an array of length n is not simply |i - j|. It is:min( |i - j| , n - |i - j| )You can travel either clockwise or counterclockwise, and you take whichever path is shorter.Breaking Down the ExamplesExample 1nums = [1, 3, 1, 4, 1, 3, 2], queries = [0, 3, 5]For query index 0, the value is 1. Other indices holding 1 are 2 and 4. Circular distances are min(2, 5) = 2 and min(4, 3) = 3. The minimum is 2.For query index 3, the value is 4. It appears nowhere else in the array. Answer is -1.For query index 5, the value is 3. The other 3 sits at index 1. Circular distance is min(4, 3) = 3. Answer is 3.Output: [2, -1, 3]Example 2nums = [1, 2, 3, 4], queries = [0, 1, 2, 3]Every element is unique. Every query returns -1.Output: [-1, -1, -1, -1]First Attempt — Brute ForceMy first instinct was straightforward. For each query, scan the entire array, collect every index that matches the queried value, compute the circular distance to each, and return the minimum. 
Clean logic, easy to reason about, and dead simple to implement. (The declarations of i and lis are included here so the snippet stands on its own.)

int i = 0;
List<Integer> lis = new ArrayList<>();
while (i != queries.length) {
    int max = Integer.MAX_VALUE; // sentinel; actually tracks the minimum distance found
    for (int j = 0; j < nums.length; j++) {
        int target = nums[queries[i]];
        if (nums[j] == target && j != queries[i]) {
            // Linear distance between the two indices
            int right = Math.abs(j - queries[i]);
            // Distance going the other direction around the ring
            int left = nums.length - right;
            // True circular distance is the shorter of the two
            int dist = Math.min(right, left);
            max = Math.min(max, dist);
        }
    }
    lis.add(max == Integer.MAX_VALUE ? -1 : max);
    i++;
}

This got TLE immediately, and once you look at the constraints it is obvious why. Both nums.length and queries.length can be up to 10^5. For every query you are scanning every element, giving you O(n × q) time — which in the worst case is 10 billion operations. No judge is going to wait for that.

Rethinking the Approach — Where Is the Waste?

After the TLE, the question I asked myself was: what work is being repeated unnecessarily? The answer was obvious in hindsight. Every time a query asks about a value like 3, the brute force scans the entire array again looking for every index that holds 3. If ten different queries all ask about value 3, you are doing that scan ten times. You are finding the same indices over and over.

The fix is to do that work exactly once, before any query is processed. You precompute a map from each value to all the indices where it appears. Then for every query you simply look up the relevant list and work within it. That observation reduces the precomputation to O(n) — one pass through the array. The question then becomes: once you have that sorted list of indices for a given value, how do you find the closest one to your query index efficiently?

The Key Insight — You Only Need Two Neighbours

Here is the insight that makes this problem elegant. The index list for any value is sorted in ascending order because you build it by iterating left to right through the array.
If your query index sits at position mid inside that sorted list, then by definition every index to the left of mid - 1 is farther away than arr[mid - 1], and every index to the right of mid + 1 is farther away than arr[mid + 1].This means you never need to compare against all duplicates. You only ever need to check the immediate left and right neighbours of your query index within the sorted list.The one subtlety is the circular wrap. Because the array itself is circular, the left neighbour of the very first element in the list is actually the last element in the list, since you can wrap around the ring. This is handled cleanly with modular arithmetic: (mid - 1 + n) % n for the left neighbour and (mid + 1) % n for the right.The Optimized Solution — HashMap + Binary SearchStep 1 — Precompute the index mapIterate through nums once and build a HashMap mapping each value to a list of all indices where it appears. The lists are sorted by construction since you insert indices in order.Step 2 — Binary search to locate the query indexFor a given query at index q, look up the index list for nums[q]. Binary search the list to find the position of q within it. This runs in O(log n) rather than O(n).Step 3 — Check immediate neighbours and compute circular distancesOnce you have the position mid, fetch arr[(mid + 1) % n] and arr[(mid - 1 + n) % n]. For each, compute the circular distance using min(|diff|, totalLength - |diff|). Return the smaller of the two.Full Annotated Java Solutionclass Solution { public List<Integer> solveQueries(int[] nums, int[] queries) { int c = 0; // Precompute: map each value to the sorted list of indices where it appears. // Since we iterate left to right, the list is sorted by construction. 
HashMap<Integer, List<Integer>> mp = new HashMap<>(); for (int i = 0; i < nums.length; i++) { mp.computeIfAbsent(nums[i], k -> new ArrayList<>()).add(i); } List<Integer> lis = new ArrayList<>(); while (c != queries.length) { // Retrieve the sorted index list for the value at the queried position List<Integer> arr = mp.get(nums[queries[c]]); int n = arr.size(); int i = 0; int j = n - 1; int min = -1; while (i <= j) { int mid = i + (j - i) / 2; if (arr.get(mid) == queries[c]) { // Only one occurrence in the entire array — no duplicate exists if (n == 1) { min = -1; } else { // Circular neighbour to the right within the index list int right = arr.get((mid + 1) % n); // Circular neighbour to the left within the index list int left = arr.get((mid - 1 + n) % n); // Compute circular distance to the right neighbour int d1 = Math.abs(right - queries[c]); int distRight = Math.min(d1, nums.length - d1); // Compute circular distance to the left neighbour int d2 = Math.abs(left - queries[c]); int distLeft = Math.min(d2, nums.length - d2); // The answer is the closer of the two neighbours min = Math.min(distLeft, distRight); } break; } else if (arr.get(mid) > queries[c]) { // Query index is smaller — search the left half j = mid - 1; } else { // Query index is larger — search the right half i = mid + 1; } } lis.add(min); c++; } return lis; }}Complexity AnalysisTime Complexity: O(n log n)Building the HashMap takes O(n). For each of the q queries, binary search over the index list takes O(log n) in the worst case. Total: O(n + q log n), which simplifies to O(n log n) given the constraint that q ≤ n.Space Complexity: O(n)The HashMap stores every index exactly once across all its lists, so total space used is O(n).Compared to the brute force O(n × q), this is the difference between ~1.7 million operations and ~10 billion operations at the constraint limits.Common PitfallsMixing up the two values of n. 
Inside the solution, n refers to arr.size() — the number of occurrences of a particular value. But when computing circular distance, you need nums.length — the full array length. These are different numbers and swapping them silently produces wrong answers.Forgetting the + n in the left neighbour formula. Writing (mid - 1) % n when mid is 0 produces -1 in Java, since Java's modulo preserves the sign of the dividend. Always write (mid - 1 + n) % n.Not handling the single-occurrence case. If a value appears only once, n == 1, and the neighbour formula wraps around to the element itself, giving a distance of zero — which is completely wrong. Guard against this explicitly before running the neighbour logic.What This Problem Teaches YouThe journey from brute force to optimal here follows a pattern worth internalizing.The brute force was correct but repeated work. Recognizing that repeated work and lifting it into a precomputation step is the single move that makes this problem tractable. The HashMap does that.Once you have a sorted structure, binary search is almost always the right tool to find a position within it. And once you have a position in a sorted structure, you only ever need to look at adjacent elements to find the nearest one — checking anything further is redundant by definition.These are not tricks specific to this problem. They are transferable patterns that appear across dozens of medium and hard problems on the platform. Internalizing them — rather than memorizing solutions — is what actually builds problem-solving ability over time.
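The sign behavior behind the second pitfall is easy to verify in isolation. A minimal demonstration of Java's remainder operator (class name illustrative):

```java
// Java's % preserves the sign of the dividend, so (mid - 1) % n yields -1
// when mid is 0 — an invalid list index. Adding n first fixes the wrap.
public class ModuloDemo {
    public static void main(String[] args) {
        int n = 5;   // pretend the index list has 5 entries
        int mid = 0; // query index found at the first position

        System.out.println((mid - 1) % n);     // -1 — would crash arr.get(...)
        System.out.println((mid - 1 + n) % n); //  4 — correctly wraps to the last slot
        System.out.println((mid + 1) % n);     //  1 — right neighbour needs no adjustment
    }
}
```

Java also provides Math.floorMod(mid - 1, n), which returns the non-negative result directly and is an equivalent alternative to the + n idiom.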

Arrays · HashMap · Binary Search · Circular Arrays · Medium · LeetCode · Java
What Is Dynamic Programming? Origin Story, Real-Life Uses, LeetCode Problems & Complete Beginner Guide


Introduction — Why Dynamic Programming Feels Hard (And Why It Isn't)

If you've ever stared at a LeetCode problem, read the solution, understood every single line, and still had absolutely no idea how someone arrived at it — welcome. You've just experienced the classic Dynamic Programming (DP) confusion.

DP has a reputation. People treat it like some dark art reserved for competitive programmers or Google engineers. The truth? Dynamic Programming is one of the most logical, learnable, and satisfying techniques in all of computer science. Once it clicks, it really clicks.

This guide will take you from zero to genuinely confident. We'll cover where DP came from, how it works, what patterns to learn, how to recognize DP problems, real-world places it shows up, LeetCode problems to practice, time complexity analysis, and the mistakes that trip up even experienced developers. Let's go.

The Origin Story — Who Invented Dynamic Programming and Why?

The term "Dynamic Programming" was coined by Richard Bellman in the early 1950s while working at RAND Corporation. Here's the funny part: the name was deliberately chosen to sound impressive and vague.

Bellman was doing mathematical research whose funding ultimately depended on the US Secretary of Defense, Charles Wilson — and Wilson would have found that work difficult to fund if it were described accurately, since he had a well-known distaste for the word "research." So Bellman invented a name that sounded suitably grand and mathematical: Dynamic Programming.

In his autobiography, Bellman wrote that he picked the word "dynamic" because it had a precise technical meaning and was also impossible to use negatively. "Programming" referred to the mathematical sense — planning and decision-making — not computer programming.

The underlying idea?
Break a complex problem into overlapping subproblems, solve each subproblem once, and store the result so you never solve it twice.

Bellman's foundational contribution was the Bellman Equation, which underpins not just algorithms but also economics, operations research, and modern reinforcement learning. So the next time DP feels frustrating, remember — even its inventor named it specifically to confuse people. You're in good company.

What Is Dynamic Programming? (Simple Definition)

Dynamic Programming is an algorithmic technique used to solve problems by:
Breaking them down into smaller overlapping subproblems
Solving each subproblem only once
Storing the result (memoization or tabulation)
Building up the final solution from those stored results

The key insight is overlapping subproblems + optimal substructure.

Overlapping subproblems means the same smaller problems come up again and again. Instead of solving them every time (like plain recursion does), DP solves them once and caches the answer.

Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems.

If a problem has both these properties — it's a DP problem.

The Two Approaches to Dynamic Programming

1. Top-Down with Memoization (Recursive + Cache)

You write a recursive solution exactly as you would naturally, but add a cache (usually a dictionary or array) to store results you've already computed.

fib(n):
    if n in cache: return cache[n]
    if n <= 1: return n
    cache[n] = fib(n-1) + fib(n-2)
    return cache[n]

This is called memoization — remember what you computed so you don't repeat yourself.

Pros: Natural to write, mirrors the recursive thinking, easy to reason about.
Cons: Stack overhead from recursion, risk of stack overflow on large inputs.

2.
Bottom-Up with Tabulation (Iterative)

You figure out the order in which subproblems need to be solved, then solve them iteratively from the smallest up, filling a table.

fib(n):
    dp = [0, 1]
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

This is called tabulation — fill a table, cell by cell, bottom to top.

Pros: No recursion overhead, usually faster in practice, easier to optimize space.
Cons: Requires thinking about the order of computation upfront.

🧩 Dynamic Programming Template Code

Before diving into how to recognize DP problems, here are ready-to-use Java templates for every major DP pattern. Think of these as your reusable blueprints — every DP problem you ever solve will fit into one of these structures. Just define your state, plug in your recurrence relation, and you are good to go.

Template 1 — Top-Down (Memoization)

import java.util.HashMap;
import java.util.Map;

public class TopDownDP {
    Map<Integer, Integer> memo = new HashMap<>();

    public int solve(int n) {
        // Base case
        if (n <= 1) return n;
        // Check cache
        if (memo.containsKey(n)) return memo.get(n);
        // Recurrence relation — change this part for your problem
        int result = solve(n - 1) + solve(n - 2);
        // Store in cache
        memo.put(n, result);
        return result;
    }
}

Template 2 — Bottom-Up (Tabulation)

public class BottomUpDP {
    public int solve(int n) {
        if (n <= 1) return n; // guard so dp[1] below is always in bounds
        // Create DP table
        int[] dp = new int[n + 1];
        // Base cases
        dp[0] = 0;
        dp[1] = 1;
        // Fill the table bottom-up
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}

Template 3 — Bottom-Up with Space Optimization

public class SpaceOptimizedDP {
    public int solve(int n) {
        if (n <= 1) return n; // same guard as Template 2
        // Only keep the last two values instead of the full table
        int prev2 = 0;
        int prev1 = 1;
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            int curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }
}

Template 4 — 2D DP (Two Sequences or Grid)

public class TwoDimensionalDP {
    public int solve(String s1, String s2) {
        int m = s1.length();
        int n = s2.length();
        // Create 2D DP table
        int[][] dp = new int[m + 1][n + 1];
        // Base cases — first row and column
        for (int i = 0; i <= m; i++) dp[i][0] = i;
        for (int j = 0; j <= n; j++) dp[0][j] = j;
        // Fill the table cell by cell
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                // Recurrence relation — change this part for your problem
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1];
                } else {
                    dp[i][j] = 1 + Math.min(dp[i - 1][j],
                            Math.min(dp[i][j - 1], dp[i - 1][j - 1]));
                }
            }
        }
        return dp[m][n];
    }
}

Template 5 — Knapsack Pattern

public class KnapsackDP {
    public int solve(int[] weights, int[] values, int capacity) {
        int n = weights.length;
        // dp[i][w] = max value using the first i items with capacity w
        int[][] dp = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= capacity; w++) {
                // Don't take item i
                dp[i][w] = dp[i - 1][w];
                // Take item i if it fits
                if (weights[i - 1] <= w) {
                    dp[i][w] = Math.max(dp[i][w],
                            values[i - 1] + dp[i - 1][w - weights[i - 1]]);
                }
            }
        }
        return dp[n][capacity];
    }
}

💡 How to use these templates:
Step 1 — Identify which pattern your problem fits into.
Step 2 — Define what dp[i] or dp[i][j] means in plain English before writing any code.
Step 3 — Write your recurrence relation on paper first.
Step 4 — Plug it into the matching template above.
Step 5 — Handle your specific base cases carefully.

🎥 Visual Learning Resource — Watch This Before Moving Forward

If you prefer learning by watching before reading, the free full-length Dynamic Programming course by freeCodeCamp is one of the best DP resources on the internet. Watch it alongside this guide for maximum understanding. Credit: freeCodeCamp — a free, nonprofit coding education platform.

How to Recognize a Dynamic Programming Problem

Ask yourself these four questions:

1. Can I define the problem in terms of smaller versions of itself? If you can write a recursive formula (recurrence relation), DP might apply.

2. Do the subproblems overlap?
If a naive recursive solution would recompute the same thing many times, DP is the right tool.3. Is there an optimal substructure? Is the best answer to the big problem made up of best answers to smaller problems?4. Are you looking for a count, minimum, maximum, or yes/no answer? DP problems often ask: "What is the minimum cost?", "How many ways?", "Can we achieve X?"Red flag words in problem statements: minimum, maximum, shortest, longest, count the number of ways, can we reach, is it possible, fewest steps.The Core DP Patterns You Must LearnMastering DP is really about recognizing patterns. Here are the most important ones:Pattern 1 — 1D DP (Linear) Problems where the state depends on previous elements in a single sequence. Examples: Fibonacci, Climbing Stairs, House Robber.Pattern 2 — 2D DP (Grid / Two-sequence) Problems with two dimensions of state, often grids or two strings. Examples: Longest Common Subsequence, Edit Distance, Unique Paths.Pattern 3 — Interval DP You consider all possible intervals or subarrays and build solutions from them. Examples: Matrix Chain Multiplication, Burst Balloons, Palindrome Partitioning.Pattern 4 — Knapsack DP (0/1 and Unbounded) You decide whether to include or exclude items under a capacity constraint. Examples: 0/1 Knapsack, Coin Change, Partition Equal Subset Sum.Pattern 5 — DP on Trees State is defined per node; you combine results from children. Examples: Diameter of Binary Tree, House Robber III, Maximum Path Sum.Pattern 6 — DP on Subsets / Bitmask DP State includes a bitmask representing which elements have been chosen. Examples: Travelling Salesman Problem, Shortest Superstring.Pattern 7 — DP on Strings Matching, editing, or counting arrangements within strings. Examples: Longest Palindromic Subsequence, Regular Expression Matching, Wildcard Matching.Top LeetCode Problems to Practice Dynamic Programming (With Links)Here are the essential problems, organized by difficulty and pattern. 
Solve them in this order.

Beginner — Warm Up
Problem | Pattern | Link
Climbing Stairs | 1D DP | https://leetcode.com/problems/climbing-stairs/
Fibonacci Number | 1D DP | https://leetcode.com/problems/fibonacci-number/
House Robber | 1D DP | https://leetcode.com/problems/house-robber/
Min Cost Climbing Stairs | 1D DP | https://leetcode.com/problems/min-cost-climbing-stairs/
Best Time to Buy and Sell Stock | 1D DP | https://leetcode.com/problems/best-time-to-buy-and-sell-stock/

Intermediate — Core Patterns
Problem | Pattern | Link
Coin Change | Knapsack | https://leetcode.com/problems/coin-change/
Longest Increasing Subsequence | 1D DP | https://leetcode.com/problems/longest-increasing-subsequence/
Longest Common Subsequence | 2D DP | https://leetcode.com/problems/longest-common-subsequence/
0/1 Knapsack (via Subset Sum) | Knapsack | https://leetcode.com/problems/partition-equal-subset-sum/
Unique Paths | 2D Grid DP | https://leetcode.com/problems/unique-paths/
Jump Game | 1D DP / Greedy | https://leetcode.com/problems/jump-game/
Word Break | String DP | https://leetcode.com/problems/word-break/
Decode Ways | 1D DP | https://leetcode.com/problems/decode-ways/
Edit Distance | 2D String DP | https://leetcode.com/problems/edit-distance/
Triangle | 2D DP | https://leetcode.com/problems/triangle/

Advanced — Interview Level
Problem | Pattern | Link
Burst Balloons | Interval DP | https://leetcode.com/problems/burst-balloons/
Regular Expression Matching | String DP | https://leetcode.com/problems/regular-expression-matching/
Wildcard Matching | String DP | https://leetcode.com/problems/wildcard-matching/
Palindrome Partitioning II | Interval DP | https://leetcode.com/problems/palindrome-partitioning-ii/
Maximum Profit in Job Scheduling | DP + Binary Search | https://leetcode.com/problems/maximum-profit-in-job-scheduling/
Distinct Subsequences | 2D DP | https://leetcode.com/problems/distinct-subsequences/
Cherry Pickup | 3D DP | https://leetcode.com/problems/cherry-pickup/

Real-World Use Cases of Dynamic Programming

DP is not just for coding interviews. It is deeply embedded in the technology you use every day.

1.
Google Maps & Navigation (Shortest Path): The routing engines behind GPS apps use DP-based algorithms like Dijkstra and Bellman-Ford to find the shortest or fastest path between two points across millions of nodes.

2. Spell Checkers & Autocorrect (Edit Distance): When your phone corrects "teh" to "the," it is computing Edit Distance — a classic DP problem — between what you typed and every word in the dictionary.

3. DNA Sequence Alignment (Bioinformatics): Researchers use the Needleman-Wunsch and Smith-Waterman algorithms — both DP — to align DNA and protein sequences and find similarities between species or identify mutations.

4. Video Compression (MPEG, H.264): Modern video codecs use DP to determine the most efficient way to encode video frames, deciding which frames to store as full images and which to store as differences from the previous frame.

5. Financial Portfolio Optimization: Investment algorithms use DP to find the optimal allocation of assets under risk constraints — essentially a variant of the knapsack problem.

6. Natural Language Processing (NLP): The Viterbi algorithm — used in speech recognition, part-of-speech tagging, and machine translation — is a DP algorithm. Every time Siri or Google Assistant understands your sentence, DP played a role.

7. Game AI (Chess, Checkers): Game trees and minimax algorithms with memoization use DP to evaluate board positions and find the best move without recomputing already-seen positions.

8. Compiler Optimization: Compilers use DP to decide the optimal order of operations and instruction scheduling to generate the most efficient machine code.

9. Text Justification (Word Processors): Microsoft Word and LaTeX use DP to optimally break paragraphs into lines — minimizing raggedness and maximizing visual appeal.

10.
Resource Scheduling in Cloud Computing: AWS, Google Cloud, and Azure use DP-based scheduling to assign computational tasks to servers in the most cost-efficient way possible.

Time Complexity Analysis of Common DP Problems

Understanding the time complexity of DP is critical for interviews and for building scalable systems.

- Fibonacci (naive recursion): O(2ⁿ) time, O(n) space. Exponential — terrible.
- Fibonacci (DP): O(n) time, O(1) space with optimization. Linear — excellent.
- Longest Common Subsequence: O(m × n) time, O(m × n) space. m, n = lengths of the two strings.
- Edit Distance: O(m × n) time, O(m × n) space. Can optimize space to O(n).
- 0/1 Knapsack: O(n × W) time, O(n × W) space. n = items, W = capacity.
- Coin Change: O(n × amount) time, O(amount) space. Classic tabulation.
- Longest Increasing Subsequence: O(n²) or O(n log n) time, O(n) space. The binary search version is faster.
- Matrix Chain Multiplication: O(n³) time, O(n²) space. Interval DP.
- Travelling Salesman (bitmask): O(2ⁿ × n²) time, O(2ⁿ × n) space. Still exponential but manageable for small n.

The general rule: DP trades time for space. You use memory to avoid recomputation. The time complexity equals the number of unique states multiplied by the work done per state.

How to Learn and Master Dynamic Programming — Step by Step

Here is an honest, structured path to mastery:

Step 1 — Get recursion absolutely solid first. DP is memoized recursion at its core. If you cannot write clean recursive solutions confidently, DP will remain confusing. Practice at least 20 pure recursion problems first.
Step 2 — Start with the classics. Fibonacci → Climbing Stairs → House Robber → Coin Change. These teach you the core pattern of defining state and transition without overwhelming you.
Step 3 — Learn to define state explicitly. Before writing any code, ask: "What does dp[i] represent?" Write it in plain English. "dp[i] = the minimum cost to reach step i." This single habit separates good DP thinkers from struggling ones.
Step 4 — Write the recurrence relation before coding. On paper or in a comment. Example: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).
If you can write the recurrence, the code writes itself.
Step 5 — Master one pattern at a time. Don't jump between knapsack and interval DP in the same week. Spend a few days on each pattern until it feels intuitive.
Step 6 — Solve the same problem both ways. Top-down and bottom-up. This builds deep understanding of what DP is actually doing.
Step 7 — Optimize space after getting correctness. Many 2D DP solutions can use a single row instead of a full matrix. Learn this optimization after you understand the full solution.
Step 8 — Do timed practice under interview conditions. Give yourself 35 minutes per problem. Review what you got wrong. DP is a muscle — it builds with reps.

Common Mistakes in Dynamic Programming (And How to Avoid Them)

Mistake 1 — Jumping to code before defining state. The most common DP error. Always define what dp[i] or dp[i][j] means before writing a single line of code.
Mistake 2 — Wrong base cases. A single wrong base case corrupts every answer built on top of it. Trace through your base cases manually on a tiny example before running code.
Mistake 3 — Off-by-one errors in indexing. Whether your dp array is 0-indexed or 1-indexed must be 100% consistent throughout. This causes more bugs in DP than almost anything else.
Mistake 4 — Confusing top-down with bottom-up state order. In bottom-up DP, you must ensure that when you compute dp[i], all values it depends on are already filled. If you compute in the wrong order, you get garbage answers.
Mistake 5 — Memoizing in the wrong dimension. In 2D problems, some people cache only one dimension when the state actually requires two. Always identify all variables that affect the outcome.
Mistake 6 — Using global mutable state in recursion. If you use a shared array and don't clear it between test cases, you'll get wrong answers on subsequent inputs. Always scope your cache correctly.
Mistake 7 — Not considering the full state space.
In problems like Knapsack, forgetting that the state is (item index, remaining capacity) — not just item index — leads to fundamentally wrong solutions.
Mistake 8 — Giving up after not recognizing the pattern immediately. DP problems don't announce themselves. The skill is learning to ask "are there overlapping subproblems here?" on every problem. This takes time. Don't mistake unfamiliarity for inability.

Frequently Asked Questions About Dynamic Programming

Q: Is Dynamic Programming the same as recursion?
Not exactly. Recursion is a technique for breaking problems into smaller pieces. DP is recursion plus memoization — or iterative tabulation. All DP can be written recursively, but not all recursion is DP.

Q: What is the difference between DP and Divide and Conquer?
Divide and Conquer (like Merge Sort) breaks problems into non-overlapping subproblems. DP is used when subproblems overlap — meaning the same subproblem is solved multiple times in a naive approach.

Q: How do I know when NOT to use DP?
If the subproblems don't overlap (no repeated computation), greedy or divide-and-conquer may be better. If the problem has no optimal substructure, DP won't give a correct answer.

Q: Do I need to memorize DP solutions for interviews?
No. You need to recognize patterns and be able to derive the recurrence relation. Memorizing solutions without understanding them will fail you in interviews. Focus on the thinking process.

Q: How long does it take to get good at DP?
Most people start to feel genuinely comfortable after solving 40–60 varied DP problems with deliberate practice. The first 10 feel impossible. The next 20 feel hard. After 50, patterns start feeling obvious.

Q: What programming language is best for DP?
Any language works. Python is often used for learning because its dictionaries make memoization trivial. C++ is preferred in competitive programming for its speed. For interviews, use whatever language you're most comfortable in.

Q: What is space optimization in DP?
Many DP problems only look back one or two rows to compute the current row. In those cases, you can replace an n×m table with just two arrays (or even one), reducing space complexity from O(n×m) to O(m). This is called space optimization or the rolling array technique.

Q: Can DP be applied to graph problems?
Absolutely. Shortest path algorithms like Bellman-Ford are DP. Longest path in a DAG is DP. DP on trees is a rich subfield. Anywhere you have states and transitions, DP can potentially apply.

Q: Is Greedy a type of Dynamic Programming?
Greedy is related but distinct. Greedy makes locally optimal choices without reconsidering. DP considers all choices and picks the globally optimal one. Some DP solutions reduce to greedy when the structure allows, but they are different techniques.

Q: What resources should I use to learn DP?
For structured learning: Neetcode.io (organized problem list), Striver's DP Series on YouTube, and the book "Introduction to Algorithms" (CLRS) for theoretical depth. For practice: LeetCode's Dynamic Programming study plan and Codeforces for competitive DP.

Final Thoughts — Dynamic Programming Is a Superpower

Dynamic Programming is genuinely one of the most powerful ideas in computer science. It shows up in your GPS, your autocorrect, your streaming video, your bank's risk models, and the AI assistants you talk to daily.

The path to mastering it is not memorization. It is developing the habit of asking: can I break this into smaller problems that overlap? And then learning to define state clearly, write the recurrence, and trust the process.

Start with Climbing Stairs. Write dp[i] in plain English before every problem. Solve everything twice — top-down and bottom-up. Do 50 problems with genuine reflection, not just accepted solutions.

The click moment will come. And when it does, you'll wonder why it ever felt hard.
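The habits above (define dp[i] in plain English, write the recurrence, solve both top-down and bottom-up) can be made concrete with the Climbing Stairs recurrence dp[i] = dp[i-1] + dp[i-2]. Here is a minimal, runnable sketch of both directions; the class and method names are my own, chosen for illustration:

```java
public class ClimbingStairs {
    // Top-down: memoized recursion. memo[i] caches the number of ways to reach step i.
    static long topDown(int n, long[] memo) {
        if (n <= 2) return n;                 // base cases: 1 way to step 1, 2 ways to step 2
        if (memo[n] != 0) return memo[n];     // already computed: reuse it
        memo[n] = topDown(n - 1, memo) + topDown(n - 2, memo);
        return memo[n];
    }

    // Bottom-up with O(1) space: only the last two states are ever needed.
    static long bottomUp(int n) {
        if (n <= 2) return n;
        long prev2 = 1, prev1 = 2;            // dp[1], dp[2]
        for (int i = 3; i <= n; i++) {
            long cur = prev1 + prev2;         // dp[i] = dp[i-1] + dp[i-2]
            prev2 = prev1;
            prev1 = cur;
        }
        return prev1;
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(topDown(n, new long[n + 1]) + " " + bottomUp(n)); // prints: 89 89
    }
}
```

Note how the bottom-up version also applies the rolling-array space optimization mentioned above: it keeps only the last two states instead of a full dp array.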

Dynamic Programming · Memoization · Tabulation · Java · Origin Story · Richard Bellman
Maximum Subarray


LeetCode Problem 53
Link of the Problem to try: Link

Given an integer array nums, find the subarray with the largest sum, and return its sum.

Example 1:
Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
Output: 6
Explanation: The subarray [4,-1,2,1] has the largest sum 6.

Example 2:
Input: nums = [1]
Output: 1
Explanation: The subarray [1] has the largest sum 1.

Example 3:
Input: nums = [5,4,-1,7,8]
Output: 23
Explanation: The subarray [5,4,-1,7,8] has the largest sum 23.

Constraints:
1 <= nums.length <= 10⁵
-10⁴ <= nums[i] <= 10⁴

Follow up: If you have figured out the O(n) solution, try coding another solution using the divide and conquer approach, which is more subtle.

My Approach: I first solved this problem with nested loops that generate every subarray, tracking the best sum seen so far. Here is the implementation:

int ans = Integer.MIN_VALUE; // must start below any possible sum, since all elements may be negative
for (int i = 0; i < nums.length; i++) {
    int sum = 0;
    for (int j = i; j < nums.length; j++) {
        sum += nums[j];
        ans = Math.max(sum, ans);
    }
}
return ans;

Although this solution passed local tests, it received a 'Time Limit Exceeded' (TLE) verdict on LeetCode: the nested loops run in O(n²) time, which is too slow for arrays of up to 10⁵ elements. The logic is sound, but a faster algorithm is required.

Optimized Algorithm (Kadane's Algorithm)
This algorithm solves the problem in a single pass with a time complexity of O(n); it still has to visit all n elements of the input array once, so it is linear rather than constant time. The core principle is that any negative prefix sum acts as a 'debt' that subsequent positive numbers must overcome. The strategy is to always pursue the maximum sum.
If the running sum becomes negative at any point, it is reset to zero, effectively 'discarding' the previous negative sequence, and the search for a new maximum sum begins with the next element.

Here is a video for a better understanding.

My Steps to Solve This Question
To solve this problem efficiently in O(n) time, I implemented the following steps:
Initialization: Define two integer variables: currentSum (initialized to 0) and maxSum (initialized to Integer.MIN_VALUE). currentSum tracks the running total, while maxSum stores the overall maximum subarray sum found so far.
Iteration: Traverse the array using a single for loop.
Update Running Sum: At each element, add its value to currentSum.
Update Maximum: Compare currentSum with maxSum. If currentSum is greater, update maxSum with this new value.
Reset Negative Sums: If currentSum drops below zero, reset it to 0. This effectively "discards" a negative prefix that would otherwise reduce the sum of subsequent subarrays.
Return Result: After the loop completes, return maxSum as the final result.

Here is the Code:

int sum = 0;
int max = Integer.MIN_VALUE;
for (int i = 0; i < nums.length; i++) {
    sum += nums[i];
    max = Math.max(sum, max);
    if (sum < 0) {
        sum = 0;
    }
}
return max;

(Note: the same solution can also be written another way.)

Here is another way to write it:

int sum = 0;
int max = Integer.MIN_VALUE;
for (int i = 0; i < nums.length; i++) {
    // Instead of resetting after the fact, extend the previous sum only if it
    // helps; otherwise restart the subarray at nums[i].
    sum = Math.max(sum + nums[i], nums[i]);
    max = Math.max(sum, max);
    // No reset needed: sum can never be worse than starting fresh at nums[i].
}
return max;
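For convenience, here is the same Kadane logic assembled into a self-contained class that compiles and runs on its own; the class name and the main harness are my additions, not part of the LeetCode submission:

```java
public class Kadane {
    // Returns the largest subarray sum using Kadane's algorithm (O(n) time, O(1) space).
    static int maxSubArray(int[] nums) {
        int sum = 0;
        int max = Integer.MIN_VALUE;
        for (int n : nums) {
            sum += n;                    // extend the running sum with the current element
            max = Math.max(max, sum);    // record the best sum seen so far
            if (sum < 0) sum = 0;        // a negative prefix can only hurt: discard it
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(maxSubArray(new int[]{-2, 1, -3, 4, -1, 2, 1, -5, 4})); // prints 6
        System.out.println(maxSubArray(new int[]{-3, -1, -2}));                    // prints -1 (all negative)
    }
}
```

The all-negative case is exactly why max starts at Integer.MIN_VALUE rather than 0: the answer can itself be negative.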

LeetCode · Array · Medium
Merge Sort Algorithm Explained | Java Implementation, Intuition & Complexity


IntroductionSorting is one of the most fundamental operations in computer science, and Merge Sort is among the most efficient and widely used sorting algorithms.It follows the Divide and Conquer approach, making it highly scalable and predictable even for large datasets.In this article, we will cover:Intuition behind Merge SortStep-by-step breakdownMultiple approachesJava implementation with commentsTime & space complexity analysis🔗 Problem LinkGeeksforGeeks: Merge SortProblem StatementGiven an array arr[] with starting index l and ending index r, sort the array using the Merge Sort algorithm.ExamplesExample 1Input:arr = [4, 1, 3, 9, 7]Output:[1, 3, 4, 7, 9]Example 2Input:arr = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]Output:[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]Key InsightMerge Sort works by:Divide → Conquer → CombineDivide the array into two halvesRecursively sort each halfMerge both sorted halvesIntuition (Visual Understanding)For:[4, 1, 3, 9, 7]Step 1: Divide[4, 1, 3] [9, 7][4, 1] [3] [9] [7][4] [1]Step 2: Merge[4] [1] → [1, 4][1, 4] [3] → [1, 3, 4][9] [7] → [7, 9]Step 3: Final Merge[1, 3, 4] + [7, 9] → [1, 3, 4, 7, 9]Approach 1: Recursive Merge Sort (Top-Down)IdeaKeep dividing until single elements remainMerge sorted subarraysJava Codeclass Solution { // Function to merge two sorted halves void merge(int[] arr, int l, int mid, int h) { // Temporary array to store merged result int[] temp = new int[h - l + 1]; int i = l; // pointer for left half int j = mid + 1; // pointer for right half int k = 0; // pointer for temp array // Compare elements from both halves while (i <= mid && j <= h) { if (arr[i] <= arr[j]) { temp[k] = arr[i]; i++; } else { temp[k] = arr[j]; j++; } k++; } // Copy remaining elements from left half while (i <= mid) { temp[k] = arr[i]; i++; k++; } // Copy remaining elements from right half while (j <= h) { temp[k] = arr[j]; j++; k++; } // Copy sorted elements back to original array for (int m = 0; m < temp.length; m++) { arr[l + m] = temp[m]; } } // Recursive 
merge sort function void mergeSort(int arr[], int l, int h) { // Base case: single element if (l >= h) return; int mid = l + (h - l) / 2; // Sort left half mergeSort(arr, l, mid); // Sort right half mergeSort(arr, mid + 1, h); // Merge both halves merge(arr, l, mid, h); }}Approach 2: Iterative Merge Sort (Bottom-Up)IdeaStart with subarrays of size 1Merge pairsIncrease size graduallyCodeclass Solution { void merge(int[] arr, int l, int mid, int h) { int[] temp = new int[h - l + 1]; int i = l, j = mid + 1, k = 0; while (i <= mid && j <= h) { if (arr[i] <= arr[j]) temp[k++] = arr[i++]; else temp[k++] = arr[j++]; } while (i <= mid) temp[k++] = arr[i++]; while (j <= h) temp[k++] = arr[j++]; for (int m = 0; m < temp.length; m++) { arr[l + m] = temp[m]; } } void mergeSort(int[] arr, int n) { for (int size = 1; size < n; size *= 2) { for (int l = 0; l < n - size; l += 2 * size) { int mid = l + size - 1; int h = Math.min(l + 2 * size - 1, n - 1); merge(arr, l, mid, h); } } }}Approach 3: Using Built-in Sorting (For Comparison)Arrays.sort(arr);👉 Internally uses optimized algorithms (TimSort in Java)Complexity AnalysisTime ComplexityCaseComplexityBestO(n log n)AverageO(n log n)WorstO(n log n)Space ComplexityO(n) (extra array for merging)Why Merge Sort is PowerfulStable sorting algorithmWorks efficiently on large datasetsPredictable performanceUsed in external sorting (large files)❌ Why Not Use Bubble/Selection Sort?AlgorithmTime ComplexityBubble SortO(n²)Selection SortO(n²)Merge SortO(n log n) ✅Key TakeawaysMerge Sort uses divide and conquerRecursion splits problem into smaller partsMerging is the key stepAlways O(n log n), regardless of inputWhen to Use Merge SortLarge datasetsLinked lists (very efficient)Stable sorting requiredExternal sortingConclusionMerge Sort is one of the most reliable and efficient sorting algorithms. 
Understanding its recursive structure and merging process is essential for mastering advanced algorithms.Once you grasp the divide-and-conquer pattern, it becomes easier to solve many complex problems.Frequently Asked Questions (FAQs)1. Is Merge Sort stable?Yes, it maintains the relative order of equal elements.2. Why is extra space required?Because we use a temporary array during merging.3. Can it be done in-place?Not efficiently; standard merge sort requires extra space.
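To experiment with the algorithm locally, here is the recursive (top-down) merge sort from this article condensed into a runnable class; the class name MergeSortDemo and the main harness are my additions:

```java
public class MergeSortDemo {
    // Merge two sorted halves arr[l..mid] and arr[mid+1..h] using a temp array.
    static void merge(int[] arr, int l, int mid, int h) {
        int[] temp = new int[h - l + 1];
        int i = l, j = mid + 1, k = 0;
        while (i <= mid && j <= h)
            temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++]; // <= keeps the sort stable
        while (i <= mid) temp[k++] = arr[i++];   // copy leftovers from the left half
        while (j <= h) temp[k++] = arr[j++];     // copy leftovers from the right half
        for (int m = 0; m < temp.length; m++) arr[l + m] = temp[m];
    }

    static void mergeSort(int[] arr, int l, int h) {
        if (l >= h) return;                // base case: 0 or 1 element is already sorted
        int mid = l + (h - l) / 2;         // written this way to avoid int overflow
        mergeSort(arr, l, mid);            // sort left half
        mergeSort(arr, mid + 1, h);        // sort right half
        merge(arr, l, mid, h);             // combine
    }

    public static void main(String[] args) {
        int[] arr = {4, 1, 3, 9, 7};
        mergeSort(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr)); // [1, 3, 4, 7, 9]
    }
}
```

Running it on the article's first example reproduces the sorted output from the intuition section.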

GeeksforGeeks · Medium · Sorting · Merge Sort · Java
LeetCode 2: Add Two Numbers – Step-by-Step Linked List Addition Explained Clearly


Try the ProblemYou can practice the problem here:https://leetcode.com/problems/add-two-numbers/Problem Description (In Simple Words)You are given two linked lists, where:Each node contains a single digit (0–9)The digits are stored in reverse order👉 That means:[2,4,3] represents 342[5,6,4] represents 465Your task is:👉 Add these two numbers👉 Return the result as a linked list (also in reverse order)Important Points to UnderstandEach node contains only one digitNumbers are stored in reverse orderYou must return the answer as a linked listYou must handle carry just like normal additionExample WalkthroughExample 1Input:l1 = [2,4,3]l2 = [5,6,4]Interpretation:342 + 465 = 807Output:[7,0,8]Example 2Input:l1 = [0]l2 = [0]Output:[0]Example 3Input:l1 = [9,9,9,9,9,9,9]l2 = [9,9,9,9]Output:[8,9,9,9,0,0,0,1]Constraints1 <= number of nodes <= 1000 <= Node.val <= 9No leading zeros except the number 0Understanding the Core IdeaThis problem is essentially:👉 Adding two numbers digit by digitBut instead of arrays or integers, the digits are stored in linked lists.How Addition Works (Real-Life Analogy)Let’s add:342 + 465We normally add from right to left:2 + 5 = 74 + 6 = 10 → write 0, carry 13 + 4 + 1 = 8Now notice something important:👉 Linked list is already in reverse order👉 So we can directly process from left to rightThinking About the SolutionBefore coding, let’s think of possible approaches:Possible Ways to SolveConvert linked lists into numbers → add → convert back (Not safe due to overflow)Store digits in arrays and simulate additionTraverse both lists and simulate addition directly (best approach)Optimal Approach: Simulate Addition (Digit by Digit)Key IdeaWe traverse both linked lists and:Add corresponding digitsMaintain a carryCreate a new node for each digit of resultStep-by-Step LogicStep 1: InitializeCreate a dummy node (to simplify result building)Create a pointer curr to build the result listInitialize carry = 0Step 2: Traverse Both ListsContinue while:l1 != null OR l2 != 
nullAt each step:Start with carry:sum = carryAdd value from l1 (if exists)Add value from l2 (if exists)Step 3: Create New NodeDigit to store:sum % 10New carry:sum / 10Step 4: Move ForwardMove l1 and l2 if they existMove curr forwardStep 5: Handle Remaining CarryIf carry is not zero after loop:👉 Add one more nodeStep 6: Return ResultReturn:dummy.nextCode Implementation (With Explanation)class Solution { public ListNode addTwoNumbers(ListNode l1, ListNode l2) { // Dummy node to simplify result construction ListNode dumm = new ListNode(-1); ListNode curr = dumm; int sum = 0; int carr = 0; // Traverse both lists while(l1 != null || l2 != null){ // Start with carry sum = carr; // Add l1 value if exists if(l1 != null){ sum += l1.val; l1 = l1.next; } // Add l2 value if exists if(l2 != null){ sum += l2.val; l2 = l2.next; } // Create new node with digit ListNode newNode = new ListNode(sum % 10); // Update carry carr = sum / 10; // Attach node curr.next = newNode; curr = curr.next; } // If carry remains if(carr != 0){ curr.next = new ListNode(carr); } return dumm.next; }}Dry Run (Important for Understanding)Input:l1 = [2,4,3]l2 = [5,6,4]Step-by-step:Step 1:2 + 5 = 7 → node(7), carry = 0Step 2:4 + 6 = 10 → node(0), carry = 1Step 3:3 + 4 + 1 = 8 → node(8), carry = 0Final:7 → 0 → 8Time ComplexityO(max(n, m))Where:n = length of l1m = length of l2Space ComplexityO(max(n, m))For storing the result linked list.Why Dummy Node is ImportantWithout dummy node:You need to handle first node separatelyWith dummy node:Code becomes clean and consistentNo special cases requiredCommon Mistakes to Avoid❌ Forgetting to handle carry❌ Not checking if l1 or l2 is null❌ Missing last carry node❌ Incorrect pointer movementKey TakeawaysThis is a simulation problemLinked list problems become easier with dummy nodesAlways think in terms of real-world analogy (addition)ConclusionThe Add Two Numbers problem is one of the most important linked list problems for interviews.It teaches:Pointer 
manipulationCarry handlingIterative traversalClean code using dummy nodesOnce you understand this pattern, many other linked list problems become much easier to solve.👉 Tip: Whenever you see problems involving numbers in linked lists, think of digit-by-digit simulation with carry.
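LeetCode supplies the ListNode class for you; to run the solution locally you need your own definition plus a little glue code for building and printing lists. Here is one self-contained sketch of the same digit-by-digit simulation; the helper names fromDigits and toString are mine:

```java
public class AddTwoNumbersDemo {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Digit-by-digit addition with carry, exactly as described above.
    static ListNode addTwoNumbers(ListNode l1, ListNode l2) {
        ListNode dummy = new ListNode(-1);   // dummy node simplifies result construction
        ListNode curr = dummy;
        int carry = 0;
        while (l1 != null || l2 != null) {
            int sum = carry;                                   // start with the carry
            if (l1 != null) { sum += l1.val; l1 = l1.next; }   // add l1 digit if present
            if (l2 != null) { sum += l2.val; l2 = l2.next; }   // add l2 digit if present
            curr.next = new ListNode(sum % 10);                // store the digit
            carry = sum / 10;                                  // carry for the next column
            curr = curr.next;
        }
        if (carry != 0) curr.next = new ListNode(carry);       // leftover carry
        return dummy.next;
    }

    // Build a list from digits already given in reverse order.
    static ListNode fromDigits(int... digits) {
        ListNode dummy = new ListNode(-1), curr = dummy;
        for (int d : digits) { curr.next = new ListNode(d); curr = curr.next; }
        return dummy.next;
    }

    // Render the list's digits in list order, e.g. [7,0,8] -> "708".
    static String toString(ListNode head) {
        StringBuilder sb = new StringBuilder();
        for (; head != null; head = head.next) sb.append(head.val);
        return sb.toString();
    }

    public static void main(String[] args) {
        ListNode res = addTwoNumbers(fromDigits(2, 4, 3), fromDigits(5, 6, 4));
        System.out.println(toString(res)); // prints 708, i.e. the list [7,0,8]
    }
}
```

This reproduces Example 1: the lists for 342 and 465 combine into the list [7,0,8], which represents 807.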

Linked List · Math · Simulation · Carry Handling · Dummy Node · LeetCode Medium
Longest Substring with K Unique Characters – Efficient Sliding Window Solution


Introduction

Finding substrings with unique characters is a common problem in string manipulation and interview questions. GFG’s Longest Substring with K Unique Characters problem challenges you to find the length of the longest substring containing exactly K distinct characters. The focus is not just on brute-force solutions, but on using a sliding window with a HashMap to efficiently track character frequencies and window boundaries.

If you’d like to try solving the problem first, search for "Longest Substring with K Unique Characters" on GeeksforGeeks. A related sliding-window problem to practice on LeetCode: https://leetcode.com/problems/max-consecutive-ones-iii/

Problem Understanding
You are given:
- A string s of lowercase letters
- An integer k
Your task: Return the length of the longest substring containing exactly k distinct characters.

Examples:
Input: s = "aabacbebebe", k = 3
Output: 7
Explanation: "cbebebe" is the longest substring with 3 distinct characters.

Input: s = "aaaa", k = 2
Output: -1
Explanation: No substring contains exactly 2 distinct characters.

Input: s = "aabaaab", k = 2
Output: 7
Explanation: "aabaaab" has exactly 2 unique characters.

A naive approach would try all possible substrings, count unique characters, and track the max length. Time Complexity: O(n²), which is inefficient for large strings.

Key Idea: Sliding Window + HashMap
Instead of recomputing the unique characters for every substring:
- Use a HashMap to store character frequencies in the current window
- Use two pointers (i and j) to maintain the window
- Expand the window by moving j and add characters to the map
- Shrink the window from the left when the number of unique characters exceeds k
- Update the max length whenever the window has exactly k unique characters
This ensures each character is processed only once, giving a linear solution.

Approach (Step-by-Step)
Initialize a HashMap mp to store character counts.
Use two pointers i (window start) and j (window end).
Iterate over the string with j.
Add s[j] to the map and update its count.
Check the map size:
If size < k → move j forward
If size == k → update
max length, move j forward
If size > k → shrink window from i until map size <= k
After iterating, return max (or -1 if no valid substring exists).

Optimization:
Using a HashMap avoids recalculating the number of unique characters for every substring. The sliding window ensures O(n) complexity.

Implementation (Java)

class Solution {
    public int longestKSubstr(String s, int k) {
        HashMap<Character, Integer> mp = new HashMap<>();
        int i = 0, j = 0;
        int max = -1;
        while (j < s.length()) {
            // Add current character to map
            mp.put(s.charAt(j), mp.getOrDefault(s.charAt(j), 0) + 1);
            if (mp.size() < k) {
                j++;
            } else if (mp.size() == k) {
                // Update max length when exactly k unique characters
                max = Math.max(max, j - i + 1);
                j++;
            } else {
                // Shrink window until map size <= k
                while (mp.size() > k) {
                    mp.put(s.charAt(i), mp.get(s.charAt(i)) - 1);
                    if (mp.get(s.charAt(i)) == 0) {
                        mp.remove(s.charAt(i));
                    }
                    i++;
                }
                max = Math.max(max, j - i + 1);
                j++;
            }
        }
        return max;
    }
}

Dry Run Example
Input: s = "aabacbebebe", k = 3
Execution:
Window [0–4] = "aabac" → unique = 3 → max = 5
Window [0–5] = "aabacb" → unique = 3 → max = 6
At j = 6 ('e') the window holds 4 unique characters, so it shrinks from the left until i = 4
Window [4–10] = "cbebebe" → unique = 3 → max = 7
Output: 7

Complexity Analysis
Time Complexity: O(n) → each character enters and leaves the window at most once
Space Complexity: O(k) → the HashMap stores at most k unique characters (briefly k + 1 while shrinking)

Edge Cases Considered
- k greater than the number of unique characters → return -1
- Entire string has exactly k unique characters
- String with repeated characters only
- Minimum string length (1)

Sliding Window Pattern Importance
This problem reinforces a common sliding window + hashmap pattern used in:
- Longest substring problems with constraints
- Counting substrings with exactly K conditions
- Optimizing brute-force approaches to linear solutions

Conclusion
By combining a sliding window with HashMap frequency tracking, we reduce an O(n²) problem to O(n). Once you master this approach, many string and substring problems with constraints on unique characters become much easier to solve efficiently.
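As a variation, the same window logic can be written a little more compactly with a single for loop over j and HashMap.merge for the frequency updates. This is an equivalent sketch rather than the article's exact code, and the class name is mine:

```java
import java.util.HashMap;

public class KUniqueDemo {
    // Longest substring with exactly k distinct characters (sliding window + frequency map).
    static int longestKSubstr(String s, int k) {
        HashMap<Character, Integer> mp = new HashMap<>();
        int i = 0, max = -1;
        for (int j = 0; j < s.length(); j++) {
            mp.merge(s.charAt(j), 1, Integer::sum);   // add s[j] to the window
            while (mp.size() > k) {                   // too many distinct chars: shrink from the left
                char c = s.charAt(i++);
                if (mp.merge(c, -1, Integer::sum) == 0) mp.remove(c);
            }
            if (mp.size() == k) max = Math.max(max, j - i + 1);
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(longestKSubstr("aabacbebebe", 3)); // prints 7 ("cbebebe")
        System.out.println(longestKSubstr("aaaa", 2));        // prints -1 (never reaches 2 distinct)
    }
}
```

The behavior matches the article's solution on all three examples; only the loop structure differs.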

Sliding Window · HashMap · Strings · GFG
Master LeetCode 92: Reverse Linked List II | Detailed Java Solution & Explanation


If you are preparing for software engineering interviews, you already know that Linked Lists are a favourite topic among interviewers. While reversing an entire linked list is a standard beginner problem, reversing only a specific section of it requires a bit more pointer magic.In this blog post, we will tackle LeetCode 92. Reverse Linked List II. We will break down the problem in plain English, walk through a highly intuitive modular approach, and then look at an optimized one-pass technique.Let’s dive in!Understanding the ProblemQuestion Statement:Given the head of a singly linked list and two integers left and right where left <= right, reverse the nodes of the list from position left to position right, and return the reversed list.Example:Input: head = [1,2,3,4,5], left = 2, right = 4Output: [1,4,3,2,5]In Simple Words:Imagine a chain of connected boxes. You don't want to flip the whole chain backwards. You only want to flip a specific middle section (from the 2nd box to the 4th box), while keeping the first and last boxes exactly where they are.Approach 1: The Intuitive Modular Approach (Find, Reverse, Connect)When solving complex linked list problems, breaking the problem into smaller helper functions is an excellent software engineering practice.In this approach, we will:Use a Dummy Node. This is a lifesaver when left = 1 (meaning we have to reverse from the very first node).Traverse the list to find the exact boundaries: the node just before the reversal (slow), the start of the reversal (leftNode), and the end of the reversal (rightNode).Pass the sub-list to a helper function that reverses it.Reconnect the newly reversed sub-list back to the main list.Here is the Java code for this approach:/** * Definition for singly-linked list. 
* public class ListNode { * int val; * ListNode next; * ListNode() {} * ListNode(int val) { this.val = val; } * ListNode(int val, ListNode next) { this.val = val; this.next = next; } * } */class Solution { // Helper function to reverse a portion of the list public ListNode reverse(ListNode LeftHead, ListNode rightHead){ ListNode curr = LeftHead; ListNode prev = rightHead; // Standard linked list reversal logic while(curr != rightHead){ ListNode newnode = curr.next; curr.next = prev; prev = curr; curr = newnode; } return prev; } public ListNode reverseBetween(ListNode head, int left, int right) { if(left == 1 && right == 1) return head; // Dummy node helps handle edge cases easily ListNode dummy = new ListNode(-1); dummy.next = head; ListNode leftNode = null; ListNode rightNode = null; ListNode curr = head; int leftC = 1; int rightC = 1; ListNode slow = dummy; // Traverse to find the exact bounds of our sublist while(curr != null){ if(leftC == left - 1){ slow = curr; // 'slow' is the node just before the reversed section } if(leftC == left){ leftNode = curr; } if(rightC == right){ rightNode = curr; } if(leftC == left && rightC == right){ break; // We found both bounds, no need to traverse further } leftC++; rightC++; curr = curr.next; } // Reverse the sublist and connect it back ListNode rev = reverse(leftNode, rightNode.next); slow.next = rev; return dummy.next; }}🔍 Dry Run of Approach 1Let’s trace head = [1, 2, 3, 4, 5], left = 2, right = 4.Step 1: Initializationdummy = -1 -> 1 -> 2 -> 3 -> 4 -> 5slow = -1 (dummy node)Step 2: Finding Bounds (While Loop)We move curr through the list.When curr is at 1 (Position 1): left - 1 is 1, so slow becomes node 1.When curr is at 2 (Position 2): leftNode becomes node 2.When curr is at 4 (Position 4): rightNode becomes node 4. 
We break the loop.Step 3: The Helper ReversalWe call reverse(leftNode, rightNode.next), which means reverse(Node 2, Node 5).Inside the helper, we reverse the links for 2, 3, and 4.Because we initialized prev as rightHead (Node 5), Node 2's next becomes Node 5.The helper returns Node 4 as the new head of this reversed chunk. The chunk now looks like: 4 -> 3 -> 2 -> 5.Step 4: ReconnectionBack in the main function, slow (Node 1) is connected to the returned reversed list: slow.next = rev.Final List: dummy -> 1 -> 4 -> 3 -> 2 -> 5.Return dummy.next!Time & Space Complexity:Time Complexity: O(N). We traverse the list to find the pointers, and then the helper traverses the sub-list to reverse it. Since we visit nodes a maximum of two times, it is linear time.Space Complexity: O(1). We are only creating a few pointers (dummy, slow, curr, etc.), requiring no extra dynamic memory.Approach 2: The Optimized One-Pass SolutionLeetCode includes a follow-up challenge: "Could you do it in one pass?"While the first approach is incredibly readable, we can optimize it to reverse the nodes as we traverse them, eliminating the need for a separate helper function or a second sub-list traversal.In this approach, we:Move a prev pointer to the node just before left.Keep a curr pointer at left.Use a for loop to extract the node immediately after curr and move it to the front of the reversed sub-list. 
We do this exactly right - left times.class Solution { public ListNode reverseBetween(ListNode head, int left, int right) { if (head == null || left == right) return head; ListNode dummy = new ListNode(0); dummy.next = head; ListNode prev = dummy; // Step 1: Reach the node just before 'left' for (int i = 0; i < left - 1; i++) { prev = prev.next; } // Step 2: Start the reversal process ListNode curr = prev.next; for (int i = 0; i < right - left; i++) { ListNode nextNode = curr.next; // Node to be moved to the front curr.next = nextNode.next; // Detach nextNode nextNode.next = prev.next; // Point nextNode to the current front of sublist prev.next = nextNode; // Make nextNode the new front of the sublist } return dummy.next; }}🔍 Why this works (The Pointer Magic)If we are reversing [1, 2, 3, 4, 5] from 2 to 4:prev is at 1. curr is at 2.Iteration 1: Grab 3, put it after 1. List: [1, 3, 2, 4, 5].Iteration 2: Grab 4, put it after 1. List: [1, 4, 3, 2, 5].Done! We achieved the reversal in a strict single pass.Time & Space Complexity:Time Complexity: O(N). We touch each node exactly once.Space Complexity: O(1). Done entirely in-place.ConclusionReversing a portion of a linked list is a fantastic way to test your understanding of pointer manipulation.Approach 1 is amazing for interviews because it shows you can modularize code and break big problems into smaller, testable functions.Approach 2 is the perfect "flex" to show the interviewer that you understand optimization and single-pass algorithms.I highly recommend writing down the dry run on a piece of paper and drawing arrows to see how the pointers shift. That is the secret to mastering Linked Lists!Happy Coding! 💻🚀
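To test either approach end to end, you need a ListNode definition plus helpers for building and printing lists. Here is a self-contained harness around the one-pass version; the helper names build and render are mine:

```java
public class ReverseBetweenDemo {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // One-pass reversal of positions left..right (1-indexed), as described above.
    static ListNode reverseBetween(ListNode head, int left, int right) {
        if (head == null || left == right) return head;
        ListNode dummy = new ListNode(0);
        dummy.next = head;
        ListNode prev = dummy;
        for (int i = 0; i < left - 1; i++) prev = prev.next; // node just before 'left'
        ListNode curr = prev.next;
        for (int i = 0; i < right - left; i++) {
            ListNode nextNode = curr.next;   // node to move to the front of the sublist
            curr.next = nextNode.next;       // detach nextNode
            nextNode.next = prev.next;       // point it at the current sublist front
            prev.next = nextNode;            // make it the new front
        }
        return dummy.next;
    }

    static ListNode build(int... vals) {
        ListNode dummy = new ListNode(0), curr = dummy;
        for (int v : vals) { curr.next = new ListNode(v); curr = curr.next; }
        return dummy.next;
    }

    static String render(ListNode head) {
        StringBuilder sb = new StringBuilder();
        for (; head != null; head = head.next) {
            sb.append(head.val);
            if (head.next != null) sb.append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        ListNode res = reverseBetween(build(1, 2, 3, 4, 5), 2, 4);
        System.out.println(render(res)); // prints 1,4,3,2,5
    }
}
```

Drawing the pointer moves on paper while stepping through this main method is a good way to internalize the "move the next node to the front" trick.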

LeetCode · LinkedList · Java · Reverse LinkedList II
LeetCode 784 Letter Case Permutation | Recursion & Backtracking Java Solution


IntroductionThe Letter Case Permutation problem is a classic example of recursion and backtracking, often asked in coding interviews and frequently searched by learners preparing for platforms like LeetCode.This problem helps in understanding:Decision-making at each stepRecursive branchingString manipulationIn this article, we’ll break down the intuition, visualize the decision process using your decision tree, and implement an efficient Java solution.🔗 Problem LinkLeetCode: Letter Case PermutationProblem StatementGiven a string s, you can transform each alphabet character into:LowercaseUppercaseDigits remain unchanged.👉 Return all possible strings formed by these transformations.ExamplesExample 1Input:s = "a1b2"Output:["a1b2","a1B2","A1b2","A1B2"]Example 2Input:s = "3z4"Output:["3z4","3Z4"]Key InsightAt each character:If it's a digit → only one choiceIf it's a letter → two choices:lowercase OR uppercaseSo total combinations:2^(number of letters)Intuition (Using Your Decision Tree)For input: "a1b2"Start from index 0: "" / \ "a" "A" | | "a1" "A1" / \ / \ "a1b" "a1B" "A1b" "A1B" | | | | "a1b2" "a1B2" "A1b2" "A1B2"Understanding the TreeAt 'a' → branch into 'a' and 'A''1' → no branching (digit)'b' → again branching'2' → no branching📌 Leaf nodes = final answersApproach: Recursion + BacktrackingIdeaTraverse the string character by characterIf digit → move forwardIf letter → branch into:lowercaseuppercaseJava Codeimport java.util.*;class Solution { // List to store all results List<String> lis = new ArrayList<>(); public void solve(String s, int ind, String ans) { // Base case: reached end of string if (ind == s.length()) { lis.add(ans); // store generated string return; } char ch = s.charAt(ind); // If character is a digit → only one option if (ch >= '0' && ch <= '9') { solve(s, ind + 1, ans + ch); } else { // Choice 1: convert to lowercase solve(s, ind + 1, ans + Character.toLowerCase(ch)); // Choice 2: convert to uppercase solve(s, ind + 1, ans + 
Character.toUpperCase(ch)); } } public List<String> letterCasePermutation(String s) { solve(s, 0, ""); // start recursion return lis; }}Step-by-Step ExecutionFor "a1b2":Start → ""'a' → "a", "A"'1' → "a1", "A1"'b' → "a1b", "a1B", "A1b", "A1B"'2' → final stringsComplexity AnalysisTime Complexity: O(2^n)(n = number of letters)Space Complexity: O(2^n)(for storing results)Why This Approach WorksRecursion explores all possibilitiesEach letter creates a branching pointDigits pass through unchangedBacktracking ensures all combinations are generatedKey TakeawaysThis is a binary decision recursion problemLetters → 2 choicesDigits → 1 choiceDecision tree directly maps to recursionPattern similar to:SubsetsPermutations with conditionsWhen This Problem Is AskedCommon in:Coding interviewsRecursion/backtracking roundsString manipulation problemsConclusionThe Letter Case Permutation problem is a perfect example of how recursion can be used to explore all possible combinations efficiently.Once the decision tree is clear, the implementation becomes straightforward. This pattern is widely used in many advanced problems, making it essential to master.Frequently Asked Questions (FAQs)1. Why don’t digits create branches?Because they have only one valid form.2. What is the main concept used?Recursion with decision-making (backtracking).3. Can this be solved iteratively?Yes, using BFS or iterative expansion, but recursion is more intuitive.
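FAQ 3 mentions that an iterative expansion is possible. One possible sketch (an illustrative alternative, not the article's solution): keep a list of partial strings starting from "", and at each character either append it once (digit) or fork every partial string into a lowercase and an uppercase copy (letter).

```java
import java.util.ArrayList;
import java.util.List;

public class LetterCaseIterative {
    // Iterative expansion: grow the partial strings one character at a time,
    // branching into two copies at every letter.
    static List<String> letterCasePermutation(String s) {
        List<String> results = new ArrayList<>();
        results.add("");
        for (char ch : s.toCharArray()) {
            List<String> next = new ArrayList<>();
            for (String prefix : results) {
                if (Character.isLetter(ch)) {
                    next.add(prefix + Character.toLowerCase(ch)); // choice 1
                    next.add(prefix + Character.toUpperCase(ch)); // choice 2
                } else {
                    next.add(prefix + ch); // digit: single choice
                }
            }
            results = next;
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(letterCasePermutation("a1b2")); // [a1b2, a1B2, A1b2, A1B2]
    }
}
```

Each pass over the list corresponds to one level of the decision tree, so the final list holds exactly the leaf nodes.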

LeetCode · Medium · Java · Recursion
Intersection of Two Arrays


LeetCode Problem 349Link of the Problem to try -: LinkGiven two integer arrays nums1 and nums2, return an array of their intersection. Each element in the result must be unique and you may return the result in any order. Example 1:Input: nums1 = [1,2,2,1], nums2 = [2,2]Output: [2]Example 2:Input: nums1 = [4,9,5], nums2 = [9,4,9,8,4]Output: [9,4]Explanation: [4,9] is also accepted. Constraints: 1 <= nums1.length, nums2.length <= 1000; 0 <= nums1[i], nums2[i] <= 1000Solution: Using HashMap and HashSet for IntersectionCore Objective: The primary goal of this problem is to identify common elements between two arrays and return them as a unique set. To achieve this efficiently, we utilize a combination of a HashMap for quick lookup and a HashSet to handle uniqueness in the result.Logic & Implementation:Populate Lookup Table: We begin by iterating through the first array (nums1) and storing its elements in a HashMap. This allows us to verify the existence of any number in O(1) average time.Identify Intersection: Next, we iterate through the second array (nums2). For each element, we check if it exists within our HashMap.Handle Uniqueness: If a match is found, we add the element to a HashSet. This is a crucial step because the problem requires the output to contain unique elements, even if a common number appears multiple times in the input arrays.Final Conversion: Since the required return type is an array, we determine the size of our HashSet to initialize the result array.
Using a for-each loop, we transfer the values from the Set into the array and return it as the final answer.Code Implementation:public int[] intersection(int[] nums1, int[] nums2) { // HashMap to store elements from the first array for lookup HashMap<Integer, Integer> mp = new HashMap<>(); // HashSet to store unique common elements HashSet<Integer> ms = new HashSet<>(); // Populate the map with elements from nums1 for (int i = 0; i < nums1.length; i++) { if (mp.containsKey(nums1[i])) { continue; } mp.put(nums1[i], i); } // Check for common elements in nums2 for (int i = 0; i < nums2.length; i++) { if (mp.containsKey(nums2[i])) { ms.add(nums2[i]); // HashSet ensures duplicates are not added } } // Convert the HashSet to the final integer array int[] ans = new int[ms.size()]; int co = 0; for (int i : ms) { ans[co] = i; co++; } return ans;}
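One small observation on the solution above: the index stored as the HashMap's value is never read back, so a HashSet alone can serve as the lookup table. A minimal alternative sketch (class name illustrative, same logic otherwise):

```java
import java.util.HashSet;
import java.util.Set;

public class IntersectionAlt {
    // Same logic as above, but with a HashSet as the lookup table,
    // since the index stored as the map's value is never read back.
    static int[] intersection(int[] nums1, int[] nums2) {
        Set<Integer> seen = new HashSet<>();
        for (int n : nums1) seen.add(n);          // lookup table

        Set<Integer> common = new HashSet<>();
        for (int n : nums2) {
            if (seen.contains(n)) common.add(n);  // uniqueness handled by the set
        }

        int[] ans = new int[common.size()];
        int co = 0;
        for (int n : common) ans[co++] = n;
        return ans;
    }
}
```

The complexity is unchanged: O(n + m) average time, O(n) extra space.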

LeetCode · Array · Easy · HashMap
Climbing Stairs Problem (LeetCode 70) – Complete Guide with Optimized Solutions


IntroductionThe Climbing Stairs problem is one of the most commonly asked coding interview questions for beginners. It is a perfect example to understand recursion, memoization, and dynamic programming (DP).In this article, we will break down the problem step by step and explore multiple approaches—from brute force recursion to an optimized space-efficient solution.Link of Problem: LeetCode – Climbing StairsProblem StatementYou are climbing a staircase that has n steps.Each time, you can either climb 1 step or 2 steps.The goal is to calculate the total number of distinct ways to reach the top.ExampleInput:n = 2Output:2Explanation:1 + 12Input:n = 3Output:3Explanation:1 + 1 + 11 + 22 + 1Key InsightTo reach step n, there are only two possibilities:From step n-1 (taking 1 step)From step n-2 (taking 2 steps)So, the recurrence relation becomes:ways(n) = ways(n-1) + ways(n-2)This is identical to the Fibonacci sequence, making this problem a classic DP question.Approach 1: Recursive Solution (Brute Force)IdeaBreak the problem into smaller subproblems:Count ways to reach n-1Count ways to reach n-2Add both resultsCodeclass Solution { public int climbStairs(int n) { if(n == 0) return 1; if(n < 0) return 0; return climbStairs(n-1) + climbStairs(n-2); }}ComplexityTime Complexity: O(2^n)Space Complexity: O(n)DrawbackThis solution recalculates the same subproblems multiple times, leading to Time Limit Exceeded (TLE) for larger values.Approach 2: Recursion with Memoization (Top-Down DP)IdeaTo optimize recursion, store already computed results using a HashMap.Avoid repeated calculationsConvert exponential time into linear timeCodeimport java.util.HashMap;class Solution { private HashMap<Integer, Integer> memo = new HashMap<>(); public int climbStairs(int n) { if(n == 0) return 1; if(n < 0) return 0; if(memo.containsKey(n)) { return memo.get(n); } int result = climbStairs(n-1) + climbStairs(n-2); memo.put(n, result); return result; }}ComplexityTime Complexity: O(n)Space Complexity: 
O(n)Why It WorksMemoization ensures each subproblem is solved only once, making recursion efficient and practical.Approach 3: Dynamic Programming (Bottom-Up)IdeaInstead of recursion, build the solution iteratively:Use an array dp[]Store results for each stepBuild from smaller values to larger onesCodeclass Solution { public int climbStairs(int n) { if(n == 1) return n; if(n == 2) return n; if(n == 3) return n; int dp[] = new int[n+1]; dp[0] = 0; dp[1] = 1; dp[2] = 2; for(int i = 3; i <= n; i++) { dp[i] = dp[i-1] + dp[i-2]; } return dp[n]; }}ComplexityTime Complexity: O(n)Space Complexity: O(n)Approach 4: Optimal Solution (Space Optimized)IdeaWe only need the last two values instead of the whole array.Codeclass Solution { public int climbStairs(int n) { if(n <= 2) return n; int prev1 = 1; int prev2 = 2; for(int i = 3; i <= n; i++) { int current = prev1 + prev2; prev1 = prev2; prev2 = current; } return prev2; }}ComplexityTime Complexity: O(n)Space Complexity: O(1)Key TakeawaysThe problem follows a Fibonacci-like patternBrute force recursion is simple but inefficientMemoization converts recursion into an efficient solutionDynamic programming avoids recursion completelySpace optimization reduces memory usage to constant spaceWhen This Problem Is AskedThis question is frequently asked in:Coding interviews (product-based companies)Data Structures & Algorithms examsOnline coding platformsIt evaluates:Problem-solving abilityUnderstanding of recursionOptimization skillsConclusionThe Climbing Stairs problem is a foundational example for learning dynamic programming. Starting with recursion and improving it using memoization and iterative DP demonstrates how to optimize algorithms effectively.Understanding this pattern will help solve many similar problems related to sequences and decision-making.Frequently Asked Questions (FAQs)1. Is this problem related to Fibonacci?Yes, the recurrence relation is exactly the same as the Fibonacci sequence.2. 
Why does recursion fail for large inputs?Because it recalculates the same values repeatedly, leading to exponential time complexity.3. What is the best approach?The space-optimized approach is the most efficient with O(n) time and O(1) space.
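As a quick sanity check, the brute-force recursion (Approach 1) and the space-optimized loop (Approach 4) can be cross-checked against each other for small inputs. The class and method names here are illustrative; the two bodies are the article's own code.

```java
public class ClimbStairsCheck {
    // Brute-force recursion (Approach 1)
    static int recursive(int n) {
        if (n == 0) return 1;
        if (n < 0) return 0;
        return recursive(n - 1) + recursive(n - 2);
    }

    // Space-optimized iteration (Approach 4)
    static int optimized(int n) {
        if (n <= 2) return n;
        int prev1 = 1, prev2 = 2;
        for (int i = 3; i <= n; i++) {
            int current = prev1 + prev2;
            prev1 = prev2;
            prev2 = current;
        }
        return prev2;
    }

    public static void main(String[] args) {
        // Both follow the Fibonacci pattern: 1, 2, 3, 5, 8, ...
        for (int n = 1; n <= 15; n++) {
            if (recursive(n) != optimized(n)) throw new AssertionError("mismatch at n = " + n);
        }
        System.out.println(optimized(5)); // 8
    }
}
```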

Easy · Java · LeetCode · Recursion · Memoization
LeetCode 203: Remove Linked List Elements – Step-by-Step Guide for Beginners


Try the ProblemYou can practice the problem here:https://leetcode.com/problems/remove-linked-list-elements/Problem Description (In Very Simple Words)You are given:The head of a linked listAn integer value valYour task is:👉 Remove all nodes from the linked list whose value is equal to val.👉 Return the updated head of the linked list.What is a Linked List? (Quick Recap)A linked list is a chain of nodes where each node contains:value + pointer to next nodeExample:1 → 2 → 6 → 3 → 4 → 5 → 6 → nullExample WalkthroughExample 1Input:head = [1,2,6,3,4,5,6]val = 6Output:[1,2,3,4,5]ExplanationWe remove all nodes with value 6.1 → 2 → ❌6 → 3 → 4 → 5 → ❌6Final list:1 → 2 → 3 → 4 → 5Example 2Input:head = []val = 1Output:[]Example 3Input:head = [7,7,7,7]val = 7Output:[]All nodes are removed.Constraints: 0 <= number of nodes <= 10^4; 1 <= Node.val <= 50; 0 <= val <= 50Understanding the Problem DeeplyThe tricky part is:👉 Nodes can appear anywhere:At the beginningIn the middleAt the endOr all nodes👉 Linked list does NOT allow backward traversalSo we must carefully handle pointers.Thinking About the SolutionWhen solving this problem, multiple approaches may come to mind:Possible ApproachesTraverse the list and remove nodes manually.Handle head separately, then process the rest.Use a dummy node to simplify logic.Use recursion (less preferred for beginners).Approach 1: Without Dummy Node (Manual Handling)IdeaFirst, remove nodes from the start (head) if they match val.Then traverse the list using two pointers:prev → previous nodecurr → current nodeKey ChallengeHandling head separately makes code more complex.Codeclass Solution { public ListNode removeElements(ListNode head, int val) { // Step 1: Remove matching nodes from the beginning while(head != null && head.val == val){ head = head.next; } // If list becomes empty if(head == null) return null; ListNode prev = head; ListNode curr = head.next; while(curr != null){ if(curr.val == val){ // Skip the node prev.next = curr.next; } else { // Move
prev only when node is not removed prev = curr; } curr = curr.next; } return head; }}Why This WorksWe ensure prev always points to the last valid nodeWhen we find a node to delete:prev.next = curr.nextThis skips the unwanted nodeProblem With This Approach👉 Too many edge cases:Removing head nodesEmpty listAll nodes equalApproach 2: Dummy Node (Best & Clean Approach)IdeaWe create a fake node (dummy) before the head.dummy → head → rest of listThis helps us:👉 Treat head like a normal node👉 Avoid special casesVisualizationOriginal:1 → 2 → 6 → 3After dummy:-1 → 1 → 2 → 6 → 3Java Implementation (Best Approach)class Solution { public ListNode removeElements(ListNode head, int val) { // Step 1: Create dummy node ListNode dumm = new ListNode(-1, head); // Step 2: Start from dummy ListNode curr = dumm; while(curr.next != null){ // If next node needs to be removed if(curr.next.val == val){ // Skip that node curr.next = curr.next.next; } else { // Move forward only when no deletion curr = curr.next; } } // Step 3: Return new head return dumm.next; }}Step-by-Step Dry RunInput:1 → 2 → 6 → 3 → 4 → 5 → 6val = 6After dummy:-1 → 1 → 2 → 6 → 3 → 4 → 5 → 6Traversal:curr = -1 → 1 (keep)curr = 1 → 2 (keep)curr = 2 → 6 (remove)curr = 2 → 3 (keep)curr = 3 → 4 (keep)curr = 4 → 5 (keep)curr = 5 → 6 (remove)Final:1 → 2 → 3 → 4 → 5Time ComplexityO(n)We traverse the list once.Space ComplexityO(1)No extra space used.Why Dummy Node is PreferredWithout DummyWith DummyComplex edge casesClean logicSpecial handling for headNo special handlingError-proneSafe and readableApproach 3: Recursive Solution (Conceptual)IdeaWe process one node at a time and recursively solve for the rest.Codeclass Solution { public ListNode removeElements(ListNode head, int val) { if(head == null) return null; head.next = removeElements(head.next, val); if(head.val == val){ return head.next; } else { return head; } }}Time ComplexityO(n)Space ComplexityO(n)(due to recursion stack)Key TakeawaysLinked list problems are all about 
pointer manipulationAlways think about edge cases (especially head)Dummy node simplifies almost every linked list problemConclusionThe Remove Linked List Elements problem is perfect for understanding how linked lists work in real scenarios.While the manual approach works, the dummy node technique provides a much cleaner and safer solution.If you master this pattern, you will be able to solve many linked list problems easily in interviews.👉 Tip: Whenever you feel stuck in linked list problems, try adding a dummy node — it often simplifies everything!

Linked List · Iteration · Dummy Node · Pointer Manipulation · LeetCode · Easy
Left and Right Sum Differences


LeetCode 2574: Left and Right Sum Differences (Java)The Left and Right Sum Difference problem is a classic array manipulation challenge. It tests your ability to efficiently calculate prefix and suffix values—a skill essential for more advanced algorithms like "Product of Array Except Self."🔗 ResourcesProblem Link: LeetCode 2574 - Left and Right Sum Differences📝 Problem StatementYou are given a 0-indexed integer array nums. You need to return an array answer of the same length where:answer[i] = |leftSum[i] - rightSum[i]|leftSum[i] is the sum of elements to the left of index i.rightSum[i] is the sum of elements to the right of index i.💡 Approach 1: The Three-Array Method (Beginner Friendly)This approach is highly intuitive. We pre-calculate all left sums and all right sums in separate arrays before computing the final difference.The LogicPrefix Array: Fill pref[] by adding the previous element to the cumulative sum.Suffix Array: Fill suff[] by iterating backward from the end of the array.Result: Loop one last time to calculate Math.abs(pref[i] - suff[i]).Java Implementationpublic int[] leftRightDifference(int[] nums) {int n = nums.length;int[] pref = new int[n];int[] suff = new int[n];int[] ans = new int[n];// Calculate Left Sums (Prefix)for (int i = 1; i < n; i++) {pref[i] = pref[i - 1] + nums[i - 1];}// Calculate Right Sums (Suffix)for (int i = n - 2; i >= 0; i--) {suff[i] = suff[i + 1] + nums[i + 1];}// Combine resultsfor (int i = 0; i < n; i++) {ans[i] = Math.abs(pref[i] - suff[i]);}return ans;}Time Complexity: O(n)Space Complexity: O(n) (Uses extra space for pref and suff arrays).🚀 Approach 2: The Running Sum Method (Space Optimized)In technical interviews, you should aim for O(1) extra space. 
Instead of storing every suffix sum, we calculate the Total Sum first and derive the right sum using logic.The LogicWe use the mathematical property:Right Sum = Total Sum - Left Sum - Current ElementBy maintaining a single variable leftSum that updates as we iterate, we can calculate the result using only the output array.Java Implementationpublic int[] leftRightDifference(int[] nums) {int n = nums.length;int[] ans = new int[n];int totalSum = 0;int leftSum = 0;// 1. Get the total sum of all elementsfor (int num : nums) {totalSum += num;}// 2. Calculate rightSum and difference on the flyfor (int i = 0; i < n; i++) {int rightSum = totalSum - leftSum - nums[i];ans[i] = Math.abs(leftSum - rightSum);// Update leftSum for the next indexleftSum += nums[i];}return ans;}Time Complexity: O(n)Space Complexity: O(1) (Excluding the output array).📊 Summary ComparisonFeatureApproach 1Approach 2Space ComplexityO(n) (Higher)O(1) (Optimal)Logic TypeStorage-basedMathematicalUse CaseBeginnersInterviewsKey TakeawayWhile Approach 1 is easier to visualize, Approach 2 is more professional. It shows you can handle data efficiently without unnecessary memory allocation, which is critical when dealing with large-scale systems.
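Here is a short usage check of the running-sum version (the class name and sample input are illustrative; the method body is Approach 2 above). For nums = [10, 4, 8, 3], totalSum = 25.

```java
import java.util.Arrays;

public class LeftRightDemo {
    // Running-sum version (Approach 2)
    static int[] leftRightDifference(int[] nums) {
        int n = nums.length;
        int[] ans = new int[n];
        int totalSum = 0, leftSum = 0;
        for (int num : nums) totalSum += num;
        for (int i = 0; i < n; i++) {
            int rightSum = totalSum - leftSum - nums[i]; // derived, not stored
            ans[i] = Math.abs(leftSum - rightSum);
            leftSum += nums[i];
        }
        return ans;
    }

    public static void main(String[] args) {
        // e.g. at i = 2: leftSum = 14, rightSum = 25 - 14 - 8 = 3, |14 - 3| = 11
        System.out.println(Arrays.toString(leftRightDifference(new int[]{10, 4, 8, 3}))); // [15, 1, 11, 22]
    }
}
```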

LeetCode · Prefix · Suffix
LeetCode 328: Odd Even Linked List – Clean and Easy Explanation with Multiple Approaches


Try the ProblemYou can practice the problem here:https://leetcode.com/problems/odd-even-linked-list/Problem Description (In Very Simple Words)You are given the head of a linked list.Your task is:👉 Rearrange the list such that:All nodes at odd positions come firstFollowed by all nodes at even positionsImportant Clarification❗ This problem is about positions (indices), NOT values.1st node → Odd2nd node → Even3rd node → Odd4th node → EvenExample WalkthroughExample 1Input:[1,2,3,4,5]Positions:1(odd), 2(even), 3(odd), 4(even), 5(odd)Output:[1,3,5,2,4]Example 2Input:[2,1,3,5,6,4,7]Output:[2,3,6,7,1,5,4]Constraints: 0 <= number of nodes <= 10^4; -10^6 <= Node.val <= 10^6Core Idea of the ProblemWe need to:Separate nodes into two groups:Odd index nodesEven index nodesMaintain their original orderFinally:👉 Attach even list after odd listThinking About Different ApproachesWhen solving this problem, multiple approaches may come to mind:Approach 1: Create New ListsIdeaTraverse the listCreate new nodes for odd and even positionsBuild two separate listsMerge themProblem with This Approach❌ Uses extra space❌ Not optimal (violates O(1) space requirement)❌ Code becomes messy and harder to maintainThis approach is logically correct, but:👉 It is not optimal and can be simplified a lot.Approach 2: Brute Force Using Array/ListIdeaStore nodes in an arrayRearrange based on indicesRebuild linked listComplexityTime: O(n)Space: O(n) ❌ (Not allowed)Approach 3: Optimal In-Place Approach (Best Solution)This is the cleanest and most important approach.Optimal Approach: Two Pointer Technique (In-Place)IdeaInstead of creating new nodes:👉 We rearrange the existing nodesWe maintain:odd → points to odd nodeseven → points to even nodesevenHead → stores start of even listVisualizationInput:1 → 2 → 3 → 4 → 5We separate into:Odd: 1 → 3 → 5Even: 2 → 4Final:1 → 3 → 5 → 2 → 4Step-by-Step LogicStep 1: Initialize pointersodd = headeven = head.nextevenHead = head.nextStep 2: Traverse the listWhile:even != null &&
even.next != nullUpdate:odd.next = odd.next.nexteven.next = even.next.nextMove pointers forward:odd = odd.nexteven = even.nextStep 3: Mergeodd.next = evenHeadClean Java Implementation (Optimal)class Solution { public ListNode oddEvenList(ListNode head) { // Edge case if(head == null || head.next == null) return head; // Initialize pointers ListNode odd = head; ListNode even = head.next; ListNode evenHead = even; // Rearranging nodes while(even != null && even.next != null){ odd.next = odd.next.next; // Link next odd even.next = even.next.next; // Link next even odd = odd.next; even = even.next; } // Attach even list after odd list odd.next = evenHead; return head; }}Dry Run (Important)Input:1 → 2 → 3 → 4 → 5Steps:Iteration 1:odd → 1, even → 2Link:1 → 32 → 4Iteration 2:odd → 3, even → 4Link:3 → 54 → nullFinal connection:5 → 2Result:1 → 3 → 5 → 2 → 4Time ComplexityO(n)We traverse the list once.Space ComplexityO(1)No extra space used.Why This Approach is BestFeatureResultExtra Space❌ NoneClean Code✅ YesEfficient✅ O(n)Interview Friendly✅ HighlyCommon Mistakes❌ Confusing values with positions❌ Creating new nodes unnecessarily❌ Forgetting to connect even list at the end❌ Breaking the list accidentallyKey LearningThis problem teaches:In-place linked list manipulationPointer handlingList partitioningFinal ThoughtsThe Odd Even Linked List problem is a classic example of how powerful pointer manipulation can be.Even though creating new nodes might seem easier at first, the in-place approach is:👉 Faster👉 Cleaner👉 Interview optimized👉 Tip: Whenever you are asked to rearrange a linked list, always think:"Can I solve this by just changing pointers instead of creating new nodes?"That’s the key to mastering linked list problems 🚀

Linked List · Pointer Manipulation · In-Place Algorithm · Two Pointers · LeetCode Medium
Find Minimum in Rotated Sorted Array – Binary Search Explained | LeetCode Medium 153


🔗 Try This Problem FirstPlatform: LeetCodeProblem Number: 153👉 Practice here: https://leetcode.com/problems/find-minimum-in-rotated-sorted-array/🧠 Problem UnderstandingYou are given a sorted array that has been rotated between 1 and n times.Example:Original sorted array:[0,1,2,4,5,6,7]After rotation:[4,5,6,7,0,1,2]Your task: 👉 Return the minimum element from this rotated array. 👉 Time complexity must be O(log n). 👉 All elements are unique.🔍 Key ObservationIn a rotated sorted array:The array is divided into two sorted halves.The minimum element is the pivot point where rotation happened.One half will always be sorted.We can use Binary Search to find where the sorted order breaks.Example:[4,5,6,7,0,1,2] ↑ Minimum🚀 Approach 1: Brute Force (Not Allowed by Constraint)IdeaScan entire array and track minimum.int min = nums[0];for(int i = 1; i < nums.length; i++){ min = Math.min(min, nums[i]);}return min;ComplexityTime: O(n)Space: O(1)❌ Not acceptable because problem requires O(log n).🚀 Approach 2: Binary Search (Optimal – O(log n))💡 Core IdeaCompare nums[mid] with nums[right].There are only two possibilities:Case 1: nums[mid] > nums[right]Minimum is in right half.[4,5,6,7,0,1,2] (here nums[mid] = 7, nums[r] = 2)Move left pointer:l = mid + 1Case 2: nums[mid] <= nums[right]Minimum is in left half including mid.Move right pointer:r = mid✅ Optimized Codeclass Solution { public int findMin(int[] nums) { int l = 0; int r = nums.length - 1; while (l < r) { int mid = l + (r - l) / 2; if (nums[mid] > nums[r]) { l = mid + 1; } else { r = mid; } } return nums[r]; }}📊 Dry RunInput:nums = [4,5,6,7,0,1,2]
Step 1: l = 0, r = 6, mid = 3, nums[mid] = 7 → 7 > 2 → l = 4
Step 2: l = 4, r = 6, mid = 5, nums[mid] = 1 → 1 <= 2 → r = 5
Step 3: l = 4, r = 5, mid = 4, nums[mid] = 0 → 0 <= 1 → r = 4
Stop: l = 4, r = 4 → return nums[4]
✅ Output = 0🧩 Why This WorksIf middle element is greater than rightmost, rotation point is to the right.If middle element is smaller than or equal to rightmost, minimum is on the left side.We continuously shrink search space until l == r.That position holds the minimum element.⏱ Time & Space ComplexityTime Complexity: O(log n)Space Complexity: O(1)Because:We eliminate half of the array each iteration.No extra space is used.⚠️ Important Edge CasesArray size = 1[10] → 10Already sorted (no visible rotation)[11,13,15,17] → 11Rotated n times (back to sorted order)[1,2,3,4,5] → 1🎯 Interview InsightThis problem teaches:Modified Binary SearchIdentifying sorted halvesHandling rotated arraysUnderstanding pivot logicThis is a very common FAANG interview question and often appears in variations like:Search in Rotated Sorted ArrayFind Peak ElementMinimum in Rotated Array with Duplicates🏁 Final TakeawayBrute force works but violates constraint.Binary search is the correct approach.Compare mid with right.Shrink search space until pointers meet.Return nums[l] or nums[r].
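The edge cases listed above are easy to verify with a small harness around the optimized solution (the class name is illustrative; the method body is the article's code):

```java
public class FindMinCheck {
    // Binary search from the optimized solution above
    static int findMin(int[] nums) {
        int l = 0, r = nums.length - 1;
        while (l < r) {
            int mid = l + (r - l) / 2;
            if (nums[mid] > nums[r]) {
                l = mid + 1; // minimum is in the right half
            } else {
                r = mid;     // minimum is at mid or to its left
            }
        }
        return nums[r];
    }

    public static void main(String[] args) {
        System.out.println(findMin(new int[]{4, 5, 6, 7, 0, 1, 2})); // dry-run example: 0
        System.out.println(findMin(new int[]{10}));                  // single element: 10
        System.out.println(findMin(new int[]{11, 13, 15, 17}));      // already sorted: 11
        System.out.println(findMin(new int[]{2, 1}));                // minimum at the end: 1
    }
}
```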

Leetcode · Medium · Binary Search
Fruit Into Baskets – Sliding Window with Two Types


IntroductionLeetCode 904: Fruit Into Baskets is a classic sliding window problem that tests your ability to track a limited number of distinct elements in a subarray.The problem can be rephrased as:Find the longest subarray containing at most 2 distinct types of elements.This is a great example of using HashMap + sliding window to efficiently track counts of elements while expanding and shrinking the window.If you’d like to try solving the problem first, you can attempt it here:Try the problem on LeetCode:https://leetcode.com/problems/fruit-into-baskets/Problem UnderstandingYou are given:An array fruits where fruits[i] represents the type of fruit from tree iTwo baskets, each can hold only one type of fruitRules:Start at any tree and move only to the rightPick exactly one fruit from each tree while movingStop when a tree’s fruit cannot fit in your basketsGoal: Return the maximum number of fruits you can pick.Examples:Input: fruits = [1,2,1]Output: 3Explanation: Pick all fruits [1,2,1].Input: fruits = [0,1,2,2]Output: 3Explanation: Best subarray [1,2,2].Input: fruits = [1,2,3,2,2]Output: 4Explanation: Best subarray [2,3,2,2].A naive approach would check all subarrays, count distinct fruits, and pick the max →Time Complexity: O(n²) → too slow for fruits.length up to 10⁵Key Idea: Sliding Window with At Most Two TypesInstead of brute-force:Track the number of each fruit type in the current window using a HashMapUse two pointers i and j for the windowExpand j by adding fruits[j] to the mapIf the number of distinct types mp.size() exceeds 2:Shrink the window from the left (i) until mp.size() <= 2The window length j - i + 1 gives the max fruits collected so farIntuition:Only two baskets → window can contain at most 2 distinct fruitsThe sliding window efficiently finds the longest subarray with ≤ 2 distinct elementsApproach (Step-by-Step)Initialize i = 0, j = 0 for window pointersInitialize HashMap mp to store fruit countsInitialize co = 0 → maximum fruits collectedIterate 
j over fruits:Add fruits[j] to the mapCheck mp.size():If ≤ 2 → update co = max(co, j - i + 1)If > 2 → shrink window from i until mp.size() <= 2Continue expanding the windowReturn co as the maximum fruits collectedOptimization:HashMap keeps track of counts → no need to scan subarray repeatedlySliding window ensures linear traversalImplementation (Java)class Solution {public int totalFruit(int[] nums) {HashMap<Integer,Integer> mp = new HashMap<>();int i = 0, j = 0; // window pointersint co = 0; // max fruits collectedwhile (j < nums.length) {// Add current fruit to mapmp.put(nums[j], mp.getOrDefault(nums[j], 0) + 1);if (mp.size() <= 2) {co = Math.max(co, j - i + 1); // valid windowj++;} else {// Shrink window until at most 2 types of fruitswhile (mp.size() > 2) {mp.put(nums[i], mp.get(nums[i]) - 1);if (mp.get(nums[i]) == 0) {mp.remove(nums[i]);}i++;}co = Math.max(co, j - i + 1);j++;}}return co;}}Dry Run ExampleInput:fruits = [1,2,3,2,2]Execution:
Window [0,0] → [1] | mp = {1:1} | 1 type | valid | co = 1
Window [0,1] → [1,2] | mp = {1:1, 2:1} | 2 types | valid | co = 2
Window [0,2] → [1,2,3] | mp = {1:1, 2:1, 3:1} | 3 types | shrink | co = 2
Window [1,2] → [2,3] | mp = {2:1, 3:1} | 2 types | valid | co = 2
Window [1,3] → [2,3,2] | mp = {2:2, 3:1} | 2 types | valid | co = 3
Window [1,4] → [2,3,2,2] | mp = {2:3, 3:1} | 2 types | valid | co = 4
Output:4Complexity AnalysisTime Complexity: O(n) → Each element is added and removed at most onceSpace Complexity: O(1) → HashMap stores at most 2 types of fruitsEdge Cases ConsideredArray with ≤ 2 types → entire array can be pickedArray with all same type → window length = array lengthZeros or non-continuous fruit types → handled by sliding windowLarge arrays up to 10⁵ elements → O(n) solution works efficientlySliding Window Pattern ImportanceThis problem demonstrates the sliding window + frequency map pattern:Track limited number of distinct elementsExpand/shrink window dynamicallyEfficiently compute max subarray length with constraintsIt’s directly related to problems like:Max consecutive ones with flipsLongest substring with k distinct charactersFruit collection / subarray problems with type
limitsConclusionBy tracking fruit counts with a HashMap in a sliding window, we efficiently find the maximum fruits collectible in O(n) time.Mastering this pattern allows solving many array and string problems with constraints on distinct elements in a linear and elegant way.
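The same window logic can be condensed into a single max-update per step, using Map.merge to add and subtract counts (a restructured but behaviorally equivalent sketch; the class name is illustrative), checked against the article's three examples:

```java
import java.util.HashMap;

public class FruitDemo {
    // Same sliding-window logic as above, condensed: one max-update per step
    static int totalFruit(int[] fruits) {
        HashMap<Integer, Integer> mp = new HashMap<>();
        int i = 0, best = 0;
        for (int j = 0; j < fruits.length; j++) {
            mp.merge(fruits[j], 1, Integer::sum);
            while (mp.size() > 2) { // too many types: shrink from the left
                mp.merge(fruits[i], -1, Integer::sum);
                if (mp.get(fruits[i]) == 0) mp.remove(fruits[i]);
                i++;
            }
            best = Math.max(best, j - i + 1);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(totalFruit(new int[]{1, 2, 1}));       // 3
        System.out.println(totalFruit(new int[]{0, 1, 2, 2}));    // 3
        System.out.println(totalFruit(new int[]{1, 2, 3, 2, 2})); // 4
    }
}
```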

SlidingWindow · HashMap · LeetCode · Medium
Longest Subarray of 1's After Deleting One Element – Sliding Window Approach


IntroductionLeetCode 1493: Longest Subarray of 1's After Deleting One Element is a neat sliding window problem that tests your ability to dynamically adjust a window while handling a constraint: deleting exactly one element.The task is to find the longest subarray of 1's you can get after deleting one element from the array.This problem is an excellent example of how sliding window with zero counting can convert a potentially brute-force solution into an O(n) linear solution.If you’d like to try solving the problem first, you can attempt it here:Try the problem on LeetCode:https://leetcode.com/problems/longest-subarray-of-1s-after-deleting-one-element/Problem UnderstandingYou are given:A binary array nums containing only 0’s and 1’sYou must delete exactly one elementYour task: Return the length of the longest non-empty subarray of 1’s after deleting one element.Examples:Input: nums = [1,1,0,1]Output: 3Explanation: Delete element at index 2 → [1,1,1]. Longest subarray of 1's = 3.Input: nums = [0,1,1,1,0,1,1,0,1]Output: 5Explanation: Delete element at index 4 → [0,1,1,1,1,1,0,1]. 
Longest subarray of 1's = 5.

- Input: nums = [1,1,1] → Output: 2. One element must be deleted, so the longest subarray = 2.

A naive approach would try removing each element and scanning for the longest subarray. That is O(n²) time, which is too slow for nums.length up to 10⁵ and inefficient for large arrays.

## Key Idea: Sliding Window with At Most One Zero

Notice the following:

- Deleting one element is equivalent to allowing at most one zero in the subarray.
- We can use a sliding window [i, j] and a counter z for the zeros inside the window.
- Expand j while z <= 1.
- If z > 1, shrink the window from the left until z <= 1.
- Since one element of the window is always deleted, a valid window [i, j] contributes j - i (its length minus one) to the answer.

Intuition: only one zero is allowed in the window, because deleting that zero turns the window into all 1's. This converts the problem into a linear sliding window problem with zero counting.

## Approach (Step-by-Step)

1. Initialize window pointers i = 0, j = 0.
2. Initialize z = 0 → the number of zeros in the current window.
3. Initialize co = 0 → the maximum length of a valid subarray.
4. Iterate j over nums:
   - If nums[j] == 0, increment z.
   - If z <= 1, the window is valid → update co = max(co, j - i).
   - If z > 1, shrink the window from i until z <= 1.
5. Return co as the maximum length after deleting one element.

Optimization: only one zero counter and two window pointers are needed, which avoids recomputing subarray lengths repeatedly.

## Implementation (Java)

```java
class Solution {
    public int longestSubarray(int[] nums) {
        int i = 0, j = 0; // window pointers
        int co = 0;       // max length
        int z = 0;        // count of zeros in window
        while (j < nums.length) {
            if (nums[j] == 0) {
                z++; // a zero entered the window
            }
            if (z <= 1) {
                co = Math.max(co, j - i); // valid window
                j++;
            } else {
                // shrink window until at most one zero remains
                while (z > 1) {
                    if (nums[i] == 0) {
                        z--;
                    }
                    i++;
                }
                co = Math.max(co, j - i);
                j++;
            }
        }
        return co;
    }
}
```

## Dry Run Example

Input: nums = [1,1,0,1]

| Window [i, j] | Zeros z | Valid? | Max length co |
| --- | --- | --- | --- |
| [0,0] → [1] | 0 | Yes | 0 |
| [0,1] → [1,1] | 0 | Yes | 1 |
| [0,2] → [1,1,0] | 1 | Yes | 2 |
| [0,3] → [1,1,0,1] | 1 | Yes | 3 |

Output: 3

## Complexity Analysis

- Time Complexity: O(n) → each element is visited at most twice (once by j, once by i).
- Space Complexity: O(1) → only counters and pointers are used.

## Edge Cases Considered

- Array of all 1’s → one element must still be deleted → max length = n - 1.
- Array of all 0’s → return 0.
- Single-element arrays → return 0 (because deletion is required).
- Zeros at the start/end of the array → handled naturally by the sliding window.

## Sliding Window Pattern Importance

This problem is a great example of a sliding window with limited violations:

- Maintain a window satisfying a constraint (at most one zero).
- Expand/shrink the window dynamically.
- Compute the max length without scanning all subarrays.

It’s directly related to problems like:

- Max consecutive ones with k flips
- Longest substring with at most k distinct characters
- Subarray problems with limited replacements

## Conclusion

By tracking zeros with a sliding window, we efficiently find the longest subarray of 1’s after deleting one element in O(n) time. This pattern is reusable in many binary array and string problems, making it a must-know technique for coding interviews.
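As a quick sanity check, the sliding-window logic described above can be run against the three examples from the problem statement. The harness below is illustrative (the class name and standalone `main` are not part of the original solution); it uses the equivalent "shrink in an inner loop" form of the same algorithm:

```java
public class LongestSubarrayDemo {
    // Sliding window with at most one zero in [i, j];
    // since one element is deleted, window [i, j] contributes j - i.
    static int longestSubarray(int[] nums) {
        int i = 0, zeros = 0, best = 0;
        for (int j = 0; j < nums.length; j++) {
            if (nums[j] == 0) zeros++;      // a zero entered the window
            while (zeros > 1) {             // shrink until at most one zero
                if (nums[i] == 0) zeros--;
                i++;
            }
            best = Math.max(best, j - i);   // length minus the deleted element
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(longestSubarray(new int[]{1, 1, 0, 1}));                // 3
        System.out.println(longestSubarray(new int[]{0, 1, 1, 1, 0, 1, 1, 0, 1})); // 5
        System.out.println(longestSubarray(new int[]{1, 1, 1}));                   // 2
    }
}
```

All three outputs match the expected answers, including the all-ones case where one element must still be deleted.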

Tags: Sliding Window, Binary Array, LeetCode, Medium
Maximum Absolute Sum of Any Subarray


## LeetCode Problem 1749

Link of the problem to try: Link

You are given an integer array nums. The absolute sum of a subarray [nums[l], nums[l+1], ..., nums[r-1], nums[r]] is abs(nums[l] + nums[l+1] + ... + nums[r-1] + nums[r]).

Return the maximum absolute sum of any (possibly empty) subarray of nums.

Note that abs(x) is defined as follows:

- If x is a negative integer, then abs(x) = -x.
- If x is a non-negative integer, then abs(x) = x.

Example 1:
Input: nums = [1,-3,2,3,-4]
Output: 5
Explanation: The subarray [2,3] has absolute sum = abs(2+3) = abs(5) = 5.

Example 2:
Input: nums = [2,-5,1,-4,3,-2]
Output: 8
Explanation: The subarray [-5,1,-4] has absolute sum = abs(-5+1-4) = abs(-8) = 8.

Constraints:

- 1 <= nums.length <= 10⁵
- -10⁴ <= nums[i] <= 10⁴

## My Thinking

The core logic is this: first find the maximum subarray sum using Kadane's algorithm, then find the minimum subarray sum with the mirrored version of the same algorithm. Compare the two values ignoring sign (i.e., take the absolute value of the minimum) and return the larger one. That is exactly what the question asks for.

## Kadane's Algorithm

This algorithm is highly efficient, solving the problem in a single pass with a time complexity of O(n), since it must iterate through all n elements of the input array.

The core principle is that any negative prefix sum acts as a 'debt' that subsequent positive numbers must overcome. The strategy is to always pursue the maximum sum.
If the running sum becomes negative at any point, it is discarded, effectively 'resetting' past the previous negative prefix, and the search for a new maximum sum begins from the next element.

## Solution

Here is the Java solution for this question using Kadane's algorithm:

```java
public int maxAbsoluteSum(int[] nums) {
    int max = Integer.MIN_VALUE;
    int min = Integer.MAX_VALUE;
    int maxSum = 0;
    int minSum = 0;
    for (int i = 0; i < nums.length; i++) {
        // Kadane for the maximum subarray sum
        maxSum = Math.max(maxSum + nums[i], nums[i]);
        max = Math.max(max, maxSum);
        // Mirrored Kadane for the minimum subarray sum
        minSum = Math.min(minSum + nums[i], nums[i]);
        min = Math.min(min, minSum);
    }
    return Math.max(max, Math.abs(min));
}
```
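The two-sided Kadane approach can be verified against both examples from the problem statement. The small harness below is illustrative (the class name and `main` method are not part of the original post); it contains the same logic as the solution above:

```java
public class MaxAbsoluteSumDemo {
    // Run Kadane twice in one pass: once for the maximum subarray sum,
    // once (mirrored) for the minimum, then compare their magnitudes.
    static int maxAbsoluteSum(int[] nums) {
        int max = 0, min = 0;      // best max / min subarray sums seen so far
        int maxSum = 0, minSum = 0; // running max / min sums ending here
        for (int x : nums) {
            maxSum = Math.max(maxSum + x, x);
            max = Math.max(max, maxSum);
            minSum = Math.min(minSum + x, x);
            min = Math.min(min, minSum);
        }
        return Math.max(max, Math.abs(min));
    }

    public static void main(String[] args) {
        System.out.println(maxAbsoluteSum(new int[]{1, -3, 2, 3, -4}));     // 5
        System.out.println(maxAbsoluteSum(new int[]{2, -5, 1, -4, 3, -2})); // 8
    }
}
```

In Example 1 the answer comes from the maximum side (subarray [2,3], sum 5); in Example 2 it comes from the minimum side (subarray [-5,1,-4], sum -8, absolute value 8), which is exactly why both directions of Kadane are needed.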

Tags: LeetCode, Medium, Kadane's Algorithm