Recursion in Java - Complete Guide With Examples and Practice Problems

Introduction

If there is one topic in programming that confuses beginners more than anything else, it is recursion. Most people read the definition, nod their head, and then immediately freeze when they have to write recursive code themselves.

The problem is not that recursion is genuinely hard. The problem is that most explanations start with code before building the right mental model. Once you have the right mental model, recursion clicks permanently and you start seeing it everywhere — in tree problems, graph problems, backtracking, dynamic programming, divide and conquer, and more.

This guide covers everything from the ground up: what recursion is, how the call stack works, how to identify base cases and recursive cases, every type of recursion, common patterns, time and space complexity analysis, the most common mistakes, and the top LeetCode problems to practice.

By the end of this article, recursion will not feel like magic anymore. It will feel like a natural tool you reach for confidently.

What Is Recursion?

Recursion is when a function calls itself to solve a smaller version of the same problem.

That is the complete definition. But let us make it concrete. Imagine you want to count down from 5 to 1. One way is a loop. Another way is: print 5, then solve the exact same problem for counting down from 4 to 1. Then print 4, solve for 3. And so on until you reach the base — there is nothing left to count down.

void countDown(int n) {
    if (n == 0) return;   // stop here
    System.out.println(n);
    countDown(n - 1);     // solve the smaller version
}

The function countDown calls itself with a smaller input each time. Eventually it reaches 0 and stops. That stopping condition is the most important part of any recursive function — the base case.

The Two Parts Every Recursive Function Must Have

Every correctly written recursive function has exactly two parts.
Without both, the function either gives wrong answers or runs forever.

Part 1: Base Case

The base case is the condition under which the function stops calling itself and returns a direct answer. It is the smallest version of the problem that you can solve without any further recursion. Without a base case, recursion never stops and you get a StackOverflowError — Java's way of telling you the call stack ran out of memory.

Part 2: Recursive Case

The recursive case is where the function calls itself with a smaller or simpler input, moving closer to the base case with each call. If your recursive case does not make the problem smaller, you have infinite recursion.

Think of it like a staircase. The base case is the ground floor. The recursive case is each step going down. Every step must genuinely bring you one level closer to the ground.

How Recursion Works — The Call Stack

This is the mental model that most explanations skip, and it is the reason recursion confuses people.

Every time a function is called in Java, a new stack frame is created and pushed onto the call stack. This frame stores the function's local variables, parameters, and where to return to when the function finishes. When a recursive function calls itself, a new frame is pushed on top. When that call finishes, its frame is popped and execution returns to the previous frame.

Let us trace countDown(3) through the call stack:

countDown(3) called → frame pushed
    prints 3
    calls countDown(2) → frame pushed
        prints 2
        calls countDown(1) → frame pushed
            prints 1
            calls countDown(0) → frame pushed
                n == 0, return → frame popped
        back in countDown(1), return → frame popped
    back in countDown(2), return → frame popped
back in countDown(3), return → frame popped

Output: 3, 2, 1

The call stack grows as calls go deeper, then shrinks as calls return.
This is why recursion uses O(n) space for n levels of depth — each level occupies one stack frame in memory.

Your First Real Recursive Function — Factorial

Factorial is the classic first recursion example: n! = n × (n-1) × (n-2) × ... × 1.

Notice the pattern: n! = n × (n-1)!. The factorial of n is n times the factorial of n-1. That recursive structure makes it perfect for recursion.

public int factorial(int n) {
    // base case
    if (n == 0 || n == 1) return 1;
    // recursive case
    return n * factorial(n - 1);
}

Dry Run — factorial(4)

factorial(4)
= 4 * factorial(3)
= 4 * 3 * factorial(2)
= 4 * 3 * 2 * factorial(1)
= 4 * 3 * 2 * 1
= 24

The call stack builds up going in, then the multiplications happen coming back out. This "coming back out" phase is called the return phase, or unwinding of the stack.

Time Complexity: O(n) — n recursive calls
Space Complexity: O(n) — n frames on the call stack

The Two Phases of Recursion

Every recursive function has two phases, and understanding both is critical.

Phase 1: The Call Phase (Going In). This happens as the function keeps calling itself with smaller inputs. Things you do before the recursive call happen in this phase, in order from the outermost call to the innermost.

Phase 2: The Return Phase (Coming Back Out). This happens as each call finishes and returns to its caller. Things you do after the recursive call happen in this phase, in reverse order, from the innermost call back to the outermost.

This distinction explains why the output order can be surprising:

void printBothPhases(int n) {
    if (n == 0) return;
    System.out.println("Going in: " + n);   // call phase
    printBothPhases(n - 1);
    System.out.println("Coming out: " + n); // return phase
}

For printBothPhases(3):

Going in: 3
Going in: 2
Going in: 1
Coming out: 1
Coming out: 2
Coming out: 3

This two-phase understanding is what makes problems like reversing a string or printing a linked list backwards via recursion feel natural.

Types of Recursion

Recursion is not one-size-fits-all.
There are several distinct types, and knowing which type applies to a problem shapes how you write the solution.

1. Direct Recursion

The function calls itself directly. This is the most common type — what we have seen so far.

void direct(int n) {
    if (n == 0) return;
    direct(n - 1); // calls itself
}

2. Indirect Recursion

Function A calls Function B, which calls Function A. They form a cycle.

void funcA(int n) {
    if (n <= 0) return;
    System.out.println("A: " + n);
    funcB(n - 1);
}

void funcB(int n) {
    if (n <= 0) return;
    System.out.println("B: " + n);
    funcA(n - 1);
}

Used in: state machines, mutual recursion in parsers, certain mathematical sequences.

3. Tail Recursion

The recursive call is the last operation in the function. Nothing happens after the recursive call returns — no multiplication, no addition, nothing.

// NOT tail recursive — multiplication happens after the call returns
int factorial(int n) {
    if (n == 1) return 1;
    return n * factorial(n - 1); // multiply after return — not tail
}

// Tail recursive — the recursive call is the last thing
int factorialTail(int n, int accumulator) {
    if (n == 1) return accumulator;
    return factorialTail(n - 1, n * accumulator); // last operation
}

Why does tail recursion matter? In languages that support tail call optimization (like Scala, Kotlin, and many functional languages), tail recursive functions can be converted to iteration internally — no stack frame accumulation, O(1) space. Java does NOT perform tail call optimization, but understanding tail recursion is still important for interviews and functional programming concepts.

4. Head Recursion

The recursive call happens first, before any other processing. All processing happens in the return phase.

void headRecursion(int n) {
    if (n == 0) return;
    headRecursion(n - 1);  // call first
    System.out.println(n); // process after
}
// headRecursion(5) prints: 1 2 3 4 5 (processing happens in reverse order of the calls)

5. Tree Recursion

The function makes more than one recursive call per invocation.
This creates a tree of calls rather than a linear chain. Fibonacci is the classic example.

int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2); // TWO recursive calls
}

The call tree for fibonacci(4):

            fib(4)
           /      \
      fib(3)      fib(2)
      /    \      /    \
  fib(2) fib(1) fib(1) fib(0)
  /    \
fib(1) fib(0)

Time Complexity: O(2ⁿ) — exponential! Each call spawns two more.
Space Complexity: O(n) — maximum depth of the call tree

This is why memoization (caching results) is so important for tree recursion — it converts O(2ⁿ) to O(n) by never recomputing the same subproblem twice.

6. Mutual Recursion

A specific form of indirect recursion where two functions call each other alternately to solve a problem. It differs from general indirect recursion in that the mutual calls are the core mechanism of the solution.

// Check if a number is even or odd using mutual recursion
boolean isEven(int n) {
    if (n == 0) return true;
    return isOdd(n - 1);
}

boolean isOdd(int n) {
    if (n == 0) return false;
    return isEven(n - 1);
}

Common Recursion Patterns in DSA

These are the patterns you will see over and over in interview problems.
Recognizing them is more important than memorizing solutions.

Pattern 1: Linear Recursion (Do Something, Recurse on the Rest)

Process the current element, then recurse on the remaining problem.

// Sum of array
int arraySum(int[] arr, int index) {
    if (index == arr.length) return 0;            // base case
    return arr[index] + arraySum(arr, index + 1); // current + rest
}

Pattern 2: Divide and Conquer (Split Into Two Halves)

Split the problem into two halves, solve each recursively, combine the results.

// Merge Sort
void mergeSort(int[] arr, int left, int right) {
    if (left >= right) return;      // base case — single element
    int mid = (left + right) / 2;
    mergeSort(arr, left, mid);      // sort left half
    mergeSort(arr, mid + 1, right); // sort right half
    merge(arr, left, mid, right);   // combine
}

Pattern 3: Backtracking (Try, Recurse, Undo)

Try a choice, recurse to explore it, undo the choice when backtracking.

// Generate all subsets
void subsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
    if (index == nums.length) {
        result.add(new ArrayList<>(current));
        return;
    }
    // Choice 1: include nums[index]
    current.add(nums[index]);
    subsets(nums, index + 1, current, result);
    current.remove(current.size() - 1); // undo

    // Choice 2: exclude nums[index]
    subsets(nums, index + 1, current, result);
}

Pattern 4: Tree Recursion (Left, Right, Combine)

Recurse on the left subtree, recurse on the right subtree, combine or process the results.

// Height of binary tree
int height(TreeNode root) {
    if (root == null) return 0;                   // base case
    int leftHeight = height(root.left);           // solve left
    int rightHeight = height(root.right);         // solve right
    return 1 + Math.max(leftHeight, rightHeight); // combine
}

Pattern 5: Memoization (Cache Recursive Results)

Store the results of recursive calls so the same subproblem is never solved twice.

Map<Integer, Integer> memo = new HashMap<>();

int fibonacci(int n) {
    if (n <= 1) return n;
    if (memo.containsKey(n)) return memo.get(n); // return cached
    int result = fibonacci(n - 1) + fibonacci(n - 2);
    memo.put(n, result); // cache before returning
    return result;
}

This converts Fibonacci from O(2ⁿ) to O(n) time with O(n) space — a massive improvement.

Recursion vs Iteration — When to Use Which

This is one of the most common interview questions about recursion. Here is a clear breakdown.

Use recursion when:

The problem has a naturally recursive structure (trees, graphs, divide and conquer)
The solution is significantly cleaner and easier to understand recursively
The problem involves exploring multiple paths or choices (backtracking)
The depth of recursion is manageable (not deep enough to cause stack overflow)

Use iteration when:

The problem is linear and a loop is equally clear
Memory is a concern (iteration uses O(1) stack space vs O(n) for recursion)
Performance is critical and function call overhead matters
Java's stack depth limit could be hit (the default thread stack typically overflows after roughly ten thousand simple frames, sooner for frames with many locals)

The key rule: every recursive solution can be converted to an iterative one (usually using an explicit stack).
But recursive solutions for tree and graph problems are almost always cleaner to write and understand.

Time and Space Complexity of Recursive Functions

Analyzing complexity for recursive functions requires a specific approach.

The Recurrence Relation Method

Express the time complexity as a recurrence relation and solve it.

Factorial:         T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = T(1) + n×O(1) = O(n)
Fibonacci (naive): T(n) = T(n-1) + T(n-2) + O(1) ≈ 2×T(n-1) = O(2ⁿ)
Binary Search:     T(n) = T(n/2) + O(1) = O(log n)   [by the Master Theorem]
Merge Sort:        T(n) = 2×T(n/2) + O(n) = O(n log n) [by the Master Theorem]

Space Complexity Rule for Recursion

Space complexity of a recursive function = maximum depth of the call stack × space per frame.

Linear recursion (factorial, sum): O(n) space
Binary recursion (naive Fibonacci): O(n) space (maximum depth, not number of nodes)
Divide and conquer (merge sort): O(log n) space (depth of the recursion tree)
Memoized Fibonacci: O(n) space (memo table + call stack)

Classic Recursive Problems With Solutions

Problem 1: Reverse a String

String reverse(String s) {
    if (s.length() <= 1) return s; // base case
    // last char + reverse of everything before the last char
    return s.charAt(s.length() - 1) + reverse(s.substring(0, s.length() - 1));
}

Dry run for "hello":

reverse("hello") = 'o' + reverse("hell")
reverse("hell")  = 'l' + reverse("hel")
reverse("hel")   = 'l' + reverse("he")
reverse("he")    = 'e' + reverse("h")
reverse("h")     = "h"

Unwinding: "h" → "eh" → "leh" → "lleh" → "olleh" ✅

Problem 2: Power Function (x^n)

double power(double x, int n) {
    if (n == 0) return 1;                 // base case
    if (n < 0) return 1.0 / power(x, -n); // handle negative exponent
                                          // (note: n = Integer.MIN_VALUE would overflow -n;
                                          //  LeetCode 50 handles that with a long)
    if (n % 2 == 0) {
        double half = power(x, n / 2);
        return half * half;               // x^n = (x^(n/2))^2
    } else {
        return x * power(x, n - 1);
    }
}

This is the fast power algorithm — O(log n) time instead of O(n).

Problem 3: Fibonacci With Memoization

int[] memo = new int[100];
// before first use: Arrays.fill(memo, -1);

int fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

Time: O(n) — each value computed once
Space: O(n) — memo array + call stack

Problem 4: Tower of Hanoi

The classic recursion teaching problem: move n disks from source to destination using a helper rod.

void hanoi(int n, char source, char destination, char helper) {
    if (n == 1) {
        System.out.println("Move disk 1 from " + source + " to " + destination);
        return;
    }
    // Move n-1 disks from source to helper
    hanoi(n - 1, source, helper, destination);
    // Move the largest disk from source to destination
    System.out.println("Move disk " + n + " from " + source + " to " + destination);
    // Move n-1 disks from helper to destination
    hanoi(n - 1, helper, destination, source);
}

Time Complexity: O(2ⁿ) — the minimum number of moves is 2ⁿ - 1
Space Complexity: O(n) — call stack depth

Problem 5: Generate All Subsets (Power Set)

void generateSubsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
    result.add(new ArrayList<>(current)); // add current subset
    for (int i = index; i < nums.length; i++) {
        current.add(nums[i]);                          // include
        generateSubsets(nums, i + 1, current, result); // recurse
        current.remove(current.size() - 1);            // exclude (backtrack)
    }
}

For [1, 2, 3] this generates all 8 subsets: [], [1], [1,2], [1,2,3], [1,3], [2], [2,3], [3]

Time: O(2ⁿ) — 2ⁿ subsets
Space: O(n) — recursion depth

Problem 6: Binary Search, Recursively

int binarySearch(int[] arr, int target, int left, int right) {
    if (left > right) return -1; // base case — not found
    int mid = left + (right - left) / 2;
    if (arr[mid] == target) return mid;
    else if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right);
    else return binarySearch(arr, target, left, mid - 1);
}

Time: O(log n) — the search space halves each time
Space: O(log n) — log n frames on the call stack

Recursion on Trees — The Natural Habitat

Trees are where recursion truly shines.
Every tree problem becomes elegant with recursion because a tree is itself a recursive structure — each node's left and right children are trees themselves.

// Maximum depth of binary tree
int maxDepth(TreeNode root) {
    if (root == null) return 0;
    return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}

// Check if a tree is symmetric
boolean isSymmetric(TreeNode left, TreeNode right) {
    if (left == null && right == null) return true;
    if (left == null || right == null) return false;
    return left.val == right.val
        && isSymmetric(left.left, right.right)
        && isSymmetric(left.right, right.left);
}

// Path sum — does any root-to-leaf path sum to target?
boolean hasPathSum(TreeNode root, int target) {
    if (root == null) return false;
    if (root.left == null && root.right == null) return root.val == target;
    return hasPathSum(root.left, target - root.val)
        || hasPathSum(root.right, target - root.val);
}

Notice the pattern in all three: the base case handles null, the recursive case handles the left and right subtrees, and the result combines both.

How to Think About Any Recursive Problem — Step by Step

This is the framework you should apply to every new recursive problem you encounter.

Step 1 — Identify the base case. What is the smallest input where you know the answer directly, without any recursion? For arrays it is usually the empty array or a single element. For trees it is the null node. For numbers it is 0 or 1.

Step 2 — Trust the recursive call. Assume your function already works correctly for smaller inputs. Do not trace through the entire recursion mentally — just trust it. This is the "leap of faith", and it is what makes recursion feel natural.

Step 3 — Express the current problem in terms of smaller problems. How does the answer for size n relate to the answer for size n-1 (or n/2, or the subtrees)? This relationship is your recursive case.

Step 4 — Make sure each call moves toward the base case. The input must become strictly smaller with each call.
If it does not, you have infinite recursion.

Step 5 — Write the base case first, then the recursive case. Always. Writing the recursive case first leads to bugs because you have not defined when to stop.

Common Mistakes and How to Avoid Them

Mistake 1: Missing or wrong base case. The most common mistake. A missing base case causes StackOverflowError; a wrong base case causes wrong answers. Always ask: what is the simplest possible input, and what should the function return for it? Write that case first.

Mistake 2: Not moving toward the base case. If you call factorial(n) inside factorial(n) without reducing n, you recurse forever. Every recursive call must make the problem strictly smaller.

Mistake 3: Trusting your brain to trace deep recursion. Do not try to trace 10 levels of recursion in your head. Trust the recursive call, verify the base case, and check that each call reduces the problem. That is all you need.

Mistake 4: Forgetting to return the recursive result.

// WRONG — the result is discarded
// (in Java this is actually a compile error: "not a statement")
int sum(int n) {
    if (n == 0) return 0;
    sum(n - 1) + n; // computed but never returned!
}

// CORRECT
int sum(int n) {
    if (n == 0) return 0;
    return sum(n - 1) + n;
}

Mistake 5: Modifying shared state without backtracking. In backtracking problems, if you add something to a list before a recursive call, you must remove it after the call returns. Forgetting to backtrack leads to incorrect results and is one of the trickiest bugs to find.

Mistake 6: Recomputing the same subproblems. Naive Fibonacci computes fib(3) multiple times when computing fib(5). Use memoization whenever you notice overlapping subproblems in your recursion tree.

Top LeetCode Problems on Recursion

These are organized by pattern — work through them in this order for maximum learning.

Pure Recursion Basics:
509. Fibonacci Number — Easy — start here; implement with and without memoization
344. Reverse String — Easy — recursion on arrays
206. Reverse Linked List — Easy — recursion on a linked list
50. Pow(x, n) — Medium — fast power with recursion

Tree Recursion (Most Important):
104. Maximum Depth of Binary Tree — Easy — the simplest tree recursion
112. Path Sum — Easy — decision recursion on a tree
101. Symmetric Tree — Easy — mirrored recursion on a tree
110. Balanced Binary Tree — Easy — bottom-up recursion
236. Lowest Common Ancestor of a Binary Tree — Medium — classic tree recursion
124. Binary Tree Maximum Path Sum — Hard — advanced tree recursion

Divide and Conquer:
148. Sort List — Medium — merge sort on a linked list
240. Search a 2D Matrix II — Medium — divide and conquer

Backtracking:
78. Subsets — Medium — generate all subsets
46. Permutations — Medium — generate all permutations
77. Combinations — Medium — generate combinations
79. Word Search — Medium — backtracking on a grid
51. N-Queens — Hard — classic backtracking

Memoization / Dynamic Programming:
70. Climbing Stairs — Easy — a Fibonacci variant with memoization
322. Coin Change — Medium — recursion with memoization to DP
139. Word Break — Medium — memoized recursion

Recursion Cheat Sheet

// Linear recursion template
returnType solve(input) {
    if (baseCase) return directAnswer;
    // process current
    return solve(smallerInput);
}

// Tree recursion template
returnType solve(TreeNode root) {
    if (root == null) return baseValue;
    returnType left = solve(root.left);
    returnType right = solve(root.right);
    return combine(left, right, root.val);
}

// Backtracking template
void backtrack(choices, current, result) {
    if (goalReached) {
        result.add(copy of current);
        return;
    }
    for (choice : choices) {
        make(choice);   // add to current
        backtrack(...); // recurse
        undo(choice);   // remove from current
    }
}

// Memoization template
Map<Input, Output> memo = new HashMap<>();
returnType solve(input) {
    if (baseCase) return directAnswer;
    if (memo.containsKey(input)) return memo.get(input);
    returnType result = solve(smallerInput);
    memo.put(input, result);
    return result;
}

FAQs — People Also Ask

Q1. What is recursion in Java with a simple example?
Recursion is when a function calls itself to solve a smaller version of the same problem. A simple example is factorial: factorial(5) = 5 × factorial(4) = 5 × 4 × factorial(3), and so on until factorial(1) returns 1 directly.

Q2. What is the difference between recursion and iteration?
Iteration uses loops (for, while) and runs in O(1) stack space. Recursion uses function calls and takes O(n) stack space for n levels of depth. Recursion is often cleaner for tree and graph problems; iteration is better when memory is a concern or the problem is inherently linear.

Q3. What causes StackOverflowError in Java recursion?
StackOverflowError happens when recursion goes too deep — too many frames accumulate on the call stack before any of them return. It is caused by a missing base case, a wrong base case, or input too large for Java's default stack size.

Q4. What is the difference between recursion and dynamic programming?
Recursion solves a problem by breaking it into subproblems. Dynamic programming is recursion plus memoization — storing the results of subproblems so they are never computed twice. DP converts exponential recursive solutions into polynomial ones by eliminating redundant computation.

Q5. What is tail recursion and does Java support tail call optimization?
Tail recursion is when the recursive call is the very last operation in the function. Java does NOT perform tail call optimization — it always creates a new stack frame for each call, even a tail call. On the JVM, Scala supports it via the @tailrec annotation and Kotlin via the tailrec modifier.

Q6. How do you convert recursion to iteration?
Every recursive solution can be converted to an iterative one using an explicit stack data structure. You replicate the call stack's behavior manually: push the initial call, loop while the stack is not empty, pop, process, and push the sub-calls. Tree traversals are a common example of this conversion.

Conclusion

Recursion is not magic.
It is a systematic way of solving problems by expressing them in terms of smaller versions of themselves. Once you internalize the two parts (base case and recursive case), understand the call stack mentally, and learn to trust the recursive call rather than trace it completely, everything clicks.

The learning path from here is clear: start with simple problems like Fibonacci and array sum. Move to tree problems, where recursion is most natural. Then tackle backtracking. Finally, add memoization to bridge into dynamic programming.

Every hour you spend understanding recursion deeply pays dividends across the rest of your DSA journey. Trees, graphs, divide and conquer, backtracking, dynamic programming — all of them build on this foundation.
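To make the recursion-to-iteration conversion from Q6 concrete, here is a minimal sketch using a tree traversal, the example the FAQ names. The TreeNode class is a hypothetical stand-in for the one used throughout this article; the iterative version replaces the call stack with an explicit ArrayDeque.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal stand-in for the article's TreeNode (assumption, not the exact class)
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

public class IterativePreorder {
    // Recursive preorder: root, left, right
    static void preorderRecursive(TreeNode root, List<Integer> out) {
        if (root == null) return; // base case
        out.add(root.val);
        preorderRecursive(root.left, out);
        preorderRecursive(root.right, out);
    }

    // Same traversal with an explicit stack replacing the call stack
    static List<Integer> preorderIterative(TreeNode root) {
        List<Integer> out = new ArrayList<>();
        Deque<TreeNode> stack = new ArrayDeque<>();
        if (root != null) stack.push(root); // "push the initial call"
        while (!stack.isEmpty()) {          // "loop while the stack is not empty"
            TreeNode node = stack.pop();    // "pop, process"
            out.add(node.val);
            // Push right first so the left subtree is processed first (LIFO order)
            if (node.right != null) stack.push(node.right);
            if (node.left != null) stack.push(node.left);
        }
        return out;
    }

    public static void main(String[] args) {
        //     1
        //    / \
        //   2   3
        TreeNode root = new TreeNode(1);
        root.left = new TreeNode(2);
        root.right = new TreeNode(3);
        List<Integer> rec = new ArrayList<>();
        preorderRecursive(root, rec);
        System.out.println(rec);                     // [1, 2, 3]
        System.out.println(preorderIterative(root)); // [1, 2, 3]
    }
}
```

Both versions visit the nodes in the same order; the explicit stack simply makes the O(n) space that recursion uses implicitly visible in your own data structure.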

Tags: Recursion, Java, Base Case, Call Stack, Backtracking, Dynamic Programming
LeetCode 1047: Remove All Adjacent Duplicates In String — Java Solution With All Approaches Explained

Introduction: What Is LeetCode 1047, Remove All Adjacent Duplicates In String?

If you are grinding LeetCode for coding interviews at companies like Google, Amazon, or Microsoft, LeetCode 1047 Remove All Adjacent Duplicates In String is a problem you cannot skip. It is one of the most elegant examples of the stack simulation pattern and appears frequently as a warmup or follow-up question in technical rounds.

In this article we will cover everything you need: a plain-English explanation, a real-life analogy, three Java approaches with dry runs, complexity analysis, common mistakes, FAQs, and similar problems to practice next.

Here is the problem link: LeetCode 1047

What Is the Problem Really Asking?

You are given a string. Keep scanning it, and whenever you find two of the same letter sitting next to each other, remove both of them. After removing, the letters around them might now become adjacent and form a new pair — so you keep doing this until no more adjacent duplicates exist.

Example walkthrough for "abbaca":

"abbaca" → bb are adjacent duplicates → remove → "aaca"
"aaca" → aa are adjacent duplicates → remove → "ca"
"ca" → no adjacent duplicates → done!

✅ Output: "ca"

Real-Life Analogy — Think of Popping Bubbles

Imagine a row of colored bubbles. Whenever two bubbles of the same color are next to each other, they pop and disappear. After they pop, the bubbles on either side might now touch each other — and if they are the same color, they pop too! You keep going until no two same-colored bubbles are touching.

That chain reaction is exactly what this problem simulates. And the best tool to handle that chain reaction? A stack.

Approach 1: Brute Force (Beginner Friendly)

The Idea

Scan the string repeatedly. Every time you find two adjacent equal characters, remove them.
Keep doing this until a full pass finds nothing to remove.

public String removeDuplicates(String s) {
    StringBuilder sb = new StringBuilder(s);
    boolean found = true;
    while (found) {
        found = false;
        for (int i = 0; i < sb.length() - 1; i++) {
            if (sb.charAt(i) == sb.charAt(i + 1)) {
                sb.deleteCharAt(i);
                sb.deleteCharAt(i); // the second char has shifted into position i
                found = true;
                break;
            }
        }
    }
    return sb.toString();
}

This is easy to understand but very slow. For each pair found, you restart the entire scan. With n up to 100,000, this will get Time Limit Exceeded on LeetCode. Use it only to build intuition.

Time Complexity: O(n²) — repeated passes over the string
Space Complexity: O(n) — StringBuilder storage

Approach 2: Stack-Based Solution (Classic Interview Approach)

The Idea

A stack is perfect here because of one key observation: when you remove a pair, the character that was before the pair becomes adjacent to the character after the pair. That is a last-in-first-out situation, which is exactly what a stack handles naturally.

Algorithm:

If the current character matches the top of the stack → pop (they cancel each other)
Otherwise → push the current character onto the stack
At the end, the stack contains your final answer

public String removeDuplicates(String s) {
    Stack<Character> st = new Stack<>();
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (!st.empty() && c == st.peek()) {
            st.pop(); // adjacent duplicate found, cancel both
        } else {
            st.push(c);
        }
    }
    while (!st.empty()) {
        sb.append(st.pop());
    }
    return sb.reverse().toString();
}

Dry Run — "abbaca"

We go character by character and check against the top of the stack:

a → stack empty, push → stack: [a]
b → top is a, not equal, push → stack: [a, b]
b → top is b, equal! pop → stack: [a]
a → top is a, equal!
pop → stack: []
c → stack empty, push → stack: [c]
a → top is c, not equal, push → stack: [c, a]

Stack remaining: [c, a] → reverse the popped order → ✅ "ca"

Notice how after removing bb, the two a's automatically become adjacent and get caught — the stack handles this chain reaction naturally, without any extra logic!

Time Complexity: O(n) — a single pass through the string
Space Complexity: O(n) — the stack holds up to n characters

Approach 3: StringBuilder as Stack (Optimal Solution) ✅

The Idea

This is the best approach. Instead of using a separate Stack<Character>, we use the StringBuilder itself as a stack:

sb.append(c) acts as push
sb.deleteCharAt(sb.length() - 1) acts as pop
sb.charAt(sb.length() - 1) acts as peek

No extra data structure, no boxing of char into Character objects, and no reversal needed at the end. Clean, fast, and minimal.

public String removeDuplicates(String s) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (sb.length() != 0 && c == sb.charAt(sb.length() - 1)) {
            sb.deleteCharAt(sb.length() - 1); // adjacent duplicate, remove both
        } else {
            sb.append(c);
        }
    }
    return sb.toString();
}

Dry Run — "azxxzy"

a → sb empty, append → "a"
z → last char is a, not equal, append → "az"
x → last char is z, not equal, append → "azx"
x → last char is x, equal! delete → "az"
z → last char is z, equal! delete → "a"
y → last char is a, not equal, append → "ay"

✅ Final Answer: "ay"

Again, notice the chain reaction: after xx was removed, z and z became adjacent and were removed too. The StringBuilder handles this perfectly in a single pass!

Time Complexity: O(n) — one pass; every character is processed exactly once
Space Complexity: O(n) — StringBuilder storage

Why StringBuilder Beats Stack in Java

When you use Stack<Character> in Java, every char primitive gets auto-boxed into a Character object. That means extra memory allocation for every single character. With StringBuilder, you work directly on the underlying char array — faster and leaner.
Plus, you skip the reversal step entirely.

In an interview, the Stack approach is great for explaining your thought process clearly. But for the final submitted solution, StringBuilder is the way to go.

Common Mistakes to Avoid

Not checking sb.length() != 0 before peeking. If the StringBuilder is empty and you call sb.charAt(sb.length() - 1), you will get a StringIndexOutOfBoundsException. Always guard this check — even if the problem guarantees valid input, it shows clean coding habits.

Thinking you need multiple passes. Many beginners think you need to scan the string multiple times because of the chain reactions. The stack handles chain reactions automatically in a single pass. Trust the process!

Forgetting to reverse when using a Stack. Since a stack gives you characters in reverse order when you pop them, you must call .reverse() at the end. With StringBuilder you do not need this.

How This Fits Into the Stack Simulation Pattern

By now you might be noticing a theme across multiple problems:

LeetCode 3174 Clear Digits — a digit acts as a backspace, deleting the closest non-digit to its left
LeetCode 2390 Removing Stars — a star acts as a backspace, deleting the closest non-star to its left
LeetCode 1047 Remove Adjacent Duplicates — a character cancels itself if it matches the top of the stack

All three use the exact same StringBuilder-as-stack pattern. The only difference is the condition that triggers a deletion. This is why pattern recognition is the real skill — once you internalize this pattern, you can solve a whole family of problems in minutes.

FAQs — People Also Ask

Q1. What is the best approach for LeetCode 1047 in Java?
The StringBuilder approach. It runs in O(n) time, uses O(n) space, requires no extra data structure, and avoids the reversal step needed with a Stack.

Q2. Why does a stack work for removing adjacent duplicates?
Because whenever you remove a pair, the characters around them become the new neighbors.
A stack naturally keeps track of the most recently seen character, so it catches these chain reactions without any extra logic.Q3. What is the time complexity of LeetCode 1047? The optimal solution runs in O(n) time and O(n) space, where n is the length of the input string.Q4. Is LeetCode 1047 asked in coding interviews? Yes, it is commonly asked as a warmup problem or follow-up at companies like Google, Amazon, and Adobe. It tests your understanding of stack-based string manipulation.Q5. What is the difference between LeetCode 1047 and LeetCode 1209? LeetCode 1047 removes pairs of adjacent duplicates. LeetCode 1209 is the harder version — it removes groups of k adjacent duplicates, requiring you to store counts alongside characters in the stack.Similar LeetCode Problems to Practice Next2390. Removing Stars From a String — Medium — star as backspace3174. Clear Digits — Easy — digit as backspace844. Backspace String Compare — Easy — compare two strings after backspaces1209. Remove All Adjacent Duplicates in String II — Medium — harder version with k duplicates735. Asteroid Collision — Medium — stack simulation with collision logicConclusionLeetCode 1047 Remove All Adjacent Duplicates In String is a beautiful problem that teaches you one of the most powerful and reusable patterns in DSA — stack simulation. The moment you spot that a removal can cause a chain reaction of more removals, you know a stack is your best friend.The StringBuilder solution is clean, optimal, and interview-ready. Master it, understand why it works, and you will be able to tackle the entire family of stack simulation problems with confidence.Found this helpful? Share it with friends preparing for coding interviews
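As a companion to the comparison above, here is a minimal sketch of the Stack<Character> version for LeetCode 1047 (the class name is assumed for illustration; note the final reversal the article warns about):

```java
import java.util.Stack;

// Sketch of the Stack<Character> approach for LeetCode 1047.
// Slower than StringBuilder due to autoboxing, but great for explaining.
class RemoveDuplicatesWithStack {
    public String removeDuplicates(String s) {
        Stack<Character> stack = new Stack<>();
        for (char c : s.toCharArray()) {
            if (!stack.isEmpty() && stack.peek() == c) {
                stack.pop(); // adjacent duplicate: cancel the pair
            } else {
                stack.push(c);
            }
        }
        // Popping yields characters in reverse order, so reverse at the end
        StringBuilder sb = new StringBuilder();
        while (!stack.isEmpty()) {
            sb.append(stack.pop());
        }
        return sb.reverse().toString();
    }
}
```

On "abbaca" this produces "ca" and on "azxxzy" it produces "ay", matching the dry runs earlier.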

LeetCodeJavaStackStringEasy
Intersection of Two Linked Lists – The Smart Trick Most People Miss (LeetCode 160)

Intersection of Two Linked Lists – The Smart Trick Most People Miss (LeetCode 160)

🚀 Try the ProblemPractice here:https://leetcode.com/problems/intersection-of-two-linked-lists/🤔 Let’s Start With a ThoughtImagine two roads:Road A: 4 → 1 → 8 → 4 → 5 Road B: 5 → 6 → 1 → 8 → 4 → 5At some point… both roads merge into one.👉 That merging point is the intersection node🧠 Problem in Simple WordsYou are given:Head of List AHead of List B👉 Find the node where both linked lists intersect👉 If no intersection exists → return null⚠️ Important Clarification👉 Intersection means:Same memory node (same reference)NOT:Same value ❌📌 Example InsightList A: 1 → 9 → 1 → 2 → 4 List B: 3 → 2 → 4Even though both lists have 2 and 4:👉 They intersect ONLY if they point to the same node in memory📦 Constraints1 <= m, n <= 3 * 10^41 <= Node.val <= 10^5No cycles in the list🔍 First Thought (Brute Force)IdeaFor every node in List A:👉 Traverse entire List B and check if nodes matchComplexityTime: O(m * n) ❌Space: O(1)Too slow.🧩 Better Approach: Length Difference MethodIdeaFind length of both listsMove the longer list ahead by the differenceNow traverse both togetherWhy This Works👉 After skipping extra nodes, both pointers are equally far from intersectionComplexityTime: O(m + n)Space: O(1)🚀 Optimal Approach: Two Pointer Switching TrickNow comes the most beautiful trick 🔥🧠 Core IdeaWe use two pointers:pointerA → starts from headApointerB → starts from headB🎯 The Magic Trick👉 When a pointer reaches end → switch it to the other list🔄 VisualizationLet’s say:Length A = mLength B = nPointer paths:pointerA travels: A → BpointerB travels: B → ASo both travel:m + n distance💡 Why They Meet👉 If intersection exists → they meet at intersection👉 If not → both reach null at same time🔥 Step-by-Step ExampleA: 4 → 1 → 8 → 4 → 5 B: 5 → 6 → 1 → 8 → 4 → 5Iteration:A pointer: 4 → 1 → 8 → 4 → 5 → null → 5 → 6 → 1 → 8 B pointer: 5 → 6 → 1 → 8 → 4 → 5 → null → 4 → 1 → 8👉 They meet at 8💻 Clean Java Code (Optimal Solution)public class Solution { public ListNode getIntersectionNode(ListNode headA, ListNode 
headB) {
        ListNode tempa = headA;
        ListNode tempb = headB;
        // Traverse until both pointers meet
        while (tempa != tempb) {
            // Move pointer A
            if (tempa != null) tempa = tempa.next;
            else tempa = headB;
            // Move pointer B
            if (tempb != null) tempb = tempb.next;
            else tempb = headA;
        }
        // Either intersection node OR null
        return tempa;
    }
}

⏱️ Complexity
Time Complexity: O(m + n)
Space Complexity: O(1)

⚖️ Approach Comparison
Approach             Time     Space   Notes
Brute Force          O(m*n)   O(1)    Too slow
Length Difference    O(m+n)   O(1)    Good
Two Pointer Switch   O(m+n)   O(1)    Best & elegant

❌ Common Mistakes
Comparing values instead of nodes ❌
Forgetting to switch lists
Not handling null correctly
Assuming intersection always exists

🔥 Interview Insight
This is one of the most clever pointer problems. It teaches:
👉 How to equalize paths without extra memory
👉 How pointer switching solves alignment problems

🧠 Final Thought
At first, this trick feels like magic… but actually it’s just: equalizing path lengths.

🚀 Conclusion
The Intersection of Two Linked Lists problem is a perfect example of:
Smart thinking > brute force
Clean logic > complex code
👉 Tip: Whenever two lists need alignment, think: "Can I make both pointers travel equal distance?" That’s where the magic begins ✨
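The Length Difference method above is described only in words; a minimal Java sketch of it, with the usual LeetCode ListNode definition included and the class name assumed, could look like this:

```java
// Sketch of the Length Difference method described above, with a minimal
// ListNode definition (the standard LeetCode node) included for completeness.
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class LengthDifferenceSolution {
    public ListNode getIntersectionNode(ListNode headA, ListNode headB) {
        int lenA = length(headA);
        int lenB = length(headB);
        // Advance the longer list by the difference in lengths
        while (lenA > lenB) { headA = headA.next; lenA--; }
        while (lenB > lenA) { headB = headB.next; lenB--; }
        // Both pointers are now equally far from the intersection (or the end)
        while (headA != headB) {
            headA = headA.next;
            headB = headB.next;
        }
        return headA; // the intersection node, or null if none exists
    }

    private int length(ListNode head) {
        int len = 0;
        for (ListNode cur = head; cur != null; cur = cur.next) len++;
        return len;
    }
}
```

Note that the final comparison is on node references, not values — exactly the clarification the problem statement insists on.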

Linked ListTwo PointersIntersectionPointer SwitchingDSALeetCodeEasy
Reverse LinkedList (LeetCode 206) – The One Trick That Changes Everything

Reverse LinkedList (LeetCode 206) – The One Trick That Changes Everything

🚀 Try the Problem
Practice here: https://leetcode.com/problems/reverse-linked-list/

🤔 Let’s Start With a Simple Question…
What happens if you take this:
1 → 2 → 3 → 4 → 5
…and try to reverse it?
You want:
5 → 4 → 3 → 2 → 1
Sounds easy, right? But here’s the catch:
👉 You can only move forward in a linked list
👉 There is no backward pointer
So how do we reverse something that only goes one way?

🧠 The Core Problem (In Simple Words)
You are given the head of a linked list.
👉 Reverse the list
👉 Return the new head

📦 Constraints
0 <= number of nodes <= 5000
-5000 <= Node.val <= 5000

🔍 Before Jumping to Code — Think Like This
When solving this, your brain might go:
Can I store values somewhere and reverse? ❌ (extra space)
Can I rebuild the list? ❌ (unnecessary)
Can I just change links? ✅ (YES, THIS IS THE KEY)

⚡ The Breakthrough Idea
👉 Instead of moving nodes
👉 We reverse the direction of pointers

🎯 Visual Intuition (Very Important)
Let’s take:
1 → 2 → 3 → 4 → null
We want:
null ← 1 ← 2 ← 3 ← 4
But how?

🧩 The 3-Pointer Magic Trick
We use 3 pointers:
prev → stores previous node
curr → current node
next → stores next node

🔄 Step-by-Step Transformation
Initial State:
prev = null
curr = 1 → 2 → 3 → 4
Step 1:
Save next node: f = 2
Reverse link: 1 → null
Move pointers: prev = 1, curr = 2
Step 2:
f = 3
2 → 1
Move: prev = 2, curr = 3
Continue…
Eventually:
4 → 3 → 2 → 1 → null

💡 Final Insight
👉 Each step reverses one link
👉 At the end, prev becomes the new head

✅ Clean Java Code (Iterative Approach)

class Solution {
    public ListNode reverseList(ListNode head) {
        // Previous pointer starts as null
        ListNode prev = null;
        // Current pointer starts from head
        ListNode curr = head;
        while (curr != null) {
            // Store next node
            ListNode f = curr.next;
            // Reverse the link
            curr.next = prev;
            // Move prev forward
            prev = curr;
            // Move curr forward
            curr = f;
        }
        // New head is prev
        return prev;
    }
}

⏱️ Complexity
Time Complexity: O(n) — we traverse the list once.
Space Complexity: O(1) — no extra space used.

🔁 Another Way: Recursive Approach
Now let’s think differently…
👉 What if we reverse the rest of the list first, then fix the current node?

🧠 Recursive Idea
For:
1 → 2 → 3 → 4
We:
Reverse from 2 → 3 → 4
Then fix 1

💻 Recursive Code

class Solution {
    public ListNode reverseList(ListNode head) {
        // Base case
        if (head == null || head.next == null)
            return head;
        // Reverse rest
        ListNode newHead = reverseList(head.next);
        // Fix current node
        head.next.next = head;
        head.next = null;
        return newHead;
    }
}

⚖️ Iterative vs Recursive
Approach    Pros                   Cons
Iterative   Fast, O(1) space       Slightly tricky to understand
Recursive   Elegant, clean logic   Uses stack (O(n) space)

❌ Common Mistakes
Forgetting to store next node
Losing reference to rest of list
Not updating pointers correctly
Returning wrong node

🔥 Real Interview Insight
This problem is not just about reversing a list. It teaches:
👉 Pointer manipulation
👉 In-place updates
👉 Thinking in steps
👉 Breaking problems into small operations

🧠 Final Thought
At first, this problem feels tricky… but once you understand this line:
curr.next = prev
👉 Everything clicks.

🚀 Conclusion
The Reverse Linked List problem is one of the most important foundational problems in DSA. Mastering it will:
Boost your confidence
Strengthen pointer logic
Help in many advanced problems
👉 Tip: Practice this until you can visualize pointer movement in your head — that’s when you truly master linked lists.

Linked ListPointer ManipulationIterative ApproachRecursionLeetCodeEasy
Delete Middle Element of Stack Without Extra Space | Java Recursive Solution

Delete Middle Element of Stack Without Extra Space | Java Recursive Solution

Introduction
Stack-based problems are a core part of data structures and algorithms (DSA) interviews. One such interesting and frequently asked question is deleting the middle element of a stack without using any additional data structure. At first glance, this problem may seem tricky because stacks only allow access to the top element. However, with the help of recursion, it becomes an elegant and intuitive solution. In this article, we will break down the problem, build the intuition, and implement an efficient recursive approach step by step.
Link of Problem: GeeksforGeeks – Delete Middle of a Stack

Problem Statement
Given a stack s, delete the middle element of the stack without using any additional data structure.
Definition of Middle Element
The middle element is defined as: floor((size_of_stack + 1) / 2)
Indexing starts from the bottom of the stack (1-based indexing)

Examples
Example 1
Input: s = [10, 20, 30, 40, 50]
Output: [50, 40, 20, 10]
Explanation: Middle index = (5+1)/2 = 3, Middle element = 30 → removed
Example 2
Input: s = [10, 20, 30, 40]
Output: [40, 30, 10]
Explanation: Middle index = (4+1)/2 = 2, Middle element = 20 → removed

Key Insight
Stacks follow LIFO (Last In, First Out), meaning:
You can only access/remove the top element
You cannot directly access the middle
So how do we solve it? We use recursion to:
Pop elements until we reach the middle
Remove the middle element
Push back all other elements
This way, no extra data structure is used — just the recursion call stack.

Approach: Recursive Solution
Idea
Calculate the middle position
Recursively remove elements from the top
When the middle is reached → delete it
While returning, push elements back

Code (Java)

import java.util.Stack;

class Solution {
    void findMid(Stack<Integer> s, int mid) {
        // When current stack size equals middle position
        if (s.size() == mid) {
            s.pop(); // delete middle element
            return;
        }
        int temp = s.pop(); // remove top element
        // Recursive call
        findMid(s, mid);
        // Push element back after recursion
        s.push(temp);
    }

    // Function to delete middle element of a stack
    public void deleteMid(Stack<Integer> s) {
        int mid = (s.size() + 1) / 2;
        findMid(s, mid);
    }
}

Step-by-Step Dry Run
Let’s take:
Stack (bottom → top): [10, 20, 30, 40, 50]
Middle index = 3
Recursion pops: 50 → 40
Now stack size = 3 → remove 30
Push back: 40 → 50
Final Stack: [10, 20, 40, 50]

Complexity Analysis
Time Complexity: O(n) — each element is removed and added once
Auxiliary Space: O(n) — due to recursion call stack

Why This Approach Works
Recursion simulates stack behavior
No extra data structures like arrays or lists are used
Maintains original order after deletion
Efficient and interview-friendly solution

Key Takeaways
Direct access to the middle is not possible in a stack
Recursion is the key to solving such constraints
Always think of breaking + rebuilding for stack problems
This pattern is useful in many stack-based interview questions

When This Problem Is Asked
This problem is commonly seen in:
Technical interviews
Coding platforms like GeeksforGeeks
Stack and recursion-based problem sets
It evaluates:
Understanding of stack operations
Recursive thinking
Problem-solving under constraints

Conclusion
Deleting the middle element of a stack without extra space is a classic example of using recursion effectively. While the problem may seem restrictive, the recursive approach provides a clean and optimal solution. Mastering this concept will help you tackle more advanced stack and recursion problems with confidence.

Frequently Asked Questions (FAQs)
1. Can this be solved without recursion?
Not efficiently without using another data structure. Recursion is the best approach under the given constraints.
2. Why not use an array or list?
The problem explicitly restricts the use of additional data structures.
3. What is the best approach?
The recursive approach is optimal with O(n) time and space complexity.
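The dry run can be verified end to end; here is a self-contained sketch that bundles the recursive logic above into one class (renamed DeleteMidSketch so it does not clash with the article's Solution):

```java
import java.util.Stack;

// Self-contained sketch of the recursive delete-middle logic above.
class DeleteMidSketch {
    void findMid(Stack<Integer> s, int mid) {
        if (s.size() == mid) {
            s.pop();        // the middle element is now on top: remove it
            return;
        }
        int temp = s.pop(); // peel off the current top
        findMid(s, mid);    // recurse toward the middle
        s.push(temp);       // restore the element while unwinding
    }

    public void deleteMid(Stack<Integer> s) {
        findMid(s, (s.size() + 1) / 2);
    }
}
```

For [10, 20, 30, 40, 50] this leaves [10, 20, 40, 50], and for the even-length [10, 20, 30, 40] it leaves [10, 30, 40], matching both examples.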

GeeksforGeeksEasyStackRecursionJava
Fast and Slow Pointer Technique in Linked List: Cycle Detection, Proof, and Complete Explanation

Fast and Slow Pointer Technique in Linked List: Cycle Detection, Proof, and Complete Explanation

🚀 Before We StartTry these problems (optional but helpful):https://leetcode.com/problems/linked-list-cycle/https://leetcode.com/problems/linked-list-cycle-ii/🤔 Let’s Talk Honestly…When you first learn this technique, you’re told:👉 “Slow moves 1 step, fast moves 2 steps”👉 “If they meet → cycle exists”But your brain asks:❓ Why 2 steps?❓ Why do they meet at all?❓ Why does resetting pointer find cycle start?❓ Is this magic or logic?👉 Let’s answer each doubt one by one.🧩 Doubt 1: Why do we even use two pointers?❓ Question:Why not just use one pointer?✅ Answer:With one pointer:You can only move forwardYou cannot detect loops efficiently👉 Two pointers create a relative motionThat relative motion is the key.🧩 Doubt 2: Why fast = 2 steps and slow = 1 step?❓ Question:Why exactly 2 and 1?✅ Answer:We need:Fast speed > Slow speedSo that:👉 Fast catches up to slow🧠 Think like this:If both move same speed:Slow → 1 stepFast → 1 step👉 They will NEVER meet ❌If:Slow → 1 stepFast → 2 steps👉 Fast gains 1 node every step🔥 Key Insight:Relative speed = fast - slow = 1👉 This means fast is closing the gap by 1 node every step🧩 Doubt 3: Why do they ALWAYS meet in a cycle?❓ Question:Okay, fast is faster… but why guaranteed meeting?🧠 Imagine a Circular TrackInside a cycle, the list behaves like:Circle of length = λNow:Slow moves 1 stepFast moves 2 steps🔄 Gap BehaviorEach step:Gap = Gap - 1Because fast is catching up.Eventually:Gap = 0👉 They meet 🎯💡 Simple AnalogyTwo runners on a circular track:One is fasterOne is slower👉 Faster runner will lap and meet slower runner🧩 Doubt 4: What if there is NO cycle?❓ Question:Why does this fail without cycle?✅ Answer:If no cycle:List ends → fast reaches null👉 No loop → no meeting🧩 Doubt 5: Where do they meet?❓ Question:Do they meet at cycle start?❌ Answer:No, not necessarily.They meet somewhere inside the cycle🧩 Doubt 6: Then how do we find the cycle start?Now comes the most important part.🎯 SetupLet’s define:a = distance from head to cycle startb = distance 
from cycle start to meeting pointc = remaining cycleCycle length:λ = b + c🧠 What happens when they meet?Slow distance:a + bFast distance:2(a + b)Using relation:2(a + b) = a + b + kλSolve:a + b = kλ=> a = kλ - b=> a = (k-1)λ + (λ - b)💥 Final Meaninga = distance from meeting point to cycle start🔥 BIG CONCLUSION👉 Distance from head → cycle start👉 = Distance from meeting point → cycle start🧩 Doubt 7: Why resetting pointer works?❓ Question:Why move one pointer to head?✅ Answer:Because:One pointer is a away from startOther is also a away (via cycle)👉 Move both 1 step:They meet at:Cycle Start 🎯🔄 VisualizationHead → → → Cycle Start → → Meeting Point → → back to StartBoth pointers:One from headOne from meeting point👉 Same distance → meet at start🧩 Doubt 8: Can we use fast = 3 steps?❓ Question:Will this work?✅ Answer:Yes, BUT:Math becomes complexHarder to reasonNo extra benefit👉 So we use simplest:2 : 1 ratio🧠 Final Mental ModelThink in 3 steps:1. Different SpeedsFast moves faster → gap reduces2. Circular StructureCycle → positions repeat3. 
Guaranteed MeetingFinite positions + relative motion → collision🧩 TEMPLATE 1: Detect CycleListNode slow = head;ListNode fast = head;while(fast != null && fast.next != null){ slow = slow.next; fast = fast.next.next; if(slow == fast){ return true; }}return false;🧩 TEMPLATE 2: Find Cycle StartListNode slow = head;ListNode fast = head;while(fast != null && fast.next != null){ slow = slow.next; fast = fast.next.next; if(slow == fast){ slow = head; while(slow != fast){ slow = slow.next; fast = fast.next; } return slow; }}return null;🧩 TEMPLATE 3: Find Middle of Linked List❓ ProblemFind the middle node of a linked list.🧠 IntuitionFast moves twice as fast:When fast reaches end → slow reaches half👉 Slow = middle💻 CodeListNode slow = head;ListNode fast = head;while(fast != null && fast.next != null){ slow = slow.next; fast = fast.next.next;}return slow;⚠️ Even Length Case1 → 2 → 3 → 4 → 5 → 6👉 Returns 4 (second middle)❓ How to Get First Middle?while(fast.next != null && fast.next.next != null){ slow = slow.next; fast = fast.next.next;}return slow;🧩 Where Else This Technique Is Used?Detect cycleFind cycle startFind middle nodeCheck palindrome (linked list)Split list (merge sort)Intersection problems⚙️ Time & Space ComplexityTime: O(n)Space: O(1)❌ Common MistakesForgetting fast.next != nullThinking meeting point = cycle start ❌Not resetting pointer properly🧠 Final Mental ModelThink in 3 steps:1. Speed DifferenceFast moves faster → gap reduces2. Circular NatureCycle → repeated positions3. Guaranteed MeetingFinite nodes + relative motion → collision🔥 One Line to RememberFast catches slow because it reduces the gap inside a loop.🚀 ConclusionNow you understand:✅ Why fast moves faster✅ Why they meet✅ Why meeting proves cycle✅ Why reset gives cycle start✅ How to find middle using same logic👉 This is not just a trick…It’s a mathematical guarantee based on motion and cycles.💡 Master this once, and you’ll solve multiple linked list problems easily.
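One application listed above, the palindrome check, combines Template 3 with in-place reversal of the second half. A sketch (class names assumed, not from the original article; ListNode is the standard LeetCode node):

```java
// Sketch: palindrome check for a singly linked list (e.g. LeetCode 234),
// combining the middle-finding template above with in-place reversal.
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class PalindromeSketch {
    public boolean isPalindrome(ListNode head) {
        // Step 1: slow lands on the middle (second middle for even length)
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        // Step 2: reverse the second half starting at slow
        ListNode prev = null, curr = slow;
        while (curr != null) {
            ListNode next = curr.next;
            curr.next = prev;
            prev = curr;
            curr = next;
        }
        // Step 3: walk from both ends toward the middle, comparing values
        ListNode left = head, right = prev;
        while (right != null) {
            if (left.val != right.val) return false;
            left = left.next;
            right = right.next;
        }
        return true;
    }
}
```

Because slow is the second middle for even lengths, the comparison loop naturally terminates at the right place for both odd and even lists.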

Linked ListFast & Slow PointerTwo Pointer TechniqueFloyd AlgorithmDSA PatternsDeep Intuition
LeetCode 123 — Best Time to Buy and Sell Stock III | At Most Two Transactions Explained

LeetCode 123 — Best Time to Buy and Sell Stock III | At Most Two Transactions Explained

🚀 Try This Problem First!Before reading the solution, attempt it yourself on LeetCode — you will learn much more that way.🔗 Problem Link: https://leetcode.com/problems/best-time-to-buy-and-sell-stock-iii/What Is the Problem Asking?You have a list of stock prices, one for each day. You are allowed to buy and sell the stock, but with one important restriction — you can make at most two transactions in total. A transaction means one buy followed by one sell.You cannot hold two stocks at the same time, meaning you must sell whatever you are holding before you buy again.Your job is to find the maximum profit you can make by doing at most two transactions. If there is no way to make any profit, return 0.Example to build intuition: prices = [3, 3, 5, 0, 0, 3, 1, 4]If you buy on day 4 (price 0) and sell on day 6 (price 3), profit = 3. Then buy on day 7 (price 1) and sell on day 8 (price 4), profit = 3. Total profit = 6. That is the best you can do.Constraints:1 ≤ prices.length ≤ 10⁵0 ≤ prices[i] ≤ 10⁵Why Is This Harder Than the Previous Stock Problems?In LeetCode 121, you could only make one transaction — so you just found the best single buy-sell pair.In LeetCode 122, you could make unlimited transactions — so you greedily collected every upward price movement.Here, you are limited to exactly two transactions. Greedy does not work anymore because sometimes skipping a small profit early on allows you to make a much bigger combined profit across two trades. You need a smarter strategy.The Big Idea Behind the SolutionHere is a simple way to think about it.If you are allowed two transactions, you can imagine drawing a dividing line somewhere in the price array. The first transaction happens entirely on the left side of that line. 
The second transaction happens entirely on the right side.So the problem becomes:For every possible dividing position, find the best profit on the left side and the best profit on the right side.Add them together.The maximum sum across all dividing positions is your answer.The challenge is doing this efficiently without checking every position with a nested loop, which would be O(N²) and too slow.Approach 1 — Left-Right Prefix DPThe idea:Instead of trying every split point from scratch, precompute the answers in advance.Build a leftProfit array where leftProfit[i] stores the best single transaction profit you can make using prices from day 0 to day i.Build a rightProfit array where rightProfit[i] stores the best single transaction profit you can make using prices from day i to the last day.Then for every index i, the answer if you split at i is simply leftProfit[i] + rightProfit[i]. Take the maximum across all indices.How to build leftProfit:Go left to right. Keep track of the minimum price seen so far. At each day i, the best profit you could make ending at or before day i is either the same as the day before, or selling today at today's price minus the minimum price seen so far. Take whichever is larger.How to build rightProfit:Go right to left. Keep track of the maximum price seen so far from the right. At each day i, the best profit you could make starting at or after day i is either the same as the day after, or buying today at today's price and selling at the maximum price seen to the right. 
Take whichever is larger.Dry Run — prices = [3, 3, 5, 0, 0, 3, 1, 4]Building leftProfit left to right, tracking minimum price seen:Day 0 → min=3, leftProfit[0] = 0 (nothing to compare yet) Day 1 → min=3, best profit = 3-3 = 0, leftProfit[1] = max(0, 0) = 0 Day 2 → min=3, best profit = 5-3 = 2, leftProfit[2] = max(0, 2) = 2 Day 3 → min=0, best profit = 0-0 = 0, leftProfit[3] = max(2, 0) = 2 Day 4 → min=0, best profit = 0-0 = 0, leftProfit[4] = max(2, 0) = 2 Day 5 → min=0, best profit = 3-0 = 3, leftProfit[5] = max(2, 3) = 3 Day 6 → min=0, best profit = 1-0 = 1, leftProfit[6] = max(3, 1) = 3 Day 7 → min=0, best profit = 4-0 = 4, leftProfit[7] = max(3, 4) = 4leftProfit = [0, 0, 2, 2, 2, 3, 3, 4]Building rightProfit right to left, tracking maximum price seen:Day 7 → max=4, rightProfit[7] = 0 (nothing to compare yet) Day 6 → max=4, best profit = 4-1 = 3, rightProfit[6] = max(0, 3) = 3 Day 5 → max=4, best profit = 4-3 = 1, rightProfit[5] = max(3, 1) = 3 Day 4 → max=4, best profit = 4-0 = 4, rightProfit[4] = max(3, 4) = 4 Day 3 → max=4, best profit = 4-0 = 4, rightProfit[3] = max(4, 4) = 4 Day 2 → max=5, best profit = 5-5 = 0, rightProfit[2] = max(4, 0) = 4 Day 1 → max=5, best profit = 5-3 = 2, rightProfit[1] = max(4, 2) = 4 Day 0 → max=5, best profit = 5-3 = 2, rightProfit[0] = max(4, 2) = 4rightProfit = [4, 4, 4, 4, 4, 3, 3, 0]Combining at each index: Index 0 → 0 + 4 = 4 Index 1 → 0 + 4 = 4 Index 2 → 2 + 4 = 6 Index 3 → 2 + 4 = 6 Index 4 → 2 + 4 = 6 Index 5 → 3 + 3 = 6 Index 6 → 3 + 3 = 6 Index 7 → 4 + 0 = 4Maximum = 6 ✅Implementation code:class Solution { public int maxProfit(int[] prices) { int lefProf[] = new int[prices.length]; int rigProf[] = new int[prices.length]; int j = 1; int i = 0; int coL = 1; while (i < j && j < prices.length) { if (prices[i] > prices[j]) { i = j; if (coL - 1 != -1) { lefProf[coL] = lefProf[coL - 1]; } coL++; } else { int max = prices[j] - prices[i]; if (coL - 1 != -1) { lefProf[coL] = Math.max(lefProf[coL - 1], max); coL++; } else { 
lefProf[coL] = max; coL++; } } j++; } int ii = prices.length - 2; int jj = prices.length - 1; int coR = prices.length - 2; while (ii >= 0) { if (prices[ii] > prices[jj]) { jj = ii; if (coR + 1 < prices.length) { rigProf[coR] = rigProf[coR + 1]; } coR--; } else { int max = prices[jj] - prices[ii]; if (coR + 1 < prices.length) { rigProf[coR] = Math.max(rigProf[coR + 1], max); coR--; } else { rigProf[coR] = max; coR--; } } ii--; } int maxAns = 0; for (int k = 0; k < lefProf.length; k++) { maxAns = Math.max(maxAns, lefProf[k] + rigProf[k]); } return maxAns; }}The approach here is exactly right. You are building the left and right profit arrays using a two pointer scanning technique — the same idea as LeetCode 121 but applied twice, once going forward and once going backward. The manual counters coL and coR make it a bit harder to follow, but the logic underneath is solid.Here is the same approach written in a cleaner way so it is easier to read and debug:class Solution { public int maxProfit(int[] prices) { int n = prices.length; int[] leftProfit = new int[n]; int[] rightProfit = new int[n]; int minPrice = prices[0]; for (int i = 1; i < n; i++) { minPrice = Math.min(minPrice, prices[i]); leftProfit[i] = Math.max(leftProfit[i - 1], prices[i] - minPrice); } int maxPrice = prices[n - 1]; for (int i = n - 2; i >= 0; i--) { maxPrice = Math.max(maxPrice, prices[i]); rightProfit[i] = Math.max(rightProfit[i + 1], maxPrice - prices[i]); } int maxProfit = 0; for (int i = 0; i < n; i++) { maxProfit = Math.max(maxProfit, leftProfit[i] + rightProfit[i]); } return maxProfit; }}Time Complexity: O(N) — three separate linear passes through the array. Space Complexity: O(N) — two extra arrays of size N.Approach 2 — Four Variable State Machine DPThe idea:Instead of building arrays, what if you could track everything in just four variables and solve it in a single pass?Think about what is happening at any point in time as you go through the prices. 
You are always in one of these situations:You have bought your first stock and are currently holding it. You have sold your first stock and are sitting on that profit. You have bought your second stock (using the profit from the first sale) and are holding it. You have sold your second stock and collected your total profit.These four situations are your four states. For each one, you always want to know — what is the best possible profit I could have in this state up to today?Here is how each state updates as you look at a new price:buy1 — This represents the best outcome after buying the first stock. Buying costs money, so this is negative. As you see each new price, you ask: is it better to have bought on some earlier day, or to buy today? You keep whichever gives you the least cost, which means the highest value of negative price.sell1 — This represents the best profit after completing the first transaction. As you see each new price, you ask: is it better to have sold on some earlier day, or to sell today using the best buy1 we have? Selling today means adding today's price on top of buy1.buy2 — This represents the best outcome after buying the second stock. The key insight here is that you are not starting from zero — you already have the profit from the first transaction in your pocket. So buying the second stock costs today's price but you subtract it from sell1. As you see each new price, you ask: is it better to have bought the second stock earlier, or to buy today using the profit from sell1?sell2 — This is your final answer. It represents the best total profit after completing both transactions. As you see each new price, you ask: is it better to have completed both transactions earlier, or to sell the second stock today using the best buy2 we have?The beautiful thing is that all four updates happen in order on every single day, in one pass through the array. 
Each variable feeds into the next — buy1 feeds sell1, sell1 feeds buy2, buy2 feeds sell2.Dry Run — prices = [3, 3, 5, 0, 0, 3, 1, 4]We start with buy1 and buy2 as very negative (not yet bought anything) and sell1 and sell2 as 0 (no profit yet).Day 0, price = 3: buy1 = max(-∞, -3) = -3. Best cost to buy first stock is spending 3. sell1 = max(0, -3+3) = 0. Selling immediately gives no profit. buy2 = max(-∞, 0-3) = -3. Best cost to buy second stock after first sale is 3. sell2 = max(0, -3+3) = 0. No total profit yet.Day 1, price = 3: Nothing changes — same price as day 0.Day 2, price = 5: buy1 = max(-3, -5) = -3. Still best to have bought at price 3. sell1 = max(0, -3+5) = 2. Selling today at 5 after buying at 3 gives profit 2. buy2 = max(-3, 2-5) = -3. Buying second stock today costs too much relative to first profit. sell2 = max(0, -3+5) = 2. Completing both trades gives 2 so far.Day 3, price = 0: buy1 = max(-3, -0) = 0. Buying today at price 0 is the best possible first buy. sell1 = max(2, 0+0) = 2. Selling today at 0 gives nothing — keep the 2 from before. buy2 = max(-3, 2-0) = 2. Using the profit of 2 and buying today at 0 gives buy2 = 2. sell2 = max(2, 2+0) = 2. Selling second stock today at 0 gives nothing extra.Day 4, price = 0: Nothing changes — same price as day 3.Day 5, price = 3: buy1 = max(0, -3) = 0. Still best to have bought at price 0. sell1 = max(2, 0+3) = 3. Selling today at 3 after buying at 0 gives profit 3. buy2 = max(2, 3-3) = 2. Buying second stock today at 3 using sell1=3 gives 0. Keep 2. sell2 = max(2, 2+3) = 5. Selling second stock today at 3 after buy2=2 gives total 5.Day 6, price = 1: buy1 = max(0, -1) = 0. Still best to have bought at 0. sell1 = max(3, 0+1) = 3. Selling today at 1 gives less than 3 — no change. buy2 = max(2, 3-1) = 2. Buying second stock at 1 using sell1=3 gives 2 — same as before. sell2 = max(5, 2+1) = 5. No improvement yet.Day 7, price = 4: buy1 = max(0, -4) = 0. No change. sell1 = max(3, 0+4) = 4. 
Selling today at 4 after buying at 0 gives profit 4 — new best. buy2 = max(2, 4-4) = 2. No improvement. sell2 = max(5, 2+4) = 6. Selling second stock today at 4 with buy2=2 gives total 6 — new best.Output: 6 ✅class Solution { public int maxProfit(int[] prices) { int buy1 = Integer.MIN_VALUE; int sell1 = 0; int buy2 = Integer.MIN_VALUE; int sell2 = 0; for (int price : prices) { buy1 = Math.max(buy1, -price); sell1 = Math.max(sell1, buy1 + price); buy2 = Math.max(buy2, sell1 - price); sell2 = Math.max(sell2, buy2 + price); } return sell2; }}Time Complexity: O(N) — single pass through the array. Space Complexity: O(1) — only four integer variables, no extra arrays.Approach 3 — Generalizing to At Most K TransactionsOnce you understand Approach 2, there is a natural question — what if instead of two transactions, you were allowed K transactions? That is exactly LeetCode 188.Instead of four hardcoded variables, you use two arrays of size K. buy[j] tracks the best outcome after the j-th buy and sell[j] tracks the best outcome after the j-th sell. The logic is identical to Approach 2, just in a loop.For this problem set K = 2 and it gives the same answer:class Solution { public int maxProfit(int[] prices) { int k = 2; int[] buy = new int[k]; int[] sell = new int[k]; Arrays.fill(buy, Integer.MIN_VALUE); for (int price : prices) { for (int j = 0; j < k; j++) { buy[j] = Math.max(buy[j], (j == 0 ? 0 : sell[j - 1]) - price); sell[j] = Math.max(sell[j], buy[j] + price); } } return sell[k - 1]; }}Time Complexity: O(N × K) — for K=2 this is effectively O(N). Space Complexity: O(K) — two small arrays of size K.If you understand this, LeetCode 188 becomes a five minute problem.Comparing All Three ApproachesApproach 1 — Left-Right Prefix DP (Your Approach)You build two arrays — one scanning left to right, one scanning right to left — and combine them. This is the most visual and intuitive approach. 
It is easy to explain because you are essentially solving LeetCode 121 twice from opposite ends and adding the results. The downside is it uses O(N) extra space for the two arrays.

Approach 2 — Four Variable State Machine
You solve the entire problem in a single pass using just four variables. No arrays, no extra memory. This is the most efficient and elegant solution. Once you understand what each variable represents, the code almost writes itself. This is what most interviewers are hoping to see.

Approach 3 — General K-Transaction DP
This is Approach 2 extended to handle any number of transactions using arrays instead of hardcoded variables. Solving LeetCode 123 with this approach means you have already solved LeetCode 188 as a bonus.

Mistakes That Catch Most People
Trying to use the greedy approach from LeetCode 122: Adding every upward price movement does not work when you are limited to two transactions. You need to pick the two best non-overlapping windows, not collect everything.
Buying twice without selling in between: The state machine prevents this naturally — buy2 depends on sell1, so you can never enter the second buy before completing the first sell.
Initializing buy variables to 0: Both buy1 and buy2 must start at Integer.MIN_VALUE. Setting them to 0 implies you bought a stock for free, which is wrong and will give inflated profits.
Returning the wrong variable: Always return sell2, not buy2 or sell1. sell2 is the only variable that represents a completed two-transaction profit.

Where This Fits in the Stock Series
LeetCode 121 — One transaction only. Find the single best buy-sell pair. [ Blog is also available on this - Read Now ]
LeetCode 122 — Unlimited transactions. Collect every upward price movement greedily. [ Blog is also available on this - Read Now ]
LeetCode 188 — At most K transactions.
Direct extension of Approach 3 above.LeetCode 309 — Unlimited transactions but with a cooldown day after every sale.LeetCode 714 — Unlimited transactions but with a fee charged on every transaction.Each problem adds one new constraint on top of the previous one. If you understand LeetCode 123 deeply — especially Approach 2 — every other problem in this series becomes a small modification of the same framework.Key Takeaways✅ When you are limited to two transactions, greedy fails. You need to find two non-overlapping windows that together give maximum profit.✅ The left-right prefix DP approach is the most intuitive — precompute the best single transaction from both ends, then combine at every split point.✅ The four variable state machine solves it in one pass with O(1) space. Each variable tracks the best outcome for one state — buy1 feeds sell1, sell1 feeds buy2, buy2 feeds sell2.✅ Always initialize buy variables to Integer.MIN_VALUE to represent "not yet bought anything."✅ Understanding Approach 3 here means LeetCode 188 is already solved — just change the hardcoded 2 to K.Happy Coding! This problem is the turning point in the stock series where DP becomes unavoidable — once you crack it, the rest of the series feels manageable. 🚀
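Approach 1 (the left-right prefix DP) is described above but not listed in full here. Below is a minimal sketch of it — the array names left and right are illustrative, matching the description: left[i] is the best single-transaction profit within prices[0..i], and right[i] is the best within prices[i..n-1].

```java
class Solution {
    public int maxProfit(int[] prices) {
        int n = prices.length;
        if (n == 0) return 0;
        int[] left = new int[n];   // best single transaction in prices[0..i]
        int[] right = new int[n];  // best single transaction in prices[i..n-1]

        // Left pass: track the minimum price so far (best buy day)
        int minSoFar = prices[0];
        for (int i = 1; i < n; i++) {
            minSoFar = Math.min(minSoFar, prices[i]);
            left[i] = Math.max(left[i - 1], prices[i] - minSoFar);
        }

        // Right pass: track the maximum price so far (best sell day)
        int maxSoFar = prices[n - 1];
        for (int i = n - 2; i >= 0; i--) {
            maxSoFar = Math.max(maxSoFar, prices[i]);
            right[i] = Math.max(right[i + 1], maxSoFar - prices[i]);
        }

        // Combine: best "first transaction up to day i" + "second from day i on"
        int best = 0;
        for (int i = 0; i < n; i++) {
            best = Math.max(best, left[i] + right[i]);
        }
        return best;
    }
}
```

Note that left[i] + right[i] lets both transactions "touch" day i; that is harmless, because selling and re-buying on the same day is equivalent to doing fewer transactions.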

LeetCodeDynamic ProgrammingHardJavaDSAPrefix DPArraysStock Series
LeetCode 121 — Best Time to Buy and Sell Stock I | Two Pointer / Sliding Window Explained

LeetCode 121 — Best Time to Buy and Sell Stock I | Two Pointer / Sliding Window Explained

🚀 Try This Problem First!Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.🔗 Problem Link: https://leetcode.com/problems/best-time-to-buy-and-sell-stock/Understanding the ProblemYou are given an array prices where prices[i] is the stock price on day i. You can buy on one day and sell on a later day. You want to maximize your profit.Goal: Return the maximum profit possible. If no profit can be made, return 0.Key Rules:You must buy before you sell — no going back in time.You can only make one transaction (one buy, one sell).If prices only go down, profit is impossible — return 0.Constraints:1 ≤ prices.length ≤ 10⁵0 ≤ prices[i] ≤ 10⁴Understanding the Problem With a Real-World AnalogyImagine you're watching a stock ticker for a week. You want to pick the single best day to buy and a later day to sell. You can't predict the future — you just have historical prices. The question is: looking back at all the prices, what was the best single buy-sell pair you could have chosen?Key Observation (Before Writing a Single Line of Code)The profit on any given pair of days is:profit = prices[sell_day] - prices[buy_day]To maximize profit, for every potential sell day, we want the lowest possible buy price seen so far to the left of it. This is the core insight that drives the efficient solution.If at any point the current price is lower than our tracked minimum, there is no point keeping the old minimum — the new one is strictly better for any future sell day.Intuition — The Two Pointer ApproachWe use two pointers i (buy pointer) and j (sell pointer), both starting at the beginning with i = 0 and j = 1.At every step we ask: is prices[j] greater than prices[i]?If yes → this is a profitable pair. Compute the profit and update the maximum if it's better.If no → prices[j] is cheaper than prices[i]. There's no reason to keep i where it is — any future sell day would be better served by buying at j instead. 
So we move i to j.Either way, j always moves forward by 1 until it reaches the end of the array.Why Moving i to j is CorrectThis is the most important conceptual point. When prices[j] < prices[i], you might wonder — why not just skip j and move on? Why move i?Because for any future day k > j, the profit buying at j is:prices[k] - prices[j]And the profit buying at i is:prices[k] - prices[i]Since prices[j] < prices[i], buying at j will always give a higher or equal profit for the same sell day k. So we update i = j — there is no scenario where keeping the old i is better.Dry Run — Example 1 (Step by Step)Input: prices = [7, 1, 5, 3, 6, 4]We start with i = 0, j = 1, maxP = 0.
Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 1. Since 7 > 1 → prices[j] is cheaper. Move i to j. i = 1, j moves to 2. maxP = 0.
Step 2: i = 1, j = 2 → prices[i] = 1, prices[j] = 5. Since 1 < 5 → Profitable! profit = 5 - 1 = 4. 4 > 0 → update maxP = 4. j moves to 3.
Step 3: i = 1, j = 3 → prices[i] = 1, prices[j] = 3. Since 1 < 3 → Profitable! profit = 3 - 1 = 2. 2 < 4 → maxP stays 4. j moves to 4.
Step 4: i = 1, j = 4 → prices[i] = 1, prices[j] = 6. Since 1 < 6 → Profitable! profit = 6 - 1 = 5. 5 > 4 → update maxP = 5. j moves to 5.
Step 5: i = 1, j = 5 → prices[i] = 1, prices[j] = 4. Since 1 < 4 → Profitable! profit = 4 - 1 = 3. 3 < 5 → maxP stays 5. j moves to 6. j = 6 = prices.length → loop ends.
Output: maxP = 5 ✅ (Buy at day 2 price=1, sell at day 5 price=6)Dry Run — Example 2 (Step by Step)Input: prices = [7, 6, 4, 3, 1]We start with i = 0, j = 1, maxP = 0.
Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 6. Since 7 > 6 → Move i to j. i = 1, j moves to 2. maxP = 0.
Step 2: i = 1, j = 2 → prices[i] = 6, prices[j] = 4. Since 6 > 4 → Move i to j. i = 2, j moves to 3. maxP = 0.
Step 3: i = 2, j = 3 → prices[i] = 4, prices[j] = 3. Since 4 > 3 → Move i to j. i = 3, j moves to 4. maxP = 0.
Step 4: i = 3, j = 4 → prices[i] = 3, prices[j] = 1. Since 3 > 1 → Move i to j. i = 4, j moves to 5. 
j = 5 = prices.length → loop ends.Output: maxP = 0 ✅ (Prices only went down, no profitable transaction exists)The Code — Implementation

class Solution {
    public int maxProfit(int[] prices) {
        int i = 0;      // Buy pointer — points to the current minimum price day
        int j = 1;      // Sell pointer — scans forward through the array
        int maxP = 0;   // Tracks the maximum profit seen so far

        // j scans from day 1 to the end of the array
        while (i < j && j < prices.length) {
            if (prices[i] > prices[j]) {
                // prices[j] is cheaper than the current buy price.
                // Move the buy pointer to j — buying here is strictly better
                // for any future sell day.
                i = j;
            } else {
                // prices[j] >= prices[i] — this is a profitable pair.
                // Calculate profit and update maxP if it's the best so far.
                int profit = prices[j] - prices[i];
                if (profit > maxP) {
                    maxP = profit;
                }
            }
            // Always move the sell pointer forward
            j++;
        }
        // If no profitable transaction was found, maxP remains 0
        return maxP;
    }
}

Code Walkthrough — Step by StepInitialization: i = 0 acts as our buy day pointer (always pointing to the lowest price seen so far). j = 1 is our sell day pointer scanning forward. maxP = 0 is our answer — defaulting to 0 handles the case where no profit is possible.The loop condition: i < j ensures we never sell before buying. j < prices.length ensures we don't go out of bounds.When prices[i] > prices[j]: The current sell day price is cheaper than our buy day. We update i = j, because buying at j dominates buying at i for all future sell days.When prices[j] >= prices[i]: We have a potential profit. We compute it and update maxP if it beats the current best.j always increments: Regardless of which branch we take, j++ moves the sell pointer forward every iteration — this is what drives the loop to completion.Common Mistakes to AvoidNot returning 0 for no-profit cases: If prices are strictly decreasing, maxP never gets updated and stays 0. 
The initialization maxP = 0 handles this correctly — never initialize it to a negative number.Using a nested loop (brute force): A common first instinct is two nested loops checking every pair. That is O(N²) and will TLE on large inputs. The two pointer approach solves it in O(N).Thinking you need to sort: Sorting destroys the order of days, which is fundamental to the problem. Never sort the prices array here.Moving i forward by 1 instead of to j: When prices[j] < prices[i], some people write i++ instead of i = j. This is wrong — you might miss the new minimum entirely and waste iterations.Complexity AnalysisTime Complexity: O(N) The j pointer traverses the array exactly once from index 1 to the end. The i pointer only moves forward and never exceeds j. So the entire algorithm is a single linear pass — O(N).Space Complexity: O(1) No extra arrays, maps, or stacks are used. Only three integer variables — i, j, and maxP.Alternative Way to Think About It (Min Tracking)Some people write this problem using a minPrice variable instead of two pointers. 
Both approaches are equivalent — the two pointer framing is slightly more intuitive visually, but here is the min-tracking version for reference:

int minPrice = Integer.MAX_VALUE;
int maxProfit = 0;
for (int price : prices) {
    if (price < minPrice) {
        minPrice = price;
    } else if (price - minPrice > maxProfit) {
        maxProfit = price - minPrice;
    }
}
return maxProfit;

The logic is the same — always track the cheapest buy day seen so far, and for each day compute what profit would look like if you sold today.Similar Problems (Build on This Foundation)LeetCode 122 — Best Time to Buy and Sell Stock II (multiple transactions allowed) [ Blog is also available on this - Read Now ]LeetCode 123 — Best Time to Buy and Sell Stock III (at most 2 transactions) [ Blog is also available on this - Read Now ]LeetCode 188 — Best Time to Buy and Sell Stock IV (at most k transactions)LeetCode 309 — Best Time to Buy and Sell Stock with CooldownLeetCode 714 — Best Time to Buy and Sell Stock with Transaction FeeLeetCode 121 is the foundation. Master this one deeply before moving to the others — they all build on the same core idea.Key Takeaways✅ For every potential sell day, the best buy day is always the minimum price seen to its left — this drives the whole approach.✅ When the sell pointer finds a cheaper price than the buy pointer, always move the buy pointer there — it strictly dominates the old buy day for all future sells.✅ Initialize maxP = 0 so that no-profit scenarios (prices only going down) are handled automatically.✅ The two pointer approach solves this in a single linear pass — O(N) time and O(1) space.✅ j always increments every iteration — i only moves when a cheaper price is found.Happy Coding! If this clicked for you, the entire Stock series on LeetCode will feel much more approachable.
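If you want to run the two dry runs above end to end, here is a small driver wrapping the two pointer solution (the class name Demo is just for illustration):

```java
public class Demo {
    // Two pointer version from the article, wrapped for a quick check.
    static int maxProfit(int[] prices) {
        int i = 0, j = 1, maxP = 0;
        while (i < j && j < prices.length) {
            if (prices[i] > prices[j]) {
                i = j;  // found a cheaper buy day
            } else {
                maxP = Math.max(maxP, prices[j] - prices[i]);
            }
            j++;
        }
        return maxP;
    }

    public static void main(String[] args) {
        System.out.println(maxProfit(new int[]{7, 1, 5, 3, 6, 4})); // 5
        System.out.println(maxProfit(new int[]{7, 6, 4, 3, 1}));    // 0
    }
}
```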

LeetCodeTwo PointersEasyJavaDSA
Subsets Problem (LeetCode 78) Explained | Recursion, Iterative & Bit Manipulation

Subsets Problem (LeetCode 78) Explained | Recursion, Iterative & Bit Manipulation

IntroductionThe Subsets problem (LeetCode 78) is one of the most fundamental and frequently asked questions in coding interviews. It introduces the concept of generating a power set, which is a core idea in recursion, backtracking, and combinatorics.Mastering this problem helps in solving a wide range of advanced problems like combinations, permutations, and decision-based recursion.In this article, we will explore:Intuition behind subsetsRecursive (backtracking) approachIterative (loop-based) approachBit manipulation approachTime and space complexity analysisProblem StatementGiven an integer array nums of unique elements, return all possible subsets (the power set).Key PointsEach element can either be included or excludedNo duplicate subsetsReturn subsets in any orderExamplesExample 1Input:nums = [1, 2, 3]Output:[[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]Example 2Input:nums = [0]Output:[[], [0]]Key InsightFor each element, there are two choices:Include it OR Exclude itSo total subsets:2^nThis makes it a binary decision tree problem, very similar to:Permutation with SpacesBinary choices recursionBacktracking problemsApproach 1: Recursion + Backtracking (Most Important)IntuitionAt each index:Skip the elementInclude the elementBuild subsets step by step and backtrack.Java Code (With Explanation)

import java.util.*;

class Solution {
    List<List<Integer>> liss = new ArrayList<>();

    void solve(int[] an, int ind, List<Integer> lis) {
        // Base case: reached end → one subset formed
        if (ind == an.length) {
            liss.add(new ArrayList<>(lis)); // store copy
            return;
        }
        // Choice 1: Do NOT include current element
        solve(an, ind + 1, lis);
        // Choice 2: Include current element
        lis.add(an[ind]);
        solve(an, ind + 1, lis);
        // Backtrack: remove last added element
        lis.remove(lis.size() - 1);
    }

    public List<List<Integer>> subsets(int[] nums) {
        List<Integer> lis = new ArrayList<>();
        solve(nums, 0, lis);
        return liss;
    }
}

Dry Run (nums = [1,2])Start: [] → skip 1 → [] → skip 2 → [] → take 2 → [2] → 
take 1 → [1] → skip 2 → [1] → take 2 → [1,2]Final Output:[], [2], [1], [1,2]Approach 2: Iterative (Loop-Based)IntuitionStart with an empty subset:[ [] ]For each element:Add it to all existing subsetsCode

import java.util.*;

class Solution {
    public List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        result.add(new ArrayList<>());
        for (int num : nums) {
            int size = result.size();
            for (int i = 0; i < size; i++) {
                List<Integer> temp = new ArrayList<>(result.get(i));
                temp.add(num);
                result.add(temp);
            }
        }
        return result;
    }
}

How It WorksFor [1,2,3]:Start: [[]]Add 1 → [[], [1]]Add 2 → [[], [1], [2], [1,2]]Add 3 → [[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]Approach 3: Bit ManipulationIntuitionEach subset can be represented using a binary number:For n = 3:000 → []001 → [1]010 → [2]011 → [1,2]...Code

import java.util.*;

class Solution {
    public List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        int n = nums.length;
        int total = 1 << n; // 2^n
        for (int i = 0; i < total; i++) {
            List<Integer> subset = new ArrayList<>();
            for (int j = 0; j < n; j++) {
                if ((i & (1 << j)) != 0) {
                    subset.add(nums[j]);
                }
            }
            result.add(subset);
        }
        return result;
    }
}

Complexity Analysis
Approach | Time Complexity | Space Complexity
Recursion | O(2^n) | O(n) stack
Iterative | O(2^n) | O(2^n)
Bit Manipulation | O(2^n) | O(2^n)
Why All Approaches Are O(2ⁿ)Because:Total subsets = 2ⁿEach subset takes up to O(n) to constructWhen to Use Which ApproachRecursion / Backtracking → Best for interviews (easy to explain)Iterative → Clean and beginner-friendlyBit Manipulation → Best for optimization & advanced understandingKey TakeawaysSubsets = power set problemEvery element → 2 choicesThink in terms of decision treesBacktracking = build + undo (add/remove)Common Interview VariationsSubsets with duplicatesCombination sumPermutationsK-sized subsetsConclusionThe Subsets problem is a foundational DSA concept that appears across many interview questions. 
Understanding all approaches—especially recursion and iterative expansion—gives a strong base for solving complex backtracking problems.If you master this pattern, you unlock a whole category of problems in recursion and combinatorics.Frequently Asked Questions (FAQs)1. Why are there 2ⁿ subsets?Because each element has 2 choices: include or exclude.2. Which approach is best for interviews?Recursion + backtracking is the most preferred.3. Is bit manipulation important?Yes, it helps in optimizing and understanding binary patterns.
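One of the variations listed above — K-sized subsets — falls out of the same include/exclude backtracking template with a single extra check. A minimal sketch (the class and method names are illustrative; this mirrors LeetCode 77, Combinations):

```java
import java.util.*;

class Combinations {
    List<List<Integer>> result = new ArrayList<>();

    // Same include/exclude recursion as subsets, but only record
    // subsets whose size is exactly k.
    void solve(int[] nums, int ind, int k, List<Integer> cur) {
        if (cur.size() == k) {          // found a k-sized subset
            result.add(new ArrayList<>(cur));
            return;
        }
        if (ind == nums.length) return; // ran out of elements
        cur.add(nums[ind]);             // include nums[ind]
        solve(nums, ind + 1, k, cur);
        cur.remove(cur.size() - 1);     // backtrack
        solve(nums, ind + 1, k, cur);   // exclude nums[ind]
    }

    public List<List<Integer>> kSizedSubsets(int[] nums, int k) {
        solve(nums, 0, k, new ArrayList<>());
        return result;
    }
}
```

For nums = [1, 2, 3] and k = 2 this produces the C(3, 2) = 3 subsets [1,2], [1,3], and [2,3] — the same decision tree as the full power set, just pruned at size k.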

LeetCodeMediumJavaRecursionBacktracking
Reverse a Stack — GFG Problem Solved (3 Approaches Explained)

Reverse a Stack — GFG Problem Solved (3 Approaches Explained)

What Is This Problem About?This is a classic stack problem from GeeksForGeeks — "Reverse a Stack" (Medium | 4 Points). You can find it on GFG by Reverse a Stack.You are given a stack. Your job is simple — reverse it. The element that was at the bottom should now be at the top, and vice versa.Example:Input: [1, 2, 3, 4] → bottom to top, so 4 is on topOutput: [1, 2, 3, 4] → after reversal, 1 is on topWait — the input and output look the same? That is because GFG displays the result top to bottom after reversal. So after reversing, 1 comes to the top, and printing top to bottom gives [1, 2, 3, 4]. The stack is indeed reversed internally.Approach 1 — Using Two Extra StacksIntuition: Pop everything from the original stack into Stack 1 — this reverses the order once. Then pop everything from Stack 1 into Stack 2 — this reverses it again, back to original order. Now push everything from Stack 2 back into the original stack. The result? The original stack is reversed.Why does this work? Every stack-to-stack transfer reverses the order once, and we make three transfers. Let us trace it (stacks shown bottom to top):
Original st: [1, 2, 3, 4] → top is 4
st → S1: [4, 3, 2, 1] → top is 1 (first reversal)
S1 → S2: [1, 2, 3, 4] → top is 4 (second reversal — back to the original order)
S2 → st: [4, 3, 2, 1] → top is 1 (third reversal)
Three transfers means three reversals — an odd number — so the net effect is exactly one reversal, and the stack ends up correctly reversed with 1 on top. The subtle trap: stop after an even number of transfers (for example st → S1 → st) and the reversals cancel, silently restoring the original order. That is a classic bug worth catching in an interview.Lesson: Two full reversals = no change. An odd number of reversals = what we want. Keep this in mind.Approach 2 — Using an ArrayList (Clean & Simple) ✅Intuition: Pop all elements from the stack into an ArrayList. At this point, the ArrayList holds elements in reverse order (because popping reverses). Then push them back from index 0 to end. 
This is the clean, working solution.Stack: [1, 2, 3, 4] → top is 4Pop into ArrayList: [4, 3, 2, 1]Push back index 0→end: push 4 → st: [4] push 3 → st: [4, 3] push 2 → st: [4, 3, 2] push 1 → st: [4, 3, 2, 1] → top is now 1 ✓The stack is now reversed. 1 is on top.

public static void reverseStack(Stack<Integer> st) {
    if (st.empty()) return;
    ArrayList<Integer> list = new ArrayList<>();
    // Pop all elements — goes in reverse order into list
    while (!st.empty()) {
        list.add(st.pop());
    }
    // Push back from index 0 — restores in reversed order
    for (int i = 0; i < list.size(); i++) {
        st.push(list.get(i));
    }
}

Time Complexity: O(n) — one pass to pop, one pass to push.Space Complexity: O(n) — for the ArrayList.Why this works: When you pop all elements into a list, the top element (last inserted) goes to index 0. When you push back from index 0, that element goes in first and ends up at the bottom. The bottom element (first inserted) was popped last, sits at the end of the list, and gets pushed last — ending up on top. 
That is a perfect reversal.Approach 3 — Using Recursion (No Extra Data Structure) ✅This is the most elegant approach and the one interviewers love to ask about.Intuition: Use two recursive functions:reverseStack — pops the top element, recursively reverses the rest, then inserts the popped element at the bottom.insertAtBottom — holds all elements out while inserting one element at the very bottom, then restores everything.

// Insert an item at the bottom of the stack
static void insertAtBottom(Stack<Integer> st, int item) {
    if (st.empty()) {
        st.push(item);
        return;
    }
    int top = st.pop();
    insertAtBottom(st, item);
    st.push(top);
}

// Reverse the stack
public static void reverseStack(Stack<Integer> st) {
    if (st.empty()) return;
    int top = st.pop();
    reverseStack(st);        // reverse remaining stack
    insertAtBottom(st, top); // put popped element at the bottom
}

Dry Run with [1, 2, 3] (bottom to top, so 3 is on top):
reverseStack([1,2,3]) → pop 3, reverseStack([1,2])
reverseStack([1,2]) → pop 2, reverseStack([1])
reverseStack([1]) → pop 1, reverseStack([])
base case → return
insertAtBottom([], 1) → push 1 → [1]
insertAtBottom([1], 2) → pop 1, insert 2 at the bottom, push 1 back → [2, 1]
insertAtBottom([2,1], 3) → pop 1, pop 2, push 3, push 2, push 1 → [3, 2, 1]
Final stack (bottom to top): [3, 2, 1] → the top is now 1, which was originally at the bottom. ✓ Reversed!Time Complexity: O(n²) — for each of n elements, insertAtBottom takes O(n).Space Complexity: O(n) — recursive call stack.Which Approach Should You Use?
Approach | Time | Space | Simplicity | Interview Value
Two Extra Stacks | O(n) | O(n) — two auxiliary stacks | Simple | Low
ArrayList | O(n) | O(n) | Very Easy | Medium
Recursion | O(n²) | O(n) call stack | Moderate | High
For a coding interview, always mention the recursive approach — it shows you understand stack mechanics deeply. For production code, the ArrayList approach is cleaner and faster.Key TakeawayReversing a stack is fundamentally about understanding LIFO. 
Because a stack only allows access from the top, you need a systematic way to invert the order — whether that is using auxiliary storage like an ArrayList, or using the call stack itself via recursion. Both are valid. Both teach you something different about how stacks behave.The next time you see a problem that involves reversing, reordering, or inserting at the bottom of a stack — your first instinct should be recursion with insertAtBottom. It is a pattern that appears again and again in DSA.And if you want to understand Stack from Scratch?If you are just getting started with stacks or want a complete reference — I have written a detailed in-depth guide on the Stack Data Structure in Java covering everything from what a stack is, how LIFO works, all three implementations (Array, ArrayList, LinkedList), every operation explained with code, time complexity, advantages, disadvantages, real-world use cases, and six practice problems with full solutions.Check it out here → Stack Data Structure in Java: The Complete GuideIt is the perfect companion to this problem walkthrough — start there if you want the full picture, then come back here for the problem-solving side.
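To see the recursive approach run end to end, here is a small self-contained driver (the class name ReverseStackDemo is just for illustration; note that Java's Stack prints bottom to top):

```java
import java.util.*;

public class ReverseStackDemo {
    // Insert an item at the bottom of the stack by holding
    // everything above it out on the call stack.
    static void insertAtBottom(Stack<Integer> st, int item) {
        if (st.empty()) {
            st.push(item);
            return;
        }
        int top = st.pop();        // hold the current top out
        insertAtBottom(st, item);  // insert below everything else
        st.push(top);              // restore the held element
    }

    static void reverseStack(Stack<Integer> st) {
        if (st.empty()) return;
        int top = st.pop();
        reverseStack(st);          // reverse the remaining stack
        insertAtBottom(st, top);   // old top goes to the bottom
    }

    public static void main(String[] args) {
        Stack<Integer> st = new Stack<>();
        for (int x : new int[]{1, 2, 3, 4}) st.push(x);
        reverseStack(st);
        System.out.println(st.peek()); // 1 — the old bottom is now on top
        System.out.println(st);        // prints bottom to top: [4, 3, 2, 1]
    }
}
```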

StackProblemsJavaMediumGeeksForGeeksReverseStack
Quick Sort Algorithm Explained | Java Implementation, Partition Logic & Complexity

Quick Sort Algorithm Explained | Java Implementation, Partition Logic & Complexity

IntroductionQuick Sort is one of the most powerful and widely used sorting algorithms in computer science. It follows the Divide and Conquer approach and is known for its excellent average-case performance.What makes Quick Sort special is:It sorts in-place (no extra array required)It is faster in practice than many O(n log n) algorithms like Merge SortIt is heavily used in real-world systems and librariesIn this article, we’ll go deep into:Intuition behind Quick SortPartition logic (most important part)Step-by-step dry runJava implementation with commentsTime complexity analysisCommon mistakes and optimizations🔗 Problem LinkGeeksforGeeks: Quick SortProblem StatementGiven an array arr[], sort it in ascending order using Quick Sort.Requirements:Use Divide and ConquerChoose pivot elementPlace pivot in correct positionElements smaller → left sideElements greater → right sideExamplesExample 1Input:arr = [4, 1, 3, 9, 7]Output:[1, 3, 4, 7, 9]Example 2Input:arr = [2, 1, 6, 10, 4, 1, 3, 9, 7]Output:[1, 1, 2, 3, 4, 6, 7, 9, 10]Core Idea of Quick SortPick a pivot → Place it correctly → Recursively sort left & right🔥 Key Insight (Partition is Everything)Quick Sort depends entirely on partitioning:👉 After partition:Pivot is at its correct sorted positionLeft side → smaller elementsRight side → larger elementsIntuition (Visual Understanding)Consider:[4, 1, 3, 9, 7]Step 1: Choose PivotLet’s say pivot = 4Step 2: Rearrange Elements[1, 3] 4 [9, 7]Now:Left → smallerRight → largerStep 3: Apply RecursivelyLeft: [1, 3]Right: [9, 7]Final result:[1, 3, 4, 7, 9]Partition Logic (Most Important)Your implementation uses:Pivot = first elementTwo pointers:i → moves forwardj → moves backwardJava Code

class Solution {
    public void quickSort(int[] arr, int low, int high) {
        // Base case: if array has 1 or 0 elements
        if (low < high) {
            // Partition array and get pivot index
            int pivotInd = partition(arr, low, high);
            // Sort left part
            quickSort(arr, low, pivotInd - 1);
            // Sort right part
            quickSort(arr, pivotInd + 1, high);
        }
    }

    // Function to swap two elements
    void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    private int partition(int[] arr, int low, int high) {
        int pivot = arr[low]; // choosing first element as pivot
        int i = low + 1;      // start from next element
        int j = high;         // start from end
        while (i <= j) {
            // Move i forward until element > pivot
            while (i <= high && arr[i] <= pivot) {
                i++;
            }
            // Move j backward until element <= pivot
            while (j >= low && arr[j] > pivot) {
                j--;
            }
            // Swap if pointers haven't crossed
            if (i < j) {
                swap(arr, i, j);
            }
        }
        // Place pivot at correct position
        swap(arr, low, j);
        return j; // return pivot index
    }
}

Step-by-Step Dry RunInput:[4, 1, 3, 9, 7]Execution:Pivot = 4i → moves until element > 4, stopping at 9 (index 3)j → moves until element ≤ 4, stopping at 3 (index 2)Pointers have crossed → no swap; pivot is swapped with arr[j]Final partition:[3, 1, 4, 9, 7] — pivot 4 is at its correct sorted index; the left side [3, 1] holds only smaller elements, the right side [9, 7] only larger ones.Complexity AnalysisTime Complexity
Case | Complexity
Best Case | O(n log n)
Average Case | O(n log n)
Worst Case | O(n²)
Why Worst Case Happens?When array is:Already sortedReverse sortedPivot always becomes smallest/largest.Space ComplexityO(log n) (recursion stack)❌ Common MistakesWrong partition logicInfinite loops in while conditionsIncorrect pivot placementNot handling duplicates properly⚡ Optimizations1. Random PivotAvoid worst-case:int pivotIndex = low + new Random().nextInt(high - low + 1);swap(arr, low, pivotIndex);2. 
Median of ThreeChoose better pivot:median(arr[low], arr[mid], arr[high])Quick Sort vs Merge Sort
Feature | Quick Sort | Merge Sort
Space | O(log n) | O(n)
Speed | Faster in practice | Slower in practice
Stable | No | Yes
Worst Case | O(n²) | O(n log n)
Why Quick Sort is PreferredCache-friendlyIn-place sortingFaster in real-world scenariosKey TakeawaysPartition is the heart of Quick SortPivot must be placed correctlyRecursion splits problem efficientlyAvoid worst case using random pivotWhen to Use Quick SortLarge arraysMemory constraints (in-place)Performance-critical applicationsConclusionQuick Sort is one of the most efficient and practical sorting algorithms. Mastering its partition logic is crucial for solving advanced problems and performing well in coding interviews.Understanding how pointers move and how pivot is placed will make this algorithm intuitive and powerful.Frequently Asked Questions (FAQs)1. Is Quick Sort stable?No, it is not stable.2. Why is Quick Sort faster than Merge Sort?Because it avoids extra space and is cache-efficient.3. What is the most important part?👉 Partition logic
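Putting the random-pivot optimization together with the first-element-pivot partition discussed above, a complete sketch might look like this (the class name RandomizedQuickSort is just for illustration):

```java
import java.util.*;

class RandomizedQuickSort {
    private static final Random RAND = new Random();

    public void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            // Swap a random element into the pivot slot to avoid the
            // O(n^2) worst case on sorted / reverse-sorted input.
            int pivotIndex = low + RAND.nextInt(high - low + 1);
            swap(arr, low, pivotIndex);

            int p = partition(arr, low, high);
            quickSort(arr, low, p - 1);
            quickSort(arr, p + 1, high);
        }
    }

    // Same two-pointer partition as above: pivot = arr[low],
    // i scans forward past elements <= pivot, j scans backward past
    // elements > pivot, then the pivot lands at index j.
    private int partition(int[] arr, int low, int high) {
        int pivot = arr[low];
        int i = low + 1, j = high;
        while (i <= j) {
            while (i <= high && arr[i] <= pivot) i++;
            while (j >= low && arr[j] > pivot) j--;
            if (i < j) swap(arr, i, j);
        }
        swap(arr, low, j); // place pivot at its final position
        return j;
    }

    private void swap(int[] arr, int i, int j) {
        int t = arr[i];
        arr[i] = arr[j];
        arr[j] = t;
    }
}
```

The only change from the deterministic version is the one extra swap before partitioning — the partition logic itself is untouched.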

MediumJavaSortingQuick SortGeeksForGeeks
LeetCode 844: Backspace String Compare — Java Solution With All Approaches Explained

LeetCode 844: Backspace String Compare — Java Solution With All Approaches Explained

IntroductionLeetCode 844 Backspace String Compare is a fantastic problem that shows up frequently in coding interviews. It combines string manipulation, stack simulation, and even has a follow-up that challenges you to solve it in O(1) space — which is what separates a good candidate from a great one.Here is the Link of Question -: LeetCode 844In this article we cover a plain English explanation, real life analogy, 3 Java approaches including the O(1) space two pointer solution, dry runs, complexity analysis, common mistakes, and FAQs.What Is the Problem Really Asking?You are given two strings s and t. Both contain lowercase letters and # characters. Think of # as the backspace key on your keyboard — it deletes the character just before it. If there is nothing to delete, it does nothing.Process both strings through these backspace operations and check if the resulting strings are equal. Return true if equal, false otherwise.Quick Example:s = "ab#c" → # deletes b → becomes "ac"t = "ad#c" → # deletes d → becomes "ac"Both equal "ac" → return true ✅Real Life Analogy — The Keyboard TypoYou are typing a message. You type "ab", realize you made a typo, hit backspace, and type "c". Your friend types "ad", hits backspace, and types "c". Even though you both typed differently, the final message on screen is the same — "ac".That is exactly what this problem is about. Two people typing differently but ending up with the same result.Approach 1: StringBuilder as Stack (Optimal & Clean) ✅The IdeaThis is your own solution and the best O(n) approach. 
Process each string independently using a StringBuilder as a stack:Letter → append to StringBuilder (push)# → delete last character if StringBuilder is not empty (pop)Then simply compare the two resulting StringBuilders.public boolean backspaceCompare(String s, String t) { return process(s).equals(process(t));}private String process(String str) { StringBuilder sb = new StringBuilder(); for (int i = 0; i < str.length(); i++) { char c = str.charAt(i); if (c == '#') { if (sb.length() > 0) { sb.deleteCharAt(sb.length() - 1); } } else { sb.append(c); } } return sb.toString();}Notice how extracting a process() helper method makes the code cleaner and avoids repeating the same loop twice — a great habit to show in interviews!Dry Run — s = "ab##", t = "c#d#"Processing s = "ab##":a → append → "a"b → append → "ab"# → delete last → "a"# → delete last → ""Processing t = "c#d#":c → append → "c"# → delete last → ""d → append → "d"# → delete last → ""Both result in "" → return true ✅Time Complexity: O(n + m) — where n and m are lengths of s and t Space Complexity: O(n + m) — two StringBuilders storing processed stringsApproach 2: Stack Based Solution (Interview Classic)The IdeaSame logic as above but using explicit Stack<Character> objects. 
Great for explaining your thought process clearly in an interview even though StringBuilder is cleaner.public boolean backspaceCompare(String s, String t) { return processStack(s).equals(processStack(t));}private String processStack(String str) { Stack<Character> st = new Stack<>(); for (int i = 0; i < str.length(); i++) { char c = str.charAt(i); if (c == '#') { if (!st.empty()) { st.pop(); } } else { st.push(c); } } StringBuilder sb = new StringBuilder(); while (!st.empty()) { sb.append(st.pop()); } return sb.reverse().toString();}Dry Run — s = "a#c", t = "b"Processing s = "a#c":a → push → stack: [a]# → pop → stack: []c → push → stack: [c]Result: "c"Processing t = "b":b → push → stack: [b]Result: "b""c" does not equal "b" → return false ✅Time Complexity: O(n + m) Space Complexity: O(n + m)Approach 3: Two Pointer — O(1) Space (Follow-Up Solution) 🔥This is the follow-up the problem asks about — can you solve it in O(n) time and O(1) space? This means no extra StringBuilder or Stack allowed.The IdeaInstead of building processed strings, traverse both strings from right to left simultaneously. Keep a count of pending backspaces. Skip characters that would be deleted and compare characters that survive.Why right to left? 
Because # affects characters to its left, so processing from the end lets us know upfront how many characters to skip.

public boolean backspaceCompare(String s, String t) {
    int i = s.length() - 1;
    int j = t.length() - 1;
    int skipS = 0, skipT = 0;
    while (i >= 0 || j >= 0) {
        // Find next valid character in s
        while (i >= 0) {
            if (s.charAt(i) == '#') {
                skipS++; i--;
            } else if (skipS > 0) {
                skipS--; i--;
            } else {
                break;
            }
        }
        // Find next valid character in t
        while (j >= 0) {
            if (t.charAt(j) == '#') {
                skipT++; j--;
            } else if (skipT > 0) {
                skipT--; j--;
            } else {
                break;
            }
        }
        // Compare the valid characters
        if (i >= 0 && j >= 0) {
            if (s.charAt(i) != t.charAt(j)) {
                return false;
            }
        } else if (i >= 0 || j >= 0) {
            return false; // one string still has chars, other doesn't
        }
        i--;
        j--;
    }
    return true;
}

Dry Run — s = "ab#c", t = "ad#c"Starting from the right end of both strings:
Round 1:
s[3] = 'c' → valid, no skips → stop
t[3] = 'c' → valid, no skips → stop
Compare 'c' == 'c' ✅ → move both pointers left
Round 2:
s[2] = '#' → skipS = 1, move left
s[1] = 'b' → skipS > 0, skipS = 0, move left
s[0] = 'a' → valid, stop
t[2] = '#' → skipT = 1, move left
t[1] = 'd' → skipT > 0, skipT = 0, move left
t[0] = 'a' → valid, stop
Compare 'a' == 'a' ✅ → move both pointers left
Both pointers exhausted → return true ✅
Time Complexity: O(n + m) — each character visited at most once Space Complexity: O(1) — only pointer and counter variables, no extra storage!Approach ComparisonThe StringBuilder approach is the easiest to write and explain. The Stack approach is slightly more verbose but shows clear intent. 
The Two Pointer approach is the hardest to code but the most impressive — it solves the follow-up and uses zero extra space.In an interview, start with the StringBuilder solution, explain it clearly, then mention the Two Pointer approach as the O(1) space optimization if asked.How This Fits the Stack Simulation PatternYou have now seen this same pattern across four problems:3174 Clear Digits — digit deletes closest left non-digit 2390 Removing Stars — star deletes closest left non-star 1047 Remove Adjacent Duplicates — character cancels matching top of stack 844 Backspace String Compare — # deletes closest left character, then compare two stringsAll four use the same StringBuilder-as-stack core. The only differences are the trigger character and what you do with the result. This is the power of pattern recognition in DSA.Common Mistakes to AvoidNot handling backspace on empty string When # appears but the StringBuilder is already empty, do nothing. Always guard with sb.length() > 0 before calling deleteCharAt. The problem explicitly states backspace on empty text keeps it empty.Using Stack and forgetting to handle # when stack is empty In the Stack approach, only pop if the stack is not empty. Pushing # onto the stack when it is empty is a common bug that gives wrong answers.In Two Pointer, comparing before both pointers find valid characters Make sure both inner while loops fully complete before comparing. Comparing too early is the most common mistake in the O(1) space solution.FAQs — People Also AskQ1. What is the best approach for LeetCode 844 in Java? For most interviews, the StringBuilder approach is the best — clean, readable, and O(n) time. If the interviewer asks for O(1) space, switch to the Two Pointer approach traversing from right to left.Q2. How does the O(1) space solution work for LeetCode 844? It uses two pointers starting from the end of both strings, keeping a skip counter to track pending backspaces. 
Characters that would be deleted are skipped, and only surviving characters are compared.

Q3. What is the time complexity of LeetCode 844?
All three approaches run in O(n + m) time where n and m are the lengths of the two strings. The Two Pointer approach achieves this with O(1) space instead of O(n + m).

Q4. Why traverse from right to left in the Two Pointer approach?
Because # affects characters to its left. Scanning from the right lets you know upfront how many characters to skip before you reach them, avoiding the need to store anything.

Q5. Is LeetCode 844 asked in Google interviews?
Yes, it is commonly used as a warmup or screening problem. The follow-up O(1) space solution is what makes it interesting for senior-level interviews.

Similar LeetCode Problems to Practice Next

1047. Remove All Adjacent Duplicates In String — Easy — same stack pattern
2390. Removing Stars From a String — Medium — star as backspace
3174. Clear Digits — Easy — digit as backspace
1209. Remove All Adjacent Duplicates in String II — Medium — k adjacent duplicates
678. Valid Parenthesis String — Medium — stack with wildcards

Conclusion

LeetCode 844 Backspace String Compare is a well-rounded problem that tests string manipulation, stack simulation, and space optimization all in one. The StringBuilder solution is your go-to for interviews. But always be ready to explain the Two Pointer O(1) space follow-up — that is what shows real depth of understanding.

Practice this problem alongside 1047, 2390, and 3174 and you will have the entire stack simulation pattern locked down for any coding interview.
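As a quick recap, the StringBuilder solution recommended above can be sketched like this. This is a minimal sketch of the approach described in the article; the helper name buildString is mine, not from the original code:

```java
public class Solution {
    // StringBuilder used as a stack: append = push,
    // deleteCharAt(last) = pop when a '#' is seen.
    private static String buildString(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c == '#') {
                // guard: backspace on empty text keeps it empty
                if (sb.length() > 0) sb.deleteCharAt(sb.length() - 1);
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static boolean backspaceCompare(String s, String t) {
        // Build both final strings, then compare them
        return buildString(s).equals(buildString(t));
    }

    public static void main(String[] args) {
        System.out.println(backspaceCompare("ab#c", "ad#c")); // true  — both become "ac"
        System.out.println(backspaceCompare("a#c", "b"));     // false — "c" vs "b"
    }
}
```

This runs in O(n + m) time but needs O(n + m) extra space for the built strings, which is exactly the gap the Two Pointer follow-up closes.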

Queue Data Structure Complete Guide - Java Explained With All Operations


Introduction

If you have been learning Data Structures and Algorithms, you have probably already spent time with arrays, linked lists, and stacks. Now it is time to meet one of the most important and widely used data structures in computer science — the Queue.

Queue is not just a theoretical concept. It powers some of the most critical systems you use every day — from how your printer handles jobs, to how your CPU schedules tasks, to how Google Maps finds the shortest path between two locations. Understanding Queue deeply means understanding how real systems work.

In this complete guide we will cover absolutely everything — what a Queue is, how it differs from a Stack, every type of Queue, all operations with code, Java implementations, time and space complexity, common interview questions, and the most important LeetCode problems that use Queue.

What Is a Queue?

A Queue is a linear data structure that follows the FIFO principle — First In First Out. This means the element that was added first is the one that gets removed first.

Think of it exactly like a real-world queue (a line of people). The person who joined the line first gets served first. No cutting in line, no serving from the back — strict order from front to back.

This is the fundamental difference between a Queue and a Stack:

Stack → LIFO (Last In First Out) — like a stack of plates, you take from the top
Queue → FIFO (First In First Out) — like a line of people, you serve from the front

Real Life Examples of Queue

Before writing a single line of code, let us understand where queues appear in real life. This will make every technical concept feel natural.

Printer Queue — when you send multiple documents to print, they print in the order they were sent. The first document sent prints first.

CPU Task Scheduling — your operating system manages running processes in a queue.
Tasks get CPU time in the order they arrive (in basic scheduling).

Customer Service Call Center — when you call a helpline and are put on hold, you are placed in a queue. The first caller on hold gets connected first.

WhatsApp Messages — messages are delivered in the order they are sent. The first message sent is the first one received.

BFS (Breadth First Search) — every time you use Google Maps or any navigation app to find the shortest path, it uses BFS internally, which is entirely powered by a Queue.

Ticket Booking Systems — online booking portals process requests in the order they arrive. First come first served.

Queue Terminology — Key Terms You Must Know

Before diving into code, let us get the vocabulary right:

Front — the end from which elements are removed (dequeued). This is where the "first person in line" stands.
Rear (or Back) — the end at which elements are added (enqueued). New arrivals join here.
Enqueue — the operation of adding an element to the rear of the queue. Like joining the back of a line.
Dequeue — the operation of removing an element from the front of the queue. Like the first person in line being served and leaving.
Peek (or Front) — looking at the front element without removing it. Like seeing who is first in line without serving them yet.
isEmpty — checking whether the queue has no elements.
isFull — relevant for fixed-size queues, checking whether no more elements can be added.

Types of Queues

This is where most beginners get confused. There is not just one type of Queue — there are several variations, each designed to solve specific problems.

1. Simple Queue (Linear Queue)

The most basic form. Elements enter from the rear and leave from the front. Strict FIFO, nothing fancy.

Enqueue → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue
            rear             front

Problem with Simple Queue: in an array-based implementation, once elements are dequeued from the front, those slots cannot be reused even if there is space. This wastes memory. This is why the Circular Queue was invented.

2. Circular Queue

In a Circular Queue, the rear wraps around to the front when it reaches the end of the array. The last position connects back to the first, forming a circle. This solves the wasted-space problem of simple queues.

    [1] [2] [3]
   /           \
 [6]           [4]
   \           /
     [5] ← rear

Used in: CPU scheduling, memory management, traffic light systems, streaming buffers.

3. Double Ended Queue (Deque)

A Deque (pronounced "deck") allows insertion and deletion from both ends — front and rear. It is the most flexible queue type.

Enqueue Front → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue Front
Enqueue Rear  → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue Rear

Two subtypes:

Input Restricted Deque — insertion only at rear, deletion from both ends
Output Restricted Deque — deletion only at front, insertion at both ends

Used in: browser history (back and forward), undo-redo operations, sliding window problems.

4. Priority Queue

Elements are not served in FIFO order — instead, each element has a priority and the element with the highest priority is served first regardless of when it was added.

Think of an emergency room. A patient with a critical injury jumps ahead of someone with a minor cut even if they arrived later.

Two types:

Max Priority Queue — highest value = highest priority
Min Priority Queue — lowest value = highest priority

Used in: Dijkstra's shortest path, Huffman encoding, A* search algorithm, task scheduling with priorities.

5. Blocking Queue

A thread-safe queue used in multi-threading. If the queue is empty, a thread trying to dequeue will wait (block) until an element is available.
If the queue is full, a thread trying to enqueue will wait until space is available.

Used in: Producer-Consumer problems, thread pool implementations, Java's java.util.concurrent package.

Queue Operations and Time Complexity

Every queue operation has a specific time complexity that you must know cold for interviews.

Operation   | Description               | Time Complexity
Enqueue     | Add element to rear       | O(1)
Dequeue     | Remove element from front | O(1)
Peek/Front  | View front element        | O(1)
isEmpty     | Check if queue is empty   | O(1)
Size        | Number of elements        | O(1)
Search      | Find a specific element   | O(n)

Space Complexity: O(n) — where n is the number of elements stored.

All core queue operations are O(1). This is what makes Queue so powerful — no matter how many elements are in the queue, adding and removing always takes constant time.

Implementing Queue in Java — All Ways

Java gives you multiple ways to use a Queue. Let us go through each one.

Way 1: Using LinkedList (Most Common)

LinkedList implements the Queue interface in Java. This is the most commonly used Queue implementation.

import java.util.LinkedList;
import java.util.Queue;

Queue<Integer> queue = new LinkedList<>();

// Enqueue — add to rear
queue.offer(10);
queue.offer(20);
queue.offer(30);

// Peek — view front without removing
System.out.println(queue.peek()); // 10

// Dequeue — remove from front
System.out.println(queue.poll()); // 10
System.out.println(queue.poll()); // 20

// Check empty
System.out.println(queue.isEmpty()); // false

// Size
System.out.println(queue.size()); // 1

offer() vs add() — both add to the queue. add() throws an exception if the queue is full (for bounded queues). offer() returns false instead. Always prefer offer().

poll() vs remove() — both remove from the front. remove() throws an exception if the queue is empty. poll() returns null. Always prefer poll().

peek() vs element() — both view the front. element() throws an exception if empty. peek() returns null.
Always prefer peek().

Way 2: Using ArrayDeque (Fastest)

ArrayDeque is faster than LinkedList for Queue operations because it uses a resizable array internally with no node allocation overhead.

import java.util.ArrayDeque;
import java.util.Queue;

Queue<Integer> queue = new ArrayDeque<>();
queue.offer(1);
queue.offer(2);
queue.offer(3);

System.out.println(queue.peek()); // 1
System.out.println(queue.poll()); // 1
System.out.println(queue.size()); // 2

When to use ArrayDeque over LinkedList? Use ArrayDeque whenever possible for Queue or Stack operations. It is faster because it avoids the overhead of node objects that LinkedList creates for every element. In competitive programming and interviews, ArrayDeque is the preferred choice.

Way 3: Using Deque (Double Ended Queue)

import java.util.ArrayDeque;
import java.util.Deque;

Deque<Integer> deque = new ArrayDeque<>();

// Add to front
deque.offerFirst(10);

// Add to rear
deque.offerLast(20);
deque.offerLast(30);

// Remove from front
System.out.println(deque.pollFirst()); // 10

// Remove from rear
System.out.println(deque.pollLast()); // 30

// Peek front and rear
System.out.println(deque.peekFirst()); // 20
System.out.println(deque.peekLast()); // 20

Way 4: Using PriorityQueue

import java.util.PriorityQueue;

// Min Heap — smallest element has highest priority
PriorityQueue<Integer> minPQ = new PriorityQueue<>();
minPQ.offer(30);
minPQ.offer(10);
minPQ.offer(20);
System.out.println(minPQ.poll()); // 10 — smallest comes out first

// Max Heap — largest element has highest priority
PriorityQueue<Integer> maxPQ = new PriorityQueue<>((a, b) -> b - a);
maxPQ.offer(30);
maxPQ.offer(10);
maxPQ.offer(20);
System.out.println(maxPQ.poll()); // 30 — largest comes out first

Way 5: Implementing Queue From Scratch Using Array

Understanding the underlying implementation helps you in interviews when asked to build one from scratch.

class MyQueue {
    private int[] arr;
    private int front;
    private int rear;
    private int size;
    private int capacity;

    public MyQueue(int capacity) {
        this.capacity = capacity;
        arr = new int[capacity];
        front = 0;
        rear = -1;
        size = 0;
    }

    public void enqueue(int val) {
        if (size == capacity) {
            System.out.println("Queue is full!");
            return;
        }
        rear = (rear + 1) % capacity; // circular wrapping
        arr[rear] = val;
        size++;
    }

    public int dequeue() {
        if (isEmpty()) {
            System.out.println("Queue is empty!");
            return -1;
        }
        int val = arr[front];
        front = (front + 1) % capacity; // circular wrapping
        size--;
        return val;
    }

    public int peek() {
        if (isEmpty()) return -1;
        return arr[front];
    }

    public boolean isEmpty() {
        return size == 0;
    }

    public int size() {
        return size;
    }
}

Notice the % capacity in enqueue and dequeue — that is what makes it a Circular Queue. Without this, once the rear reaches the end of the array, you cannot add more even if front has moved forward and freed up space.

Way 6: Implementing Queue Using Two Stacks

This is a very popular interview question — implement a Queue using two stacks. The idea is to use one stack for enqueue and another for dequeue.

import java.util.Stack;

class QueueUsingTwoStacks {
    Stack<Integer> s1 = new Stack<>(); // for enqueue
    Stack<Integer> s2 = new Stack<>(); // for dequeue

    public void enqueue(int val) {
        s1.push(val); // always push to s1
    }

    public int dequeue() {
        if (s2.isEmpty()) {
            // transfer all elements from s1 to s2
            // this reverses the order, giving FIFO behavior
            while (!s1.isEmpty()) {
                s2.push(s1.pop());
            }
        }
        return s2.pop();
    }

    public int peek() {
        if (s2.isEmpty()) {
            while (!s1.isEmpty()) {
                s2.push(s1.pop());
            }
        }
        return s2.peek();
    }

    public boolean isEmpty() {
        return s1.isEmpty() && s2.isEmpty();
    }
}

Why does this work? When you transfer elements from s1 to s2, the order reverses. The element that was added first to s1 ends up on top of s2 — which means it gets dequeued first. FIFO achieved using two LIFOs!

Amortized time complexity: each element is pushed and popped at most twice (once in s1, once in s2).
So dequeue is O(1) amortized even though individual calls might take O(n).

This is LeetCode 232 — Implement Queue using Stacks.

Queue vs Stack — Side by Side

Feature     | Queue                     | Stack
Principle   | FIFO — First In First Out | LIFO — Last In First Out
Insert at   | Rear                      | Top
Remove from | Front                     | Top
Real life   | Line of people            | Stack of plates
Java class  | LinkedList, ArrayDeque    | Stack, ArrayDeque
Main use    | BFS, scheduling           | DFS, backtracking, parsing
Peek        | Front element             | Top element

BFS — The Most Important Application of Queue

Breadth First Search (BFS) is the single most important algorithm that uses a Queue. Understanding BFS is why Queue matters so much in DSA.

BFS explores a graph or tree level by level — all nodes at distance 1 first, then all at distance 2, and so on. A Queue naturally enforces this level-by-level behavior.

public void bfs(int start, List<List<Integer>> graph) {
    Queue<Integer> queue = new LinkedList<>();
    boolean[] visited = new boolean[graph.size()];

    queue.offer(start);
    visited[start] = true;

    while (!queue.isEmpty()) {
        int node = queue.poll(); // process front node
        System.out.print(node + " ");

        for (int neighbor : graph.get(node)) {
            if (!visited[neighbor]) {
                visited[neighbor] = true;
                queue.offer(neighbor); // add unvisited neighbors to rear
            }
        }
    }
}

Why Queue and not Stack for BFS? Queue ensures you process all neighbors of a node before going deeper. Stack would take you deep into one path first — that is DFS, not BFS.
The FIFO property is what guarantees level-by-level exploration.

BFS with Queue is used in:

Shortest path in unweighted graphs
Level order traversal of trees
Finding connected components
Word ladder problems
Rotten oranges, flood fill, and matrix BFS problems

Level Order Traversal — BFS on Trees

One of the most common Queue problems in interviews is Level Order Traversal of a binary tree.

public List<List<Integer>> levelOrder(TreeNode root) {
    List<List<Integer>> result = new ArrayList<>();
    if (root == null) return result;

    Queue<TreeNode> queue = new LinkedList<>();
    queue.offer(root);

    while (!queue.isEmpty()) {
        int levelSize = queue.size(); // number of nodes at current level
        List<Integer> level = new ArrayList<>();

        for (int i = 0; i < levelSize; i++) {
            TreeNode node = queue.poll();
            level.add(node.val);
            if (node.left != null) queue.offer(node.left);
            if (node.right != null) queue.offer(node.right);
        }
        result.add(level);
    }
    return result;
}

The key trick here is using queue.size() at the start of each while loop iteration to know exactly how many nodes belong to the current level. Process exactly that many nodes, then move to the next level.

This is LeetCode 102 — Binary Tree Level Order Traversal.

Sliding Window Maximum — Monotonic Deque

One of the most impressive Queue applications is the Sliding Window Maximum problem using a Monotonic Deque. This is the queue equivalent of the Monotonic Stack pattern you saw in stack problems.

The idea — maintain a deque that stores indices of elements in decreasing order.
The front always holds the index of the maximum element in the current window.

public int[] maxSlidingWindow(int[] nums, int k) {
    Deque<Integer> deque = new ArrayDeque<>(); // stores indices
    int[] result = new int[nums.length - k + 1];
    int idx = 0;

    for (int i = 0; i < nums.length; i++) {
        // remove indices that are out of the current window
        while (!deque.isEmpty() && deque.peekFirst() < i - k + 1) {
            deque.pollFirst();
        }

        // remove indices whose values are smaller than current
        // they can never be the maximum for any future window
        while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) {
            deque.pollLast();
        }

        deque.offerLast(i);

        // window is fully formed, record maximum (front of deque)
        if (i >= k - 1) {
            result[idx++] = nums[deque.peekFirst()];
        }
    }
    return result;
}

This gives O(n) time for what would otherwise be an O(n×k) problem. This is LeetCode 239 — Sliding Window Maximum.

Java Queue Interface — Complete Method Reference

Here is every method you will ever need from Java's Queue and Deque interfaces:

Queue Methods:
offer(e) — add to rear, returns false if full (preferred over add)
poll() — remove from front, returns null if empty (preferred over remove)
peek() — view front without removing, returns null if empty (preferred over element)
isEmpty() — returns true if no elements
size() — returns number of elements
contains(o) — returns true if element exists

Deque Additional Methods:
offerFirst(e) — add to front
offerLast(e) — add to rear
pollFirst() — remove from front
pollLast() — remove from rear
peekFirst() — view front
peekLast() — view rear

PriorityQueue Specific:
offer(e) — add with natural ordering or custom comparator
poll() — remove element with highest priority
peek() — view highest priority element without removing

Common Interview Questions About Queue

These are the questions interviewers ask to test your understanding of queues conceptually — not just coding.

Q1. What is the difference between Queue and Stack?
Queue is FIFO — elements are removed in the order they were added. Stack is LIFO — the most recently added element is removed first. Queue removes from the front, Stack removes from the top.

Q2. Why is ArrayDeque preferred over LinkedList for Queue in Java?
ArrayDeque uses a resizable array internally and has better cache locality and no node allocation overhead. LinkedList creates a new node object for every element added, which means more garbage collection pressure. ArrayDeque is faster in practice for most Queue use cases.

Q3. When would you use a PriorityQueue instead of a regular Queue?
When the order of processing depends on priority rather than arrival order. For example, in a hospital, critical patients are treated before minor cases regardless of when they arrived. Or in Dijkstra's algorithm, always processing the shortest known distance first.

Q4. How is Queue used in BFS?
BFS uses a Queue to explore nodes level by level. The starting node is enqueued first. Each time a node is dequeued, all its unvisited neighbors are enqueued. Since Queue is FIFO, all neighbors of a node are processed before going deeper — guaranteeing level-by-level exploration.

Q5. What is the difference between poll() and remove() in Java Queue?
Both remove the front element. remove() throws NoSuchElementException if the queue is empty. poll() returns null instead of throwing. Always use poll() for safer code.

Q6. Can a Queue have duplicates?
Yes. Queue does not have any restriction on duplicate values, unlike Sets. The same value can appear multiple times in a Queue.

Q7. What is a Blocking Queue and when is it used?
A Blocking Queue is a thread-safe Queue used in multi-threaded applications. When a thread tries to dequeue from an empty queue, it blocks (waits) until an element is available. When a thread tries to enqueue into a full queue, it blocks until space is available.
Used in Producer-Consumer patterns.

Top LeetCode Problems on Queue

Here are the most important LeetCode problems that use Queue — organized from beginner to advanced:

Beginner Level:
232. Implement Queue using Stacks — implement Queue with two stacks, classic interview question
225. Implement Stack using Queues — reverse of 232, implement Stack using Queue
933. Number of Recent Calls — sliding window with Queue

Intermediate Level:
102. Binary Tree Level Order Traversal — BFS on tree, must know
107. Binary Tree Level Order Traversal II — same but bottom up
994. Rotting Oranges — multi-source BFS on grid
1091. Shortest Path in Binary Matrix — BFS shortest path
542. 01 Matrix — multi-source BFS, distance to nearest 0
127. Word Ladder — BFS on word graph, classic

Advanced Level:
239. Sliding Window Maximum — monotonic deque, must know
862. Shortest Subarray with Sum at Least K — monotonic deque with prefix sums
407. Trapping Rain Water II — 3D BFS with priority queue
787. Cheapest Flights Within K Stops — BFS with constraints

Queue Cheat Sheet — Everything at a Glance

Create a Queue:
Queue<Integer> q = new LinkedList<>(); // standard
Queue<Integer> q = new ArrayDeque<>(); // faster, preferred
Deque<Integer> dq = new ArrayDeque<>(); // double ended
PriorityQueue<Integer> pq = new PriorityQueue<>(); // min heap
PriorityQueue<Integer> pq = new PriorityQueue<>((a, b) -> b - a); // max heap

Core Operations:
q.offer(x); // enqueue
q.poll(); // dequeue
q.peek(); // front element
q.isEmpty(); // check empty
q.size(); // number of elements

Deque Operations:
dq.offerFirst(x); // add to front
dq.offerLast(x); // add to rear
dq.pollFirst(); // remove from front
dq.pollLast(); // remove from rear
dq.peekFirst(); // view front
dq.peekLast(); // view rear

BFS Template:
Queue<Integer> queue = new LinkedList<>();
queue.offer(start);
visited[start] = true;
while (!queue.isEmpty()) {
    int node = queue.poll();
    for (int neighbor : graph.get(node)) {
        if (!visited[neighbor]) {
            visited[neighbor] = true;
            queue.offer(neighbor);
        }
    }
}

Conclusion

Queue is one of those data structures that appears simple on the surface but has incredible depth once you start exploring its variations and applications. From the basic FIFO concept to Circular Queues, Deques, Priority Queues, Monotonic Deques, and BFS — each layer adds a new tool to your problem-solving arsenal.

Here is the learning path to follow based on everything covered in this guide: start with understanding FIFO vs LIFO and when each applies. Then get comfortable with Java's Queue interface — offer, poll, peek. Practice the BFS template until it feels automatic. Then move to Level Order Traversal problems. Once BFS clicks, tackle multi-source BFS problems like Rotting Oranges. Finally, learn the Monotonic Deque pattern for sliding window problems.

Master these and you will handle every Queue problem in any coding interview with confidence.
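To see the circular wrap-around discussed above in action, here is a minimal runnable sketch. It condenses the from-scratch array implementation into a small demo; the class name CircularQueueDemo and the boolean return from enqueue are my illustrative choices, not part of the original code:

```java
// Minimal circular array queue demonstrating that slots freed at the
// front are reused by the rear via (index + 1) % capacity wrapping.
public class CircularQueueDemo {
    private final int[] arr;
    private int front = 0, rear = -1, size = 0;

    public CircularQueueDemo(int capacity) {
        arr = new int[capacity];
    }

    public boolean enqueue(int val) {
        if (size == arr.length) return false;   // queue is full
        rear = (rear + 1) % arr.length;         // wrap rear back to index 0
        arr[rear] = val;
        size++;
        return true;
    }

    public int dequeue() {
        if (size == 0) throw new IllegalStateException("Queue is empty!");
        int val = arr[front];
        front = (front + 1) % arr.length;       // wrap front as well
        size--;
        return val;
    }

    public static void main(String[] args) {
        CircularQueueDemo q = new CircularQueueDemo(3);
        q.enqueue(1); q.enqueue(2); q.enqueue(3); // queue is now full
        System.out.println(q.dequeue());          // 1 — frees slot 0
        System.out.println(q.enqueue(4));         // true — rear wraps into slot 0
        System.out.println(q.dequeue());          // 2
        System.out.println(q.dequeue());          // 3
        System.out.println(q.dequeue());          // 4
    }
}
```

Without the % arr.length wrap, the fourth enqueue would fail even though slot 0 had been freed — which is exactly the wasted-space problem of the simple linear queue described earlier.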
