
LeetCode 496: Next Greater Element I — Java Solution With All Approaches Explained


Introduction

LeetCode 496 Next Greater Element I is your gateway into one of the most important and frequently tested patterns in coding interviews — the Monotonic Stack. Once you understand this problem deeply, problems like Next Greater Element II, Daily Temperatures, and Largest Rectangle in Histogram all start to make sense.

Here is the link to the question: LeetCode 496

This article covers a plain-English explanation, a real-life analogy, brute force and optimal approaches in Java, detailed dry runs, complexity analysis, common mistakes, and FAQs.

What Is the Problem Really Asking?

You have two arrays. nums2 is the main array. nums1 is a smaller subset of nums2. For every element in nums1, find its position in nums2 and look to the right — what is the first element that is strictly greater? If none exists, return -1.

Example: nums1 = [4,1,2], nums2 = [1,3,4,2]

- For 4 in nums2: elements to its right are [2], none greater → -1
- For 1 in nums2: elements to its right are [3,4,2], first greater is 3
- For 2 in nums2: no elements to its right → -1

Output: [-1, 3, -1]

Real-Life Analogy — The Taller Person in a Queue

Imagine you are standing in a queue and you want to know: who is the first person taller than you standing somewhere behind you in the line? You look to your right one by one until you find someone taller. That person is your "next greater element." If everyone behind you is shorter, your answer is -1.

Now imagine doing this for every person in the queue efficiently — instead of each person looking one by one, you use a smart system that processes everyone in a single pass.
That smart system is the Monotonic Stack.

Approach 1: Brute Force (Beginner Friendly)

The Idea

For each element in nums1, find its position in nums2, then scan everything to its right to find the first greater element.

```java
public int[] nextGreaterElement(int[] nums1, int[] nums2) {
    int[] ans = new int[nums1.length];
    for (int i = 0; i < nums1.length; i++) {
        int found = -1;
        boolean seen = false;
        for (int j = 0; j < nums2.length; j++) {
            if (seen && nums2[j] > nums1[i]) {
                found = nums2[j];
                break;
            }
            if (nums2[j] == nums1[i]) {
                seen = true;
            }
        }
        ans[i] = found;
    }
    return ans;
}
```

Simple to understand but inefficient: for each element in nums1 you scan the entire nums2.

Time Complexity: O(m × n), where m = nums1.length and n = nums2.length
Space Complexity: O(1), ignoring the output array

This works for the given constraints (n ≤ 1000) but will not scale for larger inputs. The follow-up specifically asks for better.

Approach 2: Monotonic Stack + HashMap (Optimal Solution) ✅

The Idea

This is the optimal solution. The key insight: instead of answering queries for nums1 elements one by one, precompute the next greater element for every element in nums2 and store the results in a HashMap. Answering each nums1 query then becomes a simple HashMap lookup.

To precompute efficiently, we use a Monotonic Stack — a stack that always stays in decreasing order from bottom to top.

Why traverse from right to left? Because we are looking for the next greater element to the right.
Starting from the right end, by the time we process any element, we have already seen everything to its right.

Algorithm:

1. Traverse nums2 from right to left
2. Maintain a stack of "candidate" next greater elements
3. For the current element, pop all stack elements that are smaller or equal — they can never be the next greater for anything to the left
4. If the stack is empty → next greater is -1; otherwise → top of stack is the answer
5. Push the current element onto the stack
6. Store the result in a HashMap
7. Look up each nums1 element in the HashMap

```java
public int[] nextGreaterElement(int[] nums1, int[] nums2) {
    Stack<Integer> st = new Stack<>();
    HashMap<Integer, Integer> mp = new HashMap<>();

    // Precompute next greater for every element in nums2
    for (int i = nums2.length - 1; i >= 0; i--) {
        // Pop elements smaller than or equal to current — they are useless
        while (!st.empty() && nums2[i] >= st.peek()) {
            st.pop();
        }
        // Top of stack is the next greater, or -1 if empty
        mp.put(nums2[i], st.empty() ? -1 : st.peek());
        // Push current element as a candidate for elements to its left
        st.push(nums2[i]);
    }

    // Answer queries for nums1
    int[] ans = new int[nums1.length];
    for (int i = 0; i < nums1.length; i++) {
        ans[i] = mp.get(nums1[i]);
    }
    return ans;
}
```

Why Is the Stack Monotonic?

After popping smaller elements, the stack always maintains a decreasing order from bottom to top.
This means the top of the stack at any point is always the smallest element seen so far to the right — making it the best candidate for "next greater."

This is called a Monotonic Decreasing Stack and it is the heart of this entire pattern.

Detailed Dry Run — nums2 = [1,3,4,2]

We traverse from right to left:

i = 3, nums2[3] = 2
- Stack is empty, no elements to pop
- Stack empty → mp.put(2, -1)
- Push 2 → stack: [2]

i = 2, nums2[2] = 4
- Stack top is 2, and 4 >= 2 → pop 2 → stack: []
- Stack empty → mp.put(4, -1)
- Push 4 → stack: [4]

i = 1, nums2[1] = 3
- Stack top is 4, and 3 < 4 → stop popping
- Stack not empty → mp.put(3, 4)
- Push 3 → stack: [4, 3]

i = 0, nums2[0] = 1
- Stack top is 3, and 1 < 3 → stop popping
- Stack not empty → mp.put(1, 3)
- Push 1 → stack: [4, 3, 1]

HashMap after processing nums2: 1 → 3, 2 → -1, 3 → 4, 4 → -1

Now answer nums1 = [4, 1, 2]:
- nums1[0] = 4 → mp.get(4) = -1
- nums1[1] = 1 → mp.get(1) = 3
- nums1[2] = 2 → mp.get(2) = -1

✅ Output: [-1, 3, -1]

Time Complexity: O(n + m) — n for processing nums2, m for answering nums1 queries
Space Complexity: O(n) — the HashMap and Stack each store at most n elements

Why Pop Elements Smaller Than Current?

This is the most important thing to understand in this problem. When we are at element x and we see a stack element y where y < x, we pop y. Why?

Because we go right to left, x sits between y and every element we will process next, and x is already greater than y. So for any element z to the left of x: if z < x, then z meets x before y and x is greater, so x (or something even closer) is z's answer, not y; if z >= x, then z > y as well, so y cannot be z's next greater either. Either way, y would never be the "next greater" for anything. It becomes useless and gets discarded.

This is why the stack stays decreasing — every element we keep is a legitimate candidate for being someone's next greater element.

How This Differs From Previous Stack Problems

You have been solving stack problems with strings — backspace, stars, adjacent duplicates.
This problem introduces the stack for arrays and searching, which is a step up in complexity.

The pattern shift is: instead of using the stack to build or reduce a string, we use it to maintain a window of candidates while scanning. This monotonic stack idea is what powers many hard problems like Largest Rectangle in Histogram, Trapping Rain Water, and Daily Temperatures.

Common Mistakes to Avoid

1. Using > instead of >= in the pop condition. We pop when nums2[i] >= st.peek(). If you pop only on >, elements equal to the current one stay on the stack and give wrong answers, since we need a strictly greater element.

2. Traversing left to right instead of right to left. Going left to right makes it hard to know what is to the right of the current element. Always go right to left for "next greater to the right" problems.

3. Forgetting that the HashMap lookup is what makes nums1 queries efficient. Some people recompute inside the nums1 loop. Always precompute into a HashMap — that is the whole point of the optimization.

FAQs — People Also Ask

Q1. What is a Monotonic Stack and why is it used in LeetCode 496?
A Monotonic Stack is a stack that maintains its elements in either increasing or decreasing order. In LeetCode 496, a Monotonic Decreasing Stack is used to efficiently find the next greater element for every number in nums2 in a single pass, reducing time complexity from O(n²) to O(n).

Q2. What is the time complexity of the LeetCode 496 optimal solution?
The optimal solution runs in O(n + m) time, where n is the length of nums2 and m is the length of nums1. Processing nums2 takes O(n) and answering all nums1 queries via the HashMap takes O(m).

Q3. Why do we traverse nums2 from right to left?
Because we are looking for the next greater element to the right. Starting from the right end means that by the time we process any element, we have already seen all elements to its right and stored the candidates in the stack.

Q4. Is LeetCode 496 asked in coding interviews?
Yes, it is commonly used as an introduction to the Monotonic Stack pattern at companies like Amazon, Google, and Microsoft. It often appears as a warmup before harder follow-ups like Next Greater Element II (circular array) or Daily Temperatures.

Q5. What is the difference between LeetCode 496 and LeetCode 739 Daily Temperatures?
Both use the same Monotonic Stack pattern. In 496 you return the actual next greater value. In 739 you return the number of days (index difference) until a warmer temperature. The core stack logic is identical.

Similar LeetCode Problems to Practice Next

- 739. Daily Temperatures — Medium — days until warmer temperature, same pattern
- 503. Next Greater Element II — Medium — circular array version
- 901. Online Stock Span — Medium — monotonic stack with span counting
- 84. Largest Rectangle in Histogram — Hard — classic monotonic stack
- 42. Trapping Rain Water — Hard — monotonic stack or two pointers

Conclusion

LeetCode 496 Next Greater Element I is the perfect entry point into the Monotonic Stack pattern. The brute force is easy to understand, but the real learning happens when you see why the stack stays decreasing and how that single insight collapses an O(n²) problem into O(n).

Once you truly understand why we pop smaller elements and how the HashMap bridges the gap between precomputation and query answering, the entire family of Next Greater Element problems becomes approachable — including the harder circular and histogram variants.
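To see the optimal approach end to end, here is a self-contained sketch of the monotonic stack solution from this article, run on the article's example input. The class name is illustrative, and ArrayDeque is used in place of java.util.Stack, since Deque is the stack implementation recommended by the Java documentation:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class NextGreaterDemo {
    // Monotonic decreasing stack + HashMap, exactly as described above.
    static int[] nextGreaterElement(int[] nums1, int[] nums2) {
        Deque<Integer> st = new ArrayDeque<>();
        Map<Integer, Integer> mp = new HashMap<>();
        // Right-to-left pass over nums2: precompute every next greater element
        for (int i = nums2.length - 1; i >= 0; i--) {
            while (!st.isEmpty() && nums2[i] >= st.peek()) st.pop();
            mp.put(nums2[i], st.isEmpty() ? -1 : st.peek());
            st.push(nums2[i]);
        }
        // Each nums1 query is now a single map lookup
        int[] ans = new int[nums1.length];
        for (int i = 0; i < nums1.length; i++) ans[i] = mp.get(nums1[i]);
        return ans;
    }

    public static void main(String[] args) {
        int[] result = nextGreaterElement(new int[]{4, 1, 2}, new int[]{1, 3, 4, 2});
        System.out.println(Arrays.toString(result)); // [-1, 3, -1]
    }
}
```

Running this prints exactly the output derived in the dry run above.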

Tags: Array, Stack, Monotonic Stack, HashMap, Easy, LeetCode
What Is Dynamic Programming? Origin Story, Real-Life Uses, LeetCode Problems & Complete Beginner Guide


Introduction — Why Dynamic Programming Feels Hard (And Why It Isn't)

If you've ever stared at a LeetCode problem, read the solution, understood every single line, and still had absolutely no idea how someone arrived at it — welcome. You've just experienced the classic Dynamic Programming (DP) confusion.

DP has a reputation. People treat it like some dark art reserved for competitive programmers or Google engineers. The truth? Dynamic Programming is one of the most logical, learnable, and satisfying techniques in all of computer science. Once it clicks, it really clicks.

This guide will take you from zero to genuinely confident. We'll cover where DP came from, how it works, what patterns to learn, how to recognize DP problems, real-world places it shows up, LeetCode problems to practice, time complexity analysis, and the mistakes that trip up even experienced developers.

Let's go.

The Origin Story — Who Invented Dynamic Programming and Why?

The term "Dynamic Programming" was coined by Richard Bellman in the early 1950s while working at RAND Corporation. Here's the funny part: the name was deliberately chosen to sound impressive and vague.

Bellman was doing mathematical research whose funding depended on the US Secretary of Defense, Charles Wilson — who would have found it difficult to support if it were described accurately. Wilson had a well-known distaste for the word "research." So Bellman invented a name that sounded suitably grand and mathematical: Dynamic Programming.

In his autobiography, Bellman wrote that he picked the word "dynamic" because it had a precise technical meaning and was also impossible to use in a negative sense. "Programming" referred to the mathematical sense — planning and decision-making — not computer programming.

The underlying idea?
Break a complex problem into overlapping subproblems, solve each subproblem once, and store the result so you never solve it twice.

Bellman's foundational contribution was the Bellman Equation, which underpins not just algorithms but also economics, operations research, and modern reinforcement learning.

So the next time DP feels frustrating, remember — even its inventor named it specifically to confuse people. You're in good company.

What Is Dynamic Programming? (Simple Definition)

Dynamic Programming is an algorithmic technique used to solve problems by:

1. Breaking them down into smaller overlapping subproblems
2. Solving each subproblem only once
3. Storing the result (memoization or tabulation)
4. Building up the final solution from those stored results

The key insight is overlapping subproblems + optimal substructure.

Overlapping subproblems means the same smaller problems come up again and again. Instead of solving them every time (like plain recursion does), DP solves them once and caches the answer.

Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems.

If a problem has both these properties, it's a DP problem.

The Two Approaches to Dynamic Programming

1. Top-Down with Memoization (Recursive + Cache)

You write a recursive solution exactly as you would naturally, but add a cache (usually a dictionary or array) to store results you've already computed.

```
fib(n):
    if n in cache: return cache[n]
    if n <= 1: return n
    cache[n] = fib(n-1) + fib(n-2)
    return cache[n]
```

This is called memoization — remember what you computed so you don't repeat yourself.

Pros: Natural to write, mirrors the recursive thinking, easy to reason about.
Cons: Stack overhead from recursion, risk of stack overflow on large inputs.

2. Bottom-Up with Tabulation (Iterative)

You figure out the order in which subproblems need to be solved, then solve them iteratively from the smallest up, filling a table.

```
fib(n):
    dp = [0, 1]
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
```

This is called tabulation — fill a table, cell by cell, bottom to top.

Pros: No recursion overhead, usually faster in practice, easier to optimize space.
Cons: Requires thinking about the order of computation upfront.

🧩 Dynamic Programming Template Code

Before diving into how to recognize DP problems, here are ready-to-use Java templates for every major DP pattern. Think of these as your reusable blueprints — every DP problem you ever solve will fit into one of these structures. Just define your state, plug in your recurrence relation, and you are good to go.

Template 1 — Top-Down (Memoization)

```java
import java.util.HashMap;
import java.util.Map;

public class TopDownDP {
    Map<Integer, Integer> memo = new HashMap<>();

    public int solve(int n) {
        // Base case
        if (n <= 1) return n;
        // Check cache
        if (memo.containsKey(n)) return memo.get(n);
        // Recurrence relation — change this part for your problem
        int result = solve(n - 1) + solve(n - 2);
        // Store in cache
        memo.put(n, result);
        return result;
    }
}
```

Template 2 — Bottom-Up (Tabulation)

```java
public class BottomUpDP {
    public int solve(int n) {
        // Guard small inputs so the table indices below are valid
        if (n <= 1) return n;
        // Create DP table
        int[] dp = new int[n + 1];
        // Base cases
        dp[0] = 0;
        dp[1] = 1;
        // Fill the table bottom-up
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}
```

Template 3 — Bottom-Up with Space Optimization

```java
public class SpaceOptimizedDP {
    public int solve(int n) {
        // Guard small inputs
        if (n <= 1) return n;
        // Only keep the last two values instead of a full table
        int prev2 = 0;
        int prev1 = 1;
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            int curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }
}
```

Template 4 — 2D DP (Two Sequences or Grid)

```java
public class TwoDimensionalDP {
    public int solve(String s1, String s2) {
        int m = s1.length();
        int n = s2.length();
        // Create 2D DP table
        int[][] dp = new int[m + 1][n + 1];
        // Base cases — first row and column
        for (int i = 0; i <= m; i++) dp[i][0] = i;
        for (int j = 0; j <= n; j++) dp[0][j] = j;
        // Fill the table cell by cell
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                // Recurrence relation — change this part for your problem
                // (this one computes Edit Distance)
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1];
                } else {
                    dp[i][j] = 1 + Math.min(dp[i - 1][j],
                            Math.min(dp[i][j - 1], dp[i - 1][j - 1]));
                }
            }
        }
        return dp[m][n];
    }
}
```

Template 5 — Knapsack Pattern

```java
public class KnapsackDP {
    public int solve(int[] weights, int[] values, int capacity) {
        int n = weights.length;
        // dp[i][w] = max value using the first i items with capacity w
        int[][] dp = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= capacity; w++) {
                // Don't take item i
                dp[i][w] = dp[i - 1][w];
                // Take item i if it fits
                if (weights[i - 1] <= w) {
                    dp[i][w] = Math.max(dp[i][w],
                            values[i - 1] + dp[i - 1][w - weights[i - 1]]);
                }
            }
        }
        return dp[n][capacity];
    }
}
```

💡 How to use these templates:

Step 1 — Identify which pattern your problem fits into.
Step 2 — Define what dp[i] or dp[i][j] means in plain English before writing any code.
Step 3 — Write your recurrence relation on paper first.
Step 4 — Plug it into the matching template above.
Step 5 — Handle your specific base cases carefully.

🎥 Visual Learning Resource — Watch This Before Moving Forward

If you prefer learning by watching before reading, this free full-length course by freeCodeCamp is one of the best Dynamic Programming resources on the internet. Watch it alongside this guide for maximum understanding.

Credit: freeCodeCamp — a free, nonprofit coding education platform.

How to Recognize a Dynamic Programming Problem

Ask yourself these four questions:

1. Can I define the problem in terms of smaller versions of itself? If you can write a recursive formula (recurrence relation), DP might apply.

2. Do the subproblems overlap?
If a naive recursive solution would recompute the same thing many times, DP is the right tool.

3. Is there optimal substructure? Is the best answer to the big problem made up of best answers to smaller problems?

4. Are you looking for a count, minimum, maximum, or yes/no answer? DP problems often ask: "What is the minimum cost?", "How many ways?", "Can we achieve X?"

Red-flag words in problem statements: minimum, maximum, shortest, longest, count the number of ways, can we reach, is it possible, fewest steps.

The Core DP Patterns You Must Learn

Mastering DP is really about recognizing patterns. Here are the most important ones:

Pattern 1 — 1D DP (Linear). Problems where the state depends on previous elements in a single sequence. Examples: Fibonacci, Climbing Stairs, House Robber.

Pattern 2 — 2D DP (Grid / Two-Sequence). Problems with two dimensions of state, often grids or two strings. Examples: Longest Common Subsequence, Edit Distance, Unique Paths.

Pattern 3 — Interval DP. You consider all possible intervals or subarrays and build solutions from them. Examples: Matrix Chain Multiplication, Burst Balloons, Palindrome Partitioning.

Pattern 4 — Knapsack DP (0/1 and Unbounded). You decide whether to include or exclude items under a capacity constraint. Examples: 0/1 Knapsack, Coin Change, Partition Equal Subset Sum.

Pattern 5 — DP on Trees. State is defined per node; you combine results from children. Examples: Diameter of Binary Tree, House Robber III, Maximum Path Sum.

Pattern 6 — DP on Subsets / Bitmask DP. State includes a bitmask representing which elements have been chosen. Examples: Travelling Salesman Problem, Shortest Superstring.

Pattern 7 — DP on Strings. Matching, editing, or counting arrangements within strings. Examples: Longest Palindromic Subsequence, Regular Expression Matching, Wildcard Matching.

Top LeetCode Problems to Practice Dynamic Programming (With Links)

Here are the essential problems, organized by difficulty and pattern.
Solve them in this order.

Beginner — Warm Up

| Problem | Pattern | Link |
| --- | --- | --- |
| Climbing Stairs | 1D DP | https://leetcode.com/problems/climbing-stairs/ |
| Fibonacci Number | 1D DP | https://leetcode.com/problems/fibonacci-number/ |
| House Robber | 1D DP | https://leetcode.com/problems/house-robber/ |
| Min Cost Climbing Stairs | 1D DP | https://leetcode.com/problems/min-cost-climbing-stairs/ |
| Best Time to Buy and Sell Stock | 1D DP | https://leetcode.com/problems/best-time-to-buy-and-sell-stock/ |

Intermediate — Core Patterns

| Problem | Pattern | Link |
| --- | --- | --- |
| Coin Change | Knapsack | https://leetcode.com/problems/coin-change/ |
| Longest Increasing Subsequence | 1D DP | https://leetcode.com/problems/longest-increasing-subsequence/ |
| Longest Common Subsequence | 2D DP | https://leetcode.com/problems/longest-common-subsequence/ |
| 0/1 Knapsack (via Subset Sum) | Knapsack | https://leetcode.com/problems/partition-equal-subset-sum/ |
| Unique Paths | 2D Grid DP | https://leetcode.com/problems/unique-paths/ |
| Jump Game | 1D DP / Greedy | https://leetcode.com/problems/jump-game/ |
| Word Break | String DP | https://leetcode.com/problems/word-break/ |
| Decode Ways | 1D DP | https://leetcode.com/problems/decode-ways/ |
| Edit Distance | 2D String DP | https://leetcode.com/problems/edit-distance/ |
| Triangle | 2D DP | https://leetcode.com/problems/triangle/ |

Advanced — Interview Level

| Problem | Pattern | Link |
| --- | --- | --- |
| Burst Balloons | Interval DP | https://leetcode.com/problems/burst-balloons/ |
| Regular Expression Matching | String DP | https://leetcode.com/problems/regular-expression-matching/ |
| Wildcard Matching | String DP | https://leetcode.com/problems/wildcard-matching/ |
| Palindrome Partitioning II | Interval DP | https://leetcode.com/problems/palindrome-partitioning-ii/ |
| Maximum Profit in Job Scheduling | DP + Binary Search | https://leetcode.com/problems/maximum-profit-in-job-scheduling/ |
| Distinct Subsequences | 2D DP | https://leetcode.com/problems/distinct-subsequences/ |
| Cherry Pickup | 3D DP | https://leetcode.com/problems/cherry-pickup/ |

Real-World Use Cases of Dynamic Programming

DP is not just for coding interviews. It is deeply embedded in the technology you use every day.
1. Google Maps & Navigation (Shortest Path). The routing engines behind GPS apps use shortest-path algorithms such as Dijkstra and the DP-based Bellman-Ford to find the shortest or fastest path between two points across millions of nodes.

2. Spell Checkers & Autocorrect (Edit Distance). When your phone corrects "teh" to "the," it is computing Edit Distance — a classic DP problem — between what you typed and candidate words in the dictionary.

3. DNA Sequence Alignment (Bioinformatics). Researchers use the Needleman-Wunsch and Smith-Waterman algorithms — both DP — to align DNA and protein sequences, find similarities between species, and identify mutations.

4. Video Compression (MPEG, H.264). Modern video codecs use DP to determine the most efficient way to encode video frames, deciding which frames to store as full images and which to store as differences from the previous frame.

5. Financial Portfolio Optimization. Investment algorithms use DP to find the optimal allocation of assets under risk constraints — essentially a variant of the knapsack problem.

6. Natural Language Processing (NLP). The Viterbi algorithm — used in speech recognition, part-of-speech tagging, and machine translation — is a DP algorithm. Every time Siri or Google Assistant understands your sentence, DP played a role.

7. Game AI (Chess, Checkers). Game trees and minimax algorithms with memoization use DP to evaluate board positions and find the best move without recomputing already-seen positions.

8. Compiler Optimization. Compilers use DP to decide the optimal order of operations and instruction scheduling to generate the most efficient machine code.

9. Text Justification (Word Processors). Microsoft Word and LaTeX use DP to optimally break paragraphs into lines — minimizing raggedness and maximizing visual appeal.
10. Resource Scheduling in Cloud Computing. AWS, Google Cloud, and Azure use DP-based scheduling to assign computational tasks to servers in the most cost-efficient way possible.

Time Complexity Analysis of Common DP Problems

Understanding the time complexity of DP is critical for interviews and for building scalable systems.

| Problem | Time Complexity | Space Complexity | Notes |
| --- | --- | --- | --- |
| Fibonacci (naive recursion) | O(2ⁿ) | O(n) | Exponential — terrible |
| Fibonacci (DP) | O(n) | O(1) with optimization | Linear — excellent |
| Longest Common Subsequence | O(m × n) | O(m × n) | m, n = lengths of the two strings |
| Edit Distance | O(m × n) | O(m × n) | Can optimize space to O(n) |
| 0/1 Knapsack | O(n × W) | O(n × W) | n = items, W = capacity |
| Coin Change | O(n × amount) | O(amount) | Classic tabulation |
| Longest Increasing Subsequence | O(n²) or O(n log n) | O(n) | Binary search version is faster |
| Matrix Chain Multiplication | O(n³) | O(n²) | Interval DP |
| Travelling Salesman (bitmask) | O(2ⁿ × n²) | O(2ⁿ × n) | Still exponential but manageable for small n |

The general rule: DP trades space for time. You spend memory to avoid recomputation. The time complexity equals the number of unique states multiplied by the work done per state.

How to Learn and Master Dynamic Programming — Step by Step

Here is an honest, structured path to mastery:

Step 1 — Get recursion absolutely solid first. DP is memoized recursion at its core. If you cannot write clean recursive solutions confidently, DP will remain confusing. Practice at least 20 pure recursion problems first.

Step 2 — Start with the classics. Fibonacci → Climbing Stairs → House Robber → Coin Change. These teach you the core pattern of defining state and transition without overwhelming you.

Step 3 — Learn to define state explicitly. Before writing any code, ask: "What does dp[i] represent?" Write it in plain English: "dp[i] = the minimum cost to reach step i." This single habit separates good DP thinkers from struggling ones.

Step 4 — Write the recurrence relation before coding. On paper or in a comment. Example: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).
If you can write the recurrence, the code writes itself.

Step 5 — Master one pattern at a time. Don't jump between knapsack and interval DP in the same week. Spend a few days on each pattern until it feels intuitive.

Step 6 — Solve the same problem both ways. Top-down and bottom-up. This builds deep understanding of what DP is actually doing.

Step 7 — Optimize space after getting correctness. Many 2D DP solutions can use a single row instead of a full matrix. Learn this optimization after you understand the full solution.

Step 8 — Do timed practice under interview conditions. Give yourself 35 minutes per problem. Review what you got wrong. DP is a muscle — it builds with reps.

Common Mistakes in Dynamic Programming (And How to Avoid Them)

Mistake 1 — Jumping to code before defining state. The most common DP error. Always define what dp[i] or dp[i][j] means before writing a single line of code.

Mistake 2 — Wrong base cases. A single wrong base case corrupts every answer built on top of it. Trace through your base cases manually on a tiny example before running code.

Mistake 3 — Off-by-one errors in indexing. Whether your dp array is 0-indexed or 1-indexed must be 100% consistent throughout. This causes more bugs in DP than almost anything else.

Mistake 4 — Confusing top-down with bottom-up state order. In bottom-up DP, you must ensure that when you compute dp[i], all values it depends on are already filled. If you compute in the wrong order, you get garbage answers.

Mistake 5 — Memoizing in the wrong dimension. In 2D problems, some people cache only one dimension when the state actually requires two. Always identify all variables that affect the outcome.

Mistake 6 — Using global mutable state in recursion. If you use a shared array and don't clear it between test cases, you'll get wrong answers on subsequent inputs. Always scope your cache correctly.

Mistake 7 — Not considering the full state space.
In problems like Knapsack, forgetting that the state is (item index, remaining capacity) — not just item index — leads to fundamentally wrong solutions.

Mistake 8 — Giving up after not recognizing the pattern immediately. DP problems don't announce themselves. The skill is learning to ask "are there overlapping subproblems here?" on every problem. This takes time. Don't mistake unfamiliarity for inability.

Frequently Asked Questions About Dynamic Programming

Q: Is Dynamic Programming the same as recursion?
Not exactly. Recursion is a technique for breaking problems into smaller pieces. DP is recursion plus memoization — or iterative tabulation. All DP can be written recursively, but not all recursion is DP.

Q: What is the difference between DP and Divide and Conquer?
Divide and Conquer (like Merge Sort) breaks problems into non-overlapping subproblems. DP is used when subproblems overlap — meaning the same subproblem is solved multiple times in a naive approach.

Q: How do I know when NOT to use DP?
If the subproblems don't overlap (no repeated computation), greedy or divide-and-conquer may be better. If the problem has no optimal substructure, DP won't give a correct answer.

Q: Do I need to memorize DP solutions for interviews?
No. You need to recognize patterns and be able to derive the recurrence relation. Memorizing solutions without understanding them will fail you in interviews. Focus on the thinking process.

Q: How long does it take to get good at DP?
Most people start to feel genuinely comfortable after solving 40–60 varied DP problems with deliberate practice. The first 10 feel impossible. The next 20 feel hard. After 50, patterns start feeling obvious.

Q: What programming language is best for DP?
Any language works. Python is often used for learning because its dictionaries make memoization trivial. C++ is preferred in competitive programming for its speed. For interviews, use whatever language you're most comfortable in.

Q: What is space optimization in DP?
Many DP problems only look back one or two rows to compute the current row. In those cases, you can replace an n×m table with just two arrays (or even one), reducing space complexity from O(n×m) to O(m). This is called space optimization, or the rolling array technique.

Q: Can DP be applied to graph problems?
Absolutely. Shortest path algorithms like Bellman-Ford are DP. Longest path in a DAG is DP. DP on trees is a rich subfield. Anywhere you have states and transitions, DP can potentially apply.

Q: Is Greedy a type of Dynamic Programming?
Greedy is related but distinct. Greedy makes locally optimal choices without reconsidering. DP considers all choices and picks the globally optimal one. Some DP solutions reduce to greedy when the structure allows, but they are different techniques.

Q: What resources should I use to learn DP?
For structured learning: Neetcode.io (organized problem list), Striver's DP Series on YouTube, and the book "Introduction to Algorithms" (CLRS) for theoretical depth. For practice: LeetCode's Dynamic Programming study plan and Codeforces for competitive DP.

Final Thoughts — Dynamic Programming Is a Superpower

Dynamic Programming is genuinely one of the most powerful ideas in computer science. It shows up in your GPS, your autocorrect, your streaming video, your bank's risk models, and the AI assistants you talk to daily.

The path to mastering it is not memorization. It is developing the habit of asking: can I break this into smaller problems that overlap? And then learning to define state clearly, write the recurrence, and trust the process.

Start with Climbing Stairs. Write dp[i] in plain English before every problem. Solve everything twice — top-down and bottom-up. Do 50 problems with genuine reflection, not just accepted solutions.

The click moment will come. And when it does, you'll wonder why it ever felt hard.
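As a concrete starting point, here is a sketch of Climbing Stairs (LeetCode 70) solved with the space-optimized bottom-up pattern described in this guide. The state is dp[i] = the number of distinct ways to reach step i, with recurrence dp[i] = dp[i-1] + dp[i-2]; the class name is illustrative:

```java
public class ClimbingStairs {
    // dp[i] = number of distinct ways to reach step i.
    // Recurrence: dp[i] = dp[i - 1] + dp[i - 2] (the last move was 1 or 2 steps).
    // Only the previous two values are kept, so space is O(1).
    public static int climbStairs(int n) {
        if (n <= 2) return n; // base cases: 1 way to reach step 1, 2 ways to reach step 2
        int prev2 = 1, prev1 = 2;
        for (int i = 3; i <= n; i++) {
            int curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }

    public static void main(String[] args) {
        System.out.println(climbStairs(5)); // 8
    }
}
```

Note how the habit applies: the meaning of dp[i] was written in plain English first, the recurrence second, and the code is just the space-optimized template with those two pieces plugged in.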

Tags: Dynamic Programming, Memoization, Tabulation, Java, Origin Story, Richard Bellman
Recursion in Java - Complete Guide With Examples and Practice Problems


Introduction

If there is one topic in programming that confuses beginners more than anything else, it is recursion. Most people read the definition, nod their head, and then immediately freeze when they have to write recursive code themselves.

The problem is not that recursion is genuinely hard. The problem is that most explanations start with code before building the right mental model. Once you have the right mental model, recursion clicks permanently and you start seeing it everywhere — in tree problems, graph problems, backtracking, dynamic programming, divide and conquer, and more.

This guide covers everything from the ground up: what recursion is, how the call stack works, how to identify base cases and recursive cases, every type of recursion, common patterns, time and space complexity analysis, the most common mistakes, and the top LeetCode problems to practice.

By the end of this article, recursion will not feel like magic anymore. It will feel like a natural tool you reach for confidently.

What Is Recursion?

Recursion is when a function calls itself to solve a smaller version of the same problem.

That is the complete definition. But let us make it concrete.

Imagine you want to count down from 5 to 1. One way is a loop. Another way is: print 5, then solve the exact same problem for counting down from 4 to 1. Then print 4, solve for 3. And so on until you reach the base — there is nothing left to count down.

```java
void countDown(int n) {
    if (n == 0) return;  // stop here
    System.out.println(n);
    countDown(n - 1);    // solve the smaller version
}
```

The function countDown calls itself with a smaller input each time. Eventually it reaches 0 and stops. That stopping condition is the most important part of any recursive function — the base case.

The Two Parts Every Recursive Function Must Have

Every correctly written recursive function has exactly two parts.
Without both, the function either gives wrong answers or runs forever.Part 1: Base CaseThe base case is the condition under which the function stops calling itself and returns a direct answer. It is the smallest version of the problem that you can solve without any further recursion.Without a base case, recursion never stops and you get a StackOverflowError — Java's way of telling you the call stack ran out of memory.Part 2: Recursive CaseThe recursive case is where the function calls itself with a smaller or simpler input — moving closer to the base case with each call. If your recursive case does not make the problem smaller, you have an infinite loop.Think of it like a staircase. The base case is the ground floor. The recursive case is each step going down. Every step must genuinely bring you one level closer to the ground.How Recursion Works — The Call StackThis is the mental model that most explanations skip, and it is the reason recursion confuses people.Every time a function is called in Java, a new stack frame is created and pushed onto the call stack. This frame stores the function's local variables, parameters, and where to return to when the function finishes.When a recursive function calls itself, a new frame is pushed on top. When that call finishes, its frame is popped and execution returns to the previous frame.Let us trace countDown(3) through the call stack:countDown(3) called → frame pushed prints 3 calls countDown(2) → frame pushed prints 2 calls countDown(1) → frame pushed prints 1 calls countDown(0) → frame pushed n == 0, return → frame popped back in countDown(1), return → frame popped back in countDown(2), return → frame popped back in countDown(3), return → frame poppedOutput: 3, 2, 1The call stack grows as calls go deeper, then shrinks as calls return. 
This is why recursion uses O(n) space for n levels deep — each level occupies one stack frame in memory.Your First Real Recursive Function — FactorialFactorial is the classic first recursion example. n! = n × (n-1) × (n-2) × ... × 1Notice the pattern — n! = n × (n-1)!. The factorial of n is n times the factorial of n-1. That recursive structure makes it perfect for recursion.public int factorial(int n) { // base case if (n == 0 || n == 1) return 1; // recursive case return n * factorial(n - 1);}Dry Run — factorial(4)factorial(4)= 4 * factorial(3)= 4 * 3 * factorial(2)= 4 * 3 * 2 * factorial(1)= 4 * 3 * 2 * 1= 24The call stack builds up going in, then multiplications happen coming back out. This "coming back out" phase is called the return phase or unwinding of the stack.Time Complexity: O(n) — n recursive calls Space Complexity: O(n) — n frames on the call stackThe Two Phases of RecursionEvery recursive function has two phases and understanding both is critical.Phase 1: The Call Phase (Going In)This happens as the function keeps calling itself with smaller inputs. Things you do before the recursive call happen in this phase — in order from the outermost call to the innermost.Phase 2: The Return Phase (Coming Back Out)This happens as each call finishes and returns to its caller. Things you do after the recursive call happen in this phase — in reverse order, from the innermost call back to the outermost.This distinction explains why the output order can be surprising:void printBothPhases(int n) { if (n == 0) return; System.out.println("Going in: " + n); // call phase printBothPhases(n - 1); System.out.println("Coming out: " + n); // return phase}For printBothPhases(3):Going in: 3Going in: 2Going in: 1Coming out: 1Coming out: 2Coming out: 3This two-phase understanding is what makes problems like reversing a string or printing a linked list backwards via recursion feel natural.Types of RecursionRecursion is not one-size-fits-all. 
There are several distinct types and knowing which type applies to a problem shapes how you write the solution.1. Direct RecursionThe function calls itself directly. This is the most common type — what we have seen so far.void direct(int n) { if (n == 0) return; direct(n - 1); // calls itself}2. Indirect RecursionFunction A calls Function B which calls Function A. They form a cycle.void funcA(int n) { if (n <= 0) return; System.out.println("A: " + n); funcB(n - 1);}void funcB(int n) { if (n <= 0) return; System.out.println("B: " + n); funcA(n - 1);}Used in: state machines, mutual recursion in parsers, certain mathematical sequences.3. Tail RecursionThe recursive call is the last operation in the function. Nothing happens after the recursive call returns — no multiplication, no addition, nothing.// NOT tail recursive — multiplication happens after returnint factorial(int n) { if (n == 1) return 1; return n * factorial(n - 1); // multiply after return — not tail}// Tail recursive — recursive call is the last thingint factorialTail(int n, int accumulator) { if (n == 1) return accumulator; return factorialTail(n - 1, n * accumulator); // last operation}Why does tail recursion matter? In languages that support tail call optimization (like Scala, Kotlin, and many functional languages), tail recursive functions can be converted to iteration internally — no stack frame accumulation, O(1) space. Java does NOT perform tail call optimization, but understanding tail recursion is still important for interviews and functional programming concepts.4. Head RecursionThe recursive call happens first, before any other processing. All processing happens in the return phase.void headRecursion(int n) { if (n == 0) return; headRecursion(n - 1); // call first System.out.println(n); // process after}// Output: 1 2 3 4 5 (processes in reverse order of calls)5. Tree RecursionThe function makes more than one recursive call per invocation. 
This creates a tree of calls rather than a linear chain. Fibonacci is the classic example.int fibonacci(int n) { if (n <= 1) return n; return fibonacci(n - 1) + fibonacci(n - 2); // TWO recursive calls}The call tree for fibonacci(4): fib(4) / \ fib(3) fib(2) / \ / \ fib(2) fib(1) fib(1) fib(0) / \ fib(1) fib(0)Time Complexity: O(2ⁿ) — exponential! Each call spawns two more. Space Complexity: O(n) — maximum depth of the call treeThis is why memoization (caching results) is so important for tree recursion — it converts O(2ⁿ) to O(n) by never recomputing the same subproblem twice.6. Mutual RecursionA specific form of indirect recursion where two functions call each other alternately to solve a problem. Different from indirect recursion in that the mutual calls are the core mechanism of the solution.// Check if a number is even or odd using mutual recursionboolean isEven(int n) { if (n == 0) return true; return isOdd(n - 1);}boolean isOdd(int n) { if (n == 0) return false; return isEven(n - 1);}Common Recursion Patterns in DSAThese are the patterns you will see over and over in interview problems. 
Recognizing them is more important than memorizing solutions.Pattern 1: Linear Recursion (Do Something, Recurse on Rest)Process the current element, then recurse on the remaining problem.// Sum of arrayint arraySum(int[] arr, int index) { if (index == arr.length) return 0; // base case return arr[index] + arraySum(arr, index + 1); // current + rest}Pattern 2: Divide and Conquer (Split Into Two Halves)Split the problem into two halves, solve each recursively, combine results.// Merge Sortvoid mergeSort(int[] arr, int left, int right) { if (left >= right) return; // base case — single element int mid = (left + right) / 2; mergeSort(arr, left, mid); // sort left half mergeSort(arr, mid + 1, right); // sort right half merge(arr, left, mid, right); // combine}Pattern 3: Backtracking (Try, Recurse, Undo)Try a choice, recurse to explore it, undo the choice when backtracking.// Generate all subsetsvoid subsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { if (index == nums.length) { result.add(new ArrayList<>(current)); return; } // Choice 1: include nums[index] current.add(nums[index]); subsets(nums, index + 1, current, result); current.remove(current.size() - 1); // undo // Choice 2: exclude nums[index] subsets(nums, index + 1, current, result);}Pattern 4: Tree Recursion (Left, Right, Combine)Recurse on left subtree, recurse on right subtree, combine or process results.// Height of binary treeint height(TreeNode root) { if (root == null) return 0; // base case int leftHeight = height(root.left); // solve left int rightHeight = height(root.right); // solve right return 1 + Math.max(leftHeight, rightHeight); // combine}Pattern 5: Memoization (Cache Recursive Results)Store results of recursive calls so the same subproblem is never solved twice.Map<Integer, Integer> memo = new HashMap<>();int fibonacci(int n) { if (n <= 1) return n; if (memo.containsKey(n)) return memo.get(n); // return cached int result = fibonacci(n - 1) + fibonacci(n - 2); 
memo.put(n, result); // cache before returning return result;}This converts Fibonacci from O(2ⁿ) to O(n) time with O(n) space — a massive improvement.Recursion vs Iteration — When to Use WhichThis is one of the most common interview questions about recursion. Here is a clear breakdown:Use Recursion when:The problem has a naturally recursive structure (trees, graphs, divide and conquer)The solution is significantly cleaner and easier to understand recursivelyThe problem involves exploring multiple paths or choices (backtracking)The depth of recursion is manageable (not too deep to cause stack overflow)Use Iteration when:The problem is linear and a loop is equally clearMemory is a concern (iteration uses O(1) stack space vs O(n) for recursion)Performance is critical and function call overhead mattersJava's stack size limit could be hit (with default JVM stack sizes, deep recursion typically overflows after several thousand frames, depending on frame size)The key rule: Every recursive solution can be converted to an iterative one (usually using an explicit stack).
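That conversion recipe can be made concrete. As a sketch (the class name IterativeFactorial is illustrative, not from the article), here is factorial converted to iteration with an explicit java.util.ArrayDeque standing in for the call stack; the two loops mirror the call phase and return phase described earlier:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeFactorial {
    // Factorial computed iteratively with an explicit stack playing the
    // role of the call stack: push a "frame" per pending call on the way
    // in, then pop and combine on the way out, mirroring the call phase
    // and return phase of the recursive version.
    static long factorial(int n) {
        Deque<Integer> stack = new ArrayDeque<>();
        while (n > 1) {            // call phase: push frames until the base case
            stack.push(n);
            n--;
        }
        long result = 1;           // base case value
        while (!stack.isEmpty()) {
            result *= stack.pop(); // return phase: unwind and combine
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(4));  // 24
        System.out.println(factorial(10)); // 3628800
    }
}
```

For a simple linear recursion like this a plain loop would do, but the push-then-unwind shape is exactly the pattern that generalizes when converting deeper recursions such as tree traversals.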
But recursive solutions for tree and graph problems are almost always cleaner to write and understand.Time and Space Complexity of Recursive FunctionsAnalyzing complexity for recursive functions requires a specific approach.The Recurrence Relation MethodExpress the time complexity as a recurrence relation and solve it.Factorial:T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = T(1) + n×O(1) = O(n)Fibonacci (naive):T(n) = T(n-1) + T(n-2) + O(1) ≈ 2×T(n-1) = O(2ⁿ)Binary Search:T(n) = T(n/2) + O(1) = O(log n) [by Master Theorem]Merge Sort:T(n) = 2×T(n/2) + O(n) = O(n log n) [by Master Theorem]Space Complexity Rule for RecursionSpace complexity of a recursive function = maximum depth of the call stack × space per frameLinear recursion (factorial, sum): O(n) spaceBinary recursion (Fibonacci naive): O(n) space (maximum depth, not number of nodes)Divide and conquer (merge sort): O(log n) space (depth of recursion tree)Memoized Fibonacci: O(n) space (memo table + call stack)Classic Recursive Problems With SolutionsProblem 1: Reverse a StringString reverse(String s) { if (s.length() <= 1) return s; // base case // last char + reverse of everything before last char return s.charAt(s.length() - 1) + reverse(s.substring(0, s.length() - 1));}Dry run for "hello":reverse("hello") = 'o' + reverse("hell")reverse("hell") = 'l' + reverse("hel")reverse("hel") = 'l' + reverse("he")reverse("he") = 'e' + reverse("h")reverse("h") = "h"Unwinding: "h" → "eh" → "leh" → "lleh" → "olleh" ✅Problem 2: Power Function (x^n)double power(double x, int n) { if (n == 0) return 1; // base case if (n < 0) return 1.0 / power(x, -n); // handle negative if (n % 2 == 0) { double half = power(x, n / 2); return half * half; // x^n = (x^(n/2))^2 } else { return x * power(x, n - 1); }}This is the fast power algorithm — O(log n) time instead of O(n).Problem 3: Fibonacci With Memoizationint[] memo = new int[100];Arrays.fill(memo, -1);int fib(int n) { if (n <= 1) return n; if (memo[n] != -1) return memo[n]; memo[n] =
fib(n - 1) + fib(n - 2); return memo[n];}Time: O(n) — each value computed once Space: O(n) — memo array + call stackProblem 4: Tower of HanoiThe classic recursion teaching problem. Move n disks from source to destination using a helper rod.void hanoi(int n, char source, char destination, char helper) { if (n == 1) { System.out.println("Move disk 1 from " + source + " to " + destination); return; } // Move n-1 disks from source to helper hanoi(n - 1, source, helper, destination); // Move the largest disk from source to destination System.out.println("Move disk " + n + " from " + source + " to " + destination); // Move n-1 disks from helper to destination hanoi(n - 1, helper, destination, source);}Time Complexity: O(2ⁿ) — minimum moves required is 2ⁿ - 1 Space Complexity: O(n) — call stack depthProblem 5: Generate All Subsets (Power Set)void generateSubsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { result.add(new ArrayList<>(current)); // add current subset for (int i = index; i < nums.length; i++) { current.add(nums[i]); // include generateSubsets(nums, i + 1, current, result); // recurse current.remove(current.size() - 1); // exclude (backtrack) }}For [1, 2, 3] — generates all 8 subsets: [], [1], [1,2], [1,2,3], [1,3], [2], [2,3], [3]Time: O(2ⁿ) — 2ⁿ subsets Space: O(n) — recursion depthProblem 6: Binary Search Recursivelyint binarySearch(int[] arr, int target, int left, int right) { if (left > right) return -1; // base case — not found int mid = left + (right - left) / 2; if (arr[mid] == target) return mid; else if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right); else return binarySearch(arr, target, left, mid - 1);}Time: O(log n) — halving the search space each time Space: O(log n) — log n frames on the call stackRecursion on Trees — The Natural HabitatTrees are where recursion truly shines. 
Every tree problem becomes elegant with recursion because a tree is itself a recursive structure — each node's left and right children are trees themselves.// Maximum depth of binary treeint maxDepth(TreeNode root) { if (root == null) return 0; return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));}// Check if tree is symmetricboolean isSymmetric(TreeNode left, TreeNode right) { if (left == null && right == null) return true; if (left == null || right == null) return false; return left.val == right.val && isSymmetric(left.left, right.right) && isSymmetric(left.right, right.left);}// Path sum — does any root-to-leaf path sum to target?boolean hasPathSum(TreeNode root, int target) { if (root == null) return false; if (root.left == null && root.right == null) return root.val == target; return hasPathSum(root.left, target - root.val) || hasPathSum(root.right, target - root.val);}Notice the pattern in all three — base case handles null, recursive case handles left and right subtrees, result combines both.How to Think About Any Recursive Problem — Step by StepThis is the framework you should apply to every new recursive problem you encounter:Step 1 — Identify the base case What is the smallest input where you know the answer directly without any recursion? For arrays it is usually empty array or single element. For trees it is null node. For numbers it is 0 or 1.Step 2 — Trust the recursive call Assume your function already works correctly for smaller inputs. Do not trace through the entire recursion mentally — just trust it. This is the Leap of Faith and it is what makes recursion feel natural.Step 3 — Express the current problem in terms of smaller problems How does the answer for size n relate to the answer for size n-1 (or n/2, or subtrees)? This relationship is your recursive case.Step 4 — Make sure each call moves toward the base case The input must become strictly smaller with each call. 
If it does not, you have infinite recursion.Step 5 — Write the base case first, then the recursive case Always. Writing the recursive case first leads to bugs because you have not defined when to stop.Common Mistakes and How to Avoid ThemMistake 1: Missing or wrong base case The most common mistake. Missing the base case causes StackOverflowError. Wrong base case causes wrong answers.Always ask — what is the simplest possible input, and what should the function return for it? Write that case first.Mistake 2: Not moving toward the base case If you call factorial(n) inside factorial(n) without reducing n, you loop forever. Every recursive call must make the problem strictly smaller.Mistake 3: Trusting your brain to trace deep recursion Do not try to trace 10 levels of recursion in your head. Trust the recursive call, verify the base case, and check that each call reduces the problem. That is all you need.Mistake 4: Forgetting to return the recursive result// WRONG — the result is discarded and nothing is returned (in Java this does not even compile: "not a statement")int sum(int n) { if (n == 0) return 0; sum(n - 1) + n; // discarded, never returned}// CORRECTint sum(int n) { if (n == 0) return 0; return sum(n - 1) + n;}Mistake 5: Modifying shared state without backtracking In backtracking problems, if you add something to a list before a recursive call, you must remove it after the call returns. Forgetting to backtrack leads to incorrect results and is one of the trickiest bugs to find.Mistake 6: Recomputing the same subproblems Naive Fibonacci computes fib(3) multiple times when computing fib(5). Use memoization whenever you notice overlapping subproblems in your recursion tree.Top LeetCode Problems on RecursionThese are organized by pattern — work through them in this order for maximum learning:Pure Recursion Basics:509. Fibonacci Number — Easy — start here, implement with and without memoization344. Reverse String — Easy — recursion on arrays206. Reverse Linked List — Easy — recursion on linked list50.
Pow(x, n) — Medium — fast power with recursionTree Recursion (Most Important):104. Maximum Depth of Binary Tree — Easy — simplest tree recursion112. Path Sum — Easy — decision recursion on tree101. Symmetric Tree — Easy — paired recursion over two subtrees110. Balanced Binary Tree — Easy — bottom-up recursion236. Lowest Common Ancestor of a Binary Tree — Medium — classic tree recursion124. Binary Tree Maximum Path Sum — Hard — advanced tree recursionDivide and Conquer:148. Sort List — Medium — merge sort on linked list240. Search a 2D Matrix II — Medium — divide and conquerBacktracking:78. Subsets — Medium — generate all subsets46. Permutations — Medium — generate all permutations77. Combinations — Medium — generate combinations79. Word Search — Medium — backtracking on grid51. N-Queens — Hard — classic backtrackingMemoization / Dynamic Programming:70. Climbing Stairs — Easy — Fibonacci variant with memoization322. Coin Change — Medium — recursion with memoization to DP139. Word Break — Medium — memoized recursionRecursion Cheat Sheet// Linear recursion templatereturnType solve(input) { if (baseCase) return directAnswer; // process current return solve(smallerInput);}// Tree recursion templatereturnType solve(TreeNode root) { if (root == null) return baseValue; returnType left = solve(root.left); returnType right = solve(root.right); return combine(left, right, root.val);}// Backtracking templatevoid backtrack(choices, current, result) { if (goalReached) { result.add(copy of current); return; } for (choice : choices) { make(choice); // add to current backtrack(...); // recurse undo(choice); // remove from current }}// Memoization templateMap<Input, Output> memo = new HashMap<>();returnType solve(input) { if (baseCase) return directAnswer; if (memo.containsKey(input)) return memo.get(input); returnType result = solve(smallerInput); memo.put(input, result); return result;}FAQs — People Also AskQ1. What is recursion in Java with a simple example?
Recursion is when a function calls itself to solve a smaller version of the same problem. A simple example is factorial — factorial(5) = 5 × factorial(4) = 5 × 4 × factorial(3) and so on until factorial(1) returns 1 directly.Q2. What is the difference between recursion and iteration? Iteration uses loops (for, while) and runs in O(1) space. Recursion uses function calls and uses O(n) stack space for n levels deep. Recursion is often cleaner for tree and graph problems. Iteration is better when memory is a concern or the problem is inherently linear.Q3. What causes StackOverflowError in Java recursion? StackOverflowError happens when recursion goes too deep — too many frames accumulate on the call stack before any of them return. This is caused by missing base case, wrong base case, or input too large for Java's default stack size limit.Q4. What is the difference between recursion and dynamic programming? Recursion solves a problem by breaking it into subproblems. Dynamic programming is recursion plus memoization — storing results of subproblems so they are never computed twice. DP converts exponential recursive solutions into polynomial ones by eliminating redundant computation.Q5. What is tail recursion and does Java support tail call optimization? Tail recursion is when the recursive call is the absolute last operation in the function. Java does NOT support tail call optimization — Java always creates a new stack frame for each call even if it is tail recursive. On the JVM, Kotlin supports it with the tailrec modifier and Scala enforces it with the @tailrec annotation.Q6. How do you convert recursion to iteration? Every recursive solution can be converted to iterative using an explicit stack data structure. The call stack's behavior is replicated manually — push the initial call, loop while stack is not empty, pop, process, and push sub-calls. Tree traversals are a common example of this conversion.ConclusionRecursion is not magic.
It is a systematic way of solving problems by expressing them in terms of smaller versions of themselves. Once you internalize the two parts (base case and recursive case), understand the call stack mentally, and learn to trust the recursive call rather than trace it completely, everything clicks.The learning path from here is clear — start with simple problems like Fibonacci and array sum. Move to tree problems where recursion is most natural. Then tackle backtracking. Finally add memoization to bridge into dynamic programming.Every hour you spend understanding recursion deeply pays dividends across the entire rest of your DSA journey. Trees, graphs, divide and conquer, backtracking, dynamic programming — all of them build on this foundation.
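As a closing sanity check on the complexity claims in this guide, here is a small sketch (the class and counter names are illustrative) that counts how many calls naive and memoized Fibonacci actually make, showing the O(2ⁿ) versus O(n) gap concretely:

```java
import java.util.HashMap;
import java.util.Map;

public class FibCallCount {
    static long naiveCalls = 0;
    static long memoCalls = 0;
    static final Map<Integer, Long> memo = new HashMap<>();

    // Naive tree recursion: every call spawns two more, O(2^n) total calls.
    static long fibNaive(int n) {
        naiveCalls++;
        if (n <= 1) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // Memoized: each value of n is computed only once, O(n) total work.
    static long fibMemo(int n) {
        memoCalls++;
        if (n <= 1) return n;
        Long cached = memo.get(n);
        if (cached != null) return cached;      // cache hit, no recursion
        long result = fibMemo(n - 1) + fibMemo(n - 2);
        memo.put(n, result);                    // cache before returning
        return result;
    }

    public static void main(String[] args) {
        System.out.println("fib(25) = " + fibNaive(25) + " via " + naiveCalls + " naive calls");
        System.out.println("fib(25) = " + fibMemo(25) + " via " + memoCalls + " memoized calls");
    }
}
```

Running it shows the naive version making hundreds of thousands of calls for n = 25 while the memoized version stays proportional to n, which is the "massive improvement" described in the memoization pattern above.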

Recursion · Java · Base Case · Call Stack · Backtracking · Dynamic Programming
LeetCode 143 Reorder List - Java Solution Explained

IntroductionLeetCode 143 Reorder List is one of those problems that looks simple when you read it but immediately makes you wonder — where do I even start? There is no single trick that solves it. Instead it combines three separate linked list techniques into one clean solution. Mastering this problem means you have genuinely understood linked lists at an intermediate level.You can find the problem here — LeetCode 143 Reorder List.This article walks through everything — what the problem wants, the intuition behind each step, all three techniques used, a detailed dry run, complexity analysis, and common mistakes beginners make.What Is the Problem Really Asking?You have a linked list: L0 → L1 → L2 → ... → LnYou need to reorder it to: L0 → Ln → L1 → Ln-1 → L2 → Ln-2 → ...In plain English — take one node from the front, then one from the back, then one from the front, then one from the back, and keep alternating until all nodes are used.Example:Input: 1 → 2 → 3 → 4 → 5Output: 1 → 5 → 2 → 4 → 3Node 1 from front, Node 5 from back, Node 2 from front, Node 4 from back, Node 3 stays in middle.Real Life Analogy — Dealing Cards From Both EndsImagine you have a deck of cards laid out in a line face up: 1, 2, 3, 4, 5. Now you deal them by alternately picking from the left end and the right end of the line:Pick 1 from left → placePick 5 from right → place after 1Pick 2 from left → place after 5Pick 4 from right → place after 2Pick 3 (only one left) → place after 4Result: 1, 5, 2, 4, 3That is exactly what the problem wants. The challenge is doing this efficiently on a singly linked list where you cannot just index from the back.Why This Problem Is Hard for BeginnersIn an array you can just use two pointers — one at the start and one at the end — and swap/interleave easily. But a singly linked list only goes forward. You cannot go backwards. 
You cannot easily access the last element.This is why the problem requires a three-step approach that cleverly works around the limitations of a singly linked list.The Three Step ApproachEvery experienced developer solves this problem in exactly three steps:Step 1 — Find the middle of the linked list using the Fast & Slow Pointer techniqueStep 2 — Reverse the second half of the linked listStep 3 — Merge the two halves by alternating nodes from eachLet us understand each step deeply before looking at code.Step 1: Finding the Middle — Fast & Slow PointerThe Fast & Slow Pointer technique (the tortoise-and-hare idea behind Floyd's cycle detection) uses two pointers moving at different speeds through the list:slow moves one step at a timefast moves two steps at a timeWhen fast reaches the end, slow is exactly at the middle. This works because fast covers twice the distance of slow in the same number of steps.ListNode fast = head;ListNode slow = head;while (fast.next != null && fast.next.next != null) { fast = fast.next.next; slow = slow.next;}// slow is now at the middleFor 1 → 2 → 3 → 4 → 5:Start: slow=1, fast=1Step 1: slow=2, fast=3Step 2: slow=3, fast=5 (fast.next is null, stop)Middle is node 3For 1 → 2 → 3 → 4:Start: slow=1, fast=1Step 1: slow=2, fast=3Step 2: fast.next.next is null, stopslow=2, middle is node 2After finding the middle, we cut the list in two by setting slow.next = null. This disconnects the first half from the second half.Step 2: Reversing the Second HalfOnce we have the second half starting from slow.next, we reverse it.
After reversal, what was the last node becomes the first — giving us easy access to the back elements of the original list.public ListNode reverse(ListNode head) { ListNode curr = head; ListNode prev = null; while (curr != null) { ListNode next = curr.next; // save next curr.next = prev; // reverse the link prev = curr; // move prev forward curr = next; // move curr forward } return prev; // prev is the new head}To be precise with the first example: for 1 → 2 → 3 → 4 → 5, slow stops at 3, and slow.next = null cuts the list to give:First half: 1 → 2 → 3 → nullSecond half before reverse: 4 → 5Second half after reverse: 5 → 4Step 3: Merging Two HalvesNow we have two lists and we merge them by alternately taking one node from each:Take from first half, take from second half, take from first half, take from second half...ListNode orig = head; // pointer for first halfListNode newhead = second; // pointer for reversed second halfwhile (newhead != null) { ListNode temp1 = orig.next; // save next of first half ListNode temp2 = newhead.next; // save next of second half orig.next = newhead; // first → second newhead.next = temp1; // second → next of first orig = temp1; // advance first half pointer newhead = temp2; // advance second half pointer}Why do we loop on newhead != null and not orig != null? Because the second half is always equal to or shorter than the first half (we cut at middle).
Once the second half is exhausted, the remaining first half nodes are already in the correct position.Complete Solutionclass Solution { public ListNode reverse(ListNode head) { ListNode curr = head; ListNode prev = null; while (curr != null) { ListNode next = curr.next; curr.next = prev; prev = curr; curr = next; } return prev; } public void reorderList(ListNode head) { // Step 1: Find middle using fast & slow pointer ListNode fast = head; ListNode slow = head; while (fast.next != null && fast.next.next != null) { fast = fast.next.next; slow = slow.next; } // Step 2: Reverse second half ListNode newhead = reverse(slow.next); slow.next = null; // cut the list into two halves // Step 3: Merge two halves alternately ListNode orig = head; while (newhead != null) { ListNode temp1 = orig.next; ListNode temp2 = newhead.next; orig.next = newhead; newhead.next = temp1; orig = temp1; newhead = temp2; } }}Complete Dry Run — head = [1, 2, 3, 4, 5]Step 1: Find MiddleList: 1 → 2 → 3 → 4 → 5Initial: slow=1, fast=1Iteration 1: slow=2, fast=3Iteration 2: fast.next=4, fast.next.next=5 → slow=3, fast=5fast.next is null → stopslow is at node 3Step 2: Cut and ReverseCut: slow.next = nullFirst half: 1 → 2 → 3 → nullSecond half: 4 → 5Reverse second half 4 → 5:prev=null, curr=4 → next=5, 4.next=null, prev=4, curr=5prev=4, curr=5 → next=null, 5.next=4, prev=5, curr=nullReturn prev=5Reversed second half: 5 → 4 → nullStep 3: Mergeorig=1, newhead=5Iteration 1:temp1 = orig.next = 2temp2 = newhead.next = 4orig.next = newhead → 1.next = 5newhead.next = temp1 → 5.next = 2orig = temp1 = 2newhead = temp2 = 4List so far: 1 → 5 → 2 → 3Iteration 2:temp1 = orig.next = 3temp2 = newhead.next = nullorig.next = newhead → 2.next = 4newhead.next = temp1 → 4.next = 3orig = temp1 = 3newhead = temp2 = nullList so far: 1 → 5 → 2 → 4 → 3newhead is null → loop endsFinal result: 1 → 5 → 2 → 4 → 3 ✅Dry Run — head = [1, 2, 3, 4]Step 1: Find MiddleInitial: slow=1, fast=1Iteration 1: slow=2, fast=3fast.next=4, 
fast.next.next=null → stopslow is at node 2Step 2: Cut and ReverseFirst half: 1 → 2 → nullSecond half: 3 → 4Reversed: 4 → 3 → nullStep 3: Mergeorig=1, newhead=4Iteration 1:temp1=2, temp2=31.next=4, 4.next=2orig=2, newhead=3List so far: 1 → 4 → 2 → null (node 2's next is null because we cut at slow=2)Iteration 2:temp1 = orig.next = null (the first half ends at node 2 after the cut)temp2 = newhead.next = null2.next = 3, 3.next = nullorig = null, newhead = nullnewhead is null → stopFinal result: 1 → 4 → 2 → 3 ✅Why slow.next = null Must Come After Saving newheadThis is a subtle but critical ordering detail in the code. Look at this sequence:ListNode newhead = reverse(slow.next); // save reversed second half FIRSTslow.next = null; // THEN cut the listIf you cut first (slow.next = null) and then try to reverse, you lose the reference to the second half entirely because slow.next is already null. Always save the second half reference before cutting.Time and Space ComplexityTime Complexity: O(n) — each of the three steps (find middle, reverse, merge) makes a single pass through the list. Total is 3 passes = O(3n) = O(n).Space Complexity: O(1) — everything is done by rearranging pointers in place. No extra arrays, no recursion stack, no additional data structures.
Just a handful of pointer variables.This is the optimal solution — linear time and constant space.Alternative Approach — Using ArrayList (Simpler but O(n) Space)If you find the three-step approach hard to implement under interview pressure, here is a simpler approach using extra space:public void reorderList(ListNode head) { // store all nodes in ArrayList for random access List<ListNode> nodes = new ArrayList<>(); ListNode curr = head; while (curr != null) { nodes.add(curr); curr = curr.next; } int left = 0; int right = nodes.size() - 1; while (left < right) { nodes.get(left).next = nodes.get(right); left++; if (left == right) break; // odd number of nodes nodes.get(right).next = nodes.get(left); right--; } nodes.get(left).next = null; // terminate the list}This is much easier to understand and code. Store all nodes in an ArrayList, use two pointers from both ends, and wire up the next pointers.Time Complexity: O(n) Space Complexity: O(n) — ArrayList stores all nodesThis is acceptable in most interviews. Mention the O(1) space approach as the optimal solution if asked.Common Mistakes to AvoidNot cutting the list before merging If you do not set slow.next = null after finding the middle, the first half still points into the second half. During merging, this creates cycles and infinite loops. Always cut before merging.Wrong loop condition for finding the middle The condition fast.next != null && fast.next.next != null ensures fast does not go out of bounds when jumping two steps. Using just fast != null && fast.next != null moves slow one step too far for even-length lists.Looping on orig instead of newhead The merge loop should run while newhead != null, not while orig != null. The second half is always shorter or equal to the first half. 
Once the second half is done, the remaining first half is already correctly placed.

Forgetting to save both temp pointers before rewiring — In the merge step, you must save both orig.next and newhead.next before changing any pointers. Changing orig.next first and then trying to access orig.next to save it gives you the wrong node.

How This Problem Combines Multiple Patterns

This problem is special because it does not rely on a single technique. It is a combination of three fundamental linked list operations:

Fast & Slow Pointer — you saw this concept in problems like finding the middle of a list and detecting cycles (LeetCode 141, 142).
Reverse a Linked List — the most fundamental linked list operation, appears in LeetCode 206 and as a subtask in dozens of problems.
Merge Two Lists — similar to merging two sorted lists (LeetCode 21), but here the order is not sorted — it is alternating.

Solving this problem proves you are comfortable with all three patterns individually and can combine them when needed.

FAQs — People Also Ask

Q1. What is the most efficient approach for LeetCode 143 Reorder List?
The three-step approach — find middle with fast/slow pointer, reverse second half, merge alternately — runs in O(n) time and O(1) space. It is the optimal solution. The ArrayList approach is O(n) time and O(n) space but simpler to code.

Q2. Why use fast and slow pointer to find the middle?
Because a singly linked list has no way to access elements by index. You cannot just do list[length/2]. The fast and slow pointer technique finds the middle in a single pass without knowing the length beforehand.

Q3. Why reverse the second half instead of the first half?
The problem wants front-to-back alternation. If you reverse the second half, its first node is the original last node — exactly what you need to interleave with the front of the first half. Reversing the first half would give the wrong order.

Q4. What is the time complexity of LeetCode 143?
O(n) time for three linear passes (find middle, reverse, merge). O(1) space since all operations are in-place pointer manipulations with no extra data structures.

Q5. Is LeetCode 143 asked in coding interviews?
Yes, frequently at companies like Amazon, Google, Facebook, and Microsoft. It is considered a benchmark problem for linked list mastery because it requires combining three separate techniques cleanly under pressure.

Similar LeetCode Problems to Practice Next

206. Reverse Linked List — Easy — foundation for step 2 of this problem
876. Middle of the Linked List — Easy — fast & slow pointer in isolation
21. Merge Two Sorted Lists — Easy — merging technique foundation
234. Palindrome Linked List — Easy — also uses find middle + reverse second half
148. Sort List — Medium — merge sort on a linked list, uses the same split technique

Conclusion

LeetCode 143 Reorder List is one of the best Medium linked list problems because it forces you to think in multiple steps and combine techniques rather than apply a single pattern. The fast/slow pointer finds the middle efficiently without knowing the length. Reversing the second half turns the "cannot go backwards" limitation of singly linked lists into a non-issue. And the alternating merge weaves everything together cleanly.

Work through the dry runs carefully — especially the pointer saving step in the merge. Once you see why each step is necessary and how they connect, this problem will always feel approachable no matter when it shows up in an interview.
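As a quick reference, the full three-step solution traced in this article can be sketched end to end. This is a sketch, not the article's verbatim code — ListNode is the usual LeetCode singly linked list node, and the wrapper class name Reorder is illustrative:

```java
// Three-step Reorder List sketch: find middle, reverse second half, merge.
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class Reorder {
    public void reorderList(ListNode head) {
        if (head == null || head.next == null) return;

        // Step 1: find the middle with fast/slow pointers
        ListNode slow = head, fast = head;
        while (fast.next != null && fast.next.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }

        // Step 2: save the second half FIRST, then cut, then reverse
        ListNode second = slow.next;
        slow.next = null;
        ListNode newhead = reverse(second);

        // Step 3: merge alternately; loop on the reversed (never longer) half
        ListNode orig = head;
        while (newhead != null) {
            ListNode temp1 = orig.next;    // save both pointers before rewiring
            ListNode temp2 = newhead.next;
            orig.next = newhead;
            newhead.next = temp1;
            orig = temp1;
            newhead = temp2;
        }
    }

    // Standard iterative reversal of a singly linked list
    private ListNode reverse(ListNode node) {
        ListNode prev = null;
        while (node != null) {
            ListNode next = node.next;
            node.next = prev;
            node = next;
        }
        return prev;
    }
}
```

Note that the merge loop runs on newhead and both next pointers are saved before any rewiring — the two details the dry runs above emphasized.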

Tags: LeetCode · Java · Linked List · Two Pointer · Fast Slow Pointer · Medium
Stack Problems Explained: NGR, NGL, NSR, NSL — The Four-Problem Family You Must Master

Introduction

Among all the problems built around the Stack data structure, four stand out as a family — they appear repeatedly in coding interviews, competitive programming, and real-world software systems. These four are the Next Greater to the Right (NGR), Next Greater to the Left (NGL), Next Smaller to the Right (NSR), and Next Smaller to the Left (NSL).

What makes them special is not just their individual solutions — it is the fact that all four are solved by a single elegant technique called the Monotonic Stack. Learn the pattern once, and you have all four in your toolkit permanently.

This guide breaks down each problem with a full solution, step-by-step dry run, edge cases, and the exact reasoning behind every decision in the code. Whether you are preparing for a technical interview or simply want to deeply understand this pattern — you are in the right place.

The Story That Makes This Click

Before any code, let us understand this family of problems with one real-world story.

Imagine you are standing in a queue at a cricket stadium. Everyone in the queue has a different height. You are standing somewhere in the middle. You look to your right and ask — who is the first person taller than me? That is your Next Greater Element to the Right (NGR).

Now you look to your left — who is the first person taller than me on this side? That is your Next Greater to the Left (NGL).

Now instead of taller, you ask shorter — who is the first shorter person to my right? That is Next Smaller to the Right (NSR).

And shorter to your left? That is Next Smaller to the Left (NSL).

Same queue. Same people. Four different questions. Four different answers.
This is exactly what these four problems are about — and they all share the same solution pattern.

What Is a Monotonic Stack?

A monotonic stack is just a regular stack with one rule — elements inside it are always maintained in a specific order, either always increasing or always decreasing from bottom to top.

You never enforce this rule explicitly. It happens naturally as you pop elements that violate the order before pushing a new one. This popping step is the key insight — the moment you pop an element, you have found its answer for the current element being processed.

This one pattern solves all four problems. Only two small details change between them — the direction of traversal and the comparison condition inside the while loop.

The Four Problems — Quick Reference

| Problem | Direction | What You Want | Problem Link |
|---------|-----------|---------------|--------------|
| NGR | Traverse Right to Left | First greater on right | "Next Greater Element GFG" |
| NGL | Traverse Left to Right | First greater on left | "Previous Greater Element GFG" |
| NSR | Traverse Right to Left | First smaller on right | "Next Smaller Element GFG" |
| NSL | Traverse Left to Right | First smaller on left | "Previous Smaller Element GFG" |

Problem 1 — Next Greater Element to Right (NGR)

GFG Problem: Search "Next Greater Element" on GeeksForGeeks
Difficulty: Medium | Accuracy: 32.95% | Submissions: 515K+

The Question

For each element in the array, find the first element to its right that is strictly greater than it. If none exists, return -1.

Input: [1, 3, 2, 4]
Output: [3, 4, 4, -1]

Input: [6, 8, 0, 1, 3]
Output: [8, -1, 1, 3, -1]

Real World Example

Think of the stock market. You have daily closing prices: [1, 3, 2, 4]. For each day, you want to know — on which future day will the price first exceed today's price? Day 1 has price 1, first exceeded on Day 2 with price 3. Day 2 has price 3, first exceeded on Day 4 with price 4. Day 3 has price 2, also first exceeded on Day 4 with price 4. Day 4 has no future day, so -1.
This is exactly NGR — and it is literally used in financial software to detect price breakout points.

The Intuition

The brute force is obvious — for every element, scan everything to its right and find the first greater one. That works, but it is O(n²). For an array of 10⁶ elements that becomes 10¹² operations. It will time out on any large input.

The stack insight is this — traverse right to left. As you move left, the stack always holds elements you have already seen on the right side. These are the candidates for being the next greater element. Before pushing the current element, pop all stack elements that are smaller than or equal to it. Why? Because the current element is blocking them — for any future element to the left, the current element will always be encountered first, so those smaller popped elements can never be an answer for anything. Whatever remains on top of the stack after popping is the answer for the current element.

Step-by-Step Dry Run

Array: [1, 3, 2, 4], traversing right to left.

i=3, element is 4. Stack is empty. Answer for index 3 is -1. Push 4. Stack: [4]
i=2, element is 2. Top of stack is 4, which is greater than 2. Answer for index 2 is 4. Push 2. Stack: [4, 2]
i=1, element is 3. Top of stack is 2, which is not greater than 3. Pop 2. Top is now 4, which is greater than 3. Answer for index 1 is 4. Push 3. Stack: [4, 3]
i=0, element is 1. Top of stack is 3, which is greater than 1. Answer for index 0 is 3. Push 1.
Stack: [4, 3, 1]

Answers collected right to left: [-1, 4, 4, 3]
After Collections.reverse(): [3, 4, 4, -1] ✓

The Code

```java
// NGR — Next Greater Element to Right
class Solution {
    public ArrayList<Integer> nextLargerElement(int[] arr) {
        ArrayList<Integer> result = new ArrayList<>();
        Stack<Integer> st = new Stack<>();
        // Traverse from RIGHT to LEFT
        for (int i = arr.length - 1; i >= 0; i--) {
            // Pop all elements smaller than or equal to current —
            // they can never be the answer for any element to the left
            while (!st.empty() && arr[i] >= st.peek()) {
                st.pop();
            }
            // Whatever is on top now is the next greater element
            if (st.empty()) {
                result.add(-1);
            } else {
                result.add(st.peek());
            }
            // Push current — it is a candidate for elements to the left
            st.push(arr[i]);
        }
        // Collected answers right to left, so reverse before returning
        Collections.reverse(result);
        return result;
    }
}
```

Edge Cases

All elements decreasing — Input: [5, 4, 3, 2, 1], Output: [-1, -1, -1, -1, -1]. Every element has no greater element to its right. Traversing right to left, each new element is larger than everything already in the stack, so the stack gets cleared and the answer is always -1.

All elements increasing — Input: [1, 2, 3, 4, 5], Output: [2, 3, 4, 5, -1]. Each element's next greater is simply the next element in the array. The last element always gets -1 since nothing exists to its right.

All elements equal — Input: [3, 3, 3, 3], Output: [-1, -1, -1, -1]. Equal elements do not count as greater. The pop condition uses >= so equals get removed from the stack, ensuring duplicates never answer each other.

Single element — Input: [7], Output: [-1]. Nothing to the right, always -1.

Why only 32.95% accuracy on GFG?
Most people either forget to reverse the result at the end, use the wrong comparison in the while loop, or submit a brute force O(n²) solution that times out on large inputs.

Problem 2 — Next Greater Element to Left / Previous Greater Element (NGL)

GFG Problem: Search "Previous Greater Element" on GeeksForGeeks
Difficulty: Medium | Accuracy: 68.93% | Submissions: 7K+

The Question

For each element in the array, find the first element to its left that is strictly greater than it. If none exists, return -1.

Input: [10, 4, 2, 20, 40, 12, 30]
Output: [-1, 10, 4, -1, -1, 40, 40]

Real World Example

Imagine you are a junior employee at a company. For each person in the office, you want to know — who is the first senior person sitting to their left who earns more? This is NGL. It is used in organizational hierarchy systems, salary band analysis tools, and even in database query optimizers to find the nearest dominant record on the left side.

The Intuition

This is the mirror image of NGR. Instead of traversing right to left, we traverse left to right. The stack holds elements we have already seen from the left side — these are candidates for being the previous greater element. For each new element, pop everything from the stack that is smaller than or equal to it. Whatever remains on top is the first greater element to its left. Then push the current element for future use.

No reverse is needed here because we are already going left to right and building the result in order.

Step-by-Step Dry Run

Array: [10, 4, 2, 20, 40, 12, 30], traversing left to right.

i=0, element is 10. Stack is empty. Answer is -1. Push 10. Stack: [10]
i=1, element is 4. Top is 10, greater than 4. Answer is 10. Push 4. Stack: [10, 4]
i=2, element is 2. Top is 4, greater than 2. Answer is 4. Push 2. Stack: [10, 4, 2]
i=3, element is 20. Top is 2, not greater than 20. Pop 2. Top is 4, not greater. Pop 4. Top is 10, not greater. Pop 10. Stack is empty. Answer is -1. Push 20. Stack: [20]
i=4, element is 40.
Top is 20, not greater. Pop 20. Stack empty. Answer is -1. Push 40. Stack: [40]
i=5, element is 12. Top is 40, greater than 12. Answer is 40. Push 12. Stack: [40, 12]
i=6, element is 30. Top is 12, not greater than 30. Pop 12. Top is 40, greater than 30. Answer is 40. Push 30. Stack: [40, 30]

Result: [-1, 10, 4, -1, -1, 40, 40] ✓ No reverse needed.

The Code

```java
// NGL — Next Greater Element to Left (Previous Greater Element)
class Solution {
    static ArrayList<Integer> preGreaterEle(int[] arr) {
        Stack<Integer> st = new Stack<>();
        ArrayList<Integer> result = new ArrayList<>();
        // Traverse LEFT to RIGHT — no reverse needed
        for (int i = 0; i < arr.length; i++) {
            // Pop all elements smaller than or equal to current
            while (!st.empty() && arr[i] >= st.peek()) {
                st.pop();
            }
            // Top of stack is the previous greater element
            if (!st.empty() && st.peek() > arr[i]) {
                result.add(st.peek());
            } else {
                result.add(-1);
            }
            // Push current for future elements
            st.push(arr[i]);
        }
        return result;
    }
}
```

Edge Cases

Strictly increasing array — Input: [10, 20, 30, 40], Output: [-1, -1, -1, -1]. Each new element is larger than everything before it, so the stack always gets fully cleared. No previous greater exists for any element.

First element is always -1 — regardless of its value, the first element has nothing to its left. The stack is empty at i=0, so the answer is always -1 for index 0. This is guaranteed by the logic.

Duplicate values — Input: [5, 5, 5], Output: [-1, -1, -1]. Equal elements do not qualify as greater. The pop condition uses >= so duplicates get removed from the stack and never answer each other.

Problem 3 — Next Smaller Element to Right (NSR)

GFG Problem: Search "Next Smaller Element" on GeeksForGeeks
Difficulty: Medium | Accuracy: 36.26% | Submissions: 225K+

The Question

For each element in the array, find the first element to its right that is strictly smaller than it.
If none exists, return -1.

Input: [4, 8, 5, 2, 25]
Output: [2, 5, 2, -1, -1]

Input: [13, 7, 6, 12]
Output: [7, 6, -1, -1]

Real World Example

You work at a warehouse. Shelves have items of weights: [4, 8, 5, 2, 25] kg. For each item, the system needs to find the first lighter item sitting to its right on the shelf — this is used to optimize load balancing and shelf arrangement algorithms. Item of 4 kg — first lighter to the right is 2 kg. Item of 8 kg — first lighter is 5 kg. Item of 5 kg — first lighter is 2 kg. Items of 2 kg and 25 kg have no lighter item to their right, so -1.

The Intuition

NSR is structurally identical to NGR — we traverse right to left and collect answers, then reverse. The only change is the pop condition. In NGR we popped elements smaller than or equal to current because we wanted greater. Here we want smaller, so we pop elements greater than or equal to current. After popping, whatever remains on top is the first smaller element to the right.

Step-by-Step Dry Run

Array: [4, 8, 5, 2, 25], traversing right to left.

i=4, element is 25. Stack is empty. Answer is -1. Push 25. Stack: [25]
i=3, element is 2. Top is 25, which is greater than or equal to 2. Pop 25. Stack is empty. Answer is -1. Push 2. Stack: [2]
i=2, element is 5. Top is 2, which is less than 5. Answer is 2. Push 5. Stack: [2, 5]
i=1, element is 8. Top is 5, which is less than 8. Answer is 5. Push 8. Stack: [2, 5, 8]
i=0, element is 4. Top is 8, which is greater than or equal to 4. Pop 8. Top is 5, which is greater than or equal to 4. Pop 5. Top is 2, which is less than 4. Answer is 2. Push 4.
Stack: [2, 4]

Answers collected right to left: [-1, -1, 2, 5, 2]
After Collections.reverse(): [2, 5, 2, -1, -1] ✓

The Code

```java
// NSR — Next Smaller Element to Right
class Solution {
    static ArrayList<Integer> nextSmallerEle(int[] arr) {
        Stack<Integer> st = new Stack<>();
        ArrayList<Integer> result = new ArrayList<>();
        // Traverse RIGHT to LEFT
        for (int i = arr.length - 1; i >= 0; i--) {
            // Pop elements greater than or equal to current —
            // opposite of NGR: we want smaller, so clear the bigger ones
            while (!st.empty() && arr[i] <= st.peek()) {
                st.pop();
            }
            // Top is now the next smaller element
            if (!st.empty() && st.peek() < arr[i]) {
                result.add(st.peek());
            } else {
                result.add(-1);
            }
            st.push(arr[i]);
        }
        Collections.reverse(result);
        return result;
    }
}
```

Edge Cases

Strictly decreasing array — Input: [5, 4, 3, 2, 1], Output: [4, 3, 2, 1, -1]. Each element's next smaller is simply the next element in the array. The last element is always -1.

Strictly increasing array — Input: [1, 2, 3, 4, 5], Output: [-1, -1, -1, -1, -1]. No element has a smaller element to its right since the array only grows.

Last element is always -1 — nothing exists to its right regardless of its value.

Single element — Input: [42], Output: [-1]

Why 36.26% accuracy on GFG?
The most common mistake is keeping the NGR pop condition (arr[i] >= st.peek()) and only changing the problem description in your head. The pop condition must flip to arr[i] <= st.peek() for NSR. Forgetting this gives completely wrong answers that look plausible, which makes the bug hard to spot.

Problem 4 — Next Smaller Element to Left / Previous Smaller Element (NSL)

GFG Problem: Search "Previous Smaller Element" on GeeksForGeeks

The Question

For each element in the array, find the first element to its left that is strictly smaller than it. If none exists, return -1.

Input: [4, 8, 5, 2, 25]
Output: [-1, 4, 4, -1, 2]

Real World Example

Same warehouse. Now the system looks left instead of right. For the item weighing 8 kg, the first lighter item to its left is 4 kg.
For 25 kg, the first lighter to its left is 2 kg. For 4 kg, nothing lighter exists to its left, so -1. For 2 kg, nothing lighter to its left, so -1. This kind of lookback query appears in time-series analysis, price history tracking, and sensor data processing.

The Intuition

NSL is the mirror of NSR, exactly as NGL was the mirror of NGR. We traverse left to right (no reverse needed). We maintain a stack of candidates from the left. For each element, pop all elements greater than or equal to it — they cannot be the answer since they are not smaller. Whatever remains on top is the first smaller element to the left. Push current and move on.

Step-by-Step Dry Run

Array: [4, 8, 5, 2, 25], traversing left to right.

i=0, element is 4. Stack is empty. Answer is -1. Push 4. Stack: [4]
i=1, element is 8. Top is 4, which is less than 8. Answer is 4. Push 8. Stack: [4, 8]
i=2, element is 5. Top is 8, which is greater than or equal to 5. Pop 8. Top is 4, which is less than 5. Answer is 4. Push 5. Stack: [4, 5]
i=3, element is 2. Top is 5, greater than or equal to 2. Pop 5. Top is 4, greater than or equal to 2. Pop 4. Stack is empty. Answer is -1. Push 2. Stack: [2]
i=4, element is 25. Top is 2, which is less than 25. Answer is 2. Push 25. Stack: [2, 25]

Result: [-1, 4, 4, -1, 2] ✓ No reverse needed.

The Code

```java
// NSL — Next Smaller Element to Left (Previous Smaller Element)
class Solution {
    static ArrayList<Integer> prevSmallerEle(int[] arr) {
        Stack<Integer> st = new Stack<>();
        ArrayList<Integer> result = new ArrayList<>();
        // Traverse LEFT to RIGHT — no reverse needed
        for (int i = 0; i < arr.length; i++) {
            // Pop elements greater than or equal to current
            while (!st.empty() && arr[i] <= st.peek()) {
                st.pop();
            }
            // Top is the previous smaller element
            if (!st.empty() && st.peek() < arr[i]) {
                result.add(st.peek());
            } else {
                result.add(-1);
            }
            st.push(arr[i]);
        }
        return result;
    }
}
```

Edge Cases

First element is always -1 — nothing exists to its left.
Stack is empty at i=0 every time.

All same elements — Input: [5, 5, 5, 5], Output: [-1, -1, -1, -1]. Equal elements do not qualify as smaller. The condition arr[i] <= st.peek() ensures equals are popped and never answer each other.

Single element — Input: [9], Output: [-1]

The Master Cheat Sheet

This is the one table to save and refer to whenever you encounter any of these four problems.

| Variant | Traverse Direction | Pop Condition | Reverse Result? |
|---------|--------------------|---------------|-----------------|
| NGR — Next Greater Right | Right to Left | arr[i] >= st.peek() | Yes |
| NGL — Next Greater Left | Left to Right | arr[i] >= st.peek() | No |
| NSR — Next Smaller Right | Right to Left | arr[i] <= st.peek() | Yes |
| NSL — Next Smaller Left | Left to Right | arr[i] <= st.peek() | No |

Two rules to remember forever:

Rule 1 — Direction. If you are looking to the right, traverse right to left and reverse at the end. If you are looking to the left, traverse left to right and no reverse is needed.

Rule 2 — Pop Condition. If you want a greater element, pop when arr[i] >= st.peek() to clear out smaller useless candidates. If you want a smaller element, pop when arr[i] <= st.peek() to clear out bigger useless candidates.

Mix these two rules and you derive all four variants instantly without memorizing anything separately.

Common Mistakes to Avoid

Wrong pop condition — Using > instead of >= in the while loop. This causes duplicate values to wrongly answer each other. Always use >= for greater problems and <= for smaller problems inside the while loop.

Forgetting to reverse — For right-to-left traversals (NGR and NSR), you collect answers from right to left. You must call Collections.reverse() before returning. Skipping this is the single most common reason for wrong answers on these problems.

Not checking empty stack before peek — Always check !st.empty() before calling st.peek(). Calling peek() on an empty stack throws EmptyStackException at runtime and will crash your solution.

Wrong if-condition after the while loop — After the while loop, the if-condition must use strict comparison. For NGR use st.peek() > arr[i].
For NSR use st.peek() < arr[i]. These must be strict — no equals sign here.

Confusing traversal direction with answer direction — You traverse right to left for NGR, but the answer array is filled left to right. The reverse at the end handles this. Do not try to index directly into the result array to compensate — just use reverse.

Time and Space Complexity

All four problems run in O(n) time and use O(n) space.

Even though there is a while loop nested inside the for loop, each element is pushed onto the stack exactly once and popped from the stack at most once. So across the entire traversal, the total number of push and pop operations combined is at most 2n — which gives O(n) overall. This is the beauty of the monotonic stack.

Why These Four Problems Matter Beyond GFG

These four patterns are not just textbook exercises. They appear as the hidden sub-problem inside some of the hardest stack questions:

Largest Rectangle in Histogram uses NSR and NSL to find the left and right boundaries of each bar.
Trapping Rain Water uses NGR and NGL to determine the water level above each position.
Stock Span Problem is literally NGL applied directly to stock prices.
Sum of Subarray Minimums uses NSR and NSL together to count the contribution of each element.

Once you master these four patterns deeply, a whole family of hard problems that previously seemed unapproachable suddenly becomes a matter of recognizing the pattern and applying it.

Also on This Blog

If you are building your stack foundation from scratch, check out the complete deep-dive here → Stack Data Structure in Java: The Complete Guide — covering everything from what a stack is, the LIFO principle, all three implementations, every operation with code, and six practice problems.
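The two cheat-sheet rules can even be folded into one parameterized method. The following is an illustrative sketch (the class and method names are my own, not from the GFG problems); it fills the answer array by index, which replaces the explicit Collections.reverse() step the article's right-looking solutions use:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class NearestElement {
    /**
     * Unified monotonic-stack solver for all four variants.
     * wantGreater: true for NGR/NGL, false for NSR/NSL (Rule 2).
     * lookRight:   true for NGR/NSR (traverse right to left),
     *              false for NGL/NSL (traverse left to right) (Rule 1).
     */
    static int[] solve(int[] arr, boolean wantGreater, boolean lookRight) {
        int n = arr.length;
        int[] ans = new int[n];
        Deque<Integer> st = new ArrayDeque<>();
        int start = lookRight ? n - 1 : 0;
        int step = lookRight ? -1 : 1;
        for (int i = start; i >= 0 && i < n; i += step) {
            // Rule 2: pop candidates that can never be anyone's answer
            while (!st.isEmpty() &&
                   (wantGreater ? st.peek() <= arr[i] : st.peek() >= arr[i])) {
                st.pop();
            }
            // Whatever survives on top is the nearest greater/smaller element
            ans[i] = st.isEmpty() ? -1 : st.peek();
            st.push(arr[i]);
        }
        // Writing ans[i] directly makes the final reverse unnecessary
        return ans;
    }
}
```

For example, solve(arr, true, true) is NGR and solve(arr, false, false) is NSL; each call is still one pass with at most one push and one pop per element, so the O(n) bound is unchanged.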

Tags: Monotonic Stack · Next Greater Element · Stack Problems · Java · GeeksForGeeks · Stack Pattern
Stack Data Structure in Java: The Complete In-Depth Guide

1. What Is a Stack?

A Stack is a linear data structure that stores elements in a sequential order, but with one strict rule — you can only insert or remove elements from one end, called the top.

It is one of the simplest yet most powerful data structures in computer science. Its strength comes from its constraint. Because everything happens at one end, the behavior of a stack is completely predictable.

The formal definition: A Stack is a linear data structure that follows the Last In, First Out (LIFO) principle — the element inserted last is the first one to be removed.

Here is what a stack looks like visually:

```
┌──────────┐
│    50    │ ← TOP (last inserted, first removed)
├──────────┤
│    40    │
├──────────┤
│    30    │
├──────────┤
│    20    │
├──────────┤
│    10    │ ← BOTTOM (first inserted, last removed)
└──────────┘
```

When you push 60 onto this stack, it goes on top. When you pop, 60 comes out first. That is LIFO.

2. Real-World Analogies

Before writing a single line of code, it helps to see stacks in the real world. These analogies will make the concept permanently stick.

A Pile of Plates — In a cafeteria, clean plates are stacked on top of each other. You always pick the top plate. You always place a new plate on top. You never reach into the middle. This is a stack.

Browser Back Button — Every time you visit a new webpage, it gets pushed onto a history stack. When you press the Back button, the browser pops the most recent page off the stack and takes you there. The page you visited first is at the bottom — you only reach it after going back through everything else.

Undo Feature in Text Editors — When you type in a document and press Ctrl+Z, the most recent action is undone first. That is because every action you perform is pushed onto a stack. Undo simply pops from that stack.

Call Stack in Programming — When a function calls another function, the current function's state is pushed onto the call stack. When the inner function finishes, it is popped off and execution returns to the outer function.
This is the literal stack your programs run on.

A Stack of Books — Put five books on a table, one on top of another. You can only take the top book without knocking the pile over. That is a stack.

3. The LIFO Principle Explained

LIFO stands for Last In, First Out.

It means whatever you put in last is the first thing to come out. This is the exact opposite of a Queue (which is FIFO — First In, First Out).

Let us trace through an example step by step:

Start: Stack is empty → []
Push 10 → [10] (10 is at the top)
Push 20 → [10, 20] (20 is at the top)
Push 30 → [10, 20, 30] (30 is at the top)
Pop → returns 30 (30 was last in, first out). Stack: [10, 20]
Pop → returns 20. Stack: [10]
Peek → returns 10 (just looks, does not remove). Stack: [10]
Pop → returns 10. Stack: [] (stack is now empty)

Every single operation happens only at the top. The bottom of the stack is never directly accessible.

4. Stack Operations & Time Complexity

A stack supports the following core operations:

| Operation | Description | Time Complexity |
|-----------|-------------|-----------------|
| push(x) | Insert element x onto the top of the stack | O(1) |
| pop() | Remove and return the top element | O(1) |
| peek() / top() | Return the top element without removing it | O(1) |
| isEmpty() | Check if the stack has no elements | O(1) |
| isFull() | Check if the stack has reached its capacity (array only) | O(1) |
| size() | Return the number of elements in the stack | O(1) |
| search(x) | Find position of element from top (Java built-in only) | O(n) |

All primary stack operations — push, pop, peek, isEmpty — run in O(1) constant time. This is what makes the stack so efficient. It does not matter whether the stack has 10 elements or 10 million — these operations are always instant.

Space complexity for a stack holding n elements is O(n).

5. Implementation 1 — Using a Static Array

This is the most fundamental way to implement a stack.
We use a fixed-size array and a variable called top to track where the top of the stack currently is.

How it works:

top starts at -1 (stack is empty)
On push: increment top, then place the element at arr[top]
On pop: return arr[top], then decrement top
On peek: return arr[top] without changing it

```java
// StackUsingArray.java
public class StackUsingArray {
    private int[] arr;
    private int top;
    private int capacity;

    // Constructor — initialize with a fixed capacity
    public StackUsingArray(int capacity) {
        this.capacity = capacity;
        arr = new int[capacity];
        top = -1;
    }

    // Push: add element to the top
    public void push(int value) {
        if (isFull()) {
            System.out.println("Stack Overflow! Cannot push " + value);
            return;
        }
        arr[++top] = value;
        System.out.println("Pushed: " + value);
    }

    // Pop: remove and return top element
    public int pop() {
        if (isEmpty()) {
            System.out.println("Stack Underflow! Stack is empty.");
            return -1;
        }
        return arr[top--];
    }

    // Peek: view the top element without removing
    public int peek() {
        if (isEmpty()) {
            System.out.println("Stack is empty.");
            return -1;
        }
        return arr[top];
    }

    // Check if stack is empty
    public boolean isEmpty() {
        return top == -1;
    }

    // Check if stack is full
    public boolean isFull() {
        return top == capacity - 1;
    }

    // Return current size
    public int size() {
        return top + 1;
    }

    // Display all elements
    public void display() {
        if (isEmpty()) {
            System.out.println("Stack is empty.");
            return;
        }
        System.out.print("Stack (top → bottom): ");
        for (int i = top; i >= 0; i--) {
            System.out.print(arr[i] + " ");
        }
        System.out.println();
    }

    // Main method to test
    public static void main(String[] args) {
        StackUsingArray stack = new StackUsingArray(5);
        stack.push(10);
        stack.push(20);
        stack.push(30);
        stack.push(40);
        stack.push(50);
        stack.push(60); // This will trigger Stack Overflow
        stack.display();
        System.out.println("Peek: " + stack.peek());
        System.out.println("Pop: " + stack.pop());
        System.out.println("Pop: " + stack.pop());
        stack.display();
        System.out.println("Size: " + stack.size());
    }
}
```

**Output:**

```
Pushed: 10
Pushed: 20
Pushed: 30
Pushed: 40
Pushed: 50
Stack Overflow! Cannot push 60
Stack (top → bottom): 50 40 30 20 10
Peek: 50
Pop: 50
Pop: 40
Stack (top → bottom): 30 20 10
Size: 3
```

Key Points about Array Implementation:

Fixed size — you must declare capacity upfront
Very fast — direct array index access
Stack Overflow is possible if capacity is exceeded
Memory is pre-allocated even if the stack is not full

6. Implementation 2 — Using an ArrayList

An ArrayList-based stack removes the fixed-size limitation. The ArrayList grows dynamically, so you never have to worry about stack overflow due to capacity.

How it works:

The end of the ArrayList acts as the top
add() is used for push
remove(size - 1) is used for pop
get(size - 1) is used for peek

```java
// StackUsingArrayList.java
import java.util.ArrayList;

public class StackUsingArrayList {
    private ArrayList<Integer> list;

    // Constructor
    public StackUsingArrayList() {
        list = new ArrayList<>();
    }

    // Push: add to the end (which is our top)
    public void push(int value) {
        list.add(value);
        System.out.println("Pushed: " + value);
    }

    // Pop: remove and return the last element
    public int pop() {
        if (isEmpty()) {
            System.out.println("Stack Underflow! Stack is empty.");
            return -1;
        }
        int top = list.get(list.size() - 1);
        list.remove(list.size() - 1);
        return top;
    }

    // Peek: view the last element
    public int peek() {
        if (isEmpty()) {
            System.out.println("Stack is empty.");
            return -1;
        }
        return list.get(list.size() - 1);
    }

    // Check if stack is empty
    public boolean isEmpty() {
        return list.isEmpty();
    }

    // Return size
    public int size() {
        return list.size();
    }

    // Display elements from top to bottom
    public void display() {
        if (isEmpty()) {
            System.out.println("Stack is empty.");
            return;
        }
        System.out.print("Stack (top → bottom): ");
        for (int i = list.size() - 1; i >= 0; i--) {
            System.out.print(list.get(i) + " ");
        }
        System.out.println();
    }

    // Main method to test
    public static void main(String[] args) {
        StackUsingArrayList stack = new StackUsingArrayList();
        stack.push(5);
        stack.push(15);
        stack.push(25);
        stack.push(35);
        stack.display();
        System.out.println("Peek: " + stack.peek());
        System.out.println("Pop: " + stack.pop());
        System.out.println("Pop: " + stack.pop());
        stack.display();
        System.out.println("Is Empty: " + stack.isEmpty());
        System.out.println("Size: " + stack.size());
    }
}
```

**Output:**

```
Pushed: 5
Pushed: 15
Pushed: 25
Pushed: 35
Stack (top → bottom): 35 25 15 5
Peek: 35
Pop: 35
Pop: 25
Stack (top → bottom): 15 5
Is Empty: false
Size: 2
```

Key Points about ArrayList Implementation:

Dynamic size — grows automatically as needed
No overflow risk
Slight overhead compared to a raw array due to ArrayList internals
Excellent for most practical use cases

7. Implementation 3 — Using a LinkedList

A LinkedList-based stack is the most memory-efficient approach when you do not know the stack size in advance. Each element (node) holds data and a pointer to the next node.
The head of the LinkedList acts as the top of the stack.How it works:Each node stores a value and a reference to the node below itPush creates a new node and makes it the new headPop removes the head node and returns its valuePeek returns the head node's value without removing it// StackUsingLinkedList.javapublic class StackUsingLinkedList { // Inner Node class private static class Node { int data; Node next; Node(int data) { this.data = data; this.next = null; } } private Node top; // Head of the linked list = top of stack private int size; // Constructor public StackUsingLinkedList() { top = null; size = 0; } // Push: create new node and link it to top public void push(int value) { Node newNode = new Node(value); newNode.next = top; // new node points to current top top = newNode; // new node becomes the new top size++; System.out.println("Pushed: " + value); } // Pop: remove and return top node's data public int pop() { if (isEmpty()) { System.out.println("Stack Underflow! Stack is empty."); return -1; } int value = top.data; top = top.next; // move top pointer to next node size--; return value; } // Peek: return top node's data without removing public int peek() { if (isEmpty()) { System.out.println("Stack is empty."); return -1; } return top.data; } // Check if empty public boolean isEmpty() { return top == null; } // Return size public int size() { return size; } // Display elements from top to bottom public void display() { if (isEmpty()) { System.out.println("Stack is empty."); return; } System.out.print("Stack (top → bottom): "); Node current = top; while (current != null) { System.out.print(current.data + " "); current = current.next; } System.out.println(); } // Main method to test public static void main(String[] args) { StackUsingLinkedList stack = new StackUsingLinkedList(); stack.push(100); stack.push(200); stack.push(300); stack.push(400); stack.display(); System.out.println("Peek: " + stack.peek()); System.out.println("Pop: " + stack.pop()); 
System.out.println("Pop: " + stack.pop()); stack.display(); System.out.println("Size: " + stack.size()); }}```**Output:**```Pushed: 100Pushed: 200Pushed: 300Pushed: 400Stack (top → bottom): 400 300 200 100Peek: 400Pop: 400Pop: 300Stack (top → bottom): 200 100Size: 2Key Points about LinkedList Implementation:Truly dynamic — each node allocated only when neededNo wasted memory from pre-allocationSlightly more memory per element (each node carries a pointer)Ideal for stacks where size is completely unknown8. Java's Built-in Stack ClassJava provides a ready-made Stack class inside java.util. It extends Vector and is thread-safe by default.// JavaBuiltinStack.javaimport java.util.Stack;public class JavaBuiltinStack { public static void main(String[] args) { Stack<Integer> stack = new Stack<>(); // Push elements stack.push(10); stack.push(20); stack.push(30); stack.push(40); System.out.println("Stack: " + stack); // Peek — look at top without removing System.out.println("Peek: " + stack.peek()); // Pop — remove top System.out.println("Pop: " + stack.pop()); System.out.println("After pop: " + stack); // Search — returns 1-based position from top System.out.println("Search 20: position " + stack.search(20)); // isEmpty System.out.println("Is Empty: " + stack.isEmpty()); // Size System.out.println("Size: " + stack.size()); }}```**Output:**```Stack: [10, 20, 30, 40]Peek: 40Pop: 40After pop: [10, 20, 30]Search 20: position 2Is Empty: falseSize: 3Important Note: In modern Java development, it is often recommended to use Deque (specifically ArrayDeque) instead of Stack for better performance, since Stack is synchronized and carries the overhead of Vector.// Using ArrayDeque as a stack (modern preferred approach)import java.util.ArrayDeque;import java.util.Deque;public class ModernStack { public static void main(String[] args) { Deque<Integer> stack = new ArrayDeque<>(); stack.push(10); // pushes to front stack.push(20); stack.push(30); System.out.println("Top: " + 
stack.peek()); System.out.println("Pop: " + stack.pop()); System.out.println("Stack: " + stack); }}

9. Comparison of All Implementations

| Feature | Array | ArrayList | LinkedList | Java Stack | ArrayDeque |
|---|---|---|---|---|---|
| Size | Fixed | Dynamic | Dynamic | Dynamic | Dynamic |
| Stack Overflow Risk | Yes | No | No | No | No |
| Memory Usage | Pre-allocated | Auto-grows | Per-node overhead | Auto-grows | Auto-grows |
| Push Time | O(1) | O(1) amortized | O(1) | O(1) | O(1) |
| Pop Time | O(1) | O(1) | O(1) | O(1) | O(1) |
| Peek Time | O(1) | O(1) | O(1) | O(1) | O(1) |
| Thread Safe | No | No | No | Yes | No |
| Best For | Known size, max speed | General use | Unknown/huge size | Legacy code | Modern Java |

10. Advantages & Disadvantages

Advantages:

| Advantage | Explanation |
|---|---|
| Simple to implement | Very few rules and operations to worry about |
| O(1) operations | Push, pop, and peek are all constant time |
| Memory efficient | No extra pointers needed (array-based) |
| Supports recursion | The call stack is itself a stack |
| Easy undo/redo | Natural fit for reversible action tracking |
| Backtracking | Perfectly suited for maze, puzzle, and game solving |
| Expression evaluation | Powers compilers and calculators |

Disadvantages:

| Disadvantage | Explanation |
|---|---|
| Limited access | Cannot access elements in the middle directly |
| Fixed size (array) | Array-based stacks overflow if size is exceeded |
| No random access | You cannot do stack[2] — only the top is accessible |
| Memory waste (array) | Pre-allocated array wastes space if underused |
| Not suitable for all problems | Many problems need queues, trees, or graphs instead |
| Stack overflow in recursion | Very deep recursion can overflow the JVM call stack |

11. Real-World Use Cases of Stack

Understanding when to use a stack is just as important as knowing how to implement one. Here is where stacks show up in real software:

Function Call Management (Call Stack): Every time your Java program calls a method, the JVM pushes that method's frame onto the call stack. When the method returns, the frame is popped. This is why you see "StackOverflowError" when you write infinite recursion.

Undo and Redo Operations: Text editors, image editors (Photoshop), and IDEs use two stacks — one for undo history and one for redo history.
Every action pushes onto the undo stack. Ctrl+Z pops from it and pushes to the redo stack.Browser Navigation Your browser maintains a back-stack and a forward-stack. Visiting a new page pushes to the back-stack. Pressing Back pops from it and pushes to the forward-stack.Expression Evaluation and Conversion Compilers use stacks to evaluate arithmetic expressions and convert between infix, prefix, and postfix notations. For example: 3 + 4 * 2 must be evaluated considering operator precedence — this is done with a stack.Balanced Parentheses Checking Linters, compilers, and IDEs use stacks to check if brackets are balanced: {[()]} is valid, {[(])} is not.Backtracking Algorithms Maze solving, N-Queens, Sudoku solvers, and depth-first search all use stacks (explicitly or via recursion) to backtrack to previous states when a path fails.Syntax Parsing Compilers parse source code using stacks to match opening and closing constructs like if/else, try/catch, { and }.12. Practice Problems with Full SolutionsHere is where things get really interesting. These problems will sharpen your stack intuition and prepare you for coding interviews.Problem 1 — Reverse a String Using a StackDifficulty: EasyProblem: Write a Java program to reverse a string using a Stack.Approach: Push every character of the string onto a stack, then pop them all. 
Since LIFO reverses the order, the characters come out reversed.// ReverseString.javaimport java.util.Stack;public class ReverseString { public static String reverse(String str) { Stack<Character> stack = new Stack<>(); // Push all characters for (char c : str.toCharArray()) { stack.push(c); } // Pop all characters to build reversed string StringBuilder reversed = new StringBuilder(); while (!stack.isEmpty()) { reversed.append(stack.pop()); } return reversed.toString(); } public static void main(String[] args) { System.out.println(reverse("hello")); // olleh System.out.println(reverse("java")); // avaj System.out.println(reverse("racecar")); // racecar (palindrome) System.out.println(reverse("datastructure")); // erutcurtasatad }}Problem 2 — Check Balanced ParenthesesDifficulty: Easy–MediumProblem: Given a string containing (, ), {, }, [, ], determine if the brackets are balanced.Approach: Push every opening bracket onto the stack. When you see a closing bracket, check if it matches the top of the stack. If it does, pop. 
If it does not, the string is unbalanced.// BalancedParentheses.javaimport java.util.Stack;public class BalancedParentheses { public static boolean isBalanced(String expr) { Stack<Character> stack = new Stack<>(); for (char c : expr.toCharArray()) { // Push all opening brackets if (c == '(' || c == '{' || c == '[') { stack.push(c); } // For closing brackets, check the top of stack else if (c == ')' || c == '}' || c == ']') { if (stack.isEmpty()) return false; char top = stack.pop(); if (c == ')' && top != '(') return false; if (c == '}' && top != '{') return false; if (c == ']' && top != '[') return false; } } // Stack must be empty at the end for a balanced expression return stack.isEmpty(); } public static void main(String[] args) { System.out.println(isBalanced("{[()]}")); // true System.out.println(isBalanced("{[(])}")); // false System.out.println(isBalanced("((()))")); // true System.out.println(isBalanced("{]")); // false System.out.println(isBalanced("")); // true (empty is balanced) }}Problem 3 — Reverse a Stack (Without Extra Data Structure)Difficulty: Medium–HardProblem: Reverse all elements of a stack using only recursion — no array or extra stack allowed.Approach: This is a classic recursion problem. 
You need two recursive functions:insertAtBottom(stack, item) — inserts an element at the very bottom of the stackreverseStack(stack) — pops all elements, reverses, and uses insertAtBottom to rebuild// ReverseStack.javaimport java.util.Stack;public class ReverseStack { // Insert an element at the bottom of the stack public static void insertAtBottom(Stack<Integer> stack, int item) { if (stack.isEmpty()) { stack.push(item); return; } int top = stack.pop(); insertAtBottom(stack, item); stack.push(top); } // Reverse the stack using insertAtBottom public static void reverseStack(Stack<Integer> stack) { if (stack.isEmpty()) return; int top = stack.pop(); reverseStack(stack); // reverse the remaining stack insertAtBottom(stack, top); // insert popped element at bottom } public static void main(String[] args) { Stack<Integer> stack = new Stack<>(); stack.push(1); stack.push(2); stack.push(3); stack.push(4); stack.push(5); System.out.println("Before: " + stack); // [1, 2, 3, 4, 5] reverseStack(stack); System.out.println("After: " + stack); // [5, 4, 3, 2, 1] }}Problem 4 — Evaluate a Postfix ExpressionDifficulty: MediumProblem: Evaluate a postfix (Reverse Polish Notation) expression. Example: "2 3 4 * +" should return 14 because it is 2 + (3 * 4).Approach: Scan left to right. If you see a number, push it. 
If you see an operator, pop two numbers, apply the operator, and push the result.// PostfixEvaluation.javaimport java.util.Stack;public class PostfixEvaluation { public static int evaluate(String expression) { Stack<Integer> stack = new Stack<>(); String[] tokens = expression.split(" "); for (String token : tokens) { // If it's a number, push it if (token.matches("-?\\d+")) { stack.push(Integer.parseInt(token)); } // If it's an operator, pop two and apply else { int b = stack.pop(); // second operand int a = stack.pop(); // first operand switch (token) { case "+": stack.push(a + b); break; case "-": stack.push(a - b); break; case "*": stack.push(a * b); break; case "/": stack.push(a / b); break; } } } return stack.pop(); } public static void main(String[] args) { System.out.println(evaluate("2 3 4 * +")); // 14 → 2 + (3*4) System.out.println(evaluate("5 1 2 + 4 * + 3 -")); // 14 → 5+((1+2)*4)-3 System.out.println(evaluate("3 4 +")); // 7 }}Problem 5 — Next Greater ElementDifficulty: MediumProblem: For each element in an array, find the next greater element to its right. If none exists, output -1.Example: Input: [4, 5, 2, 10, 8] → Output: [5, 10, 10, -1, -1]Approach: Iterate right to left. Maintain a stack of candidates. For each element, pop all stack elements that are smaller than or equal to it — they can never be the answer for any element to the left. The top of the stack (if not empty) is the next greater element.// NextGreaterElement.javaimport java.util.Stack;import java.util.Arrays;public class NextGreaterElement { public static int[] nextGreater(int[] arr) { int n = arr.length; int[] result = new int[n]; Stack<Integer> stack = new Stack<>(); // stores elements, not indices // Traverse from right to left for (int i = n - 1; i >= 0; i--) { // Pop elements smaller than or equal to current while (!stack.isEmpty() && stack.peek() <= arr[i]) { stack.pop(); } // Next greater element result[i] = stack.isEmpty() ? 
-1 : stack.peek(); // Push current element for future comparisons stack.push(arr[i]); } return result; } public static void main(String[] args) { int[] arr1 = {4, 5, 2, 10, 8}; System.out.println(Arrays.toString(nextGreater(arr1))); // [5, 10, 10, -1, -1] int[] arr2 = {1, 3, 2, 4}; System.out.println(Arrays.toString(nextGreater(arr2))); // [3, 4, 4, -1] int[] arr3 = {5, 4, 3, 2, 1}; System.out.println(Arrays.toString(nextGreater(arr3))); // [-1, -1, -1, -1, -1] }}Problem 6 — Sort a Stack Using RecursionDifficulty: HardProblem: Sort a stack in ascending order (smallest on top) using only recursion — no loops, no extra data structure.// SortStack.javaimport java.util.Stack;public class SortStack { // Insert element in correct sorted position public static void sortedInsert(Stack<Integer> stack, int item) { if (stack.isEmpty() || item > stack.peek()) { stack.push(item); return; } int top = stack.pop(); sortedInsert(stack, item); stack.push(top); } // Sort the stack public static void sortStack(Stack<Integer> stack) { if (stack.isEmpty()) return; int top = stack.pop(); sortStack(stack); // sort remaining sortedInsert(stack, top); // insert top in sorted position } public static void main(String[] args) { Stack<Integer> stack = new Stack<>(); stack.push(34); stack.push(3); stack.push(31); stack.push(98); stack.push(92); stack.push(23); System.out.println("Before sort: " + stack); sortStack(stack); System.out.println("After sort: " + stack); // smallest on top }}13. Summary & Key TakeawaysA stack is a simple, elegant, and powerful data structure. 
Here is everything in one place:What it is: A linear data structure that follows LIFO — Last In, First Out.Core operations: push (add to top), pop (remove from top), peek (view top), isEmpty — all in O(1) time.Three ways to implement it in Java:Array-based: fast, fixed size, risk of overflowArrayList-based: dynamic, easy, slightly more overheadLinkedList-based: truly dynamic, memory-efficient per-element, best for unknown sizesWhen to use it:Undo/redo systemsBrowser navigationBalancing brackets and parenthesesEvaluating mathematical expressionsBacktracking problemsManaging recursive function callsDepth-first searchWhen NOT to use it:When you need random access to elementsWhen insertion/deletion is needed from both ends (use Deque)When you need to search efficiently (use HashMap or BST)Modern Java recommendation: Prefer ArrayDeque over the legacy Stack class for non-thread-safe scenarios. Use Stack only when you need synchronized access.The stack is one of those data structures that once you truly understand, you start seeing it everywhere — in your browser, in your IDE, in recursive algorithms, and deep within the operating system itself.This article covered everything from the fundamentals of the Stack data structure to multiple Java implementations, time complexity analysis, real-world applications, and six practice problems of increasing difficulty. Bookmark it as a reference and revisit the practice problems regularly — they are the real test of your understanding.
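To tie the summary together, here is a minimal sketch of one of the listed use cases — iterative depth-first search — written with the recommended ArrayDeque instead of the legacy Stack class. The graph representation and method names here are illustrative, not from the article's own code.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class IterativeDfs {
    // Iterative DFS over an adjacency list, using ArrayDeque as the stack.
    public static List<Integer> dfs(List<List<Integer>> adj, int start) {
        List<Integer> order = new ArrayList<>();
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (visited[node]) continue; // may have been pushed twice
            visited[node] = true;
            order.add(node);
            // Pushed neighbors are explored before older stack entries (LIFO).
            for (int next : adj.get(node)) {
                if (!visited[next]) stack.push(next);
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Small example graph: 0 -> 1, 0 -> 2, 1 -> 3
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        adj.get(0).add(1);
        adj.get(0).add(2);
        adj.get(1).add(3);
        System.out.println(dfs(adj, 0)); // [0, 2, 1, 3]
    }
}
```

This is exactly the recursion-to-stack translation described above: the explicit Deque replaces the JVM call stack, which also sidesteps StackOverflowError on very deep graphs.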

Tags: DataStructures, Java, Stack, DataStructure, LIFO
LeetCode 1614: Maximum Nesting Depth of Parentheses — Java Solution Explained


IntroductionLeetCode 1614 Maximum Nesting Depth of Parentheses is a natural follow-up to LeetCode 20 Valid Parentheses. While LeetCode 20 asks "are the brackets valid?", this problem asks "how deeply are they nested?" It is a clean, focused problem that teaches you how to think about bracket depth — a concept that appears in compilers, parsers, JSON validators, and XML processors in real world software.Here is the Link of Question -: LeetCode 1614In this article we cover plain English explanation, real life analogy, two Java approaches with dry runs, complexity analysis, and all the important details you need for interviews.What Is the Problem Really Asking?You are given a valid parentheses string. You need to find the maximum number of nested (not just sequential) open parentheses at any point in the string.The key distinction here is nested vs sequential:"()()()" → depth is 1, brackets are sequential not nested"((()))" → depth is 3, brackets are fully nested inside each other"()(())" → depth is 2, the second pair is nested one level deepReal Life Analogy — Folders Inside FoldersThink of your computer's file system. You have a folder, inside that a subfolder, inside that another subfolder. The depth is how many folders deep you are at the deepest point."(1+(2*3)+((8)/4))+1" is like:Outer folder ( → depth 1Inner folder ( inside it → depth 2Innermost folder (( → depth 3The answer is how deep the deepest file is buried. You do not care about other folders — just the maximum depth reached at any single moment.Approach 1: Stack Based Solution (Classic)The IdeaUse a stack exactly like LeetCode 20. Every time you push an opening bracket, increment a counter. Every time you pop a closing bracket, record the max before decrementing. 
The stack size at any moment represents current depth.public int maxDepth(String s) { Stack<Character> st = new Stack<>(); int current = 0; int max = 0; for (int i = 0; i < s.length(); i++) { char c = s.charAt(i); if (c == '(') { st.push(c); current++; max = Math.max(max, current); // record depth when going deeper } else if (c == ')') { if (!st.empty() && st.peek() == '(') { max = Math.max(max, current); current--; st.pop(); } } } return max;}Dry Run — s = "()(())((()()))"( → push, current = 1, max = 1) → pop, current = 0( → push, current = 1, max = 1( → push, current = 2, max = 2) → pop, current = 1) → pop, current = 0( → push, current = 1, max = 2( → push, current = 2, max = 2( → push, current = 3, max = 3 ✅) → pop, current = 2( → push, current = 3, max = 3) → pop, current = 2) → pop, current = 1) → pop, current = 0✅ Output: 3Time Complexity: O(n) — single pass Space Complexity: O(n) — stack holds up to n/2 opening bracketsApproach 2: Counter Only — No Stack (Optimal) ✅The IdeaThis is the smartest approach and the real insight of this problem. You do not actually need a stack at all! Think about it — the depth at any moment is simply how many unmatched opening brackets we have seen so far. That is just a counter!( → increment counter, update max) → decrement counterEverything else → ignorepublic int maxDepth(String s) { int current = 0; int max = 0; for (int i = 0; i < s.length(); i++) { if (s.charAt(i) == '(') { current++; max = Math.max(max, current); } else if (s.charAt(i) == ')') { current--; } } return max;}This is beautifully simple. 
No stack, no extra memory, just two integer variables.Dry Run — s = "(1+(2*3)+((8)/4))+1"Only tracking ( and ), ignoring digits and operators:( → current = 1, max = 1( → current = 2, max = 2) → current = 1( → current = 2, max = 2( → current = 3, max = 3 ✅) → current = 2) → current = 1) → current = 0✅ Output: 3Time Complexity: O(n) — single pass Space Complexity: O(1) — only two integer variables, no extra storage!Why Update Max on ( Not on )?This is the most important implementation detail. You update max when you open a bracket, not when you close it. Why?Because when you encounter (, your depth just increased to a new level — that is when you might have hit a new maximum. When you encounter ), you are going back up — depth is decreasing, so it can never be a new maximum.Always capture the peak on the way down into nesting, not on the way back out.Stack vs Counter — Which to Use?The counter approach is strictly better here — same time complexity but O(1) space instead of O(n). In an interview, start by mentioning the stack approach to show you recognize the stack pattern, then immediately offer the counter optimization to show deeper understanding.This mirrors the same progression as LeetCode 844 Backspace String Compare — where the O(1) two pointer follow-up impressed interviewers more than the standard stack solution.How This Fits the Stack Pattern SeriesLooking at the full series you have been solving:20 Valid Parentheses — are brackets correctly matched? 1614 Maximum Nesting Depth — how deeply are they nested?These two problems are complementary. One validates structure, the other measures depth. Together they cover the two most fundamental questions you can ask about a bracket string. Real world parsers need to answer both — "is this valid?" and "how complex is the nesting?"Common Mistakes to AvoidUpdating max after decrementing on ) If you write current-- before Math.max, you will always be one level too low and miss the true maximum. 
Always capture max before or at the moment of increment, never after decrement.Counting all characters not just brackets Digits, operators like +, -, *, / must be completely ignored. Only ( and ) affect depth.Using a Stack when a counter suffices Since the problem guarantees a valid parentheses string, you never need to validate matching — just track depth. A Stack adds unnecessary complexity and memory overhead here.FAQs — People Also AskQ1. What is nesting depth in LeetCode 1614? Nesting depth is the maximum number of open parentheses that are simultaneously unclosed at any point in the string. For example "((()))" has depth 3 because at the innermost point, three ( are open at the same time.Q2. What is the best approach for LeetCode 1614 in Java? The counter approach is optimal — O(n) time and O(1) space. Increment a counter on (, update max, decrement on ). No stack needed since the string is guaranteed to be a valid parentheses string.Q3. What is the time complexity of LeetCode 1614? Both approaches are O(n) time. The Stack approach uses O(n) space while the counter approach uses O(1) space, making the counter approach strictly better.Q4. What is the difference between LeetCode 20 and LeetCode 1614? LeetCode 20 validates whether a bracket string is correctly formed. LeetCode 1614 assumes the string is already valid and asks how deeply the brackets are nested. LeetCode 20 needs a stack for matching; LeetCode 1614 only needs a counter.Q5. Is LeetCode 1614 asked in coding interviews? It appears occasionally as a warmup or follow-up after LeetCode 20. The more important skill it tests is recognizing when a stack can be replaced by a simpler counter — that kind of space optimization thinking is valued in interviews.Similar LeetCode Problems to Practice Next20. Valid Parentheses — Easy — validate bracket structure1021. Remove Outermost Parentheses — Easy — depth-based filtering1249. Minimum Remove to Make Valid Parentheses — Medium — remove minimum brackets32. 
Longest Valid Parentheses — Hard — longest valid substring394. Decode String — Medium — nested brackets with encodingConclusionLeetCode 1614 Maximum Nesting Depth of Parentheses teaches a deceptively simple but important lesson — not every bracket problem needs a stack. When the string is guaranteed valid and you only need to measure depth, a counter is all you need.The progression from LeetCode 20 to 1614 perfectly illustrates how understanding the core problem deeply leads to elegant simplifications. Master both, understand why one needs a stack and the other does not, and you will have a strong foundation for every bracket problem in your interview journey.
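As a quick self-check of the conclusion above, both approaches from this article can be run side by side on the examples the article uses. The harness below re-states both solutions (class and method names are my own, chosen for this sketch) and prints their answers so you can confirm they always agree.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MaxDepthCheck {
    // Counter approach: depth = number of currently unmatched '(' characters.
    static int maxDepthCounter(String s) {
        int current = 0, max = 0;
        for (char c : s.toCharArray()) {
            if (c == '(') max = Math.max(max, ++current);
            else if (c == ')') current--;
        }
        return max;
    }

    // Stack approach: the stack size while scanning is the current depth.
    static int maxDepthStack(String s) {
        Deque<Character> st = new ArrayDeque<>();
        int max = 0;
        for (char c : s.toCharArray()) {
            if (c == '(') { st.push(c); max = Math.max(max, st.size()); }
            else if (c == ')') st.pop();
        }
        return max;
    }

    public static void main(String[] args) {
        String[] samples = {"()()()", "((()))", "()(())", "(1+(2*3)+((8)/4))+1"};
        for (String s : samples) {
            // Both methods should report the same depth for each sample.
            System.out.println(s + " -> " + maxDepthCounter(s)
                    + " / " + maxDepthStack(s));
        }
    }
}
```

The outputs match (1, 3, 2, and 3 respectively), which is the practical proof that the counter really is a drop-in replacement for the stack here.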

Tags: String, Stack, LeetCode, Easy
Check if All Characters Have Equal Number of Occurrences – Frequency Map Approach (LeetCode 1941)


🔗 Problem LinkLeetCode 1941 – Check if All Characters Have Equal Number of Occurrences 👉 https://leetcode.com/problems/check-if-all-characters-have-equal-number-of-occurrences/IntroductionThis is one of those problems that looks very simple at first glance — and it actually is — but it helps strengthen your understanding of frequency counting using HashMap.The problem asks us to determine whether all characters in a string occur the same number of times.No sliding window. No binary search. Just clean frequency logic.But even simple problems help build strong foundations.📌 Problem UnderstandingA string is considered "good" if:Every character that appears in the stringAppears the same number of timesIf even one character has a different frequency → return false.Example 1Input: s = "abacbc"Output: trueCharacter counts:a → 2b → 2c → 2All equal → ✔ trueExample 2Input: s = "aaabb"Output: falseCharacter counts:a → 3b → 2Not equal → ✘ false🧠 Approach & IntuitionWhen I saw this problem, my thinking was:Count the frequency of every character.Compare all frequencies.If all are equal → return true.The important part is choosing a reference frequency and comparing everything against it.💻 Your Codeclass Solution { public boolean areOccurrencesEqual(String s) { HashMap<Character,Integer> mp = new HashMap<>(); int ref =0; char c = s.charAt(0); for(int i =0 ; i< s.length();i++){ if(c == s.charAt(i)){ ref++; } mp.put(s.charAt(i),mp.getOrDefault(s.charAt(i),0)+1); } for(int a:mp.values()){ if(ref != a){ return false; } } return true; }}🔍 Step-by-Step Explanation1️⃣ Initialize HashMapHashMap<Character,Integer> mp = new HashMap<>();This stores frequency of each character.2️⃣ Choose Reference Characterchar c = s.charAt(0);int ref = 0;You use the first character as a reference.Then count how many times it appears while also building the frequency map.3️⃣ Build Frequency Mapmp.put(s.charAt(i), mp.getOrDefault(s.charAt(i), 0) + 1);This line increases count for each character.4️⃣ Compare 
Frequenciesfor(int a : mp.values()){ if(ref != a){ return false; }}If any frequency differs from the reference count → return false.Otherwise → true.⏱ Time and Space ComplexityTime Complexity: O(n)One loop to count frequenciesOne loop over at most 26 charactersSpace Complexity: O(26) ≈ O(1)Only lowercase English letters are allowed.🔥 Small Optimization IdeaYour solution works perfectly.However, we can simplify it slightly:Instead of separately counting the reference frequency, we can:First build the entire frequency map.Take the frequency of the first character from the map.Compare all values with it.Cleaner Versionclass Solution { public boolean areOccurrencesEqual(String s) { HashMap<Character,Integer> mp = new HashMap<>(); for(char ch : s.toCharArray()){ mp.put(ch, mp.getOrDefault(ch, 0) + 1); } int ref = mp.get(s.charAt(0)); for(int freq : mp.values()){ if(freq != ref){ return false; } } return true; }}Same logic — slightly cleaner structure.🎯 Key Learning from This ProblemThis problem reinforces:Frequency counting using HashMapUsing a reference value for comparisonClean loop logicEarly return for optimizationEven though it is an easy problem, it builds the base for harder problems like:Valid AnagramGroup AnagramsFirst Unique CharacterRansom Note🏁 Final ThoughtsProblems like this are not about complexity.They are about:Writing clean logicHandling frequency maps properlyThinking clearly about conditionsMastering easy problems makes medium and hard problems much easier later.
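Since the section above notes that only lowercase English letters appear, one further variant worth knowing is replacing the HashMap with a fixed int[26] array — a common micro-optimization for frequency problems. This is a sketch of that idea, not the article's original code.

```java
public class EqualOccurrences {
    // Array-based frequency count: valid because s contains only 'a'..'z'.
    public static boolean areOccurrencesEqual(String s) {
        int[] freq = new int[26];
        for (char c : s.toCharArray()) {
            freq[c - 'a']++;
        }
        // Every letter that appears must match the first character's count.
        int ref = freq[s.charAt(0) - 'a'];
        for (int f : freq) {
            if (f != 0 && f != ref) return false; // ignore absent letters
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(areOccurrencesEqual("abacbc")); // true
        System.out.println(areOccurrencesEqual("aaabb"));  // false
    }
}
```

Same O(n) time as the HashMap versions, but with a tiny fixed array instead of hashing — the same trick carries over to Valid Anagram and Ransom Note.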

Tags: HashMap, String, Frequency Count, LeetCode, Easy
LeetCode 735: Asteroid Collision — Java Solution Explained


IntroductionIf you have been building your stack skills through problems like Valid Parentheses, Next Greater Element, and Backspace String Compare, then LeetCode 735 Asteroid Collision is the problem where everything comes together. It is one of the most satisfying Medium problems on LeetCode because it feels like a real simulation — you are literally modelling asteroids flying through space and crashing into each other.You can find the problem here — LeetCode 735 Asteroid Collision.This article breaks everything down in plain English so that anyone — beginner or experienced — can understand exactly what is happening and why the stack is the perfect tool for this problem.What Is the Problem Really Asking?You have a row of asteroids moving through space. Each asteroid has a size and a direction:Positive number → asteroid moving to the rightNegative number → asteroid moving to the leftAll asteroids move at the same speed. When a right-moving asteroid and a left-moving asteroid meet head-on, they collide:The smaller one explodesIf they are the same size, both explodeThe bigger one survives and keeps movingTwo asteroids moving in the same direction never meet, so they never collide.Return the final state of all surviving asteroids after every possible collision has happened.Real Life Analogy — Cars on a HighwayImagine a highway with cars driving in both directions. Cars going right are in one lane, cars going left are in another lane. Now imagine the lanes overlap at some point.A small car going right crashes into a big truck going left — the car gets destroyed, the truck keeps going. Two equally sized cars crash — both are destroyed. A massive truck going right demolishes everything coming from the left until it meets something bigger or nothing at all.That is exactly the asteroid problem. 
The stack helps us track which asteroids are still "alive" and moving right, waiting to potentially collide with the next left-moving asteroid that comes along.Why Stack Is the Perfect Data Structure HereThe key observation is this — only a right-moving asteroid followed by a left-moving asteroid can collide. A left-moving asteroid might destroy several right-moving ones in a chain before it either survives or gets destroyed itself.This chain reaction behavior — where the outcome of one collision immediately triggers the possibility of another — is exactly what a stack handles naturally. The stack holds right-moving asteroids that are still alive and waiting. When a left-moving asteroid arrives, it battles the top of the stack repeatedly until either it is destroyed or no more collisions are possible.All Possible Collision ScenariosBefore looking at code it is important to understand every case that can happen:Case 1 — Right-moving asteroid (ast[i] > 0) No collision possible immediately. Push it onto the stack and move on.Case 2 — Left-moving asteroid, stack is empty Nothing to collide with. Push it onto the stack.Case 3 — Left-moving asteroid, top of stack is also left-moving (negative) Two asteroids going the same direction never meet. Push it onto the stack.Case 4 — Left-moving asteroid meets right-moving asteroid (collision!) 
Three sub-cases:Stack top is bigger → left-moving asteroid explodes, stack top survivesStack top is smaller → stack top explodes, left-moving asteroid continues (loop again)Same size → both explodeThe Solution — Stack Simulationpublic int[] asteroidCollision(int[] ast) { Stack<Integer> st = new Stack<>(); for (int i = 0; i < ast.length; i++) { boolean survived = true; // assume current asteroid survives // collision only happens when stack top is positive // and current asteroid is negative while (!st.empty() && st.peek() > 0 && ast[i] < 0) { if (st.peek() > Math.abs(ast[i])) { // stack top is bigger — current asteroid explodes survived = false; break; } else if (st.peek() < Math.abs(ast[i])) { // current asteroid is bigger — stack top explodes // current asteroid keeps going, check next stack element st.pop(); } else { // equal size — both explode st.pop(); survived = false; break; } } if (survived) { st.push(ast[i]); } } // build result array from stack (stack gives reverse order) int[] ans = new int[st.size()]; for (int i = ans.length - 1; i >= 0; i--) { ans[i] = st.pop(); } return ans;}Step-by-Step Dry Run — asteroids = [10, 2, -5]Let us trace exactly what happens:Processing 10:Stack is empty, no collision possiblesurvived = true → push 10Stack: [10]Processing 2:Stack top is 10 (positive), current is 2 (positive) — same direction, no collisionsurvived = true → push 2Stack: [10, 2]Processing -5:Stack top is 2 (positive), current is -5 (negative) — collision!2 < 5 → stack top smaller, pop 2. survived stays trueStack: [10]Stack top is 10 (positive), current is -5 (negative) — collision again!10 > 5 → stack top bigger, current asteroid destroyed. survived = false, breakStack: [10]survived = false → do not push -5Final stack: [10] → output: [10] ✅Step-by-Step Dry Run — asteroids = [3, 5, -6, 2, -1, 4]Processing 3: stack empty → push. Stack: [3]Processing 5: both positive, same direction → push. Stack: [3, 5]Processing -6:Collision with 5: 5 < 6 → pop 5. 
Stack: [3]Collision with 3: 3 < 6 → pop 3. Stack: []Stack empty → survived = true → push -6Stack: [-6]Processing 2: stack top is -6 (negative), current is 2 (positive) — same direction check fails, no collision → push. Stack: [-6, 2]Processing -1:Collision with 2: 2 > 1 → stack top bigger, -1 explodes. survived = falseStack: [-6, 2]Processing 4: stack top is 2 (positive), current is 4 (positive) — same direction → push. Stack: [-6, 2, 4]Final stack: [-6, 2, 4] → output: [-6, 2, 4] ✅Understanding the survived FlagThe survived boolean flag is the most important design decision in this solution. It tracks whether the current asteroid makes it through all collisions.It starts as true — we assume the asteroid survives until proven otherwise. It only becomes false in two situations — when the stack top is bigger (current asteroid destroyed) or when both are equal size (mutual destruction). If survived is still true after the while loop, the asteroid either won all its battles or never had any — either way it gets pushed onto the stack.This flag eliminates the need for complicated nested conditions and makes the logic clean and readable.Building the Result ArrayOne important detail — when you pop everything from a stack to build an array, the order is reversed. The stack gives you elements from top to bottom (last to first). So we fill the result array from the end to the beginning using i = ans.length - 1 going down to 0. This preserves the original left-to-right order of surviving asteroids.Time and Space ComplexityTime Complexity: O(n) — each asteroid is pushed onto the stack at most once and popped at most once. Even though there is a while loop inside the for loop, each element participates in at most one push and one pop across the entire run. 
Total operations stay linear.Space Complexity: O(n) — in the worst case (all asteroids moving right, no collisions) all n asteroids sit on the stack simultaneously.Common Mistakes to AvoidForgetting that same-direction asteroids never collide The collision condition is specifically st.peek() > 0 && ast[i] < 0. Two positive asteroids, two negative asteroids, or a negative followed by a positive — none of these collide. Only right then left.Not using a loop for chain collisions A single left-moving asteroid can destroy multiple right-moving ones in sequence. If you only check the stack top once instead of looping, you will miss chain destructions like in the [3, 5, -6] example.Forgetting the survived flag and always pushing Without the flag, a destroyed asteroid still gets pushed onto the stack, giving wrong results.Wrong array reconstruction from stack Forgetting that stack order is reversed and filling the array from left to right gives a backwards answer. Always fill from the last index downward.How This Problem Differs From Previous Stack ProblemsEvery previous stack problem in this series had a simple push-or-pop decision per character. Asteroid Collision introduces something new — a while loop inside the for loop. This is because one incoming asteroid can trigger multiple consecutive pops (chain collisions). The stack is no longer just storing history — it is actively participating in a simulation where multiple stored elements can be affected by a single incoming element.This is the defining characteristic of harder stack problems and is exactly what appears in problems like Largest Rectangle in Histogram and Trapping Rain Water.FAQs — People Also AskQ1. Why is a Stack used to solve LeetCode 735 Asteroid Collision? Because right-moving asteroids wait on the stack until a left-moving asteroid arrives. The left-moving asteroid battles the top of the stack repeatedly — this LIFO chain reaction behavior is exactly what a stack handles naturally and efficiently.Q2. 
What is the time complexity of LeetCode 735? O(n) time because each asteroid is pushed and popped at most once regardless of how many chain collisions happen. Space complexity is O(n) for the stack in the worst case.Q3. When do two asteroids NOT collide in LeetCode 735? Two asteroids never collide when both move right (both positive), both move left (both negative), or when a left-moving asteroid comes before a right-moving one — they move away from each other in that case.Q4. Is LeetCode 735 asked in coding interviews? Yes, it is commonly asked at companies like Amazon, Google, and Microsoft as a Medium stack problem. It tests whether you can handle simulation problems with multiple conditional branches and chain reactions — skills that translate directly to real world system design thinking.Q5. What is the difference between LeetCode 735 and LeetCode 496 Next Greater Element? Both use a stack and involve comparing elements. In Next Greater Element, you search forward for something bigger. In Asteroid Collision, collisions happen between the current element and stack contents, and the current element might destroy multiple previous elements in a chain before settling. The collision logic in 735 is more complex.Similar LeetCode Problems to Practice Next496. Next Greater Element I — Easy — monotonic stack pattern739. Daily Temperatures — Medium — next greater with index distance1047. Remove All Adjacent Duplicates In String — Easy — chain removal with stack84. Largest Rectangle in Histogram — Hard — advanced stack simulation503. Next Greater Element II — Medium — circular array with monotonic stackConclusionLeetCode 735 Asteroid Collision is a wonderful problem that takes the stack simulation pattern to the next level. 
The key insight is recognizing that only right-then-left asteroid pairs can collide, that chain collisions require a while loop not just an if statement, and that the survived flag keeps the logic clean across all cases.Work through every dry run in this article carefully — especially the [3, 5, -6, 2, -1, 4] example — because seeing chain collisions play out step by step is what makes this pattern click permanently.Once this problem makes sense, you are genuinely ready for the harder stack problems that follow. Keep going!
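For readers who want to run the simulation locally, here is the same stack algorithm rewritten with ArrayDeque, which the JDK documentation recommends over the legacy Stack class. The class name and the main harness are my additions, not part of the original solution:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class AsteroidCollisionDemo {

    // Same simulation as above: right-moving asteroids wait on the stack,
    // each left-moving asteroid battles the stack top until it dies or wins.
    public static int[] asteroidCollision(int[] ast) {
        Deque<Integer> st = new ArrayDeque<>();
        for (int a : ast) {
            boolean survived = true;
            while (survived && !st.isEmpty() && st.peek() > 0 && a < 0) {
                if (st.peek() > -a) {
                    survived = false;      // stack top bigger: current asteroid explodes
                } else if (st.peek() < -a) {
                    st.pop();              // stack top smaller: it explodes, keep battling
                } else {
                    st.pop();              // equal size: both explode
                    survived = false;
                }
            }
            if (survived) {
                st.push(a);
            }
        }
        // pop() returns top-to-bottom, so fill the result array back to front
        int[] ans = new int[st.size()];
        for (int i = ans.length - 1; i >= 0; i--) {
            ans[i] = st.pop();
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(asteroidCollision(new int[]{10, 2, -5})));          // [10]
        System.out.println(Arrays.toString(asteroidCollision(new int[]{3, 5, -6, 2, -1, 4}))); // [-6, 2, 4]
    }
}
```

Both dry runs from this article are reproduced by the harness, so you can tweak the inputs and watch the chain collisions play out.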

LeetCode · Java · Stack · Array · Medium
LeetCode 123 — Best Time to Buy and Sell Stock III | At Most Two Transactions Explained


🚀 Try This Problem First!Before reading the solution, attempt it yourself on LeetCode — you will learn much more that way.🔗 Problem Link: https://leetcode.com/problems/best-time-to-buy-and-sell-stock-iii/What Is the Problem Asking?You have a list of stock prices, one for each day. You are allowed to buy and sell the stock, but with one important restriction — you can make at most two transactions in total. A transaction means one buy followed by one sell.You cannot hold two stocks at the same time, meaning you must sell whatever you are holding before you buy again.Your job is to find the maximum profit you can make by doing at most two transactions. If there is no way to make any profit, return 0.Example to build intuition: prices = [3, 3, 5, 0, 0, 3, 1, 4]If you buy on day 4 (price 0) and sell on day 6 (price 3), profit = 3. Then buy on day 7 (price 1) and sell on day 8 (price 4), profit = 3. Total profit = 6. That is the best you can do.Constraints:1 ≤ prices.length ≤ 10⁵0 ≤ prices[i] ≤ 10⁵Why Is This Harder Than the Previous Stock Problems?In LeetCode 121, you could only make one transaction — so you just found the best single buy-sell pair.In LeetCode 122, you could make unlimited transactions — so you greedily collected every upward price movement.Here, you are limited to exactly two transactions. Greedy does not work anymore because sometimes skipping a small profit early on allows you to make a much bigger combined profit across two trades. You need a smarter strategy.The Big Idea Behind the SolutionHere is a simple way to think about it.If you are allowed two transactions, you can imagine drawing a dividing line somewhere in the price array. The first transaction happens entirely on the left side of that line. 
The second transaction happens entirely on the right side.So the problem becomes:For every possible dividing position, find the best profit on the left side and the best profit on the right side.Add them together.The maximum sum across all dividing positions is your answer.The challenge is doing this efficiently without checking every position with a nested loop, which would be O(N²) and too slow.Approach 1 — Left-Right Prefix DPThe idea:Instead of trying every split point from scratch, precompute the answers in advance.Build a leftProfit array where leftProfit[i] stores the best single transaction profit you can make using prices from day 0 to day i.Build a rightProfit array where rightProfit[i] stores the best single transaction profit you can make using prices from day i to the last day.Then for every index i, the answer if you split at i is simply leftProfit[i] + rightProfit[i]. Take the maximum across all indices.How to build leftProfit:Go left to right. Keep track of the minimum price seen so far. At each day i, the best profit you could make ending at or before day i is either the same as the day before, or selling today at today's price minus the minimum price seen so far. Take whichever is larger.How to build rightProfit:Go right to left. Keep track of the maximum price seen so far from the right. At each day i, the best profit you could make starting at or after day i is either the same as the day after, or buying today at today's price and selling at the maximum price seen to the right. 
Take whichever is larger.Dry Run — prices = [3, 3, 5, 0, 0, 3, 1, 4]Building leftProfit left to right, tracking minimum price seen:Day 0 → min=3, leftProfit[0] = 0 (nothing to compare yet) Day 1 → min=3, best profit = 3-3 = 0, leftProfit[1] = max(0, 0) = 0 Day 2 → min=3, best profit = 5-3 = 2, leftProfit[2] = max(0, 2) = 2 Day 3 → min=0, best profit = 0-0 = 0, leftProfit[3] = max(2, 0) = 2 Day 4 → min=0, best profit = 0-0 = 0, leftProfit[4] = max(2, 0) = 2 Day 5 → min=0, best profit = 3-0 = 3, leftProfit[5] = max(2, 3) = 3 Day 6 → min=0, best profit = 1-0 = 1, leftProfit[6] = max(3, 1) = 3 Day 7 → min=0, best profit = 4-0 = 4, leftProfit[7] = max(3, 4) = 4leftProfit = [0, 0, 2, 2, 2, 3, 3, 4]Building rightProfit right to left, tracking maximum price seen:Day 7 → max=4, rightProfit[7] = 0 (nothing to compare yet) Day 6 → max=4, best profit = 4-1 = 3, rightProfit[6] = max(0, 3) = 3 Day 5 → max=4, best profit = 4-3 = 1, rightProfit[5] = max(3, 1) = 3 Day 4 → max=4, best profit = 4-0 = 4, rightProfit[4] = max(3, 4) = 4 Day 3 → max=4, best profit = 4-0 = 4, rightProfit[3] = max(4, 4) = 4 Day 2 → max=5, best profit = 5-5 = 0, rightProfit[2] = max(4, 0) = 4 Day 1 → max=5, best profit = 5-3 = 2, rightProfit[1] = max(4, 2) = 4 Day 0 → max=5, best profit = 5-3 = 2, rightProfit[0] = max(4, 2) = 4rightProfit = [4, 4, 4, 4, 4, 3, 3, 0]Combining at each index: Index 0 → 0 + 4 = 4 Index 1 → 0 + 4 = 4 Index 2 → 2 + 4 = 6 Index 3 → 2 + 4 = 6 Index 4 → 2 + 4 = 6 Index 5 → 3 + 3 = 6 Index 6 → 3 + 3 = 6 Index 7 → 4 + 0 = 4Maximum = 6 ✅Implementation code:class Solution { public int maxProfit(int[] prices) { int lefProf[] = new int[prices.length]; int rigProf[] = new int[prices.length]; int j = 1; int i = 0; int coL = 1; while (i < j && j < prices.length) { if (prices[i] > prices[j]) { i = j; if (coL - 1 != -1) { lefProf[coL] = lefProf[coL - 1]; } coL++; } else { int max = prices[j] - prices[i]; if (coL - 1 != -1) { lefProf[coL] = Math.max(lefProf[coL - 1], max); coL++; } else { 
lefProf[coL] = max; coL++; } } j++; } int ii = prices.length - 2; int jj = prices.length - 1; int coR = prices.length - 2; while (ii >= 0) { if (prices[ii] > prices[jj]) { jj = ii; if (coR + 1 < prices.length) { rigProf[coR] = rigProf[coR + 1]; } coR--; } else { int max = prices[jj] - prices[ii]; if (coR + 1 < prices.length) { rigProf[coR] = Math.max(rigProf[coR + 1], max); coR--; } else { rigProf[coR] = max; coR--; } } ii--; } int maxAns = 0; for (int k = 0; k < lefProf.length; k++) { maxAns = Math.max(maxAns, lefProf[k] + rigProf[k]); } return maxAns; }}The approach here is exactly right. You are building the left and right profit arrays using a two pointer scanning technique — the same idea as LeetCode 121 but applied twice, once going forward and once going backward. The manual counters coL and coR make it a bit harder to follow, but the logic underneath is solid.Here is the same approach written in a cleaner way so it is easier to read and debug:class Solution { public int maxProfit(int[] prices) { int n = prices.length; int[] leftProfit = new int[n]; int[] rightProfit = new int[n]; int minPrice = prices[0]; for (int i = 1; i < n; i++) { minPrice = Math.min(minPrice, prices[i]); leftProfit[i] = Math.max(leftProfit[i - 1], prices[i] - minPrice); } int maxPrice = prices[n - 1]; for (int i = n - 2; i >= 0; i--) { maxPrice = Math.max(maxPrice, prices[i]); rightProfit[i] = Math.max(rightProfit[i + 1], maxPrice - prices[i]); } int maxProfit = 0; for (int i = 0; i < n; i++) { maxProfit = Math.max(maxProfit, leftProfit[i] + rightProfit[i]); } return maxProfit; }}Time Complexity: O(N) — three separate linear passes through the array. Space Complexity: O(N) — two extra arrays of size N.Approach 2 — Four Variable State Machine DPThe idea:Instead of building arrays, what if you could track everything in just four variables and solve it in a single pass?Think about what is happening at any point in time as you go through the prices. 
You are always in one of these situations:You have bought your first stock and are currently holding it. You have sold your first stock and are sitting on that profit. You have bought your second stock (using the profit from the first sale) and are holding it. You have sold your second stock and collected your total profit.These four situations are your four states. For each one, you always want to know — what is the best possible profit I could have in this state up to today?Here is how each state updates as you look at a new price:buy1 — This represents the best outcome after buying the first stock. Buying costs money, so this is negative. As you see each new price, you ask: is it better to have bought on some earlier day, or to buy today? You keep whichever gives you the least cost, which means the highest value of negative price.sell1 — This represents the best profit after completing the first transaction. As you see each new price, you ask: is it better to have sold on some earlier day, or to sell today using the best buy1 we have? Selling today means adding today's price on top of buy1.buy2 — This represents the best outcome after buying the second stock. The key insight here is that you are not starting from zero — you already have the profit from the first transaction in your pocket. So buying the second stock costs today's price but you subtract it from sell1. As you see each new price, you ask: is it better to have bought the second stock earlier, or to buy today using the profit from sell1?sell2 — This is your final answer. It represents the best total profit after completing both transactions. As you see each new price, you ask: is it better to have completed both transactions earlier, or to sell the second stock today using the best buy2 we have?The beautiful thing is that all four updates happen in order on every single day, in one pass through the array. 
Each variable feeds into the next — buy1 feeds sell1, sell1 feeds buy2, buy2 feeds sell2.Dry Run — prices = [3, 3, 5, 0, 0, 3, 1, 4]We start with buy1 and buy2 as very negative (not yet bought anything) and sell1 and sell2 as 0 (no profit yet).Day 0, price = 3: buy1 = max(-∞, -3) = -3. Best cost to buy first stock is spending 3. sell1 = max(0, -3+3) = 0. Selling immediately gives no profit. buy2 = max(-∞, 0-3) = -3. Best cost to buy second stock after first sale is 3. sell2 = max(0, -3+3) = 0. No total profit yet.Day 1, price = 3: Nothing changes — same price as day 0.Day 2, price = 5: buy1 = max(-3, -5) = -3. Still best to have bought at price 3. sell1 = max(0, -3+5) = 2. Selling today at 5 after buying at 3 gives profit 2. buy2 = max(-3, 2-5) = -3. Buying second stock today costs too much relative to first profit. sell2 = max(0, -3+5) = 2. Completing both trades gives 2 so far.Day 3, price = 0: buy1 = max(-3, -0) = 0. Buying today at price 0 is the best possible first buy. sell1 = max(2, 0+0) = 2. Selling today at 0 gives nothing — keep the 2 from before. buy2 = max(-3, 2-0) = 2. Using the profit of 2 and buying today at 0 gives buy2 = 2. sell2 = max(2, 2+0) = 2. Selling second stock today at 0 gives nothing extra.Day 4, price = 0: Nothing changes — same price as day 3.Day 5, price = 3: buy1 = max(0, -3) = 0. Still best to have bought at price 0. sell1 = max(2, 0+3) = 3. Selling today at 3 after buying at 0 gives profit 3. buy2 = max(2, 3-3) = 2. Buying second stock today at 3 using sell1=3 gives 0. Keep 2. sell2 = max(2, 2+3) = 5. Selling second stock today at 3 after buy2=2 gives total 5.Day 6, price = 1: buy1 = max(0, -1) = 0. Still best to have bought at 0. sell1 = max(3, 0+1) = 3. Selling today at 1 gives less than 3 — no change. buy2 = max(2, 3-1) = 2. Buying second stock at 1 using sell1=3 gives 2 — same as before. sell2 = max(5, 2+1) = 5. No improvement yet.Day 7, price = 4: buy1 = max(0, -4) = 0. No change. sell1 = max(3, 0+4) = 4. 
Selling today at 4 after buying at 0 gives profit 4 — new best. buy2 = max(2, 4-4) = 2. No improvement. sell2 = max(5, 2+4) = 6. Selling second stock today at 4 with buy2=2 gives total 6 — new best.Output: 6 ✅class Solution { public int maxProfit(int[] prices) { int buy1 = Integer.MIN_VALUE; int sell1 = 0; int buy2 = Integer.MIN_VALUE; int sell2 = 0; for (int price : prices) { buy1 = Math.max(buy1, -price); sell1 = Math.max(sell1, buy1 + price); buy2 = Math.max(buy2, sell1 - price); sell2 = Math.max(sell2, buy2 + price); } return sell2; }}Time Complexity: O(N) — single pass through the array. Space Complexity: O(1) — only four integer variables, no extra arrays.Approach 3 — Generalizing to At Most K TransactionsOnce you understand Approach 2, there is a natural question — what if instead of two transactions, you were allowed K transactions? That is exactly LeetCode 188.Instead of four hardcoded variables, you use two arrays of size K. buy[j] tracks the best outcome after the j-th buy and sell[j] tracks the best outcome after the j-th sell. The logic is identical to Approach 2, just in a loop.For this problem set K = 2 and it gives the same answer:class Solution { public int maxProfit(int[] prices) { int k = 2; int[] buy = new int[k]; int[] sell = new int[k]; Arrays.fill(buy, Integer.MIN_VALUE); for (int price : prices) { for (int j = 0; j < k; j++) { buy[j] = Math.max(buy[j], (j == 0 ? 0 : sell[j - 1]) - price); sell[j] = Math.max(sell[j], buy[j] + price); } } return sell[k - 1]; }}Time Complexity: O(N × K) — for K=2 this is effectively O(N). Space Complexity: O(K) — two small arrays of size K.If you understand this, LeetCode 188 becomes a five minute problem.Comparing All Three ApproachesApproach 1 — Left-Right Prefix DP (Your Approach)You build two arrays — one scanning left to right, one scanning right to left — and combine them. This is the most visual and intuitive approach. 
It is easy to explain because you are essentially solving LeetCode 121 twice from opposite ends and adding the results. The downside is it uses O(N) extra space for the two arrays.Approach 2 — Four Variable State MachineYou solve the entire problem in a single pass using just four variables. No arrays, no extra memory. This is the most efficient and elegant solution. Once you understand what each variable represents, the code almost writes itself. This is what most interviewers are hoping to see.Approach 3 — General K-Transaction DPThis is Approach 2 extended to handle any number of transactions using arrays instead of hardcoded variables. Solving LeetCode 123 with this approach means you have already solved LeetCode 188 as a bonus.Mistakes That Catch Most PeopleTrying to use the greedy approach from LeetCode 122: Adding every upward price movement does not work when you are limited to two transactions. You need to pick the two best non-overlapping windows, not collect everything.Buying twice without selling in between: The state machine prevents this naturally — buy2 depends on sell1, so you can never enter the second buy before completing the first sell.Initializing buy variables to 0: Both buy1 and buy2 must start at Integer.MIN_VALUE. Setting them to 0 implies you bought a stock for free, which is wrong and will give inflated profits.Returning the wrong variable: Always return sell2, not buy2 or sell1. sell2 is the only variable that represents a completed two-transaction profit.Where This Fits in the Stock SeriesLeetCode 121 — One transaction only. Find the single best buy-sell pair. [ Blog is also available on this - Read Now ]LeetCode 122 — Unlimited transactions. Collect every upward price movement greedily. [ Blog is also available on this - Read Now ]LeetCode 188 — At most K transactions.
Direct extension of Approach 3 above.LeetCode 309 — Unlimited transactions but with a cooldown day after every sale.LeetCode 714 — Unlimited transactions but with a fee charged on every transaction.Each problem adds one new constraint on top of the previous one. If you understand LeetCode 123 deeply — especially Approach 2 — every other problem in this series becomes a small modification of the same framework.Key Takeaways✅ When you are limited to two transactions, greedy fails. You need to find two non-overlapping windows that together give maximum profit.✅ The left-right prefix DP approach is the most intuitive — precompute the best single transaction from both ends, then combine at every split point.✅ The four variable state machine solves it in one pass with O(1) space. Each variable tracks the best outcome for one state — buy1 feeds sell1, sell1 feeds buy2, buy2 feeds sell2.✅ Always initialize buy variables to Integer.MIN_VALUE to represent "not yet bought anything."✅ Understanding Approach 3 here means LeetCode 188 is already solved — just change the hardcoded 2 to K.Happy Coding! This problem is the turning point in the stock series where DP becomes unavoidable — once you crack it, the rest of the series feels manageable. 🚀
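Approach 2 is compact enough to sanity-check end to end. Below is the same four-variable state machine wrapped in a small runnable harness; the class name and the two extra test inputs are my additions:

```java
public class StockIIIDemo {

    // Four-variable state machine from Approach 2 above.
    static int maxProfit(int[] prices) {
        int buy1 = Integer.MIN_VALUE, sell1 = 0;
        int buy2 = Integer.MIN_VALUE, sell2 = 0;
        for (int p : prices) {
            buy1 = Math.max(buy1, -p);          // best outcome after the first buy
            sell1 = Math.max(sell1, buy1 + p);  // best profit after the first sell
            buy2 = Math.max(buy2, sell1 - p);   // best outcome after the second buy
            sell2 = Math.max(sell2, buy2 + p);  // best total after the second sell
        }
        return sell2;
    }

    public static void main(String[] args) {
        System.out.println(maxProfit(new int[]{3, 3, 5, 0, 0, 3, 1, 4})); // 6
        System.out.println(maxProfit(new int[]{1, 2, 3, 4, 5}));          // 4
        System.out.println(maxProfit(new int[]{7, 6, 4, 3, 1}));          // 0
    }
}
```

The two extra inputs exercise the edge cases discussed above: a strictly rising array where a single transaction is already optimal, and a strictly falling array where the answer must stay 0.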

LeetCode · Dynamic Programming · Hard · Java · DSA · Prefix DP · Arrays · Stock Series
LeetCode 3488 — Closest Equal Element Queries: A Complete Walkthrough from Brute Force to Optimal


If you have been grinding LeetCode lately, you have probably run into problems where your first clean-looking solution times out and forces you to rethink from scratch. LeetCode 3488 is exactly that kind of problem. This article walks through the complete thought process — from the naive approach that got me TLE, to the intuition shift, to the final optimized solution in Java.You can find the original problem here: LeetCode 3488 — Closest Equal Element Queries at Problem LinkUnderstanding the ProblemYou are given a circular array nums and an array of queries. For each query queries[i], you must find the minimum distance between the element at index queries[i] and any other index j such that nums[j] == nums[queries[i]]. If no such other index exists, the answer is -1.The critical detail here is the word circular. The array wraps around, which means the distance between two indices i and j in an array of length n is not simply |i - j|. It is:min( |i - j| , n - |i - j| )You can travel either clockwise or counterclockwise, and you take whichever path is shorter.Breaking Down the ExamplesExample 1nums = [1, 3, 1, 4, 1, 3, 2], queries = [0, 3, 5]For query index 0, the value is 1. Other indices holding 1 are 2 and 4. Circular distances are min(2, 5) = 2 and min(4, 3) = 3. The minimum is 2.For query index 3, the value is 4. It appears nowhere else in the array. Answer is -1.For query index 5, the value is 3. The other 3 sits at index 1. Circular distance is min(4, 3) = 3. Answer is 3.Output: [2, -1, 3]Example 2nums = [1, 2, 3, 4], queries = [0, 1, 2, 3]Every element is unique. Every query returns -1.Output: [-1, -1, -1, -1]First Attempt — Brute ForceMy first instinct was straightforward. For each query, scan the entire array, collect every index that matches the queried value, compute the circular distance to each, and return the minimum. 
Clean logic, easy to reason about, and dead simple to implement.int i = 0; List<Integer> lis = new ArrayList<>(); while (i != queries.length) { int max = Integer.MAX_VALUE; for (int j = 0; j < nums.length; j++) { int target = nums[queries[i]]; if (nums[j] == target && j != queries[i]) { // Linear distance between the two indices int right = Math.abs(j - queries[i]); // Distance going the other direction around the ring int left = nums.length - right; // True circular distance is the shorter of the two int dist = Math.min(right, left); max = Math.min(max, dist); } } lis.add(max == Integer.MAX_VALUE ? -1 : max); i++;}This got TLE immediately, and once you look at the constraints it is obvious why. Both nums.length and queries.length can be up to 10^5. For every query you are scanning every element, giving you O(n × q) time — which in the worst case is 10 billion operations. No judge is going to wait for that.Rethinking the Approach — Where Is the Waste?After the TLE, the question I asked myself was: what work is being repeated unnecessarily?The answer was obvious in hindsight. Every time a query asks about a value like 3, the brute force scans the entire array again looking for every index that holds 3. If ten different queries all ask about value 3, you are doing that scan ten times. You are finding the same indices over and over.The fix is to do that work exactly once, before any query is processed. You precompute a map from each value to all the indices where it appears. Then for every query you simply look up the relevant list and work within it.That observation reduces the precomputation to O(n) — one pass through the array. The question then becomes: once you have that sorted list of indices for a given value, how do you find the closest one to your query index efficiently?The Key Insight — You Only Need Two NeighboursHere is the insight that makes this problem elegant. The index list for any value is sorted in ascending order because you build it by iterating left to right through the array.
If your query index sits at position mid inside that sorted list, then by definition every index to the left of mid - 1 is farther away than arr[mid - 1], and every index to the right of mid + 1 is farther away than arr[mid + 1].This means you never need to compare against all duplicates. You only ever need to check the immediate left and right neighbours of your query index within the sorted list.The one subtlety is the circular wrap. Because the array itself is circular, the left neighbour of the very first element in the list is actually the last element in the list, since you can wrap around the ring. This is handled cleanly with modular arithmetic: (mid - 1 + n) % n for the left neighbour and (mid + 1) % n for the right.The Optimized Solution — HashMap + Binary SearchStep 1 — Precompute the index mapIterate through nums once and build a HashMap mapping each value to a list of all indices where it appears. The lists are sorted by construction since you insert indices in order.Step 2 — Binary search to locate the query indexFor a given query at index q, look up the index list for nums[q]. Binary search the list to find the position of q within it. This runs in O(log n) rather than O(n).Step 3 — Check immediate neighbours and compute circular distancesOnce you have the position mid, fetch arr[(mid + 1) % n] and arr[(mid - 1 + n) % n]. For each, compute the circular distance using min(|diff|, totalLength - |diff|). Return the smaller of the two.Full Annotated Java Solutionclass Solution { public List<Integer> solveQueries(int[] nums, int[] queries) { int c = 0; // Precompute: map each value to the sorted list of indices where it appears. // Since we iterate left to right, the list is sorted by construction. 
HashMap<Integer, List<Integer>> mp = new HashMap<>(); for (int i = 0; i < nums.length; i++) { mp.computeIfAbsent(nums[i], k -> new ArrayList<>()).add(i); } List<Integer> lis = new ArrayList<>(); while (c != queries.length) { // Retrieve the sorted index list for the value at the queried position List<Integer> arr = mp.get(nums[queries[c]]); int n = arr.size(); int i = 0; int j = n - 1; int min = -1; while (i <= j) { int mid = i + (j - i) / 2; if (arr.get(mid) == queries[c]) { // Only one occurrence in the entire array — no duplicate exists if (n == 1) { min = -1; } else { // Circular neighbour to the right within the index list int right = arr.get((mid + 1) % n); // Circular neighbour to the left within the index list int left = arr.get((mid - 1 + n) % n); // Compute circular distance to the right neighbour int d1 = Math.abs(right - queries[c]); int distRight = Math.min(d1, nums.length - d1); // Compute circular distance to the left neighbour int d2 = Math.abs(left - queries[c]); int distLeft = Math.min(d2, nums.length - d2); // The answer is the closer of the two neighbours min = Math.min(distLeft, distRight); } break; } else if (arr.get(mid) > queries[c]) { // Query index is smaller — search the left half j = mid - 1; } else { // Query index is larger — search the right half i = mid + 1; } } lis.add(min); c++; } return lis; }}Complexity AnalysisTime Complexity: O(n log n)Building the HashMap takes O(n). For each of the q queries, binary search over the index list takes O(log n) in the worst case. Total: O(n + q log n), which simplifies to O(n log n) given the constraint that q ≤ n.Space Complexity: O(n)The HashMap stores every index exactly once across all its lists, so total space used is O(n).Compared to the brute force O(n × q), this is the difference between ~1.7 million operations and ~10 billion operations at the constraint limits.Common PitfallsMixing up the two values of n. 
Inside the solution, n refers to arr.size() — the number of occurrences of a particular value. But when computing circular distance, you need nums.length — the full array length. These are different numbers and swapping them silently produces wrong answers.Forgetting the + n in the left neighbour formula. Writing (mid - 1) % n when mid is 0 produces -1 in Java, since Java's modulo preserves the sign of the dividend. Always write (mid - 1 + n) % n.Not handling the single-occurrence case. If a value appears only once, n == 1, and the neighbour formula wraps around to the element itself, giving a distance of zero — which is completely wrong. Guard against this explicitly before running the neighbour logic.What This Problem Teaches YouThe journey from brute force to optimal here follows a pattern worth internalizing.The brute force was correct but repeated work. Recognizing that repeated work and lifting it into a precomputation step is the single move that makes this problem tractable. The HashMap does that.Once you have a sorted structure, binary search is almost always the right tool to find a position within it. And once you have a position in a sorted structure, you only ever need to look at adjacent elements to find the nearest one — checking anything further is redundant by definition.These are not tricks specific to this problem. They are transferable patterns that appear across dozens of medium and hard problems on the platform. Internalizing them — rather than memorizing solutions — is what actually builds problem-solving ability over time.
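The second pitfall (Java's remainder operator keeping the sign of the dividend) is worth seeing once in isolation. A minimal standalone demonstration, separate from the solution above:

```java
public class CircularIndexDemo {
    public static void main(String[] args) {
        int n = 5;   // size of the index list
        int mid = 0; // query sits at the first position, left neighbour must wrap

        // Java's % keeps the sign of the dividend, so this yields -1, not 4:
        System.out.println((mid - 1) % n);      // -1

        // Adding n before taking the remainder keeps the result in [0, n):
        System.out.println((mid - 1 + n) % n);  // 4

        // Circular distance between indices 0 and 4 in a ring of length 5:
        int diff = Math.abs(0 - 4);
        System.out.println(Math.min(diff, n - diff)); // 1
    }
}
```

The same `min(|i - j|, n - |i - j|)` expression is what the solution uses for both neighbours, just with `nums.length` as the ring size instead of the index-list size.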

Arrays · HashMap · Binary Search · Circular Arrays · Medium · LeetCode · Java
LeetCode 1021: Remove Outermost Parentheses — Java Solution Explained


Introduction

LeetCode 1021 Remove Outermost Parentheses sounds intimidating with words like "primitive decomposition", but once you strip away the fancy terminology, it is a beautifully simple problem. It builds directly on the depth-counting technique from LeetCode 1614 and is a perfect example of how one clean insight can collapse a seemingly complex problem into just a few lines of code.

Here is the link to the question: LeetCode 1021

In this article we cover a plain English explanation, what "primitive" really means, three Java approaches with detailed dry runs, complexity analysis, and everything you need to ace this in an interview.

What Is the Problem Really Asking?

Let us ignore the formal definition for a moment and think practically. A primitive string is the smallest self-contained valid bracket group. Think of it as a top-level unit — it opens, closes completely, and is not part of a bigger bracket group.

For "(()())(())":
"(()())" is one primitive — it opens and fully closes
"(())" is another primitive — it opens and fully closes after

Now from each primitive, remove only the outermost opening and closing bracket, keep everything inside.
"(()())" → remove outer → "()()"
"(())" → remove outer → "()"
Combined → "()()()"

That is literally the whole problem!

Real Life Analogy — Gift Boxes Inside a Shipping Box

Imagine you ordered gifts online. They all arrive in one big shipping box. Inside that box are individual gift boxes. Inside some gift boxes are smaller boxes. The "primitive decomposition" is identifying each individual gift box inside the shipping box. Removing the outermost parentheses is like removing just the outer gift box wrapping but keeping everything inside it intact. You are not unpacking the inner boxes — just peeling off one layer from each top-level group.

Understanding Primitive Decomposition Visually

For s = "(()())(())(()(()))":

( ( ) ( ) )  ( ( ) )  ( ( ) ( ( ) ) )
|_________|  |_____|  |_____________|
 primitive1 primitive2   primitive3

Each group that starts at depth 0 and returns to depth 0 is one primitive. Remove the outermost ( and ) from each and concatenate what remains. The depth analogy from LeetCode 1614 is the key — a primitive starts when depth goes from 0 to 1 and ends when depth returns to 0.

Approach 1: Counter With Substring (Your Solution) ✅

The Idea
Track depth with a counter. Use start and end pointers to mark the boundaries of each primitive. When depth hits 0, you have found a complete primitive — append everything between start+1 and end (skipping the outermost brackets) to the result.

public String removeOuterParentheses(String s) {
    StringBuilder sb = new StringBuilder();
    int c = 0;
    int start = 0;
    int end = 0;
    for (int i = 0; i < s.length(); i++) {
        if (s.charAt(i) == '(') {
            c++;
        } else {
            c--;
        }
        if (c == 0) {
            // primitive found from index start to end (inclusive)
            sb.append(s.substring(start + 1, end)); // skip outermost ( and )
            start = end + 1; // next primitive starts after current end
        }
        end++;
    }
    return sb.toString();
}

Dry Run — s = "(()())(())"

Tracking c, start, end at each step:
i=0, ( → c=1, end=1
i=1, ( → c=2, end=2
i=2, ) → c=1, end=3
i=3, ( → c=2, end=4
i=4, ) → c=1, end=5
i=5, ) → c=0 → primitive found! append s.substring(1, 5) = "()()" → start=6, end=6
i=6, ( → c=1, end=7
i=7, ( → c=2, end=8
i=8, ) → c=1, end=9
i=9, ) → c=0 → primitive found! append s.substring(7, 9) = "()" → start=10, end=10

Result: "()()" + "()" = "()()()" ✅

Time Complexity: O(n) — single pass plus substring operations
Space Complexity: O(n) — StringBuilder stores the result

Approach 2: Counter With Depth Filter (Cleaner & More Intuitive)

The Idea
Instead of tracking start/end pointers, use depth directly to decide whether to include each character. The insight is:
The outermost ( of each primitive is always encountered when depth goes from 0 to 1 — skip it
The outermost ) of each primitive is always encountered when depth goes from 1 to 0 — skip it
Every other character is inside some primitive — include it

public String removeOuterParentheses(String s) {
    StringBuilder sb = new StringBuilder();
    int depth = 0;
    for (char c : s.toCharArray()) {
        if (c == '(') {
            if (depth > 0) {
                sb.append(c); // not the outermost (, include it
            }
            depth++;
        } else {
            depth--;
            if (depth > 0) {
                sb.append(c); // not the outermost ), include it
            }
        }
    }
    return sb.toString();
}

This is arguably the most elegant solution. No pointers, no substring, just one rule — if depth is already greater than 0 when you see (, it is not the outer one. If depth is still greater than 0 after decrementing on ), it is not the outer one.

Dry Run — s = "()()" (Edge Case)
( → depth=0, skip it, depth becomes 1
) → depth becomes 0, depth=0 so skip it
( → depth=0, skip it, depth becomes 1
) → depth becomes 0, depth=0 so skip it
Result: "" ✅ Both "()" and "()" are primitives; removing the outer brackets from each gives the empty string.

Dry Run — s = "(()())(())(()(()))"
Let us trace just which characters get included vs skipped. For the first primitive "(()())":
( at depth 0 → skip (outermost opener)
( at depth 1 → include
) at depth 2→1 → include
( at depth 1 → include
) at depth 2→1 → include
) at depth 1→0 → skip (outermost closer)
Inner content "()()" included ✅
Same logic applies to each subsequent primitive. Final result: "()()()()(())" ✅

Time Complexity: O(n) — single pass
Space Complexity: O(n) — StringBuilder stores result

Approach 3: Stack Based (Verbose but Clear)

The Idea
Use an explicit stack to track open brackets. When the stack becomes empty after a ), you have found a complete primitive. Collect all characters between the outermost brackets.

public String removeOuterParentheses(String s) {
    StringBuilder sb = new StringBuilder();
    Stack<Character> st = new Stack<>();
    int start = 0;
    for (int i = 0; i < s.length(); i++) {
        if (s.charAt(i) == '(') {
            st.push('(');
        } else {
            st.pop();
        }
        if (st.empty()) {
            // primitive ends at i, extract inner content
            sb.append(s.substring(start + 1, i));
            start = i + 1;
        }
    }
    return sb.toString();
}

This is the most verbose but makes the primitive detection very explicit — stack empty means primitive complete. Great for explaining your thought process in an interview before optimizing.

Time Complexity: O(n)
Space Complexity: O(n) — stack storage

Approach Comparison

The Stack approach is clearest for explaining primitive detection. The pointer-based counter approach (your solution) is direct and efficient. The depth-filter counter approach is the most elegant — no pointers, no substrings, just a single depth check per character. All three are O(n) time. The depth-filter approach wins on code clarity.

How This Connects to the Full Series

Looking at the bracket problem series you have been building:
20 Valid Parentheses — is the string valid?
1614 Maximum Nesting Depth — how deep is the nesting?
1021 Remove Outermost Parentheses — identify and strip top-level groups
Each problem adds one more layer of understanding about bracket strings. The depth counter technique from 1614 directly powers the optimal solution here. This is pattern building at its best.

Common Mistakes to Avoid

Off-by-one in substring indices. In your solution, s.substring(start+1, end) skips the outermost ( by using start+1, and end (not end+1) naturally excludes the outermost ) since end points to it. Getting these indices wrong by even one position gives completely wrong output.

Appending the outermost brackets in the depth-filter approach. The condition order matters — for (, check depth BEFORE incrementing. For ), check depth AFTER decrementing. Flipping these gives wrong results.

Forgetting to update start after each primitive. In pointer-based approaches, always set start = end + 1 (or start = i + 1) after appending each primitive, otherwise the next primitive's start index is wrong.

FAQs — People Also Ask

Q1. What is a primitive valid parentheses string in LeetCode 1021?
A primitive is the smallest self-contained valid bracket group that cannot be split into two smaller valid groups. It starts when depth goes from 0 to 1 and ends when depth returns to 0. For example in "(()())(())", the primitives are "(()())" and "(())".

Q2. What is the most elegant solution for LeetCode 1021?
The depth-filter counter approach — increment depth on (, decrement on ), and only append characters when depth is greater than 0 for ( and still greater than 0 after decrement for ). This skips outermost brackets naturally without tracking indices.

Q3. What is the time complexity of LeetCode 1021?
All approaches run in O(n) time with a single pass. Space complexity is O(n) for storing the output string in StringBuilder.

Q4. How is LeetCode 1021 different from LeetCode 1614?
LeetCode 1614 finds the maximum depth by tracking a counter. LeetCode 1021 uses the same counter technique but to identify primitive boundaries and strip their outermost characters. 1614 returns a number, 1021 returns a modified string.

Q5. Is LeetCode 1021 asked in coding interviews?
It appears occasionally as a follow-up to Valid Parentheses or Nesting Depth problems. The real skill it tests is understanding primitive decomposition — identifying self-contained units within a bracket string — which appears in real world parser and compiler design.

Similar LeetCode Problems to Practice Next

20. Valid Parentheses — Easy — validate bracket structure
1614. Maximum Nesting Depth of Parentheses — Easy — find max depth
1249. Minimum Remove to Make Valid Parentheses — Medium — remove minimum brackets
394. Decode String — Medium — nested brackets with encoding
32. Longest Valid Parentheses — Hard — longest valid substring

Conclusion

LeetCode 1021 Remove Outermost Parentheses is a satisfying problem once you realize that "primitive decomposition" is just a fancy way of saying "find each top-level bracket group." The depth counter is the key — depth 0 to 1 marks the start of a primitive, depth 1 to 0 marks its end. The depth-filter approach is the cleanest solution — one pass, one counter, one condition. No substring, no pointers, just elegance. Master this alongside LeetCode 20 and 1614 and you will have a complete toolkit for any bracket string problem that comes your way.
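As a quick sanity check, the depth-filter technique discussed in this article can be run against the article's own examples. A self-contained sketch (the class name here is illustrative):

```java
class OuterParens {
    // Depth-filter approach: skip a '(' seen at depth 0 (outermost opener)
    // and a ')' that brings depth back to 0 (outermost closer);
    // keep every other character.
    static String removeOuterParentheses(String s) {
        StringBuilder sb = new StringBuilder();
        int depth = 0;
        for (char c : s.toCharArray()) {
            if (c == '(') {
                if (depth > 0) sb.append(c); // not the outermost opener
                depth++;
            } else {
                depth--;
                if (depth > 0) sb.append(c); // not the outermost closer
            }
        }
        return sb.toString();
    }
}
```

With this in place, removeOuterParentheses("(()())(())") returns "()()", plus "()" for the second primitive, giving "()()()", and the edge case "()()" returns the empty string, matching the dry runs above.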

String · Stack · Easy · LeetCode
Minimum Changes to Make Alternating Binary String – LeetCode 1758 Explained


Try This Question

Before reading the solution, try solving the problem yourself on LeetCode:
👉 https://leetcode.com/problems/minimum-changes-to-make-alternating-binary-string/

Problem Statement

You are given a binary string s consisting only of characters '0' and '1'. In one operation, you can change any '0' to '1' or '1' to '0'. A string is called alternating if no two adjacent characters are the same.

Examples of alternating strings: 010101, 0101, 1010
Examples of non-alternating strings: 0001, 0011, 11101

Your task is to return the minimum number of operations required to make the string alternating.

Example Walkthrough

Example 1
Input: s = "0100"
Possible fix: 0101
Only the last character needs to change, so the answer is:
Output: 1

Example 2
Input: s = "10"
The string is already alternating.
Output: 0

Example 3
Input: s = "1111"
Possible alternating strings: 0101, 1010
Minimum operations needed = 2
Output: 2

Key Observation

An alternating binary string can only have two possible patterns:
Pattern 1: 0101010101...
Pattern 2: 1010101010...

So instead of trying many combinations, we only need to check:
1️⃣ How many changes are required to convert s → "010101..."
2️⃣ How many changes are required to convert s → "101010..."
Then we return the minimum of the two.

Approach

Step 1
Generate two possible alternating strings:
s1 = "010101..."
s2 = "101010..."
Both will be of the same length as the input string.

Step 2
Compare the original string with both patterns and count mismatches. For example:
s  = 0100
s1 = 0101
Mismatch count = 1

Step 3
Repeat for the second pattern. Finally return min(mismatch1, mismatch2).

Intuition Behind the Solution

Instead of flipping characters randomly, we compare the string with the two valid alternating possibilities. Why only two? Because an alternating string must start with either 0 or 1, and after that, the pattern is fixed. So we simply compute which pattern requires fewer changes. This makes the solution efficient and simple.

Java Implementation

class Solution {
    public int minOperations(String s) {
        int co1 = 0;
        int co2 = 0;
        String s1 = "";
        String s2 = "";
        for (int i = 0; i < s.length(); i++) {
            if (i % 2 == 0) { s1 += "0"; } else { s1 += "1"; }
        }
        for (int i = 0; i < s.length(); i++) {
            if (i % 2 == 0) { s2 += "1"; } else { s2 += "0"; }
        }
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) != s1.charAt(i)) { co1++; }
        }
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) != s2.charAt(i)) { co2++; }
        }
        return Math.min(co1, co2);
    }
}

Complexity Analysis

Time Complexity: O(n) for the comparison passes. Strictly speaking, building s1 and s2 with repeated += string concatenation makes this version O(n²) in Java; the optimized version below is truly O(n).
Space Complexity: O(n), because we create two extra strings of size n.

Optimized Approach (Better Interview Answer)

We can avoid creating extra strings and calculate mismatches directly.

Optimized Java Code

class Solution {
    public int minOperations(String s) {
        int pattern1 = 0;
        int pattern2 = 0;
        for (int i = 0; i < s.length(); i++) {
            char expected1 = (i % 2 == 0) ? '0' : '1';
            char expected2 = (i % 2 == 0) ? '1' : '0';
            if (s.charAt(i) != expected1) pattern1++;
            if (s.charAt(i) != expected2) pattern2++;
        }
        return Math.min(pattern1, pattern2);
    }
}

Space Complexity Now: O(1) — no extra strings required.

Why This Problem Is Important

This problem teaches important interview concepts:
✔ Pattern observation
✔ Greedy thinking
✔ String manipulation
✔ Optimization techniques
Many companies ask similar pattern-based string problems.

Final Thoughts

The trick in this problem is realizing that only two alternating patterns exist. Once you identify that, the problem becomes straightforward. Instead of trying multiple modifications, you simply compare and count mismatches. This leads to a clean and efficient O(n) solution. If you are preparing for coding interviews, practicing problems like this will improve your pattern recognition skills, which is a key skill for solving medium and hard problems later.

Happy Coding 🚀

LeetCode · Binary String · Greedy Algorithm · Java · Easy
LeetCode 20: Valid Parentheses — Java Solution Explained


Introduction

LeetCode 20 Valid Parentheses is arguably the single most asked Easy problem in coding interviews. It appears at Google, Amazon, Microsoft, Meta, and virtually every company that does technical interviews. It is the textbook introduction to the Stack data structure and teaches you a pattern that shows up in compilers, code editors, HTML parsers, and mathematical expression evaluators.

Here is the link to the question: LeetCode 20

In this article we cover a plain English explanation, a real life analogy, the optimal stack solution with a detailed dry run, all tricky edge cases, complexity analysis, common mistakes, and FAQs.

What Is the Problem Really Asking?

You are given a string containing only bracket characters — (, ), {, }, [, ]. You need to check if the brackets are correctly matched and nested. Three rules must hold:
Every opening bracket must be closed by the same type of closing bracket
Brackets must be closed in the correct order — the most recently opened must be closed first
Every closing bracket must have a corresponding opening bracket

Real Life Analogy — Russian Nesting Dolls

Think of Russian nesting dolls. You open the biggest doll first, then a medium one inside it, then a small one inside that. To close them back, you must close the smallest first, then the medium, then the biggest. You cannot close the biggest doll while the smallest is still open inside. That is exactly how brackets work. The last opened bracket must be the first one closed. This Last In First Out behavior is precisely what a Stack is built for.

Another analogy — think of a text editor like VS Code. When you type (, it automatically adds ). If you try to close with ] instead, the editor highlights an error. This problem is essentially building that validation logic.

The Only Approach You Need: Stack

The Idea
Scan the string left to right.
If you see an opening bracket → push it onto the stack.
If you see a closing bracket → check the top of the stack. If the top is the matching opening bracket, pop it. Otherwise the string is invalid.
At the end, if the stack is empty → all brackets matched → return true. If the stack still has elements → some brackets were never closed → return false.

public boolean isValid(String s) {
    if (s.length() == 1) return false; // single char can never be valid
    Stack<Character> st = new Stack<>();
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c == '(' || c == '{' || c == '[') {
            st.push(c); // opening bracket — push and wait for its match
        } else {
            if (!st.empty()) {
                // check if top of stack is the matching opener
                if ((c == ')' && st.peek() != '(') ||
                    (c == '}' && st.peek() != '{') ||
                    (c == ']' && st.peek() != '[')) {
                    return false; // wrong type of closing bracket
                } else {
                    st.pop(); // matched! remove the opener
                }
            } else {
                return false; // closing bracket with nothing open
            }
        }
    }
    return st.empty(); // true only if all openers were matched
}

Detailed Dry Run — s = "([)]"

This is the trickiest example. It looks balanced at first glance but is actually invalid.
( → opening, push → stack: [(]
[ → opening, push → stack: [(, []
) → closing, peek is [, but ) needs ( → mismatch → return false ✅
The stack correctly catches that [ was opened after ( but we tried to close ( before closing [ — wrong order!

Dry Run — s = "([{}])"
( → push → stack: [(]
[ → push → stack: [(, []
{ → push → stack: [(, [, {]
} → peek is {, match! pop → stack: [(, []
] → peek is [, match! pop → stack: [(]
) → peek is (, match! pop → stack: []
Stack is empty → return true ✅

Dry Run — s = "(["
( → push → stack: [(]
[ → push → stack: [(, []
Loop ends. Stack is NOT empty → return false ✅
Two brackets were opened but never closed.

All the Edge Cases You Must Know

Single character like "(" or ")". A single character can never be valid — an opener has nothing to close it, a closer has nothing before it. The early return if (s.length() == 1) return false handles this cleanly.

Only closing brackets like "))". The stack is empty when the first ) arrives → return false immediately.

Only opening brackets like "(((". All get pushed, nothing gets popped, the stack is not empty at the end → return false.

Interleaved wrong types like "([)]". Caught by the mismatch check when the closing bracket does not match the stack top.

Empty string. The stack stays empty → st.empty() returns true → returns true. An empty string is technically valid since there are no unmatched brackets. The constraints say length ≥ 1 so this is just good to know.

A Cleaner Variation Using HashMap

Some developers prefer using a HashMap to store bracket pairs, making the matching condition more readable:

public boolean isValid(String s) {
    Stack<Character> st = new Stack<>();
    Map<Character, Character> map = new HashMap<>();
    map.put(')', '(');
    map.put('}', '{');
    map.put(']', '[');
    for (char c : s.toCharArray()) {
        if (map.containsKey(c)) {
            // closing bracket
            if (st.empty() || st.peek() != map.get(c)) {
                return false;
            }
            st.pop();
        } else {
            st.push(c); // opening bracket
        }
    }
    return st.empty();
}

The HashMap maps each closing bracket to its expected opening bracket. This makes adding new bracket types trivial — just add one line to the map. Great for extensibility in real world code. Both versions are O(n) time and O(n) space. Choose whichever feels more readable to you.

Time Complexity: O(n) — single pass through the string
Space Complexity: O(n) — the stack can hold up to n opening brackets in the worst case

Why Stack Is the Perfect Data Structure Here

The key property this problem exploits is LIFO — Last In First Out. The most recently opened bracket must be the first one closed. That is literally the definition of a stack's behavior. Any time you see a problem where the most recently seen item determines what comes next — think Stack immediately. Valid Parentheses is the purest example of this principle.

How This Fits Into the Bigger Stack Pattern

Looking at everything you have solved so far, notice the pattern evolution:
844 Backspace String Compare — # pops the last character
1047 Remove Adjacent Duplicates — matching character pops itself
2390 Removing Stars — * pops the last character
3174 Clear Digits — digit pops the last character
20 Valid Parentheses — closing bracket pops its matching opener
All of these are stack simulations. The difference here is that instead of any character being popped, only the correct matching opener is popped. This matching condition is what makes Valid Parentheses a step up in logic from the previous problems.

Common Mistakes to Avoid

Not checking if the stack is empty before peeking. If a closing bracket arrives and the stack is empty, calling peek() throws an EmptyStackException. Always check !st.empty() before peeking or popping.

Returning true without checking if the stack is empty. Even if the loop completes without returning false, unclosed openers remain on the stack. Always return st.empty() at the end, not just true.

Pushing closing brackets onto the stack. Only push opening brackets. Pushing closing brackets gives completely wrong results and is the most common beginner mistake.

Not handling odd length strings. If s.length() is odd, it is impossible for all brackets to be matched — you can add if (s.length() % 2 != 0) return false as an early exit for a small optimization.

FAQs — People Also Ask

Q1. Why is a Stack used to solve Valid Parentheses?
Because the problem requires LIFO matching — the most recently opened bracket must be the first one closed. Stack's Last In First Out behavior maps perfectly to this requirement, making it the natural and optimal data structure choice.

Q2. What is the time complexity of LeetCode 20?
O(n) time where n is the length of the string. We make a single pass through the string, and each character is pushed and popped at most once. Space complexity is O(n) in the worst case when all characters are opening brackets.

Q3. Can LeetCode 20 be solved without a Stack?
Technically yes for simple cases using counters, but only when dealing with a single type of bracket. With three types of brackets that can be nested, a Stack is the only clean solution. Counter-based approaches break down on cases like "([)]".

Q4. Is LeetCode 20 asked in FAANG interviews?
Absolutely. It is one of the most commonly asked problems across all major tech companies. It tests understanding of the Stack data structure and is often used as a warmup before harder stack problems like Largest Rectangle in Histogram or Decode String.

Q5. What happens if the input string has an odd length?
An odd-length string can never be valid since brackets always come in pairs. You can add if (s.length() % 2 != 0) return false as an early optimization, though the stack logic handles this correctly on its own.

Similar LeetCode Problems to Practice Next

1047. Remove All Adjacent Duplicates In String — Easy — stack pattern with characters
394. Decode String — Medium — nested brackets with encoding
678. Valid Parenthesis String — Medium — wildcards added
32. Longest Valid Parentheses — Hard — longest valid substring
1249. Minimum Remove to Make Valid Parentheses — Medium — remove minimum brackets to make valid

Conclusion

LeetCode 20 Valid Parentheses is the definitive introduction to the Stack data structure in competitive programming and technical interviews. The logic is elegant — push openers, pop on matching closers, check empty at the end. Three rules, one data structure, one pass. Master this problem thoroughly, understand every edge case, and you will have a rock-solid foundation for every stack problem that follows — from Decode String to Largest Rectangle in Histogram.
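As a footnote to the FAQ point about solving this without a stack: when only one bracket type exists, a single counter suffices. A sketch of that shortcut (class name is illustrative; this is not a substitute for the stack solution once multiple bracket types can nest):

```java
class SingleTypeBalance {
    // Counter-based validity check. Works ONLY for a single bracket
    // type '(' and ')'. With multiple nestable types this breaks on
    // inputs like "([)]", which is why the stack is required there.
    static boolean isValid(String s) {
        int open = 0;
        for (char c : s.toCharArray()) {
            if (c == '(') {
                open++;
            } else if (c == ')') {
                open--;
                if (open < 0) return false; // closer with nothing open
            }
        }
        return open == 0; // every opener must have been closed
    }
}
```

The counter is effectively the stack's size; it is only because all openers are identical that the size alone carries enough information.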

String · Stack · Easy · LeetCode
LeetCode 402: Remove K Digits — Java Solution Explained


Introduction

LeetCode 402 Remove K Digits is one of those problems where the brute force solution feels obvious but completely falls apart at scale — and the optimal solution requires a genuinely clever insight that, once you see it, feels like magic. This problem sits at the intersection of two powerful techniques — Greedy thinking and Monotonic Stack. If you have already solved Next Greater Element and Asteroid Collision, you have all the building blocks you need. This is where those patterns level up.

You can find the problem here — LeetCode 402 Remove K Digits.

What Is the Problem Really Asking?

You are given a number as a string and an integer k. Remove exactly k digits from the number such that the resulting number is as small as possible. Return it as a string without leading zeros.

Example:
num = "1432219", k = 3
We want to remove 3 digits to make the number as small as possible.
Remove 4, 3, 2 → remaining is "1219"
Output: "1219"
Simple goal — smallest possible number after exactly k removals.

Real Life Analogy — Choosing the Best Price Tag

Imagine you are reading a price tag digit by digit from left to right. Every time you see a digit that is bigger than the next one coming, you have a chance to remove it and make the price smaller. You have a limited number of removals — use them wisely on the biggest offenders from the left side, because leftmost digits have the most impact on the overall value. Removing a 9 from the front of a number shrinks it far more than removing a 9 from the end. This left-to-right priority with greedy removal is the entire insight of this problem.

The Core Greedy Insight

Here is the key question — which digit should we remove first to make the number as small as possible? Think about it this way. In the number "1432219", which digit is hurting us the most? It is 4 — because it is large and sits early in the number. After removing 4 we get "132219". Now 3 is the biggest early offender. And so on.

More precisely — whenever you see a digit that is greater than the digit immediately after it, removing it makes the number smaller. This is because a larger digit sitting before a smaller digit inflates the overall value. This is the Greedy rule: scan left to right, and whenever the current digit on the stack is greater than the incoming digit, remove it (if we still have removals left). A Monotonic Increasing Stack enforces exactly this — it keeps digits in non-decreasing order, automatically kicking out any digit that is larger than what comes next.

The Solution — Monotonic Stack

public String removeKdigits(String num, int k) {
    Stack<Character> st = new Stack<>();
    // build monotonic increasing stack
    for (int i = 0; i < num.length(); i++) {
        // while stack top is greater than current digit and we still have removals
        while (!st.empty() && k != 0 && st.peek() > num.charAt(i)) {
            st.pop(); // remove the larger digit — greedy choice
            k--;
        }
        st.push(num.charAt(i));
    }
    // if k removals still remaining, remove from the end (largest digits are at top)
    while (k != 0) {
        st.pop();
        k--;
    }
    // build result string from stack
    StringBuilder sb = new StringBuilder();
    while (!st.empty()) {
        sb.append(st.pop());
    }
    sb.reverse(); // stack gives reverse order
    // remove leading zeros
    while (sb.length() > 0 && sb.charAt(0) == '0') {
        sb.deleteCharAt(0);
    }
    // if nothing left, return "0"
    return sb.length() == 0 ? "0" : sb.toString();
}

Step-by-Step Dry Run — num = "1432219", k = 3
Processing 1: stack empty → push. Stack: [1]
Processing 4: stack top 1 < 4 → no pop. Push 4. Stack: [1, 4]
Processing 3: stack top 4 > 3 and k=3 → pop 4, k=2. Stack: [1]. Stack top 1 < 3 → stop. Push 3. Stack: [1, 3]
Processing 2: stack top 3 > 2 and k=2 → pop 3, k=1. Stack: [1]. Stack top 1 < 2 → stop. Push 2. Stack: [1, 2]
Processing 2: stack top 2 == 2 → not greater, stop. Push 2. Stack: [1, 2, 2]
Processing 1: stack top 2 > 1 and k=1 → pop 2, k=0. Stack: [1, 2]. k=0, stop. Push 1. Stack: [1, 2, 1]
Processing 9: k=0, no more removals. Push 9. Stack: [1, 2, 1, 9]
k is now 0, skip the cleanup loop.
Build result: pop all → "9121" → reverse → "1219". No leading zeros. Return "1219" ✅

Step-by-Step Dry Run — num = "10200", k = 1
Processing 1: stack empty → push. Stack: [1]
Processing 0: stack top 1 > 0 and k=1 → pop 1, k=0. Stack: []. Push 0. Stack: [0]
Processing 2: k=0, no removals. Push. Stack: [0, 2]
Processing 0: k=0. Push. Stack: [0, 2, 0]
Processing 0: k=0. Push. Stack: [0, 2, 0, 0]
k=0, skip cleanup.
Build result: "0020" → reverse → "0200". Remove leading zero → "200". Return "200" ✅

Step-by-Step Dry Run — num = "10", k = 2
Processing 1: push. Stack: [1]
Processing 0: stack top 1 > 0 and k=2 → pop 1, k=1. Stack: []. Push 0. Stack: [0]
k=1 remaining → cleanup loop → pop 0, k=0. Stack: []
Build result: empty string. Remove leading zeros: nothing to remove. Length is 0 → return "0" ✅

The Three Tricky Cases You Must Handle

Case 1 — k is not fully used after the loop. This happens with a non-decreasing number like "12345" with k=2. No element ever gets popped during the loop because no digit is greater than the next one. The stack ends up as [1,2,3,4,5] with k still = 2. The solution is to pop from the top of the stack — which holds the largest digits — until k hits 0.

Case 2 — Leading zeros. After removing digits, the remaining number might start with zeros. "10200" becomes "0200" after removing 1. We strip leading zeros with a while loop. But we must be careful not to strip the only zero — that is why we check length > 0 before each removal.

Case 3 — Empty result. If all digits are removed (like "10" with k=2), the StringBuilder ends up empty. We return "0" because an empty number is represented as zero.

Why Remaining k Gets Removed From the End

After the main loop, if k is still greater than 0, why do we remove from the top of the stack (which corresponds to the end of the number)? Because the stack at this point holds a non-decreasing sequence. In a non-decreasing sequence, the largest values are at the end. Removing from the end removes the largest remaining digits — which is exactly the greedy choice to minimize the number. For example in "12345" with k=2: the stack is [1,2,3,4,5]. Pop the top twice → remove 5 and 4 → [1,2,3] → result is "123". Correct!

Time and Space Complexity

Time Complexity: O(n) — each digit is pushed onto the stack exactly once and popped at most once. Even with the while loop inside the for loop, total push and pop operations across the entire run never exceed 2n. The leading zero removal is also O(n) in the worst case. Overall stays linear.
Space Complexity: O(n) — the stack holds at most n digits in the worst case when no removals happen during the main loop.

Common Mistakes to Avoid

Not handling remaining k after the loop. This is the most common mistake. If the number is already non-decreasing, the loop never pops anything. Forgetting the cleanup loop gives the wrong answer for inputs like "12345" with k=2.

Not removing leading zeros. After removals, the result might start with zeros. "0200" should be returned as "200". Skipping this step gives wrong output.

Returning empty string instead of "0". When all digits are removed, return "0" not "". An empty string is not a valid number representation.

Using >= instead of > in the pop condition. If you pop on equal digits too, you remove more than necessary and might get wrong results. Only pop when strictly greater — equal digits in sequence are fine to keep.

How This Connects to the Monotonic Stack Series

Looking at the stack problems you have been solving:
496 Next Greater Element — monotonic decreasing stack, find first bigger to the right
735 Asteroid Collision — stack simulation, chain reactions
402 Remove K Digits — monotonic increasing stack, greedy removal for minimum number
The direction of the monotonic stack flips based on what you are optimizing. For "next greater" you keep a decreasing stack. For "smallest number" you keep an increasing stack. Recognizing which direction to go is the skill that connects all these problems.

FAQs — People Also Ask

Q1. Why does a Monotonic Stack give the smallest number in LeetCode 402?
A monotonic increasing stack ensures that at every point, the digits we have kept so far are in the smallest possible order. Whenever a smaller digit arrives and the stack top is larger, removing that larger digit can only make the number smaller — this greedy choice is always optimal.

Q2. What happens if k is not fully used after the main loop in LeetCode 402?
The number is already in non-decreasing order so no removals happened during the loop. The remaining removals should be made from the end of the number (top of stack) since those are the largest values in a non-decreasing sequence.

Q3. How are leading zeros handled in LeetCode 402?
After building the result string, strip any leading zeros with a while loop. If stripping leaves an empty string, return "0" because the empty case represents zero.

Q4. What is the time complexity of LeetCode 402?
O(n) time because each digit is pushed and popped at most once across the entire algorithm. Space complexity is O(n) for the stack.

Q5. Is LeetCode 402 asked in coding interviews?
Yes, it is frequently asked at companies like Google, Amazon, and Microsoft. It tests greedy thinking combined with monotonic stack — two patterns that interviewers love because they require genuine insight rather than memorization.

Similar LeetCode Problems to Practice Next

496. Next Greater Element I — Easy — monotonic decreasing stack
739. Daily Temperatures — Medium — next greater with distances
316. Remove Duplicate Letters — Medium — similar greedy stack with constraints
1081. Smallest Subsequence of Distinct Characters — Medium — same approach as 316
84. Largest Rectangle in Histogram — Hard — advanced monotonic stack

Conclusion

LeetCode 402 Remove K Digits is a beautiful problem that rewards clear thinking. The greedy insight — always remove a digit when a smaller digit comes after it — naturally leads to the monotonic stack solution. The three edge cases (remaining k, leading zeros, empty result) are what separate a buggy solution from a clean, accepted one. Work through all three dry runs carefully. Once you see how the stack stays increasing and how each pop directly corresponds to a greedy removal decision, this pattern will click permanently and carry you through harder stack problems ahead.
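A common variation on the solution above is to use a StringBuilder itself as the stack, which avoids the final reversal and the legacy java.util.Stack class. A sketch of the same greedy algorithm (class name is illustrative, not the article's original code):

```java
class RemoveKDigits {
    // Same monotonic-increasing-stack greedy as the article's solution,
    // but with StringBuilder as the stack: the last character plays the
    // role of the stack top, so the result needs no reversal.
    static String removeKdigits(String num, int k) {
        StringBuilder sb = new StringBuilder();
        for (char c : num.toCharArray()) {
            // pop while the last kept digit is strictly larger than the incoming one
            while (k > 0 && sb.length() > 0 && sb.charAt(sb.length() - 1) > c) {
                sb.deleteCharAt(sb.length() - 1);
                k--;
            }
            sb.append(c);
        }
        sb.setLength(sb.length() - k); // leftover removals come off the end
        int i = 0;
        while (i < sb.length() && sb.charAt(i) == '0') i++; // strip leading zeros
        String res = sb.substring(i);
        return res.isEmpty() ? "0" : res; // all digits removed → "0"
    }
}
```

This reproduces all three dry runs: "1432219" with k=3 gives "1219", "10200" with k=1 gives "200", and "10" with k=2 gives "0"; the non-decreasing case "12345" with k=2 gives "123" via the setLength cleanup.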

LeetCode · Java · Stack · Monotonic Stack · Greedy · String · Medium